
Ethics and Artificial Intelligence in Robotics

“What use was time to those who’d soon achieve Digital Immortality?”

By Clyde DeSouza.

The debate on artificial intelligence and robotics is gaining social relevance as we begin to glimpse its possible consequences for the economy, employment and society as a whole. Alongside these considerations, various institutions are beginning to address the ethical dilemmas that will form the basis of future legislation.

Key points

The European Parliament, on the one hand, and a group of experts gathered at Asilomar (California), on the other, have developed in parallel a series of ethical principles and values for robotics and artificial intelligence.

This concern with ethics reflects a growing awareness that advances in this field will need to be regulated in the near future.

There is a difference between the ethics of robotics, which is the ethics of researchers, designers and users, and machine ethics: robots have no values or awareness beyond what they are programmed and permitted to have.

Shared values such as respect for human dignity, freedom, security, privacy, the pursuit of the common good, inclusion and the rejection of military use will form the basis of future laws and regulations.

Uncertainty is also acknowledged: it is impossible to know for certain what advances AI will bring in one or two decades. Ethical dilemmas and regulations will therefore have to be rethought in the medium term.

From the consequences of artificial intelligence to its principles

Advances in artificial intelligence are beginning to offer glimpses of a not-so-distant future (autonomous cars and drones, AI’s victories over humans in chess, poker and Go) and of its foreseeable consequences for employment through automation. These great changes and their deep social impact raise expectations of a different and better world, but also fears of machine dominance in a world without work.

Given this, different actors are positioning themselves. First, governments: the White House released a report on artificial intelligence in October 2016 and another on its effect on the American economy in December 2016. Universities and academia are also putting the issue at the center of their discussions: Stanford University published its report “Artificial Intelligence and Life in 2030” in September 2016. And many other actors, in industry and beyond, are taking positions on the main issues.

In this sense, both the European Parliament, on the one hand, and some of the world’s leading exponents of artificial intelligence, on the other, have begun to reflect on the ethical components of AI. On what are their reflections based?

The European code of ethical conduct

The European Parliament, after a draft proposal in June 2016 and a reasoned report in February 2017, has approved a report on robotics that establishes a code of ethical conduct.

The European Parliament’s motion for a resolution states the need to establish “a guiding ethical framework for the design, production and use of robots” to complement its purely legal recommendations; that is, to develop a new discipline, “roboethics”. The basic idea is that ethical standards should be addressed to humans (the designers, producers and users of robots) and not so much to the robots themselves. As Professor Nathalie Nevejans, author of the study commissioned by the Parliament itself, points out, ethics in robotics should not be confused with machine ethics, that is, an ethics that would require robots themselves to adhere to ethical rules. Several fundamental principles have been included in the resolution, among them the protection of human dignity, privacy, freedom, equal access and social effects.

The principles endorsed by the European Parliament

  1. Protect humans from harm caused by robots: human dignity.
  2. Respect a person’s refusal to be cared for by a robot.
  3. Protect human freedom in the face of robots.
  4. Protect privacy and the use of data, especially as autonomous cars, drones, personal assistants and security robots advance.
  5. Protect humanity against the risk of manipulation by robots, especially among vulnerable groups (children, the elderly, dependent persons) in whom robots can generate artificial empathy.
  6. Avoid the dissolution of social ties that would result from robots monopolizing, in a certain sense, the relationships of certain groups.
  7. Guarantee equal access to progress in robotics: like the digital divide, the robotics divide may prove decisive.
  8. Restrict access to enhancement technologies, regulating the idea of transhumanism and the pursuit of physical and/or mental enhancement.

The experts’ vision: the 23 Asilomar AI Principles

As Cade Metz reports in Wired, in February 1975 a group of geneticists met in the small California town of Asilomar to decide whether their work could destroy the world. It was the early days of genetic engineering and DNA manipulation, and from that meeting emerged a series of principles and a strict ethical framework for biotechnology. Four decades later, another group of scientists, convened by the Future of Life Institute, met in the same place with the same kind of problem, this time to analyze the possible consequences of AI. The underlying idea was clear and shared: a profound change is coming that will affect the whole of society, and the people who bear some responsibility for this transition have both the obligation and the opportunity to shape it as well as possible.

After months of deliberation, a series of principles were approved, each supported by at least 90% of the attendees. The 23 approved principles are not exhaustive, but they express the need to respect certain fundamental principles and, as the discussion develops, to use AI to improve the lives of all. The principles are grouped under three general headings: a) research issues; b) ethics and values; and c) longer-term issues.

Roboethics: A common perspective?

The way these ethical (and, later, legal) frameworks are conceived offers a clear example of how differently Americans and Europeans approach problems: a more institutional approach, with committees, subcommittees, reports, proposals and more or less binding legal frameworks, as opposed to the more flexible, open and voluntary discussion that characterizes the Anglo-Saxon way of doing things. Even so, we can observe a series of common parameters and ideas, which can serve as a shared basis for conceptualizing and, in due course, regulating the practices and consequences of AI.

A series of common considerations can be observed in the two approaches to the ethics of AI:

AI must be developed for the good of humanity and must benefit the greatest number, reducing the risk of exclusion.

Standards for AI must be very high where human safety is concerned. This requires ethical oversight of the aims of research, as well as transparency and cooperation in the development of AI.

Researchers and designers have a crucial responsibility: all AI research and development must be characterized by transparency, reversibility and traceability of processes.

Need for human control: at all times it must be humans who decide what robotic or AI-based systems can and cannot do.

Manage risk: the more serious the potential risk, the more stringent the risk control and management systems must be.

No development of AI for weapons of destruction.

Uncertainty: it is acknowledged that advances in these fields are uncertain, with a scope and reach that in some cases are unimaginable. Regulations and frameworks will therefore have to be rethought in the medium term, once further advances have become reality.

Are there differences? As in many cases, the differences are implicit rather than explicit. First, European and American legislative cultures differ, and this may have effects in the future. The process by which the principles were identified (more “bureaucratic” in the European case, a conference of experts and entrepreneurs in the American one) exemplifies the different ways of approaching the problem. Consequently, the European charter on robotics is, in a certain sense, more exhaustive, and also tries to regulate end users, not just designers.

Finally, and this is a fundamental difference, there appear to be two different visions of transhumanism, that is, the enhancement of human capacities, physical and/or intellectual, through technologies that transcend human limits. From the European perspective, transhumanism should be regulated, since it can potentially conflict with many of the basic principles of AI, such as equal access and human dignity. The Asilomar principles, in contrast, place no explicit limits on it.

Conclusion

Robotics and AI will profoundly affect our economic, social and political relations. All sectors and social groups will be affected: What principles should govern robots? How will different social groups be affected differently? And different regions of the world? How will AI evolve in the near future?

Many questions and many uncertainties, but one certainty: artificial intelligence is going to develop, and we must start thinking about and managing the change it brings.
