The Russian-American science fiction writer Isaac Asimov formulated, in his 1942 short story Runaround, the three laws of robotics, which have since become a reference in legal research on the supervision of robots. According to these laws, « a robot may not injure a human being or, through inaction, allow a human being to come to harm;
a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law;
a robot must protect its own existence as long as such protection does not conflict with the first or second law. »
These laws inspired South Korea to draft a Robot Ethics Charter based on the three rules, the basic idea being to protect humans from robots but also to protect robots from humans. The charter was announced by the South Korean government in 2007 with the aim of laying down « ethical guidelines on the role and functions of robots », since robots have the potential to « develop a keen intelligence »[i]. Although never made public, this text remains the first to address robot law through the prism of ethics, in particular in situations involving relationships between humans and robots capable of making decisions, and therefore endowed with artificial intelligence.
The charter forms part of South Korea’s plans to mitigate low population growth and the effects of an aging population, robots being intended to serve as companions to humans in the performance of their daily tasks[ii].
This objective also reflects the importance of the robotics market to the South Korean economy, South Korea being one of the world leaders in robotics.
The charter is structured in three parts: first, manufacturing standards; then the rights and duties of users and owners; and finally the rights and duties of robots.
The second state to take an interest in the issue of robots is the United Kingdom. In April 2016, the British Standards Institution published a document entitled « Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems ». This text has no legal force but sets out recommendations for robot designers, emphasising the ethical risks associated with the development of robots, such as the danger that they may exhibit racist behaviour. The aim is thus to avoid any « lack of respect for cultural diversity or pluralism[iii] ».
Moreover, as far as liability for robots is concerned, the authors of this text take the position that it should rest entirely with human beings, insisting that the person responsible must be clearly identifiable.
At present, this text is one of the most up-to-date on the ethics of robots[iv].
This is not the United Kingdom’s first initiative. In September 2010, it had already convened a working group, « Principles of robotics », composed of several renowned academics and specialists brought together to discuss robotics and, above all, its application in the real world[v]. That project, however, came at the expense of recognising robots as legal entities.
In the United States, robots play a dominant role in the economy, particularly in the job market, to such an extent that they already pose a threat to low-skilled jobs[vi]. There is, however, virtually no U.S. law on the legal status of robots. This is probably only a matter of time, since the National Highway Traffic Safety Administration (NHTSA) has proposed granting driver status to Google’s autonomous cars, which would then be liable in place of the humans they transport[vii].
At the European level, Luxembourg MEP Mady Delvaux presented a report to the European Parliament, which adopted it on 16 February 2017 by a large majority (396 votes in favour, 123 against, 85 abstentions). The resolution calls on the European Commission to work towards establishing ethical rules on robotics and artificial intelligence « in order to exploit their full economic potential and to guarantee a standard level of safety and security ».
Among the subjects on which the report asked the Commission to take an interest was, first, the granting of an electronic personality to robots. According to Mady Delvaux, « it would be the same principle that we currently have for companies ». This solution, however, would take time to come into being[viii]. According to the report, an electronic person would potentially be « any robot that makes autonomous decisions intelligently or interacts independently with third parties ».
To date, however, the Commission has not adopted this notion of electronic personality. Moreover, the European Economic and Social Committee (EESC) has opposed the recognition of any form of legal personality for robots: in an opinion of 4 May 2017, it considered that this would be a « threat to an open and equitable democracy[ix] ». The EESC, however, can only issue opinions, which are not legally binding.
Generally speaking, European decision-makers are eager to build a common body of rules on robots applicable to the Member States, fearing that individual States will begin to legislate on their own. This would raise harmonisation problems and undermine the stability of the European legal framework, especially if States adopted standards that diverge or even contradict one another. It is also important to unify future robotics standards in view of the strong competition from other regions already at the forefront of this field, such as Asia and the United States.
It is true that, although still far behind South Korea, some European states are considering the possible recognition of the legal personality of robots. This is the case in France, where some efforts have been made to address this type of question. Examples include the report adopted on 29 March 2017 by the Office parlementaire d’évaluation des choix scientifiques et technologiques (OPECST) and the France IA strategic plan launched in January 2017 by Axelle Lemaire, then Secretary of State for Digital Affairs and Innovation[x].
Despite this, France is still far from reaching concrete solutions, in particular because of strong doctrinal opposition: most authors oppose recognising the legal personality of robots, invoking arguments such as the effectiveness of existing legal regimes or uncertainty as to how robotic innovation will develop.
Other specialists, such as Alain Bensoussan, consider that a personality regime specific to robots is now necessary, given the decision-making autonomy that artificial intelligence confers on robots.