Strong artificial intelligence and autonomous robots: which liability regime?

Two specific liability regimes are often invoked to regulate autonomous robots equipped with artificial intelligence: liability for things and liability for defective products.

The first regime is ill-suited to the case of an autonomous robot because it presupposes a power of use, direction and control over the thing. The second regime is also problematic because damage can be caused without the "producer" being liable for a "defect" strictly speaking, since the robot evolves autonomously, in a self-learning mode.

For this reason, some have suggested adopting a liability regime modelled on the one applicable to animals, while others have proposed introducing a liability regime specific to autonomous robots. In the latter case, a new legal personality would be created, alongside natural persons and legal persons (such as companies and associations recognized by law). This approach was favourably received by the European Parliament in a highly controversial resolution adopted in 2017.

What ethics?

To try to bring a more global perspective to the debate, let's start at the beginning: what ethical rules do we want to apply to intelligent and autonomous robots?

It was in 1942 that the question was first raised by the famous science fiction author Isaac Asimov, in his three laws of robotics:

  • First Law: a robot may not injure a human being or, through inaction, allow a human being to come to harm.

  • Second Law: a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

  • Third Law: a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These three laws are now at the heart of the debate, to such an extent that they are cited in the European Parliament resolution referred to above.


The European Commission is, for its part, very active on the subject, having set up a "High-Level Expert Group on Artificial Intelligence" (AI HLEG). On 8 April 2019, the Commission published the guidelines proposed by this group of experts.

According to the group, AI must:

  • respect fundamental rights, applicable regulations and core values and principles, ensuring an "ethical purpose";

  • be human-centred: AI must be designed, deployed and used with an "ethical purpose", grounded in fundamental rights, societal values and the ethical principles of beneficence (doing good), non-maleficence (doing no harm), human autonomy, justice and explainability. This is an essential aspect of achieving trustworthy AI;

  • be trustworthy from the design stage onward: accountability, data governance, design for all, governance of AI autonomy (human oversight), non-discrimination, respect for human autonomy, respect for privacy, robustness, safety, transparency.

In addition, the group stresses the need to facilitate the verifiability of AI systems, particularly in critical contexts or situations. AI should be designed in such a way that the various decisions it makes can be traced. This is a major challenge for humanity. In this respect, it will probably be necessary to couple AI with a blockchain-type technology that tracks its decision-making process in real time.
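To make this traceability requirement concrete, here is a minimal sketch, in Python, of the mechanism such a coupling could rely on: an append-only, hash-chained log of decisions, so that any later alteration of the record becomes detectable. All names used here (DecisionLedger, record_decision, verify) are hypothetical illustrations, not an existing library.

import hashlib
import json
import time

class DecisionLedger:
    # Append-only, hash-chained log of AI decisions: a blockchain-style
    # traceability sketch. All names are hypothetical.
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def record_decision(self, inputs, decision, rationale):
        # Each entry commits to the hash of the previous one, so any
        # later tampering breaks the chain.
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = {
            "timestamp": time.time(),
            "inputs": inputs,
            "decision": decision,
            "rationale": rationale,
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.entries.append({**payload, "hash": digest})
        return digest

    def verify(self):
        # Recompute every hash; returns False if any entry was altered.
        prev = self.GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != entry["hash"]:
                return False
            prev = digest
        return True

# Example: an auditor can later verify the whole decision history.
ledger = DecisionLedger()
ledger.record_decision({"obstacle": True}, "brake", "pedestrian detected")
print(ledger.verify())  # True as long as no entry has been modified

In a real deployment the log would of course be replicated or anchored on a distributed ledger rather than kept in memory, but the tamper-evidence principle is the same.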

 

A new liability regime?

In the end, should we create a new regime of liability for intelligent things? 


That's likely. The following guidelines could be proposed to outline such a regime:

  • the designer of the AI would be held liable if he or she has not developed a "trustworthy" intelligence, i.e. one that complies with the precepts mentioned above;

  • the designer should carry out a prior "impact assessment" of the AI's effects on individual freedoms, similar to the data protection impact assessment imposed by the GDPR in the case of large-scale or sensitive data processing;

  • AI should be permanently controllable by an "artificial moral agent" that would report abnormal behaviour and allow humans to intervene, if necessary by disabling the AI (a possibility that must remain technically available), as sketched after this list;

  • the use of this artificial agent, as well as the prior completion of the impact assessment and the deployment of a traceability tool, would rest on a general principle of accountability, again along the lines of the GDPR;

  • a collective compensation fund should be set up in the event of damage caused by the AI through no fault of its designer.
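As a complement to these guidelines, here is a minimal sketch, again in Python, of what the "artificial moral agent" of the third point could look like: a monitor that screens each proposed action, reports abnormal behaviour to a human supervisor, and exposes a kill switch. All names (MoralAgentMonitor, review, disable) are hypothetical assumptions, not a standardized mechanism.

class KillSwitchEngaged(Exception):
    # Raised once a human has disabled the AI through the monitor.
    pass

class MoralAgentMonitor:
    # Hypothetical "artificial moral agent": sits between the AI and its
    # actuators, vetoes forbidden actions, and keeps humans in the loop.
    def __init__(self, forbidden_actions, alert_fn):
        self.forbidden_actions = set(forbidden_actions)
        self.alert_fn = alert_fn  # notifies a human supervisor
        self.enabled = True       # state of the kill switch

    def review(self, action):
        # Return the action if acceptable; otherwise block it and alert
        # a human, who takes the final decision.
        if not self.enabled:
            raise KillSwitchEngaged("the AI has been disabled by a human")
        if action in self.forbidden_actions:
            self.alert_fn("abnormal behaviour blocked: " + repr(action))
            return None           # action vetoed, human notified
        return action

    def disable(self):
        # Human-triggered kill switch: disabling must remain technically
        # possible at all times, as required above.
        self.enabled = False

# Example: the monitor filters everything the AI wants to do.
monitor = MoralAgentMonitor({"exceed_speed_limit"}, alert_fn=print)
monitor.review("brake")               # passes through unchanged
monitor.review("exceed_speed_limit")  # blocked, supervisor alerted
monitor.disable()                     # any further review() now raises

The design choice worth noting is that the monitor fails closed: once disabled, every call raises, so the AI cannot silently keep acting after a human has pulled the plug.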

 
