AI Liability Directive: liability rules in the digital age

by Bianca Filipoiu on 15 Dec 2022

The European Commission has proposed new rules providing compensation for damage caused by AI systems. Below, we summarise the proposal's two key instruments and what they mean for users and providers of AI systems.

Background and scope of the AI Liability Directive  

On 28 September 2022, the European Commission published the proposed Artificial Intelligence (AI) Liability Directive, which lays down rules on compensation for damage caused by AI systems. The Directive eases the burden of proof for victims of damage caused by AI applications and services through two instruments: disclosure of information and a rebuttable presumption of causality. The proposed directive seeks to complement the AI Act: while the AI Act aims to prevent damage caused by AI systems, the directive sets out rules on compensation where damage nonetheless occurs.

The Directive covers any type of AI system, although it is mainly aimed at high-risk AI systems, such as those used to manage and operate critical infrastructure or for recruitment. It allows victims to claim compensation from providers and users of AI systems for any type of damage covered by national civil liability rules, including damage to life, health, property, and privacy, as well as discrimination.

Disclosure of information 

Under the proposed directive, providers and users of AI systems could be found liable, in certain circumstances, for harm caused by their AI-enabled products and services.

Providers and users of high-risk AI systems (as defined in Annex III of the EU AI Act) will be obliged to disclose evidence about those systems to national courts if the systems are suspected of having caused damage. The evidence includes information about the datasets used to develop the AI system, technical documentation, logs, and the quality management system. As part of their obligations under the AI Act, providers of high-risk AI systems will have to keep all information concerning their AI systems for ten years after the system has been placed on the market. These disclosures are nonetheless subject to safeguards to protect trade secrets and other confidential information.

Where the provider or user does not comply with a national court's order to disclose or preserve relevant evidence concerning their AI system, a rebuttable presumption of non-compliance will apply. The defendant can rebut this presumption by proving that its AI system did not cause the harm suffered.

Rebuttable presumption 

The second instrument introduced by the directive is a rebuttable presumption of causality, which seeks to make it easier for claimants to prove the causal link between the fault of the defendant and the output produced by the AI system. In civil law, a rebuttable presumption is an assumption made by a court that is taken to be true unless someone proves otherwise. In a claim for damages against the provider of a high-risk AI system, the presumption of causality applies if the provider failed to put in place suitable risk management measures, the training datasets did not meet the appropriate quality criteria, or the AI system did not meet the transparency, accuracy, and cybersecurity requirements of the AI Act (Chapters 2 and 3 of Title III).

The presumption also applies to users of high-risk AI systems where the user failed to monitor the AI system in accordance with the accompanying instructions for use, or exposed the AI system to input data not relevant to the system's intended purpose.

For AI systems that are not considered high-risk, the presumption applies only if all of the following conditions are met: the claimant has proven the defendant's failure to comply with a duty of care intended to protect against the damage; the defendant's negligent conduct can reasonably be considered to have influenced the output of the AI system; and the claimant has proven that the output of the AI system caused the damage.

Impact on businesses and ways forward: risks and opportunities

The AI Liability Directive poses a series of risks for providers and users of high-risk AI systems, since the claims brought against them can be very broad, covering any type of damage, including non-material damage such as discrimination and privacy-related harms. This may result in a wave of claims, making it increasingly difficult for providers and users of AI systems to protect themselves adequately against compensation claims for damage caused by their AI systems.

The directive states that providers and users of high-risk AI systems may face a presumption of causality if they fail to ensure the security of their AI systems, to monitor the AI system while in use, or to interrupt its use in case of significant risk (as required by the AI Act). Companies should therefore consider strengthening their incident response planning and testing, with a plan for swiftly identifying AI incidents and responding to allegations of harm caused by their AI systems.

The proposed directive offers opportunities for businesses to better anticipate how the existing liability rules will be applied, and to assess and insure their liability exposure. 

In addition, since the presumption of causality is rebuttable, companies may be able to defend themselves against claims relating to their AI systems by providing documentation of their AI model testing and activity logs of model performance. Companies should therefore direct their attention towards improving their documentation and audit capabilities.

In terms of next steps, the European Commission will set up a monitoring programme to gather more information on incidents involving AI systems. This programme will be complemented by a targeted review, which will assess whether additional measures would be needed, such as a strict liability regime and mandatory insurance. The file is still at an early stage, with negotiations expected to commence shortly in the European Parliament and in the Council of the European Union. The Committee on Legal Affairs (JURI) will lead the work on the file.

At Inline, we help businesses to understand and influence policy and regulation. If you have concerns about how this directive will impact your business, please contact us at enquiries@inlinepolicy.com 

Topics: Artificial Intelligence (AI), Regulation, Technology

Written by Bianca Filipoiu

Bianca provides policy analysis and monitoring to clients in the tech sector.
