Introducing the Product Liability Directive (Part II): what about AI?

by Giulia Iop on 26 Aug 2020

As we explained in our previous blog, European policy makers are pondering whether to revise the 1985 Product Liability Directive to make it ‘future-proof’ and ensure it remains fit for purpose amidst the growth of new technologies. Both the European Commission and the European Parliament have addressed the issue in various formats and within different frameworks, whether as part of a broader revision of European product safety regulation or as part of a planned regulation on Artificial Intelligence, whose aim would be to address the legal challenges of new automated technologies.

The potential revision of the Product Liability Directive raises a number of issues around new technologies. For one, proposals tabled by MEPs and Expert Groups highlight the need to revise key definitions in the Directive - ‘product’, ‘damage’, ‘producer’ - and to potentially shift the burden of proof from the consumer to the producer or supplier.

While these are key factors to be considered when defining liability for defective technologies, amending a 35-year-old directive that constitutes a cornerstone of the European liability framework is neither easy nor straightforward. One solution could be for the EU to establish a new, parallel framework - potentially as part of the planned AI regulation - that deals with the loopholes identified in the Product Liability Directive in relation to new AI-based technologies. In this way, the issues would still be addressed, but without risking opening the Pandora’s box of a complete revision of the Directive.

The key challenges

As we outlined in our previous blog, some proposals suggest that the Directive’s scope and coverage be expanded.

Simply adding new terms and definitions to those currently in the Product Liability Directive, however, would not necessarily address the existing issues in the most effective way. When it comes to AI applications and automated decision-making processes, there is clearly more at stake than ‘products’, ‘physical damage’ and ‘producers’, yet the field is so complex that it would be difficult to capture all of its facets - and address them appropriately - in a revised Directive while also ensuring that the Directive’s provisions fit well with other, non-AI liability rules.

Definition of product

For starters, the definition of ‘product’ could theoretically be expanded to include systems, software and other AI or automated decision-making processes – even services. A first step would therefore be to define ‘software’ and ‘system’. Moreover, if the scope of the Directive were expanded to include services, would services that use AI only to a limited extent (or not at all, such as cleaning services in hotels) also fall within its scope? In addition, the Directive would need to address the fact that AI-powered products (and not only those) often become unsafe or defective only after they are placed on the market.

Given the variety of factors that need to be considered when changing old definitions or adding new ones, one wonders whether it is even worth having so many - and such broad - definitions within a single Directive, or whether doing so would risk undermining its efficacy.

Definition of damage

There is a similar conundrum when it comes to broadening the definition of ‘damage’ and the related liability requirements. While including some clearer forms of non-physical damage, such as data privacy breaches, may look relatively straightforward, things get more complicated when it comes to mental health damage (e.g. caused by virtual reality technologies), discrimination or even cybersecurity breaches. One could argue that if all of these situations are to fall within a single definition of ‘damage’, they should all be quantified according to some common benchmarks - and that is difficult, given the very different nature of these ‘damages’.

Definition of producer

Similarly, expanding responsibility for defective products beyond the producer to, for example, the engineer, deployer, developer, or whoever is in charge of updating the software would need to take into account the different life cycles of AI products, and who is responsible for what at each stage. In the case of software that becomes ‘damaging’ only after a few years of use, would the producer still be liable, or would it be the engineer in charge of the security updates? And would the answer differ across AI applications? To address the issue, some MEPs have suggested establishing a “joint liability system”, whereby liability would fall on “whoever has an economic interest in the product” - even if this means multiple people. Recent reports prepared by the Commission and its Expert Groups proposed adopting a risk-based approach to liability that differentiates requirements based on the risk associated with the AI application. A number of questions remain unaddressed, however, including which applications would fall under each category, whether these categorisations would also apply to non-AI products, and how this new liability framework would fit with existing, sometimes sector-specific civil and tort legislation at the national level.

A potential solution

This is not to say that these issues should not be addressed. A partial reform of the Product Liability Directive would still be advisable, given its now rather outdated outlook. However, it would perhaps not be sufficient to properly address the regulation of AI-based technologies at the European level in a way that enhances the harmonisation of rules across all Member States.

A potential solution could be to have a separate, more targeted legislative framework that deals specifically with liability for AI technologies. This would allow for a more comprehensive approach to all the challenges brought about by new technologies - including new ‘products’, ‘damages’ and ‘producers’ - without risking undermining or over-complicating the Product Liability Directive. A new AI-focused liability framework could be envisaged as part of the European Commission’s broader plan for Artificial Intelligence, as presented in its White Paper in February 2020. The liability framework could also take into account and possibly build upon sector-specific industry standards, so as to have an even more targeted and efficient system in place.

These (and other) considerations will likely lead to heated discussions now that the EU bodies are back at work after the summer recess - and after the European Commission has gone through the contributions to its public consultation on the White Paper. In the meantime, stakeholders have been invited to submit feedback, through a new Inception Impact Assessment, on four different policy options for addressing liability for AI-based technologies. This could be a way to explore how a separate liability framework for AI would work.

Topics: Artificial Intelligence (AI), Regulation, Technology

Written by Giulia Iop

Giulia provides monitoring and political analysis to emerging technology clients on the collaborative economy, online platforms and smart mobility.
