Does the EU need a new framework to regulate AI?

by Giulia Iop on 24 Jan 2020

The European Union is working on a new regulatory framework for artificial intelligence that seeks to ensure better consumer protection while enhancing Europe’s technological competitiveness. The risk is that it becomes little more than a duplication of already-existing practices and regulations.


Of the many 2020 resolutions put forward by the European Commission at the beginning of the year, one clearly stood out: the ambition to launch a new framework to regulate Artificial Intelligence by the end of the summer. While no official proposals have been tabled yet – besides a leaked early draft – Member States and industry stakeholders have started to debate what this framework might look like, and especially what kind of AI applications it is going to affect.

As of now, the so-called ‘EU legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence’ is set to focus almost exclusively on issues related to data protection, liability and discrimination. This includes algorithmic accountability (whether platforms should be held responsible or liable for the results of their algorithms), transparency (digital businesses publishing clear terms of use and disclosing what they do with the data they collect), data privacy, disinformation and advertising.

All of these issues have already been at least partly addressed by either existing regulation on data protection and platforms’ activity – such as the GDPR, the P2B regulation and the Product Liability Directive – or upcoming packages, such as those on e-Evidence and the New Deal for Consumers. 

For example, the P2B regulation requires platforms to make their rankings transparent. Meanwhile, the GDPR introduces strict provisions and requirements for the processing of individuals’ personal data, including safeguards for the privacy of what they write. The New Deal for Consumers, still under discussion, plans to update EU consumer law for the digital age, ensuring better consumer protection in cases of unfair competitive behaviour by digital businesses.

One could therefore argue that many of the issues arising from AI applications, such as algorithmic decision-making, have already been addressed by other directives and regulations. Is there, then, an actual need for an additional AI regulatory framework?

Natural Language Processing

Take Natural Language Processing (NLP) as an example. Because of its widespread integration into platforms and digital businesses more generally, NLP can serve as a good proxy for understanding whether existing regulations already cover the ground that the proposed AI regulation intends to.


NLP is a form of AI that allows computers to analyse, understand and derive meaning from human language via algorithms, and to subsequently make decisions based on it. In other words, NLP allows companies to read users’ private messages, Google searches and queries, and use them to inform targeted ads, rankings and e-mail assistants. So far, it has found its way into several different markets, particularly e-commerce and virtual assistants such as Amazon’s Alexa, Google Home or Siri. Across its various applications, NLP fundamentally involves data access, use, transfer and codification.
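To make that data flow concrete, here is a minimal sketch in Python of what such a pipeline looks like in the abstract: a user’s text is tokenised, matched against interest categories, and turned into a targeting decision. The keyword lists and category names are entirely hypothetical, and real systems rely on statistical language models rather than keyword matching, but the regulatory questions are the same: what text is read, and what decision does it feed?

```python
import re
from collections import Counter

# Hypothetical keyword lists and ad categories, purely for illustration;
# no claim is made about how any actual platform implements this.
AD_CATEGORIES = {
    "travel": {"flight", "hotel", "holiday", "airport"},
    "electronics": {"laptop", "phone", "speaker", "headphones"},
    "groceries": {"milk", "bread", "recipe", "dinner"},
}

def tokenize(message):
    """Lowercase a message and split it into word tokens."""
    return re.findall(r"[a-z']+", message.lower())

def infer_ad_category(message):
    """Return the ad category whose keywords best match the message, or None."""
    tokens = Counter(tokenize(message))
    scores = {
        category: sum(tokens[word] for word in keywords)
        for category, keywords in AD_CATEGORIES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

# A private message becomes an advertising decision.
print(infer_ad_category("Any idea which hotel to book near the airport?"))  # -> travel
```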

Critics of the technology have cited privacy concerns, transparency issues and unfair competitive behaviour as potential – if not likely – drawbacks of NLP and the way it is used by platforms and virtual assistant developers. As such, many see a future AI regulatory framework as a first step towards preventing breaches of both consumers’ fundamental rights and competition policies by such businesses. But perhaps it will not make a big difference after all.

One could argue that when it comes to making sure that platforms’ use of NLP respects fair competition principles and consumers’ rights to privacy and transparency, many of the controversies have already been addressed by pre-existing data-related directives. Drafting a new regulation along these lines is unlikely to add anything new, or at least not without reopening old “black boxes” such as liability exemptions and data privacy principles. It risks becoming the same document with a different name.

Filling the regulatory gaps

However, other elements of the likely scope of such a regulation are less well covered. After all, the upcoming AI regulation would seek to safeguard citizens’ privacy and safety by addressing the use of new technologies with unprecedented precision. Recent controversies surrounding facial recognition (in Germany and the UK, among others), self-driving cars and automated weapons, for instance, have prompted lawmakers to begin weighing the possibility of including these technologies in the document.

Similarly, the Commission’s Expert Group on Liability and New Technologies has suggested that medical robots and drones be included in the framework and appropriately addressed based on the level of risk they pose to the population. Such a broadening of the regulation’s scope would entail introducing new rules, or at least compliance principles, never addressed before. 

As such, the application of NLP to virtual assistants could potentially be affected. Indeed, as of now there are no rules to control whether and how Alexa records users’ orders and conversations. There are also no rules to limit the amount of information captured by Siri that Apple can use for marketing and advertising purposes. There aren’t even rules on microphones, for that matter. 

This suggests that one possible way to develop and interpret AI regulation would be to differentiate applications based on the way they are used, rather than on their basic functionalities (which may well already be covered by existing regulatory frameworks). In practice, this might mean that in the case of NLP, it would be worth addressing its application in virtual assistants rather than in platforms. In this way, the risk of duplication would be avoided; the chance of successfully addressing all challenges and potential risks of new AI technologies would be greater; and Europe would actually be, as President von der Leyen has suggested, “fit for the digital age”.

If the goal of the new AI regulation is to control the way in which businesses use AI technologies and the extent to which they should be held liable for any damage, mistakes or anti-competitive behaviour, any new rules should focus on the regulatory gaps in current legislation: the new, specific challenges posed by AI, rather than what has already been addressed.

Written by Giulia Iop

Giulia provides monitoring and political analysis to emerging technology clients on the collaborative economy, online platforms and smart mobility.
