What is the EU doing to regulate artificial intelligence?
by Shane Cumberton on 07 Aug 2023
In this blog, we look at the steps the European Union is taking to regulate artificial intelligence.
There are few sectors as dynamic and rapidly evolving as AI. The speed at which AI technologies are developing has increased their impact on society and made regulation a more urgent priority for policymakers, who are rushing to introduce new principles for governing these systems.
The EU’s AI Act will set the ethical standards for AI development and introduce obligations for industry players to ensure transparency, accountability, and respect for human rights when developing new systems.
Generative AI and foundation models
Two areas of AI have emerged as major talking points in the last few months: generative AI and foundation models. In this blog we will explore the potential controls that the EU may put on these systems with its proposed AI Act, and what the future regulatory environment may look like. The two are similar in that both are built on deep learning and both are trained on large datasets. However, there are important differences between them. In simple terms, generative AI is geared towards creating new content, while foundation models are focused on understanding existing content.
Generative AI systems create new content by learning patterns from existing data and then using those patterns to generate new outputs, including text, images, video, or audio. Given the learning capabilities of generative AI and the vast amounts of existing data available, these systems can produce complex, realistic content that mimics human creativity and design.
By comparison, foundation models are trained on very large amounts of “unlabelled data” - data that has no predefined categories or labels and is therefore widely available and cheap to collect. Such data allows these models to learn general patterns and concepts that can then be applied to a wide range of tasks, from natural language processing to machine translation. One example application for these models is customer service chatbots.
Another difference between the two is the size of the datasets on which they are trained. Generative AI models typically require much larger datasets than foundation models, as this helps them learn to generate new content.
Despite these differences, the two are complementary technologies: foundation models can serve as the pre-trained base on which generative AI systems are built. Given the learning capabilities of these systems and their recent rapid emergence on the market, lawmakers – particularly in the EU – have been rushing to regulate them.
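To make the distinction more concrete, the short Python sketch below illustrates both concepts using the open-source Hugging Face transformers library. This is a minimal, illustrative example only - the model names are common public checkpoints chosen for demonstration, not anything referenced in the legislation.

```python
# Illustrative sketch using the Hugging Face "transformers" library.
# The model names below are public checkpoints chosen for demonstration.
from transformers import pipeline

# Generative AI: the model produces new content (here, text) by
# continuing a prompt, based on patterns learned from its training data.
generator = pipeline("text-generation", model="gpt2")
print(generator("Artificial intelligence is", max_new_tokens=20)[0]["generated_text"])

# Foundation-model reuse: a general pre-trained base adapted to a
# specific downstream task, e.g. sentiment analysis for a customer
# service chatbot.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("My order arrived late and damaged."))
```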
The EU regulatory environment
Under the EU legislative process, once each of the three EU institutions establishes its own position on an act, the three bodies enter trilogue negotiations and eventually agree one text to become EU law. The European Commission published its proposal for the AI Act in April 2021, with Member States in the Council of the EU and members of the European Parliament (MEPs) subsequently scrutinising the proposal and drafting their own positions.
The Council of the EU finalised its general approach to the AI Act by December 2022, while the European Parliament’s report took slightly longer to finalise and was eventually adopted in June 2023.
The Parliament’s position took longer to establish for various reasons. Logistically, the report was drafted jointly by two lead committees: the Committee on the Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE). In addition, the report was still being drafted when AI applications such as ChatGPT entered widespread public use – leading MEPs to include provisions which refer explicitly to these generative AI systems along with their foundation model counterparts. By contrast, the European Commission’s original AI Act proposal includes no explicit reference to either generative AI or foundation models, while the Council’s position refers to generative AI in only a few instances.
As a result, the Parliament’s text offers the only hint of what the EU’s final rules on generative AI and foundation models may look like once the legislation is finalised, which is expected by the end of this year.
The Parliament’s provisions
The Parliament has proposed its own definition of foundation models and states that these are “AI system models trained on broad data at scale, are designed for generality of output, and can be adapted to a wide range of distinctive tasks”. This definition puts generative AI systems explicitly within scope of the AI Act for the first time.
The Parliament has introduced a new Article (4a) titled “general principles applicable to all AI systems” which encourages developers of AI systems or foundation models to develop and use such technology with the following principles in mind:
- Human agency and oversight
- Technical robustness and safety
- Respect for privacy and data governance
- Transparency, so that AI systems allow appropriate traceability and explainability
- Diversity, non-discrimination and fairness
- Social and environmental wellbeing.
MEPs have added a new section to the AI Act - Article 28b - laying out the obligations for providers of foundation models. The proposed article emphasises the need for providers of these models to ensure that systems comply with the AI Act before they are made available on the market. This applies regardless of whether a system is provided “as a standalone model or embedded in an AI system or a product, or provided under free and open-source licences, as a service, as well as other distribution channels”.
Primarily, Article 28b obliges providers to identify foreseeable risks that their systems could pose, along with measures to mitigate and reduce them. The Parliament’s report states that only datasets which are subject to appropriate data governance measures should be used, cybersecurity should be ensured, and energy use should be considered.
The Parliament aims to increase the transparency surrounding these models by obliging providers of foundation models to draw up extensive technical documentation and instructions. This documentation will enable downstream providers to comply with any obligations to which they may be subject under the AI Act. Providers would also be obliged to register their foundation models in an EU database to be established under the legislation. With these provisions, MEPs are putting the onus on developers to ensure that any systems fully comply with EU law before being deployed to downstream providers – alleviating much of the compliance burden for businesses further down the value chain.
Beyond these general obligations, paragraph 4 of Article 28b proposes additional provisions for generative models: they should be trained and designed with appropriate safeguards and should not encroach on fundamental rights or copyright. The Parliament’s report also refers to these models in Article 52, which lays out transparency measures to ensure that systems are designed so that natural persons are informed when they are being exposed to an AI system. This includes obligations to make users aware when they are viewing deep-fake content.
Future
The Parliament’s proposed provisions are not yet finalised and are being considered by negotiators from the European Commission and the Council of the EU. Given that the Commission and Council have not included provisions on generative AI and foundation models in their own approaches to the AI Act, it remains to be seen whether the Parliament’s proposals will make it into the final text in their current form or in an amended one.
If all goes according to the EU’s projected timeline, a political agreement on the AI Act should be reached and adopted by Q4 of this year, followed by a two-to-three-year transition period and the AI Act becoming fully applicable around 2026. However, to fill the interim period before 2026, the European Commission has proposed the introduction of an ‘AI Pact’. The Pact would introduce a voluntary set of principles for both EU and non-EU actors in the AI sector in anticipation of the eventual AI Act. While the final AI Pact has yet to be drafted, European Commission officials have suggested that it will contain guidelines such as introducing more stringent transparency obligations for generative AI, and banning AI systems that conduct social scoring – controversial systems which can assign scores to individuals based on their behaviour and activities.
The legislative developments in the coming months are expected to be massively influential for the AI industry. If you would like to keep up to date with these developments or learn more about this subject, please email shane.cumberton@inlinepolicy.com.
Topics: European Politics, Artificial Intelligence (AI), Big Tech, EU, Europe, Digital Economy, Tech Policy