AI Act: A Game Changer in the Tech World

by Isabella Morgott on 21 Dec 2023

After an intense three-day negotiation marathon, the European Parliament and the Council of the EU reached a provisional agreement on the much-anticipated EU AI Act on 8 December 2023. As the world’s first comprehensive legislation on artificial intelligence (AI), it marks a pivotal moment in global AI regulation. Taking a risk-based approach, the AI Act tailors its rules to the complexity and capability of various AI systems. 

The AI Act’s core: General-purpose AI systems 

At the heart of the Act is a tiered system for regulating general-purpose AI (GPAI), focusing particularly on 'systemic' models characterised by substantial computing power and extensive business usage. For these high-capacity models, the Act mandates a set of comprehensive obligations to mitigate systemic risks. Providers of systemic GPAI models must carry out detailed model evaluations and systemic risk assessment and mitigation processes, often involving adversarial testing to identify and address potential vulnerabilities. They must also implement robust cybersecurity measures and are obliged to report serious incidents and their systems' energy consumption to the European Commission.  

The second tier, encompassing less powerful GPAI systems, faces less onerous obligations. These models are subject to transparency requirements, including detailed technical documentation, summaries of the content used in model training, adherence to EU copyright law, and clear labelling of AI-generated content. Codes of practice for GPAI systems will guide compliance with the AI Act, serving as interim measures until harmonised EU standards are developed. 

Navigating high-risk AI and banned applications 

The Act also addresses concerns about the ethical use of AI. All AI systems deemed high-risk will be subject to stringent regulations. Deployers of such systems are required to conduct fundamental rights impact assessments before putting them into use. Additionally, there is a mandate for transparency, including the registration of public entities using high-risk AI systems in an EU database. 

The Act bans practices deemed to pose unacceptable risks with a potential to harm citizens’ rights and democracy. These include biometric categorisation systems using sensitive characteristics (such as race or religious beliefs), untargeted scraping for facial recognition databases, emotion recognition in workplaces and schools, social scoring, AI that manipulates human behaviour, and AI exploiting the vulnerabilities of certain groups. 

The Act grants law enforcement authorities narrow exemptions for remote biometric identification systems (e.g. facial recognition) in public spaces. This is subject to judicial authorisation and strict guidelines, covering targeted searches for victims of serious crimes, the prevention of specific terrorist threats, and the location of suspects of specified serious crimes. ‘Post-remote’ biometric identification, where the identification of an individual does not take place in real time, is limited to searching for individuals convicted or suspected of serious offences, ensuring a balanced approach between security and privacy rights. 

Inside the governance of the AI Act 

The governance framework of the EU AI Act is a multi-layered structure, with the AI Office as its centrepiece. The AI Office, which will be established within the European Commission, will be tasked with overseeing GPAI models and enforcing the Act across EU Member States. Supplementing this is a scientific panel of independent experts, providing expert guidance on foundation models, their assessment, and risk management. Additionally, the AI Board, comprising representatives from Member States, will be responsible for the coordination and development of codes of practice for foundation models. Lastly, the Advisory Forum will bring together a diverse group of stakeholders, offering technical input to the AI Board. 

Catalysing vs stifling innovation  

The EU AI Act provides a boost to small developers and researchers by largely exempting free and open-source AI models. Its tiered approach to GPAI systems also aims to foster innovation by allowing more flexibility for lower-risk AI models. Additionally, the Act includes provisions to support innovation and SMEs, promoting regulatory sandboxes and real-world testing set up by national authorities. This aims to enable businesses, especially SMEs, to develop and refine innovative AI solutions without being crowded out by larger industry players, creating a more equitable and dynamic AI development landscape. 

In spite of this, the AI Act also raises concerns about its potentially harmful impact on innovation. The stringent regulations, particularly for high-risk AI systems, could impose heavy burdens on AI developers and startups, potentially stifling creativity and slowing down technological advancement. The comprehensive nature of the Act might inadvertently favour larger, established companies that can more easily absorb the costs of compliance. This cautious approach, while beneficial for ethical and safety considerations, could put the EU at a competitive disadvantage in the rapidly evolving global AI landscape. 

What the AI Act means for the tech industry 

For businesses in the tech sector, the AI Act carries substantial implications. It introduces significant compliance requirements for high-risk AI systems, and businesses will need to integrate ethical considerations and adapt to these rules for sustainable growth in the AI landscape. Violations carry hefty fines of up to €35 million or 7% of global annual turnover, with more lenient caps for SMEs and startups. These requirements could pose challenges, especially for smaller developers and startups, given the potential increase in administrative and operational burdens. 

The path forward 

As the EU leads the way in AI regulation, its approach could set a global benchmark, influencing future regulatory frameworks worldwide. However, as the EU navigates the intricate process of finalising the AI Act, there is still considerable work ahead. The coming weeks will see a series of technical meetings, crucial for refining the Act's nuances. The Spanish Presidency of the Council of the EU is gearing up to present the draft for endorsement by Member States' representatives by the end of January 2024, but this is just a step in a longer journey. The process, which will transition to the Belgian Presidency in January 2024, includes legal-linguistic revisions and requires formal approval from both the European Parliament and the Council of the EU. 

If you have any questions, or would like to discuss AI regulation further, please contact isabella.morgott@inlinepolicy.com

Topics: European Politics, Artificial Intelligence (AI), Regulation, Technology

Written by Isabella Morgott

Isabella provides policy analysis, monitoring and advice to tech clients. Her areas of expertise span the range of EU tech policy, including AI, digital platforms and data protection. Before joining Inline, Isabella worked as a researcher at the University of Malta, as well as a Project Communication Officer at the European Forum for Urban Security. She holds an MA in European Public Affairs and Governance from the University of Paris 1 Panthéon-Sorbonne, a graduate degree in Political Science from the University of Paris 2 Panthéon-Assas, as well as an undergraduate degree in Romance Philology and Communication Science from the Ludwig Maximilian University of Munich. Isabella speaks English, French, German, Spanish and Polish.
