Does regulation of AI in the UK strike the right balance for business?

by Pia Doering on 23 Jun 2023

Artificial intelligence (AI) is much in the news now, but it has underpinned technological applications for some time. AI was propelled into the spotlight last year by the astronomical rise of ChatGPT (Chat Generative Pre-trained Transformer), a so-called large language model (LLM) chatbot. Faced with increasingly pressing questions about the safe and ethical use of AI-powered technology, regulators around the world were compelled to take a more thorough look at their regulatory frameworks for AI. In this blog, Inline Policy looks at how the UK, home to many promising AI start-ups, is seeking to balance certainty with flexibility in its regulatory framework.

How the UK is approaching AI regulation

The UK published its National AI Strategy back in 2021, in which it committed to nurturing the AI ecosystem and to defining an approach to AI regulation. It followed up with a high-level policy paper in summer 2022, but it was not until March 2023 that the Government’s Department for Science, Innovation and Technology (DSIT) published a detailed white paper setting out its plan for a regulatory framework, entitled ‘A pro-innovation approach to AI regulation’.

The paper sets out a flexible, non-statutory regime based on five principles:

  1. Safety, security, and robustness – AI systems should be technically secure and function reliably throughout their life cycle.
  2. Appropriate transparency and explainability – regulators must have sufficient information about AI systems, their input data and how they arrive at outputs to ensure that other principles (e.g., safety) can be adhered to.
  3. Fairness – AI systems must comply with fairness principles in areas such as data protection law (data should be processed in a way people would expect it to be and that does not create unjustified adverse effects) or competition (AI systems should not underpin anticompetitive practices).
  4. Accountability and governance – the principles should be incorporated into AI products across different stages in their life cycle, and regulators must ensure that clear expectations for regulatory compliance and good practice are placed on the appropriate actors in the AI supply chain.
  5. Contestability and redress – where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI-derived decision or outcome that is harmful or creates material risk of harm (for example, wrongful profiling).

Ensuring that these principles apply across different industries, applications, and regulatory remits is a complicated endeavour. The Government has therefore proposed that it will be up to individual regulators to develop guidance for their sectors and to ensure that the principles are implemented appropriately. For example, the Competition and Markets Authority (CMA) would be in charge of AI regulation in the area of competition. The CMA has already launched a review of foundation models (AI models trained on large, unlabelled datasets, which allows them to adapt to a range of tasks), looking at the ability of companies to enter this market, how foundation models might affect competition across other markets, and whether there are novel risks to consumers that should be guarded against.

In the sphere of data protection, responsibility would fall to the Information Commissioner’s Office (ICO), which recently published its response to the Government’s framework. The ICO welcomed the approach and highlighted the alignment of some of the principles above, such as fairness, with current data protection law. However, the data regulator cautioned that conflicting or unclear guidance could undermine businesses’ confidence that following regulators’ advice will minimise the risk of legal or enforcement action.

Given that artificial intelligence underpins many different systems across many different sectors, not all applications warrant the same level of regulatory scrutiny. The Government thus proposes that regulators consider two characteristics of AI systems in particular: adaptiveness (the degree to which an AI system operates based on instructions that have not been expressly programmed with human intent but have instead been ‘learnt’ through a variety of techniques) and autonomy (the ability to make decisions without the express intent or ongoing control of a human).

To ensure that the UK’s regulatory framework is as flexible as possible, the Government will not legislate to implement the five principles expressed in its white paper, although it may later put them onto a statutory footing and require regulators to have ‘due regard’ to them. However, this is unlikely to happen in the current parliamentary session, which runs until November 2023.

Finding the right balance for businesses

There are, undoubtedly, advantages to the flexible approach the UK has decided to take. Artificial intelligence is a fast-evolving technology which powers new applications daily. A rigid regime could quickly be outpaced by the speed of research and development in the sector, which would constitute a significant barrier to innovation and investment. It may also not be ‘future-proof’ in that it could prove unable to address currently unknown issues as new uses for AI systems are found.

On the other hand, the Government’s approach risks being fragmented and confusing, failing to give businesses the necessary certainty as to whether and how their products may be regulated. This has implications for both the development and the deployment of AI technology. Take the issue of liability, for example: a business that develops a chatbot must understand to what extent it may be held responsible for harmful outputs of its product (e.g., a chatbot spewing insults or giving misleading medical information) in order to decide whether the benefits of commercialising it outweigh the risks.

Moreover, outputs may depend on the exact use to which a system is put, over which developers do not necessarily have control. The situation becomes harder to assess when the answer differs from sector regulator to sector regulator, or where one regulator issues guidance more quickly than others. Meanwhile, businesses looking to implement such a product may be reluctant to accept legal responsibility for a technology whose design they do not control. While this dilemma is not limited to AI systems, it is especially pertinent here because it concerns a technology with potentially unpredictable and unintended effects. This could slow the pace of AI adoption significantly.

Conclusion

Developing a regulatory framework that appropriately balances regulatory certainty with flexibility is a difficult task. The UK’s approach means that businesses should have more freedom in developing and implementing AI systems than in other regulatory spheres, but they will also likely have to deal with more ambiguity. Regulators may take some time to develop their guidance, and this could result in contradictory provisions if there is insufficient cross-regulatory coordination. Businesses developing or thinking of adopting AI-powered tools should consider responding to the Government’s consultation on its newest AI white paper, which closes on 21 June, monitor the situation closely over the coming months, and ensure that their products can be adapted to changing regulatory requirements.

If your company would like help navigating the regulatory environment for AI in the UK, please contact us.

Topics: Artificial Intelligence (AI), Big Tech, Digital Economy, Tech Policy

Written by Pia Doering

Pia provides policy analysis and monitoring to clients in the tech sector.
