Finding the path to ethical artificial intelligence?

by Olaf Cramme on 16 Apr 2018

Another day, another report on artificial intelligence? Not quite. 

Published today, the 180-page volume by the House of Lords’ Select Committee is more than just the latest contribution to the emerging debate about the opportunities and challenges of AI. Led by experienced lawyers such as Baron Clement-Jones and renowned scholars like Lord Anthony Giddens, former director of the London School of Economics, it might well prove influential both in the UK and beyond.

True, many of the points in the report have already been made elsewhere. They largely cover familiar ground and repeat the arguments and warnings that have dominated the headlines over the past few years, including:

  • The risks of building existing prejudices and biases into the autonomous systems of the future;
  • The danger of data monopolisation, whereby small companies will be unable to compete with the larger conglomerates that hold the quantities and quality of data that are required to develop better AI systems;
  • The need for greater diversity in the training and recruitment of AI specialists;
  • The risk of over-reliance on data-intensive deep learning techniques at the expense of other methods and areas of innovation around AI;
  • The potential impact of AI and autonomous systems on the labour market and the extent to which this requires a much greater emphasis on skills and life-long learning; and
  • The potential value of smart public procurement strategies to strengthen the social dimension of future AI applications, in particular in the health system.

There is much more in the report, which provides a great synthesis of the big issues at stake. But the real significance lies in the breadth and depth of the investigation: over 10 months, the members of the Lords Committee have brought together an impressive number of witnesses and experts. The diversity and range of evidence informing the conclusions thus give this exercise a degree of weight and authority which few others have managed before.

It also offers a fine lesson on everyday interactions with artificial intelligence, recommending for instance the establishment of a voluntary industry mechanism to inform consumers when AI is being used to make significant or sensitive decisions. Educating the public about applications powered by AI will clearly be one of the biggest tasks in the years to come.

Yet where does this leave the adequacy of existing legislation? The question of whether current regulatory frameworks can handle the fast-paced development and experimental nature of AI remains highly controversial, and the Lords’ report admits as much. Despite a whole slew of good suggestions for policymakers, this could well be the report’s weak spot.

A few months ago, I looked in more depth at how AI systems put existing policy frameworks to the test. Evidence relating to market stability and collusion, intellectual property, medical responsibility and data norms suggests that decisive action is required sooner rather than later, for example:

  1. Sector-specific ‘AI audits’ to better understand how early applications of autonomous systems already interact with regulation, as has been done in areas such as transport.

  2. Strengthened independent oversight of emerging AI systems, e.g. by creating a National Algorithms Safety Board (as proposed by Professor Ben Shneiderman) that plays a role in the planning, ongoing monitoring and retrospective analysis of major AI developments.

  3. A determined public dialogue, led by government, setting out the many promises and trade-offs that AI applications are likely to generate.

The future of artificial intelligence will be played out at many different levels internationally, given the vast sums of money that countries like the US and China are pouring into research and development in the field. Closer international cooperation is indispensable, and it is therefore no surprise that the EU has just agreed formal cooperation amongst Member States.

The extent to which the UK can claim a leadership role on AI – something the Lords Committee specifically urges – will therefore depend on how it positions itself vis-à-vis these other initiatives.


Written by Olaf Cramme

Olaf's public policy expertise draws on his experience in government, Parliament and leadership roles in consulting and at a leading European think tank.
