Artificial intelligence: can public policy cope?

by Inline Policy on 12 Oct 2017

Business has long been convinced of the many opportunities offered by artificial intelligence (AI). Reports abound with estimates of the added value that AI-powered applications could create in the future. Literally everyone is on to it, from the dominant tech players in Silicon Valley all the way to established companies in the transport and utilities sectors. Even public authorities are joining the race. Countries as diverse as China, Canada, Germany and Singapore run significant programmes, investing heavily in AI research capabilities or experimenting with early applications.

Against this background, policymakers face a dilemma: how to deal with something that is so promising yet often opaque and diffuse, still largely immature but fast-evolving, and of only moderate impact today though believed to hold enormous consequences for humanity in the years to come.

As a ‘general purpose technology’, artificial intelligence will be universal in its reach, potentially affecting every aspect of our lives. But those effects will play out very differently across all kinds of sectors and societal groups. It is this particular challenge which makes dealing with artificial intelligence so exceptional from a public policy perspective - with limited parallels in modern history.

To be sure, many authorities have started to prepare for a future in which AI plays a significant role. The White House, so often taking the lead in tech policies, has set the pace with last year’s major AI report covering labour market implications as well as questions of fairness and security.[1] Others have followed suit: in the UK, the Government Office for Science [2] provided a useful introduction for policy-makers, while the Digital Strategy launched an expert-led review to provide a forward plan (to be published soon). At EU level, members of the European Parliament have shown particular determination to develop early standards for future AI uses.

However, many of these deliberations have yet to demonstrate how we can develop intelligent policy solutions that can deal with the implications of early AI applications without losing sight of the bigger, long-term questions surrounding the proliferation of autonomous systems. Or how we can nurture the massive potential of AI while safeguarding established norms and traditions through democratic processes. Public policy has a big job to do.

Algorithms and intelligence

In essence, the goal for artificial intelligence is to create technology that can genuinely complement (or even substitute for) the human intellect. AI approaches this challenge through the development of software and hardware capable of continuous and independent improvement in their decision-making. At the core of any such system are algorithms which carry out specified calculations and categorisations based on input data, and which tend to stop when they find an answer. Artificial intelligence explores how far these algorithms can be stretched.
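
To make the contrast concrete, here is a deliberately simple, hypothetical example of a conventional algorithm: a fixed categorisation rule, written entirely by a human, which processes its input data and stops once it reaches an answer (the function name and thresholds below are invented for illustration):

# A fixed, hand-written rule that categorises input data and stops once it has an answer.
def categorise_transaction(amount: float, overseas: bool) -> str:
    # Illustrative thresholds only - every step is specified in advance by a human.
    if overseas and amount > 10_000:
        return "review"
    if amount > 50_000:
        return "review"
    return "approve"

print(categorise_transaction(12_500, overseas=True))   # prints "review"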

To some, genuinely intelligent machines still sound rather futuristic. The human brain’s processing is extremely complex, drawing on an unknowable number of inputs in non-linear processes every second. Similarly, many elements of daily life are too intricate or abstract to be described to a computer through simple categorisation.

However, major advances in artificial intelligence techniques show that algorithms can be used to drive ‘intelligent’ systems. Take for example ‘machine learning’ (ML), an important subset of AI techniques. Through processing large quantities of data and recognising complex patterns, algorithms in these systems adapt to generate better predictions and outputs. As a result, ML systems are capable of refining, or even correcting, erroneous instructions given by human programmers. Algorithms thus make a distinct contribution to the decisions made by machine learning systems, and it is for this reason that these technologies are considered to be ‘intelligent’.
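
A minimal sketch, using invented numbers, of what this ‘learning’ step means in practice: rather than a human hard-coding the decision rule, the system derives it from labelled examples, and the rule improves as the data does (real machine-learning systems use far richer models than this toy threshold search):

amounts = [1_000, 4_000, 8_000, 15_000, 22_000, 40_000]                    # input data
labels = ["approve", "approve", "approve", "review", "review", "review"]   # past outcomes

def learn_threshold(xs, ys):
    # Pick the cut-off that best separates "approve" from "review" in the training data.
    best_t, best_errors = None, len(xs) + 1
    for t in xs:                                      # candidate thresholds drawn from the data
        errors = sum((x >= t) != (y == "review") for x, y in zip(xs, ys))
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

threshold = learn_threshold(amounts, labels)
print(threshold)                                      # the rule now comes from data, not a human
print("review" if 18_000 >= threshold else "approve")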

Another sub-set of ML techniques - ‘deep learning’ - goes even further. Systems that employ deep learning use ‘neural networks’ to replicate human neural processing methods, joining up many algorithm output ‘nodes’. Such systems can not only make improvements to their algorithmic decision-making (optimisation) but also, through the interaction and feedback of these algorithm nodes, independently define certain features to analyse [3]. Algorithms are powering these advances towards computers “understanding” concepts in a human-like way.
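
The ‘neural network’ idea can be sketched in a few lines. The sketch below shows only the structure - layers of simple nodes whose outputs feed into the next layer, the chain of interactions that gives deep learning its power; the weights are random placeholders rather than learned values, and the layer sizes are arbitrary:

import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # One layer of nodes: a weighted sum of inputs passed through a non-linearity.
    return np.tanh(inputs @ weights + biases)

x = rng.normal(size=(1, 4))                                    # one input with four raw features
hidden = layer(x, rng.normal(size=(4, 8)), np.zeros(8))        # intermediate 'feature' nodes
output = layer(hidden, rng.normal(size=(8, 1)), np.zeros(1))   # final prediction node

print(output)   # in a real system the weights would be learned from data via backpropagation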

Smart algorithms and their policy impact: learning from examples

Although AI systems have been in development for decades, the qualitative leaps in recent years have ushered in a new era - one where applications driven by smart algorithms start to have real-life implications. This, in turn, raises important questions for policy-makers who grapple with the existing interactions as well as the looming tensions between artificial intelligence and different policy areas.

The purpose of this note is to shed light on some of these interactions and tensions. The examples that follow highlight areas in which the intrinsically ‘intelligent’ components of emerging AI systems put existing policy frameworks to the test. The conclusion considers three possible responses.

(1) Market stability and collusion

Given the importance of information to the functioning of competitive markets, it is unsurprising that intelligent algorithms are proving particularly useful in finance and business. When in May 2010 an automated execution ‘sell algorithm’ contributed to the so-called ‘Flash Crash’, which wiped nearly 1,000 points off the Dow Jones Index in less than 10 minutes, US regulators learnt an important lesson about the extent to which automated systems can undermine the stability of markets.

Since then, sell algorithms and related practices like algorithm-led ‘quote stuffing’ [4] have become much more powerful and sophisticated. Global financial institutions, such as JP Morgan, are now trialling AI trading programmes which can execute highly complex customised client orders “on a larger and more efficient scale”[5] than humans. These industry innovations have moved automated trading to a whole new level, requiring machines not only to operate but also to learn within established risk frameworks. The monitoring and validation challenge for regulators is considerable.

For sure, the impact of algorithms goes far beyond issues of stability. Increased processing power gives market participants the ability to monitor prices and respond almost instantaneously, which also raises serious questions about market collusion.

In 2015, the UK’s Competition and Markets Authority and the U.S. Department of Justice investigated cases of price-fixing by sellers on Amazon. A group of online merchants had conspired to fix the price of their goods (posters and framed art), and they managed to sustain the scheme for five months thanks to “complex pricing algorithms”[6] which set each firm’s prices so as to uphold the arrangement.

What is particularly striking about this example is that the prosecution essentially rested on evidence of “conversations and communications” during which the price-fixing agreement was discussed. It is far from clear that the group’s use of commercially available algorithm-based pricing software would, by itself, have led to the collusion being detected and successfully prosecuted.

Consider, then, recent advances in artificial intelligence and how they relate to this case. The complexity of algorithmic decision-making in AI systems is such that it is often not possible even for the algorithm’s originator to understand a given outcome or process. In effect, it may not be possible for investigators to audit the ‘black box’ and trace the source of a given course of action, such as collusion - a problem commonly known as the transparency conundrum of AI.

In truth, even if algorithmic decisions could readily be inspected, it is not clear that explicitly collusive instructions are necessary for collusive outcomes. If the agents in a market each design their systems to follow predictable pricing strategies, anticompetitive outcomes may be realised without any co-operation or co-ordination. And even algorithms whose designers intended them to follow more aggressive strategies may independently discover that stable, collusive pricing maximises returns, and adopt that course of action.
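
A deliberately simple, hypothetical simulation illustrates the point. Two sellers each run an independently written rule - ‘charge the high price, but never price above the lowest price the rival has ever charged’ - with no agreement and no communication between them; all the numbers are invented:

HIGH = 20.0

lowest_seen_by_a = HIGH   # lowest price A has ever observed B charging
lowest_seen_by_b = HIGH   # lowest price B has ever observed A charging

for period in range(8):
    price_a = 12.0 if period == 4 else min(HIGH, lowest_seen_by_a)   # A tries one price cut
    price_b = min(HIGH, lowest_seen_by_b)
    lowest_seen_by_a = min(lowest_seen_by_a, price_b)
    lowest_seen_by_b = min(lowest_seen_by_b, price_a)
    print(period, price_a, price_b)

# Periods 0-3: both hold at 20 without any co-ordination. After A's one-off cut in
# period 4, B matches it and within two periods both prices are stuck at 12 for good,
# so a profit-maximising learning agent would quickly conclude that undercutting does not pay.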

So what are the implications? In a recent discussion note on ‘Algorithms and Collusion’, the EU gave a first hint of whether and how liability may be shifting. The document confirms that evidence of communication is needed to establish and prosecute explicit collusion, whereas purely tacit collusion (or “intelligent adaptation”) is ultimately not against EU competition law.

The EU’s suggested policy response therefore makes it clear that it is up to each individual firm to “ensure that its algorithms do not engage in illegal behaviour”[7]. Hence “firms involved in illegal pricing practices cannot avoid liability on the grounds that their prices were determined by algorithms”. On that basis, algorithms will have to pass a much stricter test of transparency if companies are to ensure “antitrust compliance by design” [8].

(2) Intellectual Property

No doubt, questions about legal liability will soon gain even more importance in the debate about artificial intelligence. They will reach far beyond market participants and impact on how different sectors will collaborate in the quest for new AI-powered applications. But there are other legal frameworks too that will face their own particular challenges.

Take intellectual property and copyright. Intelligent algorithms are set to disrupt how legal ownership of a work is assigned and how the income it generates is pursued. Today, UK law attributes copyright in an artefact to its creator or author. An employer owns the copyright in any artefacts made by its employees in the course of their employment. This principle also extends to “computer generated” works. Because legal ownership can only be attributed to humans, “the author is taken to be the person that makes the arrangements necessary for the creation of the work” [9].

Artificial intelligence is likely to challenge this framework in at least two ways. Firstly, the network of individuals who have made arrangements necessary for the creation of the work may be very complex. It has been noted that AI systems are a particularly diffuse form of technology - often a composite of the input of many individuals working from a variety of organisations and locations [10]. Moreover, claims to ownership may arise not only from developers but from the owners of datasets which have been used to train algorithms.

Secondly, the current legal approach to computer-generated works seems sensible while artificial intelligence systems are used by humans as tools to develop an output, but this may not hold in the future. Advances in artificial intelligence mean that algorithms are increasingly instrumental in creating work, if not capable of producing something that is far more than the sum of its initial human inputs. Adjusting IP and copyright laws to reflect these developments will require a huge effort.

(3) Medical responsibility

If our understanding of liability and ownership is seriously put to the test by the autonomous characteristics of AI, so is the related notion of responsibility. And few hold more prominent positions of responsibility than those working in the health profession. Indeed, one of the greatest challenges from AI to date has come with the growth of the pre-primary care sector.

There is an increasing expectation, for example, that minor ailments and initial screening can soon be delegated more efficiently to AI systems than to medical experts. Apps such as Your.MD now offer a free ‘personal health assistant’ which individuals can use to discover the most likely cause of their current symptoms. Intelligent algorithms power the app’s chat-bot, enabling it to decide which questions will be most informative to ask a patient and to generate dynamic predictions as a result. The system now claims to be capable of checking symptoms against the 20 most common conditions with 85% accuracy. [11]
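
A toy sketch of the ‘most informative question’ idea: given a set of candidate conditions, the system asks about the symptom that best splits the remaining candidates. The condition list, symptoms and selection rule below are all invented for illustration; real triage systems use far richer probabilistic models:

CONDITIONS = {
    "common cold": {"cough", "sore throat", "runny nose"},
    "flu": {"cough", "fever", "aches"},
    "migraine": {"headache", "nausea"},
    "food poisoning": {"nausea", "vomiting"},
}

def most_informative_symptom(candidates):
    # Ask about the yes/no symptom that splits the remaining candidates most evenly.
    symptoms = set().union(*candidates.values())
    def imbalance(symptom):
        with_it = sum(symptom in s for s in candidates.values())
        return abs(with_it - (len(candidates) - with_it))   # 0 = perfectly even split
    return min(symptoms, key=imbalance)

print(most_informative_symptom(CONDITIONS))   # e.g. 'cough' or 'nausea', each splitting 2 vs 2
# The patient's answer then narrows the candidate set, and the loop repeats until one
# condition remains (or the app signposts the patient to a clinician).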

These assisted self-care apps support a vision in which people diagnose themselves, order their own medical tests and then use doctors to help them interpret the results. Significantly, these AI-powered apps still explicitly limit their challenge to the established role and responsibility of doctors, highlighting that “far from displacing doctors”[12], they are to be used in an advisory and signposting capacity. However, the direction of travel seems evident. Artificial intelligence is poised to take a more active role in treatment decisions once the evidence of potential benefits becomes overwhelming.

Our current reliance on doctors is safeguarded by strict medical standards and obligations. The General Medical Council sets professional guidelines which require that doctors are “personally accountable for [their] professional practice” and “must be prepared to explain and justify [their] decisions and actions”.[13]

The fascinating question for policymakers is therefore how to adjust a system so firmly built around the human practitioner, so that the power of artificial intelligence algorithms can be harnessed to improve treatment decisions without compromising fundamental principles of the “medical duty of care”. Given the autonomous, opaque nature of intelligent algorithms, this is a very tall order.

(4) Data rights and norms

If the notions of liability, ownership and responsibility each face their own challenges from the proliferation of artificial intelligence, little compares to the fundamental transformation under way in data norms and concepts. So far, the public debate has centred on the extent to which information fed into AI systems is prone to discriminatory bias on grounds of race, sex or disability, essentially exacerbating existing human biases. This is indeed a very serious issue and will undoubtedly occupy policymakers for many years to come. But there are other matters of contention that already impact specific policy areas.

One such example is employment practices. Complex pattern recognition and information referencing can add considerable value to recruitment, ultimately making it more responsive and efficient. Companies such as Beamery, Alexander Mann Solutions and ThisWay Global rely increasingly on intelligent algorithms to help employers make important hiring decisions, using sophisticated data-mining techniques to identify publicly available information that can fill any gaps in a candidate’s profile.[14]

Hiring naturally involves highly personal data, much of which relates in one way or another to the digital footprints we leave all over the internet. What is strictly private and what is meant for public consumption is often hard to define, and harder still for AI systems programmed to build individual recruitment profiles off the back of openly accessible information.

Artificial intelligence processing techniques make it extremely difficult to predict what a certain piece of data may reveal when contextualised with other datasets. What may be thought of as a relatively benign input about an individual may later be associated with other sensitive characteristics. A recent joint report by the British Academy and the Royal Society reiterated the extent to which it is now “difficult, or near impossible, to be certain whether data initially considered non-sensitive might subsequently reveal sensitive information”. [15]
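
A toy illustration of this re-identification risk, with every record invented: each dataset looks harmless on its own, but a simple join on two apparently benign fields attaches a sensitive attribute to a named individual:

public_profile = {"name": "A. Candidate", "postcode": "AB1 2CD", "birth_year": 1990}

# A separate, apparently anonymised dataset (say, a leaked health survey).
anonymised_survey = [
    {"postcode": "AB1 2CD", "birth_year": 1990, "condition": "chronic illness"},
    {"postcode": "ZZ9 9ZZ", "birth_year": 1975, "condition": "none reported"},
]

matches = [row for row in anonymised_survey
           if row["postcode"] == public_profile["postcode"]
           and row["birth_year"] == public_profile["birth_year"]]

if len(matches) == 1:
    # Two 'non-sensitive' fields were enough to link a sensitive attribute to a person.
    print(public_profile["name"], "->", matches[0]["condition"])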

Unsurprisingly, national regulators struggle to come up with a coherent response. A recent statement by the EU’s Article 29 Working Party emphasised that “employers should not assume that merely because an individual’s social media profile is publicly available they are then allowed to process those data for their own purposes”. [16] Yet the extent to which practice will conform to this guidance is rather unclear.

That said, many of these issues are meant to be resolved by the incoming General Data Protection Regulation (GDPR), Europe’s latest attempt to update the legislative framework for the age of big data. GDPR establishes new rights for individuals with respect to how their data is collected, stored and processed. Similarly, it establishes new obligations for data ‘controllers’ to handle individual data responsibly - or face heavy financial penalties.

However, when it comes to intelligent algorithms and autonomous systems, GDPR offers limited clarity. It starts with the right of individuals not to be subjected to any decision of ‘significant effect’ made solely as a result of automated decision-making (Article 22). Taken at face value, this seems to impose heavy restrictions on future AI systems and could prohibit many applications currently in development.

Many would argue, though, that the rationale here is not to slow down the proliferation of intelligent algorithms, but to give individuals a right to “meaningful information about the logic involved” in important decisions (Article 13) which are made about them by automated systems. [17] This raises the question of whether decisions made by AI must always be explainable, or whether it suffices to understand the logic behind a system’s functionality so that individuals can exercise their ‘right to be informed’. The answer either way will have profound consequences for how our relationship with artificial intelligence develops in the years to come.

How public policy should respond

Acutely aware of the above dilemmas and challenges, business has started to invest heavily in research and initiatives that focus on the possible societal impacts of AI; on transparent, accountable and ‘interpretable’ AI; and on the ethics and safety surrounding AI decision-making capabilities. Google, Microsoft, Apple, IBM, Facebook and Amazon have formed a ‘Partnership on AI to Benefit People and Society’, which now involves many other organisations. This must surely be welcomed and should contribute to greater acceptance of AI-powered applications going forward.

Yet policymakers will also have to lead from the front. There is too much at stake, not least because intelligent algorithms are already testing established regulatory frameworks as the examples mentioned above sought to demonstrate. And this is only the beginning.

So what should be done?

Action seems required on at least three fronts. Firstly, individual government departments should pay particular attention - e.g. with the help of ‘AI audits’ - to how early applications of autonomous systems relate to sector-specific policies, and adjust the rulebook where necessary. This is already being done in some areas, such as transport, but agile policy-making needs to gain much greater traction for artificial intelligence to develop in line with existing regulation.

Secondly, policy-makers should consider interventions that can lead to greater accountability and responsibility for processes involving AI. If the goal is to encourage the use of smart autonomous systems without losing control, algorithms need to become more comprehensible and predictable. Some experts, such as Professor Ben Shneiderman, therefore want to strengthen independent oversight, for instance by creating a National Algorithms Safety Board that would play a role in the planning, ongoing monitoring and retrospective analysis of major AI developments.[18] A powerful idea worth looking into.

Thirdly, AI, as the next big general-purpose technology, will generate challenges that go beyond individual policy areas and require a more holistic approach. While there is a danger of regulating artificial intelligence before we understand its true potential or how the underlying technology will evolve, there is also the risk of public perceptions taking a sceptical turn. At what point this calls for something like an AI code of conduct, an ombudsman (redress) system or indeed an independent regulator is a tricky question. A determined public dialogue introducing the many promises and trade-offs would be a good start.

Feedback and comments welcome: olaf.cramme@inlinepolicy.com

Special thanks to Kathryn Pritchard for her research in support of this article.

This article was originally published by Medium.

------------------------------------------------------

[1] Executive Office of the President, National Science and Technology Council, and Committee on Technology. “Preparing for the Future of Artificial Intelligence,” October 2016.

[2] Government Office for Science. “Artificial Intelligence: Opportunities and Implications for the Future of Decision Making”, November 2016.

[3] Kelnar, David. “The Fourth Industrial Revolution: A Primer on Artificial Intelligence (AI).” Medium, December 2, 2016. https://medium.com/mmc-writes/the-fourth-industrial-revolution-a-primer-on-artificial-intelligence-ai-ff5e7fffcae1

[4] U.S. Commodity Futures Trading Commission and U.S. Securities & Exchange Commission, “Findings regarding the market events of May 6, 2010” p.79, September 30, 2010.

[5] Noonan, Laura. “JPMorgan Develops Robot to Execute Trades.” Financial Times, July 31, 2017.

[6] Department of Justice, Office of Public Affairs, “Former E-Commerce Executive Charged with Price Fixing in the Antitrust Division’s First Online Marketplace Prosecution”, April 6, 2015.

[7] European Union. “Algorithms and Collusion - Note from the European Union,” 2017.

[8] Vestager, Margrethe. “Algorithms and Competition.” Bundeskartellamt 18th Conference on Competition, Berlin, March 16, 2017.

[9] Copyright, Designs and Patents Act 1988 (1988).

[10] Scherer, Matthew U. “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, May 30, 2015.

[11] Carr-Brown, Jonathon, and Matteo Berlucchi. “Pre-Primary Care: An Untapped Global Health Opportunity,” 2016.

[12] https://faq.your.md/hc/en-us

[13] General Medical Council. “Good Medical Practice,” 2013.

[14] Dickinson, Ben. “How artificial intelligence optimizes recruitment”, TNW, June 2017.

[15] British Academy and Royal Society, “Data Management and Use: Governance in the 21st Century.”, June 2017.

[16] Article 29 Data Protection Working Party. “Opinion 2/2017 on Data Processing at Work,” June 8, 2017.

[17] Wachter, Sandra, Brent Mittelstadt and Luciano Floridi, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, December 2016.

[18] Ben Shneiderman, ‘The dangers of faulty, biased or malicious algorithms requires independent oversight’, Proceedings of the National Academy of Sciences of the United States of America (PNAS), November 2016.
