The race for regulating facial recognition technology

by Megan Stagman on 16 Oct 2019

Facial recognition technology is controversial, and the lack of clear rules about how to apply it has caused concern amongst both the public and regulators. Yet its benefits in certain contexts are plain to see, and the race is on between business and lawmakers to shape the regulatory landscape.

Last month saw a fascinating development when it comes to tech regulation: a private company not just feeding into but actually writing the draft legislation for its own technology.

In many ways, Jeff Bezos’ announcement that Amazon is in the process of drawing up its own draft regulation on facial recognition as a starting point for lawmakers should not be surprising. After all, the company is fully aware that regulation will come sooner or later, so it is strategically sensible to keep ahead of the curve – seeking to shape the eventual output while simultaneously building a brand of responsibility. This is especially warranted because concerns about risk continue to build in parallel with the unprecedented public interest in the technology. One only has to look at San Francisco, which has already completely banned facial recognition, to understand the direction we could be heading in.

Why the public remains wary

Public acceptance, which is so often an indicator of future regulatory stance, is certainly mixed on facial recognition. A significant proportion of the UK public feel that a voluntary undertaking by companies not to sell their facial recognition products until there is greater understanding would be appropriate – something that has also been supported by Members of Parliament. In addition, although there is much higher tolerance for use of the technology by public sector authorities than in retail (7% approval) or HR (4% approval), over half of the public still want to see Government-imposed restrictions on how and when police would be able to use such capabilities.

This wariness likely stems from a number of high-profile revelations that the technology is already seeping into society, whether we were aware of it or not. For example, it emerged this summer that facial recognition software was deployed on two cameras across London’s King’s Cross Central development, tracking tens of thousands of people for almost two years between 2016 and 2018 without their knowledge.

Authorities are now suddenly scrambling to catch up, with London Mayor Sadiq Khan raising “serious and widespread concerns” and the Information Commissioner’s Office (ICO) launching an investigation into the case in August, stating that they would “not hesitate to use our… enforcement powers to protect people’s legal rights”. Similarly, the Government’s Biometrics Commissioner, Professor Paul Wiles, has expressed unease that the technology is being rolled out in a “chaotic” fashion in the absence of any clear laws.

Greater public scrutiny becomes inevitable

In this context, significant parliamentary scrutiny has now commenced. A new inquiry into facial recognition has begun in the Scottish Parliament, following a debate on the topic held in the House of Commons in May and a select committee report published in July. The focus has shifted squarely onto how the technology should be governed.

It is worth considering why people feel so differently about facial recognition surveillance than they do about normal CCTV cameras. After all, London already has an estimated 420,000 CCTV cameras operating in the city and yet this does not seem to elicit the same level of concern. Equally, facial recognition is already in use for certain applications that we use unthinkingly every day, such as Apple’s face ID unlock and Facebook’s ‘tag suggestions’ for photos of friends.

One of the most widespread anxieties about the technology relates to its accuracy. A Freedom of Information request revealed that facial recognition technology used by London’s Metropolitan Police incorrectly identified members of the public in 96% of matches made between 2016 and 2018. While we might not mind an erroneous Facebook photo tag that can be easily corrected without consequence, it matters greatly if innocent people are misidentified as potential criminals. In one of the incidents uncovered, a 14-year-old child in school uniform was stopped and fingerprinted by the police after being incorrectly flagged by the technology.

Further concerns have also been raised about racial and gender biases in the technology: research published by the Massachusetts Institute of Technology earlier this year found that Amazon’s Rekognition technology reportedly returned more false positives for women and ethnic minorities, building on earlier findings that Microsoft’s software had a 21% error rate for darker-skinned women, while IBM’s and Megvii’s error rates were nearly as high as 35%. Since then, a report by the London policing ethics panel has explicitly said that facial recognition software should only be used by the police if they can prove with certainty that it will not introduce gender or racial bias into operations.

The quest for ‘good’ regulation

The methods for remedying these issues are fraught with controversy in themselves, however. Such technology requires training data to improve its accuracy and eliminate bias, and gathering that data can give rise to whole new problems. The story of an activist from Berlin hit the press in April of this year when she found that there were almost a dozen photographs of her in a US government database used to train facial recognition algorithms, scraped from YouTube videos and Google without her consent. Similarly, IBM was exposed in March 2019 for scraping over one million photos from Flickr to train its own software.

In spite of these real concerns, there is no question that facial recognition technology can deliver genuine public benefits. It has already been used to catch the second most wanted person on Interpol’s list for South America, and to track down 3,000 missing children in New Delhi in just four days. Many more offerings are in the making by an industry expected to grow to a predicted global value of $9.5 billion by 2022.

Against this background, policymakers are poised to intensify their work trying to figure out what ‘good regulation’ of facial recognition technology actually looks like. Will Amazon’s opening shot set the new benchmark, and set the company up to benefit from first mover advantage? The race is clearly on.

Topics: Artificial Intelligence (AI), Facial recognition technology

Written by Megan Stagman

Megan provides political analysis and monitoring to emerging technology clients, with a focus on drones and data.
