Is transparency the key for turning AI into a force for good?
by Pia Doering on 20 Jan 2022
In recent years, artificial intelligence (AI) has become embedded in many of the processes of business operations, public life, and politics. Yet as AI becomes an ever larger part of people’s lives, suspicion has mounted as to whether AI is a force for good, or whether its algorithms create bad outcomes for some of those on the receiving end of its calculations. In a 2020 survey by KPMG, only 26% of UK citizens were willing to rely on information provided by an AI or to share data with one. To counter this mounting mistrust in AI, the UK Government published its new Algorithmic Transparency Standard in late November 2021. This blogpost introduces the standard, evaluates its potential, and points to the questions that remain open.
Although there may have been a time when AI was thought to be superior to human decision-making - more objective and more effective - recent research has revealed how biased AI can be when bias is not explicitly addressed in its programming. In 2018, an MIT study showed that AI-powered facial recognition tools erred significantly more when analysing female and/or black faces than when analysing male and/or white faces - a concerning failure in a context where facial recognition has become increasingly important for law enforcement. Not long after, the UK was hit by its very own AI scandals: first, when students from disadvantaged backgrounds were more likely to have their A-level results downgraded by an algorithm after in-person exams were cancelled in 2020; and second, when the Department for Work and Pensions was hit with a legal challenge in December 2021 over allegedly using its algorithm to disproportionately target disabled people for benefits fraud investigations. Events like these have damaged the reputation of AI in the UK and inspired calls for greater transparency in algorithm-supported decisions from the Centre for Data Ethics and Innovation, the Alan Turing Institute, the Ada Lovelace Institute, and the OECD (amongst others).
On 29 November 2021, the UK Government published its Algorithmic Transparency Standard, delivering on commitments made in its National AI Strategy and its National Data Strategy. The standard obliges public sector organisations to provide more information on the role of algorithms in supporting decisions that affect individuals (especially legally, as in law enforcement, or economically, as in benefits claims). According to the Government, greater transparency will ‘promote trustworthy innovation by providing better visibility of the use of algorithms across the public sector and enabling unintended consequences to be mitigated early on’.
The standard is organised in two tiers. The first tier describes the algorithmic tool in question, detailing how it is used and incorporated into the decision-making process, what problem it aims to solve, and the justification for its use. The second tier provides more detailed information. For instance, it should specify which datasets the model has been trained on and which datasets it will be deployed on - important information for uncovering potential inherent biases. It should also describe what the tool is designed to do and how it could be misused. Finally, it must analyse how exactly the tool affects decision-making - including how much information the tool provides to the decision-maker, to what extent humans can intervene in the algorithmic process, and what training the people deploying the tool receive. Organisations obliged to use the standard (currently certain public sector bodies) can provide this information by completing the accompanying form.
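For readers who think in data structures, the two-tier record described above can be pictured as a nested document: a short non-technical overview, plus a detailed technical and governance section. The sketch below is purely illustrative - the field names and example values are invented for this post and do not reproduce the Government’s official template.

```python
# Illustrative sketch of a two-tier algorithmic transparency record.
# Field names and values are hypothetical, not the official UK schema.

transparency_record = {
    "tier_1": {  # short, non-technical overview
        "tool_description": "Risk-scoring tool flagging claims for review",
        "how_it_is_used": "Score considered by a caseworker alongside other evidence",
        "problem_addressed": "Prioritising limited investigation capacity",
        "justification": "Manual triage could not keep pace with claim volumes",
    },
    "tier_2": {  # more detailed technical and governance information
        "training_datasets": ["historical claims 2015-2019 (hypothetical)"],
        "deployment_datasets": ["live claims data"],
        "intended_use": "Decision support only; no automated outcomes",
        "potential_misuse": "Treating the score as a definitive verdict",
        "human_oversight": {
            "information_shown_to_decision_maker": "score plus contributing factors",
            "human_can_override": True,
            "deployer_training": "Mandatory briefing on tool limitations",
        },
    },
}


def missing_fields(record: dict) -> list:
    """Return dotted paths of fields left empty - the kind of
    completeness check a publishing body might run before release."""
    gaps = []

    def walk(node, path):
        if isinstance(node, dict):
            for key, value in node.items():
                walk(value, f"{path}.{key}" if path else key)
        elif node in ("", None, []):
            gaps.append(path)

    walk(record, "")
    return gaps


print(missing_fields(transparency_record))  # [] when every field is filled
```

The point of the sketch is simply that the standard asks for structured, checkable information rather than free-form assurances: an empty field is immediately visible, both to the publishing body and to anyone scrutinising the record.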
What does this mean for businesses?
The Algorithmic Transparency Standard will be piloted by several government departments and public sector bodies in the months ahead. The standard will then be reviewed by the Central Digital and Data Office before formal endorsement from the Data Standards Authority is sought in 2022. However, although businesses are not yet required to comply with the Algorithmic Transparency Standard, there are several ways in which the standard may affect them. Firstly, public sector bodies rarely develop their own algorithmic tools - instead they use those developed by private companies. If you are a business providing algorithmic tools or analysis to the public sector, you will have to ensure that you provide clients with the information required by the Algorithmic Transparency Standard. Secondly, at a later stage, the standard could be rolled out to all entities that use algorithms to support decisions affecting individuals. Likely candidates for this include banks and insurance companies which make decisions about loans, mortgages, insurance coverage or pay-outs; companies that use algorithms to target potential customers or develop flexible pricing; and online platforms using algorithms to structure content (similar provisions are already being developed in the European Union’s final push on its Digital Services Act). In short, if your use of algorithms has any ramifications for individuals, you may sooner or later be required to comply with the UK’s Algorithmic Transparency Standard. At the same time, in anticipation of a broader rollout, the Algorithmic Transparency Standard provides an opportunity for businesses to position themselves as responsible actors and increase public trust in AI, which will only benefit the further commercialisation of AI.
How effective will it be?
The UK’s Algorithmic Transparency Standard has been welcomed by those organisations which had called for increased transparency in AI applications. And as long as we live in a world that increasingly relies on algorithms, but in which not everyone can yet judge whether their workings are beneficial, initiatives that develop transparency and strengthen public trust are to be welcomed. However, certain gaps remain to be filled.
For example, just because an algorithmic process is transparent, it does not mean that it is ethical or that it works well - so who will ensure that algorithms made transparent by the standard are also ethically sound and of high quality? Relatedly, who will decide what constitutes an ethical, high-quality algorithm? And what will happen to actors whose algorithms are discovered to be flawed, or even those who seek to mislead the public by withholding information or entering false information into the standard?
According to the Government, the standard aims at ‘empowering experts and the public to engage with the data and provide external scrutiny’. What will happen if these experts and the general public call out an algorithm - or, worse, if they disagree with one another - remains unclear for now.
The Government has taken a first step towards incentivising scrutiny of algorithms, and towards supplying the missing benchmarks by which to judge AI, in its subsequently published roadmap to an effective AI assurance ecosystem. In this vision, a developing AI assurance market provides services, tools and standards by which AI capabilities can be evaluated. But for now, the Government still has much to clarify about how it intends to leverage its transparency standard to actually turn AI into a force for good.
If you have any questions about AI regulation, or are interested in an informal chat, please contact us at firstname.lastname@example.org.
Written by Pia Doering
Pia provides policy analysis and monitoring to clients in the tech sector.