COVID-19: a turning point for online content regulation in the UK?
by Alessandra Venier on 10 Jun 2020
As the UK prepares some of the most ambitious online harms legislation in the world, the unprecedented efforts taken by tech companies to curb the spread of COVID-19 falsehoods have raised a number of questions for regulators and policymakers. The UK may need to adapt its original stance on online harms in order to face the ‘new normal’.

In the months preceding the COVID-19 health crisis, numerous governments had been vocally supporting further regulation for large technology firms. Whether in terms of digital tax proposals, competition, privacy, or online content moderation, regulators had a number of concerns regarding the power and influence of the digital giants.
The state of online content legislation in the UK
In the UK, the flagship Online Harms White Paper - which sets out the British government’s plans for a package of measures to keep UK users safe online - was published in April 2019. On 12 February 2020, the then-Secretary of State for Digital, Culture, Media and Sport (DCMS) Nicky Morgan announced the government’s initial consultation response to the White Paper.
The long-awaited response did not provide a detailed update on all the policy proposals outlined in the consultation, but rather gave an indication of the government’s direction in a number of key areas. In particular, it addressed the implications for freedom of expression and clarified which businesses fall within scope of the obligation to ensure people’s safety and wellbeing online, the so-called “duty of care”: primarily large social media platforms. The government also announced the establishment of a multi-stakeholder Transparency Working Group – details of which have yet to be published – to ensure “representation from all sides of the debate, including from industry and civil society”.
The UK government stipulated that it would maintain a proportionate and risk-based approach, rather than a “one-size-fits-all” approach, on content removal, and appointed the Office of Communications, Ofcom, as the regulator to enforce the forthcoming rules. To ensure freedom of expression, the regulatory framework would focus on the wider systems and processes that platforms have in place to deal with online harms, rather than requiring the removal of specific pieces of content. However, the COVID-19 crisis could potentially change this position.
The rise of COVID-19 misinformation
From the onset of the COVID-19 crisis, the need to regulate against the spread of falsehoods circulating online has become more evident than ever. In the Online Harms White Paper, misinformation refers to “the inadvertent sharing of false information”, as opposed to disinformation, which is defined as “spreading false information to deceive deliberately”. The most prominent form of misinformation relating to the pandemic in the UK is the conspiracy theory linking the spread of the virus to the rollout of 5G technology, which has led to the destruction of 5G equipment and even violence against telecom employees throughout the UK and Europe.
Other falsehoods include warnings over made-up extraordinary measures by governments to keep people in their homes, myths on how to avoid the virus, or false information about vaccines (such as those identified by the NGO EU DisinfoLab).
A recent study by Ofcom found that almost half of UK adults have come across misleading information about COVID-19 online, and that 40% are finding it hard to know what is true or false. Opportunists and fraudsters have exploited this confusion swiftly: British consumer group Which? has identified a number of cases where individuals sell items such as hand sanitiser at extortionate prices on online marketplaces, while the Action Fraud group reported that victims of coronavirus-related scams had lost a total of over £5 million as of June 2020.
Reactions from platforms and policymakers
Misleading information circulating online – and its negative consequences – has gained increased attention from policymakers and regulators. For instance, the UK Parliament’s DCMS Sub-committee on Online Harms and Disinformation’s ongoing work focuses on the role of platforms with respect to COVID-19 misinformation and disinformation. Beyond the UK, the European Commission has also repeatedly urged platforms to track misinformation on encrypted services and platforms, and to remove the financial incentives lying behind falsehoods being shared online.
Platforms have taken unprecedented action in countering the spread of misinformation about the pandemic. For instance, Google announced $6.5 million in funding for fact-checkers and nonprofits fighting misinformation, with an immediate focus on COVID-19, and tweaked its search results to promote official and authoritative information. The search giant has also modified the policies on its video streaming platform YouTube to ban “medically unsubstantiated” content, after strengthening rules on content suggesting links between COVID-19 and 5G.
Similarly, Facebook has ramped up its efforts to moderate online content, through both human moderators and automated moderation; most importantly, the platform has committed to retroactively alerting users who interacted with content subsequently labelled as misleading. Twitter, among others, has adapted its account verification approach in response to the COVID-19 pandemic.
Ministers in the UK have welcomed the platforms’ unprecedented moves. As DCMS Secretary of State Oliver Dowden told Members of Parliament (MPs) at the end of April, “most platforms have taken positive steps to curtail the spread of harmful and misleading narratives related to COVID-19”. However, Dowden added that platforms “need to explore how they can further limit the spread of misinformation”, while a recent report highlights that platforms’ solutions to tackle COVID-19 misinformation have not been so successful.
Implications for future tech regulation in the UK
Against this background, a number of difficult questions will influence the debate about the future regulatory framework.
Firstly, whether platforms will maintain their ramped-up efforts if they are not legally required to do so remains unclear, as highlighted by Oxford Information Labs’ Stacie Hoffmann during a UK Parliament Evidence Session on COVID-19 misinformation. Secondly, could the new ability shown by platforms to upgrade their moderating systems make previous arguments about the technical limitations of those systems less convincing?
The government’s full consultation response to the Online Harms White Paper, which was expected this summer, could have shed some light on the UK government’s position. Although the government should now provide a full response by the end of the year, Dowden suggested to MPs that he was considering carrying out “pre-legislative scrutiny” of any draft laws before presenting an Online Harms Bill, which could further delay the process. The legislation is now expected to be introduced “later in this Session” or, according to some, not for several years.
The COVID-19 crisis could potentially change the UK government’s original stance on online content regulation. Indeed, the platforms’ proactive response to COVID-19 misinformation demonstrates that they can play an important role in curbing online harms without necessarily threatening free speech. However, the crisis has also highlighted that the consequences of misinformation may go far beyond the platforms’ remit.
The UK government recently backed the newly-established Online Safety Tech Industry Association (OSTIA), which aims to represent the interests of UK technology companies in the debate on how online content should be regulated. Such initiatives could potentially enable a more collaborative approach, as we work toward the ‘new normal’ for tech regulation.
Topics: UK politics, Big Tech, Regulation, Technology