Platforms’ actions against hate speech

by Inline Policy on 14 Sep 2020

Preventing illegal hate speech online is a priority for policymakers worldwide, and the need to do so is increasingly evident. How can governments strike the right balance between tackling the mechanisms and incentives behind the proliferation of illegal hateful content online and ensuring that platforms do not enable censorship? A closer look at present and future debates demonstrates the intricacies of keeping an ever-growing number of internet users safe while preserving their fundamental rights.

The challenges of preventing hate speech online

Earlier this year, well over 1,000 organisations took part in the #StopHateforProfit campaign, halting all their advertising on Facebook for the month of July. The movement was driven by the tech giant’s perceived inaction in tackling hate speech - which includes “hate, bigotry, racism, antisemitism”, but also “disinformation” - on its platform.

Facebook has implemented a series of measures in response to the criticism, with mixed results. For instance, the company started labelling content that violates its terms of service - but is deemed newsworthy - and tightened its rules on hate speech in ads. Other social media companies have implemented seemingly stricter measures to counter hateful content: Twitter updated its policies to further address hateful content and suspended a number of accounts, while Reddit removed thousands of hate-filled ‘subreddits’.

Ongoing issues around hate speech - starting with the lack of a universally agreed definition - raise fundamental questions for policymakers. Given that tech giants have both the technical capability and the visibility over their own platforms needed to moderate content effectively, should they be responsible - or even liable - for removing user-generated hateful material from their platforms? Are voluntary codes of conduct - which are designed and implemented by the platforms themselves - sufficient? Should governments intervene, and if so, how?

Governments’ attempts at regulating hate speech: the case of France

A few countries have tried to prevent illegal hate speech online through legislation. The French National Assembly adopted a bill on hate speech (Loi visant à lutter contre les contenus haineux sur internet), also known as the Loi Avia, on 13 May 2020. The bill would have obliged platforms to take down ‘explicitly’ hateful content flagged by users within 24 hours; for terrorist and child sexual exploitation content, the time limit would have been one hour. Platforms that failed to do so risked fines of up to €1.25 million.

On 18 June 2020, the Conseil Constitutionnel - the national court that reviews legislation for compliance with the French constitution - noted that these measures placed the burden of analysing content solely on the platforms, creating an incentive for them to indiscriminately remove flagged content. The court ruled that such provisions “infringe upon the exercise of freedom of expression and communication” and struck them down as unconstitutional.

The deadline provisions were subsequently dropped, as were the so-called “obligations of means”, such as the requirement for platforms to put in place content moderation processes to enforce the 24-hour (or one-hour) deadline. A more lenient version of the Loi Avia became law on 24 June 2020. Meanwhile, French Digital Minister Cédric O stated that France has yet to decide whether to put forward a new law against illegal hate speech online or to wait for EU legislation on the matter.

The European Union’s current and future approach to hate speech

The EU is working on an ambitious legislative package called the Digital Services Act (DSA) - on which the public consultation closed on 8 September 2020 - aimed at revamping the current legal framework for digital services. The package is expected to include a set of rules framing the responsibilities of digital services to address the risks faced by their users (including hate speech) and to protect their rights (such as freedom of expression), as well as “ex ante rules” to address competition in the digital space.

Although the European Commission treats ‘hate speech’ as online content that is illegal and requires a coordinated EU-wide response, online platforms are currently in charge of moderating illegal content through voluntary, self-regulatory measures. These include, most notably, the platform-driven EU Code of Conduct on countering illegal hate speech online and the 2018 European Commission Recommendation on measures to effectively tackle illegal content online.

Sector-specific legislation that also deals with hate speech has been either adopted (in the field of audiovisual and media services and copyright) or proposed (with respect to terrorist content online).

The latest evaluation of the EU Code of Conduct (from June 2020) shows that participating companies assessed 90% of flagged content within 24 hours and removed 71% of the content deemed illegal hate speech. Why, then, is the issue still so widely debated?

Going beyond platforms’ voluntary measures

During the DSA Week organised by Forum Europe in July 2020, participants discussed a number of additional challenges. In the session on ‘Platforms and User Generated Content’, Tiemo Wölken MEP emphasised the need for a clear distinction between provisions dealing with illegal content and those addressing legal but harmful content.

For illegal content, the European Parliament’s JURI Committee Rapporteur for the DSA called for a reliable “notice-and-action” system that would provide legal clarity without creating incentives for the unwarranted removal of content. His proposals echoed the lessons of France’s original Loi Avia - indeed, the MEP stated that the ruling by France’s Constitutional Council should be taken as a warning for EU policymakers.

Beyond removal obligations, the MEP argued, regulators should address the platforms’ advertising-based business model and opaque algorithms. The content curation processes on which platforms rely can create so-called ‘echo chambers’ that facilitate the spread of hateful content, as identified in Facebook’s Civil Rights Audit.

Providing users with more control over content curation criteria - rather than leaving curation “solely determined by a piece of content’s ability to generate ad revenues”, as Mr Wölken put it - could limit the virality of specific material.

Conclusion

Whether the #StopHateforProfit campaign will prove successful remains to be seen. A number of companies have resumed advertising on Facebook, as CEO Mark Zuckerberg predicted they would, while others remain committed to the boycott.

Nonetheless, platforms cannot be the sole judges of the illegal hateful material that their very modus operandi helps to disseminate. Indeed, when social media giants do intervene, they are either criticised for not doing enough or accused of ‘silencing’ voices (prompting the emergence of alternative channels such as Parler).

Recent developments - such as the French court’s ruling and the provision that Germany passed to tighten its NetzDG law against online hate speech, forcing social media platforms to delete potentially illegal content and report user data to the Federal Criminal Police Office - must continue to inform the debate.

In France, the Loi Avia no longer risks imposing onerous obligations on platforms. It did, however, lead to the establishment of an “Online Hate Observatory”. The new body, officially launched in July 2020, is responsible for monitoring and analysing hateful content online in collaboration with online platforms, associations, and researchers.

Collaborative forums could allow for much-needed data and knowledge-sharing among relevant stakeholders. However, if governments want to keep an ever-growing number of internet users safe, while also preserving their fundamental rights, they should go further and address the economic incentives and technical mechanisms that allow online hate speech to proliferate.

