Hate Speech EU Regulations

Published on October 12, 2021
Last Updated on August 24, 2022

The TaskUs Policy Research Lab studies the European Commission's call for online platforms to act more decisively and rapidly in preventing, identifying, and removing illegal content published by their users, including but not limited to hate speech. The European Commission has threatened to resort to legislative measures if change is not enacted.

In recent years, European digital corporations have become more conscious of the need to stamp out illegal content and hate speech against protected groups on their platforms. This is due in part to the growing pressure from the European Commission and support from NGOs, as well as stricter online regulations enacted within countries such as Germany, France, and Ireland, with other countries following suit. There is a greater demand for transparency from major platforms, and failure to address illegal content could result in legislative measures and heavy fines for the corporations.

The Digital Services Act (DSA) and the Digital Markets Act (DMA) aim to rebalance the responsibilities of users, platforms, and authorities, and to revamp the way Big Tech companies and digital services operate under a single set of rules for the entire EU. Protecting the fundamental rights and safety of European citizens is the European Commission's top priority. In line with this, the DSA calls for more moderation of harmful and illegal content, a clearer accountability and transparency framework for online platforms, and the fostering of innovation and growth for smaller platforms and start-ups competing within the single market. We can expect to see the effects of these initiatives throughout the rest of 2021 and into 2022.

While the European Union is taking significant strides towards a safer internet, other countries around the world are struggling to tackle the explosion of online hate speech and egregious content. Platforms are not always able to respond to reports in a timely manner or with the appropriate compliance measures, and the need for well-trained, resilient content moderators is ever-growing. Governments have also encountered difficulties in defining hate speech and enacting law reform. In underserved regions such as Southeast Asia (though Australia is driving change in hate speech regulation), the vast linguistic diversity combined with the exponential increase in users poses a major challenge to unifying these efforts.

Given all of these challenges, the TaskUs Policy Research Lab recommends: 

  • Human resources, geographic and linguistic skills: Ensure that operations teams are linguistically equipped and have market knowledge of EU cultures and politics. Linguistic diversity is vital for content review with global coverage across different time zones.
  • Engineering resources: Ensure that AI features comply with new laws guaranteeing the consumer's right to be informed when a service is enabled by AI or machine learning, as well as the right to redress. Users would also be able to opt out and be given more control over the way content is ranked. Ensure an audit of the reporting flows, messaging, and SLAs is completed before the law takes effect.
  • Ads expertise, scope and scale: Ensure scalable solutions that enable reviewers to spot violations when unfair competition occurs. Users will be less exposed to illegal activities and dangerous goods.
  • Customer Service, Media Ops, Trust & Safety experts: Rely on field experts to build moderation programs for hate speech, terrorism, and CSAM, working with organizations such as the National Center for Missing & Exploited Children (NCMEC), to effectively regulate illegal content. Rely on digital customer experience specialists to enhance customer satisfaction. Ensure that teams can handle graphic content, high volumes, and tight SLAs. Ensure a comprehensive wellness program that provides psychological support to the team handling sensitive content. For more, read up on our best practices here.
  • Policy Consultancy: Ensure policies are reviewed and reassessed by Trust & Safety experts and legal advisors who can flag compliance risks and provide specific training to in-house policy teams.

The TaskUs Policy Research Lab provides consultative and value-add services in Trust & Safety and Content Moderation. We service a wide range of policy areas and content types, with the end goal of helping create a safer internet for all.

Want to know more about our studies and significant findings?


Cyrielle Thimis
Global Policy Lead
Cyrielle Thimis has developed expertise in drafting and enforcing Trust & Safety policies over the past seven years. As Global Policy Lead, she produces research on the most interesting, complex, and impactful blind spots amid the changing global online landscape.