In recent years, European digital corporations have become more conscious of the need to stamp out illegal content and hate speech against protected groups on their platforms. This is due in part to growing pressure from the European Commission and support from NGOs, as well as stricter online regulations enacted in countries such as Germany, France, and Ireland, with others following suit. There is greater demand for transparency from major platforms, and failure to address illegal content could result in legislative measures and heavy fines for these corporations.
The Digital Services Act (DSA) and the Digital Markets Act (DMA) aim to rebalance the responsibilities of users, platforms, and authorities, and to revamp the way Big Tech companies and digital services operate, with a single set of rules for the entire EU. Protecting the fundamental rights and safety of European citizens is the European Commission's top priority. In line with this, the DSA calls for more moderation of harmful and illegal content, a clearer accountability and transparency framework for online platforms, and the fostering of innovation and growth for smaller platforms and start-ups competing within the single market. We can expect to see the effects of these initiatives throughout the rest of 2021 and into 2022.
While the European Union is taking significant strides towards a safer internet, other countries around the world are struggling to tackle the explosion of online hate speech and egregious content. Platforms are not always able to respond to reports in a timely manner or with the appropriate compliance measures, and the need for well-trained, resilient content moderators is ever-growing. Governments have also encountered difficulties in defining hate speech and enacting law reform. Though Australia is driving change in hate speech regulation, in underserved regions such as Southeast Asia the vast linguistic diversity, combined with an exponential increase in users, poses a major challenge to unifying these efforts.
Given all of these challenges, the TaskUs Policy Research Lab recommends:
The TaskUs Policy Research Lab provides consultative and value-added services in Trust & Safety and Content Moderation. We serve a wide range of policy areas and content types, with the end goal of helping create a safer internet for all.