Examining the French Anti-Separatism Law and Its Regulatory and Operational Implications for Online Platforms

Published on November 19, 2021
Last Updated on August 24, 2022

What is the Anti-Separatism Law?

The principle of laïcité has long been part of the French political fabric. Enshrined in law in 1905, it asserts the French government’s neutrality in religious matters, giving the people the freedom to practice any religion or none1.

The principle remains controversial in French society, and the notion of freedom of expression it entails is often misunderstood, especially by communities outside France. This is largely because the term laïcité has no direct equivalent in English. It is usually translated as “secularism,” which blurs its actual meaning of state neutrality towards religion2.

Following the murder of French educator Samuel Paty in 2020, the French government introduced the Anti-Separatism Bill to address the proliferation of hate speech on online platforms. The law supports the government’s broader anti-terrorism efforts, providing a framework for countering “separatism” and measures against the formation of counter-societies and radical groups.

Among other things, the law aims to guarantee neutrality in organizations that work with public institutions, train public officials on secularism, and appoint a contact person for issues related to secularism within all public administrations. 

Key Themes and Their Implications for Trust & Safety

With the Anti-Separatism Law’s policies on online hate speech and harassment now laid down, our Policy Research Lab examined four key themes within the law and how its user data disclosure requirements could affect online platforms’ content policy and trust and safety measures.

  1. Article 18: Creation of a new offense

    Also known as the “Article Samuel Paty,” a reference to the fact that Samuel Paty’s exposure on social media was directly linked to his death, this article establishes that publicly exposing a person or their relatives with the intent to endanger their life or property constitutes a serious offense. It also provides for heavier penalties when violators target public workers, elected officials, journalists, or minors: a prison term of three to five years and a fine of €45,000-€75,000.

    This could expose online platforms to litigation and to increased information, legal, and content removal requests. It also creates a clear need for policy formulation and enforcement by internal teams specializing in extremist content, and for more proactive moderation of Islamophobic content and potential threats, especially those directed at public servants and government employees.
  2. Article 19: Combating the spread of hate speech by clarifying the procedure to block “mirror sites”

    Mirror sites allow flagged social media content that incites hatred and harassment to continue spreading online. This amendment gives authorized parties the right to request that communication services block access to harmful content already covered by other legal decisions.

    Services would be expected to withhold or remove any social media content linking to mirror sites on their platforms. They would also be required to implement content management solutions that track down such websites and file reports with French law enforcement agencies.
  3. Article 19a: A new scheme for online content moderation

    This amendment indicates that the government will impose new responsibilities and transparency requirements on digital platforms, in line with the Digital Services Act, by 2023.

    Social media platforms would be required to become more transparent about their efforts to combat illegal activities on their platforms. They would also need to implement appropriate measures, combining human and technological security solutions, to fight harmful content, and to conduct regular risk assessments to close existing security gaps and identify new ones.
  4. Article 19b: Empowering France's broadcast media regulator

    This law entitles the Conseil supérieur de l’audiovisuel (CSA) to oversee the content policy, algorithms, and moderation processes of social media platforms, websites, search engines, and other services. As the supervisor, the CSA is empowered to impose fines of up to €20 million or 6% of a platform's global revenue.

    This could set a precedent for social media services to pursue collaborations with the French broadcast media regulator for periodic audits of policies, moderation processes, and security algorithms, while creating an operational workflow to address audit gaps.
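The mirror-site blocking obligation in Article 19 ultimately reduces to a link-triage step: checking whether a submitted URL points at a domain already covered by a blocking decision, or at a known mirror of one. The sketch below illustrates that check; it is a minimal assumption-laden example, and all domain names and function names are hypothetical rather than any platform's actual implementation.

```python
from urllib.parse import urlparse

# Hypothetical blocklist: domains covered by a court blocking decision,
# plus mirror domains later reported by moderators or regulators.
BLOCKED_DOMAINS = {"banned-example.fr", "mirror1.banned-example.net"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a blocked domain or a subdomain of one."""
    host = urlparse(url).netloc.lower().split(":")[0]
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

def triage_links(urls: list[str]) -> tuple[list[str], list[str]]:
    """Partition submitted links into those to withhold and those to allow."""
    withheld = [u for u in urls if is_blocked(u)]
    allowed = [u for u in urls if not is_blocked(u)]
    return withheld, allowed
```

In practice the blocklist would be fed by the reporting workflow described above, so that newly discovered mirrors are withheld without waiting for a fresh legal decision on each one.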

Operational Framework Recommendations

Once this law is fully realized, platforms could face challenges across algorithms, content moderation, legal, and policy. Our Policy Research Lab has identified four potential partnership areas for platforms looking to scale through outsourcing services following the passage of the Anti-Separatism Law, highlighting operational recommendations and how TaskUs’s capabilities can deliver the desired results and more.

  1. Volumetric Moderation

    TaskUs believes in the importance of the human touch across all of our operations. Recruiting and training only the most qualified candidates for content moderation results in highly accurate moderation of text, images, audio, video, and even live broadcasts.

    Additionally, operational feedback gathered from the moderation process can be used to improve a platform’s external and internal content policies.
  2. Comprehensive Target Operating Model

    TaskUs implements the most effective tools, resources, and programs to ensure our clients get excellent results. We do this by hiring moderators and policy experts who are not just fluent in the language but also bring contextual depth and market-specific insights to the moderation workflow. 

    Additionally, we can predict and handle volume fluctuations during major incidents through comprehensive workforce management. This means building a dedicated Workforce Management team to ensure maximum coverage and forecasting proactively with the client to anticipate long-term spikes. We manage short-term spikes with efficient workforce management tools, platforms, and initiatives; cross-training programs for additional Teammates; a burst capacity process; and a rigorous incident management process.

    We also follow the TaskUs Method for Resilience by giving our operations team unlimited access to a fully tested global wellness program, backed by research and local experts from different regions.

    Lastly, we can add to these efforts by forming a dedicated Data Transparency Insights team to cover the areas of policy in which a thorough understanding of French society and culture is essential.
  3. AI Operations

    TaskUs’s AI Operations service line performs video classification and tagging for machine learning training. This can be supplemented by creating a process flow that captures policy insights and feeds them into the platform’s existing AI models, and by co-developing platform algorithms that automatically detect illicit content based on repeat moderation flags.

  4. Real-Time Dramatic Events Command Centre

    Having a well-oiled content moderation process is key to driving efficiency and accuracy and delivering outstanding results. TaskUs can support this by creating a system of escalation that puts platforms’ specialized teams at the forefront, and by assigning a cross-functional team for transparency reports. Content and data will be thoroughly analyzed, along with weekly reports indicating volume and issue type breakdown.

    Value-add services in this area include creating a watchlist of keywords and hashtags to segregate content that needs to be prioritized, and filtering hashtags within reported content ahead of audits by the CSA, Pharos, and NGOs.

    This could also include applying a reactive “dramatic events” monitoring system through cross-collaboration with platforms’ Engineering, Legal, and Operations teams in France, and verifying how their systems geolocate content within French territories when reporting hateful content.
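The keyword and hashtag watchlist described above is, at its core, a queue-prioritization rule: reports whose text matches the watchlist are scored higher so moderators review them first. The sketch below shows one way that could work; the watchlist entries, scoring weights, and class names are hypothetical illustrations, not a description of any platform's actual system.

```python
import re
from dataclasses import dataclass

# Hypothetical watchlist entries (lowercase) that should push a report
# to the front of the moderation queue.
WATCHLIST_KEYWORDS = {"doxx", "address leak"}
WATCHLIST_HASHTAGS = {"#exposethem"}

@dataclass
class Report:
    report_id: str
    text: str
    priority: int = 0  # higher = reviewed sooner

def score_report(report: Report) -> Report:
    """Raise a report's priority when its text matches the watchlist."""
    text = report.text.lower()
    hashtags = set(re.findall(r"#\w+", text))
    if hashtags & WATCHLIST_HASHTAGS:
        report.priority += 10
    if any(keyword in text for keyword in WATCHLIST_KEYWORDS):
        report.priority += 5
    return report

def prioritize(reports: list[Report]) -> list[Report]:
    """Return reports sorted so the highest-priority ones come first."""
    return sorted((score_report(r) for r in reports),
                  key=lambda r: r.priority, reverse=True)
```

A weekly volume-and-issue-type breakdown, as mentioned above, could then be produced simply by aggregating the scored reports.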

TaskUs has years of experience driving exceptional content management and moderation solutions for several of today’s leading online platforms. Our content moderation capabilities, paired with our expertise in value-add services, make Us the ideal outsourcing partner for making online spaces safer.

Want to know more about our studies and significant findings?

References

Phil Tomlinson
VP, Trust + Safety
Leading the Trust + Safety service line at TaskUs, Phil spearheads strategic partnerships and works at the intersection of content moderation and platform policy. His team helps define online safety's purpose in our broader society.