Navigating Election Integrity, Misinformation and the DSA with TaskUs and Tremau

Published on May 29, 2024
Last Updated on July 4, 2024

2024 is a pivotal year for elections globally. More than 60 countries, including the United States, India, Brazil and many in the European Union (EU), are holding elections, with around two billion voters, or approximately a quarter of the world's population, expected at the polls.

As the electoral landscape evolves, digital platforms and online information have become increasingly influential in shaping public discourse and voter sentiment. This impact highlights the need for concerted efforts and collaboration to safeguard the integrity of the democratic process.

The EU has taken a significant step by releasing election guidelines under the Digital Services Act (DSA).

Protecting electoral integrity

One of the DSA’s explicit objectives is to protect democratic processes and public debates by requiring greater transparency and accountability from online platforms on their content moderation practices. The DSA’s obligations on very large online platforms (VLOPs) and very large online search engines (VLOSEs) include the annual identification, analysis, and mitigation of risks related to fair elections and public discussion while protecting free speech.

These requirements faced their first test during the Slovak parliamentary elections in September 2023, contributing to shorter response times to flagged issues, clearer escalation processes, increased fact-checking capabilities and growth in resources and capacities.

However, threats continue to evolve and stand to influence the 2024 polls not just in the EU but also across the world, as witnessed in several election-related incidents:

  • In the United States, a fake robocall using an AI-cloned voice of President Joe Biden urged Democratic voters not to participate in the New Hampshire primary.
  • In India, fake videos went viral, with two prominent Bollywood actors appearing to criticize Prime Minister Narendra Modi and endorse the opposition Congress party during the general election. A fake voting schedule circulated online, prompting the Election Commission to issue warnings, and a fabricated ABP-CVoter survey on the Andhra Pradesh Assembly Election 2024 caused confusion among voters.
  • Indonesia’s Golkar party published a deepfake video of the late President Suharto, raising concerns over the influence of AI content on voter perception.
  • In Bangladesh, a viral video falsely claimed to show a crowd booing incumbent Prime Minister Sheikh Hasina.
  • Just days before a tight election in Slovakia, manipulated audio recordings spread on social media, purportedly featuring political candidates plotting voter fraud.

What VLOPs and VLOSEs can do to mitigate electoral risks

To address these election-related challenges online, which include Foreign Information Manipulation and Interference (FIMI), illegal manipulation of online discourse and voter suppression tactics, the DSA election guidelines outline key areas where platforms can focus their mitigation efforts:

  • Reinforcement of internal processes
    • Establish dedicated internal teams with relevant expertise during election periods.
    • Calibrate mitigation measures based on information and analysis of local context-specific and Member State-specific risks.
  • Risk identification
    • Pinpoint potential dangers to elections by analyzing local factors, the media literacy landscape and previous instances of illegal activity, among other inputs.
  • Risk mitigation
    • Expand access to official election information, including voting schedules and polling locations.
    • Provide context, such as fact-checking labels on potential disinformation and FIMI content, reminders encouraging users to read content carefully, and labels for official authority accounts.
    • Regulate political advertising by clearly labeling political ads, sharing information on ad sponsors and audience targeting, and adopting efficient verification systems.
    • Disincentivize FIMI through targeted advertising policies.
  • Protection of fundamental rights
    • Take measures to respect essential rights, especially the freedom of expression.
  • Generative AI content mitigation
    • Utilize technical measures to mitigate AI-related risks by applying watermarks, fact-checking AI content and conducting regular red-team testing on AI systems.
    • Apply clear labels to synthetic or manipulated media, especially deepfakes.
    • Embrace adaptive moderation processes that can detect AI content, either through specific techniques or cooperation with AI providers (a minimal detect-and-label sketch follows this list).
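
The guidelines describe what to detect and label but leave the mechanics to each platform. As a rough, hypothetical sketch only (the ContentItem structure, the synthetic_score signal and the 0.85 threshold are illustrative assumptions, not part of the DSA guidelines or any specific platform's tooling), a detect-and-label step might look like this in Python:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ContentItem:
    """A piece of user-generated media queued for election-period review (hypothetical structure)."""
    item_id: str
    has_provenance_manifest: bool  # e.g., a C2PA-style provenance manifest declaring AI generation
    synthetic_score: float         # output of an AI-content classifier, 0.0 to 1.0
    labels: List[str] = field(default_factory=list)

# Illustrative threshold; a real deployment would calibrate this per market, language and model.
SYNTHETIC_LABEL_THRESHOLD = 0.85

def detect_and_label(item: ContentItem) -> ContentItem:
    """Attach an 'ai-generated' label when provenance metadata or the classifier flags the item."""
    if item.has_provenance_manifest or item.synthetic_score >= SYNTHETIC_LABEL_THRESHOLD:
        item.labels.append("ai-generated")
    return item

if __name__ == "__main__":
    sample = ContentItem(item_id="vid-001", has_provenance_manifest=False, synthetic_score=0.92)
    print(detect_and_label(sample).labels)  # ['ai-generated']
```

In practice, the classifier score would come from the platform's own detection models or from signals shared by AI providers, in line with the cooperation the guidelines encourage.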

The guidelines also provide online platforms with suggested actions to take during and after an election:

  • During an electoral period
    • Time-bound resource activation: Adjust mitigation measures based on the specific electoral timeframe and risk assessments.
    • Focus on preventing voter suppression: Provide users with accurate voting information and address potential suppression tactics.
    • Incident response mechanisms: Establish a pre-defined and tested response plan with relevant stakeholders for critical situations.
  • After an electoral period
    • Post-election review: Conduct thorough reviews of the mitigation measures taken, assessing their effectiveness and identifying areas for improvement.
    • Cooperation with experts: Integrate contributions from independent researchers and fact-checkers on the impact of mitigation measures in the post-election review.
    • Transparency and public engagement: Share a non-confidential version of post-election review documents to provide an opportunity for public feedback on election risk mitigation measures.

TaskUs & Tremau: Your partners in protecting elections

TaskUs, a leading provider of Trust & Safety solutions, and Tremau, a renowned regulatory advisory firm, have partnered to assist platforms in navigating the complexities of the DSA, particularly guidelines protecting electoral integrity.

This partnership leverages our combined operational expertise, regulatory advisory services, innovative tooling, and advanced technologies, including AI. With a robust focus on employee well-being, we offer a comprehensive approach that empowers platforms to safeguard electoral integrity and strengthen overall trust and safety.

Operations

Moderation of political ads

Our skilled moderators ensure that political advertising complies with regulations, promoting transparency and accountability.

Moderation of illegal content

We combine human expertise with cutting-edge technologies to identify and address illegal content based on platform policies, protecting the integrity of online discourse.

Prevention of voter suppression

Our dedicated efforts focus on countering voter suppression tactics and empowering users with accurate voting information.

Multi-language capabilities

We are proficient in 30+ languages and have deep knowledge of local cultural nuances, allowing for accurate and contextual moderation of election-related content globally.

Subject-matter expertise

Our team comprises policy and subject-matter experts who stay ahead of evolving trends, enabling swift and informed responses to emerging challenges.

Regulatory and policy advisory

Risk assessment

We review the risk registry and the logic underlying how risk assessments are organized, and assist in drafting individual risk evaluations.

Auditing and testing

Our team assesses internal controls and governance processes to help platforms prepare for DSA audits.

T&S health check

We analyze T&S policies, processes, tooling, and implementation standards against industry best practices to identify opportunities for innovation and efficiency gains.

Tooling

Efficient content moderation enterprise platform

An end-to-end integrated moderation platform that ensures efficient and effective processes in line with industry best practices.

Trusted user and community management

Tech-supported processes that establish transparency, safety, and fairness for the platform users, leading to improved user satisfaction, retention, and growth.

Integrated AI solutions

A best-in-class collection of AI detection and decision-support technologies tailored to the needs of online platforms, with trusted third-party AI analytics and continuous improvement.

DSA-compliant platform

Software to support cost-efficient, robust compliance with the DSA and other regulations, covering requirements such as statements of reasons, user appeal processes and transparency reports.
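
For context on the statement-of-reasons requirement: under the DSA, a platform that restricts content or an account must tell the affected user what was decided, the facts and circumstances, whether automated means were involved, the legal or terms-of-service ground relied on, and the available redress options. As a minimal sketch only (the StatementOfReasons class, field names and example values are illustrative assumptions, not Tremau's product schema or the Commission's Transparency Database format), such a record might be structured like this in Python:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import List
import json

@dataclass
class StatementOfReasons:
    """Hypothetical record covering the core elements a DSA statement of reasons must convey."""
    decision_id: str
    content_reference: str        # platform-internal reference to the affected content
    restriction_type: str         # e.g., "removal", "visibility_restriction", "account_suspension"
    facts_and_circumstances: str  # what happened and why the decision was taken
    legal_or_tos_ground: str      # the legal provision or terms-of-service clause relied on
    automated_detection: bool     # whether automated means were used to detect the content
    automated_decision: bool      # whether the decision itself was taken by automated means
    redress_options: List[str]    # available appeal routes
    issued_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Example: a fabricated voting schedule demoted during an election period.
statement = StatementOfReasons(
    decision_id="sor-2024-0001",
    content_reference="post/abc123",
    restriction_type="visibility_restriction",
    facts_and_circumstances="Post shared a fabricated voting schedule for the upcoming election.",
    legal_or_tos_ground="Terms of service: civic integrity policy on voting misinformation",
    automated_detection=True,
    automated_decision=False,
    redress_options=["internal_appeal", "out_of_court_dispute_settlement", "judicial_redress"],
)

print(json.dumps(asdict(statement), indent=2))
```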

Well-being

Moderator well-being

We recognize the psychological toll of content moderation, which is why we offer world-class, evidence-based and proactive resiliency interventions to prioritize the well-being of our moderators.

Global life coaching program

We support our employees in pursuing personal well-being through transformative coaching conversations.

The Resiliency Studio

Our psychological health and safety program provides innovative interventions to boost brain health and protect employees from the negative effects of content moderation.

Wellness + Resiliency research division

Our dedicated health research team conducts innovative research and enhanced data collection to support employee mental health.

Together, we empower platforms to proactively identify and mitigate risks, implement robust content moderation practices and foster a safe and free online environment for democratic processes to thrive.

Stay ahead of evolving regulations and safeguard electoral integrity and the well-being of your users by partnering with us for a comprehensive solution to comply with the DSA and other election regulations.

