5 AI Lessons from the 2025 TSPA EMEA Summit

Published on June 25, 2025

The main takeaway from the 2025 TSPA EMEA Summit is that AI is rewriting the trust & safety (T&S) playbook. The same technology that speeds up moderation and scales detection is amplifying risks just as quickly.

Our industry-recognized Trust & Safety team was there, as both presenters and attendees, to explore how businesses can adapt responsibly and effectively.

Here are their top 5 insights from the conversations.

1. AI is creating new types of harm

“A lot of conversations surrounded AI, both as a threat and as a tool for combating T&S challenges,” says Martha Grivaki, behavioral scientist, Wellness & Resiliency.

While technology boosts efficiency, new tools also help bad actors move faster to create deepfakes, coordinated disinformation and AI-generated scams. Harmful content is slipping through the cracks.

“Emerging tech is speeding up exploitation and misuse,” says Dr. Marlyn Savio, research manager, Wellness & Resiliency. “That’s pushing global service providers to innovate and build smarter, more ethical tools that meet real-world complexity.”

T&S teams must shift from reactive enforcement to proactive design. For example, keyword detection alone won’t catch AI-generated hate speech. Platforms need smarter systems that can anticipate and prevent the next wave of threats before they spread.
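
To make that concrete, here's a minimal sketch of why a keyword-only filter misses reworded harm and why a score-based classifier stage helps. The denylist, the marker phrases and the toy `classifier_score` function are entirely illustrative stand-ins, not a real moderation model or TaskUs's actual approach.

```python
BLOCKED_KEYWORDS = {"badword1", "badword2"}  # hypothetical denylist

def keyword_flag(text: str) -> bool:
    """Stage 1: exact keyword matching. Fast, but trivially evaded by
    paraphrase, misspellings or AI-generated rewording."""
    tokens = text.lower().split()
    return any(tok in BLOCKED_KEYWORDS for tok in tokens)

def classifier_score(text: str) -> float:
    """Stage 2 placeholder. In practice this would call a trained model
    that scores intent and context, not surface word forms."""
    # Toy heuristic standing in for a real classifier:
    hostile_markers = ("should be removed from", "people like them")
    return 0.9 if any(m in text.lower() for m in hostile_markers) else 0.1

def moderate(text: str, threshold: float = 0.8) -> str:
    if keyword_flag(text):
        return "block (keyword match)"
    if classifier_score(text) >= threshold:
        return "escalate to human review (model score)"
    return "allow"

# A paraphrased hostile message contains no blocked keyword,
# so it sails past stage 1 but is caught by stage 2:
print(moderate("People like them should be removed from this platform."))
```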

2. Build AI for local context, not just global scale

Language, culture and legal standards vary widely around the world. Yet many platforms still apply the same policies and AI models (often trained on English-language data) to moderate content worldwide. This one-size-fits-all approach falls short.

“We need AI systems that reflect local languages, cultures and even regulatory frameworks,” explains Andrea Ran, product director, Trust & Safety. “Too many tools are still built for a default user that doesn’t represent the diversity of the real world.”

Localization applies to moderation frameworks too. “Effective content moderation requires enforcing global policies with local context,” says Siva Raghava, senior director, Trust & Safety. “Multilingual training, cultural awareness and nearshore talent ensure decisions are accurate, empathetic and aligned with regional norms.”

3. People matter — a lot

Even advanced tools struggle with gray areas, like satire, coded language or content where intent is unclear. That’s why human moderators and policy reviewers are still critical to T&S.

“AI enhances speed and scale in moderation, but human judgment remains essential for context and empathy,” says Siva. “Combining AI with human oversight ensures balanced, nuanced decisions in complex Trust & Safety scenarios.”

On top of guiding edge-case decisions, moderators also train, fine-tune and audit AI systems. A human-in-the-loop approach keeps models accurate, inclusive and aligned with evolving norms.
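
Here's a minimal sketch of that human-in-the-loop pattern, assuming a model that returns a label with a confidence score. The names (`ReviewItem`, `CONFIDENCE_FLOOR`, `moderator_decision`) are hypothetical; the point is the routing: low-confidence calls go to moderators, and every decision is logged for retraining and audits.

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.85  # below this, defer to a human (illustrative value)

@dataclass
class ReviewItem:
    text: str
    model_label: str
    confidence: float
    human_label: str | None = None

audit_log: list[ReviewItem] = []

def moderator_decision(text: str) -> str:
    """Stand-in for a real review tool; here every deferral gets flagged."""
    return "needs_policy_review"

def route(item: ReviewItem) -> str:
    if item.confidence >= CONFIDENCE_FLOOR:
        audit_log.append(item)  # sampled later for human spot audits
        return f"auto: {item.model_label}"
    # Human judgment overrides the model and becomes future training data.
    item.human_label = moderator_decision(item.text)
    audit_log.append(item)
    return f"human: {item.human_label}"

# A low-confidence call on ambiguous satire is routed to a person:
print(route(ReviewItem("ambiguous satire post", "hate_speech", 0.42)))
```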

4. Regulations are getting complicated

While Europe is moving quickly to regulate AI and online content (see the EU AI Act and the Digital Services Act), the United States is leaning toward a looser, more market-driven approach. This split is already creating operational challenges for global platforms.

“There’s a growing divergence between how different regions are approaching AI and online safety,” according to Andrea. “Companies now need systems flexible enough to meet very different expectations.”

To stay compliant, platforms must localize enforcement flows, tailor transparency reporting and adapt content policies to fit region-specific laws.
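
One way to picture that is a policy table keyed by jurisdiction, as in the sketch below. The region codes, fields and values are purely illustrative, not actual EU AI Act or DSA requirements.

```python
REGION_POLICIES = {
    "EU": {
        "transparency_report": "detailed statement-of-reasons format",
        "appeal_window_days": 180,
        "requires_local_review": True,
    },
    "US": {
        "transparency_report": "voluntary summary",
        "appeal_window_days": 30,
        "requires_local_review": False,
    },
}

def enforcement_flow(region: str) -> dict:
    # Fall back to the strictest profile when a region is unmapped,
    # a common conservative default in compliance tooling.
    return REGION_POLICIES.get(region, REGION_POLICIES["EU"])

print(enforcement_flow("EU")["transparency_report"])
```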

5. T&S professionals are shaping the rules

As the regulatory landscape grows more complex, Trust & Safety professionals are stepping into more strategic roles. “T&S professionals are no longer just policy enforcers; they actively take part in policy conversations and the creation of regulatory frameworks,” says Martha.

It’s a natural shift. These teams understand how policies land in real life and how rigid systems can miss nuance, overlook threats or unintentionally harm vulnerable groups. Their frontline insight is essential to shaping smarter, more adaptable frameworks.

And if there’s one thing T&S experts agree on, it’s that users and moderators need better protection and care. “Regulations must widen coverage to include labor considerations for T&S workers,” Marlyn highlights. 

The way forward

AI is changing how businesses moderate content and what it means to run a responsible digital platform. According to Siva, “Success lies in blending human insight with AI, delivering seamless experiences while staying vigilant, globally aware and locally grounded.”

T&S must be intentional. That means designing for complexity, leading with care and staying grounded in the needs of the people platforms are built to serve: users, moderators and communities around the world.

To build a safer platform in the AI era, talk to our Trust & Safety experts today.


