The main takeaway from the 2025 TSPA EMEA Summit is that AI is rewriting the trust & safety (T&S) playbook. Technology is speeding up moderation and scaling detection, but it's amplifying risks just as easily.

Our industry-recognized Trust & Safety team was there, both as presenters and attendees, to explore how businesses can adapt, responsibly and effectively. 

Here are their top 5 insights from the conversations.

1. AI is creating new types of harm

“A lot of conversations surround AI, both as a threat but also as a tool for combating T&S challenges,” says Martha Grivaki, behavioral scientist, Wellness & Resiliency.

While technology boosts efficiency, new tools also help bad actors move faster, producing deepfakes, coordinated disinformation and AI-generated scams. Harmful content is slipping through the cracks.

T&S teams must shift from reactive enforcement to proactive design. For example, keyword detection alone won’t catch AI-generated hate speech. Platforms need smarter systems that can anticipate and prevent the next wave of threats before they spread.

2. Build AI for local context, not just global scale

Language, culture and legal standards vary widely around the world. Yet many platforms still apply the same policies and AI models (often trained on English-language data) to moderate content worldwide. This one-size-fits-all approach falls short.

Localization applies to moderation frameworks too. “Effective content moderation requires enforcing global policies with local context,” says Siva Raghava, senior director, Trust & Safety. “Multilingual training, cultural awareness and nearshore talent ensure decisions are accurate, empathetic and aligned with regional norms.”

3. People matter — a lot

Even advanced tools struggle with gray areas, like satire, coded language or content where intent is unclear. That’s why human moderators and policy reviewers are still critical to T&S.

On top of guiding edge-case decisions, moderators also train, fine-tune and audit AI systems. A human-in-the-loop approach keeps models more accurate, inclusive and aligned with evolving norms.

4. Regulations are getting complicated

While Europe is moving quickly to regulate AI and online content (see: the EU AI Act and DSA), the United States is leaning toward a looser, more market-driven approach. This split is already creating operational challenges for global platforms.

To stay compliant, platforms must localize enforcement flows, tailor transparency reporting and adapt content policies to fit region-specific laws.

5. T&S professionals are shaping the rules

As the regulatory landscape grows more complex, Trust & Safety professionals are stepping into more strategic roles. “T&S professionals are no longer just policy enforcers but actively take part in policy conversations and creation of regulatory frameworks,” says Martha. 

It’s a natural shift. These teams understand how policies land in real life and how rigid systems can miss nuance, overlook threats or unintentionally harm vulnerable groups. Their frontline insight is essential to shaping smarter, more adaptable frameworks.

And if there’s one thing T&S experts agree on, it’s that users and moderators need better protection and care. “Regulations must widen coverage to include labor considerations for T&S workers,” Marlyn highlights. 

The way forward

AI is changing how businesses moderate content and what it means to run a responsible digital platform. According to Siva, “Success lies in blending human insight with AI, delivering seamless experiences while staying vigilant, globally aware and locally grounded.”

T&S must be intentional. That means designing for complexity, leading with care and staying grounded in the needs of the people platforms are built to serve: users, moderators and communities around the world.

To build a safer platform in the AI era, talk to our Trust & Safety experts today.