The main takeaway from the 2025 TSPA EMEA Summit is that AI is rewriting the trust & safety (T&S) playbook. Technology is speeding up moderation and scaling detection. It’s also amplifying risks just as easily.
Our industry-recognized Trust & Safety team was there, both as presenters and attendees, to explore how businesses can adapt, responsibly and effectively.
Here are their top 5 insights from the conversations.
“A lot of conversations surround AI, both as a threat and as a tool for combating T&S challenges,” says Martha Grivaki, behavioral scientist, Wellness & Resiliency.
While technology boosts efficiency, new tools also help bad actors move faster to create deepfakes, coordinated disinformation and AI-generated scams. Harmful content is slipping through the cracks.
“Emerging tech is speeding up exploitation and misuse,” says Dr. Marlyn Savio, research manager, Wellness & Resiliency. “That’s pushing global service providers to innovate and build smarter, more ethical tools that meet real-world complexity.”
T&S teams must shift from reactive enforcement to proactive design. For example, keyword detection alone won’t catch AI-generated hate speech. Platforms need smarter systems that can anticipate and prevent the next wave of threats before they spread.
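To make that gap concrete, here is a minimal sketch, purely illustrative, contrasting blocklist matching with model-based scoring. The `toxicity_model` interface and the blocklist terms are hypothetical stand-ins, not a specific product or API.

```python
# Minimal sketch: why keyword matching alone misses paraphrased or AI-generated abuse.
# `toxicity_model` is a hypothetical classifier interface, assumed for illustration.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms

def keyword_flag(text: str) -> bool:
    """Naive approach: flag only if a blocklisted term appears verbatim."""
    tokens = text.lower().split()
    return any(term in tokens for term in BLOCKLIST)

def model_flag(text: str, toxicity_model, threshold: float = 0.8) -> bool:
    """Smarter approach: score meaning and intent, not just surface strings,
    so reworded or obfuscated abuse can still be caught."""
    return toxicity_model.score(text) >= threshold  # score assumed in [0, 1]

def should_review(text: str, toxicity_model) -> bool:
    # Either signal is enough to queue the item for review.
    return keyword_flag(text) or model_flag(text, toxicity_model)
```

The point of the sketch is the second signal: a system that scores meaning can anticipate variants that never appear on any blocklist.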
Language, culture and legal standards vary widely around the world. Yet many platforms still apply the same policies and AI models (often trained primarily on English-language data) to moderate content everywhere. This one-size-fits-all approach falls short.
“We need AI systems that reflect local languages, cultures and even regulatory frameworks,” explains Andrea Ran, product director, Trust & Safety. “Too many tools are still built for a default user that doesn’t represent the diversity of the real world.”
Localization applies to moderation frameworks too. “Effective content moderation requires enforcing global policies with local context,” says Siva Raghava, senior director, Trust & Safety. “Multilingual training, cultural awareness and nearshore talent ensure decisions are accurate, empathetic and aligned with regional norms.”
Even advanced tools struggle with gray areas, like satire, coded language or content where intent is unclear. That’s why human moderators and policy reviewers are still critical to T&S.
“AI enhances speed and scale in moderation, but human judgment remains essential for context and empathy,” says Siva. “Combining AI with human oversight ensures balanced, nuanced decisions in complex Trust & Safety scenarios.”
On top of guiding edge-case decisions, moderators also train, fine-tune and audit AI systems. A human-in-the-loop approach keeps models accurate, inclusive and aligned with evolving norms.
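As a rough illustration of how such a hybrid flow can work, the sketch below assumes a hypothetical model interface and review queue: high-confidence model decisions are automated, gray areas go to a moderator, and every outcome is logged for retraining and audits.

```python
# Illustrative human-in-the-loop routing, under simplified assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # e.g. "remove" or "keep"
    decided_by: str    # "model" or "human"
    confidence: float

AUTO_THRESHOLD = 0.95  # assumed cutoff; real systems tune this per policy area

def route(item_text: str, model, review_queue, audit_log) -> Decision:
    label, confidence = model.predict(item_text)  # hypothetical interface
    if confidence >= AUTO_THRESHOLD:
        decision = Decision(action=label, decided_by="model", confidence=confidence)
    else:
        # Gray areas (satire, coded language, unclear intent) go to a person.
        human_label = review_queue.assign(item_text)
        decision = Decision(action=human_label, decided_by="human", confidence=confidence)
    # Both human and model outcomes become labeled data for fine-tuning and audits.
    audit_log.append((item_text, decision))
    return decision
```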
While Europe is moving quickly to regulate AI and online content (see: the EU AI Act and DSA), the United States is leaning toward a looser, more market-driven approach. This split is already creating operational challenges for global platforms.
“There’s a growing divergence between how different regions are approaching AI and online safety,” according to Andrea. “Companies now need systems flexible enough to meet very different expectations.”
To stay compliant, platforms must localize enforcement flows, tailor transparency reporting and adapt content policies to fit region-specific laws.
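One way to picture this in practice is per-region policy configuration. The sketch below is illustrative only; the regions, thresholds and obligations are made up for the example and are not legal guidance.

```python
# Illustrative sketch: expressing region-specific enforcement rules as configuration,
# so one pipeline can apply different thresholds, transparency obligations and
# appeal windows per jurisdiction. All values are assumptions, not legal advice.
REGION_POLICIES = {
    "EU": {
        "hate_speech_threshold": 0.70,
        "statement_of_reasons": True,   # DSA-style transparency notice to the user
        "appeal_window_days": 180,
    },
    "US": {
        "hate_speech_threshold": 0.85,
        "statement_of_reasons": False,
        "appeal_window_days": 30,
    },
}

def policy_for(region_code: str) -> dict:
    # Fall back to the strictest configuration when a region is unmapped.
    return REGION_POLICIES.get(region_code, REGION_POLICIES["EU"])
```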
As the regulatory landscape grows more complex, Trust & Safety professionals are stepping into more strategic roles. “T&S professionals are no longer just policy enforcers but actively take part in policy conversations and creation of regulatory frameworks,” says Martha.
It’s a natural shift. These teams understand how policies land in real life and how rigid systems can miss nuance, overlook threats or unintentionally harm vulnerable groups. Their frontline insight is essential to shaping smarter, more adaptable frameworks.
And if there’s one thing T&S experts agree on, it’s that users and moderators need better protection and care. “Regulations must widen coverage to include labor considerations for T&S workers,” Marlyn highlights.
AI is changing how businesses moderate content and what it means to run a responsible digital platform. According to Siva, “Success lies in blending human insight with AI, delivering seamless experiences while staying vigilant, globally aware and locally grounded.”
T&S must be intentional. That means designing for complexity, leading with care and staying grounded in the needs of the people platforms are built to serve: users, moderators and communities around the world.
To build a safer platform in the AI era, talk to our Trust & Safety experts today.