3 Ways to Ensure Trust in AI for CX

Published on May 15, 2025
Last Updated on May 15, 2025

When AI starts making decisions that directly impact customers, trust is a dealbreaker. If users don’t feel those decisions are fair, they’ll find another product or service. If something goes wrong and no one can explain why, the brand gets blamed. Regulators take notice. And once trust is broken, there’s usually no getting it back.

At Executive Connect in Dublin, industry experts explored what trust in AI means in customer experience (CX) and why it’s emerging as a major differentiator.

Here are 3 key insights that emerged from the discussion.

1. Enforce global policies with local context

Moderation is full of nuances. Cultural norms, regulations and user behavior vary across regions. For example, what’s considered acceptable in Western markets (e.g., content featuring dance, clothing styles or certain relationship dynamics) might be flagged as offensive in the Middle East. 

Languages have subtle differences too. Even when there’s an official language, dialects and tones vary. What’s friendly in one region might feel off in another. And across a region like EMEA, where dozens of languages are in play, getting it wrong isn’t just awkward; it’s risky.

“Multilingual and multicultural training is key,” says Yahya Ouzen, regulatory investigator, Data Protection Commission Ireland. “Teams must have both language skills and socio-cultural understanding, so moderation decisions are not just accurate, but also empathetic and context-aware.”

“Nearshore locations [for moderation outsourcing] like Egypt, Croatia, Ireland and Greece help. They offer multilingual talent plus regional and cultural proximity to key markets,” says Jennifer Kavanagh, VP of client services, TaskUs. 
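As a rough illustration, enforcing a global policy with local context often amounts to layering regional overrides on top of a global baseline. Here is a minimal sketch of that idea; the categories, regions and actions are hypothetical, not TaskUs’ actual policy model:

```python
# A minimal sketch: regional overrides layered on a global moderation
# baseline. Category names, regions and actions are illustrative only.

GLOBAL_POLICY = {
    "hate_speech": "remove",
    "dance_content": "allow",
    "dating_ads": "allow",
}

REGIONAL_OVERRIDES = {
    "MENA": {
        "dance_content": "age_restrict",
        "dating_ads": "remove",
    },
}

def resolve_action(category: str, region: str) -> str:
    """Return the moderation action for a category in a region,
    falling back to the global baseline when no local rule applies."""
    overrides = REGIONAL_OVERRIDES.get(region, {})
    return overrides.get(category, GLOBAL_POLICY.get(category, "escalate"))

print(resolve_action("dance_content", "MENA"))  # age_restrict
print(resolve_action("dance_content", "EU"))    # allow (global default)
print(resolve_action("satire", "EU"))           # escalate: unknown categories go to a human
```

The design choice worth noting is the fallback chain: a local rule wins over the global rule, and anything the policy doesn’t cover escalates to a person rather than being auto-actioned.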

2. Power AI with people

AI excels at quickly spotting patterns of abuse and flagging issues, but it struggles in situations requiring context, understanding and the emotional intelligence to make truly sound decisions.

“Safety cannot be fully automated,” points out Siva Raghava, senior director of Trust & Safety at TaskUs. “AI and automation can be powerful tools in personalized protection by handling high-volume, repetitive tasks at speed and scale. But to preserve empathy, it’s critical to keep humans in the loop for nuanced decisions.”

Yahya recalls a spike in online safety issues during the 2020 lockdowns, when digital interactions surged and in-person connections dropped. “This serves as a reminder that, even as AI advances, human oversight remains essential,” he says.
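In practice, “humans in the loop” usually means a routing rule: the model auto-handles clear-cut, high-volume cases and sends ambiguous or sensitive ones to reviewers. A minimal sketch, assuming a hypothetical classifier, labels and confidence threshold:

```python
# Human-in-the-loop routing sketch. The labels, threshold and
# classifier output shape are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # e.g. "abuse", "benign"
    confidence: float  # model probability, 0.0 to 1.0

SENSITIVE_LABELS = {"self_harm", "child_safety"}  # always human-reviewed
AUTO_THRESHOLD = 0.95  # only act automatically when the model is very sure

def route(pred: Prediction) -> str:
    if pred.label in SENSITIVE_LABELS:
        return "human_review"   # empathy and context required
    if pred.confidence >= AUTO_THRESHOLD:
        return "auto_action"    # high-volume, repetitive cases
    return "human_review"       # the model is unsure

print(route(Prediction("benign", 0.99)))     # auto_action
print(route(Prediction("abuse", 0.80)))      # human_review
print(route(Prediction("self_harm", 0.99)))  # human_review
```

Note that sensitive categories bypass the confidence check entirely: no matter how sure the model is, those decisions stay with people.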

3. Embrace strategic friction

A seamless journey is the ultimate goal in customer service, but balancing great CX with strong protections is one of the biggest challenges businesses face.

“A completely frictionless journey for the user can sometimes be risky to a platform as safety can be compromised,” Siva argues. “The goal isn’t to remove friction entirely, but to design it thoughtfully where it adds value and protects the ecosystem.” 

Strategic friction, like an added step for high-risk actions or a pause for user verification, can deter abuse without frustrating legitimate users. Human-in-the-loop moderation is key: it ensures oversight, contextual judgment and feedback that help fine-tune AI systems so they stay aligned with business and CX goals.
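To make the idea concrete, strategic friction can be expressed as a risk-based decision: frictionless by default, with step-up checks reserved for risky actions. The action names and risk signals below are hypothetical:

```python
# Strategic-friction sketch: apply extra verification only where risk
# warrants it. Action names and the account-age signal are assumptions.

HIGH_RISK_ACTIONS = {"payout", "change_email", "bulk_message"}

def required_friction(action: str, account_age_days: int) -> str:
    """Decide how much friction to apply before allowing an action."""
    if action in HIGH_RISK_ACTIONS and account_age_days < 30:
        return "step_up_verification"  # e.g. re-authenticate or enter a code
    if action in HIGH_RISK_ACTIONS:
        return "confirmation_pause"    # a deliberate extra click
    return "none"                      # keep the happy path seamless

print(required_friction("bulk_message", 3))  # step_up_verification
print(required_friction("payout", 400))      # confirmation_pause
print(required_friction("like_post", 400))   # none
```

The point is that friction is graduated, not uniform: trusted, low-risk paths stay smooth, while the riskiest combinations earn the most scrutiny.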

Rewriting the rules

The rules for CX and Trust & Safety are being rewritten in real time. As AI evolves and expectations rise, success will come to those who can navigate complexity — blending human judgment with machine intelligence, delivering great experiences while staying vigilant and thinking globally without losing sight of local realities. 

Looking to build smarter, safer experiences? Talk to our Trust & Safety experts today.
