CCW Digital reports that 38% of customers are concerned about how chatbots and AI handle their data. That’s significant, considering the convenience, speed and personalization AI brings to customer service.

In the webinar “The Biggest Problem With AI for CX: Your Customers Don’t Trust It,” CCW Digital’s Content Analyst, Audrey Steeves, and TaskUs’ Sr. Director of Product & Operations, Trust & Safety, Andrea Ran, explain that this lack of trust is why customers still want a human agent. They also cover how organizations can build confidence to improve AI adoption.

Here are three takeaways from the conversation:

1. Transparency is the foundation of customer trust

How can AI in CX build customer trust? Sometimes, a single negative interaction can break a customer relationship built over years. Consider the frustration customers feel when they realize they have been chatting with an AI bot after believing they were talking to a human agent.

Solving the trust deficit requires both transparency and consistency.

When customers trust a brand, hesitation diminishes. They become more willing to share information, engage more openly and adopt new features faster.

2. AI misuse is a threat to brand reputation

While transparency is crucial for managing customer perception, external threats increase skepticism. “It’s hard to be a user today, to be honest. You don’t know what to believe,” says Andrea. Bad actors are using GenAI to create hyper-realistic, deceptive content to scam users, hack accounts and spread misinformation quickly and at scale.

What’s more, agentic systems are making high-stakes decisions on their own, from determining content visibility to enforcing platform rules. However, according to Audrey, “Customers should feel that they’re the ones that get to make the decisions when they engage with brands or when they want to interact with AI.”

3. Scaling trust requires an intelligent ecosystem 

Addressing both internal and external challenges requires a strategic approach to scaling trust and safety initiatives. Yet many organizations resort to stopgaps: adding more moderators or investing in an expensive, standalone tool. Neither scales on its own.

Scaling trust means combining three pillars:

  • Technology: AI should be used where it excels: triage and classification. It serves as the first line of defense, scanning millions of interactions, flagging obvious violations and routing ambiguous content to human review (see the sketch after this list).
  • People: Beyond moderators, organizations must invest in specialized roles like policy experts (to write clear rules), data analysts (to spot emerging threat patterns) and wellness coaches (to support frontline mental health).
  • Process: The goal is to adopt a proactive approach. This means having feedback loops between frontline moderators and policy teams, as well as maintaining a detailed, tested crisis response plan.
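
As a rough illustration of that triage-and-escalation pattern (a minimal sketch, not a system described in the webinar), the snippet below routes each interaction by a hypothetical classifier confidence score: clear violations are flagged automatically, the ambiguous middle band is escalated to a human moderator, and low-risk content passes through. The thresholds, field names and scores here are assumptions for illustration only.

```python
from dataclasses import dataclass

# Hypothetical thresholds, for illustration only; real systems tune these
# per policy area and revisit them as threat patterns shift.
AUTO_ACTION_THRESHOLD = 0.95   # high confidence: flag automatically
HUMAN_REVIEW_THRESHOLD = 0.50  # ambiguous band: escalate to a moderator

@dataclass
class Interaction:
    content_id: str
    violation_score: float  # assumed output of an upstream AI classifier (0.0 to 1.0)

def triage(item: Interaction) -> str:
    """Route one interaction: AI handles the obvious cases at scale,
    humans make the call on anything ambiguous."""
    if item.violation_score >= AUTO_ACTION_THRESHOLD:
        return "auto_flag"     # obvious violation: AI as first line of defense
    if item.violation_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # ambiguous content: pass to a person
    return "allow"             # low risk: no action needed

# A small mixed batch of pre-scored interactions
batch = [Interaction("a1", 0.98), Interaction("a2", 0.70), Interaction("a3", 0.10)]
for item in batch:
    print(item.content_id, "->", triage(item))
```

In practice, teams adjust those thresholds as policies and threats evolve, which is exactly where the feedback loops between frontline moderators and policy teams come in.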

The webinar offers actionable strategies for navigating the complexities of AI in CX and effectively building customer trust in this new era.