What Your Customer Service Metrics Aren’t Telling You


TaskUs

October 01, 2014

Your customer care metrics only provide part of the picture. Image courtesy of Creative Commons; credit: Milstan

A lot of time, ink and effort go into talking about and developing a strategy around the customer experience (CX), and rightly so. That said, many organizations use customer satisfaction metrics to help them determine how their customer experience is performing. This is also a good thing, with one caveat: Customer service metrics are not the be-all and end-all of the customer experience story. While most customer-service metrics can tell you what is wrong, they typically don’t tell you why things are going wrong. Here’s a quick guide to some of the most common customer-service metrics that contact centers track and how to view them more objectively:

Average Speed-to-Answer (ASA)

ASA is the average time a customer waits for a response to their customer service inquiry. It could be the speed at which emails are returned or the speed at which phone calls are answered. Either way, a low ASA doesn’t always translate into a positive customer experience; a quickly answered call, for instance, tells you nothing about whether the customer’s problem was actually solved.

Average Call Duration (ACD)

ACD is the average time a customer-service agent spends on the phone with a customer. Again, fast and good aren’t always synonymous. On the one hand, a low ACD suggests a more efficient and cost-effective call. On the other hand, it could reflect that customer-service reps are rushing through calls and failing to establish emotional connections with customers. The metric alone won’t tell you which is the case.

First Contact Resolution (FCR)

FCR tracks how often a customer-service issue is resolved on the first contact, by one customer-service rep, rather than over multiple contacts with multiple reps. Although a high FCR is good for both the company and the customer – the former benefits from increased efficiency and lower costs, the latter from faster resolutions – it says little about why customers are calling in the first place. In other words, it helps you react to customers’ issues more effectively but does nothing to help you prevent them proactively.

Customer Satisfaction Score (CSAT)

After a customer-service interaction, many companies ask their customers to rate how satisfied they are with it on a scale of one (i.e., very dissatisfied) to five (i.e., very satisfied). The resulting number, the company’s CSAT score, indicates how well the company met or exceeded the customer’s expectations. Because CSAT is typically assessed after a discrete event, however, it’s often unclear whether it reflects the customer’s underlying problem – a defective product, for instance – or the actual customer-service experience.
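How those one-to-five ratings roll up into a single CSAT score varies from company to company. A common convention, assumed in the rough Python sketch below, is to report the percentage of respondents who answer four or five; the sample ratings are purely illustrative.

```python
def csat_score(ratings):
    """CSAT as the percentage of respondents who rate 4 or 5 on a 1-5 scale."""
    if not ratings:
        raise ValueError("no survey responses to score")
    satisfied = sum(1 for r in ratings if r >= 4)  # count 'satisfied'/'very satisfied'
    return 100 * satisfied / len(ratings)

# Example: 6 of 8 respondents rated the interaction a 4 or 5 -> CSAT = 75.0
print(csat_score([5, 4, 4, 5, 3, 2, 5, 4]))
```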

Net Promoter Score (NPS)

Companies that measure NPS ask customers to rate, on a scale of zero to 10, how likely they are to recommend them to others. Calculated by subtracting the percentage of detractors (those scoring zero through six) from the percentage of promoters (those scoring nine or 10), NPS assumes that happy customers will recommend the company and unhappy customers won’t. However, it fails to capture whether customers do, in fact, make referrals, and gives no information about why they will or won’t.
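To make that formula concrete, here is a rough Python sketch using the standard NPS bands described above; the sample scores are made up for illustration.

```python
def net_promoter_score(scores):
    """NPS from a list of 0-10 'likelihood to recommend' ratings (-100 to +100)."""
    if not scores:
        raise ValueError("no survey responses to score")
    promoters = sum(1 for s in scores if s >= 9)   # 9-10: promoters
    detractors = sum(1 for s in scores if s <= 6)  # 0-6: detractors
    # NPS = % promoters minus % detractors
    return 100 * (promoters - detractors) / len(scores)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 -> NPS = 50 - 20 = 30.0
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 3, 6]))
```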

At the end of the day, metrics alone shouldn’t guide your approach to providing an efficient and effective CX. You need to consider what the metrics don’t tell you as well as what your customers really think. And finally, you need to put yourself in your customers’ shoes so you can approach your CX in a way that delivers real value to them.