The FT Word – March 2010
The FT Word
The FT Word is a free monthly newsletter with support management tips. To subscribe, send us email with your email address. The subscription list is absolutely confidential; we never sell, rent, or give information about our subscribers. Here’s a sample.
Welcome
Welcome to the March 2010 edition of the FT Word. Please forward it to your colleagues. (They can get their own subscription here.)
Two metrics discussions this month – and an invitation to hear about selecting a great location for a new support center:
- Measuring customer satisfaction: the options and the pros and cons
- Social media and case deflection: can we make a case?
- After a successful first run, the Third Tuesday Forum will welcome Barry Duplantis of HP on March 16th in Santa Clara, CA, to discuss locating support centers in unusual locations. Want to experiment with something different from Bangalore or North Dakota? Sign up now. Space is strictly limited to ensure an interactive session.
Measuring Customer Satisfaction: Options, Pros, and Cons
Many thanks to Hollis Sheppard for suggesting this topic.
Customer satisfaction is the bedrock of support groups. Unfortunately, customer satisfaction is much harder to measure than productivity. So let’s inventory the mechanisms for capturing customer satisfaction and contrast their effectiveness. We will consider two bona fide customer satisfaction measurement tools, the periodic survey and the transactional survey; one rarely-used but useful method, the secret shopper approach; a common approach in low-complexity support centers, quality monitoring; and finally a commonly-used alternative to customer satisfaction metrics, operational metrics from the case-tracking system.
1. Periodic customer relationship surveys
Description: Once a year, or perhaps every six months, a significant portion of the customer base is surveyed, often by phone to obtain a high rate of response. The survey covers many aspects of the customer’s relationship with the vendor and can therefore be quite long, with dozens of questions. The results of the survey can be compared with those of prior years.
Pros
- A relationship survey can capture the entire relationship with the customer, not just what’s happening with support.
- If delivered by phone, it can achieve very high response rates, hence better validity.
Cons
- Because you pretty much have to spring for phone delivery (as few customers would fill out a long written survey), customer relationship surveys are expensive.
- They create long delays between process changes and results. Because of the delays and the general nature of the results, it can be difficult to translate the results into practical insights.
- The quality of the surveys depends greatly on the choice of the survey recipients. Vendors that rely heavily on relationship surveys find that staff members become quite adept at targeting the contacts who will give the best ratings…
Recommendation: Unlike the other approaches described here, relationship surveys give vendors a holistic picture of the state of customer satisfaction, so they have their place; they are much less useful for support executives who seek fast, actionable feedback.
2. Transactional customer satisfaction surveys
Description: As each case closes (or for a sample of cases), customers receive a short survey asking them to rate the particular interaction with support. Surveys are typically delivered electronically although some vendors choose to conduct phone surveys to increase the response rate. Results can be tracked over time in addition to being matched to specific staff members.
Pros
- Transactional surveys allow service recovery, as managers can easily follow up on poor surveys.
- Results are available quickly since most customers either respond within a few days or never do.
- Unlike relationship surveys, transactional surveys are matched to specific events, so they can be used to evaluate individual performance.
Cons
- The main issue with transactional surveys is self-selection bias: customers who are either very satisfied or very dissatisfied are the most likely to respond. It’s very important to maximize the response rate to increase validity.
- Transactional surveys can be gamed to a certain extent. Support can encourage customers to fill them out when the customers are particularly happy, which is not so bad, but more importantly some organizations allow reps and managers to suppress surveys for certain cases, which can create systematic bias and make the surveys worthless.
Recommendation: Transactional surveys are an essential tool for support managers, at least in environments where customers are likely to respond, which is the case for most high-complexity support environments. To maximize the response rate, I recommend the one-question approach, the one question being, “Please rate your satisfaction with this particular case.” Response rates lower than 10% limit the validity of the survey. Finally, allow customers to add comments; much can be learned from them.
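For illustration, here is a minimal Python sketch of the two numbers to watch on a transactional survey, the response rate and the average rating. All figures are made up; in practice they would come from your survey tool.

    surveys_sent = 400
    ratings = [5, 4, 5, 3, 1, 5, 4, 2, 5, 4] * 6   # 60 responses on a 1-5 scale

    response_rate = len(ratings) / surveys_sent
    mean_rating = sum(ratings) / len(ratings)

    print(f"Response rate: {response_rate:.1%}")    # 15.0%
    print(f"Mean rating:   {mean_rating:.2f} / 5")  # 3.80

    # Response rates below 10% limit the validity of the survey.
    if response_rate < 0.10:
        print("Warning: response rate under 10%; treat results with caution.")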
3. Secret shopper programs
Description: In a secret shopper program an outside entity makes support requests and reports back on the quality of the service received. This approach is rarely used for support, perhaps because it requires a careful setup, especially for complex support.
Pros
- A secret shopper program can capture the entire support process and yield a holistic evaluation unlike other tools.
- If you can keep the identity of the secret shopper secret, the programs are hard to game.
Cons
- Secret shopper programs are complicated to set up and therefore expensive, especially for highly-complex support.
- Because of that, they are not practical for capturing individual performance.
Recommendation: Secret shopper programs are well worth it for low- and medium-complexity support, where they don’t require a contrived setup. They are good for spot checks.
4. Quality monitoring
Description: Quality monitoring programs rely either on direct observation or on after-the-fact evaluation of recordings or emails by trained evaluators, whether staff dedicated to that task or managers and team leaders trained for it. They use a preset checklist to evaluate various aspects of the interaction. While quality monitoring often sticks to superficial issues such as using proper greetings, the better programs evaluate more meaningful aspects of the interaction and are therefore much more valuable.
Pros
- Quality monitoring programs can get into more detail than a transactional survey, so they are helpful for capturing adherence to new processes.
- They are helpful as a mentoring tool.
Cons
- The main problem with quality monitoring programs is that the results can be useless if the monitoring checklist is poor or the monitors are not properly trained.
- They are relatively expensive to run.
- And of course they are not a true reflection of what customers think, only of what the monitors interpret.
Recommendation: In situations where customer satisfaction surveys are awkward to use, such as customer service interactions, quality monitoring is a great substitute. Take good care of the checklist and the monitors: a weak foundation will yield worthless results.
5. SLA and related metrics
Description: Many metrics that can be gleaned directly from the case-tracking system are directly related to customer satisfaction: response time, resolution time, backlog levels, even the number of interactions required to resolve a case. Support organizations typically track at least their SLA targets, so response time performance is usually captured. ACD statistics such as wait time are also useful.
Pros
- Any metric that comes directly from the case-tracking system is pretty easy to measure.
- Such metrics can, at least in theory, be computed instantaneously.
Cons
- The enormous con of SLA metrics is that they are only distantly related to customer satisfaction. Sure, customers won’t be too happy if you systematically miss response time targets, but on the other hand you can meet response time targets consistently and still fail to deliver what customers want.
- Operational metrics, especially narrowly-defined ones, can be gamed into uselessness. For instance, if you measure the number of times cases are updated, you will get lots of updates – and not a lot of value from them!
Recommendation: SLA metrics are very useful but they are not true customer satisfaction metrics, with the possible exception of the percentage of cases resolved on first contact, or of cases closed within 24 hours. You can, however, use SLA metrics as an early indicator of performance: surely if response time performance is sinking and resolution times are increasing, customer satisfaction is bound to suffer, right? To make the metrics meaningful, calculate them as a percentage toward a goal: for instance, you met response time goals 90% of the time, or resolution goals 78% of the time. (You are unlikely to promise resolution time goals to customers but you should always set internal targets.)
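As an illustration, here is a minimal Python sketch of the percentage-toward-goal calculation. The goal values and case data are made-up assumptions, not recommended targets.

    # Hypothetical internal targets.
    RESPONSE_GOAL_HOURS = 4     # first response within 4 hours
    RESOLUTION_GOAL_HOURS = 72  # resolution within 3 days

    # Toy data: (first_response_hours, resolution_hours) per closed case.
    cases = [(1.0, 20), (3.5, 90), (6.0, 48), (2.0, 70), (0.5, 100)]

    met_response = sum(1 for resp, _ in cases if resp <= RESPONSE_GOAL_HOURS)
    met_resolution = sum(1 for _, reso in cases if reso <= RESOLUTION_GOAL_HOURS)

    print(f"Response goal met:   {met_response / len(cases):.0%} of cases")   # 80%
    print(f"Resolution goal met: {met_resolution / len(cases):.0%} of cases") # 60%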
Putting this all together, what’s a support executive to do?
- Unless absolutely inappropriate, use a transactional satisfaction survey. There’s no substitute for asking customers directly for their opinion. Since response rate is so important for the results to be valid, do all you can to maximize it. Start with a very short survey (try the one-question approach!). Follow up on poor surveys. Post the results of the survey for all to see.
- If your customers just won’t respond to a survey, use a quality monitoring program. Make sure the checklist goes beyond superficial characteristics, and regularly calibrate the monitors to ensure that they use the checklist in a reliable manner. Perform enough evaluations that each support rep gets a meaningful number of ratings.
- If you are using a transactional survey, use a quality monitoring program as a mentoring and spot-checking device. Under this regime you don’t need to rate so many cases since the volume will come from the transactional survey.
- Use SLA metrics – but never believe they are true customer satisfaction metrics! At a minimum, watch response time and resolution time performance. The size of the backlog (cases being worked divided by cases closed per week) is a great early indicator of trouble. You want to keep that ratio low: no more than two weeks’ worth of work for a complex-support operation, and no more than a couple of days for low-complexity support. (A quick sketch of this calculation follows the list.)
- Include the satisfaction metrics in individual goals and objectives, taking great care to balance them out. Perhaps you can assign a 30% weight to the transactional survey, 30% to volume, and 30% to SLA achievement, with the remaining 10% for individual progress goals (also shown in the sketch below).
- Participate in a relationship survey or even volunteer to lead one, but don’t rely on it to make operational decisions, since the level of detail and timing are not tight enough for that.
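Two of the numbers above lend themselves to quick arithmetic. Here is a minimal Python sketch of the backlog-in-weeks indicator and the weighted scorecard; all figures and weights are purely illustrative, and real values would come from your case-tracking system.

    # Backlog indicator: open cases divided by the weekly closure rate
    # expresses the backlog in weeks of work.
    open_cases = 180
    cases_closed_per_week = 120
    backlog_weeks = open_cases / cases_closed_per_week
    print(f"Backlog: {backlog_weeks:.1f} weeks of work")   # 1.5 weeks

    # Weighted scorecard: 30% survey, 30% volume, 30% SLA achievement,
    # 10% individual progress goals. Components are normalized to 0-1.
    weights = {"survey": 0.30, "volume": 0.30, "sla": 0.30, "progress": 0.10}
    scores = {"survey": 0.88, "volume": 0.75, "sla": 0.92, "progress": 1.00}
    overall = sum(weights[k] * scores[k] for k in weights)
    print(f"Overall score: {overall:.0%}")   # ~86%

Normalizing each component to a 0-1 scale before weighting keeps any single metric from dominating the overall score.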
For more information about quality monitoring programs, see Best Practices for Quality Monitoring
We can help develop your team’s soft skills with the Tech Support Skills workshop.
Social Media and Case Deflection: Can we Make a Case?
Many thanks to Liz Shapiro for suggesting this topic.
Social media are all the rage these days, but it’s not always clear that they really help with case deflection. And even if they do, how can we prove it?
Here are a few thoughts:
1. It’s awfully difficult to measure something that doesn’t happen. I have three kids and none of them is an axe murderer. Does that prove I’m a good mom? Nope (naturally I am a wonderful mother, not to mention a modest one, but the absence of axe-murdering children is not sufficient evidence to demonstrate it). What would it take to prove that they would be axe murderers were it not for my wonderful mothering skills? Not an easy task…
2. Activity levels are necessary but not sufficient. So you have a zillion customers using social media. Does that mean that they won’t open cases? Of course not. But certainly if you are offering social media solutions and no customers are using them, that’s a problem. So measure activity as a hygiene factor: activity is a necessary component of success, but it’s not sufficient.
3. Never replace one for one! You have 2000 answered threads in your community. Does that mean you “deflected” 2000 cases? No! Those 2000 threads may (a) come from customers who are not under a support contract and (b) never have given rise to a case anyway, because the customer would not have bothered to call support in the first place. Think of online banking: would you call your bank as often as you check your balance online? Unlikely. Would you walk over to your physical mailbox as often as you check your electronic mailbox? Nope.
4. Don’t replace N for 1 either! Since clearly a 1:1 deflection ratio is silly some vendors are trying to get to a magic ratio such as “one use of the community in seven deflects a support case.” Says who? Don’t fall for this easy trick.
5. Be careful with online satisfaction surveys. If 60% of customers tell you that they found their answer in self-service, does that mean that 60% of self-service sessions deflect cases? No; see #3 and #4. The customer found an answer but may not have bothered logging a case for it. Now if you also ask customers whether they would have opened a case instead, that’s a little more interesting (but what people say and what they do are different!) Contrast this with approach #8.
6. Do measure before and after. Before your community was in place, customers placed, on average, 1.05 cases a month. Since the opening of the community, they place 1.02 cases a month. Hence the community deflected .03 cases per customer per month. Do you like the math? Don’t fall in love. All kinds of other reasons could have caused the decrease, the most obvious being an improvement in product quality. Still, before-and-after comparisons are helpful – and much more believable than approaches #3, #4, and #5. (See the sketch after this list.)
7. Do contrast users and non-users. Community users open 1.02 cases a month but non-users open 1.05 cases a month. That means the community deflects .03 cases per customer per month. Correct? Not necessarily. It could be that non-users are also the power-users who will always have many more requests no matter what. But contrasting non-users and users is an intriguing approach, especially if you can contrast incident rates for community users before and after they started using the community. You can also contrast heavy and light users of social media.
8. Do capture online deflection. Customers who place online requests are presented with an option to search the community. If the search is successful, the system aborts the case-logging process. Is that proof of deflection? Yes! (At least if customers don’t just call the hotline…)
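To make approaches #6 and #7 concrete, here is a minimal Python sketch of both estimates; every figure is illustrative.

    # Before/after comparison (#6).
    rate_before = 1.05   # cases per customer per month before the community
    rate_after = 1.02    # cases per customer per month after
    customers = 10_000
    deflected = (rate_before - rate_after) * customers
    print(f"Before/after estimate: ~{deflected:.0f} cases deflected per month")

    # Users vs. non-users comparison (#7).
    user_rate, nonuser_rate = 1.02, 1.05
    community_users = 4_000
    deflected_users = (nonuser_rate - user_rate) * community_users
    print(f"User/non-user estimate: ~{deflected_users:.0f} cases deflected per month")

    # Neither estimate controls for confounders (product quality changes,
    # power users avoiding the community, and so on), so treat them as
    # rough bounds rather than proof.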
Bottom line: you will probably never get a completely exact measure of the impact of social media on case deflection, but you should be able to get reasonable estimates by capturing before-and-after figures and cannily comparing various groups of users and non-users.
For more information about self-service metrics, see Best Practices in Self-Service Support
FT Works in the News
Third Tuesday Forum
Are you based in the San Francisco area (or will you be there on Tuesday, March 16th)? That morning, David Kay and I will be hosting The Third Tuesday Forum, a roundtable for support executives to discuss the topics we embrace and wrestle with every day. Our presenter will be Barry Duplantis, Director of Developing Technologies, Hewlett-Packard Software, who will speak about locating support centers in out-of-the-way locations. Please join us in Santa Clara, CA, over a breakfast buffet. There is no membership fee; just sign up ahead of time and make a small contribution ($40) for the food and room fees. Interested? Register now!
If you cannot make it this time but would like to be on the mailing list, sign up. You will be the first to know about new events (we have speakers lined up for April and May!)
Selling Value – Everything you always wanted to know about support marketing
My support marketing book has a name: Selling Value (the subtitle is Designing, Marketing and Selling Support Packages). It is expected back from the printer momentarily, and you can order it here. I expect the book will ship later this month.
More Support Marketing – at the TSIA Conference!
I will be presenting a special pre-conference workshop on designing and selling support packages on May 6th in Santa Clara, CA. Click here for more details.
Articles of Note
Inside Technology Services published an article I wrote entitled Hand in Hand? Working in Harmony with Engineering.
Curious about something? Send me your suggestions for topics and your name will appear in future newsletters. I’m thinking of doing a compilation of “tips and tricks about support metrics” in the coming months so if you have favorites, horror stories, or questions about metrics, please don’t be shy.
Regards,
Françoise Tourniaire
FT Works
www.ftworks.com
650 559 9826
About FT Works
FT Works helps technology companies create and improve their support operations. Areas of expertise include designing support offerings, creating hiring plans to recruit the right people quickly, training support staff to deliver effective support, defining and implementing support processes, selecting support tools, designing effective metrics, and support center audits. See more details at www.ftworks.com.
Subscription Information
To request a subscription, please drop us a note. The mailing list is confidential and is never shared with anyone, for any reason. To unsubscribe, click here.
You may reproduce items in this newsletter as long as you clearly display this entire copyright notice. Contact us if you have questions about republications.