The FT Word – May 2009

The FT Word

The FT Word is a free monthly newsletter with support management tips. To subscribe, send us an email with your email address. The subscription list is absolutely confidential; we never sell, rent, or give out information about our subscribers. Here’s a sample.

Welcome

Welcome to the May 2009 edition of the FT Word. Please forward it to your colleagues. (They can get their own subscription here.)

This month’s topics:

  • Metrics common sense
  • Supporting custom applications
  • The SSPA conference next week – would love to see you there if you are attending

Metrics Common Sense

I just read a wonderful, simple book about statistics that inspired me to think, once again, about support metrics. The book is The Numbers Game by Michael Blastland and Andrew Dilnot, two British authors – a journalist and an economist – who set out to de-mystify the statistics bandied about mostly by politicians. Much of their clear and often funny advice can be used in many other contexts, including the one we all care about: technical support.

1. Know what you are counting

Are “cases per day” incoming (new) cases? Closed cases? “Resolved” cases (we gave the customer the solution, but s/he has not confirmed it’s working yet)? Cases that were “touched” or worked? Make sure each concept is carefully defined so everyone measures the same thing. The people who write the reports will thank you, and the entire organization will trust the metrics much more.

And I would strongly advise using the simplest possible definitions. If I may be permitted a rant so early in the newsletter: stay away from “resolved” cases and “worked” cases. No one really knows what they are, and they are oh-so-easy to fudge.

2. Use benchmarks or percentages to anchor the numbers

Is 1000 cases per day a lot? Gee, I don’t know. How many do you normally get? How many customers do you have? If you normally get 10,000 cases per day, 1000 would be a very slow day. On the other hand if you normally get 100, something went very wrong… You (should) know what’s normal for you but other people looking at your support metrics do not: offer a comparison to the average or a trend over time.

Is 7 cases per day per person a lot? It depends. If you run a high-complexity support organization, that would be a lot (really: I have customers who resolve about 3 cases per day per head and are thrilled with their high productivity!). If you run a low-complexity group, that would be very low.

Let’s go back to the first example again. Is 1000 cases per day a lot? It would be sickeningly high if you have 1000 customers (everyone called??) but normal for some products if you have 10,000 customers who log about 2 cases per customer per month or 120,000 customers who log 2 cases a year…
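
To make the comparison concrete, here is a back-of-the-envelope sketch in Python (the customer counts and contact rates are simply the illustrative numbers above, and it assumes roughly 20 business days a month):

    # Rough check: is 1,000 cases per day a lot?
    # Assumes ~20 business days per month (~240 per year); adjust for your calendar.

    def cases_per_business_day(customers, cases_per_customer, period_days):
        """Convert a per-customer contact rate into an expected daily case volume."""
        return customers * cases_per_customer / period_days

    # 10,000 customers logging ~2 cases per customer per month
    print(cases_per_business_day(10_000, 2, 20))     # ~1,000 cases per business day

    # 120,000 customers logging ~2 cases per customer per year
    print(cases_per_business_day(120_000, 2, 240))   # ~1,000 cases per business day

    # With only 1,000 customers, 1,000 cases in one day means everyone called
    print(cases_per_business_day(1_000, 2, 20))      # ~100 cases per business day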

3. Expect clusters and coincidences – without an underlying cause

You got 40 installation cases yesterday when you normally get 2. Is it time to scream bloody murder about the installation process? Maybe, but look at the details first. Especially on a small base you can see spikes come and go for no apparent reason. Don’t get too excited without digging into the specifics.

4. What goes up often comes down

Customer satisfaction ratings were at an all-time low so you implemented a new coaching program, and sure enough numbers came back up. This proves that the coaching program is a success, right? Maybe, but perhaps it was just a normal variation. Now if you show that coached individuals got higher scores while uncoached ones stagnated, you would have a better “proof.” Ditto if you can see that the improvements are stable and long-lasting.

5. Averages are simple, but deceiving

As we were taught in that soporific statistics class years ago, long-tail distributions mean that averages can be distorted. So if you have 99 support reps closing 7 cases a day and one rep, perhaps allowed to cherry-pick for whatever reason, closing 80 (I have a customer that matches this exact profile), all support reps but one will have below-average productivity: the average will be 7.73, and 99 reps will be below it. Don’t just look at averages: check the distribution of the data and use other measures, such as the median.
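
A quick way to see the distortion is to put the average and the median side by side. Here is a minimal sketch using the hypothetical 99-reps-at-7-plus-one-at-80 profile from above:

    from statistics import mean, median

    # 99 reps closing 7 cases a day, plus one cherry-picking rep closing 80
    daily_closes = [7] * 99 + [80]

    print(mean(daily_closes))    # 7.73 - pulled up by a single outlier
    print(median(daily_closes))  # 7.0  - what a typical rep actually closes

    # 99 of the 100 reps fall below the average
    print(sum(1 for c in daily_closes if c < mean(daily_closes)))  # 99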

6. Setting targets distorts results

If cases have to be responded to within an hour, many will get a response between 50 and 60 minutes. If the target is 2 hours, the peak will be right before that 2-hour mark. Perhaps that’s not too much of a problem for you, but think of the customers who would much prefer a response earlier in the timeframe.

And then there’s the cheating: if you have a target of X cases per rep per day, some reps may be tempted to cherry-pick the easy ones, and perhaps to invent a few extras (it’s so easy to just create a new case for that ten-second, “I just needed to check on something else” query from a customer). If you set targets, guard against manipulation of the results.

And watch out for reshaping the definition of the target. For instance, it seems that many support organizations invent a new “resolved” status so they can meet their (self-imposed) resolution targets despite customers’ opposition to closing cases outright. That could put the entire organization on a slippery slope toward cheating.

7. Sampling may be hazardous to your data integrity

I love sampling. For instance, I often advise my customers to sample the cases worked by just two or three reps for a day to get a feel for the case distribution, or for the time it takes to resolve cases (assuming there is no measurement system in place). The results are usually very good despite the small size of the sample.

Now sampling only works with properly randomized samples. So if you sample only Monday’s cases and Monday is “naïve customer day” (say), your sample won’t be representative. Or if you only sample cases from the Australia office, which just happens to take all the emergency after-hours cases from US customers. Or if you sample cases from the backlog (being worked) queue, which by definition will be a little more complex than the average case.
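
If you pull samples out of a case-tracking system, a truly random draw across all days, offices, and queues is the safest bet. Here is a minimal sketch using made-up case records (the fields and values are purely illustrative):

    import random

    # Made-up case records; in practice these would come from your case tracker.
    cases = [
        {"id": i,
         "day": random.choice(["Mon", "Tue", "Wed", "Thu", "Fri"]),
         "office": random.choice(["US", "EMEA", "Australia"])}
        for i in range(5_000)
    ]

    # Biased sample: Monday-only cases (the "naive customer day" trap)
    monday_only = [c for c in cases if c["day"] == "Mon"]

    # Safer: a random draw across all days and offices
    random_sample = random.sample(cases, 50)

    print(len(monday_only), "Monday-only cases;", len(random_sample), "randomly sampled cases")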

8. Don’t draw straight lines

Case volume increased 20% this month, so it will continue to increase 20% every month. No (see also #3). Case volume increased 20% this month and customer satisfaction also increased 20%, so clearly increases in case volume cause increases in customer satisfaction. Silly, isn’t it? But we might be tempted to draw the same conclusion in reverse if customer satisfaction was down instead of up, wouldn’t we?

9. Compare the comparable

Jane resolved 20 cases yesterday and Joe resolved 2. Let’s hire more reps like Jane. OK, but Joe is the only tier 3 rep we have. Isn’t he valuable to the entire organization? 5000 customers used the self-service support web site today, so that means we avoided (deflected) 5000 cases, right? Nope. For starters, we don’t know what they did on the web site (did they all log new cases? were they all checking on existing cases?). Can you track whether these same 5000 visitors log any cases? That would be a much better approach than comparing cases and visits.
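
If your web site and case tracker share a customer identifier (they may not), a better check is to see how many of those visitors went on to log a case anyway. Here is a minimal sketch with made-up identifiers:

    # Made-up identifiers; in practice they would come from web analytics
    # and the case tracker, joined on a shared customer ID.
    visitors_today = {"cust-001", "cust-002", "cust-003", "cust-004"}
    logged_a_case_later = {"cust-002", "cust-004", "cust-099"}

    visited_and_still_logged = visitors_today & logged_a_case_later
    visited_and_did_not_log = visitors_today - logged_a_case_later

    # "Deflection" is at most the visitors who did not also log a case,
    # and even that is an upper bound: some never intended to log one.
    print(len(visited_and_did_not_log), "visitors did not log a case")
    print(len(visited_and_still_logged), "visitors still logged a case")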

Here’s another angle. Reps used to resolve 5 cases per day; now they resolve 4. Did they get lazy? Perhaps incoming cases are truly more complex, thanks to a better self-service offering – or a new product, which is either more complex or less familiar to the reps. Be careful when you make comparisons.

There is more on this topic in the FT Works booklet Best Practices for Support Metrics.

Supporting Custom Applications

Thank you to Chris Farnath for suggesting this topic.

Many support organizations decline to support custom applications. They only support the “vanilla” version, they say, and customers who run custom versions should be prepared to create small test cases against the vanilla version before they are given support. Sounds good in theory, but in practice it leads to endless, frustrating “negotiations” with customers, long-running, resource-intensive cases, and an increased risk of escalations. So what are the alternatives and how do they work?

1. VARs or third-party implementers may be interested in supporting custom applications

If you feel that you simply do not want to provide custom support, you may be surprised to find other parties interested in doing it. In particular, if your customers routinely hire third-party implementers, they may be quite happy to get support through them, too. The benefit of this setup is that the implementer will be very familiar with the customization (or should be!) and the line between fixes and enhancements can easily be blurred. You get requests filtered through the third party, and those requests can be appropriately reframed as questions against the vanilla application.

The danger is that the implementer may become a barrier between you and the customer. For instance, if there is a problem with the application, (1) you may not know about it at all, and (2) the implementer may present your unwillingness to fix a minor bug as the main reason for the problem, when in fact the application was not architected properly in the first place, amplifying the impact of that minor bug. You lose the close relationship with your customer.

2. Ask your Professional Services group to provide support for custom applications – not

So what if your own Professional Services team created the custom application? Perhaps you could follow the same path as #1 above and ask the Professional Services team to provide support since it will know the customizations inside-out. You could, but I would say it’s (in general) a bad idea: Professional Services teams are organized to deliver professional services and are usually ill-equipped to deliver support, from the processes they follow to the type of people they hire to the systems they use. The Technical Support team is organized to deliver support and should take over custom support, assuming you decide to do it.

If your company will provide support for custom applications, it’s best if the support comes from the Technical Support organization.

3. Think twice about providing end-user support

Some customers may be looking to you to provide support to their own end-users (on their custom applications.) This may or may not be a business you want to get into depending on your setup. If you provide support on highly complex products you are probably staffed with high-end (i.e. highly skilled and highly expensive) staff members who expect to interact with IT specialists. If they are suddenly asked to support end users they may lose their patience – and you would lose your shirt. Perhaps you will want to create a separate organization to handle end-user support, but it’s certainly a different game (and likely a lower-margin game.)

4. Mandate a transfer and certification of each custom application

Any custom application you take over needs to be inspected for soundness before you can support it. If nothing else, you will want to have the customization code handy so you can delve into it as needed. You should also have some type of escape clause in case the quality of the application is simply not up to snuff. This applies to all custom applications, regardless of who created them in the first place: the customer, a third party, or your own Professional Services team.

5. Ask for a transition period from Professional Services

As much as you want to have a proper transfer of knowledge it’s best to keep Professional Services “on call” for the first month or so. Think of it as a warranty period.

6. Charge appropriately

Supporting a custom application is more complex than the vanilla product so it’s perfectly appropriate to create a new level of support with additional charges. You can make it a custom quote if the variety of potential customizations is wide.

Would love to hear about your adventures with custom support, as I think it will become more widespread in the future.

FT Works in the News

• ICMI published an article I wrote about preventing burnout in the Call Center Insider and their Knowledge Center. It’s called Preventing Agent Burnout: A Manager’s Handbook. You can read it at http://www.icmi.com/knowledgecenter/details.aspx?id=952.

• Lithium posted the presentation and slides for the webinar I participated in last month on the subject of ROI for support communities. You can find it at http://w.on24.com/r.htm?e=139949&s=1&k=D1E932305EB85B8C5F4B1398727B3E4B&partnerref=090409house

• And talking about ROI for support communities I will be presenting a breakout session on the topic at the SSPA conference in Santa Clara (Tuesday 11:30) with Tarik Mahmoud of Cisco Consumer Business Group (CBG – Linksys). This is a great opportunity to both learn about creating a community ROI and hear Tarik describe the wonderful use they are making of communities.

• Finally check out the new posts at Marketing Wise, the FT Works support marketing blog (or subscribe to the blog.)

Curious about something? Send me your suggestions for topics and your name will appear in future newsletters.

regards,
Francoise Tourniaire
FT Works
www.ftworks.com
650 559 9826

About FT Works

FT Works helps technology companies create and improve their support operations. Areas of expertise include designing support offerings, creating hiring plans to recruit the right people quickly, training support staff to deliver effective support, defining and implementing support processes, selecting support tools, designing effective metrics, and support center audits. See more details at www.ftworks.com.

Subscription Information

To request a subscription, please drop us a note. The mailing list is confidential and is never shared with anyone, for any reason. To unsubscribe, click here.

You may reproduce items in this newsletter as long as you clearly display this entire copyright notice. Contact us if you have questions about republications.
