The FT Word
The FT Word is a free monthly newsletter with support management tips. To subscribe, send us email with your email address. The subscription list is absolutely confidential; we never sell, rent, or give out information about our subscribers. Here’s a sample.
Welcome to the April 2008 edition of the FT Word. Please forward it to colleagues who would enjoy it.
Only a month until the first Marketing Wise workshop, where you can learn everything you need to know about support marketing. Join me on May 7th in Santa Clara, CA. More info.
This month’s topics:
- What do I need to measure? Essential metrics for support organizations
- Implementing new processes – what works for lasting success
What do I need to measure?
Thanks to Kira Harz and Ram Ramadas for suggesting this topic (it must be on a lot of people’s minds!).
Many support organizations run multiple reports but still feel that they are not capturing critical information to make informed management decisions.
1. Less is more
It’s extremely easy to gather dozens of numbers on volume, abandonment rate, average backlog size, or any other aspect of what is tracked in the various tools we use. However, experience shows that the human brain can’t keep up with more than a handful of metrics, so prune mercilessly. Focus on just a few numbers to gauge performance; it’s fine to run additional metrics as needed to explore a particular area, just don’t clutter everyone’s mind by publishing a slew of numbers daily.
2. Stick to the big 4 (or 5)
Most every support organization needs to look at just four areas: volume, SLA achievement, customer satisfaction, and knowledge base health. P&L organizations also need to consider financial performance (revenue and profits); all organizations would be well-advised to keep their eyes on cost.
In a little more detail:
- Volume includes incoming cases, closed cases, backlog, and the numbers escalated throughout the organization. So if you have two levels of support and then escalate to Engineering, you want to know how many cases went to level 2 and how many went to Engineering. Over time, support managers develop a keen sense of what the “right” numbers are for their teams. They instantly know that (say) a backlog of 180 is ok and 220 is too high.
- SLA achievement typically covers both response time and resolution time. Note that you should measure these even if you are not making specific commitments to customers around them (in other words, measure internal performance regardless). Some organizations will want to measure SLAs in more detail; for instance, you may measure both initial response time and speed of dispatch to the customer site.
- Customer satisfaction is often (and ideally) measured via a transaction-based customer satisfaction survey. If that’s not feasible you can substitute the outcome of an internally-run monitoring program.
- Knowledge base health. I could write a book about this one (wait! I did! Collective Wisdom) but here I will just say to use whatever you can from your tool: volume of new submissions, quality of submissions, customer traffic. Just make sure to balance quantity and quality to avoid the usual disasters of focusing solely on quantity.
3. Measure against targets
Metrics are most useful when they can be put in context (even if, as mentioned above, individual support managers quickly learn what the “right” numbers are for their teams).
- Express volumes as percentages whenever feasible. For instance, the raw number of cases escalated from level 1 to level 2 is not that important; what matters is the percentage of escalations, which allows meaningful comparisons from month to month. Similarly, the number of cases in the backlog is not that important; what matters is the ratio of open cases to incoming volume: express the backlog in days (or weeks…)
- Averages are easy to calculate, but it’s much more meaningful to measure achievement against targets. Contrast responding to 95% of cases within the one-hour response target with a two-hour average response time. Which is better for customers? If they are expecting a one-hour response, probably the 95% achievement…
Measuring against targets helps compare teams against each other, makes benchmarking with other groups easier, and makes it easy to gradually increase performance over time.
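The arithmetic behind these ratios is straightforward. Here is a minimal sketch in Python (the function names and all figures are illustrative, not pulled from any particular tool) of expressing escalations as a percentage, backlog in days of incoming volume, and response time as achievement against a target:

```python
# Sketch of the ratio-based metrics discussed above; all numbers are made up.

def escalation_rate(escalated: int, incoming: int) -> float:
    """Percentage of incoming cases escalated to the next level."""
    return 100.0 * escalated / incoming

def backlog_in_days(open_cases: int, incoming_per_day: float) -> float:
    """Backlog expressed as days of incoming volume, not as a raw count."""
    return open_cases / incoming_per_day

def sla_achievement(response_minutes: list, target_minutes: float) -> float:
    """Percentage of cases answered within target -- more telling than an average."""
    within = sum(1 for m in response_minutes if m <= target_minutes)
    return 100.0 * within / len(response_minutes)

# Hypothetical month: 500 incoming cases, 35 escalated, 200 open, 25 new per day.
times = [12, 45, 58, 70, 30, 55, 61, 20, 40, 50]  # response times in minutes
print(f"Escalation rate: {escalation_rate(35, 500):.1f}%")     # 7.0%
print(f"Backlog: {backlog_in_days(200, 25):.1f} days")         # 8.0 days
print(f"Within 60-min target: {sla_achievement(times, 60):.0f}%")  # 80%
```

Expressed this way, the same numbers can be compared month to month and team to team, even as headcount and volume change.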
4. Slice and dice from one source
It’s wonderful to know that the support organization as a whole is meeting its response goals but what about Bob’s team? And what about Jane the support analyst’s performance? Ideally every individual in the support organization should be able to monitor his or her individual performance. Don’t declare victory until you bring the metrics all the way from the top to the individual.
A benefit of running team and individual metrics is that any errors in the reports can quickly be spotted at that level (they would be intractable at the top). Do expect a certain amount of attempted fudging: it’s important to have one owner and one source for all metrics. There’s nothing more disconcerting than adding up all the team backlog numbers and not getting the number on the overall report, which can easily happen if you allow each team to run its own set of numbers.
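The single-source principle can even be checked mechanically. A hypothetical sanity check (team names and numbers invented for illustration) that flags when team-level backlogs no longer add up to the overall report might look like:

```python
# Hypothetical reconciliation check: per-team backlogs must add up to the
# number on the overall report, since all metrics come from one source.

team_backlogs = {"Bob": 80, "Jane": 65, "EMEA": 55}  # made-up team-level numbers
overall_report_backlog = 200                          # number on the top-level report

total = sum(team_backlogs.values())
if total != overall_report_backlog:
    print(f"Mismatch: teams sum to {total}, report says {overall_report_backlog}")
else:
    print("Team numbers reconcile with the overall report")
```

Running a check like this whenever reports are published catches drift between team-run numbers and the official report before anyone makes decisions on bad data.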
5. Make it instantaneous
I have a customer who’s getting ready to display basic metrics (in the 4 categories above) to the entire support center via TV monitors (for group metrics) and on everyone’s computer for both group and individual metrics. I believe that it will result in much better performance as instant feedback becomes available. We’ve come a long way from those display boards that simply showed how many customers were in the queue, huh?
Most organizations find that it’s difficult to create accurate metrics, and that pulling information from disparate systems is particularly time-consuming. The IT team never seems to have time to do it, either. But if you have a very clear idea of what you need (see #2) it’s easy and not that expensive to hire a contractor to create the handful of reports needed.
Implementing new processes – What works for lasting success
If you think a great process can make all the difference in support, you’re right. Kinda.
What makes all the difference is a process that’s actually used regularly and appropriately. And many organizations with great ideas fall down on implementation. Here’s how to translate all your good ideas into lasting improvements.
1. Get the team involved in the design
Here’s one way to torpedo even the most clever process: design it in great secret; hire an outside consultant who is not allowed to talk to any of the people who will actually use the process; and swear the couple of individuals involved to complete secrecy (ideally: avoid working with any of the delivery managers on the project). You think I’m kidding? Unfortunately, I’ve been asked to be the consultant in a few of these disaster-bound exercises…
Always involve the team in designing new processes. It gets you two benefits: one, you build buy-in as you build the processes; two, you benefit from the hands-on knowledge of the team. I personally enjoy working with at least a few veterans attached to the status quo: after all, if they can be won over to the new order, it bodes well for the rest of the organization! While you may not want to specifically seek out potential resisters, work with a representative cross-section of the team, including hands-on support analysts. Even the most plugged-in managers seem to be ignorant of some important operational details that are obvious to hands-on staff.
2. Anticipate implementation challenges
The last thing you want to do when designing a new process is to get stuck in the old ways of doing things, or to get scared off by the resistance you will encounter in rolling out changes. But don’t completely forget the past when thinking about the future. For instance, if you are currently using a rigid tiered system and you’re thinking of putting everyone on the queue, you can bet there will be resistance from the current tier 2 staffers who enjoy being buffered from direct customer contact. As part of the redesign, spell out the benefits of the change for them and for the customers, and incorporate ways to soften the blow, perhaps by devising an “on queue” schedule that preserves uninterrupted work time for harder cases.
3. Be enthusiastic – but don’t go overboard
Clearly, if you’re rolling out a new process you should be enthusiastic about it: it will work well, and it will make things better for customers, staff members, or both. Create some positive PR. For instance, if you’re introducing an on-call system for emergencies, you may want to highlight the expectation that it will reduce the time required to respond to emergencies from 30 minutes to 15 minutes.
But don’t get carried away! Support staff has an uncanny ability to detect overblown promises, and may reject the entire initiative if oversold. Stay realistic. Your new quality monitoring program won’t do much to cure response time issues (or world hunger) and that’s ok. Just stick with the realistic impact on customer satisfaction and everyone will believe you.
4. Train
The chosen few who helped create the new processes will understand them inside out, and may occasionally fail to grasp that others need training, and time to internalize it, before being ready to use the new processes.
Process training is a critical success factor. It should be conducted from a task-based perspective: you want to hand off cases to Engineering? Here is the checklist you need to fill out, and here are 3 or 4 examples to work through as practice. You want to transfer a case to another queue? Here are the steps to follow to ensure we meet response time; let’s do a role-play to practice them. Schedule the training as close as possible to the rollout day to minimize forgetting.
5. Train again
Experience shows that one, isolated training experience is rarely sufficient to ensure consistent application. Hand out job aids so newly-minted trainees can help themselves for the first few days. This is particularly important if the new process is not used very often. Plan for a cadre of mentors or helpers during the initial phases of the rollout. Schedule refreshers, which don’t necessarily have to be more than a quick reminder during staff meetings, depending on the scope of the change.
6. Attach metrics
I would suggest that a process without a metric is not a good process, or at least is unlikely to succeed. For instance, if you’re changing the way you manage cases in the backlog, you should be measuring resolution time, backlog per staff member, or some other characteristic related to the backlog. Make sure the metric is available as you roll out the process, and make it available to all for self-monitoring.
7. Audit compliance
Yikes! Audit! Call it an assessment if you prefer, but the principle remains: a few days or weeks after the rollout, check with the staff on how they are using the new processes. Make the interviews deliberately relaxed and open; otherwise you may get pat answers that deviate significantly from actual practice. It’s very helpful to gather metrics (see #6, above) ahead of time and to ask the interviewees how the new processes are helping or hindering their performance on those metrics.
Use the assessment to compare individual performances: are the people using the new processes doing better than others on the metrics? (If not, perhaps the processes are not that successful…) Are they doing better than when they were using the old processes? (If not, there’s definitely something wrong about the processes, or at least about the way they are being used.)
8. Accept and welcome tinkering
During the compliance audit and afterwards, welcome the tinkerers. Many times it’s the small tweaks to a process that make it work really well. Watch out that you don’t lose the original intent through the tweaks, but remain open to the idea that the first draft can be improved upon.
FT Works in the News
Are you planning to attend the SSPA conference on 5/5-6 in Santa Clara? I hope to see you there! I will be speaking with Don Frye of The MathWorks about KCS implementation on Tuesday at 2pm.
Also, immediately following the conference please join me for a one-day workshop, Marketing Wise, that covers everything you always wanted to know about support marketing. Marketing Wise is for you if
- Your customers are unhappy with your support offerings. They complain about slow SLAs, high prices, and unrealistic end-of-life policies. It seems that customers simply don’t see the value of support.
- Your sales team is struggling to sell support even as product sales are going well. It seems that the main “strategies” for selling support are to give it away or to agree to devastatingly tough SLAs.
- You find that you are giving away a lot of free support, whether it’s multiple “free” calls about installation problems that turn into lengthy handholding sessions on how to use the product, or emergency onsite visits for customers who should have engaged an implementation team.
- It’s getting tougher to get customers to renew support contracts. Treating renewals as a cash cow just isn’t working any more.
We’ll talk about best practices for designing, marketing, selling, and renewing support, and offer live mini-makeovers for your support portfolio. This is suitable for support managers and executives, not just support marketing specialists.
Curious about something? Send me your suggestions for topics and your name will appear in future newsletters. I’m thinking of doing a compilation of “tips and tricks about support metrics” in the coming months so if you have favorites, horror stories, or questions about metrics, please don’t be shy.
650 559 9826
About FT Works
FT Works helps technology companies create and improve their support operations. Areas of expertise include designing support offerings, creating hiring plans to recruit the right people quickly, training support staff to deliver effective support, defining and implementing support processes, selecting support tools, designing effective metrics, and support center audits. See more details at www.ftworks.com.