Many thanks to DeWayne McNally for suggesting this topic.
When we ask support engineers to create self-help solutions or to be active in online communities, the feedback is often that something has to give: if they are busy authoring articles or jumping into discussion forums, then they cannot work as many cases as they used to. And that’s true, but the reality is not always reflected in their individual performance metrics, hence the pushback.
What’s the goal of support? To provide a great customer experience, at an acceptable cost (and to provide actionable customer intelligence to the rest of the company, but we’ll set that goal aside for a moment). In traditional organizations that are focused on assisted support, this translates into individual goals that are a mix of customer satisfaction ratings and case volume. If you are moving your organization to a more holistic support perspective (and I hope you are), case volume + customer sat simply does not cut it.
Let’s start at the organizational level. If you are looking at a great customer experience at an acceptable cost, you will want to measure customer satisfaction (measure overall satisfaction; don’t rely on case surveys since they neglect customers who are not opening cases) and cost (or profit, in a fee-based environment). You can and should dive into more details such as renewal rates but the basic equation is profits + sat = success.
This approach does not work so well for the individual support engineer, who has no control over budgets and can influence customer satisfaction only in a limited way. So try a combination of the following:
- Case productivity. Include all channels you use, from phone to chat to email. If your support model allows for peer assistance, include cases where the support engineer provided assistance as well as the ones solved independently.
- Customer satisfaction. As above, consider the ratings for cases solved independently and, ideally, cases to which the support engineer contributed.
- Case quality. Customer satisfaction is critically important, but it won’t tell you what’s happening behind the scenes. Are escalation resources used appropriately? Is knowledge created and maintained appropriately? A random audit of a couple of cases will tell you.
- Knowledge leverage. Assuming the support engineer is creating knowledge (as noted above), is that knowledge being reused successfully, both in self-service and in assisted support? I prefer to use this metric for awards rather than performance objectives, since leverage is greatly influenced by the topic: a great article about an obscure product may have limited leverage through no fault of its author.
- Participation in communities. Is the support engineer an active participant in discussion forums and other online support venues? If you have an established MVP point system, you can use it here, assuming it combines quantity with some quality measurement, such as approved answers and the like.
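To make the blend concrete, here is a minimal sketch of how the metrics above could be rolled into one composite score. Everything in it is an illustrative assumption, not a prescribed formula: the weights, the 0–100 scales, and the metric names are all made up, and knowledge leverage is deliberately left out, per the note above that it works better for awards than for objectives. Tune the weights to what your organization actually values.

```python
# Hypothetical composite scorecard for a support engineer.
# All weights and field names are illustrative assumptions, not a standard.

def scorecard(metrics, weights=None):
    """Blend per-metric scores (each on a 0-100 scale) into one 0-100 score."""
    if weights is None:
        weights = {
            "case_productivity": 0.35,  # cases solved independently + peer assists
            "customer_sat": 0.30,       # survey ratings on own and assisted cases
            "case_quality": 0.20,       # score from random case audits
            "community": 0.15,          # e.g. MVP points, normalized to 0-100
        }
    # Weights must sum to 1 so the result stays on the same 0-100 scale.
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * metrics[k] for k in weights)

engineer = {
    "case_productivity": 80,
    "customer_sat": 90,
    "case_quality": 75,
    "community": 60,
}
print(round(scorecard(engineer), 1))  # → 79.0
```

The point of writing it down this way is that the trade-off becomes explicit: if authoring articles or answering forum questions lowers case productivity but raises the community score, the composite can still go up, which is exactly the signal that answers the pushback described at the top of this post.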
What have you tried and what has worked? Please share.