Quality Programs

Many thanks to Amanjhit Sandhu for suggesting this topic.

One day, we will have AI tools that can weigh case quality, but it may take some time! AI tools are pretty good at capturing sentiment, for instance, but evaluating case quality is hard: how difficult was the problem to begin with? Was the correct root cause identified? Could the issue have been resolved faster? Were the right questions asked? AI cannot do that today, at least not for complex cases, and it’s hard for humans to do as well!

So for now, we need to rely on process and people, with two main building blocks: a thoughtful checklist, and skilled evaluators.

The Checklist

The quality checklist must capture all aspects of the case, and hit a good middle ground between tactical and strategic items. Here’s a suggestion:

  1. Steady progression following your troubleshooting methodology (you have one, right? If not: does the support engineer first confirm the issue, then proceed to find solutions, test them, and provide a full response to the customer?). AKA, did the support engineer do the right things, in order?
  2. Proper collaboration requests or handoffs, to the right people, at the right time (no crying wolf, but no sitting on cases either). AKA, did the support engineer ask for help appropriately?
  3. Timely actions and updates to the customer throughout the life of the case. AKA, did the support engineer work as quickly as possible?
  4. Quality of customer correspondence: Does each exchange tie to the last one? Is there always a next step? Is the right medium used (no hiding behind email)? Are the grammar and language adequate? AKA, did the support engineer communicate well with the customer, regardless of how long the case took to resolve?
  5. Correct knowledge management: add or modify article(s), link relevant articles to the case. AKA, did the support engineer enrich the knowledge base?
  6. Complete and clear case notes that allow others to understand what happened in the case. AKA, did the support engineer allow handoffs and escalations to occur smoothly?
  7. Correct settings for the metadata so cases can be used in meta-analyses. (Automate the metadata whenever possible, of course, but support engineers always need to ensure it’s correct). AKA, did the support engineer make it possible to do root cause analysis?

Most checklists are too long. See how the one above includes only 7 items? Keep yours short.

Many checklists focus on easily ratable but not-that-important items. Is it critical that all emails start with the customer’s name? Probably not. Aim for more impactful items and provide guidance on ratings, as described below.
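To make the checklist easier to track, here is a minimal sketch (in Python, with illustrative names and structures that are not prescriptive) of how the seven items and a per-case scorecard might be represented, using the three-level rubric suggested in the next section:

```python
from dataclasses import dataclass, field
from enum import Enum


class Rating(Enum):
    """Three-level rubric, as suggested under The Evaluators below."""
    NOT_DONE = 0
    DONE = 1
    DONE_WELL = 2


# The seven checklist items above, as short identifiers (names are illustrative).
CHECKLIST = [
    "methodology",     # 1. steady progression through the troubleshooting methodology
    "collaboration",   # 2. proper collaboration requests or handoffs
    "timeliness",      # 3. timely actions and customer updates
    "correspondence",  # 4. quality of customer correspondence
    "knowledge",       # 5. correct knowledge management
    "case_notes",      # 6. complete and clear case notes
    "metadata",        # 7. correct metadata settings
]


@dataclass
class CaseEvaluation:
    """One evaluator's ratings for one case."""
    case_id: str
    evaluator: str
    ratings: dict = field(default_factory=dict)  # item name -> Rating

    def average_score(self) -> float:
        """Average rating across rated items, from 0.0 (not done) to 2.0 (done well)."""
        return sum(r.value for r in self.ratings.values()) / len(self.ratings)
```

Whether you track scorecards in a spreadsheet, your CRM, or a small tool like this, the point is the same: a short list of impactful items, each rated on a simple scale.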

The Evaluators

Having a checklist is half of the battle. You also need to use it properly, which means that ratings need to be consistent. This is particularly important if you use the ratings for performance management in addition to coaching.

  • Create a rubric for the checklist with specific examples of what constitutes mastery, since that will depend on your specific circumstances. For instance, a high-severity case will require a faster response than others (item 3, above), but all cases need solid metadata (item 7).
  • Keep the rubric very simple, with maybe 3 levels: not done/done/done well. More levels require lots of work to characterize and don’t yield much better results.
  • Provide sample case evaluations in addition to the rubric so raters have real examples to work from.
  • Train the evaluators on the rubric.
  • Conduct regular validation exercises where multiple evaluators rate the same cases. If results differ, refine the rubric and/or retrain the evaluators.
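To make that calibration exercise concrete, here is a small sketch (Python, with made-up ratings) that computes simple percent agreement between two evaluators on the items they both rated; persistently low agreement on an item is the signal to refine the rubric or retrain:

```python
def percent_agreement(ratings_a: dict, ratings_b: dict) -> float:
    """Share of checklist items where two evaluators gave the same rating."""
    shared = ratings_a.keys() & ratings_b.keys()
    matches = sum(1 for item in shared if ratings_a[item] == ratings_b[item])
    return matches / len(shared)


# Hypothetical ratings from two evaluators reviewing the same case.
evaluator_1 = {"methodology": "done_well", "timeliness": "done", "metadata": "not_done"}
evaluator_2 = {"methodology": "done",      "timeliness": "done", "metadata": "not_done"}

print(f"Agreement: {percent_agreement(evaluator_1, evaluator_2):.0%}")  # Agreement: 67%
```

Percent agreement is deliberately crude; the goal is not statistical rigor but a quick, repeatable way to spot checklist items where evaluators drift apart.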

 

How do you handle case monitoring? Please share in a comment.
