Many thanks to Sam Levy and Neil Lillemark for suggesting this topic.
When talking about case productivity, or resolution time, there’s always a moment when someone points out that all cases are not created equal. Some cases are easy and will be closed quickly, with little effort. Others are complex, will require hours of troubleshooting, and may take weeks to resolve. So it seems that we should try to capture case complexity, right?
I say: not so fast…
- Case complexity is subjective. A case may run long (elapsed time), demand hours of work (effort time), or require many back-and-forth communications with the customer, without being particularly complex. The length and communication intensity could be caused by the inexperience of the support engineer, a lack of technical knowledge, or an elusive customer, rather than the sheer complexity of the case. In some support organizations, some categories of cases will naturally fall into “simple” or “complex” categories, for instance a password-reset request would be simple while a performance-tuning question may be complex. By all means rely on such straightforward distinctions if they apply — but they rarely go very far.
- Case complexity can best be gauged at case closure. It’s often very difficult to tell whether a new case will be complex or not. This means that the level of complexity of open cases, the cases in the backlog, is usually unknown, even though it would be handy to know whether they are lingering for good reasons (they are complex) or bad ones (the support engineer is inept). So measuring case complexity is not a big help in managing the support backlog.
- Measuring case complexity does not help customers. The reason to measure complexity is driven by internal concerns to meet productivity or resolution targets, neither of which is important to customers. (Customers do care about resolution time, of course, but they don’t care that a particular case is taking a long time because it is complex.) Expending effort on activities that do not benefit customers is questionable.
- When it comes to metrics, using average volumes is adequate. The main reason why support managers want to measure complexity is to be able to judge whether Joe’s closing 100 cases of an average complexity level of 2.2 is equivalent to Anna’s closing 50 cases of an average complexity level of 2.9. But if Anna and Joe are working from the same pool of cases, over time the average complexity of the cases they handle should be equivalent. So yes, today Joe closed 5 easy cases and Anna closed 1 hard case (only), but over a month or a quarter they should be pulling about the same load, complexity and numbers-wise. The reasoning is the same for resolution time: if Anna resolves 80% of her cases in a week and Joe is working from the same pool, then Joe should also be resolving 80% of his cases in a week (again, over time, not necessarily this week).
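The "same pool" argument can be sketched with a few invented numbers. The ratings below are hypothetical (1 = trivial, 5 = very hard); the point is only that complexity-weighted load, not raw case counts, is what evens out over a long enough window:

```python
# Hypothetical complexity ratings for one period. Joe closes more cases,
# mostly easy; Anna closes fewer cases, mostly hard.
joe_cases = [1, 2, 1, 3, 2, 2, 1, 4, 2, 2]
anna_cases = [3, 4, 2, 3, 5]

def weighted_load(ratings):
    """Total complexity-weighted work: the sum of per-case ratings."""
    return sum(ratings)

joe_total = weighted_load(joe_cases)    # 20
anna_total = weighted_load(anna_cases)  # 17

# Raw counts differ by 2x, but the weighted loads are comparable --
# which is why, drawing from the same pool, plain volume targets
# tend to be fair over a month or a quarter.
print(f"Joe: {len(joe_cases)} cases, weighted load {joe_total}")
print(f"Anna: {len(anna_cases)} cases, weighted load {anna_total}")
```

Over a day the numbers can diverge wildly (5 easy cases vs. 1 hard one); over a quarter, with enough cases drawn from the same pool, the totals converge.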
- Average volumes are fine for staffing models, too. This is the same reasoning as above.
- Short-term audits are helpful. Although the mix of case complexity tends to be quite stable in established support organizations, it’s useful to conduct short-term audits to check. To do that, take a week’s worth of closed cases (or a day’s worth if your volumes are high) and manually assign a complexity rating to each. You will quickly see whether your mix is changing over time, and whether different categories of cases can be (automatically) classified as complex or simple. This type of audit is a great way to decide whether certain groups of support engineers, because they work on different types of cases, should be given different productivity or resolution targets. (Pair this exercise with a timekeeping audit for better accuracy.)
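The audit step above amounts to simple tabulation once the manual ratings exist. A minimal sketch, with invented categories, field names, and ratings, of how the per-category mix falls out of a week's sample:

```python
# Hypothetical audit sample: one week of closed cases, each manually
# rated for complexity (1 = trivial, 5 = very hard). All data invented.
from collections import defaultdict

closed_cases = [
    {"category": "password-reset",     "complexity": 1},
    {"category": "password-reset",     "complexity": 1},
    {"category": "how-to",             "complexity": 2},
    {"category": "performance-tuning", "complexity": 4},
    {"category": "performance-tuning", "complexity": 5},
]

# Group the ratings by case category.
by_category = defaultdict(list)
for case in closed_cases:
    by_category[case["category"]].append(case["complexity"])

# Per-category volume and average complexity: categories that cluster
# at the extremes are candidates for automatic simple/complex labels.
for category, ratings in sorted(by_category.items()):
    avg = sum(ratings) / len(ratings)
    print(f"{category}: {len(ratings)} cases, avg complexity {avg:.1f}")
```

Repeating the same tabulation a quarter later shows at a glance whether the mix is drifting, without building complexity measurement into the day-to-day case workflow.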
Do you measure case complexity? If so, please share why, how, and how you use the results.