Measuring case complexity — a fool’s errand?
Many thanks to Sam Levy and Neil Lillemark for suggesting this topic.
When talking about case productivity or resolution time, there’s always a moment when someone points out that not all cases are created equal. Some cases are easy and will be closed fast with little effort. Others are complex, requiring hours of troubleshooting and weeks to resolve. So it seems that we should try to capture case complexity, right?
I say: not so fast…
- Case complexity is subjective. A case may run long (elapsed time), demand hours of work (effort time), or require many back-and-forth communications with the customer, without being particularly complex. The length and communication intensity could be caused by the inexperience of the support engineer, a lack of technical knowledge, or an elusive customer, rather than the sheer complexity of the case. In some support organizations, some categories of cases will naturally fall into “simple” or “complex” categories, for instance a password-reset request would be simple while a performance-tuning question may be complex. By all means rely on such straightforward distinctions if they apply — but they rarely go very far.
- Case complexity can best be gauged at case closure. It’s often very difficult to tell whether a new case will be complex or not. This means that the level of complexity of open cases, the cases in the backlog, is usually unknown, even though it would be handy to know whether they are lingering for good reason (they are complex) or bad ones (the support engineer is inept). So measuring case complexity is not a big help to manage the support backlog.
- Measuring case complexity does not help customers. The push to measure complexity is driven by internal concerns about meeting productivity or resolution targets, neither of which matters to customers. (Customers do care about resolution time, of course, but they don’t care that a particular case is taking a long time because it is complex.) Expending effort on activities that do not benefit customers is questionable.
- When it comes to metrics, using average volumes is adequate. The main reason support managers want to measure complexity is to be able to judge whether Joe’s closing 100 cases of an average complexity level of 2.2 is equivalent to Anna’s closing 50 cases of an average complexity level of 2.9. But if Anna and Joe are working from the same pool of cases, over time the average complexity of the cases they handle should be equivalent. So yes, today Joe closed 5 easy cases and Anna closed just 1 hard case, but over a month or a quarter they should be pulling about the same load, complexity- and numbers-wise (the simulation sketch after this list illustrates the point). The reasoning is the same for resolution time: if Anna resolves 80% of her cases in a week and Joe is working from the same pool, then Joe should also be resolving 80% of his cases in a week (again, over time, not necessarily this week).
- Average volumes are fine for staffing models, too. This is the same reasoning as above.
- Short-term audits are helpful. Although the mix of case complexity tends to be quite stable in established support organizations, it’s useful to conduct short-term audits to check. To do that, take a week’s worth of closed cases (or a day’s worth if your volumes are high) and manually assign a complexity rating to each. You will quickly see whether your mix is changing over time, and whether certain categories of cases can be (automatically) classified as complex or simple. This type of audit is a great way to decide whether certain groups of support engineers, because they work on different types of cases, should be given different productivity or resolution targets. (Pair this exercise with a timekeeping audit for better accuracy.)
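To make the averaging argument concrete, here is a minimal simulation sketch in Python. The complexity distribution is assumed, and a coin flip stands in for engineer availability; none of this is real data. It shows two engineers pulling cases at random from the same queue ending up with nearly identical average complexity, even though any single day can look lopsided.

```python
import random

# Hypothetical illustration: two engineers pull cases at random from the
# same shared queue. Complexity ratings (1 = trivial, 5 = very hard) are
# drawn from an assumed, fixed distribution.
random.seed(42)
pool = [random.choices([1, 2, 3, 4, 5], weights=[30, 30, 20, 15, 5])[0]
        for _ in range(2000)]

anna, joe = [], []
for case in pool:
    # Whoever happens to be free takes the next case; a coin flip
    # stands in for availability.
    (anna if random.random() < 0.5 else joe).append(case)

for name, cases in (("Anna", anna), ("Joe", joe)):
    print(f"{name}: {len(cases)} cases, "
          f"average complexity {sum(cases) / len(cases):.2f}")

# Both averages land close to the pool mean (about 2.35), even though
# the two engineers never handled the same cases.
```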
Do you measure case complexity? If so, please share why, how, and how you use the results.
Françoise, I read this post with great interest, and while I agree with most of it, there are a few points on which I differ.
First, I am not sure case complexity is subjective, but it definitely is relative. I believe it is possible to build a metric that represents case complexity consistently, or at least the amount of work associated with the case, which is not a bad proxy, and I have built a number of those in the past. For example, complexity can be (1 x note count) + (3 x customer contacts) + (3 x customer actions), which biases the metric towards customer effort; it’s easy to develop other indexes based on the relative weights of the different variables. The purpose of such a metric is not to assess individual agents or cases; rather, it is to help us understand the evolution of the organization and its workload, as well as the differences between product lines or customer segments, for planning purposes.
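Such an index is straightforward to compute. Here is a minimal sketch: the weights (1, 3, 3) are the ones given above, while the field names on the case record are assumptions for illustration, not a real schema.

```python
# Minimal sketch of the weighted complexity index described above.
# The weights bias the index toward customer effort; the field names
# are invented for illustration.
WEIGHTS = {"note_count": 1, "customer_contacts": 3, "customer_actions": 3}

def complexity_index(case: dict) -> int:
    """Weighted sum of case activity counts."""
    return sum(weight * case.get(field, 0) for field, weight in WEIGHTS.items())

# A case with 8 notes, 4 customer contacts and 2 customer actions
# scores 8*1 + 4*3 + 2*3 = 26.
print(complexity_index({"note_count": 8, "customer_contacts": 4, "customer_actions": 2}))
```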
Second, while it is impossible to know case complexity ahead of time with certainty, an experienced support engineer can gauge it with a reasonable degree of accuracy, especially in a highly technical enterprise environment where customers provide good information and documentation. In many past organizations I established case dispatching, a less fancy and less attractively named version of swarming. We’d have the more experienced engineers sit on the incoming queue in rotation and assign cases to team members based on problem area, assumed complexity, customer identity, and so on, often adding some guidance to the case. They obviously didn’t get every case right, but there were not that many errors in their complexity assessments either.
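To give a flavor of what such dispatching looks like when written down, here is a hypothetical sketch of first-pass routing rules. Real dispatchers apply judgment well beyond what a few rules capture, and every field and queue name here is invented.

```python
# Hypothetical first-pass routing rules of the kind a human dispatcher
# applies; this only encodes the obvious part of their judgment.
def dispatch(case: dict) -> str:
    """Route a new case to a team queue based on area and guessed complexity."""
    area = case.get("problem_area", "general")
    if case.get("strategic_customer"):
        return "senior-engineers"      # key accounts go to senior staff
    if area in ("install", "password", "licensing"):
        return "frontline"             # routine areas, likely simple
    if case.get("assumed_complexity", 3) >= 4:
        return "escalation-team"       # hard cases go straight to experts
    return f"{area}-queue"             # otherwise route by problem area

print(dispatch({"problem_area": "performance", "assumed_complexity": 4}))
```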
Last, I think the assumption that people working off the same queue will end up with a similar workload over time holds only if no case picking is allowed, and only in relatively simple environments, perhaps consumer-oriented rather than enterprise ones. Otherwise, I can very well imagine Anna working on highly involved server cases and resolving one case every other day, while Joe helps customers with installation questions and successfully handles five cases a day. Evaluating whether this is the best each of them can do, and how to make them more efficient, is a manager’s job.
The bottom line, I suppose, is that there’s no single true answer, and more importantly, there’s no substitute for knowing your case load, your organization, and the purpose and limitations of each metric.