Striking a Balance between Individual and Team Metrics
Many thanks to Angus Klein for suggesting this topic.
Support is a team sport, right? So why do most support teams rely exclusively on individual achievements to measure performance?
A typical set of objectives/QBRs for a complex-support organization reads something like this:
- 50%: CSAT rating and internal quality audit score, including contributions to knowledge management (target: 8/10)
- 30%: case productivity (target: close X cases per quarter)
- 5%: achievement of the resolution-time goal (target: 80% of cases closed in less than a week)
- 5%: achievement of the response-time goal (target: 90% of cases responded to within SLA; a team goal)
- 10%: achievement of training or other personal goals
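To make the weighting concrete, here is a minimal sketch of how such a scorecard rolls up into a single number. The weights come from the sample objectives above; how each raw metric is normalized to a 0–1 "attainment" fraction is a hypothetical choice, not part of the original scheme.

```python
# Weighted composite score for one support engineer, using the sample
# weights above. Attainment values are hypothetical 0-1 fractions of
# each target (how you normalize raw metrics is up to you).

WEIGHTS = {
    "quality": 0.50,          # CSAT + internal quality audit
    "productivity": 0.30,     # cases closed vs. quarterly target
    "resolution_time": 0.05,  # cases closed in under a week
    "response_time": 0.05,    # SLA response rate (team goal)
    "personal_goals": 0.10,   # training or other personal goals
}

def composite_score(attainment: dict) -> float:
    """Blend per-metric attainment (0-1) into one weighted score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 100%
    return sum(WEIGHTS[m] * min(attainment.get(m, 0.0), 1.0)
               for m in WEIGHTS)

# Example: strong quality and productivity, weaker resolution time.
score = composite_score({
    "quality": 0.85,
    "productivity": 1.0,
    "resolution_time": 0.8,
    "response_time": 0.9,
    "personal_goals": 1.0,
})
print(round(score, 2))  # 0.91
```

Note that under this sample weighting, the one team metric (response time) can move the final score by at most five points out of a hundred.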
This means that a whopping 5% of the goals is a team goal — and that’s for the least important aspect of support work, response time. How can we do better? It all depends on how the work is organized, but here are some ideas:
- Capture assists in the case-tracking tool and have them “count” alongside case productivity. (A side benefit of this approach is that you will be able to spot the support engineers who need frequent assists.)
- If you organize teams around experts, measure the experts by the team's output. So instead of an individual rating, the expert carries the customer satisfaction rating and productivity for all the cases handled by the team.
- Base a portion of the goals on team performance. So an individual support engineer would carry both her individual ratings (for a large percentage of the total) and team ratings. This is a good incentive to work cooperatively.
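The individual/team split above can be sketched as a simple weighted blend. The 70/30 split below is a hypothetical example, not a recommendation from the text:

```python
def blended_rating(individual: float, team: float,
                   individual_weight: float = 0.7) -> float:
    """Blend an engineer's individual rating with the team's rating.

    The default 70/30 individual/team split is a hypothetical choice;
    tune it to how cooperatively you want people to work.
    """
    return individual_weight * individual + (1 - individual_weight) * team

# An engineer rated 9.0 individually on a team averaging 8.0:
print(blended_rating(9.0, 8.0))  # 0.7 * 9.0 + 0.3 * 8.0 = 8.7
```

The larger the team share, the more each engineer's score depends on helping teammates succeed rather than only closing her own cases.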
- If you are using swarming, use team performance as the main metric, with individual performance as a minor factor.
How do you balance individual and team goals in your organization?