Bug Fest!
A big thank you to Rick Morris for suggesting this topic, all about bugs and bug processing.
Q: Bugs? Defects? Does it matter?
A: FT Works clients use a variety of names for bugs, so the specific term you use may not be that important. What’s more important, in my mind, is to:
- Be transparent about bugs. Pretending that your products are 100% perfect is plain silly and creates a climate of distrust between customers and the vendor. Old-fashioned, uncool, counterproductive. So don’t hide a bug behind the word “request”: if it’s a bug, say it’s a bug, or say it’s a defect, but don’t pretend.
- Be consistent. Pick a word and use it throughout the company, for clarity’s sake.
Q: Should the support team handle enhancement requests and feature requests?
A: It makes a lot of sense to maintain a single input channel for all customer requests, which suggests that support should own customers’ input on enhancement requests. On a more important note, customer requests commonly start as bugs and end up categorized as enhancement requests, both because it’s genuinely difficult to draw a firm line between the two and because, ahem, vendors find it difficult to resist the temptation of reclassifying a bug that must be fixed, someday, into an enhancement request that has no such requirement. Bottom line: it’s useful to treat enhancement requests and bugs as related rather than separate entities.
Q: Should enhancement requests be treated differently from bugs?
A: For most vendors, bugs are processed by the Engineering team and enhancement requests by the Product Marketing team. Most vendors also make specific commitments to fix (address) bugs according to a predetermined schedule but make no such commitments for enhancement requests. However, when it comes to the customer-facing interface the treatment is similar, or at least can be, with support owning the intake, the tracking, and the communication of decisions about both bugs and enhancement requests.
Q: Should we leave cases open until the underlying bug is fixed?
A: This is probably the #1 question I get about bug processing, and I’m sometimes surprised at how heated discussions about this topic can become.
Let’s back up and think about the purpose of support cases: they exist to track the progress of specific issues, both from an internal (vendor) perspective and an external (customer) perspective. We love to count cases, to measure their ages, to classify their root causes, but in the end cases exist to track issues, and should disappear once the issue is gone. Haha, say the “never close” zealots: this implies that cases must remain open until the issue is fully resolved, which means that cases must remain open until the fix is released, the customer has installed it, and the customer has confirmed that the fix resolves the issue. Could be, but how many customers remain interested in each and every fix to that extent?
Your answer may be “all of them”. For instance, customers in regulated industries may be required to track each and every fix to completion. If you are in this situation, cases must remain open until confirmation. The best approach would be to create a dormant status and queue for those cases that are simply waiting for a fix and require no support attention until the fix is released, so as not to clutter active work-in-progress queues.
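If your case-tracking tool allows it, a dormant status can be as simple as a status value plus a holding queue. Here is a minimal sketch, assuming a hypothetical `Case` record; the names are illustrations, not features of any particular ticketing product:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    OPEN = "open"
    DORMANT = "dormant"   # waiting on a fix; no support attention required
    CLOSED = "closed"

@dataclass
class Case:
    id: str
    status: Status = Status.OPEN
    queue: str = "active"

def park_until_fix(case: Case) -> None:
    """Park a case that is only waiting on a bug fix, keeping active queues uncluttered."""
    case.status = Status.DORMANT
    case.queue = "awaiting-fix"
```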
But for most vendors the answer is more nuanced. Customers are terribly interested in a few specific fixes that impact their business, but for other, non-critical fixes, they are content with being notified of availability, if that. In this situation, the best approach is to keep open those cases with critical bugs (usually fairly easy to categorize) and to close the others, ideally with a mechanism that allows automatic notification to the customer when the fix is delivered. So far, so good. But customers may object, and support engineers may object.
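One way to build that notification mechanism is to link each closed case to its underlying bug and fan out a message when the fix ships. The sketch below is purely illustrative; the `Bug`, `LinkedCase`, and `notify_customer` names are assumptions, not the API of any specific tool:

```python
from dataclasses import dataclass, field

@dataclass
class LinkedCase:
    id: str
    customer_email: str

@dataclass
class Bug:
    id: str
    linked_cases: list = field(default_factory=list)
    fixed_in: str = ""    # release that contains the fix, once known

def notify_customer(email: str, message: str) -> None:
    print(f"to {email}: {message}")   # stand-in for a real mailer or portal update

def on_fix_released(bug: Bug, release: str) -> None:
    """When a fix ships, notify every customer whose (already closed) case was linked to the bug."""
    bug.fixed_in = release
    for case in bug.linked_cases:
        notify_customer(case.customer_email,
                        f"The fix for case {case.id} is available in release {release}.")
```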
Customers object because they are concerned that closing the case will cause the associated bug to fall into a black hole. And they may be right on, as some vendors focus on new releases rather than fixing bugs and blithely ignore non-critical bugs. However, many vendors have a well-established routine for rounding up and fixing non-critical bugs as part of the normal release process. In this situation, it’s a matter of pointing out to customers the long list of fixes for each release.
Support engineers object because metrics are set up in such a way that they feel they will be dinged by the policy. On the one hand, keeping critical cases open means that their closure rates will go down; on the other, closing cases on (minor) bugs means that customer satisfaction may be impacted. Part of the solution is to tailor the metrics to the reality. For instance, targeting that 80% of cases be closed within a week (say) means that an ample 20% can stay open. And the customer satisfaction concern can be defused by never forcing customers to close cases before they are ready and by providing a clear, customer-focused script on why closing a case does not mean that the bug goes untouched.
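To make that kind of target concrete, here is a small, hypothetical calculation of “percentage of cases closed within a week” from opened/closed timestamps (the field names are assumptions); cases still open count as misses, which is exactly the room the 20% allowance provides:

```python
from datetime import datetime, timedelta

def pct_closed_within(cases, window=timedelta(days=7)):
    """Share of cases (in %) closed within `window` of being opened; open cases count as misses."""
    if not cases:
        return 0.0
    hits = sum(1 for c in cases
               if c["closed_at"] is not None and c["closed_at"] - c["opened_at"] <= window)
    return 100.0 * hits / len(cases)

cases = [
    {"opened_at": datetime(2024, 1, 1), "closed_at": datetime(2024, 1, 3)},
    {"opened_at": datetime(2024, 1, 2), "closed_at": None},   # still waiting on a bug fix
]
print(pct_closed_within(cases))   # 50.0
```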
Please share your experience with closing cases that have an underlying bug, and let me know if you’d like an “enhancement request fest” post to match this one.
We leave incidents open for Priority 1 and 2 issues so we can follow up with customers through the Development correction process all the way through the customer loading the fix and confirming it resolved their issue. We close our lowest priority issues (Priority 3) but we create a KB article for these and link the defect to the KB so the customer is notified when the fix is available. That way, the customer still receives communication but we don’t leave an incident open for these low priority issues which could be open for a long time or may not be fixed until a future release.
We leave our cases open until the bug is fixed and deployed. This is because most of our customers don’t see the issue as resolved until that takes place. They want our Support team to track this through to the end and provide status updates on any changing deploy dates.
We have introduced two statuses to allow us to track this. We have Waiting On ETA (meaning CS is still accountable for following up with Engineering to get that deploy date) and Pending Deploy (which means we have a date and don’t actively work the case, but follow up with the customer once that deploy date hits).
We also track our case closure rate in our metrics separately for cases with bugs and cases without bugs. That allows us to weed out the cases where CS doesn’t control the resolution time due to the bug.
thanks, Robin
Hi Robin!
I like the two statuses that help you distinguish when support needs to be actively involved vs. not.
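For readers who want to mirror the split closure metric Robin describes, here is a quick, hypothetical sketch; the `has_bug` flag and field names are mine for illustration, not Robin’s system:

```python
def closure_rates(cases):
    """Closure rate (in %) reported separately for bug-linked and non-bug cases."""
    buckets = {"bug": [], "no_bug": []}
    for c in cases:
        buckets["bug" if c["has_bug"] else "no_bug"].append(c)
    return {name: (100.0 * sum(c["closed"] for c in group) / len(group) if group else None)
            for name, group in buckets.items()}

cases = [
    {"has_bug": True,  "closed": False},   # waiting on a deploy date
    {"has_bug": False, "closed": True},
    {"has_bug": False, "closed": True},
]
print(closure_rates(cases))   # {'bug': 0.0, 'no_bug': 100.0}
```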