Metrics for Knowledge Management – Q&A

On July 8th, Melissa Burch of Irrevo invited me to speak about metrics for knowledge management to a webinar audience. We were lucky to get many great questions, so I thought I would share some of the answers with you, blog readers. Thank you to Jen Diaz for managing the Q&A and giving me permission to cross-post the answers.

Q: How do you measure case deflection as a result of knowledge?

FT: I wrote a book on this! [Collective Wisdom, co-authored with David Kay] Seriously, it’s a very difficult topic. Depending on the tool you are using, you may be able to present possible solutions to users as they are logging cases. If so, you can measure the percentage of cases started but not logged. Voila! But note that some, maybe many, users may have found solutions and gone away happy without ever starting to log a case.

Otherwise, you need a method for measuring what’s not happening, which is very difficult. I like to simply measure the incident rate, i.e., the volume of cases per customer (or per seat, per license, or whatever unit helps you capture the size of the customer base). If the incident rate goes down while you are improving the knowledge base, that’s a positive result. Of course, the incident rate depends on many other factors, most notably product quality. If you have multiple product lines, you can check them against each other to control for these other factors.
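To make the idea concrete, here is a minimal sketch of the incident-rate calculation described above. All the numbers and product names are made up for illustration; the point is simply cases divided by customer-base size, tracked per product line over time.

```python
# Hypothetical sketch: computing an incident rate per product line.
# Assumes you know case volume and customer count for each period.

def incident_rate(cases: int, customers: int) -> float:
    """Cases logged per customer (or per seat/license) for one period."""
    return cases / customers

# Made-up quarterly data: (cases, customers) for two product lines.
history = {
    "Product A": [(1200, 400), (1150, 420), (1050, 440)],
    "Product B": [(800, 300), (810, 305), (820, 310)],
}

for product, quarters in history.items():
    rates = [round(incident_rate(c, n), 2) for c, n in quarters]
    print(product, rates)
```

A declining rate for one product line while another stays flat is the kind of cross-check mentioned above: it suggests the knowledge base, not an overall product-quality shift, is driving the change.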

Q: How do you measure quality when the customer needs to go and do some work and only then determine whether the solution worked? They are unlikely to come back and score the item.

FT: The quality of an individual solution is best determined by (1) feedback on the solution itself and (2) reuse during case resolution. The vast majority of customers will not bother rating solutions at all, so be sure to use whatever feedback is given: if one person complains about a solution, chances are that dozens of others also had a problem.

Q: What’s your recommendation for the number of case evaluations per analyst and who should do them?

FT: [I recommend conducting regular case audits on a small number of cases, including checking whether the proper knowledge management steps were taken on the case.] For established analysts, a couple per quarter, randomly chosen, should suffice, assuming that the outcome is positive. (If not, review more to determine whether there is a real issue, or whether you just happened to pick problematic cases.) For new hires, or anyone with performance issues, you should review more, maybe all of them for brand-new hires.

As to who should do the case evaluations, I’m a strong proponent of having the analyst’s manager perform them. It’s best to have the same person perform the evaluations, deliver feedback, and manage the performance. That being said, with very technical products it’s often helpful to enlist the help of a senior technical resource who would be better able to assess the quality of the troubleshooting process.

Q: Aren’t case reviews lagging indicators, since they happen after the case is closed? How are they leading?

FT: Case reviews are often conducted on closed cases, in which case they do, indeed, come after the fact. But they can also be conducted on cases that are still open. And since not every customer returns a satisfaction survey, the case quality review can be considered a leading indicator of quality, suggesting what customers might say in the future about cases closed by that same individual. It’s not always easy to cleanly distinguish between leading and lagging indicators.

Q: I manage a doc/user assistance team of a brand within a multi-national software company. We don’t have any metrics about users’ interactions. Where do we start? Is there a good set of books or papers that would give us some metrics we can start managing?

FT: If you have no metrics at all, that’s great, because you have no bad metrics either. I would suggest starting from the balanced scorecard approach. There are many ways to look at metrics. I have some books available that talk about metrics [the new edition of The Art of Support will contain expanded coverage of metrics], and you can also read my blog.

The main thing about metrics is to reconcile the theory of what you should measure with the reality of what you can measure. Start small. Start with metrics that are meaningful. If you can measure satisfaction at all, that’s a good start. Start with the ideal and adapt it to what you can actually do.

Melissa: In addition to FT’s book, I’d add another book to read: How to Measure Anything: Finding the Value of Intangibles in Business by Douglas Hubbard. The beginning of the book is very inspirational and makes you think differently about measuring intangible value.

Q: I’ve talked to numerous support groups who say their corporate culture does not embrace knowledge sharing. How successful can support be in creating a good knowledge sharing culture when executives may not embrace it?

Melissa: Without executive support, you can still make some progress in encouraging knowledge capture, because in general most support agents want to help each other and their customers. The return is much smaller than you’d see with executive sponsorship, but some participation will occur.

FT: I agree with Melissa. What I’d encourage you to do is practice active knowledge sharing, preferably using KCS, within the support organization. People in support are usually well disposed toward knowledge sharing. It’s not easy, but they understand that sharing knowledge is important.

The important thing to avoid is being the tail wagging the dog. Start with what you can control within your support group, and hopefully it will spread. Lead by example, but don’t try to transform the entire organization. I have several clients who tried to do that, and three years later they’re still trying to get started because not everyone agrees yet. If they had started where they could, they’d have a system that works for them, and they might have inspired others. Start where you are, and then inspire others.

 

If you missed the live broadcast, you can watch the recorded webinar.

And if you have questions of your own, please add them in a comment and I will respond.

 

2 Comments on “Metrics for Knowledge Management – Q&A”

  1. Hi Francoise, at Progress we’ve started simply measuring the ratio of kb visits to the number of support cases. I really don’t like using “deflection” as the measure/term because it is so difficult to truly measure, and I’ve never seen anyone do it in a way that doesn’t make a lot of assumptions.

    Now we simply know that for every support case there were a little over 100 visits to the kb and that ratio has been trending up, which is what we want.

    Kirk

    • Kirk-

      That is a very impressive ratio and yes, an upward trend means that customers are voting with their feet (fingers) and finding the online experience valuable. Anyone else?