“How professional was your support rep?” and other useless survey methods

Many thanks to Haim Toeg for suggesting this topic.

Are you still stuck with the 80s-style “Please rate your support engineer’s level of professionalism and courtesy” question in your customer survey? This post is for you, with three simple questions and one simple proposal.

Are you using the results of the survey?

If not, stop doing surveys.

Do you use the survey to measure the support engineers’ performance?

If so, you need to be able to associate surveys and cases to specific engineers.

  • Easy enough if cases are owned by the same individual throughout.
  • Fairly easy if you have a backline organization (by measuring the backline engineers either on their specific contributions to specific cases, or on all cases for their specialty).
  • If you have a tiered process, try measuring the level-1 engineers both on the cases they solve themselves and the ones they escalate to level 2, to encourage them to do good handoffs.
  • If cases change hands a lot, associating the survey with the last owner may not work so well, so try associating it with all the owners. (Frequent owner changes are a problem in their own right, but that’s a discussion for another day.)

If you cannot accurately associate engineers to cases, find other ways to measure engineers’ contributions. Case audits are a wonderful alternative (or add-on).
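If you do credit surveys to all the owners of a case, the bookkeeping is simple: each survey score counts toward every engineer who touched the case, and each engineer’s average is taken over the scores credited to them. A minimal sketch (the function name, case IDs, and engineer names are illustrative, not from any particular system):

```python
from collections import defaultdict

def average_scores_by_engineer(surveys, case_owners):
    """Credit each survey score to every engineer who owned the case,
    then average the credited scores per engineer."""
    scores = defaultdict(list)
    for case_id, score in surveys:
        for engineer in case_owners.get(case_id, []):
            scores[engineer].append(score)
    return {eng: sum(s) / len(s) for eng, s in scores.items()}

# Case C-1 changed hands (ana, then raj); C-2 stayed with raj.
case_owners = {"C-1": ["ana", "raj"], "C-2": ["raj"]}
surveys = [("C-1", 9), ("C-2", 4)]
print(average_scores_by_engineer(surveys, case_owners))
# → {'ana': 9.0, 'raj': 6.5}
```

Note the design choice this encodes: raj is accountable for the weak survey on C-2 even though he also shares credit for the good one on C-1, which is exactly the incentive you want when cases pass through many hands.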

Are you getting a decent response rate?

You need a response rate of at least 10% to ensure that the data is valid. One of the key techniques to raise the response rate is to make the survey very short (see here for more ideas). You are officially allowed to drop that “professionalism” question! While you are at it, drop all but a couple.

Getting a healthy response rate also ensures that there are enough surveys to evaluate individual engineers, if that’s your goal.
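The 10% floor is easy to check mechanically before trusting the numbers. A tiny sketch (the function name and figures are illustrative; only the 10% threshold comes from the post):

```python
def enough_data(responses: int, sent: int, min_rate: float = 0.10) -> bool:
    """Check the survey response rate against the 10% floor."""
    return sent > 0 and responses / sent >= min_rate

print(enough_data(120, 1000))  # 12% response rate → usable
print(enough_data(50, 1000))   # 5% response rate → too sparse to trust
```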

Are you pushing engineers to game the survey?

If managers routinely use the survey results as a punishment tool, support engineers will twist themselves into unnatural behaviors to attract surveys from satisfied customers and to avoid surveys from dissatisfied ones. The survey results must be only one element of performance management, and must be interpreted only by comparing engineers with similar assignments. For instance, supporting a new or otherwise buggy product will guarantee worse surveys: compare engineers by specialty, level, or role.

A simple proposal

Try this for a transactional case survey (note that it’s essentially a ONE-question survey!):

  • Rate your experience with case xyz (I find that a 0-10 scale is self-explanatory, works pretty well across cultures, and allows reasonable gradations)
  • Do you want to tell us anything else? (open comment)
  • Would you like us to contact you about this survey?

For self-service experience surveys, you can change the first question to “Did you find what you wanted? [y|n]”. It’s simple, and it helps define the success rate for self-service.
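That success rate is just the fraction of “y” answers. A minimal sketch (function name and sample answers are illustrative):

```python
def self_service_success_rate(answers):
    """answers: 'y'/'n' replies to 'Did you find what you wanted?'
    Returns the fraction of visitors who succeeded."""
    answers = [a.strip().lower() for a in answers]
    return answers.count("y") / len(answers)

print(self_service_success_rate(["y", "y", "n", "y"]))  # → 0.75
```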

What about relationship surveys?

Relationship surveys such as NPS are very useful and need to be conducted separately from the transactional surveys discussed here because (1) you want to reach all customers, not just the ones who contact support, and (2) you want to reach the decision makers, who may not be the support contacts.
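For reference, NPS is computed from a 0-10 “would you recommend us?” question using the standard definition: the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). A minimal sketch (the function name and sample scores are illustrative):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on a -100 to +100 scale. Passives (7-8) count only in the denominator."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Two promoters, two passives, two detractors → they cancel out.
print(nps([10, 9, 8, 7, 6, 3]))  # → 0.0
```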

How do you measure customer satisfaction and what do you like about your approach? Please add your comment!
