9 Traps for Satisfaction Surveys

Almost every support organization uses satisfaction surveys, but poor execution can undermine the outcomes. We’ll look at issues with the questions themselves, the choice of scales, and finally sampling.

Wording Traps

  1. Leading statements (“Did our hard-working reps provide you with great service?”)
  2. Double-barreled questions (e.g. “How satisfied are you with the professionalism and knowledge of the rep?”)
  3. Assumptive questions (e.g. “Please rate the website” — the customer may not have visited the website)
  4. Internal jargon (e.g. “response time”, which for us in support means the initial response, but which most customers read as the time it took to get a solution)

Scale Traps

  1. Unbalanced scale (e.g. Very Good, Good, Fair, Poor — the middle is not the middle!)
  2. Limited options (e.g. “Are you satisfied? Yes or no”)
  3. Overlapping options (e.g. “How many cases did you log this month? 1, 2-3, 3 or more”)

Sampling Traps

  1. Response bias (e.g. only sending surveys for cases without bugs)
  2. Small samples (for low-volume organizations, the issue often appears at the individual support engineer’s level: a handful of surveys does not allow a meaningful rating)
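To see why a handful of surveys can’t support a per-engineer rating, here is a minimal sketch (function name and numbers are illustrative, not from any survey tool) that computes a 95% Wilson score confidence interval around a satisfaction rate. With 4 satisfied customers out of 5 surveys, the plausible range spans most of the scale; with 80 out of 100, it tightens considerably.

```python
import math

def wilson_interval(satisfied, n, z=1.96):
    """95% Wilson score interval for a satisfaction proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = satisfied / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - margin), min(1.0, center + margin))

# 4 satisfied out of 5 surveys: the "80% satisfaction" could plausibly
# be anywhere from roughly 38% to 96% -- far too wide to rate an engineer.
lo, hi = wilson_interval(4, 5)

# 80 satisfied out of 100 surveys: the same 80% rate, but now the
# interval is only about 15 points wide.
lo2, hi2 = wilson_interval(80, 100)
```

The Wilson interval is used here instead of the simpler normal approximation because it behaves sensibly at exactly the small sample sizes this trap is about.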

Are you proud of your survey? Please tell us why in the comments.
