Almost every support organization runs satisfaction surveys, but poor execution can make the results misleading. We’ll look at issues with the questions themselves, the choice of scales, and finally sampling.
- Leading statements (“Did our hard-working reps provide you with great service?”)
- Double-barreled questions (e.g. “How satisfied are you with the professionalism and knowledge of the rep?”; a single answer can’t tell you which of the two the customer rated)
- Assumptive questions (e.g. “Please rate the website” — the customer may not have visited the website)
- Internal jargon (e.g. talking about “response time,” which for us in support means the initial response, but which most customers read as the time it took to get a solution)
- Unbalanced scale (e.g. Very Good, Good, Fair, Poor — the middle is not the middle: two positive options, one neutral, and one negative skew the scale toward positive answers)
- Limited options (e.g. “Are you satisfied? Yes or no”; a binary choice leaves no room for degrees of satisfaction)
- Overlapping options (e.g. “How many cases did you log this month? 1, 2-3, 3 or more”; a customer with exactly 3 cases fits two buckets)
- Response bias (e.g. only sending surveys for cases without bugs; a sampling sketch follows this list)
- Small samples (for low-volume organizations, the issue often shows up at the individual support engineer’s level: a handful of surveys does not allow a meaningful rating; see the margin-of-error sketch below)
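
One way to avoid that kind of response bias, if you can’t survey every case, is to draw recipients uniformly at random from all closed cases, with no filtering on case attributes. A minimal sketch in Python (the case records and the `pick_survey_recipients` helper are hypothetical, not from any particular help desk tool):

```python
import random

def pick_survey_recipients(closed_cases, fraction=0.2, seed=None):
    """Survey a uniform random sample of ALL closed cases, so cases
    involving bugs are exactly as likely to be surveyed as clean ones."""
    rng = random.Random(seed)
    k = max(1, round(len(closed_cases) * fraction))
    return rng.sample(closed_cases, k)

# Hypothetical case records; note that 'has_bug' plays no role in selection
cases = [{"id": i, "has_bug": i % 4 == 0} for i in range(200)]
recipients = pick_survey_recipients(cases, fraction=0.2, seed=42)
print(len(recipients), "cases surveyed;",
      sum(c["has_bug"] for c in recipients), "of them involved a bug")
```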
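
And to put numbers on the small-sample problem: with five made-up ratings on a 1-5 scale averaging 4.4, the 95% confidence interval around the mean is wider than a full rating point, while a hundred responses with the same spread narrow it to about a third of a point. A rough sketch (normal approximation, which is itself shaky at tiny n; that’s part of the point):

```python
import math
import statistics

def ci_halfwidth(scores, z=1.96):
    """Approximate 95% confidence half-width for a mean rating
    (normal approximation; crude at tiny n, which is the point)."""
    se = statistics.stdev(scores) / math.sqrt(len(scores))
    return z * se

# Hypothetical 1-5 ratings for one support engineer
handful = [5, 4, 5, 3, 5]          # n = 5
print(f"n=5:   mean={statistics.mean(handful):.2f} "
      f"+/- {ci_halfwidth(handful):.2f}")

# Same mean and spread, but a larger sample shrinks the interval
larger = handful * 20              # n = 100
print(f"n=100: mean={statistics.mean(larger):.2f} "
      f"+/- {ci_halfwidth(larger):.2f}")
```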
Are you proud of your survey? Please tell us why in the comments.