Do I get enough customer surveys back to be able to trust the results?

I hear this apparently simple question on a regular basis – and I thought I would invite my colleague Fred Van Bennekom, the guru of practical customer surveys, to share with us his wisdom and recommendations. Fred says:

“Everyone conducting a survey is concerned about response rates and the level of confidence they can place in the survey results, and in conference presentations I get asked many questions that show how misunderstood survey accuracy is. (Hint: don’t listen to what your marketing team says.) Survey accuracy does require some fundamental understanding of statistics. In my Survey Design Workshop, I spend considerable time on this topic with a fun exercise using M&Ms to explain ‘sampling error.’

Here’s an obvious statement: the more completed surveys you get, the greater the confidence. But unfortunately, the required sample size is not just a simple percentage. Statistical accuracy is determined by four factors:

  • Size of the population. The population is the “group of interest” for the survey. (For instance, all the customers who create a case in February 2015.)
  • Segmentation analysis desired. Typically, we analyze survey data along some demographic segmentation. For instance, if you analyze the data by support rep, you need enough responses for each rep.
  • Degree of variance in responses from the population. This factor is the hardest to understand. If responses tend to be tightly clustered, then we don’t need to sample as many people to get the same confidence as we would if the responses range widely. To be safe, we use the worst-case variance in our calculations below.
  • Tolerance for error. How accurate do you need the results to be? If you’re going to make multi-million dollar business decisions, then you probably have less tolerance for error.

The sample size equations here are a bit daunting. (Check your statistics book.) I created this chart to make the relationship more understandable.

[Chart: survey accuracy — population size (horizontal axis) vs. percentage of population responding (vertical axis)]

The horizontal axis shows the population. The vertical axis shows the percentage of the population from whom we have a response. (This is not the response rate. The response rate is the percentage of those receiving an invitation who respond. Note the critical distinction if you do not send surveys to all customers.)

The chart shows seven lines or curves that depict seven levels of accuracy. The horizontal line at the top shows that, if we perform a census and everyone responds, then we are 100% certain that we are 100% accurate.  Of course, that will likely never happen.

Before I explain how to interpret the curves, let’s bring out a couple of points from the chart. First, as the percentage responding increases, the accuracy increases. No surprise there. Second, as the size of the population grows, the percentage responding needed for the same level of accuracy decreases. Conversely, when we have a small population, we have to talk to a larger percentage of the population for reasonable accuracy.

Now let’s interpret those curves. Each curve shows 95% certainty of some range of accuracy. The 95% is chosen by convention. Let’s focus on the accuracy part of the statement.

Say you have a population of 1000, and you sent invitations to 500 people. Half of those responded. So, 25% of the population responded. Find the intersection of 1000 on the horizontal axis and 25% on the vertical axis. You would be approximately 95% certain of +/-5% accuracy in your survey results. In other words, if we conducted this survey 20 times, then 19 out of 20 times (95%) we would expect the sample mean to lie within +/-5% of the true population mean.
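The chart reading above can be checked numerically. Here is a minimal sketch, assuming the standard margin-of-error formula for a proportion with worst-case variance (p = 0.5), a 95% confidence level (z = 1.96), and a finite population correction:

```python
import math

# Margin of error for a proportion, with finite population correction (FPC).
# Worst-case variance assumed: p = 0.5; z = 1.96 for 95% confidence.
def margin_of_error(n_responses, population, z=1.96, p=0.5):
    se = math.sqrt(p * (1 - p) / n_responses)                   # standard error
    fpc = math.sqrt((population - n_responses) / (population - 1))
    return z * se * fpc

# The example from the text: population of 1000, 250 responses (25%).
moe = margin_of_error(250, 1000)
print(f"+/-{moe:.1%}")   # roughly +/-5.4%, matching the chart's ~5% curve
```

The function name and defaults are my own; Fred's chart encodes the same relationship graphically.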

Conversely, if we have an accuracy goal for the survey project, we can use this chart to determine the number of responses needed. Say we have a population of 500 and want an accuracy of +/-10%. Then we would need about 18% of the population to respond, or 90. (Find those coordinates on the chart.)
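Working in that direction, the textbook sample-size formula (again assuming worst-case variance and 95% confidence) can be inverted in a few lines. Note that it yields a slightly smaller number than the chart's roughly 90, which reads as somewhat more conservative:

```python
import math

# Required responses for a target margin of error, assuming worst-case
# p = 0.5 and 95% confidence (z = 1.96), with finite population correction.
def required_responses(population, margin, z=1.96, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # adjust for finite population
    return math.ceil(n)

n = required_responses(500, 0.10)
print(n)   # 81
```

The helper name and signature are illustrative, not from the original post.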

When we actually conduct our survey and analyze the results, we will then know something about the variance in the responses. The confidence statistic incorporates the variance found in the responses and can be calculated for each survey question. It tells us the size of the band or interval in which the population mean most likely lies – with 95% certainty.
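As a sketch of that post-survey calculation, here is a 95% confidence interval for one question's mean, computed from observed variance. The satisfaction scores below are invented purely for illustration:

```python
import statistics

# Hypothetical 1-to-10 satisfaction scores for a single survey question.
scores = [8, 9, 7, 8, 10, 6, 9, 8, 7, 9, 8, 10, 7, 8, 9]

n = len(scores)
mean = statistics.mean(scores)
sem = statistics.stdev(scores) / n ** 0.5   # standard error of the mean
half_width = 1.96 * sem                     # 95% interval half-width

print(f"mean {mean:.2f}, 95% interval +/-{half_width:.2f}")
```

With tightly clustered responses the interval narrows, which is exactly the variance effect described in the third factor above.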

For a more extended discussion of this topic, please go to: http://www.greatbrook.com/survey_statistical_confidence.htm

And if you’d like to request an Excel response rate calculator, use our request form: https://ww03.elbowspace.com/servlets/cfd?xr4=&formts=2006-11-10%2011:40:01.296001”

Thank you, Fred!

As a reminder, Fred is offering a $200 discount on his upcoming workshop in San Francisco on February 24-26. Sign up now here (use the discount code FT Word)! You can find more information about the workshop here.