
How to Design a Survey that Gives You the Information You Need

22 questionnaire design best practices: how to design a survey that gives you the accurate and reliable information you need to meet your objectives.

The most important goal in questionnaire design is constructing questions that will supply you with the accurate and reliable information you need in order to accomplish your objectives with confidence. This is a guide to help you craft such questions and avoid the danger that your survey will produce unusable or, even worse, misleading results.

We begin with 11 fundamental rules, followed by 11 more that go above and beyond what you'd normally find in articles like this one. This comprehensive collection of best practices will put you well on your way to designing surveys that give you the trustworthy and useful results you desire.

1. Define your objectives. First define your main objectives, which will be your overall purpose for doing the survey. Next, determine the research objectives that will help achieve these goals. For example, a main objective might be to improve customer loyalty. Your research objectives stemming from this could be to determine how loyal current customers are, why past customers have left, and what actions can be taken to improve the current situation.

2. Determine the information you need in order to achieve your research objectives. Turn the research objectives into a set of information requirements. Continuing with the example in #1 above, information needs might include the satisfaction level of current customers, the likelihood current customers will switch to a competitor, and the reasons that past customers have left. From this point, you then create questions that will address these needs.

3. Define and seek your target audience for the survey. Decide if you'll need to use the survey questions to screen out those who are ineligible. For example, if you want to survey only those people who have been customers for more than a year, can you put together that list on your own or do you need to ask that question up-front in order to screen people out of the survey? Make sure you include a question to screen out people who aren't knowledgeable about the survey topic if they'll need to be in order to answer the survey questions (and you aren't trying to measure level of knowledge).
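
If you are assembling the screening logic yourself rather than relying on a survey platform's skip logic, it amounts to a simple rule. Here is a minimal Python sketch; the answer fields (months_as_customer, familiar_with_topic) are hypothetical examples, not fields from any particular tool:

    # Minimal screening sketch: decide whether a respondent qualifies for the
    # full survey based on their answers to up-front screener questions.
    def qualifies(answers: dict) -> bool:
        # Screen out anyone who has been a customer for a year or less.
        if answers.get("months_as_customer", 0) <= 12:
            return False
        # Screen out anyone not familiar enough with the topic to answer.
        if not answers.get("familiar_with_topic", False):
            return False
        return True

    print(qualifies({"months_as_customer": 18, "familiar_with_topic": True}))  # True
    print(qualifies({"months_as_customer": 6, "familiar_with_topic": True}))   # False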

4. Choose the appropriate question type and rating scale for your questionnaire design. Understand the range of question types available and select the ones that best fit your purpose. If you are using a rating scale, the same principle applies. (View our library of survey question examples)

5. Know when to ask questions that provide answer choices (closed-ended) and when to ask questions that don't (open-ended). Use open-ended questions when you don't know what the range of responses is likely to be, want to know the reasoning behind a behavior or preference, or just want to read the respondent's own words or language. In general, open-ended questions give you top-of-mind responses, not necessarily the full picture of what the respondent knows or feels. Prompting survey participants with choices helps them recall certain actions and express answers in the context you desire. Keep in mind that open-ended questions take more time and effort to analyze.

6. Avoid ambiguous questions. The respondent must read the question as you intend it to be read and not be confused or think you are asking a different question. Precision is critical. A slight change in wording can change a question's meaning. Common ambiguities include:

  •  A question that is too general. For example, instead of "Why did you buy our product?", ask "What did you dislike about the product you were using before?" and "What do you like about our product?"
  •  A term that is too vague. Instead of "How much did your company grow?", ask "How much did your revenue grow?"
  •  An unclear timeframe is given. For instance, say "in the last 12 months" instead of "in the last year".

7. Avoid biased questions. This can cause skewed results. A few of the more common examples of bias are:

  • Trying to obtain particular answers to support your position by asking leading questions. Strive to maintain balanced questions. An example of an unbalanced question is, "Do you think Republicans look out more for small business?" The balanced version is "Who looks out more for small business? Republicans/Democrats/Other/None."
  •  Influencing later questions with earlier questions.
  •  For an open-ended question, giving examples of possible answer choices in the question itself might cause respondents to answer differently than they would otherwise.
  •  Making an assumption. For example, rather than say, "What effect are poor economic conditions having on your business?" say "What effect are current economic conditions having on your business?"

8. Avoid double-barreled questions. This is when you ask two questions in one. Examples: How would you rate our friendliness and speed of service? What do you like or dislike about our company?

9. Move from most important questions to least important questions, and save your sensitive questions for the end. This way, if the respondent doesn't complete the survey, you still have the most essential data.

10. Provide answer choices that don't overlap (mutually exclusive). This mistake often occurs in questionnaire design when using numerical ranges. Sometimes the opposite happens and a gap is present, such as when asking for frequency. The following answer set looks complete but is missing a choice: Never, A Few Times A Week, Weekly, A Few Times A Month, Monthly, Less Often Than Monthly. The missing choice is "Daily".
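
If your answer choices are numeric ranges, a quick automated check can catch both mistakes before the survey goes out. Below is a minimal Python sketch; the ranges are made-up examples (say, for a "How many employees does your company have?" question) chosen to show one overlap and one gap:

    # Check consecutive numeric answer ranges for overlaps and gaps.
    ranges = [(1, 10), (10, 50), (51, 100), (201, 500)]  # hypothetical choices

    for (lo1, hi1), (lo2, hi2) in zip(ranges, ranges[1:]):
        if lo2 <= hi1:
            print(f"Overlap: {lo1}-{hi1} and {lo2}-{hi2}")
        elif lo2 > hi1 + 1:
            print(f"Gap: nothing covers {hi1 + 1} to {lo2 - 1}")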

11. Offer answer choices that are as inclusive as possible. Aim for choices that will contain 80-90% of all possible answers, and provide an "Other (specify)" option for write-ins. Looking at open-ended responses from past surveys can help you assemble the list. If you think you can't create a near-complete list, then make the question entirely open-ended. Don't forget a "None of the above" choice if it's warranted.
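
One rough way to check the 80-90% target is to count how often each answer appeared in past open-ended responses and see how much of the total the most common ones cover. A small Python sketch, using entirely made-up past responses:

    from collections import Counter

    # Hypothetical open-ended responses from a previous survey.
    past_responses = ["price", "quality", "price", "support", "quality",
                      "price", "delivery speed", "quality", "price", "warranty"]

    counts = Counter(past_responses)
    total = len(past_responses)
    covered = 0
    for answer, n in counts.most_common():
        covered += n
        print(f"{answer:15s} covers {covered / total:4.0%} of responses so far")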

11 bonus tips for even greater insight...

12. Biased questions are a pitfall, but so are biased answer choices. Two of the most prevalent instances of this are:

  • Order bias – Respondents tend to choose items near the beginning or end of lists, or can get tired at the end of a long list of items they are rating. Randomizing lists for each respondent will fix this problem (see the sketch after this list). The exceptions are lists that form an ordered sequence (such as time or frequency) and lists that are better left alphabetical (such as a list of brand names the respondent is picking from).
  • Unbalanced items will affect the way responses are chosen, such as scales that have more positive than negative points (which people tend to answer more positively). Example: Excellent/Very Good/Average/Poor/Very Poor (2 positive, 1 middle, 2 negative) is better than Excellent/Very Good/Good/Fair/Poor (3 positive, 1 middle, 1 negative).
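
Most survey platforms can randomize answer order for you. If you assemble questions programmatically, the idea looks roughly like the Python sketch below; the choice list is hypothetical, and anchor options such as "Other" or "None of the above" are kept pinned at the end:

    import random

    choices = ["Brand A", "Brand B", "Brand C", "Brand D"]
    anchors = ["Other (specify)", "None of the above"]

    def randomized_choices(choices, anchors):
        shuffled = choices[:]      # copy so the master list stays intact
        random.shuffle(shuffled)   # a fresh order for each respondent
        return shuffled + anchors  # anchor options always appear last

    print(randomized_choices(choices, anchors))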

13. Make sure you provide clear instructions in your questions. This is especially important when a respondent is writing in a response. For example, if they are to write in a number and you don't want them to enter a range, you need to make that explicit. Also, it can be helpful to say things like "your best estimate is fine" and "please be as specific as possible".

14. In most cases, it's a better survey design to have respondents rate items on a scale than put them in rank order. Having people put items in rank order (such as top choice, second choice, third choice, and so on) has a couple of drawbacks. First, you learn how a person orders the items, but you don't find out the distance between the items. A person might like the first choice a lot more than the second, but the second not much more than the third. Ranking won't measure this distinction. Second, rankings don't tell you how a person feels about each item, only how they feel about it in relation to the others. For instance, something ranked number one might still be disliked...it's just disliked the least. Instead, it's better to have people rate each item using a scale. You get more information about how much the respondent cares about each item, plus you can put the items in rank order yourself by using the ratings (such as average rating or % giving the highest rating).
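
To illustrate how ratings still let you recover a rank order, here is a minimal Python sketch with invented 1-5 ratings; the items and scores are purely hypothetical:

    # Hypothetical 1-5 ratings from five respondents for three items.
    ratings = {
        "Price":   [5, 4, 5, 3, 4],
        "Quality": [4, 4, 3, 4, 4],
        "Support": [2, 3, 2, 3, 2],
    }

    # Average each item, then sort the averages to get a rank order,
    # while keeping the information about how far apart the items are.
    averages = {item: sum(vals) / len(vals) for item, vals in ratings.items()}
    ranked = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)
    for rank, (item, avg) in enumerate(ranked, start=1):
        print(f"{rank}. {item}: average {avg:.2f}")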

15. Be careful about offering a "Don't Know" choice. Whether to offer a "Don't Know" choice depends on your objectives and the survey taker's familiarity with the topic. Offer it if you are trying to determine their level of knowledge or if it's reasonable for them to not know the answer. Leave it out if they should be able to answer, especially if you've already established that they have a certain level of knowledge.

16. On your rating scales, know when to use a neutral midpoint and a "no opinion" option. People with no opinion because they lack enough information to form one should be treated differently than those who have the information but are genuinely neutral or undecided. The former should be given a "No opinion" or "Not able to answer" option while the latter should be given a "Neutral" or "Undecided" option. As an example, it's acceptable to use a "Neutral" option on an Agree/Disagree scale. The "No opinion" option can be added to this scale when the topic is something that requires a certain degree of knowledge, such as politics or current affairs.

17. Choose the right number of points on your rating scale. How many points to use depends upon the degree of discrimination you seek and the ability of respondents to distinguish between the points. Scales with seven points and higher give you greater separation and reduce the risk that items being rated will receive many scores close together. Odd numbered scales maintain the neutral midpoint. The stronger your end point labels, the more points are required on the scale to create the proper separation. The 0-10 scale is popular because it is easy to understand.

18. Know when to use labels for your scale points. It's generally advisable to label all points when it is a bipolar 5-pt scale, such as very unsatisfied/somewhat unsatisfied/neutral/somewhat satisfied/very satisfied. Label only the end points for unipolar scales (such as "Not at all Important" to "Extremely Important") and scales with 7 or more points. The reasoning is that if you try to label the midpoints, there is a risk of mislabeling them and creating uneven intervals, or of respondents having different interpretations of the labels.

19. Be careful not to use too few answer choices or categories in your questionnaire design. For example, break out "customer service" into "customer service friendliness" and "customer service helpfulness". You can always combine them into one "customer service" category during analysis if necessary, but there's no way to split it out after the fact. There's a danger of having too many choices also, so use this approach selectively.
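
Combining the split categories afterwards is straightforward. As a minimal Python sketch with made-up 1-5 ratings (the two lists stand in for hypothetical survey columns):

    # Ratings collected as two separate questions.
    friendliness = [5, 4, 4, 3, 5]
    helpfulness  = [4, 4, 3, 3, 4]

    # Combine into a single "customer service" score per respondent during
    # analysis; the reverse (splitting one question in two) isn't possible.
    customer_service = [(f + h) / 2 for f, h in zip(friendliness, helpfulness)]
    print(customer_service)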

20. Make it easy to organize open-ended responses. If you are asking open-ended questions where the respondent might provide a list of things (such as brands they are aware of or reasons they like your organization), provide a group of individual comment boxes rather than one big box. This makes it easier for you to distinguish the different items when tabulating results.

21. Don't forget questions that will help you analyze, classify, and make better sense of your data. Demographics are commonly used for this purpose, but also consider characteristics such as active vs. inactive customers and new vs. long-time customers. You can then use a graph to compare the results across these segments.
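
For example, a few lines of Python (pandas) can produce the numbers behind such a comparison graph; the column names and values below are hypothetical:

    import pandas as pd

    # Hypothetical results: one row per respondent.
    df = pd.DataFrame({
        "segment":      ["new", "long-time", "new", "long-time", "new"],
        "satisfaction": [4, 5, 3, 4, 4],   # 1-5 rating
    })

    # Average satisfaction for each segment.
    print(df.groupby("segment")["satisfaction"].mean())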

22. Don't give too much detail in your message inviting people to your survey. If you are too specific, you might bias the results by getting only respondents who are interested in the topic. For example, if you are surveying about interest in a new product and say too much about it in your message, people who might not be interested in the new product will just choose not to take your survey. Even simply saying your survey is about a new product could bias the results because people not interested in any new products will not respond. It's best to be as non-specific as possible, while still providing just enough information to entice them to participate.

As this article perhaps shows, designing a survey properly is not as simple as it first seems. We believe that our advice here is your key to designing a successful survey. But we also acknowledge that it could be difficult and time-consuming to put these ideas into practice and adapt them to your specific situation. For help with this, let us personally apply our expertise to your survey and save you the time, effort, and worry. Learn about our affordable survey writing service and how we will make the survey experience of your respondents the best it can be.

View our library of Survey Question Examples

