Research Methods: Building a Survey

Building a survey isn’t as easy as you might think. Simple things like layout and formatting (like those are simple?) can have a large impact on the results.

So to start, let’s discuss the directions. You need ease of use and clarity. If something looks difficult or unclear, it can have a drastic effect on response rates. We also have to meet federal regulations for research with human subjects regarding confidentiality and/or anonymity, so the directions need to disclose the confidential or anonymous nature of the survey. This builds trust, which encourages greater participation and more honest responses.

The directions should also make clear that the survey is voluntary, and give an accurate estimate of how long it will take to complete. Each section or subsection should include a copy of the directions with its heading.

The final point on the directions is the one I find most interesting. The directions should show appreciation and gratitude to the people taking the time to fill out the survey.

There are three ways of constructing items (Dillman, 2007):

  • Open-ended, with no choices provided
  • Close-ended, with ordered responses (e.g., strongly agree to strongly disagree)
  • Close-ended, with unordered responses

Common mistakes in writing survey questions include:

  • Using ambiguous wording and/or response choices
  • Making items overly complex
  • Writing double-barreled items that ask more than one thing
  • Making items too long
  • Not giving respondents enough information to answer an open-ended question

The way you construct your items depends on which one of the three types of questions you use.

Starting the survey with the easiest questions gets people comfortable in answering the questions and encourages them to continue.

The authors recommend using ordered responses because of the amount and type of analysis they allow. Numerical responses let you do math on the answers.
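As a minimal sketch of the kind of math ordered numeric responses allow (the responses here are made up for illustration):

```python
# Hypothetical responses on a 7-point Likert scale
# (1 = strongly disagree, 7 = strongly agree)
responses = [5, 6, 4, 7, 5, 3, 6, 5]

n = len(responses)
mean = sum(responses) / n

# Sample standard deviation (n - 1 in the denominator)
variance = sum((r - mean) ** 2 for r in responses) / (n - 1)
std_dev = variance ** 0.5

print(f"n={n}, mean={mean:.2f}, sd={std_dev:.2f}")
```

With open-ended or unordered responses, none of this arithmetic is possible without first coding the answers into categories by hand.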

One of the ordered response types that can be used is a Likert-Type Scale. While most of them are 5-point, my professor prefers a 7-point scale because of the greater sensitivity in the results.

Some people suggest 11 points, but I cannot see how adding more numbers helps the respondent; it just confuses them about how finely to calibrate their response.

You have to have an odd number of points, with a midpoint, to have a continuum. That is why these scales all use odd numbers.

Frequency, quality, or satisfaction can be the type of variable measured via the Likert scale. Here is a link with some word suggestions:

http://karlkapp.blogspot.com/2010/09/likert-type-scales-examples-samples-and.html

Ensure that each scale header sits over only one number, not several (a layout issue).

For a semantic differential, you need terms that are actually bipolar, so people can use them as a continuum. Be careful about putting words like “very” in front of a term: since extremes are discouraged in some cultures, adding “very” may keep people from using the end points, when without it they might.

As for survey length, you can get in trouble for being too long or too short; there is a psychology to getting good results. A short survey isn’t always taken seriously, but most of the time being too long is the problem. Survey length is usually best at 5-10 minutes, 15 max.

For longer surveys, people might fatigue at the end and just circle items rather than respond thoughtfully. So if you have a multi-section survey, reordering the sections for different respondents ensures that each section absorbs some of the fatigue effect, and thus the bias is balanced across the answers overall.
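A sketch of that counterbalancing idea, assuming the survey sections can be delivered in any order (the section names here are invented):

```python
import random

SECTIONS = ["Demographics", "Job satisfaction", "Communication climate", "Feedback"]

def section_order(respondent_id, sections=SECTIONS):
    """Return a randomized section order for one respondent.

    Seeding with the respondent's ID keeps the order stable if
    the same person reloads the survey."""
    rng = random.Random(respondent_id)
    order = list(sections)
    rng.shuffle(order)
    return order

# Different respondents get different orders, so no single section
# always sits at the fatigued end of the survey.
for rid in (101, 102, 103):
    print(rid, section_order(rid))
```

Across many respondents the fatigue effect is spread roughly evenly over the sections rather than concentrating in whichever one happens to come last.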

Steps for crafting a survey (Measure twice, cut once)

  • Draft and refine items of interest according to the principles above
  • Create draft of instrument with clear directions and user-friendly layout
  • Pilot the draft with a few people to look for confusing or vague items, and record the time it took to complete the survey.
  • Revise and finalize the instrument

Now we will look at electronic delivery of surveys. Some of the advantages are:

  • Cost
  • Convenience/accessibility
  • Data doesn’t have to be hand entered for analysis
  • Easier to send multiple times to increase response rate

Conventional rule of thumb is that a 25% response rate is good.

Dillman’s tailored design method is an approach designed to achieve response rates of 50-85%. His work is informed by Social Exchange Theory.

“In a social exchange (like this survey) people weigh the rewards to cost ratio and a key element in this dynamic is trust. That is, if someone agrees to an exchange, she/he trusts that, in the long run the rewards of doing something will outweigh the costs.”

Ways of providing rewards for filling out a survey include:

  • Show appreciation and gratitude
  • Make the survey interesting
  • When needed, provide incentives, even token ones
  • Communicate that the respondents’ responses are appreciated and valued

Ways to reduce the perception of cost:

  • Avoid overly intrusive or embarrassing questions
  • Make the layout and directions clear and easy
  • Keep survey length in mind
  • Avoid overly complex items
  • Communicate the confidentiality of the survey

Other ways of establishing trust are by providing a token of appreciation, and sponsorship by legitimate authority.

Follow-up contact is another important element of getting higher response rates.

Reference

Cook, D., Patterson, J., & Downs, C. (2004). Final analysis and interpretation. In C. Downs & A. Adrian (Eds.), Assessing organizational communication: Strategic communication audits (Location 990-1281). New York, NY: The Guilford Press.

2 thoughts on “Research Methods: Building a Survey”

  1. I’ve been working in marketing research for more than 20 years and thought your summary of the field was well done. Still, I couldn’t resist chiming in with a few thoughts:

    Estimating the length of a survey is more difficult than one might expect; generally, the person writing the survey will under-estimate the amount of time needed. It’s often a good idea to have someone who is NOT involved with the project take the survey and tell you how long it took. You might not like the answer, but it’s probably more accurate.

    I’ve had multiple in-depth discussions (some lasting for an hour or more) about the number of points to include on a scale. I agree with your recommendation to use an odd number of points so that a mid-point is available but some want to force the respondent to go one way or the other. Use of a 10-point scale makes people think of grades but I don’t think that really helps; in fact, I’d argue it causes problems because it makes a “6” mean failure when it’s actually a positive.

    If you want to calculate averages, you should only label your end-points. Different people will see a different distance between Agree and either Strongly Agree or Somewhat Agree but we can generally get consistent feedback on the difference between a 6 and a 7 or 5.

    Sensitive questions are generally asked last because some respondents will hang up (or close the survey if it’s online) if you ask about age, income, sexual orientation, politics or religion. So, you want to get as many answers as possible before you ask those questions.

    Research on research has shown an incentive is helpful, although it doesn’t really matter what incentive you provide. I’ve been involved in projects where every respondent was sent $1 or $5 and we saw no difference in response rates, although sending a $2 bill worked a bit better because of the novelty. I’ve also seen that a drawing for a $50 QT card (where only one person actually wins) works just as well. However, if you are surveying government workers or people in certain other fields, ethics policies dictate that they cannot receive an incentive, so sending anything can actually cause you problems. Therefore, it’s best to have a drawing where the respondent has to opt in.

