We continue from our previous post, where we discussed how guidelines for CV surveys can generally be broken up into three phases of research:
- Determining a Survey Sample Universe
- Creating the Survey Instrument
- Collecting, Analyzing, and Reporting the Data
This time we will discuss the second phase, creating the survey instrument.
Creating the Survey Instrument
When researchers develop questions for a survey, they need to avoid bias. Lipscomb et al. (2011) discuss several biases that researchers should work to eliminate, including the following, which may affect the results of a willingness-to-pay (WTP) estimate:
- Anchoring or Starting Point Bias – occurs when the survey presents a WTP amount that influences the subsequent WTP amount(s) stated by the survey respondent.
- Range Bias – occurs when the survey introduces a range of potential WTP amounts that influences a respondent’s specified WTP.
- Relational Bias – occurs when the description of the good to be valued includes information about its relationship to other benchmark goods that may influence a respondent’s stated WTP.
- Position Bias – the order in which questions are asked and order in which the answers are provided (for closed-end questions) may influence the outcome.
- Vehicle Bias – the method of payment (e.g., increased mortgage, increased taxes, service fee) may influence a respondent’s WTP.
In addition to avoiding the biases above, a researcher must ensure that the questions are clear and do not lead respondents toward a particular answer. The survey questions should be presented in a variety of formats (e.g., open-ended questions, rating-scale questions, closed-ended [yes/no] questions). The order of answer choices on any given question may also be rotated so that respondents do not always see the same answer first (a type of position bias sometimes referred to as “ordering bias”). Additionally, consistency checks can be included to determine whether respondents answer the same way twice. This may be tested by repeating a question with different wording later in the survey, or by using an open-ended follow-up question that simply asks respondents why they chose a given answer. There are also tests that measure respondents’ confidence in their answers; these usually take the form of certainty-scale questions, in which respondents rate their confidence in an answer on a scale from 1 to 10.
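For survey platforms that support scripting, the answer-choice rotation described above can be automated. The sketch below is a minimal illustration, not part of any cited guideline; the function name and the dollar amounts are hypothetical, and it assumes each respondent gets an independently randomized ordering.

```python
import random

def rotate_choices(choices, seed=None):
    """Return the answer choices in a randomized order for one respondent.

    Randomizing the presentation order across respondents helps mitigate
    position ("ordering") bias, since no single choice is always seen first.
    A per-respondent seed makes each respondent's ordering reproducible.
    """
    rng = random.Random(seed)
    shuffled = list(choices)  # copy so the master list is left unchanged
    rng.shuffle(shuffled)
    return shuffled

# Hypothetical closed-ended WTP answer choices
choices = ["$0", "$10", "$25", "$50", "$100"]
respondent_order = rotate_choices(choices, seed=42)
```

In practice, logging the seed (or the presented order) per respondent also lets the researcher test afterward whether presentation order correlated with the answers chosen.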
Another decision to make during survey design is the method of data collection. Examples include Internet, telephone, mail, and mall-intercept surveys, or any combination thereof. Each method has advantages and disadvantages to weigh.
After the survey has been designed, but before it is administered to the sample, researchers often hold focus groups to determine whether the questions are clear, whether the language contains any bias, and whether there are any other potential problems. Focus groups can be very useful for ensuring that the questions will be understandable to the future survey respondents. Note: If you are conducting a survey that will be used in litigation, you will need to maintain very thorough notes, even at the focus group stage.
While these are just a few examples, they illustrate the degree of detail involved in designing a survey before a researcher can even begin fielding it. For more information, feel free to use our Ask an Expert page for any questions. In our next post, we will discuss the guidelines for collecting, analyzing, and reporting data.
– Abigail Mooney and Sarah Kilpatrick