
A Primer on Creating and Using Surveys


Opinion polls (surveys) are used extensively to understand almost everything that cannot be looked up as a scientific fact. Polls are reported every day on subjects ranging from what flavor of toothpaste kids like most (Colgate probably already funded this poll) to what voters think is the best way to get out of an economic recession.

Opinion polls help ascertain the policies, products, services, and leaders that affect our daily lives. In litigation, surveys are often used in trademark, unfair advertising, and other business disputes to assist in determining issues involving customer behavior.

Sources of Data Bias

Like any tool, surveys can be misused and misinterpreted. Survey and poll inaccuracies generally fall into the following five categories:

1. Sampling error – This entails correctly identifying the represented population and selecting a sample large enough to obtain reliable results. Our online interactive tool can help calculate a proper sample size (a sketch of the underlying calculation appears after this list).
2. Coverage bias – The method used to collect the sample may not be representative of the population to which the conclusions are directed. For example, assume a poll is being conducted by phone. Some people have only cell phones and no landline. Yet, it is unlawful in the United States for pollsters to make unsolicited calls to phones where the owner may be charged for taking the call. Thus, cell phone-only users will not be included in polling samples conducted via phone. If the subset of the population with landline phones differs from the rest of the population, the poll results will be skewed.
3. Non-response (selection) bias – Some people may not answer calls from strangers, or may refuse to participate, perhaps because of the time involved. Because of this selection bias, the characteristics of those who agree to be interviewed may be significantly different from those who decline.
4. Response bias – Answers given by respondents may not reflect their true beliefs, perhaps because of embarrassment. This bias can sometimes be controlled by the wording and/or order of poll questions.
5. Wording and order of questions – The (i) wording of questions, (ii) order of questions, and (iii) number and form of alternative answers offered may influence poll results.
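
As a rough illustration of the sample-size arithmetic behind item 1, the sketch below uses the standard normal approximation for estimating a proportion. The confidence levels, margin of error, and optional finite-population correction are illustrative assumptions, not figures taken from Fulcrum’s online tool.

```python
import math

# Two-sided z-scores for common confidence levels (normal approximation).
Z_SCORES = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def sample_size(margin_of_error, confidence=0.95, proportion=0.5, population=None):
    """Minimum sample size for estimating a proportion to within +/- margin_of_error.

    Uses the conservative default p = 0.5 (maximum variance). When a finite
    population size is supplied, the finite-population correction is applied.
    """
    z = Z_SCORES[confidence]
    n = (z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite-population correction
    return math.ceil(n)

if __name__ == "__main__":
    # Roughly 1,068 respondents for +/- 3 points at 95% confidence...
    print(sample_size(margin_of_error=0.03))
    # ...and fewer when the target population itself is small (e.g., 5,000 customers).
    print(sample_size(margin_of_error=0.03, population=5000))
```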

When surveying attitudes, opinions, and/or projections of future behavior, collection bias is difficult to control. Responses to questions can vary based on factors such as:

1. The respondent’s perception of the person asking the questions (most respondents react to the person as well as the question), and
2. The setting or environment in which the questions are asked (e.g., filling out a paper survey in one’s own living room, versus answering a telephone survey, versus being stopped in a shopping mall).

The technical aspects of data collection (i.e., the first four items above) are usually handled well (or as well as is possible) by the major national polling organizations (e.g., Gallup, Pew Research, Rasmussen). However, we are surprised by how frequently these technical collection matters are handled poorly by customized surveys done in support of either marketing claims or litigation claims.

Drafting Survey Questions

In spite of these various challenges involving sampling and collection, question wording and question order (i.e., context/placement) are usually the largest source of bias. The goal is to have a clear, consistent, and unbiased meaning and intent for each question. Small question wording/order differences can result in significantly different results between seemingly similar surveys.

Here are examples of potential question biases from presidential election polls and policy issues:

1. When the question is read to respondents, pollsters may or may not include (i) the names of the vice presidential candidates along with the presidential candidates, and (ii) the party affiliation of each candidate. These inclusions or exclusions affect some respondents’ answers. One solution is to phrase the question in a way that mimics the voting experience (i.e., the way the voter would normally see the names when reading the ballot in the voting booth).
2. Studies indicate that respondents are more likely to support a person (i) described as one of the “leading candidates”, and/or (ii) listed at the beginning of the choices rather than towards the end. A suggested solution is to have multiple surveys in which the listing order of the choices is rotated (see the sketch following this list).
3. Policy issues have an even wider range of wording options. For example, when asking whether respondents favor or oppose programs such as food stamps and Section 8 housing grants, should they be described as “welfare” or as “programs for the poor”? Should the 2010 health care reform (i.e., the Patient Protection and Affordable Care Act) be described as “ObamaCare”, “health care reform”, or “health care system overhaul”? Each of these word choices may affect the responses, with the differences varying based on the ethnic and economic demographics of each respondent.
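
As a rough sketch of the rotation approach suggested in item 2 above, the following example assigns each respondent a rotated ordering of the answer choices so that no choice is systematically listed first. The candidate names and the respondent-index assignment scheme are hypothetical.

```python
from collections import deque

# Hypothetical answer choices; any ordered list of candidates or options works the same way.
CHOICES = ["Candidate A", "Candidate B", "Candidate C", "Candidate D"]

def rotated_choices(respondent_index, choices=CHOICES):
    """Return the answer choices rotated according to the respondent's index.

    Cycling through every rotation places each choice in each list position
    equally often, which averages out primacy (first-listed) effects.
    """
    d = deque(choices)
    d.rotate(-(respondent_index % len(choices)))
    return list(d)

if __name__ == "__main__":
    # Respondents 0 through 3 each see a different ordering of the same four choices.
    for i in range(4):
        print(i, rotated_choices(i))
```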

There is substantial research that attempts (i) to measure the impact of question wording differences and (ii) to develop methods that minimize differences in the way respondents interpret what is being asked. Some of the items to consider when formulating survey questions include:

1. Did you ask enough questions to cover all necessary aspects of the issue(s)?
2. Are the questions worded neutrally (without taking sides on an issue)?
3. Is the order of the questions logical? General questions should usually be asked before specific questions. For example, overall job approval should be asked before specific questions are asked that remind respondents about the leader’s successes or failures.
4. Do questions asked early in the survey have any unintended effects on how respondents answer subsequent questions (aka “order effects”)?
5. Are the questions written in clear, unambiguous, concise language to ensure that all respondents, regardless of educational level, understand them?
6. Did you ask one question at a time? Questions that require respondents to evaluate more than one concept (aka double-barreled questions) often lead to respondent confusion, and/or confusion in interpreting the results.

Testing one’s proposed questions can identify challenges early, and thereby avoid wasted costs in a full survey. Commonly used techniques to test proposed surveys are:

1. Pilot tests/focus groups (“pretests”) are conducted on randomly selected small samples from the survey population. These are usually conducted using the same protocols and settings as the survey. Surveyors obtain (i) feedback from the interviewers about the questions, including whether respondents had problems with the wording and/or order of questions, and (ii) estimates of how much time the interview takes.
2. A split sample involves at least two different versions of a question, with each version presented to a subset of the respondents. This technique allows pollsters to compare the impact of differences in question wording and/or order (a minimal comparison sketch follows this list).
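
The following is a minimal sketch of how split-sample results might be compared once collected, using a chi-square test of independence. The response counts, answer categories, and 0.05 threshold are hypothetical assumptions for illustration, not data from any actual survey.

```python
from scipy.stats import chi2_contingency

# Hypothetical split-sample counts: rows are the two question wordings,
# columns are the answer categories (Favor, Oppose, Unsure).
observed = [
    [210, 260, 30],   # Version A wording
    [255, 215, 30],   # Version B wording
]

# Chi-square test of independence: does the distribution of answers
# depend on which wording the respondent happened to receive?
chi2, p_value, dof, _expected = chi2_contingency(observed)

print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
if p_value < 0.05:  # illustrative significance threshold
    print("The wording appears to shift responses; revisit the question.")
else:
    print("No detectable wording effect at this sample size.")
```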

An Example: Recent Rasmussen Survey Flawed Due to Poor Questions

On December 28, 2010, Rasmussen Reports released the results of a poll taken shortly after the Federal Communications Commission (FCC) reached its decision on network (aka net) neutrality. Fulcrum’s recent article provides background on net neutrality and the FCC’s related decision. Rasmussen reported that 54 percent of the country opposed the FCC’s action on net neutrality, while 21 percent supported it.

Rasmussen’s survey consisted of the following four questions:

1. “How closely have you followed stories about Internet neutrality issues?
2. Should the Federal Communications Commission regulate the Internet like it does radio and television?
3. What is the best way to protect those who use the Internet—more government regulation or more free market competition?
4. If the Federal Communications Commission is given the authority to regulate the Internet, will they use that power in an unbiased manner or will they use it to promote a political agenda?”

In evaluating these questions, consider:

1. Once one understands the FCC’s ruling, it becomes apparent that much of the survey does not relate to net neutrality.
2. Respondents were not given background that would ensure all respondents started from a common base of knowledge. In this case, the technical nature of the topic required additional background. The first question does not solve this challenge.
3. The second question leads a respondent to believe that net neutrality means the FCC will regulate the Internet like it does radio and television. With radio and television, (i) the FCC sets and enforces decency standards, and (ii) gives a restricted group of people the right to operate stations in particular places based on auction results. There is no FCC proposal to regulate the Internet in a similar fashion.
4. The third question exploits the lack of understanding of the FCC’s ruling, and thereby presents a false dichotomy. Net neutrality is not more of either of the two choices (i.e., more government regulation or more free market competition); it involves preserving how the Internet functions now and maintaining the competition that already exists. Yet, a respondent is forced to select one of the question’s two options.
5. The fourth question is a hypothetical. Posing the question as a hypothetical allows one to evaluate something that is not being proposed as if it were reasonably conceivable. While the second question inverts the function of net neutrality into its exact opposite, this fourth question suggests that the FCC proposals involve a political motive. Yet, the proposed FCC regulations would give neither the FCC nor the companies supplying and managing the Internet an opportunity to exercise an agenda over what sites and services are delivered over the Internet.

This example illustrates that surveys and their evaluation can be seriously flawed. There is no way that this survey on net neutrality could demonstrate the public’s support, or lack of support, for the FCC’s actual ruling. One should not accept proposed survey results without first checking (i) the methodology and (ii) the questions’ wording and order. Similarly, when evaluating litigation surveys, careful analysis of the sampling techniques, survey instrument, and data analysis is necessary to ensure that the results are not biased in favor of any particular position.



ABOUT THE AUTHOR: David Nolte
Fulcrum Inquiry performs economic and statistical consulting. We prepare and analyze surveys as a means of obtaining data for our consulting assignments when needed information is not otherwise available.

Copyright Fulcrum Inquiry

Disclaimer: While every effort has been made to ensure the accuracy of this publication, it is not intended to provide legal advice, as individual situations will differ and should be discussed with an expert and/or lawyer. For specific technical or legal advice on the information provided and related topics, please contact the author.
