Imagine you are taking a survey. That survey offers some hypothetical scenario and asks what you would do. You may say you would buy a new product, or vote for a new party. If that hypothetical scenario becomes real, do your actual actions match your survey response?

This article looks at hypothetical bias in survey research.

What is hypothetical bias?

In environmental economics, it is important to understand people’s willingness to pay — how much people value their environment. Similarly, in competition analysis, we need to understand how people will respond to price changes or new entrants.

“Contingent valuation” means asking people directly what they would be willing to pay, or willing to give up. Choice modelling is another family of techniques: respondents may rank options, for instance, or choose from a fixed set of alternatives.

Hypothetical bias arises when the value people state in the survey’s hypothetical scenario exceeds what they are actually willing to pay in laboratory or field experiments, a pattern documented by environmental economist John Loomis.

It is easy to say you will pay £500 in a survey; it is less easy to hand £500 over. There is a difference between saying and doing.

Bryan Buckley explains what willingness to pay means. (Video: Bryan Buckley, 2013)

How do we deal with hypothetical bias?

In 2009, Mark Morrison and Thomas C. Brown examined three strategies for mitigating hypothetical bias.

One strategy is ‘cheap talk’. Cheap talk involves alerting respondents to the issue of hypothetical bias before the hypothetical questions are asked, so as to reduce the bias in their subsequent answers.

Another strategy is to follow the hypothetical question with a certainty question. A certainty scale is typically used, with the lowest score representing ‘very uncertain’ and the highest ‘very certain’. Different scales and cut-off points have been used across studies.
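As an illustration, certainty follow-ups are often used to recode responses: a stated ‘yes’ only counts if the respondent’s certainty meets some cut-off. A minimal sketch in Python, with made-up data and an assumed cut-off of 8 on a 10-point scale (real studies use varying scales and thresholds):

```python
# Hypothetical survey responses: a stated willingness to pay,
# plus a certainty score on a 1-10 scale.
responses = [
    {"would_pay": True, "certainty": 9},
    {"would_pay": True, "certainty": 5},
    {"would_pay": False, "certainty": 10},
    {"would_pay": True, "certainty": 8},
]

CUT_OFF = 8  # assumed threshold: only confident 'yes' answers count


def recode(response, cut_off=CUT_OFF):
    """Treat uncertain 'yes' answers as 'no' to mitigate hypothetical bias."""
    return response["would_pay"] and response["certainty"] >= cut_off


raw_yes = sum(r["would_pay"] for r in responses)
recoded_yes = sum(recode(r) for r in responses)
print(raw_yes, recoded_yes)  # 3 stated 'yes'; 2 remain after recoding
```

The choice of cut-off matters: a stricter threshold shrinks the stated willingness to pay further, so researchers typically report results under several cut-offs.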

Finally, there is ‘dissonance minimisation’. One tactic is telling respondents that an actual auction will follow the hypothetical one. Consequently, people answer the hypothetical survey question more like they would act, because they do not wish to give different, dissonant answers.

What other examples of hypothetical bias are there?

Hypothetical bias casts its shadow over public opinion research.

Voters say in surveys that they wish to hold corrupt politicians accountable. However, looking at vote choice across countries, information about corrupt politicians has little effect on actual votes, as Trevor Incerti has concluded.

Anthony Wells of YouGov has documented numerous instances where hypothetical questions about new parties were found wanting by reality. One example is repeated here:

  • At the 1999 European elections, two former Conservative MEPs set up a “Pro-Euro Conservative party”. A hypothetical MORI poll asked how people would vote in the European elections “if breakaway Conservatives formed their own political party supporting entry to the single European currency”. 14% of those certain or very likely to vote said they would vote for the new breakaway pro-Euro Conservatives. In reality, the Pro-Euro Conservative party won 1.3% of the vote.

The reasons why hypothetical party support differs from reality are analogous to those for willingness to pay. There is a long distance between the radio button and the ballot box.

The constraints people imagine when answering the hypothetical question may differ from those they face when the real vote choice presents itself. Such questions also focus attention on what is hypothetically different, such as party leadership, leading people to overestimate changes. There is often more than one factor driving our vote.


Caution when reporting hypothetical questions

With two British parties currently holding leadership contests, and another appearing to change its stance on a major issue, hypothetical vote intention questions currently abound.

As an example, this scenario was asked in a recent YouGov survey:

This question was cited, without the accompanying wall of text, by Liberal Democrat leadership candidate Jo Swinson. (Image: YouGov)

Journalists should avoid saying that a hypothetical survey result is what “would” happen: surveys cannot tell the future. Survey researchers, in turn, are advised to convey the limitations of asking such questions.

Hypothetical vote intention questions provide an indication, but reality is often disappointing.

This blog looks at the use of statistics in Britain and beyond. It is written by RSS Statistical Ambassador and Chartered Statistician @anthonybmasters.
