A letter from the leader of the Liberal Democrats claims the party is ahead of Labour in “many polls”.
This article examines that claim, and then looks at house effects in opinion polling.
“Winning up and down”
As part of the party’s leafleting campaign, Jo Swinson MP (East Dunbartonshire, Liberal Democrats) claimed that:
In many polls, we are now ahead of Jeremy Corbyn’s Labour Party.
Using the BBC’s poll tracker, we can look at published vote intention polls for general elections in Westminster.
In the 80 published polls with fieldwork ending between 1st June and 21st October 2019, covering either Great Britain or the United Kingdom:
- Eight estimated that the Liberal Democrats were ahead of Labour;
- Two showed ties in the vote intention share estimate;
- The remaining 70 polls estimated Labour were ahead.
All eight of the estimated Liberal Democrat leads and both ties were from YouGov polls. In this period, the largest estimated lead was four points.
Saying the party is ahead of Labour in “many” polls may stretch credible usage of that word — particularly when only 1 in 10 recent polls showed such a lead, and all were from the same research company.
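The arithmetic behind that “1 in 10” figure can be checked directly. A minimal sketch, using only the counts reported above (the underlying poll-level data are not reproduced here):

```python
# Counts of the 80 polls described in the article (fieldwork ending
# between 1st June and 21st October 2019).
polls = {"libdem_ahead": 8, "tied": 2, "labour_ahead": 70}

total = sum(polls.values())
share_libdem_ahead = polls["libdem_ahead"] / total

print(total)               # 80
print(share_libdem_ahead)  # 0.1, i.e. 1 in 10 polls showed a Liberal Democrat lead
```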
In the house effects
As a small part of their overall research, polling companies seek to measure how people intend to vote at the next general election. There are many methodological choices that research companies make which can affect the final estimates:
- Target population: does the survey look only at Great Britain, or does it also include Northern Ireland?
- Sampling method and mode: does the survey randomly contact people by mobile and landline phones, or does the company use an opt-in internet panel? Other methods have also developed, such as internet river sampling.
- Quotas: British opinion polls typically use quota sampling, meaning they have to contact enough people with certain characteristics to ‘fill’ the poll. Which quotas (like age or gender) does the survey use?
- Question wording: How does the company ask the question about how people intend to vote?
- Response options: Which parties does the company offer as answers to that vote intention question? Is there a secondary list, or do people need to write in or say ‘other’ parties?
- Turnout: How does the company ask whether people will vote at the next election? How are those answers used: as a filter, or is the self-reported likelihood taken literally?
- Treatment of Don’t Knows: How are people who say they do not know how they intend to vote treated? Are they removed, or assigned to their recalled past vote?
- Imputation: If people skip or refuse to answer the vote intention question, how does the company treat those missing values?
- Weighting: What weighting does the company use to make the sample look more ‘like’ the target population? Does the company include turnout in its weighting? What statistical procedures does the company use for weighting?
Each of these questions may have multiple, justifiable answers. These decisions are the reason why — beyond random differences — different polling companies show different vote intention estimates. The combined effect of these choices by polling companies is called the house effect.
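As a minimal sketch of how one such choice shapes the final figure, the example below applies simple demographic weighting, where each group’s weight is its population share divided by its sample share. All the numbers are invented for illustration and do not come from any real poll:

```python
# Hypothetical illustration of demographic weighting: every figure here is
# invented, not taken from any polling company's data.

# Age composition of the raw sample versus the (assumed) target population.
sample_share = {"18-34": 0.20, "35-54": 0.35, "55+": 0.45}
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Hypothetical support for 'Party A' within each age group.
party_a_support = {"18-34": 0.40, "35-54": 0.25, "55+": 0.15}

# Weight for each group = population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

unweighted = sum(sample_share[g] * party_a_support[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * party_a_support[g]
               for g in sample_share)

print(round(unweighted, 3))  # 0.235 — the raw sample estimate
print(round(weighted, 3))    # 0.26  — after weighting to the population
```

With young people under-represented in this invented sample, weighting shifts the estimate for a party they favour upwards. Different, equally defensible weighting targets would shift it differently, which is one ingredient of a house effect.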
The Polling Observatory have produced an analysis of house effects in recent polling: