After listening to Anthony Reuben’s talk at the Royal Statistical Society conference in Belfast, I thought a lot about “what we could do without”.
An oft-confusing phrase, seen in articles and discussions of opinion polling, is ‘polling at’. This article looks at the problems with this phrase.
The author Owen Jones stated on Twitter:
Two numbers would haunt [a Tory] every single day: 24. That’s the percentage Labour was polling at the beginning of the 2017 general election campaign.
Aside from the fact that 24 is one number rather than two, 24% was Labour’s lowest Westminster vote intention estimate after the general election had been called. It was the estimated vote share from a single YouGov poll conducted on 18th–19th April.
House effects hunting
This is just one example of many on social media, but why can referring to a single point estimate of public opinion inadvertently mislead?
Survey research is a difficult task. In a complex political environment, polling companies face many different, justifiable methodological choices, and these choices affect their vote intention estimates.
For instance, when asking people who they intend to vote for: which parties should the company offer as potential answers? Including parties with few candidates could lead to over-estimates of support for those smaller parties. Consequently, the company would under-estimate vote intentions for larger parties.
Should you ask people how they voted last time (in order to make the sample more representative), given some people may not be able to accurately recall their past vote? There are technical differences in weighting procedures too.
The combined effects of these methodological choices are called ‘house effects’ (since the companies themselves are often referred to as ‘houses’).
The University of Southampton’s Polling Observatory has estimated house effects in recent polling:
Companies may provide different estimates due to their different methods.
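One simple way to see what a house effect is: take each company’s average deviation from the cross-company mean on the same quantity. The sketch below does this with entirely hypothetical company names and vote shares, purely for illustration (real house-effect estimates, such as the Polling Observatory’s, use far more sophisticated models).

```python
# A minimal sketch of the idea behind house effects: each company's
# average deviation from the cross-company mean for one party's share.
# All company names and figures are hypothetical, for illustration only.
from statistics import mean

# hypothetical vote-intention estimates (%) for one party, by company
polls = {
    "Company A": [40, 41, 39],
    "Company B": [43, 44, 42],
    "Company C": [38, 39, 38],
}

# pool every estimate to get an overall average across companies
overall = mean(x for xs in polls.values() for x in xs)

# a company's 'house effect' is how far its own average sits from that
house_effects = {c: round(mean(xs) - overall, 1) for c, xs in polls.items()}

for company, effect in house_effects.items():
    print(f"{company}: {effect:+.1f} points relative to the average")
```

Here ‘Company B’ would consistently read a couple of points higher than the pack, and ‘Company C’ a couple of points lower, even though all three may be making defensible methodological choices.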
As an inherent cost of conducting a survey, sampling variability means that some polls will show unusually high (or low) support for particular parties.
The temptation for partisan people is to select the single point estimate best suited to their argument (such as ‘my party is doing well, we are polling at X%’). Additionally, journalists may simplify stories by referring only to the central estimate of one survey.
Rather than representing the centre of estimated support, a single point estimate may come from the extremes of the distribution.
Uncertainty should be offered, and not hidden.
Rather than a single point estimate of public opinion, we should refer to all the most recent polls, their average, or their range. Real vote intentions could be somewhat different from what one company estimates a party is ‘polling at’.
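Summarising recent polls this way takes only a few lines. The sketch below reports the average and the range of a set of hypothetical recent vote shares (the figures are invented for illustration, not taken from any real polls).

```python
# A minimal sketch of summarising recent polls rather than citing one:
# report the average and the range. All figures are hypothetical.
from statistics import mean

recent_polls = [38, 41, 40, 43, 39]  # hypothetical vote shares (%)

print(f"average: {mean(recent_polls):.1f}%")
print(f"range:   {min(recent_polls)}%-{max(recent_polls)}%")
```

Reporting ‘around 40%, with recent polls between 38% and 43%’ conveys both the central tendency and the uncertainty, where ‘polling at 43%’ alone would not.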