After listening to Anthony Reuben’s talk at the Royal Statistical Society conference in Belfast, I thought a lot about “what we could do without”.

An oft-confusing phrase seen in articles and discussions of opinion polling is ‘polling at’. This article looks at the problems with that phrase.


Two numbers would haunt [a Tory] every single day: 24. That’s the percentage Labour was polling at the beginning of the 2017 general election campaign.

Aside from 24 being one number, 24% was Labour’s lowest Westminster vote intention estimate after the general election had been called. It was the estimated vote share from a single YouGov poll, conducted on 18th–19th April 2017.

Using all polling data from April 2017, polling averages and other methods of estimation do not suggest Labour’s vote intention share was quite that low:

The increase in Labour support throughout the campaign was well-tracked. (Image: Polling Observatory)

House effects hunting

Survey research is difficult. In a changing political environment, polling companies can make many different, justifiable methodological choices. Those choices then affect their vote intention estimates.

For instance, when asking people who they intend to vote for: which parties should the company provide as potential answers? Including parties with few candidates could lead to over-estimates of support for those smaller parties. Consequently, the company would under-estimate vote intention shares for larger parties.

Should you ask people how they voted last time (to make the sample more representative), given that some people may not accurately recall their past vote? There are also technical differences in weighting procedures.

The combined effects of these methodological choices are called ‘house effects’ (since polling companies are often referred to as ‘houses’).

The University of Southampton’s Polling Observatory has estimated house effects in recent polling:

We observe some large house effects. (Image: Polling Observatory)

Companies may provide different estimates due to their different methods.

Sampling variability is an inherent cost of conducting a survey: some polls must show unusually high (or low) support for particular parties.
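To give a sense of the scale of sampling variability, here is a minimal sketch of the standard 95% margin of error for a simple random sample. The figures (a 24% estimate from 1,500 respondents) are illustrative assumptions, not taken from any specific survey, and real polls use weighting that widens this interval:

```python
import math

# Hypothetical example: a poll of n = 1,500 respondents estimates
# a party's support at 24%. Illustrative numbers only.
p = 0.24   # estimated vote share
n = 1500   # sample size

# 95% margin of error under simple random sampling:
margin = 1.96 * math.sqrt(p * (1 - p) / n)

low, high = p - margin, p + margin
print(f"24% +/- {margin:.1%} -> roughly {low:.0%} to {high:.0%}")
```

Even before weighting and house effects, two polls of the same population can plausibly differ by a few percentage points on this arithmetic alone.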

The temptation for partisans is to select the single point estimate best suited to their argument (such as: ‘my party is doing well, we are polling at X%’). Additionally, journalists may simplify stories by referring only to the central estimate of one survey.

Rather than representing the centre of estimated support, that single point estimate could sit at an extreme.

Uncertainty should be offered, and not hidden.

Reporting should refer to all the most recent polls, or to their average or range, rather than to a single point estimate of public opinion. Real vote intention could be somewhat different to what one company estimates a party is ‘polling at’.
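The suggestion above can be sketched in a few lines. The five poll estimates here are hypothetical, standing in for recent vote intention figures from different companies:

```python
# Hypothetical vote intention estimates (percentage points) from five
# recent polls by different companies -- illustrative numbers only.
estimates = [24, 25, 26, 27, 29]

average = sum(estimates) / len(estimates)
spread = (min(estimates), max(estimates))

# Reporting the average and range conveys more than one cherry-picked poll.
print(f"Average: {average:.1f}% | Range: {spread[0]}% to {spread[1]}%")
```

Quoting “polling at 29%” (or 24%) from this set would be technically true of one survey, while the average and range tell a more honest story.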

This blog looks at the use of statistics in Britain and beyond. It is written by RSS Statistical Ambassador and Chartered Statistician @anthonybmasters.
