The author Richard Seymour wrote in The Guardian about the polling industry, claiming that research companies do not ‘measure’ public opinion.

This article examines Seymour’s various claims.

“A 2% lead”?

According to Seymour, one poll gives the Tories a 10-point lead, whilst another gives Labour a 2% lead.

Looking at the BBC’s poll tracker, the last poll to estimate a 2-point Labour lead was by Opinium, at the start of July. What Seymour appears to be referring to is a hypothetical vote intention question asked by ComRes on 4th–6th September, about the following scenario:

A General Election is held after extending the Brexit deadline beyond the 31st of October

Hypothetical questions of this kind have their own problems, since people cannot fully predict their future behaviour, and should not be compared directly to ‘standard’ vote intention questions.

The BBC have now produced their own poll aggregator. (Source: BBC)

Nonetheless, there is a wide range of Conservative lead estimates.

The article continues:

Polling was once a specialised sector of market research. Now it is a niche area of the much bigger data industry, using the same Bayesian techniques of probabilistic analysis that stock markets employ in financial forecasting.

Opinion polling remains part of market and social research — voluntarily regulated by the Market Research Society.

To give a very brief primer: Bayesian statistics treats probability as a subjective evaluation, to be updated with new information, whereas frequentist statistics treats probability as the long-run frequency of an event occurring. The differences between Bayesian and frequentist statistics are philosophical. As a thought experiment, how should we interpret the ‘long-run frequency’ of events that only happen once, such as a political party winning a particular election?
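To make that distinction concrete, here is a minimal sketch of Bayesian updating for a party’s vote share, using a Beta-Binomial model. The prior and the poll figures are illustrative assumptions, not real polling results.

```python
# A minimal sketch of Bayesian updating with a Beta-Binomial model.
# The prior and the poll figures below are illustrative, not real data.
from scipy import stats

# Prior belief about a party's vote share, centred near 30%.
prior_a, prior_b = 30, 70

# A hypothetical new poll: 1,000 respondents, 320 supporting the party.
sample_size, supporters = 1000, 320

# Conjugate update: Beta prior plus Binomial data gives a Beta posterior.
posterior = stats.beta(prior_a + supporters,
                       prior_b + (sample_size - supporters))

print(f"Posterior mean share: {posterior.mean():.1%}")
print(f"95% credible interval: {posterior.ppf(0.025):.1%} "
      f"to {posterior.ppf(0.975):.1%}")
```

A frequentist analysis of the same poll would instead report the sample share with a confidence interval, making no use of a prior belief.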

At present, Bayesian statistics has three principal roles within opinion polling:

  • Uncertainty: providing uncertainty estimates for internet panel polls, based on how much the sample varies;
  • Aggregation: different ways of pooling many polls together (a simple sketch follows this list);
  • Seat estimates: building a model of individual vote intention based on respondents’ demographics and geography, which is then used to estimate constituency results.
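To illustrate the aggregation point, the sketch below pools several polls using a simple inverse-variance weighted average. This is a deliberately basic, non-Bayesian illustration with made-up figures; real aggregators, Bayesian or otherwise, also account for factors such as fieldwork dates and differences between polling companies.

```python
# A minimal, non-Bayesian sketch of pooling polls: an inverse-variance
# weighted average of vote share estimates. All figures are illustrative.

# Each poll: (estimated vote share, sample size).
polls = [(0.33, 1_000), (0.35, 1_500), (0.31, 2_000)]

def pooled_share(polls):
    """Weight each poll by the inverse of its approximate sampling variance."""
    total_weight = 0.0
    weighted_sum = 0.0
    for share, n in polls:
        variance = share * (1 - share) / n  # binomial approximation
        weight = 1 / variance
        total_weight += weight
        weighted_sum += weight * share
    return weighted_sum / total_weight

print(f"Pooled vote share estimate: {pooled_share(polls):.1%}")
```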

None of these techniques are similar to Bayesian methods in financial time series forecasting. It is for Seymour to elaborate on his meaning here.

Public non-attitudes

As Prof Sturgis (LSE) describes in the 2019 collection Sex, Lies and Politics, political scientists have long studied public non-attitudes, or pseudo-opinions.

In a British survey by Ipsos MORI in 2006, 11% of respondents offered a view on the fictitious ‘Agricultural Trade Bill’. These responses were not the result of mentally flipping a coin: a higher proportion of people intending to vote Conservative gave an opinion than non-voters. Also, those who self-reported being very interested in politics were more likely to offer a pseudo-opinion than those who were not interested at all.

Of those who gave a view, the graph suggests Conservatives opposed the fictional Agricultural Trade Bill. (Source: Academia.edu)

Some people look for clues in the question, and then offer responses consistent with their political beliefs. This leads to two considerations: how questions are ‘framed’ is important, and some responses will be based on informed guesses.

Concerns about non-attitudes are especially pertinent when the subject matter is little-known, as opinions expressed in such surveys may be weaker than naive interpretations suggest.

Exogenous opinion

Seymour states: “Formulating a preference, or even purchasing an item, is quite unlike casting a vote.” Yet party identification, which concerns people formulating a preference for a political party, is an incredibly well-studied area of political science.

The author simultaneously asserts that polling companies are ‘producing’ opinion and “scrabbling to update their models”. These claims conflict: updating models implies that polling companies are measuring something real and external, contrary to the central thesis of Seymour’s article.

The Brexit Party’s performance in the 2019 European elections was supposedly dependent upon a few thousand people sharing poll results on social media; to Seymour, it is “difficult” to see otherwise. That success will need to be properly studied. As a putative explanation, the party was led by a highly prominent campaigner for leaving the European Union, in an election that would not have occurred had the country already left following the result of the referendum on the matter three years earlier.

Polling companies overestimated Liberal Democrat support in 2010, by 3 to 6 points. National opinion polls cannot “predict” the “outcome” of a party losing five seats, as these polls measure vote intention share across the nation (usually Great Britain), not in individual constituencies.

It is important that caveats are properly provided when surveying public opinion. Surveys provide estimates. Question wordings matter. Methodology matters.
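As a small illustration of ‘surveys provide estimates’: the sketch below computes the conventional 95% margin of error for a single poll figure. It assumes simple random sampling, which internet panel polls do not strictly satisfy, so it is a rough guide rather than a definitive calculation.

```python
# A minimal sketch of the conventional 95% margin of error for one estimate.
# Assumes simple random sampling; the figures are illustrative.
import math

def margin_of_error(share, sample_size, z=1.96):
    """Approximate 95% margin of error for an estimated proportion."""
    return z * math.sqrt(share * (1 - share) / sample_size)

share, n = 0.40, 1_000  # e.g. a party on 40% in a poll of 1,000 respondents
print(f"Estimate: {share:.0%}, plus or minus {margin_of_error(share, n):.1%}")
# Roughly 40%, plus or minus 3 percentage points.
```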

Contemporary British politics should be adequate demonstration that consensus need not exist — and social research highlights how split public opinion can be.

This blog looks at the use of statistics in Britain and beyond. It is written by RSS Statistical Ambassador and Chartered Statistician @anthonybmasters.

