Our Survey Says: November 2018

Anthony B. Masters
Dec 8, 2018

We want to know what people think, what attitudes they hold and what values they possess. Consequently, survey statistics are often used in political debates and media reports.

However, survey results can be misunderstood, misinterpreted or misused. This article looks at examples of such errors in British media publications, as well as on social media, in November 2018.

In short

Self-selecting surveys at ITV: The broadcaster used a Twitter ‘poll’ in some of its reports. Self-selecting surveys are not reliable measures of public opinion.

Sample quality: Two separate articles questioned the statistical theory behind using opt-in internet panels for polling.

Social stories: A Survation poll from March 2018 was posted on Twitter as if it were recently published. One account made numerous errors on EU referendum polling.

ITV Cites a Self-Selecting Survey

Self-selecting surveys, such as those on Twitter and Facebook, are not reliable measures of public opinion.

The broadcaster ITV quoted a Twitter survey:

According to a poll by Good Morning Britain, 56% of people want a second referendum on the UK’s EU membership. Over 165,000 people voted in the Twitter poll, sparking fierce debate between Brexiteers and Remainers.

In such surveys, there is self-selection bias: the people who see and choose to answer the survey are typically unrepresentative of the whole population (such as GB or UK adults).

Self-selecting samples have no controls, no weights and no credibility.
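To see why a large vote count offers no protection, here is a minimal simulation in Python; the population share and response rates are illustrative assumptions chosen for this sketch, not estimates from any real survey.

```python
import random

random.seed(2018)

# Hypothetical population: 48% support the option in question.
POPULATION_SUPPORT = 0.48
N_POPULATION = 1_000_000

# Illustrative assumption: supporters are twice as likely
# to see and answer the Twitter 'poll' as everyone else.
P_RESPOND_SUPPORTER = 0.10
P_RESPOND_OTHER = 0.05

votes_for, votes_total = 0, 0
for _ in range(N_POPULATION):
    supporter = random.random() < POPULATION_SUPPORT
    p_respond = P_RESPOND_SUPPORTER if supporter else P_RESPOND_OTHER
    if random.random() < p_respond:
        votes_total += 1
        votes_for += supporter  # True counts as 1

print(f"True population support: {POPULATION_SUPPORT:.0%}")
print(f"Self-selected result: {votes_for / votes_total:.1%} "
      f"from {votes_total:,} responses")
```

Under these assumed parameters, roughly 74,000 responses produce a result near 65%, against a true support level of 48%: a bigger sample only makes the biased answer more precise.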

‘Briefings for Brexit’ discover internet panel polling

At Briefings for Brexit, Dr Edwards suggests that citing an internet panel poll is “a false representation of the facts on which it is based”:

In simple terms, the fast one is pulled when “the panel” metamorphoses into “the public”.

To get technical, the article is based upon the frequentist (or classical) understanding of statistical sampling.

In this framework, we must randomly draw a sample from the entire population we wish to make inferences about (such as GB or UK adults). Every member of the population must have a known and non-zero probability of selection.

Alternatively, we can build a model of public opinion. When we randomly draw a sample from our opt-in internet panel (as one example), we update our beliefs about public opinion, starting from a known prior state, such as a referendum or election result. In statistical language, this is Bayesian inference.
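As a toy sketch of that Bayesian framing (every number below is an illustrative assumption, not any pollster's actual model), a conjugate Beta-Binomial update blends a prior anchored on the 2016 referendum result with fresh panel responses:

```python
# Toy Beta-Binomial update: a prior anchored on a known result,
# updated with hypothetical panel responses.

# Prior centred on the 2016 referendum result (Leave: 51.9%).
# A prior 'strength' of 1,000 pseudo-respondents is an arbitrary
# choice for this sketch.
PRIOR_LEAVE = 0.519
PRIOR_STRENGTH = 1_000
alpha = PRIOR_LEAVE * PRIOR_STRENGTH        # Leave pseudo-counts
beta = (1 - PRIOR_LEAVE) * PRIOR_STRENGTH   # Remain pseudo-counts

# Hypothetical panel sample: 940 of 2,000 respondents say Leave.
n, leave = 2_000, 940

# Conjugacy: the posterior is Beta(alpha + leave, beta + n - leave),
# so the posterior mean is a simple ratio of counts.
posterior_mean = (alpha + leave) / (alpha + beta + n)

print(f"Prior Leave share:     {alpha / (alpha + beta):.1%}")  # 51.9%
print(f"Sample Leave share:    {leave / n:.1%}")               # 47.0%
print(f"Posterior Leave share: {posterior_mean:.1%}")          # 48.6%
```

The posterior sits between the known anchor and the new data; how far it moves is governed by the assumed prior strength, which is the modelling choice doing the work.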

Internet panel polling is a non-probability method: this simply means that not everyone can take part — if you are not a member of the panel, your selection probability is zero. It does not mean that selection within the panel is not random, nor does it imply inaccuracy.

Survation uses both telephones and internet panels for its social and market research, and is a member of the British Polling Council.

The article then falsely asserts that NatCen’s British Social Attitudes survey is “the only research in the UK that is based closely on a random sample”. Here is a non-exhaustive list of other random probability surveys:

- the Labour Force Survey (ONS);
- the Crime Survey for England and Wales;
- Understanding Society (the UK Household Longitudinal Study);
- the British Election Study face-to-face survey.

Spectating Errors

In his article for The Spectator, Robert Tombs makes numerous misrepresentations related to polling:

New opinion polls sporadically appear, none of them based on genuinely random samples, and hence none very reliable.

Non-probability sampling methods, such as polling via an internet panel, can be reliable, based on the historical record. For example, the only two final polls for the EU referendum (with fieldwork including 22nd June 2016) that showed Leave ahead were both internet panel polls (Opinium and Kantar TNS).

The EU’s own polling organisation, Eurobarometer

‘Eurobarometer’ is the name of a survey series, not a polling organisation. The latest of the twice-yearly standard surveys was conducted by Kantar TNS: “The basic sample design applied in all states is a multi-stage, random (probability) one”.

Respondents are aged 15 or over. (Source: European Commission, Eurobarometer 89)

The article continues:

the recent survey for Channel 4 indicated only about a third of those polled wanted to stay in the EU in the event of a ‘No Deal’.

This refers to the Survation internet panel poll on behalf of Renegade Productions, and does not set out the other response options:

3 in 10 either want a delay or do not know. (Source: Survation)

But it cannot seriously be denied that British Euroscepticism (matched in other member countries now)

It is hard to square this “matched” claim with the latest Eurobarometer survey, in which the UK was the only EU member state where a plurality of respondents agreed their country “could better face the future outside the EU”.

This does not mean that support for the EU is universal: merely that the UK appears unique in this regard.

Exclusive Choice and Principle

Kevin Schofield wrote an article for Politics Home, which claimed in its headline:

Just 9% of voters want fresh referendum on EU membership if MPs reject Brexit deal

This claim was then repeated by ‘Stand Up 4 Brexit’ — representing a group of Conservative MPs:

In the Hanbury Strategy poll for Politico, the actual question was:

If parliament rejects the deal to leave the EU negotiated by the U.K. government, what do you think should happen next?

Respondents then chose one from a series of mutually exclusive options.

This is different to asking about the principle of holding a referendum, where public support (in the same poll) for “a public vote on the outcome” was 43%.

You may think something is a good idea, even if you do not think it is the best option.

Social Stories

An account tried to pass off a Survation poll from March 2018 as if it were recently published. This led to criticism from the polling company:

Another Twitter account has made numerous, repeated errors about EU-related polling.

Polls are snapshots of public opinion.

What Anthony Wells of YouGov said in his analysis of EU referendum polling after June 2016 was:

The weight of evidence means that we can be as good as certain that, at least as far as the polls are concerned, Remain is now ahead of Leave.

There are numerous sources of uncertainty in those polls, and that lead could be overturned during an actual referendum campaign.

The claim about Survation is clear cherry-picking: Survation conducted two polls after the one cited, both showing slender Remain leads.

The graph shows Survation polls in 2016 with Don’t Know responses removed. (Source: What UK Thinks EU)

NatCen does not stand for ‘Nat Census’: it is the National Centre for Social Research, and its social research is not a census.

Exit polling is not conducted by NatCen. The analysis is led by Prof Curtice, but the fieldwork is jointly conducted by GfK and Ipsos MORI. The accuracy of exit polling is irrelevant to NatCen’s mixed-mode panel.

Claiming “8% Remain [lead] AFTER every [possible] bias correction applied” is inaccurate. That figure refers to calibrating the raw survey result to counter the increasingly Remain-heavy composition of the panel:

The second half of the table shows what share of the panel report voting for each option in 2016, excluding those who did not vote. (Source: NatCen/What UK Thinks EU)
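As an illustration of what that calibration does (a minimal sketch with hypothetical panel figures; NatCen’s actual weighting scheme is more involved), responses can be re-weighted so that panellists’ recalled 2016 votes match the real referendum result:

```python
# Minimal calibration-weighting sketch, using made-up panel figures.
# The panel is 'Remain-heavy' relative to the 2016 result.
panel_2016 = {"Remain": 0.56, "Leave": 0.44}     # hypothetical recalled vote
target_2016 = {"Remain": 0.481, "Leave": 0.519}  # actual referendum result

# Calibration weight for each group: target share / panel share.
weights = {g: target_2016[g] / panel_2016[g] for g in target_2016}

# Hypothetical new question: share of each 2016 group now saying Remain.
now_remain = {"Remain": 0.88, "Leave": 0.12}

unweighted = sum(panel_2016[g] * now_remain[g] for g in now_remain)
calibrated = sum(panel_2016[g] * weights[g] * now_remain[g] for g in now_remain)

print(f"Unweighted Remain share: {unweighted:.1%}")  # 54.6%
print(f"Calibrated Remain share: {calibrated:.1%}")  # 48.6%
```

In this toy example, the weighting pulls the headline Remain share down: the calibration counters the Remain-heavy composition of the panel, rather than manufacturing a Remain lead.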

This was a non-exhaustive look at how survey data was used (or misused) in British public debate during a single month.


Anthony B. Masters

This blog looks at the use of statistics in Britain and beyond. It is written by RSS Statistical Ambassador and Chartered Statistician @anthonybmasters.