We often want to know what people think. Surveys offer a mirror onto society. Articles published in newspapers and on digital platforms often cite survey data.
Alongside survey results, the name of the polling company should ideally be included. Journalists should also state when and how the polling was conducted, and how many people were surveyed.
What happens when this critical information is omitted? This article looks at two recent examples.
Good companies lose out
Political opinion polling — whilst highly visible — is a small part of what market research companies do. The Market Research Society estimates political opinion polling accounts for about 1% of research undertaken outside of general elections.
Publishing opinion polls serves to enlighten public debate — to offer that mirror to society — and demonstrate the capabilities of the research company.
Let us look at a recent example involving a now-rare constituency opinion poll. In the 2010–15 Parliament, numerous constituency surveys were paid for by Lord Ashcroft.
Number Cruncher Politics (which is not a member of the British Polling Council) conducted an internet river sample of 509 people living in Brecon & Radnorshire.
The survey itself was innovative. This is one of the first published vote intention polls of a British sub-national area to use internet river sampling. This type of sampling involves finding people as they visit websites and mobile apps, and inviting them via ads or banners to complete your survey. Like fish, respondents are caught in the river and then thrown back.
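The mechanics of river sampling can be caricatured in a few lines of code. The sketch below is purely illustrative and not Number Cruncher Politics' actual procedure: the invitation and response rates, the seed and the function name are all assumptions made up for the example.

```python
import random

def river_sample(visitor_stream, target_n, invite_rate=0.05,
                 response_rate=0.2, seed=42):
    """Illustrative river sampling: intercept people as they visit
    websites or apps, invite a fraction via a banner ad, and keep
    those who respond, until the target sample size is reached.
    All rates here are invented for the sake of the sketch."""
    rng = random.Random(seed)
    sample = []
    for visitor in visitor_stream:
        if len(sample) >= target_n:
            break  # enough fish caught; the rest swim past
        # A random fraction of visitors are shown the invitation...
        if rng.random() < invite_rate:
            # ...of whom only some click through and complete the survey.
            if rng.random() < response_rate:
                sample.append(visitor)
    return sample

# A hypothetical stream of a million site visitors, identified by index.
respondents = river_sample(range(1_000_000), target_n=509)
print(len(respondents))  # 509
```

Note that who ends up in the sample depends on which sites carry the ads and who chooses to respond: it is a non-probability method, which matters for the margin-of-error point below.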
However, some news organisations chose to report on this poll in a very limited fashion — sometimes not including reference to the research company.
A recent editorial in The Guardian read:
One poll suggests the party will capture Brecon and Radnorshire in a by-election, helped by other remain parties standing down.
The link provided initially went to the Left Foot Forward blog; it has since been edited to point to the Number Cruncher Politics website.
Both the Huffington Post UK and the Financial Times originally published articles that did not even name the research company. The respective articles have since been amended. (I am grateful to the journalists involved for remedying this problem.)
Constituency polling can be properly conducted through digital means. If left unnamed, the research company misses out on recognition and future business. Omitting critical information also does not help the reader. Failing to attribute is poor journalistic practice.
Questionable results spread without checks
An article on People.com states that:
52% of respondents admitted to kissing their dog more than their partner.
Riley's Organics' website does not refer to any press release containing the results. No article provides any additional information with which to judge the survey's value: no sampling method, survey mode, sample size, fieldwork dates, weighting, question wording, question order or response formats. Nothing.
Without this information, we cannot judge the probable accuracy of the supposed survey estimate.
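Sample size alone, for instance, lets a reader put a rough yardstick on an estimate. The sketch below applies the classical margin-of-error formula for a proportion; note that this formula assumes a simple random sample, which a non-probability river sample or a self-selecting online poll does not satisfy, so it is at best a lower bound on the real uncertainty. The function name is my own.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Classical margin of error for a proportion under simple random
    sampling (z = 1.96 gives a 95% confidence interval)."""
    return z * math.sqrt(p * (1 - p) / n)

# The Brecon & Radnorshire poll interviewed 509 people. For an estimate
# near 50%, the simple-random-sampling margin of error would be roughly:
print(round(100 * margin_of_error(0.5, 509), 1))  # 4.3 (percentage points)
```

For the People.com claim, no such calculation is even possible: with no sample size reported, the uncertainty of the 52% figure is anyone's guess.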
I have contacted Riley’s Organics, but received no reply.
If all surveys can be written about in this way, what distinguishes a self-selecting Twitter ‘poll’ from a scientific random survey?
Polling results may be detached from critical information like the company, survey mode and fieldwork dates.
When using statistics in political debate, it is important to have proper sourcing. ‘A poll says’ — without naming the company, methods or dates — is unhelpful.
(The R code for the graph is available to read on R Pubs.)