Britain’s “Best” Pollster?
Democracy Institute is not a British Polling Council member.
The US President made claims about the views of the “best pollster in Britain”, referring to the Democracy Institute. This think-tank is not a partner company of the Market Research Society.
This article looks at polling by Democracy Institute, and its claims of accuracy.
Express to the President
The Sunday Express published an article by Patrick Basham. Democracy Institute provided US Presidential vote intention polls on behalf of the paper. Newt Gingrich, former Speaker of the House, asserted on Fox & Friends:
Interestingly, Patrick Basham — who is the most accurate pollster, is British — wrote in a British paper this morning: that this clearly was a stolen election.
Quoting Gingrich, the US President referred to Mr Basham as the “best pollster in Britain”.
Democracy Institute is a think-tank based in Washington, DC. There are no records of Democracy Institute running a UK General Election poll. It is not a member of the Market Research Society, nor of the British Polling Council. Democracy Institute does not appear in FiveThirtyEight’s pollster ratings.
Democracy Institute produced at least two polls on the 2016 EU referendum: the first ran from 23rd to 26th April, and the second from 19th to 22nd June 2016. The polling mode was random-digit dialling with interactive voice response. The June poll estimated a three point lead for Leave. No data tables were published for these polls.
The organisation published vote intention figures for the last two US Presidential elections, but its website does not describe its polling method. The August 2020 article on the Express website stated that the poll assumes a particular composition of voters by party identity.
The Sunday Express article
Mr Basham begins by claiming his Institute was accurate in both 2016 and 2020:
The accurate pollsters were also the most accurate in 2016. Along with Richard Baris’ meticulous state-level Big Data Poll and Robert Cahaly’s innovative Trafalgar Group swing state surveys, our Democracy Institute Sunday Express polling pretty much nailed the 2020 election.
This is incorrect. Democracy Institute’s final vote intention poll in 2016 estimated Trump on 50% and Clinton on 45%.
In that election, Clinton received 48% of votes to Trump’s 46%. The Democracy Institute poll therefore had a seven point error on the Democrat lead. Trump won the electoral college with 306 pledged electors.
The FiveThirtyEight polling average estimated Clinton’s lead at four points: a two point error, closer than the polls Mr Basham claims were the “most accurate”.
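The lead-error arithmetic above can be checked directly. A minimal sketch in Python, using only the vote shares already quoted:

```python
# Democracy Institute's final 2016 poll and the actual national result (%).
poll = {"Clinton": 45, "Trump": 50}
result = {"Clinton": 48, "Trump": 46}

poll_lead = poll["Clinton"] - poll["Trump"]        # -5: poll had Trump ahead by 5
actual_lead = result["Clinton"] - result["Trump"]  # +2: Clinton led by 2

# Error on the Democrat lead: distance between polled and actual leads.
error_on_lead = abs(actual_lead - poll_lead)
print(error_on_lead)  # prints: 7
```

The same calculation applied to the FiveThirtyEight average (a four point Clinton lead) gives an error of two points.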
For 2020, the Democracy Institute final poll estimated Trump on 48% and Biden on 47%. (These figures include undecided voters.)
Votes are still being counted. At the time of writing, Biden holds a national vote share lead of over three points.
Mr Basham continues on the supposed inaccuracy of other companies:
They relied upon surveys of registered voters that skew Democratic because a third of the respondents don’t vote. We survey only “likely” voters who’ll actually cast a ballot.
This is about turnout models. In the US, companies produce estimates for adults, registered voters, and “likely voters”. An estimate of likely voters is not an innovation on Democracy Institute’s part.
Turnout modelling caused trouble in polling for the 2017 UK General Election. It is one possible cause for Democrat overestimation in the 2020 US election. It is difficult to work out who will vote.
Mr Basham persists:
They refused to accept the Shy Trump voter was real in 2016. So, they didn’t look for them in 2020. We knew they were real, and we found far more of them this year.
The AAPOR post-mortem suggested most evidence favoured these explanations:
- There were real changes in vote preference during the final week of the campaign.
- Many polls did not adjust for educational attainment. In 2016, that was important for estimating vote intention.
- Late-revealing Trump voters outnumbered late-revealing Clinton voters.
“A number of other tests for the Shy Trump theory yielded no evidence to support it.” (AAPOR, 2017)
On the final Democracy Institute estimate, Mr Basham writes:
As a result, we predicted his higher Black vote to the actual percentage point.
‘Exit’ polling in the US is difficult. The Edison Research survey includes telephone respondents to estimate early voters. From it, researchers can estimate the share of Black voters who cast ballots for the incumbent.
This is not “to the actual percentage point”. The article itself does not include a benchmark source for this claim.
Mr Basham lists other supposedly correct calls, including Trump’s precise national popular vote percentage and identifying the economy as the crucial issue.
Mr Basham asserts there was mass electoral fraud, yet the Express article notes: “We are yet to see the evidence of voting fraud”. Given an assumption of widespread fraud, it is incoherent to claim a “precise” estimation of Trump’s vote share: as officials tally votes, vote shares change.
I wrote last Sunday that, should our Biden popular vote projection be off by a couple of points, it would reflect voter fraud rather than our having missed a Biden landslide.
Surveys are imperfect. The total survey error framework identifies six types of survey error: specification, coverage, sampling, non-response, measurement, and processing error. An error of two points could arise from sampling error alone, before considering the other sources. Survey researchers should not claim pinpoint accuracy, nor assert that divergence from their estimates must mean cheating.
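To illustrate the scale of sampling error alone, the standard 95% margin of error for a simple random sample can be computed. The sample size of 1,000 below is a hypothetical assumption, since Democracy Institute published no data tables:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical: a vote share of 48% from a sample of 1,000 respondents.
moe = margin_of_error(0.48, 1_000)
print(f"+/- {100 * moe:.1f} points")  # prints: +/- 3.1 points
```

The margin of error on the *lead* between two candidates is roughly double the margin on a single share, so a two point miss sits comfortably within sampling variation, even before weighting, non-response, and other errors are considered.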
Investigations into US voter fraud find few cases: nothing on the scale Mr Basham would need to make such claims.
The Sunday Express article makes false claims about the accuracy of Democracy Institute polls. Mr Basham’s claims do not align with exit polling estimates or provisional vote counts. The main thrust is that the Democracy Institute poll is more accurate than the election itself, which is absurd.
Polls provide estimates, subject to many sources of potential error. Electoral fraud in the United States is very unlikely to be among them. Once all votes are counted, social research companies should review their methods.