The Right Frame

Did broadcasters show ‘systemic bias’ by focusing on seats?

Two academics write that it is a ‘scandal’ that:

seats projections were announced at 10pm, while information on the parties’ national vote shares came along only seven hours later.

This central claim is likely to be misleading, compounded by multiple errors about opinion polling, exit polling methods and the BBC’s coverage.

A solar system of errors

Writing at LSE Blogs, Prof Pippa Norris (Harvard) and Prof Patrick Dunleavy (LSE) say the BBC coverage — based on the broadcasters’ exit poll — established “a dominant narrative…with no counter-notes of any kind”.

The article’s central claim that vote share information only came after 5am is likely to be misleading, with multiple errors orbiting this main misrepresentation.

During the BBC’s election night coverage, the change in vote share (incorrectly labelled in percentages rather than percentage points) appeared throughout the night. It was displayed in a bar at the bottom of the screen, updated after each constituency declared. The bar cycled between total seats so far, net seat changes and vote share changes.
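The distinction the on-screen label blurred is worth spelling out: a change in vote share is measured in percentage points, not percentages. A minimal sketch, with illustrative figures rather than actual results:

```python
# Percentage points versus percentages, with invented vote shares.
previous_share = 40.0  # a party's vote share at the previous election (%)
current_share = 32.0   # the same party's share at this election (%)

point_change = current_share - previous_share          # change in percentage points
relative_change = 100 * point_change / previous_share  # relative change, in per cent

print(point_change)     # -8.0 (percentage points)
print(relative_change)  # -20.0 (per cent)
```

A fall of eight points is a fall of a fifth of the party’s previous vote: writing “-8%” on screen conflates the two.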

Here is the results bar (at one hour 35 minutes) after two declarations:

The forecast starts the coverage. (Image: BBC)

As constituencies declare their results, vote share changes are then updated (here, at around the four hour mark):

The bar changes, reflecting declared results. (Image: BBC)

Let’s look at the above image, which explicitly refers to Labour’s votes (despite the assertion of a “seats-only Exit Poll perspective”):

Labour vote expected to fall heavily in North and Midlands

The story, then, was accurate:

Labour’s vote share fell by 13 points in the North East. (Image: Commons Library)

The accurate exit poll, again

The academics write:

Why did the broadcasters vest all their national analysis in the Exit Poll, an exercise which since methods were changed in recent years has not been able to generate an accurate estimate of the national vote share?

This question is based on a false premise. The exit poll can produce national vote share estimates, subject to caveats about variations in turnout. The Curtice-Firth method of exit polling compares responses at the same polling stations across successive exit polls, and uses those comparisons to build a model of change between elections.
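The change-model idea can be sketched in a few lines. This is a highly simplified illustration of the approach, not the actual Curtice-Firth model, and every number in it is invented:

```python
# Simplified sketch of change-model exit polling: compare the same
# polling stations across two elections, then apply the estimated
# change to the previous national result. All figures are invented.
previous_exit = {"A": 45.0, "B": 40.0, "C": 38.0}  # party share (%) by station, last election
current_exit = {"A": 49.0, "B": 46.0, "C": 41.0}   # party share (%) by station, this election

changes = [current_exit[s] - previous_exit[s] for s in previous_exit]
mean_change = sum(changes) / len(changes)  # average change, in percentage points

previous_national_share = 42.0  # the party's actual national share last time (%)
estimate = previous_national_share + mean_change
print(round(estimate, 1))  # 46.3
```

The real model is far richer (it works at constituency level, with covariates and uncertainty estimates), but the core logic is estimating change rather than level.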

The exit poll does produce national vote shares, as shown for the 2005 election. (Image: University of Warwick)

This exit poll is paid for by the broadcasters, and its fieldwork and analysis are undertaken with considerable care and expense — under major pressures of time. Given its general accuracy, this investment has undoubtedly paid off.

The exit poll marginally overestimated the Conservative lead.

The over-estimation of the SNP by seven seats was not a “major gaff” (sic: ‘gaffe’). Prof Curtice repeatedly said during the BBC coverage that the Scottish estimates should be treated with some caution.

This is due to a lack of coverage of polling stations in Scotland; also, many Scottish seats were highly marginal following the 2017 General Election. Having later watched the ITV coverage, I can confirm Prof Rallings made the same points.

There were five academic election analysts (Curtice, Rallings, Green, Thrasher, and Jennings) appearing on the three main broadcasters. Their presence undermines Norris and Dunleavy’s general proposition that election night coverage failed to convey electoral understanding and meaning.

That is the role of the psephologist: explaining the results to viewers.

“Highly likely”

Strangely, the article states:

Any political scientist could have told the BBC that the median result here was highly likely to be accurate on national vote share.

Given the systemic error in 2015, and the underestimation of Labour in 2017, a political scientist should surely have urged caution.

The median vote intention share would have given a flawed impression in the past two General Elections. It was not “highly likely to be accurate”.

Polling errors exist. There seems to be a genuine misunderstanding here: a poll’s accuracy cannot be evaluated in advance.

The article uses an incomplete list of final polls from different companies, as not every company’s last poll included 11th December in its fieldwork dates. Additionally, their table wrongly compares these polls of adults in Great Britain with the UK vote share.

The median vote intention share was 43% for the Conservatives, and 34% for Labour. (Image: BPC)
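Taking the median of final polls is a straightforward calculation. A sketch, using hypothetical poll shares (not the actual final polls) chosen so the medians match the figures above:

```python
# Median of final published polls; the individual shares below are
# hypothetical, constructed only to illustrate the calculation.
import statistics

con_shares = [41, 42, 43, 43, 44, 45]  # hypothetical final-poll Conservative shares (%)
lab_shares = [32, 33, 34, 34, 35, 36]  # hypothetical final-poll Labour shares (%)

print(statistics.median(con_shares))  # 43.0
print(statistics.median(lab_shares))  # 34.0
```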

A vote share frame?

I agree that further discussion of vote shares would be welcome. This is a matter of emphasis, rather than existence. Election night coverage is ultimately about which MPs are elected to the House of Commons.

The article states:

Only after 5am did the BBC’s Jeremy Vine at last announce an estimated three-party national vote share for Britain, to a residual audience of insomniacs and election geeks.

This figure is based on actual results, which is why it was not published until most constituencies had declared.

The broadcasters’ exit poll, conducted by Ipsos MORI, centrally estimated the Conservatives would win 368 seats. The actual result was 365 seats — well within the credible interval.

Alas, accuracy does not please everyone. (Image: Ipsos MORI)
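Checking a declared result against an exit poll’s interval is a simple comparison. In this sketch, the central estimate (368) and the actual result (365) come from the text above, but the interval half-width is a hypothetical value for illustration:

```python
# Does the actual seat total fall within the exit poll's interval?
central_estimate = 368  # broadcasters' exit poll central estimate (seats)
actual_seats = 365      # actual Conservative seat total
half_width = 15         # hypothetical interval half-width, in seats

lower, upper = central_estimate - half_width, central_estimate + half_width
print(lower <= actual_seats <= upper)  # True
```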

A Conservative lead of 11.8 points over Labour (in GB vote share) is the largest difference between the two parties since 1997, and the largest Conservative lead since 1983. The Labour net loss in vote share of 8.1 points has only been exceeded in the 1983 election, where it fell by 9.4 points. Claiming that Labour’s fall was “driven by supply-side patterns of party competition which split the Remain camp” risks the ecological fallacy.

Beyond net movements, where in the country these votes were won and lost mattered. This is the story that exit polling analysis estimated well, and it was widely discussed during the broadcasts.

It is noteworthy that the article’s suggested summary:

  • misstates the Conservative majority, which cannot be an odd number;
  • provides a historical comparison for the Labour vote share but not for the Conservatives, whose share has increased in every election since the 2001 nadir;
  • wrongly says the SNP “gained” 48 seats, when their net seat gain was 13;
  • for the Brexit Party and UKIP, compares vote shares in General Elections and European Parliament elections — which have different systems, parties, campaigns, and turnout;
  • uses vote shares like a ledger for ‘Remain’ and ‘Leave’ blocs, despite voters’ choices being determined by more than this single issue.

Accuracy is important, both in broadcasts and criticisms. Attacking broadcasters for basing their analysis on an accurate exit poll is astonishing. These broadcasts run for over eight hours, remaining fast-paced whilst journalists, academics and politicians work through the night. Undoubtedly, the BBC, ITV and Sky will see what can be improved next time.

This blog looks at the use of statistics in Britain and beyond. It is written by RSS Statistical Ambassador and Chartered Statistician @anthonybmasters.
