Can opinion polls tell us for certain who will win the election?

The 2024 general election has been the most “polled” in British history. More companies have conducted more polls, and with a greater variety of methods, than in any previous contest. In terms of seats, the predictions have varied wildly – from around 55 seats for the Conservatives up to around 200.
One fact, though, has been remarkably constant since before the beginning of the campaign – indeed, since at least the start of the year. Labour has enjoyed a consistent lead of around 20 percentage points over the Conservatives, and the near-certainty of forming the next government with at the very least a substantial majority. Still, there are some interesting questions to be asked…
Can the polls be wrong?
Yes. They were wrong in 1970, 1992 (the worst errors) and 2015, for example. Even in elections that yield very clear winners, such as the one in 1997, the polls can be out – but in a landslide scenario nobody much cares. The usual rule of thumb for conventional polls is that any given party’s share will be within about three percentage points of the true figure in 19 out of 20 polls – meaning there will be the occasional “rogue” outlier. It’s best to survey the whole scene rather than fixate on the lead (where the margin of error roughly doubles), and to watch for broader trends.
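To see where that rule of thumb comes from, here is a minimal sketch, assuming a simple random sample of about 1,000 people (a typical poll size – the sample size and shares below are illustrative assumptions): the 95 per cent confidence interval on a single party’s share comes out at roughly three points either way, and the interval on the gap between two parties is about twice as wide.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95 per cent margin of error for one party's share from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1_000  # typical poll sample size (an illustrative assumption)
moe = margin_of_error(p=0.5, n=n)  # worst case: the variance peaks at a 50% share
print(f"Single party's share: +/- {moe:.1%}")  # about +/- 3 points, 19 polls in 20

# A lead is the gap between two shares that move against each other,
# so its margin of error is roughly twice the single-share figure.
print(f"Lead between two parties: +/- {2 * moe:.1%}")  # about +/- 6 points
```

Real polls are weighted rather than purely random, which stretches these intervals a little further – another reason to treat any single reading with caution.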
And this time, with so many polls employing so many radically different methods, they would all have to be wrong for different reasons, yet in the same (pro-Labour, pro-Reform UK) direction, and on a historic scale – which is why they are unlikely to be giving a completely bogus picture.
What can go wrong?
Lots of things. A pollster might use out-of-date census figures, for example, to “weight” responses when it is short of a particular demographic. It might also misjudge how likely people are to vote; and some respondents, like the famous “shy Tories” of 1992, may be reluctant to declare their intentions – though methods such as telephone and internet polling reduce that.
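To see how that weighting works, and why stale census targets matter, here is a minimal sketch with invented group names and figures: each respondent is scaled by the ratio of their group’s share of the population (the census target) to its share of the sample, so any error in the targets feeds straight into the headline number.

```python
# Minimal sketch of demographic weighting (group names and figures invented).
# Each respondent is scaled by: population share of their group / sample share.
population_share = {"under_35": 0.30, "35_to_64": 0.45, "over_64": 0.25}  # census targets
sample_share     = {"under_35": 0.15, "35_to_64": 0.45, "over_64": 0.40}  # who replied

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # under-35s counted double (2.0), over-64s discounted (0.625)

# Hypothetical raw support for one party within each group:
support = {"under_35": 0.20, "35_to_64": 0.30, "over_64": 0.45}

unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(f"Unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")  # 34.5% vs 30.8%
```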
Some pollsters use a pool of recruits to form a “panel”, rather than getting fresh random samples every time. This has the great advantage that they can capture changes in sentiment, and their causes, much better – though the pool has to be especially representative, and “right first time”. The downside is that panel members can become politicised, follow the news more closely, and thus become more like actors in the scene than observers. This could well be a factor this time.
What is “MRP”?
That stands for “multilevel regression and post-stratification” – which means using statistical modelling in each of the 632 seats in Great Britain (the 18 seats in Northern Ireland, which has a different party system, are ignored) to determine a likely winner in each.
It’s a new idea, and thus not yet fully tested. It starts with a mega-sample of 10,000-plus respondents to pinpoint how specific demographic groups intend to vote, and then uses a database of constituency profiles, alongside other data, to model what might happen in each seat. So, for instance, if the megapoll finds that poorer English pensioners are unusually attracted to Reform UK, then those seats in England that contain a relatively high proportion of poorer older people will have their Reform UK reading boosted accordingly.
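The “post-stratification” half of MRP can be sketched in a few lines. Everything below – group definitions, support levels, seat profiles – is invented for illustration, and real models use far finer-grained demographic cells, but the principle is the same: a seat’s estimate is the national group-level support weighted by that seat’s own demographic mix.

```python
# Post-stratification step of MRP in miniature (all figures invented).
# Step 1 (not shown): a multilevel regression on the 10,000+ mega-sample
# estimates how each demographic group intends to vote nationally.
reform_support = {"poorer_pensioner": 0.35, "younger_renter": 0.08, "other": 0.15}

# Step 2: each constituency's demographic profile (shares sum to 1).
seats = {
    "Seat A (many poorer pensioners)": {"poorer_pensioner": 0.40, "younger_renter": 0.10, "other": 0.50},
    "Seat B (young and urban)":        {"poorer_pensioner": 0.05, "younger_renter": 0.45, "other": 0.50},
}

# Step 3: a seat's estimate is its profile-weighted average of group support.
for seat, profile in seats.items():
    estimate = sum(profile[g] * reform_support[g] for g in profile)
    print(f"{seat}: Reform UK ~ {estimate:.0%}")  # Seat A ~22%, Seat B ~13%
```

This is how a national finding about poorer pensioners ends up boosting Reform UK’s reading only in the seats that contain a lot of them.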
The pollsters obviously also take into account the last general election result in 2019, possibly local and by-election results, tactical voting potential, the Leave/Remain split in 2016, and other data. The output is presented in terms of Commons seats rather than vote share. The different techniques used by market research firms are a sort of black box – for commercial reasons.
Why not just apply the national swing to every seat?
It’s a hot topic in psephological circles. An easy way of looking at it is this. A national poll might indicate that the Tories would lose, say, 25 percentage points on average. But in many seats – in Liverpool, for example – they won far less than 25 per cent in 2019. So if they were to drop 25 percentage points in a Liverpool seat where they took 15 per cent under Boris Johnson, they’d have a “negative vote” of minus 10 per cent, which is silly.
Applying the same drop everywhere is what is known as “uniform” swing, and it doesn’t work when the movements in voting are as extreme as they are now. So adjustments have to be made, reflecting the fact that, for the seat-level numbers to stay consistent with the national polling, a party collapsing like this must be losing more votes somewhere else – most plausibly in its safest seats. One consequence is that the famous BBC “swingometer”, which assumes uniform swing, is rendered redundant.
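The contrast can be made concrete with the Liverpool figures above. The sketch below compares uniform swing with proportional swing – a standard alternative in which every seat keeps the same fraction of its previous vote, so the safest seats shed the most points. The national shares are illustrative, and the adjustments actual pollsters make will vary (and are, as noted, commercially guarded).

```python
# Uniform vs proportional swing, using the Liverpool figures above.
# Proportional swing is one standard alternative, not any pollster's exact model.
national_2019, national_now = 0.45, 0.20  # Tory share then and now (illustrative)
seat_2019 = 0.15                          # the Liverpool seat under Boris Johnson

# Uniform swing: subtract the same 25 points in every seat.
uniform = seat_2019 + (national_now - national_2019)
print(f"Uniform swing:      {uniform:.0%}")  # -10% -- an impossible vote share

# Proportional swing: every seat keeps the same fraction of its 2019 vote,
# so the safest seats shed the most percentage points.
proportional = seat_2019 * (national_now / national_2019)
print(f"Proportional swing: {proportional:.0%}")  # about 7% -- plausible
```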
Can there be too many polls?
For the sake of the quality of public debate about current affairs, as opposed to the “horse race” aspect of an election, the answer is yes. From the point of view of maximising the available evidence about what is happening out there, one would have to say no.
