In the year ahead one of the biggest – probably *the* biggest – political stories in the UK will be the September referendum on whether Scotland should leave the UK.

I am not going to comment here on what I hope the outcome will be – other than to say I hope and believe there will be a strong ‘no’ vote.

But I am going to take issue with how the campaign is reported and, in particular, the dismal way in which opinion polls are covered.

My ire has been provoked by a claim by a columnist in today’s Scotsman that a 1% change in one side’s support between two polls, one in September and one in December, indicates the race is “tightening”.

My argument is that it indicates nothing of the sort. The two polls are essentially mathematically identical. I realise that “things just the same” does not, as a headline, sell many papers, but it does not make it acceptable to invent new mathematical facts where none exist. The fact that opinion polls today essentially show the same result as opinion polls of two months ago and – in this case – two years ago and twenty years ago – may be a journalistic disappointment, but it is also the reality.

So here is my brief guide to the mathematics of opinion polls. If you want to know more I strongly recommend the classic *Statistics without Tears: An Introduction for Non-Mathematicians* which, as the subtitle suggests, gives the reader a clear grounding without requiring a lot of maths knowledge.

I will begin with a few ground rules…

**Firstly**, remember what a poll measures: not the truth about people’s opinions but what they say their opinions are. If some people systematically lie to pollsters (as, in certain cases, they are known to do, because they are afraid or ashamed to tell the truth) then your poll is flawed from the start. The best you can say of any poll’s accuracy is that it is as good as the best poll can be.

**Secondly**, the best we can say about a poll is that, if conducted properly, it has a given degree of accuracy compared to any other poll. So when people talk of a “margin of error” in a poll, what they typically really mean is that 95% of all properly conducted polls will give an answer within that margin of error. (This amplifies the first point but is also independent of it – if people lie, they will likely lie to all pollsters, so no poll is immune.)

**Thirdly**, it is a mathematical fact that for even the best conducted polls, we should expect one in twenty to give results outside that “margin of error” – this isn’t because we can expect pollsters to mess it up one time in twenty, but because of the mathematical rules of the universe in which we live. It is an unavoidable feature of opinion polling. And because it is unavoidable we do not know which of the polls is the “rogue” and whether any seeming shift (or non-shift, remember) is because of this “rogue” effect or because of a real change in what people are likely to say to opinion pollsters.

And now a little bit of maths…

Claims about polling accuracy are based on the fact that opinion poll results (surveys of a small part of the population from which we hope to draw conclusions about the whole population) will be distributed about the “real” result (i.e., the answer we’d get if we asked every single person) in a bell-shaped “normal distribution”. The maths of the normal distribution is very well understood, so we can make some well-grounded claims about the potential accuracy of our polls.
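This clustering is easy to see by simulation. Below is a quick sketch in Python (the 45% “true” support figure, the sample size and the number of polls are arbitrary assumptions for illustration): it simulates many properly conducted polls and shows their results spread around the true value with roughly the width the theory predicts.

```python
import random
import statistics

random.seed(42)

TRUE_SUPPORT = 0.45   # hypothetical "real" level of support in the population
SAMPLE_SIZE = 1000    # a typical poll size
N_POLLS = 2000        # number of simulated, properly conducted polls

# Each simulated poll asks SAMPLE_SIZE people; each respondent answers
# "yes" with probability TRUE_SUPPORT.
results = [
    sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE)) / SAMPLE_SIZE
    for _ in range(N_POLLS)
]

print(f"mean of poll results: {statistics.mean(results):.3f}")   # close to 0.450
print(f"spread (std dev):     {statistics.stdev(results):.4f}")
# the spread is close to the theoretical sqrt(0.45 * 0.55 / 1000) ≈ 0.0157
```

A histogram of `results` would show the familiar bell shape centred on the true figure.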

These include the fact that, above a basic minimum sample size, the margin of error in our poll (i.e., the error compared to other polls) varies with the inverse of the square root of the sample size. To be blunt about it: a poll with 2000 respondents is not twice as precise (i.e., with half the margin of error) as one with 1000, but merely 1.4 times more precise, while going from 500 to 2000 respondents shrinks the margin of error not by a factor of 4 but of 2. (You can tell straight away that the economics of large scale polling is a bit perverse – if you go from a 1000 to a 10000 sample poll, your costs increase by a factor of 10, but the margin of error only shrinks by a factor of about 3.)
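The inverse-square-root relationship can be checked directly. A minimal sketch in Python, assuming simple random sampling, a proportion near 50% and the standard 95% multiplier of 1.96:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error (as a fraction) for a simple random sample
    of size n, for a proportion near p."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000, 10_000):
    print(f"n = {n:>6}: ±{margin_of_error(n) * 100:.2f} points")
# n =    500: ±4.38 points
# n =   1000: ±3.10 points
# n =   2000: ±2.19 points   (1.4x better than 1000, not 2x)
# n =  10000: ±0.98 points   (10x the cost of 1000, ~3x the precision)
```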

The “one-in-twenty will be rogue” rule comes from the fact that when we talk about the margin of error in a poll, what we really mean is that 95% of all polls will give a result within a band twice the size of the margin of error, centred on the “real” figure. This 95% is the “confidence level” (more precisely, the band extends roughly two “standard errors” in each direction about the sample proportion).
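The one-in-twenty rule itself can be demonstrated by simulation. A sketch in Python (again the 45% support figure is an arbitrary assumption): it generates many well-conducted polls and counts how often one strays outside ±1.96 standard errors of the true figure.

```python
import math
import random

random.seed(7)

TRUE_SUPPORT = 0.45   # hypothetical "real" support level
N = 1000              # respondents per poll
TRIALS = 10_000       # number of simulated polls
SE = math.sqrt(TRUE_SUPPORT * (1 - TRUE_SUPPORT) / N)  # standard error

# Count the properly conducted polls that land outside ±1.96 standard
# errors of the true figure – these are the "rogue" polls.
rogue = sum(
    abs(sum(random.random() < TRUE_SUPPORT for _ in range(N)) / N - TRUE_SUPPORT)
    > 1.96 * SE
    for _ in range(TRIALS)
)

print(f"rogue polls: {rogue / TRIALS:.1%}")  # close to 5%, i.e. one in twenty
```

Note that nothing in the simulation is done badly: the rogues appear even though every poll is conducted perfectly.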

You may interject now and say “but that doesn’t mean a 1% difference is not real” and you would be right – if you are willing to live with a much lower confidence level or pay for a very much bigger sample. So, to make a 1% figure “real” we might need a margin of error of 0.5% on either side of the reported poll result. We could get that in two ways – shelling out to increase the sample size to roughly 40,000 (compared to the typical 1,000), which would keep our 95% confidence level, or sticking with 1,000 respondents and accepting that roughly three-quarters of polls would give a result that was **not** within +/- 0.5% of our figure – or, crudely, that we were far more likely to be wrong than right when we claimed the 1% was a “real” shift (our confidence level would be only about 25%).
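The sample-size figure can be recovered from the standard formula n = z²·p(1−p)/m², where m is the desired margin of error – a sketch in Python, assuming p near 50% and the 95% multiplier z = 1.96:

```python
import math

def sample_size_needed(moe, p=0.5, z=1.96):
    """Respondents needed for a given margin of error `moe` (as a
    fraction), at proportion p and confidence multiplier z
    (1.96 for 95% confidence)."""
    return math.ceil(z ** 2 * p * (1 - p) / moe ** 2)

print(sample_size_needed(0.03))    # about 1,068 – the familiar ~1,000-person poll
print(sample_size_needed(0.005))   # about 38,400 – roughly the 40,000 quoted above
```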

###### Related articles

- Rouge over rogue: When is it okay to call a poll rogue? (grumpollie.wordpress.com)
- Correcting the math of journalists in Ohio (3quarksdaily.com)
- Scottish independence poll shows Yes narrowing gap (scotsman.com)
- Opinion polls: the way forward (thehindu.com)
- Poll sinks hope for a ‘yes’ vote (thetimes.co.uk)
- And you thought Scottish opinion polls were peculiar (weegingerdug.wordpress.com)
- The issue of Scottish passports shows how small-minded the SNP’s campaign is (blogs.spectator.co.uk)
- Eddie Barnes: Game-changers to come for Yes camp (scotsman.com)

Well put. It is also worth noting that the margin of error rests on the assumptions that the survey instrument is valid, the sample is unbiased, and the observations are independent – all of which are easily violated. In the US, the Chicago Tribune headline “Dewey Defeats Truman” (https://en.wikipedia.org/wiki/Sampling_bias#Historical_examples) is an oft-cited example of sampling bias. (The survey was conducted by calling telephones at random; at that time, having a phone in your home, as opposed to using a party line, was a sign of affluence.) Surveying multiple people in the same household violates the independence assumption. Making respondents jump through hoops to reply can create volunteer bias (http://www.psychwiki.com/wiki/What_is_volunteer_bias%3F), in which those with strong opinions respond while “swing voters” and undecided voters remain mum.

So the safest thing is to ignore all polls.🙂
