Polls, predictions, and the power of three

The power of three is a well-known technique for getting your message across: “Friends, Romans, countrymen…”, “Liberté, égalité, fraternité”. Anyone wanting to denounce the polls now has “Trump, Brexit, Tory majority”.

Donald Trump’s unexpected win followed failed polling predictions on both Brexit and the 2015 UK general election, leaving many to question the value of polling.

The post-mortem is just beginning. Was the polls’ failure down to errors with sampling or weighting, the use of online polls, the level of turnout, ‘shy’ voters, or the difference between intended behaviour and what voters actually do on the day?

Then there’s the influence of the polls themselves on voting behaviour, the influence of the commentators’ interpretations of the polls, and the suggestion that it’s safer to go along with the pack rather than be the one poll that stands out. It’s complex, to say the least.

In 2015, our friends at YouGov conducted a very honest and thorough investigation into what went wrong, concluding that the pressure for cost-effective, quick-turnaround polls compromised quality, particularly in relation to sampling. But the polls failed again. And again. So what’s going on?

This is something the whole of the research industry must take seriously. It would be easy for us to say Chrysalis Research doesn’t do polls, and certainly we aren’t called upon to give the definitive answers that an election prediction requires. But we are asked to judge the views of a large group of people based on a response from a sample. So what can we learn from the political polls? How do I persuade a client that a survey of 400 students is reliable when poll after poll of tens of thousands of people is so clearly not?

The BBC fact-checker points out that the polls were not too far out in terms of numbers – they just fell the wrong side of the line in a close race (Clinton won the popular vote and several states were tight). You can argue that, in statistical terms, the margins of error on the large polls should account for even these narrow differences, but the real-world truth is more complex and the stats on their own just weren’t enough. We talk about statistical significance in our reports to indicate whether a survey response is likely to represent the views of the whole population, and it’s a vital factor to consider. But statistically significant differences aren’t always important or interesting differences, and even at a 95% confidence level they’re not always correct: by definition, roughly one result in twenty will fall outside the stated margin through chance alone. So we need something else.
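To put some numbers on that, here is a minimal Python sketch of the textbook 95% margin-of-error calculation for a simple random sample. The figures are hypothetical, not taken from any real poll; the point is to show why a 400-person survey carries a margin of roughly ±5 percentage points while a 10,000-person poll narrows to about ±1 point, and why a 52/48 split can still sit comfortably inside the smaller survey’s interval:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a simple random sample proportion.

    p: observed proportion (0..1); n: sample size;
    z: critical value (1.96 for 95% confidence).
    Assumes simple random sampling; real polls use weighting,
    so their effective margins are usually wider.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical figures: a 52/48 split at two sample sizes.
for n in (400, 10_000):
    moe = margin_of_error(0.52, n)
    print(f"n={n:>6}: 52% +/- {moe:.1%} "
          f"-> interval {0.52 - moe:.1%} to {0.52 + moe:.1%}")
```

Note that this is the pure sampling-error arithmetic. It says nothing about the systematic errors (sampling frames, weighting, turnout models) listed in the post-mortem above, which is partly why real polls can miss by more than their quoted margin.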

A mixed-methods approach can help. A different way of asking the same question, a different type of conversation, a dialogue rather than a tick box – all of these can create a richer picture. The principles of triangulation suggest that, if all the evidence points in the same direction, we can be reasonably confident that we understand what’s going on. Perhaps more interestingly, and more commonly, when we see apparent contradictions in the data, we dig deeper to reveal a more nuanced explanation.

Non-response surveys are another useful technique: finding ways to reach those who didn’t respond, to establish whether they differ systematically from those who took part in the research.
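As an illustration of how such a comparison might be checked, here is a short Python sketch of a standard two-proportion z-test between the original respondents and a follow-up sample of non-responders. The figures are entirely hypothetical, and this is one simple way of making the comparison rather than a description of our own procedure:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test: do two groups answer differently?

    x1/n1: positive answers / group size for responders;
    x2/n2: the same for a follow-up sample of non-responders.
    Returns the z statistic; |z| > 1.96 suggests a real
    difference at the 95% level.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical figures: 60% of 400 responders agreed with a
# statement, versus 45% of 80 non-responders reached later.
z = two_proportion_z(240, 400, 36, 80)
print(f"z = {z:.2f}")  # here about 2.48, i.e. above 1.96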

The pollsters have an unenviable job, and I’m not claiming that we would have called the election results. But it’s worth remembering, whatever we are investigating, that we should be wary of placing too much weight on any single measure. A qualitative, exploratory approach might not provide a headline-grabbing figure, but it can be the most valuable way to understand your stakeholders and to plan for the future.

To talk to us about qualitative, quantitative, and mixed-methods research approaches, call Tom Levesley on 0117 230 9933 or email tom.levesley@chrysalisresearch.co.uk