So, a bad night for Labour, Liberal Democrats and Nigel Farage. An even worse hangover for the 14 major opinion poll companies whose forecasts (right up to 10pm on 7th May) were for a renewed coalition, potentially involving more than two parties. The instant response has been to question the methods of market research and assume social media tools would have been more accurate.
But wait. There is direct proof that neither of those assumptions is true. Take the project by the University of Warwick, the Department of Journalism at City University London and the Information Technologies Institute in Greece, which developed an algorithm drawing on Twitter data to create a daily forecast of voting share (see http://www.electionprediction.eu/uk/). On the afternoon of election day, it was predicting Labour would take around 34 per cent of the vote (within a 31.9 to 36.3 per cent spread), with the Conservatives on 33 per cent (31.1 to 35.5 per cent spread). Wrong.
And how about Nate Silver, the much-lauded forecaster of President Obama’s second-term victory against the odds? After detailed modelling of big data from a wide range of sources by his team of statisticians, he came up with a forecast of 280 seats for the Tories, with Labour on 265. Also wrong.
With conventional market research and social media both failing to call an admittedly complex election outcome, can any reliable conclusion be drawn about the accuracy of these techniques? I believe there are three lessons to be learned:
1. Impulse rules
Behavioural psychologists regularly note that up to seven out of ten of the decisions we make are based on impulse. It is always assumed that casting a vote is so rational a choice that impulse will play no part. Yet many voters may have left it until the moment the pencil was in their hand before finally choosing where to place their cross.
2. Twitter is biased
2015 was an election campaign in which social media were more heavily used than ever before, with the Liberal Democrats more active than any other party. Yet that noise did not translate into a signal at the polling stations, and forecasters looking at tweets were just as deceived about the final result. The important truth to remember is that active users of Twitter are not remotely representative of the population as a whole.
3. People lie
There has been much discussion of the “shy Tory” effect, whereby opinion polls understate support for the party. Political views remain a taboo subject in British culture, so people are less accustomed to stating clearly which party they support. Unlike in the US, where party affiliation is a publicly-available data set, nobody knows who you voted for or may vote for in the future. When asked to provide that information, many voters simply lie (or do not know what they will do - see point 1).
This failure might seem frustrating in an era when social media appear to make individual views more visible than ever and big data provides unprecedented insight into actual behaviour. But that is to ignore just how analogue voting for a new government really is. The most important lesson is for the politicians - never take voters for granted.