As Big Data gains weight, can it be trusted without market research insights?
It seems a long time since President Obama was re-elected. The Republicans still control the House of Representatives, the Democrats still control the Senate and President Obama is still President. While it seems we are back to business as usual, some things have changed: the election showed that the era of Big Data has now permeated all industries, even politics.
In 2008, President Obama’s election campaign was funded by many small donors rather than a few big hitters. Yet for all the praise Obama’s team won in 2008 for its high-tech wizardry, it realised that managing too many separate databases was a huge weakness. Back then, volunteers making phone calls through the Obama website were working from lists that differed from the lists used by callers in the campaign office. None of these databases were connected. So the Obama team created a massive single system that could merge the information collected from pollsters, fundraisers, field workers and consumer databases, as well as social media and mobile contacts, with the main Democratic voter files in the swing states.
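The article does not describe how the campaign’s merge actually worked, but the general idea – collapsing several contact lists into one record per person – can be sketched in a few lines. The lists, names and fields below are entirely invented for illustration.

```python
# Toy sketch of the "single system" idea: merge two hypothetical contact lists
# (say, website volunteers and office callers) into one record per person,
# keyed on email address. All data here is made up.

website_list = [
    {"email": "ann@example.com", "phone": None, "donated": True},
    {"email": "bob@example.com", "phone": "555-0101", "donated": False},
]
office_list = [
    {"email": "ann@example.com", "phone": "555-0199", "donated": False},
    {"email": "cal@example.com", "phone": "555-0123", "donated": True},
]

merged = {}
for record in website_list + office_list:
    entry = merged.setdefault(
        record["email"],
        {"email": record["email"], "phone": None, "donated": False},
    )
    # Keep any phone number found in either list; flag a donor if either list says so.
    entry["phone"] = entry["phone"] or record["phone"]
    entry["donated"] = entry["donated"] or record["donated"]

print(sorted(merged))  # ['ann@example.com', 'bob@example.com', 'cal@example.com']
```

A real campaign system would of course need fuzzy matching across names, addresses and phone numbers rather than a single clean key, but the payoff is the same: one consolidated view of each contact instead of several conflicting lists.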
“Public details of the regular briefings that President Obama’s team of scientists presented were (and still are) in short supply as the campaign guarded what it believed to be its biggest institutional advantage over Mitt Romney’s campaign: its data” (Scherer, 2012).
Raw data are rarely intrinsically interesting; data without analysis are practically useless. Data scientists search large swathes of data for patterns, turning meaningless collections of numbers into a political or commercial advantage. While modern data analysis has been around for about a century, the rapid pace of technological advancement has had a staggering impact on the techniques we use. Our always-online culture has led to a proliferation of scattered data sources, and low-cost cloud storage has meant that it is often easier to keep data than to throw it away. Improvements in computational speed, cloud computing and increasingly easy-to-understand programming languages have enabled projects that would have been prohibitively expensive only five years ago.
Yet, although technology has made Big Data possible, it hasn’t made it easy. Big Data is about insight, finding small needles – patterns and relationships – hidden in enormous haystacks. Getting the best out of your tools requires expertise in system administration, software engineering and statistical analysis.
It is easy to dismiss Big Data as a fad. Results aren’t as robust as those of our standard methods, because those methods are more refined, while Big Data research methodologies are still in their infancy. How do we weight a political analysis to account for demographic differences in internet use? We can make educated guesses, but it would be premature to try to compete with established and – as the presidential election showed – extremely effective techniques. However, we should remember that Big Data is about finding patterns. It can generate hypotheses that established methods can verify in more quantifiable and reliable ways.
This is unlikely to be true forever. As research continues, we expect that the reliability of Big Data techniques will improve. Whether they will ever match traditional survey-based methods is debatable, but it certainly won’t happen in the foreseeable future.
If traditional data analysis is like panning for gold, Big Data is an early industrial gold mine – it may lack the detail, but its capacity is enormous and its efficiency can only improve.
Claire Emes is head of Ipsos MORI Digital. Follow her on Twitter @c_emes. Gabriela Mancero is a social research project manager at Ipsos MORI.