In reading through the Pew/Elon University Big Data survey analysis, I come away with the sense that I diverge from many others on a basic notion about big data. I don’t believe that big data technology and techniques will end the volatility, uncertainty, complexity, and ambiguity of the post-normal world. Those factors are growing faster than our data exhaust and our capacity to mine it.
This doesn’t mean that big data collection and analysis is pointless. On the contrary, it may be a critical factor in developing new ways to sense, model, and think about a future growing increasingly opaque.
But some comments, like those of Patrick Tucker, strike me as techno-utopian and unsupportable. He makes the case that Google is already in the business of predicting the small-bore future around issues like flu prevention, based on flu sufferers’ search queries.
But the passing of the flu bug from person to person is a first-order result of contact. The nature of complex and unpredictable systems is that unforeseeable results arise from second-, third-, and higher-order connections. The public health implications of direct contact have been known for hundreds of years: Google is merely finding a faster way to track the well-understood.
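The distinction between first-order contact and higher-order connections can be made concrete with a toy sketch. Here is a small, entirely hypothetical contact graph (the names and links are invented for illustration), with a breadth-first search that groups people by the order of their connection to an initial flu case. First-order contacts are easy to enumerate; the higher-order reach depends on the shape of the whole network, which is where the unpredictability creeps in:

```python
from collections import deque

# Hypothetical contact graph: who has direct (first-order) contact with whom.
contacts = {
    "ana": ["ben", "cho"],
    "ben": ["ana", "dia", "eli"],
    "cho": ["ana", "eli"],
    "dia": ["ben", "fay"],
    "eli": ["ben", "cho", "fay"],
    "fay": ["dia", "eli"],
}

def contacts_by_order(graph, start):
    """Breadth-first search: label each person with the order (hop count)
    of their connection back to the starting case."""
    order = {start: 0}
    queue = deque([start])
    while queue:
        person = queue.popleft()
        for neighbor in graph[person]:
            if neighbor not in order:
                order[neighbor] = order[person] + 1
                queue.append(neighbor)
    return order

print(contacts_by_order(contacts, "ana"))
```

Rewiring a single edge in a graph like this can change who sits at third order from a given case, which is a small-scale version of the point: direct contact is trackable, but the downstream consequences ride on the interconnections.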
As Tucker argues,
Futurist machines are taking over the job of inventing the future. Their predictions have consequences in the real world because our interaction with the future as individuals, groups, and nations is an expression of both personal and national identity. Regardless of what may or may not happen, the future as an idea continually shapes buying, voting, and social behavior. The future is becoming increasingly knowable. We sit on the verge of a potentially tremendous revolution in science and technology. But even those aspects of the future that are the most potentially beneficial to humankind will have disastrous effects if we fail to plan for them.
The future, alas, is not becoming more knowable.
Unconstrained and dynamic complex systems – like our society, the economic system of Europe, or the Earth’s weather – are fundamentally unknowable: their progression from one state to another cannot be predicted consistently, even if you have a relatively good understanding of both the starting state and the present state, because the behavior of the system as a whole is an emergent property of the interconnections between the parts. And the parts are themselves made up of interconnected parts, and so on.
Yes, weather forecasting and other scientific domains have benefited from better models and more data, and more data and bigger analysis approaches will improve the consistency of weather prediction, but only to a certain extent. Rounding errors grow from the imprecision of our measurements and the oversimplifications in our models, so that even something as transparent as the weather – where no one is intentionally hiding data, or degrading it – cannot be predicted completely. In everyday life, this is why the weather forecast for the next few hours is several orders of magnitude better than the forecast for 10 days ahead. Big data – as currently conceived – may allow us to improve weather prediction for the next 10 days dramatically, but the compounding of forecast error with lead time means that predictions about the weather 10 months ahead are unlikely to dramatically improve.
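A minimal sketch of why forecast error compounds with lead time: iterate the logistic map, a standard toy model of chaotic dynamics (not an actual weather model), from two starting states that differ by one part in a million – the "true" state versus a slightly imprecise measurement of it:

```python
# Logistic map in its chaotic regime (r = 3.9): a toy stand-in for any
# system whose next state depends nonlinearly on its current state.
def logistic(x, r=3.9):
    return r * x * (1 - x)

a, b = 0.400000, 0.400001  # true state vs. measurement off by one part in a million
for step in range(1, 31):
    a, b = logistic(a), logistic(b)
    if step in (1, 10, 20, 30):
        print(f"step {step:2d}: difference = {abs(a - b):.6f}")
```

Run it and the difference stays tiny for the first few steps – the near-term forecast is good – then grows until the two trajectories bear no resemblance to each other. No amount of additional data about the current state eliminates this, because any measurement carries some imprecision, and the system amplifies it.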
So, consider it this way: Big data is unlikely to increase the certainty about what is going to happen in anything but the nearest of near futures – in weather, politics, and buying behavior – because uncertainty and volatility grow along with the interconnectedness of human activities and institutions across the world. Big data is itself a factor in the increased interconnectedness of the world: as companies, governments, and individuals take advantage of insights gleaned from big data, we are making the world more tightly interconnected, and as a result (perhaps unintuitively) less predictable.