Posts tagged with ‘data’
(Source: The New York Times)
Congratulations, Barack Obama: You have prevailed in the nerdiest election in the history of the American Republic.
If 2008 was about hope and change, 2012 was about data and memes. The unemployment rate. The effective tax rates. The 47 percent. The budget deficit projections. Of all things, the Reddit AMAs.
Same goes for understanding the elections. Never mind the baby-kissing, the fish fries, the bus tours and the conventions. What mattered in 2012 was data, and the tools to process it — which were so abundant, you could thankfully tune out the pundits. If it could be quantified, it was collected. If it could be collected, it was memed. If it could be memed, it was disputed. The disputes were answered with more data.
Wrath of the Math: Obama Wins Nerdiest Election Ever - Spencer Ackerman via Wired.com
I guess the ends justified the memes, this time?
Vivek Wadhwa falls into the Web 3.0 trap, like so many others, making the case that the data underneath our feet will be the most important aspect of the new information age.
But there is much, much more happening in the Web 3.0 world. It’s not just “social”.
In 2009, President Obama launched an ambitious program to modernize our healthcare system by making all health records standardized and electronic. The goal is to have all paper medical records—for the entire U.S. population—digitized and available online. This way, an emergency room will have immediate access to a patient’s medical history, the effectiveness of medicines can be researched over large populations, and general practitioners and specialists can coordinate their treatments.
The government is also opening up its massive datasets of information with the Data.gov initiative. Four hundred thousand datasets are already available, and more are being added every week. They include regional data on the efficiency of government services, on poverty and wealth, on education, on federal government spending, on transportation, and more. We can, for example, build applications that challenge schools or health-care providers to perform better by comparing various localities’ performance. And we can hold the government more accountable by analyzing its spending and wastage.
There are more than 24 hours of video uploaded to YouTube every minute, and far more video is being collected worldwide through the surveillance cameras that you see everywhere. Whether we realize it or not, our mobile phones are able to keep track of our every movement—everywhere we go; how fast we move; what time we wake. Various mobile applications are beginning to record these data.
And then there is the human genome. We only learned how to sequence this a decade ago at a cost of billions of dollars. The price of sequencing an individual’s genome is dropping at a double exponential rate, from millions to about $10,000 per sequence in 2011. More than one million individuals are projected to be sequenced in 2013. It won’t be long before genome sequencing costs $100—or is free—with services that you purchase (as with cell phones).
Now imagine the possibilities that could derive from access to an integration of these data collections: being able to match your DNA to another’s and to learn what diseases the other person has had and how effective different medications were in curing them; learning the other person’s abilities, allergies, likes, and dislikes; who knows, maybe being able to find a DNA soul mate. We are entering an era of crowd-sourced, data-driven, participatory, genomic-based medicine. (If you’re interested, Dr. Daniel Kraft, a physician–scientist who chairs the Medicine track for Singularity University, is hosting a program called FutureMed, next month, which brings together clinicians, AI experts, bioinformaticists, medical-device and pharma executives, entrepreneurs, and investors to discuss these technologies.)
You may think that the U.S. leads in information collection. But the most ambitious project in the world is happening in India. Its government is gathering demographic data, fingerprints, and iris scans from all of its 1.2 billion residents. This will lead to the creation of the largest, most complex identity database in the world. I’ll cover this subject in a future piece.
It’s not all wine and roses. There are major privacy and security implications such as those I discussed in this piece. Forget about the “powers that be”: merely the information that Google is gathering today would make Big Brother envious. After all, Google is able to read our e-mails even before we do; it knows who our friends are and what they tell us in confidence; it maintains our diaries and our calendars; it can even guess what we are thinking by watching our surfing habits. Imagine what happens once Google has access to our DNA information.
Regardless of the risks and security implications, the technology will advance.
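Wadhwa’s cost-curve claim is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming a ~$10 million per-genome cost around 2007 (a figure not given in the excerpt) alongside the ~$10,000 figure he cites for 2011:

```python
# Rough sanity check of the genome-sequencing cost decline.
# Assumption: ~$10M per genome in 2007 (not stated in the excerpt);
# ~$10,000 per genome in 2011, as the excerpt says.
import math

cost_2007 = 10_000_000
cost_2011 = 10_000
years = 2011 - 2007

# Average yearly multiplier on cost over that span.
annual_factor = (cost_2011 / cost_2007) ** (1 / years)  # ~0.18

# A Moore's-law-style halving would be a factor of ~0.5/year,
# so sequencing costs were falling far faster than chip costs.

# Naively extrapolate to the $100 genome the excerpt predicts.
years_to_100 = math.log(100 / cost_2011) / math.log(annual_factor)

print(f"cost falls to {annual_factor:.0%} of itself each year")
print(f"$100 genome around {2011 + years_to_100:.0f}")
```

Even on these crude assumed numbers, the $100 genome lands only a few years out, which is why the comparison to subsidized cell phones isn’t crazy.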
Winnowed down, Wadhwa is gee-whizzing about increasingly large collections of data, but that is what we have already been doing in the social web for ten years. You can’t boggle my mind by telling me we are moving into an era of unprecedented growth in data. Yawn. What else is new?
This is what Tim O’Reilly said about Web 2.0, you may recall, starting back in 2005. But Web 2.0 turned out to be about social (if something as grand and squishy as Web 2.0 can be said to be ‘about’ anything), and there is no neat boundary between yesterday and tomorrow.
I agree, at an abstract level, that a small number of explosive innovations are converging to create a new information age, but it is not an explosion of ‘data’ per se that is fundamental.
What is, then? We’ll see a rapid shift in computing experience from ‘computers’ to tablets (touch and gestural innovations); a new day in user experience based on the fall of the desktop metaphor and the rise of apps; a change from disconnected computing to ubiquitous mobility and connectivity; and another cycle of social. This time, social will become interoperable and built into the operating platforms that everything else is built on, so Facebook, Twitter, and the other social silos will have to ‘desilo’ if they are to remain relevant.
And, yes, this will entail all sorts of new data for the data gnomes to toil on, but the new information age will be based on what normal people experience, not algorithms predicting our every move or massive supercomputers making supply chains more efficient.