I have to agree with David Brooks, at least with regard to some of the observations he offers this morning in the NY Times, when he says that participating on the Internet is different from simply reading books. However, I disagree with nearly every specific point he makes, and the conclusions he draws.
He starts with Nick Carr’s polemic, ‘The Shallows’, which makes a case for the Internet ruining our minds and by extension, our culture. And then he heaps on some dime-store philosophy, and wraps it with an elitist bow:
David Brooks, The Medium Is The Medium
Carr argues that the Internet is leading to a short-attention-span culture. He cites a pile of research showing that the multidistraction, hyperlink world degrades people’s abilities to engage in deep thought or serious contemplation.
Carr’s argument has been challenged. His critics point to evidence that suggests that playing computer games and performing Internet searches actually improves a person’s ability to process information and focus attention. The Internet, they say, is a boon to schooling, not a threat.
In particular, Brooks does not touch on research that shows that reading on the Internet engages more parts of the brain than reading a book, which has led some to suggest it is a more intellectual activity than leaning back with the newest Harry Potter.
But there was one interesting observation made by a philanthropist who gives books to disadvantaged kids. It’s not the physical presence of the books that produces the biggest impact, she suggested. It’s the change in the way the students see themselves as they build a home library. They see themselves as readers, as members of a different group.
The Internet-versus-books debate is conducted on the supposition that the medium is the message. But sometimes the medium is just the medium. What matters is the way people think about themselves while engaged in the two activities. A person who becomes a citizen of the literary world enters a hierarchical universe. There are classic works of literature at the top and beach reading at the bottom.
This last paragraph is a puzzler: a series of sentences that don’t add up.
First, McLuhan’s point is that exposure to a new medium — say, reading books, or participating on the Internet — changes those who are exposed. That change in the individual’s mind is the real ‘message’ of the medium, not the stories in the books, or the sports on TV. And isn’t that what he is trying to say? That kids exposed to books are changed? Then why does he say that sometimes a ‘medium is just a medium’?
He isolates the one dimension of the many changes that media induce in us — self-identity — but leaves out others, like the shift in values that generally comes along with being exposed to new media.
A person enters this world as a novice, and slowly studies the works of great writers and scholars. Readers immerse themselves in deep, alternative worlds and hope to gain some lasting wisdom. Respect is paid to the writers who transmit that wisdom.
Brooks focuses on the supposed ‘hierarchical’ nature of literature — a construct of his personal or cultural feelings about literature — and seems to make that a law of the universe. I guess I buy that things like books are culturally situated, but they aren’t all situated in a single culture. There isn’t a single hit parade for all books ever.
Also, this allusion to the world of reading as if it is a Zen monastery is pure hyperbole.
A citizen of the Internet has a very different experience. The Internet smashes hierarchy and is not marked by deference. Maybe it would be different if it had been invented in Victorian England, but Internet culture is set in contemporary America. Internet culture is egalitarian. The young are more accomplished than the old. The new media is supposedly savvier than the old media. The dominant activity is free-wheeling, disrespectful, antiauthority disputation.
Brooks offers up an image of an academic quadrangle where learned authors stroll in gowns, chatting with eager young acolytes, and then contrasts it with the anti-authoritarian Internet, which is — shudder — egalitarian.
These different cultures foster different types of learning. The great essayist Joseph Epstein once distinguished between being well informed, being hip and being cultivated. The Internet helps you become well informed — knowledgeable about current events, the latest controversies and important trends. The Internet also helps you become hip — to learn about what’s going on, as Epstein writes, “in those lively waters outside the boring mainstream.”
But the literary world is still better at helping you become cultivated, mastering significant things of lasting import. To learn these sorts of things, you have to defer to greater minds than your own. You have to take the time to immerse yourself in a great writer’s world. You have to respect the authority of the teacher.
Then, in a neat bit of legerdemain, Brooks uses a quote about being cultivated to explicitly make the case that becoming cultivated — basically, taking on the values and manners of the elite — is more important than being well informed or hip. He also insinuates that learning how to respect your elders (‘betters’?) and those who have been accepted by the elite as authoritative is one mark of becoming cultivated. And of course, he states that the Internet does not do any of that, which is why the Internet is a playground and not the haunt of the learned.
Right now, the literary world is better at encouraging this kind of identity. The Internet culture may produce better conversationalists, but the literary culture still produces better students.
It’s better at distinguishing the important from the unimportant, and making the important more prestigious.
So, it’s a culture war, and Brooks joins Nick Carr, Andrew Keen, and a long list of others who say that what we are doing on the web is immoral, illegitimate, and immature. They are threatened by the change in values that seems to accompany deep involvement in web culture, a change that diminishes much of what Brooks holds up for our regard in his piece. I don’t mean the specific authors he may have been alluding to — although he names none but Carr — but rather a supposed hierarchical structure of western culture, which is reflected in the literary niche it supports.
Brooks is actually making a more sinister case: to the young that would like to get ahead, avoid the rabble on the web with their egalitarian and multitasking ways. Read books instead, because it is the mark of aspiring members of the elite, the ruling class.
Like Steven Pinker, Steven Johnson makes short work of most of Nick Carr’s hand-wringing about the Web ruining our minds, and by extension, Western civilization:
Mr. Carr spends a great deal of his book’s opening section convincing us that new forms of media alter the way the brain works, which I suspect most of his readers have long ago accepted as an obvious truth. The question is not whether our brains are being changed. (Of course new experiences change your brain — that’s what experience is, on some basic level.) The question is whether the rewards of the change are worth the liabilities.
The problem with Mr. Carr’s model is its unquestioned reverence for the slow contemplation of deep reading. For society to advance as it has since Gutenberg, he argues, we need the quiet, solitary space of the book. Yet many great ideas that have advanced culture over the past centuries have emerged from a more connective space, in the collision of different worldviews and sensibilities, different metaphors and fields of expertise. (Gutenberg himself borrowed his printing press from the screw presses of Rhineland vintners, as Mr. Carr notes.)
It’s no accident that most of the great scientific and technological innovation over the last millennium has taken place in crowded, distracting urban centers. The printed page itself encouraged those manifold connections, by allowing ideas to be stored and shared and circulated more efficiently. One can make the case that the Enlightenment depended more on the exchange of ideas than it did on solitary, deep-focus reading.
Quiet contemplation has led to its fair share of important thoughts. But it cannot be denied that good ideas also emerge in networks.
Yes, we are a little less focused, thanks to the electric stimulus of the screen. Yes, we are reading slightly fewer long-form narratives and arguments than we did 50 years ago, though the Kindle and the iPad may well change that. Those are costs, to be sure. But what of the other side of the ledger? We are reading more text, writing far more often, than we were in the heyday of television.
And the speed with which we can follow the trail of an idea, or discover new perspectives on a problem, has increased by several orders of magnitude. We are marginally less focused, and exponentially more connected. That’s a bargain all of us should be happy to make.
Johnson also touches on the new Kindle ‘popular highlights’ feature — where Amazon aggregates highlights from other readers, to let you know which passages are more popular — but doesn’t mention the inherent creepiness of Amazon watching our reading activities. It appears that turning this feature off requires disabling backups of all annotations, which seems like a Facebook-like coercive agreement.
Pinker undoes Nick Carr’s attack on web culture in The Shallows without naming him, but this follows pretty directly:
Steven Pinker, Mind Over Mass Media
Yes, the constant arrival of information packets can be distracting or addictive, especially to people with attention deficit disorder. But distraction is not a new phenomenon. The solution is not to bemoan technology but to develop strategies of self-control, as we do with every other temptation in life. Turn off e-mail or Twitter when you work, put away your Blackberry at dinner time, ask your spouse to call you to bed at a designated hour.
And to encourage intellectual depth, don’t rail at PowerPoint or Google. It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people. They must be acquired in special institutions, which we call universities, and maintained with constant upkeep, which we call analysis, criticism and debate. They are not granted by propping a heavy encyclopedia on your lap, nor are they taken away by efficient access to information on the Internet.
The new media have caught on for a reason. Knowledge is increasing exponentially; human brainpower and waking hours are not. Fortunately, the Internet and information technologies are helping us manage, search and retrieve our collective intellectual output at different scales, from Twitter and previews to e-books and online encyclopedias. Far from making us stupid, these technologies are the only things that will keep us smart.
The web is our only hope, on so many levels.
I have not read Nick Carr’s new work, The Shallows, although I will order it from my library: I am certain it is not a book I want to own, since it is a continuation of the Luddism he has been peddling on his blog and in earlier books, arguing that Google — and by extension, the web — is making us stupid.
Thankfully, Jonah Lehrer has read the work, and dissects Carr’s arguments:
This is a measured manifesto. Even as Carr bemoans his vanishing attention span, he’s careful to note the usefulness of the Internet, which provides us with access to a near infinitude of information. We might be consigned to the intellectual shallows, but these shallows are as wide as a vast ocean.
Nevertheless, Carr insists that the negative side effects of the Internet outweigh its efficiencies. Consider, for instance, the search engine, which Carr believes has fragmented our knowledge. “We don’t see the forest when we search the Web,” he writes. “We don’t even see the trees. We see twigs and leaves.” […]
But wait: it gets worse. Carr’s most serious charge against the Internet has nothing to do with Google and its endless sprawl of hyperlinks. Instead, he’s horrified by the way computers are destroying our powers of concentration. […] The online world has merely exposed the feebleness of human attention, which is so weak that even the most minor temptations are all but impossible to resist.
Carr extends these anecdotal observations by linking them to the plasticity of the brain, which is constantly being shaped by experience. While plasticity is generally seen as a positive feature — it keeps the cortex supple — Carr is interested in its dark side. He argues that our mental malleability has turned us into servants of technology, our circuits reprogrammed by our gadgets.
It is here that he starts to run into problems. There is little doubt that the Internet is changing our brain. Everything changes our brain. What Carr neglects to mention, however, is that the preponderance of scientific evidence suggests that the Internet and related technologies are actually good for the mind. For instance, a comprehensive 2009 review of studies published on the cognitive effects of video games found that gaming led to significant improvements in performance on various cognitive tasks, from visual perception to sustained attention. This surprising result led the scientists to propose that even simple computer games like Tetris can lead to “marked increases in the speed of information processing.” One particularly influential study, published in Nature in 2003, demonstrated that after just 10 days of playing Medal of Honor, a violent first-person shooter game, subjects showed dramatic increases in visual attention and memory.
Carr’s argument also breaks down when it comes to idle Web surfing. A 2009 study by neuroscientists at the University of California, Los Angeles, found that performing Google searches led to increased activity in the dorsolateral prefrontal cortex, at least when compared with reading a “book-like text.” Interestingly, this brain area underlies the precise talents, like selective attention and deliberate analysis, that Carr says have vanished in the age of the Internet. Google, in other words, isn’t making us stupid — it’s exercising the very mental muscles that make us smarter.
But Carr, and other web critics like Andrew Keen, won’t let cognitive science dampen their enthusiasm for moralizing. What we are doing online is immoral, illegitimate, and immature. It is causing anxiety, acne, and the dissolution of the family. This is what I call the ‘war on flow’: and it will never end.
I will have more to say, I wager, when I read the book.
Nick Carr wonders if we should sequester all of our links at the end of posts, instead of spreading them wherever they are referred to, to minimize distraction from what the author is getting at.
Links are wonderful conveniences, as we all know (from clicking on them compulsively day in and day out). But they’re also distractions. Sometimes, they’re big distractions - we click on a link, then another, then another, and pretty soon we’ve forgotten what we’d started out to do or to read. Other times, they’re tiny distractions, little textual gnats buzzing around your head. Even if you don’t click on a link, your eyes notice it, and your frontal cortex has to fire up a bunch of neurons to decide whether to click or not. You may not notice the little extra cognitive load placed on your brain, but it’s there and it matters. People who read hypertext comprehend and learn less, studies show, than those who read the same material in printed form. The more links in a piece of writing, the bigger the hit on comprehension.
The link is, in a way, a technologically advanced form of a footnote. It’s also, distraction-wise, a more violent form of a footnote. Where a footnote gives your brain a gentle nudge, the link gives it a yank. What’s good about a link - its propulsive force - is also what’s bad about it.
I don’t want to overstate the cognitive penalty produced by the hyperlink (or understate the link’s allure and usefulness), but the penalty seems to be real, and we should be aware of it. In The Shallows, I examine the hyperlink as just one element among many - including multimedia, interruptions, multitasking, jerky eye movements, divided attention, extraneous decision making, even social anxiety - that tend to promote hurried, distracted, and superficial thinking online. To understand the effects of the Web on our minds, you have to consider the cumulative effects of all these features rather than just the effects of any one individually.
Carr is not explicitly trying to ‘unbuild the web’, as Jay Rosen styles it. He’s worried about our vagrant attention, like a schoolmarm who simply wants us to get back to practicing our penmanship instead of looking at the clouds out the window.
Carr has forgotten that the journey is more important than the map, no matter how skilled the cartographer.
So this is actually not a knock against the architecture of the web, but an attack on flow: the way our minds work when presented with the increasingly fluid and meandering streams of information and connection that make up our online world.
Carr’s all about focus, and getting things done. It’s not that links are bad in and of themselves: it’s those who click on them, wandering off for ten minutes to read some supporting or dissenting opinion or three, who are the malefactors. How dare they! Don’t they know they are supposed to read the essay word for word, the way the author intended? The liberties they take by wandering all over!
It’s the same problem with newspapers, I bet Carr would say. The editors in their wisdom know how much of the newshole is supposed to be devoted to stories, and what’s on the front page. That’s their expertise. We aren’t supposed to make those decisions for ourselves!
Anyway, I think that Carr is just a bit too bookish and disconnected from the pulsing flow of the web to see what’s at play here.
I seldom agree with Nick Carr in an unabridged fashion, but here’s the counter example:
[after a long recapitulation of Ozzie’s long-term vision for Microsoft’s technology to dominate future web services, Nick gets to the core.]
Ozzie closed his talk with an attempt to position Microsoft as the company best suited to dominate the cloud, as it’s dominated the desktop, through a combination of “software plus services”:
We’re building a platform to support our own apps and solutions, and to support our partners’ applications and solutions, and to support enterprise solutions and enterprise infrastructure. We are the only company in the industry that has the breadth of reach from consumer to enterprises to understand and deliver and to take full advantage of the services opportunity in all of these markets. I believe we’re the only company with the platform DNA that’s necessary to viably deliver this highly leveragable platform approach to services. And we’re certainly one of the few companies that has the financial capacity to capitalize on this sea change, this services transformation.
I remember, back when the computing industry was going through its last great sea change, with the arrival of the personal computer, IBM assumed it was the “only company in the industry” with the customer base, the capabilities, and the cash to dominate the next generation of computing. But as a small upstart named Microsoft showed Big Blue, that ain’t necessarily so. Microsoft and Ozzie have been talking a good game about cloud computing for the past two years. But we’re still waiting for the Redmond team to take the field.
The likelihood is that Microsoft will be/is being blindsided by a wave of tiny startups that won’t be building anything on the Microsoft cloud. Microsoft may think they will dominate the cloud based on the attractiveness of their own software sitting there.
If a person like me can simply opt out of Office by using Google Docs and NeoOffice, trust me, the majority of other professionals will follow in a few years. Ozzie is puffing another pipe dream, just like Groove and Notes. It’s a dream that sounds plausible to consultants who are in the business of building customized business systems (like the Notes value proposition), but a wholesale transition to Web 2.0 apps would invalidate that premise.
We’ll have to see, but I agree with Nick. It’s too little and too late. Microsoft looks like a dinosaur watching the meteor streaming across the sky overhead, about to crash and cataclysmically alter the environment. The last great Information Age company in the post-everything economy.
I find little to like in Andrew Keen’s elitist writings, but Kevin Marks (bless him) can find something worthwhile there, despite all:
[from Keening for Culture]
This kind of Oxbridge cleverness for its own sake is part of the Guardian/BBC Platonist culture that sees its role to lead the uneducated masses to better themselves, while sneering at their plebian interests. Keen continually calls Google “Orwellian”, while ignoring the emotional core of 1984, which is the tension between Winston’s day job at the (BBC-derived) Ministry of Information, controlling the party line, and his private diary, written “to the future or to the past, to a time when thought is free, when men are different for one another and do not live alone”. […]
So what did I like? I liked that he said the web was a mirror of ourselves, but I see him as a Caliban cursing his reflection, as I said five years ago:
- The web we see is a reflection of ourselves individually as well as collectively.
- With 2 billion pages and counting, we can never see it all, and when we venture outside the well trodden paths of the personal web we know, we are more likely to make mistakes in our maps, and come back with ‘here be dragons’ written across entire continents and tales of men with no heads.
- I think this effect, rather than malice or wilful misrepresentation, is what is behind such things as journalists’ clueless articles on weblogs or congressmen fulminating against the net consisting mostly of porn and piracy.
There is more, delivered in the wonderful Marks style. I find Kevin rewarding, perhaps in part because he is more than willing to employ Shakespearean references to make a point.
Meanwhile, I have found myself softening to trolls like Keen, Dvorak, and Carr. Perhaps because I believe that they increasingly don’t matter, and that the majority of people have no time for them: even the extremely literate and wired. Or perhaps, like Kevin, I have come to believe that the occasional insight or well-turned metaphor makes the rest, if not worthwhile, at least legitimized.
I guess I am unsurprised by the conservative, anti-Internet tides once again rising, in the form of recent commentary from Nick Carr and this recent Businesspundit piece:
[…] will the successful companies (and employees) of the future be the ones that can do the hard things? Will concentration be a major source of competitive advantage in the coming years? When everyone is focusing on strategy, leadership, and technology as their sources of competitive advantage, will you be able to win by building a workforce that can execute because they can block out the mass of digital distraction and get things done? If thinking is the primary skill of knowledge workers, will the depth of your thinking determine your success? And if so, is it better to spend your time reading financial statements (for example) than scanning Digg for the latest Web2.0 app? I have cut back on blog reading the last 6 months, trying to search for quality over quantity. I’ve changed most of my RSS subscriptions to not display full posts anymore. That way, if I want to read something all the way through, I have to click to the site, which means I am much more discerning about the content I consume.
Nick Carr is even more dismissive:
[from A beautiful mindfulness]
You can’t have too much information. Or can you? Writing in the Guardian, Andrew Orlowski examines the “glut of hazy information, the consequences of which we have barely begun to explore, that the internet has made endlessly available.” He wonders whether the “aggregation of [online] information,” which some see as “synonymous with wisdom,” isn’t actually eroding our ability to think critically. He quotes Will Davies, of the Institute of Public Policy Research, who observes that
we can endlessly delay having to interpret and judge things by stacking more and more bits of data in front of us … That data is a comfort blanket in a way - we all do this. People are becoming addicted to getting more information all the time. You can see it when they get out their BlackBerrys as soon as they’ve stepped off a plane.
Like me, you’ve probably sensed the same thing, in yourself and in others - the way the constant collection of information becomes an easy substitute for trying to achieve any kind of true understanding. It seems a form of laziness as much as anything else, a laziness that the internet both encourages and justifies. The web is “a hall of mirrors” that provides the illusion of thinking, Michael Gorman, the president of the American Library Association, tells Orlowski.
So what I am doing, right now, is not thinking, I guess. It is an illusion of thinking, a hall of mirrors, where — because the information is reaching my eyeballs on a computer monitor instead of from a printed page — I am demonstrating my laziness and unwillingness to think critically.
Pardon me, but that’s pure and unadulterated bullshit.
This is another go at anti-web psychobabble. Might as well suggest throwing away books, too, and go back to pre-literate memorization of the epics in Homeric Greek. This is like saying Hip Hop or Rock&Roll is not “real” music, because it wasn’t written by some dead 19th Century European.
Thinking is thinking. Some thinking produces high quality, creative, and useful results — in the form of writing, plans, recommendations, insight — and other thinking is garbage.
The notion that the Web is more likely to lead to second-rate thinking is just stupid. It’s like saying that those reading hardcover books are more likely to be wise than those reading paperbacks, or that longer poems are better than short ones. It’s simply a subtle form of prejudice. “Yes, all that writing on the Web is well and good, but it’s not real writing, you know. If they could really write well, stuff that can really change the world, they’d put it in a book, not in a series of posts on a silly little blog.”
I don’t dispute that the explosion of writing on the Web makes it hard to find things worth reading, but the same is true of the printing press. And, oh, that was instrumental in the birth of the Renaissance, remember?
This “get a horse”, “golden age of humanity is behind us”, anti-Web rhetoric even manages to persuade normally sensible people, like Rob Hyndman.
I will concede that being beleaguered by jangling devices, like Blackberries, can disrupt your thinking process, but the Web isn’t a gizmo strapped to your hip. You don’t have to read the web through an RSS reader — which can be an assembly-line process, and one that I don’t much enjoy — and the native value of what is on the web is not lessened by being embedded in blogs. It’s not toxic.
I haven’t seen much going on at Paolo Valdemarin’s blog recently — although my reading habits are very spotty — so it was kind of a sobering experience to read his fourth blog birthday, or blirthday, post.
Blogging allowed me to meet the most interesting people of my life, to get an infinite number of ideas, to develop new products, find new partners, new customers, to learn more than I had ever learned before. It changed my life.
I’m not blogging much anymore on the English part of my blog, I write a little bit more on the Italian side. I’m not involved in many conversations or I don’t feel I have much to add to what is discussed. The atmosphere is changing, pretty soon you won’t even be able to say that blogging is not “mainstream media”.
Hmmm. Kind of sad to think of the fading of that joy and involvement. A number of people have been commenting on the change blowing in the breeze in blogland. Threads like Dave Winer’s blog suicide note, Robert Scoble’s screechiness, Jeneane Sessum’s Shitting Point, and Joi Ito walking away from blogging to immerse himself in Second Life — it all points to a fundamental change in the world of blogging.
Personally, I have been blogging since 1999, although in an interrupted way. You can see my first blog, Message From Edge City, only in the Wayback Machine at the Internet Archive, since the hosting company went under, and sold the servers before I could get my posts off:
I think one of the reasons I still have a fresh feeling for the whole thing is that I have had periods of low or no blogging along the way, and I have changed my blog a lot. Since Message From Edge City (MFEC) I have penned Timing, Instant Messaging, Get Real, and now /Message. I also contributed to a number of group projects, like Operating Manual for Social Tools (with danah boyd and David Weinberger), and Centrality (with Stan Wasserman and others). I don’t remember the official start date for my blogging, so I won’t have a Blirthday of my own. I will just celebrate Paolo’s with him.
I am too hypomanic to give in to the gloom and doom that many are feeling about the shifting currents in the Blogosphere, but I will offer some completely unsolicited advice for staying fresh. It’s not as corporate as Nick Carr’s heavyhanded riff on Robert Scoble’s recent spell of misguided lashing out at the rising tide of disquiet about Vista and Office slippage, but then I have already written a piece explicitly for Robert (What We Can Learn From Scoble’s Lament). No, these ideas are just for anyone who wants to retain the sense of fun and even joy that can come from the daily ritual of writing.
In the final analysis, you have to stay green or you are fading, fast. My personal mantra is “always beginning, never finished,” and it is that sort of attitude that brings me back, day after day, week after week, month after month, to the task — and joy — of writing.