Posts tagged with ‘cognitive science’

Everything We Think We Know About People Is Wrong - Stowe Boyd  →

A great deal of cognitive science research demonstrates that people don’t really understand how we think, how we influence each other, or the degree to which we are connected. We also lack a real understanding of water, the most common liquid on Earth, an analogy the excerpt below returns to:

Everything We Think We Know About People Is Wrong - Stowe Boyd via Nexalogy blog:

[…] It turns out that people — and marketers — don’t really understand influence very well, despite being embedded in social networks their entire lives: we really don’t understand the way that we are influenced by other people. For example, if someone touches you when you first meet, you are ten times more likely to remember that person. But we are unaware, later, that the touch was the reason for our recollection. We underestimate the impact of a kind word, or the chilling effects of workplace fear. There are dozens of examples of this sort coming out of cognitive science that demonstrate that we are being strongly influenced below the conscious level, physiologically, all the time. The actions of others can make us fearful, or confident, or curious, or suspicious — and it can happen invisibly. People just don’t have great insight into the social interactions of people, despite being involved in them.

Most contemporary thinking about our social interactions is derived from an economic view that considers groups as collections of individuals, where each individual makes more-or-less rational decisions intended to maximize benefits to themselves and their loved ones. I think there is an analogy with the historical physics view of how fluids work, water specifically.

read more at Nexalogy blog

Why Angry Birds is so successful and popular: a cognitive teardown of the user experience - Charles Mauro →

In a fascinating and detailed analysis of the UX factors behind Angry Birds’ success, Charles Mauro touches on many aspects of the game’s design, including human short-term memory:

Charles Mauro via Pulse UX Blog

It is a well-known fact of cognitive science that human short-term memory (SM), when compared to other attributes of our memory systems, is exceedingly limited. This fact has been the focus of thousands of studies over the last 50 years. Scientists have poked and prodded this aspect of human cognition to determine exactly how SM operates and what impacts SM effectiveness.

As we go about our daily lives, short-term memory makes it possible for you to engage with all manner of technology and the environment in general. SM is a temporary memory that allows us to remember a very limited number of discrete items, behaviors, or patterns for a short period of time. SM makes it possible for you to operate without constant referral to long-term memory, a much more complex and time-consuming process. This is critical because SM is fast and easily configured, which allows one to adapt instantly to situations that might otherwise be fatal if one were required to access long-term memory. In computer-speak, human short-term memory is also highly volatile. This means it can be erased instantly, or more importantly, it can be overwritten by other information coming into the human perceptual system.

Where things get interesting is the point where poor user interface design impacts the demand placed on SM. For example, a user interface design solution that requires the user to view information on one screen, store it in short-term memory, and then reenter that same information in a data field on another screen seems like a trivial task. Research shows that it is difficult to do accurately, especially if some other form of stimulus flows between the memorization of the data from the first screen and before the user enters the data in the second. This disruptive data flow can be in almost any form, but as a general rule, anything that is engaging, such as conversation, noise, motion, or worst of all, a combination of all three, is likely to totally erase SM. When you encounter this type of data flow before you complete transfer of data using short-term memory, chances are very good that when you go back to retrieve important information from short-term memory, it is gone!

One would logically assume that any aspect of user interface design that taxes short-term memory is a really bad idea. As was the case with response time, a more refined view leads to surprising insights into how one can use the degradation of short-term memory to actually improve game play engagement. Angry Birds is a surprisingly smart manager of the player’s short-term memory.

By simple manipulation of the user interface, Angry Birds designers created significant short-term memory loss, which in turn increases game play complexity but in a way that is not perceived by the player as negative and adds to the addictive nature of the game itself. The subtle, yet powerful concept employed in Angry Birds is to bend short-term memory but not to actually break it. If you do break SM, make sure you give the user a very simple, fast way to accurately reload.
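To make the “volatile and easily overwritten” point concrete, here is a minimal sketch in Python. The four-item capacity and the rule that every engaging stimulus claims a slot are my simplifying assumptions, not figures from Mauro’s analysis; it is only a toy illustration of how an interruption can wipe out the very item you were trying to carry between screens.

```python
# Toy model of the short-term-memory behavior described above: a tiny,
# volatile buffer that new incoming items overwrite. The capacity of 4 and
# the "every engaging stimulus takes a slot" rule are illustrative assumptions.

from collections import deque

class ShortTermMemory:
    def __init__(self, capacity: int = 4):
        self.items = deque(maxlen=capacity)  # oldest items fall out first

    def store(self, item: str) -> None:
        self.items.append(item)

    def recall(self, item: str) -> bool:
        return item in self.items

stm = ShortTermMemory()
stm.store("score needed: 63,000")      # read off screen one

# Engaging interruptions arrive before the player re-enters the number...
for distraction in ["phone buzz", "conversation", "ad jingle", "loud noise"]:
    stm.store(distraction)

print(stm.recall("score needed: 63,000"))  # False: the figure has been overwritten
```

The point of the toy is the same one Mauro makes about Angry Birds: a design can deliberately bend this buffer, letting some of it decay, as long as reloading it is quick and cheap.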

(Source: underpaidgenius)

A Brief Guide to Embodied Cognition: Why You Are Not Your Brain - Samuel McNerney →

Good recap of the rise of embodied cognition as a rich field of inquiry, and especially George Lakoff’s contributions:

Samuel McNerney via Scientific American

Metaphors We Live By [by George Lakoff and Mark Johnson] was a game changer. Not only did it illustrate how prevalent metaphors are in everyday language, it also suggested that a lot of the major tenets of western thought, including the idea that reason is conscious and passionless and that language is separate from the body aside from the organs of speech and hearing, were incorrect. In brief, it demonstrated that “our ordinary conceptual system, in terms of which we both think and act, is fundamentally metaphorical in nature.”

After Metaphors We Live By was published, embodiment slowly gained momentum in academia. In the 1990s dissertations by Christopher Johnson, Joseph Grady and Srini Narayanan led to a neural theory of primary metaphors. They argued that much of our language comes from physical interactions during the first several years of life, as the Affection Is Warmth metaphor illustrated. There are many other examples; we equate up with control and down with being controlled because stronger people and objects tend to control us, and we understand anger metaphorically in terms of heat, pressure, and loss of physical control because when we are angry our physiology changes, e.g., skin temperature increases, heart beat rises, and physical control becomes more difficult.

This and other work prompted Lakoff and Johnson to publish Philosophy in the Flesh, a six-hundred-page giant that challenges the foundations of western philosophy by discussing whole systems of embodied metaphors in great detail and furthermore arguing that philosophical theories themselves are constructed metaphorically. Specifically, they argued that the mind is inherently embodied, thought is mostly unconscious and abstract concepts are largely metaphorical. What’s left is the idea that reason is not based on abstract laws because cognition is grounded in bodily experience. (A few years later Lakoff teamed with Rafael Núñez to publish Where Mathematics Comes From to argue at great length that higher mathematics is also grounded in the body and embodied metaphorical thought.)

As Lakoff points out, metaphors are more than mere language and literary devices; they are conceptual in nature and represented physically in the brain. As a result, such metaphorical brain circuitry can affect behavior. For example, in a study done by Yale psychologist John Bargh, participants holding warm as opposed to cold cups of coffee were more likely to judge a confederate as trustworthy after only a brief interaction. Similarly, at the University of Toronto, “subjects were asked to remember a time when they were either socially accepted or socially snubbed. Those with warm memories of acceptance judged the room to be 5 degrees warmer on the average than those who remembered being coldly snubbed. Another effect of Affection Is Warmth.” This means that we both physically and literally “warm up” to people.

The last few years have seen many complementary studies, all of which are grounded in primary experiences:

• Thinking about the future caused participants to lean slightly forward while thinking about the past caused participants to lean slightly backwards. Future is Ahead

• Squeezing a soft ball influenced subjects to perceive gender neutral faces as female while squeezing a hard ball influenced subjects to perceive gender neutral faces as male. Female is Soft

• Those who held heavier clipboards judged currencies to be more valuable and their opinions and leaders to be more important. Important is Heavy

• Subjects asked to think about a moral transgression like adultery or cheating on a test were more likely to request an antiseptic cloth after the experiment than those who had thought about good deeds. Morality is Purity

Studies like these confirm Lakoff’s initial hunch – that our rationality is greatly influenced by our bodies in large part via an extensive system of metaphorical thought.

Different languages are spoken at varying speeds but thanks to correlated differences in data-density, the same amount of information is conveyed within a given time period. For all of the other languages, the researchers discovered, the more data-dense the average syllable is, the fewer of those syllables had to be spoken per second — and the slower the speech thus was. English, with a high information density of .91, is spoken at an average rate of 6.19 syllables per second. Mandarin, which topped the density list at .94, was the spoken slowpoke at 5.18 syllables per second. Spanish, with a low-density .63, rips along at a syllable-per-second velocity of 7.82. The true speed demon of the group, however, was Japanese, which edges past Spanish at 7.84, thanks to its low density of .49. Despite those differences, at the end of, say, a minute of speech, all of the languages would have conveyed more or less identical amounts of information.

- Jeffrey Kluger, The speed and density of language, via Jason Kottke

Which means that they are all timed to the same underlying clock: how fast the human brain can take in information, however it is encoded in syllables per second. The critical clock speed is ideas per second.
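As a quick check on that idea, using only the figures quoted above, you can multiply each language’s information density per syllable by its syllables per second to get its information rate. A minimal sketch in Python:

```python
# Information rate = information density per syllable * syllables per second,
# using only the figures quoted above. The densities are relative measures as
# quoted, so the products are comparable across languages, not absolute bits.
languages = {
    # name: (density per syllable, syllables per second)
    "English":  (0.91, 6.19),
    "Mandarin": (0.94, 5.18),
    "Spanish":  (0.63, 7.82),
    "Japanese": (0.49, 7.84),
}

for name, (density, rate) in languages.items():
    print(f"{name:9s} information rate: {density * rate:.2f}")
```

English, Mandarin, and Spanish come out within roughly 15 percent of one another, while Japanese lands noticeably lower, which is about what the “more or less identical” hedge in the quote allows for.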

(via wildcat2030)

A Better Way to Teach Math - David Bornstein →

There might be a bell curve in natural ability, but does that mean we are condemned to a bell curve in the results of training? Perhaps not, as the JUMP Math approach to teaching shows:

Children come into school with differences in background knowledge, confidence, ability to stay on task and, in the case of math, quickness. In school, those advantages can get multiplied rather than evened out. One reason, says Mighton, is that teaching methods are not aligned with what cognitive science tells us about the brain and how learning happens.

In particular, math teachers often fail to make sufficient allowances for the limitations of working memory and the fact that we all need extensive practice to gain mastery in just about anything. Children who struggle in math usually have difficulty remembering math facts, handling word problems and doing multi-step arithmetic (pdf). Despite the widespread support for “problem-based” or “discovery-based” learning, studies indicate that current teaching approaches underestimate the amount of explicit guidance, “scaffolding” and practice children need to consolidate new concepts. Asking children to make their own discoveries before they solidify the basics is like asking them to compose songs on guitar before they can form a C chord.

Teaching is another area that cognitive science hasn’t really reached. Most of what educators do is based on folklore, and most of the premises underlying education are likely to be flawed, or totally false.

To Tug the Heartstrings, Music Must First Tickle the Brain - Pam Belluck →

Tapping into empathy?

The brain processes musical nuance in many ways, it turns out. Edward W. Large, a music scientist at Florida Atlantic University, scanned the brains of people with and without experience playing music as they listened to two versions of a Chopin étude: one recorded by a pianist, the other stripped down to a literal version of what Chopin wrote, without human-induced variations in timing and dynamics.

During the original performance, brain areas linked to emotion activated much more than with the uninflected version, showing bursts of activity with each deviation in timing or volume.

So did the mirror neuron system, a set of brain regions previously shown to become engaged when a person watches someone doing an activity the observer knows how to do — dancers watching videos of dance, for example. But in Dr. Large’s study, mirror neuron regions flashed even in nonmusicians.

Maybe those regions, which include some language areas, are “tapping into empathy,” he said, “as though you’re feeling an emotion that is being conveyed by a performer on stage,” and the brain is mirroring those emotions.

Music is a medium for feelings, literally: the artist can actually make us feel what they are feeling.

David Eagleman and Mysteries of the Brain - Burkhard Bilger →

The brain is a remarkably capable chronometer for most purposes. It can track seconds, minutes, days, and weeks, set off alarms in the morning, at bedtime, on birthdays and anniversaries. Timing is so essential to our survival that it may be the most finely tuned of our senses. In lab tests, people can distinguish between sounds as little as five milliseconds apart, and our involuntary timing is even quicker. If you’re hiking through a jungle and a tiger growls in the underbrush, your brain will instantly home in on the sound by comparing when it reached each of your ears, and triangulating between the three points. The difference can be as little as nine-millionths of a second.

Yet “brain time,” as Eagleman calls it, is intrinsically subjective. “Try this exercise,” he suggests in a recent essay. “Put this book down and go look in a mirror. Now move your eyes back and forth, so that you’re looking at your left eye, then at your right eye, then at your left eye again. When your eyes shift from one position to the other, they take time to move and land on the other location. But here’s the kicker: you never see your eyes move.” There’s no evidence of any gaps in your perception—no darkened stretches like bits of blank film—yet much of what you see has been edited out. Your brain has taken a complicated scene of eyes darting back and forth and recut it as a simple one: your eyes stare straight ahead. Where did the missing moments go?

The question raises a fundamental issue of consciousness: how much of what we perceive exists outside of us and how much is a product of our minds? Time is a dimension like any other, fixed and defined down to its tiniest increments: millennia to microseconds, aeons to quartz oscillations. Yet the data rarely matches our reality. The rapid eye movements in the mirror, known as saccades, aren’t the only things that get edited out. The jittery camera shake of everyday vision is similarly smoothed over, and our memories are often radically revised. What else are we missing? When Eagleman was a boy, his favorite joke had a turtle walking into a sheriff’s office. “I’ve just been attacked by three snails!” he shouts. “Tell me what happened,” the sheriff replies. The turtle shakes his head: “I don’t know, it all happened so fast.”

[…]

Just how many clocks we contain still isn’t clear. The most recent neuroscience papers make the brain sound like a Victorian attic, full of odd, vaguely labelled objects ticking away in every corner. The circadian clock, which tracks the cycle of day and night, lurks in the suprachiasmatic nucleus, in the hypothalamus. The cerebellum, which governs muscle movements, may control timing on the order of a few seconds or minutes. The basal ganglia and various parts of the cortex have all been nominated as timekeepers, though there’s some disagreement on the details. The standard model, proposed by the late Columbia psychologist John Gibbon in the nineteen-seventies, holds that the brain has “pacemaker” neurons that release steady pulses of neurotransmitters. More recently, at Duke, the neuroscientist Warren Meck has suggested that timing is governed by groups of neurons that oscillate at different frequencies. At U.C.L.A., Dean Buonomano believes that areas throughout the brain function as clocks, their tissue ticking with neural networks that change in predictable patterns. “Imagine a skyscraper at night,” he told me. “Some people on the top floor work till midnight, while some on the lower floors may go to bed early. If you studied the patterns long enough, you could tell the time just by looking at which lights are on.”

Time isn’t like the other senses, Eagleman says. Sight, smell, touch, taste, and hearing are relatively easy to isolate in the brain. They have discrete functions that rarely overlap: it’s hard to describe the taste of a sound, the color of a smell, or the scent of a feeling. (Unless, of course, you have synesthesia—another of Eagleman’s obsessions.) But a sense of time is threaded through everything we perceive. It’s there in the length of a song, the persistence of a scent, the flash of a light bulb. “There’s always an impulse toward phrenology in neuroscience—toward saying, ‘Here is the spot where it’s happening,’ ” Eagleman told me. “But the interesting thing about time is that there is no spot. It’s a distributed property. It’s metasensory; it rides on top of all the others.”

[…]

“Time is this rubbery thing,” Eagleman said. “It stretches out when you really turn your brain resources on, and when you say, ‘Oh, I got this, everything is as expected,’ it shrinks up.” The best example of this is the so-called oddball effect—an optical illusion that Eagleman had shown me in his lab. It consisted of a series of simple images flashing on a computer screen. Most of the time, the same picture was repeated again and again: a plain brown shoe. But every so often a flower would appear instead. To my mind, the change was a matter of timing as well as of content: the flower would stay onscreen much longer than the shoe. But Eagleman insisted that all the pictures appeared for the same length of time. The only difference was the degree of attention that I paid to them. The shoe, by its third or fourth appearance, barely made an impression. The flower, more rare, lingered and blossomed, like those childhood summers.

[…]

"We’re stuck in time like fish in water,” Eagleman said, oblivious of its currents until a bubble floats by. It’s usually best that way. He had spent the past ten years peering at the world through such gaps in our perception, he said. “But sometimes you get so far down deep into reality that you want to pull back. Sometimes, in a great while, I’ll think, What if I find out that this is all an illusion?” He felt this most keenly with his schizophrenic subjects, who tended to do poorly on timing tests. The voices in their heads, he suspected, were no different from anyone else’s internal monologues; their brains just processed them a little out of sequence, so that the thoughts seemed to belong to someone else. “All it takes is this tiny tweak in the brain, this tiny change in perception,” he said, “and what you see as real isn’t real to anyone else.”

I am looking forward to reading David Eagleman’s Brain Time, which is available online at Edge.
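The Buonomano passage above, about reading the time off a whole population of units rather than any single one, lends itself to a toy model. The periods and the decoding scheme below are invented for illustration and are not his model; the sketch just shows how a set of units, none of which counts elapsed time on its own, can jointly pin down the time by their combined state:

```python
# Toy illustration of the "population clock" idea (the skyscraper-at-night
# analogy quoted above). Each unit cycles at its own period; the joint
# pattern of phases identifies the elapsed time. Periods and decoding are
# invented for illustration, not taken from the actual neuroscience.

PERIODS = [2, 3, 5, 7]  # seconds per cycle; pairwise coprime on purpose

def snapshot(t: int) -> tuple:
    """Where each unit is in its own cycle at time t (its phase)."""
    return tuple(t % p for p in PERIODS)

# Build a lookup from the joint phase pattern back to elapsed time.
super_cycle = 2 * 3 * 5 * 7  # 210 s before the joint pattern repeats
decode = {snapshot(t): t for t in range(super_cycle)}

# Observing only the pattern at some unknown moment recovers the time exactly.
assert decode[snapshot(47)] == 47
print(decode[snapshot(47)])  # -> 47
```

Any one unit’s phase recurs every few seconds and is useless on its own; only the joint pattern, like the full set of lit windows in the skyscraper, picks out a single moment within the cycle.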

Another Lesson About Cognition And The Web: Lara Logan And Hate

I am all for teachable moments, but Maureen Dowd and the tut-tut, tsk-tsk bloviators are connecting the wrong dots following Lara Logan’s sexual assault in Egypt and the fooforah that followed. Dowd starts by taking aim at Nir Rosen, whose unfeeling and reptilian comments led to him losing a fellowship at NYU, and probably his relationship with The Nation, The New Yorker, and The Atlantic. But she winds up demonizing the web. 

Maureen Dowd, Stars and Sewers

On Tuesday, he [Nir Rosen] merrily tweeted about the sexual assault of Logan: “Jesus Christ, at a moment when she is going to become a martyr and glorified we should at least remember her role as a major war monger.”

He suggested she was trying to “outdo Anderson” Cooper (roughed up in Cairo earlier), adding that “it would have been funny if it happened to Anderson too.”

Rosen lost his fellowship. He apologized in a whiny way, explaining that he “resented” Logan because she “defended American imperial adventures,” and that she got so much attention for the assault because she’s white and famous. He explained in Salon that “Twitter is no place for nuance,” as though there’s any nuance in his suggestion that Logan wanted to be sexually assaulted for ratings.

He professed to be baffled by the fact that he had 1,000 new Twitter followers, noting: “It’s a bizarre, voyeuristic Internet culture and everybody in the mob is looking to get in on the next fight.” It’s been Lord of the Flies for a while now, dude, and you’re part of it.

The conservative blogger Debbie Schlussel smacked Logan from the right: “Lara Logan was among the chief cheerleaders of this ‘revolution’ by animals. Now she knows what the Islamic revolution is really all about.”

On her LA Weekly blog, Simone Wilson dredged up Logan’s romantic exploits and quoted a Feb. 3 snipe from the conservative blog Mofo Politics, after Logan was detained by the Egyptian police: “OMG if I were her captors and there were no sanctions for doing so, I would totally rape her.”

Online anonymity has created what the computer scientist Jaron Lanier calls a “culture of sadism.” Some Yahoo comments were disgusting. “She got what she deserved,” one said. “This is what happens when dumb sexy female reporters want to make it about them.” Hillbilly Nation chimed in: “Should have been Katie.”

The “60 Minutes” story about Senator Scott Brown’s revelation that a camp counselor sexually abused him as a child drew harsh comments on the show’s Web site, many politically motivated.

Acupuncturegirl advised: “Scott, shut the hell up. You are gross.” Dutra1 noted: “OK, Scott, you get your free pity pills. Now examine the image you see in the mirror; is it a man?”

Evgeny Morozov, author of “The Net Delusion: The Dark Side of Internet Freedom,” told me Twitter creates a false intimacy and can “bring out the worst in people. You’re straining after eyeballs, not big thoughts. So you go for the shallow, funny, contrarian or cynical.”

Nicholas Carr, author of “The Shallows: What the Internet is Doing to Our Brains,” says technology amplifies everything, good instincts and base. While technology is amoral, he said, our brains may be rewired in disturbing ways.

“Researchers say that we need to be quiet and attentive if we want to tap into our deeper emotions,” he said. “If we’re constantly interrupted and distracted, we kind of short-circuit our empathy. If you dampen empathy and you encourage the immediate expression of whatever is in your mind, you get a lot of nastiness that wouldn’t have occurred before.”

Leon Wieseltier, literary editor of The New Republic, recalled that when he started his online book review he forbade comments, wary of high-tech sociopaths.

“I’m not interested in having the sewer appear on my site,” he said. “Why would I engage with people digitally whom I would never engage with actually? Why does the technology exonerate the kind of foul expression that you would not tolerate anywhere else?”

Why indeed?

Just to review the bidding:

  1. Lara Logan is sexually assaulted by a mob in Cairo.
  2. Idiots of various stripes make comments about the event, like Nir Rosen, based on ideological or even pathological motivations. Some of them use the web to make these comments, or they make the comments elsewhere and those comments are sucked up into the swirling maelstrom of the web.
  3. Dowd and Morozov play blame-the-web, implying that the web, itself, is like the devil at our ear, making us — or others — do evil.

There is a buried network of spurious arguments underneath all the comfortable hatred of incivility here. One assumption is that the web is supposed to be a force for good, and only good. Who says? A second is that those who use the web are in some way a collective entity, a global society with shared beliefs, including various democratic ideals. These unstated assertions are deeply and profoundly wrong, but taken as a given in anti-web circles.

The web is more like one of the squares that Egyptian, Tunisian, and Bahraini activists have been occupying recently. Demonstrators move in, perhaps displacing the usual bicyclists, traffic cops, and fruit vendors. Later on, riot police or the army show up, and conflict may ensue.

But the fact that these different groups or individuals are occupying the same plaza does not equate to shared beliefs, necessarily. The same is true online, except it is an enormous plaza, encompassing all groups of all positions and persuasions.

[Note: This doesn’t undermine my belief in an emerging social culture, as a consequence of the rise of a post-industrial world. Indeed, the unrest in the Arab world is a distant echo of the same forces — including the social web — that will come to transform the future. But that is another, even longer post.]

Carr suggests that amorality on the web is rewiring the human mind and worsening society. On the contrary, many sorts of cognition are employed when we interact on the web and in the world. Fear and distrust of strangers — those that we perceive as not part of our clan, tribe, or ethnic group — is a human universal, deeply wired into our minds. This bias can lead to a devaluation of others’ humanness: the ability to think of outsiders as less human than us, and undeserving of humane treatment, like the rioters’ behavior toward Logan, or the trolls that attack in blog comments.

There are other, more hope-inspiring universals of human character, though, such as the belief that justice should prevail and that the strong have an obligation to help the weak. The human mind is an enigma, driven by both love and hate, and capable of soaring insight and irrational fears.

We have built the web to connect to ourselves, and it’s not designed to filter any demons out of our minds.

What the web does do, and what leads to Dowd and Carr’s web-bashing, is to relax the strictures imposed by social order. Carr, Dowd, and other professional finger-waggers were raised in an era when professional, corporate media controlled public discourse. They determined what was fit for the front page, or got into the classifieds. They decided what ‘balanced and fair’ meant. They determined whose voices were authoritative, which positions were legitimate, and what stories should be squelched.

That has been undone, and now all sorts of perspectives can rise to our attention. Rape can be positioned as a reasonable response to an attractive western journalist’s presence in Egypt’s boiling street scene during the unrest there. But saying so doesn’t make it so. And the fact that it appeared — was transmitted — over Twitter does not mean that Twitter is somehow tarnished by that statement.

The real issue here is not the web. It is the conflict inherent in liberal western society between freedom of speech and the tendency toward censorship of hate language. In principle, we state that people should have the right to express any belief, and that they are free to do so. At the same time, in many western countries, speaking hatefully in ways that could lead to violence or discrimination is illegal. But the borderline between these principles is drawn very differently in different countries.

We won’t see the end of this anti-web rhetoric, because the motivation for it is itself a cognitive universal. People naturally believe that touching something or someone unclean makes you unclean. So a medium that is used to transmit evil is naturally thought to be, by extension, evil itself. Human cognitive preferences lead us to feel disgust and avoidance when confronted with things that we have been socialized to think are dirty or evil.

So Carr and company are acting profoundly human when they recoil from a web that can pass along such messages, or that makes it easy to come into contact with people professing unacceptable and personally disgusting beliefs.

However, words appearing on a computer screen are not like spit flying from the author’s mouth. And while we may understand that intellectually, the non-rational part of our minds still thinks magically, and might come to hate the medium that carries the words. That’s why the bearer of bad tidings was so often killed in the dim, dark past.

Still, throwing away the web because you don’t like what you see is like breaking a mirror because you don’t like your own reflection. It is us we are staring at in that mirror, on the web: and it is us looking out, too.

Your Brain Is Plastic: So Move It!

Oliver Sacks, Don’t leave learning to the young. Older brains can grow, too

One does not have to be blind or deaf to tap into the brain’s mysterious and extraordinary power to learn, adapt and grow. I have seen hundreds of patients with various deficits — strokes, Parkinson’s and even dementia — learn to do things in new ways, whether consciously or unconsciously, to work around those deficits.

That the brain is capable of such radical adaptation raises deep questions. To what extent are we shaped by, and to what degree do we shape, our own brains? And can the brain’s ability to change be harnessed to give us greater cognitive powers? The experiences of many people suggest that it can.

 […]

I have had many reports from ordinary people who take up a new sport or a musical instrument in their 50s or 60s, and not only become quite proficient, but derive great joy from doing so. Eliza Bussey, a journalist in her mid-50s who now studies harp at the Peabody conservatory in Baltimore, could not read a note of music a few years ago. In a letter to me, she wrote about what it was like learning to play Handel’s “Passacaille”: “I have felt, for example, my brain and fingers trying to connect, to form new synapses. … I know that my brain has dramatically changed.” Ms. Bussey is no doubt right: her brain has changed.

Music is an especially powerful shaping force, for listening to and especially playing it engages many different areas of the brain, all of which must work in tandem: from reading musical notation and coordinating fine muscle movements in the hands, to evaluating and expressing rhythm and pitch, to associating music with memories and emotion.

Whether it is by learning a new language, traveling to a new place, developing a passion for beekeeping or simply thinking about an old problem in a new way, all of us can find ways to stimulate our brains to grow, in the coming year and those to follow. Just as physical activity is essential to maintaining a healthy body, challenging one’s brain, keeping it active, engaged, flexible and playful, is not only fun. It is essential to cognitive fitness.

We can learn to do almost anything. Pick something wild, and do it!

Nick Carr’s ‘The Shallows’

I have not read Nick Carr’s new work, The Shallows, although I will order it from my library: I am certain it is not a book I want to own, since it is a continuation of the Luddism he has been peddling on his blog and in earlier books, arguing that Google, and by extension the web, is making us stupid.

Thankfully, Jonah Lehrer has read the work, and dissects Carr’s arguments:

Our Cluttered Minds

This is a measured manifesto. Even as Carr bemoans his vanishing attention span, he’s careful to note the usefulness of the Internet, which provides us with access to a near infinitude of information. We might be consigned to the intellectual shallows, but these shallows are as wide as a vast ocean.

Nevertheless, Carr insists that the negative side effects of the Internet outweigh its efficiencies. Consider, for instance, the search engine, which Carr believes has fragmented our knowledge. “We don’t see the forest when we search the Web,” he writes. “We don’t even see the trees. We see twigs and leaves.” […]

But wait: it gets worse. Carr’s most serious charge against the Internet has nothing to do with Google and its endless sprawl of hyperlinks. Instead, he’s horrified by the way computers are destroying our powers of concentration. […] The online world has merely exposed the feebleness of human attention, which is so weak that even the most minor temptations are all but impossible to resist.

Carr extends these anecdotal observations by linking them to the plasticity of the brain, which is constantly being shaped by experience. While plasticity is generally seen as a positive feature — it keeps the cortex supple — Carr is interested in its dark side. He argues that our mental malleability has turned us into servants of technology, our circuits reprogrammed by our gadgets.

It is here that he starts to run into problems. There is little doubt that the Internet is changing our brain. Everything changes our brain. What Carr neglects to mention, however, is that the preponderance of scientific evidence suggests that the Internet and related technologies are actually good for the mind. For instance, a comprehensive 2009 review of studies published on the cognitive effects of video games found that gaming led to significant improvements in performance on various cognitive tasks, from visual perception to sustained attention. This surprising result led the scientists to propose that even simple computer games like Tetris can lead to “marked increases in the speed of information processing.” One particularly influential study, published in Nature in 2003, demonstrated that after just 10 days of playing Medal of Honor, a violent first-person shooter game, subjects showed dramatic increases in visual attention and memory.

Carr’s argument also breaks down when it comes to idle Web surfing. A 2009 study by neuroscientists at the University of California, Los Angeles, found that performing Google searches led to increased activity in the dorsolateral prefrontal cortex, at least when compared with reading a “book-like text.” Interestingly, this brain area underlies the precise talents, like selective attention and deliberate analysis, that Carr says have vanished in the age of the Internet. Google, in other words, isn’t making us stupid — it’s exercising the very mental muscles that make us smarter.

But Carr, and other web critics like Andrew Keen, won’t let cognitive science dampen their enthusiasm for moralizing. In their telling, what we are doing online is immoral, illegitimate, and immature; it is causing anxiety, acne, and the dissolution of the family. This is what I call the ‘war on flow’, and it will never end.

I will have more to say, I wager, when I read the book.
