Posts tagged with ‘mind’
A well-educated time traveller from 1914 enters a room divided in half by a curtain. A scientist tells him that his task is to ascertain the intelligence of whoever is on the other side of the curtain by asking whatever questions he pleases.
The traveller’s queries are answered by a voice with an accent that he does not recognize (twenty-first-century American English). The woman on the other side of the curtain has an extraordinary memory. She can, without much delay, recite any passage from the Bible or Shakespeare. Her arithmetic skills are astonishing—difficult problems are solved in seconds. She is also able to speak many foreign languages, though her pronunciation is odd. Most impressive, perhaps, is her ability to describe almost any part of the Earth in great detail, as though she is viewing it from the sky. She is also proficient at connecting seemingly random concepts, and when the traveller asks her a question like “How can God be both good and omnipotent?” she can provide complex theoretical answers.
Based on this modified Turing test, our time traveller would conclude that, in the past century, the human race achieved a new level of superintelligence. Using lingo unavailable in 1914 (the term was coined later by John von Neumann), he might conclude that the human race had reached a “singularity”—a point where it had gained an intelligence beyond the understanding of the 1914 mind.
The woman behind the curtain is, of course, just one of us. That is to say, she is a regular human who has augmented her brain using two tools: her mobile phone and a connection to the Internet and, thus, to Web sites like Wikipedia, Google Maps, and Quora. To us, she is unremarkable, but to the man she is astonishing. With our machines, we are augmented humans and prosthetic gods, though we’re remarkably blasé about that fact, as we are about anything we’re used to. Take away our tools, the argument goes, and we’re likely stupider than our friend from the early twentieth century, who has a longer attention span, may read and write Latin, and does arithmetic faster.
The time-traveller scenario demonstrates that how you answer the question of whether we are getting smarter depends on how you classify “we.” This is why Thompson and Carr reach different results: Thompson is judging the cyborg, while Carr is judging the man underneath.
Cathy Davidson, The Myth of Monotasking | HASTAC
Jonah Lehrer vividly remembers drinking Coke from a glass bottle at a high school football game. However, the school prohibited glass in the stadium, so it couldn’t have happened. But Coke works hard to make you act as if it did.
Jonah Lehrer, Ads Implant False Memories
A new study, published in The Journal of Consumer Research, helps explain both the success of this marketing strategy and my flawed nostalgia for Coke. It turns out that vivid commercials are incredibly good at tricking the hippocampus (a center of long-term memory in the brain) into believing that the scene we just watched on television actually happened. And it happened to us.
The experiment went like this: 100 undergraduates were introduced to a new popcorn product called “Orville Redenbacher’s Gourmet Fresh Microwave Popcorn.” (No such product exists, but that’s the point.) Then, the students were randomly assigned to various advertisement conditions. Some subjects viewed low-imagery text ads, which described the delicious taste of this new snack food. Others watched a high-imagery commercial, in which all sorts of happy people enjoyed this popcorn in their living rooms. After viewing the ads, the students were assigned to one of two rooms. In one room, they were given an unrelated survey. In the other room, however, they were given a sample of this fictional new popcorn to taste. (A different Orville Redenbacher popcorn was actually used.)
One week later, all the subjects were quizzed about their memory of the product. Here’s where things get disturbing: While students who saw the low-imagery ad were extremely unlikely to report having tried the popcorn, those who watched the slick commercial were just as likely to say they had tried the popcorn as those who actually did. Furthermore, their ratings of the product were as favorable as those of the subjects who had sampled the salty, buttery treat. Most troubling, perhaps, is that these subjects were extremely confident in these made-up memories. The delusion felt true. They didn’t like the popcorn because they’d seen a good ad. They liked the popcorn because it was delicious.
The scientists refer to this as the “false experience effect,” since the ads are slyly weaving fictional experiences into our very real lives. “Viewing the vivid advertisement created a false memory of eating the popcorn, despite the fact that eating the non-existent product would have been impossible,” write Priyali Rajagopal and Nicole Montgomery, the lead authors on the paper. “As a result, consumers need to be vigilant while processing high-imagery advertisements.”
At first glance, this experimental observation seems incongruous. How could a stupid commercial trick me into believing that I loved a product I’d never actually tasted? Or that I drank Coke out of glass bottles?
The answer returns us to a troubling recent theory known as memory reconsolidation. In essence, reconsolidation is rooted in the fact that every time we recall a memory we also remake it, subtly tweaking the neuronal details. Although we like to think of our memories as being immutable impressions, somehow separate from the act of remembering them, they aren’t. A memory is only as real as the last time you remembered it. What’s disturbing, of course, is that we can’t help but borrow many of our memories from elsewhere, so that the ad we watched on television becomes our own, part of that personal narrative we repeat and retell.
This idea, simple as it seems, requires us to completely re-imagine our assumptions about memory. It reveals memory as a ceaseless process, not a repository of inert information. The recall is altered in the absence of the original stimulus, becoming less about what we actually remember and more about what we’d like to remember. It’s the difference between a “Save” and the “Save As” function. Our memories are a “Save As”: They are files that get rewritten every time we remember them, which is why the more we remember something, the less accurate the memory becomes. And so that pretty picture of popcorn becomes a taste we definitely remember, and that alluring soda commercial becomes a scene from my own life. We steal our stories from everywhere. Marketers, it turns out, are just really good at giving us stories we want to steal.
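The “Save As” metaphor can be put into a few lines of toy code. This is purely an illustration: the `recall` function and its blend factor are invented for the sketch, not drawn from the reconsolidation research.

```python
def recall(memory, context, blend=0.1):
    """Return the remembered scene, rewriting the stored copy as a side effect.

    Each recall overwrites the memory with a mix of the old trace and the
    current context: the "Save As" semantics described above. Toy model;
    the blend factor is an arbitrary illustration, not a measured quantity.
    """
    rewritten = {
        feature: (1 - blend) * value + blend * context.get(feature, value)
        for feature, value in memory.items()
    }
    memory.update(rewritten)  # the act of remembering alters the record
    return memory

# A childhood scene, encoded as feature strengths between 0 and 1.
scene = {"glass_bottle": 0.2, "football_game": 0.9}
# A vivid ad supplies the context each time the scene is revisited.
ad = {"glass_bottle": 1.0, "football_game": 0.9}

for _ in range(20):  # twenty recollections later...
    recall(scene, ad)

print(round(scene["glass_bottle"], 2))  # has drifted toward the ad's version
```

Nothing is ever read out untouched: each retrieval saves back a copy biased toward the present context, so after enough retellings the ad’s glass bottle has quietly become part of the scene.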
So, Philip K. Dick was right. Again.
Dan McLaughlin was a complete novice at golf when he conceived a plan to dedicate 10,000 hours to the sport, hoping to become a professional golfer.
There might be a bell curve in natural ability, but does that mean we are condemned to a bell curve in the results of training? Perhaps not, as the JUMP Math approach to teaching shows:
Children come into school with differences in background knowledge, confidence, ability to stay on task and, in the case of math, quickness. In school, those advantages can get multiplied rather than evened out. One reason, says Mighton, is that teaching methods are not aligned with what cognitive science tells us about the brain and how learning happens.
In particular, math teachers often fail to make sufficient allowances for the limitations of working memory and the fact that we all need extensive practice to gain mastery in just about anything. Children who struggle in math usually have difficulty remembering math facts, handling word problems and doing multi-step arithmetic. Despite the widespread support for “problem-based” or “discovery-based” learning, studies indicate that current teaching approaches underestimate the amount of explicit guidance, “scaffolding” and practice children need to consolidate new concepts. Asking children to make their own discoveries before they solidify the basics is like asking them to compose songs on guitar before they can form a C chord.
Teaching is another area that cognitive science hasn’t really reached. Most of what educators do is based on folklore, and most of the premises underlying education are likely flawed, or totally false.
The brain processes musical nuance in many ways, it turns out. Edward W. Large, a music scientist at Florida Atlantic University, scanned the brains of people with and without experience playing music as they listened to two versions of a Chopin étude: one recorded by a pianist, the other stripped down to a literal version of what Chopin wrote, without human-induced variations in timing and dynamics.
During the original performance, brain areas linked to emotion activated much more than with the uninflected version, showing bursts of activity with each deviation in timing or volume.
So did the mirror neuron system, a set of brain regions previously shown to become engaged when a person watches someone doing an activity the observer knows how to do — dancers watching videos of dance, for example. But in Dr. Large’s study, mirror neuron regions flashed even in nonmusicians.
Maybe those regions, which include some language areas, are “tapping into empathy,” he said, “as though you’re feeling an emotion that is being conveyed by a performer on stage,” and the brain is mirroring those emotions.
Music is a medium for feelings, literally: the artist can actually make us feel what they are feeling.
The brain is a remarkably capable chronometer for most purposes. It can track seconds, minutes, days, and weeks, set off alarms in the morning, at bedtime, on birthdays and anniversaries. Timing is so essential to our survival that it may be the most finely tuned of our senses. In lab tests, people can distinguish between sounds as little as five milliseconds apart, and our involuntary timing is even quicker. If you’re hiking through a jungle and a tiger growls in the underbrush, your brain will instantly home in on the sound by comparing when it reached each of your ears, and triangulating between the three points. The difference can be as little as nine-millionths of a second.
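The magnitudes in that passage are easy to check with back-of-the-envelope arithmetic. In the sketch below, the head width and speed of sound are round illustrative numbers of my own, not figures from the article.

```python
# Back-of-the-envelope check on the auditory timing figures above.
# The ear spacing and speed of sound are round illustrative numbers.
speed_of_sound = 343.0  # m/s in air at room temperature
ear_spacing = 0.21      # m, roughly the width of an adult head

# Maximum interaural time difference: sound arriving from directly off
# to one side travels the full head width further to reach the far ear.
max_itd = ear_spacing / speed_of_sound  # seconds
print(f"{max_itd * 1e6:.0f} microseconds")  # a few hundred microseconds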
Yet “brain time,” as Eagleman calls it, is intrinsically subjective. “Try this exercise,” he suggests in a recent essay. “Put this book down and go look in a mirror. Now move your eyes back and forth, so that you’re looking at your left eye, then at your right eye, then at your left eye again. When your eyes shift from one position to the other, they take time to move and land on the other location. But here’s the kicker: you never see your eyes move.” There’s no evidence of any gaps in your perception—no darkened stretches like bits of blank film—yet much of what you see has been edited out. Your brain has taken a complicated scene of eyes darting back and forth and recut it as a simple one: your eyes stare straight ahead. Where did the missing moments go?
The question raises a fundamental issue of consciousness: how much of what we perceive exists outside of us and how much is a product of our minds? Time is a dimension like any other, fixed and defined down to its tiniest increments: millennia to microseconds, aeons to quartz oscillations. Yet the data rarely matches our reality. The rapid eye movements in the mirror, known as saccades, aren’t the only things that get edited out. The jittery camera shake of everyday vision is similarly smoothed over, and our memories are often radically revised. What else are we missing? When Eagleman was a boy, his favorite joke had a turtle walking into a sheriff’s office. “I’ve just been attacked by three snails!” he shouts. “Tell me what happened,” the sheriff replies. The turtle shakes his head: “I don’t know, it all happened so fast.”
Just how many clocks we contain still isn’t clear. The most recent neuroscience papers make the brain sound like a Victorian attic, full of odd, vaguely labelled objects ticking away in every corner. The circadian clock, which tracks the cycle of day and night, lurks in the suprachiasmatic nucleus, in the hypothalamus. The cerebellum, which governs muscle movements, may control timing on the order of a few seconds or minutes. The basal ganglia and various parts of the cortex have all been nominated as timekeepers, though there’s some disagreement on the details. The standard model, proposed by the late Columbia psychologist John Gibbon in the nineteen-seventies, holds that the brain has “pacemaker” neurons that release steady pulses of neurotransmitters. More recently, at Duke, the neuroscientist Warren Meck has suggested that timing is governed by groups of neurons that oscillate at different frequencies. At U.C.L.A., Dean Buonomano believes that areas throughout the brain function as clocks, their tissue ticking with neural networks that change in predictable patterns. “Imagine a skyscraper at night,” he told me. “Some people on the top floor work till midnight, while some on the lower floors may go to bed early. If you studied the patterns long enough, you could tell the time just by looking at which lights are on.”
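Buonomano’s skyscraper analogy can be sketched as a toy model: a few populations oscillating at different rates jointly tag each moment, even though no single one of them is a clock. The frequencies below are arbitrary illustrations, not physiological values.

```python
import math

# Toy version of Buonomano's skyscraper: a few neural populations
# "oscillate" at different rates, and the combined on/off pattern of
# their lights tags each moment within a cycle. The frequencies are
# arbitrary illustrations, not physiological values.
frequencies = [1.0, 1.3, 1.7]  # Hz

def lights(t):
    """Which populations are 'on' (positive half of their cycle) at time t."""
    return tuple(math.sin(2 * math.pi * f * t) > 0 for f in frequencies)

# Sample one second at 10 ms resolution and collect the distinct patterns.
patterns = {lights(t / 100) for t in range(100)}
print(len(patterns))  # several distinct patterns within a single second
```

Even with three oscillators the joint pattern changes several times per second, so an observer who knew the rates could read elapsed time from the pattern of lights alone; more oscillators make the pattern repeat far less often.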
Time isn’t like the other senses, Eagleman says. Sight, smell, touch, taste, and hearing are relatively easy to isolate in the brain. They have discrete functions that rarely overlap: it’s hard to describe the taste of a sound, the color of a smell, or the scent of a feeling. (Unless, of course, you have synesthesia—another of Eagleman’s obsessions.) But a sense of time is threaded through everything we perceive. It’s there in the length of a song, the persistence of a scent, the flash of a light bulb. “There’s always an impulse toward phrenology in neuroscience—toward saying, ‘Here is the spot where it’s happening,’ ” Eagleman told me. “But the interesting thing about time is that there is no spot. It’s a distributed property. It’s metasensory; it rides on top of all the others.”[…]
“Time is this rubbery thing,” Eagleman said. “It stretches out when you really turn your brain resources on, and when you say, ‘Oh, I got this, everything is as expected,’ it shrinks up.” The best example of this is the so-called oddball effect—an optical illusion that Eagleman had shown me in his lab. It consisted of a series of simple images flashing on a computer screen. Most of the time, the same picture was repeated again and again: a plain brown shoe. But every so often a flower would appear instead. To my mind, the change was a matter of timing as well as of content: the flower would stay onscreen much longer than the shoe. But Eagleman insisted that all the pictures appeared for the same length of time. The only difference was the degree of attention that I paid to them. The shoe, by its third or fourth appearance, barely made an impression. The flower, more rare, lingered and blossomed, like those childhood summers.[…]
"We’re stuck in time like fish in water,” Eagleman said, oblivious of its currents until a bubble floats by. It’s usually best that way. He had spent the past ten years peering at the world through such gaps in our perception, he said. “But sometimes you get so far down deep into reality that you want to pull back. Sometimes, in a great while, I’ll think, What if I find out that this is all an illusion?” He felt this most keenly with his schizophrenic subjects, who tended to do poorly on timing tests. The voices in their heads, he suspected, were no different from anyone else’s internal monologues; their brains just processed them a little out of sequence, so that the thoughts seemed to belong to someone else. “All it takes is this tiny tweak in the brain, this tiny change in perception,” he said, “and what you see as real isn’t real to anyone else.”
I am looking forward to reading David Eagleman’s Brain Time, which is available online at Edge.