A great number of societal shifts take place in a three-step fashion: slow, slow, fast.
The underlying cause is the way innovations spread. A new innovation arrives on the scene and is embraced by a small group of innovators, but other, more conservative people profess no interest in the innovation, and stick with the technology or practice that is potentially being disrupted.
After a while, early adopters see the utility of the innovation and adopt it, but the majority remain unconvinced, although they are now more likely to come into contact with those who have come aboard.
Then, after a while, the innovation ‘crosses the chasm’ (to use the phrase that Geoffrey Moore made famous, popularizing the work of Everett Rogers, who wrote Diffusion of Innovations), and then things move fast.
We are at that stage now for cloud computing, and that’s why the ground is suddenly changing under the feet of the mainstream IT solutions and computing companies.
IBM has abandoned its long-professed goal of delivering $20 a share in profits by 2015, due to the rapid shifts in its clients’ buying behavior. Unlike companies such as HP that are responding to these challenges by breaking themselves into two or more independent companies, IBM is shedding businesses where margins are falling as the gravity fields are being realigned. The newest example is the company’s PowerPC chip business, spun out with a sweetener of $1.5 billion. The company’s quarterly numbers were bad: profits 14% lower than expected.
SAP has revised its projections for the year downward, citing the transition to cloud solutions. The firm says it will catch up with lower-cost cloud competitors in the ‘long run’, but that might just be wishful thinking.
What is coming? As we slide into the third part of slow, slow, fast, things will move more quickly than the decision loops at these older, larger, and slower companies. They will be forced to sell off, spin out, or break apart in order to become quick enough to stay ahead of the event horizon.
This disruption is not transitional, but foundational, and will extend to the bedrock of the IT world, and as a result not a single large, established, IT behemoth will remain untouched: they will all have to remake themselves — rework their DNA — or lose everything.
Americans have a well-known obsession with productivity. In recent research I am involved in, we found that personal productivity is the highest aspiration in the use of work technologies. But this focus may not serve us well in other aspects of life and work, like well-being and creativity.
I’ve been reviewing Andrew Smart’s Autopilot: The Art & Science of Doing Nothing, in which he makes a strong case for spending more time idle. As he puts it,
Psychological research has shown that humans, especially American humans, tend to dread idleness. However, this research also shows that if people do not have a justification for being busy, on average they would rather be idle. Our contradictory fear of being idle, together with our preference for sloth, may be a vestige from our evolutionary history. For most of our evolution, conserving energy was our number one priority because simply getting enough to eat was a monumental physical challenge. Today, survival does not require much (if any) physical exertion, so we have invented all kinds of futile busyness. Given the slightest or even a specious reason to do something, people will become busy. People with too much time on their hands tend to become unhappy or bored. Yet as we will see in this book, being idle may be the only real path toward self-knowledge. What comes into your consciousness when you are idle can often be reports from the depths of your unconscious self — and this information may not always be pleasant. Nonetheless, your brain is likely bringing it to your attention for a good reason. Through idleness, great ideas buried in your unconsciousness have the chance to enter your awareness.
I know that my most creative moments generally come in a state of half-sleep, usually in the early morning or following an afternoon nap. And there is significant evidence that intentionally deciding to delay making a decision, and letting it simmer on the back burner while we do other things — like sleep, or other work — can lead to better decisions (see Being distracted — multitasking — can lead to better decisions).
Smart cites the work of Marcus Raichle, who discovered the ‘resting-state network’ (RSN) of the human brain in 2001. This is the network that is active when we aren’t focused on anything in particular:
Raichle noticed that when his subjects were lying in an MRI scanner and doing the demanding cognitive tasks of his experiments, there were brain areas whose activity actually decreased. This was surprising, because it was previously suspected that during cognitive tasks brain activity should only increase, relative to another task or to a “flat baseline.” This led Raichle to study what the brain was doing in between his experimental tasks. What he discovered was a specific network that increased activity when subjects seemed to disengage from the outside world. When you have to perform some tedious task in an fMRI (functional magnetic resonance imaging) experiment such as to memorize a list of words, certain areas of your brain become more active and other areas become less active. This does not seem peculiar. However, if you are just lying in the scanner with your eyes closed or staring up at the screen, brain activity does not decrease. The area of activity merely switches places. The area that deactivates during tasks becomes more active during rest. This is the resting-state network.
Most parts of the brain are dedicated to certain sorts of cognition, and are excited by singing, or reading, or doing math. But it appears that the ‘aha!’ moments of insight and creativity occur most frequently when the RSN is active: that is, when we aren’t trying to be creative, but we are letting our minds wander where the autopilot wants to take us.
Note the brain isn’t shutting down in RSN time: it is active. Again, Smart says,
Rather, the brain is perpetually and spontaneously active. It is maintaining, interpreting, responding, and predicting.
The brain uses more energy when we are on autopilot than when we are doing math, for example.
What is increasingly clear is that our intuitive — or culturally imposed — understanding of the human mind is woefully wrong. We would naturally imagine that concentrating hard on a math problem should take more energy, but it turns out that spacing out is more of an energy hog.
Looked at from a physics perspective, more energy should lead to more of something else, and it seems that all that energy makes the brain temporarily more organized: different brain centers are communicating, exchanging information, and carrying on a dialogue about our self and the world.
I’ll leave out Smart’s description of the various centers and what they add to the soup that is being made as we daydream, but as Smart says,
In a nutshell, when you are being lazy, a huge and widespread network in your brain forms and starts sending information back and forth between these regions. The butterflies only come out to play when all is still and quiet. Any sudden movements and they will scatter.
And when we are working on a spreadsheet, checking our task list, or even sitting in a brainstorming meeting, that network is asleep.
Those with Alzheimer’s disease or schizophrenia appear to lack well-modulated autopilots, which may explain the nature of their impediments.
So it’s time to move past the stigma attached to laziness, to wool-gathering and staring out the window. To reclaim what makes us unique we need to recapture the daydreams of childhood, and dedicate time to actively turning our thoughts away from the affairs of the day.
This flies in the face of what has become the orthodoxy of busyness, but we need to accept the heterodox paradox: deep productivity these days rests squarely on creativity, not brute focus. So take that walk, take that nap, turn on the autopilot.
This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet. I’ve been compensated to contribute to this program, but the opinions expressed in this post are my own and don’t necessarily represent IBM’s positions, strategies or opinions.
“There’s an interesting thing about ancient China, because if you read through the history, almost every single major invention of the world was invented in China first, and sometimes it took hundreds of years for each either to make its way to Western Europe or to be reinvented in Western Europe. That includes paper, printing, steel, gunpowder, the compass, rudder, suspension bridges, etc. It’s almost everything, and for a long time China led the world in civilization because it was able to make these things long before anyone else. But there was one invention that China did not invent, and it would turn out to be the most important invention, and that was the invention of the scientific method.
There’s still a question about why China didn’t invent that, which was invented in the West. Because of that one invention, the West suddenly had a method for inventing new things and finding new things that was so superior that it just blew past all the great inventions of China and invented so many more things because of the power of this one invention. And that invention—the scientific method—is not a single thing. It’s actually a process with many ingredients, and the scientific method itself has actually been changing. In the very beginning it was very simple, a couple of processes like a controlled experiment, having a control, being able to repeat things, having to have a proof. We tend to think of the scientific method as sort of a whole—as fixed in time with a certain character. But lots of things that we assume or we now associate with the scientific method were only invented recently, some of them only as recently as 50 years ago—things like a double-blind experiment or the invention of the placebo or random sampling were all incredibly recent additions to the scientific method. Fifty years from now the scientific method will have changed more than it has in the past 400 years, just as everything else has.
So the scientific method is still changing over time. It’s an invention that we’re still evolving and refining. It’s a technology. It’s a process technology, but it’s probably the most important process and technology that we have, and it is still undergoing evolution, refinement, and advancement as we add new things to this invention. We’re adding things like a triple-blind experiment or multiple authors or quantified self, where you have an experiment of N equals one. We’re doing things like saving negative results and transmitting those. There are many, many things happening with the scientific method itself—as a technology—that we’re also improving over time, and that will affect all the other technologies that we make.”—
“Implants and wearables will replace tools we carry or purchase. Technology will be biological in the sense that those who can afford it will ‘receive’ it as children. It will be part of our body and our minds will not function well without it. We will be dependent on it. There will probably be new forms of addiction and theft. It will also redefine what a ‘thought’ is, as we won’t ‘think’ unassisted.”—
“I’m definitely not down on utopian narratives in general — in fact, I think they’re a vital tool for thinking about the future, so long as they’re always informed by a sense of their essential impossibility. Or, to put it another way: utopias are terrible as blueprints for a better world, but brilliant as sandboxes in which to play with ideas for a better world.”—
“The worst work I did was from 2001 to 2004. And the company paid a price for bad work. I put the A-team resources on Longhorn, not on phones or browsers. All our resources were tied up on the wrong thing.”—
It’s a telling quote. A big part of Microsoft’s current predicament isn’t that they lacked the talent to do what their rivals did — it’s that the talent was directed to focus on the wrong things (or just as bad: the right things at the wrong time).
Mayor Randy Casale Open Office Hours 10am 14 October
Tuesday, October 14th at 10:00am in the Beacon sukkah: Mayor Randy Casale holds Open to the Sky: Open Office Hours.
Tell, share or ask anything - Mayor Casale will be open to discussing Beacon issues, history and ideas for the future. A lifelong Beaconite, Mayor Casale takes a long view, past and future. Come discuss at the sukkah, sponsored by Beacon Hebrew Alliance and Beacon Arts. Meet us in Polhill Park, next to the Visitor Center, across Wolcott/9D from City Hall and across South Avenue.
“The Nelson Mandela rule: You can get what you want by showing people ordinary respect. When Mr. Mandela heard that an Afrikaner general was arming rebels to prevent multiracial elections, he invited the general over for tea. The journalist John Carlin writes that Gen. Constand Viljoen “was dumbstruck by Mandela’s big, warm smile, by his courteous attentiveness to detail” and by his sensitivity to the fears of white South Africans. The general abandoned violence.”—
Since at least the 14th century, when the bubonic plague devastated Europe, posting medical officers at a port of entry has been one of the main tools used to try to halt the spread of disease.
An outbreak of yellow fever in 1878 led the United States Congress to grant the federal government the authority to order a quarantine to prevent its spread.
Those powers were enhanced in 1892 to try to prevent another scourge, cholera.
For several decades, starting in the 1970s, the quarantine program in the United States was neglected until another threat, severe acute respiratory syndrome, or SARS, prompted Congress and the C.D.C. to bolster the program.
Ebola cannot be transmitted through the air, but rather only through bodily fluids; people are contagious only when they are symptomatic. There is no vaccine.
Thomas E. Duncan, who traveled to Dallas from Liberia, had no symptoms when screened before boarding his flight. He developed symptoms only a few days later, and subsequently died.
Perhaps we should reinstitute quarantine for travelers coming from countries where there is an outbreak? The incubation time is 2 to 21 days. If we took all the travelers from the countries suffering from the outbreak, and housed them in a quarantined camp for 21 days — segregating them by the day of arrival, and monitoring for any symptoms — then it would be much harder for the disease to gain a foothold in the US.
While this would involve much more of an investment in money, time and materiel, we would be much safer for it. The only safer alternative would be not allowing people to fly to the US from those countries, at all.
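How big would such a camp get? A rough back-of-envelope sketch, using a hypothetical arrival rate (not an official figure):

```python
# Back-of-envelope sizing for the cohort-quarantine idea above.
# ARRIVALS_PER_DAY is a hypothetical figure, chosen only for illustration.
ARRIVALS_PER_DAY = 150   # assumed daily arrivals from affected countries
HOLD_DAYS = 21           # upper bound of the 2-21 day incubation window

# At steady state the camp holds one day-of-arrival cohort for each day
# of the hold period, so the population is simply the product.
steady_state_population = ARRIVALS_PER_DAY * HOLD_DAYS
print(f"Cohorts in residence: {HOLD_DAYS}")
print(f"Steady-state camp population: {steady_state_population:,}")
# -> 3,150 people housed at any given time, in 21 day-of-arrival cohorts
```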
“Our society constantly proclaims that anyone can make it if they just try hard enough, all the while reinforcing privilege and putting increasing pressure on its overstretched and exhausted citizens. An increasing number of people fail, feeling humiliated, guilty and ashamed. We are forever told that we are freer to choose the course of our lives than ever before, but the freedom to choose outside the success narrative is limited. Furthermore, those who fail are deemed to be losers or scroungers, taking advantage of our social security system.
A neoliberal meritocracy would have us believe that success depends on individual effort and talents, meaning responsibility lies entirely with the individual and authorities should give people as much freedom as possible to achieve this goal. For those who believe in the fairytale of unrestricted choice, self-government and self-management are the pre-eminent political messages, especially if they appear to promise freedom. Along with the idea of the perfectible individual, the freedom we perceive ourselves as having in the west is the greatest untruth of this day and age.
The sociologist Zygmunt Bauman neatly summarised the paradox of our era as: “Never have we been so free. Never have we felt so powerless.” We are indeed freer than before, in the sense that we can criticise religion, take advantage of the new laissez-faire attitude to sex and support any political movement we like. We can do all these things because they no longer have any significance – freedom of this kind is prompted by indifference. Yet, on the other hand, our daily lives have become a constant battle against a bureaucracy that would make Kafka weak at the knees. There are regulations about everything, from the salt content of bread to urban poultry-keeping.
Our presumed freedom is tied to one central condition: we must be successful – that is, “make” something of ourselves. You don’t need to look far for examples. A highly skilled individual who puts parenting before their career comes in for criticism. A person with a good job who turns down a promotion to invest more time in other things is seen as crazy – unless those other things ensure success. A young woman who wants to become a primary school teacher is told by her parents that she should start off by getting a master’s degree in economics – a primary school teacher, whatever can she be thinking of?
There are constant laments about the so-called loss of norms and values in our culture. Yet our norms and values make up an integral and essential part of our identity. So they cannot be lost, only changed. And that is precisely what has happened: a changed economy reflects changed ethics and brings about changed identity. The current economic system is bringing out the worst in us.”—
“Counterculture giants of the time, like Stewart Brand, Buckminster Fuller and Ivan Illich, championed vernacular tools as a way to give people the personal autonomy and choices they craved. But the consumerist version of this vision ultimately prevailed, such that the decentralized empowerment that networked computers provided has been a mixed bag.”—Morozov on the Maker Movement | David Bollier (via johnborthwick)
“The idea that every portrait of a woman should be an ideal woman, meant to stand for all of womanhood, is an enemy of art — not to mention wickedly delicious Joan Crawford and Bette Davis movies. Art is meant to explore all the unattractive inner realities as well as to recommend glittering ideals. It is not meant to provide uplift or confirm people’s prior ideological assumptions. Art says “Think,” not “You’re right.””—
“I don’t know what happened to the Future. It’s as if we lost our ability, or our will, to envision anything beyond the next hundred years or so, as if we lacked the fundamental faith that there will in fact be any future at all beyond that not-too-distant date. Or maybe we stopped talking about the Future around the time that, with its microchips and its twenty-four-hour news cycles, it arrived.”—
Futurelessness is an attribute of the postnormal era. We are confronted with so much fog — from a cascade of ambiguities, the dissolution of institutions and the collapse of solidarity, and the growing complexities of an incestuously interconnected world — that we are blocked from envisioning some extrapolated arc of history over the event horizon. And there is so much appearing and smacking us in the face every day, it’s as if the present has been colonized by the future. As William S. Burroughs put it,
When you cut into the present the future leaks out.
One of those clever, potentially profound system-level apps that can unfortunately only work on Android for the time being. I personally use at least six different messaging clients (including unconventional ones like Twitter DM) throughout the day. It’s a chore to figure out who I’m talking to where. And it gets worse seemingly every day, with new apps constantly popping up.
That’s the first battle Snowball is choosing to fight. And why I’m pleased Google Ventures has invested in the team. Now to figure this out on iOS…
“People are beginning to understand the nature of their new technology, but not yet nearly enough of them — and not nearly well enough. Most people, as I indicated, still cling to what I call the rearview-mirror view of their world. By this I mean to say that because of the invisibility of any environment during the period of its innovation, man is only consciously aware of the environment that has preceded it; in other words, an environment becomes fully visible only when it has been superseded by a new environment; thus we are always one step behind in our view of the world. Because we are benumbed by any new technology — which in turn creates a totally new environment — we tend to make the old environment more visible; we do so by turning it into an art form and by attaching ourselves to the objects and atmosphere that characterized it, just as we’ve done with jazz, and as we’re now doing with the garbage of the mechanical environment via pop art.
The present is always invisible because it’s environmental and saturates the whole field of attention so overwhelmingly; thus everyone but the artist, the man of integral awareness, is alive in an earlier day. In the midst of the electronic age of software, of instant information movement, we still believe we’re living in the mechanical age of hardware. At the height of the mechanical age, man turned back to earlier centuries in search of “pastoral” values. The Renaissance and the Middle Ages were completely oriented toward Rome; Rome was oriented toward Greece, and the Greeks were oriented toward the pre-Homeric primitives. We reverse the old educational dictum of learning by proceeding from the familiar to the unfamiliar by going from the unfamiliar to the familiar, which is nothing more or less than the numbing mechanism that takes place whenever new media drastically extend our senses.”—
"Let’s see if we can use these ideas to understand some things about „big data.” The analysis of massive data sets began as a way to reduce uncertainty by discovering the probabilities of future patterns in the behavior of people and systems. Now the focus has quietly shifted to the commercial monetization of knowledge about current behavior as well as influencing and shaping emerging behavior for future revenue streams. The opportunity is to analyze, predict, and shape, while profiting from each point in the value chain.
"There are many sources from which these new flows are generated: sensors, sur-veillance cameras, phones, satellites, street view, corporate and government databases (from banks, credit card, credit rating, and telecom companies) are just a few.
"The most significant component is what some call “data exhaust.” This is user-generated data harvested from the haphazard ephemera of everyday life, especially the tiniest details of our online engagements— captured, datafied ( translated into machine-readable code), abstracted, aggregated, packaged, sold, and analyzed. This includes eve-rything from Facebook likes and Google searches to tweets, emails, texts, photos, songs, and videos, location and movement, purchases, every click, misspelled word, every page view, and more.
"The largest and most successful „big data“ company is Google, because it is the most visited website and therefore has the largest data exhaust. AdWords, Google’s algo-rithmic method for targeting online advertising, gets its edge from access to the most data exhaust. Google gives away products like “search” in order to increase the amount of data exhaust it has available to harvest for its customers— its advertisers and other data buyers. To quote a popular 2013 book on „big data“, “every action a user performs is considered a signal to be analyzed and fed back into the system.” Facebook,Linked In, Yahoo, Twitter, and thousands of companies and apps do something similar. On the strength of these capabilities, Google’s ad revenues were $21 billion in 2008 and climbed to over $50 billion in 2013. By February 2014, Google’s $400 billion dollar market value had edged out Exxon for the #2 spot in market capitalization.
"V. “BIG DATA” IS BIG CONTRABAND
"What can an understanding of declarations reveal about “big data?” I begin by suggesting that „big data“ is a big euphemism. As Orwell once observed, euphemisms are used in politics, war, and business “to make lies sound truthful and murder respectable”. Euphemisms like “enhanced interrogation methods” or “ethnic cleansing” distract us from the ugly truth behind the words.
"The ugly truth here is that much of „big data“ is plucked from our lives without our knowledge or informed consent. It is the fruit of a rich array of surveillance practices designed to be invisible and undetectable as we make our way across the virtual and real worlds. The pace of these developments is accelerating: drones, Google Glass, wearable technologies, the Internet of Everything (which is perhaps the biggest euphemism of all).
"These surveillance practices represent profound harms—material, psychological, social, and political— that we are only beginning to understand and codify, largely because of the secret nature of these operations and how long it’s taken for us to understand them. As the recent outcry over the British National Health Service’s plan to sell patient data to insurance companies underscored, one person’s „big data“ is another person’s stolen goods. The neutral technocratic euphemism, „big data“, can more accurately be labeled “big contraband” or “big pirate booty.” My interest here is less in the details of these surveillance operations than in how they have been allowed to stand and what can be done about it.
"VI. THE INTERNET COMPANIES DECLARE THE FUTURE
"The answer to how these practices have been allowed to stand is straightforward: Declaration. We never said they could take these things from us. They simply declared them to be theirs for the taking—- by taking them. All sorts of institutional facts were established with the words and deeds of this declaration.
"Users were constituted as an unpaid workforce, whether slaves or volunteers is something for reasonable people to debate. Our output was asserted as “exhaust” — waste without value—that it might be expropriated without resistance. A wasteland is easily claimed and colonized. Who would protest the transformation of rubbish into value? Because the new data assets were produced through surveillance, they constitute a new asset class that I call “surveillance assets.” Surveillance assets, as we’ve seen, attract significant capital and investment that I suggest we call “surveillance capital.” The declaration thus established a radically disembedded and extractive variant of information capitalism that can I label “surveillance capitalism.”
"This new market form entails wholly new moral and social complexities along with new risks. For example, if the declarations that established surveillance capitalism are challenged, we might discover that „big data“ are larded with illicit surveillance assets who’s ownership is subject to legal contest and liability. In an alternative social and legal regime, surveillance assets could become toxic assets strewn through the world’s data flows in much the same way that bad mortgage debt was baked into financial instruments that abruptly lost value when their status function was challenged by new facts.
"What’s key to understand here is that this logic of “accumulation by surveillance” is a wholly new breed. In the past, populations were the source of employees and consumers. Under surveillance capitalism, populations are not to be employed and served. Instead, they are to be harvested for behavioral data…."
“There’s a temptation within many newspapers to believe that the only problem the web has created is how to get all that excellent journalism to readers most efficiently, and to see the social web as merely a distribution mechanism or PR gesture. Engaging with readers is much more than that — it’s the key to developing a new kind of interactive, two-way journalism, and that journalism may ultimately be the only kind that survives.”—
Ello, like a luxury bike, isn’t antithetical to capitalism and all of its problems. But it’s a step in the right direction, not just by being politically better than Facebook, but also by being more useful and pleasurable than Diaspora. Ello’s core design team desperately needs some diversifying, and hopefully that and many other concerns of its users will be alleviated sooner rather than later. This new network certainly isn’t the answer to every problem we have with private social networks, but it responds to some of the worst problems we face today. Ello might be a walled garden, but it’s fertile ground for growing something even better.
saved.io is a small-and-simple new bookmarking tool
Anthony Feint, the guy behind pen.io, has also created saved.io, a minimal bookmarking tool.
I like the ability to tag bookmarks, and the inclusion of a note field.
The bookmarks can be created in several ways: 1/ by bookmarklet, 2/ by Chrome extension, or 3/ by prepending ‘saved.io/’ to the URL in your browser. Alternatively, you can prepend ‘xyz.saved.io/’ to add a bookmark to the ‘xyz’ category.
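A minimal sketch of the third method, assuming a made-up page URL and a hypothetical ‘research’ category:

```python
# Illustration of the URL-prepending scheme described above.
# The page URL and the 'research' category are made-up examples.
page = "http://example.com/some-article"

quick_save = "saved.io/" + page              # saves to the default list
categorized = "research.saved.io/" + page    # saves to the 'research' category

print(quick_save)    # -> saved.io/http://example.com/some-article
print(categorized)   # -> research.saved.io/http://example.com/some-article
```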
It’s too bad that I can’t add tags to the URL in some way. I have to do that on the saved.io page.
HTML sort-of works in these notes, although there is a bug that leads to HTML quotes and double quotes being escaped. I hope he fixes that.
There’s no way to share these links, but in general I am saving these for my own research purposes, and when I get to the point where I want to share, I move to this blog or Gigaom Research anyway.
What’s the perfect length for a break? Seventeen minutes, according to an experiment released this week.
DeskTime, a productivity app that tracks employees’ computer use, peeked into its data to study the behavior of its most productive workers. The highest-performing 10 percent tended to work for 52 consecutive minutes followed by a 17-minute break. Those 17 minutes were often spent away from the computer, said Julia Gifford at The Muse, by taking a walk, doing exercises, or talking to coworkers.
Telling people to focus for 52 consecutive minutes and then to immediately abandon their desks for exactly 1,020 seconds might strike you as goofy advice. But this isn’t the first observational study to show that short breaks correlate with higher productivity. In 1999, Cornell University’s Ergonomics Research Laboratory used a computer program to remind workers to take short breaks. The project concluded that “workers receiving the alerts [reminding them to stop working] were 13 percent more accurate on average in their work than coworkers who were not reminded.”
It seems unlikely that there is one number representing the ideal amount of time for every employee in every industry to break from work. Rather than set your stopwatch for 17:00 when you get up from your desk, the more important reminder might be to get up at all. Indeed, the most productive employees don’t necessarily work the longest hours. Instead, they take the smartest approach to managing their energy to solve tasks in efficient and creative ways.
Just round it off to a 15-minute break after working 60.
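If you want a nudge on that cadence, here is a minimal sketch of a reminder loop. The durations come from the study above; the rest (the round count, the plain printed reminders) is my own assumption:

```python
import time

WORK_MINUTES = 52    # the study's observed focus stretch
BREAK_MINUTES = 17   # the 1,020-second break; round to 15 if you prefer

def work_break_cycle(rounds: int = 4) -> None:
    """Print simple work/break reminders on the 52/17 cadence."""
    for i in range(1, rounds + 1):
        print(f"Round {i}: focus for {WORK_MINUTES} minutes.")
        time.sleep(WORK_MINUTES * 60)
        print(f"Round {i}: step away from the desk for {BREAK_MINUTES} minutes.")
        time.sleep(BREAK_MINUTES * 60)

if __name__ == "__main__":
    work_break_cycle()
```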
Simultaneously using mobile phones, laptops and other media devices could be changing the structure of our brains, according to new University of Sussex research.
A study published today (24 September) in PLOS ONE reveals that people who frequently use several media devices at the same time have lower grey-matter density in one particular region of the brain compared to those who use just one device occasionally.
The research supports earlier studies showing connections between high media-multitasking activity and poor attention in the face of distractions, along with emotional problems such as depression and anxiety.
But neuroscientists Kep Kee Loh and Dr Ryota Kanai point out that their study reveals a link rather than causality and that a long-term study needs to be carried out to understand whether high concurrent media usage leads to changes in the brain structure, or whether those with less-dense grey matter are more attracted to media multitasking.
The researchers at the University of Sussex’s Sackler Centre for Consciousness Science used functional magnetic resonance imaging (fMRI) to look at the brain structures of 75 adults, who had all answered a questionnaire regarding their use and consumption of media devices, including mobile phones and computers, as well as television and print media.
They found that, independent of individual personality traits, people who used a higher number of media devices concurrently also had lower grey-matter density in the part of the brain known as the anterior cingulate cortex (ACC), the region notably responsible for cognitive and emotional control functions.
Kep Kee Loh says: “Media multitasking is becoming more prevalent in our lives today and there is increasing concern about its impacts on our cognition and social-emotional well-being. Our study was the first to reveal links between media multitasking and brain structure.”
Scientists have previously demonstrated that brain structure can be altered by prolonged exposure to novel environments and experience. Neural pathways and synapses can change based on our behaviours, environment, and emotions; the change can happen at the cellular level (as in learning and memory) or through cortical re-mapping, in which specific functions of a damaged brain region are re-mapped to a remaining intact region.
Other studies have shown that training (such as learning to juggle, or taxi drivers learning the map of London) can increase grey-matter densities in certain parts of the brain.
“The exact mechanisms of these changes are still unclear,” says Kep Kee Loh. “Although it is conceivable that individuals with small ACC are more susceptible to multitasking situations due to weaker ability in cognitive control or socio-emotional regulation, it is equally plausible that higher levels of exposure to multitasking situations leads to structural changes in the ACC. A longitudinal study is required to unambiguously determine the direction of causation.”
“The first mouse was invented in 1965, but it took until the mid-1990s for mice to be a standard computer feature. The first packet-switched network was invented in 1969, but the internet didn’t become mainstream until the late 1990s. Multitouch interfaces were first developed in the early 1980s, but didn’t become a mainstream technology until the iPhone in 2007. That suggests we shouldn’t underestimate the disruptive potential of technologies, like self-driving cars, personalized DNA testing, and Bitcoin, that seem exotic and impractical today.”—Newspapers weren’t late to online news — they were way too early - Vox (via infoneer-pulse)