Farhad Manjoo has had a personal epiphany. He wants to get more ‘done’, so he has turned off his second monitor, which, he says, allows him to be more focused.
Farhad Manjoo, Discovering Two Screens Aren’t Better Than One
The conventional argument in favor of dual monitors rests on what might be called the two-window problem. Imagine, for instance, the process of writing a research report. You have a word processor open in one window, and, somewhere else on the screen, a web browser full of tabs pointing to research papers. To write the report, you need to shift your attention frequently from the browser to the word processor and back again. On a small display, it would be difficult to keep both windows open at the same time, so you’d waste time switching from one to the other. On a large multiscreen display, you can keep both windows open on your screen — and you save all that switching time.
The research supports this. One study commissioned by NEC and conducted by researchers at the University of Utah showed that people using a dual-display machine to do a text-editing task were 44 percent more productive than those who used a single monitor.
But for most people, the time spent juggling two windows or scrolling across large documents isn’t the biggest bottleneck in getting work done. Instead, there’s a more basic, pernicious reason you feel constantly behind — you’re getting distracted.
Ms. Mark’s research, based on observations and digital tracking of office workers, has found that workplaces are rife with distractions. Studies show that office workers are interrupted every four to 11 minutes by external distractions, including phone calls, email, and people who stop by their desks to chat about the weekend.
Then there are self-motivated distractions, when, for no apparent reason, you quit working on your project and do something else — for instance, jump into the rabbit hole of the web.
All such disruptions are costly. It can take workers as much as 25 minutes to regain focus after being interrupted. And constant interruptions create a stressful workplace.
“The second screen can also be an inviting entry-way for self-distraction,” Ms. Mark said. That’s because it’s an ever-present, available canvas calling out for you to fire up a web window and find solace in the latest thrills on YouTube.
This is an apparently unassailable argument, so long as you accept the premises and limit the discussion to a very narrow range of issues. The first narrowing premise is that our work consists principally of taking well-defined chunks of ‘work’ out of an implicit queue, performing the physical or mental labor necessary to get each chunk ‘done’, and then moving on to the next item in the queue. Sometimes the work adds new elements to the queue, or involves interacting with others, which can lead to items being added, modified, or deleted. And the implicit highest good is to accomplish these tasks as quickly as possible.
However, there is a wider range of issues involved, even if we don’t wander off into an aesthetic argument about how we are supposed to spend our time here on Earth. For example, it’s clear that at least some sorts of work require creative insights, and that these are not always accomplished through routine techniques.
There is also considerable evidence that when we are wrestling with decisions that involve many factors, simply concentrating on the problem analytically in the foreground of our thinking — focusing — does not necessarily lead to the best outcomes. I wrote about this in How to improve decision-making? Distraction., where I cited David Creswell’s work:
New science shows that while we are distracted from making a final decision on some issue — even though we are unaware of it — the part of the brain associated with decision-making is still mulling the problem over, as David Creswell of Carnegie Mellon discovered with his colleagues (see Being distracted — multitasking — can lead to better decisions).
Creswell thinks this is because your brain can process more factors when your conscious self isn’t involved:
Your conscious mind has a capacity constraint–it can only think about a couple of features at once. But your unconscious mind doesn’t have these capacity constraints. It can weigh all relevant information more effectively.
This suggests that we have untapped reservoirs of parallelism in our thinking, greater than generally recognized. Just as importantly, it is a critical argument in favor of multitasking, whose detractors are too quick to attack a hypothetical decrease in productivity based on artificial cases of switching from math problems to the recollection of words, while paying no attention to factors like the quality of the decisions being made, or networked productivity.
The second point I made there — about networked productivity — represents another issue that Manjoo and others leave out of the picture of productivity. Yes, I may be distracted by a coworker breaking in on my concentration on a specific piece of work, and it may in fact take me 25 minutes to get back on track. But by answering a question, my timely recommendation might save that person, and a group of four others, hours of effort, and that gain falls out of the equation of personal productivity.
There is an astonishing eagerness to overemphasize mechanistic and narrow-bore ideals of productivity, while leaving aside the more general and larger questions. We see a great deal of attention paid to how we can do more, rather than how we can do better.
Quiggin points out that when the Australian Productivity Commission states that “productivity needs to pick up” that’s just code for driving employees to work harder for no extra pay. But that’s the subtext of nearly all mealy-mouthed discussions of increased productivity in the workplace.
It reminds me of the scene in Ben-Hur when the Roman consul Quintus Arrius, played by Jack Hawkins, says to the galley slaves chained to the oars of his trireme,
Now listen to me, all of you. You are all condemned men. We keep you alive to serve this ship. So row well, and live.
This postnormal world we live in now is characterized by a number of principal differences from the earlier industrial eras, the modern and postmodern. In those days, increasing productivity — by the application of steam power, the harnessing of electricity and motors, and the rise of better ways to apply humans in work, like the assembly line and business processes — led to the transition away from manual to increasingly knowledge-based work. And as people were freed from being sources of brute power they could be equipped with new skills or learn a new trade, and get a new and perhaps better-paying job.
But today, things are different. People are being pushed out of middle-class jobs, but companies aren’t hiring people with postmodern skills, aside from a small number of postnormal jobs for programmers and big-data quants. The jobs opening up are low-paying service jobs, for baristas and home health-care aides.
Erik Brynjolfsson and Andrew McAfee have a terrifying chart, one that shows the growing gap between US productivity and employment:
Note that this gap started around 2000, which is a good milestone for the start of the postnormal era.
Asked about this chart by David Rotman (in How Technology Is Destroying Jobs), Brynjolfsson says,
We were lucky and steadily rising productivity raised all boats for much of the 20th century. Many people, especially economists, jumped to the conclusion that was just the way the world worked. I used to say that if we took care of productivity, everything else would take care of itself; it was the single most important economic statistic. But that’s no longer true.
It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.
The core problem is that we are reaching the point of diminishing returns on postmodern Western neoliberal capitalism, which has — since conquering communism — operated as a kleptocracy, and completely avoided dealing with the earth as a shared commons. The growing inequality is inherent in that growing gap, where US productivity is not translating into general prosperity, and the margin is also not being applied to creating a more sustainable and just society.
For the individual, we are living in an enormously precarious time. More Americans are poor, members of many formerly middle class families are falling into lower class jobs with little prospect of regaining their postmodern standard of living, and even those fortunate enough to have the postnormal skills needed at present could find themselves sliding down the firepole once IBM’s Watson figures out how to program iPhone apps or write movie scripts.
There is one hope. The postnormal is a period of social re-equilibration instigated by the chaotic risks posed by the postmodern. But instead of being whipsawed by oscillating risks of our incestuously complex economy — like the boom/bust financial system, and the latent danger in international economic competition — we could decrease risks by moving onto a new set of operating principles. In particular, retooling civilization to counter the rape of the earth and the looming threat of ecological collapse. This will require us to devise a sustainable economy, one that is not based on debt and growth. An enormous challenge, and one that could gainfully re-employ all the 200 million people currently out of work, worldwide.
I fear that we will have to hit some sort of bottom before that transformation happens.
Susan Dominus, Is Giving the Secret to Getting Ahead?
A deep piece on the research and personality traits of Adam Grant, Wharton Business School professor and author of the soon-to-be-released Give and Take: A Revolutionary Approach to Success, in which he argues that a sense of service to others — an almost obsessive focus on the contribution of our work to other people’s lives — may be the single greatest key to productivity, far greater than trying only to help ourselves.
Read the case study that started his career: as a sales lead at an academic fundraising call center, he brought in a student who had benefited from that fundraising and let him tell the callers, directly, how it had changed his life. The result was enormous gains in their productivity, gains that could not be explained by other factors, even when the callers themselves were unaware of that motivation, or actively pooh-poohed it.
Seth Godin, Hooked on hacking life
The point of getting organized is to establish a platform for action, not to become your primary obsession.
Except for those whose work is analyzing the tools, of course. We get an out.
To-do lists don’t make you productive; they just lower the investment needed.
Miguel Mestre, My Desk
“My Desk’s concept is freedom. Freedom from the boundaries of your notebook pages and post-its. My Desk gives a 100x70cm blank paper that serves literally as base for your work and helps your mind flow. Sketch, draw, take annotations or simply scribble.”
Nick Bilton takes a fairly uncritical look at a new startup, tenXer, which asserts that developer productivity gains can come from monitoring lines of code produced, or other development tasks completed:
Former Card Counter’s New Start-Up Helps Measure Productivity - Nick Bilton via NYTimes.com
Once authorized by an employee, tenXer monitors the worker’s Gmail, Calendar, GitHub (an online service for software developers) and other programming services to determine how much work the employee produces. The idea isn’t to play Big Brother with employees, but to measure the work they create and then reward them with positive feedback when tasks are completed — just as in a game.
“The feedback loop at work is inherently broken. People want to get better at their jobs but have no idea how to do this,” explained Mr. Ma. “There needs to be an instantaneous, objective, actionable feedback, which is what we’ve done with tenXer.”
I agree with these assertions in part: people feel happier and sense time passing more quickly when they explicitly share progress against a task list. Roger Meade showed this in the ’70s, and it’s a strong cognitive motivation for the adoption of work media tools.
However, tracking productivity as a function of lines of code produced is a snare: sometimes the highest sort of programming productivity comes from taking code out of software. And I am skeptical of considering this as somehow related to big data. In fact, this is more a case of social data, or social metrics: exposing data relevant to social interaction around work.
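The snare is easy to make concrete. Here is a minimal sketch (hypothetical commit numbers and metric names, not tenXer's actual scoring) of a lines-of-code score of the sort the article warns against: a week spent simplifying a codebase reads as negative productivity.

```python
# Illustrative only: a naive "lines of code produced" metric.
# Commit stats are hypothetical (added, deleted) line counts, in the
# style of the per-commit numbers `git log --numstat` reports.

def naive_loc_score(commits):
    """Sum of lines added: rewards volume, ignores deletions."""
    return sum(added for added, _deleted in commits)

def net_loc_score(commits):
    """Lines added minus lines deleted: a good refactor scores negative."""
    return sum(added - deleted for added, deleted in commits)

# A week of feature work vs. a week spent removing cruft.
feature_week = [(120, 10), (200, 5), (80, 0)]     # mostly new code
refactor_week = [(40, 300), (15, 220), (30, 90)]  # mostly deleted code

print(naive_loc_score(feature_week))   # 400
print(naive_loc_score(refactor_week))  # 85
print(net_loc_score(refactor_week))    # -525
```

Under either scoring rule, the refactoring week looks like a bad week, even though shrinking the codebase may have been the most valuable work done.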
Nonetheless, tenXer seems a natural fit for the developer community, which is totally wired, uses solutions like GitHub to manage code, and is likely to buy into the somewhat Taylorist premise that underlies tenXer’s positioning.
At the same time, it seems like a set of features that should be implemented in a version of Yammer (or Podio, etc.) instrumented for developers, rather than as a standalone solution.
Ray Udeshi cited by John Tierney in From Cubicles, Cry for Quiet Pierces Office Buzz via NYTimes.com, discussing how office workers deal with the increasing noise in open space offices.