Elsewhere

Work/Life Balance In The ‘Social Factory’

In a piece ostensibly about Marissa Mayer, her famous sleep habits, and her ‘having it all’ lifestyle of rich CEO with newborn baby, Sarah Leonard uncovers a dark truth about the technorati using social tools to ‘brand’ themselves:

Sarah Leonard, She Can’t Sleep No More

The practices in Silicon Valley power centers put the lie to any concept of work life “balance.” As theorist Kathi Weeks likes to say, this is a site of contradiction, not mere imbalance, the contradiction between production and reproduction that has long existed for women. How one combines the two is dictated to a great degree by the economy; you can bet that if it was popularly believed that the American economy was suffering due to a lack of female middle management, all efforts to relieve working women of home duties would be celebrated, rather than held up to “but is she a good mother?” scrutiny.

Silicon Valley adds another twist to this formula — many of the women rising to the top are doing so in an office culture that is relentlessly sexist, but also dedicated to building products that focus on the “social factory.” The term sounds coined for and by people seeking degrees in media theory, but it’s a useful descriptor for the work we do commodifying our social relationships: think Facebook profiting from our clicks and Twitter from our tweets. As Jacobin contributing editor Melissa Gira Grant points out in a forthcoming Dissent essay, Facebook was driven from the get-go by men’s relationships to women. It originated as Facemash, a sort of “hot or not” for Harvard women, in Mark Zuckerberg’s dorm room.

Employees at such social media companies now are required to maintain profiles themselves and operate as model users. Grant notes that Facebook hired a photographer to take their workers’ social media photographs, and employed photographers at all events so that the glamour could be shared in a brand-building exercise premised on the attractiveness of employees. The post-Fordist workplace makes more porous the barrier between personal and professional, and therefore the boundaries between work and home.

The second shift is now something of a permanent shift. Even after every job is done for the day, one updates Facebook, Tumblr, Twitter. Free time is enclosed for an uncompensated personal branding exercise important to a corporate world eager to use up workers’ personalities alongside their skill sets. Users may not perceive their experience this way, but social media companies profit directly from clicks, and the impetus such sites create to “keep up” is a form of subtly imposed labor. And it means that there is absolutely no time that cannot be dedicated to work. There is no work life balance because work makes its way into life and life is the raw material with which to brand oneself for work.

I often say that I have given up on balance: I’m going for depth instead. But it appears that most people are pulled the other way: they lose balance, stretched out across too many social connections and too many contending social contracts.

One of the characteristics of our time is a fragmenting of identity, what I have been calling ‘networked identity’ for some time. However, the psychologist Kenneth Gergen was one of the first to discuss these ideas, and he used the term multiphrenic identity:

Karin Wilkins, Moving Beyond Modernity: Media and Multiphrenic Identity among Hong Kong Youth

Gergen conceptualizes a new sense of self, contending that “the social saturation brought about by the technologies of the twentieth century, the accompanying immersion in multiple perspectives, have brought about a new consciousness: postmodernist”. Thus, Gergen believes that the proliferation of communication modes and of mediated products have contributed to what he terms the “multiphrenic self.”

Further, “cultures incorporate fragments of each other’s identities. That which was alien is now within”. In other words, the self may be interpreted not as a monolithic construction, but as a set of multiple socially constructed roles shaping and adapting to diverse contexts (cf. Weick). Rather than assume multiple identities pose a deviant condition, I prefer to assume their existence, moving toward an understanding of how these are constructed and supported within a media-saturated setting.

My sense is that the transition from the postmodernist era — post WWII until 2000 — into the postnormal is only accelerating this trend, and we are all becoming multiphrenic. We invest ourselves into relationships that are shaped by the affordances of the tools and the particular social contracts of the contexts. Through these relationships new and perhaps unexpected insights into others and ourselves arise. And we participate in dozens of these social environments, possibly with non-overlapping constituencies.

At some point for many, a complete blurring takes place, and there is no balance, no modulated transition from one situation to another. 

And our willingness to live this way means that we are offering up our selves, one fragment at a time, to different constituencies, like a product placement in a TV show.

“Real Names” Policies Are an Abuse of Power - danah boyd

http://www.zephoria.org/thoughts/archives/2011/08/04/real-names.html

Starting from her research into youth, people of color, abuse victims, LGBT folks, and other marginalized groups, danah makes a short and sweet refutation of the premises of normalcy and naturalness of the Google ‘Real Names’ policy. She ends up here:

There is no universal context, no matter how many times geeks want to tell you that you can be one person to everyone at every point. But just because people are doing what it takes to be appropriate in different contexts, to protect their safety, and to make certain that they are not judged out of context, doesn’t mean that everyone is a huckster. Rather, people are responsibly and reasonably responding to the structural conditions of these new media. And there’s nothing acceptable about those who are most privileged and powerful telling those who aren’t that it’s OK for their safety to be undermined. And you don’t guarantee safety by stopping people from using pseudonyms, but you do undermine people’s safety by doing so.

Thus, from my perspective, enforcing “real names” policies in online spaces is an abuse of power.

The Zuckerberg Fallacy is a travesty of dogmatic ideology, based on an Aspergerish premise of a single public identity to be mandated and used in all contexts.

Zuckerberg said “Having two identities for yourself is an example of a lack of integrity” in an interview with David Kirkpatrick, which directly attacks the motives of anyone advancing an opposite argument.

Facebook and now Google have adopted this model because they think of us as consumers, not people. They want to track our doings, for their own ends.

But in a fragmented online world, our identity is becoming a network of context-dependent identities, or multiphrenic identity as Kenneth Gergen styled it, and as I explored:

Stowe Boyd, Multiphrenic Identity

We invest ourselves into relationships that are shaped by the affordances of the tools and the particular social contracts of the contexts. Through these relationships new and perhaps unexpected insights into others and ourselves arise. And we participate in dozens of these social environments, possibly with non-overlapping constituencies, each focused on different aspects of the greater world: entertainment, food, news, social causes, health, religion, sex, you name it. We become adept at shifting registers, just like polyglots shift from Italian to Corsican to Catalan without even thinking about it. We are multiphrenic.

It’s an interesting paradox — and one that might spell the limits of Google+’s success — that Google has built the Circles capability so that people can break up their monolithic social world into separate scenes. But Google won’t let you be Carlos in one, and Carlotta in another, even if that is how you are known in those possibly non-overlapping groups.

I am known as an advocate for publicy: living out loud online. But nearly every time I discuss living openly I make the case for privacy and secrecy, which are essential elements of life for all of us.

A social tool that prohibits fundamental and non-harmful human behaviors is oppressive, and such oppression means that we are justified in breaking their ‘laws’ to the extent that we can.

Evan Williams | evhead: Five Easy Pieces of Online Identity

http://evhead.com/2011/04/five-easy-pieces-of-online-identity.html

Ev Williams tries to boil down identity to five parts:

  1. Authentication - Do you have permission?
  2. Representation - Who are you?
  3. Communication - How do I reach you?
  4. Personalization - What do you prefer?
  5. Reputation - How do others regard you?

This is a very tool-centric, or marketing-centric approach, and leaves out — or dismisses — all the messy and interesting philosophical aspects of identity.

Consider issues like publicy: How much of these various aspects of identity do you want to be revealed? Or context-based identity: you are a different you with the bowling league, at work, or on Suicide Girls.

Ev’s list is based on information flows — how people and systems might communicate or interact with people through identity markers of various kinds — but it doesn’t get at our personal motivations, needs, or requirements around identity as an aspect of human psychology.

Facebook, Discourse, And Identity

The question of Facebook comments disguises a number of deeper issues, but is also in and of itself interesting. Many have reported that the number of blog comments has gone down with the introduction of Facebook comments on various well-trafficked blogs. This may be a good thing, reintroducing social scale to forums that had grown too large, and as a consequence had seen a decrease in civility.

Mathew Ingram notes that involvement trumps numbers in comments:

Mathew Ingram, Why Facebook Is Not the Cure For Bad Comments

[…] the reality is that when it comes to improving blog comments, anonymity really isn’t the issue — the biggest single factor that determines the quality of comments is whether the authors of a blog take part in them.

Working at a pioneering blog network in 2004, I coined the term ‘the Conversational Index’ for a metric we discovered could predict the future success of blogs. It was defined as

Conversational Index = (comments + trackbacks) / posts

I guess nowadays we’d have to include references from Twitter and Facebook, but you get the idea. Successful blogs generated a lot of commentary, and they did so from almost the very start.
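
To make the metric concrete, here is a minimal sketch of the calculation in Python; the numbers are hypothetical, purely to show how the ratio behaves:

    # A minimal sketch of the Conversational Index. The figures below are
    # hypothetical, just to illustrate how the ratio behaves.
    def conversational_index(comments: int, trackbacks: int, posts: int) -> float:
        """CI = (comments + trackbacks) / posts"""
        return (comments + trackbacks) / posts

    # A blog with 100 posts that drew 450 comments and 50 trackbacks:
    print(conversational_index(comments=450, trackbacks=50, posts=100))  # 5.0

    # A blog with the same number of posts but little reader response:
    print(conversational_index(comments=30, trackbacks=5, posts=100))    # 0.35

A blog whose index sits well above 1 is generating more conversation than content, which is the pattern we saw in the blogs that went on to succeed.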

And it wasn’t a function of publicy: there was no effort involved to have people use their legal names. It was a function of involvement on the part of the authors.

Regarding the deeper issues underlying comments, Robert Scoble went apeshit yesterday, after reading Steve Cheney’s piece, How Facebook is Killing Your Authenticity, that I also commented on (see The Facebooking Of Identity). Here’s some of what Robert wrote:

Robert Scoble, The Real Authenticity Killer

These “authenticity is dead” people are cowards.

See, where I ONLY post opinions I’m willing to sign my name to, lots of people are actually cowards and just not willing to sign their names to their mealy-mouthed attacks.

Don’t give me that horseshit that you won’t be able to whistle blow at work.

It is hard to summarize Scoble’s rant, but in essence he is making the case that the web’s natural structure channels each of us toward using a single identity — for example in comments, or blog posts — and we should embrace that, and not attempt to subvert it.

I think this is a bit simplistic, at the least, principally because it leads to overly conservative strictures on discourse, and not just for whistleblowers.

How many people have been fired in recent years for blogging, for example? And how many untold thousands have held their tongue or suppressed their own potentially unpopular opinions for fear of various sorts of retribution, or just being left out of the discussion?

Lastly, we are moving into a new era, principally opened by the rise of web culture, where a post-modern identity is a possibility. We can potentially involve ourselves with very different social scenes, with different ground rules, different purposes, and starkly different values, all at the same time.

Through involvement with such diverse groups we grow and learn very different perspectives. In a sense, we can shift from a unitary identity to a network of identities, where the various nodes connect with each other in asymmetric and uneven ways: we may even have elements in a multiphrenic personality that are in conflict with each other.

This infuriates a lot of people, and whenever I present this concept there are fireworks. Some argue that such an identity is immature, illegitimate, and possibly immoral. I have been accused of inciting others to have false identities, when in fact I am really just observing a shift in societal mores.

Just as our society, politics, and business benefit from increased diversity — different views that possibly conflict — I think the same is true for post-modern identity.

Who among us is certain about everything? Who has no doubts? Who never wonders about choices made, or paths not taken? Who never sees multiple sides to an argument?

Scoble obviously has no doubts about identity: you are the you that the most open social context says you are, and that’s that. You should accept it, and if you don’t you are a coward, or so Scoble says.

But I have a different perspective, one that is more accepting of our search for self and the relativity of identity, and less demanding of certainty in an uncertain and rapidly evolving world.


Hiding In Plain Sight: Publicy and Social Steganography

I have written a great deal about our transition online from an ethos of secrecy and privacy (a la email and groupware) in the pre-social web, to a social web in which publicy (or publicness) is displacing and remaking the premises of social interaction.

Danah Boyd has introduced a great metaphor into this discussion: social steganography. Here’s a discussion about teens, making the case for concealment by social camouflage:

Alice Marwick and Danah Boyd, Tweeting teens can handle public life

But even when teens aren’t hiding behind monikers, what they post may not make sense to an outsider. Access to content is not the same as access to interpretation. Teens regularly post in-jokes and use song lyrics or cryptic references to speak to a narrower audience than might be accessing their tweets. Some tweets are clearly difficult to decode, making the reader aware that a message is being hidden; others can be understood as “social steganography” where the message is hidden in “plain sight”. While their classmates, parents or potential employers may be able to see these tweets, they don’t necessarily understand them. Although there’s nothing fundamentally new about these practices, their application to Twitter makes it clear that teens are aware of speaking in public and using strategies to manage it.

What all this means is that “public or private” is more complicated than it seems. Twitter and its ilk aren’t going away, and the answer to responsible use isn’t to shut teens out of public life. Many teens are indeed more visible today than ever before, but, through experience, they’re also developing skills to manage privacy in public. What matters is not whether or not teens are speaking in public, but how we support them as they try to learn how to responsibly navigate the networked public spaces that are central to contemporary life.

Steganography is ‘the art and science of writing hidden messages in such a way that no one, apart from the sender and intended recipient, suspects the existence of the message, a form of security through obscurity’ (Wikipedia). The classic examples include invisible ink between the visible lines of a letter; today, information can be embedded in digital images, sent via email, and extracted by the recipient based on a shared key.

It’s based on a kind of camouflage, where the familiar and superficial draw attention away from the occluded and hidden.
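
To make the classical, non-social version concrete, here is a minimal sketch of least-significant-bit image steganography. It is only an illustration: it assumes the Pillow library, the file names are hypothetical, and the output must be saved in a lossless format such as PNG for the hidden bits to survive.

    # A minimal, illustrative sketch of least-significant-bit (LSB) image
    # steganography: the message rides in the lowest bit of each color channel,
    # so the picture looks unchanged to a casual viewer. Assumes Pillow is
    # installed; file names are hypothetical.
    from PIL import Image

    def embed(cover_path: str, message: str, out_path: str) -> None:
        """Hide a UTF-8, null-terminated message in the LSBs of an image."""
        img = Image.open(cover_path).convert("RGB")
        bits = []
        for byte in message.encode("utf-8") + b"\x00":      # null byte marks the end
            bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
        flat = [channel for pixel in img.getdata() for channel in pixel]
        if len(bits) > len(flat):
            raise ValueError("message too long for this cover image")
        for i, bit in enumerate(bits):
            flat[i] = (flat[i] & ~1) | bit                   # overwrite the lowest bit
        stego = Image.new("RGB", img.size)
        stego.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
        stego.save(out_path)                                 # use a lossless format, e.g. PNG

    def extract(stego_path: str) -> str:
        """Read the LSBs back out until the null terminator is reached."""
        flat = [channel for pixel in Image.open(stego_path).convert("RGB").getdata()
                for channel in pixel]
        out = bytearray()
        for i in range(0, len(flat) - 7, 8):
            byte = 0
            for bit in flat[i:i + 8]:
                byte = (byte << 1) | (bit & 1)
            if byte == 0:
                break
            out.append(byte)
        return out.decode("utf-8")

The point of the example is the asymmetry danah describes: anyone can look at the stego image (or the posted lyric), but only someone who knows there is something to extract, and how, gets the second meaning.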

Danah defines social steganography this way:

When Carmen broke up with her boyfriend, she “wasn’t in the happiest state.” The breakup happened while she was on a school trip and her mother was already nervous. Initially, Carmen was going to mark the breakup with lyrics from a song that she had been listening to, but then she realized that the lyrics were quite depressing and worried that if her mom read them, she’d “have a heart attack and think that something is wrong.” She decided not to post the lyrics. Instead, she posted lyrics from Monty Python’s “Always Look on the Bright Side of Life.” This strategy was effective. Her mother wrote her a note saying that she seemed happy which made her laugh. But her closest friends knew that this song appears in the movie when the characters are about to be killed. They reached out to her immediately to see how she was really feeling.

Privacy in a public age

Carmen is engaging in social steganography. She’s hiding information in plain sight, creating a message that can be read in one way by those who aren’t in the know and read differently by those who are. She’s communicating to different audiences simultaneously, relying on specific cultural awareness to provide the right interpretive lens. While she’s focused primarily on separating her mother from her friends, her message is also meaningless to broader audiences who have no idea that she had just broken up with her boyfriend. As far as they’re concerned, Carmen just posted an interesting lyric.

In a world based on publicy and multiphrenic identity it will not be uncommon to have the meaning of one’s words or actions interpreted differently, contextualized differently, by the members of different networks. Do they see the leopard’s spots, or the leopard?

(ht @fstutzman)

IgniteNYC: Publicy And The Erosion Of Privacy

 [These are the slides I used at IgniteNYC last night, and something like what I intended to say. In several cases I ran out of time before making the final quip! 15 seconds per slide is fast!]


William James once said, “A man coins a new word at his own peril.” Nonetheless, the rapid changes surrounding online sharing and privacy have led me to spin up ‘publicy’ to represent the shift to public as a default instead of private as the default.



No matter how open we want to be, or how much we’d like institutions to be transparent, some things must be kept private. But how much? Our social contract is changing fast.



There’s a tradition in the West of respecting personal privacy, but this has limits. Wearing a mask in public is against the law in a number of US states, for example, and the Feds have the right to tap your phone, once a court agrees.



We feel we have the right to conceal what’s in our thoughts, and what goes on in our bedroom. We believe we should not have to walk through a ‘full-body’ scanner in the airport because our privates are private.



Our notions of privacy are a response to sharing physical space, and creating conventions so we can live together without causing offense and killing each other.



When we first went online, in the early days of social media, it was mostly about ‘personal publishing’ and it was more about influencing open social discourse than social connection. More about Freedom of Speech than Freedom of Association.



The more recent Web 2.0 era of social media is different: much more social, based on social networks. But the Web is not a shared space, it’s shared time, no matter how many people say it is. So much of what we mean by ‘privacy’ doesn’t hold online.



On the Web you must publish to be known. You can’t have social experience online and remain totally private. You can’t ‘see’ someone on Foursquare unless they tell you they are there.



'Publicy' is gaining ground over privacy because we are spending more time online in social streams. We have come to believe that this is a natural thing, and a natural right, despite all the talk that it is making us stupid.



We are affiliating with others who share our online involvement, and the more time we spend online, the more we value the time spent and the lessons learned there. “I am made greater by the sum of my connections, and so are my connections,” as I say.



As just one example of how tools influence this, consider how streaming apps (like Twitter, Facebook, and Yammer) are displacing email, and how this change seems to shape what is being said and how it is interpreted.



Not only is the pace or tempo of communication different with streams, perhaps the biggest shift is that in a stream you don’t (generally) say exactly who is supposed to see something. Messages are released, not addressed.



And of course, it’s a public stream, not an inbox. A place where you hear many voices, some from unknown members of your social scene: the dark matter of social influence impinging on you.


 

Facial recognition and augmented reality mean that people you haven’t met will know who you are walking down the street. This is a distant echo of Andy Warhol’s 15 minutes of fame: everyone will be famous for 15 meters.



Brands will be able to make you offers you can’t refuse, like gifts from a friend. That’s because we will have friended them, so they can know about us. They will be about as accurate as casual friends are when guessing what we like or don’t like.



We are zooming toward a new social contract, between each other, brands, and the platforms and apps that mediate our sociality. Facebook’s Privacygate and Google’s mislaunch of Buzz are disruptive because they break an existing contract before we have agreed to the new one.


 

Our brains are plastic and the postmodern shift to a radically different social setup will mean we change deeply, and our identity will morph to match. We aren’t defined in the same ways anymore.



The 20th Century notion of identity is that we are monoliths: unitary, based on a single set of attributes — like how much we make public or private: a single self dealing with a single world.



But today we are affiliating with many worlds — in Foursquare, Twitter, SuicideGirls — and we are shifting to a networked self, comprised of distinct identities matching those worlds. This is what Kenneth Gergen refers to as multiphrenic identity.



Despite the recent publicy missteps of Facebook and Google — Facebook is like watching a drunk fall down the stairs, at this point — we are moving toward a new social contract. Despite the hiccups, I remain optimistic that the era of publicy and the erosion of privacy will lead to a better world in which to play and work.

[Update: Here’s a drawing that Heather from ImageThink.net made from my talk! Wow!]

Do ‘Supertaskers’ Mean We Are Adapting To A Multiphrenic World?

In a full frontal attack on multitasking and the tools that seem to seduce us into it, Matt Richtel makes the case for the evils of being wired by chronicling the day-to-day media addiction of a California entrepreneur and his family. Kord Campbell misses an email from someone who wants to buy his company, his son is getting C’s, and mom gets pissed when Kord reacts to stress by playing video games interminably.

Richtel uses this modern dysfunctional family to advance the conventional interpretation of recent psychological tests and conjectures about human cognition in the wired age:

Matt Richtel, Hooked on Gadgets, and Paying a Mental Price

Scientists say juggling e-mail, phone calls and other incoming information can change how people think and behave. They say our ability to focus is being undermined by bursts of information.

These play to a primitive impulse to respond to immediate opportunities and threats. The stimulation provokes excitement — a dopamine squirt — that researchers say can be addictive. In its absence, people feel bored.

The resulting distractions can have deadly consequences, as when cellphone-wielding drivers and train engineers cause wrecks. And for millions of people like Mr. Campbell, these urges can inflict nicks and cuts on creativity and deep thought, interrupting work and family life.

While many people say multitasking makes them more productive, research shows otherwise. Heavy multitaskers actually have more trouble focusing and shutting out irrelevant information, scientists say, and they experience more stress.

And scientists are discovering that even after the multitasking ends, fractured thinking and lack of focus persist. In other words, this is also your brain off computers.

Ok, Richtel is a reporter, not a scientist, so it’s a natural thing for him to start with the conclusions first. But what is the science here?

Just some background, though, to level the playing field.

The human mind is plastic — This is unsurprising, but commonly overlooked. We all can learn new skills, or repurpose existing cognitive centers in our brains when exposed to new situations. That’s how we learn to speak a foreign language, to juggle, or to play the guitar.

Mastery is distinct from learning — The first few weeks when you are trying to learn to play the drums can be humbling, and lead to a lot of bad music. The rule of thumb called the ‘10,000 hour rule’ — made famous by Malcolm Gladwell in Outliers — suggests that for many sorts of complex behaviors, like getting a black belt, ten years of very regular practice is a baseline. And while the white belt may be learning valuable skills, she may be no better in a bar room brawl than an average person, and perhaps worse, since her new training may actually slow her responses as she responds intellectually to the situation: her karate is not second nature, yet.

So, the assumption of much of the popular discourse about multitasking is that the cognitive adaptation that happens when we are grappling with the wired world is, at base, bad. The reality is that we are always learning, always adapting. Underlying this sense that multitasking is bad is the industrial ideal of personal productivity: we are supposed to be heads down, doing purposeful work as much as possible, and not being distracted by other things that are not relevant to the task at hand. Anything that distracts us from that is an annoyance.

However, the fact is that people need to balance task-oriented work — like writing this post — with the thinking and learning that inform the work and the ability to perform it — like reading the scientific studies cited in Richtel’s article, and thinking about what they mean. Or answering the phone while I am writing the post, because I have been trying to close the loop with someone for several days, and this is him calling.

The world is too rich and varied to imagine that there is a path through it where we can simplify our activities to a series of programmed single-tasking activities. So clearly there is a balance. And I propose the following maxim: each person can multitask successfully to some degree, and our ability to multitask is a combination of innate and learned behaviors.

Much of the evidence that Richtel cites — when stripped of the moralistic preaching about media consumption rotting our minds — the usual war on flow stuff — accords with my maxim.

As Richtel cites:

Technology use can benefit the brain in some ways, researchers say. Imaging studies show the brains of Internet users become more efficient at finding information. And players of some video games develop better visual acuity.

[… much of the technical discussion in the article is spread all over]

At the University of Rochester, researchers found that players of some fast-paced video games can track the movement of a third more objects on a screen than nonplayers. They say the games can improve reaction and the ability to pick out details amid clutter.

“In a sense, those games have a very strong both rehabilitative and educational power,” said the lead researcher, Daphne Bavelier, who is working with others in the field to channel these changes into real-world benefits like safer driving.

What leads these better players to be better? Playing more games? Playing more games against better players? Better teaching from friends? Better genes?

Other research shows computer use has neurological advantages. In imaging studies, Dr. Small observed that Internet users showed greater brain activity than nonusers, suggesting they were growing their neural circuitry.

Many studies show that online activity — like reading — involves more of the brain than reading a book, for example. It seems we are thinking more critically while online, despite all the opportunities for distraction.

And Richtel only touches on one topic for a paragraph, and does not dig into the actual research involved. It seems that at least some people can in fact drive a car and talk on the phone at the same time: Supertaskers.

Preliminary research shows some people can more easily juggle multiple information streams. These “supertaskers” represent less than 3 percent of the population, according to scientists at the University of Utah.

That’s it? No mention of who these people are, or what sort of multitasking is involved? No suppositions?

Nope. Richtel wants to get back to his agenda, which is making the case against multitasking.

So I dug up the research, which was conducted by Jason M. Watson and David L. Strayer at the University of Utah (Supertaskers: Profiles In Extraordinary Multitasking Ability), instead of just reading other reporters slander the authors. Watson and Strayer tested 200 subjects in a controlled fashion, and determined that 2.5% of the group could in fact drive in a difficult car simulation while conversing on the phone, without significant loss of performance on the individual tasks. The ‘conversing on the phone’ wasn’t just talking about TV: it was a complex set of behaviors called OSPAN tasks, like remembering lists of items while performing mathematical calculations.
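
For readers unfamiliar with the paradigm, here is a toy sketch of what an OSPAN-style trial asks of a subject. This is my own illustration, not the Watson and Strayer materials: judge simple arithmetic while holding a growing list of words in memory, then recall the words in presentation order.

    # A toy, interactive sketch of an OSPAN-style trial (an illustration of the
    # paradigm, not the actual Watson & Strayer materials): the subject judges
    # arithmetic while holding words in memory, then recalls them in order.
    import random

    WORD_POOL = ["chair", "river", "cloud", "spoon", "tiger", "lamp", "brick", "piano"]

    def ospan_trial(num_items: int = 4) -> int:
        """Run one trial; return how many words were recalled in the right position."""
        words = random.sample(WORD_POOL, num_items)
        for word in words:
            a, b = random.randint(1, 9), random.randint(1, 9)
            shown = a + b + random.choice([0, 1])            # sometimes show a wrong sum
            reply = input(f"Is {a} + {b} = {shown}? (y/n) ").strip().lower()
            judged_true = reply == "y"
            print("  (correct)" if judged_true == (shown == a + b) else "  (wrong)")
            print(f"  Remember this word: {word}")
        recall = input("Type the words in order, separated by spaces: ").split()
        return sum(r == w for r, w in zip(recall, words))

    if __name__ == "__main__":
        n = 4
        print(f"Recalled {ospan_trial(n)} of {n} words in the right positions")

The arithmetic keeps interrupting rehearsal of the word list, which is what makes the task a demanding thing to combine with simulated driving.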

The authors state, unequivocally:

Supertaskers are not a statistical fluke. The single-task performance of supertaskers was in the top quartile, so the superior performance in dual-task conditions cannot be attributed to regression to the mean. However, it is important to note that being a supertasker is more than just being good at the individual tasks. While supertaskers performed well in single-task conditions, they excelled at multi-tasking.

This means that there are some of us who can drive and talk on the phone safely. And it seems like their superpower is multitasking itself, not just the ability to do these two specific things together.

Obviously, much more research is needed to determine what goes into this. I am going to suggest a few ideas though.

Being good at multitasking draws on more than one cognitive center — I doubt they will find a single gene or region of the brain responsible for multitasking. Like most complex cognitive functions, it will involve some extremely diffuse network of interaction in our mind. What we have learned about the minds of musicians and zen monks will be related, in some direct way.

No matter who you are, you can get better at multitasking — This will turn out to be like other human activities that involve mastery: it will take a long time, and it is better to have a teacher who is a master. Thinking hard about moving your hands fast — like the barroom challenge of trying to catch a dollar bill between your outstretched fingers — doesn’t work. The only thing that makes your hands move faster is practice: ten years of practice.

The fear mongers will tell us that the web, our wired devices, and remaining connected are bad for us. It will break down the nuclear family, lead us away from the church, and channel our motivations in strange and unsavory ways. They will say it’s like drugs, gambling, and overeating, that it’s destructive and immoral.

But the reality is that we are undergoing a huge societal change, one that is as fundamental as the printing press or harnessing fire. Yes, human cognition will change, just as becoming literate changed us. Yes, our sense of self and our relationships to others will change, just as it did in the Renaissance. Because we are moving into a multiphrenic world — where the self is becoming a network ‘of multiple socially constructed roles shaping and adapting to diverse contexts’ — it is no surprise that we are adapting by becoming multitaskers.

The presence of supertaskers does not mean that some are inherently capable of multitasking and others are not. Like all human cognition, this is going to be a bell curve of capability. The test that Watson and Strayer devised only pulled out the supertaskers: the ones with zero cognitive cost from multitasking. There are others in the test who had a slight cost, and others with higher costs.

Who among us are the most capable multitaskers, and in a position to teach the others? It may not be the case that the specific subjects in Watson and Strayer’s study are the best to teach others how to multitask, but it’s likely that some supertaskers out there are also good teachers.

Expect this to be a hot trend: parents sending their children off to supertasking classes after school, to get a jump on the new century.


Multiphrenic Identity

I stumbled across a word today courtesy of @alicetiara: ‘multiphrenic’, which she defined as ‘multiple identities pieced together from the multiplicity of mediated messages in our environments.’ This sounded so much like my recent musings on networked identity that I did some searching.

Turns out the term was coined by Kenneth Gergen, a well-known psychologist and author, and first used in The Saturated Self (1991), which I have ordered from the library and hope to read soon.

I also found an essay online by Karin Wilkins that defines Gergen’s notions fairly concisely, and confirmed that he is indeed talking about a postmodernist identity of the same sort that I have been thinking about [emphasis mine]:

Karin Wilkins, Moving Beyond Modernity: Media and Multiphrenic Identity among Hong Kong Youth

Implications for Multiphrenic Identity

Identities connect individuals to larger social groups, constituting boundaries used to include and exclude members. Whereas in earlier development communication theory media were believed to promote national identity, an autopoietic framework would hold that media might promote multiple and diverse identities related to maintaining the boundaries of communities. Recent communication literature has moved away from an interest in a spatially-determined national identity, instead focusing on cultural identity, not equivalent to a particular space or territory.

[…]

[Kenneth] Gergen conceptualizes a new sense of self, contending that “the social saturation brought about by the technologies of the twentieth century, the accompanying immersion in multiple perspectives, have brought about a new consciousness: postmodernist”. Thus, Gergen believes that the proliferation of communication modes and of mediated products have contributed to what he terms the “multiphrenic self.”

Further, “cultures incorporate fragments of each other’s identities. That which was alien is now within”. In other words, the self may be interpreted not as a monolithic construction, but as a set of multiple socially constructed roles shaping and adapting to diverse contexts (cf. Weick). Rather than assume multiple identities pose a deviant condition, I prefer to assume their existence, moving toward an understanding of how these are constructed and supported within a media-saturated setting.

Exactly what I have been arguing with regard to our use of social tools online. We invest ourselves into relationships that are shaped by the affordances of the tools and the particular social contracts of the contexts. Through these relationships new and perhaps unexpected insights into others and ourselves arise. And we participate in dozens of these social environments, possibly with non-overlapping constituencies, each focused on different aspects of the greater world: entertainment, food, news, social causes, health, religion, sex, you name it. We become adept at shifting registers, just like polyglots shift from Italian to Corsican to Catalan without even thinking about it. We are multiphrenic.
