Amy Merrick on the history and demise of the traditional shopping mall: http://nyr.kr/1qtbn0G
“As any cubicle dweller knows, people like natural...
Another DJ Mike Rizzy Behind The Beat adventure, 9th Wonder edition.
An original sample mix showcasing songs sampled by Super Producer 9th Wonder.
- 9th Wonder Medley Intro
Ryley Walker - Clear The Sky
Great new song from his upcoming record. Cannot wait. This almost feels like 60s era British folk with an American primitive overlay. Its expansiveness reminds me of...
Madrigal thinks 2014 will be a major turning point for tech, and he says so in a tone that is both world-weary and hopeful, a postnormal weltschmerz with a cherry on top, where he juxtaposes the ‘forever drone wars’ with ‘I want to investigate what screens want.’ He’s a seething mass of contradictions, and a great read.
I pull one snippet out, because I am down with him on this one:
Alexis C. Madrigal, Where Do We Go From Here? 8 Hypotheses About Tech in 2014
Job tech. When we look back on this year from 2030, I think it will become clear that the largest change in our daily lives came around the technology—and therefore quantification and control—that we allowed to creep into our work lives. For example, right now, cameras on some buses and trucks are constantly monitoring them on the road, and when they detect some sort of anomaly, the video is sent to a human-staffed control center, where the event is recorded and coded as the driver’s fault or not. That information will go into the driver’s record and perhaps be used to predict when accidents might occur and perhaps be used to hire and fire drivers. Let’s just say that such a system makes the roads safer, but it costs employees even more power vis-à-vis their employers.
The big question is: Do we want to live in this world? Do you want this kind of technology applied to your job? What kinds of artifacts will be introduced by this kind of tracking? How will the stats be juked?
As Don Peck’s excellent feature on HR analytics shows, companies are pushing into this territory right now and if we want our society and politics to make the right adjustments, we need to start thinking this through now.
The rise of AI and data analysis has the power to become a coercive tool in the hands of authoritarians, but I don’t think that the baseline prior to the use of such technologies is stellar. For example, hiring and promotion approaches at most companies are a joke, relying on informal, biased, and unscientific approaches that are more folklore than rational method. It’s not clear that people are any good at those tasks, and it might be better to develop completely new ways to do them: either by dismantling our preconceptions and starting over, or by building smarter tools to do it for us, using techniques human minds can’t achieve.
Like Madrigal, I think this year will be a turning point for work technologies. Aside from the examples he cites, I am closely following looser, cooperative tools that could be the start of a disruption of our aging, dominant collaborative tools. Today’s work management tools (so-called enterprise social networks) are based on design principles that go back to the groupware of over 20 years ago, and may be the best proof of McLuhan’s saying:
Doing today’s jobs with yesterday’s tools.
'Perhaps our creations reflect our times, and our fears reflect our past'
Alexis Madrigal has written a highly ambivalent piece about Medium, the somewhat-new project from the boys at Obvious, led by Ev Williams, a co-founder of Twitter. Madrigal includes, more or less at the same time, praise for some of the journalism being done by paid staff at Medium (like Quinn Norton’s long-form essay on Bradley Manning), but at the same time he wonders about a ‘publication’ that doesn’t really screen out the bottom two-thirds of its content, full of crap and misogyny (coder Peter Shih’s rant about San Francisco).
He lofts a few telling observations out there, although they are more revealing of the mindset of a journalist at a top-shelf magazine than of Medium’s possible trajectory:
Alexis C. Madrigal, What Is Medium?
Medium wasn’t building a magazine, I realized, but a magazine killer.
Basically, Madrigal — and many others — have had their thinking shaped by the dichotomy of platform and publication, where WordPress was clearly not The Atlantic, and vice versa. But Forbes and the Huffington Post muddied those waters, didn’t they? And Tumblr tried to, once, although they’ve now retreated.
Madrigal closes with an insight too large for him to swallow, I think:
Maybe, though, I’m applying old-line thinking to this new creation. Perhaps Medium can continue to do precisely what it has been doing, and their brand value will continue to grow while these major questions remain unresolved. The center will hold because there is no center. In a world when every post stands on its own, atomistically, perhaps it’s silly to think a publication can’t be incoherent.
Perhaps our creations reflect our times, and our fears reflect our past. To someone who is standing on what was the center, like Madrigal, the notion that there may soon be no center may be the greatest fear of all. For those of us who have been living on the edge, this emerging future is the reflection of our hopes: when the edge becomes the center, and every author is on their own.
Marco Arment views the Medium set-up as a publication, a content farm, one that is making money on the labor of its participants, most of whom are unpaid. He cautions us to avoid the honey pot, that sugar rush of easy page hits, and instead build a following elsewhere, where the brand is your own.
But maybe the middle ground is possible. Perhaps those that want to rise to be professional journalists — to get paid directly for their writing and not to rely solely on its side effects — perhaps those writers might opt to post at Medium hoping to rise into the paid writer tier there. This path is not explicitly offered by Medium, mind. But it’s latent in the architecture they’ve built into their model.
Maybe that is Medium’s message.
Dan Slater wrote an article called A Million First Dates in which he makes the case that online dating, by offering a wide selection of readily sexually available partners, leads to a decrease in commitment: hence the million-first-dates trope. The piece is focused solely on a disengaged dope named Jacob, and never touches on the needs, wants, and desires of heterosexual women, or of gay people.
I am glad to say that Alexis Madrigal skewers this one-sided and unbalanced argument, saving me from doing so.
There’s No Evidence Online Dating Is Threatening Commitment or Marriage - Alexis C. Madrigal via The Atlantic
Unfortunately, neither Jacob’s story nor any of the evidence offered compellingly answers the questions raised. Now, let’s stipulate that there is no dataset that perfectly settles the core question: Does online dating increase or decrease commitment or its related states, like marriage?
But I’ll tell you one group that I would not trust to give me a straight answer: People who run online dating sites. While these sites may try to attract some users with the idea that they’ll find everlasting love, how great is it for their marketing to suggest that they are so easy and fun that people can’t even stay in committed relationships anymore? As Slater notes, “the profit models of many online-dating sites are at cross-purposes with clients who are trying to develop long-term commitments.” Which is exactly why they are happy to be quoted talking about how well their sites work for getting laid and moving on.
But hey, maybe these guys are right. Maybe online dating and social networking is tearing apart the fabric of society. How well does the proposition actually hold up?
First off, the heaviest users of technology—educated, wealthier people—have been using online dating and networking sites to find each other for years. And yet, divorce rates among this exact group have been declining for 30 years. Take a look at these statistics. If technology were the problem, you’d expect that people who can afford to use the technology, and who have been using the technology, would be seeing the impacts of this new lack of commitment. But that’s just not the case.
Does it follow that within this wealthy, educated group, online daters are less likely to commit or stay married? No, it does not.
Like I said, there’s no data to prove that question one way or the other. But we have something close. A 2012 paper in the American Sociological Review asked, are people who have the Internet at home more or less likely to be in relationships? Here was the answer they found:
One result of the increasing importance of the Internet in meeting partners is that adults with Internet access at home are substantially more likely to have partners, even after controlling for other factors. Partnership rate has increased during the Internet era (consistent with Internet efficiency of search) for same sex couples, but the heterosexual partnership rate has been flat.
So, we have, at worst, that controlling for other factors, the Internet doesn’t hurt and sometimes helps. That seems to strike right at the heart of Slater’s proposition.
A 2008 paper looked at the Internet’s ability to help people find partners and postulated who might benefit the most. “The Internet’s potential to change matching is perhaps greatest for those facing thin markets or difficulty in meeting potential mates.” This could increase marriage rates as people with smaller pools can more easily find each other. The paper also proposes that perhaps people would be *better* matched through online dating and therefore have higher-quality marriages. The available evidence, though, suggests that there was no difference between couples who met online and couples who met offline. (Surprise!)
So, here’s the way it looks to me: Either online dating’s (and the Internet’s) effect on commitment is nonexistent, the effect has the opposite polarity (i.e. online dating creates more marriages), or whatever small effect either way is overwhelmed by other changes in the structure of commitment and marriage in America.
The possibility that the relationship “market” is changing in a bunch of ways, rather than just by the introduction of date-matching technology, is the most compelling to me. That same 2008 paper found that the biggest change in marriage could be increasingly “co-ed” workplaces. Many, many more people work in places where they might find relationship partners more easily. That’s a big confounding variable in any analysis of online dating as the key causal factor in any change in marital or commitment rates.
Madrigal mentions at one point that going online is like moving to a city, and then doesn’t explore it much. But that is precisely the right analogy: it is exactly like moving to a city. Going online increases your social density: you add new contacts, some of those become close, some of those introduce you to yet others. This opens up new possibilities, but under no circumstance should we think it leads to lessened commitment.
Using divorce as a proxy for decreased commitment, rural areas are just as likely today to see marriages end in divorce. This was not true decades ago.
Once Rare in Rural America, Divorce Is Changing the Face of Its Families, Sabrina Tavernise and Robert Gebeloff
Forty years ago, divorced people were more concentrated in cities and suburbs. But geographic distinctions have all but vanished, and now, for the first time, rural Americans are just as likely to be divorced as city dwellers, according to an analysis of census data by The New York Times.
The shifts that started in cities have spread to less populated regions — women going to work, gaining autonomy, and re-arranging the order of traditional families. Values have changed, too, easing the stigma of divorce.
“In the bottom ranks, men have lost ground and women have gained,” said June Carbone, a law professor at the University of Missouri-Kansas City and co-author of “Red Families v. Blue Families.”
“A blue-collar guy has less to offer today than he did in 1979,” Professor Carbone added. Those shifting forces, she said, “create a mismatch between expectation and reality” that can result in women becoming frustrated and leaving, because now they can.
Since 1990, class has become an increasingly reliable predictor of family patterns, Professor Carbone said. College-educated Americans are now more likely to get married and stay married than those with only a high school diploma, a change from 20 years ago, she said, when differences were much smaller.
That trend has been particularly important for rural areas, which have fallen further behind urban ones in education, according to census data. Just one in six rural residents have college degrees, far fewer than in cities, where one in three do. Nationally, there were about 121 million married adults and 26 million divorced people in 2009, compared with about 100 million married and 11 million divorced people in 1980.
My bet is that the factors that contribute to divorce — low education rates, more co-ed workplaces, and the increased acceptability of divorce and, at core, of living alone — trump whatever internet effects exist, if any.
I am surprised that neither of these articles made any reference to the increasing phenomenon of people living alone. Today, half of Americans are single, and the number is rising. See Eric Klinenberg’s Going Solo, as discussed by Nathan Heller:
The Disconnect, Nathan Heller
Klinenberg’s data suggested that single living was not a social aberration but an inevitable outgrowth of mainstream liberal values. Women’s liberation, widespread urbanization, communications technology, and increased longevity—these four trends lend our era its cultural contours, and each gives rise to solo living. Women facing less pressure to stick to child care and housework can pursue careers, marry and conceive when they please, and divorce if they’re unhappy. The “communications revolution” that began with the telephone and continues with Facebook helps dissolve the boundary between social life and isolation. Urban culture caters heavily to autonomous singles, both in its social diversity and in its amenities: gyms, coffee shops, food deliveries, laundromats, and the like ease solo subsistence. Age, thanks to the uneven advances of modern medicine, makes loners of people who have not previously lived by themselves. By 2000, sixty-two per cent of the widowed elderly were living by themselves, a figure that’s unlikely to fall anytime soon.
So, we have a lot of people dating today — and more in the future — who have no desire to settle down, to ‘commit’; the reasons are diffuse and interconnected with modern life, and not principally an outgrowth of internet dating.
The saddest thing is that Slater is likely to sell a lot of books. He’ll be widely quoted. He’ll do morning talk shows. His nonsense will be quoted millions of times until it works its way into the collective unconscious of American culture. But it’s completely wrong.
In response to Roger Cohen’s recent Thanks For Not Sharing — a rant about the oversharing he sees on Twitter and Facebook — Alexis Madrigal skewers him:
Your Anti-Social Media Rant Reveals Too Much About Your Friends - Alexis C. Madrigal via The Atlantic
My diagnosis is simple, Roger: your friends and associates are terrible and boring. Being that you are a smart and interesting guy who would distill only the finest information from any social network, the problem is the garbage going into your feed, which can only come out as garbage in your column. And that garbage is being created by the people who you choose to follow and know.
Madrigal is being a bit snarky, but it’s actually true. The best thing about open social networks is that they are open: you can follow whoever you want. And the most positive and life-affirming thing we each can do is move ourselves in the network by adding new connections, and possibly dropping old ones.
I am leaving aside — as did Madrigal — the question of what others may gain by following Cohen. Or not.
But the possibility exists to be enlarged by this, and to share that: I am made better by the sum of my connections, and so are my connections. But I think Cohen is missing that point, completely.
- Mike Bulajewski, cited by Nathan Jurgenson, Against TED via The New Inquiry
Jurgenson excoriates TED, making me finally feel like I wasn’t the only one:
The way TED talks fuse sales-pitch slickness with evangelical intensity leads to perhaps the most damning argument against the TED epistemology: It necessarily leaves out other groups and other ways of knowing and presenting ideas. As Paul Currion tweeted, TED seems “unaware of its own ideological bias.” Let’s take one example. Take a wild guess which gender is massively over-represented as TED speakers (answer, via Tom Slee @whimsley). And TEDxWomen stinks of tokenism. Hint: It is better to be more inclusive through and through than to segregate marginalized groups into their own token corners. But the TED style aligns much more easily to articulating ideas that sell than ideas that concern power, domination, and social inequalities. Real cutting-edge ideas also come from the margins. TED’s corporate-establishment voice and style aren’t without their uses, but they are certainly not innovative or cutting edge.
As problematic as TED is in itself, its popularity is more troublesome, coming to dominate the social conversation about what new technologies mean. Not that TED should be barred from a role in the conversation. Because of the conference, some complex ideas get wider exposure than they otherwise would (as Atlantic editor Alexis Madrigal pointed out in a Tweet). But TED and the larger TED-like world of Silicon Valley corporatism have far too much importance, as Evgeny Morozov points out when criticizing the “Internet guru.”
There are consequences to having this style of discourse dominate how technology’s role in society is understood. Where are the voices critical of corporatism? Where is there space to reach larger publics without having to take on the role of a salesperson, preacher, or self-help guru? Academics, for instance, have largely surrendered the ground of mainstream conversations about technology to business folks in the TED atmosphere.
Can a new wave of technology thinkers produce a fresh outlet for smart ideas not (yet) co-opted as badly as TED? If so, it won’t come from the well-financed centers of Silicon Valley but from the margins, the actual cutting edge.
Alexis Madrigal weighs in on the Google+ real names mess:
Alexis Madrigal via The Atlantic
Imagine you’re walking down the street and you say out loud, “Down with the government!” For all non-megastars, the vast majority of people within earshot will have no idea who you are. They won’t have access to your employment history or your social network or any of the other things that a Google search allows one to find. The only information they really have about you is your physical characteristics and mode of dress, which are data-rich but which cannot be directly or easily connected to your actual identity. In my case, bystanders would know that a 5’9”, 165 pound probably Caucasian male with half a beard said, “Down with the government!” Neither my speech nor the context in which it occurred is preserved. And as soon as I leave the immediate vicinity, no one can definitively prove that I said, “Down with the government!”
In your head, adjust the settings for this thought experiment (you say it at work or your hometown or on television) or what you say (something racist, something intensely valuable, something criminal) or who you are (child, celebrity, politician) or who is listening (reporters, no one, coworkers, family). What I think you’ll find is that we have different expectations for the publicness and persistence of a statement depending on a variety of factors. There is a continuum of publicness and persistence and anonymity. But in real life, we expect very few statements to be public, persistent, and attached to your real identity. Basically, only people talking on television or to the media can expect such treatment. And even then, the vast majority of their statements don’t become part of the searchable Internet.
Online, Google and Facebook require an inversion of this assumed norm. Every statement you make on Google Plus or Facebook is persistent and strongly attached to your real identity through your name. Both services allow you to change settings to make your statements more or less public, which solves some problems. However, participating in public life on the services requires attaching your name to your statements. On the boulevards and town squares of Facebook, you can’t just say, “Down with the government,” with the knowledge that only a small percentage of the people who hear you could connect your statement to you. But the information is still being recorded, presumably in perpetuity. That means that if a government or human resources researcher or plain old enemy wants to get a hold of it, it is possible.
The pseudonym advocates note that being allowed to pick and choose a different name solves some of these problems. One can choose to tightly couple one’s real-world identity and online identity… or not. One can choose to have multiple identities for separate networks. In the language we were using earlier, pseudonyms allow statements to be public and persistent, but not attached to one’s real identity.
I can understand why Google and Facebook don’t want this to happen. It’s bad for their marketing teams. It generates social problems when people don’t act responsibly under the cloak of their assumed identity. It messes up the clarity and coherence of their data. And maybe those costs do outweigh the benefits pseudonymity brings to social networks.
But then let’s have that conversation. Let’s not pretend that what Google and Facebook are doing has long-established precedents and therefore these companies are only doing what they’re doing to mimic real life. They are creating tighter links between people’s behavior and their identities than has previously existed in the modern world.
Mathew Ingram refutes the growing chorus of early-adopter types (or former friendfeed types, like Scoble) who are taken with the shiny new Google Plus, and now think of Twitter as stale beer. In particular, Mathew smacks down Farhad Manjoo’s suggestion that Twitter should double the number of characters in Tweets to 280:
The point the Slate writer [Manjoo] misses (or hints at, and then discards) is that if it did this, it wouldn’t be Twitter any more. As far as I’m concerned, the 140-character limit is one of the most brilliant things Twitter has ever done — and might even explain why it is still around, let alone worth a reported $8 billion or so. Not only did that limit feel comfortable to many users who were familiar with text messaging, but it restricted what people could post, so that Twitter didn’t become a massive time-sink of 1,000-word missives and rambling nonsense, the way so many blogs are.
I’m not the only one who has noticed that on Google+, things often stray more towards the rambling-nonsense end of the spectrum than they do on Twitter. Does Twitter encourage a “sound bite” kind of culture, as Manjoo argues — or what Alexis Madrigal describes as a “call-and-response” approach, rather than real conversation? Perhaps. But a long and rambling post followed by hundreds of comments on Google+ isn’t really much of a conversation either, when it comes right down to it.
In the long run, it’s good that Google+ is providing some competition for Twitter. Maybe the ability for users to share comments with different “Circles” of friends and followers on Google’s network has Twitter thinking about how it can make better use of groups and other features. That’s a good thing. But throwing out some of the core aspects of what make Twitter useful, or cluttering it up with all kinds of other features of dubious merit doesn’t really make any sense at all. And I think Twitter knows that.
This is so similar to the Friendfeed-is-better argument of 3 years ago, it’s worth pulling some stuff from the archives, like this:
I believe Friendfeed is more attractive to those that want to have spontaneous comment-thread discussions somewhere outside of blogs, while Twitter is more divorced from the blogosphere and supports a more wide-open sort of cocktail party ambience, not some giant panel session from an endless conference. And the asymmetry of the blogosphere/conference model is continued in Friendfeed, where A-listers like Scoble and Rubel can accumulate a hundred comments on their pearls of wisdom, reposted in the Friendfeed context.
I don’t subscribe to the meme that ‘Friendfeed is better than Twitter’. Performance issues aside, Twitter provides a very different experience than Friendfeed, which I fooled with for a time, but which I have found to be like a conference with too many panel sessions and too many people. In Twitter I manage the human scale better, even with 10X the number of friends.
Regarding Scoble’s love of the shiny new things, most people will have forgotten Michael Arrington’s intervention when Scoble went sideways on Friendfeed, in which Arrington suggested he was squandering his time inside an app he couldn’t monetize, instead of writing on his blog, where he could:
I have said for years that traditional media — and Arrington has become mainstream media at this point, a Murdoch in the making — would war against the movement from pages to flow: they will say it is illegitimate, immoral, fattening, addictive, whatever.
Arrington’s points make sense relative to a certain perspective. In essence he is saying that time we spend engaging with others on the web has got to have a point, otherwise it’s just hanging out. And in the simplest terms, you should either be making money from becoming heavily invested (and well-known) on the web, or doing something else of great value.
Scoble maintains that his involvement with those in his various networks has great value, and that his more tangible work — his video series — has improved because of this involvement. But Arrington’s argument is stronger, at least to Arrington and other realists, since, implicitly, if Scoble went to work for a media outlet like TechCrunch and devoted his energies to media work that was more monetizable than the amorphous ‘following’ he has amassed in Flowland, he’d be worth millions. And he isn’t using his great hypothetical influence on the web to cure poverty, or end the genocide in Darfur, or overturn prop 8, either. He’s just fooling with tools.
But Scoble is some sort of idealist, maybe even a utopian, who sees the distant glimmerings of a new tomorrow, one that hasn’t been figured out yet. Arrington is right that Scoble can’t sell ads on his Friendfeed stream. Yet. So in very concrete terms, Scoble is losing serious bank while he is putzing around with all this social community chit-chat stuff.
And to a lesser extent, so are all of us that Twitter all day. From a certain viewpoint, it’s like sitting on the porch and whittling.
But Robert is an early adopter, and not necessarily even the ablest promoter of the movement he is in.
The rise of flow and the new form of social connection that these flow applications engender will slowly erode the edges of the more established, page-based Web 1.0 publishing models, like TechCrunch, Huffington Post, and whatever it is that the newspaper behemoths metamorphose into before finally shutting off their printing presses. Something new will emerge, out here, at the far fringes of Flowland. I believe it will recast the older forms of media, reshape them, like TV did to radio, and web 1.0 has done to print. But it’s going to take a long time, a decade or more, and a million baby steps to get there.
Scoble’s in love with the edge, and he doesn’t apparently want to monetize every waking second of his life. But it is not an addiction: he’s blinded by the light, which is a whole different problem.
I think it’s inevitable that Scoble would go gaga over the social scene that emerges around him from Friendfeed or Google Plus. It’s a natural for an influencer with hundreds or thousands of acolytes, and I believe that Scoble and his most avid followers get something special out of that sort of interaction. But it is quite distinct from the nearly conversational, call-and-response, socially-scaled cocktail party that is Twitter.