Elsewhere

Twitter’s New Discover Is Working

Twitter has released a new Discover tab — yes, the tab you never click on because it is basically useless. Is it still useless? Mathew Ingram says it’s been despammified, but not much else:

Mathew Ingram, Twitter’s big problem: It still needs better filters

In my initial use of the upgraded one (which is being rolled out to all users over the next few weeks), I found things somewhat improved, but only in the sense that the obvious spam was gone.

The Twitter Engineering Blog spells out what is supposed to happen:

Behind the scenes, the new Discover tab is powered by Earlybird, Twitter’s real-time search technology. When a user tweets, that Tweet is indexed and becomes searchable in seconds. Every Tweet with a link also goes through some additional processing: we extract and expand any URLs available in Tweets, and then fetch the contents of those URLs via SpiderDuck, our real-time URL fetcher.

To generate the stories that are based on your social graph and that we believe are most interesting to you, we first use Cassovary, our graph processing library, to identify your connections and rank them according to how strong and important those connections are to you.

Once we have that network, we use Twitter’s flexible search engine to find URLs that have been shared by that circle of people. Those links are converted into stories that we’ll display, alongside other stories, in the Discover tab. Before displaying them, a final ranking pass re-ranks stories according to how many people have tweeted about them and how important those people are in relation to you. All of this happens in near-real time, which means breaking and relevant stories appear in the new Discover tab almost as soon as people start talking about them.
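
Reading between the lines, the ranking pass amounts to something like the following toy Python sketch. It is my own guess at the logic, with invented names, weights, and data, not Twitter's actual Earlybird, Cassovary, or SpiderDuck code:

```python
# Toy sketch of the Discover pipeline described above. All names, weights,
# and data are illustrative guesses, not Twitter's Earlybird/Cassovary/SpiderDuck code.

from collections import defaultdict


def rank_connections(follows, interactions, user):
    """Score a user's connections by strength (a crude stand-in for Cassovary)."""
    scores = {}
    for friend in follows.get(user, []):
        # naive strength: 1 for the follow itself, plus a count of past interactions
        scores[friend] = 1 + interactions.get((user, friend), 0)
    return scores


def rank_stories(shared_urls, connection_scores):
    """Re-rank URLs by how many people in your circle shared them,
    weighted by how important those people are to you."""
    story_scores = defaultdict(float)
    for friend, urls in shared_urls.items():
        weight = connection_scores.get(friend, 0)
        for url in urls:
            story_scores[url] += weight
    return sorted(story_scores, key=story_scores.get, reverse=True)


follows = {"me": ["alice", "bob"]}
interactions = {("me", "alice"): 5}
shared = {"alice": ["http://example.com/a"],
          "bob": ["http://example.com/a", "http://example.com/b"]}

scores = rank_connections(follows, interactions, "me")
print(rank_stories(shared, scores))  # /a outranks /b: shared by more, and closer, people
```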

My take?

At this moment nearly all the stories in the Discover tab make sense. I wrote about American football yesterday (see Should College Football Be Banned? Or Just Ban The Armor?), so the sports story about Eric LeGrand, a Rutgers defensive tackle who was paralyzed by a game injury, is reasonable. But the Montreal Canadiens getting a new manager? No.

All the tech stories — Spotify, Caterina Fake, iPad, Pebble Watch, Moz — fit my profile, and so does the story about sardines, because I write a lot about food and the environment at Underpaidgenius.com. Online black markets? A good fit. Even the story about London mayoral elections fits because I wrote about Boris Johnson a few times (like this freakish accident video, showing a truck almost killing the mayor).

I will now officially look at Discover daily, like I do Flipboard, News.me, and others.

I wish there were a way to help it learn faster, though, like voting à la Zite and Prismatic.

I get a bang out of being a top contributor on Tumblr’s Tech thread, as a lowly, lowly soloist in the midst of The Atlantic, The Verge, Fast Company, CNet, and IBM’s Smarter Planet.

Curation is increasing in relevance. I think I need to start a regular salon on curation in NYC. Any interest?

The Rise of the Content Strategist - Cheryl Lowry via Flip the Media

http://flipthemedia.com/2012/04/the-rise-of-the-content-strategist/

futuresagency (stowe boyd):

One way to know that tectonic changes are happening in an industry is to see people’s titles change when they aren’t being promoted. Newest example? Editors are becoming Content Strategists, and there is increasing demand for this ‘new’ specialty:

The Rise of the Content Strategist - Cheryl Lowry via Flip the Media

Kristina Halvorson’s Content Strategy for the Web, first published in 2009, has been a big influence, as Peter notes in his post. In her book, Halvorson defines content strategy as “the practice of planning for the creation, delivery, and governance of useful, usable content.” How does this differ, though, from what professional content writers, editors and managers have been doing all along?

I see it as a question of abundance. When I began writing content, creation was the goal. Marketing copy. User guides. FAQs. Help systems. Writers and editors produced and published words, and moving up the chain meant managing an editorial calendar and other writers to produce ever greater sums of copy. As print gave way to the web, this became considerably easier and cheaper to do. Many companies employed (and still employ) a strategy that web usability expert Gerry McGovern refers to as “launch and leave”: produce a ton of content, and then leave it sitting there unmeasured and unmaintained. Clay Shirky calls this abundance a result of post-Gutenberg economics, in which “the cost of producing [content] has fallen through the floor … and so [now] there’s no economic logic that says you have to filter for quality before you publish.”

However, several recent trends have contributed to organizations demanding more from content. The Great Recession, the rise of web analytics, and the voice of the customer amplified by social networks have all given companies more tools and incentive to create and maintain “useful, usable content.” Organizations are now realizing that content ought to earn its keep — it should drive conversion (sales, donations), or reduce call drivers (solve frequent and actual problems customers have). If it doesn’t, it’s just polluting the relevance and searchability of content that does.

So, the content strategist is concerned with the full lifecycle of media, not just production or aggregation. I think this title will absorb the brief rise of ‘content curator’, because it sounds shinier.

Trying to recreate the scarcity of content that used to exist in print — when media outlets controlled not only the creation of news but the platforms through which it was distributed — by using paywalls and subscription apps is fundamentally a losing battle. Many users want that content to be part of a larger digital experience, whether it’s through an aggregation app like Flipboard or through Facebook or Twitter. If your content is not designed to take advantage of that, you will be missing a larger and larger proportion of the audience you need.

Mathew Ingram, responding to new research from Pew, in If you have news, it will be aggregated and/or curated via GigaOM

Reading-and-sharing: nurturing the ties that bind

http://socialabacus.blogspot.com/2012/03/reading-and-sharing-nurturing-ties-that.html

Kate Niederhofer via Social Abacus

I’ve blogged before about Wegner’s notion of the transactive memory, a concept I love about how we get information into our heads (encode), arrange and add context (store), and eventually access when needed (retrieve) *as a group*. In my mind, this is the underpinning of the success that Twitter is. It also helps explain this tendency we have to read-and-share as a means to coordinate our social network. That is, by sharing certain content with specific people, we more effectively encode, store, and retrieve information as a social network. Think of it like really effective curating. Simply by sharing links, we’re making sense out of our expanding networks.

But something else happens when we read-and-share. We create virtual spaces. As the great sociologist Ray Oldenburg might say, we create “a third place.” Places, really. Salons. Sharing links creates places for us to meet and talk about our shared interests. Traditionally a “third place” is a place of refuge. It’s not your home, not your job. So these virtual salons we create let us escape — or augment our reality — while performing social network maintenance: clustering and categorizing our network.

Yes, I believe that by curating we are sharing more than links, although it’s not a space that we define, but a way to share time: to still the time we are in, and share it with others, who experience it themselves.

We are sharing experience: Time is the new space.

I’m not a “curator” – Marco.org

http://www.marco.org/2012/03/12/not-a-curator

I wrote recently about Maria Popova’s promoting some very hard-to-use microsyntax for curation, called the Curator’s Code. The skinny? In principle, I’m down with making a distinction between via and h/t (hat tip), but I don’t think that her symbology, ᔥ and ↬, respectively, will catch on: too hard to use, and they don’t add anything to the well-established via and h/t.

Marco Arment weighs in on the discovery angle:

Marco Arment via Marco.org

I completely disagree with Popova on the value of discovery.

The value of authorship is much more clear. But regardless of how much time it takes to find interesting links every day, I don’t think most intermediaries deserve credit for simply sharing a link to someone else’s work.

Reliably linking to great work is a good way to build an audience for your site. That’s your compensation.

But if another link-blogger posts a link they found from your link-blog, I don’t think they need to credit you. Discovering something doesn’t transfer any ownership to you. Therefore, I don’t think anyone needs to give you credit for showing them the way to something great, since it’s not yours. Some might as a courtesy, but it shouldn’t be considered an obligation.

Every link-blogger has their own standards for when to use a “via” link (or a “hat-tip” — again, I doubt most of us know the difference). I add a “via” if it’s convenient (if I can remember where I found the link) and I probably wouldn’t have seen it from any other sources.

If your standard is never to add a “via” to intermediate linkers, even when I am an intermediate linker, that’s fine with me, too.

And my syntax for adding a “via” link is… a link, often prepended by the word “via”. My readers understand.

[…]

The proper place for ethics and codes is in ensuring that a reasonable number of people go to the source instead of just reading your rehash.

Codifying “via” links with confusing symbols is solving the wrong problem.

The Curator's Code


I totally support the idea of a Curator’s Code, which is basically the use of microsyntax to represent different kinds of attribution in posts and/or tweets:

ᔥ Maria Popova

The system is based on two basic types of attribution, each connoted by a special unicode character, much like ™ for “trademark” and © for “copyright”:

ᔥ stands for “via” and signifies a direct link of discovery, to be used when you simply repost a piece of content you found elsewhere, with little or no modification or addition.

↬ stands for the common “HT” or “hat tip,” signifying an indirect link of discovery, to be used for content you significantly modify or expand upon compared to your source, for story leads, or for indirect inspiration encountered elsewhere that led you to create your own original content.
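
The mechanics are trivial to automate, for what it’s worth. Here is a tiny, hypothetical Python helper that appends the two characters; the symbols are Popova’s, everything else is my sketch:

```python
# Hypothetical helper for the Curator's Code symbols. The two unicode
# characters are the ones Popova proposes; the function itself is my sketch.

VIA = "\u1525"      # ᔥ  "via": a direct repost of content found elsewhere
HAT_TIP = "\u21ac"  # ↬  "hat tip": indirect discovery or inspiration


def attribute(text, source, direct=True):
    """Append a Curator's Code attribution to a post or tweet."""
    symbol = VIA if direct else HAT_TIP
    return f"{text} {symbol} {source}"


print(attribute("Great piece on transactive memory", "@brainpicker"))
print(attribute("This sent me down a Wegner rabbit hole", "@brainpicker", direct=False))
```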

But I think the folks behind this made a few mistakes, ones that will kill the adoption of this Code.

First, there are well-established textual ways to accomplish what is envisioned, like via and h/t. They might have been better off to simply throw their weight behind a more uniform usage of those terms.

Second, the characters they picked for the microsyntax are hard to use. The sideways S, for example, doesn’t appear in Apple’s special characters palette. I tried using their bookmarklet, but it wouldn’t insert into Tumblr’s editor. Wouldn’t it have been better to use some sort of a ‘v’ for via, like shift-option-V = ‘◊’? Or maybe some sort of arrow, like ‘←’?

I am emotionally in favor of this, but as a microsyntax observer, I don’t think it will catch on.

Where’s The Bottom For Newspapers?

Here’s a big story, and the meta-story behind it:

The Collapse of Print Advertising in 1 Graph, Derek Thompson via The Atlantic

Print newspaper ads have fallen by two-thirds from $60 billion in the late-1990s to $20 billion in 2011.

[…]

Don’t just blame the bloggers. For decades, newspapers relied on a simple cross-subsidy to pay for their coverage. You can’t make much money advertising against A1 stories like bombings in Afghanistan and school shootings and deficit reduction. Those stories are the door through which readers walk to find stories that can take the ads: the car section, the style section, the travel section, and the classifieds. But ad dollars started flowing to websites that gave people their car, style, travel, or classifieds directly. So did the readers. And down went print.

The decline is stunning. “Last year’s ad revenues of about $21 billion were less than half of the $46 billion spent just four years ago in 2007, and less than one-third of the $64 billion spent in 2000,” Mark Perry writes. In the next few years — and hopefully, in the next few decades (I like print!) — we’ll see papers and magazines continue to invest in their websites and find advertising and pricing models that support journalism independently. Otherwise, one hopes that rich people continue to be fond of paying for the production of great writing on bundles of ink and paper.

I don’t think there is any business model that will prop up print newspapers as we know them. As media is being exploded into a thousand bits, the 20th century model of newspaper journalism is increasingly obsolete. Something else will come along, some sort of networked journalism.

Consider the way I read about this graphic: I saw a piece in my Tumblr stream by Futureamb, quoting a fragment of a Business Insider piece by Henry Blodget, where Futureamb actually didn’t add even a comment, but was curator zero for me. I then looked at the Blodget piece: he had a few things to add, but wasn’t the originator of the graph. The trail led to Derek Thompson at The Atlantic, who was citing Mark Perry’s graphing of data from the Newspaper Association of America. Derek added real value, and an extra graph.

Wow! The Wall Street Journal is a singular institution since the majority of its subscribers are businesses, not individuals. All the other major US papers are falling like stones.

And now, I am adding my 2¢, which is likely how you are seeing this.

My point is that this trail of interactions is how we are increasingly experiencing the ‘news’, and it jumps from place to place, outside the boundaries of the newspapers officially publishing these bits of the ‘story’. Newspapers are built top-down — that’s how most businesses are oriented — but that doesn’t match the way that information transfer works in a networked world.

The social object — the information embedded in these graphs — is handed around, from curator to curator, each adding something, taking away, looking at it from a different angle. It’s additive and subtractive, kind of like Wikipedia, but distributed across a bunch of independent, cooperative posts instead of embodied in a consolidated Wikipedia entry.
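
You could model that trail as nothing more than a chain of posts, each pointing back to where it found the story and noting what it added. A toy Python model, using the chain from this very post:

```python
# Toy model of a curation chain: each post points "via" to where it found
# the story and records what it added. The chain mirrors the trail above.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Post:
    author: str
    added: str                     # what this curator contributed
    via: Optional["Post"] = None   # where they found it


naa = Post("Newspaper Association of America", "the underlying ad-revenue data")
perry = Post("Mark Perry", "the graph", via=naa)
thompson = Post("Derek Thompson", "analysis, plus an extra graph", via=perry)
blodget = Post("Henry Blodget", "a few comments", via=thompson)
futureamb = Post("Futureamb", "a quoted fragment, no commentary", via=blodget)
me = Post("this post", "my 2 cents", via=futureamb)

# Walk the chain back to the source:
node = me
while node:
    print(f"{node.author}: {node.added}")
    node = node.via
```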

We need new social technologies — a step past today’s curation tools — to support this new sprawling, liquid media world. That’s how we’ll experience news in the near future.

I am betting that newspapers will fail to make a painless transition to a new business structure before hitting bottom and fracturing into a million pieces. After that crash, some of those pieces might begin to create a new networked, post-journalism model of news organization, one that doesn’t have papers in it.

Habits Are The New Viral: Why Startups Must Be Behavior Experts - Nir Eyal via TechCrunch

http://techcrunch.com/2012/02/26/habits-are-the-new-viral-why-startups-must-be-behavior-experts/?grcc=33333Z98

Eyal makes a good argument: that virality — users inviting their friends to try an app — is less important (and more annoying) than habitual use of apps: habit is the new viral.

Nir Eyal via TechCrunch

The Curated Web Will Run On Habits

Increasingly, companies will become experts at designing user habits. Curated Web companies already rely on these methods. This new breed of company, defined by the ability to help users find only the content they care about, includes such white-hot companies as Pinterest and Tumblr. These companies have habit formation embedded in their DNA. This is because data collection is at the heart of any Curated Web business and to succeed, they must predict what users will think is most personally relevant.

Curated Web companies can only improve if users tell their systems what they want to see more of. If users use the service sparingly, it is less valuable than if they use it habitually. The more the user engages with a Curated Web company, the more data the company has to tailor and improve the user’s experience. This self-improving feedback loop has the potential to be more useful – and more addictive — than anything we’ve seen before.
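
The feedback loop Eyal describes is easy to caricature in code. This toy Python version (my own illustration, not anything Pinterest or Tumblr actually runs) shows how each interaction biases what gets surfaced next:

```python
# Toy illustration of the self-improving feedback loop: every interaction
# updates an interest model, which in turn biases what gets shown next.
# Purely illustrative; no real Curated Web service works this simply.

from collections import Counter


class CuratedFeed:
    def __init__(self):
        self.interest = Counter()  # topic -> engagement weight

    def record_engagement(self, topic):
        """Each click, like, or repin tells the system what the user cares about."""
        self.interest[topic] += 1

    def rank(self, items):
        """Items on topics the user has engaged with float to the top."""
        return sorted(items, key=lambda item: self.interest[item["topic"]], reverse=True)


feed = CuratedFeed()
feed.record_engagement("design")
feed.record_engagement("design")
feed.record_engagement("food")

items = [{"title": "Hockey trade rumors", "topic": "sports"},
         {"title": "A new sardine recipe", "topic": "food"},
         {"title": "Grids and typography", "topic": "design"}]

print([item["title"] for item in feed.rank(items)])  # design, then food, then sports
```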

However, I think Eyal’s characterization — helping users ‘find only the content they care about’ — is too limited. Steve Jobs said that users don’t know what they want, so, by extension, they don’t know what they care about.

Getting back to Eyal’s habituation remark, these new tools will have to meld into the user’s existing behaviors and amplify them in some adjacent way.

For example, I’ve started to experiment with the use of Timely.is instead of Bitly as a way to publish tweets. It ‘fits the hand’ in the sense that it works much like Bitly: a bookmarklet in the browser that creates an editable tweet with a shortened URL back to the source. Like Bitly, it provides stats on clickthroughs, but adds one additional feature: the ability to queue tweets and have them post over time.

So, I am able to develop a new Timely habit because it is similar to my habituated use of Bitly, but adds an additional capability. And there is a viral vestige: the promotion of Timely in the footer of the tweets.
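
Mechanically, the queue-and-drip part is just a scheduled drain of a list. A hypothetical sketch of the pattern (none of these names are Timely's or Bitly's actual API) might look like this:

```python
# Hypothetical sketch of "queue now, post later" -- not Timely's or Bitly's actual API.

import time
from collections import deque


class TweetQueue:
    def __init__(self, post_fn, interval_seconds=3600):
        self.queue = deque()
        self.post_fn = post_fn          # whatever actually sends the tweet
        self.interval = interval_seconds

    def add(self, text, short_url):
        """Queue an editable tweet with a shortened URL back to the source."""
        self.queue.append(f"{text} {short_url}")

    def drain(self):
        """Post queued tweets one at a time, spaced out over the interval."""
        while self.queue:
            self.post_fn(self.queue.popleft())
            if self.queue:
                time.sleep(self.interval)


q = TweetQueue(post_fn=print, interval_seconds=1)  # print stands in for posting
q.add("Where's the bottom for newspapers?", "http://bit.ly/example")
q.add("Habits are the new viral", "http://bit.ly/example2")
q.drain()
```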
