If you haven’t, I suggest reading Postman’s “Amusing Ourselves To Death” (from 1985) which discusses this idea that TV news is actually entertainment.
I recently downloaded Index Card for iPad. It’s a pretty cool app that allows users to manipulate text snippets in the form of both index cards and text. This makes it attractive to screenwriters, fiction authors, and maybe anyone who likes to compose small bits of text into a whole. I mentioned the app in A Long Journey To Find A Better Way To Write Notes.
The metaphor of piling and reordering index cards is very attractive to me, but, alas, things are not going well.
Index Card for iPad
Why? First, the app is only available on iPad, and I do almost all of my research and writing on Mac. So, I looked into the app’s options for importing, which are limited to Scrivener docs (through a complex series of steps) and index card formatted files, which is the native format of the app and perhaps others.
I thought I would ask about possible support for importing text files, via Dropbox, thinking that others might want a similar capability. Here’s the transcript of my interaction with a customer support person, who never identifies himself (for some reason, I am convinced this is a guy, and the developer of the tool):
Stowe Boyd, Jun 28 08:34 am (PDT):
Would make Index Card much more useful if I could import a text file from Dropbox as the content of a new card. Even better: import all the text files in a Dropbox folder as index cards.
I am using TextDrop (similar to Notational Velocity) and would like to be able to manipulate text files in Index Card, after creating them in TextDrop.
DenVog Support, Jun 28 08:43 am (PDT):
As you might guess, Index Cards have a very specific format in order to define individual cards, and accommodate title, text, notes, labels, etc. Importing a raw text file would throw this out the window. If you’re just making a separate plain text file for every card, then why not start with the Index Card app in the first place?
You can always copy and paste text between the apps. Not convenient, but I really don’t have a better suggestion if you are wanting to start your writing in something other than Index Card or Scrivener.
Stowe Boyd, Jun 28 01:13 pm (PDT):
But there are many reasons that text files might have been created by other tools — email, exports, word docs, etc. Certainly, everything is not created in Scrivener and Index Card?
DenVog Support, Jun 28 04:10 pm (PDT):
That seems like a rhetorical question, so I’m not sure what kind of response you are looking for.
You can obvious create files in any number of programs. Index Card won’t know what to do with a Word file, just like Word won’t know what to do with a .indexcard file. Could I engineer a way for Index Card to open a Plain Text file? Probably. But I’m not sure that many users would find it that useful since Plain Text does not have a concept of title, synopsis, label, etc. Dumping it into the synopsis of a single card doesn’t seem much better than cut-and-paste.
Given that you present yourself as a “clairvoyant” on your web site, I’m sure you already knew I was going to say that. :P
Your blog presents a very unique, specific workflow that you have in mind. Even if Index Card did import Plain Text, it appears it would still be a long way from what you’re looking for. I’m sorry that Index Card is not a better fit for you.
Anyway, leaving aside the trivial implementation issues (like mapping first line or filename to be the title of an index card, etc.), it’s the entire tone of this interaction which is annoying. The guy is so tin-eared that he can’t hear how condescending and unhelpful he is being.
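For what it’s worth, the “trivial” mapping is easy to sketch. The .indexcard format is private to the app, so the output schema below (title, synopsis, label, notes) is purely hypothetical, but it shows how a first-line-or-filename-to-title conversion of a Dropbox folder of text files could work:

```python
# Hypothetical sketch: turning a folder of plain text files into cards.
# The real .indexcard format is undocumented, so the JSON schema here is
# invented purely to illustrate the mapping: first line (or filename)
# becomes the title, the rest becomes the card body.
import json
from pathlib import Path

def text_file_to_card(path: Path) -> dict:
    lines = path.read_text(encoding="utf-8").splitlines()
    title = lines[0].strip() if lines else path.stem
    body = "\n".join(lines[1:]).strip()
    return {"title": title, "synopsis": body, "label": None, "notes": ""}

def folder_to_cards(folder: Path) -> str:
    cards = [text_file_to_card(p) for p in sorted(folder.glob("*.txt"))]
    return json.dumps(cards, indent=2)
```

The point is not the ten lines of code; it’s that none of this requires the app to understand Word files or anything exotic, only plain text.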
I don’t want to overgeneralize, but this guy is just an extreme case of a general problem, and I hope that by bringing to light the most egregious examples we can shove the world in a better direction, one tin-eared idiot at a time.
Update: 10:26 am
I wrote in one last time, with a link to this post:
Stowe Boyd, Jun 29 05:10 am (PDT):
Thanks for the help!
DenVog Support, Jun 29 06:20 am (PDT):
Instead of just saying “no”, I was trying to offer you some insight as to why. While you are free to disagree, I will not try to carry on a dialog with someone who resorts to name calling. You will not be receiving further replies from me.
So, he completely avoids the issue of his unhelpful customer support. I wonder how many other people he’s pissed off?
Tumblr has introduced a new way to promote posts: pinning them on other people’s dashboards, which means a pinned post will show up at the top of every page in your followers’ dashboards.
Hmm. Sounds like spam.
As Whitneymcn says:
This doesn’t feel like a good idea.
Still, it’s worth five bucks to try it out. Which is to say that I’m paying $5 to have this post stuck on your dashboard, not mine, for the several seconds it takes to un-pin it.
I’ve been plenty wrong about stuff like this over the years, but this doesn’t feel like a good idea.
It would be better if a post became pinned because a lot of people liked it, or because of some other emergent property, not just payola. I bet ‘pinola’ will change pretty quickly to something else.
It’s almost ridiculous how much effort and time I have put into trying to find a better way to keep track of the snippets of writing that form the foundation of my life.
The great majority of what I write is public, and for that I use Tumblr, principally. That part is simple.
But I am constantly at work on other projects that are intended to be published in other ways, and until recently the fragments of writing that I produce — or the snippets of text that I collect — were all over the place. Some as text documents on my hard drive, others in Google docs, others as posts in work media tools. A total mess.
A few weeks ago, as I was starting to work on The Business Of Social Business, an ebook to be published by GigaOM’s new publishing arm, I started to collect quotes and observations from a small group of smart people. These came from online forums, as well as notes I was copying during interviews. Since I wasn’t planning on writing the book as a single monolithic document, I realized I needed a few things if I were to research and write the book in a sane fashion:
- A tool (or tools) to manage fragments, thoughts, and quotes.
- A tool that lets me easily edit fragments,
- add some lightweight styling to the text,
- possibly publish to the web, or export in various formats, and
- simply organize the bits and pieces.
- Preferably, the solution would integrate with Dropbox, so I could potentially work on multiple devices, although that requirement was more negotiable.
- I had envisioned using an ‘index card’ like interface, although that wasn’t based on any previous experience.
- And, although I was motivated principally by the book, I thought it would be best if the tool would support more than one project at a time, as well.
I didn’t think this would be too tall an order. But it became a personal hell, because it seemed like none of the tools I heard or read about would match my personal way of doing things, or the way I would like to do things if only I had the tools to do so.
I will start with the bottom line first: I finally — only last week — stumbled upon TextDrop, which is a browser-based web application that is integrated with Dropbox in a direct way. I think this was the first review I saw:
Matthew Guay, TextDrop: An Online Text Editor for Your Dropbox Files
A native app for plain text writing will usually let you edit any plain text file on your computer, and save new or edited files in any folder as you’d expect. You can then copy the file onto a flash drive, edit it in another app, post it on your website, or anything else you want. That’s the beauty of plain text: it works anywhere, and you’ll never have to worry about losing what you wrote as long as you have the files.
Most writing apps online, however, store your text in their own database, making it hard to save what you’ve written as a plain text file and almost impossible to sync to your computer and edit in other apps without resorting to copy and paste. TextDrop is a new web app that turns this totally around, letting you edit and create plain text files in your Dropbox account, right in your browser. All your files are safe and synced with Dropbox, and you’ve got all the benefits of a minimalist writing app in your browser. It’s like a writer’s dream come true.
And, with the exception of the index card UX metaphor, I am happily using TextDrop, although ‘dream come true’ might be pushing it a bit.
So, now having gotten to the end, let me start at the beginning.
I started by looking for tools that supported text fragments and the index card metaphor.
Index Card for iPad looked like a likely candidate, but the iPad is too limited in support for typing, and there is no web or mac version. I fooled with it, but decided that I would need to do my research and writing elsewhere, even if I were to use it downstream for composing fragments into sections of the book.
Scrivener is widely regarded as a great tool for book writing, but I found it overly complicated and finicky. Also, Scrivener has a private database, so there is no easy way to sync the elements of a book and use other tools to edit or create them. There is a so-called integration with Simplenote, but it’s a kludge.
I tried out Ulysses, which seemed simpler than Scrivener, but shared some of the same problems.
I considered using Simplenote, which is a very simple web-based text editor, one that can integrate with Dropbox and which has clients on many devices (iPad shown below), but it has a few issues, the biggest being this:
- Simplenote’s integration with Dropbox is based on the creation of a single Dropbox folder, called ‘Simplenote’, and all the notes are managed there as text files. I really wanted a tool that would be able to edit text files across my entire Dropbox directory, not just in one folder.
- Simplenote does have tags for notes, and clients on all the devices I use, but more than that I wanted to be able to mark up text a bit, and Simplenote doesn’t support that.
I read about Notational Velocity, and I really liked everything about it. It seemed to meet all my needs except one glaring defect: while you can opt to work on text files in a Dropbox folder, you can only have a single database/folder. Other than that, great features:
- UX is very smart: the same text area is used to create new notes and search against existing ones. If the text entered doesn’t match an existing file, pressing return leads to it being created.
- Supports Markdown, a simple text-based approach to styling text, and also can be exported as HTML-based styled text.
A variant of Notational Velocity, called nvALT, adds extra features, but inherits all the same limitations, principally the issue with a single database/folder.
So, I was stuck: I liked the user experience of Notational Velocity, but wanted to be able to roam from one Dropbox folder, where I might be storing the fragments for the book, to a second one, where I could be amassing text fragments for a report, or a presentation.
And then I learned about TextDrop, which was a/ inspired by Notational Velocity, but b/ allows users to edit anywhere in their Dropbox account.
So, now I am set, and productively working on several projects at once.
There is a stubborn logical fallacy at work in the world of political forecasting, and that is a belief in the inherent determinism of the universe. As Karl Popper wrote in The Open Universe, scientific determinism is
the doctrine that the structure of the world is such that any event can be rationally predicted, with any desired degree of precision, if we are given a sufficiently precise description of past events, together with all the laws of nature.
Determinism has been pretty well debunked, except in fields like celestial mechanics, where long-term projections of the movements of planets and stars are possible because of the orderliness of the systems involved. But our more modern investigations into complexity and chaos have revealed that many systems are deeply and profoundly unpredictable. Most suitably interesting systems are subject to high degrees of randomness, and their state at some point in the future is highly dependent on their state in the past, which is probably too complex to measure.
But that doesn’t stop political ‘scientists’ from persisting in prediction:
Jacqueline Stevens, Political Scientists Are Lousy Forecasters via NYTimes.com
[…] in the 1980s, the political psychologist Philip E. Tetlock began systematically quizzing 284 political experts — most of whom were political science Ph.D.’s — on dozens of basic questions, like whether a country would go to war, leave NATO or change its boundaries or a political leader would remain in office. His book “Expert Political Judgment: How Good Is It? How Can We Know?” won the A.P.S.A.’s prize for the best book published on government, politics or international affairs.
Professor Tetlock’s main finding? Chimps randomly throwing darts at the possible outcomes would have done almost as well as the experts.
These results wouldn’t surprise the guru of the scientific method, Karl Popper, whose 1934 book “The Logic of Scientific Discovery” remains the cornerstone of the scientific method. Yet Mr. Popper himself scoffed at the pretensions of the social sciences: “Long-term prophecies can be derived from scientific conditional predictions only if they apply to systems which can be described as well-isolated, stationary, and recurrent. These systems are very rare in nature; and modern society is not one of them.”
Some political scientists have started to pay attention to the forms and norms of so-called future studies, and to consider alternative scenarios for political outcomes, rather than attempting to apply deductive or inductive logic to predict the future.
Personally, I find hope in abductive reasoning or inference, or as its first proponent, Charles Sanders Peirce, called it, ‘guessing’. Here’s a fairly concise definition from the Ohio State University Laboratory for Artificial Intelligence Research:
Abduction or Inference to the Best Explanation is a form of inference that follows a pattern like this:
D is a collection of data (facts, observations, givens),
H explains D (would, if true, explain D),
No other hypothesis explains D as well as H does.
Therefore, H is probably correct.
The strength of an abductive conclusion will in general depend on several factors, including:
- how good H is by itself, independently of considering the alternatives,
- how decisively H surpasses the alternatives,
- how thorough the search was for alternative explanations, and
- pragmatic considerations, including
- the costs of being wrong and the benefits of being right,
- how strong the need is to come to a conclusion at all, especially considering the possibility of seeking further evidence before deciding.
That the strength of an abductive conclusion ‘will in general’ depend on these factors means that it should depend on these factors, and that insofar as we are intelligent creatures, our conclusions will actually depend on these factors.
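To make the pattern concrete, here is a toy sketch of inference to the best explanation. Everything in it — the observations, the hypotheses, and the coverage table — is invented for illustration, and real abduction also weighs prior plausibility and the cost of being wrong, which a simple coverage count ignores:

```python
# Illustrative sketch of abductive "inference to the best explanation".
# All data, hypotheses, and the coverage table are invented examples.

def best_explanation(data, hypotheses, explains):
    """Pick the hypothesis that explains the most observations.

    `explains(h, d)` returns True if hypothesis h accounts for datum d.
    """
    scored = [(sum(explains(h, d) for d in data), h) for h in hypotheses]
    scored.sort(reverse=True)
    (best_score, best), (runner_score, _) = scored[0], scored[1]
    # Abduction is only a guess: the margin over the runner-up is one
    # crude proxy for how "decisively" H surpasses the alternatives.
    margin = best_score - runner_score
    return best, best_score, margin

data = ["wet streets", "wet lawns", "dry car under carport"]
hypotheses = ["it rained", "a street sweeper passed"]
coverage = {
    ("it rained", "wet streets"): True,
    ("it rained", "wet lawns"): True,
    ("it rained", "dry car under carport"): True,
    ("a street sweeper passed", "wet streets"): True,
}
h, score, margin = best_explanation(
    data, hypotheses, lambda h, d: coverage.get((h, d), False))
```

Here ‘it rained’ wins because it covers all three observations while the street sweeper covers only one; the margin between them is the crude analog of how ‘decisively H surpasses the alternatives’.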
And the creation of scenarios to consider possible futures can be approached as the creation of different sets of data, D, in the formulation above, and the attempt to come up with the most plausible hypothesis, H, to explain how we might have arrived at the corresponding data set.
In a way this can be thought of as generating theories to explain the data, in a circular reasoning process.
My feeling is that most economists are grounded in rules they are trying to prove, and are operating deductively or inductively, inferring the general from the specific or the specific from the general. Abduction can be considered reasoning from the consequent to the antecedent, from outcomes to precipitating causes.
It’s worth noting that there is no clockwork mechanism here, no guaranteed success in abduction: it’s guessing, after all.
And for the purposes of future studies — conjecturing hypothetical situations in the future and possible ways that events might have unfolded to reach those states — it is doubly likely for a guess to unravel. Add to the mix the growing uncertainty of the post-normal world, characterized by VUCA — volatility, uncertainty, complexity and ambiguity — and abduction is stretched even thinner.
But I still believe it’s the only tool we have that extends our thinking, instead of simply applying what we already know. Abduction is generative — it creates theories — instead of simply reusing existing rules and case reasoning. Let’s hope the political scientists get the memo.
Thomas Friedman is doing his rope-a-dope again: blaming the victims — us — for the terrible political world we are subjected to. And all because of social networks:
Thomas Friedman, The Rise of Popularism via NYTimes.com
In 1965, Gordon Moore, the Intel co-founder, posited Moore’s Law, which stipulated that the processing power that could be placed on a single microchip would double every 18 to 24 months. It’s held up quite well since then. Watching European, Arab and U.S. leaders grappling with their respective crises, I’m wondering if there isn’t a political corollary to Moore’s Law: The quality of political leadership declines with every 100 million new users of Facebook and Twitter.
The wiring of the world through social media and Web-enabled cellphones is changing the nature of conversations between leaders and the led everywhere. We’re going from largely one-way conversations — top-down — to overwhelmingly two-way conversations — bottom-up and top-down. This has many upsides: more participation, more innovation and more transparency. But can there be such a thing as too much participation — leaders listening to so many voices all the time and tracking the trends that they become prisoners of them?
The answer, Mr Friedman, is no.
And, oh, by the way, when you talk about the participative nature of the social web, consider the term many-to-many instead of two-way. We, the people, are involved in a conversation among ourselves, and if curmudgeons like you or our self-obsessed political leaders want to get involved with that, fine.
Friedman springs a relatively interesting term on us:
Indeed, I heard a new word in London last week: “Popularism.” It’s the über-ideology of our day. Read the polls, track the blogs, tally the Twitter feeds and Facebook postings and go precisely where the people are, not where you think they need to go. If everyone is “following,” who is leading?
Leadership today is — as always — linked to having a following, Mr Friedman. And before you can lead the people somewhere you have to start where they are.
Friedman goes on with the craziness:
And then there is the exposure factor. Anyone with a cellphone today is paparazzi; anyone with a Twitter account is a reporter; anyone with YouTube access is a filmmaker. When everyone is a paparazzi, reporter and filmmaker, everyone else is a public figure. And, if you’re truly a public figure — a politician — the scrutiny can become so unpleasant that public life becomes something to be avoided at all costs.
Wait a second: are we all public figures now? What’s with the shift to ‘real’ public figures? What point have you made? Did I miss something?
Alexander Downer, Australia’s former foreign minister, remarked to me recently: “A lot of leaders are coming under massively more scrutiny than ever before. It doesn’t discourage the best of them, but the ridicule and the constant interaction from the public is making it more difficult for them to make sensible, brave decisions.”
Oh, now I see. Because we are looking more closely at what our ‘leaders’ say and do, they are having a hard time being brave. So we should go back to being a mass audience, watching TV, and not whispering among ourselves.
So it’s our fault that our fearless leaders are no longer fearless, and our fault that they can’t rein us in to work together to save the world, and our fault that we don’t have extraordinary leaders.
Yes, let’s blame social tools and the spin they have put on human society. Let’s not talk about the precariousness of the flattened world that you championed, Mr Friedman, where offshoring is treated like a law of nature, and the externalization of true costs is a first-order predicate in the economics that led to the econolypse we are still living in.
The problem we have isn’t that our leaders are afraid to tell the truth. Our problem is that our leaders have accepted inequity and injustice, and we, the people, can apparently find no way toward solidarity. But don’t blame social tools for our social ills: they are a lot older and deeper than Facebook and Twitter.
“The design of technology is thus an ontological decision fraught with political consequences.” — Andrew Feenberg, Transforming Technology: A Critical Theory Revisited
If I didn’t blog this, I’d have to be put out to pasture and shot as a mercy.
Pickard starts by making the case that we are living in post-normal times, a time of volatility, uncertainty, complexity, and ambiguity:
[…] we have the dubious honour of living in post-normal times. Hooray for us.
As used here, the concept of the ‘post-normal’ originates with British philosopher Jerry Ravetz and Argentinian mathematician Silvio Funtowicz. In their work on science policy and risk, the term ‘post-normal’ is used to describe situations where ‘facts are uncertain, values in dispute, stakes high and decisions urgent’ (Ravetz, 1999). Does that sound familiar? Scholar and commentator Ziauddin Sardar suggests it should, for ours is an age
characterised by uncertainty, rapid change, realignment of power, upheaval and chaotic behaviour. We live in an in-between period where old orthodoxies are dying, new ones have yet to be born, and very few things seem to make sense. A transitional age, a time without the confidence that we can return to any past we have known and with no confidence in any path to a desirable, attainable or sustainable future. (Sardar, 2010)
More than anything else, this is the background hum of the 2010s. For a second opinion, we need look no further than the feeder schools and colleges of the American military-industrial establishment, where the talismanic acronym ‘VUCA’ has come to identify operational contexts of notable ‘volatility, uncertainty, complexity and ambiguity’. VUCA situations change quickly and in unexpected ways, with participants befuddled by information overload and the fogs of war, and the risk of second-order blowback more deadly than any enemy.
Whether on or off the battlefield, 2011 was very VUCA.
The world — our sprawling, interconnected, and unimaginably complex world society — has moved past the threshold into the post-normal. Note that in the post-normal, we still have domains in which the old normal seem to hold, or hold in part. There is no clear transition, and so we can be caught wrong-footed at every turn, unless we can adopt new frames of reference and new ways of seeing.
Pickard quotes Montuori’s Beyond postnormal times: The future of creativity and the creativity of the future, but not at length, although he seems to be exploring the same ideas regarding the end of the West’s concept of the future:
In 2010, 2012 has become the mythical wall where the imagination of the West comes to an abrupt end. From “hard” science-fiction to “hard” techno-psychedelic mysticism-fact, extraterrestrial visions interwoven with chaos theory and neurotheology. 2012 is symbolically the point at which the imagination fails. Where do we go from here? What can the West dream of?
Perhaps we have reached the limits of futurism: have we passed peak future?
Whatever else, if we are to conceive of some future — one in which we find options other than boiling the oceans in a runaway greenhouse hell planet — we must start with a new synthesis in our thinking, one that encompasses alternate visions of the future other than collapse and dystopia.
Montuori rightly mentions Von Foerster’s Ethical Imperative — act always so as to increase the number of choices — as an example of the ‘complex ethics’ that might be necessary to overcome postmodern ennui and solipsism. The decline in faith, the break in identification with trusted organizations (government, religion, unions, nationalism), and the apparent collapse of the social contract all contribute to what I call post-normal traumatic stress syndrome: we are stressed beyond the breaking point by the post-normal world, but it’s not in the past. We are not post the stress: it’s an ongoing state; permanent, and seemingly inescapable.
If we are to adapt to the post-normal we need new ways to see and think.
Pickard makes a case for new skills for what he calls the ‘gonzo futurist’.
I dislike the term because it is badly patched together. Gonzo is lifted from Hunter S Thompson’s gonzo journalism, where the reporter is directly in the story, and maybe *is* the story, a reversal of the objective mumbo jumbo of conventional journalism.
I agree that we need to drop the outmoded thinking of old futurism — the association with fanciful science fiction, techno-utopianism, and the colonization of the future by the past — but ‘gonzo’ doesn’t capture that. The best path is to drop ‘futurism’ instead of dropping a new adjective in front of it.
I have been guilty of this as well, although I have been modifying ‘futurist’ with a new prefix: postfuturist. But no more.
There’s a long list of skills and traits that Pickard lays out as if he is defining the gonzo futurist, but it’s more like describing a style inherent in a gonzo culture. By all means read it. But it doesn’t address the fundamental issue that futurists — or whatever we come to call them — are attempting to connect the dots in a puzzle that others often don’t even see. And then we share the puzzled-out puzzle and its implications.
My friend Jamais Cascio says that ‘deep generalists’ are likely the best suited to consider the future implications of today’s realities, and I buy that, but it doesn’t help recast futurism in a way more relevant to post-normal times.
Stuart Candy characterizes this as ‘the search for killer imps’ — implications — and zooms in on the difference between applications and implications. Candy quotes Tony Dunne and Fiona Raby, from Design For Debate (note: the question marks are in the original):
Design today is concerned primarily with commercial and marketing activities but it could operate on a more intellectual level. It could place new technological developments within imaginary but believable everyday situations that would allow us to debate the implications of different technological futures before they happen.
This shift from thinking about applications to implications creates a need for new design roles, contexts and methods. It?s not only about designing for commercial, market-led contexts but also for broader societal ones. It?s not only about designing products that can be consumed and used today, but also imaginary ones that might exist in years to come. And, it?s not only about imagining things we desire, but also undesirable things — cautionary tales that highlight what might happen if we carelessly introduce new technologies into society.
Candy goes on:
Traditionally, design practice has been preoccupied with the former, whereas theirs, and that of their Design Interactions students, is more concerned with the latter. And it seems to me that this maps rather well onto what we examined a moment ago: “futures in support of design” amounts to an orientation to applications, while “design in support of futures” can be seen as pointing towards implications.
Applications are necessarily convergent — concerning that part of the design process where ideas, intentions and constraints culminate and are distilled into solutions, embodiments of the exploration process. Implications, on the other hand, are intrinsically divergent, multiplicative, compound; not only are there alternative futures, but there are first, second, and third-order effects (and so on, as far as you care to go) for any given innovation or development you might name.
What futures uniquely contributes to the exploration of implications is a framework for the systematic exploration of these contingencies; ways of managing the mess of possibilities.
This brings us closer, perhaps, to thinking about where futures thinking might be better positioned: in the world of design, considering the implications of technologies, innovations, and the possible negatives lurking in the law of unintended consequences.
So, I am dropping futurist and even postfuturist. I will instead be adopting more of a speculative design edge to my work: more scenarios and more imagery. More texture to the lines drawn between the dots.
For example, a thumbnail of a scenario of the near future:
Brighton UK, 1 October 2012, Meaning 2012 conference
I am approached during a coffee break by an animated red-headed woman in her late twenties or early thirties, who introduces herself as a user experience designer. Her name is Katje. “You described yourself as a ‘speculative designer’ in your presentation. What does that mean?” she asks.
I hesitate, since this is a UX person I am talking with: a designer. Then I offer this.
‘I started as a technologist, designing and building software, especially software tools: software to help programmers collaborate, and manage the processes around creating programs. Then I became an analyst, writing about the pros and cons of different software products in the context of business use, focusing primarily on social tools. Along the way I became an advisor to software companies, helping them to think about the near future — 3, 6, or 18 months ahead — and what paths they might want to take in their product direction. In June 2012, I finally came to the realization that I’m actually involved in speculative design, thinking about the societal or business implications of designed innovations — like instant messaging or social networks — and sharing those conjectures with clients and the public at large. Design isn’t just contriving an object: it also involves considering the impacts of its adoption on its users, individually or as a society.’
She considered this and said, ‘I am going to have to think more about the implications of the user experiences I am designing. I have to think about how they change the user after they turn off the app.’
Stowe Boyd, speculative designer, researcher-at-large, former futurist.
“So - high intelligence is very rare (and some societies have too low an average intelligence to generate more than a tiny proportion of very intelligent people).
Within this tiny group of highly intelligent people, on top of all this, to get the coincidence of a creative way of thinking with a sufficiently persevering personality type is very rare.
And among this small percentage of a small percentage, there are the workings of sheer luck, there is the higher than normal risk of (self) sabotage by mental illness and addiction, there are the problems of a higher than usual probability of an abrasive or antisocial personality - and (as Murray identifies) the likelihood that for a person to aim very high requires a belief in transcendental values (the beautiful, the truth, virtue) - and that some societies (such as our own) lack this belief.
Put all these together and it is clear why in all societies genius is rare; and why genius is completely absent from most societies.” — Bruce Charlton, Why Genius Is So Rare
The second screen is being recognized as one of the most innovative and engrossing ideas happening:
The Age of Mobile Creativity: Are We There Yet? - Douglas Quenqua via Co.Create
We asked some creatives who work in mobile to name their favorite executions of the past year or so. The consensus? Don’t look for amazing visuals or stories. When it comes to mobile, the most creative ads are the ones that use the technology to forge connections.
“The connectivity stuff is where I think it’s getting really interesting,” says Tom Eslinger, digital creative director of Saatchi & Saatchi Worldwide and head of the mobile jury at Cannes.
“Mobile that connects you to your community when you’re watching your favorite TV show so you can hang out and watch together,” he says, seemingly describing any number of social TV apps. “There’s this really high-utility stuff—TV and entertainment has a lot of it.”