I loved Minority Report’s gestural interface as a sci-fi representation of what a police state might use to watch us, given the ability to move through a nearly infinite amount of data — and time — searching for clues.
Apparently, that interface is not just the stuff of Hollywood: John Underkoffler, the guy who mocked up that experience for the movie, has been off actually building the system he was literally hand-waving into existence.
MG Siegler thinks this represents the future of computing. I disagree, but first, MG’s thoughts:
While we may not have been at this year’s TED conference, apparently, Oblong was. And apparently, it wowed the crowd. And it should have. If you’ve seen the movie Minority Report, you’ve seen the system they’re building.
No, really. The co-founder of Oblong, John Underkoffler, is the man who came up with the gesture-based interface used in the Steven Spielberg movie. And now he’s building it in real life.
The demo I saw a couple years ago was stunning, but it was still just a video. Apparently, at TED, the audience got to see it in action. NYT’s Bits blog detailed some of it in a post yesterday. For those not at TED, Oblong has also made a few demo videos in the past, which I’ll embed below. Again, this is Minority Report.
Oblong’s coming out party couldn’t come at a better time. Following the unveiling of Apple’s iPad, there has been a lot of talk about the future of computing at a fundamental level. That is to say, after decades of dominance by the keyboard and mouse, we’re finally talking about other, more natural, methods of input. The iPad is one step to a multi-touch gesture system (as is this awesome 10/GUI demo), but this Oblong system is the next step beyond that.
I don’t believe that huge displays built on petabytes of information — like the ones Cruise was surfing — are likely to be the prototypical user experience for normal people in the near term. In some narrowly defined industries — military, cinematography — such displays may be temporarily of interest. But the future of user experience is a logical extension of what we have been seeing in consumer electronics: a continued movement to the small, the mobile, and the personal.
Yesterday, I posted a Nokia video that I think is much more true to life. I reproduce it here again, as the complete video along with a still pulled from it.
one screenshot from the video
The Nokia example is based on a few assumptions:
Augmented reality glasses will become the standard user display — Instead of huge panels hung on walls, people will wear augmented reality glasses, which will display images on the inside of the lenses, providing access to various sorts of information.
Displays will become less complex than today’s file/folder/desktop jumble, and interaction will be based on simple eye movements and gestures — User interaction will rely on eye tracking and gestural interfaces to represent selection, expansion, playing video or audio, and the like. In this example, the woman looks at the name of an artist in a playlist long enough that the environment interprets her gaze as a selection. At some points in the demo she flicks her hand to represent clicking or scrolling. Note that she doesn’t wear gloves or special hand gear: the glasses have cameras that watch her hands. The video doesn’t show it, but either a generalized sign language could be used for more complex communication — more than selecting an emoticon, as she does — or a virtual keyboard could be displayed and ‘keystrokes’ recorded, again by the glasses observing her hands.
I don’t think that the grand gestural, ‘orchestra conductor’ sort of scenario that we saw in Minority Report will be the norm, although in specialized contexts — like gaming, war fighting, and brain surgery — those sorts of advanced gestural languages might be developed.
Social interaction with others will be the primary modality of all future operating environments, and other activities will principally be constructed to help filter and aggregate social channels — This is not well represented in either the Nokia video or Minority Report. In the Nokia example, the woman is mostly dabbling with relatively conventional streams and stores — weather and news, and riffling through a music library — while occasionally being pinged by an overly attentive boyfriend. Imagine a richer scenario: a marketing executive racing through the streets of New York, communicating with four colleagues in an open, semi-public way, with integrated streams of plans, designs, and marketing campaign mockups, while at the same time receiving local augmented reality information about the streets she is passing through — GPS coordinates, a map showing her destination and where her four colleagues are, offers from the food truck she passes — and a global stream of socialized news and information from her network of friends, fans, and connections.
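The dwell-based selection in the Nokia demo — a gaze resting on a target long enough counting as a click — reduces to a tiny state machine. This is purely my illustrative sketch, not anything Nokia or Oblong has published: the class name, the 0.8-second threshold, and the idea of feeding one gaze sample per frame are all assumptions.

```python
import time

DWELL_THRESHOLD = 0.8  # seconds of steady gaze that counts as a selection (assumed value)


class DwellSelector:
    """Hypothetical dwell-time gaze selector: fires when gaze holds on one target."""

    def __init__(self, threshold=DWELL_THRESHOLD):
        self.threshold = threshold
        self._target = None   # what the user is currently looking at
        self._since = None    # when their gaze settled on it

    def update(self, target, now=None):
        """Feed one gaze sample; return the selected target, or None."""
        now = time.monotonic() if now is None else now
        if target != self._target:
            # Gaze moved to a new target: restart the dwell timer.
            self._target, self._since = target, now
            return None
        if target is not None and now - self._since >= self.threshold:
            self._since = now  # reset so the selection doesn't re-fire every frame
            return target
        return None
```

An eye tracker would call `update()` once per frame with whatever UI element the gaze ray hits; looking at an artist’s name for 0.8 seconds returns it as a selection, while glancing away resets the timer.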
This can be condensed to the shorthand: not a wall, a world.
The steampunk idea that we will continue to have displays like today’s TV screens or PC monitors is dubious. I would give up mine in a heartbeat. More important, there is a world out there, and amplifying what we are already looking at — like the street we are walking on — with relevant information — like where the bus stop is, or what kind of food that restaurant serves — is so obviously helpful it doesn’t really need to be motivated.
We will continue to have personal and mobile computing experiences in the near future, because mostly we work and play on personal devices. Yes, there is the occasional face-to-face meeting where currently we use large displays, but these will be replaced by shared augmented reality: a presentation, for example, could be controlled by one person (or more) and viewed by a larger group. It wouldn’t necessarily be projected on a wall, though; instead, it would be shared via each attendee’s glasses. We might be looking at a blank wall, or we might be walking through a virtual representation of a building being designed, or a product being assembled.
Amplifying the social through this sort of user experience would be phenomenal. Wandering around a business meeting, a party, or a conference and seeing salient information about the people you are looking at — where they work, when you last talked, the names of their loved ones, their pet peeves, whether they follow you and know of your work — would be an immense help, and would potentially change the nature of our social contract in startling ways. This is what I am expecting to appear, and very soon. Not 2045.