words William Wiles

Reality as we know it is finished. Welcome to reality 2.0 – where the internet is ubiquitous, and the world is forested with information.
In this new world, when you see a building, you can see its Wikipedia page, Flickr sets of photographs taken inside, the websites of the companies that are based in it, the public Twitter feeds of people who work there, intersecting spheres of free wifi radiating from it, and the energy flowing from its windows. In the lobby, you’ll see scores of handwritten messages from other visitors and users, recommending nearby coffee shops and suggesting that you take the stairs because the lifts are slow. There’s also a message only you can see, from the friend you’ve come to meet, saying take a seat – she already knows you’re here.
None of this extra information is really visible – you can only see it by holding up your mobile phone. This is augmented reality (AR) – the convergence of a number of fast-evolving technologies that will allow us to see the world’s data where it’s most useful to us – out in the world. It’s a system with the potential to transform the way we see and interact with everything from appliances to buildings and, perhaps most importantly, the city itself.
“Our physical world is documented on the internet, through Google Maps, through Flickr,” says David Sweeney, an industrial designer who has developed some prototype AR systems. “There is information relating to each physical spot on earth. So potentially these graphics or this information could be layered over our viewpoint of the world, and we could access it as we see fit. It’s about blurring the lines between the real and virtual world.”
AR brings together a number of different technologies to achieve this trick. It needs a device that knows where it is, generally using the global positioning system (GPS), and that knows which way it is pointing – north or south, down or up. This device also needs an internet connection, and a display with a video camera and a graphics card powerful enough to lay information over a video feed in real time. All these technologies have been around for a while, but only recently have they been combined in a single, portable device – the new generation of smartphones, such as Apple’s iPhone and handsets running Google’s Android operating system.
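The basic geometry behind such a device can be sketched in a few lines: given the phone’s GPS position, its compass heading and the camera’s field of view, work out which points of interest fall inside the frame. The place names and coordinates below are purely illustrative, and the bearing calculation is the standard great-circle formula; a real AR browser would go on to project each visible point to a position on screen.

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def visible_pois(device_lat, device_lon, heading, fov, pois):
    """Return (name, bearing) for POIs inside the camera's field of view."""
    result = []
    for name, lat, lon in pois:
        b = bearing_to(device_lat, device_lon, lat, lon)
        # Smallest signed angular difference between heading and bearing
        diff = (b - heading + 180) % 360 - 180
        if abs(diff) <= fov / 2:
            result.append((name, b))
    return result

# Hypothetical points of interest around Trafalgar Square, London
pois = [
    ("National Gallery", 51.5089, -0.1283),
    ("Nelson's Column", 51.5077, -0.1280),
    ("Charing Cross station", 51.5074, -0.1247),
]

# Device at the south end of the square, facing north, with a 60-degree FOV
for name, bearing in visible_pois(51.5070, -0.1281, 0, 60, pois):
    print(name, round(bearing))
```

Facing north, the gallery and the column fall within the 60-degree cone while the station, off to the east, does not – which is all an AR browser needs to decide what to draw.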
Now that these phones are in an increasing number of pockets, the first AR applications are appearing. In April Google launched Latitude, an Android program that lets you broadcast your location to your friends and see theirs in return – like Facebook on a map. Since then, two of the first AR “browsers” have appeared for Android – Layar.eu and Wikitude – which layer information in real time over the view through the phone’s camera. If you hold up a phone running Wikitude, for instance, you see the same street in front of you, but overlaid with links to Wikipedia and Qype, a restaurant and bar review website. Other applications have been announced or demonstrated, such as Twittaround, which shows a live view of what nearby users of the microblogging site Twitter are saying, and Nearest Tube, which points out the nearest Underground station in London; both became available on the iPhone in September.
Even more interesting is the way augmented reality could insert live 3D-looking computer graphics into the street scene. This year the marketing campaigns for the films Star Trek and Transformers 2 both included AR applications for the home computer. Users could, for instance, hold up a sheet of paper in front of their webcam and, on the monitor, see themselves manipulating a virtual model of the USS Enterprise. This real-time modelling will soon be available for the mobile phone, says Daniel Light, head of interactive at Picture Production Company, which built the Star Trek and Transformers programs: “Soon you’ll be able to pick up an image on your phone camera and then see it augmented on screen [for instance, with a spaceship included] – you’ll be able to walk around it and see it from different angles.” When that works, AR will be able to insert almost anything into your view of the city, such as computer game characters or proposed but unbuilt buildings. “You could call upon historical events, even, and watch a real-world scene that happened in history while you’re in that location,” says David Sweeney.
The implications for architecture and the ways we use built space are profound. Eric Rodenbeck, one of the principals of San Francisco-based interaction design firm Stamen, says the technology will complete the process of behavioural change that ubiquitous mapping has already started, giving us a totally different relationship to the urban structure. “I took a walk today, and I used my iPhone to track where I was, and I didn’t even think about it,” he says. “[AR] opens up all these incredibly abstract and intense possibilities for what you could do, and it also opens up these possibilities for what you no longer have to do, you just never have to think about where you are. Well, you do, but you just assume that your phone will start beeping if you’re in a high-crime area.”
In the longer term, AR could lead to the effective “merger” between the internet and the world around us. Objects and buildings wouldn’t just be what they are – they would be the portals to a wider world of data. “The way I like to look at it is to see architecture, physicality, as being the backbone of information,” says Robert Miles Kemp, an architectural theorist and author who is an advocate of heavily networked and even robotic environments. “So without the physical object you can’t ground the information. That’s why AR is really important to architecture – it’s an added layer onto the physicality of space.”
Meanwhile smartphones could soon get even smarter at looking at the street and figuring out where they are. Google’s Street View and the photo-hosting service Flickr have already created vast online databases of visual information about what places look like, and programs like Microsoft’s Photosynth can use thousands of 2D photographs to build detailed 3D models of places. “We can see a scenario where just by showing a camera to our viewpoint of the world, it’ll recognise where we are, what orientation, and link us to the information that is associated with that location,” says Sweeney, who has built some basic wayfinding systems that tell where they are by reading RFID chips or recognising QR codes – blocky ideograms that can be read by cameras – embedded in signs. But chips and codes are a very basic way of enabling AR. “The QR code is absolutely stone-age technology, the most basic building block of this technology,” says Light. “There’s no reason at all why there can’t be software that recognises buildings, as you have facial recognition software.”
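Sweeney’s tag-based wayfinding reduces, at its simplest, to a lookup: the decoded payload of a QR code or RFID chip keys into a table of known sign positions, instantly fixing the phone’s location and orientation without GPS. Every identifier, coordinate and label below is invented for illustration.

```python
# Hypothetical table of signs: decoded tag payload -> known position.
# In a deployed system this would live on a server, not in the app.
SIGN_LOCATIONS = {
    "sign:lobby-01": {"lat": 51.5033, "lon": -0.1195, "facing_deg": 90,
                      "label": "Main lobby"},
    "sign:stair-02": {"lat": 51.5034, "lon": -0.1196, "facing_deg": 180,
                      "label": "Stairwell, floor 2"},
}

def locate_from_tag(decoded_payload):
    """Return the location record for a decoded QR/RFID payload, or None."""
    return SIGN_LOCATIONS.get(decoded_payload)

# Once the camera (or RFID reader) has decoded a tag, the phone knows
# exactly where it is and which way the sign faces.
here = locate_from_tag("sign:lobby-01")
if here:
    print(here["label"], here["facing_deg"])
```

This is why Light calls the approach “stone-age”: the intelligence is all in the pre-built table, whereas building recognition would let the phone locate itself from the scene alone.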
Ideally this layer would not be a passive buffet of information, but could be altered and added to by the user – like Wikipedia, the online encyclopedia that, in theory, anyone can add to. “It’s about having a framework that sits on top of our space that people can leave messages on, they can leave notes on … people can leave some trace of themselves,” says Usman Haque, principal of Haque Design & Research. “And the whole point of the augmented reality system is the kind of magic of it, the feeling of magic that it brings to the experience of that space.” Haque is the creator of Pachube, an online brokerage for information about particular locations – such as data from weather stations, thermostats and pollution monitors – which makes it easy for people to build their own AR applications.
What, however, will be the result of living in this data-rich environment? Matt Jones, director of design at creative design consultancy Schulze & Webb, sees the effect as being similar to the impact that the mobile phone had when it first appeared. “Our arrangements [are] becoming more fluid, saying ‘I’ll call you when I’m in the area and we’ll decide where to go’,” he says. “That’s still going to be one of the big changes that this brings about.” Whole fields of human interaction are subject to change as old behaviours die and new ones arise. Stamen’s Rodenbeck says he has heard of an app called Grindr, which helps gay men meet for casual sex. “It’s a real-time hook-up machine,” he says. “You’re getting a landscape of who wants to fuck within 10 blocks of you … you don’t even need to know where the bars are any more.”
But at the moment, architects are barely involved in developing this interactive environment – something that troubles Haque, who fears the profession will be sidelined. “The production of so much of what we call architecture is done by people other than architects,” he says. “The experience of space is more and more guided by technologists.”
It’s possible, as these technologies develop, that the effect on “un-augmented” reality could be harmful. As more signage and advertising is geared towards triggering AR content, for instance by including QR codes, the city might look increasingly baffling to un-augmented eyes. “Even now in Japan, often a third of a billboard advertising a new TV show will be a huge blocky QR code – one third of that poster has become unreadable to human beings,” says Jones. “We might see less and less information in the world that’s human-readable, as more of it becomes machine-readable.”
Daniel Light echoes that idea, suggesting that we might come to miss the relatively simple single-layer world: “Go out and enjoy un-augmented reality while you can because soon everything will be augmented in some way. And there’s something depressing about that.” Jones adds, however, that the change is unlikely to be radical: “It’s still going to be easier for us to take our cues from the design of the city than it is from AR devices – AR is going to be a layer on top, it’s going to be part of the palimpsest of what’s built.”