Reality can be dull, so who wouldn’t want to spice it up a bit? The phrase “augmented reality” (AR) distils all that could be hoped for from technology – it might as well be called, “reality, just that little bit better,” like a technological pill to brighten the mundane. But does AR sufficiently merit its name? While it is still very much in the experimental stages, we are starting to see some real applications.
To be specific, AR involves capturing a video stream of a physical object or scene, adding information to the stream in real time, and displaying the result on a suitable screen. Right now the whole package can be achieved with a smartphone with a camera and internet access – the video is captured, uploaded and recognised, then additional information is downloaded and overlaid on what is seen on the display.
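That capture-recognise-augment-display loop can be sketched in a few lines. This is purely illustrative: the frame source, the recogniser and the overlay step are hypothetical stand-ins for a phone’s camera feed, a remote recognition service and a compositing renderer.

```python
# A minimal sketch of the AR loop: capture -> recognise -> augment -> display.
# Frames are represented as simple labels; in a real app these would be
# camera images, and recognise() would be a call to a recognition service.

def recognise(frame):
    """Stand-in for the upload-and-recognise step: map a frame to an annotation."""
    known = {"marker_42": "3D avatar", "street_view": "nearest pizza outlet"}
    return known.get(frame)  # None when nothing is recognised

def augment(frame, annotation):
    """Stand-in for compositing: pair the original frame with its overlay."""
    return (frame, annotation)

def ar_pipeline(frames):
    """Process the stream frame by frame, as the prose above describes."""
    return [augment(f, recognise(f)) for f in frames]
```

Unrecognised frames simply pass through unannotated – which is also what a real AR app does when the recogniser draws a blank.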
Perhaps, then, AR is currently more about “augmented imagery” – there’s not (yet) any augmentation of other sense-related information, say aural or tactile signalling. This isn’t a knock – more an indicator that we need to keep things in perspective when considering examples of AR at work today. Essentially these fall into three groups: symbol-sensitive, object-sensitive and location-sensitive.
While symbol-sensitive AR is the simplest to implement, it is perhaps the most fun. The ‘symbol’ needs to be a pre-defined, fixed image that can be recognised by a program installed on a smart device. The latter can then add information in real time – for example adding a 3D avatar, or replacing somebody’s head with a cartoon image.
This model is not dissimilar to using QR codes – squares of pixels which can be photographed and interpreted to link to online information. Indeed, examples exist of using QR codes as the basis for symbol-sensitive AR. Both have also been used with some success at conferences and events, so it will be no surprise to see these applications growing.
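At its core, symbol-sensitive recognition is pattern matching: once the camera image has been thresholded down to a small binary grid (as with an ArUco-style fiducial marker), the software just compares it against a dictionary of known symbols, allowing for the four possible rotations. A toy sketch – the marker pattern here is invented for the example:

```python
# Toy symbol-sensitive recognition: match a binary grid against a
# dictionary of known markers in any of the four rotations.

def rotate(grid):
    """Rotate a square binary grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

# Hypothetical marker dictionary: name -> thresholded pixel pattern.
MARKERS = {
    "avatar": [[1, 0, 1],
               [0, 1, 0],
               [1, 1, 0]],
}

def identify(grid):
    """Return the marker name if the grid matches in any rotation, else None."""
    for name, pattern in MARKERS.items():
        candidate = grid
        for _ in range(4):
            if candidate == pattern:
                return name
            candidate = rotate(candidate)
    return None
```

Real marker systems add error correction and perspective un-warping on top, but the dictionary-lookup idea is the same – which is why the approach is so close in spirit to QR codes.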
Object-sensitive AR takes things a step further, in that image-recognition software can identify specific objects and then construct a virtual world around them. Examples include Metaio’s LEGO Digital Box and apps that show how, say, to remove toner cartridges or other products from their packaging.
Finally we have location-sensitive AR, which captures the entire surroundings and adds information prior to displaying both. Google Sky Map is a simple, effective example of what can be done; other obvious applications are for travellers and direction finding, picking up specific street features and identifying the nearest pizza outlet, say. In these cases the video feed is supported by GPS information – so the software doesn’t have to work out which street one is on from scratch!
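The “nearest pizza outlet” part of that example reduces to a distance query over GPS coordinates. A minimal sketch using the haversine great-circle formula – the outlet list and coordinates are invented for illustration:

```python
# Finding the nearest point of interest to a GPS fix, using the
# haversine formula for great-circle distance on a sphere.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371

def haversine_km(lat1, lon1, lat2, lon2):
    """Distance in km between two (latitude, longitude) points in degrees."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def nearest(position, outlets):
    """Pick the outlet closest to the device's GPS position."""
    lat, lon = position
    return min(outlets, key=lambda o: haversine_km(lat, lon, o[1], o[2]))

# Hypothetical outlets as (name, latitude, longitude) tuples.
outlets = [("Pizza A", 51.5074, -0.1278),
           ("Pizza B", 51.5155, -0.1410)]
```

This is exactly the shortcut the GPS feed provides: the software ranks candidates by distance rather than trying to recognise the street from pixels alone.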
All of these models are being tested out in various ways by vendors, with AR capabilities finding their way into games, creating “a range of new possibilities” (in vendor-speak) such as incorporating a camera into a model helicopter and turning it into a virtual gunship. While such examples are indicative of what’s possible, AR has yet to find its killer app and has not, therefore, yet seen mainstream adoption.
Even so, it continues to develop. Gravity-related features or face recognition (as recently incorporated in Google Android’s Ice Cream Sandwich) are being used now; meanwhile, sites like Augmented Planet are speculating about integration of near-field communications and heads-up displays to enable more deeply immersive experiences.
Of course it is typical of the technology industry to try technology combinations and see what sticks – and AR is no exception. This is not necessarily a bad thing but neither should it distract from research into delivering AR-enabled capabilities that are of genuine, compelling use – for example educational applications or other places where ‘simple reality’ is not sufficient. It may be that there is no killer app at all – rather, like location-based services (touted as the next big thing eight years ago), AR features will simply find their place and gain adoption as part of other applications and services.
As AR finds its way more into the mainstream, the chances are that some of the downsides will emerge – so it would be worth thinking about these up front. For example, there are very clear privacy questions around using facial recognition in conjunction with AR. Right now we can sit on public transport in relative anonymity – but this could easily change if, say, our faces could be matched against Google’s image index. There’s also the potential for deception and manipulation, as the AR equivalent of social engineering takes hold.
Augmented reality may have plenty to offer – particularly as it starts to integrate other forms of information and as new applications are brought to the fore. We may yet reach a point where we drop the A and it simply becomes ‘reality’, at which point the bar of what augmentation means has to rise. Meanwhile, as a cross-over capability that can enhance applications and services across the board, it is certainly one to watch.