Despite some early speculation that Apple might choose to pre-announce a new augmented reality (AR) device, there were no new AR product launches at last week’s iPhone event.

This silence is noteworthy given Apple’s purchase of Akonia Holographics over the summer for an undisclosed amount, a transaction the company confirmed without giving any further details. Akonia Holographics developed the HoloMirror, which overlays a layer of AR onto the physical environment through a “thin, transparent smart glass lens that displays vibrant, full-colour, wide field-of-view images.”

Apple’s interest in developing an AR headset or wearable has been widely discussed in the press over the last year or so, with Bloomberg having previously written an extensive piece suggesting a 2020 launch date for an iGlass. This latest acquisition was widely seen as an important step in the process, as Apple looks to assemble the various hardware and software requisites for a successful AR wearable product launch.

So what is the current story on Apple and AR?
Apple did announce some further AR developments last week. The new iPhone X derivatives (built around the new A12 Bionic chip) and this year’s ARKit 2 update bring new AR capabilities to the smartphone. Combined with iOS 12, they promise a dramatic increase in processing power, capable of delivering more immersive AR experiences for the user. An example is the Measure app, which lets users measure real-world objects in 3D through the phone’s camera.
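For a sense of what ARKit 2 makes possible, the hedged Swift sketch below shows the basic mechanics behind this kind of measurement: two screen taps are hit-tested against planes ARKit has detected, and the real-world distance between the resulting points is reported. The class name and view-controller wiring are illustrative assumptions, not Apple’s Measure app code.

```swift
import UIKit
import ARKit
import simd

// Minimal sketch of tap-to-measure with ARKit 2 (illustrative, not Apple's Measure app).
final class MeasureSketchViewController: UIViewController {
    private let sceneView = ARSCNView()
    private var firstPoint: simd_float3?

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(handleTap(_:))))
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking with plane detection is what lets taps land on real surfaces.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        sceneView.session.run(configuration)
    }

    @objc private func handleTap(_ gesture: UITapGestureRecognizer) {
        let location = gesture.location(in: sceneView)
        // Project the 2D tap onto a plane ARKit has already detected.
        guard let hit = sceneView.hitTest(location, types: .existingPlaneUsingExtent).first else { return }
        let translation = hit.worldTransform.columns.3
        let position = simd_float3(translation.x, translation.y, translation.z)

        if let start = firstPoint {
            // Second tap: report the straight-line distance between the two points in metres.
            print(String(format: "Measured distance: %.2f m", simd_distance(start, position)))
            firstPoint = nil
        } else {
            firstPoint = position
        }
    }
}
```

The same hit-testing primitives underpin placing virtual content on real-world surfaces in any ARKit app; the measuring use case simply makes the capability tangible.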

Impressive as they are, these features and capabilities only scratch the surface of AR’s true longer-term potential. One of the fundamental challenges with smartphones is that, however good the display and however stylish the phone, the so-called magic window is poorly suited as a medium for the AR experience. To quote the AR/VR writer and consultant Charlie Fink: “Holding your arm out to look through a tiny screen has got to be one of the worst form factors ever accidentally invented by man.”

These challenges were unintentionally highlighted by the Apple event demo of three people standing around a table playing an AR version of Galaga. Impressive as the visuals and sound effects were, it’s hard to see the average person wanting to stand (or sit) for long holding a phone out in front of them.

Where does Apple have work to do?
So where is Apple getting it right on AR, and where does it still need to work? In a previous edition of the GSMA’s Global Mobile Radar, we identified three key components to realising the full potential of AR: the right hardware form factor with a natural user interface; access to advanced intelligence and a huge variety of data, so that the right, contextually relevant information reaches users throughout their day; and a pervasive 3D digital map of the world.

While Apple appears to be leading on the smartphone AR experience, others are already developing a wearable form factor which may ultimately offer a more natural and readily acceptable user interface. A number of AR glasses and headsets are already on the market, with an updated Google Glass now being repositioned as an enterprise play. Magic Leap has finally launched its Magic Leap One mixed-reality headset, to fairly mixed reviews.

Vuzix’s Blade 3000 was demonstrated at a number of trade events over the last year and gives a sense of what is possible with AR glasses, even if it does not fully deliver today. One of the Blade’s more impressive features is its head-up display (HUD), even if this falls short of what might be expected from a true AR experience.

The second challenge is being addressed by numerous players including Google and Facebook. Recent hires should help Apple in its machine learning and AI strategy, an area where it has been seen as a laggard. Companies across the ecosystem are looking to harness advances in artificial intelligence, big data and edge computing to deliver contextually relevant information and content to end users.

The gap here, and arguably where the big strategic battle is still to be fought, concerns the need for a pervasive 3D digital map. To offer a seamless user experience, an AR device needs to know exactly where the user is, a process known as localisation, and then overlay relevant contextual information. Currently, devices generally create a new map every time an app launches, which is both processor and memory intensive and limits the scope for shared experiences.
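At the app level, ARKit 2’s ARWorldMap API hints at what closing this gap would involve: an app can persist the map it has built and relocalise against it on a later launch, or pass the serialised map to a nearby device as the basis of a shared experience. The Swift sketch below illustrates the save-and-restore pattern; the file name and helper functions are illustrative assumptions.

```swift
import ARKit

// Illustrative file location for the saved map (an assumption, not a fixed path).
let worldMapURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("saved.worldmap")

// Ask the running session for its current world map and archive it to disk.
func saveCurrentWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, error in
        guard let worldMap = worldMap else {
            print("World map not available yet: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        do {
            let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true)
            try data.write(to: worldMapURL, options: .atomic)
        } catch {
            print("Failed to save world map: \(error)")
        }
    }
}

// On a later launch, load the saved map so the device relocalises against it
// instead of building a fresh map from scratch.
func restoreWorldMap(into session: ARSession) {
    guard let data = try? Data(contentsOf: worldMapURL),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    else {
        return // No saved map: the session will simply build a new one.
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```

Useful as this is, it remains per-app and per-place persistence rather than the pervasive, shared 3D map of the world that a seamless AR experience ultimately requires.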

A mass-market wearable is still some way off
As with many emerging technologies, definitions can be a problem. There are many strands of augmented and virtual reality stretching along a continuum, with mixed reality often used as a catch-all for everything between the two extremes. It seems likely that different form factors will emerge for different use cases, with a more immersive headset for home entertainment and a more discreet wearable for everyday use.

For now, smartphones offer the most affordable route to AR experiences and may serve an important role in raising public interest. However, longer term we see the need for a new form factor to emerge and so allow AR to fulfil its promise as the fourth computing platform. The question is whether this new platform will be hardware- or software-based, or somewhere in between.

The answer to this question will go a long way towards deciding who the winners in the emergent AR wearable space will be. While Apple can be relied upon to produce an aesthetically pleasing glass, does its closed-ecosystem approach lend itself to collecting and interpreting the broad range of data needed to fuel new AR applications and services?

Only time will tell, although an iGlass launch in 2019 could give us some of the answers.

 – David George, head of consulting at GSMA Intelligence

The editorial views expressed in this article are solely those of the author and do not necessarily reflect the views of the GSMA, its Members or Associate Members.