Intro to Immersive Experience Design (Part 2)
“Designing immersive experiences” is a guide for everyone who’s interested in learning more about 3D interface design and augmented reality. It is the result of over 5 years of research and exploration in the field of immersive technologies, and is intended to help you navigate the vast, uncharted territory of this exciting new industry.
The guide consists of 3 parts:
Part 1: Why 3D interfaces are the future of UX/UI design
Part 2: The past, present, and future of the AR industry
Part 3: How to start your career as an immersive experience designer
Ready? Let’s dive into it.
Part 2: The past, present, and future of the AR industry
In the first part of my series, I talked about why immersive tech will be the next big frontier for UX/UI designers. I explained why 3D interfaces are so much more powerful, and so much closer to human nature, than the 2D interfaces we currently interact with on our mobile devices. I also laid out some of the implications that this impending evolution in interface technology will have on your work as a designer.
So when can we expect AR to reach the level of maturity that will enable mainstream adoption? With a technology as complex as this, it’s obviously hard to make predictions, as there are many tough technical challenges that still haven’t been solved. But if you have closely followed what the big players in the industry have been doing over the last few years, you can at least derive some rough estimates of where things are heading.
With this article, I want to give you a bird’s eye view of the AR industry, its main players, and the individual strategies they have put in place on their mission to create the “holy grail of immersive tech”: smart glasses. I will discuss why this technology is such a hard nut to crack, and why no one — despite massive investments and many years of research and development — has been able to “get it right” yet. And I will give you my personal forecast of what’s to come in the next few years.
The hype is over
If you have followed the news around immersive technologies in the last decade, you might have noticed a slight decline in public interest in the topic compared to 5 years ago. VR was the first to go through the hype cycle, with bloggers and tech reporters flocking to it by the thousands, raving about its potential impact on the future of humanity. AR followed shortly thereafter. When I checked the Gartner Hype Cycle for emerging technologies in 2018, AR was at the bottom of the “trough of disillusionment”, and VR had already disappeared from the graph. Last year, neither of them appeared on it at all.
Does that mean the technology has failed? Is immersive tech dead? Quite the contrary, actually. It just means the hype is dead, and people have decided to project their hopes, dreams, and fears onto the next emerging technology, only to be disappointed and lose interest again a few months later.
There is a great quote that perfectly encapsulates this phenomenon. Although its true origin is unclear, it is usually attributed to futurist Roy Amara, which is why many call it “Amara’s Law”:
“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
Humans are notoriously bad at making long-term projections. Any ground-breaking new technology usually produces very strong reactions from the public, in both directions: It’s met with a lot of over-excitement and unrealistic hopes by one half of the population, while the other half is dismissive or fearful, advocating to ban it as soon as possible. Both sides are usually wrong.
According to the Gartner Hype Cycle, immersive tech is slowly but surely approaching a stage called the “plateau of productivity”. It’s the stage a technology reaches after an intense hype phase, when it couldn’t deliver on its promise fast enough for the mass media to sustain their interest. Some of the VC funding in the sector dries up, and many of the teams and projects that were piggybacking on the hype shut down or pivot to something different altogether. And then there are the ones who are 100% committed, the ones who keep building and improving because they understand that this is not a sprint but a marathon. These are the companies that come out on the other side as the winners. The wheat is separated from the chaff. Fast forward a few years — or decades — and the tech has matured into something that no one could’ve ever imagined before. In the following paragraphs I’d like to take a look at some of these marathon runners and see what we can learn from them.
The AR arms race
A good indicator of where a technology is headed tomorrow is the investment poured into its development by industry leaders today. In the case of AR, the numbers, although hard to estimate, must be staggering. Every single one of the world’s most powerful tech companies has likely invested billions of dollars into AR over the last few years. It might not be very obvious, but there is a war going on behind the scenes — fought by none other than Google, Apple, Facebook, and Microsoft. All of them have long realised the potential of AR — and all of them have made it an integral part of their respective roadmaps for the coming years. The funny thing is that most people don’t realise how much of their current portfolio is already part of a broader strategy towards this immersive future. The foundation is being built right at this moment, but it’s happening silently, and you really have to pay attention to see what’s going on. Let me try to connect some of the dots.
Google:
The first serious investment in immersive tech that I can remember came from Google when it announced its Google Glass project in 2012. This marked an important milestone in a few ways: On the one hand, it was a huge leap forward for AR, IoT, and wearables in general. On the other hand, it showed that one of the most powerful companies in the world deemed this technology so promising that they would invest a great deal of talent, time, and money into it. It also marked the unofficial start of the “AR arms race” between the tech giants of Silicon Valley.
Google Glass was a big commercial flop and was ridiculed by many. The reason for this is simple: Although the team really pushed the hardware beyond what was thought possible at the time, the tech just wasn’t there yet. The battery life was way too short, and the glasses looked too strange to wear in everyday life. More importantly, the content delivered through the display wasn’t context-aware and wasn’t connected in any way to the physical world — so one of the most essential parts of what makes AR so magical was still missing.
Although Google Glass was not the success they might’ve hoped for, Google never lost interest in AR. They have been at the forefront of the spatial computing movement ever since and have only doubled down on their investments in the space over the years. If anything, Glass taught them (and other players in the sector) an important lesson: For AR to be successful, you have to solve not one, but a whole set of serious technological challenges that are all interconnected. Unless you get all of them 100% right, your solution is doomed to fail.
On the hardware side, you need a long battery life, an array of highly efficient sensors to understand the environment, and displays with a wide field of view that can project crystal-clear visuals on top of the user’s view under all lighting conditions. All of this needs to be crammed into a light, durable, and fashionable piece of eyewear that does not make the wearer look like a cyborg. On the software side, you need best-in-class computer vision capabilities to continuously analyse and understand the environment, and the ability to create live, photo-realistic renderings of complex 3D models — including lighting, shading, and material properties that react to the real-world environment of the user. None of this is impossible, but it will take another few years to make everything work seamlessly together on a small wearable device.
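To make the software side a bit more tangible: Modern mobile AR frameworks already expose some of this environment understanding to developers today. Here is a minimal Swift sketch using Apple’s ARKit and SceneKit (more on ARKit below). The class name and scene setup are hypothetical app code of my own, but the light-estimation API itself is real; the idea is to feed the measured real-world lighting into a virtual light source so that rendered objects react to the user’s surroundings:

```swift
import UIKit
import ARKit
import SceneKit

// A minimal sketch (hypothetical app code, not Apple's implementation):
// feed ARKit's per-frame light estimate into a SceneKit light so that
// virtual objects react to the real-world lighting around the user.
final class LightAwareViewController: UIViewController, ARSessionDelegate {
    private let sceneView = ARSCNView()
    private let virtualLight = SCNLight()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.session.delegate = self

        let config = ARWorldTrackingConfiguration()
        config.isLightEstimationEnabled = true  // on by default; shown for clarity
        sceneView.session.run(config)

        // A virtual omni light whose parameters we keep in sync
        // with the measured scene lighting.
        let lightNode = SCNNode()
        virtualLight.type = .omni
        lightNode.light = virtualLight
        lightNode.position = SCNVector3(0, 2, 0)
        sceneView.scene.rootNode.addChildNode(lightNode)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let estimate = frame.lightEstimate else { return }
        // Intensity is in lumens (~1000 = neutral indoor lighting),
        // temperature in Kelvin (~6500 = daylight white).
        virtualLight.intensity = estimate.ambientIntensity
        virtualLight.temperature = estimate.ambientColorTemperature
    }
}
```

It’s a toy example, of course, but it illustrates the principle behind the bigger challenge: The renderer has to continuously adapt to live sensor data about the physical environment.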
So what did Google do next? They started decomposing AR into more manageable chunks of technology experiments that could be iterated on and learned from separately. Realising that their strength as a company lay more in software than in hardware development, they opted for a new strategy: To figure out the hardware side, Google developed close bonds with various promising AR hardware startups in the space. The most promising one was Magic Leap, in which they invested a nine-digit funding sum in 2014.
At the same time, they have spent recent years investing heavily in pushing AR software to the next level. The first indicators of this strategy were a few rather playful, exploratory releases such as Google Translate’s AR mode and the gaming smash hit Pokémon Go (built by Niantic, a Google spin-off) in 2016. While it might be easy to dismiss these products as gimmicky, I would argue that there couldn’t have been a better way to battle-test some early AR concepts quickly with a large audience, and to use the learnings to build their first “serious” AR product.
In 2017, the next major milestone was reached with the release of Google’s Android development framework ARCore, following the lead of their main competitor Apple, who had published their iOS framework ARKit just a few weeks prior.
Apple:
It’s not clear to me when exactly Apple decided to join the AR arms race — but when they did, they did so with full force and conviction. Why did they decide to go after Google, despite the fact that Google had already established pole position in the industry with Google Glass? Because there is one factor that puts Apple ahead of all other competitors: They produce both hardware and software.
Because AR is such a complex topic and requires a perfect symbiosis of software and hardware to make it work, Apple seems like the perfect candidate to pull it off. Still, they too realised early on that it’s pretty much impossible to solve all the technical challenges in one go, as illustrated by this quote from none other than CEO Tim Cook:
“AR is going to take a while, because there are some really hard technology challenges there. But it will happen, it will happen in a big way, and we will wonder when it does, how we ever lived without it. Like we wonder how we lived without our phone today.”
So what did Apple do? Just like their peers, they decided to separate hardware from software for the time being and release projects that would help them gather learnings and iterate on solutions until the tech becomes mature enough to be integrated into one fully functional AR product.
On the software side, the first AR initiative that really put Apple on the map was their ARKit release in 2017. It was a very smart strategic move, as it followed a similar approach to their App Store: Instead of creating all of the apps for the iPhone in-house, they built a platform that developers could use to build and publish their own solutions. It’s quite genius, because this type of crowd-sourcing leads to quicker implementation, better quality, and a higher number of innovative solutions, ultimately accelerating adoption of the technology.
There is another strong advantage to ARKit. One of the big challenges of AR is that each user experience is unique, as you cannot control the context in which your app will be used. Each experience is very different from the next in terms of the available space, lighting conditions, and so on — so how do you account for all of these different scenarios? How do you make sure your AR experience works in any context? Apple realised early on that in order to build a stable AR solution, you must be able to test your software in as many different contexts as possible. ARKit does exactly that: It turns millions of iPhones worldwide into a testing ground from which Apple can learn invaluable lessons for future AR solutions.
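To illustrate what this looks like from a developer’s perspective, here is a minimal, hypothetical ARKit setup in Swift (the class name is mine; the APIs are real). It asks the framework to detect horizontal surfaces and reports how much usable space was found, exactly the kind of context information an AR experience needs in order to adapt:

```swift
import UIKit
import ARKit

// A minimal sketch (hypothetical app code): detect horizontal surfaces so
// the experience can adapt to whatever physical space the user happens
// to be in — a table, a desk, or a living-room floor.
final class PlaneFinderViewController: UIViewController, ARSCNViewDelegate {
    private let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self

        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]  // tables, floors, etc.
        sceneView.session.run(config)
    }

    // Called whenever ARKit discovers a new surface in the environment.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        // The plane's extent tells the app how much usable space is
        // available, so content can be scaled or repositioned to fit.
        print("Found a surface of \(plane.extent.x) × \(plane.extent.z) metres")
    }
}
```

The same few lines behave differently in every living room they run in, which is precisely why testing at iPhone scale is so valuable.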
The even more interesting part is Apple’s AR hardware strategy, though. The past 5 years have been a constant stream of releases that Apple can learn from — most people just haven’t noticed, because it was hidden in plain sight. Apple has effectively been pursuing a Trojan horse strategy, integrating lots of AR hardware into their products to gather insights for their upcoming AR glasses project. The ambient light sensor in the iPhone? Perfect for measuring different lighting conditions in the environment. The TrueDepth sensor introduced in the iPhone X? A great way to test near-distance 3D mapping. The LiDAR sensor in their latest iPad and iPhone releases? Creating large-scale 3D maps of the physical space around the user.
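For the LiDAR sensor specifically, ARKit exposes this capability under the name “scene reconstruction”. Here is a small sketch of how an app would opt into it on supported devices (the function is hypothetical; the configuration API is real):

```swift
import ARKit

// A minimal sketch (hypothetical app code): on LiDAR-equipped devices,
// ARKit can build a live triangle mesh of the surroundings, the
// "large-scale 3D map" mentioned above.
func makeLidarConfiguration() -> ARWorldTrackingConfiguration? {
    // Scene reconstruction is only available on devices with a LiDAR sensor.
    guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) else {
        return nil
    }
    let config = ARWorldTrackingConfiguration()
    config.sceneReconstruction = .mesh  // ARKit will deliver ARMeshAnchors
    return config
}
```

Once such a configuration is running, ARKit delivers the reconstructed geometry as ARMeshAnchor objects, which apps can use for occlusion, physics, and spatial understanding.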
This approach isn’t limited to the iPhone product range, either. One could argue that the Apple Watch and AirPods serve as the perfect preparation for an AR glasses product: They taught Apple how to design a Bluetooth-connected “satellite device” that uses the iPhone for connectivity, computation, and data storage. They also taught them how to pack a huge number of high-tech components into a tiny form factor.
Last but not least: Just like a pair of glasses, watches and headphones are considered by most to be fashion accessories. People wear them on their bodies, visible to others most of the time, and the expression of personal taste and lifestyle plays as important a role as functionality. All of these learnings are a perfect basis for the development of their upcoming Apple Glass product.
Facebook:
Facebook joined the race in the mid-2010s. CEO Mark Zuckerberg had the following to say about AR at the time:
“I think everyone would basically agree that we do not have the science or technology today to build the AR glasses that we want. We may in five years, or seven years, or something like that.”
What did Facebook do to prepare for this future? Again: dividing software and hardware, and gathering learnings separately. Because Facebook doesn’t have their own mobile operating system and doesn’t build hardware either, they had to adapt their strategy ever so slightly:
The first milestone on the roadmap was Facebook’s Spark AR Studio, released in the same year as ARKit and ARCore. It served the same purpose, too — as highlighted by Mark Zuckerberg in an interview that year:
“A key part of that journey is making an open platform where any developer can create anything they want.”
The strategy seems to be working, too: Ever since Spark AR Studio was made available to the general public in 2018, there has been an explosion of user-generated AR content on social media. The interesting challenge for Facebook will be to broaden the use cases beyond face filters.
On the hardware side, the company had made a jaw-dropping investment three years earlier, when it bought VR company Oculus for 2.3 billion USD.
While it seemed like a big bet on VR at the time, the underlying motivation might’ve been a different one: Building up expertise in AR hardware design. At least Mark Zuckerberg made everyone believe so when he said the following:
“We can’t build the AR product that we want today, so building VR is the path to getting to those AR glasses.”
Microsoft:
It could be argued that Microsoft was actually the first to invest in AR, although it probably didn’t happen as part of a larger AR strategy at first. In 2010, the company released the first Kinect, a motion-sensing add-on for the Xbox 360 that became the fastest-selling consumer electronics device of all time. It was pretty groundbreaking, as it was able to 3D-scan a room and the people in it, allowing for a fun, physical gaming experience where users could control the game through their body movements and gestures.
What was first intended as an answer to Nintendo’s Wii console soon developed a life of its own and ultimately laid the foundation for Microsoft’s HoloLens project, which was first made publicly available as a developer kit in March 2016. Besides the Magic Leap headset, it’s the closest we’ve gotten to a true AR glasses experience so far, and it offers a great glimpse into the future of AR.
But while both the HoloLens and the Magic Leap devices are very promising first steps, they also serve as great examples of the massive shortcomings that still need to be overcome to truly make AR work for everyone. If you have tried any of these devices, you can probably agree: The technology has come a long way, but it also still has a long way to go.
It’s important to point out a very interesting aspect of Microsoft’s AR strategy in this context: Because the current version of the HoloLens has a form factor that could never work for the mass market, Microsoft specifically targeted the B2B market early on. Since the size and look of the device don’t play as much of a role there as in B2C, it is now successfully being used and tested by manufacturing, maintenance, and engineering companies around the world — a perfect way for Microsoft to learn and iterate on their hardware.
“Augmented reality technology will have a far bigger impact than smartphones ever did”
— Alex Kipman, Microsoft
There are two other companies that have pivoted in the same direction, effectively using the enterprise context as a sandbox for AR product development. The first one is Magic Leap, which announced this year that it will focus on the B2B market for the foreseeable future. The second may come as a bit of a surprise: Unbeknownst to many, the Google Glass project has quietly become quite a success as an efficient workplace productivity tool for industry giants such as Boeing, DHL, and AGCO. The second version of their Enterprise Edition launched about a year ago and seems to be doing quite well.
What’s next
So what do we make of all of this? To me, the message is quite clear: Even though we might not hear as much about it in the latest tech news, AR is alive and well. Every single one of the most important tech companies on this planet has been dedicating enormous resources to “getting it right” over the last few years. All of them want to be the first to cross the finish line. Whoever is the first to deliver a successful consumer smart glasses solution gets to do something that is very rare: defining a new paradigm.
Once perfected, AR’s impact on the way we interact with technology and digital information will be similar to — if not bigger than — that of the first desktop computer or the iPhone. Who wouldn’t want to be the pioneer who broke this new ground? On the flip side, whoever lags behind and is unable to compete or catch up with the others risks losing out on a once-in-a-lifetime opportunity.
The important question is: How much longer do we have to wait? My careful estimate is that in 2023 we’ll see the first proper pair of smart glasses on the market. There might be beta releases and developer editions before that, but I don’t see the puzzle pieces coming together in less than 3 years. I also think that it will be Apple who leads the way. I see no other company better suited for it, as they hold all the keys to success in their hands: No one else understands better how to create highly appealing lifestyle products with software and hardware working in perfect symbiosis.
“We believe augmented reality is going to change the way we use technology forever.” — Tim Cook, Apple
What does this mean for us designers? We have 3–5 years in which to prepare ourselves for this new reality. That’s a lot of time to learn 3D design, get familiar with the important apps and vocabulary, and build our own tools and processes. Ready to take on the challenge? In part 3 of my series, I will talk about how to best start your career as an immersive experience designer. I will share my favorite resources and outline the first steps you can take to enter this new and exciting field. Looking forward to having you on board!