Augmented World Expo 2019 – Part 1: The future is now

Image credit: Auganix

After attending last week’s AWE conference in Santa Clara, California, Auganix’s Managing Editor, Sam Sprigg, documents some of the highlights from the event’s talks and presentations.

Introduction

Last week, exhibitors filled the Santa Clara Convention Center for the 2019 Augmented World Expo (AWE), showcasing and sharing the cutting edge of Augmented Reality. The energy was electric and, as expected, several major announcements ensued over the course of the three-day event.

Varjo announced its XR-1 Developer Edition Mixed Reality headset, nreal launched its ‘nreal light’ smartglasses for consumers, ThirdEye announced a new Software Partner and Lease Program for its X2 Mixed Reality glasses, Rokid unveiled its new Rokid Vision Mixed Reality glasses, Neurable released its Neurable Analytics Platform, and NexTech announced the upcoming launch of its AR Chat system – to name but a few.

The first day of the expo was mainly about the presentations, and after a keynote speech from AWE co-founder Ori Inbar, the theatre rooms opened for talks, panel discussions and seminars.

Zappar & Web-based AR experiences

One of the first seminars that we attended was given by Connell Gauld from Zappar, who talked about how to author and publish web/browser-based AR experiences using the company’s ZapWorks platform. Web-based AR has an exceptionally broad use case, extending to virtually anyone who owns a smartphone. Since every smartphone ships with a web browser, the potential audience is far larger: there is no need for an AR-specific app download, which reduces friction in the user experience. Another advantage of these sorts of AR experiences is that there is no approval process (unlike with an app in an app store), so deployment can be rapid – it is even possible to publish fully fledged AR microsites.
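To make the low-friction point concrete, the sketch below shows roughly how little it takes for a plain web page to feature-detect and launch an AR session using the standard WebXR Device API. Zappar’s own runtime was not detailed in the talk, so treat this as a generic illustration of browser-based AR rather than ZapWorks code.

```typescript
// Generic sketch: feature-detect and launch a browser-based AR session
// via the WebXR Device API. No app download, no store approval – the
// page itself is the distribution channel.

async function launchWebAR(): Promise<void> {
  // WebXR typings are not in TypeScript's default DOM lib, so cast here.
  const xr = (navigator as any).xr;
  if (!xr) {
    console.log("WebXR is not available in this browser.");
    return;
  }

  if (!(await xr.isSessionSupported("immersive-ar"))) {
    console.log("Immersive AR is not supported on this device.");
    return;
  }

  // Browsers require a user gesture (e.g. a tap) before starting a session.
  const session = await xr.requestSession("immersive-ar");
  console.log("AR session running straight from the browser:", session);
}
```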

Gauld then demoed ZapWorks Studio – a platform Zappar has built to help developers and those with zero coding experience design web-based AR experiences. The ease of use was impressive. In less time than it takes to make a coffee, Gauld transformed a standard business card into one with an interactive Twitter follow button overlaid in AR through a web viewer.

ZapWorks Studio allows users to draw from Sketchfab’s library of assets to create AR content. However, it was noted that even with great content, one of the most important considerations for an AR campaign is its distribution. With ZapWorks Studio, users typically get two options: they can deliver their content through the Zappar app, or they can create a branded microsite that can be integrated into a pre-existing user experience. Generally, Gauld noted, “Users prefer a longer app install time vs wait times whilst using an app”. It will thus likely help to pre-load content in order to further reduce UX friction (a topic also covered by Brian Hutchinson from Georgia Pacific).
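As a rough sketch of what such pre-loading might look like in a browser, the snippet below fetches a hypothetical manifest of scene assets while the user is still on the landing page, so the AR experience can start without stalls. The manifest and URLs are illustrative, not part of ZapWorks.

```typescript
// Hypothetical asset manifest – the URLs are placeholders for illustration.
const assetManifest: string[] = [
  "https://example.com/assets/business-card-overlay.glb",
  "https://example.com/assets/follow-button.png",
];

// Fetch everything up front and keep the blobs in memory so the AR scene
// can launch immediately, instead of streaming assets mid-experience.
async function preloadAssets(urls: string[]): Promise<Map<string, Blob>> {
  const cache = new Map<string, Blob>();
  await Promise.all(
    urls.map(async (url) => {
      const response = await fetch(url);
      cache.set(url, await response.blob());
    })
  );
  return cache;
}

// Kick off downloads while the user is still reading the page.
preloadAssets(assetManifest).then(() => console.log("AR assets ready."));
```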

When asked if the platform was only for mobile experiences, Gauld replied: “Zappar supports a number of headsets, and although there’s nothing to announce at the moment with regards to other hardware, it’s definitely something the company is looking at.”

A key takeaway from the talk, and AWE as a whole, was how companies should invest in AR/VR. When they do, it should be primarily to strengthen their own ecosystem – not that of another app. Speakers across the event reiterated this theme, trying to dissuade creators from pursuing a “content for content’s sake” approach.

Ultimately, despite its promise, web-based AR comes with a few caveats. Chief among them: mobile web-based experiences are currently slower than native applications that have to be downloaded. In addition, not all web browsers support the same features. Gauld was quick to point out, however, that in the case of ZapWorks, certain fallbacks exist to ensure a smoother user experience (although he didn’t specify what they are).
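Gauld didn’t say what ZapWorks’ fallbacks actually are, but a common pattern in web AR is a graceful-degradation chain along the lines of the sketch below – entirely our illustration, not Zappar’s implementation.

```typescript
// Pick the richest experience tier the current browser can support:
// full AR session -> inline 3D viewer -> static image.
type ExperienceTier = "immersive-ar" | "inline-3d" | "static-image";

async function pickExperienceTier(): Promise<ExperienceTier> {
  const xr = (navigator as any).xr; // WebXR typings need @types/webxr
  if (xr && (await xr.isSessionSupported("immersive-ar"))) {
    return "immersive-ar";
  }
  // WebGL is far more widely supported than WebXR, so a non-AR 3D viewer
  // makes a reasonable middle tier before falling back to a flat image.
  if (document.createElement("canvas").getContext("webgl")) {
    return "inline-3d";
  }
  return "static-image";
}
```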

Conversational Commerce with NexTech

Another talk that covered web AR was presented by NexTech’s Paul Duffy. NexTech’s ‘goal state’ is to be able to capture and convert sentiment across the full customer journey in an e-commerce setting. To do this, the company is combining AR, artificial intelligence, and analytics into conversational commerce platforms that can be placed within brand websites.

In a similar vein to Zappar, NexTech also offers a web-based platform called ARitize, which allows for the creation and embedding of AR experiences by transforming 2D images into 3D assets for augmented e-commerce purposes. Echoing Gauld’s talk, Duffy also touched on how apps are still a point of friction – particularly between a retailer and a shopper – whereas with web AR, the experience is simplified for both. From a business perspective, when using a platform such as ARitize, AR assets can be placed “into basically any piece of IP enabled content”, according to Duffy.
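NexTech’s embed API wasn’t shown in detail, but to illustrate the general idea of dropping an AR-viewable 3D asset into an existing product page, here is a sketch using the open-source model-viewer web component (not ARitize itself); the asset URL and element ID are hypothetical.

```typescript
// Assumes the <model-viewer> component script has already been loaded
// on the page. The product model URL and container ID are placeholders.
const viewer = document.createElement("model-viewer");
viewer.setAttribute("src", "https://example.com/products/sneaker.glb");
viewer.setAttribute("ar", ""); // offer the device's AR mode where available
viewer.setAttribute("camera-controls", ""); // allow orbit/zoom as a fallback
document.querySelector("#product-hero")?.appendChild(viewer);
```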

“If you are a retailer and you are thinking about commerce, the real battleground there is not the website, but the messaging app. Web AR with conversational commerce will be the next new channel for retailers.”

– Paul Duffy, President, NexTech AR Solutions

Why is this so compelling? The main reason is, again, the sheer magnitude of the total addressable market for display devices. “Forget apps”, Duffy said. “From a web point of view, you are literally on billions of smartphones and tablets today. It might not have the same functionality that you would get from a robust app, but from a retail point of view, if you’re trying to understand [customer] sentiment and if you’re trying to continue that customer journey – you’ll sacrifice that.”

Duffy then went into detail on NexTech’s recent announcement that it will be launching a conversational commerce platform in partnership with LivePerson. Within the next 60 days, the company will be releasing its AR Chat system (currently in beta), with availability expected towards the end of July, according to Duffy. He added, “This seems to be the perfect point of cut in for AR in the retail segment”.

The company’s AR Chat works in part by combining an AI chat bot with AR and facial recognition technology, in order to determine at what point to hand a customer interaction off to a human sales agent – all whilst the customer remains in a browser-based experience on a retail brand’s site. NexTech’s ‘Grand Vision’ is one where customers are converted into buyers through interaction with AI-powered holographic brand ambassadors – and the company states that we are in fact a lot closer to this vision than people might think.
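NexTech did not disclose how its hand-off logic works, but the general shape of such a system might look like the hypothetical heuristic below, where chat sentiment and engagement signals decide when the bot escalates to a human agent. Every signal name and threshold here is an illustrative placeholder, not NexTech’s design.

```typescript
// Hypothetical signals a conversational commerce session might expose.
interface SessionSignals {
  sentimentScore: number; // -1 (negative) .. 1 (positive), from chat analysis
  attentionScore: number; // 0 .. 1, e.g. derived from facial analysis
  cartValue: number;      // current basket value in dollars
}

// Escalate frustrated or high-value, highly engaged shoppers to a human.
function shouldHandOffToHuman(s: SessionSignals): boolean {
  const frustrated = s.sentimentScore < -0.4;
  const engaged = s.attentionScore > 0.6;
  const highValue = s.cartValue > 100;
  return (frustrated && engaged) || (engaged && highValue);
}
```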

The Role of 5G

It seems clear that browser-based AR experiences are going to play a critical part in shaping the landscape of the AR ecosystem in the coming years, but the one thing that is going to help these sorts of experiences explode is 5G. Enter Hugo Swart from Qualcomm, who took to the Main Stage on the first day of the conference.

“2019 is the year of 5G. We already have deployments in selected areas in the US, Europe, China, Japan and Korea. We expect that these deployments will rapidly expand in 2019. We want to emphasize that 5G is here.”

– Hugo Swart, Head of XR, Qualcomm

5G will enhance the mobile broadband experience: greater capacity will allow for up to 100 times more traffic over 5G networks. One of the reasons behind this is spectral efficiency, according to Swart, who stated: “You can put more bits on the same bands. But you also have more bands.”
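Swart didn’t break the 100x figure down, but as a back-of-the-envelope illustration (our assumed numbers, not Qualcomm’s), the gains multiply: more bits per Hz on existing bands, times wider new bands, times denser cell deployment.

```typescript
// Illustrative only – each factor is an assumption, not a Qualcomm figure.
const spectralEfficiencyGain = 4; // better modulation and MIMO on same bands
const bandwidthGain = 5;          // new sub-6 GHz and mmWave spectrum
const densificationGain = 5;      // more small cells reusing the spectrum

console.log(spectralEfficiencyGain * bandwidthGain * densificationGain); // 100
```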

With a faster network comes a slicker user experience, and once 5G is commonplace, both web-based AR and standalone AR devices can be expected to take off substantially. 5G is quite possibly the single most important technological enabler of mass AR adoption.

Augmenting Journalism

Another standout talk came from the New York Times’ Graham Roberts, who discussed the use of Augmented Reality in journalism and demonstrated much of what the Times is doing to further engage its audience through digital storytelling. One point Roberts made was that for those involved in visual storytelling and graphics, there is almost a sense of loss for the big print pages of the past, which have today been reduced to a small screen on a phone or tablet. With the rise of AR, however, a phone is no longer just a surface but a window, and the visual experience once again feels big and impactful.

“With AR, we can regain perspective by presenting objects in real scale. Understanding real world scale is almost impossible through a phone screen – but this is no longer true with AR.”

– Graham Roberts, Director of Immersive Platforms Storytelling, The New York Times

For each of its AR projects, the NYT considers both readers who don’t have access to the AR experience and those who simply don’t want it. The company refers to the non-augmented option, aimed at readers in the latter camp, as “Fallback Mode”. Roberts stated: “We don’t want these experiences to feel any less considered, so we typically use the same assets and build a parallel real-time web experience.” Though it began as a sort of ‘back-up’ option, Fallback Mode has since become a more fundamental element of how the NYT approaches immersive experiences generally, according to Roberts.

Through the process of creating AR content and adapting it for those who don’t want an AR experience, the Times has come up with an innovative and fresh way to tell a story through this alternative offering. Roberts used the NYT’s very engaging coverage of the Notre-Dame fire as an example.

Brain Computer Interfaces in VR

Another notable presentation was by Ramses Alcaide from Neurable, who discussed Brain Computer Interfaces (BCIs). There are two broad applications that people tend to think about with regards to BCIs. One is ‘Direct Control’, which applies to the control of wheelchairs, prosthetics, and communication devices. The other relates to a deeper, unconscious layer of brain activity: understanding and measuring ‘Cognitive State’ – put simply, understanding what a person feels, and using that information to inform application development and gather insights.

The main issue right now is that this involves using electroencephalography (EEG) systems to record brain activity, which Alcaide likened to “listening to an out of tune radio”. EEG systems involve attaching a large number of sensors to a patient’s head, with gel then injected into the sensors in order to help boost the signal. As effective as this may be, the process is extremely time-consuming.

Image credit: Auganix

A lot of the company’s work has been on the algorithm development and machine learning side: Neurable has spent its time developing a new way to interpret the brain and the signals that come from it, in order to address the challenges of current EEG systems. Its solution, according to Alcaide, delivers roughly 300 times the performance of the “gold standard”. What this means, according to the company, is that Neurable gets better signals while also improving the form factor – reducing the number of required sensors on an EEG cap from 64 to 6 and removing the need for gels.
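Neurable’s actual pipeline is proprietary, but a classic first step in any EEG analysis is computing per-channel band power. The toy sketch below mirrors the six-channel cap described in the talk and stands in for textbook signal processing, not Neurable’s method.

```typescript
// Mean squared amplitude as a simple proxy for the power of one channel.
function bandPower(samples: number[]): number {
  const sumOfSquares = samples.reduce((acc, s) => acc + s * s, 0);
  return sumOfSquares / samples.length;
}

// One power value per electrode. A real pipeline would first band-pass
// filter (e.g. to the 8-12 Hz alpha band) and reject motion artifacts.
function channelPowers(channels: number[][]): number[] {
  return channels.map(bandPower);
}

// Six channels of synthetic samples, matching the reduced-sensor cap.
const eeg: number[][] = Array.from({ length: 6 }, () =>
  Array.from({ length: 256 }, () => Math.random() - 0.5)
);
console.log(channelPowers(eeg));
```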

Imagine if this sort of simplified EEG setup were built into a wearable VR headset. Firstly, from a ‘Direct Control’ standpoint, it would enable a wearer to interact with a virtual world using only their thoughts (something that was in fact demonstrated during the presentation). Secondly, from the ‘Cognitive State’ standpoint, it means a wearer’s emotional state can be monitored – the technology can effectively be used to understand the person using an AR or VR system. From an enterprise perspective this is extremely exciting, as it will allow companies to tap into both the computing and the ‘user intent’ capabilities that BCIs can offer.

It was at this point in the talk that Alcaide announced the launch of the Neurable Analytics Platform, which he stated will allow “For the first time, the ability to measure in a repeatable and reusable way, stress information, and what is cognitively and emotionally meaningful to a user.” Framing the technology from the perspective of an educational application, he added, “This means we have the ability to get unbiased cognitive insights. No more user surveys – We can literally just understand what the user wants.” There are, of course, clear uses for this technology that extend well beyond education.

Alcaide closed his presentation with the following remarks: “AR is one of these areas where we all have to be visionaries. The future that we look forward to is one where everything is connected. Where information is overlaid in front of you, is seamless, natural, and requires zero learning to actually interact with. We believe that it is possible, but in order to reach that, we need to start bringing brain computer interfaces – specifically control and affective computing and insights – together, in order to create this future.”

“AR is one of these areas where we all have to be visionaries.”

–Ramses Alcaide, CEO, Neurable

It is not just Alcaide who looks forward to this vision of the future. Based on everything we saw and heard throughout AWE, it is hard not to be excited about where AR and VR technology will be in, say, five years’ time – or, for that matter, about where the technology is right now. Things have come a long way since AWE first launched ten years ago, and the range of tech currently available, and the capabilities it offers today, are fairly mind-blowing. The future IS now. And all of this is just based on the presentations we saw – we haven’t even touched on the ‘Playground’ side of things and the hands-on tech demos we were able to experience on day two of the conference. That is coming soon in part two of our AWE 2019 write-up.

Stay tuned for part two of Auganix.org’s AWE 2019 write-up, and highlights from the AWE Playground.

About the author

Sam Sprigg

Sam is the Founder and Managing Editor of Auganix. With a background in research and report writing, he has been covering XR industry news for the past seven years.