Google announces Depth API updates to ARCore developer platform, enabling occlusion in Augmented Reality

Example of occlusion off (left) and occlusion on (right)

In Augmented Reality News

December 9, 2019 – Google has today announced new updates to its ARCore developer platform for building augmented reality (AR) experiences. Earlier this year, the company introduced Environmental HDR, which brings real-world lighting to AR objects and scenes, enhancing immersion with more realistic reflections, shadows, and lighting. Today, Google has deepened that immersion further with the introduction of its new Depth API in ARCore, and has opened a call for collaborators to try out the tool.

The ARCore Depth API allows developers to use Google’s depth-from-motion algorithms to create a depth map using a single RGB camera, which should help enable experiences that are vastly more natural, interactive, and helpful for users. The depth map is created by taking multiple images from different angles as the user moves their phone, then comparing them to estimate the distance to every pixel.
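To make that concrete, here is a minimal Kotlin sketch of how per-pixel depth can be read, using the names the Depth API later shipped under in the public ARCore SDK (Config.DepthMode, Frame.acquireDepthImage()). At the time of this announcement the API was limited to the call for collaborators, so treat the snippet as illustrative rather than definitive:

```kotlin
import android.media.Image
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import java.nio.ByteOrder

// Request depth-from-motion on an existing ARCore session.
// DepthMode.AUTOMATIC works with a single RGB camera; no
// dedicated depth sensor is required.
fun enableDepth(session: Session) {
    val config = session.config
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC
    }
    session.configure(config)
}

// Estimated distance from the camera, in meters, at pixel (x, y)
// of the current frame's DEPTH16 depth image.
fun depthAtPixel(frame: Frame, x: Int, y: Int): Float {
    val depthImage: Image = frame.acquireDepthImage()
    depthImage.use {
        val plane = it.planes[0]
        val byteIndex = y * plane.rowStride + x * plane.pixelStride
        val buffer = plane.buffer.order(ByteOrder.nativeOrder())
        // Each DEPTH16 pixel holds the distance in millimeters.
        val depthMillimeters = buffer.getShort(byteIndex).toInt() and 0xFFFF
        return depthMillimeters / 1000f
    }
}
```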

One important application for depth is occlusion: the ability for digital objects to accurately appear in front of or behind real-world objects. Occlusion helps digital objects feel as if they are actually in a space by blending them with the scene. Starting today, Google is making occlusion available in Scene Viewer, the developer tool that powers AR in Search, on an initial set of over 200 million ARCore-enabled Android devices.
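At its core, occlusion reduces to a per-pixel depth comparison: if a real surface is closer to the camera than the virtual object at a given pixel, that pixel of the virtual object should be hidden. The hypothetical helper below sketches that test; in a real renderer the comparison runs on the GPU in a fragment shader that samples the depth map as a texture, and it is not part of the ARCore API itself:

```kotlin
// Illustrative sketch, not an ARCore API: decide whether a virtual
// fragment at a given pixel is hidden behind the real world.
fun isVirtualPixelOccluded(
    realDepthMeters: Float,        // depth map value at this pixel
    virtualDepthMeters: Float,     // virtual fragment's camera distance
    toleranceMeters: Float = 0.02f // slack for depth-estimate noise (assumed)
): Boolean {
    // Hidden when a real surface sits in front of the fragment.
    return realDepthMeters + toleranceMeters < virtualDepthMeters
}
```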

Google has also stated that it has been working with Houzz, a company that focuses on home renovation and design, to bring the Depth API to the ‘View in My Room 3D’ experience in the Houzz mobile app. “Using the ARCore Depth API, people can see a more realistic preview of the products they’re about to buy, visualizing our 3D models right next to the existing furniture in a room,” said Sally Huang, Visual Technologies Lead at Houzz. “Doing this gives our users much more confidence in their purchasing decisions.”

In addition to enabling occlusion, having a 3D understanding of the world on a mobile device unlocks a myriad of other possibilities, according to Google. The company has been exploring some of these, playing with realistic physics, path planning, surface interaction, and more.

When applications of the Depth API are combined, developers are able to create experiences in which objects accurately bounce and splash across surfaces and textures, as well as new interactive game mechanics that let players duck and hide behind real-world objects.

Google’s Depth API does not depend on specialized cameras or sensors, and the company believes that it will only get better as hardware improves. For example, the addition of depth sensors, such as time-of-flight (ToF) sensors, to new devices will help create more detailed depth maps, improving existing capabilities like occlusion and unlocking new ones such as dynamic occlusion: the ability to occlude behind moving objects.

In a blog post, Google stated that it has only begun to scratch the surface of what’s possible with the Depth API, and is keen to see how developers will innovate with the feature. Those interested in trying the new Depth API can fill out Google’s call for collaborators form.

Image/video credit: Google

About the author

Sam Sprigg

Sam is the Founder and Managing Editor of Auganix. With a background in research and report writing, he has been covering XR industry news for the past seven years.