Microsoft announces Project Kinect for Azure and demonstrates mixed reality capabilities at annual developer conference

May 7, 2018 — Today at Microsoft Build 2018, Microsoft’s annual developer conference, company leaders showcased new technologies that enable developers to build AI solutions on Microsoft Azure, Microsoft 365 and across any platform.

“The era of the intelligent cloud and intelligent edge is upon us,” said Satya Nadella, Microsoft’s CEO. He added: “These advancements create incredible developer opportunity and also come with a responsibility to ensure the technology we build is trusted and benefits all.”

The company made a series of announcements, primarily relating to AI and cloud technologies but with clear implications for mixed reality, which it states will “enable richer experiences that understand the context surrounding people, the things they use, their activities and relationships”. These announcements included:

  • A joint effort with Qualcomm Technologies, Inc. to create a vision AI developer kit running Azure IoT Edge. This solution makes available the key hardware and software required to develop camera-based IoT solutions. As a result, developers will be able to create solutions that use Azure Machine Learning services and take advantage of the hardware acceleration available via the Qualcomm Vision Intelligence Platform and Qualcomm AI Engine. The camera can also power advanced Azure services, such as machine learning, stream analytics and cognitive services, that can be downloaded from the cloud to run locally on the edge.
  • Project Kinect for Azure – a package of sensors, including the company’s next-generation depth camera, with onboard compute designed for AI on the edge. Building on Kinect’s legacy, which has lived on through HoloLens, Project Kinect for Azure empowers new scenarios for developers working with ambient intelligence. Combining Microsoft’s Time of Flight sensor with additional sensors, Project Kinect for Azure will leverage Azure AI to improve insights and operations. It supports fully articulated hand tracking and high-fidelity spatial mapping, enabling a new level of precision in solutions.
  • Microsoft Remote Assist, which enables customers to collaborate remotely with heads-up, hands-free video calling, image sharing, and mixed-reality annotations. Firstline Workers can share what they see with any expert on Microsoft Teams while staying hands-on to solve problems and complete tasks together.
  • Microsoft Layout, which allows customers to design spaces in context with mixed reality: import 3D models to create room layouts in real-world scale, experience designs as high-quality holograms in physical space or in virtual reality, and share and edit with stakeholders in real time.

Image credit: Microsoft


About the author

Sam Sprigg

Sam is the Founder and Managing Editor of Auganix. With a background in research and report writing, he covers news on both the AR and VR industries. He is also interested in human augmentation technology as a whole, not just the visual experience side of things.