Khronos releases OpenVX 1.3 open standard for cross-platform vision and machine intelligence acceleration

In Augmented Reality News

October 22, 2019 – The Khronos Group, an open consortium of hardware and software companies creating advanced acceleration standards, has announced the ratification and public release of the OpenVX 1.3 specification, along with code samples and a prototype conformance test suite.

OpenVX is a royalty-free open standard for portable, optimized, and power-efficient vision and machine learning inferencing acceleration, vital to embedded and real-time use cases such as face, body, and gesture tracking, smart video surveillance, advanced driver assistance systems, object and scene reconstruction, augmented reality, visual inspection, robotics, and more. Also available now is an open source implementation of OpenVX 1.3 for Raspberry Pi, which makes OpenVX widely accessible to developers. The new specification can be found on the OpenVX registry.

“Over the years, OpenVX has evolved an extensive range of functionality to meet the diverse needs of developers using accelerated vision and inferencing. The next step in OpenVX’s evolution is to enable implementations that deliver a focused subset of features that are targeted at specific key use cases,” said Kiriti Nagesh Gowda, OpenVX Working Group Chair, and MTS Software Development Engineer at AMD. “OpenVX 1.3 feature sets provide implementers with the deployment flexibility to implement and optimize just the functionality that their customers need, while still being conformant to the standard and providing cross-vendor interoperability.”

To enable deployment flexibility while avoiding fragmentation, OpenVX 1.3 defines a number of feature sets that are targeted at common embedded use cases. According to the Khronos Group, the flexibility of OpenVX enables deployment on a range of accelerator architectures, and feature sets are expected to help increase the breadth and diversity of available OpenVX implementations. The defined OpenVX 1.3 feature sets include:

  • Graph Infrastructure (baseline for other feature sets),
  • Default Vision,
  • Enhanced Vision (functions introduced in OpenVX 1.2),
  • Neural Network Inferencing (including tensor objects),
  • NNEF Kernel Import (including tensor objects),
  • Binary Images,
  • Safety Critical (reduced features to enable easier safety certification).

MulticoreWare has worked with Khronos to provide an OpenVX 1.3 implementation for the Raspberry Pi 3 Model B using the Raspbian operating system. This implementation takes advantage of OpenVX’s architecture to include: automatic optimization of memory access patterns via tiling and chaining; the ability to use highly optimized kernels leveraging multimedia instruction sets; automatic parallelization to utilize multiple compute resources such as multicore CPUs and GPUs; and automatic merging of common sequences of processing kernels into single, higher-performance kernels.
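To illustrate the graph-based architecture described above, the following is a minimal sketch of how an OpenVX application builds and executes a processing graph. It assumes a conformant OpenVX implementation supplying the standard `VX/vx.h` header; the image dimensions and the particular kernels (a Gaussian blur feeding a median filter) are arbitrary choices for illustration, not part of the Raspberry Pi implementation itself.

```c
/* Illustrative sketch: build a two-node OpenVX graph and execute it.
 * Requires linking against a conformant OpenVX implementation;
 * image sizes and kernel choices here are arbitrary examples. */
#include <VX/vx.h>

int main(void)
{
    /* A context owns all OpenVX objects */
    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    /* 640x480 8-bit grayscale images; declaring the intermediate
     * result as a *virtual* image gives the implementation freedom
     * to fuse the two kernels or tile the computation, as the
     * Raspberry Pi implementation does */
    vx_image input   = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image blurred = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_U8);
    vx_image output  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    /* Gaussian blur followed by a 3x3 median filter */
    vxGaussian3x3Node(graph, input, blurred);
    vxMedian3x3Node(graph, blurred, output);

    /* Verification validates parameters and lets the implementation
     * optimize: merge kernels, choose memory access patterns,
     * and assign work to available compute resources */
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);   /* execute the whole pipeline */

    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```

Because the application declares *what* to compute as a graph rather than issuing kernel calls one at a time, the runtime can apply the optimizations the article mentions (tiling, chaining, kernel merging, and parallelization) without any changes to application code.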

“We are excited to have worked with Khronos to develop the OpenVX 1.3 Raspberry Pi implementation, conformance test suite, and samples,” said AGK Karunakaran, CEO of MulticoreWare. “Raspberry Pi is an easily accessible platform for any developer to try out the power of OpenVX to rapidly develop a wide range of applications with optimized memory usage and enhanced performance. This is an exciting next step in the march towards more capable computer vision and machine learning systems, and MulticoreWare is proud to be a leader in this ecosystem.”

The Conformance Test Suite for OpenVX 1.3 is in development and is expected to be released before the end of 2019. Sample implementations of OpenVX 1.3 are available on GitHub for developers to build upon. The OpenVX 1.3 specification and more information are available on the Khronos website or through the OpenVX registry, which contains specifications of the core API, headers, extensions, and related documentation.

Image credit: OpenVX/Twitter

About the author

Sam Sprigg

Sam is the Founder and Managing Editor of Auganix. With a background in research and report writing, he covers news articles on both the AR and VR industries. He also has an interest in human augmentation technology as a whole, and does not limit his learning solely to the visual experience side of things.