GrAI Matter Labs launches low-latency vision inference processor



GrAI Matter Labs (GML) is introducing its GrAI Vision Inference Processor (VIP) to partners and customers at this year’s all-digital Consumer Electronics Show. The full-stack AI system-on-chip platform offers fast responsiveness for image recognition in AR/VR, industrial automation, robotics and surveillance. Built on GML’s NeuronFlow event-based dataflow compute technology, GrAI VIP delivers inference latency up to 100x lower than competing solutions, according to the fabless semiconductor scale-up with Eindhoven roots.

Credit: GrAI Matter Labs

GML’s mission is to bring the fastest AI per watt for sensor analytics and machine learning to every device on the edge. The company has already launched a NeuronFlow-based accelerator chip, GrAI One, along with an accompanying hardware development kit and an SDK called GrAIFlow. AI application developers can now get early access to the new AI SoC platform.

“Vision inferencing is the most active segment of the AI chip market. GML has developed several innovations that, combined, have produced an outstanding AI accelerator, exploiting sparsity – both spatial and temporal – in the input data,” says Michael Azoff, chief analyst at Kisaco Research. GML CEO Ingolf Held adds: “GrAI VIP will deliver significant performance improvements to industrial automation and revolutionize systems such as pick-and-place robots, cobots and warehouse robots.”