
Sharper Edge Engines Reach The Internet Of Things

The edge is sharpening. Listen in on the Silicon Valley water-cooler discussions surrounding Artificial Intelligence (AI) and a handful of themes dominate the narrative. Generative AI has hogged the limelight this year with its human-like ability to draw inferences via Large Language Models (LLMs), but the way we now apply AI to the smart devices of the Internet of Things (IoT) is also said to be coming of age.

Edge vs. IoT

This is computing inside the IoT, which is to say 'edge' computing. While the terms IoT and edge are often used interchangeably, we can clarify by saying that the IoT is where the devices are, while edge computing is what happens on them. To sharpen the definition further, Internet of Things devices typically need to be connected to the Internet to work (the clue is in the name, right?), while edge devices might be disconnected for much of their life, only occasionally connecting to a cloud datacenter for processing.

Creating hardware for edge applications requires an entirely new design approach, one that accounts for the specific computational performance, power and economic constraints of the edge. With these core dynamics in mind, the IT industry has been working hard to make edge computing better. If you will, sharper.

Size of the IoT

By 2030, more than 125 billion IoT devices are expected to be connected to the Internet, from smartphones to cameras to smart home devices. Each of these devices will generate an enormous amount of data for analysis, some 80% of it in the form of video and images. Yet even where connectivity to the cloud exists, only a small portion of this data has so far been analyzed.

Growing concerns over privacy, security and bandwidth have pushed data processing closer to its origin, i.e. to the edge of the IoT. So can AI rescue us? AI technology has to date primarily been designed for cloud computing operations, which do not face the same cost, power and scalability constraints as edge devices. AI edge specialist Axelera AI thinks it can help. The company's Metis AI Platform has this month reached its early access phase for the development of advanced edge AI-native hardware and software solutions.

"Placing a comprehensive hardware and software solution directly into the hands of our customers within a mere 25 months [of the project starting] stands as a pivotal milestone for our company," said Fabrizio Del Maffeo, Axelera AI co-founder and CEO. “The Metis AI Platform offers practical edge AI inference solutions, catering to companies developing next-generation computer vision applications. The AI-native, integrated hardware and software solution simplifies real-world deployment, providing a user-friendly path to development and integration. Available in industry-standard form factors like PCIe cards and vision-ready systems, it streamlines the integration of AI into business applications, meeting today's market demands.”

What is dataflow technology?

The core of the platform is the Metis AI Processing Unit (AIPU), which is based on proprietary digital in-memory computing technology (D-IMC) and RISC-V with ‘dataflow’ technology. As Maxeler Technologies reminds us, “Dataflow computers focus on optimizing the movement of data in an application and utilize massive parallelism between thousands of tiny 'dataflow cores' to provide order of magnitude benefits in performance, space and power consumption.”
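The dataflow idea described above can be illustrated with a toy sketch: operations fire as soon as their inputs become available, rather than in a fixed instruction order. This is purely a conceptual illustration of the execution model, not how the Metis AIPU is actually programmed.

```python
# Toy dataflow execution: each node fires once all of its inputs exist.
# Conceptual sketch only; real dataflow hardware runs thousands of such
# nodes in parallel, which is where the performance benefit comes from.

def run_dataflow(graph, inputs):
    """graph maps node -> (fn, [dependency nodes]); inputs maps node -> value."""
    values = dict(inputs)
    pending = dict(graph)
    while pending:
        for node, (fn, deps) in list(pending.items()):
            if all(d in values for d in deps):   # inputs ready: node fires
                values[node] = fn(*(values[d] for d in deps))
                del pending[node]
    return values

# (a + b) * (a - b), expressed as a dependency graph rather than a sequence
graph = {
    "sum":  (lambda x, y: x + y, ["a", "b"]),
    "diff": (lambda x, y: x - y, ["a", "b"]),
    "out":  (lambda s, d: s * d, ["sum", "diff"]),
}
result = run_dataflow(graph, {"a": 5, "b": 3})
assert result["out"] == 16   # (5 + 3) * (5 - 3)
```

Note that "sum" and "diff" have no dependency on each other, so in a true dataflow machine they would execute simultaneously on separate cores.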

Del Maffeo and team claim that the AIPU offers industry-leading performance, usability and efficiency at a fraction of the cost of existing solutions. The technology is scalable for deployment projects that experience growth and the company’s embedded security engine protects data and information through encryption, ensuring the security of sensitive biometric data.

The technology is integrated into AI acceleration cards, AI acceleration boards and AI acceleration vision-ready systems, which are available to the general public. This enables small and medium-sized enterprises to speed up adoption and streamline field installation. Developed alongside the Metis AIPU, a click-and-run Software Development Kit (SDK) known as Voyager provides easy-to-use neural networks for computer vision applications and (coming later) Natural Language Processing (NLP) to software developers aiming to integrate AI into their devices.

“The Voyager SDK offers a fast and easy way for developers to build powerful and high-performance applications for Axelera AI’s Metis AI platform,” explained Del Maffeo. “Developers describe their end-to-end pipelines declaratively, in a simple YAML configuration file, which can include one or more deep learning models along with multiple non-neural pre and post-processing elements. The SDK toolchain automatically compiles and deploys the models in the pipeline for the Metis AI platform and allocates pre and post-processing components to available computing elements on the host such as the CPU, embedded GPU or media accelerator.”
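Based on Del Maffeo's description, a declarative pipeline file of this kind might look something like the sketch below. To be clear, the keys, element names and model name here are purely illustrative guesses at the shape of such a file, not Axelera's actual Voyager schema.

```yaml
# Hypothetical Voyager-style pipeline sketch; all keys and names are
# illustrative only and do not reflect the real Voyager configuration format.
pipeline:
  - preprocess:            # non-neural element, allocated to host CPU/GPU
      resize: [640, 640]
      normalize: true
  - detect:                # deep learning model, compiled for the Metis AIPU
      model: yolov5s
  - postprocess:           # non-neural element, e.g. filtering detections
      nms_threshold: 0.45
```

The appeal of the declarative approach is that the toolchain, not the developer, decides which computing element (CPU, embedded GPU, media accelerator or AIPU) runs each stage.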

Smart toaster reality

We consumers might initially be oblivious to the fact that the computing edge is getting this kind of turbo-charge. No average user is going to pop a slice of bread in their smart toaster, get a 'ready!' alert on their smartwatch and stop to consider whether the operation involved a machine learning network using 32-bit floating point data to precision-train AI models with standard backpropagation techniques.

Of course, we won't think like that (although that is what is happening here); most toast consumers will only stop to think: hmm, peanut butter or just marmalade this time?

The point to grasp here is that AI models are typically trained in a cloud datacenter using powerful, expensive, energy-hungry Graphics Processing Units (GPUs) and, in the past, these models were often used directly for inferencing (the technique we use to extract intelligence from a trained AI model) on the same hardware. What Axelera AI is suggesting is that this class of hardware is no longer needed to achieve high inference accuracy; today's challenge is how to efficiently deploy these models to lower-cost, power-constrained devices operating at the network edge.
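One standard step in moving a cloud-trained model to a constrained edge device is quantization: converting 32-bit floating point weights to compact 8-bit integers for inference. The toy below shows the basic arithmetic of symmetric int8 quantization; it is a generic illustration of the technique, not Axelera's actual toolchain.

```python
# Sketch of post-training symmetric int8 quantization, the kind of step
# used to shrink FP32 cloud-trained models for edge inference.
# Generic illustration only, not any vendor's real pipeline.

def quantize_int8(weights):
    """Map float32 weights onto the int8 range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from int8 codes."""
    return [q * scale for q in quantized]

weights = [0.82, -1.27, 0.05, 0.4]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each restored value is within one quantization step of the original,
# while the stored weights now need a quarter of the memory.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

The accuracy question Axelera AI raises is exactly this trade-off: how much precision can be dropped before inference quality suffers on a power-constrained device.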

The edge continues to get smarter, sharper and larger. Let's make sure we keep control of this new breed of device intelligence so that it doesn't also become darker.
