Since the HAL 9000 and Star Trek's M-5 multitronic computer, both Hollywood and Silicon Valley have oversold the power and capabilities of AI. Although we're still waiting on machines that can carry on an intelligent conversation, AI has been creeping into many objects in our everyday lives behind the scenes, making them more useful and proactive.

People are most familiar with the intelligent assistants built into devices like the Amazon Echo, Google Nest Hub and Apple HomePod, but as I wrote more than three years ago, these rely on cloud backend services for most of their smarts, using local hardware primarily to recognize their wake word and listen for follow-up questions. 

Soon, devices as small as a vibration sensor will outsmart an Echo, thanks to significant advances in the performance of low-power hardware and more efficient AI algorithms. The combination allows surprisingly sophisticated deep learning and machine learning models to run on embedded systems. Until recently, shoehorning AI software into a battery-powered device required data scientists skilled in working within the constraints of an embedded SoC, but recent advances in AI development and automation frameworks, collectively termed TinyML, greatly expand the realm of smart devices.

AI has significantly reshaped and improved everyday objects in ways that few people recognize. For example, most phone users don't realize that pressing the shutter button to take a snapshot unleashes a complicated process: the camera rapidly takes multiple images at different exposure settings, analyzes them for features and then combines them, pixel by pixel, into a single picture using embedded deep learning models. Apple calls this feature Deep Fusion, while Google uses similar computational photography techniques for its Night Sight, Astrophotography and HDR+ shooting modes. Here's what the process looks like when Pixel phones take a low-light shot. Apple's most recent iPhone 12 Pro and iPad Pro models go even further by combining data from both the camera and LIDAR (laser rangefinder) sensors. The stunning results are often impossible to recreate with a conventional camera and tripod.

Source: Google Research paper; Handheld Mobile Photography in Very Low Light
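To make the exposure-bracketing idea concrete, here is a heavily simplified, hypothetical sketch of exposure fusion; real pipelines such as Deep Fusion and HDR+ add burst alignment and learned, per-pixel merge models that this toy weighting scheme omits:

```python
import numpy as np

def fuse_exposures(frames: np.ndarray) -> np.ndarray:
    """Merge a burst of bracketed frames, shape (N, H, W), values in [0, 1].

    Each pixel is weighted by how well exposed it is (closest to mid-gray),
    so dark frames contribute highlights and bright frames contribute shadows.
    """
    weights = 1.0 - 2.0 * np.abs(frames - 0.5)  # 1.0 at mid-gray, 0.0 at clipping
    weights = np.clip(weights, 1e-3, None)      # keep every pixel slightly weighted
    return (frames * weights).sum(axis=0) / weights.sum(axis=0)

# Example: fuse a burst of five simulated 480x640 grayscale frames.
burst = np.random.rand(5, 480, 640)
fused = fuse_exposures(burst)
```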

While sensors and other low-power devices can't run algorithms of the same sophistication, TinyML and its associated development tools promise to give AI smarts to an immense range of battery-powered devices. TinyML is the moniker for both a movement and a developer community. The movement is galvanized by the idea of making ML work on sensors powered by a watch battery or energy harvesting, turning raw data into useful information. As two Google engineers put it in their how-to on TinyML development:

This is where the idea of TinyML comes in. Long conversations with colleagues across industry and academia have led to the rough consensus that if you can run a neural network model at an energy cost of below 1 mW, it makes a lot of entirely new applications possible. This might seem like a somewhat arbitrary number, but if you translate it into concrete terms, it means a device running on a coin battery has a lifetime of a year. That results in a product that's small enough to fit into any environment and able to run for a useful amount of time without any human intervention.

For context, a phone SoC like the Qualcomm Snapdragon 865 uses up to 5 W, or about 1,000 times the power of some TinyML devices.
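A rough back-of-the-envelope calculation shows how such a power budget plays out; the coin-cell capacity and duty cycle below are my own illustrative assumptions, not figures from the engineers quoted above:

```python
# Back-of-the-envelope power budget. Cell capacity and duty cycle are
# illustrative assumptions, not figures from the quote above.
CELL_WH = 3.0 * 0.225   # CR2032 coin cell: ~3 V x ~225 mAh = ~0.675 Wh
ACTIVE_MW = 1.0         # target: run the model at under 1 mW while awake
DUTY_CYCLE = 0.08       # assume the device sleeps ~92% of the time

avg_mw = ACTIVE_MW * DUTY_CYCLE
hours = CELL_WH * 1000 / avg_mw
print(f"~{hours / 24 / 365:.1f} years on one coin cell")  # ~1.0 years

# A 5 W phone SoC versus a TinyML device averaging ~5 mW:
print(f"{5000 / 5:.0f}x the power draw")                  # 1000x
```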

Cost is another aspect that differentiates TinyML devices from mobile or ultra-portable processors. For example, the cheapest Raspberry Pi, the Pi Zero, which uses a Broadcom SoC with an older 32-bit Arm core, runs about $5 in volume. The same model with embedded Bluetooth and Wi-Fi is double the price at $10. In contrast, many 32-bit microcontrollers used in embedded systems, like those based on the popular Arm Cortex-M0+, cost only about $1. At that price, the ubiquity of microcontrollers in everyday objects isn't surprising, with sales expected to hit 38 billion devices in 2023. The ability to run machine learning algorithms on such quotidian hardware opens up a slew of new applications.

TinyML, the developer community, has been kindled by the TinyML Foundation, a group of like-minded researchers and developers seeking to promote information exchange about innovative ML implementations on ultra-low power devices "at the very edge of the physical and digital world." In promoting the idea of TinyML services, Ericsson offers a useful graphical depiction of where TinyML fits relative to other computing paradigms, placing the movement at the intersection of IoT devices, edge computing and machine learning data analysis.

Source: Ericsson; TinyML as-a-Service: What is it and what does it mean for the IoT Edge?

TinyML has been the inspiration for several tools and services designed to accelerate and simplify the development and deployment of ML software on embedded systems. One of the first was TensorFlow Lite, a variant of the popular AI development framework targeting mobile and embedded devices. As a presentation by one of its chief developers illustrates, creating a TF Lite model merely requires passing a standard TensorFlow model through a converter; inference then works by running sensor data through a preprocessor and the TF Lite interpreter. TF Lite works in most TinyML scenarios using 32-bit microcontrollers and has been extensively tested with Arm Cortex-M devices. The TF Lite runtime takes only 16 KB. A simple speech recognition app like wake word detection takes only 22 KB, while person detection on a grayscale image feed can run in only 250 KB.
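Here is a minimal sketch of that conversion workflow, assuming TensorFlow 2.x; the tiny model, its input size and the class count are placeholders rather than anything from the presentation:

```python
import tensorflow as tf

# A stand-in model: a tiny dense classifier over a window of sensor samples.
# The architecture, input size and class count here are placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128,)),             # one window of sensor data
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. three gesture classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... train with model.fit(windows, labels) before converting ...

# Pass the trained model through the TF Lite converter, letting the default
# optimizations shrink it for an embedded target.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting flat buffer is what the on-device interpreter loads.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

On a microcontroller, the resulting .tflite flat buffer is typically converted to a C array and compiled into the firmware for the TF Lite interpreter to execute.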

TF Lite is perfect for developers already fluent in the TensorFlow framework who understand the limitations of embedded hardware. However, those requirements set a high bar for millions of embedded developers. AutoML, a new development platform from Qeexo, is designed to lower these technical barriers by automating data processing, model development, tuning and hardware provisioning.

Like ML automation cloud services or server software such as AWS SageMaker, Google Cloud AutoML, Auger and Sigopt (which I highlighted back in 2017), Qeexo AutoML automates each stage of the embedded ML workflow, from data processing through model development and tuning to provisioning the target hardware.

There are several alternatives to Qeexo's system for embedded ML, including Cartesiam NanoEdge, Edge Impulse, NeuroPilot Micro and OctoML.

The overriding impetus behind moving ML to the far edge is so-called sensor fusion, in which increasingly capable edge devices combine, correlate and analyze data from multiple sensors to detect anomalies, identify objects and their relative positions, and make ML-based predictions that are far more accurate than simple trend extrapolation techniques. Applications span many industries and usage scenarios.

These environments require rapid results because the streaming data they generate is fleeting, its value decaying exponentially over time. Thus, performing the ML locally, without sending data to the cloud and back, is critical to achieving the near-real-time, low-latency responses these applications demand.
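As an illustrative sketch of that pattern, the snippet below fuses windows from two sensors into a single feature vector and scores it locally with a previously converted TF Lite model; the sensor shapes, feature choices and model file name are all hypothetical:

```python
import numpy as np
import tensorflow as tf

WINDOW = 64  # samples per sensor window; an arbitrary choice for this sketch

def fuse_features(accel: np.ndarray, gyro: np.ndarray) -> np.ndarray:
    """Concatenate simple per-axis statistics from two (WINDOW, 3) sensors."""
    feats = []
    for signal in (accel, gyro):
        feats += [signal.mean(axis=0), signal.std(axis=0)]
    return np.concatenate(feats).astype(np.float32)  # shape (12,)

# Load a previously converted model (the file name is hypothetical) and run
# inference entirely on the local device, with no cloud round trip.
interpreter = tf.lite.Interpreter(model_path="anomaly_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

accel = np.random.randn(WINDOW, 3)                    # stand-ins for real reads
gyro = np.random.randn(WINDOW, 3)
features = fuse_features(accel, gyro)[np.newaxis, :]  # add a batch dimension

interpreter.set_tensor(inp["index"], features)
interpreter.invoke()
anomaly_score = interpreter.get_tensor(out["index"])
```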

We are still in the early days of TinyML, but the capabilities of microcontrollers and the sophistication of ML optimizations have reached the point where incredibly useful applications can run on near-invisible devices. Systems like TF Lite, Qeexo AutoML and others will unleash the creativity of millions of embedded developers to infuse intelligence, interactivity and uncanny features into almost every physical object we interact with.

Qeexo offers several examples that illustrate how TinyML will reshape everyday products.

From cars that tell you when an engine bearing is about to fail to kitchen faucets that warn of harmful chemicals in the water, embedded intelligence is set to revolutionize our interactions with everyday objects.

Image credit - Feature image - Intelligent car, intelligent vehicle and smart cars concept, by @jirsak, from Shutterstock.com. Screen shots credited above.
