Decoding Edge AI: A Glossary of Buzzwords

6/11/23

Lost in buzzwords and jargon? We’ll get straight to it. Here are our picks for the top 22 terms worth getting your head around in the world of edge and visual AI, accompanied by clear examples.

  1. AI Accelerator
  2. ANN (Artificial Neural Network)
  3. CNN (Convolutional Neural Network)
  4. CPU (Central Processing Unit)
  5. CUDA (Compute Unified Device Architecture)
  6. DSP (Digital Signal Processor)
  7. Edge Computing 
  8. Edge Device
  9. GPU (Graphics Processing Unit)
  10. HDR (High Dynamic Range)
  11. FPGA (Field-Programmable Gate Array)
  12. Image Sensor
  13. ISP (Image Signal Processor)
  14. Latency
  15. Machine Learning Model Compression
  16. Machine Vision
  17. Network Pruning
  18. Neural Network
  19. NPU (Neural Processing Unit)
  20. Quantization
  21. RNN (Recurrent Neural Network)
  22. SoC (System on Chip)

AI Accelerator

Definition:

An AI accelerator is a specialized piece of hardware designed to speed up and improve the performance of artificial intelligence (AI) tasks. It helps computers process and analyze data more quickly, making AI applications like image recognition and language understanding much faster and more efficient.

Example:

Think of a smartphone using AI to recognize your voice commands. An AI accelerator chip in the phone can make sure it understands and responds to you quickly and accurately, so you can ask it questions or get directions without waiting too long.

ANN

Definition:

An Artificial Neural Network (ANN) is a computer system inspired by the way the human brain works. It's made up of interconnected nodes, or artificial neurons, which process and transmit information. ANNs are used in machine learning to solve complex tasks, such as image recognition or language translation.

Example:

Imagine you want to teach a computer to tell the difference between cats and dogs in pictures. You feed the computer many photos of cats and dogs, and the ANN processes this data. Each artificial neuron in the network contributes to recognizing specific features like fur patterns, ears, or tails. As the ANN "learns" from this data, it becomes better at classifying new, unseen images as either cats or dogs. This is similar to how our brain processes information to distinguish between various objects or animals.
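
To make this concrete, here is a minimal sketch in Python (NumPy) of a tiny ANN with one hidden layer learning a toy cat-versus-dog task. The feature values, layer sizes, and learning rate are made up purely for illustration.

```python
import numpy as np

# Toy "image features" (e.g. fur-texture and ear-shape scores) and labels: 1 = cat, 0 = dog.
# Real systems use raw pixels and far larger networks; this is purely illustrative.
X = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.7]])
y = np.array([[1.0], [1.0], [0.0], [0.0]])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer (4 artificial neurons)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output neuron

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: each layer transforms its inputs and passes the result on.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: nudge the weights to reduce the prediction error.
    grad_out = (p - y) / len(X)                  # cross-entropy-style error signal at the output
    grad_h = grad_out @ W2.T * h * (1 - h)       # error signal propagated back to the hidden layer
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(p, 2))  # predictions drift toward 1 for the "cat" rows and 0 for the "dog" rows
```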

CNN

Definition:

A Convolutional Neural Network (CNN) is a type of artificial neural network designed for processing and analyzing visual data, such as images and videos. It's particularly adept at recognizing patterns and features within this data. CNNs have become a fundamental technology for tasks like image classification, object detection, and facial recognition.

Example:

Imagine you want to build a system that can identify different types of vehicles in street camera footage. You employ a CNN for this task. The CNN is structured to analyze small sections of the video, recognizing simple shapes and features like edges, corners, and color variations. As it moves through the footage, the network gradually combines these simple features to understand more complex structures like wheels, windshields, and headlights. This hierarchical approach enables the CNN to identify cars, trucks, and motorcycles accurately, making it a powerful tool for visual recognition tasks.
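
As a rough illustration, here is a minimal PyTorch sketch of such a network. The layer sizes, the 64x64 input resolution, and the three vehicle classes are arbitrary assumptions, not a production design.

```python
import torch
import torch.nn as nn

class VehicleCNN(nn.Module):
    """A tiny CNN sketch for 3 vehicle classes (car, truck, motorcycle)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early filters pick up edges and color blobs
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper filters combine them into parts (wheels, windshields)
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = VehicleCNN()
frame = torch.randn(1, 3, 64, 64)   # stand-in for one preprocessed camera frame
scores = model(frame)               # one score per vehicle class
print(scores.argmax(dim=1))         # index of the most likely class
```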

CPU

Definition:

A Central Processing Unit (CPU) is the primary component of a computer responsible for executing instructions and performing calculations. It acts as the "brain" of the computer, managing all the tasks and processes that make a computer function.

Example:

Think of the CPU in your computer as the conductor of an orchestra. Just as a conductor directs musicians to play their instruments in harmony, the CPU directs different parts of your computer, like the memory, storage, and graphics, to work together seamlessly. For instance, when you open a web browser, the CPU coordinates the retrieval of web pages, the rendering of text and images, and the interaction with your keyboard and mouse, ensuring that everything runs smoothly. It's the CPU's speed and efficiency that largely determine how fast your computer can perform these tasks.

CUDA

Definition:

CUDA, which stands for Compute Unified Device Architecture, is a parallel computing platform and application programming interface (API) created by NVIDIA. It's primarily used for accelerating the performance of tasks, especially in the field of graphics and high-performance computing, by harnessing the power of NVIDIA graphics processing units (GPUs).

Example:

Consider a scientific simulation that needs to perform complex calculations involving huge datasets. By using CUDA, you can tap into the parallel processing capabilities of NVIDIA GPUs. This means the calculations can be divided into many smaller tasks and processed simultaneously on the GPU, significantly speeding up the simulation. It's like having many assistants working on different parts of a big project, making the overall task much faster and more efficient. This is how CUDA technology is used to accelerate various applications, from scientific research to video rendering.
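
One common way to tap CUDA from Python is through a library such as PyTorch, which runs the same operation on the GPU when the data lives there. The sketch below times a large matrix multiply on the CPU and then on a CUDA device; the matrix size is arbitrary, and the GPU path only runs if an NVIDIA GPU with a CUDA-enabled PyTorch build is available.

```python
import time
import torch

size = 4096
a_cpu = torch.randn(size, size)
b_cpu = torch.randn(size, size)

t0 = time.perf_counter()
c_cpu = a_cpu @ b_cpu                       # runs on the CPU
cpu_time = time.perf_counter() - t0

if torch.cuda.is_available():               # needs an NVIDIA GPU and a CUDA build of PyTorch
    a_gpu, b_gpu = a_cpu.cuda(), b_cpu.cuda()
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    c_gpu = a_gpu @ b_gpu                   # the same multiply, spread across thousands of GPU threads
    torch.cuda.synchronize()                # GPU work is asynchronous; wait before stopping the clock
    gpu_time = time.perf_counter() - t0
    print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s")
else:
    print(f"CUDA not available; CPU matmul took {cpu_time:.3f}s")
```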

Digital Signal Processing (DSP)

Definition:

Digital Signal Processing (DSP) is a branch of engineering and computer science that involves the manipulation of digital signals (data) to analyze, modify, or enhance them. It's commonly used for tasks such as filtering, compression, and transformation of signals, making it applicable in various fields, including telecommunications, audio processing, image processing, and more. Today, neural networks often run on a DSP.

Example:

Suppose you're listening to music on your smartphone with noise-canceling headphones. The smartphone records external sounds through its microphone, but you want to enjoy your music without the interference of background noise, such as the hum of an airplane engine.

Here, DSP comes into play. The digital signal processor in your smartphone processes the audio signals from the microphone in real-time. It analyzes the incoming sound waves, identifies the unwanted background noise, and generates an "anti-noise" signal that is precisely out of phase with the unwanted noise. When the anti-noise signal is combined with the original audio signal, it cancels out the background noise, allowing you to enjoy your music without distractions.

Digital Signal Processing is fundamental in various applications, from audio enhancement in headphones to medical imaging, where it helps improve signal quality, remove noise, and extract useful information from digital data.
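
Here is a deliberately stripped-down NumPy sketch of that anti-noise idea: invert the unwanted signal so that, when added back, it cancels out. Real active noise cancellation uses adaptive filters and has to account for microphone and speaker delays, so treat this only as the core intuition.

```python
import numpy as np

fs = 16_000                                   # sample rate in Hz
t = np.arange(0, 0.05, 1 / fs)

music = 0.6 * np.sin(2 * np.pi * 440 * t)     # the signal we want to keep
noise = 0.4 * np.sin(2 * np.pi * 120 * t)     # low-frequency hum (e.g. an engine)

captured = music + noise                      # what reaches the ear without cancellation
anti_noise = -noise                           # the out-of-phase correction signal
cleaned = captured + anti_noise               # the hum cancels, the music remains

print(np.max(np.abs(cleaned - music)))        # ~0: residual error after cancellation
```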


Edge Computing 

Definition:

Edge computing is a computing paradigm that brings data processing and analysis closer to the data source or "edge" of a network, rather than relying on a central cloud server. It aims to reduce latency, enhance real-time processing, and improve efficiency by performing computations on local devices or edge servers.

Example:

Imagine a self-driving car on the road. Instead of sending all the data it collects (e.g., sensor data, video feeds) to a distant cloud server for processing, which might introduce significant delays, edge computing allows the car's onboard computers to analyze and make critical decisions right there in the vehicle. For example, the car can quickly identify obstacles or pedestrians, react to changing road conditions, and make split-second decisions without waiting for instructions from a remote server. This is a practical application of edge computing, making the car more responsive and safer by processing data at the edge of the network, where it's needed.

Edge Device


Definition:

An edge device is a physical or virtual device located at the periphery, or "edge," of a network. These devices are responsible for collecting, processing, and transmitting data from the local environment to other parts of the network or to central data centers. Edge devices are a key component of edge computing.

Example:

A modern home security camera is an example of an edge device. This camera is installed at the "edge" of the network, typically in or around a person's home. It collects video and audio data from its immediate surroundings, processes that data to detect motion or recognize faces, and then sends relevant information to a homeowner's smartphone or a central monitoring system. By doing this local processing at the edge device, it can provide real-time alerts and reduce the need to transmit all the raw video data to a remote data center, making it more efficient and responsive.

GPU

Definition:

A Graphics Processing Unit (GPU) is a specialized electronic circuit or chip designed to accelerate and optimize the processing of images and videos. Initially developed for rendering graphics and animations, GPUs have evolved into powerful parallel processors, widely used in various tasks, including scientific simulations, machine learning, and gaming.

Example:

In gaming, a GPU plays a crucial role in rendering complex 3D graphics and ensuring smooth gameplay. For instance, when you're playing a graphically demanding video game, the GPU in your computer or gaming console processes the game's graphics and displays them on your screen. It calculates the colors, shapes, and movements of the in-game objects in real-time, making sure you see a realistic and fluid gaming experience. The GPU's ability to handle complex calculations quickly is why it's essential for delivering high-quality visuals and responsive gameplay.

HDR

Definition:

HDR, which stands for High Dynamic Range, is a technology used in photography, video, and display technology. It enhances the contrast and range of colors in visual content, providing a more lifelike and vivid viewing experience by capturing and displaying a wider range of brightness levels.

Example:

If you've ever taken a photograph in a high-contrast environment, such as a landscape with a bright, sunlit sky and dark shadows, you might have noticed that the resulting image doesn't capture the scene as your eyes see it. In this case, using HDR technology, a camera takes multiple pictures of the same scene at different exposure levels, from very bright to very dark. These images are then combined to create a single photograph that shows the details of both the bright sky and the dark shadows. The result is a stunning image that more closely resembles what you would see with your eyes, with rich colors and intricate details in both the highlights and shadows. This is one of the practical applications of HDR technology in photography.

FPGA

Definition:

A Field-Programmable Gate Array (FPGA) is a type of integrated circuit that can be reprogrammed to perform a wide range of digital functions. FPGAs are highly flexible and can be configured to implement custom hardware, making them valuable in various applications, from electronics prototyping to specialized computational tasks.

Example:

Suppose you are developing a new product that requires fast and custom digital signal processing, like a specialized image recognition system for industrial machinery. Instead of designing a fixed-function chip, which can be expensive and time-consuming, you can use an FPGA. You program the FPGA to process the image data in real-time, implementing the specific algorithms and computations needed for your application. If you later need to make changes or improvements to your image recognition system, you can reprogram the FPGA instead of designing a new chip. This adaptability and customization capability make FPGAs suitable for various industries and applications, where rapid development and reconfiguration are essential.

Image Sensor

Definition:

An image sensor is an electronic device that captures and converts visual information, such as light and color, into digital signals. Image sensors are commonly used in digital cameras, smartphones, and other imaging devices to create photographs, videos, and digital representations of the visual world.

Example:

In a digital camera, an image sensor serves as the "eye" of the device. When you take a picture, the image sensor captures the incoming light through the camera's lens and converts it into an electrical signal. This signal is then processed and transformed into a digital image that you can view on the camera's screen or save on a memory card. There are different types of image sensors, including CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide-Semiconductor), each with its own strengths and weaknesses in terms of image quality, sensitivity, and power consumption. Image sensors are a fundamental component in modern digital imaging technology, allowing us to capture and share visual moments in various forms.

Image Signal Processor (ISP)

Definition:

An Image Signal Processor (ISP) is a specialized hardware component or software that enhances and optimizes the quality of images and videos captured by an image sensor. ISPs perform various tasks like noise reduction, color correction, sharpening, and exposure adjustments to produce visually appealing and accurate digital images or video footage.

Example:

Let's say you're taking a photo with your smartphone. The image sensor in the phone captures the scene, but the raw image it produces may have imperfections like noise, incorrect colors, or uneven lighting. This is where the ISP comes into play. The ISP analyzes the raw data from the image sensor and applies a series of corrections and enhancements. It reduces noise to make the image clearer, adjusts the colors to make them more accurate, sharpens the details, and optimizes the brightness and contrast for a well-balanced and visually pleasing result. The image you see on your phone's screen, which looks vibrant and clear, is the outcome of the Image Signal Processor's work. ISPs are a crucial component in ensuring that the images and videos you capture with your devices look their best.

Latency

Definition:

Latency refers to the delay or lag in the transmission of data between its source and destination. It is the time it takes for data to travel from one point to another, and it can be a critical factor in various applications, such as communications, network performance, and real-time systems.

Example:

Imagine you're playing an online multiplayer video game with friends. When you press a button to make your in-game character jump, the action needs to be instantly reflected on your screen to provide a smooth and responsive gaming experience. If there is high latency, you might press the jump button, but your character doesn't respond right away. Instead, there's a noticeable delay between your action and the character's jump, which can negatively impact your gameplay. Low latency is essential for real-time applications like online gaming, video conferencing, and autonomous vehicles, where quick responses are crucial for a seamless and interactive user experience.

Machine Learning Model Compression

Definition:

Machine Learning Model Compression is a set of techniques used to reduce the size or complexity of machine learning models while preserving their performance. It involves methods such as quantization, pruning, and knowledge distillation to make models more efficient for deployment on resource-constrained devices or to speed up their execution.

Example:

Let's say you've trained a complex machine learning model to recognize various objects in images, like cats, dogs, and cars. This model is quite accurate but is too large and slow to run on a mobile device, making it impractical for a mobile app that needs to provide real-time image recognition.

To make it more suitable for mobile deployment, you apply machine learning model compression techniques. This could involve quantizing the model's parameters to reduce their precision, which makes the model smaller and faster but may have a minor impact on accuracy. You might also prune some of the model's less important connections or neurons, further reducing its size. Additionally, you can use knowledge distillation, where a smaller, simpler model (student) is trained to mimic the behavior of the larger model (teacher).

The result is a compressed machine learning model that, while smaller and faster, can still perform reasonably well in recognizing objects in images, making it practical for use in a mobile app without sacrificing too much accuracy.
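
As one concrete, readily available example of compression, the PyTorch sketch below applies post-training dynamic quantization, which stores the weights of the Linear layers as 8-bit integers instead of 32-bit floats. The model itself is a stand-in, and pruning or distillation would be separate steps.

```python
import torch
import torch.nn as nn

# A stand-in classifier; imagine it outputs cat / dog / car scores.
model = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 3),
)

# Dynamic quantization: Linear weights are stored as 8-bit integers.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m):
    return sum(p.numel() * p.element_size() for p in m.parameters()) / 1e6

print(f"original weights ~{size_mb(model):.2f} MB (the quantized copy stores them in 1/4 the space)")
x = torch.randn(1, 512)
print(quantized(x).shape)   # same interface, smaller and faster weights
```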

Machine Vision

Definition:

Machine vision is a technology that enables machines, typically computers, to "see" and interpret visual information from the world, much like human vision. It involves the use of cameras and specialized software to capture, analyze, and make decisions based on visual data. Machine vision is commonly used in industrial automation, quality control, robotics, and various applications where the analysis of images or video is necessary.

Example:

In a manufacturing setting, machine vision is often used to inspect products on an assembly line. Imagine a factory that produces smartphones. As each phone moves down the assembly line, a camera equipped with machine vision technology captures images of the phone's screen, casing, and components.

The machine vision software analyzes these images in real-time, checking for defects such as scratches, cracks, or misaligned components. If a defect is detected, the machine vision system can trigger a robotic arm or conveyor system to remove the faulty phone from the production line. This not only ensures the quality of the product but also automates the inspection process, reducing the need for human intervention and improving efficiency. Machine vision is invaluable in industries where precision and speed in visual inspection are crucial.
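
A heavily simplified version of such an inspection check is sketched below using OpenCV in Python: each unit is compared against a known-good reference image and flagged if too many pixels differ. The file names and thresholds are illustrative; production systems rely on controlled lighting, image alignment, and often learned models.

```python
import cv2
import numpy as np

# Illustrative file names: a known-good "golden" unit and the unit currently on the line.
reference = cv2.imread("golden_unit.png", cv2.IMREAD_GRAYSCALE)
unit = cv2.imread("current_unit.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(reference, unit)                    # where the unit deviates from the reference
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
defect_pixels = int(np.count_nonzero(mask))

if defect_pixels > 500:                                # tolerance in pixels, tuned per product
    print(f"defect suspected ({defect_pixels} differing pixels) -> divert for review")
else:
    print("unit passes inspection")
```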

Network Pruning

Definition:

Network pruning is a technique in deep learning where specific connections (weights) or neurons in a neural network are identified and removed based on their importance or contribution to the model's performance. The goal is to make the network smaller and more efficient without significantly sacrificing its accuracy.

Example:

Let's say you have a deep neural network for image classification that was initially trained with a large number of connections. Over time, you find that some connections or neurons are not as important as others for correctly identifying objects in images.

Using network pruning, you analyze the model to identify these less crucial connections or neurons. You might use a criterion such as weight magnitude: connections with very low or near-zero weights, which indicates they have minimal impact on the model's output, can be pruned. By selectively removing these unimportant components, you create a pruned model that is smaller and faster.

The pruned model retains a significant portion of its accuracy while being more efficient in terms of memory and processing power. This can be especially useful for deploying deep learning models on resource-constrained devices, improving their speed, and making them more practical for real-time applications.
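
Here is a small sketch using PyTorch's built-in pruning utilities to zero out the 30% of weights with the smallest absolute values in a single layer. The layer size and pruning amount are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 128)

# L1 (magnitude) pruning: zero the 30% of weights with the smallest absolute values,
# on the assumption that they contribute least to the layer's output.
prune.l1_unstructured(layer, name="weight", amount=0.3)

sparsity = (layer.weight == 0).float().mean().item()
print(f"fraction of weights pruned: {sparsity:.0%}")   # ~30%

# Make the pruning permanent by folding the mask into the weight tensor.
prune.remove(layer, "weight")
```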

Neural Network

Definition:

A neural network is a type of artificial intelligence model inspired by the structure and function of the human brain. It's composed of interconnected nodes, or artificial neurons, organized in layers. These networks are used for various machine learning tasks, including pattern recognition, decision-making, and solving complex problems.

Example:

Imagine you want to build a neural network for handwriting recognition. You start with an input layer that receives pixel values of handwritten characters. These values are passed through one or more hidden layers of artificial neurons, where the network learns to recognize patterns and features in the handwriting, like loops, curves, and straight lines.

As you train the neural network with a dataset of handwritten characters and their corresponding labels (e.g., 'A,' 'B,' 'C'), it gradually learns to associate the input patterns with the correct letters. After training, when you feed the neural network an image of a handwritten letter, it processes the data through its layers and outputs a prediction, such as identifying the letter 'A.' Neural networks can generalize from the training data and are used in various applications like image and speech recognition, autonomous vehicles, and natural language processing.

NPU

Definition:

A Neural Processing Unit (NPU) is a specialized hardware component designed to accelerate and optimize the execution of artificial neural networks, particularly deep learning models. NPUs are built to handle the intense computational demands of neural network inference, making them valuable for tasks like image recognition, natural language processing, and machine learning.

Example:

Consider a smartphone equipped with an NPU. When you use a language translation app that converts spoken words into another language, the NPU plays a significant role. The app captures your voice input, which is then converted into text. This text data is processed by a deep learning model that understands and translates the text.

The NPU accelerates the computations required for the translation, making it faster and more energy-efficient. It analyzes the text, translates it, and generates the translated text or spoken words. Without the NPU, the translation process might be slower and consume more power, resulting in a less responsive and less efficient application.

NPUs are also commonly found in other devices like smart cameras, autonomous vehicles, and edge computing devices, where rapid and efficient neural network processing is essential.

Quantization

Definition:

Quantization is a process in digital signal processing and data compression where the range of possible values for a variable is reduced to a smaller set of discrete values. It involves approximating continuous values with a limited number of quantized levels. This is commonly used to reduce data size or precision while minimizing the impact on the quality or performance of the signal or data.

Example:

Consider an image with colors represented in a continuous range of millions of possible shades. If you were to quantize the image, you would reduce the number of possible colors to a smaller, discrete set. For instance, instead of using millions of colors, you might limit the image to 256 colors.

In this case, each pixel in the image would be represented by one of those 256 predefined colors. While this reduces the data size significantly, it can still produce an image that appears visually similar to the original, especially if you carefully select the 256 colors to maintain the important visual characteristics of the image. Quantization is widely used in image and video compression, as well as in the compression of audio and other data types to reduce storage requirements or transmission bandwidth without a noticeable loss in quality.
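
The NumPy sketch below shows the core operation: mapping continuous values in [0, 1] onto 256 discrete levels (8 bits) and back, which is the same idea behind 256-color images and 8-bit neural-network weights.

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.random(8)                          # "continuous" data in [0, 1]

levels = 256
quantized = np.round(values * (levels - 1)).astype(np.uint8)   # store as 8-bit integers
reconstructed = quantized.astype(np.float32) / (levels - 1)    # approximate the originals

print(np.max(np.abs(values - reconstructed)))   # worst-case error is at most 0.5 / 255
```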

RNN

Definition:

A Recurrent Neural Network (RNN) is a type of artificial neural network designed for processing sequences of data. Unlike traditional feedforward neural networks, RNNs have connections that loop back on themselves, allowing them to maintain a kind of memory of previous inputs. This makes them well-suited for tasks involving sequences, such as natural language processing and time series analysis.

Example:

Let's say you're building a language model with an RNN. When you input a sequence of words, the RNN processes one word at a time, and at each step, it not only considers the current word but also the context of the previous words in the sequence. This ability to maintain context is crucial for understanding language, as it allows the network to interpret sentences and phrases based on the words that came before. For instance, in the sentence "The cat sat on the...," the RNN can use its memory of previous words to predict that the next word might be "mat" or "chair."

RNNs are used in various natural language processing tasks, including language generation, machine translation, and sentiment analysis, where understanding the context of the sequence is essential for accurate processing and predictions.
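
As a minimal sketch of that recurrence, the PyTorch snippet below feeds the words of "the cat sat on the" through an nn.RNN and predicts a next word from the final hidden state. The vocabulary and layer sizes are toy values, and since the network is untrained its guess will of course be arbitrary.

```python
import torch
import torch.nn as nn

vocab = ["the", "cat", "sat", "on", "mat", "chair"]
embed = nn.Embedding(len(vocab), 16)
rnn = nn.RNN(input_size=16, hidden_size=32, batch_first=True)
next_word = nn.Linear(32, len(vocab))           # scores for the next word

sentence = torch.tensor([[0, 1, 2, 3, 0]])      # "the cat sat on the"
outputs, hidden = rnn(embed(sentence))          # the hidden state is updated word by word

scores = next_word(outputs[:, -1])              # predict what follows the final "the"
print(vocab[scores.argmax(dim=1).item()])       # untrained, so the guess is random
```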

SoC

Definition:

A System on a Chip (SoC) is a complete electronic system that integrates various hardware components, including microprocessors, memory, input/output interfaces, and often specialized hardware like graphics processors or communication modules, all onto a single chip. SoCs are commonly used in mobile devices, IoT devices, and other compact electronics to save space and improve efficiency.

Example:

Consider a smartphone. In the past, different components like the central processor (CPU), graphics processor (GPU), memory, and various communication chips were separate entities on the phone's circuit board. In a modern smartphone, these components are often integrated into a single SoC.


The SoC in a smartphone includes the CPU, GPU, RAM, cellular modem, Wi-Fi module, and other essential components all on one chip. This integration results in a smaller and more power-efficient design, allowing the phone to perform a wide range of functions while conserving space and energy. SoCs are not limited to smartphones and are used in various applications, from smartwatches and tablets to IoT devices and embedded systems.