Future artificial intelligence will happen at the edge

Published Jun 18, 2021

By Louis Fourie

“Edge computing” has become a buzzword, just as “Internet of Things” (IoT) and “cloud computing” did before it. In addition, the COVID-19 pandemic has tremendously accelerated the adoption of edge computing.

According to the 2021 State of the Edge Report by the Linux Foundation, digital health care, manufacturing, and retail in particular will increase their use of edge computing in the coming years, pushing the share of enterprise-generated data created and processed outside the cloud from 10% to 75% by 2022.

Edge computing

Edge computing refers to computation and data storage located close to where they are needed, at or near the source of the data, instead of relying on a centrally located cloud in one of the major data centres to do all the processing work.

In the computing world we have gone through several phases. In the beginning, organisations usually had one large mainframe computer. This was followed by the era of dumb terminals and eventually by the era of personal computers – the first time end-users owned the hardware that did the processing.

Several years later cloud computing was introduced. People still owned personal computers, but now accessed centralised services in the cloud such as Gmail, Google Drive, Dropbox, Trello, and Microsoft Office 365. Similarly, many smart devices, for instance Google Home, Amazon Echo, Apple TV, and Google Chromecast, are all powered by content and intelligence situated centrally in the cloud. Billions of people all over the world thus became deeply dependent on a few large public cloud providers such as Amazon, Microsoft, Google, and IBM, together with some private cloud providers such as Apple, Facebook, and Dropbox.

But currently the trend is to move the bulk of the processing and intelligence to the edge, closer to the source of the data. This does not mean that the cloud will disappear, but rather that the cloud is coming to the end-user. We have trusted large companies such as Amazon, Google, and Microsoft with our personal data. Now we will give them almost complete control over our computers, home appliances and cars.

Latency

The major reason for the growing popularity of edge computing is its ability to enhance response times and save bandwidth while enabling less constrained data analysis.

The brief delay between clicking a link and your web browser displaying the content is called latency. Multiplayer computer games, for instance, use several intricate techniques to mitigate both true and perceived latency between firing at something and learning whether you hit it.

Voice assistants, such as Alexa, typically need to resolve requests in the cloud, often causing noticeable latency. This is understandable: the voice assistant has to process the user’s speech and send a compressed representation of it to the cloud, which decompresses and processes it. The processing often involves pinging another Application Programming Interface (API) on a distant server to determine the weather or retrieve a piece of music. The cloud then has to compress the result and send it back to the voice assistant, which converts it to speech or plays the music.
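
To make the round trip concrete, the following sketch (in Python, with entirely hypothetical stage names and millisecond figures, not measurements of any real assistant) shows how the individual delays add up, and how removing the network hops shortens the total:

    # Illustrative sketch of a cloud-resolved voice request.
    # Every stage name and millisecond figure is hypothetical,
    # chosen only to show how small delays accumulate.
    CLOUD_PIPELINE_MS = {
        "capture_and_compress_speech": 50,
        "upload_to_cloud": 80,
        "decompress_and_recognise": 120,
        "ping_external_api": 150,  # e.g. a weather or music service
        "compress_and_download_result": 100,
        "synthesise_speech_on_device": 60,
    }

    # With edge processing the network hops disappear and the
    # recognition runs on the device itself (again hypothetical).
    EDGE_PIPELINE_MS = {
        "capture_speech": 40,
        "recognise_on_device": 140,
        "answer_from_local_data": 60,
        "synthesise_speech_on_device": 60,
    }

    def total_ms(stages):
        """Sum the per-stage delays of one request round trip."""
        return sum(stages.values())

    print(f"Cloud round trip: ~{total_ms(CLOUD_PIPELINE_MS)} ms")
    print(f"Edge round trip:  ~{total_ms(EDGE_PIPELINE_MS)} ms")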

The latency can be significantly shortened by edge computing, where more of the processing is done locally on the smart device. This also saves tremendous bandwidth, as in the case of the new generation of smart security cameras that do the processing themselves and only save relevant footage to the cloud.

Progressive Web Apps and self-driving cars

Even some websites are using edge computing. Progressive Web Apps often have offline-first functionality. This allows users to open a website on a device and continue working without an internet connection while saving all changes locally. When an internet connection becomes available, the changes are synchronised with the cloud.
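
In the browser this pattern is usually built on service workers and local storage; the sketch below expresses the same offline-first idea in Python, with all names hypothetical: every change is saved locally first and queued, and the queue is flushed to the cloud whenever a connection becomes available.

    # Minimal sketch of the offline-first pattern. All names are
    # hypothetical; a real Progressive Web App would use service
    # workers and IndexedDB in the browser instead.
    class OfflineFirstStore:
        def __init__(self):
            self.local = {}    # local copy the user works against
            self.pending = []  # changes not yet synchronised

        def save(self, key, value):
            self.local[key] = value  # always succeeds, even offline
            self.pending.append((key, value))

        def sync(self, upload, online):
            """Flush queued changes once a connection is available."""
            if not online():
                return 0
            synced = 0
            while self.pending:
                upload(*self.pending[0])  # push the change to the cloud
                self.pending.pop(0)       # drop it only after success
                synced += 1
            return synced

    # Usage: work offline, then synchronise when connectivity returns.
    store = OfflineFirstStore()
    store.save("doc1", "draft text")
    store.sync(upload=lambda k, v: print("uploaded", k),
               online=lambda: True)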

Another example is self-driving cars, which have to process such an enormous amount of visual and other sensor data that they cannot send it all to the cloud and wait for a response. Driverless cars therefore make extensive use of edge computing to avoid latency.

Edge artificial intelligence (AI)

Edge computing is a very powerful paradigm shift, but it is even more powerful when combined with AI. Most AI processing is carried out in the cloud and needs considerable computing capacity. Edge AI, however, requires little or no cloud infrastructure beyond the initial training phase: a microprocessor and some sensors are enough to process the data and make predictions in real time.

One of the reasons for edge AI is that in a manufacturing enterprise with thousands of sensors, it is simply not realistic to send large amounts of sensor data to the cloud, where the data analytics are done before the results are sent back to the manufacturing facility. This process would use huge amounts of bandwidth and cloud storage, cause unnecessary delays, and potentially expose very sensitive information.

Application of edge AI

Usually, edge AI encompasses one or more sensors, such as an accelerometer, magnetometer or gyroscope, connected to a small microcontroller unit (MCU) running an algorithm that has been trained on the typical scenarios the device may encounter. Many electric motors and other equipment nowadays contain sensors that monitor the temperature, vibration and current of the motor and send the readings to an edge AI platform, which continuously analyses the data to predict when the motor will fail and maintenance should be scheduled. The data is thus analysed locally instead of being sent to the cloud.
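
A minimal sketch of such local analysis is shown below; the sensor-reading function and the three-sigma threshold are hypothetical stand-ins, since a real device would run a model trained on the motor’s normal behaviour. The point is that only a small alert leaves the device, never the raw sensor stream.

    # Sketch of on-device condition monitoring for a motor.
    # read_sensors() and the threshold are hypothetical; a real
    # deployment would run a trained model on the MCU.
    from statistics import mean, stdev

    WINDOW = 50   # number of recent readings kept on the device
    history = []

    def read_sensors():
        # Placeholder for real temperature/vibration/current readings.
        return {"temperature": 61.0, "vibration": 0.02, "current": 4.1}

    def is_anomalous(value, baseline, spread, k=3.0):
        """Flag readings more than k standard deviations off baseline."""
        return abs(value - baseline) > k * spread

    def monitor_once(send_alert):
        reading = read_sensors()
        history.append(reading["vibration"])
        if len(history) > WINDOW:
            history.pop(0)
        if len(history) >= 10:
            base, spread = mean(history[:-1]), stdev(history[:-1])
            if spread > 0 and is_anomalous(reading["vibration"], base, spread):
                # Only this small alert goes to the cloud,
                # not the full sensor stream.
                send_alert({"event": "vibration_anomaly", **reading})

    monitor_once(send_alert=print)  # stub handler for illustration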

Another application of edge AI is automated optical inspection on manufacturing lines, which entails the visual analysis of assembled components to detect missing or misaligned parts. Whenever retraining of the AI is needed, the cloud can be used. The cloud, together with machine learning, can also provide important insights to improve processes, efficiency, and safety. Although AI model training usually happens in the cloud, the real-time critical processing is done at the edge.
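
That division of labour can be sketched as follows; the classify function and the confidence threshold are hypothetical, standing in for a trained vision model on the edge device. Pass/fail decisions stay local, while only low-confidence images are forwarded to the cloud for later retraining.

    # Sketch of the edge/cloud split in optical inspection.
    # classify() stands in for an on-device vision model; the
    # confidence threshold is hypothetical.
    CONFIDENCE_THRESHOLD = 0.90

    def classify(image):
        # Placeholder: a real device would run a trained model
        # and return (label, confidence).
        return "part_ok", 0.97

    def inspect(image, reject_part, send_for_retraining):
        label, confidence = classify(image)
        if label != "part_ok":
            reject_part(image)  # time-critical: handled locally
        if confidence < CONFIDENCE_THRESHOLD:
            # Uncertain cases are uploaded so the cloud can retrain
            # an improved model and push it back to the device.
            send_for_retraining(image, label, confidence)

    # Usage with stub handlers:
    inspect(b"raw image bytes",
            reject_part=lambda img: print("rejected"),
            send_for_retraining=lambda img, l, c: print("queued", l, c))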

Challenges and benefits

Although edge AI offers many advantages over cloud-based AI technologies and significantly decreases latency, it is not without challenges. Computing power at the edge is currently rather limited, which restricts the amount of processing and the number of AI tasks that can be performed. Complex models still have to be simplified to run on edge AI hardware, which inevitably means a reduction in accuracy.

However, new chips customised for AI workloads and new hardware platforms promise to alleviate some of the computing limitations at the edge. Various new devices, such as video cameras, are now equipped with deep learning capabilities.

The benefits derived from edge computing and AI make them a wise business decision for many organisations. In segments such as automotive, health care, manufacturing, and retail, AI at the edge has been proven to reduce costs while simultaneously supporting greater scalability, reliability, speed, and agility.

Convenient but controlled

Whether cloud or edge computing, one thing is certain: some of the large corporates will control even more of our world than they do right now. It is a wonderful world in which almost all devices are managed by Google, Amazon, Apple and Microsoft, and we do not have to worry about updates, security, fault correction, functionality, or capabilities. But unfortunately, we have to accept whatever these large companies push to our toasters, dishwashers, fridges, cars and voice assistants. In the personal computer era we controlled the software that was installed; in the edge computing era, we will only use it.

Professor Louis CH Fourie is a technology strategist

*The views expressed here are not necessarily those of IOL or of title sites

BUSINESS REPORT
