Nicholas Ismail, Global Head of Brand Journalism, HCLTech

Professional background: Nick Ismail is the Global Head of Brand Journalism at HCLTech, where he is responsible for the editorial and content strategy. He previously spent six years leading content for Information Age, a B2B technology publication headquartered in London.

Education: MA (TV Journalism) City University, BA (English Literature) University of Manchester

By Nicholas Ismail, Global Head of Brand Journalism, HCL Technologies Ltd.

 

Before delving into Edge AI, it’s important to understand the distinction between cloud and the edge.

Every forward-looking industry is attempting to accelerate its cloud-enabled transformation. The cloud brings a host of advantages, including the agility to innovate at speeds simply not possible with previous data center-based approaches.

However, the wave of cloud adoption has brought some inherent challenges. The most significant is the inability to process data generated outside the cloud, such as the feed from a video camera monitoring system, in real time. This has driven the emergence of the edge.

At the edge, the space between connected devices and the cloud or data center, data can be processed in real time on (or near) the physical device capturing it. Examples include video camera monitoring systems, wearable devices in healthcare, and even smart refrigerators.

Where does Edge AI fit in?

Edge AI takes the edge to the next level. Data processed on the physical device can be analyzed by AI models in real time, without being sent to the cloud. This solves real-world problems in, for example, smart city environments.

In a smart city, cameras monitor the volume of traffic driving into the city center. Traffic operators might want to determine whether there are enough parking spaces for those incoming vehicles and, if not, notify drivers and direct them to where parking is available. Operators might also want to ensure people are correctly using the driver-plus-passenger lanes. In these use cases, Edge AI can apply image recognition to count how many people are in a car, or use machine learning to interpret the video feeds and match the number of cars entering the city with the number of parking spaces available.
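The parking scenario above can be sketched as edge-side decision logic. The sketch below is purely illustrative: the function names, lot names, and numbers are hypothetical, and in a real deployment the car counts would come from an on-device vision model rather than hard-coded values.

```python
# Illustrative sketch of edge-side parking logic for the scenario above.
# In practice, cars_entered/cars_left would come from an on-device
# image-recognition model; all names and numbers here are hypothetical.

def update_parking(available_spaces: int, cars_entered: int, cars_left: int) -> int:
    """Return remaining city-center spaces after one monitoring interval."""
    return available_spaces - cars_entered + cars_left

def driver_notification(remaining: int, nearby_lots: dict) -> str:
    """Decide, locally on the edge device, what to tell incoming drivers."""
    if remaining > 0:
        return f"{remaining} spaces available in the city center"
    # City center is full: point drivers to the lot with the most capacity
    open_lots = {name: free for name, free in nearby_lots.items() if free > 0}
    if open_lots:
        best = max(open_lots, key=open_lots.get)
        return f"City center full; {open_lots[best]} spaces at {best}"
    return "All monitored parking full"

remaining = update_parking(available_spaces=120, cars_entered=150, cars_left=10)
print(driver_notification(remaining, {"North Lot": 40, "East Lot": 5}))
```

Because both the counting and the notification decision happen on or near the camera, no video ever needs to leave the device.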

This type of activity is not possible in the cloud; the round trip would take too long for the data to get there and back again.

“Edge AI is about undertaking AI activities close to the point of data origination and being able to interpret that data at the point of origin,” explains Alan Flower, CTO Cloud Native and Head of Cloud Native Labs at HCL Technologies.

Mass adoption

The potential for Edge AI has meant that its use is moving up the adoption curve beyond the experimentation phase. Flower believes we’re at the point of mass adoption as an entirely consumable piece of technology.

This advancement has seen traditional cloud and data center vendors move into the edge space to meet the increasing demand from large enterprises that want and expect these Edge AI capabilities to run on a fully managed platform.

“Vendors are producing more appropriate versions of their existing products, such as the availability of containerized applications, that support an edge environment to meet increasing demand,” says Flower.

He adds: “As these technologies and platforms have emerged in edge environments, it's also accelerated mass adoption."

Business value at the edge

Edge AI has the power to generate previously untapped business value. Flower provides three examples:

  • Speed and immediacy: In a manufacturing setting, for example, using AI on or near a piece of machinery allows real-time processing of data or images to detect faults and make immediate remediations. The improved quality control has a significant impact on wastage and efficiency.

  • Cost savings: In a smart city or building development, high-definition cameras could upload terabytes of data each day to a cloud provider. The provider charges to store that data, and organizations also incur transmission and bandwidth costs. With Edge AI, these costs can be avoided: the intelligence is extracted and next-step decisions are made without transferring any data to the cloud. This approach also saves time.

  • Sustainability: In the energy sector, Edge AI can be used to manage the bidirectional flow of power in a grid. This ensures the reliable delivery of energy while automating the process for users to sell unconsumed surplus energy back to the grid. This sustainability benefit applies to any industry environment, as Edge AI saves energy by reducing the need to upload data and process it in the cloud.
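The cost-savings argument can be made concrete with a back-of-envelope calculation. The prices and data volumes below are purely illustrative assumptions, not vendor quotes; the point is only the orders-of-magnitude gap between shipping raw footage and shipping compact alerts.

```python
# Back-of-envelope comparison of cloud vs. edge costs for the camera
# scenario above. All prices and volumes are illustrative assumptions.

EGRESS_PER_TB = 90.0    # assumed network transfer cost, $/TB
STORAGE_PER_TB = 20.0   # assumed monthly storage cost, $/TB

def monthly_cloud_cost(tb_per_day: float, days: int = 30) -> float:
    """Cost of uploading raw footage to the cloud and retaining it."""
    uploaded_tb = tb_per_day * days
    return uploaded_tb * EGRESS_PER_TB + uploaded_tb * STORAGE_PER_TB

def monthly_edge_cost(alerts_per_day: int, kb_per_alert: float = 2.0,
                      days: int = 30) -> float:
    """With Edge AI only compact alerts leave the device, so far less moves."""
    uploaded_tb = alerts_per_day * kb_per_alert * days / 1e9  # KB -> TB
    return uploaded_tb * EGRESS_PER_TB

print(f"cloud: ${monthly_cloud_cost(1):,.0f}/month")   # 1 TB of video per day
print(f"edge:  ${monthly_edge_cost(500):,.6f}/month")  # 500 small alerts per day
```

Even with generous assumptions for the edge side, transmitting only the inferences rather than the raw data reduces transfer and storage costs by several orders of magnitude.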

Comparing AI at the edge to AI in the cloud: same technology, different environments

The benefits of running AI at the edge are clear, but are there limitations compared to running AI in the cloud?

The answer is no.

Flower confirms: “There has been a huge acceleration in the maturity and development of AI technologies. It's a rapidly developing space and as a result there's been a lot of standardization. In the AI space, there are common technologies that most people building AI systems would tend to deploy. And, because of this standardization, developing an AI solution on the edge requires using the same technology as in a cloud environment. The application can be written once and applied anywhere without any impact on processing power.”

This is because “training an AI model, before it’s deployed, requires real processing power. Once trained, the production operation and deployment require far less computing resources. This can be handled at the edge,” he adds.
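Flower's train-heavy, deploy-light point can be sketched in a few lines. In this toy illustration (all names and data are hypothetical), "training" searches many candidate thresholds over historical sensor readings, which is the compute-intensive step done in the cloud, while the deployed artifact is a single number that a constrained edge device can evaluate cheaply per reading.

```python
# Toy illustration of the train-heavy / deploy-light split described above.
# Training searches hundreds of candidate fault thresholds (compute-heavy,
# done in the cloud); the deployed "model" is one number checked per reading
# on the edge device. Purely illustrative, not a real training pipeline.

def train_threshold(normal_readings, faulty_readings):
    """Pick the cutoff that best separates normal from faulty readings."""
    candidates = [r + 0.01 * i
                  for r in normal_readings + faulty_readings
                  for i in range(100)]
    def errors(t):
        # Misclassifications: normal readings flagged + faults missed
        return sum(r >= t for r in normal_readings) + \
               sum(r < t for r in faulty_readings)
    return min(candidates, key=errors)

def edge_inference(reading: float, threshold: float) -> bool:
    """Cheap per-reading fault check that runs on the device itself."""
    return reading >= threshold

threshold = train_threshold(
    normal_readings=[0.1, 0.3, 0.2, 0.4],
    faulty_readings=[0.9, 1.1, 0.8],
)
print(edge_inference(0.95, threshold))  # flags a faulty reading
```

The asymmetry is the whole point: the search over candidates grows with the data, while the deployed check is constant-time per reading.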

Contextualization and consumption: the next step for AI and the edge

Standardized or out-of-the-box solutions for most technology products are giving way to custom models and approaches for specific use cases and industries. Edge AI is no different.

“There will be a greater reuse of models that address specific industry challenges,” says Flower.

 “From the technology side, there is going to be a continued evolution in the ease of consumption for AI technologies,” he adds.

Intel is a key enabler for driving this ease of consumption. The company has developed a cross-platform approach to AI called oneAPI.

"oneAPI is a generic toolkit that works with all underlying hardware technologies. It enables a developer to build advanced AI models – that require advanced processing capabilities, GPUs, and FGPA hardware – without worrying about what the underlying hardware is,” explains Flower.

He adds: “These types of products are driving the adoption of AI by making it easier for developers and client organizations to build AI solutions that can take advantage of the future innovation in hardware. The ability to develop AI that can inherently benefit from improved hardware, as it becomes available, is massively important to potential clients.”
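As an analogy only (oneAPI itself is a C++/SYCL-based toolkit, not a Python library, and the names below are hypothetical), the device-abstraction idea Flower describes looks like this: the application is written once against an abstract device, and a runtime picks whichever hardware is actually present.

```python
# Analogy only: oneAPI is a C++/SYCL toolkit, not a Python API. This sketch
# just illustrates the idea it embodies: write the kernel once, and let a
# runtime dispatch it to whatever hardware (FPGA, GPU, CPU) is available.

def run_on_cpu(data):
    return [x * 2 for x in data]          # fallback path, always available

AVAILABLE_BACKENDS = {"cpu": run_on_cpu}  # a real runtime would probe hardware

def offload(data, preferred=("fpga", "gpu", "cpu")):
    """Run the kernel on the best available device, CPU as guaranteed fallback."""
    for device in preferred:
        backend = AVAILABLE_BACKENDS.get(device)
        if backend:
            return device, backend(data)
    raise RuntimeError("no device available")

device, result = offload([1, 2, 3])
print(device, result)
```

When a new accelerator appears, only the backend table changes; the application code is untouched, which is the "benefit from improved hardware as it becomes available" property Flower highlights.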

Edge AI in action

As an example of this in action, HCL Technologies' IoT WoRKS leveraged the oneAPI toolkit to develop the Intelligent Secure Edge (ISE) for Smart Cities. The ISE is a collaborative platform that drives proactive responses powered by real-time insights. It brings together citizens, communities, and authorities in a virtual network, powered by artificial intelligence, edge, Wi-Fi 6, 5G, and other next-gen technologies.

There are many use cases that can be facilitated by this platform to generate value in a smart city environment, such as budget optimization, traffic management, incident response, water quality and flood control, smart parking, and more.

Edge AI can be deployed on this platform to contextualize and generate alerts in a smart city environment. The technology can leverage data from citizens to contextualize sensor data to prioritize alerts for emergency services and provide live incident tracking.
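The contextualization idea above can be sketched as a small prioritization step: a raw sensor alert is escalated when independent citizen reports from the same area corroborate it. The field names and weights below are illustrative assumptions, not the ISE platform's actual schema.

```python
# Hypothetical sketch of alert prioritization as described above: sensor
# alerts corroborated by citizen reports from the same area are escalated
# for emergency services. Field names and weights are illustrative only.

def prioritize(alerts, citizen_reports):
    """Order alerts so corroborated incidents reach emergency services first."""
    def score(alert):
        corroborations = sum(
            1 for report in citizen_reports if report["area"] == alert["area"]
        )
        return alert["severity"] + 2 * corroborations  # citizen data adds context
    return sorted(alerts, key=score, reverse=True)

alerts = [
    {"id": "smoke-7", "area": "district-3", "severity": 2},
    {"id": "noise-1", "area": "district-1", "severity": 3},
]
reports = [{"area": "district-3"}, {"area": "district-3"}]
print([a["id"] for a in prioritize(alerts, reports)])
```

Here the lower-severity smoke alert outranks the noise complaint because two independent citizen reports back it up, which is the kind of context a raw sensor feed alone cannot provide.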

HCL Technologies has also utilized Intel’s OpenVINO toolkit to enable real-time predictions on live video feeds using deep neural network models on edge devices that are powered by Intel processors – crucial to facilitating these smart city interactions.