AI at all corners: bringing intelligence to the edge

In their own unique ways, both artificial intelligence (AI) and edge computing have now entered the mainstream. By Tom Canning, VP of IoT at Canonical – the company behind Ubuntu.


The connected home has delivered smarter technology directly to the consumer's doorstep – remote-controlled heating, lighting and entertainment all take advantage of connected sensors to add greater value – while many devices already possess the AI capabilities needed to convert handwritten notes into text in an instant. The convergence of AI and the edge, however, is still in its infancy, and it holds the potential to revolutionise the lives of individuals and businesses alike. It is also not without its challenges, with the reach of innovation often exceeding the grasp of practical implementation.

The building blocks

AI at the edge is not a distant dream. The ability to unlock a phone with your face, ask a smart assistant what the weather will be like tomorrow, or automatically adjust a camera to take the perfect low-light shot are all examples of AI in our pockets, right at the edge. These use cases remain small in scale, however, and serve individuals rather than empowering the collective. The challenge is to get the infrastructure right from the outset, and to support the growing complexity at the extremities, because more local network capacity means a greater density of devices and workloads at the edge.

The right mix of compute, AI accelerators, storage, and networking will allow AI at the edge to thrive and evolve. Similarly, bringing in the capacity of a core data centre to offload passive edge applications will help streamline operations even further. Although the edge is where the innovation appears to happen, where people actually interact with technology, balancing the workload between the local environment and the cloud is the key to successful implementation. Get these foundations right, and the potential of AI at the edge can be fully realised.
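One way to picture that balancing act is as a simple routing decision per workload. The sketch below is purely illustrative – the `Workload` type, the `route` function and the round-trip figure are assumptions for the sake of the example, not a real API or measured value:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: float  # hard latency budget for the application
    interactive: bool      # does a person interact with it directly?

# Assumed typical WAN round trip to the core data centre (illustrative).
CLOUD_ROUND_TRIP_MS = 80.0

def route(workload: Workload) -> str:
    """Keep latency-critical, interactive work local; offload passive
    workloads to the core data centre."""
    if workload.interactive or workload.max_latency_ms < CLOUD_ROUND_TRIP_MS:
        return "edge"
    return "cloud"

print(route(Workload("camera-inference", 30.0, True)))     # edge
print(route(Workload("nightly-retraining", 60_000.0, False)))  # cloud
```

The point is not the thresholds themselves but the split: anything a person waits on stays local, anything passive rides the network to the core.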

 

Harmonious infrastructure is the difference between one-off examples of edge-based AI and far more expansive opportunities that connect larger patterns of data. Pooled from multiple devices, AI at the edge will offer intelligence greater than any individual piece of data can provide. Think of mapping as an example of collective smartness in operation: traffic congestion alerts, speed traps and car sensors are all individual points of information that feed into a larger whole, from which smarter solutions can then be developed. There is, therefore, no set model for the successful implementation of AI at the edge; it is instead a case of flexibility. One thing that is certain, however, is the need to prioritise security as the use cases multiply.
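The mapping example can be sketched in a few lines: individual speed reports mean little on their own, but pooled together they yield a congestion estimate no single car could produce. The function name and free-flow figure below are illustrative assumptions:

```python
from statistics import median

def congestion_score(speed_reports_kmh, free_flow_kmh=90.0):
    """Return a 0..1 congestion score: 0 = free flow, 1 = standstill."""
    if not speed_reports_kmh:
        return 0.0
    typical = median(speed_reports_kmh)
    return max(0.0, min(1.0, 1.0 - typical / free_flow_kmh))

# Four cars report their current speed on the same stretch of road.
print(congestion_score([20, 25, 18, 30]))  # 0.75 – heavy congestion
```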

 

Prioritising security

Without trust and a comprehensive set of security measures, AI at the edge will never truly take off. The challenge is that privacy remains a double-edged sword.

On the one hand, processing data locally offers inherent benefits: the data remains in the desired sovereign area and does not traverse the network to the core. In other words, the data is physically domiciled at all times. On the flip side, keeping data local means more locations to protect and secure simultaneously, with increased physical access opening up different kinds of threat. A greater physical presence at the edge could, for example, increase the likelihood of denial-of-service (DoS) attacks that compromise individual machines or networks.

To combat this threat, backup solutions that route around local edge failures may be needed. By removing the constant back and forth of data between the cloud and the edge, however, privacy will be enhanced beyond its current state, especially where individual consumers are concerned, because personal information remains in the hands of the user at the edge. And when privacy combines with flexible infrastructure, AI at the edge will deliver innovation at a much greater scale.

Next-gen outcomes

There are two primary advantages to embedding intelligence at the edge: lower latency and reduced network traffic to the core data centre. Both are critical for the real-time systems that power the likes of autonomous vehicles (AVs) or industrial robots. These systems illustrate the convergence of the real world with the digital, and the need to act on data immediately in order for applications to perform as smoothly and naturally as possible. Without AI at the edge, both AVs and robots will remain in the camp of primitive connected technology, rather than intelligent systems able to learn and adapt automatically.
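The traffic-reduction advantage is easy to make concrete with back-of-envelope arithmetic. The figures below are illustrative assumptions, not measurements, but they show the order of magnitude at stake when a camera sends inference results instead of raw frames:

```python
# Compare streaming raw video to the core against sending only
# edge-inference results (all numbers are illustrative assumptions).
RAW_FRAME_BYTES = 1920 * 1080 * 3   # one uncompressed 1080p RGB frame
RESULT_BYTES = 64                   # a compact label plus bounding box
FPS = 30

raw_bps = RAW_FRAME_BYTES * FPS     # bytes/second if frames go to the core
edge_bps = RESULT_BYTES * FPS       # bytes/second if inference stays local
print(f"raw video to core: {raw_bps / 1e6:.1f} MB/s")   # 186.6 MB/s
print(f"edge results only: {edge_bps} B/s")             # 1920 B/s
print(f"traffic reduction: {raw_bps // edge_bps}x")     # 97200x
```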

 

Closer to home, one of the biggest opportunities lies in the real-time contextual synthesis of the edge domain. The house acts as a combination of data streams from a whole host of independent devices. Combined, these streams help to tell a larger story, and from that story intelligence can be derived. A simple example could be motion sensors, facial recognition screens, and kitchen appliances all working together to produce an intuitive environment, where the home is one step ahead of the owner to help spread the workload.
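That kind of synthesis can be sketched as a simple rule that fuses the three device streams from the example above into one household-level inference. The function and its signals are hypothetical, chosen only to mirror the scenario in the text:

```python
def synthesise(motion_at_door: bool, face_recognised: bool,
               oven_on: bool) -> str:
    """Fuse independent device streams into one household-level inference."""
    if motion_at_door and face_recognised:
        # The owner is home; get the kitchen one step ahead of them.
        return "owner arriving" if oven_on else "owner arriving: preheat oven"
    if motion_at_door:
        return "unrecognised visitor at door"
    return "no activity"

print(synthesise(motion_at_door=True, face_recognised=True, oven_on=False))
# owner arriving: preheat oven
```

No single stream supports that conclusion on its own; the value comes from combining them locally, in real time.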

 

The Internet of Things is transitioning from a predominantly centralised, hub-and-spoke, cloud-based offering to a more distributed and intelligent edge. By tackling the initial challenges of privacy and infrastructure, businesses will be able to innovate more quickly down the line, while consumers will better understand the role that AI actually plays in their daily lives. Intelligence at the edge will transform every industry and every home. Fundamentally, it is software that allows the world to be smarter, and the combination of AI and the edge will mean inference can run exactly where the application is, in true real time.
