LIFE AT THE EDGE
Canonical is the company behind Ubuntu – the Linux-based OS for most public cloud workloads as well as the emerging categories of smart gateways, self-driving cars and advanced robots. Here, in an exclusive interview, its Product Manager, Alex Chalkias, digs deep into edge computing, how edge can help realise the potential of 5G, and the importance of containerisation
Interview James Henderson
You've previously likened technology to water, and edge computing to an ocean, could you explain what you mean by that?
In a similar way to water, technology often begins at a source – a central control or authority – before trickling down and away toward the edge. To that extent, edge computing could be described as technology’s ocean: a great wealth waiting to be explored, and we have only just touched the surface. However, as it continues to proliferate, so does its complexity.
The potential of edge computing is clear, but what are some of the barriers to adoption?
With the current opportunity being so vast, almost infinite, it’s no surprise that the edge is often seen as a no-go, particularly for smaller industry players. Data is being created exponentially, while both AI and 5G lie at the heart of the edge. So where do you even begin? A challenge for enterprises is getting the infrastructure right to start with, and to support the growing complexity at the extremities, because more network capacity locally means greater density at the edge. Additionally, privacy remains a double-edged sword. Without trust and a comprehensive set of security measures, edge will never truly take off. On the one hand, processing data locally offers inherent benefits because the data remains in the desired sovereign area and does not traverse the network to the core. In other words, the data is (mostly) physically domiciled.
Alex Chalkias, Product Manager, Canonical
On the flip side, keeping data locally means more locations to protect and secure simultaneously, with greater physical access opening the door to different kinds of threats. A larger physical presence at the edge could, for example, increase the likelihood of Denial of Service (DoS) attacks, leaving individual machines or networks compromised. To combat this threat, backup solutions that circumvent local edge failures may be needed. However, by removing the constant back and forth of data between the cloud and the edge, privacy will be enhanced beyond its current capacity, especially where individual consumers are concerned, because personal information remains in the hands of the user at the edge. When privacy combines with flexible infrastructure, edge will deliver innovation at a much greater scale.
How should enterprises accommodate edge computing as a wider part of broader ecosystems?
In short, there is a fundamental need for flexibility – to be able to shift workloads at the drop of a hat – and to understand the edge as one part of a broader ecosystem. In order to be flexible and easily manage the operation, enterprises need to have a grasp on containers, which will allow them to truly make the most of edge with multiple apps. With data being created at an unprecedented rate, enterprises must also consider how economical it is to transfer data from the edge to the core, and whether it is less expensive to filter and pre-process data locally. Workloads that aren’t subject to demanding latency requirements should continue to be served by the most optimal cloud solutions possible. However, the coming wave of new use cases requires operators to rethink how the network is architected. And that’s where edge computing comes in. Interest in edge computing is being driven by exponential data increases from smart devices in the IoT, the coming impact of 5G networks and the growing importance of performing artificial intelligence tasks at the edge – all of which require the ability to handle elastic demand and shifting workloads. Companies that want to use edge can treat it as an afterthought, but they will see more success if it is woven into their cloud operations from the get-go.
To continue the water analogy, you've also spoken about technology's "natural springs" – could you explain what you mean by that?
The natural spring is the origin of the compute – in edge computing’s case, the cloud or on-premise infrastructure – before it trickles its way into the ocean. The edge, like the ocean, is a natural extension.
How does containerisation fit into the conversation around edge computing?
Containers can remove barriers at the edge and have now become more than just a ‘nice-to-have’. That’s because they are already synonymous with cloud deployments and are infrastructure-agnostic, meaning you do not have to reinvent your architecture to innovate from cloud through to edge.
Edge clouds should have at least two layers – both of which will maximise operational effectiveness and developer productivity, though each layer is constructed differently. The first is the Infrastructure-as-a-Service (IaaS) layer. Besides providing compute and storage resources, the IaaS layer should satisfy the network performance requirements of ultra-low latency and high bandwidth. The second involves Kubernetes, which has become a de facto standard for orchestrating containerised workloads in the data centre and the public cloud, and has emerged as a hugely important foundation for edge computing. While using Kubernetes for this layer is optional, it has proven to be an effective platform for those organisations getting into edge computing. Because Kubernetes provides a common layer of abstraction on top of physical resources – compute, storage and networking – developers or DevOps engineers can deploy applications and services in a standard way anywhere, including at the edge. Kubernetes also enables developers to simplify their DevOps practices and minimise time spent integrating with heterogeneous operating environments, leading to happy developers and happy operators.
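To illustrate that common layer of abstraction, here is a minimal sketch of a Kubernetes Deployment manifest of the kind that could be applied unchanged to a data-centre cluster or an edge cluster. The workload name, image and resource figures are hypothetical placeholders, not anything Canonical ships.

```yaml
# A deliberately minimal Deployment manifest. The same file can be
# applied to a cloud cluster or an edge cluster with kubectl.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-gateway            # hypothetical edge workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sensor-gateway
  template:
    metadata:
      labels:
        app: sensor-gateway
    spec:
      containers:
      - name: gateway
        image: registry.example.com/sensor-gateway:1.0  # placeholder image
        resources:
          requests:               # modest requests suit constrained edge nodes
            cpu: 100m
            memory: 64Mi
        ports:
        - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml`, the manifest is identical regardless of where the cluster runs – which is exactly the portability described above.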
You're known for favouring lightweight versions of Kubernetes such as MicroK8s. How would you characterise the advantage of leveraging that technology?
Kubernetes manages and automates containers at scale. This includes resource allocation, container networking, health checking and self-healing based on explicitly declared desired state. Kubernetes (K8s) allows enterprises to move faster, improving developer productivity and operational agility. That means faster release cycles and a faster go-to-market. MicroK8s reduces the footprint and inherent complexity of Kubernetes. All the Kubernetes services are included in a single package, alongside the most popular add-ons, to bring an autonomous Kubernetes cluster from developer workstation to edge and IoT appliances. The low-maintenance user experience of MicroK8s is particularly evident when users create highly available multi-node K8s clusters within seconds, without ever having to edit a single configuration file.
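As a sketch of that low-maintenance workflow, the commands below follow MicroK8s' documented install-and-cluster path on Ubuntu; the IP address and token printed by `add-node` will differ per machine, so the join line shown is illustrative.

```shell
# Install MicroK8s as a snap on each machine
sudo snap install microk8s --classic

# Wait for the local node's services to come up
microk8s status --wait-ready

# On the first node: generate a join command for another machine
microk8s add-node
# ...prints a join command of the form:
#   microk8s join 192.168.1.10:25000/<token>

# On each additional node: paste the printed join command.
# With three or more nodes, MicroK8s forms a highly available
# cluster automatically - no configuration files edited by hand.
```

No kubeconfig authoring, certificate generation or etcd setup is required, which is the "without ever having to edit a single configuration file" experience described above.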
Which industry sectors do you think are best placed to benefit from advances in edge computing capabilities?
Since its introduction, edge computing has emerged as a proven and effective runtime platform to help solve unique challenges across telecommunications, media, transportation, logistics, agriculture, retail and other market segments. The speed and reduced latency of edge combined with 5G can also benefit an array of IoT applications such as smart cities, transportation, intelligent manufacturing, healthcare and smart farming.
While edge computing will be one of the differentiating technologies taking us into the next frontier of a fully connected world, proofs of concept will be key to demanding improved solutions from cloud providers and to ensuring edge is both effective and financially viable – before it starts revolutionising the likes of wearables, cities and robots.
How important is it that a company has a set cloud strategy if it wants to benefit from edge computing?
Businesses have yet to fully familiarise themselves with edge computing and its infinite possibilities. We see companies that have already invested in on-prem, public, hybrid or multi-cloud having a better understanding of the benefits of having “micro clouds” at the edge. A company that has already dealt with shifting workloads, upgrading or applying security patches on its cloud-native applications is more prepared to tackle the same challenges in the distributed, minimal and variable-scale micro clouds at the edge.
How excited should we be about bringing together 5G and edge, and where do you think we'll see real leaps of advancement (self-driving cars, etc)?
Combined with 5G, edge promises to fast-track the use of AI in the IoT. With faster speeds and dramatically lower latency, 5G makes it much more feasible to process AI workloads locally at the edge, where data is gathered, rather than taking the slower and more expensive route of sending it to the cloud or data centre. Multi-gigabit-per-second speeds and one-millisecond latency times will ensure more data than ever can be farmed off and used in wider crowd-sourced intelligence.

The capabilities of edge for autonomous vehicles mean we will see real leaps of advancement in this area as the technology develops. In order to operate safely, driverless cars will need to collect a vast amount of data about their surroundings and directions, as well as considerations such as road closures and weather conditions. Edge computing will allow these vehicles to collect, process and share data in real time with almost no latency and superior reliability, as inferences can be run within the car instead of having to connect externally to the cloud.

There are also some exciting use cases in the pipeline for emerging tech like augmented reality (AR). AR modifies real-world environments by incorporating digital elements, requiring visual data to be processed and rendered in real time. Edge will allow IoT devices to generate AR displays instantly, eliminating loading times and improving the user experience. As well as gaming, AR enhanced by edge could be used in retail – so customers can visualise what they might look like in an outfit, for instance – and even in employee training scenarios, helping workers adapt to changes in their surrounding environment. Other areas with big potential for edge include industrial manufacturing and healthcare.
If you look into a crystal ball, what will the conversation around edge computing focus on in five years' time?
The predictions say that edge will eventually be bigger than the public cloud. Businesses will have a micro cloud at the base of every cell tower, at the back of every store or in every office block. The challenge then will be of large-scale fleet management, integration between different micro clouds (or edges) and the even further automation of lifecycle operations, like backups and upgrades.
Where does Canonical see itself as part of the edge computing conversation?
Our vision is to provide our customers with a zero-ops bulletproof micro cloud stack that can serve compute, networking and storage at the edge, near the consumer and the data source. Our edge stack allows for remote operations without local presence at each site, repeatable deployments and ultra-resilience. Our micro clouds can be deployed and managed as a fleet of minimal clusters at scale, across geographies with better economics, predictability and performance.