The evolution of our networked world
All of the world’s digital services are supported by technical infrastructure – a vast collection of elements including servers, networks and storage. Depending on how and where it’s used, this infrastructure can run on-premises, on the public cloud – such as that provided by Google, Amazon or Microsoft – or on hyper-converged infrastructure (HCI): software-defined infrastructure built on virtualisation.
The past decade has seen a profound transformation in the way this infrastructure is managed, controlled and monitored. While this has historically relied heavily on human supervision, the growing adoption of automation, cloud and, most recently, machine learning (ML) is weakening that dependency. Indeed, this year we can expect the ecosystem of smart infrastructure to begin growing and evolving.
Architects and administrators have long been responsible for manually creating and managing infrastructure, along with its various configurations and governing policies and processes. But in 2006, the nature of infrastructure technology underwent a major change with the launch of the first public cloud by AWS, followed two years later by Google App Engine, which introduced technologists to self-service infrastructure.
The most significant step-change in manual infrastructure maintenance came in 2011, however, with the arrival of the automation server Jenkins and the rise of Infrastructure as Code (IaC). AWS released CloudFormation in the same year, allowing engineers to define cloud infrastructure in code and spin up the necessary resources on Amazon’s servers.
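To make the idea concrete, here is a minimal sketch of IaC in Python: a CloudFormation-style template defined as code rather than assembled by hand in a console. The resource name, instance type and AMI ID are illustrative placeholders, not taken from any real deployment.

```python
import json

# A minimal CloudFormation-style template, defined in code rather than
# clicked together by hand. All names and properties are illustrative.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Illustrative IaC example: a single EC2 instance.",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t3.micro",
                "ImageId": "ami-00000000",  # placeholder AMI ID
            },
        }
    },
}

# Serialise to the JSON body CloudFormation accepts; deploying it is then
# a single API call (e.g. via the AWS CLI or boto3's create_stack).
template_body = json.dumps(template, indent=2)
print(template_body)
```

Because the template is ordinary text, it can be versioned, reviewed and re-applied like any other code – which is precisely what made small teams able to oversee large estates.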
This level of automation gave early adopters more manageable infrastructure that required much smaller teams to oversee it. Operations and implementation were simplified as a result, reducing running costs. A trend was set in motion: software-defined infrastructure (SDI) led to the emergence of HCI in 2014, and to the idea that well-managed infrastructure should minimise the need for human supervision.
Such was its popularity that, by 2017, tech giants including Cisco and Hewlett Packard Enterprise had launched their own HCI offerings.
Taking a different approach
The next phase in the evolution of infrastructure has been termed “smart infrastructure”. Cloud machine learning was first brought to market in 2014-15 and began to see adoption around a year later.
Since then, platforms including the public cloud and HCI have gained the ability to use the full capabilities of ML. And as uptake of these platforms grew massively in the global tech market, the volume of data available exploded. This data records every type of engagement – by software and hardware – within the infrastructure, making it highly amenable to ML. Using ML to capture and analyse all of this (often transient) infrastructure data via the logs has therefore become desirable for the valuable insights it provides.
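As a simplified illustration of the kind of analysis involved, the sketch below flags outliers in log-derived latency metrics using a plain statistical threshold – a crude stand-in for the ML models a smart platform would actually apply. The metric values and threshold are invented for the example.

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=2.5):
    """Return indices of samples more than `threshold` standard
    deviations from the mean -- a simple stand-in for the learned
    models smart infrastructure applies to log-derived metrics."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(samples) if abs(v - mu) / sigma > threshold]

# Hypothetical per-minute request latencies (ms) pulled from logs;
# the spike at index 5 is the kind of event worth surfacing.
latencies = [21, 23, 22, 24, 20, 480, 22, 23, 21, 22]
print(flag_anomalies(latencies))  # → [5]
```

A production system would replace the z-score with models trained on historical behaviour, but the pipeline – ingest logs, extract metrics, surface the unusual – is the same shape.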
Smart infrastructure puts this data to a further use, however, leveraging ML to report on and highlight potential optimisations in how any given infrastructure should be operated. Here, ML is used to surface database optimisations, network and security recommendations, observed security threats and fixes, and cost optimisations.
All of this represents significant value to a business: an autonomous resource that can be replicated and runs 24/7 is itself a cost optimisation. What’s more, it can continuously check systems for further opportunities to cut costs, creating more efficient infrastructure and offering ongoing security and performance observations.
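A minimal sketch of one such continuous check, with hypothetical VM names and utilisation thresholds, might look like the rule-based rightsizing pass below – the kind of logic an ML-driven system would refine with learned baselines rather than fixed cut-offs.

```python
def rightsizing_advice(vm_cpu_utilisation, low=0.2, high=0.8):
    """Suggest resizing for VMs whose average CPU utilisation falls
    outside [low, high]. VM names and thresholds are illustrative."""
    advice = {}
    for vm, samples in vm_cpu_utilisation.items():
        avg = sum(samples) / len(samples)
        if avg < low:
            advice[vm] = "downsize"
        elif avg > high:
            advice[vm] = "upsize"
    return advice

# Simulated utilisation samples gathered from monitoring data.
usage = {
    "web-01": [0.05, 0.08, 0.06],  # mostly idle  -> downsize
    "db-01":  [0.92, 0.88, 0.95],  # saturated    -> upsize
    "app-01": [0.45, 0.50, 0.40],  # healthy      -> no advice
}
print(rightsizing_advice(usage))  # → {'web-01': 'downsize', 'db-01': 'upsize'}
```

Run on a schedule across an estate, even this trivial rule surfaces the over- and under-provisioned machines that drive cloud bills.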
We’re already starting to see examples of smart infrastructure on the market. HPE, for example, is looking to differentiate itself in the HCI space with HPE InfoSight, an AI tool that assesses and reports on virtual machine usage and recommends enhancements to streamline and simplify infrastructure set-up.
AI-driven continuous performance optimisation appears to be gathering industry support for its ability to look directly at application resource requirements, which in turn shape infrastructure requirements. The concept is currently being developed by the Cloud Native Computing Foundation in partnership with DevOps specialist Opsani.
AI-driven principles are set to affect both cloud and data-centre infrastructure too, although this will initially take the form of AI-generated suggestions rather than AI-directly-controlled infrastructure. Examples already on the market include Nutanix’s AI-driven optimisation solutions and its Prism Pro, which advises on operational refinements.
Furthermore, at its re:Invent 2019 cloud conference, AWS unveiled several new PaaS offerings which use ML behind the scenes. These include Amazon Detective, which uses ML to identify security threats within a customer’s infrastructure; Contact Lens, a set of ML capabilities for Amazon Connect that drives insights into customer sentiment, trends and compliance risks; and Amazon Fraud Detector, which uses ML to enable companies conducting business online to quickly identify fraudulent activity.
These announcements all serve to underline how AI- and ML-driven infrastructure offerings will begin growing in popularity during the course of 2020.
Looking to the future
As we’ve seen, the nature of infrastructure is developing rapidly. As businesses continue to look for ways to reduce costs and extract greater intelligence from their data, they have progressed from software and technology located on-premises, in data centres, to workloads running remotely on hosted servers or in the cloud. AI and ML are set to be the next step in the evolution of infrastructure.
Eventually, both the operation and management of smart infrastructure will become seamless. Over time, these smart architectures may have the ability to self-govern, self-optimise, and self-heal – all with limited involvement – resulting in highly optimised, fault-tolerant, and low-cost infrastructures.
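In spirit, self-healing reduces to a reconcile loop: compare desired state with observed state and act on the difference. The sketch below, using illustrative service names and a stand-in restart hook, shows that core idea.

```python
def reconcile(desired, observed, restart):
    """One pass of a self-healing control loop: restart any service
    whose observed state differs from the desired one. Service names
    and the restart hook are illustrative."""
    for service, want in desired.items():
        if observed.get(service) != want:
            restart(service)

# Simulated cluster state: one service has crashed.
desired = {"api": "running", "cache": "running"}
observed = {"api": "running", "cache": "crashed"}

restarted = []
reconcile(desired, observed, restarted.append)
print(restarted)  # → ['cache']
```

A smart platform layers ML on top of this loop – predicting failures before they appear in the observed state, and choosing remediations beyond a simple restart.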
Companies will begin exploring scenarios involving AI-enabled networks, meaning that, as these AIs are trained on real and simulated environments, they’ll become as much a cybersecurity asset as a cost-optimisation solution for compute and storage.
And with the first signs of this highly optimised ML-driven infrastructure likely to become visible this year, this future isn’t as far off as we may think.