Achieving open source excellence with NGINX
NGINX’s Senior Director of Product Management, Owen Garrett, speaks to Digital Bulletin about powering the world’s busiest sites, the importance of open source and the company’s recent $560 million acquisition by F5
Could you start by giving us an overview of NGINX?
NGINX is a piece of open-source software used by the majority of the world’s busiest websites to improve their performance, to absorb large volumes of traffic, and to protect them from whatever the internet throws at them. It is very well established and respected, and has been used by some of the largest and busiest sites in the world for the last 15 years.
What are your core products and services?
Like many software companies, NGINX began as a single-product company, with the eponymous NGINX web server and reverse proxy. From that open-source beginning, we created a commercial variant called NGINX Plus. We needed to find ways in which we could monetise that open-source foundation. We developed an open-source project, and then for a small minority of users we would provide an extended version with support and professional services on a subscription basis, so the core of the product was open.
In the case of NGINX, there are some small extensions, which are commercial. Customers come to us not just for the commercial features, but also for the services that we’re able to offer around our products. This enabled us to grow very, very steadily, almost doubling revenues year on year since we released NGINX Plus.
Last year we announced an evolution of our products, which centres around a new product called NGINX Controller. It is able to generate specific configurations for NGINX instances, to pull metrics back so you can monitor those instances, and to audit the health of those instances to make sure they’re running the most up-to-date software, that they are protected from known vulnerabilities, and that the configuration running in those instances meets our best practice.
It focuses on the load-balancing use case: it generates configuration for NGINX operating as a load balancer. We recently announced a release, a new module for Controller, that focuses on the API use case. It configures NGINX to act as an API gateway. We are currently developing a service-mesh module, so Controller can configure NGINX to operate in a new mode of operation described as a service mesh, which is used for internal communication in microservices and distributed applications. That has taken NGINX to where we are now.
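To make the load-balancing use case concrete, here is a minimal sketch of the kind of configuration that can be applied to an NGINX instance; the upstream name, addresses and ports are hypothetical, not taken from the interview.

```nginx
# Hypothetical example: NGINX balancing traffic across three
# application instances. All names and addresses are illustrative.
upstream app_servers {
    least_conn;                    # route each request to the least busy server
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080 backup;   # used only if the other servers are down
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

A management plane such as NGINX Controller would generate and push configuration of this shape to each instance, then pull back metrics to monitor it.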
Could you tell us a bit about the recent acquisition by F5, and how the two companies will work together?
There are very strong synergies between the two organisations. Although we sell into similar use cases, we sell to very different audiences within those use cases. With the two products together, we have a very rich range of solutions that we can offer to a customer organisation.
F5 is a very well respected, established vendor of load-balancing and security solutions with a strong footprint in enterprise and Fortune 500 organisations. They have over 25,000 customers, which are typically large enterprises running mission-critical services.
NGINX, on the other hand, has an open-source technology stack that reaches very, very deeply into an organisation’s software delivery processes. It’s used by front-end teams, by developers, and by DevOps. By combining those two things, you can see how they complement each other. F5 gives NGINX broad access to the enterprise market, while NGINX brings to F5 a set of tools, processes, knowledge and expertise that allows them to reach deeper into each of their customers or subscribers.
The goal of the combined organisation is to deliver solutions that help businesses develop and deliver applications effectively, whether they’re doing it from a physical, traditional data centre, from a cloud environment, or from serverless environments. By bringing all that technology together, we create a very compelling set of solutions to help businesses operate more efficiently, adopt Continuous Integration/Continuous Deployment, and adopt more rapid DevOps processes.
What should enterprises consider when thinking about implementing a DevOps culture?
It starts, first of all, from understanding what you want to achieve by DevOps. There’s no point bringing in a DevOps culture if you haven’t got a measurable goal that you’re working towards. That goal varies depending on the nature of the business, and the nature of the products and services that you’re building. Typically, businesses want to be able to iterate, and improve, and develop new services more quickly, and more reliably. They want to operate in a more agile fashion.
You can start with how services and products used to be delivered. Organisations would have what is often described as a siloed model, where different teams are responsible for different parts of the process of building an application. An architecture team might lay down the design, and a development team would then build to the architect’s design. Then the code would be given to the test team, and then it would be given to the operations team.
It goes through a series of stages, but that lends itself to a process that is very difficult to change in the middle. The fact of the matter is that requirements do change, often without notice. The model is not well suited to constantly changing requirements, where a business sees new opportunities or new competitive threats. That often becomes the core of why a business wants to become more agile. They need to be able to turn on a dime, to change the features they’re developing, and to reprioritise those features at a moment’s notice.
How do you believe APIs can help companies to drive innovation, unlock data, and modernise their applications?
APIs are absolutely key to all of this. APIs mean that you can decouple the data and the business processes from the person or the organisation that is consuming them. Again, we could look at a before and after: without an API, you would build a single, large application that contained the data, the business processes, and the user interface to interact with them. If you wanted to extend that, add a new business process, or add a new consumer, you’d have to rewrite large parts of the application.
The alternative is to separate the consumer of the processes and the data from the provider, and provide an API between the two. Let’s say you have a business that is providing a service: you could have multiple different interfaces which consume that service. They all go through a common API, so if I want to change the way the service is implemented, I can do that as long as I don’t change the API – I’m free to make those changes.
Maybe I want to make changes for performance reasons, I want to add some monetisation methods, or I want to add more data to my business processes. I can do all of those things, and as long as the API stays the same, the consumers aren’t affected. At the same time, if I decide I want to provide the service to a different organisation, a different consumer, we just need to create a new client for that consumer that talks to the API. APIs separate the consumer of a service from the provider of the service, allowing the two to operate and scale independently.
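To illustrate that decoupling in NGINX terms, here is a minimal, hypothetical sketch of NGINX acting as an API gateway. Consumers only ever see the stable `/api/v1/...` paths; the services behind them can be re-implemented, moved, or scaled without any client noticing. All hostnames, paths and addresses are illustrative.

```nginx
# Hypothetical sketch: NGINX as an API gateway. The public API paths
# stay fixed while the upstream implementations can change freely.
upstream orders_service {
    server 10.0.1.10:9000;   # swap or scale these servers without
    server 10.0.1.11:9000;   # touching any API consumer
}

upstream billing_service {
    server 10.0.1.20:9000;
}

server {
    listen 80;
    server_name api.example.com;

    location /api/v1/orders/ {
        proxy_pass http://orders_service;
    }

    location /api/v1/billing/ {
        proxy_pass http://billing_service;
    }
}
```

Adding a new consumer means writing a new client against the same stable paths; changing an implementation means editing only the relevant upstream block.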
What are some of NGINX’s goals for the next three to five years?
We started NGINX with the goal of building a successful software company with a rich open-source foundation that would help the world experience better internet and connected services. With F5, we can continue towards that goal.
F5 is committed to continuing to invest in NGINX open source, and that will drive innovation in our products. Our community has been hugely influential in the success of NGINX, and we intend to continue to support and give back to that community while we’re living under the F5 umbrella.
I’m looking forward to opportunities to leverage some of F5’s technology and bring it to the NGINX user base. DevOps, microservices, and distributed applications will shape the way applications are built over the next five years, and I’m looking forward to NGINX being at the forefront of a lot of that.