Balancing tech innovation and smooth operations

Enterprises are under intense pressure to be seen using the most innovative technologies while maintaining high operational performance, so that they can cut through a fiercely competitive landscape. So how do technology leaders and their teams strike the right balance between innovation and operations?

Firstly, think functionality, serviceability and experience. Too often, product teams forget these critical dimensions of how the technology will be consumed.

Customers are now used to consuming ‘experiences’ rather than products, which requires product and development teams to focus on this ‘experience’ too.

Great product innovations fail to win user adoption when serviceability and experience are left as secondary considerations.

This can be addressed by taking a modern approach and implementing innovations in ‘small chunks’, especially early in the product lifecycle, using the concept of the Minimum Viable Product (MVP). Your innovations go to market faster, so you can collect vital customer and end-user feedback. That lets you adapt and expand the innovation based on real insight into what customers tell you about their experience of your product’s functionality and serviceability.

Linked to this is how modern innovation methods leverage agile and DevOps, with small multi-disciplinary teams involving all stakeholders, from Product Management and Development to UX, Operations and Infrastructure, who consider operational matters very early in the design and development cycle.

This approach leads to architectures based on microservices or modules that are loosely coupled and collaborate via well-defined interfaces. A virtuous consequence is that you can innovate and create new versions of a microservice with minimal impact on the other components, greatly reducing and localising the operational risk of change.
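To make that concrete, here is a minimal sketch of what such a well-defined, versioned interface can look like. The service, endpoints and prices are hypothetical, and FastAPI is just one convenient way to express the idea, not a prescribed stack:

```python
# A minimal sketch, assuming a hypothetical "pricing" microservice:
# consumers depend only on the versioned HTTP contract, so a new version
# of the logic can ship without touching any other component.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="pricing-service")  # hypothetical service name

class Quote(BaseModel):
    sku: str
    price_cents: int

@app.get("/v1/quote/{sku}", response_model=Quote)
def quote_v1(sku: str) -> Quote:
    # The original contract stays frozen behind /v1 for existing consumers.
    return Quote(sku=sku, price_cents=1_000)

@app.get("/v2/quote/{sku}", response_model=Quote)
def quote_v2(sku: str) -> Quote:
    # The innovation lands behind /v2; callers migrate at their own pace,
    # so the operational risk of the change stays local to this service.
    return Quote(sku=sku, price_cents=950)
```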

It may seem counter-intuitive, but releasing more innovations more often can greatly improve operational stability. How? Each change is relatively small, so it is easy to understand, and it is isolated to the one or few microservices affected. Because you release often, the operational environment changes little from one deployment to the next, minimising and localising the risk of degradation or regression. And because you release new code often, you build a practice and culture of deployment automation, which minimises human error.
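As an illustration of that automation culture, a release pipeline typically ends with a scripted health gate rather than a human eyeballing dashboards. The endpoint, scripts and thresholds below are assumptions for the sketch, not any particular vendor's tooling:

```python
# A hedged sketch of an automated post-deploy gate: each small release is
# verified mechanically, and a failing check triggers rollback instead of
# waiting for a human to notice. deploy.sh, rollback.sh and the health
# endpoint are hypothetical placeholders.
import subprocess
import time
import urllib.request

HEALTH_URL = "http://pricing-service.internal/healthz"  # assumed endpoint

def healthy(retries: int = 5, delay_s: float = 3.0) -> bool:
    """Poll the service's health endpoint a few times after deployment."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # service may still be starting; retry after a short wait
        time.sleep(delay_s)
    return False

subprocess.run(["./deploy.sh", "pricing-service"], check=True)
if not healthy():
    # The change was small and isolated, so rolling it back is cheap.
    subprocess.run(["./rollback.sh", "pricing-service"], check=True)
```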

There’s cultural change too. Deploying more often means your teams develop ‘muscle memory’ for deploying well. They practice every week or every day and come to understand and master their production environment far better than teams who release large batches of innovation only once or twice a year, when the risk is extremely high that things will go wrong and be very difficult to isolate. You do well what you do often.

Ensuring that new innovations don’t upset operations does require a DevOps approach. DevOps makes testing and performance monitoring key accountabilities of development teams and SREs (Site Reliability Engineers). If something breaks in the night, developers get paged and fix it.

Done well, that means you can innovate fast with everyone in the team feeling accountable not just for functionality, but also for serviceability, performance and experience. This is a strong driver of operational improvement, because who wants to get paged during the night?
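In practice, that page is usually driven by an automated check against a service-level objective. A minimal sketch, with the metrics source and pager integration stubbed out as assumptions:

```python
# A minimal sketch of an SLO-driven paging check; the metrics source and
# pager call are stand-ins for illustration, not a real product's API.
SLO_ERROR_RATE = 0.01  # hypothetical objective: <1% of requests may fail

def recent_counts() -> tuple[int, int]:
    """Return (errors, total) for the last few minutes.

    A real system would query its monitoring platform here; values are
    hard-coded so the sketch stays self-contained.
    """
    return 42, 3_000

def page_on_call(message: str) -> None:
    # Stand-in for a real pager integration.
    print(f"PAGE: {message}")

errors, total = recent_counts()
error_rate = errors / total if total else 0.0
if error_rate > SLO_ERROR_RATE:
    # The team that shipped the change is the team that gets woken up.
    page_on_call(f"error rate {error_rate:.2%} breaches SLO of {SLO_ERROR_RATE:.0%}")
```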

The 2019 State of DevOps report outlines how best-in-class DevOps organizations deploy software 208 times more frequently on average and move code from commit to production 106 times faster, while also suffering seven times fewer failed deployments and recovering from incidents over 2,600 times faster.

Ultimately, juggling innovation and operations means emulating how Formula One teams or SpaceX succeed through their use of telemetry. To innovate at pace without sacrificing operational quality, you need instrumentation. Modern technology enables the instrumentation of hardware with all sorts of sensors, and the collection of software telemetry through monitoring and observability platforms.

Telemetry has become essential to managing complex systems based on highly modular microservices, relying on volatile and scalable cloud and container infrastructure, and changing at a pace of thousands of new releases per day. 
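For software telemetry, open instrumentation APIs make this straightforward. A small sketch using the OpenTelemetry Python SDK, with spans exported to the console for brevity; the service name and attributes are assumptions, and a real setup would export to an observability platform instead:

```python
# A small sketch of software telemetry with the OpenTelemetry Python SDK.
# Spans go to the console here for brevity; a real deployment would send
# them to an observability backend.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")  # hypothetical service name

with tracer.start_as_current_span("process-order") as span:
    # Attributes turn each request and release into queryable data points.
    span.set_attribute("order.items", 3)
    span.set_attribute("release.version", "2024.05.1")  # assumed tag
```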

Teams with the ambition to innovate and disrupt markets need to embed telemetry and observability into their innovation practices and create a data-driven culture.

People, processes and culture play a vital role in transformation and innovation. Finding the right trade-off between in-house skills, experience and know-how and fresh blood brought in from outside is a recipe unique to each organisation and its business objectives.

Greg Ouillon is the EMEA CTO at New Relic. He specialises in high-tech innovation, product management, service development and engineering in aviation and IT, and is an international executive with extensive service provider experience. He enjoys turning advanced business strategies and technology into global services.
