Continuous Delivery is the Key to Getting the Most out of Microservices, Containers and the Cloud
Jun 18, 2018 by Armory
Continuous Delivery is key to the “better” and “faster” elements in Armory’s Manifesto to help software teams “Build Better Software, Faster.” The goal of continuous delivery is to cut the lead time from “idea” to “feature in production” from weeks (or months!) to days, hours, or even minutes. According to Jez Humble, who wrote the book on Continuous Delivery,
“Continuous delivery is the ability to get changes of all types—including new features, configuration changes, bug fixes and experiments—into production, or into the hands of users, safely and quickly in a sustainable way.”
Humble lists advantages including lowering the risk of software releases, decreasing the time to market, and increasing quality, which results in lower costs (after the initial investment), a better product, and happier teams.
Minimizing the iteration time of the software development process simultaneously speeds up delivery and increases the quality of your software. While that might seem counter-intuitive, it’s backed up by science, as Nicole Forsgren, Jez Humble, and Gene Kim describe in their new book, Accelerate. Simply put, the less time your developers spend on the delivery cycle itself, the more time they have to spend on writing quality code.
Automating the build alone is not enough: a streamlined, automated pipeline is required to remove bottlenecks, reduce friction, and speed up time-to-market. The goal is to automate the entire deployment pipeline, from code check-in through build, test, and deployment, through release to production and monitoring. Automation maximizes deployment reliability and predictability.
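As a rough illustration of what “automating the entire pipeline” means in practice, here is a minimal Python sketch of a pipeline expressed as ordered stages. The stage names and the `run_stage` helper are hypothetical; a real pipeline would be defined in your CD platform (for example, as Spinnaker pipeline stages) rather than in application code.

```python
# Hypothetical sketch: a deployment pipeline as an ordered list of stages.
# A real pipeline lives in your CD platform, not in a script like this.

PIPELINE = [
    "checkout",          # triggered by a code check-in
    "build",             # compile and package the artifact
    "unit_test",         # fast feedback on the change itself
    "deploy_staging",    # roll the artifact out to a staging environment
    "integration_test",
    "deploy_prod",       # release to production (e.g., behind a canary)
    "monitor",           # watch metrics and feed results back to the team
]

def run_stage(name: str) -> bool:
    """Placeholder for invoking a real build/test/deploy step."""
    print(f"running stage: {name}")
    return True  # assume success for the sketch

def run_pipeline() -> None:
    for stage in PIPELINE:
        if not run_stage(stage):
            raise RuntimeError(f"pipeline halted at stage: {stage}")

if __name__ == "__main__":
    run_pipeline()
```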
Netflix created Spinnaker to pragmatically address several of the above issues through the creation and management of build pipelines. Spinnaker is a continuous delivery platform for the cloud, and Armory is built on top of the Spinnaker open source platform, extending its functionality and making it easier to use.
The articles How Microservices Help Continuous Delivery in the SDLC Part 1 and Part 2 talk about how microservices help with continuous delivery in more detail.
More wisdom from Jez Humble:
“If this sounds too good to be true, bear in mind: continuous delivery is not magic. It’s about continuous, daily improvement—the constant discipline of pursuing higher performance by following the heuristic ‘if it hurts, do it more often, and bring the pain forward.’”
One of the biggest factors we see driving customer interest in improving software delivery is a move from data centers into the cloud (as well as starting to run Kubernetes in those data centers to create a common abstraction layer). The brittle, hand-scripted, “spit, polish & tape” tooling that companies have traditionally used to deploy software (poorly) to data centers breaks down as they migrate workloads into the cloud, break monoliths into microservices, and begin to containerize workloads.
Here’s how to get the most out of this shift:
Plan Your Pipeline Strategy
Setting up a deployment pipeline is the backbone of continuous delivery. While microservices and containers break processes down into small units, it is critical to design the entire process end-to-end. Each company has different business processes and workflows, and deployment pipelines codify these into safe, repeatable actions. Maintaining the integrity of complex distributed systems requires a clear understanding of the process as a whole, as well as of how the individual parts fit together; knowing this is the difference between success and frustration.
Your plan should include identifying the set of environments, the intended use cases for each of them, how artifacts are created, and how they flow through those environments. Continuous delivery pipelines generate many artifacts. How many of these artifacts will you store? How long will you store them? How many repositories do you need to store them, and where will those repositories live? And so on.
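One lightweight way to start is to write that plan down as data before wiring up any tooling. The sketch below is a hypothetical Python example of such a plan; the environment names, retention periods, and repository URL are placeholders, not recommendations.

```python
# Hypothetical pipeline plan: environments, their purpose, and artifact policy.
PIPELINE_PLAN = {
    "environments": [
        {"name": "dev",     "purpose": "developer integration testing"},
        {"name": "staging", "purpose": "pre-production validation"},
        {"name": "prod",    "purpose": "live traffic"},
    ],
    # The order artifacts flow through after they are built.
    "artifact_flow": ["build", "dev", "staging", "prod"],
    "artifact_policy": {
        "repository": "https://artifacts.example.com/docker",  # placeholder URL
        "retain_days": 90,                 # how long unreleased artifacts are kept
        "retain_released_forever": True,   # keep anything that reached prod
    },
}
```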
Whenever possible, use the built-in functionality of your chosen tools to help with this. For example, use Kubernetes labels to spin up on-the-fly test environments for automated testing rather than maintaining long-lived test environments, as in the sketch below. Also, use canary deployments to limit your blast radius in production.
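Here is one way the “on-the-fly test environment” idea could look using the official Kubernetes Python client: create a throwaway namespace labeled as an ephemeral test environment, run the tests, then delete it. The namespace naming scheme, labels, and the `run_tests` helper are hypothetical; this is a sketch, not a prescribed setup.

```python
from kubernetes import client, config

def run_tests(namespace: str) -> None:
    """Placeholder: deploy the service under test and run its test suite."""
    print(f"running tests in namespace {namespace}")

def ephemeral_test_env(build_id: str) -> None:
    """Spin up a labeled, throwaway namespace for one automated test run."""
    config.load_kube_config()          # or load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    name = f"test-{build_id}"          # hypothetical naming scheme

    ns = client.V1Namespace(
        metadata=client.V1ObjectMeta(
            name=name,
            labels={"purpose": "ephemeral-test", "build": build_id},
        )
    )
    v1.create_namespace(ns)
    try:
        run_tests(namespace=name)
    finally:
        v1.delete_namespace(name)      # tear the environment down afterwards
```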
What to Look Out For
Two of the sneakiest problems in moving to CI/CD are dependencies and version control. Microservices allow each service to pick the best language, database, and tooling for its particular job. This flexibility is efficient for an individual service, but supporting all of those choices together is challenging, especially as your system scales to hundreds, then thousands, of APIs.
Managing deployments across disparate technology stacks requires a clear strategy. The strategy must be clear not only in its components, but to every team member who touches the stack. To be effective, it needs to be a living document, accessible to the entire team, that helps them track down bottlenecks in the deployment process or failures in production.
While variety is the spice of life, it has the potential to wreak havoc on versioning and backward compatibility. Each service definition must include explicit versioning rules and requirements, including its dependencies. Knowing which versions of each dependency a service relies on is critical to keeping your pipeline running while individual microservices are upgraded independently.
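As an illustration, a service definition might declare its own version plus explicit ranges for the services it depends on, and the pipeline can check those ranges before deploying. The sketch below is a hypothetical Python example (the service names and ranges are made up) and assumes the common `packaging` library is available.

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Hypothetical service definition with explicit dependency version ranges.
SERVICE = {
    "name": "orders",
    "version": "2.4.1",
    "depends_on": {
        "payments":  ">=1.3,<2.0",   # breaking change expected at 2.0
        "inventory": ">=3.1",
    },
}

def check_dependencies(deployed_versions: dict[str, str]) -> None:
    """Fail the pipeline early if a deployed dependency is out of range."""
    for dep, required in SERVICE["depends_on"].items():
        if Version(deployed_versions[dep]) not in SpecifierSet(required):
            raise RuntimeError(
                f"{SERVICE['name']} needs {dep} {required}, "
                f"found {deployed_versions[dep]}"
            )

# Example check against the versions currently running in an environment.
check_dependencies({"payments": "1.7.2", "inventory": "3.4.0"})
```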
Automate Everything
Automation, as any manufacturer can tell you, is the key to speed. For continuous delivery, we’ve found that every manual step dramatically lengthens your release cycle. The goal, then, is to automate your entire toolchain, including configuration, continuous integration, infrastructure provisioning, testing, deployments, application release processes, and production feedback loops.
It may take a while to fully automate all of these steps, and using an automation platform like Armory is key to getting the most out of containerizing your pipeline. Armory abstracts away as much complexity as possible, leaving your developers free to focus on what they do best – developing.
Automation is essential for consistency, managing deployments across disparate technology stacks and maintaining the integrity of complex distributed systems.
Seeing Into the Stack
Artifact auditability is a crucial component of making an automated CI/CD pipeline work. As each artifact moves through the pipeline, you need to capture who checked in the code, which environments it was deployed to, which configuration was used, which tests were run and their outcomes, who approved it, and more.
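A simple way to picture that audit trail is as a structured record written for each artifact at every pipeline stage. The field names and values below are a hypothetical Python sketch, not a fixed schema.

```python
import json
from datetime import datetime, timezone

def audit_record(artifact_id: str, stage: str, **details) -> str:
    """Build one audit entry for an artifact at a pipeline stage."""
    record = {
        "artifact": artifact_id,
        "stage": stage,                              # e.g. build, test, deploy
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **details,                                   # committer, environment, etc.
    }
    return json.dumps(record)

# Example entry for a test stage (all values are placeholders):
print(audit_record(
    "orders:2.4.1",
    "integration_test",
    committed_by="jane@example.com",
    environment="staging",
    tests_passed=412,
    tests_failed=0,
    approved_by="release-bot",
))
```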
Here at Armory, we use feature flags throughout the entire development pipeline. In addition to limiting who sees a new artifact at each stage of the pipeline, feature flags log the key points mentioned above and more. That visibility is critical to the automation process.
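In its simplest form, a feature flag is just a guarded code path whose evaluation is logged. The sketch below is a hypothetical, hand-rolled Python example, not Armory’s actual flagging system; in practice the flag values would come from a feature-flag service rather than an in-memory dictionary.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("feature_flags")

# Hypothetical in-memory flag store; a real system queries a flag service.
FLAGS = {
    "new_checkout_flow": {"enabled": True, "allowed_groups": {"internal", "beta"}},
}

def flag_enabled(name: str, user_group: str) -> bool:
    """Check a flag for a user group and log the decision for auditing."""
    flag = FLAGS.get(name, {"enabled": False, "allowed_groups": set()})
    decision = flag["enabled"] and user_group in flag["allowed_groups"]
    log.info("flag=%s group=%s decision=%s", name, user_group, decision)
    return decision

if flag_enabled("new_checkout_flow", "beta"):
    pass  # serve the new code path
else:
    pass  # fall back to the existing behavior
```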
Connecting Microservices
While each microservice is independently deployable, you also need a ‘road map’ for each one. Your pipeline needs to know where and how to find each service. The service itself needs to know where shared resources (databases, message queues, and so on) can be found, and how to access them.
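One common, minimal approach is to inject those locations as configuration that the deployment pipeline sets per environment, rather than hard-coding them into the service. The environment variable names and default URL below are hypothetical.

```python
import os

def load_service_config() -> dict:
    """Resolve shared resources from environment variables injected by the
    deployment pipeline for each environment (names are placeholders)."""
    return {
        "database_url":  os.environ["ORDERS_DATABASE_URL"],
        "queue_url":     os.environ["ORDERS_QUEUE_URL"],
        "payments_base": os.environ.get(
            "PAYMENTS_SERVICE_URL", "http://payments.internal"  # placeholder default
        ),
    }
```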
Also important is a plan for failure resilience in case a certain service is temporarily unavailable. Failures tend to happen unexpectedly (and often at the most inopportune time possible), so what is your plan for that?
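A minimal version of such a plan, sketched in Python under the assumption that the dependency is reached over HTTP, is to retry with exponential backoff and then fall back to a degraded response when the service stays unavailable. The URL and fallback value are hypothetical.

```python
import time
import urllib.error
import urllib.request

def fetch_with_retries(url: str, attempts: int = 3, backoff: float = 0.5):
    """Retry a flaky dependency with exponential backoff, then give up."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            time.sleep(backoff * (2 ** attempt))   # wait longer each attempt
    return None  # caller falls back to cached data or a degraded response

if __name__ == "__main__":
    # Hypothetical usage: tolerate the inventory service being briefly down.
    inventory = fetch_with_retries("http://inventory.internal/items") or b"[]"
```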
Prepare for Things to Go Wrong
One of the things that Forsgren, Humble, and Kim discuss at length in their book is the shift in DevOps toward a culture that embraces failure. Delivering software several times a week, on a technology stack where containers are spun up for a few minutes to perform a specific task and then melt away, means that final testing is always done in production and there is no way to cover all contingencies. In such an environment, failure is inevitable.
Rollbacks, the traditional method of applying fixes, are always tricky in production systems, and in a distributed system the potential complications outweigh their usefulness in most cases, to say nothing of the downtime they require.
Most of the time, the best remedial response is to roll forward: find the root cause of the failure and apply the fix as soon as possible. Finding the cause is where you will be glad you listed out all of the dependencies and test results for each microservice.
If this sounds like a lot of work, it is, but the payoffs are enormous.
“Our goal,” said Humble, “is to make deployments—whether of a large-scale distributed system, a complex production environment, an embedded system, or an app—predictable, routine affairs that can be performed on demand.”