We cannot discuss continuous deployment without, briefly, going through the concepts behind continuous integration.
Continuous integration (CI) is an automated integration flow. Every commit is detected by the CI tool of choice, the code is checked out from version control, and the integration flow is initiated. If any step of the flow fails, the team prioritizes fixing the broken build over any other activity. For continuous integration to be truly continuous, the team must merge into the main branch often (at least once a day) or, even better, commit directly to it. Without such a strategy, we have automated, but not continuous, integration.
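As an illustration, such a flow might be sketched as a Jenkins declarative pipeline. The build tool and commands below are assumptions; they will differ from project to project:

```groovy
// A minimal, illustrative CI flow triggered by each new commit.
// Maven is assumed here purely as an example build tool.
pipeline {
    agent any
    triggers {
        pollSCM('* * * * *') // detect new commits; a webhook is a common alternative
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm // get the code from version control
            }
        }
        stage('Build') {
            steps {
                sh 'mvn clean package' // compile and package
            }
        }
        stage('Test') {
            steps {
                sh 'mvn verify' // run the automated test suite
            }
        }
    }
    post {
        failure {
            // a broken build should be the team's top priority
            echo 'The build is broken. Fixing it comes before anything else.'
        }
    }
}
```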
The problem with continuous integration is that there is no clear objective. We know where it starts (with each commit), but we don't know where it ends. The process assumes that there are manual actions after the automated flow, but it does not specify where the automation stops and humans take over. As a result, many companies jumped onto the CI train and ended up without tangible results. Part of the testing remained manual, and time to market did not decrease as much as hoped. After all, the total speed is the speed of the slowest part. With CI we start fast only to end with a crawl. That does not mean that CI does not provide a lot of benefits. It's just that they are not enough for what is expected from us today. Simply put, continuous integration is not a process that produces production-ready software. It only gets us halfway there.
Then came continuous delivery. You're practicing it when:
You are already doing continuous integration. Continuous delivery extends the automation required for CI.
Each build that passes the whole flow is deployable. With continuous delivery, anyone can press a button and deploy any build to production. The decision whether to deploy is not technical. All (green) builds are production-ready. Whether one is deployed is a business decision about when a certain feature should be available to our users.
The team prioritizes keeping software deployable. If a build fails, fixing the problem is done before anything else.
Anybody can get fast and automated feedback on production readiness.
Software can be deployed by pressing a single button.
If any of those points is not entirely fulfilled, you are not doing continuous delivery. You are, most likely, still in the continuous integration phase.
This poses the question: what is continuous deployment?
While continuous delivery means that every commit can be deployed to production, continuous deployment results in every commit being deployed to production. There is no button to push. There is no decision to make. Every commit that passes the automated flow is deployed to production.
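The difference can be sketched with a hypothetical deploy stage in a Jenkins pipeline. The deployment script is a placeholder, not a real command:

```groovy
stage('Deploy') {
    steps {
        // Continuous delivery: a human decides *when* to release,
        // so the flow pauses here until someone presses the button.
        input message: 'Deploy this build to production?'
        // Continuous deployment: delete the input step above and
        // every green build reaches production without intervention.
        sh './deploy.sh production' // hypothetical deployment script
    }
}
```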
Now that we have the definitions and understand the goals of continuous delivery and deployment, let us try to, briefly, define what we should expect from the tools.
CD often results in a complex set of steps. There are many things to be done if the process is to be robust enough to give us the confidence to deploy to production without human intervention. Therefore, the tool of choice needs to be able to define complex flows. Such flows are complicated, sometimes even impossible, to define through UIs. They need to be expressed as code.
Another thing to note is that the tool should not prevent a team's autonomy. If a team that produces commits that are automatically deployed (or ready to be deployed) to production is not autonomous, the continuous part ceases to exist. The members of the team need to have the ability to define and maintain the definition of the CD flow so that, whenever it requires changes, they can implement them without waiting for someone else. If autonomy is the key, the tool cannot have the flows centralized. The code that defines a flow should reside in the same repository as the code of the service (or the application) the team is maintaining.
If we add the need for the tool to be capable of operating on a large scale, there are only a few products available today. One of them is, without a doubt, Jenkins combined with the Pipeline defined as a Jenkinsfile. Is it the only product that exhibits the features we require from a modern CD tool? That's hard to say since new tools are emerging on an almost daily basis. What is true is that Jenkins has proved itself over and over again. It, in a way, defined CI processes and, today, continues to lead the CD movement.
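Putting the two ideas together, a Jenkinsfile committed to the root of the service's own repository keeps the flow definition under the team's control, and a Jenkins Multibranch Pipeline job can discover and run it automatically. The stages, branch name, and commands below are, again, only illustrative:

```groovy
// Jenkinsfile living in the root of the service's repository, so the
// team that owns the service also owns (and can change) its flow.
pipeline {
    agent any
    stages {
        stage('Build & Test') {
            steps {
                sh 'mvn clean verify' // build command is an assumption
            }
        }
        stage('Deploy') {
            when { branch 'master' } // deploy only from the main branch
            steps {
                sh './deploy.sh production' // hypothetical deploy script
            }
        }
    }
}
```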
Are We Done Now? (The DevOps 2.0 Toolkit)
Are we, finally, done with an overview of the modern DevOps toolkit that should reside in our toolbelt? Not even close! Once we adopt all the tools and practices we discussed, there is still much to do.
Automating the process all the way to deployments to production is only half of the story. We need to evaluate (and automate) processes that will make sure that our software continues running and behaving as desired. Such a system would need to exhibit some level of self-healing, not much different from the processes that perform similar functions in our bodies. Viruses constantly attack us, our cells are dying, we are getting hurt, and so on. To cut a long story short, if our bodies were not able to self-heal, we would not last a day. And, yet, we, and life itself, continue to persist no matter what happens. We should learn from ourselves (and evolution) and apply those lessons to the systems we're building.
We haven't touched on the cultural and architectural changes required by the adoption of new processes and technology. The subject is so vast that it requires much more than a series of posts.
Even though we only scratched the surface, I hope you got some ideas and a possible direction to take. If you are interested in more details in the form of (mostly) hands-on practices, please consider purchasing The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book.
The DevOps 2.0 Toolkit
If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book.
The book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable, and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, the design of self-healing systems capable of recuperating from both hardware and software failures, and about centralized logging and monitoring of the cluster.
In other words, this book envelops the full microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, nginx, and so on. We'll go through many practices and, even more, tools.