Configuration management (CM) or provisioning tools have been around for quite some time. They were among the first types of tools adopted by operations teams. They removed the idea that server provisioning and application deployment should be manual. Everything, from installing the base OS, through infrastructure setup, all the way to deploying the services we develop, moved into the hands of tools like CFEngine, Puppet, and Chef. They freed operations from being the bottleneck. Later on, they evolved into the self-service idea, where operators could prepare scripts in advance and developers would only need to select how many instances of a particular type they want. Because those tools are based on promise theory, running them periodically gave us self-healing in its infancy.
The most notable improvement those tools brought is the concept of infrastructure defined as code. Now we can put definitions into a code repository and apply the same processes we are already accustomed to with the code we write. Today, everything is (or should be) defined as code, infrastructure included, and the role of UIs is (or should be) limited to reporting.
With the emergence of Docker, configuration management and provisioning continue to play a critical role, but the scope of what they should do has shrunk. They are not in charge of deployment anymore; other tools do that. They do not have to set up complicated environments, since many things are now packed into containers. Their main role is to define infrastructure. We use them to create private networks, open ports, create users, and perform other similar tasks.
For those and other reasons, adoption of simpler (but equally powerful) tools became widespread. With its push system and simple syntax, Ansible gained a foothold in the market and, today, is my CM weapon of choice.
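To make the reduced scope concrete, here is a minimal sketch of the kind of infrastructure-only playbook described above. It is a hypothetical example, not taken from the book: the host group, user name, and port are made up, and the tasks use the standard Ansible `user` and `ufw` modules.

```yaml
# Hypothetical playbook: prepare a host for running containers.
# CM's job here is infrastructure only -- users and ports --
# not deploying the application itself.
- hosts: web
  become: yes
  tasks:
    - name: Create a user for running services
      user:
        name: deployer
        shell: /bin/bash

    - name: Allow inbound traffic on port 8080
      ufw:
        rule: allow
        port: "8080"
        proto: tcp
```

A playbook like this would be run with `ansible-playbook -i inventory playbook.yml`. Since Ansible pushes changes over SSH, no agent needs to be installed on the target servers, which is a large part of its appeal over pull-based tools.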
The real question is: why did Docker take away deployment from CM tools?
The DevOps 2.0 Toolkit
If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book.
The book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable, and continuous deployments with zero-downtime and the ability to roll back. It's about scaling to any number of servers, the design of self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.
In other words, this book envelops the full microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, nginx, and so on. We'll go through many practices and even more tools.
This post is part of a new blog series all about the DevOps 2.0 Toolkit. Follow along in the coming weeks. Each post builds upon the last!
The DevOps 2.0 Toolkit
Configuration Management (The DevOps 2.0 Toolkit)
Containers and Immutable Deployments (The DevOps 2.0 Toolkit)
Cluster Orchestration (The DevOps 2.0 Toolkit)
Service Discovery (The DevOps 2.0 Toolkit)
Dynamic Proxies (The DevOps 2.0 Toolkit)
Zero-Downtime Deployment (The DevOps 2.0 Toolkit)
Continuous Integration, Delivery, And Deployment (The DevOps 2.0 Toolkit)