Jenkins, Docker and DevOps: The Innovation Catalysts -- Part 4

Written by: Hannah Inman

9 min read

This is the fourth in a series of blog posts about how the combination of Jenkins, Docker and continuous delivery practices can greatly accelerate software delivery pipelines - and, with them, innovation.

Next Gen CD - Orchestrating Docker

Transforming IT Automation with Continuous Delivery

Virtually every IT organization today is driving toward increased automation and exploring ways to achieve CD. There is an incredible array of services, tools and technologies available to these organizations as they strive to meet their goals for faster delivery of higher-quality software. Both Jenkins and Docker have been embraced by organizations because they help in specific ways, while providing a real foundation to build on. Jenkins, in particular, excels in its ability to integrate with existing systems and to orchestrate work broadly across multiple organizations. The introduction of native Workflow capabilities within Jenkins was the key to making this happen.

One of the key requirements for end-to-end IT automation - real industrialization of IT - is to move work reliably across a complex organization, integrating elements from many contributors and using a vast array of tools. The same pattern of requirements exists for active open source projects like Jenkins and Docker, but the constraints and cultures within enterprises are often the controlling factors in implementation.

The practices and features for using Docker and Jenkins together, which are outlined in this blog series, are the foundation for much broader application as CD practices mature over the coming years. We talked earlier about how automation is evolving. To dig a little deeper: each dot in the diagram from the first blog post in this series was typically born as a silo of automation within a larger organization.

Then, as automation processes grew across an organization, as the complexity of a delivery increased or as dependencies multiplied, shared repositories were put in place to provide cross-organizational access and central management.

Jenkins initially played a central role in automating the flow of artifacts between these islands of automation by providing simple triggers to fire actions when changes took place, along with richer constructs for defining flows. However, what developers and operations users needed was a simpler way to define, version and reuse workflows themselves, not just triggers. In addition, rather than just capturing binaries of libraries and application artifacts, developers and operations users needed simpler ways to capture applications together with their environments as containers - Docker’s forte.

The combination of Jenkins Workflow, the new Docker plugins and Docker itself provides a new level of abstraction for constructing and operationalizing CD pipelines. These tools help developers and operations personnel speak the same language, share the same processes and meet the same goals as a team. Furthermore, as a pipeline executes, not only can it be spread across a cluster of build machines, but it can leverage a key feature of Jenkins: its ability to integrate with hundreds of tools on the market. This is especially important in an enterprise environment, where many different tools have typically been accumulated over time and have to be integrated into the overall process.
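To make that abstraction a bit more concrete, here is a minimal sketch of what such a Workflow script could look like, assuming the CloudBees Docker Pipeline (docker-workflow) plugin is installed and a Docker daemon is available on the build node; the image name, agent label and shell commands are placeholders for illustration:

```groovy
// Minimal Jenkins Workflow (scripted) sketch: build and test inside a
// throwaway Docker container, then feed the results to another plugin step.
// Image name, agent label and commands are illustrative assumptions.
node('docker') {                       // any agent labeled 'docker' (assumption)
    checkout scm                       // pull the application source code

    // Run the build and unit tests inside a container, so every build uses
    // exactly the same toolchain regardless of what is installed on the node.
    docker.image('maven:3-jdk-8').inside {
        sh 'mvn -B clean verify'
    }

    // From here the same script can call any other Jenkins plugin step -
    // for example, publishing the test results.
    junit '**/target/surefire-reports/*.xml'
}
```

Because the script is just code, it can be versioned alongside the application, reviewed like any other change and reused across teams - exactly the "define, version and reuse" need described above.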

It is important to fully capture the benefits that Docker brings when added to your continuous delivery practices. If you focus solely on Docker as a packaging mechanism, you might think the impact is merely about the last mile of how your application gets pushed to production. Yet, since Docker fundamentally improves the way you package, maintain, store, test, stage and deploy your applications, Jenkins makes it possible to capture those improvements throughout the application lifecycle and provides key advantages all along your CD pipeline.

In a traditional environment, application source code is stored in a repository and, as Jenkins executes its CD pipeline, it interacts with several tools (Chef, Puppet, Serena) within target runtime environments - initially for testing, then for staging and production. But the actual baking of the application with its environment (operating system, application server, load balancer) is a concern that usually happens relatively late in the process, which also means the environment used along the pipeline stages might vary quite a bit.

In the new CD-with-Docker world, the target runtime environment isn’t an afterthought that’s left to the IT Ops team at a late stage of the CD pipeline. Instead, the runtime environment is closely associated with the application source code from the start. At the beginning of any CD pipeline, you’ll find a set of code repositories as well as a set of binary repositories containing a number of IT Ops-approved Docker images for the various environments required (operating system, application servers, databases, load balancers, etc.).

Very early on in the pipeline, Jenkins bakes the application together with its target Docker environment and produces a complete, executable application as another Docker image. This is the runtime version of your application, and it is stored in a company repository that holds the archive of your Docker-ized runtime applications. You can think of your overall CD process as taking several code and binary repositories as input and, as the pipeline executes, generating several new Docker images - the applications - as output. Those application images might end up being wrapped together as a set of microservices (for example, as in a Kubernetes deployment) or as a traditional monolithic application in one container. Once an application image has been successfully built, it can be stored in the company repository as a “golden image” and serve as a potential candidate for a future deployment to production (remember: CD doesn’t automatically push to production - that would be continuous deployment - but makes sure your application is in a release-ready state at all times).
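As a rough sketch of that baking step, the fragment below builds the application image on top of an IT Ops-approved base image referenced in the project’s Dockerfile and pushes the result to a company registry as a candidate golden image; the registry URL, credentials ID and image names are assumptions made for illustration:

```groovy
// Sketch: bake the application into a runtime Docker image and publish it
// as a candidate "golden image". Registry URL, credentials ID and image
// names are illustrative assumptions.
node('docker') {
    checkout scm

    // The Dockerfile in the repository starts FROM an IT Ops-approved base
    // image, so the runtime environment is baked in from the very first stage.
    def appImage = docker.build("myapp:${env.BUILD_NUMBER}")

    // Publish the candidate image to the company repository.
    docker.withRegistry('https://registry.example.com', 'company-registry-creds') {
        appImage.push()            // push myapp:<build number>
        appImage.push('candidate') // also tag it as the current release candidate
    }
}
```

In this model, promoting to production becomes a matter of re-tagging an image that has already passed every stage, rather than rebuilding anything at the last minute.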

From a process standpoint, this brings a lot of advantages:

  • First, since your application gets packaged in its final form very early on, it travels through all testing and staging steps in that form. This greatly reduces the risk that a problem shows up in production but not in earlier steps because the runtime environment changed between those stages.

  • Second, updating the environment itself is much more formalized, yet simplified. In a typical CD process, the main trigger of a new CD pipeline is a change in the source code of the application. This initiates a web of tests, integrations, approvals and so on, which, taken together, comprise the CD pipeline. However, if one wants to update the environment itself (such as patching the operating system), this happens separately, in parallel to the application build process, and it is only once the CD pipeline is executed again that the updated bits are picked up. As we have seen, this could happen late in the pipeline execution, so an application could end up not going through all tests with that new environment. With Docker, not only will a code change initiate a CD pipeline execution, but uploading a new Docker base image (such as an operating system) will also trigger the execution of any CD pipeline that consumes this image. Since Docker images can depend on each other, patching an operating system might result in the automatic update of database and application server images, which will in turn initiate the execution of any pipeline that consumes those database/application server images! A CD pipeline is no longer just for developers and their source code: developers and IT Ops now share the exact same pipeline for all of their changes (see the sketch after this list). This has the potential to hugely improve the safety and security of an IT organization. For example, when facing a critical and widely deployed security issue (such as the Heartbleed bug), IT Ops teams often struggle to make sure that absolutely ALL machines in production have been patched. How can they make sure that no server gets forgotten? With a Docker-based CD pipeline, every environment dependency is explicitly and declaratively stated as part of the pipeline, so a patched base image automatically propagates to every application that depends on it.
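The sketch below illustrates such an environment-triggered run. How the pipeline gets triggered is left open here (for example, a registry webhook or a notification plugin could kick it off), and the base image, registry and test script names are assumptions; the point is that the very same pipeline re-bakes and re-tests the application whenever its base image changes:

```groovy
// Sketch: the same pipeline runs whether the trigger is an application code
// change or a new version of an IT Ops-approved base image. The trigger
// mechanism, image names and test script are illustrative assumptions.
node('docker') {
    checkout scm

    // Always pull the latest approved base image so that an operating system
    // or application server patch is picked up on the very next run
    // (authentication via docker.withRegistry omitted for brevity).
    docker.image('registry.example.com/base/tomcat:8').pull()

    // Re-bake the application on top of the freshly patched base image...
    def appImage = docker.build("myapp:${env.BUILD_NUMBER}", '--pull .')

    // ...and run the same smoke tests a code change would trigger, against a
    // temporary container started from the newly baked image.
    appImage.withRun('-p 8080:8080') { c ->   // c is the running container
        sh './smoke-tests.sh http://localhost:8080'
    }
}
```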

In this world where countless Docker images are going through various phases of the CD pipeline and getting copied from one system to another, it becomes very hard to keep track of what processes each of those images went through. Docker images get transformed and change their names all the time as they move through the pipeline. This is where the “traceability” features in Jenkins shine. Jenkins unambiguously keeps track of exactly which images were transformed into what, who made what changes to them, what tests were run, where they were used, who performed any manual approvals and so on - regardless of whether they are stored in S3, in Docker Hub or as a file on NFS. In addition to being a useful trigger condition for automation (i.e., if an image passes these tests, start that process), this record is a treasure trove for forensic analysis, months or even years after the application has been pushed into production. It removes a lot of guesswork from troubleshooting and defect analysis, and helps you track the propagation of important changes, such as vulnerability fixes. This can prove very important, for example, in the case of a security breach, when you need to identify precisely when a specific vulnerability was released into the wild.
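As one small illustration of how that traceability gets anchored, the fragment below records the immutable ID of the image a build just produced and archives it with a Jenkins fingerprint, so that months later you can still ask which build, commit, test results and approvals a given image corresponds to; the image name is, once again, a placeholder:

```groovy
// Sketch: capture the content-addressable ID of the image this build produced
// so it can be traced back to this exact build later. Image name is a placeholder.
node('docker') {
    checkout scm
    def tag = "myapp:${env.BUILD_NUMBER}"
    docker.build(tag)

    // Record the immutable image ID alongside the build...
    sh "docker inspect --format '{{.Id}}' ${tag} > image-id.txt"

    // ...and archive it with a fingerprint so Jenkins can link this image to
    // the build, the commit and every downstream job that reuses the file.
    archiveArtifacts artifacts: 'image-id.txt', fingerprint: true
}
```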

What’s Next

As experience with Docker-based applications grows, the industry will quickly evolve to a place where a single container delivers an application or service within a microservices-based architecture. In that microservices world, fleet-management tools like Docker Compose, Mesos and Kubernetes will use Docker containers as building blocks to deliver complex applications. As organizations evolve to this level of sophistication, the need to build, test and ship a set of containers will become acute. The Jenkins Workflow Docker DSL is already designed for such use cases. Building on the robust functionality already delivered for Docker, the Jenkins community has now also developed support for Kubernetes.
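As a rough sketch of that direction, the fragment below builds two microservice images in parallel and then asks Kubernetes to roll them out; the service names, registry, credentials and kubectl setup on the build node are all assumptions for illustration:

```groovy
// Sketch: build and publish a set of microservice images in parallel, then
// roll them out to Kubernetes. Service names, registry, credentials and the
// kubectl configuration on the build node are illustrative assumptions.
node('docker') {
    checkout scm
    def registry = 'registry.example.com'
    def tag = env.BUILD_NUMBER

    docker.withRegistry("https://${registry}", 'company-registry-creds') {
        // Each service image is built from its own sub-directory of the repository.
        parallel(
            frontend: { docker.build("${registry}/shop/frontend:${tag}", 'frontend').push() },
            orders:   { docker.build("${registry}/shop/orders:${tag}", 'orders').push() }
        )
    }

    // Point the existing Kubernetes deployments at the freshly built tags.
    sh "kubectl set image deployment/frontend frontend=${registry}/shop/frontend:${tag}"
    sh "kubectl set image deployment/orders orders=${registry}/shop/orders:${tag}"
}
```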

Other use cases remain to be discovered: one must learn to walk before running. The heartening thing is that the Jenkins community is on the leading edge of these changes and responds quickly to technology trends. It is almost as if Jenkins is the one constant thing in the storm of industry changes that happen around it.

This blog post is authored by the following CloudBees executives:

  • Sacha Labourey, CEO

  • Dan Juengst, senior director, product marketing

  • Steve Harris, advisor

Read the entire series:

  • Jenkins, Docker and DevOps: The Innovation Catalysts -- Part 1

  • [Jenkins, Docker and DevOps: The Innovation Catalysts -- Part 2](https://cloudbees.com/blog/jenkins-docker-and-devops-innovation-catalysts-part-2)

  • Jenkins, Docker and DevOps: The Innovation Catalysts -- Part 3

  • [Jenkins, Docker and DevOps: The Innovation Catalysts -- Part 4](https://cloudbees.com/blog/jenkins-docker-and-devops-innovation-catalysts-part-4)

