Building Modern, Real World Software Delivery Pipelines with Jenkins and Docker

This blog outlines the key use cases enabled by the newly released Docker plugins in the Jenkins community. You can drill into more depth with an in-depth blog post on each use case. The CloudBees team has actively worked within the community to release these plugins.
Lately, I have been fascinated by how lean manufacturing radically improved the production of goods - the key being a fully orchestrated, automated delivery pipeline. We are at the “lean” inflection point in software history, where lightweight containers such as Docker, together with Jenkins, will bring rapid improvements in software delivery. I suggest that you read more on the how and why in Kohsuke’s white paper on Jenkins and Docker.

The executive summary of this white paper is that Docker provides a common currency between Dev and Ops teams for expressing environments, while Jenkins provides the marketplace through orchestration with Workflow, whereby those currencies are easily exchanged between teams.

The CloudBees team has been at the forefront of these changes through our role in the Jenkins community. Our team members have seen, and often contributed to, requests for enhancements around Jenkins and Docker as the industry pokes its way through this new era. This experience has helped us capture the canonical use cases behind modern delivery pipelines. Today, I am happy to announce the general availability of a number of open source Docker plugins that help organizations adopt CD at scale with Jenkins and Docker.

 
There are two primary meta-use cases that these plugins help you tackle:

Meta-Use-Case 1: Constructing CD pipelines with Jenkins and Docker

Let’s construct a simplified pipeline; the steps outlined below increase in sophistication:

Jenkins Credit Union (JCU) has a Java web application that is delivered as a .war file that runs on a Tomcat container.

 
 
  1. In the simplest use case, the application binary and the middleware (the .war and Tomcat) are built independently as Docker containers and “baked” into one container, which is finally pushed to a registry (the company “Gold” Docker image). The Docker Build and Publish plugin can be used to achieve this goal, giving Jenkins the ability to build and package applications into Docker containers, as well as publish them as images to both private and public Docker registries like Docker Hub.
  2. Now, the JCU team wants to hand this container to the QA team for the “TESTING” stage. The QA team pulls the container and tests it before pushing it downstream. You can extend the chain of deliveries to “STAGING” and “PRODUCTION” stages and teams. In this case, the JCU team can either chain jobs together or use the Jenkins Docker Workflow DSL (ignore this for the moment) to build the pipeline.
  3. Everything’s going fine and peachy, until…the JCU security team issues a security advisory about the Tomcat Docker image. The security team updates the Tomcat Docker image and pushes it to the Docker Hub registry. At this point, the Dev job that “baked” the image is automatically tickled and builds a new image (application binary + middleware) without any human input. The tickle is achieved through the Docker Hub Notification plugin, which lets Docker Hub trigger application and slave environment builds. The QA job is then triggered after the bake process as part of the pipeline execution.
  4. Despite all the testing possible, the Ops team discovers that there is a bug in the application code, and they would like to know which component team is responsible for the issue. The Ops team uses the Docker Traceability plugin to let Jenkins know which bits have been deployed in production. This plugin lets them find the build that caused the issue in production.
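The “bake” in step 1 is simply a matter of layering the application binary onto the middleware image. As a sketch of what JCU’s Dockerfile might look like (the base image tag and .war path are hypothetical, not from the original post):

```dockerfile
# Start from the team's approved middleware image (hypothetical tag)
FROM jcu/tomcat:7

# Bake the application binary into the middleware container;
# deploying as ROOT.war serves the app at the context root
COPY target/ebanking.war /usr/local/tomcat/webapps/ROOT.war
```

A Jenkins job using the Docker Build and Publish plugin would build this Dockerfile and push the resulting “Gold” image to the registry.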
 
 

I mentioned earlier that we would ignore Workflow initially - let’s get back to it now.

Most real-world pipelines are more complex than the canonical BUILD->TEST->STAGE->PRODUCTION - Jenkins Workflow makes it possible to implement those pipelines. The Jenkins Docker Workflow DSL provides first-class support within Workflow to address the above use cases as part of an expressed workflow. Once implemented, the workflow becomes executable, and once executed, it becomes possible to visualize which stages succeeded and which failed, where the problems are located, and so on. The red/green image in the picture above is the Workflow Stage View feature that is available in the CloudBees Jenkins Platform.
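Expressed with the Docker Workflow DSL, the JCU pipeline above might be sketched roughly as follows. This is a minimal illustration only: the node label, image name, registry URL and credentials ID are hypothetical placeholders, not values from the original post.

```groovy
// Sketch of a Jenkins Workflow script using the Docker Workflow DSL
node('docker') {
    stage 'Build'
    checkout scm
    // Build the "baked" image (application binary + middleware)
    def image = docker.build('jcu/ebanking')

    stage 'Test'
    // Run the test suite inside the freshly built container
    image.inside {
        sh 'mvn -B test'
    }

    stage 'Publish'
    // Push the "Gold" image to the company registry
    docker.withRegistry('https://registry.example.com', 'docker-credentials') {
        image.push('latest')
    }
}
```

Because the pipeline is expressed as code, the same script also drives the Stage View visualization: each `stage` call becomes a red or green cell in the view.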

 
 
 
The above steps lay out a canonical use case for building pipelines with Jenkins. The examples can get more sophisticated if you bring the full power of Workflow, and the ability to kick off connected Docker containers through Docker Compose, to bear.
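For instance, a test stage that needs the application alongside its database could be described in a Compose file. A hypothetical docker-compose.yml (image names, ports and credentials are illustrative):

```yaml
# Hypothetical Compose file: brings up the application together with
# a database so integration tests can run against connected containers.
web:
  image: jcu/ebanking:latest
  ports:
    - "8080:8080"
  links:
    - db
db:
  image: mysql:5.6
  environment:
    MYSQL_ROOT_PASSWORD: example
```

A pipeline step could then run `docker-compose up -d` before the integration tests and `docker-compose down` (or `stop`/`rm` on older Compose versions) afterward to tear the environment down.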
 

Meta-Use-Case 2: Providing build environments with Docker

JCU has multiple departments, and each of these departments has its own Jenkins master and corresponding policies for how build environments can be set up.

 
Use case 1: The “PRODUCTION” team of the e-banking software requires that all builds happen in sanitized, locked-down build environments. They can use the Docker Slaves feature of the CloudBees Jenkins Platform to lock down these environments and provide them to their teams. This not only ensures that those build/test environments are always clean, but also provides increased isolation: a build executing in one Docker container has no access to the other Jenkins jobs concurrently executing on the same machine in different containers.
 
JCU is also using the CloudBees Jenkins Platform to manage multiple masters, so they can use the “Shared Configuration” feature to share these slaves across all client masters.
 

Use case 2: The CTO team wants the flexibility to have custom environments for working with custom stacks. The Docker Custom Build Environment plugin allows Docker images and Dockerfiles to serve as templates for Jenkins slaves, reducing the administrative overhead of slave installation to updating a few lines in a handful of environment definitions for potentially thousands of slaves.
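Since a slave environment is defined by an ordinary Dockerfile, updating a build environment means editing that one file. A hypothetical slave definition for a Java shop (the base image and tool versions are illustrative):

```dockerfile
# Hypothetical build-slave image: this one file defines the environment
# for every job that uses it, so a toolchain upgrade is a one-line change.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y \
    openjdk-7-jdk \
    maven \
    git
```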

 

 

In this way, the overhead involved in maintaining hundreds or even thousands of slaves is reduced to changing a few lines in the company’s Docker slave Dockerfile.

 
Closing thoughts
The above set of use cases and corresponding plugins pushes the boundary of Continuous Delivery within organizations. As experience with Docker grows, the Jenkins community will continue building out features to keep up with new requirements.
 
I hope you have fun playing with all the goodies just released.
 
Where do I start?
  1. All the plugins are open source, so you can install them from the update center, or you can install the CloudBees Jenkins Platform to get them quickly.
  2. Read more about the impact of Docker and Jenkins on IT in Kohsuke’s white paper.

Harpreet Singh

Vice President of Product Management 
CloudBees
 
Harpreet is the Vice President of Product Management and is based out of San Jose. Follow Harpreet on Twitter

 

 

Comments

Fantastic move, thank you! I cannot find the Docker Slaves plugin anywhere. Will it be released later or is it only part of the enterprise package?

Hello there @Unknown: the Docker Slaves plugin mentioned is available from jenkins-ci.org (https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin). The enterprise package offers other features for Docker, like the ability to centrally manage Docker connections with Jenkins Operations Center and of course support for the platform itself.
