Jenkins, Docker and DevOps: The Innovation Catalysts -- Part 3
This is the third in a series of blog posts about how the combination of Jenkins, Docker and continuous delivery practices can greatly accelerate software delivery pipelines - and, with them, innovation.
Next Gen CI/CD: Use Cases, Best Practices and Learning
In this section, we will drill down into key use cases related to Jenkins and Docker, and offer best practices for them.
Because Jenkins and Docker are two very flexible technologies, the use cases can quickly become confusing: each one combines Jenkins and Docker in a different fashion, in order to achieve different objectives. For that reason, we have split the use cases into three sections:
The first section will focus on building Docker images. This is a simple use case, but a very important one, as it serves as the foundation of anything related to the use of Docker (with or without Jenkins). Essentially, in a Docker world, everything starts with a container... so containers must be built!
The second section will cover CI use cases and how Docker can help improve CI, independently of whether your application will ultimately be deployed as a Docker image or not. In this section, Docker mostly serves as an underlying (and transparent) layer that enables Jenkins to deliver faster, safer, more secure and more customizable CI.
The last section will cover typical CD use cases and how you can use Jenkins to orchestrate end-to-end pipelines based on Docker images/applications. Put simply, this is the future of software delivery.
In the Beginning was the Container…
Anything in Docker obviously starts with the creation of a Docker image. Docker images are the new Lego blocks of IT. As such, it is essential for the engine responsible for building them to be smart about it, highly customizable, secure and stable. The last point is very important and often overlooked: if everything in your IT depends on the creation of Docker images, your Docker builder becomes as critical as your production environment. Need to fix a security bug in production? If you can’t properly rebuild an image as part of your company process, you can’t fix your production environment, plain and simple.
Consequently, Jenkins is the ideal tool for the job. Jenkins has been used extensively in all kinds of environments for more than a decade and has proven to be extremely robust and stable. It also offers advanced security and scalability capabilities, such as the Role-based Access Control and clustering features offered by CloudBees.
The overall philosophy of building containers in Jenkins is based on tracking the dependencies between the pieces that make up a final application (which could be composed of multiple containers). Whenever part of the source code changes, or one of the golden base images the application image is built from changes, both development and operations have the option to automatically rebuild a new image.
Furthermore, Jenkins is tightly integrated with all of the Docker registries on the market, making it possible not only to securely manage the credentials used to store generated images, but also to track the entire process, from the trigger that initiated the generation of a new image to the actual location where that image is being used. Full traceability, at all times.
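As a rough sketch of what this can look like in practice, the declarative Jenkins Pipeline below (using the Docker Pipeline plugin) builds an application image, pushes it with credentials that Jenkins manages, and rebuilds whenever the source changes or an upstream "golden base image" job succeeds. The image name, registry URL, credential ID and upstream job name are all hypothetical placeholders, not part of any specific setup:

```groovy
// Hypothetical Jenkinsfile: the image name, registry URL, credential ID
// and upstream job name are placeholders for illustration only.
pipeline {
    agent any

    // Rebuild whenever a code change arrives, or whenever the job that
    // produces our golden base image completes successfully.
    triggers {
        pollSCM('H/5 * * * *')
        upstream(upstreamProjects: 'base-image-build',
                 threshold: hudson.model.Result.SUCCESS)
    }

    stages {
        stage('Build and push image') {
            steps {
                script {
                    // Build the application image from the Dockerfile in this repo.
                    def image = docker.build("acme/webapp:${env.BUILD_NUMBER}")

                    // Push it using credentials stored securely in Jenkins, so the
                    // registry password never appears in the job definition itself.
                    docker.withRegistry('https://registry.example.com', 'acme-registry-creds') {
                        image.push()
                        image.push('latest')
                    }
                }
            }
        }
    }
}
```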
Next Gen CI - Jenkins on Docker
One of the most immediate benefits you can get out of using Jenkins and Docker is to improve the way you run Jenkins and its build servers. Here, we are talking about how your existing non-Docker application development (mobile apps, desktop apps, etc.) can also benefit from Docker - and for many companies, this may be the more important benefit.
Companies doing CI maintain a cluster of Jenkins “agent machines,” on which a number of virtual slots, or “executors,” are defined and can be used by a Jenkins controller to perform build/test jobs. The total number of executors in a Jenkins cluster determines how many jobs can execute concurrently at any point in time.
The typical problem with such a setup is that build processes essentially share resources concurrently. This can create two categories of issues:
Transient: Singleton resources (network resources, files, etc.) can be requested at the same time by concurrent jobs, typically causing at least one of them to fail intermittently. (The same job executing at a different time or on a different machine would execute properly.)
Persistent: A build job can make changes to the hosting environment, and these changes can disrupt a future execution of that build or of another build.
Both categories of problems incur significant costs for the DevOps team: issues have to be debugged and validated, and environments have to be regularly cleaned up and “sanitized.” More importantly, such errors, although not related to actual bugs in the code being tested, lead to teams not fully trusting the CI results. Whenever a job fails, the typical reaction is “probably not my problem, surely an environment issue, I’ll wait to see if it persists.” The problem with that attitude is that the more one waits, the more other code changes accumulate, each of which is potentially responsible for truly breaking the build, hence diluting the responsibility for fixing the problem.
To remedy that situation, Jenkins’ in-depth Docker integration makes it possible to run your build in its own isolated Docker container rather than in a simple executor on a shared OS. Even if your application has nothing to do with Docker and will not be delivered as a Docker image itself, it will happily run inside a container. Your build and tests behave as if they have a whole computer to themselves, oblivious to the fact that they are actually confined in a jail cell. The Docker container essentially becomes an ephemeral executor that gets efficiently created and discarded 100 times a day.
Using Docker for your CI fixes the above issues - both transient and persistent - as each job executes in a fully isolated environment that is neither visible nor accessible to any other concurrent build, and each executor gets thrown away at the end of each build (or reused for a later build, if that’s what you want).
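As a minimal sketch of what this looks like in a declarative Jenkins Pipeline, the `docker` agent directive below spins up a container for the duration of the build and throws it away afterwards. The Maven image and the cache mount are just illustrative choices; any build image from Docker Hub or your own registry would do:

```groovy
// Minimal sketch: a non-Docker application (here, a Maven build) running
// inside a throwaway container instead of directly on a shared agent.
pipeline {
    agent {
        docker {
            // Example official Maven image; substitute whatever your build needs.
            image 'maven:3-eclipse-temurin-17'
            // Optional: cache the Maven repository on the host between runs.
            args '-v /root/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build and test') {
            steps {
                // Runs inside the container; the workspace is mounted automatically.
                sh 'mvn -B clean verify'
            }
        }
    }
}
```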
Furthermore, some companies are looking to completely isolate teams for confidentiality or IP reasons (i.e., source code/data/binaries from Team A should not be visible to Team B). In the past, the only way to obtain that behavior was to completely segregate environments (controllers and agents), and possibly implement additional security measures (firewalls, etc.). When you base your CI on Docker, builds executing on shared agents are fully isolated from one another and no longer pose that risk. In addition, features such as Role-based Access Control from CloudBees make it possible to share controllers as well, by putting the proper security rules in place.
Last but not least, IT Ops no longer needs to be in charge of managing build environments and keeping them clean, a tedious but critical task in a well-run CI/CD environment. Developers and DevOps can build and maintain their customized images while IT Ops provides generic vanilla environments.
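For instance - a sketch, assuming the team keeps a hypothetical ci/Dockerfile.build describing its customized build environment in its own repository - the declarative `dockerfile` agent lets Jenkins build that image on demand and run the job inside it, while IT Ops only has to provide agents with a vanilla Docker daemon:

```groovy
// Sketch of the "developers own the build image" model: the Dockerfile path,
// agent label and build command are placeholders for illustration only.
pipeline {
    agent {
        dockerfile {
            dir 'ci'                      // hypothetical directory holding the Dockerfile
            filename 'Dockerfile.build'   // hypothetical filename
            label 'docker'                // any agent that provides a Docker daemon
        }
    }
    stages {
        stage('Build') {
            steps {
                sh './build.sh'           // placeholder build command
            }
        }
    }
}
```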
For anybody doing CI today, moving builds into Docker containers is low-hanging fruit: very little disruption and lots of advantages.
Stay tuned for part four!
This series of blog posts is authored by the following CloudBees executives:
Dan Juengst, senior director, product marketing
Steve Harris, advisor
Read the entire series:
Jenkins, Docker and DevOps: The Innovation Catalysts -- Part 1
[Jenkins, Docker and DevOps: The Innovation Catalysts -- Part 2](https://cloudbees.com/blog/jenkins-docker-and-devops-innovation-catalysts-part-2)
Jenkins, Docker and DevOps: The Innovation Catalysts -- Part 3
[Jenkins, Docker and DevOps: The Innovation Catalysts -- Part 4](https://cloudbees.com/blog/jenkins-docker-and-devops-innovation-catalysts-part-4)