While we’ve already discussed how to use the Google Container Engine to host elastic Jenkins agents, it is also possible to host the controller itself in the Google Container Engine. Architecting Jenkins in this way reduces friction and an administrator’s burden by taking advantage of the Google Container Engine’s container scheduling, health-checking, resource labeling, and automated resource management. Other administrative tasks, like container logging, can also be handled by the Container Engine, and the Container Engine itself is a hosted service.
What is Kubernetes and the Google Container Engine?
Kubernetes is an open-source project by Google which provides a platform for managing Docker containers as a cluster. Like Jenkins, Kubernetes’ orchestrating and primary node is known as the “controller”, while the nodes which host the Docker containers are called “minions”. “Pods” host containers/services on the minions and are defined as JSON pod files.
The Google Cloud Platform hosts the Google Container Engine, a Kubernetes-powered platform for hosting and managing Docker containers, as well as the Google Container Registry, a private Docker image registry hosted on the Google Cloud Platform. The underlying Kubernetes architecture provisions Docker containers quickly, while the Container Engine creates and manages your Kubernetes clusters.
Automating Jenkins server administration
Google Container Engine is a managed service that uses Kubernetes as its underlying container orchestration tool. Jenkins controllers, agents, and any containerized application running in the Container Engine will benefit from automatic health-checks and restarts of unhealthy containers. The how-to on setting up Jenkins controllers in the Google Container Engine is outlined in full here.
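To illustrate how those health-checks are expressed, a container in a Kubernetes pod can declare a livenessProbe that Kubernetes polls, restarting the container when the probe fails. This fragment is only a sketch against the v1 API — the /login path and the timing values are assumptions for illustration, not taken from the linked how-to:

```json
{
  "livenessProbe": {
    "httpGet": { "path": "/login", "port": 8080 },
    "initialDelaySeconds": 60,
    "timeoutSeconds": 5
  }
}
```

A fragment like this sits inside a container entry in the pod definition; Kubernetes then handles the unhealthy-container restarts described above without any administrator involvement.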
The gist is that the Jenkins controller runs from a Docker image and is part of a Kubernetes Jenkins cluster. The controller must have its own persistent storage where the $JENKINS_HOME, with all of its credentials, plugins, and job/system configurations, can be stored. This separation of the controller and $JENKINS_HOME into two locations makes the controller fungible and therefore easily replaced should it go offline and need to be restarted by Kubernetes. The important “guts” that make a controller unique all live in the $JENKINS_HOME and can be mounted to the new controller container on demand. Kubernetes’ own load balancer then handles re-routing traffic from the dead container to the new one.
The Jenkins controller itself is defined as a Pod (raw JSON here). This is where the ports for agent/HTTP requests, the Docker image for the controller, the persistent storage mount, and the resource label (“jenkins”) can all be configured.
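A pod definition along those lines might look like the following. This is a hedged sketch, assuming the Kubernetes v1 API, the public jenkins Docker image, and a GCE persistent disk named jenkins-home — the actual image, disk name, and paths in the linked repository may differ:

```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "jenkins",
    "labels": { "name": "jenkins" }
  },
  "spec": {
    "containers": [
      {
        "name": "jenkins",
        "image": "jenkins",
        "ports": [
          { "name": "http", "containerPort": 8080 },
          { "name": "agent", "containerPort": 50000 }
        ],
        "volumeMounts": [
          { "name": "jenkins-home", "mountPath": "/var/jenkins_home" }
        ]
      }
    ],
    "volumes": [
      {
        "name": "jenkins-home",
        "gcePersistentDisk": { "pdName": "jenkins-home", "fsType": "ext4" }
      }
    ]
  }
}
```

Note how the $JENKINS_HOME lives on the persistent disk rather than in the container: if Kubernetes replaces the controller, the new container simply remounts the same volume.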
The controller will also need two services to run to ensure it can connect to its agents and answer HTTP requests without needing the exact IP address of the linked containers:
service-http - defined as a JSON file in the linked repository, allows HTTP requests to be routed to the correct port (8080) in the Jenkins controller container’s firewall.
service-agent - defined in the linked JSON file, allows agents to connect to the Jenkins controller over port 50000.
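Sketches of those two service definitions are below, again assuming the v1 API. The names and ports match the description above, and the selector matches the “jenkins” label on the pod, but the JSON files in the linked repository are the authoritative versions:

```json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "service-http" },
  "spec": {
    "selector": { "name": "jenkins" },
    "ports": [ { "name": "http", "port": 8080, "targetPort": 8080 } ]
  }
}
```

```json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "service-agent" },
  "spec": {
    "selector": { "name": "jenkins" },
    "ports": [ { "name": "agent", "port": 50000, "targetPort": 50000 } ]
  }
}
```

Because each service routes by label selector rather than by IP address, agents and browsers keep working even after Kubernetes replaces the controller container.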
Where do I start?
The Kubernetes plugin is an open-source plugin, so it is available for download from the open-source update center or packaged as part of the CloudBees Jenkins Platform.
Instructions on how to set up a Jenkins controller in the Google Container Engine are available on GitHub.
The Google Container Engine offers a free trial.
The Google Container Registry is a free service.
Other plugins complement and enhance the ways Docker can be used with Jenkins. Read more about their use cases in these blogs:
Docker Build and Publish Plugin
Docker Agents with the CloudBees Jenkins Platform
Jenkins Docker Workflow DSL
Docker Hub Trigger Plugin
Docker Custom Build Environment plugin