CloudBees Core integrates with Binary Authorization on GCP

By Patrick Wolf

Editor’s Note: This blog post was updated by Jeff Fry, principal business development engineer. The original post was published on August 20, 2018.

Binary Authorization on the Google Cloud Platform (GCP) is now Generally Available (GA). CloudBees has been a close partner with Google and the Binary Authorization team during the alpha and beta versions. We are excited to be a part of the GA launch and CloudBees congratulates the Binary Authorization team on this important milestone!

Binary Authorization is based on the open source Grafeas artifact metadata API and allows teams to ensure that all containers deployed to Google Kubernetes Engine (GKE) have been validated against a defined policy for security and compliance. By integrating Binary Authorization with CloudBees Core, you can secure your container images during the Jenkins build process and then implement a policy to control the secured delivery of those images to GKE clusters.
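As a sketch of what such a policy looks like (the project and attestor names below are illustrative placeholders, not part of the demo), a Binary Authorization policy can require attestations from a named attestor before GKE will admit an image:

```yaml
# Illustrative Binary Authorization policy; project and attestor names are placeholders.
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  # Block any image that lacks a valid attestation, and log the decision.
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/my-project/attestors/build-attestor
```

A policy along these lines can be applied to a project with `gcloud container binauthz policy import`.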

One of our goals with CloudBees Core is to enable enterprises to optimize their usage of Jenkins through standardization, compliance, security and best practices, so I thought this was a perfect opportunity to demonstrate that by constructing a Jenkins Pipeline in CloudBees Core that integrates with Binary Authorization. Because CloudBees Core is fully integrated with Kubernetes and available for quick deployment on GKE from the GCP Marketplace, it is straightforward to take advantage of this new capability.

The ability to move software from source to production has never been this easy. However, this velocity does not come without risk. With the growing use of containers and automation as the foundation for modern application development, the need for security, compliance and governance does not go away. Operations teams and SREs must still ensure that all applications continue to run as designed, corporate standards have to be maintained, compliance must be met and security guaranteed. These concerns are top priority for our customers who rely on CloudBees to help deliver software fast while still solving these problems.

To meet these needs, organizations rely on a variety of techniques to implement quality and security gates in their continuous delivery pipelines. Quality, security, governance and compliance standards have typically been enforced by inserting approval steps into the software delivery chain, requiring that everything stop until a person can verify and approve the release. One popular method of implementing gates in the pipeline is the input step.

The input step works well to ensure that a human has validated the quality of the application before it moves to the next stage of the pipeline. The input step can also require approval from specific users before the pipeline can progress, and it provides an audit trail of who approved the release.
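As a minimal sketch, an input step in a Declarative Pipeline might look like this (the stage name and submitter group are illustrative):

```groovy
pipeline {
    agent any
    stages {
        stage('Approve release') {
            steps {
                // Pause the pipeline until a member of the named group approves;
                // the approver's identity is recorded with the build.
                input message: 'Deploy this build to production?',
                      ok: 'Approve',
                      submitter: 'release-managers'
            }
        }
    }
}
```

The `submitter` parameter restricts who may approve, which is what gives the gate its audit value.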

For many software release cycles this is fine, but it still requires a manual step with human input. Kubernetes and containers are completely changing how teams develop, deploy and manage software, and these approval gates become a bottleneck in source-to-production continuous delivery pipelines. The promise of Binary Authorization is that every aspect of a continuous delivery pipeline, including the signed approval of compliance and security checks, can be automated.

Offering customers the ability to define specific compliance rules, maintain quality and increase their overall product velocity is critical to CloudBees’ product strategy. The combination of Google Cloud’s Binary Authorization and CloudBees Core creates a very compelling story for enterprise customers.

Of course, none of this means that you have to automate everything, or that Binary Authorization won't work with manual compliance gates. In fact, it makes compliance gates even more powerful. An attestation is signed with a private key, and the corresponding public key is registered with the attestor so the policy can verify the signature. The private key might be held by a user (effectively adding a second, human factor to progressing the pipeline), it might be accessible only to specific pipelines or jobs in a different folder on your managed master, or it might be available only on a separate managed master. Your policy can require any combination of attestations from humans or tools before an image is allowed into production.
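As an illustration of what a signing stage might run (the attestor, key and image names below are placeholders, this assumes the private key is held in Cloud KMS, and flags may vary by gcloud version), a pipeline with access to the key could create an attestation along these lines:

```shell
# Sign the image digest and create an attestation using a Cloud KMS key.
# All resource names are placeholders; the image digest is elided.
gcloud container binauthz attestations sign-and-create \
    --artifact-url="gcr.io/my-project/my-app@sha256:..." \
    --attestor="build-attestor" \
    --attestor-project="my-project" \
    --keyversion-project="my-project" \
    --keyversion-location="global" \
    --keyversion-keyring="binauthz-keys" \
    --keyversion-key="attestor-key" \
    --keyversion="1"
```

Because only the holder of the key can produce a valid signature, where you place that key determines who or what can act as the gate.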

I have made a full demonstration application here. I tried to keep it as close as possible to the reference implementation described by Google. If you run the provided setup.sh script, it will configure everything you need on GCP to experiment with this yourself. I have included instructions on how to set up and use the demonstration, and documented each step so that different parts of it can be applied to your own continuous delivery pipelines.

All of the steps used are documented in simple bash scripts to show how things are accomplished. These scripts can be used in any pipeline as-is or adapted to your specific use case.

You can watch a walk-through of the configuration, installation and running of the demonstration here: