DockerCon Hackathon: Continuous Dockery

Written by: Electric Bee


Last year, team CloudBees participated in the first annual DockerCon Hackathon and finished among the top three submissions. This year, Nikhil and I returned to a bigger and badder hackathon event, evidence of Docker’s massive growth.

How it works

40+ teams of 1-10 hackers spent 24 hours working on a project from scratch.  Categories for submission:

  1. Build, Ship and Run Cool Apps with Docker
  2. Management & Operations: Logging, Monitoring, UI / Kitematic, Developer Tools, Deployment, CI / CD, Stats, etc.
  3. Orchestration: Composition, Scheduling, Clustering, Service Discovery, High Availability, Load Balancing, etc.
  4. Security, Compliance & Governance: Authorization, Provenance, Distribution, etc.
  5. Resources: Networking, Storage API, etc.

Everyone submitted a 2-minute video, and 10 teams were selected to present.  Of those presenting, the judges selected the top 3 as winners. 

Our plan

CloudBees exists to help people deliver better software faster. We wanted to show how Docker fits in with other tools in the software delivery ecosystem. Being experts in our own software, we decided to use CloudBees products to tie everything together and accelerate end-to-end Continuous Delivery, using:

  • CloudBees Flow – an orchestration tool that acts as the single pane of glass from commit through production
  • CloudBees Accelerator – an acceleration tool that dramatically speeds up builds and tests by distributing them across a cluster of CPUs

Last year’s entry focused on the Build stage of a continuous delivery pipeline. This year, we focused on the Integration stage. We built a deployment process (a few of these steps are sketched in code below) that:

  1. Dynamically spins up a VM on either EC2 or OpenStack
  2. Runs Docker Bench for security tests
  3. Retrieves artifacts from Bintray and Docker Hub
  4. Stands up linked MySQL and Wildfly containers running the application
  5. Runs Selenium tests distributed across a cluster
  6. Pushes some statistics to a Dashing dashboard
  7. Automatically tears down the VM if the tests are successful

The deployment process and the various technologies involved

That’s a lot to accomplish in 24 hours! But we were up for the task, and with a less-than-pretty version of this diagram chicken-scratched on a piece of paper, we got to work!
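To give a flavor of step 1, here’s a minimal sketch of spinning up a VM on EC2 with boto3. Our actual implementation used CloudBees Flow procedures (and also supported OpenStack); the AMI, instance type, and key pair below are placeholders:

```python
# Hypothetical sketch of step 1: provisioning a test VM on EC2 with boto3.
# The AMI, instance type, and key pair are placeholders, not our real values.
import boto3

def spin_up_vm():
    ec2 = boto3.resource("ec2", region_name="us-east-1")
    instances = ec2.create_instances(
        ImageId="ami-XXXXXXXX",    # placeholder AMI with Docker pre-installed
        InstanceType="t2.medium",  # placeholder instance size
        KeyName="hackathon-key",   # placeholder key pair
        MinCount=1,
        MaxCount=1,
    )
    instance = instances[0]
    instance.wait_until_running()  # block until the VM is up
    instance.reload()              # refresh attributes to pick up the public IP
    return instance.public_ip_address
```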

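Step 2 runs Docker Bench for Security, which ships as a container itself. A simplified sketch, assuming it runs directly on the test VM (a real invocation mounts several more host paths read-only; see the docker-bench-security README):

```python
# Simplified sketch of step 2: run Docker Bench for Security and capture its
# report so the pipeline can gate on the result.
import subprocess

def run_docker_bench(log_path="docker-bench.log"):
    cmd = [
        "docker", "run", "--rm",
        "--net", "host", "--pid", "host",
        "-v", "/var/run/docker.sock:/var/run/docker.sock:ro",
        "docker/docker-bench-security",
    ]
    with open(log_path, "w") as log:
        result = subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT)
    return result.returncode == 0  # gate the pipeline on the scan result
```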
What we built

We chose a sample web application called The Heat Clinic because it has a couple of moving parts (an application server and a database), making it a somewhat realistic example. We started out by building the Continuous Delivery pipeline.

The continuous delivery pipeline defined in CloudBees Flow

For this hackathon, we focused on the Integration stage. Still, it’s important to know what the full pipeline looks like: making sure the automation pieces are reusable, and knowing how they’d be reused, is key. With this in mind, everything we built can be plugged into Production (or any other stage) with minimal effort.

The next step was modeling the application. The Heat Clinic application has two tiers, one for the web application and one for the database. Each of those tiers has a few different components (artifacts): the Wildfly/MySQL containers from Docker Hub, the WAR file for the web application, configuration files, SQL initialization scripts, etc. We defined the tiers, the components, and the processes to deploy or undeploy each of those components.

The application model defined in CloudBees Flow
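For a flavor of how the tiers come up in order (step 4), here’s a hypothetical sketch using the Docker SDK for Python: database first, then a Wildfly container linked to it. The container names and password are placeholders, and deploying the Heat Clinic WAR into Wildfly is omitted for brevity:

```python
# Hypothetical sketch of step 4 with the Docker SDK for Python.
import docker

client = docker.from_env()

# Database tier comes up first so the app tier has something to link to.
db = client.containers.run(
    "mysql:5.6",
    name="heatclinic-db",
    environment={"MYSQL_ROOT_PASSWORD": "changeme"},  # placeholder secret
    detach=True,
)

# Application tier, wired to the database via a container link.
app = client.containers.run(
    "jboss/wildfly",
    name="heatclinic-app",
    links={"heatclinic-db": "db"},  # resolvable as "db" inside the container
    ports={"8080/tcp": 8080},       # expose the web app on the host
    detach=True,
)
```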

Next, we defined the deployment process that coordinates everything. This process is closely aligned with the diagram shown earlier: spin up the dynamic environment, run the security tests, retrieve all the artifacts, stand up the containers (in the right order), run the Selenium tests, and tear down the environment if everything is successful.
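As an illustration of the artifact-retrieval step, a sketch along these lines would pull the WAR from Bintray and the base images from Docker Hub. The Bintray subject/repo/path are placeholders, not our actual coordinates:

```python
# Illustrative sketch of step 3: fetch the application WAR from a Bintray
# download URL and pull the base images from Docker Hub.
import subprocess
import requests

def fetch_war(dest="heatclinic.war"):
    url = "https://dl.bintray.com/example-subject/example-repo/heatclinic.war"
    resp = requests.get(url, stream=True, timeout=60)
    resp.raise_for_status()
    with open(dest, "wb") as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)

def pull_images():
    for image in ("mysql:5.6", "jboss/wildfly"):
        subprocess.run(["docker", "pull", image], check=True)
```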

The deployment process defined in CloudBees Flow

The Selenium suite we put together took a long time to run, and we realized this is not uncommon for Selenium. So we sped up the suite using CloudBees Accelerator. By distributing the 101 tests across just two 4-core VMs, Accelerator used its patented secret sauce to parallelize and run the tests on the individual cores, bringing the overall time down from over 27 minutes to under 4 minutes. That’s 7 times faster with just 2 machines! If we were to add more VMs to our cluster, we could bring that time down to under 30 seconds: a whopping 60 times faster!
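Accelerator’s scheduling machinery is proprietary, but the principle (independent Selenium tests fanning out across idle cores) can be sketched with a plain thread pool. Everything here, from the test file names to the worker count, is illustrative:

```python
# Sketch of the principle only: independent Selenium tests fan out across a
# pool of workers. The test file names and worker count are illustrative.
import subprocess
from concurrent.futures import ThreadPoolExecutor

TESTS = [f"tests/test_{i:03d}.py" for i in range(101)]  # our suite had 101 tests

def run_test(path):
    # Each test is an independent process, so the suite parallelizes well.
    result = subprocess.run(["python", path], capture_output=True)
    return path, result.returncode

with ThreadPoolExecutor(max_workers=8) as pool:  # 2 VMs x 4 cores = 8 workers
    results = list(pool.map(run_test, TESTS))

failed = [path for path, rc in results if rc != 0]
print(f"{len(TESTS) - len(failed)}/{len(TESTS)} tests passed")
```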

Visualizing how CloudBees Accelerator distributed the Selenium tests across a cluster

Finally, we put a pretty face on our work by pushing some key stats to Dashing, typically displayed on a TV screen so everyone has an “at a glance” view of the health of the system.


Dashing dashboard showing key statistics
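Dashing widgets accept a plain JSON POST, so feeding the dashboard is essentially a one-liner per stat. A sketch, with placeholder host, auth token, and widget names:

```python
# Sketch of step 6: push key stats to a Dashing widget over HTTP. The host,
# auth token, and widget names are placeholders for our dashboard's values.
import requests

def push_stat(widget, value, host="http://dashboard:3030", token="YOUR_AUTH_TOKEN"):
    payload = {"auth_token": token, "current": value}
    requests.post(f"{host}/widgets/{widget}", json=payload, timeout=5)

push_stat("test_time", 224)     # e.g. suite runtime in seconds
push_stat("tests_passed", 101)  # e.g. passing test count
```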

Our submission

While we did not win this time around, we did come out with a very cool story and a working set of integrations highlighting Docker in the context of Continuous Delivery. Here are the pain points we looked to address:

  • You're looking at Docker but need to tie it together with a bunch of existing tools
  • You're looking to increase your velocity by implementing Continuous Delivery & Continuous Testing
  • You need to gather and surface critical stats for your applications
  • You want to make sure you're auditing for security at the earliest possible stage
  • You want to run your long-running integration tests early and often

Check out the entire flow in the short 3-minute video we included in our submission:

We’re already looking forward to the DockerCon Hackathon next year.  It will be interesting to see what the rapidly changing Docker landscape looks like by then!


How do you integrate Docker as part of your CD pipeline?


Container technology like Docker promises to provide versionable, environment-independent application services in a snap. However, the tasks and tools involved in creating, validating, promoting, and delivering Docker containers into production environments are many, complex, and time-consuming. To learn more about how to successfully incorporate Docker into your end-to-end Continuous Delivery pipeline, I invite you to join my colleague Nikhil Vaze and me for an upcoming webinar, where we'll discuss:

  • How you can tie together all of your existing tools to repeatedly deploy high quality applications using Docker
  • Common use cases and patterns for incorporating Docker in your software delivery pipeline
  • How you can eliminate confusion and ensure auditability by centrally managing multiple containers across environments
  • How to enable tracking and reporting on container build, test, and runtime stats
  • How to accelerate lead time and feedback loops by crushing build and test times by up to 60X

Register for the webinar »
