OpenStack Tokyo, Docker, and Moving from Monolith to Microservices

Written by: Electric Bee

[Image: Me at the Imperial Palace]

As you may know, I've recently returned from a trip to Tokyo, Japan.

[Image: Tokyo Skytree]

As I learned from my Japanese colleagues, October is a great time to visit Tokyo: the weather is temperate and the peak season has just passed. My favorite tourist attraction was the Tokyo Skytree, with its breathtaking 360-degree views. Being a tourist was fun, but the main reason I went to Tokyo was to deliver a talk at OpenStack Summit Tokyo.

[Image: OpenStack Summit, Tokyo]

My session was titled "Managing Microservices at Scale With OpenStack + Docker", and as you'd expect, I spent a lot of time talking about Docker, microservices, and software delivery pipelines. We've all heard about microservices and monolithic applications; each has its own pros and cons for development, testing, and deployment. The consensus currently forming around best practices is to start with a monolithic architecture, work on your product until you have some business traction and customers, and only then decompose the monolith into microservices. (BTW, for an in-depth look at microservices architecture and deployment patterns, check out our CTO Anders Wallgren's excellent talk at the recent DOES15.) In my talk, I went into further detail about these ideas, including a live demo (see the recording below).

In this post, I wanted to focus on modeling the decomposition of a monolithic app into microservices, using our free version of CloudBees Flow. To illustrate the decomposition, I created a demo application that queries a weather feed (based on a weather widget by davefp) and displays the temperatures in 16 cities around the world. For the UI, I created a Dashing dashboard that could be deployed as a monolith in one Docker container. Each widget in the dashboard was written as a Dashing job that runs every 30 seconds, queries the Yahoo weather API, and displays the results. I also wrote a few Selenium tests, which ran in Docker as well, to test the monolithic application.
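For a flavor of what each widget does, here is a rough Python sketch of the same polling logic. The actual demo used Dashing's Ruby jobs against the Yahoo weather API; the endpoint, field names, and city below are placeholders rather than the real demo code.

    import time
    import requests

    WEATHER_API = "https://example.com/weather"   # placeholder; the demo used the Yahoo weather API
    CITY = "Tokyo"

    def fetch_temperature(city):
        # Query the weather feed for one city and return its current temperature.
        response = requests.get(WEATHER_API, params={"city": city}, timeout=10)
        response.raise_for_status()
        return response.json()["temperature"]     # field name is illustrative

    while True:
        print(f"{CITY}: {fetch_temperature(CITY)} degrees")
        time.sleep(30)                            # the Dashing jobs ran on a 30-second interval

In the monolith, sixteen of these jobs run side by side inside the same dashboard process and the same container.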

Modeling the Pipeline, Application, and Deployment Process of Microservices in CloudBees Flow:

The Monolith:

This is what the software delivery pipeline looks like in CloudBees Flow:

[Image: CloudBees Flow monolith pipeline model]

This is what the application model looks like:

[Image: CloudBees Flow monolith application model]

And this is what the straightforward, serial deployment process looks like:

[Image: CloudBees Flow monolith deployment process model]

Microservices decomposition:

Next, I wanted to show how this application could be decomposed, and decided to split each city into its own microservice. Granted, this is a simplistic decomposition scheme, intended only to illustrate the point. In a real-world scenario, you would more likely re-architect the microservices around the type of data feed (e.g. temperature, traffic alerts, geo-based recommendations), supported languages, and so on. In my example, each microservice now has its own pipeline to go from source (a git repository) to artifact repository (a Docker registry).
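To make that concrete, here is a hypothetical sketch of what a single per-city service might look like, written in Python with Flask. The real demo kept the Dashing widgets, so this stand-in is only meant to show how small each city's service can be; the endpoint and port are placeholders.

    from flask import Flask, jsonify
    import requests

    app = Flask(__name__)
    CITY = "London"
    WEATHER_API = "https://example.com/weather"   # placeholder endpoint

    @app.route("/temperature")
    def temperature():
        # Each city service owns exactly one feed query and exposes it over HTTP.
        response = requests.get(WEATHER_API, params={"city": CITY}, timeout=10)
        response.raise_for_status()
        return jsonify(city=CITY, temperature=response.json()["temperature"])

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)

Each service of this kind gets its own git repository, its own image build, and its own entry in the Docker registry, which is what gives each city an independent pipeline.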

[Image: Sample Pipeline: Microservices with Containers]

To assemble all of these microservices into an application, I created a loose concept of an "application manifest" in the form of JSON input. This is one way to pick up a set of microservices and deploy them as a single application; the JSON input is passed as a parameter to the CloudBees Flow pipeline.

Now, let's compare how the decomposed monolith looks in CloudBees Flow in its microservices form. This is what the software delivery pipeline of the application looks like:

[Image: CloudBees Flow microservices pipeline model]

The application:

[Image: CloudBees Flow microservices app model]

The deployment process:

[Image: CloudBees Flow microservices deployment process model]

Notice that each microservice (in our case, each city) can be deployed in parallel, independently of the other services. There is still a serial stretch in the first couple of steps of the process; as the application matures, you would likely find a way to decompose those, too.
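I haven't reproduced the actual manifest here, but conceptually it maps each microservice to the Docker image that should be deployed for it. A hypothetical sketch, expressed as a small Python snippet that emits the JSON parameter handed to the CloudBees Flow pipeline (the application name, service names, and registry URL are all illustrative):

    import json

    # Hypothetical application manifest: one entry per microservice (city),
    # each pointing at the Docker image to pull from the registry.
    manifest = {
        "application": "world-weather-dashboard",
        "services": [
            {"name": "weather-tokyo",  "image": "registry.example.com/weather/tokyo:1.0"},
            {"name": "weather-london", "image": "registry.example.com/weather/london:1.0"},
            # ... one entry for each of the 16 cities
        ],
    }

    # The JSON string is what gets passed as the pipeline parameter.
    print(json.dumps(manifest, indent=2))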

How does OpenStack fit in:

As you fragment your application, testing becomes even more critical to ensure there are no issues or failures in each microservice, and you will find that your requirements for test environments grow considerably as you add more microservices. OpenStack and Docker lend themselves well to elastic use of your infrastructure during testing and deployment: you can easily add or remove instances to accommodate the needs of your app as you continue to decompose it, as you add more team members, or as you add redundancy to meet HA needs.
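For illustration only, here is roughly how that elasticity might be scripted against an OpenStack cloud using the openstacksdk Python client. This is not part of the demo; the cloud name, image, and flavor are placeholders for whatever your environment provides.

    import openstack

    # Placeholder cloud name from clouds.yaml.
    conn = openstack.connect(cloud="my-test-cloud")

    def add_test_instance(name):
        # Boot a short-lived VM to host Docker-based test environments.
        image = conn.get_image("ubuntu-22.04")
        flavor = conn.get_flavor("m1.small")
        return conn.create_server(name=name, image=image, flavor=flavor, wait=True)

    def remove_test_instance(name):
        # Tear the VM back down once the test run for that microservice is done.
        conn.delete_server(name, wait=True)

    server = add_test_instance("weather-tokyo-tests")
    print("booted", server.name)
    remove_test_instance("weather-tokyo-tests")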

Watch the recording of my talk:

If you're interested in learning more, you can watch the recording of my talk, which also includes a cool product demo:
