Microservices and Docker at Scale - The PB&J of Modern Application Delivery
So, you've heard me talk - a lot ;) - about Microservices before (1, 2, 3, 4, 5, 6, 7, 8, 9 ...). Recently, I decided to expand on that and explore the connection between Microservices and Docker as the foundation for modern application delivery.
It started with the DevOps Enterprise Summit last November, where I gave a talk on Microservices and Docker at scale. Most recently, I had the honor of being asked to contribute an article on the topic for DZone's DevOps Research Guide, now published. Read on to learn:
- Why do containers and microservices go well together?
- What are the challenges of each, separately, and together?
- What do you need to know in order to take advantage of the benefits they offer?
- What are some best practices and some 'gotchas' to be aware of, so you do not drown in the microservices 'whirlpool'?
This article is featured in the 2017 DZone Guide to DevOps: Continuous Delivery and Automation. Get your free copy for more expert advice on optimizing your DevOps processes, CD pipelines, industry statistics, and more.
Microservices and containers have recently garnered a lot of attention in the DevOps community. Docker has matured, and is expanding from being predominantly used in the Build/Test stages to Production deployments. Similarly, microservices are expanding from being used mostly for green-field web services to being used in the enterprise - as organizations explore ways to decompose their monolith applications to support faster release cycles. As organizations strive to scale their application development and releases to achieve Continuous Delivery, microservices and containers, although challenging, are increasingly considered. While both offer benefits, they are not “one size fits all”, and we see organizations still experimenting with these technologies and design patterns for their specific use cases and environment.
Why Microservices? Why Containers?
Microservices are an attractive DevOps pattern because they enable speed to market. With each microservice being developed, deployed, and run independently (often using different languages, technology stacks, and tools), microservices allow organizations to "divide and conquer" and scale teams and applications more efficiently. When the pipeline is not locked into a monolithic configuration - of toolset, component dependencies, release processes, or infrastructure - you gain a unique ability to scale development and operations. Microservices also make it easier to determine which services need scaling (and which don't) in order to optimize resource utilization. Containers offer a well-defined, isolated runtime environment. Instead of shipping an artifact and all of its variables, containers support packaging everything into a Docker-type file that is promoted through the pipeline as a single container, in a consistent environment. In addition to isolation and environment consistency, containers impose very low runtime overhead. This consistency from development to production, alongside extremely fast provisioning, spin-up, and scaling, accelerates and simplifies both development and operations.
Why Run Microservices in Containers?
Running microservices-based applications in a containerized environment makes a lot of sense. Docker and Microservices are natural companions, forming the foundation for modern application delivery. At a high level, microservices and Docker together are the PB&J of DevOps because:
- They are both aimed at doing one thing well, and those things are complementary
- What you need to learn to be good at one translates well to the other
- A microservice is (generally) a single process focused on one aspect of the application, operating in isolation as much as possible.
- A Docker container runs a single process in a well-defined environment
- With Microservices you now need to deploy, coordinate, and run multiple services (dozens to hundreds), whereas before you might have had a more traditional three-tier/monolithic architecture. While Microservices support agility—particularly on the development side—they come with many technical challenges, mainly on the operations side.
- Containers help manage this complexity because they make it fast and easy to deploy services, particularly for developers.
- Microservices make it easier to scale because each service can scale independently of other services
- Container-native cluster orchestration tools, such as Kubernetes, and cloud environments, such as Amazon ECS and Google Container Engine (GKE) provide mechanisms for easily scaling containers based on load and business rules.
- System Comprehension
- Both microservices and containers essentially force you into better system comprehension – you can’t be successful with these technologies if you don’t have a thorough understanding of your architecture, topology, functionality, operations and performance.
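The "single process in a well-defined environment" pairing above can be illustrated with a minimal Dockerfile. This is only a sketch; the base image and file names (`service.py`, `requirements.txt`) are hypothetical:

```dockerfile
# One container, one process: the image packages a single microservice
# together with its runtime and dependencies.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY service.py .
# Exactly one foreground process; the container lives and dies with it.
CMD ["python", "service.py"]
```

Because the image carries its dependencies with it, the same artifact can be promoted unchanged from development through test to production.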
Challenges with Microservices and Containers at Scale
Managing microservices and large-scale Docker deployments poses unique challenges for enterprise IT. Because there is so much overlap in what an organization must be proficient at in order to successfully deploy and modify microservices and containers, there is also quite a bit of overlap in the challenges and best practices for operationalizing them at scale.
- Increased pipeline variations: Orchestrating the delivery pipeline becomes more complex, with more moving parts. When you split a monolith into several microservices, the number of pipelines might jump from one to 50 (or however many microservices you have set up). You need to consider how many different teams you will need and whether you have enough people to support each microservice/pipeline.
- Testing becomes more complex. More types of testing need to be taken into consideration - integration testing, API contract testing, static analysis, and more.
- Deployment complexity increases. While scaling the containerized app is fairly easy, there’s a lot of activity that needs to happen first. It must be deployed for development and testing many times throughout the pipeline, before being released to production. With so many different services developed independently, the number of deployments increases dramatically.
- Monitoring, logging and remediation become very important and increasingly difficult because there are more moving parts and different distributed services that comprise the entire user experience and application performance.
- There are numerous different toolchains, architectures, and environments to manage.
- You need to take into account existing legacy applications and how they will be coordinated with the new services and functionality of container- or microservices-based applications.
- Governance and auditing (particularly at the enterprise level) become more complicated with such a large distributed environment, and with organizations having to support both containers and microservices, alongside traditional releases and monolithic applications.
In addition to these common challenges, microservices and containers each pose their own unique challenges.
If you’re considering microservices, know that:
- Distributed systems are difficult and mandate strong system comprehension.
- Service composition is tricky and can be expensive to change. Start as a monolith, and avoid premature decomposition until you understand your application's behavior thoroughly.
- Inter-process failure modes need to be accounted for, and although abstractions look good on paper, they are prone to bottlenecks.
- Pay attention to transaction boundaries and foreign-key relationships, as they'll make it harder to decompose.
- Consider event-based techniques to decrease coupling further
- For API and services' SLAs, follow Postel's Law: "Be conservative in what you do, be liberal in what you accept from others."
- State management is hard - transactions, caching, and other fun things.
- Testing (particularly integration testing between services) and monitoring (because of the increased number of services) become way more complex.
- Service virtualization, service discovery and proper design of API integration points and backwards-compatibility are a must.
- Troubleshooting failures: “every outage is a murder mystery”
- Even if a service is small, the deployment footprint must be taken into account.
- You rely on the network for everything - you need to consider bandwidth, latency, and reliability.
- What do you do with legacy apps: rewrite? ignore? hybrid?
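The "conservative in what you do, liberal in what you accept" guideline above is often implemented as a tolerant reader: the consumer extracts only the fields it needs and ignores anything unknown, so a producer can add fields without breaking existing consumers. A minimal sketch in Python (the event and field names are hypothetical):

```python
import json

def parse_order_event(payload: str) -> dict:
    """Tolerant reader: pull out only the fields this service needs,
    ignoring any extra fields a newer producer may have added."""
    data = json.loads(payload)
    return {
        "order_id": data["order_id"],              # required field
        "amount": float(data.get("amount", 0.0)),  # optional, with a default
    }

# A newer producer adds a field; the older consumer keeps working.
event = '{"order_id": "A-17", "amount": "42.5", "loyalty_tier": "gold"}'
print(parse_order_event(event))  # {'order_id': 'A-17', 'amount': 42.5}
```

Combined with explicit versioning of breaking changes, this keeps service-to-service contracts backwards-compatible as they evolve independently.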
If you're considering containers, know that:
- Security is a critical challenge - both because containers are still a relatively new technology, and due to the security concerns of downloading an image file. Containers are black boxes to OpSec: less control, less visibility inside the container, and existing tools may not be container-savvy. Be sure to sign and scan images, validate libraries, etc.; harden the container environment as well; drop privileges early, and use fine-grained access control so it's not all root. Be smart about credentials (container services can help).
- Monitoring is tricky, since container instances may be dropped or spun up continuously. Logging and monitoring need to be configured to decommission expired containers, or to save the logs and data - business data, reference data, compliance data, diagnostics - from these ephemeral instances.
- Know what’s running where, and why, and avoid image bloat and container sprawl.
- Since the container hosting and cluster orchestration market is still emerging, we see users experimenting a lot with running containers across multiple environments, or with different cluster orchestration tools and APIs. These early adopters need to manage containers while minimizing the risk of lock-in to a specific cloud vendor or point tool, and without having to invest a lot of work (and a steep learning curve) in rewriting complex scripting to repurpose their deployment or release processes for a new container environment or tool.
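One common way to keep logs useful after a container is decommissioned is to write structured records to stdout (where the container log driver collects them), stamped with the container's identity. A minimal sketch using Python's standard `logging` module; the `SERVICE_NAME` environment variable is an assumption, while `HOSTNAME` is what Docker typically sets to the container ID:

```python
import json
import logging
import os

class ContainerJsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, stamped with the (ephemeral)
    container's identity, so records remain meaningful after the
    container itself is gone."""
    def format(self, record):
        return json.dumps({
            "service": os.environ.get("SERVICE_NAME", "unknown"),
            "container": os.environ.get("HOSTNAME", "unknown"),
            "level": record.levelname,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()  # stdout, collected by the log driver
handler.setFormatter(ContainerJsonFormatter())
log = logging.getLogger("payments")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("charge accepted")
```

Shipping these records to a central store means the container can be dropped at any time without losing its diagnostic trail.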
Best Practices for Microservices and Containers
Admittedly, there are a fair number of challenges when it comes to deploying microservices and containers; the end result, however, will be reduced overhead costs and faster time to market. If microservices and containers make the most sense for your application use case, a great deal of planning needs to happen before you decompose your application into a set of hundreds of different services, or migrate your data center to a container environment. Without careful planning and adherence to industry best practices, it is easy to lose the advantages of microservices and containers. To successfully run microservices and containers at scale, the organization must possess certain skill sets throughout the software delivery cycle:
- Build domain knowledge. Before deploying microservices, it is critically important to understand the domain before making difficult decisions about where to partition the problem into different services. Stay monolithic for a while; keep it modular and write good code.
- Each service should have independent CI and Deployment pipelines so you can independently build, verify and deploy each service without having to take into account the state of delivery for any other service.
- Pipeline automation: a ticketing system is not automation. With the increase in the number of pipelines and pipeline complexity, you must be able to orchestrate your end-to-end process, including all the point tools, environments, and configuration. You need to automate the entire process - CI, testing, configuration, infrastructure provisioning, deployments, application release processes, and production feedback loops.
- Test automation: Without first setting up automated testing, microservices and containers will likely become a nightmare. An automated test framework will check that everything is ready to go at the end of the pipeline and boost confidence for production teams.
- Use an enterprise registry for containers. Know where data is going to be stored and pay attention to security by adding modular security tools into the software pipeline.
- Know what's running where and why. Understand the platform limitations and avoid image bloat.
- Your pipeline must be tool- and environment-agnostic so you can support each workflow and toolchain, no matter what they are, and easily port your processes between services and container environments.
- Consistent logging and monitoring across all services provide the feedback loop to your pipeline. Make sure your pipeline automation plugs into your monitoring, so that alerts can trigger automatic processes such as rolling back a service, switching between blue/green deployments, scaling, and so on. Your monitoring/performance-testing tool needs to allow you to track a request through the system even as it bounces between different services.
- Be rigorous in handling failures (consider using, e.g., Hystrix to bake in better resiliency).
- Be flexible in staffing and organizational design for microservices. Consider whether there are enough people for one team per service.
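The "be rigorous in handling failures" practice above is often implemented as a circuit breaker, the pattern Hystrix popularized. Hystrix itself is a Java library; this is only a minimal Python sketch of the pattern, and the thresholds are arbitrary:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive failures
    the circuit opens and calls fail fast, giving the downstream service
    `reset_after` seconds to recover before one trial call is allowed."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit
        return result
```

Wrapping each remote call in a breaker like this keeps one slow or failing service from tying up threads across its callers and cascading the outage through the system.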
There is increasing interest in microservices and containers, and for good reasons. However, businesses need to make sure they have the skills and knowledge for overcoming the challenges of managing these technologies reliably, at scale. It is critical to plan and model your software delivery strategy, and align its objectives with the right skillsets and tools - so you can achieve the faster releases and reduced overhead that microservices and containers can offer.
More on the Microservices-Containers Combo:
Download the hot-off-the-press 2017 DZone DevOps Research Guide for more best practices for microservices and containers, CD anti-patterns, and more great content. I would be remiss if I didn't mention the most recent version of CloudBees Flow, which allows you to easily model and deploy containers and microservices-based applications as part of your end-to-end delivery pipeline (including robust plugins for Amazon ECS and Google Container Engine (GKE)). Download the free Community edition of CloudBees Flow to accelerate your onboarding and use of containers and microservices:
You can also watch the recording of my DOES16 talk on Microservices and Docker at Scale - The PB&J of Modern Application Delivery , below: (25 minutes)