This article was originally published on Heptio's blog by Joe Beda. With their kind permission, we’re sharing it here for Codeship readers.
This is the fourth part in a multipart series that examines multiple angles of how to think about and apply “cloud native” thinking.
There is quite a bit of excitement around containers. It is helpful to try to get to the root of why containers are exciting to so many folks. In my mind, there are three different reasons for this excitement:
Packaging and portability
Efficiency and resource utilization
Security
Let’s look at each of these in turn.
The Packaging of Containers
First, containers provide a packaging mechanism. This allows the building of a system to be separated from its deployment.
In addition, the artifacts/images that are built are much more portable across environments (dev, test, staging, prod) than more traditional approaches such as VM images.
Finally, deployments become more atomic. Traditional configuration management systems (Puppet, Chef, Salt, Ansible) can easily leave systems in a half-configured state that is hard to debug. It is also easy to have unintended version-skew across machines without realizing it.
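To make the packaging idea concrete, here is a minimal, hypothetical Dockerfile (the service name and files are invented for illustration; the article does not prescribe any particular setup). The point is that the build produces a single immutable image that moves unchanged through every environment:

```dockerfile
# Hypothetical Python service; all names here are illustrative.
FROM python:3.12-slim

WORKDIR /app

# Install pinned dependencies first so this layer caches across builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The built image is an immutable artifact: the same bytes run in
# dev, test, staging, and prod, eliminating version skew by construction.
CMD ["python", "app.py"]
```

In this model you build and tag the image once, then promote that same image through each environment, rather than re-running configuration management on every machine and hoping the results converge.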
The Efficiency of Containers
Second, containers can be lighter weight than full systems, leading to increased resource utilization. This was the main driver when Google introduced cgroups -- one of the core kernel technologies underlying containers.
By sharing a kernel and allowing much more fluid overcommit, containers make it easier to “use every part of the cow.” Over time, expect to see much more sophisticated ways to balance the needs of containers sharing a single host without noisy-neighbor issues.
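The utilization win from overcommit can be sketched with some simple accounting (a toy model with made-up numbers, not anything from the article): each workload reserves CPU for its peak but typically uses far less, so a scheduler that allows reservations to exceed physical capacity fits more work per host.

```python
# Toy model of overcommit: all numbers and names are invented.

NODE_CPU = 16.0    # physical cores on one host
OVERCOMMIT = 1.5   # allow reservations up to 150% of physical CPU

# Each workload reserves CPU for its peak but typically uses much less.
workloads = [{"reserved": 2.0, "typical": 0.5} for _ in range(10)]

def fits(workloads, node_cpu, overcommit):
    """True if total reservations fit under the overcommitted capacity."""
    return sum(w["reserved"] for w in workloads) <= node_cpu * overcommit

# Without overcommit, these 10 workloads (20.0 cores reserved) do not fit
# on a 16-core host; with a 1.5x factor they do, and actual typical usage
# is only 5.0 cores -- the slack the overcommit is betting on.
print(fits(workloads, NODE_CPU, 1.0))         # False: 20.0 > 16.0
print(fits(workloads, NODE_CPU, OVERCOMMIT))  # True: 20.0 <= 24.0
print(sum(w["typical"] for w in workloads))   # 5.0
```

The risk, of course, is that if every workload hits its peak at once the host is oversubscribed, which is exactly the noisy-neighbor balancing problem mentioned above.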
The Security of Containers
Finally, many users view containers as a security boundary. While containers can be more secure than plain Unix processes, care should be taken before viewing them as a hard security boundary.
The security assurances provided by Linux namespaces may be appropriate for “soft” multi-tenancy where the workloads are semi-trusted but not appropriate for “hard” multi-tenancy where workloads are actively antagonistic.
There is ongoing work in multiple quarters to blur the lines between containers and VMs. Early research into systems like unikernels is interesting but won’t be ready for wide production for years yet.
The Next Evolutionary Step: Clusters
While containers provide an easy way to achieve the goals above, they aren’t strictly necessary. Netflix, for instance, has traditionally run a very modern stack (and is the AWS poster child) by building and deploying VM images much as others use container images.
While most of the original push around containers centered on managing the software on a single node in a more reliable and predictable way, the next step in this evolution is clusters (also often known as orchestrators).
Taking a number of nodes and binding them together with automated systems creates a new self-service set of logical infrastructure for development and operations teams.
Clusters help eliminate ops drudgery.
With a container cluster, we make computers take over the job of figuring out what workload should go on which machine. Clusters also silently fix things up when hardware fails in the middle of the night instead of paging someone.
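What the cluster automates here can be sketched in a few lines (a deliberately simplified first-fit scheduler; the node and workload names are invented and real orchestrators are far more sophisticated): place workloads on nodes with free capacity, and when a node dies, re-place its workloads on the survivors without waking anyone up.

```python
# Minimal sketch of automated placement and failure recovery.
# Names and capacities are illustrative only.

def schedule(workloads, free):
    """Greedy first-fit placement; mutates `free` (node -> spare CPU)."""
    placement = {}
    for name, cpu in sorted(workloads.items()):
        for node, cap in free.items():
            if cap >= cpu:
                free[node] = cap - cpu
                placement[name] = node
                break
    return placement

free = {"node-a": 4.0, "node-b": 8.0}
workloads = {"web": 2.0, "api": 2.0, "db": 3.0}
placement = schedule(workloads, free)

# node-a fails in the middle of the night: drop it and reschedule its
# workloads onto the remaining capacity -- no pager involved.
displaced = {w: workloads[w] for w, n in placement.items() if n == "node-a"}
del free["node-a"]
recovered = schedule(displaced, free)
```

Real schedulers add constraints, priorities, and health checks on top of this, but the core idea is the same: placement and recovery become a computation rather than a human task.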
The first thing that clusters do is enable the operations specialization (as described in part 3) that allows application ops to thrive as a separate discipline. By having a well-defined cluster interface, application teams can concentrate on solving the problems that are immediate to the application itself.
The second benefit of clusters is that they make it possible to launch and manage more services. This allows new architectures (via microservices, described in the next installment of this series) that can unlock velocity for development teams.
In the next part of this series, we will look at how Cloud Native works to enable microservices.
Check out the rest of the series: