When Docker was released as open source in 2013, there wasn’t any certainty it would be a success. Five years later, most companies are using containers as a convenient, standard way of packaging applications. A container is a lightweight, standalone executable package that contains all the dependencies, configuration and system tools needed to run the application - and containers run on both Linux and Windows. This has enabled consistency for both development and deployment, which is why containers are so important: you can build your packaged application on your development platform and deploy it anywhere that supports containers, such as the public cloud.
However, containers are not enough on their own. Once you start deploying multiple applications (e.g. microservices) and need a consistent way to handle discovery, recovery, deployment, autoscaling, security, and so on, you need a container orchestration layer to manage them. Until mid-2017, it looked like there would be many competing standards for container orchestration, but Kubernetes, created by Google in mid-2014, has now pretty much won. Redmonk states that Kubernetes usage in Fortune 100 companies is already at 54%, and with Kubernetes being adopted by platform vendors and public cloud providers alike, it is now the de facto standard for container orchestration. This changes everything. There is now one standard way to package your applications, and one standard orchestration layer to deploy them to. This is a boon for application developers and software vendors, bringing true portability between vendors, from Google’s GKE, Microsoft’s AKS on Azure, and Amazon’s EKS, to IBM Cloud and on-premise solutions such as Kubespray. With many public cloud providers offering free credit to new users, and Minikube available for local development, there is also plenty of choice in how to get started. There is also the excellent Jenkins X project, which helps even the complete novice get up and running developing cloud native software with the full backing of a continuous delivery system based on the Jenkins server.
What is Kubernetes?
Kubernetes provides a uniform way of managing containers. Its aim is to remove the complexity of deciding where applications should be scheduled to run, how to locate them, how to ensure they are running, and how to autoscale or deploy them. Some of the more advanced implementations allow autoscaling of the Kubernetes cluster itself, as well as the applications running on it.
Figure 1: Kubernetes Architecture
In Kubernetes, the cluster is deployed across one or more server nodes; in some implementations, the nodes themselves can also be autoscaled.
Kubernetes provides discovery and load balancing, resource management, and scheduling. The basic unit of deployment is called a pod: one or more containers that share the same lifecycle and are deployed to the same node. Typically, most pods will contain a single container, but the pod abstraction is flexible enough to allow cooperating services to be co-located.
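As a minimal sketch of the pod abstraction (all names and the image are hypothetical), a single-container pod can be described in a YAML manifest like this:

```yaml
# A minimal single-container pod (hypothetical names and image).
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
    - name: my-app
      image: example.com/my-app:1.0   # hypothetical container image
      ports:
        - containerPort: 8080         # port the application listens on
```

In practice you rarely create bare pods directly; they are usually managed by a higher-level controller, as described below.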
The Kubernetes cluster is managed by the control plane, which consists of an API server that handles service requests from inside or outside the cluster and is responsible for determining access control and the locations of services. The etcd component of the control plane is a highly available key-value store which holds all the metadata about the cluster.
To run an application on Kubernetes, you provide a manifest (a YAML file) which describes the type of application to be run and how many initial replicas are required. Kubernetes will then maintain the desired state (how many instances should be running). Often applications are load balanced behind a service: a virtual IP that sits in front of your application and load balances requests across your application pods. As pods can die and be recreated, or scale (if autoscaling is used), fronting your application with a service provides a level of indirection for requests, making it resilient to individual pod outages.
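A hedged sketch of such a manifest (the application name, image, and ports are all hypothetical): a Deployment declares the desired number of replicas, and a Service provides the stable virtual IP in front of them.

```yaml
# Hypothetical Deployment: Kubernetes maintains three running replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # desired state: three instances
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example.com/my-app:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
# Service: a stable virtual IP that load balances across the pods above.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app              # routes to pods with this label
  ports:
    - port: 80               # port exposed by the service
      targetPort: 8080       # port on the pods
```

Applying this with `kubectl apply -f manifest.yaml` would have Kubernetes recreate any pod that dies, while clients keep addressing the unchanging service.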
Kubernetes provides role-based access control, allowing cluster administrators to dynamically define roles to enforce access policies through the Kubernetes API.
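As an illustrative sketch (the role name, namespace, and user are hypothetical), a Role defines a set of permissions and a RoleBinding grants them to a subject:

```yaml
# Hypothetical Role granting read-only access to pods in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]                    # "" = the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]    # read-only operations
---
# Bind the role to a hypothetical user within the same namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane                         # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```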
Application developers can also extend Kubernetes with custom resource definitions. One such use case is to easily store application specific metadata robustly in the Kubernetes cluster itself, as key-value pairs.
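A rough sketch of that use case (the group, kind, and fields are all hypothetical): first a CustomResourceDefinition registers a new resource type with the API server, then instances of it can store application-specific key-value metadata.

```yaml
# Hypothetical CustomResourceDefinition registering an AppConfig type.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: appconfigs.example.com         # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: appconfigs
    singular: appconfig
    kind: AppConfig
---
# An instance of the custom resource, storing key-value metadata
# durably in the cluster (backed by etcd).
apiVersion: example.com/v1
kind: AppConfig
metadata:
  name: my-app-config
spec:
  featureFlags: "beta-ui=on"
  owner: "platform-team"
```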
With Kubernetes now nearly four years old (and having recently released v1.10), and having just become the de facto standard for container deployments on the cloud, a whole new avenue for cloud innovation opens up. While the Kubernetes community focuses on making the platform more mature and adding enterprise features, application developers can focus on a single way of deploying their applications in containers, assume they will run in a Kubernetes environment, and take advantage of the wider Kubernetes ecosystem, such as Helm for packaging applications and their dependencies. Kubernetes brings standardization to cloud environments, which simplifies the whole development process and removes cloud vendor lock-in. Some organizations are already taking advantage of this by deploying to different cloud providers in real time, based on both cost and application infrastructure requirements. The next big challenge for application developers will be how to leverage Kubernetes environments using continuous delivery - but that will be another post.