This article was originally published by Lee Sylvester on his personal blog. With his kind permission, we’re sharing it here for Codeship readers. You can read Part I here.
In my previous article, you looked at creating and preparing your Civo cluster so that you can access it from your development machine through the power of Docker. In this article, you’ll create an actual application that will launch into your cloud and be load balanced.
The beauty of this is that you can then scale this application horizontally, meaning you can support greater load by simply launching more virtual machines in your cluster.
First, Some Theory
To better understand what you’re going to do and how it works, I’m first going to break down some of the cooler aspects of this process and explain why they work.
The Docker ecosystem provides its clustering and orchestration machinery within the Docker Engine toolset itself, in the form of swarm mode. In Part 1 of this series, you created a controller node, which was then clustered with two worker nodes. These workers identified themselves to the controller, and the controller joined them to the swarm.
Once joined, each worker was then contactable via the controller node. This means that the controller node can deploy and send commands to each of the worker nodes. Each worker node can contact all other workers and the controller, too, but it is the controller that typically does the orchestration.
When deploying your application to your cluster, you assign a service name to each Docker container you wish to instance. You may choose to have multiple instances of a container in your cluster. For instance, you may wish to launch three instances of a particular service, which Docker will manage for you without your needing to care exactly where each instance lands. Although you have three nodes in your cluster, Docker may decide to place all three instances of a particular service on a single node, unless you tell it otherwise.
Once a service is launched on a cluster, Docker then provides access to this service by name, using DNS (Domain Name System). This allows for some clever service access using just the service's name, which you will see shortly.
It is important to remember that Docker does all the heavy lifting for you. So long as your configuration is correct, you simply provide Docker with your desired service relationships and scale, leaving Docker itself to worry about the topology.
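You'll see this DNS resolution at work later in this article. As a quick way to observe it yourself once the Compose setup is running locally, you can resolve a service's name from any container attached to the same network. A minimal sketch, assuming your project network is named myproject_default (Docker derives the name from your project directory; list yours with docker network ls):

# Ask Docker's embedded DNS to resolve the name "service" from a
# throwaway container (the network name is an assumption; check
# "docker network ls" for the real one):
docker run --rm --network myproject_default alpine nslookup service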
The Application
The purpose of this article is not to build anything complex, but to demonstrate deploying an application that is load balanced. Therefore, all I really want this application to do is report the uniqueness of its environment. To do this, you’ll use a simple PHP script which outputs to the calling browser the identity of the container it’s running in and the IP address that container’s hostname resolves to.
The following block outlines the folder and file structure of your application.
load_balancer
    - Dockerfile
    - nginx.conf
server
    - src
        - index.php
    - Dockerfile
docker-compose.yml
The Server
The server folder houses your actual application. Really, it’s just a PHP webpage and a Dockerfile that describes an image running a PHP-ready Apache server.
The index.php file simply outputs the values of a couple of built-in PHP functions. I chose PHP as the application base simply because outputting this information is much easier in PHP than in something like Node.js, which requires quite a lot more code.
<?php // server/index.php
echo getHostName();
echo "<br />";
echo getHostByName(getHostName());
Here, line 2 is responsible for outputting the hostname, which in Docker will be the name or identifier of the running container, while line 4 outputs the IP address that this hostname resolves to.
The Dockerfile required to get this running is even simpler.
# server/Dockerfile
FROM webgriffe/php-apache-base:5.5
COPY src/ /var/www/html/
You simply build on an existing base image for Apache/PHP development and copy your src directory contents into the container's html directory. Please note, however, that this Dockerfile is NOT intended for production use; it merely demonstrates the simplicity of deployments with Docker.
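If you do want something closer to production-ready, a reasonable starting point is the official PHP image rather than a third-party base. This is just a hedged sketch; the tag shown is illustrative, not a recommendation from the original setup:

# server/Dockerfile (alternative sketch; pick a maintained tag yourself)
FROM php:8.2-apache
COPY src/ /var/www/html/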
The Load Balancer
Once your application is running as a service in your cluster, you’ll need a way to actually marshal calls to it in a way that is load balanced. For instance, if a single instance of this service is running on each of your three nodes, how do you direct calls from users to each of the nodes? If you simply assign an IP of one of the virtual machines to a domain name, only that virtual machine's service instance will be utilized.
What you need is a way to route traffic based on server load, or simply to round-robin requests.
There are several solutions to this problem. Civo, in fact, has one such solution in the pipeline, but it isn’t yet available to the public. Until it is, you could choose to use a third-party solution, such as CloudFlare’s Load Balancer, which isn’t such a bad idea for inter-data center requirements. For this situation, however, and indeed for most use cases, you’ll use NGINX.
Choosing NGINX
NGINX is great because it’s free, lightweight, easy to configure, and can be modified to support your changing needs. Oh, and did I say it was free?
Another great aspect of NGINX is that it allows me to highlight and demonstrate the power of Docker's DNS capabilities.
# load_balancer/nginx.conf
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://service;
            proxy_http_version 1.1;
        }
    }
}
The NGINX config is a little more complex than the others in this article, but it's necessary. The section to be aware of is:
location / {
    proxy_pass http://service;
    proxy_http_version 1.1;
}
This creates a reference to your service and passes all requests to it. The proxy_pass directive points to a DNS address with the name service, which from here on will be the name of your service (original, I know). You can have as many instances of it running as you choose on as many virtual machines as necessary, and NGINX will be able to route to them all via the Docker Engine's built-in DNS.
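One caveat worth knowing: NGINX resolves the service hostname once at startup and caches the result. If you ever need it to pick up DNS changes at runtime, you can point it at Docker's embedded DNS server, which lives at 127.0.0.11 inside containers. The following variant is an optional sketch, not part of the configuration used in this article:

# Optional variant: re-resolve "service" every 10 seconds via Docker's
# embedded DNS; using a variable forces NGINX to resolve at request time.
location / {
    resolver 127.0.0.11 valid=10s;
    set $backend http://service;
    proxy_pass $backend;
    proxy_http_version 1.1;
}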
You’ll compile this into a Docker container using the following Dockerfile:
# load_balancer/Dockerfile
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
Running It All Locally
So, that just about does it for your containers. Now, you need a way to run them. You could do this using Docker calls on the command line, but I find Docker Compose a much more civil and user-friendly approach.
# docker-compose.yml
version: '2'
services:
  load_balancer:
    restart: always
    build: ./load_balancer
    ports:
      - "80:80"
  service:
    restart: always
    build: ./server
    ports:
      - "80"
Paste this whole block into a file called docker-compose.yml in the root of your project directory.
YAML files are designed to be easy to read. You’ll quickly notice, for example, that you are describing two services. The first service is the load balancer, which is set to always restart if it fails, has a specified build directory (the load balancer project), and maps the host's port 80 to the container's port 80.
Likewise, the second service (called service… maybe I should have chosen a different name?) has the same restart policy, points to the server build directory, and maps the container's port 80 to…what? Actually, you don’t care. By not specifying a host port, you leave Docker to take care of that for you. This way, you can literally run multiple instances of the same service in a single virtual machine without the ports conflicting with the other running instances. Pretty neat, huh?
Notice that the name supplied to the service is the name Docker will use for its DNS. Therefore, if you had named the server service foobar, you would reference it in the nginx.conf as http://foobar.
You can now launch your containers by running the following on the command line:
docker-compose up
If it fails, check that your current directory is the root of your project. I’m forever messing that up, myself.
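Once it's up, you can ask Docker which ephemeral host port it assigned to the service container. The name filter below is an assumption; Compose generates names like myproject_service_1, so check docker ps for the real one:

# Show the host port mapped to the service container's port 80:
docker port $(docker ps -qf "name=service") 80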
Now, if you navigate to http://localhost/index.php in your browser, you should see output like:
61ef13621f77
172.20.0.3
Notice the IP address is that of the running container and not of the host. This is typical when running Docker examples locally and nothing to panic over.
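As a side experiment, the ephemeral port mapping means you can even run several copies of the service locally without port clashes. A sketch, with the caveat that NGINX caches its DNS lookup, so requests may not spread evenly across the copies:

# Start three copies of the "service" container side by side:
docker-compose up --scale service=3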
Running In The Civo Cluster
So, that was great. You ran both containers and saw the output from the contained Apache server. That, however, is something you can do from a thousand Docker tutorials on the Internet. Let’s now take it a step further and launch this app in the cloud!
To do this, you’re going to need to do a few things. First, you will need to host the Docker images in a remote Docker repository. Now, this is beyond the scope of this article, but if you really want to go the whole hog and do it yourself, then I would recommend you read up about that on the Docker documentation site.
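In short, though, pushing to Docker Hub boils down to building, tagging, and pushing. The image names below are hypothetical placeholders; substitute your own Docker Hub username:

# "yourname" is a placeholder for your Docker Hub username.
docker build -t yourname/civolb ./load_balancer
docker build -t yourname/phpinf ./server
docker login
docker push yourname/civolb
docker push yourname/phpinf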
For those of you that already know this or simply want to get to the good stuff, I’ve already pushed them to the public Docker repository and will use their image names in the upcoming examples.
The next thing you will need to do is update your docker-compose.yml file. You’ll notice that the previous file uses version 2 of the Docker Compose specification and references local build directories. That is fine for local testing, but in order to push to a swarm with Docker Stack, you need to use version 3 of the Docker Compose specification and reference your hosted Docker images, like this:
# docker-compose.yml
version: '3'
services:
  load_balancer:
    image: leesylvester/civolb
    deploy:
      mode: global
    ports:
      - "80:80"
  service:
    image: leesylvester/phpinf
    deploy:
      mode: global
    ports:
      - "80"
As you can see, the version has changed and the build parameters have been replaced with image parameters pointing to my hosted images. The restart parameters have also been removed, as this is not supported in the version 3 specification.
However, you’ll notice that there is a new parameter for each service:
deploy:
  mode: global
What is this? Well, you may recall me saying earlier that, for the most part, Docker controls where instances will be placed. The mode: global setting is a neat way to ensure that Docker deploys one, and only one, instance of a service on each machine in your cluster. This isn’t strictly necessary, and probably not wise for most applications, but it’s a quick way to ensure that your NGINX image is deployed to your controller node, which will receive all initial requests from the public internet.
In a future article, I will highlight the various ways of orchestrating services, including tying a given service instance to specific virtual machines using tags. For this article, though, it’s best to stick with the simple and easy-to-remember approach.
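For a taste of what that looks like, here is a hedged sketch of the server service rewritten in replicated mode with a placement constraint. The values are illustrative and are not used in this article's deployment:

# Illustrative alternative to "mode: global" for the service entry:
service:
  image: leesylvester/phpinf
  deploy:
    mode: replicated
    replicas: 3
    placement:
      constraints:
        - node.role == worker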
Now that the file has been updated, you need to switch to the cluster context. If you followed the first article exactly, you do this with the following command.
eval $(docker-machine env cloud-manager)
If you named your controller/manager node something different, then simply substitute that name for cloud-manager.
Now that that’s done, any Docker commands you enter will be sent to the remote controller’s Docker Engine instead of your local system. So, with that in mind, let’s deploy the application.
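Before deploying, a quick sanity check that the context switch worked: listing the swarm's nodes should show your manager and both workers.

# Should list all three cluster nodes, with the manager marked as Leader:
docker node ls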
First, make sure you’re in the root of the project directory, so that Docker can access the docker-compose.yml file. Then enter the following in the command line:
docker stack deploy --compose-file docker-compose.yml mycloud
Here, you’re calling the deploy option of the docker stack command, requesting that it use the docker-compose.yml file (which is required, sadly) and naming the grouped set of services mycloud. The latter is so you can reference this deployment later. For example, if you wanted to tear it all down, you would run:
docker stack rm mycloud
Once deployed, you should see the terminal output something like the following:
Deploying service mycloud_service (id: jsuwys8h022806cq3sj9tnerm)
Deploying service mycloud_load_balancer (id: pwyktt2obu82uivc00mqu8ugl)
This simply means that the services service and load_balancer have been deployed with the names mycloud_service and mycloud_load_balancer, respectively. Note, however, that the DNS names are still service and load_balancer, but when referencing the services in Docker Stack calls, you will use the extended name variants.
So, that’s it. Your application has been deployed. But how do you know the service topology? How can you tell which servers the services are running on? The first thing you can do is list how many instances are running. To do this, simply run:
docker stack services mycloud
And you will be greeted with something like:
ID            NAME                   MODE    REPLICAS  IMAGE
jsuwys8h0228  mycloud_service        global  3/3       leesylvester/phpinf:latest
pwyktt2obu82  mycloud_load_balancer  global  3/3       leesylvester/civolb:latest
This tells you that the images leesylvester/phpinf:latest and leesylvester/civolb:latest were both deployed with three replicas each. In this instance, you know that each replica sits on a unique virtual machine, as you used the mode: global setting in the docker-compose file.
Now, if you want to be absolutely sure, then you can run the following to see just where these instances were placed:
docker stack ps mycloud
Which outputs:
ID            NAME                                             IMAGE                       NODE             DESIRED STATE  CURRENT STATE        ERROR  PORTS
idmbrol4q7wg  mycloud_load_balancer.dvuqjv8g99jzr36gz2bnsn9zg  leesylvester/civolb:latest  mycloud-manager  Running        Running 5 hours ago
zxnxyifm960w  mycloud_service.ke1smm0940orz8w9zb5zw4vsp        leesylvester/phpinf:latest  mycloud-worker2  Running        Running 4 hours ago
i077rcfv9i18  mycloud_service.h6hpjjrr1t81clpjaahvybuox        leesylvester/phpinf:latest  mycloud-worker1  Running        Running 5 hours ago
9qgu9xhngbq7  mycloud_service.dvuqjv8g99jzr36gz2bnsn9zg        leesylvester/phpinf:latest  mycloud-manager  Running        Running 4 hours ago
k0sypci21lrl  mycloud_load_balancer.h6hpjjrr1t81clpjaahvybuox  leesylvester/civolb:latest  mycloud-worker1  Running        Running 5 hours ago
gbfanldhvyb6  mycloud_load_balancer.ke1smm0940orz8w9zb5zw4vsp  leesylvester/civolb:latest  mycloud-worker2  Running        Running 5 hours ago
So, you can now see one of each instance is running on each cluster node. Perfect!
If you want to see more information about a given service, then you can use the inspect command against a specific service:
docker inspect mycloud_service
This will output something like:
[
    {
        "ID": "jsuwys8h022806cq3sj9tnerm",
        "Version": {
            "Index": 147
        },
        "CreatedAt": "2017-08-26T12:59:28.755047944Z",
        "UpdatedAt": "2017-08-26T17:42:50.118482128Z",
        "Spec": {
            "Name": "mycloud_service",
            "Labels": {
                "com.docker.stack.namespace": "mycloud"
            },
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "leesylvester/phpinf:latest@sha256:7edb8a3880dc6dd4fb333036bd0bf3cc49bd9cb47555943594e5822734f41e27",
                    "Labels": {
                        "com.docker.stack.namespace": "mycloud"
                    }
                },
                "Resources": {},
                "RestartPolicy": {
                    "Condition": "any",
                    "MaxAttempts": 0
                },
                "Placement": {},
                "ForceUpdate": 0
            },
            "Mode": {
                "Global": {}
            },
            "Networks": [
                {
                    "Target": "mle3s6vn6btshdqbsixrvsnuv",
                    "Aliases": [
                        "service"
                    ]
                }
            ],
            "EndpointSpec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 80,
                        "PublishMode": "ingress"
                    }
                ]
            }
        },
        "PreviousSpec": {
            "Name": "mycloud_service",
            "Labels": {
                "com.docker.stack.namespace": "mycloud"
            },
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "leesylvester/phpinf:latest@sha256:7edb8a3880dc6dd4fb333036bd0bf3cc49bd9cb47555943594e5822734f41e27",
                    "Labels": {
                        "com.docker.stack.namespace": "mycloud"
                    }
                },
                "Resources": {},
                "RestartPolicy": {
                    "Condition": "any",
                    "MaxAttempts": 0
                },
                "Placement": {},
                "ForceUpdate": 0
            },
            "Mode": {
                "Global": {}
            },
            "Networks": [
                {
                    "Target": "mle3s6vn6btshdqbsixrvsnuv",
                    "Aliases": [
                        "service"
                    ]
                }
            ],
            "EndpointSpec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 80,
                        "PublishMode": "ingress"
                    }
                ]
            }
        },
        "Endpoint": {
            "Spec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 80,
                        "PublishMode": "ingress"
                    }
                ]
            },
            "Ports": [
                {
                    "Protocol": "tcp",
                    "TargetPort": 80,
                    "PublishedPort": 30000,
                    "PublishMode": "ingress"
                }
            ],
            "VirtualIPs": [
                {
                    "NetworkID": "xg4d1ejuay7i9p6agp06ps3t8",
                    "Addr": "10.255.0.9/16"
                },
                {
                    "NetworkID": "mle3s6vn6btshdqbsixrvsnuv",
                    "Addr": "10.0.0.6/24"
                }
            ]
        }
    }
]
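That’s a lot of JSON. If you only care about a fragment of it, such as the published ports, you can filter the output. The example below assumes you have jq installed:

# Pull out just the published ports from the inspect output:
docker inspect mycloud_service | jq '.[0].Endpoint.Ports'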
Now, of course, the best part about this is in the execution. Open up your browser and navigate to a node's IP address. Since NGINX is running on all three nodes, you can use any of the IPs you choose. You should then see output similar to:
312a22707da6
10.0.0.7
Unlike when you tested locally, the IP address shown now belongs to the swarm's private overlay network, rather than a local bridge network.
Now, refresh the page a few times. The IP and container id should change, cycling between all three machines as the Docker Engine round-robins the requests to each node.
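You can perform the same check from the command line, too. NODE_IP below is a placeholder for the public IP of any one of your virtual machines:

# Hit the cluster six times; the container ID and IP pair should rotate.
for i in $(seq 6); do curl -s http://NODE_IP/index.php; echo; done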
Congratulations! You have deployed a load-balanced, scaled cloud application to Civo. Believe it or not, this is something very few people achieve, yet with a few simple commands and the simplicity of Civo and Docker, you are now able to easily scale your application to support huge amounts of traffic.
Conclusion
While this was both easy and powerful, this is still a relatively simple application. In future articles, you will learn how to scale applications that handle state, such as data and physical assets, as well as how to properly manage messaging between services within your application. You will also take a closer look at Docker's service orchestration features.