The decline of hardware proxies started a long time ago. They were too expensive and inflexible even before cloud computing became mainstream. These days, almost all proxies are based on software. The major difference is what we expect from them. Until recently, we could define all redirections in static configuration files, but that has changed in favor of more dynamic solutions. Since our services are constantly deployed, redeployed, scaled, and, in general, moved around, the proxy needs to be capable of updating itself with these ever-changing endpoint locations.
We cannot wait for an operator to update configurations with every new service (or release) we deploy. We cannot expect them to monitor the system 24/7 and react to a service being scaled as a result of increased traffic. We cannot hope that they will be fast enough to catch a node failure that results in all services being automatically rescheduled to a healthy node. Even if we could expect such tasks to be performed by humans, the cost would be too high, since an increase in the number of services and instances we're running would mean an increase in the workforce required for monitoring and reactive actions. And even if that cost were not an issue, we are slow. We cannot react as fast as machines can, and that gap between a change in the system and the proxy's reconfiguration would, at best, result in performance issues.
Among software-based proxies, Apache ruled the scene for a long time. Today, its age shows. It is rarely the weapon of choice, due to its inability to perform well under stress and its relative inflexibility. Newer tools like nginx and HAProxy took over. They are capable of handling a vast number of concurrent requests without placing a severe strain on server resources.
Even nginx and HAProxy are not enough by themselves. They were designed with static configuration in mind and require us to add other tools to the mix. An example would be a templating tool like Consul Template, which can monitor changes in the service registry, regenerate the proxy configuration, and reload the proxy.
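To make that concrete, here is a minimal sketch of how Consul Template can keep an HAProxy backend in sync with a service registry. The service name `books-ms`, the file paths, and the reload command are assumptions for illustration; exact flag names vary between Consul Template versions:

```conf
# haproxy.ctmpl -- rendered by Consul Template whenever the set of
# healthy "books-ms" instances (hypothetical service name) changes in Consul
backend books-ms-backend
    mode http{{range service "books-ms"}}
    server {{.Node}} {{.Address}}:{{.Port}} check{{end}}
```

A single `consul-template` process then watches Consul, renders the template, and reloads the proxy whenever the output changes:

```shell
# Watch Consul, render haproxy.cfg from the template above, and reload
# HAProxy on every change (paths and reload command are assumptions)
consul-template \
  -template "haproxy.ctmpl:/etc/haproxy/haproxy.cfg:systemctl reload haproxy"
```

The key point is that no human edits the proxy configuration: registration of a new instance in Consul is enough to get it into the load-balancing pool.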
Today, we see another shift. Typically, we would use proxy services not only to redirect requests but also to perform load balancing among all instances of a single service. With the emergence of the (new) Docker Swarm (shipped with the Docker Engine release v1.12), load balancing (LB) is moving towards the software-defined network (SDN). Instead of performing LB among all instances, a proxy redirects a request to an SDN endpoint which, in turn, performs the load balancing.
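That shift can be illustrated with a few Docker commands. This is a sketch assuming Docker Engine v1.12+ running in swarm mode; the service name `go-demo` and its image are hypothetical:

```shell
# Create an overlay network; Swarm assigns each service on it a virtual IP (VIP)
docker network create --driver overlay proxy

# Run three replicas of the service attached to that network
docker service create --name go-demo --replicas 3 --network proxy go-demo

# Any container on the "proxy" network can now reach the service by name.
# The name resolves to the VIP, and Swarm's SDN distributes requests
# across the three replicas.
```

The proxy's job shrinks accordingly: it forwards requests to a single stable endpoint (the service name), and Swarm's networking layer handles the balancing among instances.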
Software architecture is shifting towards microservices and, as a result, deployment and scheduling processes and tools are changing. Proxies, and the expectations we have of them, are following those changes.
The deployment frequency is becoming higher and higher, and that poses another question. How do we deploy often without any downtime?
The DevOps 2.0 Toolkit
If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book.
The book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable, and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, the design of self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.
In other words, this book envelops the full microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, nginx, and so on. We'll go through many practices and, even more, tools.