Building Cloud Apps with Civo and Docker Part III: Cloud Theory

Written by: Lee Sylvester

This article was originally published on Lee Sylvester's personal blog. With his kind permission, we’re sharing it here for Codeship readers. You can read Parts I and II on the Codeship blog as well.

Having now written a couple of practical articles on getting a cloud cluster online with Civo and Docker, I wanted to take a step back and write about some of the theory of writing apps in the cloud.

Anyone can follow a tutorial and build out a solution for scaling a stateless application, but doing so with state is a whole different ball game. When writing applications, we all benefit from a wealth of available knowledge that outlines tried and tested patterns for solving common problems.

Many of us have been using patterns in our apps for so long that they become second nature. Many of these patterns are even embedded in the programming languages and virtual machines we use, so implementing them is simple. However, with cloud orchestration, things aren’t always so clear cut.

One of the problems with cloud orchestration is simply how relatively new it is. Companies have been scaling applications for a long time, so the practice itself isn't new, but it's only in recent years that scaling applications has become a necessity for smaller organizations.

Quite simply, the number of potential users willing to use our apps is increasing all the time, so it's often better to prepare for success than to face embarrassment when our apps can't meet user demand. As with anything new, the knowledge of such implementations becomes coveted, with developers often believing they can demand more money and acquire employment stability by having crucial information that others don't.

The truth of the matter is both simple and complex. Much like working with analogue electronics, the answer involves very simple concepts which, when combined, can become overwhelming and confusing. In reality, there are no fast solutions for scaling state, but there are a number of obvious truths that, when considered early on, can help you realize a practical solution.

Data Consistency

The first obvious truth concerns data consistency. If a service doesn't require consistency, such as a REST service that handles no state or changeable resources, then it can potentially be scaled out almost without limit and without consequence.

Such situations should be preferred and realized wherever possible. When dealing with stateful services and resources, however, careful planning is recommended.

Identifying State

The first step when building out your data-handling architecture is to identify its state. State is, essentially, any data that differentiates one person's running application from another's -- the data specific to that user or to that user's experience. If another person booted up the app, their experience would be different because the state for their connection is different, or perhaps because the steps they took once the app had loaded were different. If making the same request to the server produced different results, it might be due to state changes.

State and data are really interchangeable terms, though we can choose to see them differently depending on their transience, their uniqueness to a user interaction, and their place within the application. We commonly expect state to disappear after a period of disconnection or a hard refresh, while we normally aim to retain data for future reference.

A typical kind of state is session data. This is data specific to a single user. With monolithic applications, where all or the majority of the stateful components exist on a single server, a user's state is not normally a problem. When users leave your application, the state data will eventually expire and the resources it occupied are released and made readily available to new users.

However, as more and more people use the application simultaneously, resources are depleted and the need to scale becomes apparent. This can be done vertically, by increasing the amount of CPU and memory on the server, or horizontally, by adding more servers. Increasing a server's resources usually only proves sufficient for a short period of time. Eventually, all applications with a continually upward-sliding adoption will need to scale horizontally.

Identifying the Problem

One of the great benefits of modern computing is the economy of hardware. It is now far cheaper to add more hardware than it is to pay for more software development. As such, increasing server resources is normally the first line of attack when tackling user load.

As more users utilize your application, you scale the hardware. The difficulties only set in when you start to move your application onto multiple servers and its associated data must be distributed with it, such as when state is present. That's because, in order to balance server usage, you ideally don't want to tie a unit of data to a specific machine.

Let’s look at this in a diagram:

[Diagram: two servers with a load-balancer]

Now, when users make requests to the load-balancer, that request will be passed to one of the available servers. If the server application maintains state for that user, then the two servers will no longer mirror each other and problems will arise.

For example, Bob the user makes a login request which is routed to server one, and a session is created. If Bob then sends a request which is routed to server two, any data that is held in server one will not be available, which may cause problems for the application and for Bob’s experience.

To combat this issue, we have two options:

  • We can decide to route users to a specific server based on a set parameter of that user.

  • We can decide to share all data between the servers.

The first option is rather simple, as it typically means the application doesn't need updating. We simply provide the load-balancer with some logic that routes each user to a specific server. This might be, for example, to send all users with an odd IP component to server one and all other users to server two.
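As a rough sketch of that first option, the routing decision might look something like the following; the server list, the IP-based hash, and the function names are assumptions made for illustration rather than any particular load-balancer's configuration.

// Sketch: route each user to a fixed server based on a hash of their IP address.
// The server list and hashing scheme are illustrative assumptions only.
const servers = ["http://server-one.internal", "http://server-two.internal"];

function hashIp(ip: string): number {
  // Sum the numeric components of an IPv4 address as a crude but stable hash.
  return ip.split(".").reduce((sum, part) => sum + parseInt(part, 10), 0);
}

function routeFor(ip: string): string {
  // The same IP always maps to the same server, so that user's state stays local.
  return servers[hashIp(ip) % servers.length];
}

console.log(routeFor("192.168.0.7")); // always the same server for this address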

Now, this can work perfectly well in some circumstances and is a legitimate solution, but it can occasionally be unpredictable. What if all users accessing the server happen to have an odd IP component? Then server one will be overutilized and server two will have no traffic. It’s unlikely, but it is possible.

Another problem with this scenario is if user data needs to be shared among other users, such as with chat servers. One could implement a third-party data store for these messages and have the servers poll the store for updates, but that would add an unwanted level of latency and complexity.

The alternative option is to share the state. As data comes into one server, it is immediately (or within an acceptable time frame) passed to the other server and vice versa. This ensures that, regardless of which server a user connects to, they will get the same experience. Thus, requests to the servers can be "round-robined," whereby every subsequent request hits a different server in a cycle that ensures even load throughout.
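A deliberately naive sketch of that sharing approach is shown below; the peer list, the /replicate endpoint, and the in-memory store are assumptions for illustration, not a production replication protocol.

// Naive state sharing: every session write on this node is pushed to every peer.
// The peer URLs and the /replicate endpoint are made up for this illustration.
const localStore = new Map<string, unknown>();
const peers = ["http://node-2.internal", "http://node-3.internal"]; // one entry per other node

async function saveSession(sessionId: string, data: unknown): Promise<void> {
  localStore.set(sessionId, data); // keep a local copy first

  // Fan the write out to every other node so all servers hold the same state.
  await Promise.all(
    peers.map((peer) =>
      fetch(`${peer}/replicate`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ sessionId, data }),
      })
    )
  );
}

With two peers this is cheap; with forty-nine, every single write triggers forty-nine extra requests, which is exactly the congestion problem described below.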

So, we have a solution. Pretty simple, but what about the side effects? Passing data between two servers isn’t so bad, but what if your cluster consists of fifty servers?

Fifty servers means that for every user that connects to a single server, the session data created needs to be dispatched forty-nine additional times. Thus, if one user connects to each of the fifty servers at the same time, all fifty servers will receive forty-nine additional messages at once for every single message sent: 50 × 49, or 2,450, internal messages just to replicate those fifty writes! This can quickly get out of hand and cause congestion in your application, which is hard to resolve.

Optimizing State

When writing monolithic apps, we typically put everything in the same bucket. All the assets, messages, and connections originate in the same place, and we typically don’t consider optimization as being too important.

Throughout the history of computing, the need for optimization has come about in waves. Originally, optimization was necessary to eke out the most power from machines with only a few kilobytes of memory or megahertz of processing speed. When the '90s hit, application development became less constrained, and optimization was less important. Then the internet revolution occurred, and we were faced with the need to optimize applications served on the world wide web, as users typically had to download them over 56k dial-up connections or worse.

The uptake and ubiquity of broadband connectivity have made that less of a requirement, which is ever more apparent given that many JavaScript clients these days can be several megabytes in size. But in the distributed computing age, the need to optimize has reared its ugly head yet again.

Reducing Data

An obvious optimization is to simply serve smaller or fewer packets. Perhaps your data is being served from many endpoints that could be condensed into one? A typical HTTP request carries a header of around 1 or 2 kilobytes, regardless of body size. Therefore, condensing ten requests into one can save around 10 to 20 kilobytes of header overhead per interaction, which adds up to big savings across many millions of users.
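One hedged way to do this is a batched endpoint, sketched below; the /api/batch route and its payload shape are invented for the example.

// Instead of ten separate GETs, each paying one or two kilobytes of headers,
// issue a single batched request. The /api/batch endpoint is an assumption
// made for this illustration.
async function fetchMany(ids: string[]): Promise<Record<string, unknown>> {
  const response = await fetch("/api/batch", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ids }), // one header block instead of ten
  });
  return response.json();
}

// One round trip now fetches all ten resources.
fetchMany(["user:1", "user:2", "user:3"]).then((items) => console.log(items));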

Reducing data verbosity is also valuable. Perhaps a large packet of configuration data can be reduced to a simpler value?

The following JSON packet, for example:

{
  "showHeader": true,
  "publicFriendsList": false,
  "showAds": true,
  "basicUI": false
}

may be condensed to a numerical value, such as:

{"conf": 5}

where the bits of the integer's binary representation correspond to the truth value of each configuration setting, with the least-significant bit mapping to the first key:

5 == 0101 (binary) == [basicUI: false, showAds: true, publicFriendsList: false, showHeader: true]
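A small sketch of how that packing and unpacking might look follows; the key order (least-significant bit first) and the helper names are assumptions made for the example.

// Pack the boolean config into one integer and unpack it again.
// Key order is an assumption: showHeader maps to bit 0, basicUI to bit 3.
const keys = ["showHeader", "publicFriendsList", "showAds", "basicUI"] as const;

function packConfig(config: Record<string, boolean>): number {
  return keys.reduce((bits, key, i) => bits | (config[key] ? 1 << i : 0), 0);
}

function unpackConfig(bits: number): Record<string, boolean> {
  const config: Record<string, boolean> = {};
  keys.forEach((key, i) => {
    config[key] = Boolean(bits & (1 << i));
  });
  return config;
}

// packConfig({ showHeader: true, publicFriendsList: false, showAds: true, basicUI: false }) === 5
// unpackConfig(5) restores the original four booleans.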

Divide and Conquer

Another optimization is to identify the different types of data in your application and to serve each appropriately. In recent years, microservices have become very popular partly for this reason. Unlike with monolithic apps, there is no reason why your image assets should be served from the same location as your REST or Websocket endpoints. Each can be served independently and can be cached to decrease processing costs. For example:

  • Your images, video, and audio files can be served over a CDN, which both caches and load-balances your asset files.

  • REST endpoints can make use of ETags to reduce needless CPU cycles and database trips (see the sketch after this list).

  • Websocket servers can be placed on a separate server to maximize available ports and reduce the number of nodes in the cluster.

  • CPU-intensive processes, such as video composition and image generation, can make use of serverless technologies, which reduce cost by utilizing CPU resources only when necessary.
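To illustrate the ETag point from the list above, here is a minimal Express-style sketch; the route, the version lookup, and the data-access stubs are assumptions for the example rather than a recommended implementation.

// ETag check on a REST endpoint: if the client already holds the current
// version, skip the database read and return 304 with no body.
import express from "express";

// Hypothetical data-access stubs standing in for real database calls.
async function getProfileVersion(id: string): Promise<number> { return 3; }
async function getProfile(id: string) { return { id, name: "Bob" }; }

const app = express();

app.get("/api/profile/:id", async (req, res) => {
  const version = await getProfileVersion(req.params.id);
  const etag = `"profile-${version}"`;

  if (req.get("If-None-Match") === etag) {
    res.status(304).end(); // client cache is still valid: no CPU or database cost
    return;
  }

  res.set("ETag", etag);
  res.json(await getProfile(req.params.id));
});

app.listen(3000);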

Cache and Queue

Shuttling data around your application can be expensive. Even sending data to and retrieving it from a database in a monolithic application carries a cost. Therefore, reducing these trips can speed up your application and reduce strain on the server.

Caching data for read-heavy resources keeps a readily available copy in memory, keyed against the associated request paths. This isn't solely beneficial for database data, but also for assets and generated data. At Xirsys, we implemented a distributed cache on top of a read-heavy database and reduced calls to it by over 90 percent. That's a huge savings!
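A minimal read-through cache along those lines might look like this sketch; the key scheme, the time-to-live, and the fetchFromDatabase stub are assumptions, not the actual Xirsys implementation.

// Read-through cache: answer from memory when possible, hit the database only on a miss.
const cache = new Map<string, { value: unknown; expires: number }>();
const TTL_MS = 60_000; // assumed time-to-live for cached entries

async function fetchFromDatabase(key: string): Promise<unknown> {
  return { key, loadedAt: Date.now() }; // stand-in for a real database query
}

async function read(key: string): Promise<unknown> {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) {
    return hit.value; // served from memory: no database round trip
  }
  const value = await fetchFromDatabase(key);
  cache.set(key, { value, expires: Date.now() + TTL_MS });
  return value;
}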

On the flip side, implementing a queue for write-heavy resources reduces calls to a given store, and as a consequence, reduces bandwidth and CPU cycles.

Depending on the use case, it is quite plausible to combine both a cache and a queue, re-serving data not yet committed to its data store.
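A rough sketch of that combination follows: writes land in a cache immediately and are queued, then flushed to the store in batches, so reads can be answered from data not yet committed. The flush interval and the writeBatchToStore stub are assumptions for illustration.

// Write-behind sketch: the cache makes uncommitted data readable right away,
// while the queue batches writes to reduce trips to the data store.
const cache = new Map<string, unknown>();
const queue: Array<{ key: string; value: unknown }> = [];

function write(key: string, value: unknown): void {
  cache.set(key, value);      // visible to readers immediately
  queue.push({ key, value }); // committed to the store later
}

function read(key: string): unknown {
  return cache.get(key); // may return data the store has not seen yet
}

async function writeBatchToStore(batch: Array<{ key: string; value: unknown }>): Promise<void> {
  // stand-in for a bulk insert or update against the real data store
}

setInterval(async () => {
  if (queue.length === 0) return;
  const batch = queue.splice(0, queue.length); // drain the queue
  await writeBatchToStore(batch);              // one trip instead of many
}, 5_000);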

Positioning State

Distributed application and microservice topologies typically form tree-like structures. These can be read both top-down and bottom-up. For instance, a tree may work from a single user out to multiple load-balancers, or from multiple service nodes down to a single data store.

When contemplating state distribution, you typically want to position it at the narrowest node level, that is, the tier with the fewest nodes. Storing state within a cluster of microservice nodes presents the consistency issues noted above, whereas keeping that state in the client or in a data store greatly simplifies data management.

Passing data to and from its resident node only when required is preferable to implementing a consistency strategy at scale.

When working with complex microservice architectures, the overall topology may consist of numerous clusters of microservices, each with its own requirements for shared state. In such circumstances, implementing a purpose-built store for each cluster may be practical.

Brewer’s Theorem

Eric Brewer, a computer scientist at UC Berkeley, is well known for Brewer's theorem, alternatively called the CAP theorem. It states that a distributed data store can realistically provide at most two of the following three guarantees:

  • Consistency: the guarantee that reading data from the cluster returns the most recent value or an error response.

  • Availability: the guarantee that every request receives a response, even if the data returned is not up to date.

  • Partition tolerance: the guarantee that the service continues to operate even when the network between nodes fails and the cluster splits into partitions.

When considering your architecture, you should decide which two of these requirements are necessary for the associated services.

Let's explore some examples.

Communication server

Numerous users connected to a service that distributes messages between them, such as a websocket server, would typically favor availability and partition tolerance. In this circumstance, we forgo consistency, as the real-time element is less important than delivering the data and keeping the service available.

Data store

A database, such as Postgres or MySQL, may typically favor consistency and partition tolerance. This is a little tongue-in-cheek and may be disputed. A good argument can be found here.

However, considering the circumstance of a single database node, it would be preferable to ensure the data is always current and that it continues to function during a network outage, even if it isn’t accessible by your application 100 percent of the time.

Consistent and available

Some databases are claimed to be CA compliant. According to the CAP theorem, it's possible for a service to be consistent and available while sacrificing partition tolerance, but I've never really understood how a service can legitimately be available yet not current and still be consistent at the same time.

The answer is that all nodes serve old data until the new data has successfully propagated across all nodes; but ensuring this requires the same fundamental rules as ensuring consistency in the first place. Get your head around that little tidbit!

Pushing and Pulling Data

In times of old, passing data from server to client or from database to server typically happened in the same way: by polling, or pulling, the data. In modern times, however, we have more options available to us.

Reactive technologies

Reactive typically refers to responding to changes to data as they occur. Instead of repeatedly checking the server for updates to a given data structure, the data is instead pushed when and only when it is updated. This reduces needless CPU cycles in the consumer.

Reactive technologies are commonly used in client applications, such as with ReactJS or, more specifically, Rx libraries, and are truly reactive when combined with socket technologies such as Websockets.
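As a simple sketch of push over poll, a client might subscribe once over a Websocket and react only when the server sends a change; the endpoint URL and message shape here are assumptions for illustration.

// Push instead of poll: this handler runs only when data actually changes,
// so the client burns no cycles repeatedly asking whether anything is new.
// The URL and message shape are assumptions for illustration.
const socket = new WebSocket("wss://example.com/updates");

socket.addEventListener("message", (event) => {
  const update = JSON.parse(event.data) as { key: string; value: unknown };
  applyUpdate(update);
});

function applyUpdate(update: { key: string; value: unknown }): void {
  console.log(`"${update.key}" changed to`, update.value);
}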

Data store feeds

Modern data stores, such as CouchDB, now provide a feed capability that notifies interested parties when data is updated. Thus, when one service sends data to the data store, all other services listening to the feed will automagically be informed.

This is a fantastic resource for cache cluster development, ensuring all writes are propagated to all nodes in a cache in the shortest possible time, while providing adequate protection and optimization for data reads.
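A rough sketch of following such a feed appears below. CouchDB exposes its changes through a _changes endpoint; the database URL, the long-poll option, and the cache update shown here are assumptions for illustration, so check the CouchDB documentation for the exact feed parameters.

// Follow a CouchDB-style _changes feed so this cache node learns about writes
// as they happen. URL and options are assumptions; consult the CouchDB docs.
const DB_URL = "http://localhost:5984/sessions";
const cache = new Map<string, unknown>();

async function followChanges(): Promise<void> {
  let since: string | number = "now";
  for (;;) {
    // Long-poll: the request returns as soon as at least one change is available.
    const res = await fetch(`${DB_URL}/_changes?feed=longpoll&include_docs=true&since=${since}`);
    const body = await res.json();
    for (const change of body.results) {
      cache.set(change.id, change.doc); // refresh this node's copy immediately
    }
    since = body.last_seq;
  }
}

followChanges().catch(console.error);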

Conclusion

As you may have gathered by now, there are no hard and fast rules for distributed application development. The trick is to plan and to refactor as much as possible.

As your application grows, consider ways to refine and optimize your data-passing needs and reduce, reduce, reduce. What's more, do not be afraid to utilize third-party tools. Keep it simple and do NOT reinvent the wheel. Chances are, the problems you are facing are the same problems faced and solved by many others in the past.

Don't forget to check out the other posts in the series.
