Serverless Apps and Rollout Feature Flags

Written by: Michael Neale

Serverless applications (ones that you would deploy to AWS Lambda or Google Cloud Functions or similar) are often the most convenient way to ship a modest piece of functionality: perhaps a web app or event handler.

In this post, I will cover how simple it is to use feature flags in a serverless environment, and why Rollout works very well for them by design.

An example

After cloning this example from GitHub, you can deploy it as easily as:

gcloud functions deploy my_function --trigger-http --runtime "python37"

Once you have set up your environment id, this will give you an HTTPS URL you can hit and then control via a feature flag experiment (Rollout quickstart here). It is very simple in this case with Google Cloud Functions, but it will work elsewhere too.
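For example, you could exercise the deployed function with a few lines of Python (the URL below is just a placeholder; use the HTTPS URL gcloud prints for your project and region):

# Placeholder URL: substitute the one gcloud prints after deployment
import urllib.request

url = "https://REGION-PROJECT.cloudfunctions.net/my_function"
print(urllib.request.urlopen(url).read().decode())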

The first time you access it you probably won't notice, but there is a small delay (usually subsecond) as the function is warmed up and the flags are loaded. The biggest hit will be the preparation of the function's dependencies and environment.

It is that simple - and for the most part, you can follow that pattern and there isn’t much more to it.

Take a closer look at this sample function, at where you set up Rollout vs. where you use the flags:

At the top of your function code (which is uploaded along with requirements.txt in this case) you have the imports and the setup code to initialize Rollout. It is important that this goes here, not in the function below (which actually does the interesting work and uses your flags), as you only want to initialize Rollout once, when the function is initialized by the cloud function service you are using. In AWS Lambda this is sometimes called the Execution Context: all the stuff that needs to be loaded before your function can be invoked, and which is reused between different invocations of the function (there is more you can read on this in the links later on if you are curious).
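To make that concrete, here is a minimal sketch of how such a function might be laid out (the import paths, the Rox.register namespace and the flag name are assumptions based on the Rollout Python SDK; the actual sample in the repository may differ slightly):

# main.py - module-level code runs once, when the execution context is created
from rox.server.rox_server import Rox              # assumed SDK import path
from rox.server.flags.rox_flag import RoxFlag      # assumed SDK import path

class Flags:
    def __init__(self):
        # Default value, used until the remote configuration has been applied
        self.show_new_greeting = RoxFlag(False)

flags = Flags()
Rox.register('demo', flags)                         # 'demo' namespace is illustrative
# Block until the flag configuration has been fetched
Rox.setup('..your environment id here..').result()

def my_function(request):
    # Per-invocation code: just read the flag, no setup happens here
    if flags.show_new_greeting.is_enabled():
        return 'Hello from the new greeting!'
    return 'Hello!'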

This is really all most people need to know - follow the above pattern and you will be making good use of the serverless pattern along with feature flags, without any real downside.

Rollout flag config loads fast

One of the nice aspects of the stateless design of the Rollout service is that when you load flags in your application, there is no “computation” going on - the flags are loaded from heavily cached and nearby CDN nodes (not on the Rollout servers). This speeds up and simplifies loading times a lot. Should flags change, updates are pushed to your application as needed.

The Rollout docs have a fairly comprehensive guide on the “update flow” for updating flags on the client:

On the far right of that flow is the Rollout storage service, which still doesn’t require “computation”: even in the case where your flags aren’t stored locally somehow, they are still coming from a cache. That is the “worst” case (which is pretty good!).

So for most people the example above is fine. There is a small (subsecond) overhead at worst for loading the flags, but if this is too much for you, read on: there is more we can do to optimize.

On cold starts and getting the fastest load times

As mentioned in the above example, when a function first loads (and in most cloud function services, that means each time it is loaded for a concurrent invocation, as a function only serves one request at a time) there is a “cold start” where the execution context is loaded. Most of the cost here comes from dependencies and the environment, but the Rollout flag config could add a fraction of a second to this (just the first time for that concurrent execution) using the example above. If this is too much for your use case, there are a few options:

Option 1: Load Rollout asynchronously

This is the easiest: remove the “result()” call from the Rollout setup so it looks like:

Rox.setup("..id here..")

This means that the loading of the flags happens in the background. The downside is that when your function is first invoked, it may see the default value of the flag, not what it has been set to remotely: this is not always what you want, but if it works for you, great. This could work if your function does a bunch of things before it gets to checking any flags. If this does not work for you, the next option might.
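A rough sketch of the asynchronous variant, reusing the same assumed SDK names as the sketch above:

# Module level: kick off the flag fetch but do not wait for it
flags = Flags()
Rox.register('demo', flags)
Rox.setup('..your environment id here..')   # no .result(): returns immediately

def my_function(request):
    # On the very first invocation this may still return the default (False)
    if flags.show_new_greeting.is_enabled():
        return 'Hello from the new greeting!'
    return 'Hello!'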

Option 2: Use Config as Code for embedded experiments/flags

Another interesting capability of Rollout is Config as Code: this means that any time you change the state of flags in an “experiment”, the change can be stored back to a source code repository. In App Settings > Integrations you set up a connection to a repository:

The flags will be stored as (YAML) files in your (GitHub) project: the Production Rollout environment is stored on the controller branch, and if you have a Staging environment, it is stored on the Staging branch, and so on.

Every flag change can then trigger a CI/CD pipeline to republish your function along with the new flag settings. Rollout calls this Embedded Experiments, and there are build tool plugins available to automatically package up the Rollout config for you, so that when you then use your flags at runtime, the SDK looks locally for the embedded config first.

This is a perfect match for serverless environments if you absolutely must eliminate startup latency, but it is obviously a little more work than the other options above.

Sign up for Rollout here.

Additional resources:
