Scaling Feature Flags: Infrastructure Considerations
This is the second part of a blog series that will explore what you need to consider when scaling your feature flag management tool. We’ll look at common challenges around scaling and what you can do to address them, starting with infrastructure-related concerns. You can catch up by reading the first part on the early stages of feature flag adoption.
If you’re like most development teams, your first experiments with feature flags were probably one-offs aimed at solving specific problems, started by a few individuals or a small team. At a certain point, though, word begins to spread about the benefits of feature flagging.
So you build your own tool to manage your growing number of feature flags, perhaps with a configuration file or a database table. As more people adopt your homegrown system for a wider variety of use cases, growing pains appear. You might not be able to tell where a certain feature flag lives or who owns it—or you know where it is, but you can’t show it to anyone outside the IDE. You may find that you can’t support experimentation in certain regions. Maybe you’re too busy maintaining flag integrations to ship features, or you’ve been asked to support a new SDK or integration. These are all symptoms of a larger issue: you need to scale. At this inflection point, you have to decide whether to make a significant investment in your homegrown system or migrate to an external platform.
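To make the starting point concrete, here is a minimal sketch of the kind of homegrown flag check a team might begin with—flag names and the inline JSON "config file" are hypothetical examples, not a recommended design:

```python
import json

# A minimal homegrown flag store: a JSON config mapping flag names to values.
# In practice this would be read from a file or a database table.
FLAGS = json.loads('{"new-checkout": true, "beta-search": false}')

def is_enabled(flag_name, default=False):
    """Return the flag's value, falling back to a default if it is unknown."""
    return FLAGS.get(flag_name, default)

print(is_enabled("new-checkout"))  # True
print(is_enabled("some-unknown-flag"))  # False (falls back to default)
```

This works fine for a handful of flags, but it has no notion of ownership, environments, or targeting—exactly the gaps that surface as usage grows.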
Whichever way you go, you’ll need to plan ahead. A few common themes arise as organizations scale their feature flag usage. This blog series groups them into three categories: infrastructure, security, and visibility. We’ll start with infrastructure.
SDK Support and Maintenance
Growing pains around SDK support and maintenance are among the thorniest to manage. Teams usually build the minimum viable product (MVP) of a homegrown solution around a specific language. Later, they may add new tech stacks to expand their product’s reach, or another team within the company arrives at the flagging solution with an entirely different set of languages. Either way, an MVP built around one specific use case must then flex to fit another.
Teams need to make a tradeoff—the platform should be abstract enough so that it only has to be slightly modified, but specific enough that it can be tailored to the nuances associated with the new languages added. For example, if you want to run a flag on the client side, you’ll probably want to ensure that flag values can’t be sniffed out on a network call.
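One common way to keep flag rules out of client-side network traffic is to evaluate flags on the server and send the client only the resulting values. The sketch below illustrates the idea; the rule structure, flag name, and user fields are assumptions for illustration:

```python
# Server-side evaluation: targeting rules (e.g. "internal users only") stay on
# the server, and the client receives only flag name -> boolean results.
RULES = {
    "beta-dashboard": lambda user: user.get("email", "").endswith("@example.com"),
}

def evaluate_for_client(user):
    """Return evaluated flag values only--never the rules themselves."""
    return {name: rule(user) for name, rule in RULES.items()}

print(evaluate_for_client({"email": "dev@example.com"}))
print(evaluate_for_client({"email": "visitor@other.com"}))
```

A network observer can see that `beta-dashboard` is on or off for this user, but not why—the targeting logic never crosses the wire.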
With each new language added, you need to acknowledge the opportunity cost. It’s important to ensure you have the time and resources allocated to maintain the updated SDKs and provide support to debug any issues.
Multiple Environment Flag Configurations
For teams that are just getting started, a self-built flag configuration is simple: deploy code on the server, then edit the configuration. But with more sophisticated deployment pipelines, configuration starts to consume more and more time. Each environment has its own database and connection strings, as well as its own flags. In some environments you might mock external dependencies; in others, you’ll want parallel sandbox environments.
With homegrown feature flag solutions, this gets very complex very fast. If you’re toggling database access or external dependencies and flags, you need to manage them across production, staging, QA, testing, development and every environment in between. Even with automation, it’s easy to lose track.
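A sketch of what this multiplication looks like in a homegrown system—the environment names, flag names, and the `APP_ENV` variable are illustrative assumptions. Each environment carries its own copy of every flag, which is where the bookkeeping burden comes from:

```python
import os

# Hypothetical per-environment flag configuration. Every new flag must be
# added (and kept in sync) in each environment's section.
CONFIG = {
    "production":  {"new-checkout": False, "mock-payments": False},
    "staging":     {"new-checkout": True,  "mock-payments": False},
    "development": {"new-checkout": True,  "mock-payments": True},
}

def flags_for(env=None):
    """Look up the flag set for an environment, defaulting via APP_ENV."""
    env = env or os.environ.get("APP_ENV", "development")
    return CONFIG[env]

print(flags_for("staging")["new-checkout"])  # True
print(flags_for("production")["mock-payments"])  # False
```

With three environments and a dozen flags this is already 36 values to keep straight; add QA, testing, and sandboxes and the drift risk grows accordingly.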
Current & Future Feature Flag Use Cases
If a team initially builds a feature flag management solution themselves, the use case they address—the one that will define their MVP—is going to be relatively simple. They won’t need complex capabilities at that point. At first, they may just need a boolean toggle. This is easy enough to handle with a config file or database table alone, without getting a fancy interface involved. You only want to change a single value.
As the number of flags grows, however, your team or another one may need something an “on or off” boolean flag can’t provide. You may need multivariate flags that take on more than two possible states. You may want user segmentation to target an audience based on certain properties. You may decide to run A/B or multivariate tests, routing slices of traffic to different variations. You may want gradual rollouts or canary deployments that expose a change to a small percentage of users first. Each additional use case requires new infrastructure—and with it, corresponding development costs.
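Even the simplest of these, a gradual rollout, needs more machinery than a boolean lookup. One common approach (sketched here with illustrative flag names) is to hash each user into a stable bucket from 0–99 and enable the flag for buckets below the rollout percentage:

```python
import hashlib

def in_rollout(flag_name, user_id, percentage):
    """Deterministically assign a user to a bucket 0-99 and compare it
    against the rollout percentage."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket per (flag, user) pair
    return bucket < percentage

# The same user always lands in the same bucket, so their experience
# stays consistent as the percentage is dialed up.
print(in_rollout("new-checkout", "user-42", 100))  # True: 100% includes everyone
print(in_rollout("new-checkout", "user-42", 0))    # False: 0% includes no one
```

Hashing on the flag name as well as the user ID means different flags bucket users independently, so one 10% rollout doesn’t keep hitting the same 10% of users.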
CI/CD & Feature Flag Integrations
A lot of scaling considerations revolve around planning for the future. Just as you may include or eventually exclude different tech stacks from your product, you may require—or no longer need—various tools to integrate with your feature flag solution. At the very minimum, you’ll want to implement a REST API to get and alter information. This is especially useful when integrating a CI/CD process.
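As a sketch of what that minimum looks like, here is the kind of request a CI/CD step might construct to flip a flag after a deploy. The endpoint path, payload shape, and bearer token are assumptions for illustration, not any real product’s API:

```python
import json
import urllib.request

def build_flag_request(base_url, flag_key, enabled, token):
    """Build a PATCH request that sets a flag's state via a hypothetical REST API.
    A CI/CD step would pass the result to urllib.request.urlopen()."""
    body = json.dumps({"enabled": enabled}).encode()
    return urllib.request.Request(
        f"{base_url}/api/flags/{flag_key}",
        data=body,
        method="PATCH",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_flag_request("https://flags.internal.example", "new-checkout", True, "secret-token")
print(req.get_method(), req.full_url)
```

With an API like this in place, a pipeline can enable a flag only after smoke tests pass, or kill it automatically when error rates spike.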
If you’re trying to shorten feedback loops and iterate quickly, you’ll want to rely on a specialized, scalable analytics solution and pass flag values to that platform. The idea is to keep the integration at the same level of abstraction as your solution—so you can “plug and play,” and the flag data you forward sees minimal changes.
As you begin to think about scaling your feature flag tool, ask yourself questions like:
How quickly do you need to deliver flags?
Where are your users located?
How many requests are being made?
What analytics and monitoring tools do you need to integrate with?
How can your SaaS architecture support these needs?