Maximize Infrastructure Utilization with Dynamic Resources

Written by: Electric Bee
3 min read

One of the key topics discussed during yesterday's #c9d9 episode on resource utilization was dynamic resources.

Dynamic resources are virtual hosts or containers that are spun up and torn down as needed during the automated software build/test/deploy processes. These processes are typically coded into an orchestration tool like CloudBees Flow that manages dynamic resources via API calls to AWS, OpenStack, VMware, Docker, etc.
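
To make that concrete, here's a minimal sketch of what one of those orchestration steps might look like against the AWS API using boto3. The AMI ID, instance type, region, and the run_build_on() callback are placeholders for this illustration, not anything specific to CloudBees Flow:

```python
# Sketch: allocate a throwaway EC2 build host, use it, and always tear it down.
# The AMI ID, instance type, region, and run_build_on() callback are placeholders.
import boto3

def run_on_dynamic_resource(run_build_on):
    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # Spin up one VM from a known-good base snapshot.
    instance = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # your base AMI
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
    )[0]
    instance.wait_until_running()
    instance.reload()  # refresh attributes such as the public IP

    try:
        run_build_on(instance.public_ip_address)  # hand off to the build/test steps
    finally:
        # Tear the resource down whether the build passed or failed.
        instance.terminate()
        instance.wait_until_terminated()
```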

CloudBees works with several companies that have invested heavily in private and/or public clouds, and needless to say, they all love the idea of an elastic infrastructure running at maximum utilization.

Here are some pros and cons of dynamic resources, based on my experience:

Benefits:

  • Consistency – Static resources never stay consistent. Automated processes might inadvertently change the state of a host. People might log in and manually install or modify third-party libraries. If you spin up 10 identical VMs to serve as your build farm, chances are they'll all be a little different after a few months. When you use dynamic resources, this problem goes away.

  • Maximum utilization – Depending on the time of day or the stage of the release cycle, the number of hosts you need varies quite a bit. You could spin up the maximum number of hosts you'd ever need and leave them running, but (1) you have to guess what that number is, and (2) you'll end up with a lot of idle time and wasted compute power. Allocate them dynamically instead. Need 30 OpenStack VMs to run a suite of regression tests? Spin them up, run the tests, save the reports, and throw them away. Need 2 Docker containers to deploy the web application you just built? Spin them up, deploy the app, publish to the Docker repository, and throw them away. (A minimal sketch of this spin-up/tear-down pattern follows this list.)

  • Cost savings – Whether you use a public cloud and pay for CPU hours or build an on-premises cloud on top of physical servers, you're spending money on compute power. You're also spending money when people (dev, QA, ops, release) manage dozens, hundreds, or thousands of static resources instead of a few snapshots. Save money by making your cloud elastic and easy to manage.
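
As a rough illustration of the "spin up, use, throw away" pattern from the utilization point above, here's a minimal sketch using the Docker SDK for Python. The image name, test command, and workspace path are assumptions for the example (the image is assumed to have the test runner baked in):

```python
# Sketch: run a test suite in a throwaway container, keep the logs, discard the container.
# The image, command, and host workspace path are placeholders.
import docker

client = docker.from_env()

container = client.containers.run(
    "registry.example.com/regression-tests:latest",  # placeholder test image
    command="pytest -q /workspace/tests",
    volumes={"/srv/ci/workspace": {"bind": "/workspace", "mode": "rw"}},
    detach=True,
)
try:
    result = container.wait()                # block until the tests finish
    report = container.logs().decode()       # capture output before teardown
    print("exit code:", result["StatusCode"])
finally:
    container.remove(force=True)             # throw the container away
```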

Gotchas:

  • Base snapshots – You want these to be lightweight, but you also don't want to re-run the same configuration steps on every spin-up if you can avoid it. It's a balancing act that varies from case to case. I've found a happy medium to be base VMs with the latest OS updates along with pre-installed agents for the orchestration, configuration management, and monitoring tools. (A sketch of baking a base image this way follows this list.)

  • Workspaces – Ideally, you want the workspace to sit on a network share, so that log files, binaries, and reports remain available after the dynamic resources are torn down. If you can't use a network share, consider using SCP or an artifact repository to save everything that needs to be retained (an SCP sketch also follows this list).

  • Investment – You have to invest time in designing, architecting, and building your SDLC to use dynamic resources. It's easier to spin up some VMs and keep a static set of resources: you avoid the gotchas, but you also miss out on the benefits. Treat your SDLC like you treat your code; the upfront investment will pay dividends down the road.
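
For the base-snapshot gotcha, the "bake once, reuse many times" step can be as simple as snapshotting a configured host into an image. Here's a minimal sketch with boto3, where the instance ID and image name are placeholders:

```python
# Sketch: bake a base snapshot from a "golden" host that already has OS updates
# plus the orchestration, configuration-management, and monitoring agents installed.
# The instance ID and image name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="build-base-with-agents",
    Description="Latest OS updates + pre-installed orchestration/CM/monitoring agents",
)
print("New base AMI:", response["ImageId"])
```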
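
And for the workspace gotcha, when a network share isn't available, the fallback can be a copy step that runs right before teardown. A minimal SCP sketch; the builder account, remote workspace path, and directory names are assumptions:

```python
# Sketch: pull logs, binaries, and reports off a dynamic resource before it is torn down.
# The "builder" account, remote workspace path, and directory names are placeholders.
import subprocess

def save_artifacts(host, remote_workspace="/home/builder/workspace", archive_dir="/mnt/archive"):
    for subdir in ("logs", "reports", "artifacts"):
        subprocess.run(
            ["scp", "-r", f"builder@{host}:{remote_workspace}/{subdir}", archive_dir],
            check=True,  # fail the step loudly if anything can't be copied
        )
```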
