Maintain Infrastructure with Elastic Beanstalk and CloudFormation

Written by: Florian Motlik

UPDATE: As of January 1, 2017, we rebranded our hosted CI platform for Docker from “Jet” to what is now known as “Codeship Pro”. Please be aware that the name “Jet” is now only used for our local development CLI tool. The Jet CLI is used to locally debug and test builds for Codeship Pro, as well as to assist with several important tasks like encrypting secure credentials.

The multitude of services and ways to build infrastructure on AWS can easily lead to a hand-crafted snowflake of a system that is hard to maintain over the long term. Often you don’t know who introduced which change or how to reproduce it in a staging environment to test future changes.

To counter this problem, AWS developed CloudFormation to keep your infrastructure definition under source control and make it easy to evolve. This gives your team the ability to test changes to your infrastructure on staging accounts and review them before deployment.

To introduce you to CloudFormation, let's walk through setting up an application on Elastic Beanstalk that also sends its logs to CloudWatch Logs. In a follow-up post, we'll set up additional alarms through CloudFormation on your Elastic Beanstalk application and load balancers.

Introduction to CloudFormation Templates

CloudFormation is configured through JSON template files that you can store in your repository. You can define resources, dependencies between resources, and parameters so you can change the infrastructure during setup. All of that is also visible in the CloudFormation UI.

For a deep dive into the CloudFormation configuration, check out the Getting Started section of their documentation. Once you have a good understanding of the basics, which aren’t too complicated, the best way to learn the specifics is by going through the Template Reference that lists the different resource types and all their options.

The JSON file syntax can be pretty verbose. To make this easier, the CloudFormation team introduced a designer UI. Another option is to use one of the DSLs built on top of the JSON format that make templates easier to write. For Python, there is the Troposphere project, or Cfndsl if you prefer a Ruby syntax.

I've used Cfndsl for this post, but since it stays very close to the JSON syntax, it should be easy to follow.
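As a rough orientation, a Cfndsl template is just a Ruby file wrapped in a CloudFormation block that gets rendered to JSON. The following skeleton is only an illustrative sketch; the parameter and resource names here are placeholders and not part of the demo template:

# template.rb -- minimal Cfndsl skeleton; names are illustrative only
CloudFormation do
  Description 'Minimal example template'

  # A parameter you can override when creating the stack
  Parameter('EnvironmentName') do
    Type 'String'
    Default 'staging'
  end

  # A single resource; the snippets below follow the same pattern
  S3_Bucket('ExampleBucket')
end

Running the cfndsl command on such a file (for example, cfndsl template.rb > template.json) prints the JSON template that CloudFormation actually consumes.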

Setting Up the Elastic Beanstalk Application and Environment

I’m using the example repo flomotlik/cloudformation-elastic-beanstalk-blog-demo. It has a detailed description of how to deploy it with the Codeship Jet CLI (the CLI tool for our CI Platform for Docker).

The repository has a small Sinatra application that responds with Hello World to any request.
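For reference, such an app is only a few lines of Ruby. A minimal version looks roughly like this (the actual file in the repository may differ slightly):

# app.rb -- minimal Sinatra app; the repo's version may differ slightly
require 'sinatra'

# Answer every GET request, regardless of path, with the same greeting
get '/*' do
  'Hello World'
end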

Template Walkthrough

You can check out the full template that results from the following steps in the repository. To follow along, strip the full template down to just the parts shown in each step, then add the remaining pieces back step by step until you arrive at the full template again.

Let's begin by setting up a simple Elastic Beanstalk application and environment.

ElasticBeanstalk_Application('DemoDockerApplication') do
  Property('Description', 'AWS Elastic Beanstalk Application')
end

ElasticBeanstalk_Environment('DemoEnvironment') do
  DependsOn %w(DemoDockerApplication)
  ApplicationName Ref('DemoDockerApplication')
  Description 'AWS Elastic Beanstalk Environment'
  SolutionStackName '64bit Amazon Linux 2015.09 v2.0.4 running Docker 1.7.1'
end

Once this stack is created (you can follow its progress in the CloudFormation UI), you should be able to see the demo page of the Elastic Beanstalk Docker stack.

Before we can deploy our own application, we need to create an S3 bucket that will hold our deployable artifact. Add the following to the template and rerun the deployment:

S3_Bucket('ElasticBeanstalkDeploymentBucket')

This creates a uniquely named S3 bucket that we will use for the deployment. In codeship-steps.yml, replace the placeholder with the name of the bucket that was just created; after that, the deployment can be run through Jet. We've also added outputs to the template so it's easier to spot the S3 bucket name and the URL of our Elastic Beanstalk environment.
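Those outputs look roughly like the following in Cfndsl. This is a sketch and the output names may differ from the ones in the example repo; the environment URL comes from the EndpointURL attribute of the Elastic Beanstalk environment:

# Outputs that surface the bucket name and environment URL in the CloudFormation UI
# (sketch -- output names may differ from the example repository)
Output('DeploymentBucket') do
  Value Ref('ElasticBeanstalkDeploymentBucket')
end

Output('EnvironmentURL') do
  Value FnGetAtt('DemoEnvironment', 'EndpointURL')
end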

Now you can run the Elastic Beanstalk deployment as described in the repository's README.

If you open up the URL of your Elastic Beanstalk environment, you should see Hello World reported back.

Sending Logs to CloudWatch Logs

When building complex infrastructure, you need a good logging and metrics system that gives you insight into what is going on. Having the logs from all of your applications in one central place is a big help for accomplishing this.

CloudWatch is an AWS service for handling metrics and logs from your applications. It automatically provides metrics for your EC2 instances and Elastic Load Balancers, but you can also push custom metrics into it. Additionally, you can send all of your logs to it and create metrics from those logs whenever specific strings show up in them. This gives you a lot of flexibility for setting up and automatically managing your infrastructure.

You can even automatically start or stop instances based on alarms, including alarms on your custom metrics, which we will do in a follow-up blog post.

Add the following configuration to the template file. It sets up a separate log group, which we'll use in the second post of this series to set up metrics and alerts. It also creates an IAM role and instance profile for our Elastic Beanstalk application so that its instances are allowed to talk to the CloudWatch Logs service.

Logs_LogGroup('ElasticBeanstalkMainLogGroup')

IAM_Role('ElasticBeanstalkLoggingRole') do
  AssumeRolePolicyDocument(
    Version: '2012-10-17',
    Statement: [{
      Effect: 'Allow',
      Principal: {
        Service: ['ec2.amazonaws.com']
      },
      Action: ['sts:AssumeRole']
    }]
  )
  Path '/'
  Policies [{
    PolicyName: 'ElasticBeanstalkLogging',
    PolicyDocument: {
      Version: '2012-10-17',
      Statement: [{
        Effect: 'Allow',
        Action: [
          'logs:CreateLogStream',
          'logs:GetLogEvents',
          'logs:PutLogEvents',
          'logs:DescribeLogGroups',
          'logs:DescribeLogStreams',
          'logs:PutRetentionPolicy'
        ],
        Resource: ['arn:aws:logs:us-east-1:*:*']
      }]
    }
  }]
end

IAM_InstanceProfile('ElasticBeanstalkInstanceProfile') do
  Path '/'
  Roles [Ref('ElasticBeanstalkLoggingRole')]
end

Make sure to update the Elastic Beanstalk environment with OptionSettings so that the instance profile is attached and the log group name is exposed as a custom option that we can read from the log config files.

ElasticBeanstalk_Environment('DemoEnvironment') do
  DependsOn %w(DemoDockerApplication ElasticBeanstalkMainLogGroup)
  ApplicationName Ref('DemoDockerApplication')
  Description 'AWS Elastic Beanstalk Environment'
  SolutionStackName '64bit Amazon Linux 2015.09 v2.0.4 running Docker 1.7.1'
  OptionSettings [
    {
      Namespace: 'aws:autoscaling:launchconfiguration',
      OptionName: 'IamInstanceProfile',
      Value: Ref('ElasticBeanstalkInstanceProfile')
    },
    {
      Namespace: 'aws:elasticbeanstalk:customoption',
      OptionName: 'EBLogGroup',
      Value: Ref('ElasticBeanstalkMainLogGroup')
    }
  ]
end

After adding all of this to the template, deploy the stack again so we can move on to configuring the Elastic Beanstalk application itself.

Configuring Elastic Beanstalk to Send Logs to CloudWatch Logs

We’ve already created the infrastructure for Elastic Beanstalk EC2 instances to send logs to; now we have to plug it together. Elastic Beanstalk supports a special folder called .ebextensions that allows you to configure the instances.

In the .ebextensions folder of the repository, we have three files:

  • cwl-log-setup.config. Sets up the CloudWatch Logs agent configuration file that pushes specific log files (application logs and NGINX logs in this case, but that can easily be extended) to CloudWatch Logs. It also sets up the log streams and reads the EBLogGroup custom option we defined through the OptionSettings; a trimmed-down sketch follows this list.

  • cwl-setup.config. Installs the CloudWatch Logs agent on the machine.

  • eb-logs.config. Makes sure the Elastic Beanstalk agent also picks up the CloudWatch Logs agent's own log files when you request logs from the EB dashboard. This is very helpful for debugging the other configuration files and settings.
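To give you an idea of what the log setup looks like, here is a heavily trimmed-down sketch in the style of cwl-log-setup.config. The file path and section name are illustrative and the real file in the repository is more involved; the important part is how the EBLogGroup custom option is read with Fn::GetOptionSetting:

# Trimmed-down sketch in the style of cwl-log-setup.config -- the path and
# section name are illustrative; the real file in the repository is more involved.
files:
  "/etc/awslogs/config/nginx-access.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      [nginx-access]
      file = /var/log/nginx/access.log
      log_group_name = `{"Fn::GetOptionSetting": {"Namespace": "aws:elasticbeanstalk:customoption", "OptionName": "EBLogGroup"}}`
      log_stream_name = nginx-access-{instance_id}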

After deploying those files, the CloudFormation stack that represents your Elastic Beanstalk application will be updated and Elastic Beanstalk should start sending logs to CloudWatch Logs.

We've set it up so that each instance gets its own log stream for the application and NGINX logs, which makes it easier to look into problems on specific instances. That wraps things up for today; we'll implement more metrics on top of this setup in the follow-up blog post.

Conclusions

Having all of your infrastructure under source control is an amazing boost to your productivity. You can easily set up staging systems or evolve your infrastructure without losing track of what's happening. No undocumented changes can sneak in, and you can even roll back to an earlier setup if you wish.

Pairing Elastic Beanstalk with the ability to measure and inspect every part of the stack gives you a really strong system for running and operating your application.

Give it a try and let us know in the comments if you have any additional tips and tricks for Elastic Beanstalk. Also, feel free to learn more in our Elastic Beanstalk documentation article, where we explain how to deploy to Elastic Beanstalk with our classic Codeship CI service.

Check out the second post in this AWS series, "Custom Metrics and Alerting with CloudWatch."
