Here at Codeship, we strongly believe in giving you as much control over your test and deployment infrastructure as possible. While we take care of running the infrastructure that powers your builds, you should be able to have full control over the environment.
To that end, we’ve spent the last year working on our next-generation elastic build infrastructure, the future of Codeship. We’ll share a lot more about the new system over the coming weeks, but today I want to focus on deployment, and on integrating with AWS and the EC2 Container Service in particular.
One of the issues we set out to solve with our new infrastructure was providing a more modular and composable build environment for our customers. Why should tools that are only needed during deployment be installed in the environment your tests run in? Why can’t we build small containers, a plugin infrastructure if you will, and plug those specialized containers into exactly the parts of the build where we need them?
With Docker as the basis of our new infrastructure, this has now become possible. You can pull in any container you like to compose your build environment, whether it’s an official Docker image, one provided by a company like Codeship, or your own.
If you want a unified process and toolset for deploying or testing applications across your company, you can put those tools and scripts into a Docker container and use that container across all of your company’s builds. To help you deploy to AWS in this modular environment, we’ve created an AWS container for you to use.
Codeship AWS Deployment Container
The AWS deployment container lets you plug in your deployment tools without the need to include them in the testing or even production container. That keeps your containers small and focused on the specific task they need to accomplish in the build. By using the AWS deployment container, you get the tools you need to deploy to any AWS service and still have the flexibility to adapt it to your needs.
The container configuration is open source; you can find it in the codeship-library/aws-deployment project on GitHub.
We will use the codeship/aws-deployment container throughout this documentation to interact with various AWS services.
Using other tools
While the container we provide for interacting with AWS gives you an easy and straightforward way to run your deployments, it’s not the only way to interact with AWS services. You can install your own dependencies, write your own deployment scripts, talk to the AWS API directly, or bring in third-party tools to do it for you. By installing those tools into a Docker container and running them from there, you have a lot of flexibility in how you deploy to AWS.
Authentication
Before setting up the configuration files, codeship-services.yml and codeship-steps.yml, we're going to create an encrypted file to store our environment variables, including our AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Take a look at our encrypted environment files documentation and add an aws-deployment.env.encrypted file to your repository. The file needs to contain an encrypted version of the following content:
AWS_ACCESS_KEY_ID=your_access_key_id
AWS_SECRET_ACCESS_KEY=your_secret_access_key
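If you work with our Jet CLI locally, encrypting the file looks roughly like the following. This sketch assumes you've downloaded your project's AES key from the Codeship project settings and saved it as codeship.aes in the repository root:

# Encrypt the plaintext environment file with your project's AES key
jet encrypt aws-deployment.env aws-deployment.env.encrypted

# Keep the plaintext file and the key out of version control
echo "aws-deployment.env" >> .gitignore
echo "codeship.aes" >> .gitignore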
You can get the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the IAM settings in your AWS Console; you can read more about this in the IAM documentation. Do not use the admin keys of your main AWS account, and make sure to limit access to what is necessary for your deployment through IAM.
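As a rough sketch of what that could look like, the following commands create a dedicated deployment user whose permissions are limited to the ECS actions used later in this article. The user and policy names are placeholders, and you'll need additional actions (for example the S3 ones) if your deployment touches other services:

# Create a dedicated deployment user instead of using your root credentials
aws iam create-user --user-name codeship-deployment

# Attach an inline policy that only allows the ECS calls we run below
aws iam put-user-policy \
  --user-name codeship-deployment \
  --policy-name ecs-deployment-only \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "ecs:RegisterTaskDefinition",
        "ecs:UpdateService",
        "ecs:RunTask"
      ],
      "Resource": "*"
    }]
  }'

# Generate the access key pair to put into aws-deployment.env
aws iam create-access-key --user-name codeship-deployment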
Service Definition
Before reading on, please take a look at the Services documentation page so you have a good understanding of how services work and how to include different containers in a specific build.
The codeship-services.yml file uses the codeship/aws-deployment container and sets the encrypted environment file. To get access to files in the repository that we're running this build for, we set up a volume that shares ./ (the repository folder) to /deploy. This gives us access to all files in the repository in /deploy/... for the following steps.
awsdeployment:
  image: codeship/aws-deployment
  encrypted_env_file: aws-deployment.env.encrypted
  volumes:
    - ./:/deploy
Deploying with Codeship and AWS EC2 Container Service
To interact with ECS, you can simply use the corresponding AWS CLI commands. The following example will register two new task definitions and then update a service and run a batch task.
In the following example, we’ve parallelized the deployment of both tasks. Our Steps documentation can give you more information on how parallelization works and how you can use it to speed up your build.
You can use environment variables or command arguments to set the AWS region and other parameters. Take a look at the AWS CLI environment variable documentation for details.
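For example, the AWS CLI reads AWS_DEFAULT_REGION from the environment, so you can add it to the encrypted environment file from above or pass --region on each command. The region us-east-1 here is just a placeholder:

# Option 1: set the region through the environment
# (or add AWS_DEFAULT_REGION=us-east-1 to aws-deployment.env)
export AWS_DEFAULT_REGION=us-east-1

# Option 2: pass the region explicitly on a single command
aws ecs list-services --cluster default --region us-east-1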
If you have more complex workflows for deploying your ECS tasks, you can put those commands into a script and run the script as part of your workflow. Then you could stop load balancers, gracefully shut down running tasks, or anything else you would like to do as part of your deployment.
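As a rough sketch of such a workflow, and assuming the cluster and service names used in this article, a script could drain the running tasks before rolling out a new revision of the task definition; the desired count of 2 is a placeholder:

#!/bin/bash
set -e

# Scale the service down so the running tasks are stopped gracefully
aws ecs update-service --service my-backend-service --desired-count 0

# Wait until the service has settled and all old tasks are gone
aws ecs wait services-stable --cluster default --services my-backend-service

# Register the new revision and scale the service back up
aws ecs register-task-definition --cli-input-json file:///deploy/tasks/backend.json
aws ecs update-service --service my-backend-service --task-definition backend --desired-count 2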
We're using the task definitions from the AWS CLI ECS docs. Add the following to your codeship-steps.yml:
- type: parallel
  steps:
    - type: serial
      steps:
        - service: awsdeployment
          command: aws ecs register-task-definition --cli-input-json file:///deploy/tasks/backend.json
        - service: awsdeployment
          command: aws ecs update-service --service my-backend-service --task-definition backend
    - type: serial
      steps:
        - service: awsdeployment
          command: aws ecs register-task-definition --cli-input-json file:///deploy/tasks/process_queue.json
        - service: awsdeployment
          command: aws ecs run-task --cluster default --task-definition process_queue --count 5
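These steps expect the task definition files to be checked into the repository under tasks/. If you don't copy the examples from the AWS docs verbatim, a minimal tasks/backend.json could look something like the following; the image, ports, and resource values are placeholders you'd replace with your own:

{
  "family": "backend",
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "your-registry/backend:latest",
      "cpu": 256,
      "memory": 512,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 80
        }
      ]
    }
  ]
}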
Combining deployment to various services with a script
If you want to interact with various AWS services in a more complex way, you can do this by setting up a deployment script and running it inside the container. The following script will upload different files into S3 buckets and then trigger a redeployment on ECS. The deployment script can access any files in your repository through /deploy. In the following example, we're putting the script into scripts/aws_deployment.
#!/bin/bash

# Fail the build on any failed command
set -e

aws s3 sync /deploy/assets s3://my_assets_bucket
aws s3 sync /deploy/downloadable_resources s3://my_resources_bucket

# Register a new version of the task defined in tasks/backend.json and update
# the currently running instances
aws ecs register-task-definition --cli-input-json file:///deploy/tasks/backend.json
aws ecs update-service --service my-backend-service --task-definition backend

# Register a task to process a queue
aws ecs register-task-definition --cli-input-json file:///deploy/tasks/process_queue.json
aws ecs run-task --cluster default --task-definition process_queue --count 5
And the corresponding codeship-steps.yml:
- service: awsdeployment
  command: /deploy/scripts/aws_deployment
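Note that the step runs the script directly, so the file has to be executable; make sure to set the executable bit before committing it:

chmod +x scripts/aws_deployment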
EC2 Container Registry
To answer your most pressing question, yes, we will integrate with the EC2 Container Registry that was announced during the re:Invent Keynote. We’re currently working closely with the ECS team to get this all set up and will send out more information as soon as the registry integration is finished and launched. As AWS customers, we’re very excited to see this new, fully managed Docker container registry and use it ourselves.
Conclusion
At AWS and Codeship, we share the philosophy of giving our customers a powerful infrastructure and process without them having to waste time on maintenance. We want you to be in control and move your product ahead with full speed and focus. With our new Docker infrastructure and all of the advantages it provides, including a very modular and easy-to-use AWS and ECS deployment, we’ve taken a large step in that direction.
We’re very excited to start showing what we’ve built over the last year. Expect more coming soon.