How We Built the Codeship API v2

Written by: Kyle Rames

We started work on our API v2 at the beginning of 2017. We knew that implementing it could have significant implications for our architecture as well as our customers’ workflows, so we wanted to spend the time to get it right rather than rushing to deliver something and then having to live with the consequences.


API Gateway Versus the Majestic Monolith

The first thing we looked at was how the new API would fit into our current architecture and our vision for the system going forward. Our existing architecture is a Rails monolith surrounded by a couple of supporting services.

Our long-term vision is to move to a more services-based architecture, so we decided to first explore the idea of using an API gateway. This approach would allow us to reduce the number of requests going to our monolith frontend and would enable us to separate our UI load from our API load. Additionally, it would provide a central gatekeeper for all of our systems.

As appealing as this option was, we found as we started to investigate that the types of API calls we wanted would almost always end up hitting our monolith anyway. That, coupled with the overhead of adding an API call to the gateway only to turn around and pass the same request to our monolith, seemed like a lot of extra work for an unknown future payoff.

In the end, we decided to build our API into our monolith as a Rails engine. A Rails engine is an embeddable Rails application that operates in a namespace isolated from the main Rails application. This separation helped prevent us from inadvertently using code from our monolith, but it still required a little discipline on our part not to explicitly reach into the monolith.
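As a rough illustration of what that looks like (the engine name matches our namespace, but the file layout and mount point here are simplified assumptions, not our exact configuration):

```ruby
# Inside the engine: lib/api_v2/engine.rb
module ApiV2
  class Engine < ::Rails::Engine
    # isolate_namespace keeps the engine's models, controllers, helpers,
    # and routes in their own namespace, separate from the host monolith.
    isolate_namespace ApiV2
  end
end

# Inside the monolith: config/routes.rb
Rails.application.routes.draw do
  # All engine routes are served under /v2 (path is illustrative).
  mount ApiV2::Engine, at: "/v2"
end
```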

The above diagram highlights the interaction between our API Rails Engine and our monolith while retrieving a build object.

  1. An ApiV2::BuildRequest Ruby object is created using our validated request parameters.

  2. The ApiV2::BuildRequest object is passed to the ApiV2::BuildService, which currently serves as the interface into our monolith. As we move to a more service-oriented architecture, this could point to a separate service.

  3. The BuildProvider in our monolith then performs the necessary business logic to look up the resulting Active Record build object.

  4. The Active Record build object is then converted into a plain old Ruby object (PORO), which is then passed back to the API engine.

  5. The API Engine converts our PORO into JSON and sends it back to the client.
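The steps above can be sketched in plain Ruby. The `ApiV2::BuildRequest`, `ApiV2::BuildService`, and `BuildProvider` names come from the flow described, but the bodies below are simplified stand-ins, not our production code:

```ruby
require "json"

module ApiV2
  # Step 1: a value object built from validated request parameters.
  BuildRequest = Struct.new(:organization_uuid, :project_uuid, :build_uuid,
                            keyword_init: true)

  # Step 2: the service that acts as the interface into the monolith.
  # In a future service-oriented world, this could call out to a
  # separate service instead.
  class BuildService
    def self.find(request)
      BuildProvider.find_build(request)
    end
  end
end

# Monolith side (steps 3 and 4): performs the business logic and hands
# back a PORO rather than exposing an Active Record object to the engine.
class BuildProvider
  BuildResult = Struct.new(:uuid, :status, :commit_message, keyword_init: true)

  def self.find_build(request)
    # A real implementation would run an Active Record lookup here;
    # this fabricated record is for illustration only.
    BuildResult.new(uuid: request.build_uuid,
                    status: "success",
                    commit_message: "Fix flaky spec")
  end
end

request = ApiV2::BuildRequest.new(
  organization_uuid: "09fec460-7ae2-0135-addd-745c899e7aa9",
  project_uuid: "d7c7ea93-fe9c-4b5d-8943-25e83e22b60d",
  build_uuid: "95639b6e-0af5-4f72-9671-060177b8b6cb"
)

build = ApiV2::BuildService.find(request)

# Step 5: the engine serializes the PORO to JSON for the client.
puts({ uuid: build.uuid, status: build.status }.to_json)
```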

If you are interested in learning more about Rails Engines, I would highly recommend reading the Rails Guide on Getting Started with Engines.


Authenticating Requests

The next significant design consideration was authenticating requests. Currently, all of our authentication and authorization occurs in our monolith. This would not have been a big deal if we kept our API in an embedded Rails engine, but if we later extracted it into an external service, the extra round trips could amount to a lot of network traffic.

In searching for a solution, we found that several of our developers had worked with JSON Web Tokens (JWT) in the past, and they suggested it as a possible approach. With JWT, on the initial authentication request we authenticate the user and generate a token containing the following information:

  {
    "user_id": 1,
    "scopes": {
      "09ddc0f0-7ae2-0135-addd-745c899e7aa9": ["project.read", "project.write", "build.read", "build.write"],
      "09fec460-7ae2-0135-addd-745c899e7aa9": ["project.read", "build.read"]
    },
    "exp": 1516220059,
    "iss": "",
    "aud": "client",
    "iat": 1516216459,
    "jti": "692e3d0a906870b604ae25aea02e038d"
  }
Our example shows what a decoded JWT looks like for Codeship. This particular token is for User 1, who has access to two different organizations: the first can both read and write project and build attributes, while the second can only read them.

This bearer token is passed with every request in the Authorization HTTP header. Upon receiving the request, our monolith verifies that the user has the correct scope for the resource that they are trying to access. If the user is authorized, we process the request.
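Once the token has been decoded, the scope check itself is straightforward. Here is a minimal sketch of that authorization step; the `authorized?` helper and the scope strings are hypothetical, assuming claims shaped like the token above:

```ruby
# Hypothetical scope check over decoded JWT claims.
# claims is the payload hash produced by decoding the bearer token.
def authorized?(claims, organization_uuid, required_scope)
  # Array() turns a missing organization entry (nil) into [],
  # so unknown organizations are simply unauthorized.
  Array(claims.dig("scopes", organization_uuid)).include?(required_scope)
end

claims = {
  "user_id" => 1,
  "scopes" => {
    "09ddc0f0-7ae2-0135-addd-745c899e7aa9" =>
      ["project.read", "project.write", "build.read", "build.write"],
    "09fec460-7ae2-0135-addd-745c899e7aa9" =>
      ["project.read", "build.read"]
  }
}

authorized?(claims, "09ddc0f0-7ae2-0135-addd-745c899e7aa9", "build.write") # => true
authorized?(claims, "09fec460-7ae2-0135-addd-745c899e7aa9", "build.write") # => false
```

If the check fails, the monolith rejects the request before any business logic runs.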

If you are interested in learning more about JWT, there is an excellent introduction available online.

REST Versus GraphQL

Even more important than the architecture issues was the API itself. We felt that there were only two choices to consider: REST and GraphQL.

GraphQL is an API query language that allows you to retrieve the exact data you are looking for in a single server request. A theoretical GraphQL query to find the build status and commit message for a build might look like this:

  {
    organization(uuid: "09fec460-7ae2-0135-addd-745c899e7aa9") {
      project(uuid: "d7c7ea93-fe9c-4b5d-8943-25e83e22b60d") {
        build(uuid: "95639b6e-0af5-4f72-9671-060177b8b6cb") {
          status
          commitMessage
        }
      }
    }
  }

We loved how powerful these queries are, but triggering a build did not feel quite as natural.

Taking a REST approach, our build would live at the following location: /organizations/&lt;organization_id&gt;/projects/&lt;project_id&gt;/builds/&lt;build_id&gt;

We could retrieve the build information with a GET request against this URL. A POST request to the same URL would trigger the build.
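In Ruby, the two operations against that one URL look like this. The host is a placeholder, the token is elided, and the full path (including the organizations prefix) is an assumption based on the scheme above:

```ruby
require "net/http"
require "uri"

# The same URL serves both operations; only the HTTP verb changes.
uri = URI("https://api.example.com/organizations/09fec460-7ae2-0135-addd-745c899e7aa9" \
          "/projects/d7c7ea93-fe9c-4b5d-8943-25e83e22b60d" \
          "/builds/95639b6e-0af5-4f72-9671-060177b8b6cb")

get_build     = Net::HTTP::Get.new(uri)   # GET retrieves the build information
trigger_build = Net::HTTP::Post.new(uri)  # POST to the same URL triggers the build

# Every request carries the JWT bearer token in the Authorization header.
[get_build, trigger_build].each { |req| req["Authorization"] = "Bearer <token>" }

puts get_build.method      # "GET"
puts trigger_build.method  # "POST"
```

Sending either request is then a matter of `Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(get_build) }`.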

Ultimately, we felt that our customers, as well as ourselves, were more experienced with REST, and we decided to take that approach. As we learn more about how users are interacting with our API, we may revisit GraphQL in the future.

If you haven’t yet looked at our API, you can find all the information you need to get started here. In case you get stuck working with the API or have feedback on how to make it better, feel free to reach out to our help desk or connect with us in our Community Slack.
