Manage Your CD Pipeline in a Simple Way

Written by: Andres Cidoncha

Editor's note: This is a guest blog post from Andres Cidoncha, DevOps engineer at System73. Andres will be presenting a session next week at DevOps World | Jenkins World Lisbon, along with his colleague, Airam Gonzalez, who is also a DevOps engineer at System73. They will present "DevOps Cycle for our Serverless Apps with Jenkins" on Wednesday, December 4, at 4:15pm. Check it out!

You have to manage the deployment of your growing application, from the very first version to the moment you have multiple environments. How can you do it while following the KISS principle? I’ll show a small example that I hope will help you with this task.

You have a new project in hand and it’s time to deploy the first version to Dev to test the integration of its components. Your app has two main components: a back-end and a front-end.

Each component manages its dependencies and deployment steps with a pipeline, but you have to coordinate the general app deployment. Taking this into consideration, your pipeline could look like this:

pipeline {
    agent any
    options {
        disableConcurrentBuilds()
        timestamps()
    }
    stages {
        stage('Deploy Backend') {
            steps {
                build(job: 'DeployBackend')
            }
        }
        stage('Deploy FE') {
            steps {
                build(job: 'DeployFrontend')
            }
        }
    }
    post {
        success {
            slackNotification('Deployed!!') // THIS METHOD IS AN EXAMPLE
        }
        failure {
            emailErrorNotification() // THIS METHOD IS AN EXAMPLE
        }
        cleanup {
            cleanWs()
        }
    }
}
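Each stage here only triggers the component’s own job. A downstream job such as DeployBackend keeps its deployment details in its own Jenkinsfile — here is a purely hypothetical sketch (the stage names and shell commands are made up for illustration):

```groovy
// Hypothetical Jenkinsfile for the DeployBackend job.
// Stage names and shell commands are illustrative only.
pipeline {
    agent any
    stages {
        stage('Install dependencies') {
            steps {
                sh 'make deps' // placeholder build tooling
            }
        }
        stage('Deploy') {
            steps {
                sh 'make deploy' // placeholder deployment command
            }
        }
    }
}
```

Because the orchestrating pipeline only calls `build(job: 'DeployBackend')`, the component team can change these internals freely without touching the general pipeline.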

You call the deployment pipeline for each component (the general pipeline stays abstracted from each subprocess’s requirements), so it remains very simple. Now imagine that your app grows and DB migrations come into play…

Since migrations can run in parallel with the front-end deployment (both depend only on the back-end), the process may look like this:

pipeline {
    agent any
    options {
        disableConcurrentBuilds()
        timestamps()
    }
    stages {
        stage('Deploy Backend') {
            steps {
                build(job: 'DeployBackend')
            }
        }
        stage('Migrations && FE') {
            parallel {
                stage('Run DDBB migrations') {
                    steps {
                        build(job: 'DBMigrations')
                    }
                }
                stage('Deploy FE') {
                    steps {
                        build(job: 'DeployFrontend')
                    }
                }
            }
        }
    }
    post {
        success {
            slackNotification('Deployed!!') // THIS METHOD IS AN EXAMPLE
        }
        failure {
            emailErrorNotification() // THIS METHOD IS AN EXAMPLE
        }
        cleanup {
            cleanWs()
        }
    }
}

Cool! Everything is under control. Now you can say you have your app deployment pipeline! You can add a parameter to choose the deployment environment.

But … there’s something missing … testing! You’re deploying your app with a single click, but you should check a few things before telling everyone that your deployment is okay! Let’s say you have some application tests with their own pipeline to manage dependency tools, steps and other stuff. To make the QA team happy, you should integrate those tests into your workflow.

pipeline {
    agent any
    options {
        disableConcurrentBuilds()
        timestamps()
    }
    parameters {
        choice(name: 'environment', choices: ['develop', 'staging', 'production'], description: 'Environment')
    }
    stages {
        stage('Deploy Backend') {
            steps {
                build(job: 'DeployBackend', parameters: [string(name: 'environment', value: params.environment)])
            }
        }
        stage('Migrations && FE') {
            parallel {
                stage('Run DDBB migrations') {
                    steps {
                        build(job: 'DBMigrations', parameters: [string(name: 'environment', value: params.environment)])
                    }
                }
                stage('Deploy FE') {
                    steps {
                        build(job: 'DeployFrontend', parameters: [string(name: 'environment', value: params.environment)])
                    }
                }
            }
        }
        stage('Application Tests') {
            steps {
                build(job: 'ApplicationTests', parameters: [string(name: 'environment', value: params.environment)])
            }
        }
    }
    post {
        success {
            slackNotification("Deployed ${params.environment}!!") // THIS METHOD IS AN EXAMPLE
        }
        failure {
            emailErrorNotification() // THIS METHOD IS AN EXAMPLE
        }
        cleanup {
            cleanWs()
        }
    }
}

You have now deployed an environment and validated the deployment. But promoting the code from Dev to STA (and production) still has to be carried out manually … and you don’t want that. Why? Because you have already done enough testing to validate your deployment, and you aren’t afraid of moving changes to production!

Then, you’ll have to choose between these two options:

  1. Create a Jenkinsfile for each deployment job. This makes no sense, because 99% of the pipeline code would be the same.

  2. Use the same Jenkinsfile for all deployment jobs, deciding the next job to call in the code.

You should choose option 2 (repeated code is never a good friend). But wait, there are more choices!

  1. Use a single job for all deployments. This has two problems, depending on your `disableConcurrentBuilds` configuration:

    • With concurrent builds: the pipeline can run in parallel without restrictions. That means two builds could be deploying to the same environment at the same time!

    • Without concurrent builds: each build has to wait until the previous one ends. That means you can’t deploy your hotfix to production while another build is deploying to development!

  2. Use an independent job for each deployment job, disabling concurrent builds.

You should choose option 2, again. But how do you manage environment-specific options (for example, which test suite to run in the application tests) for each job if all of them share the same Jenkinsfile?

Use the JOB_BASE_NAME!

Don’t worry, I’ll show you an example:

Let me explain: each job is named <job_name>_<environment>. With that convention, you can easily find out which environment you are deploying just by creating a variable: awsEnvironment = env.JOB_BASE_NAME.split('_')[1]

deployJobName = env.JOB_BASE_NAME.split('_')[0]
awsEnvironment = env.JOB_BASE_NAME.split('_')[1]
nextEnvironment = ['develop': 'staging', 'staging': 'production', 'production': null]

pipeline {
    agent any
    options {
        disableConcurrentBuilds()
        timestamps()
    }
    stages {
        stage('Deploy Backend') {
            steps {
                build(job: 'DeployBackend', parameters: [string(name: 'environment', value: awsEnvironment)])
            }
        }
        stage('Migrations && FE') {
            parallel {
                stage('Run DDBB migrations') {
                    steps {
                        build(job: 'DBMigrations', parameters: [string(name: 'environment', value: awsEnvironment)])
                    }
                }
                stage('Deploy FE') {
                    steps {
                        build(job: 'DeployFrontend', parameters: [string(name: 'environment', value: awsEnvironment)])
                    }
                }
            }
        }
        stage('Application Tests') {
            steps {
                build(job: 'ApplicationTests', parameters: [string(name: 'environment', value: awsEnvironment)])
            }
        }
        stage('Promote to next environment') {
            when {
                expression { nextEnvironment[awsEnvironment] }
            }
            steps {
                build(job: "${deployJobName}_${nextEnvironment[awsEnvironment]}", propagate: false, wait: false)
            }
        }
    }
    post {
        success {
            slackNotification("Deployed ${awsEnvironment}!!") // THIS METHOD IS AN EXAMPLE
        }
        failure {
            emailErrorNotification() // THIS METHOD IS AN EXAMPLE
        }
        cleanup {
            cleanWs()
        }
    }
}
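Since every deployment job shares this same Jenkinsfile and differs only by name, the <job_name>_<environment> jobs themselves could, for example, be generated with the Job DSL plugin. A hypothetical sketch (the job name prefix, repository URL and script path are placeholders):

```groovy
// Hypothetical Job DSL seed script: one deployment job per environment,
// all reading the same Jenkinsfile from SCM.
['develop', 'staging', 'production'].each { environment ->
    pipelineJob("DeployApp_${environment}") {
        definition {
            cpsScm {
                scm {
                    git('https://example.com/our-app.git') // placeholder repo
                }
                scriptPath('Jenkinsfile')
            }
        }
    }
}
```

Adding a new environment then means adding one entry to this list (and to the nextEnvironment map), not cloning pipeline code.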

With that, you can deploy to production while another build is deploying to Dev … even in parallel with code promotion to STA! And you can do all of that with the same Jenkinsfile, which is really simple (KISS!). This pipeline can easily be extended to add support for custom environments, general app version control, dependency checks, automatic changelog generation … and more!
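The notification methods marked as examples throughout this post would typically live in a Jenkins shared library. A hypothetical sketch of what they might wrap, using the Slack Notification plugin’s slackSend step and the core mail step (the channel and recipients are placeholders):

```groovy
// Hypothetical shared-library helpers wrapped by the pipelines above.
// slackSend comes from the Slack Notification plugin; mail is a core step.
def slackNotification(String message) {
    slackSend(channel: '#deployments', message: message) // placeholder channel
}

def emailErrorNotification() {
    mail(to: 'team@example.com', // placeholder recipients
         subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
         body: "See ${env.BUILD_URL} for details.")
}
```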

On December 4, 2019, Airam Gonzalez and I will be speaking more on this topic at DevOps World | Jenkins World Lisbon. Join us if you'd like to hear more! Use the code JWFOSS for a 30% discount on registration.
