The following is a guest blog post by Carlos Sanchez, cloud engineer, Experience Manager Cloud Service at Adobe.
As part of a large software development team at Adobe, my main goal is to increase productivity and output. I want my team to ship solid code all day, every day, and we’ve been able to do that with the automation capabilities in GitOps, a framework of practices that uses Git as the source of truth for managing Kubernetes-based applications and infrastructure. GitOps has become a core element of our development process.
We use Kubernetes a lot across multiple clusters and regions, so my team really likes the reconciliation cycle in GitOps. With the desired state stored in Git, we can automatically reconcile it against what’s actually running. Every commit is automatically built, pushed, and deployed into production.
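As an illustration of what that reconciliation cycle can look like when declared in Git, here is a hypothetical setup using Flux (a tool mentioned later in this post); the repository URL, resource names, and intervals are made up for the example:

```yaml
# Hypothetical Flux setup: watch a Git repo and reconcile the cluster against it.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: platform-config        # hypothetical name
  namespace: flux-system
spec:
  interval: 1m                 # poll Git every minute
  url: https://example.com/org/platform-config.git
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: platform-config
  namespace: flux-system
spec:
  interval: 10m                # reconcile cluster state against Git every 10 minutes
  sourceRef:
    kind: GitRepository
    name: platform-config
  path: ./clusters/prod        # hypothetical repo layout
  prune: true                  # delete resources that were removed from Git
```

With `prune: true`, deleting a manifest from the repository also removes the corresponding resource from the cluster, so Git stays the single source of truth.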
Since we’ve defined our services and configuration in Git, we can automatically deploy changes across clusters and namespaces. We have many, many namespaces and clusters, so we use a pull model instead of a push model whenever possible. We created a small service that runs in each cluster, pulls the desired state, and does the reconciliation.
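The core of such a pull-based reconciler is a small loop: fetch the desired state, compare it with what’s running, and apply the difference. The sketch below shows only that diff step in Python; fetching from Git and calling the Kubernetes API are left out, and the state model (a dict of resource name to spec) is a deliberate simplification:

```python
# Minimal sketch of the reconciliation step in a pull-based GitOps agent.
# Desired state (from Git) and live state (from the cluster) are modeled as
# dicts mapping resource name -> spec; a real agent compares full manifests.

def reconcile(desired: dict, live: dict) -> list:
    """Return the list of actions needed to make `live` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(("create", name))   # in Git, not in the cluster
        elif live[name] != spec:
            actions.append(("update", name))   # drifted from what Git declares
    for name in live:
        if name not in desired:
            actions.append(("delete", name))   # removed from Git: prune it
    return sorted(actions)

# Example: one resource drifted, one was removed from Git, one is new.
desired = {"web": {"replicas": 3}, "api": {"replicas": 2}}
live = {"web": {"replicas": 1}, "old-job": {"replicas": 1}}
print(reconcile(desired, live))
# [('create', 'api'), ('delete', 'old-job'), ('update', 'web')]
```

Because the agent pulls and diffs on its own schedule, nothing outside the cluster needs push access to it, which is what makes the pull model attractive at our scale.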
We also manage other services, like DNS and CDN, using GitOps. We use a combination of Jenkins and templates, along with other custom-made services, to continuously watch for changes in Git and apply them.
GitOps for Applications AND Infrastructure
Infrastructure as code is a precondition for using GitOps with infrastructure, with tools like Terraform or Azure ARM templates. However, there’s a thin line between GitOps and infrastructure as code, and the key difference is automated reconciliation: reacting automatically to the changes in Git to deploy and update your infrastructure, not just running Terraform on your development machine.
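As a sketch of what “reacting to Git” can look like for infrastructure, here is a hypothetical Jenkins declarative pipeline that polls the repository and applies Terraform on every change; the polling schedule and stage layout are illustrative, not our exact setup:

```groovy
// Hypothetical Jenkinsfile: reconcile infrastructure on every change to the repo.
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')   // poll Git roughly every 5 minutes; a webhook works too
    }
    stages {
        stage('Plan') {
            steps {
                sh 'terraform init -input=false'
                sh 'terraform plan -out=tfplan -input=false'
            }
        }
        stage('Apply') {
            steps {
                sh 'terraform apply -input=false tfplan'
            }
        }
    }
}
```

The point is that the apply step runs from the pipeline, triggered by the commit, rather than from someone’s laptop.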
Another Benefit: Full Visibility Across All Systems
In addition to the automation services, the visibility we get with GitOps has been a huge game-changer for my team. It’s our single pane of glass: one place for everyone on my team (developers, SREs, on-calls) to see what’s happening across all services and clusters, including Git logs with commit-history traceability. As team members work on development projects, they can submit pull requests for review and for testing deployments in different environments, and we can see all of that in Git.
We use declarative configurations in YAML or JSON to define the desired state of clusters in GitOps. We use standard formats like plain Kubernetes config files, Kustomize, and Helm charts, as well as custom formats, and we keep track of all of them in Git.
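For instance, a Kustomize overlay lets the same base manifests be specialized per cluster or environment, all tracked in Git; the paths and names below are illustrative:

```yaml
# kustomization.yaml for a hypothetical production overlay
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base              # shared Deployment/Service definitions
patches:
  - path: replicas.yaml     # production-only override, e.g. a higher replica count
namespace: prod
```

Each environment gets its own overlay directory, so the diff between staging and production is itself visible in Git.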
Continuous Delivery with GitOps
With all of our code in Git, we’ve created continuous delivery pipelines that automatically apply changes to the clusters: Git commits trigger Jenkins pipelines that test and deploy all these definitions.
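Such a commit-triggered pipeline can be as simple as a validate step followed by an apply step. This hypothetical Jenkinsfile is a sketch of that shape, not our actual pipeline; the `manifests/` path is made up:

```groovy
// Hypothetical Jenkinsfile: every commit to the config repo is tested, then applied.
pipeline {
    agent any
    stages {
        stage('Validate') {
            steps {
                // server-side dry run catches malformed manifests before anything ships
                sh 'kubectl apply --dry-run=server -f manifests/'
            }
        }
        stage('Deploy') {
            steps {
                sh 'kubectl apply -f manifests/'
            }
        }
    }
}
```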
Many different people, such as developers and SREs, are making commits and pushing, but there are also changes coming from user interfaces and APIs. All changes converge in Git, so they stay consistent and we always see an up-to-date view; it’s not only people making changes but also other tools and services.
We also need to decouple these systems from Git, using queues or services that transform actions, like a change made in the UI, into Git commits. The change doesn’t go directly into a Git commit; instead, the service or the queue ensures things stay decoupled. You can also trigger Jenkins pipelines from message queues using the Jenkins JMS Messaging plugin: you send a message to the queue, which triggers a Jenkins job that applies the change.
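A sketch of such a decoupling service is below, assuming JSON messages on the queue describe a change; the message schema, repository layout, and function names are all hypothetical, and the queue consumer and the actual `git push` are left out:

```python
import json

# A UI or API publishes an action like this to the queue instead of writing
# to Git directly; a small service turns each action into a Git commit.
def action_to_commit(message: str) -> dict:
    """Translate a queued change event into the pieces of a Git commit."""
    action = json.loads(message)
    path = f"services/{action['service']}/config.json"   # hypothetical repo layout
    content = json.dumps(action["config"], indent=2, sort_keys=True)
    commit_msg = f"{action['source']}: update {action['service']} config"
    return {"path": path, "content": content, "message": commit_msg}

# Example: a replica-count change made in the UI
event = json.dumps({"source": "ui", "service": "web", "config": {"replicas": 3}})
commit = action_to_commit(event)
print(commit["path"])     # services/web/config.json
print(commit["message"])  # ui: update web config
```

Because every change funnels through this one translation step, the commit history stays the complete, consistent record of who or what changed the system, no matter where the change originated.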
My team has been very successful using GitOps to automate manual processes, and I highly encourage you to give it a try. There are plenty of tools out there that can make your life easier, including Jenkins and Flux, and they can help you transition from manual processes to a GitOps-driven, fully automated process that can significantly boost your team’s productivity.
For more information about GitOps at Adobe, check out Carlos’ recent presentation at DevOps World.