One of my favorite aspects of being a product manager in the DevOps space is that I frequently take trips down memory lane to earlier times in my career when I wore both DEV and OPS hats. This happens when I speak with customers about the problems they have in their software delivery process and flash back to a similar issue I faced in my past. Sometimes this isn’t the most pleasant experience, if that flashback takes me to some event in my technical career I’d rather forget (like when I accidentally brought down a production database and had the CEO himself fuming in the door of my cubicle). Other memories are more fun to revisit and remind me of the fulfillment DevOps practitioners feel when they totally nail it (like when I rolled out a fairly complex mapping application that touched a bunch of government and university databases). Whether these trips down memory lane are good or bad, I always return from them with renewed excitement for what I do: build tools to help DevOps teams be more successful.

Recently, I spoke with an IT manager who was facing an application deployment challenge that instantly brought me back to 2008 and the sunset years of my career as a PHP developer (and part-time Ops guy). On the surface the problem was ‘deploy a PHP application to prod’, but underneath it was ‘deploy a bunch of loosely connected PHP apps, built with different frameworks by different teams, all at once to prod’. Good times, indeed! To provide a little more clarity, here is what we are talking about:
- A product catalog website that was part hard-coded content, part dynamic, and part WordPress CMS,
- A password-protected area where paying customers would access specific data (how we made $),
- An internal portal where sales would pull reports on customer usage.
Each app had been built at a different time by a different team using a different style of PHP development; each had its own third-party libraries and Git repositories, and they shared two databases between them. That drove the developer in me nuts, but the Ops in me had it even worse, because this entire rat’s nest had to be deployed in a very specific sequence of steps: push code, update database schemas, push more code, smoke test one area, push more code, check third-party web services, smoke test again, and so on. It was insane.

I now look back and realize how many other people must have gone through this same nightmare, or are going through it right now (like that IT manager). If you’re the DevOps person in charge of deploying an application like this, what can you do? How do you make sense of it without asking your developers to tear it apart and start over (hah!)? What can you do to bring sanity into an otherwise insane situation? Back then, other than hacking together some scripts to perform pieces of the deploy sequence, I had no idea at all. But now… Now there are answers to these questions, thanks to the world of DevOps practices and deploy automation. Tools like our own CloudBees Flow Deploy provide a variety of ways to cut through the complexity, increase consistency, and reduce the risk of even your most complex deployments. If I had to do it all over again today, here is how I would use CloudBees Flow Deploy to show that PHP application who is boss.
Here is how I would go about it now, with CloudBees Flow:
1. Model the Environments (WHERE things get deployed):
First, I model my environments tier by tier. Back in the day I had ‘stage’ and ‘prod’, close replicas of each other whose only differences were capacity and physical location. So now, in CloudBees Flow Deploy, I model each environment and assign the appropriate server resources to its tiers. Not only does the model give me a nice visual representation of my entire environment, it pays dividends later by providing an inventory view of what code is deployed where (more on that in Part II).
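In CloudBees Flow Deploy this modeling happens in the UI, but conceptually an environment model boils down to a mapping of environments to tiers to server resources. A minimal sketch of that idea in Python (every environment, tier, and host name here is an illustrative placeholder, not from the actual setup):

```python
# Hypothetical environment model: environments -> tiers -> server resources.
# All names are illustrative; the real model lives in the CloudBees Flow UI.
ENVIRONMENTS = {
    "stage": {
        "web":      ["stage-web-01"],
        "app":      ["stage-app-01"],
        "database": ["stage-db-01"],
    },
    "prod": {
        # Same tiers as stage; only capacity and location differ.
        "web":      ["prod-web-01", "prod-web-02"],
        "app":      ["prod-app-01", "prod-app-02"],
        "database": ["prod-db-01"],
    },
}

def resources_for(environment: str, tier: str) -> list:
    """Look up which server resources back a given tier of an environment."""
    return ENVIRONMENTS[environment][tier]
```

Because stage and prod expose the same tier names, anything modeled against the tiers later can target either environment unchanged.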
Complete model of my production environment
2. Model the Application (WHAT gets deployed):
With the environments modeled and ready for action, I work through a similar process to model that painfully complex application. Here is where I really start to see some sanity develop, as I create tiers of the application with individual components that map to the Git repository for each piece of the application. These components represent the three main areas of my PHP application described earlier, and by connecting them to their respective Git repos I’ve readied them to deploy. Check out this short one-minute video to see how easy it is to create a tier in CloudBees Flow Deploy:
Complete model of my application. Note that the WordPress and brochure sites were considered to be under one umbrella but had their own Git repos. Next, I plug in source control, creating the process for each individual component with steps (instructions) for pulling code from its Git repo and deploying it to a specific directory.
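A component process is essentially an ordered list of steps, each one a command the deploy agent runs. As a rough sketch (the repo URLs, paths, and component names are made up for illustration; the real steps are defined in the tool, not hand-written code):

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    command: str  # shell command a deploy agent would run

def component_process(name: str, repo: str, target_dir: str) -> list:
    """Build the pull-and-deploy steps for one component.

    Repo URLs and directories are illustrative placeholders.
    """
    checkout = f"/tmp/checkout/{name}"
    return [
        Step("pull code",   f"git clone --depth 1 {repo} {checkout}"),
        Step("deploy code", f"rsync -a --delete {checkout}/ {target_dir}/"),
    ]

# One process per component, mirroring the three areas of the application.
processes = {
    "catalog":  component_process("catalog",  "git@example.com:catalog.git",  "/var/www/catalog"),
    "customer": component_process("customer", "git@example.com:customer.git", "/var/www/customer"),
    "portal":   component_process("portal",   "git@example.com:portal.git",   "/var/www/portal"),
}
```

Keeping one small process per component is what makes the later orchestration step readable: the main deploy process just sequences these.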
3. Tie Application Tiers to Environment Tiers (WHAT gets deployed WHERE):
Now I map each tier of the application model to the appropriate tier of my environment model, essentially tying the application to the actual environment. (Note: I repeat this for both the ‘stage’ and ‘prod’ environment models, so I can deploy to either with one single application model.)
Mapping application tiers to environment tiers (tying these main objects together).
Now the sanity is really building:
- My Git repos are tied to components in my application tiers
- Those tiers are tied to my environment model
- I’ve also got distinct processes in place for pulling code from Git and updating my database schema.
Now I can see exactly where the various pieces of my application will end up and how they will get there once I deploy. My level of confidence in, and control over, these major pieces of my deployment is now extremely high.
4. Create the Deploy Process (HOW and WHEN the application components are deployed):
At this point I am very close to having this monstrous PHP application deploy under control. But before I can actually kick off a deployment, I need to create the main deploy process that orchestrates all of the smaller component processes defined earlier and puts the ‘automate’ in deploy automation. In CloudBees Flow Deploy, the workflow to build your main deploy process is simple and powerful: not only can I easily call component processes, I can also create steps to execute shell commands, kick off external procedures, or tap specific functionality within our massive library of plugins (this flexibility is a blog topic unto itself). Watch this short video to see how easy it is to create a process step in CloudBees Flow Deploy:
View of my deploy process with checkpoints
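Conceptually, the main deploy process runs the component processes in order with checkpoints between them, stopping (and notifying someone) at the first failure instead of plowing ahead. A hedged sketch of that control flow (the step names are illustrative, and in reality CloudBees Flow Deploy drives this, not hand-written code):

```python
def run_deploy(steps, notify=print):
    """Run deploy steps in order.

    Each step is a (name, callable) pair whose callable returns True on
    success. Stop at the first failed checkpoint and report how far we got.
    """
    completed = []
    for name, action in steps:
        if action():
            completed.append(name)
        else:
            notify(f"deploy halted at checkpoint: {name}")
            return completed, False
    return completed, True

# Illustrative sequence mirroring the old manual run book.
steps = [
    ("deploy catalog code",   lambda: True),
    ("migrate shared schema", lambda: True),
    ("smoke test catalog",    lambda: True),
    ("deploy customer area",  lambda: False),  # pretend this step fails
    ("smoke test customer",   lambda: True),
]
completed, ok = run_deploy(steps)
```

The payoff of the checkpoint structure is exactly what the old run book lacked: a failure stops the sequence at a known point, so you know precisely what is (and is not) deployed.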
When finished, I have a very structured deploy process that I (or anybody else) can easily visualize step by step to understand exactly what happens, and when. This is still a complex application with a complex set of deployment steps, but with CloudBees Flow’s model-driven architecture and automated processes, I’ve now got structure around it all. That feels much better than the run book I used to maintain. To be safe, I’ve also built in checks at appropriate points, so I’m notified early and often if the deploy goes off the rails. And to top it all off, CloudBees Flow Deploy not only tracks and displays changes made to all objects, including deploy processes, it also lets me take a ‘snapshot’ of objects that I can roll back to later if needed.

With the process complete, I’m ready to fire off a deployment to either the stage or prod environment. And the next time I need to deploy, it’s all there, waiting for me to reuse. No more manual one-offs. No more headaches. Phew… :)

Now that we’ve seen how easy it is to set up your deployment process, in my next blog post I’ll show you how simple it is to launch a deployment, and touch on a variety of benefits around post-deployment activities, like real-time status notifications, code artifact inventory, snapshots, and more. Stay tuned!

Image: Wikimedia Commons.