The CloudBees team is coming off a monumental year in 2019, where our “hive” continued to grow as our vision for software development and DevOps expanded with Software Delivery Management. Someone who continues to be a valued partner to CloudBees along our journey is none other than Jon Collins, vice president of research at GigaOm.
Before the New Year began, we had a chance to chat with Jon to discuss what learnings he took away from 2019. We also talked about what he predicts will be the main focus for software delivery in 2020 as organizations continue to transform themselves into technology companies.
What has been interesting to you when looking back on 2019? What things do you expect to happen that didn’t – or that you were surprised that did happen?
Jon: Technology-based innovation, and more particularly, how to scale technology-based innovation at speed. To do digital transformation correctly involves a gamut of technology, not just software. When we look at IoT, embedded systems, and device manufacturers, for example, a lot is about software, but it's also a lot about other stuff.
When you think about “customer experience,” the question circles around how we do something new, and that can impact the business. Technology-related industries are constantly commoditizing, so the first-mover advantage is a must. The drive I’m seeing has nothing to do with making developers' lives better, although that's great. The whole point is helping happy developers actually make a measurable difference to their business’ bottom line.
Interesting point. So being a first-mover is important. How does that relate to managing legacy systems that are still providing value?
Jon: I don’t have a problem with legacy software in and of itself. I have a problem with an absence of APIs, by which I mean a system, say an old Oracle database, that's stuffed full of stored procedures. This forces your teams to work in a certain way because that’s the way the work has to be done. Functionality is literally baked in and gets calcified, causing huge problems. We need existing systems to work with the foundations of CI/CD so people can start taking advantage of higher-level DevOps practices.
What is a major opportunity you are seeing and where does software delivery go from here?
Jon: We still have a way to go in software delivery and how management structures are optimized for new ways of working. For example, there's a huge amount of mileage to get out of doing foundational CI/CD right, making software builds efficient and automated within a technology modernization journey.
To enable this, we have to make “doing the right thing” the path of least resistance for the developer. Whether that’s a security scan, a test deployment or a performance test: by creating a policy definition or architectural diagram and configuring the CI capabilities to suit, an organization makes it so all the developer has to do is commit their code, and it is sure to follow these guardrails and conform to those policies.
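As a toy illustration of that idea (mine, not from the interview), a declarative Jenkins-style pipeline can bake the guardrails in as mandatory stages, so a plain commit automatically triggers every required check. The stage names and `make` targets here are hypothetical placeholders for whatever scanning and testing tools an organization actually uses:

```groovy
// Hypothetical Jenkinsfile sketch: guardrails are part of the pipeline itself,
// so the developer only commits code; the policy checks run automatically.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }          // placeholder build command
        }
        stage('Security scan') {
            steps { sh 'make security-scan' }  // e.g. a SAST tool behind a make target
        }
        stage('Test deployment') {
            steps { sh 'make deploy-test' }
        }
        stage('Performance test') {
            steps { sh 'make perf-test' }
        }
    }
    post {
        failure {
            // Surface a failed guardrail to the right people
            mail to: 'team@example.com',
                 subject: "Policy check failed: ${env.JOB_NAME}",
                 body: "See ${env.BUILD_URL}"
        }
    }
}
```

Because the checks live in pipeline configuration rather than in each developer's habits, “doing the right thing” really does become the path of least resistance.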
Once you've actually got CI/CD that delivers what modern companies need, governance and reporting must follow. We used to talk about VM sprawl as a big downside of virtual machines. We're already starting to see the challenges that come from microservices architectures, and we're going to see increased complexity. In today’s distributed application landscape, every application outage is like a giant “whodunit” game. That isn’t good for anybody.
What other challenges or opportunities do you see as a reflection of these ideas?
Jon: We're kind of going back to the 1970s in terms of the cohesion and coupling and modularity of structured systems. But today, we've got tools like Kubernetes, amazing network bandwidth, well-understood processes and containerized modules that are always deployed with the right libraries. Luckily, everyone's agreeing on that as an architecture. Therefore, we need tools designed to support those principles.
CI/CD becomes hugely important, and prescriptive guidance at each stage of the process also becomes critical. If you can build to that, you can set guidance around the guardrails and bake in policy – architectural or security policy, for example: these particular elements should never leave the DMZ. And, that's fine. It's part of the policy. If it's customer data, you're setting the security policy, and that's going to have a new influence on the architecture.
Jon: When I wrote a recent DevSecOps report, for example, I was researching companies that enable CISOs and security experts to be more than just the tin-pot policeman and actually start to say, "Okay, we've now set the policy." In a perfect CI/CD world, you should be able to throw those policies at your testing tools, and if something breaks a policy, the right people are informed and the problem can be fixed. It is an opportunity for visibility.
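To make that concrete with a small, self-contained sketch (my illustration, not Jon's; the policy rule, service names and manifest format are all invented), a policy can be expressed as data and enforced as an automated check in the pipeline, so a violation fails the build and names the offending item rather than relying on a human gatekeeper:

```python
# Hypothetical policy-as-code check: a security policy expressed as data,
# enforced automatically against a (toy) deployment manifest.
POLICY = {
    # Customer data must never leave the DMZ.
    "allowed_zones_for_customer_data": {"dmz"},
}

def check_policy(services):
    """Return a list of violations; an empty list means the deployment is compliant."""
    violations = []
    for svc in services:
        if (svc.get("handles_customer_data")
                and svc["zone"] not in POLICY["allowed_zones_for_customer_data"]):
            violations.append(
                f"{svc['name']}: customer data outside the DMZ (zone={svc['zone']})"
            )
    return violations

# Toy manifest: one compliant service, one that breaks the policy.
services = [
    {"name": "billing-api", "zone": "dmz", "handles_customer_data": True},
    {"name": "report-export", "zone": "public-cloud", "handles_customer_data": True},
]

for violation in check_policy(services):
    print("POLICY VIOLATION:", violation)
```

Run as a pipeline step, a non-empty result would fail the build and notify the responsible team, which is exactly the "throw the policies at your testing tools" visibility described above.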
Thank you so much, Jon. This was a helpful and insightful conversation. Tell the people what you have going on in 2020...
Jon: Over the past two years I’ve been building a library of resources across DevOps, including reports on Value Stream Management, DevSecOps and DevOps Quality and Testing; I’ve also learned so much from interviewees on my Voices in DevOps podcast. I’m currently finalizing a report on the evolution of CI/CD, so watch this space for that one! In terms of my research agenda, I’m particularly focused on how enterprises can manage the complexity inherent in DevOps environments, across the software being created, pipelines and processes, and indeed the tools being used. I welcome any thoughts your readers may have in any of these areas.