John Willis - From Docker to DevOps

In this episode of DevOps Radio, we'll hear from John Willis, Director of Ecosystem Development at Docker. He'll tell us about the early days of DevOps, how Docker is influencing DevOps technologies, and where he sees the use cases for Docker.

Andre Pino: In today’s episode, “From Docker to DevOps,” we're joined by John Willis, the Director of Ecosystem Development at Docker. John, we're thrilled to have you today—welcome.

John Willis: Yeah, thanks—thanks for inviting me.

Andre: It’s great to have you here. John, I know that many of our listeners probably know of your background—you've been involved in DevOps thought leadership since the very early days of DevOps. Can you tell us a little bit about what those early days were like and how you got involved in it?

John: Yeah, so you know, just a quick—I've been a startup guy my whole career. I've done like 10 startups in the last 30 years, and I've always been an operations guy, you know, kind of around operations, tools for operations, starting with mainframe, then distributed computing, then I got into an IBM suite, a product called Tivoli—all really the same thing. It was just, you know, tooling about IT infrastructure and operations infrastructure. Somewhere along the way, I started getting really interested in Puppet. I think I saw Luke Kanies early on, founder of Puppet, give a presentation—it was basically an open source conference—and it kinda changed the way I thought about how I was doing what I called version one configuration. And, you know, I literally went from the back of the room to the front of the room and just started asking questions. The joke was, I begged Luke for a job for about two years, and he was always like, “I don't know what to do with you.” And then I ran into Adam Jacob, who was starting up Chef, which was kind of an add-on to Puppet—he was a frustrated Puppet user. And I got heavily involved in Chef, and about that time, I started working with Canonical. I'll try to keep this short, but I went to work for Simon Wardley over at Canonical when they were introducing their first kind of private cloud—this was pre-OpenStack days. And so I was just hovering around this stuff, playing with Chef, I knew the people from Puppet, and I heard about this conference that was going on in Europe, and they were calling it DevOps Days. And so I reached out to the organizer, who happened to be Patrick Debois, who we now consider the godfather of DevOps. And I actually did a swap—Canonical paid for my travel to go there, and they'd get a sponsor logo for Canonical. Quite frankly, Canonical at the time had no idea what DevOps was, right?
And I went and, you know, kinda the rest is a little bit history, because I was the only American at the first DevOps Days in Ghent in 2009. There were only about 35 of us. I came back to the U.S. and organized the first DevOps Days in Mountain View, and then kinda orchestrated, really, the whole U.S. process and helped people around the world create DevOps Days, and I was considered what was called a core organizer. So I've been involved in the movement, well, forever. As some of you might know, somewhere along the way, I met Gene Kim of The Phoenix Project, and we decided to write a book—Gene Kim, Jez Humble, Patrick Debois, and myself. For those of you who don’t know, The Phoenix Project was a novel, and the idea was a follow-up originally called The DevOps Cookbook, now called The DevOps Handbook. It was really to be a prescriptive guide—you know, you read the novel and you say, “Wow, okay, what do I do?”—and The DevOps Handbook is a book of about 48 case studies, and it’s doing really, really well. It came out last October, and the book has really hit the mark.

Andre: Yeah. That book is doing really, really well. In fact, I interviewed Gene for DevOps Radio back just before the book was released, and he was very excited about it. I'm very happy to see how well it’s doing. So, let me ask you—clearly, you know, those were really early days of DevOps and it was very cool to be involved from those early days. What are your thoughts about where we are today with DevOps compared to back then?

John: You know, it’s a great movement, right? Because at the core of it, I think everybody—a lot of the, I would say, stewards of this movement have been very vigilant in expressing that culture is a big part of this. In fact, one of the acronyms that gets used a fair amount of the time is something called CAMS, and it’s something that Damon Edwards and myself came up with. It’s Culture, Automation, Measurement, and Sharing. And so we're always driving the importance of culture. So it’s kinda interesting—if you look at DevOps from the early days, you can watch the two threads, really, which is the culture and the tools, and we've gotten better at both, you know? So, for the longest time, there was this kind of seesaw, you know, bouncing back and forth between—are we talking too much about culture and not enough about tools, are we talking too much about tools and not about culture? But now, as I look back, the thread of tools has just been amazing, you know—and we can talk about that [Laughter]—and where we are with tools that are just tightly coupled with DevOps. But then we've gotten really better at even understanding the culture. We've embraced a lot more of lean—you know, a lot of this comes from Gene really putting out The Phoenix Project, and then what we learned about things like the Theory of Constraints, and then in the book that we did, the Handbook, we really explored deeper influences of culture on this thing called DevOps, and in the book, a lot of the case studies are based on culture. But my main point is, as I look back at the history, the two threads both have matured really nicely. You know, where we sit today, we have a pretty good understanding of, you know, how we got here from a culture standpoint. A lot of it’s lean, but some of it is resilience engineering, human factors.
And on the tools side, we've gone from, you know, starting off with people building checklists and spreadsheets to things like Chef and Puppet to now kind of immutable infrastructure with things like Docker.

Andre: Speaking of Docker, what’s your role today at Docker?

John: So, I am in a business development group, we call it Ecosystem Development. In the last, probably, five months, I've been working specifically with Alibaba. We made an announcement last year that we were gonna work with Alibaba, and Alibaba was gonna run Docker Hub there and be one of our primary distributors of Docker Datacenter—basically, our suite of tools and products. And so I’ve been working really closely with another person, and we've been really just trying to get that deal shored up, get it all done, and we've been doing some training. So, having a lot of fun going to China and just working with a really different culture. Alibaba has a pretty solid cloud. You know, they're kinda like the Amazon of China, including their retail business, but they also have a really, really good, robust cloud called AliCloud, their cloud infrastructure.

Andre: Nice, nice. So, with respect to Docker, what are your thoughts on how Docker is influencing the technology side of DevOps?

John: Yeah, I think, you know, early on, I had written an article on IT Revolution, Gene Kim’s kind of publishing site, called “The Convergence of DevOps.” And I talked about how DevOps was really a converging of a lot of things hovering around: Eric Ries’ lean startup, you know, web-scale infrastructure, open source management tools, Chef, Puppet—right? These were all happening at the same time, so that by the time we all met in Ghent, there was this kind of super understanding of how things could be different than they'd been for the last 10 or 20 years. I think Docker is a very similar story in that there’s a nice convergence of what I would call super web-scale. So, you know, companies like Google—we find, wow, Google’s been running containers forever. Even companies like CloudBees—you know, the dirty little secret of the PaaS world was they were all running containers.

Andre: Right.

John: Right? And so Docker just happened to expose that as an open source project, you know, how people were running kind of very highly scalable infrastructure running on containers—Docker just made it really easy and put it together. And I think the third convergence is the micro-services and the world of, you know, kinda—a world that was pent up from SOA into kinda domain-driven, you know, 12-factor apps. Then we get to this kind of birth of this discussion of the micro-services. It all hits at the same time. Super web-scale companies starting to leak out how they do things, and in the case of Docker, actually pivoting a PaaS to an open source project, and then, you know, the whole kind of delivery of a micro-service all kinda happening at the same time. And then Docker putting a wrapper on just containers and making them incredibly easy—like, you see this explosion. This’ll be Docker’s four-year anniversary—the birthday is in a week or two, right, so four years. And we've all witnessed an insane adoption of this technology.

Andre: It’s been phenomenal to watch. It’s just been incredible. And in fact, you know, we ran the annual Jenkins survey last year, and we're seeing just huge growth in the number of Jenkins users that are incorporating Docker into their technology stack. Do you see a similar thing, sort of that alliance between Jenkins and Docker?

John: Yeah, I mean, I think the kinda joke Gene says, you know, is that there are unicorns and horses, right? And CloudBees are unicorns, Google are unicorns, right? And so the unicorns were running something nobody else really knew four and a half, five years ago, right? Which was that you could get massively scaled infrastructure at, really, the operational equal cost of virtualization, but with so many benefits that virtualization doesn’t have. So what I mean by operational equal cost—what we've found, and we find more and more every day, is that you truly can run almost every application in a container as opposed to a VM. But, again, the unicorns knew that, but the horses like me didn't, right? And then Docker put this thing together, you know, that made it possible for the horses—you know, with Docker, anybody can install Docker Engine, docker run hello-world—bang, right? Oh, I'm running containers! This is simpler than any other compute technology we've ever seen. So yes. What’s happened, though, is the world over the last four years now has been exposed to this model of simplicity, this model of speed of delivery of, you know, how containers instantiate. You know, the density in terms of how much space it takes, in terms of the image. I won’t give an intro to Docker now, but what we find now is—it used to be, so let’s go to the heritage, right? Early on, the super stories were people running horizontal testing in their CI and Dockerizing it, right? Because you could do this crazy horizontal scaling. And then we started seeing people, you know, kinda web businesses, web-scale businesses, build their infrastructure on Docker—Greenfield. Some of the enterprises started looking at Greenfield. But now we're seeing a lot of what’s called lift and shift.
So actually, people are looking at old legacy applications and basically just putting them in kind of a Docker container, and now, you know, there’s the portability—once it’s containerized, what you've actually done is decouple the application from the OS. So, for example, if a customer is running—remember the old stories of, “Hey, you should put this in.” “Oh, I can’t, because our whole suite is running some old Red Hat or old CentOS and we can’t get rid of that whole fleet, right, because”—

Andre: Those guys are still hearing some of these.

John: - “until we upgrade the fleet”—right. So, with Docker, what we're finding is, Docker is just getting rid of those horrible fleet stories. We Dockerize the application and then, basically, you can put it anywhere. So, once you've kind of decoupled the application—you kinda couple it to kind of a root file system that sort of looks like the requirements that you need for the application—now you can take something that was running on a very old operating system and run it on the latest operating system, and when you need to upgrade your operating system fleet, you don’t have to worry about it, because again, you're decoupled. And in fact, in the Microsoft world, this is getting really cool, because in the Microsoft world—like, they have some of that same problem of the kinda fleet upgrade and “my application has to wait for the fleet upgrade”—Docker kinda decouples that. But in the Microsoft world—so Microsoft, you know, over a couple-year period, they built a Docker implementation in Windows Server 2016. So now, when you get Windows Server 2016, you can get Docker. It’s a Microsoft implementation of Docker containers, with the full Docker API suite. But here’s where it gets really interesting. So now, customers that were running old .NET or old Microsoft applications—you know, C, C++, C#, whatever applications—on old operating systems, with large fleets, and have that coupling problem, Microsoft and consultants are helping them go in, Dockerize that application, and now they can just move it to Windows Server 2016. See? Again, imagine—not imagine, it is true—that there are a lot of kind of lift and shifts going on in Windows, which is a win/win for Microsoft and a win/win for the customer.
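A lift and shift of the kind John describes might look like this minimal Dockerfile sketch—the base image, paths, and binary name are hypothetical placeholders, not his example:

```dockerfile
# Hypothetical lift and shift: pin the old userland the legacy binary was
# built against, so the host fleet can move to any modern OS underneath.
FROM centos:6

# Ship the application and the libraries it bundles (paths are illustrative).
COPY legacy-billing/ /opt/billing/

WORKDIR /opt/billing
CMD ["./billingd", "--config", "billing.conf"]
```

The container carries the old operating system's userland, so the application keeps running unchanged while the hosts underneath are upgraded independently.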

Andre: Yeah, it sounds like it’s extreme portability.

John: Yeah, and you know, it’s funny, because you watch the history, you know—you start out with kind of massively scaled continuous integration. Then you start hearing people talking about, you know, all their new Greenfield applications being micro-services and Docker containers. But now we're looking at this, you know, ridiculous, old legacy, and a lot of—I won’t say all, but, like, a lot of those old applications actually can be containerized. And once they are, you get this excellent opex, because, you know, you decouple all that fleet management nonsense. In fact, I'll give you one short story. There was a guy I was talking to at an insurance company the other day—he’s actually taking some old, I mean, old XP systems, and he’s got kind of the Microsoft open source .NET stuff. He’s been able to get those developers to get their application running on Alpine Linux running the .NET interface.

Andre: Wow.

John: Right? Isn’t that crazy?

Andre: That’s unbelievable. It’s amazing how flexible it is and the options it’s offering developers and others in how they utilize it.

John: Yeah. I mean, you know, I was gonna say that people used to say, you know, going back, the world will shrink, right? Like, things happen in almost three year periods now. But if you go like three years ago, four years ago, right, most savvy enterprises or kind of leading edge enterprises were adopting what they called a cloud first implementation, right? In other words, whatever you're gonna do, you have to prove why it can’t run on the cloud, right?

Andre: Right.

John: I'll tell you now, I mean, it’s just starting, but you're starting to hear, like, a container first, you know? Like, prove to me why—if you're gonna do a Greenfield application or if you're gonna do some new application, prove to me why it can’t be on a container first, right? So yeah, to circle back to your original question—the argument against Docker and containers early on was, you know, is it ready for production? Well, we know it is. Well, can everybody retool all their applications to this kinda micro-service architecture? Well, some can and some can’t. But now we're finding, again, with the lift and shift, the growth—you know, whatever your survey said this year, it’s gonna be explosive next year, because you're gonna see a lot of legacy applications now running that will be Dockerized.

Andre: No doubt, no doubt. So John, when you think about DevOps and continuous delivery as an end to end process, you know, where do you see the use cases for Docker across that life cycle?

John: Yeah, I mean, I think—you know, again, I think Dockerizing your CI process—in fact, most of them, you know, some of the products and all the SaaS-based ones, do that already for you. But if you're running your own infrastructure, you know, kinda Dockerizing your pipeline tools is a no-brainer. It’s almost lingua franca at web scale. But then the question is, you know, do you Dockerize the full cycle of your application, and the answer is probably yes. And so, to me, it’s really interesting in that, you know, if you went a year ago and you looked at most reasonable GitHub projects that, you know, you figured were kind of DevOps savvy, you’d find something called a Dockerfile, right? And so, almost every project you would look at, you’d see, “There’s a Dockerfile.” So the idea of at least having the option to deliver a containerized version of some repo that represents some application, right, was pretty obvious. The thing that I get really excited about is a product that’s been a little harder for people to adopt and understand, something called Docker Compose. And Docker Compose is a way to build kind of a service abstraction. You know, for those of you who probably know—if you don’t, a Dockerfile is how you construct an image, a Docker image. But it’s just the image; it doesn’t give you the information of how to operate it or create a service delivery operations structure—you know, “it has to have these four servers.” Compose is a YAML-based, very readable abstraction to say, I'm gonna run—like, if I wanna run a LAMP stack, I would run my proxy, you know, maybe HAProxy, I’d run my web tier, I’d run my database, and I would define that all in this YAML definition. And this is where it gets really interesting—I define that as a service, then I also can define network segmentation.
Like, for example, what VMware has classically called micro-segmentation. Docker has a really robust networking plug-in architecture, and one of the network plug-ins is overlay, which is actually kind of a VXLAN, SDN-ish solution for multi-host. So I can literally take my kinda load balancer and attach it to kind of a front-end network only, then I can take my web tier and have two interfaces, so I can have a front end and a back end, and then I can have my database on just a back end, right? And so, what you have then is, basically, you've got this segmentation so that anybody compromising the load balancer can’t—you know, so it’s the idea that you can define your kind of compute structure, your network structure, and then also your volume structure in this service, human-readable definition. So you can look at it and say, “These are all the components of my service, here’s its network configuration, here’s its volume configuration.” It’s kinda unparalleled. I know I'll get in trouble with somebody, but the idea that you can define a converged infrastructure in one human-readable file—and then, by the way, it tightly integrates with our Swarm product, which is actually embedded in Docker Engine since 1.12, so now you have this service definition that you can drive right into a container orchestration cluster run by Swarm as a service. And so you have all this composability, you have the abstraction, how you replicate, how you update—all that now is all put together. And we actually deliver that at the enterprise level in something called Docker Datacenter. But now—everybody who’s listening by now knows I'm pretty long-winded—what I'm seeing now is Docker Compose YAML files in people’s GitHub projects. Which tells me, you know—one last thing about the difference between having a Compose YAML or not: the Compose YAML then becomes an artifact that is source controlled.
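The kind of Compose definition John is describing—services, network segmentation, and volumes in one file—could be sketched roughly like this (service names, image tags, and the volume setup are illustrative, and the overlay driver assumes a multi-host setup):

```yaml
version: "2"
services:
  lb:
    image: haproxy:1.7
    ports: ["80:80"]
    networks: [front]          # load balancer touches only the front network
  web:
    image: nginx:1.11
    networks: [front, back]    # web tier has two interfaces, front and back
  db:
    image: mysql:5.7
    networks: [back]           # database is reachable only from the back network
    volumes:
      - dbdata:/var/lib/mysql
networks:
  front:
    driver: overlay            # the VXLAN-based multi-host driver John mentions
  back:
    driver: overlay
volumes:
  dbdata: {}                   # a driver: entry here could point at distributed storage
```

Only the web tier sits on both networks; the load balancer can never reach the database directly, which is the micro-segmentation point.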

Andre: Nice.

John: So now you're taking all your operational definitions—your service definition, your network definition, your volume definition—and you've put it in that one place that can be treated as source controlled, you know, a version controlled artifact, and delivered via the pipeline.

Andre: So, supporting the notion of “everything is code.”

John: Yeah, and this is—we've been talking about that. That’s an early—good point. I mean, that’s an early kind of mantra of “everything is code,” but we were like, “Yeah, but where do you really put the network stuff?”

Andre: Right.

John: You know? How do you really define the volume, you know, especially if you're using distributed volume storage stuff? And yeah, you could—you know, I did some network DevOps a few years ago where we put switch configurations in kinda GitHub. But now, all that is coupled to the service. So it really is, “Here’s your service, here’s your network definitions, here’s your volumes,” and the network definitions can be as complex as you need them to be—you know, you can be using Ceph and some, you know, really exotic distributed file system, all defined right in that single YAML file. I mean, again, I try not to be a fanboy of Docker, [Laughter] because I'm typically not the fanboy kind of person. But when I start thinking about integrating Compose with Swarm, it really solves—I'll say one nice thing, ________. When I was at that DevOps Day in Ghent, I literally was, like, almost at the, like, “Should I be switching careers and selling shoes for a living?” I was so tired of IT operations infrastructure and the way we had been doing things. And I show up at this event in Ghent with 35 people I didn't know—now they're all dear friends of mine—and I'm hearing this excitement and seeing, like, “Oh, my goodness, things could be done completely different.” I feel that way, honestly, today about kinda combining Docker, Compose, and kinda Swarm all together. I feel like it’s solving all these problems in one single, simple way that just, you know, that just kinda bugs us as IT people.

Andre: So I think that brings up a really interesting question, and that is, for the future of IT ops—for ops, does that mean they need to have a little more understanding of some of these new techniques and technologies, and shift more towards a developer type?

John: Well, you know, there’s two schools there. There’s actually kinda three schools, right, because there’s a school that’s out of school. [Laughter] But then there’s kind of the—I call it the Adam Jacob school, because we've had this debate and fight forever. We're dear friends, but we definitely argue a lot. You know, Adam Jacob, founder of Chef—I don't know if he still says this, but he would say that if you're not a polyglot and you don’t code three or four languages, as an operations person, you shouldn’t be doing it, right? And again, if he’s not saying it any more, I apologize. And that’s also a pervasive way of thinking for a lot of web-scale companies—even small ones, like startups, particularly ones where people used to work at Facebook or Google. The interview process for an operations person is the same interview process as for a developer. In other words, you gotta write code. You gotta sit there in the interview and write code, and they're gonna ask you some really kinda hard developer questions, you know, about binary tree traversal—I don't know, crazy stuff, right? And if you don’t pass, you don’t get into operations. So there’s a lot of companies that have that line in the sand. Now, I'm on both sides of the fence here. I think it’s great to have that, but I don't think it’s mandatory. So I don't think everybody in operations should be a polyglot, should pass the Google developer test. It doesn’t hurt if you can do that, and it’s probably, you know, like this whole notion of a site reliability engineer, right, which is this person who is a superhuman person, right? They can develop, they can understand the infrastructure, they can understand all compartments of the infrastructure, they can talk at length about Chef or Docker or any kind of Linux-based, deep configuration item.
I do believe, for the enterprise, that’s just a hard pill to swallow and maybe a bridge too far. So, there’s one school that says, “Hey, I don't need any of that, it’s just operations”—that’s a bad school. There’s Adam’s school, which is, you know, you've gotta be a polyglot, you've gotta be a coder and go down the SRE path. I love that path, but I do think there is a middle ground that I subscribe to, which is—and the answer is, you better know how to do shell coding, you better know Linux administration, and you better understand the pipeline.

Andre: Right.

John: So you better be able to go in and look at a repository and understand, you know, how they're doing feature toggling or if, on the back end, they're doing blue-green deploys. So, I don't care if you can’t pass the developer coding test, but I do think today it’s mandatory for you to be able to look at a repo and to be able to explain an application in really any language. Now, you don’t have to code it, but you look at a Node application or a Java application—for the Java one, that person ought to understand what a pom.xml file does for Maven, right? They ought to be able to understand how they're doing kind of TDD for a Node application, right? I think you need to know all that. You may not know it all now, but you need to know that. So again, I hope I'm making sense here—I think there’s a world for people who can look at code, look at feature flags in code—not that they're gonna write them, but they're definitely gonna need to know them. Am I making sense?

Andre: No, absolutely. I think the point you made about pipelines was really central to all of this, and that is, if you don’t understand the pipeline—how the pipeline is constructed and models the life cycle of the application—that’s got to be everybody’s center point in today’s world of DevOps.

John: Totally—yeah, yeah.

Andre: No matter what role you play, right?

John: Yeah. We just don’t need to get caught up that you need to be a super coder, right? I think there’s a middle place there, so.

Andre: Yeah, but I think the notion of everything as code, and being able to version it, and understanding, you know, that changes all need to be put through version control—I think those are key tenets.

John: Well, let’s put it this way—you know, it’s funny. I've got a little anecdote, a story, here. When I was first at Chef—I was the ninth person at Chef. I came in after kinda all the founders and a couple of engineer hires from Amazon, and then I came in. I was brought in to build the customer-facing business. So, there’s a guy I hired that used to work at Amazon, and we put some training together, and the first thing he did was, he was making everybody learn Git. And I was not a believer yet. [Laughter] This was late 2009 into 2010. And I fought hard to get that yanked out of the class, because I thought, “If we're going to enterprises and we're gonna teach them Git, they're gonna turn off, right, and they're not gonna learn Chef.” And I had a valid point, but now I look back, and today, it is absolutely mandatory—whether you know how to code or not, you have to know Git.

Andre: Yep.

John: Yeah, I think that’s a—I don't think you can do any job that uses the word DevOps where you don’t understand the basic workflow and where it starts and how Git plays the role in that beginning portion of, you know, the commit process and how you do things.
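The basic workflow John is talking about can be sketched in a few commands—repo name, file, and branch are illustrative, not from the interview:

```shell
# Minimal Git workflow sketch: the commit cycle any DevOps role should be able to read.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email "dev@example.com"   # local identity just for this example
git config user.name  "Demo Dev"

echo "web: docker run myapp" > Procfile   # any operational artifact lives in version control
git add Procfile
git commit -q -m "Add service definition"

git checkout -q -b feature/tls            # branch per change
echo "tls: on" >> Procfile
git commit -qam "Enable TLS"              # commit the change on the branch

git log --oneline                         # two commits now in history
```

From here, a push and a pull request would feed the commit into the pipeline, which is the "beginning portion" John refers to.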

Andre: Yeah, I completely agree. So, you know, as you look to the future, where do you see things going with containers?

John: Well, there’s kinda two things that I think are interesting. One isn’t quite there yet, so it’s more of a wish list for somebody else to kinda build, which is, we really need a super abstraction for the pipeline. And there’s bits and pieces of it, but I just—I finished, early this year, a course for the Linux Foundation called Introduction to DevOps. There’s actually two—there’s a Microsoft version; mine is not the Microsoft version. But it’s a full—it’s about 15 hours of videos, and if you follow all the recommended readings, it’s like 100 hours of work, [Laughter] but you don’t have to do that. But as I went through the class, you know, I realized—like, today, people are doing amazing things. Things like CloudBees and Java applications and Node applications and test-driven development, behavior-driven development—you know, this language is using a tool like Artillery, this one’s using kind of Cucumber with this, right? And it’s really hard, right? I mean, as I went through this, I'm like, “My goodness!” And then, by the time I learn it, the cool kids are, like—there’s 10 other products that I haven’t heard of, right? I mean, I think that in order to get the world glued together, we really need a super abstraction for, you know, “Here’s the story, here’s how you build the kinda test-driven, behavior-driven parts of the story, here’s how it goes into the pipeline.” You know—and then that abstraction could basically kinda overlay all these incredible complexities of, you know, how this tool gets configured in Jenkins and, you know, how this TDD tool gets installed, whether it’s gotta be npm or it’s Ruby-based and has a—I mean, just, you know, it’s a little scary. And I'll leave that for somebody else to solve. But the other one that is more kinda in our face right now is this whole DevSecOps movement.
This is incredibly exciting, and this is real stuff, where people are finally getting the security people to think of themselves as being part of the software delivery chain. Instead of, like, we get done and then we pass it over to the security people, the security people basically start injecting into the pipeline, you know, static code analysis, vulnerability scanning. Just general usage, you know—if you're using Amazon, are you making any really big infrastructure mistakes? Even—I've seen people do things like, you know, with Docker, you can create your own inheritance of image structure, which is actually recommended, right? Instead of using kinda off-the-street container images, you know, you kinda build your base image, and then you build your kinda base Java image, then you build your base Tomcat image, right? And then you have all these threads, like your base Ruby image. And actually, people are running scripts to inspect that you're not using any kind of out-of-band image inheritance.
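The in-house inheritance chain John describes might look like this sketch of three Dockerfiles (the registry name and tags are hypothetical; each layer is built, scanned, and pushed internally, so app teams never pull off-the-street images directly):

```dockerfile
# Dockerfile.base — hardened in-house OS layer
FROM alpine:3.5
RUN apk add --no-cache ca-certificates tzdata
# pushed as registry.internal/base:1

# Dockerfile.java — inherits only the in-house base, never Docker Hub:
#   FROM registry.internal/base:1
#   RUN apk add --no-cache openjdk8-jre-base
#   pushed as registry.internal/java:1

# Dockerfile.tomcat — built on the in-house Java image:
#   FROM registry.internal/java:1
#   ADD tomcat-8.tar.gz /opt/
#   pushed as registry.internal/tomcat:1
```

A pipeline script can then fail any build whose `FROM` line does not point at the internal registry, which is the out-of-band inheritance check he mentions.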

Andre: Right.

John: Right? Again, that whole notion of, like, you know, static analysis, making sure you have the auditability of the process, chain of custody, vulnerability scanning—and I could go on forever, but there are some really great, large enterprises now that have these—either they'll call them DevSecOps or they'll call them Security SDLC pipelines. And I think this is really exciting, as you're just shimmying in all these checks. And there’s an open source project called Gauntlt, right, which actually uses Cucumber to do things like nmap scans and checks for all the different attack vector types that could be in your code. So I think that’s really exciting.
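A Gauntlt check is written in Cucumber's Gherkin syntax; a sketch along the lines of its published nmap examples might look like this (hostname and ports are placeholders):

```gherkin
Feature: Port scan the service before release
  Scenario: Only the expected web port is open
    Given "nmap" is installed
    And the following profile:
      | name     | value       |
      | hostname | example.com |
    When I launch an "nmap" attack with:
      """
      nmap -F <hostname>
      """
    Then the output should match /80.tcp\s+open/
    And the output should not match /25.tcp\s+open/
```

Because the check is a plain-text scenario in the repo, it rides through the pipeline like any other test.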

Andre: Yeah. The discussion almost reminds me of the quality discussion, you know? You've got to build quality in from the beginning—you can’t just do testing at the end, you've gotta think about it all through the process. So I think what you described is a similar trend that’s happening now with security. Security needs a [Cross talk].

John: Yeah. We're just trying—it’s a pipeline, right? We deliver the service, and you're right, because originally, there was a lot of arguments about, like, “Well, you know, the QA folk, I feel like they're left out because of this continuous whatever stuff,” right? And then we're like, “No, no, no, you're not left out. Just inject your quality into the pipeline.”

Andre: Right.

John: Right? It’s you that puts it there—we're not doing it. Instead of us reaching out to you, you just put it in. And you're right, we're doing the same thing with the security people now. We're just saying, “Hey, security people—you're part of this.” I heard a story the other day—it was actually Aetna. I was doing the keynote of the DevOps track at RSA a few weeks ago, and one of the guys from Aetna gave this amazing presentation about how they have their kinda security SDLC pipeline. So it’s their normal software delivery pipeline, but they've containerized it—they've Dockerized it. They have kinda infrastructure scanning, like making sure there are no mistakes in terms of how they implement Docker and the image stuff, making sure that they use in-house images, making sure that file systems are read only in the container, vulnerability scanning of the image against the NVD database, static code analysis on commit. I mean, just all this in the pipeline. And he was talking about some of their production applications. There was one production application where, when they started this initiative two years ago—Dockerizing it, building kind of a DevSecOps pipeline and an immutable delivery model, meaning that when they build the container from the development perspective, the binary is immutable all the way through—they measured security defects, Sev1s and Sev2s, per 10,000 lines of code. They started out with 5 defects per 10,000 lines of code. They went ahead and added in some kinda supply chain stuff and got it down to, like, two. Then the immutable delivery with Docker got it to 0.1. When they got into the kinda security stuff, they got it to 0.01. And now they have this one application that is actually at 0 defects per 10,000 lines of code.
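The kind of pipeline described here (static analysis on commit, an immutable image build, and a vulnerability scan before anything ships) could be sketched as a declarative Jenkins pipeline. The tool choices (`sonar-scanner`, `trivy`) and the registry name are illustrative assumptions, not details from the Aetna presentation:

```groovy
pipeline {
    agent any
    stages {
        stage('Static code analysis') {   // triggered on every commit
            steps { sh 'sonar-scanner' }  // any SAST tool fits here
        }
        stage('Build immutable image') {  // the same artifact flows through every stage
            steps { sh 'docker build -t registry.example.com/app:$GIT_COMMIT .' }
        }
        stage('Vulnerability scan') {     // image layers checked against NVD CVE data
            steps { sh 'trivy image registry.example.com/app:$GIT_COMMIT' }
        }
        stage('Push') {
            steps { sh 'docker push registry.example.com/app:$GIT_COMMIT' }
        }
    }
}
```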
And that’s not a fantasy zero—it’s a zero that’s been running for, I forget how long, let’s just say six months, that has been pen tested and bug tested, and it remains at zero.

Andre: Wow.

John: So you're talking about taking an application from 5 defects out of 10,000 down to 0.

Andre: That’s pretty impressive.

John: This is how powerful it is when you can get, you know—and then you get all of the features of kinda continuous delivery: every commit starts the build process, right? Including security—they have security stories, right? This is where, you know, when you start delivering applications where not only the software gets fast feedback and resilience, but the actual security—security as code, or security as the pipeline—gets fast feedback, we start getting super secure and resilient. And one last thing he said to me. He said, “You know the reason we love Docker? Because now, the way we look at things is: one service, one Docker container, one read only file system, and one port.” Think about your services now—every service is basically this read only file system that’s in a container, I know where it is, I have metadata to find it, and that service only requires one port. So now you've created this beautiful delivery of your services that are clean and less likely to get attacked.
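The "one service, one container, one read-only file system, one port" pattern maps directly onto Docker's runtime options. A minimal Compose sketch, where the service, image, and port are made-up examples:

```yaml
# docker-compose.yml: one service, one container, one port, read-only root
services:
  orders:
    image: registry.example.com/orders:1.0.0  # immutable, in-house image
    read_only: true       # root filesystem is mounted read-only
    ports:
      - "8443:8443"       # the single port this service exposes
    tmpfs:
      - /tmp              # writable scratch space, if the app needs any
```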

Andre: Right. Sweet, sweet. John, is there anything that I didn't ask you that you’d like to talk to our audience about today?

John: You know, I think we covered a lot of ground. I think, you know, for those—The DevOps Handbook, I mean, again, I'm one of the four authors, but the other three authors are absolutely amazing people. So even if you listened to me and thought I was an idiot for the last half an hour, you know, a good portion of that book was written by Gene Kim and Jez Humble, who’s kind of the continuous delivery guru, and then Patrick Debois, the godfather of DevOps. So, 48 case studies. You know, what’s interesting is, in the early days of DevOps, people would come and hear me present and say, “John, I work at a big company, and it’s gonna be hard to get these ideas in.” And I’d say, “Here’s what I want you to do. Go buy a physical copy of The Phoenix Project and give it to your boss, and then keep pestering him: ‘Did you read it? Did you read it?’ At some point, they're gonna feel guilty, they wind up reading it, and they get this ‘ah ha’ moment.” Well, now we're actually finding out that The DevOps Handbook works even better, because it’s the same hack, right? “Hey, get your boss a physical copy”—and the reason you get a physical copy is because they feel guilty that you bought it for them. You know, by the third time you ask them if they read it, they're like, “Oh, I didn't read it.” Then they're like, “You know—you know what? I gotta read that book. This guy went out of his way to buy it for me.” But what really resonates with The DevOps Handbook is the array of different companies. You've got web-scale companies, you've got 120-year-old companies like Nordstrom, Target, Disney, Barclays, manufacturing companies like CSG, you know, print manufacturing. And so, no matter who your boss is or what your company is, they're gonna see a business similar to theirs.

Andre: As a story that [Cross talk].

John: And then they're gonna be like, “Oh, wow.” So, The Phoenix Project you kinda relate to because it’s a meta-story about, you know, an old project that’s overdue, people are gonna get fired, and they're like, “I get it, I get it.” And it helps. But when you read The DevOps Handbook, if you're in retail, you’d say, “Well, wait a minute. Nordstrom is our competitor. They did it, right? Or Target, or a large insurance company, or a large FinTech,” right? So it’s a really great book if you're struggling to kinda sell DevOps up the chain. I know, again, I make money off it—not a whole lot. But the point is that it really is an effective tool if you're struggling to sell the concepts of DevOps upward. This book seems to be reasonably effective in kinda breaking that ice.

Andre: I think both books have been great tools for many individuals and many businesses. John, thanks so much for joining us today. I was thrilled to have you with us. We had a great discussion, and we’d love to have you back some time.

John: Yeah, no, I loved it, it was fun. In kind of the great words of the famous ________, “Hope I didn't suck.” [Laughter]

John: No—thanks a lot.

Andre: Thank you, John.

John: Alright. Bye bye.

Andre: Bye bye.

Andre Pino

Your host: Andre Pino, CloudBees (also sometimes seen incognito, as everyone’s favorite butler at Jenkins World!). Andre brings more than 20 years of experience in high technology marketing and communications to his role as vice president of marketing. He has experience in several enterprise software markets, including application development tools, middleware, manufacturing and supply chain, enterprise search, and software quality and testing tools.