Kelsey Hightower On Kubernetes - How to Build It, Use It and Try It Out
In this episode of DevOps Radio, Sacha Labourey sits down with Kelsey Hightower, staff developer advocate for Kubernetes and Google Cloud Platform, to discuss the latest in open source container orchestration.
Sacha Labourey: All right. Welcome, everybody, on this new episode of DevOps Radio. I’m Sacha, CEO of CloudBees. Today, I’m with Kelsey, Staff Developer Advocate at Google Cloud Platform. Hi, Kelsey.
Kelsey Hightower: Hey. How you doing?
Sacha: Very good, very good. Thanks for joining us for this new episode. So I’d like to start with something pretty easy, because normally when I talk to somebody, I love to hear where they come from, what they’ve done before, and I could not find a LinkedIn page for you. So you don’t have one?
Kelsey: No. On purpose, I don't have a LinkedIn page. Maybe years ago it was probably something that would, you know, help me network with other people. But these days, my circle of folks is pretty large, so I found very little use for LinkedIn. Also, LinkedIn just ended up becoming me trying to be nice and accepting everyone's LinkedIn request, even though I didn't know them. So over time, I just had thousands of connections to people that I didn't even know, so I stopped using it.
Sacha: Oh, wow. Yeah, all right. Well, so you don’t really exist, right, if you don’t have a LinkedIn page.
Kelsey: [Laughs] yes, but I am definitely a real person. I guess my background, for those that are interested: out of high school I kind of started my own tech company doing tech support. I had a computer store. So this is back in the day, like '99, when people were building computers themselves, buying parts individually, putting them together. I did that for several years. And continuing with my tech career, my first job, I guess, was really at a Google data center in Atlanta, so working inside of a large Google data center. At that time, I thought, wow, this is what all data centers look like. Throughout my career at various jobs, whether it was system administration, Web hosting or the enterprise, I kind of understood that Google was pretty unique in terms of its size and scale. Not every company needs all that. And I spent some time in enterprise IT, doing the things people now call DevOps or digital transformation. Before we used those labels, there was just a group of people getting things done, and I think those labels try to represent that. Then, I guess, rounding out my career, working in the open source space: Puppet Labs and configuration management, CoreOS and containers, and now at Google, doing a lot of that stuff from over my career on the Google Cloud Platform.
Sacha: That’s really cool. So, essentially, you started – you moved from building your own computers to building a bigger computer called the data center, which was kind of the same, but bigger scale.
Kelsey: Yeah, all of that just builds on experience. I think the key ingredient from all of this is the experience of seeing different companies do different things with different needs and making it work for everyone else.
Sacha: Yeah, that's really cool. So you're obviously a big advocate for Kubernetes. You even have a project called "Kubernetes the Hard Way," which I thought was very funny. You also have one called "No Code," which I encourage people to go take a look at. So could you tell us how you got to Kubernetes and how this long story started?
Kelsey: Yeah. If you take the titles away, so this whole advocate thing and all of these things, those titles kind of imply that I should be doing a particular thing, and that's not necessarily how I treat the role or any of my previous job titles. So when it comes to Kubernetes, there's an open source project created by Google, and when I was made aware of such a project, I checked it out and started to just use it on my own and decided that I personally liked it. So it wasn't necessarily a company initiative. It wasn't something I was paid to do. It was something that I thought was interesting, and my first contribution to that project was a blog post in the form of how to actually use it, how to install it. There were no install instructions. How to build it, how to make use of it. And given my previous experience in this space, I gave people an example of how to try it out. I think that kind of kicked off other contributions that I would make into the core, helping to try to stabilize it, clean up the code. Then, since I was very excited, I decided to go out into the community at Meetups and show people, "Hey, look at this really awesome thing. I know it's early. I know it's not ready. But I think this looks like what we should all be doing." So that's how my entry point to Kubernetes started. It was very organic, very community-oriented, and that guide, "Kubernetes the Hard Way," really represents me still wanting people to know how it works. So there's all these Kubernetes distros. This vendor wants to sell you this. This vendor wants to sell you that. But at the end of the day, you as an individual need to know how it all works. So that's why there's "Kubernetes the Hard Way."
Sacha: That's really nice. That's interesting, because you were not officially assigned to Kubernetes as a project. It's really something that you fell in love with and became passionate about, quote/unquote, on your own, right.
Kelsey: That's right. The things I normally talk about are the things I care about. If I don't care about it, you will never catch me on stage talking about something that someone told me to talk about. I'm an engineer first. I'm a product person first. I like to help customers succeed. I'm deep into the business aspects. What you see on the stage is just my time to check in and say, "Hey, here's what I've been playing around with. You should check it out." That's what it is. It's not necessarily to tell you something or to try to influence you to do things in a particular way. It's just the way I share, and we have been given a stage to do that on.
Sacha: Yeah. Now Kubernetes is still young, right. It's barely more than three years old in terms of its open source roots, and yet we've seen the rise of Kubernetes and this huge adoption. So based on what you're seeing in the field, since you are meeting with lots of organizations, what would you say is the current adoption phase of Kubernetes?
Kelsey: I think Kubernetes, I guess, has been out for a little over four years now. The reason why I think it's such a good project to begin with is because it was built from experience. Google is doing this thing internally. You also have companies like Mesosphere doing things internally, Red Hat doing a lot of things with their customers in this space. And what you saw was, day one of that project, they made some important choices. One, build on top of Docker, which had already been out and gaining adoption. So it wasn't like we were starting from scratch with the entire system. They built on top of etcd. If you think about it, both of those are built on top of Linux. So a lot of the things that were underneath were pretty stable. What Kubernetes does is put an API on top of all of that, and the API is based on this idea that we can manage application containers on top of a distributed system and provide a new set of primitives. So when you create, let's say, a configuration file in Kubernetes, what you're doing is just articulating to the API, "Here's a collection of data that will represent my config," and when you run your application it just copies it to the same file system you were using before. So it's not like we've started from scratch with the entire platform. We just put an API on top. So given that, when people started to use it for the first time, while the API may be new, the end results were very familiar, especially if you'd done anything with containers or configuration management. You start to get the idea that this was just a new way of thinking about the problem: automating things end-to-end, very focused on the application and not necessarily the machine, like our previous tools.
Sacha: Right. So I'd like to get back to this new API, this new abstraction you are talking about. But first, in terms of where Kubernetes is at in terms of adoption: I feel like everybody is talking about it, but the question of adoption as a production-ready technology is not so much about whether Kubernetes is ready, because I think it has proven that it's very strong and very much able to cope with very big loads. It's more about the transition to Kubernetes and organizations absorbing this new layer. So where would you say that is today?
Kelsey: The thing is, just speaking from Google Cloud alone, we have large retailers that you buy stuff from, that were considered traditional enterprises years ago, using Kubernetes in production. Some of those retailers are using Kubernetes in their stores. Then you have some banks that are using Kubernetes in production, right. You have the startups. You have small, medium. You have all different types of companies that defy this abstract label of the large enterprise that supposedly cannot move to new technology. So Kubernetes can support a range of workloads in production. That is, I think, no longer up for debate. Maybe three years ago we could have said this, but for the last two years we've seen people making real money with Kubernetes in production. Now what's immature is one area that has always been immature, in my opinion. Most enterprises attempt to buy technology that they believe they're gonna use for 10 or 20 years. They buy something like WebLogic or Oracle, and they believe that they can just use one version for a decade and, in some cases, vendors attempt to accommodate that. So when new technology comes out, they believe that they can take their existing ten-year-old application that is no longer maintained, and just simply pick it up and move it into something like Kubernetes, assuming Kubernetes wants to solve all of those problems. So what happens is I think a lot of people don't take the time to learn Kubernetes, and say, "If it doesn't solve all of my problems from a decade ago, it is not mature." To me, I think that is just unfortunate, because we need to learn from the past. The goal isn't to introduce new technology and recreate the past every time. So I think a lot of frustration comes from the enterprise, and this is not the first time. We saw this with virtualization. We saw this with the cloud itself. We saw it with Docker.
Now we're seeing this with Kubernetes, where people believe that they get to dictate the pace of the future of technology, when it isn't true. So some of these new abstractions, while Kubernetes does support a lot of existing applications, I hate to use the word legacy, but here's the thing: if your application is unmaintained and you don't know how to build it, you can't magically put it in a Docker container and then just run it on Kubernetes. If you can't build your app now, whether you have Docker or not, that's kind of a precursor. This new world assumes that you can recompile your app. And I don't think that's a new and shiny feature. That was just always a good way of building software.
Sacha: Right. So essentially you would argue that if you want to move to Kubernetes, you either need to start with a new application or have an application that is sufficiently maintained and has some flexibility to adopt some of Kubernetes' concepts, maybe not move completely to a new architecture, but at least have some flexibility in adopting it, and not just consider this as a new runtime that will have a one-to-one mapping with the, quote, unquote, classic way of doing things, right.
Kelsey: Well, let's talk about the requirements for Kubernetes. Your application: can it run on Linux? If the answer is yes, your application will run in Kubernetes. That's it. That's all. The reason why people think that there's something different is because they don't understand how Kubernetes works. So if I took an app that runs on Red Hat 5, right, this is the 2.6 Linux kernel, I don't care what language you've written it in: Fortran, COBOL, you pick it, your application is making system calls. Java, it doesn't matter. You're making system calls to a kernel. That's what you're doing. You're not doing anything else. Your application takes input. It may take HTTP input. It may read from a message queue. It may load data from a file. It doesn't matter. That's what you're doing. So to take that application that you wrote ten years ago, the first step is you need to be able to put it in a Docker container. How do you put things in a Docker container? You specify the build process. You can import Red Hat 5 as a base image, so that way your libraries are all the same as ten years ago. You can do that. Everyone can do that. So the next step is you need to know how to build your app, whether that's go build, if it's a Go app, or, if it's a Java application, you need to know how to build a JAR file, maybe stuff JBoss in there, and you can just run that. If you can do that, Docker will take your existing application, I've even done it with Fortran, and it will spit out a tarball. We call them Docker images, but it's a tarball, which is 30-year-old technology. It will take the tarball and put a file in there for metadata, and we can push this tarball to a registry. That's it. Your old application is now in a tarball, in a format that we can all now standardize on. The second thing is you tell Kubernetes how you would like to run that application. That's where it gets a bit tricky.
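The build recipe described here, an old base image plus the artifacts your existing build process already produces, can be sketched as a minimal Dockerfile. Everything in it, the base image tag, file names, and start script, is an illustrative assumption, not from the transcript:

```dockerfile
# Illustrative sketch: containerizing a decade-old Java/JBoss app.
# centos:5 stands in for a Red Hat 5-era base image so the old libraries match.
FROM centos:5

# Copy in the artifacts your existing build process already produces.
COPY legacy-app.jar /opt/app/legacy-app.jar
COPY jboss/ /opt/jboss/

# Start the app the same way you did on the old machine.
CMD ["/opt/jboss/bin/run.sh"]
```

`docker build` then turns this into the tarball-plus-metadata image Kelsey describes, which you can push to a registry.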
So by default and for security reasons, we restrict what that application has access to. And this is where I think the enterprise starts to get a little confused. Today, people deploy their applications as root. The application has access to the entire file system, access to all these libraries, and most people don't even know what the app is doing. In the container world, we restrict what it can do by default. It cannot just come up and look at all the file systems. It can't come up and do all kinds of crazy things by default. So we do isolation. Once you understand that piece, then you say, "Hey, Kubernetes, I trust this application to make a broader set of system calls that may be deemed insecure." For example, if you want to just, you know, take a big hammer to this problem, you could just run your container as privileged, and this will give it root access to pretty much everything and loosen up some of the container constraints. So you can take that ten-year-old application, if you had to, run it as privileged, and it would probably behave very similarly to what it was doing on the old machine where you got it from. Then Kubernetes will just keep it running. I guess the last part, before I wrap, is there's a dynamic nature to Kubernetes, which is every pod, or every container set, will get its own IP address. Again, this is something people are just not used to. People are used to taking their app and copying it to a machine that has a fixed IP address for its lifetime. So what Kubernetes does is say, well, every container will come up with its own IP address. The benefits are you don't have to worry about ports and port conflicts, like I used to do in the JBoss and Java world. But the side effect is your tools need to understand dynamic updates.
So these are all things, I think, that once people really put their hands on Kubernetes, they'll really understand the technology, instead of saying digital transformation and container strategy. This is just Linux with security constraints based on best practices. So we have to decouple that from the maturity of the technology.
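The "big hammer" Kelsey mentions, running a container as privileged to relax the default isolation, might look something like the following pod spec; the names and image here are illustrative assumptions, not from the transcript:

```yaml
# Hypothetical pod spec: running a legacy app with loosened constraints.
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: legacy-app
    image: registry.example.com/legacy-app:1.0
    securityContext:
      privileged: true   # the "big hammer": near-root access, most isolation constraints relaxed
```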
Sacha: What about data?
Kelsey: So, when it comes to data, again, I think there's a big confusion. With binaries, if you just take a binary and you put it on a VM, the binary itself, which is analogous to a container, has never had the data in it. Binaries are always considered stateless. That was a good design practice. You do a build. You have a binary. You copy the binary to where the data is. So if you have a VM, you attach a volume to the VM or you copy the data to the VM in a particular directory. That's how you do it on Linux, right. We've been doing that for 30 years. You copy the binary. The binary has no idea about data. Only when we expose the binary to a data path does it write data to that path. If you copy the binary to a different machine and you bring it up, the data won't be there. So you either have to replicate the data, keep the data in sync or use an external database. This has always been true. With containers, it's the same story, except, remember, by default we do not allow the container to access the entire file system. Nothing has changed. If you start a binary inside a Docker container, you are responsible for mounting in the data path. If you mount in the data path, then it works like it did 30 years ago. Nothing has changed. There's nothing new. The problem is, if you run a container, it has its own view of the file system. So it also has a /data, if you want, and if you write to that temporary /data, because you're isolated, when you restart the process, by default, /data will be gone. So again, most people didn't take the time to understand how isolation works and why we do it, and they started to write data into the temporary data directory, and when they restarted the process the data was gone. So in their mind, they say, "Wow. Docker doesn't support data services." This is like, what do you mean? You just mount the data directory and it works like it did 30 years ago.
That’s really why people had this idea that Docker or a binary has anything to do with data. It does not. So you can run MySQL on a single machine, in a container, and mount in the MySQL folder and it will work the same way you were using it before.
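As a concrete sketch of that last point, mounting the data directory into a MySQL container on a single machine could look like this; the image tag and paths are illustrative assumptions:

```yaml
# Hypothetical example: mounting a data directory into a MySQL container,
# analogous to pointing a binary at /var/lib/mysql on a plain Linux host.
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql   # without this mount, writes land in the container's ephemeral filesystem
  volumes:
  - name: data
    hostPath:
      path: /srv/mysql            # single-machine example; a PersistentVolumeClaim is the clustered equivalent
```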
Sacha: All right. You talked before about APIs. You were talking about how Kubernetes essentially offers more of an application-centric type of API versus a more traditional, infrastructure-as-a-service type of API, talking about cloud, you know, the AWS type. That's a pretty big shift, right, and it has potentially the power to shift some workloads, some mind share. What's your perception on this shift we're seeing?
Kelsey: So we have to understand what APIs are, right. An API is usually thought of as an abstraction that hides a bit of complexity based on a data model. So if you have a database with hundreds of database tables that represent a user, we don't want our customers giving us SQL queries. That would be dangerous. So instead, we give them an API. Here's a user API, and if you want to create a new user, you can just post some data about the user, and then the application will handle the complexity of what it takes to store and retrieve that user. That's what the API is. It's an abstraction. So when you think about the computing scene before Kubernetes shows up, we really didn't have great APIs. We had automation, which is: create this virtual machine for me. Create this firewall for me, and, in some cases, deploy this application onto the machine for me. But the APIs you were working with, if you want to call them that, were RPM, yum install this binary. Then you have the UNIX API, which is: create this user, create these file permissions, and most of those APIs only work with command line tools. So when we look at configuration management like Puppet, Chef and Ansible, they were really wrapping bash command line tools to attempt to give us this kind of API, so we could actually think about automating processes onto machines. What Kubernetes says is, look, let's forget the machine. Let's not necessarily hold ourselves to the UNIX API, the system underneath, all of those things. Let's not worry about those. Let's take on a new set of ideas. Let's assume that the user is going to take their application, Python, Java, Ruby, we don't care. They're going to put it in that container image, again, a glorified tarball in many ways. Since we now have this nice abstraction of what an application is, not a Java application, any application that will make system calls, now we can start to articulate new things.
We can say, "I would like three of these applications deployed." We don't have to say which servers. All we have to do is just say, "Hey, Kubernetes, I need three of these things. Use this container image. Give it this memory. Mount in these data paths," very explicit about what our requirements are. We can take this object, right. Most people define this in YAML files. We can give it to Kubernetes. At this point, that's the API interaction with Kubernetes. What Kubernetes does on the backend, since you don't have to specify which server, is it has a scheduler that says, "Okay. I see ten nodes in your cluster, and I'm going to pick the ones that are healthy, that are working well, and I'm going to spread your three applications across those automatically for you. Then what I'm going to do is, once I figure out the IP addresses of those, I'm going to collect those IPs and put them behind a name of your choice. That way, you can just access them by name." So just in that very simple operation, we've abstracted away all of the things that good system admins used to do, right. Spread the application over multiple machines. If one of the nodes were to fall over, Kubernetes will automatically move or recopy that binary to a new server. So this is the stuff we used to do as system administrators with failover. All of those things are now behind one API that says, "Deploy this application." We already know what the patterns are for doing a good deployment and keeping it running. We don't need 10,000 tools to do that anymore. Hence, the Kubernetes API allows us to articulate all of these best practices in a small YAML definition.
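The interaction described here, declare three replicas with their resources and a name that fronts the dynamic pod IPs, could be sketched in a YAML definition like the following; all names, images, and ports are illustrative assumptions:

```yaml
# Hypothetical sketch: "I need three of these things. Use this container
# image. Give it this memory," plus a name to access them by.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                      # three copies, spread across healthy nodes
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0
        resources:
          requests:
            memory: "256Mi"        # "give it this memory"
---
apiVersion: v1
kind: Service
metadata:
  name: my-app                     # the pods' dynamic IPs are collected behind this name
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```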
Sacha: All right. Another hot topic we hear a lot about is obviously serverless. You've talked about that in the past, but I'd love to get your thoughts on the podcast about how you see serverless moving and what relationship, if any, you see with Kubernetes. And where do you see competition versus just more of a specific use case for Kubernetes?
Kelsey: Right. So in terms of competition, I think when we say competition in this context, we're really talking about competition in terms of what you can use. If you go to a tool store, they sell you hammers, screwdrivers, sledgehammers, measuring tapes, nails. There are so many tools, and all of those tools are competing: "Hey, I can be used in this way." So it's up to you to make a choice on which tool you believe is the right one to use. So that's the competition. It's more of a competition for mind share. So Kubernetes definitely does a great job with containers. It gives you some workflows. It gives you some opinions. It gives you some abstractions. Now let's fast-forward to serverless. If we're going to have a good conversation, then we have to draw the line on what we are talking about. So I think the safest definition of serverless is that it does involve events, this event-driven programming model, meaning I want an event, such as an upload of a file or a Web request, to trigger some logic. That's the first stage. If you don't have events, you're not talking about serverless right now. So there's the events piece. Event-driven programming has value; it may be one way to decompose some forms of complexity. But it is a tradeoff, because if you have enough events, then you're now back into a different form of complexity. So let's just use a very simple use case. I upload a file to a bucket, and that generates an event: a file was created, and here's the URL where the file is. So that just automatically gives you a dedicated and opinionated workflow. This is when your app's going to be called. So that means your app is not going to connect to a message bus. That means your app is not going to sit around waiting for a request to come in. Now we have a well-defined workflow for when the logic will be triggered.
So the second piece is, now that we're doing all of this stuff upfront, we don't need to write as much server code. So forget the server, virtual machines and physical machines. When we write an application, we tend to think about what it takes to run a service or a server. When it comes up, you have to figure out what port you want to bind to. You have to parse some configuration. Maybe that tells you what message queue to connect to. There's all this boilerplate code that you end up writing when you don't really have an opinionated way that you're going to be executed. So with serverless platforms such as Lambda, for example, or Google Cloud Functions, you no longer need to do that, because we're going to call your code when the event occurs. So when you're writing your code, you can focus on what should happen when the event arrives. So at this point, you have a different kind of mental model for writing applications. You're going to just respond to events. The more events you have, the more of these functions you have. The last thing to think about now is, if that's all we're doing, and again, this doesn't solve all problems. You won't necessarily use this for machine learning, log writing services, maybe some forms of low latency e-commerce, because we're not talking about solving all problems, but it's a great tool to solve problems that are well defined by this event-driven architecture. So now that we are using this smaller set of logic, what we can start to ask is: do I need all of the abstractions Kubernetes provides? For most people, the answer is no. What you need at this point is an execution environment that makes sense in the event-driven world. So this is where Lambda starts to look really great. I can give it a snippet of code, bundle it in some file format. In their case, they typically use a ZIP file. Store that where the platform can use it.
Then at this point, the platform will take an opinionated flow. So to save money and to make it economical for all actors, we're only going to run this snippet of code when the event arrives. So that means we can have a newer billing model. We can have a billing model that matches that particular use case. So instead of charging you per month or per hour or per minute or per second, we can charge you per invocation of the logic. So that means if there are no events, there is no code running. There is no charge to you. So when you put all of these things together, the billing model, the fact that the platform can run very efficiently, you're designing very small pieces of compute based on events upstream. You tie all of that stuff together and you end up with this idea of serverless, because you're willing to adopt all of those constraints and patterns to get a new way of thinking about doing compute.
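A minimal sketch of the model Kelsey describes: no port binding, no message-queue polling, just logic that runs when the platform delivers an upload event. The event shape and handler signature here are illustrative assumptions, loosely modeled on cloud storage triggers, not any one provider's exact API:

```python
# Sketch of an event-driven function: the "server" part (listening, routing,
# retries) is the platform's job; our code only says what happens per event.
# Event fields here are assumptions, not a specific provider's schema.

def handle_upload(event):
    """Called by the platform once per file-upload event; billed per invocation."""
    name = event["name"]          # e.g. "reports/2018-06.csv"
    size = int(event["size"])     # object size in bytes
    return f"processed {name} ({size} bytes)"

# Local simulation of the platform delivering an event:
if __name__ == "__main__":
    fake_event = {"name": "reports/2018-06.csv", "size": "2048"}
    print(handle_upload(fake_event))
```

If there are no events, this code never runs and, under the billing model described above, costs nothing.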
Sacha: Right. So some would argue that you could get to a similar type of behavior on top of Kubernetes, and you could create services that would dynamically invoke or instantiate services for a specific request, and so on. So where do you think enterprises will set the limit between going full serverless versus using a more unified approach and leveraging Kubernetes entirely?
Kelsey: Yes. Kubernetes could totally be used to layer on a lot of these things, because below most serverless platforms there is something like Kubernetes making scheduling decisions, running some type of container instance. Yes. All of those things are necessary at the very low level. But as you go up higher and you're willing to accept those constraints, then the truth is you don't need Kubernetes. But if you want to do this on-prem, then more than likely anyone building a serverless layer on top will assume they're doing it on top of Kubernetes, because they need that. So on-premises, it totally makes sense that you want to try to layer these things up. But here's the choice the enterprise has to make. The reality is most enterprises have not found it very easy to run things like OpenStack. We tried this, right. Let's copy Amazon on-premises with OpenStack. That was a very big challenge, and some people have never done it well. The same can be said of a serverless platform, but here's where I think the serverless on-prem story is gonna be harder. In the cloud, the convenience of serverless is that serverless is not designed to run a database. It's designed to use a fully managed database. Serverless is not designed to run an object store. It's designed for you to use a fully managed object store. So the reason why I think serverless was born in the cloud is because in the cloud you have things like IAM, where you can automate the authentication of this piece of logic to all of these hosted services. You have such a large array of hosted services that you can just focus on your application. You're not trying to run MySQL inside of Lambda. You're not trying to run Postgres or Redis inside of these platforms. You are assuming that all of those services are fully managed, highly available, all around you, in every region of the world that you want to operate in.
This is normally not true of the enterprise that's going to bring in Kubernetes, bring in serverless and then still have to answer questions around Redis, Postgres, authentication. There's gonna be so much work to do that when you really look at it, it's not going to be as convenient as it is in the cloud. So for the decision the enterprise has to make, you really need to test it. All right, you've got to take a step back. All of these blog posts, people on Twitter saying all of these things. You have to be the engineer you were hired to be. You have to test your application, maybe in a container. Do you like the performance? Is it meeting the requirements? Test your application in a serverless stack. Is the cold start problem too bad? Since those functions are created on-demand, if I'm using them for the wrong use case, my user will see the latency of the function starting up. Even if it's just 500 milliseconds, that's a cold start, and maybe the business is like, "We can never have that. There is no case where I can ever have 500 milliseconds of latency." Only you as the user can make that decision. So you have to look at all of these as individual tools. My guess is the reality is there will be serverless. There will be container platforms. There will be VMs, and there's still the mainframe. So I'm not sure why people think that one of these technologies is going to make all the other ones irrelevant.
Sacha: All right. In your opinion, as we move towards this big transformation we're seeing with cloud and containers and DevOps and so on, for an organization that's about to embark on that transition, what would be your advice in terms of where they need to be investing, in terms of skills and tools potentially? What would be your short list, essentially?
Kelsey: So my short list is accountability.
Sacha: Short – all right.
Kelsey: Forget all the buzzwords and transformation and DevOps and all this stuff. Accountability. So, a lot of people try to pattern themselves after the Lean production model. They say, "Hey, DevOps is a lot like Toyota." And my message to them is: think about Toyota. They had an automated factory. They had an automated factory that gave them the ability to build a car pretty fast and produce as many of those cars as they wanted to. So when they sat around as a collective of skill sets, the decision was: what car do we build based on business objectives? Do we build minivans? Do we build a truck? Do we build an electric car? That's a hard problem to solve: what to build and why, and in what style. How should it look? What price point do you want? Once they made that decision, they go to the factory and they start shipping cars. Every year they can do this. In the enterprise, I'm watching people still deciding whether they should have a factory or not. Should we even do CI/CD? Should we even do automation? If you're doing that, you cannot think of yourself like Toyota in any way. You have no factory. So even if you do make a decision as a team of cross-discipline skills, you have no factory. You can't even make the car. So you have some work to do, and you have to be very serious. This is not about transformation. This is about actually learning. Ten years ago, most enterprises made the same mistake. They bought software. They paid the vendor to keep the old software going and they stopped learning. They stopped updating things. Hey, they got comfortable. And you know what, not doing anything for a very long time sounds pretty good. Now the new world is here, and now that you've stopped learning, you have to learn everything. So your first nine months is learning. What is this CI/CD thing? How do you configure it? How does it behave when it crashes? What are containers? So people that were learning all the while, they saw containers eight, ten years ago with LXC.
Then they saw Docker five years ago. Then they saw Kubernetes four years ago. So if you’re always learning, you’re going to say, “Okay, this is not quite ready, but I’m learning. I’m paying attention, and I know why I can’t use it yet.” So you always will be learning, because think about it. Even with digital transformation, you’re going to get to a point where you get all of this stuff working. You get Jenkins for your CI/CD working. You get Kubernetes for your container runtime working, maybe even get a little bit of serverless. Here’s the thing. If you stop learning, in ten years you’ll be doing this again as you start to look at new technologies that come out, and then have to migrate from one platform to another. So, to me, the short list is you need accountability. You cannot talk about 18 months of transformation and DevOps and saying all of these things. What you should be doing is in three weeks: where is the Jenkins install? And it’s okay if your team says, “We don’t know how to use Jenkins.” Then go find someone who does and find someone who’s willing to teach your team and give them the time to learn. So then in six weeks or eight weeks, they can tell you, “Here’s how we’re going to use Jenkins.” Give them time to make mistakes. Give them time to improve. Then you move on to the next thing with accountability. If you want to use Kubernetes, use Kubernetes. Do not waste time trying to reverse engineer Kubernetes unless you have a weird use case. So that’s my advice to the enterprise. If you’re very serious, put down goals. Bring in people to help you with those goals. And give your people time to really learn something new while they attempt to keep the lights on.
Sacha: So, essentially, this is not a transition. This is not unique. We’re gonna be in a constant stream of change, so your culture needs to be about constantly learning and adapting to those changes, so you don’t have every five or ten years one big step to make, but it needs to be constant. It needs to be part of your culture.
Kelsey: That’s how the earth works. That’s how the universe works. That’s how humans work. As you age, you grow. The world around you changes, and those that pay attention to that change, they adapt to the world around them and they can be successful. Those that hide in a closet and come out only every ten years will be surprised that cars don’t look the same. Phones don’t work the same. This is just natural. This isn’t something special. This is the laws of the universe. This isn’t new. It’s been like that since the beginning of time. So I’m not sure why people believe that this is an initiative. It’s reality.
Sacha: Very nice, very nice. One last thing I wanted to talk about is closer to Google now. Google Cloud Platform launched its marketplace, and CloudBees is happy to be there as well. It means that you can launch commercial Kubernetes applications with just one click on Google Kubernetes Engine. Can you talk more about that, and what are your expectations for customers around this?
Kelsey: Yeah. To me, Google Cloud really just represents, in my opinion – and one thing I like about being at Google is that we have strong open source roots, meaning we contribute to a lot of projects, and those number in the thousands, not just Kubernetes, TensorFlow and things like Chrome, but thousands of open source projects, big and small. If you look at our product offering, a lot of those products are open source projects in product form. We have managed MySQL, managed Postgres. Now we have managed Redis, Kubernetes, all of these things. A lot of those open source projects, they’re great if you know how to use them. The marketplace, for example, allows companies like CloudBees to come in and say, “Look, we have the expertise in things like Jenkins, for example. With that expertise, we boiled it down to this one-click install.” So there’s lots of work that you all do to make sure that when I click that button and I hit the appropriate target, it’s going to work as if I was able to borrow your team for a little while. So what we’re trying to do with the marketplace is have a way for companies like yours to display their expertise and leverage our platform and everything behind it, to make it so the customer doesn’t have to spend nine months learning how to do all of these things the right way. We take all your best practices, and the nice thing is the customer has a relationship with you all. So the bonus points here: if you can continue to make money and you can continue adding value to your customer, Google can provide the compute and the underlying infrastructure, and we just think that’s the right way to go. So that’s why the marketplace exists, so that we can bring those communities together.
Sacha: That’s great. That’s very natural. That’s a very natural evolution. So, Kelsey, and final thoughts?
Kelsey: Yeah. My thought here is that this is a wonderful time to be in tech. All these tools are amazing. The fact that they’re open source means you have a direct say in your future, whether you can write documentation for these projects or contribute new features to these projects. This is the best time to be doing what we’re doing. But also, we need to be very pragmatic. We need to understand how these technologies work. There’s no magic. A lot of times, these things are just APIs and abstractions, on top of things that already work, and they attempt to provide workflows so that we don’t all have to reinvent and rediscover them on our own. So I think what people should do is approach all of these things in a very pragmatic way, and don’t forget to ask questions of people, so that way you can run a little bit faster than just raw experience.
Sacha: That’s great. Thanks a lot, Kelsey, for sharing your insights and wisdom with us today on DevOps Radio. It was great talking to you.
Kelsey: Awesome. Thank you.
Sacha: All right. Thanks, Kelsey. Good-bye.