Cloud Native Part 2: In Practice

Written by: Joe Beda


This article was originally published on Heptio's blog by Joe Beda. With their kind permission, we’re sharing it here for Codeship readers.

This is the second part in a multi-part series that examines multiple angles of how to think about and apply “cloud native” thinking.

Like any area with active innovation, there is quite a bit of churn in the Cloud Native world. It isn’t always clear how best to apply the ideas laid out in the first post in this series. In addition, any project of significance will be too important and too large for a from-scratch rewrite.

Instead, I encourage you to experiment with these new structures for newer projects or for new parts of an existing project. As older parts of the system are improved, take the time to apply new techniques and learnings as appropriate. Look for ways to break out new features or systems as microservices.

There are no hard and fast rules. Every organization is different and software development practices must be scaled to the team and project at hand. The map is not the territory.

Some projects are amenable to experimentation, while others are critical enough that they should be approached much more carefully. There are also situations in the middle, where techniques proven out on experimental projects need to be formalized and tested at scale before being applied to critical systems.

Cloud Native is defined by better tooling and systems. Without this tooling, each new service in production carries a high operational cost: it is a separate thing that has to be monitored, tracked, provisioned, and so on. That overhead is one of the main reasons to size microservices appropriately.

The benefits in development team velocity must be weighed against the costs of running more things in production.

Similarly, introducing new technologies and languages, while exciting, comes with cost and risk that must be weighed carefully. Charity Majors has a great talk about this.

Automation is the key to reducing the operational costs associated with building and running new services. Systems like Kubernetes, containers, CI/CD, monitoring, etc. all have the same overarching goal of making application development and operations teams more efficient so they can move faster and build more reliable products.
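One concrete example of this kind of standardization is health checking. Kubernetes, for instance, can probe an HTTP endpoint (conventionally `/healthz`) on every service, so each team gets liveness monitoring without building custom per-service tooling. As an illustrative sketch only (not from the original article), here is a minimal health endpoint in Python and a probe against it; the handler class name and the use of an ephemeral port are assumptions for the demo:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal liveness endpoint of the kind a Kubernetes probe would hit."""

    def do_GET(self):
        if self.path == "/healthz":
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the demo's output quiet

# Port 0 asks the OS for any free port; a real service would use a fixed one.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Probe the endpoint the way an orchestrator's liveness check would.
status = urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_port}/healthz").status
server.shutdown()
```

Because every service exposes the same convention, one piece of platform tooling can monitor all of them, which is exactly the overhead reduction described above.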

The newest generation of tools and systems is better positioned than traditional configuration management tools to deliver on the promise of cloud native, because they break the problem down so it can be spread across teams. Newer tools generally empower individual development and operations teams to retain ownership and be more productive through self-service IT.

Part 3 will look at how Cloud Native relates to DevOps. We'll also look a bit at how Google approaches this through the SRE role.
