The Cloud as a Tectonic Shift in IT: The Industrialization of IT

Written by: Heidi Gilmore

This is the first in a series of blogs about The Cloud as a Tectonic Shift in IT, authored by Sacha Labourey, CEO, CloudBees. In the series, Sacha examines the huge disruption happening across the IT industry today, looks at the effect it is having on traditional IT concepts and reviews the new IT service and consumption models that have emerged as a result of the cloud. Finally, Sacha makes some predictions about where this tectonic shift will lead us in the future.

The move to the cloud represents one of the largest paradigm shifts to ever affect the IT landscape. More than just a simple technology evolution, the cloud fundamentally changes many of the cornerstones on which IT was built. From redefining the concepts of operating systems and middleware, to revolutionizing the way IT services are built and consumed, the cloud is ushering in an era of change unlike any we have ever seen.

In Part 1: The Industrialization of IT (below), Sacha examines the evolution of electricity and the development of standards for operating and using it. He then compares the evolution of grid delivery, instant-on access and pay-as-you-go models in the power industry with the parallel evolution occurring now in IT service delivery.

The Industrialization of IT

History shows that human beings tend to be pretty good at predicting the predictable. Whenever evolution is linear, it is easy to look at where things are going, and at what pace, and guess where they will land at some point. Take the cost of hard drives as an example (see graph below). Looking at the last 30 years, it has never been very hard to guess what a hard drive would cost 3, 5 or even 10 years out. The same could be said of Moore’s law and CPUs. All of that is good and very predictable.
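This kind of linear (on a log scale) prediction is simple arithmetic. Here is a minimal sketch of how one might extrapolate a steadily declining price; the starting price and halving period below are assumed values for illustration, not figures from the graph.

```python
# Extrapolating a steady exponential price decline -- the "predictable"
# kind of trend described above. Inputs are illustrative assumptions.
def extrapolate_price(price_now, halving_years, years_ahead):
    """Project a price forward, assuming it halves every `halving_years`."""
    return price_now * 0.5 ** (years_ahead / halving_years)

# Assume $0.10/GB today and a halving time of roughly 2 years:
for years in (3, 5, 10):
    projected = extrapolate_price(0.10, 2.0, years)
    print(f"In {years} years: ${projected:.4f}/GB")
```

The point of the passage is exactly that a one-line model like this tracks reality well for decades, as long as no paradigm shift intervenes.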


Figure 1 – Cost of hard drives per gigabyte

On the other hand, when a paradigm shift happens – a real shift, not mere evolution within a given paradigm – we do not have the same ability to predict the future. This is because paradigm shifts can follow multiple paths, some of which are incompatible with what we are doing today. And because these paths can lead to entirely new behaviors, it is difficult to predict where things will eventually land. The other consequence is that once you have experienced the shift, it becomes hard to remember, or even figure out, how things were done in the past.

A perfect example of a paradigm shift leading to unforeseen behaviors is the phone. The emergence of phones was so significant because people no longer needed to travel in order to communicate. But what’s even more interesting is the emergence of mobile phones. When mobile phones appeared, traditional wired phones were already a commodity available in most houses – so it was a paradigm shift for a device that was already quite widespread. It took less than a decade for most people in the U.S. to have a mobile phone and let it shape some of their daily behavior. Today, mobile phones are such an inherent part of our lives that we can barely remember how life was without them. Think about it for a moment: Has a kid ever asked you why phones used to have a cord?

Since understanding the impact of a paradigm shift is not an easy endeavor, let’s look at a very powerful analogy to help us understand just how significant one can be.

A Story Backwards

The first electrical generators appeared at the end of the 19th century. They were big, complex, expensive and fragile. As such, they were only accessible to a few highly profitable companies.

Yet, very quickly, cities and countries started understanding the economic potential electricity could bring in terms of competitiveness. In 1878, the Exposition Universelle in Paris catalyzed the acceleration of this process. We should remember this fair as one where some of the inventions that most deeply impacted the 20th century were demonstrated: Alexander Graham Bell’s telephone, Thomas Edison’s phonograph and megaphone, as well as the completed head of the Statue of Liberty, which was ready to be shipped to New York City. Yet, some of the most impressive demonstrations were related to the use of something new: electricity.

In the years that followed, many cities started investing in power plants: St. Moritz in 1879, London and New York in 1882 and Grenoble in 1883. These cities understood that for their inhabitants and factories to ever afford electricity, it had to be a “community” investment. So the earliest power plants were the pride of the communities in which they were built. They were the Googleplex of the 19th century!


Exhibit 2 - Power plant built in 1906 in La Chaux-de-Fonds, now protected by UNESCO


Yet, those efforts were far from enough to create an environment where electricity could be consumed as a “utility.” To start with, there were no standards. As an example, the picture below of Paris from around 1913 shows a single city carved into different networks with very different capabilities and connectivity: single-phase, two-phase, five wires, three wires, etc.


Exhibit 3 – Paris in 1913, segregated by type of electrical network


So as a consumer, what kind of freedom did you really have? Your equipment had to be customized for a specific zone and if the plant in your zone required maintenance you would essentially be without power. This showed the limits of community-scale and proprietary networks.

During the 20th century, massive standardization and network consolidation occurred – from the plug (size, shape, voltage, current, frequency, etc.) to the distribution grids and the producers. A lot of work was done. Fast-forward to the end of the 20th century, and consumers could buy any equipment or device anywhere, plug it in and use it.

Also, given the importance of electricity in both our lives and our economy, governments have often played a strong role in the marketplace. Even though most power plants and distribution networks are owned by private companies, governments ensure they follow strict rules, because no developed economy can advance without a hyper-reliable electricity production and distribution chain.

Where does that leave us today? Electricity providers are organized into grids in order to provide a highly available stream of energy. Distributors negotiate and dispatch this electricity to consumers, who then have access to a fully standardized commodity. Done.



Exhibit 4 - The electrical grid, today – standards everywhere

The State of IT Today

The evolution of electricity over the last century has distinct, direct parallels with the current state of IT. While the two are different beasts in many respects, numerous similarities remain. Both are critically important in our lives and in the fabric of our economy. Both are sophisticated technologies that are hard for non-specialists to manage. And both are “virtual goods,” in the sense that what they produce is not like traditional consumer goods.

Yet, when we look at the state of IT today, it’s clear that we are at a level comparable to where electricity production and distribution were a century ago.



Exhibit 5 – On the left, the first electrical generator. On the right, Google’s first data center


Even when a company has clearly identified business objectives and knows what service it would like to implement, the path between this idea and its implementation seems like the map of Paris in 1913. There are so many options to consider – everything from hardware, operating systems (OS) and backup, to network and firewall configurations – that the business objectives may get lost in the shuffle.



Exhibit 6 – Typical timeline of a business-critical IT system


Today, many computing resources are custom built. Companies set up their infrastructures in unique ways; every application requires a different setup and environment, and the overhead is high. It’s almost as if each company built its own power plant, based on the best available architecture and a well-defined maximum amount of computing and storage resources.

Thanks to the Internet, only the bandwidth component is a relatively mature IT layer: Companies understood that building their own WANs made no sense, so they have outsourced their communication needs to a set of interconnected bandwidth vendors and do not really know – nor care – through which tube their bits travel, so long as service-level agreements (SLAs) are met. This relative maturity does not yet apply to the compute or storage/data layers.

Is there anything we can learn from the evolution of the production, distribution and consumption of electricity in the 20th century and what’s taking place with cloud computing?

From Selling Books to Initiating the IT Revolution

While many had dreamed about it, Amazon did it. In 2006, the company announced Amazon Web Services (AWS), and initiated the phenomenon that we now group within the “cloud computing” basket.

As any online vendor has learned – often the hard way – every tenth-of-a-second increase in response time translates into a measurable percentage of lost customers (source: , among others). To mitigate that risk, companies typically own enough resources to accommodate peak levels of demand. But since many online businesses are strongly seasonal, a considerable portion of that infrastructure sits idle the rest of the time. Amazon’s genius was to make those extra resources available through an API, as an on-demand service billed on a pay-per-use model. And guess what? It worked great.
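The economics behind this model are easy to sketch. The back-of-the-envelope comparison below contrasts owning peak capacity year-round with paying only for server-hours actually consumed; every figure in it is a made-up illustrative assumption, not AWS pricing.

```python
# Peak-provisioned ownership vs. pay-per-use, for a seasonal business.
HOURS_PER_YEAR = 24 * 365

def owned_cost(peak_servers, cost_per_server_hour):
    # Owned hardware must cover the peak all year, even when idle.
    return peak_servers * cost_per_server_hour * HOURS_PER_YEAR

def on_demand_cost(hourly_demand, cost_per_server_hour):
    # Pay-per-use: billed only for server-hours actually consumed.
    return sum(hourly_demand) * cost_per_server_hour

# Assume 100 servers are needed during one peak month, 10 otherwise:
peak_hours = 30 * 24
demand = [100] * peak_hours + [10] * (HOURS_PER_YEAR - peak_hours)

print(owned_cost(100, 0.10))         # cost of provisioning for the peak
print(on_demand_cost(demand, 0.10))  # cost of paying only for usage
```

Under these assumed numbers, the on-demand bill is a fraction of the peak-provisioned one, which is precisely the gap Amazon's idle capacity could monetize.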

In just a few months, AWS was a great success. The graph below shows that about a year after launching, the bandwidth consumed by AWS customers had already eclipsed Amazon’s own consumption.



Exhibit 7 – Bandwidth consumed by Amazon’s own websites, compared with bandwidth consumed by Amazon Web Services
(Source: Amazon Web Services blog, )


Very quickly, engineers realized that the arduous process of provisioning new servers they had been used to could now happen in just a few clicks with AWS! This opened new doors. Now, they were able to use compute resources like a disposable razor blade: take one, use it, toss it. Start an application, add resources to handle a peak in traffic and decommission it immediately after the fact.

A similar process in the enterprise can take weeks or, more frequently, months. And once you obtain a resource, you typically keep it for five years.
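The "disposable razor blade" lifecycle can be sketched as a tiny model. The class below is a stand-in for a cloud provider's provisioning API (the names and counter are invented for illustration); the point is the pattern: acquire for the peak, release immediately after.

```python
# A toy model of the elastic resource lifecycle: acquire capacity for
# a traffic peak, then decommission it right away. In a real cloud,
# acquire/release would be API calls to the provider, not a counter.
class ElasticPool:
    def __init__(self):
        self.active = 0  # number of servers currently provisioned

    def acquire(self, n):
        self.active += n
        return self.active

    def release(self, n):
        self.active -= n
        return self.active

pool = ElasticPool()
pool.acquire(2)     # baseline capacity for the application
pool.acquire(8)     # scale out to absorb a traffic peak
pool.release(8)     # toss the "razor blades" once the peak passes
print(pool.active)  # back to baseline: 2
```

Contrast this with the enterprise timeline above: the same acquire/release cycle that takes minutes against a cloud API takes months of procurement on owned hardware, and the "release" step effectively never happens.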

But engineers also realized that not everything was customizable – they had to accept some level of standardization. Is this tradeoff acceptable? Is a more streamlined IT environment acceptable when resources are almost instantaneously available? The more customized you make it, the less automated it can be, the harder it is to reach critical mass and the further you move away from a utility model. Think about it: it is a bit like asking for an 80-volt electrical plug just for a specific toaster. Objectively, providing an 80-volt source might end up being more efficient for that specific case, but that type of custom requirement is exactly what we have tried to eradicate. Today it is all about standardization and simplicity, not constant customization.


Exhibit 8 – Mapping electrical delivery to IT/cloud services delivery


It’s a safe bet that cloud computing and IT in general will evolve in a fashion very similar to the production, distribution and consumption of electricity: Cloud providers are the new power plants, the Internet unifies clouds into a highly available grid, and distribution and consumption are standardized around browser technologies (HTML5, CSS3, JavaScript, etc.). IT will move away from à la carte systems, on-premise data centers and customized client-side technologies. The critical mass on each of these layers will make any competing/proprietary technologies decidedly non-competitive.

Much like the transition of wired phones to mobile phones, cloud computing is not a mere evolution, but a true paradigm shift. Its consequences are still hard to precisely foresee, but it will without a doubt impact the current IT landscape – especially when it comes to how we develop software and identify leading IT vendors.

Up next in the blog series: The Irrelevance of Infrastructure as a Service (IaaS). 

