Automation is a term coined in the last century, as industry became increasingly mechanized. It originates from the word automatic, which means “self-acting, moving, or acting on its own.”
As soon as a company starts to scale, the teams lacking automation are the ones that struggle the most. And many teams do lack automation -- not because they aren't technically skilled, but because of fear.
There's a lot that contributes to the fear of automation:
the fear of losing control
the fear of losing one's job
the fear of introducing unknown risks
the fear of failing
the fear of permanently worsening things, with no option to go back
the fear of wasting time
the fear of shipping
The last five are the most problematic in IT and often appear in combination. This article will focus on those, examining how teams accumulate these fears around automation, what makes them struggle in times of heavy scaling, and how we can fix those issues.
The Fear of Introducing Unknown Risks
When it comes to automation, people who call themselves “Ops” might say: “Well, automated deployments may work, but automated decommissioning of hardware won’t work. Our requirements are much more advanced; the idea is dangerous.”
If we meet a team of developers next, they might say: “It’s nice that you automated decommissioning of hardware for the Ops team, but deploying our complex application is much more complicated than just decommissioning hardware. It would be too dangerous to do that automatically.”
The Fear of Failing
Many teams don’t talk about failure in the open. Failing can be seen as something bad or, in the worst case, something to lose a job over. In an environment where the culture is dominated by the fear of failing, no one takes even small risks to improve that environment.
The fear of failing is a major roadblock for innovation and therefore for automation. It also slows down learning for the whole team -- if nobody talks about failure, valuable lessons aren't shared with direct colleagues or across teams.
The Fear of Permanently Worsening Things
The fear of permanently worsening things often goes hand in hand with the fear of failing. A culture where failure isn’t discussed in the open offers no safe space to reflect. That makes reversing a bad idea incredibly difficult -- nobody wants to admit their idea failed.
After a while, team members might even begin to think that attempted improvements usually just make a situation worse. As a result, they react to new proposals with fear, a fear of permanently worsening things.
The Fear of Wasting Time
Teams that lack automation usually share another fear. Everyone's busy because almost nothing is automated; every member of the team lacks time. If failure isn't seen as a welcome opportunity to learn, a risk-avoiding team will be more likely to ask for help performing tasks manually than to ask for help automating them.
A human reaction to a lack of something is being afraid of having even less of it. If all I have is an apple, my main fear will be to lose the apple. If I barely have any time, I'll be terrified of losing even more.
The Fear of Shipping
The more fear a team has acquired, the less often it ships production code. When everyone avoids failure, the processes around deployments remain complex.
Without a culture where failures are reflected upon, deployment remains a risky task, getting riskier and more complex over time. The fear of shipping is the direct result of the fear of failing and the fear of worsening things.
Can you spot a pattern?
Automation Is About Processes
Specific environments support the development of fears. Fears often work like a magnet in a team: over time, more of them are acquired.
The direct implications of fear are bad enough on their own, but let’s take a look at the indirect implications. The fear of automation will also scare away people who are interested in automation. The effect is cascading.
Let’s take Bob and Alice as an example. They notice a lot of potential for automation in a team and want to help. Bob and Alice make an ambitious proposal; failure is possible, and some team members fear worsening an already bad situation with no option to go back. Suddenly the team is discussing in detail why the project won't work instead of how it could work. Bob and Alice are discouraged and decide to help another team.
Charlie, who works in a third team, hears about the discussion. In the past, Charlie had played with the idea of automating some low-hanging fruit. But now Charlie can see how hard it is for Bob and Alice to help the struggling team, and he decides it's too difficult to help teams with automation tasks.
It is a cascading effect, as you can see.
Automation of complex workflows is not about programming languages or tools. Automation is also not about a script which occasionally breaks. If automation is buggy and breaks, the task isn’t yet fully automated -- it needs human intervention.
Automation in a deeper sense is about understanding complex processes and their risks, and about taking small steps until the risk of fully automating a task is calculably lower than the risk of doing the job manually.
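What “calculably lower” can look like in practice: a toy cost model that compares the expected effort of keeping a task manual against automating it. All numbers here are hypothetical illustrations, not measurements -- the point is only that the comparison can be written down at all.

```python
# Toy risk/cost model -- every number below is a hypothetical illustration.
# It compares the expected monthly effort of a manual task with an automated one.

def expected_monthly_cost(runs_per_month, minutes_per_run,
                          failure_rate, minutes_per_failure):
    """Expected minutes per month: routine work plus expected rework after failures."""
    routine = runs_per_month * minutes_per_run
    rework = runs_per_month * failure_rate * minutes_per_failure
    return routine + rework

# Manual deployment: slow and error-prone.
manual = expected_monthly_cost(runs_per_month=8, minutes_per_run=90,
                               failure_rate=0.30, minutes_per_failure=120)

# Automated deployment: near-zero hands-on time, rarer failures.
automated = expected_monthly_cost(runs_per_month=8, minutes_per_run=5,
                                  failure_rate=0.05, minutes_per_failure=120)

print(f"manual: {manual:.0f} min/month, automated: {automated:.0f} min/month")
```

Once a team writes the comparison down like this, “dangerous” stops being a feeling and becomes a number that can be reduced step by step.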
Let’s take automatic deployment of a legacy app as an example. A deployment can easily add up to 10-20 manual tasks. The app in this hypothetical example doesn’t have unit or integration tests, deployments are done 100 percent manually, and as a result, almost every deployment needs to be hot-fixed. The team has never heard of code reviews or other practices to ensure code quality.
Suggesting an automated deployment will probably make a team like this panic. They'll say, “That's dangerous!” and that will be the end of the discussion. What they want to say, but don’t, is: “Today, the risk of automating the deployment is almost infinite, because we failed to improve the process in the past.”
The problem with calling ambitious goals “dangerous” is that it limits our thinking. When ideas are dismissed as dangerous, we stop thinking further about the problem.
Risk Management Encourages Automation
This is where proper risk management comes into play. Risk management isn’t about avoiding risks at all costs. Practical risk management is about making risks calculable over time and reducing them more and more.
Sadly, practical risk management is rarely taught in the daily software business. That’s a big problem -- the fear of introducing unknown risks leads to avoiding risks at all costs, which leads to stagnating innovation. Fearing risks while having no process for handling them will paralyze teams and even whole companies.
It's important to realize that you won’t reach the goal of automatic deployments by buying or building a tool. In order to automate complex tasks, the culture in a team has to change.
The team has to incorporate and understand new processes and ideas. The members have to understand why code reviews are good, why they should write automated tests, why deploying with a red CI is a bad idea, and so on. Over time, the culture will change and lead to reliable tests which cover all important parts of the application. At some point, the team will be confident enough to realize that it's now safer to automate deployment rather than running the steps for a deployment manually.
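Even a rule like “never deploy with a red CI” can itself become automation. A minimal sketch, assuming a hypothetical status string -- a real gate would query your CI system's API instead:

```python
# Minimal deployment gate sketch. The function names and the "passed"/"failed"
# status values are hypothetical; in a real setup, the status would come from
# your CI system and deploy() would trigger real tooling.

def ci_is_green(pipeline_status: str) -> bool:
    """A deploy should only proceed when the latest pipeline passed."""
    return pipeline_status == "passed"

def deploy(pipeline_status: str) -> str:
    if not ci_is_green(pipeline_status):
        # Refusing to ship on a red build encodes the team's process rule
        # directly into the automation, so nobody has to remember it.
        return "aborted: CI is red"
    return "deployed"

print(deploy("passed"))
print(deploy("failed"))
```

The value isn't the ten lines of code; it's that a cultural agreement has been turned into a check that holds even on a stressful day.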
Technical knowledge is often not the reason a low-performing team is struggling. Several common fears in the workplace lead to less automation and fewer improvements in the long term. Fear is kryptonite for automation and innovation; fears feed off each other and accumulate over time. Trust, and a culture where failure is a welcome opportunity to improve, are the cornerstones of healthy teams.
Automation of complex processes is not a black-and-white, all-or-nothing moment. I have never seen a team automate a complex process immediately in one or two iterations. But with small iterations, risk management gets easier, and the first results provide knowledge for further risk reduction. They give confidence and are a constant source of new insights on the long way to ambitious goals.
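One way to make those small iterations concrete is to represent the manual runbook as data and automate one step at a time. The sketch below is entirely hypothetical -- the step names and functions stand in for whatever a real team's deployment involves:

```python
# Hypothetical sketch: a deployment runbook represented as data.
# Each step is either already automated (a callable) or still manual (None).
# Every iteration replaces one manual entry with a function.

def build_artifact():
    # Automated in iteration one: previously a handful of build commands run by hand.
    return "artifact built"

def run_smoke_tests():
    # Automated in iteration two: a quick check that the app responds at all.
    return "smoke tests passed"

RUNBOOK = [
    ("build artifact", build_artifact),
    ("run smoke tests", run_smoke_tests),
    ("switch load balancer", None),   # still manual
    ("verify dashboards", None),      # still manual
]

def run_deployment(runbook):
    """Run the automated steps and report which steps still need an operator."""
    results = []
    for name, step in runbook:
        if step is None:
            results.append((name, "manual"))
        else:
            step()
            results.append((name, "automated"))
    return results

for name, status in run_deployment(RUNBOOK):
    print(f"{status:9s} {name}")
```

Each iteration shrinks the manual list, and the shrinking list itself documents the team's progress and remaining risk.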