Key Takeaways from Continuous Discussions (#c9d9) Episode 69: Continuous Testing

Written by: Electric Bee

In a recent Continuous Discussions (#c9d9) video podcast, expert panelists discussed continuous testing. Our expert panel included: Andi Mann, chief technology advocate at Splunk; Andreas Grabner, technology strategist at Dynatrace; Arthur Hicken, Parasoft evangelist; Javier Delgado, automation fanatic; Jeff Sussna, consultant, writer and speaker; and our very own Sam Fell and Anders Wallgren. During the episode, the panelists discussed the benefits and challenges of continuous testing and the prerequisites for success. Continue reading for their full insights!

»Continuous Testing: What is it and Why?


Continuous testing enables the delivery of quality software at speed, says Mann: “We talk about DevOps being about delivering faster with better quality. I think that's the core of what continuous testing brings to the party, this idea of better software faster; it's velocity with quality. It's easy to get software out fast if it's crap. It's easy to get out good software if you've got a whole year to ship a feature. But doing it continuously with quality, I think, is the big thing. And when you get to a virtuous cycle of continuous testing, not just for functional quality, not just for things like compliance and code coverage, the one thing which I think is really important is getting that impact analysis as well. When you talk about continuous testing, being able to test beyond release, I think, is actually a really important part of that as well.”

It’s not continuous testing, it’s continuous experimentation, explains Grabner: “I like the continuous experimentation aspect because I think that's actually what it is. If you have a thesis, then you want to experiment on it and figure out whether this is something people really like. I actually don't like the term ‘continuous testing’ itself because testing, at least in my mind, is so much something we do early on, before we actually give something to our clients. It's more this continuous experimentation, a continuous innovation, where you come up with an idea, and then there are different ways of testing that idea. It could involve prototyping, it could involve pre-production testing, and then A/B testing and blue-green deployments to basically get feedback on whether what you're doing is actually good or not.”

Hicken explains that continuous testing is actually more like continuous assessment: “The idea of maybe a continuous assessment is really what we are talking about. Where we're looking at the code and saying, ‘Is it ready? Is it worse? Is it better?’ And I think that is really the key to it because if we look at continuous delivery, continuous deployment, it's kind of a holy grail for a lot of organizations. There are some that have achieved it. For most people, it's a thing that we're trying to reach for. So if we look at that, then we have these tests, we have this coding that's going on, we have the integrations going on, but at some point, we have to be able to say, ‘Can I deploy this? Did I make it worse? Did I make it better?’ That's where the continuous test, the continuous assessment comes in. That's the big difference between automation as a simple thing, and the continuous assessment because the automation is just executing the test.”

Continuous testing helps build confidence, per Delgado: “I believe continuous testing is kind of a regression-testing safety net. If you keep on testing as much as possible, you're going to get faster feedback, and while you can't rest assured that you aren't introducing defects, at least you can have some confidence. And related to continuous delivery, that confidence is the only thing we can preach; we can only be happy whenever we have at least some level of confidence. I would simplify everything and just keep on running the whole suite we've got. We have to control that nobody does bad things, but this is your main source of feedback, besides the product itself being at the expected level.”

Continuous tests allow us to minimize the business risk, says Wallgren : “The way I think about continuous testing is I think of it as a strategy. One of the end goals of that strategy is to make sure that we lower the business risk as part of our releases, so that when code gets exposed to customers, or customers get exposed to code, that will minimize that risk, and that's the strategy. Test automation is a tactic that we use to implement continuous testing as much as possible. But I think the focus in continuous testing isn't so much on the nitty-gritty unit tests, or was the code formatted properly, and did we run and find bugs, and all of those kinds of things, that's certainly part of the whole pipeline. But, the thing that we're most concerned about is the business risk, not necessarily so much the technical risk. Are we ready to release? Can we take this to production now?”

Business needs should be kept in mind when doing continuous tests, suggests Sussna : “There are two levels to think about with continuous testing. One is part of the development and deployment technical IT pipeline. But I think it actually starts before that. With the world we live in now, we have to think of continuous testing as our approach to the whole business. If you look at things like design thinking and Lean UX, they're really about continuous testing in the sense that you have this great idea, and you think you're empathizing, and then you have to do some user testing, or you do an MVP. I've had lots of fascinating conversations about what the heck an MVP is, but really, what it's about is testing your assumptions, and then that kind of flows into the technical pipeline. The way that I like to think about things like DevOps is it's actually allowing us to test our assumptions even better because we can find out in the real world with real customers using real systems, which is actually the only way you can truly know whether what you did is working or not, is to see how people are actually using it.”

»Continuous Testing: Challenges


Analyzing continuous, automated testing is a major challenge, Mann sees: “One of the big challenges that I see with my customers is they are running all this automated testing, and it gets to a pace and a volume of output where it's really hard to analyze and say, ‘That's done.’ Being able to, as a human being, absorb all this output from multiple testing tools, black box versus white box, static analysis, dynamic analysis, code coverage versus functional testing versus regression, all of this stuff is hard. So automating the analysis of the testing, I think, is something I'm seeing a lot of energy being put into now. Sure, I've automated testing, but now I've got to sit down for three hours and try to figure out whether it worked or not. So doing that analysis so it can feed back into automation tooling, to keep that continuous flow going at pace, that I think is part of the challenge that I'm seeing at the moment.”

Making sure both developers and operations teams have visibility into continuous testing analytics is challenging but important, explains Grabner: “The big thing is shifting it left, making it as easy as possible for developers to get immediate feedback, but then also shifting right. Shifting right means not only should I be interested in how the feature is doing in my development and test environment, but also how it is used, because if I know how often my feature is actually used, and how it's behaving, this actually gives me more insight into the potential impact that I make. Because if I am working on a feature that is used by almost nobody, then it's easier to make a change. If it's a feature that is used by 80% of my users, and it's business-critical, then I have to do even more testing, and I need to be more careful. So making sure that we are shifting left, finding problems earlier, making it very easy and building it into the pipeline, but also pulling data from production so that we know what the potential impact is if I make a mistake.”

Don’t test more, test smarter, advises Hicken : “I think what people are missing is that we really need to test smarter. And that means, when I'm looking at coverage, I have to ask, ‘Do I need more tests to cover this?’ And when I'm looking at tests that are noisy, if I have a test suite that requires me to spend three hours to analyze it, that means I'm ignoring most of it. A test that you ignore should be turned off. I would just turn it off because at least you'll pay attention to other variances, and that's really important. Shifting left is the key to understanding when a change is made, if I have a ton of good tests that take a while, what's the minimum amount I have to run? What will tell me that it's safe? I don't want to test everything, but I don't only want to test the line of code that I changed. I want to test the things that depend on the line of code that I changed. And so getting that answer right I think is the key to letting you test smarter.”

The amount of tests that need to be run can be a roadblock, suggests Delgado: “I believe that one of the first challenges is the amount of testing. We need to pay attention to coverage, functional coverage, or even new lines of code, but we have to pay special attention to run-time. It's not practical if we have a set of test suites that must run for four or five hours, because your feedback is going to be delayed too much, or at least it will slow the whole pipeline. We have to approach this in several ways: we should work on parallelizing tests, and we should work on writing proper tests without overdoing it.”
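Delgado's suggestion to parallelize tests can be illustrated with a minimal runner built on the standard library. The test functions here are placeholders; in practice a runner such as pytest with a parallelization plugin would handle the scheduling:

```python
# Minimal sketch of running independent tests concurrently.
from concurrent.futures import ThreadPoolExecutor

# Placeholder tests: each must be independent for parallel runs to be safe.
def test_login():    assert 1 + 1 == 2
def test_checkout(): assert "cart".upper() == "CART"
def test_search():   assert sorted([3, 1, 2]) == [1, 2, 3]

def run_suite(tests, workers=4):
    """Run tests concurrently and report pass/fail per test name."""
    def run_one(test):
        try:
            test()
            return True
        except AssertionError:
            return False
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(run_one, t): t.__name__ for t in tests}
        for future, name in futures.items():
            results[name] = future.result()  # blocks until that test finishes
    return results

print(run_suite([test_login, test_checkout, test_search]))
```

The wall-clock win comes only when tests don't share mutable state or external fixtures, which is why "writing proper tests" and parallelizing them go together in his advice.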

Continuous monitoring is crucial to solving issues that may come out of continuous testing, says Wallgren: “Why would you not monitor your code while you're testing it? Why would you not monitor your systems? If you want to get as close to production as possible, that definitely should include all of those kinds of things. I’ve told this anecdote several times of when we (CloudBees) had a production issue that, it turned out, had thrown an error during unit testing. The tests didn't fail, it just logged an error, but we didn't know because we were not checking our logs after unit tests for unexpected errors. So we changed that, and we haven't had that sort of issue since. You have to do monitoring and analysis of the data that gets thrown off as a side effect of all the testing that you do, and make sure that there's nothing unexpected in there. If you don't, that's a huge missed opportunity for finding problems before they get too far to the right.”
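Wallgren's anecdote (tests passed while an error was silently logged) suggests a simple guard: collect everything logged at ERROR level during the test run and flag the build if anything shows up. A minimal sketch in Python, with an illustrative "payments" logger standing in for real application code:

```python
import logging

class ErrorCollector(logging.Handler):
    """Collect every record logged at ERROR level or above."""
    def __init__(self):
        super().__init__(level=logging.ERROR)
        self.records = []

    def emit(self, record):
        self.records.append(record)

# Attach the collector to the root logger before the suite runs.
collector = ErrorCollector()
logging.getLogger().addHandler(collector)

def test_feature():
    # The assertion passes, but something inside logged an error.
    logging.getLogger("payments").error("retry exhausted, using stale cache")
    assert True

test_feature()

# After the suite: the build is only clean if nothing logged an error.
build_ok = not collector.records
if not build_ok:
    print("tests passed, but errors were logged:",
          [r.getMessage() for r in collector.records])
```

A CI job would fail the build on `build_ok` being false, turning the "unexpected log line" into a first-class test failure rather than something a human has to notice after the fact.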

It can be really hard to keep up with continuous testing if you are deploying dozens of times a day, says Sussna : “I think what we're all saying here is that testing is an engineering discipline like any other. And test design is a non-trivial activity, that we generate test debt just like we generate software development debt. What do we need to test, and what don't we need to test is something that we learn and refine over time. The biggest objection that I see to continuous testing is how do we keep up if we're deploying 80 times a day, how do we keep up with our tests? An anti-pattern that I see too often is testers doing manual testing, and then trying to catch up often in the next sprint and writing automated tests, and falling farther and farther behind.”

»Prerequisites and Patterns for Success


Test early and often, says Mann: “I think risk segmentation, testing the right things at the right time, is important. I really do think having a framework for test analytics, and sharing that framework with the organization, is important because everyone at every point needs to do some kind of test, whether it's a unit test, a functional test, UAT, a performance test, even operations testing. So I think you need analytics that handle the volume you get from automated testing, but also provide visibility to all the people that need it across the different segments of your delivery chain, so that everyone has the responses and the feedback whenever they need it. It’s the Al Lowe method (I don't know if you ever played Leisure Suit Larry back in the day), the old ‘save early, save often’ thing: test early, test often. And that means that if you've got this common framework for analyzing your test automation, then anyone can run automated testing and anyone can see whether that new application or new feature is actually working.”

It may be challenging, but creating application or feature teams can be beneficial, says Grabner: “One thing that I think could actually make all this easier (I know it's not easy for organizations) is making application teams, or feature teams, or development teams responsible end-to-end for what they develop, including production. That's what I see happening with some companies. Because if we, as an application team, including developers, testers, product managers, and product owners, are responsible for making this feature a success, we first want to continuously test, maybe with some test user group, to figure out whether this is the right thing before we actually go down the path and build something that nobody likes in the end.”

Have local test environments and apply engineering thinking to software development, advises Hicken: “Having local test environments is crucial. If developers can't check whether something is working before they commit it, then you're asking for chaos in the whole machine you have that's assembling and testing. They put some crap in there and the whole machine blows up. So I think having local test environments is a really super crucial step to getting high-quality continuous stuff done. And on top of that, there's just a lot of engineering thinking. I don't think it's applied to software development, but I think that it should be. I know an awful lot of people who would claim that they're software engineers, and I still have to say, ‘Where did you get your software engineering degree? Can you show it to me? Who's teaching that?’”

Wallgren would like to see more people do pre-commit builds: “Even more powerful than local testing is the ability to do a pre-commit build, where you essentially run through as much of your pipeline as you want without doing an actual commit. It's not perfect, it's not going to remove 100% of the problems or the errors, and nothing will, but we've seen this drastically reduce the number of broken builds that happen. One of the key things is, if it breaks, you're only breaking it for yourself. You're only making a mess for yourself that you need to clean up, but you have the human-factors bonus: if it's 7 o'clock and I need to get home for my kids’ recital, I don't have to stay at work, because I didn't commit anything and I didn't break anything for anyone else. Yes, I'm not done as early as I want to be, but I'm not keeping other people from getting their work done by committing crappy code and then going home. So that's a very, very powerful feature that we use all the time, and it'd be nice to see more of that, quite frankly.”
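A pre-commit build like the one Wallgren describes can be approximated by a local gate script that runs a fast slice of the pipeline before allowing a commit. The checks below are stand-ins for a real build and test command:

```python
import subprocess
import sys

# Each entry stands in for a real pipeline stage (build, unit tests, lint).
# sys.executable runs the current Python interpreter, so the sketch is portable.
CHECKS = [
    [sys.executable, "-c", "print('build ok')"],   # stand-in for a build step
    [sys.executable, "-c", "assert 1 + 1 == 2"],   # stand-in for unit tests
]

def precommit():
    """Run every check in order; return True only if all of them pass."""
    for cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print("check failed:", " ".join(cmd))
            print(result.stderr)
            return False
    return True

# A git pre-commit hook would exit nonzero here to block the commit.
print("pre-commit:", "clean" if precommit() else "blocked")
```

Wired into a `.git/hooks/pre-commit` script, this gives exactly the property Wallgren highlights: a failing check only blocks your own commit, never the shared build.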

Localized testing is good, but there’s more to consider, suggests Delgado: “I believe localized testing is important, but it has its limits. As developers, we aren't going to get powerhouse machines just to test or develop on. Furthermore, there could be some third-party requirements where, even with Docker in the picture, you aren't going to be able to prepare a proper and complete environment. I would go for having a pull toward a continuous integration or continuous delivery environment whenever there is a change, no matter how deep it is. A developer could commit changes as soon as possible, just being assured that at least it compiles, and let the real heavy testing run in an automated environment.”

Looking at feedback is one thing, but actually doing something with it is a whole other challenge, says Sussna: “There’s all this talk about feedback. Are we actually willing to pay attention to the feedback we get? It's not actually as obvious as it may seem. I did a workshop for a company a while back with very, very mature Agile and DevOps practices. They brought me in because they were interested in my ideas, and I wasn't actually sure I had much to teach them. So I had them do this exercise where I had them think about linear processes and what happened when they turned them into circular processes. And they kind of chuckled at me, and they said, ‘Well, we don't really have any linear processes anymore. We've made them all circular.’ And I said, ‘Well, indulge me. Let's just do the exercise.’ I broke the group into four small teams, and they went through the exercise, and three of the four teams independently came to the same conclusion, which is: we're really good at collecting feedback, and then we ignore it, throw it on the floor. The lesson I've learned is that being willing to listen, and to change what you're doing and how you're doing it based on the information you get back, is actually a much bigger challenge than it may seem.”

Watch the full episode:

Want more Continuous Discussions (#c9d9)?

We hold our #c9d9 podcast every other Tuesday at 10 a.m. PST. Each episode features expert panelists talking about DevOps, Continuous Delivery, Agile and more. 
