How do you think about testing a new feature? If you’re a developer, your mind probably immediately jumps to things like automated unit tests. If you’re a product manager, the first thought that pops into your head might be manual QA testing. As an operations manager, your mind might jump to regression testing to ensure you won’t have to roll back a feature or bug fix after deployment.
In reality, all these approaches, and more, are critical parts of testing a new feature. When you roll out new code, you want to make sure it works right. The hard part is ensuring that you’re spending the right amount of time and energy testing that code. Spend too little, and it’s likely that you’ll ship a product riddled with bugs and sporting features that don’t work.
But the opposite problem is just as dangerous. Spend too much time testing your code, and your new feature might hit the market months late. Or it might be missing features that you couldn’t guarantee would work perfectly. In this post, we’re going to go over different facets of feature testing. We’ll also talk about how to make sure you’re maximizing value from each type of test.
Automated Feature Testing
Every feature you ship needs some form of automated testing that verifies the feature does what you want. The challenge is to write the correct number of tests for different parts of your code. Proponents of methodologies like test-driven development will advocate for validating every single line of code in your application with automated tests. At the other end of the spectrum, some teams don’t expect any tests to be written for new or existing code.
So, what’s the right amount? It’s not none. It’s probably not designing each function of your code by writing tests before you start writing functions either. For most teams, the sweet spot lies somewhere in the middle of those two extremes. When I’m advising junior developers about writing tests, I tell them to focus on the system’s most important parts. For instance, when you have a critical set of functions that do something like calculate sales taxes for your products, those need extensive tests. When you’re building a bit of display logic to ensure the total price of an invoice is aligned to the right side of the page, you probably don’t need as many tests to validate that logic.
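To make that concrete, here's a minimal sketch of what "extensive tests for a critical function" might look like. The sales tax calculator and every name below are hypothetical, invented purely for illustration; the point is that the critical path gets coverage for normal cases, rounding behavior, and invalid input:

```python
import unittest

def calculate_sales_tax(subtotal: float, rate: float) -> float:
    """Hypothetical critical function: tax owed, rounded to cents."""
    if subtotal < 0 or rate < 0:
        raise ValueError("subtotal and rate must be non-negative")
    return round(subtotal * rate, 2)

class TestCalculateSalesTax(unittest.TestCase):
    def test_typical_purchase(self):
        self.assertEqual(calculate_sales_tax(100.00, 0.07), 7.00)

    def test_zero_rate_means_zero_tax(self):
        self.assertEqual(calculate_sales_tax(50.00, 0.0), 0.00)

    def test_rounds_to_cents(self):
        self.assertEqual(calculate_sales_tax(19.99, 0.0625), 1.25)

    def test_rejects_negative_subtotal(self):
        with self.assertRaises(ValueError):
            calculate_sales_tax(-1.00, 0.07)

if __name__ == "__main__":
    unittest.main()
```

By contrast, display logic like right-aligning an invoice total would merit one or two tests at most, if any.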
The critical part of your testing strategy is that the tests you do run need to be fully automated. They need to be integrated into your CI/CD pipeline, and you should never ship a build with even one failing test.
Manual QA Feature Testing
Many companies today eschew manual QA testing. I can understand why—if automated testing feels slow, manual QA is often slower by an order of magnitude. Today’s software teams aren’t looking to slow software delivery; they’re trying to speed it up. But there’s still a valuable place for manual QA testing in today’s software landscape. The key is to focus on what manual QA is good at and set aside the things it isn’t.
So, what is manual QA good at? The best QA engineers I’ve ever worked with had a very strong customer focus. They did everything they could to understand how our customers worked with the software we shipped. Then they approached each testing session looking at the software through that lens. Traditional QA teams have focused a lot on “smoke testing.” When smoke testing, individual testers manually go screen by screen through an application, entering both valid and invalid data to ensure that the application behaves the way that it should. Over the past few years, software teams have moved more of this type of testing into automated unit tests, which is why many teams have scaled back their manual QA efforts.
If that’s all your manual QA team brings to the table, you’re right to think of them as easily replaceable. Having a QA engineer spend an hour manually smoke testing an application just to verify your existing unit tests work effectively isn’t a good investment of time or resources. Instead, we want our QA engineers to approach the software through the lens of different customer personas.
Great Manual QA Testing
Building a high-quality QA team is about more than just finding people who’ll find clever ways to break software. It’s about growing a team that understands the ways that your customers think and use your software. It’s also about including them in the design and development process from the earliest stages.
One of the most frustrating exchanges in the feature delivery pipeline happens when QA flags something as a bug when the product team requested the software work that way from the beginning. Everyone involved winds up frustrated due to the breakdown in communication.
Instead, bring your QA team into the room when you’re designing products, and make sure they’re up to speed when you’re developing features. By breaking down the barriers to communication early, you help erase those frustrating moments.
At a certain point of maturity, QA testing can also shift into production. Using feature flags, QA engineers can verify a new feature in the production environment itself, before that feature is released to customers.
The other strategy I’ve found that helps manual QA really shine is to have QA engineers adopt product personas when running testing scripts. For instance, they might look at each screen through the lens of a new user, or an experienced administrator. Their job then isn’t to flag whether a particular bit of functionality works or doesn’t; automated testing can catch that. Instead, their job is to flag situations where something doesn’t work the way their persona expects. Moving manual QA out of smoke testing and into this kind of persona-driven feature testing is how you deliver delightful features that work intuitively for your users.
Regression Feature Testing
Regression testing often comes in as a combination of manual and automated testing. Its goal is to ensure that when you ship a new feature, you don't break any existing workflows. The good news here is that regression testing often works just like testing a new feature.
Existing unit tests work to detect regressions when developers refactor code. Existing manual QA flows retest areas of the code known to work well. Any new behaviors are quickly identified as regressions. Shipping a new feature only to discover it broke an existing workflow is a terrible feeling, both for developers and customers.
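As a hedged illustration of how existing unit tests act as a regression net, here's a sketch with entirely hypothetical names. The tests below were notionally written when the discount feature first shipped; if a later refactor silently changes the meaning of `percent` (say, to a 0-1 fraction), they fail immediately and flag the regression before release:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Existing, known-good behavior: percent is on a 0-100 scale."""
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # Written when the feature first shipped; they now guard refactors.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.00, 10), 180.00)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

if __name__ == "__main__":
    unittest.main()
```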
Customer Feature Testing
The absolute best feedback you’ll ever get is from your customers. Whenever you develop a new feature, you want to get it into customer hands as quickly as possible. In fact, getting code to customers and rapidly iterating based on their feedback is the core of the Agile development philosophy.
The catch with customer testing, though, is that to test code with customers, you need to ship code to customers. This is a dangerous proposition, especially with your newest or experimental features. It’s likely that these features don’t work quite as well as you’d hope. It’s also likely that not every customer is going to know how to use this new feature. And, if you turn it loose on the world, customers who don’t know how to use the feature will inundate your support team with frivolous support requests.
This is where a tool like CloudBees Feature Management is invaluable. By adopting a feature management approach, you can turn a particular feature on only for specific customers. Once you turn your new feature loose for those customers, you can work with them specifically to solicit in-depth feedback from users who understand that feature’s value.
Fortunately, they’re also looking at the feature with fresh eyes, meaning they can tell you which things work intuitively versus which don’t work how they’d expect. Then you can rapidly deliver new iterations of the software directly to that limited set of customers. In this way, you work closely with those customers to make sure you’re building software that fits their needs precisely.
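The mechanics of per-customer gating can be sketched generically. To be clear, this is not CloudBees Feature Management's actual API; every name below is hypothetical, and a real feature management service would manage rollout state for you rather than in an in-memory dict:

```python
# Generic sketch of per-customer feature gating (illustrative only).
class FeatureFlags:
    def __init__(self):
        # flag name -> set of customer IDs allowed to see the feature
        self._rollouts: dict[str, set[str]] = {}

    def enable_for(self, flag: str, customer_id: str) -> None:
        self._rollouts.setdefault(flag, set()).add(customer_id)

    def is_enabled(self, flag: str, customer_id: str) -> bool:
        return customer_id in self._rollouts.get(flag, set())

flags = FeatureFlags()
flags.enable_for("invoice_redesign", "acme-corp")  # hypothetical names

def render_invoice(customer_id: str) -> str:
    if flags.is_enabled("invoice_redesign", customer_id):
        return "new invoice layout"    # only opted-in customers see this
    return "classic invoice layout"    # everyone else stays on the stable path
```

The key design property is that rollout is a runtime decision, not a deployment decision: the new code ships to everyone, but only the customers you've enabled ever execute it.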
Feature Testing Isn’t Optional
There’s an old programmers’ truth that says that every software team has a testing environment. The lucky ones are the teams who have a testing environment separate from their production environment.
The joke persists, despite the rise of testing in production as a legitimate practice with the right safeguards, because it speaks to a fundamental truth: you’re going to wind up testing your code no matter how you work on it.
As a team, your job is to determine the right amount of testing to perform before feature code reaches your customers. Finding the right balance requires thinking critically about the feature you’re working on and applying an exacting attention to detail. The good news is that, if you listen to your testers and your customers, they’ll tell you what your team needs to know to ship exciting, polished features that delight your users. It’s up to your team to listen and find the right balance of time spent testing.
This post was written by Eric Boersma. Eric is a software developer and development manager who's done everything from IT security in pharmaceuticals to writing intelligence software for the U.S. government to building international development teams for non-profits. He loves to talk about the things he's learned along the way, and he enjoys listening to and learning from others as well.