Codeship's Philosophical Approach to Frontend Testing

Written by: Roman Kuba

A note upfront: this post is more philosophy than code, with only a few illustrative sketches along the way. It grew out of conversations about testing after a series of meetups, conferences, and general internal discussion at Codeship. A common theme in these conversations is:

"How do you decide what to test?"

My quick personal answer always sounded something like, "Ask yourself what happens if the code breaks -- if it's causing trouble or puts you in an uncomfortable position, you should test it."

Deep down though, this question got me thinking. My answer probably couldn't solve all the issues people face. There are, after all, more profound questions when it comes down to testing software and code that you write.

Working in Codeship's frontend development team, I find that the most relevant spectrum of testing ranges from verifying that the JS code works to ensuring that the UI gets rendered correctly and ultimately that the code works in the whole picture of the app.

Search online for "testing pyramid," and you'll find every possible nuance, from the idea that unit testing (testing single pieces of code) is the foundation and acceptance tests (testing the whole context of the app end to end) are the peak to the complete opposite.

I wonder how someone who hasn't been deep into writing tests for quite some time can make a sound decision on this matter.

What Tests Should I Use?

My aim here is not to talk about the importance of testing -- countless people have written about that in great detail. The question is, though, when looking at testing from a bird's eye view, what are the surrounding factors that can help you make a decision about what and how to test?

Some opinions I've read:

  • Acceptance tests are great. They give you the certainty that the full picture works. You should do only acceptance tests, since they cover as much ground as possible with limited maintenance overhead.

  • Acceptance tests are overrated. They mostly follow the happy path and only provide false confidence.

  • Unit tests are bad. They never give you the full picture. The unit can work in isolation while being broken in context, and you'd never realize it.

  • Unit tests are the foundation of every good app. A small part breaking can tear down the whole machine.

Considering all of this, it gets tough to know what the right path is. Honestly, there is probably truth in all of the above and the "correct" path lies somewhere in the middle.

Talking about acceptance tests without considering the cost of actually running them misses the mark, in my opinion. Tests that require all the moving parts -- a server, a database, a browser, and probably more in between -- are costly to run. Every test needs to create a new session, render things in the browser, write data into the database, read data back, and so on. Just booting everything up before the first line of what you actually want to test gets executed can take multiple seconds.

Theoretically, if booting takes three seconds and there are 100 specs running in serial, that's five minutes spent just creating new sessions. Obviously, some optimizations can be made, but for the sake of this example, let's go with it.

The critical piece here is that those five minutes don't even include the actual test execution time. So that adds even more cost to those specs.

Unit tests, on the other hand, are known to be fast. They have almost no prerequisites that need to be handled, and it's easy to get started and have a first green test within seconds.
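To illustrate how little setup that takes, here is a minimal sketch of a Jest spec (the `sum` function is hypothetical):

```js
// sum.test.js -- a first green Jest spec, no server or browser required
function sum(a, b) {
  return a + b;
}

test('adds two numbers', () => {
  expect(sum(1, 2)).toBe(3);
});
```

No session, no database, no browser -- the spec runs in milliseconds.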

But the argument that unit tests neglect the whole picture is valid as well.

Another form of tests that I haven't mentioned yet is integration tests. Existing beyond unit testing but not going all in like acceptance tests, they seem like the right tool.

I personally always struggled to grok the fine line between unit tests and integration tests. Testing a function that depends on another function to return a boolean value -- is that already an integration test? Or is it technically still a unit test as I look at that function?
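To make that ambiguity concrete, consider a sketch like the following (the functions are hypothetical):

```js
// A simple predicate
function isAdult(user) {
  return user.age >= 18;
}

// A function that depends on isAdult's boolean result
function greet(user) {
  return isAdult(user) ? `Welcome, ${user.name}` : 'Sorry, adults only';
}

// This spec exercises both functions. Is it a unit test of greet,
// or already an integration test of greet and isAdult together?
test('greets adult users', () => {
  expect(greet({ name: 'Ada', age: 30 })).toBe('Welcome, Ada');
});
```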

To answer that, let me try to paint the picture we came up with at Codeship.

Lay Down Some Ground Rules

"It is not the beauty of a building you should look at; it's the construction of the foundation that will stand the test of time." -David Allen Coe

This quote is easily applicable to testing.

Treating specs as part of the foundation of your app makes it very important to construct them thoughtfully. First off, one of Codeship's ground rules is not to aim for 100 percent code coverage. With a lot of work, you can reach that number, but it'll probably make the whole development cycle drag, make it inflexible, and eventually leave you with a lot of tedious specs that do nothing more than help you hit the coverage number.

Any other number is arbitrary as well. Measuring quality by a number alone never guarantees that the critical fragments of code are tested in a meaningful way.

Avoid testing simple functions

Functions that return booleans, concatenate strings, or perform comparably simple operations do not add quality specs to our testing suite. It's better to write a spec for the function that consumes those simple functions.
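As a hypothetical sketch of that rule:

```js
// Not worth a dedicated spec: a trivial helper
function fullName(first, last) {
  return `${first} ${last}`;
}

// Worth a spec: the consumer, which covers fullName implicitly
function greeting(user) {
  return `Hello, ${fullName(user.first, user.last)}!`;
}

test('greeting renders the full name', () => {
  expect(greeting({ first: 'Grace', last: 'Hopper' })).toBe('Hello, Grace Hopper!');
});
```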

Don't test your mock

Often there is a need to inject custom mocks or data to supply everything the function needs to work properly. If you find yourself mostly verifying that the mock data is present, it's time to reconsider how the spec is written. Such a spec doesn't guarantee that the code works, only that the mock behaves correctly.

A better way is to verify behavior that depends on the injected mock data. Mocks are cheap, so they're the perfect playground for some chaos monkey. Ideally, the mock is a sample of real-life data.
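A hypothetical before-and-after sketch (`navLinksFor` stands in for whatever function actually consumes the data):

```js
// Hypothetical function under test
function navLinksFor(user) {
  return user.admin ? ['/home', '/settings'] : ['/home'];
}

const mockUser = { name: 'Ada', admin: true };

// Bad: this only proves the mock contains what we just put into it
test('user is an admin', () => {
  expect(mockUser.admin).toBe(true);
});

// Better: verify behavior that depends on the injected mock data
test('admins see the settings link', () => {
  expect(navLinksFor(mockUser)).toContain('/settings');
  expect(navLinksFor({ ...mockUser, admin: false })).not.toContain('/settings');
});
```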

Draw the line

Eventually, when structuring the specs, it's important to know where the separation happens between unit, integration, and acceptance specs. While it can be hard to draw the line between unit and integration, it's very easy to draw it regarding acceptance specs.

Our approach going forward is that acceptance tests should follow longer real-world usage stories. I'll talk a little more on this later.


Testing a Component Driven UI

At Codeship, we decided to go with Vue as our frontend render library of choice a little over a year ago. It was an exciting journey and a successful one.

It's essential that the testing approach works independently of the library. No matter which component library you end up using, the benefits and effectiveness you gain in component testing let you take a lot of burden off the acceptance suite.

The following questions have always been the main reasons for implementing acceptance tests:

  1. Does everything render correctly?

  2. Do the interactions work?

  3. Will the frontend show the correct state after specific actions?

With components, it becomes easier to verify 90 percent of those points. Components can easily be rendered outside a real browser using jsdom, which emulates a browser environment in Node. This comes baked into test runners like Jest.

So besides unit testing, we can render components and verify that the generated HTML looks the way we expect.
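Since we use Vue, a minimal sketch with Vue Test Utils might look like this (the `Greeting` component is hypothetical):

```js
import { mount } from '@vue/test-utils';
import Greeting from './Greeting.vue'; // hypothetical component

test('renders the user name', () => {
  // jsdom lets this run in plain Node -- no real browser involved
  const wrapper = mount(Greeting, { propsData: { name: 'Ada' } });
  expect(wrapper.text()).toContain('Ada');
});
```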

Jest supports something called snapshots. A snapshot is stored alongside the specs and represents a stringified version of whatever piece of data you provide. Every subsequent snapshot call is then verified against the stored version. This makes it ideal for storing rendered HTML and verifying that a component, or even a fully rendered page composed of multiple components, generates the correct output.
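Building on the sketch above, a snapshot spec could be as short as this:

```js
import { mount } from '@vue/test-utils';
import Greeting from './Greeting.vue'; // hypothetical component

test('Greeting renders the expected HTML', () => {
  const wrapper = mount(Greeting, { propsData: { name: 'Ada' } });
  // The first run stores the HTML; later runs compare against it
  expect(wrapper.html()).toMatchSnapshot();
});
```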

This takes away the burden of acceptance tests to verify that a button, a line of content, or an HTML class exists on the page. It's stored right there in your snapshot file and guarantees your components render accordingly. Beyond that, it's even possible to test and verify specific behavior on the page, as the sketch after this list shows:

  • Would a click on that element spawn that Ajax request?

  • Would filling out that form validate the inputs and print possible errors on the page?
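Both questions can be answered at the component level. A sketch of the first one, mocking the HTTP layer with Jest (axios, the `SubmitButton` component, and the route are assumptions for illustration, not our actual code):

```js
import { mount } from '@vue/test-utils';
import axios from 'axios';
import SubmitButton from './SubmitButton.vue'; // hypothetical component

jest.mock('axios');

test('clicking the button fires the Ajax request', async () => {
  axios.post.mockResolvedValue({ data: { ok: true } });
  const wrapper = mount(SubmitButton);

  await wrapper.find('button').trigger('click');

  expect(axios.post).toHaveBeenCalledWith('/api/submit', expect.any(Object));
});
```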

This combination of integration and unit tests eventually provides a good amount of confidence in our code -- the perfect point to verify the complete picture with acceptance stories.

A Good Acceptance Test

Pinning down what a good acceptance test looks like is hard to get exactly right. After some helpful conversations with a colleague, we came to the following series of questions and answers.

  • How would a product manager verify that a feature is working? They would run through specific flows.

  • How would a user interact with the page? They would run through specific flows.

So obviously our acceptance test suite should also run through a flow and verify that the flow as a whole works. That said, a flow does not always need to be the happy path. A flow could look like the following:

  • User navigates to the page.

  • User fills out a form and forgets a required field.

  • User submits the form.

  • User sees an error message on the screen.

  • User fills in missing field.

  • User submits form again.

  • User sees success message for planned action.

  • User can now visit the logical next page.

This flow would not only verify that everything in the UI behaves correctly but also exercise a whole chain of behavior: validating data, reacting to an error, sending something to a server, and navigating around. A test structured this way eventually provides trust in the flows a user runs through; a sketch of what it might look like follows.
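As one possibility (Cypress is used here purely as an example tool; the routes, selectors, and messages are hypothetical):

```js
describe('signup flow', () => {
  it('recovers from a missing required field', () => {
    // User navigates to the page
    cy.visit('/signup');

    // User fills out the form but forgets the required email field
    cy.get('[name=username]').type('ada');
    cy.get('form').submit();

    // User sees an error message on the screen
    cy.contains('Email is required');

    // User fills in the missing field and submits again
    cy.get('[name=email]').type('ada@example.com');
    cy.get('form').submit();

    // User sees the success message and can visit the logical next page
    cy.contains('Account created');
    cy.url().should('include', '/welcome');
  });
});
```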

Conclusion

Obviously, there is no one-size-fits-all solution when it comes to testing. What works is a combination of a clean approach to your code and to your tests. Eventually, it boils down to how much you trust your test suite. If you build a business on software, you should absolutely be able to trust that code, because your customers have to trust it as well.
