This is the eighth Testing Tuesday episode. Every week we will share our insights and opinions on the software testing space. Drop by every Tuesday to learn more! Last week we talked about how to manage your integration tests' data with factory_girl.
Using Integration Testing and Unit Testing with Behavior-Driven Development
Traditionally, application code is written first and tests afterwards. After writing a piece of code, you write unit tests. And once you've finished a few pieces of code, you write integration tests to see if they fit together. In Behavior-Driven Development there is an approach called Outside-In Development where we go the other way round:
We write our scenario first, and in order to make it work we write examples for single components. The application code comes last. This is our development cycle:
1. Specify a scenario
2. Run the scenario and watch it fail
   2.1. Specify the behavior of a component in an example to fix the failure
   2.2. Run the component specification and watch it fail
   2.3. Implement the minimum functionality necessary to make the example work
3. Run the feature again. If it still fails, continue with 2.1.
4. The scenario works! Write another scenario until the feature is complete.

Developing software Outside-In has numerous advantages. Watch the screencast to find out more!
Up next week:
RSpec is a framework for specifying behavior. Next week we're gonna take a look at specifying single components of our application with RSpec. We'll especially focus on how you can specify components in isolation from others with stubbing and mocking. Thanks to Amrit Deep Dhungana who asked us on our Facebook page to talk about RSpec stubbing and mocking. Yes we can! And we'd love to. Of course we support RSpec on Codeship.
Transcript
Integration tests and unit tests in BDD
Ahoi and welcome! My name is Clemens Helm and you’re watching Codeship Testing Tuesday number 8. Last week we talked about how test data can help you with your integration tests. This week we explore the benefits of integration tests and unit tests, and how they fit into your behavior-driven development workflow.
Just as a reminder: Integration tests test how multiple parts of your system work together, for example the user interface, the web application and the database. Unit tests only test one isolated part or component of the application, like the user interface or a model.
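To make the distinction concrete, here's a tiny plain-Ruby sketch (hypothetical classes, not the app from the screencast): the unit tests touch only one class in isolation, while the integration test drives two parts working together.

```ruby
# One isolated component: a hero with a simple validity rule.
class Hero
  attr_reader :name

  def initialize(name)
    @name = name
  end

  def valid?
    !name.nil? && !name.empty?
  end
end

# A second component: an in-memory store that only accepts valid heroes.
class HeroStore
  def initialize
    @heroes = []
  end

  def add(hero)
    return false unless hero.valid?
    @heroes << hero
    true
  end

  def count
    @heroes.size
  end
end

# Unit tests: exercise Hero in isolation.
raise "unit test failed" unless Hero.new("Batman").valid?
raise "unit test failed" if Hero.new("").valid?

# Integration test: Hero and HeroStore working together.
store = HeroStore.new
store.add(Hero.new("Batman"))
store.add(Hero.new(""))
raise "integration test failed" unless store.count == 1
```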
INTRO END
In traditional software development you write unit tests to develop parts of the software like single classes. When you have enough such tested units, you write an integration test to make sure these parts work together.
In Behavior-Driven Development many developers do it the other way round: They start by describing the behavior of the whole application and continue by specifying the single parts needed for this behavior. This approach is called outside-in development. At Codeship we also follow this approach, because it has several advantages:
Specifying your application's behavior first has a number of advantages
[focus] – you stay focused on what you are actually developing for: the user that interacts with the application.
[fit] – you can always be sure that the parts of your application fit together. When you develop single parts in isolation first and write an integration test for these parts afterwards, it's likely that the parts don't fit together entirely. This causes additional work for adapting the parts.
[need] – you implement exactly what you need, whereas when you develop parts of software in isolation first, you tend to implement more than your application actually needs, or you may forget to implement things that the application actually requires.
To avoid these problems in traditional software development, you need a detailed specification of the software and its parts up front. This is unnecessary in Behavior-Driven Development.
In Behavior-Driven Development the equivalent to integration tests are features. When you develop outside-in, you follow this cycle:
1. Specify a scenario
2. Run the scenario and watch it fail
   2.1. Specify the behavior of a component in an example to fix the failure
   2.2. Run the component specification and watch it fail
   2.3. Implement the minimum functionality necessary to make the example work
3. Run the feature again. If it still fails, continue with 2.1.
4. The scenario works! Write another scenario until the feature is complete.
Let's try this out. For scenarios we use Cucumber. I already covered Cucumber in episodes #3 and #6. For writing examples for our components we use RSpec. I will cover RSpec in more detail next Testing Tuesday.
Let's say we've got a little application that lets us post our favorite superhero. There's a superhero overview and we can add new superheroes on a separate page with their name and description.
Unfortunately at the moment we can still add superheroes without a name. Let's change that:
First, we add a scenario to the "Posting superheroes" feature:
```gherkin
Scenario: No superheroes without name
  When I add a superhero without a name
  Then there should be no superhero
  And I should be told that the "Name can't be blank"
```
I will just paste the step definitions in here:
[implement step definitions]
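The step definitions might look roughly like this. This is a hedged sketch only: it assumes a standard Cucumber/Capybara/Rails setup, and the path helper, form labels, and button caption are assumptions, not the screencast's actual code.

```ruby
# Hypothetical step definitions for the scenario above.
# Assumes Capybara helpers and a Rails app with a Superhero model.
When /^I add a superhero without a name$/ do
  visit new_superhero_path
  fill_in "Description", with: "Caped crusader"
  click_button "Create Superhero"
end

Then /^there should be no superhero$/ do
  Superhero.count.should == 0
end

Then /^I should be told that the "([^"]*)"$/ do |message|
  page.should have_content(message)
end
```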
When we run the scenario now, it fails. In Rails, the best way to achieve the desired behavior is to add a validation to the superheroes model. So in the superheroes_spec.rb we could say:
```ruby
it "should be invalid without a name" do
  superhero = Superhero.new
  superhero.should be_invalid
  superhero.errors[:name].should include("can't be blank")
end
```

Note that `errors[:name]` returns an array of messages, so we check that it includes the expected message.
When we run this example, it fails. Let's try to fix it by adding the validation to the model:
```ruby
validates_presence_of :name
```
And now it works! When we run our scenario again now, it succeeds! Great, we're done! If there were another failure, we would write more examples for our components until the scenario would succeed.
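For readers without a Rails app at hand, the red/green step can be reproduced framework-free. Here is a minimal plain-Ruby stand-in (hypothetical, not the screencast's code) for the model with its presence validation:

```ruby
# Minimal stand-in for the Superhero model with a presence validation.
class Superhero
  attr_accessor :name
  attr_reader :errors

  def initialize(name = nil)
    @name = name
    @errors = Hash.new { |hash, key| hash[key] = [] }
  end

  # Plain-Ruby equivalent of `validates_presence_of :name`.
  def valid?
    errors.clear
    errors[:name] << "can't be blank" if name.nil? || name.strip.empty?
    errors.empty?
  end

  def invalid?
    !valid?
  end
end

# The example from the spec, expressed as plain assertions.
superhero = Superhero.new
raise "expected invalid" unless superhero.invalid?
raise "wrong message" unless superhero.errors[:name].include?("can't be blank")

# A named superhero passes the validation.
raise "expected valid" unless Superhero.new("Batman").valid?
```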
But didn't we put too much effort into testing here? The scenario tests the validation anyway, so why did we need to add the example to the model?
Let's look at another example:
After a while we want to enable our users to quickly add superheroes from the overview. We add a scenario that makes this possible:
```gherkin
Scenario: Add superheroes from the overview
  When I add "Batman" at the superheroes overview
  Then there should be the superhero "Batman"
```
We don't need an additional scenario for the validations now because this behavior already exists and we only write scenarios for behavior that's not there yet.
A few weeks later we decide to delete the old scenario, because we removed the extra page for adding superheroes.
This is the moment where our model example saves us: If we hadn't written the example, the validation would now be untested. If someone removed the validation, there wouldn't be any complaints from our test suite.
So specifying examples for our components makes changing our application safe. Nowadays web applications tend to change a lot during their lifetime, and so do integration tests. New scenarios are added, others removed, and it's very hard to tell if there is still an integration test covering some functionality. Unit tests make sure that every piece of code remains tested, no matter how our application scenarios change.
So when you're developing a web application that will be changed over a few months and years, unit tests will make your life easier.
There are several other reasons why unit tests are handy:

- [Faster] – Unit tests are much faster than integration tests. Integration tests can take a couple of seconds, while unit tests can run in a few milliseconds. This pays off especially when you're refactoring a class: you can run your unit tests after each little refactoring step to make sure that everything still works.
- [Clearer] – You can find out very easily if and how a piece of code is tested by reading the unit tests. Well-written examples for a class read like a summary of what the class is taking care of.
- [Independent] – Unit tests let you specify the functionality of a component without considering the rest of the system. We will look into this next week in detail.
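The independence point can be previewed with a hand-rolled stand-in object. This is a hypothetical sketch (the class names are made up for illustration); RSpec's stubbing helpers, which we'll cover next week, make this even shorter:

```ruby
# A component that depends on a collaborator (e.g. a mailer).
class SuperheroAnnouncer
  def initialize(mailer)
    @mailer = mailer
  end

  def announce(name)
    @mailer.deliver("New superhero: #{name}")
  end
end

# A hand-rolled stand-in lets us test the announcer
# without a real mailer or the rest of the system.
class FakeMailer
  attr_reader :messages

  def initialize
    @messages = []
  end

  def deliver(text)
    @messages << text
  end
end

mailer = FakeMailer.new
SuperheroAnnouncer.new(mailer).announce("Batman")
raise "not delivered" unless mailer.messages == ["New superhero: Batman"]
```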
And this is where I have to say goodbye again. It was great to see you and I'd love to see you again next Testing Tuesday! Next week we'll focus on unit testing. I will show you how you can specify single components of your application with RSpec and how to use a unit tester's best friends: stubbing and mocking. As usual I’d be thrilled about your comments and please remember: Always stay shipping!