A nice way to test redux reducers and actions - part I
July 15, 2019
This post is part one of a series, covering techniques for testing redux reducers.
One of the benefits of using redux is the ease of testing. Keeping state management separate from the rest of our application makes it easier to test in isolation.
That said, the mechanics of testing the various moving parts in a redux app - stores, reducers, actions, action creators, selectors - is not entirely obvious. One question that I see crop up frequently is how granular our tests should be. Should we test each of these moving parts in isolation, or in more integrated groupings?
Isolation by default
We’ll explore some different options by testing a simple redux reducer. The reducer in question is responsible for managing a list of todos. Here’s the implementation we have so far, which only supports a single action that adds a todo to the list.
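Roughly, such an implementation might look like the following sketch. The names (todosReducer, addTodo, ADD_TODO) and the internal shape of the action are illustrative assumptions, not taken verbatim from any particular codebase:

```javascript
// A minimal todos reducer module: one action type, one action creator,
// and a reducer that handles that single action.
const ADD_TODO = 'ADD_TODO';

// action creator
function addTodo(text) {
  return { type: ADD_TODO, text };
}

// reducer: manages a list of todos, handling only the ADD_TODO action so far
function todosReducer(state = [], action) {
  switch (action.type) {
    case ADD_TODO:
      return [...state, { text: action.text }];
    default:
      return state;
  }
}
```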
We’ll start from the position that our tests should be as focused as possible, exercising small units of code in isolation. Here’s a test like that for our reducer:
and here’s an isolated test for our addTodo action creator:
These tests seem reasonable - in fact, they are extremely similar to the testing examples you’d find in the redux docs. They’re certainly focused on a small isolated chunk of code, and it’s pretty easy to understand what they’re doing.
However, these extremely fine-grained tests are not without their downsides. They are quite verbose, and not as expressive as I’d like. The intention of the reducer test is a little lost amongst the boilerplate of creating an action of the right shape and then verifying the details of the state that the reducer produces when processing that action. Additionally, as we build out our application logic we’ll find ourselves having to write a lot of these types of tests. Despite being a strong proponent of testing (and having spent over a decade practicing TDD), I find myself quietly sighing as I copy-pasta boilerplate test code in order to build out new behavior.
What’s more troubling is how brittle these tests are. Let’s consider what would happen if we decided to refactor the internal structure of the addTodo action to conform to the Flux Standard Action format. We’d have to change our action creator implementation, of course, along with the corresponding test:
We’d also need to make a small change to our reducer implementation, so that it can still pluck information from the action correctly:
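In this sketch, that change looks like this:

```javascript
// The one-line reducer change: the todo's text is now plucked from
// action.payload rather than from the action object itself.
function todosReducer(state = [], action) {
  switch (action.type) {
    case 'ADD_TODO':
      return [...state, { text: action.payload.text }];
    default:
      return state;
  }
}
```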
Unfortunately, this one-line reducer change also means that we have to change every related test for the reducer:
This is a shame, because this reducer test shouldn’t really care about an implementation detail like the action’s internal structure - the focus of the test is the behavior of the reducer, not the shape of the action. We’re seeing a classic characteristic of brittle tests here - a refactoring change has forced us to update tests which aren’t directly associated with the change. Now, with our current test suite this isn’t a big deal - we only have one test to update - but in a real-world test suite these types of brittle tests become a serious liability. They add significant additional overhead to refactoring work, which in turn discourages engineers from making the types of small, opportunistic “keep the campground clean” improvements which help keep a codebase ever-green.
There’s an even scarier issue with our current testing strategy, one that I had sort of glossed over. When we changed our action’s internal structure just now, our reducer was broken until we made a corresponding update to how the reducer was working with that action. However, our reducer tests would not have detected this and would have continued to pass, despite the fact that the reducer wouldn’t work when integrated into our real codebase. This is because our reducer tests aren’t using the action creator that our regular production code uses.
This type of false-negative test result is a common challenge with isolated tests - the very fact that a unit is isolated from its dependencies during test means that the test won’t detect breaking changes in those dependencies.
Getting less isolated
Let’s resolve these issues by adjusting our testing strategy. The problem with our initial approach is that our reducer tests are isolating our reducer from the action creator, causing false-negatives. What’s more, this isolation was achieved by re-implementing our action creator logic within the tests, causing brittle tests.
Both these issues go away if we change our reducer tests to use the real action creator implementation, making the tests a little less isolated:
Reducer tests using this approach no longer need to be changed when the internal shape of the action changes. In addition, such tests will fail appropriately when a change is made in the action creator implementation without a corresponding change in the reducer implementation. Win!
The fact that we are no longer testing our reducer in isolation might raise concerns - we’re using an action creator which isn’t part of the unit under test. Can we even call these unit tests any more?
Such concerns might be valid if we considered the reducer in isolation as the unit that we’re testing. However, I’d argue that it doesn’t really make sense to consider the reducer as a cohesive unit on its own. A reducer is only ever used in concert with actions - actions that should always be created by action creators. It doesn’t make sense to consider these things in isolation, and so it doesn’t make sense to try and test them in isolation. Therefore, these broader tests which consider both action creators and reducer as within their scope are still unit tests. We’re still testing a unit; it’s just that the scope of the “unit” under test has widened a little to include both the reducer and the actions which that reducer is built to process.
Action creator tests are unnecessary
Now that our tests are covering the implementation of both the reducer and the action creator, the isolated action creator tests which we initially wrote have become redundant. We are already verifying the implementation of the action creator via our new, broader reducer tests - those tests would give us good feedback if we were to break the output of the action creator somehow. We no longer need to write or maintain action creator tests, reducing our test burden and making our test suite more supple. With those tests gone, we can now refactor the internal structure of an action without having to touch any tests, while still retaining the safety net of having tests fail if we change the implementation in the action creator without the correct corresponding changes in the reducer. A win-win.
Enter the duck
Let’s step back, take a look at how we’ve changed our testing strategy, and see what that might tell us about the system that we’re testing.
We started with fine-grained, isolated tests which exercised our action creator and our reducer independently.
What we’ve moved to is a broader test which covers both action creator and reducer together, as a unit.
Our tests have led us to a realization that we shouldn’t consider our reducer and the actions that it processes as independent parts - they are a cohesive unit. In fact, we can consider the action creators as the public API for a reducer - they’re the interface that we interact with when we want to make use of a reducer - we never interact with a reducer directly.
It’s always exciting when our tests lead us to a deeper understanding of the design of our code!
We’re not the first to walk this path. Erik Rasmussen came before us, coining the concept of a duck: a formalized modular grouping of reducer + actions.
When we start thinking of our reducers and actions being implemented as a single module (a duck), we see why directly testing the output of the action creators wasn’t a sound approach. The shape of the actions that the action creators create is an internal implementation detail of that duck. We shouldn’t test these internals directly, just as we wouldn’t directly test the private methods or internal state of a class in an Object-Oriented system. The fact that our isolated reducer tests were rolling their own action data structures is another example of the same violation of our module’s private internals.
Ok, so now that we’re sold on the concept of the duck, let’s refactor our existing implementation to make it into a duck module:
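A sketch of what that duck module might look like (the file name, the exports, and the todos/ namespace prefix on the action type, as the ducks convention suggests, are my assumptions):

```javascript
// todosDuck.js - action type, action creator, and reducer gathered
// into a single cohesive module.
const ADD_TODO = 'todos/ADD_TODO';

// action creators are the duck's public API
function addTodo(text) {
  return { type: ADD_TODO, payload: { text } };
}

// the reducer is only ever driven by actions built by the creators above
function todosReducer(state = [], action) {
  switch (action.type) {
    case ADD_TODO:
      return [...state, { text: action.payload.text }];
    default:
      return state;
  }
}

module.exports = { addTodo, todosReducer };
```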
There’s nothing exciting going on here, to be honest. We’re just merging the various components of the duck together into a single file.
The duck is the right unit to test
We started this article in search of a good strategy for testing reducers and the actions they process, and I’d say we’ve found one. By treating these as a cohesive unit, we can create unit tests which are shorter, more expressive, and more resilient to refactoring and code cleanup, without losing any test coverage. Along the way, our testing has led us to a realization of a nice way to modularize our redux code.
But why stop here? We can make our tests even better! In the next post in this series we’ll look at how we can expand the scope of this cohesive unit, creating a “superduck”! In the third post we’ll explore testing techniques for the more complex, stateful behavior in our reducers.