My talk was about Rails as an SOA client. I talked about things like Conway’s Law, Contract Testing, Hexagonal Architecture, the faraday gem, and HTTP Caching.
Every ThoughtWorks project I’ve ever been on has a script in the root of the project repo, often named ./go (or ./go.bat if you’re very unlucky). This script often starts life as the script which the CI server runs to test the project. It soon acquires extra capabilities. It becomes the way by which developers perform a pre-commit check before pushing their changes. It becomes the script to run when you first get started on the project to download all the various project dependencies.
./go can eventually become the primary interface into the dev tooling for the project. At that point it becomes a very powerful productivity boon for the team. Whenever a team is tempted to put some process or workaround into a README or a wiki page they instead automate it and put it into the
./go script. The project team’s tribal knowledge on how to perform certain tasks is automated, readily available, and under source control.
Some examples of things I’ve often seen added to a
./go script are:
- download software dependencies needed for dev tooling, or at least detect that a required dependency is missing and provide instructions on how to install it.
- set up or update an IntelliJ configuration for a JVM project.
- do the rvm/bundler dance for a ruby project.
- run a standard set of pre-commit tests.
What makes a ./go script great?
I’ve spent some time thinking about what makes a good
./go script good. Here are my rules for an awesome
./go script, and thus a low-friction dev toolchain.
1. don’t make me think
Make your dev toolchain as low-friction as possible by making
./go as low-friction as possible. Give me sensible defaults. Give me clear error messages when something goes wrong, including hints on how to fix the problem. If there are two operations which need to be performed one after the other, give me a higher-level operation which does them both (for example, wrap
vagrant destroy -f && vagrant up into
./go vagrant-nuke). Give me a list of all operations available to me, with a brief description of what they do.
The git cli is a good example of what you should be striving for.
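As a sketch, a minimal ./go along these lines might dispatch on its first argument and print the available tasks when given none. The task names and commands here are invented for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical ./go skeleton: dispatch on the first argument and print
# the list of available tasks when the task is missing or unknown.

usage() {
  echo "usage: ./go <task>"
  echo "  test          run the standard pre-commit test suite"
  echo "  vagrant-nuke  destroy and rebuild the local VM from scratch"
}

task="${1:-}"
case "$task" in
  test)         bundle exec rake spec ;;
  vagrant-nuke) vagrant destroy -f && vagrant up ;;
  *)            usage ;;
esac
```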
2. isolate, isolate, isolate!
Try as hard as humanly possible to isolate your toolchain so that it’s maintained within the project. Strive to avoid system dependencies - software that needs to be installed on the workstation itself.
On a Scala project, run your own isolated version of sbt rather than the system sbt. On a ruby project use bundler’s --path option to vendorize your gems and avoid the need for gemsets. On a python project use virtualenv. Check in binary tools like chromedriver or phantomjs.
This is absolutely the best way to ensure consistency across machines, and avoid “it’s only (working|borked) on my machine” issues. Because toolchain dependencies are managed within the project repo this also makes it trivial to upgrade your tooling. No need to coordinate everyone switching versions of a tool.
Even if you can’t get to the point that your tools are fully isolated, your
./go script can still perform checks on shared system dependencies to warn you or outright fail if they are not present, or are not at the right version.
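Such a pre-flight check can be sketched in a few lines of shell. The helper name and hint text are invented for illustration:

```shell
# Sketch: warn or fail fast when a shared system dependency is missing,
# printing a hint on how to install it.
require_command() {
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "error: '$1' is required but not installed. Try: $2" >&2
    return 1
  fi
}

require_command grep "your platform's package manager"
```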
3. uniform interface
Everything is done via ./go. Don’t make me remember whether it’s ./scripts/test or some other incantation. Just put everything inside of ./go, even if
./go just immediately shells out to another tool. This is an extension to the “Don’t make me think” rule. It also makes it a lot easier to isolate your tooling. Importantly it adds a level of indirection which is under your project’s direct control. That means that you can modify or even switch your underlying tooling without having to re-educate every member of the team. You can add custom pre-flight checks for required dependencies before performing an operation. You can wrap every operation with benchmarking or instrumentation. You can run every operation inside a VM or container without the person running the script even knowing!
4. one step to a running local dev environment. One.
A new member on the team should be able to run one command to get to a running dev instance on their workstation. The setup steps in your project README should be:
git clone yourProject
cd yourProject
./go
In the worst case where you absolutely cannot automate some aspect of project setup then the
./go script should detect that the manual setup is required and provide clear and detailed instructions on how to perform it.
5. do on your workstation as you do in CI
Push as much of your CI/CD scripting as possible underneath your
./go script. This keeps everything centralized and under source control. It also makes it easy to reproduce and debug CI/CD problems on a local dev workstation, and makes it easier to maintain and extend those scripts.
You get the most benefit here if you’ve aggressively followed the advice to isolate your dev dependencies. You need your local tooling to be the same as that which is running on your CI/CD agents.
This tooling consistency is such a valuable aspect of a good dev toolchain that it’s worth pushing your entire dev toolchain into a virtual machine, with the exception of GUI tools. Combining Vagrant or Docker with something like Chef or Ansible gives you a setup with a strong guarantee of a uniform toolchain across all dev/QA workstations, and also all CI/CD agents. It gives you a very simple way to make sure your toolchain hasn’t drifted from the project norms - just nuke your VM and rebuild from scratch. Rebuilding the VM might take a while but it’ll probably be quicker than debugging the cause of the problem and correcting it. This approach also forces you to automate every aspect of your toolchain setup and reduces the risk of workstation toolchains drifting apart over time.
Running your build in a VM hosted within your workstation may sound extreme, but the amount of time you’ll save not debugging irritating tooling issues is a fine tradeoff for the inefficiency of building within a virtual machine or LXC container.
I’m a big fan of static sites, and have created a fair few little static one-page demos and toy apps over the last few years. I like to host these off of S3 and expose them under subdomains of my domain, thepete.net. In fact, you’re reading one of those S3-hosted static sites right now!
With modern infrastructure like S3 and Route 53 the process of setting up a new microsite is pretty straightforward, but the manual steps started to grate on me after a while. “I’m a dev”, I thought to myself. “Why can’t I automate this?”. And so of course I did.
I started off with some shell scripts around s3cmd, but eventually moved to an assemblage of ruby functions driving the fog gem. Prompted by an internal thread at ThoughtWorks (particularly by @gga) I cleaned things up and packaged this functionality into a gem called microstatic.
Max Lincoln also gave me lots of guidance and good ideas. He was keen for me to use
fog so that microstatic could support alternate cloud providers such as OpenStack/Rackspace. I failed to keep to that vision, but I’m very happy to take pull requests to get back on that path.
A gem in two parts
Microstatic does two things. Firstly it provides a command-line tool that makes it ridiculously simple to create a new microsite. Secondly it provides a rake task that makes it ridiculously simple to push new content to the microsite.
Creating a new microsite
Run microstatic setup and you’re done. This will create a new S3 bucket to hold your microsite, and then add a Route 53 entry to wire that S3 bucket up to a subdomain.
Deploying content to your microsite
Run rake deploy and you’re done. Microstatic ships with a rake task that will sync your local contents with the S3 bucket hosting your site.
Here’s a 24-second demo. I’m creating a brand new microsite from scratch, setting up S3 buckets and DNS entries, and finally deploying some initial content to it.
Pretty neat, huh?
The entry sparked a discussion on an internal ThoughtWorks mailing list as to whether Page Objects should include assertions or not. In fact Martin mentions this difference of opinion in the bliki entry itself. I fell on the side of favoring page objects with assertions baked in but couldn’t come up with a compelling reason why until I was working on some Capybara-based page objects with a client QA today.
If you don’t put assertions inside your page objects you are violating Tell Don’t Ask, which in turn hinders the ability for your testing framework to do implicit spin asserts - a very valuable feature.
In the following I’ll explain all of that in more detail.
Tell Don’t Ask vs Single Responsibility Principle
Let’s use a concrete example with Page Objects. I’m testing a web page which lists my bank accounts. In one of my tests I want to verify that the list of bank accounts includes a bank account with a balance of 100.25.
If I didn’t want assertions in my page object I would Ask my page object for a list of bank accounts, and then check for the balance myself:
On the other hand if I’m OK with assertions in my page object I would Tell my page object to check for a listed bank account with that balance:
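Both of the lost code samples can be sketched together. The page-object and method names here are assumptions, and a simple stub stands in for the real Capybara-backed page object so the sketch is self-contained:

```ruby
BankAccount = Struct.new(:name, :balance)

class AccountListingPage
  def initialize(accounts)
    @accounts = accounts
  end

  # Ask style: expose the state and let the test do the checking.
  def bank_accounts
    @accounts
  end

  # Tell style: the page object performs the assertion itself.
  def verify_has_account_with_balance(balance)
    unless @accounts.any? { |account| account.balance == balance }
      raise "no account listed with balance #{balance}"
    end
  end
end

page = AccountListingPage.new([BankAccount.new('checking', 100.25)])

# Ask my page object, then check the balance myself:
page.bank_accounts.any? { |account| account.balance == 100.25 }

# Tell my page object to check for a listed account with that balance:
page.verify_has_account_with_balance(100.25)
```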
This second example conforms to Tell Don’t Ask, the idea that it’s more object-oriented to tell an object what you want it to do, rather than to ask it questions about its state and then do the work yourself. However you could argue that the second page object example with embedded assertions also violates the Single Responsibility Principle - my page object is now responsible for both abstracting over the page being tested and also performing assertions on that page’s state.
At this point we see there are valid arguments both for and against the inclusion of assertions in our page objects. This illustrates that software design is rarely about right and wrong, but almost always about trade-offs between different design choices. However today I realized that there’s a feature of Capybara (and other similar UI testing frameworks) which in my opinion pushes the decision further towards complying with Tell Don’t Ask and including the assertions in the page object.
Spin Assert is a very valuable UI automation pattern. It is used to mitigate the fact that UI automation tests are often involved in races with the UI they are testing.
Let’s continue using our account listing example. Imagine I have a test in which I transfer some money from my checking account to my savings account, and then test that the balances are listed correctly:
Looks good. But what if the account balances take a while to update in the UI after the funds are transferred? My test might check that the checking account has a balance of 100.25 before the UI has had a chance to update. The test would see a balance of 200.25 instead of the expected 100.25, and would fail even though the UI would have caught up moments later. The test has raced ahead of the UI, leading to an invalid test failure.
Spin Asserts deal with this by repeatedly checking for the expected state, rather than checking once and then summarily failing. Here’s what a crude spin assert implementation might look like:
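The original sample was lost in extraction; a sketch of the crude version might be (the accounts_page object and its has_account? method are illustrative assumptions):

```ruby
# Crude spin assert: keep searching for an account with the expected name
# and balance, pausing briefly each time through the loop. A real
# implementation would also time out and fail the test eventually.
def assert_account_listed(accounts_page, name, balance)
  until accounts_page.has_account?(name, balance)
    sleep 0.1
  end
end
```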
Here we just repeatedly search for an account with the expected name and balance, pausing briefly each time we loop. In a real spin assert implementation we would eventually time out and raise an exception to fail the test after spinning unsuccessfully for a while.
Implicit Spin Asserts
Spin Asserts are such a valuable technique that they are built into a lot of web automation tools. In fact a lot of web automation tools will do a spin assert without you even being aware of it. In Selenium/WebDriver this is referred to as ‘implicit waits’, and Capybara does the same thing.
When we tell Capybara “check that this HTML node contains the text ‘foo’” it will implicitly perform that check using a spin assert. It will repeatedly check the state of the UI until either the node in question contains the text ‘foo’ or the spin assert times out. Here’s what that assertion might look like:
I think it’s pretty neat that a spin assert is hidden in there for free. However what if we instead ask Capybara for the current content of an HTML node and then do further checks ourselves?
This looks very similar, but now we’re asking for
.text instead of telling Capybara that we want the text to contain ‘foo’. Capybara is required to return the full content of the node at the instant we ask for
.text, which robs it of the chance to do the helpful implicit spin asserts it could do if we were telling it to check for ‘foo’.
By violating Tell Don’t Ask we’re reducing Capybara’s role to dumbly exposing state, and preventing it from enhancing that state with value-add behavior.
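To make the race concrete, here’s a self-contained simulation. FakeNode stands in for a Capybara node whose text arrives late; has_content? spins the way Capybara’s implicit wait does, while .text just reports the state at the instant it’s called:

```ruby
class FakeNode
  def initialize
    @text = 'loading...'
    Thread.new { sleep 0.1; @text = 'foo' }  # the "UI" updates late
  end

  # Ask: return the state right now, no waiting.
  def text
    @text
  end

  # Tell: spin until the content shows up or we time out.
  def has_content?(expected, timeout: 1)
    deadline = Time.now + timeout
    until @text.include?(expected)
      return false if Time.now > deadline
      sleep 0.01
    end
    true
  end
end

node = FakeNode.new
node.text.include?('foo')   # ask: races the UI and loses
node.has_content?('foo')    # tell: the implicit spin assert waits and wins
```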
Page Objects should include Assertions
Hopefully you can see where I’m heading by now. If our example account listing page object includes its own assertions then we can tell the page object “verify that there is a checking account with a balance of 100.25”. The page object is free to internally use spin asserts while verifying that assertion. If we don’t include assertions within our page objects then we are required to verify the page state externally. This would mean we’d need to implement our own spin asserts, and often wouldn’t be able to take advantage of the free implicit spin asserts provided by Capybara. An example which highlights the difference this can make is asserting that we’re on the correct page by checking the page title.
With internal assertions in our page object:
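The code sample here was lost; a sketch of what it might have looked like, with the page object being told to verify so that a spin-asserting has_title? (as Capybara provides) does the waiting internally. Class and method names are assumptions:

```ruby
class AccountsPage
  def initialize(browser)
    @browser = browser
  end

  # Internal assertion: has_title? gets to spin-assert under the covers.
  def verify_on_page!
    raise 'not on the accounts page' unless @browser.has_title?('Accounts')
  end
end
```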
As opposed to external assertions:
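And a sketch of the external-assertion version, where the page object can only hand back raw state and every caller must spin-assert for itself (again, names are assumptions):

```ruby
class AccountsPage
  def initialize(browser)
    @browser = browser
  end

  # External style: just expose state; no chance to spin internally.
  def title
    @browser.title
  end
end

# ...and every call site needs something like this, every time:
def eventually(timeout: 1)
  deadline = Time.now + timeout
  sleep 0.01 until yield || Time.now > deadline
  raise 'spin assert timed out' unless yield
end
# eventually { accounts_page.title == 'Accounts' }
```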
In the first example we get Capybara’s implicit spin asserts for free. In the second example not only are we required to do our own explicit spin assert, but we’re required to do so every time we want to verify the page has the right title. Overly verbose and prone to error.
Tell Don’t Ask allows value-add behavior
There are other advantages we get from including assertions in our page objects (for example better assertion failure messages), but for me the ability to leverage implicit spin asserts is the big win. At the end of the day it means that a page object can present a higher level of abstraction in its public interface, adding value underneath. Yes, we’re weaker on Single Responsibility, but overall I think it’s a good tradeoff.
The rack-flags gem allows you to define a set of feature flags which can be turned either off or on for any user. Your application code inspects the state of these feature flags when processing a web request and uses that information to modify how it handles that request.
A concrete use case
Let’s use a concrete example. We’re developing a new photo sharing feature for our Web 3.0 application. This is a big project, and we want to be releasing our app to production on a regular basis while the Photo Sharing project is still in progress. To allow that to happen we add an ‘expose photo sharing UI’ feature flag to our app, which is defaulted to Off while the project is in progress. In our application code we can check for that feature flag and only display a link to the new Photo Sharing section of our UI if the feature is On for the current request. Internal users (product owners, QAs, devs, etc) who want access to the half-finished work do so by overriding the ‘expose photo sharing UI’ flag to On using an admin UI (a simple rack app that ships with the rack-flags gem).
Once the first version of Photo Sharing is ready for prime time we simply change the default for the feature flag to On. Now every user has access to the new functionality. After the feature has been live for a while and we’re confident that it works as advertised we can retire the ‘expose photo sharing UI’ flag, along with the checks that were added to the application code.
The ability to override flags for a given user is achieved using cookies. The rack-flags gem provides a rack app which acts as a simple feature flag admin UI. It lists the flags which have been configured for your app, and allows you to override their default state. Those overrides are stored in the user’s cookie. On each request to your web application a small piece of rack middleware inspects that cookie. When your app asks rack-flags whether a feature is turned off or on, rack-flags combines the default state of each flag with whatever overrides were detected in the cookie to determine whether the flag is Off or On.
Some code examples
A yaml file defines the set of feature flags for your app, including a description of the flag and its default state:
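The sample was lost in extraction; its shape was presumably something like this sketch (the flag name and keys are illustrative, so check the gem’s README for the exact schema):

```yaml
expose_photo_sharing_ui:
  description: Show the link to the new Photo Sharing section
  default: false
use_new_checkout_flow:
  description: Send users through the rewritten checkout
  default: true
```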
Setting up rack-flags for a rails app:
Checking whether a feature is Off or On for a specific request:
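The application-facing check was lost in extraction, but its observable behaviour, defaults combined with cookie overrides, can be sketched in plain Ruby. Method and key names are illustrative, not the gem’s actual API:

```ruby
# Defaults come from the yaml file; overrides come from the admin-UI cookie.
FLAG_DEFAULTS = {
  expose_photo_sharing_ui: false,
  use_new_checkout_flow:   true
}

def feature_on?(flag, cookie_overrides = {})
  cookie_overrides.fetch(flag, FLAG_DEFAULTS.fetch(flag))
end

feature_on?(:use_new_checkout_flow)               # default applies: On
feature_on?(:expose_photo_sharing_ui,
            expose_photo_sharing_ui: true)        # internal user's override wins
```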
That’s all there is to it!
Check out the github repo’s README for more documentation, including an example of setting up a Sinatra app.
Credits and prior work
This gem is inspired by a very similar system which myself and some fellow ThoughtWorkers created while building a Rails-based e-commerce front end at a large online retailer. Last time I checked in with that client they were still using their version of this feature flagging system and were very happy with it.
I previously blogged about the cookie-based approach which we used at that client and subsequently re-implemented in this rack-flags gem.
Ryan Oglesby was one of the ThoughtWorkers on that project and co-author of this gem.
A lot of my thinking about feature flags is colored by working with a sophisticated ‘feature bit’ system several years ago. I believe Sean Ryan was the instigator of that system, which was chiefly implemented by Rob Loh and Elden Bishop under the guidance of Erik Sowa. Erik and Rob presented on that feature bit system at the Lean Software and Systems Conference 2010.
I’ll be running a couple of free workshops on iOS Unit Testing in San Francisco in late August/early September. If you’d like to attend please fill out this simple form and I’ll be in touch with details.
Why is it free? What’s the catch?
I’m preparing to run a paid 3-hour workshop on iOS unit testing for the upcoming super-awesome StrangeLoop conference. As part of that prep I’ll be doing a couple of ‘dry runs’ of the workshop so I can fine-tune the content, delivery, pacing, etc. You are cordially invited to attend. You get to learn a bunch about iOS unit testing, I get to hone and refine the workshop. A win/win for us both.
Details on the workshop
Learn the why and the how of unit-testing for your iOS applications using Kiwi, a popular open-source testing library.
You will leave this workshop excited to apply what you’ve learned to start testing your iOS app. In fact, you’ll probably want to fire up your project in Xcode straight afterwards!
We’ll start with the basics, covering what unit-testing is (and isn’t), and why it will be a valuable addition to your programming toolbox.
Having covered some theory we’ll then spend the majority of the workshop learning by doing. We’ll start by setting up an Xcode project for testing with Kiwi, and quickly move on to writing our first simple test.
After learning the basics with some isolated examples we’ll move on to look at how to unit test a real iOS app. As we go we’ll learn about mocking and stubbing, and when to make the decision not to test. We’ll learn how to test in the context of a CocoaTouch app, and how testing can positively influence the design of your app (hint: your app delegate and your view controller aren’t the only places to put your business logic!).
We’ll wrap up by touching on some more advanced topics such as test-driven development, CI, and how to test asynchronous code.
If this sounds interesting to you just fill out this simple form and I’ll be in touch with details about attending.
What better than a bio written in the third person!
Pete Hodgson is a consultant with ThoughtWorks, where he’s spent the last few years helping teams become awesome at sustainable delivery of high-quality mobile applications. Pete is on a mission to bring modern agile practices like continuous integration and automated testing to the iOS development community. While working on an iOS application for a large financial institution he helped create Frank, an automated acceptance testing tool for native iOS applications. He is now the open-source maintainer for Frank, which has become a leading tool for automated testing.
Travis CI is a cloud-based Continuous Integration service which is free to use (but only on public github projects). They recently announced support for OS X agents, which means you can now use Travis to build and test iOS applications. In this post I’ll show how I set up basic CI for an example iOS app using Travis, with the help of xctool. xctool is the ‘better xcodebuild’ which Facebook recently open-sourced. It allows you to not only build your app on the command line but also run your app’s unit tests from the command line with the same capabilities that Xcode offers when you run the tests from the IDE proper.
Getting started with Travis
How Travis works
Travis is driven entirely through github integration. You configure Travis with information on how to build your app by adding a
.travis.yml file to the root of your git repo. Then you log on to Travis using your github credentials and have it configure a github post-commit hook for the github repo. After doing that github will tell Travis every time you’ve pushed code to your repo. Travis will respond by dutifully downloading the commit which was pushed and will then do whatever build you’ve configured it to do in that .travis.yml file.
An initial Travis setup
I’m not going to cover linking your github repo to Travis; their own docs explain this well. Once you have linked your repo the next step would be to add a
.travis.yml file to the root of the repo. Here’s a basic setup similar to the one that I use to build my Run2Bart example app:
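The .travis.yml itself was lost in extraction; given the surrounding description it would have been along these lines (the script paths follow the convention described below and are assumptions):

```yaml
language: objective-c

before_script: travis/before_script.sh

script: travis/run_build.sh
```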
First I’m telling Travis that this is an objective-c project. Next I tell Travis how I’d like it to do CI against this repo by giving it instructions on what scripts it should run in order to actually perform a build. I also give some extra instructions on what to do just prior to running a build. It’s quite common to put all the build steps inline right in the
.travis.yml file, but I prefer to actually create bash scripts in my repo inside a
travis directory in my git repo and then just refer to those scripts from my
.travis.yml. This keeps the .yml file nice and small, and also makes it easy for me to test the travis build scripts locally.
We gave Travis a before_script in the .yml file above. This is intended to be used by the Travis agent to download tools needed as part of the build. Here’s what it looks like:
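The script itself was lost; per the description it would be roughly this sketch:

```bash
#!/usr/bin/env bash
# travis/before_script.sh - prepare the build agent (sketch).
set -e

brew update          # the agent's formulae can be stale
brew install xctool  # Facebook's 'better xcodebuild'
```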
Very simple. We just use homebrew to install xctool on the build agent. All travis build agents come with homebrew pre-installed, but sometimes the formula aren’t up to date, so it’s best to run a
brew update before attempting a brew install.
That’s all we need to do to prepare our agent for the build. Next let’s look at the build script itself:
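The build script was also lost; a sketch consistent with the description follows (the workspace and scheme names are borrowed from the Run2Bart example and are assumptions):

```bash
#!/usr/bin/env bash
# travis/run_build.sh - compile, then build and run the unit tests (sketch).
set -e

xctool -workspace Run2Bart.xcworkspace -scheme Run2Bart build
xctool -workspace Run2Bart.xcworkspace -scheme Run2Bart test
```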
Again, this is really simple. We first do a basic sanity check by asking xctool to build our app, specifying a workspace and scheme. This just checks that we don’t have any compilation errors. Assuming that succeeds xctool will then build and run the unit testing target for our app, launching the Simulator on the Travis agent if needed.
It’s that easy
And that’s all there is to it for a basic build-and-test setup. At this point you’re going to get very fast feedback from Travis as soon as anyone pushes code which either causes compilation errors or causes your unit tests to fail. From here you can easily build upon this basic CI setup. There are a lot of things you can do to expand this, some of which I’ve done in my example Run2Bart app.
You can add extra stages to your build such as Frank acceptance tests.
You could even have Travis distribute a build of your app to alpha users using something like TestFlight or HockeyApp. That’s definitely an advanced topic though - it would be very fiddly to achieve due to having to do code-signing of a device build on the Travis agent.
You don’t need to use Travis for this
And of course you can do all of the above on your own Jenkins (or TeamCity, or Go, or Bamboo) CI server rather than on Travis. In fact unless you’re building an open-source app or you’re a beta customer of Travis Pro then you’ll likely want to use a different CI technology. Regardless, the basic approach remains the same.
In the previous posts in this series we’ve done some test-driven development of Backbone Models, Collections, and Views. In this post I’m going to cover another role which I believe should be present in most SPA Backbone apps: Controllers.
Controllers in a Backbone app
No Controller class or prototype
There’s nothing in the Backbone.js library for building Controllers. There’s not a Backbone.Controller for you to extend from. But that doesn’t mean that Controllers don’t have a part to play in your Backbone app. I suspect that the reason we don’t see Backbone.Controller in the library is simply because there isn’t much helpful shared functionality that could be put there. The Controller role is more loosely defined than other more concrete roles such as View or Model and so there’s not a need for a concrete base implementation in the Backbone.js library. However that does not mean that your app wouldn’t benefit from code organized under the Controller role.
What does a Controller do?
Simply put, a controller mediates between the UI and the rest of your application. In the case of a Backbone app this generally means reacting to events published from your View layer. A controller may respond to these events by updating Model instances, or by making calls to Services (which we’ll get to in a later post in this series).
Doesn’t a View do that in a Backbone app?
In a lot of Backbone applications which don’t have Controllers you see Backbone Views that do a lot of the coordination work I just described. Rather than just acting as a translation layer between the UI and the rest of the application these Fat Backbone Views take on additional responsibilities, often acting as Controllers too. These views tend to be large, unwieldy, and tough to test. Their implementations are hard to read and hard to maintain. They cause these issues because they are violating the Single Responsibility Principle, both acting as an abstraction over the DOM and also implementing application logic. Introducing a Controller helps avoid this situation. Views can remain focussed on their single responsibility - mapping between the UI and the application. Controllers take on the additional responsibilities which would otherwise muddy the View’s role. Additionally, as Controllers aren’t coupled to a DOM the way Views are they are much more amenable to testing in isolation.
Back to our example
Now we’ll see what a typical Controller might look like, using our Card Wall app as an example.
Adding new cards to a card wall
When we left our Cards application in our last post we had a
CardWall model which represented a set of cards. We also had a
NewCardView which allowed a user to enter a new card. What’s missing is something which ties those two elements together. When a user fills out the text for a new card and hits create, a new card should be added to the CardWall. This is exactly the place where a Controller would come into play - bridging between a View and a Collection or Model (or both).
Let’s get going and use tests to drive out that controller.
Once again, we’ll write an initial silly-simple test to get us started. We’ll test that we can create a controller:
This will fail because we don’t have that constructor function defined yet. We easily remedy that and get our test back to green:
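The remedy (lost in extraction) was presumably the simplest possible constructor function:

```javascript
// Takes no parameters, returns an empty object literal - just enough
// to make the first test pass.
var createCardWallController = function(){
  return {};
};
```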
Simple enough. A function that takes no parameters and returns an empty object literal. That’s enough to get us to green.
OK, what should we test next? Well, this controller needs to react to a
NewCardView firing a
create-card event by adding a card to a
CardWall. That’s quite a lot of functionality to bite off in one test but I don’t really see a way to break it down, so let’s just give it a go and see if we can express all of that in a reasonably small step:
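The test itself was lost in extraction. Here’s a self-contained sketch of what it was doing: plain assertions and a hand-rolled events object stand in for the test framework, Backbone.Events, and a mocking library, and the minimal controller implementation is included just so the snippet runs:

```javascript
// Minimal controller under test (built up step by step in this post).
var createCardWallController = function(newCardView, cardWall){
  newCardView.on('create-card', function(){ cardWall.addCard(); });
  return {};
};

// Fake NewCardView: an empty object with just on/trigger event methods,
// standing in for _.extend({}, Backbone.Events).
var fakeNewCardView = {
  handlers: {},
  on: function(name, fn){ this.handlers[name] = fn; },
  trigger: function(name, arg){ this.handlers[name](arg); }
};

// Fake CardWall: records calls to addCard, like a spy would.
var fakeCardWall = {
  addCardCalls: 0,
  addCard: function(){ this.addCardCalls += 1; }
};

createCardWallController(fakeNewCardView, fakeCardWall);

// Simulate the view firing the event, then verify the controller reacted.
fakeNewCardView.trigger('create-card');
console.assert(fakeCardWall.addCardCalls === 1);
```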
There’s a lot going on here. We are creating fake instances of NewCardView and CardWall and passing them into our controller’s constructor function. Then we simulate a
create-card event by explicitly triggering it on our fake
NewCardView instance. Finally we check that our controller reacted to that
create-card event by calling
addCard on the
CardWall instance it was given during construction.
We created our fake instance of
NewCardView by just taking the base
Backbone.Events module and mixing it into an empty object. That Events module is the same module which is mixed into pretty much every Backbone-derived object (Model, View, Collection, Router, even the Backbone object itself!). Mixing the Events module into our empty object gives it the standard Backbone event methods like
trigger. That’s all we needed our fake instance to be able to do in this case - be an empty object with some event methods mixed in.
So now that we understand what the test is doing let’s get it to pass:
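The implementation was lost in extraction; judging from the surrounding description it was presumably close to this sketch:

```javascript
// On construction, immediately subscribe to create-card events from the
// view, and respond by asking the card wall to add a card.
var createCardWallController = function(newCardView, cardWall){
  newCardView.on('create-card', function(){
    cardWall.addCard();
  });
  return {};
};
```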
Reasonably straight-forward. When the controller is constructed it now immediately registers a handler for any
create-card events from the
NewCardView instance - note that the controller is now being provided the
NewCardView instance (and a
CardWall instance) via the constructor function. The handler for that
create-card event simply calls
addCard on the CardWall.
That gets our previous test to green, but unfortunately it breaks our initial simple test. That test wasn’t passing any arguments to the controller’s constructor function. That means that
newCardView is undefined, and calling
newCardView.on fails. We could solve this by modifying that initial test to pass in some kind of fake
newCardView. However, looking at that test it is providing no real value now. It was there to get us started TDDing our controller, and now that we’re started we don’t really need that test to stick around. It’s very unlikely to catch any regressions or provide any other value in the future so we’ll just delete that first test, which will bring our full test suite back to green again.
If a test isn’t providing any value then don’t be afraid to delete it.
We now have a controller which reacts to
create-card events by asking the
CardWall to add a card, but it’s not passing any information about the card from the
NewCardView to the
CardWall. Specifically, it’s not passing the text which the user entered into the new card view. To drive out that functionality we could add a new test which describes the additional behaviour. However in this case that test would just be an expanded version of the existing test, so instead we’ll just take our existing test and flesh it out more:
We’ve expanded our
fakeNewCardView to include a stub implementation of the
getText method. This method would be present on a
NewCardView instance. We’ve also extended the event triggering in the test to also include the sender of the event. This is what the actual
NewCardView implementation which we built in the previous post does. Finally we modify our verification step. Instead of just checking that
addCard was called it checks that it is called with a
text parameter equal to the card text returned by the getText stub.
We run this test and of course it fails, but it’s very simple to get it passing:
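Again reconstructing from the description, the final version of the controller might be:

```javascript
// Take the sender passed along with the create-card event, grab its
// text, and include it when asking the card wall to add the card.
var createCardWallController = function(newCardView, cardWall){
  newCardView.on('create-card', function(sender){
    cardWall.addCard({text: sender.getText()});
  });
  return {};
};
```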
The only thing we’ve changed here is to take the sender argument passed with the
create-card event, grab the text from that sender and then include it when calling
cardWall.addCard. Tests are green once more. Looking at this implementation you can see why we had to add the extra parameter in the explicit
fakeNewCardView.trigger('create-card',fakeNewCardView) call in our test code. If we didn’t pass the
fakeNewCardView with the event then
sender would be undefined in our controller code, which would cause the test to fail. It’s unfortunate that this level of implementation detail leaked into our test writing, but that implementation detail is part of the
NewCardView public API, so this isn’t too big a concern.
Our Controller is complete
And at this point we’re done. We have a Controller which mediates between our Views and the rest of our app, creating new cards when a view tells it that the user wants a new card.
That’s our Controller?!
Our controller ended up being 3 lines of code. Was that worth it? Why didn’t we just throw that logic into NewCardView directly?
Well, if we had done that then
NewCardView would now have to know which
CardWall it was associated with (so that it could call
addCard on the right object). That means
NewCardView would now be coupled directly to the
CardWall. That means you’d either need to wire up each
NewCardView with a
CardWall - probably via constructor params or by setting fields - or worse you’d need
CardWall to become a singleton. Both of these are design compromises. It’s better to keep the different parts of your system decoupled if you can.
Also note that if
NewCardView knows about a
CardWall then you are likely to start to have circular dependencies - Collections or Models know about Views, which in turn know about other Collections or Models. Your design becomes quite hard to reason about - you don’t really know what objects are involved in a particular interaction. More coupling also makes things harder to test. If a
NewCardView is directly calling a
CardWall whenever card creation is requested then you need to supply fake versions of
CardWall in a lot of tests. That’s one more piece of complexity in your tests for you to read, understand and maintain. Better to keep things decoupled so that your tests can remain isolated and focussed, along with the rest of your code.
In the previous installment of this series we looked at the basics of test-driving the development of a Backbone.js app by TDDing some simple functionality for Backbone Models.
In this installment we’re going to get a bit more advanced, looking at how we can test-drive Backbone Views. Because views integrate with the DOM they can be a little more fiddly to test, but TDDing your views is still a very achievable goal.
HTML test fixtures - to be avoided if possible
However in the context of a Backbone app we shouldn’t need to lean on fixtures too often. In fact if we need to use fixtures often in our Backbone View tests then we should be concerned. This is because in most cases a Backbone view should not be accessing parts of the DOM outside of its own
el property and thus should not need fixtures during testing. The DOM is essentially one big chunk of mutable shared state (or, put another way, one big global variable) and we should avoid relying upon that shared state in our design unless absolutely necessary.
Mutable shared state is a tricky thing to deal with in code. You can never be sure what code is going to change the state, and when. You have to think about how your entire application works whenever you change how you interact with that state. A Backbone view which restricts itself to just modifying its own
el on the other hand is easy to reason about - all code that modifies that state is localized within that view’s implementation, and it is usually easy to hold in your head all at once. This very powerful advantage goes out of the window when we start accessing state that’s not owned by the view. So, we should strive to avoid access to the wider DOM whenever possible by restricting our view to only touching its own
el property. If we do this then we have no need for fixtures. This means that if we are relying on fixtures a lot then that should be considered a test smell which is telling us that we are failing in our goal of keeping as much DOM-manipulation code as possible localized to within a single view.
The one place where we must relax this guidance a little is when we have what I call Singleton Views. These are top-level Backbone views in our Single Page App which bind themselves to pre-existing ‘shell’ sections of the DOM which came down into the browser when the app was first loaded. These views are generally the highest level visual elements in the application, and as such there tend to be no more than 3 or 4 Singleton Views in a given SPA. For example a Gmail-type app may have Singleton Views for a navigation bar, a tool bar, the left-hand folder view and the main right-hand email container.
In order to test these Singleton Views properly we need to test them in the context of the initial DOM structure which they will bind themselves to, since this is the context they will always be within when the full app is running. HTML fixtures allow us to write tests for these Singleton Views in the context of their DOM structure but in an independent way, without affecting other tests. We’ll see an example of TDDing a Singleton View when we create the New Card view later on.
Our first View
Let’s start off by building a
CardView, which our app will use to render an instance of a
Card model. As we did before we’ll start off with a test, before we even have a
CardView defined at all. Also like before it’s good to start off with a simple test that drives out a basic initial implementation of what we’re building.
This test should be pretty self-explanatory. We’re saying that we expect a card view to render itself as a div of class ‘card’. Note that we take advantage of Jasmine-jQuery’s
toBe matcher here, which says ‘the element in question should match this css selector’.
Of course this test will fail initially because we haven’t even defined a
CardView. Let’s get the test passing by defining a
CardView along with the necessary configuration specifying what class and type of tag it renders as:
With that we’re back to green. Now let’s move on to our next test for this view. It’s rendering as a div with the appropriate class, but what content is inside that div? If we’re rendering a
Card model then we probably want the
text attribute within that card model to be rendered inside that div:
In our new test we create an instance of a
Card model with some specific text. We then create a
CardView instance for that model. Finally we render that
CardView instance and verify that its
$el contains a
<p> tag which itself contains the
Card model’s text. Note we also include some funky characters in our card text to get some confidence that we’re escaping our HTML correctly.
That’s quite a complex test, but it’s pretty easy to get passing:
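As a rough sketch of what that render step does, here is the same logic in plain JavaScript. The escapeHtml helper stands in for Backbone’s model.escape('text'), and the function returns the markup string instead of writing it into a real $el; all names here are illustrative.

```javascript
// Hand-rolled HTML escaping, standing in for Backbone's model.escape().
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

// render: build a <p> tag holding the escaped card text. The real view
// would replace its $el's contents with this markup.
function renderCard(model) {
  return '<p>' + escapeHtml(model.text) + '</p>';
}

// Funky characters in the card text confirm the escaping works.
console.log(renderCard({ text: 'I <3 "cards" & walls' }));
// <p>I &lt;3 &quot;cards&quot; &amp; walls</p>
```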
We’ve added a
render method to our view. This method replaces whatever html is currently in the view’s
$el with a
<p> tag containing the html-escaped contents of the model’s text attribute.
We’re still following Simplest Possible Thing here. We could start using some sort of client-side templating to render our view, but for the sake of this one
<p> tag it seems unnecessary. So we’re using good old-fashioned string interpolation until we reach a point where a template makes more sense.
It turns out that this new
render method gets our second test passing but also breaks our first test. Our first test doesn’t pass a model to the
CardView constructor, which means that the subsequent call to
@model.escape('text') fails. We’ll quickly fix up our tests:
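The helper might look something like this plain-JavaScript sketch, where the generic fake model (and its canned text) is an assumption standing in for a real Card:

```javascript
// createView test helper: pass a model explicitly when the test cares
// about it, otherwise fall back to a generic default model.
function createView(model) {
  model = model || { escape: function () { return 'some text'; } };
  // Stand-in for `new CardView({ model: model })`.
  return { model: model };
}

var defaultView = createView();          // generic model supplied for us
var specificModel = { escape: function () { return 'specific'; } };
var specificView = createView(specificModel); // our model is used as-is
```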
That gets us back to green. We’ve added a little
createView helper function. We can explicitly pass it a model to use when it constructs a view, but if we don’t care we can just have the helper function create some generic model to supply the view with.
A View for adding new cards
A card wall isn’t much use if you can’t add new cards to the wall. Let’s start addressing that by creating a
NewCardView. This view will represent the user interface used to supply text for a new card and then add it to the card wall.
NewCardView will be a Singleton View - rather than creating its own
el DOM element at construction time it will instead bind
el to a DOM element which already exists on the page when it loads. First off let’s take a first stab at what that pre-existing HTML will look like by creating a shell index.html file:
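The shell HTML being described might look roughly like this (the section’s id and the button label are assumptions, not the post’s exact markup):

```html
<section id="new-card">
  <textarea></textarea>
  <button>add card</button>
</section>
```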
Pretty simple: a
<textarea> to enter the card’s text into, and a
<button> to add a card with that text to the wall. Now let’s drive out the basics of our NewCardView:
Here we use Jasmine-jQuery’s
loadFixtures helper to insert the contents of our index.html file into the DOM, but only for the duration of the test. Note that we’re inserting the regular index.html into our DOM during testing, not a test-specific fixture. Doing this gives us more confidence that we’re testing the Singleton View in the same context as when it’s running in our real app. Next we create a
NewCardView instance and verify that its
$el is bound to the right
<section> within that initial static HTML coming from
index.html. Let’s get that first test passing:
As we can see, Backbone makes it very easy to create a Singleton View. We simply specify a CSS selector as the
el field. When the view is constructed that CSS selector string is used to select a DOM element within the page which the view instance’s
el will then refer to.
Test-driven DOM event handling
Now that we have driven out the basic view, let’s write a test for some event firing behaviour. We want our view to trigger an event whenever the user clicks the ‘add card’ button. As we did before, we’ll use Sinon spy functions in our test to describe the behavior we expect:
In this new test we create a NewCardView instance and register a spy on its
create-card event, so that whenever the view triggers an event of that type it will be recorded by the spy. Then we use jQuery to simulate a click on the ‘add card’ button. Finally we verify that the
create-card event was triggered by checking whether the spy we registered for that event has been called.
This test will fail of course, since we haven’t implemented any custom
create-card event in
NewCardView. Let’s do that now and get the test passing:
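The translation from DOM event to application event can be sketched in plain JavaScript. Backbone’s events-hash wiring and the real click are replaced here by a direct call to onClickButton, and on/trigger come from a minimal hand-rolled emitter, so this is the shape of the logic rather than the post’s actual code.

```javascript
// Minimal stand-in for Backbone's on/trigger event methods.
function makeEmitter(obj) {
  var handlers = {};
  obj.on = function (event, fn) {
    (handlers[event] = handlers[event] || []).push(fn);
  };
  obj.trigger = function (event) {
    (handlers[event] || []).forEach(function (fn) { fn(); });
  };
  return obj;
}

// The view reacts to a low-level click by re-publishing a higher-level,
// application-specific event: the translation-layer role described below.
var newCardView = makeEmitter({
  onClickButton: function () { this.trigger('create-card'); }
});

var createCardSpy = { called: false };
newCardView.on('create-card', function () { createCardSpy.called = true; });

// In the real view Backbone's `events: {'click button': 'onClickButton'}`
// wiring would invoke this when the button is clicked; we call it directly.
newCardView.onClickButton();
console.log(createCardSpy.called); // true
```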
Here we use the
events field to declaratively tell our Backbone View that whenever a
<button> within the view’s
el receives a
click event it should call the view’s
onClickButton method. In addition we implement that
onClickButton method to trigger a
create-card event on the view itself.
This is a very common pattern in Backbone views. They react to low-level DOM events within their
el and re-publish a higher-level application-specific event. Essentially the Backbone view is acting as a translation layer between the messy low-level details like DOM elements and click events and the more abstract higher-level application concepts like creating a card.
Now that our tests are green again we can do a little refactor of our test code, DRYing it up a little:
Nothing exciting here, we just extracted the common test setup into a shared setup block.
Now, maybe we can improve
NewCardView’s API a little here. Something that’s receiving one of these
create-card events will probably want to react to that event by finding out more details about the new card which the user is asking to create. Let’s elaborate on that second test and say that we want the
create-card event to pass along the view which triggered the event.
All we’ve done here is expanded our expectation at the bottom to say that we expect our event spy to have been passed the view which triggered the event as an argument. That test will now fail, but it’s trivial to get it passing:
An extra 3 characters and we’ve added an argument to the
trigger call which passes a reference to the view itself to anyone listening for that create-card event.
So if someone has received this event and has a reference to the view they’re probably going to want to get access to whatever text the user has entered into the text area. Let’s drive that API out:
We’re using jQuery to simulate a user entering text into the text area, and then verifying that we can fetch that text using a
getText method on the view. Of course this test will fail because the view has no
getText method. Let’s address that:
We’re using the view’s
$(...) helper method to do a scoped lookup of the
<textarea> element inside our view’s
el. It is a shorthand way of saying
@$el.find('textarea'). Then we just grab the value of that
<textarea> and return it. Tests are green once more.
Our new card view now provides all the functionality we need, so we’ll wrap this up.
In this installment we’ve seen how Jasmine-jQuery can help assert what HTML is being rendered by our Backbone Views, and how its Fixtures functionality can help set up pre-requisite chunks of HTML content for the duration of a test. That said, we’ve also learned that if we require fixtures to test our views then this could indicate a poor design. We’ve observed that Backbone views act as a translation layer between the core of the application and the DOM. Finally, we’ve seen how to use jQuery to simulate user interactions and how Sinon spies can be used to check that events are being triggered correctly.
In the next installment of this series I cover where Controllers fit into a Backbone app and show how we can TDD the implementation of them.
In this series of posts I’m going to walk through some practical details on how we can develop Backbone.js applications in a test-driven manner. Developing this way leads to better structured applications, as we’ll discover.
Part one - test driven models
Let’s start off simple and test-drive a Backbone model which will represent a card on the wall. Test-driven means writing the test before the production code, so we’ll start off by writing a test for the model, before the model even exists. I like to start off a new test file with a really dumb test that just drives out the initial setup of the class:
Pretty much the simplest test I can think of: check that
Card has been defined somewhere. Of course when we run this test it will fail, because we haven’t defined that model yet. Let’s do that.
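The Simplest Possible Thing here, as the next paragraphs explain, is just to define the name with an obviously-unfinished placeholder value, something like:

```javascript
// Simplest Possible Thing: define Card as an obviously-unfinished
// placeholder. 'slime' flags this as code we know we must come back to.
var Card = 'slime';
```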
OK, with that done our first test is passing.
I know, I know. We didn’t define
Card to be a Backbone model. I’m being a bit dogmatic here for a moment and following the test-driven tenet of doing the Simplest Possible Thing to get the test passing. In this case defining
Card as a string is a simple thing to do, so that’s what I did. Using the keyword ‘slime’ is a trick I picked up from Gary Bernhardt to indicate that this is obviously not finished code.
So, we have a slimed Card definition which passes the test. Our next step is to write a test which drives us to remove that slime and make
Card a real Backbone.Model.
Note that I’m still sticking to the Simplest Possible Thing tenet. Our test doesn’t expect
Card to have any additional functionality beyond the vanilla functionality it gets as a Backbone.Model, so I don’t bother to use
Backbone.Model.extend. When I’m driving out code this way it’s not unusual that I never have to go beyond what I thought at the time was a placeholder implementation. I wouldn’t be surprised if that holds true here and that we end up never needing to extend
Card beyond this vanilla implementation.
A Card Wall
Now that we have a
Card defined, let’s look at building a Wall of cards. There’s a temptation here to think ‘Oh, a wall of cards is like a collection of cards; let’s define it as a Backbone.Collection’. That’s a bad idea here. A card wall does contain a collection of cards, but it will likely also have other attributes. It might have a title, and an owner maybe. I’ve found that with Backbone if you’re modeling an entity in your system that’s more than just purely a collection of other things then it’s best to represent that entity as a Backbone.Model which contains a Backbone.Collection as an attribute, rather than modelling that entity as a Backbone.Collection with additional custom properties. If you use custom properties rather than the attributes of a Backbone.Model then you lose all the nice functionality that Backbone gives your model’s attributes (e.g. serialization, change events).
Given that, let’s test-drive a
CardWall which contains a collection of
Cards. Again we’ll start off really simply:
This test will fail because we haven’t defined
CardWall anywhere. Let’s get it to pass:
That gets our tests back to green.
Note that I’ve drifted a little from my previous Simplest Possible Thing dogmatism. I know that I’m going to be making a Backbone.Model and I’m gaining a bit more confidence in my TDD flow, so I’m going to start taking slightly bigger TDD steps and define
CardWall as a Backbone.Model even though the test isn’t strictly requiring that.
Taking bigger TDD steps like this is OK, as long as we’re always ready to slow down and take smaller steps if we start feeling uncomfortable with our code. A good rule of thumb is that if you’re stuck in the Red part of your TDD cycle for more than a few minutes then you’re probably taking steps which are too big and should dial it back. As an aside, Kent Beck has a very interesting discussion around this in a presentation he did a few years ago on design principles - it’s well worth a watch. And hey, if Kent bends the TDD ‘rules’ sometimes then we’re allowed to too.
OK, now let’s drive out a
cards collection on the
CardWall model as we discussed earlier.
We’re checking that our card wall has a
cards attribute, and then we’re checking that the
cards attribute ‘quacks like’ a Backbone.Collection by checking that it has the methods a Backbone.Collection would have.
Now let’s get this test passing:
This is a pretty big step, but it gets our tests back to green. We’ve defined a
Cards collection, and we’ve extended our
CardWall model to have an instance of that collection as a
cards attribute by default.
Pause to refactor
Our tests are green again, but instead of moving on to our next test I’m going to take a moment to refactor. It’s really easy to forget the Refactor part of the Red-Green-Refactor cycle but focusing on Refactor is crucial if you want TDD to drive you towards cleaner code. Without refactoring frequently you will end up with a codebase which is functional but not maintainable. Over time changes will become more and more expensive, and tests will take longer and longer to write and get passing.
So, let’s refactor! I’m going to DRY up our card wall tests a little bit:
All I’ve done here is pull out a shared
cardWall variable and set it up in a
beforeEach function. A small change, but it reduces some duplication from the tests and makes them a little bit easier to read. Small refactorings like this sometimes seem unnecessary in the moment, but applied continuously over time they will do amazing things to your code. Imagine living in a codebase which always feels like a greenfield project.
What’s next? Let’s give CardWall a
title attribute, and have it default to something sensible:
And let’s get that test to pass:
And we’re back to green.
Adding cards to a card wall
We are going to want to add
Card instances to our
CardWall. A client of
CardWall could do that with the existing implementation by doing something like
myCardWall.get('cards').add( cardProperties ), but that’s not a nice approach. It’s ugly to read and it’s a Law of Demeter violation which exposes the internals of
CardWall. Let’s expose a nice helper method on
CardWall that hides those details and makes the client’s life easier. Here’s a test describing what we want the method to do:
We expect to be able to call an
addCard method which will add a card to the CardWall’s
cards collection with the appropriate attributes. We can get that test passing with a one-line method:
Clients of our
CardWall model are going to want to know when cards are added to our wall. By default a Backbone model only fires a change event when one of its attributes is re-assigned, not when one of its attributes changes internally. That means that a client which has subscribed to changes on a
CardWall instance won’t be notified when its
cards collection changes. Let’s confirm that problem via a test:
We’re creating a Sinon spy function and registering that function with our
cardWall instance as an event handler for the
change:cards event. So whenever
cardWall fires a
change:cards event that spy function will be called. We then add a card to
cardWall using the
addCard method we created previously and then check whether our spy has been called. If the spy was called then that
change:cards event has indeed been published. If it wasn’t called then we know that the
change:cards event isn’t firing when we add a card via addCard.
As expected, this test fails because a Model doesn’t automatically notify subscribers of changes inside its attributes. However we can get the test to pass pretty simply:
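Here is a self-contained sketch of the pattern in plain JavaScript, with a tiny hand-rolled stand-in replacing Backbone’s Model, Collection, and event machinery. The CardWall, addCard, and change:cards names follow the prose; everything else is illustrative.

```javascript
// Stand-in for a Backbone model: attributes with defaults, get(),
// and a minimal on/trigger event mechanism.
function CardWall() {
  var handlers = {};
  var attributes = { title: 'Card Wall', cards: [] }; // defaults, incl. cards collection
  this.get = function (name) { return attributes[name]; };
  this.on = function (event, fn) {
    (handlers[event] = handlers[event] || []).push(fn);
  };
  this.trigger = function (event) {
    (handlers[event] || []).forEach(function (fn) { fn(); });
  };
  // The helper method: hide the internal cards collection from clients,
  // then announce the change explicitly, since adding to a nested
  // collection would not fire a model-level change event on its own.
  this.addCard = function (attrs) {
    attributes.cards.push(attrs);
    this.trigger('change:cards');
  };
}

var cardWall = new CardWall();
var spyCalled = false; // hand-rolled spy standing in for Sinon
cardWall.on('change:cards', function () { spyCalled = true; });
cardWall.addCard({ text: 'hello' });
console.log(spyCalled, cardWall.get('cards').length); // true 1
```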
We’ve added a
trigger call at the end of our
addCard method. This will fire off the expected event whenever we add a card. With that change our tests are back to green, and we’re at a good point to wrap up this installment.
What have we covered
So what did we cover in this installment?
We’ve learned the basics of how to test-drive a Backbone model. We’ve learned that we should strive to get tests passing by doing the Simplest Possible Thing, but that we don’t need to be too dogmatic about that. We’ve discovered that a Backbone.Collection is only appropriate for modelling a pure collection of items. We’ve seen that it’s better to encapsulate access to internal attributes by using helpful functions like
addCard. Finally we’ve seen how to use Sinon spies to test basic Backbone event publication.
No DOM, no jQuery
One final thing to note before we wrap up - we have not made any reference to the DOM in these tests. In fact, when developing this code I ran my tests within node.js without a DOM implementation in sight. This is a very important point, and a key property of a well-structured Backbone application - Backbone views should be the only thing referencing the DOM (and therefore the only thing using jQuery’s almighty $).
The flip side of keeping the DOM and jQuery contained within our Views is that we should always strive to keep our Backbone Views as skinny and logic-free as possible. Because Views interact with the DOM they are particularly tricky to test, and our tests are slower to run because we have to run them in a browser. We want to keep our views as simple as possible so that we don’t have to write too many of those tricky tests. Instead we want to push logic into other parts of our application where it can be easily tested in isolation.
In the next installment we’ll dive into testing Backbone Views and their interaction with the DOM. I’ll also go into more details on what a View’s responsibilities should be.