The 5 Rules of an Awesome Dev Toolchain

Every ThoughtWorks project I’ve ever been on has a script in the root of the project repo, often named ./go or ./go.sh (or ./go.bat if you’re very unlucky). This script often starts life as the script which the CI server runs to test the project. It soon acquires extra capabilities. It becomes the way by which developers perform a pre-commit check before pushing their changes. It becomes the script to run when you first get started on the project to download all the various project dependencies.

./go can eventually become the primary interface into the dev tooling for the project. At that point it becomes a very powerful productivity boon for the team. Whenever a team is tempted to put some process or workaround into a README or a wiki page they instead automate it and put it into the ./go script. The project team’s tribal knowledge on how to perform certain tasks is automated, readily available, and under source control.

Some examples of things I’ve often seen added to a ./go script are:

  • download software dependencies needed for dev tooling, or at least detect that a required dependency is missing and provide instructions on how to install it.
  • set up or update an IntelliJ configuration for a JVM project.
  • do the rvm/bundler dance for a ruby project.
  • run a standard set of pre-commit tests.

What makes a ./go script great?

I’ve spent some time thinking about what makes a good ./go script good. Here are my rules for an awesome ./go script, and thus a low-friction dev toolchain.

1. don’t make me think

Make your dev toolchain as low-friction as possible by making ./go as low-friction as possible. Give me sensible defaults. Give me clear error messages when something goes wrong, including hints on how to fix the problem. If there are two operations which need to be performed one after the other give me a higher-level operation which does them both (for example, wrap vagrant destroy -f && vagrant up into ./go vagrant-nuke). Give me a list of all operations available to me, with a brief description of what they do.

The git cli is a good example of what you should be striving for.
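
To make that concrete, here's a minimal sketch of the kind of ./go dispatcher I have in mind. A ./go script can be written in anything; this one happens to be Ruby, and the task names and commands are purely illustrative:

go
#!/usr/bin/env ruby
# Map each task name to a description and an action. An unknown (or
# missing) task name prints a usage listing of everything available.
TASKS = {
  'test'         => ['run the pre-commit test suite',  -> { system('bundle exec rspec') or abort }],
  'vagrant-nuke' => ['destroy and rebuild the dev VM', -> { system('vagrant destroy -f && vagrant up') or abort }],
}

task = ARGV.first
if TASKS.key?(task)
  TASKS[task].last.call
else
  puts "usage: ./go <task>"
  TASKS.each { |name, (desc, _)| puts "  #{name.ljust(14)} #{desc}" }
  exit 1
end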

2. isolate, isolate, isolate!

Try as hard as humanly possible to isolate your toolchain so that it’s maintained within the project. Strive to avoid system dependencies - software that needs to be installed on the workstation itself.

On a Scala project, run your own isolated version of sbt rather than the system sbt. On a ruby project use bundler’s --path option to vendorize your gems and avoid the need for gemsets. On a python project use virtualenv. Check in binary tools like chromedriver or phantomjs.

This is absolutely the best way to ensure consistency across machines, and avoid “it’s only (working|borked) on my machine” issues. Because toolchain dependencies are managed within the project repo this also makes it trivial to upgrade your tooling. No need to coordinate everyone switching versions of a tool.

Even if you can’t get to the point that your tools are fully isolated, your ./go script can still perform checks on shared system dependencies, warning you or outright failing if they are not present or are not at the right version.
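
Here's a sketch of what such a check might look like, assuming java happens to be the shared dependency (the install instructions are illustrative):

# Hypothetical pre-flight check for a shared system dependency. Fail
# fast with a helpful message rather than letting the build die cryptically.
def check_java!( required = '1.8' )
  version = `java -version 2>&1`[/version "([\d._]+)"/, 1]
  unless version && version.start_with?(required)
    abort "Java #{required} is required (found: #{version || 'none'}).\n" +
          "On a Mac, try: brew cask install java"
  end
end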

3. uniform interface

Everything is done via ./go. Everything.

Don’t make me remember whether it’s rake test, rspec, or ./scripts/test. Just put everything inside of ./go, even if ./go just immediately shells out to another tool. This is an extension to the “Don’t make me think” rule. It also makes it a lot easier to isolate your tooling. Importantly it adds a level of indirection which is under your project’s direct control. That means that you can modify or even switch your underlying tooling without having to re-educate every member of the team. You can add custom pre-flight checks for required dependencies before performing an operation. You can wrap every operation with benchmarking or instrumentation. You can run every operation inside a VM or container without the person running the script even knowing!
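
As one example of that indirection paying off, here's a sketch of how a Ruby-flavored ./go might wrap every operation with crude benchmarking. run_task is a hypothetical helper that all tasks route through:

# Every ./go task goes through run_task, so instrumentation added here
# applies to the whole toolchain.
def run_task( name, &action )
  started = Time.now
  action.call
ensure
  puts "[go] '#{name}' finished in #{(Time.now - started).round(1)}s"
end

run_task('test') { system('bundle exec rspec') or abort }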

4. one step to a running local dev environment. One.

A new member on the team should be able to run one command to get to a running dev instance on their workstation. The setup steps in your project README should be:

git clone yourProject
cd yourProject
./go

In the worst case where you absolutely cannot automate some aspect of project setup then the ./go script should detect that the manual setup is required and provide clear and detailed instructions on how to perform it.
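
A minimal sketch of that detect-and-instruct behavior, assuming a shared credentials file is the one thing we can't automate (the path and instructions are hypothetical):

# If the manual setup hasn't been done, stop and explain exactly what to do.
unless File.exist?( File.expand_path('~/.myproject/secrets.yml') )
  abort <<~MSG
    Missing ~/.myproject/secrets.yml.
    Ask a teammate for the shared credentials, copy them to that path, then re-run ./go.
  MSG
end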

5. do on your workstation as you do in CI

Push as much of your CI/CD scripting as possible underneath your ./go script. This keeps everything centralized and under source control. It also makes it easy to reproduce and debug CI/CD problems on a local dev workstation, and makes it easier to maintain and extend those scripts.

You get the most benefit here if you’ve aggressively followed the advice to isolate your dev dependencies. You need your local tooling to be the same as that which is running on your CI/CD agents.

This tooling consistency is such a valuable aspect of a good dev toolchain that it’s worth pushing your entire dev toolchain into a virtual machine, with the exception of GUI tools. Combining Vagrant or Docker with something like Chef or Ansible gives you a setup with a strong guarantee of a uniform toolchain across all dev/QA workstations, and also all CI/CD agents. It gives you a very simple way to make sure your toolchain hasn’t drifted from the project norms - just nuke your VM and rebuild from scratch. Rebuilding the VM might take a while but it’ll probably be quicker than debugging the cause of the problem and correcting it. This approach also forces you to automate every aspect of your toolchain setup and reduces the risk of workstation toolchains drifting apart over time.
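
Conveniently, a Vagrantfile is just Ruby. A minimal sketch of a project-owned dev VM might look like this (the box name and playbook path are illustrative):

Vagrantfile
# Describes the dev VM that the whole toolchain runs inside.
Vagrant.configure('2') do |config|
  config.vm.box = 'hashicorp/precise64'        # illustrative base box
  config.vm.synced_folder '.', '/project'      # share the repo into the VM
  config.vm.provision 'ansible' do |ansible|
    ansible.playbook = 'provisioning/dev.yml'  # hypothetical playbook
  end
end

With something like this checked in, the ./go vagrant-nuke operation from rule 1 becomes your one-stop way to get back to a known-good toolchain.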

Running your build in a VM hosted within your workstation may sound extreme, but the amount of time you’ll save not debugging irritating tooling issues is a fine tradeoff for the inefficiency of building within a virtual machine or LXC container.

Microstatic: Radically Simple Static Microsites

I’m a big fan of static sites, and have created a fair few little static one-page demos and toy apps over the last few years. I like to host these off of S3 and expose them under subdomains of my domain, thepete.net. In fact, you’re reading one of those S3-hosted static sites right now!

With modern infrastructure like S3 and Route 53 the process of setting up a new microsite is pretty straightforward, but the manual steps started to grate on me after a while. “I’m a dev”, I thought to myself. “Why can’t I automate this?”. And so of course I did.

I started off with some shell scripts around s3cmd, but eventually moved to an assemblage of ruby functions driving the fog gem. Prompted by an internal thread at ThoughtWorks (particularly by @gga) I cleaned things up and packaged this functionality into a gem called microstatic.

Max Lincoln also gave me lots of guidance and good ideas. He was keen for me to use fog so that microstatic could support alternate cloud providers such as OpenStack/Rackspace. I failed to keep to that vision, but I’m very happy to take pull requests to get back on that path.

A gem in two parts

Microstatic does two things. Firstly it provides a command-line tool that makes it ridiculously simple to create a new microsite. Secondly it provides a rake task that makes it ridiculously simple to push new content to the microsite.

Creating a new microsite

Run microstatic setup and you’re done. This will create a new S3 bucket to hold your microsite, and then add a Route 53 entry to wire that S3 bucket up to a subdomain.

Deploying content to your microsite

Run rake deploy and you’re done. Microstatic ships with a rake task that will sync your local contents with the S3 bucket hosting your site.

Demo

Here’s a 24 second demo. I’m creating a brand new microsite from scratch, setting up S3 buckets and DNS entries, and finally deploying some initial content to it.

Pretty neat, huh?

Assertions in Page Objects

Martin Fowler recently added a bliki entry for the Page Object pattern. It’s a good writeup - if you haven’t read it I recommend doing so now. Go ahead. I’ll wait.

The entry sparked a discussion on an internal ThoughtWorks mailing list as to whether Page Objects should include assertions or not. In fact Martin mentions this difference of opinion in the bliki entry itself. I fell on the side of favoring page objects with assertions baked in but couldn’t come up with a compelling reason why until I was working on some Capybara-based page objects with a client QA today.

TL;DR

If you don’t put assertions inside your page objects you are violating Tell Don’t Ask, which in turn hinders your testing framework’s ability to do implicit spin asserts - a very valuable feature.

In the following I’ll explain all of that in more detail.

Tell Don’t Ask vs Single Responsibility Principle

Let’s use a concrete example with Page Objects. I’m testing a web page which lists my bank accounts. In one of my tests I want to verify that the list of bank accounts includes a bank account with a balance of 100.25.

If I didn’t want assertions in my page object I would Ask my page object for a list of bank accounts, and then check for the balance myself:

accounts = bank_account_page.listed_accounts
accounts.find{ |account| account[:balance] == 100.25 }.should_not be_nil

On the other hand if I’m OK with assertions in my page object I would Tell my page object to check for a listed bank account with that balance:

bank_account_page.verify_has_listed_account_with_balance_of( 100.25 )

This second example conforms to Tell Don’t Ask, the idea that it’s more object-oriented to tell an object what you want it to do, rather than to ask it questions about its state and then do the work yourself. However you could argue that the second page object example with embedded assertions also violates the Single Responsibility Principle - my page object is now responsible for both abstracting over the page being tested and also performing assertions on that page’s state.

At this point we see there are valid arguments both for and against the inclusion of assertions in our page objects. This illustrates that software design is rarely about right and wrong, but almost always about trade-offs between different design choices. However today I realized that there’s a feature of Capybara (and other similar UI testing frameworks) which in my opinion pushes the decision further towards complying with Tell Don’t Ask and including the assertions in the page object.

Spin Asserts

Spin Assert is a very valuable UI automation pattern. It is used to mitigate the fact that UI automation tests are often involved in races with the UI they are testing.

Let’s continue using our account listing example. Imagine I have a test in which I transfer some money from my checking account to my savings account, and then test that the balances are listed correctly:

def verify_funds( name, balance )
  accounts = bank_account_page.listed_accounts
  accounts.find{ |x| x[:name] == name && x[:balance] == balance }.should_not be_nil
end

#...

verify_funds( "Checking Account", 200.25 )
verify_funds( "Savings Account", 20.15 )

transfer_funds( 100, from: "Checking Account", to: "Savings Account" )

verify_funds( "Checking Account", 100.25 )
verify_funds( "Savings Account", 120.15 )

Looks good. But what if the account balances take a while to update in the UI after the funds are transferred? My test might check that the checking account has a balance of 100.25 before the UI has had a chance to update. It would see the old balance of 200.25 instead of the 100.25, and the test would fail even though the app is behaving correctly. The test has raced ahead of the UI, leading to an invalid test failure.

Spin Asserts deal with this by repeatedly checking for the expected state, rather than checking once and then summarily failing. Here’s what a crude spin assert implementation might look like in a verify_funds implementation:

def verify_funds( name, balance )
  loop do
    accounts = bank_account_page.listed_accounts
    break if accounts.find{ |x| x[:name] == name && x[:balance] == balance }
    sleep 0.1
  end
end

Here we just repeatedly search for an account with the expected name and balance, pausing briefly each time we loop. In a real spin assert implementation we would eventually time out and raise an exception to fail the test after spinning unsuccessfully for a while.
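
For completeness, here's roughly what that might look like - a sketch of a generic spin assert with a timeout, not a production implementation:

# Repeatedly evaluate the block until it's truthy, failing if it never is.
def spin_assert( timeout: 5, interval: 0.1 )
  deadline = Time.now + timeout
  until yield
    raise "spin assert timed out after #{timeout}s" if Time.now > deadline
    sleep interval
  end
end

def verify_funds( name, balance )
  spin_assert do
    accounts = bank_account_page.listed_accounts
    accounts.any?{ |x| x[:name] == name && x[:balance] == balance }
  end
end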

Implicit Spin Asserts

Spin Asserts are such a valuable technique that they are built into a lot of web automation tools. In fact many of those tools will do a spin assert without you even being aware of it. In Selenium/WebDriver this is referred to as ‘implicit waits’, and Capybara does the same thing.

When we tell Capybara “check that this HTML node contains the text ‘foo’” it will implicitly perform that check using a spin assert. It will repeatedly check the state of the UI until either the node in question contains the text ‘foo’ or the spin assert times out. Here’s what that assertion might look like:

page.find(".container .blah").should have_content('foo')

I think it’s pretty neat that a spin assert is hidden in there for free. However what if we instead ask Capybara for the current content of an HTML node and then do further checks ourselves?

page.find(".container .blah").text.should include('foo')

This looks very similar, but now we’re asking for .text instead of telling Capybara that we want the text to contain ‘foo’. Capybara is required to return the full content of the node at the instant we ask for .text, which robs it of the chance to do the helpful implicit spin asserts it could do if we were telling it to check for ‘foo’.

By violating Tell Don’t Ask we’re reducing Capybara’s role to dumbly exposing state and preventing it from enhancing that state with value-add behavior.

Page Objects should include Assertions

Hopefully you can see where I’m heading by now. If our example account listing page object includes its own assertions then we can tell the page object “verify that there is a checking account with a balance of 100.25”. The page object is free to internally use spin asserts while verifying that assertion. If we don’t include assertions within our page objects then we are required to verify the page state externally. This would mean we’d need to implement our own spin asserts, and often wouldn’t be able to take advantage of the free implicit spin asserts provided by Capybara. An example which highlights the difference this can make is asserting that we’re on the correct page by checking the page title.

With internal assertions in our page object:

class SomePageObject
  #...

  def verify_page_title
    page.should have_title "The Page Title"
  end
end

#...

some_page.verify_page_title

As opposed to external assertions:

class SomePageObject
  #...

  def has_correct_page_title?
    page.title == "The Page Title"
  end
end

#...

spin_assert{ some_page.has_correct_page_title? }

In the first example we get Capybara’s implicit spin asserts for free. In the second example not only are we required to do our own explicit spin assert, but we’re required to do so every time we want to verify the page has the right title. Overly verbose and prone to error.

Tell Don’t Ask allows value-add behavior

There are other advantages we get from including assertions in our page objects (for example better assertion failure messages), but for me the ability to leverage implicit spin asserts is the big win. At the end of the day it means that a page object can present a higher level of abstraction in its public interface, adding value underneath. Yes, we’re weaker on Single Responsibility, but overall I think it’s a good tradeoff.

Introducing Rack-flags

I’m pleased to announce rack-flags, a very simple way to add feature flags/bits/toggles to your Rails app (or any other rack-based web app).

The rack-flags gem allows you to define a set of feature flags which can be turned either off or on for any user. Your application code inspects the state of these feature flags when processing a web request and uses that information to modify how it handles that request.

A concrete use case

Let’s use a concrete example. We’re developing a new photo sharing feature for our Web 3.0 application. This is a big project, and we want to be releasing our app to production on a regular basis while the Photo Sharing project is still in progress. To allow that to happen we add an ‘expose photo sharing UI’ feature flag to our app, which is defaulted to Off while the project is in progress. In our application code we can check for that feature flag and only display a link to the new Photo Sharing section of our UI if the feature is On for the current request. Internal users (product owners, QAs, devs, etc) who want access to the half-finished work do so by overriding the ‘expose photo sharing UI’ flag to On using an admin UI (a simple rack app that ships with the rack-flags gem).

Once the first version of Photo Sharing is ready for prime time we simply change the default for the feature flag to On. Now every user has access to the new functionality. After the feature has been live for a while and we’re confident that it works as advertised we can retire the ‘expose photo sharing UI’ flag, along with the checks that were added to the application code.

The mechanics

The ability to override flags for a given user is achieved using cookies. The rack-flags gem provides a rack app which acts as a simple feature flag admin UI. It lists the flags which have been configured for your app, and allows you to override their default state. Those overrides are stored in the user’s cookie. On each request to your web application a small piece of rack middleware inspects that cookie. When your app asks rack-flags whether a feature is turned off or on rack-flags combines the default state of each flag with whatever overrides were detected in the cookie and uses that to determine whether the flag is Off or On.
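
The gem's actual implementation lives in the repo, but conceptually the combination logic is as simple as this sketch (the names are illustrative, not the gem's internals):

# A cookie override always wins; otherwise fall back to the yaml default.
def flag_on?( name, defaults, cookie_overrides )
  cookie_overrides.fetch( name, defaults[name] )
end

defaults  = { show_new_ui: false }            # from feature_flags.yaml
overrides = { show_new_ui: true }             # parsed from the user's cookie
flag_on?( :show_new_ui, defaults, overrides ) # => true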

Some code examples

A yaml file defines the set of feature flags for your app, including a description of the flag and its default state:

feature_flags.yaml
foo:
  description: This is my first feature flag
  default: true
show_new_ui:
  description: render our experimental new UI to the user
  default: false

Setting up rack-flags for a rails app:

application.rb
config.middleware.use RackFlags::RackMiddleware, yaml_path: File.expand_path('../feature_flags.yaml',__FILE__)
routes.rb
mount RackFlags::AdminApp, at: 'some_route'

Checking whether a feature is Off or On for a specific request:

SomeController.rb
  class SomeController < ApplicationController
    def some_action
      features = RackFlags.for_env(request.env)
      @show_whizzy_bits = features.on?(:show_new_ui)
    end
  end

That’s all there is to it!

Check out the github repo’s README for more documentation, including an example of setting up a Sinatra app.

Credits and prior work

This gem is inspired by a very similar system which some fellow ThoughtWorkers and I created while building a Rails-based e-commerce front end at a large online retailer. Last time I checked in with that client they were still using their version of this feature flagging system and were very happy with it.

I previously blogged about the cookie-based approach which we used at that client and subsequently re-implemented in this rack-flags gem.

Ryan Oglesby was one of the ThoughtWorkers on that project and co-author of this gem.

A lot of my thinking about feature flags is colored by working with a sophisticated ‘feature bit’ system several years ago. I believe Sean Ryan was the instigator of that system, which was chiefly implemented by Rob Loh and Elden Bishop under the guidance of Erik Sowa. Erik and Rob presented on that feature bit system at Lean Software and Systems Conference 2010.

iOS Unit Testing Workshop

TL;DR

I’ll be running a couple of free workshops on iOS Unit Testing in San Francisco in late August/early September. If you’d like to attend please fill out this simple form and I’ll be in touch with details.

Why is it free? What’s the catch?

I’m preparing to run a paid 3 hour workshop on iOS unit testing for the upcoming super-awesome StrangeLoop conference. As part of that prep I’ll be doing a couple of ‘dry runs’ of the workshop so I can fine tune the content, delivery, pacing, etc. You are cordially invited to attend. You get to learn a bunch about iOS unit testing, I get to hone and refine the workshop. A win/win for us both.

Details on the workshop

Learn the why and the how of unit-testing for your iOS applications using Kiwi, a popular open-source testing library.

You will leave this workshop excited to apply what you’ve learned to start testing your iOS app. In fact, you’ll probably want to fire up your project in Xcode straight afterwards!

We’ll start with the basics, covering what unit-testing is (and isn’t), and why it will be a valuable addition to your programming toolbox.

Having covered some theory we’ll then spend the majority of the workshop learning by doing. We’ll start by setting up an Xcode project for testing with Kiwi, and quickly move on to writing our first simple test.

After learning the basics with some isolated examples we’ll move on to look at how to unit test a real iOS app. As we go we’ll learn about mocking and stubbing, and when to make the decision not to test. We’ll learn how to test in the context of a CocoaTouch app, and how testing can positively influence the design of your app (hint: your app delegate and your view controller aren’t the only place to put your business logic!).

We’ll wrap up by touching on some more advanced topics such as test-driven development, CI, and how to test asynchronous code.

If this sounds interesting to you just fill out this simple form and I’ll be in touch with details about attending.

About me

What better than a bio written in the third person!

Pete Hodgson is a consultant with ThoughtWorks, where he’s spent the last few years helping teams become awesome at sustainable delivery of high-quality mobile applications. Pete is on a mission to bring modern agile practices like continuous integration and automated testing to the iOS development community. While working on an iOS application for a large financial institution he helped create Frank, an automated acceptance testing tool for native iOS applications. He is now the open-source maintainer for Frank, which has become a leading tool for automated testing.

Using Travis CI and Xctool to Build and Test iOS Apps

Travis CI is a cloud-based Continuous Integration service which is free to use (but only on public github projects). They recently announced support for OS X agents, which means you can now use Travis to build and test iOS applications. In this post I’ll show how I set up basic CI for an example iOS app using Travis, with the help of xctool. xctool is the ‘better xcodebuild’ which Facebook recently open-sourced. It allows you not only to build your app from the command line but also to run your app’s unit tests, with the same capabilities that Xcode offers when you run the tests from the IDE proper.

To see a working example of the sort of thing we’ll be setting up, take a look at the github repo and the corresponding Travis page for my Run2Bart example app.

Getting started with Travis

How Travis works

Travis is driven entirely through github integration. You configure Travis with information on how to build your app by adding a .travis.yml file to the root of your git repo. Then you log on to Travis using your github credentials and have it configure a github post-commit hook for the github repo. After doing that github will tell Travis every time you’ve pushed code to your repo. Travis will respond by dutifully downloading the commit which was pushed and will then do whatever build you’ve configured it to do in that .travis.yml file.

an initial travis setup

I’m not going to cover linking your github repo to Travis; their own docs explain this well. Once you have linked your repo the next step would be to add a .travis.yml file to the root of the repo. Here’s a basic setup similar to the one that I use to build my Run2Bart example app:

.travis.yml
---
  language: objective-c

  before_script: travis/before_script.sh
  script: travis/script.sh

First I’m telling Travis that this is an objective-c project. Next I tell Travis how I’d like it to do CI against this repo by giving it instructions on what scripts it should run in order to actually perform a build. I also give some extra instructions on what to do just prior to running a build. It’s quite common to put all the build steps inline right in the .travis.yml file, but I prefer to create bash scripts inside a travis directory in my git repo and then just refer to those scripts from my .travis.yml. This keeps the .yml file nice and small, and also makes it easy for me to test the travis build scripts locally.

We gave Travis a before_script in the .yml file above. This is intended to be used by the Travis agent to download tools needed as part of the build. Here’s what it looks like:

travis/before_script.sh
#!/bin/sh
set -e

brew update
brew install xctool

Very simple. We just use homebrew to install xctool on the build agent. All travis build agents come with homebrew pre-installed, but sometimes the formulae aren’t up to date, so it’s best to run a brew update before attempting a brew install.

That’s all we need to do to prepare our agent for the build. Next let’s look at the build script itself:

travis/script.sh
#!/bin/sh
set -e

xctool -workspace MyWorkspace -scheme MyScheme build test

Again, this is really simple. We first do a basic sanity check by asking xctool to build our app, specifying a workspace and scheme. This just checks that we don’t have any compilation errors. Assuming that succeeds xctool will then build and run the unit testing target for our app, launching the Simulator on the Travis agent if needed.

It’s that easy

And that’s all there is to it for a basic build-and-test setup. At this point you’re going to get very fast feedback from Travis as soon as anyone pushes code which either causes compilation errors or causes your unit tests to fail. From here you can easily build upon this basic CI set up. There are a lot of things you can do to expand this, some of which I’ve done in my example Run2Bart app.

You can have xctool generate pretty output by specifying a different reporter, and then archive that report as a build artifact using something like travis-artifacts.

You can add extra stages to your build such as Frank acceptance tests.

You could even have Travis distribute a build of your app to alpha users using something like TestFlight or HockeyApp. That’s definitely an advanced topic though - it would be very fiddly to achieve due to having to do code-signing of a device build on the Travis agent.

You don’t need to use Travis for this

And of course you can do all of the above on your own Jenkins (or TeamCity, or Go, or Bamboo) CI server rather than on Travis. In fact unless you’re building an open-source app or you’re a beta customer of Travis Pro then you’ll likely want to use a different CI technology. Regardless, the basic approach remains the same.

Test-driven Backbone.js - Part Three

In the previous posts in this series we’ve done some test-driven development of Backbone Models, Collections, and Views. In this post I’m going to cover another role which I believe should be present in most SPA Backbone apps: Controllers.

Controllers in a Backbone app

There’s nothing in the Backbone.js library for building Controllers. There’s no Backbone.Controller for you to extend from. But that doesn’t mean that Controllers don’t have a part to play in your Backbone app. I suspect that the reason we don’t see Backbone.Controller in the library is simply because there isn’t much helpful shared functionality that could be put there. The Controller role is more loosely defined than other more concrete roles such as View or Model, and so there’s no need for a concrete base implementation in the Backbone.js library. However that does not mean that your app wouldn’t benefit from code organized under the Controller role.

No Controller class or prototype

Since we don’t need any shared functionality in our controller we won’t need to use any inheritance (prototypical or pseudo-classical) when building it. We can build controllers using simple constructor functions. I prefer to use constructor functions over inheritance hierarchies whenever possible - I think they’re a better fit with the ‘good parts’ of JavaScript - so that’s what we’ll use here.

What does a Controller do?

Simply put, a controller mediates between the UI and the rest of your application. In the case of a Backbone app this generally means reacting to events published from your View layer. A controller may respond to these events by updating Model instances, or by making calls to Services (which we’ll get to in a later post in this series).

Doesn’t a View do that in a Backbone app?

In a lot of Backbone applications which don’t have Controllers you see Backbone Views that do a lot of the coordination work I just described. Rather than just acting as a translation layer between the UI and the rest of the application these Fat Backbone Views take on additional responsibilities, often acting as Controllers too. These views tend to be large, unwieldy, and tough to test. Their implementations are hard to read and hard to maintain. They cause these issues because they are violating the Single Responsibility Principle, both acting as an abstraction over the DOM and also implementing application logic. Introducing a Controller helps avoid this situation. Views can remain focussed on their single responsibility - mapping between the UI and the application. Controllers take on the additional responsibilities which would otherwise muddy the View’s role. Additionally, as Controllers aren’t coupled to a DOM the way Views are they are much more amenable to testing in isolation.

Back to our example

Now we’ll see what a typical Controller might look like, using our Card Wall app as an example.

Adding new cards to a card wall

When we left our Cards application in our last post we had a CardWall model which represented a set of cards. We also had a NewCardView which allowed a user to enter a new card. What’s missing is something which ties those two elements together. When a user fills out the text for a new card and hits create then a new card should be added to the CardWall. This is exactly the place where a Controller would come into play - bridging between a View and a Collection or Model (or both).

Let’s get going and use tests to drive out that controller.

Implementation

Once again, we’ll write an initial silly-simple test to get us started. We’ll test that we can create a controller:

main_controller_spec.coffee
describe 'main controller', ->
  it 'can be created', ->
    controller = createMainController()
    expect( controller ).toBeDefined()

This will fail because we don’t have that constructor function defined yet. We easily remedy that and get our test back to green:

main_controller.coffee
createMainController = ()->
  {}

Simple enough. A function that takes no parameters and returns an empty object literal. That’s enough to get us to green.

OK, what should we test next? Well, this controller needs to react to a NewCardView firing a create-card event by adding a card to a CardWall. That’s quite a lot of functionality to bite off in one test but I don’t really see a way to break it down, so let’s just give it a go and see if we can express all of that in a reasonably small step:

main_controller_spec.coffee
describe 'main controller', ->

  it 'reacts to a create-card event by creating a new card in the card wall', ->
    fakeNewCardView = _.extend({},Backbone.Events)
    fakeCardWall = _.extend({
      addCard: sinon.spy()
    })

    createMainController( newCardView: fakeNewCardView, cardWall: fakeCardWall )

    fakeNewCardView.trigger('create-card')
    expect(fakeCardWall.addCard).toHaveBeenCalled()

There’s a lot going on here. We are creating fake instances of NewCardView and CardWall and passing them into our controller’s constructor function. Then we simulate a create-card event by explicitly triggering it on our fake NewCardView instance. Finally we check that our controller reacted to that create-card event by calling addCard on the CardWall instance it was given during construction.

We created our fake instance of NewCardView by just taking the base Backbone.Events module and mixing it into an empty object. That Events module is the same module which is mixed into pretty much every Backbone-derived object (Model, View, Collection, Router, even the Backbone object itself!). Mixing the Events module into our empty object gives it the standard Backbone event methods like on and trigger. That’s all we needed our fake instance to be able to do in this case - be an empty object with some event methods mixed in.

So now that we understand what the test is doing let’s get it to pass:

main_controller.coffee
createMainController = ({newCardView,cardWall})->
  newCardView.on 'create-card', ->
    cardWall.addCard()
  {}

Reasonably straight-forward. When the controller is constructed it now immediately registers a handler for any create-card events from the NewCardView instance - note that the controller is now being provided the NewCardView instance (and a CardWall instance) via the constructor function. The handler for that create-card event simply calls addCard on the CardWall instance.

That gets our previous test to green, but unfortunately it breaks our initial simple test. That test wasn’t passing any arguments to the controller’s constructor function. That means that newCardView is undefined, and calling newCardView.on fails. We could solve this by modifying that initial test to pass in some kind of fake newCardView. However, looking at that test it is providing no real value now. It was there to get us started TDDing our controller, and now that we’re started we don’t really need that test to stick around. It’s very unlikely to catch any regressions or provide any other value in the future so we’ll just delete that first test, which will bring our full test suite back to green again.

If a test isn’t providing any value then don’t be afraid to delete it.

We now have a controller which reacts to create-card events by asking the CardWall to add a card, but it’s not passing any information about the card from the NewCardView to the CardWall. Specifically, it’s not passing the text which the user entered into the new card view. To drive out that functionality we could add a new test which describes the additional behaviour. However in this case that test would just be an expanded version of the existing test, so instead we’ll just take our existing test and flesh it out more:

main_controller_spec.coffee
describe 'main controller', ->

  it 'reacts to a create-card event by creating a new card in the card wall', ->
    fakeNewCardView = _.extend( {getText: -> 'text from new card'}, Backbone.Events )
    fakeCardWall = _.extend({
      addCard: sinon.spy()
    })

    createMainController( newCardView:fakeNewCardView, cardWall: fakeCardWall )

    fakeNewCardView.trigger('create-card',fakeNewCardView)
    expect(fakeCardWall.addCard).toHaveBeenCalledWith(text:'text from new card')

We’ve expanded our fakeNewCardView to include a stub implementation of the getText method. This method would be present on a NewCardView instance. We’ve also extended the event triggering in the test to also include the sender of the event. This is what the actual NewCardView implementation which we built in the previous post does. Finally we modify our verification step. Instead of just checking that addCard was called it checks that it is called with a text parameter equal to the card text returned by the NewCardView instance’s getText method.

We run this test and of course it fails, but it’s very simple to get it passing:

main_controller.coffee
createMainController = ({newCardView,cardWall})->
  newCardView.on 'create-card', (sender)->
    cardWall.addCard( text: sender.getText() )
  {}

The only thing we’ve changed here is to take the sender argument passed with the create-card event, grab the text from that sender and then include it when calling cardWall.addCard. Tests are green once more. Looking at this implementation you can see why we had to add the extra parameter in the explicit fakeNewCardView.trigger('create-card',fakeNewCardView) call in our test code. If we didn’t pass the fakeNewCardView with the event then sender would be undefined in our controller code, which would cause the test to fail. It’s unfortunate that this level of implementation detail leaked into our test writing, but that implementation detail is part of the NewCardView public API, so this isn’t too big a concern.

Our Controller is complete

And at this point we’re done. We have a Controller which mediates between our Views and the rest of our app, creating new cards when a view tells it that the user wants a new card.

That’s our Controller?!

Our controller ended up being 3 lines of code. Was that worth it? Why didn’t we just throw that logic into NewCardView?

Well, if we had done that then NewCardView would now have to know which CardWall it was associated with (so that it could call addCard on the right object). That means NewCardView would now be coupled directly to the CardWall. That means you’d either need to wire up each NewCardView with a CardWall - probably via constructor params or by setting fields - or worse you’d need CardWall to become a singleton. Both of these are design compromises. It’s better to keep the different parts of your system decoupled if you can.

Also note that if NewCardView knows about a CardWall then you are likely to start to have circular dependencies - Collections or Models know about Views, which in turn know about other Collections or Models. Your design becomes quite hard to reason about - you don’t really know what objects are involved in a particular interaction. More coupling also makes things harder to test. If a NewCardView is directly calling a CardWall whenever card creation is requested then you need to supply fake versions of CardWall in a lot of tests. That’s one more piece of complexity in your tests for you to read, understand and maintain. Better to keep things decoupled so that your tests can remain isolated and focussed, along with the rest of your code.

Test-driven Backbone.js - Part Two

In the previous installment of this series we looked at the basics of test-driving the development of a Backbone.js app by TDDing some simple functionality for Backbone Models.

In this installment we’re going to get a bit more advanced, looking at how we can test-drive Backbone Views. Because views integrate with the DOM they can be a little more fiddly to test, but TDDing your views is still a very achievable goal.

Jasmine-jQuery

Testing backbone views is in large part about testing how JavaScript interacts with the DOM via jQuery. A helpful tool for this is a little extension to Jasmine called Jasmine-jQuery. Jasmine-jQuery adds a set of custom matchers to Jasmine which make it easier to assert on various properties of a $-wrapped DOM element. For example you can assert that a DOM element contains specific text, matches a specific selector, is visible (or hidden), etc. The project’s github README describes all the matchers added by the plugin in detail.

HTML test fixtures - to be avoided if possible

Jasmine-jQuery also provides a feature called fixtures. The idea here is that you describe any specific HTML content that you need to be available in the browser DOM during a test. Jasmine-jQuery will ensure that that content is in the browser DOM before your test runs, and will take care of cleaning the content out after the test completes. This is a useful feature if you are testing JavaScript code which acts on content across the entire document using a $ selector. In that situation you sometimes have to pre-load the DOM with the HTML which that $ selector is expecting in order to test how the JavaScript under test behaves.

However in the context of a Backbone app we shouldn’t need to lean on fixtures too often. In fact if we need to use fixtures often in our Backbone View tests then we should be concerned. This is because in most cases a Backbone view should not be accessing parts of the DOM outside of its own el property and thus should not need fixtures during testing. The DOM is essentially one big chunk of mutable shared state (or, put another way, one big global variable) and we should avoid relying upon that shared state in our design unless absolutely necessary.

Mutable shared state is a tricky thing to deal with in code. You can never be sure what code is going the change the state, and when. You have to think about how your entire application works whenever you change how you interact with that state. A Backbone view which restricts itself to just modifying its own el on the other hand is easy to reason about - all code that modifies that state is localized within that view’s implementation, and it is usually easy to hold in your head all at once. This very powerful advantage goes out of the window when we start accessing state that’s not owned by the view. So, we should strive to avoid access to the wider DOM whenever possible by restricting our view to only touching its own el property. If we do this then we have no need for fixtures. This means that if we are relying on fixtures a lot then that should be considered a test smell which is telling us that we are failing in our goal of keeping as much DOM-manipulation code as possible localized to within a single view.

The one place where we must relax this guidance a little is when we have what I call Singleton Views. These are top-level Backbone views in our Single Page App which bind themselves to pre-existing ‘shell’ sections of the DOM which came down into the browser when the app was first loaded. These views are generally the highest level visual elements in the application, and as such there tend to be no more than 3 or 4 Singleton Views in a given SPA. For example a Gmail-type app may have Singleton Views for a navigation bar, a tool bar, the left-hand folder view and the main right-hand email container.

In order to test these Singleton Views properly we need to test them in the context of the initial DOM structure which they will bind themselves to, since this is the context they will always be within in when the full app is running. HTML fixtures allow us to write tests for these Singleton Views in the context of their DOM structure but in an independent way, without affecting other tests. We’ll see an example of TDDing a Singleton View when we create the New Card view later on.

Our first View

Let’s start off by building a CardView, which our app will use to render an instance of a Card model. As we did before we’ll start off with a test, before we even have a CardView defined at all. Also like before it’s good to start off with a simple test that drives out a basic initial implementation of what we’re building.

card_view_spec.coffee
describe CardView, ->
  it 'renders a div.card',->
    view = new CardView()
    expect( view.render().$el ).toBe('div.card')

This test should be pretty self-explanatory. We’re saying that we expect a card view to render itself as a div of class ‘card’. Note that we take advantage of Jasmine-jQuery’s toBe matcher here, which says ‘the element in question should match this css selector’.

Of course this test will fail initially because we haven’t even defined a CardView. Let’s get the test passing by defining a CardView along with the necessary configuration specifying what class and type of tag it renders as:

card_view.coffee
CardView = Backbone.View.extend
  tagName: 'div'
  className: 'card'

With that we’re back to green. Now let’s move on to our next test for this view. It’s rendering as a div with the appropriate class, but what content is inside that div? If we’re rendering a Card model then we probably want the text attribute within that card model to be rendered in our CardView:

card_view_spec.coffee
describe CardView, ->
  it 'renders a div.card',->
    view = new CardView()
    expect( view.render().$el ).toBe('div.card')

  it 'renders the card text', ->
    model = new Card(text:'I <3 HTML!')
    view = new CardView( model: model )
    expect( view.render().$el.find('p') ).toHaveText("I <3 HTML!")

In our new test we create an instance of a Card model with some specific text. We then create a CardView instance for that model. Finally we render that CardView instance and verify that its $el contains a <p> tag which itself contains the Card model’s text. Note we also include some funky characters in our card text to get some confidence that we’re escaping our HTML correctly.

That’s quite a complex test, but it’s pretty easy to get passing:

card_view.coffee
CardView = Backbone.View.extend
  tagName: 'div'
  className: 'card'

  render: ->
    @$el.html( "<p>#{@model.escape('text')}</p>" )
    @

We’ve added a render method to our view. This method replaces whatever html is currently in the view’s $el with a <p> tag containing the html-escaped contents of the model’s text attribute.

We’re still following Simplest Possible Thing here. We could start using some sort of client-side templating to render our view, but for the sake of this one <p> tag it seems unnecessary. So we’re using good old-fashioned string interpolation until we reach a point where a template makes more sense.

It turns out that this new render method gets our second test passing but also breaks our first test. Our first test doesn’t pass a model to the CardView constructor, which means that the subsequent call to @model.escape('text') fails. We’ll quickly fix up our tests:

card_view_spec.coffee
describe CardView, ->
  createView = ( model = new Card() ) ->
    new CardView(model:model)

  it 'renders a div.card',->
    view = createView()
    expect( view.render().$el ).toBe('div.card')

  it 'renders the card text', ->
    model = new Card(text:'I <3 HTML!')
    view = createView( model )
    expect( view.render().$el.find('p') ).toHaveText("I <3 HTML!")

That gets us back to green. We’ve added a little createView helper function. We can explicitly pass it a model to use when it constructs a view, but if we don’t care we can just have the helper function create some generic model to supply the view with.

a View for adding new cards

A card wall isn’t much use if you can’t add new cards to the wall. Let’s start addressing that by creating a NewCardView. This view will represent the user interface used to supply text for a new card and then add it to the card wall.

This NewCardView will be a Singleton View - rather than creating its own el DOM element at construction time it will instead bind el to a DOM element which already exists on the page when it loads. First off let’s take a first stab at what that pre-existing HTML will look like by creating a shell index.html file:

index.html
<html>
  <body>
    <section id='new-card'>
      <textarea placeholder='Make a new card here'></textarea>
      <button>add card</button>
    </section>
  </body>
</html>

Pretty simple: a <textarea> to enter the card’s text into, and a <button> to add a card with that text to the wall. Now let’s drive out the basics of our NewCardView:

new_card_view_spec.coffee
describe 'NewCardView', ->

  it 'binds $el to the right DOM element', ->
    loadFixtures 'index.html'
    view = new NewCardView()
    expect( view.$el ).toBe( 'section#new-card' )

Here we use Jasmine-jQuery’s loadFixtures helper to insert the contents of our index.html file into the DOM, but only for the duration of the test. Note that we’re inserting the regular index.html into our DOM during testing, not a test-specific fixture. Doing this gives us more confidence that we’re testing the Singleton View in the same context as when it’s running in our real app. Next we create a NewCardView instance and verify that its $el is bound to the right <section> within that initial static HTML coming from index.html. Let’s get that first test passing:

new_card_view.coffee
NewCardView = Backbone.View.extend
  el: "section#new-card"

As we can see, Backbone makes it very easy to create a Singleton View. We simply specify a css selector as the el field. When the view is constructed that CSS selector string is used to select a DOM element within the page which the view instance’s el will then refer to.

Test-driven DOM event handling

Now that we have driven out the basic view, let’s write a test for some event firing behaviour. We want our view to trigger an event whenever the user clicks the ‘add card’ button. As we did before, we’ll use Sinon spy functions in our test to describe the behavior we expect:

new_card_view_spec.coffee
describe 'NewCardView', ->

  it 'binds $el to the right DOM element', ->
    loadFixtures 'index.html'
    view = new NewCardView()
    expect( view.$el ).toBe( 'section#new-card' )

  it 'triggers a create-card event when the add card button is clicked', ->
    loadFixtures 'index.html'
    view = new NewCardView()

    eventSpy = sinon.spy()
    view.on( 'create-card', eventSpy )

    $('section#new-card button').click()

    expect( eventSpy ).toHaveBeenCalled()

In this new test we register a spy on a NewCardView instance’s create-card event, so that whenever that instance triggers an event of that type it will be recorded by the spy. Then we use jQuery to simulate a click on the ‘add card’ button. Finally we verify that the create-card event was triggered by checking whether the spy we registered for that event has been called.

This test will fail of course, since we haven’t implemented any custom create-card event in NewCardView. Let’s do that now and get the test passing:

new_card_view.coffee
NewCardView = Backbone.View.extend
  el: "section#new-card"

  events:
    "click button": "onClickButton"

  onClickButton: ->
    @trigger( 'create-card' )

Here we use the events field to declaratively tell our Backbone View that whenever a <button> within the view’s el receives a click event it should call the view’s onClickButton method. In addition we implement that onClickButton method to trigger a create-card event on the view itself.

This is a very common pattern in Backbone views. They react to low-level DOM events within their el and re-publish a higher-level application-specific event. Essentially the Backbone view is acting as a translation layer between the messy low-level details like DOM elements and click events and the more abstract higher-level application concepts like creating a card.

Now that our tests are green again we can do a little refactor of our test code, DRYing it up a little:

new_card_view_spec.coffee
describe 'NewCardView', ->

  view = null
  beforeEach ->
    loadFixtures "index.html"
    view = new NewCardView

  it 'binds $el to the right DOM element', ->
    expect( view.$el ).toBe( 'section#new-card' )

  it 'triggers a create-card event when the add card button is clicked', ->
    eventSpy = sinon.spy()
    view.on( 'create-card', eventSpy )

    $('section#new-card button').click()

    expect( eventSpy ).toHaveBeenCalled()

Nothing exciting here, we just extracted out the common test setup stuff into a beforeEach.

Now, maybe we can improve NewCardView’s API a little here. Something that’s receiving one of these create-card events will probably want to react to that event by finding out more details about the new card which the user is asking to create. Let’s elaborate on that second test and say that we want the create-card event to pass along the view which triggered the event.

new_card_view_spec.coffee
describe 'NewCardView', ->

  view = null
  beforeEach ->
    loadFixtures "index.html"
    view = new NewCardView

  it 'binds $el to the right DOM element', ->
    expect( view.$el ).toBe( 'section#new-card' )

  it 'triggers a create-card event when the add card button is clicked', ->
    eventSpy = sinon.spy()
    view.on( 'create-card', eventSpy )

    $('section#new-card button').click()

    expect( eventSpy ).toHaveBeenCalledWith(view)

All we’ve done here is expanded our expectation at the bottom to say that we expect our event spy to have been passed the view which triggered the event as an argument. That test will now fail, but it’s trivial to get it passing:

new_card_view.coffee
NewCardView = Backbone.View.extend
  el: "section#new-card"

  events:
    "click button": "onClickButton"

  onClickButton: ->
    @trigger( 'create-card', @)

An extra 3 characters and we’ve added an argument to the trigger call which passes a reference to the view itself to anyone listening for that create-card event.

So if someone has received this event and has a reference to the view they’re probably going to want to get access to whatever text the user has entered into the text area. Let’s drive that API out:

new_card_view_spec.coffee
describe 'NewCardView', ->

  view = null
  beforeEach ->
    loadFixtures "index.html"
    view = new NewCardView

  # ... other tests ....

  it 'exposes the text area text', ->
    $('section#new-card textarea').val('I AM A NEW CARD')
    expect( view.getText() ).toBe( 'I AM A NEW CARD' )

We’re using jQuery to simulate a user entering text into the text area, and then verifying that we can fetch that text using a getText method on the view. Of course this test will fail because the view has no getText method. Let’s address that:

new_card_view.coffee
NewCardView = Backbone.View.extend
  el: "section#new-card"

  events:
    "click button": "onClickButton"

  onClickButton: ->
    @trigger( 'create-card', @)

  getText: ->
    @$('textarea').val()

We’re using the view’s $(...) helper method to do a scoped lookup of the <textarea> element inside our view’s el. It is a shorthand way of saying @$el.find('textarea'). Then we just grab the value of that <textarea> and return it. Tests are green once more.

Wrap up

Our new card view now provides all the functionality we need, so we’ll wrap this up.

In this installment we’ve seen how Jasmine-jQuery can help assert what HTML is being rendered by our Backbone Views, and how its Fixtures functionality can help set up pre-requisite chunks of HTML content for the duration of a test. That said, we’ve also learned that if we require fixtures to test our views then this could indicate a poor design. We’ve observed that Backbone views act as a translation layer between the core of the application and the DOM. Finally, we’ve seen how to use jQuery to simulate user interactions and how sinon spies can be used to check that events are being triggered correctly.

In the next installment of this series I cover where Controllers fit into a Backbone app and show how we can TDD the implementation of them.

Test-driven Backbone.js - Part One

In this series of posts I’m going to walk through some practical details on how we can develop Backbone.js applications in a test-driven manner. Developing this way leads to better structured applications, as we’ll discover.

Setup

We’ll be using backbone and underscore to build a single page app which simulates a card wall. We’ll be writing our app and our tests using coffeescript. If there’s enough interest I’d be happy to set up a JavaScript translation of the code snippets - please let me know. We’ll be using Jasmine as our test runner and we’ll be using sinon.js to create test doubles (mocks and stubs). We’ll also use a few other small utilities and plugins as we go.

Part one - test driven models

Let’s start off simple and test-drive a Backbone model which will represent a card on the wall. Test-driven means writing the test before the production code, so we’ll write a test for the model before the model even exists. I like to begin a new test file with a really dumb test that just drives out the initial setup of the class:

card_spec.coffee
describe 'Card', ->
  it 'is defined', ->
    expect( Card ).not.toBeUndefined()

Pretty much the simplest test I can think of: check that Card has been defined somewhere. Of course when we run this test it will fail, because we haven’t defined that model yet. Let’s do that.

card.coffee
Card = 'slime'

OK, with that done our first test is passing.

I know, I know. We didn’t define Card to be a Backbone model. I’m being a bit dogmatic here for a moment and following the test-driven tenet of doing the Simplest Possible Thing to get the test passing. In this case defining Card as a string is a simple thing to do, so that’s what I did. Using the string ‘slime’ is a trick I picked up from Gary Bernhardt to indicate that this is obviously not finished code.

So, we have a slimed Card definition which passes the test. Our next step is to write a test which drives us to remove that slime and make Card a real Backbone.Model.

card_spec.coffee
describe 'Card', ->
  it 'is defined', ->
    expect( Card ).not.toBeUndefined()

  it 'looks like a BB model', ->
    card = new Card
    expect( _.isFunction(card.get) ).toBe(true)
    expect( _.isFunction(card.set) ).toBe(true)

This new test verifies that Card instances ‘quack like a Model’. Since JavaScript doesn’t really have a strong notion of type I follow a duck-typing approach here and just verify that the Card instance implements methods that I’d expect a Backbone Model to have. This is good enough to drive us towards a more sensible initial implementation of Card:

card.coffee
Card = Backbone.Model

Note that I’m still sticking to the Simplest Possible Thing tenet. Our test doesn’t expect Card to have any additional functionality beyond the vanilla functionality it gets as a Backbone.Model, so I don’t bother to use Backbone.Model.extend. When I’m driving out code this way it’s not unusual that I never have to go beyond what I thought at the time was a placeholder implementation. I wouldn’t be surprised if that holds true here and we end up never needing to extend Card beyond this vanilla implementation.

A Card Wall

Now that we have a Card defined, let’s look at building a Wall of cards. There’s a temptation here to think ‘Oh, a wall of cards is like a collection of cards; let’s define it as a Backbone.Collection’. That’s a bad idea here. A card wall does contain a collection of cards, but it will likely also have other attributes - a title, and maybe an owner. I’ve found that with Backbone, if you’re modelling an entity in your system that’s more than just purely a collection of other things, then it’s best to represent that entity as a Backbone.Model which contains a Backbone.Collection as an attribute, rather than as a Backbone.Collection with additional custom properties. If you use custom properties rather than the attributes of a Backbone.Model then you lose all the nice functionality that Backbone gives your model’s attributes (e.g. serialization, change events).
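
Here’s a quick sketch of the contrast (hypothetical code, not part of the app we’re building):

collection-vs-model-sketch.coffee
# Anti-pattern: a Collection with a custom property bolted on. Backbone
# knows nothing about 'title' - no change events, and it won't show up
# when the wall is serialized.
WallAsCollection = Backbone.Collection.extend
  title: 'Card Wall'

# Preferred: a Model whose attributes include a Collection. 'title' is
# now a real attribute, with change events and serialization for free.
WallAsModel = Backbone.Model.extend
  defaults: ->
    title: 'Card Wall'
    cards: new Backbone.Collection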

Given that, let’s test-drive a CardWall which contains a collection of Cards. Again we’ll start off really simply:

card_wall_spec.coffee
describe 'CardWall', ->
  it 'is defined', ->
    cardWall = new CardWall
    expect( cardWall ).not.toBeUndefined()

This test will fail because we haven’t defined CardWall anywhere. Let’s get it to pass:

card_wall.coffee
CardWall = Backbone.Model

That gets our tests back to green.

Note that I’ve drifted a little from my previous Simplest Possible Thing dogmatism. I know that I’m going to be making a Backbone.Model, and I’m gaining a bit more confidence in my TDD flow, so I’m going to start taking slightly bigger TDD steps and define CardWall as a Backbone.Model even though the test doesn’t strictly require that.

Taking bigger TDD steps like this is OK, as long as we’re always ready to slow down and take smaller steps if we start feeling uncomfortable with our code. A good rule of thumb is that if you’re stuck in the Red part of your TDD cycle for more than a few minutes then you’re probably taking steps which are too big and should dial it back. As an aside, Kent Beck has a very interesting discussion around this in a presentation he did a few years ago on design principles - it’s well worth a watch. And hey, if Kent bends the TDD ‘rules’ sometimes then we’re allowed to too.

OK, now let’s drive out a cards collection on the CardWall model as we discussed earlier.

card_wall_spec.coffee
describe 'CardWall', ->
  it 'is defined', ->
    cardWall = new CardWall
    expect( cardWall ).not.toBeUndefined()

  it 'has a collection of cards', ->
    cardWall = new CardWall
    expect( cardWall.has('cards') ).toBe(true)
    expect( cardWall.get('cards').models ).toBeDefined()

We’re checking that our card wall has a cards attribute, and then checking that the cards attribute ‘quacks like’ a Backbone.Collection by verifying that it has a models property.

Now let’s get this test passing:

card_wall.coffee
Cards = Backbone.Collection

CardWall = Backbone.Model.extend
  defaults: ->
    cards: new Cards

This is a pretty big step, but it gets our tests back to green. We’ve defined a Cards collection, and we’ve extended our CardWall model to have an instance of that collection as a cards attribute by default. Note that defaults is a function rather than a plain object: Backbone evaluates the function once per instance, so each CardWall gets its own Cards instance instead of every CardWall silently sharing a single collection.

Pause to refactor

Our tests are green again, but instead of moving on to our next test I’m going to take a moment to refactor. It’s really easy to forget the Refactor part of the Red-Green-Refactor cycle but focusing on Refactor is crucial if you want TDD to drive you towards cleaner code. Without refactoring frequently you will end up with a codebase which is functional but not maintainable. Over time changes will become more and more expensive, and tests will take longer and longer to write and get passing.

So, let’s refactor! I’m going to DRY up our card wall tests a little bit:

card_wall_spec.coffee
describe 'CardWall', ->
  cardWall = null
  beforeEach ->
    cardWall = new CardWall

  it 'is defined', ->
    expect( cardWall ).not.toBeUndefined()

  it 'has a collection of cards', ->
    expect( cardWall.has('cards') ).toBe(true)
    expect( cardWall.get('cards').models ).toBeDefined()

All I’ve done here is pull out a shared cardWall variable and set it up in a beforeEach function. A small change, but it reduces some duplication in the tests and makes them a little bit easier to read. Small refactorings like this can seem unnecessary in the moment, but applied continuously over time they will do amazing things to your code. Imagine living in a codebase which always feels like a greenfield project.

What’s next? Let’s give CardWall a title attribute, and have it default to something sensible:

card_wall_spec.coffee
describe 'CardWall', ->
  cardWall = null
  beforeEach ->
    cardWall = new CardWall

  # ... other tests here ...

  it 'has a default title', ->
    expect( cardWall.get('title') ).toBe( 'Card Wall' )

And let’s get that test to pass:

card_wall.coffee
Cards = Backbone.Collection

CardWall = Backbone.Model.extend
  defaults: ->
    cards: new Cards
    title: 'Card Wall'

And we’re back to green.

Adding cards to a card wall

We are going to want to add Card instances to our CardWall. A client of CardWall could do that with the existing implementation by doing something like myCardWall.get('cards').add( cardProperties ), but that’s not a nice approach. It’s ugly to read and it’s a Law of Demeter violation which exposes the internals of CardWall. Let’s expose a nice helper method on CardWall that hides those details and makes the client’s life easier. Here’s a test describing what we want the method to do:

card_wall_spec.coffee
describe 'CardWall', ->
  cardWall = null
  beforeEach ->
    cardWall = new CardWall

  # ... other tests here ...

  it 'can add cards to the cards collection', ->
    cardWall.addCard( text: 'new card text' )

    addedCard = cardWall.get('cards').at(0)
    expect( addedCard.get('text') ).toBe('new card text')

We expect to be able to call an addCard method which will add a card to the CardWall’s cards collection with the appropriate attributes. We can get that test passing with a one-line method:

card_wall.coffee
Cards = Backbone.Collection

CardWall = Backbone.Model.extend
  defaults: ->
    cards: new Cards
    title: 'Card Wall'

  addCard: (attrs)->
    @get('cards').add( attrs )

Testing events

Clients of our CardWall model are going to want to know when cards are added to our wall. By default a Backbone model only fires a change event when one of its attributes is re-assigned, not when one of its attributes changes internally. That means that a client which has subscribed to changes on a CardWall instance won’t be notified when its cards collection changes. Let’s confirm that problem via a test:
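
To make the problem concrete before we write the test, here’s a sketch of the behavior (illustrative code, not part of our spec files):

change-event-sketch.coffee
wall = new CardWall
wall.on 'change', -> console.log 'change fired'

wall.set( title: 'Sprint 12' )        # fires 'change' - an attribute was re-assigned
wall.get('cards').add( text: 'hi' )   # no 'change' on wall - an attribute changed internally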

card_wall_spec.coffee
describe 'CardWall', ->
  cardWall = null
  beforeEach ->
    cardWall = new CardWall

  # ... other tests here ...

  it 'fires a change:cards event when a card is added', ->
    eventSpy = sinon.spy()
    cardWall.on('change:cards',eventSpy)
    cardWall.addCard()
    expect( eventSpy.called ).toBe(true)

We’re creating a Sinon spy function and registering it with our cardWall instance as an event handler for the change:cards event, so whenever cardWall fires a change:cards event that spy function will be called. We then add a card to cardWall using the addCard method we created previously, and check whether our spy has been called. If the spy was called then the change:cards event has indeed been published. If it wasn’t called then we know that the event isn’t firing when we add a card via addCard.

As expected, this test fails because a Model doesn’t automatically notify subscribers of changes inside its attributes. However we can get the test to pass pretty simply:

card_wall.coffee
Cards = Backbone.Collection

CardWall = Backbone.Model.extend
  defaults: ->
    cards: new Cards
    title: 'Card Wall'

  addCard: (attrs)->
    @get('cards').add( attrs )
    @trigger('change:cards')

We’ve added a trigger call at the end of our addCard method. This will fire off the expected event whenever we add a card. With that change our tests are back to green, and we’re at a good point to wrap up this installment.

What have we covered

So what did we cover in this installment?

We’ve learned the basics of how to test-drive a Backbone model. We’ve learned that we should strive to get tests passing by doing the Simplest Possible Thing, but that we don’t need to be too dogmatic about that. We’ve discovered that a Backbone.Collection is only appropriate for modelling a pure collection of items. We’ve seen that it’s better to encapsulate access to internal attributes by using helpful methods like addCard. Finally, we’ve seen how to use Sinon spies to test basic Backbone event publication.

No DOM, no jQuery

One final thing to note before we wrap up - we have not made any reference to the DOM in these tests. In fact, when developing this code I ran my tests within node.js with not a DOM implementation in sight. This is a very important point, and a key property of a well-structured Backbone application - Backbone views should be the only thing referencing the DOM (and therefore the only thing using jQuery’s almighty $).

The flip side of keeping the DOM and jQuery contained within our Views is that we should always strive to keep our Backbone Views as skinny and logic-free as possible. Because Views interact with the DOM they are particularly tricky to test, and their tests are slower to run because we have to run them in a browser. We want to keep our views as simple as possible so that we don’t have to write too many of those tricky tests. Instead we want to push logic into other parts of our application where it can be easily tested in isolation.

In the next installment we’ll dive into testing Backbone Views and their interaction with the DOM. I’ll also go into more details on what a View’s responsibilities should be.

Deploying to Heroku From CI - the Gory Details

| Comments

In my previous post I discussed why you might want to deploy to Heroku as part of a CI build. I demonstrated how my heroku-headless gem makes it very easy to script such a deployment. In this post I’ll go into the details on how that gem does its work. I’ll talk about what the Heroku deployment tooling expects to be available, why that’s not necessarily going to be there in the context of a CI build environment, and how the gem helps resolve that.

Heroku’s deployment model

Heroku’s deployment model centers around pushing git commits to a special heroku git repo. When you want to deploy a new version of your application you push the git commit corresponding to that build up to the special heroku repo. As a side effect of updating the remote repo, Heroku deploys a copy of the application as of that commit.

Of course Heroku won’t let just anyone deploy a new version of your application. It only allows a registered collaborator to push to an app’s repo. You can manage which Heroku users are collaborators of the app via the Heroku web interface, or via the heroku sharing set of commands.

But how does Heroku know which user is trying to push to an app’s heroku repo? It looks at the ssh key that git uses when it makes the push. Unless that ssh key is registered to a listed collaborator of the app, Heroku will reject the push.

A Heroku user usually registers their ssh key with Heroku using the heroku keys:add command. The average Heroku user only has to perform this procedure when they’re setting up a new dev machine. Once it’s done Heroku deploys are very low-friction since your registered ssh key is automatically used by git whenever you push. git push heroku is all you need to do. It’s easy to forget that you registered your ssh key with Heroku at one point.

What’s different for a headless CI deploy

Things can be a bit different when deploying from a CI agent. The CI agent’s user may not even have an ssh key generated, and if it does it is probably not associated with a Heroku user that has collaborator access to the Heroku app you want to deploy to.

One way to solve this would be to ensure that the CI agent user has an ssh key generated, and to manually register that key with a Heroku user who has collaborator rights to the target app. This works, but it’s not ideal. The manual setup is tedious and error-prone, and you have to do it for every agent in your CI system. You also have to make sure that the Heroku user which the CI agent is acting as is registered as a collaborator for every Heroku app that it might be deploying to. If you’re using a cloud-like CI system such as Travis then you might not even have access to the CI agent in order to generate and register an ssh key, and even if you did you’d have no control over which agent will be running your next build. With some systems you will be given an agent with a totally pristine environment for each build. In other words, you can’t always rely on manually pre-configuring an agent’s environment.

All of this means that it’s better to avoid the need for manually configured, pre-existing ssh keys altogether. Instead we can generate a disposable ssh key, register it with a Heroku user, do a git push using that key, and then remove the disposable key.

As luck would have it, Heroku exposes an API for adding and removing ssh keys for a user. When you use the Heroku API you pass a secret API key, which Heroku uses both to authenticate you and to figure out which user you are acting as. That allows the API to know whose keys you are managing.

This disposable key approach is more secure, and doesn’t require the CI agent to have a previously configured environment. You could take a totally pristine box and use it to run a deploy without any other setup. Conversely, you can test a deploy script on a fully-configured developer workstation without your local environment affecting the deploy script, and without the deploy affecting your environment.

Note that the disposable key approach still requires that you have previously set up a Heroku user who has collaborator access to the app you are deploying to. It also requires that your build scripts have access to that user’s secret Heroku API key. You need to be careful here - if that key gets into the wrong hands it could be used to run up a very large bill with Heroku. As I said in my previous post, you’ll want to use a feature in your CI system along the lines of Travis’s secure environment variables to protect access to that key. Most CI/CD systems provide similar functionality.

The steps for a headless deploy using disposable keys

So we have our basic approach laid out. Whenever we want to deploy our app we need our script to:

  • generate a new disposable ssh key
  • register that key with Heroku
  • use the key to deploy the app via a git push
  • unregister the key with Heroku
  • delete the key locally

Implementation details

I’ll now briefly describe how the heroku-headless gem does all that. If you want more details I encourage you to study the gem’s implementation. It’s really pretty simple - a handful of classes, about 200 lines of code in total.

Creating a local scratch directory

We use ruby’s tmpdir module to generate a temporary working directory which will contain our disposable ssh keys and some configuration files. After we’re done with the deploy we’ll delete this directory.
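
Something along these lines does the job (a minimal sketch, assuming the rest of the deploy runs inside the block - this isn’t the gem’s exact code):

scratch-dir-sketch
require 'tmpdir'

# The block form of mktmpdir guarantees that the directory and everything
# in it is deleted when the block exits, even if the deploy fails.
Dir.mktmpdir('heroku-deploy') do |scratch_dir|
  # generate the disposable ssh key, GIT_SSH wrapper script, etc. in here
end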

Generating a disposable ssh key

Next we’ll generate our disposable public/private key pair inside our new scratch directory. We use the ssh-keygen command which is available on pretty much any unix box: ssh-keygen -t rsa -N "" -C #{ssh_key_name} -f #{ssh_key_path}

Registering the key with Heroku

The heroku-api gem is our friend here. We create an instance of the Heroku API with heroku = Heroku::API.new(). If you don’t explicitly pass in an API key the gem will use the value of the HEROKU_API_KEY environment variable, so you need to make sure that that environment variable is set correctly by your CI system just prior to running your deploy script. Alternatively you can explicitly pass in an API key to the constructor, but again you need to be careful you don’t expose this key.

Given all of that, we can register our disposable ssh key with that API key’s Heroku user by doing something like:

register-disposable-key
heroku = Heroku::API.new()
public_ssh_key = File.read(path_to_public_ssh_key)
heroku.post_key(public_ssh_key)

Note that we’re sending the public key to Heroku. Private ssh keys are never exposed.

Pushing to the heroku git remote using our disposable key

This is the fiddly bit. We need to have git push to heroku using that newly generated ssh key, but we don’t want to mess with any system ssh configuration which might be in place. Luckily git allows you to override the path to the underlying ssh executable it uses when connecting to a remote repo, via a GIT_SSH environment variable. We’ll use that to point git to a little wrapper script. This script calls through to the system’s standard ssh executable but adds a few command line arguments along the way. Those command line arguments will tell ssh to identify itself using our disposable key (as opposed to whatever may be setup in ~/.ssh/). We also add a few arguments which tell ssh to not ask for confirmation the first time we connect to the heroku host, and also to prevent ssh from recording the heroku host as a known host.

The wrapper script looks like this:

git_ssh_wrapper.sh
#!/bin/sh
exec ssh -o StrictHostKeyChecking=no -o CheckHostIP=no -o UserKnownHostsFile=/dev/null -i /path/to/disposable_ssh_key -- "$@"

All credit to this Stack Overflow question, on which that wrapper script is based.

Once we’ve generated that script and placed it in our scratch directory we can ask git to push to our app’s heroku repo using that custom ssh wrapper like so:

push-to-heroku
system( {'GIT_SSH'=>custom_git_ssh_path}, "git push git@heroku.com:#{app_name}.git HEAD:master" )

Note that in this example we’re pushing whatever HEAD currently points to, but we could push any arbitrary commit up to Heroku using this same command.
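
For example, assuming a hypothetical sha_to_deploy variable holding the full sha of the commit we want to ship, we could do something like:

push-arbitrary-commit
system( {'GIT_SSH'=>custom_git_ssh_path}, "git push git@heroku.com:#{app_name}.git #{sha_to_deploy}:master" )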

Deregistering the disposable ssh key

This one is easy: heroku.delete_key(ssh_key_name). The ssh_key_name we pass in should be the same key name we passed to ssh-keygen via the -C flag.

Cleanup

Lastly, we clean up after ourselves by deleting the local scratch directory.

Fin

That’s it. It did take a fair amount of Internet research to figure all that out, but I should be clear that almost all of what I’ve described was lifted from other blog posts, Stack Overflow answers, etc. Hopefully by collating that info here I’ll help someone else travelling down a similar path. And again, if you don’t really care about the details and just want to get your app deployed via CI then just use the heroku-headless gem and move on!