
Being Agile

Pete Hodgson's blurgh

The Ultimate Traveler’s Coffee Kit


I travel a fair bit for work and have discovered that when you’re living in a hotel it’s the simple comforts that you miss the most. Near the top of my list of simple comforts is a good cup of coffee soon after I wake up. The hideous coffee tea-bag things that most hotel rooms provide barely count as coffee, let alone good coffee. A year or so ago I set out to find a way to make good coffee in a hotel room without using up a ton of luggage real estate. Today I am very happy with my setup, apparently to the point that I’m OK with writing a blog post about it titled “The Ultimate Traveler’s Coffee Kit”.

Just to lay my cards on the table, I’m a fan of good coffee but I’m not a serious coffee geek. At home I do have a nice burr grinder but I don’t own a goose-neck kettle and I’ve never used a scale or a stopwatch while brewing coffee. Given that, this travelling setup works great for me.

The kit

My kit consists of:

  • an Aeropress, with a reusable stainless steel filter
  • a silicone cap that covers the plunger tube (and holds coffee beans)
  • a Hario Slim hand grinder
  • a small enamelled metal camping mug

Plus a couple of optional extras:

  • a cut-down chamois towel
  • some tea bags, for when I don’t want coffee

This setup packs almost perfectly into a small Eagle Creek pack-it bag that I found at REI. I won’t admit to the inappropriate amount of time I spent in the packing-paraphernalia aisle trying out different bags before I settled on this one.

I’m very happy with how compact this whole setup is. It doesn’t take up much room in a suitcase at all. It’s also great for backpacking.

The details

Many coffee geeks use the Aeropress as their standard brewing method, and for traveling it’s a great compact option. I ran out of the little paper filters one too many times while traveling so now I use a stainless steel filter. When staying in a hotel you usually only have access to paper coffee cups, which seem prone to being crushed when pressing down on the Aeropress, so I pack a little enamelled metal camping mug too.

There is a surprising variety of aftermarket parts you can buy to enhance your Aeropress experience. As well as the reusable filter I also have a little silicone cap which covers the plunger tube, allowing me to pack coffee beans inside. I get about 4 brews worth of beans in there - just enough to start my mornings off right during a short business trip.

By pushing the plunger through the outer tube the wrong way, then placing the filter and filter cap on the narrow end, the whole lot fits inside the camping mug. Aeropress, beans and cup in a very small package.

Initially I would bring pre-ground coffee with me when travelling but found that it didn’t taste so great by the end of a trip. The fact that the Aeropress requires a pretty fine grind probably makes this worse. Eventually I bit the bullet and upgraded to a small Hario Slim hand grinder. It’s a bit of a pain to grind the beans every morning, but the results are worth it. The Hario Slim is teeny - only a bit bigger than a large pepper grinder - and I’ve actually found it easier to grind with than the larger Hario Skerton, perhaps because you can get your hand all the way around it to hold it steady while grinding.

The grinder breaks down small, and along with the Aeropress the whole setup fits perfectly inside my bag. If I wanted to pack extra beans I could add maybe two brews worth in the top section of the grinder, which has a little lid to keep the beans from wandering.

With the grinder, aeropress and cup in the bag I have just enough room left for some tea bags and a little cloth. The cloth is to dry off the equipment - you might feel guilty about the brown stains you leave on those nice white hotel towels otherwise.

Brewing in a hotel room

Brewing coffee requires hot water. It turns out that a lot of hotels will provide you with a small electric kettle if you ask. If that’s not an option you can use the crappy coffee machine in the hotel room to boil water. Be warned though that you won’t want to use that water to make tea - it invariably has a subtle taste of old coffee.

While I wait for the water to boil I measure and grind my beans. I use the bottom of the grinder to measure a dose and a half of beans, filling it halfway between the 1 mark and the 2 mark. I have my Hario Slim grinder set at 9 clicks looser than fully tightened, based on advice I read on the Interwebs. I use the inverted technique with my Aeropress, and wait about 30 to 60 seconds before flipping it. Then I plunge the coffee into my little camping mug and top up the mug with hot water.

After that I squeeze the plunger down hard into the hotel bathroom sink to compress the “puck” - this means fewer coffee grounds to clean off of the filter. Then I unscrew the filter cap and push the puck into the trash. Don’t forget to take the stainless filter off first! I rinse the equipment in the sink and leave it to dry on top of my little towel. Finally I sit at my teeny tiny hotel desk and enjoy a really good cup of coffee while I start my day.

Video for My different.js Talk


I had the opportunity to present at Forward JS a few weeks ago on “different.js”. No, the presentation wasn’t about a new, shiny, artisanally-crafted micro-framework. I talked about how to “level up” at JavaScript by introducing artificial constraints which force you to approach things a different way. I worked through a specific example, abstaining from the use of this and prototypes in favor of closures.

Slides for the talk are here, video is below (thanks New Circle).

Optimizing 3rd Party JavaScript Loading


I was doing some research today on preventing slow or flaky JavaScript files from 3rd-party sources from slowing down your perceived page load times. Things like tracking pixels and chat widgets are generally loaded via script tags, and if you’re not careful then one rogue provider can create a really poor experience for users on your site.

I don’t have any original thoughts here, just a collection of resources which I found that summarize techniques I’ve heard about in the past but not had to use directly until now.

General best practices

I hate that phrase, but oh well.

Async function queuing

This is the technique of deferring loading the bulk of a 3rd-party library; while it loads, client code adds the operations it wants to perform to a ‘queue’ of deferred operations, which are then executed once the library eventually loads. Google’s async analytics tag is a well-known example of this technique.

It’s well described in this StackOverflow post.
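To make the mechanics concrete, here’s a minimal sketch of the pattern (all the names here, like _myq and track, are hypothetical, not taken from any real library):

```javascript
// Before the 3rd-party library loads, client code pushes commands
// onto a plain array:
var _myq = [];
_myq.push(['track', 'pageview']);
_myq.push(['track', 'signup-click']);

// A stand-in for the real 3rd-party API, recording what it was told to do:
var seen = [];
var api = { track: function (what) { seen.push(what); } };

// When the library finally loads, it drains the queue and then swaps
// `push` for a function that executes commands immediately:
function drain(queue, api) {
  queue.splice(0).forEach(function (cmd) {
    api[cmd[0]].apply(api, cmd.slice(1));
  });
  queue.push = function (cmd) {
    api[cmd[0]].apply(api, cmd.slice(1));
  };
}

drain(_myq, api);
_myq.push(['track', 'late-event']); // runs immediately now
```

The trick is in the last step: once the queue is drained, push is replaced, so later calls execute immediately instead of queuing.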

Using ‘async’ and ‘defer’ script tag attributes

As usual, Ilya Grigorik does a great job covering how to do async script loading.

Chris Coyier at CSS-Tricks covers similar ground too.

Don’t forget that async and defer are not available in all browsers. Note that as Ilya mentions in his article, you should add both defer and async for a safe and backwards-compatible approach.
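As a sketch, a tag for a non-critical 3rd-party script using both attributes might look like this (the URL is just a placeholder):

```html
<!-- async: fetch without blocking the parser, execute as soon as downloaded.
     defer: a fallback for older browsers that support defer but not async. -->
<script src="https://thirdparty.example.com/widget.js" async defer></script>
```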

Rails as an SOA Client


I got to present a talk at RailsConf this year. The video is here, and the slides are here.

My talk was about Rails as an SOA client. I talked about things like Conway’s Law, Contract Testing, Hexagonal Architecture, the faraday gem, and HTTP Caching.

The 5 Rules of an Awesome Dev Toolchain


Every ThoughtWorks project I’ve ever been on has a script in the root of the project repo, often named ./go or ./go.sh (or ./go.bat if you’re very unlucky). This script often starts life as the script which the CI server runs to test the project. It soon acquires extra capabilities. It becomes the way by which developers perform a pre-commit check before pushing their changes. It becomes the script to run when you first get started on the project to download all the various project dependencies.

./go can eventually become the primary interface into the dev tooling for the project. At that point it becomes a very powerful productivity boon for the team. Whenever a team is tempted to put some process or workaround into a README or a wiki page they instead automate it and put it into the ./go script. The project team’s tribal knowledge on how to perform certain tasks is automated, readily available, and under source control.

Some examples of things I’ve often seen added to a ./go script are:

  • download software dependencies needed for dev tooling, or at least detect that a required dependency is missing and provide instructions on how to install it.
  • set up or update an IntelliJ configuration for a JVM project.
  • do the rvm/bundler dance for a ruby project.
  • run a standard set of pre-commit tests
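The bullet points above could be wired into a skeletal ./go script along these lines (the task names and messages are illustrative, not from any particular project):

```shell
#!/bin/bash
# Skeletal ./go script: each task is a shell function, the first
# argument picks the task, and anything else prints a task listing.
set -e

task_setup() { echo "downloading project dependencies..."; }
task_test()  { echo "running pre-commit tests..."; }

list_tasks() {
  echo "usage: ./go <task>"
  echo "  setup  download software dependencies needed for dev tooling"
  echo "  test   run a standard set of pre-commit tests"
}

case "${1:-help}" in
  setup) task_setup ;;
  test)  task_test ;;
  *)     list_tasks ;;
esac
```

Listing every available task when given an unknown (or no) argument is a cheap way to satisfy the “don’t make me think” rule below.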

What makes a ./go script great?

I’ve spent some time thinking about what makes a good ./go script good. Here are my rules for an awesome ./go script, and thus a low-friction dev toolchain.

1. don’t make me think

Make your dev toolchain as low-friction as possible by making ./go as low-friction as possible. Give me sensible defaults. Give me clear error messages when something goes wrong, including hints on how to fix the problem. If there are two operations which need to be performed one after the other give me a higher-level operation which does them both (for example, wrap vagrant destroy -f && vagrant up into ./go vagrant-nuke). Give me a list of all operations available to me, with a brief description of what they do.

The git CLI is a good example of what you should be striving for.

2. isolate, isolate, isolate!

Try as hard as humanly possible to isolate your toolchain so that it’s maintained within the project. Strive to avoid system dependencies - software that needs to be installed on the workstation itself.

On a Scala project, run your own isolated version of sbt rather than the system sbt. On a ruby project use bundler’s --path option to vendorize your gems and avoid the need for gemsets. On a python project use virtualenv. Check in binary tools like chromedriver or phantomjs.

This is absolutely the best way to ensure consistency across machines, and avoid “it’s only (working|borked) on my machine” issues. Because toolchain dependencies are managed within the project repo this also makes it trivial to upgrade your tooling. No need to coordinate everyone switching versions of a tool.

Even if you can’t get to the point that your tools are fully isolated, your ./go script can still perform checks on shared system dependencies to warn you or outright fail if they are not present, or are not at the right version.
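Such a pre-flight check can be as small as one shell function (the tool name and install hint here are just examples):

```shell
# Fail with a helpful message when a shared system dependency is missing.
require_tool() {
  local tool="$1" hint="$2"
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "error: '$tool' is required but not installed. Try: $hint" >&2
    return 1
  fi
}

require_tool sh "install a POSIX shell"
```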

3. uniform interface

Everything is done via ./go. Everything.

Don’t make me remember whether it’s rake test, rspec, or ./scripts/test. Just put everything inside of ./go, even if ./go just immediately shells out to another tool. This is an extension to the “Don’t make me think” rule. It also makes it a lot easier to isolate your tooling. Importantly it adds a level of indirection which is under your project’s direct control. That means that you can modify or even switch your underlying tooling without having to re-educate every member of the team. You can add custom pre-flight checks for required dependencies before performing an operation. You can wrap every operation with benchmarking or instrumentation. You can run every operation inside a VM or container without the person running the script even knowing!

4. one step to a running local dev environment. One.

A new member on the team should be able to run one command to get to a running dev instance on their workstation. The setup steps in your project README should be:

git clone yourProject
cd yourProject
./go

In the worst case where you absolutely cannot automate some aspect of project setup then the ./go script should detect that the manual setup is required and provide clear and detailed instructions on how to perform it.

5. do on your workstation as you do in CI

Push as much of your CI/CD scripting as possible underneath your ./go script. This keeps everything centralized and under source control. It also makes it easy to reproduce and debug CI/CD problems on a local dev workstation, and makes it easier to maintain and extend those scripts.

You get the most benefit here if you’ve aggressively followed the advice to isolate your dev dependencies. You need your local tooling to be the same as that which is running on your CI/CD agents.

This tooling consistency is such a valuable aspect of a good dev toolchain that it’s worth pushing your entire dev toolchain into a virtual machine, with the exception of GUI tools. Combining Vagrant or Docker with something like Chef or Ansible gives you a setup with a strong guarantee of a uniform toolchain across all dev/QA workstations, and also all CI/CD agents. It gives you a very simple way to make sure your toolchain hasn’t drifted from the project norms - just nuke your VM and rebuild from scratch. Rebuilding the VM might take a while but it’ll probably be quicker than debugging the cause of the problem and correcting it. This approach also forces you to automate every aspect of your toolchain setup and reduces the risk of workstation toolchains drifting apart over time.

Running your build in a VM hosted within your workstation may sound extreme, but the amount of time you’ll save not debugging irritating tooling issues is a fine tradeoff for the inefficiency of building within a virtual machine or LXC container.
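As a sketch, the VM route might start from a Vagrantfile as small as this (the box name and provisioning script path are placeholders):

```ruby
# Vagrantfile: one VM for the whole dev toolchain, provisioned by the
# same automation that configures the CI/CD agents.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                 # placeholder box
  config.vm.provision "shell", path: "provision.sh" # placeholder script
end
```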

Microstatic: Radically Simple Static Microsites


I’m a big fan of static sites, and have created a fair few little static one-page demos and toy apps over the last few years. I like to host these off of S3 and expose them under subdomains of my domain, thepete.net. In fact, you’re reading one of those S3-hosted static sites right now!

With modern infrastructure like S3 and Route 53 the process of setting up a new microsite is pretty straightforward, but the manual steps started to grate on me after a while. “I’m a dev”, I thought to myself. “Why can’t I automate this?”. And so of course I did.

I started off with some shell scripts around s3cmd, but eventually moved to an assemblage of ruby functions driving the fog gem. Prompted by an internal thread at ThoughtWorks (particularly by @gga) I cleaned things up and packaged this functionality into a gem called microstatic.

Max Lincoln also gave me lots of guidance and good ideas. He was keen for me to use fog so that microstatic could support alternate cloud providers such as OpenStack/Rackspace. I failed to keep to that vision, but I’m very happy to take pull requests to get back on that path.

A gem in two parts

Microstatic does two things. Firstly it provides a command-line tool that makes it ridiculously simple to create a new microsite. Secondly it provides a rake task that makes it ridiculously simple to push new content to the microsite.

Creating a new microsite

microstatic setup and you’re done. This will create a new S3 bucket to hold your microsite, and then add a Route 53 entry to wire that S3 bucket up to a subdomain.

Deploying content to your microsite

rake deploy and you’re done. Microstatic ships with a rake task that will sync your local contents with the S3 bucket hosting your site.

Demo

Here’s a 24 second demo. I’m creating a brand new microsite from scratch, setting up S3 buckets and DNS entries, and finally deploying some initial content to it.

Pretty neat, huh?

Assertions in Page Objects


Martin Fowler recently added a bliki entry for the Page Object pattern. It’s a good writeup - if you haven’t read it I recommend doing so now. Go ahead. I’ll wait.

The entry sparked a discussion on an internal ThoughtWorks mailing list as to whether Page Objects should include assertions or not. In fact Martin mentions this difference of opinion in the bliki entry itself. I fell on the side of favoring page objects with assertions baked in but couldn’t come up with a compelling reason why until I was working on some Capybara-based page objects with a client QA today.

TL;DR

If you don’t put assertions inside your page objects you are violating Tell Don’t Ask, which in turn hinders your testing framework’s ability to do implicit spin asserts - a very valuable feature.

In the following I’ll explain all of that in more detail.

Tell Don’t Ask vs Single Responsibility Principle

Let’s use a concrete example with Page Objects. I’m testing a web page which lists my bank accounts. In one of my tests I want to verify that the list of bank accounts includes a bank account with a balance of 100.25.

If I didn’t want assertions in my page object I would Ask my page object for a list of bank accounts, and then check for the balance myself:

accounts = bank_account_page.listed_accounts
accounts.find{ |account| account[:balance] == 100.25 }.should_not be_nil

On the other hand if I’m OK with assertions in my page object I would Tell my page object to check for a listed bank account with that balance:

bank_account_page.verify_has_listed_account_with_balance_of( 100.25 )

This second example conforms to Tell Don’t Ask, the idea that it’s more object-oriented to tell an object what you want it to do, rather than to ask it questions about its state and then do the work yourself. However you could argue that the second page object example with embedded assertions also violates the Single Responsibility Principle - my page object is now responsible for both abstracting over the page being tested and also performing assertions on that page’s state.

At this point we see there are valid arguments both for and against the inclusion of assertions in our page objects. This illustrates that software design is rarely about right and wrong, but almost always about trade-offs between different design choices. However today I realized that there’s a feature of Capybara (and other similar UI testing frameworks) which in my opinion pushes the decision further towards complying with Tell Don’t Ask and including the assertions in the page object.

Spin Asserts

Spin Assert is a very valuable UI automation pattern. It is used to mitigate the fact that UI automation tests are often involved in races with the UI they are testing.

Let’s continue using our account listing example. Imagine I have a test in which I transfer some money from my checking account to my savings account, and then test that the balances are listed correctly:

def verify_funds( name, balance )
  accounts = bank_account_page.listed_accounts
  accounts.find{ |x| x[:name] == name && x[:balance] == balance }.should_not be_nil
end

#...

verify_funds( "Checking Account", 200.25 )
verify_funds( "Savings Account", 20.15 )

transfer_funds( 100, from: "Checking Account", to: "Savings Account" )

verify_funds( "Checking Account", 100.25 )
verify_funds( "Savings Account", 120.15 )

Looks good. But what if the account balances take a while to update in the UI after the funds are transferred? My test might check that the checking account has a balance of 100.25 before the UI has had a chance to update. It would see the old balance of 200.25 instead of 100.25, and the test would fail. The test has raced ahead of the UI, leading to an invalid test failure.

Spin Asserts deal with this by repeatedly checking for the expected state, rather than checking once and then summarily failing. Here’s what a crude spin assert implementation might look like in a verify_funds implementation:

def verify_funds( name, balance )
  loop do
    accounts = bank_account_page.listed_accounts
    break if accounts.find{ |x| x[:name] == name && x[:balance] == balance }
    sleep 0.1
  end
end

Here we just repeatedly search for an account with the expected name and balance, pausing briefly each time we loop. In a real spin assert implementation we would eventually time out and raise an exception to fail the test after spinning unsuccessfully for a while.
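A fuller implementation with that timeout might look something like this (a hypothetical helper, not taken from any particular framework):

```ruby
# Retry the block until it returns a truthy value, failing the test
# with an exception if it hasn't succeeded before the timeout.
def spin_assert(timeout: 5, interval: 0.1)
  deadline = Time.now + timeout
  until yield
    raise "spin assert timed out after #{timeout} seconds" if Time.now > deadline
    sleep interval
  end
end

# e.g. wait for state that another thread updates asynchronously
balance_updated = false
Thread.new { sleep 0.2; balance_updated = true }
spin_assert(timeout: 2) { balance_updated }
```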

Implicit Spin Asserts

Spin asserts are such a valuable technique that they are built into a lot of web automation tools. In fact a lot of web automation tools will do a spin assert without you even being aware of it. In Selenium/WebDriver this is referred to as ‘implicit waits’, and Capybara does the same thing.

When we tell Capybara “check that this HTML node contains the text ‘foo’” it will implicitly perform that check using a spin assert. It will repeatedly check the state of the UI until either the node in question contains the text ‘foo’ or the spin assert times out. Here’s what that assertion might look like:

page.find(".container .blah").should have_content('foo')

I think it’s pretty neat that a spin assert is hidden in there for free. However what if we instead ask Capybara for the current content of an HTML node and then do further checks ourselves?

page.find(".container .blah").text.should include('foo')

This looks very similar, but now we’re asking for .text instead of telling Capybara that we want the text to contain ‘foo’. Capybara is required to return the full content of the node at the instant we ask for .text, which robs it of the chance to do the helpful implicit spin asserts it could do if we were telling it to check for ‘foo’.

By violating Tell Don’t Ask we’re reducing Capybara’s role to dumbly exposing state, and preventing it from enhancing that state with value-add behavior.

Page Objects should include Assertions

Hopefully you can see where I’m heading by now. If our example account listing page object includes its own assertions then we can tell the page object “verify that there is a checking account with a balance of 100.25”. The page object is free to internally use spin asserts while verifying that assertion. If we don’t include assertions within our page objects then we are required to verify the page state externally. This would mean we’d need to implement our own spin asserts, and often wouldn’t be able to take advantage of the free implicit spin asserts provided by Capybara. An example which highlights the difference this can make is asserting that we’re on the correct page by checking the page title.

With internal assertions in our page object:

class SomePageObject
  #...

  def verify_page_title
    page.should have_title "The Page Title"
  end
end

#...

some_page.verify_page_title

As opposed to external assertions:

class SomePageObject
  #...

  def has_correct_page_title?
    page.title == "The Page Title"
  end
end

#...

spin_assert{ some_page.has_correct_page_title? }

In the first example we get Capybara’s implicit spin asserts for free. In the second example not only are we required to do our own explicit spin assert, we’re required to do so every time we want to verify the page has the right title. Overly verbose and prone to error.

Tell Don’t Ask allows value-add behavior

There are other advantages we get from including assertions in our page objects (for example better assertion failure messages), but for me the ability to leverage implicit spin asserts is the big win. At the end of the day it means that a page object can present a higher level of abstraction in its public interface, adding value underneath. Yes, we’re weaker on Single Responsibility, but overall I think it’s a good tradeoff.

Introducing Rack-flags


I’m pleased to announce rack-flags, a very simple way to add feature flags/bits/toggles to your Rails app (or any other rack-based web app).

The rack-flags gem allows you to define a set of feature flags which can be turned either off or on for any user. Your application code inspects the state of these feature flags when processing a web request and uses that information to modify how it handles that request.

A concrete use case

Let’s use a concrete example. We’re developing a new photo sharing feature for our Web 3.0 application. This is a big project, and we want to be releasing our app to production on a regular basis while the Photo Sharing project is still in progress. To allow that to happen we add an ‘expose photo sharing UI’ feature flag to our app, which is defaulted to Off while the project is in progress. In our application code we can check for that feature flag and only display a link to the new Photo Sharing section of our UI if the feature is On for the current request. Internal users (product owners, QAs, devs, etc) who want access to the half-finished work do so by overriding the ‘expose photo sharing UI’ flag to On using an admin UI (a simple rack app that ships with the rack-flags gem).

Once the first version of Photo Sharing is ready for prime time we simply change the default for the feature flag to On. Now every user has access to the new functionality. After the feature has been live for a while and we’re confident that it works as advertised we can retire the ‘expose photo sharing UI’ flag, along with the checks that were added to the application code.

The mechanics

The ability to override flags for a given user is achieved using cookies. The rack-flags gem provides a rack app which acts as a simple feature flag admin UI. It lists the flags which have been configured for your app, and allows you to override their default state. Those overrides are stored in the user’s cookie. On each request to your web application a small piece of rack middleware inspects that cookie. When your app asks rack-flags whether a feature is turned off or on rack-flags combines the default state of each flag with whatever overrides were detected in the cookie and uses that to determine whether the flag is Off or On.
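In essence the lookup combines two sources: defaults from the yaml file and overrides from the cookie. A hypothetical sketch of that combination (the real gem’s internals will differ):

```ruby
# Cookie overrides win; otherwise fall back to the yaml default.
def flag_on?(name, defaults, overrides)
  overrides.fetch(name) { defaults.fetch(name) }
end

defaults  = { foo: true, show_new_ui: false }
overrides = { show_new_ui: true } # user flipped this in the admin UI

flag_on?(:show_new_ui, defaults, overrides) # => true
flag_on?(:foo, defaults, overrides)         # => true
```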

Some code examples

A yaml file defines the set of feature flags for your app, including a description of the flag and its default state:

feature_flags.yaml
foo:
  description: This is my first feature flag
  default: true
show_new_ui:
  description: render our experimental new UI to the user
  default: false

Setting up rack-flags for a rails app:

application.rb
config.middleware.use RackFlags::RackMiddleware, yaml_path: File.expand_path('../feature_flags.yaml',__FILE__)
routes.rb
mount RackFlags::AdminApp, at: 'some_route'

Checking whether a feature is Off or On for a specific request:

SomeController.rb
  class SomeController < ApplicationController
    def some_action
      features = RackFlags.for_env(request.env)
      @show_whizzy_bits = features.on?(:show_new_ui)
    end
  end

That’s all there is to it!

Check out the github repo’s README for more documentation, including an example of setting up a Sinatra app.

Credits and prior work

This gem is inspired by a very similar system which some fellow ThoughtWorkers and I created while building a Rails-based e-commerce front end at a large online retailer. Last time I checked in with that client they were still using their version of this feature flagging system and were very happy with it.

I previously blogged about the cookie-based approach which we used at that client and subsequently re-implemented in this rack-flags gem.

Ryan Oglesby was one of the ThoughtWorkers on that project and co-author of this gem.

A lot of my thinking about feature flags is colored by working with a sophisticated ‘feature bit’ system several years ago. I believe Sean Ryan was the instigator of that system, which was chiefly implemented by Rob Loh and Elden Bishop under the guidance of Erik Sowa. Erik and Rob presented on that feature bit system at Lean Software and Systems Conference 2010.

iOS Unit Testing Workshop


TL;DR

I’ll be running a couple of free workshops on iOS Unit Testing in San Francisco in late August/early September. If you’d like to attend please fill out this simple form and I’ll be in touch with details.

Why is it free? What’s the catch?

I’m preparing to run a paid 3 hour workshop on iOS unit testing for the upcoming super-awesome StrangeLoop conference. As part of that prep I’ll be doing a couple of ‘dry runs’ of the workshop so I can fine tune the content, delivery, pacing, etc. You are cordially invited to attend. You get to learn a bunch about iOS unit testing, I get to hone and refine the workshop. A win/win for us both.

Details on the workshop

Learn the why and the how of unit-testing for your iOS applications using Kiwi, a popular open-source testing library.

You will leave this workshop excited to apply what you’ve learned to start testing your iOS app. In fact, you’ll probably want to fire up your project in Xcode straight afterwards!

We’ll start with the basics, covering what unit-testing is (and isn’t), and why it will be a valuable addition to your programming toolbox.

Having covered some theory we’ll then spend the majority of the workshop learning by doing. We’ll start by setting up an Xcode project for testing with Kiwi, and quickly move on to writing our first simple test.

After learning the basics with some isolated examples we’ll move on to look at how to unit test a real iOS app. As we go we’ll learn about mocking and stubbing, and when to make the decision not to test. We’ll learn how to test in the context of a CocoaTouch app, and how testing can positively influence the design of your app (hint: your app delegate and your view controller aren’t the only places to put your business logic!).

We’ll wrap up by touching on some more advanced topics such as test-driven development, CI, and how to test asynchronous code.

If this sounds interesting to you just fill out this simple form and I’ll be in touch with details about attending.

About me

What better than a bio written in the third person!

Pete Hodgson is a consultant with ThoughtWorks, where he’s spent the last few years helping teams become awesome at sustainable delivery of high-quality mobile applications. Pete is on a mission to bring modern agile practices like continuous integration and automated testing to the iOS development community. While working on an iOS application for a large financial institution he helped create Frank, an automated acceptance testing tool for native iOS applications. He is now the open-source maintainer for Frank, which has become a leading tool for automated testing.

Using Travis CI and Xctool to Build and Test iOS Apps


Travis CI is a cloud-based Continuous Integration service which is free to use (but only on public github projects). They recently announced support for OS X agents, which means you can now use Travis to build and test iOS applications. In this post I’ll show how I set up basic CI for an example iOS app using Travis, with the help of xctool. xctool is the ‘better xcodebuild’ which Facebook recently open-sourced. It allows you to not only build your app on the command line but also run your app’s unit tests from the command line, with the same capabilities that Xcode offers when you run the tests from the IDE proper.

To see a working example of the sort of thing we’ll be setting up, take a look at the github repo and the corresponding Travis page for my Run2Bart example app.

Getting started with Travis

How Travis works

Travis is driven entirely through github integration. You configure Travis with information on how to build your app by adding a .travis.yml file to the root of your git repo. Then you log on to Travis using your github credentials and have it configure a github post-commit hook for the github repo. After doing that github will tell Travis every time you’ve pushed code to your repo. Travis will respond by dutifully downloading the commit which was pushed and will then do whatever build you’ve configured it to do in that .travis.yml file.

An initial Travis setup

I’m not going to cover linking your github repo to Travis; their own docs explain this well. Once you have linked your repo the next step would be to add a .travis.yml file to the root of the repo. Here’s a basic setup similar to the one that I use to build my Run2Bart example app:

.travis.yml
---
  language: objective-c

  before_script: travis/before_script.sh
  script: travis/script.sh

First I’m telling Travis that this is an objective-c project. Next I tell Travis how I’d like it to do CI against this repo by giving it instructions on what scripts it should run in order to actually perform a build. I also give some extra instructions on what to do just prior to running a build. It’s quite common to put all the build steps inline right in the .travis.yml file, but I prefer to create bash scripts inside a travis directory in my git repo and then just refer to those scripts from my .travis.yml. This keeps the .yml file nice and small, and also makes it easy for me to test the travis build scripts locally.

We gave Travis a before_script in the .yml file above. This is intended to be used by the Travis agent to download tools needed as part of the build. Here’s what it looks like:

travis/before_script.sh
#!/bin/sh
set -e

brew update
brew install xctool

Very simple. We just use homebrew to install xctool on the build agent. All Travis build agents come with homebrew pre-installed, but sometimes the formulae aren’t up to date, so it’s best to run a brew update before attempting a brew install.

That’s all we need to do to prepare our agent for the build. Next let’s look at the build script itself:

travis/script.sh
#!/bin/sh
set -e

xctool -workspace MyWorkspace -scheme MyScheme build test

Again, this is really simple. We first do a basic sanity check by asking xctool to build our app, specifying a workspace and scheme. This just checks that we don’t have any compilation errors. Assuming that succeeds, xctool will then build and run the unit testing target for our app, launching the Simulator on the Travis agent if needed.

It’s that easy

And that’s all there is to it for a basic build-and-test setup. At this point you’re going to get very fast feedback from Travis as soon as anyone pushes code which either causes compilation errors or causes your unit tests to fail. From here you can easily build upon this basic CI set up. There are a lot of things you can do to expand this, some of which I’ve done in my example Run2Bart app.

You can have xctool generate pretty output by specifying a different reporter, and then archive that report as a build artifact using something like travis-artifacts.

You can add extra stages to your build such as Frank acceptance tests.

You could even have Travis distribute a build of your app to alpha users using something like TestFlight or HockeyApp. That’s definitely an advanced topic though - it would be very fiddly to achieve due to having to do code-signing of a device build on the Travis agent.

You don’t need to use Travis for this

And of course you can do all of the above on your own Jenkins (or TeamCity, or Go, or Bamboo) CI server rather than on Travis. In fact unless you’re building an open-source app or you’re a beta customer of Travis Pro then you’ll likely want to use a different CI technology. Regardless, the basic approach remains the same.