
Being Agile

Pete Hodgson's blurgh

Deploying to Heroku From CI - the Gory Details


In my previous post I discussed why you might want to deploy to Heroku as part of a CI build. I demonstrated how my heroku-headless gem makes it very easy to script such a deployment. In this post I’ll go into the details on how that gem does its work. I’ll talk about what the Heroku deployment tooling expects to be available, why that’s not necessarily going to be there in the context of a CI build environment, and how the gem helps resolve that.

Heroku’s deployment model

Heroku’s deployment model centers around pushing git commits to a special heroku git repo. When you want to deploy a new version of your application you push the corresponding git commit up to the special heroku repo. As a side effect of updating the remote repo, Heroku deploys a copy of the application as of that commit.

Of course Heroku won’t let just anyone deploy a new version of your application. It only allows a registered collaborator to push to an app’s repo. You can manage which Heroku users are collaborators of the app via the Heroku web interface, or via the heroku sharing set of commands.

But how does Heroku know which user is trying to push to an app’s heroku repo? It looks at the ssh key that git uses when it makes the push. Unless that ssh key is registered to one of the app’s collaborators, Heroku will reject the push.

A Heroku user usually registers their ssh key with Heroku using the heroku keys:add command. The average Heroku user only has to perform this procedure when they’re setting up a new dev machine. Once it’s done Heroku deploys are very low-friction since your registered ssh key is automatically used by git whenever you push. git push heroku is all you need to do. It’s easy to forget that you registered your ssh key with Heroku at one point.

What’s different for a headless CI deploy

Things can be a bit different when deploying from a CI agent. The CI agent’s user may not even have an ssh key generated, and if it does it is probably not associated with a Heroku user that has collaborator access to the Heroku app you want to deploy to.

One way to solve this would be to ensure that the CI agent user has a ssh key generated and to manually register that key for a Heroku user who has collaborator rights to the target app. This works, but it’s not ideal. The manual setup is tedious and error prone, and you have to do it for every agent in your CI system. You also have to make sure that the Heroku user which the CI agent is acting as is registered as a collaborator for every Heroku app that it might be deploying to. If you’re using a cloud-like CI system such as Travis then you might not even have access to the CI agent in order to generate and register an ssh key, and even if you did you have no control over which agent will be running your next build. With some systems you will be given an agent with a totally pristine environment for each build. In other words, you can’t always rely on manually pre-configuring an agent’s environment.

All of this means that it’s better to avoid the need for manual setup of pre-existing ssh keys. A better approach is to generate a disposable ssh key, register it with a Heroku user, do a git push using that key, and then remove the disposable key.

As luck would have it, Heroku exposes an API for adding and removing ssh keys for a user. When you use the Heroku API you pass a secret API key, which Heroku uses both to authenticate you and to figure out which user you are acting as. That allows the API to know which user’s keys you are managing.

This disposable key approach is more secure and doesn’t require the CI agent to have a previously configured environment. You could take a totally pristine box and use it to run a deploy without any other setup. Conversely, you can test a deploy script on a fully-configured developer workstation without your local environment affecting the deploy script and without the deploy affecting your environment.

Note that the disposable key approach still requires that you have previously set up a Heroku user who has collaborator access to the app you are deploying to. It also requires that your build scripts have access to that user’s secret Heroku API key. You need to be careful here - if that key gets into the wrong hands it could be used to run up a very large bill with Heroku. As I said in my previous post, you’ll want to use a feature in your CI system along the lines of Travis’s secure environment variables to protect access to that key. Most CI/CD systems provide similar functionality.

The steps for a headless deploy using disposable keys

So we have our basic approach laid out. Whenever we want to deploy our app we need our script to:

  • generate a new disposable ssh key
  • register that key with Heroku
  • use the key to deploy the app via a git push
  • unregister the key with Heroku
  • delete the key locally

Implementation details

I’ll now briefly describe how the heroku-headless gem does all that. If you want more details I encourage you to study the gem’s implementation. It’s really pretty simple - a handful of classes, about 200 lines of code in total.

Creating a local scratch directory

We use ruby’s tmpdir module to generate a temporary working directory which will contain our disposable ssh keys and some configuration files. After we’re done with the deploy we’ll delete this directory.

Generating a disposable ssh key

Next we’ll generate our disposable public/private key pair inside our new scratch directory. We use the ssh-keygen command which is available on pretty much any unix box: ssh-keygen -t rsa -N "" -C #{ssh_key_name} -f #{ssh_key_path}
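Combined, those two steps boil down to something like this minimal Ruby sketch (directory and key names here are illustrative, not the gem’s actual internals):

generate-disposable-key (sketch)
require 'tmpdir'

scratch_dir  = Dir.mktmpdir('heroku-headless')  # cleaned up after the deploy
ssh_key_name = 'disposable-ci-key'
ssh_key_path = File.join(scratch_dir, ssh_key_name)

# -N "" gives the key an empty passphrase; -C names the key so we can
# find and deregister it later; ssh-keygen writes the key pair to
# ssh_key_path and ssh_key_path.pub
system(%Q{ssh-keygen -t rsa -N "" -C #{ssh_key_name} -f #{ssh_key_path}})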

Registering the key with Heroku

The heroku-api gem is our friend here. We create an instance of the Heroku API with heroku = Heroku::API.new(). If you don’t explicitly pass in an API key the gem will use the value of the HEROKU_API_KEY environment variable, so you need to make sure that that environment variable is set correctly by your CI system just prior to running your deploy script. Alternatively you can explicitly pass in an API key to the constructor, but again you need to be careful you don’t expose this key.

Given all of that, we can register our disposable ssh key with that API key’s Heroku user by doing something like:

register-disposable-key
heroku = Heroku::API.new()
public_ssh_key = File.read(path_to_public_ssh_key)
heroku.post_key(public_ssh_key)

Note that we’re sending the public key to Heroku. Private ssh keys are never exposed.

Pushing to the heroku git remote using our disposable key

This is the fiddly bit. We need to have git push to heroku using that newly generated ssh key, but we don’t want to mess with any system ssh configuration which might be in place. Luckily git allows you to override the path to the underlying ssh executable it uses when connecting to a remote repo, via a GIT_SSH environment variable. We’ll use that to point git to a little wrapper script. This script calls through to the system’s standard ssh executable but adds a few command line arguments along the way. Those command line arguments will tell ssh to identify itself using our disposable key (as opposed to whatever may be setup in ~/.ssh/). We also add a few arguments which tell ssh to not ask for confirmation the first time we connect to the heroku host, and also to prevent ssh from recording the heroku host as a known host.

The wrapper script looks like this:

git_ssh_wrapper.sh
#!/bin/sh
exec ssh -o StrictHostKeyChecking=no -o CheckHostIP=no -o UserKnownHostsFile=/dev/null -i /path/to/disposable_ssh_key -- "$@"

All credit to this Stack Overflow question, on which that wrapper script is based.

Once we’ve generated that script and placed it in our scratch directory we can ask git to push to our app’s heroku repo using that custom ssh wrapper like so:

push-to-heroku
system( {'GIT_SSH'=>custom_git_ssh_path}, "git push git@heroku.com:#{app_name}.git HEAD:master" )

Note that in this example we’re pushing whatever HEAD currently points to, but we could push any arbitrary commit up to Heroku using this same command.

Deregistering the disposable ssh key

This one is easy: heroku.delete_key(ssh_key_name). The ssh_key_name we pass in should be the same key name we passed to ssh-keygen via the -C flag.

Cleanup

Lastly, we clean up after ourselves by deleting the local scratch directory.
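Putting all the pieces together, the whole deploy boils down to something like the following sketch. Note that generate_disposable_key and write_git_ssh_wrapper are hypothetical helpers standing in for the steps described above - see the gem’s source for the real implementation:

deploy-flow (sketch)
require 'tmpdir'
require 'heroku-api'

def headless_deploy(app_name)
  Dir.mktmpdir do |scratch_dir|                    # local scratch directory
    key_name, key_path = generate_disposable_key(scratch_dir) # hypothetical helper
    heroku = Heroku::API.new                       # picks up HEROKU_API_KEY
    heroku.post_key(File.read("#{key_path}.pub"))  # register the public key
    begin
      git_ssh = write_git_ssh_wrapper(scratch_dir, key_path)  # hypothetical helper
      system({'GIT_SSH' => git_ssh},
             "git push git@heroku.com:#{app_name}.git HEAD:master")
    ensure
      heroku.delete_key(key_name)                  # always deregister the key
    end
  end                                              # scratch dir deleted on block exit
end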

Fin

That’s it. It did take a fair amount of Internet research to figure all that out, but I should be clear that almost all of what I’ve described was lifted from other blog posts, Stack Overflow answers, etc. Hopefully by collating that info here I’ll help someone else travelling down a similar path. And again, if you don’t really care about the details and just want to get your app deployed via CI then just use the heroku-headless gem and move on!

Deploying to Heroku From CI


If you’re working on top of a modern web stack then the Heroku hosting platform is a compelling option for standing up an instance of your app with what is usually a trivial amount of effort. No need to provision EC2 servers, write chef recipes, or mess with deploy tools. Just run a couple of commands in the shell and then git push heroku.

Heroku as a staging environment

This makes Heroku a compelling choice for hosting a staging environment for your application. A staging environment contains the latest ‘good’ build of your app. It’s used for things like automated acceptance testing, manual QA, story sign-off, or internal demos.

Hooking up an automatic staging environment deploy to the end of a CI run is a nice way of ensuring that your staging environment always contains the freshest good build of your app.

Headless Heroku deploys are fiddly

Unfortunately Heroku’s tooling isn’t really optimized for deploying from a CI agent - sometimes referred to as headless deployment. Heroku is really geared towards deployment from a developer or operator workstation. It expects you to have an SSH key registered with Heroku before deploying, and to have a heroku remote configured in the local git repo you’re deploying from. That’s a very reasonable assumption for a developer working on an app, but it’s not so reasonable for a CI agent. A CI agent may well be building multiple different apps, and often clears out things like git repos between builds. In the extreme case there are hosted tools like Travis CI where each build of your app takes place on an essentially fresh box, with no ability to pre-configure things like SSH keys.

This isn’t an insurmountable problem. It is possible to deploy from a git commit to Heroku in these circumstances. It’s just a bit of a hassle to figure out. But luckily for you I’ve figured it out on your behalf, and even released a ruby gem which does the work for you.

gem install heroku-headless

This gem grew out of me setting up Travis to deploy to a Heroku app after each successful CI build. After getting it working I distilled the relevant magic into the heroku-headless gem.

Using it is as simple as:

travis-after-script
require 'heroku-headless'
HerokuHeadless::Deployer.deploy( 'your-app-name' )

This script will deploy the commit you currently have checked out in your local repo to the Heroku app you specify. The only other setup needed for that script is a HEROKU_API_KEY environment variable, where the user associated with that API key is registered as a collaborator on the Heroku app you’re deploying to. Obviously Heroku doesn’t let random people just deploy arbitrary code to an app.

If you register a script like the one above as an after-script in your Travis setup then that Heroku app will always be running the last successful build of your application. Very handy for acceptance tests, manual QA, or showcasing.

Keep that API key secret

That Heroku API key should be considered a very private secret unless you want someone running up a huge bill on your Heroku account. I would advise not checking that key into source control. Using something like Travis’s secure environment variables is a good way to get that secret injected into your build scripts without exposing it to prying eyes.

Bonus Feature: disposable apps

While building out the functionality in heroku-headless I also experimented with the creation of disposable apps. The concept there is that you might want to create an entirely new Heroku app, deploy to it, run some tests, and then delete the app entirely. I never ended up using this functionality, but it’s in the gem. To use it you’d do something like:

deploy-to-disposable-app
require 'heroku-headless/disposable_deployer'
HerokuHeadless::DisposableDeployer.new.go

UPDATE: Behind the curtain

I wrote a follow up post to this one which describes how the heroku-headless gem actually works. Check that out if you’re interested in the gory details.

Writing iOS Acceptance Tests Using Kiwi


In this post I’ll describe an experiment where rather than using Frank to write iOS acceptance tests I instead combined Kiwi with the low-level libraries that Frank uses internally. This allowed me to write acceptance tests in pure Objective-C which run in the app process itself, very similarly to the way KIF works.

What?

Before I start, let me be clear that I personally wouldn’t use this approach to writing acceptance tests. I much prefer using a higher-level language like ruby to write these kinds of tests. The test code is way less work and way more expressive, assuming you’re comfortable in ruby. And that’s why I wanted to try this experiment. I’ve spoken to quite a few iOS developers over time who are not comfortable writing tests in ruby. They are more comfortable in Objective-C than anything else, and would like to write their tests in the same language they use for their production code. Fair enough.

Why not KIF?

I suspect that the ability to write your tests in Objective-C is the main reason that some developers turn to KIF for their acceptance tests.

I have two problems with KIF though. First, the tool conflates three distinct activities - test organization, view selection, and simulating interactions. I think these activities are better served when broken out into distinct responsibilities. In the case of test organization, tools like Cedar and Kiwi have had a lot of uptake and Kiwi in particular is becoming a very popular choice. Alternatively you can use one of the more established tools like OCUnit. These tools handle things like organizing test code into test cases and test suites, running those tests, asserting on values, and reporting the output in a useful way. Why re-invent that wheel in KIF, when it’s the core competency of these other tools? When it comes to view selection and simulating interactions, because these two are intertwined in KIF you end up with really verbose code if you want to do something like find an element, check it’s visible, tap it, then check it went away.

The second concern I have with KIF is simply that it doesn’t seem to be under active development. I have also heard from a few people that it’s not really used much by teams within Square at this point.

So what’s the alternative?

I visited a couple of iOS development teams in Australia recently and this topic came up with both teams while chatting with them. It occurred to me that you could probably implement KIF-style tests pretty simply using the view selection and automation library which Frank uses, plus Kiwi for a test runner. I had a 15 hour flight back to San Francisco, and this seemed like a fun experiment to while away a couple of those hours.

The idea is simple really. Wire up Kiwi just like you would for Application Unit Tests, but rather than doing white-box testing on individual classes inside your app you instead drive your app’s UI by selecting views programmatically using Shelley (Frank’s view selection engine) and then simulating interactions with those views using PublicAutomation (the lightweight wrapper over Apple’s private UIAutomation framework that Frank also uses). Alternatively, after selecting views using Shelley you might just programmatically inspect the state of the views to confirm that the UI has responded appropriately to previous steps in your test.

How does it work?

To prove this concept out I wrote a really simple User Journey test which was exercising the same 2012 Olympics app I’ve used as an example in previous posts. Here’s what it looks like:

BasicUserJourney_Spec.m
#import "AcceptanceSpecHelper.h"
#import "EventsScreen.h"

SPEC_BEGIN(BasicUserJourney)

describe(@"User Journey", ^{

    beforeAll(^{
        sleepFor(0.5);
    });
    it(@"visits the event screen and views details on a couple of events", ^{
        EventsScreen *eventsScreen = [EventsScreen screen];
        [[theValue([eventsScreen currentlyOnCorrectTab]) should] beTrue];

        [eventsScreen selectEvent:@"beach volleyball"];
        [[theValue([eventsScreen currentlyViewingEventDetailsFor:@"Beach Volleyball"]) should] beTrue];

        [eventsScreen goBackToOverview];

        [eventsScreen selectEvent:@"canoe sprint"];
        [[theValue([eventsScreen currentlyViewingEventDetailsFor:@"Canoe Sprint"]) should] beTrue];

    });
});


SPEC_END

I’m using the Page Object pattern to encapsulate the details of how each screen is automated. In this case the EventsScreen class is playing that Page Object role.

The aim here is that you can read the high-level flow test above and quite easily get the gist of what it’s testing. Now let’s dive into the details and see how the magic happens inside EventsScreen:

EventsScreen.m
#import "EventsScreen.h"

@interface EventsScreen()
- (NSArray *)viewsViaSelector:(NSString *)selector;
- (UIView *)viewViaSelector:(NSString *)selector;
- (void) tapViewViaSelector:(NSString *)selector;
@end

@implementation EventsScreen

+ (EventsScreen *) screen{
    EventsScreen *screen = [[[EventsScreen alloc] init] autorelease];
    [screen visit];
    return screen;
}

- (void) visit{
    [self tapViewViaSelector:@"view:'UITabBarButton' marked:'Events'"];
}

- (BOOL) currentlyOnCorrectTab{
    return [self viewViaSelector:@"view:'UITabBarButton' marked:'Events' view:'UITabBarSelectionIndicatorView'"] != nil;
}

- (void) selectEvent:(NSString *)eventName{
    NSString *viewSelector = [NSString stringWithFormat:@"view:'UIScrollView' button marked:'%@'",eventName];
    [self tapViewViaSelector:viewSelector];
}

- (void) goBackToOverview{
    [self tapViewViaSelector:@"view:'UINavigationButton' marked:'Back'"];
}

- (BOOL) currentlyViewingEventDetails{
    return [self viewViaSelector:@"label marked:'Key Facts'"] && [self viewViaSelector:@"label marked:'The basics'"];
}

- (BOOL) currentlyViewingEventDetailsFor:(NSString *)eventName{
    return [self currentlyViewingEventDetails] && [self viewViaSelector:[NSString stringWithFormat:@"label marked:'%@'",eventName]];
}


#pragma mark internals


- (NSArray *)viewsViaSelector:(NSString *)selector{
    return [[Shelley withSelectorString:selector] selectFrom:[[UIApplication sharedApplication] keyWindow]];
}

- (UIView *)viewViaSelector:(NSString *)selector{
    NSArray *views = [self viewsViaSelector:selector];
    if( [views count] == 0 )
        return nil;
    else
        return [views objectAtIndex:0];
}

- (void) tapViewViaSelector:(NSString *)viewSelector{
    [UIAutomationBridge tapView:[self viewViaSelector:viewSelector]];
    sleepFor(0.1); //ugh
}

@end

As you can see, most of EventsScreen’s methods are only a line or two long. They generally either tap on a view or check that a view exists in the hierarchy. They use Shelley view selectors, which is a big part of keeping the code declarative and concise. There are also a few internal helper methods inside EventsScreen which would probably get moved out into a shared location pretty soon, maybe a BaseScreen class from which concrete Page Object classes could be derived.

Proof of Concept satisfied

And that’s pretty much all that was needed. There was the usual tedious Xcode plumbing to get Kiwi and everything else working together, plus a few other bits and pieces, but really there wasn’t much to it.

A few people have asked to see all the moving parts of this experiment, so I’ve pushed the changes to this branch on github.

What else would you need?

For any large test suite you’d want a lot more helpers. Based on my experience writing acceptance tests with Frank you usually need things like:

  • wait for something to happen, with timeout
  • scroll things into view so you can interact with them
  • support for higher-level generic questions (is this view currently visible, is anything animating, etc).

Eventually you’d probably also evolve a little internal DSL that lets you implement your Page Object classes in a more expressive way.

What do you think?

I’m very interested to see if this approach is appealing to people. If you’re interested - or even better if you take this and run with it - then please let me know.

Cookie-based Feature Flag Overrides


Introduction

If you’re practicing Continuous Delivery then you’re probably using Feature Flags to hide half-baked features which are being shipped into production as latent code. It’s useful to allow individual users to manually override those feature flags so that they can preview these latent features before they are released to everyone. The people who want to do this are typically testers, product stakeholders, and external beta testers.

This is a similar concept to the Canary Releasing approach which organizations like Facebook use to trial new features. The difference is that with manual overrides each individual is opting in for features themselves, as opposed to being arbitrarily placed into the pool of users assigned as canaries.

A lightweight flag override

In the past I’ve blogged about manually setting feature flags using query strings. On my current project we took an alternate approach using cookies instead. It was still a simple lightweight implementation, but rather than specifying an override using query params we instead used a permanent cookie.

General information about the current set of feature flags in play within our app is stored inside a per-environment configuration file which ships with the rest of our server-side code. This configuration file lists which feature flags are available, along with a brief description of the flag and a default state (on or off).

However in addition to that default server-side state we also allow flags to be overridden via a feature_flags cookie. This cookie is a simple JSON document describing which flags should be overridden. Any flag states listed in that cookie’s JSON payload will override the default state specified in the environment’s feature flag configuration.

An example

Let’s say our production environment’s feature flag config looks like this:

feature_flags.yaml
enable_chat_feature:
  description: Expose our new experimental chat interface (WIP)
  default: false

use_new_email_service:
  description: Send registration emails using our new email service (still under pilot)
  default: false

By default anyone accessing our application will not see the new chat interface which we’re still working on, because the default state for that feature is off. Likewise when someone signs up for a new account they will be sent a welcome email using our old creaky email service, rather than the fancy new one that our buddies in a backend service team have been working on.

Now let’s say I’m a QA and I want to test whether that email service is ready for prime time. All I need to do is set my feature_flags cookie in my browser to something like:

feature_flags cookie
{
 "use_new_email_service": true
}

Once that cookie is set in my browser, whenever I register a new account using that browser the application will notice the override, interpret the use_new_email_service feature as on, and send my welcome email using the new email service.
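Under the hood, the server-side resolution boils down to something like this minimal sketch (a hypothetical helper assuming a Rails-like cookies API, not our actual implementation):

flag-resolution (sketch)
require 'json'

# Overrides from the feature_flags cookie win; anything not overridden
# falls back to the default from the environment's flag configuration.
def flag_enabled?(flag_name, defaults, feature_flags_cookie)
  overrides = feature_flags_cookie ? JSON.parse(feature_flags_cookie) : {}
  overrides.fetch(flag_name) { defaults.fetch(flag_name, false) }
end

flag_enabled?('use_new_email_service',
              { 'use_new_email_service' => false },
              '{"use_new_email_service": true}')   # => true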

Simple usability improvements

Obviously it’s not ideal to have users manually editing raw JSON within a cookie. In our app we added a simple admin-restricted page for viewing and modifying these overrides. This page listed all known feature flags in the environment and let users control which flags were manually overridden via radio buttons with three states - On, Off, or Default. Changing those buttons simply modified which flag overrides were stored in the cookie JSON.
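As an illustration, the cookie payload can be derived from those tri-state choices with a few lines (again a hypothetical sketch, not our actual admin code):

overrides-from-admin-page (sketch)
require 'json'

# :on and :off become explicit overrides; :default means 'no override',
# so the flag is simply omitted from the cookie payload.
def overrides_cookie_value(choices)
  overrides = {}
  choices.each do |flag, state|
    overrides[flag] = (state == :on) unless state == :default
  end
  JSON.generate(overrides)
end

overrides_cookie_value('enable_chat_feature' => :default,
                       'use_new_email_service' => :on)
# => '{"use_new_email_service":true}'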

This page also highlighted stale overrides - flags which no longer existed in the environment but were still being ‘overridden’ in the cookie. This is a fairly common occurrence if you’re retiring your feature flags regularly (which you absolutely should be doing). These stale flags have no effect on the application; listing them is simply a way of prompting the user that the overrides are stale and should probably be removed.

Limitations

This approach is pretty simplistic and clearly there are some limitations. Firstly, the overrides are stored client-side. That means you can’t restrict which flags are overridden, you can’t audit which flags are overridden, and you can’t go in as an administrator and modify the overrides. There are obviously also security concerns to be aware of when you’re allowing client-side state to impact what happens server-side.

Another issue is that the overrides are per-browser rather than per-login. That means that users have to explicitly configure the overrides for each browser (and mobile device) they access your app with, and remember to keep them in sync. You also need to remember that you’re setting the flags for all users that you log in as via that browser, not just the current user. This is probably counter to a lot of people’s intuition. However the fact that the overrides aren’t tied to a specific user can sometimes be helpful for testing - the registration email example above was actually a good example of that.

Wrapping up

All in all this is a nice simple approach to get started along the path of configurable feature flags. An organization that really embraces this idea will probably outgrow this basic implementation fairly quickly, but I believe it is a simple way to get started and show the value of feature overrides without a bunch of effort. If you’re using feature flagging but not using overrides currently then I encourage you to consider this approach too.

Frank With CocoaPods


A Frank field trip

Today I’m visiting 955 Dreams (the guys who make Band of the Day, amongst other things) to spend some time with my friend (and their CTO) Chris. We had a fun time figuring out how to get Frank to play nicely with CocoaPods. It wasn’t that tricky, and I’m going to document what we found here to hopefully make it easier for other CocoaPods users.

Problem the first

Frank had trouble with CocoaPods for two reasons. Firstly, when frankifying the app we needed to frankify the app’s Xcode project, but to build the app we needed to point frank build at the main Xcode workspace, so that CocoaPods could work its magic during the build. This was simply a case of passing the appropriate --workspace and --scheme arguments to frank build.

Problem the second

CocoaPods uses a Pods.xcconfig which overrides the OTHER_LDFLAGS linker flag setting (amongst other things). Part of the work that frank setup does is to include some frank-specific settings in the project’s linker flags. Since CocoaPods overrides OTHER_LDFLAGS the frank-specific additions are lost, meaning that the Frank server doesn’t get linked into the app. To fix this we created a separate .xcconfig file that included both the CocoaPods and the Frank .xcconfig files:

frank_and_pods.xcconfig
#include "Pods.xcconfig"
#include "Foo/Frank/frankify.xcconfig"

However to get xcodebuild to use that file we had to abandon frank build (which is really just a thin wrapper around xcodebuild) and instead just invoke xcodebuild directly, passing in a -xcconfig argument. That worked and solved the problem but I think there’s an alternative approach that would let you still use frank build. Adding a #include "../Pods.xcconfig" line to the top of the frankify.xcconfig file should achieve the same ends.

The happy ending

Either way, after making those changes we were able to get a Frankified app up and running and inspectable within Symbiote. I told Chris that I think long term it usually ends up being better to create a Frankified app in CI by creating a custom xcodebuild setup. We’ve established today what’s needed to do that with an app which uses CocoaPods.

Towards Frank 1.0


One of the many geeky hats I wear is that of maintainer for Frank, an open-source tool for automated testing of native iOS applications. Frank has been around for over 2 years now, which is actually quite a long time in the mobile space. It has evolved a fair amount in that time, but has had a surprisingly small amount of change to the core architecture. I’m actually quite proud of that. I think that the core concepts have been proven out enough that Frank is ready for a 1.0 release which cleans up some long-standing cruft and solidifies at least parts of its client/server API.

The main motivator for the Big One Oh is that I want to remove some old unmaintained libraries which Frank currently depends upon. This can be done in a mostly backwards-compatible way, but by doing this as part of a major version bump I can make some reasonably aggressive improvements without letting down existing users who would rightly expect backwards compatibility from a point release.

Adios UISpec

The main dependency I’d like to remove is UISpec. We have used it in the past for both view selection and for touch synthesis. At the time I started building Frank it was a great option, but it has since become unmaintained, and surpassed by other, newer tools.

About a year ago I wrote a new view selection engine called Shelley from scratch with the goal of replacing our use of UISpec, and a few months back we started using KIF for touch synthesis. So at this point new users of Frank aren’t really using UISpec for anything, and they’re benefiting from faster view selection and more robust touch synthesis. However I kept UISpec around for backwards compatibility. With a 1.0 release I can finally cut the cord and fully remove UISpec.

A big issue that several users have had with UISpec is its GPL license. This license was necessarily inherited by the rest of our Frank server code, which made some users nervous, and prevented some folks from using Frank in a commercial setting. By removing UISpec fully from the Frank repo we no longer need to have a GPL license for any part of Frank, which should make people more comfortable. We’ll be moving to an Apache 2.0 license for the server parts of Frank, the same license which other Frank components have always had.

Adios KIF

The other big library dependency I’ll be leaving behind is KIF. I really like KIF, and in general there aren’t many reasons to not use it. It’s a very nicely written library, and a good choice for Objective-C developers who are looking for a lower-level testing tool.

The motivation for leaving KIF is the opportunity to switch to using Apple’s own UIAutomation private framework. I recently opened up a library called PublicAutomation which exposes this private framework for use by tools like Frank, and I’m excited to be able to use the same super-solid touch-synthesis code that Apple themselves use.

As an example of why I’d like Frank to switch, it appears that with PublicAutomation you can simulate device rotation on a physical device without having to physically rotate the hardware, something which I believe was impossible before. It is also possible to extend PublicAutomation with whatever complex multi-touch gesture simulation your tests might need to perform. Finally, I can be confident that when Apple release new iOS SDKs they will do a lot of work to ensure that UIAutomation remains compatible and fully functional.

Timeline

I already have a branch of the Frank repo which contains the above changes, and it appears to be working well with the apps I’ve tested it on. I hope to release that soon (within the next couple of weeks) as a 1.0.pre1 version of the frank-cucumber gem. At that point I’ll be asking folks in the Frank community to try it out and give me feedback on what works and what doesn’t. I expect a few minor teething troubles with tests that use older, more obscure UISpec helper methods that Frank will have lost, but there shouldn’t be any issues that take more than a few lines of code to solve.

Introducing PublicAutomation


I’m excited to announce PublicAutomation, a friendly wrapper around Apple’s private UIAutomation framework. PublicAutomation allows you to use Apple’s own private framework to simulate user interactions (taps, swipes, keyboard typing) via a simple Objective-C API.

Previous approaches to this problem have relied on reverse-engineering and monkey-patching iOS’s touch events system. PublicAutomation takes a different approach. It links in the low-level API of the private framework which Apple itself uses for its own UIAutomation tooling. PublicAutomation provides the stability of Apple’s proprietary UIAutomation tool with the flexibility of an open source library maintained by the community.

I have already had great success in my experiments using PublicAutomation as the user-simulation backend for Frank (the automated acceptance testing tool I maintain), replacing our previous dependency on KIF. KIF is a great tool but Frank was only using it for its user-simulation features, which was always a bit of a weird fit. I’m confident we can now replace our KIF dependency with a smaller more focused user-simulation library in the form of PublicAutomation.

Some history

As I said above, Frank currently uses KIF to do the low-level work of simulating user interactions. KIF achieves this with some clever monkey-patching of iOS’s event system. This works remarkably well, but there are some issues with simulating certain interactions (e.g. tapping a deletion confirmation button in a table view cell). It also has some strange side-effects at times - an app which has KIF linked in doesn’t react to tap events from an actual user in the same way.

Recently I spent a bit of time investigating the feasibility of using Apple’s private UIAutomation framework outside of their official UIAutomation tooling. I blogged about that initial research in a previous post. The findings were both good and bad. The good news was that the private framework can be linked in and used both in the simulator and on the device. The bad news was that only the low-level UIASyntheticEvents API works reliably. The high-level API that UIAutomation exposes via JavaScript does not appear to be usable programmatically.

My goal in investigating the UIAutomation private framework was to replace KIF’s event monkey-patching. I’d also hoped to get some nice new high-level API (e.g. send a flick to this view), but that was really more of a nice-to-have than an absolute requirement. The main outcome of this research is the discovery that we can expose and use the UIAutomation private framework’s low level API.

Bringing in KIF’s keyboard typing

It turns out that this low level UIASyntheticEvents API has a drop-in replacement for almost every feature of KIF we use in Frank. The only thing missing was keyboard typing. I extracted KIF’s keyboard typing code into a separate class inside of KIF and sent them a pull request. Then I took that KIFTypist class and ported it into PublicAutomation. At this point PublicAutomation has everything Frank needs. It also appears not to have the issues we’ve seen when monkey-patching the event system. For example we can now tap on deletion confirmation buttons.

The future

I’m working on solidifying Frank’s usage of PublicAutomation. There will probably be an official switch over to using it as part of a 1.0 release of Frank (along with removing our UISpec dependency, but that’s a different story).

I’m also hoping that other non-UIAutomation tools can take advantage of it - Calabash for example. My hope is that PublicAutomation can become a standard shared library for user simulation in iOS. To achieve that it does need some extra work. Right now it only supports single-finger taps and swipes. Extending support to more complex multi-touch gestures should be trivial.

It should also be trivial to add support for features which have previously not been easily accessible to non-UIAutomation tools - for example simulating the home button being pressed, the screen being locked, etc. As far as I can tell everything exposed by the UIASyntheticEvents class is up for grabs. An exciting prospect!

Marker Branches in Git


In my last post I talked about setting up a basic deployment pipeline using marker branches in git to keep track of what was where.

In this post I want to go into a little more detail describing how these marker branches work. To do that I’ll walk through a simple example showing the state of a git repo as code moves through being committed, being pushed to master, deployed to pre-prod, and finally promoted to prod.

A sample git branch progression

In these diagrams the rounded boxes are git commits, the rectangles are git branches. The ‘WD’ rounded box represents uncommitted local changes in the current working directory.

Many thanks to Scott Chacon, git teacher extraordinaire, who very generously shares an OmniGraffle file containing all the really nice git diagrams he uses in his books and presentations. I’ve shamelessly used that as the basis of these diagrams.

starting state

At the start of this scenario we have our master branch pointing at the C5 commit. We also have a couple of marker branches, pre-prod and prod. We’ll talk more about these guys momentarily. Finally we have some local code changes inside our working directory. These are the changes which we are going to follow as they travel the path to production.

changes checked in

In this next diagram you can see I’ve now checked in the local changes I had in my working directory to my master branch as C6. master is now pointing to the newly created C6 commit but nothing else has changed.

A release candidate is born

At this point I run some acceptance tests, and decide that what I currently have in master is a viable release candidate. I want to push it to pre-prod to start more exhaustive testing of this candidate.

To do that I run a deployment script which pushes the code revision which master is currently pointing to into the pre-prod environment. If the deployment succeeds the script will also update the pre-prod marker branch to point to the code revision which is now deployed to pre-prod. That’s the essence of the marker branch concept. It’s a way to indicate which revision of the codebase is in which environment.
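Sketched in Ruby, such a deployment script might look like this (deploy_to.sh is a hypothetical deploy command; the git branch manipulation is the interesting part):

push-to-pre-prod (sketch)
# Deploy the commit that master currently points at, then move the
# pre-prod marker branch to record exactly what was deployed.
sha = `git rev-parse master`.strip
system("./deploy_to.sh pre-prod #{sha}") or abort "deploy failed"
system("git branch -f pre-prod #{sha}")  # force-move the marker branch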

after deploying to pre-prod

Here’s what our git repo looks like after that deploy to pre-prod. The pre-prod branch has been updated to point to C6, since that’s the commit which my script just successfully deployed to the pre-prod environment. Master also continues to point to C6, because there haven’t been any other checkins to master since I decided to deploy to pre-prod.

Pre-prod signoff

Now I will do some more exhaustive testing in my pre-prod environment. Probably some exploratory QA testing, and probably some sign-off with my product and design friends as well. Note that there’s a very easy way to see exactly what is in pre-prod that has changed since our last release to prod. We have a prod marker branch which indicates which code revision is currently in production, and a pre-prod marker branch which shows what code revision the current pre-prod release candidate is from. If anyone needs to know exactly what changes are involved in this release candidate we can use standard git diffing and logging tools to find out.
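Because both environments are tracked as branches, answering that question is plain git (shown here as Ruby one-liners to match the other sketches):

whats-in-the-candidate (sketch)
# commits that are in pre-prod but not yet in prod
puts `git log --oneline prod..pre-prod`
# and the corresponding code diff
puts `git diff prod pre-prod`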

Release candidates don’t stop other work

While we’re verifying our release candidate other development work can continue to happen, with more commits to master.

meanwhile, work continues…

Here we see that the master branch has continued to move forward with new commits C7 and C8. I included this in the scenario to highlight the benefits of the pre-prod marker branch. We don’t have to stop forward development while we verify that a specific code revision is good for release. We also don’t need to create a true branch in the repo. We simply use a marker branch to make a note of what revision is currently in pre-prod while allowing unrelated development to move forward.

Promote to production

At this point our friends in QA and Product have given us a happy thumbs up and agreed that what’s in pre-prod is ready for release to production. We’ll now run a deployment script which takes the code revision pointed to by the pre-prod marker branch and promotes (i.e. re-deploys) that code revision to the production environment.

released to production

Here’s the state of the git repo after a successful promotion from pre-prod to prod. After some smoke tests against the production environment have passed, the script updates the prod marker branch to reflect the new reality - the current code in production is the code at commit C6 in the repo.

Conclusion

I’ve shown how marker branches can act as a simple way to track which version of your application is live in which environment. I’ve also shown that you can use marker branches to enforce lightweight process constraints - for example you can’t deploy an arbitrary code revision to prod, it has to be the code revision that’s currently in pre-prod.

Marker branches are not a substitute for a real grown-up build pipeline with build artifacts and an associated artifact repository. However for a really simple system (e.g. deploying a blog) marker branches can make sense.

The lightweight constraints can also potentially work as a way to manage how code changes enter CI when working in a large team of developers. For example you could only allow developers to check in code on top of a passed-ci-smoke marker branch. This would prevent a developer from accidentally checking in on top of code which has not yet gone through a CI smoke test.

Octopress Deployment Pipeline


I spent a fun evening recently setting up a deployment pipeline for this blog. I’d like to share some details on what I set up and why.

The motivation was that I wanted some way to publish draft blog entries for other people to review, but I didn’t want these drafts to show up on my public site. I played with Octopress’s published: false option, but it really didn’t give me what I needed. Then I saw someone commenting that the ideal would be to have a preview version of the entire site available at a separate url. A pre-production environment, essentially. Hmm, I thought. Every web project I work on has one of these. It’s used to showcase the version of the system which is going to be released to production. That’s what I need - why don’t I just set that up for my blog?

Delivery pipelines

When using a pre-prod environment to showcase it’s important that what goes to prod is exactly what was showcased on pre-prod. A delivery pipeline helps ensure that. You could think of it as Continuous Integration on steroids. Every time code is committed it is built, unit tested, and then packaged into a build artifact. That build artifact (or just ‘build’) then moves through a pipeline which potentially ends in production. The build is often initially deployed to a dev/integration environment. Assuming it passes muster it may then be promoted to a QA environment, and perhaps then promoted again to pre-prod. Promoting a build means deploying it to the next environment in the pipeline. Finally that build may go through some sort of manual sign-off process in pre-prod and then finally be promoted to production.

The key principle here is that of a well-defined build artifact which is moving through these different environments. Jez Humble talks about this pipeline as a sort of obstacle course, where you gain confidence in the quality of the build artifact as it moves through your pipeline, passing stricter and stricter quality hurdles at each stage. This delivery pipeline is a core part of a Continuous Delivery system.

CD for a blog?! Srsly?

Now clearly what I’ve just described is a little bit over the top for this lowly blog. I realize that. But setting up a sort of lightweight pipeline was a fun and surprisingly useful project. It helped clarify some CD concepts for me (the best learning is by doing) and I do actually use the pipeline. In fact I made use of it while writing this very post!

Pre-production environment

This blog is powered by Octopress, and it’s published as a set of static files which are hosted by Amazon’s S3 service. Happily this means that creating a ‘pre-prod environment’ was as simple as creating a new S3 bucket and wiring it up to a subdomain via a CNAME entry.

The approach

I didn’t want to go too overboard, so rather than tracking a physical build artifact I opted to instead pass around git commits as my ‘artifact’. Now this is actually a bad idea in a real CD system. You would have to repeatedly re-build and re-package your code at each stage in the pipeline, and you risk differences in how that build process occurs at those different stages. That said, for my purposes just tracking commits in version control will work well enough.

My goal was to be able to deploy from any git commit to my pre-prod environment using a simple script. I’d then be able to ‘showcase’ my changes by pointing my browser at the pre-prod site. Assuming everything passes muster I could then run another script to promote whatever is in pre-prod to prod. Note that I allowed no way for me to deploy an arbitrary (and thus un-verified) commit to production. Anything I want to push to prod has to move through my pre-prod environment first.

Tracking what’s where

I track what is in pre-prod and prod using git marker branches. After successfully deploying a commit to an environment the corresponding marker branch is updated to point to that commit. That way my scripts can always know which commit is deployed to each environment just by checking with git.

For concrete details on how I do the git branch manipulation you can take a look at the push_preprod and promote_preprod_to_prod scripts themselves.

Deploying a commit

The steps I use to deploy an arbitrary commit of an octopress blog to S3 are:

  • extract a snapshot of that commit to a temporary directory using git archive
  • run the rake task which compiles the Octopress source into the static site files
  • push the static files up to an S3 bucket using s3cmd

I encapsulated this in a Deployer class which is bundled with my blog’s source here.
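A condensed sketch of those steps (paths and the bucket name here are illustrative - see the actual Deployer class for the real thing):

deployer (sketch)
require 'tmpdir'

def deploy(commit, bucket)
  Dir.mktmpdir do |dir|
    # snapshot the commit into a scratch directory
    system("git archive #{commit} | tar -x -C #{dir}") or abort "archive failed"
    Dir.chdir(dir) do
      # compile the Octopress source into the static site files
      system("bundle exec rake generate") or abort "generate failed"
      # push the static files up to the environment's S3 bucket
      system("s3cmd sync --delete-removed public/ s3://#{bucket}/") or abort "sync failed"
    end
  end
end

deploy(`git rev-parse pre-prod`.strip, 'preprod.example.com')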

Done

That’s pretty much it. It’s a simple lightweight system that gives me just enough release management with very little overhead. I am able to deploy any arbitrary version of my blog to a pre-prod environment which is basically an exact replica of prod. I can then promote what’s in pre-prod to prod with a single command. I was pleasantly surprised at how easy this was to accomplish. It’s literally just an evening’s work to set up a simple delivery pipeline.

TODO

Ideally I’d like to have my pre-prod environment use a slightly different Octopress configuration than my prod environment. For example I’d like to turn off disqus commenting in pre-prod since I don’t want peoples comments to be lost. I’d also like to add a little banner so people know they’re viewing a preview copy of my blog. I’m not quite sure at the moment on the best way to approach this, so I’m leaving it be for now.

Running Frank as Part of iOS CI


Frank is a tool that allows you to run automated acceptance tests against your native iOS application. A major reason for creating automated acceptance tests is so that you can run them as part of your Continuous Integration (CI) process. Doing this enables a rapid feedback loop where a developer checking in code is informed very quickly if that change caused a defect in your app.

In this post I’ll show how to configure a basic CI setup using Jenkins which will build your app and run Frank tests against it every time you check in code. CI for iOS projects doesn’t seem to be a common practice. This is a shame, because CI can bring just as many benefits for iOS applications as for other technologies. I suspect part of the reason CI is less popular is that Apple doesn’t make it particularly easy. It’s not a trivial task to automate things like building your application and running unit tests. Things are moving in the right direction though, with both Apple and open-source developers making it simpler to integrate a dev toolchain into a CI setup.

Our plan of attack

I needed a simple app to demonstrate a CI setup. I’ll be using the same open source ‘2012 Olympics’ app that I’ve used as a pedagogical example in previous posts. To keep the CI setup distinct from the application I created a small master repo which contains just the CI setup for the Olympics app, along with including the app source code via a git submodule. I’ve also had to fork the Olympics app because I needed to set it up for Frank testing, as well as make small changes to the app’s project settings which allow CI to be easily set up. When setting up your own app for CI you’d likely already have these changes in place.

So, let’s walk through what’s involved in getting a basic CI set up for our iOS application. Our overall strategy will be:

  • set up a build script which can build our app from the command line
  • set up CI so that we can run that build whenever we check in code
  • add Frank testing to our build script
  • set up CI to enable Frank tests

Let’s get started by creating a build script.

Scripting the build

Before we start setting up CI itself we need to automate the build process. This will involve stepping outside the safe environs of Xcode and entering the powerful and intimidating world of the command line. Have no fear, it’s not as scary as it sounds.

We’re going to use a ruby tool called Rake to create our build script. Rake is similar to tools like Make and Ant - it makes it easy for us to succinctly express the different tasks we want our build script to perform, and allows us to chain those tasks together. We’ll also be using a ruby gem called xcodebuild-rb from Luke Redpath (one of the most industrious open-source developers I know of). This gem makes it trivially easy to drive xcodebuild (the command line interface to Xcode) from inside a Rakefile.

Before we get started on the build script itself we need to create a Gemfile which declares our ruby dependencies. This file can be used by another tool called Bundler to ensure that developer machines and CI systems have everything they need.

Gemfile
source "https://rubygems.org"

gem "rake"
gem "xcodebuild-rb", "~> 0.3.0"

Those are all the dependencies we have for now. If you run bundle install, Bundler should set up those gems. Next we’ll create our initial Rakefile - the file which defines our app’s build tasks:

Rakefile
require 'rubygems'
require 'xcodebuild'

namespace :xcode do
  XcodeBuild::Tasks::BuildTask.new :debug_simulator do |t|
    t.invoke_from_within = './app'
    t.configuration = "Debug"
    t.sdk = "iphonesimulator"
    t.formatter = XcodeBuild::Formatters::ProgressFormatter.new
  end
end

task :default => ["xcode:debug_simulator:cleanbuild"]

This Rakefile does a few things. First it loads the libraries we need. Then it defines an xcode namespace (a namespace is just a way of logically grouping a set of tasks). Inside that xcode namespace it uses an xcodebuild-rb helper to create a set of Rake tasks for automating a debug iphone simulator build of the project contained inside the app directory. Finally a default task is defined. This task doesn’t do anything itself, but declares a dependency on the xcode:debug_simulator:cleanbuild task. That means that whenever the default task is run it will run that dependent task, causing a clean build of the debug simulator version of the app to be generated.

If you were to try that out now by running rake from the command line you should see xcodebuild creating a clean build of the app. You could also run rake -T to get a list of all interesting rake tasks. If you did so you’d notice that xcodebuild-rb has created a few different Rake tasks, not just for building the app but also tasks for cleaning build output and archiving the build. For the purposes of this blog post we’ll just be using the cleanbuild task.

At this point we have an automated way to generate a clean build of the application. Now we want to make sure that our build script leaves that built application in a common ‘artifact’ directory so that our CI system can archive it. There’s no point building the app if you don’t save it for use later on. I’ll follow a convention of putting everything which I want my CI system to save inside a ‘ci_artifacts’ directory. I add the following to my Rakefile:

Rakefile additions
namespace :ci do
  def move_into_artifacts( src )
    FileUtils.mkdir_p( 'ci_artifacts' )
    FileUtils.mv( src, "ci_artifacts/" )
  end

  task :clear_artifacts do
    FileUtils.rm_rf( 'ci_artifacts' )
  end

  task :build => ["xcode:debug_simulator:cleanbuild"] do
    move_into_artifacts( Dir.glob("app/build/Debug-iphonesimulator/*.app") )
  end
end

task :ci => ["ci:clear_artifacts","ci:build"]

Here I’ve created a ci namespace. Inside that I’ve added a clear_artifacts task and a build task. In addition I’ve also created a ci task in the root namespace. That task depends on the clear_artifacts and build tasks, meaning that whenever I run rake ci Rake will run ci:clear_artifacts and then ci:build.

ci:clear_artifacts simply deletes any existing ci_artifact directory. ci:build depends on the existing xcode build task to actually create a build of the app, and then it copies the built app into the ci_artifacts directory, creating the directory if necessary. I didn’t want to hard-code the app name into my Rakefile so I cheated a bit and used a glob to select any directory with a .app extension.

If I now run rake ci I should end up with a freshly-built copy of the application in a ci_artifacts directory.

Setting up Jenkins

Now we have an automated build script we need to get our CI system set up. I’m going to use Jenkins in this post because it’s probably the most commonly used CI server. Pretty much the exact same approach would be used for other tools such as TeamCity or Go.

I installed a sandbox copy of Jenkins on my laptop with a simple brew install jenkins, courtesy of homebrew. If you’re a developer using a mac then I highly recommend homebrew. I followed the instructions provided with the homebrew recipe to launch the Jenkins server and then went to http://localhost:8080/.

In the Jenkins UI I created a new ‘free-style’ job and configured it to point to my main git repo. Next I needed to tell Jenkins how to build our app. A good practice is to keep as much of your configuration as possible inside version control. As part of that I usually create a simple CI script inside the root directory of my app, and then have the CI system call that script. In this example that script is called go.sh and lives in the root of my main repo, under source control like everything else. The only thing I need to configure in Jenkins itself is a single line ‘Execute Shell’ build step which calls go.sh. Another nice benefit of this approach is that you can test tweaks you’re making to your CI setup by calling ./go.sh directly on your dev box, rather than having to kick off a new CI build.

Here’s what my initial go.sh looks like:

go.sh
#!/bin/sh
bundle install --deployment
bundle exec rake ci

Pretty simple. It uses bundler to make sure all my ruby dependencies are installed and then runs the ci rake task.

The last thing I need to do is tell Jenkins to archive all the artifacts it finds inside the ci_artifacts directory by checking the ‘Archive the artifacts’ checkbox and then specifying ci_artifacts/**/* as the files to archive.

That’s it, we’re done with our Jenkins setup. If all has been done correctly, when you kick off that Jenkins job it should build the app and save the resulting 2012 Olympics.app inside Jenkins’ Build Artifacts for that build.

Setting up Frank tests

We now have a very basic CI setup for our iOS app. Next I’ll describe how to integrate Frank into this CI system. I’m going to assume that your app itself has already been set up for Frank. If not, check out a previous post of mine for all the details. It’s a painless process.

First we need to declare our dependency on the frank-cucumber gem, which we’ll use to actually run our Frank tests. We do that by updating our Gemfile:

Gemfile
source "https://rubygems.org"

gem "rake"
gem "xcodebuild-rb", "~> 0.3.0"

gem "frank-cucumber", "~> 0.9.4"

The next step is to create a Rake task which will generate a Frankified build of your app. I’ll add that to the ci namespace in my Rakefile as follows:

Rakefile additions
namespace :ci do

  # ... existing ci tasks don't change 

  task :frank_build do
    sh '(cd app && frank build)'
    move_into_artifacts( "app/Frank/frankified_build/Frankified.app" )
  end
end

This task shells out to the frank build command, and then copies the app bundle that it builds into our ci_artifacts directory.

Now that we have a Frankified build we want to run Frank tests against it. Our Frank tests have been written using Cucumber, which happily comes with great Rake integration. We just need to use that to create a rake task which runs our cucumber features against our Frankified build:

Rakefile additions
# ... near the top of our Rakefile
require 'cucumber/rake/task'

HERE = File.expand_path( '..',__FILE__ )
ENV['APP_BUNDLE_PATH'] = File.join( HERE, 'ci_artifacts/Frankified.app' )

# ... existing Rakefile code still here

namespace :ci do

  # ... existing ci namespace code here

  Cucumber::Rake::Task.new(:frank_test, 'Run Frank acceptance tests, generating HTML report as a CI artifact') do |t|
    t.cucumber_opts = "app/Frank/features --format pretty --format html --out ci_artifacts/frank_results.html"
  end
end

# ... extend our ci task here (Rake merges these prerequisites into the existing task)
task :ci => ["ci:clear_artifacts","ci:build","ci:frank_build","ci:frank_test"]

There are a few things going on here. We require in Cucumber’s rake helper code. Next we set the APP_BUNDLE_PATH environment variable to point to the location of the Frankified build inside our ci_artifacts directory. Frank uses that environment variable to know which app to launch in the simulator at the start of your Frank tests. We then use a Cucumber helper to generate a rake task called ci:frank_test. We configure that task to run the Cucumber tests inside app/Frank/features. We also ask Cucumber to generate a nice HTML test report for each test run, saving it into the ci_artifacts directory so that it can be accessed by the CI system. Finally we extend our main ci task to depend on those new tasks.

This means that when you run the rake ci command Rake will now generate a Frankified build and then run tests against it, in addition to generating the debug simulator build as it did previously. So if you ran ./go.sh at this point to simulate a full CI run you would see a debug build of your app generated, followed by a Frankified build, and finally a Frank test run. You’d also see the Frankified app plus a nice HTML test report in the ci_artifacts directory. We’re almost done!
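
Because these are all just regular Rake tasks you can also run a subset by hand while debugging your setup. For example, to rebuild the Frankified app and re-run the tests without doing a full clean build first (assuming you’ve already run bundle install):

bundle exec rake ci:frank_build ci:frank_test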

Launching apps in the Simulator from a CI build

However, there’s one final hurdle. If you now kicked off a Jenkins run you’d likely see the Frank tests fail to launch your app, even though Jenkins is using the exact same go.sh script we just ran successfully by hand. Not good.

The reason for this is a bit subtle. Apple doesn’t provide an official way to automate launching an app in the simulator, so Frank uses an open source tool called SimLauncher which reverse-engineers the way Xcode launches apps. However this approach appears to only work if the process launching the app is attached to the OS X windowing system. In the case of Jenkins the process running a CI build is not always attached to the windowing system. To work around this SimLauncher has a client-server mode. You launch a SimLauncher server on your CI build box by hand so that it is attached to the windowing system, and then tell Frank to use SimLauncher in client-server mode when running CI. Frank will then ask that SimLauncher server to launch the app, rather than trying to launch it directly. Because the SimLauncher server process is attached to the windowing system it is able to launch the simulator even though the CI process itself isn’t attached.
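
To make the client-server idea concrete, here’s a minimal Ruby sketch of the concept. To be clear: the endpoint, port, and method name here are invented for illustration, and this is not SimLauncher’s actual API.

# conceptual sketch only -- not SimLauncher's real API
require 'net/http'
require 'uri'

def launch_in_simulator( app_path )
  if ENV['USE_SIM_LAUNCHER_SERVER'] == 'YES'
    # client-server mode: ask a long-running server process (which *is*
    # attached to the windowing system) to launch the app on our behalf
    Net::HTTP.post_form( URI.parse('http://localhost:8881/launch'),
                         'app_path' => app_path )
  else
    # direct mode: launch from this process, which only works when this
    # process is attached to the window server (approximated here with open)
    system( 'open', app_path )
  end
end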

That was a rather complex sidebar, but fortunately the actual setup is straightforward. First open a new terminal window and run the simlauncher command. That will start up a SimLauncher server in your terminal.

Next, update your go.sh script to look like this:

go.sh
#!/bin/sh
export USE_SIM_LAUNCHER_SERVER=YES
bundle install --deployment
bundle exec rake ci

The only change we made was exporting the USE_SIM_LAUNCHER_SERVER environment variable. This tells Frank to launch the Frankified app using SimLauncher in client-server mode rather than trying to launch it directly.

Next, test out your change by running go.sh. You should see the same CI run as before (including a successful Frank test run), but you should also notice that the terminal window running the SimLauncher server contains some output showing that the server was responding to launch requests from Frank during the test run. At this point you should also be able to perform a complete CI run via Jenkins (as long as you have the SimLauncher server running, of course).

Starting the SimLauncher server by hand in a terminal is a bit of a hassle, but in practice it turns out not to be a big deal. You only have to do it each time you reboot your build box, which with OS X is a fairly infrequent event. If even that bothers you, one option is sketched below.
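
One way to remove the manual step entirely would be to register a per-user launchd agent that starts simlauncher whenever the build user logs in. LaunchAgents run inside the user’s GUI login session, so the server stays attached to the windowing system. This is a sketch, not something from the Frank docs: the label and plist path are arbitrary choices, and you may need the full path to your simlauncher binary since launchd uses a minimal PATH.

#!/bin/sh
# write a per-user LaunchAgent that starts simlauncher at login
cat > ~/Library/LaunchAgents/com.example.simlauncher.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.example.simlauncher</string>
  <key>ProgramArguments</key>
  <array><string>simlauncher</string></array>
  <key>RunAtLoad</key><true/>
</dict>
</plist>
EOF
launchctl load ~/Library/LaunchAgents/com.example.simlauncher.plist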

Next steps

We now have a working CI setup. However this basic configuration should only be the start of the journey. Because of the value they provide, CI systems tend to grow over time. I’ll briefly describe some directions in which you might grow this system.

The first thing I’d want to add is an automated unit test run (before the Frank run). After that one could start adding internal quality metrics (code duplication, unit- and acceptance-test coverage, cyclomatic complexity reports, etc.). You might want builds which have passed your acceptance test suite to be automatically deployed to QA devices via HockeyApp or TestFlight. At that point you’re starting to move towards a Continuous Delivery system where features and bug fixes move through one or more delivery pipelines from check-in through automated testing to QA deployment and eventual production deployment. As you add more functionality your builds will start to take longer to run, which means slower feedback and more time waiting for a build to pass initial quality checks. At that point you’ll probably want to look at parallelizing your build, most likely by standing up multiple build agents.