
Being Agile

Pete Hodgson's blurgh

Towards Frank 1.0


One of the many geeky hats I wear is that of maintainer for Frank, an open-source tool for automated testing of native iOS applications. Frank has been around for over 2 years now, which is actually quite a long time in the mobile space. It has evolved a fair amount in that time, but has had a surprisingly small amount of change to the core architecture. I’m actually quite proud of that. I think that the core concepts have been proven out enough that Frank is ready for a 1.0 release which cleans up some long-standing cruft and solidifies at least parts of its client/server API.

The main motivator for the Big One Oh is that I want to remove some old unmaintained libraries which Frank currently depends upon. This can be done in a mostly backwards-compatible way, but by doing this as part of a major version bump I can make some reasonably aggressive improvements without letting down existing users who would rightly expect backwards compatibility from a point release.

Adios UISpec

The main dependency I’d like to remove is UISpec. We have used it in the past for both view selection and for touch synthesis. At the time I started building Frank it was a great option, but it has since become unmaintained, and surpassed by other, newer tools.

About a year ago I wrote a new view selection engine called Shelley from scratch with the goal of replacing our use of UISpec, and a few months back we started using KIF for touch synthesis. So at this point new users of Frank aren’t really using UISpec for anything, and they’re benefiting from faster view selection, and more robust touch synthesis. However I kept UISpec around for backwards compatibility. With a 1.0 release I can finally cut the cord and fully remove UISpec.

A big issue that several users have had with UISpec is its GPL license. This license was necessarily inherited by the rest of our Frank server code, which made some users nervous, and prevented some folks from using Frank in a commercial setting. By removing UISpec fully from the Frank repo we no longer need to have a GPL license for any part of Frank, which should make people more comfortable. We’ll be moving to an Apache 2.0 license for the server parts of Frank, the same license which other Frank components have always had.

Adios KIF

The other big library dependency I’ll be leaving behind is KIF. I really like KIF, and in general there aren’t many reasons to not use it. It’s a very nicely written library, and a good choice for Objective-C developers who are looking for a lower-level testing tool.

The motivation for leaving KIF is the opportunity to switch to using Apple’s own UIAutomation private framework. I recently opened up a library called PublicAutomation which exposes this private framework for use by tools like Frank, and I’m excited to be able to use the same super-solid touch-synthesis code that Apple themselves use.

As an example of why I’d like Frank to switch, it appears that with PublicAutomation you can simulate device rotation on a physical device without having to physically rotate the hardware, something which I believe was impossible before. It is also possible to extend PublicAutomation with whatever complex multi-touch gesture simulation your tests might need to perform. Finally, I can be confident that when Apple release new iOS SDKs they will do a lot of work to ensure that UIAutomation remains compatible and fully functional.

Timeline

I already have a branch of the Frank repo which contains the above changes, and it appears to be working well with the apps I’ve tested it on. I hope to release that soon (within the next couple of weeks) as a 1.0.pre1 version of the frank-cucumber gem. At that point I’ll be asking folks in the Frank community to try it out and give me feedback on what works and what doesn’t. I expect a few minor teething troubles with tests that use older, more obscure UISpec helper methods that Frank will have lost, but there shouldn’t be any issues that take more than a few lines of code to solve.

Introducing PublicAutomation


I’m excited to announce PublicAutomation, a friendly wrapper around Apple’s private UIAutomation framework. PublicAutomation allows you to use Apple’s own private framework to simulate user interactions (taps, swipes, keyboard typing) via a simple Objective-C API.

Previous approaches to this problem have relied on reverse-engineering and monkey-patching iOS’s touch events system. PublicAutomation takes a different approach. It links in the low-level API of the private framework which Apple itself uses for its own UIAutomation tooling. PublicAutomation provides the stability of Apple’s proprietary UIAutomation tool with the flexibility of an open source library maintained by the community.

I have already had great success in my experiments using PublicAutomation as the user-simulation backend for Frank (the automated acceptance testing tool I maintain), replacing our previous dependency on KIF. KIF is a great tool but Frank was only using it for its user-simulation features, which was always a bit of a weird fit. I’m confident we can now replace our KIF dependency with a smaller more focused user-simulation library in the form of PublicAutomation.

Some history

As I said above, Frank currently uses KIF to do the low-level work of simulating user interactions. KIF achieves this with some clever monkey-patching of iOS’s event system. This works remarkably well, but there are some issues with simulating certain interactions (e.g. tapping a deletion confirmation button in a table view cell). It also has some strange side-effects at times - an app which has KIF linked in doesn’t react to tap events from an actual user in the same way.

Recently I spent a bit of time investigating the feasibility of using Apple’s private UIAutomation framework outside of their official UIAutomation tooling. I blogged about that initial research in a previous post. The findings were both good and bad. The good news was that the private framework can be linked in and used both in the simulator and on the device. The bad news was that only the low-level UIASyntheticEvents API works reliably. The high-level API that UIAutomation exposes via JavaScript does not appear to be usable programmatically.

My goal in investigating the UIAutomation private framework was to replace KIF’s event monkey-patching. I’d also hoped to get some nice new high-level API (e.g. send a flick to this view), but that was really more of a nice-to-have than an absolute requirement. The main outcome of this research is the discovery that we can expose and use the UIAutomation private framework’s low level API.

Bringing in KIF’s keyboard typing

It turns out that this low level UIASyntheticEvents API has a drop-in replacement for almost every feature of KIF we use in Frank. The only thing missing was keyboard typing. I extracted KIF’s keyboard typing code into a separate class inside of KIF and sent them a pull request. Then I took that KIFTypist class and ported it into PublicAutomation. At this point PublicAutomation has everything Frank needs. It also appears to not have the issues we’ve seen when monkey-patching the event system. For example we can now tap on deletion confirmation buttons.

The future

I’m working on solidifying Frank’s usage of PublicAutomation. There will probably be an official switch over to using it as part of a 1.0 release of Frank (along with removing our UISpec dependency, but that’s a different story).

I’m also hoping that other non-UIAutomation tools can take advantage of it - Calabash for example. My hope is that PublicAutomation can become a standard shared library for user simulation in iOS. To achieve that it does need some extra work. Right now it only supports single-finger taps and swipes. Extending support to more complex multi-touch gestures should be trivial.

It should also be trivial to add support for features which have previously not been easily accessible to non-UIAutomation tools. For example simulating the home button being pressed, the screen being locked, etc. As far as I can tell everything exposed by the UIASyntheticEvents class is up for grabs. An exciting prospect!

Marker Branches in Git


In my last post I talked about setting up a basic deployment pipeline using marker branches in git to keep track of what was where.

In this post I want to go into a little more detail describing how these marker branches work. To do that I’ll walk through a simple example showing the state of a git repo as code moves through being committed, being pushed to master, deployed to pre-prod, and finally promoted to prod.

A sample git branch progression

In these diagrams the rounded boxes are git commits, the rectangles are git branches. The ‘WD’ rounded box represents uncommitted local changes in the current working directory.

Many thanks to Scott Chacon, git teacher extraordinaire, who very generously shares an OmniGraffle file containing all the really nice git diagrams he uses in his books and presentations. I’ve shamelessly used that as the basis of these diagrams.

starting state

At the start of this scenario we have our master branch pointing at the C5 commit. We also have a couple of marker branches, pre-prod and prod. We’ll talk more about these guys momentarily. Finally we have some local code changes inside our working directory. These are the changes which we are going to be following as they travel the path to production.

changes checked in

In this next diagram you can see I’ve now checked in the local changes I had in my working directory to my master branch as C6. master is now pointing to the newly created C6 commit but nothing else has changed.

A release candidate is born

At this point I run some acceptance tests, and decide that what I currently have in master is a viable release candidate. I want to push it to pre-prod to start more exhaustive testing of this candidate.

To do that I run a deployment script which pushes the code revision which master is currently pointing to into the pre-prod environment. If the deployment succeeds the script will also update the pre-prod marker branch to point to the code revision which is now deployed to pre-prod. That’s the essence of the marker branch concept. It’s a way to indicate which revision of the codebase is in which environment.

after deploying to pre-prod

Here’s what our git repo looks like after that deploy to pre-prod. The pre-prod branch has been updated to point to C6, since that’s the commit which my script just successfully deployed to the pre-prod environment. Master also continues to point to C6, because there haven’t been any other checkins to master since I decided to deploy to pre-prod.

Pre-prod signoff

Now I will do some more exhaustive testing in my pre-prod environment. Probably some exploratory QA testing, and probably some sign-off with my product and design friends as well. Note that there’s a very easy way to see exactly what has changed in pre-prod since our last release to prod. We have a prod marker branch which indicates which code revision is currently in production, and a pre-prod marker branch which shows what code revision the current pre-prod release candidate is from. If anyone needs to know exactly what changes are involved in this release candidate we can use standard git diffing and logging tools to find out.
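With both marker branches in place, the “what’s in this candidate?” question reduces to ordinary git commands, for example:

```shell
# Commits that are in the pre-prod candidate but not yet in prod:
git log --oneline prod..pre-prod

# The full code diff between what's live in prod and the candidate:
git diff prod pre-prod
```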

Release candidates don’t stop other work

While we’re verifying our release candidate other development work can continue to happen, with more commits to master.

meanwhile, work continues…

Here we see that the master branch has continued to move forward with new commits C7 and C8. I included this in the scenario to highlight the benefits of the pre-prod marker branch. We don’t have to stop forward development while we verify that a specific code revision is good for release. We also don’t need to create a true branch in the repo. We simply use a marker branch to make a note of what revision is currently in pre-prod while allowing unrelated development to move forward.

Promote to production

At this point our friends in QA and Product have given us a happy thumbs up and agreed that what’s in pre-prod is ready for release to production. We’ll now run a deployment script which takes the code revision pointed to by the pre-prod marker branch and promotes (i.e. re-deploys) that code revision to the production environment.

released to production

Here’s the state of the git repo after a successful promotion from pre-prod to prod. After some smoke tests against the production environment have passed the script updates the prod marker branch to reflect the new reality - the current code in production is the code at commit C6 in the repo.

Conclusion

I’ve shown how marker branches can act as a simple way to track which version of your application is live in which environment. I’ve also shown that you can use marker branches to enforce lightweight process constraints - for example you can’t deploy an arbitrary code revision to prod, it has to be the code revision that’s currently in pre-prod.

Marker branches are not a substitute for a real grown-up build pipeline with build artifacts and an associated artifact repository. However for a really simple system (e.g. deploying a blog) marker branches can make sense.

The lightweight constraints can also potentially work as a way to manage how code changes enter CI when working in a large team of developers. For example you could only allow developers to check in code on top of a passed-ci-smoke marker branch. This would prevent a developer from accidentally checking in on top of code which has not yet gone through a CI smoke test.

Octopress Deployment Pipeline


I spent a fun evening recently setting up a deployment pipeline for this blog. I’d like to share some details on what I set up and why.

The motivation was that I wanted some way to publish draft blog entries for other people to review, but I didn’t want these drafts to show up on my public site. I played with Octopress’s published: false option, but it really didn’t give me what I needed. Then I saw someone commenting that the ideal would be to have a preview version of the entire site available at a separate url. A pre-production environment, essentially. Hmm, I thought. Every web project I work on has one of these. It’s used to showcase the version of the system which is going to be released to production. That’s what I need - why don’t I just set that up for my blog?

Delivery pipelines

When using a pre-prod environment for showcasing, it’s important that what goes to prod is exactly what was showcased on pre-prod. A delivery pipeline helps ensure that. You could think of it as Continuous Integration on steroids. Every time code is committed it is built, unit tested, and then packaged into a build artifact. That build artifact (or just ‘build’) then moves through a pipeline which potentially ends in production. The build is often initially deployed to a dev/integration environment. Assuming it passes muster it may then be promoted to a QA environment, and perhaps then promoted again to pre-prod. Promoting a build means deploying it to the next environment in the pipeline. Finally that build may go through some sort of manual sign-off process in pre-prod and then finally be promoted to production.

The key principle here is that of a well-defined build artifact which is moving through these different environments. Jez Humble talks about this pipeline as a sort of obstacle course, where you gain confidence in the quality of the build artifact as it moves through your pipeline, passing stricter and stricter quality hurdles at each stage. This delivery pipeline is a core part of a Continuous Delivery system.

CD for a blog?! Srsly?

Now clearly what I’ve just described is a little bit over the top for this lowly blog. I realize that. But setting up a sort of lightweight pipeline was a fun and surprisingly useful project. It helped clarify some CD concepts for me (the best learning is by doing) and I do actually use the pipeline. In fact I made use of it while writing this very post!

Pre-production environment

This blog is powered by Octopress, and it’s published as a set of static files which are hosted by Amazon’s S3 service. Happily this means that creating a ‘pre-prod environment’ was as simple as creating a new S3 bucket and wiring it up to a subdomain via a CNAME entry.

The approach

I didn’t want to go too overboard, so rather than tracking a physical build artifact I opted to instead pass around git commits as my ‘artifact’. Now this is actually a bad idea to do in a real CD system. You have to repeatedly re-build and re-package your code at each stage in the pipeline, and you risk differences in how that build process occurs at those different stages. That said, for my purposes just tracking commits in version control will work well enough.

My goal was to be able to deploy from any git commit to my pre-prod environment using a simple script. I’d then be able to ‘showcase’ my changes by pointing my browser at the pre-prod site. Assuming everything passes muster I could then run another script to promote whatever is in pre-prod to prod. Note that I allowed no way for me to deploy an arbitrary (and thus un-verified) commit to production. Anything I want to push to prod has to move through my pre-prod environment first.

Tracking what’s where

I track what is in pre-prod and prod using git marker branches. After successfully deploying a commit to an environment the corresponding marker branch is updated to point to that commit. That way my scripts can always know which commit is deployed to each environment just by checking with git.

For concrete details on how I do the git branch manipulation you can take a look at the push_preprod and promote_preprod_to_prod scripts themselves.

Deploying a commit

The steps I use to deploy an arbitrary commit of an octopress blog to S3 are:

  • extract a snapshot of that commit to a temporary directory using git archive
  • run the rake task which compiles the Octopress source into the static site files
  • push the static files up to an S3 bucket using s3cmd

I encapsulated this in a Deployer class which is bundled with my blog’s source here.

Done

That’s pretty much it. It’s a simple lightweight system that gives me just enough release management with very little overhead. I am able to deploy any arbitrary version of my blog to a pre-prod environment which is basically an exact replica of prod. I can then promote what’s in pre-prod to prod with a single command. I was pleasantly surprised at how easy this was to accomplish. It’s literally just an evening’s work to set up a simple delivery pipeline.

TODO

Ideally I’d like to have my pre-prod environment use a slightly different Octopress configuration than my prod environment. For example I’d like to turn off disqus commenting in pre-prod since I don’t want people’s comments to be lost. I’d also like to add a little banner so people know they’re viewing a preview copy of my blog. I’m not quite sure at the moment on the best way to approach this, so I’m leaving it be for now.

Running Frank as Part of iOS CI


Frank is a tool that allows you to run automated acceptance tests against your native iOS application. A major reason for creating automated acceptance tests is so that you can run them as part of your Continuous Integration (CI) process. Doing this enables a rapid feedback loop where a developer checking in code is informed very quickly if that change caused a defect in your app.

In this post I’ll show how to configure a basic CI setup using Jenkins which will build your app and run Frank tests against it every time you check in code. CI for iOS projects doesn’t seem to be a common practice. This is a shame, because CI can bring just as many benefits for iOS applications as for other technologies. I suspect part of the reason CI is less popular is that Apple doesn’t make it particularly easy. It’s not a trivial task to automate things like building your application and running unit tests. Things are moving in the right direction though, with both Apple and open-source developers making it simpler to integrate a dev toolchain into a CI setup.

Our plan of attack

I needed a simple app to demonstrate a CI setup. I’ll be using the same open source ‘2012 Olympics’ app that I’ve used as a pedagogical example in previous posts. To keep the CI setup distinct from the application I created a small master repo which contains just the CI setup for the Olympics app, along with including the app source code via a git submodule. I’ve also had to fork the Olympics app because I needed to set it up for Frank testing, as well as make small changes to the app’s project settings which allow CI to be easily set up. When setting up your own app for CI you’d likely already have these changes in place.

So, let’s walk through what’s involved in getting a basic CI set up for our iOS application. Our overall strategy will be:

  • set up a build script which can build our app from the command line
  • set up CI so that we can run that build whenever we check in code
  • add Frank testing to our build script
  • set up CI to enable Frank tests

Let’s get started by creating a build script.

Scripting the build

Before we start setting up CI itself we need to automate the build process. This will involve stepping outside the safe environs of Xcode and entering the powerful and intimidating world of the command line. Have no fear, it’s not as scary as it sounds.

We’re going to use a ruby tool called Rake to create our build script. Rake is similar to tools like Make and Ant - it makes it easy for us to succinctly express the different tasks we want our build script to perform, and allows us to chain those tasks together. We’ll also be using a ruby gem called xcodebuild-rb from Luke Redpath (one of the most industrious open-source developers I know of). This gem makes it trivially easy to drive xcodebuild (the command line interface for Xcode) from inside a Rake file.

Before we get started on the build script itself we need to create a Gemfile which declares our ruby dependencies. This file can be used by another tool called Bundler to ensure that developer machines and CI systems have everything they need.

Gemfile
source "https://rubygems.org"

gem "rake"
gem "xcodebuild-rb", "~> 0.3.0"

Those are all the dependencies we have for now. If you run bundle install, Bundler should now set up those gems. Next we’ll create our initial Rakefile - the file which defines our app’s build tasks:

Rakefile
require 'rubygems'
require 'xcodebuild'

namespace :xcode do
  XcodeBuild::Tasks::BuildTask.new :debug_simulator do |t|
    t.invoke_from_within = './app'
    t.configuration = "Debug"
    t.sdk = "iphonesimulator"
    t.formatter = XcodeBuild::Formatters::ProgressFormatter.new
  end
end

task :default => ["xcode:debug_simulator:cleanbuild"]

This Rakefile does a few things. First it loads the libraries we need. Then it defines an xcode namespace (a namespace is just a way of logically grouping a set of tasks). Inside that xcode namespace it uses an xcodebuild-rb helper to create a set of Rake tasks for automating a debug iphone simulator build of the project contained inside the app directory. Finally a default task is defined. This task doesn’t do anything itself, but declares a dependency on the xcode:debug_simulator:cleanbuild task. That means that whenever the default task is run it will run that dependent task, causing a clean build of the debug simulator version of the app to be generated.

If you were to try that out now by running rake from the command line you should see xcodebuild creating a clean build of the app. You could also run rake -T to get a list of all interesting rake tasks. If you did so you’d notice that xcodebuild-rb has created a few different Rake tasks, not just for building the app but also tasks for cleaning build output and archiving the build. For the purposes of this blog post we’ll just be using the cleanbuild task.

At this point we have an automated way to generate a clean build of the application. Now we want to make sure that our build script leaves that built application in a common ‘artifact’ directory so that our CI system can archive it. There’s no point building the app if you don’t save it for use later on. I’ll follow a convention of putting everything which I want my CI system to save inside a ‘ci_artifact’ directory. I add the following to my Rakefile:

Rakefile additions
namespace :ci do
  def move_into_artifacts( src )
    FileUtils.mkdir_p( 'ci_artifacts' )
    FileUtils.mv( src, "ci_artifacts/" )
  end

  task :clear_artifacts do
    FileUtils.rm_rf( 'ci_artifacts' )
  end

  task :build => ["xcode:debug_simulator:cleanbuild"] do
    move_into_artifacts( Dir.glob("app/build/Debug-iphonesimulator/*.app") )
  end
end

task :ci => ["ci:clear_artifacts","ci:build"]

Here I’ve created a ci namespace. Inside that I’ve added a clear_artifacts task and a build task. In addition I’ve also created a ci task in the root namespace. That task depends on the clear_artifacts and build tasks, meaning that whenever I run rake ci Rake will run ci:clear_artifacts and then ci:build.

ci:clear_artifacts simply deletes any existing ci_artifact directory. ci:build depends on the existing xcode build task to actually create a build of the app, and then it copies the built app into the ci_artifacts directory, creating the directory if necessary. I didn’t want to hard-code the app name into my Rakefile so I cheated a bit and used a glob to select any directory with a .app extension.

If I now run rake ci I should end up with a freshly-built copy of the application in a ci_artifacts directory.

Setting up Jenkins

Now we have an automated build script we need to get our CI system set up. I’m going to use Jenkins in this post because it’s probably the most commonly used CI server. Pretty much the exact same approach would be used for other tools such as TeamCity or Go.

I installed a sandbox copy of Jenkins on my laptop with a simple brew install jenkins, courtesy of Homebrew. If you’re a developer using a Mac then I highly recommend Homebrew. I followed the instructions provided with the Homebrew recipe to launch the Jenkins server and then went to http://localhost:8080/.

In the Jenkins UI I created a new ‘free-style’ job and configured it to point to my main git repo. Next I needed to tell Jenkins how to build our app. A good practice is to keep as much of your configuration as possible inside version control. As part of that I usually create a simple CI script inside the root directory of my app, and then have the CI system call that script. In this example that script is called go.sh and lives in the root of my main repo, under source control like everything else. The only thing I need to configure in Jenkins itself is a single line ‘Execute Shell’ build step which calls go.sh. Another nice benefit of this approach is that you can test tweaks you’re making to your CI setup by calling ./go.sh directly on your dev box, rather than having to kick off a new CI build.

Here’s what my initial go.sh looks like:

go.sh
#!/bin/sh
bundle install --deployment
bundle exec rake ci

Pretty simple. It uses bundler to make sure all my ruby dependencies are installed and then runs the ci rake task.

The last thing I need to do is tell Jenkins to archive all the artifacts it finds inside the ci_artifacts directory by checking the ‘Archive the artifacts’ checkbox and then specifying ci_artifacts/**/* as the files to archive.

That’s it, we’re done with our Jenkins set up. If all has been done correctly when you kick off that Jenkins job it should build the app and save the resulting 2012 Olympics.app inside Jenkins’ Build Artifacts for that build.

Setting up Frank tests

We now have a very basic CI setup for our iOS app. Next I’ll describe how to integrate Frank into this CI system. I’m going to assume that your app itself has already been set up for Frank. If not, check out a previous post of mine for all the details. It’s a painless process.

First we need to declare our dependency on the frank-cucumber gem which we’ll use to actually run our Frank tests. We do that by updating our Gemfile:

Gemfile
source "https://rubygems.org"

gem "rake"
gem "xcodebuild-rb", "~> 0.3.0"

gem "frank-cucumber", "~> 0.9.4"

The next step is to create a Rake task which will generate a Frankified build of your app. I’ll add that to the ci namespace in my Rakefile as follows:

Rakefile additions
namespace :ci do

  # ... existing ci tasks don't change 

  task :frank_build do
    sh '(cd app && frank build)'
    move_into_artifacts( "app/Frank/frankified_build/Frankified.app" )
  end
end

This task shells out to the frank build command, and then copies the app bundle that it builds into our ci_artifacts directory.

Now that we have a Frankified build we want to run frank tests against it. Our Frank tests have been written using Cucumber, which happily comes with great Rake integration. We just need to use that to create a rake task which runs our cucumber features against our Frankified build:

Rakefile additions
# ... near the top of our Rakefile
require 'cucumber/rake/task'

HERE = File.expand_path( '..',__FILE__ )
ENV['APP_BUNDLE_PATH'] = File.join( HERE, 'ci_artifacts/Frankified.app' )

# ... existing Rakefile code still here

namespace :ci do

  # ... existing ci namespace code here

  Cucumber::Rake::Task.new(:frank_test, 'Run Frank acceptance tests, generating HTML report as a CI artifact') do |t|
    t.cucumber_opts = "app/Frank/features --format pretty --format html --out ci_artifacts/frank_results.html"
  end
end

# ... redefine our ci task here
task :ci => ["ci:clear_artifacts","ci:build","ci:frank_build","ci:frank_test"]

There are a few things going on here. We require in Cucumber’s rake helper code. Next we set the APP_BUNDLE_PATH environment variable to point to the location of the Frankified build inside our ci_artifacts directory. Frank uses that environment variable to know which app to launch in the simulator at the start of your Frank tests. We then use a Cucumber helper to generate a rake task called ci:frank_test. We configure that task to run the Cucumber tests inside app/Frank/features. We also ask Cucumber to generate a nice HTML test report for each test run, saving it into the ci_artifacts directory so that it can be accessed by the CI system. Finally we extend our main ci task to depend on those new tasks.

This means that when you run the rake ci command rake will now generate a Frankified build and then run tests against it, in addition to generating the debug simulator build as it did previously. So if you ran ./go.sh at this point to simulate a full CI run you would see a debug build of your app generated, followed by a frankified build, and finally a Frank test run would run. You’d also see the Frankified app plus a nice HTML test run report in the ci_artifacts directory. We’re almost done!

Launching apps in the Simulator from a CI build

However, there’s one final hurdle. If you now kicked off a Jenkins run you’d likely see the Frank tests fail to launch your app, even though Jenkins is using the exact same go.sh script we just ran successfully by hand. Not good.

The reason for this is a bit subtle. Apple doesn’t provide an official way to automate launching an app in the simulator, so Frank uses an open source tool called SimLauncher which reverse-engineers the way Xcode launches apps. However this approach appears to only work if the process launching the app is attached to the OS X windowing system. In the case of Jenkins the process running a CI build is not always attached to the windowing system. To work around this fact SimLauncher has a client-server mode. You launch a SimLauncher server on your CI build box by hand so that it is attached to the windowing system. You then tell Frank to use SimLauncher in client-server mode when running CI. Frank will now ask that SimLauncher server to launch the app, rather than trying to launch it directly. Because the SimLauncher server process is attached to the windowing system it is able to launch the simulator even though the CI process itself isn’t attached.

That was a rather complex sidebar, but fortunately the actual setup is straightforward. First open a new terminal window and run the simlauncher command. That will start up a SimLauncher server in your terminal.

Next, update your go.sh script to look like this:

go.sh
#!/bin/sh
export USE_SIM_LAUNCHER_SERVER=YES
bundle install --deployment
bundle exec rake ci

The only change we made was exporting the USE_SIM_LAUNCHER_SERVER environment variable. This tells Frank to launch the Frankified app using SimLauncher in client-server mode rather than trying to launch it directly.

Next, test out your change by running go.sh. You should see the same CI run as before (including a successful Frank test run), but you should also notice that the terminal window running the SimLauncher server contains output showing that it responded to launch requests from Frank during the test run. At this point you should also be able to perform a complete CI run via Jenkins (as long as you have the SimLauncher server running, of course).

Starting the simlauncher server by hand in a terminal is a bit of a hassle, but in practice it turns out to not be a big deal. You have to do it once every time you reboot your build box, which with OS X is a fairly infrequent event.

Next steps

We now have a working CI setup. However, this basic configuration should only be the start of the journey. Because of the value they provide, CI systems tend to grow over time. I’ll briefly describe some directions in which you might grow this system.

The first thing I’d want to add is an automated unit testing run (before the Frank run). After that one could start adding internal quality metrics (code duplication, unit- and acceptance-test coverage, cyclomatic complexity reports, etc.). You might want builds which have passed your acceptance test suite to be automatically deployed to QA devices via HockeyApp or TestFlight. At that point you’re starting to move towards a Continuous Delivery system where features and bug fixes move through one or more delivery pipelines from check-in through automated testing to QA deployment and eventual production deployment. As you add more functionality your builds will start to take longer to run, which means slower feedback and more time waiting for a build to pass initial quality checks. At that point you’ll probably want to look at parallelizing your build, most likely by standing up multiple build agents.

Using the UIAutomation Private Framework

| Comments

I’ve recently spent a few hours investigating how feasible it would be to use Apple’s private UIAutomation framework to simulate user interactions in iOS apps. The motivation for this is of course being able to use this for Frank, the UI testing tool I maintain. This post will summarize my progress to date. You can also check out my work-in-progress uiautomation branch for Frank. Note that that branch is very rough and ready at the moment. I pushed it up to my github repo just so anyone who wants to see progress so far can do so.

Credit

First off, credit where credit’s due. This whole approach was suggested by Kam Dahlin, who has been quietly advocating for it every now and then on the Frank mailing lists. He is also the guy who told me that the ARM build of UIAutomation.framework is embedded inside a disk image which is mounted on the device by Xcode.

Also credit goes to Eloy Durán who started investigating the same approach described here in order to add some UI automation features to the macbacon framework used by RubyMotion projects. He gave me some good pointers. From reading his implementation, we ended up in pretty similar places.

Finding the framework

The UIAutomation private framework comes with Xcode, but at first glance it seems that the framework only contains a library built for the Simulator, not the device:

⋙ mdfind -name UIAutomation.framework
/Users/Thoughtworks/Library/Developer/Xcode/iOS DeviceSupport/5.1.1 (9B206)/Symbols/Developer/Library/PrivateFrameworks/UIAutomation.framework
/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator5.0.sdk/Developer/Library/PrivateFrameworks/UIAutomation.framework

⋙ file /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator5.0.sdk/Developer/Library/PrivateFrameworks/UIAutomation.framework/UIAutomation
/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator5.0.sdk/Developer/Library/PrivateFrameworks/UIAutomation.framework/UIAutomation: Mach-O dynamically linked shared library i386

But (thanks to a tip from Kam) you can see that it’s also available inside the DeveloperDiskImage.dmg which comes with Xcode. After enabling a device for development using Xcode, this disk image will be mounted on your device whenever you connect the device to your development machine. More info on that here.

The long and the short of this is that you can dynamically load the framework at runtime when running on the device, but only if you have mounted the developer disk image on the device. The easy way to do that is to simply connect the device to Xcode.

Note that you can subsequently disconnect your device and the disk image appears to stay mounted, although I’m not sure for how long.

Linking in the framework

If you’re building for the simulator you can just tell the linker to link in the simulator version of the private framework. Something like -F /path/containing/the/framework -framework UIAutomation in your linker flags should do the trick.

To link the framework on the device you’ll need to ensure the Developer Disk Image is mounted (as discussed above). Assuming that’s the case then this one-liner while your app is booting should work fine:

dlopen([@"/Developer/Library/PrivateFrameworks/UIAutomation.framework/UIAutomation" fileSystemRepresentation], RTLD_LOCAL);

Accessing the framework

Because it’s a private framework UIAutomation doesn’t come with any header files. class-dump to the rescue! This is a really handy utility when working with private frameworks. It will analyse a framework and generate a header file based on what it finds. You can install it with homebrew - brew install class-dump and then generate a header file with something like class-dump /path/to/UIAutomation.framework > UIAutomation.h.

The file class-dump produces may need a little bit of massaging - the ordering of classes might not work first of all because of dependencies between them. Once it’s in good shape you can include it in your project.

Now that you have that header file you can access UIAutomation just as you would any other framework. To test this out, try something like:

UIASyntheticEvents *events = [UIASyntheticEvents sharedEventGenerator];
[events lockDevice];

You should see the device lock as soon as your app executes this code.

Limitations

The main limitation with programmatically driving UIAutomation seems to be that the high-level automation API doesn’t work when executing from the foreground app. According to Eloy Durán it may work if executed by a background app. The high-level API covers things like UIATarget, UIATableView, etc. However the lower-level UIASyntheticEvents API does seem to work, at least for the bits I’ve tried to use.

I have confirmed that the high-level API does not always work when called from the app under test. I haven’t tried running it from a backgrounded app. However, I have also found that some of the high level API will work, but sometimes will lock up. It looks like it’s locking up while waiting for events to be processed by a run loop. For example, using UIAKeyboard#typeString will work if all the characters you want to type are currently visible on the keyboard, but will lock up if you try to type characters not in the current keyplane. This makes me think that internally UIAKeyboard is switching the keyboard’s plane (e.g. to the symbol plane) and then waiting for it to be ready in that new plane. If UIAKeyboard is waiting for the app to update while the app itself is waiting for UIAKeyboard to finish up then we’d have a livelock, which is what this looks like. Anyway, this is all just conjecture at this point; needs more investigation.

The good news

All that said, the low-level API exposed by UIASyntheticEvents seems to work wonderfully. It appears to allow automation of the full range of taps, swipes, and multi-touch gestures. This basically replaces Frank’s need for KIF or UISpec as an interaction synthesis library. The one exception here is driving the keyboard - KIF has some really impressive code which will do that based only on low-level touch events. That implementation might need to be ported over to use UIASyntheticEvents instead. I’m also quite hopeful that some clever hackery will allow the high-level UIAutomation API to be used.

Writing Your First Frank Test

| Comments

You’ve heard about how awesome Frank is and you’re all fired up about getting some automated functional testing set up for your iOS app. Now you’re ready to start writing your first test. In this article I’ll cover the fundamentals of writing Frank tests, using an open source iOS application written for the 2012 Olympic Games as an example.

get the code for your app

I’m guessing you’ll already have the code for your own app, but if you’d like to follow along with the examples in this post you’ll want to download the sample app I’m using. You’ll find it on github here. Create a local clone of that repo with git clone https://github.com/Frahaan/2012-Olympics-iOS--iPad-and-iPhone--source-code.git

Now open up the app in Xcode and make sure you can run it in the simulator.

Frankify your app

Open up a terminal, and cd into the project’s root directory (the one with the app’s .xcodeproj file in it). Now we can Frankify the app as follows:

  • install the frank-cucumber gem if you haven’t already by running sudo gem install frank-cucumber. That sudo part won’t be necessary if you’re using rvm to manage your ruby setup (which I’d recommend).
  • run frank setup to create a Frank subdirectory which contains everything necessary to Frankify your app.
  • run frank build to create a Frankified version of your app.
  • run frank launch to launch the Frankified app in the simulator.
  • check that you are indeed running a Frankified version of the app by running frank inspect. This will open up Symbiote in your web browser.

Symbiote

Symbiote is a little web app which is embedded into your Frankified app. It allows you to inspect the current state of your app as it’s running. It also lets you experiment with view selectors. View selectors are how you specify which views in your app you want to interact with or inspect the value of. If you’re familiar with CSS selectors or XPath expressions the concept is the same.

Let’s learn a bit more about selectors by experimenting with our example Olympics app. I’ve followed the steps above, and am looking at the home screen of the app in Symbiote.

Note that I have the actual iOS Simulator side-by-side with my web browser. That’s useful when testing view selectors in Symbiote, because the main way of testing selectors is to flash the views in the simulator which match a selector. Let’s try that now. Type view marked:'Events' into the selector field at the top left of Symbiote in the web browser, and then hit the Flash button. You should see the Events button in the simulator flash. Congratulations, you are using Frank to manipulate specific parts of your iOS app.

running our first cucumber test

Our goal is to use Frank and cucumber to run automated tests against our app. The frank setup command provides an initial cucumber test for you so that you can verify that cucumber is working correctly in your system. First off we need to tell cucumber where our Frankified app is located so that cucumber can launch the app at the start of each test scenario. To do this, open up Frank/features/support/env.rb. At the bottom of that file you’ll see a TODO comment about setting the APP_BUNDLE_PATH. Replace that section of the file with something like this:

APP_BUNDLE_PATH = File.expand_path( '../../../frankified_build/Frankified.app', __FILE__ )

This tells Frank the location of your Frankified app, relative to that env.rb file. Now that we have done that we can run the initial cucumber test that was provided as part of frank setup. To do that, simply run cucumber in a terminal from the Frank subdirectory. When you do that you should see the Frankified app launch in the simulator, and then perform some rotations. You’ll also see some output in the terminal from cucumber describing the test steps it has performed and eventually declaring the test scenario passed.

writing our own cucumber test

All right, now that we know how to run cucumber tests we should write our own. We’re going to write a cuke test which verifies that the tab navigation buttons in our example Olympics app work correctly. We’ll write these tests in a new feature file called Frank/features/navigation.feature. Create that file with the following content:

navigation.feature
Feature: Navigating between screens

Scenario: Moving from the 'Home' screen to the 'Events' screen
Given I launch the app
Then I should be on the Home screen

When I navigate to "Events"
Then I should be on the Events screen

This expresses a test scenario. Now let’s ask cucumber to test just this feature by running cucumber features/navigation.feature. You should see the app launch, but then cucumber will complain because it doesn’t know how to execute any of the steps we’ve described after launching the app. That’s fair enough; we haven’t defined them anywhere yet! Let’s do that now.

Create a step definition file called features/step_definitions/navigation_steps.rb. When cucumber encountered the undefined steps just now it outputted a bunch of boilerplate code for defining those steps. We’ll cut and paste that code into our new step definition file. You should end up with this:

navigation_steps.rb
Then /^I should be on the Home screen$/ do
  pending # express the regexp above with the code you wish you had
end

When /^I navigate to "(.*?)"$/ do |arg1|
  pending # express the regexp above with the code you wish you had
end

Then /^I should be on the Events screen$/ do
  pending # express the regexp above with the code you wish you had
end

Now we’re going to implement these step definitions one at a time. Let’s start with the first one. We need to check that we are on the home screen. Looking at the home screen in Symbiote, it consists of a big UIView with a UIImageView inside of it called Icon512x512.png. Our test can verify that the app is displaying the home screen by checking whether that UIImageView is in the view hierarchy. If it is then presumably we’re on the home screen. If we don’t see that view then we’re not on the home screen. This isn’t an ideal way of verifying where we are - for a start, that image view should have a better accessibility label, which would make our tests less brittle and incidentally also improve the VoiceOver experience - but it will do for now.

To test whether a view exists we create a view selector which selects a UIImageView with the correct accessibility label and then we ask frank to verify that that selector matches at least one view. The entire step definition will look like this:

Then /^I should be on the Home screen$/ do
  check_element_exists "view view:'UIImageView' marked:'Icon512x512.png'"
end

Now when we run the cucumber feature again with cucumber features/navigation.feature we should see that step show up as green and passing in cucumber’s output. We are indeed at the Home screen at launch, and our test has verified that. One step down, two to go!

Next we need to define the step which navigates to a specific tab. To do that we’ll ask Frank to touch the appropriate tab button in the UI. Looking in Symbiote it appears that those buttons are implemented as views of class UITabBarButton. We can also see that UIKit is giving them nice accessibility labels. This means all we need to do to implement this step is something like this:

When /^I navigate to "(.*?)"$/ do |tab_name|
  touch "view:'UITabBarButton' marked:'#{tab_name}'"
end

If I run the feature again with cucumber I should see that step executing and taking us to the Events screen in the UI.

Now we’ll write our final step definition. We need to check that we are on the Events screen. Back in Symbiote, if we inspect the view hierarchy for that screen we see a UIScrollView which contains a bunch of different buttons for the different Olympic events. A reasonable check for whether we’re on this screen is to look for a few of those buttons. Again, this isn’t ideal but it should work pretty well. Here’s our step definition for that:

Then /^I should be on the Events screen$/ do
  check_element_exists "view:'UIScrollView' view:'UIButton' marked:'archery'"
  check_element_exists "view:'UIScrollView' view:'UIButton' marked:'badminton'"
  check_element_exists "view:'UIScrollView' view:'UIButton' marked:'boxing'"
end

And with that we are done. Running our feature again should result in a green, passing cucumber test. Hurray, we have written our first Frank test!

Don’t forget to refactor

Hang on, before we rush out to celebrate let’s clean up a bit. That last step has loads of duplication in it. Let’s refactor it a bit to this:

Then /^I should be on the Events screen$/ do
  %w{archery badminton boxing}.each do |expected_label|
    check_element_exists "view:'UIScrollView' view:'UIButton' marked:'#{expected_label}'"
  end
end

That seems a bit cleaner to me. We run our tests again to make sure we’re still green.

Now we can go out and celebrate our first Frank test!

Where to turn for help

Frank is a pretty powerful tool, and it takes time to learn its nooks and crannies. This is particularly true if you are also new to ruby and/or cucumber.

testingwithfrank.com has documentation, screencasts, and links to more resources.

The frank mailing list is a particularly great resource. Lots of folks there have gone through the same learning process you will go through, and are happy to help.

Lowering Frank’s Barrier to Entry

| Comments

I’ve always known that it’s really important that new users of Frank can easily get started by creating a Frankified version of their app - a build which has been enhanced with the embedded server which lets Frank do its magic.

When the project was first started it was really quite complex to setup. You had to add a bunch of source code to your app, and then modify a few bits of your app’s startup code.

After a while, with the help of some Objective-C big-brains like Stew Gleadow, the steps needed to Frankify your app were simplified. You ‘just’ needed to duplicate your app’s main target, link in a static library and a resource bundle, and add a few linker flags. Note that I put ‘just’ in quotes there. While this definitely made things less intimidating, I still thought that there were many potential users of Frank who were missing out because they hit some snag or other while getting started.

While I was at WWDC this year I took the opportunity to investigate ways to make Frank even easier to get started with. I asked some questions in the labs and got some great advice from some of the Xcode team. I was also stoked to meet a bunch of other non-Apple Objective-C big-brains, once again courtesy of Stew.

Zero-config Frank

A couple of days of post-WWDC hacking later, and I’m pleased to announce a new zero-configuration Frank setup. You can now create a Frankified version of your application without touching your Xcode project at all, using a simple frank build command. Here’s a screencast showing how it works.

Benefits

To my mind the biggest benefit is that this will help grow the community of Frank users by reducing the barrier to entry. It also means you no longer need to maintain a duplicate Xcode target, and therefore no longer need to make sure that .m files included in one target are also included in the other.

How does it work

There’s actually not much magic going on here. The concept is to leave the Xcode project as-is and instead just use xcodebuild to build a customized version of the app. That essentially just means adding a few linker flags and linking in a few static libraries and frameworks. This is all specified in a frankify.xcconfig file. We also specify a few other xcodebuild flags so that the app is built into a known location inside of the Frank subdirectory. This is much nicer than having to hunt around for it in the DerivedData dirs that Xcode 4 introduced.

Aside from the xcodebuild bits the main chunk of work was in creating a Thor-based command line tool with the cunningly original name frank. This is installed as part of the frank-cucumber gem, just like the (now deprecated) frank-skeleton script was. This command line tool provides commands for setting up a Frank directory inside your app project and building the Frankified app using the custom xcodebuild approach I just described. There are also commands for updating the frank server code, launching your frankified app in the simulator, opening Symbiote in the browser, and for launching a frank console. You can run frank help for more details.

The frank console

Another thing I added as part of this goal of making Frank easier to use is frank console. This is some very minimal code which uses Pry to bring up a simple ruby REPL that has the Frank helper code already mixed in. This will hopefully give more technical users a nice way to quickly prototype ruby test code against a running instance of their app. I demonstrate the basic functionality in the screencast above.

The Birth of a Domain Object in Ruby

| Comments

Software designs don’t pop into existence fully formed. They evolve over time. In this post I’m going to show how a concept can grow from a simple method parameter to an all-growed-up Domain Object.

Initial requirements

Imagine we’re building some code that will take bug reports from a user and submit them to a third-party bug tracking service. We expose an HTTP endpoint in a sinatra app to allow this:

post '/bug_report' do
   bug_description = params[:bug_description]

   BugFiler.report_bug( bug_description )
end

So far so good. We’ll assume that BugFiler is some service code that takes a bug description string and makes the appropriate service call to create a bug with that description in our bug tracking system. We won’t be focussing on the implementation of that class here, we’ll just be a client of it.

So a concept that’s starting to surface here is that of a Bug. So far it is represented by a string primitive and that’s OK because that’s all we need right now. Some java programmers would leap to their IDE keyboard shortcuts at this point and fix this Primitive Obsession code smell by refactoring this string primitive into a Bug type. But we’re ruby programmers, so we won’t do that. We’re pragmatic. YAGNI.

New requirements

Now our product guys have realized that it’d be nice to know who is submitting these bug reports. They’d like us to record the name of the person filing the report. OK, we can do that.

post '/bug_report' do
   bug_description = params[:bug_description]
   bug_reporter = params[:reporter]

   BugFiler.report_bug( bug_description, bug_reporter )
end

You can assume that we also modified BugFiler#report_bug to take that second arg.

So now we’re good. New requirements are satisfied and we can move on to our next task.

More new requirements

Oh actually, the product folks have realized we should really be tracking severity too. OK.

post '/bug_report' do
   bug_description = params[:bug_description]
   bug_reporter = params[:reporter]
   severity = params[:severity]

   BugFiler.report_bug( bug_description, bug_reporter, severity )
end

So this works, but now alarm bells are ringing for me. 3 params is about my limit for maintainable code. Let’s change BugFiler#report_bug to take a hash instead.

post '/bug_report' do
   bug_description = params[:bug_description]
   bug_reporter = params[:reporter]
   severity = params[:severity]

   BugFiler.report_bug({
     :description => bug_description,
     :reporter => bug_reporter,
     :severity => severity
     })
end

A bit more verbose, but I’d say more readable too.

Default severity

Turns out that sometimes a user doesn’t bother to explicitly specify a bug severity, so we should default it to ‘medium’.

post '/bug_report' do
   bug_description = params[:bug_description]
   bug_reporter = params[:reporter]
   severity = params[:severity] || 'medium'

   BugFiler.report_bug({
     :description => bug_description,
     :reporter => bug_reporter,
     :severity => severity
     })
end

Deriving a bug summary

Our product friends are back. Turns out that our third party bug tracker allows us to specify a summary line describing the bug, but we don’t currently have any UI exposed which lets users specify that summary. We’d like to work around this by just clipping the first line of text from the bug description and using that as the bug summary. OK, I think we can do that.

post '/bug_report' do
   bug_description = params[:bug_description]
   bug_reporter = params[:reporter]
   severity = params[:severity] || 'medium'

   bug_summary = bug_description.split("\n").first

   BugFiler.report_bug({
     :description => bug_description,
     :reporter => bug_reporter,
     :severity => severity,
     :summary => bug_summary
     })
end

Replacing Primitive Obsession with a Struct

Now at this point I think we have too much logic in our controller code. It feels like we have the beginnings of a domain object here. We don’t want to disturb too much code as we initially introduce this new object, so we’ll just replace the primitive hash we currently have with a Struct.

# elsewhere in the codez
Bug = Struct.new(:description,:reporter,:severity,:summary)

post '/bug_report' do
   bug_description = params[:bug_description]
   bug_reporter = params[:reporter]
   severity = params[:severity] || 'medium'

   bug_summary = bug_description.split("\n").first

   bug = Bug.new( bug_description, bug_reporter, severity, bug_summary )
   BugFiler.report_bug( bug )
end

We’ve used ruby’s built in Struct facility to dynamically generate a Bug class which takes some parameters at construction and then exposes them as attributes. This type of class is variously referred to as a DTO or Value Object.
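To make concrete what Struct.new gives us for free, here is a standalone sketch (the field values are made up for illustration):

```ruby
# Struct.new builds a class with a field-based initializer, per-field
# accessors, and value-based equality -- the hallmarks of a Value Object.
Bug = Struct.new(:description, :reporter, :severity, :summary)

a = Bug.new("App crashes on launch", "alice", "high", "App crashes on launch")
b = Bug.new("App crashes on launch", "alice", "high", "App crashes on launch")

a.reporter  # => "alice"; accessor generated by Struct
a == b      # => true; equality compares field values, not object identity
```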

Note here that we would have also modified BugFiler#report_bug to expect a Bug instance rather than a raw hash.

Introducing a Domain Object

We now have a Bug class, but it’s just a boring old Value Object with no behaviour attached. Let’s move the logic which is cluttering up our controller code into our Bug class.

# elsewhere in the codez
class Bug < Struct.new(:description,:reporter,:severity)
  def summary
    description.split("\n").first
  end

  def severity
    super || 'medium'
  end
end

post '/bug_report' do
   bug_description = params[:bug_description]
   bug_reporter = params[:reporter]
   severity = params[:severity]

   bug = Bug.new( bug_description, bug_reporter, severity )
   BugFiler.report_bug( bug )
end

This is one of my favorite ruby tricks. We define the class Bug which inherits from a dynamically created Struct class, and then we add in some extra behavior. This way we don’t have to write boring initialize methods or attr_accessor code in our class definition, and our intent is clearly captured by the use of Struct.
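As a standalone illustration of that super trick (the field values here are made up):

```ruby
# Subclass the anonymous class returned by Struct.new, then override one
# of its generated accessors; super returns the stored field value.
class Bug < Struct.new(:description, :reporter, :severity)
  def severity
    super || 'medium'  # fall back to a default when no severity was stored
  end
end

Bug.new("desc", "bob", nil).severity     # => "medium" (default kicks in)
Bug.new("desc", "bob", "high").severity  # => "high"
```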

Solidifying our Domain Object

Now that we have a Bug, why don’t we allow it to file itself?

class Bug < Struct.new(:description,:reporter,:severity)
  def summary
    description.split("\n").first
  end

  def severity
    super || 'medium'
  end

  def file_report!
    BugFiler.report_bug(self)
  end
end

post '/bug_report' do
   bug_description = params[:bug_description]
   bug_reporter = params[:reporter]
   severity = params[:severity]

   bug = Bug.new( bug_description, bug_reporter, severity )
   bug.file_report!
end

That seems quite nice. We have some controller logic which doesn’t do much apart from transform params into an object and then call methods on that object.

Our end state

At this point we have the beginnings of a real Domain Object in our Bug class. In the future we could add the ability to serialize a Bug instance to a flat file or a database without really needing to touch our controller code. We can also test all of the business logic without needing to think about HTTP mocking, Rack::Test, or anything like that.
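For instance, serialization is already cheap because Struct instances convert straight to hashes. A quick sketch (JSON is just one option here, and the field values are made up):

```ruby
require 'json'

Bug = Struct.new(:description, :reporter, :severity, :summary)

bug = Bug.new("App crashes on launch", "alice", "high", "App crashes on launch")

# Struct#to_h (Ruby 2.0+) gives a field-name => value hash,
# ready to hand to JSON, YAML, or a persistence layer.
json = bug.to_h.to_json
```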

I’ve noticed that domain concepts follow evolutionary paths like this quite frequently. In this case our path was:

  • a single parameter
  • a group of parameters
  • a hash object
  • a hash object plus some unattached logic
  • a value object plus some unattached logic
  • a domain object with attached logic

YAGNI!

Something I tried to get across as we worked through this scenario was the importance of taking a YAGNI approach to software design. It is very tempting to start introducing domain objects as soon as you see a potential for them, and it’s fun to do so. But a lot of the time you don’t ever need to go this far down the path towards a real domain object. If you don’t need to attach any behaviour then a lot of the time a simple hash is just fine.

Feature Flags in JavaScript

| Comments

On a recent project we were applying Continuous Delivery practices and were releasing to production reasonably frequently - sometimes multiple times a week. We didn’t want to expose features which were under active development in our production environment, but we also wanted to avoid feature branches. I’m going to describe the simple approach we used to solve this problem using feature flagging with javascript and CSS.

What are feature flags?

Feature flags (aka feature bits, feature toggles) are a very useful technique for managing latent functionality within your application. You expose switches in your application which configure whether certain behavior is enabled or disabled. Typically you would use this pattern for functionality which is under active development and is not ‘fully baked’.

There are various classes of behavior which you might expose toggles for. Common use cases include hiding user-facing features which are still in development, or switching whether your code uses a new version of a third-party service. This allows you to do branch-by-abstraction and therefore avoid long-lived feature branches which can often be the source of considerable pain to a development team.

Further reading

For a more in-depth discussion on this pattern, Martin Fowler has a nice writeup. My former colleague Erik Sowa also has a nice presentation describing how we applied ‘feature bits’ at a previous company I worked in. Derek Hammer has a really nice write up on the broader range of patterns available for isolating code when practicing Continuous Delivery.

Feature flagging using query strings, javascript and css

As mentioned above, I’ll be describing a simple but effective feature flagging technique we used on a recent project. Our team decided to use feature-flagging to hide new features at the UI level using CSS. The mechanism to toggle the features on and off involved query strings and some simple javascript.

step 1: extracting feature flags from the query string

First we needed some way to specify whether a feature flag was on or off. We used query params to do this. These query params were totally ignored by the server-side code. They were just there so that javascript on the client side could inspect them at page load.

function parseQueryParamPart(part){
  var keyAndValue = part.split("=");
  return {
    key: decodeURIComponent(keyAndValue[0]),
    value: decodeURIComponent(keyAndValue[1] || "")
  };
}

// returns an array of {key:"",value:""} objects
function getQueryParamKeyValuePairs(searchSection){
  var params = searchSection.slice(1).split("&");
  return _.map( params, parseQueryParamPart );
}

function getFeatureFlags(){
  var queryParamKeyValuePairs = getQueryParamKeyValuePairs( window.location.search );

  return _.compact( _.map( queryParamKeyValuePairs, function(pair){
    if( pair.key === 'ff' )
      return pair.value;
    else
      return undefined;
  }));
}

Here we do some simple parsing of window.location.search with the help of underscore.js (my favorite javascript library of all time). We find all query params that have a key of ‘ff’ and return the values in an array. Note that query params may have multiple identical keys; we handle that by representing the parsed key-value pairs as an array of key-value pairs, rather than the simplistic approach of creating a hash table directly from the query string.

step 2: expose feature flags as classes in the DOM

$(function(){
  _.each( getFeatureFlags(), function(featureFlag){
    $('html').addClass( "ff-" + featureFlag );
  });
});

Here we use jQuery to register a function on document ready. This function will grab all the feature flags specified in the query params and then add a class to the doc’s <html> element for each flag, prepended with ‘ff-’ to avoid any naming collisions with other CSS classes.

step 3: show or hide UI elements with CSS

Our goal here is to control whether or not a feature is exposed to an end user. A lot of the time we can hide a feature by simply applying a display:none to the right DOM element. Because we added our feature flag classes to the root <html> element we can apply simple CSS rules like the following:

.search-wrapper {
  display: none;
}

.ff-search .search-wrapper {
  display: block;
}

This will hide the search-wrapper element (and anything inside of it) by default, but will show it if there is a parent with a class of ff-search. Combined with the javascript above this has the effect that search functionality will be hidden unless you pass in a query param of ff=search when loading the page. There we have it, a simple feature flag.

Further refinements

This simple approach works quite nicely for a single-page app (which is what our team was developing), but it does have some issues in other cases. The main problem is that any feature flag you specify in a query param will be lost whenever you leave the current page. This could be remedied by storing feature flags in a cookie or in HTML5 local storage. You’d then check that store in addition to the query params during page load. Presumably a flag specified in a query param would override anything in the store.
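One way to sketch that refinement, assuming HTML5 local storage is available. The function names here are illustrative, and `storage` stands in for anything with `getItem`/`setItem` (such as `window.localStorage`):

```javascript
// Hypothetical persistence layer for feature flags, so they survive
// navigation between pages instead of living only in the query string.
var FLAG_STORE_KEY = "featureFlags";

function loadStoredFlags(storage){
  var stored = storage.getItem(FLAG_STORE_KEY);
  return stored ? JSON.parse(stored) : [];
}

function saveFlags(storage, flags){
  storage.setItem(FLAG_STORE_KEY, JSON.stringify(flags));
}

// flags from the query string take precedence and refresh the store;
// with no query params we fall back to whatever was stored previously
function getEffectiveFlags(storage, queryFlags){
  if( queryFlags.length > 0 ){
    saveFlags(storage, queryFlags);
    return queryFlags;
  }
  return loadStoredFlags(storage);
}
```

At page load you would call something like `getEffectiveFlags(window.localStorage, getFeatureFlags())` and feed the result into the class-adding code from step 2.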

I’ve only covered using a feature flag to drive CSS changes, but you could easily expose a simple javascript API which would allow developers to plug in different code based on whether a flag was off or on. In sophisticated use cases you would probably use feature flags to drive the initial wiring up of your code’s dependencies during boot.
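A minimal version of such an API could just wrap the parsed flag array in a checker function. The names below are hypothetical, not part of the implementation described above:

```javascript
// Hypothetical convenience API: build a checker once from the parsed
// flags, then let application code branch on individual flag names.
function makeFeatureChecker(activeFlags){
  return function(flagName){
    return activeFlags.indexOf(flagName) !== -1;
  };
}

// usage sketch, e.g. with the result of getFeatureFlags()
var isFeatureEnabled = makeFeatureChecker(["search"]);
if( isFeatureEnabled("search") ){
  // wire up the search-specific code path here
}
```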