
Being Agile

Pete Hodgson's blurgh

Advice for the Founding Engineer, Part 1


The founding engineer at a new startup carries a heavy weight upon their shoulders. Technical decisions made at this early stage resonate for a long time. Beginnings are such delicate times, as they say. Compounding this is an underlying tension inherent to every young enterprise - we must get an initial product out of the door ASAP but still support what comes after the MVP. Grounding your choices against a small set of guiding principles will help keep them sensible and coherent.

Witnessing these early decisions - and how they eventually play out! - has led me to 3 core maxims which I believe will serve the founding engineer well. In this series of posts I’ll introduce each maxim along with some more concrete guidance that emerges from it.

We’ll start off with my first maxim - “You’ll Never Know Less Than You Know Right Now”. In later posts I’ll cover the other two: “Optimize For Iteration” and “Boring Is Good”.

Maxim 1: You’ll Never Know Less Than You Know Right Now

The very nature of a startup is an exploration of unknown terrain. What appears to be a sure fire hit today will turn out to be a dud in a month’s time. What seems like an unimportant detail this week may end up being a core part of your product next year. To succeed in this environment you must embrace the fact that you’re operating with very limited knowledge of what the future holds, and act accordingly.

Be humble.

As a founding engineer you are almost certainly doing things you’ve never done before, while at the same time operating in uncertain terrain. You will make mistakes rooted in second order ignorance - not knowing what you don’t know. Embrace that and try to minimize the risk by building up resources that you can humbly seek advice from. Experts in your network, company advisors, a technical mentor, your co-workers, a consultant guru. All of the above can help you validate your thinking and point out when you’re drifting into Dunning-Kruger territory. It’s one thing to be a novice driver, it’s quite another to be a novice driver who thinks they’re an expert.

A huge area for inefficiency in early startups is re-inventing a wheel you didn’t know you were re-inventing. Sometimes this is due to a misplaced desire to build everything in-house - something we’ll cover later on - but more often it’s simply because the engineers didn’t know that they were attacking a problem that had already been solved (and that solution probably open-sourced). When you see an interesting problem your first instinct as a founding engineer should not be “how might we solve this problem”. It should be “What’s the name for this class of problem and what existing tools and libraries can I use to help solve it”.

Don’t be afraid to pull in experts once you know that you’re operating in a well-understood problem space where you don’t have much experience. If your MVP requires a rules engine and you’ve never worked with a rules engine before, find a hired gun who can build out an initial version and teach you the ropes at the same time. Don’t stubbornly beat your head against a wall learning everything from first principles when an expert could do a much better job in 1/10th of the time.

Pull risk forward.

Technical uncertainty will not be uniformly spread throughout your system. Some parts will have an obvious approach, while in other parts the optimal solution will be unclear. Some examples of sources for technical uncertainty include building out an integration with a 3rd party API that you’ve not worked with before, or creating a novel algorithm that needs to be invented and refined.

In a startup where many things about the future are uncertain there is a lot of value in reducing uncertainty where you can. So, do what you can to “pull forward” work on those uncertain areas. Starting on these areas early allows you to move from an unknown risk - “we don’t really know how hard this will be” - to a (roughly) understood risk - “it is probably going to take about 6 weeks to get a workable implementation of this”. This is the architectural equivalent to validating your product hypothesis by building a minimal version of the product. Validate your technical hypothesis as soon as you can by pulling forward implementation of that piece of tech.

Since removing technical uncertainty is a valuable outcome in and of itself, take it into account when prioritizing product features. Consider pulling forward a feature if it allows you to explore a technical solution you’re unsure of. This doesn’t mean you have to build the entire feature out - just enough to gain understanding of the terrain.

Consider doing “spikes” of throw-away work when exploring new technical terrain. Treat the work as a throw-away prototype - this allows you to go fast and maximize for learning. Once you’re done exploring don’t succumb to the sunk-cost fallacy and attempt to turn your prototype into your product. Throw it away and re-build using what you learned from the prototype. What you build the second time around will be much more robust and you’ll probably get it done in half the time thanks to the understanding you gained from the spike.

Build for now, keep an eye on what’s coming.

Engineers have a strong desire to build generalized abstractions. But remember, you are in a deeply uncertain world. Embrace the fact that you’ll never know less than you know right now. Resist the urge to build what you “know” you’ll need in 3 months’ time just because it seems “more efficient” to build it all in one go now. 3 months is a very long time in startup land.

In building something that you don’t need now because you “know” you’ll need it in the future you start paying various costs now on the basis of an uncertain return on investment in the future. Defer the cost by deferring the creation of a generalized solution. Solve for what you need now. By the time you actually need a generalized solution you’ll understand the problem space a lot better, and will create a superior solution in less time.

That said, it is important to have an awareness of where you think you’re going. Let’s say you’re building out user notifications. You’re starting with email today, but you “know” you’ll need to support SMS and in-app notifications soon. You should not build the SMS and in-app notification capability until you actually need it. You should also not build a generic notifications system which initially only has one implementation for email - you will inevitably get the abstraction wrong. You should build just the email notification capability you need today, but do so with an understanding that you expect to eventually generalize. Create a focused solution but with design seams in place so that you can enhance it to support other channels once you get to SMS and in-app. Even when you believe you understand the seams of the abstraction you’ll likely still need to revise your approach when you actually build the other channels, but hopefully you will have architectural boundaries in roughly the right places to make those small modifications relatively easy.
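A minimal sketch of this “focused solution with a seam” idea might look like the following. All of the names here (`sendEmail`, `notifyUser`) are hypothetical, invented for illustration:

```javascript
// Today's only channel, kept behind its own function boundary (the "seam").
function sendEmail(user, message) {
  // In a real system this would call your email provider's API.
  return { channel: 'email', to: user.email, body: message };
}

// The rest of the codebase talks to this dispatch point, never to
// sendEmail directly. When SMS and in-app notifications arrive, only this
// function needs to learn about routing to multiple channels.
function notifyUser(user, message) {
  return sendEmail(user, message);
}
```

The point is not the (trivial) code, but the boundary: callers depend on `notifyUser`, so generalizing later is a local change rather than a sweep through the codebase.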

Design for 10x, consider 100x, don’t support 1000x.

When deciding on a technical approach it’s important to plan for growth, but you must keep in mind that there are very real trade-offs when you design for scale. An architecture which supports 1000 transactions a second costs more than one which supports 10 transactions a second. The same feature will take longer to develop and cost more to operate and maintain in a 1000 tps system than a 10 tps system. Prematurely scaling your architecture will slow down your ability to deliver features without delivering any benefit (until you get close to that scale).

Google’s approach is to design for 10x your current scale, but expect to rewrite before you hit 100x. This doesn’t mean you shouldn’t think about the 100x scenario - you should consider it - but you should not expect to be able to predict what your 100x architecture will look like. You’ll never know less than you do now, remember. Instead, focus on building a modular architecture where you can pull out the part that isn’t scaling and replace it with a more scalable design without massive impact on the rest of your system. Re-writing systems is expensive, so try to modularize along sensible boundaries in order to contain the cost of the rewrite.

Stay tuned for more

This post covered my first maxim - “You’ll Never Know Less Than You Know Right Now”. Coming up in the series I’ll talk about two more - “Optimize For Iteration” and “Boring Is Good”. Follow me on twitter or subscribe to the blog feed to be updated as I publish the rest of the series.

The JS Testing Ecosystem Needs a Standard Matcher Library


The testing ecosystem in JavaScript today is full of tools which keep re-inventing the same wheel - Matchers. If we can extract this concept from individual tools into a general abstraction we can focus our energy on building a much more expressive API, and provide a common ground for re-use and extension. Read on to find out why.

Matchers when making assertions

Unit-testing JavaScript code means making assertions about the shape of things. We make simple assertions like expect(result).to.eql("SUCCESS") all the time. We often make slightly more complex assertions like expect(result).to.have.property('resultType','SUCCESS') or expect(result.count).to.be.above(0). Some tools allow us to make even more sophisticated assertions like:

expect(result).toMatchObject({
  resultType: 'SUCCESS',
  message: 'game saved',
  referenceCode: referenceCodeSentToServer
});

The general pattern here is we’re passing a matcher to an assertion. We’re saying “I expect this thing to match this specification”. The most common form of match is equality, but we also say things like “I expect this thing to contain this property with this specific value”, or “I expect this thing to contain these properties, but maybe additional properties that I don’t care about”.

Matchers when configuring mocks

Sometimes we create mock functions when testing, either using a mocking library such as sinon.js or testdouble.js, or using the built-in capabilities that come with frameworks like Jest and Jasmine. When configuring mock functions we also use the concept of matchers, in a slightly different way. We might say callback.withArgs(12,'foo').returns(true), which means “when this function is called with the arguments 12 and 'foo', return true”. Same general matcher concept - “when we see a set of arguments that match these specifications, return this value”.

Sometimes we need to loosen up our configuration and say “when this function is called with anything at all as the first argument and 'foo' as the second argument, return true”. That’s where taking that generalized matcher concept and applying it in this new context of configuring mock functions becomes really handy.
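To make that concrete, here’s a hand-rolled sketch of the matcher idea applied to mock configuration, with no library dependency. The helpers `any`, `eq`, and `stubReturning` are invented for illustration; sinon.js expresses the same loosened configuration as `callback.withArgs(sinon.match.any, 'foo').returns(true)`:

```javascript
// A matcher is just a predicate over a single argument.
const any = () => true;                          // accepts anything
const eq = (expected) => (actual) => actual === expected;

// Build a fake function that returns `value` when its arguments
// satisfy the given list of matchers, and undefined otherwise.
function stubReturning(matchers, value) {
  return (...args) =>
    matchers.every((m, i) => m(args[i])) ? value : undefined;
}

const callback = stubReturning([any, eq('foo')], true);
callback(12, 'foo');   // → true
callback(99, 'foo');   // → true  (first argument can be anything)
callback(12, 'bar');   // → undefined (second argument doesn't match)
```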

Matchers against async results

I can also think of a third context where it’s nice to have a generalized concept of matchers - testing async code which returns promises. A typical approach might be:

doTheThing().then(function(result){
  expect(result).to.eql("SUCCESS");
});

This gets a little clunky, so some tools enable us to instead say things like:

expect(doTheThing).to.eventually.equal("SUCCESS");

That’s nice, but what if we have more complex assertions to make on that result? We’re often forced back to something clunky:

doTheThing().then(function(result){
  expect(result).to.have.property('resultType','SUCCESS');
  expect(result).to.have.property('message');
});

Problem #1 - Learning multiple APIs

Today every testing tool that does matching does it with its own unique implementation. That means that even if you always use the same testing stack you’ll still end up using two matcher implementations - one for your test assertions and one in your mock configurations. If you move between testing stacks frequently then you have it even worse. And that’s likely, given that we work in a somewhat fragmented community where React peeps do it one way, Angular peeps another, and server-side peeps 3 additional ways.

This means that you need to learn and remember multiple matcher APIs as you move between codebases. And of course they’re often implemented using cute DSLs which usually target readability over intuitive write-ability.

Problem #2 - Generalized extensions aren’t possible

Let’s say I write a bunch of tests against an API client. I’m sick of writing tests like this:

  const requestedUrl = url.parse(fetchMock.lastUrl());

  expect(requestedUrl).toHaveProperty('host','api.bart.gov');
  expect(requestedUrl).toHaveProperty('pathname','/api/etd.aspx');

  const query = querystring.parse(requestedUrl.query);
  expect(query).toHaveProperty('cmd','etd');
  expect(query).toHaveProperty('key');
  expect(query).toHaveProperty('orig','some-abbr');

Instead I want to create an extension to my testing tooling so I can write something along the lines of:

  expect(fetchMock.lastUrl()).matches(
    aUrl()
      .withHost('api.bart.gov')
      .withPath('/api/etd.aspx')
      .includingQueryParams({
        cmd: 'etd',
        key: anyString(),
        orig: 'some-abbr'
      })
  );

I might well want to also use a similar abstraction to configure a mock function - “when you’re called with a URL where the query parameter cmd is etd, return a list of ETDS. When you’re called with a query parameter cmd of stations, return a list of stations”. Unfortunately since every tool uses its own matcher implementation I would get very little code reuse between those two extensions.

More generally, if I wanted to share this extension with the world as a little open source library it would only be targeted at one specific testing tool. My dream of creating the One True URL Matcher module will never become a reality. The marketplace for sharable extensions to a matcher is so fragmented that it’s unlikely that any one library will gain traction.

Problem #3 - A lack of true composability

Because matcher functionality is often a side-feature of a broad tool the API for matching doesn’t get as much thought as it could. Most (all?) of the popular tools miss out on a key feature of a matcher API - composability. If we treat a matcher as a first class concept - a specification that a value either matches or does not - then we can start composing matchers together. We can create the intersection of two matchers: “the value must be both greater than 0 AND less than 10”. We can create the union between two matchers: “the value must be either a string OR false”. We can parameterize matchers with matchers. We can create an every matcher that specifies that every element in a collection matches some specified matcher: “every element in this array must be greater than 50”. We could also say “at least one element in this array must have an id property of 12”.
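Here’s a minimal sketch of what that composability could look like, assuming a matcher is nothing more than a predicate plus a description. The names (`allOf`, `anyOf`, `everyItem`) are invented for illustration, not from any existing library:

```javascript
// A matcher pairs a human-readable description with a test predicate.
const matcher = (describe, test) => ({ describe, test });

const greaterThan = (n) => matcher(`greater than ${n}`, (v) => v > n);
const lessThan = (n) => matcher(`less than ${n}`, (v) => v < n);

// Intersection: the value must satisfy every component matcher.
const allOf = (...ms) =>
  matcher(ms.map((m) => m.describe).join(' AND '),
          (v) => ms.every((m) => m.test(v)));

// Union: the value must satisfy at least one component matcher.
const anyOf = (...ms) =>
  matcher(ms.map((m) => m.describe).join(' OR '),
          (v) => ms.some((m) => m.test(v)));

// A matcher parameterized by a matcher: every element must match.
const everyItem = (m) =>
  matcher(`every element is ${m.describe}`,
          (vs) => vs.every((v) => m.test(v)));

const between0and10 = allOf(greaterThan(0), lessThan(10));
between0and10.test(5);                        // → true
between0and10.describe;                       // "greater than 0 AND less than 10"
everyItem(greaterThan(50)).test([60, 70]);    // → true
```

Note that because each composite carries its components’ descriptions, the rich failure messages discussed below fall out almost for free.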

This is a huge win in terms of expressivity. Our tests become a lot more declarative, and our assertion failure messages suddenly have a load more context. For that last example if we’re using a composable matcher library the out-of-the-box error message could be Expected at least one element in the following collection to match: have a property 'id' with the value '12', but none did. Imagine what the error message would be with a hand-rolled check for the same assertion. Probably something like expected 0 to be greater than 0. A lot less helpful.

“It’s like Promises, but for matching things”

Introducing a shared abstraction around matching would provide a lift similar to the lift we got with Promises. Promises took a concept - an async operation with an errback and a callback - and turned it into a concrete thing you could work with in code via a standardized API. That meant we could start composing these things together - e.g. Promise.all() - and we could start writing libraries that extended their capabilities - e.g. instrumenting a promise with logging, building coroutine implementations. By formalizing the concept of a matcher we can create a standard API for a general concept and start sharing innovations and tooling built on top of it. Consider the example I gave earlier in this article around using matchers to assert async results. That’s a great example of where we leverage the benefits of two standardized abstractions (matchers and Promises). We can take any type of async operation, and apply any type of matcher against it. And whatever matcher we apply does not need to have any awareness of the async-ness of the result. This is a nice example of two simple concepts implemented orthogonally such that they can be combined cleanly.

The road to a standardized Promises API started with Q (I think?), and became generalized via the Promises/A+ spec. I’m certainly not proposing that we should shoot for a standardized spec for matchers, but I think if we created a high-quality standalone matcher library and made it easy to plug into the existing test ecosystem then we could see some big wins.

An illustrative sketch

Here’s some before and after to illustrate how some tests of an imaginary JSON API client might change if they had a common matcher library available.

Today we might have a test like this:

it('returns a user when given a legit id', function () {
  const theUserId = '1234';
  return apiClient.getUser(theUserId).then(function (result) {
    expect(result).to.have.property('found',true);
    expect(result).to.have.property('user');
    expect(result.user).to.have.property('id',theUserId);
    expect(result.user).to.have.property('firstName').a('string');
    expect(result.user).to.have.property('lastName').a('string');
  });
});

With our matcher library we can turn that series of assertions into one expressive matcher spec:

it('returns a user when given a legit id', function () {
  const theUserId = '1234';
  const expectedResult = M.objectWith({
    found: true,
    user: M.objectWith({
      id: theUserId,
      firstName: M.ofType("string"),
      lastName: M.ofType("string")
    })
  });

  return apiClient.getUser(theUserId)
  .then(function (user) {
    expect(user).to.match(expectedResult);
  });
});

Here we’re leveraging composability to build a reasonably complex matcher for a complex object, assembling the matcher out of a series of component matchers. And because our matcher has a lot more context about the thing it’s asserting against it will be able to give much richer error messages.

But wait, there’s more! Since this is a general purpose library, we can go further and use the same syntax to express an expectation for an async result:

it('returns a user when given a legit id', function () {
  const theUserId = '1234';

  const result = apiClient.getUser(theUserId);
  return expect(result).to.eventually.match(M.objectWith({
    found: true,
    user: M.objectWith({
      id: theUserId,
      firstName: M.ofType("string"),
      lastName: M.ofType("string")
    })
  }));
});

Now let’s see how our shared matcher library might improve a test which is using sinon.js to verify how our API client interacts with the underlying request module which we’re using to make HTTP calls.

Here’s where we’d start today:

it('makes a call with the right path and query params', function () {
  const spyRequestFn = sinon.stub().returns(fakeRequest());
  const apiClient = createApiClient({request:spyRequestFn});

  const theUserId = '4213';

  return apiClient.getUser(theUserId)
    .then(function () {
      expect(spyRequestFn).to.have.been.called;

      const requestParams = spyRequestFn.firstCall.args[0];

      expect(requestParams).to.have.property('baseUrl').a('string');
      expect(requestParams).to.have.property('uri', '/findUser');
      expect(requestParams).to.have.property('qs');
      expect(requestParams.qs).to.have.property('id',theUserId);
    });
});

First of all, we could start by expressing all those expectations against requestParams as a single matcher:

it('makes a call with the right path and query params', function () {
  const spyRequestFn = sinon.stub().returns(fakeRequest());
  const apiClient = createApiClient({request:spyRequestFn});

  const theUserId = '4213';

  return apiClient.getUser(theUserId)
    .then(function () {
      expect(spyRequestFn).to.have.been.called;

      const requestParams = spyRequestFn.firstCall.args[0];

      expect(requestParams).to.match(M.objectWith({
        baseUrl: M.ofType('string'),
        uri: '/findUser',
        qs: M.objectWith({
          id: theUserId
        })
      }));
    });
});

That’s a bit cleaner - and again we’ll get clearer error messages if those expectations aren’t met. We can go further though. Because we’re working with a general purpose library we can also plug it into sinon.js’s own concept of matchers, resulting in a test like this:

it('makes a call with the right path and query params', function () {
  const spyRequestFn = sinon.stub().returns(fakeRequest());
  const apiClient = createApiClient({request:spyRequestFn});

  const theUserId = '4213';

  return apiClient.getUser(theUserId)
    .then(function () {
      expect(spyRequestFn).to.have.been.calledWithMatch(M.objectWith({
        uri: '/findUser',
        qs: M.objectWith({
          id: theUserId
        })
      }));
    });
});

And we’re only getting started here. Since testing the interactions against the request function is something a lot of things do, you can imagine building a higher-level abstraction over the matcher DSL we have here, working in the domain of HTTP requests (URIs, query params, hosts) rather than the generic domain of object matching.
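As a sketch of what that HTTP-domain abstraction might look like, here’s a hypothetical `aRequestTo` builder. Everything in it is invented for illustration (the `M.*` library it would sit on top of doesn’t exist); here a matcher is reduced to a plain predicate over the request params object:

```javascript
// Hypothetical fluent builder for matching request-params objects like
// the ones passed to the request module in the tests above.
function aRequestTo(uri) {
  const spec = { uri, qs: {} };
  const builder = {
    withQueryParam(name, value) {
      spec.qs[name] = value;
      return builder;           // fluent: allow chaining more params
    },
    // Test a request-params object against the accumulated spec.
    matches(params) {
      return params.uri === spec.uri &&
        Object.entries(spec.qs).every(([k, v]) => params.qs[k] === v);
    }
  };
  return builder;
}

const m = aRequestTo('/findUser').withQueryParam('id', '4213');
m.matches({ uri: '/findUser', qs: { id: '4213' } });  // → true
m.matches({ uri: '/other',    qs: { id: '4213' } });  // → false
```

A test written against `aRequestTo` reads in the domain of HTTP requests, while the underlying matching machinery stays completely generic.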

The Symbol-Keyed State Accessor Pattern


In this post I’ll show three different techniques for encapsulating internal state in JavaScript, concluding with my preferred approach, a pattern I’m calling “Symbol-keyed State Accessor”.

I was in a discussion about how to implement the Value Object pattern in JavaScript recently. Most of the examples folks were proposing involved a prototypical approach to OOP, using either class or prototype to create a Value Object type, and managing the object’s internal state via this. Here’s how one might build the basics of a Money Value Object in that style:

class-based-money.js
module.exports = class Money {
  constructor(amount,currency){
    this._amount = amount;
    this._currency = currency;
  }

  currency(){
    return this._currency;
  }

  amount(){
    return this._amount;
  }

  equals(other){
    return this._amount === other._amount
      && this._currency === other._currency;
  }
}

Of note is that we’re implementing a custom value equality method. Equality-by-value is an important aspect of the Value Object pattern.

As I’ve previously noted, I’m not a fan of traditional JavaScript OOP. I started sketching out what a closure-based approach might look like:

closure-based-money.js
module.exports = function Money(_amount,_currency){
  function currency(){
    return _currency;
  }

  function amount(){
    return _amount;
  }

  function equals(other){
    other = other._getState();
    return _amount === other._amount
      && _currency === other._currency;
  }

  function getState(){
    return {_amount,_currency};
  }

  return {
    currency,
    amount,
    equals,
    _getState: getState
  };
}

This is mostly a vanilla example of how to build a type in JavaScript without using prototypical inheritance. Again, something I’ve discussed in depth previously. However there is one wrinkle here. Usually I would want to avoid exposing a Money instance’s internal state at all - one of the big advantages of the closure-based approach is the encapsulation it provides by preventing direct access to internal state. However in order to implement the equals(...) method we need some way to compare our state to another Money’s state, which means we have to expose that state somehow. In this example I enabled this by adding a _getState() method. The leading underscore is a common (if ugly) convention used to indicate that this is not part of the type’s public API. We saw a similar convention in our first class-based example too.

Hope is not a strategy

Having to use a leading underscore is exactly the sort of thing that I dislike about this-based JavaScript code. Unfortunately it didn’t seem to be avoidable here, even with a closure-based approach that doesn’t use this at all. Let’s unpack that a bit. Most object-oriented languages provide the ability to explicitly mark parts of an API as private or internal; only accessible to an instance of the same class. JavaScript is missing this feature, which has led to a lot of folks faking it, resorting to techniques like naming things with leading underscores in the hopes of persuading consumers not to bypass our public API. Unfortunately in my experience this hope turns out to be a little naive in the face of engineers with a tight deadline and a somewhat optimistic belief that they’ll “come back and fix this later”.

A big reason for me preferring closures over this is that we can mostly demarcate internal state and behaviour by using lexical scoping. This prevents any access to internals from outside the scope of the closure which instantiates an object. We can usually avoid reliance on hope as a strategy. Unfortunately this protection is too strong in this case - it prevents other instances of the same type from accessing a private API, since they are outside the lexical scope of the closure which created the instance. And thus I had to fall back to exposing state but marking it as internal with that leading underscore.

Symbol-keyed properties

What I needed was some way to expose state to other instances of the same type but to nothing else. Eventually I remembered that JavaScript gained a new feature in ES6 that was designed for just this scenario - the Symbol type.

When a property is added to an object using a Symbol as a key - a symbol-keyed property - it doesn’t show up in things like for .. in loops, or in calls to Object.keys() or Object.getOwnPropertyNames(). It’s also (almost) impossible to access a symbol-keyed property if you don’t have access to the Symbol itself. We can take advantage of this feature along with JavaScript’s lexical scoping rules to create an encapsulated object property which is only accessible by code which shares a lexical scope with the Symbol which keys the encapsulated property.
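A quick demonstration of that behaviour (nothing here is specific to the Money example, it’s just the core Symbol mechanics):

```javascript
// A symbol-keyed property is invisible to normal enumeration.
const secret = Symbol('secret');

const obj = {
  visible: 1,
  [secret]: 'hidden state'
};

Object.keys(obj);     // → ['visible']  (the symbol key doesn't show up)
JSON.stringify(obj);  // → '{"visible":1}'  (nor does it serialize)
obj[secret];          // → 'hidden state'  (needs the symbol itself)
```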

Here’s how we leverage this to implement a better version of our Money type:

money-with-symbol-keyed-encapsulation.js
const stateAccessor = Symbol('Money#getState');

function stateFrom(money){
  return money[stateAccessor]();
}

module.exports = function Money(_amount,_currency){
  function currency(){
    return _currency;
  }

  function amount(){
    return _amount;
  }

  function equals(other){
    other = stateFrom(other);
    return _amount === other._amount
      && _currency === other._currency;
  }

  function getState(){
    return {_amount,_currency};
  }

  return {
    currency,
    amount,
    equals,
    [stateAccessor]: getState
  };
}

Note that within this file anything can access any Money instance’s state using the stateFrom() function, which in turn uses the stateAccessor symbol. You can see that we use stateFrom() in equals(other) to access the other Money instance’s state. However outside of this file there is no way to access either stateFrom or stateAccessor (due to lexical scoping), and therefore no way to access any Money instance’s internal state.
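Here’s how the type behaves from a consumer’s point of view (a condensed copy of the Money type from above is inlined so this usage sketch is self-contained):

```javascript
// Condensed copy of the symbol-keyed Money type from above.
const stateAccessor = Symbol('Money#getState');
const stateFrom = (money) => money[stateAccessor]();

function Money(_amount, _currency) {
  const equals = (other) => {
    const s = stateFrom(other);
    return _amount === s._amount && _currency === s._currency;
  };
  return {
    amount: () => _amount,
    currency: () => _currency,
    equals,
    [stateAccessor]: () => ({ _amount, _currency })
  };
}

const five = Money(5, 'USD');
five.equals(Money(5, 'USD'));  // → true  (value equality across instances)
five.equals(Money(5, 'GBP'));  // → false
Object.keys(five);             // → ['amount', 'currency', 'equals'] - no state exposed
```

Value equality works across instances because they all close over the same module-level symbol, yet consumers of the module see only the public API.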

Well, actually

Technically in some cases you can access a symbol-keyed property without access to the original symbol, but it’s pretty dirty:

hackery.js
const someMoney = Money(5,'USD');
const privateProperties = Object.getOwnPropertySymbols(someMoney);
const stateAccessor = privateProperties[0]; // let's hope our stateAccessor is always the first property!
const privateState = someMoney[stateAccessor]();

In my opinion this is gross enough that anyone violating an object’s privacy this way would be aware that they are really going against the grain of that object’s API and should be responsible for the consequences.

DIY Stir Plate for Homebrew Yeast Starters


I brew beer as a hobby, and one of the secrets to really good homebrew is happy, healthy, plentiful yeast at the start of fermentation. Creating a yeast starter really helps with this, and a good yeast starter requires a stir plate to keep the yeast in suspension.

Stir plates aren’t super expensive, but at around 100USD they’re not something you’d buy on a whim. I found a lot of information online on how to make your own DIY stir plate using an old computer fan and some magnets, and since I also like tinkering with electronics I decided to give it a go. After a couple of design iterations I’m really happy with how this project turned out.

The internals

This is a pretty simple piece of electronics. In terms of components we have:

  • a $10 aluminum project box
  • a 12 volt computer fan (sometimes called a muffin fan) liberated from an old computer
  • a 9 volt DC power supply which I scavenged from somewhere (note NOT 12V, which would spin the fan too fast for our purposes)
  • a $3 panel mount barrel jack for the power supply
  • a panel mount 25 ohm potentiometer, plus knob (maybe $2?) to act as a voltage divider to control the fan’s speed
  • a $0.50 panel mount switch for a power switch
  • a steel washer which I hot-glued to the computer fan
  • two rare-earth magnets ($4 for 6), which are stuck to that washer just with magnetism.
  • a few strands of wire to build our simple circuit

Note that a lot of the project writeups I found online were using magnets salvaged from an old computer disk drive. If you’ve got one of those lying around and have always wondered what the inside looks like then go ahead and salvage them. Initially I did this myself, but ended up going with the ones I bought for $4 since they seemed to work better for my setup.

Here’s a circuit diagram showing how the electronics are assembled.

Pretty simple, which is good for me since I’m not an electronics whiz. I have read in another guide that it’s not a good idea to use just a simple pot-based voltage divider to adjust the speed since it will be pulling a lot of wattage when the speed is turned down. However with my current configuration I usually leave the stir plate at pretty much full blast anyway with a 9V supply, so I’m not too worried.

Building it

I hot-glued a steel washer to the computer fan, then attached the two magnets about 1” apart. Next I did a dry-run assembly of the circuit and tested that the fan spun and I could control its speed with the potentiometer. Then I put the fan in the project box and tested that it was able to spin a small magnetic stir bar inside my Erlenmeyer flask filled with water.

Positioning the magnets

Getting the magnets placed correctly is probably the most tricky part of this. First off, it’s important to make sure that you place the magnets with opposing polarities. In other words, one with the north pole pointing up, the other with the south pole pointing up. You’ll know if you get this wrong because the stir plate will “throw” the stir bar when it’s running. I also had to play with the distance between the magnets, the size of the stir bar (I also had a 2” bar which tended to get thrown), and most importantly the distance between the base of the flask and the top of the magnets. You can probably reduce the distance by putting some sort of spacer under the fan, or under the magnets. You can perhaps increase the distance by putting something like these stick-on grippy dots on the surface that the flask sits on to raise it a little higher. You could also experiment with adding more magnets, either stacked or placed at 12, 3, 6, and 9 o’clock. I did briefly try stacking a second pair of magnets in an attempt to allow the stir bar to spin EVEN FASTER!!!!! without throwing but it actually had the opposite effect. A lot of these adjustments were counterintuitive, so I’d suggest just playing around until you find something that works. Finally, do make sure you test the setup with a stir bar in the flask you’ll use, with liquid in there.

Once I had figured all that out and had a fully functional setup I started assembling the stir plate for real. I started off by attaching the computer fan to the base of the project box with a square of foam mounting tape. Then I drilled holes in the aluminum project box for my panel-mount power switch, potentiometer and barrel jack. I used a small drill bit to make a pilot hole followed by a step drill bit to widen the hole. Once I had all the components in place I just needed to solder a few wires and the project was pretty much complete.

For a final touch I added some clear grippy plastic bumpers to the base of the box to help reduce vibration noise when the stir plate is running.

All in all this probably took me 4 hours of time, spread over a couple of weekends. I’m really very happy with the result. It works well and it actually looks like a professional job, something that’s very rare for things I make with my own hands! It even got a seal of approval from my Junior Homebrewer friend.

The Ultimate Traveler’s Coffee Kit

| Comments

I travel a fair bit for work and have discovered that when you’re living in a hotel it’s the simple comforts that you miss the most. Near the top of my list of simple comforts is a good cup of coffee soon after I wake up. The hideous coffee tea-bag things that most hotel rooms provide barely count as coffee, let alone good coffee. A year or so ago I set out to find a way to make good coffee in a hotel room without using up a ton of luggage real-estate. Today I am very happy with my setup, apparently to the point that I’m OK with writing a blog post about it titled “The Ultimate Traveler’s Coffee Kit”.

Just to lay my cards on the table, I’m a fan of good coffee but I’m not a serious coffee geek. At home I do have a nice burr grinder but I don’t own a goose-neck kettle and I’ve never used a scale or a stopwatch while brewing coffee. Given that, this travelling setup works great for me.

The kit

My kit consists of:

  • an Aeropress, with a reusable stainless steel filter and a silicone cap
  • a small enamelled metal camping mug
  • a Hario Slim hand grinder
  • an Eagle Creek pack-it bag to hold it all

Plus a couple of optional extras:

  • a cut-down chamois towel
  • some tea bags, for when I don’t want coffee

This setup packs almost perfectly into a small Eagle Creek pack-it bag that I found at REI. I won’t admit to the inappropriate amount of time I spent in the packing-paraphernalia aisle trying out different bags before I settled on this one.

I’m very happy with how compact this whole setup is. It doesn’t take up much room in a suitcase at all. It’s also great for backpacking.

The details

Many coffee geeks use the Aeropress as their standard brewing method, and for traveling it’s a great compact option. I ran out of the little paper filters one too many times while traveling so now I use a stainless steel filter. When staying in a hotel you usually only have access to paper coffee cups which seem prone to being crushed when pressing down on the aeropress, so I pack a little enamelled metal camping mug too.

There’s a surprising variety of aftermarket parts you can buy to enhance your Aeropress Experience. As well as the reusable filter I also have a little silicone cap which covers the plunger tube, allowing me to pack coffee beans inside. I get about 4 brews worth of beans in there - just enough to start my mornings off right during a short business trip.

By pushing the plunger through the outer tube the wrong way then placing the filter and filter cap on the narrow end the whole lot fits inside the camping mug. Aeropress, beans and cup in a very small package.

Initially I would bring pre-ground coffee with me when travelling but found that it didn’t taste so great by the end of a trip. The fact that the aeropress requires a pretty fine grind probably makes this worse. Eventually I bit the bullet and upgraded to a small Hario Slim hand grinder. It’s a bit of a pain to grind the beans every morning, but the results are worth it. The Hario Slim is teeny - only a bit bigger than a large pepper grinder - and I’ve actually found it easier to grind with than the larger Hario Skerton, perhaps because you can get your hand all the way around it to hold it steady while grinding.

The grinder breaks down small, and along with the aeropress the whole setup fits perfectly inside my bag. If I wanted to pack extra beans I could add maybe two brews worth in the top section of the grinder which has a little lid to keep the beans from wandering.

With the grinder, aeropress and cup in the bag I have just enough room left for some tea bags and a little cloth. The cloth is to dry off the equipment - you might feel guilty about the brown stains you leave on those nice white hotel towels otherwise.

Brewing in a hotel room

Brewing coffee requires hot water. It turns out that a lot of hotels will provide you a small electric kettle if you ask. If that’s not an option you can use the crappy coffee machine in the hotel room to boil water. Be warned though that you won’t want to use that water to make tea - it invariably has a subtle taste of old coffee. While I wait for the water to boil I measure and grind my beans. I use the bottom of the grinder to measure a dose and a half of beans by filling the beans up halfway between the 1 mark and the 2 mark. I have my Hario Slim grinder set at 9 clicks looser than fully tightened, based on advice I read on the Interwebs. I use the inverted technique with my aeropress, and wait about 30 to 60 seconds before flipping it. Then I plunge the coffee into my little camping mug and top up the mug with hot water. After that I squeeze the plunger down hard into the hotel bathroom sink to compress the “puck” - this means fewer coffee grounds to clean off of the filter. Then I unscrew the filter cap and push the puck into the trash. Don’t forget to take the stainless filter off first! I rinse the equipment in the sink and leave it to dry on top of my little towel. Finally I sit at my teeny tiny hotel desk and enjoy a really good cup of coffee while I start my day.

Video for My different.js Talk

| Comments

I had the opportunity to present at Forward JS a few weeks ago on “different.js”. No, the presentation wasn’t about a new, shiny, artisanally-crafted micro-framework. I talked about how to “level up” at JavaScript by introducing artificial constraints which force you to approach things a different way. I worked through a specific example, abstaining from the use of this and prototypes in favor of closures.

Slides for the talk are here, video is below (thanks New Circle).

Optimizing 3rd Party JavaScript Loading

| Comments

I was doing some research today on how to prevent slow or flaky 3rd-party JavaScript files from dragging down your perceived page load times. Things like tracking pixels and chat widgets are generally loaded via script tags, and if you’re not careful then one rogue provider can create a really poor experience for users on your site.

I don’t have any original thoughts here, just a collection of resources which I found that summarize techniques I’ve heard about in the past but not had to use directly until now.

General best practices

I hate that phrase, but oh well.

Async function queuing

That’s the technique of deferring loading the bulk of a 3rd-party library; while it’s loading, client code adds what it wants to do to a ‘queue’ of deferred operations, which are then executed once the 3rd-party library eventually loads. Google’s async analytics tag is a well-known example of this technique.

It’s well described in this StackOverflow post.
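Here’s a minimal sketch of the pattern. The `_q` global and the command names are made up for illustration; the second half simulates what the 3rd-party script would do once it arrives:

```javascript
// Before the 3rd-party script arrives, the global is just a plain
// array: client code queues commands instead of calling the library.
var _q = _q || [];
_q.push(['track', 'pageview']);
_q.push(['setUser', 42]);

// --- The lines below simulate what the 3rd-party script does when it
// --- finally loads: drain the queue, then swap in an object whose
// --- push() executes commands immediately.
var executed = [];
function execute(cmd) { executed.push(cmd[0]); }

var queued = _q;
_q = { push: execute };            // later pushes run right away
for (var i = 0; i < queued.length; i++) { execute(queued[i]); }

_q.push(['track', 'click']);       // executes immediately now
```

Because the queue is just an array with a push method, client code looks identical before and after the library loads - which is why Google’s `_gaq.push(...)` calls could be sprinkled through a page without caring whether ga.js had arrived yet.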

Using ‘async’ and ‘defer’ script tag attributes

As usual, Ilya Grigorik does a great job covering how to do async script loading.

Chris Coyier at CSS-Tricks covers similar ground too.

Don’t forget that async and defer are not available in all browsers. Note that as Ilya mentions in his article, you should add both defer and async for a safe and backwards-compatible approach.

Rails as an SOA Client

| Comments

I got to present a talk at RailsConf this year. Video is here, slides are here.

My talk was about Rails as an SOA client. I talked about things like Conway’s Law, Contract Testing, Hexagonal Architecture, the faraday gem, and HTTP Caching.

The 5 Rules of an Awesome Dev Toolchain

| Comments

Every ThoughtWorks project I’ve ever been on has a script in the root of the project repo, often named ./go or ./go.sh (or ./go.bat if you’re very unlucky). This script often starts life as the script which the CI server runs to test the project. It soon acquires extra capabilities. It becomes the way by which developers perform a pre-commit check before pushing their changes. It becomes the script to run when you first get started on the project to download all the various project dependencies.

./go can eventually become the primary interface into the dev tooling for the project. At that point it becomes a very powerful productivity boon for the team. Whenever a team is tempted to put some process or workaround into a README or a wiki page they instead automate it and put it into the ./go script. The project team’s tribal knowledge on how to perform certain tasks is automated, readily available, and under source control.

Some examples of things I’ve often seen added to a ./go script are:

  • download software dependencies needed for dev tooling, or at least detect that a required dependency is missing and provide instructions on how to install it.
  • set up or update an IntelliJ configuration for a JVM project.
  • do the rvm/bundler dance for a ruby project.
  • run a standard set of pre-commit tests

What makes a ./go script great?

I’ve spent some time thinking about what makes a good ./go script good. Here are my rules for an awesome ./go script, and thus a low-friction dev toolchain.

1. don’t make me think

Make your dev toolchain as low-friction as possible by making ./go as low-friction as possible. Give me sensible defaults. Give me clear error messages when something goes wrong, including hints on how to fix the problem. If there are two operations which need to be performed one after the other, give me a higher-level operation which does them both (for example, wrap vagrant destroy -f && vagrant up into ./go vagrant-nuke). Give me a list of all the operations available to me, with a brief description of what they do.

The git cli is a good example of what you should be striving for.
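As a rough illustration, a bare-bones ./go in this spirit might look something like the following. The task names and messages are invented - the point is the shape: a sensible default, a task listing, and one obvious place to add higher-level operations.

```shell
#!/usr/bin/env sh
# Bare-bones ./go skeleton: one entry point, self-documenting tasks.
set -eu

usage() {
  cat <<EOF
usage: ./go <task>

tasks:
  setup    install project dependencies
  test     run the pre-commit test suite
  help     show this list
EOF
}

task_setup() { echo "installing dependencies..."; }
task_test()  { echo "running pre-commit tests..."; }

# Default to showing help rather than erroring out on no arguments.
case "${1:-help}" in
  setup) task_setup ;;
  test)  task_test ;;
  *)     usage ;;
esac
```

Running `./go` with no arguments prints the task list, so a new team member never has to guess what the script can do.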

2. isolate, isolate, isolate!

Try as hard as humanly possible to isolate your toolchain so that it’s maintained within the project. Strive to avoid system dependencies - software that needs to be installed on the workstation itself.

On a Scala project, run your own isolated version of sbt rather than the system sbt. On a ruby project use bundler’s –path option to vendorize your gems and avoid the need for gemsets. On a python project use virtualenv. Check in binary tools like chromedriver or phantomjs.

This is absolutely the best way to ensure consistency across machines, and avoid “it’s only (working|borked) on my machine” issues. Because toolchain dependencies are managed within the project repo this also makes it trivial to upgrade your tooling. No need to coordinate everyone switching versions of a tool.

Even if you can’t get to the point that your tools are fully isolated, your ./go script can still perform checks on shared system dependencies and warn you or outright fail if they are not present, or are not at the right version.
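For example, a small pre-flight helper along these lines can fail fast with a useful hint instead of letting a task die later with a cryptic error (the tool name and hint here are just examples):

```shell
# Fail fast with a helpful message if a shared system dependency
# is missing. Usage: require_cmd <command> <install hint>
require_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    return 0
  fi
  echo "error: '$1' is required but not installed. Hint: $2" >&2
  return 1
}

# e.g. at the top of ./go, before dispatching to any task:
require_cmd git "see https://git-scm.com for installers"
```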

3. uniform interface

Everything is done via ./go. Everything.

Don’t make me remember whether it’s rake test, rspec, or ./scripts/test. Just put everything inside of ./go, even if ./go just immediately shells out to another tool. This is an extension to the “Don’t make me think” rule. It also makes it a lot easier to isolate your tooling. Importantly it adds a level of indirection which is under your project’s direct control. That means that you can modify or even switch your underlying tooling without having to re-educate every member of the team. You can add custom pre-flight checks for required dependencies before performing an operation. You can wrap every operation with benchmarking or instrumentation. You can run every operation inside a VM or container without the person running the script even knowing!

4. one step to a running local dev environment. One.

A new member on the team should be able to run one command to get to a running dev instance on their workstation. The setup steps in your project README should be:

git clone yourProject
cd yourProject
./go

In the worst case where you absolutely cannot automate some aspect of project setup then the ./go script should detect that the manual setup is required and provide clear and detailed instructions on how to perform it.

5. do on your workstation as you do in CI

Push as much of your CI/CD scripting as possible underneath your ./go script. This keeps everything centralized and under source control. It also makes it easy to reproduce and debug CI/CD problems on a local dev workstation, and makes it easier to maintain and extend those scripts.

You get the most benefit here if you’ve aggressively followed the advice to isolate your dev dependencies. You need your local tooling to be the same as that which is running on your CI/CD agents.

This tooling consistency is such a valuable aspect of a good dev toolchain that it’s worth pushing your entire dev toolchain into a virtual machine, with the exception of GUI tools. Combining Vagrant or Docker with something like Chef or Ansible gives you a setup with a strong guarantee of a uniform toolchain across all dev/QA workstations, and also all CI/CD agents. It gives you a very simple way to make sure your toolchain hasn’t drifted from the project norms - just nuke your VM and rebuild from scratch. Rebuilding the VM might take a while but it’ll probably be quicker than debugging the cause of the problem and correcting it. This approach also forces you to automate every aspect of your toolchain setup and reduces the risk of workstation toolchains drifting apart over time.

Running your build in a VM hosted within your workstation may sound extreme, but the amount of time you’ll save not debugging irritating tooling issues is a fine tradeoff for the inefficiency of building within a virtual machine or LXC container.

Microstatic: Radically Simple Static Microsites

| Comments

I’m a big fan of static sites, and have created a fair few little static one-page demos and toy apps over the last few years. I like to host these off of S3 and expose them under subdomains of my domain, thepete.net. In fact, you’re reading one of those S3-hosted static sites right now!

With modern infrastructure like S3 and Route 53 the process of setting up a new microsite is pretty straightforward, but the manual steps started to grate on me after a while. “I’m a dev”, I thought to myself. “Why can’t I automate this?”. And so of course I did.

I started off with some shell scripts around s3cmd, but eventually moved to an assemblage of ruby functions driving the fog gem. Prompted by an internal thread at ThoughtWorks (particularly by @gga) I cleaned things up and packaged this functionality into a gem called microstatic.

Max Lincoln also gave me lots of guidance and good ideas. He was keen for me to use fog so that microstatic could support alternate cloud providers such as OpenStack/Rackspace. I failed to keep to that vision, but I’m very happy to take pull requests to get back on that path.

A gem in two parts

Microstatic does two things. Firstly it provides a command-line tool that makes it ridiculously simple to create a new microsite. Secondly it provides a rake task that makes it ridiculously simple to push new content to the microsite.

Creating a new microsite

microstatic setup and you’re done. This will create a new S3 bucket to hold your microsite, and then add a Route 53 entry to wire that S3 bucket up to a subdomain.

Deploying content to your microsite

rake deploy and you’re done. Microstatic ships with a rake task that will sync your local contents with the S3 bucket hosting your site.

Demo

Here’s a 24 second demo. I’m creating a brand new microsite from scratch, setting up S3 buckets and DNS entries, and finally deploying some initial content to it.

Pretty neat, huh?