Sunday, November 23, 2008

The problem with conventional continuous integration servers

I recently presented a talk on continuous integration at Agile North.

There was one topic in the talk that I think is so important that I decided to write an article about it. Surprisingly, it's a topic that many users of continuous integration (CI) servers (and even the developers of them in some cases!) don't seem to have thought much about.

Continuous integration

This article assumes you already know what continuous integration is.

The problem with conventional continuous integration servers

Imagine there are three developers, Tom, Dick and Harriet, and a continuous integration server. There is some code in a source code repository and these three developers start with the code cleanly checked out.

  1. Tom makes some changes and commits his changes.
  2. The CI server starts running a build.
  3. Harriet makes some changes and commits her changes.
  4. Dick makes some changes and commits his changes.
  5. The CI server reports that the build is OK.
  6. The CI server starts running a build (because there are new changes for it to run the build on).
  7. Tom makes some changes and commits his changes.
  8. The CI server reports that the build is broken.
Here's a diagram representing that:
The question is - who broke the build? (Left as an exercise for the reader.) The problem with conventional continuous integration servers is that they can't tell you. A situation like this is inevitable with the vast majority of continuous integration server installations that actually exist - I know you could install your favourite CI server differently if you had enough money; read on ...
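To make the ambiguity concrete, here is a minimal sketch (with entirely made-up names - no real CI server works exactly like this) of why a conventional CI server can only narrow the culprit down to a set of suspects: a build tests the repository snapshot taken when it starts, so every commit after the last green snapshot, up to and including the red snapshot, is equally suspect.

```java
import java.util.Arrays;
import java.util.List;

public class BlameAmbiguity {

    // A build tests a snapshot of the repository, so the suspects are all
    // commits made after the last green snapshot up to the red snapshot.
    static List<String> suspects(List<String> commits,
                                 int lastGreenSnapshot,
                                 int redSnapshot) {
        return commits.subList(lastGreenSnapshot, redSnapshot);
    }

    public static void main(String[] args) {
        // The story above: build 1 snapshots after Tom's commit (and passes);
        // build 2 snapshots after Harriet's and Dick's commits (and fails).
        List<String> commits = Arrays.asList("Tom", "Harriet", "Dick");
        System.out.println(suspects(commits, 1, 3)); // [Harriet, Dick]
    }
}
```

The server knows the build went red between snapshot 1 and snapshot 3, but nothing it recorded distinguishes Harriet's commit from Dick's.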

BTW - step 7 is a bit of a red herring. It is not necessary for the main point of this article, but it is a common enough situation. Tom thinks he has committed on a green build, but really the build is already broken in terms of the committed code - just not in terms of what the CI server is saying.

Why it is a problem

The problem with not knowing which commit broke the build is that it takes longer to work out who should look at the problem and how they can fix it. If you know which commit broke the build you know who should look at it and they can review their changes to work out why it broke. Furthermore, if you know which commit broke the build (even if you can't work out why), you can revert that change set while the problem is fixed "off line" from the rest of the team.

I am convinced (from years of using CI servers on many teams) that not knowing which commit broke the build is a major contributor to sloppy CI practice - builds staying red for ages, nobody taking responsibility, reduction in commit frequency etc etc. Just knowing which test failed, or "why" the build broke isn't enough. The symptoms don't always tell you the cause. Knowing the cause - i.e. which commit - is what you need to know.

When do you get this problem?

You suffer this problem more as the team gets larger, commits become more frequent, and the length of the build goes up. (Oh, and if developers get sloppier).

You often can't do anything about the size of the team, you'd like to encourage people to commit more frequently and making the build faster can be really hard. (And you might be able to get rid of sloppy developers, but that's quite a different blog post). Ideally the build should be fast, but that is often easier said than done, and even if the build is fast, it doesn't completely eliminate the problem described in this article.


Solutions

Most continuous integration servers don't solve this problem, but some do. These are the solutions that I know about. The CI server can:

a) run the build for the commits (revisions) between the last known good and the first known bad (provided that there is enough capacity in the build farm, e.g. people stop committing while the build is broken).
b) check that the build passes (on a build agent) before committing changes.
c) run multiple (preferably all) commits in parallel on different build agents in a build farm.

build-o-matic does (a) automatically - using a binary search. TeamCity (version 4 EAP) and Pulse (and maybe others) allow running of a previous revision so you can do the equivalent manually (maybe someone will write a plug-in for TeamCity to do what build-o-matic does? There is a precedent).
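For illustration, here is a rough sketch of how (a) can work as a binary search - this is not build-o-matic's actual implementation, and buildPasses is a stand-in for "check out that revision and run the full build". It assumes the break is monotonic: builds pass up to some revision and fail from then on.

```java
import java.util.function.IntPredicate;

public class BuildBisect {

    // Binary-search the revisions between the last known good build and the
    // first known bad one; returns the revision that introduced the break.
    static int firstBadRevision(int lastGood, int firstBad,
                                IntPredicate buildPasses) {
        while (firstBad - lastGood > 1) {
            int mid = (lastGood + firstBad) / 2;
            if (buildPasses.test(mid)) {
                lastGood = mid;   // the break came later than mid
            } else {
                firstBad = mid;   // mid is already broken
            }
        }
        return firstBad;
    }

    public static void main(String[] args) {
        // Pretend revisions 1..6 exist and revision 4 introduced the break:
        // only two extra builds are needed instead of four.
        System.out.println(firstBadRevision(1, 6, rev -> rev < 4)); // 4
    }
}
```

The attraction of the binary search is that it needs only O(log n) extra builds for n intervening commits, which matters when the build is long.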

TeamCity and Pulse do (b) (build-o-matic doesn't - it's a cool feature; I've used it in TeamCity and it works well but you do need a lot of build agents).

build-o-matic does (c). I think TeamCity and possibly some others will too if you have enough build agents. The problem with this approach is that you really might need a lot of build agents, particularly if the team is large, commits are frequent and the build is long.

My preferred solution is to buy enough build agents to do (c) - computers are very cheap. Note however that just because a CI server supports a build farm, even if the build farm is infinitely large, it doesn't necessarily mean it'll run the build for all the commits - check your CI server documentation for details. I believe Bamboo has something up its sleeve on this topic - but I'm not sure if it's public yet (I'll find out and add a comment as appropriate).

There is another solution, which is not to use a CI server at all, but instead have a "build token" or an "integration machine" - i.e. serialize all commits, so that you never commit while the build is running. That only suits small co-located teams with a fast build - but when conditions are suitable, it really does work well!


Conclusion

There are lots of CI servers to choose from. I consider working out which commit broke the build to be a basic minimum feature of a CI server installation, but surprisingly not all CI servers are capable of telling you - and you really need to understand this before you choose a CI server that will leave you with a broken build and no idea which commit caused it.

Copyright © 2008 Ivan Moore

Sunday, November 9, 2008

Clean Code

In my previous article I mentioned the book "Clean Code" - this article is a brief critique of it (in some cases you'll just have to read the book to see what I'm talking about, because I'm not going to rewrite the book here!).

What I liked

The first chapter is excellent - particularly pages 4-6 (buy the book to find out what they contain). I got the same feeling of wanting to get everyone to read these pages as I had from the section about comments in Kent Beck's Smalltalk Best Practice Patterns. I wanted to shout (while shaking people by their lapels) "read this - it's what I've been trying to tell you all this time".

What I didn't like - examples (particularly Chapter 3)

There are some examples that aren't great. The one that stuck out particularly badly was in chapter 3 - HtmlUtil. The problem with HtmlUtil is that it is classic procedural style - which is OK as an example of bad style - but the refactored version does not address that aspect of its badness.

This example is a method on HtmlUtil with signature:

public static String testableHtml(PageData pageData, boolean includeSuiteSetup) throws Exception

with lots of methods called on pageData. It gets nicely refactored into a short method of a different name but with the same signature.

My main problem with this example is that it looks (to someone who doesn't know the codebase it is taken from) as if this should be a method on PageData. Maybe there is a good reason it is a static method on HtmlUtil, but the author doesn't explain whether there is or not. To me it seems like an obvious question, and by not mentioning anything about Utils with static methods that do things to objects that should be quite capable of doing stuff for themselves, the book shows an example of arguably bad code, early on, with no explanation or apology. I assume that they wanted to focus only on procedural refactoring for the example and deliberately wanted to avoid anything object-oriented - but if so, they should have said that very clearly.

The second red flag that this example waved at me was declaring that the method throws Exception. It isn't clear from reading the code that it needs to. Maybe it does, but again I think if it does then the author should have explained why and apologised for that aspect of the example.
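To show what the question amounts to, here is a made-up sketch - these are stand-in classes, not the real FitNesse code that the book refactors:

```java
public class WhereShouldItLive {

    static class PageData {
        private final String content;

        PageData(String content) {
            this.content = content;
        }

        // The alternative: PageData renders itself, no Util class needed.
        String toTestableHtml(boolean includeSuiteSetup) {
            return "<html>" + (includeSuiteSetup ? "[suite setup] " : "")
                    + content + "</html>";
        }
    }

    // The HtmlUtil style: a static method doing things to a PageData that
    // PageData could do for itself. (Direct field access works here only
    // because PageData is nested; real code would need getters.)
    static String testableHtml(PageData pageData, boolean includeSuiteSetup) {
        return "<html>" + (includeSuiteSetup ? "[suite setup] " : "")
                + pageData.content + "</html>";
    }

    public static void main(String[] args) {
        PageData page = new PageData("tea");
        // Both produce the same string; the difference is where the
        // behaviour lives.
        System.out.println(page.toTestableHtml(false));
        System.out.println(testableHtml(page, false));
    }
}
```

The output is identical either way; the question the book leaves unanswered is why the behaviour should live in the static method rather than on the object that owns the data.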

What I didn't like - Structured Programming (Chapter 3)

In the (very short) section on Structured Programming there is a claim about the rules of structured programming: "It is only in larger functions that such rules provide significant benefit." I almost screamed at my fellow passengers on the train into work. My blood pressure is rising as I write this. (Imagine me screaming at the top of my voice - "NO NO NO".)

I don't believe there is ANY benefit, let alone significant benefit, of applying the rules of structured programming to Java code. If there is, then the author should justify their comments rather than invoke the "proof by repeated assertion" that the cargo cult followers will no doubt spout in comments on this article.

What I liked - formatting (Chapter 5)

Lots of good stuff - I didn't agree with it all but some of it has definitely made me rethink (or in some cases just think) about what I'm doing with formatting more than before.

What I didn't like - Chapter 11 (Systems)

Any chapter which starts with an analogy for building a software system as building a city, has already got off to a bad start as far as I'm concerned. Whenever I hear of such analogies I wonder what Swiss Toni's analogy would be. "Building a system is very much like making love to a beautiful woman ..."

Apart from the analogy - the author then goes on to talk about Spring somewhat implying that it's a Good Thing. Just don't get me started. That's a whole other blog post.

Nevertheless, despite these things, there is also some good stuff in this chapter.


Conclusion

Great. Buy it. It's full of good stuff and only very few things that make my blood boil.

Copyright © 2008 Ivan Moore

Saturday, November 8, 2008

Programming in the small - Exceptions

It's a long time since I wrote my previous "programming in the small" article. This article is about Exceptions in Java. I've kept it very short just to cover the absolute minimum.

Handle or throw

What is likely to be wrong with this code?
    public void makeTea() {
        try {
            kettle.boil();
        } catch (ElectricityCutOffException e) {
            LOGGER.log("Didn't pay bill; no tea today.");
        }
        kettle.pourContentsInto(teaPot);
    }

Logging that some exception has been thrown is not handling it. In this case, "pourContentsInto" will still be called even if "boil" threw an exception. The exception indicates a problem, and to keep executing will probably mean that the system is in an unknown, inconsistent or bad state.

In this case, I'll end up with cold water in the teaPot, ruining the tea in an unrecoverable way.

In many cases, catching an exception and not handling it causes bugs that are tediously difficult to track down: the system ends up in an inconsistent state, and the exception that eventually causes the system to fail (or the error in its functionality) shows up somewhere that looks completely unrelated to the code that caused the problem by hiding the exception.

If this method is a sensible place to fully handle the exception, then it should do that. For example:
    public void makeTea() {
        try {
            kettle.boil();
        } catch (ElectricityCutOffException e) {
            butler.payElectricityBill();
            kettle.boil();
        }
        kettle.pourContentsInto(teaPot);
    }

If it can't handle the exception (I don't have a butler), then the best thing is to just let the exception percolate up to the calling code to either handle or throw, that is:
    public void makeTea() throws ElectricityCutOffException {
        kettle.boil();
        kettle.pourContentsInto(teaPot);
    }

Now calling code has to either handle or throw the exception.

Do NOT declare an exception that a method does not throw

If you declare that a method throws a checked exception that it cannot actually throw, then you are condemning callers to having to handle or throw an exception that cannot happen - percolating unnecessary try/catch code around the system. Don't do it.

For a method which implements or overrides a method of an interface or superclass which declares that it throws an exception, do not make it also declare that it throws that exception unless it actually does.
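For example (a made-up interface and class, just to show the language rule): an implementing method may legally drop a checked exception declared by the interface, and callers that use the concrete type then need no try/catch at all.

```java
import java.io.IOException;

public class ThrowsNarrowing {

    interface Source {
        // Reading from a file or socket really might fail.
        String read() throws IOException;
    }

    // Reads from memory: nothing here can throw IOException, so it does not
    // declare it - an override may narrow, but never widen, the throws list.
    static class InMemorySource implements Source {
        public String read() {
            return "tea";
        }
    }

    public static void main(String[] args) {
        // No try/catch needed when using the concrete type directly.
        System.out.println(new InMemorySource().read());
    }
}
```

Callers going through the Source interface still have to handle or throw IOException, but code that knows it has an InMemorySource is not condemned to pointless try/catch blocks.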

Unchecked exceptions and more

I have recently read "Clean Code" - I really like the first chapter - worth buying just for that. There is a chapter on exceptions, in which Michael Feathers writes about the use of unchecked exceptions and other things. I'm not going to write more myself - buy "Clean Code" instead.

If you are too cheap to buy "Clean Code", or have read it and want a different opinion, then read my "programming in the small" articles, which cover some of the same ground (in some cases better, in some cases worse, and not a complete intersection of topics - in particular there is more stuff covered in "Clean Code").

Copyright © 2008 Ivan Moore

Sunday, November 2, 2008

Why are NOJOs so popular?

Following on from my previous articles on NOJOs and their frequent complements, Utils classes, I have talked to colleagues about why NOJOs are so popular in enterprise Java development.

Here I will try to write up some of the ideas we discussed (thanks to Mike Hill, Nat Pryce, Pippa Newbold, Rob Dupuis and Tung Mac).

Education by frameworks

A lot of the early examples in the Spring documentation are NOJOs. For example, on the page that introduces The IoC container there are several (here's one):

    package examples;

    public class ExampleBean {

        // No. of years to calculate the Ultimate Answer
        private int years;

        // The Answer to Life, the Universe, and Everything
        private String ultimateAnswer;

        public ExampleBean(int years, String ultimateAnswer) {
            this.years = years;
            this.ultimateAnswer = ultimateAnswer;
        }
    }
Similarly, have a look at the JavaBeans example on Wikipedia - it is a NOJO. I think that there are a lot of developers who think that JavaBeans are NOJOs - as indicated by one of the comments on my NOJO article.

Fear of using the framework incorrectly (the framework won't like it)

It is possible that developers think that the early examples are the "correct" way to use such frameworks - and are worried that if they add methods to their NOJO then the framework will do something peculiar. This isn't entirely unreasonable as sometimes such frameworks do unexpected things due to the amount of behind-the-scenes-magic going on.

Fear of using the framework incorrectly (my colleagues won't like it)

Another possible fear is that the early examples are the "correct" way to use such frameworks - and that doing anything else is not how the framework is intended to be used (even though the framework seems to still work).

Separation of concerns

Another fear is that of putting the behaviour in the wrong place - in particular, in enterprise Java projects I think there is a perception that there is no worse "crime" than putting behaviour in the wrong "layer" (or the wrong sort of class). Therefore, rather than risk putting the behaviour on the wrong object in the right layer, enterprise Java developers will choose to put the behaviour on the wrong class in the right layer. Related to this is a desire to stick to some prescribed pattern - e.g. DTOs. I have been told, while pair programming, "you can't add a method to that", followed by some lame reasoning justified by some pattern that they wanted to stick to religiously.

Afraid of "new"

Yet another fear is that you might have to create a new object (shock, horror!). To avoid that, maybe some developers prefer to write static methods on some Utils class so they don't need to create any objects?
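A made-up illustration of the habit (hypothetical classes, not from any real codebase): the static Utils method below does something to a Teapot that the Teapot is quite capable of doing for itself.

```java
public class UtilsVersusObject {

    static class Teapot {
        private int cupsRemaining = 4;

        // The object-oriented alternative: behaviour lives with the data.
        boolean canServe(int guests) {
            return cupsRemaining >= guests;
        }
    }

    // The NOJO-plus-Utils style: a static method interrogating another
    // object's state. (Direct field access works here only because Teapot
    // is a nested class; in real code this style forces getters onto
    // Teapot, pushing it further towards being a NOJO.)
    static boolean canServe(Teapot teapot, int guests) {
        return teapot.cupsRemaining >= guests;
    }

    public static void main(String[] args) {
        Teapot pot = new Teapot();
        System.out.println(pot.canServe(3));   // true
        System.out.println(canServe(pot, 5));  // false
    }
}
```

Note that the static version never needs "new" anything beyond the Teapot it is handed - which may be exactly its attraction to developers who are afraid of creating objects.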

Object-oriented programming education and thinking

Perhaps the popularity of NOJOs is just the manifestation of how developers (don't) learn object-oriented programming? Perhaps object-oriented programming simply doesn't suit how all developers think? Certainly, I've come across very good developers who prefer functional programming and don't really "get" object-oriented programming, so it's not meant to be a criticism (or at least, not in all cases). Many developers have learnt their programming from non object-oriented programming languages, like C. Perhaps it's not surprising that to a C programmer an object looks like a struct?

Intra-team APIs

Another source of NOJO programming is that the APIs that people design for communicating between teams' subsystems often involve setting up, sending and receiving NOJOs.

Automated Testing

Another possibility is that developers who are not used to TDD find writing tests that use NOJOs easier than, for example, using Mock Objects.


UML

Using UML to "design" a system up front encourages thinking about the fields of objects rather than their behaviour - because that's what is easiest in notations like UML.


IDEs

Java IDEs can do lots of good things. One of the arguably less good things they do is generate getters and setters if you want. It is possible that the easy generation of getters and setters encourages their use, leading to NOJOs rather than objects that do things with their own fields. Using setter injection (probably the most common way people use Spring) also tempts developers into generating the getters too - after all, it's probably an extra click not to generate getters, and what harm can some getters do? (Rhetorical.)

Should I care that NOJOs are popular? Should I do anything about it?

Left as an exercise for the reader.

Copyright © 2008 Ivan Moore