Posted by: jsonmez | September 3, 2010

One Build to Rule Them All

I spent a good amount of time last night troubleshooting a “works on my machine” problem.

Since it takes pain to learn something, perhaps this pain was good.  It reminded me of a concept that is really important in your software development infrastructure.

I have three golden rules of development environments and deployment:

  1. I should be able to run a build locally that is the exact same build that will be run on the continuous integration server.
  2. Only bits that were built by the continuous integration server are deployed and the same bits are deployed to each environment.
  3. Differences in configuration in environments only exist in one place and on that machine.

You don’t have to follow these rules, but if you don’t, most likely you will experience some pain at some point.

If you are building a new deployment system, or you are revamping an old one, keeping these three rules in mind can save you a large amount of headache down the road.


I think it is worth talking about each rule and why I think it is important.

Rule 1: Local Build = Build Server Build

If you want your continuous integration environment to be successful, it needs to appropriately report failures and successes.

If your build server reports false failures, no one will believe the build server, and you will end up troubleshooting build-configuration problems instead of actual software problems.  Troubleshooting these kinds of problems provides absolutely no business value; it is just a time sink.

If the build server reports false successes, you will only discover the issue when you deploy the code to another environment.  You will waste time deploying broken code, and the feedback loop for fixing it will be long.

As a developer, I should be able to run the exact same command the build server will run when I check in my code.  I would even recommend setting up a local version of the continuous integration server your company is using.

If you can be confident that a build which passes locally will not fail on the build server or during deployment, you will never have to troubleshoot a false build failure.  (The deployment could still fail, and the application could still behave differently in different environments, but at least you will know that you are building the exact same bits using the exact same process.)
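One low-tech way to honor this rule is to give the project a single build entry point that developers and the CI server both invoke with the same command. The script name and the stand-in build steps below are my own assumptions, not something from the post; it is only a sketch of the idea.

```python
import subprocess
import sys

# The single build entry point: a developer runs `python build.py` locally,
# and the CI server runs the exact same command, so both execute the
# identical sequence of steps.
BUILD_STEPS = [
    ["echo", "compiling"],           # stand-in for the real compile command
    ["echo", "running unit tests"],  # stand-in for the real test command
]

def run_build(steps=BUILD_STEPS):
    """Run every build step in order; return True only if all succeed."""
    for step in steps:
        result = subprocess.run(step)
        if result.returncode != 0:
            return False  # fail fast, exactly as the CI server would
    return True

if __name__ == "__main__":
    sys.exit(0 if run_build() else 1)
```

Because the CI server calls the same script with no extra steps of its own, a green local build and a green server build mean the same thing.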

Rule 2: Build Server Bits = Only Deployed Bits

Build it once, and deploy those bits everywhere.  Why?

Because it is a waste of time to build what should be the exact same bits more than once.

Because the only way to be sure the exact same code gets deployed to each environment (dev, test, staging, production, etc.), is to make sure that the exact same bits are deployed in each environment.

The exact same SVN revision number is not good enough, because an application is more than source code.  You can build the exact same source code on two different machines and get a totally different application.  No, I’m not loony; this is a true statement.  Different libraries on a machine can produce different binaries.  A different compiler can produce different binaries.

Don’t take a risk.  If you want to be sure that code you tested in QA will work in production exactly the same way, make sure it is the exact same code.

This means you can’t package your configuration with the deployment package.  Yes, I know you have always done it that way.  Yes, I know it is painful to figure out another way, but the time you will save by never again having to question the bits of a deployment will be worth it.

Rule 3: Environment Configuration Resides in Environment

Obeying this rule will save you a huge amount of grief. 

Think about it.

If the only thing different in each environment is in one place, in one file in that environment, how easy will it be to tell what is different?

I know there are a lot of fancy schemes for adding configuration to the deployment package based on what environment the deployment is going to.  I have written at least 3 of those systems myself.

But they always fail somewhere down the line, and you spend hours tracing through them to figure out what went wrong, asking yourself, “How the heck did this work again?”

By making the configuration for an environment live in the environment and in one place, you take the responsibility of managing the configuration away from the build process and put it in one known place.

As always, you can subscribe to the RSS feed to follow my posts on Making the Complex Simple, where I write about the topic of writing elegant code about once a week.  You can also follow me on Twitter.



  2. Would you mind expanding on point 3? I agree wholeheartedly with points 1 and 2, but I’m not so sure on 3. We keep all our configurations under source control, then part of the build process is to create one build package with the configurations for each environment that the package will be deployed to. Then, when deploying to an environment, all you need to do is copy the appropriate configuration file to the executable folder. You can easily diff the configurations to find out what’s different between environments, and because it’s all under source control, you can check who has changed stuff; you could always do more fancy stuff using a combination of code generation templates and backend configuration databases to generate the configuration files.

    If you’re talking about, for example, storing all configuration in the registry, or in a specific location in the file system, how do you manage changes to configuration with a new release (e.g. adding a new connection string), especially across multiple servers if you’re deploying to a server farm?

    • Hi David,

      What I mean in point 3 is essentially that you end up with one configuration file on that environment that is for that environment.
      The way I would recommend doing that is probably similar to what you do. There is no reason you can’t have the file live in source control for each environment.
      The key to point 3 is that there be only one place to look for configuration and those differences live in one place in the environment itself.

