Justifying Continuous Integration Expenditure

So why, oh why, oh why is it so difficult to get an additional server? Has anyone come up with a formula to produce some numbers for the bean counters to justify this already?

I propose this is an endemic problem: the people on the ground give up fighting for more servers because they're told there is no budget.

Here's a start: measure all the time that developers waste waiting for Continuous Integration. Paul Julius writes about the cup of coffee metric. If you can measure that time, there's your formula:

hours wasted per dev per day * number of developers * average hourly rate * 22 working days a month (on average) = monthly cost of not having the right hardware.

A local example: 30 minutes a day, for six developers, at an average cost of £50 an hour, gives £3,300 a month. That's not even a high rate for developers, or much time spent. Of course, your organisation may be unable to budget any more for infrastructure even while it pays developers to sit and watch the Continuous Integration machine. That's a different problem from managers not understanding the true cost of something.
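If you want to plug your own numbers in, here's a minimal sketch of that formula in Python. The figures in the usage line are just the example from this post; swap in your own measurements.

```python
def monthly_ci_waiting_cost(hours_wasted_per_dev_per_day: float,
                            number_of_developers: int,
                            average_hourly_rate: float,
                            working_days_per_month: int = 22) -> float:
    """Monthly cost of developers waiting on Continuous Integration."""
    return (hours_wasted_per_dev_per_day
            * number_of_developers
            * average_hourly_rate
            * working_days_per_month)

# The worked example above: 30 minutes a day, six developers, £50 an hour.
print(monthly_ci_waiting_cost(0.5, 6, 50.0))  # 3300.0
```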

Thanks for the interesting comment, Banos. Has anyone tried this kind of approach and won? Tell us!

(photo via See-Ming Lee)
