|"Well, maybe it collided with a tin of paint"|
(Photo: Jon Smith photography, Flickr)
A study that came out last week looked at IT projects breaking their budgets (see this and this). According to the research, in a sample of 1,471 large-scale IT projects, projects ran on average 27% over budget, but the headline-grabber was the observation that one in six projects goes three times over budget. The researchers have named these projects “black swans”, and blame managers for failing to account for low-probability, high-cost risks in big IT projects. To the more cynical IT professionals, this is nothing unexpected. It’s not hard for a software tester to witness at least one project like this – failing that, you don’t have to look far for the latest story about the notorious NHS IT system.
What was interesting, however, was the reference to the Black Swan theory. This phrase was originally coined by Lebanese-American essayist Nassim Nicholas Taleb. There’s a whole book about this, but the basic idea was that there was a time when it was believed all swans were white. No-one had ever seen a swan in any other colour, so no-one gave serious thought to the possibility. Then Dutch explorer Willem de Vlamingh went to Australia and discovered that some swans are black, fundamentally changing how people saw swans. And in hindsight, it was nonsensical to assume swans could never be that colour just because you hadn’t seen one before. Taleb used this analogy for all sorts of events: he suggested, amongst other things, that the attack on the World Trade Center and the Credit Crunch could be considered “black swan events” – both unexpected at the time, both easy to rationalise now.
There is, however, a flaw in applying this analogy to IT projects. Under Taleb’s criteria, a Black Swan event has to be unexpected at the time, have a major impact, and be easily understood in hindsight. If it is true that one IT project in six runs three times over budget, the excuse that you didn’t have hindsight wears thin. With this many projects overrunning costs this badly, it rather suggests that people don’t learn lessons from earlier IT disasters and make the same mistakes. A comparable act would be to go to Australia after Vlamingh’s expedition still expecting all swans to be white.
To be fair to managers, it’s difficult to get an IT project right. There are so many ways projects can go wrong that it’s practically impossible to think of everything. But this is precisely why it doesn’t pay to go cheap on software testing: testers are needed to find the mistakes you haven’t thought of, before they get into the live system and wreak havoc. This is also why there is a widely-held principle that testing activities should begin as early as possible. You don’t have to wait until the code is delivered before you start planning the test process – you can do it as soon as there are design documents to work on. And if a tester spots a flaw in the design at an early stage, this can save a very expensive correction later.
However, I can’t pretend that software testing is the answer to every problem. It’s little help if the only fault the testers can report is that the system crashes every time they try to use it – this has been known to happen. I suspect the biggest cause of IT projects running over time and budget is wildly optimistic expectations of cost and timescale at the outset, compounded by “feature creep”. It certainly doesn’t help when IT contracts are awarded to whichever company makes the most extravagant promises over what it can deliver. The only real solution is to be realistic with IT projects. If you expect a quick and cheap solution to a complex problem, chances are people will tell you what you want to hear, and you will discover reality the hard way. Companies have to understand these risks, expect the unexpected, and factor in the inevitable setbacks. Remember, the worst thing a black swan can do is spoil your holiday snaps; the worst thing a black swan IT project can do is close your business.