It all starts out with a detailed plan. Then someone says, “Can we deliver by October?” A few features get cut, some of the estimates get revised downwards, and everyone gets back to work. Then you get to the end of September and discover that someone removed all the contingency and that, in the rush to finish the requirements, a heap of important stuff got missed.
You spend days, weeks, quite possibly months pushing to get the software developed and then, a few months before the real end, you hit a crunch point. The software is missing what’s now realised to be critical functionality; it takes an hour to process something that should be instant; the data doesn’t look like you thought it would; and it’s too late for any real benefit to be obtained.
The whole project gets cancelled. Or at the very least, suffers from a near-death experience.
The technology, they say, wasn’t right. It was immature. Or badly supported by the vendor. It was open source. Or not open source. Word quickly gets around your industry that the whole project failed because some new software didn’t work.
Sound familiar?
I think every project that I’ve been on that has been significantly delayed — that is, most of them — has followed a similar arc. And, in each and every case, the diagnosis of the failure has been the same: the technology. And in pretty much every case it wasn’t really true.
The neat thing about this diagnosis is that no individual is to blame and, even better, it’s impossible to either prove or disprove.
Let’s look at the timeline above. How different would it have been had another technology been chosen? Not very, I’d wager.
Undoubtedly the technology had problems. It always does. It fell over when you pushed it in some unusual way. It leaked memory. It was too slow. It doesn’t really matter whether you’re using an exciting new technology from an unknown vendor or a widely used “industry standard”: if you’re doing anything vaguely interesting, you will come across the unexpected. But given time, almost all of these problems are tractable.
Unfortunately, it’s time that was lacking. Testing, contingency, everything deemed non-essential was sacrificed in order to make an externally defined ship date.
The thing to remember about a software development project is that the only deliverable that matters to end users is the software. When users come to look at the near-finished product and it doesn’t meet their needs, they blame the software and the development team.
The development team often end up blaming the new development tools as well because, well, the alternative is admitting that they screwed up, and who is going to make a career-limiting mistake like that?
The truth, however, often gets rewritten. It’s altered either by people who have an idealised view of how the project should have been run, rather than how it really went, or by people who focus on the wrong parts of the whole.
They don’t remember or weren’t involved in the discussions that preceded the development work. They don’t look back at the project plan or the design documents or even the reams of requirements that they probably signed off months ago.
All the problems sure look like technology problems. Missing functionality. Low quality. Poor performance. But are they ultimately caused by poor technology?
Nearing the end of the project it is easy to forget all the work that happened at the start. It’s also easy to forget that the preparatory work was late and incomplete.
Project plans and design documents and test strategies are all important. It would be a mistake to try to run a large project without some form of each of them, but they’re either invisible to most end users or transitory artefacts that end up filed away and rarely looked at once the software is functional.
As ever, the real problem is the people. Politics. Pressure. Poor communication. Technology problems are almost always easier to debug than the people involved.