Orange

I went through quite a few thoughts and pictures before I settled on this one for the PhotoFriday theme “Orange.” I considered a clever-but-not-in-the-spirit picture of a wind farm in Orange County, then obvious photographs of oranges or orange sweets. But I finally went for this rust on a cannon in Rhodes.

Please also vote for my entry in last week’s challenge, “Wet.” I’m entry number 64.

Simpleton Explores Microcomputers

It’s easy to forget how much computers have changed over a relatively short time. A book I found in my old room at my parents’ house, “Simpleton Explores Microcomputers,” helped me get some perspective.

I don’t know exactly when it’s from, but it’s certainly early eighties. Possibly 1983 or 1984.

It explores the computers that were available at the time and what it’s like to own one. One of the most telling aspects is that it’s written for people who have never owned, possibly never even used, a computer.

It starts with a little history (that’s still, by and large, relevant) and moves on to talk about some of the jargon users will come across and the problems they might be subjected to.

Hard drives are rare beasts that come in 5, 10 or 20MB capacities. There are even these things called “floppy diskettes,” though most users will be saving their data on cassette tapes. (Before MP3s there were CDs. Before CDs there were cassette tapes. Ask your dad.)

Later on, its British bias shows through when it asks which of the major brands of machine you should buy: the BBC Micro, the Apricot, the IBM PC or the Torch.

There are four pages dedicated to each computer. I love the detail here, where they explain what can be plugged in (such as the cassette unit) and what the CAPS LOCK key does. It talks about the ports and the power switch. But absolutely nothing about the software!

The final section discusses the relative merits of dot-matrix and daisy wheel printers. I still remember my own 9-pin dot-matrix printer. It was slow and noisy. I think I’ll stick with my ink jet, thank you.

I scanned the whole thing and put it up on Flickr if you want to read all of the book. And if you like that, you might also like Digital Retro (no connection, I just think it’s a great book).

Here comes the crunch

It all starts out with a detailed plan. Then someone says, “Can we deliver by October?” A few features get cut, some of the estimates get revised downwards and everyone gets back to work. Then you get to the end of September and find that someone removed all the contingency and that in the rush to finish the requirements a heap of important stuff got missed.

You spend days, weeks, quite possibly months pushing to get the software developed and, a few months before the real end, a crunch point is reached. It’s missing what’s now realised to be critical functionality; it takes an hour to process something that should be instant; the data doesn’t look like you thought it would; it’s too late for any real benefit to be obtained.

The whole project gets cancelled. Or at the very least, suffers from a near-death experience.

The technology, they say, wasn’t right. It was immature. Or badly supported by the vendor. It was open source. Or not open source. Word quickly gets around your industry that the whole project failed because some new software didn’t work.

Sound familiar?

I think every project that I’ve been on that has been significantly delayed — that is, most of them — has followed a similar arc. And, in each and every case, the diagnosis of the failure has been the same: the technology. And in pretty much every case it wasn’t really true.

The neat thing is that no individual is to blame and, even better, it’s completely impossible to prove.

Let’s look at the timeline above. How different would this have been had another technology been chosen? Not very, I’d wager.

Undoubtedly the technology had problems. It always does. It fell over when you pushed it in some unusual way. It leaked memory. It was too slow. It doesn’t really matter whether you’re using an exciting new technology from an unknown vendor or a widely used “industry standard”: if you’re doing anything vaguely interesting you will come across the unexpected. But given time, almost all these problems are tractable.

Unfortunately, it’s time that was lacking. Testing, contingency, everything deemed non-essential was sacrificed in order to make an externally defined ship date.

The thing to remember about a software development project is that the only deliverable that matters to end-users is the software. When users come to look at the near-finished product and it doesn’t meet their needs, they blame the software and the development team.

The development team often end up blaming the new development tools as well because, well, the alternative is admitting that they screwed up, and who is going to make a career-limiting mistake like that?

The truth, however, is often revisionist. It’s altered by people who either have an idealised view of how the project should have been run rather than how it really went, or who focus on the wrong parts of the whole.

They don’t remember or weren’t involved in the discussions that preceded the development work. They don’t look back at the project plan or the design documents or even the reams of requirements that they probably signed off months ago.

All the problems sure look like technology problems. Missing functionality. Low quality. Poor performance. But are they ultimately caused by poor technology?

Nearing the end of the project it is easy to forget all the work that happened at the start. It’s also easy to forget that the preparatory work was late and incomplete.

Project plans and design documents and test strategies are all important. It would be a mistake to try to run a large project without some form of them, but they’re either not visible to most end users or transitory works that end up filed away and rarely looked at once the software is functional.

As ever, the real problem is the people. Politics. Pressure. Poor communication. Technology problems are almost always easier to debug than the people involved.

The ‘D’ in ‘DVCS’

Last week, GitHub was the victim of a days-long denial-of-service attack that meant it wasn’t reliably available, despite the hard work of their team.

What surprised me was the number of people on Twitter complaining that they couldn’t work. I was surprised because one of the key things about Git (the tool rather than the website) is that it is distributed; that is, there is no centralised server in the way that there is with, say, Subversion. Clearly there are things you can’t do without GitHub – most obviously downloading new repositories – but almost everything else can be done peer-to-peer. The rest of this post explains how.

First, a couple of disclaimers. I’m not a Git expert and I’m not a server admin. There certainly are other ways of doing this. Some may even be more secure or easier in your environment. What I like about this approach is that it works on pretty much any machine that has git installed.

Step one: set up a Git server.

“Woah, now,” you’re thinking, “I don’t have all day!”

Turns out it’s just two commands:

touch .git/git-daemon-export-ok
git daemon

(Assuming you’re in the folder that’s at the root of the repository you want to export.)
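
If you have more than one repository to share, git daemon can also serve a whole directory of them at once. This is just a sketch using two of the daemon’s standard options; the ~/code directory is an invented example, and note that client URLs then become relative to the base path rather than the full filesystem path used below.

# Serve every repository under ~/code in one go; --export-all skips the
# per-repository git-daemon-export-ok marker file.
git daemon --base-path=$HOME/code --export-all

# A repository at ~/code/temp is then reachable as git://othermachine/temp/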

Then on your other PC you can pull with the following command:

git pull git://othermachine/Users/stephend/temp/
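
Pulling isn’t the only thing that works over this protocol. Assuming the same path as above, you could also clone the repository from scratch, or add it as an extra remote so you can fetch from it repeatedly; the remote name “laptop” here is just an example.

# Clone the whole repository from the other machine:
git clone git://othermachine/Users/stephend/temp/ temp

# Or, in an existing clone, track it as an additional remote:
git remote add laptop git://othermachine/Users/stephend/temp/
git fetch laptop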

You could even clone a repo to a spare local server, run git daemon on it and use it as a temporary ‘central’ repository.
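
As a rough sketch, assuming the spare machine can reach the one running the daemon above (the host names and the path to the bare clone are placeholders):

# On the spare machine, take a bare clone and serve it:
git clone --bare git://othermachine/Users/stephend/temp/ shared.git
cd shared.git
touch git-daemon-export-ok   # in a bare repo the marker sits at the top level
git daemon

# On each developer's machine, point at the bare clone's full path:
git remote add hub git://sparemachine/full/path/to/shared.git
git fetch hub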

If there’s a downside, it’s that it’s difficult to recommend pushing to a repository using this method because there’s no authentication. However, as a quick and dirty way of pulling commits between any two machines with git installed, it’s pretty reliable.
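
In fact, out of the box git daemon refuses pushes entirely: the receive-pack service is disabled by default and has to be switched on explicitly, which is precisely why it’s hard to recommend given the lack of authentication.

# Only do this on a network you trust: anyone who can reach the port can push.
git daemon --enable=receive-pack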

(If you know git you’re probably thinking “Why not ssh?” The short version is that it’s often locked down on corporate machines and is usually not available on Windows. But if you can use it, you probably should.)
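
For completeness, the ssh route needs nothing more than an ssh login on the other machine; the username here is a guess based on the path in the earlier example.

# Full ssh URL form:
git pull ssh://stephend@othermachine/Users/stephend/temp/

# Or the scp-style shorthand:
git pull stephend@othermachine:/Users/stephend/temp/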
