Category Archives: Opinion

Thoughts on computers and the IT industry.

Simpleton Explores Microcomputers

It’s easy to forget how much computers have changed over a relatively short time. A book I found in my old room at my parents’ house, “Simpleton Explores Microcomputers,” helped me get some perspective.

I don’t know exactly when it’s from, but it’s certainly early eighties. Possibly 1983 or 1984.

It explores the computers that were available at the time and what it’s like to own one. One of the most telling aspects is that it’s written for people who have never owned, possibly never even used, a computer.

It starts with a little history (that’s still, by and large, relevant) and moves on to talk about some of the jargon users will come across and the problems they might be subjected to.

Hard drives are rare beasts that come in 5, 10 or 20MB capacities. There are even these things called “floppy diskettes,” though most users will be saving their data on cassette tapes. (Before MP3s there were CDs. Before CDs there were cassette tapes. Ask your dad.)

Later on, its British bias shows through when it asks which of the major brands of machine you should buy: the BBC Micro, the Apricot, the IBM PC or the Torch.

There are four pages dedicated to each computer. I love the detail here, where they explain what can be plugged in (such as the cassette unit) and what the CAPS LOCK key does. It talks about the ports and the power switch. But absolutely nothing about the software!

The final section discusses the relative merits of dot-matrix and daisy wheel printers. I still remember my own 9-pin dot-matrix printer. It was slow and noisy. I think I’ll stick with my ink jet, thank you.

I scanned the whole thing and put it up on Flickr if you want to read all of the book. And if you like that, you might also like Digital Retro (no connection, I just think it’s a great book).

Here comes the crunch

It all starts out with a detailed plan. Then someone says, “Can we deliver by October?” A few features get cut, some of the estimates get revised downwards and everyone gets back to work. Then you get to the end of September and find that someone removed all the contingency and that in the rush to finish the requirements a heap of important stuff got missed.

You spend days, weeks, quite possibly months pushing to get the software developed and, a few months before the real end, a crunch point is reached. It’s missing what’s now realised to be critical functionality; it takes an hour to process something that should be instant; the data doesn’t look like you thought it would; it’s too late for any real benefit to be obtained.

The whole project gets cancelled. Or at the very least, suffers from a near-death experience.

The technology, they say, wasn’t right. It was immature. Or badly supported by the vendor. It was open source. Or not open source. Word quickly gets around your industry that the whole project failed because some new software didn’t work.

Sound familiar?

I think every project that I’ve been on that has been significantly delayed — that is, most of them — has followed a similar arc. And, in each and every case, the diagnosis of the failure has been the same: the technology. And in pretty much every case it wasn’t really true.

The neat thing is that no individual is to blame and, even better, it’s completely impossible to prove.

Let’s look at the timeline above. How different would this have been had another technology been chosen? Not very, I’d wager.

Undoubtedly the technology had problems. It always does. It fell over when you pushed it in some unusual way. It leaked memory. It was too slow. It doesn’t really matter whether you’re using an exciting new technology from an unknown vendor or a widely used “industry standard”: if you’re doing anything vaguely interesting you will come across the unexpected. But given time, almost all these problems are tractable.

Unfortunately, it’s time that was lacking. Testing, contingency, everything deemed non-essential was sacrificed in order to make an externally defined ship date.

The thing to remember about a software development project is that the only deliverable that matters to end-users is the software. When users come to look at the near-finished product and it doesn’t meet their needs, they blame the software and the development team.

The development team often end up blaming the new development tools as well because, well, the alternative is admitting that they screwed up, and who is going to make a career-limiting mistake like that?

The truth, however, is often revisionist. It’s altered by people who either have an idealised view of how the project should have been run rather than how it really went, or who focus on the wrong parts of the whole.

They don’t remember or weren’t involved in the discussions that preceded the development work. They don’t look back at the project plan or the design documents or even the reams of requirements that they probably signed off months ago.

All the problems sure look like technology problems. Missing functionality. Low quality. Poor performance. But are they ultimately caused by poor technology?

Nearing the end of the project it is easy to forget all the work that happened at the start. It’s also easy to forget that the preparatory work was late and incomplete.

Project plans and design documents and test strategies are all important. It would be a mistake to try to run a large project without some form of them, but they’re either not visible to most end users or transitory artefacts that end up filed away and rarely looked at once the software is functional.

As ever, the real problem is the people. Politics. Pressure. Poor communication. Technology problems are almost always easier to debug than the people involved.

The ‘D’ in ‘DVCS’

Last week, GitHub was the victim of a days-long denial-of-service attack that meant it wasn’t reliably available despite the hard work of their team.

What surprised me was the number of people on Twitter complaining that they couldn’t work. I was surprised because one of the key things about Git (the tool rather than the website) is that it is distributed; that is, there is no centralised server in the way that there is with, say, Subversion. Clearly there are things you can’t do without GitHub – most obviously downloading new repositories – but almost everything else can be done peer-to-peer. The rest of this post explains how.

First, a couple of disclaimers. I’m not a Git expert and I’m not a server admin. There certainly are other ways of doing this. Some may even be more secure or easier in your environment. What I like about this approach is that it works on pretty much any machine that has git installed.

Step one: set up a Git server.

“Woah, now,” you’re thinking, “I don’t have all day!”

Turns out it’s just two commands:

touch .git/git-daemon-export-ok
git daemon

(Assuming you’re in the folder that’s at the root of the repository you want to export.)

Then on your other PC you can enter the following command to pull or push:

git pull git://othermachine/Users/stephend/temp/

You could even clone a repo to a local server, run a git server on it and use it as a temporary ‘central’ repository.
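
As a sketch, assuming a spare machine on your network (the “sparemachine” hostname and the paths below are made up for illustration):

# On the spare machine, take a bare copy to act as the temporary hub
git clone --bare git://othermachine/Users/stephend/temp/ temp.git
# In a bare repository the magic file lives at the top level
touch temp.git/git-daemon-export-ok
git daemon --base-path=$(pwd)

# Everyone else then pulls from the hub
git pull git://sparemachine/temp.git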

If there’s a downside, it’s that it’s difficult to recommend pushing to a repository using this method because there’s no authentication. However, as a quick and dirty way of pulling commits between any two machines with git installed it’s pretty reliable.
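
For what it’s worth, git daemon won’t accept pushes at all unless you opt in explicitly, which is a hint in itself:

# Allows anonymous pushes; only sensible on a trusted network
git daemon --enable=receive-pack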

(If you know git you’re probably thinking “Why not ssh?” The short version is that it’s often locked down on corporate machines and is usually not available on Windows machines. But if you can use it you probably should.)
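
For comparison, if ssh is available to you, the same pull needs no daemon at all (the username here is just an example):

git pull ssh://stephend@othermachine/Users/stephend/temp/
# Or the scp-style shorthand
git pull stephend@othermachine:/Users/stephend/temp/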

“Preview” is damaged and can’t be opened.

“Preview” is damaged and can’t be opened. You should move it to the Trash.

"Preview" is damaged.

This was the rather surprising error message that I’ve been getting when I try to open a PDF from the Finder since I upgraded to OS X Yosemite. It’s bad enough when you get an error message, but one suggesting that you delete a frequently used app is inconvenient to say the least!

Since I first hit the problem, I’ve been using a workaround: start Preview.app and open the file manually.

But there’s a better answer. Apparently it just has a duff “quarantine” attribute set. To get rid of it and get things back to normal simply do the following in a Terminal window:

/Users/stephend $ cd /Applications 
/Applications $ xattr -l Preview.app
com.apple.quarantine: 0006;53563f06;Acorn;
/Applications $

If you see the same thing, proceed to the next step:

/Applications $ sudo xattr -d com.apple.quarantine Preview.app
Password:****

And that’s all. Now opening a PDF from the Finder should work. It’s bizarre.
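
If you want to be sure, run xattr again; no output means the attribute has gone:

/Applications $ xattr -l Preview.app
/Applications $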

I raised Radar 19384558 with Apple. Please dupe if you see the same thing. Thanks to Gus Mueller (developer of Acorn) for the support.

Update: It seems that a couple of other apps also have the same incorrect attribute:

/Applications $ xattr -l *.app Utilities/*.app | grep Acorn       
Mail.app: com.apple.quarantine: 0006;53563f06;Acorn;
iPhoto.app: com.apple.quarantine: 0006;53563f06;Acorn;

I have also removed those tags, though I’ve not seen any problems with them so far.
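
If you see the same on your machine, a small loop clears them in one go (a convenience sketch; check the xattr -l output first so you only remove attributes you know are bogus):

for app in Mail.app iPhoto.app; do
  sudo xattr -d com.apple.quarantine "/Applications/$app"
done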

QA Mindf**k

When I read Rands’ recent post on QA I was pretty much entirely in agreement. A good QA team is a real asset to any project, especially large ones. However, a bad QA team can be a huge liability and cause problems for everyone.

Bad testers don’t understand the product they’re working on. They follow test scripts they don’t understand, write short, inaccurate bug reports and make no attempt to appreciate the context of any error.

This should all be obvious from Rands’ post. What’s maybe not so clear is what problems it causes.

What follows is not the tale of any one project but all of these things have happened at one time or another.

  • The QA team starts too early in development and doesn’t comprehend that it’s working on unfinished software. Many bug reports are raised for things that are known to be missing or incomplete.
  • The real bug reports contain so little data that they can’t be reproduced, or at least can’t be replicated without a lot of effort.
  • There are many bugs in the test scripts too, but since the QA team have no real understanding of what the software is supposed to do they refuse to take a second look at them.
  • Since QA don’t understand the desired functionality, important modules are not tested, leaving an unpleasant surprise at a later, critical stage of the project.
  • As the bugs pile up, the development team have to spend more and more time working on them instead of completing the project as originally planned.
  • Eventually, overwhelmed, the project is replanned.
  • This time the development team is under increased scrutiny. Metrics – bugs raised, time to fix, closure rate – are tracked.
  • Collecting all the metrics slows down development further.
  • Bugs pile up and the metrics start taking over the project.
  • The numbers mean everything. Few actually understand what is wrong, only how many defects there are.
  • Ironically, the pressure of developing the remaining features, working on the stream of defects and the new management overhead result in a decrease in quality as the same people are expected to do more work.
  • Management adds more people to the project to solve this problem. Unfortunately, Fred Brooks wasn’t wrong.
  • The new project plan fails, further replanning…
  • Rinse. Repeat.

It’s a death spiral. Longer hours. Weekend work. Replanning. Delays. People, frustrated, leave. It’s toxic.

More often than not, projects like these get cancelled but sometimes, for political reasons, they have to be delivered. In order to do so the cycle has to be broken. This always requires someone brave to come in, slash the scope and come up with a truly realistic plan.

But it’s best just to have a constructive relationship between the development and QA teams. After all, the goal for everyone should be to deliver the best possible software.