“Do you ever change this type of trade?”

I was sitting on the trading floor, discussing a new feature I was implementing with the person who would be using it the most.

“No, never.”

This was one detail of the change that would have far-reaching consequences in the code. A “no” would mean a few days of development, a “yes” would indicate several weeks.

“Are you absolutely sure? You don’t change it even once a month?”

I knew they’d like the smaller estimate but, equally, I knew that I didn’t want to end up trying to implement several weeks worth of functionality in the smaller time frame. I’ve seen that kind of thing happen too often.

“John1, we don’t ever change these trades, do we?”

John was the head trader. If anyone would know it would be him.

He rubbed his head and leaned back, thinking.

“Why would you want to? No, I can’t see us ever changing one.”

Of course, you can guess what happened thirty minutes into the first trading day with the new software.

So, what happened here? Why did we get the requirements so wrong? And why did we only find the mismatch after the system was moved into production?

The problem and the solution are the same thing: communication. Or, more precisely, communicating using the right language.

Some developers are happy to only “speak” technical, proud that they are masters of their programming environment but ignorant of their users’ problems or how they really use the software.

Above, I started out correctly: I was trying to understand the traders’ business, talking in terms of booking trades, positions, legal entities and a bunch of acronyms that would make even less sense out of context.

But I made one error: I used the word “change” without defining what I meant. I meant, well, any change. They thought they did too. Yet they didn’t consider moving a trade from one book to another to be a change and, unfortunately, I did.

You’d like to think that there were checks and balances in place to make sure that this kind of thing didn’t happen. And there were. In addition to informal and formal testing, there was over a week of “parallel running,” where the traders had to use the old and the new system together and check that the results were the same in both of them.

Were there any moved trades during this time? Of course. Why did the traders not notice? Well, the results looked about right; the differences, while present, were explicable and so not considered significant enough to mention, even though I had asked to hear about any problems at all.

So, again, communication. Or at least human nature. I wanted to hear about any differences but they tried to help me by only talking about differences that they couldn’t account for.

What’s the answer? Well, I’m not sure there’s an easy one. “Understanding your user” is a short, simple phrase but hides so much. If you spent the time to fully understand their job you probably wouldn’t have the time to do your own. But finding the balance is crucial.

  1. Names have been changed to protect the… well, they work for a bank so I hesitate to say “innocent” but you know what I mean.

The Up-Sell

I don’t mean to single out one business here. The flaw I’m pointing out is shared by many sites but this post was inspired by a recent visit to TripIt. In general it’s a great service. It’s well thought out, allowing you to enter all your details with a minimum of effort; just forwarding your email confirmation to them is a masterstroke.

However. (You knew that was coming.) However, many links on the main page are non-functional, by which I mean they push you straight through to their paid-for service sign-up form.

The “tricky” part is that, before you click them, it is difficult to know which links actually work and which just ask for money.

There are a number of other tricks that some sites have. Another favourite is the interstitial screen, forcing you to view adverts before you can do what you actually want to do.

But it’s not just that I find it obnoxious. I don’t have data but I do have a nice anecdote that shows that it doesn’t really work.

During the dot-com boom I helped build a website. The launch went pretty well but the client decided that they wanted to push a secondary product, one with great margins but where customers really needed to be vetted. (I don’t want to get into specifics but it was a financial product.) The marketing people said that a pop-up would be the right thing to do.

We balked at the idea. At the time, pop-ups were the scourge of the Internet. They were used on all the least reputable sites. Technically adept users closed them without looking; the less fortunate were conned into either filling their screen with pointless adverts or visiting websites they had no interest in. Pop-up blockers were a few years away.

In short, we felt that at best there was a reputational risk. Unfortunately we couldn’t come up with numbers to show that it was financially a bad idea, plus it was actually pretty cheap to implement. So they asked us to go ahead with it, over our objections.

As I recall it didn’t last very long.

After go-live there was a substantial up-tick in the number of people applying for this secondary product. However, there was actually a drop in the number of people who were accepted. That’s to say that it attracted exactly the wrong kind of person, which is bad enough, but there was also a cost associated with each rejected application.

Moving back to 2009, I think the problem with pushing your paid products too hard is that you actually make your free version less appealing. And, frankly, if your free version is a pain to use I’m certainly not going to pay for the full version just to make the evil bits go away.

To be clear, I have nothing against the so-called “freemium” business model. It can work really well. Flickr, for example, seems to have the balance about right: the site is useful even if you don’t pay for it, with the paid extras useful to regular users. And paying LWN readers can get their content a week ahead of other people.

In short, if your paid extras are genuinely useful you don’t need to be obnoxious, you don’t need lots of “dead” links or interstitial adverts. And making your free version painful is most certainly not the answer.

Snow Leopard

Most people reading this will know that Snow Leopard refers to version 10.6 of the Macintosh Operating System, Apple’s latest update released late last month.

I wasn’t sure whether I should upgrade initially. I have been stung before by being an early adopter. Mac OS X 10.4 was a nightmare on my iMac G5. The big ticket new features such as Dashboard and Spotlight worked just fine1. What didn’t work were little things like, oh, networking. Eight times out of ten it couldn’t connect to my AirPort Base station. This made almost everything, including downloading patches to fix this very problem, a complete and utter pain. I think it took until 10.4.3 before everything worked reliably.

I waited several months before making the leap to 10.5 for this very reason. But Leopard at least had some neat new features (and the lame new look of the dock) to try to tempt me over. Snow Leopard, by design, has few user-facing enhancements to make it worth the risk.

Of course I’m not a typical end user. The reason I moved from Windows to the Mac back in 2001 was because of its Unix underpinnings:

MacOS X is based on a BSD Unix kernel (called Darwin and available under an Open Source licence) and has an enhanced Macintosh user interface grafted on top. This is truly the key. You have the complex internals available from a command-line when you need it and a state of the art GUI when you just need a word processor.

And now that I’m an iPhone developer I have a vested interest in using the best tools available for the platform, and they were only available for Snow Leopard. Also a lure were the new APIs (Grand Central Dispatch, OpenCL) and language enhancements (blocks). I’ve not done much Macintosh development but these were exactly the kind of things that would potentially get me started.

All this is a long way of saying that, despite the risks, I took the plunge anyway.


Well, so far it’s pretty much been a non-event.

Yes, it’s quicker. Most noticeably in starting up, shutting down, Time Machine and in Mail. Don’t get me wrong, there are lots of nice little things — and I’m still finding new ones — but it’s mostly been entirely seamless, almost an invisible upgrade. And I mean that in a good way.

Yes, all my programs still work. I’d read reports that Photoshop Elements didn’t work under Snow Leopard. I can report that it takes a considerable amount of time to start up and frequently beach-balls afterwards. Or, put another way, it works just as well as it did under 10.5.

I’d also seen scare-stories about old versions of Microsoft Office and other PPC applications that need Rosetta to run but, again, I’ve not seen any problems2. Even lower level software like my screen calibration program and film scanner software are fine.

I have two negatives so far, both fairly minor in the grand scheme of things.

The first affects Yummy and Yummy Browser: the new version of Xcode only supports developing for iPhone OS 3.x3. Luckily there are very few users on 2.x but it’s still a little disappointing that I have had to make the move.

Secondly, it’s my printer. There is no longer a HP-supplied driver for my 2002-era DeskJet. Luckily Apple includes GutenPrint with Snow Leopard and there’s a bundled driver that recognises it. So on the plus-side I don’t have to go out and buy a new printer as I feared I might have to. On the down side the quality is just not there. While it was never a match for any contemporary photo printer, it was more than adequate for my needs. With GutenPrint, text is readable but there’s noticeable banding. I’m not sure I’d use it any more for “official” letters, though maybe I’m just being a snob. Photos have the same issue with banding but have the added distraction of some coarse dappling as a substitute for the more subtle colours.

No significant upgrade is going to be entirely problem-free but overall I’m happy with it. It’s about as easy as it could be and, despite Apple’s claim of no new features, there are certainly tangible benefits to making the leap.

  1. Some would argue with that statement. Personally I never had any serious problems with Spotlight.
  2. To be fair, I moved to Office 2008 around the same time.
  3. It’s true that you can build for older releases but there’s no way to test it in the simulator. I’m not willing to release software that I’ve not been able to test.

Geeking out in Silicon Valley

As if wandering around a conference centre before the start of the conference wasn’t enough, I also went to the south of the Bay Area to visit some of the major sights in Silicon Valley.

I started at the excellent Computer History Museum. I don’t doubt that most people would find it mind-numbingly dull but I thought that the large archive of “significant” computers was great. It would be easy to argue over the machines on display: which were more significant or, well, less American1.

Still, that’s nit-picking. It was great to see the PDP-8 — the successor to the PDP-7 that the original version of Unix was written for — and a couple of Cray-1s. Purely for nostalgia value, it was great to see the Sinclair Spectrum (my first computer) and the ZX81 (the name of this website). I also remember wanting to get a QL because a friend had one and because it was cheap and powerful and had a great built-in programming language.

I’m guessing that many people reading this won’t have heard of the Xerox Alto. You can think of this as the first machine with what might be recognisable as a Graphical User Interface — or the point and click interface that we’re all used to now with the Macintosh and Windows. Talking of the Macintosh, the NeXT Cube is in many ways the precursor to the modern Mac. I remember getting some of the marketing bumph from NeXT when they were still being manufactured. I wasn’t completely sure why they were cool or what I would do with one if I had it, but I wanted one. The connection? Well, NeXT was Steve Jobs’ company after he was booted from Apple in 1985, and its operating system forms the foundation of Mac OS X2.

There were lots of other interesting (mainly bigger and older) machines but these are the main ones that stood out to me. They have a policy of only displaying machines that are ten or more years old in order to get some perspective and decide what is truly significant. It will be interesting to see where they go in the next few years. Most of the interesting stuff in the last few years has been either in software or in gadgets that are not traditionally considered to be computers (such as iPods and mobile phones).

Unfortunately, the major problem with the rest of the valley is that it’s just a bunch of office buildings. Even the ones where interesting work is going on are still just office buildings. So I went to the other side of Mountain View to have a quick look at Google and then a quick stint on the freeway to Cupertino3 to have a word with the iPhone application review team (not really).

And from there it was back to San Francisco for some good food and some more traditional sight-seeing.

  1. Some would argue that the first “modern” computer was built at Manchester University in the UK, but there are a number of good contenders.
  2. Actually, if you want to go further back, NeXTStep is a variant of Unix which we can trace back to the PDP-7 in 1969.
  3. It seemed right that I’d take the picture of the Infinite Loop sign using my iPhone.

The W Effect

This is probably the meanest article title I’ve ever written, as the “W” refers to a person, someone that I used to work with1. The critical phrase went something like this:

“How hard can it be? It’s only a button!”

Those two, tiny sentences hide a lot. Let me explain.

I’m mainly technical. I have been in the industry for over ten years now, did a computer science degree and spent many hours programming my Sinclair Spectrum when I should have been revising for my German GCSE. This means that when someone says “It’s only a button” I instinctively cringe. I may not know the details but I’ve seen enough “simple” buttons with days’ worth of work behind them that I’ve learned to be cautious.

Of course, technical skills are not the only ones required for most modern applications. Even a relatively small iPhone utility, such as Yummy, needed some time in front of Adobe Illustrator for the icon. Needless to say, that time wasn’t mine.

I am a keen photographer and I have read The Non-Designer’s Design Book but when it comes to art and design I leave the implementation to other people.

Naturally I have opinions. I may, as a “customer,” have constraints. It has to be a particular size or colour, the shape must evoke a certain feeling or imagery. I probably even have a budget. I instinctively like or dislike designs.

But what I don’t profess to know is the design process or how long it should take, and that’s the problem with the “how hard can it be” quote from above.

“W” was from another discipline, couldn’t imagine what might be hard technically and made a commitment to the client based on that hunch. Unfortunately while their part would only take a few hours, it turned out that there were several weeks of technical work to make that button operate.

Of course I don’t want to come down too hard on “W,” as this is both a fairly extreme case and something that we all do to some extent. Things that we don’t understand almost always seem easier than they are in reality. The trick, insofar as there is one, is to acknowledge that it does happen and to consult with someone who does understand it before making commitments.

  1. In fact I had a number of choices, and that’s the point. However this, as you’ll see, is an extreme case and is the first I remember.


Here’s an exchange that occurred just the other day: colleague A asked colleague B for some help in PowerPoint. B said, “It’s easy, I’ll show you how to do it.” A immediately objected: “I don’t want to know how to do it, can you just do it for me?”

The dialogue continued for a while, with A not happy to have to learn something new and B not happy to become A’s lackey.

The traditional twist in a story like this would be to say that in fact I was colleague B. Only I wasn’t. And no, I wasn’t A either. But the whole conversation put my teeth on edge.

This was a supposedly smart and experienced guy, but he showed a complete unwillingness both to learn something new and to be self-sufficient.

This is whatever the complete opposite of a winning combination is called.

I have regularly come across both traits in my working life. Most often you get the Java programmer who is only interested in Java. These are usually career programmers, people who are in the industry because it pays the bills and little more. There is nothing wrong with that of course. Do people ever get passionate about accountancy? Actually, probably some do, but my point is that to most it’s a job.

However that kind of outlook is limiting. Lapsing into cliché for a second: When all you have is a hammer, everything looks like a nail. This isn’t a problem most of the time. Usually getting the job done is enough. But for the really interesting problems a little Lisp or functional programming or the dining philosophers can make all the difference.

My colleague didn’t even want to learn more about PowerPoint which, given his position, pretty much should have been his job.

But an unwillingness to learn new stuff would have been fine had he been able to work unaided. Unfortunately he needed pretty much constant support. Everything from PowerPoint to making a cup of tea required someone else’s help. Naturally, it wasn’t an inability to make tea; rather, he was unwilling to do so.

The key here is that it’s not about ability. In your first few weeks in a job there are going to be lots of things that you need to ask about, lots of things that you need help with. But what I really hate to see is an unwillingness to learn, a lack of intellectual curiosity and no desire to be self-sufficient.