
Snow Leopard

Most people reading this will know that Snow Leopard refers to version 10.6 of the Macintosh Operating System, Apple’s latest update released late last month.

I wasn’t sure whether I should upgrade initially. I have been stung before by being an early adopter. Mac OS X 10.4 was a nightmare on my iMac G5. The big-ticket new features such as Dashboard and Spotlight worked just fine [1]. What didn’t work were little things like, oh, networking. Eight times out of ten it couldn’t connect to my AirPort Base Station. This made almost everything, including downloading patches to fix this very problem, a complete and utter pain. I think it took until 10.4.3 before everything worked reliably.

I waited several months before making the leap to 10.5 for this very reason. But Leopard at least had some neat new features (and the lame new look of the dock) to try to tempt me over. Snow Leopard, by design, has few user-facing enhancements to make it worth the risk.

Of course I’m not a typical end user. The reason I moved from Windows to the Mac back in 2001 was its Unix underpinnings:

MacOS X is based on a BSD Unix kernel (called Darwin and available under an Open Source licence) and has an enhanced Macintosh user interface grafted on top. This is truly the key. You have the complex internals available from a command-line when you need it and a state of the art GUI when you just need a word processor.

And now that I’m an iPhone developer I have a vested interest in using the best tools available for the platform, and they were only available for Snow Leopard. Also a lure were the new APIs (Grand Central Dispatch, OpenCL) and language enhancements (blocks). I’ve not done much Macintosh development but these were exactly the kind of things that would potentially get me started.
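To give a flavour of what tempted me, here is a minimal sketch of Grand Central Dispatch and blocks: a chunk of work written inline as a block and handed to a system-managed queue. It’s purely illustrative and assumes Apple’s compiler, where the blocks extension is available in C and C++; it isn’t taken from any of my own projects.

    // Illustrative only: run some work on a background queue with
    // Grand Central Dispatch, using a block, then wait for it to finish.
    // Assumes Apple's clang, where the blocks extension works in C++ too.
    #include <dispatch/dispatch.h>
    #include <cstdio>

    int main() {
        dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_semaphore_t done = dispatch_semaphore_create(0);

        int limit = 1000000;  // captured by the block as a const copy

        dispatch_async(queue, ^{
            long long total = 0;
            for (int i = 0; i < limit; ++i) {
                total += i;
            }
            std::printf("sum of 0..%d = %lld\n", limit - 1, total);
            dispatch_semaphore_signal(done);  // tell the main thread we're finished
        });

        // Block until the background work has completed.
        dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
        dispatch_release(done);  // plain C++ here, so dispatch objects are released manually
        return 0;
    }

The appeal is that the queueing, thread creation and scheduling are the system’s problem rather than mine.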

All this is a long way of saying that, despite the risks, I took the plunge anyway.

And…

Well, so far it’s pretty much been a non-event.

Yes, it’s quicker. Most noticeably in starting up, shutting down, Time Machine and in Mail. Don’t get me wrong, there are lots of nice little things — and I’m still finding new ones — but it’s mostly been entirely seamless, almost an invisible upgrade. And I mean that in a good way.

Yes, all my programs still work. I’d read reports that Photoshop Elements didn’t work under Snow Leopard. I can report that it takes a considerable amount of time to start up and frequently beach-balls afterwards. Or, put another way, it works just as well as it did under 10.5.

I’d also seen scare stories about old versions of Microsoft Office and other PPC applications that need Rosetta to run but, again, I’ve not seen any problems [2]. Even lower-level software like my screen calibration program and film scanner software is fine.

I have two negatives so far, both fairly minor in the grand scheme of things.

The first affects Yummy and Yummy Browser: the new version of Xcode only supports developing for iPhone OS 3.x [3]. Luckily there are very few users on 2.x but it’s still a little disappointing that I have had to make the move.

The second is my printer. There is no longer an HP-supplied driver for my 2002-era DeskJet. Luckily Apple includes Gutenprint with Snow Leopard and there’s a bundled driver that recognises it. So on the plus side I don’t have to go out and buy a new printer as I feared I might have to. On the down side the quality is just not there. While it was never a match for any contemporary photo printer, it was more than adequate for my needs. With Gutenprint, text is readable but there’s noticeable banding. I’m not sure I’d use it any more for “official” letters, though maybe I’m just being a snob. Photos have the same issue with banding but have the added distraction of some coarse dappling as a substitute for the more subtle colours.

No significant upgrade is going to be entirely problem-free but overall I’m happy with it. It’s about as easy as it could be and, despite Apple’s claim of no new features, there are certainly tangible benefits to making the leap.

  1. Some would argue with that statement. Personally I never had any serious problems with Spotlight.
  2. To be fair, I moved to Office 2008 around the same time.
  3. It’s true that you can build for older releases but there’s no way to test it in the simulator. I’m not willing to release software that I’ve not been able to test.

Geeking out in Silicon Valley

As if wandering around a conference centre before the start of the conference wasn’t enough, I also went to the south of the Bay Area to visit some of the major sights in Silicon Valley.

I started at the excellent Computer History Museum. I don’t doubt that most people would find this mind-numbingly dull but I thought that the large archive of “significant” computers was great. It would be easy to argue over the choice of machines on display, whether others were more significant or, well, less American [1].

Still, that’s nit-picking. It was great to see the PDP-8 — a close relative of the PDP-7 that the original version of Unix was written for — and a couple of Cray-1s. Purely for nostalgia value, I enjoyed seeing the Sinclair Spectrum (my first computer) and the ZX81 (the name of this website). I also remember wanting a QL because a friend had one, and because it was cheap and powerful and had a great built-in programming language.

I’m guessing that many people reading this won’t have heard of the Xerox Alto. You can think of it as the first machine with what might be recognisable as a Graphical User Interface — the point-and-click interface that we’re all used to now with the Macintosh and Windows. Talking of the Macintosh, the NeXT Cube is in many ways the precursor to the modern Mac. I remember getting some of the marketing bumph from NeXT when they were still being manufactured. I wasn’t completely sure why they were cool or what I would do with one if I had it, but I wanted one. The connection? Well, this was Steve Jobs’ company after he was booted out of Apple in 1985, and its operating system forms the foundation of Mac OS X [2].

There were lots of other interesting (mainly bigger and older) machines but these are the main ones that stood out to me. They have a policy of only displaying machines that are ten or more years old, in order to get some perspective and decide what is truly significant. It will be interesting to see where they go in the next few years. Most of the interesting developments recently have been either in software or in gadgets that are not traditionally considered to be computers (such as iPods and mobile phones).

Unfortunately, the major problem with the rest of the valley is that it’s just a bunch of office buildings. Even the ones where interesting work is going on are still just office buildings. So I went to the other side of Mountain View to have a quick look at Google and then a quick stint on the freeway to Cupertino [3] to have a word with the iPhone application review team (not really).

And from there it was back to San Francisco for some good food and some more traditional sight-seeing.

  1. Some would argue that the first “modern” computer was built at Manchester University in the UK, but there are a number of good contenders.
  2. Actually, if you want to go further back, NeXTStep is a variant of Unix which we can trace back to the PDP-7 in 1969.
  3. It seemed right that I’d take the picture of the Infinite Loop sign using my iPhone.

The W Effect

This is probably the meanest article title I’ve ever written, as the “W” refers to a person, someone I used to work with [1]. The critical phrase went something like this:

“How hard can it be? It’s only a button!”

Those two tiny sentences hide a lot. Let me explain.

I’m mainly technical. I have been in the industry for over ten years now, did a computer science degree and spent many hours programming my Sinclair Spectrum when I should have been revising for my German GCSE. This means that when someone says “It’s only a button” I instinctively cringe. I may not know the details but I’ve seen enough “simple” buttons with days’ worth of work behind them that I’ve learned to be cautious.

Of course, technical skills are not the only ones required for most modern applications. Even a relatively small iPhone utility, such as Yummy, needed some time in front of Adobe Illustrator for the icon. Needless to say, that time wasn’t mine.

I am a keen photographer and I have read The Non-Designer’s Design Book but when it comes to art and design I leave the implementation to other people.

Naturally I have opinions. I may, as a “customer,” have constraints. It has to be a particular size or colour, the shape must evoke a certain feeling or imagery. I probably even have a budget. I instinctively like or dislike designs.

But what I don’t profess to know is the design process or how long it should take, and that’s the problem with the “how hard can it be” quote from above.

“W” was from another discipline, couldn’t imagine what might be hard technically and made a commitment to the client based on that hunch. Unfortunately, while their part would only take a few hours, it turned out that there were several weeks of technical work needed to make that button operate.

Of course I don’t want to come down too hard on “W,” as this is both a fairly extreme case and something that we all do to some extent. Things that we don’t understand almost always seem easier than they are in reality. The trick, insofar as there is one, is to acknowledge that it does happen and to consult someone who does understand before making commitments.

  1. In fact I had a number of choices, and that’s the point. However this, as you’ll see, is an extreme case and is the first I remember.

Attitude

Here’s an exchange that occurred just the other day: colleague A asked colleague B for some help in PowerPoint. B said, “It’s easy, I’ll show you how to do it.” A immediately objected: “I don’t want to know how to do it, can you just do it for me?”

The dialogue continued for a while, with A not happy to have to learn something new and B not happy to become A’s lackey.

The traditional twist in a story like this is to say that in fact I was colleague B. Only I wasn’t. And no, I wasn’t A either. But the whole conversation set my teeth on edge.

This is a supposedly smart and experienced guy, but he shows a complete unwillingness both to learn something new and to be self-sufficient.

This is whatever the complete opposite of a winning combination is called.

I have regularly come across both traits in my working life. Most often you get the Java programmer who is only interested in Java. These are usually career programmers, people who are in the industry because it pays the bills and little more. There is nothing wrong with that of course. Do people ever get passionate about accountancy? Actually, probably some do, but my point is that to most it’s a job.

However that kind of outlook is limiting. Lapsing into cliché for a second: When all you have is a hammer, everything looks like a nail. This isn’t a problem most of the time. Usually getting the job done is enough. But for the really interesting problems a little Lisp or functional programming or the dining philosophers can make all the difference.

My colleague didn’t even want to learn more about PowerPoint, which, given his position, pretty much should have been his job.

But an unwillingness to learn new stuff would have been fine had he been able to work unaided. Unfortunately he needed pretty much constant support. Everything from PowerPoint to making a cup of tea required someone else’s help. Naturally, it wasn’t an inability to make tea; rather, he was unwilling to do so.

The key here is that it’s not about ability. In your first few weeks in a job there are going to be lots of things that you need to ask about, lots of things that you need help with. But what I really hate to see is an unwillingness to learn, a lack of intellectual curiosity and no desire to be self-sufficient.

Growing Up in Public

What do Britney Spears and Yummy, my iPhone Delicious.com client, have in common? If you had asked me a few months ago I would have said nothing, but I’d have been wrong. No, they have both had to grow up in public.

For a version 1.0 product, Yummy seemed solid to me. It was fast, coped with all my bookmarks and had the ability to add, edit and delete entries. I didn’t think that this would remain a unique feature for as long as it has, but hey, that’s a bonus.

Within a few days I had exceeded what I had expected to sell and received positive feedback on the iTunes store. But not long after that I also received my first bug report.

This turned out to be an odd one. It crashed early on while starting up and downloading all the bookmarks for the first time. My first guess — incorrect as it turned out — was that it was running out of memory. It took some investigation with the help of a very kind end-user to discover that… Delicious allows technically invalid URLs. By that I mean both that they don’t follow web standards and, worse, that it’s not even possible to open them in Mobile Safari.

I don’t feel so bad about not spotting that one during testing, although I should have put in more error handling to spot various “impossible” events and make sure that it didn’t crash. The reason I mention it is to give an idea of the kind of things that happen in “real life.”

But my biggest mistake has been assuming that I am a typical user of Delicious. I thought a few hundred bookmarks was a lot but I now realise that I was wrong. I have some users with over a thousand bookmarks and have read about another with nearly ten times that [1].

The exact number of bookmarks that you can store depends on a number of variables, such as the length of the URL, title and notes, the number of tags, the iPhone operating system version [2] and a bunch of other details outside my control. Looking at the reviews on iTunes, I believe a few people had more than whatever that limit is. Unfortunately the error handling was lacking, resulting in Yummy crashing rather than showing an inconvenient but understandable error message [3].

Version 1.0.2 was actually a big release in terms of the amount of code changed, if not in terms of visible functionality (which is why it was such a small change in the version number). Under the hood, though, I dramatically increased the number of bookmarks that Yummy could handle. However, it was starting to become clear that the internal architecture was holding me back. Further increasing the number of usable bookmarks would be hard, if not impossible, without seriously degrading performance, and some new features that I wanted to add would end up in a nasty tangle of unmaintainable code.

I decided to take a step back and fix the structure of the code. For much of the time since the last formal release, Yummy has been, metaphorically speaking, in pieces on the floor. Most of those pieces have now been polished and reassembled, and it’s now working well enough that I have replaced the copy of 1.0.2 that I have been using day-to-day on my own iPhone with the development version.

This is a long way of saying that there is a new version coming. There will be a number of great new features but many of the big changes are behind the scenes. I sincerely hope that you don’t notice them.

  1. I confess to being a little sceptical about some claims. At some points it becomes a bit of a pissing competition.
  2. Upgrading from version 2.0 to 2.1 tipped at least one user over the edge, and many developers do not get previews of new versions.
  3. I’ve not got to the bottom of this one yet. It seems that, sometimes, you can only spot a bad memory allocation by noting that an otherwise mandatory field is missing.

C++

Introduction

I don’t want to start off on the wrong foot again, but I’m afraid I might have to. If you read my discussion of the C programming language you may imagine that I’d like C++. After all, C++ fixes some of C’s idiosyncrasies, adds object orientation and a whole host of new features.

You’d be wrong though. In many ways I consider C++ to be a step backwards from its parent and this piece will hopefully explain why.

The big things in life

Identifying the main thing wrong with C++ is easy when you start making a list of features. I don’t mean a list trying to identify things it does badly, but a genuine feature list, stuff like object orientation, exceptions, strong-ish typing, multiple inheritance… Well I’ve only just started, but there’s a huge list.

And that is the problem. C++ has tried to incorporate just about every interesting software engineering development that has been made over the last twenty-five years. In some ways that’s a very good thing: it allows programmers to build code in the most appropriate way, whichever way that might be.

The problem is that there’s more than one way to skin any particular cat. Just about any approach is fine in a small program with a single developer, but when you have a team writing code with no consistency of approach, you get a situation where no-one is able to understand the whole. There is no one head big enough.
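To pick a trivial, made-up example of what I mean, here are three ways of summing the same list of numbers. All three are perfectly legal C++ and all give the same answer; now imagine that kind of choice multiplied across every construct in the language and every member of the team.

    // Three equally valid ways of summing the same numbers in C++.
    // None of them is wrong; the trouble starts when one codebase mixes all three.
    #include <cstddef>
    #include <numeric>
    #include <vector>

    int sum_c_style(const std::vector<int>& v) {
        int total = 0;
        for (std::size_t i = 0; i < v.size(); ++i) {  // C-style indexing
            total += v[i];
        }
        return total;
    }

    int sum_iterators(const std::vector<int>& v) {
        int total = 0;
        for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it) {
            total += *it;                             // STL iterators
        }
        return total;
    }

    int sum_algorithm(const std::vector<int>& v) {
        return std::accumulate(v.begin(), v.end(), 0);  // standard algorithm
    }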

While There’s More Than One Way To Do It is a great motto for Perl, as a language it has a very different objective. Most Perl programs are ‘hacks,’ small programs designed to solve a particular problem. C++ is a hard-core software engineering language; large teams of developers are common. The same approach used for small programs just doesn’t work for bigger systems. I can build a thousand-line program at the keyboard, but a ten-million-line system? Anyone who thinks they can is deluding themselves. Even on the off-chance that they aren’t, other people need to understand it too. No-one is around forever and no-one is indispensable (except in the case of bad management, but that’s a different story).

Counter Arguments

People often cite C++’s similarity to C as a major plus. If you’ve already learned C, then C++ is easy, right? Just a few extra keywords, use “class” instead of “struct” and you’re well away. Except some of the worst C++ code I’ve ever seen has come from people who think like that. Using “//” to start your comments rather than “/*” doesn’t make you a C++ programmer!
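To make that concrete with another made-up example, both of the functions below build a greeting and both compile as C++, but only the second is really C++ rather than C wearing a different file extension.

    // "C with // comments": manual allocation, raw pointers, caller must free().
    #include <cstdlib>
    #include <cstring>
    #include <string>

    char* greet_c_style(const char* name) {
        char* buffer = (char*)std::malloc(std::strlen(name) + 8);  // 7 for "Hello, " plus the terminator
        std::strcpy(buffer, "Hello, ");
        std::strcat(buffer, name);
        return buffer;  // the caller has to remember to free() this
    }

    // Idiomatic C++: std::string manages its own memory.
    std::string greet_cpp_style(const std::string& name) {
        return "Hello, " + name;
    }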

There are, however, some benefits for C programmers using C++ compilers. They tend to be less forgiving of bad code and often give better diagnostics and error messages. But so do Java and C#, only more so. And the jump from C to Java is probably easier than moving from C to C++.

Conclusion

If we think right back to the beginning of the development of programming languages, we remember that they were designed to simplify things; they were designed so that you could think about the problem rather than what the machine would do.

For the audience that they were aimed at, many of the earlier languages did just that. Fortran allowed scientists to write programs (in fact it’s still being used). Cobol put a greater focus on the business than had ever been the case.

And this is where C++ falls down. Its audience is software engineers, people who write very large and complex applications. Yet its complexity actually hinders development. With a large team, “write-only” code (programs that no-one can understand once they have been constructed) becomes not just possible but almost guaranteed. There are so many ways of doing the same thing, so many ways to shoot yourself in the foot, that the odds of the result being both bug-free and maintainable are almost zero.

C++ does have its plus points, though. It is an excellent language to show how smart you are. If you can understand the entire language and write huge, complex and error-free programs in your sleep, you are clearly much more clever than I am.

Myself, I prefer to fight the problem rather than the development language.