Category Archives: Opinion

Thoughts on computers and the IT industry.

Open Sources

Introduction

This is a very strange book by almost any measure. Firstly, much of the content is available on the web in one form or another, including an appendix that is literally a printed Usenet discussion. Secondly, most of the writers are techies first and writers second, and they admit as much. You don’t get that kind of admission from most writers, even when it’s obviously true.

Content

There are fifteen essays by eighteen writers. I’m not going to go through all of them, but I shall note some of the highlights.

The style prize, without doubt, goes to Larry Wall for ‘Diligence, Patience, and Humility.’ It’s one of the longer essays, has dozens of useless-looking diagrams and for much of the time seems to go nowhere. You keep reading, though. You may not see where it’s going, but you’re intrigued. And it’s worth it — despite initial appearances, it does go somewhere!

Writing for a Linux-biased web site, I couldn’t miss out Linus Torvalds’ piece, ‘The Linux Edge.’ His simple message — Linux has survived by having a good design — is thoroughly investigated, but the best bit is that he criticises just about everyone else in the industry, often without much justification, and still comes out the other end smelling of roses! I’m not sure how he did it, but I know I’d hate Bill Gates more if he said pretty much the same things.

As always he comes across as very modest, and attributes many of the good ideas to other people.

I liked Marshall Kirk McKusick’s potted history of Unix too. I think I’ve seen much of that before, but not in one place.

Since he practically started the whole thing, I need to mention Richard Stallman’s ‘The GNU Operating System and the Free Software Movement.’ It’s an interesting piece in that he contradicts some of what the other authors in the book have to say (as has been well documented, he dislikes the phrase ‘Open Source’ and demands that people call it ‘free’ software). It’s clear that he knows exactly what he wants and where he wants to go, but it’s equally clear that he’s going to put a lot of businesses off free software if he keeps going the way he is. RMS, I respect what you’re saying, but calm down!

The final mention goes to the two Eric Raymond essays. Raymond has been at the centre of the Open Source movement since ‘The Cathedral and the Bazaar,’ and fully deserves the opportunity to write two pieces in Open Sources. The first, ‘A Brief History of Hackerdom,’ describes the key points and people that gave rise to our current position. The second, ‘The Revenge of the Hackers,’ looks to the future.

Like much of Raymond’s writing, some of it is ‘personal,’ with a number of anecdotes about himself. It seems that many people hate this, but I feel that as long as it doesn’t get in the way of the message, and stays interesting and well written, it’s fine. The two essays are, indeed, fine.

Overall

I can’t really criticise this book. All the people in it are more influential than me, better developers than me, better writers than me or, more commonly, all of the above. So while I like some of the writers more than others, and while I actually disagree with some of the assertions made, it is, at least, well written and thought-provoking.

As a book intended to document the new-found popularity of the Open Source model, the book is a classic and a must-buy.

The facts

Author: Edited by Chris DiBona, Sam Ockman and Mark Stone

Cost: US$24.95

ISBN: 1-56592-582-3

Buy this book from Amazon.com or from Amazon.co.uk.

Accidental Empires: How the boys in Silicon Valley…

Introduction

This is neither a new book nor a new purchase for me, so why am I reviewing it? Bottom line: it’s a book that I’ve enjoyed a lot over the years and one that I feel the need to recommend to as many people as possible.

What’s in it?

The obvious format for a book on the personal computer industry would be chronological, but as Cringely points out early on in the book, things just aren’t that simple. Instead he uses what, on paper, might look to be a random arrangement of anecdotes, jumping from Apple to Xerox PARC to Microsoft and IBM in the space of a few pages. But that’s just the nature of the beast.

What’s good

Cringely’s writing is easy and engaging to read. It would be very tempting to just sit down and read the entire book from beginning to end. It’s friendly, chatty and full of interesting little anecdotes about all the main characters, from Bill Gates to Steve Jobs.

He freely admits that he’s not been a true historian. He’s missed out some arguably important stuff, but it would take a long, dull book to get all that information in. The charm of Accidental Empires is the fact that it’s easy to read.

Conclusion

When I do reviews, I normally have a section on the bad stuff. I don’t have one here. That’s not because the book is flawless, but because it achieves perfectly what it set out to do.

If you’re at all interested in how the PC industry came to be, this is the book for you.

The facts

Author: Robert X. Cringely

Cost: £6.99

ISBN: 0887308554

Oracle Builtin Packages

Introduction

Steven Feuerstein’s ‘Oracle PL/SQL Programming’ book has, over the last couple of years, become my bible on the subject of writing sizable Oracle PL/SQL programs. As I said in my review, it’s useful because it covers just about everything, including the things that don’t work.

So if that book covers just about everything, why would anyone want to buy ‘Oracle Builtin Packages’?

Content

In fact, as the first chapter of the book explains, this entire book was originally chapter 15 of ‘Oracle PL/SQL Programming,’ but Oracle complicated things by adding more to the PL/SQL language (all the pseudo-object-oriented stuff in version 8) and many more new or enhanced packages. The result: either a single two-thousand-page monster, or two more reasonably sized tomes.

But like the first book, this is still a bit of a monster in its own right. It stands at 931 pages and there’s very little padding; if only all technical books had such a high signal-to-noise ratio!

It seems rather pointless to go into detail on the content of all the different sections…

The two chapters that I’ve used the most are those on UTL_FILE, which allows you to manipulate operating system files, and DBMS_SQL. Just about everything I know about these modules I learned from this book. When I was originally writing the code, ‘Oracle Builtin Packages’ was by my side, open at the relevant page. When colleagues mistakenly thought I knew what I was talking about, this book was open beneath my desk, giving me superior bluffing ability.

The main criticism that I can think of is that some of the material is getting rather out of date. The DBMS_SQL package is no longer as necessary as it used to be — Oracle 8i introduced native dynamic SQL, new syntax in the PL/SQL language that largely replaces it.
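To see what that means in practice, here is a minimal sketch (assuming the classic EMP demo table; the table name is just an example) of the same dynamic query written first with DBMS_SQL and then with the 8i native dynamic SQL that replaces it:

    -- Pre-8i: the DBMS_SQL package, where every step is explicit.
    -- (Run with SET SERVEROUTPUT ON in SQL*Plus to see the output.)
    DECLARE
      c      INTEGER := DBMS_SQL.OPEN_CURSOR;
      n_rows NUMBER;
      dummy  INTEGER;
    BEGIN
      DBMS_SQL.PARSE(c, 'SELECT COUNT(*) FROM emp', DBMS_SQL.NATIVE);
      DBMS_SQL.DEFINE_COLUMN(c, 1, n_rows);
      dummy := DBMS_SQL.EXECUTE_AND_FETCH(c);
      DBMS_SQL.COLUMN_VALUE(c, 1, n_rows);
      DBMS_SQL.CLOSE_CURSOR(c);
      DBMS_OUTPUT.PUT_LINE('Rows: ' || n_rows);
    END;
    /

    -- Oracle 8i onwards: native dynamic SQL does the same job in one statement.
    DECLARE
      n_rows NUMBER;
    BEGIN
      EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM emp' INTO n_rows;
      DBMS_OUTPUT.PUT_LINE('Rows: ' || n_rows);
    END;
    /

DBMS_SQL still has its place (most obviously when you don’t know the number or types of the columns until run time), but for simple cases like this the new syntax wins easily.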

Conclusion

I’ll be brutal: in marked contrast to Feuerstein’s first book, if you regularly write PL/SQL code you can get by without reading this book.

But you will be more productive if you get it. You won’t be spending days writing code to do things that Oracle have kindly supplied a routine to do and you won’t give up on PL/SQL and write a program in Pro*C because you don’t realise, for example, that you can manipulate files.

No, this book is less vital than ‘Oracle PL/SQL Programming’ but it is still a thorough, well organised and useful book. It’ll quickly pay for itself many times over, and that’s a very high recommendation.

The facts

Author: Steven Feuerstein, Charles Dye, John Beresniewicz

Cost: US$46.95

ISBN: 1-56592-375-8

Minolta Dual Scan II

Introduction

Oddly, the main reason I’m writing this review is that I feel that the Minolta Dual Scan II has been harshly treated in the media. Most magazines seem to skip over this, the entry-level model, and move on to the Scan Elite. On photo.net everyone is singing the praises of expensive Nikons and Canons, and complaining about the lack of ICE on this model.

In a sense they are right, but everything is a compromise. Here’s why the Dual Scan II is a compromise that works for me.

What is the Dual Scan II?

Flat-bed scanners have plummeted in price over the last few years. Just seven years ago the only way most people could own a scanner was by getting one of those hand-held ones that you manually dragged across your document. They were quite neat, but getting a good scan was tricky. You needed a steady hand and lots of patience.

Fast forward to the present day and you can get good flat-beds for very reasonable prices. You don’t need a steady hand, just the patience — much less than used to be the case too — and a computer capable of accepting large image files. Most of the pictures you can see on the site have come from a very cheap flat-bed, so why did I go and buy a new one?

I have only very occasionally scanned anything other than my own photos. At first glance, a flat-bed seems ideal for the task: simply place the print on the glass and scan away. What’s wrong with that?

Image quality. Each stage the image goes through loses information. By taking the picture rather than looking at it directly with your eyes, you lose information. Scanning it in loses more and printing it onto photographic paper does too. So scanning from a print loses more information than scanning directly from the slide or negative.

Many flat-beds have a transparency adaptor, but I’m not impressed. Most scanners operate at between 600 and 1200dpi, which is great (too much, really) for prints, but slides and negatives are much smaller, so you need far more resolution to enlarge them to a printable size. And negatives come out a funny colour. Much better, I thought, to get a scanner dedicated to scanning the originals.

Hence the Scan Dual II. It scans at 2820dpi, which is nearly three times the resolution that I could expect from a reasonably priced flat-bed. It’s much better than a digital camera, too. That resolution is roughly equivalent to a ten mega-pixel digital. Don’t bother looking in the shops for one of those just yet.
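A rough back-of-the-envelope calculation, assuming a standard 36mm × 24mm 35mm frame, shows where that figure comes from:

    36 mm ≈ 1.42 in; 1.42 in × 2820 dpi ≈ 4,000 pixels
    24 mm ≈ 0.94 in; 0.94 in × 2820 dpi ≈ 2,650 pixels
    4,000 × 2,650 ≈ 10.6 million pixels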

It’s designed especially for the task I’m interested in, meaning that you can automate some of the process. I can do up to six negatives or four slides in one go. It’s smaller than any flat-bed and conveniently connects to my iBook’s USB port (many other scanners in this price range are SCSI, which is difficult with a laptop). And it comes with software for the Mac, albeit only MacOS 9, which is another major consideration!

I see what they mean

I used to spend so much time in image editing software trying to correct the colours of my flat-bed scans that buying a new scanner was worth the effort. To my eyes, the colours produced by the Minolta are fantastic. It’s able to find details in the negatives and slides that you can’t see in the prints.

It’s kind of obvious in retrospect, but now I find that I’m still spending time in Photoshop (much less, though). The problem now is dust. When the source is so small, even small motes of dust look huge. I guess this is why people are happy to pay another few hundred pounds for a scanner with ICE software. I’ve still not found a 100% reliable way of cleaning my negatives, so please let me know if you know of one!

But, as I said, it’s all a compromise. I could have bought a pretty good flat-bed scanner for half the price of the Dual Scan, so I stretched myself getting it. Spending more for ICE just wasn’t an option.

The software that comes with the scanner seems to be quite powerful, but does stop some way short of real image editing software. They supply Adobe Photoshop LE for that purpose, which is getting on a bit: it’s a cut-down version of Photoshop 5, and since we’re now on version 7 it’s rather ancient. Like the scanning software, it’s not OS X native.

Conclusion

Slide scanners are very expensive compared with flat-bed scanners. Not only are they tasked with scanning much smaller sources, but far fewer people buy them. This means that in the broader scanner market, the Dual Scan II is horrendously expensive.

But as far as slide scanners are concerned it’s great. There are cheaper scanners, but they work at much lower resolutions and are only able to work on a single exposure at a time. The more expensive models tend to have image enhancement software which, while useful, is not worth the extra for someone with my (lack of) artistic abilities.

The bottom line is that if you’re on a limited budget, or would rather spend your money on camera equipment than computer peripherals, then I consider the Dual Scan II to be a good buy. However, the ICE software on the next model up is quite possibly worth the money.

Note: Some time ago I emailed Minolta technical support to ask them about a MacOS X version of their software. I was surprised when they said that they were not going to produce one. I was, therefore, even more surprised when I recently found said scanner software in a native MacOS X flavour. Ed Hamrick’s Vuescan shareware application is still a viable option, especially if you want to scan negatives (on which it does a far better job out of the box) but I think I’ll be sticking with the “unavailable” Minolta software.

Dreadful Conclusions

Introduction

I still can’t quite believe that I did it. I actually bought an Apple Macintosh, just like I said back in February. After years of using Windows and Unix it seems a little odd, but I think I like it.

There’s a lot to like about it. Here are some of my thoughts as a Windows and Unix user.

Hardware

It was the combination of the new, white iBook and Mac OS X that swayed me in the end. There’s no way that I’d buy one of the original iMacs and my budget didn’t stretch to a PowerBook no matter how much I wanted it to.

One thing that I really like is the hardware. Unlike most PCs, it feels as though it’s been designed rather than just thrown together. Even compared to my old Dell laptop, this one feels well put together.

Having said that, it’s not perfect. I’m sure that it looks neat on all the design sketches, but I can’t imagine that having all the ports down one side of the machine is the best way of doing things. For once, it probably works best for left-handed people: the ports are all down the left side, so the mouse cable goes in on the correct side for them. Unfortunately I’m right handed…

Also, it’s deliberate that there are no flaps over the ports, the idea being that there’s nothing to snap or fall off over time. On the other hand, I’m sure that means that they’ll fill up with fluff and other random detritus.

Unlike most PCs, Apple have completely parted with the past. There are no serial, parallel or PS/2 ports (not that you’d ever expect PS/2 ports on a Mac). This has bothered me less than I imagined it would. The main downside is printing to my parallel-ported Deskjet, but I managed it using my Linux box as an intermediary (and PostScript interpreter). Not the ease of use that Apple imagined!

The last thing I’m going to mention about the hardware is something that is an after-thought with most machines: the power supply. Basically it’s tiny, only just bigger than the ink cartridges for the aforementioned Deskjet. After using laptops with power supplies nearly as big as the computer itself, this came as a surprise.

Software

I didn’t buy the iBook for its hardware, though (although that was important!). I got it for Mac OS X. As I mentioned before, Mac OS X is a rather neat combination of a BSD Unix kernel and a Mac-like user interface. On paper it looked fantastic. It has all the things that the original Macintosh operating system lacked, such as a real networking stack, multi-threading, pre-emptive multi-tasking and the ability to use more than one mouse button. (Okay, I’m joking about the last one.)

The incredible thing, after all the disappointments I’ve had comparing marketing literature with the real thing, is that it does deliver.

In the previous section I mentioned that I now print using my Linux box as a server. The Linux box isn’t pandering to any Macintosh oddities: Mac OS X sends print jobs directly to the Linux print spooler, just as another Linux or Solaris machine would. Very neat.
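For the curious, the Linux end needs nothing Mac-specific. A minimal sketch of an lpd setup that would do the job (the queue name, spool directory and filter path here are hypothetical, and your distribution’s printing system may well differ) looks something like this in /etc/printcap:

    # Hypothetical "dj" queue on the Linux box. The Mac is pointed at it
    # as an ordinary LPR printer. The if= input filter is a one-line shell
    # script that pipes the incoming PostScript through Ghostscript's
    # DeskJet driver (e.g. gs -sDEVICE=cdj550) before it reaches /dev/lp0.
    dj|deskjet:\
            :sd=/var/spool/lpd/dj:\
            :mx#0:\
            :sh:\
            :lp=/dev/lp0:\
            :if=/usr/local/bin/ps2dj:

The point is that the Mac just speaks standard LPR and PostScript; everything printer-specific happens on the Linux side.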

Another thing you can’t really see from screen shots in magazines is how good it all looks. Semi-translucent windows, drop-shadows instead of borders, the way loading programs bounce up and down in the dock, the way that progress bars and the default button in dialogs pulsate… They’re all completely unnecessary, but totally cool. It makes working with the machine that much more fun.

Fun. Now there’s a word you don’t hear in connection with Windows very often. Linus Torvalds wrote Linux ‘just for fun’ (to borrow the title of his book), whereas Windows was written purely for money. I guess they’ve both succeeded in their own goals. I hope Apple can profit from their combination of the two.

Annoyances

There are only a few things that I really dislike, and some of them are rather petty.

Firstly, Apple are still not too confident with it. When you get a new machine it defaults to starting Mac OS 9. If you’re used to dual-booting your PC between Windows and Linux you’d probably expect a menu when the machine starts up asking which operating system to start (that’s what I was expecting). But no. You have to find the Startup Disk control panel, change some settings and restart the machine. Not difficult when you know how, but not in keeping with the well-known Macintosh user-friendliness. (Apple have just announced that they’re making OS X the default OS. This has not been well received by many, who are waiting until Quark and Photoshop are native OS X applications before switching.)

The other things are really niggles. For example, although the Finder lets you connect to NFS and (presumably) Apple shares, you can’t browse Windows shares. (Of course, my Linux box only had SMB shares at the time…) In fact, I’ve not been able to connect to any SMB shares on my server yet. However, this “problem” has not been widely reported, so I think we can assume it’s down to my local configuration.

And this is the churlish complaint: they’re updating it too often! Within days of getting hold of the machine there were many megabytes of fixes to download. That’s kind of good, but the upgrade to 10.1.2 is 30MB, which is rather a lot over a dial-up line, especially when dropping the line means you have to restart the download from scratch.
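To put that in perspective, a rough calculation (assuming a 56k modem that manages a fairly typical 4-5KB/s in practice):

    30MB ≈ 30,000KB
    30,000KB ÷ 4.5KB/s ≈ 6,700 seconds
    ≈ 1 hour 50 minutes, and that assumes the line stays up throughout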

Conclusions

Stepping away from what used to be called IBM Compatibles seemed such a big step. At this stage I half expected to be annoyed with myself, cursing having spent all that money on something I didn’t fully understand how to use.

The key has to be its value. I want to be able to access the Internet, edit MS Word-compatible documents and write software. The iBook can do all that using free or preinstalled software, and it comes in a very neat package with some unique features.

It is still kind of odd having to think about how to do some things that are “obvious” to me in Windows and Linux, but I’m still of the opinion that it’s worth the hassle.

Extreme Programming Explained

Introduction

Naturally the key selling points of Extreme Programming — reduced risk and increased fun — appeal to me. I’ve worked on many projects that were either risky, no fun or both, and any way of improving on that would be a good idea.

However, most of the successful projects were run using fairly heavy-weight methodologies, CMM or ISO accredited, for example. Extreme Programming promises to deliver the benefits while still being simple. I was sceptical. This sounds a little too close to one of Fred Brooks’ silver bullets.

How does it work?

Extreme Programming sounds too simple to ever work. None of its components are new or especially controversial, some are even very common throughout the industry. What’s new is their application together. The key is that the strengths of one part paper over the weaknesses of others.

I won’t detail all the bits — that’s what the book is for — but there are a couple of things I want to talk about.

The one thing that came through in all other reviews of the book was the pair programming aspect. The idea is that whenever writing code, there are always two people sat at the computer. This sounds very wasteful of resources but is based on some very sound theories: reviewing code is good and sharing knowledge around the team reduces the dependency on any one person. XP takes it to the extreme (hence the name) by having, in effect, every line of code being reviewed by two people all the time.

I can certainly see the benefits, but I can imagine a client being very suspicious when you suddenly have two people appearing to do a job that previously only one person was doing. Beck also proposes using a utilisation metric, which documents the fact that people are not actually doing genuinely productive work all the time.

Anyone that writes software will immediately recognise that a lot of time is spent helping colleagues, in design or review meetings and any number of other things that are not directly and visibly related to increasing the functionality of a program. Making it more visible sounds like a much harder “sell” than is suggested in the book.

Is it easy to understand?

Beck is clearly very enthusiastic about his methodology and this shows in his prose, which is clear, friendly and concise. It’s sensibly organised and it usually answers the questions you’re thinking of in just the right place.

However, it only does all that if you remember what the book is trying to explain. It’s trying to explain Extreme Programming, not how to implement it effectively. I had a little difficulty seeing how you could automate tests in some environments. We tried and failed with Oracle Forms (or whatever it’s called this week), for example. I don’t disagree with the principle, but there are some definite technical barriers that are not even acknowledged.

I’m not sure that the distinction between describing what XP is and how to implement it is a good one. It sounds like an excuse to sell two books when only one would suffice. Why would anyone be interested in learning the theory but not the practice of a simple methodology?

Disadvantages

Beck does not claim that Extreme Programming works in all circumstances. There’s an entire (albeit short) chapter on it, which is quite refreshing. Most people would claim that their baby was ideal in all circumstances!

I only thought of a couple of other problems that are not addressed, although neither are necessarily any more of a problem with XP than with other methodologies. (However, they are areas that are claimed to be addressed.)

The obvious objection is the lack of formal documentation. But documentation is never kept up to date anyway, and in XP there is a much greater chance of at least one person knowing how the system works. Couple that with a full set of tests and the ‘business’ person being on the team right from the start and, well, what’s my problem?

The problem is identified in ‘Death March’: who are the stakeholders on the project? Is it your team member (the end user)? Is it their manager, who might want a bunch of reports? Is it the CTO, who doesn’t actually care what the system does and just wants a success of some kind in order to get promoted?! The book, and the Extreme Programming approach, treat the problem of requirements as something that is eventually knowable and fixable given close enough interaction with the users. I wish that were true.

Beck might suggest that this ‘unknown’ is just another form of change that can be managed in the normal manner, but my guess would be that the magnitude of the changes required by people who don’t have day-to-day access to the team would be too big and would break the process.

Or perhaps this is just a size issue. With a small system and a small number of well-defined users, there’s no reason it couldn’t work.

Conclusion

I think my initial and immediate dislike of XP stemmed from the fact that most of the projects I’ve ever worked on have fallen into the “won’t work” camp. Now that I understand its background I can appreciate it much more.

And it’s only fair to note that this appreciation comes solely from the book. It’s worth reading even if you don’t plan on implementing Extreme Programming.

The facts

Author: Kent Beck

Cost: $29.95

ISBN: 0-201-61641-6