Italy, 2001

Most of my recent trips have been prompted by a change of jobs, and this one was no exception. I decided on Northern Italy as I’d been there before, but only when I was eleven or so. I had great memories of the place — well, a great milk-shake in Sirmione — and wanted to explore the area more and see what the place looked like from an adult’s perspective.

Last time we’d been based near Venice. This time I stayed away from the coast and flew into Milan’s Linate airport, spent a day there (Milan, not the airport!) and then headed, by rail, to Desenzano del Garda, a small town on the south side of Lago di Garda (Lake Garda). From here I was able to visit Verona and see the delights of the other towns on the lake.

Also see my newer pictures in Siena, Pisa, San Gimignano and Colle di Val d’Elsa in 2004 and even more of Florence, Poppi, Fiesole, Monteriggioni and others in 2008.

Milan, Italy

Milan, Italy

Milan, Italy

Sirmione, Lake Garda, Italy

Sirmione, Lake Garda, Italy

View over Sirmione, Italy

Desenzano, Lake Garda, Italy

Verona, Italy

Verona, Italy

Verona, Italy

Verona, Italy

Desenzano, Lake Garda, Italy

Desenzano, Lake Garda, Italy

Desenzano, Lake Garda, Italy

All shots were taken with my EOS. I experimented with a number of different films on this trip: Fuji Superia 100, Fuji Reala 100, Kodak Royal Gold 100 and Fuji Velvia 50. Due to the quality of the scans, you can’t really tell the difference but the prints do vary. Differences in colour saturation are mainly due to the light and polarisation.

If the pictures have piqued your interest, there are a few web sites that you might want to visit:

  • Italian Tourist Web Site, where I booked one of my hotels.
  • I used the Michelin “in your pocket” guide to the Italian Lakes (from Amazon UK or US) and the Insight Milan FlexiMap (from Amazon UK or US).
  • As always, there’s a Lonely Planet guide. You can buy a copy from Amazon (UK or US).

Extreme Programming Explained

Introduction

Naturally the key selling points of Extreme Programming — reduced risk and increased fun — appeal to me. I’ve worked on many projects that were risky, no fun or both, and any way to improve on that would be a good idea.

However, most of the successful projects were run using fairly heavy-weight methodologies (CMM or ISO accredited, for example). Extreme Programming promises to deliver the benefits while still being simple. I was sceptical. This sounds a little too close to one of Fred Brooks’s silver bullets.

How does it work?

Extreme Programming sounds too simple to ever work. None of its components are new or especially controversial; some are even very common throughout the industry. What’s new is their application together. The key is that the strengths of one part paper over the weaknesses of others.

I won’t detail all the bits — that’s what the book is for — but there are a couple of things I want to talk about.

The one thing that came through in all the other reviews of the book was the pair programming aspect. The idea is that whenever code is being written, there are always two people sitting at the computer. This sounds very wasteful of resources but is based on some very sound theories: reviewing code is good, and sharing knowledge around the team reduces the dependency on any one person. XP takes it to the extreme (hence the name) by having, in effect, every line of code reviewed by two people all the time.

I can certainly see the benefits, but I can imagine a client being very suspicious when you suddenly have two people appearing to do a job that previously only one person did. Beck also proposes using a utilisation metric, which documents the fact that people are not actually doing genuinely productive work all the time.

Anyone who writes software will immediately recognise that a lot of time is spent helping colleagues, sitting in design or review meetings and doing any number of other things that are not directly and visibly related to increasing the functionality of a program. Making that time more visible sounds like a much harder “sell” than the book suggests.

Is it easy to understand?

Beck is clearly very enthusiastic about his methodology and this shows in his prose, which is clear, friendly and concise. It’s sensibly organised and it usually answers questions you’re thinking of in just the right place.

However, it only does all that if you remember what the book is trying to explain. It’s trying to explain Extreme Programming, not how to implement it effectively. I had a little difficulty seeing how you could automate tests in some environments. We tried and failed with Oracle Forms (or whatever it’s called this week), for example. I don’t disagree with the principle, but there are some definite technical barriers that are not even acknowledged.
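Just to give a flavour of the sort of thing XP expects, here is a minimal sketch of an automated test in C. The function and the checks are my own invention, not an example from the book:

    #include <assert.h>
    #include <stdio.h>

    /* A hypothetical function under test. */
    static int add(int a, int b)
    {
        return a + b;
    }

    /* The kind of small, automated check that XP expects you to run
       constantly. Trivial here, but with a GUI-driven tool like Oracle
       Forms there is no obvious hook to call, which is where we came
       unstuck. */
    int main(void)
    {
        assert(add(2, 2) == 4);
        assert(add(-1, 1) == 0);
        printf("All tests passed\n");
        return 0;
    }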

I’m not sure that the distinction between describing what XP is and how to implement it is a good one. It sounds like an excuse to sell two books when only one would suffice. Why would anyone be interested in learning the theory but not the practice of a simple methodology?

Disadvantages

Beck does not claim that Extreme Programming works in all circumstances. There’s an entire (albeit short) chapter on it, which is quite refreshing. Most people would claim that their baby was ideal in all circumstances!

I only thought of a couple of other problems that are not addressed, although neither is necessarily any more of a problem with XP than with other methodologies. (However, they are areas that XP claims to address.)

My first thought was documentation, but documentation is never kept up to date anyway, and in XP there is a much greater chance of at least one person knowing how the system works. Couple that with a full set of tests and the ‘business’ person being on the team right from the start and, well, what’s my problem?

The problem is one identified in Death March: who are the stakeholders on the project? Is it your team-member (the end user)? Is it their manager, who might want a bunch of reports? Is it the CTO, who doesn’t actually care what the system does and just wants a success of some kind in order to get promoted?! The book and the Extreme Programming approach treat the problem of requirements as something that is eventually knowable and fixable given close enough interaction with the users. I wish that were true.

Beck might suggest that this ‘unknown’ is just another form of change that can be managed in the normal manner, but my guess would be that the magnitude of the changes required by people who don’t have day-to-day access to the team would be too big and would break the process.

Or perhaps this is just a size issue. With a small system and a small number of well-defined users, there’s no reason it couldn’t work.

Conclusion

I think my initial and immediate dislike of XP stemmed from the fact that most of the projects I’ve ever worked on have fallen into the “won’t work” camp. Now that I understand its background I can appreciate it much more.

And it’s only fair to note that this appreciation comes solely from the book. It’s worth reading even if you don’t plan on implementing Extreme Programming.

The facts

Author: Kent Beck

Cost: $29.95

ISBN: 0-201-61641-6

Dreadful Thoughts

Introduction

I have a terrible confession to make. I am not a spiritual man so, rather than seek penance through the church, I shall document my reasoning here on the web and you can draw your own conclusions.

Please go easy on me.

My confession: I can see myself buying a Macintosh later this year.

To some, that may not sound like something to be ashamed of but it’s all a matter of perspective. By trade and education I’m an engineer or scientist; I have short hair, no piercings and virtually no artistic ability (I present the evidence of that here on this web site!). My computer of choice tends to have a barren, baroque user interface. Let’s be honest here, it’s Unix.

The Macintosh and Unix sit at virtually opposite ends of a spectrum. It’s usually called usability, but I think it’s more specific than that: the Mac makes it easy for people to learn how to use it. Apple make things easy to use, and they sacrifice power and flexibility in order to do that.

One of the best examples of this extreme position is the mouse. In Unix you tend to have three buttons. Windows originally had two but now seems to have spawned wheels and more buttons than a Space Shuttle. In each case it only takes a short time to realise that the left button does most of the useful stuff, yet Apple decided that one button was less confusing. They’re right, of course. But it does limit your options as far as, say, short-cut menus are concerned.

Unix is the opposite. It has a huge learning curve, but an expert can quickly do just about anything. After spending a lot of time on that curve, I’d actually say that Unix was more usable than a Mac. I’m under no illusions, though, that it’s more difficult to learn.

I guess these extremes, to some extent, explain the success of Windows. Ignoring Apple’s mistakes and the fragmentation of the Unix market in the early nineties, we can see that Windows is easier to use than Unix but much less so than a Mac. It has the Start menu, Wizards and pop-up help, and it often hides information rather than bombard the user with difficult, unnecessary detail. On the other hand, it does have rough-and-ready multi-user facilities, solid TCP/IP networking and a command-line interface (for the brave). It occupies the middle ground, doing neither job especially well.

The battle ground

That’s how they stand right now, but six months from now things may be very different.

The Unix side probably isn’t going to change much. Linux, especially, will continue with its vast range of incremental upgrades. Distributions will eventually come on-line with the new 2.4 kernel, and improvements will continue in both the KDE and GNOME environments.

In the same time-frame, the next desktop version of Windows, XP, will be unleashed on the world. The betas are currently doing the rounds and people seem generally impressed. The interface is easier, more consistent and more aesthetically pleasing, and it’s built on the Windows 2000 core, which has generally been well received.

Normally that might have been enough for me to upgrade from my old copy of Windows 95, but for two things: you can’t (you can only upgrade from Windows 98 or above), and there’s MacOS X.

It would be the obvious choice to buy a new PC preloaded with Windows XP, since I’ve had a small succession of similar machines over the last ten years, but I find the improvements in MacOS X so compelling that I’m considering moving to a completely new architecture. In a sound-bite, the reason is power and user-friendliness in one.

MacOS X is based on a BSD Unix kernel (called Darwin and available under an Open Source licence) and has an enhanced Macintosh user interface grafted on top. This is truly the key: you have the complex internals available from a command-line when you need them and a state-of-the-art GUI when you just need a word processor.

In conclusion

There is only one other operating system that supports such a neat combination of Unix and user-friendliness, the BeOS, and it has many problems: every one of the three machines I’ve tried it on had some device that was unsupported, so mine can hardly be a unique situation; and the software support is worse. I may prefer not to use Microsoft Office, but I need to be able to exchange files with people who do.

No other operating system will have quite that level of flexibility. Microsoft won’t be adding more Unix-like functionality to Windows, and the open source community just can’t compete with the many years of experience Apple have in designing computers for people who don’t write code.

C

Introduction

Talking about C is not easy. Almost all professional programmers have used it at some point and many have a strong attachment to it. I don’t want to start by saying that it’s a poor language, alienating much of my audience, but I figure I’m going to end up doing that anyway so I may as well get it out of the way at the beginning.

Compared to many languages that have come since, and even some that came before, C just isn’t a very good language.

There, I’ve said it.

History

Perhaps more than almost any other language, C has a rich and famous history. It’s almost impossible to discuss it without also talking a little about the history of Unix.

In 1969, Ken Thompson finally got hold of a PDP-7 and decided to write an operating system for it. (It happened more often than you’d think back then.) That operating system was Unix.

A few years later, Dennis Ritchie designed and built a language based on B (itself based on a language called BCPL) and, creatively, called it C. Unlike its immediate ancestors, C had types and a number of other useful bits and pieces.

C was so useful that Unix was quickly rewritten in it. These days that sounds obvious, but until that point operating systems had always been written in assembler or machine code. This is an important part of computer history.

Utility

C became popular largely because it filled a very useful niche. At one extreme you have all the ‘real’ languages. At the time, real computer scientists would have used Algol68. Structured and clever, Algol was a great language but you couldn’t do anything very low level with it.

At the other extreme there is assembler, which people had to resort to if they wanted to do anything close to the hardware. Assembler is only one step removed from machine code, which makes writing reliable, bug-free code very difficult, especially when you’re building something as large and complex as an operating system.

C fits in the middle. Described by some as a high-level assembler, it allows you to do low level coding, accessing particular memory addresses and the like, and use high-level constructs such as functions and types.
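As a rough illustration of what I mean, and assuming nothing more than a standard C compiler, the same short program can poke at individual bytes of memory and still be organised into functions and types:

    #include <stdio.h>

    /* A high-level construct: a named type... */
    struct point {
        int x;
        int y;
    };

    /* ...and a function that inspects it byte by byte, the sort of thing
       assembler programmers used to do by hand. */
    static void dump_bytes(const void *data, size_t length)
    {
        const unsigned char *bytes = data;
        size_t i;

        for (i = 0; i < length; i++) {
            printf("%02x ", bytes[i]);
        }
        printf("\n");
    }

    int main(void)
    {
        struct point p = { 1, 2 };

        dump_bytes(&p, sizeof p);
        return 0;
    }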

The following books and papers helped me learn to hate C:

  • “Practical C Programming” by Steve Oualline.
  • “Writing Solid Code” by Steve Maguire.
  • “Code Complete” by Steve McConnell.

Hello world. How many nasty ways can you write “Hello World” in C?
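For anyone who hasn’t seen the kind of thing that last question is getting at, here is the conventional version followed by one of the milder obfuscations. Both are my own and both are perfectly legal C:

    #include <stdio.h>

    /* The version from page one of every C book. */
    int main(void)
    {
        printf("Hello, world\n");
        return 0;
    }

And the same program, leaning on pointer arithmetic and the fact that a string is “really” a pointer:

    #include <stdio.h>

    int main(void)
    {
        const char *s = "Hello, world\n";

        /* Walk the pointer along the string; the compiler is happy. */
        while (*s) {
            putchar(*s++);
        }
        return 0;
    }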

Pointers

Key to C’s ability to mix high- and low-level constructs are pointers. Most ‘serious’ languages have some concept of them. Some call them references, some call them links, but they all, basically, refer to something that identifies a chunk of memory. Most other languages only use them when you have to, but they are C. No pointers, no language.

You want an array? That’s really a pointer. Pass by reference? A pointer. Strings? Ah, they’re arrays! (Which are pointers.)
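A small, hypothetical example of all three in action:

    #include <stdio.h>

    /* "Pass by reference" in C: you pass a pointer. */
    static void increment(int *value)
    {
        *value += 1;
    }

    int main(void)
    {
        int numbers[3] = { 1, 2, 3 };   /* an array...                    */
        int *p = numbers;               /* ...which decays to a pointer   */
        const char *greeting = "hello"; /* a "string": a pointer to chars */

        increment(&numbers[0]);         /* pass by reference              */

        /* numbers[1] and *(p + 1) are exactly the same thing. */
        printf("%d %d %c\n", numbers[0], *(p + 1), greeting[0]);
        return 0;
    }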

The advantages of using pointers for just about everything in the entire language are mainly one-sided: it makes writing compilers easier. For the poor souls who actually end up using the language, all is not so rosy. As Steve McConnell puts it, “pointers are one of the most error-prone areas of modern programming” (Code Complete, section 11.9).

Some of the side-effects of using pointers are not immediately apparent, either. In most languages, arrays have bounds-checking (the ability for the language to raise an error if you try to access an element that doesn’t exist). But C doesn’t really have arrays; it has pointers and a little syntactic sugar that makes it look like it has arrays. Pointers don’t know how much memory is being pointed to, so you don’t get bounds-checking. If you’re lucky, your program causes a segmentation fault; if not, you might corrupt other data or, on some operating systems, the program itself.
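Here is the sort of thing I mean; the loop below compiles without a murmur even though the final iteration writes past the end of the array:

    #include <stdio.h>

    int main(void)
    {
        int data[4] = { 0, 0, 0, 0 };
        int i;

        /* The condition should be 'i < 4'. C neither notices nor cares;
           what happens at runtime depends entirely on what sits in memory
           after 'data'. */
        for (i = 0; i <= 4; i++) {
            data[i] = i;
        }

        printf("%d\n", data[3]);
        return 0;
    }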

It takes all types

Another of C’s biggest problems is its typing. As most people will already know, C allows you to put numbers into character variables, integers into floating points and any number of other nasty combinations. To C they’re all valid, but what they do is not always well defined or consistent. Even the same compiler sometimes does different things depending on the level of optimisation in use, the phase of the moon, etc.
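A few of the combinations C accepts without complaint; where the result depends on the platform, the comment says so:

    #include <stdio.h>

    int main(void)
    {
        char c = 300;    /* doesn't fit in a char: silently truncated, and
                            the exact value depends on whether char is
                            signed on your platform                        */
        int i = 3.9;     /* silently truncated to 3, not rounded           */
        float f = 1 / 2; /* integer division happens first: f is 0.0       */

        printf("%d %d %f\n", c, i, f);
        return 0;
    }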

The odd thing is that a weakly typed language doesn’t allow you to do any more than one with strong types; it merely allows you to do the wrong thing more easily. For a language used for large-scale software engineering projects the risk of poor code is just too great, and weak typing should be outlawed.

I’m sure that people are going to mention the myriad of warnings that modern compilers are able to produce, or that ‘lint’ has been available for nearly as long as the language has been. My counter-argument: I don’t see why you should have to add extra tools or read through pages of warnings in order to correct deficiencies in the source language!

The good bits

If C was truly as appalling as I’ve made out so far, no-one would actually be using it. The main ‘win’ for C, as far as I can see, is that it is small, well defined and widely available.

All three merits are in many ways different sides of the same coin (if you can imagine a three sided coin). To make the language well defined, it helps if it’s small. If lots of people are to use it, it needs to be simple enough that they aren’t put off (Ada anyone? Thought not.). Small and well-defined make it easier to write compilers too, meaning that it’s available on everything from the lowliest PC right up to mainframes.

The theory also goes that your programs should recompile on this range of machines too, but that’s not as true as we’d all like to think. If it were true, we wouldn’t need Java or hundreds of ‘#ifdef’s scattered throughout our source. My code tends not to be that low-level, but I wouldn’t like to make any promises about its portability. However, that doesn’t make C’s wide availability useless. Even if your program isn’t cross-platform, the skills required are. A C programmer can quickly write code for just about any machine.
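The ‘#ifdef’ habit is easy enough to demonstrate; even something as small as a path separator ends up wrapped in conditional compilation. The macro name below is my own, not a standard one:

    #include <stdio.h>

    /* The portability C gives you in practice: per-platform special cases
       sprinkled through the source. */
    #ifdef _WIN32
    #define PATH_SEPARATOR '\\'
    #else
    #define PATH_SEPARATOR '/'
    #endif

    int main(void)
    {
        printf("Path separator here: %c\n", PATH_SEPARATOR);
        return 0;
    }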

Summary

I’ve probably written more lines of C code than just about any other language, so when I say that I don’t like it I hope that you can see that I’m not being narrow-minded or prejudiced.

As I’ve mentioned above there are some things that I like about C, it’s just that there is so much to hate about it and, even at the time it was written, there wasn’t an excuse for it!

The language’s main features are its weak typing and over-use of pointers, both of which allow developers to make truly horrendous coding errors with ease.

If those were C’s only problems I might be able to forgive it, but they aren’t. C has more ways for both experienced and novice programmers alike to hang themselves than any other language I can think of (with the possible exception of Intercal). Even if it didn’t have the ‘=’ and ‘==’ operators to confuse, there’s still the wonderful ‘?:’ and a whole host of spectacularly error-prone API calls (does all your code check the value that ‘malloc’ returns?).
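The ‘=’ trap and the unchecked ‘malloc’ both fit in a few lines:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        int finished = 0;
        char *buffer;

        /* '=' where '==' was intended: this assigns 1 to 'finished' and
           the condition is always true. Most compilers accept it. */
        if (finished = 1) {
            puts("done (whether we were or not)");
        }

        /* And the malloc question. If the allocation fails the pointer is
           NULL, and without the check the strcpy scribbles over address
           zero. */
        buffer = malloc(64);
        if (buffer == NULL) {
            return 1;
        }
        strcpy(buffer, "checked this time");
        puts(buffer);
        free(buffer);

        return 0;
    }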

No, C, as a language, is a dinosaur. It deserves to wither and die. If you write anything other than an operating system kernel and use C, switch to another language. You’ll be far more productive when you start battling the problem rather than the language.

Sri Lanka, 2001

Sri Lanka (née Ceylon) is famous for its tea and Arthur C. Clarke, but, as I found out, there’s much more to it than that!

We started in Negombo, a beach resort a few miles away from the airport, moved round to take in the ‘Cultural Triangle’, then down to Kandy, Adam’s Peak and a tea plantation. Finally, we headed to Unawatuna, a beach resort near Galle (and the England – Sri Lanka test series), and then back to Negombo for the last night. We made plenty of stops along the route.

A combination of a new camera and the beauty of the place meant that I took as many pictures in Sri Lanka as I took in Georgia and Thailand put together! Here I present the highlights.

Click the small pictures below for a full size version.

All these pictures were taken on my Canon EOS300, mainly on Fuji Superia ISO 400 film. I ran out of film towards the end of the trip, so the last few are on Kodak Max (ISO 400).

The pictures on Kodak film have white lines down the middle: scratches on the negative that appear to have been put there during either developing or printing. I’m tempted to name the guilty company…

If the pictures have piqued your interest, there are a few web sites that you might want to visit:

  • The Lonely Planet guide is usually worth consulting.
  • If you prefer hard-copy (much more portable than your laptop!), you can buy a copy from Amazon (UK or US).
