Tag Archives: Programming

Programming Pearls

Every year I try to complete the Advent of Code. Every year I fail to finish. I get about halfway through, and the exercises start taking longer to complete than I have time for.

Every year I think about Jon Bentley’s Programming Pearls1, because the same kinds of challenges you find in Advent of Code can be found in the book. The main difference being the quality of the answers. At least in my case2. In the words of the preface: “Programming pearls whose origins lie beyond solid engineering, in the realm of insight and creativity.”

The format of the book involves presenting a programming problem and then iterating on the solution while discussing the trade-offs involved at each step. It’s quite an old book by computing standards – the second edition was published in 1999 – and you may be put off by the use of C to illustrate the solutions. I would urge you to continue anyway, even if you are not an expert in C. You may also find some of the solutions to be hard work. Honestly, that’s part of the fun. If you don’t like having your brain turned inside out, this isn’t the book for you!

As you work your way through the chapters, you realise that the key for most of them is not esoteric optimisations or low-level hacking made possible by the C programming language. Instead, it’s data structures. If you somehow manage to store your data in the “correct” way, the algorithm to process it becomes simpler, clearer and faster. It’s almost miraculous.
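To give a flavour of what I mean, here's a sketch in Swift (my rendering, not the book's C) of the anagram problem from one of the early columns. Sign each word with its letters in sorted order, use that signature as a dictionary key, and the grouping algorithm almost disappears:

// Group words that are anagrams of each other. The insight is the
// data structure: a dictionary keyed by each word's sorted letters,
// so all anagrams land in the same bucket.
let words = ["pots", "stop", "tops", "spot", "part", "trap"]
var groups: [String: [String]] = [:]
for word in words {
    let signature = String(word.sorted())        // "opst" for every anagram of "pots"
    groups[signature, default: []].append(word)
}
print(Array(groups.values))
// e.g. [["part", "trap"], ["pots", "stop", "tops", "spot"]] (order isn't guaranteed)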

Of course, there’s a lively debate about “computer science” and whether it should be the subject of developer interviews. What I would say is that the kinds of people who like to attempt Advent of Code are very likely the kind of people who will also enjoy Programming Pearls.


  1. Not to be confused with Programming Perl. ↩︎
  2. In my defence, I usually use Advent of Code to learn (or brush up on) a new programming language rather than solve the puzzle in the best way. That’s my excuse, and I’m sticking to it. ↩︎

Panic

The whole team got this email today. Okay, it wasn’t today and these are not the exact words, but it was something like this:

We have a serious regression in build 456. We have set the project back rather than taken it forward. We need the utmost focus and commitment on fixing it. We’ve broken it and we stay in the office until it’s fixed.

I’ve had a few of those messages over the years, and while they’re intended to focus minds, they often have the opposite effect. Let’s examine why.

Projects see the same mistakes made over and over, and this email encompasses many of those sins; it’s one message, but it represents a microcosm of a large part of my career.

Here are a few problems that I see immediately:

  • No problem definition
  • No person accountable
  • No next action
  • A deadline but no understanding of the work involved

This has a number of consequences.

There are studies showing that if you have a heart attack in a crowded area you are less likely to receive life-saving CPR from a stranger than if only one other person is present.

In this case the passers-by (the project team) don’t know that they’re needed. Without a problem definition, I don’t know if the regression was caused by one of my changes or even if it affects my code. Without a person accountable, everyone likely assumes that it’s someone else’s code. “Someone would have mentioned it if it was my code.” And with no “next action” it’s easy to assume that someone else will handle it.

Arguably the deadline is not really a deadline. What if the fix would take a week to implement? Instead, it’s a target. You can’t take an estimate and reduce it to fit an externally imposed date. It doesn’t work like that. You may hit your deadline if you’re lucky, but a good plan doesn’t need luck.

Even worse, the arbitrary deadline and lack of direction give the entire project a sense of panic. I think the intention was urgency, but urgency implies you know what you need to do and that you need to do it quickly. As we’ve seen, the task above is neither well defined nor assigned. The only clear things in the original email are the version that is broken and the deadline.

However, the biggest sin is questioning the commitment and competence of the people needed to resolve the issue. In my experience, this is rarely the case, yet asking the question can make it true. If you’re not trusted, why put in the extra work? Next time you need to make a change, are you going to do it the “right” way or the way with the absolute lowest risk? Putting in a lot of good work and then getting kicked for your efforts is not a good incentive for doing a job well.

Unix: A History and a Memoir

This is probably the geekiest book I’ve read in a long time. It’s basically one step up from reading the source code for your favourite operating system. Or perhaps having a favourite operating system.

What I would say is that Unix has been pretty much the only constant throughout my career. I started with Solaris and HP-UX at university. I installed an early version of Linux on my personal machine to avoid the thirty-minute walk from home to the university labs. I’ve done consulting, I’ve developed both vertical and horizontal applications1, C and C++, Swift and Java, banking and telecoms. Pretty much the only thing they’ve all had in common was some sort of Unix underpinning.

And that’s bizarre. So much of computing changes in five years, yet Unix wasn’t even new when I started at university!

This book is the story, the memoir, of one of the people who built it. And it’s fascinating, but probably only for a relatively small audience. I loved the first chapter, where Kernighan name-drops some of the people he worked with. Plauger. Aho. Ullman. Honestly, if you’ve not heard of them, you’re probably not the target market for this book.

Also, if you’re Richard Stallman, you’re probably not the target for this book either: in the last chapter, he says that GNU software is “open source.”

On the other hand, if you’re not Stallman and you know about some or all of the people involved, then you are the target for this book. Read it. You’ll love it.


  1. Is that common terminology? A “vertical” application is one that’s applicable only to one industry, such as a trading application. A “horizontal” application is usable by many, like a database or operating system. ↩︎

Starting Coding

Graham Lee’s “Why is programming so hard?” made me think about how I started programming and whether I’d be coding now if I was twelve.

When I began, I had a Sinclair Spectrum. Back then, home computers booted into a programming language (typically BASIC) and to do anything you needed to know at least one command. To be fair, even then many people didn’t get beyond J, Symbol Shift-P, Symbol Shift-P (‘LOAD ""’, the command used to load a program from tape).

My first memory of using a Spectrum is trying to type in a program from the manual. This was harder than it sounds as the Spectrum had a weird-and-wonderful command entry system in lieu of a lexer in the BASIC interpreter. I can’t even remember if I got it working, but I was hooked.

I still played games, but I occasionally looked at something and thought “how do they do that?” I’d then try to replicate it. Over time I’d spend more time coding and less gaming. I’d also read magazines and type in code from books and magazines. Then, as now, working with other people’s code can be very instructive.

(But, to answer the question in the original blog, I can’t really remember the process of going from a few lines of trivial code to something more complex.)

I think three things — at least — were appealing at the time.

First, I had one. I guess that sounds obvious but even a few years earlier that wouldn’t have been a given. Prior to my interest in computers, I was fascinated by cars. I could identify all the models on sight, memorised the statistics and could tell you that a new Ford Fiesta had terrible handling. But I didn’t really know any of the basics about actually driving as it would be another six years before I could get a provisional licence and start taking lessons. The possibility of owning a car was even more remote (I’ve still never owned one!). There’s a dramatic difference between reading about something and actually using it.

Secondly, it was possible to pretty much understand the whole machine, from the hardware right up to the BASIC interpreter. I didn’t, of course, but nothing seemed out of reach. I started in BASIC but I also played with Z80 assembler. The Spectrum was a simple enough machine that working at that level exposed a lot of what was happening with the hardware — interrupts, memory mapped hardware — even without breaking out a soldering iron.

Thirdly, even the state of the art seemed achievable. I was writing programs to fade text in from black to white and animating crude figures around the screen. What was Jet Set Willy but more of the same? (That’s a rhetorical question. In practice the difference was pretty substantial, but it was still one guy sitting alone at home.)

There was a learning curve but I thought that everything was understandable and figured that I could do even the hard stuff. This really spurred me on, made me try things and play around. There is no better way of learning; nothing kills my enthusiasm more than thinking I can’t solve a problem.

Fast forward to today and, while most people own a computer, the machines are not fully understandable and the state of the art seems distant and impossible for a mere mortal to achieve.

Look at a game that appears as simple as Monument Valley. It doesn’t take long to realise that it’s not that simple. Where would you even start? What would you need to understand just to put a few of your own pixels on screen?

I fear that I would get overwhelmed by the amount that I didn’t know — and possibly would never know — before I got interested enough to keep going.

So where is the “in” to programming these days?

It’s easy to get hung up on the programming language or environment, but I don’t think that’s the key. If it’s interesting, people will put in the time to get past those kinds of obstacles. (Whether they should need to or not is another question.) Instead, we need projects that can start small and gradually get more complex; they have to be in environments that the user understands; and the results need to look immediately impressive or be immediately useful.

This is why I don’t think learning to code in, say, Python is likely to be compelling for many people. It is programming, but few people use command lines any more and the results don’t look impressive. Printing the numbers from one to ten? Yawn.

Web programming is, perhaps, the obvious example. People start by copying-and-pasting and can graduate to writing code. Please don’t start the BASIC versus JavaScript debate in the comments.

I have heard about (but have no experience of) building “mods” for Minecraft. Much as I did, people play the game and get hooked. Rather than trying to replicate features, they try to modify them or enhance them.

Once people have left academia, the most popular method of starting programming that I’ve seen has been Excel. People start building complex spreadsheets, then add a few macros and then begin playing around with VBA. At each stage they can do a little more than they could before and at each stage it’s useful.

So, back to the original question: would I be programming now if I was twelve? I think the most likely route would be through something like Minecraft, but I never got hooked on anything more complicated than Bomb Jack so I have to think the answer would be no. For my own sanity, I probably shouldn’t think too hard about whether or not that’s a good thing.

Swift Types

If you look at the Swift Language guide, you get the distinct impression that the type system is sleek and modern. However, the more you dig into it, the more eccentricities you find.

The one I’m going to look at today makes sense only if you look at the problem domain from a slightly skewed perspective. I’ve been trying to think whether this is a sensible, pragmatic way of designing a language or a mistake. Judge for yourself.

So, the feature. Let’s define a dictionary:

var test1 = [ "Foo" : "Bar" ]

Check the type and we find that it’s of type Dictionary<String,String>. The generics and type inference are doing exactly what you’d imagine.

test1["Test"] = "Works"

So basically it’s all good.
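(As a quick aside, current Swift will confirm the inferred type for you; type(of:) didn’t exist in the early betas, so treat this one-liner as a present-day check:)

print(type(of: test1))   // Dictionary<String, String>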

So, what type is this expression?

var test2 = [:]

And why does this not work?

test2["Test"] = "Doesn't work"

Let’s take a step back. What’s the problem? Well, [:] is an empty dictionary but gives us no clue what the type is. Remember, Swift dictionaries and arrays use generics, so the compiler only allows objects of a particular type to be added.

A good guess for the type would be Dictionary<AnyObject,AnyObject>. But a little fishing around tells you that’s not the case, because AnyObject is neither Hashable nor Equatable, and keys need to be both.
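You can see the problem if you try to spell that guess out explicitly; a sketch of what the compiler rejects:

var guess: Dictionary<AnyObject, AnyObject> = [:]
// compile error: AnyObject doesn't satisfy the Hashable requirement on keys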

The answer? test2 is an NSDictionary. That is, in this one circumstance, Swift extends outside its native dictionary type and decides to use a class found in Foundation.

Once you know that, it is clear that the second line should be:

test2.setValue("Does work now", forKey:"Test")

Maybe if you’re familiar with the guts of both Objective-C and Swift this behaviour makes sense, but a language built-in returning a completely different type just because it can’t figure out the type feels broken to me.

In the end I think I’ve convinced myself that, while it might be convenient to allow this syntax, it’s a bad idea to saddle the language with these semantics so early on. In a few years, when no one uses Objective-C or when Swift is no longer fully tied to Cocoa, will this make sense?

I would prefer this to be a compiler error, with the correct approach being explicit about the type:

var test2:Dictionary<String,String> = [:]
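With the annotation in place, the original subscript assignment works as expected; a quick sketch using a fresh name so it doesn’t clash with test2 above:

var test3: Dictionary<String,String> = [:]
test3["Test"] = "Works"   // a native Swift Dictionary again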

Thoughts?

Swift Hate

I’m seeing a surprising amount of vitriol aimed at Swift, Apple’s new programming language for iOS and Mac development. I understand that there can be reasoned debate around the features (or lack thereof), syntax and even the necessity of it, but there can be little doubt about the outcome: if you want to consider yourself an iOS developer, it’s a language that you will need to learn.

The only variable I can think of is when you learn it.

I think it’s perfectly reasonable to delay learning it as you have code to deliver now and because Swift is, well, very beta currently.

But asking what the point of Swift is isn’t constructive. Asking what problems can be solved exclusively by Swift makes no sense at all: you can do almost anything in most programming languages. Just because Intercal is Turing complete doesn’t mean that you’d want to use it for any real work. What varies between languages is what’s easy and what’s hard.

Objective-C undoubtedly makes some things easier than Swift. It’s a more dynamic language, and its C foundations open up a lot of low-level optimisations that probably are not available in higher-level languages.

But that same flexibility comes with a price: segmentation faults and memory leaks; pointers; easy-to-get-wrong switch statements; a lack of bounds checking. It also inherits a lot of ambiguity from the original C language specification, which makes certain automatic optimisations impossible.

How many applications require low-level optimisations more than safety? (And that’s ignoring that the biggest optimisations are usually found in designing a better algorithm or architecture.) How often is it better to have five lines of code instead of one? Every line is a liability, something that can go wrong, something that needs to be understood, tested and maintained.

Whatever its failings, it’s already clear that Swift is more concise and safer than Objective-C.
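To make that concrete, here’s a small and admittedly cherry-picked sketch of the kind of thing I mean: no index bookkeeping, and the optional returned by first forces the empty case to be handled instead of crashing:

let scores = [3, 9, 27]
let doubled = scores.map { $0 * 2 }   // one line, no loop or index arithmetic
if let first = doubled.first {        // Optional: the empty case must be handled
    print(first)                      // prints 6
}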

There are absolutely applications where Objective-C, C or C++ would be a better choice than Swift. It’s the old 80-20 rule applied to a programming language. And, for those resistant to learning a new language, the 80% is not on “your” side.

Right now, some of this requires a bit of a leap of faith. Swift clearly isn’t finished. You can either learn it now and potentially have some say in what the “final” version looks like, or you can learn it afterwards and just have to accept what’s there. But, either way, you’ll probably be using it next year. Get used to it.