Category Archives: Opinion

Thoughts on computers and the IT industry.

Starting Coding

Graham Lee’s “Why is programming so hard?” made me think about how I started programming and whether I’d be coding now if I were twelve.

When I began, I had a Sinclair Spectrum. Back then, home computers booted into a programming language (BASIC typically) and to do anything you needed to know at least one command. To be fair, even then many people didn’t get beyond J, Symbol Shift-P, Symbol Shift-P (‘LOAD “”’, the command used to load a program from tape).

My first memory of using a Spectrum is trying to type in a program from the manual. This was harder than it sounds as the Spectrum had a weird-and-wonderful command entry system in lieu of a lexer in the BASIC interpreter. I can’t even remember if I got it working, but I was hooked.

I still played games, but I occasionally looked at something and thought “how do they do that?” I’d then try to replicate it. Over time I’d spend more time coding and less gaming. I’d also read magazines and type in code from books and magazines. Then as now, working with other people’s code can be very instructive.

(But, to answer the question in the original blog, I can’t really remember the process of going from a few lines of trivial code to something more complex.)

I think three things — at least — were appealing at the time.

Firstly, I had one. I guess that sounds obvious but even a few years earlier that wouldn’t have been a given. Prior to my interest in computers, I was fascinated by cars. I could identify all the models on sight, memorised the statistics and could tell you that a new Ford Fiesta had terrible handling. But I didn’t really know any of the basics about actually driving as it would be another six years before I could get a provisional licence and start taking lessons. The possibility of owning a car was even more remote (I’ve still never owned one!). There’s a dramatic difference between reading about something and actually using it.

Secondly, it was possible to pretty much understand the whole machine, from the hardware right up to the BASIC interpreter. I didn’t, of course, but nothing seemed out of reach. I started in BASIC but I also played with Z80 assembler. The Spectrum was a simple enough machine that working at that level exposed a lot of what was happening with the hardware — interrupts, memory mapped hardware — even without breaking out a soldering iron.

Thirdly, even the state of the art seemed achievable. I was writing programs to fade text in from black to white and animating crude figures around the screen. What was Jet Set Willy but more of the same? (That’s a rhetorical question. In practice the difference was pretty substantial, but it was still one guy sitting alone at home.)

There was a learning curve but I thought that everything was understandable and figured that I could do even the hard stuff. This really spurred me on, made me try things and play around. There is no better way of learning; nothing kills my enthusiasm more than thinking I can’t solve a problem.

Fast forward to today and, while most people own a computer, the machines are not fully understandable and the state of the art seems distant and impossible for a mere mortal to achieve.

Look at a game that appears as simple as Monument Valley. It doesn’t take long to realise that it’s not that simple. Where would you even start? What would you need to understand just to put a few of your own pixels on screen?

I fear that I would get overwhelmed by the amount that I didn’t know — and possibly would never know — before I got interested enough to keep going.

So where is the “in” to programming these days?

It’s easy to get hung up on the programming language or environment, but I don’t think that’s the key. If it’s interesting people will put in the time to get past those kinds of obstacles. (Whether they should need to or not is another question.) Instead, we need projects that can start small and gradually get more complex, and they have to be in environments that the user understands, and the results need to look immediately impressive or be immediately useful.

This is why I don’t think learning to code in, say, Python is likely to be compelling for many people. It is programming, but few people use command lines any more and the results don’t look impressive. Printing the numbers from one to ten? Yawn.

Web programming is, perhaps, the obvious example. People can start by copy-and-pasting and graduate to writing code. Please don’t start the BASIC versus JavaScript debate in the comments.

I have heard about (but have no experience of) building “mods” for Minecraft. Much as I did, people play the game and get hooked. Rather than trying to replicate features, they try to modify them or enhance them.

Once people have left academia, the most popular method of starting programming that I’ve seen has been Excel. People start building complex spreadsheets, then add a few macros and then begin playing around with VBA. At each stage they can do a little more than they could before and at each stage it’s useful.

So, back to the original question: would I be programming now if I were twelve? I think the most likely route would be through something like Minecraft, but I never got hooked on anything more complicated than Bomb Jack, so I have to think the answer would be no. For my own sanity I probably shouldn’t think too hard about whether or not that’s a good thing.

Recruitment Tests

Over the years I’ve been asked to do a lot of programming aptitude tests. I’ve had to do some in the last couple of months and I’m deliberately writing this now, before I get the results back from the most recent one, so you won’t think that this post is just sour grapes…

I’m not going to get into the details of the tests because it doesn’t really matter what they are or who administered them for the purposes of this post.

I don’t like these tests. I don’t think they work well for either the candidate or even the company that is using them.

An obvious complaint is that the tests bear little resemblance to Real Life. On Twitter I cynically suggested that a more realistic test would involve trying to write a program based on an ambiguous and constantly changing specification. The test would end when you quit in frustration.

A bigger issue, I think, is the amount of time it takes and when they take place.

Let’s tackle the latter point first. Every time I’ve been asked to take a test it has been before I’ve spoken to anyone. All they’ve seen of me is my CV/resume. All I know about the company and job is what I see on their website and a brief job description. Am I a nice person? What is it like to work there? Neither party has any idea.

My objection to this is that we both have a lot to lose if the recruitment process goes wrong. I always consider an interview to be a two-way process — they need to learn about me and I need to understand more about them — yet the very first stage in the recruitment process is them demanding a couple of hours’ or a day’s commitment of my time but only a couple of minutes of their time.

To be clear, I have sat on the other side of the table. I do know that far too many candidates have no real hope of filling the position. But, equally, you don’t want to push away the most qualified candidates.

Two hours is a good chunk of time. A day is a lot of time. If I already have a full-time job and am just looking to move to something better, how are those requests going to fit into my schedule? Badly, I’d hazard. A full day is half my weekend. Two hours is an evening. (At least! Finding two contiguous hours at home these days is a challenge.)

Sure, the least qualified candidates will fail it but the very best candidates probably won’t even take it. They’ll just say “no” and move on to an opportunity that requires less upfront effort.

I came across this quote earlier today: “Never allow someone to be your priority while allowing yourself to be their option” (either Mark Twain or Maya Angelou, not sure who said it first). That seems very appropriate here. Should I really prioritise these companies over others? Are these companies that special? (A possible exception would be really big names like Google or Apple but I think it’s fair to say that none of the companies that have asked me to do tests have been in that class.)

So, what’s the answer? Certainly there’s no silver bullet that pleases everyone and finds only the very best matches. Having thought about this I think the best compromise would be asking candidates to do something as basic as FizzBuzz (but probably not exactly that as it would be very easy to Google the answer).
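For anyone who hasn’t come across it, FizzBuzz is usually stated as: print the numbers from 1 to 100, but print “Fizz” for multiples of three, “Buzz” for multiples of five and “FizzBuzz” for multiples of both. A minimal sketch in Swift (any language would do, and a real screening question would presumably vary the details) looks something like this:

    // FizzBuzz: the archetypal "can you write a loop?" screening question.
    for i in 1...100 {
        switch (i % 3, i % 5) {
        case (0, 0): print("FizzBuzz")
        case (0, _): print("Fizz")
        case (_, 0): print("Buzz")
        default:     print(i)
        }
    }

That is roughly the level of filter I have in mind: enough to show that a candidate can write a loop and a condition, short enough that it doesn’t eat an evening.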

To people who have never done any recruitment this probably sounds incredibly patronising. All I can say is that I wish that were true.

And if you really want to administer a test that takes more than thirty minutes, I think it would be more acceptable much later in the recruitment process. Or, failing that, offering to pay isn’t a ridiculous suggestion, though I suspect most employers would argue otherwise.

Afterword: After writing all this, why am I even taking the tests? Mostly because I’m currently between jobs so it’s difficult to argue that I don’t have the time.

After-afterword: I passed the test. I still don’t like them.

Swift Hate

I’m seeing a surprising amount of vitriol aimed at Swift, Apple’s new programming language for iOS and Mac development. I understand that there can be reasoned debate around the features (or lack thereof), syntax and even the necessity of it but there can be little doubt about the outcome: if you want to consider yourself an iOS developer, it’s a language that you will need to learn.

The only variable I can think of is when you learn it.

I think it’s perfectly reasonable to delay learning it as you have code to deliver now and because Swift is, well, very beta currently.

But asking what the point of Swift is isn’t constructive. Asking what problems can be solved exclusively by Swift makes no sense at all: you can do almost anything in most programming languages. Just because Intercal is Turing complete doesn’t mean that you’d want to use it for any real work. What varies between languages is what’s easy and what’s hard.

Objective-C undoubtedly makes some things easier than Swift. It’s a more dynamic language and its C foundations open up a lot of low-level optimisations that probably are not there in higher level languages.

But that same flexibility comes with a price: segmentation faults and memory leaks; pointers; easy-to-get-wrong switch statements; a lack of bounds checking. It also inherits a lot of ambiguity from the original C language specification, which makes certain automatic optimisations impossible.

How many applications require low-level optimisations more than safety? (And that’s ignoring that the biggest optimisations are usually found in designing a better algorithm or architecture.) How often is it better to have five lines of code instead of one? Every line is a liability, something that can go wrong, something that needs to be understood, tested and maintained.

Whatever its failings, it’s already clear that Swift is more concise and safer than Objective-C.
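As a rough illustration of what “safer” and “more concise” mean in practice (my own sketch, not anything from Apple’s documentation, and the names in it are invented): optionals and exhaustive switch statements make the compiler complain about the “value might be missing” and “case not handled” situations that C-family code typically only discovers at runtime.

    // A sketch of the kind of safety Swift enforces. 'Role' and 'users'
    // are made-up names, purely for illustration.
    enum Role { case admin, member }

    let users: [String: Role] = ["alice": .admin]

    // Optional binding: the "not found" case must be handled explicitly,
    // where a C-style lookup might quietly hand back a null pointer.
    if let role = users["bob"] {
        // Exhaustive switch: every case covered, no accidental fall-through.
        switch role {
        case .admin:  print("bob is an admin")
        case .member: print("bob is a member")
        }
    } else {
        print("no such user")
    }

None of this is impossible in Objective-C; the point is that in Swift the safe version is also the short version, which matters if you believe that every line is a liability.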

There are absolutely applications where Objective-C, C or C++ would be a better choice than Swift. It’s the old 80-20 rule applied to a programming language. And, for those resistant to learning a new language, the 80% is not on “your” side.

Right now, some of this requires a bit of a leap of faith. Swift clearly isn’t finished. You can either learn it now and potentially have some say on what the “final” version looks like, or you can learn it afterwards and just have to accept what’s there. But, either way, you’ll probably be using it next year. Get used to it.

Failure is an option

My first project out of university was a disaster.

The client was unhappy; technically it was a mess; no one knew what it was supposed to do, despite the volume of requirements and functional specification documents; and the quality of what was there was terrible. People were working hard but it wasn’t really going anywhere.

All of this, I should note, was happening before I joined. I didn’t realise how bad it really was at the time. The Real World was so different and new compared with university that I was blinded to the problems and just did what I thought was best.

Of course, despite the title of the piece, this isn’t a good option. But that’s not to say that no good came out of the whole process.

Six months, maybe a year, into my time on the project somebody realised that something needed to change. They replaced the project manager with one known inside the company as a “troubleshooter.” And it’s no exaggeration to say that within a few months the project as a whole had been turned around. Indeed, the relationship with the client changed so much that we won new business. And a senior manager, after a few drinks, happily sang the praises of this new manager. With the old manager, pretty much anything the client had to say would have been unquotable on a family website.

The details of what happened are too numerous and it happened far too long ago for me to remember them all anyway. Instead, what I take away from this is (at least) two things.

Firstly, even when things are badly wrong it is possible to turn them around. It can take effort and time that you may be unwilling or unable to invest, but deliberate change is always possible; just waiting for a miracle or for someone to stop paying the invoices is not productive.

Secondly, it has made me try to keep my eyes open, to try to understand what’s going on on the whole project rather than just in my little corner. On that first project, there were talented people doing good work but it wasn’t coordinated and it wasn’t being prioritised correctly. On other projects you see a few bad eggs who (deliberately or not) manage to sour the whole team. And on others there is so much bureaucracy, so much overhead that good people can’t do good work because of all the meetings and paperwork.

Of course, as a small cog in a big machine it is often not possible to actually fix these problems, but an awareness of what’s going on can make it easier to anticipate problems and try to work around the worst aspects. You alone may not be able to make the project a success but you can at least complete your tasks as well as possible.

Webcam

I’m not entirely sure what I was thinking. In about 2005 I bought an iSight, Apple’s relatively short-lived external webcam. It was a beautiful device. Sleek, easy to use and functional.

At least, I think it was functional.

For a device that cost me well over £100 I didn’t really think it through. No one else I knew at the time had a Mac with iChat. Or a webcam.

Before I finally gave in and sold it on eBay I did use it a few times with my then girlfriend (now wife). And it was really nice; like the future. Having grown up with old, slow computers the idea of playing video on them is still slightly magical to me. To have a computer simultaneously record, compress, transmit, receive, decode and display high resolution videos still strikes me as pretty amazing.

Even now, web chatting once a week, I think it’s neat. My son, before he was two, thought nothing of having long, detailed “conversations” with his grandparents. What’s high-tech to me is entirely normal to him.

And all this leads me to my latest technology purchase: a webcam. I got it for two reasons: firstly, I’ve been using my laptop with the lid closed a lot, which means I can’t use its built-in webcam. The second reason: it only cost £5.

I’ve probably already used it more than I ever used the iSight.

Is it as pretty as the iSight? Is it as well made? No and no. But it’s amazing what £5 can get you these days. I added the following as a review:

Considering the cost it’s remarkably well put together, comes with a decent length USB cable, has a flexible stand and works straight out of the box. The LEDs are a bit of a gimmick and the picture quality is a little muddy compared with the built-in camera on my MacBook, but it’s totally usable and easily forgiven given the price.

In less than ten years, webcams have moved from an expensive toy that I wanted to like but couldn’t actually use to a practically disposable tool (I’m sure there are drinks in your favourite coffee chain that cost more) that I use almost daily. I don’t mind being a foolish early adopter if it helps get genuinely useful technology into the hands of more people. If only all my other toys proved to be quite so useful.

Not so smart phones

The flood of new so-called smart watches continues. Some people seem to love theirs, others remain to be convinced.

Count me in with the unconvinced, though only because the current ones seem to be poorly conceived.

Marco Arment says:

Portability is critical to modern device usefulness, and there are only two classes that matter anymore: always with you, and not… Smartphones dominate always with you.

I think this gets to the heart of why the current range of devices — both those for sale and also those just announced at CES — just are not very compelling.

Let’s ignore for the moment the fact that most of them look awful.

Actually, no. Let’s not. You can’t sell a device for hundreds of pounds, one designed to sit on your wrist, replacing the only jewellery that many men wear, and make it look like a digital watch from 1981. I half expect the next smart watch to have a calculator keyboard on it, like the old Casios.

(For what it’s worth, I think the new Pebbles are a big step forward over the original version. Unfortunately I like my super-thin Skagen, so I’d still consider it a long way from acceptable.)

But yes, let’s assume the form-factor was more pleasing. Then it still doesn’t work. They’re not replacing anything. They’re accessories for an already expensive and always-with-you device. Sure, looking at your wrist is easier than getting your phone out of your pocket, but is it really that much easier? Is it several hundred pounds better? Is it worth the hit on your phone’s battery life and the inconvenience of yet another device to charge? I think most people will conclude, no.

So in summary, I think that smart watches have two main problems:

  1. Aesthetics
  2. They’re companion devices

The first is solvable with advancing technology and a degree of taste. The need for taste means that not every manufacturer will solve it, but once the components become small enough, putting them in an attractive case becomes possible.

Moore’s Law can partly solve the second point, too, but it’s not enough on its own. You’d also need changes in the way you interact with the device if it’s to be a full replacement for a smartphone.

I don’t think the answer to that is speech. Sure, Siri will get better, but there are places where you can’t or wouldn’t want to talk to your devices. And it would be hard to do an Android and get bigger and bigger screens — at least until we evolve to have bigger forearms.

Instead, I wonder if smart watches are really a bit of a technological dead-end. If, over time, components trend smaller and smaller, then why stop at wristwatch size?

The other side of the equation is the smart phone. Is it really the best form-factor that we can possibly imagine? Do we use smart phones because they are the best place to put small computers, radios and piles of sensors?

Or put another way: if you could have the same functionality that’s currently in your smart phone in the form of a watch, would you take it? If you could take all that functionality and not even have to wear a watch, would you take it?

The smart phone is a great compromise. It’s small and with you most of the time. But you still have to carry it. You can still easily lose or drop it and break it.

Smart watches and Google Glass try to solve these problems but, as Marco says, they do so with some pretty serious drawbacks. The smartphone is better for most people right now but that won’t always be the case.