Code: The Hidden Language of Computer Hardware and Software

While I have more than enough books on my “to read” list, I am always up for suggestions. “Code” came up in a Twitter conversation about computer hardware. I noted that one of my favourite courses from my Computer Science degree (in hindsight if not at the time) was one where we went from “What is electricity?” right up to a pretty much fully working CPU. “Code” was recommended as it covers the same ground.

If you’d like to refresh your memory, or if you never took such a course, this is a great introduction to how computers work.

It’s a book of two halves.

The first half covers the foundations and principles. It starts with concepts like Morse code, before building up from relays, to logic gates, to half-adders, to a complete, working CPU.

That bit is great. Clear steps and descriptions. I was reminded of many things that I first picked up at university and learned some details that I’d either completely forgotten or had never internalised at all.

After you get a working CPU the book largely turns into a history lesson, albeit one told from the perspective of the year 2000. It talks about the rest of the computer but, out of necessity, in significantly less detail.

I found this second part to be weaker, though this may be because I’m coming at it from 2021 rather than 2000. These last sections have dated much more than the earlier, CPU-focused section, and I wonder whether, had the book been about building just the CPU rather than the whole computer, it would have dated better.

Having said all that, while weaker than the first half, it’s still well written and easy to understand.

Even if you skim the later sections, what quickly becomes apparent is that a computer has layer upon layer of abstractions. You may not understand every layer in the same amount of detail but knowing that they exist is, I think, useful as a software developer.

I can’t help but recommend this book to anyone interested in the subject.

The Computers That Made Britain

I’m still fascinated by the computers of the eighties. Without well-known standards, every machine was different, not only from those of other manufacturers but also from older machines from the same company. As a user it was terrible. Back the wrong horse and you’d be stuck with a working computer with no software and no one else to share your disappointment with.

But looking back, there’s a huge diversity of ideas all leaping onto the market in just a few years. Naturally, some of those ideas were terrible. Many machines were rushed and buggy, precisely because there was so much competition. Going on sale at the right time could make or break a machine.

Tim Danton’s “The Computers That Made Britain” is the story of a few of those machines.

He covers all the obvious ones, like the Spectrum and the BBC Micro, and others whose stories I’d not seen before, like the PCW8256.

While it’s called “The Computers That Made Britain” rather than “Computers that were made in Britain,” I would argue with some entries. The Apple II is certainly an important computer but, as noted in the book, it didn’t sell well in the UK. Our school literally had one, and I think that’s the only one I’ve ever seen “in the wild.” Sales obviously aren’t the only criterion, but the presence of these machines presumably pushed out the New Brain and the Cambridge Z88 (among others). Since this book is about the computers that made Britain, I would have liked to see more about them and less about the already well-documented American machines like the Apples and IBMs.

The chapters are largely standalone, meaning you don’t need to read them in order. I read about the machines I’ve owned first, before completing a second pass on the remaining ones. They’re invariably well researched, including interviews with the protagonists. Some machines get more love than others, though. The Spectrum chapter finishes with a detailed look at all the subsequent machines, right up to the Spectrum Next, though curiously missing the SAM Coupe. But the Archimedes gets nothing similar, even though there was a whole range of machines. Did they run out of time, or was there a page limit?

But those are minor complaints for an otherwise well put together book. Recommended.

It’s published by the company that makes Raspberry Pis, which you could argue is the spiritual successor to the Sinclair and Acorn machines. You can download the book for free, but you should buy it! The above link is for Amazon, but if you’re near Cambridge you should pop into the Raspberry Pi store and pick up your copy there instead.

If this is your kind of book, I would also recommend “Digital Retro” and “Home Computers: 100 Icons that defined a digital generation,” both of which are more photography books than stories.

I’m a Joke and So Are You

“I have read just about enough about a large enough number of things to be wrong about nearly everything.”

It took me a long time to read “I’m a Joke and So Are You” by Robin Ince, but you shouldn’t take that as a reflection of the book itself. Instead, it’s more about timing and other distractions.

If you’re familiar with Ince’s work on the Infinite Monkey Cage, the structure of the book should make sense. The theme is “What makes comedians a special breed?” He asks comedians and scientists about various aspects, throwing in thoughts, quips and stories of his own.

He starts with childhood and meanders through many aspects of life from “the self” to imposter syndrome, finishing with death.

While it’s not earth shattering, with no huge reveals that make you reevaluate your life, it’s an entertaining and occasionally thought-provoking read.

Talking to Strangers

I met a man once. He was tall and dark, with straight hair parted on the left. His smooth, fair skin contrasted with his choice of a dark, tailored suit. He rarely wore a tie, but in a small concession to whimsy his cuff links bore small images of Daffy Duck. When anyone noticed, he’d laugh it off, saying they were a gift from his youngest.

Sat in his office on the fifteenth floor of an anonymous office block in the City of London he had a realisation, one that would change his life forever.

The man is entirely fictional. But the last two paragraphs are real. So is this one. And you had to read them to get to the fourth paragraph, which is where, if you’re lucky, I might finally get to the point.

And the point is this.

There are a lot of words and not a lot to say. And that is my problem with Malcolm Gladwell’s “Talking to Strangers.”

It’s not that it’s bad. It’s easy to follow and read. It’s well written. There are some good ideas.

The difficulty is that it’s several hundred pages long, yet there are not several hundred pages’ worth of ideas. Each chapter uncovers a concept and then beats all the fun out of it by giving the life stories of various people by way of example.

It’s a fine structure but, wow, the ratio of words to ideas is way out of whack.

What are Registers?

When people say that Twitter is a cesspool of conspiracy and abuse, I don’t recognise it based on my experience. My Twitter timeline is all jokes and geeky chat1, and that’s where this post takes its cue:

When I started learning assembler, no site ever mentioned what registers were good for. Wish it had said:

CPU talking to a RAM chip is slow, registers are a bit of memory built into the CPU in which you load numbers from RAM, do several calculations, and only THEN write back.

I said that this was a RISC-centric approach and was challenged to come up with a better definition.

It’s a harder question to answer than I initially thought. Every time I came up with a pithy definition, I thought of an exception. And with every clarification I got further and further away from 280 characters.

With no character limit, what is a register?

Wikipedia says that it’s “a quickly accessible location available to a computer’s processor,” which is the definition that I was arguing against. Thanks, Wikipedia.

Nevertheless, I maintain that I’m right. Let’s dig into the definition further.

It wasn’t the “quick access” bit I didn’t like. Registers are faster than main memory2. They’re also faster than on-die caches3.

The RISC-y bit was in the second sentence: load bits of memory, do a few calculations, write back.

To explain why that’s not always true we have to take a quick tour of CPU design history. By necessity I’ve missed out many details or made sweeping generalisations4. I don’t think this detracts from my point.

First, a quick aside: for the sake of completeness, I should point out that there’s nothing sacred about registers. There are CPU architectures that do not have them, primarily stack-based machines, though there may be others. Let’s ignore them for now.

There have been a few waves of CPU and instruction set architecture design, and the use of registers has changed over that time.

Processor        Registers   Instructions
Intel 4004 [5]   16          46
MOS 6502         5           56
Intel 8086       8           ? [6]
Motorola 68000   15          77
PowerPC          32          ?
ARM              31          ?

In The Beginning, making a CPU at all was a challenge. Getting more than a few thousand transistors on a single chip was hard! The first chip designers optimised for what was possible rather than to make things easy for programmers.

To minimise the number of transistors, there were few registers, and those that were present had predefined uses. One could be used for adding or subtracting (the accumulator). Some could be used to store memory addresses. Others might record the status of other operations. But you couldn’t use the address registers for arithmetic or vice versa.

The answer to the question “what is a register for” at this point is that it saves transistors. By wiring up the logic circuits to a single register and having few of them anyway, you could have enough space to do something.

As it became easier to add transistors to a slice of silicon, CPU designers started to make things easier for programmers. To get the best out of the limited hardware, many developers wrote code in assembler7. Therefore, making it easier for programmers meant adding new, more powerful hardware instructions.

The first CPUs might have had an instruction to “copy address n from memory into a register” and another to “add register 2 to the accumulator.” The next generation built on those foundations with instructions like “add the value at address n to the accumulator and write to address m.” The complexity of instructions grew over time.

Working on these early machines was hard partly because they had few registers and the instructions were simple. The new instructions made things easier by being able to work directly with the values in memory.

These new instructions were not magic, however. Having a single instruction to do the work didn’t make copying data from memory quicker. Developers trying to eke out the best performance had an increasingly difficult time figuring out the best combination of instructions. Is this single instruction that takes twenty cycles faster than these other three instructions that nominally do the same thing?

Around the same time, writing code in higher level languages became increasingly popular. The funny thing with compilers and interpreters is that it’s easier to write and optimise them when you use a limited set of instructions. All those esoteric instructions that were designed for people were rarely used when computers wrote the assembler code.

CPU designers figured out that they could make real-world code execute more quickly by heavily optimising a small number of instructions.

This led to completely different CPU designs, called RISC, or reduced instruction set computers. Rather than have a small number of special purpose registers and a large number of complex instructions, RISC insisted on large numbers of general purpose registers and few instructions8. The reduction in the instruction count was partly achieved by making arithmetic operations work only on registers. If you wanted to add two numbers stored in memory, you first had to load them into registers, add them together and write the result back out to memory.
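To make that load/store pattern concrete, here’s a small sketch of my own in C; it’s an illustration of the idea rather than anything from the book or the original thread. The local variables r0 and r1 stand in for registers: both operands are loaded from memory, added register to register, and only then written back.

    #include <stdio.h>

    /* Sketch of the load/store pattern. The locals r0 and r1 stand in
       for CPU registers: both operands are loaded from memory, added
       register-to-register, and only then written back. */
    int add_via_registers(const int *a, const int *b, int *result)
    {
        int r0 = *a;      /* load the first operand into a "register"  */
        int r1 = *b;      /* load the second operand into a "register" */
        r0 = r0 + r1;     /* register-to-register add                  */
        *result = r0;     /* store the result back to memory           */
        return r0;
    }

    int main(void)
    {
        int x = 2, y = 3, sum;
        add_via_registers(&x, &y, &sum);
        printf("%d\n", sum);   /* prints 5 */
        return 0;
    }

On the older designs described above, much of that middle sequence could be a single instruction working directly on memory; the C source looks the same either way, which is part of why compilers got on so well with the smaller, more regular instruction sets.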

At this point, the answer to the question “what is a register for” changed. It became a sensible option to throw away the transistors used to implement many of the complex instructions and use them for general purpose registers. Lacking instructions that worked directly on memory, the definition of a register became “temporary storage for fast computations” — pretty much what we started with.

Since then, the original designs, with lots of instructions and a small number of registers (CISC), and the newer ones, with lots of registers and few instructions (RISC), have to some extent merged. RISC chips have become less “pure” and have gained more special purpose instructions. CISC chips have gained more registers9 and, internally at least, have taken on many of the characteristics traditionally associated with RISC chips10.

Let’s loop back to the original question. Are registers a bit of memory built into the CPU in which you load numbers from RAM, do several calculations, and only then write back?

We’ve seen how registers are not necessary. We’ve seen that their importance has waxed and waned. But, if we had to distil the answer down to a single word or sentence, what would that be?

On current hardware, on most machines, much of the time, the answer is: yes, they are a bit of memory built into the CPU in which you load numbers from RAM, do several calculations, and only then write back.

Was I being pedantic for no reason?


  1. I appreciate that there’s an element of white, male privilege here. ↩︎
  2. Even on architectures like Apple’s M1 chip where the main memory is in the same package as the CPU. ↩︎
  3. I’m going to assume you know a fair few concepts here. My focus is more “how did we get here” than “what do these things do.” ↩︎
  4. While I’ve mostly done this deliberately, I’m sure I’ve done it by accident in a few places too. ↩︎
  5. Four bits wide! ↩︎
  6. It was surprisingly hard to get a count of the instructions for most of these. ↩︎
  7. Machine code is the list of numbers that the CPU understands. Assembler is one level above that, replacing the numbers with mnemonics. Even the mnemonics are considered impenetrable by many programmers. ↩︎
  8. Of all the gross simplifications in this piece, this is probably the biggest. ↩︎
  9. I don’t want to get into all the details here but how they’ve managed to do this without substantially changing the instruction set is both fascinating and architecturally horrifying. Rather than add new instructions to address the new registers, they have a concept called “register renaming” where the register names, but not the values, get reused. ↩︎
  10. Again, without wishing to get into too much detail, modern CISC CPUs typically convert their complex instructions into more RISC-like instructions internally, and execute those. ↩︎

Sweet Caress

I rate William Boyd as one of my favourite authors, so when I say that “Sweet Caress” isn’t his best work you have to calibrate it appropriately.

As a structure, it’s almost identical to “Any Human Heart.” It’s a journal or memoir of an interesting character, covering pretty much their entire life. In this one, Amory Clay is born early in the twentieth century and lives a full life as a photographer in Europe, North America and Asia. The timing allows her to see the World Wars, the rise of fascism, the Vietnam war and much more besides. It covers her successes and failures, and the consequences of them both.

What makes it work is that Clay is entirely believable. She’s fun and brave, impulsive and flawed. Lacking any of those qualities might have made it less of an entertaining read or less plausible.

Boyd is a great writer. He has the characters, the structure and the story all wrapped up in a way that appears effortless. There are surprises and twists. Even the ending is satisfying.

The plan was to read more fiction this year. This was a good start.
