Range

I’m biased. Like Mulder, I want to believe. Except I want to believe that being a generalist can work. And that’s what “Range,” by David Epstein, claims. Its subtitle is, “How generalists triumph in a specialised world.”

It’s not a challenging read. There is a lot of anecdata: examples of people who took a broad path and still succeeded. In that sense, maybe it’s like “Quiet,” which is about introverts. It doesn’t tell you how to succeed, only that it’s possible and that you’re not alone. Maybe that’s enough?

In that sense, it’s not a game changer for me. But there are some good lines in it, some scenarios that I could relate to. For example, I like this:

As education pioneer John Dewey put it in Logic: The Theory of Inquiry, “a problem well put is half-solved.”

This is absolutely my experience. The process of asking a well-formed question often leads to the answer. I have started writing questions on Stack Overflow countless times, but I’ve posted only twenty-one in the fourteen years I’ve been on the site.

I also like this, which I read as an argument for diverse teams:

“When all members of the laboratory have the same knowledge at their disposal, then when a problem arises, a group of similar minded individuals will not provide more information to make analogies than a single individual,” Dunbar concluded.

It’s no good to have a team where you have a lone genius and a bunch of grunts. It’s much better to have a team of differently smart people who can learn from each other; I can “trade” my deeper knowledge in one area for your experience in another. It seems that it’s not just good for the individuals but for the team, and possibly society as a whole, too.

I come across this a lot:

The best forecasters view their own ideas as hypotheses in need of testing. Their aim is not to convince their teammates of their own expertise, but to encourage their teammates to help them falsify their own notions.

I share some half-formed theory or idea with the expectation that other people will find the holes and tell me how much of an idiot I am. I am then surprised when people take it as a finished item and run with it.

Generalists … believe employers will view their varied background as a liability, so they downplay it.

And this is certainly me. Employers are almost always looking for a very specific list of requirements and often see detours in an unfavourable light. Including my iPhone development work on my CV sometimes worked against me, for example.

Recently, I’ve started to “own” my background much more. It becomes self-selecting: the companies that don’t value that extra experience won’t want to hire me, but nor would I want to work for them. A win for us both.

Back to the book. In the end, it’s a fine but not an essential read.

History

A few years ago I had a job where every new recruit would go through a long process of shock and gradual acclimatisation to the main software product.

What it did doesn’t matter as much as how it was built: it was an application developed on top of a proprietary programming language and user interface designer. The reaction was always the same. Why? Why?! Why would you reinvent Visual Basic on Unix? Why would you inflict a programming language even worse than Basic on developers?1

The answer, it turns out, is that the original developers were idiots.

No, of course that’s not true. But if they weren’t idiots, why did almost every developer start from that point of view when they first arrived at the company?

That brings us to Twitter and its new owner. One of his first public proclamations was that there are too many micro-services running and, worse, that most of them do nothing useful! The reply-guys all agree and, between them, argue that it’s entirely possible to rebuild Twitter from the ground up in weeks, possibly even a weekend if given enough pizza and Blue Bottle.

Were the original developers of Twitter also idiots?

I don’t know much about Twitter’s architecture, but I’d be willing to bet that, no, they were not stupid either.

If it’s not the original developers, what does it say about the critic? It says that they see the complexity but not the nuance. They see the current state but none of the decisions that led up to it. Without understanding the whole problem domain, they don’t see why that complexity exists.

In the case of my job, the software predated Visual Basic, which is a pretty good reason for not using it. It also had to work on Unix and be editable on client sites without extra tooling. By the time I worked there it may have been dated, but it was in production at many clients. It worked. Sure, it’s not how you’d architect it now, but the decisions that led to the design did make sense.

If it’s dated, then why not rewrite it? That question has been covered many times before, but the short answer is that when you rewrite, you focus so much on the clean, new solution that you forget why you added the warts to the old system. The layers upon layers of fixes and enhancements represent real-world experience. Those micro-services are there for a reason. Not understanding the reason doesn’t change that2.

This is not an argument against evolving the software, only that you should understand what you already have. Sometimes rewriting can be justified. Rationalising a bunch of micro-services isn’t always a ridiculous idea. But there’s an important difference between complex and complicated. Can you know which your inherited system is after a few days on the job?


  1. It was a stack-based language, along the lines of Forth and PostScript. Long-time users could do amazing things with tiny amounts of code. I never quite got there. ↩︎

  2. Logical fallacy: argument from incredulity. ↩︎

In The Open

I recently shared a blog post entitled “The Most Successful Developers Share More Than They Take” with the comment:

I try to practice “public by default” though, because of my work, it’s often “on the internal wiki” rather than fully open.

Unfortunately, the article spends a lot of time talking about blogging and podcasting, which perhaps undermined the point I was trying to make. If you want to write blogs, speak on podcasts, and present at conferences, good luck to you1. Not everyone will want to do those things, and that’s fine. I’m not advocating for that. I think most people can do what I actually meant.

Here’s the key point: make your “content” as widely available as practicable. Allow people to pull information when it’s convenient for them, rather than pushing whatever you assume they’d be interested in.

In this context, “public” doesn’t have to mean on the internet or even visible to your entire company. Nor does it mean pushing it to everyone. Updates do not need to land in everyone’s inbox.

Here are a few examples.

I work on multiple projects with a number of different clients. When I make notes, or update the status, or write meeting minutes, I put them on the company wiki rather than keep them on my local machine. My manager might be interested in how often I’m meeting with a specific client. The product team might be interested to learn which clients are using Kubernetes. I wouldn’t share most of this outside the company, but internally it’s not confidential.

If I build a small demo for a client or play with some software, I push my toy project to GitHub. Depending on what it is, it might be limited to my team, shared more widely with my colleagues, or made fully public, but I’ll be as open with it as I can.

If I’m researching something, a new technology or how to implement a particular use case, I’ll put my notes on the wiki.

If I ask a question, I will typically ask it in a public Slack channel rather than as a direct message.

An important aspect of all of these things is that I was already typing the information. The only difference is that instead of keeping it on my local machine or sharing with individuals, it’s “public.”

It means that other people can see the current state of my projects without asking for it. This immediately benefits me because I’m lazy. But in a distributed environment, where timezones are significant, it can save everyone time.

Asking questions in public can get answers from unexpected sources. That new guy might have experience you didn’t know about. Someone in a nearby timezone might get you an answer hours earlier than you were expecting. The person you would have asked might not know or be on vacation.

There are downsides, of course. If you ask a stupid question in public, then everyone will see how dumb you are. Your notes might document a terrible, old technology that you shouldn’t be using at all, or your solution might fail horribly.

But here’s the thing: you’re not stupid. I bet other people have a similar misunderstanding. And the journey itself can be interesting. As Kepler noted:

“What matters to me is not merely to impart to the reader what I have to say, but above all to convey to him the reasons, subterfuges, and lucky hazards which led me to my discoveries.”

Those “lucky hazards” might help others avoid the same mistakes. Can we fix the documentation? Include it in the company induction? Is there a blog or a conference talk in it?2 These steps may require a little extra work, but they have benefits for everyone, from future you to your colleagues and your customers.

Someone is wrong on the internet.

The other thing is that it’s a good strategy for getting the right answer. People can be too busy to respond, right up to the point where they find that Someone On The Internet Was Wrong. People will offer to fix your work far more readily than they will come up with a working solution from scratch.

What if no one looks up your status updates? What happens when your notes go unread? Well… nothing. You were already writing the notes and no one except you read them. Worst case, you’re exactly where you were.

In short, this is a terrible process if you want to be seen as being right all the time. However, if you value getting to the right answer, acknowledge that you’re a fallible human, and have an ego that can handle it, then I find it works well.

And, best of all, there is no need to speak on a podcast or to have a website.


  1. Again, possibly undermining my argument, I do write blogs — hello! — and have spoken at conferences. I’ve never appeared on a podcast, though! ↩︎

  2. I said I wasn’t advocating podcasting or blogging, but that doesn’t mean you shouldn’t if it’s the best way of sharing the information. ↩︎

Twitter

Sometimes it’s only when you start writing about a subject that you truly understand your opinion. That’s the approach I’m taking to answering the question: are you going to leave Twitter?

A few people have asked me over the last couple of months, and the only response I’ve had is that I’m not jumping ship and closing my account immediately.

But as the weeks have progressed, as I’ve written this piece, my thinking has evolved. It’s not that I’m going to immediately close my account but I can see The End approaching. Indeed, my usage of Twitter has dropped considerably.

When Twitter was delisted from the stock market, the question at the top of my mind was this: can you judge a company on the person or entity that owns it?

Twitter has been badly managed or owned by incredibly rich people (or both) for a long time, but they still have millions of users. Is a change of rich person really that significant?

Is the fact that Musk isn’t terribly likeable a factor? Many people bought products from Apple even though Jobs was famous for pushing people to breaking point. You can appreciate the vision even if you couldn’t work for the individual.

To be clear, I’m not saying that no-one avoided Twitter or Apple for these reasons. I’m sure some did, but not me and not millions of others. Is there a line that he could cross where I would leave immediately? Yes and, in fairness, he’s got pretty close by allowing back some of the extremists who had been banned.

And, circling back to the management, Twitter has been a mess pretty much since the beginning. They seem to have difficulty shipping anything. They’ve largely eliminated the “fail whale” but what big, beneficial features have come since? The algorithmic timeline?1

Like it or not, maybe the company needs shaking up.

Though, starting on the “cons” side, shaking up the company like this likely isn’t what is needed. It is the tech equivalent of the Brexit “solution” to Britain’s problems. Needing change isn’t the same as supporting chaos.

I don’t understand what over seven thousand people do at Twitter, but neither did Musk, hence the call going out to some of those laid off, asking them to come back. More slash and burn than measure twice, cut once.

And Twitter’s considered approach to changes is out, replaced by arbitrary deadlines and hunches. $20 for Twitter Blue? No, how about $8. Available on Monday. Or Tuesday. Could be next week.

One common reason that people have left Twitter previously is the volume of hate and harassment. While I don’t doubt their experience, it’s not something that I’ve seen personally. I stay in my little bubble with tech and jokes and a bit of politics.

But it doesn’t feel like we’re heading in the right direction. Musk’s naive views on free speech are perhaps the most worrying, not in the sense that they have the most direct, immediate effect but because they demonstrate that he doesn’t Get It.

My hope is that Musk quickly learns and pivots to a more sensible, nuanced position. But his recent tweets about American politics, and his abandoning warnings on COVID misinformation, make me think this isn’t likely. He seems to think that the problems at Twitter are about the technology, that removing a few micro-services and adding a few blade servers will make a difference. However, the problems are all about people: those who use the platform, those who advertise on it, and those who work there. Until he understands that, or defers to someone who does, things will continue to spiral.

In the end, for me as a user, Twitter is all about the people I interact with every day. If they leave, it doesn’t matter whether or not it’s because of something Musk said or did. Their absence will make the site not worth visiting any more.

In short, I stay on Twitter despite the company that runs it and despite the person who owns it. I’m there for the geeky discussions, the dad jokes and despairing at the state of British politics. If that goes away, so do I. Find me here if that happens.


  1. Most long-time Twitter users think it’s terrible. While it does occasionally surface interesting Tweets, I do think I’d prefer the original reverse chronological timeline, too. ↩︎

Reading 2022

I’ve been working from home for five years. I started well before the pandemic and, like many who have tried it, would have a hard time going back to an office full time. However, I used to spend my commute reading, and in those five years I have not managed to consistently find time to just sit and read.

What I’m saying is that 2022, from a book-reading perspective, has not gone well, even worse than 2021! I have only completed four books. I enjoyed two of them; the other two were a bit meh. Not actually bad, but I wouldn’t say that they justified their word count.

Next year I am, possibly masochistically, sticking with a target of twelve books. I hope I can do better, even though history suggests I won’t. My backlog of reading material continues to grow, and those books are not going to read themselves.

At the same time, I am migrating from the Amazon-owned Goodreads to community-owned BookWyrm, which is federated like Mastodon. I’m here if you want to follow my progress.

Programming Pearls

Every year I try to complete the Advent of Code. Every year I fail to finish. I get about halfway through, and the exercises start taking more time to complete than I have.

Every year I think about Jon Bentley’s Programming Pearls1, because the same kinds of challenges you find in Advent of Code can be found in the book. The main difference is the quality of the answers. At least in my case2. In the words of the preface: “programming pearls whose origins lie beyond solid engineering, in the realm of insight and creativity.”

The format of the book involves presenting a programming problem and then iterating on the solution while discussing the trade-offs involved at each step. It’s quite an old book by computing standards – the second edition was published in 1999 – and you may be put off by the use of C to illustrate the solutions. I would urge you to continue anyway, even if you are not an expert in C. You may also find some of the solutions to be hard work. Honestly, that’s part of the fun. If you don’t like having your brain turned inside out, this isn’t the book for you!

As you work your way through the chapters, you realise that the key for most of them is not esoteric optimisations or low-level hacking made possible by the C programming language. Instead, it’s data structures. If you somehow manage to store your data in the “correct” way, the algorithm to process it becomes simpler, clearer and faster. It’s almost miraculous.
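To give a flavour, here’s a rough sketch of the famous problem from the book’s first column: sort a large file of distinct, bounded integers using very little memory. This is my own paraphrase in C, not Bentley’s code, and it skips the input validation a real program would need. Store the data as a bit vector, one bit per possible value, and the “sort” becomes two linear scans.

    #include <stdio.h>
    #include <string.h>

    /* One bit per possible value in [0, N): the data structure
       does the sorting. Assumes the inputs are distinct; any
       duplicates would silently collapse into a single bit. */
    #define N 10000000
    static unsigned char bits[N / 8 + 1];

    int main(void) {
        int i;
        memset(bits, 0, sizeof bits);
        /* Pass 1: set a bit for each number read from stdin. */
        while (scanf("%d", &i) == 1)
            if (i >= 0 && i < N)
                bits[i / 8] |= (unsigned char)(1 << (i % 8));
        /* Pass 2: scan the bits in order; the output is sorted. */
        for (i = 0; i < N; i++)
            if (bits[i / 8] & (1 << (i % 8)))
                printf("%d\n", i);
        return 0;
    }

No clever sorting algorithm in sight: choosing the right representation makes the problem almost disappear.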

Of course, there’s a lively debate about “computer science” and whether it should be the subject of developer interviews. What I would say is that the kinds of people who like to attempt Advent of Code are very likely the kind of people who will also enjoy Programming Pearls.


  1. Not to be confused with Programming Perl. ↩︎
  2. In my defence, I usually use Advent of Code to learn (or brush up on) a new programming language rather than solve the puzzle in the best way. That’s my excuse, and I’m sticking to it. ↩︎
