Tag Archives: work

History

A few years ago I had a job where every new recruit would go through a long process of shock and gradual acclimatisation to the main software product.

What it did doesn’t matter as much as how it was built: it was an application developed on top of a proprietary programming language and user interface designer. The reaction was always the same. Why? Why?! Why would you reinvent Visual Basic on Unix? Why would you inflict a programming language even worse than Basic on developers?1

The answer, it turns out, is that the original developers were idiots.

No, of course that’s not true. But if they weren’t idiots, why did almost every developer start from that point of view when they first arrived at the company?

That brings us to Twitter and its new owner. One of his first public proclamations was to declare that there are too many micro-services running and, worse, that most of them do nothing useful! The reply-guys all agree and, between them, argue that it’s entirely possible to rebuild Twitter from the ground up in weeks, possibly even a weekend if given enough pizza and Blue Bottle.

Were the original developers of Twitter also idiots?

I don’t know much about Twitter’s architecture, but I’d be willing to bet that, no, they weren’t stupid either.

If it’s not the original developers, what does it say about the critic? It says that they see the complexity but not the nuance. They see the current state but not the decisions that led up to it. They see complexity but, without understanding the whole problem domain, they don’t see why that complexity exists.

In the case of my job, the software predated Visual Basic, which is a pretty good reason for not using it. It also had to work on Unix and be editable on client sites without extra tooling. By the time I worked there, it may have been dated but it was in production at many clients. It worked. Sure, it’s not how you’d architect it now but the decisions that led to the design did make sense.

If it’s dated, then why not rewrite it? That has been covered many times before, but the short answer is that when you design the replacement, you focus so much on the clean, new solution that you forget why you added the warts to the old system. The layers upon layers of fixes and enhancements represent real-world experience. Those micro-services are there for a reason. Not understanding the reason doesn’t change that2.

This is not an argument against evolving the software, only that you should understand what you already have. Sometimes rewriting can be justified. Rationalising a bunch of micro-services isn’t always a ridiculous idea. But there’s an important difference between complex and complicated. Can you know which your inherited system is after a few days on the job?


  1. It was a stack-based language, along the lines of Forth and PostScript. Long-time users could do amazing things with tiny amounts of code. I never quite got there. There’s a small sketch of the style after these notes. ↩︎

  2. Logical fallacy: argument from incredulity. ↩︎
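
For anyone who has never met a stack-based language, here’s a rough flavour of the style that footnote describes: operands go onto a stack first, and each word or operator then consumes them. This is a deliberately tiny, hypothetical sketch in Python (the dup word is borrowed from Forth), not the proprietary language itself, whose details were never public.

```python
# A minimal stack-machine (RPN) evaluator, purely illustrative.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
}

def evaluate(program: str) -> list:
    """Evaluate a whitespace-separated stack program; return the final stack."""
    stack = []
    for token in program.split():
        if token == "dup":        # Forth-style word: duplicate the top of the stack
            stack.append(stack[-1])
        elif token in OPS:        # binary operator: pop two values, push the result
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[token](a, b))
        else:                     # anything else is treated as a number
            stack.append(float(token))
    return stack

# "3 4 + dup *" pushes 3 and 4, adds them (7), duplicates the 7, multiplies:
print(evaluate("3 4 + dup *"))  # [49.0]
```

The density is both the appeal and the trap: once you think in terms of the stack, a handful of tokens does a lot of work, but a newcomer reading “3 4 + dup *” has very little to hold on to.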

In The Open

I recently shared a blog post entitled “The Most Successful Developers Share More Than They Take” with the comment:

I try to practice “public by default” though, because of my work, it’s often “on the internal wiki” rather than fully open.

Unfortunately, the article spends a lot of time talking about blogging and podcasting, which perhaps undermined the point I was trying to make. If you want to write blogs, speak on podcasts, and present at conferences, good luck to you1. Not everyone will want to do those things, and that’s fine. I’m not advocating for that. I think most people can do what I actually meant.

Here’s the key point: make your “content” as widely available as practicable. Let people pull it when it’s convenient for them, rather than pushing the information you assume they’d be interested in.

In this context, “public” doesn’t have to mean on the internet or even visible to your entire company. Nor does it mean pushing it to everyone. Updates do not need to land in everyone’s inbox.

Here are a few examples.

I work on multiple projects with a number of different clients. When I make notes, update a project’s status, or write meeting minutes, I put them on the company wiki rather than keeping them on my local machine. My manager might be interested in how often I’m meeting with a specific client. The product team might be interested to learn which clients are using Kubernetes. I wouldn’t share most of this outside the company, but internally it’s not confidential.

If I build a small demo for a client or play with some software, I push my toy project to GitHub. Depending on what it is, it might be limited to my team, open to any of my colleagues, or fully public, but I’ll be as open with it as I can.

If I’m researching something, a new technology or how to implement a particular use case, I’ll put my notes on the wiki.

If I ask a question, I will typically ask it in a public Slack channel rather than as a direct message.

An important aspect of all of these examples is that I’m already typing the information anyway. The only difference is that instead of keeping it on my local machine or sharing it with individuals, it’s “public.”

It means that other people can see the current state of my projects without having to ask. This immediately benefits me because I’m lazy. But in a distributed environment, where timezones are significant, it can save everyone time.

Asking questions in public can get answers from unexpected sources. That new guy might have experience you didn’t know about. Someone in a nearby timezone might get you an answer hours earlier than you were expecting. The person you would have asked might not know or be on vacation.

There are downsides, of course. If you ask a stupid question in public, then everyone will see how dumb you are. Your notes might document a terrible, old technology that you shouldn’t be using at all, or your solution might fail horribly.

But here’s the thing: you’re not stupid. I bet other people have a similar misunderstanding. And the journey itself can be interesting. As Kepler noted:

“What matters to me is not merely to impart to the reader what I have to say, but above all to convey to him the reasons, subterfuges, and lucky hazards which led me to my discoveries.”

Those “lucky hazards” might help others avoid the same mistakes. Can we fix the documentation? Include it in the company induction? Is there a blog or a conference talk in it?2 These steps may require a little extra work, but they have benefits for everyone, from future you to your colleagues and your customers.

Someone is wrong on the internet.

The other thing is that it’s a good strategy for getting the right answer. People can be too busy to respond, right up to the point where they find that Someone On The Internet Is Wrong. People will offer to fix your work more readily than they will come up with a working solution from scratch.

What if no one looks up your status updates? What happens when your notes go unread? Well… nothing. You were already writing notes that no one but you read. Worst case, you’re exactly where you were.

In short, this is a terrible process if you want to be seen as being right all the time. However, if you value getting to the right answer, acknowledge that you’re a fallible human, and your ego can handle it, then I find it works well.

And, best of all, there is no need to speak on a podcast or to have a website.


  1. Again, possibly undermining my argument, I do write blogs — hello! — and have spoken at conferences. I’ve never appeared on a podcast, though! ↩︎

  2. I said I wasn’t advocating podcasting or blogging, but that doesn’t mean you shouldn’t if it’s the best way of sharing the information. ↩︎

Panic

The whole team got this email today. Okay, it wasn’t today and these are not the exact words, but it was something like this:

We have a serious regression in build 456. We have set the project back rather than taken it forward. We need the utmost focus and commitment on fixing it. We’ve broken it and we stay in the office until it’s fixed.

I’ve had a few of those messages over the years, and while they’re intended to focus minds, they often have the opposite effect. Let’s examine why.

Projects see the same mistakes made over and over, and this email encompasses many of those sins; it’s one message, but it represents a microcosm of a large part of my career.

Here are a few problems that I see immediately:

  • No problem definition
  • No person accountable
  • No next action
  • A deadline but no understanding of the work involved

This has a number of consequences.

There are studies showing that if you have a heart attack in a crowded area, you are less likely to receive life-saving CPR from a stranger than if you’re in an area with only one other person: the bystander effect.

In this case the passers-by (the project team) don’t know that they’re needed. Without a problem definition, I don’t know if the regression was caused by one of my changes or even if it affects my code. Without a person accountable, everyone likely assumes that it’s someone else’s code: “someone would have mentioned it if it was my code.” And with no “next action,” it’s easy to assume that someone else will handle it.

Arguably, the deadline is not really a deadline; it’s a target. What if the fix would take a week to implement? You can’t take an estimate and reduce it to fit an externally imposed date. It doesn’t work like that. You may hit your deadline if you’re lucky, but a good plan doesn’t need luck.

Even worse, the arbitrary deadline and lack of direction give the entire project a sense of panic. I think the intention was urgency, but urgency implies you know what you need to do and that you need to do it quickly. As we’ve seen, the task above is neither well defined nor assigned. The only clear things in the original email are the version that is broken and the deadline.

However, the biggest sin is questioning the commitment and competence of the people needed to resolve the issue. In my experience, this is rarely the case, yet asking the question can make it true. If you’re not trusted, why put in the extra work? Next time you need to make a change, are you going to do it the “right” way or the way with the absolute lowest risk? Putting in a lot of good work and then getting kicked for your efforts is not a good incentive for doing a job well.

Maker, Manager and Consultant Schedule

Have you heard about the Maker Schedule? The idea resonated with me because it explained a lot about my productivity.

For the uninitiated, here is how the two types are defined.

The manager’s schedule is [where] each day [is] cut into one hour intervals. You can block off several hours for a single task if you need to, but by default you change what you’re doing every hour.

The Maker’s schedule, on the other hand:

[Makers] generally prefer to use time in units of half a day at least. You can’t write or program well in units of an hour. That’s barely enough time to get started.

The challenge I have is that I’m neither a maker nor a manager. As a “consultant”1 I fall somewhere between the two. The variety is what makes it interesting to me. The variety is also what makes it difficult.

Sometimes I have to sit and concentrate for a large block of time. Maybe I’m sketching out an architecture, researching something, building a prototype, or debugging some code. Perhaps I’m writing a report or putting together a presentation.

On other days I have back-to-back meetings: workshops, delivering presentations or training, and explaining the results of my research.

But the worst are days where I have an hour’s meeting followed by an hour’s “gap,” repeated. Technically, I’m only in meetings for half the time, but it wipes out the whole day in terms of productivity. I spend half the day either context switching or stuck with a tranche of time that’s too small to do anything useful. After a few days of these meetings, I feel very busy but not very productive.

The internet is full of suggestions on how to manage your schedule correctly. Few of them work for me.

A popular suggestion is to schedule meetings together, always in the morning or late in the day, for example. But what if I have clients in India and colleagues in the US? (I do.) There can be meetings at any time, and timezones mean that I can’t reasonably move them. I do have a certain amount of latitude in terms of moving some meetings around, but there are constraints beyond my productivity. If I (an individual) am trying to set up a workshop with half-a-dozen people and they’re paying for the privilege of meeting me, I’m at a disadvantage, schedule-wise.

Another suggestion is to block out the time in my calendar. This is probably the most effective method, but it’s not without its challenges. It’s hard to ‘hold the line’ against people booking meetings. Whether it’s because someone doesn’t know how to check your calendar or simply doesn’t care, frequently pushing back when you have no other firm commitments can make you look like you’re not a “team player.”

Another reason it’s hard to keep those time blocks is that it can be hard to know how long they need to be. Consultants usually have to maximise “billable time,” which makes it hard to turn down sure-fire client-facing hours. If I block out four hours in my diary, what’s to say I won’t get stuck after an hour and have to wait for feedback? What do I then do with the remaining three hours? It’s a real optimisation and opportunity cost problem.

So what’s the answer? I wish I had the One True Way. I’m not sure that one exists. Unlike many of the other people giving advice on the internet2, I am not 100% in control of my schedule. The best I’ve been able to come up with is a combination of all of the above. I try to have meeting-free days. I try to block time where it makes sense.

To an extent, it’s my job to be flexible and available for clients. Maybe I just need to learn to live with it?


  1. I’m a “field engineer”, which means I help clients get the most out of my company’s software. ↩︎
  2. As ever, you get what you pay for. ↩︎

Generalist Software Engineering

I greatly enjoyed Graham Lee’s series of posts about specialisation versus generalisation in software engineering1, quite possibly because it describes me.

My background is a little different from Lee’s, though, so I thought it was worth sharing.

I have a two-tier experience2. With a few minor blips, Unix has been a constant technology underpinning since my first year at university. I started using Linux around the time 1.0 was released. I got a Mac when — or possibly before — OS X was ready for mainstream use, because it was Unix with a nice UI. At work I’ve seen the change from big Solaris and HP-UX machines, to Linux, to containerised applications (which are normally based on minimal Linux distributions). Sure, the different Unix variants are not exactly the same, but most of them have something bash-like, and ls does the same thing everywhere, even if the more esoteric options vary.

I should say that this is largely a preference. I don’t like Windows but that’s not an objective criticism. I do joke about it from time to time3 and I do admit the limits on my knowledge, but I don’t refuse to work with it!

Sadly, being a long-time Unix user is not a career.

I started my career at a “pure” consulting company. Each client I worked with wanted to do something different and I ended up using varied technology stacks. This was great from a “generalist” point of view. I flitted from Uniface to PL/SQL to Perl to Oracle Applications4, but beyond abstract concepts like “problem solving” I didn’t get anything like a transferable skill. The obvious path would have been management but that wasn’t where I wanted to go. I saw friends and colleagues specialising, and earning considerably more money for the privilege.

I appreciate that I’m likely leaving money on the table, but I stumbled on a solution that works for me: working in the field engineering teams of software vendors.

Being in the field team means that I work directly with customers, and they end up wanting to do all kinds of things. In a year I can work with dozens of use cases, satisfying my need for novelty. At an end-user company, I’d be looking at the same small number of use cases the whole time.

At the same time, there is also a degree of specialisation. By working with the same product all the time, I can become The Expert in it and some adjacent technologies. For example, I’m pretty good with Kubernetes and Java these days. I need to write code and sketch out software architectures. My knowledge has to be deep enough to demonstrate credibility but I don’t have to build production code. Additionally, I can bring domain expertise. I wouldn’t sell myself purely on my banking knowledge, but it’s a nice value add.

This may not be the solution for you. There may even be better options for me that I’ve not found yet! But I thought it was worth documenting my experience, since most of the articles I’ve seen are for more traditional “software engineering” or management roles. Other positions are out there if you know where to look.


  1. Though I wasn’t able to write anything about it in a timely manner! ↩︎
  2. Maybe second tier too, but in this case I mean there are at least two layers. ↩︎
  3. There’s a Dilbert cartoon I use occasionally. It’s from before Adams went off the rails. ↩︎
  4. Whether you’d want to make a career out of any of those technologies is a different story. ↩︎

The backlash

The backlash has begun. Four months ago, everywhere was proclaiming that working from home was both the New Hotness and Here to Stay. In the last few weeks, those same venues have switched gears, documenting how people can’t wait to go back to the office. What changed?

Nothing. The novelty simply wore off.

I get it. The last time I had a work-from-home job, I didn’t really enjoy it. It was a decade ago and the technology wasn’t quite there: no Slack, an emphasis on phone calls rather than video chats, and much weaker collaboration tools like wikis. I was also one of only a few people working remotely. But, perhaps most significantly, I was at a different stage of my life.

If the pandemic and subsequent lockdown had hit back then, how would I have coped? It’s impossible to say for sure, of course, but less well I think.

And if you go back much before that, I wouldn’t have been able to work at all. I remember a teacher friend grumbling that she took work home but I didn’t. “Couldn’t” would have been more accurate: I didn’t have a £250,000 computer at home! A lot of businesses don’t even have machines like that any more.

Ultimately, people are learning that working remotely is a skill. If you just plonk people in disparate places and hope for the best, you’re probably going to fail over the medium term. Those unplanned meetings in corridors really won’t happen. As suspected, people won’t schedule a thirty-minute meeting to discuss… well, who knows what… the whole point is the serendipity. If these things are important — and I think they are — then you can’t just say “That doesn’t work remotely,” give up and insist everyone return to the office. That’s a total abdication of leadership!

The meeting won’t happen in exactly the same way, but you can encourage public conversations in Slack or Teams. You can have “happy hours” when the team can dial in for chit-chat.

I’m sure you have your own ideas. The point is that collaboration can and does happen remotely. Sure, there are cases where it can’t happen, or at least can’t happen easily. If you’re designing or making hardware it’s difficult.

Hopefully, as the threat of COVID-19 lifts, we’ll remember the lessons we’ve learned. We shouldn’t go back exactly to how things were before. The people who like working from home should still be able to do so, at least some of the time. I want to believe that the idea that people can’t be productive at home is no longer widespread. On the other hand, it’s not a panacea. We shouldn’t be closing all the office space in cities, and we shouldn’t force people to work from home if they prefer being in an office.

In the end, treating staff like adults and trusting them to Get The Job Done is rarely a bad policy.