
Unsociable Christmas Tree (Part 2)

Back in 2020 I talked about my Raspberry Pi powered Christmas Tree. Today’s blog delves into what I’ve done this year.

Previously, I decided that setting up a web server on which family members could change the light patterns on the tree was too boring. At the time, I was thinking of extending the sample Python code – much as I did with the OpenCV frontend – with a simple Python-based web server.

There are a few problems with this approach. Firstly, while I’ve never set up a Python web server, I don’t feel it would be much of a challenge. More of a challenge would be my web development skills, which are, for the most part, stuck in about 1997. If you want formatting done with tables rather than CSS, I’m your man.

I found a way to solve both problems, while also using a Raspberry Pi Zero W for something outside its intended purpose.

I decided to use the Vaadin Java framework for the user interface. With Vaadin you don’t need to worry about HTML and formatting: you say, for example, “I want a block with a group of radio buttons and a text box”, and it generates the HTML, JavaScript and backend code for you. You can think of it as SwiftUI for the web.
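
To give a flavour, here’s a minimal sketch of a Vaadin view – the class, field and pattern names are my own inventions, not the code actually driving the tree:

```java
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.component.radiobutton.RadioButtonGroup;
import com.vaadin.flow.component.textfield.TextField;
import com.vaadin.flow.router.Route;

// A view mapped to the root URL; Vaadin generates the HTML and JavaScript.
@Route("")
public class TreeView extends VerticalLayout {

    public TreeView() {
        RadioButtonGroup<String> pattern = new RadioButtonGroup<>();
        pattern.setLabel("Light pattern");
        pattern.setItems("Off", "All on", "Random twinkle");

        TextField name = new TextField("Your name");

        // The listener runs on the server, so this is where the UI would
        // hand the chosen pattern over to the LED-driving code.
        pattern.addValueChangeListener(event -> {
            // switchPattern(event.getValue()); // hypothetical hook
        });

        add(pattern, name);
    }
}
```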

I wrapped it all in Spring Boot, because using an enterprise-grade UI library on a slow, single-core, 32-bit ARM CPU with 512MB of memory isn’t ridiculous enough.
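
The Spring Boot side is tiny. This is a sketch with my own class name; with the vaadin-spring-boot-starter dependency on the classpath, any @Route-annotated views are picked up automatically:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// The entire entry point: Spring Boot starts an embedded web server
// and Vaadin registers the @Route views it finds on the classpath.
@SpringBootApplication
public class TreeApplication {
    public static void main(String[] args) {
        SpringApplication.run(TreeApplication.class, args);
    }
}
```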

Jumping ahead for a second – spoiler alert – the application takes about two minutes to start on the Pi Zero W. It performs reasonably well once it’s started, but I think it’s fair to say that few corporations are going to be using Pi Zeros as part of their infrastructure.

It sounds bad, but the other way of looking at it is that it runs! A computer that costs £15 is able to run 47MB of modern Java 17 code [1]. It’s not practical, but it is pretty amazing.

Before figuring out the user interface, the first thing I needed to work on was how to get Java to talk to the LED Christmas Tree. The sample Python code uses some library code that reads and writes to the Pi’s GPIO pins. Can you even do that in Java?

It didn’t take a lot of searching around to find Pi4J, a Java library that does exactly that.

It took a lot of experimentation to get it working. The defaults didn’t work – no errors, it just didn’t light up the LEDs. The Pi Zero isn’t an ideal Java development platform, so I did the coding on my MacBook, which also isn’t an ideal Java development environment. I was lazy, in a bad way, and didn’t properly build a test environment; instead I built a JAR file on my laptop, manually copied it over to the Pi and ran it. I also found that Visual Studio Code isn’t as good as IntelliJ for writing Java programs.

Through a process of experimentation, I realised that I needed to use the pigpio backend provider [2]. The downside of using pigpio is that it requires super-user access. The alternative LinuxFS plugin does not need to be run as root, but it does not currently support access to the GPIO pins [3].
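
For reference, selecting pigpio in Pi4J v2 looks roughly like this. It’s a sketch that blinks a single LED – the pin address and names are my assumptions, not the tree’s actual wiring – and it needs the pigpio plugin on the classpath and root privileges to run:

```java
import com.pi4j.Pi4J;
import com.pi4j.io.gpio.digital.DigitalOutput;
import com.pi4j.io.gpio.digital.DigitalState;

public class BlinkOneLed {
    public static void main(String[] args) throws InterruptedException {
        // Auto-context picks up whatever providers are on the classpath.
        var pi4j = Pi4J.newAutoContext();

        // Explicitly ask for the pigpio digital output provider rather
        // than relying on the defaults.
        var config = DigitalOutput.newConfigBuilder(pi4j)
                .id("led")
                .address(2)                         // BCM pin number (assumed)
                .shutdown(DigitalState.LOW)
                .initial(DigitalState.LOW)
                .provider("pigpio-digital-output");

        var led = pi4j.create(config);

        for (int i = 0; i < 10; i++) {
            led.toggle();
            Thread.sleep(500);
        }

        pi4j.shutdown();
    }
}
```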

Having proved that it was possible to flash the Christmas Tree’s lights in Java, I set out to port the Python library that came with it. The basics were pretty straightforward. The sample code, however, uses sleep statements between the various lighting patterns, something that I didn’t want to do. I overestimated how complicated avoiding them would be and ended up simplifying my code. This is rare; I normally start super-simple and iterate.
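
One way to avoid the sleeps – a sketch with hypothetical names, not the code that ended up in the project – is to schedule each frame of a pattern on a background scheduler rather than blocking between them:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Runs one frame of a lighting pattern at a fixed interval without ever
// calling Thread.sleep(), so the thread serving web requests is not blocked.
public class PatternRunner {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> current;

    public synchronized void start(Runnable frame, long periodMillis) {
        stop(); // cancel any pattern that is already running
        current = scheduler.scheduleAtFixedRate(
                frame, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    public synchronized void stop() {
        if (current != null) {
            current.cancel(false);
        }
    }
}
```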

Finally, I connected the two parts together, et voilà: a working, Java-based program driving the Christmas Tree lights with a web interface for configuration.

| Computer  | JVM                        | Startup time (seconds) |
|-----------|----------------------------|------------------------|
| Pi Zero W | openjdk version “17.0.8”   | 111                    |
| MacBook   | openjdk version “21.0.1”   | 4.2                    |
| Pi 4      | openjdk version “17.0.9”   | 15                     |
| Pi 5      | openjdk version “17.0.9”   | 5.6                    |

Interestingly, the Pi Zero and the Pi 4 both start the JVM in “client” mode, while the MacBook and the Pi 5 use “server” mode [4]. The Pi Zero also runs in 32-bit mode. I suspect, but didn’t verify, that the difference between the 4 and 5 is at least partly down to the newer instruction set in the 5, which supports native “atomic” operations [5].

Software is never completed, only abandoned. And in that spirit, there are a number of things that it would be nice to improve.

The most important aspect is that it’s currently running as root (well, my user with sudo). One perk of this is that it can trivially run on port 80. The risks of running as root on my home network are pretty low, but it’s still not a good idea. I’ll need to update the Pi library so it can access the GPIO pins as a non-privileged user. I’ll then need to figure out a clean way to have the server accessible on port 80 (a web proxy?).

Since we’re not aiming for practical, another option would be to go for the microservices approach: one service to manage the LEDs and another for the UI.

Who knows if I’ll get time for all that. Next year, maybe.


  1. I’m sure it used to cost less than this. The Pi Zero 2 W is now available for about the same price, but has a quad-core, 64-bit CPU. ↩︎
  2. “Pig-pio” is how I always think of it. ↩︎
  3. As I write this, I see that there’s a new release of the Pi4j library which does support accessing the GPIO pins with LinuxFS. I’ll update the code when I get a chance. ↩︎
  4. Switching the Pi 5 to client mode didn’t make a huge difference. ↩︎
  5. At work, we benchmarked Apache Ignite on an ARM-based server, and found that it performed relatively poorly because it lacked the Large System Extensions. ↩︎

Unsociable Christmas Tree

Last year I got myself a Raspberry Pi-powered Christmas Tree. It has eleven LEDs, and you can program the Pi to switch them on and off.

Naturally, doing all that takes time, and last year I didn’t have very much. I just downloaded the sample project and set it up with a random flashing pattern.

It amused me, anyway.

This year I wanted to get a little more sophisticated. I decided that it should be interactive. My first thought was a web server where people could connect using their phones and change the LED patterns. Then I thought better of it. Because of COVID we have no guests, rendering it far less interesting. Also, setting up a web server is hardly very exciting.

I wanted it to detect something, but the options were somewhat limited. The tree connects to the Pi’s GPIO pins, and it doesn’t leave any free to plug in anything else.

Next, I looked at my Arduino components. Could I do the sensing on the Arduino and the lights on the Pi? A sensible argument would be that wiring up two small computers like that would be ridiculously and unnecessarily complicated. You wouldn’t be wrong, but the whole point of this exercise is to be ridiculous.

In my Arduino bag of tricks, I have distance, presence, light and moisture sensors, buttons, switches and displays. I could have cobbled something together, but while reading Twitter I had a better idea: how about I plug in a webcam and have the Pi detect faces and show different patterns depending on who is looking at it?

Luckily, other people have done most of the hard work. I based most of what follows on a blog post, “How to train your Raspberry Pi for facial recognition.”

There are lots of steps, but they’re relatively easy to follow. It was all working fine until it asked me to build OpenCV from source, at which point I ran out of disk space.

I gave up for a while.

What I’m saying is that while I was planning on something a little more elaborate, I ran out of time. Again.

When I came back to it, I realised that OpenCV was something that many Raspberry Pi owners were likely to use, and I was surprised that I had to build my own version.

Luckily my surprise was supported by the actual Raspberry Pi software archive: if you install python3-opencv you get all the libraries you need, without the hassle of building your own [1]. As a side benefit, this removes about half of the steps in the tutorial!

The rest worked incredibly well “out of the box.” I ran it, trained it on a few unsuspecting family members and was very impressed that it worked the first time. It uses a lot of CPU on my Pi 4, so I’m not sure that it would work on any of the earlier models.

My next task was to hook in the Christmas Tree code so that the tree responds to changes in what the webcam could see. And… that’s where I ran out of time.

The interface between the facial recognition and the tree lights is, well, minimal. If it finds someone it recognises, all the lights come on; otherwise, it goes dark. You can see the code on GitHub – only a handful of lines are mine.

It technically meets the requirements of an Unsociable Christmas Tree but is certainly less ambitious than I would have liked. Still, getting machine learning working on a Pi and connecting it to something physical was fun. Maybe next year I’ll get the time to bring everything together?


  1. In theory, installing python3-opencv means you can skip the whole of point 4, “Install OpenCV by running the following commands in your Terminal,” from the guide. In practice, I tried to build my own version of OpenCV so it’s possible that I have extra libraries installed that you also need. If I get the time, I’ll come back and try this on a default installation of Raspberry Pi OS. ↩︎