Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

March 19, 2018

Waiting for the other Gantry to drop

When cutting the first side of the gantry I used the "traditional" hold-down method of clamps and the like on the edges. That works OK when you have a much larger piece of alloy than the part you are cutting. In this case there isn't much spare material in the height axis (left/right as shown) and, as you can see, very little in the x axis (up/down in the below image). My clamping allowed more vibration on the first cut than I like, so I changed my approach for the second side of the gantry.

For the second gantry side, after flipping things in the software so that I was coming in from the other side, I drilled out four M6 holes and countersank them.

This way the bolts (M6x40) were almost flush with the work piece. These bolts go straight through the plywood and connect with T-slot nuts in the alloy bed of the CNC, so there isn't much scope to use bolts that are too long for this application. Countersinking the bolts helps on a machine with limited Z travel, as using non-stubby drill bits really limits the amount of free play and clearance you can get. The downside of this work holding is that you are left with four M6 holes that don't really need to be in the final product.

In this case it doesn't matter, as I can use them and a new plate to mount one or two cameras on the back gantry facing forwards. I have found that the best vantage point for CNC viewing is from outside the room, watching the video streams.

In future jobs I might move the countersunk bolts to the edge so they are not on the final work piece.

So now all I have to do is free this piece from the waste, tap a bunch of M5 holes, then drill and tap 5 holes on 3 sides of the new gantry pieces, and I'm getting close to loading it on.

March 18, 2018

Dynamic DNS on your own domain

I recently moved my dynamic DNS hostnames from my previous provider (now owned by Oracle) to No-IP. In the process, I moved all of my hostnames under a sub-domain that I control, in case I ever want to self-host the authoritative DNS server for it.

Creating an account

In order to use my own existing domain, I registered for the Plus Managed DNS service and provided my top-level domain.

Then I created a support ticket to ask for the sub-domain feature. Without it, No-IP expects you to delegate your entire domain to them, whereas I only wanted to delegate a single sub-domain (dyn).

Once that got enabled, I was able to create hostnames like machine.dyn under my domain in the No-IP control panel. Without the sub-domain feature, you can't have dots in hostnames.

I used a bogus IP address (e.g., from the RFC 5737 documentation range) for all of the hostnames I created, in order to easily confirm that the client software is working.

DNS setup

On my registrar's side, here are the DNS records I had to add to delegate anything under the dyn sub-domain to No-IP's nameservers:

dyn NS
dyn NS
dyn NS
dyn NS
dyn NS
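
To confirm the delegation is in place, you can query the NS records directly (using as a stand-in for your own domain):

dig +short NS

which should list the same No-IP nameservers.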

Client setup

In order to update its IP address whenever it changes, I installed ddclient on each of my machines:

apt install ddclient

While the ddclient package won't help you configure your No-IP service during installation or enable the web IP lookup method, this can all be done by editing the configuration after the fact.

I put something like the following in /etc/ddclient.conf (the login, password and hostname lines are placeholders for your own values):

protocol=noip
use=web,, web-skip='IP Address'
login=your-noip-username
password=your-noip-password
machine.dyn.example.com

and the following in /etc/default/ddclient to run it as a daemon once an hour:

run_daemon="true"
daemon_interval="3600"
Then restart the service:

systemctl restart ddclient.service

Note that you do need to change the default update interval (daemon_interval above) or the server will ban your IP address.


To test that the client software is working, wait 6 minutes (there is an internal check which cancels any client invocations within 5 minutes of another), then run it manually:

ddclient --verbose --debug

The IP for that machine should now be visible on the No-IP control panel and in DNS lookups:

dig +short

Auto apply latest package updates on OpenWrt (LEDE Project)

Running Linux on your router and wifi devices is fantastic, but it’s important to keep them up-to-date. This is how I auto-update my devices with the latest packages from OpenWrt (but not firmware, I still do that manually when there’s a new release).

This is a very simple shell script which uses OpenWrt’s package manager to fetch a list of updates, then installs them, rebooting the machine if that was successful. The log file is served up over HTTP in case you want to see what’s been happening (assuming you’re running the uhttpd service).

Make a directory to hold the script.
root@firewall:~# mkdir -p /usr/local/sbin

Make the script.
root@firewall:~# cat > /usr/local/sbin/ << \EOF
#!/bin/sh
opkg update
PACKAGES="$(opkg list-upgradable | awk '{print $1}')"
if [ -n "${PACKAGES}" ]; then
  opkg upgrade ${PACKAGES}
  if [ "$?" -eq 0 ]; then
    echo "$(date -Iseconds) - update success, rebooting" >> /www/update.result
    exec reboot
  else
    echo "$(date -Iseconds) - update failed" >> /www/update.result
  fi
else
  echo "$(date -Iseconds) - nothing to update" >> /www/update.result
fi
EOF

Make the script executable and touch the log file.
root@firewall:~# chmod u+x /usr/local/sbin/
root@firewall:~# touch /www/update.result

Give it a run manually, if you want.
root@firewall:~# /usr/local/sbin/

Next schedule the script in cron.
root@firewall:~# crontab -e

My cron entry looks like this, to run at 2am every day.

0 2 * * * /usr/local/sbin/

Now just start and enable cron.
root@firewall:~# /etc/init.d/cron start
root@firewall:~# /etc/init.d/cron enable

Download a copy of the log from another machine.
chris@box:~$ curl http://router/update.result
2018-03-18T10:14:49+1100 - nothing to update

That’s it! Now if you have multiple devices you can do the same, but maybe just set the cron entry for a different time of the night.
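
For example, the second device could run an hour later:

0 3 * * * /usr/local/sbin/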

March 17, 2018

DrupalCon Nashville

I'm going to Nashville!!

That is all. Carry on. Or... better yet - you should come too!

March 16, 2018

Racism in the Office

Today I was at an office party and the conversation turned to race, specifically the incidence of unarmed Afro-American men and boys who are shot by police. Apparently the idea that white people (even in other countries) might treat non-white people badly offends some people, so we had a man try to explain that Afro-Americans commit more crime and therefore are more likely to get shot. This part of the discussion isn’t even noteworthy, it’s the sort of thing that happens all the time.

I and another man pointed out that crime is correlated with poverty and racism causes non-white people to be disproportionately poor. We also pointed out that US police seem capable of arresting proven violent white criminals without shooting them (he cited arrests of Mafia members, I cited mass murderers like the one who shot up the cinema). This part of the discussion isn’t particularly noteworthy either. Usually when someone tries explaining some racist ideas and gets firm disagreement they back down. But not this time.

The next step was the issue of whether black people are inherently violent. He cited all of Africa as evidence. There’s a meme that you shouldn’t accuse someone of being racist, it’s apparently very offensive. I find racism very offensive and speak the truth about it. So all the following discussion was peppered with him complaining about how offended he was and me not caring (stop saying racist things if you don’t want me to call you racist).

Next was an appeal to “statistics” and “facts”. He said that he was only citing statistics and facts, clearly not understanding that saying “Africans are violent” is not a statistic. I told him to get his phone and Google for some statistics as he hadn’t cited any. I thought that might make him just go away, it was clear that we were long past the possibility of agreeing on these issues. I don’t go to parties seeking out such arguments, in fact I’d rather avoid such people altogether if possible.

So he found an article about recent immigrants from Somalia in Melbourne (not about the US or Africa, the previous topics of discussion). We are having ongoing discussions in Australia about violent crime, mainly due to conservatives who want to break international agreements regarding the treatment of refugees. For the record I support stronger jail sentences for violent crime, but this is an idea that is not well accepted by conservatives presumably because the vast majority of violent criminals are white (due to the vast majority of the Australian population being white).

His next claim was that Africans are genetically violent due to DNA changes from violence in the past. He specifically said that if someone was a witness to violence it would change their DNA to make them and their children more violent. He also specifically said that this was due to thousands of years of violence in Africa (he mentioned two thousand and three thousand years on different occasions). I pointed out that European history has plenty of violence that is well documented and also that DNA just doesn’t work the way he thinks it does.

Of course he tried to shout me down about the issue of DNA, telling me that he studied Psychology at a university in London and knows how DNA works, demanding to know my qualifications, and asserting that any scientist would support him. I don’t have a medical degree, but I have spent quite a lot of time attending lectures on medical research including from researchers who deliberately change DNA to study how this changes the biological processes of the organism in question.

I offered him the opportunity to star in a Youtube video about this, I’d record everything he wants to say about DNA. But he regarded that offer as an attempt to “shame” him because of his “controversial” views. It was a strange and sudden change from “any scientist will support me” to “it’s controversial”. Unfortunately he didn’t give up on his attempts to convince me that he wasn’t racist and that black people are lesser.

The next odd thing was when he asked me “what do you call them” (black people), “do you call them Afro-Americans when they are here”. I explained that if an American of African ancestry visits Australia then you would call them Afro-American, otherwise not. It’s strange that someone goes from being so certain of so many things to not knowing the basics. In retrospect I should have asked whether he was aware that there are black people who aren’t African.

Then I sought opinions from other people at the party regarding DNA modifications. While I didn’t expect to immediately convince him of the error of his ways it should at least demonstrate that I’m not the one who’s in a minority regarding this issue. As expected there was no support for the ideas of DNA modifying. During that discussion I mentioned radiation as a cause of DNA changes. He then came up with the idea that radiation from someone’s mouth when they shout at you could change your DNA. This was the subject of some jokes, one man said something like “my parents shouted at me a lot but didn’t make me a mutant”.

The other people had some sensible things to say, pointing out that psychological trauma changes the way people raise children and can have multi-generational effects. But the idea of events 3000 years ago having such effects was ridiculed.

By this time people were starting to leave. A heated discussion of racism tends to kill the party atmosphere. There might be some people who think I should have just avoided the discussion to keep the party going (really I didn’t want it and tried to end it). But I’m not going to allow a racist to think that I agree with them, and if having a party requires any form of agreement to racism then it’s not a party I care about.

As I was getting ready to leave the man said that he thought he didn’t explain things well because he was tipsy. I disagree, I think he explained some things very well. When someone goes to such extraordinary lengths to criticise all black people after a discussion of white cops killing unarmed black people I think it shows their character. But I did offer some friendly advice, “don’t drink with people you work with or for or any other people you want to impress”, I suggested that maybe quitting alcohol altogether is the right thing to do if this is what it causes. But he still thought it was wrong of me to call him racist, and I still don’t care. Alcohol doesn’t make anyone suddenly think that black people are inherently dangerous (even when unarmed) and therefore deserving of being shot by police (disregarding the fact that police can take members of the Mafia alive). But it does make people less inhibited about sharing such views even when it’s clear that they don’t have an accepting audience.

Some Final Notes

I was not looking for an argument or trying to entrap him in any way. I refrained from asking him about other races who have experienced violence in the past, maybe he would have made similar claims about other non-white races and maybe he wouldn’t, I didn’t try to broaden the scope of the dispute.

I am not going to do anything that might be taken as agreement or support of racism unless faced with the threat of violence. He did not threaten me so I wasn’t going to back down from the debate.

I gave him multiple opportunities to leave the debate. When I insisted that he find statistics to support his cause I hoped and expected that he would depart. Instead he came back with a page about the latest racist dog-whistle in Australian politics which had no correlation with anything we had previously discussed.

I think the fact that this debate happened says something about Australian and British culture. This man apparently hadn’t had people push back on such ideas before.

Vale Stephen Hawking

Stephen Hawking was born on the 300th anniversary of Galileo Galilei’s death (8 January 1942), and died on the anniversary of Albert Einstein’s birth (14 March). Having both reached the age of 76, Hawking actually lived a few months longer than Einstein, in spite of his health problems. By the way, what do you call it when […]

March 12, 2018

Measuring SDR Noise Figure in Real Time

I’m building a sensitive receiver for FreeDV 2400A signals. As a first step I tried a HackRF with an external Low Noise Amplifier (LNA), and attempted to measure the Noise Figure (NF) using the system Mark and I developed two years ago.

However I was getting results that didn’t make sense and were not repeatable. So over the course of a few early morning sessions I came up with a real time NF measurement system, and wrinkled several bugs out of it. I also purchased a few Airspy SDRs, and managed to measure NF on them as well as the HackRF.

It’s a GNU Octave script called nf_from_stdio.m that accepts a sample stream from stdio. It assumes the signal contains a sine wave test tone from a calibrated signal generator, and noise from the receiver under test. By sampling the test tone it can establish the gain of the receiver, and by sampling the noise spectrum it can estimate the noise power.

The script can be driven from command line utilities like hackrf_transfer or airspy_rx, or via software receivers like gqrx that can send SSB-demodulated samples over UDP. Instructions are at the top of the script.


I’m working from a home workbench, with rudimentary RF skills, a strong signal processing background and determination. I do have a good second hand signal generator (Marconi 2031), that cost AUD$1000 at a Hamfest, and a Rigol 815 Spec An (generously donated by Mel K0PFX, and Jim, N0OB) to support my FreeDV work. Both very useful and highly recommended. I cross-checked the sig-gen calibrated output using an oscilloscope and external attenuator (within 0.5dB). The Rigol is less accurate in amplitude (1.5dB on its specs), but useful for relative measurements, e.g. comparing cable attenuation.

For the NF test method I used, a calibrated signal source is required. I performed my tests at 435MHz using a -100dBm carrier generated from the Marconi 2031 sig-gen.

Usage and Results

The script accepts real samples from an SSB demod, or complex samples from an IQ source. Tune your receiver so that the sinusoidal test tone is in the 2000 to 4000 Hz range as displayed on Fig 2 of the script. In general, for minimum NF, turn all SDR gains up to maximum. Check Fig 1 to ensure the signal is not clipping, and reduce the baseband gain if necessary.

Noise is measured between 5000 and 10000 Hz, so ensure the receiver passband is flat in that region. When using gqrx, I drag the filter bandwidth out to 12000 Hz.

The noise estimates are less stable than the tone power estimate, leading to some sample/sample variation in the NF estimate. I take the median of the last five estimates.

I tried supplying samples to nf_from_stdio using two methods:

  1. Using gqrx in UDP mode to supply samples over UDP. This allows easy tuning and the ability to adjust the SDR gains in real time, but requires a few steps to set up.
  2. Using a “single” command line approach that consists of a chain of processing steps concatenated together. Once your signal is tuned you can start the NF measurements with a single step.

Instructions on how to use both methods are at the top of nf_from_stdio.m.
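
For the gqrx method, the pipeline has roughly this shape (gqrx streams demodulated audio over UDP, port 7355 by default; the exact Octave invocation and its arguments are documented at the top of the script, so treat this as a sketch rather than a recipe):

nc -l -u 7355 | octave --no-gui -qf nf_from_stdio.m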

Here are some results using both gqrx and command line methods, with and without an external (20dB gain/1dB NF) LNA. They were consistent across two laptops.

SDR          gqrx + LNA   Cmd Line + LNA   Cmd Line, no LNA
AirSpy Mini     2.0            2.2               7.9
AirSpy R2       1.7            1.7               7.0
HackRF One      2.6            3.4              11.1

(all values are noise figures in dB)

The results with LNA are what we would expect for system noise figures with a good LNA at the front end.
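
As a rough sanity check, the Friis formula for cascaded stages predicts this. With the 20dB gain/1dB NF LNA in front of a receiver with the roughly 7dB no-LNA noise figure from the table above:

F_sys = F_1 + (F_2 - 1)/G_1
      = 10^(1/10) + (10^(7/10) - 1)/10^(20/10)
      ≈ 1.26 + 4.01/100
      ≈ 1.30, i.e. 10*log10(1.30) ≈ 1.1dB

so system figures in the 1.7 to 2.6dB range are in the right ballpark once cable loss and measurement error are allowed for.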

The “no LNA” Airspy NF results are curious – the Airspy specs state a NF of just 3.5dB. So we contacted Airspy via Twitter and email to see how they measured their stated NF. We haven’t received a response to date. I posted to the Airspy mailing list and one gentleman (Dave – WØLEV) kindly replied and has measured noise figures of 4dB using calibrated noise sources and attenuators.

Looking into the data sheets for the Airspy, it appears the R820T tuner at the front end of the Airspy has a NF of 3.5dB. However a system NF will always be worse than the first device, as other devices (e.g. the ADC) also inject noise.

Other possibilities for my figures are measurement error, ambient noise sources at my site, frequency dependent NF, or variations in individual R820T samples.

In our past work we have used Bit Error Rate (BER) results as an independent method of confirming system noise figure. We found a close match between theoretical and measured BER when testing with and without a LNA. I’ll be repeating similar low level BER tests with FreeDV 2400A soon.

Real Time Noise Figure

It’s really nice to read the system noise figure in real time. For example you can start it running, then experiment with grounding, tightening connectors, or moving the SDR away from the laptop, or connect/disconnect a LNA in real time and watch the results. Really helps catch little issues in these difficult to perform tests. After all – we are measuring thermal noise, a very weak signal.

Some of the NF problems I could find and remove with a real time measurement:

  • The Airspy mini is nearly 1dB worse on the front left USB port than the rear left USB port on my X220 Thinkpad!
  • The Airspy mini really likes USB extension cables with ferrite clamps – without the ferrite I found the LNA was ineffective in reducing the NF – being swamped by conducted laptop noise I guess.
  • Loose connectors can make the noise figure a few dB worse. Wiggle and tighten them all.
  • Position of SDR/LNA near the radio and other bench equipment.
  • My magic touch can decrease noise figure! Grounding effect I guess?

Development Bugs

I had to work through several problems before I started getting sensible numbers. This was quite discouraging for a while, as the numbers were jumping all over the place. However it’s fair to say measuring NF is a tough problem. From what I can Google, it’s an uncommon measurement for people in home workshops.

These bugs are worth mentioning as traps for anyone else attempting home NF measurements:

  1. Cable loss: I found a 1.5dB loss in some cable I was using between the sig gen and the SDR under test. I measured the loss by comparing a few cables connected between my sig gen and spec an. While the 815 is not accurate in terms of absolute calibration (rated at 1.5dB), it can still be used for comparative measurements. The cable loss can be added to the calculations, or just choose a low loss cable.
  2. Filter shape: I had initially placed the test tone under 1000Hz. However I noticed that the gqrx signal had a few dB of high pass filtering in this region (Fig 2 below). Not an issue for regular USB demodulation, but a few dB really matters for NF! So I moved the test tone to the 2-4kHz region where the gqrx output was nice and flat.
  3. A noisy USB port, especially without a clamp, on the Airspy Mini (photo below). Found by trying different SDRs and USB ports, and finally a clamp. Oh Boy, never expected that one. I was connecting the LNA and the NF was stuck at 4dB – swamped by noise from the USB Port I guess.
  4. Compression: Worth checking the SDR output is not clipped or in compression. I adjusted the sig gen output up and down 3dB, and checked the power estimate from the script changed by 3dB. Also worth monitoring Fig 1 from the script, to make sure it’s not hitting the limits. The HackRF needed its baseband gain reduced, but the Airspys were OK.
  5. I used latest Airspy tools built from source (rather than Ubuntu 17 package) to get stdout piping working properly and not have other status information from printfs injected into the sample stream!


Thanks Mark, for the use of your RF hardware, and I’d also like to mention the awesome CSDR tools and fantastic gqrx software – both very handy for SDR work.

March 10, 2018

I said, let me tell you now

Montage of Library Bookshelves

Ever since I heard this month’s #AusGlamBlog theme was “Happiness” I’ve had that Happy song stuck in my head.

“Clap along if you know what happiness is to you”

I’m new to the library world as a professional, but not new to libraries. A sequence of fuzzy memories swirl in my mind when I think of libraries.

First was my local public library’s children’s cave, filled with books that glittered with colour like jewels.

Next, I recall the mesmerising tone and timbre of the librarian’s voice at primary school. Each week she transported us into a different story as we sat, cross legged in front of her, in some form of rapture.

Coming into closer focus I recall opening drawers in the huge wooden catalogue in the library at high school. Breathing in the deeply lovely, dusty air wafting up whilst flipping through those tiny cards was a tactile delight. Some cards were handwritten, some typewritten, some plastered with laser printed stickers.

And finally, I remember relishing the peace and quiet afforded by booking one of 49 carrel study booths at La Trobe University.

I love libraries. Libraries make me happy.

The loss of libraries makes me sad. I think of Alexandria, and more recently in Timbuktu, and closer to home, I mourn the libraries lost to the dreaming by the ravages of destructive colonial force on this little continent so many of us now call home.

Preservation, digitisation and open collections give me hope. There can only ever be one precious original of a thing, but facsimiles, copies and 3D blueprints increasingly mean that physical things, too, can now be shared and studied without needing to handle, or risk damaging, the original.

Sending precious things from collection to collection is fraught with danger. The revelations of what Australian customs did to priceless plant specimens from France & New Zealand still give me goosebumps of horror.

Digital. Copies. Catalogues, Circulation, Fines, Holds, Reserves, and Serial patterns. I’m learning new things about the complexities under the surface as I start to work seriously with the Koha Community Integrated Library System. I first learned about the Koha ILS more than a decade ago, but I'm only now getting a chance to work with it. It brings my secret love of libraries and my publicly proclaimed love of open source together in a way I still can’t believe is possible.

So yeah.

OH HAI! I’m Donna, and I’m here to help.

“Clap along if you feel like that's what you wanna do”

March 09, 2018

Amelia Earhart in the news

Recently Amelia Earhart has been in the news once more, with publication of a paper by an American forensic anthropologist, Richard Jantz. Jantz has done an analysis of the measurements made of bones found in 1940 on the island of Nikumaroro in Kiribati. Unfortunately, the bones no longer survive, but they were analysed in […]

March 07, 2018


brawndo-installer

Tired of being oppressed by the slack-arse distro package maintainers who waste time testing that new versions don’t break anything and then waste even more time integrating software into the system?

Well, so am I. So I’ve fixed it, and it was easy to do. Here’s the ultimate installation tool for any program:

brawndo() {
   curl $1 | sudo /usr/bin/env bash
}
I’ve never written a shell script before in my entire life, I spend all my time writing javascript or ruby or python – but shell’s not a real language so it can’t be that hard to get right, can it? Of course not, and I just proved it with the amazing brawndo installer (It’s got what users crave – it’s got electrolytes!)
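
Usage is exactly as frictionless as you’d expect ( standing in for wherever you keep your hot scripts):

brawndo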

So next time some lame sysadmin recommends that you install the packaged version of something, just ask them if apt-get or yum or whatever loser packaging tool they’re suggesting has electrolytes. That’ll shut ’em up.

brawndo-installer is a post from: Errata

LUV March 2018 Workshop: Comparing window managers

Mar 17 2018 12:30
Mar 17 2018 16:30
Infoxchange, 33 Elizabeth St. Richmond

Comparing window managers

We'll be looking at several of the many window managers available on Linux.

We're still looking for more people who can talk about the window manager they are using, what they like and dislike about it, and maybe demonstrate a little.

Please email me with the name of your window manager if you think you could help!

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


Audiobooks – Background and February 2018 list


I started listening to audiobooks around the start of January 2017, when I started walking to work (I previously caught the bus and read a book or my phone).

I currently get them for free from the Auckland Public Library using the Overdrive app on Android. However, while I download them to my phone with the Overdrive app, I listen to them using Listen Audiobook Player. I switched to the alternative player mainly because it supports playback speeds greater than 2x normal.

I’ve been posting a list of the books I listened to at the end of each month to Twitter (see lists from Jan 2018, Dec 2017, Nov 2017), but I thought I’d start posting them here too.

I mostly listen to history with some science fiction and other topics.

Books listened to in February 2018

The Three-Body Problem by Cixin Liu – Pretty good Sci-Fi, towards the hard-core end that I like. Looking forward to the sequels. 7/10

Destiny and Power: The American Odyssey of George Herbert Walker Bush by Jon Meacham – A very nicely done biography, comprehensive and giving a good positive picture of Bush. 7/10

Starship Troopers by Robert A. Heinlein – A pretty good version of the classic. The story works well although the politics are “different”. Enjoyable though. 8/10

Uncommon People: The Rise and Fall of the Rock Stars 1955-1994 by David Hepworth – Read by the author (who sounds like a classic Brit journalist). A story or two plus a playlist from every year. Fascinating and delightful. 9/10

The Long Haul: A Trucker’s Tales of Life on the Road by Finn Murphy – A very interesting and well written account of the author’s life as a long-distance mover. 8/10

Mornings on Horseback – David McCullough – The early life of Teddy Roosevelt, my McCullough book for the month. Interesting but not as engaging as I’d hoped. 7/10

The Battle of the Atlantic: How the Allies Won the War – Jonathan Dimbleby – Overview of the Atlantic campaign of World War 2. The author works to stress it was one of the most important fronts, and does it pretty well. 7/10





March 05, 2018

WordPress Multisite on Debian

WordPress (a common CMS for blogs) is designed to be copied to a directory that Apache can serve and run by a user with no particular privileges, while managing installation of its own updates and plugins. Debian is designed around the idea of the package management system controlling everything on behalf of a sysadmin.

When I first started using WordPress there was a version called “WordPress MU” (Multi User) which supported multiple blogs. It was a separate archive to the main WordPress and didn’t support all the plugins and themes. As a main selling point of WordPress is the ability to select from the significant library of plugins and themes this was a serious problem.

Debian WordPress

The people who maintain the Debian package of WordPress have always supported multiple blogs on one system and made it very easy to run in that manner. There’s a /etc/wordpress directory for configuration files, with one file per blog named after the blog’s domain. This allows having multiple separate blogs running from the same tree of PHP source, which means only one thing to update when there’s a new version of WordPress (often fixing security issues).

One thing that appears to be lacking with the Debian system is separate directories for “media”. WordPress supports uploading images (which are scaled to several different sizes) as well as sound and apparently video. By default under Debian they are stored in /var/lib/wordpress/wp-content/uploads/YYYY/MM/filename. If you have several blogs on one system they all get to share the same directory tree; that may be OK for one person running multiple blogs, but it is obviously bad when several bloggers have independent blogs on the same server.


If you enable the “multisite” support in WordPress then you have WordPress support for multiple blogs. The administrator of the multisite configuration has the ability to specify media paths etc for all the child blogs.

The first problem with this is that one person has to be the multisite administrator. As I’m the sysadmin of the WordPress servers in question that’s an obvious task for me. But the problem is that the multisite administrator doesn’t just do sysadmin tasks such as specifying storage directories. They also do fairly routine tasks like enabling plugins. Preventing bloggers from installing new plugins is reasonable and is the default Debian configuration. Preventing them from selecting which of the installed plugins are activated is unreasonable in most situations.

The next issue is that some core parts of WordPress functionality on the sub-blogs refer to the administrator blog, recovering a forgotten password is one example. I don’t want users of other blogs on the system to be referred to my blog when they forget their password.

A final problem with multisite is that it makes things more difficult if you want to move a blog to another system. Instead of just sending a dump of the MySQL database and a copy of the Apache configuration for the site, you have to configure which blog will be its master. If going between multisite and non-multisite you have to change some of the data about accounts; this will be annoying both when adding new sites to a server and when moving sites from the server to a non-multisite server somewhere else.

I now believe that WordPress multisite has little value for people who use Debian. The Debian way is the better way.

So I had to back out the multisite changes. Fortunately I had a cron job to make snapshots of the BTRFS subvolume that has the database so it was easy to revert to an older version of the MySQL configuration.

Upload Location

update etbe_options set option_value='/var/lib/wordpress/wp-content/uploads/' where option_name='upload_path';

It turns out that if you don’t have a multisite blog then there’s no way of changing the upload directory without using SQL. The above SQL code is an example of how to do this. Note that it seems that there is special case handling of a value of ‘wp-content/uploads‘ and any other path needs to be fully qualified.
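
To check the current value before changing it, something like this works (assuming the same etbe_ table prefix as above, and a database named wordpress; adjust both to suit your installation):

echo "select option_value from etbe_options where option_name='upload_path';" | mysql wordpress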

For my own blog however I choose to avoid the WordPress media management and use the following shell script to create suitable HTML code for an image that links to a high resolution version. I use GIMP to create the smaller version of the image which gives me a lot of control over how to crop and compress the image to ensure that enough detail is visible while still being small enough for fast download.

set -e

# BASE must be set in the environment to the base URL where the images are hosted
if [ "$BASE" = "" ]; then
  echo "usage: BASE=<image base URL> $0 file-big.jpg [...]" >&2
  exit 1
fi

while [ "$1" != "" ]; do
  BIG=$1
  SMALL=$(echo $1 | sed -s s/-big//)
  # identify is from ImageMagick; the third field is the resolution, e.g. 800x600
  RES=$(identify $SMALL|cut -f3 -d\ )
  WIDTH=$(($(echo $RES|cut -f1 -dx)/2))px
  HEIGHT=$(($(echo $RES|cut -f2 -dx)/2))px
  echo "<a href=\"$BASE/$BIG\"><img src=\"$BASE/$SMALL\" width=\"$WIDTH\" height=\"$HEIGHT\" alt=\"\" /></a>"
  shift
done

Compromised Guest Account

Some of the workstations I run are sometimes used by multiple people. Having multiple people share an account is bad for security so having a guest account for guest access is convenient.

If a system doesn’t allow logins over the Internet then a strong password is not needed for the guest account.

If such a system later allows logins over the Internet then hostile parties can try to guess the password. This happens even if you don’t use the default port for ssh.

This recently happened to a system I run. The attacker logged in as guest, changed the password, and installed a cron job to run every minute and restart their blockchain mining program if it had been stopped.

In 2007 a bug was filed against the Debian package openssh-server requesting that an AllowUsers directive be added to the default /etc/ssh/sshd_config file [1]. If that bug hadn’t been marked as “wishlist” and left alone for 11 years, then I would probably have set it to only allow ssh connections to the one account that I desired, which always had a strong password.
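
For reference, the change that bug asked to make easy is a single directive (the account name here is a placeholder):

# /etc/ssh/sshd_config: only allow ssh logins to one trusted account
AllowUsers myaccount

followed by a reload of the daemon (systemctl reload ssh on Debian).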

I’ve been a sysadmin for about 25 years (since before ssh was invented). I have been a Debian Developer for almost 20 years, including working on security related code. The fact that I stuffed up in regard to this issue suggests that there are probably many other people making similar mistakes, and probably most of them aren’t monitoring things like system load average and temperature which can lead to the discovery of such attacks.

March 02, 2018

Redirecting an entire site except for the certbot webroot

In order to be able to use the webroot plugin for certbot and automatically renew the Let's Encrypt certificate for, I had to put together an Apache config that would do the following on port 80:

  • Let /.well-known/acme-challenge/* through on the bare domain (
  • Redirect anything else to

The reason for this is that the main Libravatar service listens on and not on the bare domain, but certbot needs to ascertain control of the bare domain.

This is the configuration I ended up with:

<VirtualHost *:80>
    DocumentRoot /var/www/acme
    <Directory /var/www/acme>
        Options -Indexes
    </Directory>

    RewriteEngine on
    RewriteCond "/var/www/acme%{REQUEST_URI}" !-f
    RewriteRule ^(.*)$$1 [last,redirect=301]
</VirtualHost>

The trick I used here is to make the redirection RewriteRule conditional on the requested file (%{REQUEST_URI}) not existing in the /var/www/acme directory, the one where I tell certbot to drop its temporary files.
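
A quick way to verify both behaviours, assuming the config above and a test file dropped where certbot would put one:

# on the server
mkdir -p /var/www/acme/.well-known/acme-challenge
echo ok > /var/www/acme/.well-known/acme-challenge/test

# from anywhere: the first request should be served directly, the second redirected
curl -si | head -n 1
curl -sI | grep -i '^location:'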

Here are the relevant portions of /etc/letsencrypt/renewal/

authenticator = webroot
account =

[[webroot_map]] = /var/www/acme = /var/www/acme
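
With the redirect and renewal config in place, certbot's dry-run mode gives an end-to-end test of the whole setup:

certbot renew --dry-run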

February 27, 2018

Drupal "Access denied" Message

It happens rarely enough, but on occasion (such as after an upgrade to a database system (e.g., MySQL, MariaDB) or to the system version of a web-scripting language (e.g., PHP)), one's Drupal site can fail to load, displaying only an error message similar to:

PDOException: SQLSTATE[HY000] [1044] Access denied for user 'username'@'localhost' to database 'database' in lock_may_be_available() (line 167 of /website/includes/


February 26, 2018

At Mercy of the Weather

It is the time of year when Australia often experiences extreme weather events. February is renowned as the hottest month and, in some parts of the country, also the wettest month. It often brings cyclones to our coasts, and storms which, conversely enough, may trigger fires as lightning strikes the hot, dry bush. Aboriginal people […]

February 25, 2018

Vale Dad

[I’ve been very quiet here for over a year for reasons that will become apparent in the next few days when I finish and publish a long post I’ve been working on for a while – difficult to write, hence the delay]

It’s 10 years ago today that my Dad died, and Alan and I lost the father who had meant so much to both of us. It’s odd realising that it’s over 1/5th of my life since he died; it doesn’t seem that long.

Vale dad, love you…


February 24, 2018

LUV Main March 2018 Meeting: Unions - Hacking society's operating system

Mar 6 2018 18:30
Mar 6 2018 20:30
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000


Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.


February 23, 2018

Strange Bedfellows

The Tasmanian state election is coming up in a week’s time, and I’ve managed to do a reasonable job of ignoring the whole horrible thing, modulo the promoted tweets, the signs on the highway, the junk the major (and semi-major) political parties pay to dump in my letterbox, and occasional discussions with friends and neighbours.

Promoted tweets can be blocked. The signs on the highway can (possibly) be re-purposed for a subsequent election, or can be pulled down and used for minor windbreak/shelter works for animal enclosures. Discussions with friends and neighbours are always interesting, even if one doesn’t necessarily agree. I think the most irritating thing is the letterbox junk; at best it’ll eventually be recycled, at worst it becomes landfill or firestarters (and some of those things do make very satisfying firestarters).

Anyway, as I live somewhere in the wilds of the division of Franklin, I thought I’d better check to see who’s up for election here. There are no independents running this time, so I’ve essentially got the choice of four parties; Shooters, Fishers and Farmers Tasmania, Tasmanian Greens, Tasmanian Labor and Tasmanian Liberals (the order here is the same as on the TEC web site; please don’t infer any preference based on the order in which I list parties in this blog post).

I feel like I should be setting party affiliations aside and voting for individuals, but of the sixteen candidates listed, to the best of my knowledge I’ve only actually met and spoken with two of them. Another I noticed at random in a cafe, and I was ignored by a fourth who was milling around with some cronies at a promotional stand out the front of Woolworths in Huonville a few weeks ago. So, party affiliations it is, which leads to an interesting thought experiment.

When you read those four party names above, what things came most immediately to mind? For me, it was something like this:

  • Shooters, Fishers & Farmers: Don’t take our guns. Fuck those bastard Greenies.
  • Tasmanian Greens: Protect the natural environment. Renewable energy. Try not to kill anything. Might collaborate with Labor. Liberals are big money and bad news.
  • Tasmanian Labor: Mellifluous babble concerning health, education, housing, jobs, pokies and something about workers’ rights. Might collaborate with the Greens. Vehemently opposed to the Liberals.
  • Tasmanian Liberals: Mellifluous babble concerning jobs, health, infrastructure, safety and the Tasmanian way of life, peppered with something about small business and family values. Vehemently opposed to Labor and the Greens.

And because everyone usually automatically thinks in terms of binaries (e.g. good vs. evil, wrong vs. right, one vs. zero), we tend to end up imagining something like this:

  • Shooters, Fishers & Farmers vs. Greens
  • Labor vs. Liberal
  • …um. Maybe Labor and the Greens might work together…
  • …but really, it’s going to be Labor or Liberal in power (possibly with some sort of crossbench or coalition support from minor parties, despite claims from both that it’ll be majority government all the way).

It turns out that thinking in binaries is remarkably unhelpful, unless you’re programming a computer (it’s zeroes and ones all the way down), or are lost in the wilderness (is this plant food or poison? is this animal predator or prey?) The rest of the time, things tend to be rather more colourful (or grey, depending on your perspective), which leads back to my thought experiment: what do these “naturally opposed” parties have in common?

According to their respective web sites, the Shooters, Fishers & Farmers and the Greens have many interests in common, including agriculture, biosecurity, environmental protection, tourism, sustainable land management, health, education, telecommunications and addressing homelessness. There are differences in the policy details of course (some really are diametrically opposed), but in broad strokes these two groups seem to care strongly about – and even agree on – many of the same things.

Similarly, Labor and Liberal are both keen to tell a story about putting the people of Tasmania first, about health, education, housing, jobs and infrastructure. Honestly, for me, they just kind of blend into one another; sure there’s differences in various policy details, but really if someone renamed them Labal and Liberor I wouldn’t notice. These two are the status quo, and despite fighting it out with each other repeatedly, are, essentially, resting on their laurels.

Here’s what I’d like to see: a minority Tasmanian state government formed from a coalition of the Tasmanian Greens plus the Shooters, Fishers & Farmers party, with the Labor and Liberal parties together in opposition. It’ll still be stuck in that irritating Westminster binary mode, but at least the damn thing will have been mixed up sufficiently that people might actually talk to each other rather than just fighting.

February 22, 2018

Dell PowerEdge T30

I just did a Debian install on a Dell PowerEdge T30 for a client. The Dell web site is a bit broken at the moment, it didn’t list the price of that server or give useful specs when I was ordering it. I was under the impression that the server was limited to 8G of RAM, that’s unusually small but it wouldn’t be the first time a vendor crippled a low end model to drive sales of more expensive systems. It turned out that the T30 model I got has 4*DDR4 sockets with only one used for an 8G DIMM. It apparently can handle up to 64G of RAM.

It has space for 4*3.5″ SATA disks but only has 4*SATA connectors on the motherboard. As I never use the DVD in a server this isn’t a problem for me, but if you want 4 disks and a DVD then you need to buy a PCI or PCIe SATA card.

Compared to the PowerEdge T130 I’m using at home the new T30 is slightly shorter and thinner while seeming to have more space inside. This is partly due to better design and partly due to having 2 hard drives in the top near the DVD drive which are a little inconvenient to get to. The T130 I have (which isn’t the latest model) has 4*3.5″ SATA drive bays at the bottom which are very convenient for swapping disks.

It has two PCIe*16 slots (one of which is apparently quad speed), one shorter PCIe slot, and a PCI slot. For a cheap server a PCI slot is a nice feature, it means I can use an old PCI Ethernet card instead of buying a PCIe Ethernet card. The T30 cost $1002 so using an old Ethernet card saved 1% of the overall cost.

The T30 seems designed to be more of a workstation or personal server than a straight server. The previous iterations of the low end tower servers from Dell didn’t have built in sound and had PCIe slots that were adequate for a RAID controller but vastly inadequate for video. This one has built in line in and out for audio and has two DisplayPort connectors on the motherboard (presumably for dual-head support). Apart from the CPU (an E3-1225 which is slower than some systems people are throwing out nowadays) the system would be a decent gaming system.

It has lots of USB ports which is handy for a file server, I can attach lots of backup devices. Also most of the ports support “super speed”, I haven’t yet tested out USB devices that support such speeds but I’m looking forward to it. It’s a pity that there are no USB-C ports.

One deficiency of the T30 is the lack of a VGA port. It has one HDMI and two DisplayPort sockets on the motherboard, this is really great for a system on or under your desk, any monitor you would want on your desk will support at least one of those interfaces. But in a server room you tend to have an old VGA monitor that’s there because no-one wants it on their desk. Not supporting VGA may force people to buy a $200 monitor for their server room. That increases the effective cost of the system by 20%. It has a PC serial port on the motherboard which is a nice server feature, but that doesn’t make up for the lack of VGA.

The BIOS configuration has an option displayed for enabling charging devices from USB sockets when a laptop is in sleep mode. It’s disappointing that they didn’t either make a BIOS build for a non-laptop or have the BIOS detect at run-time that it’s not on laptop hardware and hide that.


The PowerEdge T30 is a nice low-end workstation. If you want a system with ECC RAM because you need it to be reliable and you don’t need the greatest performance then it will do very well. It has Intel video on the motherboard with HDMI and DisplayPort connectors, this won’t be the fastest video but should do for most workstation tasks. It has a PCIe*16 quad speed slot in case you want to install a really fast video card. The CPU is slow by today’s standards, but Dell sells plenty of tower systems that support faster CPUs.

It’s nice that it has a serial port on the motherboard. That could be used for a serial console or could be used to talk to a UPS or other server-room equipment. But that doesn’t make up for the lack of VGA support IMHO.

One could say that a tower system is designed to be a desktop or desk-side system not run in any sort of server room. However it is cheaper than any rack mounted systems from Dell so it will be deployed in lots of small businesses that have one server for everything – I will probably install them in several other small businesses this year. Also tower servers do end up being deployed in server rooms, all it takes is a small business moving to a serviced office that has a proper server room and the old tower servers end up in a rack.

Rack vs Tower

One reason for small businesses to use tower servers when rack servers are more appropriate is the issue of noise. If your “server room” is the room that has your printer and fax then it typically won’t have a door and you just can’t have the noise of a rack mounted server in there. 1RU systems are inherently noisy because the small diameter of the fans means that they have to spin fast. 2RU systems can be made relatively quiet if you don’t have high-end CPUs but no-one seems to be trying to do that.

I think it would be nice if a company like Dell sold low-end servers in a rack mount form-factor (19 inches wide and 2RU high) that were designed to be relatively quiet. Then instead of starting with a tower server and ending up with tower systems in racks a small business could start with a 19 inch wide system on a shelf that gets bolted into a rack if they move into a better office. Any laptop CPU from the last 10 years is capable of running a file server with 8 disks in a ZFS array. Any modern laptop CPU is capable of running a file server with 8 SSDs in a ZFS array. This wouldn’t be difficult to design.

February 21, 2018

MariaDB Developer’s unconference & M|18

Been a while since I wrote anything MySQL/MariaDB related here, but there’s the column on the Percona blog, which has weekly updates.

Anyway, I’ll be at the developer’s unconference this weekend in NYC. Even managed to snag a session on the schedule, MySQL features missing in MariaDB Server (Sunday, 12.15–13.00). Signup on meetup?

Due to the prevalence of “VIP tickets”, I too signed up for M|18. If you need a discount code, I’ll happily offer them up to you to see if they still work (though I’m sure a quick Google will solve this problem for you). I’ll publish notes, probably in my weekly column.

If you’re in New York and want to say hi, talk shop, etc. don’t hesitate to drop me a line.

February 17, 2018

An optimistic future

This is my personal vision for an event called “Optimistic Futures” to explore what we could be aiming for and figure out the possible roles for government in future.

Technology is both an enabler and a disruptor in our lives. It has ushered in an age of surplus, with decentralised systems enabled by highly empowered global citizens, all creating increasing complexity. It is imperative that we transition into a more open, collaborative, resilient and digitally enabled society that can respond exponentially to exponential change whilst empowering all our people to thrive. We have the means now by which to overcome our greatest challenges including poverty, hunger, inequity and shifting job markets but we must be bold in collectively designing a better future, otherwise we may unintentionally reinvent past paradigms and inequities with shiny new things.

Technology is only as useful as it affects actual people, so my vision starts, perhaps surprisingly for some, with people. After all, if people suffer, the system suffers, so the well being of people is the first and foremost priority for any sustainable vision. But we also need to look at what all sectors and communities across society need and what part they can play:

  • People: I dream of a future where the uniqueness of local communities, cultures and individuals is amplified, where diversity is embraced as a strength, and where all people are empowered with the skills, capacity and confidence to thrive locally and internationally. A future where everyone shares in the benefits and opportunities of a modern, digital and surplus society/economy with resilience, and where everyone can meaningfully contribute to the future of work, local communities and the national/global good.
  • Public sectors: I dream of strong, independent, bold and highly accountable public sectors that lead, inform, collaborate, engage meaningfully and are effective enablers for society and the economy. A future where we invest as much time and effort on transformational digital public infrastructure and skills as we do on other public infrastructure like roads, health and traditional education, so that we can all build on top of government as a platform. Where everyone can have confidence in government as a stabilising force of integrity that provides a minimum quality of life upon which everyone can thrive.
  • The media: I dream of a highly effective fourth estate which is motivated systemically with resilient business models that incentivise behaviours to both serve the public and hold power to account, especially as “news” is also arguably becoming exponential. Actionable accountability that doesn’t rely on the linearity and personal incentives of individuals to respond will be critical with the changing pace of news and with more decisions being made by machines.
  • Private, academic and non-profit sectors: I dream of a future where all sectors can more freely innovate, share, adapt and succeed whilst contributing meaningfully to the public good and being accountable to the communities affected by decisions and actions. I also see a role for academic institutions in particular, given their systemic motivation for high veracity outcomes without being attached to one side, as playing a role in how national/government actions are measured, planned, tested and monitored over time.
  • Finally, I dream of a world where countries are not celebrated for being just “digital nations” but rather are engaged in a race to the top in using technology to improve the lives of all people and to establish truly collaborative democracies, where people can meaningfully participate in shaping optimistic and inclusive futures.

Technology is a means, not an ends, so we need to use technology to both proactively invent the future we need (thank you Alan Kay) and to be resilient to change including emerging tech and trends.

Let me share a few specific optimistic predictions for 2070:

  • Automation will help us redesign our work expectations. We will have a 10-20 hour work week supported by machines, freeing up time for family, education, civic duties and innovation. People will have less pressure to simply survive and will have more capacity to thrive (this is a common theme, but something I see as critical).
  • 3D printing of synthetic foods and nanotechnology to deconstruct and reconstruct molecular materials will address hunger, access to medicine, clothes and goods, and community hubs (like libraries) will become even more important as distribution, education and social hubs, with drones and other aerial travel employed for those who can’t travel. Exoskeletons will replace scooters :)
  • With rocket travel normalised, and only an hour to get anywhere on the planet, nations will see competitive citizenships where countries focus on the best quality of life to attract and retain people, rather than largely just trying to attract and retain companies as we do today. We will also likely see the emergence of more powerful transnational communities that have nationhood status to represent the aspects of people’s lives that are not geopolitically bound.
  • The public service has highly professional, empathetic and accountable multi-disciplinary experts on responsive collaborative policy, digital legislation, societal modeling, identifying necessary public digital infrastructure for investment, and well controlled but openly available data, rules and transactional functions of government to enable dynamic and third party services across myriad channels, provided to people based on their needs but under their control. We will also have a large number of citizens working 1 or 2 days a week in paid civic duties on areas where they have passion, skills or experience to contribute.
  • The paralympics will become the main game, as it were, with no limits on human augmentation. We will do the 100m sprint with rockets, judo with cyborgs, rock climbing with tentacles. We have access to medical capabilities to address any form of disease or discomfort but we don’t use the technologies to just comply to a normative view of a human. People are free to choose their form and we culturally value diversity and experimentation as critical attributes of a modern adaptable community.

I’ve only been living in New Zealand a short time but I’ve been delighted and inspired by what I’ve learned from kiwi and Māori cultures, so I’d like to share a locally inspired analogy.

Technology is on one hand, just a waka (canoe), a vehicle for change. We all have a part to play in the journey and in deciding where we want to go. On the other hand, technology is also the winds, the storms, the thunder, and we have to continually work to understand and respond to emerging technologies and trends so we stay safely on course. It will take collaboration and working towards common goals if we are to chart a better future for all.

Site building with Drupal

What even is "Site Building"?

At DrupalDownunder some years back, the wonderful Erica Bramham named her talk "All node, no code". Nodes were the fundamental building blocks in Drupal; they were like single drops of content. These days though, it's all about entities.

But hang on a minute, I'm using lots of buzz words, and worse, I'm using words that mean different things in different contexts. Jargon is one of the first hurdles you need to jump to understand the diverse worlds of the web. People who grow up multi-lingual learn that the meanings of words are somewhat arbitrary. They learn the same thing has different names. This is true for the web too. So the first thing to know about Site Building is that it means different things to different people.

To me, it means being able to build a website without knowing how to code. I also believe it means I can build a website without having to set up my own development environment. I know people who vehemently disagree with me about this. But that's ok. This is my blog, and these are my rules.

So - this is a post about site building, using SimplyTest.Me and Drupal 8 out of the box.

1. Go to simplytest.me

2. Type Drupal Core in the search field, and select "Drupal core" from the list

3. Choose the latest development branch, right at the bottom of the list.


For me, right now, that's 8.6.x, and here's a screenshot of what that looks like.

SimplyTest Me Screenshot, showing drop down fields described in the text.


4. Click "Launch sandbox".

Now wait.

In a few moments, you should see a fresh shiny Drupal 8 site, ready for you to explore.

For me today, it looks like this.  

Drupal 8.6.x front page screenshot


In the top right of the window, you should see a "Log in" link.

Click that, and enter admin/admin to login. 

You're now ready to practice some site building!

First, you'll need to create some content to play with.  Here's a short screencast that shows you how to login, add an article, and change the title using Quick Edit.

A guide to what's next

Follow the Drupal User guide to start building your site!

If you want to start at the beginning, you'll get a great overview of Drupal, and some important info on how to plan your site. But if you want to roll up your sleeves and get building, you can skip the chapter on site installation and jump straight to chapter 4, and dive into basic site configuration.



You have 24 hours to experiment with the sandbox - after that it disappears.


Get in touch

If you want something more permanent, you might want to "try Drupal" or contact us to discuss our Drupal services.

LUV February 2018 Workshop: Installing an Open Source OS on your tablet or phone

Feb 24 2018 12:30
Feb 24 2018 16:30
Infoxchange, 33 Elizabeth St. Richmond

Installing an Open Source OS on your tablet or phone

Andrew Pam will demonstrate how to install LineageOS, previously known as CyanogenMod and based on the Android Open Source Project, on tablets and phones.  Feel free to bring your own tablets and phones and have a go, but please ensure you back them up if there is anything you still need stored on them!

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

February 24, 2018 - 12:30


February 16, 2018

Australia at the Olympics

The modern Olympic games were started by Frenchman Pierre de Coubertin to promote international understanding. The first games of the modern era were held in 1896 in Athens, Greece. Australia has competed in all the Olympic games of the modern era, although our participation in the first one was almost by chance. Of course, the […]

February 09, 2018

Australia Day in the early 20th century

Australia Day and its commemoration on 26 January, has long been a controversial topic. This year has seen calls once again for the date to be changed. Similar calls have been made for a long time. As early as 1938, Aboriginal civil rights leaders declared a “Day of Mourning” to highlight issues in the Aboriginal […]

February 08, 2018

Thinkpad X1 Carbon

I just bought a Thinkpad X1 Carbon to replace my Thinkpad X301 [1]. It cost me $289 with free shipping from an eBay merchant which is a great deal, a new battery for the Thinkpad X301 would have cost about $100.

It seems that laptops aren’t depreciating in value as much as they used to. Grays Online used to reliably have refurbished Thinkpads with manufacturer’s warranty selling for about $300. Now they only have IdeaPads (a cheaper low-end line from Lenovo) at good prices, admittedly $100 to $200 for an IdeaPad is a very nice deal if you want a cheap laptop and don’t need something too powerful. But if you want something for doing software development on the go then you are looking at well in excess of $400. So I ended up buying a second-hand system from an eBay merchant.


I was quite excited to read the specs that it has an i7 CPU, but now I have it I discovered that the i7-3667U CPU scores 3990 according to Passmark [2]. While that is much better than the U9400 in the Thinkpad X301 that scored 968, it’s only slightly better than the i5-2520M in my Thinkpad T420 that scored 3582 [3]. I bought the Thinkpad T420 in August 2013 [4], and had hoped that Moore’s Law would result in me getting a system at least twice as fast as my last one. But buying second-hand meant I got a slower CPU. Also the small form factor of the X series limits heat dissipation and therefore limits the CPU performance.


Thinkpads have traditionally had the best keyboards, but they are losing that advantage. This system has a keyboard that feels like an Apple laptop keyboard not like a traditional Thinkpad. It still has the Trackpoint which is a major feature if you like it (I do). The biggest downside is that they rearranged the keys. The PgUp/PgDn keys are now by the arrow keys, this could end up being useful if you like the SHIFT-PgUp/SHIFT-PgDn combinations used in the Linux VC and some Xterms like Konsole. But I like to keep my keys by the home keys and I can’t do that unless I use the little finger of my right hand for PgUp/PgDn. They also moved the Home, End, and Delete keys which is really annoying. It’s not just that the positions are different to previous Thinkpads (including X series like the X301), they are different to desktop keyboards. So every time I move between my Thinkpad and a desktop system I need to change key usage.

Did Lenovo not consider that touch typists might use their products?

The keyboard moved the PrtSc key, and lacks ScrLk and Pause keys, but I hardly ever use the PrtSc key, and never use the other 2. The lack of those keys would only be of interest to people who have mapped them to useful functions and people who actually use PrtSc. It’s impractical to have a key as annoying to accidentally press as PrtSc between the Ctrl and Alt keys.

One significant benefit of the keyboard in this Thinkpad is that it has a backlight instead of having a light on the top of the screen that shines on the keyboard. It might work better than the light above the keyboard and looks much cooler! As an aside I discovered that my Thinkpad X301 has a light above the keyboard, but the key combination to activate it sometimes needs to be pressed several times.


X1 Carbon 1600*900
T420 1600*900
T61 1680*1050
X301 1440*900

Above are the screen resolutions for all my Thinkpads of the last 8 years. The X301 is an anomaly as I got it from a rubbish pile and it was significantly older than Thinkpads usually are when I get them. It’s a bit disappointing that laptop screen resolution isn’t increasing much over the years. While I know some people have laptops with resolutions as high as 2560*1600 (as high as a high-end phone), it seems that most laptops are below phone resolution.

Kogan is currently selling the Agora 8+ phone new for $239; including postage that would still be cheaper than the $289 I paid for this Thinkpad. There’s no reason why new phones should have lower prices and higher screen resolutions than second-hand laptops. Thinkpad is designed to be a high-end brand, while other brands like IdeaPad are for low-end devices. Really, 1600*900 is a low-end resolution by today’s standards; 1920*1080 should be the minimum for high-end systems. Now I could have bought one of the X series models with a higher screen resolution, but most of them have the lower resolution, and hunting for a second-hand system with the rare high-resolution screen would mean missing the best prices.

I wonder if there’s an Android app to make a phone run as a second monitor for a Linux laptop, that way you could use a high resolution phone screen to display data from a laptop.

This display is unreasonably bright by default. So bright it hurt my eyes. The xbacklight program doesn’t support my display, but the command “xrandr --output LVDS-1 --brightness 0.4” sets the brightness to 40%. The Fn key combination to set brightness doesn’t work. Below a brightness of about 70% the screen looks grainy.


This Thinkpad has a 180G SSD that supports contiguous reads at 500MB/s. It has 8G of RAM which is the minimum for a usable desktop system nowadays and while not really fast the CPU is fast enough. Generally this is a nice system.

It doesn’t have an Ethernet port which is really annoying. Now I have to pack a USB Ethernet device whenever I go anywhere. It also has mini-DisplayPort as the only video connector, as that is almost never available at a conference venue (VGA and HDMI are the common ones) I’ll have to pack an adaptor when I give a lecture. It also only has 2 USB ports, the X301 has 3. I know that not having HDMI, VGA, and Ethernet ports allows designing a thinner laptop. But I would be happier with a slightly thicker laptop that has more connectivity options. The Thinkpad X301 has about the same mass and is only slightly thicker and has all those ports. I blame Apple for starting this trend of laptops lacking IO options.

This might be the last laptop I own that doesn’t have USB-C. Currently not having USB-C is not a big deal, but devices other than phones supporting it will probably be released soon and fast phone charging from a laptop would be a good feature to have.

This laptop has no removable battery. I don’t know if it will be practical to replace the battery if the old one wears out. But given that replacing the battery may be more than the laptop is worth this isn’t a serious issue. One significant issue is that there’s no option to buy a second battery if I need to have it run without mains power for a significant amount of time. When I was travelling between Australia and Europe often I used to pack a second battery so I could spend twice as much time coding on the plane. I know it’s an engineering trade-off, but they did it with the X301 and could have done it again with this model.


This isn’t a great laptop. The X1 Carbon is described as a flagship for the Thinkpad brand and the display is letting down the image of the brand. The CPU is a little disappointing, but it’s a trade-off that I can deal with.

The keyboard is really annoying and will continue to annoy me for as long as I own it. The X301 managed to fit a better keyboard layout into the same space, there’s no reason that they couldn’t have done the same with the X1 Carbon.

But it’s great value for money and works well.

February 03, 2018

Watch as the OS rewrites my buggy program.

I didn’t know that SetErrorMode(SEM_NOALIGNMENTFAULTEXCEPT) was a thing, until I wrote a bad test that wouldn’t crash.

Digging into it, I found that a movaps instruction was being rewritten as movups, which was a thoroughly confusing thing to see.

The one clue I had was that a fault due to an unaligned load had been observed in non-test code, but did not reproduce when written as a test using the google-test framework. A short hunt later (including a failed attempt at writing a small repro case), I found an explanation: google test suppresses this class of failure.

The code below will successfully demonstrate the behavior, printing out the SIMD load instruction before and after calling the function with an unaligned pointer.


View the code on Gist.
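
If you just want to poke at the flag itself, here is a tiny sketch of my own (not the gist above) that sets SEM_NOALIGNMENTFAULTEXCEPT from Python via ctypes on Windows; the constant's value comes from the Windows SDK headers:

import ctypes

# SEM_NOALIGNMENTFAULTEXCEPT (winbase.h): ask the OS to silently fix up
# alignment faults for this process instead of raising an exception.
SEM_NOALIGNMENTFAULTEXCEPT = 0x0004

kernel32 = ctypes.windll.kernel32
# SetErrorMode returns the previous mode, so you can check whether a
# test harness (google-test, in this case) has already set the flag.
previous = kernel32.SetErrorMode(SEM_NOALIGNMENTFAULTEXCEPT)
print("previous error mode: 0x%04x" % previous)
print("alignment faults already suppressed:",
      bool(previous & SEM_NOALIGNMENTFAULTEXCEPT))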

February 02, 2018

Welcome Back!

Well, most of our schools are back, or about to start the new year. Did you know that there are schools using OpenSTEM materials in every state and territory of Australia? Our wide range of resources, especially those on Australian history, give detailed information about the history of all our states and territories. We pride […]

February 01, 2018

Querying Installed Package Versions Across An Openstack Cloud

AKA: The Joy of juju run

Package upgrades across an OpenStack cloud do not always happen at the same time. In most cases they may happen within an hour or so across your cloud, but for a variety of reasons, some upgrades may be applied inconsistently, delayed or blocked on some servers.

As these packages may be rolling out a much needed patch or perhaps carrying a bug, you may wish to know which services are impacted in fairly short order.

If your OpenStack cloud is running Ubuntu and managed by Juju and MAAS, here's where juju run can come to the rescue.

For example, perhaps there's an update to the Corosync library libcpg4 and you wish to know which of your HA clusters have what version installed.

From your Juju controller, create a list of servers managed by Juju:

Juju 1.x:

$ juju stat --format tabular > jsft.out

Now you could fashion a query like this, utilising juju run:

$ for i in $(egrep -o '[a-z]+-hacluster/[0-9]+' jsft.out | cut -d/ -f1 | sort -u);
do juju run --timeout 30s --service $i "dpkg-query -W -f='\${Version}' libcpg4" | \
python -c 'import yaml,sys;print("\n".join(["{} == {}".format(y["Stdout"], y["UnitId"]) for y in yaml.safe_load(sys.stdin)]))';
done

The output returned will look something like this:

2.3.3-1ubuntu4 == ceilometer-hacluster/1
2.3.3-1ubuntu4 == ceilometer-hacluster/0
2.3.3-1ubuntu4 == ceilometer-hacluster/2
2.3.3-1ubuntu4 == cinder-hacluster/0
2.3.3-1ubuntu4 == cinder-hacluster/1
2.3.3-1ubuntu4 == cinder-hacluster/2
2.3.3-1ubuntu4 == glance-hacluster/3
2.3.3-1ubuntu4 == glance-hacluster/4
2.3.3-1ubuntu4 == glance-hacluster/5
2.3.3-1ubuntu4 == keystone-hacluster/1
2.3.3-1ubuntu4 == keystone-hacluster/0
2.3.3-1ubuntu4 == keystone-hacluster/2
2.3.3-1ubuntu4 == mysql-hacluster/1
2.3.3-1ubuntu4 == mysql-hacluster/2
2.3.3-1ubuntu4 == mysql-hacluster/0
2.3.3-1ubuntu4 == ncc-hacluster/1
2.3.3-1ubuntu4 == ncc-hacluster/0
2.3.3-1ubuntu4 == ncc-hacluster/2
2.3.3-1ubuntu4 == neutron-hacluster/2
2.3.3-1ubuntu4 == neutron-hacluster/1
2.3.3-1ubuntu4 == neutron-hacluster/0
2.3.3-1ubuntu4 == osd-hacluster/0
2.3.3-1ubuntu4 == osd-hacluster/1
2.3.3-1ubuntu4 == osd-hacluster/2
2.3.3-1ubuntu4 == swift-hacluster/1
2.3.3-1ubuntu4 == swift-hacluster/0
2.3.3-1ubuntu4 == swift-hacluster/2

Juju 2.x:

$ juju status > jsft.out

Now you could fashion a query like this:

$ for i in $(egrep -o 'hacluster-[a-z]+/[0-9]+' jsft.out | cut -d/ -f1 |sort -u);
do juju run --timeout 30s --application $i "dpkg-query -W -f='\${Version}' libcpg4" | \
python -c 'import yaml,sys;print("\n".join(["{} == {}".format(y["Stdout"], y["UnitId"]) for y in yaml.safe_load(sys.stdin)]))';
done

The output returned will look something like this:

2.3.5-3ubuntu2 == hacluster-ceilometer/1
2.3.5-3ubuntu2 == hacluster-ceilometer/0
2.3.5-3ubuntu2 == hacluster-ceilometer/2
2.3.5-3ubuntu2 == hacluster-cinder/1
2.3.5-3ubuntu2 == hacluster-cinder/0
2.3.5-3ubuntu2 == hacluster-cinder/2
2.3.5-3ubuntu2 == hacluster-glance/0
2.3.5-3ubuntu2 == hacluster-glance/1
2.3.5-3ubuntu2 == hacluster-glance/2
2.3.5-3ubuntu2 == hacluster-heat/0
2.3.5-3ubuntu2 == hacluster-heat/1
2.3.5-3ubuntu2 == hacluster-heat/2
2.3.5-3ubuntu2 == hacluster-horizon/0
2.3.5-3ubuntu2 == hacluster-horizon/1
2.3.5-3ubuntu2 == hacluster-horizon/2
2.3.5-3ubuntu2 == hacluster-keystone/0
2.3.5-3ubuntu2 == hacluster-keystone/1
2.3.5-3ubuntu2 == hacluster-keystone/2
2.3.5-3ubuntu2 == hacluster-mysql/0
2.3.5-3ubuntu2 == hacluster-mysql/1
2.3.5-3ubuntu2 == hacluster-mysql/2
2.3.5-3ubuntu2 == hacluster-neutron/0
2.3.5-3ubuntu2 == hacluster-neutron/2
2.3.5-3ubuntu2 == hacluster-neutron/1
2.3.5-3ubuntu2 == hacluster-nova/1
2.3.5-3ubuntu2 == hacluster-nova/2
2.3.5-3ubuntu2 == hacluster-nova/0

You can of course substitute libcpg4 in the above query for any package that you need to check.
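
When there are many units, a small post-processing script makes any version stragglers jump out. This is a rough sketch of my own (not part of Juju) that reads the same YAML the one-liners above consume from stdin and groups units by version:

import sys
from collections import defaultdict

import yaml

# Group units by the installed version reported on stdout; a unit that
# missed the upgrade shows up as its own (small) group.
versions = defaultdict(list)
for result in yaml.safe_load(sys.stdin):
    versions[result["Stdout"].strip()].append(result["UnitId"])

for version, units in sorted(versions.items()):
    print("{} ({} units)".format(version, len(units)))
    for unit in sorted(units):
        print("  {}".format(unit))

Pipe the output of juju run into it in place of the inline python -c command.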

By far my favourite feature of Juju at present, juju run reminds me of knife ssh, which is unsurprisingly one of my favourite features of Chef.

January 27, 2018

Turning stories into software at LCA2018

Donna speaking in front of a large screen showing a survey and colourful graph. Photo Credit: Josh Simmons
I love free software, but sometimes I feel that free software does not love me.
Why is it so hard to use? Why is it still so buggy? Why do the things I can do simply with other tools, take so much effort? Why is the documentation so inscrutable?  Why have all the config settings been removed from the GUI? Why does this HowTo assume I can find a config file, and edit it with VI? Do I have to learn to use VI before I can stop my window manager getting in the way of the application I’m trying to use?
Tis a mystery. Or is it?
It’s fair to say that the Free Software community is still largely made up of blokes who are software developers. The idea that “user centered design” is a “Good Thing” is not evenly distributed. In fact, some seem to think it’s not a good thing at all: “patches welcome” they say, “go fix it yourself”.
The web community, on the other hand, has discovered that the key to their success is understanding and meeting the needs of the people who use their software. Ideological purity is great, but enabling people to meet their objectives is better.
As technologists, we get excited by technology. Of course we do! Technology is modern magic. And we are wizards. It’s wonderful. But the people who use our software are not necessarily interested in the tech itself, they probably just want to use it to get something done. They probably don’t even care what language it’s written in.
Let’s say a customer walks into a hardware store and says they want a drill. Or perhaps they walk in and stand in front of a shelf, simply contemplating a dizzying array of drills, drill bits and other accessories. Which one is right for the job, they wonder. Should I get a cordless one? Will I really need diamond-tipped drill bits?
There's a technique called the 5 Why's that's useful to get under the surface of a requirement. The idea is, you keep asking why until you uncover the real reason for a request, need, feature or widget. For example, we could ask this customer...
Why do you want this drill? To drill a hole. 
Why? To hang a picture on my wall.  
Why? To be able to share and enjoy this amazing photo from my recent holiday.
So we discover our customer did not, in fact, want a drill. Our customer wanted to express something about their identity by decorating their home.  So telling them all about the voltage of the drill, and the huge range of drill bits available, may have helped them choose the right drill for the job, but if we stop to understand the job in the first place, we’re more likely to be able to help that person get what they need to get their job done.
User stories are one way we can explore the “Why” behind the software we build. Check out my talk, “Turning stories into software”, from the Developers Developers miniconf at linux.conf.au on Monday.





January 26, 2018

linux.conf.au 2018 – Day 5 – Light Talks and Close

Lightning Talk

  • Usability Fails
  • Etching
  • Diverse Events
  • Kids Space – fairly unstructured and self organising
  • Opening up LandSat imagery – NBAR-T available on NCI
  • Project Nacho – HTML -> VPN/RDP gateway. Apache Guacamole
  • Vocaloids
  • Blockchain
  • Using j2 to create C++ code
  • Memory model code update
  • CLIs are user interface too
  • Complicated git things
  • Mollygive – matching donations
  • Abusing Docker


  • LCA 2019 will be in Christchurch, New Zealand
  • 700 attendees at linux.conf.au 2018
  • 400 talk and 36 miniconf submissions



linux.conf.au 2018 – Day 5 – Session 2

QUIC: Replacing TCP for the Web Jana Iyengar

  • History
    • Protocol for http transport
    • Deployed inside Google in 2014, in Chrome and mobile apps
    • Improved performance: YouTube rebuffers 15-18%, Google search latency 3.6-8%
    • 35% of Google’s egress traffic (7% of Internet)
    • Working group started in 2016 to standardize QUIC
    • Turned off at the start of 2016 due to a security problem
    • Traffic doubled in Sept 2016 when turned on for the YouTube app
  • Technology
    • Previously – IP -> TCP -> TLS -> HTTP/2
    • Now – IP -> UDP -> QUIC -> HTTP over QUIC
    • Includes crypto and tcp handshake
    • congestion control
    • loss recovery
    • TLS 1.3 has some of the same features that QUIC pioneered; QUIC is being updated to take it into account
  • HTTP/1
    • 1 trip for TCP
    • 2 trips for TLS
    • Single connection – Head Of Line blocking
    • Multiple TCP connections workaround.
  • HTTP/2
    • Streams within a single transport connection
    • Packet loss will stall the TCP layer
    • Unresolved problems
      • Connection setup latency
      • Middlebox interference with TCP – makes it hard to change TCP
      • Head of line blocking within TCP
  • QUIC
    • Connection setup
      • 0 round trips, handshake packet followed directly by data packet
      • 1 round trip if crypto keys are new
      • 2 round trips if QUIC version needs renegotiation
    • Streams
      • http/2 streams are sent as quic streams
  • Aspirations of protocol
    • Deployable and evolvable
    • Low latency connection establishment
    • Stream multiplexing
    • Better loss recovery and flexible congestion control
      • richer signalling (unique packet number)
      • better RTT estimates
    • Resilience to NAT-rebinding (UDP NAT mappings change often, maybe every few seconds)
  • UDP is not a transport; you put something on top of UDP to build a transport
  • Why not a new protocol instead of UDP? Almost impossible to get a new protocol in middle boxes around the Internet.
  • Metrics
    • Search Latency (see paper for other metrics)
    • Enter search term > entire page is loaded
    • Mean: desktop improved 8%, mobile 3.6%
    • Low latency users: desktop 1%, mobile none
    • Highest latency (90th-99th percentile) users: desktop & mobile 15-16%
    • Video similar
    • Big gain is from 0 RTT handshake
  • QUIC – Search Latency Improvements by Country
    • South Korea – 38ms RTT – 1% improvement
    • USA – 50ms – 2 – 3.5 %
    • India – 188ms – 5 – 13%
  • Middlebox ossification
    • Vendor ossified first byte of QUIC packet – flags byte
    • since it seemed to be the same on all QUIC packets
    • broke QUIC deployment when a flag was fixed
    • Encryption is the only way to protect against network ossification
    • “Greasing” by randomly changing options is also an option.
  • Other Protocols over QUIC?
    • Concentrating on http/2
    • Looking at Web RPC

Remote Work: My first decade working from the far end of the earth John Dalton

  • “Remote work has given me a fulfilling technical career while still being able to raise my family in Tasmania”
  • First son born in 2015; wanted to stay in Tasmania with family to raise them, rather than moving to a tech hub.
  • 2017: working with High Performance Computing at the University of Tasmania
  • If everything is going to be outsourced, I want to be the one they outsourced to.
  • Wanted to do big web stuff, nobody in Tasmania doing that.
  • Was a user at LibraryThing
    • They were searching for Sysadmin/DBA in Portland, Maine
    • Knew he could do the job even though was on other side of the world
    • Negotiated into it over a couple of months
    • Knew could do the work, but not sure how the position would work out


  • Discipline
    • Feels he is not organised. Doesn’t keep planner up to date or todo lists etc
    • “You can spend a lot of time reading about time management without actually doing it”
    • Do you need to have the minimum level
  • Isolation
    • Lives 20 minutes out of Hobart
    • In semi-rural area for days at a time, doesn’t leave house all week except to ferry kids on weekends.
    • “Never considered myself an extrovert, but I do enjoy talking to people at least weekly”
    • Need to work to hook in with the Hobart tech community. Goes to meetups. Plays D&D with friends.
    • Considering going to a coworking space; sometimes goes to cafes etc
  • Setting Boundaries
    • Hard to leave work.
    • Have a dedicated work space.
  • Internet Access
    • Prioritises coverage over cost these days for mobile.
    • Sometimes the fixed provider goes down; need to have a backup
  • Communication
    • Less random communication with other employees
    • Cannot assume any particular knowledge when talking with other people
    • Be aware of cultural differences
    • Multiple chances for miscommunication


  • Access to companies, jobs and technologies that he couldn’t get locally
  • Access to people with a wider range of experiences and backgrounds

Finding remote work

  • Talk your way into it
  • Networking
  • Jobs BoF
  • can filter

Making it work

  • Be visible
  • Go home at the end of the day
  • Remember real people are at the end of the email


linux.conf.au 2018 – Day 5 – Session 1

Self-Documenting Coders: Writing Workshop for Devs Heidi Waterhouse

History of Technical documentation

  • Linear Writing
    • On Paper, usually books
    • Emphasis on understanding and doing
  • Task-based writing
    • Early 90s
    • DITA
    • Concept, Procedure, Reference
  • Object-orientated writing
    • High art form for tech writers
    • Content as code
    • Only works when compiled
    • Favoured by tech writers, translated. Up to $2000 per seat
  • Guerilla Writing
    • Stack Overflow
    • Wikis
    • YouTube
    • frustrated non-writers trying to help peers
  • Search-first writing
    • Every page is page one
    • Search-index driven

Writing Words

  • 5 W’s of journalism.
  • Documentation needs to be tested
  • Audiences
    • eg Users, future-self, Sysadmins, experts, End users, installers
  • Writing Basics
    • Sentences short
    • Graphics for concepts
    • Avoid screencaps (too easily outdated)
    • Use style guides and linters
    • Accessibility is a real thing
  • Words with pictures
    • Never include settings only in an image ( “set your screen to look like this” is bad)
    • Use images for concepts not instructions
  • Not all your users are readers
    • Can’t see well
    • Can’t parse easily
    • Some have terrible equipment
    • Some of the “some people” is us
    • Accessibility is not a checklist, although that helps, it is us
  • Using templates to write
    • Organising your thoughts and avoid forgetting parts
    • Add a standard look at low mental cost
  • Search-first writing – page one
    • If you didn’t answer the question or point to the answer you failed
    • answer “How do I?”
  • Indexing and search
    • All the words present are indexed
    • No false pointers
    • Use words people use and search for, Don’t use just your internal names for things
  • Semantic tagging and reuse
    • Semantic text splits form and content
    • Semantic tagging allows reuse
    • Reuse saves duplication
    • Reuse requires compiling
  • Sorting topics into buckets
    • Even with search you need some organisation
    • Group items by how they get used, not by how they get programmed
    • Grouping similar items allows serendipity
  • Links, menus and flow
    • give people a next step
    • Provide related info on same page
    • show location
    • offer a chance to see the document structure

Distributing Words

  • Static Sites
  • Hosted Sites
  • Baked into the product
    • Only available to customers
    • only updates with the product
    • Hard to encourage average user to input
  • Knowledge based / CMS
    • Useful to a community that knows what it wants
    • Prone to aging and rot
    • Sometimes diverges from published docs or company message
  • Professional Writing Tools
    • Shiny and powerful
    • Learning Cliff
    • IDE
    • Super features
    • Not going to happen again
  • Paper-ish things
    • Essential for some topics
    • Reassuring to many people
    • touch is a sense we can bond with
    • Need to understand if people using docs will be online or offline when they want them.
  • Using templates to publish
    • Unified look and feel
    • Consistency and not missing things
    • Built-in checklist

Collaborating on Words

  • One weird trick, write it up as your best guess and let them correct it
  • Have a hack day
    • Set a goal of things to delete
    • Set a goal of things to fix
    • Keep track of debt you can’t handle today
    • team-building doesn’t have to be about activities

Deleting Words

  • What needs to go
    • Old stuff that is wrong and terrible
    • Wrong stuff that hides right stuff
  • What to delete
    • Anything wrong
    • Anything dangerous
    • Anything not used or updated in a year
  • How
    • Delete temporarily (put aside for a while)
    • Based on analytics
    • Ruthlessly
    • Delete or update

Documentation Must be

  • True
  • Timely
  • Testable
  • Tuned

Documentation Components

  • Who is reading and why
    • Assuming no one likes reading docs
    • What is driving them to be here
  • Pre Requisites
    • What does a user need to succeed
    • Can I change the product to reduce documentation
    • Is there any hazard in this process
  • How do I do this task
    • Steps
    • Results
    • Next steps
  • Test – How do I know that it worked
    • If you can’t test it, it is not a procedure
    • What will the system do, how does the state change
  • Reference
    • What other stuff that affects this
    • What are the optional settings
    • What are the related things
  • Code and code samples
    • Best: code you can modify and run in the docs
    • 2nd Best: Code you can copy easily
    • Worst: retyping code
  • Option
    • Why did we build it this way
    • What else might you want to know
    • Have other people done this
    • Lifecycle

Documentation Types

  • Instructions
  • Ideas (arch, problem space,discarded options, process)
  • Action required (release notes, updates, deprecation)
  • Historical (roads maps, projects plans, retrospective documents)
  • Invisible docs (user experience, microinteractions, error messages)
    • Error messages – unique ID, what caused it, what mitigation, optional: link to report
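
To make that last point concrete, here is a purely illustrative Python error message carrying all four elements; the error code, wording and URL are invented:

# Illustrative only: unique ID, cause, mitigation, and a report link.
raise RuntimeError(
    "ERR-1042: could not write cache file (disk full). "
    "Free up space or point cache_dir at another volume. "
    "Report: https://example.com/errors/ERR-1042"
)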


linux.conf.au 2018 – Day 5 – Keynote – Jess Frazelle

Keynote: Containers aka crazy user space fun

  • Work at Microsoft on Open Source and containers, specifically on kubernetes
  • Containers vs Zones vs Jails vs VMs
  • Containers are not a first class concept in the kernel.
    • Namespaces
    • Cgroups
    • AppArmor in LSM (prevent mounting, writing to /proc etc) (or SELinux)
    • Seccomp (syscall filters, which allowed or denied) – Prevent 150 other syscalls which are uncommon or dangerous.
      • Got list from testing all of dockerhub
      • eg CLONE, UNSHARE
      • NoNewPrivs (exposed as “allowPrivilegeEscalation” in K8s)
      • rkt and systemd-nspawn don’t 100% follow
  • Intel Clear containers are really VMs

History of Containers

  • OpenVZ – released 2005
  • Linux-Vserver (2008)
  • LXC ( 2008)
  • Docker ( 2013)
    • Initially used LXC as a backend
    • Switched to libcontainer in v0.7
  • lmctfy (2013)
    • By Google
  • rkt (2014)
  • runc (2015)
    • Part of Open container Initiative
  • Container runtimes are like the new Javascript frameworks

Are Containers Secure

  • Yes
  • and I can prove it
  • VMs / Zones and Jails are like all the Lego pieces are already glued together
  • Containers you have the parts separate
    • You can turn on and off certain namespaces
    • You can share namespaces between containers
    • Every container in a k8s pod shares PID and NET namespaces
    • Docker has sane defaults
    • You can sandbox apps even further though
    • No one has managed to break out of the container
    • Has a very strict seccomp profile applied
    • You’d be better off attacking the app, but you are still running under the container’s default seccomp filters

Containerizing the Desktop

  • Switched to runc from docker (had to convert stuff)
  • rootless containers
  • Runc hook “netns” to do networking
  • Sandboxed desktop apps, running in containers
  • Switch from Debian to CoreOS Container Linux as base OS
    • Verify the integrity of the OS
    • Just had to add graphics drivers
    • Based on gentoo, emerge all the way down

What if we applied the same defaults to programming languages?

  • Generate seccomp filters at build-time
    • Previously tried at run time, doesn’t work that well, something always missed
    • At build time we can ensure all code is included in the filter
    • The go compiler writes the assembly for all the syscalls, you can hijack and grab the list of these, create a seccomp filter
    • Not quite that simple
      • plugins
      • exec external stuff
      • can directly exec a syscall in go code, the name passed in via arguments at runtime
    • Library for cloud-native applications
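
As a toy illustration of the build-time idea (a crude disassembly scan of my own, nothing like the Go toolchain integration described in the talk), you could collect the syscall numbers loaded into eax just before each syscall instruction in a static x86-64 Linux binary:

import re
import subprocess
import sys

# Toy build-time syscall discovery: disassemble a static binary, find
# "mov eax, N" immediately preceding a "syscall" instruction, and
# collect the Ns. A real implementation would live in the compiler.
def syscall_numbers(binary):
    asm = subprocess.run(
        ["objdump", "-d", "-M", "intel", binary],
        capture_output=True, text=True, check=True,
    ).stdout
    numbers, last_eax = set(), None
    for line in asm.splitlines():
        mov = re.search(r"\bmov\s+eax,\s*0x([0-9a-f]+)", line)
        if mov:
            last_eax = int(mov.group(1), 16)
        elif re.search(r"\bsyscall\b", line) and last_eax is not None:
            numbers.add(last_eax)
    return sorted(numbers)

if __name__ == "__main__":
    print(syscall_numbers(sys.argv[1]))

The resulting list could seed a seccomp allowlist, though as the notes say, plugins, exec and syscalls named at runtime mean a static scan can never be complete.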

Linux Containers in secure enclaves (SCONE)

  • Currently Slow
  • Lots of tradeoffs over what executes where (trusted area or untrusted area)

Soft multi-tenancy

  • Reduced threat model, users not actively malicious
  • Hard Multi-tenancy would have potentially malicious containers running next to others
  • Host OS – eg CoreOs
  • Container Runtime – Look at glasshouse VMs
  • Network – Lots to do, default deny in k8s is a good start
  • DNS – Needs to be namespaced properly or turned off. option: kube-dns as a sidecar
  • Authentication and Authorisation – rbac
  • Isolation of master and System nodes from nodes running containers
  • Restricting access to host resources (k8s hostpath for volumes, pod security policy)
  • Making sure everything else is “very dumb” about its surroundings



January 25, 2018

linux.conf.au 2018 – Day 4 – Session 3

Insights – solving every problem for good Paul Wayper


  • Too much to check, too little time
  • What does this message mean again
  • Too reactive

How Sysadmins fix problems

  • Read text files and command output
  • Look at them for information
  • Check this information against their knowledge
  • Decide on an appropriate solution


  • Reads text files and command outputs
  • Processes them into information
  • Uses that information in rules
  • Rules provide information about the solution


  • Simple rule – check “localhost” is in /etc/hosts
  • Rule 2 – chronyd refuses to fix the server’s time since it is out by more than 1000s
    • Checks /var/log/messages for the error message from chrony
  • Insights rolls up all the checks against messages, so the log is only scanned once
  • Rule 3 – rsyslog dropping messages
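
As a rough sketch of that parse-then-check shape (plain Python, not the actual insights-core API), the “localhost” rule might look like this:

# Sketch of a rule: parse a file into information, then run a check
# that yields a suggested solution when the rule fires.
def parse_hosts(text):
    """Turn /etc/hosts content into a mapping of address -> names."""
    entries = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if line:
            addr, *names = line.split()
            entries.setdefault(addr, []).extend(names)
    return entries

def check_localhost(hosts):
    """Rule: "localhost" must appear somewhere in /etc/hosts."""
    if not any("localhost" in names for names in hosts.values()):
        return "localhost missing: add '127.0.0.1 localhost' to /etc/hosts"
    return None

with open("/etc/hosts") as f:
    print(check_localhost(parse_hosts(f.read())) or "OK")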



linux.conf.au 2018 – Day 4 – Session 2

Personalisation at Scale: A “Cookie Cutter” Approach Jim O’Halloran

  • The impact of site performance on conversion is huge
  • Magento
    • LAMP stack + Redis or memcached
    • Generally the app is CPU bound
    • Routing / Rendering still time consuming
  • Varnish full page caching (FPC)
  • But what about personalised content?
  • Edge Side Includes (ESIs)
    • But ESIs run in series, which is slow when you have many
    • Content is not cacheable, expensive to calculate, significant render time
    • ESI therefore undermines much of the advantage of FPC
  • Ajax
    • Make ajax request and fetch personalised content
    • Still load on backend
    • ESI limitations plus added network latency
  • Cookie Cutter
    • When an event occurs that modifies personalisation state, send a cookie containing the required data with the response.
    • In the browser, use the content of that cookie to update the page


  • Go to the site
    • Probably cached in varnish
    • I don’t have a cookie
    • If I login, uncachable request, I am changing login state
    • Response includes Set-Cookie header creating a personalised cookie
  • Advantages
    • No backend requests
    • Page data served is cached always
  • How big can cookies be?
    • RFC 6265 has limits but in reality
    • Actual limit ~4096 bytes per cookie
    • Some older browsers also limit to ~4096 bytes total per domain
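
To make the flow concrete, here is a minimal sketch of the server half, assuming a Python/Flask app rather than the Magento stack discussed in the talk; the cookie name and payload are invented:

from flask import Flask, jsonify, make_response

app = Flask(__name__)

# The login request is uncacheable anyway, so it is the natural place
# to emit the personalisation cookie alongside the normal response.
@app.route("/login", methods=["POST"])
def login():
    resp = make_response(jsonify(status="ok"))
    # Keep it small: a few scalar values, no pre-rendered markup.
    resp.set_cookie("personalisation", '{"name": "Jane", "cart_items": 3}')
    return resp

Every other page is then served straight from the full-page cache, and a little client-side JavaScript reads document.cookie to fill in the personalised bits.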

Potential issues

  • Request Size
    • Keep cookies small
      • Store small values only, No pre-rendered markup, No larger data structures
    • Serve static assets via CDN
    • Lot of stuff in cart can get huge
  • Information leakage
    • Final URLs leaked to users who are not logged in
  • Large Scale changes
    • Page needs to look completely different to different users
    • Vary headers might be an option
  • Formkeys
    • XSRF protection workarounds
  • What about cache misses
    • Magento assembles all its pages from a series of blocks
    • Most parts of a page are relatively static (block cache)
    • Aligent_CacheObserver – Magento extension that adds cache tags to blocks that should be cached but were not picked up as cacheable by default
    • Aoe_TemplateHints – visibility into the block cache
    • Caching != performance optimisation – Aoe_Profiler


  • Plugin available for Magento 1
    • Varnish CookieCutter
  • Magento 2 has native Varnish support
    • But it has limitations
    • Maybe some of the CookieCutter stuff could improve it


  • localStorage instead of cookies


linux.conf.au 2018 – Day 4 – Session 1

Panel: Meltdown, Spectre, and the free-software community Jonathan Corbet, Andrew ‘bunnie’ Huang, Benno Rice, Jess Frazelle, Katie McLaughlin, Kees Cook

  • FreeBSD only heard 11 days beforehand. Would have liked more notice
  • Got people involved from the Kernel Summit in Oct
  • Hosting company only heard once it went official, been busy patching since
  • Likely to be class-action lawsuit for $billions. That might make chip makers more paranoid about documentation and disclosure.
  • Thoughts in embargo
    • People noticed strange patches going in beforehand.
    • Only broke 6 days early, had been going for 6 months
    • “Linus is happy with this, something is terribly wrong”
    • Sad that the 2nd-tier cloud providers didn’t know. Exclusive club and lines as to who got informed were not clear
    • Projects that don’t have explicit relationship with Intel didn’t get informed
  • Thoughts on other vendors
    • This class of bugs could affect anybody; open hardware would probably not have prevented it
    • More open hardware could enable people to review the processors and find these from the design rather than poking around
    • Hard to guarantee the shipped hardware matches the design
    • Software people can build everything at home and check. FABs don’t work at home.
  • Speculative execution warned about years ago. Danger ignored. How to make sure the next one isn’t ignored?
    • We always have to do some risky stuff
    • The research on this built up slowly over the years
    • Even if you have only found impractical attacks against something doesn’t mean the practical one doesn’t exist.
  • What criteria do we use to decide who is in?
    • Mechanisms do exist, they were mainly not used. Perhaps because they were for software vulnerabilities
  • Did people move providers?
    • No but Containers made things easier to reboot stuff and shuffle
  • Are there similar vulnerabilities ( similar or general hardware ) coming along?
    • The Kernel page-table patches were fairly general, should cover many similar ones
    • All these performance optimising bit of your CPU are now attack surfaces
    • What are people going to do if this slows down hardware too much?
  • How do we explain problems like these to politicians etc
    • Legos
    • We still have kernel devs getting their laptops
  • Can be use CPUs that don’t have speculative execution?
    • Not really. Back to 486s
  • Who are we protecting against with the embargo?
    • Everybody
    • The longer period let better fixes get in
    • The meltdown fix could be done in semi-public so had better quality

What is the most common street name in Australia? Rachel Bunder

  • Why?
    • Saw a map of the most common street name by US state
  • Just looking at the name, not the end bit (“Park”, “Road”)
  • Data
    • PSMA Geocoded national address file – Great but came out after project
    • Use Open Street Maps
  • Started with Common Name in Sydney
    • Used Metro Extracts – site closing down soon
    • Format is geojson
    • Road files separately provided
  • Procedure
    • Used Python; R also has good features and libraries
    • geopandas
    • Had some paths with no names
    • What is a road? – “Something with a name I can drive a car on”
  • Sydney
    • Full street name
      • Victoria Road
      • Pacific Highway
      • oops, looks like names are being counted twice
    • Tried merging them together
    • Road segments don’t 100% match at the ends. Added a function to fuzzy-merge roads whose ends are up to 100m apart
    • Still some weird ones but probably won’t affect top
    • Second attempt
      • Short st, George st, William st, John st, Church st
  • Now with just the “name bit”
    • Tried dropping the last word; ended up with “the” as the most common.
    • Started with “The” = whole name
    • Single word = whole name
    • name – descriptor – suffix
    • lots of weird names
    • name list – Park, Victoria, Railway, William, Short
  • Wouldn’t work in many other countries
  • Now for all over Australia
    • overpass data
    • Downloaded in 50km x 50km squares
  • Lessons
    • Start small
    • Choose something familiar
    • Check your bias (different naming conventions)
    • Constant vigilance
    • Know your problem
  • Common plant names
    • Wattle – 15th – 385
  • Other name
    • “The Esplanade” more common than “The Avenue”
  • Top names
    • 5th – Victoria
    • 4th – Church – 497
    • 3rd – George –  551
    • 2nd – Railway
    • 1st – Park – 693
  • By State
    • WA – Forest
    • SA – Railway
    • Vic – Park
    • Tas – Esplanade
    • NT – Smith/Stuart
    • NSW – Park
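
In the spirit of the procedure above, here is a cut-down sketch of the counting step in plain Python over an OSM roads GeoJSON export. The file name, property key and suffix list are my own guesses, and the talk's real pipeline used geopandas plus the fuzzy merging of road segments described above:

import json
from collections import Counter

# Trailing descriptors to strip; an assumed, incomplete list.
SUFFIXES = {"street", "road", "avenue", "highway", "lane", "drive",
            "court", "place", "parade", "crescent"}

def base_name(full_name):
    # Single-word names and "The ..." names (e.g. "The Esplanade")
    # count as a whole; otherwise drop a trailing descriptor.
    words = full_name.split()
    if len(words) == 1 or words[0].lower() == "the":
        return full_name
    if words[-1].lower() in SUFFIXES:
        return " ".join(words[:-1])
    return full_name

with open("roads.geojson") as f:
    features = json.load(f)["features"]

counts = Counter(
    base_name(props["name"])
    for feat in features
    for props in [feat.get("properties") or {}]
    if props.get("name")
)

for name, n in counts.most_common(5):
    print(name, n)

Note this counts each road segment separately, which is exactly the double-counting problem the talk's merging step addresses.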


linux.conf.au 2018 – Day 4 – Keynote – Hugh Blemings

Wandering through the Commons

Reflections on Free and Open Source Software/Hardware in Australia, New Zealand and beyond

  • Past’s reviewed
  • FOSS in Aus and NZ
    • Above per capita
  • List of Aus / NZ people and their contributions
    • John Lions , Lions book on Unix
    • Pia Andrews/Waugh/Smith – Open Government, GovHack, Linux Australia, Open Data
    • Vik Oliver – 3D Printing
    • Clare Curran – Open Government in NZ
    • plus a bunch of others

Working in Free Software and Open Hardware

  • The basics
    • Be visible in projects of relevance
      • You will be typed into Google, looked at on GitHub
    • Be yourself
      • But be business friendly
    • LinkedIn is a thing, really
    • Need an accurate basic presence
  • Finding a new job
    • Networks
    • Local user groups
    • Conferences
    • The projects you work on
  • Application and negotiation
    • Be professional, courteous
    • Do homework about company and culture
    • Talk to people that work there
    • Spend time on interview prep
      • Know your stuff, if you don’t know, say so
    • Think about Salary expectations and stick to them
      • Val Aurora’s page on this is excellent
    • Ask to keep copyright on your code
      • Should be a no-brainer for a FOSS/OH company
  • In the Job
    • Takes time to get into groove, don’t sweat it
    • Get out every now and then, particularly if working from home
    • Work/life balance
    • Know when to jump
      • Poisonous workplaces
    • An aside to People’s managers
      • Bring your best or don’t be a people manager
      • Take your reports’ welfare seriously

Looking after You

  • Ours is in the main a sedentary and solitary pursuit
    • exercise
  • Sitting and standing in front of a desk all day is bad
    • take breaks
  • Depression is a real thing
  • Eat more vegetables
  • Find friends/colleagues to exercise with

Working in FOSS / OH – Staying Current

  • Look over a colleague’s shoulder
  • Do something that is not part of your regular job
    • low level programming
    • Larger systems, OpenStack
  • Stay up to date with security blogs and the like
    • Many of the attack vectors have generic relevance
  • Take the lid off, tinker with hardware
    • Lots of videos online to help or just watch

Make Hay while the Sun Shines

  • Save some money for rainy day
  • Keep networks Open
  • Even when you have a job

You’re fired … Now What? – In a moment

  • Don’t panic
    • Going out in a twitter storm won’t help anyone
  • It’s not personal
    • It is the position that is no longer needed, not you
  • If you think it an unfair dismissal, seek legal advice before signing anything
  • It is normal to feel rubbish
  • Beware of imposter syndrome
  • Try to keep 2-3 opportunities in the pipeline
  • Don’t assume people will remember you
    • It’s not personal, everyone gets busy
    • It’s okay to (politely naturally) follow up periodically
  • Keep search a little narrow for the first week or two
    • Then expand widely
  • Balance taking “something/everything” against waiting for your dream job

Dream Job

  • Power 9 CPU
    • 14nm process
    • 4GHz, 24 cores
    • 25km of wires
    • 8 billion transistors
    • 3900 official chip pins
    • ~19,000 connections from die to the pin


  • Part of a vibrant FOSS/OH community both here and abroad
  • We have accomplished much
  • The most exciting (in both senses) things lie before us
  • We need all of you to be part at every level of the stack
  • Look forward to working with you…


January 24, 2018

linux.conf.au 2018 – Day 3 – Session 3 – Booting

Securing the Linux boot process Matthew Garrett

  • Without boot security there is no other security
  • MBR Attacks – previously common, still work sometimes
  • Bootloader attacks – Seen in the wild
  • Malicious initrd attacks
    • RAM disk, does stuff like decrypt hard drive
    • Attack captures disk passphrase when typed in
  • How do we fix these?
    • UEFI Secure boot
    • Microsoft required in machines shipped after mid-2012
    • sign objects, firmware trusts some certs, boots things correctly signed
    • Problem solved! Nope
    • initrds are not signed
  • initrds
    • contain local changes
    • do a lot of security stuff
  • TPMs
    • devices on system motherboards
    • slow but inexpensive
    • Not under control of the CPU
    • Set of registers “platform configuration registers”, list of hashes of objects booted in boot process. Measurements
    • PCR can enforce things, stop boots if stuff doesn’t match
    • But stuff changes all the time, eg firmware updates. Can brick the machine
  • Microsoft to the rescue
    • Tie Secure boot into measured boot
    • Measure signing keys rather than the actual files themselves
    • But initrds are not signed
  • Systemd to the rescue
    • systemd boot stub (not the systemd boot loader)
    • Embed initrd and the kernel into a single image with a single signature
    • But initrds contain local information
    • End users should not be signing stuff
  • Kernel can be handed multiple initramfs images (via cpio)
    • each unpacked in turn
    • Each will over-write the previous one
    • configuration can be over-written by the signed image, perhaps deliberately, so that if the config is tampered with, stuff fails
    • unpack config first, code second
  • Kernel command line is also security sensitive
    • eg turn off iommu and dump RAM to extract keys
    • Have a secure command line turning on all security features, then append what the user sends
  • Proof of device state
    • Can show you a number after boot based on the TPM. Can compare it to a 2FA device to make sure the machine booted securely. Then safe to type in passwords
  • Secure Provision of secrets
    • Know a remote machine is booted safely and not been subverted before sending it secret stuff.


LXC setup on Debian stretch

Here's how to set up LXC-based "chroots" on Debian stretch. While I wrote about this on Debian jessie, I had to make some networking changes for stretch, so here are the full steps that should work on stretch.

Start by installing (as root) the necessary packages:

apt install lxc libvirt-clients debootstrap

Network setup

I decided to use the default /etc/lxc/default.conf configuration (no change needed here): = veth = lxcbr0 = up = 00:FF:AA:xx:xx:xx

That configuration requires that the veth kernel module be loaded. If you have any kinds of module-loading restrictions enabled, you probably need to add the following to /etc/modules and reboot:

veth

Next, I had to make sure that the "guests" could connect to the outside world through the "host":

  1. Enable IPv4 forwarding by putting this in /etc/sysctl.conf:

    net.ipv4.ip_forward=1

  2. and then applying it using:

    sysctl -p
  3. Restart the network bridge:

    systemctl restart lxc-net.service
  4. and ensure that it's not blocked by the host firewall, by putting this in /etc/network/iptables.up.rules:

    -A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
    -A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
    -A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
  5. and applying the rules using:

    iptables-apply

Creating a container

Creating a new container (in /var/lib/lxc/) is simple:

sudo MIRROR= lxc-create -n sid64 -t debian -- -r sid -a amd64

You can start or stop it like this:

sudo lxc-start -n sid64
sudo lxc-stop -n sid64

Connecting to a guest using ssh

The ssh server is configured to require pubkey-based authentication for root logins, so you'll need to log into the console:

sudo lxc-stop -n sid64
sudo lxc-start -n sid64 -F

Since the root password is randomly generated, you'll need to reset it before you can login as root:

sudo lxc-attach -n sid64 passwd

Then login as root and install a text editor inside the container because the root image doesn't have one by default:

apt install vim

then paste your public key in /root/.ssh/authorized_keys.

Then you can exit the console (using Ctrl+a q) and ssh into the container. You can find out what IP address the container received from DHCP by typing this command:

sudo lxc-ls --fancy

Mounting your home directory inside a container

In order to have my home directory available within the container, I created a user account for myself inside the container and then added the following to the container config file (/var/lib/lxc/sid64/config):

lxc.mount.entry=/home/francois home/francois none bind 0 0

before restarting the container:

lxc-stop -n sid64
lxc-start -n sid64

Fixing locale errors

If you see a bunch of errors like these when you start your container:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "fr_CA.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

then log into the container as root and use:

dpkg-reconfigure locales

to enable the same locales as the ones you have configured in the host.

If you see these errors while reconfiguring the locales package:

Generating locales (this might take a while)...
  en_US.UTF-8...cannot change mode of new locale archive: No such file or directory
  fr_CA.UTF-8...cannot change mode of new locale archive: No such file or directory
Generation complete.

and see the following dmesg output on the host:

[235350.947808] audit: type=1400 audit(1441664940.224:225): apparmor="DENIED" operation="chmod" info="Failed name lookup - deleted entry" error=-2 profile="/usr/bin/lxc-start" name="/usr/lib/locale/locale-archive.WVNevc" pid=21651 comm="localedef" requested_mask="w" denied_mask="w" fsuid=0 ouid=0

then AppArmor is interfering with the locale-gen binary and the work-around I found is to temporarily shutdown AppArmor on the host:

lxc-stop -n sid64
systemctl stop apparmor
lxc-start -n sid64

and then start it up again later once the locales have been updated:

lxc-stop -n sid64
systemctl start apparmor
lxc-start -n sid64

AppArmor support

If you are running AppArmor, your container probably won't start until you add the following to the container config (/var/lib/lxc/sid64/config):

lxc.aa_allow_incomplete = 1

January 23, 2018

LUV Main February 2018 Meeting: report

Feb 6 2018 18:30
Feb 6 2018 20:30
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000




  • Russell Coker and others, LCA conference report

Russell Coker has done lots of Linux development over the years, mostly involved with Debian.


Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.

February 6, 2018 - 18:30

January 22, 2018

LCA 2018 Kernel Miniconf – SELinux Namespacing Slides

I gave a short talk on SELinux namespacing today at the Kernel Miniconf in Sydney — the slides from the talk are here:

This is a work in progress to which I’ve been contributing, following on from initial discussions at Linux Plumbers 2017.

In brief, there’s a growing need to be able to provide SELinux confinement within containers: typically, SELinux appears disabled within a container on Fedora-based systems, as a workaround for a lack of container support.  Underlying this is a requirement to provide per-namespace SELinux instances,  where each container has its own SELinux policy and private kernel SELinux APIs.

A prototype for SELinux namespacing was developed by Stephen Smalley, who released the code via  There were and still are many TODO items.  I’ve since been working on providing namespacing support to on-disk inode labels, which are represented by security xattrs.  See the v0.2 patch post for more details.

Much of this work will be of interest to other LSMs such as Smack, and many architectural and technical issues remain to be solved.  For those interested in this work, please see the slides, which include a couple of overflow pages detailing some known but as yet unsolved issues (supplied by Stephen Smalley).

I anticipate discussions on this and related topics (LSM stacking, core namespaces) later in the year at Plumbers and the Linux Security Summit(s), at least.

The session was live streamed — I gather a standalone video will be available soon!

ETA: the video is up! See:

January 21, 2018

4cm thick wood cnc project: shelf

The lighter wood is about 4cm thick. Both of the sides are cut from a single plank of timber which left the feet with a slight weak point at the back. Given a larger bit of timber I would have tapered the legs outward from the back more gradually. But the design is restricted by the timber at hand.

The shelves are plywood, which turned out fairly well after a few coats of poly. I knocked the extremely sharp edges off the ply so it hurts a little rather than a lot if you accidentally poke the edge. This is a mixed machine and human build; the back of the plywood that meets the uprights was knocked off using a bandsaw.

Being able to CNC thick timber like this opens up more bold designs. Currently I have to use a 1/2 inch bit to get this reach. Stay tuned for more CNC timber fun!

January 19, 2018

January 16, 2018

More About the Thinkpad X301

Last month I blogged about the Thinkpad X301 I got from a rubbish pile [1]. One thing I didn’t realise when writing that post is that the X301 doesn’t have the keyboard light that the T420 has. With the T420 I could press the bottom left (FN) and top right (PgUp from memory) keys on the keyboard to turn a light on the keyboard. This is really good for typing at night. While I can touch type the small keyboard on a laptop makes it a little difficult so the light is a feature I found useful. I wrote my review of the X301 before having to use it at night.

Another problem I noticed is that it crashes after running Memtest86+ for between 30 minutes and 4 hours. Memtest86+ doesn’t report any memory errors, the system just entirely locks up. I have 2 DIMMs for it (2G and 4G), I tried installing them in both orders, and I tried with each of them in the first slot (the system won’t boot if only the second slot is filled). Nothing changed. Now it is possible that this is something that might not happen in real use. For example it might only happen due to heat when the system is under sustained load which isn’t something I planned for that laptop. I would discard a desktop system that had such a problem because I get lots of free desktop PCs, but I’m prepared to live with a laptop that has such a problem to avoid paying for another laptop.

Last night the laptop battery suddenly stopped working entirely. I had it unplugged for about 5 minutes when it abruptly went off (no flashing light to warn that the battery was low or anything). Now when I plug it in the battery light flashes orange. A quick Google search indicates that this might mean that a fuse inside the battery pack has blown or that there might be a problem with the system board. Replacing the system board would cost much more than the laptop is worth, and even replacing the battery will probably cost more than it’s worth. I previously bought a Thinkpad T420 at auction because it didn’t cost much more than getting a new battery and PSU for a T61 [2], and I expect I can find a similar deal if I poll the auction sites for a while.

Using an X series Thinkpad has been a good experience and I’ll definitely consider an X series for my next laptop. My previous history of laptops involved going from ones with a small screen that were heavy and clunky (what was available with 90’s technology and cost less than a car) to ones that had a large screen and were less clunky but still heavy. I hadn’t tried small and light with technology from the last decade; it’s something I could really get used to!

By today’s standards the X301 is deficient in a number of ways. It has 64G of storage (the same as my most recent phones) which isn’t much for software development, 6G of RAM which isn’t too bad but is small by today’s standards (16G is a common factory option nowadays), a 1440*900 screen which looks bad in any comparison (a lower resolution than the last 3 phones I’ve owned), and a slow CPU. No two of these limits would be enough to make me consider replacing that laptop. Even with the possibility of crashing under load it was still a useful system. But the lack of a usable battery in combination with all the other issues makes the entire system unsuitable for my needs. I would be very happy to use a fast laptop with a high resolution screen even without a battery, but not with this list of issues.

Next week I’m going to a conference and there’s no possibility of buying a new laptop before then. So for a week when I need to use a laptop a lot I will have a sub-standard laptop.

It really sucks to have a laptop develop a problem that makes me want to replace it so soon after I got it.

January 10, 2018

Priorities for my team

(unthreaded from here)

During the day, I’m a Lead of a group of programmers. We’re responsible for a range of tools and tech used by others at the company for making games.

I have a list of my priorities (and some related questions), things that I think are important for us to be able to do well as individuals, and as a team:

  1. Treat people with respect. Value their time, place high value on their well-being, and start with the assumption that they have good intentions
    (“People” includes yourself: respect yourself, value your own time and well-being, and have confidence in your good intentions.)
  2. When solving a problem, know the user and understand their needs.
    • Do you understand the problem(s) that need to be solved? (it’s easy to make assumptions)
    • Have you spoken to the user and listened to their perspective? (it’s easy to solve the wrong problem)
    • Have you explored the specific constraints of the problem by asking questions like:
      • Is this part needed? (it’s easy to over-reach)
      • Is there a satisfactory simpler alternative? (actively pursue simplicity)
      • What else will be needed? (it’s easy to overlook details)
    • Have you discussed your proposed solution with users, and do they understand what you intend to do? (verify, and pursue buy-in)
    • Do you continue to meet regularly with users? Do they know you? Do they believe that you’re working for their benefit? (don’t under-estimate the value of trust)
  3. Have a clear understanding of what you are doing.
    • Do you understand the system you’re working in? (it’s easy to make assumptions)
    • Have you read the documentation and/or code? (set yourself up to succeed with whatever is available)
    • For code:
      • Have you tried to modify the code? (pull a thread; see what breaks)
      • Can you explain how the code works to another programmer in a convincing way? (test your confidence)
      • Can you explain how the code works to a non-programmer?
  4. When trying to solve a problem, debug aggressively and efficiently.
    • Does the bug need to be fixed? (see 2)
    • Do you understand how the system works? (see 3)
    • Is there a faster way to debug the problem? Can you change code or data to cause the problem to occur more quickly and reliably? (iterate as quickly as you can, fix the bug, and move on)
    • Do you trust your own judgement? (debug boldly, have confidence in what you have observed, make hypotheses and test them)
  5. Pursue excellence in your work.
    • How are you working to be better understood? (good communication takes time and effort)
    • How are you working to better understand others? (don’t assume that others will pursue you with insights)
    • Are you responding to feedback with enthusiasm to improve your work? (pursue professionalism)
    • Are you writing high quality, easy to understand, easy to maintain code? How do you know? (continue to develop your technical skills)
    • How are you working to become an expert and industry leader with the technologies and techniques you use every day? (pursue excellence in your field)
    • Are you eager to improve (and fix) systems you have worked on previously? (take responsibility for your work)

The list was created for discussion with the group, and as an effort to articulate my own expectations in a way that will help my team understand me.

Composing this has been a useful exercise for me as a lead, and definitely worthwhile for the group. If you’ve never tried writing down your own priorities, values, and/or assumptions, I encourage you to try it :)

January 07, 2018

Engage the Silent Drive

I’ve been busy electrocuting my boat – here are our first impressions of the Torqeedo Cruise 2.0T on the water.

About 2 years ago I decided to try sailing, so I bought a second hand Hartley TS16; a popular small “trailer sailor” here in Australia. Since then I have been getting out once every week, having some very pleasant days with friends and family, and even at times by myself. Sailing really takes you away from everything else in the world. It keeps you busy as you are always pulling a rope or adjusting this and that, and is physically very active as you are clambering all over the boat. Mentally there is a lot to learn, and I started as a complete nautical noob.

Sailing is so quiet and peaceful, you get propelled by the wind using aerodynamics and it feels like magic. However this is marred by the noise of outboard motors, which are typically used at the start and end of the day to get the boat to the point where it can sail. They are also useful to get you out of trouble in high seas/wind, or when the wind dies. I often use the motor to “un-hit” Australia when I accidentally lodge myself on a sand bar (I have a lot of accidents like that).

The boat came with an ancient 2 stroke which belched smoke and noise. After about 12 months this motor suffered a terminal meltdown (impeller failure and overheating) so it was replaced with a modern 5HP Honda 4-stroke, which is much quieter and very fuel efficient.

My long term goal was to “electrocute” the boat and replace the infernal combustion outboard engine with an electric motor and battery pack. I recently bit the bullet and obtained a Torqeedo Cruise 2kW outboard from Eco Boats Australia.

My friend Matt and I tested the motor today and are really thrilled. Matt is an experienced Electrical Engineer and sailor so was an ideal companion for the first run of the Torqeedo.

Torqeedo Cruise 2.0 First Impressions

It’s silent – incredibly so. Just a slight whine conducted from the motor/gearbox pod beneath the water. The sound of water flowing around the boat is louder!

The acceleration is impressive, better than the 4-stroke – make sure you sit down. That huge, low-RPM prop delivers loads of torque. After experimenting with other power levels, we settled on 1000W.

The throttle control is excellent, you can dial up any speed you want. This made parking (mooring) very easy compared to the 4-stroke which is more of a “single speed” motor (idles at 3 knots, 4-5 knots top speed) and is unwieldy for parking.

It’s fit for purpose. This is not a low power “trolling” motor, it is every bit as powerful as the modern Honda 5HP 4-stroke. We did an A/B test and obtained the same top speed (5 knots) in the same conditions (wind/tide/stretch of water). We used it in 15 knot winds and 1m seas and it was the real deal – pushing the boat exactly where we wanted to go with authority. This is not a compromise solution. The Torqeedo shows internal combustion whose house it is.

We had some fun sneaking up on kayaks at low power, getting to within a few metres before they heard us. Other boaties saw us gliding past with the sails down and couldn’t work out how we were moving!

A hidden feature is Azipod steering – it steers through more than 270 degrees. You can reverse without reverse gear, and we did “donuts” spinning on the keel!

Some minor issues: Unlike the Honda, the Torqeedo doesn’t tilt completely out of the water when sailing, leaving some residual drag from the motor/propeller pod. It also has to be removed from the boat for trailering, due to insufficient road clearance.

Walk Through

Here are the two motors with the boat out of the water:

It’s quite a bit longer than the Honda, mainly due to the enormous prop. The centres of the two props are actually only 7cm apart in height above ground. I had some concerns about ground clearance, both when trailering and also in the water. I have enough problems hitting Australia and like the way my boat can float in just 30cm of water. I discussed this with my very helpful Torqeedo dealer, Chris. He said tests with the short and long versions suggested this wasn’t a problem and in fact the “long” version provided better directional control. More water on top of the prop is a good thing. They recommend 50mm minimum, I have about 100mm.

To get started I made up a 24V battery pack using a plastic tub and 8 x 3.2V 100AH Lithium cells, left over from my recent EV battery upgrade. The cells are in varying conditions; I doubt any of them have 100AH capacity after 8 years of being hammered in my EV. On the day we ran for nearly 2 hours before one of the weaker cells dipped beneath 2.5V. I’ll sort through my stock of second hand cells some time to optimise the pack.

The pack plus motor weighs 41kg; the 5HP Honda plus 5l of petrol, 32kg. At low power (600W, 3.5 knots), this 2.5kWh pack will give us a range of 14 nm or 28km (2.5kWh at 600W is a bit over 4 hours of motoring, and 4 hours at 3.5 knots is about 14 nm). Plenty – on a huge day’s sailing we cover 40km, of which just 5km would be on motor.

All that power on board is handy too, for example the load of a fridge would be trivial compared to the motor, and a 100W HF radio no problem. So now I can quaff ice-cold sparkling shiraz or a nice beer, while having an actual conversation and not choking on exhaust fumes!

Here’s Matt taking us for a test drive, not much to the Torqeedo above the water:

For a bit of fun we ran both motors (maybe 10HP equivalent) and hit 7 knots, almost getting the Hartley up on the plane. Does this make it a Hybrid boat?


We are in love. This is the future of boating. For sale – one 5HP Honda 4-stroke.

Annual Penguin Picnic, January 28, 2018

Jan 28 2018 12:00
Jan 28 2018 18:00
Infoxchange, 33 Elizabeth St. Richmond


The Linux Users of Victoria Annual Penguin Picnic will be held on Sunday, January 28, starting at 12 noon at the Yarra Bank Reserve, Hawthorn.

Due to the predicted extreme hot weather on Sunday, the LUV committee has decided to change to an indoor picnic with dips, cheeses, cured meats, fruits, cakes, icecreams and icy poles, cool drinks, etc. instead of a BBQ.  The meeting will now be held at our regular workshop venue, Infoxchange at 33 Elizabeth St. Richmond, right by Victoria Parade and North Richmond railway station.

LUV would like to acknowledge Infoxchange for the Richmond venue.

Linux Users of Victoria Inc. is a subcommittee of Linux Australia.


January 06, 2018

A little bit of floating point in a memory allocator — Part 1: Background

This post contains the same material as this thread of tweets, with a few minor edits.

Over my holiday break at the end of 2017, I took a look into the TLSF (Two Level Segregated Fit) memory allocator to better understand how it works. I’ve made use of this allocator and have been impressed by its real world performance, but never really done a deep dive to properly understand it.

The mapping_insert() function is a key part of the allocator implementation, and caught my eye. Here’s how that function is described in the paper A constant-time dynamic storage allocator for real-time systems:

I’ll be honest: from that description, I never developed a clear picture in my mind of what that function does.

(Reading it now, it seems reasonably clear – but I can say that only after I spent quite a bit of time using other methods to develop my understanding)

Something that helped me a lot was looking at the reference implementation of that function. There’s a bunch of long-named macro constants in there, and a few extra implementation details. If you collapse those it looks something like this:

void mapping_insert(size_t size, int* fli, int* sli)
{
  int fl, sl;
  if (size < 256) {
    /* small sizes: simple linear mapping */
    fl = 0;
    sl = (int)size / 8;
  } else {
    /* larger sizes: fls() gives the index of the highest set bit */
    fl = fls(size);
    sl = (int)(size >> (fl - 5)) ^ 0x20;
    fl -= 7;
  }
  *fli = fl;
  *sli = sl;
}

It’s a pretty simple function (it really is). But I still failed to *see* the pattern of results that would be produced in my mind’s eye.

I went so far as to make a giant spreadsheet of all the intermediate values for a range of inputs, to paint myself a picture of the effect of each step :) That helped immensely.

Breaking it down…

There are two cases handled in the function: one for when size is below a certain threshold, and one for when it is larger. The first is straightforward, and accounts for a small number of possible input values. The large size case is more interesting.

The function computes two values: fl and sl, the first and second level indices for a lookup table. For the large case, fl (where fl is “first level”) is computed via fls(size) (where fls is short for “find last set” – similar names, just to keep you on your toes).

fls() returns the index of the highest set bit, counting from the least significant bit – which is the index of the largest power of two. In the words of the paper:

“the instruction fls can be used to compute the ⌊log2(x)⌋ function”

Which is, in C-like syntax: floor(log2(x))
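
If fls() is new to you, a portable stand-in might look like the following. This is a sketch only: the name fls_sketch is made up, and real implementations typically use a count-leading-zeros instruction or compiler builtin.

#include <stdint.h>

/* Returns the 0-based index of the highest set bit, i.e. floor(log2(x)).
   Assumes x != 0, which is safe here since this branch only runs for
   size >= 256. */
static int fls_sketch(uint32_t x)
{
  int i = -1;
  while (x) {
    x >>= 1;
    i++;
  }
  return i;
}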

And there’s that “fl -= 7” at the end. That will show up again later.

For the large case, the computation of sl has a few steps:

  sl = (size >> (fl - 5)) ^ 0x20;

So: shift size down by some amount (depending on fl), and mask out the sixth bit?

(Aside: The CellBE programmer in me is flinching at that variable shift)

It took me a while (longer than I would have liked…) to realize that this size >> (fl - 5) is shifting size to generate a number that has exactly six significant bits, at the least significant end of the register (bits 5 thru 0).

Because fl is the index of the most significant bit, after this shift, bit 5 will always be 1 – and that “^ 0x20” will unset it, leaving the result as a value between 0 and 31 (inclusive).
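
To make that concrete, here is a worked example with my own numbers (not from the original thread):

  size = 1000            binary 1111101000, so fls(1000) = 9
  1000 >> (9 - 5) = 62   binary 111110: six significant bits, bit 5 set
  62 ^ 0x20 = 30         bit 5 cleared, so sl = 30 (within 0..31)
  fl = 9 - 7 = 2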

So here’s where floating point comes into it, and the cute thing I saw: another way to compute fl and sl is to convert size into an IEEE754 floating point number, and extract the exponent, and the most significant bits of the mantissa. I’ll cover that in the next part.

A little bit of floating point in a memory allocator — Part 2: The floating point


This post contains the same material as this thread of tweets, with a few minor edits.

In IEEE754, floating point numbers are represented like this:

  ±1.sss…sss × 2^nnn

nnn is the exponent, which is floor(log2(size)) — which happens to be the fl value computed by TLSF.

sss… is the significand fraction: the part that follows the binary point, which happens to be sl.

And so to calculate fl and sl, all we need to do is convert size to a floating point value (on recent x86 hardware, that’s a single instruction). Then we can extract the exponent, and the upper bits of the fractional part, and we’re all done :D

That can be implemented like this:

double sf = (int64_t)size;
uint64_t sfi;
memcpy(&sfi, &sf, 8);
fl = (sfi >> 52) - (1023 + 7);
sl = (sfi >> 47) & 31;

There are some subtleties (there always are). I’ll break it down…

double sf = (int64_t)size;

Convert size to a double, with an explicit cast. size has type size_t, but in the TLSF implementation we’re using, the largest supported allocation on a 64bit architecture is 2^32 bytes – comfortably less than the precision provided by the double type. If you need your TLSF allocator to allocate chunks bigger than 2^53, this isn’t the technique for you :)

I first tried using float (not double), which can provide correct results — but only if the rounding mode happens to be set correctly. double is easier.

The cast to (int64_t) results in better codegen on x86: without it, the compiler will generate a full 64bit unsigned conversion, and there is no single instruction for that.

The cast tells the compiler to (in effect) consider the bits of size as if they were a two’s complement signed value — and there is an SSE instruction to handle that case (cvtsi2sdq or similar). Again, with the implementation we’re using size can’t be that big, so this will do the Right Thing.

uint64_t sfi;
memcpy(&sfi, &sf, 8);

Copy the 8 bytes of the double into an unsigned integer variable. There are a lot of ways that C/C++ programmers copy bits from floating point to integer – some of them are well defined :) memcpy() does what we want, and any moderately respectable compiler knows how to select decent instructions to implement it.

Now we have the floating point bits in an integer register, consisting of one sign bit (always zero for this, because size is always positive), eleven exponent bits (offset by 1023), and 52 bits of significand fraction. All we need to do is extract those, and we’re done :)

fl = (sfi >> 52) - (1023 + 7);

Extract the exponent: shift it down (ignoring the always-zero sign bit), subtract the offset (1023), and that 7 we saw earlier, at the same time.

sl = (sfi >> 47) & 31;

Extract the five most significant bits of the fraction – we do need to mask out the exponent.

And, just like that*, we have mapping_insert(), implemented in terms of integer -> floating point conversion.

* Actual code (rather than fragments) may be included in a later post…
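
In the meantime, here is a sketch of how the fragments above might fit together as a complete function. This is my own assembly of the pieces, not code from a published follow-up: the name mapping_insert_fp is made up, and the small-size case is copied from Part 1.

#include <stdint.h>
#include <string.h>

void mapping_insert_fp(size_t size, int* fli, int* sli)
{
  if (size < 256) {
    /* small-size case, unchanged from Part 1 */
    *fli = 0;
    *sli = (int)size / 8;
    return;
  }
  /* large-size case, via the integer -> double conversion */
  double sf = (double)(int64_t)size;      /* one cvtsi2sdq on x86 */
  uint64_t sfi;
  memcpy(&sfi, &sf, 8);                   /* well-defined bit copy */
  *fli = (int)((sfi >> 52) - (1023 + 7)); /* exponent, minus bias and the 7 */
  *sli = (int)((sfi >> 47) & 31);         /* top 5 bits of the fraction */
}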

January 05, 2018

That gantry just pops right off

Hobby CNC machines sold as "3040" may have a gantry clearance of about 80mm and a z axis travel of around 55mm. A detached gantry is shown below. Notice that there are 3 bolts on the bottom side mounting the z-axis to the gantry. The stepper motor attaches on the side shown so there are 4 NEMA holes to hold the stepper. Note that the normal 3040 doesn't have the mounting plate shown on the z-axis, that crossover plate allows a different spindle to be mounted to this machine.

The plan is to create replacement sides with some 0.5inch offcut 6061 alloy. This will add 100mm to the gantry so it can more easily clear clamps and a 4th axis. Because that would move the cutter mount upward as well, replacing the z-axis with something that has more range, say 160mm, becomes an interesting plan.

One advantage to upgrading a machine like this is that you can reassemble the machine after measuring and designing the upgrade and then cut replacement parts for the machine using the machine.

The 3040 can look a bit spartan with the gantry removed.

The preliminary research is done. Designs created. CAM done. I just have to cut 4 plates and then the real fun begins.

January 04, 2018

Pivoting ‘the book’ from individuals to systems

In 2016 I started writing a book, “Choose Your Own Adventure”, which I wanted to be a call to action for individuals to consider their role in the broader system and how they individually can make choices to make things better. As I progressed the writing of that book I realised the futility of changing individual behaviours and perspectives without an eye to the systems and structures within which we live. It is relatively easy to focus on oneself, but “no man is an island” and quite simply, I don’t want to facilitate people turning themselves into more beautiful cogs in a dysfunctional machine. So I’m pivoting the focus of the book (and reusing the relevant material) and now plan to finish it by mid 2018.

I have recently realised four paradoxes which have instilled in me a sense of urgency to reimagine the world as we know it. I believe we are at a fork in the road where we will either reinforce legacy systems based on outdated paradigms with shiny new things, or choose to forge a new path using the new tools and opportunities at our disposal, hopefully one that is genuinely better for everyone. To do the latter, we need to critically assess the systems and structures we built and actively choose what we want to keep, what we should discard, what sort of society we want in the future and what we need to get there.

I think it is too easily forgotten that we invented all this and can therefore reinvent it if we so choose. But to not make a choice is to choose the status quo.

This is not to say I think everything needs to change. Nothing is so simplistic or misleading as a zero sum argument. Rather, the intent of this book is to challenge you to think critically about the systems you work within, whether they enable or disable the things you think are important, and most importantly, to challenge you to imagine what sort of world you want to see. Not just for you, but for your family, community and the broader society. I challenge you all to make 2018 a year of formative creativity in reimagining the world we live in and how we get there.

The paradoxes in brief, are as follows:

  • That though power is more distributed than ever, most people are still struggling to survive.
    It has been apparent to me for some time that there is a growing substantial shift in power from traditional gatekeepers to ordinary people through the proliferation of rights based philosophies and widespread access to technology and information. But the systemic (and artificial) limitations on most people’s time and resources mean most people simply cannot participate fully in improving their own lives, let alone in contributing substantially to the community and world in which they live. If we consider the impact of business and organisational models built on scarcity, centricity and secrecy, we quickly see that normal people are locked out of a variety of resources, tools and knowledge with which they could better their lives. Why do we take publicly funded education, research and journalism and lock them behind paywalls and then blame people for not having the skills, knowledge or facts at their disposal? Why do we teach children to be compliant consumers rather than empowered makers? Why do we put the greatest cognitive load on our most vulnerable through social welfare systems that then beget reliance? Why do we not put value on personal confidence in the same way we value business confidence, when personal confidence indicates the capacity for individuals to contribute to their community? Why do we still equate value with quantity rather than quality, like the number of hours worked rather than what was done in those hours? If a substantial challenge of the 21st century is having enough time and cognitive load to spare, why don’t we have strategies to free up more time for more people, perhaps by working fewer hours for more return? Finally, what do we need to do systemically to empower more people to move beyond survival and into being able to thrive?
  • Substantial paradigm shifts have happened but are not being integrated into people’s thinking and processes.
    The realisation here is that even if people are motivated to understand something fundamentally new to their worldview, it doesn’t necessarily translate into how they behave. It is easier to improve something than change it. Easier to provide symptomatic relief than to cure the disease. Interestingly I often see people confuse iteration for transformation, or symptomatic relief with addressing causal factors, so perhaps there is also a need for critical and systems thinking as part of the general curriculum. This is important because symptomatic relief, whilst sometimes necessary to alleviate suffering, is an effort in chasing one’s tail and can often perpetuate the problem. For instance, providing foreign aid without mitigating displacement of local farmers’ efforts can create national dependence on further aid. Efforts to address causal factors are necessary to truly address a problem. Even if addressing the causal problem is outside your influence, you should at least ensure your symptomatic relief efforts are not built to propagate the problem. One of the other problems we face, particularly in government, is that the systems involved are largely products of centuries old thinking. If we consider some of the paradigm shifts of our times, we have moved from scarcity to surplus, centralised to distributed, from closed to openness, analog to digital and normative to formative. And yet, people still assume old paradigms in creating new policies, programs and business models. For example how many times have you heard someone talk about innovative public engagement (tapping into a distributed network of expertise) by consulting through a website (maintaining central decision making control using a centrally controlled tool)? Or “innovation” being measured (and rewarded) through patents or copyright, both scarcity based constructs developed centuries ago? “Open government” is often developed by small insular teams through habitually closed processes without any self awareness of the irony of the approach. And new policy and legislation is developed in analog formats without any substantial input from those tasked with implementation or consideration of how best to consume the operating rules of government in the systems of society. Consider also the number of times we see existing systems assumed to be correct by merit of existing, without any critical analysis. For instance, a compliance model that has no measurable impact. At what point and by what mechanisms can we weigh up the merits of the old and the new when we are continually building upon a precedent based system of decision making? If 3D printing helped provide a surplus economy by which we could help solve hunger and poverty, why wouldn’t that be weighed up against the benefits of traditional scarcity based business models?
  • That we are surrounded by new things every day and yet there is a serious lack of vision for the future
    One of the first things I try to do in any organisation is understand the vision, the strategy and what success should look like. In this way I can figure out how to best contribute meaningfully to the overarching goal, and in some cases help grow or develop the vision and strategy to be a little more ambitious. I like to measure progress and understand the baseline from which I’m trying to improve, but I also like to know what I’m aiming for. So, what could an optimistic future look like for society? For us? For you? How do you want to use the new means at our disposal to make life better for your community? Do we dare imagine a future where everyone has what they need to thrive, where we could unlock the creative and intellectual potential of our entire society, a 21st century Renaissance, rather than the vast proportion of our collective cognitive capacity going into just getting food on the table and the kids to school? Only once you can imagine where you want to be can we have a constructive discussion about where we want to be collectively, and only then can we talk constructively about the systems and structures we need to support such futures. Until then, we are all just tweaking the settings of a machine built by our ancestors. I have been surprised to find in government a lot of strategies without vision, a lot of KPIs without measures of success, and in many cases a disconnect between what a person is doing and the vision or goals of the organisation or program they are in. We talk “innovation” a lot, but in the back of people’s minds they are often imagining a better website or app, which isn’t much of a transformation. We are surrounded by dystopic visions of the distant future, and yet most government vision statements only go so far as articulating something “better” than what we have now, with “strategies” often focused on shopping lists of disconnected tactics 3-5 years into the future. The New Zealand Department of Conservation provides an inspiring contrast with a 50 year vision they work towards, from which they develop their shorter term stretch goals and strategies on a rolling basis and have an ongoing measurable approach.
  • That government is an important part of a stable society and yet is being increasingly undermined, both intentionally and unintentionally.
    The realisation here has been how important government (and democracy) is in providing a safe, stable, accountable, predictable and prosperous society whilst simultaneously observing first hand the undermining and degradation of the role of government both intentionally and unintentionally, from the outside and inside. I have chosen to work in the private sector, non-profit community sector, political sector and now public sector, specifically because I wanted to understand the “system” in which I live and how it all fits together. I believe that “government” – both the political and public sectors – has a critical part to play in designing, leading and implementing a better future. The reason I believe this is because government is one of the few mechanisms that is accountable to the people, in democratic countries at any rate. Perhaps not as much as we like and it has been slow to adapt to modern practices, tools and expectations, but governments are one of the most powerful and influential tools at our disposal and we can better use them as such. However, I posit that an internal, largely unintentional and ongoing degradation of the public sectors is underway in Australia, New Zealand, the United Kingdom and other “western democracies”, spurred initially by an ideological shift from ‘serving the public good’ to acting more like a business in the “New Public Management” policy shift of the 1980s. This was useful double speak for replacing public service values with business values and practices which ignores the fact that governments often do what is not naturally delivered by the marketplace and should not be only doing what is profitable. The political appointment of heads of departments has also resulted over time in replacing frank, fearless and evidence based leadership with politically palatable compromises throughout the senior executive layer of the public sector, which also drives necessarily secretive behaviour, else the contradictions be apparent to the ordinary person. I see the results of these internal forms of degradation almost every day. From workshops where people under budget constraints seriously consider outsourcing all government services to the private sector, to long suffering experts in the public sector unable to sway leadership with facts until expensive consultants are brought in to ask their opinion and sell the insights back to the department where it is finally taken seriously (because “industry” said it), through to serious issues where significant failures happen with blame outsourced along with the risk, design and implementation, with the details hidden behind “commercial in confidence” arrangements. The impact on the effectiveness of the public sector is obvious, but the human cost is also substantial, with public servants directly undermined, intimidated, ignored and a growing sense of hopelessness and disillusionment. There is also an intentional degradation of democracy by external (but occasionally internal) agents who benefit from the weakening and limiting of government. This is more overt in some countries than others. A tension between the regulator and those regulated is a perfectly natural thing; however, as the public sector grows weaker, the corporate interests gain the upper hand.
I have seen many people in government take a vendor’s or lobbyist’s word as gold without critical analysis of the motivations or implications, largely again due to the word of a public servant being inherently assumed to be less important than that of anyone in the private sector (or indeed anyone in the Minister’s office). This imbalance needs to be addressed if the public sector is to play an effective role. Greater accountability and transparency can help but currently there is a lack of common agreement on the broader role of government in society, both the political and public sectors. So the entire institution and the stability it can provide is under threat of death by a billion papercuts. Efforts to evolve government and democracy have largely been limited to iterations on the status quo: better consultation, better voting, better access to information, better services. But a rethink is required and the internal systemic degradations need to be addressed.

If you think the world is perfectly fine as is, then you are probably quite lucky or privileged. Congratulations. It is easy to not see the cracks in the system when your life is going smoothly, but I invite you to consider the cracks that I have found herein, to test your assumptions daily and to leave your counter examples in the comments below.

For my part, I am optimistic about the future. I believe the proliferation of a human rights based ideology, participatory democracy and access to modern technologies all act to distribute power to the people, so we have the capacity more so than ever to collectively design and create a better future for us all.

Let’s build the machine we need to thrive both individually and collectively, and not just be beautiful cogs in a broken machine.

Further reading:

Chapter 1.2: Many hands make light work, for a while

This is part of a book I am working on, hopefully due for completion by mid 2018. The original purpose of the book is to explore where we are at, where we are going, and how we can get there, in the broadest possible sense. Your comments, feedback and constructive criticism are welcome! The final text of the book will be freely available under a Creative Commons By Attribution license. A book version will be sent to nominated world leaders, to hopefully encourage the necessary questioning of the status quo and smarter decisions into the future. Additional elements like references, graphs, images and other materials will be available in the final digital and book versions, and draft content will be published weekly. Please subscribe to the blog posts via the category RSS feed and/or join the mailing list for updates.

Back to the book overview or table of contents for the full picture. Please note the pivot from focusing just on individuals to focusing on the systems we live in and the paradoxes therein.

“Differentiation of labour and interdependence of society is reliant on consistent and predictable authorities to thrive” — Durkheim

“Many hands make light work” is an old adage both familiar and comforting. One feels that if things get out of hand we can just throw more resources at the problem and it will suffice. However we have made it harder on ourselves in three distinct ways:

  • by not always recognising the importance of interdependence and the need to ensure the stability and prosperity of our community as a necessary precondition to the success of the individuals therein;
  • by increasingly making it harder for people to gain knowledge, skills and adaptability to ensure those “many hands” are able to respond to the work required and not trapped into social servitude; and
  • by often failing to recognise whether we need a linear or exponential response in whatever we are doing, feeling secure in the busy-ness of many hands.

Specialisation is when a person delves deep on a particular topic or skill. Over many millennia we have got to the point where we have developed extreme specialisation, supported through interdependence and stability, which gave us the ability to rapidly and increasingly evolve what we do and how we live. This resulted in increasingly complex social systems and structures bringing us to a point today where the pace of change has arguably outpaced our imagination. We see many people around the world clinging to traditions and romantic notions of the past whilst we hurtle at an accelerating pace into the future. Many hands have certainly made light work, but new challenges have emerged as a result and it is more critical than ever that we reimagine our world and develop our resilience and adaptability to change, because change is the only constant moving forward.

One human can survive on their own for a while. A tribe can divide up the labour quite effectively and survive over generations, creating time for culture and play. But when we established cities and states around 6000 years ago, we started a level of unprecedented division of labour and specialisation beyond mere survival. When the majority of your time, energy and resources go into simply surviving, you are largely subject to forces outside your control and unable to justify spending time on other things. But when survival is taken care of (broadly speaking) it creates time for specialisation and perfecting your craft, as well as for leisure, sport, art, philosophy and other key areas of development in society.

The era of cities itself was born on the back of an agricultural technology revolution that made food production far more efficient, creating surplus (which drove a need for record keeping and greater proliferation of written language) and prosperity, with a dramatic growth in specialisation of jobs. With greater specialisation came greater interdependence as it becomes in everyone’s best interests to play their part predictably. A simple example is a farmer needing her farming equipment to be reliable to make food, and the mechanic needs food production to be reliable for sustenance. Both rely on each other not just as customers, but to be successful and sustainable over time. Greater specialisation led to greater surplus as specialists continued to fine tune their crafts for ever greater outcomes. Over time, an increasing number of people were not simply living day to day, but were able to plan ahead and learn how to deal with major disruptions to their existence. Hunters and gatherers are completely subject to the conditions they live in, with an impact on mortality, leisure activities largely fashioned around survival, small community size and the need to move around. With surplus came spare time and the ability to take greater control over one’s existence and build a type of systemic resilience to change.

So interdependence gave us greater stability, as a natural result of enlightened self interest writ large where one’s own success is clearly aligned with the success of the community where one lives. However, where interdependence in smaller communities breeds a kind of mutual understanding and appreciation, we have arguably lost this reciprocity and connectedness in larger cities today, ironically where interdependence is strongest. When you can’t understand intuitively the role that others play in your wellbeing, then you don’t naturally appreciate them, and disconnected self interest creates a cost to the community. When community cohesion starts to decline, eventually individuals also decline, except the small percentage who can either move communities or who benefit, intentionally or not, on the back of others’ misfortune.

When you have no visibility of food production beyond the supermarket then it becomes easier to just buy the cheapest milk, eggs or bread, even if the cheapest product is unsustainable or undermining more sustainably produced goods. When you have such a specialised job that you can’t connect what you do to any greater meaning, purpose or value, then it also becomes hard to feel valuable to society, or valued by others. We see this increasingly in highly specialised organisations like large companies, public sector agencies and cities, where the individual feels the dual pressure of being anything and nothing all at once.

Modern society has made it somewhat less intuitive to value others who contribute to your survival because survival is taken for granted for many, and competing in one’s own specialisation has been extended to competing in everything without appreciation of the interdependence required for one to prosper. Competition is seen to be the opposite of cooperation, whereas a healthy sustainable society is both cooperative and competitive. One can cooperate on common goals and compete on divergent goals, thus making best use of time and resources where interests align. Cooperative models seem to continually emerge in spite of economic models that assume simplistic punishment and incentive based behaviours. We see various forms of “commons” where people pool their resources in anything from community gardens and ‘share economies’ to software development and science, because cooperation is part of who we are and what makes us such a successful species.

Increasing specialisation also created greater surplus and wealth, generating increasingly divergent and insular social classes with different levels of power and people becoming less connected to each other and with wealth overwhelmingly going to the few. This pressure between the benefits and issues of highly structured societies and which groups benefit has ebbed and flowed throughout our history but, generally speaking, when the benefits to the majority outweigh the issues for that majority, then you have stability. With stability a lot can be overlooked, including at times gross abuses for a minority or the disempowered. However, if the balance tips too far the other way, then you get revolutions, secessions, political movements and myriad counter movements. Unfortunately many counter movements limit themselves to replacing people rather than the structures that created the issues. However, several of these counter movements established some critical ideas that underpin modern society.

Before we explore the rise of individualism through independence and suffrage movements (chapter 1.3), it is worth briefly touching upon the fact that specialisation and interdependence, which are critical for modern societies, both rely upon the ability for people to share, to learn, and to ensure that the increasingly diverse skills are able to evolve as the society evolves. Many hands only make light work when they know what they are doing. Historically the leaps in technology, techniques and specialisation have been shared for others to build upon and continue to improve as we see in writings, trade, oral traditions and rituals throughout history. Gatekeepers naturally emerged to control access to or interpretations of knowledge through priests, academics, the ruling class or business class. Where gatekeepers grew too oppressive, communities would subdivide to rebalance the power differential, such as various Protestant groups, union movements and the more recent Open Source movements. In any case, access wasn’t just about power of gatekeepers. The costs of publishing and distribution grew as societies grew, creating a call from the business class for “intellectual property” controls as financial mechanisms to offset these costs. The argument ran that because of the huge costs of production, business people needed to be incentivised to publish and distribute knowledge, though arguably we have always done so as a matter of survival and growth.

With the Internet suddenly came the possibility for massively distributed and free access to knowledge, where the cost of publishing, distribution and even the capability development required to understand and apply such knowledge was suddenly negligible. We created a universal, free and instant way to share knowledge, creating the opportunity for a compounding effect on our historic capacity for cumulative learning. This is worth taking a moment to consider. The technology simultaneously created an opportunity for compounding our cumulative learning whilst rendering the reasons for IP protections negligible (lowered costs of production and distribution), and yet we have seen a dramatic increase in knowledge protectionism. Isn’t it to our collective benefit to have a well educated community that can continue our trajectory of diversification and specialisation for the benefit of everyone? Anyone can get access to myriad forms of consumer entertainment but our most valuable knowledge assets are fiercely protected against general and free access, dampening our ability to learn and evolve. The increasing gap between the haves and have nots is surely symptomatic of the broader increasing gap between the empowered and disempowered, the makers and the consumers, those with knowledge and those without. Consumers are shaped by the tools and goods they have access to, and limited by their wealth and status. But makers can create the tools and goods they need, and can redefine wealth and status with a more active and able hand in shaping their own lives.

As a result of our specialisation, our interdependence and our cooperative/competitive systems, we have created greater complexity in society over time, usually accompanied with the ability to respond to greater complexity. The problem is that a lot of our solutions to change have been linear responses to an exponential problem space. The assumption that more hands will continue to make light work often ignores the need for sharing skills and knowledge, and certainly ignores where a genuinely transformative response is required. A small fire might be managed with buckets, but at some point of growth, adding more buckets becomes insufficient and new methods are required. Necessity breeds innovation and yet when did you last see real innovation that didn’t boil down to simply more or larger buckets? Iteration is rarely a form of transformation, so it is important to always clearly understand the type of problem you are dealing with and whether the planned response needs to be linear or exponential. If the former, more buckets is probably fine. If the latter, every bucket is just a distraction from developing the necessary response.

Next chapter I’ll examine how the independence movements created the philosophical pre-condition for democracy, the Internet and the dramatic paradigm shifts to follow.

January 02, 2018

Premier Open Source Database Conference Call for Papers closing January 12 2018

The call for papers for Percona Live Santa Clara 2018 was extended till January 12 2018. This means you still have time to get a submission in.

Topics of interest: MySQL, MongoDB, PostgreSQL & other open source databases. Don’t forget all the upcoming databases too (there’s a long list at db-engines).

I think to be fair, in the catch all “other”, we should also be thinking a lot about things like containerisation (Docker), Kubernetes, Mesosphere, the cloud (Amazon AWS RDS, Microsoft Azure, Google Cloud SQL, etc.), analytics (ClickHouse, MariaDB ColumnStore), and a lot more. Basically anything that would benefit an audience of database geeks who are looking at it from all aspects.

That’s not to say case studies shouldn’t be considered. People always love to hear about stories from the trenches. This is your chance to talk about just that.

Resolving a Partitioned RabbitMQ Cluster with JuJu

On occasion, a RabbitMQ cluster may partition itself. In an OpenStack environment this can often first present itself as nova-compute services stopping with errors such as these:

ERROR nova.openstack.common.periodic_task [-] Error during ComputeManager._sync_power_states: Timed out waiting for a reply to message ID 8fc8ea15c5d445f983fba98664b53d0c
TRACE nova.openstack.common.periodic_task self._raise_timeout_exception(msg_id)
TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/", line 218, in _raise_timeout_exception
TRACE nova.openstack.common.periodic_task 'Timed out waiting for a reply to message ID %s' % msg_id)
TRACE nova.openstack.common.periodic_task MessagingTimeout: Timed out waiting for a reply to message ID 8fc8ea15c5d445f983fba98664b53d0c

Merely restarting the stopped nova-compute services will not resolve this issue.

You may also find that querying the rabbitmq service may either not return or take an awfully long time to return:

$ sudo rabbitmqctl -p openstack list_queues name messages consumers status

...and in an environment managed by juju, you could also see JuJu trying to correct RabbitMQ but failing:

$ juju stat --format tabular | grep rabbit
rabbitmq-server                       false local:trusty/rabbitmq-server-128
rabbitmq-server/0           idle 0/lxc/12 5672/tcp
rabbitmq-server/1   error   idle 1/lxc/8  5672/tcp   hook failed: "config-changed"
rabbitmq-server/2   error   idle 2/lxc/10 5672/tcp   hook failed: "config-changed"

You should now run rabbitmqctl cluster_status on each of your rabbit instances and review the output. If the cluster is partitioned, you will see something like the below:

ubuntu@my_juju_lxc:~$ sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@192-168-7-148' ...

You can clearly see from the above that there are two partitions for RabbitMQ. We need to now identify which of these is considered the leader:

maas-my_cloud:~$ juju run --service rabbitmq-server "is-leader"
- MachineId: 0/lxc/12
  Stderr: |
  Stdout: |
    True
  UnitId: rabbitmq-server/0
- MachineId: 1/lxc/8
  Stderr: |
  Stdout: |
    False
  UnitId: rabbitmq-server/1
- MachineId: 2/lxc/10
  Stderr: |
  Stdout: |
    False
  UnitId: rabbitmq-server/2

As you see above, in this example machine 0/lxc/12 is the leader, via its status of "True". Now we need to hit the other two servers and shut down RabbitMQ:

# service rabbitmq-server stop

Once both services have completed shutting down, we can resolve the partitioning by running:

$ juju resolved -r rabbitmq-server/<whichever is leader>

Substituting <whichever is leader> for the unit number of the leader identified earlier (rabbitmq-server/0 in this example).

Once that has completed, you can start the previously stopped services with the below on each host:

# service rabbitmq-server start

and verify the result with:

$ sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@192-168-7-148' ...

No partitions \o/

The JuJu errors for RabbitMQ should clear within a few minutes:

$ juju stat --format tabular | grep rabbit
rabbitmq-server                       false local:trusty/rabbitmq-server-128
rabbitmq-server/0             idle 0/lxc/12 5672/tcp 19
rabbitmq-server/1   unknown   idle 1/lxc/8  5672/tcp 19
rabbitmq-server/2   unknown   idle 2/lxc/10 5672/tcp

You should also find the nova-compute instances starting up fine.
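
If any nova-compute services are still stopped at this point, restarting them on the affected compute hosts should now succeed. A sketch of that step, assuming the same sysvinit-style service management as the rabbitmq-server commands above:

# service nova-compute restart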

December 27, 2017

First Look at Snaps

I've belatedly come to have a close-up look at Ubuntu Core (Snappy), Snaps and the Snappy package manager.

The first pass was to rebuild my rack of Raspberry Pis from Debian armhf to Ubuntu Core for the Raspberry Pi.


This proved to be the most graceful install I've ever had on any hardware, ever. No hyperbole: boot, authenticate, done. I repeated this for all six Pis in such a short time frame that I was concerned I'd done something wrong. Your SSH keys are already installed, you can log in immediately and just get on with it.

Which is where snaps come into play.

Back on my laptop, I followed the tutorial Create Your First Snap which uses GNU Hello as an example snap build and finishes with a push to the snap store at

I then created a Launchpad Repo, related a snap package, told it to build for armhf and amd64 and before long, I could install this snap on both my laptop and the Pi's.

Overall this was a pretty impressive and graceful process.

December 20, 2017

Designing Shared Cars

Almost 10 years ago I blogged about car sharing companies in Melbourne [1]. Since that time the use of such services appears to have slowly grown (judging by the slow growth in the reserved parking spots for such cars). This isn’t the sudden growth that public transport advocates and the operators of those companies hoped for, but it is still positive. I have just watched the documentary The Human Scale [2] (which I highly recommend) about the way that cities are designed for cars rather than for people.

I think that it is necessary to make cities more suited to the needs of people and that car share and car hire companies are an important part of converting from a car based city to a human based city. As this sort of change happens the share cars will be an increasing portion of the new car sales and car companies will have to design cars to better suit shared use.

Personalising Cars

Luxury car brands like Mercedes support storing the preferred seat position for each driver; once the basic step of maintaining separate driver profiles is done it’s an easy second step to have them accessed over the Internet and also store settings like preferred radio stations, Bluetooth connection profiles, etc. For a car share company it wouldn’t be particularly difficult to extrapolate settings based on previous use, e.g. knowing that I’m tall and using the default settings for a tall person every time I get in a shared car that I haven’t driven before. Having Bluetooth connections follow the user would mean having one slave address per customer instead of the current practice of one per car; the addressing is 48bit so this shouldn’t be a problem.

Most people accumulate many items in their car, some they don’t need, but many are needed. Some of the things in my car are change for parking meters, sunscreen, tools, and tissues. Car share companies have deals with councils for reserved parking spaces so it wouldn’t be difficult for them to have a deal for paying for parking and billing the driver thus removing the need for change (and the risk of a car window being smashed by some desperate person who wants to steal a few dollars). Sunscreen is a common enough item in Australia that a car share company might just provide it as a perk of using a shared car.

Most people have items like tools, a water bottle, and spare clothes that can’t be shared which tend to end up distributed in various storage locations. The solution to this might be to have a fixed size storage area, maybe based on some common storage item like a milk crate. Then everyone who is a frequent user of shared cars could buy a container designed to fit that space which is divided in a similar manner to a Bento box to contain whatever they need to carry.

There is a lot of research into having computers observing the operation of a car and warning the driver or even automatically applying the brakes to avoid a crash. For shared cars this is more important as drivers won’t necessarily have a feel for the car and can’t be expected to drive as well.

Car Sizes

Generally cars are designed to have 2 people (sports car, Smart car, van/ute/light-truck), 4/5 people (most cars), or 6-8 people (people movers). These configurations are based on what most people are able to use all the time. Most car travel involves only one adult. Most journeys appear to have no passengers or only children being driven around by a single adult.

Cars are designed for what people can drive all the time rather than what would best suit their needs most of the time. Almost no-one is going to buy a personal car that can only take one person even though most people who drive will be on their own for most journeys. Most people will occasionally need to take passengers and that occasional need will outweigh the additional costs in buying and fueling a car with the extra passenger space.

I expect that when car share companies get a larger market they will have several vehicles in the same location to allow users to choose which to drive. If such a choice is available then I think that many people would sometimes choose a vehicle with no space for passengers but extra space for cargo and/or being smaller and easier to park.

For the common case of one adult driving small children the front passenger seat can’t be used due to the risk of airbags killing small kids. A car with storage space instead of a front passenger seat would be more useful in that situation.

Some of these possible design choices can also be after-market modifications. I know someone who removed the rear row of seats from a people-mover to store the equipment for his work. That gave a vehicle with plenty of space for his equipment while also having a row of seats for his kids. If he was using shared vehicles he might have chosen to use either a vehicle well suited to cargo (a small van or ute) or a regular car for transporting his kids. It could be that there’s an untapped demand for ~4 people in a car along with cargo so a car share company could remove the back row of seats from people movers to cater to that.

December 18, 2017

Percona Live Santa Clara 2018 CFP

Percona Live Santa Clara 2018 call for papers ends fairly soon — December 22 2017. It may be extended, but I suggest getting a topic in ASAP so the conference committee can view everything fairly and quickly. Remember this conference is bigger than just MySQL, so please submit topics on MongoDB, other databases like PostgreSQL, time series, etc., and of course MySQL.

What are you waiting for? Submit TODAY!
(It goes without saying that speakers get a free pass to attend the event.)

December 15, 2017

Celebration Time!

Here at OpenSTEM we have a saying “we have a resource on that” and we have yet to be caught out on that one! It is a festive time of year and if you’re looking for resources reflecting that theme, then here are some suggestions: Celebrations in Australia – a resource covering the occasions we […]

December 14, 2017

Huawei Mate9

Warranty Etc

I recently got a Huawei Mate 9 phone. My previous phone was a Nexus 6P that died shortly before its one year warranty ran out. As there have apparently been many Nexus 6P phones dying, there are no stocks of replacements, so Kogan (the company I bought the phone from) offered me a choice of 4 phones in the same price range as a replacement.

Previously I had chosen to avoid the extended warranty offerings, based on the idea that after more than a year the phone won’t be worth much and therefore getting it replaced under warranty isn’t as much of a benefit. But now that getting a phone replaced with a newer and more powerful model seems a likely outcome, there are real benefits in a longer warranty. I chose not to pay for an “extended warranty” on my Nexus 6P because getting a new Nexus 6P now isn’t such a desirable outcome, but when getting a new Mate 9 is a possibility it seems more of a benefit to get the “extended warranty”. OTOH Kogan wasn’t offering more than 2 years of “warranty” when I recently bought a phone for a relative, so maybe they lost a lot of money on replacements for the Nexus 6P.


I chose the Mate 9 primarily because it has a large screen. Its 5.9″ display is only slightly larger than the 5.7″ displays in the Nexus 6P and the Samsung Galaxy Note 3 (my previous phone). But it is large enough to force me to change my phone use habits.

I previously wrote about matching phone size to the user’s hand size [1]. When writing that I had the theory that a Note 2 might be too large for me to use one-handed. But when I owned those phones I found that the Note 2 and Note 3 were both quite usable in one-handed mode. But the Mate 9 is just too big for that. To deal with this I now use the top corners of my phone screen for icons that I don’t tend to use one-handed, such as Facebook. I chose this phone knowing that this would be an issue because I’ve been spending more time reading web pages on my phone and I need to see more text on screen.

Adjusting my phone usage to the unusually large screen hasn’t been a problem for me. But I expect that many people will find this phone too large. I don’t think there are many people who buy jeans to fit a large phone in the pocket [2].

A widely touted feature of the Mate 9 is the Leica lens which apparently gives it really good quality photos. I haven’t noticed problems with my photos on my previous two phones and it seems likely that phone cameras have in most situations exceeded my requirements for photos (I’m not a very demanding user). One thing that I miss is the slow-motion video that the Nexus 6P supports. I guess I’ll have to make sure my wife is around when I need to make slow motion video.

My wife’s Nexus 6P is well out of warranty. Her phone was the original Nexus 6P I had. When her previous phone died I had a problem with my phone that needed a factory reset. As it’s easier to duplicate the configuration to a new phone than to restore it after a factory reset (as an aside, I believe Apple does this better), I copied my configuration to the new phone and then wiped it for my wife to use.

One noteworthy but mostly insignificant feature of the Mate 9 is that it comes with a phone case. The case is hard plastic and cracked when I unsuccessfully tried to remove it, so it seems to effectively be a single-use item. But it is good to have one in the box so that you don’t have to use the phone without a case on the first day; this is something almost every other phone manufacturer misses. Admittedly there is the option of ordering a case at the same time as a phone, and the bundled case isn’t very good.

I regard my Mate 9 as fairly unattractive. Maybe if I had a choice of color I would have been happier, but it still wouldn’t have looked like EVE from Wall-E (unlike the Nexus 6P).

The Mate 9 has a resolution of 1920*1080, while the Nexus 6P (and many other modern phones) has a resolution of 2560*1440. I don’t think that’s a big deal; the pixels are small enough that I can’t see them. I don’t really need my phone to have the same resolution as the 27″ monitor on my desktop.

The Mate 9 has 4G of RAM and apps seem significantly less likely to be killed than on the Nexus 6P with 3G. I can now switch between memory hungry apps like Pokemon Go and Facebook without having one of them killed by the OS.


The OS support from Huawei isn’t nearly as good as on a Nexus device. Mine is running Android 7.0 and has a security patch level of the 5th of June 2017. My wife’s Nexus 6P today got an update from Android 8.0 to 8.1, which I believe has the fixes for KRACK and Blueborne among others.

Kogan is currently selling the Pixel XL with 128G of storage for $829, if I was buying a phone now that’s probably what I would buy. It’s a pity that none of the companies that have manufactured Nexus devices seem to have learned how to support devices sold under their own name as well.


Generally this is a decent phone. As a replacement for a failed Nexus 6P it’s pretty good. But at this time I tend to recommend not buying it as the first generation of Pixel phones are now cheap enough to compete. If the Pixel XL is out of your price range then instead of saving $130 for a less secure phone it would be better to save $400 and choose one of the many cheaper phones on offer.

Remember when Linux users used to mock Windows for poor security? Now it seems that most Android devices are facing the security problems that Windows used to face and the iPhone and Pixel are going to take the role of the secure phone.

December 13, 2017

Thinkpad X301

Another Broken Thinkpad

A few months ago I wrote a post about “Observing Reliability” [1] regarding my Thinkpad T420. I noted that the T420 had been running for almost 4 years which was a good run, and therefore the failed DVD drive didn’t convince me that Thinkpads have quality problems.

Since that time the plastic on the lid by the left hinge has broken, and every time I open or close the lid it breaks a bit more. That prevents use of that Thinkpad by anyone who wants to use it as a serious laptop, as it can’t be expected to last long if opened and closed several times a day. It probably wouldn’t be difficult to fix the lid, but for an old laptop it doesn’t seem worth the effort and/or money. So my plan now is to give the Thinkpad to someone who wants a compact desktop system with a built-in UPS; a friend in Vietnam can probably find a worthy recipient.

My Thinkpad History

I bought the Thinkpad T420 in October 2013 [2], it lasted about 4 years and 2 months. It cost $306.

I bought my Thinkpad T61 in February 2010 [3], it lasted about 3 years and 8 months. It cost $796 [4].

Prior to the T61 I had a T41p that I received well before 2006 (maybe 2003) [5]. So the T41p lasted close to 7 years, as it was originally bought for me by a multinational corporation I’m sure it cost a lot of money. By the time I bought the T61 it had display problems, cooling problems, and compatibility issues with recent Linux distributions.

Before the T41p I had 3 Thinkpads in 5 years, all of which had the type of price that only made sense in the dot-com boom.

In terms of absolute lifetime the Thinkpad T420 did ok. In terms of cost per month it did very well, only $6 per month. The T61 was $18 per month, and while the T41p lasted a long time it probably cost over $2000, giving it a cost of over $20 per month. $20 per month is still good value; I definitely get a lot more than $20 per month of benefit from having a laptop. While it’s nice that my most recent laptop could be said to have saved me $12 per month over the previous one, it doesn’t make much difference to my financial situation.

Thinkpad X301

My latest Thinkpad is an X301 that I found on an e-waste pile; it had a broken DVD drive, which is presumably the reason why someone decided to throw it out. It has the same power connector as my previous 2 Thinkpads, which was convenient as I didn’t find a PSU with it. I saw a review of the X301 dated 2008, which probably means it was new in 2009, but it has no obvious signs of wear so probably hasn’t been used much.

My X301 has a 1440*900 screen which isn’t as good as the T420 resolution of 1600*900. But a lower resolution is an expected trade-off for a smaller laptop. The X301 comes with a 64G SSD which is a significant limitation.

I previously wrote about a “cloud lifestyle” [6]. I hadn’t implemented all the ideas from that post due to distractions and a lack of time. But now that I’ll have a primary PC with only 64G of storage I have more incentive to do that. The 100G disk in the T61 was a minor limitation at the time I got it, but since then everything has become bigger and 64G is going to be a big problem. The fact that it’s an unusual 1.8″ form factor means that I can’t cheaply upgrade it or use the SSD that I’ve used in the Thinkpad T420.

My current Desktop PC is an i7-2600 system which builds the SE Linux policy packages for Debian (the thing I compile most frequently) in about 2 minutes, with about 5 minutes of CPU time used. The same compilation on the X301 takes just over 6.5 minutes, with almost 9 minutes of CPU time used. The i5 CPU in the Thinkpad T420 was somewhere between those times. While I can wait 6.5 minutes for a compile to test something, it is an annoyance. So I’ll probably use one of the i7 or i5 class servers I run to do builds.

On the T420 I had chroot environments running with systemd-nspawn for the last few releases of Debian in both AMD64 and i386 variants. Now I have to use a server somewhere for that.

I stored many TV shows, TED talks, and movies on the T420. Probably part of the problem with the hinge was due to adjusting the screen while watching TV in bed. Now I have a phone with 64G of storage and a tablet with 32G so I will use those for playing videos.

I’ve started to increase my use of Git recently. There are many programs I maintain that really should have been under version control years ago. Now the desire to develop them on multiple systems gives me an incentive to do this.

Comparing to a Phone

My latest phone is a Huawei Mate 9 (I’ll blog about that shortly) which has a 1920*1080 screen and 64G of storage. So it has a higher resolution screen than my latest Thinkpad as well as equal storage. My phone has 4G of RAM while the Thinkpad only has 2G (I plan to add RAM soon).

I don’t know of a good way of comparing CPU power of phones and laptops (please comment if you have suggestions about this). The issues of GPU integration etc will make this complex. But I’m sure that the octa-core CPU in my phone doesn’t look too bad when compared to the dual-core CPU in my Thinkpad.


The X301 isn’t a laptop I would choose to buy today. Since using it I’ve appreciated how small and light it is, so I would definitely consider a recent X series. But being free, its value for money is NaN, which makes it more attractive. Maybe I won’t try to get 4+ years of use out of it; in 2 years time I might buy something newer and better in a similar form factor.

I can just occasionally poll an auction site and bid if there’s anything particularly tempting. If I was going to buy a new laptop now before the old one becomes totally unusable I would be rushed and wouldn’t get the best deal (particularly given that it’s almost Christmas).

Who knows, I might even find something newer and better on an e-waste pile. It’s amazing the type of stuff that gets thrown out nowadays.

December 11, 2017

Using all of the 5 GHz WiFi frequencies in a Gargoyle Router

WiFi in the 2.4 GHz range is usually fairly congested in urban environments. The 5 GHz band used to be better, but an increasing number of routers now support it and so it has become fairly busy as well. It turns out that there are a number of channels on that band that nobody appears to be using despite being legal in my region.

Why are the middle channels unused?

I'm not entirely sure why these channels are completely empty in my area, but I would speculate that access point manufacturers don't want to deal with the extra complexity of the middle channels. Indeed these channels are not entirely unlicensed. They are also used by weather radars, for example. If you look at the regulatory rules that ship with your OS:

$ iw reg get
country CA: DFS-FCC
    (2402 - 2472 @ 40), (N/A, 30), (N/A)
    (5170 - 5250 @ 80), (N/A, 17), (N/A), AUTO-BW
    (5250 - 5330 @ 80), (N/A, 24), (0 ms), DFS, AUTO-BW
    (5490 - 5600 @ 80), (N/A, 24), (0 ms), DFS
    (5650 - 5730 @ 80), (N/A, 24), (0 ms), DFS
    (5735 - 5835 @ 80), (N/A, 30), (N/A)

you will see that these channels are flagged with "DFS". That stands for Dynamic Frequency Selection and it means that WiFi equipment needs to be able to detect when the frequency is used by radars (by detecting their pulses) and automatically switch to a different channel for a few minutes.

So an access point needs extra hardware and extra code to avoid interfering with priority users. Additionally, different channels have different bandwidth limits so that's something else to consider if you want to use 40/80 MHz at once.
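You can check what your own WiFi hardware reports for each channel, including the radar-detection requirement, by looking at the output of:

$ iw list

In the "Frequencies:" section of the output, the DFS channels are the ones marked with "(radar detection)".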

The first time I tried setting my access point channel to one of the middle 5 GHz channels, the SSID wouldn't show up in scans and the channel was still empty in WiFi Analyzer.

I tried changing the channel again, but this time, I ssh'd into my router and looked at the error messages using this command:

logread -f

I found a number of errors claiming that these channels were not authorized for the "world" regulatory authority.

Because Gargoyle is based on OpenWRT, there are a lot more wireless configuration options available than what's exposed in the Web UI.

In this case, the solution was to explicitly set my country in the wireless options by putting:

option country 'CA'

(where CA is the country code where the router is physically located) in the 5 GHz radio section of /etc/config/wireless on the router.
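For reference, here is roughly what the resulting radio section can look like (a sketch only; the device name, channel and other options will vary from router to router):

config wifi-device 'radio0'
    option type 'mac80211'
    option hwmode '11a'
    option channel '100'
    option country 'CA'

Channel 100 is one of the middle channels, sitting in the 5490-5600 MHz DFS range shown in the regulatory listing above.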

Then I rebooted and I was able to set the channel successfully via the Web UI.

If you are interested, there is a lot more information about how all of this works in the kernel documentation for the wireless stack.

December 08, 2017

Happy Holidays, Queensland!

It’s finally holidays in Queensland! Yay! Congratulations to everyone for a wonderful year and lots of hard work! Hope you all enjoy a well-earned rest! Most other states and territories have only a week to go, but the holiday spirit is in the air. Should you be looking for help with resources, rest assured that […]

December 05, 2017

A Tale of Two Conferences: ISC and TERATEC 2017

This year the International Supercomputing Conference and TERATEC were held in close proximity, the former in Frankfurt from June 17-21 and the latter in Paris from June 27-28. Whilst the two conferences differ greatly in scope (one international, one national) and language (one Anglophone, the other Francophone), the dominance of Linux as the operating system of choice at both was overwhelming.


December 03, 2017

How Inlets Generate Thrust on Supersonic aircraft

Some time ago I read Skunk Works, a very good “engineering” read.

In the section on the SR-71, the author Ben Rich made a statement that has puzzled me ever since, something like: “Most of the engine’s thrust is developed by the intake”. I didn’t get it – surely an intake is a source of drag rather than thrust? I have since read the same statement about the Concorde and its inlets.

Lately I’ve been watching a lot of AgentJayZ Gas Turbine videos. This guy services gas turbines for a living and is kind enough to present a lot of intricate detail and answer questions from people. I find his presentation style and personality really engaging, and get a buzz out of his enthusiasm, love for his work, and willingness to share all sorts of geeky, intricate details.

So inspired by AgentJayZ I did some furious Googling and finally worked out why supersonic planes develop thrust from their inlets. I don’t feel it’s well explained elsewhere so here is my attempt:

  1. Gas turbine jet engines only work if the air is moving into the compressor at subsonic speeds. So the job of the inlet is to slow the air down from say Mach 2 to Mach 0.5.
  2. When you slow down a stream of air, the pressure increases. Like when you feel the wind pushing on your face on a bike. Imagine (don’t try) the pressure on your arm hanging out of a car window at 100 km/hr. Now imagine the pressure at 3000 km/hr. Lots. Around a 40 times increase for the inlets used in supersonic aircraft (see the rough calculation just after this list).
  3. So now we have this big box (the inlet chamber) full of high pressure air. Like a balloon this pressure is pushing equally on all sides of the box. Net thrust is zero.
  4. If we untie the balloon neck, the air can escape, and the balloon shoots off in the opposite direction.
  5. Back to the inlet on the supersonic aircraft. It has a big vacuum cleaner at the back – the compressor inlet of the gas turbine. It is sucking air out of the inlet as fast as it can. So – the air can get out, just like the balloon, and the inlet and the aircraft attached to it is thrust in the opposite direction. That’s how an inlet generates thrust.
  6. While there is also thrust from the gas turbine and its afterburner, it turns out that pressure release in the inlet contributes the majority of the thrust. I don’t know why it’s the majority. Guess I need to do some more reading and get my gas equations on.
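As a rough check on the pressure claim in point 2 above: assuming isentropic compression of air (gamma = 1.4), the stagnation pressure ratio is p0/p = (1 + (gamma - 1)/2 * M^2)^(gamma/(gamma - 1)). At Mach 3 that gives (1 + 0.2 * 9)^3.5, or about 37, in line with the “around 40 times” figure (a real inlet loses a little of this across its shock system).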

Another important point – the aircraft really does experience that extra thrust from the inlet – e.g. it’s transmitted to the aircraft by the engine mounts on the inlet, and the mounts must be designed with those loads in mind. This helps me understand the definition of “thrust from the inlet”.

December 01, 2017

My Canadian adventure exploring FWD50

I recently went to Ottawa for the FWD50 conference run by Rebecca and Alistair Croll. It was my first time in Canada, and it combined a number of my favourite things: an incredible conference with a visionary and enthusiastic crowd drawn from government (international, Federal, Provincial and Municipal), technologists, civil society, industry and academia, where the calibre of discussions and planning for greatness was inspiring.

There were a number of people I have known for years but never met in meatspace, and equally there were a lot of new faces doing amazing things. I got to spend time with the excellent people at the Treasury Board of Canada Secretariat, including the Canadian Digital Service and the Office of the CIO, and by wonderful coincidence I got to see (briefly) the folk from the Open Government Partnership who happened to be in town. Finally I got to visit the gorgeous Canadian Parliament, see their extraordinary library, and wander past some Parliamentary activity, which always helps me feel more connected to (and therefore empowered to contribute to) democracy in action.

Thank you to Alistair Croll who invited me to keynote this excellent event and who, with Rebecca Croll, managed to create a truly excellent event with a diverse range of ideas and voices exploring where we could or should go as a society in future. I hope it is a catalyst for great things to come in Canada and beyond.

For those in Canada who are interested in the work in New Zealand, I strongly encourage you to tune into the D5 event in February, which will have some of our best initiatives on display, and to follow our new Minister for Broadband, Digital and Open Government (such an incredible combination in a single portfolio), Minister Clare Curran. You can tune in to our “Service Innovation” work at our blog or by subscribing to our mailing list. I also encourage you to read the inspiring “People’s Agenda” by a civil society organisation in NZ, which codesigned a vision for the future type of society desired in New Zealand.


  • One of the great delights of this trip was seeing a number of people in person for the first time who I know from the early “Gov 2.0” days (10 years ago!). It was particularly great to see Thom Kearney from Canada’s TBS and his team, Alex Howard (@digiphile) who is now a thought leader at the Sunlight Foundation, and Olivia Neal (@livneal) from the UK CTO office/GDS, Joe Powell from OGP, as well as a few friends from Linux and Open Source (Matt and Danielle amongst others).
  • The speech by the Canadian Minister for the Treasury Board Secretariat (which is responsible for digital government), the Hon Scott Brison, was quite interesting and I had the chance to briefly chat to him and his advisor at the speakers’ drinks afterwards about the challenges of changing government.
  • Meeting with Canadian public servants from a variety of departments including the transport department, innovation and science, as well as the Treasury Board Secretariat and of course the newly formed Canadian Digital Service.
  • Meeting people from a range of sub-national governments including the excellent folk from Peel, Hillary Hartley from Ontario, and hearing about the quite inspiring work to transform organisational structures, digital and other services, the adoption of microservice-based infrastructure, and the use of “labs” for experimentation.
  • It was fun meeting some CIO/CTOs from Canada, Estonia, UK and other jurisdictions, and sharing ideas about where to from here. I was particularly impressed with Alex Benay (Canadian CIO) who is doing great things, and with Siim Sikkut (Estonian CIO) who was taking the digitisation of Estonia into a new stage of being a broader enabler for Estonians and for the world. I shared with them some of my personal lessons learned around digital iteration vs transformation, including from the DTO in Australia (which has changed substantially, including a name change since I was there). Some notes of my lessons learned are at
  • My final highlight was how well my keynote and other talks were received. People were really inspired to think big picture and I hope it was useful in driving some of those conversations about where we want to collectively go and how we can better collaborate across geopolitical lines.

Below are some photos from the trip, and some observations from specific events/meetings.

My FWD50 Keynote – the Tipping Point

I was invited to give a keynote at FWD50 about the tipping point we have gone through and how we, as a species, need to embrace the major paradigm shifts that have already happened, and decide what sort of future we want and work towards that. I also suggested some predictions about the future and examined the potential roles of governments (and public sectors specifically) in the 21st century. The slides are at and the full speech is on my personal blog at

I also gave a similar keynote speech at the NerHui conference in New Zealand the week after which was recorded for those who want to see or hear the content at

The Canadian Digital Service

Was only set up about a year ago and has a focus on building great services for users, with service design and user needs at the heart of their work. They have some excellent people with diverse skills and we spoke about what is needed to do “digital government” and what that even means, and the parallels and interdependencies between open government and digital government. They spoke about an early piece of work they did before getting set up: a national consultation about the needs of Canadians, which had some interesting insights. They were very focused on open source, standards, building better ways to collaborate across government(s), and building useful things. They also spoke about their initial work around capability assessment and development across the public sector. I spoke about my experience in Australia and New Zealand, but also in working and talking to teams around the world. I gave an informal outline about the work of our Service Innovation and Service Integration team in DIA, which was helpful to get some feedback and peer review, and they were very supportive and positive. It was an excellent discussion, thank you all!

CivicTech meetup

I was invited to talk to the CivicTech group meetup in Ottawa about the roles of government and citizens into the future. I gave a quick version of the keynote I gave at linux.conf.au 2017, which explores paradigm shifts and the roles of civic hackers and activists in helping forge the future whilst also considering what we should (and shouldn’t) take into the future with us. It included my amusing change.log of the history of humans and threw down the gauntlet for civic hackers to lead the way, be the light :)

CDS Halloween Mixer

The Canadian Digital Service does a “mixer” social event every 6 weeks, and this one landed on Halloween, which was also my first ever Halloween celebration. I had a traditional “beavertail”, which was a flat cinnamon doughnut with lemon – amazing! It was fun to hang out, but of course I had to retire early due to jet lag.

Workshop with Alistair

The first day of FWD50 I helped Alistair Croll with a day long workshop exploring the future. We thought we’d have a small interactive group and ended up getting 300, so it was a great mind meld across different ideas, sectors, technologies, challenges and opportunities. I gave a talk on culture change in government, largely influenced by a talk I gave a few years ago called “Collaborative innovation in the public service: Game of Thrones style”. People responded well and it created a lot of discussions about the cultural challenges and barriers in government.


Finally, just a quick shout out and thanks to Alistair for inviting me to such an amazing conference, to Rebecca for getting me organised, to Danielle and Matthew for your companionship and support, to everyone for making me feel so welcome, and to the following folk who inspired, amazed and colluded with me, in chronological order of meeting: Sean Boots, Stéphane Tourangeau, Ryan Androsoff, Mike Williamson, Lena Trudeau, Alex Benay (Canadian Gov CIO), Thom Kearney and all the TBS folk, Siim Sikkut from Estonia, James Steward from UK, and all the other folk I met at FWD50, in between feeling so extremely unwell!

Thank you Canada, I had a magnificent time and am feeling inspired!

This Week in HASS – term 4, week 9

Well, we’re almost at the end of the year!! It’s a time when students and teachers alike start to look forward to the long, summer break. Generally a time for celebrations and looking back over the highlights of the year – which is reflected in the activities for the final lessons of the Understanding Our […]

November 29, 2017

Proxy ACME challenges to a single machine

The Libravatar mirrors are setup using DNS round-robin which makes it a little challenging to automatically provision Let's Encrypt certificates.

In order to be able to use Certbot's webroot plugin, I need to be able to simultaneously host a randomly-named file in the webroot of each mirror. The reason is that the verifier will connect to the hostname being validated, but there's no way to know which of the DNS entries it will hit. I could copy the file over to all of the mirrors, but that would be annoying since some of the mirrors are run by volunteers and I don't have direct access to them.

Thankfully, Scott Helme has shared his elegant solution: proxy the .well-known/acme-challenge/ directory from all of the mirrors to a single validation host. Here's the exact configuration I ended up with.

DNS Configuration

In order to serve the certbot validation files separately from the main service, I created a new hostname, acme.libravatar.org, pointing to the main Libravatar server:

acme CNAME libravatar.org.

Mirror Configuration

On each mirror, I created a new Apache vhost on port 80 to proxy the acme challenge files by putting the following in the existing port 443 vhost config (/etc/apache2/sites-available/libravatar-seccdn.conf):

<VirtualHost *:80>
    ServerAdmin __WEBMASTEREMAIL__

    ProxyPass /.well-known/acme-challenge/ http://acme.libravatar.org/.well-known/acme-challenge/
    ProxyPassReverse /.well-known/acme-challenge/ http://acme.libravatar.org/.well-known/acme-challenge/
</VirtualHost>

Then I enabled the right modules and restarted Apache:

a2enmod proxy
a2enmod proxy_http
systemctl restart apache2.service

Finally, I added a cronjob in /etc/cron.daily/commit-new-seccdn-cert to commit the new cert to etckeeper automatically:

#!/bin/sh
cd /etc/libravatar
/usr/bin/git commit --quiet -m "New seccdn cert" seccdn.crt seccdn.pem seccdn-chain.pem > /dev/null || true

Main Configuration

On the main server, I created a new webroot:

mkdir -p /var/www/acme/.well-known

and a new vhost in /etc/apache2/sites-available/acme.conf:

<VirtualHost *:80>
    DocumentRoot /var/www/acme
    <Directory /var/www/acme>
        Options -Indexes
    </Directory>
</VirtualHost>
before enabling it and restarting Apache:

a2ensite acme
systemctl restart apache2.service
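Before registering the real certificate, it's worth confirming that the proxying works end to end. Something like the following (the test file name is arbitrary, and the hostname is the round-robin mirror name) should print "test" no matter which mirror the DNS lookup resolves to:

mkdir -p /var/www/acme/.well-known/acme-challenge
echo test > /var/www/acme/.well-known/acme-challenge/test
curl http://seccdn.libravatar.org/.well-known/acme-challenge/test

Remember to delete the test file afterwards.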

Registering a new TLS certificate

With all of this in place, I was able to register the cert easily using the webroot plugin on the main server:

certbot certonly --webroot -w /var/www/acme -d seccdn.libravatar.org

The resulting certificate will then be automatically renewed before it expires.
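On Debian, the certbot package also installs a cron job and systemd timer that handle these renewals. You can confirm that renewal will work, without using up any rate limits, by doing a test run against the staging servers:

certbot renew --dry-run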

November 27, 2017

Steve Ports an OFDM modem from Octave to C

Earlier this year I asked for some help. Steve Sampson K5OKC stepped up, and has done some fine work in porting the OFDM modem from Octave to C. I was so happy with his work I asked him to write a guest post on my blog on his experience and here it is!

On a personal level, working with Steve was a great experience for me. I always enjoy and appreciate other people working on FreeDV with me; however it is quite rare to have people help out with programming. As you will see, Steve enjoyed the work and learned a great deal in the process.

The Problem with Porting

But first some background on the process involved. In signal processing it is common to develop algorithms in a convenient domain-specific scripting language such as GNU Octave. These languages can do a lot with one line of code and have powerful visualisation tools.

Usually, the algorithm then needs to be ported to a language suitable for real time implementation. For most of my career that has been C. For high speed operation on FPGAs it might be VHDL. It is also common to port algorithms from floating point to fixed point so they can run on low cost hardware.

We don’t develop algorithms directly in the target real-time language as signal processing is hard. Bugs are difficult to find and correct. They may be 10 or 100 times harder (in terms of person-hours) to find in C or VHDL than in, say, GNU Octave.

So a common task in my industry is porting an algorithm from one language to another. Generally the process involves taking a working simulation and injecting a bunch of hard to find bugs into the real time implementation. It’s an excellent way for engineering companies to go bankrupt and upset customers. I have seen and indeed participated in this process (screwing up real time implementations) many times.

The other problem is algorithm development is hard, and not many people can do it. They are hard to find, cost a lot of money to employ, and can be very nerdy (like me). So if you can find a way to get people with C, but not high level DSP skills, to work on these ports – then it’s a huge win from a resourcing perspective. The person doing the C port learns a lot, and managers are happy as there is some predictability in the engineering process and schedule.

The process I have developed allows people with C coding (but not DSP) skills to port complex signal processing algorithms from one language to another. In this case it’s from GNU Octave to floating point C. The figure below shows how it all fits together.

Here is a sample output plot, in this case a buffer of received samples in the demodulator. This signal is plotted in green, and the difference between C and Octave in red. The red line is all zeros, as it should be.

This particular test generates 12 plots. Running is easy:

$ cd codec2-dev/octave
$ ../build_linux/unittest/tofdm
$ octave
>> tofdm
W........................: OK
tx_bits..................: OK
tx.......................: OK
rx.......................: OK
rxbuf in.................: OK
rxbuf....................: OK
rx_sym...................: FAIL (0.002037)
phase_est_pilot..........: FAIL (0.001318)
rx_amp...................: OK
timing_est...............: OK
sample_point.............: OK
foff_est_hz..............: OK
rx_bits..................: OK

This shows a fail case: two vectors just failed, so some further inspection is required.

Key points are:

  1. We make sure the C and Octave versions are identical. Near enough is not good enough. For floating point I set a tolerance like 1 part in 1000. For fixed point ports it can be bit exact – zero difference.
  2. We dump a lot of internal states, not just the inputs and outputs. This helps point us at exactly where the problem is.
  3. There is an automatic checklist to give us pass/fail reports of each stage.
  4. This process is not particularly original. It’s not rocket science, but getting people (especially managers) to support and follow such a process is. This part – the human factor – is really hard to get right.
  5. The same process can be used between any two versions of an algorithm. Fixed and float point, fixed point C and VHDL, or a reference implementation and another one that has memory or CPU optimisations. The same basic idea: take a reference version and use software to compare it.
  6. It makes porting fun and strangely satisfying. You get constant forward progress and no hard to find bugs. Things work when they hit real time. After months of tough, brain hurting, algorithm development, I find myself looking forward to the productivity of the porting phase.

In this case Steve was the man doing the C port. Here is his story…..

Initial Code Construction

I’m a big fan of the Integrated Development Environment (IDE). I’ve used various versions over the years, but mostly only use Netbeans IDE. This is my current favorite, as it works well with C and Java.

When I take on a new programming project I just create a new IDE project and paste in whatever I want to translate, and start filling-in the Java or C code. In the OFDM modem case, it was the Octave source code ofdm_lib.m.

Obviously this code won’t do anything or compile, but it allows me to write C functions for each of the Octave code blocks. Sooner or later, all the Octave code is gone, and only C code remains.

I have very little experience with Octave, but I did use some Matlab in college. It was a new system just being introduced when I was near graduation. I spent a little time trying to make the program as dynamic as the Octave code. But it became mired in memory allocation.

Once David approved the decision for me to go with fixed configuration values (Symbol rate, Sample rate, etc), I was able to quickly create the header files. We could adjust these header files as we went along.

One thing about Octave, is you don’t have to specify the array sizes. So for the C port, one of my tasks was to figure out the array sizes for all the data structures. In some cases I just typed the array name in Octave, and it printed out its value, and then presto I now knew the size. Inspector Clouseau wins again!

The include files were pretty much patterned the same as FDMDV and COHPSK modems.

Code Starting Point

When it comes to modems, the easiest thing to create first is the modulator. It proved true in this case as well. I did have some trouble early on, because of a bug I created in my testing code. My spectrum looked different than David’s. Once this bug was ironed out the spectrums looked similar. David recommended I create a test program, like he had done for other modems.

The output may look similar, but who knows really? I’m certainly not going to go line by line through comma-separated values, and anyway Octave floating point values aren’t the same as C values past some number of decimal points.

This testing program was a little over my head, and since David has written many of these before, he decided to just crank it out and save me the learning curve.

We made a few data structure changes to the C program, but generally it was straight forward. Basically we had the outputs of the C and Octave modulators, and the difference is shown by their different colors. Luckily we finally got no differences.

OFDM Design

As I was writing the modulator, I also had to try and understand this particular OFDM design. I deduced that it was basically eighteen (18) carriers that were grouped into eight (8) rows. The first row was the complex “pilot” symbols (BPSK), and the remaining 7 rows were the 112 complex “data” symbols (QPSK).

But there was a little magic going on, in that the pilots were 18 columns, but the data was only using 16. So in the 7 rows of data, the first and last columns were set to a fixed complex “zero.”

This produces the 16 x 7 or 112 complex data symbols. Each QPSK symbol is two-bits, so each OFDM frame represents 224 bits of data. It wasn’t until I began working on the receiver code that all of this started to make sense.

With this information, I was able to drive the modulator with the correct number of bits, and collect the output and convert it to PCM for testing with Audacity.

DFT Versus FFT

This OFDM modem uses a DFT and IDFT. This greatly simplifies things. All I have to do is a multiply and summation. With only 18 carriers, this is easily fast enough for the task. We just zip through the 18 carriers, and return the frequency or time domain. Obviously this code can be optimized for firmware later on.
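For concreteness, each DFT output is just the textbook sum X(k) = sum over n of x(n) * e^(-j*2*pi*k*n/N). With the 144-sample symbols described below and only 18 carriers of interest, that is 18 * 144, or about 2,600 complex multiply-adds per symbol, which is trivial for a modern CPU; the usual FFT argument only pays off at much larger sizes.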

The final part of the modulator, is the need for a guard period called the Cyclic Prefix (CP). So by making a copy of the last 16 of the 144 complex time-domain samples, and putting them at the head, we produce 160 complex samples for each row, giving us 160 x 8 rows, or 1280 complex samples every OFDM frame. We send this to the transmitter.

There will probably need to be some filtering, and a function of adjusting gain in the API.

OFDM Demodulator

That left the Demodulator which looked much more complex. It took me quite a long time just to get the Octave into some semblance of C. One problem was that Octave arrays start at 1 and C starts at 0. In my initial translation, I just ignored this. I told myself we would find the right numbers when we started pushing data through it.

I won’t kid anyone, I had no idea what was going on, but it didn’t matter. Slowly, after the basic code was doing something, I began to figure out the function of various parts. Again though, we have no idea if the C code is producing the same data as the Octave code. We needed some testing functions, and these were added to tofdm.m and tofdm.c. David wrote this part of the code, and I massaged the C modem code until one day the data were the same. This was pretty exciting to see it passing tests.

One thing I found, was that you can reach an underflow with single precision. Whenever I was really stumped, I would change the single precision to a double, and then see where the problem was. I was trying to stay completely within single precision floating point, because this modem is going to be embedded firmware someday.

Testing Process

There was no way that I could have reached a successful conclusion without the testing code. As a matter of fact, a lot of programming errors were found. You would be surprised at how much damage a misplaced parenthesis can do to a math equation! I’ve had enough math to know how to do the basic operations involved in DSP. I’m sure that as this code is ported to firmware, it can be simplified, optimized, and unrolled a bit for added speed. At this point, we just want valid waveforms.

C99 and Complex Math

Working with David was pretty easy, even though we are almost 16 time-zones apart. We don’t need an answer right now, and we aren’t working on a deadline. Sometimes I would send an email, and then four hours later I would find the problem myself, and the morning was still hours away in his time zone. So he sometimes got some strange emails from me that didn’t require an answer.

David was hands-off on this project, and doesn’t seem to be a control freak, so he just let me go at it, and then teamed-up when we had to merge things in giving us comparable output. Sometimes a simple answer was all I needed to blow through an Octave brain teaser.

I’ve been working in C99 for the past year. For those who haven’t kept up: 1999 was a long time ago, but we still tend to program C in the same old way. In working with complex numbers though, the C library has been greatly expanded. For example, to multiply two complex numbers, you type “A * B”. That’s it. No need to worry about a simulated complex number using a structure. You need a complex exponent, you type “cexp(I * W)” where “I” is the sqrt(-1). But all of this is hidden away inside the compiler.

For me, this became useful when translating Octave to C. Most of the complex functions have the same name. The only thing I had to do, was create a matrix multiply, and a summation function for the DFT. The rest was straight forward. Still a lot of work, but it was enjoyable work.

Where we might have problems interfacing to legacy code, there are functions in the library to extract the real and imaginary parts. We can easily interface to the old structure method. You can see examples of this in the testing code.

Looking back, I don’t think I would do anything different. Translating code is tedious no matter how you go. In this case Octave is 10 times easier than translating Fortran to C, or C to Java.

The best course is where you can start seeing some output early on. This keeps you motivated. I was a happy camper when I could look and listen to the modem using Audacity. Once you see progress, you can’t give up, and want to press on.


Reading Further

The Bit Exact Fairy Tale is a story of fixed point porting. Writing this helped me vent a lot of steam at the time – I’d just left a company that was really good at messing up these sorts of projects.

Modems for HF Digital Voice Part 1 and Part 2.

The cohpsk_frame_design spreadsheet includes some design calculations on the OFDM modem and a map of where the data and pilot symbols go in time and frequency.

Reducing FDMDV Modem Memory is an example of using automated testing to port an earlier HF modem to the SM1000. In this case the goal was to reduce memory consumption without breaking anything.

Fixed Point Scaling – Low Pass Filter example – is consistently one of the most popular posts on this blog. It’s a worked example of a fixed point port of a low pass filter.

November 24, 2017

This Week in HASS – term 4, week 8

Well, the end of term is in sight! End of year reporting is in full swing and the Understanding Our World® activities are designed to keep students engaged whilst minimising requirements for teachers, especially over these critical weeks. The current activities for all year levels are tailored to require minimal teaching, allowing teacher aides and […]

November 21, 2017

LUV December 2017 end of year celebration: Meetup Mixup Melbourne

Dec 21 2017 18:00
Dec 21 2017 23:59
Loop Project Space and Bar, 23 Meyers Pl, Melbourne VIC 3000

There will be no December workshop, but there will be an end of year party in conjunction with other Melbourne groups including Buzzconf, Electronic Frontiers Australia, Hack for Privacy, the Melbourne PHP Users Group, Open Knowledge Australia, PyLadies Melbourne and R-Ladies Melbourne.

Please note that there's an $8.80 cover fee, which includes a drink and nibbles, and bookings are essential as spaces are limited.  Tickets are available at

Linux Users of Victoria is a subcommittee of Linux Australia.


November 20, 2017

Communication skills for everyone

Donna presenting this talk at DrupalSouth - Photo by Tim Miller

Update: Video of this talk is now available.

Communication is a skill most of us practice every day.

Often without realising we're doing it.

Rarely intentionally.

I take my communication skills for granted. I'm not a brilliant communicator, not the best by any means, but probably, yes, I'm a bit above average. It wasn't until a colleague remarked on my presentation skills in particular that I remembered I'd actually been taught a thing or two about being on a stage. First as a dancer, then as a performer, and finally as a theatre director.

It's called Stagecraft. There's a lot to it, but when mastering stagecraft, you learn to know yourself. To use your very self as a tool to amplify your message. Where and how you stand, awareness of the space, of the light, of the size of the room, and of how to project your voice so all will hear you. All these facets need polish if you want your message to shine.

But you also need to learn to know your audience. Why are they there? What have they come to hear? What do they need to learn? How will they be transformed? Tuning your message to serve your audience is the real secret to giving a great presentation.

But presenting is just one of many communication skills. It's probably the one that people tell me most instils fear. Then there's writing of course. I envy writers! I would love to write more. I think of these as the "broadcast" skills. The "loud" skills. But the most important communication skill, in my view, is Listening.

As I've developed new skills as a business analyst, I've come to understand that Listening is the communication skill I need to improve most.

I was delighted to read this article by Tammy Lenski on the very morning I was to give this comms skills talk at DrupalSouth. Tammy refers to 5 Types of Listening identified in a talk given by Stephen Covey some years back. She says

"He described a listening continuum that runs from ignoring all the way over on the left, to pretend listening (patronizing), then selective listening, then attentive listening, and finally to empathic listening on the right."

Listening continuum


I think this is really useful.  If we are to get better at listening, we need to study it. But more importantly, we need to practice it. "Practice makes perfect". Kathy Sierra talks a lot about the power of intentional practice in her book Badass: Making Users Awesome.

So, communication is a huge, huge topic to try and cover in a conference talk, so I tried to distil it down into three elements.

The what.

The how,

and The why.

The what is the message itself.  The how is the channel, the method, the style, or the medium, as Marshall McLuhan said. And finally, there's the why: the intent, the purpose, or the reason for communicating.  I believe we need to understand the "why" of what we're saying, or hearing, if it is to be of any benefit.

Here's my slides:


November 17, 2017

Hackweek0x10: Fun in the Sun

We recently had a 5.94kW solar PV system installed – twenty-two 270W panels (14 on the northish side of the house, 8 on the eastish side), with an ABB PVI-6000TL-OUTD inverter. Naturally I want to be able to monitor the system, but this model inverter doesn’t have an inbuilt web server (which, given the state of IoT devices, I’m actually kind of happy about); rather, it has an RS-485 serial interface. ABB sell addon data logger cards for several hundred dollars, but Rick from Affordable Solar Tasmania mentioned he had another client who was doing monitoring with a little Linux box and an RS-485 to USB adapter. As I had a Raspberry Pi 3 handy, I decided to do the same.

Step one: Obtain an RS-485 to USB adapter. I got one of these from Jaycar. Yeah, I know I could have got one off eBay for a tenth the price, but Jaycar was only a fifteen minute drive away, so I could start immediately (I later discovered various RS-485 shields and adapters exist specifically for the Raspberry Pi – in retrospect one of these may have been more elegant, but by then I already had the USB adapter working).

Step two: Make sure the adapter works. It can do RS-485 and RS-422, so it’s got five screw terminals: T/R-, T/R+, RXD-, RXD+ and GND. The RXD lines can be ignored (they’re for RS-422). The other three connect to matching terminals on the inverter, although what the adapter labels GND, the inverter labels RTN. I plugged the adapter into my laptop, compiled Curt Blank’s aurora program, then asked the inverter to tell me something about itself:

aurora -a 2 -Y 4 -e /dev/ttyUSB0

Interestingly, the comms seem slightly glitchy. Just running aurora -a 2 -e /dev/ttyUSB0 always results in either “No response after 1 attempts” or “CRC receive error (1 attempts made)”. Adding “-Y 4” makes it retry four times, which is generally rather more successful. Ten retries is even more reliable, although still not perfect. Clearly there’s some tweaking/debugging to do here somewhere, but at least I’d confirmed that this was going to work.

So, on to the Raspberry Pi. I grabbed the openSUSE Leap 42.3 JeOS image and dd’d that onto a 16GB SD card. Booted the Pi, waited a couple of minutes with a blank screen while it did its firstboot filesystem expansion thing, logged in, fiddled with network and hostname configuration, rebooted, and then got stuck at GRUB saying “error: attempt to read or write outside of partition”:

error: attempt to read or write outside of partition.

Apparently that’s happened to at least one other person previously with a Tumbleweed JeOS image. I fixed it by manually editing the partition table.

Next I needed an RPM of the aurora CLI, so I built one on OBS, installed it on the Pi, plugged the Pi into the USB adapter, and politely asked the inverter to tell me a bit more about itself:

aurora -a 2 -Y 4 -d 0 /dev/ttyUSB0

Everything looked good, except that the booster temperature was reported as being 4294967296°C, which seemed a little high. Given that translates to 0x100000000, and that the south wall of my house wasn’t on fire, I rather suspected another comms glitch. Running aurora -a 2 -Y 4 -d 0 /dev/ttyUSB0 a few more times showed that this was an intermittent problem, so it was time to make a case for the Pi that I could mount under the house on the other side of the wall from the inverter.

I picked up a wall mount snap fit black plastic box, some 15mm x 3mm screws, matching nuts, and 9mm spacers. The Pi I would mount inside the box part, rather than on the back, meaning I can just snap the box-and-Pi off the mount if I need to bring it back inside to fiddle with it.

Then I had to measure up and cut holes in the box for the ethernet and USB ports. The walls of the box are 2.5mm thick, plus 9mm for the spacers meant the bottom of the Pi had to be 11.5mm from the bottom of the box. I measured up then used a Dremel tool to make the holes then cleaned them up with a file. The hole for the power connector I did by eye later after the board was in about the right place.


I didn’t measure for the screw holes at all, I simply drilled through the holes in the board while it was balanced in there, hanging from the edge with the ports. I initially put the screws in from the bottom of the box, dropped the spacers on top, slid the Pi in place, then discovered a problem: if the nuts were on top of the board, they’d rub up against a couple of components:


So I had to put the screws through the board, stick them there with Blu Tack, turn the Pi upside down, drop the spacers on top, and slide it upwards into the box, getting the screws as close as possible to the screw holes, flip the box the right way up, remove the Blu Tack and jiggle the screws into place before securing the nuts. More fiddly than I’d have liked, but it worked fine.

One other kink with this design is that it’s probably impossible to remove the SD card from the Pi without removing the Pi from the box, unless your fingers are incredibly thin and dexterous. I could have made another hole to provide access, but decided against it as I’m quite happy with the sleek look, this thing is going to be living under my house indefinitely, and I have no plans to replace the SD card any time soon.


All that remained was to mount it under the house. Here’s the finished install:


After that, I set up a cron job to scrape data from the inverter every five minutes and dump it to a log file. So far I’ve discovered that there’s enough sunlight by about 05:30 to wake the inverter up. This morning we’d generated 1kWh by 08:35, 2kWh by 09:10, 8kWh by midday, and as I’m writing this at 18:25, a total of 27.134kWh so far today.
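The cron job itself is nothing special. Mine looks something like this (the binary path and log location are just my choices, and the retry count is set high to ride out the glitchy comms):

*/5 * * * * /usr/bin/aurora -a 2 -Y 10 -d 0 /dev/ttyUSB0 >> /var/log/inverter.log 2>&1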

Next steps:

  1. Figure out WTF is up with the comms glitches
  2. Graph everything and/or feed the raw data to

This Week in HASS – term 4, week 7

This week our younger students are preparing for their play/ role-playing presentation, whilst older students are practising a full preferential count to determine the outcome of their Class Election. Foundation/Prep/Kindy to Year 3 Our youngest students in Foundation/Prep/Kindy (Unit F.4) and integrated classes with Year 1 (Unit F-1.4) are working on the costumes, props and […]

November 15, 2017

Save the Dates: Linux Security Summit Events for 2018

There will be a new European version of the Linux Security Summit for 2018, in addition to the established North American event.

The dates and locations are as follows:

Stay tuned for CFP announcements!


November 13, 2017

Test mail server on Ubuntu and Debian

I wanted to setup a mail service on a staging server that would send all outgoing emails to a local mailbox. This avoids sending emails out to real users when running the staging server using production data.

First, install the postfix mail server:

apt install postfix

and choose the "Local only" mail server configuration type.

Then change the following in /etc/postfix/main.cf:

default_transport = error

to:

default_transport = local:root

and restart postfix:

systemctl restart postfix.service

Once that's done, you can find all of the emails in /var/mail/root.
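To confirm that mail really is being trapped, send a test message to any outside address (the address below is arbitrary) and check that it lands in the local root mailbox:

printf "Subject: staging test\n\ntest body\n" | sendmail someone@example.com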

To read them, you can install mutt:

apt install mutt

and then view the mailbox like this:

mutt -f /var/mail/root

Rattus Norvegicus ESTs with BLAST and Slurm

The following is a short tutorial on using BLAST with Slurm using fasta nucleic acid (fna) FASTA formatted sequence files for Rattus Norvegicus. It assumes that BLAST (Basic Local Alignment Search Tool) is already installed.

First, create a database directory, download the datafile, extract, and load the environment variables for BLAST.

mkdir -p ~/applicationtests/BLAST/dbs
cd ~/applicationtests/BLAST/dbs
# download the dataset from NCBI RefSeq (URL assumed; adjust if the layout has changed)
wget ftp://ftp.ncbi.nlm.nih.gov/refseq/R_norvegicus/mRNA_Prot/rat.1.rna.fna.gz
gunzip rat.1.rna.fna.gz
module load BLAST/2.2.26-Linux_x86_64

Having extracted the file, there will be a fna formatted sequence file, rat.1.rna.fna. An example header line for a sequence:

>NM_175581.3 Rattus norvegicus cathepsin R (Ctsr), mRNA
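From here, a typical next step is to format the file as a BLAST database and run a search as a Slurm batch job. Here is a minimal sketch using the legacy 2.2.26 tools loaded above; the query file, job name, and resource limits are placeholders:

#!/bin/bash
#SBATCH --job-name=blast-rat
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00

module load BLAST/2.2.26-Linux_x86_64
cd ~/applicationtests/BLAST/dbs

# Build the nucleotide database (-p F selects nucleotide rather than protein)
formatdb -i rat.1.rna.fna -p F

# Search a query against it; -a sets the number of threads
blastall -p blastn -a 4 -d rat.1.rna.fna -i query.fna -o query_vs_rat.out

Save it as (say) blast-rat.slurm and submit it with sbatch blast-rat.slurm.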


November 12, 2017

LUV Main December 2017 Meeting - ISC and TERATEC: A Tale of Two Conferences / nfatbles

Dec 5 2017 18:30
Dec 5 2017 20:30
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000


Lev Lafayette, ISC and TERATEC: A Tale of Two Conferences

This year the International Supercomputing Conference and TERATEC were held in close proximity, the former in Frankfurt from June 17-21 and the latter in Paris from June 27-28. Whilst the two conferences differ greatly in scope (one international, one national) and language (one Anglophone, the other Francophone), the dominance of Linux as the operating system of choice at both was overwhelming.


Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.



LUV November 2017 Workshop: Status at a glance with LCDproc

Nov 18 2017 12:30
Nov 18 2017 16:30
Infoxchange, 33 Elizabeth St. Richmond

Status at a glance with LCDproc

Andrew Pam will demonstrate how to use small LCD or LED displays to provide convenient status information using LCDproc.  Possibly also how to write custom modules to display additional information.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


read more

Access and Memory: Open GLAM and Open Source

Over the years of my involvement with library projects, like Coder Dojo, programming workshops and such, I’ve struggled to nail down the intersection between libraries and open source. At this year’s linux.conf.au in Sydney (my seventeenth!) I’m helping to put together a miniconf to answer this question: Open GLAM. If you do work in the intersection of galleries, libraries, archives, museums and open source, we’d love to hear from you.

Filed under: lca, oss, Uncategorized

November 10, 2017

This Week in HASS – term 4, week 6

This week our youngest students are starting work on their Class Play, slightly older students are choosing a family group from around the world for a role play activity and our oldest students are holding a Class Election! What an activity-filled week! Foundation/Prep/Kindy to Year 3 Our youngest students in standalone Foundation/Prep/Kindy classes (Unit F.4) […]

November 08, 2017

FWD50 Keynote: The Tipping Point

I was invited to an incredible and inaugural conference in Canada called FWD50 which was looking at the next 50 days, months and years for society. It had a digital government flavour to it but had participants and content from various international, national and sub-national governments, civil society, academia, industry and advocacy groups. The diversity of voices in the room was good and the organisers committed to greater diversity next year. I gave my keynote as an independent expert and my goal was to get people thinking bigger than websites and mobile apps, to dream about the sort of future we want as a society (as a species!) and work towards that. As part of my talk I also explored which big paradigm shifts have happened (note the past tense) and potential roles for government (particularly the public sector) in a hyper connected, distributed network of powerful individuals. My slides are available here (simple though they are). It wasn’t recorded but I did an audio recording and transcribed it. I was unwell and had lost my voice so this is probably better anyway :)

The tipping point and where do we go from here

I’ve been thinking a lot over many years about change and the difference between iteration and transformation, about systems, about what is going on in the big picture, because what I’m seeing around the world is a lot of people iterating away from pain but not actually iterating towards a future. Looking for ways to solve the current problem but not rethinking or reframing in the current context. I want to talk to you about the tipping point.

We invented all of this. This is worth taking a moment to think about. We invented every system, every government, every means of production; we organised ourselves into structures and companies; all the things we know, we invented. By understanding that we invented it, we can embrace the notion that we aren’t stuck with it. A lot of people start from the normative perspective that it is how it is and ask how we might improve it slightly, but we don’t have to be constrained by assumption because *we* invented it. We can take a formative approach.

The reason this is important is because the world has fundamentally changed. The world has started from a lot of assumptions. This (slide) is a map of the world as it was known at the time, and it was known for a long time to be flat. And at some point it became known that the world was not flat and people had to change their perspective. If we don’t challenge those assumptions that underpin our systems, we run the significant risk of recreating the past with shiny new things. If we take whatever the shiny thing is today, like blockchain or social media 10 years ago, and take that shiny thing to do what we have always done, then how are we progressing? We are just “lifting and shifting” as they like to say, which as a technologist is almost the worst thing I can hear.

Actually understanding the assumptions that underpin what we do, understanding the goal that we have and what we are trying to achieve, and actually having to make sure that we intentionally choose to move forward with the assumptions that we want to take into the future is important because a lot of the biases and assumptions that underpin the systems that we have today were forged centuries or even millennia ago. A long time before the significant paradigm shifts we have seen.

So I’m going to talk a little bit about how things have changed. It’s not that the tipping point is happening. The tipping point has already happened. We have seen paradigm shifts with legacy systems of power and control. Individuals are more individually powerful than ever in the history of our species. If you think way back in hunter and gatherer times, everyone was individually pretty powerful then, but it didn’t scale. When we moved to cities we actually started to highly specialise and become interdependent and individually less powerful because we made these systems of control that were necessary to manage the surplus of resource, necessary to manage information. But what’s happened now, through the independence movements creating a culture of everyone being individually powerful, each individual worthy of rights, and then more recently with the internet becoming a distributor, enabler and catalyst of that, is that we are now seeing power massively distributed.

Think about it. Any individual around the world that can get online, admittedly that’s only two thirds of us but it’s growing every day, has the power to publish, to create, to share, to collaborate, to collude, to monitor. It’s not just the state monitoring the people but the people monitoring the state and people monitoring other people. There is the power to enforce your own perspective. And it doesn’t actually matter whether you think it’s a good or bad thing, it is the reality. It’s the shift. And if we don’t learn to embrace, understand and participate in it, particularly in government, then we actually make ourselves less relevant. Because one of the main things about this distribution of power, that the internet has taught us fundamentally as part of our culture that we have all started to adopt, is that you can route around damage. The internet was set up to be able to route around damage where damage was physical or technical. We started to internalise that socially and if you, in government, are seen to be damage, then people route around you. This is why we have to learn to work as a node in a network, not just a king in a castle, because kings don’t last anymore.

So which way is forward? The priority now needs to be deciding what sort of future we want, not what sort of past we want to escape. The 21st century sees many communities emerging. They are hyper connected, transnational, multicultural, heavily interdependent, heavily specialised, rapidly changing and disconnected from their geopolitical roots. Some people see that as a reason to move away from having geopolitically formed states. Personally I believe there will always be a role for a geographic state because I need a way to scale a quality of life for my family along with my fellow citizens and neighbours. But what does that mean in an international sense? Are my rights as a human being being realised in a transnational sense? There are some really interesting questions about the needs of users beyond the individual services that we deliver, particularly when you look in a transnational way.

So a lot of these assumptions have become like a rusty anchor that kept us in place at high tide, but are drawing us to a dangerous reef as the water lowers. We need to figure out how to float on the water without rusty anchors to adapt to the tides of change.

There are a lot of pressures that are driving these changes of course. We are all feeling those pressures, those of us that are working in government. There’s the pressure of changing expectations, of history, from politics and the power shift. The pressure of the role of government in the 21st century. Pressure is a wonderful thing as it can be a catalyst of change, so we shouldn’t shy away from pressure, but recognising that we’re under pressure is important.

So let’s explore some of those power shifts and then what role could government play moving forward.

Paradigm #1: central to distributed.

This is about that shift in power, the independence movements and the internet. It is something people talk about but don’t necessarily apply to their work. Governments will talk about wanting to take a more distributed approach but follow up with setting up “my” website expecting everyone to join or do something. How about everyone come to “my” office or create “my” own lab. Distributed, when you start to really internalise what that means, is different. I was lucky as I forged a lot of my assumptions and habits of working when I was involved in the Open Source community, and the Open Source community has a lot of lessons for the rest of society because it is on the bleeding edge of a lot of these paradigm shifts. So working in a distributed way is to assume that you are not at the centre, to assume that you’re not needed. To assume that if you make yourself useful then people will rely on you, but also to assume that you rely on others and to build what you do in a way that strengthens the whole system. I like to talk about it as “Gov as a Platform”; sometimes that is confusing to people so let’s talk about it as “Gov as an enabler”. It’s not just government as a central command and controller anymore because the moment you create a choke point, people route around it. How do we become a government as an enabler of good things, and how can we use other mechanisms to create the controls in society? Rather than try to protect people from themselves, why not enable people to protect themselves? There are so many natural motivations in the community, in industry, in the broader sector that we serve, that we can tap into but traditionally we haven’t. Because traditionally we saw ourselves as the enforcer, as the one to many choke point. So working in a distributed way is not just about talking the talk, it’s about integrating it into the way we think.

Some other aspects of this include localised to globalised, keeping in mind that large multinational companies have become quite good at jurisdiction shopping to improve profits, which you can’t say is either a good or bad thing, it’s just a natural thing and how they’re naturally motivated. But citizens are increasingly starting to jurisdiction shop too. So I would suggest a role for government in the 21st century would be to create the best possible quality of life for people, because then you’ll attract the best from around the world.

The second part of central to distributed is simple to complex. I have this curve (on the slide) which shows green as complexity and red as government’s response to user needs. The green climbs exponentially whilst the red is pretty linear, with small increases or decreases over time, but not an exponential response by any means. Individual needs are no longer heavily localised. They are subject to local, national, transnational complexities with every extra complexity compounded, not linear. So the increasing complexities in people’s lives, and the obligations, taxation, services and entitlements, everything is going up. So there is a delta forming between what government can directly do, and what people need. So again I contend that the opportunity here, particularly for the public sector, is to actually be an enabler for all those service intermediaries – the for profit, non profit, civic tech – to help them help themselves, help them help their customers, by merit of making government a platform upon which they can build. We’ve had a habit and a history of creating public infrastructure; particularly in Australia, in New Zealand, in Canada, we’ve been very good at building public infrastructure. Why have we not focused on digital infrastructure? Why do we see digital infrastructure as something that has to be cost recovered to be sustainable when we don’t do cost recovery for every public road? I think the cost benefits and value creation of digital public infrastructure need to be looked at in the same way, and we need to start investing in digital public infrastructure.

Paradigm #2: analog to digital.

Or slow to very fast. I like to joke that we use lawyers as modems. If you think about regulation and policy, we write it, it is translated by a lawyer or drafter into regulation or policy, it is then translated by a lawyer or drafter or anyone into operational systems, business systems, helpdesk systems or other systems in society. Why wouldn’t we make our regulation available as code, the intent of our regulation and our legislative regimes available to be directly consumed (by the systems), so that we can actually speed up, automate, improve consistency of application through the system, and have a feedback loop to understand whether policy changes are having the intended policy effect?

There are so many great things we can do when we start thinking about digital as something new, not just digitising an analog process. For too long innovation was interpreted as digitisation of a process, basic process improvements. But real digitisation should be a transformation where you are changing the thing to better achieve the purpose or intent.

Paradigm #3: scarcity to surplus.

I think this is critical. We have a lot of assumptions in our systems that assume scarcity. Why do so many of our systems still assume scarcity when surplus is the opportunity? Between 3D printing and nanotech, we could be deconstructing and reconstructing new materials to print into goods and food, and yet a large inhibitor of 3D printing progress is copyright. So the question I have for you is: do we care more about an 18th century business model or do we care about solving the problems of our society? We need to make these choices. If we have moved to an era of surplus but we are getting increasing inequality, perhaps the systems of distribution are problematic? Perhaps in assuming scarcity we are protecting scarcity for the few at the cost of the many.

Paradigm #4: normative to formative

“Please comply”. For the last hundred years in particular we have perfected the art of broadcasting images of normal into our houses, particularly with radio and television. We have the concept of setting a standard or rule, and if you don’t follow it we’ll punish you, so a lot of culture is about compliance in society. Compliance is important for stability, but blind compliance can create millstones. A formative paradigm is about not saying how it is but exploring where you want to go. In the public service we are particularly good at compliance culture, but I suggest that if we got more people thinking formatively, not just change for change’s sake, but bringing people together on their genuinely shared purpose of serving the public, then we might be able to take a more formative approach to doing the work we do for the betterment of society rather than ticking the box because it is the process we have to follow. Formative takes us away from being consumers and towards being makers. As an example, the most basic form of normative human behaviour is in how we see and conform to being human. You are either normal, or you are not, based on some externally projected vision of normal. But the internet has shown us that no one is normal. So embracing that it is through our difference that we are more powerful and able to adapt is an important part of our story and culture moving forward. If we are confident to be formative, we can always be trying to create a better world whilst applying a critical eye to compliance, so we don’t comply for compliance’s sake.

Exploring optimistic futures

Now on the back of these paradigm shifts, I’d like to talk briefly about the future. I spoke about the opportunity through surplus with 3D printing and nanotech to address poverty and hunger. What about the opportunities of rockets for domestic travel? It takes half an hour to get into space, an hour to traverse the world and half an hour down, which means domestic retail transport by rocket is being developed right now: I could go from New Zealand to Canada to work for the day and be home for tea. That shift is going to be enormous in so many ways and it could drive real changes in how we see work and internationalism. How many people remember Total Recall? The right hand picture is a self driving car from a movie in the 90s and is becoming normal now. Interesting fact: some of the car designs will tint the windows when they go through intersections because the passengers are deeply uncomfortable with the speed and closeness of self driving cars, which can miss each other very narrowly compared to human driving. Obviously there are opportunities around AI, bots and automation, but I think it gets interesting when we think about the opportunities of the future of work. We are still working on industrial assumptions that the number of hours we have is a scarcity paradigm and I have to sell the number of hours that I work, 40, 50, 60 hours. Why wouldn’t we work 20 hours a week at a higher rate to meet our basic needs? Why wouldn’t we have 1 or 2 days a week where we could contribute to our civic duties, or art, or education? Perhaps we could jump start an inclusive renaissance, and I don’t mean cat pictures. People can’t thrive if they’re struggling to survive, and yet we keep putting pressure on people just to survive. Again, we are from countries with quite strong safety nets but even those safety nets put huge pressure, paperwork and bureaucracy on our most vulnerable just to meet their basic needs. Often the process of getting access to the services and entitlements is so hard and traumatic that they can’t, so how do we close that gap so all our citizens can move from survival to thriving?

The last picture is a bit cheeky. The science fiction author William Gibson wrote Johnny Mnemonic, which has a character called Jones, a cyborg dolphin used to sniff out underwater mines in warfare. Very dark, but the interesting concept there is in how Jones was received after the war: “he was more than a dolphin, but from another dolphin’s point of view he might have seemed like something less.” What does it mean to be human? If I lose a leg, right now it is assumed I need to replace that leg to be somehow “whole”. What if I want 4 legs? The human brain is able to adapt to new input. I knew a woman who got a small sphere filled with mercury and a free floating magnet in her finger; the magnet spins according to frequency, and she found over a short period of time she was able to detect changes in frequency. Why is that cool and interesting? Because the brain can adapt to foreign, non-evolved input. I think that is mind blowing. We have the opportunity to augment ourselves, not to just conform to normal or be slightly better, faster humans, but to actually change what it means to be human altogether. I think this will be one of the next big social challenges for society, but because we are naturally so attracted to “shiny”, I think that discomfort will pass within a couple of generations. One prediction is that the normal Olympics will become boring and that we will move into a transhuman olympics where we take the leash off and explore the 100m sprint with rockets, or judo with cyborgs. Where the interest goes, the sponsorship goes, and more professional athletes compete. And what’s going to happen if your child says they want to be a professional transhuman olympian and that they will add wings or remove their legs for their professional career, to add them (or not) later? That’s a bit scary for many but at the same time, it’s very interesting. And it’s ok to be uncomfortable, it’s ok to look at change, be uncomfortable and ask yourself “why am I uncomfortable?” rather than just pushing back on discomfort. It’s critical more than ever, particularly in the public service, that we get away from this dualistic good or bad, in or out, yours or mine, and start embracing the grey.

The role of government?

So what’s the role of government in all this, in the future. Again these are just some thoughts, a conversation starter.

I think one of our roles is to ensure that individuals have the ability to thrive. Now I acknowledge I’m very privileged to have come from a social libertarian country that believes this, where people broadly believe they want their taxes to go to the betterment of society, and not all countries have that assumption. But if we accept the idea that people can’t thrive if they can’t survive, then the baseline quality of life provided by the state, assuming an individual starts from nothing with no privilege, benefits or family, needs to be good enough for the person to be able to thrive. Otherwise we get a basic structural problem. Part of that is becoming master builders again, and to go to the Rawls example from Alistair earlier, we need empathy in what we do in government. So many times we build systems without empathy and they go terribly wrong because we didn’t think about what it would be like to be on the other side of that service, policy or idea. User centred design is just a systematisation of empathy, which is fantastic, but bringing empathy into everything we do is very important.

Leadership is a very important role for government. I think part of our role is to represent the best interests of society. I very strongly feel that we have a natural role to serve the public in the public sector, as distinct from the political sector (though citizens see us as the same thing). The role of a strong, independent public sector is more important than ever in a post-fact “fake news” world because it is one of the only actors on the stage that is naturally motivated, naturally systemically motivated, to serve the best interests of the public. That’s why open government is so important and that’s why digital and open government initiatives align directly.

Because open without digital doesn’t scale, and digital without open doesn’t last.

Stability, predictability and balance. It is certainly a role of government to create confidence in our communities; confidence creates thriving. It is one thing to address Maslow’s pyramid of needs, but if you don’t feel confident, if you don’t feel safe, then you still end up behaving in strange and unpredictable ways. So this is part of what is needed for communities to thrive. This relates to regulation, and there is a theory that regulation is bad because it is hard. I would suggest that regulation is important for the stability and predictability in society, but we have to change the way we deliver it. Regulation as code gets the balance right because you can have the settings and levers in the economy but also the ability for it to be automated, consumable, consistent, monitored and innovative. I imagine a future where I have a personal AI which I can trust because of quantum cryptography and because it is tethered in purpose to my best interests. I don’t have to rely on whether my interests happen to align with the purpose of a department, company or non-profit to get the services I need because my personal bot can figure out what I need and give me the options for me to make decisions about my life. It could deal with the Government AI to figure out the rules, my taxation, obligations, services and entitlements. Where is the website in all that? I ask this because the web was a 1990s paradigm, and we need more people to realise and plan around the idea that the future of service delivery is in building the backend of what we do – the business rules, transactions, data, content, models – in a modular, consumable way so we can shift channels or modes of delivery, whether it is a person, digital service or AI to AI interaction.

Another role of government is in driving the skills we need for the 21st century. Coding is critical not because everyone needs to code (maybe they will), but because coding teaches you an assumption, an instinct: that technology is something that can be used by you, not something you are intrinsically bound to. Minecraft is the saviour of a generation because all those kids are growing up believing they can shape the world around them, not have to be shaped by the world around them. This harks back to the normative/formative shift. But we also need to teach critical thinking, teach self awareness, bias awareness, maker skills, community awareness. It has been delightful to move to New Zealand where they have a culture that has an assumed community awareness.

We need of course to have a strong focus on participatory democracy, where government isn’t just doing something to you but we are all building the future we need together. This is how we create a multi-processor world rather than a single processor government. This is how we scale and develop a better society but we need to move beyond “consultation” and into actual co-design with governments working collaboratively across the sectors and with civil society to shape the world.

I’ll finish on this note: government as an enabler, a platform upon which society can build. We need to build a way of working that assumes we are a node in the network, that assumes we have to work collaboratively, that assumes people are naturally motivated to make good decisions for their life, and that asks how government can enable and support them.

So embrace the tipping point, don’t just react. What future do you want, what society do you want to move towards? I guess I’ve got to a point in my life where I see everything as a system, and if I can’t connect the dots between what I’m doing and the purpose then I try to not do that thing. In the first public service job I had, I got in and automated a large proportion of the work within a couple of weeks, and then asked for more, and they gave it to me because I was motivated to make it better.

So I challenge you to be thinking about this every day, to consider your own assumptions and biases, to consider whether you are being normative or formative, to evaluate whether you are being iterative or transformative, to evaluate whether you are moving away from something or towards something. And to always keep in mind where you want to be, how you are contributing to a better society and to actively leave behind those legacy ideas that simply don’t serve us anymore.

November 06, 2017

Web Security 2017

I started web development around late 1994. Some of my earliest paid web work is still online (dated June 1995). Clearly, that was a simpler time for content! I went on to be ‘Webmaster’ (yes, for those joining us in the last decade, that was a job title once) for UWA, and then for Hartley Poynton/ at a time when security became important as commerce boomed online.

At the dawn of the web era, the consideration of backwards compatibility with older web clients (browsers) was deemed to be important; content had to degrade nicely, even without any CSS being applied. As the years stretched out, the legacy became longer and longer. Until now.

In mid-2018, the Payment Card Industry (PCI) Data Security Standard (DSS) 3.2 comes into effect, requiring card holder environments to use (at minimum) TLS 1.2 for the encrypted transfer of data. Of course, that’s also the maximum version typically available today (TLS 1.3 is in draft 21 at the time of writing). This effort by the PCI is forcing people to adopt new browsers that can do the TLS 1.2 protocol (and the encryption ciphers it permits), typically by running modern/recent Chrome, Firefox, Safari or Edge browsers. And for the majority of people, Chrome is their choice, and the majority of those are auto-updating on every release.

Many are pushing to be compliant with the 2018 PCI DSS 3.2 as early as possible; your logging of negotiated protocols and ciphers will show if your client base is ready as well. I’ve already worked with one government agency to demonstrate they were ready, and have already helped disable TLS 1.0 and 1.1 on their public facing web sites (and previously SSL v3). We’ve removed RC4 ciphers, 3DES ciphers, and enabled ephemeral key ciphers to provide forward secrecy.
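
One way to verify that the older protocols are really gone is with the openssl command-line client (the hostname below is a placeholder). A handshake forced to TLS 1.0 should now be rejected, while a TLS 1.2 handshake should succeed:

openssl s_client -connect www.example.com:443 -tls1 < /dev/null
openssl s_client -connect www.example.com:443 -tls1_2 < /dev/null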

Web developers (writing Javascript and using various frameworks) can rejoice — the age of having to support legacy MS IE 6/7/8/9/10 is pretty much over. None of those browsers support TLS 1.2 out of the box (IE 10 can turn this on, but for some reason, it is off by default). This makes Javascript code smaller as it doesn’t have to have conditional code to work with the quirks of those older clients.

But as we find ourselves with modern clients, we can now ask those clients to be complicit in our attempts to secure the content we serve. They understand modern security constructs such as Content Security Policies and other HTTP security-related headers.
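
For illustration, a minimal set of such response headers might look like the following. These values are examples only, not a recommendation; a real Content Security Policy in particular has to be tailored to the site’s actual content sources:

Strict-Transport-Security: max-age=31536000; includeSubDomains
Content-Security-Policy: default-src 'self'
X-Content-Type-Options: nosniff
X-Frame-Options: DENY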

There are two tools I am currently using to help in this battle to improve web security. One is SSL Labs (ssllabs.com), the work of Ivan Ristić (and now owned/sponsored by Qualys). This tool gives a good view of the encryption in flight (protocols, ciphers), chain of trust (certificate), and a new addition of checking DNS records for CAA records (which I and others piled on a feature request for AWS Route53 to support). The second tool is Scott Helme’s securityheaders.io, which looks at the HTTP headers that web content uses to ask browsers to enforce security on the client side.

There’s a really important reason why these tools are good: they are maintained. As new guidance on ciphers, protocols, signature algorithms or other actions emerges, these tools are updated. And these tools are produced by very small but agile teams — like one person teams, without the bureaucracy (and lag) associated with large enterprise tools. But these shouldn’t be used blindly. These services make suggestions, and you should research them yourselves. Not all of the recommendations may meet your personal risk profile. Personally, I’m uncomfortable with Public-Key-Pins, so that can wait for a while — indeed, Chrome has now signalled they will drop this.

So while PCI is hitting merchants with their DSS-compliance stick (and making it plainly obvious what they have to do), we’re getting a side-effect of having a concrete reason for drawing a line under where our backward compatibility must stretch back to, and the ability to have the web client assist in ensuring the security of content.

November 04, 2017

This Week in HASS – term 4, week 5

Halfway through the last term of the year already! This week our youngest students consider museums as a place to learn about the past. Slightly older students are learning about the states and territories of Australia, as well as their representative birds and animals. Older students are in throes of their class election campaign, preparing […]

November 03, 2017

Work Stuff

Does anyone know of a Linux support company that provides 24*7 support for Ruby and PHP applications? I have a client that is looking for such a company.

Also I’m looking for more consulting work. If anyone knows of an organisation that needs some SE Linux consulting, or support for any of the FOSS software I’ve written then let me know. I take payment by Paypal and Bitcoin as well as all the usual ways. I can make a private build of any of my FOSS software to suit your requirements or if you want features that could be used by other people (and don’t conflict with the general use cases) I can add them on request. Small changes start at $100.

October 31, 2017

Election Activity Bundle

For any Australian Curriculum HASS topic from Prep to at least Year 6, we can safely say “We have a resource on that!” So when, like here in Queensland, an election is suddenly called and teachers want to do some related activities in class, we actually already have the materials for you as these topics […]

October 30, 2017

Logic of Zombies

Most zombie movies feature shuffling hordes which prefer to eat brains but also generally eat any human flesh available. Because in most movies (pretty much everything but the 28 Days Later series [1]) zombies move slowly they rely on flocking to be dangerous.

Generally the main way of killing zombies is severe head injury, so any time zombies succeed in their aim of eating brains they won’t get a new recruit for their horde. The TV series iZombie [2] has zombies that are mostly like normal humans as long as they get enough brains and are smart enough to plan to increase their horde. But most zombies don’t have much intelligence and show no signs of restraint so can’t plan to recruit new zombies. In 28 Days Later the zombies aren’t smart enough to avoid starving to death, in contrast to most zombie movies where the zombies aren’t smart enough to find food other than brains but seem to survive on magic.

For a human to become a member of a shuffling horde of zombies they need to be bitten but not killed. They then need to either decide to refrain from a method of suicide that precludes becoming a zombie (gunshot to the head or jumping off a building) or be unable to go through with it. Most zombie movies (I think everything other than 28 Days Later) have the transition process taking some hours, so there’s plenty of time for an infected person to kill themselves or be killed by others. Then they need to avoid having other humans notice that they are infected and kill them before they turn into a zombie. This doesn’t seem likely to be a common occurrence. It doesn’t seem likely that shuffling zombies (as opposed to the zombies in 28 Days Later or iZombie) would be able to form a horde.

In the unlikely event that shuffling zombies managed to form a horde that police couldn’t deal with I expect that earth-moving machinery could deal with them quickly. The fact that people don’t improvise armoured vehicles capable of squashing zombies is almost as ridiculous as all the sci-fi movies that feature infantry.

It’s obvious that logic isn’t involved in the choice of shuffling zombies. It’s more of a choice of whether to have the jump-scare aspect of 28 Days Later, the human-drama aspect of zombies that pass for human in iZombie, or the terror of a slowly approaching horrible fate that you can’t escape in most zombie movies.

I wonder if any of the music streaming services have a horror-movie playlist that has screechy music to set your nerves on edge without the poor plot of a horror movie. Could listening to scary music in the dark become a thing?

October 27, 2017

Happy Teachers’ Day

OpenSTEM would like to extend warm congratulations to all teachers on Teachers’ Day!! We salute you all for the wonderful job you do for all students every day, often without thanks or praise. It is not lightly that people say “If you can read this, thank a teacher”. Teachers truly are the force that shapes […]

This Week in HASS – term 4, week 4

This week our youngest students are looking at Aboriginal Places, while slightly older students are comparing Australia to other places around the world. Our older students are starting their class election segment of work, covering several parts of the Civics and Citizenship, as well as the History, curricula. Foundation/Kindy/Prep to Year 3 Students in Foundation/Kindy/Prep […]

October 26, 2017

Anarchy in the Office

Some of the best examples I’ve seen of anarchy working have been in corporate environments. This doesn’t mean that they were perfect or even as good as a theoretical system in which a competent manager controlled everything, but they often worked reasonably well.

In a well functioning team, members will encourage others to do their share of the work in the absence of management. So when the manager disappears (doesn’t visit the team more than once a week and doesn’t ask for any meaningful feedback on how things are going) things can still work out. When someone who is capable of doing work isn’t working then other people will suggest that they do their share. If resources for work (such as a sufficiently configured PC for IT work) aren’t available then they can be found (abandoned PCs get stripped and the parts used to upgrade the PCs that need it most).

There was one time where a helpdesk worker who was about to be laid off was assigned to the same office as me (apparently making all the people in his group redundant took some time). So I started teaching him sysadmin skills, assigned work to him, and then recommended that my manager get him transferred to my group. That worked well for everyone.

One difficult case is employees who get in the way of work being done, those who are so incompetent that they break enough things to give negative productivity. One time when I was working in Amsterdam I had two colleagues like that, it turned out that the company had no problem with employees viewing porn at work so no-one asked them to stop looking at porn. Having them paid to look at porn 40 hours a week was much better than having them try to do work. With anarchy there’s little option to get rid of bad people, so just having them hang out and do no work was the only option. I’m not advocating porn at work (it makes for a hostile work environment), but managers at that company did worse things.

One company I worked for appeared (from the non-management perspective) to have a management culture of doing no work. During my time there I did two “annual reviews” in two weeks, and the second was delayed by over 6 months. The manager in question only did the reviews at that time because he was told he couldn’t be promoted until he got the backlog of reviews done, so apparently being more than a year behind in annual reviews was no obstacle to being selected for promotion. On one occasion I raised the issue of a colleague who had done no work for over a year (and didn’t even have a PC to do work) with that manager; his response was “what do you expect me to do”! I expected him to do anything other than blow me off when I reported such a serious problem! But in spite of that strictly work-optional culture enough work was done and the company was a leader in its field.

There has been a lot of research into the supposed benefits of bonuses etc which usually turn out to reduce productivity. Such research is generally ignored presumably because the people who are paid the most are the ones who get to decide whether financial incentives should be offered so they choose the compensation model for the company that benefits themselves. But the fact that teams can be reasonably productive when some people are paid to do nothing and most people have their work allocated by group consensus rather than management plan seems to be a better argument against the typical corporate management.

I think it would be interesting to try to run a company with an explicit anarchic management and see how it compares to the accidental anarchy that so many companies have. The idea would be to have minimal management that just does the basic HR tasks (preventing situations of bullying etc), a flat pay rate for everyone (no bonuses, pay rises, etc) and have workers decide how to spend money for training, facilities, etc. Instead of having middle managers you would have representatives elected from each team to represent their group to senior management.

PS Australia has some of the strictest libel laws in the world. Comments that identify companies or people are likely to be edited or deleted.

October 25, 2017

Teaching High Throughput Computing: An International Comparison of Andragogical Techniques

The importance of High Throughput Computing (HTC), whether high performance or cloud-enabled, is a critical issue for research institutions as data metrics are increasing at a rate greater than the capacity of user systems [1]. As a result, nascent evidence suggests higher research output from institutions that provide access to HTC facilities. However, the necessary skills to operate HTC systems are lacking in the very research communities that would benefit from them.

read more

Spartan and NEMO: Two HPC-Cloud Hybrid Implementations

High Performance Computing systems offer excellent metrics for speed and efficiency when using bare metal hardware, a high speed interconnect, and parallel applications. This however does not represent a significant portion of scientific computational tasks. In contrast, cloud computing has provided management and implementation flexibility at a cost of performance. We therefore suggest two approaches to make HPC resources available in a dynamically reconfigurable hybrid HPC/Cloud architecture. Both can be achieved with few modifications to existing HPC/Cloud environments.

read more

October 20, 2017

Security Session at the 2017 Kernel Summit

For folks attending Open Source Summit Europe next week in Prague, note that there is a security session planned as part of the co-located Kernel Summit technical track.

This year, the Kernel Summit is divided into two components:

  1. An invitation-only maintainer summit of 30 people total, and;
  2. An open kernel summit technical track which is open to all attendees of OSS Europe.

The security session is part of the latter.  The preliminary agenda for the kernel summit technical track was announced by Ted Ts’o here:

There is also a preliminary agenda for the security session, here:

Currently, the agenda includes an update from Kees Cook on the Kernel Self Protection Project, and an update from Jarkko Sakkinen on TPM support.  I’ll provide a summary of the recent Linux Security Summit, depending on available time, perhaps focusing on security namespacing issues.

This agenda is subject to change and if you have any topics to propose, please send an email to the ksummit-discuss list.


This Week in HASS – term 4, week 3

This week our youngest students are looking at special places locally and around Australia, slightly older students are considering plants and animals around the world, while our older students are studying aspects of diversity in Australia. Foundation/Prep/Kindy to Year 3 Students in standalone Foundation/Prep/Kindy (Unit F.4) and combined classes with Year 1 (F-1.4) are thinking […]

October 17, 2017

Checking Your Passwords Against the Have I Been Pwned List

Two months ago, Troy Hunt, the security professional behind Have I been pwned?, released an incredibly comprehensive password list in the hope that it would allow web developers to steer their users away from passwords that have been compromised in past breaches.

While the list released by HIBP is hashed, the plaintext passwords are out there and one should assume that password crackers have access to them. So if you use a password on that list, you can be fairly confident that it's very easy to guess or crack your password.

I wanted to check my active passwords against that list to check whether or not any of them are compromised and should be changed immediately. This meant that I needed to download the list and do these lookups locally since it's not a good idea to send your current passwords to this third-party service.

I put my tool up on Launchpad / PyPI and you are more than welcome to give it a go. Install Postgres and Psycopg2 and then follow the README instructions to set up your database.
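
For a rough manual check of a single password, the underlying idea can also be sketched in the shell. This is not how the tool above works (it uses a Postgres-backed lookup); the password and filename here are placeholders, and note that typing a real password on the command line leaves it in your shell history. The HIBP list stores uppercase SHA-1 hashes, so hash the candidate (echo -n matters, otherwise the trailing newline gets hashed too) and search for it:

hash=$(echo -n 'hunter2' | sha1sum | cut -d' ' -f1 | tr 'a-f' 'A-F')
grep -m 1 "^$hash" pwned-passwords-1.0.txt && echo "This password has been pwned."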

October 13, 2017

This Week in HASS – term 4, week 2

This week our youngest students are looking at transport in the past, slightly older students consider places that are special to people around the world and our oldest students are considering reasons why people might leave their homes to become migrants. Foundation/Prep/Kindy to Year 3 Students in standalone Foundation/Prep/Kindy classes (Unit F.4), as well as […]

October 09, 2017

LUV Main November 2017 Meeting: Ubuntu Artful Aardvark

Nov 8 2017 18:30
Nov 8 2017 20:30
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.


read more