Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

October 24, 2016

Tweaking Referrers For Privacy in Firefox

The Referer header has been a part of the web for a long time. Websites rely on it for a few different purposes (e.g. analytics, ads, CSRF protection) but it can be quite problematic from a privacy perspective.

Thankfully, there are now tools in Firefox to help users and developers mitigate some of these problems.


In a nutshell, the browser adds a Referer header to all outgoing HTTP requests, revealing to the server on the other end the URL of the page you were on when you placed the request. For example, it tells the server where you were when you followed a link to that site, or what page you were on when you requested an image or a script. There are, however, a few limitations to this simplified explanation.
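
For illustration (the URLs here are made up), following a link from a page at https://example.com/private/report?id=1234 to another site results in a request along these lines:

GET /landing-page HTTP/1.1
Host: other.example.net
Referer: https://example.com/private/report?id=1234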

First of all, by default, browsers won't send a referrer if you place a request from an HTTPS page to an HTTP page. This would reveal potentially confidential information (such as the URL path and query string which could contain session tokens or other secret identifiers) from a secure page over an insecure HTTP channel. Firefox will however include a Referer header in HTTPS to HTTPS transitions unless network.http.sendSecureXSiteReferrer (removed in Firefox 52) is set to false in about:config.

Secondly, using the new Referrer Policy specification web developers can override the default behaviour for their pages, including on a per-element basis. This can be used either to increase or to reduce the amount of information present in the referrer.

Legitimate Uses

Because the Referer header has been around for so long, a number of techniques rely on it.

Armed with the Referer information, analytics tools can figure out:

  • where website traffic comes from, and
  • how users are navigating the site.

Another place where the Referer is useful is as a mitigation against cross-site request forgeries. In that case, a website receiving a form submission can reject that form submission if the request originated from a different website.

It's worth pointing out that this CSRF mitigation might be better implemented via a separate header that could be restricted to particularly dangerous requests (e.g. POST and DELETE requests) and only include the information required for that security check (i.e. the origin).
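
That is essentially what the Origin header provides. A hypothetical cross-site form submission carries something like:

POST /transfer HTTP/1.1
Host: bank.example.com
Origin: https://attacker.example.net

and the receiving site can reject it because the origin doesn't match its own, without ever seeing the full referring URL.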

Problems with the Referrer

Unfortunately, this header also creates significant privacy and security concerns.

The most obvious one is that it leaks part of your browsing history to sites you visit as well as all of the resources they pull in (e.g. ads and third-party scripts). It can be quite complicated to fix these leaks in a cross-browser way.

These leaks can also lead to exposing private personally-identifiable information when they are part of the query string. One of the most high-profile examples is the accidental leakage of user searches by

Solutions for Firefox Users

While web developers can use the new mechanisms exposed through the Referrer Policy, Firefox users can also take steps to limit the amount of information they send to websites, advertisers and trackers.

In addition to enabling Firefox's built-in tracking protection by setting privacy.trackingprotection.enabled to true in about:config, which will prevent all network connections to known trackers, users can control when the Referer header is sent by setting network.http.sendRefererHeader to:

  • 0 to never send the header
  • 1 to send the header only when clicking on links and similar elements
  • 2 (default) to send the header on all requests (e.g. images, links, etc.)

It's also possible to put a limit on the maximum amount of information that the header will contain by setting the network.http.referer.trimmingPolicy to:

  • 0 (default) to send the full URL
  • 1 to send the URL without its query string
  • 2 to only send the scheme, host and port

or using the network.http.referer.XOriginTrimmingPolicy option (added in Firefox 52) to only restrict the contents of referrers attached to cross-origin requests.

Site owners can opt to share less information with other sites, but they can't share any more than what the user trimming policies allow.

Another approach is to disable the Referer when doing cross-origin requests (from one site to another). The network.http.referer.XOriginPolicy preference can be set to:

  • 0 (default) to send the referrer in all cases
  • 1 to send a referrer only when the base domains are the same
  • 2 to send a referrer only when the full hostnames match


If you try to remove all referrers (i.e. network.http.sendRefererHeader = 0), you will most likely run into problems on a number of sites, for example:

The first two have been worked around successfully by setting network.http.referer.spoofSource to true, an advanced setting which always sends the destination URL as the referrer, thereby not leaking anything about the original page.

Unfortunately, the last two are examples of the kind of breakage that can only be fixed through a whitelist (an approach supported by the smart referer add-on) or by temporarily using a different browser profile.

My Recommended Settings

As with my cookie recommendations, I recommend strengthening your referrer settings but not disabling (or spoofing) the referrer entirely.

While spoofing does solve many of the breakage problems mentioned above, it also effectively disables the anti-CSRF protections that some sites may rely on and that have tangible user benefits. A better approach is to limit the amount of information that leaks through cross-origin requests.

If you are willing to live with some amount of breakage, you can simply restrict referrers to the same site by setting:

network.http.referer.XOriginPolicy = 2

or to sites which belong to the same organization (i.e. the same base domain / eTLD+1) using:

network.http.referer.XOriginPolicy = 1

This prevents leaks to third parties while giving websites all of the information that they can already see in their own server logs.

On the other hand, if you prefer a weaker but more compatible solution, you can trim cross-origin referrers down to just the scheme, hostname and port:

network.http.referer.XOriginTrimmingPolicy = 2

I have not yet found user-visible breakage using this last configuration. Let me know if you find any!
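
If you prefer to keep these tweaks in a user.js file rather than flipping them by hand in about:config, a minimal sketch of the settings discussed above would look like this (pick whichever combination suits you):

// trim cross-origin referrers down to scheme, host and port
user_pref("network.http.referer.XOriginTrimmingPolicy", 2);
// stricter alternative: only send referrers when the base domains match
// user_pref("network.http.referer.XOriginPolicy", 1);
// optional: built-in tracking protection
user_pref("privacy.trackingprotection.enabled", true);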

October 23, 2016

APM:Plane 3.7.1 released and 3.8.0 beta nearly ready for testing

New plane releases from

Development is really hotting up for fixed wing and quadplanes! I've just released 3.7.1 and I plan on releasing the first beta of the major 3.8.0 release in the next week.
The 3.7.1 release fixes some significant bugs reported by users since the 3.7.0 release. Many thanks for all the great feedback on the release from users!
The 3.8.0 release includes a lot more new stuff, including a new servo mapping backend and a new persistent auto-trim system that makes getting the servo trim just right a breeze.
Happy flying!

Building and Booting Upstream Linux and U-Boot for Orange Pi One ARM Board (with Ethernet)

My home automation setup will make use of Arduinos and also embedded Linux devices. I’m currently looking into a few boards to see if any meet my criteria.

The most important factor for me is that the device must be supported in upstream Linux (preferable stable, but mainline will do) and U-Boot. I do not wish to use any old, crappy, vulnerable vendor trees!

The Orange Pi One is a small, cheap ARM board based on the AllWinner H3 (sun8iw7p1) SOC with a quad-Core Cortex-A7 ARM CPU and 512MB RAM. It has no wifi, but does have an onboard 10/100 Ethernet provided by the SOC (Linux patches incoming). It has no NAND (not supported upstream yet anyway), but does support SD. There is lots of information available at

Orange Pi One

Orange Pi One

Note that while Fedora 25 does not yet support this board specifically it does support both the Orange Pi PC (which is effectively a more full-featured version of this device) and the Orange Pi Lite (which is the same but swaps Ethernet for WiFi). Using either of those configurations should at least boot on the Orange Pi One.

Connecting UART

The UART on the Pi One uses the GND, TX and RX connections which are next to the Ethernet jack. Plug the corresponding cables from a 3.3V UART cable onto these pins and then into a USB port on your machine.

Orange Pi One UART Pin Connections

UART Pin Connections (RX yellow, TX orange, GND black)

Your device will probably be /dev/ttyUSB0, but you can check this with dmesg just after plugging it in.

Now we can simply use screen to connect to the UART, but you’ll have to be in the dialout group.

sudo gpasswd -a ${USER} dialout
newgrp dialout
screen /dev/ttyUSB0 115200

Note that you won’t see anything just yet without an SD card that has a working bootloader. We’ll get to that shortly!

Partition the SD card

First things first, get yourself an SD card.

While U-Boot itself is embedded in the card and doesn’t need the card to be partitioned, a partition will be required later to read the boot files.

U-Boot needs the card to have an msdos partition table with a small boot partition (ext filesystems are now supported) that starts at 1MB. You can use the rest of the card for the root file system (but we’ll boot an initramfs, so it’s not needed).

Assuming your card is at /dev/sdx (replace as necessary, check dmesg after plugging it in if you’re not sure).

sudo umount /dev/sdx* # makes sure it's not mounted
sudo parted -s /dev/sdx \
mklabel msdos \
mkpart primary ext3 1M 10M \
mkpart primary ext4 10M 100%

Now we can format the partitions (upstream U-Boot supports ext3 on the boot partition).
sudo mkfs.ext3 /dev/sdx1
sudo mkfs.ext4 /dev/sdx2

Leave your SD card plugged in, we will need to write the bootloader to it soon!

Upstream U-Boot Bootloader

Install the arm build toolchain dependencies.

sudo dnf install gcc-arm-linux-gnu binutils-arm-linux-gnu

We need to clone the upstream U-Boot Git tree. Note that I’m checking out the release directly (-b v2016.09.01) but you could leave this off to get master, or change it to a different tag if you want.
cd ${HOME}
git clone --depth 1 -b v2016.09.01 git://
cd u-boot

There is a defconfig already for this board, so simply make this and build the bootloader binary.
CROSS_COMPILE=arm-linux-gnu- make orangepi_one_defconfig
CROSS_COMPILE=arm-linux-gnu- make -j$(nproc)

Write the bootloader to the SD card (replace /dev/sdx, like before).
sudo dd if=u-boot-sunxi-with-spl.bin of=/dev/sdx bs=1024 seek=8

Testing our bootloader

Now we can remove the SD card and plug it into the powered off Orange Pi One to see if our bootloader build was successful.

Switch back to your terminal that’s running screen and then power up the Orange Pi One. Note that the device will try to netboot by default, so you’ll need to hit the enter key when you see a line that says the following.

(Or you can just repeatedly hit enter key in the screen console while you turn the device on.)

Note that if you don’t see anything, swap the RX and TX pins on the UART and try again.

With any luck you will then get to a U-Boot prompt where we can check the build by running the version command. It should have the U-Boot version we checked out from Git and today’s build date!

U-Boot version

U-Boot version

Hurrah! If that didn’t work for you, repeat the build and writing steps above. You must have a working bootloader before you can get a kernel to work.

If that worked, power off your device and re-insert the SD card into your computer and mount it at /mnt.

sudo umount /dev/sdx* # unmount everywhere first
sudo mount /dev/sdx1 /mnt

Creating an initramfs

Of course, a kernel won’t be much good without some userspace. Let’s use Fedora’s static busybox package to build a simple initramfs that we can boot on the Orange Pi One.

I have a script that makes this easy; you can grab it from GitHub.

Ensure your SD card is plugged into your computer and mounted at /mnt, then we can copy the file on!

cd ${HOME}
git clone
cd custom-initramfs
./ --arch arm --dir "${PWD}" --tty ttyS0

This will create an initramfs for us in your custom-initramfs directory, called initramfs-arm.cpio.gz. We’re not done yet, though, we need to convert this to the format supported by U-Boot (we’ll write it directly to the SD card).

gunzip initramfs-arm.cpio.gz
sudo mkimage -A arm -T ramdisk -C none -n uInitrd \
-d initramfs-arm.cpio /mnt/uInitrd

Now we have a simple initramfs ready to go.
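
If you want to double-check what was written, mkimage can list the image header back (just a sanity check):

mkimage -l /mnt/uInitrd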

Upstream Linux Kernel

The Ethernet driver has been submitted to the arm-linux mailing list (it’s up to its 4th iteration) and will hopefully land in 4.10 (it’s too late for 4.9 with RC1 already out).

Clone the mainline Linux tree (this will take a while). Note that I’m checking out the latest tagged release (-b v4.9-rc1), but you could leave this off or change it to some other tag if you want.

cd ${HOME}
git clone --depth 1 -b v4.9-rc1 \

Or, if you want to try linux-stable, clone this repo instead.
git clone --depth 1 -b v4.8.4 \
git:// linux

Now go into the linux directory.
cd linux

Patching in EMAC support for SOC

If you don’t need the onboard Ethernet, you can skip this step.

We can get the patches from the Linux kernel’s Patchwork instance, just make sure you’re in the directory for your Linux Git repository.

Note that these will probably only apply cleanly on top of mainline v4.9 Linux tree, not stable v4.8.

# [v4,01/10] ethernet: add sun8i-emac driver
wget \
-O sun8i-emac-patch-1.patch
# [v4,04/10] ARM: dts: sun8i-h3: Add dt node for the syscon
wget \
-O sun8i-emac-patch-4.patch
# [v4,05/10] ARM: dts: sun8i-h3: add sun8i-emac ethernet driver
wget \
-O sun8i-emac-patch-5.patch
# [v4,07/10] ARM: dts: sun8i: Enable sun8i-emac on the Orange PI One
wget \
-O sun8i-emac-patch-7.patch
# [v4,09/10] ARM: sunxi: Enable sun8i-emac driver on sunxi_defconfig
wget \
-O sun8i-emac-patch-9.patch

We will apply these patches (you could also use git apply, or grab the mbox if you want and use git am).

for patch in 1 4 5 7 9 ; do
    patch -p1 < sun8i-emac-patch-${patch}.patch
done

Hopefully that will apply cleanly.

Building the kernel

Now we are ready to build our kernel!

Load the default kernel config for the sunxi boards.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make sunxi_defconfig

If you want, you could modify the kernel config here, for example remove support for other AllWinner SOCs.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make menuconfig

Build the kernel image and device tree blob.
ARCH=arm CROSS_COMPILE=arm-linux-gnu- make -j$(nproc) zImage dtbs

Mount the boot partition and copy on the kernel and device tree file.
sudo cp arch/arm/boot/zImage /mnt/
sudo cp arch/arm/boot/dts/sun8i-h3-orangepi-one.dtb /mnt/

Bootloader config

Next we need to make a bootloader file, boot.cmd, which tells U-Boot what to load and boot (the kernel, device tree and initramfs).

The bootargs line says to output the console to serial and to boot from the ramdisk. Variables are used for the memory locations of the kernel, dtb and initramfs.

Note, if you want to boot from the second partition instead of an initramfs, change root argument to root=/dev/mmcblk0p2 (or other partition as required).

cat > boot.cmd << \EOF
ext2load mmc 0 ${kernel_addr_r} zImage
ext2load mmc 0 ${fdt_addr_r} sun8i-h3-orangepi-one.dtb
ext2load mmc 0 ${ramdisk_addr_r} uInitrd
setenv bootargs console=ttyS0,115200 earlyprintk root=/dev/root rootwait panic=10
bootz ${kernel_addr_r} ${ramdisk_addr_r} ${fdt_addr_r}
EOF

Compile the bootloader file and output it directly to the SD card at /mnt.
sudo mkimage -C none -A arm -T script -d boot.cmd /mnt/boot.scr

Now, unmount your SD card.

sudo umount /dev/sdx*

Testing it all

Insert it into the Orange Pi One and turn it on! Hopefully you’ll see it booting the kernel on your screen terminal window.

You should be greeted by a login prompt. Log in with root (no password).

Login prompt

Login prompt

That’s it! You’ve built your own Linux system for the Orange Pi One!

If you want networking, at the root prompt give the Ethernet device an IP address on your network and test with a ping.
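
Something along these lines at the serial console should do it (assuming the BusyBox build includes the ip applet, the interface comes up as eth0 and 192.168.1.0/24 is your network):

ip link set eth0 up
ip addr add 192.168.1.99/24 dev eth0
ping 192.168.1.1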

Here’s an example:

Networking on Orange Pi One

Networking on Orange Pi One

There is clearly lots more you can do with this device.

Memory usage

Memory usage


October 22, 2016

My Custom Open Source Home Automation Project – Part 3, Roll Out

In Part 1 I discussed motivation and research where I decided to build a custom, open source wired solution. Part 2 discussed the prototype and other experiments.

Because we were having to fit in with the builder, I didn’t have enough time to finalise the smart system, so I needed a dumb mode. This Part 3 is about rolling out dumb mode in Smart house!

Operation “dumb mode, Smart house” begins

We had a proven prototype, now we had to translate that into a house-wide implementation.

First we designed and mapped out the cabling.

Cabling Plans

Cabling Plans


  • Cat5e (sometimes multiple runs) for room Arduinos
  • Cat5e to windows for future curtain motors
  • Reed switch cables to light switch
  • Regular Cat6 data cabling too, of course!
  • Whatever else we thought we might need down the track

Time was tight (fitting in with the builder) but we got there (mostly).

  • Ran almost 2 km of cable in total
  • This was a LOT of work and took a LOT of time


Cabled Wall Plates

Cabled Wall Plates

Cabled Wireless Access Point

Cabled Wireless Access Point

Cable Run

Cable Run

Electrical cable

This is the electrician’s job.

  • Electrician ran each bank of lights on their own circuit
  • Multiple additional electrical circuits
    • HA on its own electrical circuit, UPS backed
    • Study/computers on own circuit, UPS backed
    • Various others like dryer, ironing board, entertainment unit, alarm, ceiling fans, ovens, etc
  • Can leave the house and turn everything off
    (except essentials)


The relays had to be reliable, but also available off-the-shelf as I didn’t want to get something that’s custom or hard to replace. Again, for devices that draw too much current for the relay, it will throw a contactor instead so that the device can still be controlled.

  • Went with Finder 39 series relays, specifically
  • Very thin profile
  • Built in fuses
  • Common bus bars
  • Single Pole Double Throw (SPDT)
Finder Relays with Din Mount Module

Finder Relays with Din Mount Module

These are triggered by 24V DC which switches the 240V AC for the circuit. The light switches are running 24V and when you press the button it completes the circuit, providing 24V input to the relay (which turns on the 240V and therefore the light).

There are newer relays now which have an input range (e.g. 0-24V), I would probably use those instead if I was doing it again today so that it can be more easily fired from an array of outputs (not just a 24V relay driver shield).

The fact that they are SPDT means that I can set the relay to be normally open (NO) in which case the relay is off by default, or normally closed (NC) in which case the relay is on by default. This means that if the smart system goes down and can’t provide voltage to the input side of the relay, certain banks of lights (such as the hallway, stairs, kitchen, bathroom and garage lights) will turn on (so that the house is safe while I fix it).

Bank of Relays

Bank of Relays

In the photo above you can see the Cat5e 24V lines from the light switch circuits coming into the grey terminal block at the bottom. They are then cabled to the input side of the relays. This means that we don’t touch any AC and I can easily change what’s providing the input to the relay (to a relay board on an Arduino, for example).

Rack (excuse the messy data cabling)

Rack (excuse the messy data cabling)

There are two racks, one upstairs and one downstairs, that provide the infrastructure for the HA and data networks.

PSU running at 24V

PSU running at 24V

Each rack has a power supply unit (PSU) running at 24V which provides the power for the light switches in dumb mode. These are running in parallel to provide redundancy for the dumb network in case one dies.

You can see that there is very little voltage drop.

Relay Timers

The Finder relays also support timer modules, which is very handy for something that you want to automatically turn off after a certain (configurable) amount of time.

  • Heated towel racks are bell press switches
  • Uses a timer relay to auto turn off
  • Modes and delay configurable via dip switches on relay

UPS backed GPO Circuits

UPS in-lined GPO Circuits

UPS in-lined GPO Circuits

Two GPO circuits are backed by UPS which I can simply plug in-line and feed back to the circuit. These are the HA network as well as the computer network. If the power is cut to the house, my HA will still have power (for a while) and the Internet connection will remain up.

Clearly I’ll have no lights if the power is cut, but I could power some emergency LED lighting strips from the UPS lines – that hasn’t been done yet though.


The switches for dumb mode are also regular, off-the-shelf light switches.

  • Playing with light switches (yay DC!)
  • Cabling up the switches using standard Clipsal gear
  • Single Cat5e cable can power up to 4 switches
  • Support one, two, three and four way switches
  • Bedroom switches use switch with LED (can see where the switch is at night)
Bathroom Light Switch  (Dumb Mode)

Bathroom Light Switch (Dumb Mode)

We have up to 4-way switches so you can turn a single light on or off from 4 locations. The entrance light is wired up this way, you can control it from:

  • Front door
  • Hallway near internal garage door
  • Bottom of the stairs
  • Top of the stairs

A single Cat5e cable can run up to 4 switches.

Cat5e Cabling for Light Switch

Cat5e Cabling for Light Switch

  • Blue and orange +ve
  • White-blue and white-orange -ve
  • Green switch 1
  • White-green switch 2
  • Brown switch 3
  • White-brown switch 4

Note that we’re using two wires together for +ve and -ve which helps increase capacity and reduce voltage drop (followed the Clipsal standard wiring of blue and orange pairs).

Later in Smart Mode, this Cat5e cable will be re-purposed as Ethernet for an Arduino or embedded Linux device.

Hallway Passive Infrared (PIR) Motion Sensors

I wanted the lights in the hallway to automatically turn on at night if someone was walking to the bathroom or what not.

  • Upstairs hallway uses two 24V PIRs in parallel
  • Either one turns on lights
  • Connected to the dumb mode network, so they fire the relay just like everything else
  • Can be overridden by switch on the wall
  • Adjustable for sensitivity, light level and length of time
Hallway PIR

Hallway PIRs

Tunable settings for PIR

Tunable settings for PIR

Tweaking the PIR means I have it only turning on the lights at night.

Dumb mode results

We have been using dumb mode for over a year now, it has never skipped a beat.

Now I just need to find the time to start working on the Smart Mode…

My Custom Open Source Home Automation Project – Part 2, Design and Prototype

In Part 1 I discussed motivation and research where I decided to build a custom, open source wired solution. In this Part 2 I discuss the design and the prototype that proved the design.

Wired Design

Although there are options like 1-Wire, I decided that I wanted more flexibility at the light switches.

  • Inspired by Jon Oxer’s awesome
  • Individual circuits for lights and some General Purpose Outlets (GPO)
  • Bank of relays control the circuits
  • Arduinos and/or embedded Linux devices control the relays

How would it work?

  • One Arduino or embedded Linux device per room
  • Run C-Bus Cat5e cable to light switches to power Arduino, provide access to HA network
  • Room Arduino takes buttons (lights, fans, etc) and sensors (temp, humidity, reed switch, PIR, etc) as inputs
  • Room Arduino sends network message to relay Arduino
  • Arduino with relay shield fires relay to enable/disable power to device (such as a light, fan, etc)
Basic design

Basic design

Of course this doesn’t just control lights, but also towel racks, fans, etc. Running C-Bus cable means that I can more easily switch to their proprietary system if I fail (or perhaps sell the house).

A relay is fine for devices such as LED downlights which don’t draw much current, but for larger devices (such as ovens and airconditioning) I will use the relay to throw a contactor.

Network Messaging

For the network messaging I chose MQTT (as many others have).

  • Lightweight
  • Supports encryption
  • Uses publish/subscribe model with a broker
  • Very easy to set up and well supported by Arduino and Linux

The way it works would be for the relay driver to subscribe to topics from devices around the house. Those devices publish messages to the topic, such as when buttons are pressed or sensors are triggered. The relay driver parses the messages and reacts accordingly (turning a device on or off).
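
As a rough illustration using the mosquitto command line clients (the broker hostname and topic names here are invented), the relay driver would sit on something like:

mosquitto_sub -h broker.local -t 'house/+/light/#'

while a light switch press publishes a message such as:

mosquitto_pub -h broker.local -t 'house/kitchen/light/main' -m 'on'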

Cabling Benefits

  • More secure than wireless
  • More future proof
  • DC only, no need to touch AC
  • Provides PoE for devices and motors
  • Can still use wireless (e.g. ZigBee) if I want to
  • Convert to proprietary system (C-Bus) if I fail
  • My brother is a certified cabler 🙂


  • Got some Freeduinos and relay boards (from Freetronics)
  • Hacked around with some ideas, was able to control the relays
  • Basic concept seems doable
  • More on that later..

However, I realised I wasn’t going to get this working in time. I needed a “dumb” mode that wouldn’t require any computers to turn on the lights at least.

Dumb Mode Prototype

So, the dumb mode looks like this.

  • Use the same Cat5e cabling so that Arduino devices can be installed later for smart mode
  • Use standard off the shelf Clipsal light switches
    • Support one, two, three and four way switching
  • Run 24 volts over the Cat5e
  • Light switch completes the circuit which feeds 24 volts into the relay
  • Relay fires the circuit and light turns on!
Dumb mode design

Dumb mode design

We created a demo board that supported both dumb mode and smart mode, which proved that the concept worked well.

HA Prototype Board

HA Prototype Board

The board has:

  • Six LEDs representing the lights
  • Networking switch for the smart network
  • One Arduino as the input (representing the light switch)
  • One Arduino as the relay driver
  • One Raspberry Pi (running Fedora) as the MQTT broker
  • Several dumb mode multi-way light switches
  • Smart inputs such as:
    • Reed switch
    • Temperature and humidity sensor
    • Light sensor
    • PIR

The dumb lights work without input from the smart network.

In smart mode, the Arduinos and Pi are on the same HA network and connect to the broker running on the Pi.

The input Arduino publishes MQTT messages from inputs such as sensors and buttons. The relay Arduino is subscribed to those topics and responds accordingly (e.g. controlling the relay when appropriate).

Dimming Lights

Also played with pulse width modulation (PWM) for LED downlights.

  • Most LEDs come with smart dimmable drivers (power packs) that use leading or trailing edge on AC
  • Wanted to control brightness via DC
  • Used an Arduino to program a small ATtiny for PWM
  • Worked, but only with non-smart driver
  • Got electrician to install manual dimmers for now where needed, such as family room


Given the smart power packs, I cannot dim on the DC side, unless I replace the power packs (which is expensive).

In future, I plan to put some leading/trailing edge dimmers inline on the AC side of my relays (which will need an electrician) which I can control from an Arduino or embedded Linux device via a terminal block. This should be more convenient than replacing the power packs in the ceiling space and running lines to control the ATTiny devices.


Doors are complicated and have quite a few requirements.

  • Need to also work with physical key
  • Once in, door should be unlocked from inside
  • Need to be fire-able from an Arduino
  • Work with multiple smart inputs, e.g. RFID, pin pad
  • Played with wireless rolling code remotes; an Arduino can fire the remote (Jon Oxer has done this)
  • Maybe pair this with deadlock and electric strike
  • Perhaps use electronic door closers

I’m not sure what route I will take here yet, but it’s been cabled up and electronic strike plates are in place.

At this point I had a successful prototype which was ready to be rolled out across the house. Stay tuned for Part 3!

My Custom Open Source Home Automation Project – Part 1, Motivation and Research

In January 2016 I gave a presentation at the Canberra Linux Users Group about my journey developing my own Open Source home automation system. This is an adaptation of that presentation for my blog. Big thanks to my brother, Tim, for all his help with this project!

Comments and feedback welcome.

Why home automation?

  • It’s cool
  • Good way to learn something new
  • Leverage modern technology to make things easier in the home

At the same time, it’s kinda scary. There is a lack of standards and lack of decent security applied to most Internet of Things (IoT) solutions.

Motivation and opportunity

  • Building a new house
  • Chance to do things more easily at frame stage while there are no walls
Frame stage

Frame stage

Some things that I want to do with HA

  • Respond to the environment and people in the home
  • Alert me when there’s a problem (fridge left open, oven left on!)
  • Gather information about the home, e.g.
    • Temperature, humidity, CO2, light level
    • Open doors and windows and whether the house is locked
    • Electricity usage
  • Manage lighting automatically, switches, PIR, mood, sunset, etc
  • Control power circuits
  • Manage access to the house via pin pad, proximity card, voice activation, retina scans
  • Control gadgets, door bell/intercom, hot water, AC heating/cooling, exhaust fans, blinds and curtains, garage door
  • Automate security system
  • Integrate media around the house (movie starts, dim the lights!)
  • Water my garden, and more..

My requirements for HA

  • Open
  • Secure
  • Extensible
  • Prefer DC only, not AC
  • High Wife Acceptance Factor (important!)

There’s no existing open source IoT framework that I could simply install, sit back and enjoy. Where’s the fun in that, anyway?

Research time!

Three main options:

  • Wireless
  • Wired
  • Combination of both

Wireless Solutions

  • Dominated by proprietary Z-Wave (although it has since become more open)
  • Open standards-based options also exist, like ZigBee and 6LoWPAN
Z-Wave modules

Z-Wave modules

Wireless Pros

  • Lots of different gadgets available
  • Gadgets are pretty cheap and easy to find
  • Easy to get up and running
  • Widely supported by all kinds of systems

Wireless Cons

  • Wireless gadgets are pretty cheap and nasty
  • Most are not open
  • Often not updateable, potentially insecure
  • Connect to AC
  • Replacing or installing a unit requires an electrician
  • Often talk to the “cloud”

So yeah, I could whack those up around my house, install a bridge and move on with my life, but…

  • Not as much fun!
  • Don’t want to rely on wireless
  • Don’t want to rely on an electrician
  • Don’t really want to touch AC
  • Cheap gadgets that are never updated
  • Security vulnerabilities make it high risk

Wired Solutions

  • Proprietary systems like Clipsal’s C-Bus
  • Open standards based systems like KNX
  • Custom hardware
  • Expensive
  • 🙁
Clipsal C-Bus light switch

Clipsal C-Bus light switch

Cabling Benefits

  • More secure than wireless
  • More future proof
  • DC only, no need to touch AC
  • Provides PoE for devices and motors
  • Can still use wireless (e.g. ZigBee) if I want to
  • Convert to proprietary system (C-Bus) if I fail
  • My brother is a certified cabler 🙂

Technology Choice Overview

So it comes down to this.

  • Z-Wave = OUT
  • ZigBee/6LoWPAN = MAYBE IN
  • C-Bus = OUT (unless I screw up)
  • KNX = OUT
  • Arduino, Raspberry Pi = IN

I went with a custom wired system, after all, it seems like a lot more fun…

Stay tuned for Part 2!

Another Broken Nexus 5

In late 2013 I bought a Nexus 5 for my wife [1]. It’s a good phone and I generally have no complaints about the way it works. In the middle of 2016 I had to make a warranty claim when the original Nexus 5 stopped working [2]. Google’s warranty support was ok, the call-back was good but unfortunately there was some confusion which delayed replacement.

Once the confusion about the IMEI was resolved the warranty replacement method was to bill my credit card for a replacement phone and reverse the charge if/when they got the original phone back and found it to have a defect covered by warranty. This policy meant that I got a new phone sooner as they didn’t need to get the old phone first. This is a huge benefit for defects that don’t make the phone unusable as you will never be without a phone. Also if the user determines that the breakage was their fault they can just refrain from sending in the old phone.

Today my wife’s latest Nexus 5 developed a problem. It turned itself off and went into a reboot loop when connected to the charger. Also one of the clips on the rear case had popped out and other clips popped out when I pushed it back in. It appears (without opening the phone) that the battery may have grown larger (which is a common symptom of battery related problems). The phone is slightly less than 3 years old, so if I had got the extended warranty then I would have got a replacement.

Now I’m about to buy a Nexus 6P (because the Pixel is ridiculously expensive) which is $700 including postage. Kogan offers me a 3 year warranty for an extra $108. Obviously in retrospect spending an extra $100 would have been a benefit for the Nexus 5. But the first question is whether the new phone is going to have a probability greater than 1/7 of failing due to something other than user error in years 2 and 3. For an extended warranty to provide any benefit the phone has to have a problem that doesn’t occur in the first year (or a problem in a replacement phone after the first phone was replaced). The phone also has to not be lost, stolen, or dropped in a pool by its owner. While my wife and I have a good record of not losing or breaking phones the probability of it happening isn’t zero.

The Nexus 5 that just died can be replaced for 2/3 of the original price. The value of the old Nexus 5 to me is less than 2/3 of the original price as buying a newer better phone is the option I want. The value of an old phone to me decreases faster than the replacement cost because I don’t want to buy an old phone.

For an extended warranty to be a good deal for me I think it would have to cost significantly less than 1/10 of the purchase price due to the low probability of failure in that time period and the decreasing value of a replacement outdated phone. So even though my last choice to skip an extended warranty ended up not paying out I expect that overall I will be financially ahead if I keep self-insuring, and I’m sure that I have already saved money by self-insuring all my previous devices.

October 20, 2016

Workaround for opal-prd using 100% CPU

opal-prd is the Processor RunTime Diagnostics daemon, the userspace process that on OpenPower systems is responsible for some of the runtime diagnostics. Although a userspace process, it memory maps (as in mmap) in some code loaded by early firmware (Hostboot) called the HostBoot RunTime (HBRT) and runs it, using calls to the kernel to accomplish any needed operations (e.g. reading/writing registers inside the chip). Running this in user space gives us benefits such as being able to attach gdb, recover from segfaults etc.

The reason this code is shipped as part of firmware rather than as an OS package is that it is very system specific, and it would be a giant pain to update a package in every Linux distribution every time a new chip or machine was introduced.

Anyway, there’s a bug in the HBRT code that means if there’s an ECC error in the HBEL (HostBoot Error Log) partition in the system flash (“bios” or “pnor”… the flash where your system firmware lives), the opal-prd process may get stuck chewing up 100% CPU and not doing anything useful. There’s a fix for this.

You will notice a problem if the opal-prd process is using 100% CPU and the last log messages are something like:

HBRT: ERRL:>>ErrlManager::ErrlManager constructor.
HBRT: ERRL:iv_hiddenErrorLogsEnable = 0x0
HBRT: ERRL:>>setupPnorInfo
HBRT: PNOR:>>RtPnor::getSectionInfo
HBRT: PNOR:>>RtPnor::readFromDevice: i_offset=0x0, i_procId=0 sec=11 size=0x20000 ecc=1
HBRT: PNOR:RtPnor::readFromDevice: removing ECC...
HBRT: PNOR:RtPnor::readFromDevice> Uncorrectable ECC error : chip=0,offset=0x0

(the parameters to readFromDevice may differ)

Luckily, there’s a simple workaround to fix it all up! You will need the pflash utility. Primarily, pflash is meant only for developers and those who know what they’re doing. You can turn your computer into a brick using it.

pflash is packaged in Ubuntu 16.10 and RHEL 7.3, but you can otherwise build it from source easily enough:

git clone
cd skiboot/external/pflash
make

Now that you have pflash, you just need to erase the HBEL partition and write (ECC) zeros:

dd if=/dev/zero of=/tmp/hbel bs=1 count=147456
pflash -P HBEL -e
pflash -P HBEL -p /tmp/hbel

Note: you cannot just erase the partition or use the pflash option to do an ECC erase; you may render your system unbootable if you get it wrong.

After that, restart opal-prd however your distro handles restarting daemons (e.g. systemctl restart opal-prd.service) and all should be well.

October 17, 2016

Common Russian Media Themes, Has Western Liberal Capitalist Democracy Failed?, and More

After watching international media for a while (particularly those who aren't part of the standard 'Western Alliance') you'll realise that there are common themes: they are clearly against the current international order, believe that things will be better if changed, and want the rules changed (especially as they seem to have favoured some countries who went through the World Wars relatively

CanberraUAV Outback Challenge 2016 Debrief

I have finally written up an article on our successful Outback Challenge 2016 entry

The members of CanberraUAV are home from the Outback Challenge and life is starting to return to normal after an extremely hectic (and fun!) time preparing our aircraft for this year’s challenge. It is time to write up our usual debrief article to give those of you who weren't able to be there some idea of what happened.

For reference here are the articles from the 2012 and 2014 challenges:

Medical Express

The Outback Challenge is held every two years in Queensland, Australia. As the challenge was completed by multiple teams in 2014 the organisers needed to come up with a new challenge. The new challenge for 2016 was called "Medical Express" and the challenge was to retrieve a blood sample from Joe at a remote landing site.

The back-story is that poor Outback Joe is trapped behind flood waters on his remote property in Queensland. Unfortunately he is ill, and doctors at a nearby hospital need a blood sample to diagnose his illness. A UAV is called in to fly a 23km path to a place where Joe is waiting. We only know Joe’s approximate position (within 100 meters) so
first off the UAV needs to find Joe using an on-board camera. After finding Joe the aircraft needs to find a good landing site in an area littered with obstacles. The landing site needs to be more than 30 meters from Joe (to meet CASA safety requirements) but less than 80 meters (so Joe doesn't have to walk too far).

The aircraft then needs to perform an automatic landing, and then wait for Joe to load the blood sample into an easily accessible carrier. Joe then presses a button to indicate he is done loading the blood sample. The aircraft needs to wait for one minute for Joe to get clear, and then perform an automatic takeoff and flight back to the home location to deliver the blood sample to waiting hospital staff.

That story hides a lot of very challenging detail. For example, the UAV must maintain continuous telemetry contact with the operators back at the base. That needs to be done despite not knowing exactly where the landing site will be until the day before the challenge starts.

Also, the landing area has trees around it and no landing strip, so a normal fixed wing landing and takeoff is very problematic. The organisers wanted teams to come up with a VTOL solution and in this
they were very successful, kickstarting a huge effort to develop the VTOL capabilities of multiple open source autopilot systems.

The organisers also had provided a strict flight path that the teams have to follow to reach the search area where Joe is located. The winding path over the rural terrain of Dalby is strictly enforced, with any aircraft breaching the geofence required to immediately and automatically terminate by crashing into the ground.

The organisers also gave quite a wide range of flight distance and weather conditions that the teams had to be able to cope with. The distance to the search area could be up to 30km, meaning a round trip distance of 60km without taking into account all the time spent above the search area trying to find Joe. The teams had to be able to fly in up to 25 knots average wind on the ground, which could mean well over 30 knots in the air.

The mission also needed to be completed in one hour, including the time spent loading the blood sample and circling above Joe.

Improving Memory

I’ve just attended a lecture about improving memory, mostly about mnemonic techniques. I’m not against learning techniques to improve memory and I think it’s good to teach kids a variety of things many of which won’t be needed when they are younger as you never know which kids will need various skills. But I disagree with the assertion that we are losing valuable skills due to “digital amnesia”.

Nowadays we have programs to check spelling so we can avoid the effort of remembering to spell difficult words like mnemonic, calendar apps on our phones that link to addresses and phone numbers, and the ability to Google the world’s knowledge from the bathroom. So the question is, what do we need to remember?

For remembering phone numbers it seems that all we need is to remember numbers that we might call in the event of a mobile phone being lost or running out of battery charge. That would be a close friend or relative and maybe a taxi company (and 13CABS isn’t difficult to remember).

Remembering addresses (street numbers etc) doesn’t seem very useful in any situation. Remembering the way to get to a place is useful and it seems to me that the way the navigation programs operate works against this. To remember a route you would want to travel the same way on multiple occasions and use a relatively simple route. The way that Google maps tends to give the more confusing routes (IE routes varying by the day and routes which take all shortcuts) works against this.

I think that spending time improving memory skills is useful, but it will either take time away from learning other skills that are more useful to most people nowadays or take time away from leisure activities. If improving memory skills is fun for you then it’s probably better than most hobbies (it’s cheap and provides some minor benefits in life).

When I was in primary school it was considered important to make kids memorise their “times tables”. I’m sure that memorising the multiplication of all numbers less than 13 is useful to some people, but I never felt a need to do it. When I was young I could multiply any pair of 2 digit numbers as quickly as most kids could remember the result. The big difference was that most kids needed a calculator to multiply any number by 13 which is a significant disadvantage.

What We Must Memorise

Nowadays the biggest memory issue is with passwords (the Correct Horse Battery Staple XKCD comic is worth reading [1]). Teaching mnemonic techniques for the purpose of memorising passwords would probably be a good idea – and would probably get more interest from the audience.

One interesting corner-case of passwords is ATM PIN numbers. The Wikipedia page about PIN numbers states that 4-12 digits can be used for PINs [2]. The 4 digit PIN was initially chosen because John Adrian Shepherd-Barron (who is credited with inventing the ATM) was convinced by his wife that 6 digits would be too difficult to memorise. The fact that hardly any banks outside Switzerland use more than 4 digits suggests that Mrs Shepherd-Barron had a point. The fact that this was decided in the 60’s proves that it’s not “digital amnesia”.

We also have to memorise how to use various supposedly user-friendly programs. If you observe an iPhone or Mac being used by someone who hasn’t used one before it becomes obvious that they really aren’t so user friendly and users need to memorise many operations. This is not a criticism of Apple, some tasks are inherently complex and require some complexity of the user interface. The limitations of the basic UI facilities become more obvious when there are operations like palm-swiping the screen for a screen-shot and a double-tap plus drag for a 1 finger zoom on Android.

What else do we need to memorise?

October 16, 2016

In Memory of Gary Curtis

This week we learnt of the sad passing of a long term regular attendee of Humbug, Gary Curtis. Gary was often early, and nearly always the last to leave.

One of Gary’s prized possessions was his car, more specifically his LINUX number plate. Gary was very happy to be our official airport-conference shuttle for keynote speakers in 2011 with this number plate.

Gary always had very strong opinions about how Humbug and our Humbug organised conferences should be run, but rarely took to running the events himself. It became a perennial joke at Humbug AGMs that we would always nominate Gary for positions, and he would always decline. Eventually we worked out that Humbug was one of the few times Gary wasn’t in charge of a group, and that was relaxing for him.

A topic that Gary always came back to was genealogy, especially the phone app he was working on.

A peculiar quirk of Humbug meetings is that they run on Saturday nights, and thus we often have meetings at the same time as Australian elections. Gary was always keen to keep up with the election on the night, often with interesting insights.

My most personal memory of Gary was our road trip after OSDC New Zealand, we did something like three days of driving around in a rental car, staying at hotels along the way. Gary’s driving did little to impress me, but he was certainly enjoying himself.

Gary will be missed.



October 13, 2016

Activating IPv6 stable privacy addressing from RFC7217

Understand stable privacy addressing

In Three new things to know about deploying IPv6 I described the new IPv6 Interface Identifier creation scheme in RFC7217.* This scheme results in an IPv6 address which is stable, and yet has no relationship to the device's MAC address, nor can an address generated by the scheme be used to track the machine as it moves to other subnets.

This isn't the same as RFC4941 IP privacy addressing. RFC4941 addresses are more private, as they change regularly. But that instability makes attaching to a service on the host very painful. It's also not a great scheme for support staff: an unstable address complicates network fault finding. RFC7217 seeks a compromise position which provides an address which is difficult to use for host tracking, whilst retaining a stable address within a subnet to simplify fault finding and make for easy hosting of services such as SSH.

The older RFC4291 EUI-64 Interface Identifier scheme is being deprecated in favour of RFC7217 stable privacy addressing.

For servers you probably want to continue to use static addressing with a unique address per service. That is, a server running multiple services will hold multiple IPv6 addresses, and each service on the server bind()s to its address.
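
For example (using documentation-prefix addresses that you would replace with your own), you might dedicate one address to each service and point each daemon's bind or listen configuration at its own address:

ip -6 addr add 2001:db8:1:2::25/64 dev eth0    # address for the mail service
ip -6 addr add 2001:db8:1:2::80/64 dev eth0    # address for the web service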

Configure stable privacy addressing

To activate the RFC7217 stable privacy addressing scheme in a Linux which uses Network Manager (Fedora, Ubuntu, etc) create a file /etc/NetworkManager/conf.d/99-local.conf containing:


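A minimal sketch of such a drop-in, assuming NetworkManager's ipv6.addr-gen-mode connection default is the knob you want:

[connection]
# generate RFC7217 stable privacy interface identifiers
ipv6.addr-gen-mode=stable-privacy
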
Then restart Network Manager, so that the configuration file is read, and restart the interface. You can restart an interface by physically unplugging it or by:

systemctl restart NetworkManager
ip link set dev eth0 down && ip link set dev eth0 up

This may drop your SSH session if you are accessing the host remotely.

Verify stable privacy addressing

Check the results with:

ip --family inet6 addr show dev eth0 scope global
1: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2001:db8:1:2:b03a:86e8:e163:2714/64 scope global noprefixroute dynamic 
       valid_lft 2591932sec preferred_lft 604732sec

The Interface Identifier part of the IPv6 address (the lower 64 bits, b03a:86e8:e163:2714 in the example above) should have changed from the EUI-64 Interface Identifier; that is, the Interface Identifier should not contain any bytes of the interface's MAC address. The other parts of the IPv6 address — the Network Prefix, Subnet Identifier and Prefix Length — should not have changed.

If you repeat the test on a different subnet then the Interface Identifier should change. Upon returning to the original subnet the Interface Identifier should return to the original value.

One more fix for AMP WordPress plugin

With the recent AMP update at Google you may notice an increased number of AMP parsing errors in your search console. They look like:

The mandatory tag 'html ⚡ for top-level html' is missing or incorrect.

Some plugins, e.g. Add Meta Tags, may alter the output of language_attributes() using the 'language_attributes' filter, adding XML-related attributes which are disallowed in AMP, and that causes the error mentioned above.

I have made a fix solving this problem and opened a pull request for the WordPress AMP plugin; you may see it here:

October 10, 2016

LUV Main November 2016 Meeting: The Internet of Toys / Special General Meeting / Functional Programming

Nov 2 2016 18:30
Nov 2 2016 20:30

6th Floor, 200 Victoria St. Carlton VIC 3053


• Nick Moore, The Internet of Toys: ESP8266 and MicroPython
• Special General Meeting
• Les Kitchen, Functional Programming

200 Victoria St. Carlton VIC 3053 (the EPA building)

Late arrivals needing access to the building and the sixth floor please call 0490 627 326.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

November 2, 2016 - 18:30


October 09, 2016

LUV Beginners October Meeting: Build a Simple RC Bot!

Oct 15 2016 12:30
Oct 15 2016 16:30

Infoxchange, 33 Elizabeth St. Richmond

Build a Simple RC Bot! Getting started with Arduino and Android

In this introductory talk, Ivan Lim Siu Kee will take you through the process of building a simple remote controlled bot. Find out how you can get started on building simple remote controlled bots of your own. While effort has been made to keep the presentation as beginner friendly as possible, some programming experience is still recommended to get the most out of this talk.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)

Late arrivals, please call (0490) 049 589 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

October 15, 2016 - 12:30


Converting to a ZFS rootfs

My main desktop/server machine (running Debian sid) at home has been running XFS on mdadm raid-1 on a pair of SSDs for the last few years. A few days ago, one of the SSDs died.

I’ve been planning to switch to ZFS as the root filesystem for a while now, so instead of just replacing the failed drive, I took the opportunity to convert it.

NOTE: at this point in time, ZFS On Linux does NOT support TRIM for either datasets or zvols on SSD. There’s a patch almost ready (TRIM/Discard support from Nexenta #3656), so I’m betting on that getting merged before it becomes an issue for me.

Here’s the procedure I came up with:

1. Buy new disks, shutdown machine, install new disks, reboot.

The details of this stage are unimportant, and the only thing to note is that I’m switching from mdadm RAID-1 with two SSDs to ZFS with two mirrored pairs (RAID-10) on four SSDs (Crucial MX300 275G – at around $100 AUD each, they’re hard to resist). Buying four 275G SSDs is slightly more expensive than buying two of the 525G models, but will perform a lot better.

When installed in the machine, they ended up as /dev/sdp, /dev/sdq, /dev/sdr, and /dev/sds. I’ll be using the symlinks in /dev/disk/by-id/ for the zpool, but for partition and setup, it’s easiest to use the /dev/sd? device nodes.

2. Partition the disks identically with gpt partition tables, using gdisk and sgdisk.

I need:

  • A small partition (type EF02, 1MB) for grub to install itself in. Needed on gpt.
  • A small partition (type EF00, 1GB) for EFI System. I’m not currently booting with UEFI but I want the option to move to it later.
  • A small partition (type 8300, 2GB) for /boot.

    I want /boot on a separate partition to make it easier to recover from problems that might occur with future upgrades. 2GB might seem excessive, but as this is my tftp & dhcp server I can’t rely on network boot for rescues, so I want to be able to put rescue ISO images in there and boot them with grub and memdisk.

    This will be mdadm RAID-1, with 4 copies.

  • A larger partition (type 8200, 4GB) for swap. With 4 identically partitioned SSDs, I’ll end up with 16GB swap (using zswap for block-device backed compressed RAM swap)

  • A large partition (type bf07, 210GB) for my rootfs

  • A small partition (type bf08, 2GB) to provide ZIL for my HDD zpools

  • A larger partition (type bf09, 32GB) to provide L2ARC for my HDD zpools

ZFS On Linux uses partition type bf07 (“Solaris Reserved 1”) natively, but doesn’t seem to care what the partition types are for ZIL and L2ARC. I arbitrarily used bf08 (“Solaris Reserved 2”) and bf09 (“Solaris Reserved 3”) for easy identification. I’ll set these up later, once I’ve got the system booted – I don’t want to risk breaking my existing zpools by taking away their ZIL and L2ARC (and forgetting to zpool remove them, which I might possibly have done once) if I have to repartition.

I used gdisk to interactively set up the partitions:

# gdisk -l /dev/sdp
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdp: 537234768 sectors, 256.2 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 4234FE49-FCF0-48AE-828B-3C52448E8CBD
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 537234734
Partitions will be aligned on 8-sector boundaries
Total free space is 6 sectors (3.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              40            2047   1004.0 KiB  EF02  BIOS boot partition
   2            2048         2099199   1024.0 MiB  EF00  EFI System
   3         2099200         6293503   2.0 GiB     8300  Linux filesystem
   4         6293504        14682111   4.0 GiB     8200  Linux swap
   5        14682112       455084031   210.0 GiB   BF07  Solaris Reserved 1
   6       455084032       459278335   2.0 GiB     BF08  Solaris Reserved 2
   7       459278336       537234734   37.2 GiB    BF09  Solaris Reserved 3

I then cloned the partition table to the other three SSDs with this little script:

#! /bin/bash

# source disk whose partition table we just created, and the targets to copy it to
src='sdp'
targets=( 'sdq' 'sdr' 'sds' )

for tgt in "${targets[@]}"; do
  sgdisk --replicate="/dev/$tgt" "/dev/$src"
  sgdisk --randomize-guids "/dev/$tgt"
done
3. Create the mdadm for /boot, the zpool, and the root filesystem.

Most rootfs on ZFS guides that I’ve seen say to call the pool rpool, then create a dataset called "$(hostname)-1" and then create a ROOT dataset under that. So on my machine, that would be rpool/ganesh-1/ROOT. Some reverse the order of hostname and the rootfs dataset, for rpool/ROOT/ganesh-1.

There might be uses for this naming scheme in other environments, but not in mine. And, to me, it looks ugly. So I’ll use just $(hostname)/root for the rootfs, i.e. ganesh/root.

I wrote a script to automate it, figuring I’d probably have to do it several times in order to optimise performance. Also, I wanted to document the procedure for future reference, and have scripts that would be trivial to modify for other machines.

#! /bin/bash

exec &> ./create.log

hn="$(hostname -s)"

# adjust to suit: the /dev/disk/by-id/ prefix for the SSDs, the partition
# numbers from the layout above (3 = /boot, 5 = zfs rootfs), and the md
# device to create for /boot
base='ata-Crucial_CT275MX300SSD1_'
md_part=3
zfs_part=5
md=/dev/md0

md_parts=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${md_part}) )

# 4 disks, so use the top half and bottom half for the two mirrors.
zmirror1=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | head -n 2) )
zmirror2=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | tail -n 2) )

# create /boot raid array
mdadm --create "$md" \
    --bitmap=internal \
    --raid-devices=4 \
    --level 1 \
    --metadata=0.90 \
    "${md_parts[@]}"

mkfs.ext4 "$md"

# create zpool
zpool create -o ashift=12 "$hn" \
    mirror "${zmirror1[@]}" \
    mirror "${zmirror2[@]}"

# create zfs rootfs
zfs set compression=on "$hn"
zfs set atime=off "$hn"
zfs create "$hn/root"
zpool set bootfs="$hn/root" "$hn"

# mount the new /boot under the zfs root
mkdir -p "/$hn/root/boot"
mount "$md" "/$hn/root/boot"

If you want or need other ZFS datasets (e.g. for /home, /var etc) then create them here in this script. Or you can do that later after you’ve got the system up and running on ZFS.
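
For example, something like this in the script would do. This is a sketch only – the dataset layout is a matter of taste, and children of $hn/root inherit its mountpoint, so they show up under /$hn/root/ now and in the right place once the mountpoint is switched to / later:

# optional extra datasets (names are illustrative)
zfs create "$hn/root/home"
zfs create "$hn/root/var"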

If you run mysql or postgresql, read the various tuning guides for how to get best performance for databases on ZFS (they both need their own datasets with particular recordsize and other settings). If you download Linux ISOs or anything with bit-torrent, avoid COW fragmentation by setting up a dataset to download into with recordsize=16K and configure your BT client to move the downloads to another directory on completion.
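
As a sketch of the bit-torrent case (the dataset name and mountpoint here are just illustrative; the recordsize is the one mentioned above):

# download dataset to contain COW fragmentation from bit-torrent
zfs create -o recordsize=16K -o mountpoint=/data/torrents ganesh/torrents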

I did this after I got my system booted on ZFS. For my db, I stopped the postgres service, renamed /var/lib/postgresql to /var/lib/p, created the new datasets with:

zfs create -o recordsize=8K -o logbias=throughput -o mountpoint=/var/lib/postgresql \
  -o primarycache=metadata ganesh/postgres

zfs create -o recordsize=128k -o logbias=latency -o mountpoint=/var/lib/postgresql/9.6/main/pg_xlog \
  -o primarycache=metadata ganesh/pg-xlog

followed by rsync and then started postgres again.

4. rsync my current system to it.

Log out all user sessions, shut down all services that write to the disk (postfix, postgresql, mysql, apache, asterisk, docker, etc). If you haven’t booted into recovery/rescue/single-user mode, then you should be as close to it as possible – everything non-essential should be stopped. I chose not to boot to single-user in case I needed access to the web to look things up while I did all this (this machine is my internet gateway).


hn="$(hostname -s)"
time rsync -avxHAXS -h -h --progress --stats --delete / /boot "/$hn/root/"

After the rsync, my 130GB of data from XFS was compressed to 91GB on ZFS with transparent lz4 compression.

Run the rsync again if (as I did) you realise you forgot to shut down postfix (causing newly arrived mail not to be on the new setup) or something.

You can do a (very quick & dirty) performance test now, by running zpool scrub "$hn". Then run watch zpool status "$hn". As there should be no errors to correct, you should get scrub speeds approximating the combined sequential read speed of all vdevs in the pool. In my case, I got around 500-600M/s – I was kind of expecting closer to 800M/s, but that’s good enough. The Crucial MX300s aren’t the fastest drives available (but they’re great for the price), and ZFS is optimised for reliability more than speed. The scrub took about 3 minutes to scan all 91GB. My HDD zpools get around 150 to 250M/s, depending on whether they have mirror or RAID-Z vdevs and on what kind of drives they have.

For real benchmarking, use bonnie++ or fio.

5. Prepare the new rootfs for chroot, chroot into it, edit /etc/fstab and /etc/default/grub.

This script bind mounts /proc, /sys, /dev, and /dev/pts before chrooting:

#! /bin/sh

hn="$(hostname -s)"

for i in proc sys dev dev/pts ; do
  mount -o bind "/$i" "/${hn}/root/$i"
done

chroot "/${hn}/root"

Change /etc/fstab (on the new zfs root) to have the zfs root and the ext4-on-RAID-1 /boot:

ganesh/root     /         zfs     defaults                                         0  0
/dev/md0        /boot     ext4    defaults,relatime,nodiratime,errors=remount-ro   0  2

I haven’t bothered with setting up the swap at this point. That’s trivial and I can do it after I’ve got the system rebooted with its new ZFS rootfs (which reminds me, I still haven’t done that :).
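
Setting it up later should be just a matter of something along these lines (a sketch, using the same by-id naming as the scripts above; add matching entries to /etc/fstab to make it permanent):

# label and enable swap on partition 4 of each SSD
for dev in /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_*-part4 ; do
  mkswap "$dev"
  swapon "$dev"
done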

Add boot=zfs to the GRUB_CMDLINE_LINUX variable in /etc/default/grub. On my system, that’s:

GRUB_CMDLINE_LINUX="iommu=noagp usbhid.quirks=0x1B1C:0x1B20:0x408 boot=zfs"

NOTE: If you end up needing to run the rsync again as in step 4 above, copy the edited /etc/fstab and /etc/default/grub back to the old root filesystem first, so the rsync doesn’t clobber your changes. I suggest /etc/fstab.zfs and /etc/default/grub.zfs.

6. Install grub

Here’s where things get a little complicated. Running grub-install on /dev/sd[pqrs] is fine – we created the type EF02 partition for it to install itself into.

But running update-grub to generate the new /boot/grub/grub.cfg will fail with an error like this:

/usr/sbin/grub-probe: error: failed to get canonical path of `/dev/ata-Crucial_CT275MX300SSD1_163313AADD8A-part5'.

IMO, that’s a bug in grub-probe – it should look in /dev/disk/by-id/ if it can’t find what it’s looking for in /dev/

I fixed that problem with this script:

#! /bin/sh

cd /dev
ln -s /dev/disk/by-id/ata-Crucial* .

After that, update-grub works fine.

NOTE: you will have to add udev rules to create these symlinks, or run this script on every boot, otherwise you’ll get that error every time you run update-grub in future.
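
A crude alternative to proper udev rules is to run the symlink script at boot, e.g. from /etc/rc.local if your system still uses it (a sketch, with the same Crucial by-id names assumed):

#! /bin/sh
# /etc/rc.local -- recreate the /dev symlinks that grub-probe expects
ln -sf /dev/disk/by-id/ata-Crucial* /dev/
exit 0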

7. Prepare to reboot

Unmount proc, sys, dev/pts, dev, the new raid /boot, and the new zfs filesystems. Set the mount point for the new rootfs to /

#! /bin/sh

hn="$(hostname -s)"
md=/dev/md0     # the /boot array created earlier

for i in dev/pts dev sys proc ; do
  umount "/${hn}/root/$i"
done

umount "$md"

zfs umount "${hn}/root"
zfs umount "${hn}"
zfs set mountpoint=/ "${hn}/root"
zfs set canmount=off "${hn}"

8. Reboot

Remember to configure the BIOS to boot from your new disks.

The system should boot up with the new rootfs, no rescue disk required as in some other guides – the rsync and chroot stuff has already been done.

9. Other notes

  • If you’re adding partition(s) to a zpool for ZIL, remember that ashift is per vdev, not per zpool. So remember to specify ashift=12 when adding them. e.g.

    zpool add -o ashift=12 export log \
      mirror ata-Crucial_CT275MX300SSD1_163313AAEE5F-part6 \

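    The L2ARC partitions are added in much the same way, but as cache vdevs rather than log (cache devices aren’t mirrored). A sketch with placeholder device names – substitute your own from /dev/disk/by-id/:

    zpool add -o ashift=12 export cache \
      ata-Crucial_CT275MX300SSD1_SERIAL1-part7 \
      ata-Crucial_CT275MX300SSD1_SERIAL2-part7
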
    Check that all vdevs in all pools have the correct ashift value with:

    zdb | grep -E 'ashift|vdev|type' | grep -v disk

10. Useful references

Reading these made it much easier to come up with my own method. Highly recommended.


October 08, 2016

Data structure for word relative cooccurence frequencies, counts and prefix tree

Trying to solve the task of calculating word cooccurrence relative frequencies quickly, I have created an interesting data structure, which also allows calculating counts for the first word in each pair to check; it also builds a word prefix tree during text processing, which can be used for further text analysis.

The source code is available on GitHub:

When you execute the make command you should see the following output:

cc -O3 -funsigned-char cooccur.c -o cooccur -lm

Example 1
./cooccur a.txt 2 < | tee a.out

Checking pair d e
Count:3  cocount:3
Relative frequency: 1.00

Checking pair a b
Count:3  cocount:1
Relative frequency: 0.33

Example 2
./cooccur b.txt 3 < | tee b.out

Checking pair a penny
Count:3  cocount:3
Relative frequency: 1.00

Checking pair penny earned
Count:4  cocount:1
Relative frequency: 0.25

The cooccur program takes two arguments: the filename of a text file to process and the size of the word window within which to calculate relative frequencies. The program then reads pairs of words from its standard input, one pair per line, and calculates the count of appearances of the first word in the processed text and the cooccurrence count for the pair in that text. If the second word appears more than once in the window, only one appearance is counted.

Examples were taken here:

October 07, 2016

Fixing broken Debian packages

In my job we make use of Vidyo for videoconferencing, but today I ran into an issue after re-imaging my Ubuntu 16.04 desktop.

The latest version of vidyodesktop requires libqt4-gui, which doesn't exist in Ubuntu anymore. This always seems to be a problem with non-free software targeting multiple versions of multiple operating systems.

You can work around the issue, doing something like:

sudo dpkg -i --ignore-depends=libqt4-gui VidyoDesktopInstaller-*.deb

but then you get the dreaded unmet dependencies roadblock, which prevents future package manager updates and operations, i.e.

You might want to run 'apt-get -f install' to correct these:
 vidyodesktop : Depends: libqt4-gui (>= 4.8.1) but it is not installable
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

It's a known problem, and it's been well documented. The suggested solution was to modify the VidyoDesktopInstaller-*.deb package, but I didn't want to do that (because when the next version comes out, it will need to be handraulicly fixed too - and that's an ongoing burden I'm not prepared to live with). So I went looking for another solution - and found Debian's equivs package (and thanks to tonyb for pointing me in the right direction!)

So what we want to do is create a dummy Debian package that will satisfy the libqt4-gui requirement. First off, let's uninstall vidyodesktop and install equivs:

sudo apt-get -f install
sudo apt-get install equivs

Next, let's make a fake package:

mkdir -p ~/src/fake-libqt4-gui
cd  ~/src/fake-libqt4-gui
cat << EOF > fake-libqt4-gui
Section: misc
Priority: optional
Standards-Version: 3.9.2

Package: libqt4-gui
Version: 1:100
Maintainer: Michael Davies <>
Architecture: all
Description: fake libqt4-gui to keep vidyodesktop happy
EOF

And now, let's build and install the dummy package:

equivs-build fake-libqt4-gui
sudo dpkg -i libqt4-gui_100_all.deb

And now vidyodesktop installs cleanly!

sudo dpkg -i VidyoDesktopInstaller-*.deb

October 06, 2016

LinuxCon Europe Kernel Security Slides

Yesterday I gave an update on the Linux kernel security subsystem at LinuxCon Europe, in Berlin.

The slides are available here:

The talk began with a brief overview and history of the Linux kernel security subsystem, and then I provided an update on significant changes in the v4 kernel series, up to v4.8.  Some expected upcoming features were also covered.  Skip to slide 31 if you just want to see the changes.  There are quite a few!

It’s my first visit to Berlin, and it’s been fascinating to see the remnants of the Cold War, which dominated life in the 1980s when I was at school, but which also seemed so impossibly far from Australia.

brandenburg gate

Brandenburg Gate, Berlin. Unity Day 2016.

I hope to visit again with more time to explore.

October 03, 2016

10 Years of Glasses

10 years ago I first blogged about getting glasses [1]. I’ve just ordered my 4th pair of glasses. When you buy new glasses the first step is to scan your old glasses to use that as a base point for assessing your eyes; instead of going in cold and trying lots of different lenses, they can just try small variations on your current glasses. Any good optometrist will give you a print-out of the specs of your old glasses and your new prescription after you buy glasses; they may be hesitant to do so if you don’t buy, because some people get a prescription at an optometrist and then buy cheap glasses online. Here are the specs of my new glasses, the ones I’m wearing now that are about 4 years old, and the ones before that which are probably about 8 years old:

        New     4 Years Old   Really Old
R-SPH    0.00    0.00         -0.25
R-CYL   -1.50   -1.50         -1.50
R-AXS   180     179           180
L-SPH    0.00   -0.25         -0.25
L-CYL   -1.00   -1.00         -1.00
L-AXS     5      10           179

The Specsavers website has a good description of what this means [2]. In summary SPH is whether you are long-sighted (positive) or short-sighted (negative). CYL is for astigmatism which is where the focal lengths for horizontal and vertical aren’t equal. AXS is the angle for astigmatism. There are other fields which you can read about on the Specsavers page, but they aren’t relevant for me.

The first thing I learned when I looked at these numbers is that until recently I was apparently slightly short-sighted. In a way this isn’t a great surprise given that I spend so much time doing computer work and very little time focusing on things further away. What is a surprise is that I don’t recall optometrists mentioning it to me. Apparently it’s common to become more long-sighted as you get older so being slightly short-sighted when you are young is probably a good thing.

Astigmatism is the reason why I wear glasses (the Wikipedia page has a very good explanation of this [3]). For the configuration of my web browser and GUI (which I believe to be default in terms of fonts for Debian/Unstable running KDE and Google-Chrome on a Thinkpad T420 with 1600×900 screen) I can read my blog posts very clearly while wearing glasses. Without glasses I can read it with my left eye but it is fuzzy and with my right eye reading it is like reading the last line of an eye test, something I can do if I concentrate a lot for test purposes but would never do by choice. If I turn my glasses 90 degrees (so that they make my vision worse not better) then my ability to read the text with my left eye is worse than my right eye without glasses, this is as expected as the 1.00 level of astigmatism in my left eye is doubled when I use the lens in my glasses at 90 degrees to its intended angle.

The AXS numbers are for the angle of astigmatism. I don’t know why some of them are listed as 180 degrees or why that would be different from 0 degrees (if I turn my glasses so that one lens is rotated 180 degrees it works in exactly the same way). The numbers from 179 degrees to 5 degrees may be just a measurement error.

October 02, 2016

Speaking in October 2016

  • I’m naturally thrilled to be at Percona Live Europe Amsterdam from Oct 3-5 2016. I have previously talked about some of my sessions, but I think there’s another one on the schedule already.
  • LinuxCon Europe – Oct 4-6 2016. I won’t be there for the whole conference, but hope to make the most of my day on Oct 6th.
  • MariaDB Developer’s meeting – Oct 6-8 2016 – skipping the first day, but will be there all day 2 and 3. I even have a session on day 3, focused on compatibility with MySQL, a topic I deeply care about (session schedule)
  • OSCON London – Oct 17-20 2016 – a bit of a late entrant, but I do have a talk titled “Forking successfully”, which wonders if a branch makes more sense, how to fork, and what happens when parity comes.
  • October MySQL London Meetup – Oct 17 2016 – I’m already in London, I wouldn’t miss this meetup for the world! There’s no agenda yet, but I think the discussion should be fun.

Hostile Web Sites

I was asked whether it would be safe to open a link in a spam message with wget. So here are some thoughts about wget security and web browser security in general.

Wget Overview

Some spam messages are designed to attack the recipient’s computer. They can exploit bugs in the MUA, applications that may be launched to process attachments (EG MS Office), or a web browser. Wget is a very simple command-line program to download web pages, it doesn’t attempt to interpret or display them.

As with any network facing software there is a possibility of exploitable bugs in wget. It is theoretically possible for an attacker to have a web server that detects the client and has attacks for multiple HTTP clients including wget.

In practice wget is a very simple program and simplicity makes security easier. A large portion of security flaws in web browsers are related to plugins such as flash, rendering the page for display on a GUI system, and javascript – features that wget lacks.

The Profit Motive

An attacker that aims to compromise online banking accounts probably isn’t going to bother developing or buying an exploit against wget. The number of potential victims is extremely low and the potential revenue benefit from improving attacks against other web browsers is going to be a lot larger than developing an attack on the small number of people who use wget. In fact the potential revenue increase of targeting the most common Linux web browsers (Iceweasel and Chromium) might still be lower than that of targeting Mac users.

However if the attacker doesn’t have a profit motive then this may not apply. There are people and organisations who have deliberately attacked sysadmins to gain access to servers (here is an article by Bruce Schneier about the attack on Hacking Team [1]). It is plausible that someone who is targeting a sysadmin could discover that they use wget and then launch a targeted attack against them. But such an attack won’t look like regular spam. For more information about targeted attacks Brian Krebs’ article about CEO scams is worth reading [2].

Privilege Separation

If you run wget in a regular Xterm in the same session you use for reading email etc then if there is an exploitable bug in wget then it can be used to access all of your secret data. But it is very easy to run wget from another account. You can run “ssh otheraccount@localhost” and then run the wget command so that it can’t attack you. Don’t run “su – otheraccount” as it is possible for a compromised program to escape from that.
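
As a sketch of that approach (the account name is the one used above; the URL is made up):

# fetch the suspect page as a throwaway user so a wget exploit can't read your data
ssh otheraccount@localhost 'wget -O /tmp/suspect.html "http://example.com/dodgy-link"'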

I think that most Linux distributions have supported a “switch user” functionality in the X login system for a number of years. So you should be able to lock your session and then change to a session for another user to run potentially dangerous programs.

It is also possible to use a separate PC for online banking and other high value operations. A 10yo PC is more than adequate for such tasks so you could just use an old PC that has been replaced for regular use for online banking etc. You could boot it from a CD or DVD if you are particularly paranoid about attack.

Browser Features

Google Chrome has a feature to not run plugins unless specifically permitted. This requires a couple of extra mouse actions when watching a TV program on the Internet but prevents random web sites from using Flash and Java which are two of the most common vectors of attack. Chrome also has a feature to check a web site against a Google black list before connecting. When I was running a medium size mail server I often had to determine whether URLs being sent out by customers were legitimate or spam, if a user sent out a URL that’s on Google’s blacklist I would lock their account without doing any further checks.


I think that even among Linux users (who tend to be more careful about security than users of other OSs) using a separate PC and booting from a CD/DVD will generally be regarded as too much effort. Running a full featured web browser like Google Chrome and updating it whenever a new version is released will avoid most problems.

Using wget when you have reason to be concerned is a possibility, but not only is it slightly inconvenient, it also often won’t download the content that you want (EG in the case of HTML frames).

October 01, 2016

DevOpsDays Wellington 2016 – Day 2, Session 3


Mrinal Mukherjee – How to choose a DevOps tool

Right Tool
– Does the job
– People will accept

Wrong tool
– Never-ending PoC
– Doesn’t do the job

How to pick
– Budget / Licensing
– does it address your pain points
– Learning cliff
– Community support
– Enterprise acceptability
– Config in version control?

Central tooling team
– Pros: standardise, educate
– Cons: constant bottleneck, delays, stifles innovation, not in sync with teams

DevOps != Tool
Tools != DevOps

Tools facilitate it not define it.

Howard Duff – Eric and his blue boxes

Physical example of KanBan in an underwear factory

Lindsey Holmwood – Deepening people to weather the organisation

Note: Lindsey presents really fast so I missed recording a lot from the talk

His Happy, High performing Team -> He left -> 6 months later half of team had left

How do you create a resilient culture?

What is culture?
– Lots of research in organisation psychology
– Edgar Schein – 3 levels of culture
– Artefacts, Values, Assumptions

– Physical manifestations of our culture
– Standups, Org charts, desk layout, documentation
– actual software written
– Easiest to see and adopt

– Goals, strategies and philosophies
– “we will dominate the market”
– “Management if available”
– “nobody is going to be fired for making a mistake”
– lived values vs aspirational values (people have a good nose for bullshit)
– Example, cores values of Enron vs reality
– Work as imagined vs Work is actually done

– beliefs, perceptions, thoughts and feelings
– exist on an unconscious level
– hard to discern
– “bad outcomes come from bad people”
– “it is okay to withhold information”
– “we can’t trust that team”
– “profits over people”

If we can change our people, we can change our culture

What makes a good team member?

– Vulnerability
– Assume the best of others
– Aware of their cognitive bias
– Aware of the fundamental attribution error (judge others by actions, judge ourselves by our intentions)
– Aware of hindsight bias. Hindsight bias is your culture killer
– When bad things happen explain in terms of foresight
– Regular 1:1s
Eliminate performance reviews
Willing to play devils advocate

Commit and acting
– Shared goal settings
– Don’t solutioneer
– Provide context about strategy, about desired outcome
What makes a good team?

Influence of hiring process
– Willingness to adapt and adopt working in new team
– Qualify team fit, tech talent then rubber stamp from team lead
– have a consistent script, but be prepared to improvise
– Everyone has the veto power
– If leadership is vetoing at the last minute, that’s a systemic problem with team alignment, not the system
– Benefit: team talks to candidate (without leadership present)
– Many different perspectives
– unblock management bottlenecks
– Risk: uncovering dysfunctions and misalignment in your teams
– Hire good people, get out of their way

Diversity and inclusion
– includes: race, gender, sexual orientation, location, disability, level of experience, work hours
– Seek out diverse candidates.
– Sponsor events and meetups
– Make job description clear you are looking for diverse background
– Must include and embrace differences once they actually join
– Safe mechanism for people to raise criticisms, and acting on them

Leadership and Absence of leadership
– Having a title isn’t required
– If the leader steps away things should continue working right
– Team is their own shit umbrella
– empowerment vs authority
– empowerment is giving permission from above (potentially temporary)
– authority is giving power (granting autonomy)

Part of something bigger than the team
– help people build up for the next job
– Guilds in the Spotify model
– Run them like meetups
– Get senior management to come and observe
– What we’re talking about is tech culture

We can change tech culture
– How to make it resist the culture of the rest of the organisation
– Artefacts influence behaviour
– Artifact fast builds -> value: make better quality
– Artifact: post incident reviews -> Value: Failure is an opportunity for learning

Q: What is a pre-incident review
A: Brainstorm beforehand (eg before a big rollout) what you think might go wrong if something is coming up
then afterwards do another review of what just went wrong

Q: what replaces performance reviews
A: One on ones

Q: Overcoming Resistance
A: Do it and point back at the evidence. Hard to argue with an artifact

Q: First step?
A: One on 1s

Getting started, reading books by Patrick Lencioni:
– Silos, Politics and Turf Wars
– 5 Dysfunctions of a team


September 30, 2016

DevOpsDays Wellington 2016 – Day 2, Session 2

Troy Cornwall & Alex Corkin – Health is hard: A Story about making healthcare less hard, and faster!

Maybe title should be “Culture is Hard”

@devtroy @4lexNZ

Working at HealthLink
– Windows running Java stuff
– Out of date and poorly managed
– Deployments manual, thrown over the wall by devs to ops

Team Death Star
– Destroy bad processes
– Change deployment process

Existing Stack
– VMware
– Windows
– Puppet

CD and CI Requirements
– Goal: Time to regression test under 2 mins, time to deploy under 2 mins (from 2 weeks each)
– Puppet too slow to deploy code in a minute or two. App deployment vs config management
– Can’t use (then) containers on Windows so not an option

New Stack
– VMware
– Ubuntu
– Puppet for Server config
– Docker
– rancher

Smashed the 2 minute target!

– We focused on the tech side and let the people side slip
– Windows shop, hard work even to get a Linux VM at the start
– Devs scared to run on Linux. Some initial deploy problems burnt people
– Lots of different new technologies at once all pushed to devs, no pull from them.

Blackout where we weren’t allowed to talk to them for four weeks
– Should have been a warning sign…

We thought we were ready.
– Ops was not ready

“5 dysfunctions of a team”
– Trust as at the bottom, we didn’t have that

– We were aware of this, but didn’t follow though
– We were used to disruption but other teams were not

Note: I’m not sure how the story ended up, they sort of left it hanging.

Pavel Jelinek – Kubernetes in production

Works at Movio
– Software for Cinema chains (eg Loyalty cards)
– 100 million emails per month, millions of SMS and push notifications (fewer push because people hate those)

Old Stack
– Started with mysql and php application
– AWS from the beginning
– On largest aws instance but still slow.

Decided to go with Microservices
– Put stuff in Docker
– Used Jenkins, puppet, own docker registry, rundeck (see blog post)
– Devs didn’t like writing puppet code and other manual setup

Decided to go to new container management at start of 2016
– Was pushing for Nomad but devs liked Kubernetes

– Built in ports, HA, LB, Health-checks

Concepts in Kub
– POD – one or more containers
– Deployment, Daemon, Pet Set – Scaling of a POD
– Service- resolvable name, load balancing
– ConfigMap, Volume, Secret – Extended Docker Volume

Devs look after some kub config files
– Brings them closer to how stuff is really working

– Using kubectl to create a pod in his work’s lab env (rough sketch after this list)
– Add load balancer in front of it
– Add a configmap to update the container’s nginx config
– Make it public
– LB replicas, Rolling updates
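
A very rough sketch of that kind of kubectl demo, using 2016-era kubectl syntax (all names and images here are made up):

# create a deployment, which creates and scales the pods
kubectl run demo-web --image=nginx --replicas=2

# put a load balancer in front of it and make it public
kubectl expose deployment demo-web --port=80 --type=LoadBalancer

# create a configmap holding an nginx config (the pod spec can then mount it)
kubectl create configmap demo-web-conf --from-file=nginx.conf

# rolling update to a new image
kubectl set image deployment/demo-web demo-web=nginx:1.11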

Best Practices
– lots of small containers are better
– log on container stdout, preferable via json
– Test and know your resource requirements (at movio devs teams specify, check and adjust)
– Be aware of the node sizes
– Stateless please
– if not stateless than clustered please
– Must handle unexpected immediate restarts


Linux Security Summit 2016 Wrapup

Here’s a summary of the 2016 Linux Security Summit, which was held last month in Toronto.

Presentation slides are available at

This year, videos were made of the sessions, and they may be viewed at — many thanks to Intel for sponsoring the recordings!

LWN has published some excellent coverage:

This is a pretty good representation of the main themes which emerged in the conference: container security, kernel self-protection, and integrity / secure boot.

Many of the core or low level security technologies (such as access control, integrity measurement, crypto, and key management) are now fairly mature. There’s more focus now on how to integrate these components into higher-level systems and architectures.

One talk I found particularly interesting was Design and Implementation of a Security Architecture for Critical Infrastructure Industrial Control Systems in the Era of Nation State Cyber Warfare. (The title, it turns out, was a hack to bypass limited space for the abstract in the cfp system).  David Safford presented an architecture being developed by GE to protect a significant portion of the world’s electrical grid from attack.  This is being done with Linux, and is a great example of how the kernel’s security mechanisms are being utilized for such purposes.  See the slides or the video.  David outlined gaps in the kernel in relation to their requirements, and a TPM BoF was held later in the day to work on these.  The BoF was reportedly very successful, as several key developers in the area of TPM and Integrity were present.

Attendance at LSS was the highest yet with well over a hundred security developers, researchers and end users.

Special thanks to all of the LF folk who manage the logistics for the event.  There’s no way we could stage something on this scale without their help.

Stay tuned for the announcement of next year’s event!


DevOpsDays Wellington 2016 – Day 2, Session 1

Jethro Carr – Powering with DevOps goodness
– “News” Website
– 5 person DevOps team

– “Something you do because Gartner said it’s cool”
– Sysadmin -> InfraCoder/SRE -> Dev Shepherd -> Dev
– Stuff in the middle somewhere
– DevSecOps

Company Structure drives DevOps structure
– Lots of products – one team != one product
– Dev teams with very specific focus
– Scale – too big, yet to small

About our team
– Mainly Ops focus
– small number compared to developers
– Operate like an agency model for developers
– “If you buy the Dom Post it would help us grow our team”
– Lots of different vendors with different skill levels and technology

Work process
– Use KanBan with Jira
– Works for Ops focussed team
– Not so great for long running projects

War Against OnCall
– Biggest cause of burnout
– focus on minimising callouts
– Zero alarm target
– Love pagerduty

Commonalities across platforms
– Everyone using compute
– Most Java and javascript
– Using Public Cloud
– Using off the shelf version control, deployment solutions
– Don’t get overly creative and make things too complex
– Proven technology that is well tried and tested and skills available in marketplace
– Classic technologies like Nginx, Java, Varnish still have their place. Don’t always need the latest fashion

– Linux, ubuntu
– Adobe AEM Java CMS
– AWS 14x c4.2xlarge
– Varnish in front, used by everybody else. Makes ELB and ALB look like toys

How use Varnish
– Retries against backends if 500 replies, serve old copies
– split routes to various backends
– Control CDN via header
– Dynamic Configuration via puppet

– Akamai
– Keeps online during breaking load
– 90% cache offload
– Management is a bit slow and manual

– Small batch jobs
– Check mail reputation score
– “Download file from a vendor” type stuff
– Purge cache when static file changes
– Lambda webapps – hopefully soon, a bit immature

Increasing number of microservices

Standards are vital for microservices
– Simple and reasonable
– Shareable vendors and internal
– flexible
– grow organically
– Needs to be detailed
– 12 factor App
– 3 languages Node, Java, Ruby
– Common deps (SQL, varnish, memcache, Redis)
– Build pipeline standardise. Using Codeship
– Standardise server builds
– Everything Automated with puppet
– Puppet building docker containers (w puppet + puppetstry)
– Std Application deployment

Init systems
– Had proliferation
– pm2, god, supervisord, sysvinit are out
– systemd and upstart are in

Always exceptions
– “Enterprise ___” is always bad
– Educating the business is a forever job
– Be reasonable, set boundaries

More Stuff at

Q: Pull request workflow
A: Largely replaced traditional review

Q: DR eg AWS outage
A: Documented process if codeship dies can manually push, Rest in 2*AZs, Snapshots

Q: Dev teams structure
A: Project specific rather than product specific.

Q: Puppet code tested?
A: Not really, Kinda tested via the pre-prod environment, Would prefer result (server spec) testing rather than low level testing of each line
A: Code team have good test coverage though. 80-90% in many cases.

Q: Load testing, APM
A: Use New Relic. Not much luck with external load testing companies

Q: What if somebody wants something non-standard?
A: Case-by-case. Allowed if needed but needs a good reason.

Q: What happens when automation breaks?
A: Documentation is actually pretty good.


September 29, 2016

DevOpsDays Wellington 2016 – Day 1, Session 3

Owen Evans – DevOps is Dead, long live DevOps

Theory: Devops is role that never existed.

In the old days
– Shipping used to be hard and expensive, eg on physical media
– High cost of release
– but everybody else was the same.
– Lots of QA and red tape, no second chances

Then we got the Internet
– Speed became everything
– You just shipped enough

But Hardware still was a limiting factor
– Virtual machines
– IaaS
– Containers

This led to complacency
– Still had a physical server under it all

Birth of devops
– Software got faster but still had to have hardware under their somewhere
– Disparity between operations cadence and devs cadence
– things got better
– But we didn’t free ourselves from hardware
– Now everything is much more complex

Developers are now divorced from the platform
– Everything is abstracted
– It is leaky buckets all the way down

– Education of developers as to what happens below the hood
– Stop reinventing the wheel
– Harmony is much more productive
– Lots of tools means that you don’t have enough expertise on each
– Reduce fiefdoms
– Push responsibility but not ownership (you own it but the devs makes some of the changes)
– Live with the code
– Pit of success, easy ways to fail that don’t break stuff (eg test environments, by default it will do the right thing)
– Be Happy. Everybody needs to be a bit devops and know a bit of everything.


DevOpsDays Wellington 2016 – Day 1, Session 2

Martina Iglesias – Automatic Discovery of Service metadata for systems at scale

Backend developer at Spotify

Spotify Scale
– 100m active users
– 800+ tech employees
– 120 teams
– Microservices architecture

Walk though Sample artist’s page
– each component (playlist, play count, discography) is a separate service
– Aggregated to send result back to client

Hard to co-ordinate between services as scale grows
– 1000+ services
– Each need to use each others APIs
– Dev teams all around the world

Previous Solution
– Teams had docs in different places
– Some in Wiki, Readme, markdown, all different

Current Solution – System Z
– Centralise in one place, as automated as possible
– Internal application
– Web app, catalog of all systems and its parts
– Well integrated with Apollo service

Web Page for each service
– Various tabs
– Configuration (showing versions of build and uptimes)
– API – list of all endpoints for service, scheme, errors codes, etc (automatically populated)
– System tab – Overview on how service is connected to other services, dependencies (generated automatically)

– System Z gets information from Apollo and prod servers about each service that has been registered

– Java libs for writing microservices
– Open source

– Metadata module
– Exposes endpoint with metadata for each service
– Exposes
– instance info – versions, uptime
– configuration – currently loaded config of the service
– endpoints –
– call information – monitors service and learns and returns what incoming and outgoing services the service actually does and to/from what other services.
– Automatically builds dependencies

Situation Now
– Quicker access to relevant information
– Automated boring stuff
– All in one place

– Think about growth and scaling at the start of the project

Documentation generators


Q: How to handle breaking APIs
A: We create new version of API endpoint and encourage people to move over.

Bridget Cowie – The story of a performance outage, and how we could have prevented it

– Works for Datacom
– Consultant in Application performance management team

Story from Start of 2015

– Friday night phone calls from your boss are never good.
– Dropped in application monitoring tools (Dynatrace) on Friday night, watch over weekend
– Prev team pretty sure problem is a memory leak but had not been able to find it (for two weeks)
– If somebody tells you they know what is wrong but can’t find it, give details, or fix it, then be suspicious

Book: Java Enterprise performance

– Monday prod load goes up and app starts crashing
– Told ops team, but since the crash wasn’t visible yet, was not believed. Waited

Tech Stack
– Java App, Jboss on Linux
– Multiple JVMs
– Oracle DBs, Mulesoft ESB, ActiveMQ, HornetQ

Ah Ha moment
– Had a look at import process
– 2.3 million DB queries per half hour
– With max of 260 users, seems way more than what is needed
– Happens even when nobody is logged in

Tip: Typically 80% of all issues can be detected in dev or test if you look for them.

Where did this code come from?
– Process to import a csv into the database
– 1 call mule -> 12 calls to AMQ -> 12 calls to App -> 102 db queries
– Passes all the tests… But
– Still shows huge growth in queries as we go through layers
– DB queries grow bigger with each run

Tip: Know how your code behaves and track how this behavour changes with each code change (or even with no code change)

Q: Why Dynatrace?
A: Quick to deploy, useful info back in only a couple of hours


DevOpsDays Wellington 2016 – Day 1, Session 1

Ken Mugrage – What we’re learning from burnout and how DevOps culture can help

Originally in the Marines, environment where burnout not tolerated
Works for Thoughtworks – not a mental health professional

Devops could make this worse
Some clichéd places say: “Teach the devs puppet and fire all the Ops people”

Why should we address burnout?
– Google found psychological safety was the number 1 indicator of an effective team
– Not just a negative, people do better job when feeling good.

What is burnout
– The Truth about burnout – Maslach and Leiter
– The Dimensions of Burnout
– Exhaustion
– Cynicism
– Mismatch between work and the person
– Work overload
– Lack of control
– Insufficient reward
– Breakdown of communication

Work overload
– Various prioritisation methods
– More load sharing
– Less deploy marathons
– Some orgs see devops as a cost saving
– There is no such thing as a full stack engineer
– team has skills, not a person

Lack of Control
– Team is ultimately responsible for the decisions
– Use the right technology and tools for the team
– This doesn’t mean a “Devops team” controlling what others do

Insufficient Reward
– Actually not a great motivator

Breakdown in communication
– Walls between teams are bad
– Everybody involved with product should be on the same team
– 2 pizza team
– Pairs with different skill sets are common
– Swarming can be done when required ( one on keyboard, everybody else watching and talking and helping on big screen)
– Blameless retrospectives are held
– No “Devops team”, creating a silo is not a solution for silos

Absence of Fairness
– You build it, you run it
– Everybody is responsible for quality
– Everybody is measured in the same way
– example Expedia – *everything* deployed has A/B testing
– everybody goes to release party

Conflicting Values
– In the broadest possible sense
– eg Company industry and values should match your own

Reminder: it is about you and how you fit in with the above

Pay attention to how you feel
– Increase your self awareness
– Maslach Burnout inventory
– Try not to focus on the negative.

Pay attention to work/life balance
– Ask for it, company might not know your needs
– If you can’t get it then quit

Talk to somebody
– Professional help is the best
– Trained to identify cause and effect
– can recommend treatment
– You’d call them if you broke your arm

Friends and family
– People who care, that you haven’t even meet
– Empathy is great , but you aren’t a professional
– Don’t guess cause and effect
– Don’t recommend treatment if not a professional

Q: Is it Gender specific for men (since IT is male dominated) ?
– The “absence of fairness” problem is huge for women in IT

Q: How to promote Psychological safety?
– Blameless post-mortems


Damian Brady – Just let me do my job

After working in govt, went to work for new company and hoped to get stuff done

But whole dev team was unhappy
– Random work assigned
– All deadlines missed
– Lots of waste of time meetings

But 2 years later
– Hitting all deadlines
– Useful meetings

What changes were made?

New boss, protect devs from MUD (Meetings, Uncertainty, Distractions)

– In board sense, 1-1, all hands, normal meetings
– People are averaging 7.5 hours/week in meetings
– On average 37% of meeting time is not relevant to person ( ~ $8,000 / year )
– Do meetings have goals and do they achieve those goals?
– 38% without goals
– only half of remaining meet those goals
– around 40% of meetings have and achieve goals
– Might not be wasted. Look at “What has changed as result of this meeting?”

Meetings fixes
– New Boss went to meetings for us (didn’t need everybody) as a representative
– Set a clear goal and agenda
– Avoid gimmicks
– don’t default to 30min or 1h

– 60% of people interrupted 10 or more times per day
– Good to stay in a “flow state”
– 40% people say they are regularly focussed in their work. but all are sometimes
– 35% of time loss focus when interrupted
– Study shows people can take up to 23mins to get focus back after interruption
– $25,000/year wasted due to interruptions

Distraction Fixes
– Allowing headphones, rule not to interrupt people wearing headphones
– “Do not disturb” times
– Little Signs
– Had “the finger” so that you could tell somebody your were busy right now and would come back to them
– Let devs go to meeting rooms or cafes to hide from interruptions
– All “go dark” where email and chat turned off

– 82% in survey were clear
– nearly 60% of people say their top priority changes before they can finish it.
– Autonomy, mastery, purpose

Uncertainty Fixes
– Tried to let people get clear runs at work
– Helped people acknowledge the unexpected work, add to Sprint board
– Established a gate – Business person would have to go through the manager
– Make the requester responsible – made the requester decide what stuff didn’t get done by physically removing stuff from the sprint board to add their own


September 26, 2016

MySQL removes the FRM (7 years after Drizzle did)

The new MySQL 8.0.0 milestone release that was recently announced brings something that has been a looooong time coming: the removal of the FRM file. I was the one who implemented this in Drizzle way back in 2009 (July 28th 2009 according to Brian)- and I may have had a flashback to removing the tentacles of the FRM when reading the MySQL 8.0.0 announcement.

As an idea for how long this has been on the cards, I’ll quote Brian from when we removed it in Drizzle:

We have been talking about getting rid of FRM since around 2003. I remember a drive up to northern Finland with Kaj Arnö, where we spent an hour talking about this. I, David, and MontyW have talked about this for years.

Soo… it was a known problem for at least thirteen years. One of the issues removing it was how pervasive all of the FRM related things were. I shudder at the mention of “pack_flag” and Jay Pipes probably does too.

At the time, we tried a couple of approaches as to how things should look. Our philosophy with Drizzle was that it should get out of the way at let the storage engines be the storage engines and not try to second guess them or keep track of things behind their back. I still think that was the correct architectural approach: the role of Drizzle was to put SQL on top of a storage engine, not to also be one itself.

Looking at the MySQL code, there’s one giant commit 31350e8ab15179acab5197fa29d12686b1efd6ef. I do mean giant too, the diffstat is amazing:

 786 files changed, 58471 insertions(+), 25586 deletions(-)

How anyone even remotely did code review on that I have absolutely no idea. I know the only way I could get it to work in Drizzle was to do it incrementally, a series of patches that gradually chiseled out what needed to be taken out so I could put in an API and the protobuf code.

Oh, and in case you’re wondering:

- uint offset,pack_flag;
+ uint offset;

Thank goodness. Now, you may not appreciate that as much as I might, but pack_flag was not the height of design, it was… pretty much a catch-all for some kind of data about a field that wasn’t something that already had a field in the FRM. So it may include information on whether the field could be null or not, if it’s decimal, how many bytes an integer takes, that it’s a number and how many… oh, just don’t ask.

Also gone is the weird interval_id and a whole bunch of limitations because of the FRM format, including one that I either just discovered or didn’t remember: if you used all 256 characters in an enum, you couldn’t create the table as MySQL would pick either a comma or an unused character to be the separator in the FRM!?!

Also changed is how the MySQL server handles default values. For those not aware, the FRM file contains a static copy of the row containing default values. This means the default values are computed once on table creation and never again (there’s a bunch of work arounds for things like AUTO_INCREMENT and DEFAULT NOW()). The new code under sql/ is where this is done now.

For now at least, table metadata is also written to a file that appears to be JSON format. It’s interesting that a SQL database server is using a schemaless file format to describe schema. It appears that these files exist only for disaster recovery or perhaps portable tablespaces. As such, I’m not entirely convinced they’re needed…. it’s just a thing to get out of sync with what the storage engine thinks and causes extra IO on DDL (as well as forcing the issue that you can’t have MVCC into the data dictionary itself).

What will be interesting is to see the lifting of these various limitations and how MariaDB will cope with that. Basically, unless they switch, we’re going to see some interesting divergence in what you can do in either database.

There’s certainly differences in how MySQL removed the FRM file to the way we did it in Drizzle. Hopefully some of the ideas we had were helpful in coming up with this different approach, as well as an extra seven years of in-production use.

At some point I’ll write something up as to the fate of Drizzle and a bit of a post-mortem, I think I may have finally worked out what I want to say…. but that is a post for another day.

Percona Live Europe Amsterdam PostgreSQL Day

This is my very first post on Planet PostgreSQL, so thank you for having me here! I’m not sure if you’re aware, but the PostgreSQL Events page lists the conference as something that should be of interest to PostgreSQL users and developers.

There is a PostgreSQL Day on October 4 2016 in Amsterdam, and if you’re planning on just attending a single day, use code PostgreSQLRocks and it will only cost €200+VAT.

I for one am excited to see Patroni: PostgreSQL High Availability made easy, Relational Databases at Uber: MySQL & Postgres, and Linux tuning to improve PostgreSQL performance: from hardware to postgresql.conf.

I’ll write notes here; if time permits we’ll do a database hackers lunch gathering (it’s good to mingle with everyone), and I reckon if you’re coming for PostgreSQL day, don’t forget to also sign up to the Community Dinner at

September 22, 2016

First look at MySQL 8.0.0 Milestone

So, about ten days ago the MySQL Server Team released MySQL 8.0.0 Milestone to the world. One of the most unfortunate things about MySQL development is that it’s done behind closed doors, with the only hints of what’s to come arriving in maybe a note on a bug or such milestone releases that contain a lot of code changes. How much code change? Well, according to the text up on github for the 8.0 branch “This branch is 5714 commits ahead, 4 commits behind 5.7. ”

Way back in 2013, I looked at MySQL Code Size over releases, which I can again revisit and include both MySQL 5.7 and 8.0.0.

While 5.7 was a big jump again, we seem to be somewhat leveling off, which is a good thing. Managing to add features and fix long standing problems without bloating code size is good for software maintenance. Honestly, hats off to the MySQL team for keeping it to around a 130kLOC code size increase over 5.7 (that’s around 5%).

These days I’m mostly just a user of MySQL, pointing others in the right direction when it comes to some issues around it and being the resident MySQL grey(ing)beard (well, if I don’t shave for a few days) inside IBM as a very much side project to my day job of OPAL firmware.

So, personally, I’m thrilled about no more FRM, better Unicode, SET PERSIST and performance work. With my IBM hat on, I’m thrilled about the fact that it compiled on POWER out of the box and managed to work (I haven’t managed to crash it yet). There seems to be a possible performance issue, but hey, this is a huge improvement over the 5.7 developer milestones when run on POWER.

A lot of the changes are focused around usability, making it easier to manage and easier to run at at least a medium amount of scale. This is long overdue and it’s great to see even seemingly trivial things like SET PERSIST coming (I cannot tell you how many times that has tripped me up).

In a future post, I’ll talk about the FRM removal!

Inside North Korea, Russia Vs USA Part 3, and More

At times, you have to admit the international situation regarding North Korea is fairly comical. The core focus on it has basically been its nuclear weapons programs, but there's obviously a lot more to a country than just its defense force. I wanted to take a look at what's happening inside. While they clearly have difficulties, things don't seem entirely horrible? North Korea’s Nuclear

Lesson 124 in why scales on a graph matter…

The original article presented two graphs: one of MariaDB searches (which are increasing) and the other showing MySQL searches (decreasing or leveling out). It turns out that the y axis REALLY matters.

I honestly expected better….

Compiling your own firmware for Barreleye (OpenCompute OpenPOWER system)

Aaron Sullivan announced on the Rackspace Blog that you can now get your own Barreleye system! What’s great is that the code for the Barreleye platform is upstream in the op-build project, which means you can build your own firmware for them (just like garrison, the “IBM S822LC for HPC” system I blogged about a few days ago).

Remarkably, to build an image for the host firmware, it’s eerily similar to any other platform:

git clone --recursive
cd op-build
. op-build-env
op-build barreleye_defconfig && op-build

…and then you wait. You can cross compile on x86.
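
If it goes well, the built firmware image should land under output/images/ (op-build is buildroot-based, so this assumes the usual buildroot output layout; check the op-build docs for your release):

ls -l output/images/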

You’ve been able to build firmware for these machines with upstream code since Feb/March (I wouldn’t recommend running with builds from then though, try the latest release instead).

Hopefully, someone involved in OpenBMC can write on how to build the BMC firmware.

September 19, 2016

LUV Main October 2016 Meeting: Sending Linux to Antarctica, 2012-2017 / Annual General Meeting

Oct 4 2016 18:30
Oct 4 2016 20:30

6th Floor, 200 Victoria St. Carlton VIC 3053


• Scott Penrose, Sending Linux to Antarctica: 2012-2017
• Annual General Meeting and lightning talks

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


September 17, 2016

The Road to DrupalCon Dublin

DrupalCon Dublin is just around the corner. Earlier today I started my journey to Dublin. This week I'll be in Mumbai for some work meetings before heading to Dublin.

On Tuesday 27 September at 1pm I will be presenting my session Let the Machines do the Work. This lighthearted presentation provides some practical examples of how teams can start to introduce automation into their Drupal workflows. All of the code used in the examples will be available after my session. You'll need to attend my talk to get the link.

As part of my preparation for Dublin I've been road testing my session. Over the last few weeks I delivered early versions of the talk to the Drupal Sydney and Drupal Melbourne meetups. Last weekend I presented the talk at Global Training Days Chennai, DrupalCamp Ghent and DrupalCamp St Louis. It was exhausting presenting three times in less than 8 hours, but it was definitely worth the effort. The 3 sessions were presented using hangouts, so they were recorded. I gained valuable feedback from attendees and became aware that some bits of my talk needed attention.

Just as I encourage teams to iterate on their automation, I've been iterating on my presentation. Over the next week or so I will be recutting my demos and polishing the presentation. If you have a spare 40 minutes I would really appreciate it if you watch one of the session recording below and leave a comment here with any feedback.

Global Training Days Chennai


DrupalCamp Ghent

Thumbnail frame from DrupalCamp Ghent presentation video

Note: I recorded the audience not my slides.

DrupalCamp St Louis

Thumbnail frame from DrupalCamp St Louis presentation video

Note: There was an issue with the mic in St Louis, so there is no audio from their side.

September 15, 2016

Frankenwheezy! Keeping wheezy alive on a container host running libc6 2.24

It’s Alive!

The day before yesterday (at Infoxchange, a non-profit whose mission is “Technology for Social Justice”, where I do a few days/week of volunteer systems & dev work), I had to build a docker container based on an ancient wheezy image. It built fine, and I got on with working with it.

Yesterday, I tried to get it built on my docker machine here at home so I could keep working on it, but the damn thing just wouldn’t build. At first I thought it was something to do with networking, because running curl in the Dockerfile was the point where it was crashing – but it turned out that many programs would segfault – e.g. it couldn’t run bash, but sh (dash) was OK.

I also tried running a squeeze image, and that had the same problem. A jessie image worked fine (but the important legacy app we need wheezy for doesn’t yet run in jessie).

After a fair bit of investigation, it turned out that the only significant difference between my workstation at IX and my docker machine at home was that I’d upgraded my home machines to libc6 2.24-2 a few days ago, whereas my IX workstation (also running sid) was still on libc6 2.23.

Anyway, the point of all this is that if anyone else needs to run a wheezy on a docker host running libc6 2.24 (which will be quite common soon enough), you have to upgrade libc6 and related packages (and any -dev packages, including libc6-dev, you might need in your container that are dependent on the specific version of libc6).

In my case, I was using docker but I expect that other container systems will have the same problem and the same solution: install libc6 from jessie into wheezy. Also, I haven’t actually tested installing jessie’s libc6 on squeeze – if it works, I expect it’ll require a lot of extra stuff to be installed too.

I built a new frankenwheezy image that had libc6 2.19-18+deb8u4 from jessie.

To build it, I had to use a system which hadn’t already been upgraded to libc6 2.24. I had already upgraded libc6 on all the machines on my home network. Fortunately, I still had my old VM that I created when I first started experimenting with docker – crazily, it was a VM with two ZFS ZVOLs, a small /dev/vda OS/boot disk, and a larger /dev/vdb mounted as /var/lib/docker. The crazy part is that /dev/vdb was formatted as btrfs (mostly because it seemed a much better choice than aufs). Disk performance wasn’t great, but it was OK…and it worked. Docker has native support for ZFS, so that’s what I’m using on my real hardware.

I started with the base wheezy image we’re using and created a Dockerfile etc to update it. First, I added deb lines to the /etc/apt/sources.list for my local jessie and jessie-updates mirror, then I added the following line to /etc/apt/apt.conf:

APT::Default-Release "wheezy";

Without that, any other apt-get installs in the Dockerfile will install from jessie rather than wheezy, which will almost certainly break the legacy app. I forgot to do it the first time, and had to waste another 10 minutes or so building the app’s container again.

I then installed the following:

apt-get -t jessie install libc6 locales libc6-dev krb5-multidev comerr-dev zlib1g-dev libssl-dev libpq-dev

To minimise the risk of incompatible updates, it’s best to install the bare minimum of jessie packages required to get your app running. The only reason I needed to install all of those -dev packages was because we needed libpq-dev, which pulled in all the rest. If your app doesn’t need to talk to postgresql, you can skip them. In fact, I probably should try to build it again without them – I added them after the first build failed but before I remembered to set APT::Default-Release (OTOH, it’s working OK now and we’re probably better off with libssl-dev from jessie anyway).
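
For anyone wanting to reproduce this, here is a rough sketch of those steps as they might appear in the Dockerfile's RUN commands – the mirror hostname and the trimmed-down package list are assumptions, so adjust them for your own setup:

# add jessie sources alongside the existing wheezy ones (mirror is an example)
echo 'deb http://mirror.example.com/debian jessie main' >> /etc/apt/sources.list
echo 'deb http://mirror.example.com/debian jessie-updates main' >> /etc/apt/sources.list
# keep wheezy as the default release so only explicitly requested packages come from jessie
echo 'APT::Default-Release "wheezy";' >> /etc/apt/apt.conf
apt-get update
# pull in just the jessie libc; add the -dev packages only if your build needs them
apt-get -y -t jessie install libc6 locales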

Once it built successfully, I exported the image to a tar file, copied it back to my real Docker machine (co-incidentally, the same machine with the docker VM installed) and imported it into docker there and tested it to make sure it didn’t have the same segfault issues that the original wheezy image did. No problem, it worked perfectly.
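
If you need to do the same shuffle, the export/import step is just docker save and docker load – the image and host names here are made up, use whatever you tagged your build as:

# on the build host (the one still running the older libc)
docker save -o frankenwheezy.tar frankenwheezy:latest
scp frankenwheezy.tar dockerhost:
# on the real docker machine
ssh dockerhost 'docker load -i frankenwheezy.tar'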

That worked, so I edited the FROM line in the Dockerfile for our wheezy app to use frankenwheezy and ran make build. It built, passed tests, deployed and is running. Now I can continue working on the feature I’m adding to it, but I expect there’ll be a few more yaks to shave before I’m finished.

When I finish what I’m currently working on, I’ll take a look at what needs to be done to get this app running on jessie. It’s on the TODO list at work, but everyone else is too busy – a perfect job for an unpaid volunteer. Wheezy’s getting too old to keep using, and this frankenwheezy needs to float away on an iceberg.

Moving to …

Last October was moved from the Department of Finance to the Department of Prime Minister and Cabinet (PM&C) and I moved with the team before going on maternity leave in January. In July of this year, whilst still on maternity leave, I announced that I was leaving PM&C but didn’t say what the next gig was. In choosing my work I’ve always tried to choose new areas, new parts of the broader system to better understand the big picture. It’s part of my sysadmin background – I like to understand the whole system and where the config files are so I can start tweaking and making improvements. These days I see everything as a system, and anything as a “config file”, so there is a lot to learn and tinker with!

Over the past 3 months, my little family (including new baby) has been living in New Zealand on a bit of a sabbatical, partly to spend time with the new bub during that lovely 6-8 month period, but partly for us to have the time and space to consider next steps, personally and professionally. Whilst in New Zealand I was invited to spend a month working with the team which was awesome, and to share some of my thoughts on digital government and what systemic “digital transformation” could mean. It was fun and I had incredible feedback from my work there, which was wonderful and humbling. Although tempting to stay, I wanted to return to Australia for a fascinating new opportunity to expand my professional horizons.

Thus far I’ve worked in the private sector, non-profits and voluntary projects, political sphere (as an advisor), and in the Federal and State/Territory public sectors. I took some time whilst on maternity leave to think about what I wanted to experience next, and where I could do some good whilst building on my experience and skills to date. I had some interesting offers but having done further tertiary study recently into public policy, governance, global organisations and the highly complex world of international relations, I wanted to better understand both the regulatory sphere and how international systems work. I also wanted to work somewhere where I could have some flexibility for balancing my new family life.

I’m pleased to say that my next gig ticks all the boxes! I’ll be starting next week at AUSTRAC, the Australian financial intelligence agency and regulator where I’ll be focusing on international data projects. I’m particularly excited to be working for the brilliant Dr Maria Milosavljevic (Chief Innovation Officer for AUSTRAC) who has a great track record of work at a number of agencies, including as CIO of the Australian Crime Commission. I am also looking forward to working with the CEO, Paul Jevtovic APM, who is a strong and visionary leader for the organisation, and I believe a real change agent for the broader public sector.

It should be an exciting time and I look forward to sharing more about my work over the coming months! Wish me luck :)

September 12, 2016

Diplomacy Part 2, Russia Vs USA Part 2, and More

This is obviously a continuation of my last post – if you've ever been exposed to communism then you'll realise that there are those who, having been exposed to the 'benefits' of capitalism, look back upon it harshly. It's taken me a long while but I get where they're coming from. Many times regional and the global

Pia, Thomas and Little A’s Excellent Adventure – Final days

Well, the last 3 months just flew past on our New Zealand adventure! This is the final blog post. We meant to blog more often but between limited internet access and being busy making the most of our much-needed break, we ended up just doing this final post. Enjoy!

Photos were added every week or so to the flickr album.
Our NZ Adventure


I was invited to spend 4 weeks during this trip working with the Department of Internal Affairs in the New Zealand Government on and a roadmap for The team there were just wonderful to work with as were the various people I met from across the NZ public sector. It was particularly fascinating to spend some time with the NZ Head Statistician Liz MacPherson who is quite a data visionary! It was great to get to better know the data landscape in New Zealand and contribute, even in a small way, to where the New Zealand Government could go next with open data, and a more data-driven public sector. I was also invited to share my thoughts on where government could go next more broadly, with a focus on “gov as an API” and digital transformation. It really made me realise how much we were able to achieve both with from 2013-2015 and in the 8 months I was at the Digital Transformation Office. Some of the strategies, big picture ideas and clever mixes of technology and system thinking created some incredible outcomes, things we took for granted from the inside, but things that are quite useful to others and are deserving of recognition for the amazing public servants who contributed. I shared with my New Zealand colleagues a number of ideas we developed at the DTO in the first 8 months of the “interim DTO”, which included the basis for evidence based service design, delivery & reporting, and a vision for how governments could fundamentally change from siloed services to modular and mashable government. “Mashable government” enables better service and information delivery, a competitive ecosystem of products and services, and the capability to automate system to system transactions – with citizen permission of course – to streamline complex user needs. I’m going to do a dedicated blog post later on some of the reflections I’ve had on that work with both and the early DTO thinking, with kudos to all those who contributed.

I mentioned in July that I had left the Department of Prime Minister and Cabinet (where was moved to in October 2015, and I’ve been on maternity leave since January 2016). My next blog post will be about where I’m going and why. You get a couple of clues: yes it involves data, yes it involves public sector, and yes it involves an international component. Also, yes I’m very excited about it!! Stay tuned ;)


When we planned this trip to New Zealand, Thomas had some big numbers in mind for how many fish we should be able to catch. As it turned out, the main seasonal run of trout was 2 months later than usual so for the first month and a half of our trip, it looked unlikely we would get anywhere near what we’d hoped. We got to about 100 fish, fighting for every single one (and keeping only about 5) and then the run began! For 4 weeks of the best fishing of the season I was working in Wellington Mon-Fri, with Little A accompanying me (as I’m still feeding her) leaving Thomas to hold the fort. I did manage to get some great time on the water after Wellington, with my best fishing session (guided by Thomas) resulting in a respectable 14 fish (over 2 hours). Thomas caught a lazy 42 on his best day (over only 3 hours), coming home in time for breakfast and a cold compress for his sprained arm. All up our household clocked up 535 big trout (mostly Thomas!) of which we only kept 10; all the rest were released to swim another day. A few lovely guests contributed to the numbers so thank you Bill, Amanda, Amelia, Miles, Glynn, Silvia and John who together contributed about 40 trout to our tally!


My studies are going well. I now have only 1.5 subjects left in my degree (the famously elusive degree, which was almost finished when my 1st year had to be repeated because I’d done it too long ago for the University to give credit, gah!). To finish the degree – a Politics degree with loads of useful stuff for my work, like public policy – I quite by chance chose a topic on White Collar Crime, which was FASCINATING!


Over the course of the 3 months we had a number of wonderful guests who contributed to the experience and had their own enjoyable and relaxing holidays with us in little Turangi: fishing, bushwalking, going to the hot pools and thermal walks, doing high tea at the Tongariro Chateau at Whakapapa Village, Huka Falls in Taupo, and even enjoying some excellent mini golf. Thank you all for visiting, spending time with us and sharing in our adventure. We love you all!

Little A

Little A is now almost 8 months old and has had leaps and bounds in development from a little baby to an almost toddler! She has learned to roll and commando crawl (pulling herself around with her arms only) around the floor. She loves to sit up and play with her toys and is eating her way through a broad range of foods, though pear is still her favourite. She is starting to make a range of noises and the race is on as to whether she’ll say ma or da first :) She has quite the social personality and we adore her utterly! She surprised Daddy with a number of presents on Father’s Day, and helped to make our first family Father’s Day memorable indeed.

Salut Turangi

And so it’s with mixed feelings that we bid adieu to the sleepy town of Turangi. It’s been a great adventure, with lots of wonderful memories and a much-needed chance to get off the grid for a while, but we’re both looking forward to re-entering respectable society, catching up with those of you that we haven’t seen for a while, and planning our next great adventure. We’ll be back in Turangi in February for a different adventure with friends of ours from the US, but that will be only a week or so. Turangi is a great place, and if you’re ever in the area stop into the local shopping centre and try one of the delicious pork and watercress or lamb, mint and kumara pies available from the local bakeries – reason enough to return again and again.

Speaking in September 2016

A few events, but mostly circling around London:

  • Open collaboration – an O’Reilly Online Conference, at 10am PT, Tuesday September 13 2016 – I’m going to be giving a new talk titled Forking Successfully. I’ve seen how the platform works, and I’m looking forward to trying this method out (it’s like a webinar but not quite!)
  • September MySQL London Meetup – I’m going to focus on MySQL, a branch (Percona Server) and the fork (MariaDB Server). This will be interesting because of feature parity – it’s one of the reasons you don’t see a huge Emacs/XEmacs push after about 20 years. And the work that’s going into MySQL 8.0 is mighty interesting.
  • should be a fun event, as the speakers were hand-picked and the content is heavily curated. I look forward to my first visit there.

Compiling your own firmware for the S822LC for HPC

IBM (my employer) recently announced the new S822LC for HPC server, with POWER8 CPUs, NVLink and NVIDIA P100 GPUs (press release, IBM Systems Blog, The Register). The “for HPC” suffix on the model number is significant, as the plain S822LC is a different machine. What makes the “for HPC” variant different is that the POWER8 CPU has (in addition to PCIe) logic for NVLink to connect the CPU to NVIDIA GPUs.

There are also the NVIDIA Tesla P100 GPUs, which are NVIDIA’s latest in an SXM2 form factor, but instead of delving into GPUs, I’m going to tell you how to compile the firmware for this machine.

You see, this is an OpenPOWER machine. It’s an OpenPOWER machine where the vendor (in this case IBM) has worked to get all the needed code upstream, so you can see exactly what goes into a firmware build.

To build the latest host firmware (you can cross compile on x86 as we use buildroot to build a cross compiler):

git clone --recursive
cd op-build
. op-build-env
op-build garrison_defconfig
op-build

That’s it! Give it a while and you’ll end up with output/images/garrison.pnor – which is a firmware image to flash onto PNOR. The machine name is garrison as that’s the code name for the “S822LC for HPC” (you may see Minsky in the press, but that’s a rather new code name; Garrison has been around for a lot longer as a name).

September 10, 2016

Software Freedom Day Meeting 2016

Sep 17 2016 10:00
Sep 17 2016 16:30

Electron Workshop 31 Arden Street, North Melbourne.

There will not be a regular LUV Beginners workshop for the month of September. Instead, you're going to be in for a much bigger treat!

This month, Free Software Melbourne[1], Linux Users of Victoria[2] and Electron Workshop[3] are joining forces to bring you the local Software Freedom Day event for Melbourne.

The event will take place on Saturday 17th September between 10am and 4:30pm at:

Electron Workshop
31 Arden Street, North Melbourne.

Electron Workshop is on the south side of Arden Street, about half way between Errol Street and Leveson Street. Public transport: 57 tram, nearest stop at corner of Errol and Queensberry Streets; 55 and 59 trams run a few blocks away along Flemington Road; 402 bus runs along Arden Street, but nearest stop is on Errol Street. On a Saturday afternoon, some car parking should be available on nearby streets.

LUV would like to acknowledge Red Hat for their help in obtaining the Trinity College venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

September 09, 2016

Houndbot suspension test fit

I now have a few crossover plates in the works to hold the upgraded suspension in place. See the front wheel of the robot on your right. The bottom side is held in place with a crossover to go from the beam to a 1/4 inch bearing mount. The high side uses one of the hub mount brackets which are a fairly thick alloy and four pretapped attachment blocks. To that I screw my newly minted alloy blocks which have a sequence of M8 sized holes in them. I was unsure of the final fit on the robot so made three holes to give me vertical variance to help set the suspension in the place that I want.

Notice that the high tensile M8 bolt attached to the top suspension is at a slight angle. In the end the top of the suspension will be between the two new alloy plates. To get there I need to trim some waste from the plates, but first I needed to do a test mount to see where and what needs to be trimmed. I now have an idea of what to trim for a final test mount ☺.

Below is a close up view of the coil over showing the good clearance from the tire and wheel assembly and the black markings on the top plate giving an idea of the material that I will be removing so that the top tension nut on the suspension clears the plate.

The mounting hole in the suspension is 8mm diameter. The bearing blocks are for 1/4 inch (~6.35mm) diameters. For test mounting I got some 1/4 inch threaded rod and hacked off about what was needed to get clear of both ends of the assembly. M8 nylock nuts on both sides provide a good first mounting for testing. The crossover plate that I made is secured to the beam by two bolts. At the moment the bearing block is held to the crossover by JB Weld only; I will likely use that to hold the piece, then drill through both chunks of alloy and bolt them together too. It's somewhat interesting how well these sorts of JB and threaded rod assemblies seem to work, but a fracture in the adhesive at 20km/h when landing from a jump without a bolt fallback is asking for trouble.

The top mount is shown below. I originally had the shock around the other way, to give maximum clearance at the bottom so the tire didn't touch the shock. But with the bottom mount out this far I flipped the shock to give maximum clearance to the top mounting plates instead.

So now all I need is to cut down the top plates, drill bolt holes for the bearing to crossover plate at the bottom, sand the new bits smooth, and maybe I'll end up using the threaded rod at the bottom with some JB to soak up the difference from 1/4 inch to M8.

Oh, and another order to get the last handful of parts needed for the mounting.

APM:Plane 3.7.0 released

The ArduPilot development team is proud to announce the release of version 3.7.0 of APM:Plane. This is a major update so please read the notes carefully.

The biggest changes in this release are:

  • more reliable recovery from inverted flight
  • automatic IC engine support
  • Q_ASSIST_ANGLE for stall recovery on quadplanes
  • Pixhawk2 IMU heater support
  • PH2SLIM support
  • AP_Module support
  • Parrot Disco support
  • major VRBrain support merge
  • much faster boot time on Pixhawk

I'll give a bit of detail on each of these changes before giving the more detailed list of changes.

More reliable recovery from inverted flight

Marc Merlin discovered that on some types of gliders ArduPilot would not reliably recover from inverted flight. The problem turned out to be the use of the elevator at high bank angles preventing the ailerons from fully recovering attitude. The fix in this release prevents excessive elevator use when the aircraft is beyond LIM_ROLL_CD. This should help a lot for people using ArduPilot as a recovery system for manual FPV flight.

Automatic IC engine support

ArduPilot has supported internal combustion engines for a long time, but until now the pilot has had to control the ignition and starter manually using transmitter pass throughs. A new "ICE" module in ArduPilot now allows for fully automatic internal combustion engine support.

Coupled with an RPM sensor you can setup your aircraft to automatically control the ignition and starter motor, allowing for one touch start of the motor on the ground and automatic restart of the motor in flight if needed.

The IC engine support is also integrated into the quadplane code, allowing for automatic engine start at a specified altitude above the ground. This is useful for tractor engine quadplanes where the propeller could strike the ground on takeoff. The engine can also be automatically stopped in the final stage of a quadplane landing.

Q_ASSIST_ANGLE for stall recovery

Another new quadplane feature is automatic recovery from fixed wing stall. Previously the VTOL motors would only provide assistance in fixed wing modes when the aircraft airspeed dropped below Q_ASSIST_SPEED. Some stalls can occur with higher airspeed however, and this can result in the aircraft losing attitude control without triggering a Q_ASSIST_SPEED recovery. A new parameter Q_ASSIST_ANGLE allows for automatic assistance when attitude control is lost, triggering when the attitude goes outside the defined roll and pitch limits and is more than Q_ASSIST_ANGLE degrees from the desired attitude. Many thanks to Iskess for the suggestion and good discussion around this feature.

Pixhawk2 heated IMU support

This release adds support for the IMU heater in the upcoming Pixhawk2, allowing for more stable IMU temperatures. The Pixhawk2 is automatically detected and the heater enabled at boot, with the target IMU temperature controllable via BRD_IMU_TARGTEMP.

Using an IMU heater should improve IMU stability in environments with significant temperature changes.

PH2SLIM Support

This release adds support for the PH2SLIM variant of the Pixhawk2, which is a Pixhawk2 cube without the isolated sensor top board. This makes for a very compact autopilot for small aircraft. To enable PH2SLIM support set the BRD_TYPE parameter to 6 using a GCS connected on USB.
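
For example, with MAVProxy as the GCS, setting it is just a parameter write (the serial device path below is an assumption – use whatever your board shows up as, or any other GCS's parameter editor; the last two lines are typed at the MAVProxy prompt):

mavproxy.py --master=/dev/ttyACM0
param set BRD_TYPE 6
param show BRD_TYPE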

AP_Module Support

This is the first release of ArduPilot with loadable module support for Linux based boards. The AP_Module system allows for externally compiled modules to access sensor data from ArduPilot controlled sensors. The initial AP_Module support is aimed at vendors integrating high-rate digital image stabilisation using IMU data, but it is expected this will be expanded to other use cases in future releases.

Parrot Disco Support

This release adds support for the Parrot C.H.U.C.K autopilot in the new Disco airframe. The Disco is a very lightweight flying wing with a nicely integrated Linux based autopilot. The Disco flies very nicely with ArduPilot, bringing the full set of mission capabilities of ArduPilot to this airframe.

Major VRBrain Support Update

This release includes a major merge of support for the VRBrain family of autopilots. Many thanks to the great work by Luke Mike in putting together this merge!

Much Faster Boot Time

Boot times on Pixhawk are now much faster due to a restructuring of the driver startup code, with slow starting drivers not started unless they are enabled with the appropriate parameters. The restructuring also allows for support of a wide variety of board types, including the PH2SLIM above.

This release includes many other updates right across the flight stack, including several new features. Some of the changes include:

  • improved quadplane auto-landing
  • limit roll and pitch by Q_ANGLE_MAX in Q modes
  • improved ADSB avoidance and MAVLink streaming
  • smoother throttle control on fixed-wing to VTOL transition
  • removed "demo servos" movement on boot
  • fixed a problem with spurious throttle output during boot (thanks to Marco for finding this)
  • support MAVLink SET_ATTITUDE_TARGET message
  • log all rally points on startup
  • fixed use of stick mixing for rudder with STICK_MIXING=0
  • fixed incorrect tuning warnings when vtol not active
  • support MAVLink based external GPS device
  • support LED_CONTROL MAVLink message
  • prevent baro update while disarmed for large height change
  • support PLAY_TUNE MAVLink message
  • added AP_Button support for remote button input reporting
  • support Ping2020 ADSB transceiver
  • fixed disarm by rudder in quadplanes
  • support 16 channel SERVO_OUTPUT_RAW in MAVLink2
  • added automatic internal combustion engine support
  • support DO_ENGINE_CONTROL MAVLink message
  • added ground throttle suppression for quadplanes
  • added MAVLink reporting of logging subsystem health
  • prevent motor startup on reboot in quadplanes
  • added quadplane support for Advanced Failsafe
  • added support for a 2nd throttle channel
  • fixed bug in crash detection during auto-land flare
  • lowered is_flying groundspeed threshold to 1.5m/s
  • added support for new FrSky telemetry protocol variant
  • added support for fence auto-enable on takeoff in quadplanes
  • added Q_ASSIST_ANGLE for using quadplane to catch stalls in fixed wing flight
  • added BRD_SAFETY_MASK to allow for channel movement for selected channels with safety on
  • numerous improvements to multicopter stability control for quadplanes
  • support X-Plane10 as SITL backend
  • lots of HAL_Linux improvements to bus and thread handling
  • fixed problem with elevator use at high roll angles that could prevent attitude recovery from inverted flight
  • improved yaw handling in EKF2 near ground
  • added IMU heater support on Pixhawk2
  • allow for faster accel bias learning in EKF2
  • fixed in-flight yaw reset bug in EKF2
  • added AP_Module support for loadable modules
  • support Disco airframe from Parrot
  • use full throttle in initial takeoff in TECS
  • added NTF_LED_OVERRIDE support
  • added terrain based simulation in SITL
  • merged support for wide range of VRBrain boards
  • added support for PH2SLIM and PHMINI boards with BRD_TYPE
  • greatly reduced boot time on Pixhawk and similar boards
  • fixed magic check for signing key in MAVLink2
  • fixed averaging of gyros for EKF2 gyro bias estimate

Many thanks to the many people who have contributed to this release, and happy flying!

September 08, 2016

Speaking at Percona Live Europe Amsterdam

I’m happy to speak at Percona Live Europe Amsterdam 2016 again this year (just look at the awesome schedule). On my agenda:

I’m also signed up for the Community Dinner @, and I reckon you should as well – only 35 spots remain!

Go ahead and register now. You should be able to search Twitter or the Percona blog for discount codes :-)

September 06, 2016

Standard versus Reality

While dereferencing a NULL pointer may be undefined, there’s a legitimate reason to do so: you want to store something at address 0. Why? Well, not all of us are fancy and have an MMU turned on.

Intercepting hotplug on the Freecom FSG-3

The Freecom FSG-3 wireless storage router has four USB ports, and has support for hotplug built into the kernel.  This makes it ideal for use as a docking station for OpenMoko phones.

Unfortunately, it does not have the normal hotplug agent scripts that you expect to find on a desktop Linux distribution.

So you have to roll your own:

  1. Run “mv /sbin/hotplug /sbin/hotplug.freecom”
  2. Create a new “/sbin/hotplug” shell script (the following is an example of how to automatically enable USB networking for an OpenMoko phone):
    #!/bin/sh
    case $1 in
      ( usb )
        case $PRODUCT/$INTERFACE in
          ( 1457/5122/212/2/6/0 ) # OpenMoko GTA01 cdc-ether
            case $ACTION in
              ( add )
                ifconfig usb0 up
                ;;
              ( remove )
                ifconfig usb0 down
                ;;
            esac
            ;;
        esac
        ;;
    esac
    # chain to the original Freecom hotplug handler
    /sbin/hotplug.freecom "$@"

  3. Run “chmod ugo+x /sbin/hotplug” to ensure that your new hotplug script is executable.
  4. See for the list of environment variables you can use to distinguish different devices.

Syncing Treo650 with Google Contacts using CompanionLink for Google

In preparation for a possible move from my Treo 650 to the new Palm Pre, I’ve decided to try and synchronise my contacts between the Treo and Google Contacts.

So I’m evaluating CompanionLink for Google as a possible tool to achieve this. Another option might be GooSync.

I tried syncing some sample contacts in both directions, with the following results:

  1. Google Contacts to PalmOS Contacts

    Syncing a new contact from Google Contacts to PalmOS Contacts results in the following fields being synched correctly:

    • Name
    • Title
    • Company
    • Home Phone
    • Work Phone
    • Mobile Phone
    • E-mail (synched with the Work Email field in Google Contacts)
    • Fax (synched with the Work Fax field in Google Contacts)
    • Home Address
    • Work Address
    • Other Address
    • Notes

    The following Google Contacts fields are not synched:

    • Home Email
    • Other Email
    • Home Fax
    • Pager
    • Other Phone
    • IM Fields (Google Talk, Skype, Jabber, etc)
    • Custom Field

  2. PalmOS Contacts to Google Contacts

    Syncing a new contact from PalmOS Contacts to Google Contacts results in the following fields being synched correctly:

    • Name
    • Title
    • Company
    • Work Email (synched with the first E-mail field in PalmOS Contacts)
    • Home Phone
    • Work Phone
    • Mobile Phone
    • Home Address
    • Work Address
    • Other Address
    • Google Talk (synched with the IM field in PalmOS Contacts)
    • Notes (synched with a combination of the Custom and Note fields in PalmOS Contacts)

    The following PalmOS Contacts fields are not synched:

    • Secondary E-mail entries
    • Other Phone
    • AIM
    • MSN
    • Web site

I then tried duplicating contacts to see if I could determine the primary synchronisation key. Duplicating a contact in PalmOS Contacts and then synchronising did not result in a duplicated contact in Google Contacts. However, changing the E-Mail field in the duplicated contact in PalmOS Contacts was enough to cause it to be created as a second separate record in Google Contacts. So it seems that the PalmOS E-Mail field (which syncs with the Google Work Email field) is the primary key.

Interestingly, even while the PalmOS Contacts HotSync conduit is set up to sync with Google Contacts, the syncing with the Palm Desktop still happens. Indeed, the deletion of a record in the Palm Desktop is reflected in PalmOS Contacts on each sync, but it seems it does not trigger a corresponding deletion in Google Contacts (perhaps there is some QuickSync vs SlowSync thing happening here). Modifying the record in Google Contacts which had already been deleted in PalmOS Contacts (through the Palm Desktop) did cause it to be reinstated in PalmOS Contacts on the next sync.

Adding a new E-Mail field to the PalmOS Contacts record (before the existing field) causes that new field to be the one that is synched with the Google Contacts Work Email field. So it seems that synchronisation happens between the first E-Mail field in PalmOS Contacts and the Work Email field in Google Contacts, and that only one Email address is ever synchronised between the two. If there is no Work Email field in Google Contacts, then all E-Mail fields in PalmOS Contacts are deleted. Additional Email fields in Google Contacts are not replicated in PalmOS Contacts. If an additional E-Mail field is added to PalmOS Contacts, then synchronisation exits with an error on the first attempt (“Can have at most one primary email address, found 2”), and prevents other fields being synchronised, but then succeeds on the second attempt. As long as a Work Email field is synching properly, other non-synching Email fields on both sides are retained (but not synched, even though other non-Email fields are synched).

Five new NSLU2 firmware releases in five days

In the last five days, we have made five new NSLU2 firmware releases:

2007-12-31 – Unslung 6.10-beta Release
2007-12-30 – SlugOS 4.8-beta Release
2007-12-29 – OpenWrt/NSLU2 Kamikaze 7.09 Release
2007-12-28 – Angstrom/NSLU2 2007.12 Release
2007-12-27 – Debian/NSLU2 Stable 4.0r2 Release

All of these new releases are available at

See for an explanation of the pros and cons of each different firmware distribution, and the installable packages available for each.

Thanks to everyone in the NSLU2-Linux, OpenWrt, Angstrom, OpenEmbedded and Debian projects who contributed to these releases.

Remember, if you find any of the firmware or packages that the NSLU2-Linux project provides useful, feel free to make a donation to the project at

We are currently in need of about $500 to buy a RAID controller card and some disks for our autobuild machine to support all this new firmware with up-to-date package feeds …

The Definitive Analysis of Palm Pre App Install Limits and the Palm App Catalog Hanging

After the Preware 0.9.4 release, which included Applications, Themes, and Patches, and offers over 670 homebrew packages for installation, we (  started getting reports of the Palm App Catalog “hanging” the user interface for 30 seconds or more when the installation of a new application is initiated, but only when the Package Manager Service (the service which does all the Linux-level work for the Preware front-end) was installed.

After some analysis, I found that disabling all the feeds in Preware reduced the “hang” from more than 30 seconds to less than a second.

Looking through the output of ‘dbus-util --capture’ showed that the “hang” was during the call to the queryInstallCapacity method of the com.palm.appinstaller service, the routine that the Palm App Catalog application uses to determine whether there is enough space to install a new application from the Palm App Catalog.  Yes, this is the method which is at the heart of the dreaded “Too many applications” errors that people are seeing when they have a number of homebrew applications installed and try to install a new application from the Palm App Catalog.

Watching the output of ‘ps -ef’ during calls to this method (you can call it manually using luna-send) showed that Palm was calling “ipkg -o /var list”.  Curious.  Why would you want to know the *complete* list of all *available* applications when determining whether there is room to install one known new application?  I suspect that Palm should be calling “ipkg -o /var list_installed” instead (which just lists the installed applications).  Note that Palm doesn’t use feeds the way that Preware does, so for Palm’s official use of ipkg, list and list_installed would return the same thing in their testing, but list_installed is definitely what they should be using to determine the size of installed applications.

The plot thickens when you use strace (which Palm conveniently includes in the official firmware image) on the running LunaSysMgr process.

It seems that LunaSysMgr calls “ipkg -o /var list” to get a list of installed packages (they probably intend to just get the list of installed packages, but when you have Preware installed and have feed configuration files in /var/etc/ipkg/*.conf, it actually returns a list of all *available* packages).

LunaSysMgr then does an execve of “/usr/bin/du -s /var/usr/palm/applications/package” for each package in that list.  (BTW Palm, you seem to have a bug in the logic of that code, cause it’s running du on random garbage strings after the end of the real package list)

Yes, that’s right.  A call to queryInstallCapacity spawns a new program (“du”) once for each package returned by “ipkg -o /var list”.  No wonder the UI hangs for 30 seconds or more!

A single “du -s /var/usr/palm/applications/*” would be a far more efficient way to get exactly the same information, but again, Palm would not see this difference in testing because they do not support the third-party Preware application usage of ipkg feeds.

You can imagine that this behaviour is probably related to the app install limit that many people are experiencing too.  Unfortunately, I’ll have to reduce my /var partition size down from its current 2GB size (courtesy of the WebOS Internals Meta-Doctor) to be able to investigate this one.

Now the developers need to develop a new method of installing homebrew applications so that this bug in Palm’s appInstaller service is not triggered.

In the meantime, the work-around is to go into the Preware Preferences screen, hit the “Feeds” button in the top-right corner, and disable all the feeds while you use the Palm App Catalog application in another card.  No need to exit the Feeds screen, just toggle all the buttons to “Off”, and then toggle them back to “On” when you’re finished with the App Catalog.

For the solution to this problem, see Update #2, below.

I’ve created a thread on PreCentral where this issue can be discussed.  As I uncover more information, I’ll publish my findings here.

Update #1: I’ve now webOS Doctored my Pre in the name of science, and have tested the limits of installing applications.

If you run “du -s /var/usr/palm/applications/*” and add up all the numbers in the first column, then as soon as that total plus the size of the app you wish to install hits the 62367 1K-block limit, you will get the dreaded “Sorry, Not Enough Memory” error from the Palm App Catalog application (and any other installer, like fileCoaster or PreLoad, which uses the Palm appInstaller API).  It doesn’t matter whether you have 192MB free in your /var partition, it will max out at just under 64MB of application usage.
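
If you want to check how close your own Pre is to that ceiling, something like this (run from a root shell) adds up the same numbers the installer is effectively working from:

du -s /var/usr/palm/applications/* | awk '{sum += $1} END {print sum " of 62367 1K blocks used"}'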

Update #2: I have now created a Linux Application called “Fair Dinkum App Limit” (org.webosinternals.fairdinkum), which removes both the “hang” and the arbitrary application limit.  You can find it in Preware.  Just install it (no need to even run anything – if it is installed, it’s working), and you’re ready to install more applications than you can poke a stick at …

Fair Dinkum App Limit works by simply putting a couple of wrapper scripts in /usr/local/bin, which return a size of zero when du is called, and return the output of “ipkg -o /var list_installed” when “ipkg -o /var list” is called.  In the future, the wrappers will be made much more sophisticated than they are right now, to prevent operation outside of the specific cases where they need to fool LunaSysMgr, and to also include a safety buffer so that users do not fill the /var partition.  This is a tactical response to a problem that people using homebrew applications are experiencing.  Hopefully, Palm will provide the long term solution for limits on application installation in a release in the near future.
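
To give an idea of the shape of the thing, here is a deliberately simplified sketch of two such wrappers – the real org.webosinternals.fairdinkum package is more careful than this (and, as Update #5 below notes, the current version no longer touches du at all), and the path to the real ipkg binary is an assumption:

# sketch of /usr/local/bin/du - report zero usage so the capacity check passes
#!/bin/sh
echo "0 $2"

# sketch of /usr/local/bin/ipkg - turn "list" into "list_installed", then hand off
#!/bin/sh
if [ "$3" = "list" ]; then
    exec /usr/bin/ipkg "$1" "$2" list_installed
fi
exec /usr/bin/ipkg "$@"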

Notes for Palm, if you happen to read this:

1) We fully appreciate that the usage of the ipkg database in /var for homebrew applications is a choice that the homebrew community has made, and is not supported by Palm.

2) We fully agree that the use of “ipkg -o /var list” instead of “ipkg -o /var list_installed” would work perfectly fine for the way that Palm is officially using the ipkg database in /var/usr/lib/ipkg, but we contend that the “list” and “list_installed” commands have clear intended usage, and the one to find the list of installed applications for checking available disk space should be “list_installed”.

3) We fully realise that the initial version of the FairDinkum scripts is unsafe.  Returning a zero value for du is a temporary solution while we work out how to achieve the same result safely.  The intention is to only return false values when du is being called from LunaSysMgr, and to make sure that a safety buffer is kept so that users do not fill the /var partition.

4) I would be very happy to discuss these issues with anyone at Palm (Chuq and Google both have my email address), and would hope that we can together architect a solution for supporting homebrew application installation which does not require these work-arounds.

5) We have designed these work-arounds in a way which does not collide with Palm OTA Updates, and in a way that we can update them automatically using the Preware installer application, and in a way that we can cause them to self-remove when Palm releases a long term solution.

Update #3:

It seems that there is yet another limit on application installation imposed by LunaSysMgr.  Once the used space on /var crosses around 60%, LunaSysMgr will once again refuse to install applications.

I’m going to need to webOS Doctor my Pre yet again (to reallocate 2GB to /var) to determine whether this limit is a fixed percentage, or a fixed number of blocks.

Update #4:

The limit is 102400 free blocks.  Mystery solved.  That also means the Fair Dinkum App Limit cannot cause your /var to be filled.
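
So if installs start failing even though your homebrew usage looks fine, a quick df will show whether /var has dropped below that free-block threshold:

df -k /var    # the "Available" column is free 1K blocks; installs fail below 102400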

Update #5:

Thanks to Carl Thompson, an improved version of Fair Dinkum App Limit which does not alter the operation of ‘du’ has been released.

Replacing dropbear with openssh

I prefer to use OpenSSH rather than Dropbear on my devices.  The main reason is to get sftp support (which is required by sshfs).  Another reason is to get better support for agent forwarding (which is essential for bouncing from one machine to another without leaving your private keys all over the internet).

To do this on OpenMoko (or any other OpenEmbedded-based distribution for that matter, for instance SlugOS or Angstrom):

  1. Edit /etc/init.d/dropbear by replacing “DROPBEAR_PORT=22” with “DROPBEAR_PORT=2222” (or any other unused port).
  2. Run “ipkg install -force-depends openssh” to install openssh.
  3. Make sure you have set a root password before rebooting (use “passwd” to set it).
  4. Reboot (dropbear will restart on the new port, and openssh will start on the normal ssh port).
  5. Check that openssh is now serving on port 22 by logging into the device over ssh.
  6. Run “ipkg remove -force-depends dropbear” to remove dropbear.
  7. Then run “ipkg install openssh-sftp” to install support for the sftp protocol which sshfs uses.

Palm Pre GPS doesn’t like my hemisphere

It seems the Palm Pre GPS was never tested in the southern hemisphere – my new Pre’s GPS reports Lat: 394.6, Long: 138.6

24926.609       PUB     call    460             :1.26   com.palm.location       //getCurrentPosition  «string=“{}”, string=“com.sfmpllc.sendmylocation 1058”»
24926.641       PRV     call    238             :1.68  /com/palm/phone/tel_getradiotype      «»
24926.661       PRV     return  238     0.020   :1.26   :1.68           «string=“success”, string=“CDMA”»
24926.751       PRV     call    239             :1.68  /com/palm/phone/tel_getbsinfo «»
24926.786       PUB     call    461             com.palm.luna   org.freedesktop.DBus    /org/freedesktop/DBus/AddMatch        «string=“interface=org.freedesktop.DBus,member=NameOwnerChanged,arg0=com.palm.location”»
24926.920       PUB     return  460             com.palm.location       com.palm.luna        «string=“{“errorCode”:0,”timestamp”:1.254820510841E12,”latitude”:394.593215,”longitude”:138.681593,”horizAccuracy”:150,”heading”:0,”velocity”:0,”altitude”:0,”vertAccuracy”:0}”»

24926.609       PUB     call    460             :1.26   com.palm.location       //getCurrentPosition  «string=“{}”, string=“com.sfmpllc.sendmylocation 1058”»

24926.920       PUB     return  460             com.palm.location       com.palm.luna        «string=“{“errorCode”:0,”timestamp”:1.254820510841E12,”latitude”:394.xxxxxx,”longitude”:138.xxxxxx,”horizAccuracy”:150,”heading”:0,”velocity”:0,”altitude”:0,”vertAccuracy”:0}”»

The latitude value should be 34.6 degrees South (or -34.6 degrees). The reported 394.593215 is exactly 360 + 34.593215, which suggests the southern latitude is losing its sign and being wrapped past 360 degrees.

That would explain why Google Maps isn’t working.

Now I need to work out how to replace the Coordinates java class in /usr/lib/luna/java/location.jar, so that the getLatitude method returns a number between -90 and +90 …

I wonder how many WebOS applications will then barf on a negative latitude value …

The PreCentral thread has more information on other GPS tweaks.

Connecting a Treo650 to a Freecom DataTank 2

  1. Install bluez2-utils from Optware
  2. Install the following kernel modules: bluetooth, hci_usb, l2cap, bnep, rfcomm, hidp
  3. Create /dev/rfcomm0 as follows:
    mknod /dev/rfcomm0 c 216 0
  4. Enable routing from ppp0 to eth1 (don’t do this if you use ppp for your internet connection):
    /etc/init.d # diff -u routing.orig routing
    --- routing.orig        Sat Mar 22 18:57:23 2008
    +++ routing     Sat Mar 22 15:14:29 2008
    @@ -37,6 +37,7 @@
            # lo & eth0 always accepted (also if WAN port IP not set)
            /sbin/iptables -A INPUT -i $INIF -j ACCEPT
    +       /sbin/iptables -A INPUT -i ppp0 -j ACCEPT
            /sbin/iptables -A INPUT -i lo -j ACCEPT
            # get IP address from WAN port
    @@ -150,6 +151,7 @@
              /sbin/iptables -A FORWARD -j TCPMSS -o $EXIF --clamp-mss-to-pmtu -p tcp --tcp-flags SYN,RST SYN
            /sbin/iptables -A FORWARD -i $EXIF -o $INIF -m state --state ESTABLISHED,RELATED -j ACCEPT
    +       /sbin/iptables -A FORWARD -i $EXIF -o ppp0 -m state --state ESTABLISHED,RELATED -j ACCEPT
            grep -q ^proxy_server=checked /etc/master.conf
    @@ -163,6 +165,7 @@
              /sbin/iptables -A FORWARD -s $SUBNET -i $INIF -j ACCEPT
    +         /sbin/iptables -A FORWARD -s $SUBNET -i ppp0 -j ACCEPT
            # port forwarding

  5. Edit /etc/ppp/options as follows:

    192.168.1.XX:192.168.1.YY
    ms-dns 192.168.1.ZZ

    (edit the last two lines to suit your network topology, the first IP address
    is your gateway device, the second IP address will be assigned to the client,
    and the third IP address is your DNS server)
  6. Add the following line to /etc/dnsmasq.conf:

The WebOS Internals Meta-Doctor

Palm supplies Palm Pre owners with this wonderful recovery tool called the webOS Doctor.  Part of the charter of the WebOS Internals project is to ensure that anything we (or anyone following instructions we publish or installing packages we develop) do can always be reverted using the webOS Doctor.

Usually, a Palm Pre is required to be activated on the Sprint network before it can be used.  This is not possible for a Palm Pre in Australia.

So we need to allow the Pre to be used without activation, and there are a number of information sources showing how this can be done.  There are also some dubious sites that redistribute modified versions of the webOS Doctor (which is a clear violation of copyright law, since it contains proprietary Palm software).  Note that WebOS Internals is always careful to comply with all copyright laws (copyright law is the foundation upon which open source licenses are based).

So we need a way for a Pre owner (who has the right to use the webOS Doctor on their own Pre) to modify the webOS Doctor that is specific to their particular version of the Palm Pre before using it to flash that modified firmware onto their Pre.

That’s where the WebOS Internals “Meta-Doctor” comes into play.

I have created a tool which will download, unpack, patch, and repack a webOS Doctor image, applying a number of transformations along the way to:

  1. Bypass the need for activation
  2. Enable Palm Profile access
  3. Set developer mode on by default
  4. Increase the size of the /var partition to 2GB

You can find this tool in the WebOS Internals source code repository at

Do not redistribute modified versions of the webOS Doctor created using this tool – it is for end-user use only.

I’ve created a forum thread on PreCentral for discussion about this tool.

Setting the OpenMoko timezone

If you want to set the timezone on your phone correctly, do the following:

  1. ipkg install tzdata
  2. ipkg install your desired tzdata-* packages.  For instance, I use “tzdata-australia“.
  3. Enable your desired timezone by symlinking it to “/etc/localtime“.  Adjust the following example command line for your locality.
    • ln -s /usr/share/zoneinfo/Australia/Adelaide /etc/localtime
  4. The “date” command should now show the correct time for your timezone.  If it is not correct, then install the “ntpclient” package, and use it to set your clock.

Note that this technique should work on any OpenEmbedded-based Linux distribution.

    The Palm Pre lands in Australia

    Thanks to the generosity of the PreCentral and WebOS Internals communities, I am now the proud owner of a Palm Pre.

    There is just one catch  – since I live in Australia (which uses a proper cellular communications standard), the CDMA Palm Pre that I was able to import from the USA will never work as a phone here (yes, I knew this before I purchased it).  I plan to also purchase an unlocked GSM/UMTS Pre when one becomes available (maybe I’ll buy a German one and then swap the two keyboards).

    After founding the WebOS Internals project, and using the Pre Emulator in the development of Preware, it is great to have a real device to get the full Pre experience.

    If you want to keep up to date with the activities of the WebOS Internals group, just follow @webosinternals on Twitter.  You can also find a list of articles about WebOS Internals in our Press_Room.  We hang out in the #webos-internals IRC channel on Freenode, and have a webos-internals-announce Google group.

    I still use my trusty old Treo 650 as my daily phone, which allows me to not have to worry about reflashing the Pre to try out things, as I don’t keep any real personal data on it.

    I guess this also closes the OpenMoko chapter of my open source development activities.  I was involved with OpenMoko from the very start, but always said that an OpenMoko device with a hardware keyboard was my preferred form factor, and the Pre seems to satisfy that personal hardware form factor preference whilst still being open enough on the software side to attract my interest.  I wish those who are continuing the OpenMoko path the best of success.

    I’ll document my experiences with the Pre in subsequent posts …

    September 04, 2016

    Diplomacy, Russia Vs USA, and More

    Over and over again, the US and Russia (and other countries) seem to get in each other's way. The irony is that while there are many out there who believe that they can come to an 'agreement' of sorts, the more I look the more difficult I find it to see this happening. First, let's introduce ourselves to modern diplomacy through some videos Diplomacy

    Houndbot rolling stock upgrade

    After getting Terry the robot to navigate around inside with multiple Kinects as depth sensors I have now turned my attention to outdoor navigation using two cameras as sensors. The cameras are from a PS4 eye which I hacked to be able to connect to a normal machine. The robot originally used 5.4 inch wheels which were run with foam inside them. This sort of arrangement can be seen in many builds in the Radio Controlled (RC) world and worked well when the robot was simple and fairly light. Now that it is well over 10kg the same RC style build doesn't necessarily still work. Foam compresses a bit too easily.

    I have upgraded to 12 inch wheels with air tube tires. This jump seemed a bit risky: would the new setup overwhelm the robot? Once I modified the wheels and came up with an initial mounting scheme to test, I think the 12 inch is closer to what the robot naturally wants to have. This should boost the maximum speed of the machine to around 20km/h, which is probably as much as you might want on something autonomous. For example, if your robot can outrun you, things get interesting.

    I had to get the wheels attached in order to work out clearances for the suspension upgrade. While the original suspension worked great for a robot that you only add 1-2kg to, with an itx case, two batteries, a fused power supply etc things seem to have added up to too much weight for the springs to counter.

    I now have some new small 'coil overs' in hand which are taken from mini mountain bike suspension. They are too heavy for what I am using, at around 600lb/inch compression. I have in mind some places that sell coil overs in between the RC ones and the push bike ones, with slightly higher travel distance, which I may end up using.

    As the photo reveals, I don't actually have the new suspension attached yet. I'm thinking about a setup based around two bearing mounts from sparkfun. I'd order from servocity but sfe has cheaper intl shipping :o Anyway, two bearing mounts at the top, two at the bottom and a steel shaft that is 8mm in the middle and 1/4 inch (6.35mm) on the edges. Creating the shafts like that, with the 8mm part just the right length will trap the shaft between the two bearing mounts for me. I might tack weld on either side of the coil over mounts so there is no side to side movement of the suspension.

    Yes, hubs and clamping collars were my first thought for the build and would be nice, but a reasonable result for a manageable price is also a factor.

    September 03, 2016

    Websockets + on the ESP8266 w/ Micropython

    I recently learned about the ESP8266 while at Pycon AU. It's pretty nifty: it's tiny, it has wifi, a reasonable amount of RAM (for a microcontroller) oh, and it can run Python. Specifically Micropython. Anyway I purchased a couple from Adafruit (specifically this one) and installed the Micropython UNIX port on my computer (be aware that the cheaper ESP8266 boards might not be very reflashable, or so I've been told; spend the extra money for one with decent flash).

    The first thing you learn is that the ports are all surprisingly different in terms of what functionality they support, and the docs don't make it clear like they do for CPython. I learned the hard way there is a set of docs per port, which maybe is why the method you're looking for isn't there.

    The other thing is that even though you’re getting to write in Python, and it has many Pythonic abstractions, many of those abstractions are based around POSIX and leak heavily on microcontrollers. Still a number of them look implementable without actually reinventing UNIX (probably).

    The biggest problem at the moment is there's no "platform independent" way to do asynchronous IO. On the microcontroller you can set top-half interrupt handlers for IO events (no malloc here, yay!), gate the CPU, and then execute bottom halves from the main loop. However that's not going to work on UNIX. Or you can use select, but that's not available on the ESP8266 (yet). Micropython does support Python 3.5 asyncio coroutines, so hopefully the port of asyncio to the ESP8266 happens soon. I'd be so especially ecstatic if I could do await pin.trigger(Pin.FALLING).

    There are a few other things that could really help make it feel like Python. Why isn't disabling interrupts a context manager/decorator? It's great that you can try/finally your interrupt code, but the with keyword is so much more Pythonic. Perhaps this is because the code is being written by microprocessor people… which is why they're so into protocols like MQTT for talking to their devices.

    Don’t get me wrong, MQTT is a great protocol that you can cram onto all sorts of devices, with all sorts of crappy PHYs, but I have wifi, and working SSL. I want to do something more web 2.0. Something like websockets. In fact, I want to take another service’s REST API and websockets, and deliver that information to my device… I could build a HTTP server + MQTT broker, but that sounds like a pain. Maybe I can just build a web server with and connect to that directly from the device?!

    The ESP8266 already has some very basic websocket support for its WebREPL, but that’s not very featureful and seems to only implement half of the spec. If we’re going to have Python on a device, maybe we can have something that looks like the great websockets module. Turns out we can! is a little harder, it requires a handshake which is not documented (I reversed it in the end), and decoding a HTTP payload, which is not very clearly documented (had to read the source). It’s not the most efficient protocol out there, but the chip is more than fast enough to deal with it. Also fun times, it turns out there’s no platform independent way to return from waiting for IO. Basically it turned out there were a lot of yaks to shave.

    Where it all comes into its own though is the ability to write what is pretty much everyday, beautiful Python; however you cut it, it's worth it over Arduino sketches or whatever else takes your fancy.

    uwebsockets/usocketio on Github.

    Electronics breadboard with a project on it sitting on a laptop keyboard

    LUV Main September 2016 Meeting: Spartan / The Future is Awesome

    Sep 6 2016 18:30
    Sep 6 2016 20:30

    6th Floor, 200 Victoria St. Carlton VIC 3053


    • Lev Lafayette, Spartan: A Linux HPC/Cloud Hybrid
    • Paul Fenwick, The Future is Awesome (and what you can do about it)

    200 Victoria St. Carlton VIC 3053

    Late arrivals, please call (0490) 049 589 for access to the venue.

    Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

    LUV would like to acknowledge Red Hat and Infoxchange for their help in obtaining the meeting venues.

    Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

    August 31, 2016

    Configuring QEMU bridge helper after “access denied by acl file” error

    QEMU has a neat bridge-helper utility which allows a non-root user to easily connect a virtual machine to a bridged interface. In Fedora at least, qemu-bridge-helper runs setuid root (so it runs with root privileges no matter which user invokes it) and privileges are immediately dropped to cap_net_admin. It also has a simple white/blacklist ACL mechanism in place which limits connections to virbr0, libvirt's local area network.

    That’s all great, but often you actually want a guest to be a part of your real network. This means it must connect to a bridged interface (often br0) on a physical network device.

If your user tries to kick up such a QEMU guest while specifying bridge,br=br0, with something like this (although probably also with a disk, or a kernel and initramfs):

    qemu-system-x86_64 \
    -machine accel=kvm \
    -cpu host \
    -netdev bridge,br=br0,id=net0 \
    -device virtio-net-pci,netdev=net0

    You may run into the following error:

    access denied by acl file
    qemu-system-ppc64: -netdev bridge,br=br0,id=net0: bridge helper failed

As mentioned above, this is because the QEMU bridge ACL config file /etc/qemu/bridge.conf restricts bridged interfaces to virbr0 for all users by default. So how to make this work more nicely?

    One way is to simply edit the main config file and change virbr0 to all, however that’s not particularly fine-grained.

    Instead, we could create a new config file for the user which specifies any (or all) bridge devices that this user is permitted to connect guests to. This way all other users are restricted to virbr0 while your user can connect to other bridges.

This doesn’t have to be a user; it could also be a group (just substitute the group name for ${USER} below), and you can also add multiple files.

    Instead of allow you can use deny in the same way to prevent a user or group from attaching to any or all bridges.

    So, let’s create a new file for our user and give them access to all interfaces (requires sudo):

    echo "allow all" | sudo tee /etc/qemu/${USER}.conf
    echo "include /etc/qemu/${USER}.conf" | sudo tee --append /etc/qemu/bridge.conf
    sudo chown root:${USER} /etc/qemu/${USER}.conf
    sudo chmod 640 /etc/qemu/${USER}.conf

    This user should now be able to successfully kick up the guest connected to br0.
    qemu-system-x86_64 \
    -machine accel=kvm \
    -cpu host \
    -netdev bridge,br=br0,id=net0 \
    -device virtio-net-pci,netdev=net0

    August 29, 2016

    Monitoring of Monitoring

    I was recently asked to get data from a computer that controlled security cameras after a crime had been committed. Due to the potential issues I refused to collect the computer and insisted on performing the work at the office of the company in question. Hard drives are vulnerable to damage from vibration and there is always a risk involved in moving hard drives or systems containing them. A hard drive with evidence of a crime provides additional potential complications. So I wanted to stay within view of the man who commissioned the work just so there could be no misunderstanding.

    The system had a single IDE disk. The fact that it had an IDE disk is an indication of the age of the system. One of the benefits of SATA over IDE is that swapping disks is much easier, SATA is designed for hot-swap and even systems that don’t support hot-swap will have less risk of mechanical damage when changing disks if SATA is used instead of IDE. For an appliance type system where a disk might be expected to be changed by someone who’s not a sysadmin SATA provides more benefits over IDE than for some other use cases.

    I connected the IDE disk to a USB-IDE device so I could read it from my laptop. But the disk just made repeated buzzing sounds while failing to spin up. This is an indication that the drive was probably experiencing “stiction” which is where the heads stick to the platters and the drive motor isn’t strong enough to pull them off. In some cases hitting a drive will get it working again, but I’m certainly not going to hit a drive that might be subject to legal action! I recommended referring the drive to a data recovery company.

    The probability of getting useful data from the disk in question seems very low. It could be that the drive had stiction for months or years. If the drive is recovered it might turn out to have data from years ago and not the recent data that is desired. It is possible that the drive only got stiction after being turned off, but I’ll probably never know.

    Doing it Properly

Ever since RAID was introduced there has never been an excuse for having a single disk on its own with important data. Linux Software RAID didn’t support online rebuild when 10G was a large disk. But since the late 90’s it has worked well and there’s no reason not to use it. The probability of a single IDE disk surviving long enough on its own to capture useful security data is not particularly good.

    Even with 2 disks in a RAID-1 configuration there is a chance of data loss. Many years ago I ran a server at my parents’ house with 2 disks in a RAID-1 and both disks had errors on one hot summer. I wrote a program that’s like ddrescue but which would read from the second disk if the first gave a read error and ended up not losing any important data AFAIK. BTRFS has some potential benefits for recovering from such situations but I don’t recommend deploying BTRFS in embedded systems any time soon.

Monitoring is a requirement for reliable operation. For desktop systems you can get by without specific monitoring, but that is because you are effectively relying on the user monitoring it themselves. Since I started using mon (which is very easy to set up) I’ve had it notify me of some problems with my laptop that I wouldn’t have otherwise noticed. I think that ideally for desktop systems you should have monitoring of disk space, temperature, and certain critical daemons that need to be running but which the user wouldn’t immediately notice if they crashed (such as cron and syslogd).

There are some companies that provide 3G SIMs for embedded/IoT applications with rates that are significantly cheaper than any of the usual phone/tablet plans if you use small amounts of data or SMS. For a reliable CCTV system the best thing to do would be to have a monitoring contract and have the monitoring system trigger an event if there’s a problem with the hard drive etc. and also if the system fails to send an “I’m OK” message for a certain period of time.
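
As a minimal sketch of the “I’m OK” idea (the file name and the 15 minute threshold below are made up, and the actual alerting would be whatever your monitoring system provides):

import os
import time

HEARTBEAT_FILE = "/var/run/cctv-heartbeat"  # hypothetical file touched by the recorder
MAX_SILENCE = 15 * 60                       # hypothetical threshold in seconds

def heartbeat_ok(path=HEARTBEAT_FILE, max_silence=MAX_SILENCE):
    """Return True if the recorder has checked in recently."""
    try:
        age = time.time() - os.stat(path).st_mtime
    except OSError:
        return False  # no heartbeat file at all counts as a failure
    return age <= max_silence

# a monitoring system such as mon would run this periodically and raise an
# alert (e.g. an SMS over the 3G SIM) whenever it returns False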

    I don’t know if people are selling CCTV systems without monitoring to compete on price or if companies are cancelling monitoring contracts to save money. But whichever is happening it’s significantly reducing the value derived from monitoring.

    [mtb/events] Oxfam Trailwalker - Sydney 2016 - ARNuts

    A great day out on the trail with friends (fullsize)
Though it did not really hit me in the lead-up, or during the event until half way, that it was yet another 100km, and these are indeed somewhat tough to get through. The day out in the bush with my friends Alex, David and Julie was awesome.

As I say in the short report with the photos linked below, Oxfam is a great charity, and that they have these trailwalker events in many places around the world to fundraise and get people to enjoy some quality outdoor time is pretty awesome. This is a hard course (that it took us 14h30m to get through shows that), but it sure is pretty: amazing native flowers, views (waterways and bush), and the fact that it gets into Manly with you hardly realising you are in the middle of the biggest city in Australia is awesome.

My words and photos are online in my Oxfam Trailwalker - Sydney 2016 - ARnuts gallery. What a fun day out!

    August 28, 2016


    I wrote my first Mojolicious web app yesterday, a cloud-init meta-data server to enable running pre-built VM images (e.g. as provided by debian, ubuntu, etc) without having to install and manage a complete, full-featured cloud environment like openstack.

    I hacked up something similar several years ago when I was regularly building VM images at home for openstack at work, with just plain-text files served by apache, but that had pretty-much everything hard-coded. fakecloud does a lot more and allows per-VM customisation of user-data (using the IP address of the requesting host). Not bad for a day’s hacking with a new web framework.

    fakecloud is a post from: Errata

    August 26, 2016

    Live migrating Btrfs from RAID 5/6 to RAID 10

Recently it was discovered that the RAID 5/6 implementation in Btrfs is broken, due to the fact that it can miscalculate parity (which is rather important in RAID 5 and RAID 6).

So what to do with an existing setup that’s running native Btrfs RAID 5/6?

    Well, fortunately, this issue doesn’t affect non-parity based RAID levels such as 1 and 0 (and combinations thereof) and it also doesn’t affect a Btrfs filesystem that’s sitting on top of a standard Linux Software RAID (md) device.

    So if down-time isn’t a problem, we could re-create the RAID 5/6 array using md and put Btrfs back on top and restore our data… or, thanks to Btrfs itself, we can live migrate it to RAID 10!

A few caveats though. When using RAID 10, space efficiency is reduced to 50% of your drives, no matter how many you have (this is because it’s mirrored). By comparison, with RAID 5 you lose a single drive’s worth of space, and with RAID 6 it’s two, no matter how many drives you have.

    This is important to note, because a RAID 5 setup with 4 drives that is using more than 2/3rds of the total space will be too big to fit on RAID 10. Btrfs also needs space for System, Metadata and Reserves so I can’t say for sure how much space you will need for the migration, but I expect considerably more than 50%. In such cases, you may need to add more drives to the Btrfs array first, before the migration begins.
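
As a rough back-of-the-envelope sketch of those space rules (this is not a Btrfs tool, and it ignores System, Metadata and GlobalReserve overhead):

def usable_tb(num_drives, drive_tb):
    """Approximate usable data space for equal-sized drives."""
    return {
        "raid5": drive_tb * (num_drives - 1),   # lose one drive's worth
        "raid6": drive_tb * (num_drives - 2),   # lose two drives' worth
        "raid10": drive_tb * num_drives / 2,    # mirrored: half of everything
    }

# four 4TB drives: RAID 5 gives ~12TB usable but RAID 10 only ~8TB, so a
# RAID 5 array more than about 2/3 full will not fit after converting
print(usable_tb(4, 4))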

    So, you will need:

    • At least 4 drives
    • An even number of drives (unless you keep one as a spare)
• Data in use that is much less than 50% of the total capacity provided by all drives (usable RAID 10 space is roughly the combined capacity divided by 2)

    Of course, you’ll have a good, tested, reliable backup or two before you start this. Right? Good.

    Plug any new disks in and partition or luksFormat them if necessary. We will assume your new drive is /dev/sdg, you’re using dm-crypt and that Btrfs is mounted at /mnt. Substitute these for your actual settings.
    cryptsetup luksFormat /dev/sdg
    UUID="$(cryptsetup luksUUID /dev/sdg)"
    echo "luks-${UUID} UUID=${UUID} none" >> /etc/crypttab
cryptsetup luksOpen /dev/sdg luks-${UUID}
    btrfs device add /dev/mapper/luks-${UUID} /mnt

    The migration is going to take a long time, so best to run this in a tmux or screen session.

    time btrfs balance /mnt
    time btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt

    After this completes, check that everything has been migrated to RAID 10.
btrfs fi df /mnt
    Data, RAID10: total=2.19TiB, used=2.18TiB
    System, RAID10: total=96.00MiB, used=240.00KiB
    Metadata, RAID10: total=7.22GiB, used=5.40GiB
    GlobalReserve, single: total=512.00MiB, used=0.00B

    If you still see some RAID 5/6 entries, run the same migrate command and then check that everything has migrated successfully.

    Now while we’re at it, let’s defragment everything.
time btrfs filesystem defragment /mnt # this defrags the metadata
time btrfs filesystem defragment -r /mnt # this defrags data

    For good measure, let’s rebalance again without the migration (this will also take a while).
time btrfs fi balance start --full-balance /mnt

    August 25, 2016

    Debugging gnome-session problems on Ubuntu 14.04

    After upgrading an Ubuntu 14.04 ("trusty") machine to the latest 16.04 Hardware Enablement packages, I ran into login problems. I could log into my user account and see the GNOME desktop for a split second before getting thrown back into the LightDM login manager.

    The solution I found was to install this missing package:

    apt install libwayland-egl1-mesa-lts-xenial

    Looking for clues in the logs

    The first place I looked was the log file for the login manager (/var/log/lightdm/lightdm.log) where I found the following:

    DEBUG: Session pid=12743: Running command /usr/sbin/lightdm-session gnome-session --session=gnome
    DEBUG: Creating shared data directory /var/lib/lightdm-data/username
    DEBUG: Session pid=12743: Logging to .xsession-errors

    This told me that the login manager runs the gnome-session command and gets it to create a session of type gnome. That command line is defined in /usr/share/xsessions/gnome.desktop (look for Exec=):

    [Desktop Entry]
    Comment=This session logs you into GNOME
    Exec=gnome-session --session=gnome

    I couldn't see anything unexpected there, but it did point to another log file (~/.xsession-errors) which contained the following:

    Script for ibus started at run_im.
    Script for auto started at run_im.
    Script for default started at run_im.
    init: Le processus gnome-session (GNOME) main (11946) s'est achevé avec l'état 1
    init: Déconnecté du bus D-Bus notifié
    init: Le processus logrotate main (11831) a été tué par le signal TERM
    init: Le processus update-notifier-crash (/var/crash/_usr_bin_unattended-upgrade.0.crash) main (11908) a été tué par le signal TERM

Searching for French error messages isn't as useful as searching for English ones, so I took a look at /var/log/syslog and found this:

    gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
    gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
    gnome-session[4134]: WARNING: App 'gnome-shell.desktop' respawning too quickly
    gnome-session[4134]: CRITICAL: We failed, but the fail whale is dead. Sorry....

    It looks like gnome-session is executing gnome-shell and that this last command is terminating prematurely. This would explain why gnome-session exits immediately after login.

    Increasing the amount of logging

    In order to get more verbose debugging information out of gnome-session, I created a new type of session (GNOME debug) by copying the regular GNOME session:

    cp /usr/share/xsessions/gnome.desktop /usr/share/xsessions/gnome-debug.desktop

    and then adding --debug to the command line inside gnome-debug.desktop:

    [Desktop Entry]
    Name=GNOME debug
    Comment=This session logs you into GNOME debug
    Exec=gnome-session --debug --session=gnome
    X-LightDM-DesktopName=GNOME debug

    After restarting LightDM (service lightdm restart), I clicked the GNOME logo next to the password field and chose GNOME debug before trying to login again.

    This time, I had a lot more information in ~/.xsession-errors:

    gnome-session[12878]: DEBUG(+): GsmAutostartApp: starting gnome-shell.desktop: command=/usr/bin/gnome-shell startup-id=10d41f1f5c81914ec61471971137183000000128780000
    gnome-session[12878]: DEBUG(+): GsmAutostartApp: started pid:13121
    /usr/bin/gnome-shell: error while loading shared libraries: cannot open shared object file: No such file or directory
    gnome-session[12878]: DEBUG(+): GsmAutostartApp: (pid:13121) done (status:127)
    gnome-session[12878]: WARNING: App 'gnome-shell.desktop' exited with code 127

    which suggests that gnome-shell won't start because of a missing library.

    Finding the missing library

    To find the missing library, I used the apt-file command:

    apt-file update
    apt-file search

    and found that this file is provided by the following packages:

    • libhybris
    • libwayland-egl1-mesa
    • libwayland-egl1-mesa-dbg
    • libwayland-egl1-mesa-lts-utopic
    • libwayland-egl1-mesa-lts-vivid
    • libwayland-egl1-mesa-lts-wily
    • libwayland-egl1-mesa-lts-xenial

    Since I installed the LTS Enablement stack, the package I needed to install to fix this was libwayland-egl1-mesa-lts-xenial.

    I filed a bug for this on Launchpad.

    August 24, 2016

    Small fix for AMP WordPress plugin

If you use the AMP plugin for WordPress to make AMP (Accelerated Mobile Pages) versions of your posts and have some trouble validating them with the AMP validator, you may try this fix for the AMP plugin to make those pages valid.

    August 20, 2016

    Basics of Backups

    I’ve recently had some discussions about backups with people who aren’t computer experts, so I decided to blog about this for the benefit of everyone. Note that this post will deliberately avoid issues that require great knowledge of computers. I have written other posts that will benefit experts.

    Essential Requirements

    Everything that matters must be stored in at least 3 places. Every storage device will die eventually. Every backup will die eventually. If you have 2 backups then you are covered for the primary storage failing and the first backup failing. Note that I’m not saying “only have 2 backups” (I have many more) but 2 is the bare minimum.

    Backups must be in multiple places. One way of losing data is if your house burns down, if that happens all backup devices stored there will be destroyed. You must have backups off-site. A good option is to have backup devices stored by trusted people (friends and relatives are often good options).

    It must not be possible for one event to wipe out all backups. Some people use “cloud” backups, there are many ways of doing this with Dropbox, Google Drive, etc. Some of these even have free options for small amounts of storage, for example Google Drive appears to have 15G of free storage which is more than enough for all your best photos and all your financial records. The downside to cloud backups is that a computer criminal who gets access to your PC can wipe it and the backups. Cloud backup can be a part of a sensible backup strategy but it can’t be relied on (also see the paragraph about having at least 2 backups).

    Backup Devices

    USB flash “sticks” are cheap and easy to use. The quality of some of those devices isn’t too good, but the low price and small size means that you can buy more of them. It would be quite easy to buy 10 USB sticks for multiple copies of data.

    Stores that sell office-supplies sell USB attached hard drives which are quite affordable now. It’s easy to buy a couple of those for backup use.

    The cheapest option for backing up moderate amounts of data is to get a USB-SATA device. This connects to the PC by USB and has a cradle to accept a SATA hard drive. That allows you to buy cheap SATA disks for backups and even use older disks as backups.

    With choosing backup devices consider the environment that they will be stored in. If you want to store a backup in the glove box of your car (which could be good when travelling) then a SD card or USB flash device would be a good choice because they are resistant to physical damage. Note that if you have no other options for off-site storage then the glove box of your car will probably survive if your house burns down.

    Multiple Backups

    It’s not uncommon for data corruption or mistakes to be discovered some time after it happens. Also in recent times there is a variety of malware that encrypts files and then demands a ransom payment for the decryption key.

    To address these problems you should have older backups stored. It’s not uncommon in a corporate environment to have backups every day stored for a week, backups every week stored for a month, and monthly backups stored for some years.

    For a home use scenario it’s more common to make backups every week or so and take backups to store off-site when it’s convenient.

    Offsite Backups

    One common form of off-site backup is to store backup devices at work. If you work in an office then you will probably have some space in a desk drawer for personal items. If you don’t work in an office but have a locker at work then that’s good for storage too, if there is high humidity then SD cards will survive better than hard drives. Make sure that you encrypt all data you store in such places or make sure that it’s not the secret data!

    Banks have a variety of ways of storing items. Bank safe deposit boxes can be used for anything that fits and can fit hard drives. If you have a mortgage your bank might give you free storage of “papers” as part of the service (Commonwealth Bank of Australia used to offer that). A few USB sticks or SD cards in an envelope could fit the “papers” criteria. An accounting firm may also store documents for free for you.

If you put a backup on USB or SD storage in your wallet then that can also be a good offsite backup. For most people losing data from disk is more common than losing their wallet.

    A modern mobile phone can also be used for backing up data while travelling. For a few years I’ve been doing that. But note that you have to encrypt all data stored on a phone so an attacker who compromises your phone can’t steal it. In a typical phone configuration the mass storage area is much less protected than application data. Also note that customs and border control agents for some countries can compel you to provide the keys for encrypted data.

A friend suggested burying a backup device in a sealed plastic container filled with desiccant. That would survive your house burning down and in theory should work. I don’t know of anyone who’s tried it.


    On occasion you should try to read the data from your backups and compare it to the original data. It sometimes happens that backups are discovered to be useless after years of operation.

    Secret Data

    Before starting a backup it’s worth considering which of the data is secret and which isn’t. Data that is secret needs to be treated differently and a mixture of secret and less secret data needs to be treated as if it’s all secret.

    One category of secret data is financial data. If your accountant provides document storage then they can store that, generally your accountant will have all of your secret financial data anyway.

    Passwords need to be kept secret but they are also very small. So making a written or printed copy of the passwords is part of a good backup strategy. There are options for backing up paper that don’t apply to data.

One category of data that is not secret is photos. Photos of holidays, friends, etc are generally not that secret and they can also comprise a large portion of the data volume that needs to be backed up. Apparently some people have a backup strategy for such photos that involves downloading from Facebook to restore; that will help with some problems but it’s not adequate overall. But any data that is on Facebook isn’t that secret and can be stored off-site without encryption.

    Backup Corruption

    With the amounts of data that are used nowadays the probability of data corruption is increasing. If you use any compression program with the data that is backed up (even data that can’t be compressed such as JPEGs) then errors will be detected when you extract the data. So if you have backup ZIP files on 2 hard drives and one of them gets corrupt you will easily be able to determine which one has the correct data.
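
As a concrete sketch of that check, using nothing but the Python standard library (the paths are just examples):

import zipfile

def first_bad_member(path):
    """Return the name of the first corrupt file in a ZIP archive, or None."""
    with zipfile.ZipFile(path) as zf:
        return zf.testzip()  # re-reads every member and verifies its CRC

for copy in ("/backups/photos.zip", "/media/usb-drive/photos.zip"):
    print(copy, "OK" if first_bad_member(copy) is None else "CORRUPT")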

    Failing Systems – update 2016-08-22

    When a system starts to fail it may limp along for years and work reasonably well, or it may totally fail soon. At the first sign of trouble you should immediately make a full backup to separate media. Use different media to your regular backups in case the data is corrupt so you don’t overwrite good backups with bad ones.

    One traditional sign of problems has been hard drives that make unusual sounds. Modern drives are fairly quiet so this might not be loud enough to notice. Another sign is hard drives that take unusually large amounts of time to read data. If a drive has some problems it might read a sector hundreds or even thousands of times until it gets the data which dramatically reduces system performance. There are lots of other performance problems that can occur (system overheating, software misconfiguration, and others), most of which are correlated with potential data loss.

A modern SSD storage device (as used in a lot of the recent laptops) doesn’t tend to go slow when it nears the end of its life. It is more likely to just randomly fail entirely and then work again after a reboot. There are many causes of systems randomly hanging or crashing (of which overheating is common), but they are all correlated with data loss so a good backup is a good idea.

    When in doubt make a backup.

    Any Suggestions?

    If you have any other ideas for backups by typical home users then please leave a comment. Don’t comment on expert issues though, I have other posts for that.

    August 19, 2016

    Speaking in August 2016

    I know this is a tad late, but there have been some changes, etc. recently, so apologies for the delay of this post. I still hope to meet many of you to chat about MySQL/Percona Server/MariaDB Server, MongoDB, open source databases, and open source in general in the remainder of August 2016.

    • LinuxCon+ContainerCon North America – August 22-24 2016 – Westin Harbour Castle, Toronto, Canada – I’ll be speaking about lessons one can learn from database failures and enjoying the spectacle that is the 25th anniversary of Linux!
    • Chicago MySQL Meetup Group – August 29 2016 – Vivid Seats, Chicago, IL – more lessons from database failures here, and I’m looking forward to meeting users, etc. in the Chicago area

While not speaking, Vadim Tkachenko and I will be present at the @scale conference. I really enjoyed my time there previously, and if you get an invite, it’s truly a great place to learn and network.

    August 17, 2016

    Getting In Sync

    Since at least v1.0.0 Petitboot has used device-mapper snapshots to avoid mounting block devices directly. Primarily this is so Petitboot can mount disks and potentially perform filesystem recovery without worrying about messing it up and corrupting a host's boot partition - all changes happen to the snapshot in memory without affecting the actual device.

This of course gets in the way if you actually do want to make changes to a block device. Petitboot will allow certain bootloader scripts to make changes to disks if configured (eg, grubenv updates), but if you manually make changes you would need to know the special sequence of dmsetup commands to merge the snapshots back to disk. This is particularly annoying if you're trying to copy logs to a USB device!

    Depending on how recent a version of Petitboot you're running, there are two ways of making sure your changes persist:

    Before v1.2.2

    If you really need to save changes from within Petitboot, the most straightforward way is to disable snapshots. Drop to the shell and enter

    nvram --update-config petitboot,snapshots?=false

    Once you have rebooted you can remount the device as read-write and modify it as normal.

    After v1.2.2

    To make this easier while keeping the benefit of snapshots, v1.2.2 introduces a new user-event that will merge snapshots on demand. For example:

    mount -o remount,rw /var/petitboot/mnt/dev/sda2
    cp /var/log/messages /var/petitboot/mnt/dev/sda2/
    pb-event sync@sda2

    After calling pb-event sync@yourdevice, Petitboot will remount the device back to read-only and merge the current snapshot differences back to disk. You can also run pb-event sync@all to sync all existing snapshots if desired.

    What’s next

    I received an overwhelming number of comments when I said I was leaving MariaDB Corporation. Thank you – it is really nice to be appreciated.

    I haven’t left the MySQL ecosystem. In fact, I’ve joined Percona as their Chief Evangelist in the CTO Office, and I’m going to focus on the MySQL/Percona Server/MariaDB Server ecosystem, while also looking at MongoDB and other solutions that are good for Percona customers. Thanks again for the overwhelming response on the various social media channels, and via emails, calls, etc.

    Here’s to a great time at Percona to focus on open source databases and solutions around them!

    My first blog post on the Percona blog – I’m Colin Charles, and I’m here to evangelize open source databases!, the press release.

    August 15, 2016

    Neo-Colonialism and Neo-Liberalism, Intelligence Analysis, and More

Watch a lot of media outlets and, over and over again, you hear the terms 'Neocolonialism' and 'Free Trade'. Until fairly recently, I wasn't entirely aware of what exactly this meant and how it came to be. As indicated in my last post, up until a certain point wealth was distributed rather evenly throughout the world. Then 'colonialism' happened and the wealth gap between

    Changing of the guard

    I posted a message to the internal mailing lists at MariaDB Corporation. I have departed (I resigned) the company, but definitely not the community. Thank you all for the privilege of serving the large MariaDB Server community of users, all 12 million+ of you. See you on the mailing lists, IRC, and the developer meetings.

    The Japanese have a saying, “leave when the cherry blossoms are full”.

    I’ve been one of the earliest employees of this post-merge company, and was on the founding team of the MariaDB Server having been around since 2009. I didn’t make the first company meeting in Mallorca (August 2009) due to the chickenpox, but I’ve been to every one since.

    We made the first stable MariaDB Server 5.1 release in February 2010. Our first Linux distribution release was in openSUSE. Our then tagline: MariaDB: Community Developed. Feature Enhanced. Backward Compatible.

In 2013, we had to make a decision: merge with our sister company SkySQL or take on investment of equal value to compete; the majority of us chose to work with our family.

    Our big deal was releasing MariaDB Server 5.5 – Wikipedia migrated, Google wanted in, and Red Hat pushed us into the enterprise space.

    Besides managing distributions and other community related activities (and in the pre-SkySQL days Rasmus and I did everything from marketing to NRE contract management, down to even doing press releases – you wear many hats when you’re in a startup of less than 20 people), in this time, I’ve written over 220 blog posts, spoken at over 130 events (an average of 18 per year), and given generally over 250 talks, tutorials and keynotes. I’ve had numerous face-to-face meetings with customers, figuring out what NRE they may need and providing them solutions. I’ve done numerous internal presentations, audience varying from the professional services & support teams, as well as the management team. I’ve even technically reviewed many books, including one of the best introductions by our colleague, Learning MySQL & MariaDB.

It’s been a good run. Seven years. An uncountable number of flights. Too many weekends away working for the cause. A whole bunch of great meetings with many of you. Seen the company go from bootstrap, merger, Series A, and Series B.

    It’s been a true privilege to work with many of you. I have the utmost respect for Team MariaDB (and of course my SkySQL brethren!). I’m going to miss many of you. The good thing is that MariaDB Server is an open source project, and I’m not going to leave the project or #maria. I in fact hope to continue speaking and working on MariaDB Server.

    I hope to remain connected to many of you.

    Thank you for this great privilege.

    Kind Regards,
    Colin Charles

    [mtb/events] Razorback Ultra - Spectacular run in the Victorian Alps

    Alex and another Canberran on the Razorback (fullsize)
Alex and I signed up for the Razorback Ultra because it is in an amazing part of the country and sounded like a fun event to go do. I was heading into it a week after Six Foot; however, this is all just training for UTA100, so why not. All I can say is every trail runner should do this event; it is amazing.

    The atmosphere at the race is laid back and it is all about heading up into the mountains and enjoying yourself. I will be back for sure.

    My words and photos are online in my Razorback Ultra 2016 gallery. This is truly one of the best runs in Australia.

    [mtb/events] Geoquest 2016 - Port Mac again with Resultz

    My Mirage 730 - Matilda, having a rest while we ran around (fullsize)
I have fun at Geoquest and love doing the event; however, I have been a bit iffy about trying to organise a team for a few years. As many say, one of the hardest things in the event is getting 4 people to the start line ready to go.

This year my attitude was similar to last: if I was asked to join a team I would probably say yes. I was asked and thus ended up racing with a bunch of fun guys under the banner of Michael's company Resultz Racing. Another great weekend on the mid north NSW coast with some amazing scenery (the two rogaines were highlights, especially the punchbowl waterfall on the second one).

    My words and photos are online in my Geoquest 2016 gallery. Always good fun and a nice escape from winter.

    [mtb] The lots of vert lunch run, reasons to live in Canberra

    Great view of the lake from the single track on the steep side of BM (fullsize)
This run, which is so easy to get out for at lunch, is a great quality climbing session and shows off Canberra beautifully. What fun.

    Photos and some words are online on my Lots of vert lunch run page.

    [various] Vote Greens Maybe

    I have had the parodies of the Call me maybe song in my head again today (the Orica Green edge one was brilliant and there are some inspired versions out there). This had me thinking of different lyrics, maybe something to suggest people vote Green this Saturday for a better and fairer Australia.

    Vote Green Maybe
    I threw a wish in the well
    For a better Australia today
    I looked at our leaders today
    And now they're in our way
    I'll not trade my freedom for them
    All our dollars and cents to the rich
    I wasn't looking for this
    But now they're in our way
    Our democracy is squandered
    Broken promises
    Lies everywhere
    Hot nights
    Winds are blowing
    Freak weather events, climate change
    Hey I get to vote soon
    And this isn't crazy
    But here's my idea
    So vote Greens maybe
    It's hard to look at our future 
    But here's my idea
    So vote Greens maybe
    Hey I get to vote soon
    And this isn't crazy
    But here's my idea
    So vote Greens maybe
    And all the major parties
    Try to shut us up
    But here's my idea
    So vote Greens maybe
    Liberal and Labor think they should rule
    I take no time saying they fail
    They gave us nothing at all
    And now they're in our way
    I beg for a fairer Australia
    At first sight our policies are real
    I didn't know if you read them
    But it's the Greens way  
    Your vote can fix things
    Healthier people
    Childrens education
    Fairer policies
    A change is coming
    Where you think you're voting, Greens?
    Hey I get to vote soon
    And this isn't crazy
    But here's my idea
    So vote Greens maybe
    It's worth a look to a brighter future
    But here's my idea
    So vote Greens maybe
    Before this change in our lives
    I see children in detention
    I see humans fleeing horrors
    I see them locked up and mistreated
    Before this change in our lives
    I see a way to fix this
    And you should know that
    Voting Green can help fix this, Green, Green, Green...
    It's bright to look at our future 
    But here's my idea
    So vote Greens maybe
    Hey I get to vote soon
    And this isn't crazy
    But here's my idea
    So vote Greens maybe
    And all the major parties
    Try to shut us up
    But here's my idea
    So vote Greens maybe
    Before this change in our lives
    I see children in detention
    I see humans fleeing horrors
    I see them locked up and mistreated
    Before this change in our lives
    I see a way to fix this
    And you should know that
    So vote Green Saturday
    Call Me Maybe (Carly Rae Jepsen)
    I threw a wish in the well
    Don't ask me I'll never tell
    I looked at you as it fell
    And now you're in my way
    I trade my soul for a wish
    Pennies and dimes for a kiss
    I wasn't looking for this
    But now you're in my way
    Your stare was holding
    Ripped jeans
    Skin was showing
    Hot night
    Wind was blowing
    Where you think you're going baby?
    Hey I just met you
    And this is crazy
    But here's my number
    So call me maybe
    It's hard to look right at you baby
    But here's my number
    So call me maybe
    Hey I just met you
    And this is crazy
    But here's my number
    So call me maybe
    And all the other boys
    Try to chase me
    But here's my number
    So call me maybe
    You took your time with the call
    I took no time with the fall
    You gave me nothing at all
    But still you're in my way
    I beg and borrow and steal
    At first sight and it's real
    I didn't know I would feel it
    But it's in my way
    Your stare was holding
    Ripped jeans
    Skin was showing
    Hot night
    Wind was blowing
    Where you think you're going baby?
    Hey I just met you
    And this is crazy
    But here's my number
    So call me maybe
    It's hard to look right at you baby
    But here's my number
    So call me maybe
    Before you came into my life
    I missed you so bad
    I missed you so bad
    I missed you so so bad
    Before you came into my life
    I missed you so bad
    And you should know that
    I missed you so so bad, bad, bad, bad....
    It's hard to look right at you baby
    But here's my number
    So call me maybe
    Hey I just met you
    And this is crazy
    But here's my number
    So call me maybe
    And all the other boys
    Try to chase me
    But here's my number
    So call me maybe
    Before you came into my life
    I missed you so bad
    I missed you so bad
    I missed you so so bad
    Before you came into my life
    I missed you so bad
    And you should know that
    So call me, maybe

    [various] Safety Sewing

    No reflections (fullsize)

    None outside either (fullsize)

    Better when full/open (fullsize)

    Also better when closed, much brightness (fullsize)
For over a year I have been planning to do this. My Crumpler bag (the Complete Seed), which I bought in 2008, has been my primary commuting and daily use bag since that time, and as much as I love the bag there is one major problem: no reflective marking anywhere on it.

    Some newer crumplers have reflective strips and other such features and if I really wanted to spend big I could get them to do a custom bag with whatever colours and reflective bits I can dream up. There are also a number of other brands that do a courier bag with reflective bits or even entire panels or similar that are reflective. However this is the bag I own and it is still perfectly good for daily use so no need to go buy something new.

    So I got a $4 sewing kit I had sitting around in the house, some great 3M reflective tape material and finally spent the time to rectify this feature missing from the bag. After breaking 3 needles and spending a while getting it done I now have a much safer bag especially commuting home on these dark winter nights. The sewing work is a bit messy however it is functional which is all that matters to me.

    August 14, 2016

    The rise and fall of the Gopher protocol | MinnPost

    Twenty-five years ago, a small band of programmers from the University of Minnesota ruled the internet. And then they didn’t.

    The committee meeting where the team first presented the Gopher protocol was a disaster, “literally the worst meeting I’ve ever seen,” says Alberti. “I still remember a woman in pumps jumping up and down and shouting, ‘You can’t do that!’ ”

    Among the team’s offenses: Gopher didn’t use a mainframe computer and its server-client setup empowered anyone with a PC, not a central authority. While it did everything the U (University of Minnesota) required and then some, to the committee it felt like a middle finger. “You’re not supposed to have written this!” Alberti says of the group’s reaction. “This is some lark, never do this again!” The Gopher team was forbidden from further work on the protocol.

    Read the full article (a good story of Gopher and WWW history!) at

    Have It Your Way: Maximizing Drive-Thru Contributions - PyConAu 2016

    by VM (Vicky) Brasseur.


Vicky talked about the importance of non-committing contributors, but the primary focus was on committing contributors due to time limits.

    Covered the different types of drive-thru contributors and why they show up.

    • Scratching an itch.
    • Unwilling / Unable to find an alternative to this project
    • They like you.

    Why do they leave?

• Itch has been scratched.
    • Not enough time.
    • No longer using the project.
    • Often a high barrier to contribution.
    • Absence of appreciation.
    • Unpleasant people.
    • Inappropriate attribution.


    • It takes more time to help them land patches
      • Reluctance to help them "as they're not community".

It appears that many projects see community as the foundation, but Vicky contended it is contributors.

    More drive-thru contributors are a sign of a healthy project and can lead to a larger community.


    • Have better processes in place.
    • Faster patch and release times.
    • More eyes and shallower bugs
    • Better community, code and project reputation.

    Leads to a healthier overall project.

Methods for Maximising drive-thru contributions:


    • give your project super powers.
    • Scales!
    • Ensures efficient and successful contributions.
    • Minimises questions.
    • Standardises processes.
    • Vicky provided a documentation quick start guide.


    • Code review.
    • "Office hours" for communication.
    • Hackfests.
    • New contributor events.

    Process improvements!

    • Tag starter bugs
    • Contributor SLA
    • Use containers / VM of dev environment


    • Value contributions and contributors
    • Culture of documentation
    • Default to assistance

Outreach!

• Gratitude
• Recognition
• Follow-up!

    Institute the "No Asshole" rule.

    PyConAu 2016

    Keynote - Python All the Things - PyConAu 2016

    by Russell Keith-Magee.

Russell Keith-Magee spoke about porting Python to mobile devices. CPython being written in C enables it to leverage the supported platforms of the C language and be compiled for a wide range of platforms.

There was a deep dive into the options and pitfalls when selecting a method for, and implementing, Python on Android phones.

    Ouroboros is a pure Python implementation of the Python standard library.

    Most of the tools discussed are at an early stage of development.


    • Being able to run on new or mobile platforms addresses an existential threat.
• The threat also presents an opportunity to grow, broaden and improve Python.
    • Wants Python to be a "first contact" language, like (Visual) Basic once was.
    • Unlike Basic, Python also support very complex concepts and operations.
    • Presents an opportunity to encourage broader usage by otherwise passive users.
    • Technical superiority is rarely enough to guarantee success.
    • A breadth of technical domains is required for Python to become this choice.
    • Technical problems are the easiest to solve.
• The most difficult problems are social and community ones and require more attention.

Russell will be putting his focus into BeeWare and related projects.

    Fortune favours the prepared mind

    (Louis Pasteur)

    PyConAu 2016

    August 13, 2016

    SSD and M.2

    The Need for Speed

One of my clients has an important server running ZFS. They need to have a filesystem that detects corruption; while regular RAID is good for the case where a disk gives read errors, it doesn’t cover the case where a disk returns bad data and claims it to be good (which I’ve witnessed on BTRFS and ZFS systems). BTRFS is good for the case of a single disk or a RAID-1 array but I believe that the RAID-5 code for BTRFS is not sufficiently tested for business use. ZFS doesn’t perform very well due to the checksums on data and metadata requiring multiple writes for a single change, which also causes more fragmentation. This isn’t a criticism of ZFS, it’s just an engineering trade-off for the data integrity features.

    ZFS supports read-caching on a SSD (the L2ARC) and write-back caching (ZIL). To get the best benefit of L2ARC and ZIL you need fast SSD storage. So now with my client investigating 10 gigabit Ethernet I have to investigate SSD.

    For some time SSDs have been in the same price range as hard drives, starting at prices well below $100. Now there are some SSDs on sale for as little as $50. One issue with SATA for server use is that SATA 3.0 (which was released in 2009 and is most commonly used nowadays) is limited to 600MB/s. That isn’t nearly adequate if you want to serve files over 10 gigabit Ethernet. SATA 3.2 was released in 2013 and supports 1969MB/s but I doubt that there’s much hardware supporting that. See the SATA Wikipedia page for more information.

    Another problem with SATA is getting the devices physically installed. My client has a new Dell server that has plenty of spare PCIe slots but no spare SATA connectors or SATA power connectors. I could have removed the DVD drive (as I did for some tests before deploying the server) but that’s ugly and only gives 1 device while you need 2 devices in a RAID-1 configuration for ZIL.


M.2 is a new standard for expansion cards; it supports SATA and PCIe interfaces (and USB, but that isn’t useful at this time). The Wikipedia page for M.2 is interesting to read for background knowledge but isn’t helpful if you are about to buy hardware.

The first M.2 card I bought had a SATA interface, and then I was unable to find a local company that could sell a SATA M.2 host adapter. So I bought an M.2 to SATA adapter which made it work like a regular 2.5″ SATA device. That’s working well in one of my home PCs but isn’t what I wanted. Apparently systems that have an M.2 socket on the motherboard will usually take either SATA or NVMe devices.

The most important thing I learned is to buy the SSD storage device and the host adapter from the same place; then you are entitled to a refund if they don’t work together.

    The alternative to the SATA (AHCI) interface on an M.2 device is known as NVMe (Non-Volatile Memory Express), see the Wikipedia page for NVMe for details. NVMe not only gives a higher throughput but it gives more command queues and more commands per queue which should give significant performance benefits for a device with multiple banks of NVRAM. This is what you want for server use.

    Eventually I got a M.2 NVMe device and a PCIe card for it. A quick test showed sustained transfer speeds of around 1500MB/s which should permit saturating a 10 gigabit Ethernet link in some situations.

    One annoyance is that the M.2 devices have a different naming convention to regular hard drives. I have devices /dev/nvme0n1 and /dev/nvme1n1, apparently that is to support multiple storage devices on one NVMe interface. Partitions have device names like /dev/nvme0n1p1 and /dev/nvme0n1p2.
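
To spell the scheme out, the name encodes controller, namespace and then partition. A trivial sketch that pulls a few example names apart (the names are illustrative, not a listing of any particular system):

import re

# nvme<controller>n<namespace>p<partition>, the partition part being optional
NVME_RE = re.compile(r"nvme(?P<controller>\d+)n(?P<namespace>\d+)(?:p(?P<partition>\d+))?$")

for name in ("nvme0n1", "nvme0n1p1", "nvme0n1p2", "nvme1n1"):
    print(name, NVME_RE.match(name).groupdict())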

    Power Use

    I recently upgraded my Thinkpad T420 from a 320G hard drive to a 500G SSD which made it faster but also surprisingly quieter – you never realise how noisy hard drives are until they go away. My laptop seemed to feel cooler, but that might be my imagination.

    The i5-2520M CPU in my Thinkpad has a TDP of 35W but uses a lot less than that as I almost never have 4 cores in use. The z7k320 320G hard drive is listed as having 0.8W “low power idle” and 1.8W for read-write, maybe Linux wasn’t putting it in the “low power idle” mode. The Samsung 500G 850 EVO SSD is listed as taking 0.4W when idle and up to 3.5W when active (which would not be sustained for long on a laptop). If my CPU is taking an average of 10W then replacing the hard drive with a SSD might have reduced the power use of the non-screen part by 10%, but I doubt that I could notice such a small difference.
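
Plugging the datasheet figures above into a quick back-of-the-envelope calculation (the 10W CPU average is the same assumption as in the text):

cpu_w = 10.0  # assumed average CPU draw
hdd_w = 1.8   # z7k320 read-write figure
ssd_w = 0.4   # 850 EVO idle figure

before = cpu_w + hdd_w
after = cpu_w + ssd_w
print(f"{(before - after) / before:.1%} lower")  # roughly 12%, i.e. about the 10% mentioned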

    I’ve read some articles about power use on the net which can be summarised as “SSDs can draw more power than laptop hard drives but if you do the same amount of work then the SSD will be idle most of the time and not use much power”.

    I wonder if the SSD being slightly thicker than the HDD it replaced has affected the airflow inside my Thinkpad.

    From reading some of the reviews it seems that there are M.2 storage devices drawing over 7W! That’s going to create some cooling issues on desktop PCs but should be OK in a server. For laptop use they will hopefully release M.2 devices designed for low power consumption.

    The Future

    M.2 is an ideal format for laptops due to being much smaller and lighter than 2.5″ SSDs. Spinning media doesn’t belong in a modern laptop and using a SATA SSD is an ugly hack when compared to M.2 support on the motherboard.

    Intel has released the X99 chipset with M.2 support (see the Wikipedia page for Intel X99) so it should be commonly available on desktops in the near future. For most desktop systems an M.2 device would provide all the storage that is needed (or 2*M.2 in a RAID-1 configuration for a workstation). That would give all the benefits of reduced noise and increased performance that regular SSDs provide, but with better performance and fewer cables inside the PC.

    For a corporate desktop PC I think the ideal design would have only M.2 internal storage and no support for 3.5″ disks or a DVD drive. That would allow a design that is much smaller than a current SFF PC.

    Playing with Shifter Part 2 – converted Docker containers inside Slurm

    This is continuing on from my previous blog about NERSC’s Shifter which lets you safely use Docker containers in an HPC environment.

    Getting Shifter to work in Slurm is pretty easy, it includes a plugin that you must install and tell Slurm about. My test config was just:

    required /usr/lib64/shifter/ shifter_config=/etc/shifter/udiRoot.conf

as I was installing by building RPMs (our preferred method is to install the plugin into our shared filesystem for the cluster so we don’t need to have it in the RAM disk of our diskless nodes). Once that is done you can add the shifter program’s arguments to your Slurm batch script and then just call shifter inside it to run a process, for instance:

    #SBATCH -p debug
    #SBATCH --image=debian:wheezy
    shifter cat /etc/issue

    results in the following on our RHEL compute nodes:

    [samuel@bruce Shifter]$ cat slurm-1734069.out 
    Debian GNU/Linux 7 \n \l

    simply demonstrating that it works. The advantage of using the plugin and this way of specifying the images is that the plugin will prep the container for us at the start of the batch job and keep it around until it ends so you can keep running commands in your script inside the container without the overhead of having to create/destroy it each time. If you need to run something in a different image you just pass the --image option to shifter and then it will need to set up & tear down that container, but the one you specified for your batch job is still there.

    That’s great for single CPU jobs, but what about parallel applications? Well turns out that’s easy too – you just request the configuration you need and slap srun in front of the shifter command. You can even run MPI applications this way successfully. I grabbed the dispel4py/docker.openmpi Docker container with shifterimg pull dispel4py/docker.openmpi and tried its Python version of the MPI hello world program:

    #SBATCH -p debug
    #SBATCH --image=dispel4py/docker.openmpi
    #SBATCH --ntasks=3
    #SBATCH --tasks-per-node=1
    shifter cat /etc/issue
    srun shifter python /home/tutorial/mpi4py_benchmarks/

    This prints the MPI rank to demonstrate that the MPI wire up was successful and I forced it to run the tasks on separate nodes and print the hostnames to show it’s communicating over a network, not via shared memory on the same node. But the output bemused me a little:

    [samuel@bruce Python]$ cat slurm-1734135.out
    Ubuntu 14.04.4 LTS \n \l
    libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
    libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
    [[30199,2],0]: A high-performance Open MPI point-to-point messaging module
    was unable to find any relevant network interfaces:
    Module: OpenFabrics (openib)
      Host: bruce001
    Another transport will be used instead, although this may result in
    lower performance.
    libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
    libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
    Hello, World! I am process 0 of 3 on bruce001.
    libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
    [[30199,2],1]: A high-performance Open MPI point-to-point messaging module
    was unable to find any relevant network interfaces:
    Module: OpenFabrics (openib)
      Host: bruce002
    Another transport will be used instead, although this may result in
    lower performance.
    Hello, World! I am process 1 of 3 on bruce002.
    libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
    [[30199,2],2]: A high-performance Open MPI point-to-point messaging module
    was unable to find any relevant network interfaces:
    Module: OpenFabrics (openib)
      Host: bruce003
    Another transport will be used instead, although this may result in
    lower performance.
    Hello, World! I am process 2 of 3 on bruce003.

    It successfully demonstrates that it is using an Ubuntu container on 3 nodes, but the warnings are triggered because Open-MPI in Ubuntu is built with Infiniband support and it is detecting the presence of the IB cards on the host nodes. This is because Shifter is (as designed) exposing the systems /sys directory to the container. The problem is that this container doesn’t include the Mellanox user-space library needed to make use of the IB cards and so you get warnings that they aren’t working and that it will fall back to a different mechanism (in this case TCP/IP over gigabit Ethernet).

    Open-MPI allows you to specify what transports to use, so adding one line to my batch script:

    export OMPI_MCA_btl=tcp,self,sm

    cleans up the output a lot:

    Ubuntu 14.04.4 LTS \n \l
    Hello, World! I am process 0 of 3 on bruce001.
    Hello, World! I am process 2 of 3 on bruce003.
    Hello, World! I am process 1 of 3 on bruce002.

This also begs the question then: what does this do for latency? The image contains a Python version of the OSU latency testing program which uses different message sizes between 2 MPI ranks to provide a histogram of performance. Running this over TCP/IP is trivial with the dispel4py/docker.openmpi container, but of course it’s lacking the Mellanox library I need, and as the whole point of Shifter is security I can’t get root access inside the container to install the package. Fortunately the author of the dispel4py/docker.openmpi container has their implementation published on Github, and so I forked their repo, signed up for Docker and pushed a version which simply adds the libmlx4-1 package I needed.

    Running the test over TCP/IP is simply a matter of submitting this batch script which forces it onto 2 separate nodes:

    #SBATCH -p debug
    #SBATCH --image=chrissamuel/docker.openmpi:latest
    #SBATCH --ntasks=2
    #SBATCH --tasks-per-node=1
    export OMPI_MCA_btl=tcp,self,sm
    srun shifter python /home/tutorial/mpi4py_benchmarks/

    giving these latency results:

    [samuel@bruce MPI]$ cat slurm-1734137.out
    # MPI Latency Test
    # Size [B]        Latency [us]
    0                        16.19
    1                        16.47
    2                        16.48
    4                        16.55
    8                        16.61
    16                       16.65
    32                       16.80
    64                       17.19
    128                      17.90
    256                      19.28
    512                      22.04
    1024                     27.36
    2048                     64.47
    4096                    117.28
    8192                    120.06
    16384                   145.21
    32768                   215.76
    65536                   465.22
    131072                  926.08
    262144                 1509.51
    524288                 2563.54
    1048576                5081.11
    2097152                9604.10
    4194304               18651.98

    To run that same test over Infiniband I just modified the export in the batch script to force it to use IB (and thus fail if it couldn’t talk between the two nodes):

    #SBATCH -p debug
    #SBATCH --image=chrissamuel/docker.openmpi:latest
    #SBATCH --ntasks=2
    #SBATCH --tasks-per-node=1
    export OMPI_MCA_btl=openib,self,sm
    srun shifter python /home/tutorial/mpi4py_benchmarks/

    which then gave these latency numbers:

    [samuel@bruce MPI]$ cat slurm-1734138.out
    # MPI Latency Test
    # Size [B]        Latency [us]
    0                         2.52
    1                         2.71
    2                         2.72
    4                         2.72
    8                         2.74
    16                        2.76
    32                        2.73
    64                        2.90
    128                       4.03
    256                       4.23
    512                       4.53
    1024                      5.11
    2048                      6.30
    4096                      7.29
    8192                      9.43
    16384                    19.73
    32768                    29.15
    65536                    49.08
    131072                   75.19
    262144                  123.94
    524288                  218.21
    1048576                 565.15
    2097152                 811.88
    4194304                1619.22

    So you can see that’s basically an order of magnitude improvement in latency using Infiniband compared to TCP/IP over gigabit Ethernet (which is what you’d expect).

    Because there’s no virtualisation going on here there is no extra penalty to pay: no need to configure any fancy device pass-through and no loss of CPU MSR access. So I’d argue that Shifter makes Docker containers far more useful for HPC than virtualisation, or even than Docker itself, for the majority of use cases.

    Am I excited about Shifter? Yup! The potential to let users build an application stack themselves, right down to the OS libraries, and (with a little careful thought) get native interconnect performance out of it, is fantastic. Throw in the complexities of dealing with conflicting dependencies between Python modules, system libraries, bioinformatics tools, etc., and the need to provide simple methods for handling these, and the advantages seem clear.

    So the plan is to roll this out into production at VLSCI in the near future. Fingers crossed! 🙂

    This item originally posted here:

    Playing with Shifter Part 2 – converted Docker containers inside Slurm

    Microsoft Chicago – retro in qemu!

    So, way back when (sometime in the early 1990s) there was Windows 3.11 and times were… for Workgroups. There was this Windows NT thing, this OS/2 thing and something brewing at Microsoft to attempt to make the PC less… well, bloody awful for a user.

    Again, thanks to abandonware sites, it’s possible now to try out very early builds of Microsoft Chicago – what would become Windows 95. With the earliest build I could find (build 56), I set to work. The installer worked from an existing Windows 3.11 install.

    I ended up using full system emulation rather than normal qemu later on, as things, well, booted in full emulation and didn’t otherwise (I was building from qemu master, so it could actually have been a bug that has since been fixed).
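
    For the curious, the practical difference is just whether acceleration is used: plain qemu-system-i386 defaults to full (TCG) emulation, so an invocation along these lines is enough (the image name and memory size here are assumptions):
    qemu-system-i386 -m 16 -hda chicago.img -boot c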

    Mmmm… Windows 3.11 File Manager; the fact that I can still use this is a testament to something, possibly too much time with Windows 3.11.

    Unfortunately, I didn’t have the Plus Pack components (remember Microsoft Plus!? Yes, the exclamation mark was part of the product name; it was the 1990s) and I’m not sure if they even would have existed back then (but the installer did ask).

    Obviously if you were testing Chicago, you probably did not want to upgrade your working Windows install if this was a computer you at all cared about. I installed into C:\CHICAGO because, well – how could I not!

    The installation went fairly quickly – after all, this isn’t a real 386 PC and it doesn’t have disks of the era; everything was likely just in the Linux page cache.

    I didn’t really try to get networking going; it may not have been fully baked in this build, or maybe just not really baked in this copy of it. The installer there looks a bit familiar, but not like the Windows 95 one – maybe more like NT 3.1/3.51?

    But at the end… it installed and it was time to reboot into Chicago:
    So… this is what Windows 95 looked like during development back in July 1993 – nearly exactly two years before release. There are some Windows logos that appear/disappear around the place, which are arguably much cooler than the eventual Windows 95 boot screen animation. The first boot experience was kind of interesting too:
    Luckily, there was nothing restricting the beta site ID or anything. I just entered the number 1, and was then told it needed to be 6 digits – so beta site ID 123456 it is! The desktop is obviously different both from Windows 3.x and what ended up in Windows 95.

    Those who remember Windows 3.1 may remember Dr Watson as an actual thing you could run; it was part of the whole diagnostics infrastructure in Windows, and here (as you can see) it runs by default. More odd is the “Switch To Chicago” task (which does nothing if opened) and “Tracker”. My guess is that “Switch to Chicago” is the product of some internal thing for launching the new UI. I have no idea what the “Tracker” is, but I think I found a clue in the “Find File” app:

    Not only can you search with regular expressions, but there’s “Containing text” – could it be indexing? No, it totally isn’t. It’s all about tracking/reporting problems:

    Well, that wasn’t as exciting as I was hoping for (after all, weren’t there interesting database-like file systems being researched at Microsoft in the early 1990s?). It’s about here I should show the obligatory About box:
    It’s… not polished, and there’s certainly that feel throughout the OS – and two years from release, that’s likely fair enough. Speaking of not perfect:

    When something does crash, it asks you to describe what went wrong, i.e. provide a Clue for Dr. Watson.


    But, most importantly, Solitaire is present! You can browse the Programs folder and head into Games and play it! One odd thing is that applications have two >> at the end, and there’s a “Parent Folder” entry too.

    Solitaire itself? Just as I remember.

    Notably, what is missing is anything like the Start menu, which is probably the key UI element introduced in Windows 95 that’s still with us today. Instead, you have this:

    That’s about the least exciting Windows menu possible. There’s the eye menu too, which is this:

    More unfinished things are found in the “File cabinet”, such as properties for anything:
    But let’s jump into Control Panels, which I managed to get to by heading to C:\CHICAGO\Control.sys – which isn’t exactly obvious, but I think you can find it through Programs as well. The “Window Metrics” application is really interesting! It’s obvious that the UI was not solidified yet and that there was a lot of experimenting to do. This application lets you change all sorts of things about the UI:

    My guess is that this was used a lot internally to twiddle things to see what worked well.

    Another unfinished thing? That familiar Properties for My Computer, which is actually “Advanced System Features” in the control panel, and from the [Sample Information] at the bottom left, it looks like we may not be getting information about the machine it’s running on.


    You do get some information in the System control panel, but a lot of it is unfinished. It seems as if Microsoft was experimenting with a few ways to express information and modify settings.

    But check out this awesome picture of a hard disk for Virtual Memory:

    The presence of the 386 Enhanced control panel shows how close this build still was to Windows 3.1:

    At the same time, we see hints of things going 32-bit – check out the fact that we have both Clock and Clock32! Notepad, in its transition to 32-bit, even dropped the pad and is just Note32!

    Well, that’s enough for today – time to shut down the machine.

    Python for science, side projects and stuff! - PyConAu 2016

    By Andrew Lonsdale.

    • Talked about using python-ppt for collaborating on PowerPoint presentations.
    • Covered his journey so far and the lessons he learned.
    • Gave some great examples of re-creating XKCD comics in Python (matplotlib_venn).
    • Claimed the diversion into Python and Matplotlib has helped his actual research.
    • Spoke about how using Python is great for Scientific research.
    • Summarised that side projects are good for Science and Python.
    • Recommended Elegant SciPy
    • Demoed using Emoji to represent bioinformatics data using FASTQE (FASTQ as Emoji).

    PyConAu 2016

    MicroPython: a journey from Kickstarter to Space by Damien George - PyConAu 2016

    Damien George.

    Motivations for MicroPython:

    • To provide a high level language to control sophisticated micro-controllers.
    • Approached it as an intellectually stimulating research problem.
    • Wasn't even sure it was possible.
    • Chose Python because:
      • It was a high level language with powerful features.
      • Large existing community.
      • Naively thought it would be easy.
      • Found Python easy to learn.
      • The shallow but long learning curve of Python makes it good for beginners and advanced programmers.
      • Bitwise operations make it useful for micro-controllers.

    Why Not Use CPython?

    • CPython pre-allocates memory, resulting in inefficient memory usage which is problematic for low RAM devices like micro controllers.


    • If you know Python, you know MicroPython - it's implemented the same


    Damien covered his experiences with Kickstarter.

    Internals of MicroPython:

    • Damien covered the parser, lexer, compiler and runtime.
    • Walked us through the workflows of the internals.
    • Spoke about object representation and the three machine word object forms:
      • Integers.
      • Strings.
      • Objects.
    • Covered the emitters:
      • Bytecode.
      • Native (machine code).
      • Inline assembler.

    Coding Style:

    The coding approach was more that of a physicist trying to make things work than of a computer engineer.

    • There's a code dashboard
    • Hosted on GitHub
    • Noted that he could not have done this without the support of the community.


    Listed some of the micro controller boards that it runs on, and larger computers that currently run OpenWRT.

    Spoke about the BBC micro:bit project. Demoed speech synthesis and image display running on it.

    MicroPython in Space:

    Spoke about the port to LEON / SPARC / RTEMS for the European Space Agency for satellite control, particularly the application layer.

    Damien closed with an overview of current applications and ongoing software and hardware development.


    PyConAu 2016

    August 12, 2016

    Doing Math with Python - Amit Saha - PyConAu 2016

    Amit Saha.

    Slides and demos.

    Why Math with Python?

    • Provides an interactive learning experience.
    • Provides a great base for future programming (ie: data science, machine learning).


    • Python 3
    • SymPy
    • matplotlib

    Amit's book: Doing Math with Python

    PyConAu 2016

    The Internet of Not Great Things - Nick Moore - PyConAu 2016

    Nick Moore.

    aka "The Internet of (Better) Things".

    • Abuse of IoT is not a technical issue.
    • The problem is who controls the data.
    • Need better analysis of the ways it is used that are bad.
    • "If you're not the customer, you're the product."
      • by accepting advertising.
      • by having your privacy sold.
    • Led to a conflation of IoT and Big Data.
    • Product end of life by vendors ceasing support.
    • Very little cross vendor compatibility.
    • Many devices useless if the Internet is not available.
    • Consumer grade devices often fail.
    • Weak crypto support.
    • Often due to lack of entropy, RAM, CPU.
    • Poorly thought out update cycles.

    Turning Complaints into Requirements:

    We need:

    • Internet independence.
    • Generic interfaces.
    • Simplified Cryptography.
    • Easier Development.

    Some Solutions:

    • Peer to peer services.
    • Standards based hardware description language.
    • Shared secrets, initialised by QR code.
    • Simpler development with MicroPython.

    PyConAu 2016

    OpenBMC - Boot your server with Python - Joel Stanley - PyConAu 2016

    Joel Stanley.

    • OpenBMC is a Free Software BMC
    • Running embedded Linux.
    • Developed an API before developing other interfaces.


    • A modern kernel.
    • Up to date userspace.
    • Security patches.
    • Better interfaces.
    • Reliable performance.
      • REST interface.
      • SSH instead of strange tools.

    The Future:

    • Support more home devices.
    • Add a web interface.
    • Secure boot, trusted boot, more security features.
    • Upstream all of the things.
    • Support more hardware.

    PyConAu 2016

    Teaching Python with Minecraft - Digital K - PyConAu 2016

    by Digital K.

    The video of the talk is here.

    • Recommended for ages 10 - 16
    • Why Minecraft?
      • Kids’ familiarity with it is highly engaging.
      • Relatively low cost.
      • Code their own creations.
      • Kids already use the command line in Minecraft
    • Use the Minecraft API to receive commands from Python.
      • Place blocks
      • Move players
      • Build faster
      • Build larger structures and shapes
      • Easy duplication
      • Animate blocks (ie: colour change)
      • Create games

    Option 1:

    How it works:

    • Import Minecraft API libraries to your code.
    • Push it to the server.
    • Run the Minecraft client.

    What you can Teach:

    • Co-ordinates
    • Time
    • Multiplications
    • Data
    • Art works with maths
    • Trigonometry
    • Geo fencing
    • Design
    • Geography

    Connect to External Devices:

    • Connect to Raspberry Pi or Arduino.
    • Connect the game to events in the real world.

    Other Resources:

    PyConAu 2016

    Scripting the Internet of Things - Damien George - PyConAu 2016

    Damien George

    Damien gave an excellent overview of using MicroPython with microcontrollers, particularly the ESP8266 board.

    Damien’s talk was excellent and covered a broad and interesting history of the project and its current efforts.

    PyConAu 2016

    ESP8266 and MicroPython - Nick Moore - PyConAu 2016

    Nick Moore


    • Price and feature set are a game changer for hobbyists.
    • Makes for a more playful platform.
    • Uses serial programming mode to flash memory
    • Strict power requirements
    • The easy way to use them is with a NodeMCU for only a little more.
    • Tool kits:
    • Lua: (Node Lua).
    • Javascript: Espruino.
    • Forth, Lisp, Basic(?!).
    • MicroPython works on the ESP8266:
      • Drives micro controllers.
      • The onboard Wifi.
      • Can run a small webserver to view and control devices.
      • WebREPL can be used to copy files, as can mpy-utils.
      • Lacks:
        • An operating system.
        • Multiprocessing.
        • A debugger / profiler.
    • Flobot:
      • Compiles via MicroPython.
      • A visual dataflow language for robots.

    ESP8266 and MicroPython provide an accessible entry into working with micro-controllers.

    PyConAu 2016

    August 10, 2016

    Command line password management with pass

    Why use a password manager in the first place? Well, they make it easy to have strong, unique passwords for each of your accounts on every system you use (and that’s a good thing).

    For years I’ve stored my passwords in Firefox, because it’s convenient, and I never bothered with all those other fancy password managers. The problem is that it locked me into Firefox, and I found myself still needing to remember passwords for servers and other things.

    So a few months ago I decided to give the command line tool Pass a try. It’s essentially a shell script wrapper for GnuPG and stores your passwords (with any notes) in individually encrypted files.

    I love it.

    Pass is less convenient in terms of web browsing, but it’s more convenient for everything else that I do (which is often on the command line). For example, I have painlessly integrated Pass into Mutt (my email client) so that passwords are not stored in the configuration files.

    As a side-note, I installed the Password Exporter Firefox Add-on and exported my passwords. I then added this whole file to Pass so that I can start copying old passwords as needed (I didn’t want them all).

    About Pass

    Pass uses public-key cryptography to encrypt each password that you want to store as an individual file. To access the password you need the private key and passphrase.

    So, some nice things about it are:

    • Short and simple shell script
    • Uses standard GnuPG to encrypt each password into individual files
    • Password files are stored on disk in a hierarchy of your own choosing
    • Stored in Git repo (if desired)
    • Can also store notes
    • Can copy the password temporarily to copy/paste buffer
    • Can show, edit, or copy password
    • Can also generate a password
    • Integrates with anything that can call it
    • Tab completion!

    So it’s nothing super fancy, “just” a great little wrapper for good old GnuPG and text files, backed by git. Perfect!

    Install Pass

    Installation of Pass (and Git) is easy:
    sudo dnf -y install git pass
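
    That example is for Fedora; on a Debian or Ubuntu system the equivalent should be something like:
    sudo apt-get -y install git pass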

    Prepare keys

    You’ll need a pair of keys, so generate these if you haven’t already (this creates the keys under ~/.gnupg). I’d probably recommend RSA and RSA, 4096 bits long, using a decent passphrase and setting a valid email address (you can also separately use these keys to send signed emails and receive encrypted emails).
    gpg2 --full-gen-key

    We will need the key’s fingerprint to give to pass. It should be a string of 40 characters, something like 16CA211ACF6DC8586D6747417407C4045DF7E9A2.
    gpg2 --list-secret-keys
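
    If you’d rather grab the fingerprint programmatically, GnuPG’s machine-readable output makes that fairly painless – a sketch that prints the fingerprint of the first secret key found:
    gpg2 --list-secret-keys --with-colons | awk -F: '/^fpr:/ {print $10; exit}'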

    Note: Your fingerprint (and public keys) can be public, but please make sure that you keep your private keys secure! For example, don’t copy the ~/.gnupg directory to a public place (even though they are protected by a nice long passphrase, right? Right?).

    Initialise pass

    Before we can use Pass, we need to initialise it, giving it the fingerprint you got from the output of gpg2 --list-secret-keys above (e.g. 5DF7E9A2).
    pass init 5DF7E9A2

    This creates the basic directory structure in the .password-store directory in your home directory. At this point it just has a plain text file (.password-store/.gpg-id) with the fingerprint of the public key that it should use.

    Adding git backing

    If you haven’t already, you’ll need to tell Git who you are. Using the email address that you used when creating the GPG key is probably good.
    git config --global user.email "you@example.com"
    git config --global user.name "Your Name"

    Now, go into the password-store directory and initialise it as a Git repository.
    cd ~/.password-store
    git init
    git add .
    git commit -m "initial commit"
    cd -

    Pass will now automatically commit changes for you!


    As mentioned, you can create any hierarchy you like. I quite like to use subdirectories and sort by function first (like mail, web, server), then by domain, and then by server or username. This seems to work quite nicely with tab completion, too.

    You can rearrange this at any time, so don’t worry too much!

    Storing a password

    Adding a password is simple and you can create any hierarchy that you want; you just tell pass to add a new password and where to store it. Pass will prompt you to enter the password.

    For example, you might want to store the password for one of your servers – you could do that like so:
    pass add servers/

    This creates the directory structure on disk and your first encrypted file!
    └── servers
            └── server1.gpg
    2 directories, 1 file

    Run the file command on that file and it should tell you that it’s encrypted.
    file ~/.password-store/servers/

    But is it really? Go ahead, cat that gpg file, you’ll see it’s encrypted (your terminal will probably go crazy – you can blindly enter the reset command to get it back).
    cat ~/.password-store/servers/

    So this file is encrypted – you can safely copy it anywhere (again, please just keep your private key secure).

    Git history

    Browse to the .password-store dir and run some git commands; you’ll see your history, and git show will prompt for your GPG passphrase to decrypt the files stored in Git.

    cd ~/.password-store
    git log
    git show
    cd -

    If you wanted to, you could push this to another computer as a backup (perhaps even via a git-hook!).
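
    Since the store is just a Git repository, Pass can drive that for you too. A sketch of pushing to a remote you have set up for backups (the remote URL here is a placeholder):
    pass git remote add backup user@backuphost:password-store.git
    pass git push -u backup master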

    Storing a password, with notes

    By default Pass just prompts for the password, but if you want to add notes at the same time you can do that also. Note that the password should still be on its own on the first line, however.
    pass add -m mail/

    If you use two-factor authentication (which you should), this is also useful for storing the account password together with its recovery codes.
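
    For what it’s worth, the decrypted entry is then just a small text file – password on the first line, anything else below (these values are made up):
    correct-horse-battery-staple
    username: chris
    recovery codes: 1234 5678 9012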

    Generating and storing a password

    As I mentioned, one of the benefits of using a password manager is to have strong, unique passwords. Pass makes this easy by including the ability to generate one for you and store it in the hierarchy of your choosing. For example, you could generate a 32 character password (without special characters) for a website you often log into, like so:
    pass generate -n web/ 32

    Getting a password out

    Getting a password out is easy; just tell Pass which one you want. It will prompt you for your passphrase, decrypt the file for you, read the first line and print it to the screen. This can be useful for scripting (more on that below).

    pass web/

    Most of the time though, you’ll probably want to copy the password to the copy/paste buffer; this is also easy, just add the -c option. Passwords are automatically cleared from the buffer after 45 seconds.
    pass -c web/

    Now you can log into Twitter by entering your username and pasting the password.

    Editing a password

    Similarly you can edit an existing password to change it, or add as many notes as you like. Just tell Pass which password to edit!
    pass edit web/

    Copying and moving a password

    It’s easy to copy an existing password to a new one, just specify both the original and new file.
    pass copy servers/ servers/

    If the hierarchy you created is not to your liking, it’s easy to move passwords around.
    pass mv servers/ computers/

    Of course, you could script this!

    Listing all passwords

    Pass will list all your passwords in a tree nicely for you.
    pass list

    Interacting with Pass

    As pass is a nice standard shell program, you can interact with it easily. For example, to get a password from a script you could do something like this.
    #!/usr/bin/env bash
    echo "Getting password.."
    PASSWORD="$(pass servers/)"
    # Bail out if pass couldn't retrieve the entry
    if [[ $? -ne 0 ]]; then
        echo "Sorry, failed to get the password"
        exit 1
    fi
    echo "..and we got it, ${PASSWORD}"

    Try it!

    There’s lots more you can do with Pass, why not check it out yourself!

    August 08, 2016

    Setting up OpenStack Ansible All-in-one behind a proxy

    Setting up OpenStack Ansible (OSA) All-in-one (AIO) behind a proxy requires a couple of settings, but it should work fine (we’ll also configure the wider system). There are two types of git repos that we should configure for (unless you’re an OpenStack developer): those that use http (or https) and those that use the git protocol.

    Firstly, this assumes an Ubuntu 14.04 server install (with at least 60GB of free space on the / partition).

    All commands are run as the root user, so switch to root first.

    sudo -i

    Export variables for ease of setup

    Setting these variables here means that you can copy and paste the relevant commands from the rest of this blog post.

    Note: Make sure that your proxy is fully resolvable and then replace the settings below with your actual proxy details (leave out user:password if you don’t use one).

    export PROXY_PROTO="http"
    export PROXY_HOST="user:password@proxy"
    export PROXY_PORT="3128"
    # Compose the full proxy URL that the rest of this post refers to as ${PROXY}
    export PROXY="${PROXY_PROTO}://${PROXY_HOST}:${PROXY_PORT}"

    First, install some essentials (reboot after upgrade if you like).
    echo "Acquire::http::Proxy \"${PROXY}\";" \
    > /etc/apt/apt.conf.d/90proxy
    apt-get update && apt-get upgrade
    apt-get install git openssh-server rsync socat screen vim

    Configure global proxies

    For any http:// or https:// repositories we can just set a shell environment variable. We’ll set this in /etc/environment so that all future shells have it automatically.

    cat >> /etc/environment << EOF
    export http_proxy="${PROXY}"
    export https_proxy="${PROXY}"
    export HTTP_PROXY="${PROXY}"
    export HTTPS_PROXY="${PROXY}"
    export ftp_proxy="${PROXY}"
    export FTP_PROXY="${PROXY}"
    export no_proxy=localhost
    export NO_PROXY=localhost
    EOF

    Source this to set the proxy variables in your current shell.
    source /etc/environment
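
    A quick sanity check that the variables are now set in your current shell:
    env | grep -i proxy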

    Tell sudo to keep these environment variables
    echo 'Defaults env_keep = "http_proxy https_proxy ftp_proxy no_proxy"' \
    > /etc/sudoers.d/01_proxy

    Configure Git

    For any git:// repositories we need to make a script that uses socat (you could use netcat) and tell Git to use this as the proxy.

    cat > /usr/local/bin/ << EOF
    #!/bin/bash
    # \$1 = hostname, \$2 = port
    exec socat STDIO PROXY:${PROXY_HOST}:\${1}:\${2},proxyport=${PROXY_PORT}
    EOF
    Make it executable.
    chmod a+x /usr/local/bin/

    Tell Git to proxy connections through this script.
    git config --global core.gitProxy /usr/local/bin/
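
    You can confirm the setting took effect with:
    git config --global --get core.gitproxy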

    Clone OpenStack Ansible

    OK, let’s clone the OpenStack Ansible repository! We’re living on the edge and so will build from the tip of the master branch.
    git clone git:// \
    cd /opt/openstack-ansible/

    If you would prefer to build from a specific release, such as the latest stable, feel free to now check out the appropriate tag. For example, at the time of writing this is tag 13.3.1. You can get a list of tags by running the git tag command.

    # Only run this if you want to build the 13.3.1 release
    git checkout -b tag-13.3.1 13.3.1

    Or if you prefer, you can check out the tip of the stable branch, which prepares for the upcoming stable minor release.

    # Only run this if you want to build the latest stable code
    git checkout -b stable/mitaka origin/stable/mitaka

    Prepare log location

    If something goes wrong, it’s handy to have the log available.

    export ANSIBLE_LOG_PATH=/root/ansible-log

    Bootstrap Ansible

    Now we can kick off the ansible bootstrap. This prepares the system with all of the Ansible roles that make up an OpenStack environment.
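
    Assuming a stock checkout, the bootstrap is kicked off with the script shipped in the repository’s scripts directory:
    ./scripts/bootstrap-ansible.sh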

    Upon success, you should see:

    System is bootstrapped and ready for use.

    Bootstrap OpenStack Ansible All In One

    Now let’s bootstrap the all in one system. This configures the host with appropriate disks and network configuration, etc ready to run the OpenStack environment in containers.
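
    Again assuming a stock checkout, the AIO bootstrap is the sibling script:
    ./scripts/bootstrap-aio.sh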

    Run the Ansible playbooks

    The final task is to run the playbooks, which set up all of the OpenStack components on the host and containers. Before we proceed, however, this requires some additional configuration for the proxy.

    The user_variables.yml file under the root filesystem at /etc/openstack_deploy/user_variables.yml is where we configure environment variables for OSA to export and set some other options (again, note the leading / before etc – do not modify the template file at /opt/openstack-ansible/etc/openstack_deploy by mistake).

    cat >> /etc/openstack_deploy/user_variables.yml << EOF
    ## Proxy settings
    proxy_env_url: "\"${PROXY}\""
    no_proxy_env: "\"localhost,,{{ internal_lb_vip_address }},{{ external_lb_vip_address }},{% for host in groups['all_containers'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}\""
    global_environment_variables:
      HTTP_PROXY: "{{ proxy_env_url }}"
      HTTPS_PROXY: "{{ proxy_env_url }}"
      NO_PROXY: "{{ no_proxy_env }}"
      http_proxy: "{{ proxy_env_url }}"
      https_proxy: "{{ proxy_env_url }}"
      no_proxy: "{{ no_proxy_env }}"
    EOF

    Secondly, if you’re running the latest stable, 13.3.x, you will need to make a small change to the pip package list for the keystone (authentication component) container. Currently it pulls in httplib2 version 0.8, however this does not appear to respect the NO_PROXY variable and so keystone provisioning fails. Version 0.9 seems to fix this problem.

    sed -i 's/state: present/state: latest/' \

    Now run the playbooks!

    Note: This will take a long time, perhaps a few hours, so run it in a screen or tmux session.

    time ./scripts/

    Verify containers

    Once the playbooks complete, you should be able to list your running containers and see their status (there will be a couple of dozen).
    lxc-ls -f

    Log into OpenStack

    Now that the system is complete, we can start using OpenStack!

    You should be able to use your web browser to log into Horizon, the OpenStack Dashboard, at your AIO host’s IP address.

    If you’re not sure what IP that is, you can find out by looking at which address port 443 is listening on.

    netstat -ltnp |grep 443

    The admin user’s password is available in the user_secrets.yml file on the AIO host.
    grep keystone_auth_admin_password \


    A successful login should reveal the admin dashboard.


    Enjoy your OpenStack Ansible All-in-one!

    Windows 3.11 nostalgia

    Because OS/2 didn’t go so well… let’s try something I’m a lot more familiar with. To be honest, the last time I used Windows in earnest on the desktop was around 3.11, so I kind of know it back to front (fun fact: I’ve read the entire Windows 3.0 manual).

    It turns out that once you have MS-DOS installed in qemu, installing Windows 3.11 is trivial. I didn’t even change any settings for qemu; I just basically specced everything up to be very minimal (50MB RAM, 512MB disk).
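
    If you want to reproduce something similar, the setup really is that minimal. A sketch (file names are assumptions, and you would install MS-DOS into the image first):
    qemu-img create -f qcow2 win311.img 512M
    qemu-system-i386 -m 50 -hda win311.img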

    Windows 3.11 was not a fun time as soon as you had to do anything… nightmares of drivers, CONFIG.SYS and AUTOEXEC.BAT plague my mind. But hey, it’s damn fast on a modern processor.

    Swift + Xena would make the perfect digital preservation solution

    Some of you might not know, but for some years I worked at the National Archives of Australia on what was, at the time, their leading digital preservation platform. It was awesome, open source, and they paid me to hack on it.
    The most important parts of the platform were Xena and Digital Preservation Recorder (DPR). Xena was, and hopefully still is, amazing. It takes in a file and guesses the format. If it’s a closed proprietary format and Xena has the right plugin, it converts it to an open standard and optionally turns it into a .xena file ready to be ingested into the digital repository for long term storage.

    We did this knowing that proprietary formats change quickly, and if you store a file in one for the long term (20, 40, 100 years) you may not be able to open it. An open format, on the other hand, is documented, so even if no software can read it any more you can still get your data back.

    Once a file had passed through Xena, we’d use DPR to ingest it into the archive. Once in the archive, we had other open source daemons we wrote which ensured we didn’t lose things to bitrot and kept everything duplicated and separated. It was a lot of work, and the amount of space required kept growing.

    Anyway, now I’m an OpenStack Swift core developer, and wow, I wish Swift had been around back then, because it’s exactly what is required for the DPR side. It duplicates, scales out, checks checksums, quarantines and corrects, keeps everything replicated and separated, and does it all automatically. Swift is also highly customisable: you can create your own middleware and insert it into the proxy pipeline or into any of the storage nodes’ pipelines, and have it do whatever you need – add metadata, act on an object on ingest or whenever it is read, update some other system… really, you can do whatever you want. Maybe even wrap Xena into some middleware.

    Going one step further, IBM has been working on a thing called Storlets, which uses Swift and Docker to run code against objects and is now in the OpenStack namespace. Currently storlets are written in Java, and so is Xena… so this might also be a perfect fit.

    Anyway, I got talking with Chris Smart, a mate who also used to work in the same team at the NAA, which got me thinking about all this, so I thought I’d put my rambling thoughts somewhere in case other archives or libraries are interested in digital preservation and need some ideas. Best part: the software is open source and also free!

    Happy preserving.

    August 07, 2016

    SM2000 – Part 8 – Gippstech 2016 Presentation

    Justin, VK7TW, has published a video of my SM2000 presentation at Gippstech, which was held in July 2016.

    Brady O’Brien, KC9TPA, visited me in June. Together we brought the SM2000 up to the point where it is decoding FreeDV 2400A waveforms at 10.7MHz IF, which we demonstrate in this video. I’m currently busy with another project but will get back to the SM2000 (and other FreeDV projects) later this year.

    Thanks Justin and Brady!

    FreeDV and this video were also mentioned in this interesting Reddit post/debate from Gary KN4AQ on VHF/UHF Digital Voice – a peek into the future