Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

January 22, 2018

LCA 2018 Kernel Miniconf – SELinux Namespacing Slides

I gave a short talk on SELinux namespacing today at the Kernel Miniconf in Sydney — the slides from the talk are here:

This is a work in progress to which I’ve been contributing, following on from initial discussions at Linux Plumbers 2017.

In brief, there’s a growing need to be able to provide SELinux confinement within containers: typically, SELinux appears disabled within a container on Fedora-based systems, as a workaround for a lack of container support. Underlying this is a requirement to provide per-namespace SELinux instances, where each container has its own SELinux policy and private kernel SELinux APIs.

A prototype for SELinux namespacing was developed by Stephen Smalley, who has released the code. There were, and still are, many TODO items. I’ve since been working on adding namespacing support for on-disk inode labels, which are represented as security xattrs. See the v0.2 patch post for more details.

Much of this work will be of interest to other LSMs such as Smack, and many architectural and technical issues remain to be solved.  For those interested in this work, please see the slides, which include a couple of overflow pages detailing some known but as yet unsolved issues (supplied by Stephen Smalley).

I anticipate discussions on this and related topics (LSM stacking, core namespaces) later in the year at Plumbers and the Linux Security Summit(s), at least.

The session was live streamed — I gather a standalone video will be available soon!

2018 – Day 1 – Session 3 – Developers, Developers Miniconf

Beyond Web 2.0 Russell Keith-Magee

  • Django guy
  • Back in 2005 when Django first came out
    • Web was fairly simple, click something and something happened
    • model, views, templates, forms, url routing
  • The web c 2016
    • Rich client
    • API
    • mobile clients, native apps
    • realtime channels
  • Rich client frameworks
    • response to the increased complexity that is required
    • Complex client-side and complex server-side code
  • Isomorphic Javascript development
    • Same code on both client and server
    • Only works with javascript really
    • hacks to work with other languages but not great
  • Isomorphic javascript development
    • Requirements
    • Need something in-between server and browser
    • Was once done with Java based web clients
    • model, view, controller
  • API-first development
  • How does it work with high-latency or no-connection?
  • Part of the controller and some of the model needed in the client
    • If you have python on the server you need python on the client
    • Brython, Skulpt, PyPy.js
    • <script type=”text/python”>
    • Note: Not Python being compiled into JavaScript. Python is run in the browser
    • Need to download full python interpreter though (500k-15M)
    • Fairly fast
  • Do we need a full python interpreter?
    • Maybe something just to run the bytecode
    • Batavia
    • Javascript implementation of python virtual machine
    • 10KB
    • Downside – slower than cpython on the same machine
  • WASM
    • Like assembly but for the web
    • Benefits from 70 years of experience with assembly languages
    • Close to Cpython speed
    • But
      • Not quite on browsers
      • No garbage collection
      • Cannot manipulate DOM
      • But both coming soon
  • Example:
  • But “possible isn’t enough”

Using “old skool” Free tools to easily publish API documentation – Alec Clew

  • Your API is successful if people are using it
  • High Quality and easy to use
  • Provide great docs (might cut down on support tickets)
  • Who are you writing for?
    • Might not have English as a first language
    • New to the API
    • Might have different tech expertise (different languages)
    • Different tooling
  • Can be hard work
  • Make better docs
    • Use diagrams
    • Show real code (complete and working)
  • Keep your sentences simple
  • Keep the docs current
  • Treat documentation like code
    • Fix bugs
    • add features
    • refactor
    • automatic builds
    • Cross platform support
    • “Everything” is text and under version control
  • Demo using pandoc
  • Tools
  • pandoc, plantuml, Graphviz, M4, make, bash/sed/python/etc


Lightning Talks

  • Nic – Alt attribute
    • need to be added to images
    • Often missing when images are used as links
  • Vaibhav Sager – Travis-CI
    • Builds code
    • Can build websites
    • Uses it to build his resume
    • Builds presentations
  • Steve Ellis
    • Openshift Origin Demo
  • Alec Clews
    • Python vs C vs PHP vs Java vs Go for small case study
    • Implemented simple xmlrpc client in 5 languages
    • Python and Go were straightforward, each had one simple trick (40-50 lines)
    • C was 100 lines. A lot harder. Conversions, etc all manual
    • PHP wasn’t too hard. easier in modern vs older PHP
  • Daurn
    • Lua
    • – Lua in the browser
  • Alistair
    • How not to docker ( don’t trust the Internet)
    • Don’t run privileged
    • Don’t expose your docker socket
    • Don’t use host network mode
    • Know where your code is FROM
    • Make sure your kernel on your host is secure
  • Daniel
    • Put proxy in front of the docker socket
    • You can use it to limit what non-privileged users with socket access to the docker port can do


2018 – Day 1 – Session 2

Manage all your tasks with TaskWarrior Paul ‘@pjf’ Fenwick

  • Lots of task management software out there
    • Tried lots
    • Doesn’t like proprietary ones, since he’s unable to add features he wants
    • Likes command line
  • Disclaimer: “Most systems do not work for most people”
  • TaskWarrior
    • Lots of features
    • Learning cliff

Intro to TaskWarrior

  • Command line
  • Simple level can be just a todo list
  • Can add tags
    • unstructured many to many
    • Added by just putting “+whatever” on the command
    • Great for searching
    • Can put all people or all types of jobs together
  • Meta Tags
    • Automatic date related (eg due this week or today)
  • Project
    • A bunch of tasks
    • Can be strung together
    • eg Travel project, projects for each trip inside them
  • Contexts (show only some projects and tasks)
    • Work tasks
    • Tasks for just a client
    • Home stuff
  • Annotation (Taking notes)
    • $ task 31 annotate “extra stuff”
    • has an auto timestamp
    • shown by default, or you can show just a count of them
  • Tasks associated with dates
    • “wait”
    • Don’t show task until a date (approx)
    • Hides a task for an amount of time
    • Scheduled tasks’ urgency is boosted at a specific date
  • Until
    • delete a task after a certain date
  • Relative to other tasks
    • eg book flights 30 days before a conference
    • good for scripting, create a whole bunch of related tasks for a project
  • due dates
    • All sorts of things (see above) give tasks higher priority
    • Tasks can be manually changed
  • Tools and plugins
    • Taskopen – Opens resources in annotations (eg website, editor)
  • Working with others
    • Bugwarrior – interfaces with GitHub, Trello, Gmail, Jira, Trac, Bugzilla and lots of other things
    • Lots of settings
    • Keeps all in sync
  • Lots of extra stuff
    • Paul updates his shell prompt to remind him things are busy
  • Also has
    • Graphical reports: burndown, calendar
    • Hooks: Eg hooks to run all sort of stuff
    • Online Sync
    • Android client
    • Web client
  • Reminder: it has a steep learning curve.

Love thy future self: making your systems ops-friendly Matt Palmer

  • Instrumentation
  • Instrumenting incoming requests
    • Count of the total number of requests (broken down by requestor)
    • Count of responses (broken down by request/error)
    • How long it took (broken down by success/errors)
    • How many right now
  • Get number of in-progress requests, average time etc
  • Instrumenting outgoing requests
    • For each downstream component
    • Number of request sent
    • how many responses we’ve received (broken down by success/err)
    • How long it took to get the response (broken down by request/error)
    • How many right now
  • Gives you
    • incoming/outgoing ratio
    • error rate = problem is downstream
  • Logs
    • Logging cost tends to be more than instrumentation
  • Three Log priorities
    • Error
      • Need a full stack trace
      • Add info don’t replace it
      • Capture all the relevant variables
      • Structure
    • Information
      • Startup messages
      • Basic request info
      • Sampling
    • Debug
      • printf debugging at webscale
      • tag with module/method
      • unique id for each request
      • late-bind log data if possible.
      • Allow selective activation at runtime (feature flag, special url, signals)
    • Summary
      • Visibility required
      • Fault isolation


2018 – Day 1 – Session 1 – Kernel Miniconf

Look out for what’s in the security pipeline – Casey Schaufler

Old Protocols

  • SELinux
    • Not much changing
  • Smack
    • Network configuration improvements and catchup with how the netlabel code wants things to be done.
  • AppArmor
    • Labeled objects
    • Networking
    • Policy stacking

New Security Modules

  • Some people think existing security modules don’t work well with what they are doing
  • Landlock
    • eBPF extension to SECMARK
    • Kills processes when it goes outside of what it should be doing
    • General purpose process tags
    • For application use (the app can decide what it wants based on tags, not something external to the process enforcing things)
  • HardChroot
    • Limits on chroot jail
    • mount restrictions
  • Safename
    • Prevents creation of unsafe file names
    • start, middle or end characters
  • SimpleFlow
    • Tracks tainted data

Security Module Stacking

  • Problems with incompatibility of module labeling
  • People want different security policy and mechanism in containers than from the base OS
  • Netfilter problems between Smack and AppArmor


  • Containers are a little bit undefined right now. Not a kernel construct
  • But while not kernel constructs, need to work with and support them


  • Printing pointers (eg in syslog)
  • Usercopy



January 21, 2018

4cm thick wood cnc project: shelf

The lighter wood is about 4cm thick. Both of the sides are cut from a single plank of timber which left the feet with a slight weak point at the back. Given a larger bit of timber I would have tapered the legs outward from the back more gradually. But the design is restricted by the timber at hand.

The shelves are plywood which turned out fairly well after a few coats of poly. I knocked the extremely sharp edges off the ply so it hurts a little rather than a lot if you accidentally poke the edge. This is a mixed machine and human build; the back of the plywood that meets the uprights was knocked off using a bandsaw.

Being able to CNC thick timber like this opens up more bold designs. Currently I have to use a 1/2 inch bit to get this reach. Stay tuned for more CNC timber fun!

January 19, 2018

January 16, 2018

More About the Thinkpad X301

Last month I blogged about the Thinkpad X301 I got from a rubbish pile [1]. One thing I didn’t realise when writing that post is that the X301 doesn’t have the keyboard light that the T420 has. With the T420 I could press the bottom left (FN) and top right (PgUp from memory) keys on the keyboard to turn a light on the keyboard. This is really good for typing at night. While I can touch type the small keyboard on a laptop makes it a little difficult so the light is a feature I found useful. I wrote my review of the X301 before having to use it at night.

Another problem I noticed is that it crashes after running Memtest86+ for between 30 minutes and 4 hours. Memtest86+ doesn’t report any memory errors, the system just entirely locks up. I have 2 DIMMs for it (2G and 4G), I tried installing them in both orders, and I tried with each of them in the first slot (the system won’t boot if only the second slot is filled). Nothing changed. Now it is possible that this is something that might not happen in real use. For example it might only happen due to heat when the system is under sustained load which isn’t something I planned for that laptop. I would discard a desktop system that had such a problem because I get lots of free desktop PCs, but I’m prepared to live with a laptop that has such a problem to avoid paying for another laptop.

Last night the laptop battery suddenly stopped working entirely. I had it unplugged for about 5 minutes when it abruptly went off (no flashing light to warn that the battery was low or anything). Now when I plug it in the battery light flashes orange. A quick Google search indicates that this might mean that a fuse inside the battery pack has blown or that there might be a problem with the system board. Replacing the system board is much more than the laptop is worth and even replacing the battery will probably cost more than it’s worth. I previously bought a Thinkpad T420 at auction because it didn’t cost much more than getting a new battery and PSU for a T61 [2] and I expect I can find a similar deal if I poll the auction sites for a while.

Using an X series Thinkpad has been a good experience and I’ll definitely consider an X series for my next laptop. My previous history of laptops involved going from ones with a small screen that were heavy and clunky (what was available with 90’s technology and cost less than a car) to ones that had a large screen and were less clunky but still heavy. I hadn’t tried small and light with technology from the last decade, it’s something I could really get used to!

By today’s standards the X301 is deficient in a number of ways. It has 64G of storage (the same as my most recent phones) which isn’t much for software development, 6G of RAM which isn’t too bad but is small by today’s standards (16G is a common factory option nowadays), a 1440*900 screen which looks bad in any comparison (less than the last 3 phones I’ve owned), and a slow CPU. No two of these limits would be enough to make me consider replacing that laptop. Even with the possibility of crashing under load it was still a useful system. But the lack of a usable battery in combination with all the other issues makes the entire system unsuitable for my needs. I would be very happy to use a fast laptop with a high resolution screen even without a battery, but not with this list of issues.

Next week I’m going to a conference and there’s no possibility of buying a new laptop before then. So for a week when I need to use a laptop a lot I will have a sub-standard laptop.

It really sucks to have a laptop develop a problem that makes me want to replace it so soon after I got it.

January 10, 2018

Priorities for my team

(unthreaded from here)

During the day, I’m a Lead of a group of programmers. We’re responsible for a range of tools and tech used by others at the company for making games.

I have a list of my priorities (and some related questions) of things that I think are important for us to be able to do well as individuals, and as a team:

  1. Treat people with respect. Value their time, place high value on their well-being, and start with the assumption that they have good intentions
    (“People” includes yourself: respect yourself, value your own time and well-being, and have confidence in your good intentions.)
  2. When solving a problem, know the user and understand their needs.
    • Do you understand the problem(s) that need to be solved? (it’s easy to make assumptions)
    • Have you spoken to the user and listened to their perspective? (it’s easy to solve the wrong problem)
    • Have you explored the specific constraints of the problem by asking questions like:
      • Is this part needed? (it’s easy to over-reach)
      • Is there a satisfactory simpler alternative? (actively pursue simplicity)
      • What else will be needed? (it’s easy to overlook details)
    • Have you discussed your proposed solution with users, and do they understand what you intend to do? (verify, and pursue buy-in)
    • Do you continue to meet regularly with users? Do they know you? Do they believe that you’re working for their benefit? (don’t under-estimate the value of trust)
  3. Have a clear understanding of what you are doing.
    • Do you understand the system you’re working in? (it’s easy to make assumptions)
    • Have you read the documentation and/or code? (set yourself up to succeed with whatever is available)
    • For code:
      • Have you tried to modify the code? (pull a thread; see what breaks)
      • Can you explain how the code works to another programmer in a convincing way? (test your confidence)
      • Can you explain how the code works to a non-programmer?
  4. When trying to solve a problem, debug aggressively and efficiently.
    • Does the bug need to be fixed? (see 1)
    • Do you understand how the system works? (see 2)
    • Is there a faster way to debug the problem? Can you change code or data to cause the problem to occur more quickly and reliably? (iterate as quickly as you can, fix the bug, and move on)
    • Do you trust your own judgement? (debug boldly, have confidence in what you have observed, make hypotheses and test them)
  5. Pursue excellence in your work.
    • How are you working to be better understood? (good communication takes time and effort)
    • How are you working to better understand others? (don’t assume that others will pursue you with insights)
    • Are you responding to feedback with enthusiasm to improve your work? (pursue professionalism)
    • Are you writing high quality, easy to understand, easy to maintain code? How do you know? (continue to develop your technical skills)
    • How are you working to become an expert and industry leader with the technologies and techniques you use every day? (pursue excellence in your field)
    • Are you eager to improve (and fix) systems you have worked on previously? (take responsibility for your work)

The list was created for discussion with the group, and as an effort to articulate my own expectations in a way that will help my team understand me.

Composing this has been a useful exercise for me as a lead, and definitely worthwhile for the group. If you’ve never tried writing down your own priorities, values, and/or assumptions, I encourage you to try it :)

January 07, 2018

Engage the Silent Drive

I’ve been busy electrocuting my boat – here are our first impressions of the Torqeedo Cruise 2.0T on the water.

About 2 years ago I decided to try sailing, so I bought a second hand Hartley TS16; a popular small “trailer sailor” here in Australia. Since then I have been getting out once every week, having some very pleasant days with friends and family, and even at times by myself. Sailing really takes you away from everything else in the world. It keeps you busy as you are always pulling a rope or adjusting this and that, and is physically very active as you are clambering all over the boat. Mentally there is a lot to learn, and I started as a complete nautical noob.

Sailing is so quiet and peaceful, you get propelled by the wind using aerodynamics and it feels like magic. However this is marred by the noise of outboard motors, which are typically used at the start and end of the day to get the boat to the point where it can sail. They are also useful to get you out of trouble in high seas/wind, or when the wind dies. I often use the motor to “un hit” Australia when I accidentally lodge myself on a sand bar (I have a lot of accidents like that).

The boat came with an ancient 2 stroke which belched smoke and noise. After about 12 months this motor suffered a terminal meltdown (impeller failure and overheating) so it was replaced with a modern 5HP Honda 4-stroke, which is much quieter and very fuel efficient.

My long term goal was to “electrocute” the boat and replace the infernal combustion outboard engine with an electric motor and battery pack. I recently bit the bullet and obtained a Torqeedo Cruise 2kW outboard from Eco Boats Australia.

My friend Matt and I tested the motor today and are really thrilled. Matt is an experienced Electrical Engineer and sailor so was an ideal companion for the first run of the Torqeedo.

Torqeedo Cruise 2.0 First Impressions

It’s silent – incredibly so. Just a slight whine conducted from the motor/gearbox pod beneath the water. The sound of water flowing around the boat is louder!

The acceleration is impressive, better than the 4-stroke. Make sure you sit down. That huge, low-RPM prop delivers loads of torque. We settled on 1000W after experimenting with other power levels.

The throttle control is excellent, you can dial up any speed you want. This made parking (mooring) very easy compared to the 4-stroke which is more of a “single speed” motor (idles at 3 knots, 4-5 knots top speed) and is unwieldy for parking.

It’s fit for purpose. This is not a low power “trolling” motor, it is every bit as powerful as the modern Honda 5HP 4-stroke. We did an A/B test and obtained the same top speed (5 knots) in the same conditions (wind/tide/stretch of water). We used it with 15 knot winds and 1m seas and it was the real deal – pushing the boat exactly where we wanted to go with authority. This is not a compromise solution. The Torqeedo shows internal combustion whose house it is.

We had some fun sneaking up on kayaks at low power, getting to within a few metres before they heard us. Other boaties saw us gliding past with the sails down and couldn’t work out how we were moving!

A hidden feature is Azipod steering – it steers through more than 270 degrees. You can reverse without reverse gear, and we did “donuts” spinning on the keel!

Some minor issues: Unlike the Honda, the Torqeedo doesn’t tilt completely out of the water when sailing, leaving some residual drag from the motor/propeller pod. It also has to be removed from the boat for trailering, due to insufficient road clearance.

Walk Through

Here are the two motors with the boat out of the water:

It’s quite a bit longer than the Honda, mainly due to the enormous prop. The centres of the two props are actually only 7cm apart in height above ground. I had some concerns about ground clearance, both when trailering and also in the water. I have enough problems hitting Australia and like the way my boat can float in just 30cm of water. I discussed this with my very helpful Torqeedo dealer, Chris. He said tests with the short and long versions suggested this wasn’t a problem, and in fact the “long” version provided better directional control. More water on top of the prop is a good thing. They recommend 50mm minimum, I have about 100mm.

To get started I made up a 24V battery pack using a plastic tub and 8 x 3.2V 100AH Lithium cells, left over from my recent EV battery upgrade. The cells are in varying conditions; I doubt any of them have 100AH capacity after 8 years of being hammered in my EV. On the day we ran for nearly 2 hours before one of the weaker cells dipped beneath 2.5V. I’ll sort through my stock of second hand cells some time to optimise the pack.

The pack plus motor weighs 41kg; the 5HP Honda plus 5l of petrol, 32kg. At low power (600W, 3.5 knots), this 2.5kWh pack will give us a range of 14 nm or 28km. Plenty – on a huge day’s sailing we cover 40km, of which just 5km would be on motor.

All that power on board is handy too, for example the load of a fridge would be trivial compared to the motor, and a 100W HF radio no problem. So now I can quaff ice-cold sparkling shiraz or a nice beer, while having an actual conversation and not choking on exhaust fumes!

Here’s Matt taking us for a test drive, not much to the Torqeedo above the water:

For a bit of fun we ran both motors (maybe 10HP equivalent) and hit 7 knots, almost getting the Hartley up on the plane. Does this make it a Hybrid boat?


We are in love. This is the future of boating. For sale – one 5HP Honda 4-stroke.

Annual Penguin Picnic, January 28, 2018

Jan 28 2018 12:00
Jan 28 2018 18:00
Yarra Bank Reserve, Hawthorn.

The Linux Users of Victoria Annual Penguin Picnic will be held on Sunday, January 28, starting at 12 noon at the Yarra Bank Reserve, Hawthorn.

LUV would like to acknowledge Infoxchange for the Richmond venue.

Linux Users of Victoria Inc. is a subcommittee of Linux Australia.


January 06, 2018

A little bit of floating point in a memory allocator — Part 1: Background

This post contains the same material as this thread of tweets, with a few minor edits.

Over my holiday break at the end of 2017, I took a look into the TLSF (Two Level Segregated Fit) memory allocator to better understand how it works. I’ve made use of this allocator and have been impressed by its real world performance, but never really done a deep dive to properly understand it.

The mapping_insert() function is a key part of the allocator implementation, and caught my eye. Here’s how that function is described in the paper A constant-time dynamic storage allocator for real-time systems:

I’ll be honest: from that description, I never developed a clear picture in my mind of what that function does.

(Reading it now, it seems reasonably clear – but I can say that only after I spent quite a bit of time using other methods to develop my understanding)

Something that helped me a lot was looking at an implementation of that function. There’s a bunch of long-named macro constants in there, and a few extra implementation details. If you collapse those it looks something like this:


void mapping_insert(size_t size, int* fli, int* sli)
{
  int fl, sl;
  if (size < 256) {
    fl = 0;
    sl = (int)size / 8;
  } else {
    fl = fls(size);
    sl = (int)(size >> (fl - 5)) ^ 0x20;
    fl -= 7;
  }
  *fli = fl;
  *sli = sl;
}

It’s a pretty simple function (it really is). But I still failed to *see* the pattern of results that would be produced in my mind’s eye.

I went so far as to make a giant spreadsheet of all the intermediate values for a range of inputs, to paint myself a picture of the effect of each step :) That helped immensely.

Breaking it down…

There are two cases handled in the function: one for when size is below a certain threshold, and one for when it is larger. The first is straightforward, and accounts for a small number of possible input values. The large size case is more interesting.

The function computes two values: fl and sl, the first and second level indices for a lookup table. For the large case, fl (where fl is “first level”) is computed via fls(size) (where fls is short for “find last set” – similar names, just to keep you on your toes).

fls() returns the index of the largest bit set, counting from the least significant bit, which is the index of the largest power of two. In the words of the paper:

“the instruction fls can be used to compute the ⌊log2(x)⌋ function”

Which is, in C-like syntax: floor(log2(x))

And there’s that “fl -= 7” at the end. That will show up again later.

For the large case, the computation of sl has a few steps:

  sl = (size >> (fl - 5)) ^ 0x20;

So: shift size down by some amount (based on fl), and mask out the sixth bit?

(Aside: The CellBE programmer in me is flinching at that variable shift)

It took me a while (longer than I would have liked…) to realize that this size >> (fl - 5) is shifting size to generate a number that has exactly six significant bits, at the least significant end of the register (bits 5 thru 0).

Because fl is the index of the most significant bit, after this shift, bit 5 will always be 1 – and that “^ 0x20” will unset it, leaving the result as a value between 0 and 31 (inclusive).

So here’s where floating point comes into it, and the cute thing I saw: another way to compute fl and sl is to convert size into an IEEE754 floating point number, and extract the exponent, and most significant bits of the mantissa. I’ll cover that in the next part, here.

A little bit of floating point in a memory allocator — Part 2: The floating point


This post contains the same material as this thread of tweets, with a few minor edits.

In IEEE754, floating point numbers are represented like this:


nnn is the exponent, which is floor(log2(size)) — which happens to be the fl value computed by TLSF.

sss… is the significand fraction: the part that follows the decimal point, which happens to be sl.

And so to calculate fl and sl, all we need to do is convert size to a floating point value (on recent x86 hardware, that’s a single instruction). Then we can extract the exponent, and the upper bits of the fractional part, and we’re all done :D

That can be implemented like this:

double sf = (int64_t)size;
uint64_t sfi;
memcpy(&sfi, &sf, 8);
fl = (sfi >> 52) - (1023 + 7);
sl = (sfi >> 47) & 31;

There’s some subtleties (there always is). I’ll break it down…

double sf = (int64_t)size;

Convert size to a double, with an explicit cast. size has type size_t, but with the TLSF implementation we’re using, the largest supported allocation on a 64bit architecture is 2^32 bytes – comfortably less than the precision provided by the double type. If you need your TLSF allocator to allocate chunks bigger than 2^53, this isn’t the technique for you :)

I first tried using float (not double), which can provide correct results — but only if the rounding mode happens to be set correctly. double is easier.

The cast to (int64_t) results in better codegen on x86: without it, the compiler will generate a full 64bit unsigned conversion, and there is no single instruction for that.

The cast tells the compiler to (in effect) consider the bits of size as if they were a two’s complement signed value — and there is an SSE instruction to handle that case (cvtsi2sdq or similar). Again, with the implementation we’re using size can’t be that big, so this will do the Right Thing.

uint64_t sfi;
memcpy(&sfi, &sf, 8);

Copy the 8 bytes of the double into an unsigned integer variable. There are a lot of ways that C/C++ programmers copy bits from floating point to integer – some of them are well defined :) memcpy() does what we want, and any moderately respectable compiler knows how to select decent instructions to implement it.

Now we have floating point bits in an integer register, consisting of one sign bit (always zero here, because size is always positive), eleven exponent bits (offset by 1023), and 52 bits of significand fraction. All we need to do is extract those, and we’re done :)

fl = (sfi >> 52) - (1023 + 7);

Extract the exponent: shift it down (ignoring the always-zero sign bit), subtract the offset (1023), and that 7 we saw earlier, at the same time.

sl = (sfi >> 47) & 31;

Extract the five most significant bits of the fraction – we do need to mask out the exponent.

And, just like that*, we have mapping_insert(), implemented in terms of integer -> floating point conversion.

* Actual code (rather than fragments) may be included in a later post…

January 05, 2018

That gantry just pops right off

Hobby CNC machines sold as "3040" may have a gantry clearance of about 80mm and a z axis travel of around 55mm. A detached gantry is shown below. Notice that there are 3 bolts on the bottom side mounting the z-axis to the gantry. The stepper motor attaches on the side shown so there are 4 NEMA holes to hold the stepper. Note that the normal 3040 doesn't have the mounting plate shown on the z-axis, that crossover plate allows a different spindle to be mounted to this machine.

The plan is to create replacement sides with some 0.5inch offcut 6061 alloy. This will add 100mm to the gantry so it can more easily clear clamps and a 4th axis. Because that would move the cutter mount upward as well, replacing the z-axis with something that has more range, say 160mm, becomes an interesting plan.

One advantage to upgrading a machine like this is that you can reassemble the machine after measuring and designing the upgrade and then cut replacement parts for the machine using the machine.

The 3040 can look a bit spartan with the gantry removed.

The preliminary research is done. Designs created. CAM done. I just have to cut 4 plates and then the real fun begins.

January 04, 2018

Pivoting ‘the book’ from individuals to systems

In 2016 I started writing a book, “Choose Your Own Adventure”, which I wanted to be a call to action for individuals to consider their role in the broader system and how they individually can make choices to make things better. As I progressed the writing of that book, I realised the futility of changing individual behaviours and perspectives without an eye to the systems and structures within which we live. It is relatively easy to focus on oneself, but “no man is an island” and, quite simply, I don’t want to facilitate people turning themselves into more beautiful cogs in a dysfunctional machine. So I’m pivoting the focus of the book (and reusing the relevant material) and am now planning to finish it by mid-2018.

I have recently realised four paradoxes which have instilled in me a sense of urgency to reimagine the world as we know it. I believe we are at a fork in the road where we will either reinforce legacy systems based on outdated paradigms with shiny new things, or choose to forge a new path using the new tools and opportunities at our disposal, hopefully one that is genuinely better for everyone. To do the latter, we need to critically assess the systems and structures we built and actively choose what we want to keep, what we should discard, what sort of society we want in the future and what we need to get there.

I think it is too easily forgotten that we invented all this and can therefore reinvent it if we so choose. But to not make a choice is to choose the status quo.

This is not to say I think everything needs to change. Nothing is so simplistic or misleading as a zero-sum argument. Rather, the intent of this book is to challenge you to think critically about the systems you work within, whether they enable or disable the things you think are important, and most importantly, to challenge you to imagine what sort of world you want to see. Not just for you, but for your family, community and the broader society. I challenge you all to make 2018 a year of formative creativity in reimagining the world we live in and how we get there.

The paradoxes, in brief, are as follows:

  • That though power is more distributed than ever, most people are still struggling to survive.
    It has been apparent to me for some time that there is a substantial and growing shift in power from traditional gatekeepers to ordinary people through the proliferation of rights based philosophies and widespread access to technology and information. But the systemic (and artificial) limitations on most people’s time and resources mean most people simply cannot participate fully in improving their own lives, let alone in contributing substantially to the community and world in which they live. If we consider the impact of business and organisational models built on scarcity, centricity and secrecy, we quickly see that normal people are locked out of a variety of resources, tools and knowledge with which they could better their lives. Why do we take publicly funded education, research and journalism, lock them behind paywalls, and then blame people for not having the skills, knowledge or facts at their disposal? Why do we teach children to be compliant consumers rather than empowered makers? Why do we put the greatest cognitive load on our most vulnerable through social welfare systems that then beget reliance? Why do we not value personal confidence in the same way we value business confidence, when personal confidence indicates the capacity for individuals to contribute to their community? Why do we still equate value with quantity rather than quality, like the number of hours worked rather than what was done in those hours? If a substantial challenge of the 21st century is having enough time and cognitive capacity to spare, why don’t we have strategies to free up more time for more people, perhaps by working fewer hours for more return? Finally, what do we need to do systemically to empower more people to move beyond survival and into being able to thrive?
  • Substantial paradigm shifts have happened but are not being integrated into people’s thinking and processes.
    The realisation here is that even if people are motivated to understand something fundamentally new to their worldview, it doesn’t necessarily translate into how they behave. It is easier to improve something than to change it; easier to provide symptomatic relief than to cure the disease. Interestingly, I often see people confuse iteration for transformation, or symptomatic relief with addressing causal factors, so perhaps there is also a need for critical and systems thinking as part of the general curriculum. This is important because symptomatic relief, whilst sometimes necessary to alleviate suffering, is an effort in chasing one’s tail and can often perpetuate the problem. For instance, providing foreign aid without mitigating the displacement of local farmers’ efforts can create national dependence on further aid. Efforts to address causal factors are necessary to truly address a problem. Even if addressing the causal problem is outside your influence, you should at least ensure your symptomatic relief efforts are not built to propagate the problem. One of the other problems we face, particularly in government, is that the systems involved are largely products of centuries-old thinking. If we consider some of the paradigm shifts of our times, we have moved from scarcity to surplus, from centralised to distributed, from closed to open, from analog to digital and from normative to formative. And yet people still assume old paradigms in creating new policies, programs and business models. For example, how many times have you heard someone talk about innovative public engagement (tapping into a distributed network of expertise) by consulting through a website (maintaining central decision making control using a centrally controlled tool)? Or “innovation” being measured (and rewarded) through patents or copyright, both scarcity based constructs developed centuries ago?
“Open government” is often developed by small insular teams through habitually closed processes, without any self-awareness of the irony of the approach. And new policy and legislation is developed in analog formats without any substantial input from those tasked with implementation, or consideration of how best to consume the operating rules of government in the systems of society. Consider also the number of times we see existing systems assumed to be correct by merit of existing, without any critical analysis. For instance, a compliance model that has no measurable impact. At what point, and by what mechanisms, can we weigh up the merits of the old and the new when we are continually building upon a precedent based system of decision making? If 3D printing helped provide a surplus economy by which we could help solve hunger and poverty, why wouldn’t that be weighed up against the benefits of traditional scarcity based business models?
  • That we are surrounded by new things every day and yet there is a serious lack of vision for the future.
    One of the first things I try to do in any organisation is understand the vision, the strategy and what success should look like. In this way I can either figure out how to best contribute meaningfully to the overarching goal, or in some cases help grow or develop the vision and strategy to be a little more ambitious. I like to measure progress and understand the baseline from which I’m trying to improve, but I also like to know what I’m aiming for. So, what could an optimistic future look like for society? For us? For you? How do you want to use the new means at our disposal to make life better for your community? Do we dare imagine a future where everyone has what they need to thrive, where we could unlock the creative and intellectual potential of our entire society in a 21st century Renaissance, rather than the vast proportion of our collective cognitive capacity going into just getting food on the table and the kids to school? Only once you can imagine where you want to be can we have a constructive discussion about where we want to be collectively, and only then can we talk constructively about the systems and structures we need to support such futures. Until then, we are all just tweaking the settings of a machine built by our ancestors. I have been surprised to find in government a lot of strategies without vision, a lot of KPIs without measures of success, and in many cases a disconnect between what a person is doing and the vision or goals of the organisation or program they are in. We talk about “innovation” a lot, but in the back of their minds people are often imagining a better website or app, which isn’t much of a transformation. We are surrounded by dystopic visions of the distant future, and yet most government vision statements only go so far as articulating something “better” than what we have now, with “strategies” often focused on shopping lists of disconnected tactics 3-5 years into the future.
The New Zealand Department of Conservation provides an inspiring contrast with a 50 year vision they work towards, from which they develop their shorter term stretch goals and strategies on a rolling basis and have an ongoing measurable approach.
  • That government is an important part of a stable society and yet is being increasingly undermined, both intentionally and unintentionally.
    The realisation here has been how important government (and democracy) is in providing a safe, stable, accountable, predictable and prosperous society, whilst simultaneously observing first-hand the undermining and degradation of the role of government, both intentionally and unintentionally, from the outside and inside. I have chosen to work in the private sector, non-profit community sector, political sector and now public sector, specifically because I wanted to understand the “system” in which I live and how it all fits together. I believe that “government” – both the political and public sectors – has a critical part to play in designing, leading and implementing a better future. The reason I believe this is because government is one of the few mechanisms that is accountable to the people, in democratic countries at any rate. Perhaps not as much as we would like, and it has been slow to adapt to modern practices, tools and expectations, but governments are one of the most powerful and influential tools at our disposal and we can better use them as such. However, I posit that an internal, largely unintentional and ongoing degradation of the public sectors is underway in Australia, New Zealand, the United Kingdom and other “western democracies”, spurred initially by an ideological shift from ‘serving the public good’ to acting more like a business in the “New Public Management” policy shift of the 1980s. This was useful doublespeak for replacing public service values with business values and practices, which ignores the fact that governments often do what is not naturally delivered by the marketplace and should not only do what is profitable.
The political appointment of heads of departments has also resulted, over time, in replacing frank, fearless and evidence based leadership with politically palatable compromises throughout the senior executive layer of the public sector, which also drives necessarily secretive behaviour, lest the contradictions be apparent to the ordinary person. I see the results of these internal forms of degradation almost every day. From workshops where people under budget constraints seriously consider outsourcing all government services to the private sector, to long suffering experts in the public sector unable to sway leadership with facts until expensive consultants are brought in to ask their opinion and sell the insights back to the department, where they are finally taken seriously (because “industry” said it), through to serious issues where significant failures happen with blame outsourced along with the risk, design and implementation, and the details hidden behind “commercial in confidence” arrangements. The impact on the effectiveness of the public sector is obvious, but the human cost is also substantial, with public servants directly undermined, intimidated, ignored, and left with a growing sense of hopelessness and disillusionment. There is also an intentional degradation of democracy by external (but occasionally internal) agents who benefit from the weakening and limiting of government. This is more overt in some countries than others. A tension between the regulator and those regulated is perfectly natural; however, as the public sector grows weaker, corporate interests gain the upper hand. I have seen many people in government take a vendor’s or lobbyist’s word as gold without critical analysis of the motivations or implications, largely again because the word of a public servant is inherently assumed to be less important than that of anyone in the private sector (or indeed anyone in the Minister’s office).
This imbalance needs to be addressed if the public sector is to play an effective role. Greater accountability and transparency can help, but currently there is a lack of common agreement on the broader role of government in society, both the political and public sectors. So the entire institution, and the stability it can provide, is under threat of death by a billion papercuts. Efforts to evolve government and democracy have largely been limited to iterations on the status quo: better consultation, better voting, better access to information, better services. But a rethink is required, and the internal systemic degradations need to be addressed.

If you think the world is perfectly fine as is, then you are probably quite lucky or privileged. Congratulations. It is easy to not see the cracks in the system when your life is going smoothly, but I invite you to consider the cracks that I have found herein, to test your assumptions daily and to leave your counter examples in the comments below.

For my part, I am optimistic about the future. I believe the proliferation of a human rights based ideology, participatory democracy and access to modern technologies all act to distribute power to the people, so we have the capacity more so than ever to collectively design and create a better future for us all.

Let’s build the machine we need to thrive both individually and collectively, and not just be beautiful cogs in a broken machine.

Further reading:

Chapter 1.2: Many hands make light work, for a while

This is part of a book I am working on, hopefully due for completion by mid 2018. The original purpose of the book is to explore where we are at, where we are going, and how we can get there, in the broadest possible sense. Your comments, feedback and constructive criticism are welcome! The final text of the book will be freely available under a Creative Commons By Attribution license. A book version will be sent to nominated world leaders, to hopefully encourage the necessary questioning of the status quo and smarter decisions into the future. Additional elements like references, graphs, images and other materials will be available in the final digital and book versions and draft content will be published weekly. Please subscribe to the blog posts by the RSS category and/or join the mailing list for updates.

Back to the book overview or table of contents for the full picture. Please note the pivot from focusing just on individuals to focusing on the systems we live in and the paradoxes therein.

“Differentiation of labour and interdependence of society is reliant on consistent and predictable authorities to thrive” — Durkheim

“Many hands make light work” is an old adage, both familiar and comforting. One feels that if things get out of hand we can just throw more resources at the problem and it will suffice. However, we have made it harder on ourselves in three distinct ways:

  • by not always recognising the importance of interdependence and the need to ensure the stability and prosperity of our community as a necessary precondition to the success of the individuals therein;
  • by increasingly making it harder for people to gain knowledge, skills and adaptability to ensure those “many hands” are able to respond to the work required and not trapped into social servitude; and
  • by often failing to recognise whether we need a linear or exponential response in whatever we are doing, feeling secure in the busy-ness of many hands.

Specialisation is when a person delves deep on a particular topic or skill. Over many millennia we have got to the point where we have developed extreme specialisation, supported through interdependence and stability, which gave us the ability to rapidly and increasingly evolve what we do and how we live. This resulted in increasingly complex social systems and structures bringing us to a point today where the pace of change has arguably outpaced our imagination. We see many people around the world clinging to traditions and romantic notions of the past whilst we hurtle at an accelerating pace into the future. Many hands have certainly made light work, but new challenges have emerged as a result and it is more critical than ever that we reimagine our world and develop our resilience and adaptability to change, because change is the only constant moving forward.

One human can survive on their own for a while. A tribe can divide up the labour quite effectively and survive over generations, creating time for culture and play. But when we established cities and states around 6000 years ago, we began an unprecedented level of division of labour and specialisation beyond mere survival. When the majority of your time, energy and resources go into simply surviving, you are largely subject to forces outside your control and unable to justify spending time on other things. But when survival is taken care of (broadly speaking) it creates time for specialisation and perfecting your craft, as well as for leisure, sport, art, philosophy and other key areas of development in society.

The era of cities itself was born on the back of an agricultural technology revolution that made food production far more efficient, creating surplus (which drove a need for record keeping and greater proliferation of written language) and prosperity, with a dramatic growth in specialisation of jobs. With greater specialisation came greater interdependence, as it became in everyone’s best interest to play their part predictably. A simple example is a farmer needing her farming equipment to be reliable to make food, while the mechanic needs food production to be reliable for sustenance. Both rely on each other not just as customers, but to be successful and sustainable over time. Greater specialisation led to greater surplus as specialists continued to fine-tune their crafts for ever greater outcomes. Over time, an increasing number of people were not simply living day to day, but were able to plan ahead and learn how to deal with major disruptions to their existence. Hunters and gatherers are completely subject to the conditions they live in, with an impact on mortality, leisure activities largely fashioned around survival, small community sizes and the need to move around. With surplus came spare time and the ability to take greater control over one’s existence and build a type of systemic resilience to change.

So interdependence gave us greater stability, as a natural result of enlightened self interest writ large, where one’s own success is clearly aligned with the success of the community in which one lives. However, where interdependence in smaller communities breeds a kind of mutual understanding and appreciation, we have arguably lost this reciprocity and connectedness in larger cities today, ironically where interdependence is strongest. When you can’t intuitively understand the role that others play in your wellbeing, then you don’t naturally appreciate them, and disconnected self interest creates a cost to the community. When community cohesion starts to decline, eventually individuals also decline, except the small percentage who can either move communities or who benefit, intentionally or not, on the back of others’ misfortune.

When you have no visibility of food production beyond the supermarket then it becomes easier to just buy the cheapest milk, eggs or bread, even if the cheapest product is unsustainable or undermining more sustainably produced goods. When you have such a specialised job that you can’t connect what you do to any greater meaning, purpose or value, then it also becomes hard to feel valuable to society, or valued by others. We see this increasingly in highly specialised organisations like large companies, public sector agencies and cities, where the individual feels the dual pressure of being anything and nothing all at once.

Modern society has made it somewhat less intuitive to value others who contribute to your survival, because survival is taken for granted by many, and competing in one’s own specialisation has been extended to competing in everything, without appreciation of the interdependence required for one to prosper. Competition is seen to be the opposite of cooperation, whereas a healthy, sustainable society is both cooperative and competitive. One can cooperate on common goals and compete on divergent goals, thus making best use of time and resources where interests align. Cooperative models seem to continually emerge in spite of economic models that assume simplistic punishment and incentive based behaviours. We see various forms of “commons”, where people pool their resources in anything from community gardens and ’share economies’ to software development and science, because cooperation is part of who we are and what makes us such a successful species.

Increasing specialisation also created greater surplus and wealth, generating increasingly divergent and insular social classes with different levels of power, with people becoming less connected to each other and wealth overwhelmingly going to the few. This pressure between the benefits and issues of highly structured societies, and which groups benefit, has ebbed and flowed throughout our history but, generally speaking, when the benefits to the majority outweigh the issues for that majority, you have stability. With stability a lot can be overlooked, including at times gross abuses of a minority or the disempowered. However, if the balance tips too far the other way, you get revolutions, secessions, political movements and myriad counter movements. Unfortunately, many counter movements limit themselves to replacing people rather than the structures that created the issues. However, several of these counter movements established some critical ideas that underpin modern society.

Before we explore the rise of individualism through independence and suffrage movements (chapter 1.3), it is worth briefly touching upon the fact that specialisation and interdependence, which are critical for modern societies, both rely upon the ability of people to share and to learn, and to ensure that increasingly diverse skills are able to evolve as the society evolves. Many hands only make light work when they know what they are doing. Historically, the leaps in technology, techniques and specialisation have been shared for others to build upon and continue to improve, as we see in writings, trade, oral traditions and rituals throughout history. Gatekeepers naturally emerged to control access to, or interpretations of, knowledge: priests, academics, the ruling class or the business class. Where gatekeepers grew too oppressive, communities would subdivide to rebalance the power differential, such as various Protestant groups, union movements and the more recent Open Source movements. In any case, access wasn’t just about the power of gatekeepers. The costs of publishing and distribution grew as societies grew, creating a call from the business class for “intellectual property” controls as financial mechanisms to offset these costs. The argument ran that because of the huge costs of production, business people needed to be incentivised to publish and distribute knowledge, though arguably we have always done so as a matter of survival and growth.

With the Internet suddenly came the possibility of massively distributed and free access to knowledge, where the cost of publishing, distribution and even the capability development required to understand and apply such knowledge was suddenly negligible. We created a universal, free and instant way to share knowledge, creating the opportunity for a compounding effect on our historic capacity for cumulative learning. This is worth taking a moment to consider. The technology simultaneously created an opportunity for compounding our cumulative learning whilst rendering the reasons for IP protections negligible (lowered costs of production and distribution), and yet we have seen a dramatic increase in knowledge protectionism. Isn’t it to our collective benefit to have a well educated community that can continue our trajectory of diversification and specialisation for the benefit of everyone? Anyone can get access to myriad forms of consumer entertainment, but our most valuable knowledge assets are fiercely protected against general and free access, dampening our ability to learn and evolve. The increasing gap between the haves and have nots is surely symptomatic of the broader increasing gap between the empowered and disempowered, the makers and the consumers, those with knowledge and those without. Consumers are shaped by the tools and goods they have access to, and limited by their wealth and status. But makers can create the tools and goods they need, and can redefine wealth and status with a more active and able hand in shaping their own lives.

As a result of our specialisation, our interdependence and our cooperative/competitive systems, we have created greater complexity in society over time, usually accompanied by the ability to respond to greater complexity. The problem is that a lot of our solutions to change have been linear responses to an exponential problem space. The assumption that more hands will continue to make light work often ignores the need for sharing skills and knowledge, and certainly ignores where a genuinely transformative response is required. A small fire might be managed with buckets, but at some point of growth, adding more buckets becomes insufficient and new methods are required. Necessity breeds innovation, and yet when did you last see real innovation that didn’t boil down to simply more or larger buckets? Iteration is rarely a form of transformation, so it is important to always clearly understand the type of problem you are dealing with and whether the planned response needs to be linear or exponential. If the former, more buckets is probably fine. If the latter, every bucket is just a distraction from developing the necessary response.

Next chapter I’ll examine how the independence movements created the philosophical pre-condition for democracy, the Internet and the dramatic paradigm shifts to follow.

January 02, 2018

Premier Open Source Database Conference Call for Papers closing January 12 2018

The call for papers for Percona Live Santa Clara 2018 was extended until January 12, 2018. This means you still have time to get a submission in.

Topics of interest: MySQL, MongoDB, PostgreSQL & other open source databases. Don’t forget all the upcoming databases too (there’s a long list at db-engines).

I think, to be fair, that in the catch-all “other” we should also be thinking a lot about things like containerisation (Docker), Kubernetes, Mesosphere, the cloud (Amazon AWS RDS, Microsoft Azure, Google Cloud SQL, etc.), analytics (ClickHouse, MariaDB ColumnStore), and a lot more. Basically, anything that would benefit an audience of database geeks who are looking at it from all angles.

That’s not to say case studies shouldn’t be considered. People always love to hear about stories from the trenches. This is your chance to talk about just that.

Resolving a Partitioned RabbitMQ Cluster with JuJu

On occasion, a RabbitMQ cluster may partition itself. In an OpenStack environment this can often first present itself as nova-compute services stopping with errors such as these:

ERROR nova.openstack.common.periodic_task [-] Error during ComputeManager._sync_power_states: Timed out waiting for a reply to message ID 8fc8ea15c5d445f983fba98664b53d0c
TRACE nova.openstack.common.periodic_task self._raise_timeout_exception(msg_id)
TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/", line 218, in _raise_timeout_exception
TRACE nova.openstack.common.periodic_task 'Timed out waiting for a reply to message ID %s' % msg_id)
TRACE nova.openstack.common.periodic_task MessagingTimeout: Timed out waiting for a reply to message ID 8fc8ea15c5d445f983fba98664b53d0c

Merely restarting the stopped nova-compute services will not resolve this issue.

You may also find that querying the RabbitMQ service either does not return or takes an awfully long time to return:

$ sudo rabbitmqctl -p openstack list_queues name messages consumers status

...and in an environment managed by juju, you could also see JuJu trying to correct RabbitMQ but failing:

$ juju stat --format tabular | grep rabbit
rabbitmq-server                       false local:trusty/rabbitmq-server-128
rabbitmq-server/0           idle 0/lxc/12 5672/tcp
rabbitmq-server/1   error   idle 1/lxc/8  5672/tcp   hook failed: "config-changed"
rabbitmq-server/2   error   idle 2/lxc/10 5672/tcp   hook failed: "config-changed"

You should now run rabbitmqctl cluster_status on each of your rabbit instances and review the output. If the cluster is partitioned, you will see something like the below:

ubuntu@my_juju_lxc:~$ sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@192-168-7-148' ...

You can clearly see from the above that there are two partitions in RabbitMQ. We now need to identify which of these is considered the leader:

maas-my_cloud:~$ juju run --service rabbitmq-server "is-leader"
- MachineId: 0/lxc/12
  Stderr: |
  Stdout: |
  UnitId: rabbitmq-server/0
- MachineId: 1/lxc/8
  Stderr: |
  Stdout: |
  UnitId: rabbitmq-server/1
- MachineId: 2/lxc/10
  Stderr: |
  Stdout: |
  UnitId: rabbitmq-server/2
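Rather than eyeballing the YAML, you can pull the leader unit out of that output with a small filter. This is a hedged sketch: the helper name find_leader is mine, and it assumes that is-leader prints True or False on the line following each Stdout: | marker, as in the layout above.

```shell
# Hypothetical helper: read the YAML from
# `juju run --service rabbitmq-server "is-leader"` on stdin and
# print the UnitId whose is-leader output was "True".
find_leader() {
  awk '
    /Stdout:/ { getline line; if (line ~ /True/) leader_next = 1 }
    /UnitId:/ && leader_next { print $2; leader_next = 0 }
  '
}

# Assumed usage:
#   juju run --service rabbitmq-server "is-leader" | find_leader
```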

As you see above, in this example machine 0/lxc/12 is the leader, via its status of "True". Now we need to hit the other two servers and shut down RabbitMQ:

# service rabbitmq-server stop

Once both services have completed shutting down, we can resolve the partitioning by running:

$ juju resolved -r rabbitmq-server/<whichever is leader>

Substitute <whichever is leader> with the leader unit identified earlier.

Once that has completed, you can start the previously stopped services with the below on each host:

# service rabbitmq-server start

and verify the result with:

$ sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@192-168-7-148' ...

No partitions \o/

The JuJu errors for RabbitMQ should clear within a few minutes:

$ juju stat --format tabular | grep rabbit
rabbitmq-server                       false local:trusty/rabbitmq-server-128
rabbitmq-server/0             idle 0/lxc/12 5672/tcp 19
rabbitmq-server/1   unknown   idle 1/lxc/8  5672/tcp 19
rabbitmq-server/2   unknown   idle 2/lxc/10 5672/tcp

You should also find the nova-compute instances starting up fine.
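As an aside, the leader-identification step lends itself to scripting. A minimal sketch in Python (the field names match the juju run YAML output above; the sample data here is illustrative, with each unit's Stdout carrying True or False):

```python
# Pick the leader unit out of `juju run --service rabbitmq-server "is-leader"`
# output. Field names (MachineId, Stdout, UnitId) are as in the YAML above;
# the values below are illustrative sample data.
results = [
    {"MachineId": "0/lxc/12", "Stdout": "True\n",  "UnitId": "rabbitmq-server/0"},
    {"MachineId": "1/lxc/8",  "Stdout": "False\n", "UnitId": "rabbitmq-server/1"},
    {"MachineId": "2/lxc/10", "Stdout": "False\n", "UnitId": "rabbitmq-server/2"},
]

# The leader is whichever unit printed "True" on stdout.
leader = next(r["UnitId"] for r in results if r["Stdout"].strip() == "True")
# The remaining units are the ones whose machines need rabbitmq-server stopped first.
followers = [r["UnitId"] for r in results if r["UnitId"] != leader]

print(leader)
print(followers)
```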

January 01, 2018

Donations 2017

Like in 2016 and 2015 I am blogging about my charity donations.

The majority of donations were made during December (I start around my birthday), although after my credit card got suspended last year I spread them across several days.

The inspiring others bit seems to have worked a little. Ed Costello has blogged his donations for 2017.

I’ll note that throughout the year I’ve also been giving money via Patreon to several people whose online content I like. I suspended these payments in early December after Patreon announced an unpopular fee change, but they have since backed down on it, so I’ll probably restart the payments in early 2018.

As usual my main donation was to Givewell. This year I gave to them directly and allowed them to allocate to projects as they wish.

  • $US 600 to Givewell (directly for their allocation)

In March I gave to two organizations I follow online. Transport Blog re-branded itself as “Greater Auckland” and is positioning itself as a lobbying organization as well as a news site.

Signum University produces various educational material around science-fiction, fantasy and medieval literature. In my case I’m following their lectures on YouTube about the Lord of the Rings.

I gave some money to the Software Freedom Conservancy to allocate across their projects, and again to the Electronic Frontier Foundation for their online advocacy.

Lastly I gave to various open source projects that I regularly use.


December 27, 2017

First Look at Snaps

I've belatedly come to have a close-up look at Ubuntu Core (Snappy), snaps and the snap package manager.

The first pass was to rebuild my rack of Raspberry Pis from Debian armhf to Ubuntu Core for the Raspberry Pi.


This proved to be the most graceful install I've ever had on any hardware, ever. No hyperbole: boot, authenticate, done. I repeated this for all six Pis in such a short time frame that I was concerned I'd done something wrong. Your SSH keys are already installed, so you can log in immediately and just get on with it.

Which is where snaps come into play.

Back on my laptop, I followed the tutorial Create Your First Snap, which uses GNU Hello as an example snap build and finishes with a push to the snap store.
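For reference, the snapcraft.yaml from that tutorial looks roughly like this (the version, source URL and file layout are from memory, so treat them as illustrative):

```yaml
name: hello
version: '2.10'
summary: GNU Hello, the "hello world" snap
description: A simple snap built from the GNU Hello source tarball.
grade: stable
confinement: strict

parts:
  gnu-hello:
    source: http://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz
    plugin: autotools

apps:
  hello:
    command: bin/hello
```

Running snapcraft in the directory containing this file builds the snap, which can then be installed locally or pushed to the store.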

I then created a Launchpad repo, associated a snap package with it, told it to build for armhf and amd64, and before long I could install this snap on both my laptop and the Pis.

Overall this was a pretty impressive and graceful process.

December 20, 2017

Designing Shared Cars

Almost 10 years ago I blogged about car sharing companies in Melbourne [1]. Since that time the use of such services appears to have slowly grown (judging by the slow growth in the reserved parking spots for such cars). This isn’t the sudden growth that public transport advocates and the operators of those companies hoped for, but it is still positive. I have just watched the documentary The Human Scale [2] (which I highly recommend) about the way that cities are designed for cars rather than for people.

I think that it is necessary to make cities more suited to the needs of people and that car share and car hire companies are an important part of converting from a car based city to a human based city. As this sort of change happens the share cars will be an increasing portion of the new car sales and car companies will have to design cars to better suit shared use.

Personalising Cars

Luxury car brands like Mercedes support storing the preferred seat position for each driver; once the basic step of maintaining separate driver profiles is done, it’s an easy second step to have them accessed over the Internet and also store settings like preferred radio stations, Bluetooth connection profiles, etc. For a car share company it wouldn’t be particularly difficult to extrapolate settings based on previous use, e.g. knowing that I’m tall and using the default settings for a tall person every time I get in a shared car that I haven’t driven before. Having Bluetooth connections follow the user would mean having one slave address per customer instead of the current practice of one per car; the addressing is 48-bit so this shouldn’t be a problem.

Most people accumulate many items in their car, some they don’t need, but many are needed. Some of the things in my car are change for parking meters, sunscreen, tools, and tissues. Car share companies have deals with councils for reserved parking spaces so it wouldn’t be difficult for them to have a deal for paying for parking and billing the driver thus removing the need for change (and the risk of a car window being smashed by some desperate person who wants to steal a few dollars). Sunscreen is a common enough item in Australia that a car share company might just provide it as a perk of using a shared car.

Most people have items like tools, a water bottle, and spare clothes that can’t be shared which tend to end up distributed in various storage locations. The solution to this might be to have a fixed size storage area, maybe based on some common storage item like a milk crate. Then everyone who is a frequent user of shared cars could buy a container designed to fit that space which is divided in a similar manner to a Bento box to contain whatever they need to carry.

There is a lot of research into having computers observing the operation of a car and warning the driver or even automatically applying the brakes to avoid a crash. For shared cars this is more important as drivers won’t necessarily have a feel for the car and can’t be expected to drive as well.

Car Sizes

Generally cars are designed to have 2 people (sports car, Smart car, van/ute/light-truck), 4/5 people (most cars), or 6-8 people (people movers). These configurations are based on what most people are able to use all the time. Most car travel involves only one adult. Most journeys appear to have no passengers or only children being driven around by a single adult.

Cars are designed for what people can drive all the time rather than what would best suit their needs most of the time. Almost no-one is going to buy a personal car that can only take one person even though most people who drive will be on their own for most journeys. Most people will occasionally need to take passengers and that occasional need will outweigh the additional costs in buying and fueling a car with the extra passenger space.

I expect that when car share companies get a larger market they will have several vehicles in the same location to allow users to choose which to drive. If such a choice is available then I think that many people would sometimes choose a vehicle with no space for passengers but extra space for cargo and/or being smaller and easier to park.

For the common case of one adult driving small children the front passenger seat can’t be used due to the risk of airbags killing small kids. A car with storage space instead of a front passenger seat would be more useful in that situation.

Some of these possible design choices can also be after-market modifications. I know someone who removed the rear row of seats from a people-mover to store the equipment for his work. That gave a vehicle with plenty of space for his equipment while also having a row of seats for his kids. If he was using shared vehicles he might have chosen to use either a vehicle well suited to cargo (a small van or ute) or a regular car for transporting his kids. It could be that there’s an untapped demand for ~4 people in a car along with cargo so a car share company could remove the back row of seats from people movers to cater to that.

December 18, 2017

Percona Live Santa Clara 2018 CFP

Percona Live Santa Clara 2018 call for papers ends fairly soon — December 22 2017. It may be extended, but I suggest getting a topic in ASAP so the conference committee can view everything fairly and quickly. Remember this conference is bigger than just MySQL, so please submit topics on MongoDB, other databases like PostgreSQL, time series, etc., and of course MySQL.

What are you waiting for? Submit TODAY!
(It goes without saying that speakers get a free pass to attend the event.)

December 15, 2017

Celebration Time!

Here at OpenSTEM we have a saying “we have a resource on that” and we have yet to be caught out on that one! It is a festive time of year and if you’re looking for resources reflecting that theme, then here are some suggestions: Celebrations in Australia – a resource covering the occasions we […]

December 14, 2017

Huawei Mate9

Warranty Etc

I recently got a Huawei Mate 9 phone. My previous phone was a Nexus 6P that died shortly before its one-year warranty ran out. As there have apparently been many Nexus 6P phones dying, there are no stocks of replacements, so Kogan (the company I bought the phone from) offered me a choice of 4 phones in the same price range as a replacement.

Previously I had chosen to avoid the extended warranty offerings based on the idea that after more than a year the phone won’t be worth much and therefore getting it replaced under warranty isn’t as much of a benefit. But now that it seems that getting a phone replaced with a newer and more powerful model is a likely outcome it seems that there are benefits in a longer warranty. I chose not to pay for an “extended warranty” on my Nexus 6P because getting a new Nexus 6P now isn’t such a desirable outcome, but when getting a new Mate 9 is a possibility it seems more of a benefit to get the “extended warranty”. OTOH Kogan wasn’t offering more than 2 years of “warranty” recently when buying a phone for a relative, so maybe they lost a lot of money on replacements for the Nexus 6P.


I chose the Mate 9 primarily because it has a large screen. Its 5.9″ display is only slightly larger than the 5.7″ displays in the Nexus 6P and the Samsung Galaxy Note 3 (my previous phone). But it is large enough to force me to change my phone use habits.

I previously wrote about matching phone size to the user’s hand size [1]. When writing that I had the theory that a Note 2 might be too large for me to use one-handed. But when I owned those phones I found that the Note 2 and Note 3 were both quite usable in one-handed mode. But the Mate 9 is just too big for that. To deal with this I now use the top corners of my phone screen for icons that I don’t tend to use one-handed, such as Facebook. I chose this phone knowing that this would be an issue because I’ve been spending more time reading web pages on my phone and I need to see more text on screen.

Adjusting my phone usage to the unusually large screen hasn’t been a problem for me. But I expect that many people will find this phone too large. I don’t think there are many people who buy jeans to fit a large phone in the pocket [2].

A widely touted feature of the Mate 9 is the Leica lens which apparently gives it really good quality photos. I haven’t noticed problems with my photos on my previous two phones and it seems likely that phone cameras have in most situations exceeded my requirements for photos (I’m not a very demanding user). One thing that I miss is the slow-motion video that the Nexus 6P supports. I guess I’ll have to make sure my wife is around when I need to make slow motion video.

My wife’s Nexus 6P is well out of warranty. Her phone was the original Nexus 6P I had. When her previous phone died I had a problem with my phone that needed a factory reset. As it’s easier to duplicate the configuration to a new phone than to restore it after a factory reset (as an aside, I believe Apple does this better), I copied my configuration to the new phone and then wiped my old one for my wife to use.

One noteworthy but mostly insignificant feature of the Mate 9 is that it comes with a phone case. The case is hard plastic and cracked when I unsuccessfully tried to remove it, so it seems to effectively be a single-use item. But it is good to have one in the box so that you don’t have to use the phone without a case on the first day; this is something almost every other phone manufacturer misses. There is always the option of ordering a case at the same time as a phone, and the bundled case isn’t very good anyway.

I regard my Mate 9 as fairly unattractive. Maybe if I had a choice of color I would have been happier, but it still wouldn’t have looked like EVE from Wall-E (unlike the Nexus 6P).

The Mate 9 has a resolution of 1920*1080, while the Nexus 6P (and many other modern phones) has a resolution of 2560*1440. I don’t think that’s a big deal; the pixels are small enough that I can’t see them. I don’t really need my phone to have the same resolution as the 27″ monitor on my desktop.

The Mate 9 has 4G of RAM and apps seem significantly less likely to be killed than on the Nexus 6P with 3G. I can now switch between memory hungry apps like Pokemon Go and Facebook without having one of them killed by the OS.


The OS support from Huawei isn’t nearly as good as a Nexus device. Mine is running Android 7.0 and has a security patch level of the 5th of June 2017. My wife’s Nexus 6P today got an update from Android 8.0 to 8.1 which I believe has the fixes for KRACK and Blueborne among others.

Kogan is currently selling the Pixel XL with 128G of storage for $829, if I was buying a phone now that’s probably what I would buy. It’s a pity that none of the companies that have manufactured Nexus devices seem to have learned how to support devices sold under their own name as well.


Generally this is a decent phone. As a replacement for a failed Nexus 6P it’s pretty good. But at this time I tend to recommend not buying it as the first generation of Pixel phones are now cheap enough to compete. If the Pixel XL is out of your price range then instead of saving $130 for a less secure phone it would be better to save $400 and choose one of the many cheaper phones on offer.

Remember when Linux users used to mock Windows for poor security? Now it seems that most Android devices are facing the security problems that Windows used to face and the iPhone and Pixel are going to take the role of the secure phone.

December 13, 2017

Thinkpad X301

Another Broken Thinkpad

A few months ago I wrote a post about “Observing Reliability” [1] regarding my Thinkpad T420. I noted that the T420 had been running for almost 4 years which was a good run, and therefore the failed DVD drive didn’t convince me that Thinkpads have quality problems.

Since that time the plastic on the lid by the left hinge broke, and every time I open or close the lid it breaks a bit more. That prevents use of that Thinkpad by anyone who wants to use it as a serious laptop, as it can’t be expected to last long if opened and closed several times a day. It probably wouldn’t be difficult to fix the lid, but for an old laptop it doesn’t seem worth the effort and/or money. So my plan now is to give the Thinkpad to someone who wants a compact desktop system with a built-in UPS; a friend in Vietnam can probably find a worthy recipient.

My Thinkpad History

I bought the Thinkpad T420 in October 2013 [2], it lasted about 4 years and 2 months. It cost $306.

I bought my Thinkpad T61 in February 2010 [3], it lasted about 3 years and 8 months. It cost $796 [4].

Prior to the T61 I had a T41p that I received well before 2006 (maybe 2003) [5]. So the T41p lasted close to 7 years, as it was originally bought for me by a multinational corporation I’m sure it cost a lot of money. By the time I bought the T61 it had display problems, cooling problems, and compatibility issues with recent Linux distributions.

Before the T41p I had 3 Thinkpads in 5 years, all of which had the type of price that only made sense in the dot-com boom.

In terms of absolute lifetime the Thinkpad T420 did ok. In terms of cost per year it did very well, only $6 per month. The T61 was $18 per month, and while the T41p lasted a long time it probably cost over $2000 giving it a cost of over $20 per month. $20 per month is still good value, I definitely get a lot more than $20 per month benefit from having a laptop. While it’s nice that my most recent laptop could be said to have saved me $12 per month over the previous one, it doesn’t make much difference to my financial situation.
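The per-month figures above are simple division; a quick check (prices and lifetimes as given above, the T41p price being my ~$2000 guess):

```python
# (price, months of service) for each laptop, taken from the figures above
laptops = {
    "T420": (306, 4 * 12 + 2),   # Oct 2013 to Dec 2017
    "T61":  (796, 3 * 12 + 8),
    "T41p": (2000, 7 * 12),      # price is only a guess
}

for name, (price, months) in laptops.items():
    print(f"{name}: ${price / months:.2f}/month")
```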

Thinkpad X301

My latest Thinkpad is an X301 that I found on an e-waste pile; it had a broken DVD drive, which is presumably the reason why someone decided to throw it out. It has the same power connector as my previous 2 Thinkpads, which was convenient as I didn’t find a PSU with it. I saw a review of the X301 dated 2008, which probably means it was new in 2009, but it has no obvious signs of wear so probably hasn’t been used much.

My X301 has a 1440*900 screen, which isn’t as good as the T420 resolution of 1600*900. But a lower resolution is an expected trade-off for a smaller laptop. The X301 comes with a 64G SSD, which is a significant limitation.

I previously wrote about a “cloud lifestyle” [6]. I hadn’t implemented all the ideas from that post due to distractions and a lack of time. But now that I’ll have a primary PC with only 64G of storage I have more incentive to do that. The 100G disk in the T61 was a minor limitation at the time I got it, but since then everything got bigger and 64G is going to be a big problem. The fact that it’s an unusual 1.8″ form factor means that I can’t cheaply upgrade it or use the SSD that I’ve used in the Thinkpad T420.

My current desktop PC is an i7-2600 system which builds the SE Linux policy packages for Debian (the thing I compile most frequently) in about 2 minutes with about 5 minutes of CPU time used. The same compilation on the X301 takes just over 6.5 minutes with almost 9 minutes of CPU time used. The i5 CPU in the Thinkpad T420 was somewhere between those times. While I can wait 6.5 minutes for a compile to test something, it is an annoyance. So I’ll probably use one of the i7 or i5 class servers I run to do builds.

On the T420 I had chroot environments running with systemd-nspawn for the last few releases of Debian in both AMD64 and i386 variants. Now I have to use a server somewhere for that.

I stored many TV shows, TED talks, and movies on the T420. Probably part of the problem with the hinge was due to adjusting the screen while watching TV in bed. Now I have a phone with 64G of storage and a tablet with 32G so I will use those for playing videos.

I’ve started to increase my use of Git recently. There are many programs I maintain that really should have had version control years ago. Now the desire to develop them on multiple systems gives me an incentive to do this.

Comparing to a Phone

My latest phone is a Huawei Mate 9 (I’ll blog about that shortly) which has a 1920*1080 screen and 64G of storage. So it has a higher resolution screen than my latest Thinkpad as well as equal storage. My phone has 4G of RAM while the Thinkpad only has 2G (I plan to add RAM soon).

I don’t know of a good way of comparing CPU power of phones and laptops (please comment if you have suggestions about this). The issues of GPU integration etc will make this complex. But I’m sure that the octa-core CPU in my phone doesn’t look too bad when compared to the dual-core CPU in my Thinkpad.


The X301 isn’t a laptop I would choose to buy today. Since using it I’ve appreciated how small and light it is, so I would definitely consider a recent X series. But being free, the value for money is NaN, which makes it more attractive. Maybe I won’t try to get 4+ years of use out of it; in 2 years’ time I might buy something newer and better in a similar form factor.

I can just occasionally poll an auction site and bid if there’s anything particularly tempting. If I was going to buy a new laptop now before the old one becomes totally unusable I would be rushed and wouldn’t get the best deal (particularly given that it’s almost Christmas).

Who knows, I might even find something newer and better on an e-waste pile. It’s amazing the type of stuff that gets thrown out nowadays.

December 11, 2017

Using all of the 5 GHz WiFi frequencies in a Gargoyle Router

WiFi in the 2.4 GHz range is usually fairly congested in urban environments. The 5 GHz band used to be better, but an increasing number of routers now support it and so it has become fairly busy as well. It turns out that there are a number of channels on that band that nobody appears to be using despite being legal in my region.

Why are the middle channels unused?

I'm not entirely sure why these channels are completely empty in my area, but I would speculate that access point manufacturers don't want to deal with the extra complexity of the middle channels. Indeed these channels are not entirely unlicensed. They are also used by weather radars, for example. If you look at the regulatory rules that ship with your OS:

$ iw reg get
country CA: DFS-FCC
    (2402 - 2472 @ 40), (N/A, 30), (N/A)
    (5170 - 5250 @ 80), (N/A, 17), (N/A), AUTO-BW
    (5250 - 5330 @ 80), (N/A, 24), (0 ms), DFS, AUTO-BW
    (5490 - 5600 @ 80), (N/A, 24), (0 ms), DFS
    (5650 - 5730 @ 80), (N/A, 24), (0 ms), DFS
    (5735 - 5835 @ 80), (N/A, 30), (N/A)

you will see that these channels are flagged with "DFS". That stands for Dynamic Frequency Selection and it means that WiFi equipment needs to be able to detect when the frequency is used by radars (by detecting their pulses) and automatically switch to a different channel for a few minutes.

So an access point needs extra hardware and extra code to avoid interfering with priority users. Additionally, different channels have different bandwidth limits so that's something else to consider if you want to use 40/80 MHz at once.
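For reference, 5 GHz channel numbers map linearly onto centre frequencies (channel = (centre MHz − 5000) / 5). A quick sketch that derives the 20 MHz DFS channels from the ranges in the iw reg get output above:

```python
# 20 MHz channels whose full width fits inside each DFS range from the
# `iw reg get` output above; channel number = (centre MHz - 5000) / 5.
dfs_ranges = [(5250, 5330), (5490, 5600), (5650, 5730)]

def channels_in(lo, hi, width=20):
    """List the channel numbers of width-MHz channels fitting in [lo, hi]."""
    chans = []
    centre = lo + width // 2
    while centre + width // 2 <= hi:
        chans.append((centre - 5000) // 5)
        centre += width
    return chans

dfs_channels = [c for lo, hi in dfs_ranges for c in channels_in(lo, hi)]
print(dfs_channels)  # starts 52, 56, 60, 64, then the 100+ block
```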

The first time I tried setting my access point channel to one of the middle 5 GHz channels, the SSID wouldn't show up in scans and the channel was still empty in WiFi Analyzer.

I tried changing the channel again, but this time, I ssh'd into my router and looked at the errors messages using this command:

logread -f

I found a number of errors claiming that these channels were not authorized for the "world" regulatory authority.

Because Gargoyle is based on OpenWRT, there are a lot more wireless configuration options available than what's exposed in the Web UI.

In this case, the solution was to explicitly set my country in the wireless options by putting:

option country 'CA'

(where CA is the country code where the router is physically located) in the 5 GHz radio section of /etc/config/wireless on the router.

Then I rebooted and I was able to set the channel successfully via the Web UI.
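For context, the resulting stanza of /etc/config/wireless looks something like this (the device name and channel here are only examples for illustration):

```
config wifi-device 'radio0'
    option type 'mac80211'
    option hwmode '11a'
    option channel '100'
    option country 'CA'
```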

If you are interested, there is a lot more information about how all of this works in the kernel documentation for the wireless stack.

December 08, 2017

Happy Holidays, Queensland!

It’s finally holidays in Queensland! Yay! Congratulations to everyone for a wonderful year and lots of hard work! Hope you all enjoy a well-earned rest! Most other states and territories have only a week to go, but the holiday spirit is in the air. Should you be looking for help with resources, rest assured that […]

December 05, 2017

A Tale of Two Conferences: ISC and TERATEC 2017

This year the International Supercomputing Conference and TERATEC were held in close proximity, the former in Frankfurt from June 17-21 and the latter in Paris from June 27-28. Whilst the two conferences differ greatly in scope (one international, one national) and language (one Anglophone, the other Francophone), the dominance of Linux as the operating system of choice at both was overwhelming.


December 03, 2017

How Inlets Generate Thrust on Supersonic aircraft

Some time ago I read Skunk Works, a very good “engineering” read.

In the section on the SR-71, the author Ben Rich made a statement that has puzzled me ever since, something like: “Most of the engine’s thrust is developed by the intake”. I didn’t get it – surely an intake is a source of drag rather than thrust? I have since read the same statement about the Concorde and its inlets.

Lately I’ve been watching a lot of AgentJayZ Gas Turbine videos. This guy services gas turbines for a living and is kind enough to present a lot of intricate detail and answer questions from people. I find his presentation style and personality really engaging, and get a buzz out of his enthusiasm, love for his work, and willingness to share all sorts of geeky, intricate details.

So inspired by AgentJayZ I did some furious Googling and finally worked out why supersonic planes develop thrust from their inlets. I don’t feel it’s well explained elsewhere so here is my attempt:

  1. Gas turbine jet engines only work if the air is moving into the compressor at subsonic speeds. So the job of the inlet is to slow the air down from say Mach 2 to Mach 0.5.
  2. When you slow down a stream of air, the pressure increases. Like when you feel the wind pushing on your face on a bike. Imagine (don’t try) the pressure on your arm hanging out of a car window at 100 km/hr. Now imagine the pressure at 3000 km/hr. Lots. Around a 40 times increase for the inlets used in supersonic aircraft.
  3. So now we have this big box (the inlet chamber) full of high pressure air. Like a balloon this pressure is pushing equally on all sides of the box. Net thrust is zero.
  4. If we untie the balloon neck, the air can escape, and the balloon shoots off in the opposite direction.
  5. Back to the inlet on the supersonic aircraft. It has a big vacuum cleaner at the back – the compressor inlet of the gas turbine. It is sucking air out of the inlet as fast as it can. So – the air can get out, just like the balloon, and the inlet and the aircraft attached to it is thrust in the opposite direction. That’s how an inlet generates thrust.
  6. While there is also thrust from the gas turbine and its afterburner, it turns out that pressure release in the inlet contributes the majority of the thrust. I don’t know why it’s the majority. Guess I need to do some more reading and get my gas equations on.
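The roughly 40 times figure in point 2 can be checked with the isentropic stagnation-pressure relation p0/p = (1 + (γ−1)/2·M²)^(γ/(γ−1)); this assumes loss-free compression, so real inlets recover a little less:

```python
# Stagnation pressure ratio for isentropic compression from Mach M to rest.
gamma = 1.4  # ratio of specific heats for air

def pressure_ratio(mach):
    return (1 + (gamma - 1) / 2 * mach**2) ** (gamma / (gamma - 1))

# At the SR-71's roughly Mach 3 cruise the ideal ratio is already close to 40:
print(f"{pressure_ratio(3.0):.1f}")  # ~36.7
print(f"{pressure_ratio(3.2):.1f}")
```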

Another important point – the aircraft really does experience that extra thrust from the inlet – e.g. it’s transmitted to the aircraft by the engine mounts on the inlet, and the mounts must be designed with those loads in mind. This helps me understand the definition of “thrust from the inlet”.

December 01, 2017

My Canadian adventure exploring FWD50

I recently went to Ottawa for the FWD50 conference run by Rebecca and Alistair Croll. It was my first time in Canada, and it combined a number of my favourite things. I was at an incredible conference with a visionary and enthusiastic crowd made up of government (international, Federal, Provincial and Municipal), technologists, civil society, industry and academia, and the calibre of discussions and planning for greatness was inspiring.

There were a number of people I have known for years but never met in meatspace, and equally there were a lot of new faces doing amazing things. I got to spend time with the excellent people at the Treasury Board of Canada Secretariat, including the Canadian Digital Service and the Office of the CIO, and by wonderful coincidence I got to see (briefly) the folk from the Open Government Partnership who happened to be in town. Finally I got to visit the gorgeous Canadian Parliament, see their extraordinary library, and wander past some Parliamentary activity, which always helps me feel more connected to (and therefore empowered to contribute to) democracy in action.

Thank you to Alistair Croll who invited me to keynote this excellent event and who, with Rebecca Croll, managed to create a truly excellent event with a diverse range of ideas and voices exploring where we could or should go as a society in future. I hope it is a catalyst for great things to come in Canada and beyond.

For those in Canada who are interested in the work in New Zealand, I strongly encourage you to tune into the D5 event in February, which will have some of our best initiatives on display, and to follow our new Minister for Broadband, Digital and Open Government (such an incredible combination in a single portfolio), Minister Clare Curran. You can tune in to our “Service Innovation” work at our blog or by subscribing to our mailing list. I also encourage you to read the inspiring “People’s Agenda” by a civil society organisation in NZ which codesigned a vision for the future type of society desired in New Zealand.


  • One of the great delights of this trip was seeing a number of people in person for the first time who I know from the early “Gov 2.0″ days (10 years ago!). It was particularly great to see Thom Kearney from Canada’s TBS and his team, Alex Howard (@digiphile) who is now a thought leader at the Sunlight Foundation, and Olivia Neal (@livneal) from the UK CTO office/GDS, Joe Powell from OGP, as well as a few friends from Linux and Open Source (Matt and Danielle amongst others).
  • The speech by the Hon Scott Brison, Canadian Minister for the Treasury Board Secretariat (which is responsible for digital government), was quite interesting, and I had the chance to briefly chat to him and his advisor at the speakers’ drinks afterwards about the challenges of changing government.
  • Meeting with Canadian public servants from a variety of departments including the transport department, innovation and science, as well as the Treasury Board Secretariat and of course the newly formed Canadian Digital Service.
  • Meeting people from a range of sub-national governments including the excellent folk from Peel, Hillary Hartley from Ontario, and hearing about the quite inspiring work to transform organisational structures, digital and other services, adoption of micro service based infrastructure, the use of “labs” for experimentation.
  • It was fun meeting some CIO/CTOs from Canada, Estonia, UK and other jurisdictions, and sharing ideas about where to from here. I was particularly impressed with Alex Benay (Canadian CIO) who is doing great things, and with Siim Sikkut (Estonian CIO) who was taking the digitisation of Estonia into a new stage of being a broader enabler for Estonians and for the world. I shared with them some of my personal lessons learned around digital iteration vs transformation, including from the DTO in Australia (which has changed substantially, including a name change since I was there). Some notes of my lessons learned are at
  • My final highlight was how well my keynote and other talks were taken. People were really inspired to think big picture and I hope it was useful in driving some of those conversations about where we want to collectively go and how we can better collaborate across geopolitical lines.

Below are some photos from the trip, and some observations from specific events/meetings.

My FWD50 Keynote – the Tipping Point

I was invited to give a keynote at FWD50 about the tipping point we have gone through and how we, as a species, need to embrace the major paradigm shifts that have already happened, and decide what sort of future we want and work towards that. I also suggested some predictions about the future and examined the potential roles of governments (and public sectors specifically) in the 21st century. The slides are at and the full speech is on my personal blog at

I also gave a similar keynote speech at the NetHui conference in New Zealand the week after, which was recorded for those who want to see or hear the content.

The Canadian Digital Service

The Canadian Digital Service was only set up about a year ago and has a focus on building great services for users, with service design and user needs at the heart of their work. They have some excellent people with diverse skills, and we spoke about what is needed to do “digital government” and what that even means, and the parallels and interdependencies between open government and digital government. They spoke about an early piece of work they did before getting set up: a national consultation about the needs of Canadians, which had some interesting insights. They were very focused on open source, standards, building better ways to collaborate across government(s), and building useful things. They also spoke about their initial work around capability assessment and development across the public sector. I spoke about my experience in Australia and New Zealand, but also in working with and talking to teams around the world. I gave an informal outline of the work of our Service Innovation and Service Integration team in DIA, which was helpful to get some feedback and peer review, and they were very supportive and positive. It was an excellent discussion, thank you all!

CivicTech meetup

I was invited to talk to the CivicTech meetup group in Ottawa about the roles of government and citizens into the future. I gave a quick version of the keynote I gave at 2017, which explores paradigm shifts and the roles of civic hackers and activists in helping forge the future, whilst also considering what we should (and shouldn’t) take into the future with us. It included my amusing change.log of the history of humans and threw down the gauntlet for civic hackers to lead the way, be the light :)

CDS Halloween Mixer

The Canadian Digital Service does a “mixer” social event every 6 weeks, and this one landed on Halloween, which was also my first ever Halloween celebration! I had a traditional “beavertail”, a flat cinnamon doughnut with lemon – amazing! It was fun to hang out, but of course I had to retire early from jet lag.

Workshop with Alistair

The first day of FWD50 I helped Alistair Croll with a day-long workshop exploring the future. We thought we’d have a small interactive group and ended up getting 300, so it was a great mind meld across different ideas, sectors, technologies, challenges and opportunities. I gave a talk on culture change in government, largely influenced by a talk I gave a few years ago called “Collaborative innovation in the public service: Game of Thrones style”. People responded well and it created a lot of discussions about the cultural challenges and barriers in government.


Finally, just a quick shout out and thanks to Alistair for inviting me to such an amazing conference, to Rebecca for getting me organised, to Danielle and Matthew for your companionship and support, to everyone for making me feel so welcome, and to the following folk who inspired, amazed and colluded with me. In chronological order of meeting: Sean Boots, Stéphane Tourangeau, Ryan Androsoff, Mike Williamson, Lena Trudeau, Alex Benay (Canadian Gov CIO), Thom Kearney and all the TBS folk, Siim Sikkut from Estonia, James Steward from the UK, and all the other folk I met at FWD50, in between feeling so extremely unwell!

Thank you Canada, I had a magnificent time and am feeling inspired!

This Week in HASS – term 4, week 9

Well, we’re almost at the end of the year!! It’s a time when students and teachers alike start to look forward to the long, summer break. Generally a time for celebrations and looking back over the highlights of the year – which is reflected in the activities for the final lessons of the Understanding Our […]

November 29, 2017

Proxy ACME challenges to a single machine

The Libravatar mirrors are set up using DNS round-robin, which makes it a little challenging to automatically provision Let's Encrypt certificates.

In order to use Certbot's webroot plugin, I need to be able to serve a randomly-named file from the webroot of each mirror simultaneously: there's no way to know which of the DNS entries the verifier will hit. I could copy the file over to all of the mirrors, but that would be annoying since some of the mirrors are run by volunteers and I don't have direct access to them.

Thankfully, Scott Helme has shared his elegant solution: proxy the .well-known/acme-challenge/ directory from all of the mirrors to a single validation host. Here's the exact configuration I ended up with.

DNS Configuration

In order to serve the certbot validation files separately from the main service, I created a new hostname pointing to the main Libravatar server:

CNAME acme

Mirror Configuration

On each mirror, I created a new Apache vhost on port 80 to proxy the acme challenge files by putting the following in the existing port 443 vhost config (/etc/apache2/sites-available/libravatar-seccdn.conf):

<VirtualHost *:80>
    ServerAdmin __WEBMASTEREMAIL__

    ProxyPass /.well-known/acme-challenge/
    ProxyPassReverse /.well-known/acme-challenge/
</VirtualHost>

Then I enabled the right modules and restarted Apache:

a2enmod proxy
a2enmod proxy_http
systemctl restart apache2.service

Finally, I added a cronjob in /etc/cron.daily/commit-new-seccdn-cert to commit the new cert to etckeeper automatically:

cd /etc/libravatar
/usr/bin/git commit --quiet -m "New seccdn cert" seccdn.crt seccdn.pem seccdn-chain.pem > /dev/null || true

Main Configuration

On the main server, I created a new webroot:

mkdir -p /var/www/acme/.well-known

and a new vhost in /etc/apache2/sites-available/acme.conf:

<VirtualHost *:80>
    DocumentRoot /var/www/acme
    <Directory /var/www/acme>
        Options -Indexes
    </Directory>
</VirtualHost>

before enabling it and restarting Apache:

a2ensite acme
systemctl restart apache2.service

Registering a new TLS certificate

With all of this in place, I was able to register the cert easily using the webroot plugin on the main server:

certbot certonly --webroot -w /var/www/acme -d

The resulting certificate will then be automatically renewed before it expires.

November 27, 2017

Steve Ports an OFDM modem from Octave to C

Earlier this year I asked for some help. Steve Sampson K5OKC stepped up, and has done some fine work in porting the OFDM modem from Octave to C. I was so happy with his work I asked him to write a guest post on my blog on his experience and here it is!

On a personal level, working with Steve was a great experience for me. I always enjoy and appreciate other people working on FreeDV with me, however it is quite rare to have people help out with programming. As you will see, Steve enjoyed the experience and learned a great deal along the way.

The Problem with Porting

But first some background on the process involved. In signal processing it is common to develop algorithms in a convenient domain-specific scripting language such as GNU Octave. These languages can do a lot with one line of code and have powerful visualisation tools.

Usually, the algorithm then needs to be ported to a language suitable for real time implementation. For most of my career that has been C. For high speed operation on FPGAs it might be VHDL. It is also common to port algorithms from floating point to fixed point so they can run on low cost hardware.

We don’t develop algorithms directly in the target real-time language as signal processing is hard. Bugs are difficult to find and correct, and may be 10 or 100 times harder (in terms of person-hours) to find in C or VHDL than in, say, GNU Octave.

So a common task in my industry is porting an algorithm from one language to another. Generally the process involves taking a working simulation and injecting a bunch of hard to find bugs into the real time implementation. It’s an excellent way for engineering companies to go bankrupt and upset customers. I have seen and indeed participated in this process (screwing up real time implementations) many times.

The other problem is algorithm development is hard, and not many people can do it. They are hard to find, cost a lot of money to employ, and can be very nerdy (like me). So if you can find a way to get people with C, but not high level DSP skills, to work on these ports – then it’s a huge win from a resourcing perspective. The person doing the C port learns a lot, and managers are happy as there is some predictability in the engineering process and schedule.

The process I have developed allows people with C coding (but not DSP) skills to port complex signal processing algorithms from one language to another – in this case from GNU Octave to floating point C. The figure below shows how it all fits together.

Here is a sample output plot, in this case a buffer of received samples in the demodulator. This signal is plotted in green, and the difference between C and Octave in red. The red line is all zeros, as it should be.

This particular test generates 12 plots. Running it is easy:

$ cd codec2-dev/octave
$ ../build_linux/unittest/tofdm
$ octave
>> tofdm
W........................: OK
tx_bits..................: OK
tx.......................: OK
rx.......................: OK
rxbuf in.................: OK
rxbuf....................: OK
rx_sym...................: FAIL (0.002037)
phase_est_pilot..........: FAIL (0.001318)
rx_amp...................: OK
timing_est...............: OK
sample_point.............: OK
foff_est_hz..............: OK
rx_bits..................: OK

This shows a fail case – two vectors just failed, so some further inspection is required.

Key points are:

  1. We make sure the C and Octave versions are identical. Near enough is not good enough. For floating point I set a tolerance like 1 part in 1000. For fixed point ports it can be bit exact – zero difference.
  2. We dump a lot of internal states, not just the inputs and outputs. This helps point us at exactly where the problem is.
  3. There is an automatic checklist to give us pass/fail reports of each stage.
  4. This process is not particularly original. It’s not rocket science, but getting people (especially managers) to support and follow such a process is. This part – the human factor – is really hard to get right.
  5. The same process can be used between any two versions of an algorithm. Fixed and float point, fixed point C and VHDL, or a reference implementation and another one that has memory or CPU optimisations. The same basic idea: take a reference version and use software to compare it.
  6. It makes porting fun and strangely satisfying. You get constant forward progress and no hard-to-find bugs. Things work when they hit real time. After months of tough, brain-hurting algorithm development, I find myself looking forward to the productivity of the porting phase.

In this case Steve was the man doing the C port. Here is his story…

Initial Code Construction

I’m a big fan of the Integrated Development Environment (IDE). I’ve used various IDEs over the years, but mostly use NetBeans. This is my current favorite, as it works well with C and Java.

When I take on a new programming project I just create a new IDE project and paste in whatever I want to translate, and start filling-in the Java or C code. In the OFDM modem case, it was the Octave source code ofdm_lib.m.

Obviously this code won’t do anything or compile, but it allows me to write C functions for each of the Octave code blocks. Sooner or later, all the Octave code is gone, and only C code remains.

I have very little experience with Octave, but I did use some Matlab in college – it was a new system just being introduced when I was near graduation. I spent a little time trying to make the program as dynamic as the Octave code, but it became mired in memory allocation.

Once David approved the decision for me to go with fixed configuration values (Symbol rate, Sample rate, etc), I was able to quickly create the header files. We could adjust these header files as we went along.

One thing about Octave is that you don’t have to specify array sizes. So for the C port, one of my tasks was to figure out the array sizes for all the data structures. In some cases I just typed the array name in Octave, it printed out its value, and presto – I now knew the size. Inspector Clouseau wins again!

The include files were pretty much patterned the same as FDMDV and COHPSK modems.

Code Starting Point

When it comes to modems, the easiest thing to create first is the modulator, and that proved true in this case as well. I did have some trouble early on because of a bug I created in my testing code: my spectrum looked different from David’s. Once this bug was ironed out, the spectrums looked similar. David recommended I create a test program, like he had done for other modems.

The output may look similar, but who knows really? I’m certainly not going to go line by line through comma-separated values, and anyway Octave floating point values aren’t the same as C values past some number of decimal points.

This testing program was a little over my head, and since David has written many of these before, he decided to just crank it out and save me the learning curve.

We made a few data structure changes to the C program, but generally it was straightforward. Basically we had the outputs of the C and Octave modulators, and the difference was shown by their different colors. Luckily, we finally got no differences.

OFDM Design

As I was writing the modulator, I also had to try and understand this particular OFDM design. I deduced that it was basically eighteen (18) carriers that were grouped into eight (8) rows. The first row was the complex “pilot” symbols (BPSK), and the remaining 7 rows were the 112 complex “data” symbols (QPSK).

But there was a little magic going on, in that the pilots were 18 columns, but the data was only using 16. So in the 7 rows of data, the first and last columns were set to a fixed complex “zero.”

This produces the 16 x 7 or 112 complex data symbols. Each QPSK symbol is two-bits, so each OFDM frame represents 224 bits of data. It wasn’t until I began working on the receiver code that all of this started to make sense.

With this information, I was able to drive the modulator with the correct number of bits, and collect the output and convert it to PCM for testing with Audacity.

DFT Versus FFT

This OFDM modem uses a DFT and IDFT. This greatly simplifies things. All I have to do is a multiply and summation. With only 18 carriers, this is easily fast enough for the task. We just zip through the 18 carriers, and return the frequency or time domain. Obviously this code can be optimized for firmware later on.

The final part of the modulator, is the need for a guard period called the Cyclic Prefix (CP). So by making a copy of the last 16 of the 144 complex time-domain samples, and putting them at the head, we produce 160 complex samples for each row, giving us 160 x 8 rows, or 1280 complex samples every OFDM frame. We send this to the transmitter.
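The cyclic prefix step can be sketched like this. The constants follow the numbers above (144 samples per row, 16-sample prefix); the actual codec2 names will differ:

```c
/* Cyclic prefix insertion: copy the last NCP samples of a symbol
 * row to the front, growing it from NS to NS + NCP samples.
 * Constant names are illustrative. */
#include <complex.h>
#include <string.h>

#define NS  144  /* complex time-domain samples per row */
#define NCP  16  /* cyclic prefix length                */

/* out must hold NS + NCP samples (160 per row, x 8 rows = 1280
 * complex samples per OFDM frame). */
static void add_cp(float complex out[NS + NCP],
                   const float complex in[NS]) {
    memcpy(out, &in[NS - NCP], NCP * sizeof(float complex));
    memcpy(&out[NCP], in, NS * sizeof(float complex));
}
```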

There will probably need to be some filtering, and a function for adjusting gain, in the API.

OFDM Demodulator

That left the demodulator, which looked much more complex. It took me quite a long time just to get the Octave into some semblance of C. One problem was that Octave arrays start at 1 and C arrays start at 0. In my initial translation, I just ignored this; I told myself we would find the right numbers when we started pushing data through it.

I won’t kid anyone, I had no idea what was going on, but it didn’t matter. Slowly, after the basic code was doing something, I began to figure out the function of various parts. Again though, we had no idea if the C code was producing the same data as the Octave code. We needed some testing functions, and these were added to tofdm.m and tofdm.c. David wrote this part of the code, and I massaged the C modem code until one day the data were the same. It was pretty exciting to see it passing tests.

One thing I found was that you can hit underflow with single precision. Whenever I was really stumped, I would change the single precision to a double, and then see where the problem was. I was trying to stay completely within single-precision floating point, because this modem is going to be embedded firmware someday.

Testing Process

There was no way that I could have reached a successful conclusion without the testing code. As a matter of fact, a lot of programming errors were found. You would be surprised at how much damage a misplaced parenthesis can do to a math equation! I’ve had enough math to know how to do the basic operations involved in DSP. I’m sure that as this code is ported to firmware, it can be simplified, optimized, and unrolled a bit for added speed. At this point, we just want valid waveforms.

C99 and Complex Math

Working with David was pretty easy, even though we are almost 16 time-zones apart. We don’t need an answer right now, and we aren’t working on a deadline. Sometimes I would send an email, and then four hours later I would find the problem myself, and the morning was still hours away in his time zone. So he sometimes got some strange emails from me that didn’t require an answer.

David was hands-off on this project, and doesn’t seem to be a control freak, so he just let me go at it, and then teamed up when we had to merge things to get comparable output. Sometimes a simple answer was all I needed to blow through an Octave brain teaser.

I’ve been working in C99 for the past year. For those who haven’t kept up: 1999 was a long time ago, but we still tend to program C in the same old way. When working with complex numbers, though, the C library has been greatly expanded. For example, to multiply two complex numbers, you type “A * B”. That’s it. No need to simulate a complex number using a structure. If you need a complex exponential, you type “cexp(I * W)”, where “I” is the sqrt(-1). All of this is hidden away inside the compiler.

For me, this became useful when translating Octave to C. Most of the complex functions have the same name. The only things I had to write were a matrix multiply and a summation function for the DFT. The rest was straightforward. Still a lot of work, but it was enjoyable work.

Where we might have problems interfacing to legacy code, there are functions in the library to extract the real and imaginary parts. We can easily interface to the old structure method. You can see examples of this in the testing code.

Looking back, I don’t think I would do anything differently. Translating code is tedious no matter how you go about it, but translating Octave to C is 10 times easier than translating Fortran to C, or C to Java.

The best course is where you can start seeing some output early on. This keeps you motivated. I was a happy camper when I could look and listen to the modem using Audacity. Once you see progress, you can’t give up, and want to press on.


Reading Further

The Bit Exact Fairy Tale is a story of fixed point porting. Writing this helped me vent a lot of steam at the time – I’d just left a company that was really good at messing up these sorts of projects.

Modems for HF Digital Voice Part 1 and Part 2.

The cohpsk_frame_design spreadsheet includes some design calculations on the OFDM modem and a map of where the data and pilot symbols go in time and frequency.

Reducing FDMDV Modem Memory is an example of using automated testing to port an earlier HF modem to the SM1000. In this case the goal was to reduce memory consumption without breaking anything.

Fixed Point Scaling – Low Pass Filter example – is consistently one of the most popular posts on this blog. It’s a worked example of a fixed point port of a low pass filter.

November 24, 2017

This Week in HASS – term 4, week 8

Well, the end of term is in sight! End of year reporting is in full swing and the Understanding Our World® activities are designed to keep students engaged whilst minimising requirements for teachers, especially over these critical weeks. The current activities for all year levels are tailored to require minimal teaching, allowing teacher aides and […]

November 21, 2017

LUV December 2017 end of year celebration: Meetup Mixup Melbourne

Dec 21 2017 18:00
Dec 21 2017 23:59
Dec 21 2017 18:00
Dec 21 2017 23:59
Loop Project Space and Bar, 23 Meyers Pl, Melbourne VIC 3000

There will be no December workshop, but there will be an end of year party in conjunction with other Melbourne groups including Buzzconf, Electronic Frontiers Australia, Hack for Privacy, the Melbourne PHP Users Group, Open Knowledge Australia, PyLadies Melbourne and R-Ladies Melbourne.

Please note that there's an $8.80 cover fee, which includes a drink and nibbles, and bookings are essential as spaces are limited. Tickets are available online.

Linux Users of Victoria is a subcommittee of Linux Australia.

December 21, 2017 - 18:00

November 20, 2017

Communication skills for everyone

Donna presenting this talk at DrupalSouth - Photo by Tim Miller

Communication is a skill most of us practice every day.

Often without realising we're doing it.

Rarely intentionally.

I take my communication skills for granted. I'm not a brilliant communicator, not the best by any means, but probably, yes, I'm a bit above average. It wasn't until a colleague remarked on my presentation skills in particular that I remembered I'd actually been taught a thing or two about being on a stage. First as a dancer, then as a performer, and finally as a theatre director.

It's called Stagecraft. There's a lot to it, but when mastering stagecraft, you learn to know yourself. To use your very self as a tool to amplify your message. Where and how you stand, awareness of the space, of the light, of the size of the room, and of how to project your voice so all will hear you. All these facets need polish if you want your message to shine.

But you also need to learn to know your audience. Why are they there? What have they come to hear? What do they need to learn? How will they be transformed? Tuning your message to serve your audience is the real secret to giving a great presentation.

But presenting is just one of many communication skills. It's probably the one that people tell me most instils fear. Then there's writing of course. I envy writers! I would love to write more. I think of these as the "broadcast" skills. The "loud" skills. But the most important communication skill, in my view, is Listening.

As I've developed new skills as a business analyst, I've come to understand that Listening is the communication skill I need to improve most.

I was delighted to read this article by Tammy Lenski on the very morning I was to give this comms skills talk at DrupalSouth. Tammy refers to 5 Types of Listening identified in a talk given by Stephen Covey some years back. She says

"He described a listening continuum that runs from ignoring all the way over on the left, to pretend listening (patronizing), then selective listening, then attentive listening, and finally to empathic listening on the right."

Listening continuum


I think this is really useful. If we are to get better at listening, we need to study it. But more importantly, we need to practice it. "Practice makes perfect". Kathy Sierra talks a lot about the power of intentional practice in her book Badass: Making Users Awesome.

Communication is a huge, huge topic to try and cover in a conference talk, so I tried to distil it down into three elements.

The what.

The how,

and The why.

The what is the message itself. The how is the channel, the method, the style, or the medium, as Marshall McLuhan said. And finally, there's the why: the intent, the purpose, or the reason for communicating. I believe we need to understand the "why" of what we're saying, or hearing, if it is to be of any benefit.

Here's my slides:


November 17, 2017

Hackweek0x10: Fun in the Sun

We recently had a 5.94kW solar PV system installed – twenty-two 270W panels (14 on the northish side of the house, 8 on the eastish side), with an ABB PVI-6000TL-OUTD inverter. Naturally I want to be able to monitor the system, but this model inverter doesn’t have an inbuilt web server (which, given the state of IoT devices, I’m actually kind of happy about); rather, it has an RS-485 serial interface. ABB sell addon data logger cards for several hundred dollars, but Rick from Affordable Solar Tasmania mentioned he had another client who was doing monitoring with a little Linux box and an RS-485 to USB adapter. As I had a Raspberry Pi 3 handy, I decided to do the same.

Step one: Obtain an RS-485 to USB adapter. I got one of these from Jaycar. Yeah, I know I could have got one off eBay for a tenth the price, but Jaycar was only a fifteen minute drive away, so I could start immediately (I later discovered various RS-485 shields and adapters exist specifically for the Raspberry Pi – in retrospect one of these may have been more elegant, but by then I already had the USB adapter working).

Step two: Make sure the adapter works. It can do RS-485 and RS-422, so it’s got five screw terminals: T/R-, T/R+, RXD-, RXD+ and GND. The RXD lines can be ignored (they’re for RS-422). The other three connect to matching terminals on the inverter, although what the adapter labels GND, the inverter labels RTN. I plugged the adapter into my laptop, compiled Curt Blank’s aurora program, then asked the inverter to tell me something about itself:

aurora -a 2 -Y 4 -e /dev/ttyUSB0

Interestingly, the comms seem slightly glitchy. Just running aurora -a 2 -e /dev/ttyUSB0 always results in either “No response after 1 attempts” or “CRC receive error (1 attempts made)”. Adding “-Y 4” makes it retry four times, which is generally rather more successful. Ten retries is even more reliable, although still not perfect. Clearly there’s some tweaking/debugging to do here somewhere, but at least I’d confirmed that this was going to work.

So, on to the Raspberry Pi. I grabbed the openSUSE Leap 42.3 JeOS image and dd’d that onto a 16GB SD card. Booted the Pi, waited a couple of minutes with a blank screen while it did its firstboot filesystem expansion thing, logged in, fiddled with network and hostname configuration, rebooted, and then got stuck at GRUB saying “error: attempt to read or write outside of partition”:

error: attempt to read or write outside of partition.

Apparently that’s happened to at least one other person previously with a Tumbleweed JeOS image. I fixed it by manually editing the partition table.

Next I needed an RPM of the aurora CLI, so I built one on OBS, installed it on the Pi, plugged the Pi into the USB adapter, and politely asked the inverter to tell me a bit more about itself:

aurora -a 2 -Y 4 -d 0 /dev/ttyUSB0

Everything looked good, except that the booster temperature was reported as being 4294967296°C, which seemed a little high. Given that translates to 0x100000000, and that the south wall of my house wasn’t on fire, I rather suspected another comms glitch. Running aurora -a 2 -Y 4 -d 0 /dev/ttyUSB0 a few more times showed that this was an intermittent problem, so it was time to make a case for the Pi that I could mount under the house on the other side of the wall from the inverter.

I picked up a wall mount snap fit black plastic box, some 15mm x 3mm screws, matching nuts, and 9mm spacers. I would mount the Pi inside the box itself, rather than on the back plate, meaning I can just snap the box-and-Pi off the mount if I need to bring it back inside to fiddle with it.

Then I had to measure up and cut holes in the box for the ethernet and USB ports. The walls of the box are 2.5mm thick, plus 9mm for the spacers meant the bottom of the Pi had to be 11.5mm from the bottom of the box. I measured up then used a Dremel tool to make the holes then cleaned them up with a file. The hole for the power connector I did by eye later after the board was in about the right place.


I didn’t measure for the screw holes at all, I simply drilled through the holes in the board while it was balanced in there, hanging from the edge with the ports. I initially put the screws in from the bottom of the box, dropped the spacers on top, slid the Pi in place, then discovered a problem: if the nuts were on top of the board, they’d rub up against a couple of components:


So I had to put the screws through the board, stick them there with Blu Tack, turn the Pi upside down, drop the spacers on top, and slide it upwards into the box, getting the screws as close as possible to the screw holes, flip the box the right way up, remove the Blu Tack and jiggle the screws into place before securing the nuts. More fiddly than I’d have liked, but it worked fine.

One other kink with this design is that it’s probably impossible to remove the SD card from the Pi without removing the Pi from the box, unless your fingers are incredibly thin and dexterous. I could have made another hole to provide access, but decided against it as I’m quite happy with the sleek look, this thing is going to be living under my house indefinitely, and I have no plans to replace the SD card any time soon.


All that remained was to mount it under the house. Here’s the finished install:


After that, I set up a cron job to scrape data from the inverter every five minutes and dump it to a log file. So far I’ve discovered that there’s enough sunlight by about 05:30 to wake the inverter up. This morning we’d generated 1kWh by 08:35, 2kWh by 09:10, 8kWh by midday, and as I’m writing this at 18:25, a total of 27.134kWh so far today.

Next steps:

  1. Figure out WTF is up with the comms glitches
  2. Graph everything and/or feed the raw data to

This Week in HASS – term 4, week 7

This week our younger students are preparing for their play/ role-playing presentation, whilst older students are practising a full preferential count to determine the outcome of their Class Election. Foundation/Prep/Kindy to Year 3 Our youngest students in Foundation/Prep/Kindy (Unit F.4) and integrated classes with Year 1 (Unit F-1.4) are working on the costumes, props and […]

November 15, 2017

Save the Dates: Linux Security Summit Events for 2018

There will be a new European version of the Linux Security Summit for 2018, in addition to the established North American event.

The dates and locations are as follows:

Stay tuned for CFP announcements!


November 13, 2017

Test mail server on Ubuntu and Debian

I wanted to setup a mail service on a staging server that would send all outgoing emails to a local mailbox. This avoids sending emails out to real users when running the staging server using production data.

First, install the postfix mail server:

apt install postfix

and choose the "Local only" mail server configuration type.

Then change the following in /etc/postfix/

default_transport = error

to:

default_transport = local:root

and restart postfix:

systemctl restart postfix.service

Once that's done, you can find all of the emails in /var/mail/root.

To read them, you can install mutt:

apt install mutt

and then view the mailbox like this:

mutt -f /var/mail/root

Rattus Norvegicus ESTs with BLAST and Slurm

The following is a short tutorial on using BLAST with Slurm, using fasta nucleic acid (fna) FASTA-formatted sequence files for Rattus norvegicus. It assumes that BLAST (Basic Local Alignment Search Tool) is already installed.

First, create a database directory, download the datafile, extract, and load the environment variables for BLAST.

mkdir -p ~/applicationtests/BLAST/dbs
cd ~/applicationtests/BLAST/dbs
gunzip rat.1.rna.fna.gz
module load BLAST/2.2.26-Linux_x86_64

Having extracted the file, there will be an fna-formatted sequence file, rat.1.rna.fna. An example header line for a sequence:

>NM_175581.3 Rattus norvegicus cathepsin R (Ctsr), mRNA

read more

November 12, 2017

LUV Main December 2017 Meeting - ISC and TERATEC: A Tale of Two Conferences / nftables

Dec 5 2017 18:30
Dec 5 2017 20:30
Dec 5 2017 18:30
Dec 5 2017 20:30
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000


Lev Lafayette, ISC and TERATEC: A Tale of Two Conferences

This year the International Supercomputing Conference and TERATEC were held in close proximity, the former in Frankfurt from June 17-21 and the latter in Paris from June 27-28. Whilst the two conferences differ greatly in scope (one international, one national) and language (one Anglophone, the other Francophone), the dominance of Linux as the operating system of choice at both was overwhelming.

Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.

December 5, 2017 - 18:30

read more

LUV November 2017 Workshop: Status at a glance with LCDproc

Nov 18 2017 12:30
Nov 18 2017 16:30
Nov 18 2017 12:30
Nov 18 2017 16:30
Infoxchange, 33 Elizabeth St. Richmond

Status at a glance with LCDproc

Andrew Pam will demonstrate how to use small LCD or LED displays to provide convenient status information using LCDproc.  Possibly also how to write custom modules to display additional information.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.



Access and Memory: Open GLAM and Open Source

Over the years of my involvement with library projects, like Coder Dojo, programming workshops and such, I’ve struggled to nail down the intersection between libraries and open source. At this year’s in Sydney (my seventeenth!) I’m helping to put together a miniconf to answer this question: Open GLAM. If you do work at the intersection of galleries, libraries, archives, museums and open source, we’d love to hear from you.

Filed under: lca, oss, Uncategorized

November 10, 2017

This Week in HASS – term 4, week 6

This week our youngest students are starting work on their Class Play, slightly older students are choosing a family group from around the world for a role play activity and our oldest students are holding a Class Election! What an activity-filled week! Foundation/Prep/Kindy to Year 3 Our youngest students in standalone Foundation/Prep/Kindy classes (Unit F.4) […]

November 08, 2017

FWD50 Keynote: The Tipping Point

I was invited to an incredible and inaugural conference in Canada called FWD50 which was looking at the next 50 days, months and years for society. It had a digital government flavour to it but had participants and content from various international, national and sub-national governments, civil society, academia, industry and advocacy groups. The diversity of voices in the room was good and the organisers committed to greater diversity next year. I gave my keynote as an independent expert and my goal was to get people thinking bigger than websites and mobile apps, to dream about the sort of future we want as a society (as a species!) and work towards that. As part of my talk I also explored what big paradigm shifts have already happened (note the past tense) and potential roles for government (particularly the public sector) in a hyper-connected, distributed network of powerful individuals. My slides are available here (simple though they are). It wasn’t recorded but I did an audio recording and transcribed it. I was unwell and had lost my voice, so this is probably better anyway :)

The tipping point and where do we go from here

I’ve been thinking a lot over many years about change and the difference between iteration and transformation, about systems, about what is going on in the big picture, because what I’m seeing around the world is a lot of people iterating away from pain but not actually iterating towards a future. Looking for ways to solve the current problem but not rethinking or reframing in the current context. I want to talk to you about the tipping point.

We invented all of this. This is worth taking a moment to think about. We invented every system, every government, every means of production; we organised ourselves into structures and companies. All the things we know, we invented. By understanding that we invented it, we can embrace the notion that we aren’t stuck with it. A lot of people start from the normative perspective that it is how it is and ask how we can improve it slightly, but we don’t have to be constrained by that assumption, because *we* invented it. We can take a formative approach.

The reason this is important is because the world has fundamentally changed. The world has started from a lot of assumptions. This (slide) is a map of the world as it was known at the time, and it was known for a long time to be flat. And at some point it became known that the world was not flat and people had to change their perspective. If we don’t challenge those assumptions that underpin our systems, we run the significant risk of recreating the past with shiny new things. If we take whatever the shiny thing is today, like blockchain or social media 10 years ago, and take that shiny thing to do what we have always done, then how are we progressing? We are just “lifting and shifting” as they like to say, which as a technologist is almost the worst thing I can hear.

Actually understanding the assumptions that underpin what we do, understanding the goal that we have and what we are trying to achieve, and actually having to make sure that we intentionally choose to move forward with the assumptions that we want to take into the future is important because a lot of the biases and assumptions that underpin the systems that we have today were forged centuries or even millennia ago. A long time before the significant paradigm shifts we have seen.

So I’m going to talk a little bit about how things have changed. It’s not that the tipping point is happening. The tipping point has already happened. We have seen paradigm shifts away from legacy systems of power and control. Individuals are more individually powerful than ever in the history of our species. If you think way back to hunter-gatherer times, everyone was individually pretty powerful then, but it didn’t scale. When we moved to cities we started to highly specialise and become interdependent, and individually less powerful, because we made these systems of control that were necessary to manage the surplus of resources, necessary to manage information. But what’s happened now, through the independence movements creating a culture of every individual being powerful and worthy of rights, and then more recently with the internet becoming a distributor, enabler and catalyst of that, is that we are now seeing power massively distributed.

Think about it. Any individual around the world that can get online, admittedly that’s only two thirds of us but it’s growing every day, has the power to publish, to create, to share, to collaborate, to collude, to monitor. It’s not just the state monitoring the people but the people monitoring the state and people monitoring other people. There is the power to enforce your own perspective. And it doesn’t actually matter whether you think it’s a good or bad thing, it is the reality. It’s the shift. And if we don’t learn to embrace, understand and participate in it, particularly in government, then we actually make ourselves less relevant. Because one of the main things about this distribution of power, which the internet has taught us fundamentally as part of our culture that we have all started to adopt, is that you can route around damage. The internet was set up to be able to route around damage where damage was physical or technical. We started to internalise that socially, and if you, in government, are seen to be damage, then people route around you. This is why we have to learn to work as a node in a network, not just a king in a castle, because kings don’t last anymore.

So which way is forward? The priority now needs to be deciding what sort of future we want, not what sort of past we want to escape. The 21st century sees many communities emerging. They are hyper-connected, transnational, multicultural, heavily interdependent, heavily specialised, rapidly changing and disconnected from their geopolitical roots. Some people see that as a reason to move away from having geopolitically formed states. Personally I believe there will always be a role for a geographic state, because I need a way to scale a quality of life for my family along with my fellow citizens and neighbours. But what does that mean in an international sense? Are my rights as a human being being realised in a transnational sense? There are some really interesting questions about the needs of users beyond the individual services that we deliver, particularly when you look in a transnational way.

So a lot of these assumptions have become like a rusty anchor that kept us in place at high tide, but are drawing us towards a dangerous reef as the water lowers. We need to figure out how to float on the water, without rusty anchors, to adapt to the tides of change.

There are a lot of pressures that are driving these changes of course. We are all feeling those pressures, those of us that are working in government. There’s the pressure of changing expectations, of history, from politics and the power shift. The pressure of the role of government in the 21st century. Pressure is a wonderful thing as it can be a catalyst of change, so we shouldn’t shy away from pressure, but recognising that we’re under pressure is important.

So let’s explore some of those power shifts and then what role could government play moving forward.

Paradigm #1: central to distributed.

This is about that shift in power, the independence movements and the internet. It is something people talk about but don’t necessarily apply to their work. Governments will talk about wanting to take a more distributed approach but follow up with setting up “my” website expecting everyone to join or do something, or “everyone come to my office”, or creating “my” own lab. Distributed, when you start to really internalise what it means, is different. I was lucky, as I forged a lot of my assumptions and habits of working when I was involved in the Open Source community, and the Open Source community has a lot of lessons for the rest of society because it is on the bleeding edge of a lot of these paradigm shifts. So working in a distributed way is to assume that you are not at the centre, to assume that you’re not needed. To assume that if you make yourself useful people will rely on you, but also to assume that you rely on others, and to build what you do in a way that strengthens the whole system. I like to talk about it as “Gov as a Platform”; sometimes that is confusing to people, so let’s talk about it as “Gov as an enabler”. It’s not just government as a central command and controller anymore, because the moment you create a choke point, people route around it. How do we become a government that enables good things, and how can we use other mechanisms to create the controls in society? Rather than try to protect people from themselves, why not enable people to protect themselves? There are so many natural motivations in the community, in industry, in the broader sector that we serve, that we can tap into, but traditionally we haven’t, because traditionally we saw ourselves as the enforcer, as the one-to-many choke point. So working in a distributed way is not just about talking the talk; it’s about integrating it into the way we think.

Some other aspects of this include localised to globalised, keeping in mind that large multinational companies have become quite good at jurisdiction shopping to improve profits, which you can’t say is either a good or bad thing; it’s just a natural thing and how they’re naturally motivated. But citizens are increasingly starting to jurisdiction shop too. So I would suggest a role for government in the 21st century is to create the best possible quality of life for people, because then you’ll attract the best from around the world.

The second part of central to distributed is simple to complex. I have this curve (on the slide) which shows green as complexity and red as government’s response to user needs. The green climbs exponentially whilst the red is pretty linear, with small increases or decreases over time, but not an exponential response by any means. Individual needs are no longer heavily localised. They are subject to local, national and transnational complexities, with every extra complexity compounded, not linear. So the increasing complexities in people’s lives, and the obligations, taxation, services and entitlements, everything is going up. So there is a delta forming between what government can directly do and what people need. So again I contend that the opportunity here, particularly for the public sector, is to actually be an enabler for all those service intermediaries – the for-profit, non-profit, civic tech – to help them help themselves, help them help their customers, by merit of making government a platform upon which they can build. We’ve had a habit and a history of creating public infrastructure; particularly in Australia, in New Zealand, in Canada, we’ve been very good at building public infrastructure. Why have we not focused on digital infrastructure? Why do we see digital infrastructure as something that has to be cost-recovered to be sustainable when we don’t demand cost recovery for every public road? I think the cost benefits and value creation of digital public infrastructure need to be looked at in the same way, and we need to start investing in digital public infrastructure.

Paradigm #2: analog to digital.

Or slow to very fast. I like to joke that we use lawyers as modems. If you think about regulation and policy, we write it, it is translated by a lawyer or drafter into regulation or policy, it is then translated by a lawyer or drafter or anyone into operational systems, business systems, helpdesk systems or other systems in society. Why wouldn’t we make our regulation as code? The intent of our regulation and our legislative regimes available to be directly consumed (by the systems) so that we can actually speed up, automate, improve consistency of application through the system, and have a feedback loop to understand whether policy changes are having the intended policy effect.

There are so many great things we can do when we start thinking about digital as something new, not just digitising an analog process. For too long, innovation was interpreted as the digitisation of a process, basic process improvements. But real digitisation should be a transformation, where you are changing the thing to better achieve the purpose or intent.

Paradigm #3: scarcity to surplus.

I think this is critical. We have a lot of assumptions in our systems that assume scarcity. Why do so many of our systems still assume scarcity when surplus is the opportunity? Between 3D printing and nanotech, we could be deconstructing and reconstructing new materials to print into goods and food, and yet a large inhibitor of 3D printing progress is copyright. So the question I have for you is: do we care more about an 18th century business model, or do we care about solving the problems of our society? We need to make these choices. If we have moved to an era of surplus but we are getting increasing inequality, perhaps the systems of distribution are problematic? Perhaps in assuming scarcity we are protecting scarcity for the few at the cost of the many.

Paradigm #4: normative to formative

“Please comply”. For the last hundred years in particular we have perfected the art of broadcasting images of normal into our houses, particularly with radio and television. We have the concept of setting a standard or rule, and if you don’t follow it we’ll punish you, so a lot of culture in society is about compliance. Compliance is important for stability, but blind compliance can create millstones. A formative paradigm is not about saying how it is, but about exploring where you want to go. In the public service we are particularly good at compliance culture, but I suggest that if we got more people thinking formatively (not just change for change’s sake, but bringing people together on their genuinely shared purpose of serving the public), then we might be able to take a more formative approach to the work we do, for the betterment of society, rather than ticking the box because it is the process we have to follow. Formative takes us away from being consumers and towards being makers. As an example, the most basic form of normative human behaviour is in how we see and conform to being human. You are either normal or you are not, based on some externally projected vision of normal. But the internet has shown us that no one is normal. So embracing that it is through our differences that we are more powerful and able to adapt is an important part of our story and culture moving forward. If we are confident enough to be formative, we can keep trying to create a better world whilst applying a critical eye to compliance, so we don’t comply for compliance’s sake.

Exploring optimistic futures

Now on the back of these paradigm shifts, I’d like to talk briefly about the future. I spoke about the opportunity through surplus, with 3D printing and nanotech, to address poverty and hunger. What about the opportunities of rockets for domestic travel? It takes half an hour to get into space, an hour to traverse the world and half an hour down, which means domestic retail transport by rocket is being developed right now, which means I could go from New Zealand to Canada to work for the day and be home for tea. That shift is going to be enormous in so many ways, and it could drive real changes in how we see work and internationalism. How many people remember Total Recall? The right-hand picture is a self-driving car from a movie in the 90s, and it is becoming normal now. Interesting fact: some of the car designs will tint the windows when they go through intersections, because the passengers are deeply uncomfortable with the speed and closeness of self-driving cars, which can miss each other very narrowly compared to human driving. Obviously there are opportunities around AI, bots and automation, but I think it gets really interesting when we think about the opportunities of the future of work. We are still working on the industrial assumption that the number of hours we have is scarce and that I have to sell the number of hours that I work: 40, 50, 60 hours. Why wouldn’t we work 20 hours a week at a higher rate to meet our basic needs? Why wouldn’t we have 1 or 2 days a week where we could contribute to our civic duties, or art, or education? Perhaps we could jump-start an inclusive renaissance, and I don’t mean cat pictures. People can’t thrive if they’re struggling to survive, and yet we keep putting pressure on people just to survive. Again, we are from countries with quite strong safety nets, but even those safety nets put huge pressure, paperwork and bureaucracy on our most vulnerable just to meet their basic needs.
Often the process of getting access to the services and entitlements is so hard and traumatic that they can’t, so how do we close that gap so all our citizens can move from survival to thriving.

The last picture is a bit cheeky. The science fiction author William Gibson wrote Johnny Mnemonic, which has a character called Jones, a cyborg dolphin used to sniff out underwater mines in warfare. Very dark, but the interesting concept there is in how Jones was received after the war: “he was more than a dolphin, but from another dolphin’s point of view he might have seemed like something less.” What does it mean to be human? If I lose a leg, right now it is assumed I need to replace that leg to be somehow “whole”. What if I want four legs? The human brain is able to adapt to new input. I knew a woman who got a small sphere filled with mercury and a free-floating magnet implanted in her finger; the magnet spins according to frequency, and she found over a short period of time she was able to detect changes in frequency. Why is that cool and interesting? Because the brain can adapt to foreign, non-evolved input. I think that is mind-blowing. We have the opportunity to augment ourselves, not just to conform to normal or be slightly better, faster humans, but to actually change what it means to be human altogether. I think this will be one of the next big social challenges for society, but because we are naturally so attracted to “shiny”, I think that discomfort will pass within a couple of generations. One prediction is that the normal Olympics has become boring and that we will move to a transhuman Olympics where we take the leash off and explore the 100m sprint with rockets, or judo with cyborgs. Where the interest goes, the sponsorship goes, and more professional athletes compete. And what’s going to happen when your child says they want to be a professional transhuman Olympian, and that they will add wings or remove their legs for their professional career, to add them (or not) later? That’s a bit scary for many, but at the same time, it’s very interesting.
And it’s ok to be uncomfortable, it’s ok to look at change, be uncomfortable and ask yourself “why am I uncomfortable?” rather than just pushing back on discomfort. It’s critical more than ever, particularly in the public service that we get away from this dualistic good or bad, in or out, yours or mine and start embracing the grey.

The role of government?

So what’s the role of government in all this, in the future. Again these are just some thoughts, a conversation starter.

I think one of our roles is to ensure that individuals have the ability to thrive. Now I acknowledge I’m very privileged to have come from a social libertarian country that believes this, where people broadly want their taxes to go to the betterment of society; not all countries have that assumption. But if we accept the idea that people can’t thrive if they can’t survive, then the baseline quality of life provided by the state, assuming an individual starts from nothing with no privilege, benefits or family, needs to be good enough for the person to be able to thrive. Otherwise we get a basic structural problem. Part of that is becoming master builders again, and, to go to the Rawls example from Alistair before, we need empathy in what we do in government. The number of times we build systems without empathy, and they go terribly wrong because we didn’t think about what it would be like to be on the other side of that service, policy or idea. User-centred design is just a systematisation of empathy, which is fantastic, but bringing empathy into everything we do is very important.

Leadership is a very important role for government. I think part of our role is to represent the best interests of society. I very strongly feel that we have a natural role to serve the public in the public sector, as distinct from the political sector (though citizens see us as the same thing). The role of a strong, independent public sector is more important than ever in a post facts “fake news” world because it is one of the only actors on the stage that is naturally motivated, naturally systemically motivated, to serve the best interests of the public. That’s why open government is so important and that’s why digital and open government initiatives align directly.

Because open without digital doesn’t scale, and digital without open doesn’t last.

Stability, predictability and balance. It is certainly a role of government to create confidence in our communities; confidence creates thriving. It is one thing to address Maslow’s pyramid of needs, but if you don’t feel confident, if you don’t feel safe, then you still end up behaving in strange and unpredictable ways. So this is part of what is needed for communities to thrive. This relates to regulation, and there is a theory that regulation is bad because it is hard. I would suggest that regulation is important for the stability and predictability in society, but we have to change the way we deliver it. Regulation as code gets the balance right because you can have the settings and levers in the economy but also the ability for it to be automated, consumable, consistent, monitored and innovative. I imagine a future where I have a personal AI which I can trust because of quantum cryptography and because it is tethered in purpose to my best interests. I don’t have to rely on whether my interests happen to align with the purpose of a department, company or non-profit to get the services I need, because my personal bot can figure out what I need and give me the options for me to make decisions about my life. It could deal with the Government AI to figure out the rules, my taxation, obligations, services and entitlements. Where is the website in all that? I ask this because the web was a 1990s paradigm, and we need more people to realise and plan around the idea that the future of service delivery is in building the backend of what we do – the business rules, transactions, data, content, models – in a modular, consumable way, so we can shift channels or modes of delivery, whether it is a person, a digital service or an AI-to-AI interaction.

Another role of government is in driving the skills we need for the 21st century. Coding is critical not because everyone needs to code (maybe they will) but more than that coding teaches you an assumption, an instinct, that technology is something that can be used by you, not something you are intrinsically bound to. Minecraft is the saviour of a generation because all those kids are growing up believing they can shape the world around them, not have to be shaped by the world around them. This harks back to the normative/formative shift. But we also need to teach critical thinking, teach self awareness, bias awareness, maker skills, community awareness. It has been delightful to move to New Zealand where they have a culture that has an assumed community awareness.

We need of course to have a strong focus on participatory democracy, where government isn’t just doing something to you but we are all building the future we need together. This is how we create a multi-processor world rather than a single processor government. This is how we scale and develop a better society but we need to move beyond “consultation” and into actual co-design with governments working collaboratively across the sectors and with civil society to shape the world.

I’ll finish on this note, government as an enabler, a platform upon which society can build. We need to build a way of working that assumes we are a node in the network, that assumes we have to work collaboratively, that assumes that people are naturally motivated to make good decisions for their life and how can government enable and support people.

So embrace the tipping point; don’t just react. What future do you want? What society do you want to move towards? I guess I’ve got to a point in my life where I see everything as a system, and if I can’t connect the dots between what I’m doing and the purpose, then I try not to do that thing. In the first public service job I had, I got in and automated a large proportion of the work within a couple of weeks, and then asked for more; they gave it to me because I was motivated to make it better.

So I challenge you to be thinking about this every day, to consider your own assumptions and biases, to consider whether you are being normative or formative, to evaluate whether you are being iterative or transformative, to evaluate whether you are moving away from something or towards something. And to always keep in mind where you want to be, how you are contributing to a better society and to actively leave behind those legacy ideas that simply don’t serve us anymore.

November 06, 2017

Web Security 2017

I started web development around late 1994. Some of my earliest paid web work is still online (dated June 1995). Clearly, that was a simpler time for content! I went on to be ‘Webmaster’ (yes, for those joining us in the last decade, that was a job title once) for UWA, and then for Hartley Poynton/ at a time when security became important as commerce boomed online.

At the dawn of the web era, the consideration of backwards compatibility with older web clients (browsers) was deemed to be important; content had to degrade nicely, even without any CSS being applied. As the years stretched out, the legacy became longer and longer. Until now.

In mid-2018, the Payment Card Industry (PCI) Data Security Standard (DSS) 3.2 comes into effect, requiring cardholder environments to use (at minimum) TLS 1.2 for the encrypted transfer of data. Of course, that’s also the maximum version typically available today (TLS 1.3 is at draft 21 at the time of writing). This effort by the PCI is forcing people to adopt new browsers that can do the TLS 1.2 protocol (and the encryption ciphers it permits), typically by running modern/recent Chrome, Firefox, Safari or Edge browsers. And for the majority of people, Chrome is their choice, and the majority of those are auto-updating on every release.

Many are pushing to be compliant with the 2018 PCI DSS 3.2 as early as possible; your logging of negotiated protocols and ciphers will show if your client base is ready as well. I’ve already worked with one government agency to demonstrate they were ready, and have already helped disable TLS 1.0 and 1.1 on their public facing web sites (and previously SSL v3). We’ve removed RC4 ciphers, 3DES ciphers, and enabled ephemeral key ciphers to provide forward secrecy.
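Those changes (disabling SSLv3, TLS 1.0 and 1.1, dropping RC4 and 3DES, and preferring ephemeral-key ciphers) amount to only a few lines of web server configuration. As a rough sketch, in nginx terms it might look like the following; the cipher list is purely illustrative, not a recommendation, and should be checked against current guidance:

```nginx
ssl_protocols TLSv1.2;         # drop SSLv3, TLS 1.0 and TLS 1.1
ssl_prefer_server_ciphers on;
# ECDHE (ephemeral-key) suites only, for forward secrecy; RC4 and 3DES excluded.
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;
```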

Web developers (writing Javascript and using various frameworks) can rejoice — the age of having to support legacy MS IE 6/7/8/9/10 is pretty much over. None of those browsers support TLS 1.2 out of the box (IE 10 can turn this on, but for some reason, it is off by default). This makes Javascript code smaller as it doesn’t have to have conditional code to work with the quirks of those older clients.

But as we find ourselves with modern clients, we can now ask those clients to be complicit in our attempts to secure the content we serve. They understand modern security constructs such as Content Security Policies and other HTTP security-related headers.
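As an example of those constructs, here is an nginx-flavoured sketch of the kinds of HTTP security headers a server can send; the policy values are illustrative only and would need tuning for each site:

```nginx
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "DENY" always;
add_header Content-Security-Policy "default-src 'self'" always;
```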

There are two tools I am currently using to help in this battle to improve web security. One is SSL Labs, the work of Ivan Ristić (and now owned/sponsored by Qualys). This tool gives a good view of the encryption in flight (protocols, ciphers), the chain of trust (certificate), and a new addition: checking DNS for CAA records (which I and others piled onto a feature request for AWS Route53 to support). The second tool is Scott Helme’s, which looks at the HTTP headers that web content uses to ask browsers to enforce security on the client side.

There’s a really important reason why these tools are good; they are maintained. As new recommendations on ciphers, protocols, signature algorithms or other actions become recommended, they’re updated on these tools. And these tools are produced by very small, but agile teams — like one person teams, without the bureaucracy (and lag) associated with large enterprise tools. But these shouldn’t be used blindly. These services make suggestions, and you should research them yourselves. For some, not all the recommendations may meet your personal risk profile. Personally, I’m uncomfortable with Public-Key-Pins, so that can wait for a while — indeed, Chrome has now signalled they will drop this.

So while PCI is hitting merchants with their DSS-compliance stick (and making it plainly obvious what they have to do), we’re getting a side effect: a concrete reason for drawing a line under where our backward compatibility must stretch back to, and the ability to have the web client assist in ensuring the security of content.

November 04, 2017

This Week in HASS – term 4, week 5

Halfway through the last term of the year already! This week our youngest students consider museums as a place to learn about the past. Slightly older students are learning about the states and territories of Australia, as well as their representative birds and animals. Older students are in throes of their class election campaign, preparing […]

November 03, 2017

Work Stuff

Does anyone know of a Linux support company that provides 24/7 support for Ruby and PHP applications? I have a client that is looking for such a company.

Also, I’m looking for more consulting work. If anyone knows of an organisation that needs some SE Linux consulting, or support for any of the FOSS software I’ve written, then let me know. I take payment by PayPal and Bitcoin as well as all the usual ways. I can make a private build of any of my FOSS software to suit your requirements, or if you want features that could be used by other people (and don’t conflict with the general use cases) I can add them on request. Small changes start at $100.

October 31, 2017

Election Activity Bundle

For any Australian Curriculum HASS topic from Prep to at least Year 6, we can safely say “We have a resource on that!” So when, like here in Queensland, an election is suddenly called and teachers want to do some related activities in class, we actually already have the materials for you as these topics […]

October 30, 2017

Logic of Zombies

Most zombie movies feature shuffling hordes which prefer to eat brains but also generally eat any human flesh available. Because in most movies (pretty much everything but the 28 Days Later series [1]) zombies move slowly they rely on flocking to be dangerous.

Generally the main way of killing zombies is severe head injury, so any time zombies succeed in their aim of eating brains they won’t get a new recruit for their horde. The TV series iZombie [2] has zombies that are mostly like normal humans as long as they get enough brains and are smart enough to plan to increase their horde. But most zombies don’t have much intelligence and show no signs of restraint so can’t plan to recruit new zombies. In 28 Days Later the zombies aren’t smart enough to avoid starving to death, in contrast to most zombie movies where the zombies aren’t smart enough to find food other than brains but seem to survive on magic.

For a human to become a member of a shuffling horde of zombies they need to be bitten but not killed. They then need to either decide to refrain from a method of suicide that precludes becoming a zombie (gunshot to the head or jumping off a building) or be unable to go through with it. Most zombie movies (I think everything other than 28 Days Later) have the transition process taking some hours, so there’s plenty of time for an infected person to kill themselves or be killed by others. Then they need to avoid having other humans notice that they are infected and kill them before they turn into a zombie. This doesn’t seem likely to be a common occurrence, so it doesn’t seem likely that shuffling zombies (as opposed to the zombies in 28 Days Later or iZombie) would be able to form a horde.

In the unlikely event that shuffling zombies managed to form a horde that police couldn’t deal with I expect that earth-moving machinery could deal with them quickly. The fact that people don’t improvise armoured vehicles capable of squashing zombies is almost as ridiculous as all the sci-fi movies that feature infantry.

It’s obvious that logic isn’t involved in the choice of shuffling zombies. It’s more a choice between the jump-scare aspect of 28 Days Later, the human-drama aspect of zombies that pass for human in iZombie, or the terror of a slowly approaching horrible fate that you can’t escape in most zombie movies.

I wonder if any of the music streaming services have a horror-movie playlist that has screechy music to set your nerves on edge without the poor plot of a horror movie. Could listening to scary music in the dark become a thing?

October 27, 2017

Happy Teachers’ Day

OpenSTEM would like to extend warm congratulations to all teachers on Teachers’ Day!! We salute you all for the wonderful job you do for all students every day, often without thanks or praise. It is not lightly that people say “If you can read this, thank a teacher”. Teachers truly are the force that shapes […]

This Week in HASS – term 4, week 4

This week our youngest students are looking at Aboriginal Places, while slightly older students are comparing Australia to other places around the world. Our older students are starting their class election segment of work, covering several parts of the Civics and Citizenship, as well as the History, curricula. Foundation/Kindy/Prep to Year 3 Students in Foundation/Kindy/Prep […]

October 26, 2017

Anarchy in the Office

Some of the best examples I’ve seen of anarchy working have been in corporate environments. This doesn’t mean that they were perfect or even as good as a theoretical system in which a competent manager controlled everything, but they often worked reasonably well.

In a well functioning team, members will encourage others to do their share of the work in the absence of management. So when the manager disappears (doesn’t visit the team more than once a week and doesn’t ask for any meaningful feedback on how things are going) things can still work out. When someone who is capable of doing work isn’t working, other people will suggest that they do their share. If resources for work (such as a sufficiently configured PC for IT work) aren’t available then they can be found (abandoned PCs get stripped and the parts used to upgrade the PCs that need it most).

There was one time where a helpdesk worker who was about to be laid off was assigned to the same office as me (apparently making all the people in his group redundant took some time). So I started teaching him sysadmin skills, assigned work to him, and then recommended that my manager get him transferred to my group. That worked well for everyone.

One difficult case is employees who get in the way of work being done, those who are so incompetent that they break enough things to give negative productivity. One time when I was working in Amsterdam I had two colleagues like that, it turned out that the company had no problem with employees viewing porn at work so no-one asked them to stop looking at porn. Having them paid to look at porn 40 hours a week was much better than having them try to do work. With anarchy there’s little option to get rid of bad people, so just having them hang out and do no work was the only option. I’m not advocating porn at work (it makes for a hostile work environment), but managers at that company did worse things.

One company I worked for appeared (from the non-management perspective) to have a management culture of doing no work. During my time there I did two “annual reviews” in two weeks, and the second was delayed by over 6 months. The manager in question only did the reviews at that time because he was told he couldn’t be promoted until he got the backlog of reviews done, so apparently being more than a year behind in annual reviews was no obstacle to being selected for promotion. On one occasion I raised the issue of a colleague who had done no work for over a year (and didn’t even have a PC to do work) with that manager, and his response was “what do you expect me to do”! I expected him to do anything other than blow me off when I reported such a serious problem! But in spite of that strictly work-optional culture enough work was done and the company was a leader in its field.

There has been a lot of research into the supposed benefits of bonuses etc which usually turn out to reduce productivity. Such research is generally ignored presumably because the people who are paid the most are the ones who get to decide whether financial incentives should be offered so they choose the compensation model for the company that benefits themselves. But the fact that teams can be reasonably productive when some people are paid to do nothing and most people have their work allocated by group consensus rather than management plan seems to be a better argument against the typical corporate management.

I think it would be interesting to try to run a company with an explicit anarchic management and see how it compares to the accidental anarchy that so many companies have. The idea would be to have minimal management that just does the basic HR tasks (preventing situations of bullying etc), a flat pay rate for everyone (no bonuses, pay rises, etc) and have workers decide how to spend money for training, facilities, etc. Instead of having middle managers you would have representatives elected from each team to represent their group to senior management.

PS Australia has some of the strictest libel laws in the world. Comments that identify companies or people are likely to be edited or deleted.

October 25, 2017

Teaching High Throughput Computing: An International Comparison of Andragogical Techniques

The importance of High Throughput Computing (HTC), whether through high-performance or cloud-enabled systems, is a critical issue for research institutions as data metrics are increasing at a rate greater than the capacity of user systems [1]. As a result, nascent evidence suggests higher research output from institutions that provide access to HTC facilities. However, the necessary skills to operate HTC systems are lacking from the very research communities that would benefit from them.

read more

Spartan and NEMO: Two HPC-Cloud Hybrid Implementations

High Performance Computing systems offer excellent metrics for speed and efficiency when using bare metal hardware, a high speed interconnect, and parallel applications. This however does not represent a significant portion of scientific computational tasks. In contrast, cloud computing has provided management and implementation flexibility at a cost of performance. We therefore suggest two approaches to make HPC resources available in a dynamically reconfigurable hybrid HPC/Cloud architecture. Both can be achieved with few modifications to existing HPC/Cloud environments.

read more

October 20, 2017

Security Session at the 2017 Kernel Summit

For folks attending Open Source Summit Europe next week in Prague, note that there is a security session planned as part of the co-located Kernel Summit technical track.

This year, the Kernel Summit is divided into two components:

  1. An invitation-only maintainer summit of 30 people total, and;
  2. An open kernel summit technical track which is open to all attendees of OSS Europe.

The security session is part of the latter.  The preliminary agenda for the kernel summit technical track was announced by Ted Ts’o here:

There is also a preliminary agenda for the security session, here:

Currently, the agenda includes an update from Kees Cook on the Kernel Self Protection Project, and an update from Jarkko Sakkinen on TPM support.  I’ll provide a summary of the recent Linux Security Summit, depending on available time, perhaps focusing on security namespacing issues.

This agenda is subject to change and if you have any topics to propose, please send an email to the ksummit-discuss list.


This Week in HASS – term 4, week 3

This week our youngest students are looking at special places locally and around Australia, slightly older students are considering plants and animals around the world, while our older students are studying aspects of diversity in Australia. Foundation/Prep/Kindy to Year 3 Students in standalone Foundation/Prep/Kindy (Unit F.4) and combined classes with Year 1 (F-1.4) are thinking […]

October 17, 2017

Checking Your Passwords Against the Have I Been Pwned List

Two months ago, Troy Hunt, the security professional behind Have I been pwned?, released an incredibly comprehensive password list in the hope that it would allow web developers to steer their users away from passwords that have been compromised in past breaches.

While the list released by HIBP is hashed, the plaintext passwords are out there and one should assume that password crackers have access to them. So if you use a password on that list, you can be fairly confident that it's very easy to guess or crack your password.

I wanted to check my active passwords against that list to check whether or not any of them are compromised and should be changed immediately. This meant that I needed to download the list and do these lookups locally since it's not a good idea to send your current passwords to this third-party service.

I put my tool up on Launchpad / PyPI and you are more than welcome to give it a go. Install Postgres and Psycopg2 and then follow the README instructions to set up your database.
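The core of such a local lookup is small: SHA-1 the candidate password and check for the hash in the imported list. The sketch below is a minimal illustration only; it uses Python's built-in sqlite3 in place of the Postgres/Psycopg2 setup the README describes, and the table name and demo data are my own assumptions.

```python
import hashlib
import sqlite3

def is_pwned(password, conn):
    """True if the SHA-1 of password appears in the local breach table."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    row = conn.execute(
        "SELECT 1 FROM pwned WHERE sha1 = ?", (digest,)
    ).fetchone()
    return row is not None

# Demo with an in-memory table; normally you would import the full
# HIBP dump (hundreds of millions of hashes) instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pwned (sha1 TEXT PRIMARY KEY)")
for leaked in ("password", "123456"):
    conn.execute(
        "INSERT INTO pwned VALUES (?)",
        (hashlib.sha1(leaked.encode("utf-8")).hexdigest().upper(),),
    )

print(is_pwned("password", conn))                      # True
print(is_pwned("correct horse battery staple", conn))  # False
```

Since the list is distributed as SHA-1 hashes, no plaintext ever needs to leave your machine.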

October 13, 2017

This Week in HASS – term 4, week 2

This week our youngest students are looking at transport in the past, slightly older students consider places that are special to people around the world and our oldest students are considering reasons why people might leave their homes to become migrants. Foundation/Prep/Kindy to Year 3 Students in standalone Foundation/Prep/Kindy classes (Unit F.4), as well as […]

October 09, 2017

LUV Main November 2017 Meeting: Ubuntu Artful Aardvark

Nov 8 2017 18:30
Nov 8 2017 20:30
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.

read more

October 07, 2017

New Lithium Battery Pack for my EV

Eight years ago I installed a pack of 36 Lithium cells in my EV. After about 50,000km and several near-death battery pack experiences (over discharge) the range decreased beneath a useful level so I have just purchased a new pack.

Same sort of cells, CALB 100AH, 3.2V per cell (80km range). The pack was about AUD$6,000 delivered and took an afternoon to install. I’ve adjusted my Zivan NG3 to cut out at an average of 3.6 v/cell (129.6V), and still have the BMS system that will drop out the charger if any one cell exceeds 4.1V.

The original pack was rated at 10 years (3000 cycles) and given the abuse we subjected it to I’m quite pleased it lasted 8 years. I don’t have a fail-safe battery management system like a modern factory EV so we occasionally drove the car when dead flat. While I could normally pick this problem quickly from the instrumentation my teenage children tended to just blissfully drive on. Oh well, this is an experimental hobby, and mistakes will be made. The Wright brothers broke a few wings……

I just took the car with its new battery pack for a 25km test drive and all seems well. The battery voltage is about 118V at rest, and 114V when cruising at 60 km/hr. It’s not dropping beneath 110V during acceleration, much better than the old pack which would sag beneath 100V. I guess the internal resistance of the new cells is much lower.
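The charger setting mentioned above is just per-cell arithmetic; a quick sketch using the figures from the post:

```python
# Pack arithmetic for the 36-cell pack described above; the cell count
# and per-cell voltages come from the post, the rest is multiplication.
cells = 36
nominal_v_per_cell = 3.2   # CALB LiFePO4 nominal voltage
cutoff_v_per_cell = 3.6    # average cut-out set on the Zivan NG3 charger

nominal_pack_v = round(cells * nominal_v_per_cell, 1)
cutoff_pack_v = round(cells * cutoff_v_per_cell, 1)

print(nominal_pack_v)  # 115.2, consistent with the ~118 V observed at rest
print(cutoff_pack_v)   # 129.6, matching the charger setting in the post
```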

I plan to keep driving my little home-brew EV until I can buy a commercial EV with a > 200km range here in Australia for about $30k, which I estimate will happen around 2020.

It’s nice to have my little EV back on the road.

October 06, 2017

This Week in HASS – term 4, week 1

The last term of the school year – traditionally far too short and crowded with many events, both at and outside of school. OpenSTEM’s® Understanding Our World® program for HASS + Science ensures that not only are the students kept engaged with interesting material, but that teachers can relax, knowing that all curriculum-relevant material is […]

October 05, 2017

MS Gong ride

I returned to cycling a couple of weeks ago and I am taking part in the MS Sydney to the Gong Ride - The Ride to Fight Multiple Sclerosis.

Though it will be huge fun and a great challenge to ride over 80km along the Sydney coast, this is a fundraising event and the entry fee only covers event staging costs. Every dollar you DONATE will go directly to ensuring the thousands of Australians with multiple sclerosis are able to receive the support and care they need to live well.

Please DONATE now to support my ride and change the lives of Australians living with multiple sclerosis.

Make a Donation!

Thank you for your support.

PS: Please visit fund raising pages of my friends Natasha and Eric who have inspired me to return to cycling and take this ride!

October 04, 2017

DevOps Days Auckland 2017 – Wednesday Session 3

Sanjeev Sharma – When DevOps met SRE: From Apollo 13 to Google SRE

  • Author of two DevOps books
  • Apollo 13
    • Who were the real heroes? The guys back at mission control. The astronauts just had to keep breathing and not die
  • Best Practice for Incident management
    • Prioritize
    • Prepare
    • Trust
    • Introspect
    • Consider Alternatives
    • Practice
    • Change it around
  • Big Hurdles to adoption of DevOps in Enterprise
    • Literature is Only looking at one delivery platform at a time
    • Big enterprise have hundreds of platforms with completely different technologies, maturity levels, speeds. All interdependent
    • He divides platforms into:
      • Industrialised Core – Value High, Risk Low, MTBF
      • Agile/Innovation Edge – Value Low, Risk High, Rapid change and delivery, MTTR
      • Need normal distribution curve of platforms across this range
      • Need to be able to maintain products at both ends in one IT organisation
  • 6 capabilities needed in IT Organisation
    • Planning and architecture.
      • Your Delivery pipeline will be as fast as the slowest delivery pipeline it is dependent on
    • APIs
      • Modernizing to Microservices based architecture: Refactoring code and data and defining the APIs
    • Application Deployment Automation and Environment Orchestration
      • Devs are paid to code, not to maintain deployment and config scripts
      • Ops must provide an environment that requires zero setup scripts from devs
    • Test Service and Environment Virtualisation
      • If you are doing 2-week sprints, but it takes 3 weeks to get a test server, how long are your sprints really?
    • Release Management
      • No good if 99% of software works but last 1% is vital for the business function
    • Operational Readiness for SRE
      • Shift between MTBF to MTTR
      • MTTR  = Mean time to detect + Mean time to Triage + Mean time to restore
      • + Mean time to pass blame
    • Antifragile Systems
      • Things that are neither fragile nor robust, but rather thrive on chaos
      • Cattle not pets
      • Servers may go red, but services are always green
    • DevOps: “Everybody is responsible for delivery to production”
    • SRE: “(Everybody) is responsible for delivering Continuous Business Value”


DevOps Days Auckland 2017 – Wednesday Session 2

Marcus Bristol (Pushpay) – Moving fast without crashing

  • Low tolerance for errors in production due to being in finance
  • Deploy twice per day
  • Just Culture – Balance safety and accountability
    • What rule?
    • Who did it?
    • How bad was the breach?
    • Who gets to decide?
  • Example of Retributive Culture
    • KPIs reflect incidents.
    • If more than 10% of deploys are bad, it affects bonuses
    • Reduced number of deploys
  • Restorative Culture
  • Blameless post-mortem
    • Can give detailed account of what happened without fear of retribution
    • Happens after every incident or near-incident
    • Written Down in Wiki Page
    • So everybody has the chance to have a say
    • Summary, Timeline, impact assessment, discussion, Mitigations
    • Mitigations become highest-priority work items
  • Our Process
    • Feature Flags
    • Science
    • Lots of small PRs
    • Code Review
    • Testers paired to devs so bugs can be fixed as soon as found
    • Automated testing
    • Pollination (reviews of code between teams)
    • Bots
      • Posts to Slack when feature flag has been changed
      • Nags about feature flags that seems to be hanging around in QA
      • Nags about Flags that have been good in prod for 30+ days
      • Every merge
      • PRs awaiting reviews for long time (days)
      • Missing postmortem mitigations
      • Status of builds in build farm
      • When deploy has been made
      • Health of API
      • Answer queries on team member list
      • Create ship train of PRs into a build and user can tell bot to deploy to each environment
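The feature-flag lifecycle in these notes (flags the bot nags about once they linger in production past 30 days) can be sketched minimally. The flag names, in-memory storage and threshold below are illustrative assumptions, not Pushpay's actual tooling:

```python
import time

# Each flag records when it was created so a bot can nag about stale ones.
FLAGS = {
    "new_checkout": {"enabled": True, "created": time.time() - 40 * 86400},
    "beta_reports": {"enabled": False, "created": time.time()},
}

def is_enabled(name):
    """Check a flag at the point in code it guards."""
    flag = FLAGS.get(name)
    return bool(flag and flag["enabled"])

def stale_flags(max_age_days=30):
    """Flags older than max_age_days: candidates for the bot's nag list."""
    cutoff = time.time() - max_age_days * 86400
    return [name for name, flag in FLAGS.items() if flag["created"] < cutoff]

print(is_enabled("new_checkout"))  # True
print(stale_flags())               # ['new_checkout']
```

A real system would persist flags centrally and post changes to Slack, as the notes describe; the point is that the stale-flag nag is a trivial query once creation times are recorded.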


October 03, 2017

DevOps Days Auckland 2017 – Wednesday Session 1

Michael Coté – Not actually a DevOps Talk

Digital Transformation

  • Goal: deliver value, weekly reliably, with small patches
  • Management must be the first to fail and transform
  • Standardize on a platform: special snowflakes are slow, expensive and error-prone (see his slide for a good list of what should be standardized)
  • Ramping up: “Pilot low-risk apps, and ramp-up”
  • Pair programming/working
    • Half the advantage is people spend less time on reddit “research”
  • Don’t go to meetings
  • Automate compliance: have what you do automatically logged and generate compliance docs rather than building them manually.
  • Crafting Your Cloud-Native Strategy

Sajeewa Dayaratne – DevOps in an Embedded World

  • Challenges on Embedded
    • Hardware – resource constrained
    • Debugging – OS bugs, Hardware Bugs, UFO Bugs – Oscilloscopes and JTAG connectors are your friend.
    • Environment – Thermal, Moisture, Power consumption
    • Deploy to product – Multi-month cycle, hard or impossible to send updates to ships at sea.
  • Principles of DevOps equally apply to embedded
    • High Frequency
    • Reduce overheads
    • Improve defect resolution
    • Automate
    • Reduce response times
  • Navico
    • Small Sonar, Navigation for medium boats, Displays for sail (eg Americas cup). Navigation displays for large ships
    • Dev around world, factory in Mexico
  • Codebase
    • 5 million lines of code
    • 61 Hardware Products supported – Increasing steadily, very long lifetimes for hardware
    • Complex network of products – lots of products on boat all connected, different versions of software and hardware on the same boat
  • Architecture
    • Old codebase
    • Backward compatible with old hardware
    • Needs to support new hardware
    • Desire new features on all products
  • What does this mean
    • Defects were found too late
    • Very high cost of bugs found late
    • Software stabilization taking longer
    • Manual test couldn’t keep up
    • Cost increasing , including opportunity cost
  • Does CI/CD provide answer?
    • But will it work here?
    • Case Study from HP. Large-Scale Agile Development by Gary Gruver
  • Our Plan
    • Improve tools and architecture
    • Build Speeds
    • Automated testing
    • Code quality control
  • Previous VCS
    • Proprietary tool with limited support and upgrades
    • Limited integration
    • Lack of CI support
    • No code review capacity
  • Move to git
    • Code reviews
    • Integrated CI
    • Supported by tools
  • Architecture
    • Had a configurable codebase already
    • Fairly common hardware platform (only 9 variations)
    • Had runtime feature flags
    • But
      • Cyclic dependencies – 1.5 years to clean these up
      • Singletons – cut down
      • Promote unit testability – worked on
      • Many branches – long lived – mega merges
  • Went to a single Branch model, feature flags, smaller batch sizes, testing focused on single branch
  • Improve build speed
    • Start 8 hours to build Linux platform, 2 hours for each app, 14+ hours to build and package a release
    • Options
      • Increase speed
      • Parallel Builds
    • What did
      • ccache / clcache
      • IncrediBuild
      • distcc
    • 4-5 hours down to 1 hour
  • Test automation
    • Existing tests were mock-ups of the hardware, so not typical
    • Started with micro-test
      • Unit testing (simulator)
      • Unit testing (real hardware)
    • Build Tools
      • Software tools (n2k simulator, remote control)
      • Hardware tools (mimic real-world data, repurpose existing stuff)
    • UI Test Automation
      • Build or Buy
      • Functional testing vs API testing
      • HW Test tools
      • Took 6 hours to do full test on hardware.
  • PipeLine
    • Commit -> pull request
    • Automated Build / Unit Tests
    • Daily QA Build
  • Next?
    • Configuration as code
    • Code Quality tools
    • Simulate more hardware
    • Increase analytics and reporting
    • Fully simulated test env for dev (so the devs don’t need the hardware)
    • Scale – From internal infrastructure to the cloud
    • Grow the team
  • Lessons Learnt
    • Culture!
    • Collect Data
    • Get Executive Buy in
    • Change your tools and processes if needed
    • Test automation is the key
      • Invest in HW
      • Simulate
      • Virtualise
    • Focus on good software design for Everything


Ikea wireless charger in CNC mahogany case

I notice that Ikea sell their wireless chargers without a shell for insertion into desks. The "desk" I chose is a curve cut profile in mahogany that just happens to have the same fit as an LG G3/4/5 type phone. The design changed along the way to a more upright one which then required a catch to stop the phone sliding off.

This was done in Fusion360 which allows bringing in STL files of things like phones and cutting those out of another body. It took a while to work out the ball end toolpath but I finally worked out how to get something that worked reasonably well. The chomps in the side allow fingers to securely lift the phone off the charger.

It will be interesting to play with sliced objects in wood. Layering 3D cuts to build up objects that are 10cm (or about 4 layers) tall.

DevOps Days Auckland 2017 – Tuesday Session 3

Mirror, mirror, on the wall: testing Conway’s Law in open source communities – Lindsay Holmwood

  • The mapping between the organisational structure and the technical structure.
  • Easy to find who owns something, don’t have to keep two maps in your head
  • Needs flexibility of the organisation structure in order to support flexibility in a technical design
  • Conway’s “Law” is really just an adage
  • Complexity frequently takes the form of hierarchy
  • Organisations that mirror perform badly in rapidly changing and innovative environments

Metrics that Matter – Alison Polton-Simon (Thoughtworks)

  • Metrics Mania – Lots of focus on it everywhere ( fitbits, google analytics, etc)
  • How to help teams improve CD process
  • Define CD
    • Software consistently in a deployable state
    • Get fast, automated feedback
    • Do push-button deployments
  • Identifying metrics that mattered
    • Talked to people
    • Contextual observation
    • Rapid prototyping
    • Pilot offering
  • 4 big metrics
    • Deploy ready builds
    • Cycle time
    • Mean time between failures
    • Mean time to recover
  • Number of Deploy-ready builds
    • How many builds are ready for production?
    • Routine commits
    • Testing you can trust
    • Product + Development collaboration
  • Cycle Time
    • Time it takes to go from a commit to a deploy
    • Efficient testing (test subset first, faster testing)
    • Appropriate parallelization (lots of build agents)
    • Optimise build resources
  • Case Study
    • Monolithic Codebase
    • Hand-rolled build system
    • Unreliable environments ( tests and builds fail at random )
    • Validating a Pull Request can take 8 hours
    • Coupled code: isolated teams
    • Wide range of maturity in testing (some no test, some 95% coverage)
    • No understanding of the build system
    • Releases routinely delayed (10 months!) or done “under the radar”
  • Focus in case study
    • Reducing cycle time, increasing reliability
    • Extracted services from monolith
    • Pipelines configured as code
    • Build infrastructure provisioned as docker and ansible
    • Results:
      • Cycle time for one team 4-5h -> 1:23
      • Deploy ready builds 1 per 3-8 weeks -> weekly
  • Mean time between failures
    • Quick feedback early on
    • Robust validation
    • Strong local builds
    • Should not be done by reducing number of releases
  • Mean time to recover
    • How long back to green?
    • Monitoring of production
    • Automated rollback process
    • Informative logging
  • Case Study 2
    • 1.27 million lines of code
    • High cyclomatic complexity
    • Tightly coupled
    • Long-running but frequently failing testing
    • Isolated teams
    • Pipeline run duration 10h -> 15m
    • MTTR Never -> 50 hours
    • Cycle time 18d -> 10d
    • Created a dashboard for the metrics
  • Meaningless Metrics
    • The company will build whatever the CEO decides to measure
    • Lines of code produced
    • Number of Bugs resolved. – real life duplicates Dilbert
    • Developers Hours / Story Points
    • Problems
      • Lack of team buy-in
      • Easy to game
      • Unintended consequences
      • Measuring inputs, not impacts
  • Make your own metrics
    • Map your path to production
    • Highlights pain points
    • collaborate
    • Experiment
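The two failure metrics in these notes (MTBF and MTTR) fall out of simple timestamp arithmetic over incident records; a minimal sketch with made-up incident data:

```python
from datetime import datetime, timedelta

# Each incident is a (failure_start, recovered_at) pair; timestamps invented.
incidents = [
    (datetime(2017, 9, 1, 10, 0), datetime(2017, 9, 1, 10, 45)),
    (datetime(2017, 9, 8, 14, 0), datetime(2017, 9, 8, 14, 20)),
    (datetime(2017, 9, 20, 9, 0), datetime(2017, 9, 20, 9, 30)),
]

def mttr(incidents):
    """Mean time to recover: average of (recovery - failure start)."""
    total = sum((end - start for start, end in incidents), timedelta())
    return total / len(incidents)

def mtbf(incidents):
    """Mean time between failures: average gap between failure starts."""
    starts = sorted(start for start, _ in incidents)
    gaps = [later - earlier for earlier, later in zip(starts, starts[1:])]
    return sum(gaps, timedelta()) / len(gaps)

print(mttr(incidents))  # 0:31:40
print(mtbf(incidents))  # 9 days, 11:30:00
```

As the notes say, MTBF should not be "improved" by simply releasing less often; these two numbers only mean something when tracked together.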



DevOps Days Auckland 2017 – Tuesday Session 2

Using Bots to Scale incident Management – Anthony Angell (Xero)

  • Who we are
    • Single Team
    • Just a platform Operations team
  • SRE team is formed
    • Ops teams plus performance Engineering team
  • Incident Management
    • In the bad old days – 600 people on a single chat channel
    • Created Framework
    • what do incidents look like, post mortems, best practices,
    • How to make incident management easy for others?
  • ChatOps (Based on Hubot)
    • Automated tour guide
    • Multiple integrations – anything with Rest API
    • Reducing time to restore
    • Flexibility
  • Release register – API hook to when changes are made
  • Issue report form
    • Summary
    • URL
    • User-ids
    • how many users & location
    • when started
    • anyone working on it already
    • Anything else to add.
  • Chat Bot for incident
    • Populates form and pushes to production channel, creates PagerDuty alert
    • Creates new slack channel for incident
    • Can automatically update status page from chat and page senior managers
    • Can Create “status updates” which record things (eg “restarted server”), or “yammer updates” which get pushed to social media team
    • Creates a task list automatically for the incident
    • Page people from within chat
    • At the end: Gives time incident lasted, archives channel
    • Post Mortem
  • More integrations
    • Report card
    • Change tracking
    • Incident / Alert portal
  • High Availability – dockerisation
  • Caching
    • PagerDuty
    • AWS
    • Datadog



October 02, 2017

DevOps Days Auckland 2017 – Tuesday Session 1

DevSecOps – Anthony Rees

“When Anthrax and Public Enemy came together, It was like Developers and Operations coming together”

  • Everybody is trying to get things out fast, sometimes we forget about security
  • Structural efficiency and optimised flow
  • Compliance putting roadblock in flow of pipeline
    • Even worse scanning in production after deployment
  • Compliance guys using Excel, Security using shell scripts, Developers and Operations using code
  • Chef security compliance language – InSpec
    • Insert Sales stuff here
  • Lots of pre-written configs available

Immutable SQL Server Clusters – John Bowker (from Xero)

  • Problem
    • Pet Based infrastructure
    • Not in cloud, weeks to deploy new server
    • Hard to update base infrastructure code
  • 110 Prod Servers (2 regions).
  • 1.9PB of Disk
  • Octopus Deploy: SQL Schemas, Also server configs
  • Half of team in NZ, Half in Denver
    • Data Engineers, Infrastructure Engineers, Team Lead, Product Owner
  • Where we were – The Burning Platform
    • Changed mid-Migration from dedicated instances to dedicated Hosts in AWS
    • Big saving on software licensing
  • Advantages
    • Already had Clustered HA
    • Existing automation
    • 6-day, 15 hours/day coverage due to the team being in multiple locations
  • Migration had to have no downtime
    • Went with node swaps in cluster
  • Split team. Half doing migration, half creating code/system for the node swaps
  • We learnt
    • Dedicated hosts are cheap
    • Dedicated host automation not so good for Windows
    • Discovery service not so good.
    • Syncing data took up to 24h due to large dataset
    • Powershell debugging is hard (moving away from powershell a bit, but powershell has lots of SQL server stuff built in)
    • AWS services can timeout, allow for this.
  • Things we Built
    • Lots Step Templates in Octopus Deploy
    • Metadata Store for SQL servers – Dynamite (Python, Lambda, Flask, DynamoDB) – Hope to open source
    • Lots of PowerShell Modules
  • Node Swaps going forward
    • Working towards making this completely automated
    • New AMI -> Node swap onto that
    • Avoid upgrade in place or running on old version


Linux Security Summit 2017 Roundup

The 2017 Linux Security Summit (LSS) was held last month in Los Angeles over the 14th and 15th of September.  It was co-located with Open Source Summit North America (OSSNA) and the Linux Plumbers Conference (LPC).

LSS 2017 sign at conference

LSS 2017

Once again we were fortunate to have general logistics managed by the Linux Foundation, allowing the program committee to focus on organizing technical content.  We had a record number of submissions this year and accepted approximately one third of them.  Attendance was very strong, with ~160 attendees — another record for the event.

LSS 2017 Attendees

LSS 2017 Attendees

On the day prior to LSS, attendees were able to access a day of LPC, which featured two tracks with a security focus:

Many thanks to the LPC organizers for arranging the schedule this way and allowing LSS folk to attend the day!

Realtime notes were made of these microconfs via etherpad:

I was particularly interested in the topic of better integrating LSM with containers, as there is an increasingly common requirement for nesting of security policies, where each container may run its own apparently independent security policy, and also a potentially independent security model.  I proposed the approach of introducing a security namespace, where all security interfaces within the kernel are namespaced, including LSM.  It would potentially solve the container use-cases, and also the full LSM stacking case championed by Casey Schaufler (which would allow entirely arbitrary stacking of security modules).

This would be a very challenging project, to say the least, and one which is further complicated by containers not being a first class citizen of the kernel.   This leads to security policy boundaries clashing with semantic functional boundaries e.g. what does it mean from a security policy POV when you have namespaced filesystems but not networking?

Discussion turned to the idea that it is up to the vendor/user to configure containers in a way which makes sense for them, and similarly, they would also need to ensure that they configure security policy in a manner appropriate to that configuration.  I would say this means that semantic responsibility is pushed to the user with the kernel largely remaining a set of composable mechanisms, in relation to containers and security policy.  This provides a great deal of flexibility, but requires those building systems to take a great deal of care in their design.

There are still many issues to resolve, both upstream and at the distro/user level, and I expect this to be an active area of Linux security development for some time.  There were some excellent followup discussions in this area, including an approach which constrains the problem space. (Stay tuned)!

A highlight of the TPMs session was an update on the TPM 2.0 software stack, by Philip Tricca and Jarkko Sakkinen.  The slides may be downloaded here.  We should see a vastly improved experience over TPM 1.x with v2.0 hardware capabilities, and the new software stack.  I suppose the next challenge will be TPMs in the post-quantum era?

There were further technical discussions on TPMs and container security during subsequent days at LSS.  Bringing the two conference groups together here made for a very productive event overall.

TPMs microconf at LPC with Philip Tricca presenting on the 2.0 software stack.

This year, due to the overlap with LPC, we unfortunately did not have any LWN coverage.  There are, however, excellent writeups available from attendees:

There were many awesome talks.

The CII Best Practices Badge presentation by David Wheeler was an unexpected highlight for me.  CII refers to the Linux Foundation’s Core Infrastructure Initiative, a preemptive security effort for Open Source.  The Best Practices Badge Program is a secure development maturity model designed to allow open source projects to improve their security in an evolving and measurable manner.  There’s been very impressive engagement with the project from across open source, and I believe this is a critically important effort for security.

CII Badge Project adoption (from David Wheeler’s slides).

During Dan Cashman’s talk on SELinux policy modularization in Android O,  an interesting data point came up:

We of course expect to see application vulnerability mitigations arising from Mandatory Access Control (MAC) policies (SELinux, Smack, and AppArmor), but if you look closely this refers to kernel vulnerabilities.   So what is happening here?  It turns out that a side effect of MAC policies, particularly those implemented in tightly-defined environments such as Android, is a reduction in kernel attack surface.  It is generally more difficult to reach such kernel vulnerabilities when you have MAC security policies.  This is a side-effect of MAC, not a primary design goal, but nevertheless appears to be very effective in practice!

Another highlight for me was the update on the Kernel Self Protection Project lead by Kees, which is now approaching its 2nd anniversary, and continues the important work of hardening the mainline Linux kernel itself against attack.  I would like to also acknowledge the essential and original research performed in this area by grsecurity/PaX, from which this mainline work draws.

From a new development point of view, I’m thrilled to see the progress being made by Mickaël Salaün, on Landlock LSM, which provides unprivileged sandboxing via seccomp and LSM.  This is a novel approach which will allow applications to define and propagate their own sandbox policies.  Similar concepts are available in other OSs such as OSX (seatbelt) and BSD (pledge).  The great thing about Landlock is its consolidation of two existing Linux kernel security interfaces: LSM and Seccomp.  This ensures re-use of existing mechanisms, and aids usability by utilizing already familiar concepts for Linux users.

Overall I found it to be an incredibly productive event, with many new and interesting ideas arising and lots of great collaboration in the hallway, lunch, and dinner tracks.

Slides from LSS may be found linked to the schedule abstracts.

We did not have a video sponsor for the event this year, and we’ll work on that again for next year’s summit.  We have discussed holding LSS again next year in conjunction with OSSNA, which is expected to be in Vancouver in August.

We are also investigating a European LSS in addition to the main summit for 2018 and beyond, as a way to help engage more widely with Linux security folk.  Stay tuned for official announcements on these!

Thanks once again to the awesome event staff at LF, especially Jillian Hall, who ensured everything ran smoothly.  Thanks also to the program committee who review, discuss, and vote on every proposal, ensuring that we have the best content for the event, and who work on technical planning for many months prior to the event.  And of course thanks to the presenters and attendees, without whom there would literally and figuratively be no event :)

See you in 2018!


Stone Axes and Aboriginal Stories from Victoria

In the most recent edition of Australian Archaeology, the journal of the Australian Archaeological Association, there is a paper examining the exchange of stone axes in Victoria and correlating these patterns of exchange with Aboriginal stories in the 19th century. This paper is particularly timely with the passing of legislation in the Victorian Parliament on […]

September 28, 2017

Process Monitoring

Since forking the Mon project to etbemon [1] I’ve been spending a lot of time working on the monitor scripts. Actually monitoring something is usually quite easy, deciding what to monitor tends to be the hard part. The process monitoring script ps.monitor is the one I’m about to redesign.

Here are some of my ideas for monitoring processes. Please comment if you have any suggestions for how to do things better.

For people who don’t use mon, the monitor scripts return 0 if everything is OK and 1 if there’s a problem, along with using stdout to display an error message. While I’m not aware of anyone hooking mon scripts into a different monitoring system, that would be easy to do. One thing I plan to work on in the future is interoperability between mon and other systems such as Nagios.

Basic Monitoring

ps.monitor tor:1-1 master:1-2 auditd:1-1 cron:1-5 rsyslogd:1-1 dbus-daemon:1- sshd:1- watchdog:1-2

I’m currently planning some sort of rewrite of the process monitoring script. The current functionality is to have a list of process names on the command line with minimum and maximum numbers for the instances of the process in question. The above is a sample of the configuration of the monitor. There are some limitations to this: the “master” process in this instance refers to the main process of Postfix, but other daemons use the same process name (it’s one of those names that’s wrong because it’s so obvious). One obvious solution to this is to give the option of specifying the full path so that /usr/lib/postfix/sbin/master can be differentiated from all the other programs named master.
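For illustration, the name:min-max syntax above could be parsed with something like this (a hypothetical Python sketch, not the actual ps.monitor code):

```python
import re

def parse_spec(spec):
    """Parse a "name:min-max" spec such as "master:1-2" or "sshd:1-".

    Returns (name, min_count, max_count); max_count is None when the
    upper bound is left open, as in "dbus-daemon:1-".
    """
    m = re.fullmatch(r'([^:]+):(\d+)-(\d*)', spec)
    if not m:
        raise ValueError('bad process spec: %s' % spec)
    name, lo, hi = m.group(1), int(m.group(2)), m.group(3)
    return name, lo, int(hi) if hi else None

def count_ok(count, lo, hi):
    """True if the observed process count is within the configured range."""
    return count >= lo and (hi is None or count <= hi)
```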

The next issue is processes that may run on behalf of multiple users. With sshd there is a single process to accept new connections running as root and a process running under the UID of each logged in user. So the number of sshd processes running as root will be one greater than the number of root login sessions. This means that if a sysadmin logs in directly as root via ssh (which is controversial and not the topic of this post – merely something that people do which I have to support) and the master process then crashes (or the sysadmin stops it either accidentally or deliberately) there won’t be an alert about the missing process. Of course the correct thing to do is to have a monitor talk to port 22 and look for the string “SSH-2.0-OpenSSH_”. Sometimes there are multiple instances of a daemon running under different UIDs that need to be monitored separately. So obviously we need the ability to monitor processes by UID.

In many cases process monitoring can be replaced by monitoring of service ports. So if something is listening on port 25 then it probably means that the Postfix “master” process is running regardless of what other “master” processes there are. But for my use I find it handy to have multiple monitors, if I get a Jabber message about being unable to send mail to a server immediately followed by a Jabber message from that server saying that “master” isn’t running I don’t need to fully wake up to know where the problem is.
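The port 22 check mentioned above could look something like the following sketch (the function names are mine, not part of mon or etbemon):

```python
import socket

def banner_ok(banner):
    """True if the greeting looks like an OpenSSH server banner."""
    return banner.startswith(b'SSH-2.0-OpenSSH_')

def check_ssh(host, port=22, timeout=5):
    """Connect to host:port and verify the SSH banner; False on any error."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            return banner_ok(s.recv(64))
    except OSError:
        return False
```

The same pattern works for port 25: connect, read the greeting, and check for the expected banner string rather than just a successful TCP handshake.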

SE Linux

One feature that I want is monitoring SE Linux contexts of processes in the same way as monitoring UIDs. While I’m not interested in writing tests for other security systems I would be happy to include code that other people write. So whatever I do I want to make it flexible enough to work with multiple security systems.

Transient Processes

Most daemons have a second process of the same name running during the startup process. This means if you monitor for exactly 1 instance of a process you may get an alert about 2 processes running when “logrotate” or something similar restarts the daemon. Also you may get an alert about 0 instances if the check happens to run at exactly the wrong time during the restart. My current way of dealing with this on my servers is to not alert until the second failure event with the “alertafter 2” directive. The “failure_interval” directive allows specifying the time between checks when the monitor is in a failed state, setting that to a low value means that waiting for a second failure result doesn’t delay the notification much.

To deal with this I’ve been thinking of making the ps.monitor script automatically check again after a specified delay. I think that solving the problem with a single parameter to the monitor script is better than using 2 configuration directives to mon to work around it.
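A sketch of that single-parameter approach (illustrative only; check() is a stand-in for whatever test the monitor runs):

```python
import time

def stable_check(check, retries=1, delay=5):
    """Run check() and, on failure, wait and try again before alerting.

    Failure is only reported if every attempt fails, which filters out
    the transient extra (or missing) process seen during a daemon restart.
    check() must return an (ok, message) tuple.
    """
    message = ''
    for attempt in range(retries + 1):
        ok, message = check()
        if ok:
            return True, ''
        if attempt < retries:
            time.sleep(delay)
    return False, message
```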


Mon currently has a loadavg.monitor script that checks the load average. But that won’t catch the case of a single process using too much CPU time but not enough to raise the system load average. Also it won’t catch the case of a CPU hungry process going quiet (EG when the SETI at Home server goes down) while another process goes into an infinite loop. One way of addressing this would be to have the ps.monitor script have yet another configuration option to monitor CPU use, but this might get confusing. Another option would be to have a separate script that alerts on any process that uses more than a specified percentage of CPU time over its lifetime or over the last few seconds, unless it’s in a whitelist of processes and users who are exempt from such checks. Probably every regular user would be exempt from such checks because you never know when they will run a file compression program. Also there is a short list of daemons that are excluded (like BOINC) and system processes (like gzip which is run from several cron jobs).
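A per-process CPU check could be built from two samples of /proc/<pid>/stat, along these lines (a sketch; field positions are per proc(5), and the helper names are mine):

```python
import os

CLK_TCK = os.sysconf('SC_CLK_TCK')  # clock ticks per second

def cpu_ticks(pid):
    """Cumulative user+system CPU ticks for pid, from /proc/<pid>/stat."""
    with open('/proc/%d/stat' % pid) as f:
        # split after the "(comm)" field, which may itself contain spaces
        rest = f.read().rsplit(') ', 1)[1].split()
    # rest[11] and rest[12] are utime and stime (fields 14 and 15 of the line)
    return int(rest[11]) + int(rest[12])

def cpu_percent(before, after, interval):
    """CPU use over interval seconds, as a percentage of one CPU."""
    return 100.0 * (after - before) / CLK_TCK / interval
```

Sampling each process at the start and end of an interval and comparing cpu_percent() against a threshold, minus the whitelist, would give the behaviour described above.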

Monitoring for Exclusion

A common programming mistake is to call setuid() before setgid() which means that the program doesn’t have permission to call setgid(). If return codes aren’t checked (and people who make such rookie mistakes tend not to check return codes) then the process keeps elevated permissions. Checking for processes running as GID 0 but not UID 0 would be handy. As an aside a quick examination of a Debian/Testing workstation didn’t show any obvious way that a process with GID 0 could gain elevated privileges, but that could change with one chmod 770 command.
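That check could be a simple scan of /proc, along these lines (a sketch using the real UID/GID from the status file; not part of etbemon):

```python
import glob

def gid0_not_uid0(status_text):
    """True if a /proc/<pid>/status text shows GID 0 without UID 0."""
    uid = gid = None
    for line in status_text.splitlines():
        if line.startswith('Uid:'):
            uid = int(line.split()[1])  # real UID
        elif line.startswith('Gid:'):
            gid = int(line.split()[1])  # real GID
    return gid == 0 and uid != 0

def scan_processes():
    """Print any process running with GID 0 but a non-root UID."""
    for path in glob.glob('/proc/[0-9]*/status'):
        try:
            with open(path) as f:
                if gid0_not_uid0(f.read()):
                    print('suspect: %s' % path)
        except OSError:
            pass  # the process exited while we were scanning
```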

On a SE Linux system there should be only one process running with the domain init_t. Currently that doesn’t happen in Stretch systems running daemons such as mysqld and tor due to policy not matching the recent functionality of systemd as requested by daemon service files. Such issues will keep occurring so we need automated tests for them.

Automated tests for configuration errors that might impact system security is a bigger issue, I’ll probably write a separate blog post about it.

I think I found a bug in python's unittest.mock library

Mocking is a pretty common thing to do in unit tests covering OpenStack Nova code. Over the years we've used various mock libraries to do that, with the flavor du jour being unittest.mock. I must say that I strongly prefer unittest.mock to the old mox code we used to write, but I think I just accidentally found a fairly big bug.

The problem is that python mocks are magical. It's an object where you can call any method name, and the mock will happily pretend it has that method, and return another mock. You can then later ask what "methods" were called on the mock.

However, you use the same mock object later to make assertions about what was called. Herein is the problem -- the mock object doesn't know if you're the code under test, or the code that's making assertions. So, if you fat finger the assertion in your test code, the assertion will just quietly map to a non-existent method which returns another mock instead of raising, and your test will pass.

Here's an example:

    from unittest import mock

    class foo(object):
        def dummy(a, b):
            return a + b

    @mock.patch.object(foo, 'dummy')
    def call_dummy(mock_dummy):
        f = foo()
        f.dummy(1, 2)

        print('Asserting a call should work if the call was made')
        mock_dummy.assert_has_calls([mock.call(1, 2)])
        print('Assertion for expected call passed')

        print('Asserting a call should raise an exception if the call wasn\'t made')
        mock_worked = False
        try:
            mock_dummy.assert_has_calls([mock.call(3, 4)])
        except AssertionError as e:
            mock_worked = True
            print('Expected failure, %s' % e)
        if not mock_worked:
            print('*** Assertion should have failed ***')

        print('Asserting a call where the assertion has a typo should fail, but '
              'doesn\'t')
        mock_worked = False
        try:
            mock_dummy.typo_assert_has_calls([mock.call(3, 4)])
        except AssertionError as e:
            mock_worked = True
            print('Expected failure, %s' % e)
        if not mock_worked:
            print('*** Assertion should have failed ***')
            print(mock_dummy.mock_calls)

    if __name__ == '__main__':
        call_dummy()

If I run that code, I get this:

    $ python3 
    Asserting a call should work if the call was made
    Assertion for expected call passed
    Asserting a call should raise an exception if the call wasn't made
    Expected failure, Calls not found.
    Expected: [call(3, 4)]
    Actual: [call(1, 2)]
    Asserting a call where the assertion has a typo should fail, but doesn't
    *** Assertion should have failed ***
    [call(1, 2), call.typo_assert_has_calls([call(3, 4)])]

So, we should have been told that typo_assert_has_calls isn't a thing, but we didn't notice because it silently failed. I discovered this when I noticed an assertion with a (smaller than this) typo in its call in a code review yesterday.

I don't really have a solution to this right now (I'm home sick and not thinking straight), but it would be interesting to see what other people think.
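For what it's worth, newer Pythons do offer some protection here: since 3.5, Mock rejects misspelled attributes that begin with "assert" (which wouldn't catch this particular typo), and Python 3.7 added unittest.mock.seal(), which makes any access to a not-yet-used attribute raise AttributeError. A quick sketch:

```python
from unittest import mock

m = mock.Mock()
m.dummy(1, 2)    # the call under test, recorded as usual
mock.seal(m)     # Python 3.7+: no new attributes may be created after this

# real assertion methods still work
m.dummy.assert_has_calls([mock.call(1, 2)])

# ...but a typo'd one now raises instead of silently passing
try:
    m.typo_assert_has_calls([mock.call(3, 4)])
except AttributeError as e:
    print('typo caught: %s' % e)
```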

Tags for this post: python unittest.mock mock testing
Related posts: Killing a blocking thread in python?; mbot: new hotness in Google Talk bots; Twisted conch; Executing a command with paramiko; Universal Feedparser and XML namespaces; More coding club


September 24, 2017

What Makes Humans Different From Most Other Mammals?

Well, there are several things that makes us different from other mammals – although perhaps fewer than one might think. We are not unique in using tools, in fact we discover more animals that use tools all the time – even fish! We pride ourselves on being a “moral animal”, however fairness, reciprocity, empathy and […]

Drupal Puppies

Over the years Drupal distributions, or distros as they're more affectionately known, have evolved a lot. We started off passing around database dumps. Eventually we moved onto using installation profiles and features to share par-baked sites.

There are some signs that distros aren't working for people using them. Agencies often hack a distro to meet client requirements. This happens because it is often difficult to cleanly extend a distro. A content type might need extra fields or the logic in an alter hook may not be desired. This makes it difficult to maintain sites built on distros. Other times maintainers abandon their distributions. This leaves site owners with an unexpected maintenance burden.

We should recognise how people are using distros and try to cater to them better. My observations suggest there are 2 types of Drupal distributions; starter kits and targeted products.

Targeted products are easier to deal with. Increasingly monetising targeted distro products is done through a SaaS offering. The revenue can fund the ongoing development of the product. This can help ensure the project remains sustainable. There are signs that this is a viable way of building Drupal 8 based products. We should be encouraging companies to embrace a strategy built around open SaaS. Open Social is a great example of this approach. Releasing the distros demonstrates a commitment to the business model. Often the secret sauce isn't in the code, it is the team and services built around the product.

Many Drupal 7 based distros struggled to articulate their use case. It was difficult to know if they were a product, a demo or a community project that you extend. Open Atrium and Commerce Kickstart are examples of distros with an identity crisis. We need to reconceptualise most distros as "starter kits" or as I like to call them "puppies".

Why puppies? Once you take a puppy home it becomes your responsibility. Starter kits should be the same. You should never assume that a starter kit will offer an upgrade path from one release to the next. When you install a starter kit you are responsible for updating the modules yourself. You need to keep track of security releases. If your puppy leaves a mess on the carpet, no one else will clean it up.

Sites built on top of a starter kit should diverge from the original version. This shouldn't only be an expectation, it should be encouraged. Installing a starter kit is the starting point of building a unique fork.

Project pages should clearly state that users are buying a puppy. Prospective puppy owners should know if they're about to take home a little lap dog or one that will grow to the size of a pony that needs daily exercise. Puppy breeders (developers) should not feel compelled to do anything once releasing the puppy. That said, most users would like some documentation.

I know of several agencies and large organisations that are making use of starter kits. Let's support people who are adopting this approach. As a community we should acknowledge that distros aren't working. We should start working out how best to manage the transition to puppies.

September 23, 2017

On Equal Rights

This is probably old news now, but I only saw it this morning, so here we go:

In case that embedded tweet doesn’t show up properly, that’s an editorial in the NT News which says:

Voting papers have started to drop through Territory mailboxes for the marriage equality postal vote and I wanted to share with you a list of why I’ll be voting yes.

1. I’m not an arsehole.

This resulted in predictable comments along the lines of “oh, so if I don’t share your views, I’m an arsehole?”

I suppose it’s unlikely that anyone who actually needs to read and understand what I’m about to say will do so, but just in case, I’ll lay this out as simply as I can:

  • A personal belief that marriage is a thing that can only happen between a man and a woman does not make you an arsehole (it might make you on the wrong side of history, or a lot of other things, but it does not necessarily make you an arsehole).
  • Voting “no” to marriage equality is what makes you an arsehole.

The survey says “Should the law be changed to allow same-sex couples to marry?” What this actually means is, “Should same-sex couples have the same rights under law as everyone else?”

If you believe everyone should have the same rights under law, you need to vote yes regardless of what you, personally, believe the word “marriage” actually means – this is to make sure things like “next of kin” work the way the people involved in a relationship want them to.

If you believe that there are minorities that should not have the same rights under law as everyone else, then I’m sorry, but you’re an arsehole.

(Personally I think the Marriage Act should be ditched entirely in favour of a Civil Unions Act – that way the word “marriage” could go back to simply meaning whatever it means to the individuals being married, and to their god(s) if they have any – but this should in no way detract from the above. Also, this vote shouldn’t have happened in the first place; our elected representatives should have done their bloody jobs and fixed the legislation already.)

Converting Mbox to Maildir

MBox is the original and ancient format for storing mail on Unix systems; it consists of a single file per user under /var/spool/mail with the messages concatenated. Obviously performance is very poor when deleting messages from a large mail store as the entire file has to be rewritten. Maildir was invented for Qmail by Dan Bernstein and has one message per file, giving fast deletes among other performance benefits. An ongoing issue over the last 20 years has been converting Mbox systems to Maildir. The various ways of getting IMAP to work with Mbox only made this more complex.

The Dovecot Wiki has a good page about converting Mbox to Maildir [1]. If you want to keep the same message UIDs and the same path separation characters then it will be a complex task. But if you just want to copy a small number of Mbox accounts to an existing server then it’s a bit simpler.
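As an aside, for a small number of accounts where there is no need to preserve UIDs or flags, Python's standard mailbox module can also do the conversion (a minimal sketch; the paths are examples):

```python
import mailbox

def mbox_to_maildir(mbox_path, maildir_path):
    """Copy every message from an mbox file into a Maildir (created if absent)."""
    src = mailbox.mbox(mbox_path)
    dst = mailbox.Maildir(maildir_path, create=True)
    try:
        for msg in src:
            dst.add(msg)
    finally:
        src.close()
        dst.close()

# e.g. mbox_to_maildir('/var/spool/mail/example', '/mailstore/example')
```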

Dovecot has a script to convert folders [2].

cd /var/spool/mail
mkdir -p /mailstore/
for U in * ; do
  ~/ -s $(pwd)/$U -d /mailstore/$U
done
To convert the inboxes shell code like the above is needed. If the users don’t have IMAP folders (EG they are just POP users or use local Unix MUAs) then that’s all you need to do.

cd /home
for DIR in */mail ; do
  U=$(echo $DIR| cut -f1 -d/)
  cd /home/$DIR
  for FOLDER in * ; do
    ~/ -s $(pwd)/$FOLDER -d /mailstore/$U/.$FOLDER
  done
  cp .subscriptions /mailstore/$U/subscriptions
done

Some shell code like the above will convert the IMAP folders to Maildir format. The end result is that the users will have to download all the mail again as their MUA will think that every message had been deleted and replaced. But as all servers with significant amounts of mail or important mail were probably converted to Maildir a decade ago this shouldn’t be a problem.

LUV Main October 2017 Meeting: The Tor software and network

Oct 3 2017 18:30
Oct 3 2017 20:30
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000




  • Russell Coker, Tor

Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.

Russell Coker has done lots of Linux development over the years, mostly involved with Debian.


Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.


LUV October 2017 Workshop: Hands-on with Tor

Oct 21 2017 12:30
Oct 21 2017 16:30
Infoxchange, 33 Elizabeth St. Richmond

Hands-on with Tor

Following on from Russell Coker's well-attended Tor presentation at the October main meeting, he will cover torbrowser-launcher, torchat, ssh (as an example of a traditionally non-tor app that can run with it), and how to write a basic torchat in shell.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.



September 22, 2017

Stupid Solutions to Stupid Problems: Hardcoding Your SSH Key in the Kernel

The "problem"

I'm currently working on firmware and kernel support for OpenCAPI on POWER9.

I've recently been allocated a machine in the lab for development purposes. We use an internal IBM tool running on a secondary machine that triggers hardware initialisation procedures, then loads a specified skiboot firmware image, a kernel image, and a root file system directly into RAM. This allows us to get skiboot and Linux running without requiring the usual hostboot initialisation and gives us a lot of options for easier tinkering, so it's super-useful for our developers working on bringup.

When I got access to my machine, I figured out the necessary scripts, developed a workflow, and started fixing my code... so far, so good.

One day, I was trying to debug something and get logs off the machine using ssh and scp, when I got frustrated with having to repeatedly type in our ultra-secret, ultra-secure root password, abc123. So, I ran ssh-copy-id to copy over my public key, and all was good.

Until I rebooted the machine, when strangely, my key stopped working. It took me longer than it should have to realise that this is an obvious consequence of running entirely from an initrd that's reloaded every boot...

The "solution"

I mentioned something about this to Jono, my housemate/partner-in-stupid-ideas, one evening a few weeks ago. We decided that clearly, the best way to solve this problem was to hardcode my SSH public key in the kernel.

This would definitely be the easiest and most sensible way to solve the problem, as opposed to, say, just keeping my own copy of the root filesystem image. Or asking Mikey, whose desk is three metres away from mine, whether he could use his write access to add my key to the image. Or just writing a wrapper around sshpass...

One Tuesday afternoon, I was feeling bored...

The approach

The SSH daemon looks for authorised public keys in ~/.ssh/authorized_keys, so we need to have a read of /root/.ssh/authorized_keys return a specified hard-coded string.

I did a bit of investigation. My first thought was to put some kind of hook inside whatever filesystem driver was being used for the root. After some digging, I found out that the filesystem type rootfs, as seen in mount, is actually backed by the tmpfs filesystem. I took a look around the tmpfs code for a while, but didn't see any way to hook in a fake file without a lot of effort - the tmpfs code wasn't exactly designed with this in mind.

I thought about it some more - what would be the easiest way to create a file such that it just returns a string?

Then I remembered sysfs, the filesystem normally mounted at /sys, which is used by various kernel subsystems to expose configuration and debugging information to userspace in the form of files. The sysfs API allows you to define a file and specify callbacks to handle reads and writes to the file.

That got me thinking - could I create a file in /sys, and then use a bind mount to have that file appear where I need it in /root/.ssh/authorized_keys? This approach seemed fairly straightforward, so I decided to give it a try.

First up, creating a pseudo-file. It had been a while since the last time I'd used the sysfs API...


The sysfs pseudo file system was first introduced in Linux 2.6, and is generally used for exposing system and device information.

Per the sysfs documentation, sysfs is tied in very closely with the kobject infrastructure. sysfs exposes kobjects as directories, containing "attributes" represented as files. The kobject infrastructure provides a way to define kobjects representing entities (e.g. devices) and ksets which define collections of kobjects (e.g. devices of a particular type).

Using kobjects you can do lots of fancy things such as sending events to userspace when devices are hotplugged - but that's all out of the scope of this post. It turns out there's some fairly straightforward wrapper functions if all you want to do is create a kobject just to have a simple directory in sysfs.

#include <linux/kobject.h>

static int __init ssh_key_init(void)
{
        struct kobject *ssh_kobj;

        ssh_kobj = kobject_create_and_add("ssh", NULL);
        if (!ssh_kobj) {
                pr_err("SSH: kobject creation failed!\n");
                return -ENOMEM;
        }
This creates and adds a kobject called ssh. And just like that, we've got a directory in /sys/ssh/!

The next thing we have to do is define a sysfs attribute for our authorized_keys file. sysfs provides a framework for subsystems to define their own custom types of attributes with their own metadata - but for our purposes, we'll use the generic bin_attribute attribute type.

#include <linux/sysfs.h>

const char key[] = "PUBLIC KEY HERE...";

static ssize_t show_key(struct file *file, struct kobject *kobj,
                        struct bin_attribute *bin_attr, char *to,
                        loff_t pos, size_t count)
{
        return memory_read_from_buffer(to, count, &pos, key, bin_attr->size);
}

static const struct bin_attribute authorized_keys_attr = {
        .attr = { .name = "authorized_keys", .mode = 0444 },
        .read = show_key,
        .size = sizeof(key)
};
We provide a simple callback, show_key(), that copies the key string into the file's buffer, and we put it in a bin_attribute with the appropriate name, size and permissions.

To actually add the attribute, we put the following in ssh_key_init():

int rc;

rc = sysfs_create_bin_file(ssh_kobj, &authorized_keys_attr);
if (rc) {
        pr_err("SSH: sysfs creation failed, rc %d\n", rc);
        return rc;
}
Woo, we've now got /sys/ssh/authorized_keys! Time to move on to the bind mount.


Now that we've got a directory with the key file in it, it's time to figure out the bind mount.

Because I had no idea how any of the file system code works, I started off by running strace on mount --bind ~/tmp1 ~/tmp2 just to see how the userspace mount tool uses the mount syscall to request the bind mount.

execve("/bin/mount", ["mount", "--bind", "/home/ajd/tmp1", "/home/ajd/tmp2"], [/* 18 vars */]) = 0


mount("/home/ajd/tmp1", "/home/ajd/tmp2", 0x18b78bf00, MS_MGC_VAL|MS_BIND, NULL) = 0

The first and second arguments are the source and target paths respectively. The third argument, looking at the signature of the mount syscall, is a pointer to a string with the file system type. Because this is a bind mount, the type is irrelevant (upon further digging, it turns out that this particular pointer is to the string "none").

The fourth argument is where we specify the flags bitfield. MS_MGC_VAL is a magic value that was required before Linux 2.4 and can now be safely ignored. MS_BIND, as you can probably guess, signals that we want a bind mount.

(The final argument is used to pass file system specific data - as you can see it's ignored here.)

Now, how is the syscall actually handled on the kernel side? The answer is found in fs/namespace.c.

SYSCALL_DEFINE5(mount, char __user *, dev_name, char __user *, dir_name,
                char __user *, type, unsigned long, flags, void __user *, data)
{
        int ret;

        /* ... copy parameters from userspace memory ... */

        ret = do_mount(kernel_dev, dir_name, kernel_type, flags, options);

        /* ... cleanup ... */

        return ret;
}

So in order to achieve the same thing from within the kernel, we just call do_mount() with exactly the same parameters as the syscall uses:

rc = do_mount("/sys/ssh", "/root/.ssh", "sysfs", MS_BIND, NULL);
if (rc) {
        pr_err("SSH: bind mount failed, rc %d\n", rc);
        return rc;
}

...and we're done, right? Not so fast:

SSH: bind mount failed, rc -2

-2 is ENOENT - no such file or directory. For some reason, we can't find /sys/ssh... of course, that would be because even though we've created the sysfs entry, we haven't actually mounted sysfs on /sys.

rc = do_mount("sysfs", "/sys", "sysfs",
              MS_NOSUID | MS_NOEXEC | MS_NODEV, NULL);

At this point, my key worked!

Note that this requires that your root file system has an empty directory created at /sys to be the mount point. Additionally, in a typical Linux distribution environment (as opposed to my hardware bringup environment), your initial root file system will contain an init script that mounts your real root file system somewhere and calls pivot_root() to switch to the new root file system. At that point, the bind mount won't be visible from child processes using the new root - I think this could be worked around but would require some effort.


The final piece of the puzzle is building our new code into the kernel image.

To allow us to switch this important functionality on and off, I added a config option to fs/Kconfig:

config SSH_KEY
        bool "Andrew's dumb SSH key hack"
        default y
        help
          Hardcode an SSH key for /root/.ssh/authorized_keys.

          This is a stupid idea. If unsure, say N.

This will show up in make menuconfig under the File systems menu.

And in fs/Makefile:

obj-$(CONFIG_SSH_KEY)           += ssh_key.o

If CONFIG_SSH_KEY is set to y, obj-$(CONFIG_SSH_KEY) evaluates to obj-y and thus ssh_key.o gets compiled. Conversely, obj-n is completely ignored by the build system.

I thought I was all done... then Andrew suggested I make the contents of the key configurable, and I had to oblige. Conveniently, Kconfig options can also be strings:

config SSH_KEY_VALUE
        string "Value for SSH key"
        depends on SSH_KEY
        help
          Enter in the content for /root/.ssh/authorized_keys.

Including the string in the C file is as simple as:

const char key[] = CONFIG_SSH_KEY_VALUE;

And there we have it, a nicely configurable albeit highly limited kernel SSH backdoor!


I've put the full code up on GitHub for perusal. Please don't use it, I will be extremely disappointed in you if you do.

Thanks to Jono for giving me stupid ideas, and the rest of OzLabs for being very angry when they saw the disgusting things I was doing.

Comments and further stupid suggestions welcome!

NCSI - Nice Network You've Got There

A neat piece of kernel code dropped into my lap recently, and as a way of processing having to inject an entire network stack into my brain in less-than-ideal time I thought we'd have a look at it here: NCSI!

NCSI - Not the TV Show

NCSI stands for Network Controller Sideband Interface, and put most simply it is a way for a management controller (eg. a BMC like those found on our OpenPOWER machines) to share a single physical network interface with a host machine. Instead of two distinct network interfaces you plug in a single cable and both the host and the BMC have network connectivity.

NCSI-capable network controllers achieve this by filtering network traffic as it arrives and determining if it is host- or BMC-bound. To know how to do this the BMC needs to tell the network controller what to look out for, and from a Linux driver perspective this is the focus of the NCSI protocol.

NCSI Overview

Hi My Name Is 70:e2:84:14:24:a1

The major components of what NCSI helps facilitate are:

  • Network Controllers, known as 'Packages' in this context. There may be multiple separate packages which contain one or more Channels.
  • Channels, most easily thought of as the individual physical network interfaces. If a package is the network card, channels are the individual network jacks. (Somewhere a pedant's head is spinning in circles).
  • Management Controllers, or our BMC, with their own network interfaces. Hypothetically there can be multiple management controllers in a single NCSI system, but I've not come across such a setup yet.

NCSI is the medium and protocol via which these components communicate.

NCSI Packages

The interface between Management Controller and one or more Packages carries both general network traffic to/from the Management Controller as well as NCSI traffic between the Management Controller and the Packages & Channels. Management traffic is differentiated from regular traffic via the inclusion of a special NCSI tag inserted in the Ethernet frame header. These management commands are used to discover and configure the state of the NCSI packages and channels.

If a BMC's network interface is configured to use NCSI, as soon as the interface is brought up NCSI gets to work finding and configuring a usable channel. The NCSI driver at first glance is an intimidating combination of state machines and packet handlers, but with enough coffee it can be represented like this:

NCSI State Diagram

Without getting into the nitty gritty details the overall process for configuring a channel enough to get packets flowing is fairly straightforward:

  • Find available packages.
  • Find each package's available channels.
  • (At least in the Linux driver) select a channel with link.
  • Put this channel into the Initial Config State. The Initial Config State is where all the useful configuration occurs. Here we find out what the selected channel is capable of and its current configuration, and set it up to recognise the traffic we're interested in. The first and most basic way of doing this is configuring the channel to filter traffic based on our MAC address.
  • Enable the channel and let the packets flow.

At this point NCSI takes a back seat to normal network traffic, transmitting a "Get Link Status" packet at regular intervals to monitor the channel.

AEN Packets

Changes can occur from the package side too; the NCSI package communicates these back to the BMC with Asynchronous Event Notification (AEN) packets. As the name suggests these can occur at any time and the driver needs to catch and handle these. There are different types but they essentially boil down to changes in link state, telling the BMC the channel needs to be reconfigured, or to select a different channel. These are only transmitted once and no effort is made to recover lost AEN packets - another good reason for the NCSI driver to periodically monitor the channel.


Each channel can be configured to filter traffic based on MAC address, broadcast traffic, multicast traffic, and VLAN tagging. Associated with each of these filters is a filter table which can hold a finite number of entries. In the case of the VLAN filter each channel could match against 15 different VLAN IDs for example, but in practice the physical device will likely support fewer. Indeed the popular BCM5718 controller supports only two!

This is where I dived into NCSI. The driver had a lot of the pieces for configuring VLAN filters, but none of it was actually hooked up in the configure state, and the driver had no way of knowing which VLAN IDs were meant to be configured on the interface. The bulk of that work appears in this commit, where we take advantage of some useful network stack callbacks to get the VLAN configuration and set it during the configuration state. Getting to the configuration state at some arbitrary time and then managing to assign multiple IDs was the trickiest bit, and is something I'll be looking at simplifying in the future.

NCSI! A neat way to give physically separate users access to a single network controller, and if it works right you won't notice it at all. I'll surely be spending more time here (fleshing out the driver's features, better error handling, and making the state machine a touch more readable to start, and I haven't even mentioned HWA), so watch this space!

September 17, 2017

Those Dirty Peasants!

It is fairly well known that many Europeans in the 17th, 18th and early 19th centuries did not follow the same routines of hygiene as we do today. There are anecdotal and historical accounts of people being dirty, smelly and generally unhealthy. This was particularly true of the poorer sections of society. The epithet “those […]

September 16, 2017

Trying Drupal

While preparing for my DrupalCamp Belgium keynote presentation I looked at how easy it is to get started with various CMS platforms. For my talk I used Contentful, a hosted content as a service CMS platform and contrasted that to the "Try Drupal" experience. Below is the walk through of both.

Let's start with Contentful. I start off by visiting their website.

Contentful homepage

In the top right corner is a blue button encouraging me to "try for free". I hit the link and I'm presented with a sign up form. I can even use Google or GitHub for authentication if I want.

Contentful signup form

While my example site is being installed I am presented with an overview of what I can do once it is finished. It takes around 30 seconds for the site to be installed.

Contentful installer wait

My site is installed and I'm given some guidance about what to do next. There is even an onboarding tour in the bottom right corner that is waving at me.

Contentful dashboard

Overall this took around a minute and required very little thought. I never once found myself thinking "come on, hurry up".

Now let's see what it is like to try Drupal. I land on d.o. I see a big prominent "Try Drupal" button, so I click that.

Drupal homepage

I am presented with 3 options. I am not sure why I'm being presented options to "Build on Drupal 8 for Free" or to "Get Started Risk-Free", I just want to try Drupal, so I go with Pantheon.

Try Drupal providers

Like with Contentful I'm asked to create an account. Again I have the option of using Google for the sign up or completing a form. This form has more fields than Contentful's.

Pantheon signup page

I've created my account and I am expecting to be dropped into a demo Drupal site. Instead I am presented with a dashboard. The most prominent call to action is importing a site. I decide to create a new site.

Pantheon dashboard

I have to now think of a name for my site. This is already feeling like a lot of work just to try Drupal. If I was a busy manager I would have probably given up by this point.

Pantheon create site form

When I submit the form I must surely be going to see a Drupal site. No, sorry. I am given the choice of installing WordPress, yes WordPress, Drupal 8 or Drupal 7. Despite being very confused I go with Drupal 8.

Pantheon choose application page

Now my site is deploying. While this happens a bunch of items update above the progress bar. They're all a bit nerdy, but at least I know something is happening. Why is my only option to visit my dashboard again? I want to try Drupal.

Pantheon site installer page

I land on the dashboard. Now I'm really confused. This all looks pretty geeky. I want to try Drupal not deal with code, connection modes and the like. If I stick around I might eventually click "Visit Development site", which doesn't really feel like trying Drupal.

Pantheon site dashboard

Now I'm asked to select a language. OK so Drupal supports multiple languages, that's nice. Let's select English so I can finally get to try Drupal.

Drupal installer, language selection

Next I need to choose an installation profile. What is an installation profile? Which one is best for me?

Drupal installer, choose installation profile

Now I need to create an account. About 10 minutes ago I already created an account. Why do I need to create another one? I also named my site earlier in the process.

Drupal installer, configuration form part 1
Drupal installer, configuration form part 2

Finally I am dropped into a Drupal 8 site. There is nothing to guide me on what to do next.

Drupal site homepage

I am left with a sense that setting up Contentful is super easy and Drupal is a lot of work. Most people wanting to try Drupal would have abandoned the process partway through. I would love to see the conversion stats for the Try Drupal service. They must be minuscule.

It is worth noting that Pantheon has the best user experience of the 3 companies. The process with 1&1 just dumps me at a hosting sign up page. How does that let me try Drupal?

Acquia drops onto a page where you select your role, then you're presented with some marketing stuff and a form to request a demo. That is unless you're running an ad blocker, then when you select your role you get an Ajax error.

The Try Drupal program generates revenue for the Drupal Association. This money helps fund development of the project. I'm well aware that the DA needs money. At the same time I wonder if it is worth it. For many people this is the first experience they have using Drupal.

The previous attempt to have added to the try Drupal page ultimately failed due to the financial implications. While this is disappointing I don't think is necessarily the answer either.

There need to be some minimum standards for the Try Drupal page. One of the key items is the number of clicks to get from d.o to a working demo site. Without this, the "Try Drupal" page will drive people away from the project, which isn't the intention.

If you're at DrupalCon Vienna and want to discuss this and other ways to improve the marketing of Drupal, please attend the marketing sprints.


September 14, 2017

New Dates for Human Relative + ‘Explorer Classroom’ Resources

During September, National Geographic is featuring the excavations of Homo naledi at Rising Star Cave in South Africa in their Explorer Classroom, in tune with new discoveries and the publishing of dates for this enigmatic little hominid. A Teacher’s Guide and Resources are available and classes can log in to see live updates from the […]

September 10, 2017

Guess the Artefact #3

This week’s Guess the Artefact challenge centres around an artefact used by generations of school children. There are some adults who may even have used these themselves when they were at school. It is interesting to see if modern students can recognise this object and work out how it was used. The picture below comes […]

September 09, 2017

Observing Reliability

Last year I wrote about how great my latest Thinkpad is [1] in response to a discussion about whether a Thinkpad is still the “Rolls Royce” of laptops.

It was a few months after writing that post that I realised that I omitted an important point. After I had that laptop for about a year the DVD drive broke and made annoying clicking sounds all the time in addition to not working. I removed the DVD drive and the result was that the laptop was lighter and used less power without missing any feature that I desired. As I had installed Debian on that laptop by copying the hard drive from my previous laptop I had never used the DVD drive for any purpose. After a while I got used to my laptop being like that and the gaping hole in the side of the laptop where the DVD drive used to be didn’t even register to me. I would prefer it if Lenovo sold Thinkpads in the T series without DVD drives, but it seems that only the laptops with tiny screens are designed to lack DVD drives.

For my use of laptops this doesn’t change the conclusion of my previous post. Now the T420 has been in service for almost 4 years which makes the cost of ownership about $75 per year. $1.50 per week as a tax deductible business expense is very cheap for such a nice laptop. About a year ago I installed a SSD in that laptop, it cost me about $250 from memory and made it significantly faster while also reducing heat problems. The depreciation on the SSD about doubles the cost of ownership of the laptop, but it’s still cheaper than a mobile phone and thus not in the category of things that are expected to last for a long time – while also giving longer service than phones usually do.

One thing that’s interesting to consider is the fact that I forgot about the broken DVD drive when writing about this. I guess every review has an unspoken caveat of “this works well for me but might suck badly for your use case”. But I wonder how many other things that are noteworthy I’m forgetting to put in reviews because they just don’t impact my use. I don’t think that I am unusual in this regard, so reading multiple reviews is the sensible thing to do.

TLS Authentication on Freenode and OFTC

In order to easily authenticate with IRC networks such as OFTC and Freenode, it is possible to use client TLS certificates (also known as SSL certificates). In fact, it turns out that it's very easy to setup both on irssi and on znc.

Generate your TLS certificate

On a machine with good entropy, run the following command to create a keypair that will last for 10 years:

openssl req -nodes -newkey rsa:2048 -keyout user.pem -x509 -days 3650 -out user.pem -subj "/CN=<your nick>"

Then extract your key fingerprint using this command:

openssl x509 -sha1 -noout -fingerprint -in user.pem | sed -e 's/^.*=//;s/://g'

Share your fingerprints with NickServ

On each IRC network, do this:

/msg NickServ IDENTIFY Password1!
/msg NickServ CERT ADD <your fingerprint>

in order to add your fingerprint to the access control list.

Configure ZNC

To configure znc, start by putting the key in the right place:

cp user.pem ~/.znc/users/<your nick>/networks/oftc/moddata/cert/

and then enable the built-in cert plugin for each network in ~/.znc/configs/znc.conf:

<Network oftc>
        LoadModule = cert
</Network>
<Network freenode>
        LoadModule = cert
</Network>

Configure irssi

For irssi, do the same thing but put the cert in ~/.irssi/user.pem and then change the OFTC entry in ~/.irssi/config to look like this:

  address = "";
  chatnet = "OFTC";
  port = "6697";
  use_tls = "yes";
  tls_cert = "~/.irssi/user.pem";
  tls_verify = "yes";
  autoconnect = "yes";

and the Freenode one to look like this:

  address = "";
  chatnet = "Freenode";
  port = "7000";
  use_tls = "yes";
  tls_cert = "~/.irssi/user.pem";
  tls_verify = "yes";
  autoconnect = "yes";

That's it. That's all you need to replace password authentication with a much stronger alternative.

September 07, 2017

Linux Plumbers Conference Sessions for Linux Security Summit Attendees

Folks attending the 2017 Linux Security Summit (LSS) next week may be also interested in attending the TPMs and Containers sessions at Linux Plumbers Conference (LPC) on the Wednesday.

The LPC TPMs microconf will be held in the morning and led by Matthew Garrett, while the containers microconf will be run by Stéphane Graber in the afternoon. Several security topics will be discussed in the containers session, including namespacing and stacking of LSMs, and namespacing of IMA.

Attendance on the Wednesday for LPC is at no extra cost for registered attendees of LSS.  Many thanks to the LPC organizers for arranging this!

There will be followup BOF sessions on LSM stacking and namespacing at LSS on Thursday, per the schedule.

This should be a very productive week for Linux security development: see you there!

September 06, 2017

Software Freedom Day 2017 and LUV Annual General Meeting

Sep 16 2017 12:00
Sep 16 2017 18:00
Electron Workshop, 31 Arden St. North Melbourne


Software Freedom Day 2017

It's that time of the year when we celebrate our freedoms in technology and raise a ruckus about all the freedoms that have been eroded away. The time of the year we look at how we might keep our increasingly digital lives under our own control and prevent prying eyes from seeing things they shouldn't. You guessed it: it's Software Freedom Day!

LUV would like to acknowledge Electron Workshop for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


September 05, 2017

Space station trio returns to Earth: NASA’s Peggy Whitson racks up 665-day record | GeekWire NASA astronaut Peggy Whitson and two other spacefliers capped off a record-setting orbital mission with their return from the International Space Station.

September 04, 2017

Understanding BlueStore, Ceph’s New Storage Backend

On June 1, 2017 I presented Understanding BlueStore, Ceph’s New Storage Backend at OpenStack Australia Day Melbourne. As the video is up (and Luminous is out!), I thought I’d take the opportunity to share it, and write up the questions I was asked at the end.

First, here’s the video:

The bit at the start where the audio cut out was me asking “Who’s familiar with Ceph?” At this point, most of the 70-odd people in the room put their hands up. I continued with “OK, so for the two people who aren’t…” then went into the introduction.

After the talk we had a Q&A session, which I’ve paraphrased and generally cleaned up here.

With BlueStore, can you still easily look at the objects like you can through the filesystem when you’re using FileStore?

There’s not a regular filesystem anymore, so you can’t just browse through it. However you can use `ceph-objectstore-tool` to “mount” an offline OSD’s data via FUSE and poke around that way. Some more information about this can be found in Sage Weil’s recent blog post: New in Luminous: BlueStore.

Do you have real life experience with BlueStore for how IOPS performance scales?

We (SUSE) haven’t released performance numbers yet, so I will instead refer you to Sage Weil’s slides from Vault 2017, and Allan Samuel’s slides from SCALE 15x, which together include a variety of performance graphs for different IO patterns and sizes. Also, you can expect to see more about this on the Ceph blog in the coming weeks.

What kind of stress testing has been done for corruption in BlueStore?

It’s well understood by everybody that it’s sort of important to stress test these things and that people really do care if their data goes away. Ceph has a huge battery of integration tests, various of which are run on a regular basis in the upstream labs against Ceph’s master and stable branches, others of which are run less frequently as needed. The various downstreams all also run independent testing and QA.

Wouldn’t it have made sense to try to enhance existing POSIX filesystems such as XFS, to make them do what Ceph needs?

Long answer: POSIX filesystems still need to provide POSIX semantics. Changing the way things work (or adding extensions to do what Ceph needs) in, say, XFS, assuming it’s possible at all, would be a big, long, scary, probably painful project.

Short answer: it’s really a different use case; better to build a storage engine that fits the use case, than shoehorn in one that doesn’t.

Best answer: go read New in Luminous: BlueStore ;-)

September 03, 2017

This Week in HASS – term 3, week 9

OpenSTEM’s ® Understanding Our World® Units are designed to cover 9 weeks of the term, because we understand that life happens. Sports carnivals, excursions and other special events are also necessary parts of the school year and even if the calendar runs according to plan, having a little bit of breathing space at the end of […]

September 01, 2017

memcmp() for POWER8 - part II

This entry is a followup to part I which you should absolutely read here before continuing on.

Where we left off

We concluded that while a vectorised memcmp() is a win, there are some cases where it won't quite perform.

The overhead of enabling ALTIVEC

In the kernel we explicitly don't touch ALTIVEC unless we need to. This means that in the general case we can leave the userspace registers in place and not have to do anything to service a syscall for a process.

This means that if we do want to use ALTIVEC in the kernel, there is some setup that must be done. Notably, we must enable the facility (a potentially time-consuming move to the MSR), save off the registers (if userspace was using them), and perform the inevitable restore later on.

If all this needs to be done for a memcmp() on the order of tens of bytes then it really wasn't worth it.

There are two reasons that memcmp() might only process a small number of bytes. The first, trivially detectable, is simply that the parameter n is small. The other is harder to detect: if the memcmp() is going to fail (return non-zero) early, then it also wasn't worth enabling ALTIVEC.

Detecting early failures

Right at the start of memcmp(), before enabling ALTIVEC, the first 64 bytes are checked using general purpose registers. Why the first 64 bytes? Well, why not? In a strange twist of fate, 64 bytes happens to be the amount of data held in four ALTIVEC registers (128 bits per register, so 16 bytes multiplied by 4), and by utter coincidence that happens to be the stride of the ALTIVEC compare loop.

What does this all look like

Unlike part I, the results appear slightly less consistent across the three runs of measurement, but there are some key differences from part I. The trends do appear to be the same across all three runs, just less pronounced - why this is so is unclear.

The difference between run two and run three clipped at deltas of 1000ns is interesting: Sample 2: Deltas below 1000ns


Sample 3: Deltas below 1000ns

The results are similar except for a spike in the amount of deltas in the unpatched kernel at around 600ns, which is not present in the first sample (deltas1) of data. There are a number of reasons why this spike could have appeared: it is possible that the kernel or hardware did something under the hood, or prefetching could have brought deltas for a memcmp() that would otherwise have yielded a greater delta down into the 600ns range.

What these two graphs both demonstrate quite clearly is that optimisations down at the sub-100ns end have resulted in more sub-100ns deltas for the patched kernel, a significant win over the original data. Zooming out and looking at a graph which includes deltas up to 5000ns shows that the sub-100ns optimisations haven't noticeably slowed the performance of long-duration memcmp() calls (Sample 2: Deltas below 5000ns).


The small amount of extra development effort has yielded tangible results in reducing the low-end memcmp() times. This second round of data collection and performance analysis only confirms that for any significant amount of comparison, a vectorised loop is significantly quicker.

The results obtained here show no downside to adopting this approach for all POWER8 and later chips, as this new version of the patch solves the performance regression for small compares.

August 30, 2017

An Overview of SSH

SSH (Secure Shell) is a secure means, mainly used on Linux and other UNIX-like systems, to access remote systems, designed as a replacement for a variety of insecure protocols (e.g. telnet, rlogin, etc). This presentation will cover the core usage of the protocol, its development (SSH-1 and SSH-2), the architecture (client-server, public key authentication), installation and implementation, some handy elaborations and enhancements, and real and imagined vulnerabilities.
Plenty of examples will be provided throughout, along with the opportunity to test the protocol.


August 29, 2017

LUV Main September 2017 Meeting: Cygwin and Virtualbox

Sep 5 2017 18:30
Sep 5 2017 20:30
The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053



  • Duncan Roe, Cygwin
  • Steve Roylance, Virtualbox

Cygwin is a large collection of GNU and Open Source tools which provide functionality similar to a Linux distribution on Windows.  It allows easy porting of many Unix programs without the need for extensive changes to the source code.


Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


August 28, 2017

Creative kid’s piano + vocal composition

An inspirational song from an Australian youngster.  He explains his background at the start.

August 27, 2017

This Week in HASS – term 3, week 8

This week our younger students are putting together a Class Museum, while older students are completing their Scientific Report. Foundation/Prep/Kindy to Year 3 Students in Foundation/Prep/Kindy (Units F.3 and F-1.3), as well as those in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3), are all putting together Class Museums of items of […]

August 24, 2017

Mental Health Resources for New Dads

Right now, one in seven new fathers experiences high levels of psychological distress and as many as one in ten experience depression or anxiety. Often distressed fathers remain unidentified and unsupported due to both a reluctance to seek help for themselves and low levels of community understanding that the transition to parenthood is a difficult period for fathers, as well as mothers.

The project is hoping to both increase understanding of stress and distress in new fathers and encourage new fathers to take action to manage their mental health.

This work is being informed by research commissioned by beyondblue into men’s experiences of psychological distress in the perinatal period.

Informed by the findings of the Healthy Dads research, three projects are underway to provide men with the knowledge, tools and support to stay resilient during the transition to fatherhood.

August 22, 2017

The Attention Economy

In May 2017, James Williams, a former Google employee and doctoral candidate researching design ethics at Oxford University, won the inaugural Nine Dots Prize.

James argues that digital technologies privilege our impulses over our intentions, and are gradually diminishing our ability to engage with the issues we most care about.

Possibly a neat followup on our earlier post on “busy-ness“.