Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

May 22, 2017

Announcing new high-level PKCS#11 HSM support for Python

Recently I’ve been working on a project that makes use of Thales HSM devices to encrypt/decrypt data. There are a number of ways to talk to the HSM, but the most straightforward from Linux is via PKCS#11. There have been a number of attempts to wrap the PKCS#11 spec for Python, based on SWIG, cffi, etc., but they were all (a) low level, (b) not very Pythonic, (c) saddled with terrible error handling, (d) broken, (e) inefficient for large files and (f) very difficult to fix.

Anyway, nearly all documentation on how to actually use PKCS#11 has to be discerned from C examples, so I’d developed a pretty good working knowledge of the C API, and I’d been wanting to learn Cython for a while, so I decided to write a new binding based on a high-level wrapper I’d put into my app. It’s designed to be accessible, pick sane defaults for you, use generators where appropriate to reduce work, stream large files, be introspectable in your programming environment and be easy to read and extend.

https://github.com/danni/python-pkcs11

It’s currently a work in progress, but it’s now available on PyPI and installable via pip. You can get a session on a device, create a symmetric key, find objects, and encrypt and decrypt data. The Cryptoki spec is quite large, so I’m focusing on the support that I need first, but it should be pretty straightforward for anyone who wants to add something else they need. I like to think I write reasonably clear, self-documenting code.
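
For a flavour of the high-level API, usage looks roughly like the following. The SoftHSM module path, token label and PIN below are placeholders, and the exact calls may still evolve while the library is a work in progress:

import pkcs11

# Placeholder module path, token label and PIN: substitute your own HSM's values.
lib = pkcs11.lib('/usr/lib/softhsm/libsofthsm2.so')
token = lib.get_token(token_label='DEMO')

with token.open(user_pin='1234') as session:
    # Generate an AES session key, then encrypt and decrypt some data with it.
    key = session.generate_key(pkcs11.KeyType.AES, 256)
    iv = session.generate_random(128)
    ciphertext = key.encrypt(b'some secret data', mechanism_param=iv)
    plaintext = key.decrypt(ciphertext, mechanism_param=iv)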

At the moment it’s only tested on SoftHSMv2 and the Thales nCipher Edge, which is what I have access to. If someone at Amazon wanted this to work flawlessly on CloudHSM, send me an account and I’ll do it :-P Then I can look at releasing my Django integrations for fields, storage, signing, etc.

May 21, 2017

This Week in HASS – term 2, week 6

This week students doing the Understanding Our World® program are exploring their environment and considering indigenous peoples. Younger students are learning about local history and planning a poster on a local issue. Older students are studying indigenous peoples around the world. All the students are working strongly on their main pieces of assessment for the term.

Foundation/Prep/Kindy to Year 3

Our youngest students, using the stand-alone Foundation/Prep/Kindy unit (F.2) are exploring the sense of touch in their environment this week. Students consider a range of fabrics and textiles and choose which ones match their favourite place, for inclusion in their model or collage. Students in integrated classes of Foundation/Prep/Kindy and Year 1 (Unit F.6), Year 1 students (Unit 1.2), Year 2 students (Unit 2.2) and Year 3 (Unit 3.2) are starting to prepare a poster on an issue regarding their school, or local park/heritage place, while considering the local history. These investigations should be based on the excursion from last week. Students will have 2 weeks to prepare their posters, for display either at the school or a local venue, such as the library or community hall.

Years 3 to 6

Students in Years 3 to 6 are continuing with their project on an explorer. Students in Year 3 (Unit 3.6) are examining Australian Aboriginal groups from extreme climate areas of Australia, such as the central deserts, or cold climate areas. Students then choose one of these groups to describe in their Student Workbook, and add to their presentation. Students in Year 4 (Unit 4.2) are studying indigenous peoples of Africa and South America. They will then select a group from the area visited by their explorer, to include in their presentation. Year 5 students (Unit 5.2) do the same with indigenous groups from North America; whilst year 6 students (Unit 6.2) have a wide range of resources on indigenous peoples from Asia to select for study and inclusion in their presentation. Resources are available on groups from across mainland Asia (such as the Mongols, Tatars, Rus, Han), as well as South-East Asia (such as Malay, Dyak, Dani etc.). This is the last section of work to be included in the presentation, and students will then finish their presentation and present it to the class.

May 19, 2017

Borrowing a Pencil

Student: Can I borrow a pencil?

Teacher: I don’t know. Can you?

Student: Yes. I might add that colloquial irregularities occur frequently in any language. Since you and the rest of our present company understood perfectly my intended meaning, being particular about the distinctions between “can” and “may” is purely pedantic and arguably pretentious.

Teacher: True, colloquialism and the judicious interpretation of context help us communicate with nuance, range, and efficiency. And yet, as your teacher, my job is to teach you to think about language with care and rigour. Understanding the shades of difference between one word and another, and thinking carefully about what you want to say, will give you greater power and versatility in your speech and writing.

Student: Point taken. May I have a pencil?

Teacher: No, you may not. We do not have pencils since the department cut funding for education again last year.

PostgreSQL date ranges in Django forms

Django’s postgres extensions support data types like DateRange, which is super useful when you want to query your database against dates; however, there is no form field to expose this in HTML.
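
For context, the model side might look something like this (the Booking model and its period field are purely illustrative):

# Illustrative model only: Booking and its period field are made-up names.
from django.contrib.postgres.fields import DateRangeField
from django.db import models


class Booking(models.Model):
    period = DateRangeField()

# Range lookups then work in queries, e.g.:
# Booking.objects.filter(period__contains=date(2017, 5, 20))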

Handily, Django 1.11 has made it super easy to write custom widgets with complex HTML.

Start with a form field based on MultiValueField:

from django import forms
from psycopg2.extras import DateRange


class DateRangeField(forms.MultiValueField):
    """
    A date range
    """

    widget = DateRangeWidget

    def __init__(self, **kwargs):
        fields = (
            forms.DateField(required=True),
            forms.DateField(required=True),
        )
        super().__init__(fields, **kwargs)

    def compress(self, values):
        # Combine the two cleaned dates into a DateRange with inclusive
        # bounds; an empty values list means no range was submitted.
        try:
            lower, upper = values
            return DateRange(lower=lower, upper=upper, bounds='[]')
        except ValueError:
            return None

The other side of a form field is a Widget:

from django import forms
from psycopg2.extras import DateRange


class DateRangeWidget(forms.MultiWidget):
    """Date range widget."""
    template_name = 'forms/widgets/daterange.html'

    def __init__(self, **kwargs):
        widgets = (
            forms.DateInput(),
            forms.DateInput(),
        )
        super().__init__(widgets, **kwargs)

    def decompress(self, value):
        # Split a DateRange back into the (lower, upper) pair expected by
        # the two DateInput sub-widgets.
        if isinstance(value, DateRange):
            return (value.lower, value.upper)
        elif value is None:
            return (None, None)
        else:
            return value

    class Media:
        css = {
            'all': ('//cdnjs.cloudflare.com/ajax/libs/jquery-date-range-picker/0.14.4/daterangepicker.min.css',)  # noqa: E501
        }

        js = (
            '//cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js',
            '//cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.min.js',
            '//cdnjs.cloudflare.com/ajax/libs/jquery-date-range-picker/0.14.4/jquery.daterangepicker.min.js',  # noqa: E501
        )

Finally we can write a template to use the jquery-date-range-picker:

{% for widget in widget.subwidgets %}
<input type="hidden" name="{{ widget.name }}"{% if widget.value != None %} value="{{ widget.value }}"{% endif %}{% include "django/forms/widgets/attrs.html" %} />
{% endfor %}

<div id='container_for_{{ widget.attrs.id }}'></div>

With a script block:

(function() {
    var format = 'D/M/YYYY';
    var isoFormat = 'YYYY-MM-DD';
    var startInput = $('#{{ widget.subwidgets.0.attrs.id }}');
    var endInput = $('#{{ widget.subwidgets.1.attrs.id }}');

    $('#{{ widget.attrs.id }}').dateRangePicker({
        inline: true,
        container: '#container_for_{{ widget.attrs.id }}',
        alwaysOpen: true,
        format: format,
        separator: ' ',
        getValue: function() {
            if (!startInput.val() || !endInput.val()) {
                return '';
            }

            var start = moment(startInput.val(), isoFormat);
            var end = moment(endInput.val(), isoFormat);

            return start.format(format) + ' ' + end.format(format);
        },
        setValue: function(s, start, end) {
            start = moment(start, format);
            end = moment(end, format);

            startInput.val(start.format(isoFormat));
            endInput.val(end.format(isoFormat));
        }
    });
})();

You can now use this DateRangeField in a form, retrieve it from cleaned_data for database queries or store it in a model DateRangeField.
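
Pulling it together, here is a minimal sketch of the field in use; the module paths, form and model names are illustrative (reusing the hypothetical Booking model from earlier):

from django import forms

# Assumes the DateRangeField/DateRangeWidget above live somewhere importable,
# e.g. yourapp/fields.py; Booking is the illustrative model from earlier.
from yourapp.fields import DateRangeField
from yourapp.models import Booking


class BookingForm(forms.Form):
    period = DateRangeField()


def find_overlapping(data):
    """Validate the posted data and query for overlapping bookings."""
    form = BookingForm(data=data)
    if not form.is_valid():
        return Booking.objects.none()
    # cleaned_data['period'] is a psycopg2 DateRange, ready for range lookups.
    return Booking.objects.filter(period__overlap=form.cleaned_data['period'])


# MultiWidget splits the field into <name>_0 and <name>_1 inputs, e.g.:
# find_overlapping({'period_0': '2017-05-20', 'period_1': '2017-05-27'})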

May 18, 2017

FreeDV 700D – First Over The Air Tests

OK so after several attempts I finally managed to push a 700D signal from my QTH in Adelaide (PF95gc) 1170km to the Manly Warringah Radio Society WebSDR in Sydney (QF56oh). Bumped my power up a little, raised my antenna, and hunted around until I found a relatively birdie-free frequency, as even low level birdies are stronger than my very weak signal.

Have a listen:

  • Analog SSB
  • 700D modem
  • Decoded 700D DV

Here is a spectrogram (i.e. a waterfall with the water falling from left to right) of the analog then digital signal:

Faint birdies (tones) can be seen as horizontal lines at 1000 and 2000 Hz. You can see the slow fading on the digital signal as it dips beneath the noise every few seconds.

The scatter diagram looks like bugs (bits?) splattered on a windscreen:

The slow fading causes the errors to bounce up and down over time (above). The packet error rate (measured on the 28 bit Codec 2 frames) is 26%. This is rather high, but I would argue we have intelligible speech here, and that the intelligibility is better than SSB.

Yayyyyyyy.

I used 4 interleaver frames, which is about 640ms. Perhaps a longer interleaver would ride over the fades.

I’m impressed! Conditions were pretty bad on 40m, the band was “closed”. This is day 1 of FreeDV 700D. It will improve from here.

Command Lines

The Octave demodulator doing its thing:

octave:56> ofdm_rx("~/Desktop/700d_part2/manly5_4.wav",4, "manly5_4.err")
Coded BER: 0.0206 Tbits: 12544 Terrs:   259 PER: 0.2612 Tpacketerrs:   117 Tpackets:   448
Raw BER..: 0.0381 Tbits: 26432 Terrs:  1007

Not sure if I’m working out raw and coded BER right as they are not usually this close. Will look into that. Maybe all the errors are in the fades, where both the demod and LDPC decoder fall in a heap.

The ofdm_tx/ofdm_rx system transmits test frames of known data, so we can work out the BER. By xor-ing the tx and rx bits we can generate an error pattern that can be used to insert errors into a Codec 2 700C bit stream, using this magic incantation:

~/codec2-dev/build_linux/src$ sox ~/Desktop/cq_freedv_8k.wav ~/Desktop/cq_freedv_8k.wav -t raw -r 8000 -s -2 - | ./c2enc 700C - - | ./insert_errors - - ../../octave/manly5_4.err 28 | ./c2dec 700C - - | sox -t raw -r 8000 -s -2 - ~/Desktop/manly5_4_ldpc224_4.wav

It’s just like the real thing. Trust me. And it gives me a feel for how the system is hanging together earlier rather than after months more development.

Links

Lots of links on the Towards FreeDV 700D post earlier today.

The Collapsing Empire

ISBN: 076538888X
LibraryThing
This is a fun fast read, as is everything by Mr Scalzi. The basic premise here is that of a set of interdependent colonies that are about to lose their ability to trade with each other, and are therefore doomed. Oh, except they don't know that and are busy having petty trade wars instead. It isn't a super intellectual read, but it is fun and does leave me wanting to know what happens to the empire...

Tags for this post: book john_scalzi
Related posts: The Last Colony ; The End of All Things; Zoe's Tale; Agent to the Stars; Redshirts; Fuzzy Nation

Towards FreeDV 700D

For the last two months I have been beavering away at FreeDV 700D, as part of my eternal quest to show SSB whose house it is.

This work was inspired by Bill, VK5DSP, who kindly developed some short LDPC codes for me; and suggested I could improve on the synchronisation overhead of the cohpsk modem. As an aside – Bill is part of the communications payload team for the QB50 SUSat Cubesat – currently parked at the ISS awaiting launch! Very Kerbal.

Anyhoo – I’ve developed a new OFDM modem that has less synchronisation overhead, works better, and occupies less RF bandwidth (1000 Hz) than the cohpsk modem used for 700C. I have wrapped my head around such arcane mysteries as coding gain and now have LDPC codes playing nicely over that nasty old HF channel.

It looks like FreeDV 700D has a gain of 4dB over 700C. This means error free operation at -2dB SNR for AWGN, and 2dB SNR over a challenging fast fading HF channel (two paths, 1Hz Doppler, 1ms delay).

Major Innovations:

  1. An OFDM modem with low overhead (small Eb/No penalty) synchronisation, even on fading channels.
  2. Use of LDPC codes.
  3. Long (several seconds) interleaver.
  4. Ruthlessly hunting down any dB’s leaking out of my performance curves.

One nasty surprise was that after a closer look at the short (224,112) LDPC codes, I discovered they don’t give any real improvement over the simple diversity scheme used for FreeDV 700C. However with long interleaving (several seconds) of the short codes, or a long (few thousand bit/several seconds) LDPC code we get an additional 3dB gain. The interleaver allows us to ride over the ups and downs of the fast fading channel.

Interleaving has a few downsides. One is delay; the other is that when it fails you lose a big chunk of data.

I’ve avoided delay until now, using the argument that low delay is essential for PTT radio. However I’d like to test long delays and see what the trade-off/end-user experience is. Once someone is speaking – i.e. in the middle of an “over” – I suspect we won’t notice the delay. However it could get confusing in fast handovers. This is experimental radio, designed for very low SNRs, so let’s give it a try.

We could send the uncoded data without interleaving – allowing low delay decoding when the SNR is high. A switch could control LDPC decoding, allowing a user selection of coded-high-delay or uncoded-low-delay, like a noise blanker. Mark, VK5QI, has suggested the interleaver depth also be adjustable, which I think is a good idea. The decoder could automagically determine interleaver depth by attempting decoding over a range of depths (1, 2, 4, 8, 16 frames etc.) and noting when the LDPC code converges.

Or maybe we could use a small, low delay, interleaver, and just live with the fades (like we do on SSB) and get the vocoder to mute or interpolate over them, and enjoy low or modest latency.

I’m also interested to see how the LDPC code mops up errors like static bursts and other real-world HF rubbish that SSB subjects us to even on high SNR channels.

So, lots of room for experimentation. At this stage it’s all in GNU Octave simulation form, no C implementation or FreeDV GUI mode exists yet.

Lots more I could write about the engineering behind the modem, but let’s leave it there for now and take a look at some results.

Results

Here is a rather busy set of BER versus SNR curves (click for larger version, and here is an EPS file version):

The 10^-2 BER line is where the codec gets easy to listen to.

Observe far-right green (700C) to black (700D candidate with lots of interleaving) HF curves, which are about 4dB apart. Also the far-left cyan shows 700D working at -3dB SNR on AWGN channels. One dB later (-2dB) LDPC magic stomps all errors.

Here are some speech/modem tone samples on simulated channels:

  • AWGN -2dB SNR: Analog SSB, 700D modem, 700D DV
  • HF +0.8dB SNR: Analog SSB, 700D modem, 700D DV

The analog samples have a 300 to 2600 Hz BPF applied at the tx and rx side, to model an analog SSB radio. The analog SSB and 700D modem signals have exactly the same RMS power and channel models applied to them. In the AWGN channel, it’s difficult to hear the 700D modem signal, however the SSB is audible as it has peaks 9dB above the average.

OK so the 700 bit/s vocoder (Codec 2 700C) speech quality is not great even with no errors, but we have found it supports conversations just fine, and there is plenty of room for improvement. The same techniques (OFDM modem, LDPC interleaving) can also be applied to high quality/high bit rate/high SNR voice modes. But first – I want to push this low SNR DV work through to completion.

Simulation Code

This list summarises the GNU Octave code I’ve developed, as I’ll probably forget the details when I move onto the next project. Feel free to try any of these scripts and let me know what I’ve forgotten to check in. It’s all checked into codec2-dev/octave.

  • ldpc.m: Wrapper functions for using the CML library LDPC functions with Octave
  • ldpcut.m: Unit test/demo for ldpc.m
  • ldpc_qpsk.m: Runs simulations for a bunch of codes for AWGN and HF channels using a simulated QPSK OFDM modem. Runs at Rs (the symbol rate); assumes an ideal modem
  • ldpc_short.m: Simulation used for the initial short LDPC code investigation using an ideal rate Rs BPSK modem. A bunch of codes and interleaving schemes tested
  • ofdm_lib.m: Library of OFDM modem functions
  • ofdm_rs.m: Rate Rs OFDM modem simulation used to develop the low overhead pilot symbol phase estimation scheme
  • ofmd_dev.m: Rate Fs OFDM modem simulation. This is the real deal, with timing and frequency offset estimation, LDPC integration, and tests for coarse timing and frequency offset estimation
  • ofdm_tx.m: Generates test frames of OFDM raw file samples to play over your HF radio
  • ofdm_rx.m: Receives raw file samples from your HF radio, demodulates and decodes 700D, and measures BER and PER

Sing Along

Just this morning I tried to radiate some FreeDV 700D from my home to some interstate SDRs on 40M, but alas conditions were against me. I did manage to radiate across my bench so I know the waveform does make it through real HF radios OK.

Please try sending these files through your radio:

  • ssb_otx_224_32.wav: 32 frame (5.12 second) interleaver
  • ssb_otx_224_4.wav: 4 frame (0.64 second) interleaver

Get someone (or a websdr) to sample the received signal (8000Hz sample rate, 16 bit mono), and email me the received file.

Or you can decode it yourself using:

octave:10> ofdm_rx('~/Desktop/otx_224_32_mysample.wav',32);

or:

octave:10> ofdm_rx('~/Desktop/otx_224_4_mysample.wav',4);

The rx side is still a bit rough; I’ll refine it as I try the system with real off-air signals and flush out the bugs.

Update: FreeDV 700D – First Over The Air Tests.

Links

QB50 SUSat cubesat – Bill and team’s Cubesat currently parked at the ISS!
Codec 2 700C and Short LDPC Codes
Testing FreeDV 700C
Modems for HF Digital Voice Part 1
Modems for HF Digital Voice Part 2
FreeDV 700D – First Over The Air Tests

May 16, 2017

LUV Beginners May Meeting: Dealing with Security as a Linux Desktop User

May 20 2017 12:30
May 20 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

This presentation will introduce the various aspects of IT security that Linux Desktop users may be grappling with on an ongoing basis. The target audience of the talk is beginners (newbies) who might have had bad experiences using the Windows OS all these years and don't know what to expect when tiptoeing into the new world of Linux. General Linux users who don't always pay much attention to security may also find it interesting to share some of the common-sense practices that are essential to using our computers safely.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Recovering from an unbootable Ubuntu encrypted LVM root partition

A laptop that was installed using the default Ubuntu 16.10 (yakkety) full-disk encryption option stopped booting after receiving a kernel update somewhere on the way to Ubuntu 17.04 (zesty).

After showing the boot screen for about 30 seconds, a busybox shell pops up:

BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)
Enter 'help' for list of built-in commands.

(initramfs)

Typing exit will display more information about the failure before bringing us back to the same busybox shell:

Gave up waiting for root device. Common problems:
  - Boot args (cat /proc/cmdline)
    - Check rootdelay= (did the system wait long enough?)
    - Check root= (did the system wait for the right device?)
  - Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/ubuntu--vg-root does not exist. Dropping to a shell! 

BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)   
Enter 'help' for list of built-in commands.  

(initramfs)

which now complains that the /dev/mapper/ubuntu--vg-root root partition (which uses LUKS and LVM) cannot be found.

There is some comprehensive advice out there but it didn't quite work for me. This is how I ended up resolving the problem.

Boot using a USB installation disk

First, create bootable USB disk using the latest Ubuntu installer:

  1. Download a desktop image.
  2. Copy the ISO directly on the USB stick (overwriting it in the process):

     dd if=ubuntu.iso of=/dev/sdc1
    

and boot the system using that USB stick (hold the option key during boot on Apple hardware).

Mount the encrypted partition

Assuming a drive which is partitioned this way:

  • /dev/sda1: EFI partition
  • /dev/sda2: unencrypted boot partition
  • /dev/sda3: encrypted LVM partition

Open a terminal and mount the required partitions:

cryptsetup luksOpen /dev/sda3 sda3_crypt
vgchange -ay
mount /dev/mapper/ubuntu--vg-root /mnt
mount /dev/sda2 /mnt/boot
mount -t proc proc /mnt/proc
mount -o bind /dev /mnt/dev

Note:

  • When running cryptsetup luksOpen, you must use the same name as the one that is in /etc/crypttab on the root partition (sda3_crypt in this example).

  • All of these partitions must be present (including /proc and /dev) for the initramfs scripts to do all of their work. If you see errors or warnings, you must resolve them.

Regenerate the initramfs on the boot partition

Then "enter" the root partition using:

chroot /mnt

and make sure that the lvm2 package is installed:

apt install lvm2

before regenerating the initramfs for all of the installed kernels:

update-initramfs -c -k all

May 12, 2017

Python3 venvs for people who are old and grumpy

I've been using virtualenvwrapper to make venvs for python2 for probably six or so years. I know it, and understand it. Now some bad man (hi Ramon!) is making me do python3, and virtualenvwrapper just isn't a thing over there as best as I can tell.

So how do I make a venv? It's really not too bad...

First, install the dependencies:

    git clone git://github.com/yyuu/pyenv.git .pyenv
    echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
    echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
    echo 'eval "$(pyenv init -)"' >> ~/.bashrc
    git clone https://github.com/yyuu/pyenv-virtualenv.git ~/.pyenv/plugins/pyenv-virtualenv
    source ~/.bashrc
    


Now to make a venv, do something like this (in this case, infrasot is the name of the venv):

    mkdir -p ~/.virtualenvs/pyenv-infrasot
    cd ~/.virtualenvs/pyenv-infrasot
    pyenv virtualenv system infrasot
    


You can see your installed venvs like this:

    $ pyenv versions
    * system (set by /home/user/.pyenv/version)
      infrasot
    


Where system is the system installed python, and not a venv. To activate and deactivate the venv, do this:

    $ pyenv activate infrasot
    $ ... stuff you're doing ...
    $ pyenv deactivate
    


I'll probably write wrappers at some point so that this looks like virtualenvwrapper, but it's good enough for now.

Tags for this post: python venv virtualenvwrapper python3
Related posts: Implementing SCP with paramiko; Packet capture in python; A pythonic example of recording metrics about ephemeral scripts with prometheus; mbot: new hotness in Google Talk bots; Calculating a SSH host key with paramiko; Twisted conch


May 11, 2017

Assessments

Well, NAPLAN is behind us for another year and so we can all concentrate on curriculum work again! This year we have updated our assessment material to make it even easier to map the answers in the Student Workbooks with the curriculum codes. Remember, our units integrate across several curriculum areas. The model answers now contain colour coded curriculum codes. These numbers refer to specific curriculum strands, which are now also listed in our Assessment Guides. In the back of each Assessment Guide is a colour coded table – Gold for History; Green for Geography; Light Green for HASS Skills; Orange for Civics and Citizenship; Purple for Economics and Business and Blue for Science. Each curriculum code is included in this table, along with the rubric for grades A to E, or AP to BA for the younger students.

These updates mean that teachers can now match each question to the specific curriculum area being assessed, thus simplifying the process for grading, and reporting on, each curriculum area. So, if you need to report separate grades for Science and HASS, or even History and Civics and Citizenship, you can tally the results across the questions which address those subject areas, to obtain an overall grade for each subject. Since this can be done on a question-by-question basis, you can even keep a running tally of how each student is doing in each subject area through the term, by assessing those questions they have answered, on a regular basis.

Please make sure that you have the latest updates of both the model answers and the assessment guides for each unit that you are teaching, with the codes as shown here. If you don’t have the latest updates, please download them from our site. Log in with your account, go to your downloads (click on “My Account” on the top right and then “Downloads” on the left). Find the Model Answers PDF for your unit(s) in the list of available downloads and click the button(s) to download each one again. Email us if there are any issues.

How to get Fedora working on a System 76 Oryx Pro

Problems:
  a) No sound
  b) Only onboard screen, does not recognise HDMI or Mini-DP

Solutions:
  1) Install Korora
  2) Make sure you're not using an outdated kernel that doesn't have the snd-hda-intel driver available.
  3) dnf install akmod-nvidia xorg-x11-drv-nvidia

Extra resources: http://sub-pop.net/post/fedora-23-on-system76-oryx-pro/

LCA 2017 roundup

I've just come back from LCA at the Wrest Point hotel and fun complex in Hobart, over the 16th to the 20th of January. It was a really great conference and keeps the bar for both social and technical enjoyment at a high level.

I stayed at a nearby AirBNB property so I could have my own kitchenette - I prefer to be able to at least make my own breakfast rather than purchase it - and to give me a little exercise each day walking to and from the conference. Having the conference in the same building as a hotel was a good thing, though, as it both simplified accommodation for many attendees and meant that many other facilities were available. LCA this year provided lunch, which was a great relief as it meant more time to socialise and learn and it also spared the 'nearby' cafes and the hotel's restaurants from a huge overload. The catering worked very well.

From the first keynote right to the last closing ceremony, the standard was very high. I enjoyed all the keynotes - they really challenged us in many different ways. Pia gave us a positive view of the role of free, open source software in making the world a better place. Dan made us think of what happens to projects when they stop, for whatever reason. Nadia made us aware of the social problems facing maintainers of FOSS - a topic close to my heart, as I see the way we use many interdependent pieces of software as in conflict with users' social expectations that we produce some kind of seamless, smooth, cohesive whole for their consumption. And Robert asked us to really question our relationship with our users and to look at the "four freedoms" in terms of how we might help everyone, even people not using FOSS. The four keynotes really linked together well - an amazing piece of good work compared to other years - and I think gave us new drive.

I never had a session where I didn't want to see something - which has not always been true for LCA - and quite often I skipped seeing something I wanted to see in order to see something even more interesting. While the miniconferences sometimes lacked the technical punch or speaker polish, they were still all good and had something interesting to learn. I liked the variety of miniconf topics as well.

Standout presentations for me were:

  • Tom Eastman talking about building application servers - which, in a reversal of the 'cloud' methodology, have to sit inside someone else's infrastructure and maintain a network connection to their owners.
  • Christoph Lameter talking about making kernel objects movable - particularly the inode and dentry caches. Memory fragmentation affects machines with long uptimes, and it was fascinating to hear Matthew Wilcox, Dave Chinner, Keith Packard and Christoph talking about how to fix some of these issues. That's the kind of opportunity that a conference like LCA provides.
  • James Dumay’s talk on Blue Ocean, a new look for Jenkins. It really gives Jenkins a modern, interactive look, and I hope this becomes the new default.

Maths Challenge (Basic Operations)

As we are working on expanding our resources in the Maths realm, we thought it would be fun to start a little game here.

Remember “Letters and Numbers” on SBS? (Countdown in UK, Cijfers en Letters in The Netherlands and Belgium, originally Des Chiffres et des Lettres in France).

The core rules for the numbers game are: you get 6 numbers, to use with basic operations (add, subtract, multiply, divide) to get as close as possible to a three-digit target number. You can only use each number once, but you don’t have to use all the numbers. No intermediate result is allowed to be negative. For example, with the numbers 2, 5, 8 and 10 and a target of 90, one solution is (10 × 8) + (2 × 5) = 90.

Now try this for practice:

Your 6 numbers (4 small, 2 large):    1     9     6     9     25     75

Your target: 316

Comment on this post with your solution (full working)!  We’re not worrying about a time limit, as it’s about the problem solving.

May 10, 2017

Cafe Dark Ages

Today, like most mornings, I biked to a cafe to hack on my laptop while slurping on iced coffee. Exercise, fresh air, sugar, caffeine and R&D. On this lovely sunny Autumn day I’m tapping away on my lappy, teasing bugs out of my latest digital radio system.

Creating new knowledge is a slow, tedious, business.

I test each small change by running an experiment several hundred thousand times, using simulation software on my laptop. R&D – Science by another name – is hard. One in ten of my ideas actually work, despite being at the peak of my career, having a PhD in the field, and help from many very intelligent peers.

A different process is going on at the table next to me. An “Integrative Health Consultant” is going about her business, speaking to a young client.

In an earnest yet authoritative Doctor-voice the “consultant” revs up with ill-informed dietary advice, moves on to over-priced under-performing products that the consultant just happens to sell, and ends up with a thinly disguised invitation to join her Multi-Level-Marketing (MLM) organisation. With a few side journeys through anti-vaxer land, conspiracy theories, organic food, anti-carbo and anti-gluten, sprinkled with disparaging remarks on Science, evidence based medicine and an inspired stab at dissing oncology (“I know this guy who had chemo and still died!”). All heavily backed by n=1 anecdotes.

A hobby of mine is critical thinking, so I am aware that most of their conversation is bullshit. I know how new knowledge is found (see above) and it’s not from Facebook.

But this post is not about the arguments of alt-med versus evidence based medicine. Been there, done that.

Here is what bothers me. These were both good people, who more or less believe in what they say. They are not stupid, they are intelligent and want to help people get and stay healthy. I have friends and family that I love who believe this crap. But they are hurting society and making people sicker.

Steering people away from modern, evidence based medicine kills people. Someone who is persuaded to see a naturopath rather than an oncologist will find out too late the price of well-meaning ignorance. Anti-vaxers hurt, maim, and kill for their beliefs. I shudder to think of the wasted lives and billions of dollars that could be spent on far better outcomes than lining the pockets of snake oil salespeople.

There is some encouraging news. The Australian Government has started removing social security benefits from people who don’t vaccinate. The Nursing and Midwifery Board is also threatening to take action against Nurses who push an anti-vaccination stance.

But this is beating people with a stick, where is the carrot?

Doctors in the Dark Ages were good people. They really believed leeches, blood letting and prayer were helping the patients they loved. But those beliefs sustained untold human misery. The difference with today?

Science, Education, and Policy.

May 09, 2017

Get a 50% discount during NAPLAN week 2017

That’s right. Use the NAPLAN17 coupon code to receive a 50% discount on any base PDF resource, teacher unit bundle or subscription during this NAPLAN week, up to Sunday 14th May 2017. This offer is valid for anyone: existing and new customers, subscribers, and there are no other restrictions.

Why? Well, as you know we feel very strongly that good materials help awesome teachers deliver excellent outcomes. And while standardised assessment can assist a teacher with that, the way that NAPLAN results are now aggregated for comparing schools (including by the media) does little to improve either standards, the wellbeing and success of students, or the wellbeing of teachers.  Of course, NAPLAN “just is”, for now, but we’d like to support every teacher and school as they focus on the core teaching materials that help our students’ literacy, numeracy, general knowledge and skills. We’re here to help!

If you’re an existing customer, see if there are any units you’ve been wanting to get anyway. If you are a might-be-new-customer of OpenSTEM, we look forward to welcoming you!

May 07, 2017

This Week in HASS – term 2, week 4

It’s NAPLAN week and that means time is short! Fortunately, the Understanding Our World® program is based on 9 week units, which means that if you run out of time in any particular week, it’s not a disaster. Furthermore, we have made sure that there is plenty of catch-up time within the lessons, so that there is no need to feel rushed. This week students are getting into the nitty gritty of their term projects. Our youngest students are studying their surroundings at school and in the local area. Older students are getting to the core of their research projects.

Foundation to Year 3

Students in our standalone Foundation/Kindy/Prep class (unit F.2) are starting to build a model of their Favourite Place. It is the teacher’s choice whether they build a diorama, make a poster or collage, or how this is done in class. This week students start by drawing or cutting out pictures to show aspects of their favourite place. Students in an integrated Foundation/Kindy/Prep (unit F.6) and Year 1 class are using their senses to investigate their class and school – what can we see, hear, smell, feel and taste? Some ideas can be found in resources such as My Favourite Sounds, and the Teacher Handbook also contains lots of ideas for these investigations. Students in Years 1 (unit 1.2), 2 (unit 2.2) and 3 (unit 3.2) are also discussing how the school and local area have changed through time. The teacher can use old maps, photos or newspaper reports to guide students through these discussions. What information is available in the school? What do local families remember?

Years 3 to 6

Students in Year 3 (unit 3.6), 4 (unit 4.2), 5 (unit 5.2) and 6 (unit 6.2) are continuing to research their explorer. This week year 3 students are focusing on the climates encountered by their explorer. Resources such as Climate Zones of Australia and Climate Zones of the World can help the class to identify these climate areas. Year 4 students examine Environments in Africa and South America, in order to discuss the environments encountered by their explorer. Students in Year 5 can read up about the environment encountered by their explorer in North America, and Year 6 students examine the Environments of Asia. In each case, the student workbook guides the student through this investigation and helps them to isolate pertinent information to include in their presentation. This helps students to gain an understanding of how to research a topic and derive an understanding of what information they need to consider. Teachers can use the workbook to check in and see how students are travelling in their progress towards completing the project, as well as their understanding of the content covered.

May 05, 2017

Speaking in May 2017

It was a big April if you’re in the MySQL ecosystem, so I am looking forward to other events that have a different focus and a different base, so to speak. See you at:

  • rootconf – May 11-12 2017 – Bangalore, India. My first Rootconf was last year, and it was a great event; I look forward to going there again this year, to talk about capacity planning for your databases. If you register with this link you get a 10% discount.
  • Open Source Data Center Conference – May 16-18 2017 – Berlin, Germany. I’ve enjoyed my trips to OSDC in the last few years, and they’re on their last tickets now – so register if you plan to go!

May 03, 2017

How to Really Clean a Roomba

The official iRobot Roomba instructional videos show a Roomba doing its thing in an immaculately clean house. When it comes time to clean the Roomba itself, an immaculately manicured woman empties a sprinkling of dirt from the Roomba’s hopper into a bin, flicks no dust at all off the rotor brush and then delicately grooms the main brush, before putting the Roomba back on to charge.

It turns out the cleaning procedure is a bit more involved for two long-haired adults and three cats living on a farm. Note that the terminology used in the instructions below was made up by me just now, and may or may not match what’s in the Roomba manual. Also, our Roomba is named Neville.

First, assemble some tools. You will need at least two screwdrivers (one phillips, one slotted), the round red Roomba brush cleaning thingy, and a good sharp knife. You will not need the useless flat red Roomba cleaning thingy the woman in the official video used to groom the main brush.

01-tools

Brace yourself, then turn the Roomba over (here we see that Neville had an unfortunate encounter with some old cat-related mess, in addition to the usual dirt, mud, hair, straw, wood shavings, chicken feathers, etc.)

02-start-dirty

Remove the hopper:

03-empty-hopper

Empty the hopper:

04-hopper-empty

Check whether the fan inside the hopper looks like it’s clogged. It’s probably good this time (Neville hasn’t accidentally been run with one filter missing lately), but we may as well open it up anyway.

05-hopper-top

Take the top off:

06-hopper-open

Take the filter plate off:

07-hopper-open-more

Take the fan cowling off. Only a bit of furry dusty gunk:

08-hopper-fan

Remove the furry dusty gunk:

09-hopper-clean

Next, pop open the roller enclosure:

10-rollers-open

Remove the rollers and take their end caps off:

11-rollers-out

Pull the brush roller through the round Roomba brush cleaning thingy:

12-brush-roller-cleaning

This will remove most of the hair, pine needles and straw:

13-brush-roller-gunk

Do it again to remove the rest of the hair, pine needles and straw:

14-brush-roller-almost-clean

Check the end of the axle for even more hair:

15-brush-roller-end

This can be removed using your knife:

16-brush-roller-end-clean

The rubber roller also needs a good bit of knife action:

17-rubber-roller-knife

18-rubber-roller-knife-end

The rubber roller probably really needs to be replaced at this point (it’s getting a bit shredded), but I’ll do that next time.

19-rubber-roller-clean

Clean the inside of the roller enclosure using a hand-held Dyson vacuum cleaner:

20-inside-clean-vacuum

That didn’t work very well. I assume Neville managed to escape into the bathroom recently and got a bit wet (he’s a free spirit).

21-inside-vaccumed

Rubbing with a damp cloth helps somewhat:

22-inside-wiped

But this Orange Power stuff is even better:

23-orange-power

See?

24-inside-clean

Next, use your knife to liberate the rotor brush:

25-rotor-knife

Almost there, but we need to take the whole thing off:

26-rotor-partial-clean

Yikes:

27-rotor-off

Note to self: buy new rotor brush.

28-rotor-clean

One of Neville’s wheels has horrible gunk stuck in its tread. Spraying with Orange Power and wiping helps somewhat (we’ll come back to this later):

29-wheel-gunk

Use a screwdriver to pop the little front wheel out, and cut the hair off its axle with a knife. The axle could probably use some WD40 too:

30-front-wheel-off

Remove the entire bottom plate:

31-bottom-off

Use that hand-held vacuum cleaner again to get rid of the dust balls:

32-bottom-vacuum

Use your fingers or a screwdriver to remove small chicken and/or duck feathers from the wheel housing:

33-wheel-feather-01

33-wheel-feather-02

Use a chopstick to scrape the remaining gunk out of the wheel tread:

34-wheel-chopstick

Voila!

35-roomba-clean

Here’s what came out of the poor little guy:

36-roomba-gunk

Finally, reassemble everything, and Neville is ready for next time (or, at least, is ready to go back on the charger):

37-roomba-done

API, ABI and backwards compatibility are a hard necessity

Recently, I was reading a thread on LKML on a proposal to change the behavior of the open system call when confronted with unknown flags. The thread is worth a read, as the topic of augmenting things that probably exist by accident to be “better” is always interesting, as is the definition of “better”.

Keeping API and/or ABI compatibility is something that isn’t a new problem, and it’s one that people are pretty good at sometimes messing up.

This problem does not go away just because “we have cloud now”. In any distributed system, in order to upgrade it (or “be agile” as the kids are calling it), you by definition are going to have either downtime or at least two versions running concurrently. Thus, you have to have your interfaces/RPCs/APIs/ABIs/protocols/whatever cope with changes.

You cannot instantly upgrade the world; it happens gradually. You also have to design for at least three concurrent versions running. One is the original, the second is your upgrade, and the third is the urgent fix for when the upgrade turns out to be broken in some new way you only discover in production.

So, the way you do this? Never ever EVER design for N-1 compatibility only. Design for going back a long way, much longer than you officially support. You want to have a design and programming culture of backwards compatibility to ensure you can both do new and exciting things and experiment off to the side.
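
As a toy illustration of the principle (nothing to do with the actual kernel patch, and all names below are made up), here is a sketch of a request handler that accepts a wide window of protocol versions and rejects unknown flags loudly instead of silently dropping them:

# Toy sketch only: SUPPORTED_FLAGS, SUPPORTED_VERSIONS and handle_request
# are illustrative names, not from the LKML thread or any real API.
SUPPORTED_FLAGS = {"O_READ", "O_WRITE", "O_APPEND"}
SUPPORTED_VERSIONS = range(1, 6)  # accept a wide window, not just N and N-1


def handle_request(version, flags):
    if version not in SUPPORTED_VERSIONS:
        raise ValueError("unsupported protocol version %d" % version)
    unknown = set(flags) - SUPPORTED_FLAGS
    if unknown:
        # Rejecting unknown flags lets callers probe for new features;
        # silently ignoring them hides missing support until it is too late.
        raise ValueError("unknown flags: %s" % sorted(unknown))
    return "ok (version %d, flags %s)" % (version, sorted(flags))


print(handle_request(3, {"O_READ", "O_APPEND"}))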

It’s worth going back and rereading Rusty’s API levels posts from 2008.

April 30, 2017

This Week in HASS – term 2, week 3

This week all of our students start to get into the focus areas of their units. For our youngest students that means starting to examine their “Favourite Place” – a multi-sensory examination which help them to explore a range of different kinds of experiences as they build a representation of their Favourite Place. Students in Years 1 to 3 start mapping their local area and students in Years 3 to 6 start their research topics for the term, each choosing a different explorer to investigate.

Foundation/Kindy/Prep to Year 3

Students doing our stand-alone Foundation/Kindy/Prep unit (F.2) start examining the concept of a Favourite Place this week. This week is an introduction to a 6 week investigation, using all their senses to consider different aspects of places. They are focusing on thinking about what makes their favourite place special to them and how different people like different places. This provides great opportunities for practising skills of considering alternate points of view, having respectful discussions and accepting that others might have opinions different to their own, but no less valid. Students in integrated Foundation/Kindy/Prep (unit F.6) classes and in Years 1 (unit 1.2), 2 (unit 2.2) and 3 (unit 3.2) are doing some mapping this week, learning to represent school buildings, open areas, roads, houses, shops etc in a 2 dimensional plan. This exercise forms the foundation for an examination of the school and local landscape over the next few weeks.

Years 3 to 6

Students in Years 3 to 6 start their research projects this week. Students doing unit 3.6, Exploring Climates, will be investigating people who have explored extreme climates. Options include the first people to reach Australia during the Ice Age, Aboriginal people who lived in Australia’s central deserts, Europeans who explored central Australia, such as Sturt, Leichhardt and others. Students doing unit 4.2 will be investigating explorers of Africa and South America, including Ferdinand Magellan (and Elcano), Walter Raleigh, Amerigo Vespucci and many others. Students doing unit 5.2  are investigating explorers of North America. Far beyond Christopher Columbus, choices include Vikings such as Eric the Red, Leif Erikson and Bjarni Herjolfsson; Vitus Bering (after whom the Bering Strait is named), the French in the colony of Quebec, such as Jacques Cartier, Samuel de Champlain and Pierre François-Xavier de Charlevoix. Some 19th century women such as Isabella Bird (pictured on right) and Nellie Bly are also provided as options for research. Unit 6.2 examines explorers of Asia. In this unit, Year 6 students are encouraged to move beyond a Eurocentric approach to exploration and consider explorers from other areas such as Asia and Africa as well. Thus explorers such as Ibn Battuta, Ahmad Ibn Fadlan, Gan Ying, Ennin and Zheng He, join the list with Willem Barents, William Adams, Marco Polo and Abel Tasman. Women explorers include Gertrude Bell and Ida Pfeiffer. The whole question of women explorers, and the constraints under which they have operated in different cultures and time periods, can form part of a class discussion, either as extension or for classes with a particular interest.

Teachers have the option for student to present the results of their research (which will cover the next 4 weeks) as a slide presentation, using software such as Powerpoint, a poster, a narrative, a poem, a short play or any other format that is useful, and some teachers have managed to combine this with requirements for other subject areas, such as English or Digital Technologies, thereby making the exercise even more time-efficient.

What do you mean ExceptT doesn't Compose?

Disclaimer: I work at Ambiata (our Github presence) probably the biggest Haskell shop in the southern hemisphere. Although I mention some of Ambiata's coding practices, in this blog post I am speaking for myself and not for Ambiata. However, the way I'm using ExceptT and handling exceptions in this post is something I learned from my colleagues at Ambiata.

At work, I've been spending some time tracking down exceptions in some of our Haskell code that have been bubbling up to the top level and killing a complex multi-threaded program. On Friday I posted a somewhat flippant comment to Google Plus:

Using exceptions for control flow is the root of many evils in software.

Lennart Kolmodin who I remember from my very earliest days of using Haskell in 2008 and who I met for the first time at ICFP in Copenhagen in 2011 responded:

Yet what to do if you want composable code? Currently I have
type Rpc a = ExceptT RpcError IO a
which is terrible

But what do we mean by "composable"? I like the wikipedia definition:

Composability is a system design principle that deals with the inter-relationships of components. A highly composable system provides recombinant components that can be selected and assembled in various combinations to satisfy specific user requirements.

The ensuing discussion, which also included Sean Leather, suggested that these two experienced Haskellers were not aware that with the help of some combinator functions, ExceptT composes very nicely and results in more readable and more reliable code.

At Ambiata, our coding guidelines strongly discourage the use of partial functions. Since the type signature of a function doesn't include information about the exceptions it might throw, the use of exceptions is strongly discouraged. When using library functions that may throw exceptions, we try to catch those exceptions as close as possible to their source and turn them into errors that are explicit in the type signatures of the code we write. Finally, we avoid using String to hold errors. Instead we construct data types to carry error messages and render functions to convert them to Text.

In order to properly demonstrate the ideas, I've written some demo code and made it available in this GitHub repo. It compiles and even runs (providing you give it the required number of command line arguments) and hopefully does a good job demonstrating how the bits fit together.

So let's look at the naive version of a program that doesn't do any exception handling at all.

  import Data.ByteString.Char8 (readFile, writeFile)

  import Naive.Cat (Cat, parseCat)
  import Naive.Db (Result, processWithDb, renderResult, withDatabaseConnection)
  import Naive.Dog (Dog, parseDog)

  import Prelude hiding (readFile, writeFile)

  import System.Environment (getArgs)
  import System.Exit (exitFailure)

  main :: IO ()
  main = do
    args <- getArgs
    case args of
      [inFile1, infile2, outFile] -> processFiles inFile1 infile2 outFile
      _ -> putStrLn "Expected three file names." >> exitFailure

  readCatFile :: FilePath -> IO Cat
  readCatFile fpath = do
    putStrLn "Reading Cat file."
    parseCat <$> readFile fpath

  readDogFile :: FilePath -> IO Dog
  readDogFile fpath = do
    putStrLn "Reading Dog file."
    parseDog <$> readFile fpath

  writeResultFile :: FilePath -> Result -> IO ()
  writeResultFile fpath result = do
    putStrLn "Writing Result file."
    writeFile fpath $ renderResult result

  processFiles :: FilePath -> FilePath -> FilePath -> IO ()
  processFiles infile1 infile2 outfile = do
    cat <- readCatFile infile1
    dog <- readDogFile infile2
    result <- withDatabaseConnection $ \ db ->
                 processWithDb db cat dog
    writeResultFile outfile result

Once built as per the instructions in the repo, it can be run with:

  dist/build/improved/improved Naive/Cat.hs Naive/Dog.hs /dev/null
  Reading Cat file 'Naive/Cat.hs'
  Reading Dog file 'Naive/Dog.hs'.
  Writing Result file '/dev/null'.

The above code is pretty naive and there is zero indication of what can and cannot fail or how it can fail. Here's a list of some of the obvious failures that may result in an exception being thrown:

  • Either of the two readFile calls.
  • The writeFile call.
  • The parsing functions parseCat and parseDog.
  • Opening the database connection.
  • The database connection could terminate during the processing stage.

So let's see how the use of the standard Either type, ExceptT from the transformers package and combinators from Gabriel Gonzalez' errors package can improve things.

Firstly, the types of parseCat and parseDog were ridiculous. Parsers can fail with parse errors, so these should both return an Either type. Just about everything else should be in the ExceptT e IO monad. Let's see what that looks like:

  {-# LANGUAGE OverloadedStrings #-}
  import           Control.Exception (SomeException)
  import           Control.Monad.IO.Class (liftIO)
  import           Control.Error (ExceptT, fmapL, fmapLT, handleExceptT
                                 , hoistEither, runExceptT)

  import           Data.ByteString.Char8 (readFile, writeFile)
  import           Data.Monoid ((<>))
  import           Data.Text (Text)
  import qualified Data.Text as T
  import qualified Data.Text.IO as T

  import           Improved.Cat (Cat, CatParseError, parseCat, renderCatParseError)
  import           Improved.Db (DbError, Result, processWithDb, renderDbError
                               , renderResult, withDatabaseConnection)
  import           Improved.Dog (Dog, DogParseError, parseDog, renderDogParseError)

  import           Prelude hiding (readFile, writeFile)

  import           System.Environment (getArgs)
  import           System.Exit (exitFailure)

  data ProcessError
    = ECat CatParseError
    | EDog DogParseError
    | EReadFile FilePath Text
    | EWriteFile FilePath Text
    | EDb DbError

  main :: IO ()
  main = do
    args <- getArgs
    case args of
      [inFile1, infile2, outFile] ->
              report =<< runExceptT (processFiles inFile1 infile2 outFile)
      _ -> do
          putStrLn "Expected three file names, the first two are input, the last output."
          exitFailure

  report :: Either ProcessError () -> IO ()
  report (Right _) = pure ()
  report (Left e) = T.putStrLn $ renderProcessError e


  renderProcessError :: ProcessError -> Text
  renderProcessError pe =
    case pe of
      ECat ec -> renderCatParseError ec
      EDog ed -> renderDogParseError ed
      EReadFile fpath msg -> "Error reading '" <> T.pack fpath <> "' : " <> msg
      EWriteFile fpath msg -> "Error writing '" <> T.pack fpath <> "' : " <> msg
      EDb dbe -> renderDbError dbe


  readCatFile :: FilePath -> ExceptT ProcessError IO Cat
  readCatFile fpath = do
    liftIO $ putStrLn "Reading Cat file."
    bs <- handleExceptT handler $ readFile fpath
    hoistEither . fmapL ECat $ parseCat bs
    where
      handler :: SomeException -> ProcessError
      handler e = EReadFile fpath (T.pack $ show e)

  readDogFile :: FilePath -> ExceptT ProcessError IO Dog
  readDogFile fpath = do
    liftIO $ putStrLn "Reading Dog file."
    bs <- handleExceptT handler $ readFile fpath
    hoistEither . fmapL EDog $ parseDog bs
    where
      handler :: SomeException -> ProcessError
      handler e = EReadFile fpath (T.pack $ show e)

  writeResultFile :: FilePath -> Result -> ExceptT ProcessError IO ()
  writeResultFile fpath result = do
    liftIO $ putStrLn "Writing Result file."
    handleExceptT handler . writeFile fpath $ renderResult result
    where
      handler :: SomeException -> ProcessError
      handler e = EWriteFile fpath (T.pack $ show e)

  processFiles :: FilePath -> FilePath -> FilePath -> ExceptT ProcessError IO ()
  processFiles infile1 infile2 outfile = do
    cat <- readCatFile infile1
    dog <- readDogFile infile2
    result <- fmapLT EDb . withDatabaseConnection $ \ db ->
                 processWithDb db cat dog
    writeResultFile outfile result

The first thing to notice is that changes to the structure of the main processing function processFiles are minor but all errors are now handled explicitly. In addition, all possible exceptions are caught as close as possible to the source and turned into errors that are explicit in the function return types. Sceptical? Try replacing one of the readFile calls with an error call or a throw and see it get caught and turned into an error as specified by the type of the function.

We also see that despite having many different error types (which happens when code is split up into many packages and modules), a constructor for an error type higher in the stack can encapsulate error types lower in the stack. For example, this value of type ProcessError:

  EDb (DbError3 ResultError1)

contains a DbError which in turn contains a ResultError. Nesting error types like this aids composition, as does the separation of error rendering (turning an error data type into text to be printed) from printing.

We also see that the use of combinators like fmapLT, together with the nested error types of the previous paragraph, means that ExceptT monad transformers do compose.

Using ExceptT with the combinators from the errors package to catch exceptions as close as possible to their source and convert them to errors has numerous benefits, including:

  • Errors are explicit in the types of the functions, making the code easier to reason about.
  • It's easier to provide better error messages and more context than what is normally provided by the Show instance of most exceptions.
  • The programmer spends less time chasing the source of exceptions in large complex code bases.
  • More robust code, because the programmer is forced to think about and write code to handle errors instead of error handling being an optional afterthought.

Want to discuss this? Try reddit.

April 27, 2017

Continuing the Conversation at DrupalCon and Into the Future

My blog post from last week was very well received and sparked a conversation in the Drupal community about the future of Drupal. That conversation has continued this week at DrupalCon Baltimore.

Yesterday during the opening keynote, Dries touched on some of the issues raised in my blog post. Later in the day we held an unofficial BoF. The turn out was smaller than I expected, but we had a great discussion.

Drupal moving from a hobbyist and business tool to being an enterprise CMS for creating "ambitious digital experiences" was raised in the Driesnote and in other conversations, including the BoF. We need to acknowledge that this has happened and consider it an achievement. Some people have been left behind as Drupal has grown up. There is probably more we can do to help these people. Do we need more resources to help them skill up? Should we direct them towards WordPress, Backdrop, Squarespace, Wix etc? Is it possible to build smaller sites that eventually grow into larger sites?

In my original blog post I talked about "peak Drupal" and used metrics that supported this assertion. One metric missing from that post is dollars spent on Drupal. It is clear that the picture is very different when measuring success using budgets. There is a general sense that a lot of money is being spent on high end Drupal sites. This has resulted in fewer sites doing more with Drupal 8.

As often happens when trying to solve problems with Drupal, the BoF descended into talking about technical solutions. Technical solutions and implementation detail have a place. I think it is important for the community to move beyond this and start talking about Drupal as a product.

In my mind Drupal core should be a content management framework and content hub service for building compelling digital experiences. For the record, I am not arguing Drupal should become API only. Larger users will take this and build their digital stack on top of this platform. This same platform should support an ecosystem of Drupal "distros". These product focused projects target specific use cases. Great examples of such distros include Lightning, Thunder, Open Social, aGov and Drupal Commerce. For smaller agencies and sites a distro can provide a great starting point for building new Drupal 8 sites.

The biggest challenge I see is continuing this conversation as a community. The majority of the community toolkit is focused on facilitating technical discussions and implementations. These tools will be valuable as we move from talking to doing, but right now we need tools and processes for engaging in silver discussions so we can build platinum level products.

April 26, 2017

LUV Main May 2017 Meeting: The Plasma programming language

Location: 
The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

PLEASE NOTE NEW LOCATION

Tuesday, May 2, 2017
6:30 PM to 8:30 PM

Speakers:

• Dr. Paul Bone, the Plasma programming language
• To be announced


Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


April 24, 2017

Weechat – a trial

I’m a big fan of console apps. But for IRC, I have been using quassel, as it gives me a client on my phone. However, I’ve been cleaning up my cloud accounts, and thought of the good old days when you’d simply run a console IRC client in screen or tmux.

Many years ago I used weechat and it was awesome, so I thought why not go and have a play.

To my surprise weechat has a /relay command which allows other clients to connect. One such client is WeechatAndroid, which isn’t an IRC client in itself, but rather a client that can talk to an already running weechat… This is exactly what I want.

So in case any of you wanted to do the same, this is my current tmux + weechat setup. Mostly gleaned from various internet sources.

 

Initial WeeChat setup

Once you start weechat, you can simply configure things from within the client, and when you do, it writes them to its config files. So really all you need to do is place or edit the config files directly, and once set up you can easily move it. However, as you’ll always want to be connected, it’s nice to know how to change configuration while it’s running… you know, so you can play.

I’ll assume you have installed weechat in whatever distro you’re using. So we’ll start by running weechat (inside a tmux or screen session):

weechat

Now while we are inside weechat, let’s first start by installing/activating some scripts:

/script install buffers.pl go.py colorize_nicks.py urlserver.py

Noting that there are heaps of scripts to install, but let’s explain these:

  • buffers.pl – makes a left side window listing the buffers (or channels).
  • go.py – allows you to quickly search the buffers.
  • urlserver.py – automatically shortens long urls so they don’t break if they wrap onto a new line.
  • colorize_nicks.py – is obvious.

The go.py script is useful. But it turns out you can also turn on mouse support with:

/mouse enable

This will then allow you to use your mouse to select a buffer, although it will break normal copy and pasting, so it may not be worth the effort, but I thought I’d mention it.
Normally, of course, you’d use the weechat keybindings instead: <alt+left or right> to move between buffers, or <alt+a> to go to the last active buffer. But go.py means you can easily jump to any buffer, and if you add a go.py keybinding:

/key bind meta-g /go

Then <alt+g> will launch it and you can just type away; it’s really easy.

But we are probably getting ahead of ourselves. There are plenty of places that can tell you how to configure weechat to connect to freenode etc. I used: https://weechat.org/files/doc/devel/weechat_quickstart.en.html
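
For completeness, adding and connecting to a network from inside weechat looks something like the following (the server name and address here are just examples; check the quickstart for the details that suit you):

/server add freenode chat.freenode.net/6697 -ssl
/connect freenode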

Now that you have some things configured lets setup the relay script/plugin.

 

Setting up /relay

This actually isn’t too hard. But there was a gotcha, which is why I’m writing about it. I first simply followed WeechatAndroid’s guide. But this just sets up an insecure relay:


/relay add weechat 8001
/set relay.network.password "your-secret-password"

NOTE: 8001 is the port, so you can change this, and the password to whatever you want.

However, that’s great for a test, but we probably want SSL, so we need an SSL cert:


mkdir -p ~/.weechat/ssl
cd ~/.weechat/ssl
openssl req -nodes -newkey rsa:2048 -keyout relay.pem -x509 -days 365 -out relay.pem

NOTE: To see or change where weechat looks for the cert, check the value of ‘relay.network.ssl_cert_key’, or just do a: /set relay.*

We can tell weechat to load this SSL cert without restarting by running:

/relay sslcertkey

And here’s the gotcha: you will fail to connect via SSL until you create an instance of the weechat relay protocol (listening socket) with ssl in the name of the protocol:
/relay del weechat
/relay add ssl.weechat 8001

Or just be smart and set up the SSL version in the first place.

On the weechat buffer you will see the client connecting and disconnecting, so it is a good way to debug connection issues.
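
If you want to check the TLS listener from another machine first, something like this should show the certificate weechat is presenting (substitute your own host and port):

openssl s_client -connect your-server.example.com:8001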

 

Weechat /relay + ssl (TL;DR)

mkdir -p ~/.weechat/ssl
cd ~/.weechat/ssl
openssl req -nodes -newkey rsa:2048 -keyout relay.pem -x509 -days 365 -out relay.pem

In weechat:

/relay sslcertkey
/relay add ssl.weechat 8001
/set relay.network.password "your-secret-password"

April 23, 2017

This Week in HASS – term 2, week 2

It is hoped that by now all the school routine is shaking back down into place. No doubt you’ve all got ANZAC Day marked on your class calendars, and this may be a good time to revisit some of the celebrations with the younger students. This week our younger students are looking at types of homes and local Aboriginal groups. Students in Year 3 are investigating climate zones and biomes of Australia, while students in Years 4 to 6 are looking at Europe in the ‘Age of Discovery’ (the 15th to 18th centuries).

Foundation/Prep to Year 3

House in Hobart, TAS

Students in our stand-alone Foundation/Prep class (Unit F.2), in line with the name of the unit “Where We Live”, are examining different types of homes and talking about how people get the things they need (such as shelter, warmth etc) from their homes. Students examine a wide range of different types of homes including freestanding houses, apartments, townhouses, as well as boats, caravans and other less conventional homes.

Students in integrated Foundation/Prep classes (Unit F.6) and in years 1 (Unit 1.2), 2 (Unit 2.2) and 3 (Unit 3.2) are finding out about their local Aboriginal groups, in the area of their school. Students will be considering how the groups are connected to the land and what changes they have seen since they first arrived in that area, thousands of years before. Remember, if you need information about your local Aboriginal group, feel free to contact us and ask.

Years 3 to 6

Students in Year 3, doing the Unit “Exploring Climates” (Unit 3.6) are consolidating work done last week on climate zones and the biomes of Australia. This week they are focusing on matching the climate zone to the region of Australia. Students in Years 4 (Unit 4.2), 5 (Unit 5.2) and 6 (Unit 6.2) are shifting focus across to Europe in the 15th to 18th centuries – the ‘Age of Discovery’.

This sets the scene for further examinations of explorers and the research project students will undertake this term, as well as introducing students to the conditions in Europe which later led to colonisation, thereby providing some important background information for Australian history in Term 3. Students can examine Spain, Portugal and England and the role that they played in exploring the world at this time.

Science!

Sailing Ship Science (History + Science)

Did you know: the Understanding Our World® program also fully covers the Science component of the Australian Curriculum at each year level, integrated with the HASS materials!

In line with the Age of Discovery explorer theme, students start their Science activity: “Ancient Sailing Ships”. A perennial favourite with students, this activity involves making a simple model sailing ship and then examining the forces acting on the ship, the properties of different parts of the ships and the materials from which they were made, examining different types of sails (square-rigged versus lateen-rigged), as well as considering the phases of matter associated with sailing ships.

Some schools set up water troughs and fans and race the ships against each other, which causes much excitement! This activity also helps students understand some of the challenges faced by explorers who travelled the world in similar vessels.

April 21, 2017

Many People Want To Talk

WOW! The response to my blog post on the future of Drupal earlier this week has been phenomenal. My blog saw more traffic in 24 hours than it normally sees in a 2 to 3 week period. Around 30 comments have been left by readers. My tweet announcing the post was the top Drupal tweet for a day. Some 50 hours later it is still number 4.

It seems to have really connected with many people in the community. I am still reflecting on everyone's contributions. There is a lot to take in. Rather than rush a follow up that responds to the issues raised, I will take some time to gather my thoughts.

One thing that is clear is that many people want to use DrupalCon Baltimore next week to discuss this issue. I encourage people to turn up with an open mind and engage in the conversation there.

A few people have suggested a BoF. Unfortunately all of the official BoF slots are full. Rather than that be a blocker, I've decided to run an unofficial BoF on the first day. I hope this helps facilitate the conversation.

Unofficial BoF: The Future of Drupal

When: Tuesday 25 April 2017 @ 12:30-1:30pm
Where: Exhibit Hall - meet at the Digital Echidna booth (#402) to be directed to the group
What: High level discussion about the direction people think Drupal should take.
UPDATE: An earlier version of this post had this scheduled for Monday. It is definitely happening on Tuesday.

I hope to see you in Baltimore.

April 19, 2017

Drupal, We Need To Talk

Update 21 April: I've published a followup post with details of the BoF to be held at DrupalCon Baltimore on Tuesday 25 April. I hope to see you there so we can continue the conversation.

Drupal has a problem. No, not that problem.

We live in a post peak Drupal world. Drupal peaked some time during the Drupal 8 development cycle. I’ve had conversations with quite a few people who feel that we’ve lost momentum. DrupalCon attendances peaked in 2014, Google search impressions haven’t returned to their 2009 level, core downloads have trended down since 2015. We need to accept this and talk about what it means for the future of Drupal.

Technically Drupal 8 is impressive. Unfortunately the uptake has been very slow. A factor in this slow uptake is that from a developer's perspective, Drupal 8 is a new application. The upgrade path from Drupal 7 to 8 is another factor.

In the five years Drupal 8 was being developed there was a fundamental shift in software architecture. During this time we witnessed the rise of microservices. Drupal is a monolithic application that tries to do everything. Don't worry, this isn't trying to rekindle the smallcore debate from last decade.

Today it is more common to see an application that is built using a handful of Laravel microservices, a couple of golang services and one built with nodejs. These applications often have multiple frontends: web (react, vuejs etc), mobile apps and an API. This is more effort to build out, but it is likely to be less effort to maintain long term.

I have heard so many excuses for why Drupal 8 adoption is so slow. After a year I think it is safe to say the community is in denial. Drupal 8 won't be as popular as D7.

Why isn't this being talked about publicly? Is it because there is a commercial interest in perpetuating the myth? Are the businesses built on offering Drupal services worried about scaring away customers? Adobe, Sitecore and others would point to such blog posts to attack Drupal. Sure, admitting we have a problem could cause some short term pain. But if we don't have the conversation we will go the way of Joomla; an irrelevant product that continues its slow decline.

Drupal needs to decide what is its future. The community is full of smart people, we should be talking about the future. This needs to be a public conversation, not something that is discussed in small groups in dark corners.

I don't think we will ever see Drupal become a collection of microservices, but I do think we need to become more modular. It is time for Drupal to pivot. I think we need to cut features and decouple the components. I think it is time for us to get back to our roots, but modernise at the same time.

Drupal has always been a content management system. It does not need to be a content delivery system. This goes beyond "Decoupled (Headless) Drupal". Drupal should become a "content hub" with pluggable workflows for creating and managing that content.

We should adopt the unix approach: do one thing and do it well. This approach would allow Drupal to be "just another service" that complements the application.

What do you think is needed to arrest the decline of Drupal? What should Drupal 9 look like? Let's have the conversation.

CNC Alloy Candelabra

While learning Fusion 360 I thought it would be fun to flex my new knowledge of cutting out curved shapes from alloy. Some donated LED fake candles were all the inspiration needed to design and cut out a candelabra. Yes, it is industrial looking. With vcarve and ball ends I could try to make it more baroque looking, but then that would require more artistic ability than a poor old programmer might have.


It is interesting working out how to fixture the cut for such creations. As of now, Fusion 360 will allow you to put tabs on curved surfaces, but you don't get to manually place them in that case. So it's a bit of fun getting things where you want them by adjusting other parameters.

Also I have noticed some issues with tabs on curves where exact multiples of the layer depth align perfectly with the top of the tab height. Making sure that case doesn't happen avoids the resulting undesired cuts. So as usual I managed to learn a bunch of stuff while making something that wasn't in my normal comfort zone.

The four candles are run off a small buck converter and wired in parallel at 3 volts to simulate the batteries they normally run off.

I can feel a gnarled brass candle base coming at some stage to help mitigate the floating candle look. Adding some melted real wax has also been suggested to give a more real look.

April 18, 2017

Patches for OpenStack Ironic Python Agent to create Buildroot images with Make

Recently I wrote about creating an OpenStack Ironic deploy image with Buildroot. Doing this manually is good because it helps you understand how it’s pieced together; however, it is slightly more involved.

The Ironic Python Agent (IPA) repo has some imagebuild scripts which make building the CoreOS and TinyCore images pretty trivial. I now have some patches which add support for creating the Buildroot images, too.

The patches consist of a few scripts which wrap the manual build method and a Makefile to tie it all together. Only the install-deps.sh script requires root privileges (and only if it detects missing dependencies); all other Buildroot tasks are run as a non-privileged user. It’s one of the great things about the Buildroot method!

Build

Again, I have included documentation in the repo, so please see there for more details on how to build and customise the image. However in short, it is as simple as:

git clone https://github.com/csmart/ironic-python-agent.git
cd ironic-python-agent/imagebuild/buildroot
make
# or, alternatively:
./build-buildroot.sh --all

These actions will perform the following tasks automatically:

  • Fetch the Buildroot Git repositories
  • Load the default IPA Buildroot configuration
  • Download and verify all source code
  • Build the toolchain
  • Use the toolchain to build:
    • System libraries and packages
    • Linux kernel
    • Python Wheels for IPA and dependencies
  • Create the kernel, initramfs and ISO images

The default configuration points to the upstream IPA Git repository, however you can change this to point to any repo and commit you like. For example, if you’re working on IPA itself, you can point Buildroot to your local Git repo and then build and boot that image to test it!

The following finalised images will be found under ./build/output/images:

  • bzImage (kernel)
  • rootfs.cpio.xz (ramdisk)
  • rootfs.iso9660 (ISO image)

These files can be uploaded to Glance for use with Ironic.
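
As a rough sketch (the image names here are just examples), the upload with the OpenStack client would look something like:

openstack image create --disk-format aki --container-format aki --file build/output/images/bzImage ipa-buildroot.kernel
openstack image create --disk-format ari --container-format ari --file build/output/images/rootfs.cpio.xz ipa-buildroot.initramfs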

Help

To see available Makefile targets, simply run the help target:

make help

Help is also available for the shell scripts if you pass the --help option:

./build-buildroot.sh --help
./clean-buildroot.sh --help
./customise-buildroot.sh --help

Customisation

As with the manual Buildroot method, customising the build is pretty easy:

make menuconfig
# do buildroot changes, e.g. change IPA Git URL
make

I created the kernel config from scratch (via tinyconfig) and deliberately tried to balance size and functionality. It should boot on most Intel based machines (BIOS and UEFI), however hardware support like hard disk and ethernet controllers is deliberately limited. The goal was to start small and add more support as needed.

Customising the Linux kernel is also pretty easy, though:

make linux-menuconfig
# do kernel changes
make

Each time you run make, it’ll pick up where you left off and re-create your images.

Really happy for anyone to test it out and let me know what you think!

April 17, 2017

This Week in HASS – term 2, week 1

Welcome to the new school term, and we hope you all had a wonderful Easter! Many of our students are writing NAPLAN this term, so the HASS program provides a refreshing focus on something different, whilst practising skills that will help students prepare for NAPLAN without even realising it! Both literacy and numeracy are foundation skills of much of the broader curriculum and are reinforced within our HASS program as well. Meantime our younger students are focusing on local landscapes this term, while our older students are studying explorers of different continents.

Foundation to Year 3

Our youngest students (Foundation/Prep Unit F.2) start the term by looking at different types of homes. A wide selection of places can be homes for people around the world, so students can compare where they live to other types of homes. Students in integrated Foundation/Prep and Years 1 to 3 (Units F.6; 1.2; 2.2 and 3.2) start their examination of the local landscape by examining how Aboriginal people arrived in Australia 60,000 years ago. They learn how modern humans expanded across the world during the last Ice Age, reaching Australia via South-East Asia. Starting with this broad focus allows them to narrow down in later weeks, finally focusing on their local community.

Year 3 to Year 6

Students in Years 3 to 6 (Units 3.6; 4.2; 5.2 and 6.2) are looking at explorers this term. Each year level focuses on explorers of a different part of the world. Year 3 students investigate different climate zones and explorers of extreme climate areas (such as the Poles, or the Central Deserts of Australia).  Year 4 students examine Africa and South America and investigate how European explorers during the ‘Age of Discovery‘ encountered different environments, animals and people on these continents. The students start with prehistory and this week they are looking at how Ancient Egyptians and Bantu-speaking groups explored Africa thousands of years ago. They also examine Great Zimbabwe. Year 5 students are studying North America, and this week are starting with the Viking voyages to Greenland and Newfoundland, in the 10th century. Year 6 students focus on Asia, and start with a study in Economics by examining the Dutch East India Company of the 17th and 18th centuries. (Remember HASS for years 5 and 6 includes History, Geography, Civics and Citizenship and Economics and Business – we cover it all, plus Science!)

You might be wondering how on earth we integrate such apparently disparate topics for multi-year classes! Well, our Teacher Handbooks are full of tricks to make teaching these integrated classes a breeze. The Teacher Handbooks with lesson plans and hints for how to integrate across year levels are included, along with the Student Workbooks, Model Answers and Assessment Guides, within our bundles for each unit. Teachers using these units have been thrilled at how easy it is to use our material in multi-year level classes, whilst knowing that each student is covering curriculum-appropriate material for their own year level.

More KVM Modules Configuration

Last year I blogged about blacklisting a video driver so that KVM virtual machines didn’t go into graphics mode [1]. Now I’ve been working on some other things to make virtual machines run better.

I use the same initramfs for the physical hardware as for the virtual machines. So I need to remove modules that are needed for booting the physical hardware from the VMs as well as other modules that get dragged in by systemd and other things. One significant saving from this is that I use BTRFS for the physical machine and the BTRFS driver takes 1M of RAM!

The first thing I did to reduce the number of modules was to edit /etc/initramfs-tools/initramfs.conf and change “MODULES=most” to “MODULES=dep”. This significantly reduced the number of modules loaded and also stopped the initramfs from probing for a non-existent floppy drive, which added about 20 seconds to the boot. Note that this will result in your initramfs not supporting different hardware. So if you plan to take a hard drive out of your desktop PC and install it in another PC this could be bad for you, but for servers it’s OK as that sort of upgrade is uncommon for servers and only done with some planning (such as creating an initramfs just for the migration).
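
Note that an initramfs.conf change only takes effect once the initramfs is regenerated, for example with:

update-initramfs -u -k all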

I put the following rmmod commands in /etc/rc.local to remove modules that are automatically loaded:
rmmod btrfs
rmmod evdev
rmmod lrw
rmmod glue_helper
rmmod ablk_helper
rmmod aes_x86_64
rmmod ecb
rmmod xor
rmmod raid6_pq
rmmod cryptd
rmmod gf128mul
rmmod ata_generic
rmmod ata_piix
rmmod i2c_piix4
rmmod libata
rmmod scsi_mod

In /etc/modprobe.d/blacklist.conf I have the following lines to stop drivers being loaded. The first line is to stop the video mode being set and the rest are just to save space. One thing that inspired me to do this is that the parallel port driver gave a kernel error when it loaded and tried to access non-existent hardware.
blacklist bochs_drm
blacklist joydev
blacklist ppdev
blacklist sg
blacklist psmouse
blacklist pcspkr
blacklist sr_mod
blacklist acpi_cpufreq
blacklist cdrom
blacklist tpm
blacklist tpm_tis
blacklist floppy
blacklist parport_pc
blacklist serio_raw
blacklist button

On the physical machine I have the following in /etc/modprobe.d/blacklist.conf. Most of this is to prevent loading of filesystem drivers when making an initramfs. I do this because I know there’s never going to be any need for CDs, parallel devices, graphics, or strange block devices in a server room. I wouldn’t do any of this for a desktop workstation or laptop.
blacklist ppdev
blacklist parport_pc
blacklist cdrom
blacklist sr_mod
blacklist nouveau

blacklist ufs
blacklist qnx4
blacklist hfsplus
blacklist hfs
blacklist minix
blacklist ntfs
blacklist jfs
blacklist xfs

April 16, 2017

Creating an OpenStack Ironic deploy image with Buildroot

Ironic is an OpenStack project which provisions bare metal machines (as opposed to virtual).

A tool called Ironic Python Agent (IPA) is used to control and provision these physical nodes, performing tasks such as wiping the machine and writing an image to disk. This is done by booting a custom Linux kernel and initramfs image which runs IPA and connects back to the Ironic Conductor.

The Ironic project supports a couple of different image builders, including CoreOS, TinyCore and others via Disk Image Builder.

These have their limitations, however; for example, they require root privileges to build and, with the exception of TinyCore, are all hundreds of megabytes in size. One of the downsides of TinyCore is limited hardware support and although it’s not used in production, it is used in the OpenStack gating tests (where it’s booted in virtual machines with ~300MB RAM).

Large deployment images mean a longer delay in the provisioning of nodes, so I set out to create a small, customisable image that solves the problems of the other existing images.

Buildroot

I chose to use Buildroot, a well regarded, simple to use tool for building embedded Linux images.

So far it has been quite successful as a proof of concept.

Customisation can be done via the menuconfig system, similar to the Linux kernel.

Buildroot menuconfig

Source code

All of the source code for building the image is up on my GitHub account in the ipa-buildroot repository. I have also written up documentation which should walk you through the whole build and customisation process.

The ipa-buildroot repository contains the IPA specific Buildroot configurations and tracks upstream Buildroot in a Git submodule. By using upstream Buildroot and our external repository, the IPA Buildroot configuration comes up as an option for regular Buildroot build.

IPA in list of Buildroot default configs

Buildroot will compile the kernel and initramfs, then post-build scripts clone the Ironic Python Agent repository and create Python wheels for the target.

This keeps it highly flexible: you can specify the location and branch of the ironic-python-agent and requirements repositories, and so build whichever version of Ironic Python Agent you want to use.

Set Ironic Python Agent and Requirements location and Git version

I created the kernel config from scratch (using tinyconfig) and deliberately tried to balance size and functionality. It should boot on most Intel based machines (BIOS and UEFI), however hardware support like hard disk and ethernet controllers is deliberately limited. The goal was to start small and add more support as needed.

By using Buildroot, customising the Linux kernel is pretty easy! You can just run this to configure the kernel and rebuild your image:

make linux-menuconfig && make

If this interests you, please check it out! Any suggestions are welcome.

April 13, 2017

Automatically renewing Let's Encrypt TLS certificates on Debian using Certbot

I use Let's Encrypt TLS certificates on my Debian servers along with the Certbot tool. Since I use the "temporary webserver" method of proving domain ownership via the ACME protocol, I cannot use the cert renewal cronjob built into Certbot.

Instead, this is the script I put in /etc/cron.daily/certbot-renew:

#!/bin/bash

/usr/bin/certbot renew --quiet --pre-hook "/bin/systemctl stop apache2.service" --post-hook "/bin/systemctl start apache2.service"

pushd /etc/ > /dev/null
/usr/bin/git add letsencrypt
DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
if [ -n "$DIFFSTAT" ] ; then
    /usr/bin/git commit --quiet -m "Renewed letsencrypt certs"
    echo "$DIFFSTAT"
fi
popd > /dev/null

# Generate the right certs for ejabberd and znc
if test /etc/letsencrypt/live/jabber-gw.fmarier.org/privkey.pem -nt /etc/ejabberd/ejabberd.pem ; then
    cat /etc/letsencrypt/live/jabber-gw.fmarier.org/privkey.pem /etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem > /etc/ejabberd/ejabberd.pem
fi
cat /etc/letsencrypt/live/irc.fmarier.org/privkey.pem /etc/letsencrypt/live/irc.fmarier.org/fullchain.pem > /home/francois/.znc/znc.pem

It temporarily disables my Apache webserver while it renews the certificates and then only outputs something to STDOUT (since my cronjob will email me any output) if certs have been renewed.
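
Certbot's renew command also has a dry-run mode that talks to the staging environment, which is a handy way to test changes to this script without using up real certificates:

/usr/bin/certbot renew --dry-run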

Since I'm using etckeeper to keep track of config changes on my servers, my renewal script also commits to the repository if any certs have changed.

Finally, since my XMPP server and IRC bouncer need the private key and the full certificate chain to be in the same file, I regenerate these files at the end of the script. In the case of ejabberd, I only do so if the certificates have actually changed since overwriting ejabberd.pem changes its timestamp and triggers an fcheck notification (since it watches all files under /etc).

External Monitoring

In order to catch mistakes or oversights, I use ssl-cert-check to monitor my domains once a day:

ssl-cert-check -s fmarier.org -p 443 -q -a -e francois@fmarier.org

I also signed up with Cert Spotter which watches the Certificate Transparency log and notifies me of any newly-issued certificates for my domains.

In other words, I get notified:

  • if my cronjob fails and a cert is about to expire, or
  • as soon as a new cert is issued.

The whole thing seems to work well, but if there's anything I could be doing better, feel free to leave a comment!

April 08, 2017

Speaking in April 2017

It’s been a while since I’ve blogged (I will have to catch up soon), but here are a few appearances:

  • How we use MySQL today – April 10 2017 – New York MySQL meetup. I am almost certain this will be very interesting with the diversity of speakers and topics.
  • Percona Live 2017 – April 24-27 2017 – Santa Clara, California. This is going to be huge, as it’s expanded beyond just MySQL to include MongoDB, PostgreSQL, and other open source databases. It might even be the conference with the largest time series track out there. Use code COLIN30 for the best discount at registration.

I will also be in attendance at the MariaDB Developer’s (Un)Conference, and M|17 that follows.

April 06, 2017

Remote Presentations

Living in the middle of nowhere and working most of my hours in the evenings I have few opportunities to attend events in person, let alone deliver presentations. As someone who likes to share knowledge and present at events this is a problem. My work around has been presenting remotely. Many of my talks are available on playlist on my youtube channel.

I've been doing remote presentations for many years. During this time I have learned a lot about what it takes to make a remote presentation successful.

Preparation

When scheduling a remote session you should make sure there is enough time for a test before your scheduled slot. Personally I prefer presenting after lunch as it allows an hour or so for dealing with any gremlins. The test presentation should use the same machines and connections you'll be using for your presentation.

I prefer using Hangouts On Air for my presentations. This allows me to stream my session to the world and have it recorded for future reference. I review every one of my recorded talks to see what I can do better next time.

Both sides of the connection should use wired connections. WiFi, especially at conferences, can be flaky. Organisers should ensure that all presentation machines are using Ethernet, and if possible it should be on a separate VLAN.

Tips for Presenters

Presenting to a remote audience is very different to presenting in front of a live audience. When presenting in person you're able to focus on people in the audience who seem to be really engaged with your presentation or scan the crowd to see if you're putting people to sleep. Even if there is a webcam on the audience it is likely to be grainy and in a fixed position. It is also difficult to pace when presenting remotely.

When presenting in person your slides will be displayed in full screen mode, often with a presenter view in your application of choice. Most remote presentation tools don't allow you to run your slides in full screen mode. This makes it more difficult as a presenter. Transitions won't work, videos won't autoplay and any links Keynote (and PowerPoint) open will open in a new window that isn't being shared, which makes demos trickier. If you don't hide the slide thumbnails that remind you of what is coming next, the audience will see them too. Recently I worked out that printing the thumbnails avoids revealing the punchlines prematurely.

Find out as much information as possible about the room your presentation will be held in. How big is it? What is the seating configuration? Where is the screen relative to where the podium is?

Tips for Organisers

Event organisers are usually flat out on the day of the event. Having to deal with a remote presenter adds to the workload. Some preparation can make life easier for the organisers. Well before the event day make sure someone is nominated to be the point of contact for the presenter. If possible share the details (name, email and mobile number) for the primary contact and a fallback. This avoids the presenter chasing random people from the organising team.

On the day of the event communicate delays/schedule changes to the presenter. This allows them to be ready to go at the right time.

It is always nice for the speaker to receive a swag bag and name tag in the mail. If you can afford to send this, your speaker will always appreciate it.

Need a Speaker?

Are you looking for a speaker to talk about Drupal, automation, devops, workflows or open source? I'd be happy to consider speaking at your event. If your event doesn't have a travel budget to fly me in, then I can present remotely. To discuss this further please get in touch using my contact form.

April 05, 2017

'Advanced Computing': An International Journal of Plagiarism

Advanced Computing: An International Journal was a publication that I was considering writing for. However it is almost certainly a predatory open-access journal that seeks a "publication charge" without even performing the minimal standards of editorial checking.

I can just about tolerate the fact that the most recent issue has numerous spelling and grammatical errors, as I believe that English is not the first language of the authors. These should have been caught by the editors, but we'll let that slide for a far greater crime - that of widespread plagiarism.

The fact that the editors clearly didn't even check for this is an inexcusable oversight.

I have opened this correspondence with the editors in the hope that others will find it prior to submitting, or even considering submission, to the journal in question. I also hope the editors take the opportunity to dramatically improve their editorial standards.


Light to Light, Day Three

The third and final day of the Light to Light Walk at Ben Boyd National Park. This was a shorter (8 kms) easier walk. A nice way to finish the journey.



Interactive map for this route.


Tags for this post: events pictures 20170313 photo scouts bushwalk
Related posts: Light to Light, Day Two; Exploring the Jagungal; Light to Light, Day One; Scout activity: orienteering at Mount Stranger; Potato Point


Light to Light, Day Two

Our second day walking the Light to Light walk in Ben Boyd National Park. This second day was about 10 kms and was on easier terrain than the first day. That said, probably a little less scenic than the first day too.



Interactive map for this route.


Tags for this post: events pictures 20170312 photo scouts bushwalk
Related posts: Light to Light, Day Three; Exploring the Jagungal; Light to Light, Day One; Scout activity: orienteering at Mount Stranger; Potato Point


April 04, 2017

Light to Light, Day One

Macarthur Scouts took a group of teenagers down to Ben Boyd National Park on the weekend to do the Light to Light walk. The first day was 14 kms through lovely undulating terrain. This was the hardest day of the walk, but very rewarding and I think we all had fun.



Interactive map for this route.



Tags for this post: events pictures 20170311 photo scouts bushwalk
Related posts: Light to Light, Day Three; Light to Light, Day Two; Exploring the Jagungal; Scout activity: orienteering at Mount Stranger; Potato Point


April 03, 2017

Iteration or Transformation in government: paint jobs and engines

I was recently at an event talking about all things technology with a fascinating group of people. It was a reminder to me that digital transformation has become largely confused with digital iteration, and we need to reset the narrative around this space if we are to realise the real opportunities and benefits of technology moving forward. I gave a speech recently about major paradigm shifts that have brought us to where we are and I encourage everyone to consider and explore these paradigm shifts as important context for this blog post and their own work, but this blog post will focus specifically on a couple of examples of actual transformative change worth exploring.

The TL;DR is simply that you need to be careful not to mistake iteration for transformation. Iteration is an improvement on the status quo. Transformation is a new model of working that is, hopefully, fundamentally better than the status quo. As a rule of thumb, if what you are doing is simply better, faster or cheaper, it is probably just iterative. There are many examples from innovation and digital transformation agendas which are just improvements on the status quo, but two examples of actual transformation of government I think are worth exploring are Gov-as-an-API and mutually beneficial partnerships to address shared challenges.

Background

Firstly, why am I even interested in “digital transformation”? Well, I’ve worked on open data in the Australian Federal Government since 2012 and very early on we recognised that open data was just a step towards the idea of “Gov as a Platform” as articulated by Tim O’Reilly nearly 10 years ago. Basically, he spoke about the potential to transform government into Government as a Platform, similar (for those unfamiliar with the “as a platform” idea) to Google Maps, or the Apple/Google app stores. Basically government could provide the data, content, transaction services and even business rules (regulation, common patterns such as means testing, building codes, etc) in a consumable, componentised and modular fashion to support a diverse ecosystem of service delivery, analysis and products by myriad agents, including private and public sector, but also citizens themselves.

Seems obvious right? I mean the private sector (the tech sector in any case) have been taking this approach for a decade.

What I have found in government is a lot of interest in “digital” where it is usually simply digitising an existing process, product or service. The understanding of consumable, modular architecture as a strategic approach to achieve greater flexibility and agility within an organisation, whilst enabling a broader ecosystem to build on top, is simply not understood by many. Certainly there are pockets that understand this, especially at the practitioner level, but agencies are naturally motivated to simply deliver what they need in isolation from a whole of government view. It was wonderful to recently see New Zealand picking up a whole of government approach in this vein but many governments are still focused on simple digitisation rather than transformation.

Why is this a problem? Well, to put it simply, government can’t scale the way it has traditionally worked to meet the needs and challenges of an increasingly changing world. Unless governments can transform to be more responsive, adaptive, collaborative and scalable, then they will become less relevant to the communities they serve and less effective in implementing government policy. Governments need to learn to adapt to the paradigm shifts from centrist to distributed models, from scarcity to surplus resources, from analogue to digital models, from command and control to collaborative relationships, and from closed to open practices.

Gov as an API

One of the greatest impacts of the DTO and the UK Government Digital Service has been to spur a race to the top around user centred design and agile across governments. However, these methods, whilst necessary, are not sufficient for digital transformation, because you too easily see services that are rapidly developed and better for citizens, but still based on bespoke siloed stacks of technology and content that aren’t reconsumable. Why does this matter? Because there are loads of components needed for multiple services, but siloed service technology stacks lead to duplication, a lack of agility in iterating and improving the user experience on an ongoing basis, a lack of programmatic access to those components which would enable system to system automation, and a complete lack of the “platform” upon which an ecosystem could be built.

When I was at the interim DTO in 2016, we fundamentally realised that no single agency would ever be naturally motivated, funded or mandated to deliver services on behalf of someone else. So rather than assuming a model wherein an agency is expected to do just that, we started considering new models. New systems wherein agencies could achieve what they needed (and were mandated and funded) to do, but where the broader ecosystem could provide multi-channel services delivery where there is no wrong door for citizens to do what they need. One channel might be the magical “life events” lens, another might be third parties, or State and Territory Governments, or citizen mashups. These agents and sectors have ongoing relationships with their users allowing them to exponentially spread and maintain user-centred design in way that government by itself can not afford to do, now or into the future.

This vision was itself just a reflection of Amazon, Google Maps, the Apple “apps store” and other platform models so prevalent in the private sector, as described above. But governments everywhere have largely interpreted the “Gov as a Platform” idea as simply common or shared platforms. Whilst common platforms can provide savings and efficiencies, they have not enabled the system transformation needed to get true digital transformation across government.

So what does this mean practically? There are certainly pockets of people doing or experimenting in this space. Here are some of my thoughts to date based on work I’ve done in Australia (at the interim DTO) and in New Zealand (with the Department of Internal Affairs).

Firstly you can largely identify four categories of things involved in any government service:

  • Content – obvious, but taking into account the domain specific content of agencies as well as the kind of custodian or contextual content usually managed by points of aggregation or service delivery
  • Data – any type of list, source of intelligence or statistics, search queries such as ABN lookups
  • Transaction services – anything a person or business interacts with such as registration, payments, claims, reporting, etc. Obviously requires strict security frameworks
  • Business rules – the regulation, legislation, code, policy logic or even reusable patterns such as means testing which are usually hard coded into projects as required. Imagine an authoritative public API with the business logic of government available for consumption by everyone. A good example of pioneering work in this space is the Regulation as a Platform work by Data61.

These categories of components can all be made programmatically available for the delivery of your individual initiative and for broader reuse, either publicly (for data, content and business rules) or securely (for transaction services). But you also need some core capabilities that are consumable for any form of digital service; below are a few to consider:

  • Identity and authentication, arguably also taking into account user consent based systems which may be provided from outside of government
  • Service analytics across digital and non digital channels to baseline the user experience and journey with govt and identify what works through evidence. This could also fuel a basic personalisation service.
  • A government web platform to pull together the government “sedan” service
  • Services register – a consumable register of government services (human services) to draw from across the board.

Imagine if we tool a conditional approach to matters, where you don’t need to provide documentation to prove your age (birth certificate, licence, passport), all of which give too much information, but rather can provide a verifiable claim that yes I am over the required age. This would both dramatically reduce the work for gov, and improve the privacy of people. See the verifiable claims work by W3C for more info on this concept, but it could be a huge transformation for how gov and privacy operates.

The three key advantages to taking this approach are:

  1. Agency agility – In splitting the front end from a consumable backend, agencies gain the ability to more rapidly iterate the customer experience of the service, taking into account changing user needs and new user platforms (mobile is just the start – augmented reality and embedded computing are just around the corner). When the back end and front end of a service are part of the one monolithic stack, it is simply too expensive and complicated to make many changes to the service.
  2. Ecosystem enablement – As identified above, a key game changer with the model is the ability for others to consume the services to support a multi-channel ecosystem of services, analysis and products delivered by the broader community of government, industry and community players.
  3. Automation – the final and least sexy advantage, though the most interesting from a service improvement perspective. If your data, content, transaction systems and rules are programmatically available, suddenly you create the opportunity for the steps of a life event to be automated, where user consent is granted. The user consent part is really important, just to be clear! So rather than having 17 beautiful but distinct user services that a person has to individually complete, a user could be asked at any one of those entry points whether they’d like the other 16 steps to be automatically completed on their behalf. Perhaps the best way government can serve citizens in many cases is to get out of the way :)

Meaningful and mutually beneficial collaboration

Collaboration has become something of a buzzword in government often resulting in meetings, MOUs, principle statements or joint media releases. Occasionally there are genuine joint initiatives but there are still a lot of opportunities to explore new models of collaboration that achieve better outcomes.

Before we talk about how to collaborate, we need to address the elephant in the room: natural motivation. Government often sees consultation as something nice to have, collaboration as a nice way of getting others to contribute to something, and co-design as something to strive for across the business units in your agency. If we consider the idea that government simply cannot meet the challenges or opportunities of the 21st century in isolation, if we acknowledge that government cannot scale at the same pace as the changing domains we serve, then we need to explore new models of collaboration where we actively partner with others for mutual benefit. To do this we need to identify areas for which others are naturally motivated to collaborate.

Firstly, let’s acknowledge there will always be work to do for which there are no naturally motivated partners. Why would anyone else want, at their own cost, to help you set up your mobility strategy, or implement an email server, or provide telephony services? The fact is that a reasonable amount of what any organisation does would be seen as BAU, as commodity, and thus only able to be delivered through internal capacity or contractual relationships with suppliers. So initiatives that try to improve government procurement practices can iteratively improve these customer-supplier arrangements but they don’t lend themselves to meaningful or significant collaboration.

OK, so what sort of things could be done differently? This is where you need to look critically at the purpose of your agency including the highest level goals, and identify who the natural potential allies in those goals could be. You can then approach your natural allies, identify where there are shared interests, challenges or opportunities, and collectively work together to co-design, co-invest, co-deliver and co-resource a better outcome for all involved. Individual allies could use their own resources or contractors for their contribution to the work, but the relationship is one of partnership, the effort and expertise is shared, and the outcomes are more powerful and effective than any one entity would have delivered on their own. In short, the whole becomes greater than the sum of its parts.

I will use the exciting and groundbreaking work of my current employer as a real example to demonstrate the point.

AUSTRAC is the Australian Government financial intelligence agency with some regulatory responsibilities. The purpose of the agency is threefold: 1) to detect and disrupt abuse of the financial system; 2) to strengthen the financial system against abuse; and 3) to contribute to the growth of the Australian economy. So who are natural allies in these goals… banks, law enforcement and fraud focused agencies, consumer protection organisations, regulatory organisations, fintech and regtech startups, international organisations, other governments, even individual citizens! So to tap into this ecosystem of potential allies, AUSTRAC has launched a new initiative called the “Fintel Alliance” which includes, at its heart, new models of collaborating on shared goals. There are joint intelligence operations on major investigations like the Panama Papers, joint industry initiatives to explore shared challenges and then develop prototypes and reference implementations, active co-design of the new regulatory framework with industry, and international collaborations to strengthen the global financial system against abuse. The model is still in early days, but already AUSTRAC has shown that a small agency can punch well above its weight by working with others in new and innovative ways.

Other early DTO lessons

I’ll finish with a few lessons from the DTO. I worked at the DTO for the first 8 months (Jan – Sept 2015) when it was being set up. It was a crazy time with people from over 30 agencies thrust together to create a new vision for government services whilst simultaneously learning to speak each other’s language and think in a whole of government(s) way. We found a lot of interesting things, not least of all just how pervasive the siloed thinking of government was. For example, internal analysis at the DTO of user research from across government agencies showed that user research tended to be through the narrow lens of an agency’s view of “its customers” and the services delivered by that agency. It was clear that user needs beyond the domain of the agency were seen as out of scope, or, at best, treated as a hand off point.

We started writing about a new draft vision whilst at the DTO which fundamentally was based on the idea of an evidence based, consumable approach to designing and delivering government services, built on reusable components that could be mashed up for a multi-channel ecosystem of service delivery. We tested this with users, agencies and industry with great feedback. Some of our early thinking is below, now a year and a half old, but worth referring back to:

One significant benefit of the DTO and GDS was the cycling of public servants through the agency to experience new ways of working and thinking, and applying an all of government lens across their work. This cultural transformation was then maintained in Australia, at least in part, when those individuals returned to their home agencies. A great lesson for others in this space.

A couple of other lessons learned from the DTO are below:

  • Agencies want to change. They are under pressure from citizens, governments and under budget constraints and know they need better ways to do things.
  • A sandbox is important. Agencies need somewhere to experiment, play with new tools, ideas and methods, draw on different expertise and perspectives, build prototypes and try new ideas. This is ideally best used before major projects are undertaken as a way to quickly test ideas before going to market. It also helps improve expectations of what is possible and what things should cost.
  • Everyone has an agenda. Every agency will drive their own agenda with whatever the language of the day is, and agendas will continue to diverge from each other whilst there is no common vision.
  • Evidence is important! And there isn’t generally enough AoG evidence available. Creating an evidence base was a critical part of identifying what works and what doesn’t.
  • Agile is a very specific and useful methodology, but often gets interpreted as something loose, fast, and unreliable. Education about proper agile methods is important.
  • An AoG strategy for transformation is critical. If transformation is seen as a side project, it will never be integrated into BAU.
  • Internal brilliance needs tapping. Too often govt brings in consultants and ignores internal ideas, skills and enthusiasm. There needs to be a combination of public engagement and internal engagement to get the best outcomes.

I want to just finish by acknowledging and thanking the “interim DTO” team and early leadership for their amazing work, vision and collective efforts in establishing the DTO and imagining a better future for service delivery and for government more broadly. It was an incredible time with incredible people, and your work continues to live on and be validated by service delivery initiatives in Australia and across the world. Particular kudos to team I worked directly with, innovative and awesome public servants all! Sharyn Clarkson, Sean Minney, Mark Muir, Vanessa Roarty, Monique Kenningham, Nigel O’Keefe, Mark McKenzie, Chris Gough, Deb Blackburn, Lisa Howdin, Simon Fisher, Andrew Carter, Fran Ballard and Fiona Payne :) Also to our contractors at the time Ruth Ellison, Donna Spencer and of course, the incredible and awesome Alex Sadleir.

April 02, 2017

Manually expanding a RAID1 array on Ubuntu

Here are the notes I took while manually expanding a non-LVM, encrypted RAID1 array on an Ubuntu machine.

My original setup consisted of a 1 TB drive along with a 2 TB drive, which meant that the RAID1 array was 1 TB in size and the second drive had 1 TB of unused capacity. This is how I replaced the old 1 TB drive with a new 3 TB drive and expanded the RAID1 array to 2 TB (leaving 1 TB unused on the new 3 TB drive).

Partition the new drive

In order to partition the new 3 TB drive, I started by creating a temporary partition on the old 2 TB drive (/dev/sdc) to use up all of the capacity on that drive:

$ parted /dev/sdc
unit s
print
mkpart
print

Then I initialized the partition table and created the EFI partition on the new drive (/dev/sdd):

$ parted /dev/sdd
unit s
mktable gpt
mkpart

Since I wanted the RAID1 array to be as large as the smaller of the two drives, I made sure that the second partition (/home) on the new 3 TB drive had:

  • the same start position as the second partition on the old drive
  • the same end position as the third partition (the temporary one I just created) on the old drive

I created the partition and flagged it as a RAID one:

mkpart
toggle 2 raid

and then deleted the temporary partition on the old 2 TB drive:

$ parted /dev/sdc
print
rm 3
print

Create a temporary RAID1 array on the new drive

With the new drive properly partitioned, I created a new RAID array for it:

mdadm /dev/md10 --create --level=1 --raid-devices=2 /dev/sdd1 missing

and added it to /etc/mdadm/mdadm.conf:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

which required manual editing of that file to remove duplicate entries.

Create the encrypted partition

With the new RAID device in place, I created the encrypted LUKS partition:

cryptsetup -h sha256 -c aes-xts-plain64 -s 512 luksFormat /dev/md10
cryptsetup luksOpen /dev/md10 chome2

I took the UUID for the temporary RAID partition:

blkid /dev/md10

and put it in /etc/crypttab as chome2.

Then, I formatted the new LUKS partition and mounted it:

mkfs.ext4 -m 0 /dev/mapper/chome2
mkdir /home2
mount /dev/mapper/chome2 /home2

Copy the data from the old drive

With the home partitions of both drives mounted, I copied the files over to the new drive:

eatmydata nice ionice -c3 rsync -axHAX --progress /home/* /home2/

making use of wrappers that preserve system responsiveness during I/O-intensive operations.

Switch over to the new drive

After the copy, I switched over to the new drive in a step-by-step way:

  1. Changed the UUID of chome in /etc/crypttab.
  2. Changed the UUID and name of /dev/md1 in /etc/mdadm/mdadm.conf.
  3. Rebooted with both drives.
  4. Checked that the new drive was the one used in the encrypted /home mount using: df -h.

Add the old drive to the new RAID array

With all of this working, it was time to clear the mdadm superblock from the old drive:

mdadm --zero-superblock /dev/sdc1

and then change the second partition of the old drive to make it the same size as the one on the new drive:

$ parted /dev/sdc
rm 2
mkpart
toggle 2 raid
print

before adding it to the new array:

mdadm /dev/md1 -a /dev/sdc1

Rename the new array

To change the name of the new RAID array back to what it was on the old drive, I first had to stop both the old and the new RAID arrays:

umount /home
cryptsetup luksClose chome
mdadm --stop /dev/md10
mdadm --stop /dev/md1

before running this command:

mdadm --assemble /dev/md1 --name=mymachinename:1 --update=name /dev/sdd2

and updating the name in /etc/mdadm/mdadm.conf.

The last step was to regenerate the initramfs:

update-initramfs -u

before rebooting into something that looks exactly like the original RAID1 array but with twice the size.

March 30, 2017

Staying safe during Cyclone Debbie

Cyclone Debbie as seen from the International Space Station

We hope all teachers and students are safe in the areas of Queensland and New South Wales affected by the cyclone weather! We understand that many state schools (any from south of Agnes Water down to northern New South Wales) are closed today, and the radar shows a very large rain front coming through. Near Brisbane it’s been raining for many hours already, and the wind is now picking up as well. It’s good to be inside, although things are starting to feel moist (reminding Arjen of when he lived in Cairns).

Why not take this opportunity to replace dry old teaching materials using coupon code DEBBIE for 25% discount on any Understanding our World® unit. This special Cyclone Debbie offer ends Sunday 2nd April.

Did you know that, in the Understanding Our World® units, Year 5 students work on Natural Disasters during this term?

Also, do take a peek at the Open Source Earth Wind Patterns site at NullSchool – using live data to create a moving image. All open. Beautiful.


March 28, 2017

Evaluating CephFS on Power

Methodology

To evaluate CephFS, we will create a ppc64le virtual machine, with sufficient space to compile the software, as well as 3 sparse 1TB disks to create the object store.

We will then build & install the Ceph packages, after adding the PowerPC optimisations to the code. We do this because, if the packages are not already installed, ceph-deploy will fetch prebuilt packages that do not have the performance patches.

Finally, we will use ceph-deploy to deploy the instance. We will install ceph-deploy via pip, to avoid file conflicts with the packages that we built.

For more information on what each command does, visit the following tutorial, upon which this is based: http://palmerville.github.io/2016/04/30/single-node-ceph-install.html

Virtual Machine Config

Create a virtual machine with at least the following:

  • 16GB of memory
  • 16 CPUs
  • 64GB disk for the root filesystem
  • 3 x 1TB disks for the Ceph object store
  • Ubuntu 16.04 default install (only use the 64GB disk, leave the others unpartitioned)

Initial config

  • Enable ssh
    sudo apt install openssh-server
    sudo apt update
    sudo apt upgrade
    sudo reboot
  • Install build tools
    sudo apt install git debhelper

Build Ceph

    mkdir $HOME/src
    cd $HOME/src
    git clone --recursive https://github.com/ceph/ceph.git  # This may take a while
    cd ceph
    git checkout master
    git submodule update --force --init --recursive
  • Cherry-pick the Power performance patches:
    git remote add kestrels https://github.com/kestrels/ceph.git
    git fetch --all
    git cherry-pick 59bed55a676ebbe3ad97d8ec005c2088553e4e11
  • Install prerequisites
    ./install-deps.sh
    sudo apt install python-requests python-flask resource-agents curl python-cherrypy python3-pip python-django python-dateutil python-djangorestframework
    sudo pip3 install ceph-deploy
    cd $HOME/src/ceph
    sudo dpkg-buildpackage -J$(nproc) # This will take a couple of hours (16 cpus)
  • Install the packages (note that python3-ceph-argparse will fail, but is safe to ignore)
    cd $HOME/src
    sudo dpkg -i *.deb

Create the ceph-deploy user

    sudo adduser ceph-deploy
    echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-deploy
    sudo chmod 0440 /etc/sudoers.d/ceph-deploy

Configure the ceph-deploy user environment

    su - ceph-deploy
    ssh-keygen
    node=`hostname`
    ssh-copy-id ceph-deploy@$node
    mkdir $HOME/ceph-cluster
    cd $HOME/ceph-cluster
    ceph-deploy new $node # If this fails, remove the bogus 127.0.1.1 entry from /etc/hosts
    echo 'osd pool default size = 2' >> ceph.conf
    echo 'osd crush chooseleaf type = 0' >> ceph.conf

Complete the Ceph deployment

    ceph-deploy install $node
    ceph-deploy mon create-initial
    drives="vda vdb vdc"  # the 1TB drives - check that these are correct for your system
    for drive in $drives; do ceph-deploy disk zap $node:$drive; ceph-deploy osd prepare $node:$drive; done
    for drive in $drives; do ceph-deploy osd activate $node:/dev/${drive}1; done
    ceph-deploy admin $node
    sudo chmod +r /etc/ceph/ceph.client.admin.keyring
    ceph -s # Check the state of the cluster

Configure CephFS

    ceph-deploy mds create $node
    ceph osd pool create cephfs_data 128
    ceph osd pool create cephfs_metadata 128
    ceph fs new cephfs cephfs_metadata cephfs_data
    sudo systemctl status ceph\*.service ceph\*.target # Ensure the ceph-osd, ceph-mon & ceph-mds daemons are running
    sudo mkdir /mnt/cephfs
    key=`grep key ~/ceph-cluster/ceph.client.admin.keyring | cut -d ' ' -f 3`
    sudo mount -t ceph $node:6789:/ /mnt/cephfs -o name=admin,secret=$key

References

  1. http://docs.ceph.com/docs/master/install/clone-source/
  2. http://docs.ceph.com/docs/master/install/build-ceph/
  3. http://palmerville.github.io/2016/04/30/single-node-ceph-install.html

Making Views in Drupal 7 and Drupal 8

This talk was written for DrupalGov Canberra 2017.

Download a PDF of the slides.

Or view below on slideshare.

 

Busyness: A Modern Health Crisis | LinkedIn

Benjamin Cardullo writes about an issue that we really have to take (more) seriously.  Particularly with mobile devices enabling us to be “connected” 24/7, being busy (or available) all of that time is not a good thing at all.

How do we measure professional success? Is it by the location of our office or the size of our paycheck? Is it measured by the dimensions of our home or the speed of our car? Ten years ago, those would have been the most prominent answers; however, today when someone is really pulling out the big guns, when they really want to show you how important they are, they’ll tell you all about their busy day and how they never had a moment to themselves.

Read the full article: https://www.linkedin.com/pulse/busyness-modern-health-crisis-benjamin-cardullo

This Week in HASS – term 1, week 9

The last week of our first unit – time to wrap up, round off, finish up any work not yet done and to perhaps get a preliminary taste of what’s to come in future units. Easter holidays are just around the corner. Our youngest students are having a final discussion about celebrations; slightly older students are finishing off their quest for Aunt Madge, by looking at landmarks and the older students are considering democracy in Australia, compared to its early beginnings in Ancient Greece.

Foundation to Year 3

Foundation/Prep (units F.1 and F.6) students are finishing off their discussions about celebrations, just in time for the Easter holidays, by looking at celebrations around the world. Teachers may wish to focus on how other countries celebrate Easter, with passion plays, processions and special meals. Students in Years 1 (unit 1.1), 2 (unit 2.1) and 3 (unit 3.1) are finishing off their Aunt Madge activity, looking at landmarks in Australia and around the world. There is the option for teachers to concentrate on Australian landmarks in this lesson, setting the stage for some local history studies in the next unit, next term.

Years 3 to 6

Ancient Greek pottery with votes scratched into the surface

Older students in Years 3 (unit 3.5), 4 (unit 4.1), 5 (unit 5.1) and 6 (unit 6.1) start looking ahead and laying the foundations for later studies on the Australian system of government and democracy, by comparing democracy as it arose in Ancient Greece, with the modern Australian democratic system. Our word for democracy comes from the Ancient Greek words demos (people) and kratia (power). Students move on from their discussion of Eratosthenes to looking at the Ancient Greek democratic system, which was to lay the groundwork for modern democratic systems around the world. Discussing Ancient Greek democracy leads students to consider the rights and responsibilities of being a citizen, at both the local and international levels. Students also consider who could and could not vote and what this meant for different groups. They can also touch on the ancient practice of ostracism, which can lead to ethical debates around fair election practices. By considering these fundamental concepts, students are better able to relate the ideas around modern democracy to their own lives.

 

March 26, 2017

AMBE+2 and MELPe 600 Compared to Codec 2

Yesterday I was chatting on the #freedv IRC channel, and a good question was asked: how close is Codec 2 to AMBE+2 ? Turns out – reasonably close. I also discovered, much to my surprise, that Codec 2 700C is better than MELPe 600!

Samples

[Audio sample tables from the original post: nine samples each comparing Original, AMBE+2 3000, AMBE+ 2400, Codec 2 3200 and Codec 2 2400; and nine samples comparing Original, MELPe 600 and Codec 2 700C.]

Here are all the samples in one big tar ball.

Discussion

I don’t have an AMBE or MELPe codec handy so I used the samples from the DVSI and DSP Innovations web sites. I passed the original “DAMA” speech samples found on these sites through Codec 2 (codec2-dev SVN revision 3053) at various bit rates. It turns out the DAMA source samples were the same for both the AMBE and MELPe demos, which was handy.

These particular samples are “kind” to codecs – I consistently get good results with them when I test with Codec 2. I’m guessing they also allow other codecs to be favorably demonstrated. During Codec 2 development I make a point of using “pathological” samples such as hts1a, cg_ref, kristoff, mmt1 that tend to break Codec 2. There are some samples of AMBE and MELP using my samples on the Codec 2 page.

I usually listen to samples through a laptop speaker, as I figure it’s close to the “use case” of a PTT radio. Small speakers do mask codec artifacts, making them sound better. I also tried a powered loudspeaker with the samples above. Through the loudspeaker I can hear AMBE reproducing the pitch fundamental – a bass note that can be heard on some males (e.g. sample 7), whereas Codec 2 is filtering that out.

I feel AMBE is a little better, Codec 2 is a bit clicky or impulsive (e.g. on sample 1). However it’s not far behind. In a digital radio application, with a small speaker and some acoustic noise about – I feel the casual listener wouldn’t discern much difference. Try replaying these samples through your smart-phone’s browser at an airport and let me know if you can tell them apart!

On the other hand, I think Codec 2 700C sounds better than MELPe 600. Codec 2 700C is more natural. To my ear MELPe has very coarse quantisation of the pitch, hence the “Mr Roboto” sing-song pitch jumps. The 700C level is a bit low, an artifact/bug to do with the post filter. Must fix that some time. As a bonus Codec 2 700C also has lower algorithmic delay, around 40ms compared to MELPe 600’s 90ms.

Curiously, Codec 2 uses just 1 voicing bit which means either voiced or unvoiced excitation in each frame. xMBE’s claim to fame (and indeed MELP) over simpler vocoders is the use of mixed excitation. Some of the spectrum is voiced (regular pitch harmonics), some unvoiced (noise like). This suggests the benefits of mixed excitation need to be re-examined.

I haven’t finished developing Codec 2. In particular Codec 2 700C is very much a “first pass”. We’ve had a big breakthrough this year with 700C and development will continue, with benefits trickling up to other modes.

However the 1300, 2400, 3200 modes have been stable for years and will continue to be supported.

Next Steps

Here is the blog post that kicked off Codec 2 – way back in 2009. Here is a video of my linux.conf.au 2012 Codec 2 talk that explains the motivations, IP issues around codecs, and a little about how Codec 2 works (slides here).

What I spoke about then is still true. Codec patents and license fees are a useless tax on business and stifle innovation. Proprietary codecs borrow as much as 95% of their algorithms from the public domain – which are then sold back to you. I have shown that open source codecs can meet and even exceed the performance of closed source codecs.

Wikipedia suggests that AMBE license fees range from USD$100k to USD$1M. For “one license fee” we can improve Codec 2 so it matches AMBE+2 in quality at 2400 and 3000 bit/s. The results will be released under the LGPL for anyone to use, modify, improve, and inspect at zero cost. Forever.

Maybe we should crowd source such a project?

Command Lines

This is how I generated the Codec 2 wave files:

~/codec2-dev/build_linux//src/c2enc 3200 9.wav - | ~/codec2-dev/build_linux/src/c2dec 3200 - - | sox -t raw -r 8000 -s -2 - 9_codec2_3200.wav

Links

DVSI AMBE sample page

DSP Innovations, MELPe samples. Can anyone provide me with TWELP samples from these guys? I couldn’t find any on the web that includes the input, uncoded source samples.

Trying an OpenSTEM unit without a subscription

We have received quite a few requests for this option, so we’ve made it possible. As we understand it, in many cases an individual teacher wants to try our materials (often on behalf of the school, as a trial) but the teacher has to fund this from their classroom budget, so we appreciate they need to limit their initial outlay.

While purchasing units with an active subscription still works out cheaper (we haven’t changed that pricing), we have tweaked our online store to now also allow the purchase of individual unit bundles, from as little as $49.50 (inc.GST) for the Understanding Our World® HASS+Science program units. That’s a complete term bundle with teacher handbook, student workbook, assessment guide, model answers and curriculum mapping, as well as all the base resource PDFs needed for that unit! After purchase, the PDF materials can be downloaded from the site (optionally many files together in a ZIP).

We’d love to welcome you as a new customer! From experience we know that you’ll love our materials. The exact pricing difference (between subscription and non-subscription) depends on the type of bundle (term unit, year bundle, or multi-year bundle) and is indicated per item.

Try OpenSTEM today! Browse our teacher unit bundles (Foundation Year to Year 6).

This includes units for Digital Technologies, the Ginger Beer Science project, as well as for our popular Understanding Our World® HASS+Science program.

March 24, 2017

Linux Security Summit 2017: CFP Announcement

LSS logo

The 2017 Linux Security Summit CFP (Call for Participation) is now open!

See the announcement here.

The summit this year will be held in Los Angeles, USA on 14-15 September. It will be co-located with the Open Source Summit (formerly LinuxCon), and the Linux Plumbers Conference. We’ll follow essentially the same format as the 2016 event (you can find the recap here).

The CFP closes on June 5th, 2017.

March 23, 2017

Erasure Coding for Programmers, Part 2

We left part 1 having explored GF(2^8) and RAID 6, and asking the question "what does all this have to do with Erasure Codes?"

Basically, the thinking goes "RAID 6 is cool, but what if, instead of two parity disks, we had an arbitrary number of parity disks?"

How would we do that? Well, let's introduce our new best friend: Coding Theory!

Say we want to transmit some data across an error-prone medium. We don't know where the errors might occur, so we add some extra information to allow us to detect and possibly correct for errors. This is a code. Codes are a largish field of engineering, but rather than show off my knowledge about systematic linear block codes, let's press on.

Today, our error-prone medium is an array of inexpensive disks. Now we make this really nice assumption about disks, namely that they are either perfectly reliable or completely missing. In other words, we consider that a disk will either be present or 'erased'. We come up with 'erasure codes' that are able to reconstruct data when it is known to be missing. (This is a slightly different problem to being able to verify and correct data that might or might not be subtly corrupted. Disks also have to deal with this problem, but it is not something erasure codes address!)

The particular code we use is a Reed-Solomon code. The specific details are unimportant, but there's a really good graphical outline of the broad concepts in sections 1 and 3 of the Jerasure paper/manual. (Don't go on to section 4.)

That should give you some background on how this works at a pretty basic mathematical level. Implementation is a matter of mapping that maths (matrix multiplication) onto hardware primitives, and making it go fast.

Scope

I'm deliberately not covering some pretty vast areas of what would be required to write your own erasure coding library from scratch. I'm not going to talk about how to compose the matrices, how to invert them, or anything like that. I'm not sure how that would be a helpful exercise - ISA-L and jerasure already exist and do that for you.

What I want to cover is an efficient implementation of some of the algorithms, once you have the matrices nailed down.

I'm also going to assume your library already provides a generic multiplication function in GF(2^8). That's required to construct the matrices, so it's a pretty safe assumption.

The beginnings of an API

Let's make this a bit more concrete.

This will be heavily based on the ISA-L API but you probably want to plug into ISA-L anyway, so that shouldn't be a problem.

What I want to do is build up from very basic algorithmic components into something useful.

The first thing we want to be able to do is Galois Field multiplication of an entire region of bytes by an arbitrary constant.

We basically want gf_vect_mul(size_t len, <something representing the constant>, unsigned char * src, unsigned char * dest)

Simple and slow approach

The simplest way is to do something like this:

void gf_vect_mul_simple(size_t len, unsigned char c, unsigned char * src, unsigned char * dest) {

    size_t i;
    for (i=0; i<len; i++) {
        dest[i] = gf_mul(c, src[i]);
    }
}

That does multiplication element by element using the library's supplied gf_mul function, which - as the name suggests - does GF(2^8) multiplication of a scalar by a scalar.

This works. The problem is that it is very, painfully, slow - in the order of a few hundred megabytes per second.

Going faster

How can we make this faster?

There are a few things we can try: if you want to explore a whole range of different ways to do this, check out the gf-complete project. I'm going to assume we want to skip right to the end and use the fastest approach we've found.

Cast your mind back to the RAID 6 paper (PDF) I talked about in part 1. That had a way of doing an efficient multiplication in GF(2^8) using vector instructions.

To refresh your memory, we split the multiplication into two parts - low bits and high bits, looked them up separately in a lookup table, and joined them with XOR. We then discovered that on modern Power chips, we could do that in one instruction with vpermxor.

So, a very simple way to do this would be:

  • generate the table for a
  • for each 16-byte chunk of our input:
    • load the input
    • do the vpermxor with the table
    • save it out

Generating the tables is reasonably straight-forward, in theory. Recall that the tables are a * {{00},{01},...,{0f}} and a * {{00},{10},..,{f0}} - a couple of loops in C will generate them without difficulty. ISA-L has a function to do this, as does gf-complete in split-table mode, so I won't repeat them here.
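If you do want to see the shape of it, here's a rough sketch (mine, not the ISA-L or gf-complete code - the name gf_vect_mul_init_simple is made up for illustration) that builds both tables using the scalar gf_mul we assumed the library provides:

/* Sketch only: build the two 16-entry lookup tables for multiplication by
 * the constant a, concatenated into one 32-byte buffer.
 * table[0..15]  holds a * {{00},{01},...,{0f}} (low nibble products)
 * table[16..31] holds a * {{00},{10},...,{f0}} (high nibble products)
 * gf_mul() is the library-supplied scalar GF(2^8) multiply. */
void gf_vect_mul_init_simple(unsigned char a, unsigned char *table)
{
        int i;

        for (i = 0; i < 16; i++) {
                table[i] = gf_mul(a, (unsigned char)i);
                table[16 + i] = gf_mul(a, (unsigned char)(i << 4));
        }
}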

So, let's recast our function to take the tables as an input rather than the constant a. Assume we're provided the two tables concatenated into one 32-byte chunk. That would give us:

void gf_vect_mul_v2(size_t len, unsigned char * table, unsigned char * src, unsigned char * dest)

Here's how you would do it in C:

void gf_vect_mul_v2(size_t len, unsigned char * table, unsigned char * src, unsigned char * dest) {
        vector unsigned char tbl1, tbl2, in, out;
        size_t i;

        /* Assume table, src, dest are aligned and len is a multiple of 16 */

        tbl1 = vec_ld(16, table);
        tbl2 = vec_ld(0, table);
        for (i=0; i<len; i+=16) {
            in = vec_ld(i, (unsigned char *)src);
            __asm__("vpermxor %0, %1, %2, %3" : "=v"(out) : "v"(tbl1), "v"(tbl2), "v"(in)
            vec_st(out, i, (unsigned char *)dest);
        }
}

There's a few quirks to iron out - making sure the table is laid out in the vector register in the way you expect, etc, but that generally works and is quite fast - my Power 8 VM does about 17-18 GB/s with non-cache-contained data with this implementation.

We can go a bit faster by doing larger chunks at a time:

    for (i=0; i<vlen; i+=64) {
            in1 = vec_ld(i, (unsigned char *)src);
            in2 = vec_ld(i+16, (unsigned char *)src);
            in3 = vec_ld(i+32, (unsigned char *)src);
            in4 = vec_ld(i+48, (unsigned char *)src);
            __asm__("vpermxor %0, %1, %2, %3" : "=v"(out1) : "v"(tbl1), "v"(tbl2), "v"(in1));
            __asm__("vpermxor %0, %1, %2, %3" : "=v"(out2) : "v"(tbl1), "v"(tbl2), "v"(in2));
            __asm__("vpermxor %0, %1, %2, %3" : "=v"(out3) : "v"(tbl1), "v"(tbl2), "v"(in3));
            __asm__("vpermxor %0, %1, %2, %3" : "=v"(out4) : "v"(tbl1), "v"(tbl2), "v"(in4));
            vec_st(out1, i, (unsigned char *)dest);
            vec_st(out2, i+16, (unsigned char *)dest);
            vec_st(out3, i+32, (unsigned char *)dest);
            vec_st(out4, i+48, (unsigned char *)dest);
    }

This goes at about 23.5 GB/s.

We can go one step further and do the core loop in assembler - that means we control the instruction layout and so on. I tried this: it turns out that for the basic vector multiply loop, if we turn off ASLR and pin to a particular CPU, we can see an improvement of a few percent (and a decrease in variability) over C code.

Building from vector multiplication

Once you're comfortable with the core vector multiplication, you can start to build more interesting routines.

A particularly useful one on Power turned out to be the multiply and add routine: like gf_vect_mul, except that rather than overwriting the output, it loads the output and xors the product in. This is a simple extension of the gf_vect_mul function so is left as an exercise to the reader.
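For reference, a scalar sketch of that exercise might look like the following (the function name is illustrative; a vectorised version follows the same shape as gf_vect_mul_v2, with an extra load and xor of the destination):

/* Sketch: multiply and add. Like gf_vect_mul_simple, but xor the product
 * into dest instead of overwriting it. */
void gf_vect_mad_simple(size_t len, unsigned char c,
                        unsigned char *src, unsigned char *dest)
{
        size_t i;

        for (i = 0; i < len; i++)
                dest[i] ^= gf_mul(c, src[i]);
}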

The next step would be to start building erasure coding proper. Recall that to get an element of our output, we take a dot product: we take the corresponding input element of each disk, multiply it with the corresponding GF(2^8) coding matrix element and sum all those products. So all we need now is a dot product algorithm.

One approach is the conventional dot product:

  • for each element
    • zero accumulator
    • for each source
      • load input[source][element]
      • do GF(2^8) multiplication
      • xor into accumulator
    • save accumulator to output[element]

The other approach is multiply and add:

  • for each source
    • for each element
      • load input[source][element]
      • do GF(2^8) multiplication
      • load output[element]
      • xor in product
      • save output[element]

The dot product approach has the advantage of fewer writes. The multiply and add approach has the advantage of better cache/prefetch performance. The approach you ultimately go with will probably depend on the characteristics of your machine and the length of data you are dealing with.

For what it's worth, ISA-L ships with only the first approach in x86 assembler, and Jerasure leans heavily towards the second approach.
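To make the conventional dot product concrete, here's a scalar sketch (again, the name and the flat coefficient array are just for illustration - a real implementation would use the table-driven vector multiply from earlier rather than calling gf_mul per byte):

/* Sketch of the conventional dot product: for each output byte, multiply the
 * corresponding byte of every input by its coding matrix coefficient and xor
 * the products together. coeffs[j] is the GF(2^8) coefficient for input j. */
void gf_vect_dot_prod_simple(size_t len, int k, unsigned char *coeffs,
                             unsigned char **src, unsigned char *dest)
{
        size_t i;
        int j;

        for (i = 0; i < len; i++) {
                unsigned char acc = 0;          /* zero accumulator */

                for (j = 0; j < k; j++)
                        acc ^= gf_mul(coeffs[j], src[j][i]);

                dest[i] = acc;                  /* save accumulator to output */
        }
}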

Once you have a vector dot product sorted, you can build a full erasure coding setup: build your tables with your library, then do a dot product to generate each of your outputs!

In ISA-L, this is implemented something like this:

/*
 * ec_encode_data_simple(length of each data input, number of inputs,
 *                       number of outputs, pre-generated GF(2^8) tables,
 *                       input data pointers, output code pointers)
 */
void ec_encode_data_simple(int len, int k, int rows, unsigned char *g_tbls,
                           unsigned char **data, unsigned char **coding)
{
        while (rows) {
                gf_vect_dot_prod(len, k, g_tbls, data, *coding);
                g_tbls += k * 32;
                coding++;
                rows--;
        }
}

Going faster still

Eagle eyed readers will notice that however we generate an output, we have to read all the input elements. This means that if we're doing a code with 10 data disks and 4 coding disks, we have to read each of the 10 inputs 4 times.

We could do better if we could calculate multiple outputs for each pass through the inputs. This is a little fiddly to implement, but does lead to a speed improvement.

ISA-L is an excellent example here. Intel goes up to 6 outputs at once: the number of outputs you can do is only limited by how many vector registers you have to put the various operands and results in.
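In scalar sketch form, the idea looks something like this (two outputs per pass; names and layout are illustrative, not the ISA-L code):

/* Sketch: compute two outputs per pass over the inputs, so each input byte
 * is loaded once instead of twice. coeffs holds two rows of k coefficients. */
void gf_2vect_dot_prod_simple(size_t len, int k, unsigned char *coeffs,
                              unsigned char **src,
                              unsigned char *dest1, unsigned char *dest2)
{
        size_t i;
        int j;

        for (i = 0; i < len; i++) {
                unsigned char acc1 = 0, acc2 = 0;

                for (j = 0; j < k; j++) {
                        unsigned char in = src[j][i];   /* one load, two uses */

                        acc1 ^= gf_mul(coeffs[j], in);
                        acc2 ^= gf_mul(coeffs[k + j], in);
                }
                dest1[i] = acc1;
                dest2[i] = acc2;
        }
}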

Tips and tricks

  • Benchmarking is tricky. I do the following on a bare-metal, idle machine, with ASLR off and pinned to an arbitrary hardware thread. (Code is for the fish shell)

    for x in (seq 1 50)
        setarch ppc64le -R taskset -c 24 erasure_code/gf_vect_mul_perf
    end | awk '/MB/ {sum+=$13} END {print sum/50, "MB/s"}'
    
  • Debugging is tricky; the more you can do in C and the less you do in assembly, the easier your life will be.

  • Vector code is notoriously alignment-sensitive - if you can't figure out why something is wrong, check alignment. (Pro-tip: ISA-L does not guarantee the alignment of the gftbls parameter, and many of the tests supply an unaligned table from the stack. For testing __attribute__((aligned(16))) is your friend!)

  • Related: GCC is moving towards assignment over vector intrinsics, at least on Power:

    vector unsigned char a;
    unsigned char * data;
    // good, also handles word-aligned data with VSX
    a = *(vector unsigned char *)data;
    // bad, requires special handling of non-16-byte aligned data
    a = vec_ld(0, (unsigned char *) data);
    

Conclusion

Hopefully by this point you're equipped to figure out how your erasure coding library of choice works, and write your own optimised implementation (or maintain an implementation written by someone else).

I've referred to a number of resources throughout this series:

If you want to go deeper, I also read the following and found them quite helpful in understanding Galois Fields and Reed-Solomon coding:

For a more rigorous mathematical approach to rings and fields, a university mathematics course may be of interest. For more on coding theory, a university course in electronics engineering may be helpful.

Guess the Artefact!

Today we are announcing a new challenge for our readers – Guess the Artefact! We post pictures of an artefact and you can guess what it is. The text will slowly reveal the answer, through a process of examination and deduction – see if you can guess what it is, before the end. We are starting this challenge with an item from our year 6 Archaeological Dig workshop. Year 6 (unit 6.3) students concentrate on Federation in their Australian History segment – so that’s your first clue! Study the image and then start reading the text below.

OpenSTEM archaeological dig artefact (C) 2016 OpenSTEM Pty Ltd

Our first question is what is it? Study the image and see if you can work out what it might be – it’s a dirty, damaged piece of paper. It seems to be old. Does it have a date? Ah yes, there are 3 dates – 23, 24 and 25 October, 1889, so we deduce that it must be old, dating to the end of the 19th century. We will file the exact date for later consideration. We also note references to railways. The layout of the information suggests a train ticket. So we have a late 19th century train ticket!

Now why do we have this train ticket and whose train ticket might it have been? The ticket is First Class, so this is someone who could afford to travel in style. Where were they going? The railways mentioned are Queensland Railways, Great Northern Railway, New South Wales Railways and the stops are Brisbane, Wallangara, Tenterfield and Sydney. Now we need to do some research. Queensland Railways and New South Wales Railways seem self-evident, but what is Great Northern Railway? A brief hunt reveals several possible candidates: 1) a contemporary rail operator in Victoria; 2) a line in Queensland connecting Mt Isa and Townsville and 3) an old, now unused railway in New South Wales. We can reject option 1) immediately. Option 2) is the right state, but the towns seem unrelated. That leaves option 3), which seems most likely. Looking into the NSW option in more detail we note that it ran between Sydney and Brisbane, with a stop at Wallangara to change gauge – Bingo!

Wallangara Railway Station

More research reveals that the line reached Wallangara in 1888, the year before this ticket was issued. Only after 1888 was it possible to travel from Brisbane to Sydney by rail, albeit with a compulsory stop at Wallangara. We note also that the ticket contains a meal voucher for dinner at the Railway Refreshment Rooms in Wallangara. Presumably passengers overnighted in Wallangara before continuing on to Sydney on a different train and rail gauge. Checking the dates on the ticket, we can see evidence of an overnight stop, as the next leg continues from Wallangara on the next day (24 Oct 1889). However, next we come to some important information. From Wallangara, the next leg of the journey represented by this ticket was only as far as Tenterfield. Looking on a map, we note that Tenterfield is only about 25 km away – hardly a day’s train ride, more like an hour or two at the most (steam trains averaged about 24 km/hr at the time). From this we deduce that the ticket holder wanted to stop at Tenterfield and continue their journey on the next day.

We know that we’re studying Australian Federation history, so the name Tenterfield should start to ring a bell – what happened in Tenterfield in 1889 that was relevant to Australian Federation history? The answer, of course, is that Henry Parkes delivered his Tenterfield Oration there, and the date? 24 October, 1889! If we look into the background, we quickly discover that Henry Parkes was on his way from Brisbane back to Sydney, when he stopped in Tenterfield. He had been seeking support for Federation from the government of the colony of Queensland. He broke his journey in Tenterfield, a town representative of those towns closer to the capital of another colony than their own, which would benefit from the free trade arrangements flowing from Federation. Parkes even discussed the issue of different rail gauges as something that would be solved by Federation! We can therefore surmise that this ticket may well be the ticket of Henry Parkes, documenting his journey from Brisbane to Sydney in October, 1889, during which he stopped and delivered the Tenterfield Oration!

This artefact is therefore relevant as a source for anyone studying Federation history – as well as giving us a more personal insight into the travels of Henry Parkes in 1889, it allows us to consider aspects of life at the time:

  • the building of railway connections across Australia, in a time before motor cars were in regular use;
  • the issue of different size railway gauges in the different colonies and what practical challenges that posed for a long distance rail network;
  • the ways in which people travelled and the speed with which they could cross large distances;
  • what rail connections would have meant for small, rural towns; and
  • why the railway companies might have provided meal vouchers, to mention just a few.

These are all sidelines of inquiry, which students may be interested to pursue, and which might help them to engage with the subject matter in more detail.

In our Archaeological Dig Workshops, we not only engage students in the processes and physical activities of the dig, but we provide opportunities for them to use the artefacts to practise deduction, reasoning and research – true inquiry-based learning, imitating real-world processes and far more engaging and empowering than more traditional bookwork.

March 22, 2017

LUV Main April 2017 Meeting: SageMath / Simultaneous multithreading

PLEASE NOTE NEW LOCATION

Tuesday, April 4, 2017
6:30 PM to 8:30 PM
The Dan O'Connell Hotel
225 Canning Street, Carlton VIC 3053

Speakers:

• Adetokunbo "Xero" Arogbonlo, SageMath
• Stewart Smith, Simultaneous multithreading

Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


LUV Beginners April Meeting: Compiling Software from Source Code

Apr 22 2017 12:30
Apr 22 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

PLEASE NOTE NEW DATE due to the Easter holiday

Lev will offer a tutorial and demonstration on compiling software from source code.  This is useful when you need something not already packaged by your distribution, you need a newer version than is currently packaged, or to select your own configuration or make your own changes.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Flir ONE Issues

FLIR ONE for iOS or Android with solid orange power light

Troubleshooting steps when the FLIR ONE has a solid red/orange power light that will not turn to blinking green:

  • Perform a hard reset on the FLIR ONE by holding the power button down for 30 seconds.
  • Let the battery drain overnight and try charging it again (with another charger if possible) for a whole hour.

March 20, 2017

This Week in HASS – term 1, week 8

As we move into the final weeks of term, and the Easter holiday draws closer, our youngest students are looking at different kinds of celebrations in Australia. Students in years 1 to 3 are looking at their global family and students in years 3 to 6 are chasing Aunt Madge around the world, being introduced to Eratosthenes and examining Shadows and Light.

Foundation to Year 3

Our standalone Foundation/Prep students (Unit F.1) are studying celebrations in Australia and thinking about which is their favourite. It may well be Easter with its bunnies and chocolate eggs, which lies just around the corner now! They also get a chance to consider whether we should add any extra celebrations into our calendar in Australia. Those Foundation/Prep students in an integrated class with Year 1 students (Unit F.5), as well as Year 1 (Unit 1.1), 2 (Unit 2.1) and 3 (Unit 3.1) students are investigating where they, and other family members, were born and finding these places on the world map. Students are also examining features of the world map – including the different continents, North and South Poles, the equator and the oceans. Students also get a chance to undertake the Aunt Madge’s Suitcase Activity, in which they follow Aunt Madge around the world, learning about different countries and landmarks, as they go. Aunt Madge’s Suitcase is extremely popular with students of all ages – as it can easily be adapted to cover material at different depths. The activity encourages students to interact with the world map, whilst learning to recognise major natural and cultural landmarks in Australia and around the world.

Years 3 to 6

Aunt Madge

Students in Year 3 (Unit 3.5), who are integrated with Year 4, as well as the Year 4 (Unit 4.1), 5 (Unit 5.1) and 6 (Unit 6.1) students, have moved on to a new set of activities this week. The older students approach the Aunt Madge’s Suitcase Activity in more depth, deriving what items Aunt Madge has packed in her suitcase to match the different climates which she is visiting, as well as delving into each landmark visited in more detail. These landmarks are both natural and cultural and, although several are in Australia, examples are given from around the world, allowing teachers to choose their particular focus each time the activity is undertaken. As well as following Aunt Madge, students are introduced to Eratosthenes. Known as the ‘Father of Geography’, Eratosthenes also calculated the circumference of the Earth. There is an option for teachers to overlap with parts of the Maths curriculum here. Eratosthenes also studied the planets and used shadows and sunlight for his calculations, which provides the link for the Science activities – Shadows and Light, Sundials and Planets of the Solar System.

Next week is the last week of our first term units. By now students have completed the bulk of their work for the term, and teachers are able to assess most of the HASS areas already.

 

March 19, 2017

Erasure Coding for Programmers, Part 1

Erasure coding is an increasingly popular storage technology - allowing the same level of fault tolerance as replication with a significantly reduced storage footprint.

Increasingly, erasure coding is available 'out of the box' on storage solutions such as Ceph and OpenStack Swift. Normally, you'd just pull in a library like ISA-L or jerasure, and set some config options, and you'd be done.

This post is not about that. This post is about how I went from knowing nothing about erasure coding to writing POWER optimised routines to make it go fast. (These are in the process of being polished for upstream at the moment.) If you want to understand how erasure coding works under the hood - and in particular if you're interested in writing optimised routines to make it run quickly in your platform - this is for you.

What are erasure codes anyway?

I think the easiest way to begin thinking about erasure codes is "RAID 6 on steroids". RAID 6 allows you to have up to 255 data disks and 2 parity disks (called P and Q), thus allowing you to tolerate the failure of up to 2 arbitrary disks without data loss.

Erasure codes allow you to have k data disks and m 'parity' or coding disks. You then have a total of m + k disks, and you can tolerate the failure of up to m without losing data.

The downside of erasure coding is that computing what to put on those parity disks is CPU intensive. Lets look at what we put on them.

RAID 6

RAID 6 is the easiest way to get started on understanding erasure codes for a number of reasons. H Peter Anvin's paper on RAID 6 in the Linux kernel is an excellent start, but does dive in a bit quickly to the underlying mathematics. So before reading that, read on!

Rings and Fields

As programmers we're pretty comfortable with modular arithmetic - the idea that if you have:

unsigned char a = 255;
a++;

the new value of a will be 0, not 256.

This is an example of an algebraic structure called a ring.

Rings obey certain laws. For our purposes, we'll consider the following incomplete and somewhat simplified list:

  • There is an addition operation.
  • There is an additive identity (normally called 0), such that 'a + 0 = a'.
  • Every element has an additive inverse, that is, for every element 'a', there is an element -a such that 'a + (-a) = 0'
  • There is a multiplication operation.
  • There is a multiplicative identity (normally called 1), such that 'a * 1 = a'.

These operations aren't necessarily addition or multiplication as we might expect from the integers or real numbers. For example, in our modular arithmetic example, we have 'wrap around'. (There are also certain rules the addition and multiplication rules must satisfy - we are glossing over them here.)

One thing a ring doesn't have is a 'multiplicative inverse'. The multiplicative inverse of some non-zero element of the ring (call it a), is the value b such that a * b = 1. (Often instead of b we write 'a^-1', but that looks bad in plain text, so we shall stick to b for now.)

We do have some inverses in 'mod 256': the inverse of 3 is 171 as 3 * 171 = 513, and 513 = 1 mod 256, but there is no b such that 2 * b = 1 mod 256.

If every non-zero element of our ring had a multiplicative inverse, we would have what is called a field.
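If you want to convince yourself which elements of 'mod 256' have inverses, a quick brute-force search does the job (this is just a sketch of mine, not anything from the references): only the odd values turn up an inverse, which is why the integers modulo 256 form a ring but not a field.

/* Sketch: brute-force search for multiplicative inverses modulo 256.
 * Only odd values of a will print an inverse. */
#include <stdio.h>

int main(void)
{
        for (int a = 1; a < 256; a++) {
                int found = 0;

                for (int b = 1; b < 256; b++) {
                        if ((a * b) % 256 == 1) {
                                printf("%3d has inverse %3d\n", a, b);
                                found = 1;
                                break;
                        }
                }
                if (!found)
                        printf("%3d has no inverse mod 256\n", a);
        }
        return 0;
}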

Now, let's look at the integers modulo 2, that is, 0 and 1.

We have this for addition:

+ 0 1
0 0 1
1 1 0

Eagle-eyed readers will notice that this is the same as XOR.

For multiplication:

* 0 1
0 0 0
1 0 1

As we said, a field is a ring where every non-zero element has a multiplicative inverse. As we can see, the integers modulo 2 shown above is a field: it's a ring, and 1 is its own multiplicative inverse.

So this is all well and good, but you can't really do very much in a field with 2 elements. This is sad, so we make bigger fields. For this application, we consider the Galois Field with 256 elements - GF(2^8). This field has some surprising and useful properties.

Remember how we said that integers modulo 256 weren't a field because they didn't have multiplicative inverses? I also just said that GF(2^8) also has 256 elements, but is a field - i.e., it does have inverses! How does that work?

Consider an element in GF(2^8). There are 2 ways to look at an element in GF(2^8). The first is to consider it as an 8-bit number. So, for example, let's take 100. We can express that as an 8 bit binary number: 0b01100100.

We can write that more explicitly as a sum of powers of 2:

0 * 2^7 + 1 * 2^6 + 1 * 2^5 + 0 * 2^4 + 0 * 2^3 + 1 * 2^2 + 0 * 2 + 0 * 1
= 2^6 + 2^5 + 2^2

Now the other way we can look at elements in GF(2^8) is to replace the '2's with 'x's, and consider them as polynomials. Each of our bits then represents the coefficient of a term of a polynomial, that is:

0 x^7 + 1 x^6 + 1 x^5 + 0 x^4 + 0 x^3 + 1 x^2 + 0 x + 0 * 1

or more simply

x^6 + x^5 + x^2

Now, and this is important: each of the coefficients is an element of the integers modulo 2: x + x = 2x = 0 as 2 mod 2 = 0. There is no concept of 'carrying' in this addition.

Let's try: what's 100 + 79 in GF(2^8)?

100 = 0b01100100 => x^6 + x^5 +       x^2
 79 = 0b01001111 => x^6 +       x^3 + x^2 + x + 1

100 + 79         =>   0 + x^5 + x^3 +   0 + x + 1
                 =    0b00101011 = 43

So, 100 + 79 = 43 in GF(2^8)

You may notice we could have done that much more efficiently: we can add numbers in GF(2^8) by just XORing their binary representations together. Subtraction, amusingly, is the same as addition: 0 + x = x = 0 - x, as -1 is congruent to 1 modulo 2.

So at this point you might be wanting to explore a few additions yourself. Fortunately there's a lovely tool that will allow you to do that:

sudo apt install gf-complete-tools
gf_add $A $B 8

This will give you A + B in GF(2^8).

> gf_add 100 79 8
43

Excellent!

So, hold on to your hats, as this is where things get really weird. In the modular arithmetic example, we considered the elements of our ring to be numbers, and we performed our addition and multiplication modulo 256. In GF(2^8), we consider our elements as polynomials and we perform our addition and multiplication modulo a polynomial. There is one conventional polynomial used in applications:

0x11d => 0b1 0001 1101 => x^8 + x^4 + x^3 + x^2 + 1

It is possible to use other polynomials if they satisfy particular requirements, but for our applications we don't need to worry as we will always use 0x11d. I am not going to attempt to explain anything about this polynomial - take it as an article of faith.

So when we multiply two numbers, we multiply their polynomial representations. Then, to find out what that is modulo 0x11d, we do polynomial long division by 0x11d, and take the remainder.

Some examples will help.

Let's multiply 100 by 3.

100 = 0b01100100 => x^6 + x^5 + x^2
  3 = 0b00000011 => x + 1

(x^6 + x^5 + x^2)(x + 1) = x^7 + x^6 + x^3 + x^6 + x^5 + x^2
                         = x^7 + x^5 + x^3 + x^2

Notice that some of the terms have disappeared: x^6 + x^6 = 0.

The degree (the largest power of a term) is 7. 7 is less than the degree of 0x11d, which is 8, so we don't need to do anything: the remainder modulo 0x11d is simply x^7 + x^5 + x^3 + x^2.

In binary form, that is 0b10101100 = 172, so 100 * 3 = 172 in GF(2^8).

Fortunately gf-complete-tools also allows us to check multiplications:

> gf_mult 100 3 8
172

Excellent!

Now let's see what happens if we multiply by a larger number. Let's multiply 100 by 5.

100 = 0b01100100 => x^6 + x^5 + x^2
  5 = 0b00000101 => x^2 + 1

(x^6 + x^5 + x^2)(x^2 + 1) = x^8 + x^7 + x^4 + x^6 + x^5 + x^2
                           = x^8 + x^7 + x^6 + x^5 + x^4 + x^2

Here we have an x^8 term, so we have a degree of 8. This means we will get a different remainder when we divide by our polynomial. We do this with polynomial long division, which you will hopefully remember if you did some solid algebra in high school.

                              1
                           ---------------------------------------------
x^8 + x^4 + x^3 + x^2 + 1 | x^8 + x^7 + x^6 + x^5 + x^4       + x^2
                          - x^8                   + x^4 + x^3 + x^2 + 1
                            -------------------------------------------
                          =       x^7 + x^6 + x^5       + x^3       + 1

So we have that our original polynomial (x^8 + x^4 + x^3 + x^2 + 1) is congruent to (x^7 + x^6 + x^5 + x^3 + 1) modulo the polynomial 0x11d. Looking at the binary representation of that new polynomial, we have 0b11101001 = 233.

Sure enough:

> gf_mult 100 5 8
233

Just to solidify the polynomial long division a bit, let's try a slightly larger example, 100 * 9:

100 = 0b01100100 => x^6 + x^5 + x^2
  9 = 0b00001001 => x^3 + 1

(x^6 + x^5 + x^2)(x^3 + 1) = x^9 + x^8 + x^5 + x^6 + x^5 + x^2
                           = x^9 + x^8 + x^6 + x^2

Doing long division to reduce our result:

                              x
                           -----------------------------------
x^8 + x^4 + x^3 + x^2 + 1 | x^9 + x^8       + x^6                   + x^2
                          - x^9                   + x^5 + x^4 + x^3       + x
                            -------------------------------------------------
                          =       x^8       + x^6 + x^5 + x^4 + x^3 + x^2 + x

We still have a polynomial of degree 8, so we can do another step:

                              x +   1
                           -----------------------------------
x^8 + x^4 + x^3 + x^2 + 1 | x^9 + x^8       + x^6                   + x^2
                          - x^9                   + x^5 + x^4 + x^3       + x
                            -------------------------------------------------
                          =       x^8       + x^6 + x^5 + x^4 + x^3 + x^2 + x
                          -       x^8                   + x^4 + x^3 + x^2     + 1
                                  -----------------------------------------------
                          =                   x^6 + x^5                   + x + 1

We now have a polynomial of degree less than 8 that is congruent to our original polynomial modulo 0x11d, and the binary form is 0b01100011 = 99.

> gf_mult 100 9 8
99

This process can be done more efficiently, of course - but understanding what is going on will make you much more comfortable with what is going on!
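For the curious, here is what a more efficient multiply looks like as a C sketch (mine, not from the paper): shift-and-reduce, where we multiply by x one bit at a time and xor in 0x1d whenever an x^8 term would appear - the same trick as the multiply-by-{02} snippet coming up in the RAID 6 section.

/* Sketch: GF(2^8) multiplication by shift-and-reduce, modulo 0x11d. */
unsigned char gf_mul(unsigned char a, unsigned char b)
{
        unsigned char p = 0;
        int i;

        for (i = 0; i < 8; i++) {
                if (b & 1)
                        p ^= a;         /* add (xor) a into the product */
                b >>= 1;
                /* multiply a by x, reducing if it overflows past x^7 */
                a = (a << 1) ^ ((a & 0x80) ? 0x1d : 0);
        }
        return p;
}

You can check it against gf_mult from gf-complete-tools: gf_mul(100, 5) comes out as 233, matching the long division above.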

I will not try to convince you that all multiplicative inverses exist in this magic shadow land of GF(2^8), but it's important for the rest of the algorithms to work that they do exist. Trust me on this.

Back to RAID 6

Equipped with this knowledge, you are ready to take on RAID6 in the kernel (PDF) sections 1 - 2.

Pause when you get to section 3 - this snippet is a bit magic and benefits from some explanation:

Multiplication by {02} for a single byte can be implemented using the C code:

uint8_t c, cc;
cc = (c << 1) ^ ((c & 0x80) ? 0x1d : 0);

How does this work? Well:

Say you have a binary number 0bNMMM MMMM. Multiplication by 2 gives you 0bNMMMMMMM0, which is 9 bits. Now, there are two cases to consider.

If your leading bit (N) is 0, your product doesn't have an x^8 term, so we don't need to reduce it modulo the irreducible polynomial.

If your leading bit is 1 however, your product is x^8 + something, which does need to be reduced. Fortunately, because we took an 8 bit number and multiplied it by 2, the largest term is x^8, so we only need to reduce it once. So we xor our number with our polynomial to subtract it.

We implement this by letting the top bit overflow out and then xoring the lower 8 bits with the low 8 bits of the polynomial (0x1d).

So, back to the original statement:

(c << 1) ^ ((c & 0x80) ? 0x1d : 0)
    |          |          |     |
    > multiply by 2       |     |
               |          |     |
               > is the high bit set - will the product have an x^8 term?
                          |     |
                          > if so, reduce by the polynomial
                                |
                                > otherwise, leave alone

Hopefully that makes sense.

Key points

It's critical you understand the section on Altivec (the vperm stuff), so let's cover it in a bit more detail.

Say you want to do A * V, where A is a constant and V is an 8-bit variable. We can express V as V_a + V_b, where V_a is the bottom 4 bits of V, and V_b is the top 4 bits. A * V = A * V_a + A * V_b

We can then make lookup tables for multiplication by A.

If we did this in the most obvious way, we would need a 256 entry lookup table. But by splitting things into the top and bottom halves, we can reduce that to two 16 entry tables. For example, say A = 02.

V_a A * V_a
00 00
01 02
02 04
... ...
0f 1e
V_b A * V_b
00 00
10 20
20 40
... ...
f0 fd

We then use vperm to look up entries in these tables and vxor to combine our results.

So - and this is a key point - for each A value we wish to multiply by, we need to generate a new lookup table.

So if we wanted A = 03:

V_a A * V_a
00 00
01 03
02 06
... ...
0f 11
V_b A * V_b
00 00
10 30
20 60
... ...
f0 0d

One final thing is that Power8 adds a vpermxor instruction, so we can reduce the entire 4 instruction sequence in the paper:

vsrb v1, v0, v14
vperm v2, v12, v12, v0
vperm v1, v13, v13, v1
vxor v1, v2, v1

to 1 vpermxor:

vpermxor v1, v12, v13, v0

Isn't POWER grand?

OK, but how does this relate to erasure codes?

I'm glad you asked.

Galois Field arithmetic, and its application in RAID 6 is the basis for erasure coding. (It's also the basis for CRCs - two for the price of one!)

But, that's all to come in part 2, which will definitely be published before 7 April!

Many thanks to Sarah Axtens who reviewed the mathematical content of this post and suggested significant improvements. All errors and gross oversimplifications remain my own. Thanks also to the OzLabs crew for their feedback and comments.

March 18, 2017

Codec 2 700C and Short LDPC Codes

In the last blog post I evaluated FreeDV 700C over the air. This week I’ve been simulating the use of short LDPC FEC codes with Codec 2 700C over AWGN and HF channels.

In my HF Digital Voice work to date I have shied away from FEC:

  1. We didn’t have the bandwidth for the extra bits required for FEC.
  2. Modern, high performance codes tend to have large block sizes (1000’s of bits) which leads to large latency (several seconds) when applied to low bit rate speech.
  3. The error rates we are interested in (e.g. 10% raw, 1% after FEC decoder) are unusual – many codes don’t work well.

However with Codec 2 pushed down to 700 bit/s we now have enough bandwidth for a rate 1/2 code inside a standard 2kHz SSB channel. Over coffee a few weeks ago, Bill VK5DSP offered to develop some short LDPC codes for me specifically for this application. He sent me an Octave simulation of rate 1/2 and 2/3 codes of length 112 and 56 bits. Codec 2 700C has 28 bit frames (40ms of speech each at 700 bit/s), so this corresponds to 4 or 2 Codec 2 700C frames, which would introduce latencies of between 80 and 160ms – quite acceptable for Push To Talk (PTT) radio.

I re-factored Bill’s simulation code to produce ldpc_short.m. This measures BER and PER for Bill’s short LDPC codes, and also plots curves for theoretical, HF multipath channels, a Golay (24,12) code, and the current diversity scheme used in FreeDV 700C.

To check my results I compared the Golay BER and ideal HF multipath (Rayleigh Fading) channel curves to other people’s work. Always a good idea to spot check a few values and make sure they are sensible. I took a simple approach to get results in a reasonable amount of coding time (about 1 day of work in this case). This simulation runs at the symbol rate, and assumes ideal synchronisation. My other modem work (i.e. experience) lets me move back and forth between this sort of simulation and real world modems, for example accounting for synchronisation losses.

Error Distribution and Packet Error Rate

I had an idea that Packet Error Rate (PER) might be important. Without FEC, bit errors are scattered randomly about. At our target 1% BER, many frames will have 1 or 2 bit errors. As discussed in the last post Codec 2 700C is sensitive to bit errors as “every bit counts”. For example one bit error in the Vector Quantiser (VQ) index (a big look up table) can throw the speech spectrum right off.

However a LDPC decoder will tend to correct all errors in a codeword, or “die trying” (i.e. fail badly). So an average output BER of say 1% will consist of a bunch of perfect frames, plus a completely trashed one every now and again. Digital voice works better with this style of error pattern than a few random errors in each codec packet. So for a given BER, a system that delivers a lower PER is better for our application. I’ve guesstimated a 10% PER target for intelligible low bit rate speech. Let’s see how that works out…
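
As a rough sanity check of that intuition, here is a back-of-the-envelope Python sketch (my own illustrative numbers, not output from Bill’s simulation) comparing the PER of 28 bit codec frames when a 1% average BER is scattered randomly, versus concentrated in the occasional trashed codeword:

# Sketch: PER of 28 bit codec frames at an average 1% BER, for randomly
# scattered bit errors versus burst errors confined to failed codewords.

N_BITS = 28          # bits per Codec 2 700C frame
BER = 0.01           # average output bit error rate

# Random errors: a frame is bad if any of its 28 bits is flipped.
per_random = 1 - (1 - BER) ** N_BITS

# Bursty errors: assume the decoder either outputs a perfect codeword or
# trashes it (say half its bits wrong), so far fewer frames are hit.
bits_wrong_per_bad_frame = N_BITS / 2
per_burst = BER * N_BITS / bits_wrong_per_bad_frame

print("random errors: PER ~ %.1f%%" % (100 * per_random))   # ~24.5%
print("bursty errors: PER ~ %.1f%%" % (100 * per_burst))    # ~2.0%

Same average BER, roughly an order of magnitude fewer damaged codec frames.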

Results

Here are the BER and PER curves for an AWGN channel:

Here are the same curves for HF (multipath fading) channel:

I’ve included a Golay (24,12) block code (hard decision) and uncoded PSK for comparison to the AWGN curves, and the diversity system on the HF curves. The HF channel is modelled as two paths with 1Hz Doppler spread and a 1ms delay.

The best LDPC code reaches the 1% BER/10% PER point at 2dB Eb/No (AWGN) and 6dB (HF multipath). Comparing BER, the coding gain is 2.5 and 3dB (AWGN and HF). Comparing PER, the coding gain is 3 and 5dB (AWGN and HF).

Here is a plot of the error pattern over time using the LDPC code on a HF channel at Eb/No of 6dB:

Note the errors are confined to short bursts – isolated packets where the decoder fails. Even though the average BER is 1%, most of the speech is error free. This is a very nice error distribution for digital speech.

Speech Samples

Here are some speech samples, comparing the current diversity scheme used for FreeDV 700C to LDPC, for AWGN and HF channels. These were simulated by extracting the error pattern from the simulation then inserting these errors in a Codec 2 700C bit stream (see command lines section below).

AWGN Eb/No 2dB – Diversity | LDPC
HF Eb/No 6dB – Diversity | LDPC

Next Steps

These results are very encouraging and suggest a gain of 2 to 5dB over FreeDV 700C, and better error distribution (lower PER). Next step is to develop FreeDV 700D – a real world implementation using the 112 data-bit rate 1/2 LDPC code. This will require 4 frames of buffering, and some sort of synchronisation to determine the 112 bit frame boundaries. Fortunately much of the C code for these LDPC codes already exists, as it was developed for the Wenet High Altitude Balloon work.

If most frames at the decoder input are now error free, we can consider more efficient (but less robust) techniques for Codec 2, such as prediction (delta coding). This will decrease the codec bit rate for a given speech quality. We could then choose to reduce our bit rate (making the system more robust for a given channel SNR), or raise speech quality while maintaining the same bit rate.

Command Lines

To generate the decoded speech, first run the Octave ldpc_short simulation to generate an “error pattern file”, then subject the Codec 2 700C bit stream to these error patterns:

octave:67> ldpc_short
$ ./c2enc 700C ../../raw/ve9qrp_10s.raw - | ./insert_errors - - ../../octave/awgn_2dB_ldpc.err 28 | ./c2dec 700C - - | aplay -f S16_LE -

The simulation generates .eps files, as direct generation of PNG leads to font size issues. Converting EPS to PNG without transparent background:

mogrify -resize 700x600 -density 300 -flatten -format png *.eps

However I still feel the images are a bit fuzzy, especially the text. Any ideas? Here’s the eps file if someone would like to try to get a nicer PNG conversion for me! The EPS file looks great at any scaling when I render it using the Ubuntu document viewer.

Update: A friend of mine (Erich) has suggested using GIMP for the conversion. This does seem to work well and has options for text and line anti-aliasing. It would be nice to be able to generate nice PNGs directly from Octave – my best approach so far is to capture screen shots.

Links

LowSNR site Bill VK5DSP writes about his experiments in low SNR communications.

Wenet High Altitude Balloon SSDV System developed with Mark VK5QI and Bill VK5DSP that uses LDPC codes.

LDPC using Octave and the CML library

FreeDV 700C

Codec 2 700C

St Patrick’s Day 2017 – and a free resource on Irish in Australia

Happy St Patrick’s day!

Slane Abbey

And “we have a resource on that” – that is, on the Irish in Australia and the major contributions they made since the very beginning of the colonies. You can get that lovely 5 page resource PDF for free if you check out using coupon code TRYARESOURCE. It’s an option we’ve recently put in place so anyone can grab one resource of their choice to see if they like our materials and assess their quality.

St Patrick statue at Slane Abbey – View of Tara from Slane Abbey

Back to St Patrick: we were briefly in Ireland last year and near Dublin we drove past a ruin at the top of a hill that piqued our interest, so we stopped and had a look. It turned out to be Slane Abbey, the site where, it is believed, in 433 AD the first Christian missionary to Ireland, later known as St Patrick, lit a large (Easter) celebration fire (on the Hill of Slane). With this action he (unwittingly?) contravened orders by King Laoghaire at nearby Tara. The landscape photo past the Celtic cross shows the view towards Tara. Ireland is a beautiful country, with a rich history.

Slane Abbey – info plaque

Photos by Arjen Lentz & Dr Claire Reeler

March 15, 2017

Am I a Neanderthal?

Early reconstruction of a Neanderthal

The whole question of how Neanderthals are related to us (modern humans) has been controversial ever since the first Neanderthal bones were found in Germany in the 19th century. Belonging to an elderly, arthritic individual (a good example of how well Neanderthals cared for each other in social groups), the bones were reconstructed to show a stooping individual, with a more ape-like gait, leading to Neanderthals being described as the “Missing Link” between apes and humans, and given the epithet “ape-man”.

Who were the Neanderthals?

Modern reconstruction – Smithsonian Museum of Natural History

Neanderthals lived in the lands surrounding the Mediterranean Sea, and as far east as the Altai Mountains in Central Asia, between about 250,000 and about 30,000 years ago. They were a form of ancient human with certain physical characteristics – many of which probably helped them cope with the cold of Ice Ages. Neanderthals evolved out of an earlier ancestor, Homo erectus, possibly through another species – Homo heidelbergensis. They had a larger brain than modern humans, but it was shaped slightly differently, with less development in the prefrontal cortex, which allows critical thinking and problem-solving, and larger development at the back of the skull, and in areas associated with memory in our brains. It is possible that Neanderthals had excellent memory, but poor analytical skills. They were probably not good at innovation – a skill which became vital as the Ice Age ended and the global climate warmed, sea levels rose and plant and animal habitats changed.

Neanderthals were stockier than modern humans, with shorter arms and legs, and probably stronger and all-round tougher. They had a larger rib cage, and probably bigger lungs, a bigger nose, larger eyes and little to no chin. Most of these adaptations would have helped them in Ice Age Europe and Asia – a more compact body stayed warmer more easily and was tough enough to cope with a harsh environment. Large lungs helped oxygenate the blood and there is evidence that they had more blood supply to the face – so probably had warm, ruddy cheeks. The large nose warmed up the air they breathed, before it reached their lungs, reducing the likelihood of contracting pneumonia. Neanderthals are known to have had the same range of hair colours as modern humans and fair skin, red hair and freckles may have been more common.

They made stone tools, especially those of the type called Mousterian, constructed simple dwellings and boats, made and used fire, including for cooking their food, and looked after each other in social groups. Evidence of skeletons with extensive injuries occurring well before death, shows that these individuals must have been cared for, not only whilst recovering from their injuries, but also afterwards, when they would probably not have been able to obtain food themselves. Whether or not Neanderthals intentionally buried their dead is an area of hot controversy. It was once thought that they buried their dead with flowers in the grave, but the pollen was found to have been introduced accidentally. However, claims of intentional burial are still debated from other sites.

What Happened to the Neanderthals?

Abrigo do Lagar Velho

Anatomically modern humans emerged from Africa about 100,000 years ago. Recent studies of human genetics suggest that modern humans had many episodes of mixing with various lineages of human ancestors around the planet. Modern humans moved into Asia and Europe during the Ice Age, expanding further as the Ice Age ended. Modern humans overlapped with Neanderthals for about 60,000 years, before the Neanderthals disappeared. It is thought that a combination of factors led to the decline of Neanderthals. Firstly, the arrival of modern humans, followed by the end of the Ice Age, brought about a series of challenges which Neanderthals might have been unable to adapt to, as quickly as necessary. Modern humans have more problem solving and innovation capability, which might have meant that they were able to out-compete Neanderthals in a changing environment. The longest held theory is that our ancestors wiped out the Neanderthals in the first genocide in (pre)history. A find of Neanderthals in a group, across a range of ages, some from the same family group, who all died at the same time, is one of the sites which might support this theory, although we don’t actually know who (or what) killed the group. Cut marks on their bones show that they were killed by something using stone tools. Finally, there is more and more evidence of what are called “transitional specimens”. These are individuals who have physical characteristics of both groups, and must represent inter-breeding. An example is the 4 year old child from the site of Abrigo do Lagar Velho in Portugal, which seems to have a combination of modern and Neanderthal features. The discovery of Neanderthal genes in many modern people living today is also proof that we must have interbred with Neanderthals in the past. It is thought that the genes were mixed several times, in several parts of the world.

Am I a Neanderthal?

So how do we know if we have Neanderthal genes? Neanderthal genes are linked to some physical characteristics, but also other attributes that we can’t see. In terms of physical characteristics, Neanderthal aspects to the skull include brow ridges (ridges of bone above the eyes, under the eyebrows); a bump on the back of the head – called an occipital chignon, or bun, because it looks like a ‘bun’ hairstyle, built into the bone; a long skull (like Captain Jean-Luc Picard from Star Trek – actor Patrick Stewart); a small, or non-existent chin; a large nose; a large jaw with lots of space for wisdom teeth; wide fingers and thumbs; thick, straight hair; large eyes; red hair, fair skin and freckles! The last may seem a little surprising, but it appears that the genes for these characteristics came from Neanderthals – who had a wide range of hair colours, fair skin and, occasionally, freckles. Increased blood flow to the face also would have given Neanderthals lovely rosy cheeks!

Less obvious characteristics include resistance to certain diseases – parts of our immune systems, especially with reference to European and Asian diseases; less positively, an increased risk of other diseases, such as type 2 diabetes. Certain genes linked to depression are present, but ‘switched off’ in Neanderthals. The way that these genes link to depression, and their role in the lifestyles of early people (where they may have had benefits that are no longer relevant) are future topics for research and may help us understand more about ourselves.

Neanderthal genes are present in modern populations from Europe, Asia, Northern Africa, Australia and Oceania. So, depending on which parts of the world our ancestry is from, we may have up to 4% of our genetics from long-dead Neanderthal ancestors!

March 14, 2017

Japanese House in Python for Minecraft

I have kids that I'm teaching to hack. They started off on Scratch (which is excellent) and are ready to move up to Python. They also happen to be mad Minecraft fans, so now they're making their way through Adventures in Minecraft.

As I used Scratch when they did, I'm also hacking in Python & Minecraft as they do. It helps if I hit the bumps and hurdles before they do, as well as having a sound handle on the problems they're solving.

Now I've branched out from the tutorial and I'm just having fun with it and leaving behind code the kids can use, hack, whatever. This code is in my minecraft-tools repo (for want of a better name). It's just a collection of random tools I've written for Minecraft that aren't quite up to being their own thing. I expect this will mostly be a collection of python programs to construct things inside Minecraft via CanaryMod and CanaryRaspberryJuicePlugin.

The first bit of code to be shaken out of the tree is japanese_house.py which produces a Minecraft interpretation of a classic Japanese house. Presently it only produces the single configuration that is little more than an empty shell.

Japanese House (day) Japanese House (night)
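
For a flavour of what this API looks like from Python, here is a minimal sketch (not the real japanese_house.py) that raises a hollow shell near the player using the mcpi module, which speaks the Raspberry Pi style API that CanaryRaspberryJuicePlugin provides; sizes and block types are just illustrative:

# Sketch: hollow shell of a building near the player via the mcpi API.
# Dimensions and blocks are illustrative, not the real japanese_house.py.
from mcpi.minecraft import Minecraft
from mcpi import block

mc = Minecraft.create()                 # connects to localhost:4711 by default
x, y, z = mc.player.getTilePos()

width, depth, height = 9, 7, 4
# Solid box of wood planks...
mc.setBlocks(x + 2, y, z + 2,
             x + 2 + width, y + height, z + 2 + depth, block.WOOD_PLANKS.id)
# ...hollowed out with air, leaving walls, floor and roof one block thick.
mc.setBlocks(x + 3, y + 1, z + 3,
             x + 1 + width, y + height - 1, z + 1 + depth, block.AIR.id)
# A doorway so you can walk in.
mc.setBlocks(x + 2 + width // 2, y + 1, z + 2,
             x + 2 + width // 2, y + 2, z + 2, block.AIR.id)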

I intend to add an interior fit-out plus a whole bunch of optional configurations you can set at run time, but for now it is what it is, as I'm going to move on to writing geodesic domes and transport/teleport rings (as per The Expanse), which will eventually lead to coding a TARDIS that will, you know, actually be bigger on the inside ;-)

Testing FreeDV 700C

Since releasing FreeDV 700C I’ve been “instrumenting” the FreeDV GUI program – adding some code to perform various tests of the 700C waveform, especially over the air.

With the kind help of Gerhard OE3GBB, Mark VK5QI, and Peter VK5APR, I have collected some samples and performed some tests. The goals of this work were:

  1. Compare 700C Over the Air (OTA) to simulation on an AWGN channel.
  2. Compare 700C OTA to SSB on an AWGN channel.

Instrumentation

Here is a screen shot of the latest FreeDV GUI Options screen:

I’ve added some features to the top three rows:

Test Frames – Send a payload of known test bits rather than vocoder bits
Channel Noise – Simulate a channel using AWGN noise
SNR – SNR of AWGN noise
Attn Carrier – Attenuate just one carrier
Carrier – The 700C carrier (1-14) to attenuate
Simulated Interference Tone – Enable an interfering sine wave of specified frequency and amplitude
Clipping – Enable clipping of 700C tx waveform, to increase RMS power
Diversity Combine for plots – Scatter and Test Frame plots use combined (7 carrier) information.

To explore these options it is useful to run in full duplex mode (Tools-PTT Half Duplex unchecked) and using a loopback sound device:

  $ sudo modprobe snd-aloop

More information on loopback in the FreeDV GUI README.

Clipping the 700C tx waveform reduces the Peak to Average Power Ratio (PAPR), which may result in a higher average power over the channel. However clipping distorts the waveform and adds some “shoulders” (i.e. noise) to the spectrum adjacent to the 700C waveform:

Several users have noticed this distortion. At this stage I’m unsure if clipping is useful or not.

The Diversity Combine option is useful to explore each of the 14 carriers separately before they are combined into 7 carriers.

Many of these options were designed to explore tx filtering. I have long wondered if any of the FreeDV carriers were receiving less power than others, for example due to ripple or a low pass response from the crystal filter. A low power carrier would have a high bit error rate, adversely affecting overall performance. Plotting the scatter diagram or bit error rate on a carrier by carrier basis can measure the effect of tx filtering – if it exists.

Some of the features above – like attenuating a single carrier – were designed to “test the test”. Much of the work I do on FreeDV (and indeed other projects) involves carefully developing software and writing “code to test the code”. For example to build the experiments described in this blog post I worked several hours a day for several weeks. Not glamorous, but that’s where the real labour lies in R&D. Careful, meticulous testing and experimentation. One percent inspiration … then code, test, test.

Comparing Analog SSB to Digital Voice

One of my goals is to develop a HF DV system that is competitive with analog SSB. So we need a way to compare analog and DV at the same SNR. So I came up with the idea of wave files of analog SSB and DV which have the same average (RMS) power. If these are fed into a SSB transmitter, then they will be received at the same SNR. I added 10 seconds of a 1000Hz sine wave at the start for good measure – this could be used to measure the actual SNR.

I developed two files:

  1. sine_analog_700c
  2. sine_analog_testframes700c

The first has the same voice signal in analog and 700C, the second uses test frames instead of encoded voice.
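
I haven’t included the script that built these, but the idea is simple enough that a sketch may help: scale both signals to the same RMS level and stick the sine on the front. Something like this (file names are placeholders, and it assumes 16-bit mono 8kHz wave files):

# Sketch: scale an SSB sample and a 700C modem sample to the same RMS
# power, then prepend 10 seconds of 1000Hz sine for SNR measurement.
import numpy as np
from scipy.io import wavfile

FS = 8000

def read_float(name):
    fs, x = wavfile.read(name)
    assert fs == FS
    return x.astype(np.float64)

def set_rms(x, target_rms):
    return x * (target_rms / np.sqrt(np.mean(x ** 2)))

ssb = read_float("ve9qrp_ssb.wav")            # placeholder file names
dv = read_float("ve9qrp_700c_modem.wav")

target = 2000.0                               # arbitrary common RMS level
ssb, dv = set_rms(ssb, target), set_rms(dv, target)

t = np.arange(10 * FS) / FS
sine = set_rms(np.sin(2 * np.pi * 1000 * t), target)

for name, x in (("sine_analog.wav", ssb), ("sine_700c.wav", dv)):
    out = np.concatenate([sine, x]).clip(-32767, 32767).astype(np.int16)
    wavfile.write(name, FS, out)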

Interfering Carriers

One feature described above simulates an interfering carrier (like a birdie), something I have seen on the air. Here is a plot of a carrier in the middle of one of the 700C carriers, but about 10dB higher:

The upper RH plot is a rolling plot of bit errors for each carrier. You can see one carrier is really messed up – lots of bit errors. The average bit error rate is about 1%, which is where FreeDV 700C starts to become difficult to understand. These bit errors would not be randomly distributed, but would affect one part of the codec all the time. For example the pitch might be consistently wrong, or part of the speech spectrum. I found that as long as the interfering carrier is below the FreeDV carrier, the effect on bit error rate is negligible.

Take away: The tx station must tune away from any interfering carriers that poke above the FreeDV signal carriers. Placing the interfering tones between FreeDV carriers is another possibility, e.g. a 50Hz shift of the tx signal.

Results – Transmit Filtering

Simulation results suggest 700C should produce reasonable results near 0db SNR. So that’s the SNR I’m shooting for in Over The Air (OTA) testing.

Mark VK5QI sent me several minutes of test frames so I could determine if there were any carriers with dramatically different bit error rates, which would indicate the presence of some tx filtering. Here is the histogram of BERs for each carrier for Mark’s signal, which was at about 3dB SNR:

There is one bar for each I and Q QPSK bit of the 14 carriers – 28 bars total (note Diversity combination was off). After running for a few minutes, we can see a range of 5E-2 to 8E-2 (5 to 8%). In terms of AWGN modem performance, this is only about 1dB difference in SNR or Eb/No, as per the BER versus Eb/No graphs in this post on the COHPSK modem used for 700C. One carrier being pinned at say 20% BER, or a slope of increasing BER with carrier frequency, would have meant tx filtering trouble.

Peter VK5APR sent me a high SNR signal (he lives just 4 km away). Initially I could see an X-shaped scatter diagram, a possible sign of tx filtering. However this ended up being some amplitude pumping due to Fast AGC on my radio. After I disabled fast AGC, I could see a scatter diagram with 4 clear dots, and no X-shape. Check.

I performed an additional test using my IC7200 as a transmitter, and a HackRF as a receiver. The HackRF has no crystal filter and a very flat response, so any distortion would be due to the IC7200 transmit filtering. Once again – 4 clean dots on the scatter diagram and no X-shape.

So I am happy to conclude that transmit filtering does not seem to be a problem, at least for the radios tested. All performance issues are therefore likely to be caused by me and my algorithms!

Results – Low SNR testing

Peter, VK5APR, configured his station to send the analog/700C equi-power test wave files described above at very low power, such that the received SNR at my station was about 0dB. As we are so close it was reasonable to assume the channel was AWGN, indeed we could see no sign of NVIS fading and the band (40M) was devoid of DX at the 12 noon test time.

Here is the rx signal I received, and the same file run through the 700C decoder. Neither the SSB nor the decoded 700C audio is pretty. However it’s fair to say we could just get a message through on both modes and that 700C is holding its own next to SSB. The results are close to my simulations, which was the purpose of this test.

You can decode the off air signal yourself if you download the first file and replay it through the FreeDV GUI program using “Tools – Start/Stop Play File from Radio”.

Discussion

While setting up these tests, Peter and I conversed comfortably for some time over FreeDV 700C at a high SNR. This proved to me that for our audience (experienced users of HF radio) – FreeDV 700C can be used for conversational contacts. Given the 700C codec is really just a first pass – that’s a fine result.

However it’s a near thing – the 700C codec adds a lot of distortion just compressing the speech. It’s pretty bad even if the SNR is high. The comments on the Codec 2 700C blog post indicate many lay-people can’t understand speech compressed by 700C. Add any bit errors (due to low SNR or fading) and it quickly becomes hard to understand – even for experienced users. This makes 700C very sensitive to bit errors as the SNR drops. But hey – every one of those 28 bits/frame counts at 700 bit/s so it’s not surprising.

In contrast, SSB scales a bit better with SNR. However even at high SNRs, that annoying hiss is always there – which is very fatiguing. Peter and I really noticed that hiss after a few minutes back on SSB. Yuck.

SSB gets a lot of its low SNR “punch” from making effective use of peak power. Here is a plot of the received SSB:

It’s all noise except for the speech peaks, where the “peak SNR” is much higher than 0dB. Our brains are adept at picking out words from those peaks, integrating the received phonetic symbols (mainly vowel energy) in our squishy biological receive filters. It’s a pity we didn’t evolve to detect coherent PSK. A curse on your evolution!

In contrast – 700C allocates just as much power to the silence between words as the most important parts of speech. This suggests we could do a better job at tailoring the codec and modem to peak power, e.g. allocating more power to parts of the speech that really matter. I had a pass at Time Variable Quantisation a few years ago. A variable rate codec might be called for, tightly integrated to the modem to pack more bits/power into perceptually important parts of speech.

The results above assumed equal average power for SSB and FreeDV 700C. It’s unclear if this happens in the real world. For example we may need to “back off” FreeDV drive further than SSB; SSB may use a compressor; and the PAs we are using are generally designed for PEP rather than average power operation.

Next Steps

I’m fairly happy with the baseline COHPSK modem, it seems to be hanging on OK as long as there aren’t any co-channel birdies. The 700C codec works better than expected, has plenty of room for improvement – but it’s sensitive to bit errors. So I’m inclined to try some FEC next. Aim for error free 700C at 0dB, which I think will be superior to SSB. I’ll swap out the diversity for FEC. This will increase the raw BER, but allow me to run a serious rate 0.5 code. I’ll start just with an AWGN channel, then tackle fading channels.

Links

FreeDV 700C
Codec 2 700C

Setting up a basic keystone for Swift + Keystone dev work

As a Swift developer, most of my development work happens in a Swift All In One (SAIO) environment. This environment simulates a multinode swift cluster on one box. All the SAIO documentation points to using tempauth for authentication. Why?

Because most of the time authentication isn’t the thing we are working on. Swift has many moving parts, and so tempauth, which only exists for testing swift and is configured in the proxy.conf file, works great.

However, there are times you need to debug or test keystone + swift integration. In this case, we tend to build up a devstack just for the keystone component. But if all we need is keystone, then can we just throw one up on a SAIO?… yes. So this is how I do it.

Firstly, I’m going to be assuming you have SAIO already set up. If not, go do that first. Not that it really matters, as we only configure the SAIO keystone component at the end. But I will be making keystone listen on localhost, so if you are doing this on another machine, you’ll have to change that.

Further, this will set up a keystone server in the form you’d expect from a real deploy (setting up the admin and public interfaces).

 

Step 1 – Get the source, install and start keystone

Clone the sourcecode:
cd $HOME
git clone https://github.com/openstack/keystone.git

Setup a virtualenv (optional):
mkdir -p ~/venv/keystone
virtualenv ~/venv/keystone
source ~/venv/keystone/bin/activate

Install keystone:
cd $HOME/keystone
pip install -r requirements.txt
pip install -e .
cp etc/keystone.conf.sample etc/keystone.conf

Note: We are running the services from the source so config exists in source etc.

 

The fernet keys seem to assume a full /etc path, so we’ll create it. Maybe I should update this to put all the config there, but for now meh:
sudo mkdir -p /etc/keystone/fernet-keys/
sudo chown $USER -R /etc/keystone/

Setup the database and fernet:
keystone-manage db_sync
keystone-manage fernet_setup

Finally we can start keystone. Keystone is a wsgi application and so needs a server to pass it requests. The current keystone developer documentation seems to recommend uwsgi, so let’s do that.

 

First we need uwsgi and the python plugin; on a debian/ubuntu system you:
sudo apt-get install uwsgi uwsgi-plugin-python

Then we can start keystone, by starting the admin and public wsgi servers:
uwsgi --http 127.0.0.1:35357 --wsgi-file $(which keystone-wsgi-admin) &
uwsgi --http 127.0.0.1:5000 --wsgi-file $(which keystone-wsgi-public) &

Note: Here I am just backgrounding them, you could run them in tmux or screen, or set up uwsgi to run them all the time. But that’s out of scope for this.

 

Now a netstat should show that keystone is listening on port 35357 and 5000:
$ netstat -ntlp | egrep '35357|5000'
tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN 26916/uwsgi
tcp 0 0 127.0.0.1:35357 0.0.0.0:* LISTEN 26841/uwsgi

Step 2 – Setting up keystone for swift

Now that we have keystone started, it’s time to configure it. Firstly you need the openstack client to configure it, so:
pip install python-openstackclient

Next we’ll use all keystone defaults, so we only need to pick an admin password. For the sake of this how-to I’ll pick the developer documentation example of `s3cr3t`. Be sure to change this. So we can do a basic keystone bootstrap with:
keystone-manage bootstrap --bootstrap-password s3cr3t

Now we just need to set up some openstack env variables so we can use the openstack client to finish the setup. To make it easy to access I’ll dump them into a file you can source. But feel free to dump these in your bashrc or whatever:
cat > ~/keystone.env <<EOF
export OS_USERNAME=admin
export OS_PASSWORD=s3cr3t
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://localhost:5000/v3
EOF


source ~/keystone.env

 

Great, now we can finish configuring keystone. Let’s first set up a service project (tenant) for our Swift cluster:
openstack project create service

Create a user for the cluster to auth as when checking user tokens, and add the user to the service project. Again we need to pick a password for this user, so `Sekr3tPass` will do… don’t forget to change it:
openstack user create swift --password Sekr3tPass --project service
openstack role add admin --project service --user swift

Now we will create the object-store (swift) service and add the endpoints for the service catalog:
openstack service create object-store --name swift --description "Swift Service"
openstack endpoint create swift public "http://localhost:8080/v1/AUTH_\$(tenant_id)s"
openstack endpoint create swift internal "http://localhost:8080/v1/AUTH_\$(tenant_id)s"

Note: We need to define the reseller_prefix we want to use in Swift. If you change it in Swift, make sure you update it here.

 

Now we can add roles that will match to roles in Swift, namely an operator (someone who will get a Swift account) and reseller_admins:
openstack role create SwiftOperator
openstack role create ResellerAdmin

Step 3 – Setup some keystone users to auth as.

TODO: create all the tempauth users here

 

Here, it would make sense to create the tempauth users devs are used to using, but I’ll just go create a user so you know how to do it. First create a project (tenant) for this example demo:
openstack project create --domain default --description "Demo Project" demo

Create a user:
openstack user create --domain default --password-prompt matt

We’ll also go create a basic user role:
openstack role create user

Now connect the 3 pieces together by adding user matt to the demo project with the user role:
openstack role add --project demo --user matt user

If you wanted user matt to be a swift operator (have an account) you’d:
openstack role add --project demo --user matt SwiftOperator

or even a reseller_admin:
openstack role add --project demo --user matt ResellerAdmin

If you’re in a virtualenv, you can leave it now, because next we’re going to go back to your already set up swift to do the Swift -> Keystone part:
deactivate

Step 4 – Configure Swift

To get swift to talk to keystone we need to add 2 middlewares to the proxy pipeline and, in the case of a SAIO, remove the tempauth middleware. But before we do that we need to install keystonemiddleware to get one of the 2 middlewares, keystone’s authtoken:
sudo pip install keystonemiddleware

Now you want to replace your tempauth middleware in the proxy path pipeline with authtoken keystoneauth so it looks something like:
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk tempurl ratelimit crossdomain container_sync authtoken keystoneauth staticweb copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

Then in the same ‘proxy-server.conf’ file you need to add the paste filter sections for both of these new middlewares:
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_host = localhost
auth_port = 35357
auth_protocol = http
auth_uri = http://localhost:5000/
admin_tenant_name = service
admin_user = swift
admin_password = Sekr3tPass
delay_auth_decision = True
# cache = swift.cache
# include_service_catalog = False

[filter:keystoneauth]
use = egg:swift#keystoneauth
# reseller_prefix = AUTH
operator_roles = admin, SwiftOperator
reseller_admin_role = ResellerAdmin

Note: You need to make sure that if you change the reseller_prefix here, you change it in keystone too. And notice this is where you map operator_roles and reseller_admin_role in swift to those in keystone. Here anyone with the keystone role admin or SwiftOperator is a swift operator, and those with the ResellerAdmin role are reseller_admins.

 

And that’s it. Now you should be able to restart your swift proxy and it’ll go off and talk to keystone.

 

You can use your Python swiftclient now to go talk to it, and what’s better, swiftclient understands the OS_* variables, so you can just source your keystone.env and talk to your cluster (as admin) or export some new envs for the user you’ve created. If you want to use curl you can, but it’s _much_ easier to use swiftclient.
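
For example, a sketch using python-swiftclient directly with the demo user created above (the password is a placeholder for whatever you chose at the prompt):

# Sketch: talk to the SAIO proxy via keystone v3 using python-swiftclient.
# User/project are the example ones created above; the password is a placeholder.
from swiftclient.client import Connection

conn = Connection(
    authurl='http://localhost:5000/v3',
    user='matt',
    key='<matt password>',
    auth_version='3',
    os_options={
        'project_name': 'demo',
        'user_domain_id': 'default',
        'project_domain_id': 'default',
    })

conn.put_container('test')
conn.put_object('test', 'hello.txt', contents=b'hello swift')
headers, containers = conn.get_account()
print(headers.get('x-account-bytes-used'), [c['name'] for c in containers])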

 

Tip: You can use: swift auth to get the auth_token if you want to then use curl.

 

If you want to authenticate via curl then for v3, use: https://docs.openstack.org/developer/keystone/devref/api_curl_examples.html
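
Or the same v3 password auth from Python with requests (a sketch; the token comes back in the X-Subject-Token response header, and the password is again a placeholder):

# Sketch: fetch a keystone v3 token for the demo user with requests.
import json
import requests

body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "matt",
                    "domain": {"id": "default"},
                    "password": "<matt password>",
                }
            },
        },
        "scope": {"project": {"name": "demo", "domain": {"id": "default"}}},
    }
}

resp = requests.post("http://localhost:5000/v3/auth/tokens",
                     headers={"Content-Type": "application/json"},
                     data=json.dumps(body))
resp.raise_for_status()
print("token:", resp.headers["X-Subject-Token"])
print("catalog:", [e["type"] for e in resp.json()["token"]["catalog"]])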

 

Or for v2, I use:
url="http://localhost:5000/v2.0/tokens"
auth='{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "matt", "password": ""}}}'

 

curl -s -d "$auth" -H 'Content-type: application/json' $url |python -m json.tool

 

or

curl -s -d "$auth" -H 'Content-type: application/json' $url |python -c "import sys, json; print json.load(sys.stdin)['access']['token']['id']"

To just print out the token. Although a simple swift auth would do all this for you.

LSM mailing list archive: this time for sure!

Following various unresolved issues with existing mail archives for the Linux Security Modules mailing list, I’ve set up a new archive here.

It’s a mailman mirror of the vger list.

March 13, 2017

Thunderbird 54 external editor

For many years I've used Thunderbird with Alexandre Feblot's external editor plugin to allow me to edit mail with emacs. Unfortunately it seems long unmaintained and stopped working on a recent upgrade to Thunderbird 54 when some deprecated interfaces were removed. Brian M. Carlson seemed to have another version which also seemed to fail with latest Thunderbird.

I have used my meagre Mozilla plugin skills to make an update at https://github.com/ianw/extedit/releases. Here you can download an xpi that passes the rigorous test-suite of ... works for me.

How to Create a Venn Diagram with Independent Intersections in PowerPoint

A Venn diagram can be a great way to explain a business concept. This is generally not difficult to create in modern presentation software. I often use Google Slides for its collaboration abilities.

Where it becomes difficult is when you want to add a unique colour/pattern to an intersection, where the circles overlap. Generally you will either get one circle overlapping another, or if you set some transparency then the intersection will become a blend of the colours of the circles.

I could not work out how to do this in Google Slides, so on this occasion I cheated and did it in Microsoft PowerPoint instead. I then imported the resulting slide into Slides.

This worked for me in PowerPoint for Mac 2016. The process is probably the same on Windows.

Firstly, create a SmartArt Venn Diagram

Insert > SmartArt > Relationship > Basic Venn


Separate the Venn circles

SmartArt Design > Convert > Convert to Shapes


Ungroup shapes

Shape Format > Group Objects > Ungroup


Split out the intersections

Shape Format > Merge Shapes > Fragment


From there, you can select the intersection as an independent shape. You can treat each piece separately. Try giving them different colours or even moving them apart.


This can be a simple but impactful way to get your point across.

This Week in HASS – term 1, week 7

This week our youngest students are looking in depth at different types of celebrations; slightly older students are examining how people got around in the ‘Olden Days’; and our older primary students have some extra time to finish their activities from last week.

Foundation to Year 3

First car made in Qld, 1902

In the stand-alone Foundation (Prep) unit (F.1), students are discussing celebrations – which ones do we recognise in Australia, how these compare with celebrations overseas, and what were these celebrations like in days gone by. Our integrated Foundation (Prep) unit (F.5) and students in Years 1 (1.1), 2 (2.1) and 3 (3.1), are examining Transport in the Past – how did their grandparents get around? How did people get around 100 years ago? How did kids get to school? How did people do the shopping? Students even get to dream about how we might get around in the future…

Years 3 to 6

Making mud bricks

At OpenSTEM we recognise that good activities, which engage students and allow for real learning, take time. Nobody likes to get really excited about something and then be rushed through it and quickly moved on to something else. This part of the unit has lots of hands-on activities for Year 3 (3.5) students in an integrated class with Year 4, as well as Year 4 (4.1), 5 (5.1) and 6 (6.1) students. In recognition of that, two weeks are allowed for the students to really get into making Ice Ages and mud bricks, and working out how to survive the challenges of living in a Neolithic village – including how to trade, count and write. Having enough time allows for consolidation of learning, as well as allowing teachers to potentially split the class into different groups engaged in different activities, and then rotate the groups through the activities over a 2 week period.

March 10, 2017

Solarized xmobar, trayer and dmenu for xmonad

Like many other people, Ethan Schoonover's precision colours for machines and people, known as Solarized is my preferred colour pallet, especially as I spend most of my time in terminals.

I've "Solarized" most things in my work environment except my window manager, xmonad. Well to be more specific, the xmobar, trayer and dmenu applications that I use within xmonad.

xmobar screenshot


Modifications were made to xmonad.hs, .xmobarrc and the trayer call in .xsession that switch out the existing default colours for those I've selected from the Solarized palette. I now have a fully Solarized window manager that fits in much better with the rest of my workspace.

Relevant code snippets:

xmonad.hs

                , logHook = dynamicLogWithPP $ xmobarPP
                    { ppOutput = hPutStrLn xmproc
                    , ppCurrent = xmobarColor "#859900" "" . wrap "[" "]"
                    , ppVisible = xmobarColor "#2aa198" "" . wrap "(" ")"
                    , ppLayout = xmobarColor "#2aa198" ""
                    , ppTitle = xmobarColor "#859900" "" . shorten 50
                    }
            , modMask = mod4Mask -- Rebind Mod to the Windows key
            --, borderWidth = 1
            } `additionalKeys`
                    -- Custom dmenu launcher
                    [ ((mod4Mask, xK_p ), spawn "exe=`dmenu_path | dmenu -fn \"Open Sans-10\" -p \"λ:\" -nb \"#073642\" -nf \"#93a1a1\" -sb \"#002b36\" -sf \"#859900\"` && eval \"exec $exe\"")

.xmobarrc

-- Appearance
  font = "xft:OpenSans:size=10:antialias=true"
, bgColor = "#073642"
, fgColor = "#93a1a1"

...

--Plugins
, commands =
    -- CPU Activity Monitor
    ...
                         , "--low"      , "#2aa198"
                         , "--normal"   , "#859900"
                         , "--high"     , "#dc322f"
    ...
    -- cpu core temperature monitor
    ...
                         , "--low"      , "#2aa198"
                         , "--normal"   , "#859900"
                         , "--high"     , "#dc322f"
    ...
    -- Memory Usage Monitor
    ...
                         , "--low"      , "#2aa198"
                         , "--normal"   , "#859900"
                         , "--high"     , "#dc322f"
    ...
    -- Battery Monitor
    ...
                         , "--low"      , "#dc322f"
                         , "--normal"   , "#859900"
                         , "--high"     , "#2aa198"

                         , "--" -- battery specific options
                                   -- discharging status
                                   , "-o"   , "<left>% (<timeleft>)"
                                   -- AC "on" status
                                   , "-O" , "<fc=#2aa198>Charging</fc>"
                                   -- charged status
                                   , "-i"   , "<fc=#859900>Charged</fc>"
    ...
    -- Weather Monitor
    ...
                         , "--low"      , "#2aa198"
                         , "--normal"   , "#859900"
                         , "--high"     , "#dc322f"
    ...
    -- Network Activity Monitor
    ...
                         , "--low"      , "#2aa198"
                         , "--normal"   , "#859900"
                         , "--high"     , "#dc322f"
    ...
    -- Time and Date Display
    , Run Date           "<fc=#268bd2>%a %b %_d %H:%M</fc>" "date" 10
    ...

.xsession

exec /usr/bin/trayer --edge top --align right --SetDockType true --SetPartialStrut true --expand true --width 10 --transparent true --alpha 0 --tint 0x073642 --height 22 &

That at least implements the colours I selected from Solarized and should be a good starting point for anyone else.

March 09, 2017

Hardening the LSM API

The Linux Security Modules (LSM) API provides security hooks for all security-relevant access control operations within the kernel. It’s a pluggable API, allowing different security models to be configured during compilation, and selected at boot time. LSM has provided enough flexibility to implement several major access control schemes, including SELinux, AppArmor, and Smack.

A downside of this architecture, however, is that the security hooks throughout the kernel (there are hundreds of them) increase the kernel’s attack surface. An attacker with a pointer overwrite vulnerability may be able to overwrite an LSM security hook and redirect execution to other code. This could be as simple as bypassing an access control decision via existing kernel code, or redirecting flow to an arbitrary payload such as a rootkit.

Minimizing the inherent security risk of security features, is, I believe, an essential goal.

Recently, as part of the Kernel Self Protection Project, support for marking kernel pages as read-only after init (ro_after_init) was merged, based on grsecurity/pax code. (You can read more about this in Kees Cook’s blog here). In cases where kernel pages are not modified after the kernel is initialized, hardware RO page protections are set on those pages at the end of the kernel initialization process. This is currently supported on several architectures (including x86 and ARM), with more architectures in progress.

It turns out that the LSM hook operations make an ideal candidate for ro_after_init marking, as these hooks are populated during kernel initialization and then do not change (except in one case, explained below). I’ve implemented support for ro_after_init hardening for LSM hooks in the security-next tree, aiming to merge it to Linus for v4.11.

Note that there is one existing case where hooks need to be updated, for runtime SELinux disabling via the ‘disable’ selinuxfs node. Normally, to disable SELinux, you would use selinux=0 at the kernel command line. The runtime disable feature was requested by Fedora folk to handle platforms where the kernel command line is problematic. I’m not sure if this is still the case anywhere. I strongly suggest migrating away from runtime disablement, as configuring support for it in the kernel (via CONFIG_SECURITY_SELINUX_DISABLE) will cause the ro_after_init protection for LSM to be disabled. Use selinux=0 instead, if you need to disable SELinux.

It should be noted, of course, that an attacker with enough control over the kernel could directly change hardware page protections. We are not trying to mitigate that threat here — rather, the goal is to harden the security hooks against being used to gain that level of control.

Quick Stats on zstandard (zstd) Performance

Was looking at using zstd for backup, and wanted to see the effect of different compression levels. I backed up my (built) bitcoin source, which is a decent representation of my home directory, but only weighs in at 2.3GB. zstd -1 compressed it by 71.3%, zstd -22 compressed it by 78.6%, and here’s a graph showing runtime (on my laptop) and the resulting size:

zstandard compression (bitcoin source code, object files and binaries) times and sizes

For this corpus, sweet spots are 3 (the default), 6 (2.5x slower, 7% smaller), 14 (10x slower, 13% smaller) and 20 (46x slower, 22% smaller). Spreadsheet with results here.
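
If you wanted to reproduce that kind of sweep from Python rather than the zstd CLI, a rough sketch using the zstandard bindings looks like this (corpus.tar is a placeholder for whatever you are compressing):

# Sketch: sweep zstandard compression levels over one file and report
# wall-clock time and compressed size, via the 'zstandard' Python bindings.
import time
import zstandard as zstd

data = open("corpus.tar", "rb").read()        # placeholder corpus

for level in (1, 3, 6, 14, 20, 22):
    start = time.time()
    compressed = zstd.ZstdCompressor(level=level).compress(data)
    elapsed = time.time() - start
    saved = 100.0 * (1 - len(compressed) / len(data))
    print("level %2d: %6.1fs  %8.1f MB  (%.1f%% smaller)"
          % (level, elapsed, len(compressed) / 1e6, saved))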

March 08, 2017

pudb debugging tips

As an OpenStack Swift dev I obviously write a lot of Python. Further, Swift is a cluster and so it has a bunch of moving pieces. So debugging is very important. Most of the time I use pudb and then jump into the PyCharm debugger if I get really stuck.

Pudb is a curses-based version of pdb, and I find it pretty awesome, and you can use it while ssh’d somewhere. So I thought I’d write up the tips I use. Mainly so I don’t forget 🙂

The first and easiest way to run pudb is use pudb as the python runner.. i.e:

pudb <python script>

On first run, it’ll start with the preferences window up. If you want to change preferences you can just hit ‘<ctrl>+p’. However you don’t need to remember that, as hitting ‘?’ will give you a nice help screen.

I prefer to see line numbers, I like the dark vim theme and best part of all, I prefer my interactive python shell to be ipython.

While you’re debugging, like in pdb, there are some simple commands:

  • n – step over (“next”)
  • s – step into
  • c – continue
  • r/f – finish current function
  • t – run to cursor
  • o – show console/output screen
  • b – toggle breakpoint
  • m – open module
  • ! – Jump into interactive shell (most useful)
  • / – text search

There are obviously more than that, but they are what I mostly use. The open module is great if you need to set a breakpoint somewhere deeper in the code base, so you can open it, set a breakpoint and then happily press ‘c’ to continue until it hits. The ‘!’ is the most useful, it’ll jump you into an interactive python shell at the exact point the debugger is at. So you can jump around, check/change settings and poke in areas to see what’s happening.

As with pdb you can also use code to insert a breakpoint so pudb will be triggered, rather than having to start a script with pudb. I give an example of how in the nosetests section below.

nosetests + pudb

Sometimes the best way to use pudb is to debug unit tests, or even write a unit (or functional or probe) test to get you into an area you want to test. You can use pudb to debug these too. And there are 2 ways to do it.

The first way is by installing the ‘nose-pudb’ pip package:

pip install nose-pudb

Now when you run nosetests you can add the --pudb option and it’ll break into pudb if there is an error, so you can go poke around in ‘post-mortem’ mode. This is really useful, but doesn’t allow you to actually trace the tests as they run.

So the other way of using pudb in nosetests is to actually insert some code in the test that will trigger as a breakpoint and start up pudb. To do so is exactly how you would with pdb, except substituting pudb for pdb. So just add the following line of code to your test where you want to drop into pudb:

import pudb; pudb.set_trace()

And that’s it… well, mostly. Because pudb is command line based, you need to tell nosetests not to capture stdout, with the ‘-s’ flag:

nosetests -s test/unit/common/middleware/test_cname_lookup.py

testr + pudb

No problem here, it uses the same approach as above, where you programmatically set a trace, as you would for pdb. Just follow the ‘Debugging (pdb) Tests’ section on this page (except substitute pudb for pdb).

 

Update – run_until_failure.sh

I’ve been trying to find some intermittent unit test failures recently. So I whipped up a quick bash script that I run in a tmux session that really helps find and deal with them. I thought I’d add it to this post, as I can then add nose-pudb to make it pretty useful.

#!/bin/bash
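# Run the supplied command repeatedly until it exits non-zero, then report how many runs it took.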

n=0
while [ True ]
do 
  clear
  $@
  if [ $? -gt 0 ]
  then 
    echo 'ERROR'
    echo "number " $n
    break
  fi
  let "n=n+1"
  sleep 1
done

With this I can simply:
run_until_failure.sh tox -epy27

 

It’ll stop looping once the command passed returns something other than 0.

Once I have an error, I then focus in on the area where it happens (to speed up the search a bit). I can also use nose-pudb to drop me into post-mortem mode so I can poke around in ipython. For example, I’m currently running:

 

run_until_failure.sh nosetests --pudb test/unit/proxy/test_server.py

 

Then I can come back to the tmux session and, if I’m dropped into a pudb interface, go poke around.

Oceanography and the Continents

Marie Tharp (30 July, 1920 – 23 August, 2006) was an oceanographer and cartographer who mapped the oceans of the world. She worked with Bruce Heezen, who collected data on a ship, mapping the ocean floor.

Tharp and Heezen

Tharp turned the data into detailed maps. At that time women were not allowed to work on research ships, as it was thought that they would bring bad luck! However, Tharp was a skilled cartographer, and as she made her maps of the floor of the oceans of the world, with their ridges and valleys, she realised that there were deep valleys which showed the boundaries of continental plates. She noticed that these valleys were also places with lots of earthquakes and she became convinced of the basics of plate tectonics and continental drift.

Between 1959 and 1963, Tharp was not mentioned in any of the scientific papers published by Heezen, and he dismissed her theories disparagingly as “girl talk”. As this video  from National Geographic shows, she stuck to her guns and was vindicated by the evidence, eventually managing to persuade Heezen, and the scientific community at large, of the validity of the theories. In 1977, Heezen and Tharp published a map of the entire ocean floor. Tharp obtained degrees in English, Music, Geology and Mathematics during the course of her life. In 2001, a few weeks before her 81st birthday, Marie Tharp was awarded the Lamont-Doherty Heritage Award at Columbia University, in the USA, as a pioneer of oceanography. She died of cancer in 2006.

The National Geographic video provides an excellent testimony to this woman pioneer in oceanography.

March 07, 2017

Multicore World 2017: A Review

Multicore World is a small conference held annually in New Zealand, hosted by Open Parallel. What it lacks in numbers, however, it makes up for in the quality of the presenters. The 2017 conference included a typically impressive array of speakers dealing with some of the most difficult issues facing computational science, and several important announcements in the fields of supercomputing, the Internet of Things, and manufacturing.


March 06, 2017

Adventures in Unemployment

Due to a recent corporate fire sale, implosion, whatever you'd like to call it, I found myself joining thousands of my former colleagues unemployed and looking for "new opportunities" (hire me, I'm dead set amazing).

As a parent who also has an ex-wife and children it is incumbent upon me to inform the Department of Human Services (DHS) of any changes to my income within strict time frames. So like a dutiful slave of the state, I called them to advise of my new $0 income status.

The following conversation actually happened:

[DHS] "So taking into account your new income of $0, you will need to pay $114 / month."

[McW] "With an income of $0, how would you expect me to pay that?"

[DHS] "Borrow money from family and friends."

[McW] "You know you just said that out loud, right?"

[DHS] "Yes sir."

[McW] "Okay, so let me clarify this. I have an income of $0, 3 dependent children living with me, one dependent adult and the DHS priority is not for me to provide food and shelter for them but to pay child support?"

[DHS] "That is correct."

[McW] "..and this is something you've not only said out loud but on a phone call that's being recorded for 'service quality and training purposes'."

[DHS] "That is the nature of the legislation and what we are trained to say."

[McW] "You do see the problem here, don't you?"

[DHS] "Yes sir, I do."

[McW] "Are there any other things you're trained to say that might help?"

[DHS] "You could apply for work benefits."

[McW] "Okay, let's think this one through. Let's say I did get the dole, which would be about $400 / fortnight, less than my fortnightly rent even before I commence buying food, would the DHS still want $114 from that?

[DHS] "Yes, child support would be taken from the benefits before they were paid to you."

[McW] [long pause] "Back to the $0 income and obvious incapacity to pay, when the inevitable non-payment occurs, what does the DHS do next?"

[DHS] "Despite your excellent payment history, the DHS would have to pursue avenues for collection."

[McW] "So I have a family of 6 to shelter and support and the DHS will still end up going collect to strip us of whatever they can? That's not particularly helpful to anyone, not those I'm directly supporting nor my children for whom the DHS is collecting child support."

[DHS] "That's correct sir, once a child support debt of $1,000 is accrued, DHS will pursue collection avenues. Is there anything else I can help you with?"

[McW] "Unless you can change the legislation, I think we're good here. Thank you."

Having been in the child support "game" for about 13 years, having seen female friends dudded by former male partners, and male friends rorted by former female partners, it's not as though I was unaware the system was truly broken and unfair to all parties in so many cases.

This conversation however, was truly breathtaking. I doubt Douglas Adams could have scripted this any better. :-)

The Week in HASS – term 1, week 6

HASS students have a global focus this week. The younger students are looking at calendars, celebrations and which countries classmates are connected to, around the world. Older students are starting to explore what happened at the end of the Ice Age and the beginnings of agriculture and trade. These students will also be applying the scientific method to practical examinations – creating their own mini Ice Ages in a bowl and making mud bricks.

OpenSTEM A0 world map: Country Outlines and Ice Age CoastlineFoundation to Year 3

Our standalone Foundation/Prep classes (F.1) are looking at calendars and celebrations this week, starting to explore the world beyond their own family and gain an identity relative to each other. Integrated Foundation/Prep (F.5) and Year 1 (1.1) classes; as well as Year 1 (1.1), 2 (2.1) and 3 (3.1) classes are examining our OpenSTEM blackline world map and putting coloured dots on all the countries that they and their families are connected to, either through relatives, or by having lived there themselves. It is through this sort of exercise that students can start to understand the concept of the “global family”.

Year 3 to Year 6

Making an Ice Age

Students in years 3 (3.5), 4 (4.1), 5 (5.1) and 6 (6.1) are consolidating their learning and expanding into subjects, such as Science and Economics and Business. The ever-popular Ice Ages and Mud Bricks activity links to core Science curricular strands and allows students to explore their learning in very tactile ways. Whilst undertaking the activity, students make a mini Ice Age in a bowl, attempting to predict what will happen to their clay landscape when it is flooded and frozen, and then comparing these predictions to their recorded observations, during empirical testing. Students also make their own mud bricks by hand, once again predicting how to make the bricks strongest and testing different construction techniques. We have even had classes test the strength of their mud brick walls under simulated flood conditions, working inside a tidy tray.

Making mud bricks

Students move on from studying the Ice Age, looking at what happened as the climate changed and global sea levels rose. The pressures that these changes brought to people’s lives are examined by looking at the origins of agriculture with domestic plants and animals. Students consider how people needed to work together to survive. The cooperative Trade and Barter activity allows students to role play life in a Neolithic village. Faced with a range of challenges, such as floods and droughts, students discover how to prioritise their needs for food to survive the winter, against their wants. They also discover that trade, counting and writing all grew out of the needs for people to exchange items and help each other to survive. This activity covers all the basic concepts in the Economics and Business curriculum, whilst providing a context that is meaningful to the students and their own experiences. Replicating the way that people developed trade, counting and writing in the historic period, the students’ experiences during the Trade and Barter activity lay the foundations for a deeper understanding of the basic concepts of Economics and Business.

flooding the mud brick wall

March 05, 2017

Non self replicating reprap 3d printer

The reprap is designed to be able to "self replicate" to a degree. If a part on a reprap 3d printer breaks then a replacement part can be printed and attached. Parts can evolve as new ideas come along. Having parts crack or weaken on a 3d printer can be undesirable though.

A part on this printer was a mix of acrylic and PLA, both of which were cracked. Not quite what one would hope for as a foot of the y-axis. It is an interesting design with the two driving rods the same length as the alloy channel at the back of the printer.



A design I thought of called for 1/2 inch alloy in order to wrap the existing alloy extrusion with a 3mm cover. The dog bone on the slot is manually added in Fusion 360 so it is larger than needed. The whole thing being a learning exercise for me as to how to create 2.5D parts. The belt tensioning is on a 6mm subassembly that is mounted on the bracket in the right of the image below.


The bracket and subassembly are shown mounted below. Yes, using four M6 bolts to tension a belt is overkill. I would imagine you can stretch the belt to breaking point quite easily with these bolts. The two rods are locked into place using M3 tapped grub screws. The end brackets are bolted to the back extrusion using two M6 bolts.


The z-axis is now supported by a second 10mm alloy custom bracket. This combination makes it much, much harder to wobble the z-axis than the original design using plastic parts.




March 03, 2017

Service Innovation in New Zealand – the new digital transformation

Over the past fortnight I have had the pleasure and privilege of working closely with the Service Innovation team in the New Zealand Government to contribute to their next steps in achieving proper service integration. It was an incredible two weeks as part of an informal exchange between our agencies to share expertise and insights. I am very thankful for the opportunity to work with the team and to contribute in some small way to their visionary, ambitious and world leading agenda. I also recommend everyone watch closely the work of the Service Innovation team, and contact them if you are interested in giving feedback on the model.

I spent a couple of weeks in New Zealand looking at their “Service Innovation” agenda, which is, I can confidently say, one of the most exciting things I’ve ever seen for genuine digital transformation. The Kiwis already have in place a strong and technically sound vision for service integration, a bunch of useful guidance (including one of the best government-produced API guides I’ve ever seen!) and a commitment to delivering integrated services as a key part of their agenda and programme, along with brilliant skills and visionaries across government.

I believe New Zealand will be the one to watch over the coming year and, with a little luck, they could redefine the baseline for what everyone should be aspiring to. They could be the first government to properly demonstrate Gov as a Platform, not just better digital government, which is quite exciting! Systemic change and transformation generally happens once a generation if you are lucky, so do keep an eye on the Kiwis. They are set to leave us all behind!

There is more internal documentation which I encourage the team to publish, like the Federated Services Model Reference Architecture and other gems.

Over a couple of weeks, on the back of a raft of ongoing work, we analysed why, with such great guidance available, siloed approaches were still happening. We found that the natural motivations of agencies would always drive an implementation designed to meet the specific agency’s needs rather than the needs of the system across government. That was unsurprising, but the new insight concerned the service delivery teams themselves: they wanted to do the best possible implementations, but with little time and resource, and high expectations, they couldn’t take the time needed to find, read, interpret, translate into practice and verify implementation of the guidance. Which is quite fair! So we looked at models for reducing the barriers for those teams to do things better by providing reusable infrastructure and reference implementations, and by either changing or tweaking the motivations of agencies themselves.

This is an ongoing piece of work, but fundamentally we looked at the idea that if we made the best technical path also the easiest path for service delivery teams to follow, then there would be a reasonable chance of a consumable, systems-based approach to delivering these services. If support and skills were available alongside tools, code, dev environments, reference implementations, lab environments and other useful resources for designing and delivering government services faster, better and cheaper, then service delivery teams and agencies alike would have a natural motivation to take that approach. Basically, we surmised that vision and guidance probably needed to be supplemented by implementation to make it real, moving from policy to application.

It is great to see other jurisdictions like New Zealand starting to experiment with and implement the consumable, mashable government model! I want to say a huge thank you to the New Zealand Government for sharing their ideas, but mostly for picking this up and being in such a great position to show everyone what Gov as a Platform and Gov as an API should look like. I wish you luck and hope to be a part of your success, even just in a small way!

Rock on.

February 28, 2017

Mildred Dresselhaus, the Queen of Carbon | NY Times

“Dr. Dresselhaus, who helped transform carbon into the superstar of modern materials science, was renowned for her efforts to promote the cause of women in science.”

Millie Dresselhaus (née Spiewak), high school yearbook, 1948. A tribute at Hunter High School.

“Mildred (Millie) Dresselhaus, a professor emerita at the Massachusetts Institute of Technology whose research into the fundamental properties of carbon helped transform it into the superstar of modern materials science and the nanotechnology industry, died on Monday in Cambridge, Mass. She was 86.”

Read more.

February 27, 2017

LUV Main March 2017 Meeting: Multicore World / Patching with quilt

Mar 7 2017 18:30
Mar 7 2017 20:30
Location: 
Level 29, 570 Bourke St. Melbourne

PLEASE NOTE NEW LOCATION

Tuesday, March 7, 2017
6:30 PM to 8:30 PM
Level 29, 570 Bourke St. Melbourne

Speakers:

• Lev Lafayette, MultiCore World 2017 Wellington
• Russell Coker, Patching with quilt

570 Bourke St. Melbourne, between King and William streets

Late arrivals needing access to the building and the twenty-ninth floor please call 0490 627 326.
 

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue.

LUV would like to acknowledge Dell for their help in obtaining the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


LUV Beginners March Meeting: node.js workshop

Mar 18 2017 12:30
Mar 18 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

node.js workshop

Node.js® is a JavaScript runtime built on Chrome's V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node.js' package ecosystem, npm, is the largest ecosystem of open source libraries in the world.

Members will be invited to install and learn about node.js with peer assistance.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


February 26, 2017

This Week in HASS – term 1, week 5

This week students are exploring a vast range of topics, across the year levels. From using a torch and a tennis ball to investigate how the Earth experiences Day and Night to case studies on natural disasters, celebrations and indigenous peoples, there is a broad range of topics to spark interest.

Foundation to Year 3

Our youngest students (Foundation/Prep – Unit F.1) are talking about where they, and other members of their family, were born. Once again, this activity gets them interacting with maps and thinking about how we represent locations, whilst reinforcing their sense of identity and how they relate to others. Students in Years 1 to 3 (Units 1.1, 2.1 and 3.1) are using a torch and a tennis ball to investigate how the Earth experiences Night and Day, Seasons, Equinoxes and Solstices. This activity ties in what we experience as weather, seasons and their related celebrations to the Physics of how it all works, allowing students to draw connections between what they experience and what they are learning, and providing essential context for the more abstract knowledge. Teachers can easily tailor this activity to the needs of each class and explore the concepts in as much detail as required.

Years 3 to 6

Charlotte St, Brisbane, 1893 floods

Students in Years 3 to 6 (Units 3.5, 4.1, 5.1 and 6.1) are looking at a range of different case studies pertinent to their year-level curriculum requirements this week. Year 3 students are examining celebrations in Australia and around the world (the Celebrations Around the World resource has been updated this year and contains some new material; please check that you have the latest copy and re-download it if necessary), and Year 4 students examine areas of natural beauty in Australia. Year 5 students are looking at the effects of natural disasters, especially here in Australia. Case studies on floods, such as the Brisbane floods of 1893, and bushfires, such as the infamous Black Friday fires in Victoria, are available for more in-depth study by teachers and students wishing to explore the topic in more detail. Year 6 students are examining Indigenous groups of people from Australia and Asia. A range of case studies is available for this topic, from groups within Australia holding Native Title, such as the Quandamooka People, to groups from the mountains of Southern China, such as the Yi people. The larger number of case studies available, which can be found in our store resource category Indigenous Peoples, allows Year 6 students to pursue more individual lines of enquiry, suited to their developing abilities.

Nyriad: An Agile Startup Done Right

I have recently spent a few days in the company of Nyriad, a New Zealand IT company specialising in GPU software. I wish to make a few observations about the company because they are an example of a startup that uses agile project management (two terms much maligned and subject to justified cynicism) and does it right. Because I have seen so many colleagues burned by companies and organisations which profess such values and do not do it right, I hope the following observations will be useful for future organisations.

read more

February 23, 2017

This Week in HASS – term 1, week 4

This week the Understanding Our World program for primary schools has younger students looking at time passing, both in their own lives and as marked by others, including the seasons recognised by different Aboriginal groups. Older students are looking at how Aboriginal people interacted with the Australian environment, as it changed at the end of the Ice Age, and how they learnt to manage the environment and codified that knowledge into their lore.

Foundation to Year 3

This week our standalone Foundation classes (Unit F.1) are thinking about what they were like as babies. They are comparing photographs or drawings of themselves as babies with how they are now. This is a great week to involve family members and carers in class discussions, if appropriate. Students in multi-age classes and Years 1 to 3 (Units F.5, 1.1, 2.1 and 3.1) are examining how weather and seasons change throughout the year and comparing our system of seasons with those used by different groups of Aboriginal people in different parts of Australia. Students can compare these seasons to the weather where they live and think about how they would divide the year into seasons that work where they live. Students can also discuss changes in weather over time with older members of the community.

Years 3 to 6

Older students, having followed the ancestors of Aboriginal people all the way to Australia, are now examining how the climate changed in Australia after the Ice Age, and how this affected Aboriginal people. They learn how Aboriginal people adapted to their changing environment and learned to manage it in a sustainable way. This vitally important knowledge about how to live with, and manage, the Australian environment, was codified into Aboriginal lore and custom and handed down in stories and laws, from generation to generation. Students start to examine the idea of Country/Place, in this context.

February 22, 2017

Making a USB powered soldering iron that doesn't suck

Today's evil project was inspired by a suggestion after my talk on USB-C & USB-PD at this year's linux.conf.au Open Hardware miniconf.

Using a knock-off Hakko driver and handpiece I've created what may be the first USB powered soldering iron that doesn't suck (ok, it's not a great iron, but at least it has sufficient power to be usable).

Building this was actually trivial: I just wired the 20V output of one of my USB-C ThinkPad boards to a generic Hakko driver board. The loss of power from using 20V rather than 24V is noticeable, but for small work this would be fine (I solder in either the work lab or my home lab, both of which have very nice soldering stations, so I don't actually expect to ever use this).

If you were to turn this into a real product you could in fact do much better by doing both power negotiation and temperature control in a single micro: the driver could be switched from just a FET to a boost converter, the power draw could be controlled by controlling the output voltage, and the heater could be turned off simply by disabling the regulator. By chance, the heater resistance of the Hakko 907 clone handpieces is such that, combined with the USB-PD power rules, you'd always be boost converting and never need to reduce the voltage.

With such a driver you could run this from anything, starting with a 5V USB-C phone charger or battery (15W for the nicer ones), through 9V at up to 3A off some laptops (for ~25W), all the way to 20V at 5A for those who need an extremely high-power iron. 60W, which happens to be the standard power level of many good irons (such as the Hakko FX-888D), is also, at 20V and 3A, a common limit for chargers (and also for many cables; only fixed cables, or those specially marked with an ID chip, can go all the way to 5A). As higher-power USB-C batteries start becoming available for laptops, this becomes a real option for on-the-go use.
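Just to sanity-check those numbers, here's a quick Python sketch multiplying out some typical fixed USB-PD voltage/current levels (the profile list is illustrative, not a reading of the spec):

# Rough power available from common fixed USB-PD voltage/current levels.
# The list below is illustrative, not a complete enumeration of the spec.
profiles = [
    (5, 3),    # basic USB-C phone charger or battery
    (9, 3),    # many laptop and mid-range chargers
    (15, 3),
    (20, 3),   # common limit without a 5 A e-marked cable
    (20, 5),   # needs an e-marked 5 A cable
]

for volts, amps in profiles:
    print(f"{volts:2d} V @ {amps} A -> {volts * amps:3d} W")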

Here's a photo of it running from a Chromebook Pixel charger:

February 21, 2017

$AUD 35k available in 2017 Grants Program

Linux Australia is delighted to announce the availability of $AUD 35,000
for open source, open data, open government, open education, open
hardware and open culture projects, as part of the organisation’s
commitment to free and open source systems and communities in the region.

This year, we have deliberately weighted some areas in which we strongly
welcome grant applications.

More information is available at: https://linux.org.au/projects/grants

Please do share this with colleagues who may find it of interest, and
feel free to contact the Linux Australia Council if you would like a
private discussion.

With kind regards,

Kathy Reid

President, Linux Australia

February 20, 2017

New LinkedIn Interface Deleted Your Data? Here’s How to Bring it Back.

Over the past few years it has seemed like LinkedIn were positioning themselves to take over your professional address book. Through offering CRM-like features, users were able to see a summary of their recent communications with each connection as well as being able to add their own notes and categorise their connections with tags. It appeared to be a reasonable strategy for the company, and many users took the opportunity to store valuable business information straight onto their connections.

Then at the start of 2017 LinkedIn decided to progressively foist a new user experience upon its users, and features like these disappeared overnight in favour of a more ‘modern’ interface. People who grew to depend on this integration were in for a rude shock — all of a sudden it was missing. Did LinkedIn delete the information? There was no prior warning given, and I still haven’t seen any acknowledgement or explanation (let alone an apology) from LinkedIn/Microsoft for the inconvenience and damage caused.

If anything, this reveals the risks in entrusting your career/business to a proprietary cloud service. Particularly with free/freemium (as in cost) services, the vendor is more likely to change things on a whim or move that functionality to a paid tier.

It’s another reason why I’ve long been an advocate for open standards and free and open source software.

Fortunately there’s a way to export all of your data from LinkedIn. This is what we’ll use to get back your tags and notes. These instructions are relevant for the new interface. Go to your account settings and in the first section (“Basics”) you should see an option called “Getting an archive of your data”.

LinkedIn: Getting an archive of your data

Click on Request Archive and you’ll receive an e-mail when it’s available for download. Extract the resulting zip file and look for a file called Contacts.csv. You can open it in a text editor, or better yet a spreadsheet like LibreOffice Calc or Excel.

In my copy, my notes and tags were in columns D and E respectively. If you have many, it may be a lot of work to manually integrate them back into your address book. I’d love suggestions on how to automate this. Since I use Gmail, I’m currently looking into Google’s address book import/export format, which is CSV based.
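As a starting point for automating it, here is a minimal Python sketch that pulls the name, notes and tags back out of Contacts.csv and writes them to a simpler CSV. The column positions are assumptions based on my export (first and last name in A and B, notes in D, tags in E), and the output is not yet in Google's import format:

import csv

# Pull notes and tags back out of a LinkedIn Contacts.csv export.
# Column positions below are assumptions from my export:
# A/B = first/last name, D = notes, E = tags. Check yours first.
with open("Contacts.csv", newline="", encoding="utf-8") as src, \
     open("notes-and-tags.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    writer.writerow(["Name", "Notes", "Tags"])
    next(reader)  # skip LinkedIn's header row
    for row in reader:
        if len(row) < 5:
            continue
        name = " ".join(part for part in row[0:2] if part).strip()
        notes, tags = row[3], row[4]
        if notes or tags:
            writer.writerow([name, notes, tags])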

As long as Microsoft/LinkedIn provide a full export feature, this is a good way to maintain ownership of your data. It’s good practice to take an export every now and then to give yourself some peace-of-mind and avoid vendor lock-in.

This article has also been published on LinkedIn.

February 19, 2017

HPC/Cloud Hybrids for Efficient Resource Allocation and Throughput

HPC systems running massively parallel jobs need a fairly static software operating environment running on bare metal hardware, a high speed interconnect to reach their full potential, and offer linear performance scaling for cleverly designed applications. Cloud computing, on the other hand, offers flexible virtual environments and can be used for pleasingly parallel workloads.

read more

CareerNexus: a new way to find work

The start-up that I have co-founded, CareerNexus, is looking for job seekers to take part in a product test and market experiment. If you, or someone you know, wants to know more and potentially take part, message me.

If we can help just a fraction of those people who have difficulty finding work through traditional means — people returning from parental leave, people looking for roles after being made redundant, mature workers, even some highly skilled professionals — we’ll be doing something great.

As an alternate means of finding work, it need not replace any mechanisms that you may already be engaged in. In other words, there is nothing for you to lose and hopefully much for you to gain.

Published in Engineers Without Borders Magazine

Engineers Without Borders asked me to write something for their Humanitarian Engineering magazine about One Laptop per Child. Here is what I wrote.

The school bell rings, and the children filter into the classroom. Each is holding an XO – their own personal learning device.

Students from Doomadgee often use their XOs for outdoors education. The sunlight-readable screen combined with the built-in camera allow for hands-on exploration of their environment.

This is no ordinary classroom. As if by magic, the green and white XOs automatically see each other as soon as they are started up, allowing children to easily share information and collaborate on activities together. The kids converse on how they can achieve the tasks at hand. One girl is writing a story on her XO, and simultaneously on the same screen she can see the same story being changed by a boy across the room. Another group of children are competing in a game that involves maths questions.

Children in Kiwirrkurra, WA, collaborate on an activity with help from teachers.

Through the XO, the learning in this classroom has taken on a peer-to-peer character. By making learning more fun and engaging, children are better equipped to discover and pursue their interests. Through collaboration and connectivity, they can exchange knowledge with their peers and with the world. In the 21st century, textbooks should be digital and interactive. They should be up-to-date and locally relevant. They should be accessible and portable.

Of course, the teacher’s role remains vital, and her role has evolved into that of a facilitator in this knowledge network. She is better placed to provide more individual pathways for learning. Indeed the teacher is a learner as well, as the children quickly adapt to the new technology and learn skills that they can teach back.

A teacher in Jigalong, WA, guides a workgroup of children in their class.

Helping to keep the classroom session smoothly humming along are children who have proven themselves to be proficient with assisting their classmates and fixing problems (including repairing hardware). These kids have taken part in training programmes that award them for their skills around the XO. In the process, they are learning important life skills around problem solving and teamwork.

Dozens of students in Doomadgee State School are proficient in fixing XO hardware.

This is all part of the One Education experience, an initiative from One Laptop per Child (OLPC) Australia. This educational programme provides a holistic educational scaffolding around the XO, the laptop developed by the One Laptop per Child Association that has its roots in the internationally-acclaimed MIT Media Lab in the USA.

The XO was born from a desire to empower each and every child in the world with their own personal learning device. Purpose-built for young children and using solid open source software, the XO provides an ideal platform for classroom learning. Designed for the outdoors, with a rugged build and a high-resolution sunlight-readable screen, the XO means education is no longer confined to a classroom or even to the school grounds. Learning time needn’t stop with the school bell – many children are taking their XOs home. Also important are the affordability and full repairability of the devices, making them cost-effective compared with non-durable and ephemeral items such as stationery, textbooks and other printed materials. There are over 3 million XOs in distribution, and in some countries (such as Uruguay) every child owns one.

A One Education classroom in Kenya.

One Education’s mission is to provide educational opportunities to every child, no matter how remote or disadvantaged. The digital divide is a learning divide. This can be conquered through a combination of modern technology, training and support, provided in a manner that empowers local schools and communities. The story told above is already happening in many classrooms around the country and the world.

A One Education classroom in northern Thailand.

With teacher training often being the Achilles’ heel of technology programmes in the field of education, One Education focuses only on teachers who have proven their interest and aptitude through the completion of a training course. Only then are they eligible to receive XOs (with an allocation of spare parts) into their classroom. Certified teachers are eligible for ongoing support from OLPC Australia, and can acquire more hardware and parts as required.

As a not-for-profit, OLPC Australia works with sponsors to heavily subsidise the costs of the One Education programme for low socio-economic status schools. In this manner, the already impressive total cost of ownership can be brought down even further.

High levels of teacher turnover are commonplace in remote Australian schools. By providing courses online, training can be scalable and cost-effective. Local teachers can even undergo further training to gain official trainer status themselves. Some schools have turned this into a business – sending their teacher-trainers out to train teachers in other schools.

Students in Geeveston in Tasmania celebrate their attainment of XO-champion status, recognising their proficiency in using the XO and their helpfulness in the classroom.

With backing from the United Nations Development Programme, OLPC are tackling the Millennium Development Goals by focusing on Goal 2 (Achieve Universal Primary Education). The intertwined nature of the goals means that progress made towards this goal in turn assists the others. For example, education on health can lead to better hygiene and lower infant mortality. A better educated population is better empowered to help themselves, rather than being dependent on hand-outs. For people who cannot attend a classroom (perhaps because of remoteness, ethnicity or gender), the XO provides an alternative. OLPC’s focus on young children means that children are becoming engaged in their most formative years. The XO has been built with a minimal environmental footprint, and can be run off-grid using alternate power sources such as solar panels.

One Education is a young initiative, formed based on experiences learnt from technology deployments in Australia and other countries. Nevertheless, results in some schools have been staggering. Within one year of XOs arriving in Doomadgee State School in northern Queensland, the percentage of Year 3 pupils meeting national literacy standards leapt from 31% to 95%.

A girl at Doomadgee State School very carefully removes the screen from an XO.

2013 will see a rapid expansion of the programme. With $11.7m in federal government funding, 50,000 XOs will be distributed as part of One Education. These schools will be receiving the new XO Duo (AKA XO-4 Touch), a new XO model developed jointly with the OLPC Association. This version adds a touch-screen user experience while maintaining the successful laptop form factor. The screen can swivel and fold backwards over the keyboard, converting the laptop into a tablet. This design was chosen in response to feedback from educators that a hardware keyboard is preferred to a touch-screen for entering large amounts of information. As before, the screen is fully sunlight-readable. Performance and battery life have improved significantly, and it is fully repairable as before.

As One Education expands, there are growing demands on OLPC Australia to improve the offering. Being a holistic project, there are plenty of ways in which we could use help, including in education, technology and logistics. We welcome you to join us in our quest to provide educational opportunities to the world’s children.

February 18, 2017

Choose your own adventure – keynote

This is a blog version of the keynote I gave at linux.conf.au 2017. Many thanks to everyone who gave such warm feedback, and I hope it helps spur people to think about systemic change and building the future. The speech can be watched at https://www.youtube.com/watch?v=J6IqGuxCKa8.

I genuinely believe we are at a tipping point right now. A very important tipping point where we have at our disposal all the philosophical and technical means to invent whatever world we want, but we’re at risk of reinventing the past with shiny new things. This talk is about trying to make active choices about how we want to live in future and what tools we keep or discard to get there. Passive choices are still a choice, they are choosing the status quo. We spend a lot of our time tinkering around the edges of life as it is, providing symptomatic relief for problems we find, but we need to take a broader systems based view and understand what systemic change we can make to properly address those problems.

We evolved over hundreds of thousands of years using a cooperative competitive social structure that helped us work together to flourish in every habitat, rapidly and increasingly evolve and learn, and establish culture, language, trade and travel. We were constantly building on what came before and we built our tools as we went.

In recent millennia we invented systems of complex differentiated and interdependent skills, leading to increasingly rapid advancements in how we live and organise ourselves physically, politically, economically and socially, especially as we started building huge cities. Lots of people meant a lot of time to specialise, and with more of our basic needs taken care of, we had more time for philosophy and dreaming.

Great progress created great surplus, creating great power, which we generally centralised in our great cities under rulers that weren’t always so great. Of course, great power also created great inequalities so sometimes we burned down those great cities, just to level the playing field. We often took a symptomatic relief approach to bad leaders by replacing them, without fundamentally changing the system.

But in recent centuries we developed the novel idea that all people have inalienable rights and can be individually powerful. This paved the way for a massive culture shift and distribution of power, combined with heightened expectations of individuals in playing a role in their own destiny, leading us to the world as we know it today. Inalienable rights paved the way for people thinking differently about their place in the world, the control they had over their lives and how much control they were happy to cede to others. This makes us, individually, the most powerful we have ever been, which changes the game moving forward.

You see, the internet was both a product and an amplifier of this philosophical transition, and of course it lies at the heart of our community. Technology has, in large part, only sped up the cooperative competitive models of adapting, evolving and flourishing we have always had. But the idea that anyone has a right to life and liberty started a decentralisation of power and introduced the need for legitimate governance based on the consent of citizens (thank you Locke).

Citizens have the powers of publishing, communications, monitoring, property, even enforcement. So in recent decades we have shifted fundamentally from kings in castles to nodes in a network, from scarcity to surplus or reuse models, from closed to open systems, and the rate of human progress only continues to grow towards an asymptotic climb we can’t even imagine.

To help capture this, I thought I’d make a handy change.log on human progress to date.

# Notable changes to homo sapiens – change.log
## [2.1.0] – 1990s CE “technology revolution & internet”
### Changed
– New comms protocol to distribute “rights”. Printing press patch unexpectedly useful for distributing resources. Moved from basic multi-core to clusters of independent processors with exponential growth in power distribution.

## [2.0.0] – 1789 CE “independence movements”
### Added
– Implemented new user permissions called “rights”, early prototype of multi-core processing with distributed power & comms.

## [1.2.0] – 1760 CE “industrial revolution”
### Changed
– Agricultural libraries replaced by industrial libraries, still single core but heaps faster.

## [1.1.1] – 1440 CE “gutenberg”
### Patched
– Printing press a minor patch for more efficient instructions distribution, wonder if it’d be more broadly useful?

## [1.1.0] – 2,000 BCE “cities era”
### Changed
– Switched rural for urban operating environment. Access to more resources but still on single core.

## [1.0.0] – 8,000 BCE “agricultural revolution”
### Added
– New agricultural libraries, likely will create surplus and population explosion. Heaps less resource intensive.

## [0.1.0] – 250,000 BCE “homo sapiens”
### Added
– Created fork from homo erectus, wasn’t confident in project direction though they may still submit contributions…

(For more information about human evolution, see https://www.bighistoryproject.com)

The point to this rapid and highly oversimplified historical introduction is threefold: 1) we are more powerful than ever before, 2) the rate of change is only increasing, and 3) we made all this up, and we can make it up again. It is important to recognise that we made all of this up. Intellectually we all understand this but it matters because we often assume things are how they are, and then limit ourselves to working within the constraints of the status quo. But what we invented, we can change, if we choose.

We can choose our own adventure, or we let others choose on our behalf. And if we unthinkingly implement the thinking, assumptions and outdated paradigms of the past, then we are choosing to reimplement the past.

Although we are more individually and collectively powerful than ever before, how often do you hear “but that’s just how we’ve always done it”, “but that’s not traditional”, or “change is too hard”. We are demonstrably and historically utter masters at change, but life has become so big, so fast, and so interrelated that change has become scary for many people, so you see them satisfied by either ignoring change or making iterative improvements to the status quo. But we can do better. We must do better.

I believe we are at a significant tipping point in history. The world and the very foundations our society were built on have changed, but we are still largely stuck in the past in how we think and plan for the future. If we don’t make some active decisions about how we live, think and act, then we will find ourselves subconsciously reinforcing the status quo at every turn and not in a position to genuinely create a better future for all.

So what could we do?

  • Solve poverty and hunger: distributed property through nanotechnology and 3D printing, universal education and income.
  • Work 2 days a week, automate the rest: work, see “Why the Future is Workless” by Tim Dunlop
  • Embrace and extend our selves: transhumanism, the Paralympics, “He was more than a dolphin, but from another dolphin’s point of view he might have seemed like something less.” — William Gibson, from Johnny Mnemonic. Why are we so conservative about what it means to be human? About our picture of self? Why do we get caught up on what is “natural” when almost nothing we do is natural?
  • Overcome the tyranny of distance: rockets for international travel, interstellar travel, the opportunity to have new systems of organising ourselves
  • Global citizens: Build a mighty global nation where everyone can flourish and have their rights represented beyond the narrow geopolitical nature of states: peer to peer economy, international rights, transparent gov, digital democracy, overcome state boundaries,
  • ?? What else ?? I’m just scratching the surface!

So how can we build a better world? Luckily, the human species has geeks. Geeks, all of us, are special because we are the pioneers of the modern age and we get to build the operating system for all our fellow humans. So it is our job to ensure what we do makes the world a better place.

rOml is going to talk more about future options for open source in the Friday keynote, but I want to explore how we can individually and collectively build for the future, not for the past.

I would suggest, given our role as creators, that it is incumbent on us to ensure we build a great future world that supports all the freedoms we believe in. It means we need to be individually aware of our unconscious bias, what beliefs and assumptions we hold, who benefits from our work, whether diversity is reflected in our life and work, what impact we have on society, what we care about and the future we wish to see.

Collectively we need to be more aware of whether we are contributing to future or past models, whether belief systems are helping or hindering progress, how we treat others and what from the past we want to keep versus what we want to get rid of.

Right now we have a lot going on. On the one hand, we have a lot of opportunities to improve things and the tools and knowledge at our disposal to do so. On the other hand, we have locked up so much of our knowledge and tools, traditional institutions are struggling to maintain their authority and control, citizens are understandably frustrated and increasingly taking matters into their own hands, we have greater inequality than ever before, an obsession with work at the cost of living, and we are expected to sacrifice our humanity at the altar of economics.

Questions to ask yourself:

Who are/aren’t you building for?
What is the default position in society?
What does being human mean to you?
What do we value in society?
What assumptions and unconscious bias do you have?
How are you helping non-geeks help themselves?
What future do you want to see?

What should be the rights, responsibilities and roles of citizens, governments, companies and academia?

Finally, we must also help our fellow humans shift from being consumers to creators. We are all only as free as the tools we use, and though geeks will always be able to route around damage, be that technical or social, many of our fellow humans do not have the same freedoms we do.

Fundamental paradigm shifts we need to consider in building the future.

Scarcity → Surplus
Closed → Open
Centralised → Distributed
Analogue → Digital
Belief → Rationalism
Win/lose → Cooperative competitive
Nationalism → Transnationalism
Normative humans → Formative humans

Open source is the best possible modern expression of cooperative competitiveness that also integrates our philosophical shift towards human rights and powerful citizens, so I know it will continue to thrive and win when pitted against closed models, broadly speaking.

But in inventing the future, we need to be so very careful that we don’t simply rebuild the past with new shiny tools. We need to keep one eye always on the future we want to build, on how what we are doing contributes to that future, and to ensuring we have enough self awareness and commitment to ensuring we don’t accidentally embed in our efforts the outdated and oftentimes repressive habits of the past.

To paraphrase Gandhi, build the change you want to see. And build it today.

Thank you, and I hope you will join me in forging a better future.

February 16, 2017

New Research on Our Little Cousins to the North!

Homo floresiensis

Last year, several research papers were published on the ongoing excavations and analysis of material from the island of Flores in Indonesia, where evidence of very small stature hominins was found in the cave of Liang Bua in 2003. The initial dates placed these little people at between 50,000 and about 14,000 years ago, which would have meant that they lived side-by-side with anatomically modern humans in Indonesia in the late Ice Age. The hominins, dubbed Homo floresiensis after the island on which they were found, stood about 1m tall – smaller than any group of modern humans known. Their tiny size included a tiny brain – more in the range of 4 million year old Australopithecus than anything else. However, critical areas of higher order thinking in their brains were on par with modern humans.

Baffled by the seeming wealth of contradictions these little people raised, researchers returned to the island, and the cave of Liang Bua, determined to check all of their findings in even more detail. Last year, they reported that they had in fact made some mistakes the first time around. Very, very subtle changes in the sediments of the deposits revealed that the Homo floresiensis bones belonged to some remnant older deposits, which had been eroded away in other parts of the cave and replaced by much younger layers. Despite the samples for dating having been taken from close to the hominin bones, as luck would have it, they were all in the younger deposits! New dates, run on the actual sediments containing the bones, gave ages of between 190,000 and 60,000 years. Dates from close to the stone tools found with the hominins gave dates down to 50,000 years ago, but no later.

Liang Bua. Image by Rosino

The researchers, demonstrating a high level of ethics and absolutely correct scientific procedure, published the amended stratigraphy and dates, showing how the errors had occurred. At another site, Mata Menge, they had also found some ancestral hominins, very similar in body type to the ones from Liang Bua, dated to 700,000 years ago. Palaeoanthropologists were able to find similarities linking these hominins to the early Homo erectus found on Java and dated to about 1.2 million years ago, leading researchers to suggest that Homo floresiensis was a parallel evolution to modern humans, out of early Homo erectus in Indonesia, making them a fairly distant cousin on the grand family tree.

Careful examination of the deposits has now also called into question whether Homo floresiensis could control fire. We know that they made stone tools – of a type pretty much unchanged over more than 600,000 years – and that they used these tools to help them hunt Stegodon, an Ice Age dwarf elephant which was as small as 1.5m at the shoulder. However, researchers now think that evidence of controlled fire is only found in layers associated with modern humans. It is this cross-over between Homo floresiensis and modern humans, arriving about 60,000 – 50,000 years ago, that is a focus of current research – including that of teams working there now. At the moment, it looks as if Homo floresiensis disappears at about the same time that modern humans arrive, which, sadly, is not a totally unlikely pattern.

Stegodon. Image by I, Vjdchauhan.

What does this have to do with Australia? Well, it’s always interesting to get information about our immediate neighbours and their history (and prehistory). But beyond that – we know that the ancestors of Aboriginal people (modern humans) were in Australia by about 60,000 – 50,000 years ago, so understanding how they arrived is part of understanding our own story. For more case studies on interesting topics in archaeology and palaeontology see our Archaeology Textbook resources for Year 11 students.

New D&D Cantrip

Name: Alternative Fact
Level: 0
School: EN
Time: 1 action
Range: global, contagious
Components: V, S, M (one racial, cultural or religious minority to blame)
Duration: Permanent (irrevocable)
Classes: Cleric, (Grand) Wizard, Con-man Politician

The caster can tell any lie, no matter how absurd or outrageous (in fact, the more outrageous the better), and anyone hearing it (or hearing about it later) with an INT of 10 or less will believe it instantly, with no saving throw. They will defend their new belief to the death – theirs or yours.

This belief can not be disbelieved, nor can it be defeated by any form of education, logic, evidence, or reason. It is completely incurable. Dispel Magic does not work against it, and Remove Curse is also ineffectual.

New D&D Cantrip is a post from: Errata

Two Weeks’ Notice

Last week, a rather heavy document envelope showed up in the mail.

Inside I found a heavy buff-coloured envelope, along with my passport — now containing a sticker featuring an impressive collection of words, numbers, and imagery of landmarks from the United States of America. I’m reliably informed that sticker is the valid US visa that I’ve spent the last few months applying for.

Having that visa issued has unblocked a fairly important step in my path to moving in with Josh (as well as eventually getting married, but that’s another story). I’m very very excited about making the move, though very sad to be leaving the city I’ve grown up in and come to love, for the time being.

Unrelatedly, I happened to have a trip planned to Montréal to attend ConFoo in March. Since I’ll already be in the area, I’m using that trip as my opportunity to move.

My last day in Hobart will be Thursday 2 March. Following that, I’ll be spending a day in Abu Dhabi (yes, there is a good reason for this), followed by a week in Montréal for ConFoo.

After that, I’ll be moving in with Josh in Petaluma, California on Saturday 11 March.

But until then, I definitely want to enjoy what remaining time I have in Hobart, and catch up with many many people.

Over the next two weeks I’ll be:

  • Attending, and presenting a talk at WD42 — my talk will be one of my pieces for ConFoo, and is entirely new material. Get along!
  • Having a farewell do, *probably* on Tuesday 28 February (but that’s not confirmed yet). I’ll post details about where and when that’ll be in the near future (once I’ve made plans)
  • Madly packing and making sure that I use up as close to 100% of my luggage allowance as possible

If you want to find some time to catch up over the next couple of weeks, before I disappear for quite some time, do let me know.

February 15, 2017

FreeDV 700C

Over the past month the FreeDV 700C mode has been developed, integrated into the FreeDV GUI program version 1.2, and tested. Windows versions (64 and 32 bit) of this program can be downloaded from freedv.org. Thanks Richard Shaw for all your hard work on the release and installers.

FreeDV 700C uses the Codec 2 700C vocoder with the COHPSK modem. Some early results:

  • The US test team report 700C contacts over 2500km at SNRs down to -2dB, in conditions where SSB cannot be heard.
  • My own experience: the 700C speech quality is not quite as good as FreeDV 1600, but usable for conversation. That’s OK – it’s very early days for the 700C codec, and hey, it’s half the bit rate of 1600. I’m actually quite excited that 700C can be used conversationally at this early stage! I experienced a low SNR channel where FreeDV 700C didn’t work but SSB did, however 700C certainly works at much lower SNRs than 1600.
  • Some testers in Europe report 700C falling over at relatively high SNRs (e.g. 8dB). I also experienced this on a 1500km contact. Suspect this is a bug or corner case we can fix, especially in light of the US teams results.

Tony, K2MO, has put together this fine video demonstrating the various FreeDV modes over a simulated HF channel:

It’s early days for 700C, and there are mixed reports. However it’s looking promising. My next steps are to further explore the real world operation of FreeDV 700C, and work on improving the low SNR performance further.

Modems for HF Digital Voice Part 2

In the previous post I argued that pushing bits through a HF channel involves much wailing and gnashing of teeth. Now we shall apply numbers and graphs to the problem, which is – in a nutshell – Engineering.

QPSK Modem Simulation

I have worked up a GNU Octave modem simulation called hf_modem_curves.m. This operates at 1 sample/symbol, i.e. the sample rate is the symbol rate. So we take some random bits, map them to QPSK symbols, add noise, then turn the noisy symbols back into bits and count errors:
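If you want to play with the idea without opening Octave, here's a cut-down Python/NumPy sketch of the same bits → symbols → noise → bits pipeline (Gray-mapped QPSK over AWGN, 1 sample/symbol, no timing or phase sync; a sketch of the approach, not a port of hf_modem_curves.m):

import numpy as np

def qpsk_awgn_ber(ebno_db, nbits=100_000, seed=1):
    """Measure the BER of Gray-mapped QPSK over an AWGN channel."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, nbits)
    # Map bit pairs to QPSK symbols on the 45/135/225/315 degree diagonals.
    i = 1 - 2 * bits[0::2]
    q = 1 - 2 * bits[1::2]
    tx = (i + 1j * q) / np.sqrt(2)        # unit symbol energy, 2 bits/symbol
    # With Es = 1 and 2 bits per symbol, No = 1 / (2 * Eb/No).
    ebno = 10 ** (ebno_db / 10)
    no = 1 / (2 * ebno)
    noise = np.sqrt(no / 2) * (rng.standard_normal(tx.size)
                               + 1j * rng.standard_normal(tx.size))
    rx = tx + noise
    # Decide each bit from the quadrant of the received symbol.
    rx_bits = np.empty_like(bits)
    rx_bits[0::2] = (rx.real < 0).astype(int)
    rx_bits[1::2] = (rx.imag < 0).astype(int)
    return np.mean(rx_bits != bits)

print(qpsk_awgn_ber(4.0))   # roughly 1e-2, matching the curves below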

The simulation ignores a few real world details like timing and phase synchronisation, so it is a best case model. That’s OK for now. QPSK uses symbols that each carry 2 bits of information; here is the symbol set or “constellation”:

Four different points, each representing a different 2 bit combination. For example the bits ’00’ would be the cross at 45 degrees, ’10’ at 135 degrees etc. The plot above shows all possible symbols, but we just send one at a time. However it’s useful to plot all of the received symbols like this, as an indication of received signal quality. If the channel is playing nice, we receive something like this:

Each cross is now a fuzzy dot, as noise has been added by the channel. No bit errors yet – a bit error happens when we get enough noise to move received symbols into another quadrant. This sort of channel is called Additive White Gaussian Noise (AWGN). Line of sight UHF radio is a good example of a real world AWGN channel – all you have to worry about is additive noise.

With a fading or multipath channel like HF we end up with something like:

In a fading channel the received symbol amplitudes bounce up and down as the channel fades in and out. Sometimes the symbols dip down into the noise and we get lots of bit errors. Sometimes the signal is reinforced, and the symbol amplitude gets bigger.

The simulation of the multipath or HF channel uses a two-path model, with additive noise as per the AWGN simulation:

Graphs and Modem Performance

Turns out there are some surprisingly good models to help us work out the expected Bit Error Rate (BER) for a modem. By “model” I mean people have worked out the maths to describe the Bit Error Rate (BER) for a QPSK Modem. This graph shows us how to work out the BER for QPSK (and BPSK):

So the red line shows us the BER given Eb/No (E-B on N-naught), which is a normalised form of Signal to Noise Ratio (SNR). Think about Eb/No as a modem running at 1 bit per second, with the noise power measured in 1 Hz of bandwidth. It’s a useful scale for comparing modems and modulation schemes.

Looking at the black lines, we can see that for an Eb/No of 4dB, we can expect a BER of 1E-2 or 0.01, i.e. 1% of our bits will be received in error over an AWGN channel. This curve is for QPSK or BPSK; different curves would be used for other modems like FSK.
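For reference, the red curve is just the textbook result for coherent BPSK/QPSK on AWGN, BER = Q(sqrt(2 Eb/No)); a couple of lines of Python reproduce the 1% point read off the graph (this is the standard formula, nothing specific to my code):

from math import erfc, sqrt

def qpsk_awgn_ber_theory(ebno_db):
    """Theoretical BER for coherent BPSK/QPSK on AWGN: Q(sqrt(2*Eb/No))."""
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * erfc(sqrt(ebno))   # Q(sqrt(2x)) = 0.5 * erfc(sqrt(x))

print(qpsk_awgn_ber_theory(4.0))    # about 0.0125, i.e. roughly 1% at Eb/No = 4dB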

Given Eb/No you can work out the SNR if you know the bit rate and noise bandwidth:

    SNR = S/N = EbRb/NoB

or in dB:

    SNR(dB) = Eb/No(dB) + 10log10(Rb/B)

For example at Rb = 1600 bit/s and a noise bandwidth B = 3000 Hz:

    SNR(dB) = 4 + 10log10(1600/3000) = 1.27 dB
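That conversion is handy enough to wrap in a little helper; here's a sketch, using the 3000 Hz noise bandwidth convention from this post:

from math import log10

def ebno_to_snr_db(ebno_db, bit_rate, noise_bw=3000):
    """Convert Eb/No (dB) to SNR (dB) for a given bit rate and noise bandwidth."""
    return ebno_db + 10 * log10(bit_rate / noise_bw)

print(ebno_to_snr_db(4, 1600))   # 1.27 dB, as in the worked example above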

OK, so that was for ideal QPSK. Let’s add a few more curves to our graph:

We have added the experimental results for our QPSK simulation (green), and for Differential QPSK (DQPSK – blue). Our QPSK modem simulation (green) is right on top of the theoretical QPSK curve (red) – this is good and shows our simulation is working really well.

DQPSK was discussed in Part 1. Phase differences are sent, which helps with phase errors in the channels but costs us extra bit errors. This is evident on the curves – at the 1E-2 BER line, DQPSK requires 7dB Eb/No, 3dB more (double the power) than QPSK.

Now let’s look at modem performance for HF (multipath) channels, on this rather busy graph (click for larger version):

Wow, HF sucks. Looking at the theoretical HF QPSK performance (straight red line) to achieve a BER of 1E-2, we need 14dB of Eb/No. That’s 10dB worse than QPSK on the AWGN channel. With DQPSK, we need about 16dB.

For HF, a lot of extra power is required to make a small difference in BER.

Some of the kinks in the HF curves (e.g. green QPSK HF simulated just under red QPSK HF theory) are due to not enough simulation points – it’s not actually possible to do better than theory!

Estimated Performance of FreeDV Modes

Now we have the tools to estimate the performance of FreeDV modes. FreeDV 1600 uses Codec 2 at 1300 bit/s, plus a little FEC at 300 bit/s, to give a total of 1600 bit/s. With the FEC, let’s say we can get reasonable voice quality at 4% BER. FreeDV 1600 uses a DQPSK modem.

On an AWGN channel, that’s an Eb/No of 4.4dB for DQPSK, and a SNR of:

    SNR(dB) = 4.4 + 10log10(1600/3000) = 1.7 dB

On a multipath channel, that’s an Eb/No of 11dB for DQPSK, and a SNR of:

    SNR(dB) = 11 + 10log10(1600/3000) = 8.3 dB

As discussed in Part 1, FreeDV 700C uses diversity and coherent QPSK, and has a multipath (HF) performance curve plotted in cyan above, and close to ideal QPSK on AWGN channels. The payload data rate is 700 bit/s, however we have an overhead of two pilot symbols for every 4 data symbols. This means we effectively need a bit rate of Rb = 700*(4+2)/4 = 1050 bit/s to pump 700 bit/s through the channel. It doesn’t have any FEC (yet, anyway), so we need a BER a little lower than FreeDV 1600’s, about 2%. Running the numbers:

On an AWGN channel, for 2% BER we need an Eb/No of 3dB for QPSK, and a SNR of:

    SNR(dB) = 3 + 10log10(1050/3000) = -1.5 dB

On a multipath channel, diversity (cyan line) helps a lot, that’s an Eb/No of 8dB, and a SNR of:

    SNR(dB) = 8 + 10log10(1050/3000) = 3.4 dB
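Reusing the ebno_to_snr_db helper sketched earlier, the four operating points can be cranked out in a few lines (the Eb/No values are just the ones read off the curves):

# (mode, channel, Eb/No in dB, effective bit rate through the channel)
cases = [
    ("FreeDV 1600", "AWGN",       4.4, 1600),
    ("FreeDV 1600", "multipath", 11.0, 1600),
    ("FreeDV 700C", "AWGN",       3.0, 1050),  # 700*(4+2)/4 = 1050 bit/s with pilots
    ("FreeDV 700C", "multipath",  8.0, 1050),
]

for mode, channel, ebno_db, rb in cases:
    print(f"{mode} {channel:9}: SNR = {ebno_to_snr_db(ebno_db, rb):5.1f} dB")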

The diversity model in the simulation uses two carriers. The amplitudes of each carrier after passing through the multipath model are plotted below:

Often when one carrier is faded, the other is not faded, so when we recombine them at the receiver we get an average that is closer to AWGN performance. However diversity is not perfect; occasionally both carriers are wiped out at the same time by a fade.

So we can see FreeDV 700C is about 4dB in front of FreeDV 1600, which matches the best reports from early adopters. I’ve had reports of FreeDV 700C operating at as low as -2dB, which is presumably on channels that don’t have heavy fading and are more like AWGN. Also some reports of 700C falling over at high SNRs (around 8dB)! However that is probably a bug, e.g. a sync issue or something else we can track down in time.

Real world channels can vary. The multipath model above doesn’t take into account fast or slow fading, it just calculates the average bit error rate. In practice, slow fading is hard to handle in digital voice applications, as the whole channel might be wiped out for a few seconds.

Now that we have a reasonable 700 bit/s codec, we can also consider other schemes, such as a more powerful FEC code rather than diversity. Like diversity, FEC codes provide “coding gain”, moving our operating point to the left. Really good codes operate at 10% BER, right over in the Eb/No = 2dB region of the curve. No free lunch of course – such codes may require long latency (seconds) or be expensive to decode.

Next Steps

I’d like to “instrument” FreeDV 700C and work with the 700C early adopters to find out how well it’s working, why and how it falls over, and work through any obvious bugs. Then start experimenting with ways to make it operate at lower SNRs, such as more powerful FEC codes or even non-redundant techniques like Trellis decoding.

Now that we have shown Codec 2 700C has sufficient quality for conversations over the air, I’m planning another iteration of the Codec 2 700C vocoder design to see if we can improve speech quality.

Links

Modems for HF Digital Voice Part 1.

More Eb/No to SNR worked examples.

Similar modem calculations were used to develop a 100 kbit/s telemetry system to send HD images from High Altitude Balloons.

February 14, 2017

j-core + Numato Spartan 6 board + Fedora 25

A couple of changes to http://j-core.org/#download_bitstream made it easy for me to get going:

  • To stop ModemManager from trying to treat the board as a “modem”, create /etc/udev/rules.d/52-numato.rules with the following content:
    # Make ModemManager ignore Numato FPGA board
    ATTRS{idVendor}=="2a19", ATTRS{idProduct}=="1002", ENV{ID_MM_DEVICE_IGNORE}="1"
  • You will need to install python3-pyserial and minicom
  • The minicom command line I used was:
    sudo stty -F /dev/ttyACM0 -crtscts && minicom -b 115200 -D /dev/ttyACM0

and along with the instructions on j-core.org, I got it to load a known good build.

This Week in HASS – term 1, week 3

This is a global week in HASS for primary students. Our youngest students are marking countries around the world where they have family members, slightly older students are examining the Mayan calendar, while older students get nearer to Australia, examining how people reached Australia and encountered its unique wildlife.

Foundation to Year 3

Mayan date

Foundation students doing the Me and My Global Family unit (F.1) are working with the world map this week, marking countries where they have family members with coloured sticky dots. Those doing the My Global Family unit (F.6), and students in Years 1 to 3 (Units 1.1; 2.1 and 3.1), are examining the Mayan calendar this week. The Mayan calendar is a good example of an alternative type of calendar, because it is made up of different parts, some of which do not track the seasons, and is cyclical, based on nested circles. The students learn about the 2 main calendars used by the Mayans – a secular and a celebratory sacred calendar, as well as how the Mayans divided time into circles running at different scales – from the day to the millennium and beyond. And no, in case anyone is still wondering – they did not predict the end of the world in 2012, merely the end of one particular long-range cycle, and hence, the beginning of a new one…

Years 3 to 6

Lake Mungo, where people lived at least 40,000 years ago.

Students doing the Exploring Climates unit (3.6), and those in Years 4 to 6 (Units 4.1, 5.1 and 6.1), are examining how people reached Australia during the Ice Age, and what Australia was like when they arrived. People had to cross at least 90 km of open sea to reach Australia, even during the height of the Ice Age, and this sea gap led to the relative isolation of animals in Australia from others in Asia. This phenomenon was first recorded by Alfred Wallace, who drew a line on a map marking the change in fauna; as a result, this line became known as the Wallace Line. Students will also examine the archaeological evidence, and sites of the first people in Australia, ancestors of Aboriginal people. The range of sites across Australia, with increasingly early dates, amply demonstrates the depth of antiquity of Aboriginal knowledge and experience in Australia.

February 13, 2017

High Power Lustre

(Most of the hard work here was done by fellow blogger Rashmica - I just verified her instructions and wrote up this post.)

Lustre is a high-performance clustered file system. Traditionally the Lustre client and server have run on x86, but both the server and client will also work on Power. Here's how to get them running.

Server

Lustre normally requires a patched 'enterprise' kernel - typically an old RHEL, CentOS or SUSE kernel. We tested with a CentOS 7.3 kernel. We tried to follow the Intel instructions for building the kernel as much as possible - any deviations we had to make are listed below.

Setup quirks

We are told to edit ~/kernel/rpmbuild/SPEC/kernel.spec. This doesn't exist because the directory is SPECS not SPEC: you need to edit ~/kernel/rpmbuild/SPECS/kernel.spec.

I also found there was an extra quote mark in the supplied patch script after -lustre.patch. I removed that and ran this instead:

# concatenate every patch in the series into a single lustre patch
for patch in $(<"3.10-rhel7.series"); do
      patch_file="$HOME/lustre-release/lustre/kernel_patches/patches/${patch}"
      cat "${patch_file}" >> "$HOME/lustre-kernel-x86_64-lustre.patch"
done

The fact that there is 'x86_64' in the patch name doesn't matter as you're about to copy it under a different name to a place where it will be included by the spec file.

Building for ppc64le

Building for ppc64le was reasonably straight-forward. I had one small issue:

[build@dja-centos-guest rpmbuild]$ rpmbuild -bp --target=`uname -m` ./SPECS/kernel.spec
Building target platforms: ppc64le
Building for target ppc64le
error: Failed build dependencies:
       net-tools is needed by kernel-3.10.0-327.36.3.el7.ppc64le

Fixing this was as simple as a yum install net-tools.

This was sufficient to build the kernel RPMs. I installed them and booted to my patched kernel - so far so good!

Building the client packages: CentOS

I then tried to build and install the RPMs from lustre-release. This repository provides the sources required to build the client and utility binaries.
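For reference, the build steps amount to something like the following (a minimal sketch, assuming a checked-out lustre-release tree; autogen.sh ships in that tree, and 'make rpms' is my assumption for how the RPMs get produced - the plain 'make' described below also worked):

cd lustre-release
sh ./autogen.sh       # generates the configure script
./configure
make rpms             # builds the client and utility RPMs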

./configure and make succeeded, but when I went to install the packages with rpm, I found I was missing some dependencies:

error: Failed dependencies:
        ldiskfsprogs >= 1.42.7.wc1 is needed by kmod-lustre-osd-ldiskfs-2.9.52_60_g1d2fbad_dirty-1.el7.centos.ppc64le
        sg3_utils is needed by lustre-iokit-2.9.52_60_g1d2fbad_dirty-1.el7.centos.ppc64le
        attr is needed by lustre-tests-2.9.52_60_g1d2fbad_dirty-1.el7.centos.ppc64le
        lsof is needed by lustre-tests-2.9.52_60_g1d2fbad_dirty-1.el7.centos.ppc64le

I was able to install sg3_utils, attr and lsof, but I was still missing ldiskfsprogs.

It seems we need the lustre-patched version of e2fsprogs - I found a mailing list post to that effect.

So, following the instructions on the walkthrough, I grabbed the SRPM and installed the dependencies: yum install -y texinfo libblkid-devel libuuid-devel
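In concrete terms that step looks something like this (a sketch; the SRPM file name here is hypothetical, use whichever one the walkthrough points you at):

rpm -ivh e2fsprogs-1.42.13.wc5-7.el7.src.rpm   # hypothetical file name; unpacks into ~/rpmbuild
cd ~/rpmbuild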

I then tried rpmbuild -ba SPECS/e2fsprogs-RHEL-7.spec. This built, but failed some of the tests. Some failed because I ran out of disk space - they were using tens of gigabytes. I found that there were some comments in the spec file about this with suggested tests to disable, so I did that. Even with that fix, I was still failing two tests:

  • f_pgsize_gt_blksize: Intel added this to their fork, and no equivalent exists in the master e2fsprogs branches. This relates to Intel-specific assumptions about page sizes which don't hold on Power.
  • f_eofblocks: This may need fixing for large page sizes, see this bug.

I disabled the tests by adding the following two lines to the spec file, just before make %{?_smp_mflags} check.

rm -rf tests/f_pgsize_gt_blksize
rm -rf tests/f_eofblocks

With those tests disabled I was able to build the packages successfully. I installed them with yum localinstall *1.42.13.wc5* (I needed that rather weird pattern to pick up important RPMs that didn't fit the e2fs* pattern - things like libcom_err and libss).

Following that I went back to the lustre-release build products and was able to successfully run yum localinstall *ppc64le.rpm!

Testing the server

After disabling SELinux and rebooting, I ran the test script:

sudo /usr/lib64/lustre/tests/llmount.sh

This spat out one scary warning:

mount.lustre FATAL: unhandled/unloaded fs type 0 'ext3'

The test did seem to succeed overall, and that warning appears to be a known problem, so I pressed on undeterred.

I then attached a couple of virtual hard drives for the metadata and object store volumes, and having set them up, proceeded to try to mount my freshly minted lustre volume from some clients.
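For anyone following along, setting the volumes up looks roughly like this (a sketch only, assuming the two virtual disks appeared as /dev/vdb and /dev/vdc and a filesystem name of 'lustre'):

# combined MGS/MDT on the first disk
sudo mkfs.lustre --fsname=lustre --mgs --mdt --index=0 /dev/vdb
# OST on the second disk, pointing back at the MGS
sudo mkfs.lustre --fsname=lustre --ost --index=0 --mgsnode=$SERVER_IP@tcp /dev/vdc
sudo mkdir -p /mnt/mdt /mnt/ost
sudo mount -t lustre /dev/vdb /mnt/mdt
sudo mount -t lustre /dev/vdc /mnt/ost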

Testing with a ppc64le client

My first step was to test whether another ppc64le machine would work as a client.

I tried with an existing Ubuntu 16.04 VM that I use for much of my day to day development.

A quick google suggested that I could grab the lustre-release repository and run make debs to get Debian packages for my system.

I needed the following dependencies:

sudo apt install module-assistant debhelper dpatch libsnmp-dev quilt

With those the packages built successfully, and could be easily installed:

dpkg -i lustre-client-modules-4.4.0-57-generic_2.9.52-60-g1d2fbad-dirty-1_ppc64el.deb lustre-utils_2.9.52-60-g1d2fbad-dirty-1_ppc64el.deb

I tried to connect to the server:

sudo mount -t lustre $SERVER_IP@tcp:/lustre /lustre/

Initially I wasn't able to connect to the server at all. I remembered that, unlike Ubuntu, CentOS comes with quite an aggressive firewall by default. I ran the following on the server:

systemctl stop firewalld

And voila! I was able to connect, mount the lustre volume, and successfully read and write to it. This is very much an over-the-top hack - I should have poked holes in the firewall to allow just the ports lustre needed. This is left as an exercise for the reader.
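For the record, the more polite fix is probably just to open LNET's TCP port (988 by default) rather than stopping firewalld entirely - something like:

sudo firewall-cmd --permanent --add-port=988/tcp   # 988 is the default LNET/socklnd port
sudo firewall-cmd --reload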

Testing with an x86_64 client

I then tried to run make debs on my Ubuntu 16.10 x86_64 laptop.

This did not go well - I got the following error:

liblustreapi.c: In function ‘llapi_get_poollist’:
liblustreapi.c:1201:3: error: ‘readdir_r’ is deprecated [-Werror=deprecated-declarations]

This looks like one of the new errors introduced in recent GCC versions, and is a known bug. To work around it, I found the following stanza in lustre/autoconf/lustre-core.m4, and removed the -Werror:

AS_IF([test $target_cpu == "i686" -o $target_cpu == "x86_64"],
        [CFLAGS="$CFLAGS -Wall -Werror"])
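If you'd rather not hand-edit the file, a throwaway sed does the same thing (assuming the stanza looks exactly as above); remember to re-run autogen.sh/configure afterwards so the change is picked up:

sed -i 's/-Wall -Werror/-Wall/' lustre/autoconf/lustre-core.m4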

Even this wasn't enough: I got the following errors:

/home/dja/dev/lustre-release/debian/tmp/modules-deb/usr_src/modules/lustre/lustre/llite/dcache.c:387:22: error: initialization from incompatible pointer type [-Werror=incompatible-pointer-types]
         .d_compare = ll_dcompare,
                  ^~~~~~~~~~~
/home/dja/dev/lustre-release/debian/tmp/modules-deb/usr_src/modules/lustre/lustre/llite/dcache.c:387:22: note: (near initialization for ‘ll_d_ops.d_compare’)

I figured this was probably because Ubuntu 16.10 has a 4.8 kernel, and Ubuntu 16.04 has a 4.4 kernel. Work on supporting 4.8 is ongoing.

Sure enough, when I fired up a 16.04 x86_64 VM with a 4.4 kernel, I was able to build and install fine.

Connecting didn't work the first time - the guest failed to mount, but I did get the following helpful error on the server:

LNetError: 2595:0:(acceptor.c:406:lnet_acceptor()) Refusing connection from 10.61.2.227: insecure port 1024

Refusing insecure port 1024 made me think that perhaps the NAT that qemu was performing for me was interfering - perhaps the server expected the connection to come from a privileged source port, and qemu wouldn't be able to do that with NAT.

Sure enough, switching from NAT to bridging was enough to get the x86 VM to talk to the ppc64le server. I verified that ls, reading and writing all succeeded.
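A quick way to double-check that a client really is talking to the OSTs (lfs ships with the lustre utilities) is something like the following, assuming the /lustre mount point used above:

sudo lfs df -h /lustre                                  # shows MDT and OST usage as seen by the client
sudo dd if=/dev/zero of=/lustre/testfile bs=1M count=16 # simple write test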

Next steps

The obvious next steps are following up the disabled tests in e2fsprogs, and doing a lot of internal performance and functionality testing.

Happily, it looks like Lustre might be in the mainline kernel before too long - parts have already started to go into staging. This will make our lives a lot easier: for example, the breakage between 4.4 and 4.8 would probably have already been picked up and fixed if it were in the main kernel tree rather than an out-of-tree patch set.

In the long run, we'd like to make Lustre on Power just as easy as Lustre on x86. (And, of course, more performant!) We'll keep you up to date!

(Thanks to fellow bloggers Daniel Black and Andrew Donnellan for useful feedback on this post.)