Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

May 22, 2018

Nellie Bly – investigative journalist extraordinaire!

May is the birth month of Elizabeth Cochrane Seaman, better known as “Nellie Bly”. Here at OpenSTEM, we have a great fondness for Nellie Bly – an intrepid 19th century journalist and explorer, who emulated Jules Verne’s fictional character, Phileas Fogg, in racing around the world in less than 80 days in 1889/1890. Not only […]

May 20, 2018

LUV June 2018 Workshop: Being an Acrobat: Linux and PDFs

Jun 16 2018 12:30
Jun 16 2018 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Portable Document Format (PDF) is a file format first specified by Adobe Systems in 1993. It was a proprietary format until it was released as an open standard on July 1, 2008, and published by the International Organization for Standardization.

This workshop presentation will demonstrate various ways that PDF files can be efficiently manipulated in Linux and other free software, which may not be easy in proprietary operating systems or applications.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


LUV June 2018 Main Meeting: VoxxedDays conference report

Jun 5 2018 18:30
Jun 5 2018 20:30
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE NEW LOCATION

6:30 PM to 8:30 PM Tuesday, June 5, 2018
Meeting Room 3, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

  • Andrew Pam, Voxxed Days conference report

Andrew will report on a conference he recently attended, covering Language-Level Virtualization with GraalVM, Aggressive Web Apps and more.

Many of us like to go for dinner nearby after the meeting, typically at Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.


May 18, 2018

How to maintain a local mirror of github repositories


Similarly to yesterday’s post about mirroring ONAP’s git, I also want to mirror all of the git repositories for certain github projects. In this specific case, all of the Kubernetes repositories.

So once again, here is a script based on something Tony Breeds and I cooked up a long time ago for OpenStack…

#!/usr/bin/env python

from __future__ import print_function

import datetime
import json
import os
import subprocess
import random
import requests

from github import Github as github


GITHUB_ACCESS_TOKEN = '...use yours!...'


def get_github_projects():
    g = github(GITHUB_ACCESS_TOKEN)
    for user in ['kubernetes']:
        for repo in g.get_user(login=user).get_repos():
            yield('https://github.com', repo.full_name)


def _ensure_path(path):
    if not path:
        return

    full = []
    for elem in path.split('/'):
        full.append(elem)
        if not os.path.exists('/'.join(full)):
            os.makedirs('/'.join(full))


starting_dir = os.getcwd()
projects = []
for res in list(get_github_projects()):
    if len(res) == 3:
        projects.append(res)
    else:
        projects.append((res[0], res[1], res[1]))
    
random.shuffle(projects)

for base_url, project, subdir in projects:
    print('%s Considering %s %s'
          %(datetime.datetime.now(), base_url, project))
    os.chdir(starting_dir)

    if os.path.isdir(subdir):
        os.chdir(subdir)

        print('%s Updating %s'
              %(datetime.datetime.now(), project))
        try:
            subprocess.check_call(
                ['git', 'remote', '-vvv', 'update'])
        except Exception as e:
            print('%s FAILED: %s'
                  %(datetime.datetime.now(), e))
    else:
        git_url = os.path.join(base_url, project)
        _ensure_path('/'.join(subdir.split('/')[:-1]))

        print('%s Cloning %s'
              %(datetime.datetime.now(), project))
        subprocess.check_call(
            ['ionice', '-c', 'idle', 'git', 'clone',
             '-vvv', '--mirror', git_url, subdir])

This script is basically the same as the ONAP one, but it understands how to get a project list from github and doesn’t need to handle ONAP’s slightly strange repository naming scheme.

I hope it is useful to someone other than me.



May 17, 2018

How to maintain a local mirror of ONAP’s git repositories


For various reasons, I like to maintain a local mirror of git repositories I use a lot, in this case ONAP. This is mostly because of the generally poor network connectivity in Australia, but it’s also because it makes cloning a new repository super fast.

Tony Breeds and I baked up a script to do this for OpenStack repositories a while ago. I therefore present a version of that mirror script which does the right thing for ONAP projects.

One important thing to note here that differs from OpenStack — ONAP projects aren’t named in a way where they will consistently sit in a directory structure together. For example, there is an “oom” repository, as well as an “oom/registrator” repository. We therefore need to normalise repository names on clone to ensure they don’t clobber each other — I do that by replacing path separators with underscores.
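
For example, here is a tiny sketch of that normalisation (the helper name is mine, purely for illustration; the script below does the same thing inline), showing how “oom” and “oom/registrator” end up as sibling mirror directories instead of clobbering each other:

def normalised_subdir(project):
    # 'oom' -> 'onap/oom'; 'oom/registrator' -> 'onap/oom_registrator'
    return 'onap/%s' % project.replace('/', '_')

assert normalised_subdir('oom') == 'onap/oom'
assert normalised_subdir('oom/registrator') == 'onap/oom_registrator'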

So here’s the script:

#!/usr/bin/env python

from __future__ import print_function

import datetime
import json
import os
import subprocess
import random
import requests

ONAP_GIT_BASE = 'ssh://mikal@gerrit.onap.org:29418'


def get_onap_projects():
    data = subprocess.check_output(
               ['ssh', 'gerrit.onap.org', 'gerrit',
                'ls-projects']).split('\n')
    for project in data:
        yield (ONAP_GIT_BASE, project,
               'onap/%s' % project.replace('/', '_'))


def _ensure_path(path):
    if not path:
        return

    full = []
    for elem in path.split('/'):
        full.append(elem)
        if not os.path.exists('/'.join(full)):
            os.makedirs('/'.join(full))


starting_dir = os.getcwd()
projects = list(get_onap_projects())
random.shuffle(projects)

for base_url, project, subdir in projects:
    print('%s Considering %s %s'
          %(datetime.datetime.now(), base_url, project))
    os.chdir(os.path.abspath(starting_dir))

    if os.path.isdir(subdir):
        os.chdir(subdir)

        print('%s Updating %s'
              %(datetime.datetime.now(), project))
        try:
            subprocess.check_call(
                ['git', 'remote', '-vvv', 'update'])
        except Exception as e:
            print('%s FAILED: %s'
                  %(datetime.datetime.now(), e))
    else:
        git_url = os.path.join(base_url, project)
        _ensure_path('/'.join(subdir.split('/')[:-1]))

        print('%s Cloning %s'
              %(datetime.datetime.now(), project))
        subprocess.check_call(
            ['ionice', '-c', 'idle', 'git', 'clone',
             '-vvv', '--mirror', git_url, subdir])

Note that your ONAP gerrit username probably isn’t “mikal”, so you might want to change that.

This script will checkout all ONAP git repositories into a directory named “onap” in your current working directory. A second run will add any new repositories, as well as updating the existing ones. Note that these are clones intended to be served with a local git server, instead of being clones you’d edit directly. To clone one of the mirrored repositories for development, you would then do something like:

$ git clone onap/aai_babel development/aai_babel

Or similar.



May 14, 2018

Actively looking for work

I am now actively looking for work, ideally something with Unix/C/Python in the research/open source/not-for-profit space. My long out-of-date resume has been updated.

May 13, 2018

Running mythtv-setup over ssh

In order to configure a remote MythTV server, I had to run mythtv-setup remotely over an ssh connection with X forwarding:

ssh -X mythtv@machine

For most config options, I can either use the configuration menus inside mythfrontend (over a VNC connection) or the Settings section of MythWeb, but some of the backend and tuner settings are only available through the main setup program.

Unfortunately, mythtv-setup won't work over an ssh connection by default and prints the following error in the terminal:

$ mythtv-setup
...
W  OpenGL: Could not determine whether Sync to VBlank is enabled.
Handling Segmentation fault
Segmentation fault (core dumped)

The fix for this was to specify a different theme engine:

mythtv-setup -O ThemePainter=qt

May 12, 2018

Head On


A sequel to Lock In, this book is a quick and fun murder mystery read. It has Scalzi’s distinctive style, which has generally meshed quite well for me, so it’s no surprise that I enjoyed this book.

 

Head On
John Scalzi
Fiction
Tor Books
April 19, 2018
336 pages

To some left with nothing, winning becomes everything. In a post-virus world, a daring sport is taking the US by storm. It's frenetic, violent and involves teams attacking one another with swords and hammers. The aim: to obtain your opponent's head and carry it through the goalposts. Impossible? Not if the players have Hayden's Syndrome. Unable to move, Hayden's sufferers use robot bodies, which they operate mentally. So in this sport anything goes, no one gets hurt - and crowds and competitors love it. Until a star athlete drops dead on the playing field. But is it an accident? FBI agents Chris Shane and Leslie Vann are determined to find out. In this game, fortunes can be made - or lost. And both players and owners will do whatever it takes to win, on and off the field. John Scalzi returns with Head On, a chilling near-future SF with the thrills of a gritty cop procedural. Head On brings Scalzi's trademark snappy dialogue and technological speculation to the future world of sports.



May 11, 2018

Vale Janet Hawtin Reid

Janet Hawtin Reid (@lucychili) sadly passed away last week.

A mutual friend called me earlier in the week to tell me, for which I’m very grateful.  We both appreciate that BlueHackers doesn’t ever want to be a news channel, so I waited to write about it here until other friends, just like me, would have also had a chance to hear via more direct and personal channels. I think that’s the way these things should flow.

Knitted Moomin troll, by Janet Hawtin Reid

I knew Janet as a thoughtful person, with strong opinions particularly on openness and inclusion.  And as an artist and generally creative individual, a lover of nature.  In recent years I’ve also seen her produce the most awesome knitted Moomins.

Short diversion as I have an extra connection with the Moomin stories by Tove Jansson: they have a character called My, after whom Monty Widenius’ eldest daughter is named, which in turn is how MySQL got named.  I used to work for MySQL AB, and I’ve known that My since she was a little smurf (she’s an adult now).

I’m not sure exactly when I met Janet, but it must have been around 2004 when I first visited Adelaide for Linux.conf.au.  It was then also that Open Source Industry Australia (OSIA) was founded, for which Janet designed the logo.  She may well have been present at the founding meeting in Adelaide’s CBD, too.  Anyhow, Janet offered to do the logo in a conversation with David Lloyd, and things progressed from there. On the OSIA logo design, Janet wrote:

I’ve used a star as the current one does [an earlier doodle incorporated the Southern Cross]. The 7 points for 7 states [counting NT as a state]. The feet are half facing in for collaboration and half facing out for being expansive and progressive.

You may not have realised this as the feet are quite stylised, but you’ll definitely have noticed the pattern-of-7, and the logo as a whole works really well. It’s a good looking and distinctive logo that has lasted almost a decade and a half now.

Linux Australia logo, by Janet Hawtin Reid

As Linux Australia’s president Kathy Reid wrote, Janet also helped design the ‘penguin feet’ logo that you see on Linux.org.au.  Just reading the above (which I just retrieved from a 2004 email thread) there does seem to be a bit of a feet-pattern there… of course the explicit penguin feet belong with the Linux penguin.

So, Linux Australia and OSIA actually share aspects of their identity (feet with a purpose), through their respective logo designs by Janet!  Mind you, I only realised all this when looking through old stuff while writing this post, as the logos were done at different times and only a handful of people have ever read the rationale behind the OSIA logo until now.  I think it’s cool, and a fabulous visual legacy.

Fir tree in clay, by Janet Hawtin Reid. Done in “EcoClay”, brought back to Adelaide from OSDC 2010 (Melbourne) by Kim Hawtin, Janet’s partner.

Which brings me to a related issue that’s close to my heart, and I’ve written and spoken about this before.  We’re losing too many people in our community – where, in case you were wondering, too many is defined as >0.  Just like in a conversation on the road toll, any number greater than zero has to be regarded as unacceptable. Zero must be the target, as every individual life is important.

There are many possible analogies with trees as depicted in the above artwork, including the fact that we’re all best enabled to grow further.

Please connect with the people around you.  Remember that connecting does not necessarily mean talking per se, as sometimes people just need to not talk, too.  Connecting, just like the phrase “I see you” from Avatar, is about being thoughtful and aware of other people.  It can just be a simple hello in passing (I say hi to “strangers” on my walks), a short email or phone call, a hug, or even just quietly being present in the same room.

We all know that you can just be in the same room as someone, without explicitly interacting, and yet feel either connected or disconnected.  That’s what I’m talking about.  Aim to be connected, in that real, non-electronic, meaning of the word.

If you or someone you know needs help or to talk right now, please call 1300 659 467 (in Australia – they can call you back, and you can also use the service online).  There are many more resources and links on the BlueHackers.org website.  Take care.

FreeDV 700D Part 4 – Acquisition

Since 2012 I have built a series of modems (FDMDV, COHPSK, OFDM) for HF Digital voice. I always get stuck on “acquisition” – demodulator algorithms that acquire and lock onto the received signal. The demod needs to rapidly estimate the frequency offset and “coarse” timing – the position where the modem frame starts in the sequence of received samples.

For my application (Digital Voice over HF), it’s complicated by the low SNR and fading HF channels, and the requirement for fast sync (a few hundred ms). For Digital Voice (DV) we need something fast enough to emulate Push To Talk (PTT) operation. In comparison HF data modems have it easy – they can take many lazy seconds to synchronise.

The latest OFDM modem has been no exception. I’ve spent several weeks messing about with acquisition algorithms to get half decent performance. Still some tuning to do but for my own sanity I think I’ll stop development here for now, write up the results, and push FreeDV 700D out for general consumption.

Acquisition and Sync Requirements

  1. Sync up quickly (a few 100ms) with high SNR signals.
  2. Sync up eventually (a few seconds is OK) for low SNR signals over poor channels. Sync eventually is better than none on channels where even SSB is struggling.
  3. Detect false sync and get out of it quickly. Don’t stay stuck in a false sync state forever.
  4. Hang onto sync through fades of a few seconds.
  5. Assume the operator can tune to within +/- 20Hz of a given frequency.
  6. Assume the radio drifts no more than +/- 0.2Hz/s (12 Hz a minute).
  7. Assume the sample clock offset (difference in ADC/DAC sample rates) is no more than 500ppm.

Actually the last three aren’t really requirements, it’s just what fell out of the OFDM modem design when I optimised it for low SNR performance on HF channels! The frequency stability of modern radios is really good; sound card sample clock offset less so but perhaps we can measure that and tell the operator if there is a problem.

Testing Acquisition

The OFDM modem sends pilot (known) symbols every frame. The demodulator correlates (compares) the incoming signal with the pilot symbol sequence. When it finds a close match it has a coarse timing candidate. It can then try to estimate the frequency offset. So we get a coarse timing estimate, a metric (called mx1) that says how close the match is, and a frequency offset estimate.
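
As a rough illustration of that correlation idea (a toy sketch of mine, not the actual modem code), coarse timing can be found by sliding the known pilot sequence over the received samples and keeping the lag with the strongest normalised match; the match strength plays the role of the mx1 metric:

import numpy as np

def coarse_timing(rx, pilots):
    # Toy coarse timing search: try every lag, correlate the received
    # samples against the known pilots, and return the best lag plus
    # its normalised correlation metric.
    best_lag, best_metric = 0, 0.0
    pilot_energy = np.sum(np.abs(pilots) ** 2)
    for lag in range(len(rx) - len(pilots) + 1):
        window = rx[lag:lag + len(pilots)]
        corr = np.abs(np.vdot(pilots, window))
        norm = np.sqrt(pilot_energy * np.sum(np.abs(window) ** 2)) + 1e-12
        if corr / norm > best_metric:
            best_lag, best_metric = lag, corr / norm
    return best_lag, best_metric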

Estimating frequency offsets is particularly tricky; I’ve experienced “much wailing and gnashing of teeth” with these nasty little algorithms in the past (stop laughing Matt). The coarse timing estimator is more reliable. The problem is that if you get an incorrect coarse timing or frequency estimate the modem can lock up incorrectly and may take several seconds, or operator intervention, before it realises its mistake and tries again.

I ended up writing a lot of GNU Octave functions to help develop and test the acquisition algorithms in ofdm_dev.

For example the function below runs 100 tests, measures the timing and frequency error, and plots some histograms. The core demodulator can cope with about +/- 1.5Hz of residual frequency offset and a few samples of timing error. So we can generate probability estimates from the test results. For example if we do 100 tests of the frequency offset estimator and 50 are within 1.5Hz of being correct, then we can say we have a 50% (0.5) probability of getting the correct frequency estimate.

octave:1> ofdm_dev
octave:2> acquisition_histograms(fin_en=0, foff_hz=-15, EbNoAWGN=-1, EbNoHF=3)
AWGN P(time offset acq) = 0.96
AWGN P(freq offset acq) = 0.60
HF P(time offset acq) = 0.87
HF P(freq offset acq) = 0.59

Here are the histograms of the timing and frequency estimation errors. These were generated using simulations of noisy HF channels (about 2dB SNR):


The x axis of timing is in samples, x axis of freq in Hz. They are both a bit biased towards positive errors. Not sure why. This particular test was with a frequency offset of -15Hz.

Turns out that as the SNR improves, the estimators do a better job. The next function runs a bunch of tests at different SNRs and frequency offsets, and plots the acquisition probabilities:

octave:3> acquisition_curves




The timing estimator also gives us a metric (called mx1) that indicates how strong the match was between the incoming signal and the expected pilot sequence. Here is a busy little plot of mx1 against frequency offset for various Eb/No (effectively SNR):

So as Eb/No increases, the mx1 metric tends to get bigger. It also falls off as the frequency offset increases. This means sync is tougher at low Eb/No and larger frequency offsets. The -10dB value was thrown in to see what happens with pure noise and no signal at the input. We’d prefer not to sync up to that. Using this plot I set the threshold for a valid signal at 0.25.

Once we have a candidate time and freq estimate, we can test sync by measuring the number of bit errors in a set of 10 Unique Word (UW) bits spread over the modem frame. Unlike the payload data in the modem frame, these bits are fixed, and known to the transmitter and receiver. In my initial approach I placed the UW bits right at the start of the modem frame. However I discovered a problem – with certain frequency offsets (e.g. multiples of the modem frame rate like +/- 6Hz) – it was possible to get a false sync with no UW errors. So I messed about with the placement of the UW bits until I had a UW that would not give any false syncs at any incorrect frequency offset. To test the UW I wrote another script:

octave:4> debug_false_sync

Which outputs a plot of UW errors against the residual frequency offset:

Note how at any residual frequency offset other than -1.5 to +1.5 Hz there are at least two bit errors. This allows us to reliably detect a false sync due to an incorrect frequency offset estimate.

State Machine

The estimators are wrapped up in a state machine to control the entire sync process:

  1. SEARCHING: look at a buffer of incoming samples and estimate timing, freq, and the mx1 metric.
  2. If mx1 is big enough, we jump to TRIAL.
  3. TRIAL: measure the number of Unique Word bit errors for a few frames. If they are bad this is probably a false sync so jump back to SEARCHING.
  4. If we get a low number of Unique Word errors for a few frames it’s high fives all round and we jump to SYNCED.
  5. SYNCED: We put up with up to two seconds of high Unique Word errors, as this is life on a HF channel. More than two seconds, and we figure the signal is gone for good, so we jump back to SEARCHING.
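
Here is a rough Python sketch of one update step of that state machine, just to make the flow concrete. The thresholds and frame counts are illustrative only (I’ve assumed 160ms frames, so two seconds is roughly 12 frames) – they are not the values used in the C code:

def sync_update(state, mx1, uw_errors, counter):
    # counter tracks good frames while in TRIAL, and bad frames while SYNCED
    if state == 'SEARCHING':
        if mx1 > 0.25:                    # candidate looks strong enough
            state, counter = 'TRIAL', 0
    elif state == 'TRIAL':
        if uw_errors >= 2:                # probably a false sync
            state, counter = 'SEARCHING', 0
        else:
            counter += 1
            if counter >= 3:              # a few good frames in a row
                state, counter = 'SYNCED', 0
    elif state == 'SYNCED':
        counter = counter + 1 if uw_errors >= 2 else 0
        if counter > 12:                  # roughly 2 seconds of bad frames
            state, counter = 'SEARCHING', 0
    return state, counter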

Reading Further

HF Modem Frequency Offset Estimation, an earlier look at freq offset estimation for HF modems
COHPSK and OFDM waveform design spreadsheet
Modems for HF Digital Voice Part 1
Modems for HF Digital Voice Part 2
README_ofdm.txt, including specifications of the OFDM modem.

May 08, 2018

Adding oslo privsep to a new project, a worked example


You’ve decided that using sudo to run command lines as root is lame and that it is time to step up and do things properly. How do you do that? Well, here’s a simple guide to adding oslo privsep to your project!

In a previous post I showed you how to add a new method that ran with escalated permissions. However, that’s only helpful if you already have privsep added to your project. This post shows you how to do that thing to your favourite python project. In this case we’ll use OpenStack Cinder as a worked example.

Note that Cinder already uses privsep because of its use of os-brick, so the instructions below skip adding oslo.privsep to requirements.txt. If your project has never ever used privsep at all, you’ll need to add a line like this to requirements.txt:

oslo.privsep

For reference, this post is based on OpenStack review 566,479, which I wrote as an example of how to add privsep to a new project. If you’re after a complete worked example in a more complete form than this post then the review might be useful to you.

As a first step, let’s add the code we’d want to write to actually call something with escalated permissions. In the Cinder case I chose the cgroups throttling code for this example. So first off we’d need to create the privsep directory with the relevant helper code:

diff --git a/cinder/privsep/__init__.py b/cinder/privsep/__init__.py
new file mode 100644
index 0000000..7f826a8
--- /dev/null
+++ b/cinder/privsep/__init__.py
@@ -0,0 +1,32 @@
+# Copyright 2016 Red Hat, Inc
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Setup privsep decorator."""
+
+from oslo_privsep import capabilities
+from oslo_privsep import priv_context
+
+sys_admin_pctxt = priv_context.PrivContext(
+    'cinder',
+    cfg_section='cinder_sys_admin',
+    pypath=__name__ + '.sys_admin_pctxt',
+    capabilities=[capabilities.CAP_CHOWN,
+                  capabilities.CAP_DAC_OVERRIDE,
+                  capabilities.CAP_DAC_READ_SEARCH,
+                  capabilities.CAP_FOWNER,
+                  capabilities.CAP_NET_ADMIN,
+                  capabilities.CAP_SYS_ADMIN],
+)

This code defines the permissions that our context (called cinder_sys_admin in this case) has. These specific permissions in the example above should correlate with those that you’d get if you ran a command with sudo. There was a bit of back and forth about what permissions to use and how many contexts to have while we were implementing privsep in OpenStack Nova, but we’ll discuss those in a later post.

Next we need the code that actually does the privileged thing:

diff --git a/cinder/privsep/cgroup.py b/cinder/privsep/cgroup.py
new file mode 100644
index 0000000..15d47e0
--- /dev/null
+++ b/cinder/privsep/cgroup.py
@@ -0,0 +1,35 @@
+# Copyright 2016 Red Hat, Inc
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+"""
+Helpers for cgroup related routines.
+"""
+
+from oslo_concurrency import processutils
+
+import cinder.privsep
+
+
+@cinder.privsep.sys_admin_pctxt.entrypoint
+def cgroup_create(name):
+    processutils.execute('cgcreate', '-g', 'blkio:%s' % name)
+
+
+@cinder.privsep.sys_admin_pctxt.entrypoint
+def cgroup_limit(name, rw, dev, bps):
+    processutils.execute('cgset', '-r',
+                         'blkio.throttle.%s_bps_device=%s %d' % (rw, dev, bps),
+                         name)

Here we just provide two methods which manipulate cgroups. That allows us to make this change to the throttling implementation in Cinder:

diff --git a/cinder/volume/throttling.py b/cinder/volume/throttling.py
index 39cbbeb..3c6ddaa 100644
--- a/cinder/volume/throttling.py
+++ b/cinder/volume/throttling.py
@@ -22,6 +22,7 @@ from oslo_concurrency import processutils
 from oslo_log import log as logging
 
 from cinder import exception
+import cinder.privsep.cgroup
 from cinder import utils
 
 
@@ -65,8 +66,7 @@ class BlkioCgroup(Throttle):
         self.dstdevs = {}
 
         try:
-            utils.execute('cgcreate', '-g', 'blkio:%s' % self.cgroup,
-                          run_as_root=True)
+            cinder.privsep.cgroup.cgroup_create(self.cgroup)
         except processutils.ProcessExecutionError:
             LOG.error('Failed to create blkio cgroup \'%(name)s\'.',
                       {'name': cgroup_name})
@@ -81,8 +81,7 @@ class BlkioCgroup(Throttle):
 
     def _limit_bps(self, rw, dev, bps):
         try:
-            utils.execute('cgset', '-r', 'blkio.throttle.%s_bps_device=%s %d'
-                          % (rw, dev, bps), self.cgroup, run_as_root=True)
+            cinder.privsep.cgroup.cgroup_limit(self.cgroup, rw, dev, bps)
         except processutils.ProcessExecutionError:
             LOG.warning('Failed to setup blkio cgroup to throttle the '
                         'device \'%(device)s\'.', {'device': dev})

These last two snippets should be familiar from the previous post about privsep in this series. Finally, for the actual implementation of privsep, we need to make sure that rootwrap has permissions to start the privsep helper daemon. You’ll get one daemon per unique security context, but in this case we only have one of those so we’ll only need one rootwrap entry. Note that I also remove the previous rootwrap entries for cgcreate and cgset while I’m here.

diff --git a/etc/cinder/rootwrap.d/volume.filters b/etc/cinder/rootwrap.d/volume.filters
index abc1517..d2d1720 100644
--- a/etc/cinder/rootwrap.d/volume.filters
+++ b/etc/cinder/rootwrap.d/volume.filters
@@ -43,6 +43,10 @@ lvdisplay4: EnvFilter, env, root, LC_ALL=C, LVM_SYSTEM_DIR=, LVM_SUPPRESS_FD_WAR
 # This line ties the superuser privs with the config files, context name,
 # and (implicitly) the actual python code invoked.
 privsep-rootwrap: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, os_brick.privileged.default, --privsep_sock_path, /tmp/.*
+
+# Privsep calls within cinder itself
+privsep-rootwrap-sys_admin: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, cinder.privsep.sys_admin_pctxt, --privsep_sock_path, /tmp/.*
+
 # The following and any cinder/brick/* entries should all be obsoleted
 # by privsep, and may be removed once the os-brick version requirement
 # is updated appropriately.
@@ -93,8 +97,6 @@ ionice_1: ChainingRegExpFilter, ionice, root, ionice, -c[0-3], -n[0-7]
 ionice_2: ChainingRegExpFilter, ionice, root, ionice, -c[0-3]
 
 # cinder/volume/utils.py: setup_blkio_cgroup()
-cgcreate: CommandFilter, cgcreate, root
-cgset: CommandFilter, cgset, root
 cgexec: ChainingRegExpFilter, cgexec, root, cgexec, -g, blkio:\S+
 
 # cinder/volume/driver.py

And because we’re not bad people we’d of course write a release note about the changes we’ve made…

diff --git a/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml b/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml
new file mode 100644
index 0000000..e78fb00
--- /dev/null
+++ b/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml
@@ -0,0 +1,14 @@
+---
+security:
+  - |
+    Privsep transitions. Cinder is transitioning from using the older style
+    rootwrap privilege escalation path to the new style Oslo privsep path.
+    This should improve performance and security of Nova in the long term.
+  - |
+    privsep daemons are now started by Cinder when required. These daemons can
+    be started via rootwrap if required. rootwrap configs therefore need to
+    be updated to include new privsep daemon invocations.
+upgrade:
+  - |
+    The following commands are no longer required to be listed in your rootwrap
+    configuration: cgcreate; and cgset.

This code will now work. However, we’ve left out one critical piece of the puzzle — testing. If this code was uploaded like this, it would fail in the OpenStack gate, even though it probably passed on your desktop. This is because many of the gate jobs are setup in such a way that they can’t run rootwrapped commands, which in this case means that the rootwrap daemon won’t be able to start.

I found this quite confusing in Nova when I was implementing things and had missed a step. So I wrote a simple test fixture that warns me when I am being silly:

diff --git a/cinder/test.py b/cinder/test.py
index c8c9e6c..a49cedb 100644
--- a/cinder/test.py
+++ b/cinder/test.py
@@ -302,6 +302,9 @@ class TestCase(testtools.TestCase):
         tpool.killall()
         tpool._nthreads = 20
 
+        # NOTE(mikal): make sure we don't load a privsep helper accidentally
+        self.useFixture(cinder_fixtures.PrivsepNoHelperFixture())
+
     def _restore_obj_registry(self):
         objects_base.CinderObjectRegistry._registry._obj_classes = \
             self._base_test_obj_backup
diff --git a/cinder/tests/fixtures.py b/cinder/tests/fixtures.py
index 6e275a7..79e0b73 100644
--- a/cinder/tests/fixtures.py
+++ b/cinder/tests/fixtures.py
@@ -1,4 +1,6 @@
 # Copyright 2016 IBM Corp.
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
 #
 #    Licensed under the Apache License, Version 2.0 (the "License"); you may
 #    not use this file except in compliance with the License. You may obtain
@@ -21,6 +23,7 @@ import os
 import warnings
 
 import fixtures
+from oslo_privsep import daemon as privsep_daemon
 
 _TRUE_VALUES = ('True', 'true', '1', 'yes')
 
@@ -131,3 +134,29 @@ class WarningsFixture(fixtures.Fixture):
                     ' This key is deprecated. Please update your policy '
                     'file to use the standard policy values.')
         self.addCleanup(warnings.resetwarnings)
+
+
+class UnHelperfulClientChannel(privsep_daemon._ClientChannel):
+    def __init__(self, context):
+        raise Exception('You have attempted to start a privsep helper. '
+                        'This is not allowed in the gate, and '
+                        'indicates a failure to have mocked your tests.')
+
+
+class PrivsepNoHelperFixture(fixtures.Fixture):
+    """A fixture to catch failures to mock privsep's rootwrap helper.
+
+    If you fail to mock away a privsep'd method in a unit test, then
+    you may well end up accidentally running the privsep rootwrap
+    helper. This will fail in the gate, but it fails in a way which
+    doesn't identify which test is missing a mock. Instead, we
+    raise an exception so that you at least know where you've missed
+    something.
+    """
+
+    def setUp(self):
+        super(PrivsepNoHelperFixture, self).setUp()
+
+        self.useFixture(fixtures.MonkeyPatch(
+            'oslo_privsep.daemon.RootwrapClientChannel',
+            UnHelperfulClientChannel))

Now if you fail to mock a privsep’ed call, then you’ll get something like this:

==============================
Failed 1 tests - output below:
==============================

cinder.tests.unit.test_volume_throttling.ThrottleTestCase.test_BlkioCgroup
--------------------------------------------------------------------------

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1305, in patched
        return func(*args, **keywargs)
      File "cinder/tests/unit/test_volume_throttling.py", line 66, in test_BlkioCgroup
        throttle = throttling.BlkioCgroup(1024, 'fake_group')
      File "cinder/volume/throttling.py", line 69, in __init__
        cinder.privsep.cgroup.cgroup_create(self.cgroup)
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 206, in _wrap
        self.start()
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 217, in start
        channel = daemon.RootwrapClientChannel(context=self)
      File "cinder/tests/fixtures.py", line 141, in __init__
        raise Exception('You have attempted to start a privsep helper. '
    Exception: You have attempted to start a privsep helper. This is not allowed in the gate, and indicates a failure to have mocked your tests.

The last bit is the most important. The fixture we installed has detected that you’ve failed to mock a privsep’ed call and has informed you. So, the last step of all is fixing our tests. This normally involves changing where we mock, as many unit tests just lazily mock the execute() call. I try to be more granular than that. Here’s what that looked like in this throttling case:

diff --git a/cinder/tests/unit/test_volume_throttling.py b/cinder/tests/unit/test_volume_throttling.py
index 82e2645..edbc2d9 100644
--- a/cinder/tests/unit/test_volume_throttling.py
+++ b/cinder/tests/unit/test_volume_throttling.py
@@ -29,7 +29,9 @@ class ThrottleTestCase(test.TestCase):
             self.assertEqual([], cmd['prefix'])
 
     @mock.patch.object(utils, 'get_blkdev_major_minor')
-    def test_BlkioCgroup(self, mock_major_minor):
+    @mock.patch('cinder.privsep.cgroup.cgroup_create')
+    @mock.patch('cinder.privsep.cgroup.cgroup_limit')
+    def test_BlkioCgroup(self, mock_limit, mock_create, mock_major_minor):
 
         def fake_get_blkdev_major_minor(path):
             return {'src_volume1': "253:0", 'dst_volume1': "253:1",
@@ -37,38 +39,25 @@ class ThrottleTestCase(test.TestCase):
 
         mock_major_minor.side_effect = fake_get_blkdev_major_minor
 
-        self.exec_cnt = 0
+        throttle = throttling.BlkioCgroup(1024, 'fake_group')
+        with throttle.subcommand('src_volume1', 'dst_volume1') as cmd:
+            self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
+                             cmd['prefix'])
 
-        def fake_execute(*cmd, **kwargs):
-            cmd_set = ['cgset', '-r',
-                       'blkio.throttle.%s_bps_device=%s %d', 'fake_group']
-            set_order = [None,
-                         ('read', '253:0', 1024),
-                         ('write', '253:1', 1024),
-                         # a nested job starts; bps limit are set to the half
-                         ('read', '253:0', 512),
-                         ('read', '253:2', 512),
-                         ('write', '253:1', 512),
-                         ('write', '253:3', 512),
-                         # a nested job ends; bps limit is resumed
-                         ('read', '253:0', 1024),
-                         ('write', '253:1', 1024)]
-
-            if set_order[self.exec_cnt] is None:
-                self.assertEqual(('cgcreate', '-g', 'blkio:fake_group'), cmd)
-            else:
-                cmd_set[2] %= set_order[self.exec_cnt]
-                self.assertEqual(tuple(cmd_set), cmd)
-
-            self.exec_cnt += 1
-
-        with mock.patch.object(utils, 'execute', side_effect=fake_execute):
-            throttle = throttling.BlkioCgroup(1024, 'fake_group')
-            with throttle.subcommand('src_volume1', 'dst_volume1') as cmd:
+            # a nested job
+            with throttle.subcommand('src_volume2', 'dst_volume2') as cmd:
                 self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
                                  cmd['prefix'])
 
-                # a nested job
-                with throttle.subcommand('src_volume2', 'dst_volume2') as cmd:
-                    self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
-                                     cmd['prefix'])
+        mock_create.assert_has_calls([mock.call('fake_group')])
+        mock_limit.assert_has_calls([
+            mock.call('fake_group', 'read', '253:0', 1024),
+            mock.call('fake_group', 'write', '253:1', 1024),
+            # a nested job starts; bps limit are set to the half
+            mock.call('fake_group', 'read', '253:0', 512),
+            mock.call('fake_group', 'read', '253:2', 512),
+            mock.call('fake_group', 'write', '253:1', 512),
+            mock.call('fake_group', 'write', '253:3', 512),
+            # a nested job ends; bps limit is resumed
+            mock.call('fake_group', 'read', '253:0', 1024),
+            mock.call('fake_group', 'write', '253:1', 1024)])

…and we’re done. This post has been pretty long, so I am going to stop here for now. However, hopefully I’ve demonstrated that it’s actually not that hard to implement privsep in a project, even with some slight testing polish.



May 06, 2018

FreeDV 700D and SSB Comparison

Mark, VK5QI has just performed a SSB versus FreeDV 700D comparison between his home in Adelaide and the Manly Warringah Radio Society WebSDR in Sydney, about 1200km away. The band was 40m, and the channel very poor, with some slow fading. Mark used SVN revision 3581, built himself on Ubuntu, with an interleaver setting (Tools-Options menu) of 1 frame. Transmit power for SSB and FreeDV 700D was about the same.

I’m still finishing off FreeDV 700D integration and tuning the mode – but this is a very encouraging start. Thanks Mark!

May 04, 2018

FreeDV 1600 Sample Clock Offset Bug

So I’m busy integrating FreeDV 700D into the FreeDV GUI program. The 700D modem works on larger frames (160ms) than the previous modes (e.g. 20ms for FreeDV 1600) so I need to adjust FIFO sizes.

As a reference I tried FreeDV 1600 between two laptops (one tx, one rx) and noticed it was occasionally losing frame sync, generating bit errors, and producing the occasional bloop in the audio. After a little head scratching I discovered a bug in the FreeDV 1600 FDMDV modem! Boy, is my face red.

The FDMDV modem was struggling with sample clock differences between the mod and demod. I think the bug was introduced when I did some (too) clever refactoring to reduce FDMDV memory consumption while developing the SM1000 back in 2014!

Fortunately I have a trail of unit test programs, leading back from FreeDV GUI, to the FreeDV API (freedv_tx and freedv_rx), then individual unit tests for each modem (fdmdv_mod/fdmdv_demod), and finally Octave simulation code (fdmdv.m, fdmdv_demod.m and friends) for the modem.

Octave (or an equivalent vector based scripting language like Python/numpy) is much easier to work with than C for complex DSP problems. So after a little work I reproduced the problem using the Octave version of the FDMDV modem – bit errors happening every time there was a timing jump.

The modulator sends parallel streams of symbols at about 50 baud. These symbols are output at a sample rate of 8000 Hz. Part of the demodulator’s job is to estimate the best place to sample each received modem symbol; this is called timing estimation. When the tx and rx are separate, the two sample clocks are slightly different – your 8000 Hz clock will be a few Hz different to mine. This means the timing estimate is a moving target, and occasionally we need to compensate by taking a few more or a few fewer samples from the 8000 Hz sample stream.

In the plot below the Octave demodulator was fed with a signal that is transmitted at 8010 Hz instead of the nominal 8000 Hz. So the tx is sampling faster than the rx. The y axis is the timing estimate in samples, x axis time in seconds. For FreeDV 1600 there are 160 samples per symbol (50 baud at 8 kHz). The timing estimate at the rx drifts forwards until we hit a threshold, set at +/- 40 samples (quarter of a symbol). To avoid the timing estimate drifting too far, we take a one-off larger block of samples from the input, the timing takes a step backwards, then starts drifting up again.
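
To put some rough numbers on that drift (my own back-of-the-envelope sketch, not the modem code): with a 10 Hz sample clock offset the timing estimate moves about 0.2 samples per symbol, so it hits the +/- 40 sample threshold every 4 seconds or so:

fs_tx, fs_rx = 8010.0, 8000.0              # tx and rx sample rates (Hz)
samples_per_symbol = 160                   # 50 baud at 8 kHz
threshold = 40                             # quarter of a symbol

drift_per_sample = (fs_tx - fs_rx) / fs_rx                 # ~0.00125
drift_per_symbol = drift_per_sample * samples_per_symbol   # ~0.2 samples/symbol
symbols_to_threshold = threshold / drift_per_symbol        # ~200 symbols
seconds_to_threshold = symbols_to_threshold / 50.0         # ~4 seconds

print(drift_per_symbol, seconds_to_threshold)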

Back to the bug. After some head scratching, messing with buffer shifts, and rolling back phases I eventually fixed the problem in the Octave code. Next step is to port the code to C. I used my test framework that automatically compares a bunch of vectors (states) in the Octave code to the equivalent C code:

octave:8> system("../build_linux/unittest/tfdmdv")
sizeof FDMDV states: 40032 bytes
ans = 0
octave:9> tfdmdv
tx_bits..................: OK
tx_symbols...............: OK
tx_fdm...................: OK
pilot_lut................: OK
pilot_coeff..............: OK
pilot lpf1...............: OK
pilot lpf2...............: OK
S1.......................: OK
S2.......................: OK
foff_coarse..............: OK
foff_fine................: OK
foff.....................: OK
rxdec filter.............: OK
rx filt..................: OK
env......................: OK
rx_timing................: OK
rx_symbols...............: OK
rx bits..................: OK
sync bit.................: OK
sync.....................: OK
nin......................: OK
sig_est..................: OK
noise_est................: OK

passes: 46 fails: 0

Great! This system really lets me move fast once the Octave code is written and tested. Next step is to test the C version of the FDMDV modem using the command line utilities. Note how I used sox to insert a sample rate offset by changing the sample rate of the raw sample stream:

build_linux/src$ ./fdmdv_get_test_bits - 30000 | ./fdmdv_mod - - | sox -t raw -r 8000 -s -2 - -t raw -r 7990 - | ./fdmdv_demod - - 14 demod_dump.txt | ./fdmdv_put_test_bits -
-----------------+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
bits 29568  errors 0  BER 0.0000

Zero errors, despite 10Hz sample clock offset. Yayyyyy. The C demodulator outputs a bunch of vectors that can be plotted with an Octave helper program:

octave:6> fdmdv_demod_c("../build_linux/src/demod_dump.txt",28000)

The FDMDV modem is integrated with Codec 2 in the FreeDV API. This can be tested using the freedv_tx/freedv_rx programs. For convenience, I generated some 60 second test files at different sample rates. Here is how I test using the freedv_rx program:

./freedv_rx 1600 ~/Desktop/ve9qrp_1600_8010.raw - | aplay -f S16

The output audio sounds good, no bloops, and by examining the freedv_rx_log.txt file I can see the demodulator didn’t lose sync. Cool.

Here is a table of the samples I used for testing:

  • No clock offset
  • Simulates Tx sample rate 10Hz slower than Rx
  • Simulates Tx sampling 10Hz faster than Rx

Finally, the FreeDV API is linked with the FreeDV GUI program. Here is a video of me testing different sample clock offsets using the raw files in the table above. Note there is no audio in this video as my screen recorder fights with FreeDV for use of sound cards. However the decoded FreeDV audio should be uninterrupted, there should be no re-syncs, and zero bit errors:

The fix has been checked into codec2-dev SVN rev 3556, and will make its way into FreeDV GUI 1.3, to be released in late May 2018.

Reading Further

FDMDV modem
README_fdmdv.txt
Steve Ports an OFDM modem from Octave to C, some more on the Octave/C automated test framework and porting complex DSP algorithms.
Testing a FDMDV Modem. Early blog post on FDMDV modem with some more discussion on sample clock offsets
Timing Estimation for PSK modems, talks a little about how we generate a timing estimate

Audiobooks – April 2018

Viking Britain: An Exploration by Thomas Williams

Pretty straightforward. Tells us the up-to-date research (no Winged Helmets 😢) and is easy to follow (easier if you have a map of the UK). 7/10

Contact by Carl Sagan

I’d forgotten how different it was from the movie in places. A few extra characters and plot twists. Many more details and explanations of the science. 8/10

The Path Between the Seas: The Creation of the Panama Canal, 1870-1914 by David McCullough

My monthly McCullough book. Great as usual. Good picture of the project and people. 8/10

Winter World: The Ingenuity of Animal Survival by Bernd Heinrich

As per the title this spends much of the time on [varied strategies for] Winter adaptation vs Summer World’s more general coverage. A great listen 8/10

A Man on the Moon: The Voyages of the Apollo Astronauts by Andrew Chaikin

Great overview of the Apollo missions. The Author interviewed almost all the astronauts. Lots of details about the missions. Excellent 9/10

Walkaway by Cory Doctorow

Near future Sci Fi. Similar feel to some of his other books like Makers. Switches between characters & audiobook switches narrators to match. Fastforward the Sex Scenes 💤. Mostly works 7/10

The Neanderthals Rediscovered: How Modern Science Is Rewriting Their Story by Michael A. Morse

Pretty much what the subtitle advertises. Covers discoveries from the last 20 years which make other books out of date. Tries to be Neanderthals-only. 7/10

The Great Quake: How the Biggest Earthquake in North America Changed Our Understanding of the Planet by Henry Fountain

Straightforward story of the 1964 Alaska Earthquake. Follows half a dozen characters & concentrates on worst damaged areas. 7/10


May 03, 2018

How to make a privileged call with oslo privsep


Once you’ve added oslo privsep to your project, how do you make a privileged call? It’s actually really easy to do. In this post I will assume you already have privsep running for your project, which at the time of writing limits you to OpenStack Nova in the OpenStack universe.

The first step is to write the code that will run with escalated permissions. In Nova, we have chosen to only have one set of escalated permissions, so its easy to decide which set to use. I’ll document how we reached that decision and alternative approaches in another post.

In Nova, all code that runs with escalated permissions is in the nova/privsep directory, which is a pattern I’d like to see repeated in other projects. This is partially because privsep maintains a whitelist of methods that are allowed to be run this way, but it’s also because it makes it very obvious to callers that the code being called is special in some way.

Let’s assume that we’re going to add a simple method which manipulates the filesystem of a hypervisor node as root. We’d write a method like this in a file inside nova/privsep:

import nova.privsep

...

@nova.privsep.sys_admin_pctxt.entrypoint
def update_motd(message):
    with open('/etc/motd', 'w') as f:
        f.write(message)

This method updates /etc/motd, which is the text which is displayed when a user interactively logs into the hypervisor node. “motd” stands for “message of the day” by the way. Here we just pass a new message of the day which clobbers the old value in the file.

The important thing is that entrypoint decorator at the start of the method. That’s how privsep decides to run this method with escalated permissions, and decides what permissions to use. In Nova at the moment we only have one set of escalated permissions, which we called sys_admin_pctxt because we’re artists. I’ll discuss in a later post how we came to that decision and what the other options were.

We can then call this method from anywhere else in Nova like this:

import nova.privsep.motd

...

nova.privsep.motd.update_motd('This node is currently idle')

Note that we do imports for privsep code slightly differently. We always import the entire path, instead of creating a shortcut to just the module we’re using. In other words, we don’t do:

from nova.privsep import motd

...

motd.update_motd('This node is a banana')

The above code would work, but is frowned on because it is less obvious here that the update_motd() method runs with escalated permissions — you’d have to go and read the imports to tell that.

That’s really all there is to it. The only other thing to mention is that there is a bit of a wart — code with escalated permissions can only use Nova code that is within the privsep directory. That’s been a problem when we’ve wanted to use a utility method from outside that path inside escalated code. The restriction happens for good reasons, so instead what we do in this case is move the utility into the privsep directory and fix up all the other callers to call the new location. It’s not perfect, but it’s what we have for now.

There are some simple review criteria that should be used to assess a patch which implements new code that uses privsep in OpenStack Nova. They are:

  • Don’t use imports which create aliases. Use the “import nova.privsep.motd” form instead.
  • Keep methods with escalated permissions as simple as possible. Remember that these things are dangerous and should be as easy to understand as possible.
  • Calculate paths to manipulate inside the escalated method — so, don’t let someone pass in a full path and the contents to write to that file as root, instead let them pass in the name of the network interface or whatever that you are manipulating and then calculate the path from there. That will make it harder for callers to use your code to clobber random files on the system.
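
To illustrate that last point, here is a contrived sketch (my own, not actual Nova code) of the difference between letting the caller pass an arbitrary path and calculating the path inside the escalated method:

import nova.privsep


# Frowned upon: the caller controls the full path, so this helper could be
# used to clobber any file on the system as root.
@nova.privsep.sys_admin_pctxt.entrypoint
def write_file(path, contents):
    with open(path, 'w') as f:
        f.write(contents)


# Better: the caller only names the interface being manipulated, and the
# escalated code calculates the path itself.
@nova.privsep.sys_admin_pctxt.entrypoint
def update_interface_mtu(interface, mtu):
    with open('/sys/class/net/%s/mtu' % interface, 'w') as f:
        f.write(str(mtu))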

Adding new code with escalated permissions is really easy in Nova now, and much more secure and faster than it was when we only had sudo and root command lines to do these sorts of things. Let me know if you have any questions.



April 30, 2018

FreeDV 700D Part 3

After a 1 year hiatus, I am back into FreeDV 700D development, working to get the OFDM modem, LDPC FEC, and interleaver algorithms developed last year into real time operation. The aim is to get improved performance on HF channels over FreeDV 700C.

I’ve been doing lots of refactoring, algorithm development, fixing bugs, tuning, and building up layers of C code so we can get 700D on the air.

Steve ported the OFDM modem to C – thanks Steve!

I’m building up the software in the form of command line utilities, some notes, examples and specifications in Codec 2 README_ofdm.txt.

Last week I stayed at the shack of Chris, VK5CP, in a quiet rural location at Younghusband on the river Murray. As well as testing my Solar Boat, Mark (VK5QI) helped me test FreeDV 700D. This was the first time the C code software has been tested over a real HF radio channel.

We transmitted signals from Younghusband, and received them at a remote SDR in Sydney (about 1300km away), downloading wave files of the received signal for off-line analysis.

After some tweaking, it worked! The frequency offset was a bit off, so I used the cohpsk_ch utility to shift it within the +/- 25Hz acquisition range of the FreeDV 700D demodulator. I also found some level sensitivity issues with the LDPC decoder. After implementing a form of AGC, the number of bit errors dropped by a factor of 10.

The channel had nasty fading of around 1Hz, here is a video of the “sample #32” spectrum bouncing around. This rapid fading is a huge challenge for modems. Note also the spurious birdie off to the left, and the effect of receiver AGC – the noise level rises during fades.

Here is a spectrogram of the same sample 33. The x axis is time in seconds. It’s like a “waterfall” SDR plot on its side. Note the heavy “barber pole” fading, which corresponds to the fades sweeping across the spectrum in the video above.

Here is the smoothed SNR estimate. The SNR is a moving target for real world HF channels; here the SNR moves between 2 and 6dB.

FreeDV 700D was designed to work down to 2dB on HF fading channels so pat on the back for me! Hundreds of hours of careful development and testing meant this thing actually worked when it went on air….

Sample 32 is a longer file that contains test frames instead of coded voice. The QPSK scatter diagram is a messy cross, typical of fading channels, as the amplitude of the signal moves in and out:

The LDPC FEC does a good job. Here are plots of the uncoded (raw) bit errors, and the bit errors after LDPC decoding, with the SNR estimates below:

Here are some wave and raw (headerless) audio files. The off air audio is error free, albeit at the low quality of Codec 2 at 700 bits/s. The goal of this work is to get intelligible speech through HF channels at low SNRs. We’ll look at improving the speech quality as a future step.

Still, error free digital voice on a heavily faded HF channel at 2dB SNR is pretty cool.

See below for how to use the last two raw file samples.

  • Sample 33 off air modem signal
  • Sample 33 decoded voice
  • Sample 32 off air test frames raw file
  • Sample 33 off air voice raw file

SNR estimation

After I sampled the files I had a problem – I needed to know the SNR. You see in my development I use simulated channels where I know exactly what the SNR is. I need to compare the performance of the real world, off-air signals to my expected results at a given SNR.

Unfortunately SNR on a fading channel is a moving target. In simulation I measure the total power and noise over the entire run, and the simulated fading channel is consistent. Real world channels jump all over the place as the ionosphere bounces around. Oh well, knowing we are in the ball park is probably good enough. We just need to know if FreeDV 700D is hanging onto real world HF channels at roughly the SNRs it was designed for.
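
For what it’s worth, the simulation definition of SNR above is just total signal power over total noise power for the whole run, something like this (a sketch of mine, not the actual Octave code):

import numpy as np

def snr_db(signal, noise):
    # Total signal power over total noise power, measured across the run.
    return 10 * np.log10(np.sum(np.abs(signal) ** 2) /
                         np.sum(np.abs(noise) ** 2))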

I came up with a way of measuring SNR, and tested it with a range of simulated AWGN (just noise) and fading channels. The fading bandwidth is the speed at which the fading channel evolves. Slow fading channels might change at 0.2Hz, faster channels, like samples #32 and #33, at about 1Hz.

The blue line is the ideal, and on AWGN and slowly fading channels my SNR estimator does OK. It reads a dB low as the fading bandwidth increases to 1Hz. We are interested in the -2 to 4dB SNR range.
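
For what it’s worth, the basic idea behind any such estimator is comparing signal-plus-noise power against noise power. Here is a rough Python sketch of that relationship – just an illustration of the concept, not the actual FreeDV 700D estimator, which lives in the Octave/C sources:

    import math

    def snr_db(signal_plus_noise_power, noise_power):
        """Estimate SNR in dB from total (signal+noise) power and noise-only power."""
        signal_power = max(signal_plus_noise_power - noise_power, 1e-12)
        return 10 * math.log10(signal_power / noise_power)

    # e.g. a total power of 5.0 and noise power of 2.0 gives an SNR of about 1.8dB
    print("%.1f dB" % snr_db(5.0, 2.0))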

Command Lines

With the samples in the table above and codec2-dev SVN rev 3465, you can repeat some of my decodes using Octave and C:

octave:42> ofdm_ldpc_rx("32.raw")
EsNo fixed at 3.000000 - need to est from channel
Coded BER: 0.0010 Tbits: 54992 Terrs:    55
Codec PER: 0.0097 Tpkts:  1964 Terrs:    19
Raw BER..: 0.0275 Tbits: 109984 Terrs:  3021

david@penetrator:~/codec2-dev/build_linux/src$ ./ofdm_demod ../../octave/32.raw /dev/null -t --ldpc
Warning EsNo: 3.000000 hard coded
BER......: 0.0246 Tbits: 116620 Terrs:  2866
Coded BER: 0.0009 Tbits: 54880 Terrs:    47

build_linux/src$ ./freedv_rx 700D ../../octave/32.raw /dev/null --testframes
BER......: 0.0246 Tbits: 116620 Terrs:  2866
Coded BER: 0.0009 Tbits: 54880 Terrs:    47

build_linux/src$ ./freedv_rx 700D ../../octave/33.raw  - | aplay -f S16

Next Steps

I’m working steadily towards integrating FreeDV 700D into the FreeDV GUI program so anyone can try it. This will be released in May 2018.

Reading Further

Towards FreeDV 700D
FreeDV 700D – First Over The Air Tests
Steve Ports an OFDM modem from Octave to C
Codec 2 README_ofdm.txt

Be Gonski Ready!

Gonski is in the news again with the release of the Gonski 2.0 report. This is most likely to impact on schools and teachers in a range of ways from funding to curriculum. Here at OpenSTEM we can help you to be ahead of the game by using our materials, which are already Gonski-ready! The […]

April 29, 2018

Solar Boat

Two years ago when I bought my Hartley TS16 sail boat I dreamed of converting it to solar power. In January I installed a Torqueedo electric outboard and a 24V, 100AH Lithium battery pack. That’s working really well. Next step was to work out a way to mount some surplus 200W solar panels on the boat. The idea is to (temporarily) detach the mast, and use the boat on the river Murray, a major river that passes within 100km of where I live in Adelaide, South Australia.

Over the last few weeks I worked with my friend Gary (VK5FGRY) to mount solar panels on the TS16. Gary designed and fabricated some legs from 40mm square aluminium:

With a matching rubber foot on each leg, the panels sit firmly on the gel coat of the boat, and are held down by ropes or octopus straps.

The panels’ maximum power point is at 28.5V (and 7.5A) which is close to the battery pack under charge (3.3*8 = 26.4V) so I decided to try a direct DC connection – no inverter or charger. I ran some tests in the back yard: each panel was delivering about 4A into the battery pack, and two in parallel delivered about 8A. I didn’t know solar panels could be connected in parallel, but happily this means I can keep my direct DC connection. Horizontal panels cost a few amps – a good example of why solar panels are usually angled at the sun. However the azimuth of the boat will be always changing so horizontal is the only choice. The panels are very sensitive to shadowing; a hand placed on a panel, or a small shadow, is enough to drop the current to 0A. OK, so now I had a figure for panel output – about 4A from each panel.

This didn’t look promising. Based on my sea voyages with the Torqueedo, I estimated I would need 800W (about 30A) to maintain my target houseboat speed of 4 knots (7 km/hr); that’s 8 panels which won’t fit on my boat! However the current draw on the river might be different without tides, and waves, and I wasn’t sure exactly how many AH I would get over a day from the sun. Would trees on the river bank shadow the panels?

So it was off to Younghusband on the Murray, where our friend Chris (VK5CP) was hosting a bunch of Ham Radio guys for an extended Anzac day/holiday weekend. It’s Autumn here, with generally sunny days of about 23C. The sun is up from 6:30am to 6pm.

Turns out that even with two panels – the solar boat was really practical! Over three days we made three trips of 2 hours each, at speeds of 3 to 4 knots, using only the panels for charging. Each day I took friends out, and they really loved it – so quiet and peaceful, and the river scenery is really nice.

After an afternoon cruise I would park the boat on the South side of the river to catch the morning sun, which in Autumn appears to the North here in Australia. I measured the panel current as 2A at 7am, 6A at 9am, 9A at 10am, and much to my surprise the pack was charged by 11am! In fact I had to disconnect the panels as the cell voltage was pushing over 4V.

On a typical run upriver we measured 700W = 4kt, 300W = 3.1kt, 150W = 2.5kt, and 8A into the panels in full sun. Panel current dropped to 2A with cloud which was a nasty surprise. We experienced no shadowing issues from trees. The best current we saw at about noon was 10A. We could boost the current by 2A by putting three guys on one side of the boat and tipping the entire boat (and solar panels) towards the sun!

Even partial input from solar can have a big impact. Let’s say at 4 knots (30A) I can drive for 2 hours using 60% of my 100AH pack. If I back off the speed a little, so I’m drawing 20A, then 10A from the panels will extend my driving time to 6 hours.
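
As a sanity check on that arithmetic, here is a tiny Python sketch of the battery maths (the pack size, 60% usable figure and the currents are just the numbers assumed above):

    # Rough runtime estimate for the solar boat battery pack.
    # Assumptions from the text: 100AH pack, use at most 60% of it,
    # motor draw and solar contribution both in amps.
    def runtime_hours(pack_ah=100, usable_fraction=0.6, motor_amps=20, solar_amps=10):
        net_amps = motor_amps - solar_amps
        return (pack_ah * usable_fraction) / net_amps

    print(runtime_hours(motor_amps=30, solar_amps=0))   # ~2 hours at 4 knots, no solar
    print(runtime_hours(motor_amps=20, solar_amps=10))  # ~6 hours backed off, panels helping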

I slept on the boat, and one night I found a paddle steamer (the Murray Princess) parked across the river from me, all lit up with fairy lights:

On our final adventure, my friend Darin (VK5IX) and I were entering Lake Carlet, when suddenly the prop hit something very hard, “crack crack crack”. My poor prop shaft was bent and my propeller was wobbling from side to side:

We gently e-motored back and actually recorded our best results – 3 knots on 300W, 10A from the panels, 10A to the motor.

With 4 panels I would have a very practical solar boat, capable of 4-6 hours cruising a day just on solar power. The 2 extra panels could be mounted as a canopy over the rear of the boat. I have an idea about an extended solar adventure of several days, for example 150km from Younghusband to Goolwa.

Reading Further

Engage the Silent Drive
Lithium Cell Amp Hour Tester and Electric Sailing

April 28, 2018

PoE termination board

For my next big project I'm planning on making it run using power over ethernet. Back in March I designed a quick circuit using the TI TPS2376-H PoE termination chip, and an LMR16020 switching regulator to drop the ~48v coming in down to 5v. There's also a second stage low-noise linear regulator (ST LDL1117S33R) to further drop it down to 3.3v, but as it turns out the main chip I'm using does its own 5->3.3v conversion already.

Because I was lazy, and the pricing was reasonable I got these boards manufactured by pcb.ng who I'd used for the USB-C termination boards I did a while back.

Here's the board running a Raspberry Pi 3B+; as it turns out I got lucky and my board is set up for the same input as the 3B+ supplies.



One really big warning: this is a non-isolated supply, which, in general, is a bad idea for PoE. For my specific use case there'll be no exposed connectors or metal, so this should be safe, but if you want to use PoE in general I'd suggest using some of the isolated converters that are available with integrated PoE termination.

For this series I'm also going to try to make some notes on the mistakes I've made with these boards to help others. For this board:
  • I failed to add any test pins; given this was the first try I really should have. Being able to inject power just before the switching converter was helpful while debugging, but I had to solder wires to the input cap to do that.
  • Similarly, I should have had a 5v output pin; for now I've just been shorting the two diodes I had near the output which were intended to let me switch input power between two feeds.
  • The last, and the only actual problem with the circuit, was that when selecting which exact parts to use I optimised by choosing the same diode for both input protection and switching. This was a mistake, as the switcher needed a Schottky diode, and one with better ratings in other ways than the input diode. With the incorrect diode the board actually worked fine under low loads, but would quickly go into thermal shutdown if asked to supply more than about 1W. With the diode swapped to a correctly rated one it now supplies 10W just fine.
  • While debugging the previous issue I also noticed that the thermal pads on both main chips weren't well connected through. It seems the combination of via-in-thermal-pad (even tented), along with Kicad's normal reduction in paste in those large pads, plus my manufacturer's use of a fairly thin application of paste all contributed to this. Next time I'll probably avoid via-in-pad.


Coming soon will be a post about the GPS board, but I'm still testing bits of that board out, plus waiting for some missing parts (somehow not only did I fail to order 10k resistors, I didn't already have some in stock).

Fedora on ODROID-HC1 mini NAS (ARMv7)

EDIT: I am having a problem where the Fedora kernel does not always detect the disk drive (whether cold, warm or hotplugged). I’ve built upstream 4.16 kernel and it works perfectly every time. It doesn’t seem to be uas related, disabling that on the usb-storage module doesn’t make any difference. I’m looking into it…

Hardkernel is a Korean company that makes various embedded ARM based systems, which it calls ODROID.

One of their products is the ODROID-HC1, a mini NAS designed to take a single 2.5″ SATA drive (HC stands for “Home Cloud”) which comes with 2GB RAM and a Gigabit Ethernet port. There is also a 3.5″ model called the HC2. Both of these are based on the ODROID-XU4, which itself is based on the previous iteration ODROID-XU3. All of these are based on the Samsung Exynos5422 SOC and should work with the following steps.

The Exynos SOC needs proprietary first stage bootloaders which are embedded in the first 1.4MB or so at the beginning of the SD card in order to load U-Boot. As these binary blobs are not re-distributable, Fedora cannot support these devices out of the box, however all the other bits are available including the kernel, device tree and U-Boot. So, we just need to piece it all together and the result is a stock Fedora system!

To do this you’ll need the ODROID device, a power supply (5V/4A for HC1, 12V/2A for HC2), one of their UART adapters, an SD card (UHS-I) and probably a hard drive if you want to use it as a NAS (you may also want a battery for the RTC and a case).

ODROID-HC1 with UART, RTC battery, SD card and 2.5″ drive.

Note that the default Fedora 27 ARM image does not support the Realtek RTL8153 Ethernet adapter out of the box (it does after a kernel upgrade) so if you don’t have a USB Ethernet dongle handy we’ll download the kernel packages on our host, save them to the SD card and install them on first boot. The Fedora 28 image works out of the box, so if you’re installing 28 you can skip that step.

Download the Fedora Minimal ARM server image and save it in your home dir.

Install the Fedora ARM installer and U-Boot bootloader files for the device on your host PC.

sudo dnf install fedora-arm-installer uboot-images-armv7

Insert your SD card into your computer and note the device (mine is /dev/mmcblk0) using dmesg or df commands. Once you know that, open a terminal and let’s write the Fedora image to the SD card! Note that we are using none as the target because it’s not a supported board and we will configure the bootloader manually.

sudo fedora-arm-image-installer \
--target=none \
--image=Fedora-Minimal-armhfp-27-1.6-sda.raw.xz \
--resizefs \
--norootpass \
--media=/dev/mmcblk0

First things first, we need to enable the serial console and turn off cpuidle else it won’t boot. We do this by mounting the boot partition on the SD card and modifying the extlinux bootloader configuration.

sudo mount /dev/mmcblk0p2 /mnt
 
sudo sed -i "s|append|& cpuidle.off=1 \
console=tty1 console=ttySAC2,115200n8|" \
/mnt/extlinux/extlinux.conf

As mentioned, the kernel that comes with Fedora 27 image doesn’t support the Ethernet adapter, so if you don’t have a spare USB Ethernet dongle, let’s download the updates now. If you’re using Fedora 28 this is not necessary.

cd /mnt
 
sudo wget http://dl.fedoraproject.org/pub/fedora/linux/updates/27/armhfp/Packages/k/kernel-4.16.3-200.fc27.armv7hl.rpm \
http://dl.fedoraproject.org/pub/fedora/linux/updates/27/armhfp/Packages/k/kernel-core-4.16.3-200.fc27.armv7hl.rpm \
http://dl.fedoraproject.org/pub/fedora/linux/updates/27/armhfp/Packages/k/kernel-modules-4.16.3-200.fc27.armv7hl.rpm
 
cd ~/

Unmount the boot partition.

sudo umount /mnt

Now, we can embed U-Boot and the required bootloaders into the SD card. To do this we need to download the files from Hardkernel along with their script which writes the blobs (note that we are downloading the files for the XU4, not HC1, as they are compatible). We will tell the script to use the U-Boot image we installed earlier, this way we are using Fedora’s U-Boot not the one from Hardkernel.

Download the required files from Hardkernel.

mkdir hardkernel ; cd hardkernel
 
wget https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/sd_fusing.sh \
https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/bl1.bin.hardkernel \
https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/bl2.bin.hardkernel.720k_uboot \
https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/tzsw.bin.hardkernel
 
chmod a+x sd_fusing.sh

Copy the Fedora U-Boot files into the local dir.

cp /usr/share/uboot/odroid-xu3/u-boot.bin .

Finally, run the fusing script to embed the files onto the SD card, passing in the device for your SD card.
sudo ./sd_fusing.sh /dev/mmcblk0

That’s it! Remove your SD card and insert it into your ODROID, then plug the UART adapter into a USB port on your computer and connect to it with screen (check dmesg for the port number, generally ttyUSB0).

sudo screen /dev/ttyUSB0

Now power on your ODROID. If all goes well you should see the SOC initialise, load Fedora’s U-Boot and boot Fedora to the welcome setup screen. Complete this and then log in as root or the user you have just set up.

Welcome configuration screen for Fedora ARM.

If you’re running Fedora 27 image, install the kernel updates, remove the RPMs and reboot the device (skip this if you’re running Fedora 28).
sudo dnf install --disablerepo=* /boot/*rpm
 
sudo rm /boot/*rpm
 
sudo reboot

Fedora login over serial connection.

Once you have rebooted, the Ethernet adapter should work and you can do your regular updates:

sudo dnf update

You can find your SATA drive at /dev/sda where you should be able to partition, format, mount it, share it and well, do whatever you want with the box.

You may wish to take note of the IP address and/or configure static networking so that you can SSH in once you unplug the UART.

Enjoy your native Fedora embedded ARM Mini NAS 🙂

April 26, 2018

A first program in golang, with a short aside about Google

I have reached the point in my life where I needed to write my first program in golang. I pondered for a disturbingly long time what exactly to write, but then it came to me…

Back in the day Google had an internal short URL service (think bit.ly, but for internal things). It was called “go” and lived at http://go. So what should I write as my first golang program? go of course.

The implementation is on github, and I am sure it isn’t perfect. Remember, it was a learning exercise. I mostly learned that golang syntax is a bit bonkers, and that etcd hates me.

This code stores short URLs in etcd, and redirects you to the right place if it knows about the short code you used. If you just ask for the root URL, you get a list of the currently defined short codes, as well as a form to create new ones. Not bad for a few hours hacking I think.

The post A first program in golang, with a short aside about Google appeared first on Made by Mikal.

LUV May 2018 Main Meeting: "Share" with FOSS Software

May 1 2018 18:30
May 1 2018 20:30
May 1 2018 18:30
May 1 2018 20:30
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE NEW LOCATION

6:30 PM to 8:30 PM Tuesday, May 1, 2018
Meeting Room 3, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

Linux Users of Victoria is a subcommittee of Linux Australia.

May 1, 2018 - 18:30

LUV May 2018 Workshop: Ubuntu 18.04 Bionic Beaver

May 19 2018 12:30
May 19 2018 16:30
May 19 2018 12:30
May 19 2018 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Ubuntu 18.04 Bionic Beaver

The latest long term support version of Ubuntu Linux has been released!  Come along to learn what's new and try it out, or get help upgrading.

There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

May 19, 2018 - 12:30

etcd v2 and v3 data stores are separate

Just noting this because it wasted way more of my time than it should have…

So you write an etcd app in a different language from your previous apps and it can’t see the data that the other apps wrote? Check the versions of your client libraries. The v2 and v3 data stores in etcd are different, and cannot be seen by each other. You need to convert your v2 data to the v3 data store before it will be visible there.
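
To make the trap concrete, here is a hedged Python sketch using the python-etcd (v2 API) and etcd3 (v3 API) client libraries, assuming a local etcd on the default port; a key written through the v2 API simply isn’t visible through the v3 API:

    # Demonstration that the etcd v2 and v3 data stores are separate.
    # Assumes etcd is running locally on the default port and that the
    # python-etcd (v2) and etcd3 (v3) client libraries are installed.
    import etcd    # v2 API client (python-etcd)
    import etcd3   # v3 API client

    v2 = etcd.Client(host='127.0.0.1', port=2379)
    v3 = etcd3.client(host='127.0.0.1', port=2379)

    v2.write('/demo/greeting', 'hello from v2')
    print(v2.read('/demo/greeting').value)   # 'hello from v2'

    value, _metadata = v3.get('/demo/greeting')
    print(value)                             # None - the v3 store cannot see v2 data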

You’re welcome.

The post etcd v2 and v3 data stores are separate appeared first on Made by Mikal.

April 23, 2018

pyconau 2018 call for proposals now open

The pyconau call for proposals is now open, and runs until 28 May. I took my teenagers to pyconau last year and they greatly enjoyed it. I hadn’t been to a pyconau in ages, and ended up really enjoying thinking about things from topic areas I don’t normally need to think about. I think expanding one’s horizons is generally a good idea.

Should I propose something for this year? I am unsure. Some random ideas that immediately spring to mind:

  • something about privsep: I think a generalised way to make privileged calls in unprivileged code is quite interesting, especially in a language which is often used for systems management and integration tasks. That said, perhaps it’s too OpenStacky given how disinterested in OpenStack talks most python people seem to be.
  • nova-warts: for a long time my hobby has been cleaning up historical mistakes made in OpenStack Nova that won’t ever rate as a major feature change. What lessons can other projects learn from a well funded and heavily staffed project that still thought that exec() was a great way to do important work? There’s definitely an overlap with the privsep talk above, but this would be more general.
  • a talk about how I had to manage some code which only worked in python2, and some other code that only worked in python3 and in the end gave up on venvs and decided that Docker containers are like the ultimate venvs. That said, I suspect this is old hat and was obvious to everyone except me.
  • something else I haven’t thought of.

Anyways, I’m undecided. Comments welcome.

Also, here’s an image for this post. It’s the stone henge we found at Guerilla Bay last weekend. I assume it’s in frequent use for tiny tiny druids.

The post pyconau 2018 call for proposals now open appeared first on Made by Mikal.

April 22, 2018

Caliban’s War

This is the second book in the Leviathan Wakes series by James SA Corey. Just as good as the first, this is a story about how much a father loves his daughter, moral choices, and politics — just as much as it is the continuation of the story arc around the alien visitor. I haven’t seen this far in the Netflix series, but I sure hope they get this right, because it’s a very good story so far.

Caliban's War
James S. A. Corey
Fiction
Orbit Books
April 30, 2013
624 pages

For someone who didn't intend to wreck the solar system's fragile balance of power, Jim Holden did a pretty good job of it. While Earth and Mars have stopped shooting each other, the core alliance is shattered. The outer planets and the Belt are uncertain in their new - possibly temporary - autonomy. Then, on one of Jupiter's moons, a single super-soldier attacks, slaughtering soldiers of Earth and Mars indiscriminately and reigniting the war. The race is on to discover whether this is the vanguard of an alien army, or if the danger lies closer to home.

The post Caliban’s War appeared first on Made by Mikal.

April 21, 2018

Exploring change and how to scale it

Over the past decade I have been involved in several efforts trying to make governments better. A key challenge I repeatedly see is people trying to change things without an idea of what they are trying to change to, trying to fix individual problems (a deficit view) rather than recognising and fixing the systems that created the problems in the first place. So you end up getting a lot of symptomatic relief and iterative improvements of antiquated paradigms without necessarily getting transformation of the systems that generated the problems. A lot of the effort is put into applying traditional models of working which often result in the same old results, so we also need to consider new ways to work, not just what needs to be done.

With life getting faster and (arguably) exponentially more complicated, we need to take a whole of system view if we are to improve ‘the system’ for people. People sometimes balk when I say this thinking it too hard, too big or too embedded. But we made this, we can remake it, and if it isn’t working for us, we need to adapt like we always have.

I also see a lot of slogans used without the nuanced discussion they invite. Such (often ideological) assumptions can subtly play out without evidence, discussion or agreement on common purpose. For instance, whenever people say smaller or bigger government I try to ask what they think the role of government is, to have a discussion. Size is assumed to correlate to services, productivity, or waste depending on your view, but shouldn’t we talk about what the public service should do, and then the size is whatever is appropriate to do what is needed? People don’t talk about a bigger or smaller jacket or shoes; they get the right one for their needs and the size can change over time as the need changes. Indeed, perhaps the public service of the future could be a dramatically different workforce comprised of a smaller group of professional public servants complemented with a large demographically representative group of part time citizens doing their self nominated and paid “civic duty year of service” as a form of participatory democracy, which would bring new skills and perspectives into governance, policy and programs.

We need urgently to think about the big picture, to collectively talk about the 50 or 100 year view for society, and only then can we confidently plan and transform the structures, roles, programs and approaches around us. This doesn’t mean we have to all agree to all things, but we do need to identify the common scaffolding upon which we can all build.

This blog posts challenges you to think systemically, critically and practically about five things:

    • What future do you want? Not what could be a bit better, or what the next few years might hold, or how that shiny new toy you have could solve the world’s problems (policy innovation, data, blockchain, genomics or any tool or method). What is the future you want to work towards, and what does good look like? Forget about your particular passion or area of interest for a moment. What does your better life look like for all people, not just people like you?
    • What do we need to get there? What concepts, cultural values, paradigm, assumptions should we take with us and what should we leave behind? What new tools do we need and how do we collectively design where we are going?
    • What is the role of gov, academia, other sectors and people in that future? If we could create a better collective understanding of our roles in society and some of the future ideals we are heading towards, then we would see a natural convergence of effort, goals and strategy across the community.
    • What will you do today? Seriously. Are you differentiating between symptomatic relief and causal factors? Are you perpetuating the status quo or challenging it? Are you being critically aware of your bias, of the system around you, of the people affected by your work? Are you reaching out to collaborate with others outside your team, outside your organisation and outside your comfort zone? Are you finding natural partners in what you are doing, and are you differentiating between activities worthy of collaboration versus activities only of value to you (the former being ripe for collaboration and the latter less so).
    • How do we scale change? I believe we need to consider how to best scale “innovation” and “transformation”. Scaling innovation is about scaling how we do things differently, such as the ability to take a more agile, experimental, evidence based, creative and collaborative approach to the design, delivery and continuous improvement of stuff, be it policy, legislation or services. Scaling transformation is about how we create systemic and structural change that naturally drives and motivates better societal outcomes. Each without the other is not sustainable or practical.

How to scale innovation and transformation?

I’ll focus the rest of this post on the question of scaling. I wrote this in the context of scaling innovation and transformation in government, but it applies to any large system. I also believe that empowering people is the greatest way to scale anything.

  • I’ll firstly say that openness is key to scaling everything. It is how we influence the system, how we inspire and enable people to individually engage with and take responsibility for better outcomes and innovate at a grassroots level. It is how we ensure our work is evidence based, better informed and better tested, through public peer review. Being open not only influences the entire public service, but the rest of the economy and society. It is how we build trust, improve collaboration, send indicators to vendors and influence academics. Working openly, open sourcing our research and code, being public about projects that would benefit from collaboration, and sharing most of what we do (because most of the work of the public service is not secretive by any stretch) is one of the greatest tools in trying to scale our work, our influence and our impact. Openness is also the best way to ensure both a better supply chain as well as a better demand for things that are demonstrably better.

A quick side note to those who argue that transparency isn’t an answer because not all people have the tools to understand data/information/etc to hold others accountable: that doesn’t mean you don’t do transparency at all. There will always be groups or people naturally motivated to hold you to account, whether it is your competitors, clients, the media, citizens or even your own staff. Transparency is partly about accountability and partly about reinforcing a natural motivation to do the right thing.

Scaling innovation – some ideas:

  • The necessity of neutral, safe, well resourced and collaborative sandpits is critical for agencies to quickly test and experiment outside the limitations of their agencies (technical, structural, political, functional and procurement). Such places should be engaged with the sectors around them. Neutral spaces that take a systems view also start to normalise a systems view across agencies in their other work, which has huge ramifications for transformation as well as innovation.
  • Seeking and sharing – sharing knowledge, reusable systems/code, research, infrastructure and basically making it easier for people to build on the shoulders of each other rather than every single team starting from scratch every single time. We already have some communities of practice but we need to prioritise sharing things people can actually use and apply in their work. We also need to extend this approach across sectors to raise all boats. Imagine if there was a broad commons across all society to share and benefit from each others efforts. We’ve seen the success and benefits of Open Source Software, of Wikipedia, of the Data Commons project in New Zealand, and yet we keep building sector or organisational silos for things that could be public assets for public good.
  • Require user research in budget bids – this would require agencies to do user research before bidding for money, which would create an incentive to build things people actually need which would drive both a user centred approach to programs and would also drive innovation as necessary to shift from current practices :) Treasury would require user research experts and a user research hub to contrast and compare over time.
  • Staff mobility – people should be supported to move around departments and business units to get different experiences and to share and learn. Not everyone will want to, but when people stay in the same job for 20 years, it can be harder to engage in new thinking. Exchange programs are good but again, if the outcomes and lessons are not broadly shared, then they are linear in impact (individuals) rather than scalable (beyond the individuals).
  • Support operational leadership – not everyone wants to be a leader, disruptor, maker, innovator or intrapreneur. We need to have a program to support such people in the context of operational leadership that isn’t reliant upon their managers putting them forward or approving. Even just recognising leadership as something that doesn’t happen exclusively in senior management would be a huge cultural shift. Many managers will naturally want to keep great people to themselves which can become stifling and eventually we lose them. When people can work on meaningful great stuff, they stay in the public service.
  • A public ‘Innovation Hub’ – if we had a simple public platform for people to register projects they want to collaborate on, from any sector, we could stimulate and support innovation across the public sector: things for which collaboration could help would be surfaced, publicly visible, and open for others to engage in. It would support and encourage innovation across government, while also providing a good pipeline for investment and a way to stimulate and support real collaboration across sectors, which is substantially lacking at the moment.
  • Emerging tech and big vision guidance - we need a team, I suggest cross agency and cross sector, of operational people who keep their fingers on the pulse of technology to create ongoing guidance for New Zealand on emerging technologies, trends and ideas that anyone can draw from. For government, this would help agencies engage constructively with new opportunities rather than no one ever having time or motivation until emerging technologies come crashing down as urgent change programs. This could be captured on a constantly updating toolkit with distributed authorship to keep it real.

Scaling transformation – some ideas:

  • Convergence of effort across sectors – right now in many countries every organisation and to a lesser degree, many sectors, are diverging on their purpose and efforts because there is no shared vision to converge on. We have myriad strategies, papers, guidance, but no overarching vision. If there were an overarching vision for New Zealand Aotearoa for instance, co-developed with all sectors and the community, one that looks at what sort of society we want into the future and what role different entities have in achieving that ends, then we would have the possibility of natural convergence on effort and strategy.
    • Obviously when you have a cohesive vision, then you can align all your organisational and other strategies to that vision, so our (government) guidance and practices would need to align over time. For the public sector the Digital Service Standard would be a critical thing to get right, as is how we implement the Higher Living Standards Framework, both of which would drive some significant transformation in culture, behaviours, incentives and approaches across government.
  • Funding “Digital Public Infrastructure” – technology is currently funded as projects with start and end dates, and almost all tech projects across government are bespoke to particular agency requirements or motivations, so we build loads of technologies but very little infrastructure that others can rely upon. If we took all the models we have for funding other forms of public infrastructure (roads, health, education) and saw some types of digital infrastructure as public infrastructure, perhaps they could be built and funded in ways that are more beneficial to the entire economy (and society).
  • Agile budgeting – we need to fund small experiments that inform business cases, rather than starting with big business cases. Ideally we need to not have multi 100 million dollar projects at all because technology projects simply don’t cost that anymore, and anyone saying otherwise is trying to sell you something :) If we collectively took an agile budgeting process, it would create a systemic impact on motivations, on design and development, or implementation, on procurement, on myriad things. It would also put more responsibility on agencies for the outcomes of their work in short, sharp cycles, and would create the possibility of pivoting early to avoid throwing bad money after good (as it were). This is key, as no transformative project truly survives the current budgeting model.
  • Gov as a platform/API/enabler (closely related to DPI above) – obviously making all government data, content, business rules (inc but not just legislation) and transactional systems available as APIs for building upon across the economy is key. This is how we scale transformation across the public sector because agencies are naturally motivated to deliver what they need to cheaper, faster and better, so when there are genuinely useful reusable components, agencies will reuse them. Agencies are now more naturally motivated to take an API driven modular architecture which creates the bedrock for government as an API. Digital legislation (which is necessary for service delivery to be integrated across agency boundaries) would also create huge transformation in regulatory and compliance transformation, as well as for government automation and AI.
  • Exchange programs across sectors – to share knowledge but all done openly so as to not create perverse incentives or commercial capture. We need to also consider the fact that large companies can often afford to jump through hoops and provide spare capacity, but small to medium sized companies cannot, so we’d need a pool for funding exchange programs with experts in the large proportion of industry.
  • All of system service delivery evidence base – what you measure drives how you behave. Agencies are motivated to do only what they need to within their mandates and have very few all of system motivations. If we have an all of government anonymised evidence base of user research, service analytics and other service delivery indicators, it would create an accountability to all of system which would drive all of system behaviours. In New Zealand we already have the IDI (an awesome statistical evidence base) but what other evidence do we need? Shared user research, deidentified service analytics, reporting from major projects, etc. And how do we make that evidence more publicly transparent (where possible) and available beyond the walls of government to be used by other sectors?  More broadly, having an all of government evidence base beyond services would help ensure a greater evidence based approach to investment, strategic planning and behaviours.

WaveNet and Codec 2

Yesterday my friend and fellow open source speech coder Jean-Marc Valin (of Speex and Opus fame) emailed me with some exciting news. W. Bastiaan Kleijn and friends have published a paper called “Wavenet based low rate speech coding“. Basically they take the bit stream of Codec 2 running at 2400 bit/s, and replace the Codec 2 decoder with the WaveNet deep learning generative model.

What is amazing is the quality – it sounds as good as an 8000 bit/s wideband speech codec! They have generated wideband audio from the narrowband Codec 2 model parameters. Here are the samples – compare “Parametrics WaveNet” to Codec 2!

This is a game changer for low bit rate speech coding.

I’m also happy that Codec 2 has been useful for academic research (Yay open source), and that the MOS scores in the paper show it’s close to MELP at 2400 bit/s. Last year we discovered Codec 2 is better than MELP at 600 bit/s. Not bad for an open source codec written (more or less) by one person.

Now I need to do some reading on Deep Learning!

Reading Further

Wavenet based low rate speech coding
Wavenet Speech Samples
AMBE+2 and MELPe 600 Compared to Codec 2

April 20, 2018

NAPLAN and vocabulary

It is the time of year when the thoughts of teachers of students in years 3, 5, 7 and 9 turn (not so) lightly to NAPLAN. I’m sure many of you are aware of the controversial review of NAPLAN by Les Perelman, a retired professor from MIT in the United States. Perelman conducted a similar […]

Using a Kenwood TH-D72A with Pat on Linux and ax25

Here is how I managed to get my Kenwood TH-D72A radio working with Pat on Linux using the built-in TNC and the AX.25 mode.

Installing Pat

First of all, download and install the latest Pat package from the GitHub project page.

dpkg -i pat_x.y.z_amd64.deb

Then, follow the installation instructions for the AX.25 mode and install the necessary packages:

apt install ax25-tools ax25-apps

along with the systemd script that comes with Pat:

/usr/share/pat/ax25/install-systemd-ax25-unit.bash

Configuration

Once the packages are installed, it's time to configure everything correctly:

  1. Power cycle the radio.
  2. Enable TNC in packet12 mode (band A*).
  3. Tune band A to VECTOR channel 420 (or 421 if you can't reach VA7EOC on simplex).
  4. Put the following in /etc/ax25/axports (replacing CALLSIGN with your own callsign):

     wl2k    CALLSIGN    9600    128    4    Winlink
    
  5. Set HBAUD to 1200 in /etc/default/ax25.

  6. Download and compile the tmd710_tncsetup script mentioned in a comment in /etc/default/ax25:

     gcc -o tmd710_tncsetup tmd710_tncsetup.c
    
  7. Add the tmd710_tncsetup script in /etc/default/ax25 and use these command line parameters (-B 0 specifies band A, use -B 1 for band B):

     tmd710_tncsetup -B 0 -S $DEV -b $HBAUD -s
    
  8. Start ax25 driver:

     systemctl start ax25.service
    

Connecting to a winlink gateway

To monitor what is being received and transmitted:

axlisten -cart

Then create aliases like these in ~/.wl2k/config.json:

{
  "connect_aliases": {
    "ax25-VA7EOC": "ax25://wl2k/VA7EOC-10",
    "ax25-VE7LAN": "ax25://wl2k/VE7LAN-10"
  },
}

and use them to connect to your preferred Winlink gateways.

Troubleshooting

If it doesn't look like ax25 can talk to the radio (i.e. the TX light doesn't turn ON), then it's possible that the tmd710_tncsetup script isn't being run at all, in which case the TNC isn't initialized correctly.

On the other hand, if you can see the radio transmitting but are not seeing any incoming packets in axlisten then double check that the speed is set correctly:

  • HBAUD in /etc/default/ax25 should be set to 1200
  • line speed in /etc/ax25/axports should be set to 9600
  • SERIAL_SPEED in tmd710_tncsetup should be set to 9600
  • radio displays packet12 in the top-left corner, not packet96

If you can establish a connection, but it's very unreliable, make sure that you have enabled software flow control (the -s option in tmd710_tncsetup).

If you can't connect to VA7EOC-10 on UHF, you could also try the VHF BCFM repeater on Mt Seymour, VE7LAN (VECTOR channel 65).

April 19, 2018

Art with condiments

Mr 15 just made me watch this video, it’s pretty awesome…

You’re welcome.

The post Art with condiments appeared first on Made by Mikal.

April 18, 2018

City2Surf 2018

I registered for city2surf this morning, which will be the third time I’ve run in the event. In 2016 my employer sponsored a bunch of us to enter, and I ran the course in 86 minutes and 54 seconds. 2017 was a bit more exciting, because in hindsight I did the final part of my training and the race itself with a torn achilles tendon. Regardless, I finished the course in 79 minutes and 39 seconds — a 7 minute and 16 second improvement despite the injury.

This year I’ve done a few things differently — I’ve started training much earlier, mostly as a side effect of recovering from the achilles injury; and secondly I’ve decided to try and raise some money for charity during the run.

Specifically, I’m raising money for the Black Dog Institute. They were selected because I’ve struggled with depression on and off over my adult life, and that’s especially true for the last twelve months or so. I figure that raising money for a resource that I’ve found personally useful makes a lot of sense.

I’d love for you to donate to the Black Dog Institute, but I understand that’s not always possible. Either way, thanks for reading this far!

The post City2Surf 2018 appeared first on Made by Mikal.

April 17, 2018

Lithium Cell Amp Hour Tester and Electric Sailing

I recently electrocuted my little sail boat. I built the battery pack using some second hand Lithium cells donated by my EV. However after 8 years of abuse from my kids and me those cells are of varying quality. So I set about developing an Amp-Hour tester to determine the capacity of the cells.

The system has a relay that switches a low value power resistor (OK some coat hanger wire) across the 3.2V cell terminals, loading it up at about 27A, roughly the cruise current for my e-boat. It’s about 0.12 ohms once it heats up. This gets too hot to touch but not red hot, it’s only 86W being dissipated along about 1m of wire. When I built my EV I used the coat hanger wire load trick to test 3kW loads, that was a bit more exciting!
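
Those load numbers are just Ohm’s law; a quick Python sanity check using the figures above:

    # Sanity check on the coat hanger wire load figures.
    cell_voltage = 3.2       # volts, LiFePO4 cell under load
    load_resistance = 0.12   # ohms, hot resistance of the wire

    current = cell_voltage / load_resistance   # ~27A
    power = cell_voltage * current             # roughly 85W dissipated in the wire

    print("%.1f A, %.0f W" % (current, power))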

The empty beer can in the background makes a useful insulated stand off. Might need to make more of those.

When I first installed Lithium cells in my EV I developed a charge controller for it. I borrowed a small part of that circuit: a two transistor flip flop and a Battery Management System (BMS) module:

Across the cell under test is a CM090 BMS module from EV Power. That’s the good looking red PCB in the photos, onto which I have tacked the circuit above. These modules have a switch that opens when the cell voltage drops beneath 2.5V.

Taking the base of either transistor to ground switches on the other transistor. In logic terms, it’s a “not set” and “not reset” operation. When power is applied, the BMS module switch is closed. The 10uF capacitor is discharged, so provides a momentary short to ground, turning Q1 off, and Q2 on. Current flows through the automotive relay, switching on the load to the battery.

After a few hours the cell discharges beneath 2.5V, the BMS switch opens and Q2 is switched off. The collector voltage on Q2 rises, switching on Q1. Due to the latching operation of the flip flop – it stays in this state. This is important, as when the relay opens, the cell will be unloaded and its voltage will rise again and the BMS module switch will close. In the initial design without a flip flop, this caused the relay to buzz as the cell voltage oscillated about 2.5V as the relay opened and closed! I need the test to stop and stay stopped – it will be operating unattended so I don’t want to damage the cell by completely discharging it.

The LED was inserted to ensure the base voltage on Q1 was low enough to switch Q1 off when Q2 was on (Vce of Q2 is not zero), and has the neat side effect of lighting the LED when the test is complete!

In operation, I point a cell phone taking time lapse video of the LED and some multi-meters, and start the test:

I wander back after 3 hours and jog-shuttle the time lapse video to determine the time when the LED came on:

The time lapse feature on this phone runs in 1/10 of real time. For example Cell #9 discharged in 12:12 on the time lapse video. So we convert that time to seconds, multiply by 10 to get “seconds of real time”, then divide by 3600 to get the run time in hours. Multiplying by the discharge current of 27(ish) Amps we get the cell capacity:

  12:12 time lapse, 27*(12*60+12)*10/3600 = 55AH

So this cell is a bit low, and won’t be finding its way onto my boat!
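
Here is a small Python helper that reproduces the calculation above (the 1/10 time lapse ratio and ~27A discharge current are the figures from the text):

    # Convert a time lapse "LED on" time into cell capacity in amp-hours.
    def capacity_ah(minutes, seconds, discharge_amps=27, timelapse_ratio=10):
        real_seconds = (minutes * 60 + seconds) * timelapse_ratio
        return discharge_amps * real_seconds / 3600.0

    print("%.0f AH" % capacity_ah(12, 12))   # cell #9, about 55AH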

Another alternative is a logging multimeter; one could even measure and integrate the discharge current over time. Or I could have just bought or borrowed a proper discharge tester, but where’s the fun in that?

Results

It was fun to develop, a few Saturday afternoons of sitting in the driveway soldering, occasional burns from 86W of hot wire, and a little head scratching while I figured out how to take the design from an expensive buzzer to a working circuit. Nice to do some soldering after months of software based DSP. I’m also happy that I could develop a transistor circuit from first principles.

I’ve now tested 12 cells (I have 40 to work through), and measured capacities of 50 to 75AH (they are rated at 100AH new). Some cells have odd behavior under load; dipping beneath 3V right at the start of the test rather than holding 3.2V for a few hours – indicating high internal resistance.

My beloved sail e-boat is already doing better. Last weekend, using the best cells I had tested at that point, I e-motored all day on varying power levels.

One neat trick, explained to me by Matt, is motor-sailing. Using a little bit of outboard power, the boat overcomes hydrodynamic friction (it gets moving in the water) and the sail is moved out of stall (like an airplane wing moving to just above stall speed). This means the boat moves a lot faster than under motor or sail alone in light winds. For example the motor was registering just 80W, but we were doing 3 knots in light winds. This same trick can be done with a stink-motor and dinosaur juice, but the e-motor is completely silent; we forgot it was on for hours at a time!

Reading Further

Electric Car BMS Controller
New Lithium Battery Pack for my EV
Engage the Silent Drive
EV Bugs

LUV April 2018 Workshop: Linux and Drupal mentoring and troubleshooting

Apr 21 2018 12:00
Apr 21 2018 16:00
Apr 21 2018 12:00
Apr 21 2018 16:00
Location: 
Room B2:11, State Library of Victoria, 328 Swanston St, Melbourne

As our usual venue at Infoxchange is not available this month due to construction work, we'll be joining forces with DrupalMelbourne at the State Library of Victoria.

Linux Users of Victoria is a subcommittee of Linux Australia.

April 21, 2018 - 12:00

Introducing: Click Sync

Chrome’s syncing is pretty magical: you can see your browsing history from your phone, tablet, and computers, all in one place. When you install Chrome on a new computer, it automatically downloads your extensions. You can see your bookmarks everywhere, it even lets you open a tab from another device.

There’s one thing that’s always bugged me, however. When you click a link, it turns purple, as all visited links should. But it doesn’t turn purple on your other devices. Google have had this bug on their radar for ages, but it hasn’t made much progress. There’s already an extension that kind of fixes this, but it works by hashing every URL you visit and sending them to a server run by the extension author: not something I’m particularly comfortable with.

And so, I wrote Click Sync!

When you click a link, it’ll use Chrome’s inbuilt sync service to tell all your other computers to mark it as visited. If you like watching videos of links turn purple without being clicked, I have just the thing for you:

While you’re thinking about how Chrome syncs between all your devices, it’s good to setup a Chrome Passphrase, if you haven’t already. This encrypts your personal data before it passes through Google’s servers.

Unfortunately, Chrome mobile doesn’t support extensions, so this is only good for syncing between computers. If you run into any bugs, head on over to the Click Sync repository, and let me know!

April 15, 2018

Testing HAB Telemetry Protocols

On Saturday Mark and I had a pleasant day bench testing High Altitude Balloon (HAB) Telemetry protocols and demodulators.

Project Horus HAB flights use a low power transmitter to send regular updates of the balloons position and status. To date, this has been sent using RTTY, and demodulated using Fldigi, or a special version modified for HAB work called dl-Fldigi.

Lora is becoming common in HAB circles; however, I am confident we can do better using a custom protocol and well engineered – and most importantly, open source – modems. While very well designed and conveniently packaged, Lora is not magic – modem performance is defined by physics.

A few years ago, Mark and I developed and flight tested a binary protocol (Horus Binary) for HAB flights. We have dusted this off, and I’ve written a C callable API (horus_api.c) to make Horus RTTY and Binary easy to use. The plan is to release a cross platform GUI application that supports Horus Binary, so anyone with a SSB receiver can join in the fun of tracking Horus flights using Horus Binary.

A good HAB telemetry protocol works at low SNRs, and has fast updates to allow accurate positioning of the payload during the final descent. A way of measuring the performance is Packet Error Rate (PER) – how many telemetry packets get through at a given Signal to Noise Ratio (SNR).

So we generated some synthetic Horus RTTY and Binary packets at calibrated SNRs using GNU Octave simulation code (fsk_horus.m), then played the wave files through several modems.

Here are the results (click for a larger version):

The X-axis is in Eb/No, which is proportional to SNR:

  SNR = EBNodB + 10log10(Rb/BW)

where Rb is the bit rate and BW is the noise bandwidth you want to measure SNR in. Eb/No is handy as it normalises for the effect of bit rate and noise bandwidth, making modem comparison easier.

Protocol               dl-Fldigi RTTY   Fldigi RTTY   Horus RTTY   Horus Binary
Eb/No (dB, 50% PER)    13.0             12.0          11.5         4.5
Rb (bits/s)            100              100           100          200
SNR (dB, 3000Hz)       -1.7             -2.7          -3.2         -7.2
Packet duration (s)    6                6             6            1.6
Wave file              Listen           Listen        Listen       Listen

Discussion

The older dl-Fldigi is a few dB behind the more modern Fldigi. Our Horus RTTY and especially Binary protocols are doing very well. At the same bit rate (Eb/No curve), Horus Binary is 9dB ahead of dl-Fldigi, which is a very useful gain; at least double the Line of Sight (LOS) range, and equivalent to having nearly 10x the transmit power. The Binary packets are fast as well, allowing for rapid position updates in the final descent.

Trade offs are possible, for example if we slowed Horus Binary to 50 bits/s, its packet duration would be 6.4s (about the same as RTTY) however 50% PER would occur at a SNR of -13dB, a 15dB improvement over dl-Fldigi.
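
The SNR column in the table follows directly from the Eb/No formula above; here is a small Python sketch that reproduces those numbers, plus the -13dB figure for a hypothetical 50 bit/s Horus Binary:

    # SNR (dB) in a 3000Hz noise bandwidth from Eb/No (dB) and bit rate Rb:
    # SNR = Eb/No + 10*log10(Rb/BW)
    import math

    def snr_from_ebno(ebno_db, rb, bw=3000.0):
        return ebno_db + 10 * math.log10(rb / bw)

    print(snr_from_ebno(13.0, 100))   # dl-Fldigi RTTY, about -1.8dB
    print(snr_from_ebno(4.5, 200))    # Horus Binary, about -7.3dB
    print(snr_from_ebno(4.5, 50))     # Horus Binary slowed to 50 bit/s, about -13dB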

Reading Further

Project Horus
Binary Telemetry Protocol
All Your Modem are Belong To Us
SNR and Eb/No Worked Example

On Selecting a Well Engaged Open Source Vendor

Aptira is in an interesting position in the Open Source market, because we don’t usually sell software. Instead, our customers come to us seeking assistance with deciding which OpenStack to use, or how to embed ONAP into their nationwide networks, or how to move their legacy networks to the software defined future. Therefore, our most common role is as a trusted advisor to help our customers decide which Open Source products to buy.

(My boss would insist that I point out here that we do customisation of Open Source for our customers, and have assisted many in the past with deploying pure upstream solutions. Basically, we do what is the right fit for the customer, and aren’t obsessed with fitting customers into pre-defined moulds that suit our partners.)

That makes it important that we recommend products from companies that are well engaged with their upstream Open Source communities. That might be OpenStack, or ONAP, or even something like Open Daylight. This raises the obvious question – what makes a company well engaged with an upstream project?

Read more over at my employer’s blog

The post On Selecting a Well Engaged Open Source Vendor appeared first on Made by Mikal.

Configuring docker to use rexray and Ceph for persistent storage

For various reasons I wanted to play with docker containers backed by persistent Ceph storage. rexray seemed like the way to do that, so here are my notes on getting that working…

First off, I needed to install rexray:

    root@labosa:~/rexray# curl -sSL https://dl.bintray.com/emccode/rexray/install | sh
    Selecting previously unselected package rexray.
    (Reading database ... 177547 files and directories currently installed.)
    Preparing to unpack rexray_0.9.0-1_amd64.deb ...
    Unpacking rexray (0.9.0-1) ...
    Setting up rexray (0.9.0-1) ...
    
    rexray has been installed to /usr/bin/rexray
    
    REX-Ray
    -------
    Binary: /usr/bin/rexray
    Flavor: client+agent+controller
    SemVer: 0.9.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    Formed: Thu, 04 May 2017 07:38:11 AEST
    
    libStorage
    ----------
    SemVer: 0.6.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    Formed: Thu, 04 May 2017 07:36:11 AEST
    

Which is of course horrid. What that script seems to have done is install a deb’d version of rexray based on an alien’d package:

    root@labosa:~/rexray# dpkg -s rexray
    Package: rexray
    Status: install ok installed
    Priority: extra
    Section: alien
    Installed-Size: 36140
    Maintainer: Travis CI User <travis@testing-gce-7fbf00fc-f7cd-4e37-a584-810c64fdeeb1>
    Architecture: amd64
    Version: 0.9.0-1
    Depends: libc6 (>= 2.3.2)
    Description: Tool for managing remote & local storage.
     A guest based storage introspection tool that
     allows local visibility and management from cloud
     and storage platforms.
     .
     (Converted from a rpm package by alien version 8.86.)
    

If I was building anything more than a test environment I think I’d want to do a better job of installing rexray than this, so you’ve been warned.

Next to configure rexray to use Ceph. The configuration details are cunningly hidden in the libstorage docs, and aren’t mentioned at all in the rexray docs, so you probably want to take a look at the libstorage docs on ceph. First off, we need to install the ceph tools, and copy the ceph authentication information from the ceph we installed using openstack-ansible earlier.

    root@labosa:/etc# apt-get install ceph-common
    root@labosa:/etc# scp -rp 172.29.239.114:/etc/ceph .
    The authenticity of host '172.29.239.114 (172.29.239.114)' can't be established.
    ECDSA key fingerprint is SHA256:SA6U2fuXyVbsVJIoCEHL+qlQ3xEIda/MDOnHOZbgtnE.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '172.29.239.114' (ECDSA) to the list of known hosts.
    rbdmap                       100%   92     0.1KB/s   00:00
    ceph.conf                    100%  681     0.7KB/s   00:00
    ceph.client.admin.keyring    100%   63     0.1KB/s   00:00
    ceph.client.glance.keyring   100%   64     0.1KB/s   00:00
    ceph.client.cinder.keyring   100%   64     0.1KB/s   00:00
    ceph.client.cinder-backup.keyring   71     0.1KB/s   00:00
    root@labosa:/etc# modprobe rbd
    

You also need to configure rexray. My first attempt looked like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: ceph
    

And the rexray output sure made it look like it worked…

    root@labosa:/etc# rexray service start
    ● rexray.service - rexray
       Loaded: loaded (/etc/systemd/system/rexray.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2017-05-29 10:14:07 AEST; 33ms ago
     Main PID: 477423 (rexray)
        Tasks: 5
       Memory: 1.5M
          CPU: 9ms
       CGroup: /system.slice/rexray.service
               └─477423 /usr/bin/rexray start -f
    
    May 29 10:14:07 labosa systemd[1]: Started rexray.
    

Which looked good, but /var/log/syslog said:

    May 29 10:14:08 labosa rexray[477423]: REX-Ray
    May 29 10:14:08 labosa rexray[477423]: -------
    May 29 10:14:08 labosa rexray[477423]: Binary: /usr/bin/rexray
    May 29 10:14:08 labosa rexray[477423]: Flavor: client+agent+controller
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.9.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
    May 29 10:14:08 labosa rexray[477423]: Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:38:11 AEST
    May 29 10:14:08 labosa rexray[477423]: libStorage
    May 29 10:14:08 labosa rexray[477423]: ----------
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.6.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
    May 29 10:14:08 labosa rexray[477423]: Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:36:11 AEST
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="error starting libStorage server" error.driver=ceph time=1496016848215
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="default module(s) failed to initialize" error.driver=ceph time=1496016848216
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="daemon failed to initialize" error.driver=ceph time=1496016848216
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="error starting rex-ray" error.driver=ceph time=1496016848216
    

That’s because the service is called rbd it seems. So, the config file ended up looking like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: rbd
    
    rbd:
      defaultPool: rbd
    

Now to install docker:

    root@labosa:/var/log# sudo apt-get update
    root@labosa:/var/log# sudo apt-get install linux-image-extra-$(uname -r) \
        linux-image-extra-virtual
    root@labosa:/var/log# sudo apt-get install apt-transport-https \
        ca-certificates curl software-properties-common
    root@labosa:/var/log# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    root@labosa:/var/log# sudo add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) \
        stable"
    root@labosa:/var/log# sudo apt-get update
    root@labosa:/var/log# sudo apt-get install docker-ce
    

Now let’s make a rexray volume (a size of 1 in the command below means 1 GB).

    root@labosa:/var/log# rexray volume ls
    ID  Name  Status  Size
    root@labosa:/var/log# docker volume create --driver=rexray --name=mysql \
        --opt=size=1
    mysql
    root@labosa:/var/log# rexray volume ls
    ID         Name   Status     Size
    rbd.mysql  mysql  available  1
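
If you’d rather drive this from code than the docker CLI, the docker SDK for Python can create the same rexray-backed volume. This is only a hedged sketch, assuming the docker python package is installed and the daemon is configured as above; the volume name and size are the same illustrative values as the CLI example.

    # Sketch: create a rexray-backed volume via the docker SDK for Python.
    # Assumes the docker daemon and rexray are configured as described above.
    import docker

    client = docker.from_env()

    # Equivalent to: docker volume create --driver=rexray --name=mysql --opt=size=1
    volume = client.volumes.create(name='mysql', driver='rexray',
                                   driver_opts={'size': '1'})
    print(volume.name)

    # List volumes to confirm the new one is visible to docker.
    for vol in client.volumes.list():
        print(vol.name, vol.attrs.get('Driver'))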
    

Let’s start the container.

    root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
        -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
    Unable to find image 'mysql:latest' locally
    latest: Pulling from library/mysql
    10a267c67f42: Pull complete
    c2dcc7bb2a88: Pull complete
    17e7a0445698: Pull complete
    9a61839a176f: Pull complete
    a1033d2f1825: Pull complete
    0d6792140dcc: Pull complete
    cd3adf03d6e6: Pull complete
    d79d216fd92b: Pull complete
    b3c25bdeb4f4: Pull complete
    02556e8f331f: Pull complete
    4bed508a9e77: Pull complete
    Digest: sha256:2f4b1900c0ee53f344564db8d85733bd8d70b0a78cd00e6d92dc107224fc84a5
    Status: Downloaded newer image for mysql:latest
    ccc251e6322dac504e978f4b95b3787517500de61eb251017cc0b7fd878c190b
    

And now to prove that persistence works and that there’s nothing up my sleeve…

    root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
        sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" \
        -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.7.18 MySQL Community Server (GPL)
    
    Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
    
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    | sys                |
    +--------------------+
    4 rows in set (0.00 sec)
    
    mysql> create database demo;
    Query OK, 1 row affected (0.03 sec)
    
    mysql> use demo;
    Database changed
    mysql> create table foo(val char(5));
    Query OK, 0 rows affected (0.14 sec)
    
    mysql> insert into foo(val) values ('a'), ('b'), ('c');
    Query OK, 3 rows affected (0.08 sec)
    Records: 3  Duplicates: 0  Warnings: 0
    
    mysql> select * from foo;
    +------+
    | val  |
    +------+
    | a    |
    | b    |
    | c    |
    +------+
    3 rows in set (0.00 sec)
    

Now let’s re-create the container and prove the data remains.

    root@labosa:/var/log# docker stop some-mysql
    some-mysql
    root@labosa:/var/log# docker rm some-mysql
    some-mysql
    root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
        -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
    99a7ccae1ad1865eb1bcc8c757251903dd2f1ac7d3ce4e365b5cdf94f539fe05
    
    root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
        sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -\
        P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.7.18 MySQL Community Server (GPL)
    
    Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
    
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    mysql> use demo;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A
    
    Database changed
    mysql> select * from foo;
    +------+
    | val  |
    +------+
    | a    |
    | b    |
    | c    |
    +------+
    3 rows in set (0.00 sec)
    

So there you go.

Share

I think I found a bug in python’s unittest.mock library

Share

Mocking is a pretty common thing to do in unit tests covering OpenStack Nova code. Over the years we’ve used various mock libraries to do that, with the flavor du jour being unittest.mock. I must say that I strongly prefer unittest.mock to the old mox code we used to write, but I think I just accidentally found a fairly big bug.

The problem is that python mocks are magical. It’s an object where you can call any method name, and the mock will happily pretend it has that method, and return None. You can then later ask what “methods” were called on the mock.

However, you use the same mock object later to make assertions about what was called. Herein lies the problem — the mock object doesn’t know if you’re the code under test, or the code that’s making assertions. So, if you fat finger the assertion in your test code, the assertion will just quietly map to a non-existent method which returns None, and your code will pass.

Here’s an example:

#!/usr/bin/python3

from unittest import mock

class foo(object):
    def dummy(self, a, b):
        return a + b

@mock.patch.object(foo, 'dummy')
def call_dummy(mock_dummy):
    f = foo()
    f.dummy(1, 2)

    print('Asserting a call should work if the call was made')
    mock_dummy.assert_has_calls([mock.call(1, 2)])
    print('Assertion for expected call passed')

    print()
    print('Asserting a call should raise an exception if the call wasn\'t made')
    mock_worked = False
    try:
        mock_dummy.assert_has_calls([mock.call(3, 4)])
    except AssertionError as e:
        mock_worked = True
        print('Expected failure, %s' % e)

    if not mock_worked:
        print('*** Assertion should have failed ***')

    print()
    print('Asserting a call where the assertion has a typo should fail, but '
          'doesn\'t')
    mock_worked = False
    try:
        mock_dummy.typo_assert_has_calls([mock.call(3, 4)])
    except AssertionError as e:
        mock_worked = True
        print('Expected failure, %s' % e)
        print()

    if not mock_worked:
        print('*** Assertion should have failed ***')
        print(mock_dummy.mock_calls)
        print()

if __name__ == '__main__':
    call_dummy()

If I run that code, I get this:

$ python3 mock_assert_errors.py
Asserting a call should work if the call was made
Assertion for expected call passed

Asserting a call should raise an exception if the call wasn't made
Expected failure, Calls not found.
Expected: [call(3, 4)]
Actual: [call(1, 2)]

Asserting a call where the assertion has a typo should fail, but doesn't
*** Assertion should have failed ***
[call(1, 2), call.typo_assert_has_calls([call(3, 4)])]

So, we should have been told that typo_assert_has_calls isn’t a thing, but we didn’t notice because it silently failed. I discovered this when I noticed an assertion with a (smaller than this) typo in its call in a code review yesterday.

I don’t really have a solution to this right now (I’m home sick and not thinking straight), but it would be interesting to see what other people think.
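
One partial workaround which occurs to me (hedged, and it doesn’t fix the library’s behaviour) is to avoid calling assert helpers by name on the mock at all, and instead compare mock_calls directly. A typo’d name then fails loudly instead of silently passing:

# Sketch of a stricter style: compare the recorded calls directly instead of
# calling an assert_* helper whose name might be typo'd. A typo here is a
# normal AttributeError or a failed comparison, not a silent pass.
from unittest import mock

m = mock.Mock()
m(1, 2)

# Fails loudly if the expected calls don't match exactly...
assert m.mock_calls == [mock.call(1, 2)]

# ...and a typo'd attribute on the plain call list raises immediately.
try:
    m.mock_calls.typo_method()
except AttributeError as e:
    print('Typo caught: %s' % e)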

Share

The post I think I found a bug in python’s unittest.mock library appeared first on Made by Mikal.

Python3 venvs for people who are old and grumpy

Share

I’ve been using virtualenvwrapper to make venvs for python2 for probably six or so years. I know it, and understand it. Now some bad man (hi Ramon!) is making me do python3, and virtualenvwrapper just isn’t a thing over there as best I can tell.

So how do I make a venv? It’s really not too bad…

First, install the dependencies:

    git clone https://github.com/yyuu/pyenv.git ~/.pyenv
    echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
    echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
    echo 'eval "$(pyenv init -)"' >> ~/.bashrc
    git clone https://github.com/yyuu/pyenv-virtualenv.git ~/.pyenv/plugins/pyenv-virtualenv
    source ~/.bashrc
    

Now to make a venv, do something like this (in this case, infrasot is the name of the venv):

    mkdir -p ~/.virtualenvs/pyenv-infrasot
    cd ~/.virtualenvs/pyenv-infrasot
    pyenv virtualenv system infrasot
    

You can see your installed venvs like this:

    $ pyenv versions
    * system (set by /home/user/.pyenv/version)
      infrasot
    

Where system is the system installed python, and not a venv. To activate and deactivate the venv, do this:

    $ pyenv activate infrasot
    $ ... stuff you're doing ...
    $ pyenv deactivate
    

I’ll probably write wrappers at some point so that this looks like virtualenvwrapper, but it’s good enough for now.

Share

Giving serial devices meaningful names

Share

This is a hack I’ve been using for ages, but I thought it deserved a write up.

I have USB serial devices. Lots of them. I use them for home automation things, as well as for talking to devices such as the console ports on switches and so forth. For the permanently installed serial devices, one of the challenges is having them show up in predictable places so that the scripts which know how to drive each device are talking to the right place.

For the trivial case, this is pretty easy with udev:

$  cat /etc/udev/rules.d/60-local.rules
KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
    ATTRS{serial}=="A8003Ye7", \
    SYMLINK+="radish"

This says for any USB serial device that is discovered (either inserted post boot, or at boot), if the USB vendor ID, product ID and serial number match the relevant values, symlink the device to “/dev/radish”.

You find out the vendor and product ID from lsusb like this:

$ lsusb
Bus 003 Device 003: ID 0624:0201 Avocent Corp.
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 007 Device 002: ID 0665:5161 Cypress Semiconductor USB to Serial
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 002: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

You can play with inserting and removing the device to determine which of these entries is the device you care about.

So that’s great, until you have more than one device with the same USB serial vendor and product id. Then things are a bit more… difficult.

It turns out that you can have udev execute a command on device insert to help you determine what symlink to create. So for example, I have this entry in the rules on one of my machines:

KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", \
    PROGRAM="/usr/bin/usbtest /dev/%k", \
    SYMLINK+="%c"

This results in /usr/bin/usbtest being run with the path of the device file on its command line for every device detection (of a matching device). The stdout of that program is then used as the name of a symlink in /dev.

So, that script attempts to talk to the device and determine what it is — in my case either a currentcost or a solar panel inverter.
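
I haven’t included my actual usbtest script here, but a minimal sketch of the idea might look something like this in Python (using pyserial, and with entirely made-up probe logic for the two device types):

#!/usr/bin/python3
# Hedged sketch of a udev helper in the spirit of the usbtest idea above:
# probe the serial device passed on the command line, decide what it is,
# and print the symlink name to stdout for udev's %c substitution.
# The probe strings below are illustrative only.

import sys
import serial

def identify(port):
    with serial.Serial(port, 9600, timeout=2) as conn:
        data = conn.read(200)
    if b'<msg>' in data:
        # currentcost meters chatter XML at you constantly
        return 'currentcost'
    return 'inverter'

if __name__ == '__main__':
    print(identify(sys.argv[1]))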

Share

Hugo nominees for 2018

Share

Lifehacker kindly pointed out that the Hugo nominees are out for 2018. They are:

  • The Collapsing Empire, by John Scalzi. I’ve read this one and liked it.
  • New York 2140, by Kim Stanley Robinson. I’ve had a difficult time with Kim’s work in the past, but perhaps I’ll one day read this.
  • Provenance, by Ann Leckie. I liked Ancillary Justice, but failed to fully read the sequel, so I guess we’ll wait and see on this one.
  • Raven Stratagem, by Yoon Ha Lee. I know nothing!
  • Six Wakes, by Mur Lafferty. Again, I know nothing about this book or this author.

So a few there to consider in the future.

Share

The post Hugo nominees for 2018 appeared first on Made by Mikal.

The Collapsing Empire

Share

This is a fun fast read, as is everything by Mr Scalzi. The basic premise here is that of a set of interdependent colonies that are about to lose their ability to trade with each other, and are therefore doomed. Oh, except they don’t know that and are busy having petty trade wars instead. It isn’t a super intellectual read, but it is fun and does leave me wanting to know what happens to the empire…

The Collapsing Empire
John Scalzi
Fiction
Tor Books
March 21, 2017
336 pages

Our universe is ruled by physics and faster than light travel is not possible—until the discovery of The Flow, an extra-dimensional field we can access at certain points in space-time that transport us to other worlds, around other stars. Humanity flows away from Earth, into space, and in time forgets our home world and creates a new empire, the Interdependency, whose ethos requires that no one human outpost can survive without the others. It’s a hedge against interstellar war—and a system of control for the rulers of the empire. The Flow is eternal—but it is not static. Just as a river changes course, The Flow changes as well, cutting off worlds from the rest of humanity. When it’s discovered that The Flow is moving, possibly cutting off all human worlds from faster than light travel forever, three individuals -- a scientist, a starship captain and the Empress of the Interdependency—are in a race against time to discover what, if anything, can be salvaged from an interstellar empire on the brink of collapse. “John Scalzi is the most entertaining, accessible writer working in SF today.” —Joe Hill "If anyone stands at the core of the American science fiction tradition at the moment, it is Scalzi." —The Encyclopedia of Science Fiction, Third Edition

Share

Things I read today: the best description I’ve seen of metadata routing in neutron

Share

I happened upon a thread about OVN’s proposal for how to handle nova metadata traffic, which linked to this very good Suse blog post about how metadata traffic is routed in neutron. I’m just adding the link here because I think it will be useful to others. The OVN proposal is also an interesting read.

Share

Escaping from blosxom

Share

I’ve been running my personal blog on a very hacked version of blosxom for a hilariously long time, and it’s time to escape. I’ve therefore started converting all of the content to wordpress here, and will eventually redirect the old domain to here as well.

Why blogging when it’s so 2000? I’m increasingly disinterested in social media like Facebook and Twitter. I figure if I’m going to note something down that looks like it might be useful to others I’ll put it on ye olde blog instead.

I’m sure the conversion isn’t perfect, and I’ve decided not to migrate very old content that is simply not interesting any more (linux kernel patches from 2004 for example). If you find a post which has converted badly, just comment on it and I’ll do something about it. I am very sure that pretty much no one will do that thing however.

Share

The post Escaping from blosxom appeared first on Made by Mikal.

Nova vendordata deployment, an excessively detailed guide

Share

Nova presents configuration information to instances it starts via a mechanism called metadata. This metadata is made available via either a configdrive, or the metadata service. These mechanisms are widely used via helpers such as cloud-init to specify things like the root password the instance should use. There are three separate groups of people who need to be able to specify metadata for an instance.

User provided data

The user who booted the instance can pass metadata to the instance in several ways. For authentication keypairs, the keypairs functionality of the Nova APIs can be used to upload a key and then specify that key during the Nova boot API request. For less structured data, a small opaque blob of data may be passed via the user-data feature of the Nova API. Examples of such unstructured data would be the puppet role that the instance should use, or the HTTP address of a server to fetch post-boot configuration information from.

Nova provided data

Nova itself needs to pass information to the instance via its internal implementation of the metadata system. Such information includes the network configuration for the instance, as well as the requested hostname for the instance. This happens by default and requires no configuration by the user or deployer.

Deployer provided data

There is however a third type of data. It is possible that the deployer of OpenStack needs to pass data to an instance. It is also possible that this data is not known to the user starting the instance. An example might be a cryptographic token to be used to register the instance with Active Directory post boot — the user starting the instance should not have access to Active Directory to create this token, but the Nova deployment might have permissions to generate the token on the user’s behalf.

Nova supports a mechanism to add “vendordata” to the metadata handed to instances. This is done by loading named modules, which must appear in the nova source code. We provide two such modules:

  • StaticJSON: a module which can include the contents of a static JSON file loaded from disk. This can be used for things which don’t change between instances, such as the location of the corporate puppet server.
  • DynamicJSON: a module which will make a request to an external REST service to determine what metadata to add to an instance. This is how we recommend you generate things like Active Directory tokens which change per instance.

Tell me more about DynamicJSON

Having said all that, this post is about how to configure the DynamicJSON plugin, as I think it’s the most interesting bit here.

To use DynamicJSON, you configure it like this:

  • Add “DynamicJSON” to the vendordata_providers configuration option. This can also include “StaticJSON” if you’d like.
  • Specify the REST services to be contacted to generate metadata in the vendordata_dynamic_targets configuration option. There can be more than one of these, but note that they will be queried once per metadata request from the instance, which can mean a fair bit of traffic depending on your configuration and the configuration of the instance.

The format for an entry in vendordata_dynamic_targets is like this:

<name>@<url>

Where name is a short string not including the ‘@’ character, and where the URL can include a port number if so required. An example would be:

testing@http://127.0.0.1:125

Metadata fetched from this target will appear in the metadata service at a new file called vendor_data2.json, with a path (either in the metadata service URL or in the configdrive) like this:

openstack/2016-10-06/vendor_data2.json

For each dynamic target, there will be an entry in the JSON file named after that target. For example:

        {
            "testing": {
                "value1": 1,
                "value2": 2,
                "value3": "three"
            }
        }

Do not specify the same name more than once. If you do, we will ignore subsequent uses of a previously used name.

The following data is passed to your REST service as a JSON encoded POST:

  • project-id: the UUID of the project that owns the instance
  • instance-id: the UUID of the instance
  • image-id: the UUID of the image used to boot this instance
  • user-data: as specified by the user at boot time
  • hostname: the hostname of the instance
  • metadata: as specified by the user at boot time
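
To make the shape of such a service concrete, here is a minimal hedged sketch of an endpoint that consumes that POST and returns some metadata. This is not the sample service linked below, just an illustration, and it leaves out the keystone authentication middleware discussed in the next section:

# Minimal illustrative vendordata endpoint (not the real sample service).
# Nova POSTs a JSON document with the fields listed above; whatever JSON we
# return ends up under our target's name in vendor_data2.json.
import json
from wsgiref.simple_server import make_server

def application(environ, start_response):
    length = int(environ.get('CONTENT_LENGTH') or 0)
    request = json.loads(environ['wsgi.input'].read(length) or b'{}')

    # Use the instance details to decide what metadata to hand back.
    reply = {'instance': request.get('instance-id'),
             'deployer_says': 'hello from the deployer'}

    body = json.dumps(reply).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'application/json')])
    return [body]

if __name__ == '__main__':
    make_server('0.0.0.0', 8888, application).serve_forever()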

Deployment considerations

Nova provides authentication to external metadata services in order to provide some level of certainty that the request came from nova. This is done by providing a service token with the request — you can then just deploy your metadata service with the keystone authentication WSGI middleware. This is configured using the keystone authentication parameters in the vendordata_dynamic_auth configuration group.

This behaviour is optional however: if you do not configure a service user, nova will not authenticate with the external metadata service.

Deploying the sample vendordata service

There is a sample vendordata service that is meant to model what a deployer would use for their custom metadata at http://github.com/mikalstill/vendordata. Deploying that service is relatively simple:

$ git clone http://github.com/mikalstill/vendordata
$ cd vendordata
$ apt-get install virtualenvwrapper
$ . /etc/bash_completion.d/virtualenvwrapper (only needed if virtualenvwrapper wasn't already installed)
$ mkvirtualenv vendordata
$ pip install -r requirements.txt

We need to configure the keystone WSGI middleware to authenticate against the right keystone service. There is a sample configuration file in git, but it’s configured to work with an openstack-ansible all-in-one install that I set up for my private testing, which probably isn’t what you’re using:

[keystone_authtoken]
insecure = False
auth_plugin = password
auth_url = http://172.29.236.100:35357
auth_uri = http://172.29.236.100:5000
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 5dff06ac0c43685de108cc799300ba36dfaf29e4
region_name = RegionOne

Per the README file in the vendordata sample repository, you can test the vendordata server in a stand alone manner by generating a token manually from keystone:

$ curl -d @credentials.json -H "Content-Type: application/json" http://172.29.236.100:5000/v2.0/tokens > token.json
$ token=`cat token.json | python -c "import sys, json; print json.loads(sys.stdin.read())['access']['token']['id'];"`

We then include that token in a test request to the vendordata service:

curl -H "X-Auth-Token: $token" http://127.0.0.1:8888/

Configuring nova to use the external metadata service

Now we’re ready to wire up the sample metadata service with nova. You do that by adding something like this to the nova.conf configuration file:

[api]
vendordata_providers=DynamicJSON
vendordata_dynamic_targets=testing@http://metadatathingie.example.com:8888

Where metadatathingie.example.com is the IP address or hostname of the server running the external metadata service. Now if we boot an instance like this:

nova boot --image 2f6e96ca-9f58-4832-9136-21ed6c1e3b1f --flavor tempest1 --nic net-name=public --config-drive true foo

We end up with a config drive which contains the information our external metadata service returned (in the example case, handy Carrie Fisher quotes):

# cat openstack/latest/vendor_data2.json | python -m json.tool
{
    "testing": {
        "carrie_says": "I really love the internet. They say chat-rooms are the trailer park of the internet but I find it amazing."
    }
}
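
From inside the instance you can also pull the same document from the metadata service rather than the config drive. A hedged sketch (169.254.169.254 is nova’s standard metadata address; the path matches the config drive one above):

# Sketch: read vendor_data2.json from the metadata service inside the guest.
import json
import requests

url = 'http://169.254.169.254/openstack/latest/vendor_data2.json'
vendordata = json.loads(requests.get(url, timeout=5).text)
print(vendordata.get('testing', {}))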

Share

So you want to setup a Ceph dev environment using OSA

Share

Support for installing and configuring Ceph was added to openstack-ansible in Ocata, so now that I have a need for a Ceph development environment it seems logical that I would build it by building an openstack-ansible Ocata AIO. There were a few gotchas there, so I want to explain the process I used.

First off, Ceph is enabled in an openstack-ansible AIO using a thing I’ve never seen before called a “Scenario”. Basically this means that you need to export an environment variable called “SCENARIO” before running the AIO install. Something like this will do the trick:

    export SCENARIO=ceph
    

Next you need to set the global pg_num in the ceph role or the install will fail. I did that with this patch:

    --- /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:55:07.803635173 +1000
    +++ /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:58:30.417019878 +1000
    @@ -338,7 +338,9 @@
     #     foo: 1234
     #     bar: 5678
     #
    -ceph_conf_overrides: {}
    +ceph_conf_overrides:
    +  global:
    +    osd_pool_default_pg_num: 8
    
     #############
    @@ -373,4 +375,4 @@
     # Set this to true to enable File access via NFS.  Requires an MDS role.
     nfs_file_gw: true
     # Set this to true to enable Object access via NFS. Requires an RGW role.
    -nfs_obj_gw: false
    \ No newline at end of file
    +nfs_obj_gw: false
    

That of course needs to be done after the Ceph role has been fetched, but before it is executed, so in other words after the AIO bootstrap, but before the install.

And that was about it (although of course that took a fair while to work out). I have this automated in my little install helper thing, so I’ll never need to think about it again which is nice.

Once Ceph is installed, you interact with it via the monitor container, not the utility container, which is a bit odd. That said, all you really need is the Ceph config file and the Ceph utilities, so you could move those elsewhere.

    root@labosa:/etc/openstack_deploy# lxc-attach -n aio1_ceph-mon_container-a3d8b8b1
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph -s
        cluster 24424319-b5e9-49d2-a57a-6087ab7f45bd
         health HEALTH_OK
         monmap e1: 1 mons at {aio1-ceph-mon-container-a3d8b8b1=172.29.239.114:6789/0}
                election epoch 3, quorum 0 aio1-ceph-mon-container-a3d8b8b1
         osdmap e20: 3 osds: 3 up, 3 in
                flags sortbitwise,require_jewel_osds
          pgmap v36: 40 pgs, 5 pools, 0 bytes data, 0 objects
                102156 kB used, 3070 GB / 3070 GB avail
                      40 active+clean
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph osd tree
    ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 2.99817 root default
    -2 2.99817     host labosa
     0 0.99939         osd.0        up  1.00000          1.00000
     1 0.99939         osd.1        up  1.00000          1.00000
     2 0.99939         osd.2        up  1.00000          1.00000
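
Since all you really need outside the container is the config file and the client libraries, you can also poke at the cluster from Python via the rados bindings. A quick hedged sketch, assuming python-rados is installed wherever /etc/ceph lives:

    # Sketch: confirm the AIO Ceph cluster is reachable using the rados
    # python bindings and the ceph.conf in /etc/ceph.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    print('cluster id: %s' % cluster.get_fsid())
    print('pools: %s' % cluster.list_pools())
    cluster.shutdown()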
    

Share

Australia and the Commonwealth Games

Australia has been doing exceptionally well at the 2018 Commonwealth Games, held at the Gold Coast, Queensland. We can be very proud of our athletes, not only for their sporting prowess, but also because of their friendly demeanour and wonderful examples of the spirit of sportsmanship. I’m sure we all felt proud when the Australian […]

April 13, 2018

My little robotic pals

Years ago I decided to build an indoor robot with multiple kinects for navigation and a robotic arm for manipulation. It was an interesting time working out how to do this and what is needed to get a mobile base to map and navigate a static and dynamic indoor space. Any young players reading this might think that ROS can just magically make this all happen. There are some interesting issues to discover building your own base and some, um, "issues" shall we say that you will need to address that are not in the books or docs. I won't spoil it here for the new players other than to say be prepared to be persistent. 


There are two active wheels at the front and a single drag wheel at the back about 12 inches behind the front wheels. I wrote the code to control the arm myself as custom ROS nodes. A great trick here is you can inject sinusoidal movement by injecting a shim ROS node to take one target and smoothly move towards it.
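
The shim idea is simple enough to sketch outside of ROS: given the latest target from upstream, step the commanded position towards it along a half-cosine so the joint eases in and out rather than jerking. This is purely an illustration of the trick, not the code running on the robot:

# Illustrative sketch of the "shim" smoothing trick: instead of jumping a
# joint straight to a new target, interpolate along a half-cosine so velocity
# ramps up and back down. Not the actual ROS node, just the idea.
import math

def sinusoidal_steps(current, target, steps=50):
    """Yield intermediate positions easing from current to target."""
    for i in range(1, steps + 1):
        # blend goes 0 -> 1 following a half-cosine (slow, fast, slow)
        blend = (1 - math.cos(math.pi * i / steps)) / 2.0
        yield current + (target - current) * blend

if __name__ == '__main__':
    for position in sinusoidal_steps(0.0, 1.0, steps=10):
        print('%.3f' % position)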

Now I have a new friend for outdoor activity, the "hound bot". The little furry friend is still sans hair but has gps, imu, rc control override, and a ps4 eye camera mounted for depth perception and mapping. Taking a leaf out of one of the big car makers’ book and only using cameras for navigation. But for me it is about cost since a good lidar is still much too expensive for the hound.


The hound is a sort of monocoque where the copper looking square part at the front is part of a 1/4 inch aircraft grade alloy solid welded chassis that extends the length of the robot. The hound can do about 20km/h and is around 20kg in heft. The electronics bay in the middle is protected by a reinforced carbon fibre layup that I did. Mixing materials for fun and slight weight loss.

One great part about doing this "because I want to" is that I am unbounded. Academic institutions might say that building robust alloy shells is not a worthwhile task and only the abstract algorithms matter. I get to pick and choose what matters based purely on what is interesting, what is hard to do (yay!), and what will help me get the robot to perform a task that I want.

The hound will get gripper(s) so it can autonomously "fetch" things for me such as the mail or go find and pick up objects on the lawn.

April 12, 2018

Leadership, and teamwork.

Photo by Mohamed Abd El Ghany - Women protestors in Tahrir Square, Egypt 2013.

I'm angry and defensive. I don't know why. So I'm trying hard to figure that out right now.

Here's some words.

I'm writing these words for myself to try and figure this out.
I'm hoping these words might help make it clear.
I'm fearful these words will make it worse.

But I don't want to be silent about this.

Content Warning: This post refers to genocide.

This is about a discussion at the teamwork and leadership workshop at DrupalCon. For perhaps 5 mins within a 90 minute session we talked about Hitler. It was an intensely thought provoking, and uncomfortable 5 minute conversation. It was nuanced. It wasn't really tweetable.

On Holocaust memorial day, it seems timely to explore whether or not we should talk about Hitler when exploring the nature of leadership. Not all leaders are good. Call them dictators, call them tyrants, call them fascists, call them evil. Leadership is defined differently by different cultures, at different times, and in different contexts.

Some people in the room were upset and disgusted that we had that conversation. I'm really very deeply sorry about that.

Some of them then talked about it with others afterwards, which is great. It was a confronting conversation, and one, frankly, we should all be having as genocide and fascism exist in very real ways in the very real world.

But some of those they spoke with, who weren't there, seem to have extrapolated from that conversation that it was something different to what I experienced in the room. I feel they formed opinions that I can only call, well, what words can I call those opinions? Uninformed? Misinformed? Out of context? Wrong? That's probably unfair, it's just my perspective. But from those opinions, they also made assumptions, and turned those assumptions into accusations.

One person said they were glad they weren't there, but clearly happy to criticise us from afar on twitter. I responded that I thought it was a shame they didn't come to the workshop, but did choose to publicly criticise our work. Others responded to that saying this was disgusting, offensive, unacceptable and inappropriate that we would even consider having this conversation. One accused me of trying to shut down the conversation.

So, I think perhaps the reason I'm feeling angry and defensive, is I'm being accused of something I don't think I did.

And I want to defend myself.

I've studied World War Two and the Genocide that took place under Hitler's direction.

My grandmother was arrested in the early 1930's and held in a concentration camp. She was, thankfully, released and fled Germany to Australia as a refugee before the war was declared. Her mother was murdered by Hitler. My grandfather's parents and sister were also murdered by Hitler.

So, I guess I feel like I've got a pretty strong understanding of who Hitler was, and what he did.

So when I have people telling me, that it's completely disgusting to even consider discussing Hitler in the context of examining what leadership is, and what it means? Fuck that. I will not desist. Hitler was a monster, and we must never forget what he was, or what he did.

During silent reflection on a number of images, I wrote this note.

"Hitler was a powerful leader. No question. So powerful, he destroyed the world."

When asked if they thought Hitler was a leader or not, most people in the room, including me, put up their hand. We were wrong.

The four people who put their hand up to say he was NOT a leader were right.

We had not collectively defined leadership at that point. We were in the middle of a process doing exactly that.

The definition we were eventually offered is that leaders must care for their followers, and must care for people generally.

At no point, did anyone in that room, consider the possibility that Hitler was a "Good Leader" which is the misinformed accusation I most categorically reject.

Our facilitator, Adam Goodman, told us we were all wrong, except the four who rejected Hitler as an example of a Leader, by saying, that no, he was not a leader, but yes, he was a dictator, yes he was a tyrant. But he was not a leader.

Whilst I agree, and was relieved by that reframing, I would also counter argue that it is English semantics.

Someone else also reminded us, that Hitler was elected. I too, was elected to the board of the Drupal Association, I was then appointed to one of the class Director seats. My final term ends later this year, and frankly, right now, I'm kind of wondering if I should leave right now.

Other people shown in the slide deck were Oprah Winfrey, Angela Merkel, Rosa Parks, Serena Williams, Marin Alsop, Sonia Sotomayor, a woman in military uniform, and a large group of women protesting in Tahrir Square in Egypt.

It also included Gandhi, and Mandela.

I observed that I felt sad I could think of no woman that I would list in the same breath as those two men.

So... for those of you who judged us, and this workshop, from what you saw on twitter, before having all the facts?
Let me tell you what I think this was about.

This wasn't about Hitler.

This was about leadership, and learning how we can be better leaders. I felt we were also exploring how we might better support the leaders we have, and nurture the ones to come. And I now also wonder how we might respectfully acknowledge the work and effort of those who've come and gone, and learn to better pass on what's important to those doing the work now.

We need teamwork. We need leadership. It takes collective effort, and most of all, it takes collective empathy and compassion.

Dries Buytaert was the final image in the deck.

Dries shared these 5 values and their underlying principles with us to further explore, discuss and develop together.

Prioritize impact
Impact gives us purpose. We build software that is easy, accessible and safe for everyone to use.

Better together
We foster a learning environment, prefer collaborative decision-making, encourage others to get involved and to help lead our community.

Strive for excellence
We constantly re-evaluate and assume that change is constant.

Treat each other with dignity and respect
We do not tolerate intolerance toward others. We seek first to understand, then to be understood. We give each other constructive criticism, and are relentlessly optimistic.

Enjoy what you do
Be sure to have fun.

I'm sorry to say this, but I'm really not having fun right now. But I am much clearer about why I'm feeling angry.

Photo Credit "Protesters against Egyptian President Mohamed Morsi celebrate in Tahrir Square in Cairo on July 3, 2013. Egypt's armed forces overthrew elected Islamist President Morsi on Wednesday and announced a political transition with the support of a wide range of political and religious leaders." Mohamed Abd El Ghany Reuters.

Linux Security Summit North America 2018 CFP Announced


The CFP for the 2018 Linux Security Summit North America (LSS-NA) is announced.

LSS will be held this year as two separate events, one in North America (LSS-NA), and one in Europe (LSS-EU), to facilitate broader participation in Linux Security development. Note that this CFP is for LSS-NA; a separate CFP will be announced for LSS-EU in May. We encourage everyone to attend both events.

LSS-NA 2018 will be held in Vancouver, Canada, co-located with the Open Source Summit.

The CFP closes on June 3rd and the event runs from 27th-28th August.

To make a CFP submission, click here.

April 10, 2018

Post-work: the radical idea of a world without jobs | The Guardian

April 03, 2018

Net Promoter Score: The Most Useless Metric of All

A number of organisations use a customer service metric known as "Net Promoter", first suggested in the Harvard Business Review. Indeed, it is so common that apparently two-thirds of Fortune 500 companies are using the metric. It simply asks a single question: "How likely is it that you would recommend [company X] to a friend or colleague?". The typical scoring for the answer is a zero to ten scale, with a value of 9 or 10 considered a "promoter" score, a 7 or 8 a "neutral" score, and a 0 to 6 a "detractor" score. The Net Promoter Score is calculated by subtracting the percentage of responders who are Detractors from the percentage of responders who are Promoters. It is a simple and blunt instrument and it's entirely the wrong tool to use.

To begin with, it fails at the most elementary mathematics. There is nothing to be gained from offering an 11 point range from 0-10, yet only calculating the score from the three buckets of promoter, neutral, and detractor. In the Net Promoter system, a score of 6 is just as much a detractor as a responder who provides a score of 0, despite what should be a glaringly obvious difference in reaction. It is stunning that a journal with the alleged quality of the Harvard Business Review didn't notice this - let alone the authors of the article.

Secondly, it conflates subjective responses with a quantitative value. What does a score of "6" mean anyway? According to the designers of the NPS, it's a detractor, a fail. Yet there is no guarantee that a responder interprets the value that way. In most assessment systems a "6" is a pass - and more to the point a "7" or "8" is considered a distinction grade; the latter would result in a cum laude or even magna cum laude in most universities. But in the NPS, it is merely a "neutral" result. The problem being of course, unless the individual is provided qualitative guidance with the values (which most organisations or applications don't do), there is no way of determining what their subjective score of 0-10 really reflects. Numerical values cannot be translated to qualitative values unless all parties are provided a means for correlation.

Thirdly, a single-value NPS provides no information to act upon. What does it mean that a respondent would or would not recommend a company, product, or service? Even assuming that a gradation is in place that matches values with the scale, and qualitative assessment to numerical values, the answers still provide nothing to act upon. Is it the company or service as a whole that has resulted in the evaluation? Is it a part of the company or service? Could it be, for a detractor, that the product or service was something that they thought they needed, but actually didn't? Unless the score is supplemented with an opportunity for a responder to explain their evaluation, there is no way that it creates an opportunity for action.

Given these errors, it is perhaps unsurprising that an unmodified "Net Promoter" method of measuring customer satisfaction ranked last in an extensive study by Schneider et al. in terms of predictive capability. Granted, some information is better than no information, and people do prefer shorter surveys to longer surveys. But as designed in its pure form, using a Net Promoter score is almost as bad as not collecting respondent data at all. A short survey which breaks up the item being reviewed into equal composite components, which guides subjective values to numerical values, which provides an opportunity for free-text qualitative information, and which measures metrics along the scale (with mean and distribution) will always be far more effective measurement of both a respondent's satisfaction, and an organisation's opportunity for action. As it is writ, the NPS should be avoided in all circumstances.
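
To make the complaint concrete, here is a small worked example (a hedged illustration with made-up response sets): two groups of respondents can produce exactly the same Net Promoter Score while their means and distributions tell very different stories.

# Worked example: two made-up sets of ten responses with identical NPS but
# clearly different satisfaction, to show how much the single score hides.
from statistics import mean, pstdev

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

group_a = [6, 6, 6, 6, 6, 9, 9, 9, 9, 7]   # "detractors" who all scored 6
group_b = [0, 0, 0, 0, 0, 9, 9, 9, 9, 7]   # genuinely unhappy detractors

for name, scores in (('A', group_a), ('B', group_b)):
    print('group %s: NPS=%.0f mean=%.1f stdev=%.1f'
          % (name, nps(scores), mean(scores), pstdev(scores)))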

Looking back on starting Libravatar

As noted on the official Libravatar blog, I will be shutting the service down on 2018-09-01.

It has been an incredible journey but Libravatar has been more-or-less in maintenance mode for 5 years, so it's somewhat outdated in its technological stack and I no longer have much interest in doing the work that's required every two years when migrating to a new version of Debian/Django. The free software community prides itself on transparency and so while it is a difficult decision to make, it's time to be upfront with the users who depend on the project and admit that the project is not sustainable in its current form.

Many things worked well

The most motivating aspect of running Libravatar has been the steady organic growth within the FOSS community. Both in terms of traffic (in March 2018, we served a total of 5 GB of images and 12 GB of 302 redirects to Gravatar), integration with other sites and projects (Fedora, Debian, Mozilla, Linux kernel, Gitlab, Liberapay and many others), but also in terms of users:

In addition, I wanted to validate that it is possible to run a FOSS service without having to pay for anything out-of-pocket, so that it would be financially sustainable. Hosting and domain registrations have been entirely funded by the community, thanks to the generosity of sponsors and donors. Most of the donations came through Gittip/Gratipay and Liberapay. While Gratipay has now shut down, I encourage you to support Liberapay.

Finally, I made an effort to host Libravatar on FOSS infrastructure. That meant shying away from popular proprietary services in order to make a point that these convenient and well-known services aren't actually needed to run a successful project.

A few things didn't pan out

On the other hand, there were also a few disappointments.

A lot of the libraries and plugins never implemented DNS federation. That was the key part of the protocol that made Libravatar a decentralized service but unfortunately the rest of the protocol was much easier to implement and therefore many clients stopped there.
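
For anyone curious what that missing piece looked like from a client's point of view, here is a rough, hedged sketch of the federation lookup (check for an SRV record on the email's domain first, fall back to the central server), using dnspython. Treat the details as illustrative rather than as a reference implementation of the protocol:

# Rough sketch of Libravatar-style DNS federation from the client side:
# hash the email, look for an _avatars._tcp SRV record on the email's domain,
# and only fall back to the central server if the domain publishes nothing.
# Illustrative only -- see the protocol documentation for the real rules.
import hashlib
import dns.resolver   # dnspython 2.x

def avatar_url(email, size=80):
    email = email.strip().lower()
    domain = email.rsplit('@', 1)[1]
    digest = hashlib.md5(email.encode('utf-8')).hexdigest()
    try:
        answer = dns.resolver.resolve('_avatars._tcp.' + domain, 'SRV')
        record = next(iter(answer))
        host = str(record.target).rstrip('.')
        return 'http://%s:%d/avatar/%s?s=%d' % (host, record.port, digest, size)
    except Exception:
        # No federation record published: use the central instance.
        return 'https://seccdn.libravatar.org/avatar/%s?s=%d' % (digest, size)

print(avatar_url('someone@example.com'))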

In addition, it turns out that while the DNS system is essentially a federated caching system for IP addresses, many DNS resolvers aren't doing a good job caching records and that created unnecessary latency for clients that chose to support DNS federation.

The main disappointment was that very few people stepped up to run mirrors. I designed the service so that it could scale easily in the same way that Linux distributions have coped with increasing user bases: "ftp" mirrors. By making the actual serving of images only require Apache and mod_rewrite, I had hoped that anybody running Apache would be able to add an extra vhost to their setup and start serving our static files. A few people did sign up for this over the years, but it mostly didn't work. Right now, there are no third-party mirrors online.

The other aspect that was a little disappointing was the lack of code contributions. There were a handful from friends in the first couple of months, but it's otherwise been a one-man project. I suppose that when a service works well for what people use it for, there are fewer opportunities for contributions (or less desire for them). The fact that the dev environment setup was not the easiest could definitely be a contributing factor, but I've only ever had a single person ask about it so it's not clear that this was the limiting factor. Also, while our source code repository was hosted on Github and open for pull requests, we never even received a single drive-by contribution, hinting at the fact that Github is not the magic bullet for community contributions that many people think it is.

Finally, it turns out that it is harder to delegate sysadmin work (you need root, for one thing) which consumes the majority of the time in a mature project. The general administration and maintenance of Libravatar has never moved on beyond its core team of one. I don't have a lot of ideas here, but I do want to join others who have flagged this as an area for "future work" in terms of project sustainability.

Personal goals

While I was originally inspired by Evan Prodromou's vision of a suite of FOSS services to replace the proprietary stack that everybody relies on, starting a free software project is an inherently personal endeavour: the shape of the project will be influenced by the personal goals of the founder.

When I started the project in 2011, I had a few goals.

This project personally taught me a lot of different technologies and allowed me to try out various web development techniques I wanted to explore at the time. That was intentional: I chose my technologies so that even if the project was a complete failure, I would still have gotten something out of it.

A few things I've learned

I learned many things along the way, but here are a few that might be useful to other people starting a new free software project:

  • Speak about your new project at every user group you can. It's important to validate that you can get other people excited about your project. User groups are a great (and cheap) way to kickstart your word of mouth marketing.

  • When speaking about your project, ask simple things of the attendees (e.g. create an account today, join the IRC channel). Often people want to support you but they can't commit to big tasks. Make sure to take advantage of all of the support you can get, especially early on.

  • Having your friends join (or lurk on!) an IRC channel means it's vibrant, instead of empty, and there are people around to field simple questions or tell people to wait until you're around. Nobody wants to be alone in a channel with a stranger.

Thank you

I do want to sincerely thank all of the people who contributed to the project over the years:

  • Jonathan Harker and Brett Wilkins for productive hack sessions in the Catalyst office.
  • Lars Wirzenius, Andy Chilton and Jesse Noller for graciously hosting the service.
  • Christian Weiske, Melissa Draper, Thomas Goirand and Kai Hendry for running mirrors on their servers.
  • Chris Forbes, fr33domlover, Kang-min Liu and strk for writing and maintaining client libraries.
  • The Wellington Perl Mongers for their invaluable feedback on an early prototype.
  • The #equifoss group for their ongoing support and numerous ideas.
  • Nigel Babu and Melissa Draper for producing the first (and only) project stickers, as well as Chris Cormack for spreading them so effectively.
  • Adolfo Jayme, Alfredo Hernández, Anthony Harrington, Asier Iturralde Sarasola, Besnik, Beto1917, Daniel Neis, Eduardo Battaglia, Fernando P Silveira, Gabriele Castagneti, Heimen Stoffels, Iñaki Arenaza, Jakob Kramer, Jorge Luis Gomez, Kristina Hoeppner, Laura Arjona Reina, Léo POUGHON, Marc Coll Carrillo, Mehmet Keçeci, Milan Horák, Mitsuhiro Yoshida, Oleg Koptev, Rodrigo Díaz, Simone G, Stanislas Michalak, Volkan Gezer, VPablo, Xuacu Saturio, Yuri Chornoivan, yurchor and zapman for making Libravatar speak so many languages.

I'm sure I have forgotten people who have helped over the years. If your name belongs in here and it's not, please email me or leave a comment.

April 02, 2018

Audiobooks – March 2018

The Actor’s Life: A survival guide by Jenna Fischer

Combination of advice for making it as an actor and a memoir of her experiences. Interesting and enjoyable 8/10

One Man’s Wilderness: An Alaskan Odyssey by Sam Keith

Based on the journals of Richard Proenneke, who built a cabin in the Alaskan wilderness and lived there for 16 months (& returned in later years). Interesting & I’m a little inspired 7/10

The Interstellar Age: The Story of the NASA Men and Women Who Flew the Forty-Year Voyager Mission by Jim Bell

Pretty much what the title says. Very positive throughout and switching between the science and profiles of the people smoothly. 8/10

Richard Nixon: The Life by John A Farrell

Comprehensive but balanced biography. Doesn’t shy away from Nixon’s many many problems but also covers his accomplishments and positive side (especially early in his career). 8/10

The Adventures of Sherlock Holmes, Book I – Arthur Conan Doyle – Read by David Timson

4 Stories unabridged. Reading is good but drop a point since the music is distracting at fast playback. 7/10

Death by Black Hole: And Other Cosmic Quandaries by Neil deGrasse Tyson

42 Essays on mainly space-related topics. Some overlap but pretty good, 10 years old so missing a few newer developments but good introduction. 8/10

The Sports Gene: Inside the Science of Extraordinary Athletic Performance by David Epstein

Good wide-ranging book on nature vs nurture in sports performance, how genes for athletic performance are not that simple & how little we know. 9/10

The Residence: Inside the Private World of the White House by Kate Andersen Brower

Gossipy account from interviewing various ex-staff (maids, cooks, butlers). A different angle from what I get from other accounts. 7/10

Tanker Pilot: Lessons from the Cockpit by Mark Hasara

Account of the author flying & planning aerial refueling operations during the Gulf wars & elsewhere. A bit of business advice but that is unobtrusive. No actual politics 7/10

The Big Short: Inside the Doomsday Machine by Michael Lewis

Account of various people who made billions shorting the mortgage market in the run up to 2008. Fun and easy for layman to follow. 8/10

Driverless: Intelligent Cars and the Road Ahead by Hod Lipson

Listening to it the week a driverless car first killed a pedestrian. Fairly good intro/history/overview although fast changing topic so will go out of date quickly. 7/10

Journeys in English by Bill Bryson

A series of radio shows. I found the music & random locations annoying. Had to slow it down due to varied voices, accents and words. Interesting despite that, 7/10

Share

The Gantry is attached!

Now the fourth axis finally looks at home on the CNC plate. The new gantry sides are almost 100mm taller than the old ones and share a similar shape. The time while the gantry was off the machine was a good opportunity to attach the new Z-Axis, which gains a similar amount of Z travel. Final adjustments of where the spindle sits in its holder are still needed, but it makes sense for the cutting edge to be fairly high up when the Z-Axis is fully retracted, as shown.




After a day of great success early on, a day of great problem solving arrived before the attachment was possible. The day of great success involved testing the two new sides to see if, or how well, they attached to the mount points at the base of the machine. These holes in the gantry were hand marked, drilled, and tapped, so there was a good chance that they were off target enough to not work well. But those all went fine.

The second success was mounting the Z-Axis to the existing points on the gantry. I had in the back of my mind the thought that one side (the three holes on the bottom of the mount) would line up and attach fine but the top holes would be out of alignment. Both of these plates, seen in horizontal in the image above, were made by CNC so the holes should be where I intended. Though these plates were both mounted to the Z-Axis, and the bottom plate goes right through to the lower steel bracket, so the alignment might not have been 100%. I registered both plates to the smooth side of the spindle backing plate so the alignment in that axis should have been ok. To great surprise and joy the top holes also aligned perfectly and the second phase fell into place.

It was only when putting the new sides onto the gantry that interesting things started to happen. I will have a new blog post on that part soon and likely a video of the problems and solutions for that part. One thing I will say now is that it helps to have washers, bolts, and spare skate bearings on hand for this process depending on how you have designed your far side gantry upright.



March 27, 2018

Keystone Federated Swift – False Federation

This is the second post in my series of posts on Swift in a Keystone federated environment, and the first post where I’ll walk through the first environment: the one I’m calling ‘False Federation’. For details on this series of posts, including the rationale, see my last introductory post.

 

False Federation

This first environment doesn’t actually use Keystone federation; instead it uses an existing ability of Swift to have more than one authentication middleware in the proxy pipeline, which is why I’m calling this ‘False Federation’.

Swift resellers and the reseller_prefix

Swift, in an OpenStack environment, talks to Keystone for identity management through Keystone’s authtoken and the Swift keystoneauth middlewares. However, Keystone isn’t required. Swift was designed to be a complete standalone storage solution; in fact many Swift deployments use different (like swauth) and sometimes custom authentication middlewares. This way people can easily integrate Swift into their own environments.

If you’ve spent any time setting up authentication middlewares (like keystoneauth) in Swift, you’ve undoubtedly come across Swift’s reseller_prefix option, and maybe thought to yourself: why?

 

As I mentioned earlier, from the start Swift was designed to be an end to end standalone storage system. One of the features it has always supported is having more than one authentication middleware in the pipeline. And if you have more than one, then you need a way to distinguish which authentication middleware handles which account. This is what the reseller_prefix does: Swift will match the reseller_prefix prefixed to the account name with the authentication middleware that is to handle it.

This is actually a really powerful feature. It means you could resell your storage solution to other parties to manage accounts, or connect up different parts of your organisation if, say, for some reason you have more than 1 source you want to use as an authentication service.

Some authentication middlewares, like keystoneauth, can even cover more than 1 reseller_prefix. This is how service tokens tend to be deployed, so a service can have its own namespace of users for isolation and the data is safe from accidental deletion.

And yes, it’s also possible to set an empty reseller_prefix.

 

Multiple Keystone middlewares

Having got the idea of reseller_prefixes out of the way, this is the first potential solution and the idea behind ‘False Federation’. If you have a large Swift cluster, you could place in the proxy pipeline the required authentication middlewares for each separate OpenStack environment you want to connect it to.

 

NOTE: There are 2 middlewares needed to connect to a single Keystone instance, Keystone’s authtoken and then Swift’s keystoneauth. Other authentication middlewares, like swauth and many custom ones, are only 1 middleware, so a little less confusing.

 

Before I get into the configuration, I should also mention something before you run off and give it a go. The current upstream keystoneauth in Swift doesn’t support being placed multiple times in a pipeline. Why? Because of the way it places itself in the wsgi environment. But never fear, I have written a patch to correct this behavior specifically for this set of experiments, and when I get a chance to clean it up and write some tests I’ll push it upstream. In the meantime you can grab hold of the patch here.

 

I’m not going into huge amounts of detail on how to connect to Keystone; the Swift documentation and installation guides cover that well. Really you’re just duplicating exactly that, but for each Keystone endpoint you want to connect to. If you need detailed instructions, then let me know. They say an image is worth more than a 1000 words, so here is how it’s done in 1 pretty diagram:

The run down is:

  • Edit your proxy-server.conf on each node, and create ‘[filter:authtoken]’ and ‘[filter:keystoneauth]’ sections for each Keystone endpoint, noting that the names of the filters have to be different.
  • Each ‘[filter:authtoken]’ will point to an endpoint, and its corresponding ‘[filter:keystoneauth]’ will have a different reseller_prefix, which will need to be matched in the Object Storage endpoint in that Keystone server’s service catalog (see the project documentation).
  • You then place these filters in the proxy pipeline. When placing a pair, the authtoken must come before its keystoneauth partner, and the pair’s keystoneauth must also appear before the next authtoken (like in the picture, and in the sketch following this list).
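
To make that concrete, here is a rough sketch of the relevant parts of a proxy-server.conf. The filter names and reseller prefixes follow the examples used later in this post; the endpoints are invented for illustration, and credentials, other pipeline middleware and most options are omitted:

[pipeline:main]
pipeline = catch_errors cache ... authtoken1 keystoneauth1 authtoken2 keystoneauth2 ... proxy-server

[filter:authtoken1]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = https://keystone-blue.example.com:5000
auth_url = https://keystone-blue.example.com:35357
# ... auth_type, credentials, etc. for the "blue" Keystone ...

[filter:keystoneauth1]
use = egg:swift#keystoneauth
reseller_prefix = KEY

[filter:authtoken2]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = https://keystone-green.example.com:5000
auth_url = https://keystone-green.example.com:35357
# ... auth_type, credentials, etc. for the "green" Keystone ...

[filter:keystoneauth2]
use = egg:swift#keystoneauth
reseller_prefix = AUTH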

 

NOTE: I’ve left off a bunch of middleware options in the picture to keep it small and readable.

 

Now if I send the following GET requests:
GET /v1/KEY_matt/pictures/cat.png
GET /v1/AUTH_matt/pictures/cat.png

 

The first would be authenticated on the blue keystone (or via ‘authtoken1 keystoneauth1’) and the second with the green keystone (or via ‘authtoken2 keystoneauth2’).

 

Cons

This approach was to demonstrate what Swift can already do. But there are some limitations to this approach, which as always depends on your situation. Keystone’s authtoken middleware will always go and try to authenticate, so it would add a bunch of latency to each request going through the proxy. If they are close, maybe that’s ok. But if this was a geographical cluster with Keystones all around the world then… ouch. If using a custom middleware, you’d just skip reseller_prefixes that don’t relate to you (like keystoneauth does).

 

Maybe you could have a different Swift proxy in each “region” that only points to the local Keystone, so you are only authenticating locally.. ok. But then a user can’t come and access their data if they happen to be in a different region.. even though you’re talking to the same cluster.

So really what we want to do is take advantage of Keystone federation, where we only ever have to talk to 1 instance, the local one for the region the Swift proxy lives in. That way we get the speed and the ability to access our data from anywhere.

 

Next time…

So in the next post we’ll add real Keystone federation, but assume each federated environment is its own cluster, each with its own Swift cluster. In that case we can take advantage of another Swift feature, container sync.

Then the final post will be what we really want: 1 large Swift cluster with multiple federated Keystone OpenStack clusters. But that will involve fiddling with the federation sync metadata and will need a more detailed explanation of how Swift authentication works. So first I want to cover what Swift can do simply with the tools it comes with!

March 26, 2018

New Mirobot v3 arrival in Australia

Here’s our batch of brand new Mirobot v3 kits on their arrival in Australia, dozens stacked. Since the v3 have a neat acrylic frame, I think I’ll do a proper “unboxing” and first build video of one soon, so you can see for yourself what this is about. Many classes of year 5 and 6 […]

March 24, 2018

LUV Main April 2018 Meeting: Write docs like a software developer using the Linux toolchain

Apr 3 2018 18:30
Apr 3 2018 20:30
Apr 3 2018 18:30
Apr 3 2018 20:30
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE NEW LOCATION

Tuesday, April 3, 2018

6:30 PM to 8:30 PM
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

Linux Users of Victoria is a subcommittee of Linux Australia.

April 3, 2018 - 18:30

read more

March 23, 2018

Keystone Federated Swift – A series of posts

Matt Treinish and I proposed a presentation at the OpenStack Summit in Vancouver in May; it was accepted but on standby, which simply means we have a lightning talk slot (10 minutes), but may be bumped up to a full slot based on how other presenters go (visa issues, pull outs, etc).

Anyway, 10 minutes won’t do the topic justice, so I thought what better than to also post details as I work through them here. Some of what I say may end up in the presentation, or may not. All I know is I’ve been asked a few times how to set up Swift in a Keystone federated environment. Let’s face it, Swift scales to a global cluster no worries, however other OpenStack components may have trouble doing the same. So federating a bunch of different regions and treating them as their own clouds makes heaps of sense. Great, then what’s the best way of integrating Swift into this federated environment?

 

My current idea is to walk through 3 initial topologies. The first I’ll call ‘false federation’, where we simply use Swift’s ability to have multiple authentication middlewares, as different resellers, to authenticate to multiple Keystone endpoints. For those playing along at home, the keystone middleware currently doesn’t let you do this, but I have a trivial patch that fixes this.. and I plan to push it upstream as soon as I have a chance to clean it up and add tests.

 

The second is separate Swift clusters in each cloud, using Swift’s container sync to move objects so you still have access to your data on any cloud you visit… eventually.

 

And finally the third is what we’d all want: 1 large Swift cluster that all clouds talk to, so no matter where you are, there your data is. Plus it gives better durability, dispersion, and everything else we want out of a Swift cluster. The trick here will be making sure the same Swift account name is used no matter which Keystone you talk to, and I assume this will come down to how you configure what you share during federated token exchange. I’ll leave this as the last post, and we still need to play around to iron it out.. but obviously it is the dream.

These diagrams are obviously overly simplistic, but I hope you get the idea.

The next post will be the ‘False federation’ approach seeing as I already have a swift keystoneauth middleware patch that solves this.

March 20, 2018

You won’t find us on Facebook

I made these back in August 2016 (complete with lovingly hand-drawn thumb and middle finger icons), but it seems appropriate to share them again now. The images are CC-BY-SA, so go nuts, or you can grab them in sticker form from Redbubble.

facebook-thumb-blue

facebook-thumb-black

facebook-finger-blue

facebook-finger-black

March 19, 2018

Waiting for the other Gantry to drop

When cutting the first side of the gantry I used the "traditional" hold down method of clamps and the like on the edges. That works ok when you have a much larger piece of alloy than the part you are cutting. In this case there isn't much spare in the height axis (left/right as shown) and, as you can see, very little in the x axis (up/down in the below image). My clamping allowed for more vibration on the first cut than I like, so I changed how I went about the second side of the gantry.

For the second gantry, after flipping things in the software so that I was coming in from the other side, I drilled out 4 M6 holes and countersank them.


This way the bolts (M6x40) were almost flush with the work piece. These bolts go straight through the plywood and connect with t-slot nuts in the alloy bed of the CNC, so there isn't much scope to use bolts that are too long for this application. Countersinking the bolts helps on a machine with limited Z travel, as using some non-stubby drill bits really locks down the amount of free play and clearance you can get. The downside of this work holding is that you are left with 4 M6 holes that don't really need to be in the final product.

In this case it doesn't matter as I can use them and a new plate to mount one or two cameras on the back gantry facing forwards. I have found that the best vantage for CNC viewing is when not in the same room and looking at the video streams.

In future jobs I might move the countersunk bolts to the edge so they are not on the final work piece.

So now all I have to do is free this piece from the waste, tap a bunch of M5 holes, drill and tap 5 holes on 3 sides of the new gantry pieces, and I'm getting close to loading it on.

March 18, 2018

Dynamic DNS on your own domain

I recently moved my dynamic DNS hostnames from dyndns.org (now owned by Oracle) to No-IP. In the process, I moved all of my hostnames under a sub-domain that I control in case I ever want to self-host the authoritative DNS server for it.

Creating an account

In order to use my own existing domain, I registered for the Plus Managed DNS service and provided my top-level domain (fmarier.org).

Then I created a support ticket to ask for the sub-domain feature. Without that, No-IP expects you to delegate your entire domain to them, whereas I only wanted to delegate *.dyn.fmarier.org.

Once that got enabled, I was able to create hostnames like machine.dyn in the No-IP control panel. Without the sub-domain feature, you can't have dots in hostnames.

I used a bogus IP address (e.g. 1.2.3.4) for all of the hostnames I created in order to easily confirm that the client software is working.

DNS setup

On my registrar's side, here are the DNS records I had to add to delegate anything under dyn.fmarier.org to No-IP:

dyn NS ns1.no-ip.com.
dyn NS ns2.no-ip.com.
dyn NS ns3.no-ip.com.
dyn NS ns4.no-ip.com.
dyn NS ns5.no-ip.com.

Client setup

In order to update its IP address whenever it changes, I installed ddclient on each of my machines:

apt install ddclient

While the ddclient package won't help you configure your No-IP service during installation or enable the web IP lookup method, this can all be done by editing the configuration after the fact.

I put the following in /etc/ddclient.conf:

ssl=yes
protocol=noip
use=web, web=checkip.dyndns.com, web-skip='IP Address'
server=dynupdate.no-ip.com
login=myusername
password='Password1!'
machinename.dyn.fmarier.org

and the following in /etc/default/ddclient:

run_dhclient="false"
run_ipup="false"
run_daemon="true"
daemon_interval="3600"

Then restart the service:

systemctl restart ddclient.service

Note that you do need to change the default update interval or the checkip.dyndns.com server will ban your IP address.

Testing

To test that the client software is working, wait 6 minutes (there is an internal check which cancels any client invocations within 5 minutes of another), then run it manually:

ddclient --verbose --debug

The IP for that machine should now be visible on the No-IP control panel and in DNS lookups:

dig +short machinename.dyn.fmarier.org

Auto apply latest package updates on OpenWrt (LEDE Project)

Running Linux on your router and wifi devices is fantastic, but it’s important to keep them up-to-date. This is how I auto-update my devices with the latest packages from OpenWrt (but not firmware, I still do that manually when there’s a new release).

This is a very simple shell script which uses OpenWrt’s package manager to fetch a list of updates and then install them, rebooting the machine if that was successful. The log file is served up over http, in case you want to get the log easily to see what’s been happening (assuming you’re running the uhttpd service).

Make a directory to hold the script.
root@firewall:~# mkdir -p /usr/local/sbin

Make the script.
root@firewall:~# cat > /usr/local/sbin/update-system.sh << \EOF
#!/bin/ash
opkg update
PACKAGES="$(opkg list-upgradable |awk '{print $1}')"
if [ -n "${PACKAGES}" ]; then
  opkg upgrade ${PACKAGES}
  if [ "$?" -eq 0 ]; then
    echo "$(date -I"seconds") - update success, rebooting" \
>> /www/update.result
    exec reboot
  else
    echo "$(date -I"seconds") - update failed" >> /www/update.result
  fi
else
  echo "$(date -I"seconds") - nothing to update" >> /www/update.result
fi
EOF

Make the script executable and touch the log file.
root@firewall:~# chmod u+x /usr/local/sbin/update-system.sh
root@firewall:~# touch /www/update.result

Give it a run manually, if you want.
root@firewall:~# /usr/local/sbin/update-system.sh

Next schedule the script in cron.
root@firewall:~# crontab -e

My cron entry looks like this, to run at 2am every day.

0 2 * * * /usr/local/sbin/update-system.sh

Now just start and enable cron.
root@firewall:~# /etc/init.d/cron start
root@firewall:~# /etc/init.d/cron enable

Download a copy of the log from another machine.
chris@box:~$ curl http://router/update.result
2018-03-18T10:14:49+1100 - nothing to update

That’s it! Now if you have multiple devices you can do the same, but maybe just set the cron entry for a different time of the night.

March 17, 2018

DrupalCon Nashville

I'm going to Nashville!!

That is all. Carry on. Or... better yet - you should come too!

https://events.drupal.org/nashville2018

March 16, 2018

Racism in the Office

Today I was at an office party and the conversation turned to race, specifically the incidence of unarmed Afro-American men and boys who are shot by police. Apparently the idea that white people (even in other countries) might treat non-white people badly offends some people, so we had a man try to explain that Afro-Americans commit more crime and therefore are more likely to get shot. This part of the discussion isn’t even noteworthy, it’s the sort of thing that happens all the time.

I and another man pointed out that crime is correlated with poverty and racism causes non-white people to be disproportionately poor. We also pointed out that US police seem capable of arresting proven violent white criminals without shooting them (he cited arrests of Mafia members, I cited mass murderers like the one who shot up the cinema). This part of the discussion isn’t particularly noteworthy either. Usually when someone tries explaining some racist ideas and gets firm disagreement they back down. But not this time.

The next step was the issue of whether black people are inherently violent. He cited all of Africa as evidence. There’s a meme that you shouldn’t accuse someone of being racist, it’s apparently very offensive. I find racism very offensive and speak the truth about it. So all the following discussion was peppered with him complaining about how offended he was and me not caring (stop saying racist things if you don’t want me to call you racist).

Next was an appeal to “statistics” and “facts”. He said that he was only citing statistics and facts, clearly not understanding that saying “Africans are violent” is not a statistic. I told him to get his phone and Google for some statistics as he hadn’t cited any. I thought that might make him just go away, it was clear that we were long past the possibility of agreeing on these issues. I don’t go to parties seeking out such arguments, in fact I’d rather avoid such people altogether if possible.

So he found an article about recent immigrants from Somalia in Melbourne (not about the US or Africa, the previous topics of discussion). We are having ongoing discussions in Australia about violent crime, mainly due to conservatives who want to break international agreements regarding the treatment of refugees. For the record I support stronger jail sentences for violent crime, but this is an idea that is not well accepted by conservatives presumably because the vast majority of violent criminals are white (due to the vast majority of the Australian population being white).

His next claim was that Africans are genetically violent due to DNA changes from violence in the past. He specifically said that if someone was a witness to violence it would change their DNA to make them and their children more violent. He also specifically said that this was due to thousands of years of violence in Africa (he mentioned two thousand and three thousand years on different occasions). I pointed out that European history has plenty of violence that is well documented and also that DNA just doesn’t work the way he thinks it does.

Of course he tried to shout me down about the issue of DNA, telling me that he studied Psychology at a university in London and knows how DNA works, demanding to know my qualifications, and asserting that any scientist would support him. I don’t have a medical degree, but I have spent quite a lot of time attending lectures on medical research including from researchers who deliberately change DNA to study how this changes the biological processes of the organism in question.

I offered him the opportunity to star in a Youtube video about this, I’d record everything he wants to say about DNA. But he regarded that offer as an attempt to “shame” him because of his “controversial” views. It was a strange and sudden change from “any scientist will support me” to “it’s controversial”. Unfortunately he didn’t give up on his attempts to convince me that he wasn’t racist and that black people are lesser.

The next odd thing was when he asked me “what do you call them” (black people), “do you call them Afro-Americans when they are here”. I explained that if an American of African ancestry visits Australia then you would call them Afro-American, otherwise not. It’s strange that someone goes from being so certain of so many things to not knowing the basics. In retrospect I should have asked whether he was aware that there are black people who aren’t African.

Then I sought opinions from other people at the party regarding DNA modifications. While I didn’t expect to immediately convince him of the error of his ways it should at least demonstrate that I’m not the one who’s in a minority regarding this issue. As expected there was no support for the ideas of DNA modifying. During that discussion I mentioned radiation as a cause of DNA changes. He then came up with the idea that radiation from someone’s mouth when they shout at you could change your DNA. This was the subject of some jokes, one man said something like “my parents shouted at me a lot but didn’t make me a mutant”.

The other people had some sensible things to say, pointing out that psychological trauma changes the way people raise children and can have multi-generational effects. But the idea of events 3000 years ago having such effects was ridiculed.

By this time people were starting to leave. A heated discussion of racism tends to kill the party atmosphere. There might be some people who think I should have just avoided the discussion to keep the party going (really I didn’t want it and tried to end it). But I’m not going to allow a racist to think that I agree with them, and if having a party requires any form of agreement to racism then it’s not a party I care about.

As I was getting ready to leave the man said that he thought he didn’t explain things well because he was tipsy. I disagree, I think he explained some things very well. When someone goes to such extraordinary lengths to criticise all black people after a discussion of white cops killing unarmed black people I think it shows their character. But I did offer some friendly advice, “don’t drink with people you work with or for or any other people you want to impress”, I suggested that maybe quitting alcohol altogether is the right thing to do if this is what it causes. But he still thought it was wrong of me to call him racist, and I still don’t care. Alcohol doesn’t make anyone suddenly think that black people are inherently dangerous (even when unarmed) and therefore deserving of being shot by police (disregarding the fact that police can take members of the Mafia alive). But it does make people less inhibited about sharing such views even when it’s clear that they don’t have an accepting audience.

Some Final Notes

I was not looking for an argument or trying to entrap him in any way. I refrained from asking him about other races who have experienced violence in the past, maybe he would have made similar claims about other non-white races and maybe he wouldn’t, I didn’t try to broaden the scope of the dispute.

I am not going to do anything that might be taken as agreement or support of racism unless faced with the threat of violence. He did not threaten me so I wasn’t going to back down from the debate.

I gave him multiple opportunities to leave the debate. When I insisted that he find statistics to support his cause I hoped and expected that he would depart. Instead he came back with a page about the latest racist dog-whistle in Australian politics which had no correlation with anything we had previously discussed.

I think the fact that this debate happened says something about Australian and British culture. This man apparently hadn’t had people push back on such ideas before.

Vale Stephen Hawking

Stephen Hawking was born on the 300th anniversary of Galileo Galilei‘s death (8 March 1942), and died on the anniversary of Albert Einstein‘s birth (14 March).   Having both reached the age of 76, Hawking actually lived a few months longer than Einstein, in spite of his health problems.  By the way, what do you call it when […]

March 12, 2018

Measuring SDR Noise Figure in Real Time

I’m building a sensitive receiver for FreeDV 2400A signals. As a first step I tried a HackRF with an external Low Noise Amplifier (LNA), and attempted to measure the Noise Figure (NF) using the system Mark and I developed two years ago.

However I was getting results that didn’t make sense and were not repeatable. So over the course of a few early morning sessions I came up with a real time NF measurement system, and wrinkled several bugs out of it. I also purchased a few Airspy SDRs, and managed to measure NF on them as well as the HackRF.

It’s a GNU Octave script called nf_from_stdio.m that accepts a sample stream from stdio. It assumes the signal contains a sine wave test tone from a calibrated signal generator, and noise from the receiver under test. By sampling the test tone it can establish the gain of the receiver, and by sampling the noise spectrum an estimate of the noise power.

The script can be driven from command line utilities like hackrf_transfer or airspy_rx or via software receivers like gqrx that can send SSB-demodulated samples over UDP. Instructions are at the top of the script.

Equipment

I’m working from a home workbench, with rudimentary RF skills, a strong signal processing background and determination. I do have a good second hand signal generator (Marconi 2031), that cost AUD$1000 at a Hamfest, and a Rigol 815 Spec An (generously donated by Mel K0PFX, and Jim, N0OB) to support my FreeDV work. Both very useful and highly recommended. I cross-checked the sig-gen calibrated output using an oscilloscope and external attenuator (within 0.5dB). The Rigol is less accurate in amplitude (1.5dB on its specs), but useful for relative measurements, e.g. comparing cable attenuation.

For the NF test method I used, a calibrated signal source is required. I performed my tests at 435MHz using a -100dBm carrier generated from the Marconi 2031 sig-gen.

Usage and Results

The script accepts real samples from a SSB demod, or complex samples from an IQ source. Tune your receiver so that the sinusoidal test tone is in the 2000 to 4000 Hz range as displayed on Fig 2 of the script. In general for minimum NF turn all SDR gains up to maximum. Check Fig 1 to ensure the signal is not clipping, reduce the baseband gain if necessary.

Noise is measured between 5000 and 10000 Hz, so ensure the receiver passband is flat in that region. When using gqrx, I drag the filter bandwidth out to 12000 Hz.

The noise estimates are less stable than the tone power estimate, leading to some sample/sample variation in the NF estimate. I take the median of the last five estimates.
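
To make the gain and noise arithmetic concrete, here is a minimal Octave sketch of the idea. This is not nf_from_stdio.m (which does proper averaging and plotting); the sample rate, the window construction and the variable names are assumptions:

% Minimal sketch of the NF arithmetic - not the author's nf_from_stdio.m.
% Assumes rx is a row vector of real SSB-demodulated samples, Fs is the
% sample rate, and the calibrated tone level at the receiver input is -100dBm.
Fs = 48000; P_in_dBm = -100;              % both values are assumptions
N  = length(rx);
w  = 0.5 - 0.5*cos(2*pi*(0:N-1)/N);       % Hann window (row vector)
S  = abs(fft(rx .* w)).^2 / N;            % power spectrum (window scaling ignored)
f  = (0:N-1)*Fs/N;
tone  = max(S(f > 2000 & f < 4000));      % test tone sits in the 2-4 kHz region
noise = mean(S(f > 5000 & f < 10000));    % noise measured between 5 and 10 kHz
No    = noise / (Fs/N);                   % noise density per Hz, in digital units
G_dB  = 10*log10(tone) - P_in_dBm;        % receiver gain: digital dB out per dBm in
NF_dB = 10*log10(No) - G_dB + 174;        % input-referred noise vs -174dBm/Hz thermal
printf("NF approx %3.1f dB\n", NF_dB);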

I tried supplying samples to nf_from_stdio using two methods:

  1. Using gqrx in UDP mode to supply samples over UDP. This allows easy tuning and the ability to adjust the SDR gains in real time, but requires a few steps to set up
  2. Using a “single” command line approach that consists of a chain of processing steps concatenated together. Once your signal is tuned you can start the NF measurements with a single step.

Instructions on how to use both methods are at the top of nf_from_stdio.m

Here are some results using both gqrx and command line methods, with and without an external (20dB gain/1dB NF) LNA. They were consistent across two laptops.

SDR           Gqrx LNA (dB)   Cmd Line LNA (dB)   Cmd Line no LNA (dB)
AirSpy Mini   2.0             2.2                 7.9
AirSpy R2     1.7             1.7                 7.0
HackRF One    2.6             3.4                 11.1

The results with LNA are what we would expect for system noise figures with a good LNA at the front end.

The “no LNA” Airspy NF results are curious – the Airspy specs state a NF of just 3.5dB. So we contacted Airspy via Twitter and email to see how they measured their stated NF. We haven’t received a response to date. I posted to the Airspy mailing list and one gentleman (Dave – WØLEV) kindly replied and has measured noise figures of 4dB using calibrated noise sources and attenuators.

Looking into the data sheets for the Airspy, it appears the R820T tuner at the front end of the Airspy has a NF of 3.5dB. However a system NF will always be worse than the first device, as other devices (e.g. the ADC) also inject noise.

Other possibilities for my figures are measurement error, ambient noise sources at my site, frequency dependent NF, or variations in individual R820T samples.

In our past work we have used Bit Error Rate (BER) results as an independent method of confirming system noise figure. We found a close match between theoretical and measured BER when testing with and without a LNA. I’ll be repeating similar low level BER tests with FreeDV 2400A soon.

Real Time Noise Figure

It’s really nice to read the system noise figure in real time. For example you can start it running, then experiment with grounding, tightening connectors, or moving the SDR away from the laptop, or connect/disconnect a LNA in real time and watch the results. Really helps catch little issues in these difficult to perform tests. After all – we are measuring thermal noise, a very weak signal.

Some of the NF problems I could find and remove with a real time measurement:

  • The Airspy mini is nearly 1dB worse on the front left USB port than the rear left USB port on my X220 Thinkpad!
  • The Airspy mini really likes USB extension cables with ferrite clamps – without the ferrite I found the LNA was ineffective in reducing the NF – being swamped by conducted laptop noise I guess.
  • Loose connectors can make the noise figure a few dB worse. Wiggle and tighten them all.
  • Position of SDR/LNA near the radio and other bench equipment.
  • My magic touch can decrease noise figure! Grounding effect I guess?

Development Bugs

I had to work through several problems before I started getting sensible numbers. This was quite discouraging for a while as the numbers were jumping all over the place. However it’s fair to say measuring NF is a tough problem. From what I can Google, it’s an uncommon measurement for people in home workshops.

These bugs are worth mentioning as traps for anyone else attempting home NF measurements:

  1. Cable loss: I found a 1.5dB loss in some cable I was using between the sig gen and the SDR under test. I measured the loss by comparing a few cables connected between my sig gen and spec an. While the 815 is not accurate in terms of absolute calibration (rated at 1.5dB), it can still be used for comparative measurements. The cable loss can be added to the calculations, or just choose a low loss cable.
  2. Filter shape: I had initially placed the test tone under 1000Hz. However I noticed that the gqrx signal had a few dB of high pass filtering in this region (Fig 2 below). Not an issue for regular USB demodulation, but a few dB really matters for NF! So I moved the test tone to the 2-4kHz region where the gqrx output was nice and flat.
  3. A noisy USB port, especially without a clamp, on the Airspy Mini (photo below). Found by trying different SDRs and USB ports, and finally a clamp. Oh Boy, never expected that one. I was connecting the LNA and the NF was stuck at 4dB – swamped by noise from the USB Port I guess.
  4. Compression: Worth checking the SDR output is not clipped or in compression. I adjusted the sig gen output up and down 3dB, and checked the power estimate from the script changed by 3dB. Also worth monitoring Fig 1 from the script, make sure it’s not hitting the limits. The HackRF needed its baseband gain reduced, but the Airspys were OK.
  5. I used the latest Airspy tools built from source (rather than the Ubuntu 17 package) to get stdout piping working properly and not have other status information from printfs injected into the sample stream!

Credits

Thanks Mark, for the use of your RF hardware, and I’d also like to mention the awesome CSDR tools and fantastic gqrx software – both very handy for SDR work.

March 10, 2018

I said, let me tell you now

Montage of Library Bookshelves

Ever since I heard this month’s #AusGlamBlog theme was “Happiness” I’ve had that Happy song stuck in my head.

“Clap along if you know what happiness is to you”

I’m new to the library world as a professional, but not new to libraries. A sequence of fuzzy memories swirl in my mind when I think of libraries.

First, was my local public library children’s cave filled with books that glittered with colour like jewels.

Next, I recall the mesmerising tone and timbre of the librarian’s voice at primary school. Each week she transported us into a different story as we sat, cross legged in front of her, in some form of rapture.

Coming into closer focus I recall opening drawers in the huge wooden catalogue in the library at high school. Breathing in the deeply lovely, dusty air wafting up whilst flipping through those tiny cards was a tactile delight. Some cards were handwritten, some typewritten, some plastered with laser printed stickers.

And finally, I remember relishing the peace and quiet afforded by booking one of 49 carrel study booths at La Trobe University.

I love libraries. Libraries make me happy.

The loss of libraries makes me sad. I think of Alexandria, and more recently in Timbuktu, and closer to home, I mourn the libraries lost to the dreaming by the ravages of destructive colonial force on this little continent so many of us now call home.

Preservation and digitisation, and open collections give me hope. There can only ever be one precious original of a thing, but facsimiles, and copies and 3D blueprints increasingly means physical things can now too be shared and studied without needing to handle, or risk damaging the original.

Sending precious things from collection to collection is fraught with danger. The revelations of what Australian customs did to priceless plant specimens from France & New Zealand still gives me goosebumps of horror.

Digital. Copies. Catalogues, Circulation, Fines, Holds, Reserves, and Serial patterns. I’m learning new things about the complexities under the surface as I start to work seriously with the Koha Community Integrated Library System. I first learned about the Koha ILS more than a decade ago, but I'm only now getting a chance to work with it. It brings my secret love of libraries and my publicly proclaimed love of open source together in a way I still can’t believe is possible.

So yeah.

OH HAI! I’m Donna, and I’m here to help.

“Clap along if you feel like that's what you wanna do”

March 09, 2018

Amelia Earhart in the news

Recently Amelia Earhart has been in the news once more, with publication of a paper by an American forensic anthropologist, Richard Jantz. Jantz has done an analysis of the measurements made of bones found in 1940 on the island of Nikumaroro in Kiribati. Unfortunately, the bones no longer survive, but they were analysed in […]

March 07, 2018

brawndo-installer

Tired of being oppressed by the slack-arse distro package maintainers who waste time testing that new versions don’t break anything and then waste even more time integrating software into the system?

Well, so am I. So I’ve fixed it, and it was easy to do. Here’s the ultimate installation tool for any program:

brawndo() {
   curl $1 | sudo /usr/bin/env bash
}

I’ve never written a shell script before in my entire life, I spend all my time writing javascript or ruby or python – but shell’s not a real language so it can’t be that hard to get right, can it? Of course not, and I just proved it with the amazing brawndo installer (It’s got what users crave – it’s got electrolytes!)

So next time some lame sysadmin recommends that you install the packaged version of something, just ask them if apt-get or yum or whatever loser packaging tool they’re suggesting has electrolytes. That’ll shut ’em up.

brawndo-installer is a post from: Errata

LUV March 2018 Workshop: Comparing window managers

Mar 17 2018 12:30
Mar 17 2018 16:30
Mar 17 2018 12:30
Mar 17 2018 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Comparing window managers

We'll be looking at several of the many window managers available on Linux.

We're still looking for more people who can talk about the window manager they are using, what they like and dislike about it, and maybe demonstrate a little.

Please email me at <president@luv.asn.au> with the name of your window manager if you think you could help!

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

March 17, 2018 - 12:30

read more

Audiobooks – Background and February 2018 list

Audiobooks

I started listening to audiobooks around the start of January 2017 when I started walking to work (I previously caught the bus and read a book or my phone).

I currently get them for free from the Auckland Public Library using the Overdrive app on Android. However, while I download them to my phone using the Overdrive app, I listen to them using Listen Audiobook Player. I switched to the alternative player mainly since it supports playback speeds greater than 2x normal.

I’ve been posting a list of the books I listened to at the end of each month to twitter (see lists from Jan 2018, Dec 2017, Nov 2017) but I thought I’d start posting them here too.

I mostly listen to history with some science fiction and other topics.

Books listened to in February 2018

The Three-Body Problem by Cixin Liu – Pretty good Sci-Fi and towards the hard-core end I like. Looking forward to the sequels 7/10

Destiny and Power: The American Odyssey of George Herbert Walker Bush by Jon Meacham – A very nicely done biography, comprehensive and giving a good positive picture of Bush. 7/10

Starship Troopers by Robert A. Heinlein – A pretty good version of the classic. The story works well although the politics are “different”. Enjoyable though 8/10

Uncommon People: The Rise and Fall of the Rock Stars 1955-1994 by David Hepworth – Read by the Author (who sounds like a classic Brit journalist). A Story or two plus a playlist from every year. Fascinating and delightful 9/10

The Long Haul: A Trucker’s Tales of Life on the Road by Finn Murphy – Very interesting and well written about the author’s life as a long distance mover. 8/10

Mornings on Horseback – David McCullough – The Early life of Teddy Roosevelt, my McCullough book for the month. Interesting but not as engaging as I’d have hoped. 7/10

The Battle of the Atlantic: How the Allies Won the War – Jonathan Dimbleby – Overview of the Atlantic Campaign of World War 2. The author works to stress it was one of the most important fronts and does pretty well. 7/10

 

 

 


March 05, 2018

WordPress Multisite on Debian

WordPress (a common CMS for blogs) is designed to be copied to a directory that Apache can serve and run by a user with no particular privileges, while managing installation of its own updates and plugins. Debian is designed around the idea of the package management system controlling everything on behalf of a sysadmin.

When I first started using WordPress there was a version called “WordPress MU” (Multi User) which supported multiple blogs. It was a separate archive to the main WordPress and didn’t support all the plugins and themes. As a main selling point of WordPress is the ability to select from the significant library of plugins and themes this was a serious problem.

Debian WordPress

The people who maintain the Debian package of WordPress have always supported multiple blogs on one system and made it very easy to run in that manner. There’s a /etc/wordpress directory for configuration files for each blog with names such as config-etbe.coker.com.au.php. This allows having multiple separate blogs running from the same tree of PHP source which means only one thing to update when there’s a new version of WordPress (often fixing security issues).

One thing that appears to be lacking with the Debian system is separate directories for “media”. WordPress supports uploading images (which are scaled to several different sizes) as well as sound and apparently video. By default under Debian they are stored in /var/lib/wordpress/wp-content/uploads/YYYY/MM/filename. If you have several blogs on one system they all get to share the same directory tree, that may be OK for one person running multiple blogs but is obviously bad when several bloggers have independent blogs on the same server.

Multisite

If you enable the “multisite” support in WordPress then you have WordPress support for multiple blogs. The administrator of the multisite configuration has the ability to specify media paths etc for all the child blogs.

The first problem with this is that one person has to be the multisite administrator. As I’m the sysadmin of the WordPress servers in question that’s an obvious task for me. But the problem is that the multisite administrator doesn’t just do sysadmin tasks such as specifying storage directories. They also do fairly routine tasks like enabling plugins. Preventing bloggers from installing new plugins is reasonable and is the default Debian configuration. Preventing them from selecting which of the installed plugins are activated is unreasonable in most situations.

The next issue is that some core parts of WordPress functionality on the sub-blogs refer to the administrator blog, recovering a forgotten password is one example. I don’t want users of other blogs on the system to be referred to my blog when they forget their password.

A final problem with multisite is that it makes things more difficult if you want to move a blog to another system. Instead of just sending a dump of the MySQL database and a copy of the Apache configuration for the site, you have to configure it for which blog will be its master. If going between multisite and non-multisite you have to change some of the data about accounts; this will be annoying both when adding new sites to a server and when moving sites from the server to a non-multisite server somewhere else.

I now believe that WordPress multisite has little value for people who use Debian. The Debian way is the better way.

So I had to back out the multisite changes. Fortunately I had a cron job to make snapshots of the BTRFS subvolume that has the database so it was easy to revert to an older version of the MySQL configuration.

Upload Location

update etbe_options set option_value='/var/lib/wordpress/wp-content/uploads/etbe.coker.com.au' where option_name='upload_path';

It turns out that if you don’t have a multisite blog then there’s no way of changing the upload directory without using SQL. The above SQL code is an example of how to do this. Note that it seems that there is special case handling of a value of ‘wp-content/uploads‘ and any other path needs to be fully qualified.
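
If you want to confirm the current value before changing it, a corresponding query (using the same table prefix as the example above) would be:

select option_value from etbe_options where option_name='upload_path';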

For my own blog however I choose to avoid the WordPress media management and use the following shell script to create suitable HTML code for an image that links to a high resolution version. I use GIMP to create the smaller version of the image which gives me a lot of control over how to crop and compress the image to ensure that enough detail is visible while still being small enough for fast download.

#!/bin/bash
set -e

if [ "$BASE" = "" ]; then
  BASE="http://www.coker.com.au/blogpics/2018"
fi

while [ "$1" != "" ]; do
  BIG=$1
  SMALL=$(echo $1 | sed -s s/-big//)
  RES=$(identify $SMALL|cut -f3 -d\ )
  WIDTH=$(($(echo $RES|cut -f1 -dx)/2))px
  HEIGHT=$(($(echo $RES|cut -f2 -dx)/2))px
  echo "<a href=\"$BASE/$BIG\"><img src=\"$BASE/$SMALL\" width=\"$WIDTH\" height=\"$HEIGHT\" alt=\"\" /></a>"
  shift
done

Compromised Guest Account

Some of the workstations I run are sometimes used by multiple people. Having multiple people share an account is bad for security so having a guest account for guest access is convenient.

If a system doesn’t allow logins over the Internet then a strong password is not needed for the guest account.

If such a system later allows logins over the Internet then hostile parties can try to guess the password. This happens even if you don’t use the default port for ssh.

This recently happened to a system I run. The attacker logged in as guest, changed the password, and installed a cron job to run every minute and restart their blockchain mining program if it had been stopped.

In 2007 a bug was filed against the Debian package openssh-server requesting that AllowUsers be added to the default /etc/ssh/sshd_config file [1]. If that bug hadn’t been marked as “wishlist” and left alone for 11 years then I would probably have set it to only allow ssh connections to the one account that I desired, which always had a strong password.
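
For reference, the directive in question is a one-liner; a minimal sketch of the relevant /etc/ssh/sshd_config entry (the account name here is a placeholder):

# Only the accounts listed here may log in via ssh; "guest" is deliberately absent.
AllowUsers admin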

I’ve been a sysadmin for about 25 years (since before ssh was invented). I have been a Debian Developer for almost 20 years, including working on security related code. The fact that I stuffed up in regard to this issue suggests that there are probably many other people making similar mistakes, and probably most of them aren’t monitoring things like system load average and temperature which can lead to the discovery of such attacks.

March 02, 2018

Redirecting an entire site except for the certbot webroot

In order to be able to use the webroot plugin for certbot and automatically renew the Let's Encrypt certificate for libravatar.org, I had to put together an Apache config that would do the following on port 80:

  • Let /.well-known/acme-challenge/* through on the bare domain (http://libravatar.org/).
  • Redirect anything else to https://www.libravatar.org/.

The reason for this is that the main Libravatar service listens on www.libravatar.org and not libravatar.org, but that cerbot needs to ascertain control of the bare domain.

This is the configuration I ended up with:

<VirtualHost *:80>
    DocumentRoot /var/www/acme
    <Directory /var/www/acme>
        Options -Indexes
    </Directory>

    RewriteEngine on
    RewriteCond "/var/www/acme%{REQUEST_URI}" !-f
    RewriteRule ^(.*)$ https://www.libravatar.org/ [last,redirect=301]
</VirtualHost>

The trick I used here is to make the redirection RewriteRule conditional on the requested file (%{REQUEST_URI}) not existing in the /var/www/acme directory, the one where I tell certbot to drop its temporary files.

Here are the relevant portions of /etc/letsencrypt/renewal/www.libravatar.org.conf:

[renewalparams]
authenticator = webroot
account = 

[[webroot_map]]
libravatar.org = /var/www/acme
www.libravatar.org = /var/www/acme
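
With the Apache configuration and renewal parameters above in place, renewal itself should just be the standard command (run from cron or a systemd timer):

certbot renew --quiet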

February 27, 2018

Drupal "Access denied" Message

It happens rarely enough, but on occasion (such as an upgrade to a database system (e.g., MySQL, MariaDB) or the system version of a web-scripting language (e.g., PHP)), you can end up with your Drupal site failing to load, displaying only an error message similar to:


PDOException: SQLSTATE[HY000] [1044] Access denied for user 'username'@'localhost' to database 'database' in lock_may_be_available() (line 167 of /website/includes/lock.inc).

The cryptic introduction to the error message actually describes what the problem is, as error messages usually do. However, also like a lot of error messages, it really doesn't provide an immediately obvious solution. The problem is that a lock has been initiated for username@localhost, and because of that the database cannot be accessed, and therefore the site won't load.

This is different to similar error messages, such as:


PDOException: SQLSTATE[08004] [1040] Too many connections in lock_may_be_available() (line 167 of /website/includes/lock.inc).

Which, again, means what it says, and will probably need more file system space, clearing of cache, etc.

The blunt trauma method to solve the problem at hand, however, is to remove the offending user and recreate it. But before one does, the usual rules about site and database backups apply. Unless you're feeling confident, and we know what confidence means when monkeying around with production databases.
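
A minimal sketch of that drop-and-recreate step in MySQL/MariaDB follows; the user, host and password here are placeholders, with the real values found in settings.php as noted below:

DROP USER 'username'@'localhost';
CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';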

The username, database, and password will be stored in the Drupal site settings, located in sites/default/settings.php. Recreate the user and ... now what were their privileges again? If these haven't been set the result will be similar to:

Warning: PDO::__construct(): The server requested authentication method unknown to the client [mysql_old_password] in DatabaseConnection->__construct() (line 304 of /website/includes/database/database.inc).

To fix this, grant the user the appropriate privileges. In MySQL/MariaDB etc this will be


GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES ON database.* TO 'user'@'localhost' IDENTIFIED BY 'password';

And hopefully this short post will save someone else a bit of time.

February 26, 2018

At Mercy of the Weather

It is the time of year when Australia often experiences extreme weather events. February is renowned as the hottest month and, in some parts of the country, also the wettest month. It often brings cyclones to our coasts and storms which, conversely enough, may trigger fires as lightning strikes the hot, dry bush. Aboriginal people […]

February 25, 2018

Vale Dad

[I’ve been very quiet here for over a year for reasons that will become apparent in the next few days when I finish and publish a long post I’ve been working on for a while – difficult to write, hence the delay]

It’s 10 years ago today that my Dad died, and Alan and I lost the father who had meant so much to both of us. It’s odd realising that it’s over 1/5th of my life since he died, it doesn’t seem that long.

Vale dad, love you…

This item originally posted here:

Vale Dad

February 24, 2018

LUV Main March 2018 Meeting: Unions - Hacking society's operating system

Mar 6 2018 18:30
Mar 6 2018 20:30
Mar 6 2018 18:30
Mar 6 2018 20:30
Location: 
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

Tuesday, March 6, 2018

6:30 PM to 8:30 PM
Mail Exchange Hotel
688 Bourke St, Melbourne VIC 3000

Speakers:

Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.

March 6, 2018 - 18:30

read more

February 23, 2018

Strange Bedfellows

The Tasmanian state election is coming up in a week’s time, and I’ve managed to do a reasonable job of ignoring the whole horrible thing, modulo the promoted tweets, the signs on the highway, the junk the major (and semi-major) political parties pay to dump in my letterbox, and occasional discussions with friends and neighbours.

Promoted tweets can be blocked. The signs on the highway can (possibly) be re-purposed for a subsequent election, or can be pulled down and used for minor windbreak/shelter works for animal enclosures. Discussions with friends and neighbours are always interesting, even if one doesn’t necessarily agree. I think the most irritating thing is the letterbox junk; at best it’ll eventually be recycled, at worst it becomes landfill or firestarters (and some of those things do make very satisfying firestarters).

Anyway, as I live somewhere in the wilds division of Franklin, I thought I’d better check to see who’s up for election here. There’s no independents running this time, so I’ve essentially got the choice of four parties; Shooters, Fishers and Farmers Tasmania, Tasmanian Greens, Tasmanian Labor and Tasmanian Liberals (the order here is the same as on the TEC web site; please don’t infer any preference based on the order in which I list parties in this blog post).

I feel like I should be setting party affiliations aside and voting for individuals, but of the sixteen candidates listed, to the best of my knowledge I’ve only actually met and spoken with two of them. Another I noticed at random in a cafe, and I was ignored by a fourth who was milling around with some cronies at a promotional stand out the front of Woolworths in Huonville a few weeks ago. So, party affiliations it is, which leads to an interesting thought experiment.

When you read those four party names above, what things came most immediately to mind? For me, it was something like this:

  • Shooters, Fishers & Farmers: Don’t take our guns. Fuck those bastard Greenies.
  • Tasmanian Greens: Protect the natural environment. Renewable energy. Try not to kill anything. Might collaborate with Labor. Liberals are big money and bad news.
  • Tasmanian Labor: Mellifluous babble concerning health, education, housing, jobs, pokies and something about workers rights. Might collaborate with the Greens. Vehemently opposed to the Liberals.
  • Tasmanian Liberals: Mellifluous babble concerning jobs, health, infrastructure, safety and the Tasmanian way of life, peppered with something about small business and family values. Vehemently opposed to Labor and the Greens.

And because everyone usually automatically thinks in terms of binaries (e.g. good vs. evil, wrong vs. right, one vs. zero), we tend to end up imagining something like this:

  • Shooters, Fishers & Farmers vs. Greens
  • Labor vs. Liberal
  • …um. Maybe Labor and the Greens might work together…
  • …but really, it’s going to be Labor or Liberal in power (possibly with some sort of crossbench or coalition support from minor parties, despite claims from both that it’ll be majority government all the way).

It turns out that thinking in binaries is remarkably unhelpful, unless you’re programming a computer (it’s zeroes and ones all the way down), or are lost in the wilderness (is this plant food or poison? is this animal predator or prey?) The rest of the time, things tend to be rather more colourful (or grey, depending on your perspective), which leads back to my thought experiment: what do these “naturally opposed” parties have in common?

According to their respective web sites, the Shooters, Fishers & Farmers and the Greens have many interests in common, including agriculture, biosecurity, environmental protection, tourism, sustainable land management, health, education, telecommunications and addressing homelessness. There are differences in the policy details of course (some really are diametrically opposed), but in broad strokes these two groups seem to care strongly about – and even agree on – many of the same things.

Similarly, Labor and Liberal are both keen to tell a story about putting the people of Tasmania first, about health, education, housing, jobs and infrastructure. Honestly, for me, they just kind of blend into one another; sure there’s differences in various policy details, but really if someone renamed them Labal and Liberor I wouldn’t notice. These two are the status quo, and despite fighting it out with each other repeatedly, are, essentially, resting on their laurels.

Here’s what I’d like to see: a minority Tasmanian state government formed from a coalition of the Tasmanian Greens plus the Shooters, Fishers & Farmers party, with the Labor and Liberal parties together in opposition. It’ll still be stuck in that irritating Westminster binary mode, but at least the damn thing will have been mixed up sufficiently that people might actually talk to each other rather than just fighting.

February 22, 2018

Dell PowerEdge T30

I just did a Debian install on a Dell PowerEdge T30 for a client. The Dell web site is a bit broken at the moment, it didn’t list the price of that server or give useful specs when I was ordering it. I was under the impression that the server was limited to 8G of RAM, that’s unusually small but it wouldn’t be the first time a vendor crippled a low end model to drive sales of more expensive systems. It turned out that the T30 model I got has 4*DDR4 sockets with only one used for an 8G DIMM. It apparently can handle up to 64G of RAM.

It has space for 4*3.5″ SATA disks but only has 4*SATA connectors on the motherboard. As I never use the DVD in a server this isn’t a problem for me, but if you want 4 disks and a DVD then you need to buy a PCI or PCIe SATA card.

Compared to the PowerEdge T130 I’m using at home the new T30 is slightly shorter and thinner while seeming to have more space inside. This is partly due to better design and partly due to having 2 hard drives in the top near the DVD drive which are a little inconvenient to get to. The T130 I have (which isn’t the latest model) has 4*3.5″ SATA drive bays at the bottom which are very convenient for swapping disks.

It has two PCIe*16 slots (one of which is apparently quad speed), one shorter PCIe slot, and a PCI slot. For a cheap server a PCI slot is a nice feature, it means I can use an old PCI Ethernet card instead of buying a PCIe Ethernet card. The T30 cost $1002 so using an old Ethernet card saved 1% of the overall cost.

The T30 seems designed to be more of a workstation or personal server than a straight server. The previous iterations of the low end tower servers from Dell didn’t have built in sound and had PCIe slots that were adequate for a RAID controller but vastly inadequate for video. This one has built in line in and out for audio and has two DisplayPort connectors on the motherboard (presumably for dual-head support). Apart from the CPU (an E3-1225 which is slower than some systems people are throwing out nowadays) the system would be a decent gaming system.

It has lots of USB ports which is handy for a file server, I can attach lots of backup devices. Also most of the ports support “super speed”, I haven’t yet tested out USB devices that support such speeds but I’m looking forward to it. It’s a pity that there are no USB-C ports.

One deficiency of the T30 is the lack of a VGA port. It has one HDMI and two DisplayPort sockets on the motherboard, this is really great for a system on or under your desk, any monitor you would want on your desk will support at least one of those interfaces. But in a server room you tend to have an old VGA monitor that’s there because no-one wants it on their desk. Not supporting VGA may force people to buy a $200 monitor for their server room. That increases the effective cost of the system by 20%. It has a PC serial port on the motherboard which is a nice server feature, but that doesn’t make up for the lack of VGA.

The BIOS configuration has an option displayed for enabling charging devices from USB sockets when a laptop is in sleep mode. It’s disappointing that they didn’t either make a BIOS build for a non-laptop or have the BIOS detect at run-time that it’s not on laptop hardware and hide that.

Conclusion

The PowerEdge T30 is a nice low-end workstation. If you want a system with ECC RAM because you need it to be reliable and you don’t need the greatest performance then it will do very well. It has Intel video on the motherboard with HDMI and DisplayPort connectors, this won’t be the fastest video but should do for most workstation tasks. It has a PCIe*16 quad speed slot in case you want to install a really fast video card. The CPU is slow by today’s standards, but Dell sells plenty of tower systems that support faster CPUs.

It’s nice that it has a serial port on the motherboard. That could be used for a serial console or could be used to talk to a UPS or other server-room equipment. But that doesn’t make up for the lack of VGA support IMHO.

One could say that a tower system is designed to be a desktop or desk-side system not run in any sort of server room. However it is cheaper than any rack mounted systems from Dell so it will be deployed in lots of small businesses that have one server for everything – I will probably install them in several other small businesses this year. Also tower servers do end up being deployed in server rooms, all it takes is a small business moving to a serviced office that has a proper server room and the old tower servers end up in a rack.

Rack vs Tower

One reason for small businesses to use tower servers when rack servers are more appropriate is the issue of noise. If your “server room” is the room that has your printer and fax then it typically won’t have a door and you just can’t have the noise of a rack mounted server in there. 1RU systems are inherently noisy because the small diameter of the fans means that they have to spin fast. 2RU systems can be made relatively quiet if you don’t have high-end CPUs but no-one seems to be trying to do that.

I think it would be nice if a company like Dell sold low-end servers in a rack mount form-factor (19 inches wide and 2RU high) that were designed to be relatively quiet. Then instead of starting with a tower server and ending up with tower systems in racks a small business could start with a 19 inch wide system on a shelf that gets bolted into a rack if they move into a better office. Any laptop CPU from the last 10 years is capable of running a file server with 8 disks in a ZFS array. Any modern laptop CPU is capable of running a file server with 8 SSDs in a ZFS array. This wouldn’t be difficult to design.

February 21, 2018

MariaDB Developer’s unconference & M|18

Been a while since I wrote anything MySQL/MariaDB related here, but there’s the column on the Percona blog, which has weekly updates.

Anyway, I’ll be at the developer’s unconference this weekend in NYC. Even managed to snag a session on the schedule, MySQL features missing in MariaDB Server (Sunday, 12.15–13.00). Signup on meetup?

Due to the prevalence of “VIP tickets”, I too signed up for M|18. If you need a discount code, I’ll happily offer them up to you to see if they still work (though I’m sure a quick Google will solve this problem for you). I’ll publish notes, probably in my weekly column.

If you’re in New York and want to say hi, talk shop, etc. don’t hesitate to drop me a line.

February 17, 2018

An optimistic future

This is my personal vision for an event called “Optimistic Futures” to explore what we could be aiming for and figure out the possible roles for government in future.

Technology is both an enabler and a disruptor in our lives. It has ushered in an age of surplus, with decentralised systems enabled by highly empowered global citizens, all creating increasing complexity. It is imperative that we transition into a more open, collaborative, resilient and digitally enabled society that can respond exponentially to exponential change whilst empowering all our people to thrive. We have the means now by which to overcome our greatest challenges including poverty, hunger, inequity and shifting job markets but we must be bold in collectively designing a better future, otherwise we may unintentionally reinvent past paradigms and inequities with shiny new things.

Technology is only as useful as it affects actual people, so my vision starts, perhaps surprisingly for some, with people. After all, if people suffer, the system suffers, so the well being of people is the first and foremost priority for any sustainable vision. But we also need to look at what all sectors and communities across society need and what part they can play:

  • People: I dream of a future where the uniqueness of local communities, cultures and individuals is amplified, where diversity is embraced as a strength, and where all people are empowered with the skills, capacity and confidence to thrive locally and internationally. A future where everyone shares in the benefits and opportunities of a modern, digital and surplus society/economy with resilience, and where everyone can meaningfully contribute to the future of work, local communities and the national/global good.
  • Public sectors: I dream of strong, independent, bold and highly accountable public sectors that lead, inform, collaborate, engage meaningfully and are effective enablers for society and the economy. A future where we invest as much time and effort on transformational digital public infrastructure and skills as we do on other public infrastructure like roads, health and traditional education, so that we can all build on top of government as a platform. Where everyone can have confidence in government as a stabilising force of integrity that provides a minimum quality of life upon which everyone can thrive.
  • The media: I dream of a highly effective fourth estate which is motivated systemically with resilient business models that incentivise behaviours to both serve the public and hold power to account, especially as “news” is also arguably becoming exponential. Actionable accountability that doesn’t rely on the linearity and personal incentives of individuals to respond will be critical with the changing pace of news and with more decisions being made by machines.
  • Private, academic and non-profit sectors: I dream of a future where all sectors can more freely innovate, share, adapt and succeed whilst contributing meaningfully to the public good and being accountable to the communities affected by decisions and actions. I also see a role for academic institutions in particular, given their systemic motivation for high veracity outcomes without being attached to one side, as playing a role in how national/government actions are measured, planned, tested and monitored over time.
  • Finally, I dream of a world where countries are not celebrated for being just “digital nations” but rather are engaged in a race to the top in using technology to improve the lives of all people and to establish truly collaborative democracies where people can meaningfully participate in shaping optimistic and inclusive futures.

Technology is a means, not an ends, so we need to use technology to both proactively invent the future we need (thank you Alan Kay) and to be resilient to change including emerging tech and trends.

Let me share a few specific optimistic predictions for 2070:

  • Automation will help us redesign our work expectations. We will have a 10-20 hour work week supported by machines, freeing up time for family, education, civic duties and innovation. People will have less pressure to simply survive and will have more capacity to thrive (this is a common theme, but something I see as critical).
  • 3D printing of synthetic foods and nanotechnology to deconstruct and reconstruct molecular materials will address hunger, access to medicine, clothes and goods, and community hubs (like libraries) will become even more important as distribution, education and social hubs, with drones and other aerial travel employed for those who can’t travel. Exoskeletons will replace scooters :)
  • With rocket travel normalised, and only an hour to get anywhere on the planet, nations will see competitive citizenships where countries focus on the best quality of life to attract and retain people, rather than largely just trying to attract and retain companies as we do today. We will also likely see the emergence of more powerful transnational communities that have nationhood status to represent the aspects of people’s lives that are not geopolitically bound.
  • The public service has highly professional, empathetic and accountable multi-disciplinary experts on responsive collaborative policy, digital legislation, societal modeling, identifying necessary public digital infrastructure for investment, and well controlled but openly available data, rules and transactional functions of government to enable dynamic and third party services across myriad channels, provided to people based on their needs but under their control. We will also have a large number of citizens working 1 or 2 days a week in paid civic duties on areas where they have passion, skills or experience to contribute.
  • The paralympics will become the main game, as it were, with no limits on human augmentation. We will do the 100m sprint with rockets, judo with cyborgs, rock climbing with tentacles. We have access to medical capabilities to address any form of disease or discomfort but we don’t use the technologies to just comply to a normative view of a human. People are free to choose their form and we culturally value diversity and experimentation as critical attributes of a modern adaptable community.

I’ve only been living in New Zealand a short time but I’ve been delighted and inspired by what I’ve learned from kiwi and Māori cultures, so I’d like to share a locally inspired analogy.

Technology is on one hand, just a waka (canoe), a vehicle for change. We all have a part to play in the journey and in deciding where we want to go. On the other hand, technology is also the winds, the storms, the thunder, and we have to continually work to understand and respond to emerging technologies and trends so we stay safely on course. It will take collaboration and working towards common goals if we are to chart a better future for all.

Site building with Drupal

What even is "Site Building"?

At DrupalDownunder some years back, the wonderful Erica Bramham named her talk "All node, no code". Nodes were the fundamental building blocks in Drupal, they were like single drops of content. These days though, it's all about entities.

But hang on a minute, I'm using lots of buzz words, and worse, I'm using words that mean different things in different contexts. Jargon is one of the first hurdles you need to jump to understand the diverse worlds of the web. People who grow up multi-lingual learn that the meanings of words are somewhat arbitrary. They learn the same thing has different names. This is true for the web too. So the first thing to know about Site Building is that it means different things to different people.

To me, it means being able to build a website without knowing how to code. I also believe it means I can build a website without having to set up my own development environment. I know people who vehemently disagree with me about this. But that's ok. This is my blog, and these are my rules.

So - this is a post about site building, using SimplyTest.Me and Drupal 8 out of the box.

1. Go to https://simplytest.me

2. Type Drupal Core in the search field, and select "Drupal core" from the list

3. Choose the latest development branch, right at the bottom of the list.

 

For me, right now, that's 8.6.x, and here's a screenshot of what that looks like.

SimplyTest Me Screenshot, showing drop down fields described in the text.

 

4. Click "Launch sandbox".

Now wait.

In a few moments, you should see a fresh shiny Drupal 8 site, ready for you to explore.

For me today, it looks like this.  

Drupal 8.6.x front page screenshot

 

In the top right of the window, you should see a "Log in" link.

Click that, and enter admin/admin to login. 

You're now ready to practice some site building!

First, you'll need to create some content to play with.  Here's a short screencast that shows you how to login, add an article, and change the title using Quick Edit.

A guide to what's next

Follow the Drupal User guide to start building your site!

If you want to start at the beginning, you'll get a great overview of Drupal, and some important info on how to plan your site. But if you want to roll up your sleeves and get building, you can skip the chapter on site installation and jump straight to chapter 4, and dive into basic site configuration.

 

Experiment

You have 24 hours to experiment with the simplytest.me sandbox - after that it disappears.

 

Get in touch

If you want something more permanent, you might want to "try drupal" or contact us at catalyst-au.net to discuss our Drupal services.

LUV February 2018 Workshop: Installing an Open Source OS on your tablet or phone

Feb 24 2018 12:30
Feb 24 2018 16:30
Feb 24 2018 12:30
Feb 24 2018 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Installing an Open Source OS on your tablet or phone

Andrew Pam will demonstrate how to install LineageOS, previously known as CyanogenMod and based on the Android Open Source Project, on tablets and phones.  Feel free to bring your own tablets and phones and have a go, but please ensure you back them up if there is anything you still need stored on them!

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

February 24, 2018 - 12:30

read more

February 16, 2018

Australia at the Olympics

The modern Olympic games were started by Frenchman Pierre de Coubertin to promote international understanding. The first games of the modern era were held in 1896 in Athens, Greece. Australia has competed in all the Olympic games of the modern era, although our participation in the first one was almost by chance. Of course, the […]

February 09, 2018

Australia Day in the early 20th century

Australia Day and its commemoration on 26 January, has long been a controversial topic. This year has seen calls once again for the date to be changed. Similar calls have been made for a long time. As early as 1938, Aboriginal civil rights leaders declared a “Day of Mourning” to highlight issues in the Aboriginal […]

February 08, 2018

Thinkpad X1 Carbon

I just bought a Thinkpad X1 Carbon to replace my Thinkpad X301 [1]. It cost me $289 with free shipping from an eBay merchant which is a great deal, a new battery for the Thinkpad X301 would have cost about $100.

It seems that laptops aren’t depreciating in value as much as they used to. Grays Online used to reliably have refurbished Thinkpads with manufacturer’s warranty selling for about $300. Now they only have IdeaPads (a cheaper low-end line from Lenovo) at good prices, admittedly $100 to $200 for an IdeaPad is a very nice deal if you want a cheap laptop and don’t need something too powerful. But if you want something for doing software development on the go then you are looking at well in excess of $400. So I ended up buying a second-hand system from an eBay merchant.

CPU

I was quite excited to read the specs that it has an i7 CPU, but now I have it I discovered that the i7-3667U CPU scores 3990 according to passmark (cpubenchmark.net) [2]. While that is much better than the U9400 in the Thinkpad X301 that scored 968, it’s only slightly better than the i5-2520M in my Thinkpad T420 that scored 3582 [3]. I bought the Thinkpad T420 in August 2013 [4], I had hoped that Moore’s Law would result in me getting a system at least twice as fast as my last one. But buying second-hand meant I got a slower CPU. Also the small form factor of the X series limits the heat dissipation and therefore limits the CPU performance.

Keyboard

Thinkpads have traditionally had the best keyboards, but they are losing that advantage. This system has a keyboard that feels like an Apple laptop keyboard not like a traditional Thinkpad. It still has the Trackpoint which is a major feature if you like it (I do). The biggest downside is that they rearranged the keys. The PgUp/PgDn keys are now by the arrow keys, this could end up being useful if you like the SHIFT-PgUp/SHIFT-PgDn combinations used in the Linux VC and some Xterms like Konsole. But I like to keep my keys by the home keys and I can’t do that unless I use the little finger of my right hand for PgUp/PgDn. They also moved the Home, End, and Delete keys which is really annoying. It’s not just that the positions are different to previous Thinkpads (including X series like the X301), they are different to desktop keyboards. So every time I move between my Thinkpad and a desktop system I need to change key usage.

Did Lenovo not consider that touch typists might use their products?

The keyboard moved the PrtSc key, and lacks ScrLk and Pause keys, but I hardly ever use the PrtSc key, and never use the other 2. The lack of those keys would only be of interest to people who have mapped them to useful functions and people who actually use PrtSc. It’s impractical to have a key as annoying to accidentally press as PrtSc between the Ctrl and Alt keys.

One significant benefit of the keyboard in this Thinkpad is that it has a backlight instead of having a light on the top of the screen that shines on the keyboard. It might work better than the light above the keyboard and looks much cooler! As an aside I discovered that my Thinkpad X301 has a light above the keyboard, but the key combination to activate it sometimes needs to be pressed several times.

Display

X1 Carbon 1600*900
T420 1600*900
T61 1680*1050
X301 1440*900

Above are the screen resolutions for all my Thinkpads of the last 8 years. The X301 is an anomaly as I got it from a rubbish pile and it was significantly older than Thinkpads usually are when I get them. It’s a bit disappointing that laptop screen resolution isn’t increasing much over the years. I know some people have laptops with resolutions as high as 2560*1600 (as high as a high end phone), but it seems that most laptops are below phone resolution.

Kogan is currently selling the Agora 8+ phone new for $239, including postage that would still be cheaper than the $289 I paid for this Thinkpad. There’s no reason why new phones should have lower prices and higher screen resolutions than second-hand laptops. The Thinkpad is designed to be a high-end brand, other brands like IdeaPad are for low end devices. Really 1600*900 is a low-end resolution by today’s standards, 1920*1080 should be the minimum for high-end systems. Now I could have bought one of the X series models with a higher screen resolution, but most of them have the lower resolution and hunting for a second hand system with the rare high resolution screen would mean missing the best prices.

I wonder if there’s an Android app to make a phone run as a second monitor for a Linux laptop, that way you could use a high resolution phone screen to display data from a laptop.

This display is unreasonably bright by default. So bright it hurt my eyes. The xbacklight program doesn’t support my display but the command “xrandr --output LVDS-1 --brightness 0.4” sets the brightness to 40%. The Fn key combination to set brightness doesn’t work. Below a brightness of about 70% the screen looks grainy.

General

This Thinkpad has a 180G SSD that supports contiguous reads at 500MB/s. It has 8G of RAM which is the minimum for a usable desktop system nowadays and while not really fast the CPU is fast enough. Generally this is a nice system.

It doesn’t have an Ethernet port which is really annoying. Now I have to pack a USB Ethernet device whenever I go anywhere. It also has mini-DisplayPort as the only video connector, as that is almost never available at a conference venue (VGA and HDMI are the common ones) I’ll have to pack an adaptor when I give a lecture. It also only has 2 USB ports, the X301 has 3. I know that not having HDMI, VGA, and Ethernet ports allows designing a thinner laptop. But I would be happier with a slightly thicker laptop that has more connectivity options. The Thinkpad X301 has about the same mass and is only slightly thicker and has all those ports. I blame Apple for starting this trend of laptops lacking IO options.

This might be the last laptop I own that doesn’t have USB-C. Currently not having USB-C is not a big deal, but devices other than phones supporting it will probably be released soon and fast phone charging from a laptop would be a good feature to have.

This laptop has no removable battery. I don’t know if it will be practical to replace the battery if the old one wears out. But given that replacing the battery may be more than the laptop is worth this isn’t a serious issue. One significant issue is that there’s no option to buy a second battery if I need to have it run without mains power for a significant amount of time. When I was travelling between Australia and Europe often I used to pack a second battery so I could spend twice as much time coding on the plane. I know it’s an engineering trade-off, but they did it with the X301 and could have done it again with this model.

Conclusion

This isn’t a great laptop. The X1 Carbon is described as a flagship for the Thinkpad brand and the display is letting down the image of the brand. The CPU is a little disappointing, but it’s a trade-off that I can deal with.

The keyboard is really annoying and will continue to annoy me for as long as I own it. The X301 managed to fit a better keyboard layout into the same space, there’s no reason that they couldn’t have done the same with the X1 Carbon.

But it’s great value for money and works well.

February 03, 2018

Watch as the OS rewrites my buggy program.

I didn’t know that SetErrorMode(SEM_NOALIGNMENTFAULTEXCEPT) was a thing, until I wrote a bad test that wouldn’t crash.

Digging into it, I found that a movaps instruction was being rewritten as movups, which was a thoroughly confusing thing to see.

The one clue I had was that a fault due to an unaligned load had been observed in non-test code, but did not reproduce when written as a test using the google-test framework. A short hunt later (including a failed attempt at writing a small repro case), I found an explanation: google test suppresses this class of failure.

The code below will successfully demonstrate the behavior, printing out the SIMD load instruction before and after calling the function with an unaligned pointer.

[Gist]

February 02, 2018

Welcome Back!

Well, most of our schools are back, or about to start the new year. Did you know that there are schools using OpenSTEM materials in every state and territory of Australia? Our wide range of resources, especially those on Australian history, give detailed information about the history of all our states and territories. We pride […]

February 01, 2018

Querying Installed Package Versions Across An Openstack Cloud

AKA: The Joy of juju run

Package upgrades across an OpenStack cloud do not always happen at the same time. In most cases they may happen within an hour or so across your cloud but for a variety of reasons, some upgrades may be applied inconsistently, delayed or blocked on some servers.

As these packages may be rolling out a much needed patch or perhaps carrying a bug, you may wish to know which services are impacted in fairly short order.

If your OpenStack cloud is running Ubuntu and managed by Juju and MAAS, here's where juju run can come to the rescue.

For example, perhaps there's an update to the Corosync library libcpg4 and you wish to know which of your HA clusters have what version installed.

From your Juju controller, create a list of servers managed by Juju:

Juju 1.x:

$ juju stat --format tabular > jsft.out

Now you could fashion a query like this, utilising juju run:

$ for i in $(egrep -o '[a-z]+-hacluster/[0-9]+' jsft.out | cut -d/ -f1 | sort -u);
do juju run --timeout 30s --service $i "dpkg-query -W -f='\${Version}' libcpg4" | \
python -c 'import yaml,sys;print("\n".join(["{} == {}".format(y["Stdout"], y["UnitId"]) for y in yaml.safe_load(sys.stdin)]))';
done

The output returned will look something like this:

2.3.3-1ubuntu4 == ceilometer-hacluster/1
2.3.3-1ubuntu4 == ceilometer-hacluster/0
2.3.3-1ubuntu4 == ceilometer-hacluster/2
2.3.3-1ubuntu4 == cinder-hacluster/0
2.3.3-1ubuntu4 == cinder-hacluster/1
2.3.3-1ubuntu4 == cinder-hacluster/2
2.3.3-1ubuntu4 == glance-hacluster/3
2.3.3-1ubuntu4 == glance-hacluster/4
2.3.3-1ubuntu4 == glance-hacluster/5
2.3.3-1ubuntu4 == keystone-hacluster/1
2.3.3-1ubuntu4 == keystone-hacluster/0
2.3.3-1ubuntu4 == keystone-hacluster/2
2.3.3-1ubuntu4 == mysql-hacluster/1
2.3.3-1ubuntu4 == mysql-hacluster/2
2.3.3-1ubuntu4 == mysql-hacluster/0
2.3.3-1ubuntu4 == ncc-hacluster/1
2.3.3-1ubuntu4 == ncc-hacluster/0
2.3.3-1ubuntu4 == ncc-hacluster/2
2.3.3-1ubuntu4 == neutron-hacluster/2
2.3.3-1ubuntu4 == neutron-hacluster/1
2.3.3-1ubuntu4 == neutron-hacluster/0
2.3.3-1ubuntu4 == osd-hacluster/0
2.3.3-1ubuntu4 == osd-hacluster/1
2.3.3-1ubuntu4 == osd-hacluster/2
2.3.3-1ubuntu4 == swift-hacluster/1
2.3.3-1ubuntu4 == swift-hacluster/0
2.3.3-1ubuntu4 == swift-hacluster/2

Juju 2.x:

$ juju status > jsft.out

Now you could fashion a query like this:

$ for i in $(egrep -o 'hacluster-[a-z]+/[0-9]+' jsft.out | cut -d/ -f1 |sort -u);
do juju run --timeout 30s --application $i "dpkg-query -W -f='\${Version}' libcpg4" | \
python -c 'import yaml,sys;print("\n".join(["{} == {}".format(y["Stdout"], y["UnitId"]) for y in yaml.safe_load(sys.stdin)]))';
done

The output returned will look something like this:

2.3.5-3ubuntu2 == hacluster-ceilometer/1
2.3.5-3ubuntu2 == hacluster-ceilometer/0
2.3.5-3ubuntu2 == hacluster-ceilometer/2
2.3.5-3ubuntu2 == hacluster-cinder/1
2.3.5-3ubuntu2 == hacluster-cinder/0
2.3.5-3ubuntu2 == hacluster-cinder/2
2.3.5-3ubuntu2 == hacluster-glance/0
2.3.5-3ubuntu2 == hacluster-glance/1
2.3.5-3ubuntu2 == hacluster-glance/2
2.3.5-3ubuntu2 == hacluster-heat/0
2.3.5-3ubuntu2 == hacluster-heat/1
2.3.5-3ubuntu2 == hacluster-heat/2
2.3.5-3ubuntu2 == hacluster-horizon/0
2.3.5-3ubuntu2 == hacluster-horizon/1
2.3.5-3ubuntu2 == hacluster-horizon/2
2.3.5-3ubuntu2 == hacluster-keystone/0
2.3.5-3ubuntu2 == hacluster-keystone/1
2.3.5-3ubuntu2 == hacluster-keystone/2
2.3.5-3ubuntu2 == hacluster-mysql/0
2.3.5-3ubuntu2 == hacluster-mysql/1
2.3.5-3ubuntu2 == hacluster-mysql/2
2.3.5-3ubuntu2 == hacluster-neutron/0
2.3.5-3ubuntu2 == hacluster-neutron/2
2.3.5-3ubuntu2 == hacluster-neutron/1
2.3.5-3ubuntu2 == hacluster-nova/1
2.3.5-3ubuntu2 == hacluster-nova/2
2.3.5-3ubuntu2 == hacluster-nova/0

You can of course substitute libcpg4 in the above query for any package that you need to check.
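
The in-line python in those loops is fairly dense, so here is the same parsing written out as a standalone script. This is purely a readability aid: it expects the YAML that juju run prints, exactly as produced by the commands above, and needs PyYAML just like the one-liner.

#!/usr/bin/env python
# Readable version of the in-line parser above: read the YAML that
# "juju run" writes to stdout and report the package version per unit.
import sys
import yaml

for unit in yaml.safe_load(sys.stdin):
    # Each entry from "juju run" carries the command output and the unit name.
    print("{} == {}".format(unit["Stdout"], unit["UnitId"]))

Save it as (say) show_versions.py and pipe the juju run output from either loop above straight into it.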

By far and away my most favourite feature of Juju at present, juju run reminds me of knife ssh, which is unsurprisingly one of my favourite features of Chef.

January 27, 2018

Turning stories into software at LCA2018

Donna speaking in front of a large screen showing a survey and colourful graph. Photo Credit: Josh Simmons
I love free software, but sometimes, I feel, that free software does not love me.
 
Why is it so hard to use? Why is it still so buggy? Why do the things I can do simply with other tools, take so much effort? Why is the documentation so inscrutable?  Why have all the config settings been removed from the GUI? Why does this HowTo assume I can find a config file, and edit it with VI? Do I have to learn to use VI before I can stop my window manager getting in the way of the application I’m trying to use?
 
Tis a mystery. Or is it?
 
It’s fair to say that the Free Software community is still largely made up of blokes who are software developers. The idea that “user centered design” is a “Good Thing” is not evenly distributed. In fact, some seem to think it’s not a good thing at all, “patches welcome” they say, “go fix it yourself”.
 
The web community on the other hand, has discovered that the key to their success is understanding and meeting the needs of the people who use their software. Ideological purity is great, but enabling people to meet their objectives, is better.
 
As technologists, we get excited by technology. Of course we do! Technology is modern magic. And we are wizards. It’s wonderful. But the people who use our software are not necessarily interested in the tech itself, they probably just want to use it to get something done. They probably don’t even care what language it’s written in.
 
Let’s say a customer walks into a hardware store and says they want a drill.  Or perhaps they walk in and stand in front of a shelf simply contemplating a dizzying array of drills, drill bits and other accessories. Which one is right for the job they wonder. Should I get a cordless one? Will I really need diamond tipped drill bits? 
 
There's a technique called the 5 Why's that's useful to get under the surface of a requirement. The idea is, you keep asking why until you uncover the real reason for a request, need, feature or widget. For example, we could ask this customer...
 
Why do you want this drill? To drill a hole. 
Why? To hang a picture on my wall.  
Why? To be able to share and enjoy this amazing photo from my recent holiday.
 
So we discover our customer did not, in fact, want a drill. Our customer wanted to express something about their identity by decorating their home.  So telling them all about the voltage of the drill, and the huge range of drill bits available, may have helped them choose the right drill for the job, but if we stop to understand the job in the first place, we’re more likely to be able to help that person get what they need to get their job done.
 
User stories are one way we can explore the “Why” behind the software we build. Check out my talk from the Developers Developers miniconf at linux.conf.au on Monday “Turning stories, into software.”
 

 

References

 

Photo by Josh Simmons

January 26, 2018

Linux.conf.au 2018 – Day 5 – Light Talks and Close

Lightning Talk

  • Usability Fails
  • Etching
  • Diverse Events
  • Kids Space – fairly unstructured and self organising
  • Opening up LandSat imagery – NBAR-T available on NCI
  • Project Nacho – HTML -> VPN/RDP gateway, Apache Guacamole
  • Vocaloids
  • Blockchain
  • Using j2 to create C++ code
  • Memory model code update
  • CLIs are user interface too
  • Complicated git things
  • Mollygive – matching donations
  • Abusing Docker

Closing

  • LCA 2019 will be in Christchurch, New Zealand – http://lca2019.linux.org.au
  • 700 Attendees at 2018
  • 400 talk and 36 Miniconf submissions

 

 

Share

Linux.conf.au 2018 – Day 5 – Session 2

QUIC: Replacing TCP for the Web Jana Iyengar

  • History
    • Protocol for http transport
    • Deployed Inside Google 2014 and Chrome / mobile apps
    • Improved performance: YouTube rebuffers reduced 15-18%, Google search latency reduced 3.6-8%
    • 35% of Google’s egress traffic (7% of Internet)
    • Working group started in 2016 to standardized QUIC
    • Turned off at the start of 2016 due to a security problem
    • Doubled in Sept 2016 when turned on for the YouTube app
  • Technology
    • Previously – IP -> TCP -> TLS -> HTTP/2
    • With QUIC – IP -> UDP -> QUIC -> HTTP over QUIC
    • Includes crypto and tcp handshake
    • congestion control
    • loss recovery
    • TLS 1.3 has some of the same features that QUIC pioneered; QUIC is being updated to take this into account
  • HTTP/1
    • 1 trip for TCP
    • 2 trips for TLS
    • Single connection – Head Of Line blocking
    • Multiple TCP connections workaround.
  • HTTP/2
    • Streams within a single transport connection
    • Packet loss will stall the TCP layer
    • Unresolved problems
      • Connection setup latency
      • Middlebox interference with TCP – makes it hard to change TCP
      • Head of line blocking within TCP
  • QUIC
    • Connection setup
      • 0 round trips, handshake packet followed directly by data packet
      • 1 round trip if crypto keys are not new
      • 2 round trips if QUIC version needs renegotiation
    • Streams
      • http/2 streams are sent as quic streams
  • Aspirations of protocol
    • Deployable and evolvable
    • Low latency connection establishment
    • Stream multiplexing
    • Better loss recovery and flexible congestion control
      • richer signalling (unique packet number)
      • better RTT estimates
    • Resilience to NAT-rebinding ( UDP Nat-mapping changes often, maybe every few seconds)
  • UDP is not a transport, you put something in top of UDP to build a transport
  • Why not a new protocol instead of UDP? Almost impossible to get a new protocol in middle boxes around the Internet.
  • Metrics
    • Search Latency (see paper for other metrics)
    • Enter search term > entire page is loaded
    • Mean: desktop improve 8% , mobile 3.6 %
    • Low latency: Desktop 1% , Mobile none
    • Highest Latency 90-99% of users: Desktop & mobile 15-16%
    • Video similar
    • Big gain is from 0 RTT handshake
  • QUIC – Search Latency Improvements by Country
    • South Korea – 38ms RTT – 1% improvement
    • USA – 50ms – 2 – 3.5 %
    • India – 188ms – 5 – 13%
  • Middlebox ossification
    • Vendor ossified first byte of QUIC packet – flags byte
    • since it seemed to be the same on all QUIC packets
    • broke QUIC deployment when a flag was fixed
    • Encryption is the only way to protect against network ossification
    • “Greasing” by randomly changing options is also an option.
  • Other Protocols over QUIC?
    • Concentrating on http/2
    • Looking at Web RPC

Remote Work: My first decade working from the far end of the earth John Dalton

  • “Remote work has given me a fulfilling technical career while still being able to raise my family in Tasmania”
  • First son born in 2015; wanted to stay in Tasmania with family to raise them, rather than moving to a tech hub.
  • 2017 working with High Performance Computing at University Tasmania
  • If everything is going to be outsourced, I want to be the one they outsourced to.
  • Wanted to do big web stuff, nobody in Tasmania doing that.
  • Was a user at LibraryThing
    • They were searching for Sysadmin/DBA in Portland, Maine
    • Knew he could do the job even though was on other side of the world
    • Negotiated into it over a couple of months
    • Knew could do the work, but not sure how the position would work out

Challenges

  • Discipline
    • Feels he is not organised. Doesn’t keep planner up to date or todo lists etc
    • “You can spend a lot of time reading about time management without actually doing it”
    • Do you need to have the minimum level
  • Isolation
    • Lives 20 minutes out of Hobart
    • In semi-rural area for days at a time, doesn’t leave house all week except to ferry kids on weekends.
    • “Never considered myself an extrovert, but I do enjoy talking to people at least weekly”
    • Need to work to hook in with Hobart tech community, Goes to meetups. Plays D&D with friends.
    • Considering going to coworking space. sometimes goes to Cafes etc
  • Setting Boundaries
    • Hard to Leave work.
    • Have a dedicated work space.
  • Internet Access
    • Prioritise Coverage over cost these days for mobile.
    • Sometimes the fixed provider goes down, need to have a backup
  • Communication
    • Less random communication with other employees
    • Cannot assume any particular knowledge when talking with other people
    • Aware of particular cultural differences
    • Multiple chances of miscommunication

Opportunities

  • Access to companies, jobs and technologies that he couldn’t get locally
  • Access to people with a wider range of experiences and backgrounds

Finding remote work

  • Talk your way into it
  • Networking
  • Job Bof
  • stackoverflow.com/jobs can filter
  • weworkremotely.com

Making it work

  • Be Visible
  • Go home at the end of the day
  • Remember real people are at the end of the email

 

Share

Linux.conf.au 2018 – Day 5 – Session 1

Self-Documenting Coders: Writing Workshop for Devs Heidi Waterhouse

History of Technical documentation

  • Linear Writing
    • On Paper, usually books
    • Emphasis on understanding and doing
  • Task-based writing
    • Early 90s
    • DITA
    • Concept, Procedure, Reference
  • Object-orientated writing
    • High art form of tech writers
    • Content as code
    • Only works when compiled
    • Favoured by tech writers, translated. Up to $2000 per seat
  • Guerilla Writing
    • Stack Overflow
    • Wikis
    • YouTube
    • frustrated non-writers trying to help peers
  • Search-first writing
    • Every page is page one
    • Search-index driven

Writing Words

  • 5 W’s of journalism.
  • Documentation needs to be tested
  • Audiences
    • eg Users, future-self, Sysadmins, experts, End users, installers
  • Writing Basics
    • Sentences short
    • Graphics for concepts
    • Avoid screencaps (too easily outdated)
    • Use style guides and linters
    • Accessibility is a real thing
  • Words with pictures
    • Never include settings only in an image ( “set your screen to look like this” is bad)
    • Use images for concepts not instructions
  • Not all your users are readers
    • Can’t see well
    • Can’t parse easily
    • Some have terrible equipment
    • Some of the “some people” is us
    • Accessibility is not a checklist, although that helps, it is us
  • Using templates to write
    • Organising your thoughts and avoid forgetting parts
    • Add a standard look at low mental cost
  • Search-first writing – page one
    • If you didn’t answer the question or point to the answer you failed
    • answer “How do I?”
  • Indexing and search
    • All the words present are indexed
    • No false pointers
    • Use words people use and search for, Don’t use just your internal names for things
  • Semantic tagging and reuse
    • Semantic text splits form and content
    • Semantic tagging allows reuse
    • Reuse saves duplication
    • Reuse requires compiling
  • Sorting topics into buckets
    • Even with search you need some organisation
    • Group items by how they get used, not by how they get programmed
    • Grouping similar items allows serendipity
  • Links, menus and flow
    • give people a next step
    • Provide related info on same page
    • show location
    • offer a chance to see the document structure

Distributing Words

  • Static Sites
  • Hosted Sites
  • Baked into the product
    • Only available to customers
    • only updates with the product
    • Hard to encourage average user to input
  • Knowledge based / CMS
    • Useful to community that known what it wants
    • Prone to aging and rot
    • Sometimes diverges from published docs or company message
  • Professional Writing Tools
    • Shiny and powerful
    • Learning Cliff
    • IDE
    • Super features
    • Not going to happen again
  • Paper-ish things
    • Essential for some topics
    • Reassuring to many people
    • touch is a sense we can bond with
    • Need to understand if people using docs will be online or offline when they want them.
  • Using templates to publish
    • Unified look and feel
    • Consistency and not missing things
    • Built-in checklist

Collaborating on Words

  • One weird trick, write it up as your best guess and let them correct it
  • Have a hack day
    • Set a goal of things to delete
    • Set a goal of things to fix
    • Keep track of debt you can’t handle today
    • team-building doesn’t have to be about activities

Deleting Words

  • What needs to go
    • Old stuff that is wrong and terrible
    • Wrong stuff that hides right stuff
  • What to delete
    • Anything wrong
    • Anything dangerous
    • Anything not used or updated in a year
  • How
    • Delete temporarily (put aside for a while)
    • Based on analytics
    • Ruthlessly
    • Delete or update

Documentation Must be

  • True
  • Timely
  • Testable
  • Tuned

Documentation Components

  • Who is reading and why
    • Assuming no one likes reading docs
    • What is driving them to be here
  • Pre Requisites
    • What does a user need to succeed
    • Can I change the product to reduce documentation
    • Is there any hazard in this process
  • How do I do this task
    • Steps
    • Results
    • Next steps
  • Test – How do I know that it worked
    • If you can’t test it, it is not a procedure
    • What will the system do, how does the state change
  • Reference
    • What other stuff that affects this
    • What are the optional settings
    • What are the related things
  • Code and code samples
    • Best: code you can modify and run in the docs
    • 2nd Best: Code you can copy easily
    • Worst: retyping code
  • Option
    • Why did we build it this way
    • What else might you want to know
    • Have other people done this
    • Lifecycle

Documentation Types

  • Instructions
  • Ideas (arch, problem space,discarded options, process)
  • Action required (release notes, updates, deprecation)
  • Historical (roads maps, projects plans, retrospective documents)
  • Invisible docs (user experience, microinteractions, error messages)
    • Error messages – Unique ID, what caused, What mitigation, optional: Link to report
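
As a rough illustration of that last point, here is a minimal sketch of an error message that always carries a unique ID, the cause, a mitigation and an optional report link. The class name, ID and setting names are invented for illustration, not something from the talk.

class ConfigError(Exception):
    """An error that always carries an ID, a cause, a mitigation and an optional report link."""
    def __init__(self, error_id, cause, mitigation, report_url=None):
        message = "[{}] {} Mitigation: {}".format(error_id, cause, mitigation)
        if report_url:
            message += " Report: {}".format(report_url)
        super(ConfigError, self).__init__(message)

# Example of the message as a user would see it:
print(ConfigError(
    "CFG-0042",
    "The setting 'listen_port' is missing from app.conf.",
    "Add 'listen_port = 8080' (or another free port) and restart the service.",
    report_url="https://example.org/report"))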

 

Share

Linux.conf.au 2018 – Day 5 – Keynote – Jess Frazelle

Keynote: Containers aka crazy user space fun

  • Work at Microsoft on Open Source and containers, specifically on kubernetes
  • Containers vs Zones vs Jails vs VMs
  • Containers are not a first class concept in the kernel.
    • Namespaces
    • Cgroups
    • AppArmour in LSM (prevent mounting, writing to /proc etc) (or SELinux)
    • Seccomp (syscall filters specifying which are allowed or denied) – Prevents 150 other syscalls which are uncommon or dangerous.
      • Got list from testing all of dockerhub
      • eg CLONE, UNSHARE
      • NoNewPrivs (exposed as “AllowPrivilegeEsculation” in K8s)
      • rkt and systemd-nspawn don’t 100% follow
  • Intel Clear containers are really VMs

History of Containers

  • OpenVZ – released 2005
  • Linux-Vserver (2008)
  • LXC ( 2008)
  • Docker ( 2013)
    • Initially used LXC as a backend
    • Switched to libcontainer in v0.7
  • lmctfy (2013)
    • By Google
  • rkt (2014)
  • runc (2015)
    • Part of Open container Initiative
  • Container runtimes are like the new Javascript frameworks

Are Containers Secure

  • Yes
  • and I can prove it
  • VMs / Zones and Jails are like all the Lego pieces already glued together
  • With containers you have the parts separate
    • You can turn on and off certain namespaces
    • You can share namespaces between containers
    • Every container in k8s shares PID and NET namespaces
    • Docker has sane defaults
    • You can sandbox apps even further though
  • https://contained.af/
    • No one has managed to break out of the container
    • Has a very strict seccomp profile applied
    • You’d be better off attacking the app, but you are still running under a container’s default seccomp filters
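
As a small illustration of turning those knobs yourself, here is a sketch of starting a container with no-new-privileges and an explicit seccomp filter instead of the defaults. It assumes the Docker SDK for Python and a seccomp.json profile you have written (for example one that also denies clone and unshare); neither comes from the talk.

import docker  # the Docker SDK for Python ("pip install docker")

client = docker.from_env()

# Load a custom seccomp profile from disk; the API takes the JSON content.
with open("seccomp.json") as f:
    profile = f.read()

output = client.containers.run(
    "alpine:latest",
    command=["uname", "-a"],
    security_opt=["no-new-privileges", "seccomp=" + profile],
    remove=True)
print(output.decode())

The same options map directly onto docker run --security-opt on the command line.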

Containerizing the Desktop

  • Switched to runc from docker (had to convert stuff)
  • rootless containers
  • Runc hook “netns” to do networking
  • Sandboxed desktop apps, running in containers
  • Switch from Debian to CoreOS Container Linux as base OS
    • Verify the integrity of the OS
    • Just had to add graphics drivers
    • Based on gentoo, emerge all the way down

What if we applied the the same defaults to programming languages?

  • Generate seccomp filters at build-time
    • Previously tried at run time, doesn’t work that well, something always missed
    • At build time we can ensure all code is included in the filter
    • The go compiler writes the assembly for all the syscalls, you can hijack and grab the list of these, create a seccomp filter
      • Not quite that simple
      • plugins
      • exec external stuff
      • can directly exec a syscall in go code, the name passed in via arguments at runtime
  • metaparticle.io
    • Library for cloud-native applications

Linux Containers in secure enclaves (SCONE)

  • Currently Slow
  • Lots of tradeoffs on what executes where (trusted area or untrusted area)

Soft multi-tenancy

  • Reduced threat model, users not actively malicious
  • Hard Multi-tenancy would have potentially malicious containers running next to others
  • Host OS – eg CoreOs
  • Container Runtime – Look at glasshouse VMs
  • Network – Lots to do, default deny in k8s is a good start
  • DNS – Needs to be namespaced properly or turned off. option: kube-dns as a sidecar
  • Authentication and Authorisation – rbac
  • Isolation of master and System nodes from nodes running containers
  • Restricting access to host resources (k8s hostpath for volumes, pod security policy)
  • making sure everything else is “very dumb” to its surroundings

 

Share

January 25, 2018

Linux.conf.au 2018 – Day 4 – Session 3

Insights – solving every problem for good Paul Wayper

Sysadmins

  • Too much to check, too little time
  • What does this message mean again
  • Too reactive

How Sysadmins fix problems

  • Read text files and command output
  • Look at them for information
  • Check this information against their knowledge
  • Decide on an appropriate solution

Insights

  • Reads text files and outputs
  • Processes them into information
  • Uses the information in rules
  • Rules provide information about the solution

Examples

  • Simple rule – check “localhost” is in /etc/hosts
  • Rule 2 – chronyd refuses to fix the server’s time since it is out by more than 1000s
    • Checks /var/log/message for error message from chrony
  • Insights rolls up all the checks against messages, so it is only done once
  • Rule 3 – rsyslog dropping messages
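
To make the first example concrete, here is the check logic in plain Python. This is a sketch only: real Insights rules are written against the Insights rule API rather than reading files directly, but the underlying test is the same.

def localhost_missing(hosts_text):
    """Return True if no non-comment line in an /etc/hosts style file names localhost."""
    for line in hosts_text.splitlines():
        fields = line.split("#", 1)[0].split()   # drop comments, split on whitespace
        if len(fields) >= 2 and "localhost" in fields[1:]:
            return False
    return True

if __name__ == "__main__":
    with open("/etc/hosts") as f:
        if localhost_missing(f.read()):
            print("PROBLEM: no 'localhost' entry found in /etc/hosts")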

Website

http://red.ht/demo_rules

 

Share

Linux.conf.au 2018 – Day 4 – Session 2

Personalisation at Scale: A “Cookie Cutter” Approach Jim O’Halloran

  • Impact of site performance on conversion is huge
  • Magento
    • LAMP stack + Redis or memcached
    • Generally the app is CPU bound
    • Routing / Rendering still time consuming
  • Varnish full page caching (FPC)
  • But what about personalised content?
  • Edge Side Includes (ESIs)
    • But ESIs run in series, which is slow when you have many
    • Content is not cacheable, expensive to calculate, significant render time
    • ESI therefore undermines much advantage of FPC
  • Ajax
    • Make ajax request and fetch personalised content
    • Still load on backend
    • ESI limitations plus added network latency
  • Cookie Cutter
    • When an event occurs that modifies personalisation state, send a cookies containing the required data with the response.
    • In the browser, use the content of that cookie to update the page

Example

  • Goto www.example.com
    • Probably cached in varnish
    • I don’t have a cookie
    • If I login, uncachable request, I am changing login state
    • Response includes Set-Cookie header creating a personalised cookie
  • Advantages
    • No backend requests
    • Page data served is cached always
  • How big can cookies be?
    • RFC 6265 has limits but in reality
    • Actual limit ~4096 bytes per cookie
    • Some older browsers also limit to ~4096 bytes total per domain
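
Outside Magento the pattern looks something like the following minimal Flask sketch. The cookie name and fields are invented for illustration and this is not the Magento/Varnish implementation described in the talk: the point is simply that the page is served identically to everyone (and is therefore cacheable), while the state-changing login response sets a cookie that client-side JavaScript reads to personalise the page.

from flask import Flask, request, make_response

app = Flask(__name__)

# Identical HTML for every visitor; the script reads the personalisation
# cookie (if any) and updates the page in the browser.
PAGE = """<html><body>
<p id="greeting">Welcome, guest</p>
<script>
  var m = document.cookie.match(/(?:^|; )personalisation=([^;]*)/);
  if (m) {
    document.getElementById("greeting").textContent =
        "Welcome, " + decodeURIComponent(m[1]);
  }
</script>
</body></html>"""

@app.route("/")
def home():
    # Same bytes for every visitor, so a front-end cache like Varnish can serve it.
    resp = make_response(PAGE)
    resp.headers["Cache-Control"] = "public, max-age=300"
    return resp

@app.route("/login", methods=["POST"])
def login():
    # Uncacheable request that changes personalisation state: set the cookie here.
    name = request.form.get("name", "guest")
    resp = make_response("logged in")
    resp.set_cookie("personalisation", name, max_age=3600)
    return resp

if __name__ == "__main__":
    app.run()

The important property is the one from the talk: the cached page never varies per user, and only the state-changing request touches the backend.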

Potential issues

  • Request Size
    • Keep cookies small
      • Store small values only, No pre-rendered markup, No larger data structures
    • Serve static assets via CDN
    • Lot of stuff in cart can get huge
  • Information leakage
    • Final URLs leaked to unlogged in users
  • Large Scale changes
    • Page needs to look completely different to different users
    • Vary headers might be an option
  • Formkeys
    • XSRF protection workarounds
  • What about cache misses
    • Magento assembles all its pages from a series of blocks
    • Most parts of page are relatively static (block cache)
    • Aligent_CacheObserver – Magento extension that adds cache tags to blocks that should be cached but were not picked up as cacheable by default
    • Aoe_TemplateHints – Visibility into Block cache
    • Caching != Performance Optimisation – Aoe_Profiler

Availability

  • Plugin available for Magento 1
    • Varnish CookieCutter
  • Magento 2 has native Varnish support
    • But has limitations
    • Maybe some of the CookieCutter stuff could improve it

Future

  • localStorage instead of cookies


 

Share