Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

August 20, 2017

This Week in HASS – term 3, week 7

This week students are starting the final sections of their research projects and Scientific Reports. Our younger students are also preparing to set up a Class Museum.

Foundation/Prep/Kindy to Year 3

Our youngest students (Unit F.3) also complete a Scientific Report. By becoming familiar with the overall layout and skills associated with the scientific process at a young age, students will find the process second nature by the time they reach high school, with their skills already fine-tuned. This week teachers discuss how Science helps us find out things about the world. Teachers and students are also collecting material to form a Class Museum. Students in integrated, multi-age classes (Unit F-1.3) and Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) are undertaking a similar set of activities this week, though in increasing depth as appropriate for each year level, and with different subject matter, according to the class focus. By Year 3 (Unit 3.3), students are writing full sentences and even short paragraphs in their Scientific Report, focusing on a topic in the local history of their community or capital city.

Years 3 to 6

Students in integrated Year 3/4 classes (Unit 3.7) and those in Year 4 (Unit 4.3), 5 (Unit 5.3) and 6 (Unit 6.3) are concentrating on analysis of data this week, for the final stages of their Scientific Report. By now students are expected to have gathered information on their chosen research topic, an aspect of Australian history, and to be analysing this information in order to answer their research questions and start to draw conclusions about their topic. This week’s lessons focus on pulling everything together towards a full, final report. Teachers are able to quickly identify which students need extra guidance by referring to the Student Workbook, which tracks each student’s progress on a weekly basis. Thus feedback, intervention and additional support can be offered timeously, before the term marks are collated, allowing each student the chance to achieve their best.

Each year level focuses on a different aspect of Australian history and enough topics are supplied to ensure that each student is working on new information, even in multi-age classes. Instead of finding a continual stream of new, novel HASS units, or repeating material some students have covered before, OpenSTEM’s Understanding Our World® program allows teachers to tailor the same units to look different for each year level, thus ensuring that students are practising their skills on new material, as well as covering year-level appropriate skills and content. By the time students are in Year 6, they will have covered the full suite of Australian History up to the 20th century, as well as having studied each continent in turn. Civics and Citizenship and Economics and Business form part of this integrated whole and do not have to be taught separately. Students will be ready to enter high school with a full suite of honed research and problem-solving skills, as well as having covered the core material necessary.

CNC Z Axis with 150mm or more of travel

Many of the hobby-priced CNC machines have limited Z axis movement. This, coupled with limited clearance on the gantry, leaves few options for work fixtures. For example, it is very unlikely that there will be clearance for a vice on the cutting bed of a cheap machine.

I started tinkering around with a Z axis assembly which offers around 150mm of travel. The assembly also uses bearing blocks that should help resist the forces that drilling and cutting can impose.


The assembly is designed to be as thin as possible. The spindle mount is a little wider, which allows easy bolting onto the spindle mount plate that attaches to these bearings and drive nut. The width of the assembly is important because it will limit the travel in the Y axis if it can contact the gantry in any way.

Construction is mainly done in 1/4 and 1/2 inch 6061 alloy. The black bracket at the bottom is steel. This seemed like a reasonable choice since that bracket was going to be key to holding the weight and attachment to the gantry.

The Z axis shown above needs to be combined with a gantry height extension when attaching to a hobby CNC to be really effective. Using a longer travel Z axis like this would allow higher gantries which combined allow for easier fixturing and also pave the way for a 4/5th axis to fit under the cutter.

August 13, 2017

This Week in HASS – term 3, week 6

This week all our students are hard at work examining the objects they are using for their research projects. For the younger students these are objects that will be used to generate a Class Museum. For the older students, the objects of study relate to their chosen topic in Australian History.

Foundation / Prep / Kindy to Year 3

Students in Foundation/Prep/Kindy (Unit F.3) are examining items from the past and completing their Scientific Report by drawing these items in the Method section of the report. We also ask students to analyse their Data by drawing a picture of how people would have used that item in the past. Students in combined Foundation/Prep/Kindy and Year 1 classes (Unit F-1.3), as well as students in Year 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3), are also addressing the Method, Data and Analysis sections of their report by listing, describing and drawing the sources and information which the teacher has helped them to locate. The sources should include items which can be used to make a Class Museum, as well as old photographs, paintings, books, newspapers etc. Teachers can guide class discussions around how items were used in the past, which are familiar and which are not, and compare them with the stories read in the first weeks of term.

Years 3 to 6

Older students are expected to analyse their Data in increasing detail relevant to their year-level, as well as listing sources in the Method section of their Scientific Reports. Students in Year 4 (Unit 4.3) are researching a topic from Australia at the time of contact with Europeans, which includes topics in Aboriginal and early colonial history. Students should consider each source and what information they can get from the source. In addition students should think about how objects, pictures and texts were used in the past and what inherent biases might be present. Students in Year 5 (Unit 5.3) are researching a topic from Australian colonial history. Teachers should guide students through the process of determining whether they are dealing with a primary or secondary source, as well as how to use that source to learn more about the past. Inherent bias in different sources should be discussed. Students in Year 6 (Unit 6.3) are researching a topic surrounding Federation and events in Australia in the early 20th century. Many of the sources available contain both primary and secondary information and students should be starting to develop an understanding of how to use, analyse and reference these sources. In preparation for the requirements of high school, teachers should guide these students through the process of building an interpretation of their analysis which is substantiated through reference to their sources (listed in the Bibliography of their report). Students should be able to show where they got their information and how they are interpreting that information. For students in Year 6, the Student Workbook is more of a guide for writing a complete Scientific Report, which they are expected to compile more or less independently.

August 10, 2017

Tools for talking

I gave a talk a couple of years ago called Tools for Talking.

I'm preparing a new talk, which, in some ways, is a sequel to this one. As part of that prep, I thought it might be useful to write some short summaries of each of the tools outlined here, with links to resources on them.

  • Powerful Non Defensive Communication
  • Non Violent Communication
  • Active Listening
  • Appreciative Inquiry
  • Transactional Analysis
  • The Drama Triangle vs The Empowerment Dynamic
  • The 7 Cs

So I might try to make a start on that over the next week or so.

 

In the meantime, here are the slides:

And here's the video of the presentation at DrupalCon Barcelona.

Larger format CNC

Having access to a wood cutting CNC machine that can do a full sheet of plywood at once has led me to an initial project for a large sconce stand. The sconce is 210mm square at the base and the DAR ash I used was 140mm across. This led to the four edge-grain glue-ups in the middle of the stand.


The design was created in Fusion 360 by just seeing what might look good. Unfortunately the sketch export as DXF presented some issues on the import side. This was part of why a smaller project like this was a good first choice rather than a more complex whole sheet of ply.

To get around the DXF issue the tip was to select a face of a body and create a sketch from that face, then export the created sketch as DXF, which seemed to work much better. I don't know what I had in the original sketch that I created the body from that the DXF export/import didn't like. Maybe the dimensions, maybe the guide lines; hard to know without a bisect. The CNC was using the EnRoute software, so I had to work out how to bounce things from Fusion over to EnRoute and then get some help to reCAM things on that side and set up tabs et al.

One tip for others would be to use the DAR timber to form a glue-up before arriving at a facility with a larger cut surface. Fewer pieces means fewer tabs/bridges and easier reCAM. A preformed glued-up panel would also have let me use more advanced designs such as n and u slots to connect two pieces instead of edge grains to connect four.

Overall it was a fun build and the owner of the sconce will love having it slightly off the table top so it can more easily be seen.

pristine-tar and git-buildpackage Work-arounds

I recently ran into problems trying to package the latest version of my planetfilter tool.

This is how I was able to temporarily work-around bugs in my tools and still produce a package that can be built reproducibly from source and that contains a verifiable upstream signature.

pristine-tar is unable to reproduce a tarball

After importing the latest upstream tarball using gbp import-orig, I tried to build the package but ran into this pristine-tar error:

$ gbp buildpackage
gbp:error: Pristine-tar couldn't checkout "planetfilter_0.7.4.orig.tar.gz": xdelta3: target window checksum mismatch: XD3_INVALID_INPUT
xdelta3: normally this indicates that the source file is incorrect
xdelta3: please verify the source file with sha1sum or equivalent
xdelta3 decode failed! at /usr/share/perl5/Pristine/Tar/DeltaTools.pm line 56.
pristine-tar: command failed: pristine-gz --no-verbose --no-debug --no-keep gengz /tmp/user/1000/pristine-tar.mgnaMjnwlk/wrapper /tmp/user/1000/pristine-tar.EV5aXIPWfn/planetfilter_0.7.4.orig.tar.gz.tmp
pristine-tar: failed to generate tarball

So I decided to throw away what I had, re-import the tarball and try again. This time, I got a different pristine-tar error:

$ gbp buildpackage
gbp:error: Pristine-tar couldn't checkout "planetfilter_0.7.4.orig.tar.gz": xdelta3: target window checksum mismatch: XD3_INVALID_INPUT
xdelta3: normally this indicates that the source file is incorrect
xdelta3: please verify the source file with sha1sum or equivalent
xdelta3 decode failed! at /usr/share/perl5/Pristine/Tar/DeltaTools.pm line 56.
pristine-tar: command failed: pristine-gz --no-verbose --no-debug --no-keep gengz /tmp/user/1000/pristine-tar.mgnaMjnwlk/wrapper /tmp/user/1000/pristine-tar.EV5aXIPWfn/planetfilter_0.7.4.orig.tar.gz.tmp
pristine-tar: failed to generate tarball

I filed bug 871938 for this.

As a work-around, I simply symlinked the upstream tarball I already had and then built the package using the tarball directly instead of the upstream git branch:

ln -s ~/deve/remote/planetfilter/dist/planetfilter-0.7.4.tar.gz ../planetfilter_0.7.4.orig.tar.gz
gbp buildpackage --git-tarball-dir=..

Given that only the upstream and master branches are signed, the .delta file on the pristine-tar branch could be fixed at any time in the future by committing a new .delta file once pristine-tar gets fixed. This therefore seems like a reasonable work-around.
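Once a fixed pristine-tar is available, regenerating the delta should just be a matter of committing the tarball again; a rough sketch, assuming the usual upstream/<version> tag naming:

pristine-tar commit ../planetfilter_0.7.4.orig.tar.gz upstream/0.7.4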

git-buildpackage doesn't import the upstream tarball signature

The second problem I ran into was a missing upstream signature after building the package with git-buildpackage:

$ lintian -i planetfilter_0.7.4-1_amd64.changes
E: planetfilter changes: orig-tarball-missing-upstream-signature planetfilter_0.7.4.orig.tar.gz
N: 
N:    The packaging includes an upstream signing key but the corresponding
N:    .asc signature for one or more source tarballs are not included in your
N:    .changes file.
N:    
N:    Severity: important, Certainty: certain
N:    
N:    Check: changes-file, Type: changes
N: 

This problem (and, I suspect, the lintian error) is fairly new and hasn't been solved yet.

So until gbp import-orig gets proper support for upstream signatures, my work-around was to copy the upstream signature in the export-dir output directory (which I set in ~/.gbp.conf) so that it can be picked up by the final stages of gbp buildpackage:

ln -s ~/deve/remote/planetfilter/dist/planetfilter-0.7.4.tar.gz.asc ../build-area/planetfilter_0.7.4.orig.tar.gz.asc
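For reference, the export-dir setting mentioned above lives in ~/.gbp.conf; a minimal sketch (the directory is simply my choice):

[buildpackage]
export-dir = ../build-area/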

If there's a better way to do this, please feel free to leave a comment (authentication not required)!

August 07, 2017

NBN Fixed Wireless – Four Years On

It’s getting close to the fourth anniversary of our NBN fixed wireless connection. Over that time, speaking as someone who works from home, it’s been generally quite good. 22-24 Mbps down and 4-4.5 Mbps up is very nice. That said, there have been a few problems along the way, and more recently evenings have become significantly irritating.

There were some initial teething problems, and at least three or four occasions where someone was performing “upgrades” during business hours over the course of several consecutive days. These upgrade periods wouldn’t have affected people who are away at work or school or whatever during the day, as by the time they got home, the connection would have been back up. But for me, I had to either tether my mobile phone to my laptop, or go down to a cafe or friend’s place to get connectivity.

There’s also the icing problem, which occurs a couple of times a year when snow falls below 200-300 metres for a few days. No internet, and also no mobile phone.

These are all relatively isolated incidents though. What’s been happening more recently is our connection speed in the evenings has gone to hell. I don’t tend to do streaming video, and my syncing several GB of software mirrors happens automatically in the wee hours while I’m asleep, so my subjective impression for some time has just been that “things were kinda slower during the evenings” (web browsing, pushing/pulling from already cloned git repos, etc.). I vented about this on Twitter in mid-June but didn’t take any further action at the time.

Several weeks later, on the evening of July 28, I needed to update and rebuild a Ceph package for openSUSE and SLES. The specifics aren’t terribly relevant to this post, but the process (which is reasonably automated) involves running something like `git clone git@github.com:SUSE/ceph.git && cd ceph && git submodule update --init --recursive`, which in turn downloads a few GB of data. I’ve done this several times in the past, and it usually takes an hour, or maybe a bit more. So you start it up, then go make a meal, come back and you’re done.

Not so on that Friday evening. It took six hours.

I ran a couple of speed tests:

I looked at my smokeping graphs:

[Image: smokeping graph, 2017-07-28]

That’s awfully close to 20% packet loss in the evenings. It happens every night:

[Image: smokeping graph, last 10 days]

And it’s been happening for a long time:

[Image: smokeping graph, last 400 days]

Right now, as I’m writing this, the last three hours show an average of 15.57% packet loss:

[Image: smokeping graph, last three hours]
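For a quick spot check without smokeping, a plain ping run reports a rough packet-loss figure (the target host here is just an example):

ping -c 100 8.8.8.8 | tail -n 2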

So I’ve finally opened a support ticket with iiNet. We’ll see what they say. It seems unlikely that this is a problem with my equipment, as my neighbour on the same wireless tower has also had noticeable speed problems for at least the last couple of months. I’m guessing it’s either not enough backhaul, or the local NBN wireless tower is underprovisioned (or oversubscribed). I’m leaning towards the latter, as in recent times the signal strength indicators on the NTD flick between two amber and three green lights in the evenings, whereas during the day it’s three green lights all the time.

August 06, 2017

This Week in HASS – term 3, week 5

This week students in all year levels are working on their research project for the term. Our youngest students are looking at items and pictures from the past, while our older students are collecting source material for their project on Australian history.

Foundation/Prep/Kindy to Year 3

The focus of this term is an investigation into the past and how we can find out about past events. For students in Foundation/Prep/Kindy (Units F.1 and F-1.3), Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) it is recommended that the teacher bring in sources of information about the past for the students to examine. Teachers can tailor these to suit a particular direction for their class. Examples of possible sources include old toys, old books, historic photographs, texts and items about local history (including the school itself), images of old paintings, old newspaper articles which can be accessed online etc. OpenSTEM provides resources which can be used for these investigations: e.g. Historic Photographs of Families, Modes of Transport 100 Years Ago, Brisbane Through the Years, Perth Through the Years, resources on floods in Brisbane and Gundagai, bush fires in Victoria, on the different colonies in Australia etc. Teachers can also use the national and state resources such as the State Library of Queensland, particularly their Picture Archive; the State Library of NSW; the State Library of South Australia, particularly their images collection; the National Archives of Australia; Trove, which archives old newspapers in Australia; Museums Victoria, and many similar sites. Students should also be encouraged to bring material from home, which can be built up into a Class Museum.

Years 3 to 6

As students in Years 3 (Unit 3.7), 4 (Unit 4.3), 5 (Unit 5.3) and 6 (Unit 6.3) move into the period of gathering information from sources to address their research question, teachers should guide them to consider the nature of each source and how to record it. Resources such as Primary and Secondary Sources and Historical Sources aid in understanding the context of different kinds of sources and teachers should assist students to record the details of each source for their Method section of their Scientific Report. Recording these sources in detail is also essential for being able to compile a Bibliography, which is required to accompany the report. OpenSTEM resources are listed for each research topic for these units, but students (and teachers) should feel free to complement these with any additional material such as online collections of images and newspaper articles (such as those listed in the paragraph above). These will help students to achieve a more unique presentation for their report and demonstrate the ability to collate a variety of information, thus earning a higher grade. Using a wide range of sources will also give students a wider appreciation for their chosen topic in Australian history.

Time Synchronization with NTP and systemd

I recently ran into problems with generating TOTP 2-factor codes on my laptop. The fact that some of the codes would work and some wouldn't suggested a problem with time keeping on my laptop.

This was surprising since I've been running NTP for many years and had therefore never had to think about time synchronization. After realizing that ntpd had stopped working on my machine for some reason, I found that systemd provides an easier way to keep time synchronized.

The new systemd time synchronization daemon

On a machine running systemd, there is no need to run the full-fledged ntpd daemon anymore. The built-in systemd-timesyncd can do the basic time synchronization job just fine.

However, I noticed that the daemon wasn't actually running:

$ systemctl status systemd-timesyncd.service 
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
           └─disable-with-time-daemon.conf
   Active: inactive (dead)
Condition: start condition failed at Thu 2017-08-03 21:48:13 PDT; 1 day 20h ago
     Docs: man:systemd-timesyncd.service(8)

referring instead to a mysterious "failed condition". Attempting to restart the service did provide more details though:

$ systemctl restart systemd-timesyncd.service 
$ systemctl status systemd-timesyncd.service 
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
           └─disable-with-time-daemon.conf
   Active: inactive (dead)
Condition: start condition failed at Sat 2017-08-05 18:19:12 PDT; 1s ago
           └─ ConditionFileIsExecutable=!/usr/sbin/ntpd was not met
     Docs: man:systemd-timesyncd.service(8)

The above check for the presence of /usr/sbin/ntpd points to a conflict between ntpd and systemd-timesyncd. The solution of course is to remove the former before enabling the latter:

apt purge ntp

Enabling time synchronization with NTP

Once the ntp package has been removed, it is time to enable NTP support in timesyncd.

Start by choosing the NTP server pool nearest you and put it in /etc/systemd/timesyncd.conf. For example, mine reads like this:

[Time]
NTP=ca.pool.ntp.org

before restarting the daemon:

systemctl restart systemd-timesyncd.service 

That may not be enough on your machine though. To check whether or not the time has been synchronized with NTP servers, run the following:

$ timedatectl status
...
 Network time on: yes
NTP synchronized: no
 RTC in local TZ: no

If NTP is not enabled, then you can enable it by running this command:

timedatectl set-ntp true

Once that's done, everything should be in place and time should be kept correctly:

$ timedatectl status
...
 Network time on: yes
NTP synchronized: yes
 RTC in local TZ: no
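As an aside, timesyncd.conf also accepts a space-separated list of servers plus a FallbackNTP= set if you want some redundancy; a minimal sketch (the server names are only examples):

[Time]
NTP=0.ca.pool.ntp.org 1.ca.pool.ntp.org
FallbackNTP=ntp.ubuntu.com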

August 01, 2017

QEMU for ARM Processes

I’m currently doing some embedded work on ARM systems. Having a virtual ARM environment is of course helpful. For the i586-class embedded systems that I run, it’s very easy to set up a virtual environment: I just have a chroot run from systemd-nspawn with the --personality=x86 option. I run it on my laptop for my own development and on a server my client owns so that they can deal with the “hit by a bus” scenario. I also occasionally run KVM virtual machines to test the boot image of i586 embedded systems (they use GRUB etc and are just like any other 32bit Intel system).

ARM systems have a different boot setup, there is a uBoot loader that is fairly tightly coupled with the kernel. ARM systems also tend to have more unusual hardware choices. While the i586 embedded systems I support turned out to work well with standard Debian kernels (even though the reference OS for the hardware has a custom kernel) the ARM systems need a special kernel. I spent a reasonable amount of time playing with QEMU and was unable to make it boot from a uBoot ARM image. The Google searches I performed didn’t turn up anything that helped me. If anyone has good references for getting QEMU to work for an ARM system image on an AMD64 platform then please let me know in the comments. While I am currently surviving without that facility it would be a handy thing to have if it was relatively easy to do (my client isn’t going to pay me to spend a week working on this and I’m not inclined to devote that much of my hobby time to it).

QEMU for Process Emulation

I’ve given up on emulating an entire system and now I’m using a chroot environment with systemd-nspawn.

The package qemu-user-static has statically linked programs for emulating various CPUs on a per-process basis. You can run this as “/usr/bin/qemu-arm-static ./statically-linked-arm-program“. The Debian package qemu-user-static uses the binfmt_misc support in the kernel to automatically run /usr/bin/qemu-arm-static when an ARM binary is executed. So if you have copied the image of an ARM system to /chroot/arm you can run commands like the following to enter the chroot:

cp /usr/bin/qemu-arm-static /chroot/arm/usr/bin/qemu-arm-static
chroot /chroot/arm bin/bash

Then you can create a full virtual environment with “/usr/bin/systemd-nspawn -D /chroot/arm” if you have systemd-container installed.

Selecting the CPU Type

There is a huge range of ARM CPUs with different capabilities. How this compares to the range of x86 and AMD64 CPUs depends on how you are counting (the i5 system I’m using now has 76 CPU capability flags). The default CPU type for qemu-arm-static is armv7l and I need to emulate a system with an armv5tejl. Setting the environment variable QEMU_CPU=pxa250 gives me armv5tel emulation.
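For a one-off test, the variable can be set for a single invocation (the binary name is just an example):

QEMU_CPU=pxa250 /usr/bin/qemu-arm-static ./some-arm-binary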

The ARM Architecture Wikipedia page [2] says that in armv5tejl the T stands for Thumb instructions (which I don’t think Debian uses), the E stands for DSP enhancements (which probably isn’t relevant for me as I’m only doing integer maths), the J stands for supporting special Java instructions (which I definitely don’t need) and I’m still trying to work out what L means (comments appreciated).

So it seems clear that the armv5tel emulation provided by QEMU_CPU=pxa250 will do everything I need for building and testing ARM embedded software. The issue is how to enable it. For a user shell I can just put export QEMU_CPU=pxa250 in .login or something, but I want to emulate an entire system (cron jobs, ssh logins, etc).

I’ve filed Debian bug #870329 requesting a configuration file for this [1]. If I put such a configuration file in the chroot everything would work as desired.

To get things working in the meantime I wrote the below wrapper for /usr/bin/qemu-arm-static that calls /usr/bin/qemu-arm-static.orig (the renamed version of the original program). It’s ugly (I would use a config file if I needed to support more than one type of CPU) but it works.

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
  if(setenv("QEMU_CPU", "pxa250", 1))
  {
    printf("Can't set $QEMU_CPU\n");
    return 1;
  }
  execv("/usr/bin/qemu-arm-static.orig", argv);
  printf("Can't execute \"%s\" because of qemu failure\n", argv[0]);
  return 1;
}
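Installing the wrapper is then a matter of renaming the original and putting the compiled wrapper in its place, re-copying both into the chroot as in the earlier step. A rough sketch, assuming the source above is saved as qemu-wrapper.c and built statically so binfmt_misc can still resolve it inside a chroot:

gcc -O2 -static -o qemu-wrapper qemu-wrapper.c
mv /usr/bin/qemu-arm-static /usr/bin/qemu-arm-static.orig
cp qemu-wrapper /usr/bin/qemu-arm-static
cp /usr/bin/qemu-arm-static /usr/bin/qemu-arm-static.orig /chroot/arm/usr/bin/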

NBN FTTN

Unfortunately for us, our home only got an FTTN NBN connection, but like others I thought I would share the speed improvement from cleaning up the wiring inside the home. We have two phone sockets, one in the bedroom and one in the kitchen. By removing the cable from the kitchen to the bedroom, we managed to increase our maximum line rate from 14.2 Mbps upload and 35.21 Mbps download to 20 Mbps upload and 47 Mbps download.

[Images: line statistics with the bedroom phone line connected, and after the wiring clean-up]

We’ve also put in a speed change request from the 12/5 plan to the 50/20 plan, so next month we should be enjoying a bit more of the NBN.

To think that with FTTH you could have had up to four 100/40 connections, and you wouldn’t have had to pay someone to rewire your phone sockets.

Update:

The speed change has gone through.

[Image: NBN modem statistics on the 50/20 speed]

July 31, 2017

Running a Tor Relay

I previously wrote about running my SE Linux Play Machine over Tor [1] which involved configuring ssh to use Tor.

Since then I have installed a Tor hidden service for ssh on many systems I run for clients. The reason is that it is fairly common for them to allow a server to get a new IP address by DHCP or accidentally set their firewall to deny inbound connections. Without some sort of VPN this results in difficult phone calls talking non-technical people through the process of setting up a tunnel or discovering an IP address. While I can run my own VPN for them I don’t want their infrastructure tied to mine and they don’t want to pay for a 3rd party VPN service. Tor provides a free VPN service and works really well for this purpose.

As I believe in giving back to the community I decided to run my own Tor relay. I have no plans to ever run a Tor Exit Node because that involves more legal problems than I am willing or able to deal with. A good overview of how Tor works is the EFF page about it [2]. The main point of a “Middle Relay” (or just “Relay”) is that it only sends and receives encrypted data from other systems. As the Relay software (and the sysadmin if they choose to examine traffic) only sees encrypted data without any knowledge of the source or final destination the legal risk is negligible.

Running a Tor relay is quite easy to do. The Tor project has a document on running relays [3], which basically involves changing 4 lines in the torrc file and restarting Tor.
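For reference, the relevant torrc additions look something like the following (nickname and contact details are placeholders):

Nickname myrelaynickname
ContactInfo admin <admin@example.com>
ORPort 9001
ExitPolicy reject *:*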

If you are running on Debian you should install the package tor-geoipdb to allow Tor to determine where connections come from (and to not whinge in the log files).

ORPort [IPV6ADDR]:9001

If you want to use IPv6 then you need a line like the above with IPV6ADDR replaced by the address you want to use. Currently Tor only supports IPv6 for connections between Tor servers and only for the data transfer not the directory services.

Data Transfer

I currently have 2 systems running as Tor relays. Both of them are well connected in a European DC and they are each transferring about 10GB of data per day, which isn’t a lot by server standards. I don’t know if there is a sufficient number of relays around the world that the share of the load is small or if there is some geographic dispersion algorithm which determined that there are too many relays in operation in that region.

July 30, 2017

This Week in HASS – term 3, week 4

This week younger students start investigating how we can find out about the past. This investigation will be conducted over the next 3 weeks and will culminate in a Scientific Report. Older students are considering different sources of historical information and how they will use these sources in their research.

Foundation/Prep/Kindy to Year 3

Students in stand-alone Foundation/Prep/Kindy classes (Unit F.3), as well as those in integrated classes (Unit F-1.3) and Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) are all starting to think about how we can find out about the past. This is a great opportunity for teachers to encourage students to think about how we know about the past and brainstorm ideas, as well as coming up with their own avenues of inquiry. Teachers may wish to hold a Question and Answer session in class to help guide students to examine many different aspects of this topic. The resource Finding Out About The Past contains core information to help the teacher guide the discussion to cover different ways of examining the past. This discussion can be tailored to the level and individual circumstances of each class. Foundation/Prep/Kindy students are just starting to think about the past as a time before the present and how this affects what we know about past events. The discussion can be developed in higher years, and the teacher can start to introduce the notion of sources of information, including texts and material culture. This investigation forms the basis for the Method section of the Scientific Report, which is included in the Student Workbook.

Years 3 to 6

Students in Years 3 (Unit 3.7), 4 (Unit 4.3), 5 (Unit 5.3) and 6 (Unit 6.3) are following a similar line of investigation this week, but examining Historical Sources specifically. As well as Primary and Secondary Sources, students are encouraged to think about Oral Sources, Textual Sources and Material Culture (artefacts such as stone tools or historical items). This discussion forms the basis for students completing the Method section of their Scientific Report, where they will list the sources of information and how these contributed to their research. Older students might be able to self-direct this process, although teachers may wish to guide the process through an initial class discussion. Teachers may wish to take the class through a discussion of the sources they are using for their research and discuss how students will use and report on these sources in their report for their topic.

July 29, 2017

QSO Today Podcast

Eric, 4Z1UG, has kindly interviewed me for his fine QSO Today Podcast.

Apache Mesos on Debian

I decided to try packaging Mesos for Debian/Stretch. I had a spare system with a i7-930 CPU, 48G of RAM, and SSDs to use for building. The i7-930 isn’t really fast by today’s standards, but 48G of RAM and SSD storage mean that overall it’s a decent build system – faster than most systems I run (for myself and for clients) and probably faster than most systems used by Debian Developers for build purposes.

There’s a github issue about the lack of an upstream package for Debian/Stretch [1]. That upstream issue could probably be worked around by adding Jessie sources to the APT sources.list file, but a package for Stretch is what is needed anyway.

Here is the documentation on building for Debian [2]. The list of packages it gives as build dependencies is incomplete; it also needs zlib1g-dev libapr1-dev libcurl4-nss-dev openjdk-8-jdk maven libsasl2-dev libsvn-dev. So BUILDING this software requires Java + Maven, Ruby, and Python along with autoconf, libtool, and all the usual Unix build tools. It also requires the FPM (Fucking Package Management) tool; I take the choice of name as an indication of the professionalism of the author.
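For reference, pulling in those extra build dependencies on Stretch is a single apt invocation:

apt install zlib1g-dev libapr1-dev libcurl4-nss-dev openjdk-8-jdk maven libsasl2-dev libsvn-dev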

Building the software on my i7 system took 79 minutes which includes 76 minutes of CPU time (I didn’t use the -j option to make). At the end of the build it turned out that I had mistakenly failed to install the Fucking Package Management “gem” and it aborted. At this stage I gave up on Mesos; the pain involved exceeds my interest in trying it out.

How to do it Better

One of the aims of Free Software is that bugs are more likely to get solved if many people look at them. There aren’t many people who will devote 76 minutes of CPU time on a moderately fast system to investigate a single bug. To deal with this, software should be prepared as components. An example of this is the SE Linux project which has 13 source modules in the latest release [3]. Of those 13 only 5 are really required. So anyone who wants to start on SE Linux from source (without considering a distribution like Debian or Fedora that has it packaged) can build the 5 most important ones. Also anyone who has an issue with SE Linux on their system can find the one source package that is relevant and study it with a short compile time. As an aside I’ve been working on SE Linux since long before it was split into so many separate source packages and know the code well, but I still find the separation convenient – I rarely need to work on more than a small subset of the code at one time.

The requirement of Java, Ruby, and Python to build Mesos could be partly due to language bindings for calling Mesos interfaces from Ruby and Python. One solution to that is to have the C libraries and header files to call Mesos and have separate packages that depend on those libraries and headers to provide the bindings for other languages. Another solution is to have autoconf detect that some languages aren’t installed and just not try to compile bindings for them (this is one of the purposes of autoconf).

The use of a tool like Fucking Package Management means that you don’t get help from experts in the various distributions in making better packages. When there is a FOSS project with a debian subdirectory that makes barely functional packages then you will be likely to have an experienced Debian Developer offer a patch to improve it (I’ve offered patches for such things on many occasions). When there is a FOSS project that uses a tool that is never used by Debian developers (or developers of Fedora and other distributions) then the only patches you will get will be from inexperienced people.

A software build process should not download anything from the Internet. The source archive should contain everything that is needed and there should be dependencies for external software. Any downloads from the Internet need to be protected from MITM attacks which means that a responsible software developer has to read through the build system and make sure that appropriate PGP signature checks etc are performed. It could be that the files that the Mesos build downloaded from the Apache site had appropriate PGP checks performed – but it would take me extra time and effort to verify this and I can’t distribute software without being sure of this. Also reproducible builds are one of the latest things we aim for in the Debian project, this means we can’t just download files from web sites because the next build might get a different version.

Finally the fpm (Fucking Package Management) tool is a Ruby Gem that has to be installed with the “gem install” command. Any time you specify a gem install command you should include the -v option to ensure that everyone is using the same version of that gem, otherwise there is no guarantee that people who follow your documentation will get the same results. Also a quick Google search didn’t indicate whether gem install checks PGP keys or verifies data integrity in other ways. If I’m going to compile software for other people to use I’m concerned about getting unexpected results with such things. A Google search indicates that Ruby people were worried about such things in 2013 but doesn’t indicate whether they solved the problem properly.
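In other words, any documented install step should pin the version, e.g. (the version number here is purely illustrative):

gem install fpm -v 1.9.2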

July 28, 2017

RegTech – a primer for the uninitiated

Whilst working at AUSTRAC I wrote a brief about RegTech which was quite helpful. I was given permission to blog the generically useful parts of it for general consumption :) Thanks Leanne!

Overview – This brief is the most important thing you will read in planning transformation! Government can’t regulate in the way we have traditionally done. Traditional approaches are too small, too slow and too ineffective. We need to explore new ways to regulate and achieve the goal of a financial sector that is more resistant to abuse, leveraging data, automation, machine learning, technology and collaboration. We are here to help!

The key here is to put technology at the heart of the business strategy, rather than as simply an implementation mechanism. By embracing technology thinking, which means getting geeks into the strategy and policy rooms, we can build the foundation of a modern, responsive, agile, proactive and interactive regulator that can properly scale.

The automation of compliance with RegTech has the potential to overcome individual foibles and human error in a way that provides the quantum leap in culture and compliance that our regulators, customers, policy makers and the community are increasingly demanding… The Holy Grail is when we start to actually write regulation and legislation in code. Imagine the productivity gains and compliance savings of instantaneous certified compliance… We are now in one of the most exciting phases in the development of FinTech since the inception of e-banking.

(Treasurer Morrison, FinTech Australia Summit, Nov 2016)

On the back of the FinTech boom, there is a growth in companies focused on “RegTech” solutions and services to merge technology and regulation/compliance needs for a more 21st century approach to the problem space. It is seen as a logical next step to the FinTech boom, given the high costs and complexity of regulation in the financial sector, but the implications for the broader regulatory sector are significant. The term only started being widely used in 2015. Other governments have started exploring this space, with the UK Government investing significantly.

Core themes of RegTech can be summarised as: data; automation; security; disruption; and enabling collaboration. There is also an overall drive towards everything being closer to real-time, with new data or information informing models, responses and risk in an ongoing self-adjusting fashion.

  • Data driven regulation – better monitoring, better use of available big and small data holdings to inform modelling and analysis (rather than always asking a human to give new information), assessment on the fly, shared data and modelling, trends and forecasting, data analytics for forward looking projections rather than just retrospective analysis, data driven risk and adaptive modelling, programmatic delivery of regulations (regulation as a platform).
  • Automation – reporting, compliance, risk modelling of transactions to determine what should be reported as “suspicious”, system to system registration and escalation, use of machine learning and AI, a more blended approach to work combining humans and machines.
  • Security – biometrics, customer checks, new approaches to KYC, digital identification and assurance, sharing of identity information for greater validation and integrity checking.
  • Disruptive technologies – blockchain, cloud, machine learning, APIs, cryptography, augmented reality and crypto-currencies just to start!
  • Enabling collaboration – for-profit regulation activities, regulation/compliance services and products built on the back of government rules/systems/data, access to distributed ledgers, distributed risk models and shared data/systems, broader private sector innovation on the back of regulator open data and systems.

Some useful references for the more curious:

July 27, 2017

LUV Main August 2017 Meeting

Aug 1 2017 18:30
Aug 1 2017 20:30
Location: 
The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053


Speakers:

  • Tony Cree, CEO Aboriginal Literacy Foundation (to be confirmed)
  • Russell Coker, QEMU and ARM on AMD64

Russell Coker will demonstrate how to use QEMU to run software for ARM CPUs on an x86 family CPU.


Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


LUV Beginners August Meeting: Secure Shell (SSH)

Aug 26 2017 12:30
Aug 26 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Secure Shell (SSH)

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


July 25, 2017

Forking Mon and DKIM with Mailing Lists

I have forked the “Mon” network/server monitoring system. Here is a link to the new project page [1]. There hasn’t been an upstream release since 2010 and I think we need more frequent releases than that. I plan to merge as many useful monitoring scripts as possible and support them well. All Perl scripts will use strict and use other best practices.

The first release of etbe-mon is essentially the same as the last release of the mon package in Debian. This is because I started work on the Debian package (almost all the systems I want to monitor run Debian) and as I had been accepted as a co-maintainer of the Debian package I put all my patches into Debian.

It’s probably not a common practice for someone to fork upstream of a package soon after becoming a comaintainer of the Debian package. But I believe that this is in the best interests of the users. I presume that there are other collections of patches out there and I hope to merge them so that everyone can get the benefits of features and bug fixes that have been separate due to a lack of upstream releases.

Last time I checked mon wasn’t in Fedora. I believe that mon has some unique features for simple monitoring that would be of benefit to Fedora users and would like to work with anyone who wants to maintain the package for Fedora. I am also interested in working with any other distributions of Linux and with non-Linux systems.

While setting up the mailing list for etbemon I wrote an article about DKIM and mailing lists (primarily Mailman) [2]. This explains how to setup Mailman for correct operation with DKIM and also why that seems to be the only viable option.

July 23, 2017

This Week in HASS – term 3, week 3

This week our youngest students are playing games from different places around the world, in the past. Slightly older students are completing the Timeline Activity. Students in Years 4, 5 and 6 are starting to sink their teeth into their research project for the term, using the Scientific Process.

Foundation/Prep/Kindy to Year 3

Playing hoops

This week students in stand-alone Foundation/Prep/Kindy classes (Unit F.3) and those integrated with Year 1 (Unit F-1.3) are examining games from the past. The teacher can choose to match these to the stories from Week 1 of the unit, as games are listed matching each of the places and time periods included in those stories. However, some games are more practical to play than others, and some require running around, so the teacher may wish to choose games which suit the circumstances of each class. Teachers can discuss how different places have different types of games and why these games might be chosen in those places (e.g. dragons in China and lions in Africa).

Students in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) have this week to finish off the Timeline Activity. The Timeline Activity requires some investment of time, which can be done as two half-hour sessions or one longer session. Some flexible timing is built into the unit for teachers who want to match this activity to the number line in Maths, and for others who want to revise or cover the number line in more depth as a complement to this activity.

Years 3 to 6

Arthur Phillip

Last week students in Years 3 to 6 chose a research topic, related to a theme in Australian History. Different themes are studied by different year levels. Students in Year 3 (Unit 3.7) study a topic in the history of their capital city or local community. Students in Year 4 (Unit 4.3) study a topic from Australian history in the precolonial or early colonial periods. Students in Year 5 (Unit 5.3) study a topic from Australian colonial history and students in Year 6 (Unit 6.3) study a topic related to Federation or 20th century Australian history. These research topics are undertaken as a Scientific Investigation. This week the focus is on defining a Research Question and undertaking Background Research. Student workbooks will guide students through the process of choosing a research question within their chosen topic, and then how to start the Background Research. These sections will be included in the Scientific Report each student produces at the end of this unit. OpenSTEM resources available with each unit provide a starting point for this Background Research.

 

test post

test posting from wordpress.com

01 – [Jul-24 13:35 API] Volley error on https://public-api.wordpress.com/rest/v1.1/sites/4046490/posts/366/?context=edit&locale=en_AU – exception: null
02 – [Jul-24 13:35 API] StackTrace: com.android.volley.ServerError
at com.android.volley.toolbox.BasicNetwork.performRequest(BasicNetwork.java:179)
at com.android.volley.NetworkDispatcher.run(NetworkDispatcher.java:114)

03 – [Jul-24 13:35 API] Dispatching action: PostAction-PUSHED_POST
04 – [Jul-24 13:35 POSTS] Post upload failed. GENERIC_ERROR: The Jetpack site is inaccessible or returned an error: transport error – HTTP status code was not 200 (403) [-32300]
05 – [Jul-24 13:35 POSTS] updateNotificationError: Error while uploading the post: The Jetpack site is inaccessible or returned an error: transport error – HTTP status code was not 200 (403) [-32300]
06 – [Jul-24 13:35 EDITOR] Focus out callback received

July 20, 2017

New Dates for Earliest Archaeological Site in Aus!

Thylacine or Tasmanian Tiger.

This morning news was released of a date of 65,000 years for archaeological material at the site of Madjedbebe rock shelter in the Jabiluka mineral lease area, surrounded by Kakadu National Park. The site is on the land of the Mirarr people, who have partnered with archaeologists from the University of Queensland for this investigation. It has also produced evidence of the earliest use of ground-stone tool technology, the oldest seed-grinding tools in Australia and stone points, which may have been used as spears. Most fascinating of all, there is the jawbone of a Tasmanian Tiger or Thylacine (which was found across continental Australia during the Ice Age) coated in a red pigment, thought to be the reddish rock, ochre. There is much evidence of use of ochre at the site, with chunks and ground ochre found throughout the site. Ochre is often used for rock art and the area has much beautiful rock art, so we can deduce that these rock art traditions are as old as the occupation of people in Australia, i.e. at least 65,000 years old! The decoration of the jawbone hints at a complex realm of abstract thought, and possibly belief, amongst our distant ancestors – the direct forebears of modern Aboriginal people.

Kakadu view, NT Tourism.

Placing the finds from Madjedbebe rock shelter within the larger context, the dating, undertaken by Professor Zenobia Jacobs from the University of Wollongong, shows that people were living at the site during the Ice Age, a time when many, now-extinct, giant animals roamed Australia; and the tiny Homo floresiensis was living in Indonesia. These finds show that the ancestors of Aboriginal people came to Australia with much of the toolkit of their rich, complex lives already in place. This technology, extremely advanced for the time, allowed them to populate the entire continent of Australia, first managing to survive in the harsh Ice Age environment and then also managing to adapt to the enormous changes in sea level, climate and vegetation at the end of the Ice Age.

The team of archaeologists working at Madjedbebe rock shelter, in conjunction with Mirarr traditional owners, are finding all sorts of wonderful archaeological material, from which they can deduce much rich, detailed information about the lives of the earliest people in Australia. We look forward to hearing more from them in the future. Students who are interested, especially those in Years 4, 5 and 6, can read more about these sites and the animals and lives of people in Ice Age Australia in our resources People Reach Australia, Early Australian Sites, Ice Age Animals and the Last Ice Age, which are covered in Units 4.1, 5.1 and 6.1.

July 17, 2017

XDP on Power

This post is a bit of a break from the standard IBM fare of this blog, as I now work for Canonical. But I have a soft spot for Power from my time at IBM - and Canonical officially supports 64-bit, little-endian Power - so when I get a spare moment I try to make sure that cool, officially-supported technologies work on Power before we end up with a customer emergency! So, without further ado, this is the story of XDP on Power.

XDP

eXpress Data Path (XDP) is a cool Linux technology to allow really fast processing of network packets.

Normally in Linux, a packet is received by the network card, an SKB (socket buffer) is allocated, and the packet is passed up through the networking stack.

This introduces an inescapable latency penalty: we have to allocate some memory and copy stuff around. XDP allows some network cards and drivers to process packets early - even before the allocation of the SKB. This is much faster, and so has applications in DDOS mitigation and other high-speed networking use-cases. The IOVisor project has much more information if you want to learn more.

eBPF

XDP processing is done by an eBPF program. eBPF - the extended Berkeley Packet Filter - is an in-kernel virtual machine with a limited set of instructions. The kernel can statically validate eBPF programs to ensure that they terminate and are memory safe. From this it follows that the programs cannot be Turing-complete: they do not have backward branches, so they cannot do fancy things like loops. Nonetheless, they're surprisingly powerful for packet processing and tracing. eBPF programs are translated into efficient machine code using in-kernel JIT compilers on many platforms, and interpreted on platforms that do not have a JIT. (Yes, there are multiple JIT implementations in the kernel. I find this a terrifying thought.)

Rather than requiring people to write raw eBPF programs, you can write them in a somewhat-restricted subset of C, and use Clang's eBPF target to translate them. This is super handy, as it gives you access to the kernel headers - which define a number of useful data structures like headers for various network protocols.

Trying it

There are a few really interesting projects that are already up and running that allow you to explore XDP without learning the innards of both eBPF and the kernel networking stack. I explored the samples in the bcc compiler collection and also the samples from the netoptimizer/prototype-kernel repository.

The easiest way to get started with these is with a virtual machine, as recent virtio network drivers support XDP. If you are using Ubuntu, you can use the uvt-kvm tooling to trivially set up a VM running Ubuntu Zesty on your local machine.
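Something like the following gets a Zesty guest going (the VM name is arbitrary):

uvt-simplestreams-libvirt sync release=zesty arch=amd64
uvt-kvm create xdp-test release=zesty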

Once your VM is installed, you need to shut it down and edit the virsh XML.
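Both steps are done with virsh (again, the VM name is just an example):

virsh shutdown xdp-test
virsh edit xdp-test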

You need 2 vCPUs (or more) and a virtio+vhost network card. You also need to edit the 'interface' section and add the following snippet (with thanks to the xdp-newbies list):

<driver name='vhost' queues='4'>
    <host tso4='off' tso6='off' ecn='off' ufo='off'/>
    <guest tso4='off' tso6='off' ecn='off' ufo='off'/>
</driver>

(If you have more than 2 vCPUs, set the queues parameter to 2x the number of vCPUs.)

Then, install a modern clang (we've had issues with 3.8 - I recommend v4+), and the usual build tools.
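On Ubuntu that is roughly the following (package names vary by release; this assumes 17.04):

sudo apt install clang-4.0 llvm-4.0 build-essential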

I recommend testing with the prototype-kernel tools - the DDOS prevention tool is a good demo. Then - on x86 - you just follow their instructions. I'm not going to repeat that here.

POWERful XDP

What happens when you try this on Power? Regular readers of my posts will know to expect some minor hitches.

XDP does not disappoint.

Firstly, the prototype-kernel repository hard codes x86 as the architecture for kernel headers. You need to change it for powerpc.

Then, once you get the stuff compiled, and try to run it on a current-at-time-of-writing Zesty kernel, you'll hit a massive debug splat ending in:

32: (61) r1 = *(u32 *)(r8 +12)
misaligned packet access off 0+18+12 size 4
load_bpf_file: Permission denied

It turns out this is because in Ubuntu's Zesty kernel, CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is not set on ppc64el. Because of that, the eBPF verifier will check that all loads are aligned - and this load (part of checking some packet header) is not, and so the verifier rejects the program. Unaligned access is not enabled because the Zesty kernel is being compiled for CPU_POWER7 instead of CPU_POWER8, and we don't have efficient unaligned access on POWER7.

As it turns out, IBM never released any officially supported Power7 LE systems - LE was only ever supported on Power8. So, I filed a bug and sent a patch to build Zesty kernels for POWER8 instead, and that has been accepted and will be part of the next stable update due real soon now.

Sure enough, if you install a kernel with that config change, you can verify the XDP program and load it into the kernel!

If you have real powerpc hardware, that's enough to use XDP on Power! Thanks to Michael Ellerman, maintainer extraordinaire, for verifying this for me.

If - like me - you don't have ready access to Power hardware, you're stuffed. You can't use qemu in TCG mode: to use XDP with a VM, you need multi-queue support, which only exists in the vhost driver, which is only available for KVM guests. Maybe IBM should release a developer workstation. (Hint, hint!)

Overall, I was pleasantly surprised by how easy things were for people with real ppc hardware - it's encouraging to see something not require kernel changes!

eBPF and XDP are definitely growing technologies - as Brendan Gregg notes, now is a good time to learn them! (And those on Power have no excuse either!)

July 16, 2017

This Week in HASS – term 3, week 2

This week older students start their research projects for the term, whilst younger students are doing the Timeline Activity. Our youngest students are thinking about the places where people live and can join together with older students as buddies to Build A Humpy together.

Foundation/Prep/Kindy to Year 3

Students in stand-alone Foundation/Prep/Kindy classes (Unit F.3), or those in classes integrated with Year 1 (Unit F-1.3) are considering different types of homes this week. They will think about where the people in the stories from last week live and compare that to their own houses. They can consider how homes were different in the past and how our homes help us meet our basic needs. There is an option this week for these students to buddy with older students, especially those in Years 4, 5 and 6, to undertake the Building A Humpy activity together. In this activity students collect materials to build a replica Aboriginal humpy or shelter outside. Many teachers find that both senior primary and the younger students get a lot of benefit from helping each other with activities, enriching the learning experience. The Building a Humpy activity is one where the older students can assist the younger students with the physical requirements of building a humpy, whilst each group considers aspects of the activity relevant to their own studies, and comparing past ways of life to their own.

Students in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) are undertaking the Timeline Activity this week. This activity is designed to complement the concept of the number line from the Mathematics curriculum, whilst helping students to learn to visualise the abstract concepts of the past and different lengths of time between historical events and the present. In this activity students walk out a timeline, preferably across a large open space such as the school Oval, whilst attaching pieces of paper at intervals to a string. The pieces of paper refer to specific events in history (starting with their own birth years) and cover a wide range of events from the material covered this year. Teachers can choose from events in Australian and world history, covering 100s, 1000s and even millions of years, back to the dinosaurs. Teachers can also add their own events. Thus the details of the activity are able to be altered in different years to maintain student interest. Depending on the class, the issue of scale can be addressed in various ways. By physically moving their bodies, students will start to understand the lengths of time involved in examinations of History. This activity is repeated in increasing detail in higher years, to make sure that the fundamental concepts are absorbed by students over time.

Years 3 to 6

Science Explosion

Students in Years 3 to 6 are starting their term research projects on Australian history this week. Students in Year 3 (Unit 3.7) concentrate on topics from the history of their capital city or local community. Suggested topics are included for Brisbane, Melbourne, Sydney, Adelaide, Darwin, Hobart, Perth and Canberra. Teachers can substitute their own topics for a local community study. Students will undertake a Scientific Investigation into an aspect of their chosen research project and will produce a Scientific Report. It is recommended that teachers supplement the resources provided with old photographs, books, newspapers etc, many of which can be accessed online, to provide the students with extra material for their investigation.

First Fleet 1788

Students in Year 4 (Unit 4.3) will be focusing on Australia in the period up to and including the arrival of the First Fleet and the early colonial period. OpenSTEM’s Understanding Our World® program encompasses the whole Australian curriculum for HASS and thus does not simply rely on “flogging the First Fleet to death”! There are 7 research themes for Year 4 students: “Australia Before 1788”; “The First Fleet”; “Convicts and Settlers”; “Aboriginal People in Colonial Australia”; “Australia and Other Nations in the 17th, 18th and 19th centuries”; “Colonial Children”; “Colonial Animals and their Impact”. These themes are allocated to groups of students and each student chooses an individual research topic within their group’s themes. Suggested topics are given in the Teacher Handbook, as well as suggested resources.

19th century china dolls

Year 5 (Unit 5.3) students focus on the colonial period in Australia. There are 9 research themes for Year 5 students. These are: “The First Fleet”; “Convicts and Settlers”; “The 6 Colonies”; “Aboriginal People in Colonial Australia”; “Resistance to Colonial Authorities”; “Sugar in Queensland”; “Colonial Children”; “Colonial Explorers” and “Colonial Animals and their Impact”. As well as themes unique to Year 5, some overlap is provided to facilitate teaching in multi-year classes. The range of themes also allows for the possibility of teachers choosing different themes in different years. Once again individual topics and resources are suggested in the Teacher Handbook.

Year 6 (Unit 6.3) students will examine research themes around Federation and the early 20th century. There are 8 research themes for Year 6 students: “Federation and Sport”; “Women’s Suffrage”; “Aboriginal Rights in Australia”; “Henry Parkes and Federation”; “Edmund Barton and Federation”; “Federation and the Boer War”; “Samuel Griffith and the Constitution”; “Children in Australian History”. Individual research topics and resources are suggested in the Teachers Handbook. It is expected that students in Year 6 will be able to research largely independently, with weekly guidance from their teacher. OpenSTEM’s Understanding Our World® program is aimed at developing research skills in students progressively, especially over the upper primary years. If the program is followed throughout the primary years, students are well prepared for high school by the end of Year 6, having practised individual research skills for several years.

 

July 13, 2017

Bitcoin: ASICBoost – Plausible or not?

So the first question: is ASICBoost use plausible in the real world?

There are plenty of claims that it’s not:

  • “Much conspiracy around today. I don’t believe SegWit non-activation has anything to do with AsicBoost!” – Timo Hanke, one of the patent applicants, on twitter
  • “there’s absolutely nothing but baseless accusations flying around” – Emin Gun Sirer’s take, linked from the Bitmain statement
  • “no company would ever produce a chip that would have a switch in to hide that it’s actually an ASICboost chip.” – Sam Cole formerly of KNCMiner which went bankrupt due to being unable to compete with Bitmain in 2016
  • “I believe their claim about not activating ASICBoost. It is very small money for them.” – Guy Corem of SpoonDoolies, who independently discovered ASICBoost
  • “No one is even using Asicboost.” – Roger Ver (/u/memorydealers) on reddit

A lot of these claims don’t actually match reality though: ASICBoost is implemented in Bitmain miners sold to the public, and since it ships disabled by default, a switch to hide it is obviously easily possible, contradicting Sam Cole’s take. There’s plenty of circumstantial evidence of ASICBoost-related transaction structuring in blocks, contradicting the basis on which Emin Gun Sirer dismisses the claims. The 15%-30% improvement claims that Guy Corem and Sam Cole cite are certainly large enough to be worth looking into — and Bitmain confirms having done so on testnet. Even Guy Corem’s claim that they only amount to $2,000,000 in savings per year rather than $100,000,000 seems like a reason to expect it to be in use, rather than so little that you wouldn’t bother.

If ASICBoost weren’t in use on mainnet it would probably be relatively straightforward to prove that: Bitmain could publish the benchmarks results they got when testing on testnet, and why that proved not to be worth doing on mainnet, and provide instructions for their customers on how to reproduce their results, for instance. Or Bitmain and others could support efforts to block ASICBoost from being used on mainnet, to ensure no one else uses it, for the greater good of the network — if, as they claim, they’re already not using it, this would come at no cost to them.

To me, much of the rhetoric that’s being passed around seems to be a much better match for what you would expect if ASICBoost were in use, than if it was not. In detail:

  • If ASICBoost were in use, and no one had any reason to hide it being used, then people would admit to using it, and would do so by using bits in the block version.
  • If ASICBoost were in use, but people had strong reasons to hide that fact, then people would claim not to be using it for a variety of reasons, but those explanations would not stand up to more than casual analysis.
  • If ASICBoost were not in use, and it was fairly easy to see there is no benefit to it, then people would be happy to share their reasoning for not using it in detail, and this reasoning would be able to be confirmed independently.
  • If ASICBoost were not in use, but the reasons why it is not useful require significant research efforts, then keeping the detailed reasoning private may act as a competitive advantage.

The first scenario can be easily verified, and does not match reality. Likewise the third scenario does not (at least in my opinion) match reality; as noted above, many of the explanations presented are superficial at best, contradict each other, or simply fall apart on even a cursory analysis. Unfortunately that rules out assuming good faith — either people are lying about using ASICBoost, or just dissembling about why they’re not using it. Working out which of those is most likely requires coming to our own conclusion on whether ASICBoost makes sense.

I think Jimmy Song had some good posts on that topic. His first, on Bitmain’s ASICBoost claims finds some plausible examples of ASICBoost testing on testnet, however this was corrected in the comments as having been performed by Timo Hanke, rather than Bitmain. Having a look at other blocks’ version fields on testnet seems to indicate that there hasn’t been much other fiddling of version fields, so presumably whatever testing of ASICBoost was done by Bitmain, fiddling with the version field was not used; but that in turn implies that Bitmain must have been testing covert ASICBoost on testnet, assuming their claim to have tested it on testnet is true in the first place (they could quite reasonably have used a private testnet instead). Two later posts, on profitability and ASICBoost and Bitmain’s profitability in particular, go into more detail, mostly supporting Guy Corem’s analysis mentioned above. Perhaps interestingly, Jimmy Song also made a proposal to the bitcoin-dev shortly after Greg’s original post revealing ASICBoost and prior to these posts; that proposal would have endorsed use of ASICBoost on mainnet, making it cheaper and compatible with segwit, but would also have made use of ASICBoost readily apparent to both other miners and patent holders.

It seems to me there are three different ways to look at the maths here, and because this is an economics question, each of them give a different result:

  • Greg’s maths splits miners into two groups each with 50% of hashpower. One group, which is unable to use ASICBoost is assumed to be operating at almost zero profit, so their costs to mine bitcoins are only barely below the revenue they get from selling the bitcoin they mine. Using this assumption, the costs of running mining equipment are calculated by taking the number of bitcoin mined per year (365*24*6*12.5=657k), multiplying that by the price at the time ($1100), and halving the costs because each group only mines half the chain. This gives a cost of mining for the non-ASICBoost group of $361M per year. The other group, which uses ASICBoost, then gains a 30% advantage in costs, so only pays 70%, or $252M, a comparative saving of approximately $100M per annum. This saving is directly proportional to hashrate and ASICBoost advantage, so using Guy Corem’s figures of 13.2% hashrate and 15% advantage, this reduces from $95M to $66M, saving about $29M per annum.
  • Guy Corem’s maths estimates Bitmain’s figures directly: looking at the AntPool hashpower share, he estimates 500PH/s in hashpower (or 13.2%); he uses the specs of the AntMiner S9 to determine power usage (0.1 J/GH); he looks at electricity prices in China and estimates $0.03 per kWh; and he estimates the ASICBoost advantage to be 15%. This gives a total cost of 500M GH/s * 0.1 J/GH / 1000 W/kW * $0.03 per kWh * 24 * 365 which is $13.14 M per annum, so a 15% saving is just under $2M per annum. If you assume that the hashpower was 50% and ASICBoost gave a 30% advantage instead, this equates to about 1900 PH/s, and gives a benefit of just under $15M per annum. In order to get the $100M figure to match Greg’s result, you would also need to increase electricity costs by a factor of six, from 3c per kWH to 20c per kWH.
  • The approach I prefer is to compare what your hashpower would be keeping costs constant and work out the difference in revenue: for example, if you’re spending $13M per annum in electricity, what is your profit with ASICBoost versus without (assuming that the difficulty retargets appropriately, but no one else changes their mining behaviour). Following this line of thought, if you have 500PH/s with ASICBoost giving you a 30% boost, then without ASICBoost, you have 384 PH/s (500/1.3). If that was 13.2% of hashpower, then the remaining 86.8% of hashpower is 3288 PH/s, so when you stop using ASICBoost and a retarget occurs, total hashpower is now 3672 PH/s (384+3288), and your percentage is now 10.5%. Because mining revenue is simply proportional to hashpower, this amounts to a loss of 2.7% of the total bitcoin reward, or just under $20M per year. If you match Greg’s assumptions (50% hashpower, 30% benefit) that leads to an estimate of $47M per annum; if you match Guy Corem’s assumptions (13.2% hashpower, 15% benefit) it leads to an estimate of just under $11M per annum.

So like I said, that’s three different answers in each of two scenarios: Guy’s low end assumption of 13.2% hashpower and a 15% advantage to ASICBoost gives figures of $29M/$2M/$11M; while Greg’s high end assumptions of 50% hashpower and 30% advantage give figures of $100M/$15M/$47M. The differences in assumptions there is obviously pretty important.
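If you want to check those figures yourself, the arithmetic is simple enough to put in a few lines of code. The sketch below reproduces two of them using only the April inputs quoted above (bitcoin at $1100, 500 PH/s being 13.2% of the network, AntMiner S9 efficiency of 0.1 J/GH, $0.03 per kWh): Guy's electricity-cost figure and my constant-cost method under Guy's assumptions. Every constant is a restatement of an assumption already listed; nothing new is being estimated.

/* asicboost_estimates.c - rough reproduction of two of the estimates above. */
#include <stdio.h>

int main(void)
{
    /* Guy Corem's electricity-cost method. */
    double hash_ghs   = 500e6;        /* 500 PH/s expressed in GH/s      */
    double j_per_gh   = 0.1;          /* AntMiner S9 efficiency          */
    double usd_kwh    = 0.03;         /* assumed electricity price       */
    double hours_year = 24 * 365;
    double kw         = hash_ghs * j_per_gh / 1000.0;
    double elec_cost  = kw * hours_year * usd_kwh;          /* ~$13.1M   */
    printf("electricity cost: $%.2fM/yr, 15%% saving: $%.2fM/yr\n",
           elec_cost / 1e6, 0.15 * elec_cost / 1e6);

    /* Constant-cost method: drop ASICBoost and lose hashrate share instead. */
    double price      = 1100.0;                 /* USD/BTC, April figure */
    double btc_year   = 365 * 24 * 6 * 12.5;    /* ~657k BTC mined/yr    */
    double boost      = 0.15;                   /* ASICBoost advantage   */
    double mine_ph    = 500.0;                  /* our hashpower, PH/s   */
    double other_ph   = 500.0 / 0.132 - 500.0;  /* everyone else, ~3288  */
    double share_with = mine_ph / (mine_ph + other_ph);
    double share_wo   = (mine_ph / (1 + boost)) /
                        (mine_ph / (1 + boost) + other_ph);
    double loss       = (share_with - share_wo) * btc_year * price;
    printf("revenue lost without ASICBoost: $%.2fM/yr\n", loss / 1e6);
    return 0;
}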

I don’t find the assumptions behind Greg’s maths realistic: in essence, it assumes that mining is so competitive that it is barely profitable even in the short term. However, if that were the case, then nobody would be able to invest in new mining hardware, because they would not recoup their investment. In addition, even if at some point mining were not profitable, increases in the price of bitcoin would change that, and the price of bitcoin has been increasing over recent months. Beyond that, it also assumes electricity prices do not vary between miners — if only the marginal miner is not profitable, it may be that some miners have lower costs and therefore are profitable; and indeed this is likely the case, because electricity prices vary over time due to both seasonal and economic factors. The method Greg uses is useful for establishing an upper limit, however: the only way ASICBoost could offer more savings than Greg’s estimate would be if every block mined produced less revenue than it cost in electricity, and miners were making a loss on every block. (This doesn’t mean $100M is an upper limit however — that estimate was current in April, but the price of bitcoin has more than doubled since then, so the current upper bound via Greg’s maths would be about $236M per year)

A downside to Guy’s method from the point of view of outside analysis is that it requires more information: you need to know the efficiency of the miners being used and the cost of electricity, and any error in those estimates will be reflected in your final figure. In particular, the cost of electricity needs to be a “whole lifecycle” cost — if it costs 3c/kWh to supply electricity, but you also need to spend an additional 5c/kWh in cooling in order to keep your data-centre operating, then you need to use a figure of 8c/kWh to get useful results. This likely provides a good lower bound estimate however: using ASICBoost will save you energy, and if you forget to account for cooling or some other important factor, then your estimate will be too low; but that will still serve as a loose lower bound. This estimate also changes over time however; while it doesn’t depend on price, it does depend on deployed hashpower — since total hashrate has risen from around 3700 PH/s in April to around 6200 PH/s today, if Bitmain’s hashrate has risen proportionally, it has gone from 500 PH/s to 837 PH/s, and an ASICBoost advantage of 15% means power cost savings have gone from $2M to $3.3M per year; or if Bitmain has instead maintained control of 50% of hashrate at 30% advantage, the savings have gone from $15M to $25M per year.

The key difference between my method and both Greg’s and Guy’s is that they implicitly assume that consuming more electricity is viable, and costs simply increase proportionally; whereas my method assumes that this is not viable, and instead that sufficient mining hardware has been deployed that power consumption is already constrained by some other factor. This might be due to reaching the limit of what the power company can supply, or the rating of the wiring in the data centre, or it might be due to the cooling capacity, or fire risk, or some other factor. For an operation spanning multiple data centres this may be the case for some locations but not others — older data centres may be maxed out, while newer data centres are still being populated and may have excess capacity, for example. If setting up new data centres is not too difficult, it might also be true in the short term, but not true in the longer term — that is, having each miner use more power due to disabling ASICBoost might require shutting some miners down initially, but they may be able to be shifted to other sites over the course of a few weeks or months, and restarted there, though this would require taking into account additional hosting costs beyond electricity and cooling. As such, I think this is a fairly reasonable way to produce a plausible estimate, and it’s the one I’ll be using. Note that it depends on the bitcoin price, so the estimates this method produces have also risen since April, going from $11M to $24M per annum (13.2% hash, 15% advantage) or from $47M to $103M (50% hash, 30% advantage).

The way ASICBoost works is by allowing you to save a few steps: normally when trying to generate a proof of work, you have to do essentially six steps:

  1. A = Expand( Chunk1 )
  2. B = Compress( A, 0 )
  3. C = Expand( Chunk2 )
  4. D = Compress( C, B )
  5. E = Expand( D )
  6. F = Compress( E )

The expected process is to do steps (1,2) once, then do steps (3,4,5,6) about four billion (or more) times, until you get a useful answer. You do this process in parallel across many different chips. ASICBoost changes this process by observing that step (3) is independent of steps (1,2), so you can find a variety of Chunk1s — call them Chunk1-A, Chunk1-B, Chunk1-C and Chunk1-D — that are each compatible with a common Chunk2. In that case, you do steps (1,2) four times (once for each different Chunk1), then do step (3) four billion (or more) times, and do steps (4,5,6) 16 billion (or more) times, to get four times the work, while saving 12 billion (or more) iterations of step (3). Depending on the number of Chunk1’s you set yourself up to find, and the relative weight of the Expand versus Compress steps, this comes to (n-1)/n / 2 / (1+c/e), where n is the number of different Chunk1’s you have. If you take the weight of Expand and Compress steps as about equal, it simplifies to 25%*(n-1)/n, and with n=4, this is 18.75%. As such, an ASICBoost advantage of about 20% seems reasonably practical to me. At 50% hash and 20% advantage, my estimates for ASICBoost’s value are $33M in April, and $72M today.
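The advantage formula is easy to tabulate for other collision counts too; a small sketch, keeping the equal-weight assumption for the Expand and Compress steps:

/* asicboost_advantage.c - tabulate (n-1)/n / 2 / (1 + c/e) for a few
 * collision counts, assuming Expand and Compress cost about the same. */
#include <stdio.h>

int main(void)
{
    double c_over_e = 1.0;   /* relative weight of Compress vs Expand */
    for (int n = 2; n <= 16; n *= 2) {
        double saving = ((double)(n - 1) / n) / 2.0 / (1.0 + c_over_e);
        printf("n = %2d collisions -> %5.2f%% of the work saved\n",
               n, 100.0 * saving);
    }
    return 0;
}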

So as to the question of whether you’d use ASICBoost, I think the answer is a clear yes: the lower end estimate has risen from $2M to $3.3M per year, and since Bitmain have acknowledged that AntMiners support ASICBoost in hardware already, the only additional cost is finding collisions, which may not be completely trivial, but is not difficult and is easily automated.

If the benefit is only in this range, however, this does not provide a plausible explanation for opposing segwit: having the Bitcoin industry come to a consensus about how to move forward would likely increase the bitcoin price substantially, definitely increasing Bitmain’s mining revenue — even a 2% increase in price would cover their additional costs. However, as above, I believe this is only a lower bound, and a more reasonable estimate is on the order of $11M-$47M as of April or $24M-$103M as of today. This is a much more serious range, and would require an 11%-25% increase in price to not be an outright loss; and a far more attractive proposition would be to find a compromise position that both allows the industry to move forward (increasing the price) and allows ASICBoost to remain operational (maintaining the cost savings / revenue boost).

 

It’s possible to take a different approach to analysing the cost-effectiveness of mining given how much you need to pay in electricity costs. If you have access to a lot of power at a flat rate, can deal with other hosting issues, can expand (or reduce) your mining infrastructure substantially, and have some degree of influence in how much hashpower other miners can deploy, then you can derive a formula for what proportion of hashpower is most profitable for you to control.

In particular, if your costs are determined by an electricity (and cooling, etc) price, E, in dollars per kWh and performance, r, in Joules per gigahash, then given your hashrate, h in terahash/second, your power usage in watts is (h*1e3*r), and you run this for 600 seconds on average between each block (h*r*6e5 Ws), which you divide by 3.6M to convert to kWh (h*r/6), then multiply by your electricity cost to get a dollar figure (h*r*E/6). Your revenue depends on the hashrate of everyone else, which we’ll call g, and on average you receive (p*R*h/(h+g)) every 600 seconds, where p is the price of Bitcoin in dollars and R is the reward (subsidy and fees) you receive from a block. Your profit is just the difference, namely h*(p*R/(h+g) – r*E/6). Assuming you’re able to manufacture and deploy hashrate relatively easily, at least in comparison to everyone else, you can optimise your profit by varying h while the other variables (bitcoin price p, block reward R, miner performance r, electricity cost E, and external hashpower g) remain constant (ie, set the derivative of that formula with respect to h to zero and simplify), which gives a result of 6gpR/Er = (g+h)^2.

This is solvable for h (square root both sides and subtract g), but if we assume Bitmain is clever and well funded enough to have already essentially optimised their profits, we can get a better sense of what this means. Since g+h is just the total bitcoin hashrate, if we call that t, and divide both sides, we get 6gpR/Ert = t, or g/t = (Ert)/(6pR), which tells us what proportion of hashrate the rest of the network can have (g/t) if Bitmain has optimised its profits, or, alternatively, we can work out h/t = 1-g/t = 1-(Ert)/(6pR), which tells us what proportion of hashrate Bitmain will have if it has optimised its profits. Plugging in E=$0.03 per kWh, r=0.1 J/GH, t=6e6 TH/s, p=$2400/BTC, R=12.5 BTC gives a figure of 0.9 – so given the current state of the network, and Guy Corem’s cost estimate, Bitmain would optimise its day to day profits by controlling 90% of mining hashrate. I’m not convinced $0.03 is an entirely reasonable figure, though — my inclination is to suspect something like $0.08 per kWh is more reasonable; but even so, that only reduces Bitmain’s optimal control to around 73%.
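A quick sketch of that last calculation, using only the figures already given in the text, in case you want to try other electricity prices:

/* optimal_share.c - the profit-maximising hashrate share from the formula
 * above, h/t = 1 - E*r*t / (6*p*R), with the figures quoted in the text. */
#include <stdio.h>

static double optimal_share(double E, double r, double t, double p, double R)
{
    return 1.0 - (E * r * t) / (6.0 * p * R);
}

int main(void)
{
    double r = 0.1;     /* J/GH          */
    double t = 6e6;     /* total TH/s    */
    double p = 2400.0;  /* USD per BTC   */
    double R = 12.5;    /* BTC per block */

    printf("at $0.03/kWh: optimal share = %.0f%%\n",
           100.0 * optimal_share(0.03, r, t, p, R));   /* ~90% */
    printf("at $0.08/kWh: optimal share = %.0f%%\n",
           100.0 * optimal_share(0.08, r, t, p, R));   /* ~73% */
    return 0;
}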

Because of that incentive structure, if Bitmain’s current hashrate is lower than that amount, then lowering manufacturing costs for own-use miners by 15% (per Sam Cole’s estimates) and lowering ongoing costs by 15%-30% by using ASICBoost could have a compounding effect by making it easier to quickly expand. (It’s not clear to me that manufacturing a line of ASICBoost-only miners to reduce manufacturing costs by 15% necessarily makes sense. For one thing, this would come at a cost of not being able to mine with them while they are state of the art, then sell them on to customers once a more efficient model has been developed, which seems like it might be a good way to manage inventory. For another, it vastly increases the impact of ASICBoost not being available: rather than simply increasing electricity costs by 15%-30%, it would mean reducing output to 10%-25% of what it was, likely rendering the hardware immediately obsolete)

Using the same formula, it’s possible to work out a ratio of bitcoin price (p) to hashrate (t) that makes it suboptimal for a manufacturer to control a hashrate majority (at least just due to normal mining income): h/t < 0.5, 1-Ert/6pR < 0.5, so t > 3pR/Er. Plugging in p=2400, R=12.5, E=0.08, r=0.1, this gives a total hash rate of 11.25M TH/s, almost double the current hash rate. This hashrate target would obviously increase as the bitcoin price increases, halve if the block reward halves (eg, if a fall in the inflation subsidy is not compensated by a corresponding increase in fee income), increase if the efficiency of mining hardware increases, and decrease if the cost of electricity increases. For a simpler formula, assuming the best hosting price is $0.08 per kWh, and while the Antminer S9’s efficiency at 0.1 J/GH is state of the art, and the block reward is 12.5 BTC, the global hashrate in TH/s should be at least around 5000 times the price (ie 3R/Er = 4687.5, near enough to 5000).
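And the corresponding threshold calculation, again just restating the figures above:

/* hashrate_floor.c - minimum total hashrate (TH/s) above which controlling
 * a majority stops being the profit-maximising strategy: t > 3*p*R/(E*r). */
#include <stdio.h>

int main(void)
{
    double p = 2400.0;  /* USD per BTC   */
    double R = 12.5;    /* BTC per block */
    double E = 0.08;    /* USD per kWh   */
    double r = 0.1;     /* J/GH          */

    double t_min = 3.0 * p * R / (E * r);
    printf("hashrate floor: %.2fM TH/s (%.1f TH/s per dollar of price)\n",
           t_min / 1e6, 3.0 * R / (E * r));
    return 0;
}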

Note that this target also sets a limit on the range at which mining can be profitable: if it’s just barely better to allow other people to control >50% of miners when your cost of electricity is E, then for someone else whose cost of electricity is 2*E or more, optimal profit is when other people control 100% of hashrate, that is, you don’t mine at all. Thus if the best large scale hosting globally costs $0.08/kWh, then either mining is not profitable anywhere that hosting costs $0.16/kWh or more, or there’s strong centralisation pressure for a mining hardware manufacturer with access to the cheapest electricity to control more than 50% of hashrate. Likewise, if Bitmain really can do hosting at $0.03/kWh, then either they’re incentivised to try to control over 50% of hashpower, or mining is unprofitable at $0.06/kWh and above.

If Bitmain (or any mining ASIC manufacturer) is supplying the majority of new hashrate, they actually have a fairly straightforward way of achieving that goal: if they dedicate 50-70% of each batch of ASICs built for their own use, and sell the rest, with the retail price of the sold miners sufficient to cover the manufacturing cost of the entire batch, then cashflow will mostly take care of itself. At $1200 retail price and $500 manufacturing costs (per Jimmy Song’s numbers), that strategy would imply targeting control of up to about 58% of total hashpower. The above formula would imply that’s the profit-maximising target at the current total hashrate and price if your average hosting cost is about $0.13 per kWh. (Those figures obviously rely heavily on the accuracy of the estimated manufacturing costs of mining hardware; at $400 per unit and $1200 retail, that would be 67% of hashpower, and about $0.09 per kWh)
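The batch arithmetic there is straightforward; a sketch using the quoted retail and manufacturing figures:

/* batch_share.c - fraction of each batch that can be kept for own use when
 * the units sold must pay for the whole batch: keep = 1 - cost/retail. */
#include <stdio.h>

int main(void)
{
    double retail  = 1200.0;              /* quoted retail price, USD     */
    double costs[] = { 500.0, 400.0 };    /* per-unit manufacturing cost  */
    for (int i = 0; i < 2; i++) {
        double keep = 1.0 - costs[i] / retail;
        printf("cost $%.0f -> keep %.0f%% of each batch\n",
               costs[i], 100.0 * keep);
    }
    return 0;
}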

Strategies like the above are also why this analysis doesn’t apply to miners who buy their hardware from a vendor rather than building their own: because every time they increase their own hash rate (h), the external hashrate (g) also increases as a direct result, it is not valid to assume that g is constant when optimising h, so the partial derivative and optimisation is in turn invalid, and the final result is not applicable.

 

Bitmain’s mining pool, AntPool, obviously doesn’t directly account for 58% or more of total hashpower; though currently they’re the pool with the most hashpower at about 20%. As I understand it, Bitmain is also known to control at least BTC.com and ConnectBTC which add another 7.6%. The other “Emergent Consensus” supporting pools (Bitcoin.com, BTC.top, ViaBTC) account for about 22% of hashpower, however, which brings the total to just under 50%, roughly the right ballpark — and an additional 8% or 9% could easily be pointed at other public pools like slush or f2pool. Whether the “emergent consensus” pools are aligned due to common ownership and contractual obligations or simply similar interests is debatable, though. ViaBTC is funded by Bitmain, and Canoe was built and sold by Bitmain, which means strong contractual ties might exist, however  Jihan Wu, Bitmain’s co-founder, has disclaimed equity ties to BTC.top. Bitcoin.com is owned by Roger Ver, but I haven’t come across anything implying a business relationship between Bitmain and Bitcoin.com beyond supplier and customer. However John McAffee’s apparently forthcoming MGT mining pool is both partnered with Bitmain and advised by Roger Ver, so the existence of tighter ties may be plausible.

It seems likely to me that Bitmain is actually behaving more altruistically than is economically rational according to the analysis above: while it seems likely to me that Bitcoin.com, BTC.top, ViaBTC and Canoe have strong ties to Bitmain and that Bitmain likely has a high level of influence — whether due to contracts, business relationships or simply due to loyalty and friendship — this nevertheless implies less control over the hashpower than direct ownership and management, and likely less profit. This could be due to a number of factors: perhaps Bitmain really is already sufficiently profitable from mining that they’re focusing on building their business in other ways; perhaps they feel the risks of centralised mining power are too high (and would ultimately be a risk to their long term profits) and are doing their best to ensure that mining power is decentralised while still trying to maximise their return to their investors; perhaps the rate of expansion implied by this analysis requires more investment than they can cover from their cashflow, and additional hashpower is funded by new investors who are simply assigned ownership of a new mining pool, which may help Bitmain’s investors assure themselves they aren’t being duped by a pyramid scheme and gives more of an appearance of decentralisation.

It seems to me therefore there could be a variety of ways in which Bitmain may have influence over a majority of hashpower:

  • Direct ownership and control, that is being obscured in order to avoid an economic backlash that might result from people realising over 50% of hashpower is controlled by one group
  • Contractual control despite independent ownership, such that customers of Bitmain are committed to follow Bitmain’s lead when signalling blocks in order to maintain access to their existing hardware, or to be able to purchase additional hardware (an account on reddit appearing to belong to the GBMiners pool has suggested this is the case)
  • Contractual control due to offering essential ongoing services, eg support for physical hosting, or some form of mining pool services — maintaining the infrastructure for covert ASICBoost may be technically complex enough that Bitmain’s customers cannot maintain it themselves, but that Bitmain could relatively easily supply as an ongoing service to their top customers.
  • Contractual influence via leasing arrangements rather than sale of hardware — if hardware is leased to customers, or financing is provided, Bitmain could retain some control of the hardware until the leasing or financing term is complete, despite not having ownership
  • Coordinated investment resulting in cartel-like behaviour — even if there is no contractual relationship where Bitmain controls some of its customers in some manner, it may be that forming a cartel of a few top miners allows those miners to increase profits; in that case rather than a single firm having control of over 50% of hashrate, a single cartel does. While this is technically different, it does not seem likely to be an improvement in practice. If such a cartel exists, its members will not have any reason to compete against each other until it has maximised its profits, with control of more than 70% of the hashrate.

 

So, conclusions:

  • ASICBoost is worth using if you are able to. Bitmain is able to.
  • Nothing I’ve seen suggests Bitmain is economically clueless; so since ASICBoost is worth doing, and Bitmain is able to use it on mainnet, Bitmain are using it on mainnet.
  • Independently of ASICBoost, Bitmain’s most profitable course of action seems to be to control somewhere in the range of 50%-80% of the global hashrate at current prices and overall level of mining.
  • The distribution of hashrate between mining pools aligned with Bitmain in various ways makes it plausible, though not certain, that this may already be the case in some form.
  • If all this hashrate is benefiting from ASICBoost, then my estimate is that the value of ASICBoost is currently about $72M per annum
  • Avoiding dominant mining manufacturers tending towards supermajority control of hashrate requires either a high global hashrate or a relatively low price — the hashrate in TH/s should be about 5000 times the price in dollars.
  • The current price is about $2400 USD/BTC, so the corresponding hashrate to prevent centralisation at that price point is 12M TH/s. Conversely, the current hashrate is about 6M TH/s, so the maximum price that doesn’t cause untenable centralisation pressure is $1200 USD/BTC.

CFP for Percona Live Europe Dublin 2017 closes July 17 2017!

I’ve always enjoyed the Percona Live Europe events, because I consider them to be a lot more intimate than the event in Santa Clara. It started in London, had a smashing success last year in Amsterdam (conference sold out), and by design the travelling conference is now in Dublin from September 25-27 2017.

So what are you waiting for when it comes to submitting to Percona Live Europe Dublin 2017? The call for presentations closes on July 17 2017, and the conference has a pretty diverse topic structure (MySQL [and its diverse ecosystem including MariaDB Server naturally], MongoDB and other open source databases including PostgreSQL, time series stores, and more).

And I think we also have a pretty diverse conference committee in terms of expertise. You can also register now. Early bird registration ends August 8 2017.

I look forward to seeing you in Dublin, so we can share a pint of Guinness. Sláinte.

July 12, 2017

Toggling Between Pulseaudio Outputs when Docking a Laptop

In addition to selecting the right monitor after docking my ThinkPad, I wanted to set the correct sound output since I have headphones connected to my Ultra Dock. This can be done fairly easily using Pulseaudio.

Switching to a different pulseaudio output

To find the device name and the output name I need to provide to pacmd, I ran pacmd list-sinks:

2 sink(s) available.
...
  * index: 1
    name: <alsa_output.pci-0000_00_1b.0.analog-stereo>
    driver: <module-alsa-card.c>
...
    ports:
        analog-output: Analog Output (priority 9900, latency offset 0 usec, available: unknown)
            properties:

        analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown)
            properties:
                device.icon_name = "audio-speakers"

From there, I extracted the soundcard name (alsa_output.pci-0000_00_1b.0.analog-stereo) and the names of the two output ports (analog-output and analog-output-speaker).

To switch between the headphones and the speakers, I can therefore run the following commands:

pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker

Listening for headphone events

Then I looked for the ACPI event triggered when my headphones are detected by the laptop after docking.

After looking at the output of acpi_listen, I found jack/headphone HEADPHONE plug.

Combining this with the above pulseaudio names, I put the following in /etc/acpi/events/thinkpad-dock-headphones:

event=jack/headphone HEADPHONE plug
action=su francois -c "pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output"

to automatically switch to the headphones when I dock my laptop.

Finding out whether or not the laptop is docked

While it is possible to hook into the docking and undocking ACPI events and run scripts, there doesn't seem to be an easy way from a shell script to tell whether or not the laptop is docked.

In the end, I settled on detecting the presence of USB devices.

I ran lsusb twice (once docked and once undocked) and then compared the output:

lsusb  > docked 
lsusb  > undocked 
colordiff -u docked undocked 

This gave me a number of differences since I have a bunch of peripherals attached to the dock:

--- docked  2017-07-07 19:10:51.875405241 -0700
+++ undocked    2017-07-07 19:11:00.511336071 -0700
@@ -1,15 +1,6 @@
 Bus 001 Device 002: ID 8087:8000 Intel Corp. 
 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
-Bus 003 Device 081: ID 0424:5534 Standard Microsystems Corp. Hub
-Bus 003 Device 080: ID 17ef:1010 Lenovo 
 Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
-Bus 002 Device 041: ID xxxx:xxxx ...
-Bus 002 Device 040: ID xxxx:xxxx ...
-Bus 002 Device 039: ID xxxx:xxxx ...
-Bus 002 Device 038: ID 17ef:100f Lenovo 
-Bus 002 Device 037: ID xxxx:xxxx ...
-Bus 002 Device 042: ID 0424:2134 Standard Microsystems Corp. Hub
-Bus 002 Device 036: ID 17ef:1010 Lenovo 
 Bus 002 Device 002: ID xxxx:xxxx ...
 Bus 002 Device 004: ID xxxx:xxxx ...
 Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

I picked 17ef:1010 as it appeared to be some internal bus on the Ultra Dock (none of my USB devices were connected to Bus 003) and then ended up with the following port toggling script:

#!/bin/bash

if /usr/bin/lsusb | grep 17ef:1010 > /dev/null ; then
    # docked
    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
else
    # undocked
    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker
fi

July 11, 2017

Linux Security Summit 2017 Schedule Published

The schedule for the 2017 Linux Security Summit (LSS) is now published.

LSS will be held on September 14th and 15th in Los Angeles, CA, co-located with the new Open Source Summit (which includes LinuxCon, ContainerCon, and CloudCon).

The cost of LSS for attendees is $100 USD. Register here.

Highlights from the schedule include the following refereed presentations:

There’ll also be the usual Linux kernel security subsystem updates, and BoF sessions (with LSM namespacing and LSM stacking sessions already planned).

See the schedule for full details of the program, and follow the twitter feed for the event.

This year, we’ll also be co-located with the Linux Plumbers Conference, which will include a containers microconference with several security development topics, and likely also a TPMs microconference.

A good critical mass of Linux security folk should be present across all of these events!

Thanks to the LSS program committee for carefully reviewing all of the submissions, and to the event staff at Linux Foundation for expertly planning the logistics of the event.

See you in Los Angeles!

July 10, 2017

This Week in HASS – term 3, week 1

Today marks the start of a new term in Queensland, although most states and territories have at least another week of holidays, if not more. It’s always hard to get back into the swing of things in the 3rd term, with winter cold and the usual round of flus and sniffles. OpenSTEM’s 3rd term units branch into new areas to provide some fresh material and a new direction for the new semester. This term younger students are studying the lives of children in the past from a narrative context, whilst older students are delving into aspects of Australian history.

Foundation/Prep/Kindy to Year 3

The main resource for our youngest students for Unit F.3 is Children in the Past – a collection of stories of children from a range of different historical situations. This resource contains 6 stories of children from Aboriginal Australia more than 1,000 years ago, Ancient Egypt, Ancient Rome, Ancient China, Aztec Mexico and Zulu Southern Africa several hundred years ago. Teachers can choose one or two stories from this resource to study in depth with the students this term. The range of stories allows teachers to tailor the material to their class and ensure that there is no need to repeat the same stories in consecutive years. Students will compare the lives of children in the stories with their own lives – focusing on different aspects in different weeks of the term. In this first week teachers will read the stories to the class and help them find the places described on the OpenSTEM “Our World” map and/or a globe.

Students in integrated Foundation/Prep/Kindy and Year 1 classes (Unit F-1.3), will also be examining stories from the Children in the Past resource. Students in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) will also be comparing their own lives with those of children in the past; however, they will use a collection of stories called Living in the Past, which covers the same areas and time periods as Children in the Past, but provides more in-depth information about a broader range of subject areas and includes the story of the young Tom Petrie, growing up in Brisbane in the 1840s. Students in Year 1 will be considering family structures and the differences and similarities between their own families and the families described in the stories. Students in Year 2 are starting to understand the differences which technology makes to peoples’ lives, especially the technology behind different modes of transport. Students in Year 3 retain a focus on local history. In fact, the Understanding Our World® units for Year 3, term 3 are tailored to match the capital city of the state or territory in which the student lives. Currently units are available for Brisbane and Perth, other capital cities are in preparation. Additional resources are available describing the foundation and growth of Brisbane and Perth, with other cities to follow. Teachers may also prefer to focus on the local community in a smaller town and substitute their own resources for those of the capital city.

Years 3 to 6

Opening of the first Australian Parliament

Older students are focusing on Australian history this term – Year 3 students (Unit 3.7) will be considering the history of their capital city (or local community) within the broader context of Australian history. Students in Year 4 (Unit 4.3) will be examining Australia in the period up to and including the first half of the 19th century. Students in Year 5 (Unit 5.3) examine the colonial period in Australian history; whilst students in Year 6 (Unit 6.3) are investigating Federation and Australia in the 20th century. In this first week of term, students in Years 3 to 6 will be compiling a timeline of Australian history and filling in important events which they already know about or have learnt about in previous units. Students will revisit this timeline in later weeks to add additional information. The main resources for this week are The History of Australia, a broad overview of Australian history from the Ice Age to the 20th century; and the History of Australian Democracy, an overview of the development of the democratic process in Australia.

Governor's House Sydney 1791

The rest of the 3rd term will be spent compiling a scientific report on an investigation into an aspect of Australian history. Students in Year 3 will choose a research topic from a list of themes concerning the history of their capital city. Students in Year 4 will choose from themes on Australia before 1788, the First Fleet, experiences of convicts and settlers, including children, as well as the impact of different animals brought to Australia during the colonial period. Students in Year 5 will choose from themes on the Australian colonies and people including explorers, convicts and settlers, massacres and resistance, colonial animals and industries such as sugar in Queensland. Students in Year 6 will choose from themes on Federation, including personalities such as Henry Parkes and Edmund Barton, Sport, Women’s Suffrage, Children, the Boer War and Aboriginal experiences. This research topic will be undertaken as a guided investigation throughout the term.

Bitcoin: ASICBoost and segwit2x – Background

I’ve been trying to make heads or tails of what the heck is going on in Bitcoin for a while now. I’m not sure I’ve actually made that much progress, but I’ve at least got some thoughts that seem coherent now.

First, this post is background for people playing along at home who aren’t familiar with the issues or jargon: Bitcoin is a currency based on an electronic ledger that essentially tracks how much Bitcoin exists, and how someone can be authorised to transfer it to someone else; that ledger is currently about 100GB in size, growing at a rate of about a gigabyte a week. The ledger is updated by miners, who compete by doing otherwise pointless work running cryptographic hashes (and in so doing obtain a “proof of work”), and in return receive a reward (denominated in bitcoin) made up from fees by people transacting and an inflation subsidy. Different miners are competing in an essentially zero-sum game, because fees and inflation are essentially a fixed amount that is (roughly) divided up amongst miners according to how much work they do — so while you get more reward for doing more work, it comes at a cost of other miners receiving less reward.

Because the ledger only grows by (about) a gigabyte each week (or a megabyte per block, which is roughly every ten minutes), there is a limit on how many transactions can be included each week (ie, supply is limited), which both increases fees and limits adoption — so for quite a while now, people in the bitcoin ecosystem with a focus on growth have wanted to work out ways to increase the transaction rate. Initial proposals in mid 2015 suggested allowing miners to regularly determine the limit with no official upper bound (nominally “BIP100“, though never actually formally submitted as a proposal), or to increase by a factor of eight within six months, then double every two years after that, until reaching almost 200 times the current size by 2036 (BIP101), or to increase at a rate of about 17% per annum (suggested on the mailing list, but never formally proposed BIP103). These proposals had two major risks: locking in a lot of growth that may turn out to be unnecessary or actively harmful, and requiring what is called a “hard fork”, which would render the existing bitcoin software unable to track the ledger after the change took effect with the possible outcome that two ledgers would coexist and would in turn cause a range of problems. To reduce the former risk, a minimal compromise proposal was made to “kick the can down the road” and just double the ledger growth rate, then figure out a more permanent solution down the road (BIP102) (or to double it three times — to 2MB, 4MB then 8MB — over four years, per Adam Back). A few months later, some of the devs figured out a way to more or less achieve this that also doesn’t require a hard fork, and comes with a host of other benefits, and proposed an update called “segregated witness” at the December 2015 Scaling Bitcoin conference.

And shortly after that things went completely off the rails, and have stayed that way since. Ultimately there seem to be two camps: one group is happy to deploy segregated witness, and is eager to make further improvements to Bitcoin based on that (this is my take on events); while the other group does not want to deploy it, perhaps due to some combination of being opposed to the segregated witness changes directly, wanting a more direct upgrade immediately, being afraid deploying segregated witness will block other changes, or wanting to take control of the bitcoin codebase/roadmap from the current developers (take this with a grain of salt: these aren’t opinions I share or even find particularly reasonable, so I can’t do them justice when describing them; cf ViaBTC’s post to get that side of the argument made directly, eg).

Most recently, and presumably on the basis that the opposed group are mostly worried that deploying segregated witness will prevent or significantly delay a more direct increase in capacity, a bitcoin venture capitalist, Barry Silbert, organised an agreement amongst a number of companies including many miners, to both activate segregated witness within the next month, and to do a hard fork capacity increase by the end of the year. This is the “segwit2x” project; named because it takes segregated witness, (“segwit”) and then additionally doubles its capacity increase (“2x”). This agreement is not supported by any of the existing dev team, and is being developed by Jeff Garzik (who was behind BIP100 and BIP102 mentioned above) in a forked codebase renamed “btc1“, so if successful, this may also satisfy members of the opposed group motivated by a desire to take control of the bitcoin codebase and roadmap, despite that not being an explicit part of the agreement itself.

To me, the arguments presented for opposing segwit don’t really seem plausible. As far as future development goes, a roadmap was put out in December 2015 and endorsed by many developers that explicitly included a hard fork for increased capacity (“moderate block size increase proposals (such as 2/4/8 …)”), among many other things, so the risk of no further progress happening seems contrary to the facts to me. The core bitcoin devs are extremely capable in my estimation, so replacing them seems a bad idea from the start, but even more than that, they take a notably hands off approach to dictating where Bitcoin will go in future — so, to my mind, it seems like a more sensible thing to try would be working with them to advance the bitcoin ecosystem in whatever direction you want, rather than to try to replace them outright. In that context, it seems particularly notable to me that in the eighteen months between the segregated witness proposal and the segwit2x agreement, there hasn’t been any serious attempt to propose a hard fork capacity increase that meets the core dev’s quality standards; for instance there has never been any code for BIP100, and of the various hard forking codebases that have arisen by advocates of the hard fork approach — Bitcoin XT, Bitcoin Classic, Bitcoin Unlimited, btc1, and Bitcoin ABC — none have been developed in a way that’s suitable for the changes to be reviewed and merged into core via a pull request in the normal fashion. Further, since one of the main criticisms of a hard fork is that deployment costs are higher when it is done in a short time frame (weeks or a few months versus a year or longer), that lack of engagement over the past 18 months followed by a desperate rush now seems particularly poor to me.

A different explanation for the opposition to segwit became public in April, however. ASICBoost is a patent-pending optimisation to the way Bitcoin miners do the work that entitles them to extend the ledger (for which they receive the rewards described earlier), and while there are a few ways of making use of ASICBoost, perhaps the most effective way turns out to be incompatible with segwit. There are three main alternatives to the covert, segwit-incompatible approach, all of which have serious downsides. The first, overt ASICBoost via modifying the block version reveals that you’re using ASICBoost, which would either (a) encourage other miners to also use the optimisation reducing your profits, (b) give the patent holder cause to charge you royalties or cause other problems (assuming the patent is eventually granted and deemed valid), or (c) encourage the bitcoin community at large to change the ecosystem rules so that the optimisation no longer works. The second, mining empty blocks via ASICBoost means you don’t gain any fee income, reducing your revenue and hence profit. And the third, rolling the extranonce to find a collision rather than combining partial transaction trees increases the preparation work by a factor of ten or so, which is probably enough to outweigh the savings from the optimisation in the first place.

If ASICBoost were being used by a significant number of miners, and segregated witness prevents its continued use in practice, then we suddenly have a very plausible explanation for much of the apparent madness: the loss of the optimisation could significantly increase some miners’ costs or reduce their revenue, reducing profit either way (a high end estimate of $100,000,000 per year was given in the original explanation), which would justify significant investment in blocking that change. Further, an honest explanation of the problem would not be feasible, because this would be just as bad as doing the optimisation overtly — it would increase competition, alert the potential patent owners, and might cause the optimisation to be deliberately disabled — all of which would also negatively affect profits. As a result, there would be substantial opposition to segwit, but the reasons presented in public for this opposition would be false, and it would not be surprising if the people presenting these reasons only give half-hearted effort into providing evidence — their purpose is simply to prevent or at least delay segwit, rather than to actually inform or build a new consensus. To this line of thinking the emphasis on lack of communication from core devs or the desire for a hard fork block size increase aren’t the actual goal, so the lack of effort being put into resolving them over the past 18 months from the people complaining about them is no longer surprising.

With that background, I think there are two important questions remaining:

  1. Is it plausible that preventing ASICBoost would actually cost people millions in profit, or is that just an intriguing hypothetical that doesn’t turn out to have much to do with reality?
  2. If preserving ASICBoost is a plausible motivation, what will happen with segwit2x, given that by enabling segregated witness, it does nothing to preserve ASICBoost?

Well, stay tuned…

July 08, 2017

One Million Jobs for Spartan

Whilst it is a loose metric, our little cluster, "Spartan", at the University of Melbourne ran its 1 millionth job today, almost exactly a year after launch.

The researcher in question is doing their PhD in biochemistry. The project is a childhood asthma study:

"The nasopharynx is a source of microbes associated with acute respiratory illness. Respiratory infection and/ or the asymptomatic colonisation with certain microbes during childhood predispose individuals to the development of asthma.

Using data generated from 16S rRNA sequencing and metagenomic sequencing of nasopharyn samples, we aim to identify which specific microbes and interactions are important in the development of asthma."

Moments like this is why I do HPC.

Congratulations to the rest of the team and to the user community.


July 07, 2017

Using the Nitrokey HSM with GPG in macOS

Getting yourself set up in macOS to sign keys using a Nitrokey HSM with gpg is non-trivial. Allegedly (at least some) Nitrokeys are supported by scdaemon (GnuPG’s stand-in abstraction for cryptographic tokens) but it seems that the version of scdaemon in brew doesn’t have support.

However there is gnupg-pkcs11-scd which is a replacement for scdaemon which uses PKCS #11. Unfortunately it’s a bit of a hassle to set up.

There’s a bunch of things you’ll want to install from brew: opensc, gnupg, gnupg-pkcs11-scd, pinentry-mac, openssl and engine_pkcs11.

brew install opensc gnupg gnupg-pkcs11-scd pinentry-mac \
    openssl engine-pkcs11

gnupg-pkcs11-scd won’t create keys, so if you’ve not made one already, you need to generate a keypair yourself, which you can do with pkcs11-tool:

pkcs11-tool --module /usr/local/lib/opensc-pkcs11.so -l \
    --keypairgen --key-type rsa:2048 \
    --id 10 --label 'Danielle Madeley'

The --id can be any hexadecimal id you want. It’s up to you to avoid collisions.
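
If you want to see which ids and labels are already in use before picking one, here is a rough sketch using the python-pkcs11 library to list the objects on the token (the module path, token label and PIN are placeholders, not values from this post):

import pkcs11

# Placeholders: substitute your own module path, token label and user PIN.
lib = pkcs11.lib('/usr/local/lib/opensc-pkcs11.so')
token = lib.get_token(token_label='UserPIN (SmartCard-HSM)')

with token.open(user_pin='123456') as session:
    for obj in session.get_objects():
        try:
            print(obj[pkcs11.Attribute.ID], obj[pkcs11.Attribute.LABEL])
        except Exception:
            # not every object exposes both attributes
            print(obj)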

Then you’ll need to generate a self-signed X.509 certificate for this keypair (you’ll need both the PEM form and the DER form):

/usr/local/opt/openssl/bin/openssl << EOF
engine -t dynamic \
    -pre SO_PATH:/usr/local/lib/engines/engine_pkcs11.so \
    -pre ID:pkcs11 \
    -pre LIST_ADD:1 \
    -pre LOAD \
    -pre MODULE_PATH:/usr/local/lib/opensc-pkcs11.so
req -engine pkcs11 -new -key 0:10 -keyform engine \
    -out cert.pem -text -x509 -days 3640 -subj '/CN=Danielle Madeley/'
x509 -in cert.pem -out cert.der -outform der
EOF

The flag -key 0:10 identifies the token and key id (see above when you created the key) you’re using. If you want to refer to a different token or key id, you can change these.

And import it back into your HSM:

pkcs11-tool --module /usr/local/lib/opensc-pkcs11.so -l \
    --write-object cert.der --type cert \
    --id 10 --label 'Danielle Madeley'

You can then configure gnupg-agent to use gnupg-pkcs11-scd. Edit the file ~/.gnupg/gpg-agent.conf:

scdaemon-program /usr/local/bin/gnupg-pkcs11-scd
pinentry-program /usr/local/bin/pinentry-mac

And the file ~/.gnupg/gnupg-pkcs11-scd.conf:

providers nitrokey
provider-nitrokey-library /usr/local/lib/opensc-pkcs11.so

gnupg-pkcs11-scd is pretty nifty in that it will throw up a (pin entry) dialog if your token is not available, and is capable of supporting multiple tokens and providers.

Reload gpg-agent:

gpg-agent --server gpg-connect-agent << EOF
RELOADAGENT
EOF

Check your new agent is working:

gpg --card-status

Get your key handle (grip), which is the 40-character hex string after the phrase KEY-FRIEDNLY (sic):

gpg-agent --server gpg-connect-agent << EOF
SCD LEARN
EOF

Import this key into gpg as an ‘Existing key’, giving the key grip above:

gpg --expert --full-generate-key

You can now use this key as normal, create sub-keys, etc:

gpg -K
/Users/danni/.gnupg/pubring.kbx
-------------------------------
sec> rsa2048 2017-07-07 [SCE]
 1172FC7B4B5755750C65F9A544B80C280F80807C
 Card serial no. = 4B43 53233131
uid [ultimate] Danielle Madeley <danielle@madeley.id.au>

echo -n "Hello World" | gpg --armor --clearsign --textmode

Side note: the curses-based pinentry doesn’t deal with piping content into stdin, which is why you want pinentry-mac.

Terminal console showing a gpg signing command. Over the top is a dialog box prompting the user to insert her Nitrokey token

You can also import your certificate into gpgsm:

gpgsm --import < ca-certificate
gpgsm --learn-card

And that’s it, now you can sign your git tags with your super-secret private key, or whatever it is you do. Remember that you can’t exfiltrate the secret keys from your HSM in the clear, so if you need a backup you can create a DKEK backup (see the SmartCard-HSM docs), or make sure you’ve generated that revocation certificate, or just decide disaster recovery is for dweebs.

July 06, 2017

Broadband Speeds, 2 Years Later

Two years ago, considering the blocksize debate, I made two attempts to measure average bandwidth growth, first using Akamai serving numbers (which gave an answer of 17% per year), and then using fixed-line broadband data from OFCOM UK, which gave an answer of 30% per annum.

We have two years more of data since then, so let’s take another look.

OFCOM (UK) Fixed Broadband Data

First, the OFCOM data:

  • Average download speed in November 2008 was 3.6Mbit/s
  • Average download speed in November 2014 was 22.8Mbit/s
  • Average download speed in November 2016 was 36.2Mbit/s
  • Average upload speed in November 2008 to April 2009 was 0.43Mbit/s
  • Average upload speed in November 2014 was 2.9Mbit/s
  • Average upload speed in November 2016 was 4.3Mbit/s

So in the last two years we’ve seen a 26% per annum increase in download speed and a 22% per annum increase in upload speed, bringing the eight-year averages down from 36%/37% to 33% per annum. The divergence between download and upload improvements is concerning (I previously assumed they were the same, but we have to design for the lesser of the two for a peer-to-peer system).
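
For the curious, the annualised figures are just compound annual growth rates over the raw numbers above; a quick sketch (my own, not from the OFCOM report):

def cagr(old, new, years):
    # compound annual growth rate
    return (new / old) ** (1 / years) - 1

print(f"download 2014-2016: {cagr(22.8, 36.2, 2):.0%}")  # ~26% per annum
print(f"upload   2014-2016: {cagr(2.9, 4.3, 2):.0%}")    # ~22% per annum
print(f"download 2008-2016: {cagr(3.6, 36.2, 8):.0%}")   # ~33% per annum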

The idea that upload speed may be topping out is reflected in the Nov-2016 report, which notes only an 8% upload increase in services advertised as “30Mbit” or above.

Akamai’s State Of The Internet Reports

Now let’s look at Akamai’s Q1 2016 report and Q1-2017 report.

  • Annual increase in global average speed, Q1 2015 – Q1 2016: 23%
  • Annual increase in global average speed, Q1 2016 – Q1 2017: 15%

This gives an estimate of 19% per annum in the last two years. Reassuringly, the US and UK (both fairly high-bandwidth countries, considered in my previous post to be a good estimate for the future of other countries) have increased by 26% and 19% in the last two years, indicating there’s no immediate ceiling to bandwidth.
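
(The 19% figure is presumably the geometric mean of those two year-on-year increases; the same sort of sketch as the OFCOM calculation above:)

print(f"{(1.23 * 1.15) ** 0.5 - 1:.0%}")  # ~19% per annum over the two years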

You can play with the numbers for different geographies on the Akamai site.

Conclusion: 19% Is A Conservative Estimate

17% growth now seems a little pessimistic: over the last 9 years the Akamai numbers suggest the US has increased by 19% per annum and the UK by almost 21%.  The gloss seems to be coming off the UK fixed-broadband numbers, but they still show a 22% per annum increase in upload speed over the last two years.  Even Australia and the Philippines have managed almost 21%.

July 04, 2017

python-pkcs11 with the Nitrokey HSM

So my Nitrokey HSM arrived and it works great, thanks to the Nitrokey peeps for sending me one.

Because the OpenSC PKCS #11 module is a little more lightweight than some other vendors’ modules, which often implement mechanisms that are not actually supported by the hardware (e.g. the Opencryptoki TPM module), I wrote up some documentation on how to use the device, focusing on how to extract the public keys for use outside of PKCS #11, as the Nitrokey doesn’t implement any of the public key functions.

Nitrokey with python-pkcs11

This also encouraged me to add a whole bunch more import/extraction functions for the various key formats, and to get very frustrated at the lack of documentation for little things like how OpenSSL stores EC public keys (the answer is as SubjectPublicKeyInfo from X.509). I think there might also be some operating-system-specific glitches with encoding some DER structures, so I think I need to move from pyasn1 to asn1crypto.
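
As a rough sketch of where asn1crypto would slot in (assuming asn1crypto is installed and that 'ec_pub.der' is a DER-encoded SubjectPublicKeyInfo as exported by OpenSSL; both the filename and the library switch are assumptions on my part), decoding looks something like this:

from asn1crypto.keys import PublicKeyInfo

with open('ec_pub.der', 'rb') as f:      # hypothetical file name
    info = PublicKeyInfo.load(f.read())

print(info.algorithm)   # e.g. 'ec' for an elliptic curve key
print(info.native)      # the decoded structure as plain Python types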

July 02, 2017

The Science of Cats

Ah, the comfortable cat! Most people agree that cats are experts at being comfortable and getting the best out of life, with the assistance of their human friends – but how did this come about? Geneticists and historians are continuing to study how cats and people came to live together and how cats came to organise themselves into such a good deal in their relationship with humans. Cats are often allowed liberties that few other animals, even domestic animals, can get away with – they are fed and usually pampered with comfortable beds (including human furniture), are kept warm, cuddled on demand; and, very often, are not even asked to provide anything except affection (on their terms!) in return. Often thought of as solitary animals, cats’ social behaviour is actually a lot more complex and recently further insights have been gained about how cats and humans came to enjoy the relationship that they have today.

Many people know that the Ancient Egyptians came to certain agreements with cats – cats are depicted in some of their art and mummified cats have been found. It is believed that cats may have been worshipped as representatives of the Cat Goddess, Bastet – interestingly enough, a goddess of war! Statues of cats from Ancient Egypt emphasise their regal bearing and tendency towards supercilious expressions. Cats were present in Egyptian art by 1950 B.C. and it was long thought that Egyptians were the first to domesticate the cat. However, in 2004 a cat was found buried with a human on the island of Cyprus in the Mediterranean, in a grave dated to 9,500 years ago, making it the earliest known cat associated with humans – many thousands of years earlier than the Egyptian cats. In 2008 a site in the Nile Valley was found which contained the remains of 6 cats – a male, a female and 4 kittens, which seemed to have been cared for by people about 6,000 years ago.

African Wild Cat, photo by Sonelle, CC-BY-SA

It is now fairly well accepted that cats domesticated people, rather than the other way round! Papers refer to cats as having “self-domesticated”, which sounds in line with cat behaviour. Genetically, all modern cats are descended from the African (also called Near Eastern) wild cats of about 8,000 years ago. There was an attempt to domesticate leopard cats about 7,500 years ago in China, but none of these animals contributed to the genetic material of the world’s modern cat populations. As humans in the Near East developed agriculture and started to live in settled villages, after 10,000 years ago, cats were attracted to these ready sources of food and more. The steady supply of food from agriculture allowed people to live in permanent villages but, unfortunately, these villages, stocked with food, also attracted other, less welcome animals, such as rats and mice, which are potential carriers of disease. The rats and mice were a source of food for the cats, who probably arrived in the villages as independent, nocturnal hunters, rather than being deliberately encouraged by people.

Detail of cat from tomb of Nebamun

Once cats were living in close proximity to people, trust developed and soon cats were helping humans in the hunt, as is shown in this detail from an Egyptian tomb painting on the right. Over time, cats became pets and part of the family and followed farmers from Turkey across into Europe, as well as being painted sitting under dining tables in Egypt. People started to interfere with the breeding of cats and it is now thought that the Egyptians selected more social, rather than more territorial cats. Contrary to the popular belief that cats are innately solitary, in fact African Wild Cats have complex social behaviour, much of which has been inherited by the domestic cat. African wild cats live in loosely affiliated groups made up mostly of female cats who raise kittens together. There are some males associated with the group, but they tend to visit infrequently and have a larger range, visiting several of the groups of females and kittens. The female cats take turns at nursing, looking after the kittens and hunting. The adult females share food only with their own kittens and not with the other adults. Cats recognise who belongs to their group and who doesn’t and tend to be aggressive to those outside the group. Younger cats are more tolerant of strangers, until they form their own groups. Males are not usually social towards each other, but occasionally tolerate each other in loose ‘brotherhoods’.

In our homes we form the social group, which may include one or more cats. If there is more than one cat these may subdivide themselves into cliques or factions. Pairs of cats raised together often remain closely bonded and affectionate for life. Other cats (especially males) may isolate themselves from the group and do not want to interact with other cats. Cats that are happy on their own do not need other cats for company. It is more common to find stressed cats in multi-cat households. Cats will tolerate other cats best if they are introduced when young. After 2 years of age cats are less tolerant of newcomers to the group. Humans take the place of parents in their cats’ lives. Cats who grow up with humans retain some psychological traits from kittenhood and never achieve full psychological maturity.

At the time that humans were learning to manipulate the environment to their own advantage by domesticating plants and animals, cats started learning to manipulate us. They have now managed to achieve very comfortable and prosperous lives with humans and have followed humans around the planet. Cats arrived in Australia with the First Fleet, having found a very comfortable niche on sailing ships helping to control vermin. Matthew Flinders‘ cat, Trim, became famous as a result of the book Flinders wrote about him. However, cats have had a devastating effect on the native wildlife of Australia. They kill millions of native animals every year, possibly even millions each night. It is thought that they have been responsible for the extinction of numbers of native mice and small marsupial species. Cats are very efficient and deadly nocturnal hunters. It is recommended that all cats are kept restrained indoors or in runs, especially at night. We must not forget that our cuddly companions are still carnivorous predators.

June 26, 2017

Codec 2 Wideband

I’m spending a month or so improving the speech quality of a couple of Codec 2 modes. I have two aims:

  1. Make the 700 bit/s codec sound better, to improve speech quality on low SNR HF channels (beneath 0dB).
  2. Develop a higher quality mode in the 2000 to 3000 bit/s range, that can be used on HF channels with modest SNRs (around 10dB)

I ran some numbers on the new OFDM modem and LDPC codes, and it turns out we can get 3000 bit/s of codec data through a 2000 Hz channel at down to 7dB SNR.

Now 3000 bit/s is broadband for me – I’ve spent years being very frugal with my bits while I play in low SNR HF land. However it’s still a bit low for Opus, which kicks in at 6000 bit/s. I can’t squeeze 6000 bit/s through a 2000 Hz RF channel without higher order QAM constellations, which means SNRs approaching 20dB.
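
As a back-of-the-envelope sanity check (my own sketch, not part of the original post), the Shannon capacity of a 2000 Hz channel at these SNRs shows why 3000 bit/s at 7dB is plausible with FEC, while 6000 bit/s at that SNR is not:

import math

def capacity(bandwidth_hz, snr_db):
    # Shannon-Hartley: C = B * log2(1 + SNR), with SNR converted from dB
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

print(f"{capacity(2000, 7):.0f} bit/s at 7dB")    # ~5200 bit/s, so 3000 bit/s of codec data fits
print(f"{capacity(2000, 20):.0f} bit/s at 20dB")  # ~13300 bit/s, the kind of headroom 6000 bit/s needs in practice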

So – what can I do with 3000 bit/s and Codec 2? I decided to try wideband(-ish) audio – the sort of audio bandwidth you get from Skype or AM broadcast radio. So I spent a few weeks modifying Codec 2 to work at 16 kHz sample rate, and Jean Marc gave me a few tips on using DCTs to code the bits.

It’s early days but here are a few samples:

  1. Original speech – Listen
  2. Codec 2 model, original amplitudes and phases – Listen
  3. Synthetic phase, one bit voicing, original amplitudes – Listen
  4. Synthetic phase, one bit voicing, amplitudes at 1800 bit/s – Listen
  5. Simulated analog SSB, 300-2600Hz BPF, 10dB SNR – Listen

Couple of interesting points:

  • Sample (2) is as good as Codec 2 can do: it’s the unquantised model parameters (harmonic phases and amplitudes). It’s all downhill from here as we quantise or toss away parameters.
  • In (3) I’m using a one bit voicing model; this is very much a vocoder approach and shouldn’t work this well. MBE/MELP all say you need mixed excitation. Exploring that conundrum would be a good Masters degree topic.
  • In (3) I can hear the pitch estimator making a few mistakes, e.g. around “sheet” on the female.
  • The extra 4kHz of audio bandwidth doesn’t take many more bits to encode, as the ear has a log frequency response. It’s maybe 20% more bits than 4kHz audio.
  • You can hear some words like “well” are muddy and indistinct in the 1800 bit/s sample (4). This usually means the formant (spectral) peaks are not well defined, so we might be tossing away a little too much information.
  • The clipping on the SSB sample (5) around the words “depth” and “hours” is an artifact of the PathSim AGC. But dat noise. It gets really fatiguing after a while.

Wideband audio is a big paradigm shift for Push To Talk (PTT) radio. You can’t do this with analog radio: 2000 Hz of RF bandwidth, 8000 Hz of audio bandwidth. I’m not aware of any wideband PTT radio systems – they all work at best 4000 Hz audio bandwidth. DVSI has a wideband codec, but at a much higher bit rate (8000 bits/s).

Current wideband codecs shoot for artifact-free speech (and indeed general audio signals like music). Codec 2 wideband will still have noticeable artifacts, and probably won’t like music. Big question is will end users prefer this over SSB, or say analog FM – at the same SNR? What will 8kHz audio sound like on your HT?

We shall see. I need to spend some time cleaning up the algorithms, chasing down a few bugs, and getting it all into C, but I plan to be testing over the air later this year.

Let me know if you want to help.

Play Along

Unquantised Codec 2 with 16 kHz sample rate:

$ ./c2sim ~/Desktop/c2_hd/speech_orig_16k.wav --Fs 16000 -o - | play -t raw -r 16000 -s -2 -

With “Phase 0” synthetic phase and 1 bit voicing:

$ ./c2sim ~/Desktop/c2_hd/speech_orig_16k.wav --Fs 16000 --phase0 --postfilter -o - | play -t raw -r 16000 -s -2 -

Links

FreeDV 2017 Road Map – this work is part of the “Codec 2 Quality” work package.

Codec 2 page – has an explanation of the way Codec 2 models speech with harmonic amplitudes and phases.

Guess the Artefact! – #2

Today’s Guess the Artefact! covers one of a set of artefacts which are often found confusing to recognise. We often get questions about these artefacts, from students and teachers alike, so here’s a chance to test your skills of observation. Remember – all heritage and archaeological material is covered by State or Federal legislation and should never be removed from its context. If possible, photograph the find in its context and then report it to your local museum or State Heritage body (the Dept of Environment and Heritage Protection in Qld; the Office of Environment and Heritage in NSW; the Dept of Environment, Planning and Sustainable Development in ACT; Heritage Victoria; the Dept of Environment, Water and Natural Resources in South Australia; the State Heritage Office in WA and the Heritage Council – Dept of Tourism and Culture in NT).

This artefact is made of stone. It measures about 12 x 8 x 3 cm. It fits easily and comfortably into an adult’s hand. The surface of the stone is mostly smooth and rounded; it looks a little like a river cobble. However, one side – the right-hand side in the photo above – is shaped so that 2 smooth sides meet in a straight, sharpish edge. Such formations do not occur on naturally rounded stones, which tells us that this was shaped by people and not just rounded in a river. The smoothed edges meeting in a sharp edge tell us that this is ground-stone technology. Ground stone technology is a technique used by people to create smooth, sharp edges on stones. People grind the stone against other rocks, occasionally using sand and water to facilitate the process, usually in a single direction. This forms a smooth surface which ends in a sharp edge.

Neolithic Axe

Ground stone technology is usually associated with the Neolithic period in Europe and Asia. In the northern hemisphere, this technology was primarily used by people who were learning to domesticate plants and animals. These early farmers learned to grind grains, such as wheat and barley, between two stones to make flour – thus breaking down the structure of the plant and making it easier to digest. Our modern mortar and pestle is a descendant of this process. Early farmers would have noticed that these actions produced smooth and sharp edges on the stones. These observations would have led them to apply this technique to other tools which they used and thus develop the ground-stone technology. Here (picture on right) we can see an Egyptian ground stone axe from the Neolithic period. The toolmaker has chosen an attractive red and white stone to make this axe-head.

In Japan this technology is much older than elsewhere in the northern hemisphere, and ground-stone axes have been found dating to 30,000 years ago during the Japanese Palaeolithic period. Until recently these were thought to be the oldest examples of ground-stone technology in the world. However, in 2016, Australian archaeologists Peter Hiscock, Sue O’Connor, Jane Balme and Tim Maloney reported, in an article in the journal Australian Archaeology, the finding of a tiny flake of stone (just over 1 cm long and 1/2 cm wide) from a ground stone axe, in layers dated to 44,000 to 49,000 years ago at the site of Carpenter’s Gap in the Kimberley region of north-west Australia. This tiny flake of stone – easily missed by anyone not paying close attention – is an excellent example of the extreme importance of ‘archaeological context’. Archaeological material that remains in its original context (known as in situ) can be dated accurately and associated with other material from the same layers, thus allowing us to understand more about the material. Anything removed from the context usually can not be dated and only very limited information can be learnt.

The find from the Kimberley makes Australia the oldest place in the world to have ground-stone technology. The tiny chip of stone, broken off a larger ground-stone artefact, probably an axe, was made by the ancestors of Aboriginal people in the millennia after they arrived on this continent. These early Australians did not practise agriculture, but they did eat various grains, which they learned to grind between stones to make flour. It is possible that whilst processing these grains they learned to grind stone tools as well. Our artefact, shown above, is undated. It was found, totally removed from its original context, stored under an old house in Brisbane. The artefact is useful as a teaching aid, allowing students to touch and hold a ground-stone axe made by Aboriginal people in Australia’s past. However, since it was removed from its original context at some point, we do not know how old it is, or even where it came from exactly.

Our artefact is a stone tool. Specifically, it is a ground stone axe, made using technology that dates back almost 50,000 years in Australia! These axes were usually made by rubbing a hard stone cobble against rocks by the side of a creek. Water from the creek was used as a lubricant, and often sand was added as an extra abrasive. The making of ground-stone axes often left long grooves in these rocks. These are called ‘grinding grooves’ and can still be found near some creeks in the landscape today, such as in Kuringai Chase National Park in Sydney. The ground-stone axes were usually hafted using sticks and lashings of plant fibre, to produce a tool that could be used for cutting vegetation or other uses. Other stone tools look different to the one shown above, especially those made by flaking stone; however, smooth stones should always be carefully examined in case they are also ground-stone artefacts and not just simple stones!

LUV Beginners July Meeting: Teaching programming using video games

Jul 15 2017 12:30
Jul 15 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Andrew Pam will be demonstrating a range of video games that run natively on Linux and explicitly include programming skills as part of the game including SpaceChem, InfiniFactory, TIS-100, Shenzen I/O, Else Heart.Break(), Hack 'n' Slash and Human Resource Machine.  He will seek feedback on the suitability of these games for teaching programming skills to non-programmers and the possibility of group play in a classroom or workshop setting.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

July 15, 2017 - 12:30

read more

LUV Main July 2017 Meeting

Jul 4 2017 18:30
Jul 4 2017 20:30
Location: 
The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

PLEASE NOTE NEW LOCATION

Tuesday, July 4, 2017
6:30 PM to 8:30 PM
The Dan O'Connell Hotel
225 Canning Street, Carlton VIC 3053

Speakers:

• To be announced

Come have a drink with us and talk about Linux.  If you have something cool to show, please bring it along!

The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

July 4, 2017 - 18:30

June 24, 2017

Duolingo Plus is Extremely Broken

After using Duolingo for over a year and accumulating almost 100,000 points I thought I would do the right thing and pay for the Plus service. It was exactly the right time, as I would be travelling overseas and the ability to do lessons offline and have them sync later seemed ideal.

For the first few days it seemed to be operating fine; I had downloaded the German tree and was working my way through it. Then I downloaded the French tree, and several problems started to emerge.

read more

June 22, 2017

Hire me!

tl;dr: I’ve recently moved to the San Francisco Bay Area, received my US Work Authorization, so now I’m looking for somewhere  to work. I have a résumé and an e-mail address!

I’ve worked a lot in Free and Open Source Software communities over the last five years, both in Australia and overseas. While much of my focus has been on the Python community, I’ve also worked more broadly in the Open Source world. I’ve been doing this community work entirely as a volunteer, most of the time working in full-time software engineering jobs which haven’t related to my work in the Open Source world.

It’s pretty clear that I want to move into a job where I can use the skills I’ve been volunteering for the last few years, and put them to good use both for my company, and for the communities I serve.

What I’m interested in doing fits best into a developer advocacy or community management sort of role. Working full-time on helping people in tech be better at what they do would be just wonderful. That said, my background is in code, and working in software engineering with a like-minded company would also be pretty exciting (better still if I get to write a lot of Python). In particular, I’m looking for:

  • Something with a strong developer relations element. I enjoy working with other developers, and I love having the opportunity to get them excited about things that I’m excited about. As a conference organiser, I’m very aware of the line between terrible marketing shilling, and genuine advocacy by and for developers: I want to help whoever I work for end up on the right side of that line.
  • Either in San Francisco, North of San Francisco, or Remote-Friendly. I live in Petaluma, a lovely town about 50 minutes north of San Francisco, with my wonderful partner, Josh. We’re pretty happy up here, but I’m happy to regularly commute as far as San Francisco. I’ll consider opportunities in other cities, but they’d need to primarily be remote.
  • Relevant to Open Source. The Open Source world is where my experience is, it’s where I know people, and it’s the world where I can be most credible. This doesn’t mean I need to be working on open source itself, but I’d love to be able to show up at OSCON or linux.conf.au and be excited to have my company’s name on my badge.

Why would I be good at this? I’ve been working on building and interacting with communities of developers, especially in the Free and Open Source Software world, for the last five years.

You can find a complete list of what I’ve done in my résumé, but here’s a selection of what I think’s notable:

  • Co-organised two editions of PyCon Australia, and led the linux.conf.au 2017 team. I’ve led PyCon AU, from inception, to bidding, to the successful execution for two years in a row. As the public face of PyCon AU, I made sure that the conference had the right people interested in speaking, and that we had many from the Australian Python community interested in attending. I took what I learned at PyCon AU and applied it to run linux.conf.au 2017, where our CFP attracted its largest ever response (beating the previous record by more than 30%).
  • Developed Registrasion, an open source conference ticket system. I designed and developed a ticket sales system that allowed for automation of the most significant time sinks that linux.conf.au and PyCon Australia registration staff had experienced in previous years. Registrasion was Open Sourced, and several other conferences are considering adopting it.
  • Given talks at countless open source and developer events, both in Australia, and overseas. I’ve presented at OSCON, PyCons in five countries, and myriad other conferences. I’ve presented on a whole lot of technical topics, and I’ve recently started talking more about the community-level projects I’ve been involved with.
  • Designed, ran, and grew PyCon Australia’s outreach and inclusion programmes. Each year, PyCon Australia has offered upwards of $10,000 (around 10% of conference budget) in grants to people who otherwise wouldn’t be able to attend the conference: this is not just speakers, but people whose presence would improve the conference just by being there. I’ve led a team to assess applications for these grants, and led our outreach efforts to make sure we find the right people to receive these grants.
  • Served as a council member for Linux Australia. Linux Australia is the peak body for Open Source communities in Australia, as well as underwriting the region’s more popular Open Source and Developer conferences. In particular, I led a project to design governance policies to help make sure the conferences we underwrite are properly budgeted and planned.

So, if you know of anything going at the moment, I’d love to hear about it. I’m reachable by e-mail (mail@chrisjrn.com) but you can also find me on Twitter (@chrisjrn), or if you really need to, LinkedIn.

June 18, 2017

HASS Additional Activities

OK, so you’ve got the core work covered for the term and now you have all those reports to write and admin to catch up on. Well, the OpenSTEM™ Understanding Our World® HASS plus Science material has heaps of activities which help students to practise core curricular skills and can keep students occupied. Here are some ideas:

 Aunt Madge’s Suitcase Activity

Aunt Madge

Aunt Madge is a perennial favourite with students of all ages. In this activity, students use clues to follow Aunt Madge around the world trying to return her forgotten suitcase. There’s a wide range of locations to choose from on every continent – both natural and constructed places. This activity can be tailored for group work, or the whole class, and by adjusting the number of locations to be found, the teacher can adjust to the available time, anywhere from 10-15 minutes to a whole lesson. Younger students enjoy matching the pictures of locations and trying to find the countries on the map. Older students can find out further information about the locations on the information sheets. Teachers can even choose a theme for the locations (such as “Ancient History” or “Aboriginal Places”) and see if students can guess what it is.

 Ancient Sailing Ships Activity

Sailing Ships (History + Science)

Students in Years 3 to 6 have undertaken the Ancient Sailing Ships activity this term, however, there is a vast scope for additional aspects to this activity. Have students compared the performance of square-rigged versus lateen sails? How about varying the number of masts? Have students raced the vessels against each other? (a water trough and a fan is all that’s needed for some exciting races) Teachers can encourage the students to examine the effects of other changes to ship design, such as adding a keel or any other innovations students can come up with, which can be tested. Perhaps classes or grades can even race their ships against each other.

Trade and Barter Activity

Students in years 5 and 6 in particular enjoy the Trade and Barter activity, which teaches them the basics of Economics without them even realising it! This activity covers so many different aspects of the curriculum, that it is always a good one to revisit, even though it was not in this term’s units. Students enjoy the challenge and will find the activity different each time. It is a particularly good choice for a large chunk of time, or for smaller groups; perhaps a more experienced group can coach other students. The section of the activity which has students developing their own system of writing is one that lends itself to extension and can even be spun off as a separate activity.

Games from the Past

Kids Playing Tag

Students of all ages enjoy many of the games listed in the resource Games From The Past. Several of these games are best done whilst running around outside, so if that is an option, then choose from the Aboriginal, Chinese or Zulu games. Many of these games can be played by large groups. Older students might like to try recreating some of the rules for some of the games of Ancient Egypt or the Aztecs. If this resource wasn’t part of the resources for your particular unit, it can be downloaded from the OpenSTEM™ site directly.

 

Class Discussions

The b) and c) sections of the Teacher Handbooks contain suggestions for topics of discussion – such as Women Explorers or global citizenship, or ideas for drawings that the students can do. These can also be undertaken as additional activities. Teachers could divide students into groups to research and explore particular aspects of these topics, or stage debates, allowing students to practise persuasive writing skills as well.

OpenSTEM A0 world map: Country Outlines and Ice Age Coastline

Adding events to a timeline, or the class calendar, is also a good way to practise core skills.

The OpenSTEM™ Our World map is used as the perfect complement to many of the Understanding Our World® units. This map comes blank and country names are added to the map during activities. The end of term is also a good chance for students to continue adding country names to the map. These can be cut out of the resource World Countries, which supplies the names in a suitable font size. Students can use the resource World Maps to match the country names to their locations.

We hope you find these suggestions useful!

Enjoy the winter holidays – not too long now to a nice, cosy break!

June 11, 2017

This Week in HASS – term 2, week 9

The OpenSTEM™ Understanding Our World® units have only 9 weeks per term, so this is the last week! Our youngest students are looking at some Aboriginal Places; slightly older students are thinking about what their school and local area were like when their parents and grandparents were children; and students in years 3 to 6 are completing their presentations and anything else that might be outstanding from the term.

Foundation/Prep/Kindy

Students in the stand-alone Foundation/Prep/Kindy class (Unit F.2) examine Aboriginal Places this week. Students examine which places are special to Aboriginal people, and how these places should be cared for by Aboriginal people and the broader community. Several of the Australian places in the Aunt Madge’s Suitcase Activity can be used to support this discussion in the classroom. Students in an integrated Foundation/Prep/Kindy and Year 1 class (Unit F.6), as well as Year 1 (Unit 1.2), 2 (Unit 2.2) and 3 (Unit 3.2) students consider life in the times of their parents and grandparents, with specific reference to their school, or the local area studied during this unit. Teachers may wish to invite older members of the community (including interested parents and/or grandparents) in to the class to describe their memories of the area in former years. Were any of them past students of the school? This is a great opportunity for students to come up with their own questions about life in past times.

Years 3 to 6

Aunt Madge

Students in Year 3 (Unit 3.6), 4 (Unit 4.2), 5 (Unit 5.2) and 6 (Unit 6.2) are finishing off their presentations and any outstanding work this week. Sometimes the middle of term can be very rushed and so it’s always good to have some breathing space at the end to catch up on anything that might have been squeezed out before. For those classes where everyone is up-to-date and looking for extra activities, the Aunt Madge’s Suitcase Activity is always popular with students and can be used to support their learning. Teachers may wish to select a range of destinations appropriate to the work covered during the term and encourage students to think about how those destinations relate to the material covered in class. Destinations may be selected by continent or theme – e.g. natural places or historical sites. A further advantage of Aunt Madge is that the activity can be tailored to fit the available time – from 5 or 10 minutes for a single destination, to 45 minutes or more for a full selection; and played in groups, or as a whole class, allowing some students to undertake the activity while other students may be catching up on other work. Students may also wish to revisit aspects of the Ancient Sailing Ships Activity and expand on their investigations.

Although this is the last week of this term’s units, we will have some more suggestions for extra activities next week – particularly those that keep the students busy while teachers attend to marking or compiling of reports.

Mysterious 400 Bad Request in Django debug mode

While upgrading Libravatar to a more recent version of Django, I ran into a mysterious 400 error.

In debug mode, my site was working fine, but with DEBUG = False, I would only get a page containing this error:

Bad Request (400)

with no extra details in the web server logs.

Turning on extra error logging

To see the full error message, I configured logging to a file by adding this to settings.py:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/tmp/debug.log',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}

Then I got the following error message:

Invalid HTTP_HOST header: 'www.example.com'. You may need to add u'www.example.com' to ALLOWED_HOSTS.

Temporary hack

Sure enough, putting this in settings.py would make it work outside of debug mode:

ALLOWED_HOSTS = ['*']

which means that there's a mismatch between the HTTP_HOST from Apache and the one that Django expects.

Root cause

The underlying problem was that the Libravatar config file was missing the square brackets around the ALLOWED_HOSTS setting.

I had this:

ALLOWED_HOSTS = 'www.example.com'

instead of:

ALLOWED_HOSTS = ['www.example.com']
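
For the curious, here is a small sketch (not from the original post) of why the bare string fails: Django iterates ALLOWED_HOSTS as a sequence of patterns, so a plain string is treated as its individual characters, none of which can ever match the incoming Host header. Using Django's internal validate_host helper, e.g. in a Django shell:

from django.http.request import validate_host

# a bare string is iterated character by character: 'w', 'w', 'w', '.', ...
print(validate_host('www.example.com', 'www.example.com'))    # False
print(validate_host('www.example.com', ['www.example.com']))  # True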

June 09, 2017

LUV Beginners June Meeting: Debian 9 release party!

Jun 17 2017 12:30
Jun 17 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Debian Linux version 9 (codename "Stretch") is scheduled for release on 17 June 2017.  Join us in celebrating the release and assisting anyone who would like to install or upgrade to the new version!


There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

June 17, 2017 - 12:30

June 08, 2017

Heredocs with Gaussian and Slurm

Gaussian is a well-known computational chemistry package, and is sometimes subject to debate over its license (e.g., the terms state that researchers who develop competing software packages are not permitted to use the software, compare performance, etc.). Whilst I have some strong opinions about such a license, those will be elaborated at another time. The purpose here is to illustrate the use of heredocs with Slurm.

read more

June 06, 2017

Applied PKCS#11

The most involved thing I’ve had to learn this year is how to actually use PKCS #11 to talk to crypto hardware. It’s actually not that clear. Most of the examples are buried in random bits of C from vendors like Oracle or IBM, and the spec itself is pretty dense, especially when it comes to understanding how you actually use it, and what all the bits and pieces do.

In honour of our Prime Minister saying he should have NOBUS access into our cryptography, which is why we should all start using hardware encryption modules (did you know you can use your TPM?), and thus in order to save the next girl 6 months of poking around on a piece of hardware she doesn’t really *get*, I started a document: Applied PKCS#11.

The later sections refer to the API exposed by python-pkcs11, but the first part is generally relevant. Hopefully it makes sense, I’m super keen to get feedback if I’ve made any huge logical leaps etc.

June 05, 2017

LUV Main June 2017 Meeting

Jun 6 2017 18:30
Jun 6 2017 20:30
Location: 
The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

PLEASE NOTE NEW LOCATION

Tuesday, June 6, 2017
6:30 PM to 8:30 PM
The Dan O'Connell Hotel
225 Canning Street, Carlton VIC 3053

Speakers:

• To be announced

Come have a drink with us and talk about Linux.  If you have something cool to show, please bring it along!

The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

June 6, 2017 - 18:30

June 04, 2017

This Week in HASS – term 2, week 8

This week we are starting into the last stretch of the term. Students are well into their final sections of work. Our youngest students are thinking about how we care for places, slightly older students are displaying their posters and older students are giving their presentations.

Foundation/Prep/Kindy to Year 3

Our youngest students doing the stand-alone Foundation/Prep/Kindy unit (F.2) are thinking about how we look after different places this week. Students in integrated Foundation/Prep/Kindy and Year 1 classes, doing Unit F.6, are displaying their posters on an issue in their local environment. These posters were prepared in preceding weeks and can now be displayed either at school or in a local library or hall. The teacher may choose to invite parents to view the posters as well. Students in Years 1 (Unit 1.2), 2 (Unit 2.2) and 3 (Unit 3.2) also have posters to display on a range of issues, either at the school, in a local place, such as a park, or even a local heritage place. Discussions around points of view and the intended audience of the posters can help students to gain a more in-depth understanding and critique their own work.

Years 3 to 6

Students in Years 3 (Unit 3.6), 4 (Unit 4.2), 5 (Unit 5.2) and 6 (Unit 6.2) are in the second of 3 weeks set aside for their presentations. The presentations cover a significant body of work and thus 3 weeks of lessons are set aside for the presentations, as well as for finishing any other sections of work not yet completed. Year 3 students are considering extreme climate areas of Australia and other parts of the world, such as the Sahara Desert, Arctic and Antarctica and Mount Everest, by studying explorers such as Edmund Hillary and Tenzing Norgay, Robert Scott and Pawel Strzelecki. Year 4 students are studying explorers and the environments and animals of Africa and South America, such as Francisco Pizarro, the Giant Vampire Bat, Vasco Da Gama and the Cape Lion. Year 5 students are studying explorers, environments and animals of North America, such as Henry Hudson, Hernando de Soto and the Great Auk. Year 6 students are studying explorers, environments and indigenous peoples of Asia, such as Vitus Bering, Zheng He, Marco Polo, the Mongols and the Rus.

Speaking in June 2017

I will be at several events in June 2017:

  • db tech showcase 2017 – 16-17 June 2017 – Tokyo, Japan. I’m giving a talk about best practices around MySQL High Availability.
  • O’Reilly Velocity 2017 – 19-22 June 2017 – San Jose, California, USA. I’m giving a tutorial about best practices around MySQL High Availability. Use code CC20 for a 20% discount.

I look forward to meeting with you at either of these events, to discuss all things MySQL (High Availability, security, cloud, etc.), and how Percona can help you.

As I write this, I’m in Budva, Montenegro, for the Percona engineering meeting.

Six is the magic number

I have talked about controlling robot arms with 4 or 5 motors and the maths involved in turning a desired x,y,z target into servo angles. Things get a little too interesting with 6 motors, as you end up with a great many possible solutions to a positioning problem and need to work out a 'best' choice.


So I finally got MoveIt! to work to control a six motor arm using ROS. I now also know that using MoveIt on lower order arms isn't going to give you much love. Six is the magic number (plus claw motor) to get things working and patience is your best friend in getting the configuration and software setup going.

This was great as MoveIt was the last corner of the ROS stack that I hadn't managed to get to work for me. The great part is that the knowledge I gained playing with MoveIt will carry over to larger, more accurate and more expensive robot arms.

June 03, 2017

The Why and How of HPC-Cloud Hybrids with OpenStack

High performance computing and cloud computing have traditionally been seen as separate solutions to separate problems, dealing with issues of performance and flexibility respectively. In a diverse research environment however, both sets of compute requirements can occur. In addition to the administrative benefits in combining both requirements into a single unified system, opportunities are provided for incremental expansion.

read more

June 01, 2017

Update on python-pkcs11

I spent a bit of time fleshing out the support matrix for python-pkcs11 and getting things that aren’t SoftHSM into CI for integration testing (there’s still no one-command rollout for BuildBot connected to GitHub, but I got there in the end).

The nice folks at Nitrokey are also sending me some devices to widen the compatibility matrix. Also happy to make it work with CloudHSM if someone at Amazon wants to hook me up!

I also put together API docs that hopefully help to explain how to actually use the thing and added support for RFC3279 to pyasn1_modules (so you can encode your elliptic curve parameters).

Next goal is to open up my Django HSM integrations to add encrypted database fields, encrypted file storage and various other offloads onto the HSM. Also look at supporting certificate objects for all that wonderful stuff.

May 29, 2017

Fedora 25 + Lenovo X1 Carbon 4th Gen + OneLink+ Dock

As of May 29th 2017, if you want to do something crazy like use *both* ports of the OneLink+ dock to use monitors that aren’t 640×480 (but aren’t 4k), you’re going to need a 4.11 kernel, as everything else (for example 4.10.17, which is the latest in Fedora 25 at time of writing) will end you in a world of horrible, horrible pain.

To install, run this:

sudo dnf install \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/kernel-4.11.3-200.fc25.x86_64.rpm \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/kernel-core-4.11.3-200.fc25.x86_64.rpm \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/kernel-cross-headers-4.11.3-200.fc25.x86_64.rpm \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/kernel-devel-4.11.3-200.fc25.x86_64.rpm \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/kernel-modules-4.11.3-200.fc25.x86_64.rpm \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/kernel-tools-4.11.3-200.fc25.x86_64.rpm \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/kernel-tools-libs-4.11.3-200.fc25.x86_64.rpm \
https://kojipkgs.fedoraproject.org//packages/kernel/4.11.3/200.fc25/x86_64/perf-4.11.3-200.fc25.x86_64.rpm

This grabs a kernel that’s sitting in testing and isn’t yet in the main repositories. However, I can now see things on monitors, rather than 0 to 1 monitor (most often 0). You can also dock/undock and everything doesn’t crash in a pile of fail.

I remember a time when you could fairly reliably buy Intel hardware and have it “just work” with the latest distros. It’s unfortunate that this is no longer the case, and it’s more of a case of “wait six months and you’ll still have problems”.

Urgh.

(at least Wayland and X were bug for bug compatible?)

Configuring docker to use rexray and Ceph for persistent storage

For various reasons I wanted to play with docker containers backed by persistent Ceph storage. rexray seemed like the way to do that, so here are my notes on getting that working...

First off, I needed to install rexray:

    root@labosa:~/rexray# curl -sSL https://dl.bintray.com/emccode/rexray/install | sh
    Selecting previously unselected package rexray.
    (Reading database ... 177547 files and directories currently installed.)
    Preparing to unpack rexray_0.9.0-1_amd64.deb ...
    Unpacking rexray (0.9.0-1) ...
    Setting up rexray (0.9.0-1) ...
    
    rexray has been installed to /usr/bin/rexray
    
    REX-Ray
    -------
    Binary: /usr/bin/rexray
    Flavor: client+agent+controller
    SemVer: 0.9.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    Formed: Thu, 04 May 2017 07:38:11 AEST
    
    libStorage
    ----------
    SemVer: 0.6.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    Formed: Thu, 04 May 2017 07:36:11 AEST
    


Which is of course horrid. What that script seems to have done is install a deb'd version of rexray based on an alien'd package:

    root@labosa:~/rexray# dpkg -s rexray
    Package: rexray
    Status: install ok installed
    Priority: extra
    Section: alien
    Installed-Size: 36140
    Maintainer: Travis CI User <travis@testing-gce-7fbf00fc-f7cd-4e37-a584-810c64fdeeb1>
    Architecture: amd64
    Version: 0.9.0-1
    Depends: libc6 (>= 2.3.2)
    Description: Tool for managing remote & local storage.
     A guest based storage introspection tool that
     allows local visibility and management from cloud
     and storage platforms.
     .
     (Converted from a rpm package by alien version 8.86.)
    


If I was building anything more than a test environment I think I'd want to do a better job of installing rexray than this, so you've been warned.

Next to configure rexray to use Ceph. The configuration details are cunningly hidden in the libstorage docs, and aren't mentioned at all in the rexray docs, so you probably want to take a look at the libstorage docs on ceph. First off, we need to install the ceph tools, and copy the ceph authentication information from the ceph cluster we installed using openstack-ansible earlier.

    root@labosa:/etc# apt-get install ceph-common
    root@labosa:/etc# scp -rp 172.29.239.114:/etc/ceph .
    The authenticity of host '172.29.239.114 (172.29.239.114)' can't be established.
    ECDSA key fingerprint is SHA256:SA6U2fuXyVbsVJIoCEHL+qlQ3xEIda/MDOnHOZbgtnE.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '172.29.239.114' (ECDSA) to the list of known hosts.
    rbdmap                       100%   92     0.1KB/s   00:00    
    ceph.conf                    100%  681     0.7KB/s   00:00    
    ceph.client.admin.keyring    100%   63     0.1KB/s   00:00    
    ceph.client.glance.keyring   100%   64     0.1KB/s   00:00    
    ceph.client.cinder.keyring   100%   64     0.1KB/s   00:00    
    ceph.client.cinder-backup.keyring   71     0.1KB/s   00:00  
    root@labosa:/etc# modprobe rbd
    


You also need to configure rexray. My first attempt looked like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: ceph
    


And the rexray output sure made it look like it worked...

    root@labosa:/etc# rexray service start
    ● rexray.service - rexray
       Loaded: loaded (/etc/systemd/system/rexray.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2017-05-29 10:14:07 AEST; 33ms ago
     Main PID: 477423 (rexray)
        Tasks: 5
       Memory: 1.5M
          CPU: 9ms
       CGroup: /system.slice/rexray.service
               └─477423 /usr/bin/rexray start -f
    
    May 29 10:14:07 labosa systemd[1]: Started rexray.
    


Which looked good, but /var/log/syslog said:

    May 29 10:14:08 labosa rexray[477423]: REX-Ray
    May 29 10:14:08 labosa rexray[477423]: -------
    May 29 10:14:08 labosa rexray[477423]: Binary: /usr/bin/rexray
    May 29 10:14:08 labosa rexray[477423]: Flavor: client+agent+controller
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.9.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
    May 29 10:14:08 labosa rexray[477423]: Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:38:11 AEST
    May 29 10:14:08 labosa rexray[477423]: libStorage
    May 29 10:14:08 labosa rexray[477423]: ----------
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.6.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
    May 29 10:14:08 labosa rexray[477423]: Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:36:11 AEST
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="error starting libStorage server" error.driver=ceph time=1496016848215
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="default module(s) failed to initialize" error.driver=ceph time=1496016848216
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="daemon failed to initialize" error.driver=ceph time=1496016848216
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="error starting rex-ray" error.driver=ceph time=1496016848216
    


That's because the service is called rbd it seems. So, the config file ended up looking like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: rbd
    
    rbd:
      defaultPool: rbd
    


Now to install docker:

    root@labosa:/var/log# sudo apt-get update
    root@labosa:/var/log# sudo apt-get install linux-image-extra-$(uname -r) \
        linux-image-extra-virtual
    root@labosa:/var/log# sudo apt-get install apt-transport-https \
        ca-certificates curl software-properties-common
    root@labosa:/var/log# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    root@labosa:/var/log# sudo add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) \
        stable"
    root@labosa:/var/log# sudo apt-get update
    root@labosa:/var/log# sudo apt-get install docker-ce
    


Now let's make a rexray volume.

    root@labosa:/var/log# rexray volume ls
    ID  Name  Status  Size
    root@labosa:/var/log# docker volume create --driver=rexray --name=mysql \
        --opt=size=1
    A size of 1 here means 1gb
    mysql
    root@labosa:/var/log# rexray volume ls
    ID         Name   Status     Size
    rbd.mysql  mysql  available  1
    


Let's start the container.

    root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
        -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
    Unable to find image 'mysql:latest' locally
    latest: Pulling from library/mysql
    10a267c67f42: Pull complete 
    c2dcc7bb2a88: Pull complete 
    17e7a0445698: Pull complete 
    9a61839a176f: Pull complete 
    a1033d2f1825: Pull complete 
    0d6792140dcc: Pull complete 
    cd3adf03d6e6: Pull complete 
    d79d216fd92b: Pull complete 
    b3c25bdeb4f4: Pull complete 
    02556e8f331f: Pull complete 
    4bed508a9e77: Pull complete 
    Digest: sha256:2f4b1900c0ee53f344564db8d85733bd8d70b0a78cd00e6d92dc107224fc84a5
    Status: Downloaded newer image for mysql:latest
    ccc251e6322dac504e978f4b95b3787517500de61eb251017cc0b7fd878c190b
    


And now to prove that persistence works and that there's nothing up my sleeve...
    root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
        sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" \
        -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.7.18 MySQL Community Server (GPL)
    
    Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
    
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    | sys                |
    +--------------------+
    4 rows in set (0.00 sec)
    
    mysql> create database demo;
    Query OK, 1 row affected (0.03 sec)
    
    mysql> use demo;
    Database changed
    mysql> create table foo(val char(5));
    Query OK, 0 rows affected (0.14 sec)
    
    mysql> insert into foo(val) values ('a'), ('b'), ('c');
    Query OK, 3 rows affected (0.08 sec)
    Records: 3  Duplicates: 0  Warnings: 0
    
    mysql> select * from foo;
    +------+
    | val  |
    +------+
    | a    |
    | b    |
    | c    |
    +------+
    3 rows in set (0.00 sec)
    


Now let's re-create the container and prove the data remains.

    root@labosa:/var/log# docker stop some-mysql
    some-mysql
    root@labosa:/var/log# docker rm some-mysql
    some-mysql
    root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
        -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
    99a7ccae1ad1865eb1bcc8c757251903dd2f1ac7d3ce4e365b5cdf94f539fe05
    
    root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
        sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -\
        P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.7.18 MySQL Community Server (GPL)
    
    Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
    
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    mysql> use demo;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A
    
    Database changed
    mysql> select * from foo;
    +------+
    | val  |
    +------+
    | a    |
    | b    |
    | c    |
    +------+
    3 rows in set (0.00 sec)
    
So there you go.

Tags for this post: docker ceph rbd rexray
Related posts: So you want to setup a Ceph dev environment using OSA; Juno nova mid-cycle meetup summary: containers


LilacSat-1 Codec 2 in Space!

On May 25th LilacSat-1 was launched from the ISS. The exciting news is that it contains an analog FM to Codec 2 repeater. I’ve been in touch with Wei Mingchuan, BG2BHC, during the development phase, and it’s wonderful to see the satellite in orbit. He reports that some Hams have had preliminary contacts.

The LilacSat-1 team have developed their own waveform that uses a convolutional code running over BPSK at 9600 bit/s. Wei reports an MDS of about -127 dBm on a USRP B210 SDR, which is quite respectable and much better than analog FM. GNU Radio modules are available to support reception. I think it’s great that Wei and team have used open source (including Codec 2) to develop their own novel systems, in this case a hybrid FM/digital system with custom FEC and modulation.

Now I need to get organised with some local hams and find out how to work this satellite myself!

Part 2 – Making a LilacSat-1 Contact

On Saturday 3 June 2017 Mark VK5QI, Andy VK5AKH and I just made our first LilacSat-1 contact at 12:36 local time on a lovely sunny winter day here in Adelaide! Mark did a fine job setting up a receive station in his car, and Andy put together the video below showing both ends of the conversation:

The VHF tx and UHF rx stations were only 20m apart but the path to LilacSat-1 was about 400km each way. Plenty of signal as you can see from the error free scatter diagram.

I’m fairly sure there is something wrong with the audio (perhaps levels into the codec), as the decoded Codec 2 1300 bit/s signal is quite distorted. I can also hear similar distortion on other LilacSat-1 contacts I have listened to.

Let me show you what I mean. Here is a sample of my voice from LilacSat-1, and another sample of my voice that I encoded locally using the Codec 2 c2enc/c2dec command line tools.

There is a clue in this QSO – one end of the contact is much clearer than the other:

I’ll take a closer look at the Codec 2 bit stream from the satellite over the next few days to see if I can spot any issues.

Well done to the LilacSat-1 team – quite a thrill for me to send my own voice through my own codec into space and back!

Part 3 – Level Analysis

Sunday morning 4 June after a cup of coffee! I added a little bit of code to codec2.c:codec2_decode_1300() to dump the energy quantiser levels:

    e_index = unpack_natural_or_gray(bits, &nbit, E_BITS, c2->gray);
    e[3] = decode_energy(e_index, E_BITS);
    fprintf(stderr, "%d %f\n", e_index, e[3]);

The energy of the current frame is encoded as a 5 bit binary number. It’s effectively the “AF gain” or “volume” of the current 40ms frame of speech. We unpack the bits and use a look up table to get the actual energy.
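
To make that concrete, here is a toy Python version of the idea: a 5 bit index selects one of 32 log-spaced energy levels. The range and step used by the real Codec 2 tables are different, so the numbers below are purely illustrative.

    # Toy illustration of a 5 bit log-spaced energy quantiser.
    # The real Codec 2 decode_energy() uses its own range and tables.
    E_BITS = 5                          # 5 bits -> 32 levels
    E_MIN_DB, E_MAX_DB = -10.0, 40.0    # assumed range, for illustration only

    def decode_energy(e_index, e_bits=E_BITS):
        """Map a quantiser index back to a linear energy value."""
        step = (E_MAX_DB - E_MIN_DB) / (1 << e_bits)
        e_db = E_MIN_DB + step * e_index
        return 10.0 ** (e_db / 10.0)

    for idx in (0, 15, 31):
        print(idx, decode_energy(idx))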

We can then run the Codec 2 command line decoder with the LilacSat-1 Codec 2 data Mark captured yesterday to extract a file of energies:

./c2dec 1300 ~/Desktop/LilacSat-1/lilacsat_dgr.c2 - 2>lilacsat1_energy.txt | play -t raw -r 8000 -s -2 - trim 30 6

The lilacsat1_energy.txt file contains the energy quantiser index and decoded energy in a table (matrix) that I can load into Octave and plot. I also ran the same test on the reference cq_freedv file used in Part 2 above:

So the top plot is the input speech “cq freedv ….”, and the middle plot the resulting energy quantiser index values. The energy bounces about with the level of the input speech. Now the bottom plot is from the LilacSat-1 sample. It is “red lined” – hard up against the upper limits of the quantiser. This could explain the audio distortion we are hearing.
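
If you want to poke at the same data without Octave, a rough Python equivalent of the plot is sketched below. It assumes the two-column "index energy" text format written by the fprintf above, with one 40ms frame per line.

    # Rough sketch: plot the energy quantiser index against time.
    # Assumes lilacsat1_energy.txt has one "<index> <energy>" pair per 40ms frame.
    import numpy as np
    import matplotlib.pyplot as plt

    data = np.loadtxt("lilacsat1_energy.txt")
    e_index = data[:, 0]
    t = np.arange(len(e_index)) * 0.040      # 40ms Codec 2 1300 frames

    plt.plot(t, e_index)
    plt.ylim(0, 31)                          # 5 bit quantiser range
    plt.xlabel("Time (s)")
    plt.ylabel("Energy quantiser index")
    plt.show()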

Wei emailed me overnight and other Hams (e.g. Bob N6RFM) have discovered that reducing the Mic gain on the uplink FM radios indeed improves the audio quality. Wei is looking into in-flight adjustments of the gain between the FM rx and Codec 2 tx on LilacSat-1.

Note to self – I should look into quantiser ranges to make Codec 2 robust to people driving it with different levels.

Part 4 – Some Improvements

Sunday morning 4 June 11:36am pass: Mark set up his VHF tx in my car, and we played the cq_freedv canned wave file using a laptop and signalink so we could easily vary the tx drive:

Fortunately I have plenty of power available in my Electric Vehicle – we just tapped across 13.2V worth of Lithium cells in the rear pack:

We achieved better results, but not quite as good as using the source file directly without a journey through the VHF FM uplink:

LilacSat-1 3 June high mic gain

LilacSat-1 4 June low mic gain

encoded locally (no VHF FM uplink)

There is still quite a lot of noise on the decoded audio, probably from the VHF uplink. Codec 2 performs poorly in the presence of high levels of background noise. As we are under-deviating, the SNR of the FM uplink will be reduced, further increasing noise. However Wei has just emailed me that his team is reducing the “AF gain” between the VHF rx and Codec 2 on LilacSat-1 so we should hear some improvements on the next few passes.

Note to self #2 – add some noise reduction inside of Codec 2 to make it more robust to different input signal conditions.

Links

The LilacSat-1 page has links to GNU Radio modules that can be used to receive signals from the satellite.

Mark, VK5QI, describes his car’s exotic antenna system and how it was used on today’s LilacSat-1 contact.

LilacSat-1 HowTo, Mark and I have documented the set up procedure for LilacSat-1, and written some scripts to help automate the process.

May 28, 2017

This Week in HASS – term 2, week 7

This week students are starting to round off their main body of assessable work for the term. Older students are completing and starting to present their presentations, while younger students have posters and models to finish off.

Foundation/Prep/Kindy to Year 3

Students in our stand-alone Foundation/Prep/Kindy unit (F.2) are continuing to explore with their senses this week. While still working on their model or collage for their Favourite Place, they are using their sense of Smell to consider which aromas they like or dislike. Teachers (and students) can bring in a range of things with different smells to explore in class. Ideas for these are given in the Teacher’s Handbook. An important part of this investigation is considering how one can represent one’s favourite smells in the model or collage – students might try to draw the objects associated with the smells, or see if they can find creative alternatives to represent this sense.

Students in integrated Foundation/Prep/Kindy and Year 1 classes (Unit F.6) and those in Years 1 (Unit 1.2), 2 (Unit 2.2) and 3 (Unit 3.2) are completing their posters to be displayed next week. These posters cover topics of local significance – either local history information, or dealing with issues, such as littering or the need for play equipment. As the students work on the posters, teachers are holding discussions with them on responsibility for different issues. The delegation of responsibility to members of the community, local government, other authorities, people who use the facilities, the school P&C, the students etc. should be discussed in class, so that students start to understand how people have different responsibilities in different situations. The teacher can also revisit issues of responsibility in the classroom – what are the students responsible for? What is the teacher responsible for? What is the school responsible for? These discussions are an important means of allowing the students to interact and practise group discussion skills, as well as helping them to think about responsibilities.

Years 3 to 6

Students in Year 3 (Unit 3.6) are completing their presentation on an extreme climate explorer and may start presenting it to the class this week. Year 4 students (Unit 4.2) are presenting on their explorer of Africa and South America. Year 5 students (Unit 5.2) are presenting on their chosen explorer from North America; and Year 6 students (Unit 6.2) are presenting on their chosen explorer from Asia. The remaining 3 weeks of this unit are allocated to the presentations, to ensure enough time for these to be given in full. The presentations should cover all the aspects raised over previous weeks and answered in the Student Workbook – the environments and geography of the areas explored; sustainability issues, such as extinction of animals and changes in local environments; characteristics of the countries involved in the explorations; reasons for explorations and how these created the background which led to the settlement of Australia and the role of indigenous people, as well as impact on indigenous people and their environments. The presentation is thus a comprehensive body of work.

 

FreeDV 2017 Road Map

Half way through the year, but I thought I’d better write down some plans anyway! It helps me organise my thoughts and minimise the tangential work. The main goal for 2017 is a FreeDV mode that is competitive with SSB at low SNRs on HF channels. But first, let’s see what happened in 2016….

Achievements in 2016

Here is the 2016 Roadmap. Reviewing it, we actually made good progress on a bunch of the planned activities:

  • Brady O’Brien (KC9TPA) worked with me to develop the FreeDV 2400A and 2400B modes.
  • Fine progress on the SM2000 project, summarised nicely in my Gippstech 2016 SM2000 talk. Thanks in particular to Brady, Rick (KA8BMA) and Neil (VK5KA).
  • The open telemetry work provided some key components for the Wenet system for high speed SSDV images from High Altitude Balloons. This work spun out of FSK modem development by Brady and myself, combined with powerful LDPC FEC codes from VK5DSP, with lots of work by Mark VK5QI, and AREG club members. It operates close to the limits of physics: with 100mW signals we transmit HD images over 100km at 100 kbit/s using a $20 SDR and a good LNA. Many AREG members have set up Wenet receive stations using $100 roadkill laptops refurbished with Linux. It leaves commercial telemetry chip-sets in the dust, about 10dB behind us in terms of performance.
  • Ongoing FreeDV outreach via AREG FreeDV broadcasts, attending conferences and Hamfests. Thank you to all those who promote and use FreeDV.

FreeDV 2017 Roadmap

Codec 2 700C is the breakthrough I have been waiting for. Communications quality, conversational speech at just 700 bit/s, and even on a rough first pass it outperforms MELPe at 600 bit/s. Having a viable codec at 700 bit/s lets us consider powerful LDPC FEC codes in the 2000Hz SSB type bandwidths I’m targeting, which has led to a new OFDM modem and the emerging FreeDV 700D mode. I now feel comfortable that I can reach the goal of sub zero dB SNR digital speech that exceeds SSB in quality.

So here is the 2017 roadmap. Partially shaded work packages are partially complete. The pink work packages are ongoing activities rather than project based:

Rather than push FreeDV 700D straight out, I have decided to have another iteration at Codec 2 quality, using the Codec 2 700C algorithms as a starting point. The FreeDV 700D work has suggested we can use latency to overcome the HF channel, which means frames of several hundred ms to several seconds. By exploring correlation over longer Codec 2 frames we can achieve lower bit rates (e.g. sub 400 bit/s) or get better voice quality at 700 bit/s and above.

I’ve been knocking myself out to get good results at low SNRs. However many HF, and indeed VHF/UHF, PTT radio conversations take place at SNRs of greater than 10dB. This allows us to support higher bit rate codecs, and achieve better speech quality. For example, moving from 0dB to 10dB means 10 times the bit rate at the same Bit Error Rate (BER), since 10dB is ten times the power and, for a fixed energy per bit, the achievable bit rate scales with power. The OFDM modem will allow us to pack up to 4000 bit/s into a 2000 Hz SSB channel.

The algorithms that work so well for Codec 2 700C can be used to increase the quality at higher bit rates. So the goal of the “Codec 2 Quality” work package is to (i) improve the quality at 700-ish bit/s, and (ii) come up with a Codec 2 mode that improves on the speech quality of Codec 2 1300 (as used in the FreeDV 1600 mode) at 2000-ish bit/s.

After the Codec 2 quality improvement I will port the new algorithms to C, release and tune on the FreeDV GUI program, then port to the SM1000. Fortunately the new OFDM modem is simpler in terms of memory and computation than the COHPSK modem used for FreeDV 700C. We have an option to use a short LDPC code (224 bits) which, with a little work, will run OK on the SM1000.

Putting it all Together

The outputs will be a low SNR mode competitive with SSB at low SNR, and (hopefully) a high SNR mode that sounds better than SSB at medium to high SNRs. It will be available as a free software download (the FreeDV GUI program), an embedded stand-alone product (the SM1000), and as a C library (the FreeDV API).

Then we can get back to VHF/UHF work and the SM2000 project.

Help Wanted

When will this all happen? Much sooner if you help!

I’m a busy guy, making steady progress in the field of open source digital radio. While I appreciate your ideas, and enjoy brainstorming as much as the next person, what I really want are your patches, and consistent week in/week out effort. If you can code in C and/or have/are willing to learn a little GNU Octave, there is plenty of work to be done in SM1000 maintenance, a port of the OFDM modem to C, SM1000 hardware/software maintenance, and FreeDV GUI program refactoring and maintenance. Email me.

Links

FreeDV 2016 Roadmap. Promises, promises……
FreeDV 2400A and 2400B modes
SM2000 – First post introducing the SM2000 project
Gippstech 2016 SM2000 talk – Good summary of SM2000 project to date
LowSNR site – from Bill (VK5DSP) modem guru and LDPC code-smith
Horus 39 – Fantastic High Speed SSDV Images – Good summary of Wenet blog posts and modem technology
SM1000 Digital Voice Adaptor
AREG FreeDV broadcasts

So you want to setup a Ceph dev environment using OSA

Support for installing and configuring Ceph was added to openstack-ansible in Ocata, so now that I have a need for a Ceph development environment it seems logical to build it as an openstack-ansible Ocata AIO. There were a few gotchas there, so I want to explain the process I used.

First off, Ceph is enabled in an openstack-ansible AIO using a thing I've never seen before called a "Scenario". Basically this means that you need to export an environment variable called "SCENARIO" before running the AIO install. Something like this will do the trick:

    export SCENARIO=ceph
    


Next you need to set the global pg_num in the ceph role or the install will fail. I did that with this patch:

    --- /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:55:07.803635173 +1000
    +++ /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:58:30.417019878 +1000
    @@ -338,7 +338,9 @@
     #     foo: 1234
     #     bar: 5678
     #
    -ceph_conf_overrides: {}
    +ceph_conf_overrides:
    +  global:
    +    osd_pool_default_pg_num: 8
     
     
     #############
    @@ -373,4 +375,4 @@
     # Set this to true to enable File access via NFS.  Requires an MDS role.
     nfs_file_gw: true
     # Set this to true to enable Object access via NFS. Requires an RGW role.
    -nfs_obj_gw: false
    \ No newline at end of file
    +nfs_obj_gw: false
    


That of course needs to be done after the Ceph role has been fetched, but before it is executed, so in other words after the AIO bootstrap, but before the install.

And that was about it (although of course that took a fair while to work out). I have this automated in my little install helper thing, so I'll never need to think about it again which is nice.

Once Ceph is installed, you interact with it via the monitor container, not the utility container, which is a bit odd. That said, all you really need is the Ceph config file and the Ceph utilities, so you could move those elsewhere.

    root@labosa:/etc/openstack_deploy# lxc-attach -n aio1_ceph-mon_container-a3d8b8b1
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph -s
        cluster 24424319-b5e9-49d2-a57a-6087ab7f45bd
         health HEALTH_OK
         monmap e1: 1 mons at {aio1-ceph-mon-container-a3d8b8b1=172.29.239.114:6789/0}
                election epoch 3, quorum 0 aio1-ceph-mon-container-a3d8b8b1
         osdmap e20: 3 osds: 3 up, 3 in
                flags sortbitwise,require_jewel_osds
          pgmap v36: 40 pgs, 5 pools, 0 bytes data, 0 objects
                102156 kB used, 3070 GB / 3070 GB avail
                      40 active+clean
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph osd tree
    ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
    -1 2.99817 root default                                      
    -2 2.99817     host labosa                                   
     0 0.99939         osd.0        up  1.00000          1.00000 
     1 0.99939         osd.1        up  1.00000          1.00000 
     2 0.99939         osd.2        up  1.00000          1.00000 
    


Tags for this post: openstack osa ceph openstack-ansible
Related posts: Configuring docker to use rexray and Ceph for persistent storage


May 25, 2017

This Week in HASS, term 2 week 5

NAPLAN’s over and it’s time to sink our teeth into the main body of curriculum work before mid-year reporting rolls around. Our younger students are using all their senses to study the environment and local area around them, whilst our older students are hard at work on their Explorer projects.

Foundation/Prep/Kindy to Year 3

Unit F.2 for stand-alone Foundation/Prep/Kindy classes has the students continuing to think about their Favourite Place. This week students are considering what they can hear in their Favourite Place and how they will depict that in their model of their Favourite Place. Students can also think about what their Favourite Sounds are and whether or not these would occur in their Favourite Place. Students in integrated Foundation/Prep/Kindy classes (Unit F.6) and Years 1 (Unit 1.2), 2 (Unit 2.2) and 3 (Unit 3.2) have this week set aside for an excursion to a local park or area of heritage significance. If an excursion outside school grounds is impractical teachers can achieve similar results from an excursion around the school and oval. Students are using their senses to interpret their environment, as well as thinking about living and non-living things, natural and managed landscapes and sources of heat and light.

Years 3 to 6

Students in Years 3 (Unit 3.6), 4 (Unit 4.2), 5 (Unit 5.2) and 6 (Unit 6.2) are continuing their project on an explorer. This week the focus for most students is on animals which may have been encountered by their explorer. Year 3 students are examining animals from different climate zones and how they are adapted to deal with climate extremes. Students in Years 4 and 5 look at extinct animals from Africa, South America or North America, assessing impact and sustainability issues. Students in Year 4 (and optionally as an extension for Year 3) consider the life cycle of their chosen animal. Students in years 4, 5 and 6 also start to examine the differences between Primary and Secondary sources and some of the OpenSTEM resources contain quotes or copies of primary material, so that students can refer to these in their project. Year 6 students are examining the changing Economies and Politics of Asia through time, in order to place the explorations within a broader context and to gain a greater understanding of the development of the global situation. Students have another 2 weeks to complete their presentation on their explorer (including environment and other aspects), before assessment of this project.

Urban HF Noise

Over the past 30 years, HF radio noise in urban areas has steadily increased. S6-S9 noise levels are common, which makes it hard to listen to the signals we want to receive.

I’ve been wondering if we can attenuate this noise using knowledge of the properties of the noise, and some clever DSP. Even 6dB would be useful, that’s like the transmitting station increasing their power by a factor of 4. I’ve just spent 2 months working on a 4dB improvement in my FreeDV work. So this week I’ve been messing about with pen and paper and a few simulations, exploring the problem of man-made noise on HF radio.

PWM Noise

One source of noise is switching power supplies, which have short, high current pulses flowing through them at a rate of a few hundred kHz. A series of short impulses in the time domain produces a series of spectral lines (i.e. sinusoids or tones) in the frequency domain, so a 200kHz switcher produces tones at 200kHz, 400kHz, 600kHz etc. These tones are the “birdies” we hear as we tune our HF radios. The shorter the pulses are, the higher in frequency they will extend.

Short pulses lead to efficient switch mode power supplies, which is useful for energy efficiency, and especially desirable for high power devices like electric car chargers and solar panel inverters. So the trend is shorter switching times, higher currents and therefore more HF noise.

The power supplies adjust the PWM pulse-width back and forth as they respond to varying conditions, which introduces a noise component. This is similar to phase noise in oscillators, and causes a continuous noise floor to appear in addition to the tones. The birdies we can tune around, but the noise floor sets a limit on urban HF operations.

The Octave script impulse_noise.m was used to generate the plots in this post. Here is a plot of some PWM impulse samples (top), and the HF spectrum.

I’ve injected a “wanted” signal at 1MHz for comparison. Given a switcher frequency of 255kHz, with 0.1V impulse amplitude, the noise floor is -90dBV down, or about 10uV. This is S5-S6 level noise, assuming 0.1V impulse amplitude induced onto our antenna by local switcher noise, e.g. nearby house wiring, or the neighbors TV. These numbers seem reasonable and match what we hear in our receivers.
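
You can get a feel for this with a few lines of simulation. The sketch below is a rough Python stand-in for impulse_noise.m (all the parameters are my own made-up numbers): a 255kHz impulse train with a jittered pulse width, plus a weak 1MHz "wanted" tone. The fixed part of the pulses produces the birdies; the jitter is what fills in the continuous noise floor between them.

    # Rough Python stand-in for impulse_noise.m; parameters are illustrative only.
    import numpy as np
    import matplotlib.pyplot as plt

    fs = 10e6                  # sample rate, Hz
    f_sw = 255e3               # switcher frequency, Hz
    n = int(fs * 0.01)         # 10ms of samples
    t = np.arange(n)

    x = np.zeros(n)
    period = int(fs / f_sw)
    for start in range(0, n - 4, period):
        width = 2 + np.random.randint(0, 2)   # pulse width jitter -> noise floor
        x[start:start + width] = 0.1          # 0.1V impulse amplitude

    x += 10e-6 * np.sin(2 * np.pi * 1e6 * t / fs)   # weak wanted signal at 1MHz

    X = np.fft.rfft(x * np.hanning(n))
    f = np.fft.rfftfreq(n, 1 / fs)
    plt.plot(f / 1e6, 20 * np.log10(np.abs(X) / n + 1e-12))
    plt.xlabel("Frequency (MHz)")
    plt.ylabel("dBV (approx)")
    plt.show()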

Single Pulses

Single, isolated pulses are an easier problem. Examples are lightning or man-made sources that produce pulses at a rate slower than the bandwidth of the signal we are interested in.

A single impulse produces a flat spectrum, so the noise at frequency f Hz is almost the same as the noise at frequency f+delta Hz, where delta is small. This means you can use the noise at frequencies next to the one you are interested in to estimate and remove the noise in your frequency of interest.

Here is an impulse that lasts two samples; the magnitude spectrum changes slowly, although the phase changes quickly due to the time offset of the impulse.

Turns out that if the impulse position is known, and most of the energy is confined to that impulse, we can make a reasonable estimate of the noise at one frequency, from the noise at adjacent frequencies. Below we estimate the phase and magnitude (green cross) of frequency bin H(k+1) (nearby blue cross) from bin H(k). I’ve actually plotted H(k-1), H(k), and H(k+1) for comparison. The error in the estimation is -44dB down, so that’s a lot of noise removed.
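
The trick works because a single impulse at sample n0 has a DFT of the form H(k) = A·exp(-j2πkn0/N), so stepping one bin along just multiplies by a fixed phase ramp exp(-j2πn0/N). Here is a toy Python sketch of that estimate (my own illustration, not the code behind the plots above):

    # Toy illustration: estimate bin H[k+1] of an impulse from bin H[k].
    # Assumes a single short impulse at a known position n0 dominates the window.
    import numpy as np

    N = 512
    n0 = 100
    x = np.zeros(N)
    x[n0] = 1.0
    x[n0 + 1] = 0.7                         # a short, two-sample impulse

    H = np.fft.fft(x)
    k = 50
    ramp = np.exp(-2j * np.pi * n0 / N)     # per-bin phase ramp for an impulse at n0
    H_est = H[k] * ramp                     # estimate of H[k+1] from H[k]

    err = H[k + 1] - H_est
    print("estimation error: %.1f dB" % (20 * np.log10(abs(err) / abs(H[k + 1]))))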

Unfortunately this gets harder when there are multiple impulses in the same time window, and I can’t work out how to remove noise in this case. However this idea might be useful for some classes of impulse noise.

Noise Blanker

Another idea I tried was “blanking” out the impulses, by opening and closing a switch so that the impulses are not allowed into the receiver. This works OK when we have a wideband signal, but falls over when just a bandpass version is available. In the bandpass version the “pulse” is smeared over time and we are no longer able to gate it out.

There will also be problems dealing with multiple PWM signals, that have different timing and frequency.

I haven’t looked at samples of the RF received from any real world switcher signals yet. I anticipate the magnitude and phase of the switcher signal will be all over the place, due to some torturous transfer function between the switcher and the terminals of my receiver. Plus various other signals will be present. Possibly there is a wide spectrum (short noise pulses) that we can work with. However I’d much rather deal with narrow bandpass signals consisting of just our wanted signal plus the switcher noise floor.

Next Steps

I might get back to my FreeDV work now, and leave this work on the back burner. I do feel I’m getting my head around the problem, and developing a “bag of tricks” that will be useful when other pieces fall into place.

The urban noise appears to be localised, e.g. if you head out into the country the background noise level is much lower. This suggests it’s coupled into the HF antenna by some local effect like induction. So another approach is to estimate the noise using a separate receiver that just picks up the local noise, through a sense antenna that is inefficient for long distance HF signals.

The local noise sequence could then be subtracted from the HF signal. I am aware of analog boxes that do this, using a magnitude and phase network to match the differences in signals received by the sense and HF antennas.

However a DSP approach will allow a more complex relationship (like an impulse response that extends for several microseconds) between the two antenna signals, and allow automatic adjustment. The noise spectrum can change quickly, as PWM is modulated and multiple devices turn on and off in the neighborhood. However the relationship between the two antennas will change slowly if they are fixed in space. This problem reminds me of echo cancellation, something I have played with before. Given radio hardware is now very cheap ($20 SDR dongles), multiple receivers could also be used.
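
As a sanity check on the idea, here is a toy LMS adaptive canceller in Python: the sense antenna sees only the local noise, the HF antenna sees a weak wanted signal plus a filtered copy of that noise, and the adaptive filter learns the coupling and subtracts it. Everything here is synthetic and illustrative, not measured off air.

    # Toy LMS noise canceller: subtract locally-sensed noise from the HF antenna signal.
    import numpy as np

    n = 20000
    wanted = 0.1 * np.sin(2 * np.pi * 1000 * np.arange(n) / 8000)   # weak wanted signal
    noise_ref = np.random.randn(n)                  # sense antenna: local noise only
    h = np.array([0.8, -0.3, 0.2, 0.1])             # unknown coupling into the HF antenna
    primary = wanted + np.convolve(noise_ref, h)[:n]

    ntaps, mu = 8, 0.01
    w = np.zeros(ntaps)
    out = np.zeros(n)
    for i in range(ntaps, n):
        x = noise_ref[i - ntaps + 1:i + 1][::-1]    # most recent reference samples first
        e = primary[i] - np.dot(w, x)               # subtract estimated noise
        w += 2 * mu * e * x                         # LMS weight update
        out[i] = e

    def snr_db(sig, target):
        return 10 * np.log10(np.sum(target ** 2) / np.sum((sig - target) ** 2))

    print("SNR before: %.1f dB" % snr_db(primary[ntaps:], wanted[ntaps:]))
    print("SNR after : %.1f dB" % snr_db(out[ntaps:], wanted[ntaps:]))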

So my gut feel remains that HF urban noise can be reduced to some extent (e.g. 6 or 12dB suppression) using DSP. If those nasty PWM switchers are inducing RF voltages into our antennas, we can work out a way to subtract those voltages.

May 22, 2017

Announcing new high-level PKCS#11 HSM support for Python

Recently I’ve been working on a project that makes use of Thales HSM devices to encrypt/decrypt data. There’s a number of ways to talk to the HSM, but the most straight-forward from Linux is via PKCS#11. There were a number of attempts to wrap the PKCS#11 spec for Python, based on SWIG, cffi, etc., but they were all (a) low level, (b) not very Pythonic, (c) have terrible error handling, (d) broken, (e) inefficient for large files and (f) very difficult to fix.

Anyway, given that nearly all documentation on how to actually use PKCS#11 has to be discerned from C examples and thus I’d developed a pretty good working knowledge of the C API, and I’ve wanted to learn Cython for a while, I decided I’d write a new binding based on a high level wrapper I’d put into my app. It’s designed to be accessible, pick sane defaults for you, use generators where appropriate to reduce work, stream large files, be introspectable in your programming environment and be easy to read and extend.

https://github.com/danni/python-pkcs11

It’s currently a work in progress, but it’s now available on pip. You can get a session on a device, create a symmetric key, find objects, encrypt and decrypt data. The Cryptoki spec is quite large, so I’m focusing on the support that I need first, but it should be pretty straightforward for anyone who wanted to add something else they needed. I like to think I write reasonably clear, self-documenting code.
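
To give a flavour of what that looks like in practice, here is roughly the session / key / encrypt round-trip, following the project’s documented examples at the time (exact names and defaults may have shifted since, so check the README):

import os
import pkcs11

# Load the PKCS#11 module (e.g. SoftHSMv2's .so) and find our token
lib = pkcs11.lib(os.environ['PKCS11_MODULE'])
token = lib.get_token(token_label='DEMO')

data = b'INPUT DATA'

# Open a session on the token and generate a session AES key
with token.open(user_pin='1234') as session:
    key = session.generate_key(pkcs11.KeyType.AES, 256)

    # AES wants an IV; ask the token for 128 random bits
    iv = session.generate_random(128)
    ciphertext = key.encrypt(data, mechanism_param=iv)
    plaintext = key.decrypt(ciphertext, mechanism_param=iv)

    assert plaintext == data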

At the moment it’s only tested on SoftHSMv2 and the Thales nCipher Edge, which is what I have access to. If someone at Amazon wanted this to work flawlessly on CloudHSM, send me an account and I’ll do it :-P Then I can look at releasing my Django integrations for fields, storage, signing, etc.

May 21, 2017

This Week in HASS – term 2, week 6

This week students doing the Understanding Our World® program are exploring their environment and considering indigenous peoples. Younger students are learning about local history and planning a poster on a local issue. Older students are studying indigenous peoples around the world. All the students are working strongly on their main pieces of assessment for the term.

Foundation/Prep/Kindy to Year 3

Our youngest students, using the stand-alone Foundation/Prep/Kindy unit (F.2), are exploring the sense of touch in their environment this week. Students consider a range of fabrics and textiles and choose which ones match their favourite place, for inclusion in their model or collage. Students in integrated classes of Foundation/Prep/Kindy and Year 1 (Unit F.6), Year 1 students (Unit 1.2), Year 2 students (Unit 2.2) and Year 3 (Unit 3.2) are starting to prepare a poster on an issue regarding their school, or local park/heritage place, while considering the local history. These investigations should be based on the excursion from last week. Students will have 2 weeks to prepare their posters, for display either at the school or a local venue, such as the library or community hall.

Years 3 to 6

Students in Years 3 to 6 are continuing with their project on an explorer. Students in Year 3 (Unit 3.6) are examining Australian Aboriginal groups from extreme climate areas of Australia, such as the central deserts, or cold climate areas. Students then choose one of these groups to describe in their Student Workbook, and add to their presentation. Students in Year 4 (Unit 4.2) are studying indigenous peoples of Africa and South America. They will then select a group from the area visited by their explorer, to include in their presentation. Year 5 students (Unit 5.2) do the same with indigenous groups from North America; whilst year 6 students (Unit 6.2) have a wide range of resources on indigenous peoples from Asia to select for study and inclusion in their presentation. Resources are available on groups from across mainland Asia (such as the Mongols, Tatars, Rus, Han), as well as South-East Asia (such as Malay, Dyak, Dani etc.). This is the last section of work to be included in the presentation, and students will then finish their presentation and present it to the class.

May 19, 2017

Borrowing a Pencil

Student: Can I borrow a pencil?

Teacher: I don’t know. Can you?

Student: Yes. I might add that colloquial irregularities occur frequently in any language. Since you and the rest of our present company understood perfectly my intended meaning, being particular about the distinctions between “can” and “may” is purely pedantic and arguably pretentious.

Teacher: True, colloquialism and the judicious interpretation of context help us communicate with nuance, range, and efficiency. And yet, as your teacher, my job is to teach you to think about language with care and rigour. Understanding the shades of difference between one word and another, and to think carefully about what you want to say, will give you greater power and versatility in your speech and writing.

Student: Point taken. May I have a pencil?

Teacher: No, you may not. We do not have pencils since the department cut funding for education again last year.

PostgreSQL date ranges in Django forms

Django’s postgres extensions support data types like DateRange, which is super useful when you want to query your database against dates; however, they have no form field to expose this into HTML.

Handily Django 1.11 has made it super easy to write custom widgets with complex HTML.

Start with a form field based off MultiValueField (note that it references the DateRangeWidget defined further down, so in a real module the widget needs to be defined or imported before the field):

from django import forms
from psycopg2.extras import DateRange


class DateRangeField(forms.MultiValueField):
    """
    A date range
    """

    widget = DateRangeWidget

    def __init__(self, **kwargs):
        fields = (
            forms.DateField(required=True),
            forms.DateField(required=True),
        )
        super().__init__(fields, **kwargs)

    def compress(self, values):
        try:
            lower, upper = values
            return DateRange(lower=lower, upper=upper, bounds='[]')
        except ValueError:
            return None

The other side of a form field is a Widget:

from django import forms
from psycopg2.extras import DateRange


class DateRangeWidget(forms.MultiWidget):
    """Date range widget."""
    template_name = 'forms/widgets/daterange.html'

    def __init__(self, **kwargs):
        widgets = (
            forms.DateInput(),
            forms.DateInput(),
        )
        super().__init__(widgets, **kwargs)

    def decompress(self, value):
        if isinstance(value, DateRange):
            return (value.lower, value.upper)
        elif value is None:
            return (None, None)
        else:
            return value

    class Media:
        css = {
            'all': ('//cdnjs.cloudflare.com/ajax/libs/jquery-date-range-picker/0.14.4/daterangepicker.min.css',)  # noqa: E501
        }

        js = (
            '//cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js',
            '//cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.min.js',
            '//cdnjs.cloudflare.com/ajax/libs/jquery-date-range-picker/0.14.4/jquery.daterangepicker.min.js',  # noqa: E501
        )

Finally we can write a template to use the jquery-date-range-picker:

{% for widget in widget.subwidgets %}
<input type="hidden" name="{{ widget.name }}"{% if widget.value != None %} value="{{ widget.value }}"{% endif %}{% include "django/forms/widgets/attrs.html" %} />
{% endfor %}

<div id='container_for_{{ widget.attrs.id }}'></div>

With a script block:

(function() {
    var format = 'D/M/YYYY';
    var isoFormat = 'YYYY-MM-DD';
    var startInput = $('#{{ widget.subwidgets.0.attrs.id }}');
    var endInput = $('#{{ widget.subwidgets.1.attrs.id }}');

    $('#{{ widget.attrs.id }}').dateRangePicker({
        inline: true,
        container: '#container_for_{{ widget.attrs.id }}',
        alwaysOpen: true,
        format: format,
        separator: ' ',
        getValue: function() {
            if (!startInput.val() || !endInput.val()) {
                return '';
            }

            var start = moment(startInput.val(), isoFormat);
            var end = moment(endInput.val(), isoFormat);

            return start.format(format) + ' ' + end.format(format);
        },
        setValue: function(s, start, end) {
            start = moment(start, format);
            end = moment(end, format);

            startInput.val(start.format(isoFormat));
            endInput.val(end.format(isoFormat));
        }
    });
})();

You can now use this DateRangeField in a form, retrieve it from cleaned_data for database queries or store it in a model DateRangeField.
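
To round it off, here is a hedged sketch of how the pieces might hang together: the form field in a form, and the cleaned DateRange used to filter a model with a postgres DateRangeField. The model and field names here are made up for illustration.

from django import forms
from django.contrib.postgres.fields import DateRangeField as DateRangeModelField
from django.db import models


class Booking(models.Model):
    """Example model with a postgres date range column."""
    period = DateRangeModelField()


class BookingSearchForm(forms.Form):
    # DateRangeField is the form field defined above
    period = DateRangeField()


# In a view, after form.is_valid():
#     period = form.cleaned_data['period']            # a psycopg2 DateRange
#     Booking.objects.filter(period__overlap=period)  # bookings overlapping the range
#     Booking.objects.filter(period__contains=date.today())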

May 18, 2017

FreeDV 700D – First Over The Air Tests

OK so after several attempts I finally managed to push a 700D signal from my QTH in Adelaide (PF95gc) 1170km to the Manly Warringah Radio Society WebSDR in Sydney (QF56oh). Bumped my power up a little, raised my antenna, and hunted around until I found a relatively birdie-free frequency, as even low level birdies are stronger than my very weak signal.

Have a listen:

Analog SSB 700D modem Decoded 700D DV

Here is a spectrogram (i.e. a waterfall with the water falling from left to right) of the analog then digital signal:

Faint birdies (tones) can be seen as horizontal lines at 1000 and 2000 Hz. You can see the slow fading on the digital signal as it dips beneath the noise every few seconds.

The scatter diagram looks like bugs (bits?) splattered on a windscreen:

The slow fading causes the errors to bounce up and down over time (above). The packet error rate (measured on the 28 bit Codec 2 frames) is 26%. This is rather high, but I would argue we have intelligible speech here, and that the intelligibility is better than SSB.

Yayyyyyyy.

I used 4 interleaver frames, which is about 640ms. Perhaps a longer interleaver would ride over the fades.

I’m impressed! Conditions were pretty bad on 40m, the band was “closed”. This is day 1 of FreeDV 700D. It will improve from here.

Command Lines

The Octave demodulator doing its thing:

octave:56> ofdm_rx("~/Desktop/700d_part2/manly5_4.wav",4, "manly5_4.err")
Coded BER: 0.0206 Tbits: 12544 Terrs:   259 PER: 0.2612 Tpacketerrs:   117 Tpackets:   448
Raw BER..: 0.0381 Tbits: 26432 Terrs:  1007

Not sure if I’m working out raw and coded BER right as they are not usually this close. Will look into that. Maybe all the errors are in the fades, where both the demod and LDPC decoder fall in a heap.

The ofdm_tx/ofdm_rx system transmits test frames of known data, so we can work out the BER. By xor-ing the tx and rx bits we can generate an error pattern that can be used to insert errors into a Codec 2 700C bit stream, using this magic incantation:

~/codec2-dev/build_linux/src$ sox ~/Desktop/cq_freedv_8k.wav ~/Desktop/cq_freedv_8k.wav -t raw -r 8000 -s -2 - | ./c2enc 700C - - | ./insert_errors - - ../../octave/manly5_4.err 28 | ./c2dec 700C - - | sox -t raw -r 8000 -s -2 - ~/Desktop/manly5_4_ldpc224_4.wav

It’s just like the real thing. Trust me. And it gives me a feel for how the system is hanging together earlier rather than after months more development.

Links

Lots of links on the Towards FreeDV 700D post earlier today.

The Collapsing Empire




ISBN: 076538888X
LibraryThing
This is a fun fast read, as is everything by Mr Scalzi. The basic premise here is that of a set of interdependent colonies that are about to lose their ability to trade with each other, and are therefore doomed. Oh, except they don't know that and are busy having petty trade wars instead. It isn't a super intellectual read, but it is fun and does leave me wanting to know what happens to the empire...

Tags for this post: book john_scalzi
Related posts: The Last Colony ; The End of All Things; Zoe's Tale; Agent to the Stars; Redshirts; Fuzzy Nation

Towards FreeDV 700D

For the last two months I have been beavering away at FreeDV 700D, as part of my eternal quest to show SSB whose house it is.

This work was inspired by Bill, VK5DSP, who kindly developed some short LDPC codes for me, and suggested I could improve on the synchronisation overhead of the cohpsk modem. As an aside – Bill is part of the communications payload team for the QB50 SUSat Cubesat – currently parked at the ISS awaiting launch! Very Kerbal.

Anyhoo – I’ve developed a new OFDM modem that has less synchronisation overhead, works better, and occupies less RF bandwidth (1000 Hz) than the cohpsk modem used for 700C. I have wrapped my head around such arcane mysteries as coding gain and now have LDPC codes playing nicely over that nasty old HF channel.

It looks like FreeDV 700D has a gain of 4dB over 700C. This means error free operation at -2dB SNR for AWGN, and 2dB SNR over a challenging fast fading HF channel (two paths, 1Hz Doppler, 1ms delay).

Major Innovations:

  1. An OFDM modem with low overhead (small Eb/No penalty) synchronisation, even on fading channels.
  2. Use of LDPC codes.
  3. Long (several seconds) interleaver.
  4. Ruthlessly hunting down any dB’s leaking out of my performance curves.

One nasty surprise was that after a closer look at the short (224,112) LDPC codes, I discovered they don’t give any real improvement over the simple diversity scheme used for FreeDV 700C. However with long interleaving (several seconds) of the short codes, or a long (few thousand bit/several seconds) LDPC code we get an additional 3dB gain. The interleaver allows us to ride over the ups and downs of the fast fading channel.

Interleaving has a few downsides. One is delay; the other is that when it fails you lose a big chunk of data.

I’ve avoided delay until now, using the argument that low delay is essential for PTT radio. However I’d like to test long delays and see what the trade off/end user experience is. Once someone is speaking – i.e in the middle of an “over” – I suspect we won’t notice the delay. However it could get confusing in fast handovers. This is experimental radio, designed for very low SNRs, so lets give it a try.

We could send the uncoded data without interleaving – allowing low delay decoding when the SNR is high. A switch could control LDPC decoding, allowing a user selection of coded-high-delay or uncoded-low-delay, like a noise blanker. Mark, VK5QI, has suggested interleaver depth also be adjustable, which I think is a good idea. The decoder could automagically determine interleaver depth by attempting decoding over a range of depths (1,2,4,8,16 frames etc) and noting when the LDPC code converges.

Or maybe we could use a small, low delay, interleaver, and just live with the fades (like we do on SSB) and get the vocoder to mute or interpolate over them, and enjoy low or modest latency.

I’m also interested to see how the LDPC code mops up errors like static bursts and other real-world HF rubbish that SSB subjects us to even on high SNR channels.

So, lots of room for experimentation. At this stage it’s all in GNU Octave simulation form, no C implementation or FreeDV GUI mode exists yet.

Lots more I could write about the engineering behind the modem, but let’s leave it there for now and take a look at some results.

Results

Here is a rather busy set of BER versus SNR curves (click for larger version, and here is an EPS file version):

The 10^-2 BER line is where the codec gets easy to listen to.

Observe far-right green (700C) to black (700D candidate with lots of interleaving) HF curves, which are about 4dB apart. Also the far-left cyan shows 700D working at -3dB SNR on AWGN channels. One dB later (-2dB) LDPC magic stomps all errors.

Here are some speech/modem tone samples on simulated channels:

AWGN -2dB SNR Analog SSB 700D modem 700D DV
HF +0.8dB SNR Analog SSB 700D modem 700D DV

The analog samples have a 300 to 2600 Hz BPF applied at the tx and rx side, to model an analog SSB radio. The analog SSB and 700D modem signals have exactly the same RMS power and channel models applied to them. In the AWGN channel, it’s difficult to hear the 700D modem signal, however the SSB is audible as it has peaks 9dB above the average.

OK so the 700 bit/s vocoder (Codec 2 700C) speech quality is not great even with no errors, but we have found it supports conversations just fine, and there is plenty of room for improvement. The same techniques (OFDM modem, LDPC interleaving) can also be applied to high quality/high bit rate/high SNR voice modes. But first – I want to push this low SNR DV work through to completion.

Simulation Code

This list summarises the GNU Octave code I’ve developed, as I’ll probably forget the details when I move onto the next project. Feel free to try any of these scripts and let me know what I’ve forgotten to check in. It’s all checked into codec2-dev/octave.

ldpc.m Wrapper functions for using the CML library LDPC functions with Octave
ldpcut.m Unit test/demo for ldpc.m
ldpc_qpsk.m Runs simulations for a bunch of codes for AWGN and HF channels using a simulated QPSK OFDM modem. Runs at the Rs (the symbol rate), assumes ideal modem
ldpc_short.m Simulation used for initial short LDPC code investigation using an ideal rate Rs BPSK modem. Bunch of codes and interleaving schemes tested
ofdm_lib.m Library of OFDM modem functions
ofdm_rs.m Rate Rs OFDM modem simulation used to develop low overhead pilot symbol phase estimation scheme
ofdm_dev.m Rate Fs OFDM modem simulation. This is the real deal, with timing and frequency offset estimation, LDPC integration, and tests for coarse timing and frequency offset estimation
ofdm_tx.m Generates test frames of OFDM raw file samples to play over your HF radio
ofdm_rx.m Receives raw file samples from your HF radio and 700D-demodulates-decodes, and measures BER and PER

Sing Along

Just this morning I tried to radiate some FreeDV 700D from my home to some interstate SDRs on 40M, but alas conditions were against me. I did manage to radiate across my bench so I know the waveform does make it through real HF radios OK.

Please try sending these files through your radio:

ssb_otx_224_32.wav 32 frame (5.12 second) interleaver
ssb_otx_224_4.wav 4 frame (0.64 second) interleaver

Get someone (or a websdr) to sample the received signal (8000Hz sample rate, 16 bit mono), and email me the received file.

Or you can decode it yourself using:

octave:10> ofdm_rx('~/Desktop/otx_224_32_mysample.wav',32);

or:

octave:10> ofdm_rx('~/Desktop/otx_224_4_mysample.wav',4);

The rx side is still a bit rough, I’ll refine it as I try the system with real off-air signals and flush out the bugs.

Update: FreeDV 700D – First Over The Air Tests.

Links

QB50 SUSat cubesat – Bill and team’s Cubesat currently parked at the ISS!
Codec 2 700C and Short LDPC Codes
Testing FreeDV 700C
Modems for HF Digital Voice Part 1
Modems for HF Digital Voice Part 2
FreeDV 700D – First Over The Air Tests

May 16, 2017

LUV Beginners May Meeting: Dealing with Security as a Linux Desktop User

May 20 2017 12:30
May 20 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

This presentation will introduce the various aspects of IT security that Linux Desktop users may be grappling with on an ongoing basis. The target audience of the talk will be beginners (newbies) - who might have had bad experiences using Windows OS all these years, and don't know what to expect when tiptoeing into the new world of Linux.  General Linux users who don't always pay much attention to aspects of security may also find interest in sharing some of the commonsense practices that are essential to using our computers safely.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Recovering from an unbootable Ubuntu encrypted LVM root partition

A laptop that was installed using the default Ubuntu 16.10 (yakkety) full-disk encryption option stopped booting after receiving a kernel update somewhere on the way to Ubuntu 17.04 (zesty).

After showing the boot screen for about 30 seconds, a busybox shell pops up:

BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)
Enter 'help' for list of built-in commands.

(initramfs)

Typing exit will display more information about the failure before bringing us back to the same busybox shell:

Gave up waiting for root device. Common problems:
  - Boot args (cat /proc/cmdline)
    - Check rootdelay= (did the system wait long enough?)
    - Check root= (did the system wait for the right device?)
  - Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/ubuntu--vg-root does not exist. Dropping to a shell! 

BusyBox v.1.21.1 (Ubuntu 1:1.21.1-1ubuntu1) built-in shell (ash)   
Enter 'help' for list of built-in commands.  

(initramfs)

which now complains that the /dev/mapper/ubuntu--vg-root root partition (which uses LUKS and LVM) cannot be found.

There is some comprehensive advice out there but it didn't quite work for me. This is how I ended up resolving the problem.

Boot using a USB installation disk

First, create bootable USB disk using the latest Ubuntu installer:

  1. Download a desktop image.
  2. Copy the ISO directly on the USB stick (overwriting it in the process):

     dd if=ubuntu.iso of=/dev/sdc1
    

and boot the system using that USB stick (hold the option key during boot on Apple hardware).

Mount the encrypted partition

Assuming a drive which is partitioned this way:

  • /dev/sda1: EFI partition
  • /dev/sda2: unencrypted boot partition
  • /dev/sda3: encrypted LVM partition

Open a terminal and mount the required partitions:

cryptsetup luksOpen /dev/sda3 sda3_crypt
vgchange -ay
mount /dev/mapper/ubuntu--vg-root /mnt
mount /dev/sda2 /mnt/boot
mount -t proc proc /mnt/proc
mount -o bind /dev /mnt/dev

Note:

  • When running cryptsetup luksOpen, you must use the same name as the one that is in /etc/crypttab on the root partition (sda3_crypt in this example).

  • All of these partitions must be present (including /proc and /dev) for the initramfs scripts to do all of their work. If you see errors or warnings, you must resolve them.

Regenerate the initramfs on the boot partition

Then "enter" the root partition using:

chroot /mnt

and make sure that the lvm2 package is installed:

apt install lvm2

before regenerating the initramfs for all of the installed kernels:

update-initramfs -c -k all

May 12, 2017

Python3 venvs for people who are old and grumpy

I've been using virtualenvwrapper to make venvs for python2 for probably six or so years. I know it, and understand it. Now some bad man (hi Ramon!) is making me do python3, and virtualenvwrapper just isn't a thing over there as best as I can tell.

So how do I make a venv? It's really not too bad...

First, install the dependencies:

    git clone git://github.com/yyuu/pyenv.git .pyenv
    echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
    echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
    echo 'eval "$(pyenv init -)"' >> ~/.bashrc
    git clone https://github.com/yyuu/pyenv-virtualenv.git ~/.pyenv/plugins/pyenv-virtualenv
    source ~/.bashrc
    


Now to make a venv, do something like this (in this case, infrasot is the name of the venv):

    mkdir -p ~/.virtualenvs/pyenv-infrasot
    cd ~/.virtualenvs/pyenv-infrasot
    pyenv virtualenv system infrasot
    


You can see your installed venvs like this:

    $ pyenv versions
    * system (set by /home/user/.pyenv/version)
      infrasot
    


Where system is the system installed python, and not a venv. To activate and deactivate the venv, do this:

    $ pyenv activate infrasot
    $ ... stuff you're doing ...
    $ pyenv deactivate
    


I'll probably write wrappers at some point so that this looks like virtualenvwrapper, but it's good enough for now.

Tags for this post: python venv virtualenvwrapper python3
Related posts: Implementing SCP with paramiko; Packet capture in python; A pythonic example of recording metrics about ephemeral scripts with prometheus; mbot: new hotness in Google Talk bots; Calculating a SSH host key with paramiko; Twisted conch


May 11, 2017

Assessments

Well, NAPLAN is behind us for another year and so we can all concentrate on curriculum work again! This year we have updated our assessment material to make it even easier to map the answers in the Student Workbooks with the curriculum codes. Remember, our units integrate across several curriculum areas. The model answers now contain colour coded curriculum codes that look like this:  These numbers refer to specific curriculum strands, which are now also listed in our Assessment Guides. In the back of each Assessment Guide is a colour coded table – Gold for History; Green for Geography; Light Green for HASS Skills; Orange for Civics and Citizenship; Purple for Economics and Business and Blue for Science. Each curriculum code is included in this table, along with the rubric for grades A to E, or AP to BA for the younger students.

These updates mean that teachers can now match each question to the specific curriculum area being assessed, thus simplifying the process for grading, and reporting on, each curriculum area. So, if you need to report separate grades for Science and HASS, or even History and Civics and Citizenship, you can tally the results across the questions which address those subject areas, to obtain an overall grade for each subject. Since this can be done on a question-by-question basis, you can even keep a running tally of how each student is doing in each subject area through the term, by assessing those questions they have answered, on a regular basis.

Please make sure that you have the latest updates of both the model answers and the assessment guides for each unit that you are teaching, with the codes as shown here. If you don’t have the latest updates, please download them from our site. Log in with your account, go to your downloads (click on “My Account” on the top right and then “Downloads” on the left). Find the Model Answers PDF for your unit(s) in the list of available downloads and click the button(s) to download each one again. Email us if there are any issues.

How to get Fedora working on a System 76 Oryx Pro

Problems:

  • No sound
  • Only onboard screen; does not recognise HDMI or Mini-DP

Solutions:

  1. Install Korora
  2. Make sure you're not using an outdated kernel that doesn't have the snd-hda-intel driver available.
  3. dnf install akmod-nvidia xorg-x11-drv-nvidia

Extra resources: http://sub-pop.net/post/fedora-23-on-system76-oryx-pro/

LCA 2017 roundup

I've just come back from LCA at the Wrest Point hotel and fun complex in Hobart, over the 16th to the 20th of January. It was a really great conference and keeps the bar for both social and technical enjoyment at a high level.

I stayed at a nearby AirBNB property so I could have my own kitchenette - I prefer to be able to at least make my own breakfast rather than purchase it - and to give me a little exercise each day walking to and from the conference. Having the conference in the same building as a hotel was a good thing, though, as it both simplified accommodation for many attendees and meant that many other facilities were available. LCA this year provided lunch, which was a great relief as it meant more time to socialise and learn and it also spared the 'nearby' cafes and the hotel's restaurants from a huge overload. The catering worked very well.

From the first keynote right to the last closing ceremony, the standard was very high. I enjoyed all the keynotes - they really challenged us in many different ways. Pia gave us a positive view of the role of free, open source software in making the world a better place. Dan made us think of what happens to projects when they stop, for whatever reason. Nadia made us aware of the social problems facing maintainers of FOSS - a topic close to my heart, as I see the way we use many interdependent pieces of software as in conflict with users' social expectations that we produce some kind of seamless, smooth, cohesive whole for their consumption. And Robert asked us to really question our relationship with our users and to look at the "four freedoms" in terms of how we might help everyone, even people not using FOSS. The four keynotes really linked together well - an amazing piece of good work compared to other years - and I think gave us new drive.

I never had a session where I didn't want to see something - which has not always been true for LCA - and quite often I skipped seeing something I wanted to see in order to see something even more interesting. While the miniconferences sometimes lacked the technical punch or speaker polish of the main conference, they were all still good and each offered something interesting to learn. I liked the variety of miniconf topics as well.

Standout presentations for me were:

  • Tom Eastman talking about building application servers - which, in a reversal of the 'cloud' methodology, have to sit inside someone else's infrastructure and maintain a network connection to their owners.
  • Christoph Lameter talking about making kernel objects movable - particularly the inode and dentry caches. Memory fragmentation affects machines with long uptimes, and it was fascinating to hear Matthew Wilcox, Dave Chinner, Keith Packard and Christoph talking about how to fix some of these issues. That's the kind of opportunity that a conference like LCA provides.
  • James Dumay’s talk on Blue Ocean, a new look for Jenkins. It really gives Jenkins a modern, interactive look, and I hope it becomes the new default.

Maths Challenge (Basic Operations)

As we are working on expanding our resources in the Maths realm, we thought it would be fun to start a little game here.

Remember “Letters and Numbers” on SBS? (Countdown in UK, Cijfers en Letters in The Netherlands and Belgium, originally Des Chiffres et des Lettres in France).

The core rules for the numbers game are: you get 6 numbers, to use with basic operations (add, subtract, multiply, divide) to get as close as possible to a three digit target number. You can only use each number once, but you don’t have to use all numbers. No intermediate result is allowed to be negative.

Now try this for practice:

Your 6 numbers (4 small, 2 large):    1     9     6     9     25     75

Your target: 316

Comment on this post with your solution (full working)!  We’re not worrying about a time limit, as it’s about the problem solving.
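If you would rather check your working programmatically (spoiler warning: running this will print one valid answer), here is a minimal brute-force sketch in Python. It is not part of the original challenge, and it assumes – as in the televised versions of the game – that division is only allowed when the result is a whole number:

  from itertools import permutations

  def solve(numbers, target):
      """Return one expression that reaches the target, or None if none is found."""
      seen = set()  # multisets of values already explored, to avoid repeated work

      def search(values):
          key = tuple(sorted(v for v, _ in values))
          if key in seen:
              return None
          seen.add(key)
          for (a, ea), (b, eb) in permutations(values, 2):
              rest = list(values)
              rest.remove((a, ea))
              rest.remove((b, eb))
              candidates = [(a + b, f"({ea}+{eb})"), (a * b, f"({ea}*{eb})")]
              if a - b >= 0:                      # rule: no negative intermediate results
                  candidates.append((a - b, f"({ea}-{eb})"))
              if b != 0 and a % b == 0:           # assumption: whole-number division only
                  candidates.append((a // b, f"({ea}/{eb})"))
              for value, expr in candidates:
                  if value == target:
                      return expr
                  found = search(rest + [(value, expr)])
                  if found is not None:
                      return found
          return None

      return search([(n, str(n)) for n in numbers])

  print(solve([1, 9, 6, 9, 25, 75], 316))

The search combines two available numbers at a time, checks every intermediate result against the target, and backtracks when a branch cannot work – the same process most people follow in their heads, just exhaustively.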

May 10, 2017

Cafe Dark Ages

Today, like most mornings, I biked to a cafe to hack on my laptop while slurping on iced coffee. Exercise, fresh air, sugar, caffeine and R&D. On this lovely sunny Autumn day I’m tapping away on my lappy, teasing bugs out of my latest digital radio system.

Creating new knowledge is a slow, tedious business.

I test each small change by running an experiment several hundred thousand times, using simulation software on my laptop. R&D – Science by another name – is hard. One in ten of my ideas actually works, despite my being at the peak of my career, having a PhD in the field, and getting help from many very intelligent peers.

A different process is going on at the table next to me. An “Integrative Health Consultant” is going about her business, speaking to a young client.

In an earnest yet authoritative Doctor-voice the “consultant” revs up with ill-informed dietary advice, moves on to over-priced under-performing products that the consultant just happens to sell, and ends up with a thinly disguised invitation to join her Multi-Level-Marketing (MLM) organisation. With a few side journeys through anti-vaxer land, conspiracy theories, organic food, anti-carbo and anti-gluten, sprinkled with disparaging remarks on Science, evidence based medicine and an inspired stab at dissing oncology (“I know this guy who had chemo and still died!”). All heavily backed by n=1 anecdotes.

A hobby of mine is critical thinking, so I am aware that most of their conversation is bullshit. I know how new knowledge is found (see above) and it’s not from Facebook.

But this post is not about the arguments of alt-med versus evidence based medicine. Been there, done that.

Here is what bothers me. These were both good people, who more or less believe in what they say. They are not stupid, they are intelligent and want to help people get and stay healthy. I have friends and family that I love who believe this crap. But they are hurting society and making people sicker.

Steering people away from modern, evidence based medicine kills people. Someone who is persuaded to see a naturopath rather than an oncologist will find out too late the price of well-meaning ignorance. Anti-vaxers hurt, maim, and kill for their beliefs. I shudder to think of the wasted lives and billions of dollars that could be spent on far better outcomes than lining the pockets of snake oil salespeople.

There is some encouraging news. The Australian Government has started removing social security benefits from people who don’t vaccinate. The Nursing and Midwifery Board is also threatening to take action against Nurses who push an anti-vaccination stance.

But this is beating people with a stick; where is the carrot?

Doctors in the Dark Ages were good people. They really believed leeches, blood letting and prayer were helping the patients they loved. But those beliefs sustained untold human misery. The difference with today?

Science, Education, and Policy.

May 09, 2017

Get a 50% discount during NAPLAN week 2017

That’s right. Use the NAPLAN17 coupon code to receive a 50% discount on any base PDF resource, teacher unit bundle or subscription during this NAPLAN week, up to Sunday 14th May 2017. This offer is valid for anyone: existing and new customers, subscribers, and there are no other restrictions.

Why? Well, as you know we feel very strongly that good materials help awesome teachers deliver excellent outcomes. And while standardised assessment can assist a teacher with that, the way that NAPLAN results are now aggregated for comparing schools (including by the media) does little to improve either standards, the wellbeing and success of students, or the wellbeing of teachers.  Of course, NAPLAN “just is”, for now, but we’d like to support every teacher and school as they focus on the core teaching materials that help our students’ literacy, numeracy, general knowledge and skills. We’re here to help!

If you’re an existing customer, see if there are any units you’ve been wanting to get anyway. If you are a might-be-new-customer of OpenSTEM, we look forward to welcoming you!

May 08, 2017

May 07, 2017

This Week in HASS – term 2, week 4

It’s NAPLAN week and that means time is short! Fortunately, the Understanding Our World™ program is based on 9 week units, which means that if you run out of time in any particular week, it’s not a disaster. Furthermore, we have made sure that there is plenty of catch-up time within the lessons, so that there is no need to feel rushed. This week students are getting into the nitty gritty of their term projects. Our youngest students are studying their surroundings at school and in the local area. Older students are getting to the core of their research projects.

Foundation to Year 3

Students in our standalone Foundation/Kindy/Prep class (unit F.2) are starting to build a model of their Favourite Place. It is the teacher’s choice whether they build a diorama, make a poster or collage, or how this is done in class. This week students start by drawing or cutting out pictures to show aspects of their favourite place. Students in an integrated Foundation/Kindy/Prep (unit F.6) and Year 1 class are using their senses to investigate their class and school – what can we see, hear, smell, feel and taste? Some ideas can be found in resources such as My Favourite Sounds and the Teacher Handbook also contains lots of ideas for these investigations. Students in Years 1 (unit 1.2), 2 (unit 2.2) and 3 (unit 3.2) are also discussing how the school and local area has changed through time. The teacher can use old maps, photos or newspaper reports to guide students through these discussions. What information is available in the school? What do local families remember?

Years 3 to 6

Students in Year 3 (unit 3.6), 4 (unit 4.2), 5 (unit 5.2) and 6 (unit 6.2) are continuing to research their explorer. This week year 3 students are focusing on the climates encountered by their explorer. Resources such as Climate Zones of Australia and Climate Zones of the World can help the class to identify these climate areas. Year 4 students examine Environments in Africa and South America, in order to discuss the environments encountered by their explorer. Students in Year 5 can read up about the environment encountered by their explorer in North America, and Year 6 students examine the Environments of Asia. In each case, the student workbook guides the student through this investigation and helps them to isolate pertinent information to include in their presentation. This helps students to gain an understanding of how to research a topic and derive an understanding of what information they need to consider. Teachers can use the workbook to check in and see how students are travelling in their progress towards completing the project, as well as their understanding of the content covered.

May 05, 2017

Speaking in May 2017

It was a big April if you’re in the MySQL ecosystem, so I am looking forward to other events that have a different focus and a different base, so to speak. See you at:

  • rootconf – May 11-12 2017 – Bangalore, India. My first Rootconf was last year, and it was a great event; I look forward to going there again this year, to talk about capacity planning for your databases. If you register with this link you get a 10% discount.
  • Open Source Data Center Conference – May 16-18 2017 – Berlin, Germany. I’ve enjoyed my trips to OSDC in the last few years, and they’re on their last tickets now – so register if you plan to go!

May 03, 2017

How to Really Clean a Roomba

The official iRobot Roomba instructional videos show a Roomba doing its thing in an immaculately clean house. When it comes time to clean the Roomba itself, an immaculately manicured woman empties a sprinkling of dirt from the Roomba’s hopper into a bin, flicks no dust at all off the rotor brush and then delicately grooms the main brush, before putting the Roomba back on to charge.

It turns out the cleaning procedure is a bit more involved for two long-haired adults and three cats living on a farm. Note that the terminology used in the instructions below was made up by me just now, and may or may not match what’s in the Roomba manual. Also, our Roomba is named Neville.

First, assemble some tools. You will need at least two screwdrivers (one phillips, one slotted), the round red Roomba brush cleaning thingy, and a good sharp knife. You will not need the useless flat red Roomba cleaning thingy the woman in the official video used to groom the main brush.

01-tools

Brace yourself, then turn the Roomba over (here we see that Neville had an unfortunate encounter with some old cat-related mess, in addition to the usual dirt, mud, hair, straw, wood shavings, chicken feathers, etc.):

02-start-dirty

Remove the hopper:

03-empty-hopper

Empty the hopper:

04-hopper-empty

Check whether the fan inside the hopper looks like it’s clogged. It’s probably good this time (Neville hasn’t accidentally been run with one filter missing lately), but we may as well open it up anyway.

05-hopper-top

Take the top off:

06-hopper-open

Take the filter plate off:

07-hopper-open-more

Take the fan cowling off. Only a bit of furry dusty gunk:

08-hopper-fan

Remove the furry dusty gunk:

09-hopper-clean

Next, pop open the roller enclosure:

10-rollers-open

Remove the rollers and take their end caps off:

11-rollers-out

Pull the brush roller through the round Roomba brush cleaning thingy:

12-brush-roller-cleaning

This will remove most of the hair, pine needles and straw:

13-brush-roller-gunk

Do it again to remove the rest of the hair, pine needles and straw:

14-brush-roller-almost-clean

Check the end of the axle for even more hair:

15-brush-roller-end

This can be removed using your knife:

16-brush-roller-end-clean

The rubber roller also needs a good bit of knife action:

17-rubber-roller-knife

18-rubber-roller-knife-end

The rubber roller probably really needs to be replaced at this point (it’s getting a bit shredded), but I’ll do that next time.

19-rubber-roller-clean

Clean the inside of the roller enclosure using a hand-held Dyson vacuum cleaner:

20-inside-clean-vacuum

That didn’t work very well. I assume Neville managed to escape into the bathroom recently and got a bit wet (he’s a free spirit).

21-inside-vaccumed

Rubbing with a damp cloth helps somewhat:

22-inside-wiped

But this Orange Power stuff is even better:

23-orange-power

See?

24-inside-clean

Next, use your knife to liberate the rotor brush:

25-rotor-knife

Almost there, but we need to take the whole thing off:

26-rotor-partial-clean

Yikes:

27-rotor-off

Note to self: buy new rotor brush.

28-rotor-clean

One of Neville’s wheels has horrible gunk stuck in its tread. Spraying with Orange Power and wiping helps somewhat (we’ll come back to this later):

29-wheel-gunk

Use a screwdriver to pop the little front wheel out, and cut the hair off its axle with a knife. The axle could probably use some WD40 too:

30-front-wheel-off

Remove the entire bottom plate:

31-bottom-off

Use that hand-held vacuum cleaner again to get rid of the dust balls:

32-bottom-vacuum

Use your fingers or a screwdriver to remove small chicken and/or duck feathers from the wheel housing:

33-wheel-feather-01

33-wheel-feather-02

Use a chopstick to scrape the remaining gunk out of the wheel tread:

34-wheel-chopstick

Voila!

35-roomba-clean

Here’s what came out of the poor little guy:

36-roomba-gunk

Finally, reassemble everything, and Neville is ready for next time (or, at least, is ready to go back on the charger):

37-roomba-done

API, ABI and backwards compatibility are a hard necessity

Recently, I was reading a thread on LKML on a proposal to change the behavior of the open system call when confronted with unknown flags. The thread is worth a read, as the topic of augmenting behaviour that probably exists by accident to make it “better” is always interesting, as is the definition of “better”.

Keeping API and/or ABI compatibility is something that isn’t a new problem, and it’s one that people are pretty good at sometimes messing up.

This problem does not go away just because “we have cloud now”. In any distributed system, in order to upgrade it (or “be agile” as the kids are calling it), you by definition are going to have either downtime or at least two versions running concurrently. Thus, you have to have your interfaces/RPCs/APIs/ABIs/protocols/whatever cope with changes.

You cannot instantly upgrade the world; it happens gradually. You also have to design for at least three concurrent versions running. One is the original, the second is your upgrade, and the third is the urgent fix because the upgrade is quite broken in some new way you only discover in production.

So, the way you do this? Never ever EVER design for N-1 compatibility only. Design for going back a long way, much longer than you officially support. You want to have a design and programming culture of backwards compatibility to ensure you can both do new and exciting things and experiment off to the side.
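To make the idea concrete, here is a small hypothetical sketch in Python (not from the LKML thread or any particular project) of a message handler written with that culture in mind: it keeps accepting every version it has ever understood, defaults fields that older senders never set, and ignores fields newer senders add, rather than coping only with the immediately previous version.

  import json

  # Versions this receiver has ever understood -- not just the previous one.
  SUPPORTED_VERSIONS = {1, 2, 3}

  def handle_message(raw: str) -> dict:
      msg = json.loads(raw)
      version = msg.get("version", 1)        # a missing version field means "oldest"
      if version not in SUPPORTED_VERSIONS:
          raise ValueError(f"unsupported message version {version}")
      # Read only the fields this code knows about; unknown keys added by newer
      # senders are ignored rather than treated as errors.
      return {
          "name": msg["name"],
          "priority": msg.get("priority", 0),  # field added in v2, defaulted for v1 senders
      }

  # A newer (v3) sender can add fields freely without breaking this receiver:
  print(handle_message('{"version": 3, "name": "job-42", "priority": 5, "deadline": "soon"}'))

Whether silently ignoring unknown fields is the right choice is exactly the kind of trade-off the open() flags discussion is about; the point is that the decision has to be made deliberately and then kept stable across many versions.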

It’s worth going and rereading Rusty’s API levels posts from 2008:

April 30, 2017

This Week in HASS – term 2, week 3

This week all of our students start to get into the focus areas of their units. For our youngest students that means starting to examine their “Favourite Place” – a multi-sensory examination which helps them to explore a range of different kinds of experiences as they build a representation of their Favourite Place. Students in Years 1 to 3 start mapping their local area and students in Years 3 to 6 start their research topics for the term, each choosing a different explorer to investigate.

Foundation/Kindy/Prep to Year 3

Students doing our stand-alone Foundation/Kindy/Prep unit (F.2) start examining the concept of a Favourite Place this week. This week is an introduction to a 6 week investigation, using all their senses to consider different aspects of places. They are focusing on thinking about what makes their favourite place special to them and how different people like different places. This provides great opportunities for practising skills of considering alternate points of view, having respectful discussions and accepting that others might have opinions different to their own, but no less valid. Students in integrated Foundation/Kindy/Prep (unit F.6) classes and in Years 1 (unit 1.2), 2 (unit 2.2) and 3 (unit 3.2) are doing some mapping this week, learning to represent school buildings, open areas, roads, houses, shops etc in a 2 dimensional plan. This exercise forms the foundation for an examination of the school and local landscape over the next few weeks.

Years 3 to 6

Students in Years 3 to 6 start their research projects this week. Students doing unit 3.6, Exploring Climates, will be investigating people who have explored extreme climates. Options include the first people to reach Australia during the Ice Age, Aboriginal people who lived in Australia’s central deserts, Europeans who explored central Australia, such as Sturt, Leichhardt and others. Students doing unit 4.2 will be investigating explorers of Africa and South America, including Ferdinand Magellan (and Elcano), Walter Raleigh, Amerigo Vespucci and many others. Students doing unit 5.2  are investigating explorers of North America. Far beyond Christopher Columbus, choices include Vikings such as Eric the Red, Leif Erikson and Bjarni Herjolfsson; Vitus Bering (after whom the Bering Strait is named), the French in the colony of Quebec, such as Jacques Cartier, Samuel de Champlain and Pierre François-Xavier de Charlevoix. Some 19th century women such as Isabella Bird (pictured on right) and Nellie Bly are also provided as options for research. Unit 6.2 examines explorers of Asia. In this unit, Year 6 students are encouraged to move beyond a Eurocentric approach to exploration and consider explorers from other areas such as Asia and Africa as well. Thus explorers such as Ibn Battuta, Ahmad Ibn Fadlan, Gan Ying, Ennin and Zheng He, join the list with Willem Barents, William Adams, Marco Polo and Abel Tasman. Women explorers include Gertrude Bell and Ida Pfeiffer. The whole question of women explorers, and the constraints under which they have operated in different cultures and time periods, can form part of a class discussion, either as extension or for classes with a particular interest.

Teachers have the option for students to present the results of their research (which will cover the next 4 weeks) as a slide presentation, using software such as Powerpoint, a poster, a narrative, a poem, a short play or any other format that is useful, and some teachers have managed to combine this with requirements for other subject areas, such as English or Digital Technologies, thereby making the exercise even more time-efficient.

What do you mean ExceptT doesn't Compose?

Disclaimer: I work at Ambiata (our Github presence) probably the biggest Haskell shop in the southern hemisphere. Although I mention some of Ambiata's coding practices, in this blog post I am speaking for myself and not for Ambiata. However, the way I'm using ExceptT and handling exceptions in this post is something I learned from my colleagues at Ambiata.

At work, I've been spending some time tracking down exceptions in some of our Haskell code that have been bubbling up to the top level and killing a complex multi-threaded program. On Friday I posted a somewhat flippant comment to Google Plus:

Using exceptions for control flow is the root of many evils in software.

Lennart Kolmodin who I remember from my very earliest days of using Haskell in 2008 and who I met for the first time at ICFP in Copenhagen in 2011 responded:

Yet what to do if you want composable code? Currently I have
type Rpc a = ExceptT RpcError IO a
which is terrible

But what do we mean by "composable"? I like the wikipedia definition:

Composability is a system design principle that deals with the inter-relationships of components. A highly composable system provides recombinant components that can be selected and assembled in various combinations to satisfy specific user requirements.

The ensuing discussion, which also included Sean Leather, suggested that these two experienced Haskellers were not aware that with the help of some combinator functions, ExceptT composes very nicely and results in more readable and more reliable code.

At Ambiata, our coding guidelines strongly discourage the use of partial functions. Since the type signature of a function doesn't include information about the exceptions it might throw, the use of exceptions is strongly discouraged. When using library functions that may throw exceptions, we try to catch those exceptions as close as possible to their source and turn them into errors that are explicit in the type signatures of the code we write. Finally, we avoid using String to hold errors. Instead we construct data types to carry error messages and render functions to convert them to Text.

In order to properly demonstrate the ideas, I've written some demo code and made it available in this GitHub repo. It compiles and even runs (providing you give it the required number of command line arguments) and hopefully does a good job demonstrating how the bits fit together.

So let's look at the naive version of a program that doesn't do any exception handling at all.

  import Data.ByteString.Char8 (readFile, writeFile)

  import Naive.Cat (Cat, parseCat)
  import Naive.Db (Result, processWithDb, renderResult, withDatabaseConnection)
  import Naive.Dog (Dog, parseDog)

  import Prelude hiding (readFile, writeFile)

  import System.Environment (getArgs)
  import System.Exit (exitFailure)

  main :: IO ()
  main = do
    args <- getArgs
    case args of
      [inFile1, infile2, outFile] -> processFiles inFile1 infile2 outFile
      _ -> putStrLn "Expected three file names." >> exitFailure

  readCatFile :: FilePath -> IO Cat
  readCatFile fpath = do
    putStrLn "Reading Cat file."
    parseCat <$> readFile fpath

  readDogFile :: FilePath -> IO Dog
  readDogFile fpath = do
    putStrLn "Reading Dog file."
    parseDog <$> readFile fpath

  writeResultFile :: FilePath -> Result -> IO ()
  writeResultFile fpath result = do
    putStrLn "Writing Result file."
    writeFile fpath $ renderResult result

  processFiles :: FilePath -> FilePath -> FilePath -> IO ()
  processFiles infile1 infile2 outfile = do
    cat <- readCatFile infile1
    dog <- readDogFile infile2
    result <- withDatabaseConnection $ \ db ->
                 processWithDb db cat dog
    writeResultFile outfile result

Once built as per the instructions in the repo, it can be run with:

  dist/build/improved/improved Naive/Cat.hs Naive/Dog.hs /dev/null
  Reading Cat file 'Naive/Cat.hs'
  Reading Dog file 'Naive/Dog.hs'.
  Writing Result file '/dev/null'.

The above code is pretty naive and there is zero indication of what can and cannot fail or how it can fail. Here's a list of some of the obvious failures that may result in an exception being thrown:

  • Either of the two readFile calls.
  • The writeFile call.
  • The parsing functions parseCat and parseDog.
  • Opening the database connection.
  • The database connection could terminate during the processing stage.

So let's see how the use of the standard Either type, ExceptT from the transformers package and combinators from Gabriel Gonzalez's errors package can improve things.

Firstly, the types of parseCat and parseDog were ridiculous. Parsers can fail with parse errors, so these should both return an Either type. Just about everything else should be in the ExceptT e IO monad. Let's see what that looks like:

  {-# LANGUAGE OverloadedStrings #-}
  import           Control.Exception (SomeException)
  import           Control.Monad.IO.Class (liftIO)
  import           Control.Error (ExceptT, fmapL, fmapLT, handleExceptT
                                 , hoistEither, runExceptT)

  import           Data.ByteString.Char8 (readFile, writeFile)
  import           Data.Monoid ((<>))
  import           Data.Text (Text)
  import qualified Data.Text as T
  import qualified Data.Text.IO as T

  import           Improved.Cat (Cat, CatParseError, parseCat, renderCatParseError)
  import           Improved.Db (DbError, Result, processWithDb, renderDbError
                               , renderResult, withDatabaseConnection)
  import           Improved.Dog (Dog, DogParseError, parseDog, renderDogParseError)

  import           Prelude hiding (readFile, writeFile)

  import           System.Environment (getArgs)
  import           System.Exit (exitFailure)

  data ProcessError
    = ECat CatParseError
    | EDog DogParseError
    | EReadFile FilePath Text
    | EWriteFile FilePath Text
    | EDb DbError

  main :: IO ()
  main = do
    args <- getArgs
    case args of
      [inFile1, infile2, outFile] ->
              report =<< runExceptT (processFiles inFile1 infile2 outFile)
      _ -> do
          putStrLn "Expected three file names, the first two are input, the last output."
          exitFailure

  report :: Either ProcessError () -> IO ()
  report (Right _) = pure ()
  report (Left e) = T.putStrLn $ renderProcessError e


  renderProcessError :: ProcessError -> Text
  renderProcessError pe =
    case pe of
      ECat ec -> renderCatParseError ec
      EDog ed -> renderDogParseError ed
      EReadFile fpath msg -> "Error reading '" <> T.pack fpath <> "' : " <> msg
      EWriteFile fpath msg -> "Error writing '" <> T.pack fpath <> "' : " <> msg
      EDb dbe -> renderDbError dbe


  readCatFile :: FilePath -> ExceptT ProcessError IO Cat
  readCatFile fpath = do
    liftIO $ putStrLn "Reading Cat file."
    bs <- handleExceptT handler $ readFile fpath
    hoistEither . fmapL ECat $ parseCat bs
    where
      handler :: SomeException -> ProcessError
      handler e = EReadFile fpath (T.pack $ show e)

  readDogFile :: FilePath -> ExceptT ProcessError IO Dog
  readDogFile fpath = do
    liftIO $ putStrLn "Reading Dog file."
    bs <- handleExceptT handler $ readFile fpath
    hoistEither . fmapL EDog $ parseDog bs
    where
      handler :: SomeException -> ProcessError
      handler e = EReadFile fpath (T.pack $ show e)

  writeResultFile :: FilePath -> Result -> ExceptT ProcessError IO ()
  writeResultFile fpath result = do
    liftIO $ putStrLn "Writing Result file."
    handleExceptT handler . writeFile fpath $ renderResult result
    where
      handler :: SomeException -> ProcessError
      handler e = EWriteFile fpath (T.pack $ show e)

  processFiles :: FilePath -> FilePath -> FilePath -> ExceptT ProcessError IO ()
  processFiles infile1 infile2 outfile = do
    cat <- readCatFile infile1
    dog <- readDogFile infile2
    result <- fmapLT EDb . withDatabaseConnection $ \ db ->
                 processWithDb db cat dog
    writeResultFile outfile result

The first thing to notice is that the changes to the structure of the main processing function processFiles are minor, but all errors are now handled explicitly. In addition, all possible exceptions are caught as close as possible to the source and turned into errors that are explicit in the function return types. Sceptical? Try replacing one of the readFile calls with an error call or a throw and see it get caught and turned into an error as specified by the type of the function.

We also see that despite having many different error types (which happens when code is split up into many packages and modules), a constructor for an error type higher in the stack can encapsulate error types lower in the stack. For example, this value of type ProcessError:

  EDb (DbError3 ResultError1)

contains a DbError which in turn contains a ResultError. Nesting error types like this aids composition, as does the separation of error rendering (turning an error data type into text to be printed) from printing.

We also see that the use of combinators like fmapLT, together with the nested error types of the previous paragraph, means that ExceptT monad transformers do compose.

Using ExceptT with the combinators from the errors package to catch exceptions as close as possible to their source and converting them to errors has numerous benefits including:

  • Errors are explicit in the types of the functions, making the code easier to reason about.
  • It's easier to provide better error messages and more context than what is normally provided by the Show instance of most exceptions.
  • The programmer spends less time chasing the source of exceptions in large complex code bases.
  • More robust code, because the programmer is forced to think about and write code to handle errors instead of error handling being an optional afterthought.

Want to discuss this? Try reddit.

April 27, 2017

Continuing the Conversation at DrupalCon and Into the Future

My blog post from last week was very well received and sparked a conversation in the Drupal community about the future of Drupal. That conversation has continued this week at DrupalCon Baltimore.

Yesterday during the opening keynote, Dries touched on some of the issues raised in my blog post. Later in the day we held an unofficial BoF. The turnout was smaller than I expected, but we had a great discussion.

Drupal moving from a hobbyist and business tool to being an enterprise CMS for creating "ambitious digital experiences" was raised in the Driesnote and in other conversations including the BoF. We need to acknowledge that this has happened and consider it an achievement. Some people have been left behind as Drupal has grown up. There is probably more we can do to help these people. Do we need more resources to help them skill up? Should we direct them towards WordPress, backdrop, squarespace, wix etc? Is it possible to build smaller sites that eventually grow into larger sites?

In my original blog post I talked about "peak Drupal" and used metrics that supported this assertion. One metric missing from that post is dollars spent on Drupal. It is clear that the picture is very different when measuring success using budgets. There is a general sense that a lot of money is being spent on high end Drupal sites. This has resulted in fewer sites doing more with Drupal 8.

As often happens when trying to solve problems with Drupal, the BoF descended into talking about technical solutions. Technical solutions and implementation detail have a place. I think it is important for the community to move beyond this and start talking about Drupal as a product.

In my mind Drupal core should be a content management framework and content hub service for building compelling digital experiences. For the record, I am not arguing Drupal should become API only. Larger users will take this and build their digital stack on top of this platform. This same platform should support an ecosystem of Drupal "distros". These product focused projects target specific use cases. Great examples of such distros include Lightning, Thunder, Open Social, aGov and Drupal Commerce. For smaller agencies and sites a distro can provide a great starting point for building new Drupal 8 sites.

The biggest challenge I see is continuing this conversation as a community. The majority of the community toolkit is focused on facilitating technical discussions and implementations. These tools will be valuable as we move from talking to doing, but right now we need tools and processes for engaging in silver discussions so we can build platinum level products.

April 26, 2017

LUV Main May 2017 Meeting: The Plasma programming language

May 2 2017 18:30
May 2 2017 20:30
Location: 
The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

PLEASE NOTE NEW LOCATION

Tuesday, May 2, 2017
6:30 PM to 8:30 PM
The Dan O'Connell Hotel
225 Canning Street, Carlton VIC 3053

Speakers:

• Dr. Paul Bone, the Plasma programming language
• To be announced


Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.



April 24, 2017

Weechat – a trial

I’m a big fan of console apps. But for IRC, I have been using quassel, as it gives me a client on my phone. But I’ve been cleaning up my cloud accounts, and thought of the good old days when you’d simply run a console IRC client in screen or tmux.

Many years ago I used weechat; it was awesome, so I thought why not go and have a play.

To my surprise weechat has a /relay command which allows other clients to connect. One such client is WeechatAndroid, which in itself isn’t an IRC client, but actually a client that can talk to an already running weechat… This is exactly what I want.

So in case any of you want to do the same, this is my current tmux + weechat setup. Mostly gleaned from various internet sources.

 

Initial WeeChat setup

Once you start weechat, you can simply configure things; when you do, it writes them to its config file. So really all you need to do is place or edit the config file directly, and once set up you can easily move it. However, as you’ll always want to be connected, it’s nice to know how to change the configuration while it’s running… you know, so you can play.

I’ll assume you have installed weechat in whatever distro you’re using. So we’ll start by running weechat (inside a tmux or screen):

weechat

Now while we are inside weechat, lets first start by installing/activating some scripts:

/script install buffers.pl go.py colorize_nicks.py urlserver.py

Noting that there are heaps of scripts to install, but let’s explain these:

  • buffers.pl – makes a left side window listing the buffers (or channels).
  • go.py – allows you to quickly search the buffers.
  • urlserver.py – automatically shortens long urls so they don’t break if they overlap over a line.
  • colorize_nicks.py – is obvious.

The go.py script is useful. But it turns out you can also turn on mouse support with:

/mouse enable

This will then allow you to use your mouse to select a buffer, although it will break normal copy and pasting, so it may not be worth the effort, but I thought I’d mention it.
Of course, you would normally use the weechat keybindings to access buffers: <alt+left or right> to go to a buffer, or <alt-a> to go to the last active buffer. But go means you can easily jump to any buffer, and if you add a go.py keybinding:

/key bind meta-g /go

An <alt+g> will launch it, so you can type away; it’s really easy.

But we are probably getting ahead of ourselves. There are plenty of places that can tell you how to configure it to connect to freenode etc. I used: https://weechat.org/files/doc/devel/weechat_quickstart.en.html

Now that you have some things configured, let’s set up the relay script/plugin.

 

Setting up /relay

This actually isn’t too hard. But there was a gotcha, which is why I’m writing about it. I first simply followed WeechatAndroid’s guide. But this just sets up an insecure relay:


/relay add weechat 8001
/set relay.network.password "your-secret-password"

NOTE: 8001 is the port, so you can change this, and the password to whatever you want.

However, that’s great for a test, but we probably want SSL, so we need a SSL cert:


mkdir -p ~/.weechat/ssl
cd ~/.weechat/ssl
openssl req -nodes -newkey rsa:2048 -keyout relay.pem -x509 -days 365 -out relay.pem

NOTE: To see or change where weechat is looking for the cert, check the value of ‘relay.network.ssl_cert_key’, or just do a: /set relay.*

We can tell weechat to load this sslcert without restarting by running:

/relay sslcertkey

And here’s the gotcha: you will fail to connect via SSL until you create an instance of the weechat relay protocol (listening socket) with ssl in the name of the protocol:
/relay del weechat
/relay add ssl.weechat 8001

Or just be smart and setup the SSL version.

On the weechat buffer you will see the client connecting and disconnecting, so it’s a good way to debug connection issues.

 

Weechat /relay + ssl (TL;DR)

mkdir -p ~/.weechat/ssl
cd ~/.weechat/ssl
openssl req -nodes -newkey rsa:2048 -keyout relay.pem -x509 -days 365 -out relay.pem

In weechat:

/relay sslcertkey
/relay add ssl.weechat 8001
/set relay.network.password "your-secret-password"

April 23, 2017

This Week in HASS – term 2, week 2

It is hoped that by now all the school routine is shaking back down into place. No doubt you’ve all got ANZAC Day marked on your class calendars, and this may be a good time to revisit some of the celebrations with the younger students. This week our younger students are looking at types of homes and local Aboriginal groups. Students in Year 3 are investigating climate zones and biomes of Australia, while students in Years 4 to 6 are looking at Europe in the ‘Age of Discovery’ (the 15th to 18th centuries).

Foundation/Prep to Year 3

House in Hobart TAS

Students in our stand-alone Foundation/Prep class (Unit F.2), in line with the name of the unit “Where We Live”, are examining different types of homes and talking about how people get the things they need (such as shelter, warmth etc) from their homes. Students examine a wide range of different types of homes including freestanding houses, apartments, townhouses, as well as boats, caravans and other less conventional homes.

Students in integrated Foundation/Prep classes (Unit F.6) and in years 1 (Unit 1.2), 2 (Unit 2.2) and 3 (Unit 3.2) are finding out about their local Aboriginal groups, in the area of their school. Students will be considering how the groups are connected to the land and what changes they have seen since they first arrived in that area, thousands of years before. Remember, if you need information about your local Aboriginal group, feel free to contact us and ask.

Years 3 to 6

Students in Year 3, doing the Unit “Exploring Climates” (Unit 3.6) are consolidating work done last week on climate zones and the biomes of Australia. This week they are focusing on matching the climate zone to the region of Australia. Students in Years 4 (Unit 4.2), 5 (Unit 5.2) and 6 (Unit 6.2) are shifting focus across to Europe in the 15th to 18th centuries – the ‘Age of Discovery’.

This sets the scene for further examinations of explorers and the research project students will undertake this term, as well as introducing students to the conditions in Europe which later led to colonisation, thereby providing some important background information for Australian history in Term 3. Students can examine Spain, Portugal and England and the role that they played in exploring the world at this time.

Science!

Sailing Ships (History + Science)

Did you know: the Understanding Our World™ program also fully covers the Science component of the Australian Curriculum at each year level, integrated with the HASS materials!

In line with the Age of Discovery explorer theme, students start their Science activity: “Ancient Sailing Ships“. A perennial favourite with students, this activity involves making a simple model sailing ship and then examining the forces acting on the ship, the properties of different parts of the ships and the materials from which they were made, examining different types of sails (square-rigged versus lateen-rigged), as well as considering the phases of matter associated with sailing ships.

Some schools set up water troughs and fans and race the ships against each other, which causes much excitement! This activity also helps students understand some of the challenges faced by explorers who travelled the world in similar vessels.

April 21, 2017

Many People Want To Talk

WOW! The response to my blog post on the future of Drupal earlier this week has been phenomenal. My blog saw more traffic in 24 hours than it normally sees in a 2 to 3 week period. Around 30 comments have been left by readers. My tweet announcing the post was the top Drupal tweet for a day. Some 50 hours later it is still number 4.

It seems to have really connected with many people in the community. I am still reflecting on everyone's contributions. There is a lot to take in. Rather than rush a follow up that responds to the issues raised, I will take some time to gather my thoughts.

One thing that is clear is that many people want to use DrupalCon Baltimore next week to discuss this issue. I encourage people to turn up with an open mind and engage in the conversation there.

A few people have suggested a BoF. Unfortunately all of the official BoF slots are full. Rather than let that be a blocker, I've decided to run an unofficial BoF on the first day. I hope this helps facilitate the conversation.

Unofficial BoF: The Future of Drupal

When: Tuesday 25 April 2017 @ 12:30-1:30pm
Where: Exhibit Hall - meet at the Digital Echidna booth (#402) to be directed to the group
What: High level discussion about the direction people think Drupal should take.
UPDATE: An earlier version of this post had this scheduled for Monday. It is definitely happening on Tuesday.

I hope to see you in Baltimore.

April 19, 2017

Drupal, We Need To Talk

Update 21 April: I've published a followup post with details of the BoF to be held at DrupalCon Baltimore on Tuesday 25 April. I hope to see you there so we can continue the conversation.

Drupal has a problem. No, not that problem.

We live in a post peak Drupal world. Drupal peaked some time during the Drupal 8 development cycle. I’ve had conversations with quite a few people who feel that we’ve lost momentum. DrupalCon attendances peaked in 2014, Google search impressions haven’t returned to their 2009 level, core downloads have trended down since 2015. We need to accept this and talk about what it means for the future of Drupal.

Technically Drupal 8 is impressive. Unfortunately the uptake has been very slow. A factor in this slow uptake is that from a developer's perspective, Drupal 8 is a new application. The upgrade path from Drupal 7 to 8 is another factor.

In the five years Drupal 8 was being developed there was a fundamental shift in software architecture. During this time we witnessed the rise of microservices. Drupal is a monolithic application that tries to do everything. Don't worry this isn't trying to rekindle the smallcore debate from last decade.

Today it is more common to see an application that is built using a handful of Laravel microservices, a couple of golang services and one built with nodejs. These applications often have multiple frontends: web (react, vuejs etc), mobile apps and an API. This is more effort to build out, but it is likely to be less effort to maintain long term.

I have heard so many excuses for why Drupal 8 adoption is so slow. After a year I think it is safe to say the community is in denial. Drupal 8 won't be as popular as D7.

Why isn't this being talked about publicly? Is it because there is a commercial interest in perpetuating the myth? Are the businesses built on offering Drupal services worried about scaring away customers? Adobe, Sitecore and others would point to such blog posts to attack Drupal. Sure, admitting we have a problem could cause some short term pain. But if we don't have the conversation we will go the way of Joomla; an irrelevant product that continues its slow decline.

Drupal needs to decide what is its future. The community is full of smart people, we should be talking about the future. This needs to be a public conversation, not something that is discussed in small groups in dark corners.

I don't think we will ever see Drupal become a collection of microservices, but I do think we need to become more modular. It is time for Drupal to pivot. I think we need to cut features and decouple the components. I think it is time for us to get back to our roots, but modernise at the same time.

Drupal has always been a content management system. It does not need to be a content delivery system. This goes beyond "Decoupled (Headless) Drupal". Drupal should become a "content hub" with pluggable workflows for creating and managing that content.

We should adopt the unix approach: do one thing and do it well. This approach would allow Drupal to be "just another service" that complements the application.

What do you think is needed to arrest the decline of Drupal? What should Drupal 9 look like? Let's have the conversation.

CNC Alloy Candelabra

While learning Fusion 360 I thought it would be fun to flex my new knowledge of cutting out curved shapes from alloy. Some donated LED fake candles were all the inspiration needed to design and cut out a candelabra. Yes, it is industrial looking. With vcarve and ball ends I could try to make it more baroque looking, but then that would require more artistic ability than a poor old programmer might have.


It is interesting working out how to fixture the cut for such creations. As of now, Fusion 360 will allow you to put tabs on curved surfaces, but you don't get to manually place them in that case. So it's a bit of fun getting things where you want them by adjusting other parameters.

Also I have noticed some issues with tabs on curves where exact multiples of the layer depth align perfectly with the top of the tab height. Making sure that case doesn't happen avoids the resulting undesired cuts. So as usual I managed to learn a bunch of stuff while making something that wasn't in my normal comfort zone.

The four candles are run off a small buck converter and wired in parallel at 3 volts to simulate the batteries they normally run off.

I can feel a gnarled brass candle base coming at some stage to help mitigate the floating candle look. Adding some melted real wax has also been suggested to give a more real look.

April 18, 2017

Patches for OpenStack Ironic Python Agent to create Buildroot images with Make

Recently I wrote about creating an OpenStack Ironic deploy image with Buildroot. Doing this manually is good because it helps to understand how it’s pieced together, however it is slightly more involved.

The Ironic Python Agent (IPA) repo has some imagebuild scripts which make building the CoreOS and TinyCore images pretty trivial. I now have some patches which add support for creating the Buildroot images, too.

The patches consist of a few scripts which wrap the manual build method and a Makefile to tie it all together. Only the install-deps.sh script requires root privileges, and then only if it detects missing dependencies; all other Buildroot tasks are run as a non-privileged user. It’s one of the great things about the Buildroot method!

Build

Again, I have included documentation in the repo, so please see there for more details on how to build and customise the image. However in short, it is as simple as:

git clone https://github.com/csmart/ironic-python-agent.git
cd ironic-python-agent/imagebuild/buildroot
make
# or, alternatively:
./build-buildroot.sh --all

These actions will perform the following tasks automatically:

  • Fetch the Buildroot Git repositories
  • Load the default IPA Buildroot configuration
  • Download and verify all source code
  • Build the toolchain
  • Use the toolchain to build:
    • System libraries and packages
    • Linux kernel
    • Python Wheels for IPA and dependencies
  • Create the kernel, initramfs and ISO images

The default configuration points to the upstream IPA Git repository, however you can change this to point to any repo and commit you like. For example, if you’re working on IPA itself, you can point Buildroot to your local Git repo and then build and boot that image to test it!

The following finalised images will be found under ./build/output/images:

  • bzImage (kernel)
  • rootfs.cpio.xz (ramdisk)
  • rootfs.iso9660 (ISO image)

These files can be uploaded to Glance for use with Ironic.

Help

To see available Makefile targets, simply run the help target:

make help

Help is also available for the shell scripts if you pass the --help option:

./build-buildroot.sh --help
./clean-buildroot.sh --help
./customise-buildroot.sh --help

Customisation

As with the manual Buildroot method, customising the build is pretty easy:

make menuconfig
# do buildroot changes, e.g. change IPA Git URL
make

I created the kernel config from scratch (via tinyconfig) and deliberately tried to balance size and functionality. It should boot on most Intel based machines (BIOS and UEFI), however hardware support like hard disk and ethernet controllers is deliberately limited. The goal was to start small and add more support as needed.

Customising the Linux kernel is also pretty easy, though:

make linux-menuconfig
# do kernel changes
make

Each time you run make, it’ll pick up where you left off and re-create your images.

Really happy for anyone to test it out and let me know what you think!

April 17, 2017

This Week in HASS – term 2, week 1

Welcome to the new school term, and we hope you all had a wonderful Easter! Many of our students are writing NAPLAN this term, so the HASS program provides a refreshing focus on something different, whilst practising skills that will help students prepare for NAPLAN without even realising it! Both literacy and numeracy are foundation skills of much of the broader curriculum and are reinforced within our HASS program as well. Meantime our younger students are focusing on local landscapes this term, while our older students are studying explorers of different continents.

Foundation to Year 3

Our youngest students (Foundation/Prep Unit F.2) start the term by looking at different types of homes. A wide selection of places can be homes for people around the world, so students can compare where they live to other types of homes. Students in integrated Foundation/Prep and Years 1 to 3 (Units F.6; 1.2; 2.2 and 3.2) start their examination of the local landscape by examining how Aboriginal people arrived in Australia 60,000 years ago. They learn how modern humans expanded across the world during the last Ice Age, reaching Australia via South-East Asia. Starting with this broad focus allows them to narrow down in later weeks, finally focusing on their local community.

Year 3 to Year 6

Students in Years 3 to 6 (Units 3.6; 4.2; 5.2 and 6.2) are looking at explorers this term. Each year level focuses on explorers of a different part of the world. Year 3 students investigate different climate zones and explorers of extreme climate areas (such as the Poles, or the Central Deserts of Australia).  Year 4 students examine Africa and South America and investigate how European explorers during the ‘Age of Discovery‘ encountered different environments, animals and people on these continents. The students start with prehistory and this week they are looking at how Ancient Egyptians and Bantu-speaking groups explored Africa thousands of years ago. They also examine Great Zimbabwe. Year 5 students are studying North America, and this week are starting with the Viking voyages to Greenland and Newfoundland, in the 10th century. Year 6 students focus on Asia, and start with a study in Economics by examining the Dutch East India Company of the 17th and 18th centuries. (Remember HASS for years 5 and 6 includes History, Geography, Civics and Citizenship and Economics and Business – we cover it all, plus Science!)

You might be wondering how on earth we integrate such apparently disparate topics for multi-year classes! Well, our Teacher Handbooks are full of tricks to make teaching these integrated classes a breeze. The Teacher Handbooks with lesson plans and hints for how to integrate across year levels are included, along with the Student Workbooks, Model Answers and Assessment Guides, within our bundles for each unit. Teachers using these units have been thrilled at how easy it is to use our material in multi-year level classes, whilst knowing that each student is covering curriculum-appropriate material for their own year level.

More KVM Modules Configuration

Last year I blogged about blacklisting a video driver so that KVM virtual machines didn’t go into graphics mode [1]. Now I’ve been working on some other things to make virtual machines run better.

I use the same initramfs for the physical hardware as for the virtual machines. So I need to remove modules that are needed for booting the physical hardware from the VMs as well as other modules that get dragged in by systemd and other things. One significant saving from this is that I use BTRFS for the physical machine and the BTRFS driver takes 1M of RAM!

The first thing I did to reduce the number of modules was to edit /etc/initramfs-tools/initramfs.conf and change “MODULES=most” to “MODULES=dep”. This significantly reduced the number of modules loaded and also stopped the initramfs from probing for a non-existent floppy drive, which added about 20 seconds to the boot. Note that this will result in your initramfs not supporting different hardware. So if you plan to take a hard drive out of your desktop PC and install it in another PC this could be bad for you, but for servers it’s OK as that sort of upgrade is uncommon for servers and only done with some planning (such as creating an initramfs just for the migration).

I put the following rmmod commands in /etc/rc.local to remove modules that are automatically loaded:
rmmod btrfs
rmmod evdev
rmmod lrw
rmmod glue_helper
rmmod ablk_helper
rmmod aes_x86_64
rmmod ecb
rmmod xor
rmmod raid6_pq
rmmod cryptd
rmmod gf128mul
rmmod ata_generic
rmmod ata_piix
rmmod i2c_piix4
rmmod libata
rmmod scsi_mod

In /etc/modprobe.d/blacklist.conf I have the following lines to stop drivers being loaded. The first line is to stop the video mode being set and the rest are just to save space. One thing that inspired me to do this is that the parallel port driver gave a kernel error when it loaded and tried to access non-existent hardware.
blacklist bochs_drm
blacklist joydev
blacklist ppdev
blacklist sg
blacklist psmouse
blacklist pcspkr
blacklist sr_mod
blacklist acpi_cpufreq
blacklist cdrom
blacklist tpm
blacklist tpm_tis
blacklist floppy
blacklist parport_pc
blacklist serio_raw
blacklist button

On the physical machine I have the following in /etc/modprobe.d/blacklist.conf. Most of this is to prevent loading of filesystem drivers when making an initramfs. I do this because I know there’s never going to be any need for CDs, parallel devices, graphics, or strange block devices in a server room. I wouldn’t do any of this for a desktop workstation or laptop.
blacklist ppdev
blacklist parport_pc
blacklist cdrom
blacklist sr_mod
blacklist nouveau

blacklist ufs
blacklist qnx4
blacklist hfsplus
blacklist hfs
blacklist minix
blacklist ntfs
blacklist jfs
blacklist xfs

April 16, 2017

Creating an OpenStack Ironic deploy image with Buildroot

Edit: See this post on how to automate the builds using buildimage scripts.

Ironic is an OpenStack project which provisions bare metal machines (as opposed to virtual).

A tool called Ironic Python Agent (IPA) is used to control and provision these physical nodes, performing tasks such as wiping the machine and writing an image to disk. This is done by booting a custom Linux kernel and initramfs image which runs IPA and connects back to the Ironic Conductor.

The Ironic project supports a couple of different image builders, including CoreOS, TinyCore and others via Disk Image Builder.

These have their limitations, however, for example they require root privileges to be built and, with the exception of TinyCore, are all hundreds of megabytes in size. One of the downsides of TinyCore is limited hardware support and although it’s not used in production, it is used in the OpenStack gating tests (where it’s booted in virtual machines with ~300MB RAM).

Large deployment images mean a longer delay in the provisioning of nodes, and so I set out to create a small, customisable image that solves the problems of the other existing images.

Buildroot

I chose to use Buildroot, a well-regarded, simple-to-use tool for building embedded Linux images.

So far it has been quite successful as a proof of concept.

Customisation can be done via the menuconfig system, similar to the Linux kernel.

Screenshot: Buildroot menuconfig

Source code

All of the source code for building the image is up on my GitHub account in the ipa-buildroot repository. I have also written up documentation which should walk you through the whole build and customisation process.

The ipa-buildroot repository contains the IPA-specific Buildroot configurations and tracks upstream Buildroot in a Git submodule. By using upstream Buildroot and our external repository, the IPA Buildroot configuration comes up as an option for a regular Buildroot build.
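
Building against an external tree generally looks something like the following with upstream Buildroot; the repository layout and defconfig name below are placeholders to illustrate the idea, so check the repository documentation for the real ones:

# clone the repo and its Buildroot submodule (URL omitted; see the GitHub repository above)
git clone --recursive <ipa-buildroot repository URL>
cd ipa-buildroot/buildroot

# point Buildroot at the external tree; the IPA config should appear in this list
make BR2_EXTERNAL=.. list-defconfigs

# select it (defconfig name is a placeholder) and build
make BR2_EXTERNAL=.. ipa_defconfig
make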

Screenshot: IPA in the list of Buildroot default configs

Buildroot will compile the kernel and initramfs, then post-build scripts clone the Ironic Python Agent repository and create Python wheels for the target.

This keeps the build flexible with respect to the version of Ironic Python Agent you want to use (you can specify the location and branch of the ironic-python-agent and requirements repositories).

Screenshot: setting the Ironic Python Agent and requirements location and Git version

I created the kernel config from scratch (using tinyconfig) and deliberately tried to balance size and functionality. It should boot on most Intel-based machines (BIOS and UEFI), however support for hardware such as hard disk and Ethernet controllers is deliberately limited. The goal was to start small and add more support as needed.

By using Buildroot, customising the Linux kernel is pretty easy! You can just run this to configure the kernel and rebuild your image:

make linux-menuconfig && make
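
If the build succeeds, Buildroot normally places the resulting kernel and root filesystem images under output/images/, which is a quick way to confirm the artefacts were produced (this is the standard Buildroot output location, not something specific to this project):

ls -lh output/images/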

If this interests you, please check it out! Any suggestions are welcome.

April 13, 2017

Automatically renewing Let's Encrypt TLS certificates on Debian using Certbot

I use Let's Encrypt TLS certificates on my Debian servers along with the Certbot tool. Since I use the "temporary webserver" method of proving domain ownership via the ACME protocol, I cannot use the cert renewal cronjob built into Certbot.

To disable the built-in cronjob, I ran the following:

systemctl disable certbot.service
systemctl disable certbot.timer
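
Disabling the units only stops them from being started at boot; if the timer is already active it may also be worth stopping it and confirming nothing certbot-related is still scheduled. This is an extra check on my part, not from the original setup:

systemctl stop certbot.timer
systemctl list-timers | grep certbot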

Then I put my own renewal script in /etc/cron.daily/certbot-renew:

#!/bin/bash

# Renew the certs, stopping Apache while certbot's temporary webserver
# needs the port and restarting it afterwards.
/usr/bin/certbot renew --quiet --pre-hook "/bin/systemctl stop apache2.service" --post-hook "/bin/systemctl start apache2.service"

# Commit any renewed certs to the etckeeper repository and only print
# the diffstat (which cron emails me) when something actually changed.
pushd /etc/ > /dev/null
/usr/bin/git add letsencrypt ejabberd
DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
if [ -n "$DIFFSTAT" ] ; then
    /usr/bin/git commit --quiet -m "Renewed letsencrypt certs"
    echo "$DIFFSTAT"
fi
popd > /dev/null

# Generate the right certs for ejabberd and znc
if test /etc/letsencrypt/live/jabber-gw.fmarier.org/privkey.pem -nt /etc/ejabberd/ejabberd.pem ; then
    cat /etc/letsencrypt/live/jabber-gw.fmarier.org/privkey.pem /etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem > /etc/ejabberd/ejabberd.pem
fi
cat /etc/letsencrypt/live/irc.fmarier.org/privkey.pem /etc/letsencrypt/live/irc.fmarier.org/fullchain.pem > /home/francois/.znc/znc.pem

It temporarily stops my Apache webserver while it renews the certificates and then only outputs something to STDOUT if certs have been renewed (since my cronjob will email me any output).

Since I'm using etckeeper to keep track of config changes on my servers, my renewal script also commits to the repository if any certs have changed.

Finally, since my XMPP server and IRC bouncer need the private key and the full certificate chain to be in the same file, I regenerate these files at the end of the script. In the case of ejabberd, I only do so if the certificates have actually changed, since overwriting ejabberd.pem changes its timestamp and triggers an fcheck notification (fcheck watches all files under /etc).

External Monitoring

In order to catch mistakes or oversights, I use ssl-cert-check to monitor my domains once a day:

ssl-cert-check -s fmarier.org -p 443 -q -a -e francois@fmarier.org

I also signed up with Cert Spotter which watches the Certificate Transparency log and notifies me of any newly-issued certificates for my domains.

In other words, I get notified:

  • if my cronjob fails and a cert is about to expire, or
  • as soon as a new cert is issued.

The whole thing seems to work well, but if there's anything I could be doing better, feel free to leave a comment!

April 08, 2017

Speaking in April 2017

It’s been a while since I’ve blogged (I will have to catch up soon), but here are a few upcoming appearances:

  • How we use MySQL today – April 10 2017 – New York MySQL meetup. I am almost certain this will be very interesting with the diversity of speakers and topics.
  • Percona Live 2017 – April 24-27 2017 – Santa Clara, California. This is going to be huge, as it’s expanded beyond just MySQL to include MongoDB, PostgreSQL, and other open source databases. It might even be the conference with the largest time series track out there. Use code COLIN30 for the best discount at registration.

I will also be in attendance at the MariaDB Developer’s (Un)Conference, and M|17 that follows.

April 06, 2017

Remote Presentations

Living in the middle of nowhere and working most of my hours in the evenings, I have few opportunities to attend events in person, let alone deliver presentations. As someone who likes to share knowledge and present at events, this is a problem. My workaround has been presenting remotely. Many of my talks are available in a playlist on my YouTube channel.

I've been doing remote presentations for many years. During this time I have learned a lot about what it takes to make a remote presentation successful.

Preparation

When scheduling a remote session you should make sure there is enough time for a test before your scheduled slot. Personally I prefer presenting after lunch as it allows an hour or so for dealing with any gremlins. The test presentation should use the same machines and connections you'll be using for your presentation.

I prefer using Hangouts On Air for my presentations. This allows me to stream my session to the world and have it recorded for future reference. I review every one of my recorded talks to see what I can do better next time.

Both sides of the connection should use wired connections. Wi-Fi, especially at conferences, can be flaky. Organisers should ensure that all presentation machines are using Ethernet, and if possible it should be on a separate VLAN.

Tips for Presenters

Presenting to a remote audience is very different to presenting in front of a live audience. When presenting in person you're able to focus on people in the audience who seem to be really engaged with your presentation or scan the crowd to see if you're putting people to sleep. Even if there is a webcam on the audience it is likely to be grainy and in a fixed position. It is also difficult to pace when presenting remotely.

When presenting in person your slides will be displayed in full screen mode, often with a presenter view in your application of choice. Most remote presentation tools don't allow you to run your slides in full screen mode, which makes things harder for the presenter: transitions won't work, videos won't autoplay, and any links Keynote (or PowerPoint) opens will open in a new window that isn't being shared, which makes demos trickier. If you don't hide the slide thumbnails that remind you of what is coming next, the audience will see them too. Recently I worked out that printing the thumbnails avoids revealing the punchlines prematurely.

Find out as much information as possible about the room your presentation will be held in. How big is it? What is the seating configuration? Where is the screen relative to where the podium is?

Tips for Organisers

Event organisers are usually flat out on the day of the event. Having to deal with a remote presenter adds to the workload. Some preparation can make life easier for the organisers. Well before the event day make sure someone is nominated to be the point of contact for the presenter. If possible share the details (name, email and mobile number) for the primary contact and a fallback. This avoids the presenter chasing random people from the organising team.

On the day of the event communicate delays/schedule changes to the presenter. This allows them to be ready to go at the right time.

It is always nice for the speaker to receive a swag bag and name tag in the mail. If you can afford to send this, your speaker will always appreciate it.

Need a Speaker?

Are you looking for a speaker to talk about Drupal, automation, devops, workflows or open source? I'd be happy to consider speaking at your event. If your event doesn't have a travel budget to fly me in, then I can present remotely. To discuss this further please get in touch using my contact form.

April 05, 2017

'Advanced Computing': An International Journal of Plagiarism

Advanced Computing: An International Journal was a publication that I was considering writing for. However, it is almost certainly a predatory open-access journal that seeks a "publication charge" without even performing minimal editorial checks.

I can just tolerate the fact that the most recent issue has numerous spelling and grammatical errors, as I believe that English is not the first language of the authors. These should have been caught by the editors, but we'll let that slide in light of a far greater crime: widespread plagiarism.

The fact that the editors clearly didn't even check for this is an inexcusable oversight.

I opened this correspondence with the editors in the hope that others will find it prior to submitting, or even considering submission, to the journal in question. I also hope the editors take the opportunity to dramatically improve their editorial standards.


Light to Light, Day Three

The third and final day of the Light to Light Walk at Ben Boyd National Park. This was a shorter (8 km), easier walk. A nice way to finish the journey.



Interactive map for this route.
Tags for this post: events pictures 20170313 photo scouts bushwalk
Related posts: Light to Light, Day Two; Exploring the Jagungal; Light to Light, Day One; Scout activity: orienteering at Mount Stranger; Potato Point
