Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

September 17, 2019

LUV October 2019 Main Meeting and AGM

Oct 1 2019 19:00
Oct 1 2019 21:00
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

NOTE: The library closes at 7pm so arrivals after that time will need to contact Andrew on (0421) 775 358 or any other attendee for admission.

Speakers:

Many of us like to go for dinner nearby after the meeting, typically at Brunetti's or Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.

October 1, 2019 - 19:00


Software Freedom Day 2019

Sep 21 2019 13:00
Sep 21 2019 18:00
Location: 
Electron Workshop, 31 Arden Street North Melbourne 3051

It's time once again to get excited about all the benefits that Free and Open Source Software have given us over the past year and get together to talk about how Freedom and Openness can improve our human rights, our privacy, our security and our communities. It's Software Freedom Day!

Linux Users of Victoria is a subcommittee of Linux Australia.

September 21, 2019 - 13:00


September 16, 2019

FreeDV 2020 over the QO-100 Satellite

Gerhard (OE3GBB), Steve (K5OKC) and I have been working on FreeDV 2020 over the Es’hail 2/QO-100 satellite. This satellite is in geosynchronous orbit and has a linear transponder. It’s designed for SSB so has a narrow bandwidth, which rules out most digital voice modes – except FreeDV. For example, FreeDV 2020 can send 8 kHz wide speech over just 1600 Hz of RF bandwidth. A linear amplifier also means the OFDM waveforms used by FreeDV will pass OK, as long as your transmit system is linear.

Modem Mods

Gerhard’s initial experiments showed that FreeDV 1600 worked well, but FreeDV 2020 was breaking up and losing sync. We guessed that this was due to significant phase noise on the channel, from the many up and down conversion steps and the transponder itself. Fortunately the SNR was quite high.

Steve and I modified the OFDM modem used for FreeDV 2020 to handle this. This modem had been designed for coherent demodulation on very low SNR HF fading channels. The phase tracking was designed for HF channels with a bandwidth of a few Hz. As a first step we added a high bandwidth option, then moved to differential demodulation. This allows us to handle faster phase shifts (i.e. more phase noise), at the expense of reduced low SNR performance. This is an acceptable trade off for this channel as we have plenty of SNR.
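
For readers unfamiliar with the trade-off, here is a rough numpy sketch (my own illustration, not the actual FreeDV OFDM modem code) of why differential demodulation tolerates phase noise: the data rides on the phase difference between successive symbols, so phase drift that is common to adjacent symbols largely cancels when each symbol is multiplied by the conjugate of the previous one.

import numpy as np

rng = np.random.default_rng(1)
nsym = 1000

# DQPSK-style toy example: data selects one of four phase increments
data = rng.integers(0, 4, nsym)
increments = np.exp(1j * np.pi / 2 * data)
tx = np.cumprod(increments)                 # transmitted symbols

# Channel: random-walk phase noise plus a little additive noise
phase_noise = np.cumsum(rng.normal(0, 0.05, nsym))
noise = 0.02 * (rng.normal(size=nsym) + 1j * rng.normal(size=nsym))
rx = tx * np.exp(1j * phase_noise) + noise

# Differential demodulation: compare each symbol with the previous one,
# so the slowly varying channel phase cancels out
diff = rx[1:] * np.conj(rx[:-1])
detected = np.round(np.angle(diff) / (np.pi / 2)).astype(int) % 4

errors = np.count_nonzero(detected != data[1:])
print("symbol errors: %d / %d" % (errors, nsym - 1))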

Gerhard also had some set-up problems getting everything to run on one machine – FreeDV 2020 likes a powerful, modern CPU due to the LPCNet codec.

I now have the rather complex Windows build process for FreeDV fully scripted (thanks Richard and Danilo). This means I can develop on Linux, then run a Docker script that rebuilds everything for Windows, packages it in an installer, and pops it up on my web site. Remarkably, it then produces the same results as the Linux version. This takes a lot of pain out of my life, and makes it easy for others to test innovations rapidly.

Here is a sample of the decoded audio, from a QSO between Gerhard and Dani (EA4GPZ):

The quality is quite high, and through a nice set of speakers the wide, 8kHz audio bandwidth is very pleasant. However I can hear some frame rate modulation, and I’ve heard similar on some other 2020 samples over HF channels from the US test team. I’ll explore that at some stage.

Gerhard’s QO100 Station

Team Work

I am enjoying working with Gerhard and Steve on this project. We are roughly equidistant around the globe, but the time zone shift allows us to bounce new software versions around for testing on 12 hour cycles. Working as a team, we investigated the problem and greatly improved the performance of FreeDV 2020 over the QO-100 satellite. We worked very carefully, debugging tricky problems, collecting and comparing samples, and discussing our results via email.

We have applied new speech coding technology (the neural net/machine learning based LPCNet), modified and optimised a HF modem, and sent our signals through a new satellite transponder. This is real experimental radio!

Our next step is to look at improving modem acquisition, which is also likely to require tuning for this channel.

Reading Further

Es’hail 2/QO-100 satellite
WebSDR for QO-100 satellite
FreeDV 2020 First On Air Tests
Steve Ports an OFDM modem from Octave to C

September 15, 2019

Prometheus 2.12, query logging, and startup failures on macOS


Prometheus v2.12 added active query logging. The basic idea is that there is an mmapped JSON file that contains all of the queries currently running. If Prometheus were to crash, that file would therefore be a list of the queries running at the time of the crash. Overall, not a bad idea.

Some friends had recently added Prometheus to their development environments. This is wired up to Grafana dashboards for their microservices, and Prometheus is configured to store 14 days' worth of time series data via a persistent volume from the developer desktops. We did this because it is valuable for the developers to be able to see the history of metrics before and after their changes.

Now we have a developer using macOS as their primary development platform, and since Prometheus 2.12 it hasn’t worked. Specifically, this developer is using Parallels to provide the Docker virtual machine on his Mac. You can summarise the startup for Prometheus in the dev environment like this:

$ docker run ...stuff...
...snip...
level=error ts=2019-09-15T02:20:23.520Z caller=query_logger.go:94 component=activeQueryTracker msg="Failed to mmap" file=/prometheus-data/data/queries.active Attemptedsize=20001 err="invalid argument"
panic: Unable to create mmap-ed active query log

goroutine 1 [running]:
github.com/prometheus/prometheus/promql.NewActiveQueryTracker(0x7fff9917af38, 0x15, 0x14, 0x2a6b7c0, 0xc00003c7e0, 0x2a6b7c0)
	/app/promql/query_logger.go:112 +0x4d2
main.main()
	/app/cmd/prometheus/main.go:361 +0x52bd

And here’s the underlying problem — because of the way the persistent data is mapped into this container (via Parallels sharing in this case), the mmap of the active queries file fails and Prometheus fails to start.

In other words, since Prometheus 2.12 your Prometheus data files have to be stored on a filesystem which supports mmap. Additionally, there is no flag to just disable the active query logger.
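
If you want to check whether a particular volume-mapped directory supports mmap before pointing Prometheus at it, a tiny probe script does the job. This is a hypothetical helper of my own, not part of Prometheus:

#!/usr/bin/python3
# Probe whether files in a given directory can be mmapped.

import mmap
import os
import sys

def supports_mmap(directory):
    path = os.path.join(directory, ".mmap-probe")
    try:
        with open(path, "w+b") as f:
            f.truncate(4096)               # give the file some size to map
            m = mmap.mmap(f.fileno(), 4096)
            m[0:5] = b"hello"              # exercise a write through the mapping
            m.close()
        return True
    except (OSError, ValueError):
        return False
    finally:
        try:
            os.unlink(path)
        except OSError:
            pass

directory = sys.argv[1] if len(sys.argv) > 1 else "."
print("%s: mmap %s" % (directory, "works" if supports_mmap(directory) else "fails"))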

So how do we work around this? Well, here’s a horrible workaround — in the data directory that is volume mapped into the container, create a symlink to a path that is mmap-able inside the Docker container, even if that path doesn’t exist outside the container. For example, given that we store the Prometheus time series at $CONFIG/prometheus-data:

$ ln -s /tmp/queries.active "$CONFIG/prometheus-data/queries.active"

Note that /tmp/queries.active does not exist on the developer’s Mac. Prometheus now starts and it’s puppies and kittens the whole way down.


September 14, 2019

VDSL versus HF Radio

I’m putting up a 40M dipole. When I Tx on 40m (50W peak) my Internet drops out. Sometimes it comes back, other times the modem loses sync. The dipole has a balun, and is nicely tuned.

I tried some ferrites with several turns on the modem VDSL and power cables which improved the situation a little. But I still get a momentary drop out of Internet on PTT, and if I try hard enough I can still lose sync on the modem.

Now I have NBN (Australian National Broadband Network) with a VDSL link over traditional copper phone lines to a “node” several hundred metres away. Turns out VDSL uses bandwidth up to 30 MHz … so I guess I’m getting right into its passband. Old school ADSL only used a few MHz. The phone line used for this service is 50 years old and has significant differential to common mode conversion. It’s not much of a transmission line. But probably a pretty good antenna!

I built a little jig with a transformer to couple the differential signal to my spectrum analyser and take a look:

Lotsa turns on the primary, one turn on the secondary, some core I found in the junk box. I adjusted the coupling capacitors in line with both arms of the primary so that the modem didn’t lose sync when I plugged it in (about 5pF). Also in this photo is the series LC circuit, but disconnected (open) at this stage.

Sure enough, I could see Rx energy from the node to my modem at around 7MHz, and other energy out to 12MHz. In the 7MHz region, I could see the Rx signal from the “node” at -60dBm. When I Tx SSB on 7.18 MHz my SSB signal was -30dBm. No wonder the modem is choking.

After some experimentation, I came up with a 7MHz LC series resonant circuit connected across the phone line. When the modem does its training thing, it sees a short circuit around 7MHz and ignores that region as no good. So when I transmit in that region, there is no modem signal to interfere with.

I started with a 800nH/600pF filter. Xc and Xl are a rather low 37 ohms at resonance, and just a bit higher than that above resonance (e.g. at 8-12 MHz), attenuating a lot of the HF energy. So it was basically an LPF, killing anything above 6 MHz. This stopped the drop outs, but my Internet downstream bandwidth dropped from 55 to 24 Mbps.

After some fiddling with a spreadsheet I came up with a 5uH/100pF series LC notch filter that works a bit better. This has a few hundred ohms of Xc above resonance, which results in just a few dB attenuation at 8-12 MHz. This obtained 38 Mbps downstream. Upstream was the same (24 Mbps) as with no filter. Good enough.
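
If you want to sanity check component values for your own line, the resonant frequency and reactances fall straight out of the usual series LC formulas. Here is a quick script of my own (not from the original post) that reproduces the roughly 37 ohm and couple-of-hundred-ohm figures above:

import math

def series_lc(L, C, f_check):
    f0 = 1 / (2 * math.pi * math.sqrt(L * C))     # series resonant frequency
    x0 = math.sqrt(L / C)                         # |XL| = |XC| at resonance
    x_net = 2 * math.pi * f_check * L - 1 / (2 * math.pi * f_check * C)
    return f0, x0, x_net

for name, L, C in (("800nH/600pF", 800e-9, 600e-12),
                   ("5uH/100pF", 5e-6, 100e-12)):
    f0, x0, x_net = series_lc(L, C, 10e6)
    print("%s: f0 = %.2f MHz, X at resonance = %.0f ohms, "
          "net series X at 10 MHz = %.0f ohms" % (name, f0 / 1e6, x0, x_net))

Both combinations resonate close to 7 MHz, but the 5uH/100pF version presents a much higher impedance across the line away from resonance, which is why it costs far less downstream bandwidth.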

The inductor is 9 turns on a F37-61 core. Make sure you use a material suitable for high Q inductors. I initially used the wrong core material and couldn’t get a decent notch.

Here is a sweep of the notch filter:

I put 560 ohm resistors in series with the tracking generator output and spectrum analyser input to approximate the line impedance using this jig:

Here is a plot of the system in action:

The yellow plot is the original, unfiltered VDSL signal. At the same time I’m transmitting SSB. You can see my SSB signal on 7.18 MHz (yellow peak above the “1”).

Purple is with the series LC notch filter installed. You can see the notch left of the “1” at 7MHz. The “node” has worked out 7-8MHz is a dud band so isn’t sending any information. So nothing to interfere with when I PTT SSB. I’m not sending a SSB signal in this plot.

Note also the 8-12 MHz purple (filtered) is just a few dB lower than the yellow (unfiltered). So the notch filter isn’t wiping out the HF signals.

These plots show a mixture of Tx (-10dBm) from my modem, and Rx (underlying gentle downwards slope) – the signal from the “node”. I assume it’s full duplex, we just can’t see the Rx signal most of the time. I am sampling the combined signal next to the modem, so Tx dominates. You can see the Rx signal better when the modem is training.

For some reason my modem doesn’t Tx in the 6-8MHz band. Probably a good thing for RFI.

Results

Without the filter I get immediate interruptions to pings and loss of modem sync after 20 seconds. With the filter I’ve hammered it for the last few weeks with SSB and FreeDV signals but no interruptions in pings or the received audio and waterfall from a local KiwiSDR.

There is a hit on my downstream bandwidth, but it’s not significant for me. Much nicer to be able to transmit on 40m and not have the Internet break!

Here is the finished filter, installed near the modem in some heat shrink:

I’d be interested to see if this idea will work at other sites. Due to the random nature of the phone lines no two VDSL installations are the same. If you do try it, carefully check the tuning of the notch filter.

I’ve also seen suggestions of using a quarter wave stub (about 10m of phone cable) to get the same effect. This is a neat idea, as you could just buy a 10m phone extension lead, and plug it in parallel with your VDSL line. However once again – carefully check the tuning of the stub – phone cable is messy, uncalibrated stuff!
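
For the curious, the stub length is just an electrical quarter wavelength scaled by the cable's velocity factor, and for random phone cable that velocity factor really is anyone's guess. A rough calculation of my own (with the velocity factor as an assumed parameter) shows why checking the tuning matters:

def quarter_wave_stub_length(freq_hz, velocity_factor):
    """Physical length in metres of an electrical quarter-wave stub."""
    c = 299792458.0                       # speed of light, m/s
    return (c / freq_hz) * velocity_factor / 4.0

# Free-space quarter wave at 7.1 MHz is about 10.6 m; any realistic
# velocity factor (assumed values below) shortens the physical length.
for vf in (1.0, 0.8, 0.7):
    print("VF = %.1f: %.1f m" % (vf, quarter_wave_stub_length(7.1e6, vf)))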

This was an interesting little project, with a satisfying result. I quite like learning about RF, and (re) learnt about the trade-offs around reactance at resonance, transmission lines, and inductor core material.

Thanks for help and useful comments from AREG members on their mailing list. Several other AREG members are also suffering from the same problem, so I imagine it’s widespread in other countries that use VDSL.

September 13, 2019

On the airwaves

As of this year, I’m now an amateur radio operator! Callsign VK2FAAS, foundation licence. It’s something I’ve always had an interest in doing. As a kid, I had some toy 27 MHz radios with barely 20 metres of range. Then, I got a job working as a sysadmin at a wireless ISP where we built long-distance wireless networks. And, while at LCA2013 I attended a ham radio BoF (“birds of a feather”) session, where some operators made a DX (long distance) contact by fashioning an antenna out of some wire tied to a tree.

September 11, 2019

Deploying and Configuring Vim on NixOS

NixOS Gears by Craige McWhirter

I had a need to deploy vim and my particular preferred configuration both system-wide and across multiple systems (via NixOps).

I started by creating a file named vim.nix that would be imported into either /etc/nixos/configuration.nix or an appropriate NixOps Nix file. This example is a stub that shows a number of common configuration items:

vim.nix:

with import <nixpkgs> {};

vim_configurable.customize {
  name = "vim";   # Specifies the vim binary name.
  # Below you can specify what usually goes into `~/.vimrc`
  vimrcConfig.customRC = ''
    " Preferred global default settings:
    set number                    " Enable line numbers by default
    set background=dark           " Set the default background to dark or light
    set smartindent               " Automatically insert extra level of indentation
    set tabstop=4                 " Default tabstop
    set shiftwidth=4              " Default indent spacing
    set expandtab                 " Expand [TABS] to spaces
    syntax enable                 " Enable syntax highlighting
    colorscheme solarized         " Set the default colour scheme
    set t_Co=256                  " Use 256 colours in Vim
    set spell spelllang=en_au     " Default spell checking language
    hi clear SpellBad             " Clear any unwanted default settings
    hi SpellBad cterm=underline   " Set the spell checking highlight style
    hi SpellBad ctermbg=NONE      " Set the spell checking highlight background
    match ErrorMsg '\s\+$'        " Highlight trailing whitespace

    let g:airline_powerline_fonts = 1   " Use powerline fonts
    let g:airline_theme='solarized'     " Set the airline theme

    set laststatus=2   " Set up the status line so it's coloured and always on

    " Add more settings below
  '';
  # store your plugins in Vim packages
  vimrcConfig.packages.myVimPackage = with pkgs.vimPlugins; {
    start = [               # Plugins loaded on launch
      airline               # Lean & mean status/tabline for vim that's light as air
      solarized             # Solarized colours for Vim
      vim-airline-themes    # Collection of themes for airline
      vim-nix               # Support for writing Nix expressions in vim
    ];
    # manually loadable by calling `:packadd $plugin-name`
    # opt = [ phpCompletion elm-vim ];
    # To automatically load a plugin when opening a filetype, add vimrc lines like:
    # autocmd FileType php :packadd phpCompletion
  };
}

I then needed to import this file into my system packages stanza:

  environment = {
    systemPackages = with pkgs; [
      someOtherPackages   # Normal package listing
      (
        import ./vim.nix
      )
    ];
  };

This will then install and configure Vim as you've defined it.

If you'd like to give this build a run in a non-production space, I've written vim_vm.nix, with which you can build a VM, ssh into it afterwards and test the Vim configuration:

$ nix-build '<nixpkgs/nixos>' -A vm --arg configuration ./vim_vm.nix
...
$ export QEMU_OPTS="-m 4192"
$ export QEMU_NET_OPTS="hostfwd=tcp::18080-:80,hostfwd=tcp::10022-:22"
$ ./result/bin/run-vim-vm-vm

Then, from another terminal:

$ ssh nixos@localhost -p 10022

And you should be in a freshly baked NixOS VM with your Vim config ready to be used.

There's an always current example of my production Vim configuration in my mio-ops repo.

September 10, 2019

Deploying Gitea on NixOS

NixOS Gitea by Craige McWhirter

I've been using GitLab for years but recently opted to switch to Gitea, primarily because of timing and because I was looking for something more lightweight, not because of any particular problems with GitLab.

To deploy Gitea via NixOps I chose to craft a Nix file (example) that would be included in a host definition. The definition linked above (and shown below) provides a deployment of Gitea, using Postgres, Nginx, ACME certificates and ReStructured Text rendering with syntax highlighting.

version-management/gitea_for_NixOps.nix:

    { config, pkgs, lib, ... }:

    {

      services.gitea = {
        enable = true;                               # Enable Gitea
        appName = "MyDomain: Gitea Service";         # Give the site a name
        database = {
          type = "postgres";                         # Database type
          passwordFile = "/run/keys/gitea-dbpass";   # Where to find the password
        };
        domain = "source.mydomain.tld";              # Domain name
        rootUrl = "https://source.mydomain.tld/";    # Root web URL
        httpPort = 3001;                             # Provided unique port
        extraConfig = let
          docutils =
            pkgs.python37.withPackages (ps: with ps; [
              docutils                               # Provides rendering of ReStructured Text files
              pygments                               # Provides syntax highlighting
          ]);
        in ''
          [mailer]
          ENABLED = true
          FROM = "gitea@mydomain.tld"
          [service]
          REGISTER_EMAIL_CONFIRM = true
          [markup.restructuredtext]
          ENABLED = true
          FILE_EXTENSIONS = .rst
          RENDER_COMMAND = ${docutils}/bin/rst2html.py
          IS_INPUT_FILE = false
        '';
      };

      services.postgresql = {
        enable = true;                # Ensure postgresql is enabled
        authentication = ''
          local gitea all ident map=gitea-users
        '';
        identMap =                    # Map the gitea user to postgresql
          ''
            gitea-users gitea gitea
          '';
      };

      services.nginx = {
        enable = true;                                          # Enable Nginx
        recommendedGzipSettings = true;
        recommendedOptimisation = true;
        recommendedProxySettings = true;
        recommendedTlsSettings = true;
        virtualHosts."source.MyDomain.tld" = {                  # Gitea hostname
          enableACME = true;                                    # Use ACME certs
          forceSSL = true;                                      # Force SSL
          locations."/".proxyPass = "http://localhost:3001/";   # Proxy Gitea
        };
      };

      security.acme.certs = {
          "source.mydomain.tld".email = "anEmail@mydomain.tld";
      };

    }

This line from the above file should stand out:

              passwordFile = "/run/keys/gitea-dbpass";   # Where to find the password

Where does that file come from? It's pulled from a secrets.nix file (example) that for this example, could look like this:

secrets.nix:

    { config, pkgs, ... }:

    {
      deployment.keys = {
        # An example set of keys to be used for the Gitea service's DB authentication
        gitea-dbpass = {
          text        = "uNgiakei+x>i7shuiwaeth3z";   # Password, generated using pwgen -yB 24
          user        = "gitea";                      # User to own the key file
          group       = "wheel";                      # Group to own the key file
          permissions = "0640";                       # Key file permissions
        };
      };
    }

The file's path /run/keys/gitea-dbpass is determined by the attribute names: deployment.keys provides the base path /run/keys, and the next element, gitea-dbpass, is a descriptive name chosen by the stanza's author to describe the key's use, which also provides the final file name.

Now that we have described the Gitea service in gitea_for_NixOps.nix and the required credentials in secrets.nix we need to pull it all together for deployment. We achieve that in this case by importing both these files into our existing host definition:

myhost.nix:

    {
      myhost =
        { config, pkgs, lib, ... }:

        {

          imports =
            [
              ./secrets.nix                               # Import our secrets
              ./version-management/gitea_for_NixOps.nix   # Import Gitea
            ];

          deployment.targetHost = "192.168.132.123";   # Target's IP address

          networking.hostName = "myhost";              # Target's hostname.
        };
    }

To deploy Gitea to your NixOps managed host, you merely run the deploy command for your already configured host and deployment, which would look like this:

    $ nixops deploy -d MyDeployment --include myhost

You should now have a running Gitea server and be able to create an initial admin user.

In my nixos-examples repo I have a version-management directory with some example files and a README with information and instructions. You can use two of the files to generate a Gitea VM to take a quick poke around. There is also an example of how you can deploy Gitea in production using NixOps, as per this post.

If you wish to dig a little deeper, I have my production deployment over at mio-ops.

September 09, 2019

Monitoring OpenWrt with collectd, InfluxDB and Grafana

In my previous blog post I showed how to set up InfluxDB and Grafana (and Prometheus). This is how I configured my OpenWrt devices to provide monitoring and graphing of my network.

OpenWrt includes support for collectd (and even graphing inside Luci web interface) so we can leverage this and send our data across the network to the monitoring host.

OpenWrt stats in Grafana

Install and configure packages on OpenWrt

Log into your OpenWrt devices and install the required packages.

opkg update
opkg install luci-app-statistics collectd collectd-mod-cpu \
collectd-mod-interface collectd-mod-iwinfo \
collectd-mod-load collectd-mod-memory collectd-mod-network collectd-mod-uptime
/etc/init.d/luci_statistics enable
/etc/init.d/collectd enable

Next, log into your device’s OpenWrt web interface and you should see a new Statistics menu at the top. Hover over this and click on Setup so that we can configure collectd.

Add the Hostname field and enter in the device’s hostname (or some name you want).

Click on General plugins and make sure that Processor, System Load, Memory and Uptime are all enabled. Hit Save & Apply.

Under Network plugins, ensure Interfaces is enabled and select the interfaces you want to monitor (lan, wan, wifi, etc).

Still under Network plugins, also ensure Wireless is enabled but don’t select any interfaces (it will work it out). Hit Save & Apply (I don’t bother with the Ping plugin).

Click on Output plugins and ensure Network is enabled so that we can stream metrics to InfluxDB. All you need to do is add an entry under server interfaces that points to the IP address of your monitor server (which is running InfluxDB with the collectd listener enabled). Hit Save & Apply.

Finally, you can leave the RRDTool plugin as it is, or disable it if you want to (it will stop showing graphs in Luci if you do, but we’re using Grafana anyway and you’ll have less load on your router). If you do enable it, make sure it is writing data to tmpfs to avoid wearing out your flash (this is the default configuration).

That’s your OpenWrt configuration done!

Loading a dashboard in Grafana

Still in your web browser, log into Grafana on your monitor node (port 3000 by default).

Import a new dashboard.

We will use an existing dashboard by contributor vooon341, so simply type in the number 3484 and hit Load.

This will download the dashboard from Grafana and prompt for settings. Enter whatever Name you like, select InfluxDB as your data source (configured in the previous blog post), then hit Import.

Grafana will now go and query InfluxDB and present your dashboard with all of your OpenWrt devices.

OpenWrt also supports a Lua Prometheus node exporter, so if you wanted to add those as well, you could. However, I think collectd does a reasonable job.

September 08, 2019

Setting up a monitoring host with Prometheus, InfluxDB and Grafana

Prometheus and InfluxDB are powerful time series database monitoring solutions, both of which are natively supported by the graphing tool Grafana.

Setting up these simple but powerful open source tools gives you a great base for monitoring and visualising your systems. We can use agents like node-exporter to publish metrics on remote hosts which Prometheus will scrape, and other tools like collectd which can send metrics to InfluxDB’s collectd listener (more on that later!).

Prometheus’ node exporter metrics in Grafana

I’m using CentOS 7 on a virtual machine, but this should be similar to other systems.

Install Prometheus

Prometheus is the trickiest to install, as there is no Yum repo available. You can either download the pre-compiled binary or run it in a container; I’ll do the latter.

Install Docker and pull the image (I’ll use Quay instead of Dockerhub).

sudo yum install docker
sudo systemctl start docker
sudo systemctl enable docker
sudo docker pull quay.io/prometheus/prometheus

Create a basic configuration file for Prometheus which we will pass into the container. This is also where we configure clients for Prometheus to pull data from, so let’s add a localhost target for the monitor node itself.

cat << EOF | sudo tee /etc/prometheus.yml
global:
  scrape_interval:     15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']
  - job_name: 'node'
    static_configs:
    - targets:
      - localhost:9100
EOF

Now we can start a persistent container. We’ll pass in the config file we created earlier but also a dedicated volume so that the database is persistent across updates. We use host networking so that Prometheus can talk to localhost to monitor itself (not required if you want to configure Prometheus to talk to the host’s external IP instead of localhost).

sudo docker run -dit \
--network host \
--name prometheus \
--restart always \
-p 9090:9090 \
--volume prometheus:/prometheus \
-v /etc/prometheus.yml:/etc/prometheus/prometheus.yml:Z \
quay.io/prometheus/prometheus \
--config.file=/etc/prometheus/prometheus.yml \
--web.enable-lifecycle \
--web.enable-admin-api

Check that the container is running properly, it should say that it is ready to receive web requests in the log. You should also be able to browse to the endpoint on port 9090 (you can run queries here, but we’ll use Grafana).

sudo docker ps
sudo docker logs prometheus

Updating Prometheus config

Updating and reloading the config is easy, just edit /etc/prometheus.yml and send a message to Prometheus to reload (this was enabled by the web.enable-lifecycle option). This is useful when adding new nodes to scrape metrics from.

curl -s -XPOST localhost:9090/-/reload

In the container log (as above) you should see that it has reloaded the config.

Installing Prometheus node exporter

You’ll notice in the Prometheus configuration above we have a job called node and a target for localhost:9100. This is a simple way to start monitoring the monitor node itself! Installing the node exporter in a container is not recommended, so we’ll use the Copr repo and install with Yum.

sudo curl -Lo /etc/yum.repos.d/_copr_ibotty-prometheus-exporters.repo \
https://copr.fedorainfracloud.org/coprs/ibotty/prometheus-exporters/repo/epel-7/ibotty-prometheus-exporters-epel-7.repo

sudo yum install node_exporter
sudo systemctl start node_exporter
sudo systemctl enable node_exporter

It should be listening on port 9100 and Prometheus should start getting metrics from http://localhost:9100/metrics automatically (we’ll see them later with Grafana).
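
If you want to eyeball the raw metrics before Grafana is set up, the exporter just serves plain text over HTTP. Here is a quick look with Python (my own snippet; the exact metric names can vary a little between node_exporter versions):

import urllib.request

with urllib.request.urlopen("http://localhost:9100/metrics") as resp:
    metrics = resp.read().decode()

# Print a couple of familiar gauges (names assume a recent node_exporter)
for line in metrics.splitlines():
    if line.startswith(("node_load1 ", "node_memory_MemAvailable_bytes ")):
        print(line)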

Install InfluxDB

Influxdata provides a yum repository so installation is easy!

cat << \EOF | sudo tee /etc/yum.repos.d/influxdb.repo
[influxdb]
name=InfluxDB
baseurl=https://repos.influxdata.com/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://repos.influxdata.com/influxdb.key
EOF
sudo yum install influxdb

The defaults are fine, other than enabling collectd support so that other clients can send metrics to InfluxDB. I’ll show you how to use this in another blog post soon.

sudo sed -i 's/^\[\[collectd\]\]/#\[\[collectd\]\]/' /etc/influxdb/influxdb.conf
cat << EOF | sudo tee -a /etc/influxdb/influxdb.conf
[[collectd]]
  enabled = true
  bind-address = ":25826"
  database = "collectd"
  retention-policy = ""
  typesdb = "/usr/local/share/collectd"
  security-level = "none"
EOF

This should open a number of ports, including InfluxDB itself on TCP port 8086 and collectd receiver on UDP port 25826.

sudo ss -ltunp |egrep "8086|25826"

Create InfluxDB collectd database

Finally, we need to connect to InfluxDB and create the collectd database. Just run the influx command.

influx

And at the prompt, create the database and exit.

CREATE DATABASE collectd
exit

Install Grafana

Grafana has a Yum repository so it’s also pretty trivial to install.

cat << EOF | sudo tee /etc/yum.repos.d/grafana.repo
[grafana]
name=Grafana
baseurl=https://packages.grafana.com/oss/rpm
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
EOF
sudo yum install grafana

Grafana pretty much works out of the box and can be configured via the web interface, so simply start and enable it. The server listens on port 3000 and the default username is admin with password admin.

sudo systemctl start grafana-server
sudo systemctl enable grafana-server
sudo ss -ltnp |grep 3000

Now you’re ready to log into Grafana!

Configuring Grafana

Browse to the IP of your monitoring host on port 3000 and log into Grafana.

Now we can add our two data sources. First, Prometheus, pointing to localhost on port 9090…

…and then InfluxDB, pointing to localhost on port 8086 and to the collectd database.

Adding a Grafana dashboard

Make sure they both test OK and we’re well on our way. Next we just need to create some dashboards, so let’s get a dashboard to show node exporter and we’ll hopefully at least see the monitoring host itself.

Go to Dashboards and hit import.

Type the number 1860 in the dashboard field and hit load.

This should automatically download and load the dash, all you need to do is select your Prometheus data source from the Prometheus drop down and hit Import!

Next you should see the dashboard with metrics from your monitor node.

So there you go, you’re on your way to monitoring all the things! For anything that supports collectd, you can forward metrics to UDP port 25826 on your monitor node. More on that later…

September 05, 2019

Why Computers Lie Badly At Alarming Speed and the unum Promise

The translation of arithmetic to physical hardware using the numerical representation employed by the IEEE standard is fraught with difficulty. As anyone who has used even a pocket calculator knows, computer processors are imprecise, with dangerous rounding errors that vary between systems. Further, the standard representation method, IEEE 754 "Standard for Floating-Point Arithmetic" (1985, revised 2008), is extremely inefficient from an engineering perspective, with increasing physical cost when additional precision is sought.

The basic issue is the limitations of converting decimal or floating point notation into binary form. The IEEE standard suggests that when a calculation overflows, the value +inf should be used instead, and when a number is too small, the standard says to use 0 instead. Inserting infinity to represent "a very big number" or 0 to represent a "very small number" will certainly cause computational issues. Floating point operations have additional issues when employed in parallel, breaking the logic of associative properties. The equation (a + b) + (c + d) computed in parallel will not, in general, equal the equation ((a + b) + c) + d run in serial.
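
These effects are easy to reproduce in any language that exposes IEEE 754 doubles; a small Python illustration (mine, added for concreteness):

# Associativity breaks down under rounding
a = (0.1 + 0.2) + 0.3        # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)        # 0.6
print(a == b)                # False

# Overflow silently becomes +inf, underflow silently becomes 0.0
print(1e308 * 10)            # inf
print(1e-308 / 1e100)        # 0.0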

These issues have been known in computer science for some decades (Goldberg, 1991). In recent years an attempt has been made to reconstruct the physical implementation of arithmetic by providing a superset of the IEEE 754 standard and IEEE 1788, the Standard for Interval Arithmetic. This number format, the unum (Gustafson, 2015), consists of a bit string of variable length with six sub-fields: a sign bit, exponent, fraction, uncertainty bit, exponent size, and fraction size. The uncertainty bit, or ubit, specifies whether or not there are additional bits after the fraction instead of rounding; in other words, it marks a precise interval. This means that numbers that are close to zero or infinity are treated as such and are never represented as zero or infinity. To date, unums have not been implemented in hardware, as they require more logic than floating-point numbers, but software implementations have been provided.

Why Computers Lie Badly At Alarming Speed and the unum Promise
Challenges in High Performance Computing Conference
2-6 Sept, 2019 Mathematical Sciences Institute, Australian National University
http://levlafayette.com/files/2019ChallHPC-unums.pdf

September 04, 2019

Install newer git from software collections and enable globally

Work on Linux almost always means git for me, but the version provided by CentOS and RHEL is too old. Software collections is a convenient way to get a newer version and enable it for everyone by default.

First, enable software collections (different for RHEL and CentOS).

# CentOS
sudo yum install centos-release-scl
# RHEL
sudo yum-config-manager --enable rhel-server-rhscl-7-rpms

Install the newer version of git you want (e.g. git 2.18).

sudo yum install rh-git218

Enable it for everyone for any new sessions.

cat << EOF | sudo tee /etc/profile.d/git-scl.sh
source scl_source enable rh-git218
EOF

Test with a new session.

git --version
$SHELL
git --version

September 02, 2019

Replacing a NixOS Service with an Upstream Version

NixOS Hydra Gears by Craige McWhirter

It's fairly well documented how to replace a NixOS service in the stable channel with one from the unstable channel.

What if you need to build from an upstream branch that's not in either of stable or unstable channels? This is how I go about it, including building a VM in which to test the result.

I specifically wanted to test the new hydra-notify service, so to test that, I need to replace the existing Hydra module in nixpkgs with the one from upstream source. Start by checking out the hydra source:

$ git clone https://github.com/NixOS/hydra.git

We can configure Nix to replace the nixpkgs version of Hydra with a build from hydra/master.

You can see a completed example in hydra_notify.nix but the key points are that we need to disable Hydra in the standard Nix packages:

  disabledModules = [ "services/continuous-integration/hydra/default.nix" ];

as well as import the module definition from the Hydra source we downloaded:

  imports =
    [
      "/path/to/source/hydra/hydra-module.nix"
    ];

and we need to switch services.hydra to services.hydra-dev in two locations:

  networking.firewall.allowedTCPPorts = [ config.services.hydra-dev.port 80 443 ];

  services.hydra-dev = {
    ...
  };

With these three changes, we have swapped out the Hydra in nixpkgs for one to be built from the upstream source in hydra_notify.nix.

Next we need to build a configuration for our VM that uses the replaced Hydra module declared in hydra_notify.nix. This is hydra_vm.nix, which is a simple NixOS configuration, which importantly includes our replaced Hydra module:

  imports =
    [
      ./hydra_notify.nix
    ];

To give this a run yourself, check out nixos-examples and change to the services/hydra_upstream directory:

$ git clone https://code.mcwhirter.io/craige/nixos-examples.git
$ cd  nixos-examples/services/hydra_upstream

After updating the path to Hydra's source, we can then build the VM with:

$ nix-build '<nixpkgs/nixos>' -A vm --arg configuration ./hydra_vm.nix

Before launching the VM, I like to make sure that it is provided with enough RAM and both hydra's web UI and SSH are available by exporting the below Qemu options:

$ export QEMU_OPTS="-m 4192"
$ export QEMU_NET_OPTS="hostfwd=tcp::10443-:443,hostfwd=tcp::10022-:22"

So now we're ready to launch the VM:

./result/bin/run-hydra-notications-vm

Once it has booted, you should be able to ssh nixos@localhost -p 10022 and hit the Hydra web UI at localhost:10443.

Once you've logged into the VM you can run systemctl status hydra-notify to check that you're running upstream Hydra.

August 29, 2019

NixOS Appears to be Always Building From Source

NixOS Gears by Craige McWhirter

One of the things that NixOS and Hydra make easy is running your own custom cache of packages. A number of projects and companies make use of this.

A NixOS or Nix user can then make use of these caches by adding them to nix.conf for Nix users or /etc/nixos/configuration.nix for NixOS users.

What most people will want is for their devices to have access to both caches.

If you add the new cache "incorrectly", you may suddenly find your device building almost everything from source, as I did.

The default /etc/nix/nix.conf for NixOS users has these lines:

substituters = https://cache.nixos.org
...
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=

Many projects running custom caches will advise NixOS users to add a stanza like this to /etc/nixos/configuration.nix:

{
  nix = {
    binaryCaches = [
      "https://cache.my-project.org/"
    ];
    binaryCachePublicKeys = [
      "cache.my-project.org:j/Kb+r+tGeM+4YZH+ECfTr+b4OFViKHaciuIOHw1/DP="
    ];
  };
}

If you add this stanza to your NixOS configuration, you will end up with a nix.conf that looks like this:

...
substituters = https://cache.my-project.org/
...
trusted-public-keys = cache.my-project.org:j/Kb+r+tGeM+4YZH+ECfTr+b4OFViKHaciuIOHw1/DP=
...

This will result in your systems only pulling cached packages from that cache and building everything else that's missing.

If you want to take advantage of what a custom cache is providing but not lose the advantages of the primary NixOS cache, your stanza in configuration.nix needs to look like this:

{
  nix = {
    binaryCaches = [
      "https://cache.nixos.org"
      "https://cache.my-project.org/"
    ];
    binaryCachePublicKeys = [
      "cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
      "cache.my-project.org:j/Kb+r+tGeM+4YZH+ECfTr+b4OFViKHaciuIOHw1/DP="
    ];
  };
}

You will now get the benefit of both caches and your nix.conf will now look like:

...
substituters = https://cache.nixos.org https://cache.my-project.org/
...
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= cache.my-project.org:j/Kb+r+tGeM+4YZH+ECfTr+b4OFViKHaciuIOHw1/DP=
...

The order in the configuration does not matter; I just feel more comfortable putting the cache I consider "primary" first. The priority is determined by NixOS using the cache-info file from each Hydra cache:

$ curl https://cache.nixos.org/nix-cache-info
StoreDir: /nix/store
WantMassQuery: 1
Priority: 40

If you were experiencing excessive building from source and your intention was to draw from two caches, this should resolve it for you.

August 27, 2019

International HPC Certification Forum and AU-NZ Contributions

High Performance Computing (HPC) is the most effective method for processing increasingly large and complex datasets, making it increasingly critical for research organisations. Researchers wanting to use HPC resources often start with low levels of skill in using those systems. Despite this, educational programmes built on well-informed user needs analysis and/or a widely acknowledged set of required skills, capabilities and knowledge are rare. As a result, the training of researchers is typically left to individual HPC sites, such as the Pawsey Supercomputing Centre, National Computational Infrastructure (NCI) Australia, and the University of Melbourne.

With different sites providing their own training with varied content and delivery there is a lack of consistency in skills and knowledge among HPC users, despite the fact that there is a recognised high level of homogeneity in HPC skills (e.g., UNIX-like environments, cluster architecture, job submission principles, parallel programming techniques).

One group trying to address this challenge on an international level is the International HPC Certification Forum ("the Forum"). The Forum was established by a global collection of individuals committed to identifying competency areas, skills and measurable outcomes per identified HPC user role. The Forum plans to provide examination and certification of users in fine-grained competencies. The Forum has purposefully not taken ownership of training content, separating the definition of skills and certificates from education content and delivery, but allows for delivery agents to be recognised as including examinable content. Australia has been involved from the start of this effort toward a global curriculum, with two members on the governing Board.

Among Australian and New Zealand HPC educators and trainers there is a desire for collaborative development of course content, as a rational allocation of scarce temporal and financial resources. This has generated ongoing interest in establishing collaboration among HPC educators to develop a programme suitable for Forum certification. With a lead from the Pawsey Supercomputing Centre, the University of Melbourne, Adelaide University, and NCI, HPC educators in Australia and New Zealand are coordinating nationally to develop a repository of knowledge for content, delivery, and assessment, with the objective of increasing regional research output.

This is a presentation initially given at the ARDC eResearch Skilled Workforce Summit, 29-30 July 2019, in Sydney, then in an expanded version at the HPC-AI Advisory Council, August 29, 2019, in Perth, and finally in a reduced version for an ARDC Tech-Talk, 6 Sept 2019.

http://levlafayette.com/files/2019ARDCsummit.pdf

August 20, 2019

Installing Your First Hydra

NixOS Hydra Gears by Craige McWhirter

Hydra is a Nix-based continuous build system. My method for configuring a server to be a Hydra build server is to create a hydra.nix file like this:

# NixOps configuration for machines running Hydra

{ config, pkgs, lib, ... }:

{

  services.postfix = {
    enable = true;
    setSendmail = true;
  };

  services.postgresql = {
    enable = true;
    package = pkgs.postgresql;
    identMap =
      ''
        hydra-users hydra hydra
        hydra-users hydra-queue-runner hydra
        hydra-users hydra-www hydra
        hydra-users root postgres
        hydra-users postgres postgres
      '';
  };

  networking.firewall.allowedTCPPorts = [ config.services.hydra.port ];

  services.hydra = {
    enable = true;
    useSubstitutes = true;
    hydraURL = "https://my.website.org";
    notificationSender = "my.website.org";
    buildMachinesFiles = [];
    extraConfig = ''
      store_uri = file:///var/lib/hydra/cache?secret-key=/etc/nix/my.website.org/secret
      binary_cache_secret_key_file = /etc/nix/my.website.org/secret
      binary_cache_dir = /var/lib/hydra/cache
    '';
  };

  services.nginx = {
    enable = true;
    recommendedProxySettings = true;
    virtualHosts."my.website.org" = {
      forceSSL = true;
      enableACME = true;
      locations."/".proxyPass = "http://localhost:3000";
    };
  };

  security.acme.certs = {
      "my.website.org".email = "my.email@my.website.org";
  };

  systemd.services.hydra-manual-setup = {
    description = "Create Admin User for Hydra";
    serviceConfig.Type = "oneshot";
    serviceConfig.RemainAfterExit = true;
    wantedBy = [ "multi-user.target" ];
    requires = [ "hydra-init.service" ];
    after = [ "hydra-init.service" ];
    environment = builtins.removeAttrs (config.systemd.services.hydra-init.environment) ["PATH"];
    script = ''
      if [ ! -e ~hydra/.setup-is-complete ]; then
        # create signing keys
        /run/current-system/sw/bin/install -d -m 551 /etc/nix/my.website.org
        /run/current-system/sw/bin/nix-store --generate-binary-cache-key my.website.org /etc/nix/my.website.org/secret /etc/nix/my.website.org/public
        /run/current-system/sw/bin/chown -R hydra:hydra /etc/nix/my.website.org
        /run/current-system/sw/bin/chmod 440 /etc/nix/my.website.org/secret
        /run/current-system/sw/bin/chmod 444 /etc/nix/my.website.org/public
        # create cache
        /run/current-system/sw/bin/install -d -m 755 /var/lib/hydra/cache
        /run/current-system/sw/bin/chown -R hydra-queue-runner:hydra /var/lib/hydra/cache
        # done
        touch ~hydra/.setup-is-complete
      fi
    '';
  };
  nix.trustedUsers = ["hydra" "hydra-evaluator" "hydra-queue-runner"];
  nix.buildMachines = [
    {
      hostName = "localhost";
      systems = [ "x86_64-linux" "i686-linux" ];
      maxJobs = 6;
      # for building VirtualBox VMs as build artifacts, you might need other
      # features depending on what you are doing
      supportedFeatures = [ ];
    }
  ];
}

From there it can be imported in your configuration.nix or NixOps files like this:

{ config, pkgs, ... }:

{

  imports =
    [
      ./hydra.nix
    ];

...
}

To deploy hydra, you will then need to either run nixos-rebuild switch on the server or use nixops deploy -d my.network.

The result of this deployment, via NixOps can be seen at hydra.mcwhirter.io.

August 18, 2019

LUV September 2019 Main Meeting

Sep 3 2019 19:00
Sep 3 2019 21:00
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

NOTE: The library closes at 7pm so arrivals after that time will need to contact Andrew on (0421) 775 358 or any other attendee for admission.

Speakers:

  • To be announced

 

Many of us like to go for dinner nearby after the meeting, typically at Brunetti's or Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.

September 3, 2019 - 19:00

August 16, 2019

Passwordless restricted guest account on Ubuntu

Here's how I created a restricted but not ephemeral guest account on an Ubuntu 18.04 desktop computer that can be used without a password.

Create a user that can login without a password

First of all, I created a new user with a random password (using pwgen -s 64):

adduser guest

Then following these instructions, I created a new group and added the user to it:

addgroup nopasswdlogin
adduser guest nopasswdlogin

In order to let that user login using GDM without a password, I added the following to the top of /etc/pam.d/gdm-password:

auth    sufficient      pam_succeed_if.so user ingroup nopasswdlogin

Note that this user is unable to ssh into this machine since it's not part of the sshuser group I have setup in my sshd configuration.

Privacy settings

In order to reduce the amount of digital traces left between guest sessions, I logged into the account using a GNOME session and then opened gnome-control-center. I set the following in the privacy section:

Then I replaced Firefox with Brave in the sidebar, set it as the default browser in gnome-control-center:

and configured it to clear everything on exit:

Create a password-less system keyring

In order to suppress prompts to unlock gnome-keyring, I opened seahorse and deleted the default keyring.

Then I started Brave, which prompted me to create a new keyring so that it can save the contents of its password manager securely. I set an empty password on that new keyring, since I'm not going to be using it.

I also made sure to disable saving of passwords, payment methods and addresses in the browser too.

Restrict user account further

Finally, taking an idea from this similar solution, I prevented the user from making any system-wide changes by putting the following in /etc/polkit-1/localauthority/50-local.d/10-guest-policy.pkla:

[guest-policy]
Identity=unix-user:guest
Action=*
ResultAny=no
ResultInactive=no
ResultActive=no

If you know of any other restrictions that could be added, please leave a comment!

August 15, 2019

Setting Up Wireless Networking with NixOS

NixOS Gears by Craige McWhirter

The current NixOS Manual is a little sparse on details for different options to configure wireless networking. The version in master is a little better but still ambiguous. I've made a pull request to resolve this but in the interim, this documents how to configure a number of wireless scenarios with NixOS.

If you're going to use NetworkManager, this is not for you. This is for those of us who want reproducible configurations.

To enable a wireless connection with no spaces or special characters in the name that uses a pre-shared key, you first need to generate the raw PSK:

$ wpa_passphrase exampleSSID abcd1234
network={
        ssid="exampleSSID"
        #psk="abcd1234"
        psk=46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d
}
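
If you're curious where that long hex string comes from: WPA2 derives the raw PSK from the passphrase and SSID using PBKDF2-HMAC-SHA1 with 4096 iterations and a 32-byte output, so you can reproduce what wpa_passphrase does in a few lines of Python (this should print the same psk value as above):

import hashlib

ssid = b"exampleSSID"
passphrase = b"abcd1234"

# WPA2 PSK derivation: PBKDF2-HMAC-SHA1, SSID as salt, 4096 iterations, 32 bytes
psk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, 32)
print(psk.hex())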

Now you can add the following stanza to your configuration.nix to enable wireless networking and this specific wireless connection:

networking.wireless = {
  enable = true;
  userControlled.enable = true;
  networks = {
    exampleSSID = {
      pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d";
    };
  };
};

If you had another WiFi connection that had spaces and/or special characters in the name, you would configure it like this:

networking.wireless = {
  enable = true;
  userControlled.enable = true;
  networks = {
    "example's SSID" = {
      pskRaw = "46c25aa68ccb90945621c1f1adbe93683f884f5f31c6e2d524eb6b446642762d";
    };
  };
};

The final scenario that I have is connecting to open SSIDs that use some kind of secondary method (like a login web page) to authenticate connections:

networking.wireless = {
  enable = true;
  userControlled.enable = true;
  networks = {
    FreeWiFi = {};
  };
};

This is all fairly straightforward, but it was non-trivial to find the answers.

August 14, 2019

An animated GIF resume


The graphic designer at work and I were talking, and I challenged him to come up with a resume as an animated GIF. This is where he landed…

I think it’s quite clever. Need a graphic designer or video team? Consider onefishsea.


August 11, 2019

Audiobooks – July 2019

The Return of the King by J.R.R. Tolkien. Narrated by Rob Inglis. Excellent although I should probably listen slower next time. 10/10

Why Superman Doesn’t Take Over the World: What Superheroes Can Tell Us About Economics by J. Brian O’Roark

A good idea for a theme but the author didn’t quite nail it. Further let down in audiobook format when the narrator talked to invisible diagrams. 6/10

A Fabulous Creation: How the LP Saved Our Lives by David Hepworth

Covers the years 1967 (Sgt Peppers) to 1982 (Thriller) when the LP dominated music. Lots of information, all delivered in the author’s great style. 8/10

The Front Runner by Matt Bai

Nominally a story about the downfall of Democratic presidential front-runner Gary Hart in 1987. Much of the book is devoted to how norms of political coverage changed at that moment due to changes in technology & culture. 8/10

A race like no other: 26.2 Miles Through the Streets of New York by Liz Robbins

Covering the 2007 New York marathon, it follows the race with several top & amateur racers. Lots of digressions into the history of the race and the runners. Worked well. 8/10

1983: Reagan, Andropov, and a World on the Brink by Taylor Downing

An account of how escalations in the Cold War in 1983 nearly led to nuclear war, with the Americans largely being unaware of the danger. Superb 9/10


The High Cost of Free Parking (2011 edition) by Donald Shoup.

One of the must-read books in the field although not a revelation for today’s readers. Found it a little repetitive (23 hours) and talking to diagrams and equations doesn’t work in audiobook format. 6/10




August 08, 2019

The wonderful world of machine learning automated lego sorting


Inspired by Alastair D’Silva‘s cunning plans for world domination, I’ve been googling around for automated lego sorting systems recently. This seems like a nice tractable machine learning problem with some robotics thrown in for fun.

Some cool projects if you’re that way inclined:

This sounds like a great way to misspend some evenings to me…


August 03, 2019

LUV August 2019 Workshop: Drupal

Aug 17 2019 12:30
Aug 17 2019 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Alexar Pendashtash will run a workshop about Drupal, a web application framework he has extensively used throughout the past ten years for a variety of applications.

You will not need any prior knowledge of programming or web development to benefit from the talk.

Alexar is a technologist and a social entrepreneur.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

August 17, 2019 - 12:30


LUV August 2019 Main Meeting: Open Source Hardware and Software for Assistive Technologies

Aug 6 2019 19:00
Aug 6 2019 21:00
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

NOTE: The library closes at 7pm so arrivals after that time will need to contact Andrew on (0421) 775 358 or any other attendee for admission.

Speakers:

  • Jonathan Oxer: Open Source Hardware and Software for Assistive Technologies

 

Open Source Hardware and Software for Assistive Technologies

Learn how Open Source hardware and software can be used in a wide variety of assistive technologies, including wheelchair control, environmental systems, robotics, communication, prosthetics, games, and telepresence.

Many of us like to go for dinner nearby after the meeting, typically at Brunetti's or Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.

August 6, 2019 - 19:00


What is the Spotify model for Agile?


The other day someone said to me that “they use the Spotify development model”, and I said “you who the what now?”. It was a super productive conversation that I am quite proud of.

So… in order to look like less of a n00b in the next conversation, what is the “Spotify development model”? Well, it turns out that Spotify came up with a series of tweaks to the standard Agile process in order to scale their engineering teams. If you google for “spotify development model” or “spotify agile” you’ll get lots and lots of third party blog posts about what Spotify did (I guess a bit like this one), but it’s surprisingly hard to find primary sources. The best I’ve found so far is this Quora answer from a former VP of Engineering at Spotify, although some of the resources he links to no longer exist.

Here’s a quick summary though:

Squads: the basic unit of a development team. Squads sit together, and have all of the tools and skills to release a feature to production. Squads self-organize and can choose in what way they work. For example some might use Scrum, while others might choose Kanban. Squads have a long term mission, but it’s usually a shared belief — “we will improve the search interface” for example. Squads don’t have a designated leader, but they do have a Product Owner.

An aside, squad interdependencies: Squads are meant to feel like independent small startups, so the dependencies between Squads are closely managed. You can’t completely eliminate dependencies if you’re going to build big things, but you can actively track those dependencies and make sure that they’re consciously managed.

Product Owners: each Squad has a Product Owner, who is responsible for prioritising the work of the Squad. The Product Owner also keeps in touch with other Product Owners in associated areas to ensure that a coherent overall product is being built.

Tribes: a Tribe is a set of Squads which work in related areas. So for example it might be all of the Squads which work on the mobile client, or all of the Squads dealing with backend infrastructure. Tribes are usually co-located (the same building), and are kept to less than 100 members to ensure that everyone knows each other. Tribes hold regular gatherings where they show off what they are working on, what they have delivered, and what others can learn from them.

Chapters: if you’re the only operations person in your Squad, then that can be an isolating experience and stops you from learning from other operations people’s experiences. To solve this, people with similar skills are also grouped into an overlay called a Chapter. Chapters meet regularly like Tribes, but Chapters are also where you get your people management from — the Chapter Lead is your people manager and does all the usual HR stuff to ensure your career needs are being met. At Spotify Chapters are a sub-unit of a Tribe, you don’t have Chapters which cross Tribe boundaries.

Guilds: Guilds are similar to Chapters, but aren’t skills based. They’re people with common interests, and can cross Tribe boundaries. An example of a Guild might be all your web developers, or all your Prometheus aficionados.

Share

July 29, 2019

Quick hack: extracting the contents of a Docker image to disk

Share

For various reasons, I wanted to inspect the contents of a Docker image without starting a container. Docker makes it easy to get an image as a tar file, like this:

docker save -o foo.tar image

But if you extract that tar file you’ll find a configuration file and manifest as JSON files, and then a series of tar files, one per image layer. You use the manifest to determine in what order you extract the tar files to build the container filesystem.

That’s fiddly and annoying. So I wrote this quick python hack to extract an image tarball into a directory on disk that I could inspect:

#!/usr/bin/python3

# Call me like this:
#  docker-image-extract tarfile.tar extracted

import tarfile
import json
import os
import sys

image_path = sys.argv[1]
extracted_path = sys.argv[2]

image = tarfile.open(image_path)
manifest = json.loads(image.extractfile('manifest.json').read())

# The manifest lists the layer tarballs in the order they must be applied
for layer in manifest[0]['Layers']:
    print('Found layer: %s' % layer)
    layer_tar = tarfile.open(fileobj=image.extractfile(layer))

    for tarinfo in layer_tar:
        print('  ... %s' % tarinfo.name)
        if tarinfo.isdev():
            print('  --> skip device files')
            continue

        dest = os.path.join(extracted_path, tarinfo.name)
        if not tarinfo.isdir() and os.path.exists(dest):
            print('  --> remove old version of file')
            os.unlink(dest)

        layer_tar.extract(tarinfo, path=extracted_path)

Hopefully that’s useful to someone else (or future me).

Share

July 23, 2019

Codec 2 700C Equaliser Part 2

This post extends the work of Part 1. I’ve developed a new Equaliser (EQ) algorithm, and ported it to C.

EQ in front of VQ

After several days of experimentation I moved the EQ in front of the VQ. The algorithm used in Part 1 included the VQ in the loop (top), however I found this had problems removing large amounts of low frequency energy. I’m not sure why – with the VQ in the loop the operation is complex.

The new algorithm (bottom) is rather simple; we compare the average spectrum of the input speech to an ideal “mask”. The difference between the two gives the equaliser weights. However, like many DSP algorithms, it took several days of careful experimentation, trial and error, listening tests, and many backwards steps to get the results I wanted. I coded and tested a bunch of candidate algorithms in vq_700c_eq.m.

The input spectrum is averaged using an IIR filter, so it takes about a second to adapt.
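To make that concrete, here is a minimal sketch in Python of the idea. This is illustrative only – not the vq_700c_eq.m or C implementation – and the frame size, mask values and smoothing constant are made-up numbers:

import numpy as np

K = 20                      # spectral samples per 40ms frame (illustrative)
mask = np.zeros(K)          # assumed "ideal" average spectrum, in dB
avg = np.zeros(K)           # IIR-averaged input spectrum, in dB
beta = 0.05                 # small constant so the average adapts over ~1 second

def eq_weights(frame_dB):
    """Update the running average and return per-bin EQ weights (dB to remove)."""
    global avg
    avg = (1 - beta) * avg + beta * frame_dB   # first-order IIR average
    return np.maximum(avg - mask, 0)           # only the *excess* energy over the mask

# each frame: w = eq_weights(frame_dB); equalised_dB = frame_dB - w, then VQ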

This plot shows the EQ weights ramping up over a few frames (one frame is 40ms). It’s effectively a map of the excess spectral energy – the stuff we want to remove before the VQ:

This sample has a lot of excess LF energy. The HF spike is due to the shaping of the VQ – it expects all samples to have low energy in the last bin. This plot shows average values over an entire sample:

Dark blue is the input speech (target), and cyan the input speech after equalisation. Green is the “ideal” mask we make the EQ shoot for.

Objective Results

The following table is similar to the table in Part 1. It presents results for stage 1 (vq1) and stage 2 (vq2) of the two stage vector quantisation process. eq1 is the algorithm from Part 1, which I used as my starting point. eq2 is the latest algorithm for real time implementation, which does a comparable job. What really matters are the last three columns – the output error after quantisation for the current 700C VQ (vq2), Part 1 eq (vq2_eq1), and latest algorithm (vq_eq2).

Results Table

I placed the results in a separate text file as I had trouble getting them to display neatly in WordPress.

The first two samples are “contrived”: I took “cq_freedv_8k” and using Audacity added 12dB gain between 0 and 500Hz to get cq_freedv_8k_lfboost. I set the high frequency cut off at 3500 Hz to get cq_freedv_8k_hfcut. This was really useful – in listening tests lfboost broke the EQ design from Part 1 – it was ineffective at cleaning up the sample! Note the obj measure for this sample (vq2_eq1 column) is still OK – one reason why we back up objective results with listening tests.

Subjective Results

Sample                 Codec 2 700C   Codec 2 700C + EQ
cq_ref                 Listen         Listen
kristoff               Listen         Listen
cq_freedv_8k           Listen         Listen
cq_freedv_8k_lfboost   Listen         Listen

I listen to these samples with a small set of loudspeakers that have some low frequency response. The last sample is an extreme case. That cq_freedv_8k sample already has quite a lot of low frequency, and we have added 12dB extra. The equalised sample is much improved, but not quite as good as the original “clean” cq_freedv_8k sample. The quality improves over the first few seconds as the EQ adapts.

Discussion

An EQ has been developed, tested over a range of samples, and ported to C. Tests have been written to ensure the C port matches the Octave version, and it’s ready to try in the real world on x86 and embedded (stm32) platforms.

I’m quite pleased with this work. I’ve resolved a long standing personal mystery – why some samples/speakers code well and others don’t. I’ve made Codec 2 700C more robust to a range of different microphone inputs. This will mean an incremental improvement in the average speech quality of on-air modes like FreeDV 700C and 700D. I’ve also learned a lot about VQ, which can be applied to new codec designs.

Lessons learned:

  1. This work has reinforced how important accurate representation of the low frequencies is for speech.
  2. Using a contrived sample is a good technique; it gives you a known error to test against.
  3. This was a good exercise in learning the strengths and weaknesses of VQ.
  4. VQs are fussy about what you throw at them.
  5. Variance is a really useful objective measure.
  6. Use a decent training database (from Part 1 where I trained a new VQ).

The contrived cq_freedv_8k examples have a very big variance (column 1 of the results table) before quantisation – this suggests “more information to quantise”. However they started as the same sample, and just had their frequency response tweaked in a way that did not affect the perceptual information they contain at all. This suggests equalisation might be a neat way to lower the bit rate. We may be able to find a transformation in the spectrum that sounds more or less the same to the human ear, but is easier (requires less bits) to encode.

Makes me wonder – how many bits of other codecs are spent coding non-essential information like the arbitrary frequency responses thrown at them?

You can play around with EQ yourself

$ ./c2enc 700C ../../raw/cq_ref.raw - --eq --var | ./c2dec 700C - - | aplay -f S16_LE

Links

Codec 2 700C Equaliser
Codec 2 700C
Codec 2 Pull Request for this work

Mastermind in JavaScript

Share

I’ve been learning JavaScript for the last few days, and I figured I’d implement Jacqui’s favourite board game as a learning exercise. Jacqui loves a simple colour guessing game called Mastermind. In the game someone picks four coloured pins and then the player has to progressively guess what those colours are.

In my JavaScript version the computer picks four colours, and you need to work out what they are. Click on the white squares to cycle through colours and then hit the “guess” button when you’re ready to see how many you got right. The gray boxes in the top row will progressively reveal their colours as you guess them.
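The core “how many did you get right” check is tiny. Here is a rough sketch of the scoring idea in Python (the game itself is written in JavaScript, and the palette below is just an assumption):

import random

COLOURS = ["red", "green", "blue", "yellow", "orange", "purple"]  # assumed palette

def new_secret(n=4):
    # the computer picks four colours (repeats allowed in this sketch)
    return [random.choice(COLOURS) for _ in range(n)]

def score(secret, guess):
    # number of pins that are the right colour in the right position
    return sum(1 for s, g in zip(secret, guess) if s == g)

secret = new_secret()
print(score(secret, ["red", "red", "blue", "purple"]))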

The code is here, and the game can be played here.

Share

July 22, 2019

Generating an sha256 Hash for Nix Packages

NixOS Gears by Craige McWhirter

Let's say that you're working to replicate the PureOS environment for the Librem 5 phone so that you can run NixOS on it instead and need to package "calls". Perhaps you just want to use Nix to package something else that isn't packaged yet.

When you start digging into Nix packaging, you'll start to see stanzas like this one:

src = fetchFromGitLab {
  domain = "source.puri.sm";
  owner = "Librem5";
  repo = pname;
  rev = "v${version}";
  sha256 = "1702hbdqhfpgw0c4vj2ag08vgl83byiryrbngbq11b9azmj3jhzs";
};

It's fairly self-explanatory and merely a breakdown of a URL into its component parts so that they can be reused elsewhere in the packaging system. It was the generation of the sha256 hash that stumped me the most.
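For reference, here is roughly how those fields map onto the GitLab archive URL that we'll feed to nix-prefetch-url below (a Python illustration of the URL layout only, not how the fetcher is actually implemented):

domain, owner, repo, rev = "source.puri.sm", "Librem5", "libhandy", "v0.0.10"
url = f"https://{domain}/{owner}/{repo}/-/archive/{rev}/{repo}-{rev}.tar.gz"
print(url)
# https://source.puri.sm/Librem5/libhandy/-/archive/v0.0.10/libhandy-v0.0.10.tar.gz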

I wasn't able to guess how it was generated, nor could I find clear instructions in the otherwise pretty thorough Nix documentation.

Putting clues together from a variety of other blog posts, I eventually worked out how to generate the correct sha256 hash for Nix packages: run nix-prefetch-url --unpack against the tagged source archive and it will print the hash.

Using the above hash for libhandy, I was able to test the method I'd come up with, using nix-prefetch-url to download the tagged version and provide an sha256 hash which I could compare to one in the existing libhandy default.nix file:

$ nix-prefetch-url --unpack https://source.puri.sm/Librem5/libhandy/-/archive/v0.0.10/libhandy-v0.0.10.tar.gz
unpacking...
[0.3 MiB DL]
path is
'/nix/store/58i61w34hx06gcdaf1x0gwi081qk54km-libhandy-v0.0.10.tar.gz'
1702hbdqhfpgw0c4vj2ag08vgl83byiryrbngbq11b9azmj3jhzs

Lo and behold, I have matching sha256 hashes. As I'm wanting to create a package for "calls", I can now safely do the same against its repository on the way to crafting a Nix file for that:

$ nix-prefetch-url --unpack https://source.puri.sm/Librem5/calls/-/archive/v0.0.1/calls-v0.0.1.tar.gz
unpacking...
[0.1 MiB DL]
path is '/nix/store/3c7aifgmf90d7s60ph5lla2qp4kzarb8-calls-v0.0.1.tar.gz'
0qjgajrq3kbml3zrwwzl23jbj6y62ccjakp667jq57jbs8af77pq

That sha256 hash is what I'll drop into my nix file for "calls":

src = fetchFromGitLab {
  domain = "source.puri.sm";
  owner = "Librem5";
  repo = pname;
  rev = "v${version}";
  sha256 = "0qjgajrq3kbml3zrwwzl23jbj6y62ccjakp667jq57jbs8af77pq";
};

Now we have an sha256 hash that can be used by Nix to verify source downloads before building.

July 20, 2019

How Niantic is Killing Ingress

For the past several years, I've been an active player of Ingress, a game where two competing factions play a sort of "capture-the-flag" of public locations of note using an augmentation of Google maps. The game, the precursor to Pokémon Go, and Harry Potter: Wizards Unite, has had its fair share of issues over the past six years. But on July 19, 2019, a death knell was sounded by the very company that produces the game; they forced players (albeit temporarily) to adopt the new interface, Ingress Prime, which is passionately hated by the overwhelming majority of the game's players, and for good reason. The interface is a radical change to the old version, has distracting effects, issues with visual accessibility, and is cumbersome to use. These issues have been raised by the player community for months now, but have largely fallen on deaf ears. Why is this? Why would a game company be so inattentive to the player base?

It is perhaps not so well known, but Niantic started off as a Google project, working on the Field Trip algorithm, which would push information to users on what the algorithm thinks you might be interested in, and with integration into Google Glass. There's a fascinating unlisted video on Youtube, with an astounding 22 million views, where in all of two and a half minutes you basically witness how a person is turned into a thoughtless robot, the ideal consumer. Of course, such an algorithm can't make such decisions randomly; it has to know where a person goes, what their habits are and so forth. Trying to find out this information by surveys and the like would be onerous in the extreme; but Google Location Services can provide that data, and players will willingly give up such privacy for the entertainment of an Augmented Reality game, whether it is Ingress, Pokémon Go, or Harry Potter: Wizards Unite.

OK, six years on one might think "big deal", as various forms of software spying are so ubiquitous that only a few seem to care anymore. But in 2013 it was still an item of some public debate. Certainly, Ingress et al. also made money from select advertising and merchandise, but that's hardly enough to cover costs. The real business, however, is Niantic Real World Platform, which uses the Augmented Reality front to have a commercial back-end, with Point of Interest data collection, social analytics, CRM; the talk from Phil Kreslin at the Augmented World Expo 2018 provides a lot of the details. The games are the, often enjoyable, entry point behind the data collection.

How does this relate to game development? In a nutshell, if a game isn't generating interest, it isn't collecting data, and if it isn't collecting data it's not helping Niantic's core business. Ingress was the first of its kind - it was popular for a couple of years, and still has a smaller, but dedicated, fan-base, many of whom have it as part of their daily lives. There were some issues, such as the fact it was in beta for a long period of time, with significant changes in play, leading it to be nicknamed "Calvinball". To enhance the data collection activities, one of the moves was to allow players to submit favourite hotels, cafes, etc. as portals. It quickly got out of hand of course, and Niantic had to pull back on what constituted a Point of Interest, but a lot of damage had been done. Today there are swathes of empty or barely deployed portals as a result.

Shortly afterward Pokémon Go came on to the scene, and Niantic must have thought they had struck gold. With a fancier interface and simple gameplay it was enormously popular, recently claiming 150 million active players and even setting five Guinness World Records. By comparison, Ingress probably has less than 10% of that figure; one would have to dig into the Niantic data, and they're not exactly making this information publicly available. The point, of course, is that Pokémon Go was huge and they're hoping to replicate the same success with Harry Potter: Wizards Unite.

It seems to me, however, that Niantic has completely misread the reasons for their success with Pokémon Go. They think that Pokémon Go's success relative to Ingress is due to the user interface, and whilst that has a degree of truth, the reality is that Pokémon fans are deep fans. By way of analogy, after its first season Star Trek was going to be dropped due to low ratings. However those who were viewers were deep viewers, and their campaign to keep it on led to its lasting success. Pokémon fans are a bit like this; it's the world's largest media franchise, for reasons I do not even begin to understand (my interest in Pokémon is very close to zero). To summarise, the reason that Pokémon Go succeeded is because it provided an Ingress-like AR game in an extremely popular setting. The reason that Ingress Prime fails is because it ignores the existing player community and believes that a resource-intensive Pokémon-style interface will improve the game's popularity. They have their market analysis absolutely upside-down. Which will make for an interesting case study in the future, I suppose.

With this in mind, can Ingress be saved? In theory yes, if Niantic concentrates on gameplay, rather than massive changes to the UI. An interesting game generates new players even with an old UI (consider the continuing and lasting popularity of tabletop roleplaying games). The following is a list of suggestions, which do not involve any radical changes to the core principles of the game:

  • Tweak the level and badge gains so that they fit some sort of mathematical progression. I am at a complete loss to understand why this wasn't done in the first instance, by programmers, for goodness' sake. Progression on the AP-level table looks like it was designed by a drunk.
  • Allow players to increase levels above 16. The cap is absolutely unnecessary and any sort of geometric progression will ensure that the game is still accessible to new players.
  • Likewise allow devices to increase beyond level 8. Again the cap is unnecessary, and leads to a situation where existing players who have peaked play out of habit rather than for achievement.
  • Restrict portals that are higher than level 8 to genuine points of notable public interest. My favourite Indian restaurant, as charming as it is, is not a wonder of the world like the Eiffel Tower or the Pyramid of Giza.
  • Keep the familiar interface of Scanner Redacted beyond the end-of-September end-date and open it up to new players; at the moment only Agents who made accounts using the original 1.0 Scanner can access Scanner Redacted.

However, I have next to zero confidence that Niantic will introduce any of these changes, even though they would improve Niantic's core business of data collection. Which will mean after many years of game-play, Ingress will simply die in an (overwhelmingly graphic UI) whimper. Maybe then we'll be lucky and Niantic will release the original source-code and the community will be able to build something from the cadaver. But I doubt it.

July 18, 2019

SM1000 V2 Firmware

After a year of development we have released version 2 of the SM1000 firmware. Major new features include:

  • FreeDV 700D (thanks Don W7DMR)
  • A Morse menu system (thanks Stuart VK4MSL)
  • The SM1000 Manual which tells you how to upgrade to the new firmware

I’d also like to thank Steve (K5OKC) for his fine work on the OFDM modem, Danilo (DB4PLE), and Richard (KF5OIM) for their work on the build system, Github/Travis integration and testing; and Walter (K5WH) and George (AC6RB) for helping us sort out the Windows flashing procedure.

The software engineering side of Codec 2 has come a long way in 2019 – we now have about 50 automated tests and a very nice build system for the x86 and embedded stm32 ports. About half the tests run on a stm32 Discovery card and ensure the stm32 port keeps running as we push the x86 side of the project forward.

The SM1000 hardware was developed by myself and Rick Barnich KA8BMA a few years ago. It is being manufactured, tested and shipped by our good friend Edwin at Dragino in Shenzhen, China.

The FreeDV 700D port includes some sophisticated signal processing:

  • The Codec 2 700C speech codec.
  • The OFDM modem developed by Steve and I that has been optimised for HF radio.
  • State of the art LDPC forward error correction software designed by Bill, VK5DSP.

Largely through Don’s effort, that is all running on a tiny 168MHz micro-controller in just 192k of RAM, with identical results to the x86 version.

I’ve really enjoyed working with the team on this project, in particular through GitHub. I like GitHub more than Git – which I still get into trouble with occasionally.

The stm32 side of the Codec 2 project is much improved with a shiny new build system, up to date documentation, and we have the automated tests to ensure we keep it that way. Thanks guys!

Here’s my SM1000 on the bench during development. I’ve hooked up a serial port to print debug messages, and am using laptops to feed it with FreeDV signals:

Links
SM1000 page and store
SM1000 Manual
Porting a LDPC Decoder to a STM32 Microcontroller Don tells us how he squeezed sophisticated LDPC error correction into a microcontroller.
Pull Request where the integration work was done for the V2 release.
2019 Codec2 and FreeDV Update
FreeDV 700D – a description of this new mode.
SM1000 Development – scan the archives around 2014/2015 for many posts on SM1000 hardware development.

July 16, 2019

OFDM Frequency Acquisition Revisited

I’ve had some anecdotal reports (from the UK and VK5) and a sample suggesting 700D sync is very slow (around 10s) on some fading HF channels, with problems even at high SNRs. This was puzzling to me, as I carefully developed the 700D modem against models of fast fading (1 Hz Doppler bandwidth) HF channels.

So I took a look at the samples, and revisited the OFDM modem acquisition simulations. I ran a bunch of tests using the core timing and freq offset estimators, for example:

octave:12> ofdm_dev; acquisition_test(Ntests=100, EbNodB=10, foff_hz=0, hf_en=2,verbose=1);

I added a slow fading channel option (hf=2). Turns out slow fading (e.g. 0.1Hz Doppler) leads to relatively “stationary” notches in the spectrum that confuse the estimator. The net result is many seconds with a poor frequency offset and hence no sync. Curiously, it’s less of a problem on fast fading channels. The channel conditions change so fast that we rapidly get to a channel state that the estimator likes and sync proceeds.

Here is a spectrogram of a slow fading channel, note the deep notch that appears around 10 seconds. It’s chomping out a big chunk of the signal and slowly moves across the signal for a few seconds. Our challenge is to estimate the frequency offset with half the signal missing!

Beneath that is a plot of frequency offset estimate for the same channel. It’s meant to be about zero, but during the slow, frequency selective fade it gets stuck at around -3Hz for several seconds.


Once I could reproduce the problem I worked out a few improvements, and added some ctests to make sure we trap any similar problems in future. The steps I took are described in the notes of this Codec 2 Pull Request. The major changes are:

  1. A new frequency offset estimator.
  2. This still has some residual frequency offset, so I added a two-speed phase estimator to rapidly track out any residual fine frequency errors soon after sync (see the sketch after this list).
  3. Increased the frequency offset estimation range to +/-60Hz, to make tuning easier for FreeDV 700D and FreeDV 2020.
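To give a feel for the two-speed idea in step 2, here is a toy first-order tracker in Python. It is purely illustrative (nothing like the actual modem code): a fast gain pulls the estimate in quickly just after sync, then a slow gain takes over so noise doesn't perturb it.

def two_speed_tracker(measurements, fast=0.5, slow=0.05, settle_frames=10):
    """Toy two-speed estimator: adapt quickly for the first few frames, then slowly."""
    est = 0.0
    for n, m in enumerate(measurements):
        gain = fast if n < settle_frames else slow
        est += gain * (m - est)    # first-order update towards the measurement
        yield est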

The following two plots measure the probability of getting a valid frequency offset estimate (on each frame) for various Eb/No (previous algorithm, and new improved algorithm):


Steve (K5OKC) and I worked on optimising the acquisition functions, to ensure they would run in real time on the stm32 for the upcoming 700D port. I’m happy with the result: improved sync, wider frequency range, some automated tests to trap similar issues in future, and running in real time even on the stm32. Nice that we can incrementally improve this modem. It’s available now if building codec2 and freedv-gui from source.

Acquisition is hard. Much harder than many other parts of modem design. For my next modem (or FreeDV waveform) I’ll get acquisition right first, then design the rest of the waveform (e.g. number of FEC/payload bits, carriers) around that.

Running the Acquisition Simulations

Writing some notes to myself so I remember how to do this next time (and there’s always a next time with acquisition!).

Generate a bunch of curves measuring acquisition performance using simulated HF and AWGN channels:

octave:12> ofdm_dev; acquistion_curves

Test complete sync (core estimators integrated with state machine) at a certain time offset:

octave:12> ofdm_ldpc_rx("~/Desktop/700D/vk2tpm_004.wav", "700D", 1, "", 3, 5)

Test complete sync at a range of time offsets to gather stats, e.g. mean and variance of sync time. I used real, off air signals:

octave:12> ofdm_time_sync("~/Desktop/700D/vk2tpm_004.wav", 30)
octave:13> ofdm_time_sync("~/Desktop/mike/websdr_recording_2019-04-21T09_11_56Z_3625.0kHz.wav", 30)

Links

FreeDV 700D Part 4 – Acquisition My last lap around acquisition.

July 14, 2019

Installing Debian buster on a GnuBee PC 2

Here is how I installed Debian 10 / buster on my GnuBee Personal Cloud 2, a free hardware device designed as a network file server / NAS.

Flashing the LibreCMC firmware with Debian support

Before we can install Debian, we need a firmware that includes all of the necessary tools.

On another machine, do the following:

  1. Download the latest librecmc-ramips-mt7621-gb-pc1-squashfs-sysupgrade_*.bin.
  2. Mount a vfat-formatted USB stick.
  3. Copy the file onto it and rename it to gnubee.bin.
  4. Unmount the USB stick

Then plug a network cable between your laptop and the black network port and plug the USB stick into the GnuBee before rebooting the GnuBee via ssh:

ssh 192.168.10.0
reboot

If you have a USB serial cable, you can use it to monitor the flashing process:

screen /dev/ttyUSB0 57600

otherwise keep an eye on the LEDs and wait until they are fully done flashing.

Getting ssh access to LibreCMC

Once the firmware has been updated, turn off the GnuBee manually using the power switch and turn it back on.

Now enable SSH access via the built-in LibreCMC firmware:

  1. Plug a network cable between your laptop and the black network port.
  2. Open web-based admin panel at http://192.168.10.0.
  3. Go to System | Administration.
  4. Set a root password.
  5. Disable ssh password auth and root password logins.
  6. Paste in your RSA ssh public key.
  7. Click Save & Apply.
  8. Go to Network | Firewall.
  9. Select "accept" for WAN Input.
  10. Click Save & Apply.

Finally, go to Network | Interfaces and note the ipv4 address of the WAN port since that will be needed in the next step.

Installing Debian

The first step is to install Debian jessie on the GnuBee.

Connect the blue network port into your router/switch and ssh into the GnuBee using the IP address you noted earlier:

ssh root@192.168.1.xxx

and the root password you set in the previous section.

Then use fdisk /dev/sda to create the following partition layout on the first drive:

Device       Start       End   Sectors   Size Type
/dev/sda1     2048   8390655   8388608     4G Linux swap
/dev/sda2  8390656 234441614 226050959 107.8G Linux filesystem

Note that I used a 120GB solid-state drive as the system drive in order to minimize noise levels.

Then format the swap partition:

mkswap /dev/sda1

and download the latest version of the jessie installer:

wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/GnuBee_Docs/master/GB-PCx/scripts/jessie_3.10.14/debian-jessie-install

(Yes, the --no-check-certificate is really unfortunate. Please leave a comment if you find a way to work around it.)

The stock installer fails to bring up the correct networking configuration on my network and so I have modified the install script by changing the eth0.1 blurb to:

auto eth0.1
iface eth0.1 inet static
    address 192.168.10.1
    netmask 255.255.255.0

Then you should be able to run the installer successfully:

sh ./debian-jessie-install

and reboot:

reboot

Restore ssh access in Debian jessie

Once the GnuBee has finished booting, login using the serial console:

  • username: root
  • password: GnuBee

and change the root password using passwd.

Look for the IPv4 address of eth0.2 in the output of the ip addr command and then ssh into the GnuBee from your desktop computer:

ssh root@192.168.1.xxx  # type password set above
mkdir .ssh
vim .ssh/authorized_keys  # paste your ed25519 ssh pubkey

Finish the jessie installation

With this in place, you should be able to ssh into the GnuBee using your public key:

ssh root@192.168.1.172

and then finish the jessie installation:

wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/gnubee-git.github.io/master/debian/debian-modules-install
bash ./debian-modules-install
reboot

After rebooting, I made a few tweaks to make the system more pleasant to use:

update-alternatives --config editor  # choose vim.basic
dpkg-reconfigure locales  # enable the locale that your desktop is using

Upgrade to stretch and then buster

To upgrade to stretch, put this in /etc/apt/sources.list:

deb http://httpredir.debian.org/debian stretch main
deb http://httpredir.debian.org/debian stretch-updates main
deb http://security.debian.org/ stretch/updates main

Then upgrade the packages:

apt update
apt full-upgrade
apt autoremove
reboot

To upgrade to buster, put this in /etc/apt/sources.list:

deb http://httpredir.debian.org/debian buster main
deb http://httpredir.debian.org/debian buster-updates main
deb http://security.debian.org/debian-security buster/updates main

and upgrade the packages:

apt update
apt full-upgrade
apt autoremove
reboot

Next steps

At this point, my GnuBee is running the latest version of Debian stable, however there are two remaining issues to fix:

  1. openssh-server doesn't work and I am forced to access the GnuBee via the serial interface.

  2. The firmware is running an outdated version of the Linux kernel though this is being worked on by community members.

I hope to resolve these issues soon, and will update this blog post once I do, but you are more than welcome to leave a comment if you know of a solution I may have overlooked.

July 11, 2019

Gurobi Installation and Tests on a HPC system

Gurobi is an optimisation solver, which describes itself as follows, thus explaining its increasing popularity:

The Gurobi Optimizer is a state-of-the-art solver for mathematical programming. The solvers in the Gurobi Optimizer were designed from the ground up to exploit modern architectures and multi-core processors, using the most advanced implementations of the latest algorithms.

The following outlines the installation procedure on a Linux cluster, various licensing conundrums, and a sample job using Slurm.

Installation

In our case we acquired a floating academic license. A form will need to be filled out, scanned, and sent back to Gurobi. This is all sub-optimal and, like any partially proprietary software, it's damaged goods. But certainly it's not as bad as it could be; small mercies.

Having received a license file and downloaded the software, one can install it. The following is an EasyBuild script (Gurobi-8.1.1.eb), but it's basically a tarball which contains several binaries, docs, examples etc.


name = 'Gurobi'
version = '8.1.1'
easyblock = 'Tarball'
homepage = 'http://www.gurobi.com'
description = """The Gurobi Optimizer is a state-of-the-art solver for mathematical programming. The solvers in the Gurobi Optimizer were designed from the ground up to exploit modern architectures and multi-core processors, using the most advanced implementations of the latest algorithms."""
toolchain = {'name': 'dummy', 'version': 'dummy'}
# registration is required
# source_urls = ['http://www.gurobi.com/downloads/user/gurobi-optimizer']
sources = ['%(namelower)s%(version)s_linux64.tar.gz']
moduleclass = 'math'

Licensing

As is often the case with proprietary software, the greatest pain for sysadmins will be dealing with the license. Even in those cases where this is quicker than the software install itself, at least with the software you know that the work is necessary. With licenses, it's unnecessary work, and I keep a sharp eye on the number of expected seconds I have left in my life. For anyone else reading this, hopefully I've saved a few for you.

Like most sensible HPC systems the management node is not directly accessible to the outside world. A license file will need to be created from the grbprobe command and relevant material added into a gurobi.lic file. The following is an example:


# DO NOT EDIT THIS FILE except as noted
#
# License ID XXXXXX
TYPE=TOKEN
VERSION=8
TOKENSERVER=spartan-build.hpc.unimelb.edu.au
HOSTNAME=spartan-build.hpc.unimelb.edu.au
HOSTID=XXXXXXXX
SOCKETS=2
EXPIRATION=2020-04-18
USELIMIT=4096
DISTRIBUTED=100
SPECIAL=2
KEY=XXXXXXXX
CKEY=XXXXXXXX
# Uncomment and edit the following lines as desired:
PORT=XXXXX
# # PASSWORD=YourPrivatePassword

Gurobi strongly prefers that the license is installed in a /opt/gurobi directory according to their documentation, but on an HPC system it is doubtful that this is mounted across compute nodes. Thus a path will have to be exported with the appropriate variable when run. In the meantime, the token server can be started:


(vSpartan) [root@spartan-build gurobi]# module load Gurobi
(vSpartan) [root@spartan-build gurobi]# grb_ts
..

Smoke Test

With the token server running, it should be possible to run a Gurobi task. Note however that in an HPC environment where the management node is private and running the token server, and the login node is public, you may encounter a subnet error.


[lev@spartan-login1 ~]$ module load Gurobi
[lev@spartan-login1 ~]$ export GRB_LICENSE_FILE=/usr/local/easybuild/software/Gurobi/gurobi.lic
[lev@spartan-login1 ~]$ gurobi_cl
ERROR 10009: Server must be on the same subnet

Thus in these situations it is best to run on a compute node after launching an interactive job.


[lev@spartan-login1 ~]$ sinteractive
..
[lev@spartan-rc110 ~] module load Gurobi
[lev@spartan-rc110 ~] export GRB_LICENSE_FILE=/usr/local/easybuild/software/Gurobi/gurobi.lic
[lev@spartan-rc110 ~]$ gurobi_cl
Usage: gurobi_cl [--command]* [param=value]* filename
Type 'gurobi_cl --help' for more information.

Slurm Job Script

A number of example Gurobi jobs are provided in the application, and the misc07.mps file is a good example for a speed test. Note that this task is pleasingly parallel and runs much faster as a multicore job compared to a single-core job. Here is a sample Slurm script, gurobi.slurm. For the test case, it is worth testing the sample job with a single CPU vs eight or more.


#!/bin/bash
#SBATCH -p cloud
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
module load Gurobi/7.0.1
export GRB_LICENSE_FILE=/usr/local/easybuild/software/Gurobi/gurobi.lic
time gurobi_cl misc07.mps

The following are some sample results:

cpus-per-task=1

real 0m9.255s
user 0m8.220s
sys 0m1.009s

cpus-per-task=8

real 0m2.558s
user 0m14.593s
sys 0m3.778s

Restart License File

One other issue that's too easy to overlook is that if the management node is restarted for any reason (e.g., a planned outage for system upgrades), the Gurobi license server will have to be restarted. One method to do this is to write a short script and add it to the list of services that are required on boot. It may be necessary to source a profile in order to invoke the modules system in a non-interactive shell e.g.,


#!/bin/bash
. /etc/profile.d/z00_lmod.sh
. /etc/profile.d/z01_spartan.sh
module load Gurobi/8.1.1
grb_ts

July 08, 2019

Red Hat, Red Heart.

Red Hat, Red Heart. kattekrab Mon, 08/07/2019 - 22:10

July 06, 2019

SIP Encryption on VoIP.ms

My VoIP provider recently added support for TLS/SRTP-based call encryption. Here's what I did to enable this feature on my Asterisk server.

First of all, I changed the registration line in /etc/asterisk/sip.conf to use the "tls" scheme:

[general]
register => tls://mydid:mypassword@servername.voip.ms

then I enabled incoming TCP connections:

tcpenable=yes

and TLS:

tlsenable=yes
tlscapath=/etc/ssl/certs/

Finally, I changed my provider entry in the same file to:

[voipms]
type=friend
host=servername.voip.ms
secret=mypassword
username=mydid
context=from-voipms
allow=ulaw
allow=g729
insecure=port,invite
transport=tls
encryption=yes

(Note the last two lines.)

The dialplan didn't change and so I still have the following in /etc/asterisk/extensions.conf:

[pstn-voipms]
exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _1NXXNXXXXXX,n,Dial(SIP/voipms/${EXTEN})
exten => _1NXXNXXXXXX,n,Hangup()
exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _NXXNXXXXXX,n,Dial(SIP/voipms/1${EXTEN})
exten => _NXXNXXXXXX,n,Hangup()
exten => _011X.,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _011X.,n,Authenticate(1234) ; require password for international calls
exten => _011X.,n,Dial(SIP/voipms/${EXTEN})
exten => _011X.,n,Hangup(16)

Server certificate

The only thing I still need to fix is to make this error message go away in my logs:

asterisk[8691]: ERROR[8691]: tcptls.c:966 in __ssl_setup: TLS/SSL error loading cert file. <asterisk.pem>

It appears to be related to the fact that I didn't set tlscertfile in /etc/asterisk/sip.conf and that it's using its default value of asterisk.pem, a non-existent file.

Since my Asterisk server is only acting as a TLS client, and not a TLS server, there's probably no harm in not having a certificate. That said, it looks pretty easy to use a Let's Encrypt cert with Asterisk.
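For what it's worth, pointing chan_sip at a certificate would look something like the following in the [general] section of /etc/asterisk/sip.conf; the paths here are placeholders for wherever a Let's Encrypt or self-signed cert ends up, and I haven't tested this on my setup:

tlscertfile=/etc/asterisk/asterisk.pem    ; certificate (plus key, if combined in one file)
tlsprivatekey=/etc/asterisk/asterisk.key  ; separate private key, if not in the .pem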

July 03, 2019

Audiobooks – June 2019

Robot Visions by Isaac Asimov

A collection of short Robot stores and very short essays. Lots of classic stories although the essays are mostly forgettable. 7/10

Foreigner by Robert J. Sawyer

An alien counterpart of Sigmund Freud psychoanalyzes her race’s equivalent of Galileo. 3rd in the trilogy. I like it enough. 7/10

In Your Defence: Stories of Life and Law by Sarah Langford

An English Barrister describes 11 cases she has worked on. The lives and cases are mostly tragic but the writing is very compelling. 8/10

The Unthinkable: Who Survives When Disaster Strikes and Why by Amanda Ripley

A wide tour of the various ways people react in disasters, from ignoring to freezing. Lots of interesting stories, some investigations into the psychology and some practical advice. 8/10

The Fellowship of the Ring by J.R.R Tolkien. Narrated by Rob Inglis.

The first time I’ve ever listened to this version. Excellent in every way. 10/10

Podcasting: The Ultimate Guide to Record, Produce, and Launch Your Podcast and Build Raving Fans by Martin C. Glover

A quick (40 minutes) intro to podcasting, with some do's and don'ts for prospective podcasters. Worth a listen if you are new to the topic and considering it. 6/10

Nothing is real: The Beatles Were Underrated And Other Sweeping Statements About Pop by David Hepworth

A collection of essays, many about the Beatles but covering lots of other Pop-Music topics. A lot of good ones in there. 7/10

Safely to Earth: The Men and Women Who Brought the Astronauts Home by Jack Clemons

A memoir of an engineer who worked on the Shuttle and Apollo programs, about his time there and what he worked on, including the shuttle software. 7/10


The Two Towers by J.R.R Tolkien. Narrated by Rob Inglis.

10/10

Share

July 01, 2019

LUV July 2019 Workshop: Debian 10 "Buster"

Jul 20 2019 12:30
Jul 20 2019 16:30
Jul 20 2019 12:30
Jul 20 2019 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

 

Celebrating the release of Debian 10 "Buster":

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

July 20, 2019 - 12:30

read more

LUV July 2019 Main Meeting: Automated Linux Firewall Failover

Jul 2 2019 19:00
Jul 2 2019 21:00
Jul 2 2019 19:00
Jul 2 2019 21:00
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE LATER START TIME

7:00 PM to 9:00 PM Tuesday, July 2, 2019
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

  • Adrian Close: Automated Linux Firewall Failover

 

Automated Linux Firewall Failover

Many of us like to go for dinner nearby after the meeting, typically at Brunetti's or Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.

July 2, 2019 - 19:00

read more

June 29, 2019

Long-term Device Use

It seems to me that Android phones have recently passed the stage where hardware advances are well ahead of software bloat. This is the point that desktop PCs passed about 15 years ago and laptops passed about 8 years ago. For just over 15 years I’ve been avoiding buying desktop PCs, the hardware that organisations I work for throw out is good enough that I don’t need to. For the last 8 years I’ve been avoiding buying new laptops, instead buying refurbished or second hand ones which are more than adequate for my needs. Now it seems that Android phones have reached the same stage of development.

3 years ago I purchased my last phone, a Nexus 6P [1]. Then 18 months ago I got a Huawei Mate 9 as a warranty replacement [2] (I had swapped phones with my wife so the phone I was using which broke was less than a year old). The Nexus 6P had been working quite well for me until it stopped booting, but I was happy to have something a little newer and faster to replace it at no extra cost.

Prior to the Nexus 6P I had a Samsung Galaxy Note 3 for 1 year 9 months which was a personal record for owning a phone and not wanting to replace it. I was quite happy with the Note 3 until the day I fell on top of it and cracked the screen (it would have been ok if I had just dropped it). While the Note 3 still has my personal record for continuous phone use, the Nexus 6P/Huawei Mate 9 have the record for going without paying for a new phone.

A few days ago when browsing the Kogan web site I saw a refurbished Mate 10 Pro on sale for about $380. That’s not much money (I usually have spent $500+ on each phone) and while the Mate 9 is still going strong the Mate 10 is a little faster and has more RAM. The extra RAM is important to me as I have problems with Android killing apps when I don’t want it to. Also the IP67 protection will be a handy feature. So that phone should be delivered to me soon.

Some phones are getting ridiculously expensive nowadays (who wants to walk around with a $1000+ Pixel?) but it seems that the slightly lower end models are more than adequate and the older versions are still good.

Cost Summary

If I can buy a refurbished or old model phone every 2 years for under $400 that will make using a phone cost about $0.50 per day. The Nexus 6P cost me $704 in June 2016 which means that for the past 3 years my phone cost was about $0.62 per day.

It seems that laptops tend to last me about 4 years [3], and I don’t need high-end models (I even used one from a rubbish pile for a while). The last laptops I bought cost me $289 for a Thinkpad X1 Carbon [4] and $306 for the Thinkpad T420 [5]. That makes laptops about $0.20 per day.

In May 2014 I bought a Samsung Galaxy Note 10.1 2014 edition tablet for $579. That is still working very well for me today; apart from only having 32G of internal storage space and an OS update preventing Android apps from writing to the micro SD card (so I have to use USB to copy TV shows on to it) there's nothing more that I need from a tablet. Strangely I even get good battery life out of it, I can use it for a couple of hours without the battery running out. Battery life isn't nearly as good as when it was new, but it's still OK for my needs. As Samsung stopped providing security updates I can't use the tablet as an SSH client, but now that my primary laptop is a small and light model that's less of an issue. Currently that tablet has cost me just over $0.30 per day and it's still working well.

Currently it seems that my hardware expense for the foreseeable future is likely to be about $1 per day: 20 cents for laptop, 30 cents for tablet, and 50 cents for phone. The overall expense is about $1.66 per day as I'm on a $20 per month pre-paid plan with Aldi Mobile.
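As a rough check, the per-day figures above fall out of the prices quoted in this post (give or take how generously you count the months):

days_per_year = 365
print(400 / (2 * days_per_year))   # future phone: <$400 every 2 years   -> roughly $0.55/day
print(704 / (3 * days_per_year))   # Nexus 6P: $704 over 3 years         -> roughly $0.64/day
print(306 / (4 * days_per_year))   # Thinkpad T420: $306 over 4 years    -> roughly $0.21/day
print(579 / (5 * days_per_year))   # Galaxy Note 10.1: $579 over 5 years -> roughly $0.32/day
print(20 * 12 / days_per_year)     # $20/month Aldi Mobile plan          -> roughly $0.66/day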

Saving Money

A laptop is very important to me, the amounts of money that I’m spending don’t reflect that. But it seems that I don’t have any option for spending more on a laptop (the Thinkpad X1 Carbon I have now is just great and there’s no real option for getting more utility by spending more). I also don’t have any option to spend less on a tablet, 5 years is a great lifetime for a device that is practically impossible to repair (repair will cost a significant portion of the replacement cost).

I hope that the Mate 10 can last at least 2 years which will make it a new record for low cost of ownership of a phone for me. If app vendors can refrain from making their bloated software take 50% more RAM in the next 2 years that should be achievable.

The surprising thing I learned while writing this post is that my mobile phone expense is the largest of all my expenses related to mobile computing. Given that I want to get good reception in remote areas (needs to be Telstra or another company that uses their network) and that I need at least 3GB of data transfer per month it doesn’t seem that I have any options for reducing that cost.

June 26, 2019

Installing NixOS on a Headless Raspberry Pi 3

NixOS Raspberry Pi Gears by Craige McWhirter

This represents the first step in being able to build ready-to-run NixOS images for headless Raspberry Pi 3 devices. Aarch64 images for NixOS need to be built natively on aarch64 hardware so the first Pi 3, the subject of this post, will need a keyboard and mouse attached for two commands.

A fair chunk of this post is collated from NixOS on ARM and NixOS on ARM/Raspberry Pi into a coherent, flowing process with additional steps related to the goal of this being a headless Raspberry Pi 3.

Head to Hydra job nixos:release-19.03:nixos.sd_image.aarch64-linux and download the latest successful build. ie:

 $ wget https://hydra.nixos.org/build/95346103/download/1/nixos-sd-image-19.03.172980.d5a3e5f476b-aarch64-linux.img

You will then need to write this to your SD Card:

# dd if=nixos-sd-image-19.03.172980.d5a3e5f476b-aarch64-linux.img of=/dev/sdX status=progress

Make sure you replace "/dev/sdX" with the correct location of your SD card.

Once the SD card has been written, attach the keyboard and screen, insert the SD card into the Pi and boot it up.

When the boot process has been completed, you will be thrown to a root prompt where you need to set a password for root and start the ssh service:

[root@pi-tri:~]#
[root@pi-tri:~]# passwd
New password:
Retype new password:
passwd: password updated successfully

[root@pi-tri:~]# systemctl start sshd

You can now complete the rest of this process from the comfort of wherever you normally work.

After successfully ssh-ing in and examining your disk layout with lsblk, the first step is to remove the undersized, FAT32 /boot partition:

# fdisk -l /dev/mmcblk0
Disk /dev/mmcblk0: 7.4 GiB, 7948206080 bytes, 15523840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2178694e

Device         Boot  Start      End  Sectors  Size Id Type
/dev/mmcblk0p1 *     16384   262143   245760  120M  b W95 FAT32
/dev/mmcblk0p2      262144 15522439 15260296  7.3G 83 Linux


# echo -e 'a\n1\na\n2\nw' | fdisk /dev/mmcblk0

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): Partition number (1,2, default 2):
The bootable flag on partition 1 is disabled now.

Command (m for help): Partition number (1,2, default 2):
The bootable flag on partition 2 is enabled now.

Command (m for help): The partition table has been altered.
Syncing disks.

# fdisk -l /dev/mmcblk0
Disk /dev/mmcblk0: 7.4 GiB, 7948206080 bytes, 15523840 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2178694e

Device         Boot  Start      End  Sectors  Size Id Type
/dev/mmcblk0p1       16384   262143   245760  120M  b W95 FAT32
/dev/mmcblk0p2 *    262144 15522439 15260296  7.3G 83 Linux

Next we need to configure NixOS to boot the basic system we need with ssh enabled, root and a single user and disks configured correctly. I have this example file which at the time of writing looked like this:

# This is an example of a basic NixOS configuration file for a Raspberry Pi 3.
# It's best used as your first configuration.nix file and provides ssh, root
# and user accounts as well as Pi 3 specific tweaks.

{ config, pkgs, lib, ... }:

{
  # NixOS wants to enable GRUB by default
  boot.loader.grub.enable = false;
  # Enables the generation of /boot/extlinux/extlinux.conf
  boot.loader.generic-extlinux-compatible.enable = true;

  # For a Raspberry Pi 2 or 3):
  boot.kernelPackages = pkgs.linuxPackages_latest;

  # !!! Needed for the virtual console to work on the RPi 3, as the default of 16M doesn't seem to be enough.
  # If X.org behaves weirdly (I only saw the cursor) then try increasing this to 256M.
  boot.kernelParams = ["cma=32M"];

  # File systems configuration for using the installer's partition layout
  fileSystems = {
    "/" = {
      device = "/dev/disk/by-label/NIXOS_SD";
      fsType = "ext4";
    };
  };

  # !!! Adding a swap file is optional, but strongly recommended!
  swapDevices = [ { device = "/swapfile"; size = 1024; } ];

  hardware.enableRedistributableFirmware = true; # Enable support for Pi firmware blobs

  networking.hostName = "nixosPi";     # Define your hostname.
  networking.wireless.enable = false;  # Toggles wireless support via wpa_supplicant.

  # Select internationalisation properties.
  i18n = {
    consoleFont = "Lat2-Terminus16";
    consoleKeyMap = "us";
    defaultLocale = "en_AU.UTF-8";
  };

  time.timeZone = "Australia/Brisbane"; # Set your preferred timezone:

  # List services that you want to enable:
  services.openssh.enable = true;  # Enable the OpenSSH daemon.

  # Configure users for your Pi:
   users.mutableUsers = false;     # Remove any users not defined in here

  users.users.root = {
    hashedPassword = "$6$eeqJLxwQzMP4l$GTUALgbCfaqR8ut9kQOOG8uXOuqhtIsIUSP.4ncVaIs5PNlxdvAvV.krfutHafrxNN7KzaM7uksr6bXP5X0Sx1";
    openssh.authorizedKeys.keys = [
      "ssh-ed25519 Voohu4vei4dayohm3eeHeecheifahxeetauR4geigh9eTheey3eedae4ais7pei4ruv4 me@myhost"
    ];
  };

  # Groups to add
  users.groups.myusername.gid = 1000;

  # Define a user account.
  users.users.myusername = {
    isNormalUser = true;
    uid = 1000;
    group = "myusername";
    extraGroups = ["wheel" ];
    hashedPassword = "$6$l2I7i6YqMpeviVy$u84FSHGvZlDCfR8qfrgaP.n7/hkfGpuiSaOY3ziamwXXHkccrOr8Md4V5G2M1KcMJQmX5qP7KOryGAxAtc5T60";
    openssh.authorizedKeys.keys = [
      "ssh-ed25519 Voohu4vei4dayohm3eeHeecheifahxeetauR4geigh9eTheey3eedae4ais7pei4ruv4 me@myhost"
    ];
  };

  # This value determines the NixOS release with which your system is to be
  # compatible, in order to avoid breaking some software such as database
  # servers. You should change this only after NixOS release notes say you
  # should.
  system.stateVersion = "19.03"; # Did you read the comment?
  system.autoUpgrade.enable = true;
  system.autoUpgrade.channel = https://nixos.org/channels/nixos-19.03;
}

Once this is copied into place, you only need to rebuild NixOS using it by running:

# nixos-rebuild switch

Now you should have a headless Pi 3 which you can use to build SD card images for other Pi 3s that are fully configured and ready to run.

June 25, 2019

Linux Security Summit North America 2019: Schedule Published

The schedule for the 2019 Linux Security Summit North America (LSS-NA) is published.

This year, there are some changes to the format of LSS-NA. The summit runs for three days instead of two, which allows us to relax the schedule somewhat while also adding new session types.  In addition to refereed talks, short topics, BoF sessions, and subsystem updates, there are now also tutorials (one each day), unconference sessions, and lightning talks.

The tutorial sessions are:

These tutorials will be 90 minutes in length, and they’ll run in parallel with unconference sessions on the first two days (when the space is available at the venue).

The refereed presentations and short topics cover a range of Linux security topics including platform boot security, integrity, container security, kernel self protection, fuzzing, and eBPF+LSM.

Some of the talks I’m personally excited about include:

The schedule last year was pretty crammed, so with the addition of the third day we’ve been able to avoid starting early, and we’ve also added five minute transitions between talks. We’re hoping to maximize collaboration via the more relaxed schedule and the addition of more types of sessions (unconference, tutorials, lightning talks).  This is not a conference for simply consuming talks, but to also participate and to get things done (or started).

Thank you to all who submitted proposals.  As usual, we had many more submissions than can be accommodated in the available time.

Also thanks to the program committee, who spent considerable time reviewing and discussing proposals, and working out the details of the schedule. The committee for 2019 is:

  • James Morris (Microsoft)
  • Serge Hallyn (Cisco)
  • Paul Moore (Cisco)
  • Stephen Smalley (NSA)
  • Elena Reshetova (Intel)
  • John Johansen (Canonical)
  • Kees Cook (Google)
  • Casey Schaufler (Intel)
  • Mimi Zohar (IBM)
  • David A. Wheeler (Institute for Defense Analyses)

And of course many thanks to the event folk at Linux Foundation, who handle all of the logistics of the event.

LSS-NA will be held in San Diego, CA on August 19-21. To register, click here. Or you can register for the co-located Open Source Summit and add LSS-NA.

 

June 23, 2019

X-Axis is now ready!

The thread plate is now mounted to the base with thread lock in select locations. The top can still come off easily so I can drill holes to mount the gantry to the alloy tongue that comes out the bottom middle (there is one on the other side too).


Without the 75mm by 50mm by 1/4 inch 6061 alloy angle brackets you could flex the steel in the middle. Now, well... it is not so easy for a human to apply enough force to do it. The thread plate is only supported by 4 colonnades at the left and right side. The middle is unsupported to allow the gantry to travel 950mm along. I think the next build will be more a vertical mill style than sliding gantry to avoid these rigidity challenges.


June 20, 2019

Booting a NixOS aarch64 Image in Qemu

NixOS Gears by Craige McWhirter

To boot a NixOS aarch64 image in qemu, in this example, a Raspberry Pi3 (B), you can use the following command:

 qemu-system-aarch64 -M raspi3 -drive format=raw,file=NIXOS.IMG \
 -kernel ./u-boot-rpi3.bin -serial stdio -d in_asm -m 1024

You will need to replace NIXOS.IMG with the name of the image file you downloaded ie: nixos-sd-image-18.09.2568.1e9e709953e-aarch64-linux.img

You will also need to mount the image file and copy out u-boot-rpi3.bin for the -kernel option.

A nerd snipe, in which I reverse engineer the Aussie Broadband usage API

Share

I was curious about the newly available FTTN NBN service in my area, so I signed up to see what’s what. Of course, I need a usage API so that I can graph my usage in prometheus and grafana as everyone does these days. So I asked Aussie. The response I got was that I was welcome to reverse engineer the REST API that the customer portal uses.

So I did.

I give you my super simple implementation of an Aussie Broadband usage client in Python. Patches of course are welcome.

I’ve now released the library on pypi under the rather innovative name of “aussiebb”, so installing it is as simple as:

$ pip install aussiebb

Share

June 18, 2019

TEN THOUSAND DISKS

In OpenPOWER land we have a project called op-test-framework which (for all its strengths and weaknesses) allows us to test firmware on a variety of different hardware platforms and even emulators like Qemu.

Qemu is a fantastic tool allowing us to relatively quickly test against an emulated POWER model, and of course is a critical part of KVM virtual machines running natively on POWER hardware. However the default POWER model in Qemu is based on the "pseries" machine type, which models something closer to a virtual machine or a PowerVM partition rather than a "bare metal" machine.

Luckily we have Cédric Le Goater who is developing and maintaining a Qemu "powernv" machine type which more accurately models running directly on an OpenPOWER machine. It's an unwritten rule that if you're using Qemu in op-test, you've compiled this version of Qemu!

Teething Problems

Because the "powernv" type does more accurately model the physical system, some extra care needs to be taken when setting it up. In particular, at one point we noticed that the pretend CDROM and disk drive we attached to the model were... not being attached. This commit took care of that; the problem was that the PCI topology defined by the layout required us to be more exact about where PCI devices were to be added. By default only three spare PCI "slots" are available but as the commit says, "This can be expanded by adding bridges"...

More Slots!

Never one to stop at a just-enough solution, I wondered how easy it would be to add an extra PCI bridge or two to give the Qemu model more available slots for PCI devices. It turns out, easy enough once you know the correct invocation. For example, adding a PCI bridge in the first slot of the first default PHB is:

-device pcie-pci-bridge,id=pcie.3,bus=pcie.0,addr=0x0

And inserting a device in that bridge just requires us to specify the bus and slot:

-device virtio-blk-pci,drive=cdrom01,id=virtio02,bus=pcie.4,addr=3

Great! Each bridge provides 31 slots, so now we have plenty of room for extra devices.

Why Stop There?

We have three free slots, and we don't have a strict requirement on where devices are plugged in, so let's just plug a bridge into each of those slots while we're here:

-device pcie-pci-bridge,id=pcie.3,bus=pcie.0,addr=0x0 \
-device pcie-pci-bridge,id=pcie.4,bus=pcie.1,addr=0x0 \
-device pcie-pci-bridge,id=pcie.5,bus=pcie.2,addr=0x0

What happens if we insert a new PCI bridge into another PCI bridge? Aside from stressing out our PCI developers, a bunch of extra slots! And then we could plug bridges into those bridges and then..
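
As a rough sketch of where that leads, here's the sort of shell loop that fills one extra bridge with virtio disks (the disk file names and ids are made up, and I'm deliberately leaving slot 0 of the new bridge free for yet another bridge):

ARGS="-device pcie-pci-bridge,id=pcie.3,bus=pcie.0,addr=0x0"
for i in $(seq 1 30); do
    qemu-img create -f qcow2 disk$i.qcow2 1M
    ARGS="$ARGS -drive file=disk$i.qcow2,format=qcow2,if=none,id=drive$i"
    ARGS="$ARGS -device virtio-blk-pci,drive=drive$i,bus=pcie.3,addr=$(printf 0x%x $i)"
done
qemu-system-ppc64 -M powernv -nographic $ARGS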


Thus was born "OpTestQemu: Add PCI bridges to support more devices." and the testcase "Petitboot10000Disks". The changes to the Qemu model setup fill up each PCI bridge as long as we have devices to add, but reserve the first slot to add another bridge if we run out of room... and so on..

Officially this is to support adding interesting disk topologies to test Petitboot use cases, stress test device handling, and so on, but while we're here... what happens with 10,000 temporary disks?

======================================================================
ERROR: testListDisks (testcases.Petitboot10000Disks.ConfigEditorTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/sam/git/op-test-framework/testcases/Petitboot10000Disks.py", line 27, in setUp
    self.system.goto_state(OpSystemState.PETITBOOT_SHELL)
  File "/home/sam/git/op-test-framework/common/OpTestSystem.py", line 366, in goto_state
    self.state = self.stateHandlers[self.state](state)
  File "/home/sam/git/op-test-framework/common/OpTestSystem.py", line 695, in run_IPLing
    raise my_exception
UnknownStateTransition: Something happened system state="2" and we transitioned to UNKNOWN state.  Review the following for more details
Message="OpTestSystem in run_IPLing and the Exception=
"filedescriptor out of range in select()"
 caused the system to go to UNKNOWN_BAD and the system will be stopping."

Yeah that's probably to be expected without some more massaging. What about a more modest 512?

I: Resetting PHBs and training links...
[   55.293343496,5] PCI: Probing slots...
[   56.364337089,3] PHB#0000:02:01.0 pci_find_ecap hit a loop !
[   56.364973775,3] PHB#0000:02:01.0 pci_find_ecap hit a loop !
[   57.127964432,3] PHB#0000:03:01.0 pci_find_ecap hit a loop !
[   57.128545637,3] PHB#0000:03:01.0 pci_find_ecap hit a loop !
[   57.395489618,3] PHB#0000:04:01.0 pci_find_ecap hit a loop !
[   57.396048285,3] PHB#0000:04:01.0 pci_find_ecap hit a loop !
[   58.145944205,3] PHB#0000:05:01.0 pci_find_ecap hit a loop !
[   58.146465795,3] PHB#0000:05:01.0 pci_find_ecap hit a loop !
[   58.404954853,3] PHB#0000:06:01.0 pci_find_ecap hit a loop !
[   58.405485438,3] PHB#0000:06:01.0 pci_find_ecap hit a loop !
[   60.178957315,3] PHB#0001:02:01.0 pci_find_ecap hit a loop !
[   60.179524173,3] PHB#0001:02:01.0 pci_find_ecap hit a loop !
[   60.198502097,3] PHB#0001:02:02.0 pci_find_ecap hit a loop !
[   60.198982582,3] PHB#0001:02:02.0 pci_find_ecap hit a loop !
[   60.435096197,3] PHB#0001:03:01.0 pci_find_ecap hit a loop !
[   60.435634380,3] PHB#0001:03:01.0 pci_find_ecap hit a loop !
[   61.171512439,3] PHB#0001:04:01.0 pci_find_ecap hit a loop !
[   61.172029071,3] PHB#0001:04:01.0 pci_find_ecap hit a loop !
[   61.425416049,3] PHB#0001:05:01.0 pci_find_ecap hit a loop !
[   61.425934524,3] PHB#0001:05:01.0 pci_find_ecap hit a loop !
[   62.172664549,3] PHB#0001:06:01.0 pci_find_ecap hit a loop !
[   62.173186458,3] PHB#0001:06:01.0 pci_find_ecap hit a loop !
[   63.434516732,3] PHB#0002:02:01.0 pci_find_ecap hit a loop !
[   63.435062124,3] PHB#0002:02:01.0 pci_find_ecap hit a loop !
[   64.177567772,3] PHB#0002:03:01.0 pci_find_ecap hit a loop !
[   64.178099773,3] PHB#0002:03:01.0 pci_find_ecap hit a loop !
[   64.431763989,3] PHB#0002:04:01.0 pci_find_ecap hit a loop !
[   64.432285000,3] PHB#0002:04:01.0 pci_find_ecap hit a loop !
[   65.180506790,3] PHB#0002:05:01.0 pci_find_ecap hit a loop !
[   65.181049905,3] PHB#0002:05:01.0 pci_find_ecap hit a loop !
[   65.432105600,3] PHB#0002:06:01.0 pci_find_ecap hit a loop !
[   65.432654326,3] PHB#0002:06:01.0 pci_find_ecap hit a loop !

(That isn't good)

[   66.177240655,5] PCI Summary:
[   66.177906083,5] PHB#0000:00:00.0 [ROOT] 1014 03dc R:00 C:060400 B:01..07 
[   66.178760724,5] PHB#0000:01:00.0 [ETOX] 1b36 000e R:00 C:060400 B:02..07 
[   66.179501494,5] PHB#0000:02:01.0 [ETOX] 1b36 000e R:00 C:060400 B:03..07 
[   66.180227773,5] PHB#0000:03:01.0 [ETOX] 1b36 000e R:00 C:060400 B:04..07 
[   66.180953149,5] PHB#0000:04:01.0 [ETOX] 1b36 000e R:00 C:060400 B:05..07 
[   66.181673576,5] PHB#0000:05:01.0 [ETOX] 1b36 000e R:00 C:060400 B:06..07 
[   66.182395253,5] PHB#0000:06:01.0 [ETOX] 1b36 000e R:00 C:060400 B:07..07 
[   66.183207399,5] PHB#0000:07:02.0 [PCID] 1af4 1001 R:00 C:010000 (          scsi) 
[   66.183969138,5] PHB#0000:07:03.0 [PCID] 1af4 1001 R:00 C:010000 (          scsi) 

(a lot more of this)

[   67.055196945,5] PHB#0002:02:1e.0 [PCID] 1af4 1001 R:00 C:010000 (          scsi) 
[   67.055926264,5] PHB#0002:02:1f.0 [PCID] 1af4 1001 R:00 C:010000 (          scsi) 
[   67.094591773,5] INIT: Waiting for kernel...
[   67.095105901,5] INIT: 64-bit LE kernel discovered
[   68.095749915,5] INIT: Starting kernel at 0x20010000, fdt at 0x3075d270 168365 bytes

zImage starting: loaded at 0x0000000020010000 (sp: 0x0000000020d30ee8)
Allocating 0x1dc5098 bytes for kernel...
Decompressing (0x0000000000000000 <- 0x000000002001f000:0x0000000020d2e578)...
Done! Decompressed 0x1c22900 bytes

Linux/PowerPC load: 
Finalizing device tree... flat tree at 0x20d320a0
[   10.120562] watchdog: CPU 0 self-detected hard LOCKUP @ pnv_pci_cfg_write+0x88/0xa4
[   10.120746] watchdog: CPU 0 TB:50402010473, last heartbeat TB:45261673150 (10039ms ago)
[   10.120808] Modules linked in:
[   10.120906] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.0.5-openpower1 #2
[   10.120956] NIP:  c000000000058544 LR: c00000000004d458 CTR: 0000000030052768
[   10.121006] REGS: c0000000fff5bd70 TRAP: 0900   Not tainted  (5.0.5-openpower1)
[   10.121030] MSR:  9000000002009033 <SF,HV,VEC,EE,ME,IR,DR,RI,LE>  CR: 48002482  XER: 20000000
[   10.121215] CFAR: c00000000004d454 IRQMASK: 1 
[   10.121260] GPR00: 00000000300051ec c0000000fd7c3130 c000000001bcaf00 0000000000000000 
[   10.121368] GPR04: 0000000048002482 c000000000058544 9000000002009033 0000000031c40060 
[   10.121476] GPR08: 0000000000000000 0000000031c40060 c00000000004d46c 9000000002001003 
[   10.121584] GPR12: 0000000031c40000 c000000001dd0000 c00000000000f560 0000000000000000 
[   10.121692] GPR16: 0000000000000000 0000000000000000 0000000000000001 0000000000000000 
[   10.121800] GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 
[   10.121908] GPR24: 0000000000000005 0000000000000000 0000000000000000 0000000000000104 
[   10.122016] GPR28: 0000000000000002 0000000000000004 0000000000000086 c0000000fd9fba00 
[   10.122150] NIP [c000000000058544] pnv_pci_cfg_write+0x88/0xa4
[   10.122187] LR [c00000000004d458] opal_return+0x14/0x48
[   10.122204] Call Trace:
[   10.122251] [c0000000fd7c3130] [c000000000058544] pnv_pci_cfg_write+0x88/0xa4 (unreliable)
[   10.122332] [c0000000fd7c3150] [c0000000000585d0] pnv_pci_write_config+0x70/0x9c
[   10.122398] [c0000000fd7c31a0] [c000000000234fec] pci_bus_write_config_word+0x74/0x98
[   10.122458] [c0000000fd7c31f0] [c00000000023764c] __pci_read_base+0x88/0x3a4
[   10.122518] [c0000000fd7c32c0] [c000000000237a18] pci_read_bases+0xb0/0xc8
[   10.122605] [c0000000fd7c3300] [c0000000002384bc] pci_setup_device+0x4f8/0x5b0
[   10.122670] [c0000000fd7c33a0] [c000000000238d9c] pci_scan_single_device+0x9c/0xd4
[   10.122729] [c0000000fd7c33f0] [c000000000238e2c] pci_scan_slot+0x58/0xf4
[   10.122796] [c0000000fd7c3430] [c000000000239eb8] pci_scan_child_bus_extend+0x40/0x2a8
[   10.122861] [c0000000fd7c34a0] [c000000000239e34] pci_scan_bridge_extend+0x4d4/0x504
[   10.122928] [c0000000fd7c3580] [c00000000023a0f8] pci_scan_child_bus_extend+0x280/0x2a8
[   10.122993] [c0000000fd7c35f0] [c000000000239e34] pci_scan_bridge_extend+0x4d4/0x504
[   10.123059] [c0000000fd7c36d0] [c00000000023a0f8] pci_scan_child_bus_extend+0x280/0x2a8
[   10.123124] [c0000000fd7c3740] [c000000000239e34] pci_scan_bridge_extend+0x4d4/0x504
[   10.123191] [c0000000fd7c3820] [c00000000023a0f8] pci_scan_child_bus_extend+0x280/0x2a8
[   10.123256] [c0000000fd7c3890] [c000000000239b5c] pci_scan_bridge_extend+0x1fc/0x504
[   10.123322] [c0000000fd7c3970] [c00000000023a064] pci_scan_child_bus_extend+0x1ec/0x2a8
[   10.123388] [c0000000fd7c39e0] [c000000000239b5c] pci_scan_bridge_extend+0x1fc/0x504
[   10.123454] [c0000000fd7c3ac0] [c00000000023a064] pci_scan_child_bus_extend+0x1ec/0x2a8
[   10.123516] [c0000000fd7c3b30] [c000000000030dcc] pcibios_scan_phb+0x134/0x1f4
[   10.123574] [c0000000fd7c3bd0] [c00000000100a800] pcibios_init+0x9c/0xbc
[   10.123635] [c0000000fd7c3c50] [c00000000000f398] do_one_initcall+0x80/0x15c
[   10.123698] [c0000000fd7c3d10] [c000000001000e94] kernel_init_freeable+0x248/0x24c
[   10.123756] [c0000000fd7c3db0] [c00000000000f574] kernel_init+0x1c/0x150
[   10.123820] [c0000000fd7c3e20] [c00000000000b72c] ret_from_kernel_thread+0x5c/0x70
[   10.123854] Instruction dump:
[   10.123885] 7d054378 4bff56f5 60000000 38600000 38210020 e8010010 7c0803a6 4e800020 
[   10.124022] e86a0018 54c6043e 7d054378 4bff5731 <60000000> 4bffffd8 e86a0018 7d054378 
[   10.124180] Kernel panic - not syncing: Hard LOCKUP
[   10.124232] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.0.5-openpower1 #2
[   10.124251] Call Trace:

I wonder if I can submit that bug without someone throwing something at my desk.

The X Axis is growing...

The new cnc X axis will be around a meter in length. This presents some issues with material selection as steel that is 1100mm long by 350mm wide and 5mm thick will flex when only supported by the black columns at each end. I have some brackets to shore that up so the fixture plate will not be pushed away or vibrate under cutting load.




The linear rails are longer than the ballscrew to allow the gantry to travel the full length of the ballscrew. In this case a 1 meter ballscrew allows about 950mm of tip to tip travel and thus 850mm of cutter travel. The gantry is 100mm wide, shown as just the mounting plate in the picture above.

The black columns to hold the fixture plate are 38mm square and 60mm high solid steel. They come in at about 500 grams a pop. The steel plate is about 15kg. I was originally going to use 38mm solid square steel stock as the shims under the linear rails but they came in at over 8kg each and the build was starting to get heavy.

The columns are M6 tapped at both ends to hold the fixture plate up above the assembly. I will likely laminate some 1.2mm alloy to the base of the fixture plate to mitigate chips falling through the screw fixture holes into the rails and ballscrew.

I still have to work out the final order of the 1/4 inch 6061 brackets that shore up the 5mm thick fixture plate. Without edge brackets you can flex the steel when it is only supported at the ends. Yes, I can see why vertical mills are made.

I made the plate that will have the gantry attached on the cnc but had to refixture things as the cnc can not cut something that long in any of its current axes.



It is interesting how much harder 6061 is compared to some of the more economical alloys when machining things. You can see the cnc machine facing more resistance, especially on 6mm and larger holes. It will be interesting to see if the cnc can handle drilling steel at some stage.

June 15, 2019

OpenSUSE 15 LXC setup on Ubuntu Bionic 18.04

Similarly to what I wrote for Fedora, here is how I was able to create an OpenSUSE 15 LXC container on an Ubuntu 18.04 (bionic) laptop.

Setting up LXC on Ubuntu

First of all, install lxc:

apt install lxc
echo "veth" >> /etc/modules
modprobe veth

turn on bridged networking by putting the following in /etc/sysctl.d/local.conf:

net.ipv4.ip_forward=1

and apply it using:

sysctl -p /etc/sysctl.d/local.conf

Then allow the right traffic in your firewall (/etc/network/iptables.up.rules in my case):

# LXC containers
-A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.0.3.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT

and apply these changes:

iptables-apply

before restarting the lxc networking:

systemctl restart lxc-net.service

Creating the container

Once that's in place, you can finally create the OpenSUSE 15 container:

lxc-create -n opensuse15 -t download -- -d opensuse -r 15 -a amd64

To see a list of all distros available with the download template:

lxc-create -n foo --template=download -- --list

Logging in as root

Start up the container and get a login console:

lxc-start -n opensuse15 -F

In another terminal, set a password for the root user:

lxc-attach -n opensuse15 passwd

You can now use this password to log into the console you started earlier.

Logging in as an unprivileged user via ssh

As root, install a few packages:

zypper install vim openssh sudo man
systemctl start sshd
systemctl enable sshd

and then create an unprivileged user:

useradd francois
passwd francois
cd /home
mkdir francois
chown francois:100 francois/

and give that user sudo access:

visudo  # uncomment "wheel" line
groupadd wheel
usermod -aG wheel francois

Now login as that user from the console and add an ssh public key:

mkdir .ssh
chmod 700 .ssh
echo "<your public key>" > .ssh/authorized_keys
chmod 644 .ssh/authorized_keys

You can now login via ssh. The IP address to use can be seen in the output of:

lxc-ls --fancy

June 13, 2019

Intersections and connections


Raspberry Pi HAT identity EEPROMs, a simple guide


I’ve been working on an RFID scanner that can best be described as an overly large Raspberry Pi HAT recently. One of the things I am grappling with as I get closer to production boards is that I need to be able to identify what version of the HAT is currently installed — the software can then tweak its behaviour based on the hardware present.

I had toyed with using some spare GPIO lines and “hard coded” links on the HAT to identify board versions to the Raspberry Pi, but it turns out others have been here before and there’s a much better way. The Raspberry Pi folks have defined something called the “Hardware Attached on Top” (HAT) specification which defines an i2c EEPROM which can be used to identify a HAT to the Raspberry Pi.

There are a couple of good resources I’ve found that help you do this thing — sparkfun have a tutorial which covers it, and there is an interesting forum post. However, I couldn’t find a simple tutorial for HAT designers that just covered exactly what they need to know and nothing else. There were also some gaps in those documents compared with my experiences, and I knew I’d need to look this stuff up again in the future. So I wrote this page.

Initial setup

First off, let’s talk about the hardware. I used a 24LC256P DIL i2c EEPROM — these are $2 on ebay, or $6 from Jaycar. The pins need to be wired like this:

  • Pin 1 (A0) to GND (Pi pins 6, 9, 14, 20, 25, 30, 34, 39). All address pins tied to ground will place the EEPROM at address 0x50, which is the address required by the specification.
  • Pin 2 (A1) to GND.
  • Pin 3 (A2) to GND.
  • Pin 4 (VSS) to GND.
  • Pin 5 (SDA) to Pi pin 27. You should also add a 3.9K pullup resistor from EEPROM pin 5 to 3.3V. You must use this pin for the Raspberry Pi to detect the EEPROM on startup!
  • Pin 6 (SCL) to Pi pin 28. You should also add a 3.9K pullup resistor from EEPROM pin 6 to 3.3V. You must use this pin for the Raspberry Pi to detect the EEPROM on startup!
  • Pin 7 (WP): not connected. Write protect; I don’t need this.
  • Pin 8 (VCC) to 3.3V (Pi pins 1 or 17). The EEPROM is capable of being run at 5 volts, but must be run at 3.3 volts to work as a HAT identification EEPROM.

The specification requires that the data pin be on pin 27, the clock pin be on pin 28, and that the EEPROM be at address 0x50 on the i2c bus as described above. There is also some mention of pullup resistors in both the data sheet and the HAT specification, but not in a lot of detail. The best I could find was a circuit diagram for a different EEPROM with the pullup resistors shown.

My test EEPROM wired up on a little breadboard looks like this:

My prototype i2c EEPROM circuit

And has a circuit diagram like this:

An ID EEPROM circuit

Next enable i2c on your Raspberry Pi. You also need to hand edit /boot/config.txt and then reboot. The relevant line of my config.txt looks like this:

dtparam=i2c_vc=on

After reboot you should have an entry at /dev/i2c-0.

GOTCHA: you can’t probe the i2c bus that the HAT standard uses, and I couldn’t get flashing the EEPROM to work on that bus either.

Now time for our first gotcha — the version detection i2c bus is only enabled during boot and then turned off. An i2cdetect on bus zero won’t show the device post boot for this reason. This caused an initial panic attack because I thought my EEPROM was dead, but that was just my twitchy nature showing through.

You can verify your EEPROM works by enabling bus one. To do this, add these lines to /boot/config.txt:

dtparam=i2c_arm=on
dtparam=i2c_vc=on

After a reboot you should have /dev/i2c-0 and /dev/i2c-1. You also need to move the EEPROM to bus 1 in order for it to be detected:

  • Pin 5 (SDA) to Pi pin 3
  • Pin 6 (SCL) to Pi pin 5

You’ll need to move the EEPROM back before you can use it for HAT detection.
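
With the EEPROM temporarily on bus 1 you can sanity check that it responds before going any further (this assumes the i2c-tools package is installed):

sudo apt-get install i2c-tools
sudo i2cdetect -y 1

You should see a device at address 0x50 in the output.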

Programming the EEPROM

You program the EEPROM with a set of tools provided by the Raspberry Pi folks. Check those out and compile them; they’re not packaged for Raspbian as far as I can find:

pi@raspberrypi:~ $ git clone https://github.com/raspberrypi/hats
Cloning into 'hats'...
remote: Enumerating objects: 464, done.
remote: Total 464 (delta 0), reused 0 (delta 0), pack-reused 464
Receiving objects: 100% (464/464), 271.80 KiB | 119.00 KiB/s, done.
Resolving deltas: 100% (261/261), done.
pi@raspberrypi:~ $ cd hats/eepromutils/
pi@raspberrypi:~/hats/eepromutils $ ls
eepdump.c    eepmake.c            eeptypes.h  README.txt
eepflash.sh  eeprom_settings.txt  Makefile
pi@raspberrypi:~/hats/eepromutils $ make
cc eepmake.c -o eepmake -Wno-format
cc eepdump.c -o eepdump -Wno-format

The file named eeprom_settings.txt is a sample of the settings for your HAT. Fiddle with that until it makes you happy, and then compile it:

$ eepmake eeprom_settings.txt eeprom_settings.eep
Opening file eeprom_settings.txt for read
UUID=b9e3b4e9-e04f-4759-81aa-8334277204eb
Done reading
Writing out...
Done.

And then we can flash our EEPROM, remembering that I’ve only managed to get flashing to work while the EEPROM is on bus 1 (pins 3 and 5):

$ sudo sh eepflash.sh -w -f=eeprom_settings.eep -t=24c256 -d=1
This will attempt to talk to an eeprom at i2c address 0xNOT_SET on bus 1. Make sure there is an eeprom at this address.
This script comes with ABSOLUTELY no warranty. Continue only if you know what you are doing.
Do you wish to continue? (yes/no): yes
Writing...
0+1 records in
0+1 records out
107 bytes copied, 0.595252 s, 0.2 kB/s
Closing EEPROM Device.
Done.

Now move the EEPROM back to bus 0 (pins 27 and 28) and reboot. You should end up with entries in the device tree for the HAT. I get:

$ cd /proc/device-tree/hat/
$ for item in *
> do
>   echo "$item: "`cat $item`
>   echo
> done
name: hat

product: GangScan

product_id: 0x0001

product_ver: 0x0008

uuid: b9e3b4e9-e04f-4759-81aa-8334277204eb

vendor: madebymikal.com

Now I can have my code detect if the HAT is present, and if so what version. Comments welcome!


June 11, 2019

Codec 2 700C Equaliser

During the recent FreeDV QSO party, I was reminded of a problem with the codec used for FreeDV 700D. Some speakers are muffled and hard to understand, while others code quite nicely and are easy to listen to. I’m guessing the issue is around the Vector Quantiser (VQ) used to encode the speech spectrum. As I’ve been working on Vector Quantisation (VQ) recently for the LPCNet project, I decided to have a fresh look at this problem with Codec 2.

I’ve been talking to a few people about the Codec 2 700C VQ and the idea of an equaliser. Thanks Stefan, Thomas, and Jean Marc for your thoughts.

Vector Quantiser

The FreeDV 700C and FreeDV 700D modes both use Codec 2 700C. Sorry about the confusing nomenclature – FreeDV and Codec 2 aren’t always in lock step.

Vector Quantisers are trained on speech from databases that represent a variety of speakers. These databases tend to have standardised frequency responses across all samples. I used one of these databases to train the VQ for Codec 2 700C. However when used in the real world (e.g. with FreeDV), the codec gets connected to many different microphones and sound cards with varying frequency responses.

For example, the VQ training database might be high pass filtered at 150 Hz, and start falling off at 3600 Hz. A gamer headset used for FreeDV might have low frequency energy down to 20 Hz, and have a gentle high pass slope such that energy at 3 kHz is 6dB louder than energy at 1000 Hz. Another user of FreeDV might have a completely different frequency response on their system.

The VQ naively tries to match the input spectrum. If you have input speech that is shaped differently to the VQ training data, the VQ tends to expend a lot of bits matching spectral shaping rather than concentrating on important features of speech that make it intelligible. The result is synthesised speech that has artefacts, is harder to understand and muffled.

In contrast, commercial radios have it easier, they can control the microphone and input analog signal frequency response to neatly match the codec.

Equaliser Algorithm

I wrote an Octave script (vq_700c_eq.m) to look into the problem and try a few different algorithms. It allows me to analyse speech frame by frame and in batch mode, so I can listen to the results.

The equaliser is set to the average quantiser error for each of the K=20 bands in each vector, a similar algorithm to [1]. For these initial tests, I calculated the equaliser values as the mean quantiser error over the entire sample. This is cheating a bit to get an “early result” and test the general idea. A real world EQ would need to adapt to input speech.
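
To spell the idea out (my notation, not anything from the Octave script): if t_n(k) is the target vector for frame n and q_n(k) is the first stage VQ output, then the equaliser value for band k is e(k) = (1/N) * sum_n [ t_n(k) - q_n(k) ], and the quantiser is then fed t_n(k) - e(k) rather than t_n(k).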

Here is a 3D mesh plot of the spectrum of the cq_ref sample evolving over time. For each 40ms frame, we have a K=20 element vector of samples we need to quantise. Note the high levels towards the low frequency end in this sample.

Averaging the first stage VQ error over the sample, we get the following equaliser values:

Note the large values at the start and end of the spectrum. The eq_hi curve is the mean error of just the high energy frames (ignoring silence frames).

Here is a snapshot of a single frame, showing the target vector, the first stage vector quantiser’s best effort, and the error.

Here is the same frame after the equaliser has been applied to the target vector:

In this case the vector quantiser has selected a different vector – it’s not “fighting” the static frequency response so much and can focus on the more perceptually important parts of the speech spectrum. It also means the 2nd stage can address perceptually important features of the vector rather than the static frequency response.

For this frame the variance (mean square error) was halved using the equaliser.

Results

The following table presents the results in terms of the variance (mean square error) in dB^2. The first column is the variance of the input data, samples with a wider spectral range will tend to be higher. The idea of the quantiser is to reduce the variance (quantiser error) as much as possible. It’s a two stage quantiser (9 bits or 512 entries) per stage.

The two right hand columns show the results (variance after 2nd stage) without and with the equaliser, using the Codec 2 700C VQ (which I have labelled train_120 in the source code). On some samples it has quite an effect (cq_ref, cq_freedv_8k), less so on others. After the equaliser, most of the samples are in the 8dB^2 range.

--------------------------------------------------------
Sample        Initial  stg1    stg1_eq     stg2  stg2_eq
--------------------------------------------------------
hts1a         120.40   17.40   16.07       9.34    8.66
hts2a         149.13   18.67   16.85       9.71    8.81
cq_ref        170.07   34.08   20.33      20.07   11.33
ve9qrp_10s     66.09   23.15   14.97      13.14    8.10
vk5qi         134.39   21.52   14.65      12.03    8.05
c01_01_8k     126.75   18.84   14.11      10.19    7.51
ma01_01        91.22   23.96   16.26      14.05    8.91
cq_freedv_8k  118.80   29.60   16.41      17.46    8.97

In the next table, we use a different vector quantiser (all_speech), derived from a different training database. This used much more training data than train_120 above. In this case, the VQ (in general) does a better job, and the equaliser has a smaller effect. Notable exceptions are the hts1a/hts2a samples, which are poorer. They seem messed up no matter what I do. The c01_01_8k/ma01_01 samples are from within the all_speech database, so predictably do quite well.

---------------------------------------------------------
Sample        Initial  stg1    stg1_eq    stg2    stg2_eq
---------------------------------------------------------
hts1a         120.40   20.75   16.63      11.36    9.05
hts2a         149.13   24.31   16.54      12.48    7.90
cq_ref        170.07   22.41   17.54      12.11    9.26
ve9qrp_10s     66.09   15.29   13.88       8.12    7.29
vk5qi         134.39   14.66   13.25       7.95    7.17
c01_01_8k     126.75   10.64   10.17       5.19    5.07
ma01_01        91.22   14.09   13.20       7.05    6.68
cq_freedv_8k  118.80   17.25   13.23       9.13    7.04

Listening to the samples:

  1. The equaliser doesn’t mess up anything. This is actually quite important. We are modifying the speech spectrum so it’s important that the equaliser doesn’t make any samples sound worse if they don’t need equalisation.
  2. In line with the tables above, the equaliser improves samples like cq_ref, cq_freedv_8k and vk5qi on Codec 2 700C. In particular a bass artefact that I sometimes hear can be removed, and (I hope) intelligibility improved.
  3. The second (all_speech) VQ improves the quality of most samples. Also in line with the variance table for this VQ, the equaliser makes a smaller improvement.
  4. hts1a and hts2a are indeed poorer with the all_speech VQ. You can’t win them all.

Here are some samples showing the various VQ and EQ options. For listening, I used a small set of loudspeakers with some bass response, as the artefacts I am interested in often affect the bass end.

Codec 2 700C + train_120 VQ Listen
Codec 2 700C + train_120 VQ + EQ Listen
Codec 2 700C + all_speech VQ Listen
Codec 2 700C + all_speech VQ + EQ Listen

Using my loudspeakers, I can hear the annoying bass artefact being removed by the EQ (first and second samples). In the next two samples (all_speech VQ), the effect of the EQ is less pronounced, as the VQ itself does a better job.

Conclusions and Discussion

The Codec 2 700C VQ can be improved, either by training a new VQ, or adding an equaliser that adjusts the input target vector. A new VQ would break compatibility with Codec 2 700C, so we would need to release a new mode, and push that through to a new FreeDV mode. The equaliser alone could be added to the current Codec 2 700C implementation, without breaking compatibility.

Variance (mean squared error) is a pretty good objective measure of the quantiser performance, and aligned with the listening tests results. Minimising variance is heading in the right direction. This is important, as listening tests are difficult and subjective in nature.

This work might be useful for LPCNet or other NN based codecs, which have a similar set of parameters that require vector quantisation. In particular if we want to use NN techniques at lower bit rates.

There is remaining mystery over the hts1a/hts2a samples. They must have a spectrum that the equaliser can’t adjust effectively and the VQ doesn’t address well. This suggests other equaliser algorithms might do a better job.

Another possibility is training the VQ to handle a wider variety of inputs by including static spectral shaping in the training data. This could be achieved by filtering the spectrally flat training database, and appending the shaped data to the VQ training set. However this would increase the variance the VQ has to deal with, and possibly lead to more bits for a given VQ performance.

Reading Further

Codec 2 700C Equaliser Part 2

[1] I. H. J. Nel and W. Coetzer, “An Adaptive Homomorphic Vocoder at 550 bits/second,” /IEEE South African Symposium on Communications and Signal Processing/, Johannesburg, South Africa, 1990, pp. 131-136.

Codec 2 700C

June 10, 2019

Trail run: the base of Urambi Hill


This one has been on my list for a little while — a nice 10km loop around the bottom of Urambi Hill. I did it as an out and back, although there is a loop option if you cross the bridge that was my turn around point. For the loop option cross the bridge, run a couple of hundred meters to the left and then cross the river again at the ford. Expect to get your feet wet if you choose that option!

Not particularly shady, but nice terrain. There is more vertical ascent than I expected, but it wasn’t crazy. I haven’t posted pictures of this run because it was super foggy when I did it so the pictures are just of white mist.


Securing Linux with Ansible

The Ansible Hardening role from the OpenStack project is a great way to secure Linux boxes in a reliable, repeatable and customisable manner.

It was created by a former colleague of mine, Major Hayden, and while it was spun out of OpenStack, it can be applied generally to a number of the major Linux distros (including Fedora, RHEL, CentOS, Debian, SUSE).

The role is based on the Security Technical Implementation Guide (STIG) out of the United States for RHEL, which provides recommendations on how best to secure a host and the services it runs (category one for highly sensitive systems, two for medium and three for low). This is similar to the Information Security Manual (ISM) we have in Australia, although the STIG is more explicit.

Rules and customisation

There is deviation from the STIG recommendations and it is probably a good idea to read the documentation about what is offered and how it’s implemented. To avoid unwanted breakages, many of the controls are opt-in with variables to enable and disable particular features (see defaults/main.yml).

You probably do not want to blindly enable everything without understanding the consequences. For example, Kerberos support in SSH will be disabled by default (via “security_sshd_disable_kerberos_auth: yes” variable) as per V-72261, so this might break access if you rely on it.

Other features also require values to be enabled. For example, V-71925 of the STIG recommends passwords for new users be restricted to a minimum lifetime of 24 hours. This is not enabled by default in the Hardening role (central systems like LDAP are recommended), but can be enabled by setting the following variable for any hosts you want it set on.

security_password_min_lifetime_days: 1

In addition, not all controls are available for all distributions.

For example, V-71995 of the STIG requires umask to be set to 077, however the role does not currently implement this for RHEL based distros.

Run a playbook

To use this role you need to get the code itself, using either Ansible Galaxy or Git directly. Ansible will look in the ~/.ansible/roles/ location by default and find the role, so that makes a convenient spot to clone the repo to.

mkdir -p ~/.ansible/roles
git clone https://github.com/openstack/ansible-hardening \
~/.ansible/roles/ansible-hardening
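
Alternatively, Ansible Galaxy can do the cloning for you using its SCM install form; I haven't checked whether there's an official Galaxy role name for it, so pointing it straight at the git repo is the safer bet:

ansible-galaxy install git+https://github.com/openstack/ansible-hardening -p ~/.ansible/roles/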

Next, create an Ansible play which will make use of the role. This is where we will set variables to enable or disable specific controls for hosts which are run using the play. For example, if you’re using a graphical desktop, then you will want to make sure X.Org is not removed (see below). Include any other variables you want to set from the defaults/main.yml file.

cat > play.yml << EOF
---
- name: Harden all systems
  hosts: all
  become: yes
  vars:
    security_rhel7_remove_xorg: no
    security_ntp_servers:
      - ntp.internode.on.net
  roles:
    - ansible-hardening
EOF

Now we can run our play! Ansible uses an inventory of hosts, but we’ll just run this against localhost directly (with the options -i localhost, -c local). It’s probably a good idea to run it with the --check option first, which will not actually make any changes.

If you’re running in Fedora, make sure you also set Python3 as the interpreter.

ansible-playbook -i localhost, -c local \
-e ansible_python_interpreter=/usr/bin/python3 \
--ask-become-pass \
--check \
./play.yml

This will run through the role, executing all of the default tasks while including or excluding others based on the variables in your play.

Running specific sets of controls

If you only want to run a limited set of controls, you can do so by running the play with the relevant --tags option. You can also exclude specific tasks with the --skip-tags option. Note that there are a number of required tasks with the always tag which will be run regardless.

To see all the available tags, run your playbook with the --list-tags option.

ansible-playbook --list-tags ./play.yml

For example, if you want to only run the dozen or so Category III controls you can do so with the low tag (don’t forget that some tasks may still need enabling if you want to run them and that the always tagged tasks will still be run). Combine tags by comma separating them, so to also run a specific control like V-72057, or controls related to SSH, just add them along with low.

ansible-playbook -i localhost, -c local \
-e ansible_python_interpreter=/usr/bin/python3 \
--ask-become-pass \
--check \
--tags low,sshd,V-72057 \
./play.yml

Or if you prefer, you can just run everything except a specific set. For example, to exclude Category I controls, skip the high tag. You can also add both options.

ansible-playbook -i localhost, -c local \
-e ansible_python_interpreter=/usr/bin/python3 \
--ask-become-pass \
--check \
--tags sshd,V-72057 \
--skip-tags high \
./play.yml

Once you’re happy, don’t forget to remove the --check option to apply the changes.

June 05, 2019

What is Gang Scan?


Gang Scan is an open source (and free) attendance tracking system based on custom RFID reader boards that communicate back to a server over wifi. The boards are capable of queueing scan events in the case of intermittent network connectivity, and the server provides simple reporting.


June 02, 2019

Audiobooks – May 2019

Springfield Confidential: Jokes, Secrets, and Outright Lies from a Lifetime Writing for The Simpsons by Mike Reiss

Great book. Simpsons insider stories, stuff about show business, funny jokes. 9/10

Combat Crew: The Story of 25 Combat Missions Over Europe From the Daily Journal of a B-17 Gunner by John Comer

Interesting 1st-hand account (with some borrowings from others in unit). Good details and atmosphere from missions and back at base/leave 8/10

Far-Seer by Robert J. Sawyer

“An allegory about Galileo on a planet of intelligent dinosaurs”. 1st in a Trilogy by one of my favorite authors. Balanced between similarities & differences from humans. 7/10

Working Actor: Breaking in, Making a Living, and Making a Life in the Fabulous Trenches of Show Business by David Dean Bottrell

Lots of advice for aspiring actors along with plenty of interesting stories from the author’s career. 8/10

Becoming by Michelle Obama

A good memoir. Lots of coverage of her early life, working career and the White House. Not exhaustive and it skips ahead at times. But very interesting and inspirational. 8/10

Fossil Hunter by Robert J. Sawyer

2nd in the Trilogy. The main human analog here is Darwin with a murder-mystery and God checked in for fun. 7/10

The Wright Brothers by David McCullough

Well written as expected and concentrates on the period when the brothers were actively flying which is the most interesting but avoids their legal battles & some other negatives. 8/10



May 31, 2019

LUV June 2019 Main Meeting: Unlocking insights from Big Data / An Introduction to Packaging

Jun 4 2019 19:00
Jun 4 2019 21:00
Jun 4 2019 19:00
Jun 4 2019 21:00
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE LATER START TIME

7:00 PM to 9:00 PM Tuesday, June 4, 2019
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

  • Matt Moore: Unlocking insights from Big Data
  • Andrew Worsely: An Introduction to Packaging

 

Unlocking insights from Big Data

Many of us like to go for dinner nearby after the meeting, typically at Brunetti's or Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.

June 4, 2019 - 19:00

read more

LUV June 2019 Workshop: Computer philosopher Ted Nelson

Jun 15 2019 12:30
Jun 15 2019 16:30
Jun 15 2019 12:30
Jun 15 2019 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Speaker: Andrew Pam celebrated the birthday of computer philosopher Ted Nelson with a summary of his work.

There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

June 15, 2019 - 12:30

May 29, 2019

dm-crypt: Password Prompts Proliferate

Just recently Petitboot added a method to ask the user for a password before allowing certain actions to proceed. Underneath the covers this is checking against the root password, but the UI "pop-up" asking for the password is relatively generic. Something else which has been on the to-do list for a while is support for mounting encrypted partitions, but there wasn't a good way to retrieve the password for them - until now!

With the password problem solved, there isn't too much else to do. If Petitboot sees an encrypted partition it makes a note of it and informs the UI via the device_add interface. Seeing this, the UI shows the device in the menu even though there aren't any boot options associated with it yet:

encrypted_hdr

Unlike normal devices in the menu these are selectable; once that happens the user is prompted for the password:

encrypted_password

With password in hand pb-discover will then try to open the device with cryptsetup. If that succeeds the encrypted device is removed from the UI and replaced with the new un-encrypted device:

unencrypted_hdr

That's it! These devices can't be auto-booted from at the moment since the password needs to be manually entered. The UI also doesn't have a way yet to select specific options for cryptsetup, but if you find yourself needing to do so you can run cryptsetup manually from the shell and pb-discover will recognise the new unencrypted device automatically.
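
For example, to open a LUKS partition by hand from the shell (the device and mapping names here are just placeholders):

cryptsetup luksOpen /dev/sda2 cryptdisk

Once that succeeds, pb-discover will notice the new /dev/mapper/cryptdisk device and scan it for boot options as usual.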

This is in Petitboot as of v1.10.3 so go check it out! Just make sure your image includes the kernel and cryptsetup dependencies.

May 27, 2019

1-Wire home automation tutorial from linux.conf.au 2019, part 3


This is the third in a set of posts about the home automation tutorial from linux.conf.au 2019. You should probably read part 1 and part 2 before this post.

In the end Alistair decided that my home automation shield was defective, which was the cause of the errors from the previous post. So I am instead running with the prototype shield that he handed me when I started helping with the tutorial preparation. That shield has some other bugs (misalignments of holes mainly), but is functional apart from that.

I have also decided that I’m not super excited by hassos, and just want to run the OWFS to MQTT gateway on the orangepi and feed it into my existing home assistant setup if possible, so I am going to focus on getting that bare component working for now.

To that end, the gateway can be found at https://github.com/InfernoEmbedded/OWFS-MQTT-Bridge, and is a perl script named ha-daemon.pl. I needed to install some dependencies, which in my case (on armbian) were:

$ apt-get install perl libanyevent-perl cpanminus libdist-zilla-perl libfile-slurp-perl libdatetime-format-strptime-perl
$ dzil listdeps | cpanm --sudo

Then I needed to write a configuration file and put it at ha.toml in the same directory as the daemon. Mine looks like this:

[general]
	timezone="Australia/Sydney"
	discovery_prefix="homeassistant"

[1wire]
	host="localhost"
	port=4304
	timeout=5 # seconds, will reconnect after this if no response
	sensor_period=30 # seconds
	switch_period=10 # seconds
	debug=true

[mqtt]
	host="192.168.1.6"
	port=1883

Now run the gateway like this:

$ perl ha-daemon.pl

I see messages on MQTT that a temperature sensor is being published to home assistant:

homeassistant/sensor/1067C6697351FF_temperature/config {
	"name": "10.67C6697351FF_temperature",
	"current_temperature_topic": "temperature/10.67C6697351FF/state",
	"unit_of_measurement": "°C"
}

However, I do not see temperature readings being published. Having added some debug code to OWFS-MQTT, this appears to be because no temperature is being returned from the read operation:

2019-05-27 17:28:14.833: lib/Daemon/OneWire.pm:73:Daemon::OneWire::readTemperatureDevices(): Reading temperature for device '10.67C6697351FF'
[...snip...]
2019-05-27 17:28:14.867: /usr/local/share/perl/5.24.1/AnyEvent/OWNet.pm:117:Daemon::OneWire::__ANON__(): Read data: $VAR1 = bless( {
                 'payload' => 0,
                 'size' => 0,
                 'version' => 0,
                 'offset' => 0,
                 'ret' => 4294967295,
                 'sg' => 270
               }, 'AnyEvent::OWNet::Response' );

I continue to debug.
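
While I debug, one handy check is to subscribe to the broker directly and watch what actually arrives; a small sketch using mosquitto_sub against the broker from my ha.toml (the topic patterns are guesses based on the config message above):

mosquitto_sub -h 192.168.1.6 -v -t 'homeassistant/#' -t 'temperature/#'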


May 26, 2019

Mount Bimberi on a Scout Bushwalking Course


Julian Yates kindly ran a bushwalking course for Scouts Australia over the last five days, which covered walking in Uncontrolled Terrain (the definition in the Australian VET scheme for the most difficult bushwalking — significant off track navigation in areas where emergency response will be hard to get). I helped with some of the instruction, but was also there working on my own bushwalking qualifications.

The walk was to Mount Bimberi, which is the highest point in the ACT. We started with a short night walk into Oldfield’s Hut on Friday night after a day of classroom work. The advantage of this was that we started Saturday at Oldfield’s Hut, which offered morning views which did not suck.

On Saturday morning we walked up to Mount Bimberi via Murray’s Gap. This involved following the ACT / NSW border up the hillside, which was reasonably well marked with tape and cairns.

Our route on the way to Bimberi:

And the way back:

On Sunday we walked back out to the cars and did the three hour drive back to Canberra. I’ll include the walk out here for completeness:


May 23, 2019

Installing Ubuntu 18.04 using both full-disk encryption and RAID1

I recently set up a desktop computer with two SSDs using a software RAID1 and full-disk encryption (i.e. LUKS). Since this is not a supported configuration in Ubuntu desktop, I had to use the server installation medium.

This is my version of these excellent instructions.

Server installer

Start by downloading the alternate server installer and verifying its signature:

  1. Download the required files:

     wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/ubuntu-18.04.2-server-amd64.iso
     wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/SHA256SUMS
     wget http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/SHA256SUMS.gpg
    
  2. Verify the signature on the hash file:

     $ gpg --keyid-format long --keyserver hkps://keyserver.ubuntu.com --recv-keys 0xD94AA3F0EFE21092
     $ gpg --verify SHA256SUMS.gpg SHA256SUMS
     gpg: Signature made Fri Feb 15 08:32:38 2019 PST
     gpg:                using RSA key D94AA3F0EFE21092
     gpg: Good signature from "Ubuntu CD Image Automatic Signing Key (2012) <cdimage@ubuntu.com>" [undefined]
     gpg: WARNING: This key is not certified with a trusted signature!
     gpg:          There is no indication that the signature belongs to the owner.
     Primary key fingerprint: 8439 38DF 228D 22F7 B374  2BC0 D94A A3F0 EFE2 1092
    
  3. Verify the hash of the ISO file:

     $ sha256sum --ignore-missing -c SHA256SUMS
     ubuntu-18.04.2-server-amd64.iso: OK
    

Then copy it to a USB drive:

dd if=ubuntu-18.04.2-server-amd64.iso of=/dev/sdX

and boot with it.

Manual partitioning

Inside the installer, use manual partitioning to:

  1. Configure the physical partitions first.
  2. Configure the RAID arrays second.
  3. Configure the encrypted partitions last.

Here's the exact configuration I used:

  • /dev/sda1 is 512 MB and used as the EFI partition
  • /dev/sdb1 is 512 MB but not used for anything
  • /dev/sda2 and /dev/sdb2 are both 4 GB (RAID)
  • /dev/sda3 and /dev/sdb3 are both 512 MB (RAID)
  • /dev/sda4 and /dev/sdb4 use up the rest of the disk (RAID)

I only set /dev/sda1 as the EFI partition because I found that adding a second EFI partition would break the installer.

I created the following RAID1 arrays:

  • /dev/sda2 and /dev/sdb2 for /dev/md2
  • /dev/sda3 and /dev/sdb3 for /dev/md0
  • /dev/sda4 and /dev/sdb4 for /dev/md1

I used /dev/md0 as my unencrypted /boot partition.

Then I created the following LUKS partitions:

  • md1_crypt as the / partition using /dev/md1
  • md2_crypt as the swap partition (4 GB) with a random encryption key using /dev/md2

Post-installation configuration

Once your new system is up, sync the EFI partitions using dd:

dd if=/dev/sda1 of=/dev/sdb1

and create a second EFI boot entry:

efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi'

Ensure that the RAID drives are fully synced by keeping an eye on /proc/mdstat and then reboot, selecting "ubuntu2" in the UEFI/BIOS menu.
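
A convenient way to watch the sync progress (just a habit of mine, not something the installer requires):

watch -d cat /proc/mdstat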

Once you have rebooted, remove the following package to speed up future boots:

apt purge btrfs-progs

To switch to the desktop variant of Ubuntu, install these meta-packages:

apt install ubuntu-desktop gnome

then use debfoster to remove unnecessary packages (in particular the ones that only come with the default Ubuntu server installation).
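
debfoster isn't part of the default install, so grab it first and then let it walk you through the package list interactively:

apt install debfoster
debfoster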

Fixing booting with degraded RAID arrays

Since I have run into RAID startup problems in the past, I expected to have to fix up a few things to make degraded RAID arrays boot correctly.

I did not use LVM since I didn't really feel the need to add yet another layer of abstraction on top of my setup, but I found that the lvm2 package must still be installed:

apt install lvm2

with use_lvmetad = 0 in /etc/lvm/lvm.conf.

Then in order to automatically bring up the RAID arrays with 1 out of 2 drives, I added the following script in /etc/initramfs-tools/scripts/local-top/cryptraid:

 #!/bin/sh
 PREREQ="mdadm"
 prereqs()
 {
      echo "$PREREQ"
 }
 case $1 in
 prereqs)
      prereqs
      exit 0
      ;;
 esac

 mdadm --run /dev/md0
 mdadm --run /dev/md1
 mdadm --run /dev/md2

before making that script executable:

chmod +x /etc/initramfs-tools/scripts/local-top/cryptraid

and refreshing the initramfs:

update-initramfs -u -k all

Disable suspend-to-disk

Since I use a random encryption key for the swap partition (to avoid having a second password prompt at boot time), it means that suspend-to-disk is not going to work and so I disabled it by putting the following in /etc/initramfs-tools/conf.d/resume:

RESUME=none

and by adding noresume to the GRUB_CMDLINE_LINUX variable in /etc/default/grub before applying these changes:

update-grub
update-initramfs -u -k all

Test your configuration

With all of this in place, you should be able to do a final test of your setup:

  1. Shutdown the computer and unplug the second drive.
  2. Boot with only the first drive.
  3. Shutdown the computer and plug the second drive back in.
  4. Boot with both drives and re-add the second drive to the RAID array:

     mdadm /dev/md0 -a /dev/sdb3
     mdadm /dev/md1 -a /dev/sdb4
     mdadm /dev/md2 -a /dev/sdb2
    
  5. Wait until the RAID is done re-syncing and shutdown the computer.

  6. Repeat steps 2-5 with the first drive unplugged instead of the second.
  7. Reboot with both drives plugged in.

At this point, you have a working setup that will gracefully degrade to a one-drive RAID array should one of your drives fail.

May 21, 2019

A nerd snipe, in which I learn to read gerber files


So, I had the realisation last night that the biggest sunk cost with getting a PCB made in China is the shipping. The boards are about 50 cents each, and then it’s $25 for shipping (US dollars of course). I should therefore be packing as many boards into a single order as possible to reduce the shipping cost per board.

I have a couple of boards on the trot at the moment, my RFID attendance tracker project (called GangScan), and I’ve just decided to actually get my numitrons working and whipped up a quick breakout board for those. You’ll see more about that one later I’m sure.

I decided to ask my friends in Canberra if they needed any boards made, and one friend presented with a set of Gerber CAM files and nothing else. That’s a pain because I need to know the dimensions of the board for the quoting system. Of course, I couldn’t find a tool to extract that for me with a couple of minutes of Googling, so… I decided to just learn to read the file format.

Gerber is well specified, with a quite nice specification available online. So it wasn’t too hard to dig out the dimensions layer from the zipped gerber files and then do this:

  • G04 DipTrace 3.3.1.2*: comment
  • G04 BoardOutline.gbr*: comment
  • %MOIN*%: the file is in inch units
  • G04 #@! TF.FileFunction,Profile*: comment
  • G04 #@! TF.Part,Single*: comment
  • %ADD11C,0.005512*%: defines an aperture; D11 is a circle with diameter 0.005512 inches
  • %FSLAX26Y26*%: resolution is 2.6, i.e. there are 2 integer places and 6 decimal places
  • G04*: comment
  • G70*: historic way of setting units to inches
  • G90*: historic way of setting coordinates to absolute notation
  • G75*: sets the quadrant mode graphics state parameter to ‘multi quadrant’
  • G01*: sets the interpolation mode graphics state parameter to ‘linear interpolation’
  • G04 BoardOutline*: comment
  • %LPD*%: sets the object polarity to dark
  • X394016Y394016D2*: set the current point to 0.394016, 0.394016 inches (the first corner of the board outline)
  • D11*: draw with the previously defined tiny circle
  • Y1194016D1*: draw a vertical line to 1.194016 inches (the far edge of the outline in Y)
  • X1931366Y1194358D1*: draw a line to 1.931366, 1.194358 inches (the far edge in X, and not totally square)
  • Y394358D1*: draw a vertical line to 0.394358 inches
  • X394016Y394016D1*: draw a line back to 0.394016, 0.394016 inches
  • M02*: end of file

The outline therefore runs from 0.394016 to 1.931366 inches in X and from 0.394016 to roughly 1.194 inches in Y, so this board works out to about 1.54 by 0.8 inches, or roughly 4cm by 2cm.
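
Since I’ll probably need to do this again, the same numbers can be pulled out of the outline layer automatically; here’s a rough sketch that assumes inch units, the 2.6 coordinate format above, positive coordinates and that all the moves use D1/D2 codes:

grep -E '^[XY].*D0?[12]\*$' BoardOutline.gbr | grep -oE '[XY][0-9]+' | awk '
  BEGIN { minx = miny = 1e9; maxx = maxy = -1e9 }
  /^X/ { v = substr($0, 2) / 1e6; if (v < minx) minx = v; if (v > maxx) maxx = v }
  /^Y/ { v = substr($0, 2) / 1e6; if (v < miny) miny = v; if (v > maxy) maxy = v }
  END { printf "outline is %.3f x %.3f inches\n", maxx - minx, maxy - miny }'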

A nice little nerd snipe to get the morning going.


May 20, 2019

Linux Security Summit 2019 North America: CFP / OSS Early Bird Registration

The LSS North America 2019 CFP is currently open, and you have until May 31st to submit your proposal. (That’s the end of next week!)

If you’re planning on attending LSS NA in San Diego, note that the Early Bird registration for Open Source Summit (which we’re co-located with) ends today.

You can of course just register for LSS on its own, here.

Gangscan 0.6 boards


So I’ve been pottering away for a while working on getting the next version of the gang scan boards working. These ones are much nicer: thicker tracks for signals, better labelling, support for a lipo battery charge circuit, a prototype audio circuit, and some LEDs to indicate status. I had them fabbed at the same place as last time, although the service was much faster this time around.

A gang scan 0.6 board

I haven’t got as far as assembling a board yet — I need to get some wire thin enough for the vias before I can do that. I’ll let you know how I go though.


May 19, 2019

Trigs map


A while ago I had a map of all the trig points in the ACT and links to the posts I’d written during my visits. That had atrophied over time. I’ve just spent some time fixing it up again, and it’s now at https://www.madebymikal.com/trigs_map.html — I hope it’s useful to someone else.


May 18, 2019

Trail run: Lake Tuggeranong to Kambah Pool (return)


This wasn’t the run I’d planned for this day, but here we are. This run follows the Centenary Trail between Kambah Pool and Lake Tuggeranong. Partially shaded, but also on the quiet side of the ridge line where you can’t tell that you’re near the city. Don’t take the tempting river ford, there is a bridge a little further downstream! 14.11km and 296m of vertical ascent.

Be careful of mountain bikers on this popular piece of single track. You’re allowed to run here, but some cyclists don’t leave much time to notice other track users.


Trail run: Tuggeranong Stone Wall loop


The Tuggeranong Stone Wall is a 140 year old boundary between two former stations. It’s also a nice downhill start to a trail run. This loop involves starting at the Hyperdome, following the wall down, and then continuing along to Pine Island before returning. Partially shaded, and with facilities at the Hyperdome and Pine Island. 6km, and 68m vertically.


Trail run: Barnes and ridgeline


A first attempt at running to Barnes and Brett trigs, this didn’t work out quite as well as I’d expected (I ran out of time before I’d hit Brett trig). The area wasn’t as steep as I’d expected, being mostly rolling grazing land with fire trails. Lots of gates and no facilities, but stunning views of southern Canberra from the ridgeline. 11.11km and 421m of vertical ascent.


Trail run: Pine Island South to Point Hut with a Hill


This one is probably a little bit less useful to others, as the loop includes a bit more of the suburb than is normal. That said, you could turn this into a suburb avoiding loop quite easily. A nice 11.88km run with a hill climb at the end. A total ascent of 119 metres. There isn’t much shade along the run, but there is some in patches. There are bathrooms at Point Hut and Pine Island.

Be careful of mountain bikers on this popular piece of single track. You’re allowed to run here, but some cyclists don’t leave much time to notice other track users.


Trail run: Cooleman Ridge


This run includes Cooleman and Arawang trig points. Not a lot of shade, but a pleasant run. 9.86km and 264m of vertical ascent.


Trail running guide: Tuggeranong


I’ve been running on trails more recently (I’m super bored with roads and bike paths), but running on trails makes load management harder — often I’m looking for a run of approximately XX length with no more than YY vertical ascent. So I was thinking, maybe I should just write the runs that I do down so that over time I create a menu of options for when I need them.

This page documents my Tuggeranong runs.

Name | Distance (km) | Vertical Ascent (m) | Notes | Posts
Cooleman Ridge | 9.78 | 264 | Cooleman and Arawang Trigs. Not a lot of shade and no facilities. | 25 April 2019
Pine Island South to Point Hut with a Hill | 11.88 | 119 | A nice Point Hut and Pine Island loop with a hill climb at the end. Toilets at Point Hut and Pine Island. Not a lot of shade. Beware of mountain bikes! | 21 February 2019
Barnes and ridgeline | 11.11 | 421 | Not a lot of shade and no facilities, but stunning views of southern Canberra. | 2 May 2019
Lake Tuggeranong to Kambah Pool (return) | 14.11 | 296 | Partial shade and great views, but beware the mountain bikes! | 11 May 2019
Tuggeranong Stone Wall loop | 6 | 68 | Partial shade and facilities at the Hyperdome and Pine Island. | 27 April 2019


May 09, 2019

Audiobooks – April 2019

Enlightenment Now: The Case for Reason, Science, Humanism, and Progress by Steven Pinker

Amazingly good book, well argued and with lots of information. The only downside is that he talks to some diagrams (downloadable) at times. Highly recommended. 9/10

A History of Britain, Volume : Fate of Empire 1776 – 2000 by Simon Schama

I didn’t enjoy this all that much. The author tried to use various lives to illustrate themes, but both the themes and the biographies suffered. Huge areas are also left out. 6/10

Where Did You Get This Number? : A Pollster’s Guide to Making Sense of the World by Anthony Salvanto

An overview of (mostly) political polling and its history. Lots of examples from the 2016 US election campaign. Light but interesting. 7/10

Squid Empire: The Rise and Fall of the Cephalopods by Danna Staaf

Pretty much what the title says. I got a little lost with all the similarly named species, but the general story was interesting enough and not too long. 6/10

Apollo in the Age of Aquarius by Neil M. Maher

The story of the back and forth between NASA and the 60s counterculture, from the civil rights struggle and the antiwar movement to environmentalism and feminism. It does this fairly well. 7/10



May 06, 2019

Visual Studio Code for Linux kernel development

Here we are again - back in 2016 I wrote an article on using Atom for kernel development, but I didn't stay using it for too long, instead moving back to Emacs. Atom had too many shortcomings - it had that distinctive Electron feel, which is a problem for a text editor - you need it to be snappy. On top of that, vim support was mediocre at best, and even as a vim scrub I would find myself trying to do things that weren't implemented.

So in the meantime I switched to spacemacs, which is a very well integrated "vim in Emacs" experience, with a lot of opinionated (but good) defaults. spacemacs was pretty good to me but had some issues - disturbingly long startup times, mediocre completions and go-to-definitions, and integrating any module into spacemacs that wasn't already integrated was a big pain.

After that I switched to Doom Emacs, which is like spacemacs but faster and closer to Emacs itself. It's very user configurable but much less user friendly, and I didn't really change much as my elisp-fu is practically non-existent. I was decently happy with this, but there were still some issues, some of which are just inherent to Emacs itself - like no actually usable inbuilt terminal, despite having (at least) four of them.

Anyway, since 2016 when I used Atom, Visual Studio Code (henceforth referred to as Code) came along and ate its lunch, using the framework (Electron) that was created for Atom. I did try it years ago, but I was very turned off by its Microsoft-ness, its seeming lack of distinguishing features from Atom, and the fact that it didn't feel like a native editor at all. Since it's massively grown in popularity since then, I decided I'd give it a try.

Visual Studio Code

Vim emulation

First things first for me is getting a vim mode going, and Code has a pretty good one of those. The key feature for me is that there's Neovim integration for Ex-commands, filling a lot of shortcomings that come with most attempts at vim emulation. In any case, everything I've tried to do that I'd do in vim (or Emacs) has worked, and there are a ton of options and things to tinker with. Obviously it's not going to do as much as you could do with Vimscript, but it's definitely not bad.

Theming and UI customisation

As far as the editor goes - it's good. A ton of different themes, you can change the colour of pretty much everything in the config file or in the UI, including icons for the sidebar. There's a huge sore point, though: you can't customise the interface outside the editor pretty much at all. There's an extension for loading custom CSS, but it's out of the way, finicky, and if I wanted to write CSS I wouldn't have become a kernel developer.

Extensibility

Extensibility is definitely a strong point; the ecosystem of extensions is good. All the language extensions I've tried have been very fully featured, with a ton of different options and integration into language-specific linters and build tools. This is probably Code's strongest feature - the breadth of the extension ecosystem and the level of quality found within.

Kernel development

Okay, let's get into the main thing that matters - how well does the thing actually edit code. The kernel is tricky. It's huge, it has its own build system, and in my case I build it with cross compilers for another architecture. Also, y'know, it's all in C and built with make, not exactly great for any kind of IDE integration.

The first thing I did was check out the vscode-linux-kernel project by GitHub user "amezin", which is a great starting point. All you have to do is clone the repo, build your kernel (with a cross compiler works fine too), and run the Python script to generate the compile_commands.json file. Once you've done this, go-to-definition (gd in vim mode) works pretty well. It's not flawless, but it does go cross-file, and will pop up a nice UI if it can't figure out which file you're after.
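For reference, the overall flow looks roughly like the sketch below. The helper script name and the cross-compiler prefix are assumptions rather than something verified against the repo, so check its README for the exact steps.

# Sketch only: clone the helper repo as .vscode, build the kernel, then
# generate compile_commands.json for the language server to use.
cd ~/src/linux
git clone https://github.com/amezin/vscode-linux-kernel.git .vscode
make ARCH=powerpc CROSS_COMPILE=powerpc64le-linux-gnu- -j"$(nproc)" vmlinux
python .vscode/generate_compdb.py    # script name is an assumption, see the repo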

Code has good built-in git support, so actions like staging files for a commit can be done from within the editor. Ctrl-P lets you quickly navigate to any file with fuzzy-matching (which is impressively fast for a project of this size), and Ctrl-Shift-P will let you search commands, which I've been using for some git stuff.

git command completion in Code

There are some rough edges, though. Code is set on what so many modern editors are set on, which is the "one window per project" concept - so to get things working the way you want, you would open your kernel source as the current project. This makes it a pain to just open something else to edit, like some script, or checking the value of something in firmware, or chucking something in your bashrc.

Auto-triggering builds on change isn't something that makes a ton of sense for the kernel, and it's not present here. The kernel support in the repo above is decent, but it's not going to get you close to what more modern languages can get you in an editor like this.

Oh, and it has a powerpc assembly extension, but I didn't find it anywhere near as good as the one I "wrote" for Atom (I just took the x86 one and switched the instructions), so I'd rather use the C mode.

Terminal

Code has an actually good inbuilt terminal that uses your login shell. You can bring it up with Ctrl-`. The biggest gripe I have always had with Emacs is that you can never have a shell that you can actually do anything in, whether it's eshell or shell or term or ansi-term, you try to do something in it and it doesn't work or clashes with some Emacs command, and then when you try to do something Emacs-y in there it doesn't work. No such issue is present here, and it's a pleasure to use for things like triggering a remote build or doing some git operation you don't want to do with commands in the editor itself.

Not the most important feature, but I do like not having to alt-tab out and lose focus.

Well...is it good?

Yeah, it is. It has shortcomings, but installing Code and using the repo above to get started is probably the simplest way to get a competent kernel development environment going, with more features than most kernel developers (probably) have in their editors. Code is open source and so are its extensions, and it'd be the first thing I recommend to new developers who aren't already super invested into vim or Emacs, and it's worth a try if you have gripes with your current environment.

May 05, 2019

Ignition!


Whilst the chemistry was sometimes over my head, this book is an engaging summary of the history of US liquid rocket fuels during the height of the cold war. Fun to read and interesting as well. I enjoyed it.

Ignition! by John Drury Clark
Technology & Engineering, 1972, 214 pages


May 04, 2019

Codec2 and FreeDV Update

Quite a lot of Codec2/FreeDV development has been going on this year, so much that I have been neglecting the blog! Here is an update…

Github, Travis, and STM32

Early in 2019, the number of active developers had grown to the point where we needed more sophisticated source control, so in March we moved the Codec 2 project to GitHub. One feature I’m enjoying is the collaboration and messaging between developers.

Danilo (DB4PLE) immediately had us up and running with Travis, a tool that automatically builds our software every time it is pushed. This has been very useful in spotting build issues quickly, and reducing the amount of “human in the loop” manual testing.

Don (W7DMR), Danilo, and Richard (KF5OIM) have been doing some fantastic work on the cmake build and test system for the stm32 port of 700D. A major challenge has been building the same code on desktop platforms without breaking the embedded stm32 version, which has tight memory constraints.

We now have a very professional build and test system, and can run sophisticated unit tests from anywhere in the world on remote stm32 development hardware. A single “cmake test all” command can build and run a suite of automated tests on the x86 and stm32 platforms.
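For anyone wanting to try the desktop side of that workflow, a rough sketch is below; the repository URL and build directory name are assumptions, and the project README has the authoritative steps (the stm32 tests additionally need the embedded toolchain and hardware set up).

# Sketch only: desktop build of Codec 2 plus a run of the automated tests.
git clone https://github.com/drowe67/codec2.git
cd codec2 && mkdir build_linux && cd build_linux
cmake ..
make -j"$(nproc)"
ctest            # runs the unit test suite on the x86 side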

The fine stm32 work by Don will soon lead to new firmware for the SM1000, and FreeDV 700D is already running on radios that support the UHSDR firmware.

FreeDV in the UK

Mike (G4ABP) contacted me with some fine analysis of the FreeDV modems on the UK NVIS channel. Mike is part of a daily UK FreeDV net, which was experiencing some problems with loss of sync on FreeDV 700C. Together we have found (and fixed) bugs in FreeDV 700C and 700D.

The UK channel is interesting: high SNR (>10dB), but at times high Doppler spread (>3Hz), which the earlier FreeDV 700C modem may deal with better due to its high sampling rate of the channel phase. In contrast, FreeDV 700D has been designed for moderate Doppler (1Hz), but heavily optimised for low SNR operation. More investigation is required here, with off-air samples, to run any potential issues to ground.

I would like to hear from you if you have problems getting FreeDV 700D to work with strong signals! This could be a sign of fast fading “breaking” the modem. By working together, we can improve FreeDV.

FreeDV in Argentina

Jose, LU5DKI, is part of an active FreeDV group in Argentina. They have a Facebook page for their Radio Club Coronel Pringles LU1DIL that describes their activities. They are very happy with the low SNR and interference rejecting capabilities of FreeDV 700D:

Regarding noise FREEDV IS IMMUNE TO NOISE, coincidentally our CLUB is installed in a TELEVISION MONITORING CENTER, where the QRN by the monitors and computers is very intense, it is impossible to listen to a single SSB station, BUT FREEDV LISTENS PERFECTLY IN 40 METERS

Roadmap for 2019

This year I would like to get FreeDV 2020 released, and FreeDV 700D running on the SM1000. A bonus would be some improvement in the speech quality of the lower rate modes.

Reading Further

FreeDV 2020 First On Air Tests
Porting a LDPC Decoder to a STM32 Microcontroller
Universal Ham Software Defined Radio Github page

FreeDV 2020 First On Air Tests

Brad (AC0ZJ), Richard (KF5OIM) and I have been putting together the pieces required for the new FreeDV 2020 mode, which uses LPCNet Neural Net speech synthesis technology developed by Jean-Marc Valin. The goal of this mode is 8kHz audio bandwidth in just 1600 Hz of RF bandwidth. FreeDV 2020 is designed for HF channels where SSB is an “armchair copy” – SNRs of better than 10dB and slow fading.

FreeDV 2020 uses the fine OFDM modem ported to C by Steve (K5OKC) for the FreeDV 700D mode. Steve and I have modified this modem so it can operate at the higher bit rate required for FreeDV 2020. In fact, the modem can now be configured on the command line for any bandwidth and bit rate that you like, and even adapt the wonderful LDPC FEC codes developed by Bill (VK5DSP) to suit.

Brad is working on the integration of the FreeDV 2020 mode into the FreeDV GUI program. It’s going well, and he has made 1200 mile transmissions across the US to an SDR using the Linux version. Brad has also done some work on making FreeDV GUI deal with USB sound cards that come and go in different order.

Mark, VK5QI has just made a 3200km FreeDV transmission from Adelaide, South Australia to a KiwiSDR in the Bay of Islands, New Zealand. He decoded it with the partially working OSX version (we do most of our development on Ubuntu Linux).

I’m surprised as I didn’t think it would work so well over such long paths! There’s a bit of acoustic echo from Mark’s shack but you can get an idea of the speech quality compared to SSB. Thanks Mark!

For the adventurous, the freedv-gui source code 2020 development branch is here. We are currently performing on-air tests with the Linux version, and Brad is working on the Windows build.

Reading Further

Steve Ports an OFDM modem from Octave to C
Bill’s (VK5DSP) Low SNR Blog

May 02, 2019

Restricted Sleep Regime

Since moving down to Melbourne my poor sleep has started up again. It’s really hard to say what the main factor driving this is. My doctor down here has put me onto a drug-free way of trying to improve my sleep, and I think I kind of like it. While it’s no silver bullet, it is something I can go back to if I’m having trouble with my sleep, without having to get a prescription.

The basic idea is to maximise sleep efficiency. If you’re only getting n hours sleep a night, only spend n hours  a night in bed. This forces you to stay up and go to bed rather late for a few nights. Hopefully, being tired will help you sleep through the night in one large segment. Once you’ve successfully slept through the night a few times, relax your bed time by say fifteen minutes, and get used to that. Slowly over time, you increase the amount of sleep you’re getting, while keeping your efficiency high.

 
Person T has had Person A design a one-page flyer and sent it to Person J... as a single image. Person T is two hours ahead, time-zone wise, and Person A is roughly 12 hours behind.

Person J also wishes to email out the flyer with hyperlinks on each of two names in the image.

Sent as a bare image, she will not fly.

Embedding the image in a PDF would allow only the entire image to possess a single hyperlink.

So... crank up GIMP, open image, select the Move tool, drag Guides from each Ruler to section up the image. Each Guide changes nothing, however its presence allows the Rectangle Select tool to be very precise and consistent.

Now File ⇒ Save the work-file in case you wish to adjust things for another round. Here, I have applied the Cubist tool from the Filters to most of the content, so the idea is conveyed without revealing details of said content.

The next step is to Rectangle Select the top area (in the screenshot above, the left-name area has been Rectangle Selected), then Copy it (Ctrl+C is the keyboard shortcut), then File ⇒ Create ⇒ From Clipboard (Ctrl+Shift+V is the shortcut) to make the copy into a new image, export that image (File ⇒ Export) as a PNG (lossless compression), repeat for the bottom area, then in the central section, for the left, left-name, centre, right-name, right areas.

Open LibreOffice Writer, Insert ⇒ Image the top-area image, right-click, choose Properties, under the Type tab make it “As character”, and under the Crop tab set the Scale so it will all fit nicely (58% in this case, which can be tweaked later to suit), then OK. Click to the right of the image, press Shift+Enter to insert a NewLine (rather than a paragraph).

Now Insert ⇒ Image the centre left area, then left-name, centre, right-name, right. With the name areas (in this case) I also chose the Hyperlink tab within the Properties dialogue, and pasted the link into the URL field, making that image section click-able. When done, Shift+Enter to make a place for the bottom area.

Finally, Insert ⇒ Image the bottom-area image (and if it does not all butt up squarely, check (Format ⇒ Paragraph) that the Line Spacing for the document’s sole paragraph is set to Single). Now save (for the sake of posterity) and click the “Export as PDF” button.

April 30, 2019

Election Activity Bundle

With the upcoming federal election, many teachers want to do some related activities in class – and we have the materials ready for you! To make selecting suitable resources a bit easier, we have an Election Activity Bundle containing everything you need, available for just $9.90. Did you know that the secret ballot is an Australian […]

FreeDV QSO Party 2019 Part 2

Here is a fine summary of the FreeDV QSO Party 2019, which took place last weekend. I took part, and made several local and interstate contacts, plus listened in to many more.

Thanks so much to AREG for organising the event, and for all the Hams world wide who took part.

It would be great to make some international DX contacts using the mode, in particular to get some operational experience with the modems on long distance channels.

Generating Wideband Speech from Narrowband

I’m slowly getting up to speed on this machine learning caper. I had some free time this week, so I set myself a little machine learning exercise.

LPCNet represents speech using 18 bands that cover the range from 0 to 8000Hz (in the form of MFCCs). However the Wavenet work demonstrated high quality speech using just the Codec 2 2400 bit/s features, which only contain information in the 0 to 4 kHz range. This suggests we can regenerate the speech energy above 4000Hz, from the features beneath 4000Hz. In a speech coding application, this would save bits, as we no longer have to quantise and transmit the high frequency band energies.

So the goals of this project were:

  1. Gain experience in Machine Learning.
  2. Generate reasonable quality speech by synthesising the top 6 bands (3200 to 8000Hz) from the information in the lower 12 bands (which cover 0 to 2800Hz). Doesn’t have to be a perfect reconstruction, after all, we are throwing information away.

Method

As a starting point I set up a Keras model with a couple of Dense layers, and a linear output layer. The band features extracted by dump_data are the log10 of the band energies. Multiplying by 10 gives us the energy in dB. This is quite neat, as the network “loss” function (mean square error) effectively reports the distance in dB^2/10. This gives us a nice objective measure of distortion, and hopefully speech quality.

So the input to the network is the lower 12 bands, and the LPCNet pitch gain (a rough estimate of voicing). The output is the 6 high frequency bands.
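A minimal Keras sketch of that kind of network is shown below. It is not the actual train_regen.py; the layer sizes, training settings and the random placeholder data are only there to make the shape of the idea concrete.

# Sketch only: map 12 low-band log energies plus pitch gain to 6 high bands.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder data; in practice the features come from LPCNet's dump_data.
x_train = np.random.randn(10000, 13).astype("float32")   # 12 bands + pitch gain
y_train = np.random.randn(10000, 6).astype("float32")    # 6 high bands (dB)

model = keras.Sequential([
    layers.Input(shape=(13,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(6, activation="linear"),   # linear output: band energies in dB
])

# Mean square error on dB-scaled bands gives a loss in dB^2, as described above.
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, batch_size=32, epochs=10, validation_split=0.1)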

I wanted to sanity check my network to make sure it was getting results better than random. So I reasoned a trivial algorithm would just set the HF band energies to their mean. The distortion of this trivial algorithm is the variance of the training data. I measured the variance of the HF bands as 200dB^2, and my first attempts were reducing this to 50dB^2, a factor of four improvement. So that’s a good start.

I messed about with the number and size of layers, activation functions, optimisers, batch size, and epochs, which produced minor changes. On a whim I decided to remove the mean, and that significantly reduced the error to 12dB^2, or an rms error of 3.5dB. That’s within the range of what a (coarse) vector quantiser will achieve – and we are doing it with zero bits.

BTW – is it just me or is NN design just guesswork? Maybe it becomes educated guesswork as experience grows.

The mean of the bands is related to frame energy, and is often sent to the decoder in some form as part of regular quantisation. So is the pitch gain (e.g. in the form of a voicing flag). However, this NN only seems to work well when it’s the mean of all 18 bands.

Testing

I used a “vanilla” LPCNet system to test, slightly modified to output band energies rather than the DCT of the band energies. First I generate features using dump_data, then play the features straight into test_lpcnet to synthesise. There is no quantisation. I placed my HF regeneration network in between, optionally replacing the top 6 bands with the estimates from the network.

I use a small database (1E6 vectors) for LPCNet experimentation, as this trains LPCNet in just a few hours. I determined by experiment this was just large enough to synthesise good quality speech on samples from within the training database. I have a script (tinytrain.sh) that trains the network, and generates test samples, automating the process. If my experimental algorithms don’t work with this tiny database, no point going any further, and I haven’t wasted too much time. If the experiments work out, I can then train a better network, e.g. to deal with many different speakers and conditions.

So I used the same 1E6-vector database for training the high band regeneration algorithm. The files are available from my LPCNet Github repo (train_regen.py, test_regen.py, run_regen.sh, plot_regen.m).

Samples

Condition | Female | Male
Original | Play | Play
Vanilla LPCNet | Play | Play
Regen HF with mean removed | Play | Play
Regen HF without mean removed | Play | Play

The mean removed samples sound rather close to the vanilla. Almost too close. In the other samples without mean removal, the high frequencies are not regenerated as well, although they might still be useful in a low bit rate coding scenario, given they cost zero bits.

Here is a 3D plot of the first 1 second (100 x 10ms frames) of the bands for the female sample above. The little hills roll up and down as the words are articulated. At the high frequency end, the peaks tend to correspond to consonants, e.g. the “ch” in “birch” is in the middle. Right at the end (around frame 100 at the “rear”), we can see the start of the “sss” in “slid”.

The second plot shows the error of the regeneration network:

Below is a plot of a single frame, showing the original and regenerated bands;

Conclusion

Anyhoo, looks like the idea has promise, especially for reducing the bit rate required for speech codecs. Further work will show if the idea works for a wider range of speech samples, and with quantisation. The current model could possibly be improved to take into account adjacent frames using a convnet or RNN. OK, on with the next experiment…

LUV May 2019 Main Meeting: Kali Linux

May 7 2019 19:00
May 7 2019 21:00
May 7 2019 19:00
May 7 2019 21:00
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE LATER START TIME

7:00 PM to 9:00 PM Tuesday, May 7, 2019
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

  • errbufferoverfl: Kali Linux

Kali Linux

Many of us like to go for dinner nearby after the meeting, typically at Brunetti's or Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.

May 7, 2019 - 19:00

read more

LUV May 2019 Workshop: RISC-V development board

May 25 2019 12:30
May 25 2019 16:30
May 25 2019 12:30
May 25 2019 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

PLEASE NOTE CHANGE OF DATE DUE TO THE ELECTION

This month's meeting will be on 25 May rather than 18 May 2019 due to the election on the usual workshop date.

Speaker: Rodney Brown demonstrates his low-cost RISC-V development board running facial recognition software using the on-chip neural network.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

May 25, 2019 - 12:30

read more

April 27, 2019

Building new pods for the Spectracom 8140 using modern components

I've mentioned a bunch of times on the time-nuts list that I'm quite fond of the Spectracom 8140 system for frequency distribution. For those not familiar with it, it's simply running a 10MHz signal against a 12v DC power feed so that line-powered pods can tap off the reference frequency and use it as an input to either a buffer (10MHz output pods), decimation logic (1MHz, 100kHz etc.), or a full synthesizer (Versa-pods).

It was only in October last year that I got a house frequency standard going using an old Efratom FRK-LN, which now provides the reference; I'd use a GPSDO, but I live in a ground floor apartment without a usable sky view, which of course also makes it hard to test some of the GPS projects I'm doing. Despite living in a tiny apartment I have test equipment in two main places, so the 8140 is a great solution to allow me to lock all of them to the house standard.


(The rubidium is in the chunky aluminium chassis underneath the 8140)

Another benefit of the 8140 is that many modern pieces of equipment (such as my [HP/Agilent/]Keysight oscilloscope) have a single connector for reference frequency in/out, and should the external frequency ever go away it will switch back to its internal reference, but also send that back out the connector, which could lead to other devices sharing the same signal switching to it. The easy way to avoid that is to use a dedicated port from a distribution amplifier for each device like this, which works well enough until you have this situation in multiple locations.

As previously mentioned, the 8140 system uses pods to add outputs. While these pods are still available quite cheaply used on eBay (as of this writing, for as low as US$8, but ~US$25/pod has been common for a while), the cost of shipping to Australia has recently gone up to the point where I started to plan making my own.

By making my own pods I also get to add features that the original pods didn't have[1]. I started with a quad-output pod with optional internal line termination. This allows me to have feeds for multiple devices with the annoying behaviour I mentioned earlier. The enclosure is a Pomona model 4656, with the board designed to slot in, offering pads for the BNC pins to solder to for easy assembly.



This pod uses a Linear Technology (now Analog Devices) LTC6957 buffer for the input stage, replacing the discrete transistor & logic gate combined input stage in the original devices. The most notable change is that this stage works reliably down to -30dBm input (possibly further, I couldn't test beyond that), whereas the original pods stop working right around -20dBm.

As it turns out, although it can handle lower input signal levels, in other ways including power usage it seems very similar. One notable downside is the chip tops out at 4v absolute maximum input, so a separate regulator is used just to feed this chip. The main regulator has also been changed from a 7805 to an LD1117 variant.

On this version the output stage is the same TI 74S140 dual 4-input NAND gate as was used on the original pods, just in SOIC form factor.

As with the next board there is one error on this one: the wire loop that forms the ground connection was intended to fit a U-type pin header, however the footprint I used on the boards was just too tight to allow the pins through, so I've used some thin bus wire instead.



The second major variant I designed was a combo version, allowing sine & square outputs by just switching a jumper, or isolated[2] or line-regenerator (8040TA from Spectracom) versions with a simple sub-board containing just an inductor (TA) or 1:1 transformer (isolated).



This is the second revision of that board, where the 74S140 has been replaced by a modern TI 74LVC1G17 buffer. This version of the pod, set for sine output, uses almost exactly 30mA of current (since both the old & new pods use linear supplies that's the most sensible unit), whereas the original pods are right around 33mA. The empty pads at the bottom-left are simply placeholders for 2 100 ohm resistors to add 50 ohm line termination if desired.

The board fits into the Pomona 2390 "Size A" enclosures, or for the isolated version the Pomona 3239 "Size B". This is the reason the BNC connectors have to be extended to reach the board, on the isolated boxes the BNC pins reach much deeper into the enclosure.

If the jumpers were removed, plus the smaller buffer it should be easy to fit a pod into the Pomona "Miniature" boxes too.



I was also due to create some new personal businesscards, so I arranged the circuit down to a single layer (the only jumper is the requirement to connect both ground pins on the connectors) and merged it with some text converted to KiCad footprints to make a nice card on some 0.6mm PCBs. The paper on that photo is covering the link to the build instructions, which weren't written at the time (they're *mostly* done now, I may update this post with the link later).

Finally, while I was out travelling at the start of April my new (to me) HP 4395A arrived so I've finally got some spectrum output. The output is very similar between the original and my version, with the major notable difference being that my version is 10dB worse at the third harmonic. I lack the equipment (and understanding) to properly measure phase noise, but if anyone in AU/NZ wants to volunteer their time & equipment for an afternoon I'd love an excuse for a field trip.



Spectrum with input sourced from my house rubidium (natively a 5MHz unit) via my 8140 line. Note that despite saying "ExtRef" the analyzer is synced to its internal 10811 (which is an optional unit, and uses an external jumper, hence the display note).



Spectrum with input sourced from the analyzer's own 10811, and power from the DC bias generator also from the analyzer.


1: Or at least I didn't think they had, I've since found out that there was a multi output pod, and one is currently in the post heading to me.
2: An option on the standard Spectracom pods, albeit a rare one.

April 22, 2019

Pi-hole with DNS over TLS on Fedora

Quick and dirty guide to using Pi-hole with Stubby to provide both advertisement blocking and DNS over TLS. I’m using Fedora 30 ARM server edition on a Raspberry Pi 3.

Download Fedora Server ARM edition and write it to an SD card for the Raspberry Pi 3.

sudo fedora-arm-image-installer --resizefs --image=Fedora-Server-armhfp-30-1.2-sda.raw.xz --target=rpi3 --media=/dev/mmcblk0

Make sure your Raspberry Pi can already resolve DNS queries from some other source, such as your router or internet provider.

Log into the Fedora Server Cockpit web interface for the server (port 9090) and enable automatic updates from the Software tab. Else you can do updates manually.

sudo dnf -y update && sudo reboot

Install Stubby

Install Stubby to forward DNS requests over TLS.

sudo dnf install getdns-stubby bind-utils

Edit the Stubby config file.

sudo vim /etc/stubby/stubby.yml

Set listen_addresses to localhost 127.0.0.1 on port 53000 (also set your preferred upstream DNS providers, if you want to change the defaults, e.g. CloudFlare).

listen_addresses:
  - 127.0.0.1@53000
  - 0::1@53000

Start and enable Stubby, checking that it’s listening on port 53000.

sudo systemctl restart stubby
sudo ss -lunp |grep 53000
sudo systemctl enable stubby

Stubby should now be listening on port 53000, which we can test with dig. The following command should return an IP address for google.com.

dig @localhost -p 53000 google.com

Next we’ll use Pi-hole as a caching DNS service to forward requests to Stubby (and provide advertisement blocking).

Install Pi-hole

Sadly, Pi-hole doesn’t support SELinux at the moment so set it to permissive mode (or write your own rules).

sudo setenforce 0
sudo sed -i s/^SELINUX=.*/SELINUX=permissive/g /etc/selinux/config

Install Pi-hole from their Git repository.

sudo dnf install git
git clone --depth 1 https://github.com/pi-hole/pi-hole.git Pi-hole
cd "Pi-hole/automated install/"
sudo ./basic-install.sh

The installer will run, install deps and prompt for configuration. When asked what DNS to use, select Custom from the bottom of the list.

Custom DNS servers

Set the server to 127.0.0.1 (note that we cannot set the port here, we’ll do that later)

Use local DNS server

In the rest of the installer, also enable the web interface and server if you like and allow it to modify the firewall else this won’t work at all! 🙂 Make sure you take note of your admin password from the last screen, too.

Finally, add the port to our upstream (localhost) DNS server so that Pi-hole can forward requests to Stubby.

sudo sed -i '/^server=/ s/$/#53000/' /etc/dnsmasq.d/01-pihole.conf
sudo sed -i '/^PIHOLE_DNS_[1-9]=/ s/$/#53000/' /etc/pihole/setupVars.conf
sudo systemctl restart pihole-FTL

If you don’t want to muck around with localhost and ports you could probably add an IP alias and bind your Stubby to that on port 53 instead.
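A rough sketch of that alternative is below; the addresses are examples only, the extra address will not survive a reboot without further configuration, and Stubby will need permission to bind port 53.

# Give the loopback interface a second address for Stubby to listen on.
sudo ip addr add 127.0.0.2/32 dev lo

# In /etc/stubby/stubby.yml, listen on that address with the default port:
#   listen_addresses:
#     - 127.0.0.2

# Then set Pi-hole's custom upstream DNS server to 127.0.0.2 (plain port 53).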

Testing

On a machine on your network, set /etc/resolv.conf to point to the IP address of your Pi-hole server to use it for DNS.

On the Pi-hole, check incoming DNS requests to ensure they are listening and forwarding on the right ports using tcpdump.

sudo tcpdump -Xnn -i any port 53 or port 53000 or port 853

Back on your client machine, ping google.com and with any luck it will resolve.

For a new query, tcpdump on your Pi-hole box should show an incoming request from the client machine to your pi-hole on port 53, a follow-up localhost request to 53000 and then outward request from your Pi-hole to 853, then finally the returned result back to your client machine.

You should also notice that the payload for the internal DNS queries are plain text, but the remote ones are encrypted.

Web interface

Start browsing around and see if you notice any difference where you’d normally see ads. Then jump onto the web interface on your Pi-hole box and take a look around.

Pi-hole web interface

If that all worked, you could get your DHCP server to point clients to your shiny new Pi-hole box (i.e. use DHCP options 6,<ip_address>).

If you’re feeling extra brave, you could redirect all unencrypted DNS traffic on port 53 back to your internal DNS before it leaves your network, but that might be another blog post…


Vale Polly Samuel (1963-2017): On Dying & Death

Those of you who follow me on Twitter will know some of this already, but I’ve been meaning to write here for quite some time about all this. It’s taken me almost two years to write, because it’s so difficult to find the words to describe this. I’ve finally decided to take the plunge and finish it despite feeling it could be better, but if I don’t I’ll never get this out.

September 2016 was not a good month for my wonderful wife Polly, she’d been having pains around her belly and after prodding the GP she managed to get a blood test ordered. They had suspected gallstones or gastritis but when the call came one evening to come in urgently in the next morning for another blood test we knew something was up. After the blood test we were sent off for an ultrasound of the liver and with that out of the way went out for a picnic on Mount Dandenong for a break. Whilst we were eating we got another phone call from the GP, this time to come and pick up a referral for an urgent MRI. We went to pick it up but when they found out Polly had already eaten they realised they would need to convert to a CT scan. A couple of phone calls later we were booked in for one that afternoon. That evening was another call to come back to see the GP. We were pretty sure we knew what was coming.

The news was not good: Polly had “innumerable” tumours in her liver. Over 5 years after surgery and chemo for her primary breast cancer and almost at the end of her 5 years of tamoxifen the cancer had returned. We knew the deal with metastatic cancer, but it was still a shock when the GP said “you know this is not a curable situation”. So the next day (Friday) it was right back to her oncologist who took her off the tamoxifen immediately (as it was no longer working) and scheduled chemotherapy for the following Monday, after an operation to install a PICC line. He also explained about what this meant, that this was just a management technique to (hopefully) try and shrink the tumours and make life easier for Polly for a while. It was an open question about how long that while would be, but we knew from the papers online that she had found looking at the statistics that it was likely months, not years, that we had. Polly wrote about it all at the time, far more eloquently than I could, with more detail, on her blog.

Chris, my husband, best pal, and the love of my life for 17 years, and I sat opposite the oncologist. He explained my situation was not good, that it was not a curable situation. I had already read that extensive metastatic spread to the liver could mean a prognosis of 4-6 months but if really favorable as long as 20 months.

The next few months were a whirlwind of chemo, oncology, blood tests, crying, laughing and loving. We were determined to talk about everything, and Polly was determined to prepare as quickly as she could for what was to come. They say you should “put your affairs in order” and that’s just what she did, financially, business-wise (we’d been running an AirBNB and all those bookings had to be canceled ASAP, plus of course all her usual autism consulting work) and personally. I was so fortunate that my work was so supportive and able to be flexible about my hours and days and so I could be around for these appointments.

Over the next few weeks it was apparent that the chemo was working, breathing & eating became far easier for her and a follow up MRI later on showed that the tumours had shrunk by about 75%. This was good news.

In October 2016 was Polly’s 53rd birthday and so she set about planning a living wake for herself, with a heap of guests, music courtesy of our good friend Scott, a lot of sausages (and other food) and good weather. Polly led the singing and there was an awful lot of merriment. Such a wonderful time and such good memories were made that day.

Polly singing at her birthday party in 2016

That December we celebrated our 16th wedding anniversary together at a lovely farm-stay place in the Yarra Valley before having what we were pretty sure was our last Christmas together.

Polly and Chris at the farm-stay for our wedding anniversary

But then in January came the news we’d been afraid of, the blood results were showing that the first chemo had run out of steam and stopped working, so it was on to chemo regime #2. A week after starting the new regime we took a delayed holiday up to the Blue Mountains in New South Wales (we’d had to cancel previously due to her diagnosis) and spent a long weekend exploring the area and generally having fun.

Polly and Chris at Katoomba, NSW

But in early February it was clear that the second line chemo wasn’t doing anything, and so it was on to the third line chemo. Polly had also been having fluid build up in her abdomen (called ascites) and we knew they would have to start to draining that at some point, February was that point; we spent the morning of Valentines Day in the radiology ward where they drained around 4 litres from her! The upside from that was it made life so much easier again for her. We celebrated that by going to a really wonderful restaurant that we used for special events for dinner for Valentines, something we hadn’t thought possible that morning!

Valentine's Day dinner at Copperfields

Two weeks after that we learned from the oncologist that the third line chemo wasn’t doing anything either and he had to give us the news that there wasn’t any treatment he could offer us that had any prospect of helping. Polly took that in her usual pragmatic and down-to-earth way, telling the oncologist that she didn’t see him as the reaper but as her fairy godfather who had given her months of extra quality time and bringing a smile to his and my face. She also asked whether the PICC line (which meant she couldn’t have a bath, just shower with a protective cover over it) could come out and the answer was “yes”.

The day before that news we had visited the palliative ward there for the first time, Polly had a hard time with hospitals and so we spent time talking to the staff, visiting rooms and Polly all the time reframing it to reduce and remove the anxiety. The magic words were “hotel-hospital”, which it really did feel like. We talked with the oncologist about how it all worked and what might happen.

We also had a home palliative team who would come and visit, help with pain management and be available on the phone at all hours to give advice and assist where they could. Polly felt uncertain about them at first as she wasn’t sure what they would make of her language issues and autism, but whilst they did seem a bit fazed at first by someone who was dealing with the fact that they were dying in such a blunt and straightforward manner things soon smoothed out.

None of this stopped us living, we continued to go out walking at our favourite places in our wonderful part of Melbourne, continued to see friends, continued to joke and dance and cry and laugh and cook and eat out.

Polly on minature steam train

Oh, and not forgetting putting a new paved area in so we could have a little outdoor fire area to enjoy with friends!

Chris laying paving slabs for fire area Polly and Morghana enjoying the fire!

But over time the ascites was increasing, with each drain being longer, with more fluid, and more taxing for Polly. She had decided that when it would get to the point that she would need two a week then that was enough and time to call it a day. Then, on a Thursday evening after we’d had an afternoon laying paving slabs for another little patio area, Polly was having a bath whilst I was researching some new symptoms that had appeared, and when Polly emerged I showed her what I had found. The symptoms matched what happens when that pressure that causes the ascites gets enough to push blood back down other pathways and as we read what else could lie in store Polly decided that was enough.

That night Polly emailed the oncologist to ask them to cancel her drain which was scheduled for the next day and instead to book her into the palliative ward. We then spent our final night together at home, before waking the next day to get the call to confirm that all was arranged from their end and that they would have a room by 10am, but to arrive when was good for us. Friends were informed and Polly and I headed off to the palliative ward, saying goodbye to the cats and leaving our house together for the very last time.

Arriving at the hospital we dropped in to see the oncology, radiology and front-desk staff we knew to chat with them before heading up to the palliative ward to meet the staff there and set up in the room. The oncologist visited and we had a good chat about what would happen with pain relief and sedation once Polly needed it. Shortly after our close friends Scott and Morghana arrived from different directions and I had brought Polly’s laptop and a 4G dongle and so on Skype arrived Polly’s good Skype pal Marisol joined us, virtually. We shared a (dairy free) Easter egg, some raspberry lemonade and even some saké! We had brought in a portable stereo and CD’s and danced and sang and generally made merry – which was so great.

After a while Polly decided that she was too uncomfortable and needed the pain relief and sedation, so everything was put in its place and we all said our goodbyes to Polly as she was determined to do the final stages on her own, and she didn’t want anyone around in case it caused her to try and hang on longer than she really should. I told her I was so proud of her and so honoured to be her husband for all this time. Then we left, as she wished, with Scott and Morghana coming back with me to the house. We had dinner together at the house and then Morghana left for home and Scott kindly stayed in the spare room.

The next day Scott and I returned to the hospital, Polly was still sleeping peacefully so after a while he and I had a late lunch together, making sure to fulfil Polly’s previous instructions to go enjoy something that she couldn’t, and then we went our separate ways. I had not been home long before I got the call from the hospital – Polly was starting to fade – so I contacted Scott and we both made our way back there again. The staff were lovely, they managed to rustle up some food for us as well as tea and coffee and would come and check on us in the waiting lounge, next door to where Polly was sleeping. At one point the nurse came in and said “you need a hug, she’s still sleeping”. Then, a while after, she came back in and said “I need a hug, she’s gone…”.

I was bereft. Whilst intellectually I knew this was inevitable, the reality of knowing that my life partner of 17 years was gone was so hard. The nurse told us that we could see Polly now, and so Scott and I went to see her to say our final goodbye. She was so peaceful, and I was grateful that things had gone as she wanted and that she had been able to leave on her own terms and without the greater discomforts and pain that she was worried would still be coming. Polly had asked us to leave a CD on, and as we were leaving the nurses said to us “oh, we changed the CD earlier on today because it seemed strange to just have the one on all the time. We put this one on by someone called ‘Donna Williams’, it was really nice.” So they had, unknowingly, put her own music on to play her out.

As you would expect if you had ever met Polly she had put all her affairs in order, including making preparations for her memorial as she wanted to make things as easy for me as possible. I arranged to have it live streamed for friends overseas and as part of that I got a recording of it, which I’m now making public below. Very sadly her niece Jacqueline, who talks at one point about going ice skating with her, has also since died.

Polly and I were so blessed to have 16 wonderful years together, and even at the end the fact that we did not treat death as a taboo and talked openly and frankly about everything (both as a couple and with friends) was such a boon for us. She made me such a better person and will always be part of my life, in so many ways.

Finally, I leave you with part of Polly’s poem & song “Still Awake”..

Time is a thief, which steals the chances that we never get to take.
It steals them while we are asleep.
Let’s make the most of it, while we are still awake.

Polly at Cardinia Reservoir, late evening

This item originally posted here:

Vale Polly Samuel (1963-2017): On Dying & Death

April 20, 2019

Now migrated to Drupal 8!

Now migrated to Drupal 8! kattekrab Sat, 20/04/2019 - 22:08

Leadership, and teamwork.

Leadership, and teamwork. kattekrab Fri, 13/04/2018 - 04:09

Makarrata

Makarrata kattekrab Thu, 14/06/2018 - 20:19

Communication skills for everyone

Communication skills for everyone kattekrab Sat, 17/03/2018 - 13:01

DrupalCon Nashville

DrupalCon Nashville kattekrab Sat, 17/03/2018 - 22:01

Powerful Non Defensive Communication (PNDC)

Powerful Non Defensive Communication (PNDC) kattekrab Sun, 10/03/2019 - 09:00

I said, let me tell you now

I said, let me tell you now kattekrab Sat, 10/03/2018 - 09:56

The Five Whys

The Five Whys kattekrab Sat, 16/06/2018 - 09:16

Site building with Drupal

Site building with Drupal kattekrab Sat, 17/02/2018 - 14:05

Six years and 9 months...

Six years and 9 months... kattekrab Sat, 27/10/2018 - 13:05

April 17, 2019

Programming an AnyTone AT-D878UV on Linux using Windows 10 and VirtualBox

I recently acquired an AnyTone AT-D878UV DMR radio which is unfortunately not supported by chirp, my usual go-to free software package for programming amateur radios.

Instead, I had to setup a Windows 10 virtual machine so that I could setup the radio using the manufacturer's computer programming software (CPS).

Install VirtualBox

Install VirtualBox:

apt install virtualbox virtualbox-guest-additions-iso

and add your user account to the vboxusers group:

adduser francois vboxusers

to make filesharing between the host and the guest work.

Finally, reboot to ensure that group membership and kernel modules are all set.

Create a Windows 10 virtual machine

Create a new Windows 10 virtual machine within VirtualBox. Then download Windows 10 from Microsoft and start the virtual machine with the .iso file mounted as an optical drive.
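If you prefer the command line to the GUI, something like the following VBoxManage sketch creates an equivalent machine; the name, memory, disk size and file paths are examples only.

# Sketch only: create and configure a Windows 10 VM from the command line.
VBoxManage createvm --name win10 --ostype Windows10_64 --register
VBoxManage modifyvm win10 --memory 4096 --cpus 2 --usb on
VBoxManage createmedium disk --filename win10.vdi --size 40960    # size in MB
VBoxManage storagectl win10 --name SATA --add sata
VBoxManage storageattach win10 --storagectl SATA --port 0 --device 0 --type hdd --medium win10.vdi
VBoxManage storageattach win10 --storagectl SATA --port 1 --device 0 --type dvddrive --medium Win10.iso
VBoxManage startvm win10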

Follow the instructions to install Windows 10, paying attention to the various privacy options you will be offered.

Once Windows is installed, mount the host's /usr/share/virtualbox/VBoxGuestAdditions.iso as a virtual optical drive and install the VirtualBox guest additions.

Installing the CPS

With Windows fully setup, it's time to download the latest version of the computer programming software.

Unpack the downloaded file and then install it as Admin (right-click on the .exe).

Do NOT install the GD driver update or the USB driver, they do not appear to be necessary.

Program the radio

First, you'll want to download from the radio to get a starting configuration that you can change.

To do this:

  1. Turn the radio on and wait until it has finished booting.
  2. Plug the USB programming cable onto the computer and the radio.
  3. From the CPS menu choose "Set COM port".
  4. From the CPS menu choose "Read from radio".

Save this original codeplug to a file as a backup in case you need to easily reset back to the factory settings.

To program the radio, follow this handy third-party guide since it's much better than the official manual.

You should be able to use the "Write to radio" menu option without any problems once you're done creating your codeplug.

April 13, 2019

Secure ssh-agent usage

ssh-agent was in the news recently due to the matrix.org compromise. The main takeaway from that incident was that one should avoid the ForwardAgent (or -A) functionality when ProxyCommand can do the job, and consider multi-factor authentication on the server-side, for example using libpam-google-authenticator or libpam-yubico.
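For example, a minimal ~/.ssh/config sketch of the ProxyCommand/ProxyJump approach, with made-up host names, keeps the agent on your own machine while still reaching hosts behind a bastion:

# Host names here are examples only.
Host internal-host
    ProxyJump bastion
    # Equivalent on older OpenSSH versions without ProxyJump:
    # ProxyCommand ssh -W %h:%p bastion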

That said, there are also two options to ssh-add that can help reduce the risk of someone else with elevated privileges hijacking your agent to make use of your ssh credentials.

Prompt before each use of a key

The first option is -c which will require you to confirm each use of your ssh key by pressing Enter when a graphical prompt shows up.

Simply install an ssh-askpass frontend like ssh-askpass-gnome:

apt install ssh-askpass-gnome

and then use this to when adding your key to the agent:

ssh-add -c ~/.ssh/key

Automatically removing keys after a timeout

ssh-add -D will remove all identities (i.e. keys) from your ssh agent, but requires that you remember to run it manually once you're done.

That's where the second option comes in. Specifying -t when adding a key will automatically remove that key from the agent after a while.

For example, I have found that this setting works well at work:

ssh-add -t 10h ~/.ssh/key

where I don't want to have to type my ssh password every time I push a git branch.

At home on the other hand, my use of ssh is more sporadic and so I don't mind a shorter timeout:

ssh-add -t 4h ~/.ssh/key

Making these options the default

I couldn't find a configuration file to make these settings the default and so I ended up putting the following line in my ~/.bash_aliases:

alias ssh-add='ssh-add -c -t 4h'

so that I can continue to use ssh-add as normal and not have to remember to include these extra options.

April 11, 2019

Using a MCP4921 or MCP4922 as a SPI DAC for Audio on Raspberry Pi


I’ve been playing recently with using an MCP4921 as an audio DAC on a Raspberry Pi Zero W, although an MCP4922 would be equivalent (the ’22 is a two channel DAC, the ’21 is a single channel DAC). This post is my notes on where I got to before I decided this wasn’t going to work out for me.

My basic requirement was to be able to play sounds on a raspberry pi which already has two SPI buses in use. Thus, adding a SPI DAC seemed like a logical choice. The basic circuit looked like this:

MCP4921 SPI DAC circuit

Driving this circuit looked like this (noting that this code was a prototype and isn’t the best ever). The bit that took a while there was realising that the CS line needs to be toggled between 16 bit writes. Once that had been done (which meant moving to a different spidev call), things were on the up and up.
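The prototype code itself isn't reproduced here, but a minimal sketch of that kind of driving loop looks something like the following; the WAV file name is a placeholder, and as the comments note it makes no attempt to pace the samples correctly.

# Sketch only: push 12-bit samples to an MCP4921 over SPI with spidev.
# Each xfer2() call sends one 16-bit word and releases CS afterwards, which
# gives the per-word CS toggle the DAC needs to latch each sample.
import audioop
import wave

import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                 # bus 0, CE0
spi.max_speed_hz = 10000000

with wave.open("sample.wav", "rb") as w:
    frames = w.readframes(w.getnframes())

# Assume 16-bit signed mono PCM; shift each sample to unsigned 12 bits.
for i in range(len(frames) // 2):
    sample = audioop.getsample(frames, 2, i)
    value = (sample + 32768) >> 4               # 0..4095
    # Command bits 0x30: channel A, unbuffered, 1x gain, active mode,
    # followed by the 12 data bits.
    spi.xfer2([0x30 | (value >> 8), value & 0xFF])
    # No pacing here, so playback runs far too fast - exactly the timing
    # problem described below.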

This was the point I realised that I was at a dead end. I can’t find a way to send the data to the DAC in a way which respects the timing of the audio file. Before I had to do small writes to get the CS line to toggle I could do things fast enough, but not afterwards. Perhaps there’s a DMA option instead, but I haven’t found one yet.

Instead, I think I’m going to go and try PWM based audio. If that doesn’t work, it will be a MAX219 i2c DAC for me!


April 10, 2019

Audiobooks – March 2019

An Economist Gets Lunch: New Rules for Everyday Foodies by Tyler Cowen

A huge amount of practical advice on how and where to find the best food, both locally and abroad, plus good explanations as to why. 8/10

The Not-Quite States of America: Dispatches from the Territories and Other Far-Flung Outposts of the USA by Doug Mack

Writer tours the not-states of the USA. A bit too fluffy most of the time & too much hanging with US expats. Some interesting bits. 6/10

Shattered: Inside Hillary Clinton’s Doomed Campaign by Jonathan Allen & Amie Parnes

Chronology of the campaign based on background interviews with staffers. A reader needs a good knowledge of the race, since this is assumed. Interesting enough. 7/10

Rush Hour by Iain Gately

A history of commuting (from the early railway era), how it has driven changes in housing, work and society. Plus lots of other random stuff. Very pleasant. 8/10
