Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

August 29, 2016

Monitoring of Monitoring

I was recently asked to get data from a computer that controlled security cameras after a crime had been committed. Due to the potential legal issues I refused to take the computer away and insisted on performing the work at the office of the company in question. Hard drives are vulnerable to damage from vibration, and there is always a risk involved in moving hard drives or systems containing them. A hard drive holding evidence of a crime brings additional potential complications, so I wanted to stay within view of the man who commissioned the work, just so there could be no misunderstanding.

The system had a single IDE disk, which is an indication of the age of the system. One of the benefits of SATA over IDE is that swapping disks is much easier: SATA is designed for hot-swap, and even systems that don’t support hot-swap carry less risk of mechanical damage when changing disks than IDE systems do. For an appliance-type system, where a disk might be expected to be changed by someone who’s not a sysadmin, SATA provides more benefits over IDE than in some other use cases.

I connected the IDE disk to a USB-IDE device so I could read it from my laptop. But the disk just made repeated buzzing sounds while failing to spin up. This is an indication that the drive was probably experiencing “stiction” which is where the heads stick to the platters and the drive motor isn’t strong enough to pull them off. In some cases hitting a drive will get it working again, but I’m certainly not going to hit a drive that might be subject to legal action! I recommended referring the drive to a data recovery company.

The probability of getting useful data from the disk in question seems very low. It could be that the drive had stiction for months or years. If the drive is recovered it might turn out to have data from years ago and not the recent data that is desired. It is possible that the drive only got stiction after being turned off, but I’ll probably never know.

Doing it Properly

Ever since RAID was introduced there has been no excuse for keeping important data on a single disk on its own. Linux Software RAID didn’t support online rebuild back when 10G was a large disk, but since the late 90’s it has worked well and there’s no reason not to use it. The probability of a single IDE disk surviving long enough on its own to capture useful security data is not particularly good.

Even with 2 disks in a RAID-1 configuration there is a chance of data loss. Many years ago I ran a server at my parents’ house with 2 disks in a RAID-1, and both disks developed errors during one hot summer. I wrote a program that’s like ddrescue but which would read from the second disk if the first gave a read error, and I ended up not losing any important data AFAIK. BTRFS has some potential benefits for recovering from such situations, but I don’t recommend deploying BTRFS in embedded systems any time soon.

Monitoring is a requirement for reliable operation. For desktop systems you can get by without specific monitoring, but that is because you are effectively relying on the user to monitor it themselves. Since I started using mon (which is very easy to set up) it has notified me of several problems with my laptop that I wouldn’t have otherwise noticed. Ideally, for desktop systems you should have monitoring of disk space, temperature, and certain critical daemons that need to be running but whose absence the user wouldn’t immediately notice (such as cron and syslogd).
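
As a rough illustration (this is a plain shell sketch rather than mon configuration, and the mount points, thresholds, and daemon names are just examples), the kinds of checks I mean look like this:

#!/bin/sh
# Sketch of the desktop checks described above: disk space, temperature and
# critical daemons. A tool like mon wraps checks like these with scheduling
# and alerting; the thresholds and names here are examples only.
df -P / /home | awk 'NR > 1 && $5+0 > 90 {print "low space on " $6}'
sensors 2>/dev/null | awk '/^Core/ && $3+0 > 80 {print "running hot: " $0}'
for d in cron rsyslogd; do
    pgrep -x "$d" > /dev/null || echo "daemon not running: $d"
done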

There are some companies that provide 3G SIMs for embedded/IoT applications with rates that are significantly cheaper than any of the usual phone/tablet plans if you use small amounts of data or SMS. For a reliable CCTV system the best thing to do would be to have a monitoring contract, and have the monitoring system trigger an event if there’s a problem with the hard drive etc, and also if the system fails to send an “I’m OK” message for a certain period of time.

I don’t know if people are selling CCTV systems without monitoring to compete on price or if companies are cancelling monitoring contracts to save money. But whichever is happening, it’s significantly reducing the value derived from those CCTV systems.

[mtb/events] Oxfam Trailwalker - Sydney 2016 - ARNuts


A great day out on the trail with friends (fullsize)
It did not really hit me in the lead-up, or during the event until about half way, that this was yet another 100km, and these are indeed somewhat tough to get through. The day out in the bush with my friends Alex, David and Julie was awesome.

As I say in the short report with the photos linked below, Oxfam is a great charity, and it is pretty awesome that they run these trailwalker events in many places around the world to fundraise and get people to enjoy some quality outdoor time. This is a hard course (that it took us 14h30m to get through shows that), but it sure is pretty: amazing native flowers, views of waterways and bush, and a route that gets into Manly with you hardly realising you are in the middle of the biggest city in Australia.

My words and photos are online in my Oxfam Trailwalker - Sydney 2016 - ARnuts gallery. What a fun day out!

August 28, 2016

fakecloud

I wrote my first Mojolicious web app yesterday, a cloud-init meta-data server to enable running pre-built VM images (e.g. as provided by Debian, Ubuntu, etc.) without having to install and manage a complete, full-featured cloud environment like OpenStack.

I hacked up something similar several years ago, when I was regularly building VM images at home for OpenStack at work, with just plain-text files served by Apache, but that had pretty much everything hard-coded. fakecloud does a lot more and allows per-VM customisation of user-data (using the IP address of the requesting host). Not bad for a day’s hacking with a new web framework.
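
I haven’t checked which datasource layout fakecloud presents, but for reference the EC2-style endpoints that cloud-init commonly polls from inside a VM look like this (the link-local address is the standard one cloud-init expects; a NoCloud-style setup would use different paths):

# Hypothetical check of a cloud-init style meta-data service from inside a VM;
# the exact paths depend on the datasource the image is configured to use.
curl http://169.254.169.254/latest/meta-data/instance-id
curl http://169.254.169.254/latest/user-data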

https://github.com/craig-sanders/fakecloud

fakecloud is a post from: Errata

August 26, 2016

Live migrating Btrfs from RAID 5/6 to RAID 10

Recently it was discovered that the RAID 5/6 implementation in Btrfs is broken, due to the fact that it can miscalculate parity (which is rather important in RAID 5 and RAID 6).

So what to do with an existing setup that’s running native Btrfs RAID 5/6?

Well, fortunately, this issue doesn’t affect non-parity based RAID levels such as 1 and 0 (and combinations thereof) and it also doesn’t affect a Btrfs filesystem that’s sitting on top of a standard Linux Software RAID (md) device.

So if down-time isn’t a problem, we could re-create the RAID 5/6 array using md and put Btrfs back on top and restore our data… or, thanks to Btrfs itself, we can live migrate it to RAID 10!

A few caveats though. When using RAID 10, space efficiency is reduced to 50% of your drives, no matter how many you have (this is because it’s mirrored). By comparison, with RAID 5 you lose the capacity of a single drive, and with RAID 6 it’s two, no matter how many drives you have.

This is important to note, because a RAID 5 setup with 4 drives that is using more than 2/3rds of the total space will be too big to fit on RAID 10. Btrfs also needs space for System, Metadata and Reserves so I can’t say for sure how much space you will need for the migration, but I expect considerably more than 50%. In such cases, you may need to add more drives to the Btrfs array first, before the migration begins.
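
A quick sanity check before starting (using /mnt as the mount point, as in the commands further down) is to compare what’s in use against the total raw capacity of all devices; as a rough rule the used figure should be comfortably under half of the total:

# Show overall and per-device allocation so you can judge whether the data
# will still fit once everything is mirrored under RAID 10.
btrfs filesystem usage /mnt
btrfs filesystem show /mnt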

So, you will need:

  • At least 4 drives
  • An even number of drives (unless you keep one as a spare)
  • Data in use that is much less than 50% of the total provided by all drives (number of disks / 2)

Of course, you’ll have a good, tested, reliable backup or two before you start this. Right? Good.

Plug any new disks in and partition or luksFormat them if necessary. We will assume your new drive is /dev/sdg, you’re using dm-crypt and that Btrfs is mounted at /mnt. Substitute these for your actual settings.
cryptsetup luksFormat /dev/sdg
UUID="$(cryptsetup luksUUID /dev/sdg)"
echo "luks-${UUID} UUID=${UUID} none" >> /etc/crypttab
cryptsetup luksOpen /dev/sdg luks-${UUID}
btrfs device add /dev/mapper/luks-${UUID} /mnt

The migration is going to take a long time, so best to run this in a tmux or screen session.

screen
time btrfs balance /mnt
time btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt

After this completes, check that everything has been migrated to RAID 10.
btrfs fi df /mnt
Data, RAID10: total=2.19TiB, used=2.18TiB
System, RAID10: total=96.00MiB, used=240.00KiB
Metadata, RAID10: total=7.22GiB, used=5.40GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

If you still see some RAID 5/6 entries, run the same migrate command and then check that everything has migrated successfully.
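
A quick way to check for leftovers is to grep the output for the old profiles; no output means the conversion is complete:

btrfs fi df /mnt | grep -Ei 'raid5|raid6'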

Now while we’re at it, let’s defragment everything.
time btrfs filesystem defragment /mnt # this defrags the metadata
time btrfs filesystem defragment -r /mnt # this defrags data

For good measure, let’s rebalance again without the migration (this will also take a while).
time btrfs fi balance start --full-balance /mnt

August 25, 2016

Debugging gnome-session problems on Ubuntu 14.04

After upgrading an Ubuntu 14.04 ("trusty") machine to the latest 16.04 Hardware Enablement packages, I ran into login problems. I could log into my user account and see the GNOME desktop for a split second before getting thrown back into the LightDM login manager.

The solution I found was to install this missing package:

apt install libwayland-egl1-mesa-lts-xenial

Looking for clues in the logs

The first place I looked was the log file for the login manager (/var/log/lightdm/lightdm.log) where I found the following:

DEBUG: Session pid=12743: Running command /usr/sbin/lightdm-session gnome-session --session=gnome
DEBUG: Creating shared data directory /var/lib/lightdm-data/username
DEBUG: Session pid=12743: Logging to .xsession-errors

This told me that the login manager runs the gnome-session command and gets it to create a session of type gnome. That command line is defined in /usr/share/xsessions/gnome.desktop (look for Exec=):

[Desktop Entry]
Name=GNOME
Comment=This session logs you into GNOME
Exec=gnome-session --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME

I couldn't see anything unexpected there, but it did point to another log file (~/.xsession-errors) which contained the following:

Script for ibus started at run_im.
Script for auto started at run_im.
Script for default started at run_im.
init: Le processus gnome-session (GNOME) main (11946) s'est achevé avec l'état 1
init: Déconnecté du bus D-Bus notifié
init: Le processus logrotate main (11831) a été tué par le signal TERM
init: Le processus update-notifier-crash (/var/crash/_usr_bin_unattended-upgrade.0.crash) main (11908) a été tué par le signal TERM

Searching for French error messages isn't as useful as searching for English ones, so I took a look at /var/log/syslog and found this:

gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' respawning too quickly
gnome-session[4134]: CRITICAL: We failed, but the fail whale is dead. Sorry....

It looks like gnome-session is executing gnome-shell and that this last command is terminating prematurely. This would explain why gnome-session exits immediately after login.

Increasing the amount of logging

In order to get more verbose debugging information out of gnome-session, I created a new type of session (GNOME debug) by copying the regular GNOME session:

cp /usr/share/xsessions/gnome.desktop /usr/share/xsessions/gnome-debug.desktop

and then adding --debug to the command line inside gnome-debug.desktop:

[Desktop Entry]
Name=GNOME debug
Comment=This session logs you into GNOME debug
Exec=gnome-session --debug --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME debug

After restarting LightDM (service lightdm restart), I clicked the GNOME logo next to the password field and chose GNOME debug before trying to login again.

This time, I had a lot more information in ~/.xsession-errors:

gnome-session[12878]: DEBUG(+): GsmAutostartApp: starting gnome-shell.desktop: command=/usr/bin/gnome-shell startup-id=10d41f1f5c81914ec61471971137183000000128780000
gnome-session[12878]: DEBUG(+): GsmAutostartApp: started pid:13121
...
/usr/bin/gnome-shell: error while loading shared libraries: libwayland-egl.so.1: cannot open shared object file: No such file or directory
gnome-session[12878]: DEBUG(+): GsmAutostartApp: (pid:13121) done (status:127)
gnome-session[12878]: WARNING: App 'gnome-shell.desktop' exited with code 127

which suggests that gnome-shell won't start because of a missing library.
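
A quick way to confirm that directly is to ask the dynamic linker which of gnome-shell's dependencies can't be resolved:

# Any "not found" entries are libraries gnome-shell needs but can't load.
ldd /usr/bin/gnome-shell | grep "not found"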

Finding the missing library

To find the missing library, I used the apt-file command:

apt-file update
apt-file search libwayland-egl.so.1

and found that this file is provided by the following packages:

  • libhybris
  • libwayland-egl1-mesa
  • libwayland-egl1-mesa-dbg
  • libwayland-egl1-mesa-lts-utopic
  • libwayland-egl1-mesa-lts-vivid
  • libwayland-egl1-mesa-lts-wily
  • libwayland-egl1-mesa-lts-xenial

Since I installed the LTS Enablement stack, the package I needed to install to fix this was libwayland-egl1-mesa-lts-xenial.

I filed a bug for this on Launchpad.

August 24, 2016

Small fix for AMP WordPress plugin

If you use the AMP plugin for WordPress to make AMP (Accelerated Mobile Pages) versions of your posts and have trouble validating them with the AMP validator, you may try this fix for the AMP plugin to make those pages valid.

August 20, 2016

Basics of Backups

I’ve recently had some discussions about backups with people who aren’t computer experts, so I decided to blog about this for the benefit of everyone. Note that this post will deliberately avoid issues that require great knowledge of computers. I have written other posts that will benefit experts.

Essential Requirements

Everything that matters must be stored in at least 3 places. Every storage device will die eventually. Every backup will die eventually. If you have 2 backups then you are covered for the primary storage failing and the first backup failing. Note that I’m not saying “only have 2 backups” (I have many more) but 2 is the bare minimum.

Backups must be in multiple places. One way of losing data is if your house burns down; if that happens, all backup devices stored there will be destroyed. You must have backups off-site. A good option is to have backup devices stored by trusted people (friends and relatives are often good options).

It must not be possible for one event to wipe out all backups. Some people use “cloud” backups; there are many ways of doing this with Dropbox, Google Drive, etc. Some of these even have free options for small amounts of storage, for example Google Drive appears to have 15G of free storage, which is more than enough for all your best photos and all your financial records. The downside to cloud backups is that a computer criminal who gets access to your PC can wipe it and the backups. Cloud backup can be a part of a sensible backup strategy but it can’t be relied on (also see the paragraph about having at least 2 backups).

Backup Devices

USB flash “sticks” are cheap and easy to use. The quality of some of those devices isn’t too good, but the low price and small size means that you can buy more of them. It would be quite easy to buy 10 USB sticks for multiple copies of data.

Stores that sell office-supplies sell USB attached hard drives which are quite affordable now. It’s easy to buy a couple of those for backup use.

The cheapest option for backing up moderate amounts of data is to get a USB-SATA device. This connects to the PC by USB and has a cradle to accept a SATA hard drive. That allows you to buy cheap SATA disks for backups and even use older disks as backups.

When choosing backup devices, consider the environment that they will be stored in. If you want to store a backup in the glove box of your car (which could be good when travelling) then an SD card or USB flash device would be a good choice because they are resistant to physical damage. Note that if you have no other options for off-site storage then the glove box of your car will probably survive if your house burns down.

Multiple Backups

It’s not uncommon for data corruption or mistakes to be discovered some time after they happen. Also in recent times there is a variety of malware that encrypts files and then demands a ransom payment for the decryption key.

To address these problems you should have older backups stored. It’s not uncommon in a corporate environment to have backups every day stored for a week, backups every week stored for a month, and monthly backups stored for some years.

For a home use scenario it’s more common to make backups every week or so and take backups to store off-site when it’s convenient.

Offsite Backups

One common form of off-site backup is to store backup devices at work. If you work in an office then you will probably have some space in a desk drawer for personal items. If you don’t work in an office but have a locker at work then that’s good for storage too, if there is high humidity then SD cards will survive better than hard drives. Make sure that you encrypt all data you store in such places or make sure that it’s not the secret data!

Banks have a variety of ways of storing items. Bank safe deposit boxes can be used for anything that fits and can fit hard drives. If you have a mortgage your bank might give you free storage of “papers” as part of the service (Commonwealth Bank of Australia used to offer that). A few USB sticks or SD cards in an envelope could fit the “papers” criteria. An accounting firm may also store documents for free for you.

If you put a backup on USB or SD storage in your wallet then that can also be a good offsite backup. For most people losing data from disk is more common than losing their wallet.

A modern mobile phone can also be used for backing up data while travelling. For a few years I’ve been doing that. But note that you have to encrypt all data stored on a phone so an attacker who compromises your phone can’t steal it. In a typical phone configuration the mass storage area is much less protected than application data. Also note that customs and border control agents for some countries can compel you to provide the keys for encrypted data.

A friend suggested burying a backup device in a sealed plastic container filled with desiccant. That would survive your house burning down and in theory should work. I don’t know of anyone who’s tried it.

Testing

On occasion you should try to read the data from your backups and compare it to the original data. It sometimes happens that backups are discovered to be useless after years of operation.

Secret Data

Before starting a backup it’s worth considering which of the data is secret and which isn’t. Data that is secret needs to be treated differently and a mixture of secret and less secret data needs to be treated as if it’s all secret.

One category of secret data is financial data. If your accountant provides document storage then they can store that, generally your accountant will have all of your secret financial data anyway.

Passwords need to be kept secret but they are also very small. So making a written or printed copy of the passwords is part of a good backup strategy. There are options for backing up paper that don’t apply to data.

One category of data that is not secret is photos. Photos of holidays, friends, etc are generally not that secret and they can also comprise a large portion of the data volume that needs to be backed up. Apparently some people have a backup strategy for such photos that involves downloading from Facebook to restore; that will help with some problems but it’s not adequate overall. But any data that is on Facebook isn’t that secret and can be stored off-site without encryption.

Backup Corruption

With the amounts of data that are used nowadays the probability of data corruption is increasing. If you use any compression program with the data that is backed up (even data that can’t be compressed such as JPEGs) then errors will be detected when you extract the data. So if you have backup ZIP files on 2 hard drives and one of them gets corrupt you will easily be able to determine which one has the correct data.
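
For example, with ZIP files the archive can be verified without extracting anything (backup.zip here is just a placeholder name):

# Test the archive's checksums; errors indicate a corrupt copy.
unzip -t backup.zip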

Failing Systems – update 2016-08-22

When a system starts to fail it may limp along for years and work reasonably well, or it may totally fail soon. At the first sign of trouble you should immediately make a full backup to separate media. Use different media from your regular backups in case the data is corrupt, so you don’t overwrite good backups with bad ones.

One traditional sign of problems has been hard drives that make unusual sounds. Modern drives are fairly quiet so this might not be loud enough to notice. Another sign is hard drives that take unusually large amounts of time to read data. If a drive has some problems it might read a sector hundreds or even thousands of times until it gets the data which dramatically reduces system performance. There are lots of other performance problems that can occur (system overheating, software misconfiguration, and others), most of which are correlated with potential data loss.

A modern SSD storage device (as used in a lot of recent laptops) doesn’t tend to go slow when it nears the end of its life. It is more likely to just randomly fail entirely and then work again after a reboot. There are many causes of systems randomly hanging or crashing (of which overheating is common), but they are all correlated with data loss so a good backup is a good idea.

When in doubt make a backup.

Any Suggestions?

If you have any other ideas for backups by typical home users then please leave a comment. Don’t comment on expert issues though, I have other posts for that.

August 19, 2016

Speaking in August 2016

I know this is a tad late, but there have been some changes, etc. recently, so apologies for the delay of this post. I still hope to meet many of you to chat about MySQL/Percona Server/MariaDB Server, MongoDB, open source databases, and open source in general in the remainder of August 2016.

  • LinuxCon+ContainerCon North America – August 22-24 2016 – Westin Harbour Castle, Toronto, Canada – I’ll be speaking about lessons one can learn from database failures and enjoying the spectacle that is the 25th anniversary of Linux!
  • Chicago MySQL Meetup Group – August 29 2016 – Vivid Seats, Chicago, IL – more lessons from database failures here, and I’m looking forward to meeting users, etc. in the Chicago area

While not speaking, Vadim Tkachenko and I will be present at the @scale conference. I really enjoyed my time there previously, and if you get an invite, it’s truly a great place to learn and network.

August 17, 2016

Getting In Sync

Since at least v1.0.0 Petitboot has used device-mapper snapshots to avoid mounting block devices directly. Primarily this is so Petitboot can mount disks and potentially perform filesystem recovery without worrying about messing it up and corrupting a host's boot partition - all changes happen to the snapshot in memory without affecting the actual device.

This of course gets in the way if you actually do want to make changes to a block device. Petitboot will allow certain bootloader scripts to make changes to disks if configured (eg, grubenv updates), but if you manually make changes you would need to know the special sequence of dmsetup commands to merge the snapshots back to disk. This is particularly annoying if you're trying to copy logs to a USB device!

Depending on how recent a version of Petitboot you're running, there are two ways of making sure your changes persist:

Before v1.2.2

If you really need to save changes from within Petitboot, the most straightforward way is to disable snapshots. Drop to the shell and enter

nvram --update-config petitboot,snapshots?=false
reboot

Once you have rebooted you can remount the device as read-write and modify it as normal.
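
For example (assuming your device is sda2, as in the next section):

mount -o remount,rw /var/petitboot/mnt/dev/sda2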

After v1.2.2

To make this easier while keeping the benefit of snapshots, v1.2.2 introduces a new user-event that will merge snapshots on demand. For example:

mount -o remount,rw /var/petitboot/mnt/dev/sda2
cp /var/log/messages /var/petitboot/mnt/dev/sda2/
pb-event sync@sda2

After calling pb-event sync@yourdevice, Petitboot will remount the device back to read-only and merge the current snapshot differences back to disk. You can also run pb-event sync@all to sync all existing snapshots if desired.

What’s next

I received an overwhelming number of comments when I said I was leaving MariaDB Corporation. Thank you – it is really nice to be appreciated.

I haven’t left the MySQL ecosystem. In fact, I’ve joined Percona as their Chief Evangelist in the CTO Office, and I’m going to focus on the MySQL/Percona Server/MariaDB Server ecosystem, while also looking at MongoDB and other solutions that are good for Percona customers. Thanks again for the overwhelming response on the various social media channels, and via emails, calls, etc.

Here’s to a great time at Percona to focus on open source databases and solutions around them!

My first blog post on the Percona blog is up: I’m Colin Charles, and I’m here to evangelize open source databases!, as is the press release.

August 15, 2016

Neo-Colonialism and Neo-Liberalism, Intelligence Analysis, and More

Watch a lot of media outlets and over and over again you hear the terms 'Neocolonialism' and 'Free Trade'. Until fairly recently, I wasn't entirely aware of what exactly this meant and how it came to be. As indicated in my last post, up until a certain point wealth was distributed rather evenly throughout the world. Then 'colonialism' happened and the wealth gap between

Changing of the guard

I posted a message to the internal mailing lists at MariaDB Corporation. I have departed the company (I resigned), but definitely not the community. Thank you all for the privilege of serving the large MariaDB Server community of users, all 12 million+ of you. See you on the mailing lists, IRC, and at the developer meetings.

The Japanese have a saying, “leave when the cherry blossoms are full”.

I was one of the earliest employees of this post-merge company, and was on the founding team of MariaDB Server, having been around since 2009. I didn’t make the first company meeting in Mallorca (August 2009) due to chickenpox, but I’ve been to every one since.

We made the first stable MariaDB Server 5.1 release in February 2010. Our first Linux distribution release was in openSUSE. Our then tagline: MariaDB: Community Developed. Feature Enhanced. Backward Compatible.

In 2013, we had to make a decision: merge with our sister company SkySQL, or take on investment of equal value to compete; the majority of us chose to work with our family.

Our big deal was releasing MariaDB Server 5.5 – Wikipedia migrated, Google wanted in, and Red Hat pushed us into the enterprise space.

Besides managing distributions and other community related activities (and in the pre-SkySQL days Rasmus and I did everything from marketing to NRE contract management, down to even doing press releases – you wear many hats when you’re in a startup of less than 20 people), in this time I’ve written over 220 blog posts, spoken at over 130 events (an average of 18 per year), and given over 250 talks, tutorials and keynotes in total. I’ve had numerous face-to-face meetings with customers, figuring out what NRE they may need and providing them with solutions. I’ve done numerous internal presentations, with audiences varying from the professional services & support teams to the management team. I’ve even technically reviewed many books, including one of the best introductions by our colleague, Learning MySQL & MariaDB.

It’s been a good run. Seven years. Countless flights. Too many weekends away working for the cause. A whole bunch of great meetings with many of you. I’ve seen the company go from bootstrap, to merger, to Series A and Series B.

It’s been a true privilege to work with many of you. I have the utmost respect for Team MariaDB (and of course my SkySQL brethren!). I’m going to miss many of you. The good thing is that MariaDB Server is an open source project, and I’m not going to leave the project or #maria. I in fact hope to continue speaking and working on MariaDB Server.

I hope to remain connected to many of you.

Thank you for this great privilege.

Kind Regards,
Colin Charles

[mtb/events] Razorback Ultra - Spectacular run in the Victorian Alps


Alex and another Canberran on the Razorback (fullsize)
Alex and I signed up for the Razorback Ultra because it is in an amazing part of the country and sounded like a fun event to go do. I was heading into it a week after Six Foot; however, this is all just training for UTA100, so why not. All I can say is every trail runner should do this event, it is amazing.

The atmosphere at the race is laid back and it is all about heading up into the mountains and enjoying yourself. I will be back for sure.

My words and photos are online in my Razorback Ultra 2016 gallery. This is truly one of the best runs in Australia.

[mtb/events] Geoquest 2016 - Port Mac again with Resultz


My Mirage 730 - Matilda, having a rest while we ran around (fullsize)
I have fun at Geoquest and love doing the event; however, I have been a bit iffy about trying to organise a team for a few years. As many say, one of the hardest things in the event is getting 4 people to the start line ready to go.

This year my attitude was similar to last year's: if I was asked to join a team I would probably say yes. I was asked and thus ended up racing with a bunch of fun guys under the banner of Michael's company, Resultz Racing. Another great weekend on the mid-north NSW coast with some amazing scenery (the two rogaines were highlights, especially the punchbowl waterfall on the second one).

My words and photos are online in my Geoquest 2016 gallery. Always good fun and a nice escape from winter.

[mtb] The lots of vert lunch run, reasons to live in Canberra


Great view of the lake from the single track on the steep side of BM (fullsize)
This run, which is so easy to get out for at lunch, is a great quality climbing session and shows off Canberra beautifully. What fun.

Photos and some words are online on my Lots of vert lunch run page.

[various] Vote Greens Maybe

I have had the parodies of the song "Call Me Maybe" in my head again today (the Orica GreenEDGE one was brilliant and there are some inspired versions out there). This had me thinking of different lyrics, maybe something to suggest people vote Green this Saturday for a better and fairer Australia.

Vote Green Maybe
I threw a wish in the well
For a better Australia today
I looked at our leaders today
And now they're in our way

I'll not trade my freedom for them
All our dollars and cents to the rich
I wasn't looking for this
But now they're in our way

Our democracy is squandered
Broken promises
Lies everywhere
Hot nights
Winds are blowing
Freak weather events, climate change

Hey I get to vote soon
And this isn't crazy
But here's my idea
So vote Greens maybe
It's hard to look at our future 
But here's my idea
So vote Greens maybe

Hey I get to vote soon
And this isn't crazy
But here's my idea
So vote Greens maybe
And all the major parties
Try to shut us up
But here's my idea
So vote Greens maybe

Liberal and Labor think they should rule
I take no time saying they fail
They gave us nothing at all
And now they're in our way

I beg for a fairer Australia
At first sight our policies are real
I didn't know if you read them
But it's the Greens way  

Your vote can fix things
Healthier people
Childrens education
Fairer policies
A change is coming
Where you think you're voting, Greens?

Hey I get to vote soon
And this isn't crazy
But here's my idea
So vote Greens maybe
It's worth a look to a brighter future
But here's my idea
So vote Greens maybe

Before this change in our lives
I see children in detention
I see humans fleeing horrors
I see them locked up and mistreated
Before this change in our lives
I see a way to fix this
And you should know that
Voting Green can help fix this, Green, Green, Green...

It's bright to look at our future 
But here's my idea
So vote Greens maybe

Hey I get to vote soon
And this isn't crazy
But here's my idea
So vote Greens maybe
And all the major parties
Try to shut us up
But here's my idea
So vote Greens maybe

Before this change in our lives
I see children in detention
I see humans fleeing horrors
I see them locked up and mistreated
Before this change in our lives
I see a way to fix this
And you should know that
So vote Green Saturday
Call Me Maybe (Carly Rae Jepsen)
I threw a wish in the well
Don't ask me I'll never tell
I looked at you as it fell
And now you're in my way

I trade my soul for a wish
Pennies and dimes for a kiss
I wasn't looking for this
But now you're in my way

Your stare was holding
Ripped jeans
Skin was showing
Hot night
Wind was blowing
Where you think you're going baby?

Hey I just met you
And this is crazy
But here's my number
So call me maybe
It's hard to look right at you baby
But here's my number
So call me maybe

Hey I just met you
And this is crazy
But here's my number
So call me maybe
And all the other boys
Try to chase me
But here's my number
So call me maybe

You took your time with the call
I took no time with the fall
You gave me nothing at all
But still you're in my way

I beg and borrow and steal
At first sight and it's real
I didn't know I would feel it
But it's in my way

Your stare was holding
Ripped jeans
Skin was showing
Hot night
Wind was blowing
Where you think you're going baby?

Hey I just met you
And this is crazy
But here's my number
So call me maybe
It's hard to look right at you baby
But here's my number
So call me maybe

Before you came into my life
I missed you so bad
I missed you so bad
I missed you so so bad
Before you came into my life
I missed you so bad
And you should know that
I missed you so so bad, bad, bad, bad....

It's hard to look right at you baby
But here's my number
So call me maybe

Hey I just met you
And this is crazy
But here's my number
So call me maybe
And all the other boys
Try to chase me
But here's my number
So call me maybe

Before you came into my life
I missed you so bad
I missed you so bad
I missed you so so bad
Before you came into my life
I missed you so bad
And you should know that
So call me, maybe

[various] Safety Sewing


No reflections (fullsize)

None outside either (fullsize)

Better when full/open (fullsize)

Also better when closed, much brightness (fullsize)
For over a year I have been planning to do this. My Crumpler bag (the Complete Seed), which I bought in 2008, has been my primary commuting and daily use bag since that time, and as much as I love the bag there is one major problem: no reflective marking anywhere on the bag.

Some newer crumplers have reflective strips and other such features and if I really wanted to spend big I could get them to do a custom bag with whatever colours and reflective bits I can dream up. There are also a number of other brands that do a courier bag with reflective bits or even entire panels or similar that are reflective. However this is the bag I own and it is still perfectly good for daily use so no need to go buy something new.

So I got a $4 sewing kit I had sitting around in the house and some great 3M reflective tape material, and finally spent the time to fix this missing feature of the bag. After breaking 3 needles and spending a while getting it done, I now have a much safer bag, especially for commuting home on these dark winter nights. The sewing work is a bit messy, however it is functional, which is all that matters to me.

August 14, 2016

The rise and fall of the Gopher protocol | MinnPost

Twenty-five years ago, a small band of programmers from the University of Minnesota ruled the internet. And then they didn’t.

The committee meeting where the team first presented the Gopher protocol was a disaster, “literally the worst meeting I’ve ever seen,” says Alberti. “I still remember a woman in pumps jumping up and down and shouting, ‘You can’t do that!’ ”

Among the team’s offenses: Gopher didn’t use a mainframe computer and its server-client setup empowered anyone with a PC, not a central authority. While it did everything the U (University of Minnesota) required and then some, to the committee it felt like a middle finger. “You’re not supposed to have written this!” Alberti says of the group’s reaction. “This is some lark, never do this again!” The Gopher team was forbidden from further work on the protocol.

Read the full article (a good story of Gopher and WWW history!) at https://www.minnpost.com/business/2016/08/rise-and-fall-gopher-protocol

Have It Your Way: Maximizing Drive-Thru Contributions - PyConAu 2016

by VM (Vicky) Brasseur.

Slides.

Vicky talked about the importance of non-committing contributors, but the primary focus was on committing contributors due to time limits.

Covered the different types of drive-thru contributors and why they show up.

  • Scratching an itch.
  • Unwilling / Unable to find an alternative to this project
  • They like you.

Why do they leave?

  • Itch has been scratched.
  • Not enough time.
  • No longer using the project.
  • Often a high barrier to contribution.
  • Absence of appreciation.
  • Unpleasant people.
  • Inappropriate attribution.

Disadvantages

  • It takes more time to help them land patches
    • Reluctance to help them "as they're not community".

It appears that many projects see community as the foundation, but Vicky contended it is contributors.

More drive-thru contributors are a sign of a healthy project and can lead to a larger community.

Advantages:

  • Have better processes in place.
  • Faster patch and release times.
  • More eyes and shallower bugs
  • Better community, code and project reputation.

Leads to a healthier overall project.

Methods for Maximising drive-thru contributions:

Documentation!

  • give your project super powers.
  • Scales!
  • Ensures efficient and successful contributions.
  • Minimises questions.
  • Standardises processes.
  • Vicky provided a documentation quick start guide.

Mentoring!

  • Code review.
  • "Office hours" for communication.
  • Hackfests.
  • New contributor events.

Process improvements!

  • Tag starter bugs
  • Contributor SLA
  • Use containers / VM of dev environment

Culture!

  • Value contributions and contributors
  • Culture of documentation
  • Default to assistance

Outreach!

  • Gratitude
  • Recognition
  • Follow-up!

Institute the "No Asshole" rule.

PyConAu 2016

Keynote - Python All the Things - PyConAu 2016

by Russell Keith-Magee.

Keith-Magee spoke about porting Python to mobile devices. CPython being written in C enables it to leverage the supported platforms of the C language and be compiled for a wide range of platforms.

There was a deep dive into the options and pitfalls when selecting a method for implementing Python on Android phones.

Ouroboros is a pure Python implementation of the Python standard library.

Most of the tools discussed are at an early stage of development.

Why?

  • Being able to run on new or mobile platforms addresses an existential threat.
  • The threat also presents an opportunity to grow, broaden and improve Python.
  • Wants Python to be a "first contact" language, like (Visual) Basic once was.
  • Unlike Basic, Python also supports very complex concepts and operations.
  • Presents an opportunity to encourage broader usage by otherwise passive users.
  • Technical superiority is rarely enough to guarantee success.
  • A breadth of technical domains is required for Python to become this choice.
  • Technical problems are the easiest to solve.
  • The most difficult problems are social and community ones, and require more attention.

Keith-Magee will be putting his focus into BeeWare and related projects.

Fortune favours the prepared mind

(Louis Pasteur)

PyConAu 2016

August 13, 2016

SSD and M.2

The Need for Speed

One of my clients has an important server running ZFS. They need a filesystem that detects corruption: while regular RAID is good for the case where a disk gives read errors, it doesn’t cover the case where a disk returns bad data and claims it to be good (which I’ve witnessed in BTRFS and ZFS systems). BTRFS is good for the case of a single disk or a RAID-1 array, but I believe that the RAID-5 code for BTRFS is not sufficiently tested for business use. ZFS doesn’t perform very well because the checksums on data and metadata require multiple writes for a single change, which also causes more fragmentation. This isn’t a criticism of ZFS, it’s just an engineering trade-off for the data integrity features.

ZFS supports read-caching on a SSD (the L2ARC) and write-back caching (ZIL). To get the best benefit of L2ARC and ZIL you need fast SSD storage. So now with my client investigating 10 gigabit Ethernet I have to investigate SSD.
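
For reference, attaching those caches to an existing pool is a one-liner each; this is only a sketch, with the pool name and partitions as placeholders:

# Mirrored ZIL (SLOG) devices plus an L2ARC cache device; "tank" and the
# NVMe partition names are placeholders for your own pool and devices.
zpool add tank log mirror /dev/nvme0n1p1 /dev/nvme1n1p1
zpool add tank cache /dev/nvme0n1p2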

For some time SSDs have been in the same price range as hard drives, starting at prices well below $100. Now there are some SSDs on sale for as little as $50. One issue with SATA for server use is that SATA 3.0 (which was released in 2009 and is most commonly used nowadays) is limited to 600MB/s. That isn’t nearly adequate if you want to serve files over 10 gigabit Ethernet, which can carry roughly 1.2GB/s. SATA 3.2 was released in 2013 and supports 1969MB/s, but I doubt that there’s much hardware supporting that. See the SATA Wikipedia page for more information.

Another problem with SATA is getting the devices physically installed. My client has a new Dell server that has plenty of spare PCIe slots but no spare SATA connectors or SATA power connectors. I could have removed the DVD drive (as I did for some tests before deploying the server) but that’s ugly and only gives 1 device while you need 2 devices in a RAID-1 configuration for ZIL.

M.2

M.2 is a new standard for expansion cards; it supports SATA and PCIe interfaces (and USB, but that isn’t useful at this time). The Wikipedia page for M.2 is interesting to read for background knowledge but isn’t helpful if you are about to buy hardware.

The first M.2 card I bought had a SATA interface, but then I was unable to find a local company that could sell me a SATA M.2 host adapter. So I bought an M.2 to SATA adapter which made it work like a regular 2.5″ SATA device. That’s working well in one of my home PCs but isn’t what I wanted. Apparently systems that have an M.2 socket on the motherboard will usually take either SATA or NVMe devices.

The most important thing I learned is to buy the SSD storage device and the host adapter from the same place; then you are entitled to a refund if they don’t work together.

The alternative to the SATA (AHCI) interface on an M.2 device is known as NVMe (Non-Volatile Memory Express), see the Wikipedia page for NVMe for details. NVMe not only gives a higher throughput but it gives more command queues and more commands per queue which should give significant performance benefits for a device with multiple banks of NVRAM. This is what you want for server use.

Eventually I got a M.2 NVMe device and a PCIe card for it. A quick test showed sustained transfer speeds of around 1500MB/s which should permit saturating a 10 gigabit Ethernet link in some situations.
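
One simple way to run that sort of quick sequential read test is with dd, bypassing the page cache so the number reflects the device rather than RAM (a sketch; the device name matches the ones mentioned below):

# Read 4GB sequentially from the NVMe device with O_DIRECT (cache bypassed).
dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 iflag=direct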

One annoyance is that the M.2 devices have a different naming convention to regular hard drives. I have devices /dev/nvme0n1 and /dev/nvme1n1, apparently that is to support multiple storage devices on one NVMe interface. Partitions have device names like /dev/nvme0n1p1 and /dev/nvme0n1p2.

Power Use

I recently upgraded my Thinkpad T420 from a 320G hard drive to a 500G SSD which made it faster but also surprisingly quieter – you never realise how noisy hard drives are until they go away. My laptop seemed to feel cooler, but that might be my imagination.

The i5-2520M CPU in my Thinkpad has a TDP of 35W but uses a lot less than that as I almost never have 4 cores in use. The z7k320 320G hard drive is listed as having 0.8W “low power idle” and 1.8W for read-write, maybe Linux wasn’t putting it in the “low power idle” mode. The Samsung 500G 850 EVO SSD is listed as taking 0.4W when idle and up to 3.5W when active (which would not be sustained for long on a laptop). If my CPU is taking an average of 10W then replacing the hard drive with a SSD might have reduced the power use of the non-screen part by 10%, but I doubt that I could notice such a small difference.

I’ve read some articles about power use on the net which can be summarised as “SSDs can draw more power than laptop hard drives but if you do the same amount of work then the SSD will be idle most of the time and not use much power”.

I wonder if the SSD being slightly thicker than the HDD it replaced has affected the airflow inside my Thinkpad.

From reading some of the reviews it seems that there are M.2 storage devices drawing over 7W! That’s going to create some cooling issues on desktop PCs but should be OK in a server. For laptop use they will hopefully release M.2 devices designed for low power consumption.

The Future

M.2 is an ideal format for laptops due to being much smaller and lighter than 2.5″ SSDs. Spinning media doesn’t belong in a modern laptop and using a SATA SSD is an ugly hack when compared to M.2 support on the motherboard.

Intel has released the X99 chipset with M.2 support (see the Wikipedia page for Intel X99) so it should be commonly available on desktops in the near future. For most desktop systems an M.2 device would provide all the storage that is needed (or 2*M.2 in a RAID-1 configuration for a workstation). That would give all the benefits of reduced noise and increased performance that regular SSDs provide, but with better performance and fewer cables inside the PC.

For a corporate desktop PC I think the ideal design would have only M.2 internal storage and no support for 3.5″ disks or a DVD drive. That would allow a design that is much smaller than a current SFF PC.

Playing with Shifter Part 2 – converted Docker containers inside Slurm

This is continuing on from my previous blog about NERSC’s Shifter which lets you safely use Docker containers in an HPC environment.

Getting Shifter to work in Slurm is pretty easy: it includes a plugin that you must install and tell Slurm about. My test config was just:

required /usr/lib64/shifter/shifter_slurm.so shifter_config=/etc/shifter/udiRoot.conf

as I was installing by building RPMs (our preferred method is to install the plugin into our shared filesystem for the cluster so we don’t need to have it in the RAM disk of our diskless nodes). Once that is done you can add the Shifter arguments to your Slurm batch script and then just call shifter inside it to run a process, for instance:

#!/bin/bash

#SBATCH -p debug
#SBATCH --image=debian:wheezy

shifter cat /etc/issue

results in the following on our RHEL compute nodes:

[samuel@bruce Shifter]$ cat slurm-1734069.out 
Debian GNU/Linux 7 \n \l

simply demonstrating that it works. The advantage of using the plugin and this way of specifying the images is that the plugin will prep the container for us at the start of the batch job and keep it around until it ends so you can keep running commands in your script inside the container without the overhead of having to create/destroy it each time. If you need to run something in a different image you just pass the --image option to shifter and then it will need to set up & tear down that container, but the one you specified for your batch job is still there.
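
For instance, something along these lines inside the same batch script would run a one-off command in another image (a sketch using a different Debian tag; that container gets set up and torn down just for this command while the job’s own image stays available):

# One-off command in a different image from the one the batch job specified.
shifter --image=debian:jessie cat /etc/issue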

That’s great for single CPU jobs, but what about parallel applications? Well, it turns out that’s easy too – you just request the configuration you need and slap srun in front of the shifter command. You can even run MPI applications this way successfully. I grabbed the dispel4py/docker.openmpi Docker container with shifterimg pull dispel4py/docker.openmpi and tried its Python version of the MPI hello world program:

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=dispel4py/docker.openmpi
#SBATCH --ntasks=3
#SBATCH --tasks-per-node=1

shifter cat /etc/issue

srun shifter python /home/tutorial/mpi4py_benchmarks/helloworld.py

This prints the MPI rank to demonstrate that the MPI wire-up was successful. I forced it to run the tasks on separate nodes and print the hostnames to show that it’s communicating over a network, not via shared memory on the same node. But the output bemused me a little:

[samuel@bruce Python]$ cat slurm-1734135.out
Ubuntu 14.04.4 LTS \n \l

libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],0]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: bruce001

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
Hello, World! I am process 0 of 3 on bruce001.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],1]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: bruce002

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
Hello, World! I am process 1 of 3 on bruce002.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],2]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: bruce003

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
Hello, World! I am process 2 of 3 on bruce003.

It successfully demonstrates that it is using an Ubuntu container on 3 nodes, but the warnings are triggered because Open-MPI in Ubuntu is built with Infiniband support and it is detecting the presence of the IB cards on the host nodes. This is because Shifter is (as designed) exposing the system’s /sys directory to the container. The problem is that this container doesn’t include the Mellanox user-space library needed to make use of the IB cards, so you get warnings that they aren’t working and that it will fall back to a different mechanism (in this case TCP/IP over gigabit Ethernet).

Open-MPI allows you to specify what transports to use, so adding one line to my batch script:

export OMPI_MCA_btl=tcp,self,sm

cleans up the output a lot:

Ubuntu 14.04.4 LTS \n \l

Hello, World! I am process 0 of 3 on bruce001.
Hello, World! I am process 2 of 3 on bruce003.
Hello, World! I am process 1 of 3 on bruce002.

This also raises the question: what does this do for latency? The image contains a Python version of the OSU latency testing program, which uses different message sizes between 2 MPI ranks to provide a histogram of performance. Running this over TCP/IP is trivial with the dispel4py/docker.openmpi container, but of course it’s lacking the Mellanox library I need, and as the whole point of Shifter is security I can’t get root access inside the container to install the package. Fortunately the author of dispel4py/docker.openmpi has published their implementation on GitHub, so I forked their repo, signed up for Docker Hub and pushed a version which simply adds the libmlx4-1 package I needed.
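
The change itself is tiny: the forked Dockerfile simply builds FROM dispel4py/docker.openmpi and adds a RUN step installing libmlx4-1. Rebuilding and pushing the image is then roughly this (a sketch; the tag matches the image name used in the batch scripts below):

# Build the forked image (whose Dockerfile adds the libmlx4-1 package on top
# of dispel4py/docker.openmpi) and push it to Docker Hub.
docker build -t chrissamuel/docker.openmpi:latest .
docker push chrissamuel/docker.openmpi:latest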

Running the test over TCP/IP is simply a matter of submitting this batch script which forces it onto 2 separate nodes:

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=chrissamuel/docker.openmpi:latest
#SBATCH --ntasks=2
#SBATCH --tasks-per-node=1

export OMPI_MCA_btl=tcp,self,sm

srun shifter python /home/tutorial/mpi4py_benchmarks/osu_latency.py

giving these latency results:

[samuel@bruce MPI]$ cat slurm-1734137.out
# MPI Latency Test
# Size [B]        Latency [us]
0                        16.19
1                        16.47
2                        16.48
4                        16.55
8                        16.61
16                       16.65
32                       16.80
64                       17.19
128                      17.90
256                      19.28
512                      22.04
1024                     27.36
2048                     64.47
4096                    117.28
8192                    120.06
16384                   145.21
32768                   215.76
65536                   465.22
131072                  926.08
262144                 1509.51
524288                 2563.54
1048576                5081.11
2097152                9604.10
4194304               18651.98

To run that same test over Infiniband I just modified the export in the batch script to force it to use IB (and thus fail if it couldn’t talk between the two nodes):

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=chrissamuel/docker.openmpi:latest
#SBATCH --ntasks=2
#SBATCH --tasks-per-node=1

export OMPI_MCA_btl=openib,self,sm

srun shifter python /home/tutorial/mpi4py_benchmarks/osu_latency.py

which then gave these latency numbers:

[samuel@bruce MPI]$ cat slurm-1734138.out
# MPI Latency Test
# Size [B]        Latency [us]
0                         2.52
1                         2.71
2                         2.72
4                         2.72
8                         2.74
16                        2.76
32                        2.73
64                        2.90
128                       4.03
256                       4.23
512                       4.53
1024                      5.11
2048                      6.30
4096                      7.29
8192                      9.43
16384                    19.73
32768                    29.15
65536                    49.08
131072                   75.19
262144                  123.94
524288                  218.21
1048576                 565.15
2097152                 811.88
4194304                1619.22

So you can see that’s basically an order of magnitude improvement in latency using Infiniband compared to TCP/IP over gigabit Ethernet (which is what you’d expect).

Because there’s no virtualisation going on here there is no extra penalty to pay when doing this, no need to configure any fancy device pass through, no loss of any CPU MSR access, and so I’d argue that Shifter makes Docker containers way more useful for HPC than virtualisation or even Docker itself for the majority of use cases.

Am I excited about Shifter? Yup! The potential to allow users to build an application stack themselves, right down to the OS libraries, and (with a little careful thought) have something that could get native interconnect performance, is fantastic. Throw in the complexities of dealing with conflicting dependencies between Python modules, system libraries, bioinformatics tools, etc., and the need to provide simple methods for handling these, and the advantages seem clear.

So the plan is to roll this out into production at VLSCI in the near future. Fingers crossed! 🙂

This item originally posted here:

Playing with Shifter Part 2 – converted Docker containers inside Slurm

Microsoft Chicago – retro in qemu!

So, way back when (sometime in the early 1990s) there was Windows 3.11 and times were… for Workgroups. There was this Windows NT thing, this OS/2 thing and something brewing at Microsoft to attempt to make the PC less… well, bloody awful for a user.

Again, thanks to abandonware sites, it’s possible now to try out very early builds of Microsoft Chicago – what would become Windows 95. With the earliest build I could find (build 56), I set to work. The installer worked from an existing Windows 3.11 install.

I ended up using full system emulation rather than normal qemu later on, as things, well, booted in full emulation and didn’t otherwise (I was building from qemu master… so it could have actually been a bug fix).
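
I take “full system emulation rather than normal qemu” to mean plain TCG emulation instead of KVM acceleration; assuming that, it’s just a matter of leaving off -enable-kvm, roughly like this (the image name and memory size are placeholders, not from the original setup):

# Without -enable-kvm, qemu falls back to full (TCG) system emulation.
qemu-system-i386 -m 32 -hda chicago.img -vga cirrus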

chicago-launch-setup
Mmmm… Windows 3.11 File Manager. The fact that I can still use this is a testament to something, possibly too much time with Windows 3.11.

chicago-welcome-setup
chicago-setup
Unfortunately, I didn’t have the Plus Pack components (remember Microsoft Plus!? Yes, the exclamation mark was part of the product name – it was the 1990s) and I’m not sure they would even have existed back then (but the installer did ask).

chicago-install-dir
Obviously if you were testing Chicago, you probably did not want to upgrade your working Windows install if this was a computer you at all cared about. I installed into C:\CHICAGO because, well – how could I not!

chicago-installing
The installation went fairly quickly – after all, this isn’t a real 386 PC and it doesn’t have disks of the era; everything was likely just in the Linux page cache.

chicago-install-network
I didn’t really try to get networking going – it may not have been fully baked in this build, or maybe just not really baked in this copy of it – but the installer there looks a bit familiar, though not like the Windows 95 one; maybe more like NT 3.1/3.51?

But at the end… it installed and it was time to reboot into Chicago:
chicago-boot
So… this is what Windows 95 looked like during development back in July 1993 – nearly exactly two years before release. There’s some Windows logos that appear/disappear around the place, which are arguably much cooler than the eventual Windows 95 boot screen animation. The first boot experience was kind of interesting too:
Screenshot from 2016-08-07 20-57-00
Luckily, there was nothing restricting the beta site ID or anything. I just entered the number 1, and was then told it needed to be 6 digits – so beta site ID 123456 it is! The desktop is obviously different both from Windows 3.x and what ended up in Windows 95.

Screenshot from 2016-08-07 20-57-48
Those who remember Windows 3.1 may remember Dr Watson as an actual thing you could run, but it was part of the whole diagnostics infrastructure in Windows, and here (as you can see) it runs by default. More odd is the “Switch To Chicago” task (which does nothing if opened) and “Tracker”. My guess is that “Switch To Chicago” is the product of some internal thing for launching the new UI. I have no idea what the “Tracker” is, but I think I found a clue in the “Find File” app:

Screenshot from 2016-08-13 14-10-10
Not only can you search with regular expressions, but there’s “Containing text” – could it be indexing? No, it totally isn’t. It’s all about tracking/reporting problems:

Screenshot from 2016-08-13 14-15-19
Well, that wasn’t as exciting as I was hoping for (after all, weren’t there interesting database-like file systems being researched at Microsoft in the early 1990s?). It’s about here I should show the obligatory About box:
Screenshot from 2016-08-07 20-58-10
It’s… not polished, and there’s certainly that not-yet-polished feel throughout the OS – but two years from release, that’s likely fair enough. Speaking of not perfect:

Screenshot from 2016-08-07 20-59-03
When something does crash, it asks you to describe what went wrong, i.e. provide a Clue for Dr. Watson:

Screenshot from 2016-08-13 12-09-22

But, most importantly, Solitaire is present! You can browse the Programs folder and head into Games and play it! One odd thing is that applications have two >> at the end, and there’s a “Parent Folder” entry too.

Screenshot from 2016-08-13 12-01-24
Solitaire itself? Just as I remember.

Screenshot from 2016-08-07 21-21-27
Notably, what is missing is anything like the Start menu, which is probably the key UI element introduced in Windows 95 that’s still with us today. Instead, you have this:

Screenshot from 2016-08-13 11-55-15
That’s about the least exciting Windows menu possible. There’s the eye menu too, which is this:

Screenshot from 2016-08-13 11-56-12
More unfinished things are found in the “File cabinet”, such as properties for anything:
Screenshot from 2016-08-13 12-02-00
But let’s jump into Control Panels, which I managed to get to by heading to C:\CHICAGO\Control.sys – which isn’t exactly obvious, but I think you can find it through Programs as well.
Screenshot from 2016-08-13 12-02-41
Screenshot from 2016-08-13 12-05-40
The “Window Metrics” application is really interesting! It’s obvious that the UI was not solidified yet, that there was a lot of experimenting to do. This application lets you change all sorts of things about the UI:

Screenshot from 2016-08-13 12-05-57
My guess is that this was used a lot internally to twiddle things to see what worked well.

Another unfinished thing? That familiar Properties for My Computer, which is actually “Advanced System Features” in the control panel, and from the [Sample Information] at the bottom left, it looks like we may not be getting information about the machine it’s running on.

Screenshot from 2016-08-13 12-06-39

You do get some information in the System control panel, but a lot of it is unfinished. It seems as if Microsoft was experimenting with a few ways to express information and modify settings.

Screenshot from 2016-08-13 12-07-13
But check out this awesome picture of a hard disk for Virtual Memory:

Screenshot from 2016-08-13 12-07-47
The presence of the 386 Enhanced control panel shows how close this build still was to Windows 3.1:

Screenshot from 2016-08-13 12-08-08
At the same time, we see hints of things going 32 bit – check out the fact that we have both Clock and Clock32! Notepad, in its transition to 32 bit, even dropped the “pad” and is just Note32!

Screenshot from 2016-08-13 12-11-10
Well, that’s enough for today, time to shut down the machine:
Screenshot from 2016-08-13 12-15-45

Python for science, side projects and stuff! - PyConAu 2016

By Andrew Lonsdale.

  • Talked about using python-ppt for collaborating on PowerPoint presentations.
  • Covered his journey so far and the lessons he learned.
  • Gave some great examples of re-creating XKCD comics in Python (matplotlib_venn).
  • Claimed the diversion into Python and Matplotlib has helped his actual research.
  • Spoke about how using Python is great for Scientific research.
  • Summarised that side projects are good for Science and Python.
  • Recommended Elegant SciPy
  • Demoed using emoji to represent bioinformatics data with FASTQE (FASTQ as Emoji).

PyConAu 2016

MicroPython: a journey from Kickstarter to Space by Damien George - PyConAu 2016

Damien George.

Motivations for MicroPython:

  • To provide a high level language to control sophisticated micro-controllers.
  • Approached it as an intellectually stimulating research problem.
  • Wasn't even sure it was possible.
  • Chose Python because:
    • It was a high level language with powerful features.
    • Large existing community.
    • Naively thought it would be easy.
    • Found Python easy to learn.
    • Shallow but long learning curve of python makes it good for beginners and advanced programmers.
    • Bitwise operations make it useful for micro-controllers.

Why Not Use CPython?

  • CPython pre-allocates memory, resulting in inefficient memory usage which is problematic for low RAM devices like micro controllers.

Usage:

  • If you know Python, you know MicroPython - it's implemented the same

Kickstarter:

Damien covered his experiences with Kickstarter.

Internals of MicroPython:

  • Damien covered the parser, lexer, compiler and runtime.
  • Walked us through the workflows of the internals.
  • Spoke about object representation and the three machine word object forms:
    • Integers.
    • Strings.
    • Objects.
  • Covered the emitters:
    • Bytecode.
    • Native (machine code).
    • Inline assembler.

Coding Style:

The coding style was more that of a physicist trying to make things work than a computer engineer.

  • There's a code dashboard
  • Hosted on GitHub
  • Noted that he could not have done this without the support of the community.

Hardware:

Listed some of the micro-controller boards that it runs on, and larger computers that currently run OpenWRT.

Spoke about the BBC micro:bit project. Demo'd speech synthesis and image display running on it.

MicroPython in Space:

Spoke about the port to LEON / SPARC / RTEMS for the European Space Agency for satellite control, particularly the application layer.

Damien closed with an overview of current applications and ongoing software and hardware development.

Links:

  • micropython.org
  • forum.micropython.org
  • github.com/micropython

PyConAu 2016

August 12, 2016

Doing Math with Python - Amit Saha - PyConAu 2016

Amit Saha.

Slides and demos.

Why Math with Python?

  • Provides an interactive learning experience.
  • Provides a great base for future programming (ie: data science, machine learning).

Tools:

  • Python 3
  • SymPy
  • matplotlib

Amit's book: Doing Math with Python

PyConAu 2016

The Internet of Not Great Things - Nick Moore - PyConAu 2016

Nick Moore.

aka "The Internet of (Better) Things".

  • Abuse of IoT is not a technical issue.
  • The problem is who controls the data.
  • Need better analysis of the ways it is used that are bad.
  • "If you're not the customer, you're the product."
    • by accepting advertising.
    • by having your privacy sold.
  • Led to a conflation of IoT and Big Data.
  • Product end of life by vendors ceasing support.
  • Very little cross vendor compatibility.
  • Many devices useless if the Internet is not available.
  • Consumer grade devices often fail.
  • Weak crypto support.
  • Often due to lack of entropy, RAM, CPU.
  • Poorly thought out update cycles.

Turning Complaints into Requirements:

We need:

  • Internet independence.
  • Generic interfaces.
  • Simplified Cryptography.
  • Easier Development.

Some Solutions:

  • Peer to peer services.
  • Standards based hardware description language.
  • Shared secrets, initialised by QR code.
  • Simpler development with MicroPython.

PyConAu 2016

OpenBMC - Boot your server with Python - Joel Stanley - PyConAu 2016

Joel Stanley.

  • OpenBMC is a Free Software BMC
  • Running embedded Linux.
  • Developed an API before developing other interfaces.

Goals:

  • A modern kernel.
  • Up to date userspace.
  • Security patches.
  • Better interfaces.
  • Reliable performance.
    • REST interface.
    • SSH instead of strange tools.

The Future:

  • Support more home devices.
  • Add a web interface.
  • Secure boot, trusted boot, more security features.
  • Upstream all of the things.
  • Support more hardware.

PyConAu 2016

Teaching Python with Minecraft - Digital K - PyConAu 2016

by Digital K.

The video of the talk is here.

  • Recommended for ages 10 - 16
  • Why Minecraft?
    • Kids familiarity is highly engaging.
    • Relatively low cost.
    • Code their own creations.
    • Kids already use the command line in Minecraft
  • Use the Minecraft API to receive commands from Python.
    • Place blocks
    • Move players
    • Build faster
    • Build larger structures and shapes
    • Easy duplication
    • Animate blocks (ie: colour change)
    • Create games

Option 1:

How it works:

  • Import Minecraft API libraries to your code.
  • Push it to the server.
  • Run the Minecraft client.

What you can Teach:

  • Co-ordinates
  • Time
  • Multiplications
  • Data
  • Art works with maths
  • Trigonometry
  • Geo fencing
  • Design
  • Geography

Connect to External Devices:

  • Connect to Raspberry Pi or Arduino.
  • Connect the game to events in the real world.

Other Resources:

PyConAu 2016

Scripting the Internet of Things - Damien George - PyConAu 2016

Damien George

Damien gave an excellent overview of using MicroPython with microcontrollers, particularly the ESP8266 board.

Damien's talk was excellent and covered a broad and interesting history of the project and its current efforts.

PyConAu 2016

ESP8266 and MicroPython - Nick Moore - PyConAu 2016

Nick Moore

Slides.

  • Price and feature set are a game changer for hobbyists.
  • Makes for a more playful platform.
  • Uses serial programming mode to flash memory
  • Strict power requirements
  • The easy way to use them is with a NodeMCU for only a little more.
  • Tool kits:
  • Lua: (Node Lua).
  • Javascript: Espruino.
  • Forth, Lisp, Basic(?!).
  • MicroPython works on the ESP8266:
    • Drives micro controllers.
    • The onboard Wifi.
    • Can run a small webserver to view and control devices.
    • WebRepl can be used to copy files, as can mpy-utils.
    • Lacks:
      • an operating system.
      • multiprocessing.
      • a debugger / profiler.
  • Flobot:
    • Compiles via MicroPython.
    • A visual dataflow language for robots.

ESP8266 and MicroPython provide an accessible entry into working with micro-controllers.

PyConAu 2016

August 10, 2016

Command line password management with pass

Why use a password manager in the first place? Well, they make it easy to have strong, unique passwords for each of your accounts on every system you use (and that’s a good thing).

For years I’ve stored my passwords in Firefox, because it’s convenient, and I never bothered with all those other fancy password managers. The problem is that it locked me into Firefox, and I found myself still needing to remember passwords for servers and other things.

So a few months ago I decided to give the command line tool Pass a try. It’s essentially a shell script wrapper for GnuPG and stores your passwords (with any notes) in individually encrypted files.

I love it.

Pass is less convenient in terms of web browsing, but it’s more convenient for everything else that I do (which is often on the command line). For example, I have painlessly integrated Pass into Mutt (my email client) so that passwords are not stored in the configuration files.
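
For the curious, the Mutt hookup is just a couple of backtick commands in muttrc – a minimal sketch, where the mail/example.com/username path is a made-up example:
set imap_pass = "`pass mail/example.com/username`"
set smtp_pass = "`pass mail/example.com/username`"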

As a side-note, I installed the Password Exporter Firefox Add-on and exported my passwords. I then added this whole file to Pass so that I can start copying old passwords as needed (I didn’t want them all).

About Pass

Pass uses public-key cryptography to encrypt each password that you want to store as an individual file. To access the password you need the private key and passphrase.

So, some nice things about it are:

  • Short and simple shell script
  • Uses standard GnuPG to encrypt each password into individual files
  • Password files are stored on disk in a hierarchy of your own choosing
  • Stored in Git repo (if desired)
  • Can also store notes
  • Can copy the password temporarily to copy/paste buffer
  • Can show, edit, or copy password
  • Can also generate a password
  • Integrates with anything that can call it
  • Tab completion!

So it’s nothing super fancy, “just” a great little wrapper for good old GnuPG and text files, backed by git. Perfect!

Install Pass

Installation of Pass (and Git) is easy:
sudo dnf -y install git pass

Prepare keys

You’ll need a pair of keys, so generate these if you haven’t already (this creates the keys under ~/.gnupg). I’d probably recommend RSA and RSA, 4096 bits long, using a decent passphrase and setting a valid email address (you can also separately use these keys to send signed emails and receive encrypted emails).
gpg2 --full-gen-key

We will need the key’s fingerprint to give to pass. It should be a string of 40 characters, something like 16CA211ACF6DC8586D6747417407C4045DF7E9A2.
gpg2 --list-keys

Note: Your fingerprint (and public keys) can be public, but please make sure that you keep your private keys secure! For example, don’t copy the ~/.gnupg directory to a public place (even though they are protected by a nice long passphrase, right? Right?).

Initialise pass

Before we can use Pass, we need to initialise it with the fingerprint of the GPG key it should encrypt to (the one we noted above).
pass init <your-fingerprint>

This creates the basic directory structure in the .password-store directory in your home directory. At this point it just has a plain text file with the fingerprint of the public key that it should use.

Adding git backing

If you haven’t already, you’ll need to tell Git who you are. Using the email address that you used when creating the GPG key is probably good.
git config --global user.email "you@example.com"
git config --global user.name "Your Name"

Now, go into the password-store directory and initialise it as a Git repository.
cd ~/.password-store
git init
git add .
git commit -m "initial commit"
cd -

Pass will now automatically commit changes for you!

Hierarchy

As mentioned, you can create any hierarchy you like. I quite like to use subdirectories and sort by function first (like mail, web, server), then domains (like gmail.com, twitter.com) and then server or username. This seems to work quite nicely with tab completion, too.

You can rearrange this at any time, so don’t worry too much!

Storing a password

Adding a password is simple and you can create any hierarchy that you want; you just tell pass to add a new password and where to store it. Pass will prompt you to enter the password.

For example, you might want to store your password for a machine at server1.example.com – you could do that like so:
pass add servers/example.com/server1

This creates the directory structure on disk and your first encrypted file!
~/.password-store/
└── servers
    └── example.com
        └── server1.gpg
 
2 directories, 1 file

Run the file command on that file and it should tell you that it’s encrypted.
file ~/.password-store/servers/example.com/server1.gpg

But is it really? Go ahead, cat that gpg file, you’ll see it’s encrypted (your terminal will probably go crazy – you can blindly enter the reset command to get it back).
cat ~/.password-store/servers/example.com/server1.gpg

So this file is encrypted – you can safely copy it anywhere (again, please just keep your private key secure).

Git history

Browse to the .password-store dir and run some git commands; you’ll see your history, and git show will prompt for your GPG passphrase to decrypt the files stored in Git.

cd ~/.password-store
git log
git show
cd -

If you wanted to, you could push this to another computer as a backup (perhaps even via a git-hook!).
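
Since Pass wraps git for you, the pass git subcommand is the easy way to do that – a quick sketch, where the remote URL is made up:
pass git remote add backup user@backuphost:password-store.git
pass git push -u backup master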

Storing a password, with notes

By default Pass just prompts for the password, but if you want to add notes at the same time you can do that also. Note that the password should still be on its own on the first line, however.
pass add -m mail/gmail.com/username

If you use two-factor authentication (which you should be using), this is useful for also storing the account password and recovery codes.

Generating and storing a password

As I mentioned, one of the benefits of using a password manager is to have strong, unique passwords. Pass makes this easy by including the ability to generate one for you and store it in the hierarchy of your choosing. For example, you could generate a 32 character password (without special characters) for a website you often log into, like so:
pass generate -n web/twitter.com/username 32
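
If you’d rather not have the new password echoed to your terminal, generate also takes the -c option to send it straight to the copy/paste buffer instead (same hypothetical path as above):
pass generate -n -c web/twitter.com/username 32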

Getting a password out

Getting a password out is easy; just tell Pass which one you want. It will prompt you for your passphrase, decrypt the file for you, read the first line and print it to the screen. This can be useful for scripting (more on that below).

pass web/twitter.com/username

Most of the time though, you’ll probably want to copy the password to the copy/paste buffer; this is also easy, just add the -c option. Passwords are automatically cleared from the buffer after 45 seconds.
pass -c web/twitter.com/username

Now you can log into Twitter by entering your username and pasting the password.

Editing a password

Similarly you can edit an existing password to change it, or add as many notes as you like. Just tell Pass which password to edit!
pass edit web/twitter.com/username

Copying and moving a password

It’s easy to copy an existing password to a new one, just specify both the original and new file.
pass copy servers/example.com/server1 servers/example.com/server2

If the hierarchy you created is not to your liking, it’s easy to move passwords around.
pass mv servers/example.com/server1 computers/server1.example.com

Of course, you could script this!

Listing all passwords

Pass will list all your passwords in a tree nicely for you.
pass list

Interacting with Pass

As pass is a nice standard shell program, you can interact with it easily. For example, to get a password from a script you could do something like this.
#!/usr/bin/env bash
 
echo "Getting password.."
PASSWORD="$(pass servers/testing.com/server2)"
if [[ $? -ne 0 ]]; then
    echo "Sorry, failed to get the password"
    exit 1
fi
echo "..and we got it, ${PASSWORD}"

Try it!

There’s lots more you can do with Pass, why not check it out yourself!

August 08, 2016

Setting up OpenStack Ansible All-in-one behind a proxy

Setting up OpenStack Ansible (OSA) All-in-one (AIO) behind a proxy requires a couple of settings, but it should work fine (we’ll also configure the wider system). There are two types of git repos that we should configure for (unless you’re an OpenStack developer), those that use http (or https) and those that use the git protocol.

Firstly, this assumes an Ubuntu 14.04 server install (with at least 60GB of free space on / partition).

All commands are run as the root user, so switch to root first.

sudo -i

Export variables for ease of setup

Setting these variables here means that you can copy and paste the relevant commands from the rest of this blog post.

Note: Make sure that your proxy is fully resolvable and then replace the settings below with your actual proxy details (leave out user:password if you don’t use one).

export PROXY_PROTO="http"
export PROXY_HOST="user:password@proxy"
export PROXY_PORT="3128"
export PROXY="${PROXY_PROTO}://${PROXY_HOST}:${PROXY_PORT}"

First, install some essentials (reboot after upgrade if you like).
echo "Acquire::http::Proxy \"${PROXY}\";" \
> /etc/apt/apt.conf.d/90proxy
apt-get update && apt-get upgrade
apt-get install git openssh-server rsync socat screen vim

Configure global proxies

For any http:// or https:// repositories we can just set a shell environment variable. We’ll set this in /etc/environment so that all future shells have it automatically.

cat >> /etc/environment << EOF
export http_proxy="${PROXY}"
export https_proxy="${PROXY}"
export HTTP_PROXY="${PROXY}"
export HTTPS_PROXY="${PROXY}"
export ftp_proxy="${PROXY}"
export FTP_PROXY="${PROXY}"
export no_proxy=localhost
export NO_PROXY=localhost
EOF

Source this to set the proxy variables in your current shell.
source /etc/environment
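
A quick sanity check that the proxy is actually being picked up from this shell (any external URL will do, openstack.org here is just an example):
curl -sI https://www.openstack.org | head -n 1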

Tell sudo to keep these environment variables
echo 'Defaults env_keep = "http_proxy https_proxy ftp_proxy \
no_proxy HTTP_PROXY HTTPS_PROXY FTP_PROXY NO_PROXY"' \
> /etc/sudoers.d/01_proxy

Configure Git

For any git:// repositories we need to make a script that uses socat (you could use netcat) and tell Git to use this as the proxy.

cat > /usr/local/bin/git-proxy.sh << EOF
#!/bin/bash
# \$1 = hostname, \$2 = port
exec socat STDIO PROXY:${PROXY_HOST}:\${1}:\${2},proxyport=${PROXY_PORT}
EOF

Make it executable.
chmod a+x /usr/local/bin/git-proxy.sh

Tell Git to proxy connections through this script.
git config --global core.gitProxy /usr/local/bin/git-proxy.sh
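
You can verify the git:// proxying works before cloning anything large, for example:
git ls-remote git://git.openstack.org/openstack/openstack-ansible HEAD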

Clone OpenStack Ansible

OK, let’s clone the OpenStack Ansible repository! We’re living on the edge and so will build from the tip of the master branch.
git clone git://git.openstack.org/openstack/openstack-ansible \
/opt/openstack-ansible
cd /opt/openstack-ansible/

If you would prefer to build from a specific release, such as the latest stable, feel free to now check out the appropriate tag. For example, at the time of writing this is tag 13.3.1. You can get a list of tags by running the git tag command.

# Only run this if you want to build the 13.3.1 release
git checkout -b tag-13.3.1 13.3.1

Or if you prefer, you can checkout the tip of the stable branch which prepares for the upcoming stable minor release.

# Only run this if you want to build the latest stable code
git checkout -b stable/mitaka origin/stable/mitaka

Prepare log location

If something goes wrong, it’s handy to be able to have the log available.

export ANSIBLE_LOG_PATH=/root/ansible-log

Bootstrap Ansible

Now we can kick off the ansible bootstrap. This prepares the system with all of the Ansible roles that make up an OpenStack environment.
./scripts/bootstrap-ansible.sh

Upon success, you should see:

System is bootstrapped and ready for use.

Bootstrap OpenStack Ansible All In One

Now let’s bootstrap the all in one system. This configures the host with appropriate disks and network configuration, etc ready to run the OpenStack environment in containers.
./scripts/bootstrap-aio.sh

Run the Ansible playbooks

The final task is to run the playbooks, which sets up all of the OpenStack components on the host and containers. Before we proceed, however, this requires some additional configuration for the proxy.

The user_variables.yml file under the root filesystem at /etc/openstack_deploy/user_variables.yml is where we configure environment variables for OSA to export and set some other options (again, note the leading / before etc – do not modify the template file at /opt/openstack-ansible/etc/openstack_deploy by mistake).

cat >> /etc/openstack_deploy/user_variables.yml << EOF
#
## Proxy settings
proxy_env_url: "\"${PROXY}\""
no_proxy_env: "\"localhost,127.0.0.1,{{ internal_lb_vip_address }},{{ external_lb_vip_address }},{% for host in groups['all_containers'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}\""
global_environment_variables:
  HTTP_PROXY: "{{ proxy_env_url }}"
  HTTPS_PROXY: "{{ proxy_env_url }}"
  NO_PROXY: "{{ no_proxy_env }}"
  http_proxy: "{{ proxy_env_url }}"
  https_proxy: "{{ proxy_env_url }}"
  no_proxy: "{{ no_proxy_env }}"
EOF

Secondly, if you’re running the latest stable, 13.3.x, you will need to make a small change to the pip package list for the keystone (authentication component) container. Currently it pulls in httplib2 version 0.8, however this does not appear to respect the NO_PROXY variable and so keystone provisioning fails. Version 0.9 seems to fix this problem.

sed -i 's/state: present/state: latest/' \
/etc/ansible/roles/os_keystone/tasks/keystone_install.yml

Now run the playbooks!

Note: This will take a long time, perhaps a few hours, so run it in a screen or tmux session.

screen
time ./scripts/run-playbooks.sh

Verify containers

Once the playbooks complete, you should be able to list your running containers and see their status (there will be a couple of dozen).
lxc-ls -f
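
If you want a shell inside one of them, lxc-attach works as usual (the container name below is made up – substitute a real one from the lxc-ls output):
lxc-attach -n aio1_utility_container-abcd1234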

Log into OpenStack

Now that the system is complete, we can start using OpenStack!

You should be able to use your web browser to log into Horizon, the OpenStack Dashboard, at your AIO host’s IP address.

If you’re not sure what IP that is, you can find out by looking at which address port 443 is running on.

netstat -ltnp |grep 443

The admin user’s password is available in the user_secrets.yml file on the AIO host.
grep keystone_auth_admin_password \
/etc/openstack_deploy/user_secrets.yml

osa-aio

A successful login should reveal the admin dashboard.

osa-aio-admin

Enjoy your OpenStack Ansible All-in-one!

Windows 3.11 nostalgia

Because OS/2 didn’t go so well… let’s try something I’m a lot more familiar with. To be honest, the last time I used Windows in earnest on the desktop was around 3.11, so I kind of know it back to front (fun fact: I’ve read the entire Windows 3.0 manual).

It turns out that once you have MS-DOS installed in qemu, installing Windows 3.11 is trivial. I didn’t even change any settings for qemu, I just basically specced everything up to be very minimal (50MB RAM, 512MB disk).

win31-setup
win31-disk4
win31-installed
Windows 3.11 was not a fun time as soon as you had to do anything… nightmares of drivers, CONFIG.SYS and AUTOEXEC.BAT plague my mind. But hey, it’s damn fast on a modern processor.

Swift + Xena would make the perfect digital preservation solution

Some of you might not know, but for some years I worked at the National Archives of Australia on what was, at the time, their leading digital preservation platform. It was awesome, open source, and they paid me to hack on it.
The most important parts of the platform were Xena and Digital Preservation Recorder (DPR). Xena was, and hopefully still is, amazing. It takes in a file and guesses the format. If it’s a closed proprietary format and it has the right Xena plugin, it converts the file to an open standard and optionally turns it into a .xena file ready to be ingested into the digital repository for long term storage.

We did this knowing that proprietary formats change so quickly that if you want to store a file long term (20, 40, 100 years) you won’t be able to open it. An open format, on the other hand, is openly documented even if there is no software that can read it any more, so you can still get your data back.

Once a file had passed through Xena, we’d use DPR to ingest it into the archive. Once in the archive, we had other opensource daemons we wrote which ensured we didn’t lose things to bitrot, we’d keep things duplicated and separated. It was a lot of work, and the size of the space required kept growing.

Anyway, now I’m an OpenStack Swift core developer, and wow, I wish Swift was around back then, because it’s exactly what is required for the DPR side. It duplicates, infinitely scales, it checks checksums, quarantines and corrects. It keeps everything replicated and separated and does it all automatically. Swift is also highly customisable. You can create your own middleware and insert it in the proxy pipeline or in any of the storage nodes’ pipelines, and have it do whatever you need: add metadata, do something to the object on ingest or whenever the object is read, update some other system… really you can do whatever you want. Maybe even wrap Xena into some middleware.

Going one step further, IBM have been working on a thing called storlets, which uses Swift and Docker to do some work on objects and is now in the OpenStack namespace. Currently storlets are written in Java, and so is Xena… so this might also be a perfect fit.

Anyway, I got talking with Chris Smart, a mate who also used to work in the same team at NAA, and it got my mind thinking about all this, so I thought I’d place my rambling thoughts somewhere in case other archives or libraries are interested in digital preservation and need some ideas… best part, the software is open source and also free!

Happy preserving.

August 07, 2016

SM2000 – Part 8 – Gippstech 2016 Presentation

Justin, VK7TW, has published a video of my SM2000 presentation at Gippstech, which was held in July 2016.

Brady O’Brien, KC9TPA, visited me in June. Together we brought the SM2000 up to the point where it is decoding FreeDV 2400A waveforms at 10.7MHz IF, which we demonstrate in this video. I’m currently busy with another project but will get back to the SM2000 (and other FreeDV projects) later this year.

Thanks Justin and Brady!

FreeDV and this video were also mentioned in this interesting Reddit post/debate from Gary KN4AQ on VHF/UHF Digital Voice – a peek into the future.

OS/2 Warp Nostalgia

Thanks to the joys of abandonware websites, you can play with some interesting things from the 1990s and before. One of those things is OS/2 Warp. Now, I had a go at OS/2 sometime in the 1990s after being warned by a friend that it was “pretty much impossible” to get networking going. My experience of OS/2 then was not revolutionary… It was, well, something else on a PC that wasn’t that exciting and didn’t really add a huge amount over Windows.

Now, I’m nowhere near insane enough to try this on my actual computer, and I’ve managed to not accumulate any ancient PCs….

Luckily, qemu helps with an emulator! If you don’t set your CPU to Pentium (or possibly something one or two generations newer) then things don’t go well. Neither does a disk that by today’s standards would be considered beyond tiny. Also, if you dare to try to use an unpartitioned hard disk – OH MY are you in trouble.

Also, try to boot off “Disk 1” and you get this:
os2-wrong-floppy
Possibly the most friendly error message ever! But, once you get going (by booting the Installation floppy)… you get to see this:

Screenshot from 2016-08-07 19-12-19
And indeed, you are doing the time warp of Operating Systems right here. After a bit of fun, you end up in FDISK:

os2-install
os2-1gb-too-much
Why I can’t create a partition… WHO KNOWS. But, I tried again with a 750MB disk that already had a partition on it and…. FAIL. I think this one was due to partition type, so I tried again with partition type of 6 – plain FAT16, and not W95 FAT16 (LBA). Some memory is coming back to me of larger drives and LBA and nightmares…

But that worked!

warp4-installing
os2-checking
Then, the OS/2 WARP boot screen… which seems to stick around for a long time…

os2-install-2
and maybe I could get networking….

os2-network
Ladies and Gentlemen, the wonders of having to select DHCP:

os2-dhcp
It still asked me for some config, but I gleefully ignored it (because that must be safe, right!?) and then I needed to select a network adapter! Due to a poor choice on my part, I started with an rtl8139, which is conspicuously absent from this fine list of Token Ring adapters:

os2-tokenring
and then, more installing…

os2-more-installing
before finally rebooting into….

os2-fail
and that is where I realized there was beer in the fridge, and that was going to be a lot more fun.

August 04, 2016

Remembering Seymour Papert

papert-01

Today we’re remembering Seymour Papert, as we’ve received news that he died a few days ago (31st July 2016) at the age of 88.  Throughout his life, Papert did so much for computing and education; he even worked with the famous Jean Piaget, who helped Papert further develop his views on children and learning.

For us at OpenSTEM, Papert is also special because in the late 1960s (yep that far back) he invented the Logo programming language, used to control drawing “turtles”.  The Mirobot drawing turtle we use in our Robotics Program is a modern descendant of those early (then costly) adventures.

I sadly never met him, but what a wonderful person he was.

For more information, see the media release at MIT’s Media Lab (which he co-founded) or search for his name online.

 

Supercomputers: Current Status and Future Trends

The somewhat nebulous term "supercomputer" has a long history. Although first coined in the 1920s to refer to IBM's tabulators, in electronic computing the most important initial contribution was the CDC 6600 in the 1960s, due to its advanced performance over competitors. Over time, major technological advancements have included vector processing, cluster architecture, massive processor counts, GPGPU technologies, and multidimensional torus architectures for interconnect.


August 01, 2016

Putting Prometheus node_exporter behind apache proxy

I’ve been playing with Prometheus monitoring lately. It is fairly new software that is getting popular. Prometheus works using a pull architecture. A central server connects to each thing you want to monitor every few seconds and grabs stats from it.

In the simplest case you run the node_exporter on each machine, which gathers about 600-800 (!) metrics such as load, disk space and interface stats. This exporter listens on port 9100 and effectively works as an http server that responds to “GET /metrics HTTP/1.1” and spits out several hundred lines of:

node_forks 7916
node_intr 3.8090539e+07
node_load1 0.47
node_load15 0.21
node_load5 0.31
node_memory_Active 6.23935488e+08

Other exporters listen on different ports and export stats for apache or mysql while more complicated ones will act as proxies for outgoing tests (via snmp, icmp, http). The full list of them is on the Prometheus website.

So my problem was that I wanted to check my virtual machine that is on Linode. The machine only has a public IP and I didn’t want to:

  1. Allow random people to check my servers stats
  2. Have to setup some sort of VPN.

So I decided that the best way was to just put a user/password on the exporter.

However the node_exporter does not implement authentication itself, since the authors wanted to avoid maintaining lots of security code. So I decided to put it behind a reverse proxy using apache mod_proxy.

Step 1 – Install node_exporter

Node_exporter is a single binary that I started via an upstart script. As part of the upstart script I told it to listen on localhost port 19100 instead of port 9100 on all interfaces.

# cat /etc/init/prometheus_node_exporter.conf
description "Prometheus Node Exporter"

start on startup

chdir /home/prometheus/

script
/home/prometheus/node_exporter -web.listen-address 127.0.0.1:19100
end script
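
Then start it under upstart and check that it came up (the job name comes from the conf file name above):
start prometheus_node_exporter
status prometheus_node_exporter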

Once I start the exporter a simple “curl 127.0.0.1:19100/metrics” makes sure it is working and returning data.

Step 2 – Add Apache proxy entry

First make sure apache is listening on port 9100. On Ubuntu edit the /etc/apache2/ports.conf file and add the line:

Listen 9100

Next create a simple apache proxy without authentication (don’t forget to enable mod_proxy too):

# more /etc/apache2/sites-available/prometheus.conf 
<VirtualHost *:9100>
 ServerName prometheus

CustomLog /var/log/apache2/prometheus_access.log combined
 ErrorLog /var/log/apache2/prometheus_error.log

ProxyRequests Off
 <Proxy *>
Allow from all
 </Proxy>

ProxyErrorOverride On
 ProxyPass / http://127.0.0.1:19100/
 ProxyPassReverse / http://127.0.0.1:19100/

</VirtualHost>

This simply takes requests on port 9100 and forwards them to localhost port 19100. Now reload apache and test via curl to port 9100. You can also use netstat to see what is listening on which ports:

Proto Recv-Q Send-Q Local Address   Foreign Address State  PID/Program name
tcp   0      0      127.0.0.1:19100 0.0.0.0:*       LISTEN 8416/node_exporter
tcp6  0      0      :::9100         :::*            LISTEN 8725/apache2
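
If the proxy modules or the new site aren't enabled yet, on Ubuntu that's roughly (the site name just matches the conf file created above):
a2enmod proxy proxy_http
a2ensite prometheus
service apache2 reload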

 

Step 3 – Get Prometheus working

I'll assume at this point you have other servers working. What you need to do now is add the following entries for your server in your prometheus.yml file.

First add basic_auth into your scrape config for "node" and then add your servers, eg:

- job_name: 'node'

  scrape_interval: 15s

  basic_auth: 
    username: prom
    password: mypassword

  static_configs:
    - targets: ['myserver.example.com:9100']
      labels: 
         group: 'servers'
         alias: 'myserver'

Now restart Prometheus and make sure it is working. You should see the following lines in your apache logs plus stats for the server should start appearing:

10.212.62.207 - - [31/Jul/2016:11:31:38 +0000] "GET /metrics HTTP/1.1" 200 11377 "-" "Go-http-client/1.1"
10.212.62.207 - - [31/Jul/2016:11:31:53 +0000] "GET /metrics HTTP/1.1" 200 11398 "-" "Go-http-client/1.1"
10.212.62.207 - - [31/Jul/2016:11:32:08 +0000] "GET /metrics HTTP/1.1" 200 11377 "-" "Go-http-client/1.1"

Notice that connections are 15 seconds apart, get http code 200 and are 11k in size. The Prometheus server is using Authentication but apache doesn’t need it yet.

Step 4 – Enable Authentication.

Now create an apache password file:

htpasswd -cb /home/prometheus/passwd prom mypassword

and update your apache entry to the following to enable authentication:

# more /etc/apache2/sites-available/prometheus.conf
 <VirtualHost *:9100>
 ServerName prometheus

 CustomLog /var/log/apache2/prometheus_access.log combined
 ErrorLog /var/log/apache2/prometheus_error.log

 ProxyRequests Off
 <Proxy *>
 Order deny,allow
 Allow from all
 #
 AuthType Basic
 AuthName "Password Required"
 AuthBasicProvider file
 AuthUserFile "/home/prometheus/passwd"
 Require valid-user
 </Proxy>

 ProxyErrorOverride On
 ProxyPass / http://127.0.0.1:19100/
 ProxyPassReverse / http://127.0.0.1:19100/
 </VirtualHost>

After you reload apache you should see the following:

10.212.56.135 - prom [01/Aug/2016:04:42:08 +0000] "GET /metrics HTTP/1.1" 200 11394 "-" "Go-http-client/1.1"
10.212.56.135 - prom [01/Aug/2016:04:42:23 +0000] "GET /metrics HTTP/1.1" 200 11392 "-" "Go-http-client/1.1"
10.212.56.135 - prom [01/Aug/2016:04:42:38 +0000] "GET /metrics HTTP/1.1" 200 11391 "-" "Go-http-client/1.1"

Note that the “prom” in field 3 indicates that we are logging in for each connection. If you try to connect to the port without authentication you will get:

Unauthorized
This server could not verify that you
are authorized to access the document
requested. Either you supplied the wrong
credentials (e.g., bad password), or your
browser doesn't understand how to supply
the credentials required.
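
A quick way to confirm the credentials work end to end, using the example user and password from above:
curl -s -u prom:mypassword http://localhost:9100/metrics | head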

That is pretty much it. Note that you will need to add additional VirtualHost entries for more ports if you run other exporters on the server.

 


Playing with Shifter – NERSC’s tool to use Docker containers in HPC

Early days yet, but playing with NERSC’s Shifter to let us use Docker containers safely on our test RHEL6 cluster is looking really interesting (given you can’t use Docker itself under RHEL6, and if you could the security concerns would cancel it out anyway).

To use a pre-built Ubuntu Xenial image, for instance, you tell it to pull the image:

[samuel@bruce ~]$ shifterimg pull ubuntu:16.04

There’s a number of steps it goes through, first retrieving the container from the Docker Hub:

2016-08-01T18:19:57 Pulling Image: docker:ubuntu:16.04, status: PULLING

Then disarming the Docker container by removing any setuid/setgid bits, etc, and repacking as a Shifter image:

2016-08-01T18:20:41 Pulling Image: docker:ubuntu:16.04, status: CONVERSION

…and then it’s ready to go:

2016-08-01T18:21:04 Pulling Image: docker:ubuntu:16.04, status: READY

Using the image from the command line is pretty easy:

[samuel@bruce ~]$ cat /etc/lsb-release
LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch

[samuel@bruce ~]$ shifter --image=ubuntu:16.04 cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04 LTS"

The Shifter runtime will also copy in site-specified /etc/passwd, /etc/group and /etc/nsswitch.conf files so that you can do user/group lookups easily, as well as map in site-specified filesystems, so your home directory is just where it would normally be on the cluster.

[samuel@bruce ~]$ shifter --image=debian:wheezy bash --login
samuel@bruce:~$ pwd
/vlsci/VLSCI/samuel

I’ve not yet got to the point of configuring the Slurm plugin so you can queue up a Slurm job that will execute inside a Docker container, but very promising so far!

Correction: a misconception on my part – Shifter doesn’t put a Slurm batch job inside the container. It could, but there are good reasons why it’s better to leave that to the user (soon to be documented on the Shifter wiki page for Slurm integration).

This item originally posted here:

Playing with Shifter – NERSC’s tool to use Docker containers in HPC

July 29, 2016

Australia moves fast: North-West actually

Australia on globe
This story is about the tectonic plate on which we reside.  Tectonic plates move, and so continents shift over time.  They generally go pretty slow though.

What about Australia?  It appears that every year, we move 11 centimetres West and 7 centimetres North.  For a tectonic plate, that’s very fast.

The last time scientists marked our location on the globe was in 1994, with the Geocentric Datum of Australia 1994 (GDA1994) – generally called GDA94 in geo-spatial tools (such as QGIS).  So that datum came into force 22 years ago.  Since then, we’ve moved an astonishing 1.5 metres!  You may not think much of this, but right now it actually means that if you use a GPS in Australia to get coordinates, and plot it onto a map that doesn’t correct for this, you’re currently going to be off by 1.5 metres.  Depending on what you’re measuring/marking, you’ll appreciate this can be very significant and cause problems.

Bear in mind that, within Australia, GDA94 is not wrong as such, as its coordinates are relative to points within Australia. However, the positioning of Australia in relation to the rest of the globe is now outdated.  Positioning technologies have also improved.  So there’s a new datum planned for Australia, GDA2020.  By the time it comes into force, we’ll have shifted by 1.8 metres relative to GDA94.

We can have some fun with all this:

  • If you stand and stretch both your arms out, the tips of your fingers are about 1.5 metres apart – of course this depends a bit on the length of your arms, but it’ll give you a rough idea.  Now imagine a pipe or cable in the ground at a particular GPS position, then move 1.5 metres.  You could clean miss that pipe or cable… oops!  Unless your GPS is configured to use a datum that gets updated, such as WGS84.  However, if you had the pipe or cable plotted on a map that’s in GDA94, it becomes messy again.
  • If you use a tool such as Google Earth, where is Australia actually?  That is, will a point be plotted accurately, or be 1.5 metres out, or somewhere in between?
    Well, that would depend on when the most recent broad scale photos were taken, and what corrections the Google Earth team possibly applies during processing of its data (for example, Google Earth uses a different datum – WGS 84 for its calculations).
    Interesting question, isn’t it…
  • Now for a little science/maths challenge.  The Northern most tip of Australia, Cape York, is just 150km South of Papua New Guinea (PNG).  Presuming our plate maintains its present course and speed, roughly how many years until the visible bits (above sea level) of Australia and PNG collide?  Post your answer with working/reasoning in a comment to this post!  Think about this carefully and do your research.  Good luck!

July 28, 2016

Neuroscience in PSYOPS, World Order, and More

One of the funny things that I've heard is that people from one side believe that people from another side are somehow 'brainwashed' into believing what they do. As we saw in our last post, there is a lot of manipulation and social engineering going on if you think about it: http://dtbnguyen.blogspot.com/2016/07/social-engineeringmanipulation-rigging.html We're going to examine just exactly why

License quibbles (aka Hiro & linux pt 2)

Since my last post regarding the conversion of media from Channel 9’s Catch Up service, I have been in discussion with the company behind this technology, Hiro-Media. My concerns were primarily around their use of the open source xvid media codec and whilst I am not a contributor to xvid (and hence do not have any ownership under copyright), I believe it is still my right under the GPL to request a copy of the source code.

First off I want to thank Hiro-Media for their prompt and polite responses. It is clear that they take the issue of license violations very seriously. Granted, it would be somewhat hypocritical for a company specialising in DRM to not take copyright violations within their own company seriously, but it would not be the first time.

I initially asserted that, due to Hiro’s use (and presumed modification) of xvid code, this software was a derivative and therefore bound in its entirety by the GPL. Hiro-Media denied this, stating they use xvid in its original, unmodified state and hence Hiro is simply a user of, rather than a derivative of, xvid. This is a reasonable statement, albeit one that is difficult to verify. I want to stress at this point that in my playing with the Hiro software I have NOT in any way reverse engineered it, nor have I attempted to decompile their binaries in any way.

In the end, the following points were revealed:

  • The Mac version of Hiro uses a (claimed) unmodified version of the Perian Quicktime component
  • The Windows version of Hiro currently on Channel 9’s website IS indeed modified, what Hiro-Media terms an ‘accidental internal QA’ version. They state that they have sent a new version to Channel 9 that corrects this. The xvid code they are using can be found at http://www.koepi.info/xvid.html
  • Neither version has included a GPL preamble within their EULA as required. Again, I am assured this is to be corrected ASAP.

I want to reiterate that Hiro-Media have been very cooperative about this and appear to have genuine concern. I am impressed by the Hiro system itself and whilst I am still not a fan of DRM in general, this is by far the best compromise I have seen to date. They just didn’t have a linux version.

This brings me to my final, slightly more negative point. On my last correspondence with Hiro-Media, they concluded with the following:

Finally, please note our deepest concerns as to any attempt to misuse our system, including the content incorporated into it, as seems to be evidenced in your website. Prima facia, such behavior consists a gross and fundamental breach of our license (which you have already reviewed). Any such misuse may cause our company, as well as any of our partners, vast damages.

I do not wish to label this a threat (though I admit to feeling somewhat threatened by it), but I do want to clear up a few things about what I have done. The statement alleges I have violated Hiro’s license (pot? kettle? black?), however this is something I vehemently disagree with. I have read the license very carefully (obviously, as I went looking for the GPL) and the only relevant part is:

You agree that you will not modify, adapt or translate, or disassemble, decompile, reverse engineer or otherwise attempt to discover the source code of the Software.

Now I admit to being completely guilty of a single part of this: I have attempted to discover the source code. BUT (and this is a really big BUT), I have attempted this by emailing Hiro-Media and asking them for it, NOT by decompiling (or in any other way inspecting) the software. In my opinion, the inclusion of that specific part in their license also goes against the GPL, as such restrictions are strictly forbidden by it.
But back to the point, I have not modified, translated, disassembled, decompiled or reverse engineered the Hiro software. Additionally, I do not believe I have adapted it either. It is still doing exactly the same thing as it was originally, that is taking an incoming video stream, modifying it and decoding it. Importantly, I do not modify any files in any way. What I have altered is how Quicktime uses the data returned by Hiro. All my solution does is (using official OSX/Quicktime APIs) divert the output to a file rather than to the screen. In essence I have not done anything different to the ‘Save As’ option found in Quicktime Pro, however not owning Quicktime Pro, I merely found another way of doing this.

So that’s my conclusion. I will reply to Hiro-Media with a link to this post asking whether they still take issue with what I have done and take things from there.
To the guys from Hiro, if you are reading this: I didn’t do any of this to start trouble. All I wanted was a way to play these files on my linux HTPC, with or without ads. Thank you.

Channel 9, Catch Up, Hiro and linux

About 6 months ago, Channel 9 launched their ‘Catch Up’ service. Basically this is their way of fighting piracy and allowing people to download Australian made TV shows to watch on their PC. Now, of course, no ‘old media’ service would possibly do this without the wonders of DRM. Channel 9 though, are taking a slightly different approach.

Instead of the normal style of DRM that prevents you copying the file, Channel 9 employs technology from a company called Hiro. Essentially you install the Hiro player, download the file and watch it. The player will insert unskippable ads throughout the video, supposedly even targeted at your demographic. Now this is actually a fairly neat system – Channel 9 actually encourage you to share the video files over bittorrent etc! The problem, as I’m sure you can guess, is that there’s no player for linux.

So, just to skip to the punchline, yes it IS possible to get these files working on free software (completely legally & without the watermark)! If you just want to know how to do it, jump to the end as I’m going to explain a bit of background first.

Hiro

The Hiro technology is interesting in that it isn’t simply some custom player. The files you download from Channel 9 are actually xvid encoded, albeit a bastard in-bred cousin of what xvid should be. If you simply download the file and play it with vlc or mplayer, it will run, however you will get a nasty watermark over the top of the video and it will almost certainly crash about 30s in when it hits the first advertising blob. There is also some trickiness going on with the audio as, even if you can get the video to keep playing, the audio will jump back to the beginning at this point. Of course, the watermark isn’t just something that’s placed over the top in post-processing like a subtitle, it’s in the video data itself. To remove it you actually need to filter the video to modify the area covered by the watermark to darken/lighten the pixels affected. Sounds crazy and a tremendous amount of work, right? Well thankfully it’s already been done, by Hiro themselves.

When you install Hiro, you don’t actually install a media player, you install either a DirectShow filter or a Quicktime component depending on your platform. This has the advantage that you can use reasonably standard software to play the files. It’s still not much help for linux though.

Before I get onto how to create a ‘normal’ xvid file, I just want to mention something I think should be a concern for free software advocates. As you might know, xvid is an open codec, both for encoding and decoding. Due to the limitations of Quicktime and Windows Media Player, Hiro needs to include an xvid decoder as part of their filter. I’m sure it’s no surprise to anyone though that they have failed to release any code for this filter, despite it being based off a GPL’d work. IA(definitely)NAL, but I suspect there’s probably some dodginess going on here.

Using Catchup with free software

Very basically, the trick to getting the video working is that it needs to be passed through the filter provided by Hiro. I tried a number of methods to get the files converted for use with mplayer or vlc and in the end, unfortunately, I found that I needed to be using either Windows or OSX to get it done. Smarter minds than mine might be able to get the DirectShow filter (HiroTransform.ax) working with mplayer in a similar manner to how CoreAVC on linux works, but I had no luck.

But, if you have access to OSX, here’s how to do it:

  1. Download and install the Hiro software for Mac. You don’t need to register or anything, in fact, you can delete the application the moment you finish the install. All you need is the Quicktime component it added.
  2. Grab any file from the Catch Up Service (http://video.ninemsn.com.au/catchuptv). I’ve tested this with Underbelly, but all videos should work.
  3. Install ffmpegx (http://ffmpegx.com)
  4. Grab the following little script: CleanCatch.sh
  5. Run:
    chmod +x CleanCatch.sh
    ./CleanCatch.sh <filename>
  6. Voila. Output will be a file called ‘<filename>.clean.MP4’ and should be playable in both VLC and mplayer

Distribution

So, I’m the first to admit that the above is a right royal pain to do, particularly the whole requiring-OSX part. To save everyone the hassle though, I believe it’s possible to simply distribute the modified file. Now again, IANAL, but I’ve gone over the Channel 9 website with a fine tooth comb and can see nothing that forbids me from distributing this newly encoded file. I agreed to no EULA when I downloaded the original video and their site even has the following on it:

You can share the episode with your friends and watch it as many times as you like – online or offline – with no limitations

That whole ‘no limitations’ part is the bit I like. Not only have Channel 9 given me permission to distribute the file, they’ve given it to me unrestricted. I’ve not broken any locks and in fact have really only used the software provided by Channel 9 and a standard transcoding package.

This being the case, I am considering releasing modified versions of Channel 9’s files over bittorrent. I’d love to hear people’s opinions about this before doing so though in case they know more than I (not a hard thing) about such matters.

The rallyduino lives

[Update: The code or the rallyduino can be found at: http://code.google.com/p/rallyduino/]

Amidst a million other things (today is T-1 days until our little bub’s technical due date! No news yet though) I finally managed to get the rallyduino into the car over the weekend. It actually went in a couple of weeks ago, but had to come out again after a few problems were found.

So, the good news, it all works! I wrote an automagic calibration routine that does all the clever number crunching and comes up with the calibration number, so after a quick drive down the road, everything was up and running. My back of the envelope calculations for what the calibration number for our car would be also turned out pretty accurate, only about 4mm per revolution out. The unit, once calibrated, is at least as accurate as the commercial devices and displays considerably more information. Its also more flexible as it can switch between modes dynamically and has full control from the remote handset. All in all I was pretty happy, even the instantaneous speed function worked first time.

To give a little background, the box uses a hall effect sensor mounted next to a brake caliper that pulses each time a wheel stud rotates past. This is a fairly common method for rally computers to use and, to make life simpler, the rallyduino is pin compatible with another common commercial product, the VDO Minicockpit. As we already had a Minicockpit in the car, all we did was ‘double adapt’ the existing plug and pop in the rallyduino. This means there are 2 computers running off the 1 sensor, which in turn makes it much simpler (assuming 1 of them is known to be accurate) to determine if the other is right.
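
For anyone wondering what the calibration number actually boils down to, the arithmetic is simple: with one pulse per wheel stud, the distance covered per pulse is just the tyre’s rolling circumference divided by the stud count. Here’s a rough back-of-the-envelope sketch in Python (not the rallyduino’s auto-calibration code, and the numbers are illustrative rather than our car’s actual figures):

import math

def distance_per_pulse_mm(rolling_diameter_mm, studs=4):
    # Millimetres of travel per sensor pulse, given the tyre's rolling
    # diameter and the number of wheel studs passing the sensor per turn.
    circumference = math.pi * rolling_diameter_mm
    return circumference / studs

# Example only: a ~583mm rolling diameter tyre with 4 studs gives roughly
# 458mm of travel per pulse; total distance is then simply
# pulse_count * distance_per_pulse_mm(...).
print(distance_per_pulse_mm(583.0, 4))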

After taking the car for a bit of a drive though, a few problems became apparent. The explain them, a pic is required:
[Photo: the four devices mounted on the dash]

The 4 devices in the pic are:

  1. Wayfinder electronic compass
  2. Terratrip (The black box)
  3. Rallyduino (Big silver box with the blue screen)
  4. VDO Minicockpit (Sitting on top of the rallyduino)

The major problem should be immediately obvious. The screen is completely unsuitable. It’s both too small and has poor readability in daylight. I’m currently looking at alternatives and it seems like the simplest thing to do is get a 2×20 screen the same physical size as the existing 4×20. This, however, means that there would only be room for a single distance tracker rather than the 2 currently in place. The changeover is fairly trivial as the code, thankfully, is nice and flexible and the screen dimensions can be configured at compile time (From 2×16 to 4×20). Daylight readable screens are also fairly easily obtainable (http://www.crystalfontz.com is the ultimate small screen resource) so it’s just a matter of ordering one. In the long term I’d like to look at simply using a larger 4×20 screen but, as you can see, real estate on the dash is fairly tight.

Speaking of screens, I also found the most amazing little LCD controller from web4robot.com. It has both serial and I2C interfaces, a keypad decoder (with inbuilt debounce) and someone has gone to all the hard work of writing a common arduino interface library for it and other I2C LCD controllers (http://www.wentztech.com/radio/arduino/projects.html). If you’re looking for such an LCD controller and you are working with an arduino, I cannot recommend these units enough. They’re available on eBay for about $20 AU delivered too. As much as I loved the unit from Phil Anderson, it simply doesn’t have the same featureset as this one, nor is it as robust.

So that’s where things are at. Apologies for the brain dump nature of this post, I just needed to get everything down while I could remember it all.

1 + 1 = 3

No updates to this blog in a while I’m afraid, things have just been far too busy to have had anything interesting (read geeky) to write about. That said, I am indulging and making this post purely to show off.

On Monday 25th May at 8:37am, after a rather long day/night, Mel and I welcomed Spencer Bailey Stewart into the world. There were a few little issues throughout the night (Particularly towards the end) and he had some small hurdles to get over in his first hour, but since then he has gone from strength to strength and both he and Mum are now doing amazingly well.
Obligatory Pic:


Spencer Bailey Stewart

He’s a very placid little man and would quite happily sleep through an earthquake, much to our delight. And yes, that is a little penguin he’s holding on to in that pic

So that’s all really. I am very conscious of not becoming a complete baby bore so unless something actually worth writing about happens, this will hopefully be my only baby post for the sake of a baby post.

Boxee iView back online

Just a quick post to say the ABC iView Boxee app has been updated to version 0.7 and should now be fully functional once again. To anyone who has been using the app, I apologise for how long this update has taken, and I wish I could say I’ve been off solving world hunger or something, but in reality I’ve just been flat out with work and family. I’ve also got a few other projects on the go that have been keeping me busy. These may or may not ever reach a stage of public consumption, but if they do it’ll be cool stuff.

For anyone using Boxee, you may need to remove the app from My Apps and wait for Boxee to refresh its repository index, but eventually version 0.7 should pop up. It’s a bit rough in places so I hope to do another cleanup within the next few weeks, but at least everything is functional again.

Going into business

Lately my toying around with media centers has created some opportunities for commercial work in this area, which has been a pleasant change. As a result of this I’ve formed Noisy Media, a company specialising in the development of media centre apps for systems such as Boxee as well as the creation of customised media applications using XBMC (and similar) as a base.

Whilst I don’t expect this venture to ever be huge, I can see the market growing significantly in the future as products such as Google TV (Something I will be looking at VERY closely) and the Boxee Box are released and begin bringing streaming Video on Demand to the loungeroom. Given this is something I have a true passion for, the ability to turn work in this area into something profitable is very appealing and exciting.

Here’s to video on demand!

ASX RSS down

Just a quick note to advise that the ASX RSS feed at http://noisymime.org/asx is currently not functional due to a change in the source data format.  I am in the process of performing a rewrite on this now and should have it back up and running (Better and more robust than ever) within the next few days.

Apologies for the delay in getting things back up and running.

Cortina Fuel Injection – Part 1 The Electronics

Besides being your run-of-the-mill computer geek, I’ve always been a bit of a car geek as well. This often elicits down-the-nose looks from others who associate such people with V8 Supercar lovin’ petrolheads, which has always surprised me a little because the most fun parts of working on a car are all just testing physics theories anyway. With that in mind, I’ll do this writeup from the point of view of the reader being a non-car, but scientifically minded person. First a bit of background…

Background

For the last 3 years or so my dad and I have been working on a project to fuel inject our race car. The car itself is a 1968 Mk2 Cortina and retains the original 40 year old 1600 OHV engine. This engine was originally carbureted, meaning that it has a device that uses the vacuum created by the engine to mix fuel and air. This mixture is crucial to the running of an engine as the ratio of fuel to air dramatically alters power, response and economy. Carburetors were used for this function for a long time and whilst they achieve the basics very well, they are, at best, a compromise for most engines. To overcome these limitations, car manufacturers started moving to fuel injection in the 80’s, which allowed precise control of the amount of fuel added through the use of electronic signals. Initially, however, these systems were horrible, being driven by analog or very basic digital computers that did not have the power or inputs needed to accurately perform this function. These evolved into something useful throughout the 90’s and by the 00’s cars had full sequential systems (more on this later) that could deliver both good performance and excellent economy. It was our plan to fit something like the late 90’s type systems (oh, how this changed by the end though) to the Cortina with the aims of improving the power and drivability of the old engine. In this post I’m going to run through the various components needed from the electrical side to make this all happen, as well as a little background on each. Our starting point was this:

The System

To have a computer control when and how much fuel to inject, it requires a number of inputs:

  • A crank sensor. This is the most important thing and tells the computer where in the 4-stroke cycle (HIGHLY recommended reading if you don’t know about the 4 strokes an engine goes through) the engine is and therefore WHEN to inject the fuel. Typically this is some form of toothed wheel that is on the end of the crankshaft with a VR or Hall effect sensor that pulses each time a tooth goes past it. The more teeth the wheel has, the more precisely the computer knows where the engine is (Assuming it can keep up with all the pulses). By itself the toothed wheel is not enough however as the computer needs a reference point to say when the cycle is beginning. This is typically done by either a 2nd sensor that only pulses once every 4 strokes, or by using what’s known as a missing tooth wheel, which is the approach we have taken. This works by having a wheel that would ordinarily have, say, 36 teeth, but has then had one of them removed. This creates an obvious gap in the series of pulses which the computer can use as a reference once it is told where in the cycle the missing tooth appears. (There’s a rough sketch of how this decoding works just after this list.) The photos below show the standard Cortina crankshaft end and the wheel we made to fit onto the end

    Standard Crankshaft pulley

    36-1 sensor wheel added. Pulley is behind the wheel

    To read the teeth, we initially fitted a VR sensor, which sat about 0.75mm from the teeth, however due to issues with that item, we ended up replacing it with a Hall Effect unit.

  • Some way of knowing how much air is being pulled into the engine so that it knows HOW MUCH fuel to inject. In earlier fuel injection systems this was done with a Mass Air Flow (MAF) sensor, a device which heated a wire that was in the path of the incoming air. By measuring the drop in temperature of the wire, the amount of air flowing over it could be determined (Although guessed is probably a better word as most of these systems tended to be fairly inaccurate). More recent systems (from the late 90’s onwards) have used Manifold Absolute Pressure (MAP) sensors to determine the amount of air coming in. Computationally these are more complex as there are a lot more variables that need to be known by the computer, but they tend to be much more accurate for the purposes of fuel injection. Nearly all aftermarket computers now use MAP and given how easy it is to set up (just a single vacuum hose going from the manifold to the ECU) this is the approach we took.
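
To make the missing-tooth idea above a little more concrete, here is a rough sketch of how a 36-1 wheel can be decoded from a stream of pulse timestamps. This is purely illustrative Python, not the rallyduino or Megasquirt code; a real ECU does this in interrupt handlers and also has to cope with noise and rapidly changing engine speed.

DEG_PER_TOOTH = 360.0 / 36   # 10 degrees between teeth on a 36-1 wheel

def crank_angles(timestamps):
    # Yield (timestamp, approximate crank angle in degrees) for each pulse.
    # The missing tooth shows up as an inter-pulse gap roughly twice as long
    # as the previous one, and that gap becomes the 0 degree reference
    # (the real offset to TDC is set during calibration).
    tooth = None            # unknown until the first gap is seen
    prev_t = None
    prev_interval = None
    for t in timestamps:
        if prev_t is not None:
            interval = t - prev_t
            if prev_interval is not None and interval > 1.5 * prev_interval:
                tooth = 0   # this pulse is the first tooth after the gap
            elif tooth is not None:
                tooth = (tooth + 1) % 35   # 35 physical teeth, numbered 0..34
            prev_interval = interval
        prev_t = t
        if tooth is not None:
            yield t, tooth * DEG_PER_TOOTH

# Feeding in evenly spaced pulses with one double-length gap per revolution
# produces angles 0, 10, 20, ... 340, then 0 again. The 1.5x threshold works
# because engine speed barely changes from one tooth to the next.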

The above are the bare minimum inputs required for a computer to control the injection, however typically more sensors are needed in order to make the system operate smoothly. We used:

  • Temperature sensors: As the density of air changes with temperature, the ECU needs to know how hot or cold the incoming air is. It also needs to know the temperature of the water in the coolant system to know whether it is running too hot or cold so it can make adjustments as needed.
  • Throttle position sensor: The ratio of fuel to air is primarily controlled by the MAF or MAP sensor described above, however as changes in these sensors are not instantaneous, the ECU needs to know when the accelerator is pressed so it can add more fuel for the car to be able to accelerate quickly. These sensors are typically just a variable resistor fitted to the accelerator.
  • Camshaft sensor: I’ll avoid getting too technical here, but injection can essentially work in 2 modes, batched or sequential. In the 4 strokes an engine goes through, the crankshaft will rotate through 720 degrees (ie 2 turns). With just a crank sensor, the ECU can only know where the shaft is in the 0-360 degree range. To overcome this, most fuel injection systems up to about the year 2000 ran in ‘batched’ mode, meaning that the fuel injectors would fire in pairs, twice (or more) per 720 degrees. This is fine and cars can run very smoothly in this mode, however it means that after being injected, some fuel mixture sits in the intake manifold before being sucked into the chamber. During this time, the mixture starts to condense back into a liquid which does not burn as efficiently, leading to higher emissions and fuel consumption. To improve the situation, car manufacturers started moving to sequential injection, meaning that the fuel is only ever injected at the time it can go straight into the combustion chamber. To do this, the ECU needs to know where in 720 degrees the engine is rather than just in 360 degrees. As the camshaft in a car runs at half the crankshaft speed, all you need to do this is place a similar sensor on the camshaft that produces 1 pulse every revolution (The equivalent of 1 pulse every 2 turns of the crank). In our case, we decided to remove the distributor (which is driven off the crank) and convert it to provide this pulse. I’ll provide a picture of this shortly, but it uses a single tooth that passes through a ‘vane’ type hall effect sensor, so that the signal goes high when the tooth enters the sensor and low when it leaves.
  • Oxygen sensor (O2) – In order to give some feedback to the ECU about how the engine is actually running, most cars these days run a sensor in the exhaust system to determine how much of the fuel going in is actually being burned. Up until very recently, virtually all of these sensors were what is known as narrowband, which in short means that they can determine whether the fuel/air mix is too lean (Not enough fuel) or too rich (Too much fuel), but not actually by how much. The upshot of this is that you can only ever know EXACTLY what the fuel/air mixture is when it switches from one state to the other. To overcome this problem, there is a different version of the sensor, known as wideband, that (within a certain range) can determine exactly how rich or lean the mixture is. If you ever feel like giving yourself a headache, take a read through http://www.megamanual.com/PWC/LSU4.htm which is the theory behind these sensors. They are complicated! Thankfully, despite all the complication, they are fairly easy to use and allow much easier and quicker tuning once the ECU is up and running. (There’s a rough illustration of reading one just after this list.)
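
As an aside on the wideband sensor above: the controller electronics usually reduce all that complicated theory to a simple analog output that the ECU reads. Purely as an illustration (the 0-5V to 10-20 AFR mapping below is a hypothetical linear calibration, not the actual figures for any particular controller), reading one looks something like this:

def wideband_afr(voltage, v_min=0.0, v_max=5.0, afr_min=10.0, afr_max=20.0):
    # Map the controller's output voltage to an air/fuel ratio using a
    # linear calibration. Real controllers document their own transfer
    # function; a narrowband sensor can't be read this way at all.
    span = (voltage - v_min) / (v_max - v_min)
    return afr_min + span * (afr_max - afr_min)

# Example: 2.35V maps to 14.7 AFR, roughly stoichiometric for petrol.
print(round(wideband_afr(2.35), 2))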

So with all of the above, pretty much the complete electronics system is covered. Of course, this doesn’t even start to cover off the wiring, fusing, relaying etc that has to go into making all of it work in the terribly noisy environment of a car, but that’s all the boring stuff

ECU

Finally the part tying everything together is the ECU (Engine Control Unit) itself. There are many different types of programmable ECUs available and they vary significantly in both features and price, ranging from about $400 to well over $10,000. Unsurprisingly there’s been a lot of interest in this area from enthusiasts looking to make their own and despite there having been a few of these to actually make it to market, the most successful has almost certainly been Megasquirt. When we started this project we had planned on using the 2nd generation Megasquirt which, whilst not having some of the capabilities of the top end systems, provided some great bang for buck. As we went along though, it became apparent that the Megasquirt 3 would be coming out at about the right time for us and so I decided to go with one of them instead. I happened to fluke one of the first 100 units to be produced and so we had it in our hands fairly quickly.

Let me just say that this is an AMAZING little box. From what I can see it has virtually all the capabilities of the (considerably) more expensive commercial units including first class tuning software (Multi platform, Win, OSX, linux) and a very active developer community. Combined with the Extra I/O board (MS3X) the box can do full sequential injection and ignition (With direct coil driving of ‘smart’ coils), launch control, traction control, ‘auto’ tuning in software, generic I/O of basically any function you can think of (including PID closed loop control), full logging via onboard SD slot and has a built in USB adaptor to boot!

Megasquirt 3 box and unassembled components

In the next post I’ll go through the hardware and setup we used to make all this happen. I’ll also run through the ignition system that we switched over to ECU control.

Command line Skype

Despite risking the occasional dirty look from a certain type of linux/FOSS supporter, I quite happily run the (currently) non-free Skype client on my HTPC. I have a webcam sitting on top of the TV and it works flawlessly for holding video chats with family and friends.

The problem I initially faced however, was that my HTPC is 100% controlled by a keyboard only. Unlike the Windows version, the linux version of Skype has no built in shortcut keys (user defined or otherwise) for basic tasks such as answering and terminating calls. This makes it virtually impossible to use out of the box. On the upside though, the client does have an API and some wonderful person out there has created a python interface layer for it, aptly named Skype4Py.

A little while ago when I still had free time on weekends, I sat down and quickly hacked together a python script for answering and terminating calls, as well as maximising video chats from the command line. I then setup a few global shortcut keys within xfce to run the script with the appropriate arguments.

I won’t go into the nitty gritty of the script as it really is a hack in some places (Particularly the video maximising), but I thought I’d post it up in case it is of use to anyone else. I’ve called it mythSkype, simply because the primary function of the machine I run it on is MythTV, but it has no dependencies on Myth at all.

The dependencies are:

  • Python – Tested with 2.6, though 2.5 should work
  • Skype4Py –  any version
  • xdotool – Only required for video maximising

To get the video maximising to work you’ll need to edit the file and set the screen_width and screen_height variables to match your resolution.
Make sure you have Skype running, then simply execute one of the following:

  • ./mythSkype -a (Answer any ringing calls)
  • ./mythSkype -e (End active calls)
  • ./mythSkype -m (Maximise the current video)

The first time you run the script, you will get a prompt from Skype asking if you are happy with Skype4Py having access to the application. Obviously you must give your assent or nothing will work.

It’s nothing fancy, but I hope it’s of use to others out there.
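
For anyone curious what the script boils down to, here is a minimal sketch of the Skype4Py calls involved. This is not the actual mythSkype script, just an illustration; the constant names are from memory, so treat it as a starting point rather than a drop-in replacement.

#!/usr/bin/env python
# Minimal illustration of answering/ending Skype calls via Skype4Py.
# Assumes Skype is running and API access has been granted.
import sys
import Skype4Py

skype = Skype4Py.Skype()
skype.Attach()   # triggers the Skype authorisation prompt on first use

action = sys.argv[1] if len(sys.argv) > 1 else '-a'

if action == '-a':
    # Answer anything that is currently ringing
    for call in skype.ActiveCalls:
        if call.Status == Skype4Py.clsRinging:
            call.Answer()
elif action == '-e':
    # Hang up any calls in progress
    for call in skype.ActiveCalls:
        call.Finish()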

Download: mythSkype

ABC iView on Boxee

A few months ago I switched from using the standard mythfrontend to Boxee, a web enhanced version of the popular XBMC project. Now Boxee has a LOT of potential and the upcoming Boxee Box looks very promising, but its fantastic web capabilities are let down in Australia, as services such as Hulu and Netflix streaming are not available here.

What we do have though is a national broadcaster with a reasonably good online facility. The ABC’s iView has been around for some time and has a really great selection of current programs available on it. Sounds like the perfect candidate for a Boxee app to me.

So with the help of Andy Botting and using Jeremy Visser’s Python iView download app for initial guidance, I put together a Boxee app for watching iView programs fullscreen. For those wishing to try it out, just add the following repository within Boxee:
http://noisymime.org/boxee

It’s mostly feature complete although there are a few things that still need to be added. If you have any suggestions or find a bug, either leave a comment or put in a ticket at http://code.google.com/p/xbmc-boxee-iview/issues/list (a Google Code site by Andy where I am storing this project)

So that’s the short version of the story. Along the way, however, there have been a few hiccups and I want to say something about what the ABC (and more recently the BBC) have done to ‘protect’ their content.

The ABC have enabled a function called SWF Verification on their RTMP stream. This is something that Adobe offer on top of their RTMP products despite the fact that they omitted it from the public RTMP spec. That wouldn’t be so bad, except that they are now going after open source products that implement this, threatening them with cease and desists. Going slightly technical for a minute, SWF Verification is NOT a real protection system. It does not encrypt the content being sent nor does it actually prevent people copying it. The system works by requesting a ‘ping’ every 60-90 seconds. If the player can’t provide the correct response (Which is made up of things such as the date and time and the phrase “Genuine Adobe Flash Player 001”) then the server stops the streaming. Hardly high tech stuff.

In my opinion the ABC has made a huge mistake in enabling this as it achieves nothing in stopping piracy or restricting the service to a certain platform and serves only to annoy and frustrate the audience. There is a patch available at http://code.google.com/p/xbmc-boxee-iview that allows Boxee to read these streams directly, however until such time as this is included in Boxee/XBMC mainline (Watch this space: http://trac.xbmc.org/ticket/8971) or the ABC come to their senses and disable this anti-feature, this Boxee app will use the flash interface instead (boo!)

So that’s it. Hope the app is useful to people and, as stated above, if there are any problems, please let me know.

[EDIT]
I should’ve mentioned this originally: Andy and I did actually contact the iView team at the ABC regarding SWF verification. They responded with the following:

Thanks for contacting us re-Boxee. We agree that it’s a great platform and ultimately appropriate for iView iteration. Currently we’re working out our priorities this year for our small iView team, in terms of extended content offerings, potential platforms and general enhancements to the site.

Just some background on our security settings. We have content agreements with various content owners (individuals, production companies, US TV networks etc) a number require additional security, such as SWF hashing. Our content owners also consider non-ABC rendering of that content as not in the spirit of those agreements.

We appreciate the effort you have put into the plug-in and your general interest in all things iView. So once we are on our way with our development schedule for “out of the browser” iView, we’ll hopefully be in a position to share our priorities a little more. We would like to keep in touch with you this year and if you have any questions or comments my direct email is ********@abc.net.au.

[STATUS]
The app is currently in a WORKING state. If you are experiencing any problems, please send me a copy of your Boxee log file and I will investigate the issue.

July 27, 2016

LUV Main August 2016 Meeting: M.2 / CRCs

Aug 2 2016 18:30
Aug 2 2016 20:30
Location: 

6th Floor, 200 Victoria St. Carlton VIC 3053

Speakers:

  • Russell Coker, M.2
  • Rodney Brown, CRCs


Late arrivals, please call (0490) 049 589 for access to the venue.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat and Infoxchange for their help in obtaining the meeting venues.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.



LUV Beginners August Meeting: File Sharing in Linux

Aug 20 2016 12:30
Aug 20 2016 16:30
Location: 

Infoxchange, 33 Elizabeth St. Richmond

This hands-on presentation and tutorial with Wen Lin will introduce the various types of file sharing in Linux - from the more traditional NFS & Samba to the newer cloud-based ones like Dropbox, Google Drive and OwnCloud. The talk is aimed primarily at beginners (newbies), but hopefully some of you who are familiar with Linux will get something out of it as well.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)

Late arrivals, please call (0490) 049 589 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.



Get off my lawn: separating Docker workloads using cgroups

On my team, we do two different things in our Continuous Integration setup: build/functional tests, and performance tests. Build tests simply test whether a project builds, and, if the project provides a functional test suite, that the tests pass. We do a lot of MySQL/MariaDB testing this way. The other type of testing we do is performance tests: we build a project and then run a set of benchmarks against it. Python is a good example here.

Build tests want as much grunt as possible. Performance tests, on the other hand, want a stable, isolated environment. Initially, we set up Jenkins so that performance and build tests never ran at the same time. Builds would get the entire machine, and performance tests would never have to share with anyone.

This, while simple and effective, has some downsides. In POWER land, our machines are quite beefy. For example, one of the boxes I use - an S822L - has 4 sockets, each with 4 cores. At SMT-8 (an 8 way split of each core) that gives us 4 x 4 x 8 = 128 threads. It seems wasteful to lock this entire machine - all 128 threads - just so as to isolate a single-threaded test.1

So, can we partition our machine so that we can be running two different sorts of processes in a sufficiently isolated way?

What counts as 'sufficiently isolated'? Well, my performance tests are CPU bound, so I want CPU isolation. I also want memory, and in particular memory bandwidth, to be isolated. I don't particularly care about IO isolation as my tests aren't IO heavy. Lastly, I have a couple of tests that are very multithreaded, so I'd like to have enough of a machine for those test results to be interesting.

For CPU isolation we have CPU affinity. We can also do something similar with memory. On a POWER8 system, memory is connected to individual P8s, not to some central point. This is a 'Non-Uniform Memory Access' (NUMA) setup: the directly attached memory will be very fast for a processor to access, and memory attached to other processors will be slower to access. An accessible guide (with very helpful diagrams!) is the relevant RedBook (PDF), chapter 2.

We could achieve the isolation we want by dividing up CPUs and NUMA nodes between the competing workloads. Fortunately, all of the hardware NUMA information is plumbed nicely into Linux. Each P8 socket gets a corresponding NUMA node. lscpu will tell you what CPUs correspond to which NUMA nodes (although what it calls a CPU we would call a hardware thread). If you install numactl, you can use numactl -H to get even more details.

In our case, the relevant lscpu output is thus:

NUMA node0 CPU(s):     0-31
NUMA node1 CPU(s):     96-127
NUMA node16 CPU(s):    32-63
NUMA node17 CPU(s):    64-95

Now all we have to do is find some way to tell Linux to restrict a group of processes to a particular NUMA node and the corresponding CPUs. How? Enter control groups, or cgroups for short. Processes can be put into a cgroup, and then a cgroup controller can control the resources allocated to the cgroup. Cgroups are hierarchical, and there are controllers for a number of different ways you could control a group of processes. Most helpfully for us, there's one called cpuset, which can control CPU affinity, and restrict memory allocation to a NUMA node.

We then just have to get the processes into the relevant cgroup. Fortunately, Docker is incredibly helpful for this!2 Docker containers are put in the docker cgroup. Each container gets its own cgroup under the docker cgroup, and fortunately Docker deals well with the somewhat broken state of cpuset inheritance.3 So it suffices to create a cpuset cgroup for docker, and allocate some resources to it, and Docker will do the rest. Here we'll allocate the last 3 sockets and NUMA nodes to Docker containers:

cgcreate -g cpuset:docker
echo 32-127 > /sys/fs/cgroup/cpuset/docker/cpuset.cpus
echo 1,16-17 > /sys/fs/cgroup/cpuset/docker/cpuset.mems
echo 1 > /sys/fs/cgroup/cpuset/docker/cpuset.mem_hardwall

mem_hardwall prevents memory allocations under docker from spilling over into the one remaining NUMA node.

So, does this work? I created a container with sysbench and then ran the following:

root@0d3f339d4181:/# sysbench --test=cpu --num-threads=128 --max-requests=10000000 run

Now I've asked for 128 threads, but the cgroup only has CPUs/hwthreads 32-127 allocated. So if I run htop, I shouldn't see any load on CPUs 0-31. What do I actually see?

htop screenshot, showing load only on CPUs 32-127

It works! Now, we create a cgroup for performance tests using the first socket and NUMA node:

cgcreate -g cpuset:perf-cgroup
echo 0-31 > /sys/fs/cgroup/cpuset/perf-cgroup/cpuset.cpus
echo 0 > /sys/fs/cgroup/cpuset/perf-cgroup/cpuset.mems
echo 1 > /sys/fs/cgroup/cpuset/perf-cgroup/cpuset.mem_hardwall

Docker conveniently lets us put new containers under a different cgroup, which means we can simply do:

dja@p88 ~> docker run -it --rm --cgroup-parent=/perf-cgroup/ ppc64le/ubuntu bash
root@b037049f94de:/# # ... install sysbench
root@b037049f94de:/# sysbench --test=cpu --num-threads=128 --max-requests=10000000 run

And the result?

htop screenshot, showing load only on CPUs 0-31

It works! My benchmark results also suggest this is sufficient isolation, and the rest of the team is happy to have more build resources to play with.

There are some boring loose ends to tie up: if a build job does anything outside of docker (like clone a git repo), that doesn't come under the docker cgroup, and we have to interact with systemd. Because systemd doesn't know about cpuset, this is quite fiddly. We also want this in a systemd unit so it runs on start up, and we want some code to tear it down. But I'll spare you the gory details.
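
For what it's worth, the gist of the non-Docker case is just writing a PID into the relevant cgroup.procs file. A rough sketch (assuming cgroup v1 with the cpuset controller mounted at /sys/fs/cgroup/cpuset, and run as root) might look like this:

#!/usr/bin/env python
# Move a process that isn't under Docker (e.g. the shell doing a git clone)
# into one of the cpuset cgroups created above. Cgroup v1 only; needs root.
import os
import sys

def move_pid_to_cpuset(pid, cgroup):
    procs = "/sys/fs/cgroup/cpuset/%s/cgroup.procs" % cgroup
    with open(procs, "w") as f:
        f.write(str(pid))

if __name__ == "__main__":
    # Usage: cpuset-move.py <cgroup> [pid]
    # e.g.   cpuset-move.py perf-cgroup 12345
    cgroup = sys.argv[1]
    pid = int(sys.argv[2]) if len(sys.argv) > 2 else os.getppid()
    move_pid_to_cpuset(pid, cgroup)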

In summary, cgroups are surprisingly powerful and simple to work with, especially in conjunction with Docker and NUMA on Power!


  1. It gets worse! Before the performance test starts, all the running build jobs must drain. If we have 8 Jenkins executors running on the box, and a performance test job is the next in the queue, we have to wait for 8 running jobs to clear. If they all started at different times and have different runtimes, we will inevitably spend a fair chunk of time with the machine at less than full utilisation while we're waiting. 

  2. At least, on Ubuntu 16.04. I haven't tested if this is true anywhere else. 

  3. I hear this is getting better. It is also why systemd hasn't done cpuset inheritance yet. 

Personal submission to the Productivity Commission Review on Public Sector Data

My name is Pia Waugh and this is my personal submission to the Productivity Commission Review on Public Sector Data. It does not reflect the priorities or agenda of my employers past, present or future, though it does draw on my expertise and experience in driving the open data agenda and running data portals in the ACT and Commonwealth Governments from 2011 till 2015. I was invited by the Productivity Commission to do a submission and thought I could provide some useful ideas for consideration. I note I have been on maternity leave since January 2016 and am not employed by the Department of Prime Minister and Cabinet or working on data.gov.au at the time of writing this submission. This submission is also influenced by my work and collaboration with other Government jurisdictions across Australia, overseas and various organisations in the private and community sectors. I’m more than happy to discuss these ideas or others if useful to the Productivity Commission.

I would like to thank all those program and policy managers, civic hackers, experts, advocates, data publishers, data users, public servants and vendors whom I have had the pleasure to work with and have contributed to my understanding of this space. I’d also like to say a very special thank you to the Australian Government Chief Technology Officer, John Sheridan, who gave me the freedom to do what was needed with data.gov.au, and to Allan Barger who was my right hand man in rebooting the agenda in 2013, supporting agencies and helping establish a culture of data publishing and sharing across the public sector. I think we achieved a lot in only a few years with a very small but highly skilled team. A big thank you also to Alex Sadleir and Steven De Costa who were great to work with and made it easy to have an agile and responsive approach to building the foundation for an important piece of data infrastructure for the Australian Government.

Finally, this is a collection of some of my ideas and feedback for use by the Productivity Commission however, it doesn’t include everything I could possibly have to say on this topic because, frankly, we have a small baby who is taking most of my time at the moment. Please feel free to add your comments, criticisms or other ideas to the comments below! It is all licensed as Creative Commons 4.0 By Attribution, so I hope it is useful to others working in this space.

The Importance of Vision

Without a vision, we stumble blindly in the darkness. Without a vision, the work and behaviours of people and organisations are inevitably driven by other competing and often short term priorities. In the case of large and complex organisms like the Australian Public Service, if there is no cohesive vision, no clear goal to aim for, then each individual department is going to do things their own way, driven by their own priorities, budgets, Ministerial whims and you end up with what we largely have today: a cacophony of increasingly divergent approaches driven by tribalism that make collaboration, interoperability, common systems and data reuse impossible (or prohibitively expensive).

If however, you can establish a common vision, then even a strongly decentralised system can converge on the goal. If we can establish a common vision for public data, then the implementation of data programs and policies across the APS should become naturally more consistent and common in practice, with people naturally motivated to collaborate, to share expertise, and to reuse systems, standards and approaches in pursuit of the same goal.

My vision for public data is two-pronged and a bit of a paradigm shift: data by design and gov as an API! “Data by design” is about taking a data driven approach to the business of government and “gov as an API” is about changing the way we use, consume, publish and share data to properly enable a data driven public service and a broader network of innovation. The implementation of these ideas would create mashable government that could span departments, jurisdictions and international boundaries. In a heavily globalised world, no government is in isolation and it is only by making government data, content and services API enabled and reusable/interfacable, that we, collectively, can start to build the kind of analysis, products and services that meet the necessarily cross jurisdictional needs of all Australians, of all people.

More specifically, my vision is a data driven approach to the entire business of government that supports:

  • evidence based and iterative policy making and implementation;

  • transparent, accountable and responsible Government;

  • an open competitive marketplace built on mashable government data, content and services; and

  • a more efficient, effective and responsive public service.

What this requires is not so simple, but is utterly achievable if we could embed a more holistic whole of government approach in the work of individual departments, and then identify and fill the gaps through a central capacity that is responsible for driving a whole of government approach. Too often we see the data agenda oversimplified into what outcomes are desired (data visualisations, dashboards, analysis, etc) however, it is only in establishing multipurpose data infrastructure which can be reused for many different purposes that we will enable the kind of insights, innovation, efficiencies and effectiveness that all the latest reports on realising the value of data allude to. Without actual data, all the reports, policies, mission statements, programs and governance committees are essentially wasting time. But to get better government data, we need to build capacity and motivation in the public sector. We need to build a data driven culture in government. We also need to grow consumer confidence because a) demand helps drive supply, and b) if data users outside the public sector don’t trust that they can find, use and rely upon at least some government data, then we won’t ever see serious reuse of government data by the private sector, researchers, non-profits, citizens or the broader community.

Below is a quick breakdown of each of these priorities, followed by specific recommendations for each:

data infrastructure that supports multiple types of reuse. Ideally all data infrastructure developed by all government entities should be built in a modular, API enabled way to support data reuse beyond the original purpose to enable greater sharing, analysis, aggregation (where required) and publishing. It is often hard for agencies to know what common infrastructure already exists and it is easy for gaps to emerge, so another part of this is to map the data infrastructure requirements for all government data purposes, identify where solutions exist and any gaps. Where whole of government approaches are identified, common data infrastructure should be made available for whole of government use, to reduce the barrier to publishing and sharing data for departments. Too often, large expensive data projects are implemented in individual agencies as single purpose analytics solutions that don’t make the underlying data accessible for any other purpose. If such projects separated the data infrastructure from the analytics solutions, then the data infrastructure could be built to support myriad reuse including multiple analytics solutions, aggregation, sharing and publishing. If government data infrastructure was built like any other national infrastructure, it should enable a competitive marketplace of analysis, products and service delivery both domestically and globally. A useful analogy to consider is the example of roads. Roads are not typically built just from one address to another and are certainly not made to only support certain types of vehicles. It would be extremely inefficient if everyone built their own custom roads and then had to build custom vehicles for each type of road. It is more efficient to build common roads to a minimum technical standard that any type of vehicle can use to support both immediate transport needs, but also unknown transport needs into the future. Similarly we need to build multipurpose data infrastructure to support many types of uses.

greater publisher capacity and motivation to share and publish data. Currently the range of publishing capacity across the APS is extremely broad, from agencies that do nothing to agencies that are prolific publishers. This is driven primarily by different cultures and responsibilities of agencies and if we are to improve the use of data, we need to improve the supply of data across the entire public sector. This means education and support for agencies to help them understand the value to their BAU work. The time and money saved by publishing data, opportunities to improve data quality, the innovation opportunities and the ability to improve decision making are all great motivations once understood, but generally the data agenda is only pitched in political terms that have little to no meaning to data publishers. Otherwise there is no natural motivation to publish or share data, and the strongest policy or regulation in the world does not create sustainable change or effective outcomes if you cannot establish a motivation to comply. Whilst ever publishing data is seen as merely a compliance issue, it will be unlikely for agencies to invest the time and skills to publish data well, that is, to publish the sort of data that consumers want to use.

greater consumer confidence to improve the value realised from government data. Supply is nothing without demand and currently there is a relatively small (but growing) demand for government data, largely because people won’t use what they don’t trust. In the current landscape it is difficult to find data and even if one can find it, it is often not machine readable or not freely available, is out of date and generally hard to use. There is not a high level of consumer confidence in what is provided by government so many people don’t even bother to look. If they do look, they find myriad data sources of varying quality and inevitably waste many hours trying to get an outcome. There is a reasonable demand for data for research and the research community tends to jump through hoops – albeit reluctantly and at great cost – to gain access to government data. However, the private and civic sectors are yet to seriously engage apart from a few interesting outliers. We need to make finding and using useful data easy, and start to build consumer confidence or we will never even scratch the surface of the billions of dollars of untapped potential predicted by various studies. The data infrastructure section is obviously an important part of building consumer confidence as it should make it easier for consumers to find and have confidence in what they need, but it also requires improving the data culture across the APS, better outreach and communications, better education for public servants and citizens on how to engage in the agenda, and targeted programs to improve the publishing of data already in demand. What we don’t need is yet another “tell us what data you want” because people want to see progress.

a data driven culture that embeds in all public servants an understanding of the role of data in the every day work of the public service, from program management, policy development, regulation and even basic reporting. It is important to take data from being seen as a specialist niche delegated only to highly specialised teams and put data front and centre as part of the responsibilities of all public servants – especially management – in their BAU activities. Developing this culture requires education, data driven requirements for new programs and policies, some basic skills development but mostly the proliferation of an awareness of what data is, why it is important, and how to engage appropriate data skills in the BAU work to ensure a data driven approach. Only with data can a truly evidence driven approach to policy be taken, and only with data can a meaningful iterative approach be taken over time.

Finally, obviously the approach above requires an appropriately skilled team to drive policy, coordination and implementation of the agenda in collaboration with the broader APS. This team should reside in a central agenda to have whole of government imprimatur, and needs a mix of policy, commercial, engagement and technical data skills. The experience of data programs around the world shows that when you split policy and implementation, you inevitably get both a policy team lacking in the expertise to drive meaningful policy and an implementation team paralysed by policy indecision and an unclear mandate. This space is changing so rapidly that policy and implementation need to be agile and mutually reinforcing with a strong focus on getting things done.

As we examine the interesting opportunities presented by new developments such as blockchain and big data, we need to seriously understand the shift in paradigm from scarcity to surplus, from centralised to distributed systems, and from pre-planned to iterative approaches, if we are to create an effective public service for the 21st century.

There is already a lot of good work happening, so the recommendations in this submission are meant to improve and augment the landscape, not replicate. I will leave areas of specialisation to the specialists, and have tried to make recommendations that are supportive of a holistic approach to developing a data-driven public service in Australia.

Current Landscape

There has been progress in recent years towards a more data driven public sector however, these initiatives tend to be done by individual teams in isolation from the broader public service. Although we have seen some excellent exemplars of big data and open data, and some good work to clarify and communicate the intent of a data driven public service through policy and reviews, most projects have simply expanded upon the status quo thinking of government as a series of heavily fortified castles that take the extraordinary effort of letting in outsiders (including other departments) only under strictly controlled conditions and with great reluctance and cost. There is very little sharing at the implementation level (though an increasing amount of sharing of ideas and experience) and very rarely are new initiatives consulted across the APS for a whole of government perspective. Very rarely are actual data and infrastructure experts encouraged or supported to work directly together across agency or jurisdiction lines, which is a great pity. Although we have seen the idea of the value of data start to be realised and prioritised, we still see the implementation of data projects largely delegated to small, overworked and highly specialised internal teams that are largely not in the habit of collaborating externally and thus there is a lot of reinvention and diversity in what is done.

If we are to realise the real benefits of data in government and the broader economy, we need to challenge some of the status quo thinking and approaches towards data. We need to consider government (and the data it collects) as a platform for others to build upon rather than the delivery mechanism for all things to all people. We also need to better map what is needed for a data-driven public service rather than falling victim to the attractive (and common, and cheap) notion of simply identifying existing programs of work and claiming them to be sufficient to meet the goals of the agenda.

Globally this is still a fairly new space. Certain data specialisations have matured in government (eg. census/statistics, some spatial, some science data) but there is still a lack of a cohesive approach to data in any one agency. Even specialist data agencies tend to not look beyond the specialised data to have a holistic data driven approach to everything. In this way, it is critical to develop a holistic approach to data at all levels of the public service to embed the principles of data driven decision making in everything we do. Catalogues are not enough. Specialist data projects are not enough. Publishing data isn’t enough. Reporting number of datasets quickly becomes meaningless. We need to measure our success in this space by how well data is helping the public service to make better decisions, build better services, develop and iterate responsive and evidence based policy agendas, measure progress and understand the environment in which we operate.

Ideally, government agencies need to adopt a dramatic shift in thinking to assume in the first instance that the best results will be discovered through collaboration, through sharing, through helping people help themselves. There also needs to be a shift in the APS away from thinking that a policy, framework, governance structure or other artificial constructs are sufficient outcomes. Such mechanisms can be useful, but they can also be a distraction from getting anything tangible done. Such mechanisms often add layers of complexity and cost to what they purport to achieve. Ultimately, it is only what is actually implemented that will drive an outcome and I strongly believe an outcomes driven approach must be applied to the public data agenda for it to achieve its potential.

References

In recent years there has been a lot of progress. Below is a quick list to ensure they are known and built upon for the future. It is also useful to recognise the good work of the government agencies to date.

  • Public Data Toolkit – the data.gov.au team have pulled together a large repository of information, guidance and reports over the past 3 years on our open data toolkit at http://toolkit.data.gov.au. There are also some useful contributions from the Department of Communications Spatial Policy Branch. The Toolkit has links to various guidance from different authoritative agencies across the APS as well as general information about data management and publishing which would be useful to this review.

  • The Productivity Commission is already aware of the Legislative and other Barriers Workshop I ran at PM&C before going on maternity leave, and I commend the outcomes of that session to the Review.

  • The Financial Sector Inquiry (the “Murray Inquiry”) has some excellent recommendations regarding the use of data-driven approaches to streamline the work and reporting of the public sector which, if implemented, would generate cost and time savings as well as the useful side effect of putting in place data driven practices and approaches which can be further leveraged for other purposes.

  • Gov 2.0 Report and the Ahead of the Game Report – these are hard to find copies of online now, but have some good recommendations and ideas about a more data centric and evidence based public sector and I commend them both to the Review. I’m happy to provide copies if required.

  • There are many notable APS agency efforts which I recommend the Productivity Commission engage with, if they haven’t already. Below are a few I have come across to date, and it is far from an exhaustive list:

    • PM&C (Public Data Management Report/Implementation & Public Data Policy Statement)

    • Finance (running and rebooting data.gov.au, budget publishing, data integration in GovCMS)

    • ABS (multi agency arrangement, ABS.Stat)

    • DHS (analytics skills program, data infrastructure and analysis work)

    • Immigration (analytics and data publishing)

    • Social Services (benefits of data publishing)

    • Treasury (Budget work)

    • ANDS (catalogue work and upskilling in research sector)

    • NDI (super computer functionality for science)

    • ATO (smarter data program, automated and publications data publishing, service analytics, analytics, dev lab, innovationspace)

    • Industry (Lighthouse data integration and analysis, energy ratings data and app)

    • CrimTRAC and AUSTRAC (data collection, consolidation, analysis, sharing)

  • Other jurisdictions in Australia have done excellent work as well and you can see a list (hopefully up to date) of portals and policies on the Public Data Toolkit. I recommend the Productivity Commission engage with the various data teams for their experiences and expertise in this matter. There are outstanding efforts in all the State and Territory Governments involved as well as many Local Councils with instructive success stories, excellent approaches to policy, implementation and agency engagement/skills and private sector engagement projects.

Some current risks/issues

There are a number of issues and risks that exist in pursuing the current approach to data in the APS. Below are some considerations to take into account with any new policies or agendas to be developed.

  • There is significant duplication of infrastructure and investment from building bespoke analytics solutions rather than reusable data infrastructure that could support multiple analytics solutions. Agencies build multiple bespoke analytics projects without making the underpinning data available for other purposes resulting in duplicated efforts and under-utilised data across government.

  • Too much focus on pretty user interfaces without enough significant investment or focus on data delivery.

  • Discovery versus reuse – too many example of catalogues linking to dead data. Without the data, discovery is less than useful.

  • Limitations of tech in agencies by ICT Department – often the ICT Department in an agency is reticent to expand the standard operating environment beyond the status quo, creating an issue of limitation of tools and new technologies.

  • Copyright and legislation – particularly old interpretations of each and other excuses to not share.

  • Blockers to agencies publishing data (skills, resources, time, legislation, tech, competing priorities e.g. assumed to be only specialists that can do data).

  • Often activities in the public sector are designed to maintain the status quo (budgets, responsibilities, staff count) and there is very little motivation to do things more efficiently or effectively. We need to establish these motivations for any chance to be sustainable.

  • Public perceptions about the roles and responsibilities of government change over time and it is important to stay engaged when governments want to try something new that the public might be uncertain about. There has been a lot of media attention about how data is used by government with concerns aired about privacy. Australians are concerned about what Government plans to do with their data. Broadly the Government needs to understand and engage with the public about what data it holds and how it is used. There needs to be trust built to both improve the benefits from data and to ensure citizen privacy and rights are protected. Where government wants to use data in new ways, it needs to prosecute the case with the public and ensure there are appropriate limitations to use in place to avoid misuse of the data. Generally, where Australians can directly view the benefit of their data being used and where appropriate limitations are in place, they will probably react positively. For example, tax submissions are easier now that data auto-fills from employers and health providers when completing Online Tax. People appreciate the concept of having to only update their details once with government.

Benefits

I agree with the benefits identified by the Productivity Commission discussion paper however I would add the following:

  • Publishing government data, if published well, enables a competitive marketplace of service and product delivery, the ability to better leverage public and academic analysis for government use and more broadly, taps into the natural motivation of the entire community to innovate, solve problems and improve life.

  • Establishing authoritative data – often government is the authoritative source of information it naturally collects as part of the function of government. When this data is not publicly available (through anonymised APIs if necessary), people will use whatever data they can get access to, reducing the authority of the data collected by Government.

  • A data-driven approach to collecting, sharing and publishing data enables true iterative approaches to policy and services. Without data, any changes to policy are difficult to justify and their impact impossible to track, so data provides a means to support change and to identify what is working quickly. Such feedback loops enable iterative improvements to policies and programs that can respond to the changing financial and social environment they operate in.

  • Publishing information in a data driven way can dramatically streamline reporting, government processes and decision making, freeing up resources that can be used for more high value purposes.

Public Sector Data Principles

The Public Data Statement provides a good basis of principles for this agenda. Below are some principles I think are useful to highlight with a brief explanation of each.

Principles:

  • build for the future - legacy systems will always be harder to deal with so agencies need to draw a line in the sand and ensure new systems are designed with data principles, future reuse and this policy agenda in mind. Otherwise we will continue to build legacy systems into the future. Meanwhile, just because a legacy system doesn’t natively support APIs or improved access doesn’t mean you can’t affordably build middleware solutions to extract, transform, share and publish data in an automated way.

  • data first - wherever data is used to achieve an outcome, publish the data along with the outcome. This will improve public confidence in government outcomes and will also enable greater reuse of government data. For example, where graphs or analysis are published also publish the data. Where a mobile app is using data, publish the data API. Where a dashboard is set up, also provide access to the underpinning data.

  • use existing data, from the source where possible - this may involve engaging with or even paying for data from private sector or NGOs, negotiating with other jurisdictions or simply working with other government entities to gain access.

  • build reusable data infrastructure first - wherever data is part of a solution, the data should be accessible through APIs so that other outcomes and uses can be realised, even if the APIs are only used for internal access in the first instance.

  • data driven decision making to support iterative and responsive policy and implementation approaches – all decisions should be evidence based, and all projects, policies and programs should have useful data indicators identified to measure and monitor the initiative and enable iterative changes backed by evidence.

  • consume your own data and APIs - agencies should consider how they can better use their own data assets and build access models for their own use that can be used publicly where possible. In consuming their own data and APIs, there is a better chance the data and APIs will be designed and maintained to support reliable reuse. This could include raw or aggregate data APIs for analytics, dashboards, mobile apps, websites, publications, data visualisations or any other purpose.

  • developer empathy - if government agencies start to prioritise the needs of data users when publishing data, there is a far greater likelihood the data will be published in a way developers can use. For instance, no developer likes to use PDFs, so why would an agency publish data in a PDF (hint: there is no valid reason. PDF does not make your data more secure!).

  • standardise where beneficial but don’t allow the perfect to delay the good - often the focus on data jumps straight to standards, and then multi year/decade standards initiatives are stood up which create huge delays to accessing actual data. If data is machine readable, it can often be used and mapped to some degree, which is more useful than having access to nothing.

  • automate, automate, automate! – where human effort is required, tasks will always be inefficient and prone to error. Data collection, sharing and publishing should be automated where possible. For example, when data is regularly requested, agencies should automate the publishing of data and updates which both reduces the work for the agency and improves the quality for data users.

  • common platforms - where possible agencies should use existing common platforms to share and publish data. Where they need to develop new infrastructure, efforts should be made to identify where new platforms might be useful in a whole of government or multi agency context and built to be shared. This will support greater reuse of infrastructure as well as data.

  • a little less conversation a little more action – the public service needs to shift from talking about data to doing more in this space. Pilot projects, experimentation, collaboration between implementation teams and practitioners, and generally a greater focus on getting things done.

Recommendations for the Public Data agenda

Strategic

  1. Strong Recommendation: Develop a holistic vision and strategy for a data-driven APS. This could perhaps be part of a broader digital or ICT strategy, but there needs to be a clear goal that all government entities are aiming towards. Otherwise each agency will continue to do whatever they think makes sense just for them with no convergence in approach and no motivation to work together.

  2. Strong Recommendation: Develop and publish a work program and roadmap with meaningful measures of progress and success, regularly reported publicly on a public data agenda dashboard. The NSW Government already has a public roadmap and dashboard to report progress on its open data agenda.

Whole of government data infrastructure

  1. Strong Recommendation: Grow the data.gov.au technical team to at least 5 people to grow the whole of government catalogue and cloud based data hosting infrastructure, to grow functionality in response to data publisher and data user requirements, to provide free technical support and training to agencies, and to regularly engage with data users to grow public confidence in government data. The data.gov.au experience demonstrated that even just a small motivated technical team could greatly assist agencies to start on their data publishing journey to move beyond policy hypothesising into practical implementation. This is not something that can be efficiently or effectively outsourced in my experience.

  • I note that in the latest report from PM&C, Data61 have been engaged to improve the infrastructure (which looks quite interesting) however, there still needs to be an internal technical capability to work collaboratively with Data61, to support agencies, to ensure what is delivered by contractors meets the technical needs of government, to understand and continually improve the technical needs and landscape of the APS, to contribute meaningfully to programs and initiatives by other agencies, and to ensure the policies and programs of the Public Data Branch are informed by technical realities.

  1. Recommendation: Establish/extend a data infrastructure governance/oversight group with representatives from all major data infrastructure provider agencies including the central public data team to improve alignment of agendas and approaches for a more holistic whole of government approach to all major data infrastructure projects. The group would assess new data functional requirements identified over time, would identify how to best collectively meet the changing data needs of the public sector and would ensure that major data projects apply appropriate principles and policies to enable a data driven public service. This work would also need to be aligned with the work of the Data Champions Network.

  2. Recommendation: Map out, publish and keep up to date the data infrastructure landscape to assist agencies in finding and using common platforms.

  3. Recommendation: Identify on an ongoing basis publisher needs and provide whole of government solutions where required to support data sharing and publishing (eg – data.gov.au, ABS infrastructure, NationalMap, analytics tools, github and code for automation, whole of gov arrangements for common tools where they provide cost benefits).

  4. Recommendation: Create a requirement for New Policy Proposals that any major data initiatives (particularly analytics projects) also make the data available via accessible APIs to support other uses or publishing of the data.

  5. Recommendation: Establish (or build upon existing efforts) an experimental data playground or series of playgrounds for agencies to freely experiment with data, develop skills, trial new tools and approaches to data management, sharing, publishing, analysis and reuse. There are already some sandbox environments available and these could be mapped and updated over time for agencies to easily find and engage with such initiatives.

Grow consumer confidence

  1. Strong Recommendation: Build automated data quality indicators into data.gov.au. Public quality indicators provide an easy way to identify quality data, thus reducing the time and effort required by data users to find something useful. This could also support a quality search interface, for instance data users could limit searches to “high quality government data” or choose granular options such as “select data updated this year”. See my earlier blog (from PM&C) for a draft of basic technical quality indicators which could be implemented quickly, giving data users a basic indication of how usable and useful data is in a consistent automated way. Additional quality indicators, including domain specific ones, could be implemented in a second or subsequent iteration of the framework.

  2. Strong Recommendation: Establish regular public communications and engagement to improve relations with data users, improve perception of agenda and progress and identify areas of data provision to prioritise. Monthly blogging of progress, public access to the agenda roadmap and reporting on progress would all be useful. Silence is generally assumed to mean stagnation, so it is imperative for this agenda to have a strong public profile, which in part relies upon people increasingly using government data.

  3. Strong Recommendation: Establish a reasonable funding pool for agencies to apply for when establishing new data infrastructure, when trying to make existing legacy systems more data friendly, and for responding to public data requests in a timely fashion. Agencies should also be able to apply for specialist resource sharing from the central and other agencies for such projects. This will create the capacity to respond to public needs faster and develop skills across the APS.

  4. Strong Recommendation: The Australian Government undertake an intensive study to understand the concerns Australians hold relating to the use of their data and develop a new social pact with the public regarding the use and limitations of data.

  5. Recommendation: establish a 1-2 year project to support Finance in implementing the data driven recommendations from the Murray Inquiry with 2-3 dedicated technical resources working with relevant agency teams. This will result in regulatory streamlining, improved reporting and analysis across the APS, reduced cost and effort in the regular reporting requirements of government entities and greater reuse of the data generated by government reporting.

  6. Recommendation: Establish a short program focused on publishing and reporting progress on some useful high value datasets, applying the Public Data Policy Statement requirements for data publishing. The list of high value datasets could be drawn from the Data Barometer, the Murray Inquiry, existing requests from data.gov.au, and work from PM&C. The effort of determining the MOST high value data to publish has potentially got in the way of actual publishing, so it would be better to use existing analysis and prioritise some datasets, but more importantly to establish a data by default approach across government to enable the kinds of serendipitous uses of data that lead to truly innovative outcomes.

  7. Recommendation: Citizen driven privacy – give citizens the option to share data for benefits and simplified services, and a way to access data about themselves.

Grow publisher capacity and motivation

  1. Strong Recommendation: Document the benefits for agencies of sharing data and create better guidance for agencies. There has been a lot of work since the reboot of data.gov.au to educate agencies on the value of publishing data. The value of specialised data sharing and analytics projects is often evident to those driving them, but traditionally there have not been many natural motivations for agencies to publish data, which has had the unfortunate result of low levels of data publishing. There is a lot of anecdotal evidence that agencies have saved time and money by publishing data publicly, which has in turn driven greater engagement and improvements in data publishing by agencies. If these examples were better documented (now that there are more resources) and if agencies were given more support in developing holistic public data strategies, we would likely see more data published by agencies.

  2. Strong Recommendation: Implement an Agency League Table to show agency performance on publishing or otherwise making government data publicly available. I believe such a league table needs to be carefully designed to include measures that will drive better behaviours in this space. I have previously mapped out a draft league table which ranks agency performance by quantity (number of data resources, weighted by type), quality (see previous note on quality metrics), efficiency (the time and/or money saved in publishing data) and value (a weighted measure of usage and reuse case studies) and would be happy to work with others in re-designing the best approach if useful.

  3. Recommendation: Establish regular internal hackfests with tools for agencies to experiment with new approaches to data collection, sharing, publishing and analysis – build on ATO lab, cloud tools, ATO research week, etc.

  4. Recommendation: Require data reporting component for New Policy Proposals and new tech projects wherein meaningful data and metrics are identified that will provide intelligence on the progress of the initiative throughout the entire process, not just at the end of the project.

  5. Recommendation: Add data principles and API driven and automated data provision to the digital service standard and APSC training.

  6. Recommendation: Require public APIs for all government data, appropriately aggregated where required, leveraging common infrastructure where possible.

  7. Recommendation: Establish a “policy difference engine” – a policy dashboard that tracks the top 10 or 20 policy objectives for the government of the day which includes meaningful metrics for each policy objective over time. This will enable the discovery of trends, the identification of whether policies are meeting their objectives, and supports an evidence based iterative approach to the policies because the difference made by any tweaks to the policy agenda will be evident.

  8. Recommendation: all publicly funded research data to be published publicly, and discoverable on central research data hub with free hosting available for research institutions. There has been a lot of work by ANDS and various research institutions to improve discovery of research data, but a large proportion is still only available behind a paywall or with an education logon. A central repository would reduce the barrier for research organisations to publicly publish their data.

  9. Recommendation: Require that major ICT and data initiatives consider cloud environments for the provision, hosting or analysis of data.

  10. Recommendation: Identify and then extend or provide commonly required spatial web services to support agencies in spatially enabling data. Currently individual agencies have to run their own spatial services but it would be much more efficient to have common spatial web services that all agencies could leverage.

Build a data driven culture across the APS

  1. Strong Recommendation: Ensure data approaches are considered in all major government investments. For example, if data sensors were built into major infrastructure projects it would create more intelligence about how the infrastructure is used over time. If all major investments included data reporting then perhaps it would be easier to keep projects on time and budget.

  2. Recommendation: Establish a whole of government data skills program, not just for specialist skills, but to embed an understanding of data driven approaches across the entire APS. This would ideally include mandatory data training for management (in the same way OH&S and procurement are mandatory training). At C is a draft approach that could be taken.

  3. Recommendation: Require that all government contracts that create new data make that data available to the contracting government entity under a Creative Commons Attribution licence, so that government funded data can be published publicly according to government policy. I have seen cases of contracts leaving ownership with companies and the data then not being reusable by government.

  4. Recommendation: Real data driven indicators required for all new policies, signed off by the data champions group, with data for KPIs publicly available on data.gov.au for public access and to feed policy dashboards. Government entities must identify existing data to feed KPIs where possible, whether from government, the private sector or the community, and only propose new data collection where new data is clearly required.

  • Note: it was good to see a new requirement to include evidence based on data analytics for new policy proposals and to consult with the Data Champions about how data can support new proposals in the recently launched implementation report on the Public Data Management Report. However, I believe it needs to go further and require data driven indicators be identified up front and reported against throughout as per the recommendation above. Evidence to support a proposal does not necessarily provide the ongoing evidence to ensure implementation of the proposal is successful or has the intended effect, especially in a rapidly changing environment.

  1. Recommendation: Establish relationships with the private sector to identify aggregate data points already used in the private sector that could be leveraged by the public sector rather than collecting new data. This would be more efficient and accurate than new data collection.

  2. Recommendation: Establish or extend a cross agency senior management data champions group with specific responsibilities to oversee the data agenda, sign off on data indicators for NPPs as realistic, provide advice to Government and Finance on data infrastructure proposals across the APS.

  3. Recommendation: Investigate the possibilities for improving or building data sharing environments for better sharing data between agencies.

  4. Recommendation: Take a distributed and federated approach to linking unit record data. Secure API access to sensitive data would avoid creating a honey pot.

  5. Recommendation: Establish data awards as part of annual ICT Awards to include: most innovative analytics, most useful data infrastructure, best data publisher, best data driven policy.

  6. Recommendation: Extend the whole of government service analytics capability started at the DTO and provide access to all agencies to tap into a whole of government view of how users interact with government services and websites. This function and intelligence, if developed as per the original vision, would provide critical evidence of user needs as well as the impact of changes and useful automated service performance metrics.

  7. Recommendation: Support data driven publishing including an XML platform for annual reports and budgets, a requirement for data underpinning all graphs and datavis in gov publications to be published on data.gov.au.

  8. Recommendation: develop a whole of government approach to unit record aggregation of sensitive data to get consistency of approach and aggregation.

Implementation recommendations

  1. Move the Public Data Branch to an implementation agency – Currently the Public Data Branch sits in the Department of Prime Minister and Cabinet. Considering this Department is a policy entity, the question arises as to whether it is the right place in the longer term for an agenda which requires a strong implementation capability and focus. Public data infrastructure needs to be run like other whole of government infrastructure and would be better served as part of a broader online services delivery team. Possible options would include one of the shared services hubs, a data specialist agency with a whole of government mandate, or the office of the CTO (Finance) which runs a number of other whole of government services.

Downloadable copy

July 26, 2016

Gather-ing some thoughts on societal challenges

On the weekend I went to the GatherNZ event in Auckland, an interesting unconference. I knew there were going to be some pretty awesome people hanging out, which gave me a chance to catch up with and introduce the family to some friends, hear some interesting ideas, and road test some ideas I’ve been having about where we are all heading in the future. I ran a session I called “Choose your own adventure, please” and it was packed! Below is a bit of a write up of what was discussed as there was a lot of interest in how to keep the conversation going. I confess, I didn’t expect so much interest as to be asked where the conversation could be continued, but this is a good start I think. I was particularly chuffed when a few attendees said the session blew their minds :)

I’m going to be blogging a fair bit over the coming months on this topic in any case as it relates to a book I’m in the process of researching and writing, but more on that next week!

Choose your own adventure, please

We are at a significant tipping point in history. The world and the very foundations our society were built on have changed, but we are still largely stuck in the past in how we think and plan for the future. If we don’t make some active decisions about how we live, think and prioritise, then we will find ourselves subconsciously reinforcing the status quo at every turn and not in a position to genuinely create a better future for all. I challenge everyone to consider how they think and to actively choose their own adventure, rather than just doing what was done before.

How has the world changed? Well many point to the changes in technology and science, and the impact these have had on our quality of life. I think the more interesting changes are in how power and perspectives have changed, which created the environment for scientific and technological progress in the first instance, but also created the ability for many many more individuals to shape the world around them. We have seen traditional paradigms of scarcity, centralisation and closed systems be outflanked and outdated by modern shifts to surplus, distribution and open systems. When you were born a peasant and died one, what power did you have to affect your destiny? Nowadays individuals are more powerful than ever in our collective history, with the traditionally centralised powers of publishing, property, communications, monitoring and even enforcement now distributed internationally to anyone with access to a computer and the internet, which is over a third of the world’s population and growing. I blogged about this idea more here. Of course, these shifts are proving challenging for traditional institutions and structures to keep up with, but individuals are simply routing around these dinosaurs, putting such organisations in the uncomfortable position of either adapting or rendering themselves irrelevant.

Choices, choices, choices

We discussed a number of specific premises or frameworks that underpinned the development of much of the world we know today, but are now out of touch with the changing world we live in. It was a fascinating discussion, so thank you to everyone who came and contributed and although I think we only scratched the surface, I think it gave a lot of people food for thought :)

  • Open vs closed – open systems (open knowledge, data, government, source, science) are outperforming closed ones in almost everything from science, technology, business models, security models, government and political systems, human knowledge and social models. Open systems enable rapid feedback loops that support greater iteration and improvements in response to the world, and open systems create a natural motivation for the players involved to perform well and gain the benefits of a broader knowledge, experience and feedback base. Open systems also support a competitive collaborative environment, where organisations can collaborate on the common, but compete on their specialisation. We discussed how security by obscurity was getting better understood as a largely false premise and yet, there are still so many projects, decisions, policies or other initiatives where closed is the assumed position, in contrast to the general trend towards openness across the board.
  • Central to distributed – many people and organisations still act like kings in castles, protecting their stuff from the masses and only collaborating with walls and moats in place to keep out the riff raff. The problem is that everything is becoming more distributed, and the smartest people will never all be in the one castle, so if you want the best outcomes, be it for a policy, product, scientific discovery, service or anything else, you need to consider what is out there and how you can be a part of a broader ecosystem. Building on the shoulders of giants and being a shoulder for others to build upon. Otherwise you will always be slower than those who know how to be a node in the network. Although deeply hierarchical systems still exist, individuals are learning how to route around the hierarchy (which is only an imaginary construct in any case). There will always be specialists and the need for central controls over certain things however, if whatever you do is done in isolation, it will only be effective in isolation. Everything and everyone is more and more interconnected so we need to behave more in this way to gain the benefits, and to ensure what we do is relevant to those we do it for. By tapping into the masses, we can also tap into much greater capacity and feedback loops to ensure how we iterate is responsive to the environment we operate in. Examples of the shift included media, democracy, citizen movements, ideology, security, citizen science, gov as an API, transnational movements and the likely impact of blockchain technologies on the financial sector.
  • Scarcity to surplus – the shift from scarcity to surplus is particularly interesting because so much of our laws, governance structures, business models, trade agreements and rules for living are based around antiquated ideas of scarcity and property. We now apply the idea of ownership to everything and I shared a story of a museum claiming ownership on human remains taken from Australia. How can you own that and then refuse to repatriate the remains to that community? Copyright was developed when the ability to copy something was costly and hard. Given digital property (including a lot of “IP”) is so easily replicated with low/zero cost, it has wrought havoc with how we think about IP and yet we have continued to duplicate this antiquated thinking in a time of increasing surplus. This is a problem because new technologies could genuinely create surplus in physical properties, especially with the developments in nano-technologies and 3D printing, but if we bind up these technologies to only replicate the status quo, we will never realise the potential to solve major problems of scarcity, like hunger or poverty.
  • Nationalism and tribalism – because of global communications, more people feel connected with their communities of interest, which can span geopolitical, language, disability and other traditional barriers to forming groups. This will also have an impact on loyalties because people will have an increasingly complex relationship with the world around them. Citizens can and will increasingly jurisdiction shop for a nation that supports their lifestyle and ideological choices, the same way that multinational corporates have jurisdiction shopped for low tax, low regulation environments for some time. On a more micro level, individuals engage in us vs them behaviours all the time, and it gets in the way of working together.
  • Human augmentation and (dis)ability – what it means to look and be human will start to change as more human augmentation becomes mainstream. Not just cosmetic augmentations, but functional ones. The body hacking movement has been playing with human abilities and has discovered that the human brain can literally adapt to and start to interpret foreign neurological inputs, which opens up the path to not just augmenting existing human abilities, but expanding and inventing new human abilities. If we consider that the Olympics have pretty much found the limit of natural human sporting achievement and have become arguably a bit boring, perhaps we could lift the limitations on the Paralympics and start to see rocket powered 100m sprints, or cyborg Judo competitions. As we start to explore what we can do with ourselves physically, neurologically and chemically, it will challenge a lot of views on what it means to be human. But why should we limit ourselves?
  • Outsourcing personal responsibility – with advances in technology, many have become lazy about how far their personal responsibility extends. We outsource small tasks, then larger ones, then strategy, then decision making, and we end up having no personal responsibility for major things in our world. Projects can fail, decisions become automated, ethics get buried in code, but individuals can keep their noses clean. We need to stop trying to avoid risk to the point where we don’t do anything and we need to ensure responsibility for human decisions are not automated beyond human responsibility.
  • Unconscious bias of privileged views, including digital colonialism – the need to be really aware of our assumptions and try to not simply reinvent the status quo or reinforce “structural white supremacy” as it was put by the contributor. Powerful words worth pondering! Explicit inclusion was put forward as something to prioritise.
  • Work – how we think about work! If we are moving into a more automated landscape, perhaps how we think about work will fundamentally change which would have enormous ramifications for the social and financial environment. Check out Tim Dunlop’s writing on this :)
  • Facts to sensationalism – the flow of information and communications are now so rapid that people, media and organisations are motivated to ever more sensationalism rather than considered opinions or facts. Definitely a shift worth considering!

Other feedback from the room included:

  • The importance of considering ethics, values and privilege in making decisions.
  • The ability to route around hierarchy, but the inevitable push back of established powers on the new world.
  • The idea that we go in cycles of power from centralised to distributed and back again. I confess, this idea is new to me and I’ll be pondering on it more.

Any feedback, thinking or ideas welcome in the comments below :) It was a fun session.

July 23, 2016

Gather Conference 2016 – Afternoon

The Gathering

Chloe Swarbrick

  • Whose responsibility is it to disrupt the system?
  • Maybe try and engage with the system we have for a start before writing it off.
  • You disrupt the system yourself or you hold the system accountable

Nick McFarlane

  • He wrote a book
  • Rock Stars are dicks to work with

So you want to Start a Business

  • Hosted by Reuben and Justin (the accountant)
  • Things you need to know in your first year of business
  • How serious is the business, what sort of structure
    • If you are serious, you have to do things properly
    • Have you got paying customers yet
    • Could just be an idea or a hobby
  • Sole Trader vs Incorporated company vs Trust vs Partnership
  • Incorporated
    • Directors and Shareholders needed to be decided on
    • Can take just half an hour
  • when to get a GST number?
    • If over $60k turnover a year
    • If you have lots of stuff you plan to claim back.
  • Have an accounting System from Day 1 – Xero Pretty good
  • Get an advisor or mentor that is not emotionally invested in your company
  • If partnership then split up responsibilities so you can hold each other accountable for specific items
  • If you are using Xero then your accountant should be using Xero directly not copying it into a different system.
  • Remuneration
    • Should have a shareholders agreement
    • PAYE possibility from drawings or put 30% aside
    • Even if only a small hobby company you will need to declare income to IRD, especially at a non-trivial level.
  • What Level to start at Xero?
    • Probably from the start if the business is intended to be serious
    • A bit of pain to switch over later
  • Don’t forget about ACC
  • Remember you are due provisional tax once you get over the $2500 for the previous year.
  • Home Office expense claim – claim percentage of home rent, power etc
  • Get in professionals to help

Diversity in Tech

  • Diversity is important
    • Why is it important?
    • Does it mean the same for everyone
  • If we have people with different “ways of thinking” then we will have diverse views and therefore wider and better solutions
  • example: “a Polish engineer could analyse a Polish specific character input error”
  • example: “controlling a robot in Samoan”, robots are not just in English
  • Stereotypes tie some groups to specific jobs, eg “Indians in tech support”
  • Example: all hires went through University of Auckland so had done the same courses etc
  • How do you fix it when people innocently hire everyone from the same background? How do you break the pattern? Does the first different hire have to represent everybody in that group?
  • I didn’t want to be a trail-blazer
  • Was wowed at a “Women in tech” event, the first time she saw “the majority of people are like me” in a bar.
  • “If he is a white male and I’m going to hire him onto a team that is already full of white men, he’d better be exceptional”
  • Worried about the implied “diversity” vs “meritocracy” framing and the suggestion that diverse candidates are not as good
  • Usual over-representation of white-males in the discussion even in topics like this.
  • Notion that somebody was only hired to represent diversity is very harmful especially for that person
  • If you are hiring for a tech position then 90% of your candidates will be white males; put your diversity effort into getting a more diverse group applying for the jobs rather than tilting the actual hiring.
  • Even in maker spaces where anyone is welcome, there are a lot fewer women. One suggested reason: men’s mags show things unfinished while in women’s mags everything is perfect, so women don’t want to show off something that is unfinished.
  • Need to make the workforce diverse now to match the younger people coming into it
  • Need to cover lower income people who are not exposed to tech
  • Even a small number are role models for the future for the young people today
  • Also need to address the problem of women dropping out of tech in the 30s and 40s. We can’t push girls into an “environment filled with acid”
  • Example taking out “cocky arrogant males” from classes into “advanced stream” and the remaining class saw women graduating and staying in at a much higher rate.

Podcasting

  • Paul Spain from Podcast New Zealand organising
  • Easiest to listen to when doing manual stuff or in car or bus
  • Need to avoid overload of commercials, eg interview people from the company about the topic of interest rather than about their product
  • Big firms putting money into podcasting
  • In the US 21% of the market are listening every single month. In NZ perhaps more like 5% since not a lot of awareness or local content
  • Some radio shows are being re-cut and published as podcasts
  • Not a good directory of NZ podcasts
  • Advised people to use proper equipment if possible for anything more than a one-off. Bad sound quality is very noticeable.
  • One person: 5 part series on immigration and immigrants in NZ
  • Making the charts is a big exposure
  • Apple’s “new and noteworthy” list
  • Domination by traditional personalities and existing broadcasters at present. But that only helps traction within New Zealand

 

 


Replacing a failed RAID drive

Here's the complete procedure I followed to replace a failed drive from a RAID array on a Debian machine.

Replace the failed drive

After seeing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:

smartctl -a /dev/sdb
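
If you only want the identifying fields, the output can be narrowed down (same information, just filtered; smartctl -i prints only the drive identity section):

smartctl -i /dev/sdb | grep -E 'Device Model|Serial Number'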

Armed with this information, I shut down the computer, pulled the bad drive out and put the new blank one in.

Initialize the new drive

After booting with the new blank drive in, I copied the partition table using parted.

First, I took a look at what the partition table looks like on the good drive:

$ parted /dev/sda
unit s
print

and created a new empty one on the replacement drive:

$ parted /dev/sdb
unit s
mktable gpt

then I ran mkpart for all 4 partitions and made them all the same size as the matching ones on /dev/sda.

Finally, I ran toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) for all RAID partitions, before verifying using print that the two partition tables were now the same.
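
As a rough illustration only (the partition names below are arbitrary labels, and the start/end sectors are placeholders that must be replaced with the exact values shown by print on /dev/sda), the rest of the parted session on the replacement drive looks something like this:

$ parted /dev/sdb
unit s
mkpart grub 2048s 4095s
mkpart boot 4096s 999423s
mkpart swap 999424s 1968127s
mkpart root 1968128s 1953525134s
toggle 1 bios_grub
toggle 2 raid
toggle 3 raid
toggle 4 raid
print

Here partition 1 is the bios_grub partition and partitions 2 to 4 are the RAID members, matching the layout used in the rest of this article.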

Resync/recreate the RAID arrays

To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following on my RAID1 partitions:

mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4

and kept an eye on the status of this sync using:

watch -n 2 cat /proc/mdstat

In order to speed up the sync, I used the following trick:

blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max

Then, I recreated my RAID0 swap partition like this:

mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1

Because the swap partition is brand new (you can't restore a RAID0, you need to re-create it), I had to update two things (a quick way to fetch both values is sketched after the list):

  • replace the UUID for the swap mount in /etc/fstab, with the one returned by mkswap (or running blkid and looking for /dev/md1)
  • replace the UUID for /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by mdadm --detail --scan
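
Both values can be read back from the command line. The first two commands follow directly from the steps above; the update-initramfs step is my own addition, which is usually wanted on Debian after editing mdadm.conf:

blkid /dev/md1          # UUID for the swap entry in /etc/fstab
mdadm --detail --scan   # ARRAY line containing the UUID for /etc/mdadm/mdadm.conf
update-initramfs -u     # not in the original steps: rebuild the initramfs so it picks up the new mdadm.conf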

Ensuring that I can boot with the replacement drive

In order to be able to boot from both drives, I reinstalled the grub boot loader onto the replacement drive:

grub-install /dev/sdb

before rebooting with both drives to first make sure that my new config works.

Then I booted without /dev/sda to make sure that everything would be fine should that drive fail and leave me with just the new one (/dev/sdb).

This test obviously gets the two drives out of sync, so I rebooted with both drives plugged in and then had to re-add /dev/sda to the RAID1 arrays:

mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4

Once that finished, I rebooted again with both drives plugged in to confirm that everything is fine:

cat /proc/mdstat

Then I ran a full SMART test over the new replacement drive:

smartctl -t long /dev/sdb
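
The long test runs in the background inside the drive; once the estimated runtime reported by smartctl has elapsed, the result can be read back (these are just the standard smartctl options, not part of the original procedure):

smartctl -l selftest /dev/sdb   # self-test log, including the result of the long test
smartctl -H /dev/sdb            # overall SMART health assessment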

July 22, 2016

Gather Conference 2016 – Morning

At the Gather Conference again for about the 6th time. It is a 1-day tech-orientated unconference held in Auckland every year.

The day is split into seven streamed sessions each 40 minutes long (of about 8 parallel rooms of events that are each scheduled and run by attendees) plus an opening and a keynote session.

How to Steer your own career – Shirley Tricker

  • Asked people hands up on their current job situation, FT vs PT, single vs multiple jobs
  • Alternatives to traditional careers of work; it is possible to craft your own career
  • Recommended Blog – Free Range Humans
  • Job vs Career
    • Job – something you do for somebody else
    • Career – Unique to you, your life’s work
    • Career – What you do to make a contribution
  • Predicted that a greater number of people will not stay with one (or even 2 or 3) employers through their career
  • Success – defined by your goals, lifestyle wishes
  • What are your strengths – Know how you are valuable, what you can offer people/employers, ways you can branch out
  • Hard and Soft Skills (soft skills defined broadly, things outside a regular job description)
  • Develop soft skills
    • List skills and review ways to develop and improve them
    • Look at people you admire and copy them
    • Look at job descriptions
  • Skills you might need for a portfolio career
    • Good at organising, marketing, networking
    • flexible, work alone, negotiation
    • Financial literacy (handle your accounts)
  • Getting started
    • Start small ( don’t give up your day job overnight)
    • Get training via work or independently
    • Develop your strengths
    • Fix weaknesses
    • Small experiments
    • cheap and fast (start a blog)
    • Don’t have to start out as an expert, you can learn as you go
  • Just because you are in control doesn’t make it easy
  • Resources
    • Careers.govt.nz
    • Seth Godin
    • Tim Ferris
    • eg outsources her writing.
  • Tools
    • Xero
    • WordPress
    • Canva for images
    • Meetup
    • Odesk and other freelance websites
  • Feedback from Audience
    • Have somebody to report to, eg meet with friend/adviser monthly to chat and bounce stuff off
    • Cultivate Women’s mentoring group
    • This doesn’t seem to filter through to young people, they feel they have to pick a career at 18 and go to university to prep for that.
    • Give advice to people and this helps you define
    • Try and make the world a better place: enjoy the work you are doing, be happy and proud of the outcome of what you are doing and be happy that it is making the world a bit better
    • How do I “motivate myself” without a push from your employer?
      • Do something that you really want to do so you won’t need external motivation
      • Find someone who is doing something right and see what they did
      • Awesome for introverts
    • If you want to start a startup then work for one to see what it is like and learn skills
    • You don’t have to have a startup in your 20s, you can learn your skills first.
    • Sometimes you have to do a crappy job at the start to get onto the cool stuff later. You have to look at the goal or path sometimes

Books and Podcasts – Tanya Johnson

Stuff people recommend

  • Intelligent disobedience – Ira
  • Hamilton the revolution – based on the musical
  • Never Split the difference – Chris Voss (ex hostage negotiator)
  • The Three Body Problem – Liu Cixin – Sci Fi series
  • Lucky Peach – Food and fiction
  • Unlimited Memory
  • The Black Swan and Fooled by Randomness
  • The Setup (usesthis.com) website
  • Tim Ferris Podcast
  • Freakonomics Podcast
  • Moonwalking with Einstein
  • Clothes, Music, Boy – Viv Albertine
  • TIP: Amazon Whispersync for Kindle App (audiobook across various platforms)
  • TIP: Blinkist – 15 minute summaries of books
  • An Intimate History of Humanity – Theodore Zeldin
  • How to Live – Sarah Bakewell
  • TIP: Pocketcasts is a good podcast app for Android.
  • Tested Podcast from Mythbusters people
  • Trumpcast podcast from Slate
  • A Fighting Chance – Elizabeth Warren
  • The Choice – Og Mandino
  • The Good life project Podcast
  • The Ted Radio Hour Podcast (on 1.5 speed)
  • This American Life
  • How to be a Woman by Caitlin Moran
  • The Hard thing about Hard things books
  • Flashboys
  • The Changelog Podcast – Interview people doing Open Source software
  • The Art of Oppertunity Roseland Zander
  • Red Rising Trilogy by Pierce Brown
  • On the Rag podcast by the Spinoff
  • Hamish and Andy podcast
  • Radiolab podcast
  • Hardcore History podcast
  • Car Talk podcast
  • Ametora – Story of Japanese menswear since WW2
  • .net rocks podcast
  • How not to be wrong
  • Savage Love Podcast
  • Friday Night Comedy from the BBC (especially the News Quiz)
  • Answer me this Podcast
  • Back to work podcast
  • Reply All podcast
  • The Moth
  • Serial
  • American Blood
  • The Productivity podcast
  • Keeping it 1600
  • Ruby Rogues Podcast
  • Game Change – John Heilemann
  • The Road less Travelled – M Scott Peck
  • The Power of Now
  • Snow Crash – Neal Stephenson

My Journey to becoming a Change Agent – Suki Xiao

  • Start of 2015 was a policy adviser at Ministry
  • Didn’t feel connected to the job or the people she was making policies for
  • Outside of work was a Youthline counsellor
  • Wanted to make a difference, organised some internal talks
  • Wanted to make changes, got told had to be a manager to make changes (10 years away)
  • Found out about R9 accelerator. Startup accelerator looking at Govt/Business interaction and pain points
  • Get seconded to it
  • First month was very hard.
  • Speed of change was difficult, “Lean into the discomfort” – Team motto
  • Be married to the problem
    • The specific problem was making sure there were enough seasonal workers; the team came up with a solution but customers didn’t like it. It was not solving the actual problem customers had.
    • The team was married to the problem, not married to the solution
  • When went back to old job, found slower pace hard to adjust back
  • Got offered a job back at the accelerator, coaching up to 7 teams.
    • Very hard work, lots of work, burnt out
    • 50% pay cut
    • Worked out wasn’t “Agile” herself
    • Started doing personal Kanban boards
    • Cut back number of teams coaching, higher quality
  • Spring Board
    • Place can work at sustainable pace
    • Working at Nomad 8 as an independent Agile consultant
    • Work on separate companies but with some support from colleagues
  • Find my place
    • Joined Xero as an Agile Team Facilitator
  • Takeaways
    • Anybody can be a change agent
    • An environment that supports and empowers
    • Look for support
  • Conversation on how you overcome the “Everest” big huge goal
    • Hard to get past the first step for some – speaker found she tended to do first think later. Others over-thought beforehand
    • It seems hard but think of the hard things you have done in your life and it is usually not as bad
    • Motivate yourself by having no money and having no choice
    • Point all the bad things out in the open, visualise them all and feel better cause they will rarely happen
    • Learn to recognise your bad patterns of thoughts
    • “The War of Art” – Steven Pressfield (skip the Angels chapter)
  • Are places serious about Agile instead of just paying lip-service?
    • Questioner was older and found places wanted younger Agile coaches
    • Companies have to completely change their organisation, eg replace project managers
    • eg CEO is still waterfall but people lower down are into Agile. Not enough management buy-in.
    • Speaker left one client that wasn’t serious about changing
  • Went through an Agile process, made “Putting Agile into the Org” the product
  • Show customers what the value is
  • Certification advice: all sorts of options. The Nomad8 course is recommended

 


802.1x Authentication on Debian

I recently had to setup some Linux workstations with 802.1x authentication (described as “Ethernet authentication”) to connect to a smart switch. The most useful web site I found was the Ubuntu help site about 802.1x Authentication [1]. But it didn’t describe exactly what I needed so I’m writing a more concise explanation.

The first thing to note is that the authentication mechanism works the same way as 802.11 wireless authentication, so it’s a good idea to have the wpasupplicant package installed on all laptops just in case you need to connect to such a network.

The first step is to create a wpa_supplicant config file, I named mine /etc/wpa_supplicant_SITE.conf. The file needs contents like the following:

network={
 key_mgmt=IEEE8021X
 eap=PEAP
 identity="USERNAME"
 anonymous_identity="USERNAME"
 password="PASS"
 phase1="auth=MD5"
 phase2="auth=CHAP password=PASS"
 eapol_flags=0
}

The first difference between what I use and the Ubuntu example is that I’m using “eap=PEAP“, that is an issue of the way the network is configured, whoever runs your switch can tell you the correct settings for that. The next difference is that I’m using “auth=CHAP” and the Ubuntu example has “auth=PAP“. The difference between those protocols is that CHAP has a challenge-response and PAP just has the password sent (maybe encrypted) over the network. If whoever runs the network says that they “don’t store unhashed passwords” or makes any similar claim then they are almost certainly using CHAP.

Change USERNAME and PASS to your user name and password.
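
Since this file now contains a plaintext password, it is worth restricting who can read it (a small precaution of my own, not part of the original instructions):

chmod 600 /etc/wpa_supplicant_SITE.conf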

wpa_supplicant -c /etc/wpa_supplicant_SITE.conf -D wired -i eth0

The above command can be used to test the operation of wpa_supplicant.

Successfully initialized wpa_supplicant
eth0: Associated with 00:01:02:03:04:05
eth0: CTRL-EVENT-EAP-STARTED EAP authentication started
eth0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=25
TLS: Unsupported Phase2 EAP method 'CHAP'
eth0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected
eth0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject=''
eth0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject=''
EAP-MSCHAPV2: Authentication succeeded
EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed
eth0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
eth0: CTRL-EVENT-CONNECTED - Connection to 00:01:02:03:04:05 completed [id=0 id_str=]

Above is the output of a successful test with wpa_supplicant. I replaced the MAC of the switch with 00:01:02:03:04:05. Strangely it doesn’t like “CHAP” but is automatically selecting “MSCHAPV2” and working, maybe anything other than “PAP” would do.

auto eth0
iface eth0 inet dhcp
  wpa-driver wired
  wpa-conf /etc/wpa_supplicant_SITE.conf

Above is a snippet of /etc/network/interfaces that works with this configuration.
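
To apply the new stanza without a reboot, the interface can simply be bounced and then checked (assuming ifupdown is managing eth0, as the snippet above implies):

ifdown eth0 && ifup eth0
ip addr show eth0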

July 21, 2016

Conversations on Collected Health Data

There are more and more wearable devices that collect a variety of health data, and other health records are kept electronically. More often than not, the people whose data it is don’t actually have access. There are very important issues to consider, and you could use this for a conversation with your students, and in assignments.

On the individual level, questions such as

  • Who should own your health data?
  • Should you be able to get an overview of who has what kind of your data?  (without fuzzy vague language)
  • Should you be able to access your own data? (directly out of a device, or online service where a device sends its data)
  • Should you be able to request a company to completely remove data from their records?

For society, questions like

  • Should a company be allowed to hoard data, or should they be required to make it accessible (open data) for other researchers?

A comment piece in this week’s Nature entitled “Lift the blockade on health data” could be used as a starting point for a conversation and for additional information:

http://nature.com/articles/doi:10.1038/535345a

Technology titans, such as Google and Apple, are moving into health. For all the potential benefits, the incorporation of people’s health data into algorithmic ‘black boxes’ could harm science and exacerbate inequalities, warn John Wilbanks and Eric Topol in a Comment piece in this week’s Nature. “When it comes to control over our own data, health data must be where we draw the line,” they stress.

Cryptic digital profiling is already shaping society; for example, online adverts are tailored to people’s age, location, spending and browsing habits. Wilbanks and Topol envision a future in which “companies are able to trade people’s disease profiles, unbeknown to them” and where “health decisions are abstruse and difficult to challenge, and advances in understanding are used to aggressively market health-related services to people — regardless of whether those services actually benefit their health.”

The authors call for a campaigning movement similar to the environmental one to break open how people’s data are being used, and to illuminate how such information could be used in the future. In their view, “the creation of credible competitors that are open source is the most promising way to regulate” corporations that have come to “resemble small nations in their own right”.

 

July 18, 2016

Social Engineering/Manipulation, Rigging Elections, and More

We recently had an election locally and I noticed how they were handing out 'How To Vote' cards, which made me wonder: how much social engineering and manipulation do we experience each day/throughout our lives (please note that all of the results are basically from the first few pages of any publicly available search engine)? - think about the education system and the way we're mostly taught to

July 16, 2016

GnuCOBOL: A Gnu Life for an Old Workhorse

COBOL is a business-orientated programming language that has been in use since 1959, making it one of the world's oldest programming languages.

Despite being much criticised (and for good reasons) it is still a major programming language in the financial sector, although the number of experienced programmers is declining.

read more

Making surface mount pcbs with a CNC machine

The cool kids™ like to use toaster ovens with thermocouples to bake their own surface mount boards at home. I've been exploring doing that using boards that I make on a CNC locally. The joy of designing in the morning and having the working product in the evening. It seems SOIC size is ok, but smaller SMT IC packages currently present an issue. This gives interesting fodder for how to increase precision down further. Doing SOIC and SMD LEDs/resistors from a sub $1k CNC machine isn't too bad though IMHO. And unlike other pcb specific CNC machines I can also cut wood and metal with my machine :-p


Time to stock up on some SOIC microcontrollers for some full board productions. It will be very interesting to see if I can do an SMD usb connector. Makes it a nice complete black box to do something and talk ROS over USB.

July 12, 2016

Using Smatch static analysis on OpenPOWER OPAL firmware

For Skiboot, I’m always looking at new automated systems to find bugs in the code. A little while ago, I read about the Smatch tool developed by some folks at Oracle (they also wrote about using it on the Linux kernel).

I was eager to try it with skiboot to see if it could find anything.

Luckily, it was pretty easy. I built Smatch according to their documentation and then built skiboot:

make CHECK="/home/stewart/smatch/smatch" C=1 -j20 all check

Due to some differences in how we implement abort() and assert() in skiboot, I added “_abort”, “abort” and “assert_fail” to smatch_data/no_return_funcs in the Smatch source tree to silence some false positives.
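
For reference, something like the following is all that change amounts to (a sketch, assuming the Smatch tree lives in ~/smatch and that the data file is a plain list with one function name per line, which is how I read it):

cat >> ~/smatch/smatch_data/no_return_funcs << 'EOF'
_abort
abort
assert_fail
EOF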

It seems that there’s a few useful warnings there (some of which I’ve fixed in skiboot master already), along with some false positives around the preprocessor/compiler tricks we do to ensure at compile time that an OPAL call definition has the correct number of arguments specified.

So far, so good though. Try it on your project!

July 11, 2016

Pia, Thomas and little A’s Excellent Adventure – Week 3

The last fortnight has just flown past! We have been getting into the rhythm of being on holidays, a difficult task for yours truly as the workaholic I am! Meanwhile we have also caught a lot more fish (up to 57 now, 53 were released), have been keeping up with the studies and little A has been (mostly) enjoying a broad range of new foods and experiences. The book is on hold for another week or two while I finish another project off.

Photos are added every few days to the flickr album.

Studies

My studies are going well. The two (final) subjects are “Law, Governance and Policy” and “White Collar Crime”. They are both great subjects and I’ve been thoroughly enjoying the readings, discussions and thinking critically about the issues therein. The White Collar Crime topic in particular has been fascinating! Each week we look at case studies of WCC in the news and there are some incredible issues every single week. A recent one directly relevant to us was the ACCC suing Heinz over a baby food advertised as “99% fruit” but made up of fruit concentrates and purees, resulting in a 67% sugar product. Wow! The advertising is all about how healthy it is and how it develops a taste for real foods in toddlers, but it is basically just a sugar hit worse than a soft drink!

Fishing and weather

We have been doing fairly well and the largest trout so far was 69cm (7.5 pounds). We are exploring the area and finding some great new spots but there is certainly some crowding on weekends! Although Thomas was lamenting the lack of rain the first week, it then torrented leaving him to lament about too much rain! Hopefully now we’ll get a good mix of both rain (for fish) and sunshine. Meanwhile it has been generally much warmer than Canberra and the place we are staying in is always toasty warm so we are very comfortable.

Catchups in Wellington and Auckland

We are planning to go to Auckland for Gather later this month and to Wellington for GovHack at the end of July and then for the OS/OS conference in August. The plan is to catch up with ALL TEH PEEPS during those trips which we are really looking forward to! Little A and I did a little one day fly in fly out trip to Wellington last week to catch up with the data.govt.nz team to exchange information and experience with running government data portals. It was great to see Nadia, Rowan and the team and to see the recent work happening with the new beta.data.govt.nz and to share some of the experience we had with data.gov.au. Thanks very much to the team for a great day and good luck in the next steps with your ambitious agenda! I know it will go well!

Visitors

Last week we had our first visitors. Thomas’ parents stayed with us for a week which has been lovely! Little A had a great time being pampered and we enjoyed showing them around. We had a number of adventures with them including some fishing, a trip to the local national park to see some beautiful volcanoes (still active!) and a place reminiscent of the Hydro Majestic in the Blue Mountains.

We also visited Te Porere Redoubt, a Maori defensive structure including trenches, and the site of an old Maori settlement. The trench warfare skills developed by the Maori were used in the New Zealand wars and I got a few photos to show the deep trench running around the outside of the structure and then the labyrinth in the middle. There is a photo of a picture of a fortified Maori town showing that large spikes would have also been used for the defensive structure, and potentially some kind of roof? Incredible use of tactical structures for defence. One for you Sherro!

Wolverine baby

Finally, we had a small incident with little A which really showed how resilient little kids are. We were bushwalking with little A in a special backpack for carrying children. I had to step across a small gap and checked out the brush but only saw the soft leaves of a tree. I stepped across and suddenly little A screamed! Thomas was right on to it (I couldn’t see what was happening) and there had been a tiny low hanging piece of bramble (thorny vine) at little A’s face height! He quickly disentangled her and we sat her down to see the damage and console her. It had caught on her neck and luckily only gave her a few very shallow scratches but she was inconsolable. Anyway, a few cuddles later, some antiseptic cream and a warm shower and little A was perfectly happy, playing with her usual toys whilst Thomas and I were still keyed up. The next day the marks were dramatically faded and within a couple of days you could barely see them. She is healing super fast, like a baby Wolverine :) She is happily enjoying a range of foods now and gets a lot of walks and some time at the local playgroup for additional socialisation.

July 09, 2016

The Moon tonight

July 08, 2016

Nexus 6P and Galaxy S5 Mini

Just over a month ago I ordered a new Nexus 6P [1]. I’ve had it for over a month now and it’s time to review it and the Samsung Galaxy S5 Mini I also bought.

Security

The first noteworthy thing about this phone is the fingerprint scanner on the back. The recommended configuration is to use your fingerprint for unlocking the phone which allows a single touch on the scanner to unlock the screen without the need to press any other buttons. To unlock with a pattern or password you need to first press the “power” button to get the phone’s attention.

I have been considering registering a fingerprint from my non-dominant hand to reduce the incidence of accidentally unlocking it when carrying it or fiddling with it.

The phone won’t complete the boot process before being unlocked. This is a good security feature.

Android version 6 doesn’t assign permissions to apps at install time, they have to be enabled at run time (at least for apps that support Android 6). So you get lots of questions while running apps about what they are permitted to do. Unfortunately there’s no “allow for the duration of this session” option.

A new Android feature prevents changing security settings when there is an “overlay running”. The phone instructs you to disable overlay access for the app in question but that’s not necessary. All that is necessary is for the app to stop using the overlay feature. I use the Twilight app [2] to dim the screen and use redder colors at night. When I want to change settings at night I just have to pause that app and there’s no need to remove the access from it – note that all the web pages and online documentation saying otherwise are wrong.

Another new feature is to not require unlocking while at home. This can be a convenience feature but fingerprint unlocking is so easy that it doesn’t provide much benefit. The downside of enabling this is that if someone stole your phone they could visit your home to get it unlocked. Also police who didn’t have a warrant permitting search of a phone could do so anyway without needing to compel the owner to give up the password.

Design

This is one of the 2 most attractive phones I’ve owned (the other being the sparkly Nexus 4). I think that the general impression of the appearance is positive as there are transparent cases on sale. My phone is white and reminds me of EVE from the movie Wall-E.

Cables

This phone uses the USB Type-C connector, which isn’t news to anyone. What I didn’t realise is that full USB-C requires that connector at both ends as it’s not permitted to have a data cable with USB-C at the device end and USB-A at the host end. The Nexus 6P ships with a 1M long charging cable that has USB-C at both ends and a ~10cm charging cable with USB-C at one end and type A at the other (for the old batteries and the PCs that don’t have USB-C). I bought some 2M long USB-C to USB-A cables for charging my new phone with my old chargers, but I haven’t yet got a 1M long cable. Sometimes I need a cable that’s longer than 10cm but shorter than 2M.

The USB-C cables are all significantly thicker than older USB cables. Part of that would be due to having many more wires but presumably part of it would be due to having thicker power wires for delivering 3A. I haven’t measured power draw but it does seem to charge faster than older phones.

Overall the process of converting to USB-C is going to be a lot more inconvenient than USB SuperSpeed (which I could basically ignore as non-SuperSpeed connectors worked).

It will be good when laptops with USB-C support become common; it should allow thinner laptops with more ports.

One problem I initially had with my Samsung Galaxy Note 3 was that the Micro-USB SuperSpeed socket on the phone was more fiddly to use with the regular Micro-USB charging plugs I used. After a while I got used to that but it was still an annoyance. Having a symmetrical plug that can go into the phone either way is a significant convenience.

Calendars and Contacts

I share most phone contacts with my wife and also have another list that is separate. In the past I had used the Samsung contacts system for the contacts that were specific to my phone and a Google account for contacts that are shared between our phones. Now that I’m using a non-Samsung phone I got another Gmail account for the purpose of storing contacts. Fortunately you can get as many Gmail accounts as you want. But it would be nice if Google supported multiple contact lists and multiple calendars on a single account.

Samsung Galaxy S5 Mini

Shortly after buying the Nexus 6P I decided that I spend enough time in pools and hot tubs that having a waterproof phone would be a good idea. Probably most people wouldn’t consider reading email in a hot tub on a cruise ship to be an ideal holiday, but it works for me. The Galaxy S5 Mini seems to be the cheapest new phone that’s waterproof. It is small and has a relatively low resolution screen, but it’s more than adequate for a device that I’ll use for an average of a few hours a week. I don’t plan to get a SIM for it, I’ll just use Wifi from my main phone.

One noteworthy thing is the amount of bloatware on the Samsung. Usually when configuring a new phone I’m so excited about fancy new hardware that I don’t notice it much. But this time buying the new phone wasn’t particularly exciting as I had just bought a phone that’s much better. So I had more time to notice all the annoyances of having to download updates to Samsung apps that I’ll never use. The Samsung device manager facility has been useful for me in the past and the Samsung contact list was useful for keeping a second address book until I got a Nexus phone. But most of the Samsung apps and third-party apps aren’t useful at all.

It’s bad enough having to install all the Google core apps. I’ve never read mail from my Gmail account on my phone. I use Fetchmail to transfer it to an IMAP folder on my personal mail server and I’d rather not have the Gmail app on my Android devices. Having any apps other than the bare minimum seems like a bad idea, more apps in the Android image means larger downloads for an over-the-air update and also more space used in the main partition for updates to apps that you don’t use.

Not So Exciting

In recent times there hasn’t been much potential for new features in phones. All phones have enough RAM and screen space for all common apps. While the S5 Mini has a small screen it’s not that small, I spent many years with desktop PCs that had a similar resolution. So while the S5 Mini was released a couple of years ago that doesn’t matter much for most common use. I wouldn’t want it for my main phone but for a secondary phone it’s quite good.

The Nexus 6P is a very nice phone, but apart from USB-C, the fingerprint reader, and the lack of a stylus there’s not much noticeable difference between that and the Samsung Galaxy Note 3 I was using before.

I’m generally happy with my Nexus 6P, but I think that anyone who chooses to buy a cheaper phone probably isn’t going to be missing a lot.

July 06, 2016

Where to Get a POWER8 Development VM

POWER8 sounds great, but where the heck can I get a Power VM so I can test my code?

This is a common question we get at OzLabs from other open source developers looking to port their software to the Power Architecture. Unfortunately, most developers don't have one of our amazing servers just sitting around under their desk.

Thankfully, there are a few IBM partners who offer free VMs for development use. If you're in need of a development VM, check out:

So, next time you wonder how you can test your project on POWER8, request a VM and get to it!
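Once you're logged in to your shiny new VM, a quick sanity check confirms what you're actually running on (both are standard Linux commands, nothing POWER-specific to install):

grep -m1 '^cpu' /proc/cpuinfo    # should report a POWER8 variant
uname -m                         # ppc64le for a little-endian guest, ppc64 for big-endian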

linux.conf.au 2017 wants your talks!


You might have noticed earlier this week that linux.conf.au 2017, which is happening in Hobart, Tasmania (and indeed, which I’m running!) has opened its call for proposals.

Hobart’s a wonderful place to visit in January – within a couple of hours’ drive, there’s wonderful undisturbed wilderness to go bushwalking in, historic sites from Tasmania’s colonial past, and countless wineries, distilleries, and other producers. Not to mention, the MONA Festival of Music and Arts will probably be taking place around the time of the conference. Couple that with temperate weather and longer daylight hours than anywhere else in Australia, and there’s plenty of time to make the most of your visit.

linux.conf.au is – despite the name – one of the world’s best generalist Free and Open Source Software conferences. It’s been running annually since 1999, and this year, we’re inviting people to talk about the Future of Open Source.

That’s a really big topic area, so here’s how our CFP announcement breaks it down:

THE FUTURE OF YOUR PROJECT
linux.conf.au is well-known for deeply technical talks, and lca2017 will be no exception. Our attendees want to be the first to know about new and upcoming developments in the tools they already use every day, and they want to know about new open source technology that they’ll be using daily in two years’ time.

OPENNESS FOR EVERYONE
Many of the techniques that have made Open Source so successful in the software and hardware world are now being applied to fields as disparate as science, data, government, and the law. We want to know how Open Thinking will help to shape your field in the future, and more importantly, we want to know how the rest of the world can help shape the future of Open Source.

THREATS FROM THE FUTURE
It’s easy to think that Open Source has won, but for every success we achieve, a new challenge pops up. Are we missing opportunities in desktop and mobile computing? Why is the world suddenly running away from open and federated communications? Why don’t the new generation of developers care about licensing? Let’s talk about how Software Freedom and Open Source can better meet the needs of our users and developers for years to come.

WHATEVER YOU WANT!
It’s hard for us to predict the future, but we know that you should be a part of it. If you think you have something to say about Free and Open Source Software, then we want to hear from you, even if it doesn’t fit any of the categories above.

My friend, and former linux.conf.au director, Donna Benjamin blogged about the CFP on medium and tweeted the following yesterday:

At @linuxconfau in Hobart, I’d like to hear how people are USING free & open source software, and what they do to help tend the commons.

Our CFP closes on Friday 5 August – and we’re not planning on extending that deadline – so put your thinking caps on. If you have an idea for the conference, feel free to e-mail me for advice, or you can always ask for help on IRC – we’re in #linux.conf.au on freenode – or you can find us on Facebook or Twitter.

What does the future of Open Source look like? Tell us by submitting a talk, tutorial, or miniconf proposal now! We can’t wait to hear what you have to say.

July 05, 2016

Speaking in July 2016

  • Texas LinuxFest – July 8-9 2016 – Austin, Texas – I’ve never spoken at this event before but have heard great things about it. I’ve got a morning talk about what’s in MariaDB Server 10.1, and what’s coming in 10.2.
  • db tech showcase – July 13-15 2016 – Tokyo, Japan – I’ve regularly spoken at this event and it’s a case of a 100% pure database conference, with a very captive audience. I’ll be talking about the lessons one can learn from other people’s database failures (this is the kind of talk that keeps on changing and getting better as the software improves).
  • The MariaDB Tokyo Meetup – July 21 2016 – Tokyo, Japan – Not the traditional meetup timing, since it’s 1.30pm-7pm, there will be many talks and it’s organised by the folk behind the SPIDER storage engine. It should be fun to see many people and food is being provided too. In Japanese: MariaDB コミュニティイベント in Tokyo, MariaDB Community Event in TOKYO.

Thunderbird Uses OpenGL – Who Knew?

I have a laptop and a desktop system (as well as a bunch of other crap, but let’s ignore that for a moment). Both laptop and desktop are running openSUSE Tumbleweed. I’m usually in front of my desktop, with dual screens, a nice keyboard and trackball, and the laptop is sitting with the lid closed tucked away under the desk. Importantly, the laptop is where my mail client lives. When I’m at my desk, I ssh from desktop to laptop with X forwarding turned on, then fire up Thunderbird, and it appears on my desktop screen. When I go travelling, I take the laptop with me, and I’ve still got my same email client, same settings, same local folders. Easy. Those of you considering heckling me for not using $any_other_mail_client and/or $any_other_environment, please save it for later.

Yesterday I had an odd problem. A new desktop system arrived, so I installed Tumbleweed, eventually ssh’d to my Laptop, started Thunderbird, and…

# thunderbird

…nothing happened. There’s usually a little bit of junk on the console at that point, and the Thunderbird window should have appeared on my desktop screen. But it didn’t. strace showed it stuck in a loop, waiting for something:

wait4(22167, 0x7ffdfc669be4, 0, NULL)   = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGVTALRM {si_signo=SIGVTALRM, si_code=SI_TKILL, si_pid=22164, si_uid=1000} ---
rt_sigreturn({mask=[]})                 = -1 EINTR (Interrupted system call)
wait4(22167, 0x7ffdfc669be4, 0, NULL)   = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGVTALRM {si_signo=SIGVTALRM, si_code=SI_TKILL, si_pid=22164, si_uid=1000} ---
rt_sigreturn({mask=[]})                 = -1 EINTR (Interrupted system call)
wait4(22167, 0x7ffdfc669be4, 0, NULL)   = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGVTALRM {si_signo=SIGVTALRM, si_code=SI_TKILL, si_pid=22164, si_uid=1000} ---
rt_sigreturn({mask=[]})                 = -1 EINTR (Interrupted system call)

After an assortment of random dead ends (ancient and useless bug reports about Thunderbird and Firefox failing to run over remote X sessions), I figured I may as well attach a debugger to see if I could get any more information:

# gdb -p 22167
GNU gdb (GDB; openSUSE Tumbleweed) 7.11
[...]
Attaching to process 22167
Reading symbols from /usr/lib64/thunderbird/thunderbird-bin...
[...]
0x00007f2e95331a1d in poll () from /lib64/libc.so.6
(gdb) break
Breakpoint 1 at 0x7f2e95331a1d
(gdb) bt
#0 0x00007f2e95331a1d in poll () from /lib64/libc.so.6
#1 0x00007f2e8730b410 in ?? () from /usr/lib64/libxcb.so.1
#2 0x00007f2e8730cecf in ?? () from /usr/lib64/libxcb.so.1
#3 0x00007f2e8730cfe2 in xcb_wait_for_reply () from /usr/lib64/libxcb.so.1
#4 0x00007f2e86ecc845 in ?? () from /usr/lib64/libGL.so.1
#5 0x00007f2e86ec74b8 in ?? () from /usr/lib64/libGL.so.1
#6 0x00007f2e86e9a2a9 in ?? () from /usr/lib64/libGL.so.1
#7 0x00007f2e86e9654b in ?? () from /usr/lib64/libGL.so.1
#8 0x00007f2e86e966b3 in glXChooseVisual () from /usr/lib64/libGL.so.1
#9 0x00007f2e90fa0d6f in glxtest () at /usr/src/debug/thunderbird/mozilla/toolkit/xre/glxtest.cpp:230
#10 0x00007f2e90fa1003 in fire_glxtest_process () at /usr/src/debug/thunderbird/mozilla/toolkit/xre/glxtest.cpp:333
#11 0x00007f2e90f9b4cd in XREMain::XRE_mainInit (this=this@entry=0x7ffdfc66c448, aExitFlag=aExitFlag@entry=0x7ffdfc66c3ef) at /usr/src/debug/thunderbird/mozilla/toolkit/xre/nsAppRunner.cpp:3134
#12 0x00007f2e90f9ee27 in XREMain::XRE_main (this=this@entry=0x7ffdfc66c448, argc=argc@entry=1, argv=argv@entry=0x7ffdfc66d958, aAppData=aAppData@entry=0x7ffdfc66c648)
at /usr/src/debug/thunderbird/mozilla/toolkit/xre/nsAppRunner.cpp:4362
#13 0x00007f2e90f9f0f2 in XRE_main (argc=1, argv=0x7ffdfc66d958, aAppData=0x7ffdfc66c648, aFlags=) at /usr/src/debug/thunderbird/mozilla/toolkit/xre/nsAppRunner.cpp:4484
#14 0x00000000004054c8 in do_main (argc=argc@entry=1, argv=argv@entry=0x7ffdfc66d958, xreDirectory=0x7f2e9504a9c0) at /usr/src/debug/thunderbird/mail/app/nsMailApp.cpp:195
#15 0x0000000000404c4a in main (argc=1, argv=0x7ffdfc66d958) at /usr/src/debug/thunderbird/mail/app/nsMailApp.cpp:332
(gdb) continue
[Inferior 1 (process 22167) exited with code 01]

OK, so it’s libGL that’s waiting for something. Why is my mail client trying to do stuff with OpenGL?

Hang on! When I told gdb to continue, suddenly Thunderbird appeared, running properly, on my desktop display. WTF?

As far as I can tell, the problem is that my new desktop system has an NVIDIA GPU (nouveau drivers, BTW), and my laptop and previous desktop system both have Intel GPUs. Something about ssh’ing from the desktop with the NVIDIA GPU to the laptop with the Intel GPU causes Thunderbird (and, indeed, any GL app — I also tried glxinfo and glxgears) to just wedge up completely. Whereas if I do the reverse (ssh from Intel GPU laptop to NVIDIA GPU desktop) and run GL apps, it works fine.

After some more Googling, I discovered I can make Thunderbird work properly over remote X like this:

# LIBGL_ALWAYS_INDIRECT=1 thunderbird

That will apparently cause glXCreateContext to return BadValue, which is enough to kick Thunderbird along. LIBGL_ALWAYS_SOFTWARE=1 works equally well to enable Thunderbird to function, while presumably still allowing it to use OpenGL if it really needs to for something (proof: LIBGL_ALWAYS_INDIRECT=1 glxgears fails, LIBGL_ALWAYS_SOFTWARE=1 glxgears gives me spinning gears).
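If you'd rather not remember to set that variable every time, a small alias on the machine where Thunderbird actually runs does the job (a minimal sketch, assuming a bash-style shell; put it in ~/.bashrc or equivalent):

alias thunderbird='LIBGL_ALWAYS_SOFTWARE=1 thunderbird'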

I checked Firefox too, and it of course has the same remote X problem, and the same solution.

Optical Action at a Distance

Generally when someone wants to install a Linux distro they start with an ISO file. Now we could burn that to a DVD, walk into the server room, and put it in our machine, but that's a pain. Instead let's look at how to do this over the network with Petitboot!

At the moment Petitboot won't be able to handle an ISO file unless it's mounted in an expected place (eg. as a mounted DVD), so we need to unpack it somewhere. Choose somewhere to host the result and unpack the ISO via whatever method you prefer. (For example bsdtar -xf /path/to/image.iso).

You'll get a bunch of files but for our purposes we only care about a few: the kernel, the initrd, and the bootloader configuration file. Using the Ubuntu 16.04 ppc64el ISO as an example, these are:

./install/vmlinux
./install/initrd.gz
./boot/grub/grub.cfg

In grub.cfg we can see that the boot arguments are actually quite simple:

set timeout=-1

menuentry "Install" {
    linux   /install/vmlinux tasks=standard pkgsel/language-pack-patterns= pkgsel/install-language-support=false --- quiet
    initrd  /install/initrd.gz
}

menuentry "Rescue mode" {
    linux   /install/vmlinux rescue/enable=true --- quiet
    initrd  /install/initrd.gz
}

So all we need to do is create a PXE config file that points Petitboot towards the correct files.

We're going to create a PXE config file which you could serve from your DHCP server, but that does not mean we need to use PXE - if you just want a quick install you only need to make these files accessible to Petitboot, and then we can use the 'Retrieve config from URL' option to download the files.

Create a petitboot.conf file somewhere accessible that contains (for Ubuntu):

label Install Ubuntu 16.04 Xenial Xerus
    kernel http://myaccesibleserver/path/to/vmlinux
    initrd http://myaccesibleserver/path/to/initrd.gz
    append tasks=standard pkgsel/language-pack-patterns= pkgsel/install-language-support=false --- quiet

Then in Petitboot, select 'Retrieve config from URL' and enter http://myaccesibleserver/path/to/petitboot.conf. In the main menu your new option should appear - select it and away you go!
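If you don't already have somewhere to host the unpacked files, almost any static web server will do for a one-off install. A minimal sketch (the directory and port here are hypothetical, and it assumes Python 3 is available on the hosting machine):

cd /srv/netboot    # holds vmlinux, initrd.gz and petitboot.conf
python3 -m http.server 8080

Then point 'Retrieve config from URL' at petitboot.conf on port 8080 of that host.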

July 04, 2016

Windows 10 to Linux

There is a lot of noise at the moment about Microsoft’s new operating system called Windows 10. Without repeating all the details you can have a look, say here or here or here. The essence of the story is that Microsoft is making it very difficult to avoid the new operating system. The advice being given is to not install the upgrade (avoiding it is anything but easy), since Windows 7 is supported until 2020.

The reality is that staying with Windows 7 is only delaying the inevitable. There is no reason to believe that Microsoft’s offering in 2020 will be any better at respecting your ownership and every reason to think it will be worse. If you are one of those people considering sticking with Windows 7 then you have only two choices:

  • swallow your pride and update (either today or sometime in the next 4 years); or
  • migrate off the platform. If you migrate then, in practice, that means Linux (since Apple has similar beliefs about who really owns your computer).

In my opinion, if you actually want to own your own computer, you have to install Linux.


Swiss Professor starts Cybathlon

The Cybathlon will challenge assistive device developers to create technologies that thrive in day-to-day activities.

The prosthetic arm from the M.A.S.S. Impact team. (Credit: ETH Zurich)

While working as a professor in the sensory-motor systems lab at the Swiss Federal Institute of Technology in Zurich (ETH), Robert Riener noticed a need for assistive devices that would better meet the challenge of helping people with daily life. He knew there were solutions, but that it would require motivating developers to rise to the challenge.

So, Riener created Cybathlon, the first cyborg Olympics where teams from all over the world will participate in races on Oct. 8 in Zurich that will test how well their devices perform routine tasks. Teams will compete in six different categories that will push their assistive devices to the limit on courses developed carefully over three years by physicians, developers and the people who use the technology. Eighty teams have signed up so far.

Riener wants the event to emphasize how important it is for man and machine to work together—so participants will be called pilots rather than athletes, reflecting the role of the assistive technology.

“The goal is to push the development in the direction of technology that is capable of performing day-to-day tasks. And that way, there will be an improvement in the future life of the person using the device,” says Riener.

Read more at http://blogs.discovermagazine.com/crux/2016/06/22/a-sneak-peek-at-the-first-cyborg-olympics/

and CYBATHLON: Championship for Athletes with Disabilities

July 01, 2016

Spartan: A New Architecture for Research Computing

Thursday June 30th, at the Gryphon Gallery at the University of Melbourne, was the official launch of the 'Spartan' high-performance computing and cloud hybrid. Speakers at the launch included Dr Stephen Giugni, Director, Research Platform Services; Prof Margaret Sheil, Acting Vice Chancellor of the University of Melbourne; Professor Richard Sinnott, Director, eResearch and Professor of Applied Computing Systems; Mr Bernard Meade, Head of Research Compute Services, Research Platform Services; and yours truly, in my role as HPC Support Engineer, Research Platform Services.

As I argued in my presentation, the great advantage of Spartan is that it is designed around what users need. Based on research from the previous general compute resource, Edward, most people wanted to submit lots of jobs with a relatively small core count and memory footprint using data-parallel approaches, but some really needed large core counts with a fast interconnect. Putting the two types of users on the same system was not ideal. Also, engineers tend to want performance from a system, whereas managers want flexibility. Spartan provides both through its partitioning system. I am convinced that this will be the architecture of future research computing.

Spartan's launch has received extensive media coverage, including high ranking sites such as HPC Wire, Gizmodo, and Delimiter. In addition to the aforementioned speakers, particular thanks must also be given to Linh Vu, Daniel Tosello, and Chris Samuel for their engineering excellence in helping put together the system, and to Greg Sauter for his project management (and for his photography). Welcome to Spartan!


Use your Electoral Right to Vote

Did you know…

In 1863, the state of Victoria allowed everyone on the municipal rolls to vote, which included women, who voted in the 1864 general election.  This was regarded as a mistake by the men in government, and the law was changed to exclude women in 1865.  It then took another 40 years before women got the vote again – first federally, then in each of the states individually.

People fought for these and other rights.  You now have the power to choose for what you believe is right.

Please use your electoral right to vote.  It’s important.

Linux Security Summit 2016 Schedule Published

The schedule for the 2016 Linux Security Summit is now published!

The keynote speaker for this year’s event is Julia Lawall.  Julia is a research scientist at Inria, the developer of Coccinelle, and the Linux Kernel coordinator for the Outreachy project.

Refereed presentations include:

See the schedule for the full list of talks.

Also included are updates from Linux kernel security subsystem maintainers, and snacks.

The event this year is co-located with LinuxCon North America in Toronto, and will be held on the 25th and 26th of August.  Standalone registration for the Linux Security Summit is $100 USD: click here to register.

You can also follow updates and news for the event via Twitter:  @LinuxSecSummit
See you there!

OLPC Australia training resources

Underpinning the OLPC Australia education programme is a cache of training resources. In addition to our Online Course and Learner Manual, we have a set of help videos, hosted on our Vimeo channel. I have updated the OLPC Disassembly instructions for the top and bottom of the XO with links to the videos.

OLPC Australia Education Newsletter

Editions 7 and 8 of the OLPC Australia Education Newsletter have come out in the past few weeks. In each edition, we will provide news, tips and tricks and stories from the field.

To subscribe, send an e-mail to education-newsletter+subscribe@laptop.org.au.

Creating an Education Programme

OLPC Australia had a strong presence at linux.conf.au 2012 in Ballarat, two weeks ago.

I gave a talk in the main keynote room about our educational programme, in which I explained our mission and how we intend to achieve it.

Even if you saw my talk at OSDC 2011, I recommend that you watch this one. It is much improved and contains new and updated material. The YouTube version is above, but a higher quality version is available for download from Linux Australia.

The references for this talk are on our development wiki.

Here’s a better version of the video I played near the beginning of my talk:

I should start by pointing out that OLPC is by no means a niche or minor project. XO laptops are in the hands of 8000 children in Australia, across 130 remote communities. Around the world, over 2.5 million children, across nearly 50 countries, have an XO.

Investment in our Children’s Future

The key point of my talk is that OLPC Australia have a comprehensive education programme that highly values teacher empowerment and community engagement.

The investment to provide a connected learning device to every one of the 300 000 children in remote Australia is less than 0.1% of the annual education and connectivity budgets.

For low socio-economic status schools, the cost is only $80 AUD per child. Sponsorships, primarily from corporates, allow us to subsidise most of the expense (you too can donate to make a difference). Also keep in mind that this is a total cost of ownership, covering the essentials like teacher training, support and spare parts, as well as the XO and charging rack.

While our principal focus is on remote, low socio-economic status schools, our programme is available to any school in Australia. Yes, that means schools in the cities as well. The investment for non-subsidised schools to join the same programme is only $380 AUD per child.

Comprehensive Education Programme

We have a responsibility to invest in our children’s education — it is not just another market. As a not-for-profit, we have the freedom and the desire to make this happen. We have no interest in vendor lock-in; building sustainability is an essential part of our mission. We have no incentive to build a dependency on us, and every incentive to ensure that schools and communities can help themselves and each other.

We only provide XOs to teachers who have been sufficiently enabled. Their training prepares them to constructively use XOs in their lessons, and is formally recognised as part of their professional development. Beyond the minimum 15-hour XO-certified course, a teacher may choose to undergo a further 5-10 hours to earn XO-expert status. This prepares them to be able to train other teachers, using OLPC Australia resources. Again, we are reducing dependency on us.

OLPC Australia certifications

Training is conducted online, after the teacher signs up to our programme and they receive their XO. This scales well to let us effectively train many teachers spread across the country. Participants in our programme are encouraged to participate in our online community to share resources and assist one another.

OLPC Australia online training process

We also want to recognise and encourage children who have shown enthusiasm and aptitude, with our XO-champion and XO-mechanic certifications. Not only does this promote sustainability in the school and give invaluable skills to the child, it reinforces our core principle of Child Ownership. Teacher aides, parents, elders and other non-teacher adults have the XO-basics (formerly known as XO-local) course designed for them. We want the child’s learning experience to extend to the home environment and beyond, and not be constrained by the walls of the classroom.

There’s a reason why I’m wearing a t-shirt that says “No, I won’t fix your computer.” We’re on a mission to develop a programme that is self-sustaining. We’ve set high goals for ourselves, and we are determined to meet them. We won’t get there overnight, but we’re well on our way. Sustainability is about respect. We are taking the time to show them the ropes, helping them to own it, and developing our technology to make it easy. We fundamentally disagree with the attitude that ordinary people are not capable enough to take control of their own futures. Vendor lock-in is completely contradictory to our mission. Our schools are not just consumers; they are producers too.

As explained by Jonathan Nalder (a highly recommended read!), there are two primary notions guiding our programme. The first is that the nominal $80 investment per child is just enough for a school to take the programme seriously and make them a stakeholder, greatly improving the chances for success. The second is that this is a schools-centric programme, driven from grassroots demand rather than being a regime imposed from above. Schools that participate genuinely want the programme to succeed.

OLPC Australia programme cycle

Technology as an Enabler

Enabling this educational programme is the clever development and use of technology. That’s where I (as Engineering Manager at OLPC Australia) come in. For technology to be truly intrinsic to education, there must be no specialist expertise required. Teachers aren’t IT professionals, and nor should they be expected to be. In short, we are using computers to teach, not teaching computers.

The key principles of the Engineering Department are:

  • Technology is an integral and seamless part of the learning experience – the pen and paper of the 21st century.
  • To eliminate dependence on technical expertise, through the development and deployment of sustainable technologies.
  • Empowering children to be content producers and collaborators, not just content consumers.
  • Open platform to allow learning from mistakes… and easy recovery.

OLPC have done a marvellous job in their design of the XO laptop, giving us a fantastic platform to build upon. I think that our engineering projects in Australia have been quite innovative in helping to cover the ‘last mile’ to the school. One thing I’m especially proud of is our insistence on openness. We turn traditional systems administration practice on its head to completely empower the end-user. Technology that is deployed in corporate or educational settings is typically locked down to make administration and support easier. This takes control completely away from the end-user. They are severely limited on what they can do, and if something doesn’t work as they expect then they are totally at the mercy of the admins to fix it.

In an educational setting this is disastrous — it severely limits what our children can learn. We learn most from our mistakes, so let’s provide an environment in which children are able to safely make mistakes and recover from them. The software is quite resistant to failure, both at the technical level (being based on Fedora Linux) and at the user interface level (Sugar). If all goes wrong, reinstalling the operating system and restoring a journal (Sugar user files) backup is a trivial endeavour. The XO hardware is also renowned for its ruggedness and repairability. Less well-known are the amazing diagnostics tools, providing quick and easy indication that a component should be repaired/replaced. We provide a completely unlocked environment, with full access to the root user and the firmware. Some may call that dangerous, but I call that empowerment. If a child starts hacking on an XO, we want to hire that kid 🙂

Evaluation

My talk features the case study of Doomadgee State School, in far-north Queensland. Doomadgee have very enthusiastically taken on board the OLPC Australia programme. Every one of the 350 children aged 4-14 has been issued with an XO, as part of a comprehensive professional development and support programme. Since commencing in late 2010, the percentage of Year 3 pupils at or above national minimum standards in numeracy has leapt from 31% in 2010 to 95% in 2011. Other scores have also increased. Think what you may about NAPLAN, but nevertheless that is a staggering improvement.

In federal parliament, Robert Oakeshott MP has been very supportive of our mission:

Most importantly of all, quite simply, One Laptop per Child Australia delivers results in learning from the 5,000 students already engaged, showing impressive improvements in closing the gap generally and lifting access and participation rates in particular.

We are also engaged in longitudinal research, working closely with respected researchers to have a comprehensive evaluation of our programme. We will release more information on this as the evaluation process matures.

Join our mission

Schools can register their interest in our programme on our Education site.

Our Prospectus provides a high-level overview.

For a detailed analysis, see our Policy Document.

If you would like to get involved in our technical development, visit our development site.

Credits

Many thanks to colleagues Rangan Srikhanta (CEO) and Tracy Richardson (Education Manager) for some of the information and graphics used in this article.

Interview with Australian Council for Computers in Education Learning Network

Adam Holt and I were interviewed last night by the Australian Council for Computers in Education Learning Network about our not-for-profit work to improve educational opportunities for children in the developing world.

We talked about One Laptop per Child, OLPC Australia and Sugar Labs. We discussed the challenges of providing education in the developing world, and how that compares with the developed world.

Australia poses some of its own challenges. The country is 90% urbanised, and the remaining 10% of the population is scattered across vast distances. The circumstances of these communities often share both developed and developing world characteristics. We developed the One Education programme to accommodate this.

These lessons have been developed further into Unleash Kids, an initiative that we are currently working on to support the community of volunteers worldwide and take the movement to the next level.

XO-AU OS 12.0 Release Candidate 2 released

Release Candidate 2 of the 2012 OLPC Australia operating system, XO-AU OS 12, has been released. We hope to make a final release in two weeks, in time for the start of term 2 of school in Queensland and Northern Territory.

To get started, visit our release notes page.

Installing the Release Candidate is no different from installing the XO-AU USB 3 stable release: extract the zip file to a USB stick and you’re ready to go.

The “What’s New” section outlines the changes in this release.

To provide feedback, please join our technical mailing list.

Following this, you can send your comments or ask questions on the list. The OLPC Australia Engineering team are active participants on this list, and we will reply. Remember, the better you can help us with quality information, the better we can make the product for you 🙂

HTML5 support in Browse

One of the most exciting improvements in OLPC OS 12.1.0 is a revamped Browse activity:

Browse, Wikipedia and Help have been moved from Mozilla to WebKit internally, as the Mozilla engine can no longer be embedded into other applications (like Browse) and Mozilla has stated officially that it is unsupported. WebKit has proven to be a far superior alternative and this represents a valuable step forward for Sugar’s future. As a user, you will notice faster activity startup time and a smoother browsing experience. Also, form elements on webpages are now themed according to the system theme, so you’ll see Sugar’s UI design blending more into the web forms that you access.

In short, the Web will be a nicer place on XOs. These improvements (and more!) will be making their way onto One Education XOs (such as those in Australia) in 2013.

Here are the results from the HTML5 Test using Browse 140 on OLPC OS 12.1.0 on an XO-1.75. The final score (345 and 15 bonus points) compares favourably against other Web browsers. Firefox 14 running on my Fedora 17 desktop scores 345 and 9 bonus points.

Update: Rafael Ortiz writes, “For the record previous non-webkit versions of browse only got 187 points on html5test, my beta chrome has 400 points, so it’s a great advance!”

Screenshots: The HTML5 test - How well does your browser support HTML5 (10 images)

Interviews from the field

Oracle, a sponsor of OLPC Australia, have posted some video interviews of a child and a teacher involved in the One Education programme.

A Complete Literacy Experience For Young Children

From the “I should have posted this months ago” vault…

When I led technology development at One Laptop per Child Australia, I maintained two golden rules:

  1. everything that we release must ‘just work’ from the perspective of the user (usually a child or teacher), and
  2. no special technical expertise should ever be required to set-up, use or maintain the technology.

In large part, I believe that we were successful.

Once the more obvious challenges have been identified and cleared, some more fundamental problems become evident. Our goal was to improve educational opportunities for children as young as possible, but proficiently using computers to input information can require a degree of literacy.

Sugar Labs have done stellar work in questioning the relevance of the desktop metaphor for education, and in coming up with a more suitable alternative. This proved to be a remarkable platform for developing a touch-screen laptop, in the form of the XO-4 Touch: the icons-based user interface meant that we could add touch capabilities with relatively few user-visible tweaks. The screen can be swivelled and closed over the keyboard as with previous models, meaning that this new version can be easily converted into a pure tablet at will.

Revisiting Our Assumptions

Still, a fundamental assumption has long gone unchallenged on all computers: the default typeface and keyboard. It doesn’t at all represent how young children learn the English alphabet or literacy. Moreover, at OLPC Australia we were often dealing with children who were behind on learning outcomes, and who were attending school with almost no exposure to English (since they speak other languages at home). How are they supposed to learn the curriculum when they can barely communicate in the classroom?

Looking at a standard PC keyboard, you’ll see that the keys are printed with upper-case letters. And yet, that is not how letters are taught in Australian schools. Imagine that you’re a child who still hasn’t grasped his/her ABCs. You see a keyboard full of unfamiliar symbols. You press one, and on the screen pops up a completely different looking letter! The keyboard may be in upper-case, but by default you’ll get the lower-case variants on the screen.

A standard PC keyboard

Unfortunately, the most prevalent touch-screen keyboard on the market isn’t any better. Given the large education market for its parent company, I’m astounded that this has not been a priority.

The Apple iOS keyboard

Better alternatives exist on other platforms, but I still was not satisfied.

A Re-Think

The solution required an examination of how children learn, and the challenges that they often face when doing so. The end result is simple, yet effective.

The standard OLPC XO mechanical keyboard (above) versus the OLPC Australia Literacy keyboard (below)

This image contrasts the standard OLPC mechanical keyboard with the OLPC Australia Literacy keyboard that we developed. Getting there required several considerations:

  1. a new typeface, optimised for literacy
  2. a cleaner design, omitting characters that are not common in English (they can still be entered with the AltGr key)
  3. an emphasis on lower-case
  4. upper-case letters printed on the same keys, with the Shift arrow angled to indicate the relationship
  5. better use of symbols to aid instruction

One interesting user story with the old keyboard that I came across was in a remote Australian school, where Aboriginal children were trying to play the Maze activity by pressing the opposite arrows that they were supposed to. Apparently they thought that the arrows represented birds’ feet! You’ll see that we changed the arrow heads on the literacy keyboard as a result.

We explicitly chose not to change the QWERTY layout. That’s a different debate for another time.

The Typeface

The abc123 typeface is largely the result of work I did with John Greatorex. It is freely downloadable (in TrueType and FontForge formats) and open source.

After much research and discussions with educators, I was unimpressed with the other literacy-oriented fonts available online. Characters like ‘a’ and ‘9’ (just to mention a couple) are not rendered in the way that children are taught to write them. Young children are also susceptible to confusion over letters that look similar, including mirror-images of letters. We worked to differentiate, for instance, the lower-case L from the upper-case i, and the lower-case p from the lower-case q.

Typography is a wonderfully complex intersection of art and science, and it would have been foolhardy for us to have started from scratch. We used as our base the high-quality DejaVu Sans typeface. This gave us a foundation that worked well on screen and in print. Importantly for us, it maintained legibility at small point sizes on the 200dpi XO display.

On the Screen

abc123 is a suitable substitute for DejaVu Sans. I have been using it as the default user interface font in Ubuntu for over a year.
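If you want to try it on a regular Linux desktop, installing the TrueType file for your own user is enough (a small sketch; the filename is simply whatever you downloaded):

mkdir -p ~/.local/share/fonts
cp abc123.ttf ~/.local/share/fonts/
fc-cache -f ~/.local/share/fonts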

It looks great in Sugar as well. The letters are crisp and easy to differentiate, even at small point sizes. We made abc123 the default font for both the user interface and in activities (applications).

The abc123 font in Sugar’s Write activity, on an XO laptop screen

Likewise, the touch-screen keyboard is clear and simple to use.

The abc123 font on the XO touch-screen keyboard, on an XO laptop screen

The end result is a more consistent literacy experience across the whole device. What you press on the hardware or touch-screen keyboard will be reproduced exactly on the screen. What you see on the user interface is also what you see on the keyboards.

XO-1 Training Pack

Our One Education programme is growing like crazy, and many existing deployments are showing interest. We wanted to give them a choice of using their own XOs to participate in the teacher training, rather than requiring them to purchase new hardware. Many have developer-locked XO-1s, necessitating a different approach than our official One Education OS.

The solution is our XO-1 Training Pack. This is a reconfiguration of OLPC OS 10.1.3 to be largely consistent with our 10.1.3-au release. It has been packaged for easy installation.

Note that this is not a formal One Education OS release, and hence is not officially supported by OLPC Australia.

If you’d like to take part in the One Education programme, or have questions, use the contact form on the front page.

Update: We have a list of improvements in 10.1.3-au builds over the OLPC OS 10.1.3 release. Note that some features are not available in the XO-1 Training Pack owing to the lesser storage space available on XO-1 hardware. The release notes have been updated with more detail.

Update: More information on our One News site.

OLPC Australia Education Newsletter, Edition 9

Edition 9 of the OLPC Australia Education Newsletter is now available.

In this edition, we provide a few classroom ideas for mathematics, profile the Jigsaw activity, de-mystify the Home views in Sugar and hear about the OLPC journey of Girraween Primary School.

To subscribe to receive future updates, send an e-mail to education-newsletter+subscribe@laptop.org.au.

Apache + YAJL

My answer to the Stan code challenge is here on GitHub: //github.com/Maxime2/stan-challenge. It is an example of how one can use a SAX-like streaming parser inside an Apache module to process JSON with minimal delays.

A custom-made Apache module gives you some savings on request processing time by avoiding the invocation of an interpreter to process the request in another programming language (like PHP, Python or Go). The streaming parser allows JSON processing to start as soon as the first buffer is filled with data, while the whole request is still in transmission. And again, as it is an Apache module, the response starts being constructed while the request is still being processed (and still transmitting).
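For anyone curious how a module like this ends up inside Apache, the usual route is the apxs tool that ships with httpd. A rough sketch (the module filename is hypothetical, and the exact library flags depend on how YAJL is installed):

apxs -i -a -c -l yajl mod_stan.c
apachectl restart

The -c flag compiles the module, -i installs it, and -a adds the LoadModule line to the Apache configuration.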

Red&Orange

A Taste of IBM

As a hobbyist programmer and Linux user, I was pretty stoked to be able to experience real work in the IT field that interests me most, Linux. With a mainly disconnected understanding of computer hardware and software, I braced myself to entirely relearn everything and anything I thought I knew. Furthermore, I worried that my usefulness in a world of maintainers, developers and testers would not be enough to provide any real contribution to the company. In actual fact however, the employees at OzLabs (IBM ADL) put a really great effort into making use of my existing skills, were attentive to my current knowledge and just filled in the gaps! The knowledge they've given me is practical, interlinked with hardware and provided me with the foot-up that I'd been itching for to establish my own portfolio as a programmer. I was both honoured and astonished by their dedication to helping me make a truly meaningful contribution!

On applying for the placement, I listed my skills and interests. Having a Mathematics, Science background, I listed among my greatest interests development of scientific simulation and graphics using libraries such as Python matplotlib and R. By the first day they got me to work, researching and implementing a routine in R that would qualitatively model the ability of a system to perform common tasks - a benchmark. A series of these microbenchmarks were made; I was in my element and actually able to contribute to a corporation much larger than I could imagine. The team at IBM reinforced my knowledge from the ground up, introducing the rigorous hardware and corporate elements at a level I was comfortable with.

I would say that my greatest single piece of take-home knowledge over the two weeks was knowledge of the Linux Kernel project, Git and GitHub. Having met the arch/powerpc and linux-next maintainers in person placed the Linux and Open Source development cycle in an entirely new perspective. I was introduced to the world of GitHub, and thanks to a few rigorous lessons of Git, I now have access to tools that empower me to safely and efficiently write code, and to build a public portfolio I can be proud of. Most members of the office donated their time to instruct me on all fronts, whether to do with career paths, programming expertise or conceptual knowledge, and the rest were all very good for a chat.

Approaching the tail-end of Year Twelve, I was blessed with some really good feedback and recommendations regarding further study. If during the two weeks I had any query regarding anything ranging from work-life to programming expertise even to which code editor I should use (a source of much contention) the people in the office were very happy to help me. Several employees donated their time to teach me really very intensive and long lessons regarding the software development concepts, including (but not limited to!) a thorough and helpful lesson on Git that was just on my level of understanding.

Working at IBM these past two weeks has not only bridged the gap between my hobby and my professional prospects, but more importantly established friendships with professionals in the field of Software Development. Without a doubt this really great experience of an environment that rewards my enthusiasm will fondly stay in my mind as I enter the next chapter of my life!

Using X-Plane 10 with ArduPilot SITL

ArduPilot has been able to use X-Plane as a HIL (hardware in the loop) backend for quite some time, but it never worked particularly well as the limitations of the USB interface to the hardware prevented good sensor timings.

We have recently added the ability to use X-Plane 10 as a SITL backend, which works much better. The SITL (software in the loop) system runs ArduPilot natively on your desktop machine, and talks to X-Plane directly using UDP packets.

The above video demonstrates flying a Boeing 747-400 in X-Plane 10 using ArduPilot SITL. It flies nicely, and does an automatic takeoff and landing quite well. You can use almost any of the fixed wing aircraft in X-Plane with ArduPilot SITL, which opens up a whole world of simulation to explore. Many people create models of their own aircraft in order to test out how they will fly or to test them in conditions (such as very high wind) that may be dangerous to test with a real model.

I have written up some documentation on how to use X-Plane 10 with SITL to help people get started. Right now it only works with X-Plane 10 although I may add support for X-Plane 9 in the future.

Michael Oborne has added nice support for using X-Plane with SITL in the latest beta of MissionPlanner, and does nightly builds of the SITL binary for Windows. That avoids the need to build ArduPilot yourself if you just want to fly the standard code and not modify it yourself.
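If you do want to build and run it yourself, launching the X-Plane backend from the ArduPilot source tree looks roughly like this (a sketch of the usual sim_vehicle.py workflow; check the documentation linked above for the exact frame name and options in your version):

cd ardupilot/ArduPlane
../Tools/autotest/sim_vehicle.py -f xplane --console --map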

Limitations

There are some limitations to the X-Plane SITL backend. First off, X-Plane has quite slow network support. On my machine I typically get a sensor data rate of around 27Hz, which is far below the 1200 Hz we normally use for simulation. To overcome this the ArduPilot SITL code does sensor extrapolation to bring the rate up to around 900Hz, which is plenty for SITL to run. That extrapolation introduces small errors which can make the ArduPilot EKF state estimator unhappy. To avoid that problem we run with "EKF type 10" which is a fake AHRS interface that gets all state information directly from the simulator. That means you can't use the X-Plane SITL backend to test EKF settings.
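For reference, the "EKF type" here is the AHRS_EKF_TYPE parameter. The SITL setup should already be selecting it for you, but you can check or set it from the MAVProxy console that sim_vehicle.py opens:

param show AHRS_EKF_TYPE
param set AHRS_EKF_TYPE 10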

The next limitation is that the simulation fidelity depends somewhat on the CPU load on your machine. That is an unfortunate consequence of X-Plane not supporting lock-step scheduling. So you may notice that simulated aircraft on your machine may not fly identically to the same aircraft on someone else’s machine. You can reduce this effect by lowering the graphics settings in X-Plane.

We can currently only get joystick input from X-Plane for aileron, elevator, rudder and throttle. It would be nice to support flight mode switches, flaps and other controls that are normally used with ArduPilot. That is probably possible, but isn't implemented yet. So if you want a full controller then you can instead connect a joystick to SITL directly instead of via X-Plane (for example using the MissionPlanner joystick module or the mavproxy joystick module).

Finally, we only support fixed wing aircraft in X-Plane at the moment. I have been able to fly a helicopter, but I needed to give manual collective control from a joystick as we don't yet have a way to provide collective pitch input over the X-Plane data interface.

Manned Aircraft and ArduPilot

Please don't assume that because ArduPilot can fly full sized aircraft in a simulator that you should use ArduPilot to fly real manned aircraft. ArduPilot is not suitable for manned applications and the development team would appreciate it if you did not try to use it for manned aircraft.

Happy Flying

I hope you enjoy flying X-Plane 10 with ArduPilot SITL!

June 30, 2016

Coalitions

In Australia we are about to have a federal election, so we inevitably have a lot of stupid commentary and propaganda about politics.

One thing that always annoys me is the claim that we shouldn’t have small parties. We have two large parties: Liberal (right-wing, somewhere between the Democrats and Republicans in the US) and Labor, which is somewhat similar to the Democrats in the US. In the US the first past the post voting system means that votes for smaller parties usually don’t affect the outcome. In Australia we have Instant Runoff Voting (sometimes known as “The Australian Ballot”) which has the side effect of encouraging votes for small parties.

The Liberal party almost never wins enough seats to form government on its own, so it forms a coalition with the National party. Election campaigns are often based on the term “The Coalition” being used to describe a Liberal-National coalition, and the expected result if “The Coalition” wins the election is that the leader of the Liberal party will be Prime Minister and the leader of the National party will be the Deputy Prime Minister. Liberal party representatives and supporters often try to convince people that they shouldn’t vote for small parties and that small parties are somehow “undemocratic”, seemingly unaware of the irony of advocating for “The Coalition” while opposing the idea of a coalition.

If the Liberal and Labor parties wanted to form a coalition they could do so in any election where no party has a clear majority, and do it without even needing the National party. Some people claim that it’s best to have the major parties take turns in having full control of the government without having to make a deal with smaller parties and independent candidates but that’s obviously a bogus claim. The reason we have Labor allying with the Greens and independents is that the Liberal party opposes them at every turn and the Liberal party has a lot of unpalatable policies that make alliances difficult.

One thing that would be a good development in Australian politics is to have the National party actually represent rural voters rather than big corporations. Liberal policies on mining are always opposed to the best interests of farmers and the Liberal policies on trade aren’t much better. If “The Coalition” wins the election then the National party could insist on a better deal for farmers in exchange for their continued support of Liberal policies.

If Labor wins more seats than “The Coalition” but not enough to win government directly then a National-Labor coalition is something that could work. I think that the traditional interest of Labor in representing workers and the National party in representing farmers have significant overlap. The people who whinge about a possible Green-Labor alliance should explain why they aren’t advocating a National-Labor alliance. I think that the Labor party would rather make a deal with the National party, it’s just a question of whether the National party is going to do what it takes to help farmers. They could make the position of Deputy Prime Minister part of the deal so the leader of the National party won’t miss out.

Are we now the USSR?, Brexit, and More

Look at what's happened and you'll see the parallels:

  • In many parts of the world the past and current social and economic policies on offer basically aren't delivering. Clear that there is a democratic deficit. The policies at the top aren't dealing with enough of the population's problems.
  • CrossTalk BREXIT - GOAL! (Recorded 24 June) https://www.youtube.com/watch?v=kgKIc0bobO4
  • The Schulz Brexit

June 29, 2016

LUV Main July 2016 Meeting: ICT in Education / To Search Perchance to Find

Jul 5 2016 18:30
Jul 5 2016 20:30
Location: 

6th Floor, 200 Victoria St. Carlton VIC 3053

Speakers:

  • Dr Gill Lunniss and Daniel Jitnah, ICT in Education
  • Tim Baldwin, To Search Perchance to Find: Improving Information Access over
    Technical Web User Forums

Late arrivals, please call (0490) 049 589 for access to the venue.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat and Infoxchange for their help in obtaining the meeting venues.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

LUV Beginners July Meeting: GNU COBOL

Jul 16 2016 12:30
Jul 16 2016 16:30
Location: 

Infoxchange, 33 Elizabeth St. Richmond

COBOL is a business-orientated programming language that has been in use since 1959, making it one of the world's oldest programming languages. Despite being much criticised (and for good reason) it is still a major programming language in the financial sector, although the number of experienced COBOL programmers is declining.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)

Late arrivals, please call (0490) 049 589 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

June 24, 2016

Kernel interfaces and vDSO test

Getting Suckered

Last week a colleague of mine came up to me, showed me some of the vDSO on PowerPC and asked why on earth it fails vdsotest. I should come clean at this point and admit that I knew very little about the vDSO and hadn't heard of vdsotest. I had to admit to this colleague that I had no idea; everything looked super sane to me.

Unfortunately (for me) I got hooked: vdsotest was saying it was getting '22' instead of '-1', and only in the case where the vDSO would call into the kernel. It plagued me all night; 22 is so suspicious. Right before I got to work the next morning I had an epiphany: "I bet 22 is EINVAL".
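
Checking that hunch takes a few seconds; Python's errno module (or a peek at the errno headers on any Linux box) will confirm it:

# Quick check that error number 22 really is EINVAL.
import errno, os
print(errno.EINVAL)               # 22
print(os.strerror(errno.EINVAL))  # "Invalid argument"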

Virtual Dynamically linked Shared Objects

The vDSO is a mechanism to expose some kernel functionality into userspace to avoid the cost of a context switch into kernel mode. This is a great feat of engineering, avoiding the context switch can have a dramatic speedup for userspace code. Obviously not all kernel functionality can be placed into userspace and even for the functionality which can, there may be edge cases in which the vDSO needs to ask the kernel.
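
You can get a rough feel for the saving from userspace. The following Python sketch is purely illustrative and makes some assumptions: it uses ctypes on x86_64 Linux, where glibc's clock_gettime() is normally serviced by the vDSO, and it hard-codes 228 (the x86_64 value of __NR_clock_gettime) to force the same operation through the real syscall path for comparison. Python call overhead dominates both numbers, so the gap is far smaller than it would be in C, but the forced syscall path should still be measurably slower.

# Hedged sketch, x86_64 Linux only: compare clock_gettime() via libc (normally
# the vDSO fast path) with the same call forced through syscall(2).
import ctypes, time

class Timespec(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long), ("tv_nsec", ctypes.c_long)]

libc = ctypes.CDLL("libc.so.6", use_errno=True)
NR_CLOCK_GETTIME = 228   # assumption: x86_64 value of __NR_clock_gettime
CLOCK_REALTIME = 0
ts = Timespec()

def bench(fn, n=100_000):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - start

via_vdso = bench(lambda: libc.clock_gettime(CLOCK_REALTIME, ctypes.byref(ts)))
via_syscall = bench(lambda: libc.syscall(NR_CLOCK_GETTIME, CLOCK_REALTIME,
                                         ctypes.byref(ts)))
print("libc/vDSO path: %.3fs, forced syscall path: %.3fs" % (via_vdso, via_syscall))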

Who tests the vDSO? The portion that lies exclusively in userspace escapes all testing of the syscall interface, which is really what kernel developers are so focused on not breaking. Enter Nathan Lynch with vdsotest, who has done some great work!

The Kernel

When the vDSO can't get the correct value without the kernel, it simply calls into the kernel because the kernel is the definitive reference for every syscall. On PowerPC something like this happens (sorry, our vDSO is 100% asm): [1]

/*
 * Exact prototype of clock_gettime()
 *
 * int __kernel_clock_gettime(clockid_t clock_id, struct timespec *tp);
 *
 */
V_FUNCTION_BEGIN(__kernel_clock_gettime)
  .cfi_startproc
    /* Check for supported clock IDs */
    cmpwi   cr0,r3,CLOCK_REALTIME
    cmpwi   cr1,r3,CLOCK_MONOTONIC
    cror    cr0*4+eq,cr0*4+eq,cr1*4+eq
    bne cr0,99f

    /* [snip] */

    /*
     * syscall fallback
     */
99:
    li  r0,__NR_clock_gettime
    sc
    blr

For those not familiar, this couldn't be simpler. The start checks whether the clock id is one the vDSO can handle and, if not, jumps to the 99 label. From there it simply loads the syscall number, enters the kernel and branches to the link register, aka 'return'. In this case the 'return' goes back to the userspace code which called the vDSO function.

Wait, so having the vDSO call into the kernel gets us the wrong result? Of course it should: vdsotest is assuming a C ABI with return values and errno, but the kernel doesn't do that; the kernel ABI is different. How does this even work on x86? Ohhhhh, vdsotest does this: [2]

static inline void record_syscall_result(struct syscall_result *res,
                     int sr_ret, int sr_errno)
{
    /* Calling the vDSO directly instead of through libc can lead to:
     * - The vDSO code punts to the kernel (e.g. unrecognized clock id).
     * - The kernel returns an error (e.g. -22 (-EINVAL))
     * So we need to recognize this situation and fix things up.
     * Fortunately we're dealing only with syscalls that return -ve values
     * on error.
     */
    if (sr_ret < 0 && sr_errno == 0) {
        sr_errno = -sr_ret;
        sr_ret = -1;
    }

    *res = (struct syscall_result) {
        .sr_ret = sr_ret,
        .sr_errno = sr_errno,
    };
}

That little hack isn't working on PowerPC and here's why:

The kernel puts the return value in the ABI-specified return register (r3) and signals errors using a condition register bit (condition register field 0, the SO bit), so unlike x86 the return value isn't negative on error. To make matters worse, the condition register is very difficult to access from C. Depending on your definition of 'access from C' you might consider it impossible, so a fixup like the one above simply isn't possible.

Lessons learnt

  • vDSO-supplied functions aren't quite the same as their libc counterparts. Unless you have a very good reason (and, to be fair, vdsotest does have a very good reason), always access the vDSO through libc
  • Kernel interfaces aren't C interfaces; yep, they're close but they aren't the same
  • 22 is in fact EINVAL
  • Different architectures are... Different!
  • Variety is the spice of life

P.S. I have a hacky patch awaiting review.


  [1] arch/powerpc/kernel/vdso64/gettimeofday.S

  [2] src/vdsotest.h

June 21, 2016

Zuul and Ansible in OpenStack CI

In a prior post, I gave an overview of the OpenStack CI system and how jobs were started. In that I said

(It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

Well, some recent security issues with Jenkins and other changes have led to a roll-out of what is being called Zuul 2.5, which has indeed removed Jenkins and makes extensive use of Ansible as the basis for running CI tests in OpenStack. Since I already had the diagram, it seems worth updating it for the new reality.

OpenStack CI Overview

While the previous post was really focused on the image-building components of the OpenStack CI system, the overview is the same but this time more focused on the launchers that run the tests.

Overview of OpenStack CI with Zuul and Ansible
  1. The process starts when a developer uploads their code to gerrit via the git-review tool. There is no further action required on their behalf and the developer simply waits for results of their jobs.

  2. Gerrit provides a JSON-encoded "fire-hose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a launcher to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on. A hedged sketch of submitting such a job with the gear library appears just after this list.

  4. A group of Zuul launchers are subscribed to gearman as workers. It is these Zuul launchers that will consume the job requests from the queue and actually get the tests running. However, a launcher needs two things to be able to run a job — a job definition (what to actually do) and a worker node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files. The Zuul launcher knows how to process these files (with some help from Jenkins Job Builder, which despite the name is not outputting XML files for Jenkins to consume, but is being used to help parse templates and macros within the generically defined job definitions). Each Zuul launcher gets these definitions pushed to it constantly by Puppet, thus each launcher knows about all the jobs it can run automatically. Of course Zuul also knows about these same job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customized management tool called nodepool (you can see the details of this capacity at any given time by checking the nodepool configuration). Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at node-type of jobs in the queue (i.e. what platform the job has requested to run on) and decides what types of nodes need to start and which cloud providers have capacity to satisfy demand.

    Nodepool will start fresh virtual machines (from images built daily as described in the prior post), monitor their start-up and, when they're ready, put a new "assignment job" back into gearman with the details of the fresh node. One of the active Zuul launchers will pick up this assignment job and register the new node to itself.

  6. At this point, the Zuul launcher has what it needs to actually get jobs started. With a fresh node registered to it and waiting for something to do, the Zuul launcher can advertise its ability to consume one of the waiting jobs from the gearman queue. For example, if a ubuntu-trusty node is provided to the Zuul launcher, the launcher can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty node type. If you're looking at the launcher code this is driven by the NodeWorker class — you can see this being created in response to an assignment via LaunchServer.assignNode.

    To actually run the job — where the "job hits the metal" as it were — the Zuul launcher will dynamically construct an Ansible playbook to run. This playbook is a concatenation of common setup and teardown operations along with the actual test scripts the job wants to run. Using Ansible to run the job means all the flexibility an orchestration tool provides is now available to the launcher. For example, there is a custom console streamer library that allows us to live-stream the console output for the job over a plain TCP connection, and there is the possibility to use projects like ARA for visualisation of CI runs. In the future, Ansible will allow for better coordination when running multiple-node testing jobs — after all, this is what orchestration tools such as Ansible are made for! While the Ansible run can be fairly heavyweight (especially when you're talking about launching thousands of jobs an hour), the system scales horizontally with more launchers able to consume more work easily.

    When checking your job results on logs.openstack.org you will see a _zuul_ansible directory now which contains copies of the inventory, playbooks and other related files that the launcher used to do the test run.

  7. Eventually, the test will finish. The Zuul launcher will put the result back into gearman, which Zuul will consume (log copying is interesting but a topic for another day). The testing node will be released back to nodepool, which destroys it and starts all over again — nodes are not reused and also have no sensitive details on them, as they are essentially publicly accessible. Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but that is also a topic for another day).
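
As a concrete illustration of step 3 above, the sketch below submits a job request to gearman using the Python gear library, which is the client library Zuul itself is built on. The job name, node type, server address and payload layout are made up for the example, and Zuul's real payloads are richer than this, so treat it as a sketch of the mechanism rather than a copy of Zuul's actual protocol.

# Hedged sketch: hand a job to gearman with the Python "gear" library.
# Job name, node type, server address and payload format are illustrative only.
import json
import gear

client = gear.Client()
client.addServer("gearman.example.org")   # hypothetical gearman server
client.waitForServer()                     # block until connected

payload = json.dumps({
    "node_type": "ubuntu-trusty",          # the kind of worker node the job needs
    "change": "refs/changes/12/345678/9",  # made-up Gerrit ref for illustration
}).encode("utf-8")

job = gear.Job(b"build:gate-example-python27", payload)
client.submitJob(job)   # a launcher holding a matching node will consume this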

Work will continue within OpenStack Infrastructure to further enhance Zuul, including better support for multi-node jobs and "in-project" job definitions (similar to the https://travis-ci.org/ model); for full details see the spec.

Pia, Thomas and Little A’s Excellent Adventure – Week 1

We arrived in Auckland after a fairly difficult flight. Little A had a mild cold and did NOT cope with the cabin pressure well, so there was a lot of walking cuddles around the plane kitchen to not disturb other passengers. After a restful night we picked up our rental car, a roomy 4 wheel drive, and drove to Turangi, a beautiful scenic introduction to our 3 month adventure! Our plan is to spend 3 months in Turangi as a bit of a babymoon: to get to know little A as she goes through that lovely 6-9 month stage of development which includes crawling, learning to eat and other fun stuff. We are also planning to catch a LOT of trout (and even keep some!), catch up with some studies and reading, and take the time to plan out the next chapter of our life. I’m also hoping to write a book if I can, but more on that later :)

So each week we’ll blog some highlights! Photos will be added every few days to the flickr album.

Our NZ Adventure

Arrival

The weather in Turangi has been gorgeous all week. Sunny and much warmer than Canberra, though of course Thomas would prefer rain as that would get the trout moving in the river :) We are renting a 3 bedroom house with woodfire heating which is toasty warm and very comfortable. The only downside is that we have no internet at the house, and the data plan on my phone doesn’t work there at all. So we are fairly offline, which has its pros and cons :) Good for relaxing, reflection, studying, writing and planning. Bad for Pia, who feels like she has lost a limb! Meanwhile, the local library has reasonable WiFi and we have become regular visitors.

Little A

Little A has made some new steps this week. She learned how to do raspberries, which she now does frequently. She also rolled over completely unassisted for the first time and spends a lot of time trying to roll more. Finally, she decided she wanted to start on solids. We know this because when Thomas was holding her whilst eating a banana, he turned away for a second to speak to me and she launched herself onto the banana, gumming furiously! So we have now tried some mashed potato, pumpkin and some water from the sippy cup. In all cases she insists on grabbing the spoon or sippy cup to feed herself.

Studies

Both of us are doing some extra studies whilst on this trip. I’m finishing off my degree this semester with a subject on policy and law, and another on white collar crime. Both are fascinating! Thomas is reading up on some areas of law he wants to brush up on for work and fun.

Book

My book preparations are going well, and I will be blogging about that in a few weeks once I get a bit more done. Basically I’m writing a book about the history and future of our species, focusing on the major philosophical and technological changes that have come and are coming, and the key things we need to carefully think about and change if we are to take advantage of how the world itself has fundamentally changed. It is a culmination of things I’ve been thinking about and exploring for the last 15 years, so I hope it proves useful in making a better world for everyone :)

Fishing

Part of the reason we have based this little sabbatical at Turangi is that it offers arguably the best trout fishing in the world, and it is one of Thomas’ favourite places. It is a quaint and sleepy little country town with everything we need. The season hasn’t really kicked off and the fish aren’t running upstream yet, but we still netted 12 fish this week, of which we kept one rainbow trout for a delicious meal of Manuka smoked fish :)