Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

July 29, 2016

Australia moves fast: North-East actually

This story is about the tectonic plate on which we reside.  Tectonic plates move, and so continents shift over time.  They generally go pretty slowly though.

What about Australia?  It appears that we move about 7 centimetres to the north-east every year.  For a tectonic plate, that’s very fast.

The last time scientists marked our location on the globe was in 1994, with the Geocentric Datum of Australia 1994 (GDA1994) – generally called GDA94 in geo-spatial tools (such as QGIS).  So that datum came into force 22 years ago.  Since then, we’ve moved an astonishing 1.5 metres!  You may not think much of this, but right now it actually means that if you use a GPS in Australia to get coordinates and plot them onto a map that doesn’t correct for this, you’re currently going to be off by 1.5 metres.  Depending on what you’re measuring/marking, you’ll appreciate this can be very significant and cause problems.

Bear in mind that, within Australia, GDA94 is not wrong as such, as its coordinates are relative to points within Australia. However, the positioning of Australia in relation to the rest of the globe is now outdated.  Positioning technologies have also improved.  So there’s a new datum planned for Australia, GDA2020.  By the time it comes into force, we’ll have shifted by 1.8 metres relative to GDA94.
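
If you want to see the size of that shift yourself, here is a hedged aside (my example, not from the original post): with a modern pyproj install that has the relevant transformation available, EPSG:4283 (GDA94) and EPSG:7844 (GDA2020) express the same physical point with coordinates roughly 1.8 metres apart.

# Hedged sketch: measure the GDA94 -> GDA2020 datum shift for one point.
# Assumes pyproj >= 2 with the GDA94/GDA2020 transformation installed.
from pyproj import Transformer, Geod

lon, lat = 149.125, -35.307  # an illustrative point near Canberra

to_gda2020 = Transformer.from_crs("EPSG:4283", "EPSG:7844", always_xy=True)
lon2020, lat2020 = to_gda2020.transform(lon, lat)

# Geodesic distance between the two expressions of the "same" point
_, _, shift_m = Geod(ellps="GRS80").inv(lon, lat, lon2020, lat2020)
print("datum shift: %.2f m" % shift_m)  # roughly 1.8 m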

We can have some fun with all this:

  • If you stand and stretch both your arms out, the tips of your fingers are about 1.5 metres apart – of course this depends a bit on the length of your arms, but it’ll give you a rough idea.  Now imagine a pipe or cable in the ground at a particular GPS position, then move 1.5 metres.  You could cleanly miss that pipe or cable… oops!  Unless your GPS is configured to use a datum that gets updated, such as WGS84.  However, if you had the pipe or cable plotted on a map that’s in GDA94, it becomes messy again.
  • If you use a tool such as Google Earth, where is Australia actually?  That is, will a point be plotted accurately, or be 1.5 metres out, or somewhere in between?
    Well, that would depend on when the most recent broad-scale photos were taken, and what corrections the Google Earth team applies during processing of its data (for example, Google Earth uses a different datum, WGS 84, for its calculations).
    Interesting question, isn’t it…
  • Now for a little science/maths challenge.  The northernmost tip of Australia, Cape York, is just 150 km south of Papua New Guinea (PNG).  Presuming our plate maintains its present course and speed, roughly how many years until the visible bits (above sea level) of Australia and PNG collide?  Post your answer with working/reasoning in a comment to this post!  Think about this carefully and do your research.  Good luck!

July 28, 2016

Neuroscience in PSYOPS, World Order, and More

One of the funny things that I've heard is that people from one side believe that people from another side are somehow 'brainwashed' into believing what they do. As we saw in our last post, there is a lot of manipulation and social engineering going on if you think about it: http://dtbnguyen.blogspot.com/2016/07/social-engineeringmanipulation-rigging.html We're going to examine just exactly why…

License quibbles (aka Hiro & linux pt 2)

Since my last post regarding the conversion of media from Channel 9’s Catch Up service, I have been in discussion with the company behind this technology, Hiro-Media. My concerns were primarily around their use of the open source xvid media codec and whilst I am not a contributor to xvid (and hence do not have any ownership under copyright), I believe it is still my right under the GPL to request a copy of the source code.

First off I want to thank Hiro-Media for their prompt and polite responses. It is clear that they take the issue of license violations very seriously. Granted, it would be somewhat hypocritical for a company specialising in DRM to not take copyright violations within their own company seriously, but it would not be the first time.

I initially asserted that, due to Hiro’s use (and presumed modification) of xvid code, this software was considered a derivative work and therefore bound in its entirety by the GPL. Hiro-Media denied this, stating they use xvid in its original, unmodified state and hence Hiro is simply a user of, rather than a derivative of, xvid. This is a reasonable statement, albeit one that is difficult to verify. I want to stress at this point that in my playing with the Hiro software I have NOT in any way reverse engineered it, nor have I attempted to decompile their binaries.

In the end, the following points were revealed:

  • The Mac version of Hiro uses a (claimed) unmodified version of the Perian Quicktime component
  • The Windows version of Hiro currently on Channel 9’s website IS indeed modified, being what Hiro-Media terms an ‘accidental internal QA’ version. They state that they have sent a new version to Channel 9 that corrects this. The xvid code they are using can be found at http://www.koepi.info/xvid.html
  • Neither version includes the GPL preamble within its EULA, as required. Again, I am assured this is to be corrected ASAP.

I want to reiterate that Hiro-Media have been very cooperative about this and appear to have genuine concern. I am impressed by the Hiro system itself and whilst I am still not a fan of DRM in general, this is by far the best compromise I have seen to date. They just didn’t have a linux version.

This brings me to my final, slightly more negative point. On my last correspondence with Hiro-Media, they concluded with the following:

Finally, please note our deepest concerns as to any attempt to misuse our system, including the content incorporated into it, as seems to be evidenced in your website. Prima facia, such behavior consists a gross and fundamental breach of our license (which you have already reviewed). Any such misuse may cause our company, as well as any of our partners, vast damages.

I do not wish to label this a threat (though I admit to feeling somewhat threatened by it), but I do want to clear up a few things about what I have done. The statement alleges I have violated Hiro’s license (pot? kettle? black?); however, this is something I vehemently disagree with. I have read the license very carefully (obviously, as I went looking for the GPL) and the only relevant part is:

You agree that you will not modify, adapt or translate, or disassemble, decompile, reverse engineer or otherwise attempt to discover the source code of the Software.

Now I admit to being completely guilty of a single part of this: I have attempted to discover the source code. BUT (and this is a really big BUT), I have attempted this by emailing Hiro-Media and asking them for it, NOT by decompiling (or in any other way inspecting) the software. In my opinion, the inclusion of that specific part in their license also goes against the GPL, as such restrictions are strictly forbidden by it.
But back to the point: I have not modified, translated, disassembled, decompiled or reverse engineered the Hiro software. Additionally, I do not believe I have adapted it either. It is still doing exactly the same thing it did originally, that is, taking an incoming video stream, modifying it and decoding it. Importantly, I do not modify any files in any way. What I have altered is how Quicktime uses the data returned by Hiro. All my solution does is (using official OSX/Quicktime APIs) divert the output to a file rather than to the screen. In essence I have done nothing different to the ‘Save As’ option found in Quicktime Pro; however, not owning Quicktime Pro, I merely found another way of doing this.

So that’s my conclusion. I will reply to Hiro-Media with a link to this post asking whether they still take issue with what I have done and take things from there.
To the guys from Hiro, if you are reading this: I didn’t do any of this to start trouble. All I wanted was a way to play these files on my linux HTPC, with or without ads. Thank you.

Channel 9, Catch Up, Hiro and linux

About 6 months ago, Channel 9 launched their ‘Catch Up’ service. Basically this is their way of fighting piracy by allowing people to download Australian-made TV shows to watch on their PC. Now, of course, no ‘old media’ service would possibly do this without the wonders of DRM. Channel 9, though, are taking a slightly different approach.

Instead of the normal style of DRM that prevents you copying the file, Channel 9 employs technology from a company called Hiro. Essentially you install the Hiro player, download the file and watch it. The player inserts unskippable ads throughout the video, supposedly even targeted at your demographic. Now this is actually a fairly neat system; Channel 9 actually encourage you to share the video files over bittorrent etc! The problem, as I’m sure you can guess, is that there’s no player for linux.

So, skipping straight to the punchline: yes, it IS possible to get these files working with free software (completely legally & without the watermark)! If you just want to know how to do it, jump to the end, as I’m going to explain a bit of background first.

Hiro

The Hiro technology is interesting in that it isn’t simply some custom player. The files you download from Channel 9 are actually xvid encoded, albeit a bastard in-bred cousin of what xvid should be. If you simply download the file and play it with vlc or mplayer, it will run, however you will get a nasty watermark over the top of the video and it will almost certainly crash about 30s in, when it hits the first advertising blob. There is also some trickiness going on with the audio as, even if you can get the video to keep playing, the audio will jump back to the beginning at this point. Of course, the watermark isn’t just something that’s placed over the top in post-processing like a subtitle; it’s in the video data itself. To remove it you actually need to filter the video, modifying the area covered by the watermark to darken/lighten the affected pixels. Sounds crazy and a tremendous amount of work, right? Well thankfully it’s already been done, by Hiro themselves.

When you install Hiro, you don’t actually install a media player; you install either a DirectShow filter or a Quicktime component, depending on your platform. This has the advantage that you can use reasonably standard software to play the files. It’s still not much help for linux though.

Before I get onto how to create a ‘normal’ xvid file, I just want to mention something I think should be a concern for free software advocates. As you might know, xvid is an open codec, both for encoding and decoding. Due to the limitations of Quicktime and Windows Media Player, Hiro needs to include an xvid decoder as part of their filter. I’m sure it’s no surprise to anyone, though, that they have failed to release any code for this filter, despite it being based off a GPL’d work. IA(definitely)NAL, but I suspect there’s probably some dodginess going on here.

Using Catchup with free software

Very basically, the trick to getting the video working is that it needs to be passed through the filter provided by Hiro. I tried a number of methods to get the files converted for use with mplayer or vlc and in the end, unfortunately, I found that I needed to be using either Windows or OSX to get it done. Smarter minds than mine might be able to get the DirectShow filter (HiroTransform.ax) working with mplayer in a similar manner to how CoreAVC on linux works, but I had no luck.

But, if you have access to OSX, here’s how to do it:

  1. Download and install the Hiro software for Mac. You don’t need to register or anything, in fact, you can delete the application the moment you finish the install. All you need is the Quicktime component it added.
  2. Grab any file from the Catch Up Service (http://video.ninemsn.com.au/catchuptv). I’ve tested this with Underbelly, but all videos should work.
  3. Install ffmpegx (http://ffmpegx.com)
  4. Grab the following little script: CleanCatch.sh
  5. Run:
    chmod +x CleanCatch.sh
    ./CleanCatch.sh <filename>
  6. Voila. Output will be a file called ‘<filename>.clean.MP4’ and should be playable in both VLC and mplayer

Distribution

So, I’m the first to admit that the above is a right royal pain to do, particularly the whole requiring-OSX part. To save everyone the hassle though, I believe it’s possible to simply distribute the modified file. Now again, IANAL, but I’ve gone over the Channel 9 website with a fine tooth comb and can see nothing that forbids me from distributing this newly encoded file. I agreed to no EULA when I downloaded the original video, and their site even has the following on it:

You can share the episode with your friends and watch it as many times as you like – online or offline – with no limitations

That whole ‘no limitations’ part is the bit I like. Not only have Channel 9 given me permission to distribute the file, they’ve given it to me unrestricted. I’ve not broken any locks and in fact have really only used the software provided by Channel 9 and a standard transcoding package.

This being the case, I am considering releasing modified versions of Channel 9’s files over bittorrent. I’d love to hear people’s opinions about this before doing so though in case they know more than I (not a hard thing) about such matters.

The rallyduino lives

[Update: The code for the rallyduino can be found at: http://code.google.com/p/rallyduino/]

Amidst a million other things (today is T-1 days until our little bub’s technical due date! No news yet though) I finally managed to get the rallyduino into the car over the weekend. It actually went in a couple of weeks ago, but had to come out again after a few problems were found.

So, the good news: it all works! I wrote an automagic calibration routine that does all the clever number crunching and comes up with the calibration number, so after a quick drive down the road, everything was up and running. My back-of-the-envelope calculation of what the calibration number for our car would be also turned out pretty accurate, only about 4mm per revolution out. The unit, once calibrated, is at least as accurate as the commercial devices and displays considerably more information. It’s also more flexible, as it can switch between modes dynamically and has full control from the remote handset. All in all I was pretty happy; even the instantaneous speed function worked first time.

To give a little background, the box uses a hall effect sensor, mounted next to a brake caliper, that pulses each time a wheel stud rotates past. This is a fairly common method for rally computers to use and, to make life simpler, the rallyduino is pin compatible with another common commercial product, the VDO Minicockpit. As we already had a Minicockpit in the car, all we did was ‘double adapt’ the existing plug and pop in the rallyduino. This means there are 2 computers running off the 1 sensor, which in turn makes it much simpler (assuming 1 of them is known to be accurate) to determine whether the other is right.
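
The arithmetic behind that calibration number is pleasantly simple. Here is a hedged sketch (illustrative figures, not the actual rallyduino firmware):

# Hedged sketch of the distance arithmetic: the sensor pulses once per wheel
# stud, so the calibration number is just metres travelled per pulse.
# The stud count and rolling circumference below are illustrative guesses.

STUDS_PER_WHEEL = 4             # pulses per wheel revolution
ROLLING_CIRCUMFERENCE_M = 1.9   # metres per wheel revolution (tyre dependent)

CAL_M_PER_PULSE = ROLLING_CIRCUMFERENCE_M / STUDS_PER_WHEEL

def distance_m(pulse_count):
    """Total distance covered for a given number of sensor pulses."""
    return pulse_count * CAL_M_PER_PULSE

def calibrate(known_distance_m, pulses):
    """An 'automagic' calibration works backwards: drive a known distance
    and divide by the pulses counted along the way."""
    return known_distance_m / pulses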

After taking the car for a bit of a drive though, a few problems became apparent. To explain them, a pic is required:

The 4 devices in the pic are:

  1. Wayfinder electronic compass
  2. Terratrip (The black box)
  3. Rallyduino (Big silver box with the blue screen)
  4. VDO Minicockpit (Sitting on top of the rallyduino)

The major problem should be immediately obvious: the screen is completely unsuitable. It’s both too small and has poor readability in daylight. I’m currently looking at alternatives, and it seems like the simplest thing to do is get a 2×20 screen the same physical size as the existing 4×20. This, however, means that there would only be room for a single distance tracker rather than the 2 currently in place. The changeover is fairly trivial as the code, thankfully, is nice and flexible and the screen dimensions can be configured at compile time (from 2×16 to 4×20). Daylight-readable screens are also fairly easily obtainable (http://www.crystalfontz.com is the ultimate small screen resource) so it’s just a matter of ordering one. In the long term I’d like to look at simply using a larger 4×20 screen but, as you can see, real estate on the dash is fairly tight.

Speaking of screens, I also found the most amazing little LCD controller from web4robot.com. It has both serial and I2C interfaces and a keypad decoder (with inbuilt debounce), and someone has gone to all the hard work of writing a common arduino interface library for it and other I2C LCD controllers (http://www.wentztech.com/radio/arduino/projects.html). If you’re looking for such an LCD controller and you are working with an arduino, I cannot recommend these units enough. They’re available on eBay for about $20 AU delivered too. As much as I loved the unit from Phil Anderson, it simply doesn’t have the same featureset as this one, nor is it as robust.

So that’s where things are at. Apologies for the brain dump nature of this post, I just needed to get everything down while I could remember it all.

1 + 1 = 3

No updates to this blog in a while I’m afraid, things have just been far too busy to have had anything interesting (read geeky) to write about. That said, I am indulging and making this post purely to show off.

On Monday 25th May at 8:37am, after a rather long day/night, Mel and I welcomed Spencer Bailey Stewart into the world. There were a few little issues throughout the night (particularly towards the end) and he had some small hurdles to get over in his first hour, but since then he has gone from strength to strength and both he and Mum are now doing amazingly well.
Obligatory Pic:

Spence

Spencer Bailey Stewart

He’s a very placid little man and would quite happily sleep through an earthquake, much to our delight. And yes, that is a little penguin he’s holding on to in that pic.

So that’s all really. I am very conscious of not becoming a complete baby bore so unless something actually worth writing about happens, this will hopefully be my only baby post for the sake of a baby post.

Boxee iView back online

Just a quick post to say the ABC iView Boxee app has been updated to version 0.7 and should now be fully functional once again. I apologise to anyone who has been using the app for how long this update has taken; I wish I could say I’ve been off solving world hunger or something, but in reality I’ve just been flat out with work and family. I’ve also got a few other projects on the go that have been keeping me busy. These may or may not ever reach a stage of public consumption, but if they do it’ll be cool stuff.

For anyone using Boxee, you may need to remove the app from My Apps and wait for Boxee to refresh its repository index, but eventually version 0.7 should pop up. It’s a bit rough in places so I hope to do another cleanup within the next few weeks, but at least everything is functional again.

Going into business

Lately my toying around with media centres has created some opportunities for commercial work in this area, which has been a pleasant change. As a result of this I’ve formed Noisy Media, a company specialising in the development of media centre apps for systems such as Boxee, as well as the creation of customised media applications using XBMC (and similar) as a base.

Whilst I don’t expect this venture to ever be huge, I can see the market growing hugely in the future as products such as Google TV (Something I will be looking at VERY closely) and the Boxee Box are released and begin bringing streaming Video on Demand to the loungeroom. Given this is something I have a true passion for, the ability to turn work in this area into something profitable is very appealing, and exciting.

Here’s to video on demand!

ASX RSS down

Just a quick note to advise that the ASX RSS feed at http://noisymime.org/asx is currently not functional due to a change in the source data format.  I am in the process of performing a rewrite on this now and should have it back up and running (Better and more robust than ever) within the next few days.

Apologies for the delay in getting things back up and running.

Cortina Fuel Injection – Part 1 The Electronics

Besides being your run-of-the-mill computer geek, I’ve always been a bit of a car geek as well. This often solicits down-the-nose looks from others who associate such people with V8 Supercar lovin’ petrolheads, which has always surprised me a little, because the most fun parts of working on a car are all just testing physics theories anyway. With that in mind, I’ll do this writeup from the point of view of the reader being a non-car but scientifically minded person. First, a bit of background…

Background

For the last 3 years or so my dad and I have been working on a project to fuel inject our race car. The car itself is a 1968 Mk2 Cortina and retains the original 40-year-old 1600 OHV engine. This engine was originally carbureted, meaning that it has a device that uses the vacuum created by the engine to mix fuel and air. This mixture is crucial to the running of an engine, as the ratio of fuel to air dramatically alters power, response and economy. Carburetors performed this function for a long time and whilst they achieve the basics very well, they are, at best, a compromise for most engines. To overcome these limitations, car manufacturers started moving to fuel injection in the 80’s, which allowed precise control of the amount of fuel added through the use of electronic signals. Initially these systems were horrible, driven by analog or very basic digital computers that did not have the power or inputs needed to accurately perform this function. They evolved into something useful throughout the 90’s, and by the 00’s cars had full sequential systems (more on this later) that could deliver both good performance and excellent economy. It was our plan to fit something like the late-90’s type systems (oh, how this changed by the end though) to the Cortina, with the aim of improving the power and drivability of the old engine. In this post I’m going to run through the various components needed on the electrical side to make this all happen, as well as a little background on each. Our starting point was this:

The System

To have a computer control when and how much fuel to inject, it requires a number of inputs:

  • A crank sensor. This is the most important input and tells the computer where in the 4-stroke cycle (HIGHLY recommended reading if you don’t know about the 4 strokes an engine goes through) the engine is, and therefore WHEN to inject the fuel. Typically this is some form of toothed wheel on the end of the crankshaft, with a VR or Hall effect sensor that pulses each time a tooth goes past it. The more teeth the wheel has, the more precisely the computer knows where the engine is (assuming it can keep up with all the pulses). By itself the toothed wheel is not enough, however, as the computer needs a reference point to say when the cycle is beginning. This is typically done either by a 2nd sensor that only pulses once every 4 strokes, or by using what’s known as a missing tooth wheel, which is the approach we have taken (see the decoder sketch just after this list). This works by having a wheel that would ordinarily have, say, 36 teeth, but has had one of them removed. This creates an obvious gap in the series of pulses, which the computer can use as a reference once it is told where in the cycle the missing tooth appears. The photos below show the standard Cortina crankshaft end and the wheel we made to fit onto the end

    Standard Crankshaft pulley

    36-1 sensor wheel added. Pulley is behind the wheel

    To read the teeth, we initially fitted a VR sensor, which sat about 0.75mm from the teeth; however, due to issues with that item, we ended up replacing it with a Hall effect unit.

  • Some way of knowing how much air is being pulled into the engine, so that it knows HOW MUCH fuel to inject. In earlier fuel injection systems this was done with a Mass Air Flow (MAF) sensor, a device which heated a wire placed in the path of the incoming air. By measuring the drop in temperature of the wire, the amount of air flowing over it could be determined (although guessed is probably a better word, as most of these systems tended to be fairly inaccurate). More recent systems (from the late 90’s onwards) have used Manifold Absolute Pressure (MAP) sensors to determine the amount of air coming in. Computationally these are more complex, as there are a lot more variables that need to be known by the computer, but they tend to be much more accurate for the purposes of fuel injection. Nearly all aftermarket computers now use MAP and, given how easy it is to set up (just a single vacuum hose going from the manifold to the ECU), this is the approach we took.
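
Since the missing-tooth trick is the heart of the crank sensing described above, here is a hedged sketch of how a decoder can spot the gap (Python for readability, not actual ECU code): at steady RPM, the pulse interval across the missing tooth is roughly double the normal tooth spacing.

# Hedged sketch of 36-1 missing-tooth decoding, not real ECU firmware:
# find the gap by comparing each pulse interval with the previous one.

TEETH = 36                     # nominal tooth count on a 36-1 wheel
DEG_PER_TOOTH = 360.0 / TEETH  # 10 degrees of crank rotation per tooth

def find_gap(intervals):
    """intervals: seconds between successive sensor pulses.
    Returns the index of the first pulse after the missing tooth."""
    for i in range(1, len(intervals)):
        # The gap spans two tooth spacings, so at steady RPM the interval
        # after the missing tooth is about twice its predecessor.
        if intervals[i] > 1.5 * intervals[i - 1]:
            return i
    return None  # no gap seen yet (less than one revolution of data)

Once the gap is found, the crank angle at any later pulse is just the number of teeth counted since the gap multiplied by DEG_PER_TOOTH.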

The above are the bare minimum inputs required for a computer to control the injection; however, typically more sensors are needed to make the system operate smoothly. We used:

  • Temperature sensors: As the density of air changes with temperature, the ECU needs to know how hot or cold the incoming air is (a worked example of why this matters follows this list). It also needs to know the temperature of the water in the coolant system, so it can tell whether the engine is running too hot or cold and make adjustments as needed.
  • Throttle position sensor: The ratio of fuel to air is primarily controlled by the MAF or MAP sensor described above; however, as changes in these sensors are not instantaneous, the ECU needs to know when the accelerator is pressed so it can add more fuel for the car to be able to accelerate quickly. These sensors are typically just a variable resistor fitted to the accelerator.
  • Camshaft sensor: I’ll avoid getting too technical here, but injection can essentially work in 2 modes, batched or sequential. In the 4 strokes an engine goes through, the crankshaft rotates through 720 degrees (ie 2 turns). With just a crank sensor, the ECU can only know where the shaft is in the 0-360 degree range. To overcome this, most fuel injection systems up to about the year 2000 ran in ‘batched’ mode, meaning that the fuel injectors would fire in pairs, twice (or more) per 720 degrees. This is fine, and cars can run very smoothly in this mode, however it means that after being injected, some fuel mixture sits in the intake manifold before being sucked into the chamber. During this time, the mixture starts to condense back into a liquid, which does not burn as efficiently, leading to higher emissions and fuel consumption. To improve the situation, car manufacturers started moving to sequential injection, meaning that the fuel is only ever injected at the time it can go straight into the combustion chamber. To do this, the ECU needs to know where in 720 degrees the engine is, rather than just in 360 degrees. As the camshaft in a car runs at half the crankshaft speed, all you need for this is a similar sensor on the camshaft that produces 1 pulse every revolution (the equivalent of 1 pulse every 2 turns of the crank). In our case, we decided to remove the distributor (which itself turns at half crank speed) and convert it to provide this pulse. I’ll provide a picture of this shortly, but it uses a single tooth that passes through a ‘vane’ type hall effect sensor, so that the signal goes high when the tooth enters the sensor and low when it leaves.
  • Oxygen sensor (O2) – In order to give some feedback to the ECU about how the engine is actually running, most cars these days run a sensor in the exhaust system to determine how much of the fuel going in is actually being burned. Up until very recently, virtually all of these sensors were what is known as narrowband, which in short means that they can determine whether the fuel/air mix is too lean (not enough fuel) or too rich (too much fuel), but not actually by how much. The upshot is that you only ever know EXACTLY what the fuel/air mixture is at the moment it switches from one state to the other. To overcome this problem, there is a different version of the sensor, known as wideband, that (within a certain range) can determine exactly how rich or lean the mixture is. If you ever feel like giving yourself a headache, take a read through http://www.megamanual.com/PWC/LSU4.htm, which covers the theory behind these sensors. They are complicated! Thankfully, despite all the complication, they are fairly easy to use and allow much easier and quicker tuning once the ECU is up and running.
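
As a worked example of how the MAP and temperature readings combine (hedged: the constants and the fixed volumetric efficiency below are illustrative only, nothing like a real tune), the ideal gas law turns manifold pressure and intake temperature into an air mass per intake stroke, and the target air/fuel ratio turns that into fuel:

# Hedged speed-density sketch: how an ECU might turn MAP and intake air
# temperature into a fuel quantity. Real ECUs use tuned VE tables.

R_AIR = 287.05     # J/(kg*K), specific gas constant for dry air
AFR_STOICH = 14.7  # stoichiometric air/fuel ratio for petrol

def air_mass_kg(map_kpa, iat_c, cyl_volume_l, ve=0.85):
    """Air mass drawn into one cylinder for one intake stroke (ideal gas
    law, scaled by an assumed volumetric efficiency)."""
    pressure_pa = map_kpa * 1000.0
    temp_k = iat_c + 273.15
    volume_m3 = cyl_volume_l / 1000.0
    return pressure_pa * volume_m3 * ve / (R_AIR * temp_k)

def fuel_mass_kg(map_kpa, iat_c, cyl_volume_l):
    return air_mass_kg(map_kpa, iat_c, cyl_volume_l) / AFR_STOICH

# e.g. one cylinder of a 1600cc four at ~100 kPa and 25 C intake air:
print(fuel_mass_kg(100, 25, 1.6 / 4))  # about 27 milligrams of fuel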

So with all of the above, pretty much the complete electronics system is covered. Of course, this doesn’t even start to cover off the wiring, fusing, relaying etc that has to go into making all of it work in the terribly noisy environment of a car, but that’s all the boring stuff

ECU

Finally, the part tying everything together is the ECU (Engine Control Unit) itself. There are many different types of programmable ECUs available and they vary significantly in both features and price, ranging from about $400 to well over $10,000. Unsurprisingly there’s been a lot of interest in this area from enthusiasts looking to make their own, and although a few of these have actually made it to market, the most successful has almost certainly been Megasquirt. When we started this project we had planned on using the 2nd generation Megasquirt which, whilst not having some of the capabilities of the top end systems, provided some great bang for buck. As we went along though, it became apparent that the Megasquirt 3 would be coming out at about the right time for us, so I decided to go with one of them instead. I happened to fluke one of the first 100 units to be produced and so we had it in our hands fairly quickly.

Let me just say that this is an AMAZING little box. From what I can see it has virtually all the capabilities of the (considerably) more expensive commercial units including first class tuning software (Multi platform, Win, OSX, linux) and a very active developer community. Combined with the Extra I/O board (MS3X) the box can do full sequential injection and ignition (With direct coil driving of ‘smart’ coils), launch control, traction control, ‘auto’ tuning in software, generic I/O of basically any function you can think of (including PID closed loop control), full logging via onboard SD slot and has a built in USB adaptor to boot!

Megasquirt 3 box and unassembled components

In the next post I’ll go through the hardware and setup we used to make all this happen. I’ll also run through the ignition system that we switched over to ECU control.

Command line Skype

Despite risking the occasional dirty look from a certain type of linux/FOSS supporter, I quite happily run the (currently) non-free Skype client on my HTPC. I have a webcam sitting on top of the TV and it works flawlessly for holding video chats with family and friends.

The problem I initially faced, however, was that my HTPC is controlled entirely by a keyboard. Unlike the Windows version, the linux version of Skype has no built-in shortcut keys (user defined or otherwise) for basic tasks such as answering and terminating calls. This makes it virtually impossible to use out of the box. On the upside though, the client does have an API, and some wonderful person out there has created a python interface layer for it, aptly named Skype4Py.

A little while ago when I still had free time on weekends, I sat down and quickly hacked together a python script for answering and terminating calls, as well as maximising video chats from the command line. I then setup a few global shortcut keys within xfce to run the script with the appropriate arguments.

I won’t go into the nitty gritty of the script as it really is a hack in some places (Particularly the video maximising), but I thought I’d post it up in case it is of use to anyone else. I’ve called it mythSkype, simply because the primary function of the machine I run it on is MythTV, but it has no dependencies on Myth at all.

The dependencies are:

  • Python – Tested with 2.6, though 2.5 should work
  • Skype4Py –  any version
  • xdotool – Only required for video maximising

To get the video maximising to work you’ll need to edit the file and set the screen_width and screen_height variables to match your resolution.
Make sure you have Skype running, then simply execute one of the following:

  • ./mythSkype -a (Answer any ringing calls)
  • ./mythSkype -e (End active calls)
  • ./mythSkype -m (Maximise the current video)

The first time you run the script, you will get a prompt from Skype asking if you are happy for Skype4Py to have access to the application. Obviously you must give your assent or nothing will work.
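
For the curious, the core of such a script is only a few lines. This is a hedged sketch from memory of the Skype4Py API (including the clsRinging status constant), not the actual mythSkype source:

#!/usr/bin/env python
# Hedged sketch of a mythSkype-style script: answer or end calls via the
# Skype desktop API using Skype4Py. Argument handling kept minimal.
import sys
import Skype4Py

skype = Skype4Py.Skype()
skype.Attach()  # triggers Skype's one-time authorisation prompt

if sys.argv[1] == '-a':        # answer any ringing calls
    for call in skype.ActiveCalls:
        if call.Status == Skype4Py.clsRinging:
            call.Answer()
elif sys.argv[1] == '-e':      # end all active calls
    for call in skype.ActiveCalls:
        call.Finish()

The video maximising is where the xdotool dependency comes in: finding and resizing the video window is window-manager trickery rather than anything Skype4Py itself offers.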

It’s nothing fancy, but I hope it’s of use to others out there.

Download: mythSkype

ABC iView on Boxee

A few months ago I switched from the standard mythfrontend to Boxee, a web-enhanced version of the popular XBMC project. Now Boxee has a LOT of potential, and the upcoming Boxee Box looks very promising, but its fantastic web capabilities are let down in Australia, as services such as Hulu and Netflix streaming are not available here.

What we do have, though, is a national broadcaster with a reasonably good online facility. The ABC’s iView has been around for some time and has a really great selection of current programs available on it. Sounds like the perfect candidate for a Boxee app to me.

So with the help of Andy Botting, and using Jeremy Visser’s Python iView download app for initial guidance, I put together a Boxee app for watching iView programs fullscreen. For those wishing to try it out, just add the following repository within Boxee:
http://noisymime.org/boxee

It’s mostly feature complete, although there are a few things that still need to be added. If you have any suggestions or find a bug, either leave a comment or put in a ticket at http://code.google.com/p/xbmc-boxee-iview/issues/list (a Google Code site by Andy where I am storing this project)

So that’s the short version of the story. Along the way, however, there have been a few hiccups, and I want to say something about what the ABC (and more recently the BBC) have done to ‘protect’ their content.

The ABC have enabled a function called SWF Verification on their RTMP stream. This is something that Adobe offer on top of their RTMP products, despite having omitted it from the public RTMP spec. That wouldn’t be so bad, except that they are now going after open source products that implement it, threatening them with cease and desists. Going slightly technical for a minute: SWF Verification is NOT a real protection system. It does not encrypt the content being sent, nor does it actually prevent people copying it. The system works by the server requesting a ‘ping’ every 60-90 seconds. If the player can’t provide the correct response (which is made up of things such as the date and time and the phrase “Genuine Adobe Flash Player 001”) then the server stops the streaming. Hardly high tech stuff.
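
To illustrate just how low-tech it is, here is a hedged sketch based on how open source tools such as rtmpdump reportedly implement the check (details from memory, so treat it as an approximation): the expected response is essentially an HMAC-SHA256 digest of the player’s SWF file, keyed with that fixed phrase.

# Hedged sketch of computing a SWF Verification digest, as reportedly
# implemented by open source RTMP tools. Not guaranteed to match Adobe's
# implementation byte-for-byte.
import hashlib
import hmac

KEY = b"Genuine Adobe Flash Player 001"

def swf_verification_digest(decompressed_swf):
    """HMAC-SHA256 of the decompressed player SWF, keyed with a public,
    fixed phrase. No secrets, and no encryption of the stream itself."""
    return hmac.new(KEY, decompressed_swf, hashlib.sha256).digest()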

In my opinion the ABC has made a huge mistake in enabling this, as it achieves nothing in stopping piracy or restricting the service to a certain platform, and serves only to annoy and frustrate the audience. There is a patch available at http://code.google.com/p/xbmc-boxee-iview that allows Boxee to read these streams directly; however, until such time as it is included in Boxee/XBMC mainline (watch this space: http://trac.xbmc.org/ticket/8971) or the ABC come to their senses and disable this anti-feature, this Boxee app will use the flash interface instead (boo!)

So that’s it. Hope the app is useful to people and, as stated above, if there are any problems, please let me know.

[EDIT]
I should’ve mentioned this originally: Andy and I did actually contact the iView team at the ABC regarding SWF Verification. They responded with the following:

Thanks for contacting us re-Boxee. We agree that it’s a great platform and ultimately appropriate for iView iteration. Currently we’re working out our priorities this year for our small iView team, in terms of extended content offerings, potential platforms and general enhancements to the site.

Just some background on our security settings. We have content agreements with various content owners (individuals, production companies, US TV networks etc) a number require additional security, such as SWF hashing. Our content owners also consider non-ABC rendering of that content as not in the spirit of those agreements.

We appreciate the effort you have put into the plug-in and your general interest in all things iView. So once we are on our way with our development schedule for “out of the browser” iView, we’ll hopefully be in a position to share our priorities a little more. We would like to keep in touch with you this year and if you have any questions or comments my direct email is ********@abc.net.au.

[STATUS]
The app is currently in a WORKING state. If you are experiencing any problems, please send me a copy of your Boxee log file and I will investigate the issue.

July 27, 2016

LUV Main August 2016 Meeting: TBD

Aug 2 2016 18:30
Aug 2 2016 20:30
Location: 

6th Floor, 200 Victoria St. Carlton VIC 3053

Speakers:

  • TBD


Late arrivals, please call (0490) 049 589 for access to the venue.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat and Infoxchange for their help in obtaining the meeting venues.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


LUV Beginners August Meeting: File Sharing

Aug 20 2016 12:30
Aug 20 2016 16:30
Location: 

Infoxchange, 33 Elizabeth St. Richmond

Wen Lin will speak on file sharing

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)

Late arrivals, please call (0490) 049 589 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Get off my lawn: separating Docker workloads using cgroups

On my team, we do two different things in our Continuous Integration setup: build/functional tests, and performance tests. Build tests simply test whether a project builds, and, if the project provides a functional test suite, that the tests pass. We do a lot of MySQL/MariaDB testing this way. The other type of testing we do is performance tests: we build a project and then run a set of benchmarks against it. Python is a good example here.

Build tests want as much grunt as possible. Performance tests, on the other hand, want a stable, isolated environment. Initially, we set up Jenkins so that performance and build tests never ran at the same time. Builds would get the entire machine, and performance tests would never have to share with anyone.

This, while simple and effective, has some downsides. In POWER land, our machines are quite beefy. For example, one of the boxes I use - an S822L - has 4 sockets, each with 4 cores. At SMT-8 (an 8-way split of each core) that gives us 4 x 4 x 8 = 128 threads. It seems wasteful to lock up this entire machine - all 128 threads - just to isolate a single-threaded test. [1]

So, can we partition our machine so that we can be running two different sorts of processes in a sufficiently isolated way?

What counts as 'sufficiently isolated'? Well, my performance tests are CPU bound, so I want CPU isolation. I also want memory, and in particular memory bandwidth, to be isolated. I don't particularly care about IO isolation as my tests aren't IO heavy. Lastly, I have a couple of tests that are very multithreaded, so I'd like to have enough of a machine for those test results to be interesting.

For CPU isolation we have CPU affinity. We can also do something similar with memory. On a POWER8 system, memory is connected to individual P8s, not to some central point. This is a 'Non-Uniform Memory Architecture' (NUMA) setup: the directly attached memory will be very fast for a processor to access, and memory attached to other processors will be slower to access. An accessible guide (with very helpful diagrams!) is the relevant RedBook (PDF), chapter 2.
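
(As a hedged aside of my own, not from the post: plain CPU affinity is easy to experiment with from Python on Linux.)

# Hedged aside: pin the current process to hardware threads 0-31.
# os.sched_setaffinity is Linux-only (Python 3.3+); 0 means "this process".
import os

os.sched_setaffinity(0, set(range(32)))
print(os.sched_getaffinity(0))  # confirm the new CPU mask

Affinity alone says nothing about memory placement, though, which is why the rest of this post reaches for cpuset cgroups.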

We could achieve the isolation we want by dividing up CPUs and NUMA nodes between the competing workloads. Fortunately, all of the hardware NUMA information is plumbed nicely into Linux. Each P8 socket gets a corresponding NUMA node. lscpu will tell you what CPUs correspond to which NUMA nodes (although what it calls a CPU we would call a hardware thread). If you install numactl, you can use numactl -H to get even more details.

In our case, the relevant lscpu output is thus:

NUMA node0 CPU(s):     0-31
NUMA node1 CPU(s):     96-127
NUMA node16 CPU(s):    32-63
NUMA node17 CPU(s):    64-95

Now all we have to do is find some way to tell Linux to restrict a group of processes to a particular NUMA node and the corresponding CPUs. How? Enter control groups, or cgroups for short. Processes can be put into a cgroup, and then a cgroup controller can control the resources allocated to the cgroup. Cgroups are hierarchical, and there are controllers for a number of different ways you could control a group of processes. Most helpfully for us, there's one called cpuset, which can control CPU affinity and restrict memory allocation to a NUMA node.

We then just have to get the processes into the relevant cgroup. Fortunately, Docker is incredibly helpful for this! [2] Docker containers are put in the docker cgroup. Each container gets its own cgroup under the docker cgroup, and fortunately Docker deals well with the somewhat broken state of cpuset inheritance. [3] So it suffices to create a cpuset cgroup for docker and allocate some resources to it, and Docker will do the rest. Here we'll allocate the last 3 sockets and NUMA nodes to Docker containers:

cgcreate -g cpuset:docker
echo 32-127 > /sys/fs/cgroup/cpuset/docker/cpuset.cpus
echo 1,16-17 > /sys/fs/cgroup/cpuset/docker/cpuset.mems
echo 1 > /sys/fs/cgroup/cpuset/docker/cpuset.mem_hardwall

mem_hardwall prevents memory allocations under docker from spilling over into the one remaining NUMA node.

So, does this work? I created a container with sysbench and then ran the following:

root@0d3f339d4181:/# sysbench --test=cpu --num-threads=128 --max-requests=10000000 run

Now I've asked for 128 threads, but the cgroup only has CPUs/hwthreads 32-127 allocated. So if I run htop, I shouldn't see any load on CPUs 0-31. What do I actually see?

htop screenshot, showing load only on CPUs 32-127

It works! Now, we create a cgroup for performance tests using the first socket and NUMA node:

cgcreate -g cpuset:perf-cgroup
echo 0-31 > /sys/fs/cgroup/cpuset/perf-cgroup/cpuset.cpus
echo 0 > /sys/fs/cgroup/cpuset/perf-cgroup/cpuset.mems
echo 1 > /sys/fs/cgroup/cpuset/perf-cgroup/cpuset.mem_hardwall

Docker conveniently lets us put new containers under a different cgroup, which means we can simply do:

dja@p88 ~> docker run -it --rm --cgroup-parent=/perf-cgroup/ ppc64le/ubuntu bash
root@b037049f94de:/# # ... install sysbench
root@b037049f94de:/# sysbench --test=cpu --num-threads=128 --max-requests=10000000 run

And the result?

htop screenshot, showing load only on CPUs 0-31

It works! My benchmark results also suggest this is sufficient isolation, and the rest of the team is happy to have more build resources to play with.

There are some boring loose ends to tie up: if a build job does anything outside of docker (like clone a git repo), that doesn't come under the docker cgroup, and we have to interact with systemd. Because systemd doesn't know about cpuset, this is quite fiddly. We also want this in a systemd unit so it runs on start up, and we want some code to tear it down. But I'll spare you the gory details.

In summary, cgroups are surprisingly powerful and simple to work with, especially in conjunction with Docker and NUMA on Power!


  [1] It gets worse! Before the performance test starts, all the running build jobs must drain. If we have 8 Jenkins executors running on the box, and a performance test job is next in the queue, we have to wait for 8 running jobs to clear. If they all started at different times and have different runtimes, we will inevitably spend a fair chunk of time with the machine at less than full utilisation while we're waiting.

  [2] At least, on Ubuntu 16.04. I haven't tested whether this is true anywhere else.

  [3] I hear this is getting better. It is also why systemd hasn't done cpuset inheritance yet.

Personal submission to the Productivity Commission Review on Public Sector Data

My name is Pia Waugh and this is my personal submission to the Productivity Commission Review on Public Sector Data. It does not reflect the priorities or agenda of my employers past, present or future, though it does draw on my expertise and experience in driving the open data agenda and running data portals in the ACT and Commonwealth Governments from 2011 till 2015. I was invited by the Productivity Commission to do a submission and thought I could provide some useful ideas for consideration. I note I have been on maternity leave since January 2016 and am not employed by the Department of Prime Minister and Cabinet or working on data.gov.au at the time of writing this submission. This submission is also influenced by my work and collaboration with other Government jurisdictions across Australia, overseas and various organisations in the private and community sectors. I’m more than happy to discuss these ideas or others if useful to the Productivity Commission.

I would like to thank all those program and policy managers, civic hackers, experts, advocates, data publishers, data users, public servants and vendors whom I have had the pleasure to work with and have contributed to my understanding of this space. I’d also like to say a very special thank you to the Australian Government Chief Technology Officer, John Sheridan, who gave me the freedom to do what was needed with data.gov.au, and to Allan Barger who was my right hand man in rebooting the agenda in 2013, supporting agencies and helping establish a culture of data publishing and sharing across the public sector. I think we achieved a lot in only a few years with a very small but highly skilled team. A big thank you also to Alex Sadleir and Steven De Costa who were great to work with and made it easy to have an agile and responsive approach to building the foundation for an important piece of data infrastructure for the Australian Government.

Finally, this is a collection of some of my ideas and feedback for use by the Productivity Commission; however, it doesn’t include everything I could possibly have to say on this topic because, frankly, we have a small baby who is taking most of my time at the moment. Please feel free to add your comments, criticisms or other ideas in the comments below! It is all licensed as Creative Commons 4.0 By Attribution, so I hope it is useful to others working in this space.

The Importance of Vision

Without a vision, we stumble blindly in the darkness. Without a vision, the work and behaviours of people and organisations are inevitably driven by other competing and often short-term priorities. In the case of a large and complex organism like the Australian Public Service, if there is no cohesive vision, no clear goal to aim for, then each individual department is going to do things its own way, driven by its own priorities, budgets and Ministerial whims, and you end up with what we largely have today: a cacophony of increasingly divergent approaches, driven by tribalism, that makes collaboration, interoperability, common systems and data reuse impossible (or prohibitively expensive).

If however, you can establish a common vision, then even a strongly decentralised system can converge on the goal. If we can establish a common vision for public data, then the implementation of data programs and policies across the APS should become naturally more consistent and common in practice, with people naturally motivated to collaborate, to share expertise, and to reuse systems, standards and approaches in pursuit of the same goal.

My vision for public data is two-pronged and a bit of a paradigm shift: data by design and gov as an API! “Data by design” is about taking a data driven approach to the business of government, and “gov as an API” is about changing the way we use, consume, publish and share data to properly enable a data driven public service and a broader network of innovation. The implementation of these ideas would create mashable government that could span departments, jurisdictions and international boundaries. In a heavily globalised world, no government is in isolation, and it is only by making government data, content and services API-enabled and reusable/interfaceable that we, collectively, can start to build the kind of analysis, products and services that meet the necessarily cross-jurisdictional needs of all Australians, of all people.
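
As a small illustration of what “gov as an API” looks like in practice (my hedged example, not part of the submission proper): data.gov.au was built on the open source CKAN platform, whose standard action API lets anyone query the catalogue programmatically.

# Hedged example: querying a CKAN-based catalogue such as data.gov.au.
# package_search is part of CKAN's standard action API; the query term is
# arbitrary.
import json
from urllib.request import urlopen

url = "https://data.gov.au/api/3/action/package_search?q=budget"
with urlopen(url) as response:
    data = json.load(response)

for dataset in data["result"]["results"]:
    print(dataset["title"])

Every catalogue entry, and the data behind it, becomes something a third party can build on without asking permission first, which is the whole point.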

More specifically, my vision is a data driven approach to the entire business of government that supports:

  • evidence based and iterative policy making and implementation;

  • transparent, accountable and responsible Government;

  • an open competitive marketplace built on mashable government data, content and services; and

  • a more efficient, effective and responsive public service.

What this requires is not so simple, but is utterly achievable if we could embed a more holistic whole of government approach in the work of individual departments, and then identify and fill the gaps through a central capacity that is responsible for driving a whole of government approach. Too often we see the data agenda oversimplified into what outcomes are desired (data visualisations, dashboards, analysis, etc) however, it is only in establishing multipurpose data infrastructure which can be reused for many different purposes that we will enable the kind of insights, innovation, efficiencies and effectiveness that all the latest reports on realising the value of data allude to. Without actual data, all the reports, policies, mission statements, programs and governance committees are essentially wasting time. But to get better government data, we need to build capacity and motivation in the public sector. We need to build a data driven culture in government. We also need to grow consumer confidence because a) demand helps drive supply, and b) if data users outside the public sector don’t trust that they can find, use and rely upon at least some government data, then we won’t ever see serious reuse of government data by the private sector, researchers, non-profits, citizens or the broader community.

Below is a quick breakdown of each of these priorities, followed by specific recommendations for each:

data infrastructure that supports multiple types of reuse. Ideally all data infrastructure developed by all government entities should be built in a modular, API enabled way to support data reuse beyond the original purpose to enable greater sharing, analysis, aggregation (where required) and publishing. It is often hard for agencies to know what common infrastructure already exists and it is easy for gaps to emerge, so another part of this is to map the data infrastructure requirements for all government data purposes, identify where solutions exist and any gaps. Where whole of government approaches are identified, common data infrastructure should be made available for whole of government use, to reduce the barrier to publishing and sharing data for departments. Too often, large expensive data projects are implemented in individual agencies as single purpose analytics solutions that don’t make the underlying data accessible for any other purpose. If such projects separated the data infrastructure from the analytics solutions, then the data infrastructure could be built to support myriad reuse including multiple analytics solutions, aggregation, sharing and publishing. If government data infrastructure was built like any other national infrastructure, it should enable a competitive marketplace of analysis, products and service delivery both domestically and globally. A useful analogy to consider is the example of roads. Roads are not typically built just from one address to another and are certainly not made to only support certain types of vehicles. It would be extremely inefficient if everyone built their own custom roads and then had to build custom vehicles for each type of road. It is more efficient to build common roads to a minimum technical standard that any type of vehicle can use to support both immediate transport needs, but also unknown transport needs into the future. Similarly we need to build multipurpose data infrastructure to support many types of uses.

greater publisher capacity and motivation to share and publish data. Currently the range of publishing capacity across the APS is extremely broad, from agencies that do nothing to agencies that are prolific publishers. This is driven primarily by different cultures and responsibilities of agencies and if we are to improve the use of data, we need to improve the supply of data across the entire public sector. This means education and support for agencies to help them understand the value to their BAU work. The time and money saved by publishing data, opportunities to improve data quality, the innovation opportunities and the ability to improve decision making are all great motivations once understood, but generally the data agenda is only pitched in political terms that have little to no meaning to data publishers. Otherwise there is no natural motivation to publish or share data, and the strongest policy or regulation in the world does not create sustainable change or effective outcomes if you cannot establish a motivation to comply. Whilst ever publishing data is seen as merely a compliance issue, it will be unlikely for agencies to invest the time and skills to publish data well, that is, to publish the sort of data that consumers want to use.

greater consumer confidence to improve the value realised from government data. Supply is nothing without demand, and currently there is a relatively small (but growing) demand for government data, largely because people won’t use what they don’t trust. In the current landscape it is difficult to find data, and even if one can find it, it is often not machine readable or not freely available, is out of date, and is generally hard to use. There is not a high level of consumer confidence in what is provided by government, so many people don’t even bother to look. If they do look, they find myriad data sources of varying quality and inevitably waste many hours trying to get an outcome. There is a reasonable demand for data for research, and the research community tends to jump through hoops – albeit reluctantly and at great cost – to gain access to government data. However, the private and civic sectors are yet to seriously engage, apart from a few interesting outliers. We need to make finding and using useful data easy, and start to build consumer confidence, or we will never even scratch the surface of the billions of dollars of untapped potential predicted by various studies. The data infrastructure section is obviously an important part of building consumer confidence, as it should make it easier for consumers to find and have confidence in what they need, but it also requires improving the data culture across the APS, better outreach and communications, better education for public servants and citizens on how to engage in the agenda, and targeted programs to improve the publishing of data already in demand. What we don’t need is yet another “tell us what data you want” exercise, because people want to see progress.

a data driven culture that embeds in all public servants an understanding of the role of data in the everyday work of the public service, from program management, policy development and regulation to even basic reporting. It is important to take data from being seen as a specialist niche delegated only to highly specialised teams and put it front and centre as part of the responsibilities of all public servants – especially management – in their BAU activities. Developing this culture requires education, data driven requirements for new programs and policies, some basic skills development, but mostly the proliferation of an awareness of what data is, why it is important, and how to engage appropriate data skills in BAU work to ensure a data driven approach. Only with data can a truly evidence driven approach to policy be taken, and only with data can a meaningful iterative approach be taken over time.

Finally, the approach above obviously requires an appropriately skilled team to drive policy, coordination and implementation of the agenda in collaboration with the broader APS. This team should reside in a central agency to have whole of government imprimatur, and needs a mix of policy, commercial, engagement and technical data skills. The experience of data programs around the world shows that when you split policy and implementation, you inevitably get both a policy team lacking the expertise to drive meaningful policy and an implementation team paralysed by policy indecision and an unclear mandate. This space is changing so rapidly that policy and implementation need to be agile and mutually reinforcing, with a strong focus on getting things done.

As we examine the interesting opportunities presented by new developments such as blockchain and big data, we need to seriously understand the shift in paradigm from scarcity to surplus, from centralised to distributed systems, and from pre-planned to iterative approaches, if we are to create an effective public service for the 21st century.

There is already a lot of good work happening, so the recommendations in this submission are meant to improve and augment the landscape, not replicate. I will leave areas of specialisation to the specialists, and have tried to make recommendations that are supportive of a holistic approach to developing a data-driven public service in Australia.

Current Landscape

There has been progress in recent years towards a more data driven public sector; however, these initiatives tend to be done by individual teams in isolation from the broader public service. Although we have seen some excellent exemplars of big data and open data, and some good work to clarify and communicate the intent of a data driven public service through policy and reviews, most projects have simply expanded upon the status quo thinking of government as a series of heavily fortified castles that let in outsiders (including other departments) only with extraordinary effort, under strictly controlled conditions and with great reluctance and cost. There is very little sharing at the implementation level (though an increasing amount of sharing of ideas and experience) and very rarely are new initiatives consulted across the APS for a whole of government perspective. Very rarely are actual data and infrastructure experts encouraged or supported to work directly together across agency or jurisdiction lines, which is a great pity. Although the value of data has started to be realised and prioritised, we still see the implementation of data projects largely delegated to small, overworked and highly specialised internal teams that are largely not in the habit of collaborating externally, and thus there is a lot of reinvention and diversity in what is done.

If we are to realise the real benefits of data in government and the broader economy, we need to challenge some of the status quo thinking and approaches towards data. We need to consider government (and the data it collects) as a platform for others to build upon rather than the delivery mechanism for all things to all people. We also need to better map what is needed for a data-driven public service rather than falling victim to the attractive (and common, and cheap) notion of simply identifying existing programs of work and claiming them to be sufficient to meet the goals of the agenda.

Globally this is still a fairly new space. Certain data specialisations have matured in government (eg. census/statistics, some spatial, some science data) but there is still a lack of a cohesive approach to data in any one agency. Even specialist data agencies tend not to look beyond their specialised data to take a holistic data driven approach to everything. For this reason, it is critical to develop a holistic approach to data at all levels of the public service to embed the principles of data driven decision making in everything we do. Catalogues are not enough. Specialist data projects are not enough. Publishing data isn’t enough. Reporting the number of datasets quickly becomes meaningless. We need to measure our success in this space by how well data is helping the public service to make better decisions, build better services, develop and iterate responsive and evidence based policy agendas, measure progress and understand the environment in which we operate.

Ideally, government agencies need to adopt a dramatic shift in thinking to assume in the first instance that the best results will be discovered through collaboration, through sharing, and through helping people help themselves. There also needs to be a shift in the APS away from thinking that a policy, framework, governance structure or other artificial construct is a sufficient outcome. Such mechanisms can be useful, but they can also be a distraction from getting anything tangible done, and they often add layers of complexity and cost to what they purport to achieve. Ultimately, it is only what is actually implemented that will drive an outcome, and I strongly believe an outcomes driven approach must be applied to the public data agenda for it to achieve its potential.

References

In recent years there has been a lot of progress. Below is a quick list of initiatives and resources to ensure they are known and built upon in the future. It is also useful to recognise the good work of government agencies to date.

  • Public Data Toolkit – the data.gov.au team have pulled together a large repository of information, guidance and reports over the past 3 years on our open data toolkit at http://toolkit.data.gov.au. There are also some useful contributions from the Department of Communications Spatial Policy Branch. The Toolkit has links to various guidance from different authoritative agencies across the APS as well as general information about data management and publishing which would be useful to this review.

  • The Productivity Commission is already aware of the Legislative and other Barriers Workshop I ran at PM&C before going on maternity leave, and I commend the outcomes of that session to the Review.

  • The Financial Sector Inquiry (the “Murray Inquiry”) has some excellent recommendations regarding the use of data-driven approaches to streamline the work and reporting of the public sector which, if implemented, would generate cost and time savings as well as the useful side effect of putting in place data driven practices and approaches which can be further leveraged for other purposes.

  • Gov 2.0 Report and the Ahead of the Game Report – copies of these are now hard to find online, but both have some good recommendations and ideas about a more data centric and evidence based public sector and I commend them both to the Review. I’m happy to provide copies if required.

  • There are many notable APS agency efforts which I recommend the Productivity Commission engage with, if they haven’t already. Below are a few I have come across to date, and it is far from an exhaustive list:

    • PM&C (Public Data Management Report/Implementation & Public Data Policy Statement)

    • Finance (running and rebooting data.gov.au, budget publishing, data integration in GovCMS)

    • ABS (multi agency arrangement, ABS.Stat)

    • DHS (analytics skills program, data infrastructure and analysis work)

    • Immigration (analytics and data publishing)

    • Social Services (benefits of data publishing)

    • Treasury (Budget work)

    • ANDS (catalogue work and upskilling in research sector)

    • NCI (super computer functionality for science)

    • ATO (smarter data program, automated and publications data publishing, service analytics, analytics, dev lab, innovationspace)

    • Industry (Lighthouse data integration and analysis, energy ratings data and app)

    • CrimTRAC and AUSTRAC (data collection, consolidation, analysis, sharing)

  • Other jurisdictions in Australia have done excellent work as well and you can see a list (hopefully up to date) of portals and policies on the Public Data Toolkit. I recommend the Productivity Commission engage with the various data teams for their experiences and expertise in this matter. There are outstanding efforts in all the State and Territory Governments involved as well as many Local Councils with instructive success stories, excellent approaches to policy, implementation and agency engagement/skills and private sector engagement projects.

Some current risks/issues

There are a number of issues and risks that exist in pursuing the current approach to data in the APS. Below are some considerations to take into account with any new policies or agendas to be developed.

  • There is significant duplication of infrastructure and investment from building bespoke analytics solutions rather than reusable data infrastructure that could support multiple analytics solutions. Agencies build multiple bespoke analytics projects without making the underpinning data available for other purposes resulting in duplicated efforts and under-utilised data across government.

  • Too much focus on pretty user interfaces without significant investment in, or focus on, data delivery.

  • Discovery versus reuse – too many examples of catalogues linking to dead data. Without the data, discovery is less than useful.

  • Limitations on tech in agencies by the ICT department – often the ICT department in an agency is reluctant to expand the standard operating environment beyond the status quo, limiting access to new tools and technologies.

  • Copyright and legislation – particularly old interpretations of each and other excuses to not share.

  • Blockers to agencies publishing data (skills, resources, time, legislation, tech, competing priorities, e.g. the assumption that only specialists can do data work).

  • Often activities in the public sector are designed to maintain the status quo (budgets, responsibilities, staff count) and there is very little motivation to do things more efficiently or effectively. We need to establish these motivations for any change to be sustainable.

  • Public perceptions about the roles and responsibilities of government change over time and it is important to stay engaged when governments want to try something new that the public might be uncertain about. There has been a lot of media attention about how data is used by government, with concerns aired about privacy, and Australians are concerned about what Government plans to do with their data. Broadly, the Government needs to understand and engage with the public about what data it holds and how it is used. Trust needs to be built to both improve the benefits from data and to ensure citizen privacy and rights are protected. Where government wants to use data in new ways, it needs to prosecute the case with the public and ensure there are appropriate limitations on use in place to avoid misuse of the data. Generally, where Australians can directly see the benefit of their data being used and where appropriate limitations are in place, they will probably react positively. For example, tax submissions are easier now that data auto-fills from employers and health providers when completing Online Tax, and people appreciate the concept of having to update their details only once with government.

Benefits

I agree with the benefits identified by the Productivity Commission discussion paper however I would add the following:

  • Publishing government data, if published well, enables a competitive marketplace of service and product delivery, the ability to better leverage public and academic analysis for government use and more broadly, taps into the natural motivation of the entire community to innovate, solve problems and improve life.

  • Establishing authoritative data – government is often the authoritative source of the information it naturally collects as part of the function of government. When this data is not publicly available (through anonymised APIs if necessary), people will use whatever data they can get access to, reducing the authority of the data collected by Government.

  • A data-driven approach to collecting, sharing and publishing data enables truly iterative approaches to policy and services. Without data, changes to policy are difficult to justify and their impact is impossible to track, so data provides a means to support change and to quickly identify what is working. Such feedback loops enable iterative improvements to policies and programs that can respond to the changing financial and social environment they operate in.

  • Publishing information in a data driven way can dramatically streamline reporting, government processes and decision making, freeing up resources that can be used for more high value purposes.

Public Sector Data Principles

The Public Data Statement provides a good basis of principles for this agenda. Below are some principles I think are useful to highlight with a brief explanation of each.

Principles:

  • build for the future - legacy systems will always be harder to deal with so agencies need to draw a line in the sand and ensure new systems are designed with data principles, future reuse and this policy agenda in mind. Otherwise we will continue to build legacy systems into the future. Meanwhile, just because a legacy system doesn’t natively support APIs or improved access doesn’t mean you can’t affordably build middleware solutions to extract, transform, share and publish data in an automated way.

  • data first - wherever data is used to achieve an outcome, publish the data along with the outcome. This will improve public confidence in government outcomes and will also enable greater reuse of government data. For example, where graphs or analysis are published also publish the data. Where a mobile app is using data, publish the data API. Where a dashboard is set up, also provide access to the underpinning data.

  • use existing data, from the source where possible - this may involve engaging with or even paying for data from private sector or NGOs, negotiating with other jurisdictions or simply working with other government entities to gain access.

  • build reusable data infrastructure first - wherever data is part of a solution, the data should be accessible through APIs so that other outcomes and uses can be realised, even if the APIs are only used for internal access in the first instance.

  • data driven decision making to support iterative and responsive policy and implementation approaches – all decisions should be evidence based; all projects, policies and programs should have useful data indicators identified to measure and monitor the initiative and enable iterative changes backed by evidence.

  • consume your own data and APIs - agencies should consider how they can better use their own data assets and build access models for their own use that can be used publicly where possible. In consuming their own data and APIs, there is a better chance the data and APIs will be designed and maintained to support reliable reuse. This could include raw or aggregate data APIs for analytics, dashboards, mobile apps, websites, publications, data visualisations or any other purpose.

  • developer empathy - if government agencies start to prioritise the needs of data users when publishing data, there is a far greater likelihood the data will be published in a way developers can use. For instance, no developer likes to use PDFs, so why would an agency publish data in a PDF (hint: there is no valid reason. PDF does not make your data more secure!).

  • standardise where beneficial but don’t allow the perfect to delay the good - often the focus on data jumps straight to standards, and then multi-year (or multi-decade) standards initiatives are stood up, creating huge delays in accessing actual data. If data is machine readable, it can often be used and mapped to some degree, which is more useful than having access to nothing.

  • automate, automate, automate! – where human effort is required, tasks will always be inefficient and prone to error. Data collection, sharing and publishing should be automated where possible. For example, when data is regularly requested, agencies should automate the publishing of data and updates which both reduces the work for the agency and improves the quality for data users.

  • common platforms - where possible agencies should use existing common platforms to share and publish data. Where they need to develop new infrastructure, efforts should be made to identify where new platforms might be useful in a whole of government or multi agency context and built to be shared. This will support greater reuse of infrastructure as well as data.

  • a little less conversation a little more action – the public service needs to shift from talking about data to doing more in this space. Pilot projects, experimentation, collaboration between implementation teams and practitioners, and generally a greater focus on getting things done.

Recommendations for the Public Data agenda

Strategic

  1. Strong Recommendation: Develop a holistic vision and strategy for a data-driven APS. This could perhaps be part of a broader digital or ICT strategy, but there needs to be a clear goal that all government entities are aiming towards. Otherwise each agency will continue to do whatever they think makes sense just for them with no convergence in approach and no motivation to work together.

  2. Strong Recommendation: Develop and publish work program and roadmap with meaningful measures of progress and success regularly reported publicly on a public data agenda dashboard. NSW Government already have a public roadmap and dashboard to report progress on their open data agenda.

Whole of government data infrastructure

  1. Strong Recommendation: Grow the data.gov.au technical team to at least 5 people to expand the whole of government catalogue and cloud based data hosting infrastructure, to add functionality in response to data publisher and data user requirements, to provide free technical support and training to agencies, and to regularly engage with data users to grow public confidence in government data. The data.gov.au experience demonstrated that even a small motivated technical team can greatly assist agencies to start on their data publishing journey and move beyond policy hypothesising into practical implementation. This is not something that can be efficiently or effectively outsourced, in my experience.

  • I note that in the latest report from PM&C, Data61 have been engaged to improve the infrastructure (which looks quite interesting) however, there still needs to be an internal technical capability to work collaboratively with Data61, to support agencies, to ensure what is delivered by contractors meets the technical needs of government, to understand and continually improve the technical needs and landscape of the APS, to contribute meaningfully to programs and initiatives by other agencies, and to ensure the policies and programs of the Public Data Branch are informed by technical realities.

  2. Recommendation: Establish/extend a data infrastructure governance/oversight group with representatives from all major data infrastructure provider agencies including the central public data team to improve alignment of agendas and approaches for a more holistic whole of government approach to all major data infrastructure projects. The group would assess new data functional requirements identified over time, would identify how to best collectively meet the changing data needs of the public sector and would ensure that major data projects apply appropriate principles and policies to enable a data driven public service. This work would also need to be aligned with the work of the Data Champions Network.

  3. Recommendation: Map out, publish and keep up to date the data infrastructure landscape to assist agencies in finding and using common platforms.

  4. Recommendation: Identify on an ongoing basis publisher needs and provide whole of government solutions where required to support data sharing and publishing (eg – data.gov.au, ABS infrastructure, NationalMap, analytics tools, github and code for automation, whole of gov arrangements for common tools where they provide cost benefits).

  5. Recommendation: Create a requirement for New Policy Proposals that any major data initiatives (particularly analytics projects) also make the data available via accessible APIs to support other uses or publishing of the data.

  6. Recommendation: Establish (or build upon existing efforts) an experimental data playground or series of playgrounds for agencies to freely experiment with data, develop skills, trial new tools and approaches to data management, sharing, publishing, analysis and reuse. There are already some sandbox environments available and these could be mapped and updated over time for agencies to easily find and engage with such initiatives.

Grow consumer confidence

  1. Strong Recommendation: Build automated data quality indicators into data.gov.au. Public quality indicators provide an easy way to identify quality data, thus reducing the time and effort required by data users to find something useful. This could also support a quality search interface; for instance, data users could limit searches to “high quality government data” or choose granular options such as “select data updated this year”. See my earlier blog post (written at PM&C) for a draft of basic technical quality indicators which could be implemented quickly, giving data users a basic indication of how usable and useful data is in a consistent, automated way. Additional quality indicators, including domain specific quality indicators, could be implemented in a second or subsequent iteration of the framework.

  2. Strong Recommendation: Establish regular public communications and engagement to improve relations with data users, improve perception of agenda and progress and identify areas of data provision to prioritise. Monthly blogging of progress, public access to the agenda roadmap and reporting on progress would all be useful. Silence is generally assumed to mean stagnation, so it is imperative for this agenda to have a strong public profile, which in part relies upon people increasingly using government data.

  3. Strong Recommendation: Establish a reasonable funding pool for agencies to apply for when establishing new data infrastructure, when trying to make existing legacy systems more data friendly, and for responding to public data requests in a timely fashion. Agencies should also be able to apply for specialist resource sharing from the central and other agencies for such projects. This will create the capacity to respond to public needs faster and develop skills across the APS.

  4. Strong Recommendation: The Australian Government undertake an intensive study to understand the concerns Australians hold relating to the use of their data and develop a new social pact with the public regarding the use and limitations of data.

  5. Recommendation: establish a 1-2 year project to support Finance in implementing the data driven recommendations from the Murray Inquiry with 2-3 dedicated technical resources working with relevant agency teams. This will result in regulatory streamlining, improved reporting and analysis across the APS, reduced cost and effort in the regular reporting requirements of government entities and greater reuse of the data generated by government reporting.

  6. Recommendation: Establish a short program focused on publishing, and reporting progress on, some useful high value datasets, applying the Public Data Policy Statement requirements for data publishing. The list of high value datasets could be drawn from the Data Barometer, the Murray Inquiry, existing requests from data.gov.au, and work from PM&C. The effort of determining the MOST high value data to publish has arguably got in the way of actual publishing, so it would be better to use existing analysis to prioritise some datasets and, more importantly, to establish a data-by-default approach across government to enable the kind of serendipitous use of data that produces truly innovative outcomes.

  7. Recommendation: Citizen driven privacy – give citizens the option to share data for benefits and simplified services, and a way to access data about themselves.

Grow publisher capacity and motivation

  1. Strong Recommendation: Document the benefits for agencies of sharing data and create better guidance for agencies. There has been a lot of work since the reboot of data.gov.au to educate agencies on the value of publishing data. The value of specialised data sharing and analytics projects is often evident to those driving them, but traditionally there has not been much natural motivation for agencies to publish data, which had the unfortunate result of low levels of data publishing. There is a lot of anecdotal evidence that agencies have saved time and money by publishing data publicly, which has in turn driven greater engagement and improvements in data publishing by agencies. If these examples were better documented (now that there are more resources) and if agencies were given more support in developing holistic public data strategies, we would likely see more data published by agencies.

  2. Strong Recommendation: Implement an Agency League Table to show agency performance on publishing or otherwise making government data publicly available. I believe such a league table needs to be carefully designed to include measures that will drive better behaviours in this space. I have previously mapped out a draft league table which ranks agency performance by quantity (number of data resources, weighted by type), quality (see previous note on quality metrics), efficiency (the time and/or money saved in publishing data) and value (a weighted measure of usage and reuse case studies) and would be happy to work with others in re-designing the best approach if useful.

  3. Recommendation: Establish regular internal hackfests with tools for agencies to experiment with new approaches to data collection, sharing, publishing and analysis – build on ATO lab, cloud tools, ATO research week, etc.

  4. Recommendation: Require data reporting component for New Policy Proposals and new tech projects wherein meaningful data and metrics are identified that will provide intelligence on the progress of the initiative throughout the entire process, not just at the end of the project.

  5. Recommendation: Add data principles and API driven and automated data provision to the digital service standard and APSC training.

  6. Recommendation: Require public APIs for all government data, appropriately aggregated where required, leveraging common infrastructure where possible.

  7. Recommendation: Establish a “policy difference engine” – a policy dashboard that tracks the top 10 or 20 policy objectives for the government of the day which includes meaningful metrics for each policy objective over time. This will enable the discovery of trends, the identification of whether policies are meeting their objectives, and supports an evidence based iterative approach to the policies because the difference made by any tweaks to the policy agenda will be evident.

  8. Recommendation: all publicly funded research data to be published publicly, and discoverable on central research data hub with free hosting available for research institutions. There has been a lot of work by ANDS and various research institutions to improve discovery of research data, but a large proportion is still only available behind a paywall or with an education logon. A central repository would reduce the barrier for research organisations to publicly publish their data.

  9. Recommendation: Require that major ICT and data initiatives consider cloud environments for the provision, hosting or analysis of data.

  10. Recommendation: Identify and then extend or provide commonly required spatial web services to support agencies in spatially enabling data. Currently individual agencies have to run their own spatial services but it would be much more efficient to have common spatial web services that all agencies could leverage.

Build data driven culture across the APS

  1. Strong Recommendation: Ensure data approaches are embedded in all major government investments. For example, if data sensors were built into major infrastructure projects, they would create more intelligence about how the infrastructure is used over time. If all major investments included data reporting, then perhaps it would be easier to keep projects on time and on budget.

  2. Recommendation: Establish a whole of government data skills program, not just for specialist skills, but to embed across the entire APS an understanding of data-driven approaches to the work of the public service. This would ideally include mandatory data training for management (in the same way OH&S and procurement are mandatory training). At C is a draft approach that could be taken.

  3. Recommendation: Require that all government contracts which create new data make that data available to the contracting government entity under a Creative Commons By Attribution licence, so that government funded data can be published publicly according to government policy. I have seen cases of contracts leaving ownership with companies and the data then not being reusable by government.

  4. Recommendation: Real data driven indicators should be required for all new policies, signed off by the data champions group, with the data for KPIs publicly available on data.gov.au for public access and to feed policy dashboards. Government entities must identify existing data to feed KPIs where possible – from government, the private sector or the community – and only propose new data collection where new data is clearly required.

  • Note: in the recently launched implementation report on the Public Data Management Report, it was good to see a new requirement to include evidence based on data analytics for new policy proposals and to consult with the Data Champions about how data can support new proposals. However, I believe it needs to go further and require data driven indicators to be identified up front and reported against throughout, as per the recommendation above. Evidence to support a proposal does not necessarily provide the ongoing evidence to ensure implementation of the proposal is successful or has the intended effect, especially in a rapidly changing environment.

  5. Recommendation: Establish relationships with the private sector to identify aggregate data points already used in the private sector that could be leveraged by the public sector rather than undertaking new data collection. This would be more efficient and accurate than collecting new data.

  6. Recommendation: Establish or extend a cross agency senior management data champions group with specific responsibilities to oversee the data agenda, sign off on data indicators for NPPs as realistic, and provide advice to Government and Finance on data infrastructure proposals across the APS.

  7. Recommendation: Investigate the possibilities for improving or building data sharing environments for better sharing data between agencies.

  8. Recommendation: Take a distributed and federated approach to linking unit record data. Secure API access to sensitive data would avoid creating a honey pot.

  9. Recommendation: Establish data awards as part of annual ICT Awards to include: most innovative analytics, most useful data infrastructure, best data publisher, best data driven policy.

  10. Recommendation: Extend the whole of government service analytics capability started at the DTO and provide access to all agencies to tap into a whole of government view of how users interact with government services and websites. This function and intelligence, if developed as per the original vision, would provide critical evidence of user needs as well as the impact of changes and useful automated service performance metrics.

  11. Recommendation: Support data driven publishing, including an XML platform for annual reports and budgets, and a requirement for the data underpinning all graphs and datavis in government publications to be published on data.gov.au.

  12. Recommendation: develop a whole of government approach to unit record aggregation of sensitive data to get consistency of approach and aggregation.

Implementation recommendations

  1. Move the Public Data Branch to an implementation agency – Currently the Public Data Branch sits in the Department of Prime Minister and Cabinet. Considering this Department is a policy entity, the question arises as to whether it is the right place in the longer term for an agenda which requires a strong implementation capability and focus. Public data infrastructure needs to be run like other whole of government infrastructure and would be better served as part of a broader online services delivery team. Possible options would include one of the shared services hubs, a data specialist agency with a whole of government mandate, or the office of the CTO (Finance) which runs a number of other whole of government services.


July 26, 2016

Gather-ing some thoughts on societal challenges

On the weekend I went to the GatherNZ event in Auckland, an interesting unconference. I knew there were going to be some pretty awesome people hanging out, which gave me a chance to catch up with some friends and introduce the family, hear some interesting ideas, and road test some ideas I’ve been having about where we are all heading in the future. I ran a session I called “Choose your own adventure, please” and it was packed! Below is a bit of a write up of what was discussed, as there was a lot of interest in how to keep the conversation going. I confess, I didn’t expect so much interest as to be asked where the conversation could be continued, but this is a good start I think. I was particularly chuffed when a few attendees said the session blew their minds :)

I’m going to be blogging a fair bit over the coming months on this topic in any case as it relates to a book I’m in the process of researching and writing, but more on that next week!

Choose your own adventure, please

We are at a significant tipping point in history. The world and the very foundations our society were built on have changed, but we are still largely stuck in the past in how we think and plan for the future. If we don’t make some active decisions about how we live, think and prioritise, then we will find ourselves subconsciously reinforcing the status quo at every turn and not in a position to genuinely create a better future for all. I challenge everyone to consider how they think and to actively choose their own adventure, rather than just doing what was done before.

How has the world changed? Well, many point to the changes in technology and science, and the impact these have had on our quality of life. I think the more interesting changes are in how power and perspectives have changed, which created the environment for scientific and technological progress in the first instance, but also created the ability for many, many more individuals to shape the world around them. We have seen traditional paradigms of scarcity, centralisation and closed systems be outflanked and outdated by modern shifts to surplus, distribution and open systems. When you were born a peasant and died one, what power did you have to affect your destiny? Nowadays individuals are more powerful than ever in our collective history, with the traditionally centralised powers of publishing, property, communications, monitoring and even enforcement now distributed internationally to anyone with access to a computer and the internet, which is over a third of the world’s population and growing. I blogged about this idea more here. Of course, these shifts are proving challenging for traditional institutions and structures to keep up with, but individuals are simply routing around these dinosaurs, putting such organisations in the uncomfortable position of either adapting or rendering themselves irrelevant.

Choices, choices, choices

We discussed a number of specific premises or frameworks that underpinned the development of much of the world we know today, but are now out of touch with the changing world we live in. It was a fascinating discussion, so thank you to everyone who came and contributed and although I think we only scratched the surface, I think it gave a lot of people food for thought :)

  • Open vs closed – open systems (open knowledge, data, government, source, science) are outperforming closed ones in almost everything from science, technology, business models, security models, government and political systems, human knowledge and social models. Open systems enable rapid feedback loops that support greater iteration and improvements in response to the world, and open systems create a natural motivation for the players involved to perform well and gain the benefits of a broader knowledge, experience and feedback base. Open systems also support a competitive collaborative environment, where organisations can collaborate on the common, but compete on their specialisation. We discussed how security by obscurity was getting better understood as a largely false premise and yet, there are still so many projects, decisions, policies or other initiatives where closed is the assumed position, in contrast to the general trend towards openness across the board.
  • Central to distributed – many people and organisations still act like kings in castles, protecting their stuff from the masses and only collaborating with walls and moats in place to keep out the riff raff. The problem is that everything is becoming more distributed, and the smartest people will never all be in the one castle, so if you want the best outcomes, be it for a policy, product, scientific discovery, service or anything else, you need to consider what is out there and how you can be a part of a broader ecosystem. Building on the shoulders of giants and being a shoulder for others to build upon. Otherwise you will always be slower than those who know how to be a node in the network. Although deeply hierarchical systems still exist, individuals are learning how to route around the hierarchy (which is only an imaginary construct in any case). There will always be specialists and the need for central controls over certain things however, if whatever you do is done in isolation, it will only be effective in isolation. Everything and everyone is more and more interconnected so we need to behave more in this way to gain the benefits, and to ensure what we do is relevant to those we do it for. By tapping into the masses, we can also tap into much greater capacity and feedback loops to ensure how we iterate is responsive to the environment we operate in. Examples of the shift included media, democracy, citizen movements, ideology, security, citizen science, gov as an API, transnational movements and the likely impact of blockchain technologies on the financial sector.
  • Scarcity to surplus – the shift from scarcity to surplus is particularly interesting because so much of our laws, governance structures, business models, trade agreements and rules for living are based around antiquated ideas of scarcity and property. We now apply the idea of ownership to everything and I shared a story of a museum claiming ownership on human remains taken from Australia. How can you own that and then refuse to repatriate the remains to that community? Copyright was developed when the ability to copy something was costly and hard. Given digital property (including a lot of “IP”) is so easily replicated with low/zero cost, it has wrought havoc with how we think about IP and yet we have continued to duplicate this antiquated thinking in a time of increasing surplus. This is a problem because new technologies could genuinely create surplus in physical properties, especially with the developments in nano-technologies and 3D printing, but if we bind up these technologies to only replicate the status quo, we will never realise the potential to solve major problems of scarcity, like hunger or poverty.
  • Nationalism and tribalism – because of global communications, more people feel connected with their communities of interest, which can span geopolitical, language, disability and other traditional barriers to forming groups. This will also have an impact on loyalties because people will have an increasingly complex relationship with the world around them. Citizens can and will increasingly jurisdiction shop for a nation that supports their lifestyle and ideological choices, the same way that multinational corporates have jurisdiction shopped for low tax, low regulation environments for some time. On a more micro level, individuals engage in us vs them behaviours all the time, and it gets in the way of working together.
  • Human augmentation and (dis)ability – what it means to look and be human will start to change as more human augmentation becomes mainstream: not just cosmetic augmentations, but functional ones. The body hacking movement has been playing with human abilities and has discovered that the human brain can literally adapt to and start to interpret foreign neurological inputs, which opens up the path to not just augmenting existing human abilities, but expanding and inventing new human abilities. If we consider that the Olympics have pretty much found the limit of natural human sporting achievement and have become arguably a bit boring, perhaps we could lift the limitations on the Paralympics and start to see rocket powered 100m sprints, or cyborg Judo competitions. As we start to explore what we can do with ourselves physically, neurologically and chemically, it will challenge a lot of views on what it means to be human. But why should we limit ourselves?
  • Outsourcing personal responsibility – with advances in technology, many have become lazy about how far their personal responsibility extends. We outsource small tasks, then larger ones, then strategy, then decision making, and we end up having no personal responsibility for major things in our world. Projects can fail, decisions become automated, ethics get buried in code, but individuals can keep their noses clean. We need to stop trying to avoid risk to the point where we don’t do anything and we need to ensure responsibility for human decisions are not automated beyond human responsibility.
  • Unconscious bias of privileged views, including digital colonialism – the need to be really aware of our assumptions and try to not simply reinvent the status quo or reinforce “structural white supremacy” as it was put by the contributor. Powerful words worth pondering! Explicit inclusion was put forward as something to prioritise.
  • Work – how we think about work! If we are moving into a more automated landscape, perhaps how we think about work will fundamentally change which would have enormous ramifications for the social and financial environment. Check out Tim Dunlop’s writing on this :)
  • Facts to sensationalism – the flow of information and communications are now so rapid that people, media and organisations are motivated to ever more sensationalism rather than considered opinions or facts. Definitely a shift worth considering!

Other feedback from the room included:

  • The importance of considering ethics, values and privilege in making decisions.
  • The ability to route around hierarchy, but the inevitable push back of established powers on the new world.
  • The idea that we go in cycles of power from centralised to distributed and back again. I confess, this idea is new to me and I’ll be pondering on it more.

Any feedback, thinking or ideas welcome in the comments below :) It was a fun session.

July 23, 2016

Gather Conference 2016 – Afternoon

The Gathering

Chloe Swarbrick

  • Whose responsibility is it to disrupt the system?
  • Maybe try and engage with the system we have for a start before writing it off.
  • You disrupt the system yourself or you hold the system accountable

Nick McFarlane

  • He wrote a book
  • Rock Stars are dicks to work with

So you want to Start a Business

  • Hosted by Reuben and Justin (the accountant)
  • Things you need to know in your first year of business
  • How serious is the business, what sort of structure
    • If you are serious, you have to do things properly
    • Have you got paying customers yet
    • Could just be an idea or a hobby
  • Sole Trader vs Incorporated company vs Trust vs Partnership
  • Incorporated
    • Directors and Shareholders needed to be decided on
    • Can take just half an hour
  • When to get a GST number?
    • If over $60k turnover a year
    • If you have lots of stuff you plan to claim back.
  • Have an accounting System from Day 1 – Xero Pretty good
  • Get an advisor or mentor that is not emotionally invested in your company
  • If partnership then split up responsibilities so you can hold each other accountable for specific items
  • If you are using Xero then your accountant should be using Xero directly not copying it into a different system.
  • Remuneration
    • Should have a shareholders agreement
    • PAYE possibility from drawings or put 30% aside
    • Even if only a small hobby company you will need to declare income to IRD, especially at a non-trivial level.
  • What Level to start at Xero?
    • Probably from the start if the business is intended to be serious
    • A bit of pain to switch over later
  • Don’t forget about ACC
  • Remember you are due provisional tax once you get over the $2500 for the previous year.
  • Home Office expense claim – claim percentage of home rent, power etc
  • Get in professionals to help

Diversity in Tech

  • Diversity is important
    • Why is it important?
    • Does it mean the same for everyone
  • If we have people with different “ways of thinking” then we will have diverse views, and therefore wider and better solutions
  • example: “a Polish engineer could analyse a Polish specific character input error”
  • example: “controlling a robot in Samoan” – robots are not just in English
  • Stereotypes tying some groups to specific jobs, eg “Indians in tech support”
  • Example: All hires went through University of Auckland so had done the same courses etc
  • How do you fix it when people innocently hire everyone from the same background? How do you break the pattern? Won’t the first different hire be expected to represent everybody in that group?
  • I didn’t want to be a trail-blazer
  • Wow’ed out at “Women in tech” event, first time saw “majority of people are like me” in a bar.
  • “If he is a white male and I’m going to hire him onto a team that is already full of white men, he’d better be exceptional”
  • Worried about the implied opposition of “diversity” vs “meritocracy”, and the suggestion that diverse candidates are not as good
  • Usual over-representation of white-males in the discussion even in topics like this.
  • Notion that somebody was only hired to represent diversity is very harmful especially for that person
  • If you are hiring for a tech position then 90% of your candidates will be white males; try to put your diversity effort into getting a more diverse group applying for the jobs rather than tilting the actual hiring.
  • Even in maker spaces where anyone is welcome, there are a lot fewer women. Blames men’s mags showing things unfinished while in women’s mags everything is perfect, so women don’t want to show off something that is unfinished.
  • Need to make the workforce diverse now to match the younger people coming into it
  • Need to cover “power income” people who are not exposed to tech
  • Even a small number are role models for the future for the young people today
  • Also need to address the problem of women dropping out of tech in the 30s and 40s. We can’t push girls into an “environment filled with acid”
  • Example: taking the “cocky arrogant males” out of classes into an “advanced stream” meant the remaining class saw women graduating and staying in at a much higher rate.

Podcasting

  • Paul Spain from Podcast New Zealand organising
  • Easiest to listen to when doing manual stuff or in car or bus
  • Need to avoid overload of commercials, eg interview people from the company about the topic of interest rather than about their product
  • Big firms putting money into podcasting
  • In the US 21% of the market are listening every single month. In NZ perhaps more like 5% since not a lot of awareness or local content
  • Some radio shows are being re-cut and published as podcasts
  • There is no good directory of NZ podcasts
  • Advise people to use proper equipment if possible, if it’s more than a one-off. Bad sound quality is very noticeable.
  • One person: 5 part series on immigration and immigrants in NZ
  • Making the charts is a big exposure
  • Apple’s “new and noteworthy” list
  • Domination by traditional personalities and existing broadcasters at present. But that only helps traction within New Zealand


Replacing a failed RAID drive

Here's the complete procedure I followed to replace a failed drive from a RAID array on a Debian machine.

Replace the failed drive

After seeing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:

smartctl -a /dev/sdb

Armed with this information, I shut down the computer, pulled the bad drive out and put the new blank one in.

Initialize the new drive

After booting with the new blank drive in, I copied the partition table using parted.

First, I took a look at what the partition table looks like on the good drive:

$ parted /dev/sda
unit s
print

and created a new empty one on the replacement drive:

$ parted /dev/sdb
unit s
mktable gpt

then I ran mkpart for all 4 partitions and made them all the same size as the matching ones on /dev/sda.

Finally, I ran toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) for all RAID partitions, before verifying using print that the two partition tables were now the same.
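To make the mkpart and toggle steps concrete, here is a rough sketch of the rest of that parted session on /dev/sdb. The start/end sectors below are placeholders for illustration only – use the exact values reported by print on /dev/sda (note that with a GPT label, the “primary” argument simply becomes the partition name):

mkpart primary 2048 4095
mkpart primary 4096 41945087
mkpart primary 41945088 58722303
mkpart primary 58722304 1953525134
toggle 1 bios_grub
toggle 2 raid
toggle 3 raid
toggle 4 raid
print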

Resync/recreate the RAID arrays

To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following on my RAID1 partitions:

mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4

and kept an eye on the status of this sync using:

watch -n 2 cat /proc/mdstat

In order to speed up the sync, I used the following trick:

blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max

Then, I recreated my RAID0 swap partition like this:

mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1

Because the swap partition is brand new (you can't restore a RAID0, you need to re-create it), I had to update two things:

  • replace the UUID for the swap mount in /etc/fstab, with the one returned by mkswap (or running blkid and looking for /dev/md1)
  • replace the UUID for /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by mdadm --detail --scan
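For example, the two UUIDs can be fetched like this (a small sketch; the device name matches the swap array above):

blkid /dev/md1
mdadm --detail --scan

On Debian it is usually also worth running update-initramfs -u after editing /etc/mdadm/mdadm.conf, so that the updated array definitions are included in the initramfs.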

Ensuring that I can boot with the replacement drive

In order to be able to boot from both drives, I reinstalled the grub boot loader onto the replacement drive:

grub-install /dev/sdb

before rebooting with both drives to first make sure that my new config works.

Then I booted without /dev/sda to make sure that everything would be fine should that drive fail and leave me with just the new one (/dev/sdb).

This test obviously gets the two drives out of sync, so I rebooted with both drives plugged in and then had to re-add /dev/sda to the RAID1 arrays:

mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4

Once that finished, I rebooted again with both drives plugged in to confirm that everything is fine:

cat /proc/mdstat

Then I ran a full SMART test over the new replacement drive:

smartctl -t long /dev/sdb
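The long test runs in the background and can take several hours. Once it completes, the outcome can be reviewed with:

smartctl -l selftest /dev/sdb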

July 22, 2016

Gather Conference 2016 – Morning

At the Gather Conference again for about the 6th time. It is a 1-day tech-orientated unconference held in Auckland every year.

The day is split into seven streamed sessions, each 40 minutes long (of about 8 parallel rooms of events that are each scheduled and run by attendees), plus an opening and a keynote session.

How to Steer your own career – Shirley Tricker

  • Asked people to put hands up on their current job situation: FT vs PT, single vs multiple jobs
  • Alternatives to traditional careers of work; it is possible to craft your own career
  • Recommended Blog – Free Range Humans
  • Job vs Career
    • Job – something you do for somebody else
    • Career – Unique to you, your life’s work
    • Career – What you do to make a contribution
  • Predicted that a greater number of people will not stay with one (or even 2 or 3) employers through their career
  • Success – defined by your goals, lifestyle wishes
  • What are your strengths – Know how you are valuable, what you can offer people/employers, ways you can branch out
  • Hard and Soft Skills (soft skills defined broadly, things outside a regular job description)
  • Develop soft skills
    • List skills and review ways to develop and improve them
    • Look at people you admire and copy them
    • Look at job descriptions
  • Skills you might need for a portfolio career
    • Good at organising, marketing, networking
    • flexible, work alone, negotiation
    • Financial literacy (handle your accounts)
  • Getting started
    • Start small (don’t give up your day job overnight)
    • Get training via work or independently
    • Develop your strengths
    • Fix weaknesses
    • Small experiments
    • cheap and fast (start a blog)
    • Don’t have to start out as an expert, you can learn as you go
  • Just because you are in control doesn’t make it easy
  • Resources
    • Careers.govt.nz
    • Seth Goden
    • Tim Ferris
    • eg outsources her writing.
  • Tools
    • Xero
    • WordPress
    • Canva for images
    • Meetup
    • Odesk and other freelance websites
  • Feedback from Audience
    • Have somebody to report to, eg meet with friend/adviser monthly to chat and bounce stuff off
    • Cultivate Women’s mentoring group
    • This doesn’t seem to filter through to young people, they feel they have to pick a career at 18 and go to university to prep for that.
    • Give advice to people – this also helps you define what you do
    • Try and make the world a better place: enjoy the work you are doing, be happy and proud of the outcome of what you are doing and be happy that it is making the world a bit better
    • How do I “motivate myself” without a push from an employer?
      • Do something that you really want to do so you won’t need external motivation
      • Find someone who is doing something right and see what they did
      • Awesome for introverts
    • If you want to start a startup then work for one to see what it is like and learn skills
    • You don’t have to have a startup in your 20s, you can learn your skills first.
    • Sometimes you have to do a crappy job at the start to get onto the cool stuff later. You have to look at the goal or path sometimes

Books and Podcasts – Tanya Johnson

Stuff people recommend

  • Intelligent disobedience – Ira
  • Hamilton the revolution – based on the musical
  • Never Split the difference – Chris Voss (ex hostage negotiator)
  • The Three Body Problem – Liu Cixin – Sci Fi series
  • Lucky Peach – Food and fiction
  • Unlimited Memory
  • The Black Swan and Fooled by Randomness
  • The Setup (usesthis.com) website
  • Tim Ferris Podcast
  • Freakonomics Podcast
  • Moonwalking with Einstein
  • Clothes, Music, Boy – Viv Albertine
  • TIP: Amazon Whispersync for Kindle App (audiobook across various platforms)
  • TIP: Blinkist – 15 minute summaries of books
  • An Intimate History of Humanity – Theodore Zeldin
  • How to Live – Sarah Bakewell
  • TIP: Pocketcasts is a good podcast app for Android.
  • Tested Podcast from Mythbusters people
  • Trumpcast podcast from Slate
  • A Fighting Chance – Elizabeth Warren
  • The Choice – Og Mandino
  • The Good life project Podcast
  • The Ted Radio Hour Podcast (on 1.5 speed)
  • This American Life
  • How to be a Woman by Caitlin Moran
  • The Hard thing about Hard things books
  • Flashboys
  • The Changelog Podcast – Interview people doing Open Source software
  • The Art of Possibility – Rosamund Zander
  • Red Rising Trilogy by Pierce Brown
  • On the Rag podcast by the Spinoff
  • Hamish and Andy podcast
  • Radiolab podcast
  • Hardcore History podcast
  • Car Talk podcast
  • Ametora – Story of Japanese menswear since WW2
  • .net rocks podcast
  • How not to be wrong
  • Savage Love Podcast
  • Friday Night Comedy from the BBC (especially the News Quiz)
  • Answer me this Podcast
  • Back to work podcast
  • Reply All podcast
  • The Moth
  • Serial
  • American Blood
  • The Productivity podcast
  • Keeping it 1600
  • Ruby Rogues Podcast
  • Game Change – John Heilemann
  • The Road less Travelled – M Scott Peck
  • The Power of Now
  • Snow Crash – Neal Stephenson

My Journey to becoming a Change Agent – Suki Xiao

  • At the start of 2015 she was a policy adviser at a Ministry
  • Didn’t feel connected to the job or to the people she was making policies for
  • Outside of work was a Youthline counsellor
  • Wanted to make a difference, organised some internal talks
  • Wanted to make changes, got told had to be a manager to make changes (10 years away)
  • Found out about R9 accelerator. Startup accelerator looking at Govt/Business interaction and pain points
  • Got seconded to it
  • First month was very hard.
  • Speed of change was difficult, “Lean into the discomfort” – Team motto
  • Be married to the problem
    • Specific problem was making sure there were enough seasonal workers; the team came up with a solution but customers didn’t like it. It was not solving the actual problem customers had.
    • Team was married to the problem, not married to the solution
  • When she went back to her old job, she found the slower pace hard to adjust to
  • Got offered a job back at the accelerator, coaching up to 7 teams.
    • Very hard work, lots of work, burnt out
    • 50% pay cut
    • Worked out wasn’t “Agile” herself
    • Started doing personal Kanban boards
    • Cut back number of teams coaching, higher quality
  • Spring Board
    • Place can work at sustainable pace
    • Working at Nomad 8 as an independent Agile consultant
    • Works with separate companies but has some support from colleagues
  • Find my place
    • Joined Xero as an Agile Team Facilitator
  • Takeaways
    • Anybody can be a change agent
    • An environment that supports and empowers
    • Look for support
  • Conversation on how you overcome the “Everest” (big huge goal)
    • Hard to get past the first step for some – speaker found she tended to do first, think later. Others over-thought beforehand
    • It seems hard but think of the hard things you have done in your life and it is usually not as bad
    • Motivate yourself by having no money and having no choice
    • Point all the bad things out in the open, visualise them all and feel better cause they will rarely happen
    • Learn to recognise your bad patterns of thoughts
    • “The War of Art” – Steven Pressfield (skip the Angels chapter)
  • Are places serious about Agile instead of just paying lip-service?
    • Questioner was older and found places wanted younger Agile coaches
    • Companies have to completely change as an organisation, eg replace project managers
    • eg CEO is still waterfall but people lower down are into Agile. Not enough management buy-in.
    • Speaker left one client that wasn’t serious about changing
  • Went through an Agile process, making “Putting Agile into the Org” the product
  • Show customers what the value is
  • Certification advice: there are all sorts of options. The Nomad8 course is recommended

 


802.1x Authentication on Debian

I recently had to setup some Linux workstations with 802.1x authentication (described as “Ethernet authentication”) to connect to a smart switch. The most useful web site I found was the Ubuntu help site about 802.1x Authentication [1]. But it didn’t describe exactly what I needed so I’m writing a more concise explanation.

The first thing to note is that the authentication mechanism works the same way as 802.11 wireless authentication, so it’s a good idea to have the wpasupplicant package installed on all laptops just in case you need to connect to such a network.

The first step is to create a wpa_supplicant config file; I named mine /etc/wpa_supplicant_SITE.conf. The file needs contents like the following:

network={
 key_mgmt=IEEE8021X
 eap=PEAP
 identity="USERNAME"
 anonymous_identity="USERNAME"
 password="PASS"
 phase1="auth=MD5"
 phase2="auth=CHAP password=PASS"
 eapol_flags=0
}

The first difference between what I use and the Ubuntu example is that I’m using “eap=PEAP”; that is determined by the way the network is configured, and whoever runs your switch can tell you the correct settings for that. The next difference is that I’m using “auth=CHAP” where the Ubuntu example has “auth=PAP”. The difference between those protocols is that CHAP has a challenge-response while PAP just has the password sent (maybe encrypted) over the network. If whoever runs the network says that they “don’t store unhashed passwords” or makes any similar claim then they are almost certainly using CHAP.

Change USERNAME and PASS to your user name and password.

wpa_supplicant -c /etc/wpa_supplicant_SITE.conf -D wired -i eth0

The above command can be used to test the operation of wpa_supplicant.

Successfully initialized wpa_supplicant
eth0: Associated with 00:01:02:03:04:05
eth0: CTRL-EVENT-EAP-STARTED EAP authentication started
eth0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=25
TLS: Unsupported Phase2 EAP method 'CHAP'
eth0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected
eth0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject=''
eth0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject=''
EAP-MSCHAPV2: Authentication succeeded
EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed
eth0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
eth0: CTRL-EVENT-CONNECTED - Connection to 00:01:02:03:04:05 completed [id=0 id_str=]

Above is the output of a successful test with wpa_supplicant. I replaced the MAC of the switch with 00:01:02:03:04:05. Strangely it doesn’t like “CHAP” but automatically selects “MSCHAPV2” and works; maybe anything other than “PAP” would do.

auto eth0
iface eth0 inet dhcp
  wpa-driver wired
  wpa-conf /etc/wpa_supplicant_SITE.conf

Above is a snippet of /etc/network/interfaces that works with this configuration.
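With that in place, you can bring the interface down and up again to test the whole chain using the standard ifupdown tools (adjust the interface name to suit):

ifdown eth0
ifup eth0

If the EAP authentication succeeds, the interface should get a DHCP lease shortly afterwards.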

July 21, 2016

Conversations on Collected Health Data

There are more and more wearable devices that collect a variety of health data, and other health records are kept electronically. More often than not, the people whose data it is don’t actually have access. There are very important issues to consider, and you could use this for a conversation with your students, and in assignments.

On the individual level, questions such as

  • Who should own your health data?
  • Should you be able to get an overview of who has what kind of your data?  (without fuzzy vague language)
  • Should you be able to access your own data? (directly out of a device, or online service where a device sends its data)
  • Should you be able to request a company to completely remove data from their records?

For society, questions like

  • Should a company be allowed to hoard data, or should they be required to make it accessible (open data) for other researchers?

A comment piece in this week’s Nature entitled “Lift the blockade on health data” could be used as a starting point for a conversation and for additional information:

http://nature.com/articles/doi:10.1038/535345a

Technology titans, such as Google and Apple, are moving into health. For all the potential benefits, the incorporation of people’s health data into algorithmic ‘black boxes’ could harm science and exacerbate inequalities, warn John Wilbanks and Eric Topol in a Comment piece in this week’s Nature. “When it comes to control over our own data, health data must be where we draw the line,” they stress.

Cryptic digital profiling is already shaping society; for example, online adverts are tailored to people’s age, location, spending and browsing habits. Wilbanks and Topol envision a future in which “companies are able to trade people’s disease profiles, unbeknown to them” and where “health decisions are abstruse and difficult to challenge, and advances in understanding are used to aggressively market health-related services to people — regardless of whether those services actually benefit their health.”

The authors call for a campaigning movement similar to the environmental one to break open how people’s data are being used, and to illuminate how such information could be used in the future. In their view, “the creation of credible competitors that are open source is the most promising way to regulate” corporations that have come to “resemble small nations in their own right”.

 

July 18, 2016

Social Engineering/Manipulation, Rigging Elections, and More

We recently had an election locally and I noticed how they were handing out 'How To Vote' cards which made me wonder. How much social engineering and manipulation do we experience each day/throughout our lives (please note, that all of the results are basically from the first few pages of any publicly available search engine)? - think about the education system and the way we're mostly taught to

July 16, 2016

GnuCOBOL: A Gnu Life for an Old Workhorse

COBOL is a business-orientated programming language that has been in use since 1959, making it one of the world's oldest programming languages.

Despite being much criticised (and for good reasons) it is still a major programming language in the financial sector, although there are a declining number of experienced programmers.


Making surface mount pcbs with a CNC machine

The cool kids™ like to use toaster ovens with thermocouples to bake their own surface mount boards at home. I've been exploring doing that using boards that I make on a CNC machine locally. The joy of designing in the morning and having the working product in the evening. It seems SOIC size is ok, but smaller SMT IC packages currently present an issue, which gives interesting fodder for working out how to increase precision further. Doing SOIC and SMD LEDs/resistors on a sub $1k CNC machine isn't too bad though IMHO. And unlike other pcb-specific CNC machines I can also cut wood and metal with my machine :-p


Time to stock up on some SOIC microcontrollers for some full board productions. It will be very interesting to see if I can do an SMD usb connector. Makes it a nice complete black box to do something and talk ROS over USB.

July 12, 2016

Using Smatch static analysis on OpenPOWER OPAL firmware

For Skiboot, I’m always looking at new automated systems to find bugs in the code. A little while ago, I read about the Smatch tool developed by some folks at Oracle (they also wrote about using it on the Linux kernel).

I was eager to try it with skiboot to see if it could find anything.

Luckily, it was pretty easy. I built Smatch according to their documentation and then built skiboot:

make CHECK="/home/stewart/smatch/smatch" C=1 -j20 all check

Due to some differences in how we implement abort() and assert() in skiboot, I added “_abort”, “abort” and “assert_fail” to smatch_data/no_return_funcs in the Smatch source tree to silence some false positives.
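For reference, no_return_funcs is a plain list of function names, one per line (at least in the Smatch version I used), so the additions are simply:

_abort
abort
assert_fail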

It seems that there are a few useful warnings there (some of which I’ve fixed in skiboot master already), along with some false positives around the preprocessor/compiler tricks we do to ensure at compile time that an OPAL call definition has the correct number of arguments specified.

So far, so good though. Try it on your project!

July 11, 2016

Pia, Thomas and little A’s Excellent Adventure – Week 3

The last fortnight has just flown past! We have been getting into the rhythm of being on holidays, a difficult task for a workaholic like yours truly! Meanwhile we have also caught a lot more fish (up to 57 now, 53 of which were released), have been keeping up with the studies, and little A has been (mostly) enjoying a broad range of new foods and experiences. The book is on hold for another week or two while I finish another project off.

Photos are added every few days to the flickr album.

Studies

My studies are going well. The two (final) subjects are “Law, Governance and Policy” and “White Collar Crime”. They are both great subjects and I’ve been thoroughly enjoying the readings, discussions and thinking critically about the issues therein. The White Collar Crime topic in particular has been fascinating! Each week we look at case studies of WCC in the news and there are some incredible issues every single week. A recent one directly relevant to us was the ACCC suing Heinz over a baby food advertised as “99% fruit” but made up of fruit concentrates and purees, resulting in a 67% sugar product. Wow! The advertising is all about how healthy it is and how it develops a taste for real foods in toddlers, but it is basically just a sugar hit worse than a soft drink!

Fishing and weather

We have been doing fairly well and the largest trout so far was 69cm (7.5 pounds). We are exploring the area and finding some great new spots but there is certainly some crowding on weekends! Although Thomas was lamenting the lack of rain the first week, it then torrented leaving him to lament about too much rain! Hopefully now we’ll get a good mix of both rain (for fish) and sunshine. Meanwhile it has been generally much warmer than Canberra and the place we are staying in is always toasty warm so we are very comfortable.

Catchups in Wellington and Auckland

We are planning to go to Auckland for Gather later this month and to Wellington for GovHack at the end of July and then for the OS/OS conference in August. The plan is to catch up with ALL TEH PEEPS during those trips which we are really looking forward to! Little A and I did a little one day fly in fly out trip to Wellington last week to catch up with the data.govt.nz team to exchange information and experience with running government data portals. It was great to see Nadia, Rowan and the team and to see the recent work happening with the new beta.data.govt.nz and to share some of the experience we had with data.gov.au. Thanks very much to the team for a great day and good luck in the next steps with your ambitious agenda! I know it will go well!

Visitors

Last week we had our first visitors. Thomas’ parents stayed with us for a week which has been lovely! Little A had a great time being pampered and we enjoyed showing them around. We had a number of adventures with them including some fishing, a trip to the local national park to see some beautiful volcanoes (still active!) and a place reminiscent of the Hydro Majestic in the Blue Mountains.

We also visited Te Porere Redoubt, a Maori defensive structure including trenches, and the site of an old Maori settlement. The trench warfare skills developed by the Maori were used in the New Zealand wars, and I got a few photos showing the deep trench running around the outside of the structure and the labyrinth in the middle. There is a photo of a picture of a fortified Maori town showing that large spikes would also have been used in the defensive structure, and potentially some kind of roof? Incredible use of tactical structures for defence. One for you Sherro!

Wolverine baby

Finally, we had a small incident with little A which really showed how resilient little kids are. We were bushwalking with little A in a special backpack for carrying children. I had to step across a small gap and checked out the brush but only saw the soft leaves of a tree. I stepped across and suddenly little A screamed! Thomas was right on to it (I couldn’t see what was happening) and there had been a tiny low hanging piece of bramble (thorny vine) at little A’s face height! He quickly disentangled her and we sat her down to see the damage and console her. It had caught on her neck and luckily only gave her a few very shallow scratches but she was inconsolable. Anyway, a few cuddles later, some antiseptic cream and a warm shower and little A was perfectly happy, playing with her usual toys whilst Thomas and I were still keyed up. The next day the marks were dramatically faded and within a couple of days you could barely see them. She is healing super fast, like a baby Wolverine :) She is happily enjoying a range of foods now and gets a lot of walks and some time at the local playgroup for additional socialisation.

July 09, 2016

The Moon tonight

July 08, 2016

Nexus 6P and Galaxy S5 Mini

Just over a month ago I ordered a new Nexus 6P [1]. I’ve had it for over a month now and it’s time to review it and the Samsung Galaxy S5 Mini I also bought.

Security

The first noteworthy thing about this phone is the fingerprint scanner on the back. The recommended configuration is to use your fingerprint for unlocking the phone which allows a single touch on the scanner to unlock the screen without the need to press any other buttons. To unlock with a pattern or password you need to first press the “power” button to get the phone’s attention.

I have been considering registering a fingerprint from my non-dominant hand to reduce the incidence of accidentally unlocking it when carrying it or fiddling with it.

The phone won’t complete the boot process before being unlocked. This is a good security feature.

Android version 6 doesn’t assign permissions to apps at install time; they have to be enabled at run time (at least for apps that support Android 6). So you get lots of questions while running apps about what they are permitted to do. Unfortunately there’s no “allow for the duration of this session” option.

A new Android feature prevents changing security settings when there is an “overlay running”. The phone instructs you to disable overlay access for the app in question but that’s not necessary. All that is necessary is for the app to stop using the overlay feature. I use the Twilight app [2] to dim the screen and use redder colors at night. When I want to change settings at night I just have to pause that app and there’s no need to remove the access from it – note that all the web pages and online documentation saying otherwise is wrong.

Another new feature is to not require unlocking while at home. This can be a convenience feature but fingerprint unlocking is so easy that it doesn’t provide much benefit. The downside of enabling this is that if someone stole your phone they could visit your home to get it unlocked. Also, police who didn’t have a warrant permitting the search of a phone could search it anyway without needing to compel the owner to give up the password.

Design

This is one of the 2 most attractive phones I’ve owned (the other being the sparkly Nexus 4). I think that the general impression of the appearance is positive as there are transparent cases on sale. My phone is white and reminds me of EVE from the movie Wall-E.

Cables

This phone uses the USB Type-C connector, which isn’t news to anyone. What I didn’t realise is that full USB-C requires that connector at both ends, as it’s not permitted to have a data cable with USB-C at the device end and USB-A at the host end. The Nexus 6P ships with a 1M long charging cable that has USB-C at both ends and a ~10cm charging cable with USB-C at one end and type A at the other (for the old batteries and the PCs that don’t have USB-C). I bought some 2M long USB-C to USB-A cables for charging my new phone with my old chargers, but I haven’t yet got a 1M long cable. Sometimes I need a cable that’s longer than 10cm but shorter than 2M.

The USB-C cables are all significantly thicker than older USB cables. Part of that would be due to having many more wires but presumably part of it would be due to having thicker power wires for delivering 3A. I haven’t measured power draw but it does seem to charge faster than older phones.

Overall the process of converting to USB-C is going to be a lot more inconvenient than USB SuperSpeed (which I could basically ignore as non-SuperSpeed connectors worked).

It will be good when laptops with USB-C support become common, it should allow thinner laptops with more ports.

One problem I initially had with my Samsung Galaxy Note 3 was the Micro-USB SuperSpeed socket on the phone being fiddly with the Micro-USB charging plug I used. After a while I got used to that but it was still an annoyance. Having a symmetrical plug that can go into the phone either way is a significant convenience.

Calendars and Contacts

I share most phone contacts with my wife and also have another list that is separate. In the past I had used the Samsung contacts system for the contacts that were specific to my phone and a Google account for contacts that are shared between our phones. Now that I’m using a non-Samsung phone I got another Gmail account for the purpose of storing contacts. Fortunately you can get as many Gmail accounts as you want. But it would be nice if Google supported multiple contact lists and multiple calendars on a single account.

Samsung Galaxy S5 Mini

Shortly after buying the Nexus 6P I decided that I spend enough time in pools and hot tubs that having a waterproof phone would be a good idea. Probably most people wouldn’t consider reading email in a hot tub on a cruise ship to be an ideal holiday, but it works for me. The Galaxy S5 Mini seems to be the cheapest new phone that’s waterproof. It is small and has a relatively low resolution screen, but it’s more than adequate for a device that I’ll use for an average of a few hours a week. I don’t plan to get a SIM for it, I’ll just use Wifi from my main phone.

One noteworthy thing is the amount of bloatware on the Samsung. Usually when configuring a new phone I’m so excited about fancy new hardware that I don’t notice it much. But this time buying the new phone wasn’t particularly exciting as I had just bought a phone that’s much better. So I had more time to notice all the annoyances of having to download updates to Samsung apps that I’ll never use. The Samsung device manager facility has been useful for me in the past and the Samsung contact list was useful for keeping a second address book until I got a Nexus phone. But most of the Samsung apps and 3rd party apps aren’t useful at all.

It’s bad enough having to install all the Google core apps. I’ve never read mail from my Gmail account on my phone. I use Fetchmail to transfer it to an IMAP folder on my personal mail server and I’d rather not have the Gmail app on my Android devices. Having any apps other than the bare minimum seems like a bad idea, more apps in the Android image means larger downloads for an over-the-air update and also more space used in the main partition for updates to apps that you don’t use.

Not So Exciting

In recent times there hasn’t been much potential for new features in phones. All phones have enough RAM and screen space for all common apps. While the S5 Mini has a small screen it’s not that small, I spent many years with desktop PCs that had a similar resolution. So while the S5 Mini was released a couple of years ago that doesn’t matter much for most common use. I wouldn’t want it for my main phone but for a secondary phone it’s quite good.

The Nexus 6P is a very nice phone, but apart from USB-C, the fingerprint reader, and the lack of a stylus there’s not much noticeable difference between that and the Samsung Galaxy Note 3 I was using before.

I’m generally happy with my Nexus 6P, but I think that anyone who chooses to buy a cheaper phone probably isn’t going to be missing a lot.

July 06, 2016

Where to Get a POWER8 Development VM

POWER8 sounds great, but where the heck can I get a Power VM so I can test my code?

This is a common question we get at OzLabs from other open source developers looking to port their software to the Power Architecture. Unfortunately, most developers don't have one of our amazing servers just sitting around under their desk.

Thankfully, there are a few IBM partners who offer free VMs for development use. If you're in need of a development VM, check out:

So, next time you wonder how you can test your project on POWER8, request a VM and get to it!

linux.conf.au 2017 wants your talks!


You might have noticed earlier this week that linux.conf.au 2017, which is happening in Hobart, Tasmania (and indeed, which I’m running!) has opened its call for proposals.

Hobart’s a wonderful place to visit in January – within a couple of hours’ drive, there’s wonderful undisturbed wilderness to go bushwalking in, historic sites from Tasmania’s colonial past, and countless wineries, distilleries, and other producers. Not to mention, the MONA Festival of Music and Arts will probably be taking place around the time of the conference. Couple that with temperate weather, and longer daylight hours than anywhere else in Australia, and there’s plenty of time to make the most of your visit.

linux.conf.au is – despite the name – one of the world’s best generalist Free and Open Source Software conferences. It’s been running annually since 1999, and this year, we’re inviting people to talk about the Future of Open Source.

That’s a really big topic area, so here’s how our CFP announcement breaks it down:

THE FUTURE OF YOUR PROJECT
linux.conf.au is well-known for deeply technical talks, and lca2017 will be no exception. Our attendees want to be the first to know about new and upcoming developments in the tools they already use every day, and they want to know about new open source technology that they’ll be using daily in two years’ time.

OPENNESS FOR EVERYONE
Many of the techniques that have made Open Source so successful in the software and hardware world are now being applied to fields as disparate as science, data, government, and the law. We want to know how Open Thinking will help to shape your field in the future, and more importantly, we want to know how the rest of the world can help shape the future of Open Source.

THREATS FROM THE FUTURE
It’s easy to think that Open Source has won, but for every success we achieve, a new challenge pops up. Are we missing opportunities in desktop and mobile computing? Why is the world suddenly running away from open and federated communications? Why don’t the new generation of developers care about licensing? Let’s talk about how Software Freedom and Open Source can better meet the needs of our users and developers for years to come.

WHATEVER YOU WANT!
It’s hard for us to predict the future, but we know that you should be a part of it. If you think you have something to say about Free and Open Source Software, then we want to hear from you, even if it doesn’t fit any of the categories above.

My friend, and former linux.conf.au director, Donna Benjamin blogged about the CFP on medium and tweeted the following yesterday:

At @linuxconfau in Hobart, I’d like to hear how people are USING free & open source software, and what they do to help tend the commons.

Our CFP closes on Friday 5 August – and we’re not planning on extending that deadline – so put your thinking caps on. If you have an idea for the conference, feel free to e-mail me for advice, or you can always ask for help on IRC – we’re in #linux.conf.au on freenode – or you can find us on Facebook or Twitter.

What does the future of Open Source look like? Tell us by submitting a talk, tutorial, or miniconf proposal now! We can’t wait to hear what you have to say.

July 05, 2016

Speaking in July 2016

  • Texas LinuxFest – July 8-9 2016 – Austin, Texas – I’ve never spoken at this event before but have heard great things about it. I’ve got a morning talk about what’s in MariaDB Server 10.1, and what’s coming in 10.2.
  • db tech showcase – July 13-15 2016 – Tokyo, Japan – I’ve regularly spoken at this event and it’s a case of a 100% pure database conference, with a very captive audience. I’ll be talking about the lessons one can learn from other people’s database failures (this is the kind of talk that keeps on changing and getting better as the software improves).
  • The MariaDB Tokyo Meetup – July 21 2016 – Tokyo, Japan – Not the traditional meetup timing, since it’s 1.30pm-7pm; there will be many talks and it’s organised by the folk behind the SPIDER storage engine. It should be fun to see many people, and food is being provided too. In Japanese: MariaDB コミュニティイベント in Tokyo, MariaDB Community Event in TOKYO.

Thunderbird Uses OpenGL – Who Knew?

I have a laptop and a desktop system (as well as a bunch of other crap, but let’s ignore that for a moment). Both laptop and desktop are running openSUSE Tumbleweed. I’m usually in front of my desktop, with dual screens, a nice keyboard and trackball, and the laptop is sitting with the lid closed tucked away under the desk. Importantly, the laptop is where my mail client lives. When I’m at my desk, I ssh from desktop to laptop with X forwarding turned on, then fire up Thunderbird, and it appears on my desktop screen. When I go travelling, I take the laptop with me, and I’ve still got my same email client, same settings, same local folders. Easy. Those of you considering heckling me for not using $any_other_mail_client and/or $any_other_environment, please save it for later.

Yesterday I had an odd problem. A new desktop system arrived, so I installed Tumbleweed, eventually ssh’d to my laptop, started Thunderbird, and…

# thunderbird

…nothing happened. There’s usually a little bit of junk on the console at that point, and the Thunderbird window should have appeared on my desktop screen. But it didn’t. strace showed it stuck in a loop, waiting for something:

wait4(22167, 0x7ffdfc669be4, 0, NULL)   = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGVTALRM {si_signo=SIGVTALRM, si_code=SI_TKILL, si_pid=22164, si_uid=1000} ---
rt_sigreturn({mask=[]})                 = -1 EINTR (Interrupted system call)
wait4(22167, 0x7ffdfc669be4, 0, NULL)   = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGVTALRM {si_signo=SIGVTALRM, si_code=SI_TKILL, si_pid=22164, si_uid=1000} ---
rt_sigreturn({mask=[]})                 = -1 EINTR (Interrupted system call)
wait4(22167, 0x7ffdfc669be4, 0, NULL)   = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGVTALRM {si_signo=SIGVTALRM, si_code=SI_TKILL, si_pid=22164, si_uid=1000} ---
rt_sigreturn({mask=[]})                 = -1 EINTR (Interrupted system call)

After an assortment of random dead ends (ancient and useless bug reports about Thunderbird and Firefox failing to run over remote X sessions), I figured I may as well attach a debugger to see if I could get any more information:

# gdb -p 22167
GNU gdb (GDB; openSUSE Tumbleweed) 7.11
[...]
Attaching to process 22167
Reading symbols from /usr/lib64/thunderbird/thunderbird-bin...
[...]
0x00007f2e95331a1d in poll () from /lib64/libc.so.6
(gdb) break
Breakpoint 1 at 0x7f2e95331a1d
(gdb) bt
#0 0x00007f2e95331a1d in poll () from /lib64/libc.so.6
#1 0x00007f2e8730b410 in ?? () from /usr/lib64/libxcb.so.1
#2 0x00007f2e8730cecf in ?? () from /usr/lib64/libxcb.so.1
#3 0x00007f2e8730cfe2 in xcb_wait_for_reply () from /usr/lib64/libxcb.so.1
#4 0x00007f2e86ecc845 in ?? () from /usr/lib64/libGL.so.1
#5 0x00007f2e86ec74b8 in ?? () from /usr/lib64/libGL.so.1
#6 0x00007f2e86e9a2a9 in ?? () from /usr/lib64/libGL.so.1
#7 0x00007f2e86e9654b in ?? () from /usr/lib64/libGL.so.1
#8 0x00007f2e86e966b3 in glXChooseVisual () from /usr/lib64/libGL.so.1
#9 0x00007f2e90fa0d6f in glxtest () at /usr/src/debug/thunderbird/mozilla/toolkit/xre/glxtest.cpp:230
#10 0x00007f2e90fa1003 in fire_glxtest_process () at /usr/src/debug/thunderbird/mozilla/toolkit/xre/glxtest.cpp:333
#11 0x00007f2e90f9b4cd in XREMain::XRE_mainInit (this=this@entry=0x7ffdfc66c448, aExitFlag=aExitFlag@entry=0x7ffdfc66c3ef) at /usr/src/debug/thunderbird/mozilla/toolkit/xre/nsAppRunner.cpp:3134
#12 0x00007f2e90f9ee27 in XREMain::XRE_main (this=this@entry=0x7ffdfc66c448, argc=argc@entry=1, argv=argv@entry=0x7ffdfc66d958, aAppData=aAppData@entry=0x7ffdfc66c648)
at /usr/src/debug/thunderbird/mozilla/toolkit/xre/nsAppRunner.cpp:4362
#13 0x00007f2e90f9f0f2 in XRE_main (argc=1, argv=0x7ffdfc66d958, aAppData=0x7ffdfc66c648, aFlags=) at /usr/src/debug/thunderbird/mozilla/toolkit/xre/nsAppRunner.cpp:4484
#14 0x00000000004054c8 in do_main (argc=argc@entry=1, argv=argv@entry=0x7ffdfc66d958, xreDirectory=0x7f2e9504a9c0) at /usr/src/debug/thunderbird/mail/app/nsMailApp.cpp:195
#15 0x0000000000404c4a in main (argc=1, argv=0x7ffdfc66d958) at /usr/src/debug/thunderbird/mail/app/nsMailApp.cpp:332
(gdb) continue
[Inferior 1 (process 22167) exited with code 01]

OK, so it’s libGL that’s waiting for something. Why is my mail client trying to do stuff with OpenGL?

Hang on! When I told gdb to continue, suddenly Thunderbird appeared, running properly, on my desktop display. WTF?

As far as I can tell, the problem is that my new desktop system has an NVIDIA GPU (nouveau drivers, BTW), and my laptop and previous desktop system both have Intel GPUs. Something about ssh’ing from the desktop with the NVIDIA GPU to the laptop with the Intel GPU, causes Thunderbird (and, indeed, any GL app — I also tried glxinfo and glxgears) to just wedge up completely. Whereas if I do the reverse (ssh from Intel GPU laptop to NVIDIA GPU desktop) and run GL apps, it works fine.

After some more Googling, I discovered I can make Thunderbird work properly over remote X like this:

# LIBGL_ALWAYS_INDIRECT=1 thunderbird

That will apparently cause glXCreateContext to return BadValue, which is enough to kick Thunderbird along. LIBGL_ALWAYS_SOFTWARE=1 works equally well to enable Thunderbird to function, while presumably still allowing it to use OpenGL if it really needs to for something (proof: LIBGL_ALWAYS_INDIRECT=1 glxgears fails, LIBGL_ALWAYS_SOFTWARE=1 glxgears gives me spinning gears).
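If you don’t want to remember the environment variable every time, a tiny wrapper script does the job. This is a minimal sketch, assuming the real binary lives at /usr/bin/thunderbird (adjust to suit):

#!/bin/sh
# Force indirect GLX so glXCreateContext fails fast instead of wedging
# over remote X, then hand over to the real Thunderbird.
LIBGL_ALWAYS_INDIRECT=1 exec /usr/bin/thunderbird "$@"

Give it a different name, or put it in $PATH ahead of the real binary.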

I checked Firefox too, and it of course has the same remote X problem, and the same solution.

Optical Action at a Distance

Generally when someone wants to install a Linux distro they start with an ISO file. Now we could burn that to a DVD, walk into the server room, and put it in our machine, but that's a pain. Instead let's look at how to do this over the network with Petitboot!

At the moment Petitboot won't be able to handle an ISO file unless it's mounted in an expected place (eg. as a mounted DVD), so we need to unpack it somewhere. Choose somewhere to host the result and unpack the ISO via whatever method you prefer. (For example bsdtar -xf /path/to/image.iso).
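Alternatively, if you'd rather not unpack the whole image, you can loop-mount it and copy out just the bits we care about. A sketch, with /srv/www/ubuntu standing in for wherever you're hosting the files (needs root):

mount -o loop /path/to/image.iso /mnt
cp -r /mnt/install /mnt/boot /srv/www/ubuntu/
umount /mnt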

You'll get a bunch of files but for our purposes we only care about a few: the kernel, the initrd, and the bootloader configuration file. Using the Ubuntu 16.04 ppc64el ISO as an example, these are:

./install/vmlinux
./install/initrd.gz
./boot/grub/grub.cfg

In grub.cfg we can see that the boot arguments are actually quite simple:

set timeout=-1

menuentry "Install" {
    linux   /install/vmlinux tasks=standard pkgsel/language-pack-patterns= pkgsel/install-language-support=false --- quiet
    initrd  /install/initrd.gz
}

menuentry "Rescue mode" {
    linux   /install/vmlinux rescue/enable=true --- quiet
    initrd  /install/initrd.gz
}

So all we need to do is create a PXE config file that points Petitboot towards the correct files.

We're going to create a PXE config file which you could serve from your DHCP server, but that does not mean we need to use PXE - if you just want a quick install you only need to make these files accessible to Petitboot, and then we can use the 'Retrieve config from URL' option to download the files.

Create a petitboot.conf file somewhere accessible that contains (for Ubuntu):

label Install Ubuntu 16.04 Xenial Xerus
    kernel http://myaccessibleserver/path/to/vmlinux
    initrd http://myaccessibleserver/path/to/initrd.gz
    append tasks=standard pkgsel/language-pack-patterns= pkgsel/install-language-support=false --- quiet

Then in Petitboot, select 'Retrieve config from URL' and enter http://myaccessibleserver/path/to/petitboot.conf. In the main menu your new option should appear - select it and away you go!
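If you don't have a web server handy, any quick static file server is enough for a one-off install. For example, from the directory containing petitboot.conf and the unpacked files (assuming Python 3 is available on the host):

python3 -m http.server 8080

Then use http://myaccessibleserver:8080/petitboot.conf as the URL.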

July 04, 2016

Windows 10 to Linux

There is a lot of noise at the moment about Microsoft’s new operating system called Windows 10. Without repeating all the details you can have a look, say here or here or here. The essence of the story is that Microsoft is making it very difficult to avoid the new operating system. The advice being given is to not install the upgrade – which is anything but easy – since Windows 7 is supported until 2020.

The reality is that staying with Windows 7 is only delaying the inevitable. There is no reason to believe that Microsoft’s offering in 2020 will be any better at respecting your ownership and every reason to think it will be worse. If you are one of these people considering sticking with Windows 7 then you have only two choices:

  • swallow your pride and update (either today or sometime in the next 4 years); or
  • migrate off the platform. If you migrate then, in practice, that means Linux (since Apple has similar beliefs about who really owns your computer).

In my opinion, if you actually want to own your own computer, you have to install Linux.


Swiss Professor starts Cybathlon

The Cybathlon will challenge assistive device developers to create technologies that thrive in day-to-day activities.

The prosthetic arm from the M.A.S.S. Impact team. (Credit: ETH Zurich)

While working as a professor in the sensory-motor systems lab at the Swiss Federal Institute of Technology in Zurich (ETH), Robert Riener noticed a need for assistive devices that would better meet the challenge of helping people with daily life. He knew there were solutions, but that it would require motivating developers to rise to the challenge.

So, Riener created Cybathlon, the first cyborg Olympics, in which teams from all over the world will participate in races in Zurich on Oct. 8 that test how well their devices perform routine tasks. Teams will compete in six different categories that will push their assistive devices to the limit on courses developed carefully over three years by physicians, developers and the people who use the technology. Eighty teams have signed up so far.

Riener wants the event to emphasize how important it is for man and machine to work together—so participants will be called pilots rather than athletes, reflecting the role of the assistive technology.

“The goal is to push the development in the direction of technology that is capable of performing day-to-day tasks. And that way, there will be an improvement in the future life of the person using the device,” says Riener.

Read more at http://blogs.discovermagazine.com/crux/2016/06/22/a-sneak-peek-at-the-first-cyborg-olympics/

and CYBATHLON: Championship for Athletes with Disabilities

July 01, 2016

Spartan: A New Architecture for Research Computing

Thursday June 30th, at the Gryphon Gallery at the University of Melbourne, was the official launch of the 'Spartan' high-performance computing and cloud hybrid. Speakers at the launch included Dr Stephen Giugni, Director, Research Platform Services; Prof Margaret Sheil, Acting Vice Chancellor of the University of Melbourne; Professor Richard Sinnott, Director, eResearch and Professor of Applied Computing Systems; Mr Bernard Meade, Head of Research Compute Services, Research Platform Services; and yours truly, in my role as HPC Support Engineer, Research Platform Services.

As I argued in my presentation, the great advantage of Spartan is that it is designed around what users need. Based on research from the previous general compute resource, Edward, most people wanted to submit lots of jobs with a relatively small core count and memory footprint using data parallel approaches, but some really needed large core counts with a fast interconnect. Putting the two types of users on the same system was not ideal. Also, engineers tend to want performance from a system, whereas managers want flexibility. Spartan provides both through its partitioning system. I am convinced that this will be the architecture of future research computing.

Spartan's launch has received extensive media coverage, including high ranking sites such as HPC Wire, Gizmodo, and Delimiter. In addition to the aforementioned speakers, particular thanks must also be given to Linh Vu, Daniel Tosello, and Chris Samuel for their engineering excellence in helping put together the system, and to Greg Sauter for his project management (and for his photography). Welcome to Spartan!


Use your Electoral Right to Vote

Did you know…

In 1863, the state of Victoria allowed everyone on the municipal rolls to vote, which included women, who voted in the 1864 general election. This was regarded as a mistake by the men in government, and the law was changed to exclude women in 1865. It then took another 40 years before women got the vote again – first federally, then in each of the states individually.

People fought for these and other rights.  You now have the power to choose for what you believe is right.

Please use your electoral right to vote.  It’s important.

Linux Security Summit 2016 Schedule Published

The schedule for the 2016 Linux Security Summit is now published!

The keynote speaker for this year’s event is Julia Lawall.  Julia is a research scientist at Inria, the developer of Coccinelle, and the Linux Kernel coordinator for the Outreachy project.

Refereed presentations include:

See the schedule for the full list of talks.

Also included are updates from Linux kernel security subsystem maintainers, and snacks.

The event this year is co-located with LinuxCon North America in Toronto, and will be held on the 25th and 26th of August.  Standalone registration for the Linux Security Summit is $100 USD: click here to register.

You can also follow updates and news for the event via Twitter:  @LinuxSecSummit
See you there!

OLPC Australia training resources

Underpinning the OLPC Australia education programme is a cache of training resources. In addition to our Online Course and Learner Manual, we have a set of help videos, hosted on our Vimeo channel. I have updated the OLPC Disassembly instructions for the top and bottom of the XO with links to the videos.

OLPC Australia Education Newsletter

Editions 7 and 8 of the OLPC Australia Education Newsletter have come out in the past few weeks. In each edition, we will provide news, tips and tricks and stories from the field.

To subscribe, send an e-mail to education-newsletter+subscribe@laptop.org.au.

Creating an Education Programme

OLPC Australia had a strong presence at linux.conf.au 2012 in Ballarat, two weeks ago.

I gave a talk in the main keynote room about our educational programme, in which I explained our mission and how we intend to achieve it.

Even if you saw my talk at OSDC 2011, I recommend that you watch this one. It is much improved and contains new and updated material. The YouTube version is above, but a higher quality version is available for download from Linux Australia.

The references for this talk are on our development wiki.

Here’s a better version of the video I played near the beginning of my talk:

I should start by pointing out that OLPC is by no means a niche or minor project. XO laptops are in the hands of 8000 children in Australia, across 130 remote communities. Around the world, over 2.5 million children, across nearly 50 countries, have an XO.

Investment in our Children’s Future

The key point of my talk is that OLPC Australia have a comprehensive education programme that highly values teacher empowerment and community engagement.

The investment to provide a connected learning device to every one of the 300 000 children in remote Australia is less than 0.1% of the annual education and connectivity budgets.

For low socio-economic status schools, the cost is only $80 AUD per child. Sponsorships, primarily from corporates, allow us to subsidise most of the expense (you too can donate to make a difference). Also keep in mind that this is a total cost of ownership, covering the essentials like teacher training, support and spare parts, as well as the XO and charging rack.

While our principal focus is on remote, low socio-economic status schools, our programme is available to any school in Australia. Yes, that means schools in the cities as well. The investment for non-subsidised schools to join the same programme is only $380 AUD per child.

Comprehensive Education Programme

We have a responsibility to invest in our children’s education — it is not just another market. As a not-for-profit, we have the freedom and the desire to make this happen. We have no interest in vendor lock-in; building sustainability is an essential part of our mission. We have no incentive to build a dependency on us, and every incentive to ensure that schools and communities can help themselves and each other.

We only provide XOs to teachers who have been sufficiently enabled. Their training prepares them to constructively use XOs in their lessons, and is formally recognised as part of their professional development. Beyond the minimum 15-hour XO-certified course, a teacher may choose to undergo a further 5-10 hours to earn XO-expert status. This prepares them to be able to train other teachers, using OLPC Australia resources. Again, we are reducing dependency on us.

OLPC Australia certifications

Training is conducted online, after the teacher signs up to our programme and they receive their XO. This scales well to let us effectively train many teachers spread across the country. Participants in our programme are encouraged to participate in our online community to share resources and assist one another.

OLPC Australia online training process

We also want to recognise and encourage children who have shown enthusiasm and aptitude, with our XO-champion and XO-mechanic certifications. Not only does this promote sustainability in the school and give invaluable skills to the child, it reinforces our core principle of Child Ownership. Teacher aides, parents, elders and other non-teacher adults have the XO-basics (formerly known as XO-local) course designed for them. We want the child’s learning experience to extend to the home environment and beyond, and not be constrained by the walls of the classroom.

There’s a reason why I’m wearing a t-shirt that says “No, I won’t fix your computer.” We’re on a mission to develop a programme that is self-sustaining. We’ve set high goals for ourselves, and we are determined to meet them. We won’t get there overnight, but we’re well on our way. Sustainability is about respect. We are taking the time to show them the ropes, helping them to own it, and developing our technology to make it easy. We fundamentally disagree with the attitude that ordinary people are not capable enough to take control of their own futures. Vendor lock-in is completely contradictory to our mission. Our schools are not just consumers; they are producers too.

As explained by Jonathan Nalder (a highly recommended read!), there are two primary notions guiding our programme. The first is that the nominal $80 investment per child is just enough for a school to take the programme seriously and make them a stakeholder, greatly improving the chances for success. The second is that this is a schools-centric programme, driven from grassroots demand rather than being a regime imposed from above. Schools that participate genuinely want the programme to succeed.

OLPC Australia programme cycle

Technology as an Enabler

Enabling this educational programme is the clever development and use of technology. That’s where I (as Engineering Manager at OLPC Australia) come in. For technology to be truly intrinsic to education, there must be no specialist expertise required. Teachers aren’t IT professionals, and nor should they be expected to be. In short, we are using computers to teach, not teaching computers.

The key principles of the Engineering Department are:

  • Technology is an integral and seamless part of the learning experience – the pen and paper of the 21st century.
  • To eliminate dependence on technical expertise, through the development and deployment of sustainable technologies.
  • Empowering children to be content producers and collaborators, not just content consumers.
  • Open platform to allow learning from mistakes… and easy recovery.

OLPC have done a marvellous job in their design of the XO laptop, giving us a fantastic platform to build upon. I think that our engineering projects in Australia have been quite innovative in helping to cover the ‘last mile’ to the school. One thing I’m especially proud of is our insistence on openness. We turn traditional systems administration practice on its head to completely empower the end-user. Technology that is deployed in corporate or educational settings is typically locked down to make administration and support easier. This takes control completely away from the end-user. They are severely limited in what they can do, and if something doesn’t work as they expect then they are totally at the mercy of the admins to fix it.

In an educational setting this is disastrous — it severely limits what our children can learn. We learn most from our mistakes, so let’s provide an environment in which children are able to safely make mistakes and recover from them. The software is quite resistant to failure, both at the technical level (being based on Fedora Linux) and at the user interface level (Sugar). If all goes wrong, reinstalling the operating system and restoring a journal (Sugar user files) backup is a trivial endeavour. The XO hardware is also renowned for its ruggedness and repairability. Less well-known are the amazing diagnostics tools, providing quick and easy indication that a component should be repaired/replaced. We provide a completely unlocked environment, with full access to the root user and the firmware. Some may call that dangerous, but I call that empowerment. If a child starts hacking on an XO, we want to hire that kid 🙂

Evaluation

My talk features the case study of Doomadgee State School, in far-north Queensland. Doomadgee have very enthusiastically taken on board the OLPC Australia programme. Every one of the 350 children aged 4-14 has been issued with an XO, as part of a comprehensive professional development and support programme. Since commencing in late 2010, the percentage of Year 3 pupils at or above national minimum standards in numeracy has leapt from 31% in 2010 to 95% in 2011. Other scores have also increased. Think what you may about NAPLAN, but nevertheless that is a staggering improvement.

In federal parliament, Robert Oakeshott MP has been very supportive of our mission:

Most importantly of all, quite simply, One Laptop per Child Australia delivers results in learning from the 5,000 students already engaged, showing impressive improvements in closing the gap generally and lifting access and participation rates in particular.

We are also engaged in longitudinal research, working closely with respected researchers to have a comprehensive evaluation of our programme. We will release more information on this as the evaluation process matures.

Join our mission

Schools can register their interest in our programme on our Education site.

Our Prospectus provides a high-level overview.

For a detailed analysis, see our Policy Document.

If you would like to get involved in our technical development, visit our development site.

Credits

Many thanks to colleagues Rangan Srikhanta (CEO) and Tracy Richardson (Education Manager) for some of the information and graphics used in this article.

Interview with Australian Council for Computers in Education Learning Network

Adam Holt and I were interviewed last night by the Australian Council for Computers in Education Learning Network about our not-for-profit work to improve educational opportunities for children in the developing world.

We talked about One Laptop per Child, OLPC Australia and Sugar Labs. We discussed the challenges of providing education in the developing world, and how that compares with the developed world.

Australia poses some of its own challenges. The country is 90% urbanised, with the remaining 10% scattered across vast distances. The circumstances of these communities often share both developed and developing world characteristics. We developed the One Education programme to accommodate this.

These lessons have been developed further into Unleash Kids, an initiative that we are currently working on to support the community of volunteers worldwide and take the movement to the next level.

XO-AU OS 12.0 Release Candidate 2 released

Release Candidate 2 of the 2012 OLPC Australia operating system, XO-AU OS 12, has been released. We hope to make a final release in two weeks, in time for the start of term 2 of school in Queensland and Northern Territory.

To get started, visit our release notes page.

Installing the Release Candidate is no different from installing the XO-AU USB 3 stable release: extract the zip file to a USB stick and you’re ready to go.

The “What’s New” section outlines the changes in this release.

To provide feedback, please join our technical mailing list.

Following this, you can send your comments or ask questions on the list. The OLPC Australia Engineering team are active participants on this list, and we will reply. Remember, the better you can help us with quality information, the better we can make the product for you 🙂

HTML5 support in Browse

One of the most exciting improvements in OLPC OS 12.1.0 is a revamped Browse activity:

Browse, Wikipedia and Help have been moved from Mozilla to WebKit internally, as the Mozilla engine can no longer be embedded into other applications (like Browse) and Mozilla has stated officially that it is unsupported. WebKit has proven to be a far superior alternative and this represents a valuable step forward for Sugar’s future. As a user, you will notice faster activity startup time and a smoother browsing experience. Also, form elements on webpages are now themed according to the system theme, so you’ll see Sugar’s UI design blending more into the web forms that you access.

In short, the Web will be a nicer place on XOs. These improvements (and more!) will be making their way onto One Education XOs (such as those in Australia) in 2013.

Here are the results from the HTML5 Test using Browse 140 on OLPC OS 12.1.0 on an XO-1.75. The final score (345 and 15 bonus points) compares favourably against other Web browsers. Firefox 14 running on my Fedora 17 desktop scores 345 and 9 bonus points.

Update: Rafael Ortiz writes, “For the record, previous non-WebKit versions of Browse only got 187 points on html5test; my beta Chrome has 400 points, so it’s a great advance!”

(Screenshots: The HTML5 test - How well does your browser support HTML5, pages 01-10)

Interviews from the field

Oracle, a sponsor of OLPC Australia, have posted some video interviews of a child and a teacher involved in the One Education programme.

A Complete Literacy Experience For Young Children

From the “I should have posted this months ago” vault…

When I led technology development at One Laptop per Child Australia, I maintained two golden rules:

  1. everything that we release must ‘just work’ from the perspective of the user (usually a child or teacher), and
  2. no special technical expertise should ever be required to set-up, use or maintain the technology.

In large part, I believe that we were successful.

Once the more obvious challenges have been identified and cleared, some more fundamental problems become evident. Our goal was to improve educational opportunities for children as young as possible, but proficiently using computers to input information can require a degree of literacy.

Sugar Labs have done stellar work in questioning the relevance of the desktop metaphor for education, and in coming up with a more suitable alternative. This proved to be a remarkable platform for developing a touch-screen laptop, in the form of the XO-4 Touch: the icons-based user interface meant that we could add touch capabilities with relatively few user-visible tweaks. The screen can be swivelled and closed over the keyboard as with previous models, meaning that this new version can be easily converted into a pure tablet at will.

Revisiting Our Assumptions

Still, a fundamental assumption has long gone unchallenged on all computers: the default typeface and keyboard. It doesn’t at all represent how young children learn the English alphabet or literacy. Moreover, at OLPC Australia we were often dealing with children who were behind on learning outcomes, and who were attending school with almost no exposure to English (since they speak other languages at home). How are they supposed to learn the curriculum when they can barely communicate in the classroom?

Looking at a standard PC keyboard, you’ll see that the keys are printed with upper-case letters. And yet, that is not how letters are taught in Australian schools. Imagine that you’re a child who still hasn’t grasped his/her ABCs. You see a keyboard full of unfamiliar symbols. You press one, and on the screen pops up a completely different looking letter! The keyboard may be in upper-case, but by default you’ll get the lower-case variants on the screen.

A standard PC keyboard

Unfortunately, the most prevalent touch-screen keyboard on the market isn’t any better. Given the large education market for its parent company, I’m astounded that this has not been a priority.

The Apple iOS keyboard

Better alternatives exist on other platforms, but I was still not satisfied.

A Re-Think

The solution required an examination of how children learn, and the challenges that they often face when doing so. The end result is simple, yet effective.

The standard OLPC XO mechanical keyboard (above) versus the OLPC Australia Literacy keyboard (below)

This image contrasts the standard OLPC mechanical keyboard with the OLPC Australia Literacy keyboard that we developed. Getting there required several considerations:

  1. a new typeface, optimised for literacy
  2. a cleaner design, omitting characters that are not common in English (they can still be entered with the AltGr key)
  3. an emphasis on lower-case
  4. upper-case letters printed on the same keys, with the Shift arrow angled to indicate the relationship
  5. better use of symbols to aid instruction

One interesting user story with the old keyboard that I came across was in a remote Australian school, where Aboriginal children were trying to play the Maze activity by pressing the opposite arrows that they were supposed to. Apparently they thought that the arrows represented birds’ feet! You’ll see that we changed the arrow heads on the literacy keyboard as a result.

We explicitly chose not to change the QWERTY layout. That’s a different debate for another time.

The Typeface

The abc123 typeface is largely the result of work I did with John Greatorex. It is freely downloadable (in TrueType and FontForge formats) and open source.

After much research and discussions with educators, I was unimpressed with the other literacy-oriented fonts available online. Characters like ‘a’ and ‘9’ (just to mention a couple) are not rendered in the way that children are taught to write them. Young children are also susceptible to confusion over letters that look similar, including mirror-images of letters. We worked to differentiate, for instance, the lower-case L from the upper-case i, and the lower-case p from the lower-case q.

Typography is a wonderfully complex intersection of art and science, and it would have been foolhardy for us to have started from scratch. We used as our base the high-quality DejaVu Sans typeface. This gave us a foundation that worked well on screen and in print. Importantly for us, it maintained legibility at small point sizes on the 200dpi XO display.

On the Screen

abc123 is a suitable substitute for DejaVu Sans. I have been using it as the default user interface font in Ubuntu for over a year.

It looks great in Sugar as well. The letters are crisp and easy to differentiate, even at small point sizes. We made abc123 the default font for both the user interface and in activities (applications).

The abc123 font in Sugar’s Write activity, on an XO laptop screen

Likewise, the touch-screen keyboard is clear and simple to use.

The abc123 font on the XO touch-screen keyboard, on an XO laptop screen

The end result is a more consistent literacy experience across the whole device. What you press on the hardware or touch-screen keyboard will be reproduced exactly on the screen. What you see on the user interface is also what you see on the keyboards.

XO-1 Training Pack

Our One Education programme is growing like crazy, and many existing deployments are showing interest. We wanted to give them a choice of using their own XOs to participate in the teacher training, rather than requiring them to purchase new hardware. Many have developer-locked XO-1s, necessitating a different approach than our official One Education OS.

The solution is our XO-1 Training Pack. This is a reconfiguration of OLPC OS 10.1.3 to be largely consistent with our 10.1.3-au release. It has been packaged for easy installation.

Note that this is not a formal One Education OS release, and hence is not officially supported by OLPC Australia.

If you’d like to take part in the One Education programme, or have questions, use the contact form on the front page.

Update: We have a list of improvements in 10.1.3-au builds over the OLPC OS 10.1.3 release. Note that some features are not available in the XO-1 Training Pack owing to the lesser storage space available on XO-1 hardware. The release notes have been updated with more detail.

Update: More information on our One News site.

OLPC Australia Education Newsletter, Edition 9

Edition 9 of the OLPC Australia Education Newsletter is now available.

In this edition, we provide a few classroom ideas for mathematics, profile the Jigsaw activity, de-mystify the Home views in Sugar and hear about the OLPC journey of Girraween Primary School.

To subscribe to receive future updates, send an e-mail to education-newsletter+subscribe@laptop.org.au.

Apache + YAJL

https://github.com/Maxime2/stan-challenge - here on GitHub is my answer to the Stan code challenge. It is an example of how one can use a SAX-like streaming parser inside an Apache module to process JSON with minimal delay.

A custom-made Apache module gives you some savings on request processing time by avoiding the invocation of an interpreter for any programming language (like PHP, Python or Go) to handle the request. The streaming parser can start processing JSON as soon as the first buffer is filled with data, while the rest of the request is still in transmission. And again, as it is an Apache module, the response can start being constructed while the request is still being processed (and still transmitting).
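
For a concrete feel of the SAX-like callback style that YAJL offers, here is a minimal sketch (an illustration only, not the code from the module; the handler and input are invented):

#include <stdio.h>
#include <yajl/yajl_parse.h>

/* Called by YAJL for every integer it sees in the stream. */
static int on_integer(void *ctx, long long val)
{
    long long *sum = ctx;
    *sum += val;    /* handle the value as soon as it is parsed */
    return 1;       /* non-zero tells YAJL to keep going */
}

static const yajl_callbacks callbacks = {
    .yajl_integer = on_integer,   /* other events left unhandled */
};

int main(void)
{
    long long sum = 0;
    yajl_handle h = yajl_alloc(&callbacks, NULL, &sum);
    const unsigned char json[] = "[1, 2, 3]";

    /* In a module you would call yajl_parse() once per bucket of
       request data as it arrives, rather than once at the end. */
    if (yajl_parse(h, json, sizeof(json) - 1) == yajl_status_ok)
        yajl_complete_parse(h);

    yajl_free(h);
    printf("sum: %lld\n", sum);
    return 0;
}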

A Taste of IBM

As a hobbyist programmer and Linux user, I was pretty stoked to be able to experience real work in the IT field that interests me most, Linux. With a mainly disconnected understanding of computer hardware and software, I braced myself to entirely relearn everything and anything I thought I knew. Furthermore, I worried that my usefulness in a world of maintainers, developers and testers would not be enough to provide any real contribution to the company. In actual fact however, the employees at OzLabs (IBM ADL) put a really great effort into making use of my existing skills, were attentive to my current knowledge and just filled in the gaps! The knowledge they've given me is practical, interlinked with hardware and provided me with the foot-up that I'd been itching for to establish my own portfolio as a programmer. I was both honoured and astonished by their dedication to helping me make a truly meaningful contribution!

On applying for the placement, I listed my skills and interests. Having a Mathematics and Science background, I listed among my greatest interests the development of scientific simulations and graphics using libraries such as Python's matplotlib and R. From the first day they put me to work, researching and implementing a routine in R that would qualitatively model the ability of a system to perform common tasks - a benchmark. A series of these microbenchmarks were made; I was in my element and actually able to contribute to a corporation much larger than I could imagine. The team at IBM reinforced my knowledge from the ground up, introducing the rigorous hardware and corporate elements at a level I was comfortable with.

I would say that my greatest single piece of take-home knowledge over the two weeks was knowledge of the Linux Kernel project, Git and GitHub. Having met the arch/powerpc and linux-next maintainers in person placed the Linux and Open Source development cycle in an entirely new perspective. I was introduced to the world of GitHub, and thanks to a few rigorous lessons of Git, I now have access to tools that empower me to safely and efficiently write code, and to build a public portfolio I can be proud of. Most members of the office donated their time to instruct me on all fronts, whether to do with career paths, programming expertise or conceptual knowledge, and the rest were all very good for a chat.

Approaching the tail-end of Year Twelve, I was blessed with some really good feedback and recommendations regarding further study. If during the two weeks I had any query regarding anything ranging from work-life to programming expertise even to which code editor I should use (a source of much contention) the people in the office were very happy to help me. Several employees donated their time to teach me really very intensive and long lessons regarding the software development concepts, including (but not limited to!) a thorough and helpful lesson on Git that was just on my level of understanding.

Working at IBM these past two weeks has not only bridged the gap between my hobby and my professional prospects, but more importantly established friendships with professionals in the field of Software Development. Without a doubt this really great experience of an environment that rewards my enthusiasm will fondly stay in my mind as I enter the next chapter of my life!

Using X-Plane 10 with ArduPilot SITL

ArduPilot has been able to use X-Plane as a HIL (hardware in the loop) backend for quite some time, but it never worked particularly well as the limitations of the USB interface to the hardware prevented good sensor timings.

We have recently added the ability to use X-Plane 10 as a SITL backend, which works much better. The SITL (software in the loop) system runs ArduPilot natively on your desktop machine, and talks to X-Plane directly using UDP packets.

The above video demonstrates flying a Boeing 747-400 in X-Plane 10 using ArduPilot SITL. It flies nicely, and does an automatic takeoff and landing quite well. You can use almost any of the fixed wing aircraft in X-Plane with ArduPilot SITL, which opens up a whole world of simulation to explore. Many people create models of their own aircraft in order to test out how they will fly or to test them in conditions (such as very high wind) that may be dangerous to test with a real model.

I have written up some documentation on how to use X-Plane 10 with SITL to help people get started. Right now it only works with X-Plane 10 although I may add support for X-Plane 9 in the future.

Michael Oborne has added nice support for using X-Plane with SITL in the latest beta of MissionPlanner, and does nightly builds of the SITL binary for Windows. That avoids the need to build ArduPilot yourself if you just want to fly the standard code rather than modify it.

Limitations

There are some limitations to the X-Plane SITL backend. First off, X-Plane has quite slow network support. On my machine I typically get a sensor data rate of around 27Hz, which is far below the 1200 Hz we normally use for simulation. To overcome this the ArduPilot SITL code does sensor extrapolation to bring the rate up to around 900Hz, which is plenty for SITL to run. That extrapolation introduces small errors which can make the ArduPilot EKF state estimator unhappy. To avoid that problem we run with "EKF type 10" which is a fake AHRS interface that gets all state information directly from the simulator. That means you can't use the X-Plane SITL backend to test EKF settings.
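
To illustrate the idea (a rough sketch only, not the actual ArduPilot extrapolation code), filling the gap between slow simulator updates can be as simple as projecting the last two real samples forward in time:

#include <stdio.h>

/* One real sensor sample from the simulator. */
struct sample {
    double t;   /* time in seconds */
    double v;   /* sensor value    */
};

/* Linearly project the last two real samples forward to time t. */
static double extrapolate(struct sample a, struct sample b, double t)
{
    double slope = (b.v - a.v) / (b.t - a.t);
    return b.v + slope * (t - b.t);
}

int main(void)
{
    struct sample a = { 0.000, 1.00 };  /* ~27Hz updates from X-Plane */
    struct sample b = { 0.037, 1.10 };
    /* ...queried at ~900Hz between the real updates */
    printf("%.4f\n", extrapolate(a, b, 0.038));
    return 0;
}

The drift between such projected values and the next real sample is exactly the kind of small error that upsets the EKF.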

The next limitation is that the simulation fidelity depends somewhat on the CPU load on your machine. That is an unfortunate consequence of X-Plane not supporting lock-step scheduling. So you may notice that a simulated aircraft on your machine may not fly identically to the same aircraft on someone else's machine. You can reduce this effect by lowering the graphics settings in X-Plane.

We can currently only get joystick input from X-Plane for aileron, elevator, rudder and throttle. It would be nice to support flight mode switches, flaps and other controls that are normally used with ArduPilot. That is probably possible, but isn't implemented yet. So if you want a full controller then you can connect a joystick to SITL directly rather than via X-Plane (for example using the MissionPlanner joystick module or the mavproxy joystick module).

Finally, we only support fixed wing aircraft in X-Plane at the moment. I have been able to fly a helicopter, but I needed to give manual collective control from a joystick as we don't yet have a way to provide collective pitch input over the X-Plane data interface.

Manned Aircraft and ArduPilot

Please don't assume that because ArduPilot can fly full sized aircraft in a simulator, you should use ArduPilot to fly real manned aircraft. ArduPilot is not suitable for manned applications and the development team would appreciate it if you did not try to use it for manned aircraft.

Happy Flying

I hope you enjoy flying X-Plane 10 with ArduPilot SITL!

June 30, 2016

Coalitions

In Australia we are about to have a federal election, so we inevitably have a lot of stupid commentary and propaganda about politics.

One thing that always annoys me is the claim that we shouldn’t have small parties. We have two large parties, Liberal (right-wing, somewhat between the Democrats and Republicans in the US) and Labor, which is somewhat similar to the Democrats in the US. In the US the first past the post voting system means that votes for smaller parties usually don’t affect the outcome. In Australia we have Instant Runoff Voting (sometimes known as “The Australian Ballot”) which has the side effect of encouraging votes for small parties.

The Liberal party almost never wins enough seats to form government on its own; it forms a coalition with the National party. Election campaigns are often based on the term “The Coalition” being used to describe a Liberal-National coalition, and the expected result if “The Coalition” wins the election is that the leader of the Liberal party will be Prime Minister and the leader of the National party will be the Deputy Prime Minister. Liberal party representatives and supporters often try to convince people that they shouldn’t vote for small parties and that small parties are somehow “undemocratic”, seemingly unaware of the irony of advocating for “The Coalition” while opposing the idea of a coalition.

If the Liberal and Labor parties wanted to form a coalition they could do so in any election where no party has a clear majority, and do it without even needing the National party. Some people claim that it’s best to have the major parties take turns in having full control of the government without having to make a deal with smaller parties and independent candidates but that’s obviously a bogus claim. The reason we have Labor allying with the Greens and independents is that the Liberal party opposes them at every turn and the Liberal party has a lot of unpalatable policies that make alliances difficult.

One thing that would be a good development in Australian politics is to have the National party actually represent rural voters rather than big corporations. Liberal policies on mining are always opposed to the best interests of farmers and the Liberal policies on trade aren’t much better. If “The Coalition” wins the election then the National party could insist on a better deal for farmers in exchange for their continued support of Liberal policies.

If Labor wins more seats than “The Coalition” but not enough to win government directly then a National-Labor coalition is something that could work. I think that the traditional interest of Labor in representing workers and the National party in representing farmers have significant overlap. The people who whinge about a possible Green-Labor alliance should explain why they aren’t advocating a National-Labor alliance. I think that the Labor party would rather make a deal with the National party, it’s just a question of whether the National party is going to do what it takes to help farmers. They could make the position of Deputy Prime Minister part of the deal so the leader of the National party won’t miss out.

Are we now the USSR?, Brexit, and More

Look at what's happened and you'll see the parallels: in many parts of the world, the past and current social and economic policies on offer basically aren't delivering. It is clear that there is a democratic deficit; the policies at the top aren't dealing with enough of the population's problems.
CrossTalk BREXIT - GOAL! (Recorded 24 June) https://www.youtube.com/watch?v=kgKIc0bobO4
The Schulz Brexit

June 29, 2016

LUV Main July 2016 Meeting: ICT in Education / To Search Perchance to Find

Jul 5 2016 18:30
Jul 5 2016 20:30
Location: 

6th Floor, 200 Victoria St. Carlton VIC 3053

Speakers:

  • Dr Gill Lunniss and Daniel Jitnah, ICT in Education
  • Tim Baldwin, To Search Perchance to Find: Improving Information Access over
    Technical Web User Forums

Late arrivals, please call (0490) 049 589 for access to the venue.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat and Infoxchange for their help in obtaining the meeting venues.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


LUV Beginners July Meeting: GNU COBOL

Jul 16 2016 12:30
Jul 16 2016 16:30
Location: 

Infoxchange, 33 Elizabeth St. Richmond

COBOL is a business-orientated programming language that has been in use since 1959, making it one of the world's oldest programming languages. Despite being much criticised (and for good reasons) it is still a major programming language in the financial sector, although there are a declining number of experienced programmers.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)

Late arrivals, please call (0490) 049 589 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


June 24, 2016

Kernel interfaces and vDSO test

Getting Suckered

Last week a colleague of mine came up to me, showed me some of the vDSO on PowerPC, and asked why on earth it fails vdsotest. I should come clean at this point and admit that I knew very little about the vDSO and hadn't heard of vdsotest. I had to admit to this colleague that I had no idea: everything looked super sane.

Unfortunately (for me) I got hooked: vdsotest was saying it was getting '22' instead of '-1', and it was the case where the vDSO would call into the kernel. It plagued me all night; 22 is so suspicious. Right before I got to work the next morning I had an epiphany: "I bet 22 is EINVAL".

Virtual Dynamically linked Shared Objects

The vDSO is a mechanism to expose some kernel functionality into userspace to avoid the cost of a context switch into kernel mode. This is a great feat of engineering, avoiding the context switch can have a dramatic speedup for userspace code. Obviously not all kernel functionality can be placed into userspace and even for the functionality which can, there may be edge cases in which the vDSO needs to ask the kernel.
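
As a concrete example, an ordinary libc call like the one below is typically serviced entirely in userspace by the vDSO, with no context switch; nothing in the source marks it as special (a minimal sketch):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;

    /* Via libc this typically resolves to the vDSO implementation;
       only unsupported clock ids fall back to a real syscall. */
    if (clock_gettime(CLOCK_MONOTONIC, &ts) == -1) {
        perror("clock_gettime");
        return 1;
    }
    printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}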

Who tests the vDSO? For the portion that lies exclusively in userspace it will escape all testing of the syscall interface which is really what kernel developers are so focused on not breaking. Enter Nathan Lynch with vdsotest who has done some great work!

The Kernel

When the vDSO can't get the correct value without the kernel, it simply calls into the kernel because the kernel is the definitive reference for every syscall. On PowerPC something like this happens (sorry, our vDSO is 100% asm): 1

/*
 * Exact prototype of clock_gettime()
 *
 * int __kernel_clock_gettime(clockid_t clock_id, struct timespec *tp);
 *
 */
V_FUNCTION_BEGIN(__kernel_clock_gettime)
  .cfi_startproc
    /* Check for supported clock IDs */
    cmpwi   cr0,r3,CLOCK_REALTIME
    cmpwi   cr1,r3,CLOCK_MONOTONIC
    cror    cr0*4+eq,cr0*4+eq,cr1*4+eq
    bne cr0,99f

    /* [snip] */

    /*
     * syscall fallback
     */
99:
    li  r0,__NR_clock_gettime
    sc
    blr

For those not familiar, this couldn't be more simple. The start checks to see if it is a clock id that the vDSO can handle and if not it jumps to the 99 label. From here simply load the syscall number, jump to the kernel and branch to link register aka 'return'. In this case the 'return' statement would return to the userspace code which called the vDSO function.

Wait, so the vDSO calling into the kernel gets us the wrong result? Of course it does: vdsotest is assuming a C ABI with return values and errno, but the kernel doesn't do that; the kernel ABI is different. How does this even work on x86? Ohhhhh, vdsotest does 2

static inline void record_syscall_result(struct syscall_result *res,
                     int sr_ret, int sr_errno)
{
    /* Calling the vDSO directly instead of through libc can lead to:
     * - The vDSO code punts to the kernel (e.g. unrecognized clock id).
     * - The kernel returns an error (e.g. -22 (-EINVAL))
     * So we need to recognize this situation and fix things up.
     * Fortunately we're dealing only with syscalls that return -ve values
     * on error.
     */
    if (sr_ret < 0 && sr_errno == 0) {
        sr_errno = -sr_ret;
        sr_ret = -1;
    }

    *res = (struct syscall_result) {
        .sr_ret = sr_ret,
        .sr_errno = sr_errno,
    };
}

That little hack isn't working on PowerPC and here's why:

The kernel puts the return value in the ABI-specified return register (r3) and flags errors using a condition register bit (condition register field 0, SO bit), so unlike x86, on error the return value isn't negative. To make matters worse, the condition register is very difficult to access from C. Depending on your definition of 'access from C' you might consider it impossible, in which case a fixup like that simply isn't possible.
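
To make the convention concrete, here is a rough sketch of the kind of fixup libc performs on PowerPC; assume cr was captured with mfcr straight after the sc instruction, where cr0's SO bit lands at bit 28 of the register. This illustrates the convention only and is not actual glibc code:

#include <errno.h>

/* Translate the PowerPC kernel convention (cr0.SO set on failure,
 * positive error code in r3) into the C one (-1 and errno). */
static long syscall_fixup(long r3, unsigned long cr)
{
    if (cr & (1UL << 28)) {  /* cr0.SO set: the syscall failed  */
        errno = (int)r3;     /* r3 holds the positive errno     */
        return -1;
    }
    return r3;               /* success: r3 is the return value */
}

The point stands: without the captured condition register, nothing in r3 alone distinguishes a successful return of 22 from EINVAL.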

Lessons learnt

  • vDSO supplied functions aren't quite the same as their libc counterparts. Unless you have a very good reason (and, to be fair, vdsotest does have a very good reason), always access the vDSO through libc
  • Kernel interfaces aren't C interfaces, yep, they're close but they aren't the same
  • 22 is in fact EINVAL
  • Different architectures are... Different!
  • Variety is the spice of life

P.S. I have a hacky patch awaiting review


  1. arch/powerpc/kernel/vdso64/gettimeofday.S 

  2. src/vdsotest.h 

June 21, 2016

Zuul and Ansible in OpenStack CI

In a prior post, I gave an overview of the OpenStack CI system and how jobs were started. In that I said

(It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

Well, some recent security issues with Jenkins and other changes have led to a roll-out of what is being called Zuul 2.5, which has indeed removed Jenkins and makes extensive use of Ansible as the basis for running CI tests in OpenStack. Since I already had the diagram, it seems worth updating it for the new reality.

OpenStack CI Overview

While the previous post was really focused on the image-building components of the OpenStack CI system, the overview is the same; this one focuses more on the launchers that run the tests.

Overview of OpenStack CI with Zuul and Ansible
  1. The process starts when a developer uploads their code to gerrit via the git-review tool. There is no further action required on their behalf and the developer simply waits for results of their jobs.

  2. Gerrit provides a JSON-encoded "fire-hose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a launcher to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on (for a concrete feel of gearman itself, see the client sketch after this list).

  4. A group of Zuul launchers are subscribed to gearman as workers. It is these Zuul launchers that will consume the job requests from the queue and actually get the tests running. However, a launcher needs two things to be able to run a job — a job definition (what to actually do) and a worker node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files. The Zuul launcher knows how to process these files (with some help from Jenkins Job Builder, which despite the name is not outputting XML files for Jenkins to consume, but is being used to help parse templates and macros within the generically defined job definitions). Each Zuul launcher gets these definitions pushed to it constantly by Puppet, thus each launcher knows about all the jobs it can run automatically. Of course Zuul also knows about these same job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customized management tool called nodepool (you can see the details of this capacity at any given time by checking the nodepool configuration). Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at the node-type of jobs in the queue (i.e. what platform the job has requested to run on) and decides what types of nodes need to start and which cloud providers have capacity to satisfy demand.

    Nodepool will start fresh virtual machines (from images built daily as described in the prior post), monitor their start-up and, when they're ready, put a new "assignment job" back into gearman with the details of the fresh node. One of the active Zuul launchers will pick up this assignment job and register the new node to itself.

  6. At this point, the Zuul launcher has what it needs to actually get jobs started. With a fresh node registered to it and waiting for something to do, the Zuul launcher can advertise its ability to consume one of the waiting jobs from the gearman queue. For example, if a ubuntu-trusty node is provided to the Zuul launcher, the launcher can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty node type. If you're looking at the launcher code this is driven by the NodeWorker class — you can see this being created in response to an assignment via LaunchServer.assignNode.

    To actually run the job — where the "job hits the metal" as it were — the Zuul launcher will dynamically construct an Ansible playbook to run. This playbook is a concatenation of common setup and teardown operations along with the actual test scripts the job wants to run. Using Ansible to run the job means all the flexibility an orchestration tool provides is now available to the launcher. For example, there is a custom console streamer library that allows us to live-stream the console output for the job over a plain TCP connection, and there is the possibility to use projects like ARA for visualisation of CI runs. In the future, Ansible will allow for better coordination when running multiple-node testing jobs — after all, this is what orchestration tools such as Ansible are made for! While the Ansible run can be fairly heavyweight (especially when you're talking about launching thousands of jobs an hour), the system scales horizontally with more launchers able to consume more work easily.

    When checking your job results on logs.openstack.org you will see a _zuul_ansible directory now which contains copies of the inventory, playbooks and other related files that the launcher used to do the test run.

  7. Eventually, the test will finish. The Zuul launcher will put the result back into gearman, which Zuul will consume (log copying is interesting but a topic for another day). The testing node will be released back to nodepool, which destroys it and starts all over again — nodes are not reused and also have no sensitive details on them, as they are essentially publicly accessible. Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but that is also a topic for another day).
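
To give a concrete feel for the gearman side of this (see step 3 above), here is a minimal libgearman client sketch; the job name and workload are invented, and this is not Zuul's actual code (Zuul speaks the gearman protocol from Python):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libgearman/gearman.h>

int main(void)
{
    gearman_client_st client;
    gearman_client_create(&client);
    gearman_client_add_server(&client, "localhost", 4730);

    /* The gearman "function name" is the job name; the workload
       carries whatever the consumer needs (invented values here). */
    const char *job = "build:example-docs-job";
    const char *workload = "{\"node\": \"ubuntu-trusty\"}";
    size_t result_size;
    gearman_return_t ret;
    void *result = gearman_client_do(&client, job, NULL, workload,
                                     strlen(workload), &result_size, &ret);
    if (ret == GEARMAN_SUCCESS)
        printf("job finished, %zu bytes of result\n", result_size);

    free(result);
    gearman_client_free(&client);
    return 0;
}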

Work will continue within OpenStack Infrastructure to further enhance Zuul; including better support for multi-node jobs and "in-project" job definitions (similar to the https://travis-ci.org/ model); for full details see the spec.

Pia, Thomas and Little A’s Excellent Adventure – Week 1

We arrived in Auckland after a fairly difficult flight. Little A had a mild cold and did NOT cope with the cabin pressure well, so there was a lot of walking cuddles around the plane kitchen to not disturb other passengers. After a restful night we picked up our rental car, a roomy 4 wheel drive, and drove to Turangi, a beautiful scenic introduction to our 3 month adventure! Our plan is to spend 3 months in Turangi as a bit of a babymoon: to get to know little A as she goes through that lovely 6-9 month development period, which includes crawling, learning to eat and other fun stuff. We are also planning to catch a LOT of trout (and even keep some!), catch up with some studies and reading, and take the time to plan out the next chapter of our life. I’m also hoping to write a book if I can, but more on that later :)

So each week we’ll blog some highlights! Photos will be added every few days to the flickr album.

Our NZ Adventure

Arrival

The weather in Turangi has been gorgeous all week. Sunny and much warmer than Canberra, but of course Thomas would rather it rained, as that would get the Trout moving in the river :) We are renting a 3 bedroom house with woodfire heating which is toasty warm and very comfortable. The only downside is that we have no internet at the house, and the data plan on my phone doesn’t work at all at the house. So we are fairly offline, which has its pros and cons :) Good for relaxing, reflection, studying, writing and planning. Bad for Pia who feels like she has lost a limb! Meanwhile, the local library has reasonable WiFi and we have become regular visitors.

Little A

Little A has made some new steps this week. She learned how to do raspberries, which she now does frequently. She also rolled over completely unassisted for the first time and spends a lot of time trying to roll more. Finally, she decided she wanted to start on solids. We know this because when Thomas was holding her whilst eating a banana, he turned away for a second to speak to me and she launched herself onto the banana, gumming furiously! So we have now tried some mashed potato, pumpkin and some water from the sippy cup. In all cases she insists on grabbing the spoon or sippy cup to feed herself.

Studies

Both of us are doing some extra studies whilst on this trip. I’m finishing off my degree this semester with a subject on policy and law, and another on white collar crime. Both are fascinating! Thomas is reading up on some areas of law he wants to brush up on for work and fun.

Book

My book preparations are going well, and I will be blogging about that in a few weeks once I get a bit more done. Basically I’m writing a book about the history and future of our species, focusing on the major philosophical and technological changes that have come and are coming, and the key things we need to carefully think about and change if we are to take advantage of how the world itself has fundamentally changed. It is a culmination of things I’ve been thinking about and exploring for the last 15 years, so I hope it proves useful in making a better world for everyone :)

Fishing

Part of the reason we have based this little sabbatical at Turangi is because it is arguably the best Trout fishing in the world, and is one of Thomas’ favourite places. It is a quaint and sleepy little country town with everything we need. The season hasn’t really kicked off yet and the fish aren’t running upstream yet, but we still netted 12 fish this week, of which we kept one Rainbow Trout for a delicious meal of Manuka smoked fish :)

June 20, 2016

Building OPAL firmware for POWER9

Recently, we merged into the op-build project (the build scripts for OpenPOWER Firmware) a defconfig for building OPAL for (certain) POWER9 simulators. I won’t bother linking over to articles on the POWER9 chip or schedule (there’s search engines for that), but with this commit – if you happen to be able to get your hands on a POWER9 simulator, you can now boot to the petitboot bootloader on it!

We’re using upstream Linux 4.7.0-rc3 and upstream skiboot (master), so all of this code is already upstream!

Now, by no means is this complete. There’s some fairly fundamental things that are missing (e.g. PCI) – but how many other platforms can you build open source firmware for before you can even get your hands on a simulator?

June 17, 2016

Religious Conspiracies, Is Capitalism Collapsing 2?, and More

This is obviously a continuation of my past post, http://dtbnguyen.blogspot.com/2016/06/is-capitalism-collapsing-random.html You're probably wondering how on earth we've moved on to religious conspiracies. You'll figure this out in a second: - look back far enough and you'll realise that the way religion was practised and embraced in society was very different a long time ago compared to now. In fact,

Booting Fedora 24 cloud image with KVM

Fedora 24 is on the way, here’s how you can play with the cloud image on your local machine.

Download the image:
wget https://alt.fedoraproject.org/pub/alt/stage/24_RC-1.2/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2

Make a new local backing image (so that we don’t write to our downloaded image) called my-disk.qcow2:
qemu-img create -f qcow2 -b Fedora-Cloud-Base-24-1.2.x86_64.qcow2 my-disk.qcow2

The cloud image uses cloud-init to configure itself on boot which sets things like hostname, usernames, passwords and ssh keys, etc. You can also run specific commands at two stages of the boot process (see bootcmd and runcmd below) and output messages (see final_message below) which is useful for scripted testing.

Create a file called meta-data with the following content:
instance-id: FedoraCloud00
local-hostname: fedoracloud-00

Next, create a file called user-data with the following content:
#cloud-config
password: password
chpasswd: { expire: False }
ssh_pwauth: True
 
bootcmd:
 - [ sh, -c, echo "=========bootcmd=========" ]
 
runcmd:
 - [ sh, -c, echo "=========runcmd=========" ]
 
# add any ssh public keys
ssh_authorized_keys:
  - ssh-rsa AAA...example...SDvZ user1@domain.com
 
# This is for pexpect so that it knows when to log in and begin tests
final_message: "SYSTEM READY TO LOG IN"

Cloud init mounts a CD-ROM on boot, so create an ISO image out of those files:
genisoimage -output my-seed.iso -volid cidata -joliet -rock user-data meta-data

If you want to SSH in you will need a bridge of some kind. If you’re already running libvirtd then you should have a virbr0 network device (used in the example below) to provide a local network for your cloud instance. If you don’t have a bridge set up, you can still boot it without network support (leave off the -netdev and -device lines below).

Now we are ready to boot this!
qemu-kvm -name fedora-cloud \
-m 1024 \
-hda my-disk.qcow2 \
-cdrom my-seed.iso \
-netdev bridge,br=virbr0,id=net0 \
-device virtio-net-pci,netdev=net0 \
-display sdl

You should see a window pop up and Fedora loading and cloud-init configuring the instance. At the login prompt you should be able to log in with the username fedora and password that you set in user-data.

June 15, 2016

Introducing snowpatch: continuous integration for patches

Continuous integration has changed the way we develop software. The ability to make a code change and be notified quickly and automatically whether or not it works allows for faster iteration and higher quality. These processes and technologies allow products to quickly and consistently release new versions, driving continuous improvement to their users. For a web app, it's all pretty simple: write some tests, someone makes a pull request, you build it and run the tests. Tools like GitHub, Travis CI and Jenkins have made this process simple and efficient.

Let's throw some spanners in the works. What if instead of a desktop or web application, you're dealing with an operating system? What if your tests can only be run when booted on physical hardware? What if instead of something like a GitHub pull request, code changes were sent as plain-text emails to a mailing list? What if you didn't have control over the development of this project, and you had to work with an existing, open community?

These are some of the problems faced by the Linux kernel, and many other open source projects. Mailing lists, along with tools like git send-email, have become core development infrastructure for many large open source projects. The idea of sending code via a plain-text email is simple, not reliant on a proprietary service, and built on universal, well-defined technology. It does have shortcomings, though. How do you take a plain-text patch, which was sent as an email to a mailing list, and accomplish the continuous integration possibilities other tools have trivially?

Out of this problem was born snowpatch, a continuous integration tool designed to enable these practices for projects that use mailing lists and plain-text patches. By taking patch metadata organised by Patchwork, performing a number of git operations and shipping them off to Jenkins, snowpatch can enable continuous integration for any mailing list-based project. At IBM OzLabs, we're using snowpatch to automatically test new patches for Linux on POWER, skiboot, snowpatch itself, and more.

snowpatch is written in Rust, an exciting new systems programming language with a focus on speed and safety. Rust's amazing software ecosystem, enabled by its package manager Cargo, made development of snowpatch a breeze. Using Rust has been a lot of fun, along with the practical benefits of (in our experience) faster development, and confidence in the runtime stability of our code. It's still a young language, but it's quickly growing and has an amazing community that has always been happy to help.

We still have a lot of ideas for snowpatch that haven't been implemented yet. Once we've tested a patch and sent the results back to a patchwork instance, what if the project's maintainer (or a trusted contributor) could manually trigger some more intensive tests? How would we handle it if the traffic on the mailing list of a project is too fast for us to test? If we were running snowpatch on multiple machines on the same project, how would we avoid duplicating effort? These are unsolved problems, and if you'd like to help us with these or anything else you think would be good for snowpatch, we take contributions and ideas via our mailing list, which you can subscribe to here. For more details, view our documentation on GitHub.

Thanks for taking your time to learn a bit about snowpatch. In future, we'll be talking about how we tie all these technologies together to build a continuous integration workflow for the Linux kernel and OpenPOWER firmware. Watch this space!

This article was originally posted on IBM developerWorks Open. Check that out for more open source from IBM, and look out for more content in their snowpatch section.

Minor update on transaction fees: users still don’t care.

I ran some quick numbers on the last retargeting period (blocks 415296 through 416346 inclusive) which is roughly a week’s worth.

Blocks were full: median 998k, mean 818k (some miners blind mining on top of unknown blocks). Yet of the 1,618,170 non-coinbase transactions, 48% were still paying dumb, round fees (like 5000 satoshis). Another 5% were paying dumb round-numbered per-byte fees (like 80 satoshi per byte).

The mean fee was 24051 satoshi (~16c), the mean fee rate 60 satoshi per byte. But if we look at the amount you needed to pay to get into a block (using the second cheapest tx which got in), the mean was 16.81 satoshis per byte, or about 5c.
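
To see where those numbers come from (assuming a price of roughly US$670 per bitcoin at the time): the mean fee of 24051 satoshi is 0.00024051 BTC, and 0.00024051 × $670 ≈ $0.16. Dividing the mean fee by the mean fee rate (24051 / 60) implies an average transaction of roughly 400 bytes, so the entry price was about 400 × 16.81 ≈ 6700 satoshi, or roughly 5c.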

tl;dr: It’s like a tollbridge charging vehicles 7c per ton, but half the drivers are just throwing a quarter as they drive past and hoping it’s enough. It really shows fees aren’t high enough to notice, and transactions don’t get stuck often enough to notice. That’s surprising; at what level will they notice? What wallets or services are they using?

June 14, 2016

Terry & ROS

After a number of adventures I finally got a ROS stack set up so that move_base, amcl, and my robot base all like each other well enough for navigation to function. Luckily I added some structural support to the physical base, as the self-driving control is a little snappier than I normally tend to drive the robot by hand.

There was an upgrade from Indigo to Kinetic in the mix, and the coupled update to Ubuntu Xenial to match the ROS platform update. I found that a bunch of ROS packages I use are not currently available for Kinetic, so I had an expanding catkin workspace of self-compiled system packages to complete the update. Really cool stuff like rosserial wasn't available. Then I found that a timeout there caused a bunch of error messages about mismatched read sizes. I downgraded to the Indigo version of rosserial and the error was still there, so I assume it relates to the various serial drivers in the Linux kernel doing different timing than they did before. Still, one would have hoped that rosserial was more resilient to multiple partial packet delivery. But with a timeout bump, all works again. FWIW I've seen similar behaviour in boost: you try to read 60 bytes and get 43, then need to get that remaining 17 and stuff the excess in a readback buffer for the next packet read attempt. The boost one hit me going from 6 to 10 channel IO on an rc receiver-to-UART Arduino I created. The "joy" of low level IO.
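
For what it's worth, the usual cure for that class of problem is a loop that keeps reading until the whole packet has arrived; a minimal sketch (not the rosserial or boost code):

#include <unistd.h>

/* Keep calling read() until 'want' bytes have arrived; a serial
 * read may legitimately return fewer bytes than were requested. */
static ssize_t read_full(int fd, void *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = read(fd, (char *)buf + got, want - got);
        if (n <= 0)
            return n;   /* error (or EOF): let the caller decide */
        got += n;
    }
    return (ssize_t)got;
}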

I found that the issues stopping navigation from working for me out of the box on Indigo were still there in Kinetic. So I now have a very cool bit of knowledge for telling whether somebody has navigation working, or is just assuming that what one reads equals what will work out of the box.

Probably the next ROS thing will be trying to get a MoveIt stack for the mearm. I've got one of these cut and so will soon have it built. It seems like an ideal thing to work on MoveIt for, because it's a simple low-cost arm that anybody can cut out and servo up. I've long wanted a simple tutorial on MoveIt for affordable arms. It might be that I'm the one writing that tutorial rather than just reading it.

Video and other goodness to follow. As usual, persistence is the key™.

Celebrating explorers!

Nellie Bly

We continue publishing resources on explorers, a very diverse range from around the world and throughout time. Of course James Cook was an interesting person, but isn’t it great to also offer students an opportunity to investigate other people whose names they haven’t yet heard? It is good to show the diversity, and how it wasn’t just Europeans who explored.

And did you spot our selection of female explorers? Unfortunately there aren’t that many, but they did awesome work. Nellie Bly is my personal favourite (pictured on the right). Such fabulous initiative.

As a small introductory gift this month for those who haven’t yet got a subscription, use this special link to our Explorers category page to get 50% off the price of one explorer resource PDF; some will then be only $1. If you have come to the site via the link, the discount will automatically be applied to your cart on checkout, to the most expensive item from the Explorer category. Alternatively you can use coupon code NL1606EXPL. This offer is only valid until end June 2016.

Which one will you choose? You can write a comment on this post: tell us which explorer, and why!

June 13, 2016

Council Minutes Tuesday 07 June 2016

Fri, 2016-06-03 19:36 - 20:43

1. Meeting overview and key information
Present
Hugh, Kathy, Katie, Cherie, Sae Ra

Apologies:
Tony, Craige

Meeting opened by Hugh at 1936hrs and quorum was achieved

MOTION that the previous minutes of Hugh are correct
Moved: Kathy
Seconded: Cherie
2 abstentions. Carried

2. Log of correspondence
Motions moved on list
Nil

General correspondence

VPAC Closure
Email notice to be sent out.
ACTION complete, no further action

Insurance
UPDATE: Sohan was chased 25th April, awaiting update
UPDATE: an invoice has been sent. Kathy to raise the invoice and ping Sae Ra to approve in Westpac.
UPDATE: All now approved and paid

3. Review of action items from previous meetings

Membership Team:
Kathy chatted with Agileware and has asked for 2 quotes, one with a trial quote and one with a full hosting quote

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.
ACTION: Needs analysis of website feature requirements. Should we make things simpler?
Understanding the User Needs of the website
ACTION: Kathy to approach the Membership team Subcommittee.
Draft a survey to better understand the usage of the public facing website. The membership team is in progress.
ACTION Kathy to communicate with the Linux Aus List.
Survey has been sent out to the Linux Aus list.
Need to compile a summary of answers from the survey.
UPDATE: Survey has been completed, and key findings and recommendations sent to the Linux Aus list.
UPDATE: Survey was sent out to the LA list. Look at wireframes and look at candidate platforms. tl;dr: membership team have been doing stuff.
UPDATE: Quotes received from Agileware and DevApp regarding hosting of CiviCRM, Kathy currently following up.

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.
High Priority. Tony to update Action Register
UPDATE: still blocked by getting access to the NZ accounts.
UPDATE: Tony still attempting to engage with the bank
There is an outstanding amount for the Ellis’ that needs to be sorted.

Please refer to Action Items list for more items.

4. Items for discussion

Standing item: review activities list for new items
https://github.com/linuxaustralia/constitution_and_policies/blob/master/activities-calendar.md

Call for bids for linux.conf.au 2019
ACTION: Tony to do the Python script magic and amend dates, then to formally announce the call for bids
In Progress

Informal request to Hugh from Ben Dechrai BuzzConf https://buzzconf.io/sponsor-buzzconf-2016/ asking if LA would consider sponsoring. Have said would float informally with Council in first instance.
Very much in the Linux Australia spirit.
Excellent atmosphere.
It is not an open source event. Semi-corporate presenters.
There are no issues with the integrity or running of the event just not sure that LA should be sponsoring the event because it's not open source focussed.
If BuzzConf were to subscribe more to the open source way of things, it is more a case of needing corporate sponsorship.
It is worth exploring further
Logo/visibility and what options are available etc. It would be worthwhile looking at numbers on what things we could provide.
ACTION: Hugh to have a chat with Ben
UPDATE: In Progress - Hugh to review action items from F2F re:amount and go from there.

Event Updates:
LCA2016 update
We believe we have processed all payments and finances
Event report well received. It also went to Geelong stakeholders.
Caught up with Donna Benjamin and Donna now has the large format printer.

LCA2017 update
LCA2017 want to use a different payment provider, Stripe instead of SecurePay. Fees not significantly different.
ACTION: Tony to assess and respond to Chris Neugebauer, allowing us to use SecurePay in future if Stripe doesn’t work out
Papers committee up and running
3 On papers committee, 6/7 women, good diversity of employers
CfP sometime this month
Website/general graphics design took a little longer than expected but now pretty much sorted
F2F meeting at end of month, overall level of comfort high
May end up handling accommodation bookings themselves

LCA2018 update
LCA2018 team to be contacted re moving forward.

PyConAU update
In Progress

Drupal South Gold Coast 2016
Query from Jana if we have Not for Profit organisation
Register of stuff like are we NfP organisation
Kathy to follow-up with status.

OSDConf 2015
Post Event report released

GovHack
Website has launched and looks spiffy. There are 30 events this year. Initial comms have gone out and sponsorship is on track.

JoomlaDay
In progress

DrupalGov
--

WordCamp Sunshine Coast 2016
Books can be closed.

WordCamp Sydney 2016
Very well organised and on top of everything

CiviCRM 2017 interested in aligning with LA
Nothing to report

5. Items for noting

6. Other business
6 Month check, how are we travelling.

Carried Over from Previous Minutes
Linux Australia as charity funds funnel?
It’s been raised that it would be nice to have Linux Australia (or something else?) as a local, tax deductible vessel via which Australian could donate directly to, for example, the Software Freedom Conservancy.
ACTION: Kathy to reach out to Jon, and try to partner with EFA in regards to Tax Deductible donations. Kathy to respond to the digital rights campaign email.

New Other Business
7. Other business carried from previous Council
ACTION: Cherie to review Minutes from F2F and add to next meeting agenda if further discussion required.

8. In Camera
2 items were discussed in camera

2043AEST close.

Council Minutes Thursday 18th February 2016

Thu, 2016-02-18 19:30 - 21:05

Welcome to the new Council.

1. Meeting overview and key information
Present
Kathy, Tony, Hugh, Craige, Katie, Sae Ra

Apologies:
Cherie

Meeting opened by Hugh Blemings at 1930hrs and quorum was achieved

ACTION: Sae Ra to follow-up with Cherie

MOTION that the previous minutes of 27 January are correct
Moved: Sae Ra
Seconded: Craige
Carried with 6 abstentions. Tony was not available for this call. Also new council was not in on this call.

2. Log of correspondence
Motions moved on list
Donation match to Give Where You Live for linux.conf.au Geelong 2016 in the amount $AUD 4840
MOTION by TONY BREEDS to approve up to $AUD8k expenditure to match funds raised by LCA2016 to be donated to the nominated charity.
SECONDED by HUGH BLEMINGS
Carried with 1 abstentions

EFA Donation in the amount $AUD 1500
MOTION by HUGH BLEMINGS That Linux Australia donate an amount of AU$1,500 to EFA to further the EFAs activities and build closer links between the two organisations.
SECONDED: KATHY REID
Carried

Motion to create a Membership team:
MOTION by KATHY REID That a subcommittee be formed in line with v1 of the Subcommittee policy https://github.com/linuxaustralia/constitution_and_policies/blob/master/subcommittee_policy_v1.md) to oversee and deliver requirements, options, recommendations and actions pertaining to the Membership platform of Linux Australia to be referred to as the "Membership Team" with initial members being;
Sae Ra Germaine
Cameron Tudball
Neill Cox
Michael Cordover
Luke John
Kathy Reid
SECONDED: TONY BREEDS
Passed with 1 abstention.

General correspondence
Update on progress of GovHack Subcommittee Policy and other GovHack related matters from RICHARD TUBB
Subcommittee policy to be reviewed as soon as it has been submitted by GovHack
Need to ensure we have correct insurance
ACTION: Kathy to prepare a event pack for events that are run under LA.

Discussion on list regarding LA Membership of Open Government Partnership

SIDE NOTE: Do we have a register anywhere of the organisations and partnerships of which Linux Australia is a member, and if we don’t then I (Kathy) am happy to put this together.
ACTION: Subcommittee register and a LA Member/Partnership register to be added in.

VPAC Closure
MemberDB is not hard to move. Most of linux.org.au. There’s a few LUG things/DNS stuff.
ACTION: Josh H to ping Steve
MemberDB is moved to a new host
Needs a quick code audit
Double check results of previous elections all match old host
Josh H to follow up with Steve any further work

Strategic Plan submitted by KATHY REID
Tabled
Good community discussion
Should be considered by 2016 council
MOTION by CHRIS N that we commend the strategic plan as written by KATHY REID for consideration by the 2016 council.
SECONDED: Joshua Hesketh
Passed unanimously

linux.com.au Renewal Notification from ENETICA - 90 days
Josh H to pay with LA debit card

Proposing that Linux Australia fund Software Freedom Conservancy by FRANCOIS MARIER
Defer for 2016 budget

LCA/EFA Social event

3. Review of action items from previous meetings
Request from infrastructure subcommittee for assistance around previous LCA websites.
ACTION: Sae Ra to work with Steve W around this
In Progress

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.
Discussion held about means of finding people willing to assist with both the maintenance of the website platform as well as the content available on this.
JOSH H to speak to Donna regarding this
UPDATE: Ongoing
UPDATE: to be moved to a general action item. To do a call for help to work on the website. Could this be treated as a project.
We need to at least get the website to D8 and automate the updating process.
ACTION: Josh to get a backup of the site to Craig
ACTION: Craige to stage the website to see how easy it is to update.
UPDATE: Craige to log in to the website to elevate permissions.
UPDATE: Still in progress
ACTION: Josh H to tarball the site.
Outstanding action
In Progress
Completed
ACTION: Craige to send key to Josh H
In progress.
Desire to do things as open as possible so that others can contributes
ACTION: Needs analysis of website feature requirements. Should we make things simpler?
Understanding the User Needs of the website
ACTION: Kathy to approach the Membership team Subcommittee.

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.
Update: Still in progress
Money shuffling is still required
High Priority. Tony to update Action Register

ACTION: Josh H to follow-up on Invoices from WordCamp Sydney
UPDATE: Would be interested in changing the subcommittee structure for ongoing conferences. Conference committees to draft a policy.
UPDATE: Currently being followed up.
ACTION: Josh H to follow this up
Have received information back. Josh to continue following up.
News Limited is still outstanding
WordPress Foundation:
acts similarly as Linux Australia.
Runs multiple WordCamps in the US.
WordPress give access to names to, documentation and financial support to WordCamps outside of US and Australia.
There is a mandate for WordPress to not return a profit.
What to do with outstanding debt:
What to do in the future: WordPress foundation will act as a sponsor to the event.
If the event makes a loss. WordPress foundation is able to act as a safety net.
Debt outstanding for 2014:
MOTION by JOSH HESKETH we cancel the debt for the WordPress foundation for their sponsorship for WordCamp in Sydney
Seconded: Tony
Carried with 1 objection
ACTION: Josh H to Confirm Sponsorship with the WordCamp 2016 budget.
ACTION: Missing Payment from NewsCorp

Admin Team draft budget from STEVEN WALSH
UPDATE: Awaiting for a more firm budget
UPDATE: Still awaiting
UPDATE: Steve is on holidays and it will be followed up later.
UPDATE: Tony to follow up.
In Progress.
Use opportunity at LCA to onboard new admin team members if possible.

Please refer to Action Items list for more items.

4. Items for discussion
LCA2016 update
Looking to do another $65K of payments to be made.
Draft of the post-event report to be submitted
Significant positive feedback on the event
Very solid conference
Explanation of the speakers’ gifts.
Out of pocket expenses have been completed and reconciled.
MOTION by HUGH that LA records its thanks for running a successful conference in Geelong.
Seconded: Katie

LCA2017 update
Ghosts: dates are likely to be one of the first things to lock in.
Council to assist with the creation of a list of who’s going to Ghosts and what the best weekend is.
ACTION: Hugh to ping Chris re: Ghosts.
2 Keynotes have been locked in.
Chris N was awarded a grant from the Python Software Foundation to work on a module for Symposium to replace the functionality of Zookeepr.

LCA2018 update
Request for VM and domain provisions
This has been passed on to the Admin team.
Need to vote in a subcommittee, and the “community portion” of the subcommittee.
Need to formalise the budget with lower break-even numbers.

PyConAU update
2nd deposit was paid.

Drupal South 2016
Formation process underway; they have a formed committee.
Conference is in November.

OSDConf 2015
A few outside transactions in Xero
Need to seek a wrap-up report

GovHack
29th-31st of July, about a month later than usual, so that it is not held within school or university holidays
There will be state managers overseeing the sites.
Richard Tubb is LA’s key contact.

JoomlaDay
Wrapped up, seeking a wrap-up report.

DrupalGov
Just ran last weekend.
Congratulations on a successful event.
Need to seek a wrap-up report

WordCamp Sunshine Coast 2016
Formation process underway; they have a formed committee.
Budget to be confirmed.

5. Items for noting
NIL

6. Other business
Moving Meetings to IRC and Using MeetBot (Craige)
The usage of PLUs for council meetings is a legacy of a time when no graceful solutions existed to facilitate meetings. Short of raising a motion, I would like to discuss how the rest of the council feels, conceptually, about moving meetings from PLU to IRC.
As I understand it, the PLU services are donated, so there are no ongoing costs associated with continuing (can this be confirmed?)
Operationally, minutes of meetings run this way would be published immediately and automatically at the meeting’s close (see the sketch at the end of this item).
“In camera” items would need to be discussed post-meeting close.
Why?
Meeting minutes would be more accurate, more transparent and immediately published without taking up any additional council member’s time (free the secretary!).
Our meetings would be run in line with many FOSS community meetings.
Example 1 meeting agenda.
Example 2 meeting logs.
Our meetings would be held in a very FOSS way that is consistent with our publicly stated values.
Our meeting agendas, sans “in Camera” items, would also be public in a way that is consistent with our publicly stated values.
If the council agreed in principle, I would move the motion that Craige be charged with working with the Admin Team to design and document how such a meeting system would work in both practice and implementation. This document would then be brought back to council for consideration of a trial implementation.
If a trial implementation is approved by council, at the conclusion of the trial period, the new system would be either adopted formally or rejected.
ACTION: Review the summary in April to see if this is something that LA would consider implementing. To be revisited in F2F
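For context, a minimal sketch of how a MeetBot-driven IRC meeting typically runs, assuming the standard Debian/Supybot MeetBot commands (the nick, meeting name and agenda items below are hypothetical illustrations, not a record of any actual meeting):

    <chair> #startmeeting LA Council meeting
    <chair> #topic Review of action items
    <chair> #info MemberDB migration complete; code audit still pending
    <chair> #action Josh H to follow up with Steve on the code audit
    <chair> #agreed Defer the website discussion to the F2F
    <chair> #endmeeting

On #endmeeting, MeetBot writes out the formatted minutes and the full log automatically, which is what would allow publication at the meeting’s close without additional secretarial effort.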

Admin Team transparency, accountability and scalability. Do we consider this a problem and what paths do we take to address it if we do?

FOSS As A Service - GNU Social
Many people wish to make greater use of Free and Open Source Software but do not have the time or the skills to run these services. It is a space that Linux Australia can potentially fill.
GNU Social, with its bi-directional Twitter and Facebook bridges, is a simple, low-hanging-fruit service that we could potentially offer our members.
In many respects it offers superior communication to email, with members able to gracefully create, opt in to and opt out of conversation groups (such as policy, membership ctte, or groups they create organically as they desire) with no Mailman administration and no spam: if people want a group, they can make it happen.
It provides a low barrier of entry to the FOSS federated social networking paradigm that would actually be directly useful for our community as both a thought exercise and a practical communication tool.
It would help the LCA2017 team who are hoping to drive much of the social and conversational interaction / chatter away from Twitter and Email into GNU Social.
Like the item regarding MeetBot, I would like the council to have done the research and considered its position completely before any motion of implementation or trial is put before the broader community. I recognise that many of us have not used GNU Social before and perhaps have little or no understanding of the application and of Federated Social Networking conceptually. To that end, I would like to propose the following motion:
That Craige undertakes to write a document on GNU Social and Federated Social Networking, in collaboration with the Admin Team, on the council’s behalf that covers the following areas for their consideration:
How the membership would benefit from LA hosting such a service. What other hosting options are available.
How the membership would actually use the service and connect with Twitter, Facebook and other Federated Social Networks
How the service would be installed, managed on an ongoing basis and where it would be hosted.
How we would manage membership, i.e. whether the site would be open or limited to Linux Australia members, etc.
ACTION: Craige to document the process and consult with the Admin Team.

New Mailing List Policy
ACTION: Kathy Reid to correct the Mailing List Policy pull request putting it into Markdown format, fixing the technicality of the addition and removal of the Acknowledgement of Country

Proposed Code of Conduct Amendments, by Christopher Neugebauer
https://github.com/linuxaustralia/constitution_and_policies/pull/12 – “establishes that technology choices are not grounds for harassment of attendees of our events.”
ACTION: To be deferred to the next meeting
https://github.com/linuxaustralia/constitution_and_policies/pull/13 - “removes a "specific problem behaviour" that I believe was added to the code of conduct in jest several years ago, and has made it difficult to have meaningful discussions about the remainder of the provisions of the code of conduct.”
MOTION by Katie to accept the Pull request from Chris N.
Seconded: Craige
Carried with 2 abstentions.

Misc Pending Policy Pull Requests
https://github.com/linuxaustralia/constitution_and_policies/pull/15 Remove repeated second paragraph from Rationale

7. Other business carried from previous Council
Meetup payments for LCA, Humbug, LibrePlanet.
Clinton Roy has been funding the account.
We are currently paying for the SLUG meetup.
Deferred until current meetup account is evaluated and if it can use the LA account or if we have to create a new account.
LCA is now under the Linux Australia account
ACTION: Josh H to find out how to consume the other Meetup events: Humbug, LibrePlanet.
Humbug meetup has a subscription for the next 6 months.
James: to chase up with Clinton re LibrePlanet.
UPDATE: In Progress

Request from LUKE JOHN to Officially oppose the TPP
Action (Josh S): Reach out to Kim Weatherall

8. In Camera
2 items were discussed in camera.

2105hrs close.

Council Minutes Tuesday 24th May 2016

Tue, 2016-05-24 19:35 - 20:30

1. Meeting overview and key information
Present
Kathy, Cherie, Tony, Hugh

Apologies:
Sae Ra

Meeting opened by Hugh at 1935hrs and quorum was achieved

MOTION that the previous minutes of 25 April are correct
Moved: Kathy
Seconded: Cherie
CARRIED with one abstention

2. Log of correspondence
Motions moved on list
None

General correspondence
None

VPAC Closure
Email notice to be sent out.
ACTION complete, no further action

Insurance
UPDATE: Sohan was chased 25th April, awaiting update
UPDATE: an invoice has been sent. Kathy to raise the invoice and ping Sae Ra to approve in Westpac.
UPDATE: All now approved and paid

3. Review of action items from previous meetings

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.
ACTION: Needs analysis of website feature requirements. Should we make things simpler?
Understanding the User Needs of the website
ACTION: Kathy to approach the Membership team Subcommittee.
Draft a survey to better understand the usage of the public-facing website. The membership team has this in progress.
ACTION Kathy to communicate with the Linux Aus List.
Survey has been sent out to the Linux Aus list.
Need to compile a summary of answers from the survey.
UPDATE: Survey has been completed, and key findings and recommendations sent to the Linux Aus list.
UPDATE: Survey was sent out to the LA list. Look at wireframes and look at candidate platforms. TL;DR: the membership team have been doing stuff.
UPDATE: Quotes received from Agileware and DevApp regarding hosting of CiviCRM, Kathy currently following up.

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.
High Priority. Tony to update Action Register
UPDATE: still blocked by getting access to the NZ accounts.
UPDATE: Tony still attempting to engage with the bank

Please refer to Action Items list for more items.

4. Items for discussion

Standing item: review activities list for new items
https://github.com/linuxaustralia/constitution_and_policies/blob/master/activities-calendar.md

Call for bids for linux.conf.au 2019
ACTION: Tony to do the Python script magic and amend dates, then to formally announce the call for bids

Informal request to Hugh from Ben Dechrai BuzzConf https://buzzconf.io/sponsor-buzzconf-2016/ asking if LA would consider sponsoring. Have said would float informally with Council in first instance.
Very much in the Linux Australia spirit.
Excellent atmosphere.
It is not an open source event. Semi-corporate presenters.
There are no issues with the integrity or running of the event, just not sure that LA should be sponsoring the event because it's not open source focussed.
If BuzzConf were to subscribe to a more open source way of doing things; it is more a case of them needing corporate sponsorship.
It is worth exploring further
Logo/visibility and what options are available etc. It would be worthwhile looking at numbers on what things we could provide.
ACTION: Hugh to have a chat with Ben
UPDATE: In Progress - Hugh to review action items from F2F re amount and go from there.

Event Updates:
LCA2016 update
ACTION: Kathy to send the event report

LCA2017 update
LCA2017 want to use a different payment provider, Stripe instead of SecurePay. Fees not significantly different.
ACTION: Tony to assess and respond to Chris Neugebauer, allowing us to use SecurePay in future if Stripe doesn’t work out

LCA2018 update
Email sent to ghosts: date issues, now options are 8th Jan or 25th, both have problems, not insurmountable. Final dates TBC

PyConAU update
Earlybird sold out in 24 hours
All progressing well

Drupal South Gold Coast 2016
Cherie phoned Vlad and sent email asking what assistance is required, where they are in planning etc. Also sent email to Donna B asking how involved she wants to be.
Westpac walkthrough date set

OSDConf 2015
A well-written post-event report went out. This item can likely now be removed from future agendas

GovHack
Email from Kathy
ACTION: Kathy to seek an informal indication of financial position so that we know the financial risk exposure.
Xero access: Tony to chat with auditor first. Accountant fees will be negligible. Once auditor meeting has happened, Jan’s bookkeeper will need to set up the separate instance.

JoomlaDay
Sponsorship outstanding
ACTION: Hugh to send another chasing email

DrupalGov
ACTION

WordCamp Sunshine Coast 2016
Nearly 200 pax, although 300 expected.
Bills appear to be paid
Possibly in the black - accounts need to be completed
Kathy to watch for outstanding bills then close books
Post event report will follow (Kathy chasing up)

WordCamp Sydney 2016
Well organised, mature event team
Dates and venue locked in
A few $k in revenue already from ticket sales
Access to Xero complete
Should only require light touch

CiviCRM 2017 interested in aligning with LA
Tony suggested sponsoring
Agileware quote for hosting very good (Kathy)

5. Items for noting

6. Other business

Carried Over from Previous Minutes
Linux Australia as charity funds funnel?
It’s been raised that it would be nice to have Linux Australia (or something else?) as a local, tax-deductible vessel via which Australians could donate directly to, for example, the Software Freedom Conservancy.
ACTION: Kathy to reach out to Jon, and try to partner with EFA in regards to Tax Deductible donations. Kathy to respond to the digital rights campaign email.

New Other Business
7. Other business carried from previous Council
ACTION: Cherie to review Minutes from F2F and add to next meeting agenda if further discussion required.

8. In Camera
1 item was discussed in camera

2030AEST close.

Council Minutes Wednesday 13 January 2016

Wed, 2016-01-13 19:53 - 20:39

1. Meeting overview and key information
Present
Josh H, Josh S, Tony, James, Chris N

Apologies:
Sae Ra, Craige M

Meeting opened by Joshua Hesketh at 19:53hrs and quorum was achieved

MOTION that the previous minutes of 30 December are correct
Moved: Joshua Hesketh
Seconded: Chris N
Passed with 2 abstentions

2. Log of correspondence
Motions moved on list
Nil

General correspondence

VPAC Closure
MemberDB is not hard to move; it covers most of linux.org.au. There are a few LUG things/DNS items.
ACTION: Josh H to ping Steve
MemberDB is moved to a new host
Needs a quick code audit
Double check results of previous elections all match old host
Josh H to follow up with Steve any further work

Strategic Plan submitted by KATHY REID
Tabled
Good community discussion
Should be considered by 2016 council
MOTION by CHRIS N that we commend the strategic plan as written by KATHY REID for consideration by the 2016 council.
SECONDED: Joshua Hesketh
Passed unanimously

linux.com.au Renewal Notification from ENETICA - 90 days
Josh H to pay with LA debit card

Proposing that Linux Australia fund Software Freedom Conservancy by FRANCOIS MARIER
Defer for 2016 budget

3. Review of action items from previous meetings
Request from infrastructure subcommittee for assistance around previous LCA websites.
ACTION: Sae Ra to work with Steve W around this
In Progress

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.
Discussion held about means of finding people willing to assist with both the maintenance of the website platform as well as the content available on this.
JOSH H to speak to Donna regarding this
UPDATE: Ongoing
UPDATE: To be moved to a general action item. Put out a call for help to work on the website. Could this be treated as a project?
We need to at least get the website to D8 and automate the updating process.
ACTION: Josh to get a backup of the site to Craig
ACTION: Craige to stage the website to see how easy it is to update.
UPDATE: Craige to log in to the website to elevate permissions.
UPDATE: Still in progress
ACTION: Josh H to tarball the site.
Outstanding action
In Progress
Completed
ACTION: Craige to send key to Josh H
In progress.
Desire to do things as openly as possible so that others can contribute
ACTION: Needs analysis of website feature requirements. Josh H to put together a basic set of needs to send to the list as a starting point.

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.
Update: Still in progress
Money shuffling is still required

ACTION: Josh H to follow-up on Invoices from WordCamp Sydney
UPDATE: Would be interested in changing the subcommittee structure for ongoing conferences. Conference committees to draft a policy.
UPDATE: Currently being followed up.
ACTION: Josh H to follow this up
Have received information back. Josh to continue following up.
News Limited is still outstanding
WordPress Foundation:
acts similarly to Linux Australia.
Runs multiple WordCamps in the US.
WordPress gives access to names, documentation and financial support to WordCamps outside of the US and Australia.
There is a mandate for WordPress not to return a profit.
What to do with outstanding debt:
What to do in the future: the WordPress Foundation will act as a sponsor to the event.
If the event makes a loss, the WordPress Foundation is able to act as a safety net.
Debt outstanding for 2014:
MOTION by JOSH HESKETH that we cancel the debt for the WordPress Foundation for their sponsorship of WordCamp in Sydney
Seconded: Tony
Carried with 1 objection
ACTION: Josh H to Confirm Sponsorship with the WordCamp 2016 budget.
Admin Team draft budget from STEVEN WALSH
UPDATE: Awaiting a firmer budget
UPDATE: Still awaiting
UPDATE: Steve is on holidays and it will be followed up later.
UPDATE: Tony to follow up.
In Progress.
Use opportunity at LCA to onboard new admin team members if possible.
Please refer to Action Items list for more items.

4. Items for discussion
LCA2016 update
Good sales numbers, 95% sold
Keynotes all announced

LCA2017 update
Venue contract signed and deposit paid
Planning for ghosts needs to begin sooner rather than later
Action (Josh H): Put together list of ghosts/council who should attend

LCA2018 update
Site visits went well

PyConAU update
Putting together miniconfs

Drupal South 2016
New budget to be discussed on list

OSDConf 2015
Last outstanding invoices being followed up
Team to put together closure report before LCA

GovHack
Waiting to hear back re a subcommittee policy that would be agreeable

JoomlaDay
Action (Josh H): seek closure report
Finalise xero reconciliation

DrupalGov
Customer numbers to be arranged

WordCamp Sunshine Coast 2016
Action (Tony B): Bank accounts to be arranged

5. Items for noting
Meeting room for AGM has been booked.
Office bearers to write reports
Josh H to invite a returning officer

6. Other business
Meetup payments for LCA, Humbug, LibrePlanet.
Clinton Roy has been funding the account.
We are currently paying for the SLUG meetup.
Deferred until current meetup account is evaluated and if it can use the LA account or if we have to create a new account.
LCA is now under the Linux Australia account
ACTION: Josh H to find out how to consume the other Meetup events: Humbug, LibrePlanet.
Humbug meetup has a subscription for the next 6 months.
James: to chase up with Clinton re LibrePlanet.
UPDATE: In Progress

Council lunch at LCA2016
Josh H to arrange lunch for Monday
Tony to think about how to present his report

Request from LUKE JOHN to Officially oppose the TPP
Action (Josh S): Reach out to Kim Weatherall

7. In Camera
4 items were discussed in camera

2039hrs close.

Council Minutes Tuesday 1st March 2016

Tue, 2016-03-01 19:31 - 20:42

1. Meeting overview and key information
Present
Katie, Kathy (Chair), Sae Ra, Tony, Cherie, Craige,

Apologies:
Hugh

Meeting opened by Kathy at 1931hrs and quorum was achieved

MOTION that the previous minutes of 18 February are correct
Moved: Kathy
Seconded: Craige
Carried with 1 abstention

2. Log of correspondence

Motions moved on list
MOTION by Kathy Reid
That the changes to the Mailing List Policy to put it into Markdown
format in pull request
https://github.com/linuxaustralia/constitution_and_policies/pull/16
be approved and merged to Master
http://lists.linux.org.au/mailman/private/committee/2016-February/055548.html
Seconded: Tony Breeds
Passed
Tony to merge.

General correspondence
PO Box Renewal - To be paid.

From Previous Meetings:
VPAC Closure
MemberDB is not hard to move; it covers most of linux.org.au. There are a few LUG things/DNS items.
ACTION: Josh H to ping Steve
MemberDB is moved to a new host
Needs a quick code audit
Double check results of previous elections all match old host
Josh H to follow up with Steve any further work

Strategic Plan submitted by KATHY REID
Tabled
Good community discussion
Should be considered by 2016 council
MOTION by CHRIS N that we commend the strategic plan as written by KATHY REID for consideration by the 2016 council.
SECONDED: Joshua Hesketh
Passed unanimously

linux.com.au Renewal Notification from ENETICA - 90 days
Josh H to pay with LA debit card

Proposing that Linux Australia fund Software Freedom Conservancy by FRANCOIS MARIER
Defer for 2016 budget

3. Review of action items from previous meetings

Request from infrastructure subcommittee for assistance around previous LCA websites.
ACTION: Sae Ra to work with Steve W around this
In Progress

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.
Discussion held about means of finding people willing to assist with both the maintenance of the website platform as well as the content available on this.
JOSH H to speak to Donna regarding this
UPDATE: Ongoing
UPDATE: To be moved to a general action item. Put out a call for help to work on the website. Could this be treated as a project?
We need to at least get the website to D8 and automate the updating process.
ACTION: Josh to get a backup of the site to Craig
ACTION: Craige to stage the website to see how easy it is to update.
UPDATE: Craige to log in to the website to elevate permissions.
UPDATE: Still in progress
ACTION: Josh H to tarball the site.
Outstanding action
In Progress
Completed
ACTION: Craige to send key to Josh H
In progress.
Desire to do things as openly as possible so that others can contribute
ACTION: Needs analysis of website feature requirements. Should we make things simpler?
Understanding the User Needs of the website
ACTION: Kathy to approach the Membership team Subcommittee.

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.
Update: Still in progress
Money shuffling is still required
High Priority. Tony to update Action Register

ACTION: Josh H to follow-up on Invoices from WordCamp Sydney
UPDATE: Would be interested in changing the subcommittee structure for ongoing conferences. Conference committees to draft a policy.
UPDATE: Currently being followed up.
ACTION: Josh H to follow this up
Have received information back. Josh to continue following up.
News Limited is still outstanding
WordPress Foundation:
acts similarly to Linux Australia.
Runs multiple WordCamps in the US.
WordPress gives access to names, documentation and financial support to WordCamps outside of the US and Australia.
There is a mandate for WordPress not to return a profit.
What to do with outstanding debt:
What to do in the future: the WordPress Foundation will act as a sponsor to the event.
If the event makes a loss, the WordPress Foundation is able to act as a safety net.
Debt outstanding for 2014:
MOTION by JOSH HESKETH that we cancel the debt for the WordPress Foundation for their sponsorship of WordCamp in Sydney
Seconded: Tony
Carried with 1 objection
ACTION: Josh H to Confirm Sponsorship with the WordCamp 2016 budget.
ACTION: Missing payment from NewsCorp assigned to Tony Breeds
Admin Team draft budget from STEVEN WALSH
UPDATE: Awaiting a firmer budget
UPDATE: Still awaiting
UPDATE: Steve is on holidays and it will be followed up later.
UPDATE: Tony to follow up.
In Progress.
Use opportunity at LCA to onboard new admin team members if possible.

Please refer to Action Items list for more items.

ACTION: Kathy to create an Events checklist.

BAS was submitted and went through seamlessly.

4. Items for discussion

DrupalGov retrospective.
Kathy to follow up with Chris Skene and possibly connect with Donna Benjamin.
Implementing a measure to make sure we have oversight over the committees. ACTION: Kathy to contact

Josh Hesketh has a Credit Card and is a signatory to the bank accounts
MOTION by Kathy to keep Josh Hesketh as a signatory on LA Bank Accounts and approve his use of an LA Credit Card until such time that we have worked with Westpac to ensure that we have updated the signatories.
Seconded: Tony
Passed

Face to face timing and preparation - we should get the ball rolling on this.
Meeting in Melbourne.
ACTION Katie to Setup a doodle poll for an appropriate weekend for face-to-face

Event Updates:
LCA2016 update
Post event report drafted, awaiting some final financials before being distributed to Council
Most payments have now been made
Thank you certificates have gone out
Outstanding action items are:
upload slides
post-event report
archive the lca2016 website.

LCA2017 update
no new updates from last week.
ACTION: Craige to follow up Chris’ Westpac details.

LCA2018 update
Need to form the subcommittee to approve the budget and community members.
ACTION: Katie to run point with the team and follow-up
ACTION: to go onto the Face-to-face agenda.

PyConAU update
ACTION: Tony to take point.
The process around the CfP.

Drupal South 2016
ACTION: Kathy to take point.
Need to lock in a budget and to give the training.

OSDConf 2015
Post event report in progress.

GovHack
Sponsorship Agreement - for review
Subcommittee Policy - meeting to be convened, requesting participation from other Council members
Tony and Sae Ra to take point on behalf of LA to speak with GovHack.

JoomlaDay
Event has been finished. 1 sponsorship invoice is outstanding. Post event report to come.

DrupalGov
Not all information was gained. Information as above.

WordCamp Sunshine Coast 2016
Meeting with Luke
Need to get the team setup in the financial system and get them up and running on the event.

5. Items for noting
NIL

6. Other business
Linux Australia as charity funds funnel?
It’s been raised that it would be nice to have Linux Australia (or something else?) as a local, tax-deductible vessel via which Australians could donate directly to, for example, the Software Freedom Conservancy.
ACTION: Kathy to reach out to Jon, and try to partner with EFA in regards to Tax Deductible donations. Kathy to respond to the digital rights campaign email.

FOSS As A Service - GNU Social - Moved to Action Items list

New Mailing List Policy - Moved to Action Items list

Proposed Code of Conduct Amendments, by Christopher Neugebauer
https://github.com/linuxaustralia/constitution_and_policies/pull/12 – “establishes that technology choices are not grounds for harassment of attendees of our events.”
ACTION: To be deferred to the next meeting
https://github.com/linuxaustralia/constitution_and_policies/pull/13 - “removes a "specific problem behaviour" that I believe was added to the code of conduct in jest several years ago, and has made it difficult to have meaningful discussions about the remainder of the provisions of the code of conduct.”
MOTION by Katie to accept the Pull request from Chris N.
Seconded: Craige
Carried with 2 abstentions.

Misc Pending Policy Pull Requests
https://github.com/linuxaustralia/constitution_and_policies/pull/15 Remove repeated second paragraph from Rationale
MOTION: by Katie to merge pull request #15
Seconded Tony
Passed

Enable SPF on mailing lists
as per mailing list conversations
Council to seek Admin Team’s advice.
MOTION by Kathy: The change to SPF is reverted so we go back to the state we were at previously
Seconded: Cherie.
Carried.
ACTION: Kathy to liaise with Admin Team.

7. Other business carried from previous Council
Meetup payments for LCA, Humbug, LibrePlanet.
Clinton Roy has been funding the account.
We are currently paying for the SLUG meetup.
Deferred until current meetup account is evaluated and if it can use the LA account or if we have to create a new account.
LCA is now under the Linux Australia account
ACTION: Josh H to find out how to consume the other Meetup events: Humbug, LibrePlanet.
Humbug meetup has a subscription for the next 6 months.
James: to chase up with Clinton re LibrePlanet.
UPDATE: In Progress

Request from LUKE JOHN to Officially oppose the TPP
Action (Josh S): Reach out to Kim Weatherall
ACTION: Hugh to publish email.

8. In Camera
1 item was discussed in camera

2042hrs close

Council Minutes Tuesday 15th March 2016

Tue, 2016-03-15 19:36 - 20:45

1. Meeting overview and key information
Present
Hugh, Sae Ra, Katie, Cherie, Kathy

Apologies:
Tony, Craige,

Meeting opened by Hugh at 1936hrs and quorum was achieved

MOTION that the previous minutes of 1 March are correct
Moved: Kathy
Seconded: Cherie
Carried with 1 abstention

2. Log of correspondence
Motions moved on list
Nil

General correspondence

From Previous Meetings:
VPAC Closure
MemberDB is not hard to move; it covers most of linux.org.au. There are a few LUG things/DNS items.
ACTION: Josh H to ping Steve
MemberDB is moved to a new host
Needs a quick code audit
Double check results of previous elections all match old host
Josh H to follow up with Steve any further work

Strategic Plan submitted by KATHY REID
Tabled
Good community discussion
Should be considered by 2016 council
MOTION by CHRIS N that we commend the strategic plan as written by KATHY REID for consideration by the 2016 council.
SECONDED: Joshua Hesketh
Passed unanimously

linux.com.au Renewal Notification from ENETICA - 90 days
Josh H to pay with LA debit card
UPDATE: Kathy has paid this with her LA debit card, can be removed from agenda

Proposing that Linux Australia fund Software Freedom Conservancy by FRANCOIS MARIER
Defer for 2016 budget

3. Review of action items from previous meetings

Request from infrastructure subcommittee for assistance around previous LCA websites.
ACTION: Sae Ra to work with Steve W around this
In Progress
Sae Ra to drop a line to Steve

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.
Discussion held about means of finding people willing to assist with both the maintenance of the website platform as well as the content available on this.
JOSH H to speak to Donna regarding this
UPDATE: Ongoing
UPDATE: To be moved to a general action item. Put out a call for help to work on the website. Could this be treated as a project?
We need to at least get the website to D8 and automate the updating process.
ACTION: Josh to get a backup of the site to Craig
ACTION: Craige to stage the website to see how easy it is to update.
UPDATE: Craige to log in to the website to elevate permissions.
UPDATE: Still in progress
ACTION: Josh H to tarball the site.
Outstanding action
In Progress
Completed
ACTION: Craige to send key to Josh H
In progress.
Desire to do things as openly as possible so that others can contribute
ACTION: Needs analysis of website feature requirements. Should we make things simpler?
Understanding the User Needs of the website
ACTION: Kathy to approach the Membership team Subcommittee.
Draft a survey to better understand the usage of the public-facing website. The membership team has this in progress.
ACTION Kathy to communicate with the Linux Aus List.

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.
Update: Still in progress
Money shuffling is still required
High Priority. Tony to update Action Register

ACTION: Josh H to follow-up on Invoices from WordCamp Sydney
UPDATE: Would be interested in changing the subcommittee structure for ongoing conferences. Conference committees to draft a policy.
UPDATE: Currently being followed up.
ACTION: Josh H to follow this up
Have received information back. Josh to continue following up.
News Limited is still outstanding
WordPress Foundation:
acts similarly to Linux Australia.
Runs multiple WordCamps in the US.
WordPress gives access to names, documentation and financial support to WordCamps outside of the US and Australia.
There is a mandate for WordPress not to return a profit.
What to do with outstanding debt:
What to do in the future: the WordPress Foundation will act as a sponsor to the event.
If the event makes a loss, the WordPress Foundation is able to act as a safety net.
Debt outstanding for 2014:
MOTION by JOSH HESKETH that we cancel the debt for the WordPress Foundation for their sponsorship of WordCamp in Sydney
Seconded: Tony
Carried with 1 objection
ACTION: Josh H to Confirm Sponsorship with the WordCamp 2016 budget.
ACTION: Missing payment from NewsCorp assigned to Tony Breeds

Admin Team draft budget from STEVEN WALSH
UPDATE: Awaiting a firmer budget
UPDATE: Still awaiting
UPDATE: Steve is on holidays and it will be followed up later.
UPDATE: Tony to follow up.
In Progress.
Use opportunity at LCA to onboard new admin team members if possible.

Please refer to Action Items list for more items.

ACTION: Kathy to create an Events checklist.

BAS was submitted and went through seamlessly.

4. Items for discussion

Kathy to follow up with Chris Skene and possibly connect with Donna Benjamin.
Implementing a measure to make sure we have oversight over the committees. ACTION: Kathy to contact

Josh Hesketh has a Credit Card and is a signatory to the bank accounts
MOTION by Kathy to keep Josh Hesketh as a signatory on LA Bank Accounts and approve his use of an LA Credit Card until such time that we have worked with Westpac to ensure that we have updated the signatories.
Seconded: Tony
Passed

Face to face timing and preparation - we should get the ball rolling on this.
Meeting in Melbourne.
ACTION Katie to Setup a doodle poll for an appropriate weekend for face-to-face
7th/8th May
ACTION: Kathy to invite Josh H to Face to Face.

Our insurance coverage ends on 19th April 2016, we might want to discuss renewal and coverage as we only have about a month to renew

During F2F discussions, I'd like to suggest that Josh Hesketh is invited to May F2F (if he's available / willing) to provide good handover and guidance

GovHack in general - ie recapping Council on the Friday night discussion
View to form GovHack Australia as a subcommittee of Linux Australia. Went through the rundown of how things work, legalities, all went smoothly. Part of the conversation was around the autonomy of GovHack. GovHack to come to LA with a subcommittee policy. Community Membership, for oversight.

Membership committee next actions - recapping Council on the Thursday night discussion
As discussed.

Informal request to Hugh from Ben Dechrai BuzzConf https://buzzconf.io/sponsor-buzzconf-2016/ asking if LA would consider sponsoring. Have said would float informally with Council in first instance.
Very much in the Linux Australia spirit.
Excellent atmosphere.
It is not an open source event. Semi-corporate presenters.
There are no issues with the integrity or running of the event, just not sure that LA should be sponsoring the event because it's not open source focussed.
If BuzzConf were to subscribe to a more open source way of doing things; it is more a case of them needing corporate sponsorship.
It is worth exploring further
Logo/visibility and what options are available etc. It would be worthwhile looking at numbers on what things we could provide.
Hugh to have a chat with Ben Dechrai

Event Updates:

LCA2016 update
Geelong Debrief (email from Josh H): "I think it's useful for LCA teams to do a debrief-style ghosts meeting with either a rep from the council or a ghost"
To take back to David

LCA2017 update
Ghosts has been organised. 16/17th April.
ACTION: Hugh ask for an update regarding Zookeepr.

LCA2018 update
Need to form the subcommittee to approve the budget and community members.
ACTION: Katie to run point with the team and follow-up
ACTION: to go onto the Face-to-face agenda.
Seems to be moving along; domains etc.

PyConAU update
ACTION: Tony to take point.
CfP is about to open.

Drupal South 2016
ACTION: Kathy to take point.
Need to lock in a budget and to give the training.
Kathy to reach out.

OSDConf 2015
Post event report in progress.

GovHack
Sponsorship Agreement - for review
Subcommittee Policy - meeting to be convened, requesting participation from other Council members
Tony and Sae Ra to take point on behalf of LA to speak with GovHack.

JoomlaDay
Event has been finished. 1 sponsorship invoice is outstanding. Post event report to come.

DrupalGov
Covered by Kathy

WordCamp Sunshine Coast 2016
Meeting with Luke
Need to get the team setup in the financial system and get them up and running on the event.
Really professional vibe. Added to the website. We need to get them Xero access. Venue, sponsors etc.

WordCamp Sydney 2016
Monday 21st Kathy will talk to them re subcommittee policy.

5. Items for noting
PO Box and Mail Redirection have been paid on Kathy’s CC.

6. Other business
Linux Australia as charity funds funnel?
It’s been raised that it would be nice to have Linux Australia (or something else?) as a local, tax-deductible vessel via which Australians could donate directly to, for example, the Software Freedom Conservancy.
ACTION: Kathy to reach out to Jon, and try to partner with EFA in regards to Tax Deductible donations. Kathy to respond to the digital rights campaign email.

FOSS As A Service - GNU Social - Moved to Action Items list

New Mailing List Policy - Moved to Action Items list

Proposed Code of Conduct Amendments, by Christopher Neugebauer
https://github.com/linuxaustralia/constitution_and_policies/pull/12 – “establishes that technology choices are not grounds for harassment of attendees of our events.”
ACTION: To be deferred to Face-to-Face
https://github.com/linuxaustralia/constitution_and_policies/pull/13 - “removes a "specific problem behaviour" that I believe was added to the code of conduct in jest several years ago, and has made it difficult to have meaningful discussions about the remainder of the provisions of the code of conduct.”
MOTION by Katie to accept the Pull request from Chris N.
Seconded: Craige
Carried with 2 abstentions.

Misc Pending Policy Pull Requests
https://github.com/linuxaustralia/constitution_and_policies/pull/15 Remove repeated second paragraph from Rationale
MOTION: by Katie to merge pull request #15
Seconded Tony
Passed

Enable SPF on mailing lists
as per mailing list conversations
Council to seek Admin Team’s advice.
MOTION by Kathy: The change to SPF is reverted so we go back to the state we were at previously
Seconded: Cherie.
Carried.
ACTION: Kathy to liaise with Admin Team.
UPDATE: Kathy emailed Admin Team 15th March for advice, awaiting response

Insurance Policy:
Is our coverage adequate?
Are there any extensions we need to consider?
ACTION: Kathy to make the call to pay the insurance like for like.

7. Other business carried from previous Council
Meetup payments for LCA, Humbug, LibrePlanet.
Clinton Roy has been funding the account.
We are currently paying for the SLUG meetup.
Deferred until current meetup account is evaluated and if it can use the LA account or if we have to create a new account.
LCA is now under the Linux Australia account
ACTION: Josh H to find out how to consume the other Meetup events: Humbug, LibrePlanet.
Humbug meetup has a subscription for the next 6 months.
James: to chase up with Clinton re LibrePlanet.
UPDATE: In Progress

Request from LUKE JOHN to Officially oppose the TPP
Confirmed. Hugh to put the wording around to a thank you and announcement.

8. In Camera
Nil items were discussed in camera

2045hrs AEDT close.

Council Minutes Tuesday 26th April 2016

Tue, 2016-04-26 19:37 - 20:36

1. Meeting overview and key information
Present
Kathy, Cherie, Katie, Sae Ra

Apologies:
Tony, Craige, Hugh,

Meeting opened by Kathy at 1937hrs and quorum was achieved

MOTION that the previous minutes of 12 April are correct
Moved: Katie
Seconded: Sae Ra
Carried with 1 abstention

2. Log of correspondence
Motions moved on list
3 Motions
MOTION by KATHY REID that the GovHack 2016 budget provided by GovHack Subcommittee, and which is assumed to be approved by GovHack Subcommittee, is approved on condition that the following changes are made:
The midline budget should be refactored to estimate a 10% increase in revenue, not a 15% increase in revenue
An expense of $4k should be budgeted to help cover LA Xero / insurance costs
The current contingency should remain
Seconded by SAE RA GERMAINE
Carried

MOTION by KATHY REID that the Event Checklist be pulled into the Policies and Constitution repository to act as a guide for event teams and LA
Seconded by SAE RA GERMAINE
Carried
UPDATE: This has been merged to master on GitHub.

MOTION by KATHY REID that the proposed GovHack Subcommittee policy be approved as the basis on which a GovHack Subcommittee can be formed, replacing in this instance the existing Subcommittee Policy due to the special requirements of the GovHack Subcommittee
Seconded by SAE RA GERMAINE
Carried
ACTION: Kathy to work with Tony. The markdown format of this Subcommittee Policy needs to be PRd and merged to master branch on GitHub. Tony has the markdown format, just needs a PR and a merge.

General correspondence
VPAC Closure
Email notice to be sent out.
MOTION by KATHY REID: that Linux Australia would like to thank the Linux Australia Admin Team for their quick response and tireless efforts in keeping the Linux Australia infrastructure up and running.
Seconded: KATIE
Carried

MOTION by KATHY REID that, should the spam thread become more heated, the Linux Australia Council place the list in moderation.
Seconded: KATIE
Carried

Insurance
UPDATE: Sohan was chased 25th April, awaiting update
UPDATE: an invoice has been sent. Kathy to raise the invoice and ping Sae Ra to approve in Westpac.

3. Review of action items from previous meetings

Request from infrastructure subcommittee for assistance around previous LCA websites.
ACTION: Sae Ra to work with Steve W around this
In Progress
Sae Ra to drop a line to Steve

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.
ACTION: Needs analysis of website feature requirements. Should we make things simpler?
Understanding the User Needs of the website
ACTION: Kathy to approach the Membership team Subcommittee.
Draft a survey to better understand the usage of the public-facing website. The membership team has this in progress.
ACTION Kathy to communicate with the Linux Aus List.
Survey has been sent out to the Linux Aus list.
Need to compile a summary of answers from the survey.
UPDATE: Survey has been completed, and key findings and recommendations sent to the Linux Aus list.
UPDATE: Survey was sent out to the LA list. Look at wireframes and look at candidate platforms. TL;DR: the membership team have been doing stuff.

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.
High Priority. Tony to update Action Register
UPDATE: still blocked by getting access to the NZ accounts.

Please refer to Action Items list for more items.

4. Items for discussion

Face to face timing and preparation - we should get the ball rolling on this.
Meeting in Melbourne.
ACTION Katie to Setup a doodle poll for an appropriate weekend for face-to-face
7th/8th May
ACTION: Kathy to invite Josh H to Face to Face.
UPDATE: Josh H has confirmed availability for F2F, he is happy to arrive the Friday night and participate for the full weekend. He is awaiting location / accommodation details.
Key items for preparation are accommodation and travel.
ACTION: Tony for travel
ACTION: Kathy for accommodation
FOR DISCUSSION: Agenda items for discussion, adding phone numbers to agenda. Do we want to prioritise agenda items?
ACTION Kathy: Make sure Josh H has the agenda.

Informal request to Hugh from Ben Dechrai BuzzConf https://buzzconf.io/sponsor-buzzconf-2016/ asking if LA would consider sponsoring. Have said would float informally with Council in first instance.
Very much in the Linux Australia spirit.
Excellent atmosphere.
It is not an open source event. Semi-corporate presenters.
There are no issues with the integrity or running of the event, just not sure that LA should be sponsoring the event because it's not open source focussed.
If BuzzConf were to subscribe to a more open source way of doing things; it is more a case of them needing corporate sponsorship.
It is worth exploring further
Logo/visibility and what options are available etc. It would be worthwhile looking at numbers on what things we could provide.
ACTION: Hugh to have a chat with Ben
UPDATE: In Progress.

Event Updates:
LCA2016 update
Council to formally review the post-event report and determine if it needs to be sent to the list
Organise debrief.

LCA2017 update
Ghosts:
Was very successful.
The history of the venue is there and the running of it should be smooth.
Admin team is up to speed. The running of the conference will be great.

LCA2018 update
ACTION: Katie to chase up the team for information before face-to-face

PyConAU update
Deferred to next teleconference

Drupal South Gold Coast 2016
Cherie to be the point person for this.
ACTION: Kathy to take on the action item spreadsheet.

OSDConf 2015
ACTION: Katie to review the event report and determine if we should release it.

GovHack
Seem to be speeding along.
GovHack NZ are doing wonderful things.

JoomlaDay
Nothing to report.

DrupalGov
Outstanding reimbursement
ACTION Kathy: Outstanding reimbursement

WordCamp Sunshine Coast 2016
Just under 2 weeks until show time. They are ready to go.
ACTION Kathy: check to see how things are going.

WordCamp Sydney 2016
Ticking along really well. There’s a little bit of an issue with finance access. All should now be resolved.

5. Items for noting
Nil items for noting

6. Other business
Carried Over from Previous Minutes
Linux Australia as charity funds funnel?
It’s been raised that it would be nice to have Linux Australia (or something else?) as a local, tax-deductible vessel via which Australians could donate directly to, for example, the Software Freedom Conservancy.
ACTION: Kathy to reach out to Jon, and try to partner with EFA in regards to Tax Deductible donations. Kathy to respond to the digital rights campaign email.

FOSS As A Service - GNU Social - Moved to Action Items list

7. In Camera
1 item was discussed in camera

2036AEST close.

Council Minutes Wednesday 27 January 2016

Wed, 2016-01-27 19:47 - 21:23

1. Meeting overview and key information
Present
Chris, Josh H, Sae Ra G, Josh S

Apologies:
James Iseppi, Tony B

Meeting opened by Josh H at 1947hrs and quorum was achieved

MOTION that the previous minutes of 13 January are correct
Moved: Josh H
Seconded: Chris
Carried with 1 abstention

2. Log of correspondence
Motions moved on list
MOTION by JOSHUA HESKETH to approve admin-team's request for no more than $970 to purchase replacement parts for HP server and new disks.
SECONDED SAE RA GERMAINE
PASSED

General correspondence

VPAC Closure
MemberDB is not hard to move; it covers most of linux.org.au. There are a few LUG things/DNS items.
ACTION: Josh H to ping Steve
MemberDB is moved to a new host
Needs a quick code audit
Double check results of previous elections all match old host
Josh H to follow up with Steve any further work
Work will be conducted whilst they are in Melbourne

Strategic Plan submitted by KATHY REID
Tabled
Good community discussion
Should be considered by 2016 council
MOTION by CHRIS N that we commend the strategic plan as written by KATHY REID for consideration by the 2016 council.
SECONDED: Joshua Hesketh
Passed unanimously

linux.com.au Renewal Notification from ENETICA - 90 days
Josh H to pay with LA debit card by the end of the week.

Proposing that Linux Australia fund Software Freedom Conservancy by FRANCOIS MARIER
Defer for 2016 budget
There is only a few days left whilst donations are being matched.
MOTION by JOSH H: The 2015 LA Council recommends that the incoming council strongly consider donating to the Conservancy once a budget is in place.
Seconded: Sae Ra Germaine
Passed

LCA/EFA Social event
Donation to occur.
MOTION by JOSH HESKETH LA donate $1500 to the EFA
Seconded Josh S
Carried.

Payment of ATO
Completed.

3. Review of action items from previous meetings

Request from infrastructure subcommittee for assistance around previous LCA websites.
ACTION: Sae Ra to work with Steve W around this
In Progress
This has been completed and to be removed from the agenda

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.
Discussion held about means of finding people willing to assist with both the maintenance of the website platform as well as the content available on this.
JOSH H to speak to Donna regarding this
UPDATE: Ongoing
UPDATE: To be moved to a general action item. Put out a call for help to work on the website. Could this be treated as a project?
We need to at least get the website to D8 and automate the updating process.
ACTION: Josh to get a backup of the site to Craig
ACTION: Craige to stage the website to see how easy it is to update.
UPDATE: Craige to log in to the website to elevate permissions.
UPDATE: Still in progress
ACTION: Josh H to tarball the site.
Outstanding action
In Progress
Completed
ACTION: Craige to send key to Josh H
In progress.
Desire to do things as openly as possible so that others can contribute
ACTION: Needs analysis of website feature requirements. Josh H to put together a basic set of needs to send to the list as a starting point.
To be removed from the agenda.

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.
Update: Still in progress
Money shuffling is still required
In progress.

ACTION: Josh H to follow-up on Invoices from WordCamp Sydney
UPDATE: Would be interested in changing the subcommittee structure for ongoing conferences. Conference committees to draft a policy.
UPDATE: Currently being followed up.
ACTION: Josh H to follow this up
Have received information back. Josh to continue following up.
News Limited is still outstanding
WordPress Foundation:
acts similarly to Linux Australia.
Runs multiple WordCamps in the US.
WordPress gives access to names, documentation and financial support to WordCamps outside of the US and Australia.
There is a mandate for WordPress not to return a profit.
What to do with outstanding debt:
What to do in the future: the WordPress Foundation will act as a sponsor to the event.
If the event makes a loss, the WordPress Foundation is able to act as a safety net.
Debt outstanding for 2014:
MOTION by JOSH HESKETH that we cancel the debt for the WordPress Foundation for their sponsorship of WordCamp in Sydney
Seconded: Tony
Carried with 1 objection
ACTION: Josh H to Confirm Sponsorship with the WordCamp 2016 budget.
One outstanding invoice
No progress.
Admin Team draft budget from STEVEN WALSH
UPDATE: Awaiting a firmer budget
UPDATE: Still awaiting
UPDATE: Steve is on holidays and it will be followed up later.
UPDATE: Tony to follow up.
In Progress.
Use opportunity at LCA to onboard new admin team members if possible.

Please refer to Action Items list for more items.

4. Items for discussion

LCA2016 update
Sold out
Everything is tracking well
5 days to go.

LCA2017 update
Presentation for Geelong to be prepared.
In Progress.
Subcommittee members:
Community members to be council members
Ghosts list has also been collated.

LCA2018 update
Site visits occurred and went well.

PyConAU update
Westpac access has gone through
Budget has been accepted.

Drupal South 2016
Budget was passed within the subcommittee with 1 abstention
MOTION by JOSH to approve the budget as proposed by Vladimir on the 27th of January
Seconded: Sae Ra
Passed.

OSDConf 2015
Awaiting a closure report. Nothing new from the last fortnight

GovHack
Subcommittee policy has been put forward.
2016 Council will need to work with the GovHack team to form this subcommittee.

JoomlaDay
Seeking a closure report.
Check payment of invoices

DrupalGov
Requires access to Westpac.

WordCamp Sunshine Coast 2016
Requires access to Westpac.

5. Items for noting
Meeting room for AGM has been booked.
Office bearers to write reports
Josh H to invite a returning officer.
Josh to invite Terry Dawson

6. Other business
Meetup payments for LCA, Humbug, LibrePlanet.
Clinton Roy has been funding the account.
We are currently paying for the SLUG meetup.
Deferred until current meetup account is evaluated and if it can use the LA account or if we have to create a new account.
LCA is now under the Linux Australia account
ACTION: Josh H to find out how to consume the other Meetup events: Humbug, LibrePlanet.
Humbug meetup has a subscription for the next 6 months.
James: to chase up with Clinton re LibrePlanet.
UPDATE: In Progress
To be removed from future agendas.

Council lunch at LCA2016
Josh H to arrange lunch for Monday
Tony to think about how to present his report
Council dinner to occur the

Request from LUKE JOHN to Officially oppose the TPP
Action (Josh S): Reach out to Kim Weatherall
In Progress

7. In Camera
5 items were discussed in camera

2123hrs close

Council Minutes Tuesday 29th March 2016

Tue, 2016-03-29 19:31 - 21:00

1. Meeting overview and key information
Present
Hugh, Sae Ra, Kathy, Cherie, Tony

Apologies:
Katie, Craige

Meeting opened by Hugh at 1931hrs and quorum was achieved

MOTION that the previous minutes of 15 March are correct
Moved: Cherie
Seconded: Kathy
Carried with 1 abstention

2. Log of correspondence
Motions moved on list
1 Motion Pending

MOTION by Kathy Reid that in alignment with the Subcommittee Policy (v2) a Subcommittee is formed to run WordCamp Sydney 2016 with the following membership;

Wil Brown (chair) -
Kristen Symonds (treasurer) -
Peter Shilling (organiser) -
James Carmody (organiser) -
Isabel Brison (organiser) -
Jude Love (organiser) -
Dion Beetson (organiser) -
Dee Teal (community rep) -
Peter Bui (community rep) -

SECONDED: Tony Breeds
Passed unanimously.

General correspondence

GovHack Subcommittee policy
ACTION:
Kathy to Draft the motion
Suggest the changes to govhack on the termination terms to 30 days to return any finances when starting the partnership.

From Previous Meetings:
VPAC Closure
MemberDB is not hard to move; it covers most of linux.org.au. There are a few LUG things/DNS items.
ACTION: Josh H to ping Steve
MemberDB is moved to a new host
Needs a quick code audit
Double check results of previous elections all match old host
Josh H to follow up with Steve any further work
ACTION: Sae Ra to ping Steve.

Proposing that Linux Australia fund Software Freedom Conservancy by FRANCOIS MARIER
Defer for 2016 F2F - Removed from the Council Agenda.

3. Review of action items from previous meetings

Request from infrastructure subcommittee for assistance around previous LCA websites.
ACTION: Sae Ra to work with Steve W around this
In Progress
Sae Ra to drop a line to Steve

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.
ACTION: Needs analysis of website feature requirements. Should we make things simpler?
Understanding the User Needs of the website
ACTION: Kathy to approach the Membership team Subcommittee.
Draft a survey to better understand the usage of the public-facing website. The membership team has this in progress.
ACTION Kathy to communicate with the Linux Aus List.
Survey has been sent out to the Linux Aus list.
Need to compile a summary of answers from the survey.

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.
Update: Still in progress
Money shuffling is still required
High Priority. Tony to update Action Register
UPDATE: still blocked by getting access to the NZ accounts.

ACTION: Josh H to follow-up on Invoices from WordCamp Sydney
ACTION: Missing payment from NewsCorp assigned to Tony Breeds
Payment has now been received.

Admin Team draft budget from STEVEN WALSH
UPDATE: Awaiting a firmer budget
UPDATE: Still awaiting
UPDATE: Steve is on holidays and it will be followed up later.
UPDATE: Tony to follow up.
In Progress.
ACTION: Sae Ra to ask Steve

Please refer to Action Items list for more items.

ACTION: Kathy to create an Events checklist.

4. Items for discussion

Kathy to follow up with Chris Skene and possibly connect with Donna Benjamin.
Implementing a measure to make sure we have oversight over the committees.
ACTION: Kathy to contact
Eventbrite payment has been chased, and a post-event report is sought.

Face to face timing and preparation - we should get the ball rolling on this.
Meeting in Melbourne.
ACTION Katie to Setup a doodle poll for an appropriate weekend for face-to-face
7th/8th May
ACTION: Kathy to invite Josh H to Face to Face.
UPDATE: Josh H has confirmed availability for F2F, he is happy to arrive the Friday night and participate for the full weekend. He is awaiting location / accommodation details.
Key items for preparation are accommodation and travel.
ACTION: Tony for travel
ACTION: Kathy for accommodation

Our insurance coverage ends on 19th April 2016, we might want to discuss renewal and coverage as we only have about a month to renew

GovHack in general - ie recapping Council on the Friday night discussion
View to form GovHack Australia as a subcommittee of Linux Australia. Went through the rundown of how things work, legalities, all went smoothly. Part of the conversation was around the autonomy of GovHack. GovHack to come to LA with a subcommittee policy. Community Membership, for oversight.

Membership committee next actions - recapping Council on the Thursday night discussion

Informal request to Hugh from Ben Dechrai BuzzConf https://buzzconf.io/sponsor-buzzconf-2016/ asking if LA would consider sponsoring. Have said would float informally with Council in first instance.
Very much in the Linux Australia spirit.
Excellent atmosphere.
It is not an open source event. Semi-corporate presenters.
There are no issues with the integrity or running of the event just not sure that LA should be sponsoring the event because it's not open source focussed.
If BuzzConf were to subscribe to a more open source way of doing things. It is more a case of needing corporate sponsorship.
It is worth exploring further
Logo/visibility and what options are available etc. It would be worthwhile looking at numbers on what things we could provide.
ACTION: Hugh to have a chat with Ben
UPDATE: In Progress.

Schwag options for Linux Australia to send out to events.

Event Updates:

LCA2016 update
Previous Meeting
Geelong Debrief (email from Josh H): ”I think it's useful for LCA teams to do a debrief style ghosts meeting with either a rep from the council or a ghost”
To take back to David

LCA2017 update
Previous Meeting
Ghosts has been organised. 16/17th April.
ACTION: Hacking away.

LCA2018 update
Previous Meeting
Need to form the subcommittee to approve the budget and community members.
ACTION: Katie to run point with the team and follow-up
ACTION: to go onto the Face-to-face agenda.
Seems to be coming along; domains etc.

PyConAU update
Previous Meeting
ACTION: Tony to take point.
CfP is about to open.
Arranging Keynotes.

Drupal South 2016
Previous Meeting
ACTION: Kathy to take point.
Need to lock in a budget and to give the training.
Kathy to reach out.

OSDConf 2015
Previous Meeting
Post event report in progress.

GovHack
Previous Meeting
Sponsorship Agreement - for review
Subcommittee Policy - meeting to be convened, requesting participation from other Council members
Tony and Sae Ra to take point on behalf of LA to speak with GovHack.

JoomlaDay
Tony to ping the organisers.
Previous Meeting
Event has been finished. 1 sponsorship invoice is outstanding. Post event report to come.

DrupalGov
Previously covered.
Previous Meeting
Covered by Kathy

WordCamp Sunshine Coast 2016
Previous Meeting
Meeting with Luke
Need to get the team setup in the financial system and get them up and running on the event.
Really professional vibe. Added to the website. We need to get them Xero access. Venue, sponsors etc.

WordCamp Sydney 2016
Kathy to communicate.
Previous Meeting
Monday 21st Kathy will talk to them re subcommittee policy.

5. Items for noting
Nil

6. Other business
MemberDB
Emails have been sent out regarding the survey. Follow-up will be provided once results have been collated.

Carried Over from Previous Minutes
Linux Australia as charity funds funnel?
It’s been raised that it would be nice to have Linux Australia (or something else?) as a local, tax deductible vessel via which Australians could donate directly to, for example, the Software Freedom Conservancy.
ACTION: Kathy to reach out to Jon, and try to partner with EFA in regards to Tax Deductible donations. Kathy to respond to the digital rights campaign email.

FOSS As A Service - GNU Social - Moved to Action Items list

New Mailing List Policy - Moved to Action Items list

Proposed Code of Conduct Amendments, by Christopher Neugebauer
https://github.com/linuxaustralia/constitution_and_policies/pull/12 – “establishes that technology choices are not grounds for harassment of attendees of our events.”
ACTION: To be deferred to Face-to-Face
https://github.com/linuxaustralia/constitution_and_policies/pull/13 - “removes a "specific problem behaviour" that I believe was added to the code of conduct in jest several years ago, and has made it difficult to have meaningful discussions about the remainder of the provisions of the code of conduct.”
MOTION by Katie to accept the Pull request from Chris N.
Seconded: Craige
Carried with 2 abstentions.
Misc Pending Policy Pull Requests
https://github.com/linuxaustralia/constitution_and_policies/pull/15 Remove repeated second paragraph from Rationale
MOTION: by Katie to merge pull request #15
Seconded Tony
Passed

Enable SPF on mailing lists
as per mailing list conversations
Council to seek Admin Team’s advice.
MOTION by Kathy: the change to SPF is reverted so we go back to the state we were in previously
Seconded: Cherie.
Carried.
ACTION: Kathy to liaise with Admin Team.
UPDATE: Kathy emailed Admin Team 15th March for advice, awaiting advice
Emailed admin team.

Insurance Policy:
Is our coverage adequate?
Are there any extensions we need to consider?
ACTION: Kathy to make the call to pay the insurance like for like.

New Other Business

7. Other business carried from previous Council
Meetup payments for LCA, Humbug, LibrePlanet.
Clinton Roy has been funding the account.
We are currently paying for the SLUG meetup.
Deferred until the current meetup account is evaluated to see if it can use the LA account or if we have to create a new account.
LCA is now under the Linux Australia account
ACTION: Josh H to find out how to consume the other Meetup events: Humbug, LibrePlanet.
Humbug meetup has a subscription for the next 6 months.
James: to chase up with Clinton re LibrePlanet.
UPDATE: In Progress

Request from LUKE JOHN to Officially oppose the TPP
Confirmed. Hugh to circulate wording for a thank-you and announcement.
Motion by Kathy Reid that thanks go to Josh Stewart for his help with putting together the TPP submission.
Seconded Hugh Blemings

2100AEDST close.

Council Minutes Tuesday 12th April 2016

Tue, 2016-04-12 19:37 - 20:24

1. Meeting overview and key information
Present
Kathy, Sae Ra, Katie, Tony, Hugh

Apologies:
Cherie, Craige,

Meeting opened by Hugh at 1937hrs and quorum was achieved

MOTION that the previous minutes of 29 March are correct
Moved: Hugh
Seconded: Kathy
Carried with 1 abstention

2. Log of correspondence
Motions moved on list
1 Motion
MOTION by Kathy Reid that the proposed GovHack Subcommittee policy be approved as the basis on which a GovHack Subcommittee can be formed, replacing in this instance the existing Subcommittee Policy due to the special requirements of the GovHack Subcommittee
SECONDED by Sae Ra Germaine
Carried

General correspondence

Insurance
Certificate of currency

GovHack
Motion has been moved, to accept the subcommittee policy
GovHack Subcommittee to get the budget to the LA Council
ACTION: Tony
ACTION: Kathy - Email govhack to get the projected numbers/actuals from 2016 and outline the proposed plan
Late tomorrow/midday Thursday, a council motion to be published on list accepting the budget and giving the go-ahead.
Access to Xero can be granted tomorrow morning, invoices can be generated and then sent out once the budget has been approved.
MOTION BY HUGH BLEMINGS to form the GovHack Subcommittee including the GovHack Global Operations team with Sae Ra Germaine and Tony Breeds as standing council members until community members can be found.
Seconded: SAE RA GERMAINE
Passed.

From Previous Meetings:
VPAC Closure
MemberDB is not hard to move, nor is most of linux.org.au. There are a few LUG things/DNS items remaining.
ACTION: Josh H to ping Steve
MemberDB is moved to a new host
Needs a quick code audit
Double check results of previous elections all match old host
Josh H to follow up with Steve any further work
ACTION: Sae Ra to ping Steve.
Update:
Work in progress.

3. Review of action items from previous meetings

Request from infrastructure subcommittee for assistance around previous LCA websites.
ACTION: Sae Ra to work with Steve W around this
In Progress
Sae Ra to drop a line to Steve

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.
ACTION: Needs analysis of website feature requirements. Should we make things simpler?
Understanding the User Needs of the website
ACTION: Kathy to approach the Membership team Subcommittee.
Draft a survey to better understand the usage of the public facing website. The membership team has this in progress.
ACTION Kathy to communicate with the Linux Aus List.
Survey has been sent out to the Linux Aus list.
Need to compile a summary of answers from the survey.

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.
High Priority. Tony to update Action Register
UPDATE: still blocked by getting access to the NZ accounts.

Admin Team draft budget from STEVEN WALSH
UPDATE: Awaiting a firmer budget
UPDATE: Still awaiting
UPDATE: Steve is on holidays and it will be followed up later.
UPDATE: Tony to follow up.
In Progress.
ACTION: Sae Ra to ask Steve

Please refer to Action Items list for more items.

ACTION: Kathy to create an Events checklist.

4. Items for discussion

Kathy to follow up with Chris Skene and possibly connect with Donna Benjamin.
Implementing a measure to make sure we have oversight of the committees.
ACTION: Kathy to contact
Eventbrite payment has been chased. And post event report is sought.
1 more transaction needs to be reviewed and finalised. Waiting on the post event report.

Face to face timing and preparation - we should get the ball rolling on this.
Meeting in Melbourne.
ACTION Katie to Setup a doodle poll for an appropriate weekend for face-to-face
7th/8th May
ACTION: Kathy to invite Josh H to Face to Face.
UPDATE: Josh H has confirmed availability for F2F, he is happy to arrive the Friday night and participate for the full weekend. He is awaiting location / accommodation details.
Key items for preparation are accommodation and travel.
ACTION: Tony for travel
ACTION: Kathy for accommodation

Our insurance coverage ends on 19th April 2016, we might want to discuss renewal and coverage as we only have about a month to renew
As above

GovHack in general - ie recapping Council on the Friday night discussion
As above
Membership committee next actions - recapping Council on the Thursday night discussion
As above

Informal request to Hugh from Ben Dechrai BuzzConf https://buzzconf.io/sponsor-buzzconf-2016/ asking if LA would consider sponsoring. Have said would float informally with Council in first instance.
Very much in the Linux Australia spirit.
Excellent atmosphere.
It is not an open source event. Semi-corporate presenters.
There are no issues with the integrity or running of the event just not sure that LA should be sponsoring the event because it's not open source focussed.
If BuzzConf were to subscribe to a more open source way of doing things. It is more a case of needing corporate sponsorship.
It is worth exploring further
Logo/visibility and what options are available etc. It would be worthwhile looking at numbers on what things we could provide.
ACTION: Hugh to have a chat with Ben
UPDATE: In Progress.

Schwag options for Linux Australia to send out to events.

Event Updates:
LCA2016 update
Post event report was submitted.

LCA2017 update
Ghosts this weekend
Kathy to forward twitter details to Chris.

LCA2018 update
All good

PyConAU update
The word is getting out.
They were initially blocked for a period of time regarding CfP.

Drupal South 2016
Progressing ok, invoices have been generated. On track for their conference.

OSDConf 2015
Post Event report has been received and all books have been closed.
Kathy to draft OSDConf call for bids

GovHack
As Above

JoomlaDay
Nothing to report

DrupalGov
Finalising payment and seeking a final report.

WordCamp Sunshine Coast 2016
In progress.

WordCamp Sydney 2016
Starting to bring sponsorship in. Site structure has been sorted out. Will require a budget.
ACTION: Kathy to touch base with WordCamp Sydney to vote on the budget.

5. Items for noting
Nil

6. Other business
MemberDB

Carried Over from Previous Minutes
Linux Australia as charity funds funnel?
It’s been raised that it would be nice to have Linux Australia (or something else?) as a local, tax deductible vessel via which Australians could donate directly to, for example, the Software Freedom Conservancy.
ACTION: Kathy to reach out to Jon, and try to partner with EFA in regards to Tax Deductible donations. Kathy to respond to the digital rights campaign email.

FOSS As A Service - GNU Social - Moved to Action Items list

New Mailing List Policy - Moved to Action Items list

Proposed Code of Conduct Amendments, by Christopher Neugebauer
https://github.com/linuxaustralia/constitution_and_policies/pull/13 - “removes a "specific problem behaviour" that I believe was added to the code of conduct in jest several years ago, and has made it difficult to have meaningful discussions about the remainder of the provisions of the code of conduct.”
MOTION by Katie to accept the Pull request from Chris N.
Seconded: Craige
Carried with 2 abstentions.

7. Other business carried from previous Council
Request from LUKE JOHN to Officially oppose the TPP
Confirmed. Hugh to circulate wording for a thank-you and announcement.
Motion by Kathy Reid that thanks go to Josh Stewart for his help with putting together the TPP submission.
Seconded Hugh Blemings

8. In Camera
Nil items were discussed in camera

2024hrs close.

June 11, 2016

Cleaning up obsolete config files on Debian and Ubuntu

As part of regular operating system hygiene, I run a cron job which updates package metadata and looks for obsolete packages and configuration files.

While there is already some easily available information on how to purge unneeded or obsolete packages and how to clean up config files properly in maintainer scripts, the guidance on how to delete obsolete config files is not easy to find and somewhat incomplete.
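For what it's worth, a minimal sketch of the kind of hygiene cron job I mean might look like this (my own reconstruction, not the exact script; it assumes a Debian/Ubuntu system, runs as root, and relies on cron mailing any output to the admin):

#!/bin/sh
# Refresh package metadata quietly.
apt-get update -qq
# Packages removed but not purged (dpkg state "rc") still own config files.
dpkg -l | awk '/^rc/ {print $2}'
# Conffiles that dpkg has marked obsolete (the same query whose output appears below).
dpkg-query -W -f='${Conffiles}\n' | grep 'obsolete$'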

These are the obsolete conffiles I started with:

$ dpkg-query -W -f='${Conffiles}\n' | grep 'obsolete$'
 /etc/apparmor.d/abstractions/evince ae2a1e8cf5a7577239e89435a6ceb469 obsolete
 /etc/apparmor.d/tunables/ntpd 5519e4c01535818cb26f2ef9e527f191 obsolete
 /etc/apparmor.d/usr.bin.evince 08a12a7e468e1a70a86555e0070a7167 obsolete
 /etc/apparmor.d/usr.sbin.ntpd a00aa055d1a5feff414bacc89b8c9f6e obsolete
 /etc/bash_completion.d/initramfs-tools 7eeb7184772f3658e7cf446945c096b1 obsolete
 /etc/bash_completion.d/insserv 32975fe14795d6fce1408d5fd22747fd obsolete
 /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf 8df3896101328880517f530c11fff877 obsolete
 /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf d81013f5bfeece9858706aed938e16bb obsolete

To get rid of the /etc/bash_completion.d/ files, I first determined what packages they were registered to:

$ dpkg -S /etc/bash_completion.d/initramfs-tools
initramfs-tools: /etc/bash_completion.d/initramfs-tools
$ dpkg -S /etc/bash_completion.d/insserv
initramfs-tools: /etc/bash_completion.d/insserv

and then followed Paul Wise's instructions:

$ rm /etc/bash_completion.d/initramfs-tools /etc/bash_completion.d/insserv
$ apt install --reinstall initramfs-tools insserv

For some reason that didn't work for the /etc/dbus-1/system.d/ files and I had to purge and reinstall the relevant package:

$ dpkg -S /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf
system-config-printer-common: /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf
$ dpkg -S /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf
system-config-printer-common: /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf

$ apt purge system-config-printer-common
$ apt install system-config-printer

The files in /etc/apparmor.d/ were even more complicated to deal with because purging the packages that they come from didn't help:

$ dpkg -S /etc/apparmor.d/abstractions/evince
evince: /etc/apparmor.d/abstractions/evince
$ apt purge evince
$ dpkg-query -W -f='${Conffiles}\n' | grep 'obsolete$'
 /etc/apparmor.d/abstractions/evince ae2a1e8cf5a7577239e89435a6ceb469 obsolete
 /etc/apparmor.d/usr.bin.evince 08a12a7e468e1a70a86555e0070a7167 obsolete

I was however able to get rid of them by also purging the apparmor profile packages that are installed on my machine:

$ apt purge apparmor-profiles apparmor-profiles-extra evince ntp
$ apt install apparmor-profiles apparmor-profiles-extra evince ntp

Not sure why I had to do this but I suspect that these files used to be shipped by one of the apparmor packages and then eventually migrated to the evince and ntp packages directly and dpkg got confused.

If you're in a similar circumstance, you may want to search on Google for the file you're trying to get rid of; you might end up on http://apt-browse.org/, which could lead you to the old package that used to own this file.

APM:Plane 3.6.0 released

The ArduPilot development team is proud to announce the release of version 3.6.0 of APM:Plane. This is a major update so please read the notes carefully.

The biggest changes in this release are:

  • major update to PX4Firmware code
  • major update to QuadPlane code
  • addition of MAVLink2 support

Updated PX4Firmware

The updated PX4Firmware tree greatly improves support for the new Pixracer boards as well as improving scheduling performance and UAVCAN support. It also adds OneShot support for multirotor motors in QuadPlanes.

QuadPlane Updates

The QuadPlane changes are very extensive in this release. A lot of new features have been added, including:

  • improved automatic weathervaning
  • greatly improved support for mixed fixed wing and VTOL missions
  • automatic RTL with VTOL land
  • VTOL GUIDED mode support
  • greatly improved transition code
  • new tuning system for VTOL motors
  • extensive upgrade to logging system for much better flight analysis

 
The new QuadPlane features are documented at:

http://ardupilot.org/plane/docs/quadplane-support.html

Please read the documentation carefully if you are flying a QuadPlane!

Tiltrotors and Tiltwings

This release has initial support for a variety of tiltrotors and tiltwing configurations. So far testing of these types of aircraft has been limited to simulations and it should be considered very experimental.

MAVLink2 Support

The new MAVLink2 support will allow for greatly expanded MAVLink protocol features in the future, and includes support for signing of MAVLink connections for the first time, making them secure against malicious attacks. I will do a separate blog post on upgrading to MAVLink2 soon. MAVLink1 is still the default for this release, but you can enable MAVLink2 on a per port basis by setting SERIALn_PROTOCOL=2.
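For example, from a ground station console such as MAVProxy, something like this switches the first telemetry port over (assuming your link is on SERIAL1; reboot the autopilot for the change to take effect):

param set SERIAL1_PROTOCOL 2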

Credits

Many thanks to everyone who has contributed to this release. Tom and I have been delighted at the number and quality of contributions across the community, and to the extensive testing and flight logs that have been provided.

We would also like to give a special thanks to UAV Solutions for sponsoring the development of many of the new QuadPlane features in this release and Airphrame for development of the improvements to the landing code. We'd also like to thank uAvionix for providing hardware for testing of ADSB features.

Special thanks also for the great contributions by many long term contributors to the project. Major contributions to this release have been made by:

  • Peter Barker
  • Lucas De Marchi
  • Gustavo Jose de Sousa
  • Leonard Hall
  • Paul Riseborough
  • Francisco Ferreira
  • Michael du Breuil
  • Grant Morphett
  • Michael Oborne
  • The PX4 development team

among many others. Tom and I really appreciate the effort!

Some of you may also have noticed that Tom Pittenger from Airphrame is now co-lead with me on the fixed wing support for ArduPilot. Tom has been a major contributor for a long time and the dev team was delighted to appoint him as co-lead.

Other Changes
Detailed changes in this release include:

  • added motortest for all quad motors in sequence
  • merge upstream PX4Firmware changes
  • new AC_AttitudeControl library from copter for quadplane
  • modified default gains for quadplanes
  • new velocity controller for initial quadplane landing
  • smooth out final descent for VTOL landing
  • changed default loop rate for quadplanes to 300Hz
  • support up to 16 output channels (two via SBUS output only)
  • fixed bug with landing flare for high values of LAND_FLARE_SEC
  • improved crash detection logic
  • added in-flight transmitter tuning
  • fix handling of SET_HOME_POSITION
  • added Q_VFWD_GAIN for forward motor in VTOL modes
  • added Q_WVANE_GAIN for active weathervaning
  • log the number of lost log messages
  • move position update to the 50Hz loop rather than the 10Hz loop
  • Suppress throttle when parachute release initiated, not after release.
  • support Y6 frame class in quadplane
  • log L1 xtrack error integrator and remove extra yaw logging
  • limit roll before calculating load factor
  • simplify landing flare logic
  • smooth-out the end of takeoff pitch by reducing takeoff pitch min via TKOFF_PLIM_SEC
  • added support for DO_VTOL_TRANSITION as a mission item
  • fixed is_flying() for VTOL flight
  • added Q_ENABLE=2 for starting AUTO in VTOL
  • reload airspeed after VTOL landing
  • lower default VTOL ANGLE_MAX to 30 degrees
  • Change mode to RTL on end of mission rather than staying in auto
  • implemented QRTL for quadplane RTL
  • added Q_RTL_MODE parameter for QRTL after RTL approach
  • reduced the rate of EKF and attitude logging to 25Hz
  • added CHUTE_DELAY_MS parameter
  • allow remapping of any input channel to any output channel
  • numerous waf build improvements
  • support fast timer capture for camera trigger feedback
  • numerous improvements for Pixracer support
  • added more general tiltrotor support to SITL
  • only save learned compass offsets when disarmed
  • support MISSION_ITEM_INT for more accurate waypoint positions
  • change parachute deployment altitude to above ground not home
  • added AP_Tuning system for QuadPlane tuning
  • added initial support for tiltrotors and tiltwings
  • added LOG_REPLAY and LOG_DISARMED parameters
  • added Q_GUIDED_MODE parameter
  • major update to QuadPlane documentation
  • added MAVLink2 support
  • fixed origin vs home altitude discrepancy
  • improved Lidar based landing glide slope
  • fixed throttle failsafe with THR_PASS_STAB=1
  • prevent EKF blocking during baro and airspeed cal
  • allow for ground testing of parachutes with CHUTE_MINALT=0
  • fixed elevator stick mixing for above 50% input
  • added QuadPlane ESC calibration

Happy flying!

June 10, 2016

Is Capitalism Collapsing? Random Thoughts, and More

Sounds like a crazy question? Delve into the details and you'll see what the problems are though:- in a previous post we found out that in 14 countries across the world growth was stagnating. I wanted to look deeper into this. The trend is basically across the board all around the world. It's the same phenomenon over and over again across basically all countries with few exceptions (and even

June 09, 2016

libferris 2.0

A new libferris is coming. For a while I've been chipping away at porting libferris and its tree over to using boost instead of the loki and sigc++ libraries. This has been a little difficult in that it is a major undertaking and you need to get it all working or things segv in wonderful ways.

Luckily there are tests for things like stldb4 so I could see that things were in decent shape along the way. I have also started to bring back the dejagnu test suite for libferris into the main tree. This has given me some degree of happiness that libferris is working ok with the new boost port.

As part of that I've been working on allowing libferris to store its settings in a configurable location. It's a chicken and egg problem how to set that configuration, as you need to be able to load a configuration in order to be able to set the setting. At the moment it is using an environment variable. I think I'll expand that to allow a longer list of default locations to be searched. So for example on OSX libferris can check /Applications/libferris.app/whatever as a fallback, so you can just install and run the ferris suite without any need to do more setup than a simple drag and drop.

For those interested, this is all pushed up to github so you can grab and use right now. Once I have expanded the test suite more I will likely make an announced 2.0 release with tarballs and possibly deb/rpm/dmg distributions.

New filesystems that I've had planned are for mounting MQTT, ROS, and YAML.

June 08, 2016

Interning at Ozlabs

I am sadly coming to the end of my six(ish) month internship with Ozlabs (funded by ACS). So here I am writing about my experience in the hopes that future prospective interns can read about how they should come and work with the previously dubbed Linux Gods.

What is your background?

Despite embracing being a nerd at school, my opinion of computers prior to starting my Engineering degree was that they were boring and for geeky boys who didn't want to interact with the 'real' world. However when having to choose a specialisation of Engineering I was drawn towards Computer Systems as everything else seemed obvious * but Computer Systems was this great mystical unknown.

Fast forward three years, and I had seen glimpses into the workings of this magical computer world. I had learnt about transistors, logic gates and opamps; I had designed circuits that actually worked; and I had bashed my head against a wall trying to find obscure bugs. I had dabbled in a range of languages from the low levels of VHDL and embedded C, to the abstract world of Python and Java and delved into the obscure world of declarative prologs and relational reinforcement learning. Now it was time to solidify some of these concepts and get some experience under my belt so I could feel less like a monkey bashing random keys on my keyboard. Enter Ozlabs!

What did you do at Ozlabs?

After being handed a nice laptop and the root passwords, I faced the inevitable battle of getting everything setup. With the help of my mentor, the prestigious Michael Ellerman, and various other Ozlabs residents I picked off some low hanging fruit such as removing unused code and tidying up a few things. This allowed me to get familiar with the open-source workflow, the kernel building process, IRC, do more with Git than just push and pull, and finally come face-to-face with the seemingly impossible: Vim and virtual machines.

I then got to learn about Transactional Memory (TM) - a way of making a bunch of instructions on one processor appear to be one atomic operation to other processors. I took some old TM tests from Mikey and checked that they did indeed pass and fail when they were supposed to and refurbished them a little, learning how to run kernel self-tests and a bit about powerpc assembly along the way.

Eventually my fear of shell scripts was no match for my desire to be able to build and install a kernel with one command and so I finally got around to writing a build script. Accidentally rebooting a bare-metal machine instead of my VM running on it may have had a significant contribution to this...

The next interesting task I got to tackle was to implement a virtual memory dump that other architectures like x86 have, so we can see how the pages in memory are laid out along with information about these pages. This involved understanding x86's implementation and relating that to POWER's memory management. At Uni I never quite understood the fuss about pages and virtual memory, so it was great to be able to build up an appreciation and play around with page tables, virtual to real addresses, and hash tables.

I then moved onto SROP mitigation! After a lot of reading and re-reading, I decided to first understand how to use SROP to make an exploit on POWER which meant some assembly, diving into the signal code and finally meeting and spending time with GDB. Once again I had x86 code to port over to POWER, the main issue being making sure that I didn't break existing things - aka hours and hours of running the kernel self-tests and the Linux Test Project tests and some more scripting, with the help of Chris Smart, to collate the results.

You can judge all my submitted patches here.

What was your overall experience like at Ozlabs?

I moved to Canberra shortly after finishing exams and so hadn't had the time to ponder expectations of Ozlabs. Everyone was super friendly and, despite being not just the only female but the only kiwi among a whoooole lot of Aussies, I experienced a distinct lack of discrimination (apart from a bit of banter about accents).

Could I wear my normal clothes (and not stuffy business clothes)? Check. Did I get to work on interesting things? Check. Could I do my work without having to go through lots of unnecessary hoops and what not? Check. Could I develop my own workflow and learn all the things? Check. Did I get to delve into a few different areas? Check. Was I surrounded by super smart people who were willing to help me learn? Check.

All in all, I have had a great time here, learnt so much and you should definitely come and work at Ozlabs! Hopefully you'll see me back on this blog in a few months :)

* My pre-university, perhaps somewhat naive, opinion: Civil and Mechanical is just physics. Chemical and Materials is just chemistry. Electrical seems interesting but who wants to work with power lines? Biomedical is just math and biology. Software is just abstract high level nonsense. But how a computer works?? That is some magical stuff.

Sysadmin Skills and University Degrees

I think that a major deficiency in Computer Science degrees is the lack of sysadmin training.

Version Control

The first thing that needs to be added is the basics of version control. CVS (which is now regarded as obsolete) was initially released when I was in the first year of university. But SCCS and RCS had been in use for some time. I think that the people who designed my course were remiss in not adding any mention of version control (not even strategies for saving old versions of your work); one could say that they taught us about version control by letting us accidentally delete our assignments. :-#

If a course is aimed at just teaching programmers (as most CS degrees are) then version control for group assignments should be a standard part of the course. Having some marks allocated for the quality of comments in the commit log would also be good.

A modern CS degree should cover distributed version control, that means covering Git as it’s the most popular distributed version control system nowadays.

For people who want to work as sysadmins (as opposed to developers who run their own PCs) a course should have an optional subject for version control of an entire system. That includes tools like etckeeper for version control of system configuration and tools like Puppet for automated configuration and system maintenance.
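As a taste of how low the barrier is, here is a minimal etckeeper sketch (assuming Debian/Ubuntu with the default git backend, run as root):

apt-get install etckeeper             # sets up a git repository in /etc on install
cd /etc && git log --oneline          # browse the history of configuration changes
etckeeper commit "tweak sshd_config"  # record a manual change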

Dependencies

It’s quite reasonable for a CS degree to provide simplified problems for the students to solve so they can concentrate on one task. But in the real world the problems are more complex. One of the more difficult parts of managing real systems is dependencies. You have issues of header files etc at compile time and library versions at deployment. Often you need a program to run on systems with different versions of the OS which means making it compile for both and deal with differences in behaviour.

There are lots of hacky things that people do to deal with dependencies in systems. People link compiled programs statically, install custom versions of interpreters in user home directories or /usr/local for daemons, and do many other things. These things can have bad consequences including data loss, system downtime, and security problems. It’s not always wrong to do such things, but it’s something that should only be done with knowledge of the potential consequences and a plan for mitigating them. A CS degree should teach the potential advantages and disadvantages of these options to allow graduates to make informed decisions.

Backups

I’ve met many people who call themselves computer professionals and think that backups aren’t needed. I’ve seen production systems that were designed in a way that backups were impossible. The lack of backups is a serious problem for the entire industry.

Some lectures about backups could be part of a version control subject in a general CS degree. For a degree that majors in Sysadmin at least one subject about backups is appropriate.

For any backup (even backing up your home PC) you should have offsite backups to deal with fire damage, multiple backups of different ages (especially important now that encryption malware is a serious threat), and a plan for how fast you can restore things.

The most common use of backups is to deal with the case of deleting the wrong file. Unfortunately this case seems to be the most rarely mentioned.

Another common situation that should be covered is a configuration error that results in a system that won’t boot correctly. It’s a very common problem and one that can be solved quickly if you are prepared but which can take a long time if you aren’t.

For a Sysadmin course it is important to cover backups of systems in remote datacenters.

Hardware

A good CS degree should cover the process of selecting suitable hardware. Programmers often get to advise on the hardware used to run their code, especially at smaller companies. Reliability features such as RAID, ECC RAM, and clustering should be covered.

Planning for upgrades is a very important part of this which is usually not taught. Not only do you need to plan for an upgrade without much downtime or cost, but you also need to plan for what upgrades are possible. Will your system require hardware next year that is more powerful than anything you can buy? If so you need to plan for a cluster now.

For a Sysadmin course some training about selecting cloud providers and remote datacenter hosting should be provided. There are many complex issues that determine whether it’s most appropriate to use a cloud service, hosted virtual machines, hosted physical servers managed by the ISP, hosted physical servers purchased by the client, or on-site servers. Often a large system will involve 2 or more of those options, even some small companies use 3 or more of those options to try and provide the performance and reliability they need at a price they can afford.

We Need Sysadmin Degrees

Covering the basic coding skills takes a lot of time. I don’t think we can reasonably expect a CS degree to cover all that and also give good coverage to sysadmin work. While some basic sysadmin skills are needed by every programmer I think we need to have separate majors for people who want a career in system administration.

Sysadmins need some programming skills, but that’s mostly scripting and basic debugging. Someone whose main job is system administration can probably expect to never make any significant change to a program that’s more than 10,000 lines long. A large amount of the programming in a CS degree can be replaced by “file a bug report” for a sysadmin degree.

This doesn’t mean that sysadmins shouldn’t be doing software development or that they aren’t good at it. One noteworthy fact is that it appears that the most common job among developers of the Debian distribution of Linux is System Administration. Developing an OS involves some of the most intensive and demanding programming. But I think that more than a few people who do such work would have skipped a couple of programming subjects in favour of sysadmin subjects if they were given a choice.

Suggestions

Did I miss anything? What other sysadmin skills should be taught in a CS degree?

Do any universities teach these things now? If so, please name them in the comments; it helps people find universities that teach what they want to learn and helps them in their careers.

Simple remote mail queue monitoring

In order to monitor some of the machines I maintain, I rely on a simple email setup using logcheck. Unfortunately that system completely breaks down if mail delivery stops.

This is the simple setup I've come up with to ensure that mail doesn't pile up on the remote machine.

Server setup

The first thing I did on the server-side is to follow Sean Whitton's advice and configure postfix so that it keeps undelivered emails for 10 days (instead of 5 days, the default):

postconf -e maximal_queue_lifetime=10d

Then I created a new user:

adduser mailq-check

with a password straight out of pwgen -s 32.

I gave ssh permission to that user:

adduser mailq-check sshuser

and then authorized my new ssh key (see next section):

sudo -u mailq-check -i
mkdir ~/.ssh/
cat - > ~/.ssh/authorized_keys
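Since this account only ever needs to run mailq, the key could additionally be locked down in ~/.ssh/authorized_keys (my own hardening suggestion, not part of the original setup; the key material is elided):

command="/usr/bin/mailq",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA... egilsstadir-mailq-check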

Laptop setup

On my laptop, the machine from where I monitor the server's mail queue, I first created a new password-less ssh key:

ssh-keygen -t ed25519 -f .ssh/egilsstadir-mailq-check
cat ~/.ssh/egilsstadir-mailq-check.pub

which I then installed on the server.

Then I added this cronjob in /etc/cron.d/egilsstadir-mailq-check:

0 2 * * * francois /usr/bin/ssh -i /home/francois/.ssh/egilsstadir-mailq-check mailq-check@egilsstadir mailq | grep -v "Mail queue is empty"

and that's it. I get a (locally delivered) email whenever the mail queue on the server is non-empty.

There is a race condition built into this setup since it's possible that the server will want to send an email at 2am. However, all that does is send a spurious warning email in that case and so it's a pretty small price to pay for a dirt simple setup that's unlikely to break.

June 07, 2016

It’s OK to be Wrong in Public

I’ve spent a reasonably long time with computers. I’ve been doing something with either software or hardware (mostly software) for pretty close to three quarters of my current lifespan. I started when I was about 10, but (perhaps unsurprisingly) nobody was paying me for my work yet then. Flash forwards a few decades, and I have a gig I rather enjoy with SUSE, working on storage stuff.

OK, “yay Tim”. Enough of the backstory, what’s the point?

The point (if I can ball up my years of experience, and the experience of the world at large), is that, in aggregate, we write better software if we do it in the open. There’s a whole Free Software vs. Open Source thing, and the nuances of that discussion are interesting and definitely important, but to my mind this is all rather less interesting than the mechanics of how F/OSS projects actually work in practice. In particular, given that projects are essentially communities, and communities are made up of individuals, how does an individual join an existing project, and become accepted and confident in that space?

If you’re an individual looking for something to work on, whether or not you think about it in these terms, you’re effectively looking for a community to join. You’re hopefully going to be there for a while.

But you’re one little person, and there’s a big established community that already knows how everything works. Whatever you’re proposing has probably already been thought of by someone else, and your approach is probably wrong. It’s utterly terrifying, especially when anything you push to a git repo or public mailing list will probably be online for the rest of your life.

Fuck that line of thinking. It’s logical reasoning, but it’s utterly unhelpful in terms of joining a project. It might be correct in broad strokes if you squint at it just right, but you’re bringing new eyes to something. You’ll probably see things established community members didn’t, or if not, you’ll be able to help smooth the way for the next newcomer. One of the kinks though is speaking up about $WEIRD_THING_IN_PROJECT. Is it actually broken, or do you just have no idea what’s going on yet because you’re new? Do you speak up? Do you raise a bug? Put in a pull request? Risk shame in public if you’re wrong?

I might be slightly biased. This is either because I’ve been doing this long enough that I no longer suffer too much if someone tells me I’ve made a mistake (I’ve made lots of them, and hopefully learned from all of them), or it’s because I’m a scary looking white dude, dripping with privilege. Probably it’s a mix of both, but the most important thing I think I ever learned is that it’s OK to be wrong in public. If in doubt, you should:

  • Listen for long enough to get a feel for the mailing list (or forum, or whatever).
  • Ask the question you think is stupid.
  • Submit the pull request you hope is helpful, but are actually sure is incomplete or inadequate.
  • Propose the new architecture you’re certain will be shot down.
  • Don’t take it personally if you do get shot down. This can be a crawling horror of difficulty, and only goes away with either arrogance or time (hopefully time, which I’m assured will eventually compost into wisdom).

If you don’t get a helpful answer to the stupid question, if you don’t get constructive feedback for the pull request or new architecture, if some asshole does shoot you down, this is not the project or community for you.

If someone helps you, you might have found something worth pursuing. If that pans out, keep asking stupid questions, and keep submitting pull requests you’re worried about. You’ll learn something, and so will everyone else, and the world will eventually be a better place.

Using the Atom editor for Linux kernel development

Atom is a text editor. It's new, it's shiny, and it has a lot of good and bad sides. I work in a lab full of kernel developers, and in the kernel, there are no IDEs. There's no real metadata you can get out of your compiler (given the kernel isn't very clang friendly), there's certainly nothing like that you can get out of your build system, so "plain old" text editors reign supreme. It's a vim or Emacs show.

And so Atom comes along. Unlike other shiny new text editors to emerge in the past 10 or so years, it's open source (unlike Sublime Text), it works well on Linux, and it's very configurable. When it first came out, Atom was an absolute mess. There was a noticeable delay whenever you typed a key. That has gone, but the sour impression that comes from replacing a native application with a web browser in a frame remains.

Like the curious person I am, I'm always trying out new things to see if they're any good. I'm not particularly tied to any editor; I prefer modal editing, but I'm no vim wizard. I eventually settled on using Emacs with evil-mode (which I assumed would make both Emacs and vim people like me, but the opposite happened), which was decent. It was configurable, it was good, but it had issues.

The Atom editor

So, let's have a look at how Atom stacks up for low-level work. First of all, it's X only. You wouldn't use it to change one line of a file in /etc/, and a lot of kernel developers only edit code inside a terminal emulator. Most vim people do this since gvim is a bit wonky, and Emacs people can double-dip: using Emacs without X for small things and Emacs with X for programming. You don't want to do that with Atom, if nothing else because of its slow startup time.

Now let's look at configurability. In my opinion, no editor will ever match the level of configurability of Emacs, however the barrier to entry is much lower here. Atom has lots of options exposed in a config file, and you can set them there or you can use an equivalent GUI. In addition, a perk of being a browser in a frame is that you can customise a lot of UI things with CSS, for those inclined. Overall, I'd say Emacs > Atom > vim here, but for a newbie, it's probably Atom > Emacs > vim.

Okay, package management. Atom is the clear winner here. The package repository is very easy to use, for users and developers. I wrote my own package, typed apm publish and within a minute a friend could install it. For kernel development though, you don't really need to install anything, Atom is pretty batteries-included. This includes good syntax highlighting, ctags support, and a few themes. In this respect, Atom feels like an editor that was created this century.

Atom's inbuilt package management

What about actually editing text? Well, I only use modal editing, and Atom is very far from being the best vim. I think evil-mode in Emacs is the best vim, followed closely by vim itself. Atom has a vim-mode, and it's fine for insert/normal/visual mode, but anything involving a : is a no-go. There's a plugin, but it's entirely useless. If I tried to do a replacement with :s, Atom would lock up and fail to replace the text. vim replaced thousands of occurrences in a second. Other than that, Atom's pretty good. I can move around pretty much just as well as I could in vim or Emacs, but not quite. Also, it supports ligatures! The first kernel-usable editor that does.

Autocompletions feel very good in Atom. It completes within a local scope automatically, without any knowledge of the type of file you're working on. As far as intelligence goes, Atom's support for tags outside of ctags is very lacking, and ctags is stupid. Go-to definition sometimes works, but it lags when dealing with something as big as the Linux kernel. Return-from definition is very good, though. Another downside is that it can complete from any open buffer, which is a huge problem if you're writing Rust in one tab and C in the other.

Atom's fuzzy file matching is pretty good

An experience I've had with Atom that I haven't had with other editors is actually writing a plugin. It was really easy, mostly because I stole a lot of it from an existing plugin, but it was easy. I wrote a syntax highlighting package for POWER assembly, which was much more fighting with regular expressions than it was fighting with anything in Atom. Once I had it working, it was very easy to publish; just push to GitHub and run a command.

Sometimes, Atom can get too clever for its own good. For some completely insane reason, it automatically "fixes" whitespace in every file you open, leading to a huge amount of git changes you didn't intend. That's easy to disable, but I don't want my editor doing that, it'd be much better if it highlighted whitespace it didn't like by default, like you can get vim and Emacs to do. For an editor designed around git, I can't comprehend that decision.
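If you would rather not fight with the settings, one blunt workaround (my suggestion, not from the original post) is to switch the bundled whitespace package off entirely, which disables all of its behaviour rather than just the unwanted fixing:

apm disable whitespace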

Atom can also fuzzy match its commands

Speaking of git, the editor pretty much has everything you'd expect for an editor written at GitHub. The sidebar shows you what lines you've added, removed and modified, and the gutter shows you what branch you're on and how much you've changed all-up. There's no in-built support for doing git things inside the editor, but there's a package for it. It's pretty nice to get something "for free" that you'd have to tinker with in other editors.

Overall, Atom has come a long way and still has a long way to go. I've been using it for a few weeks and I'll continue to use it. I'll encourage new developers to use it, but it needs to be better for experienced programmers who are used to their current workflow to consider switching. If you're in the market for a new editor, Atom might just be for you.

June 06, 2016

Software Carpentry

Today I taught my first Software Carpentry talk, specifically the Intro to Shell. By most accounts it went well.

After going through the course today I think I’ve spotted two issues that I’ll try to fix upstream.

Firstly, command substitution is a concept that is covered, and used incorrectly IMO. Command substitution is fine when you know you’re only going to get back one value, e.g. running an identify on an image to get its dimensions. But when you’re getting back an arbitrarily long list of files, your only option is to use xargs. Using xargs also means that we can drop another concept to teach.
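To illustrate the distinction (my own example, not the lesson’s): command substitution suits a single value, while piping into xargs handles an arbitrarily long list:

$ dims=$(identify -format '%wx%h' photo.jpg)   # one value: fine
$ find . -name '*.txt' | xargs wc -l           # many files: use xargs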

The other thing that isn’t covered, but I think should be, is reverse i-search of the history buffer; it’s something I use in my day-to-day use of the shell, not quite as much as tab completion, but it’s certainly up there.
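For anyone who hasn’t met it: press Ctrl-R at the prompt, type a fragment of an earlier command, and the shell digs it out of the history. The command shown here is just my illustration:

(reverse-i-search)`sort': sort -n lengths.txt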

A third, minor issue that I need to check: I don’t think brace expansion was shown in the loop example. I think this should be added, as the example I ended up using showed looping over strings, numbers and file globs, which is everything you ever really end up using.
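A brace expansion loop is short enough to show here (my sketch, not the lesson’s current example):

$ for f in file{1..3}.txt; do echo "$f"; done
file1.txt
file2.txt
file3.txt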

Software Carpentry uses different coloured sticky notes attached to learners laptops to indicate how they’re going. It’s really useful as a presenter out the front, if there’s a sea of green you’re good to go, if there are a few reds with helpers you’re probably OK to continue, but if there’s too many reds, it’s time to stop and fix the problem. At the end of the session we ask people to give feedback, here for posterity:

Red (bad):

  • Course really should be called Intro to Unix rather than bash
  • use of microphone might be good (difficult to hear, especially when helpers answer questions around)
  • Could have provided an intro into why  unix is advantageous over other programs
  • grep(?) got a bit complicated, could have explained more
  • start session with overview to set context eg. a graphic
  • why does unix shell suck so much, I blame you personally

Orange(not so bad):

  • maybe use the example data a bit more

Green(good):

  • patient, very knowledgeable
  • really knew his stuff
  • information generally easy to follow. good pacing overall good
  • good. referred to help files, real world this as go to for finding stuff out (mistranscribed i’m sure)
  •  good pace, good basis knowledge is taught


June 04, 2016

LUV Beginners June Meeting: MPE on SIMH / Programming with Python

Jun 18 2016, 12:30 - 16:30

Location: Infoxchange, 33 Elizabeth St. Richmond

Please note change of topic

Rodney Brown will demonstrate the HP minicomputer OS "Multi-Programming Executive" running on the SIMH emulator.

Andrew Pam will lead a hands-on Python programming class using the learnpython.org website. Suitable for people with no programming skills or with existing skills in other languages. Bring your own laptop or use the desktop machines on site.


The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)

Late arrivals, please call (0490) 049 589 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


June 01, 2016

I Just Ordered a Nexus 6P

Last year I wrote a long-term review of Android phones [1]. I noted that my Galaxy Note 3 only needed to last another 4 months to be the longest I’ve been happily using a phone.

Last month (just over 7 months after writing that) I fell on my Note 3 and cracked the screen. The Armourdillo case is good for protecting the phone [2] so it would have been fine if I had just dropped it. But I fell with the phone in my hand, the phone landed face down, and about half my body weight ended up in the middle of the phone, which apparently bent it enough to crack the screen. As a result of this the GPS seems to be less reliable than it used to be, so there might be some damage to the antenna too.

I was quoted $149 to repair the screen; I could possibly have found a cheaper quote if I had shopped around, but it was a good starting point for comparison. The Note 3 originally cost $550 including postage in 2014. A new Note 4 costs $550 + postage now from Shopping Square and a new Note 3 is on ebay with a buy it now price of $380 with free postage.

It seems like bad value to pay 40% of the price of a new Note 3 or 25% of the price of a Note 4 to fix my old phone (which is a little worn and has some other minor issues). So I decided to spend a bit more and have a better phone and give my old phone to one of my relatives who doesn’t mind having a cracked screen.

I really like the S-Pen stylus on the Samsung Galaxy Note series of phones and tablets. I also like having a hardware home button and separate screen space reserved for the settings and back buttons. The downsides to the Note series are that they are getting really expensive nowadays and the support for new OS updates (and presumably security fixes) is lacking. So when Kogan offered a good price on a Nexus 6P [3] with 64G of storage I ordered one. I’m going to give the Note 3 to my father, he wants a phone with a bigger screen and a stylus and isn’t worried about cracks in the screen.

I previously wrote about Android device service life [4]. My main conclusion in that post was that storage space is a major factor limiting service life. I hope that 64G in the Nexus 6P will solve that problem, giving me 3 years of use and making it useful to my relatives afterwards. Currently I have 32G of storage of which about 8G is used by my music video collection and about 3G is free, so 64G should last me for a long time. Having only 3G of RAM might be a problem, but I’m thinking of trying CyanogenMod again so maybe with root access I can reduce the amount of RAM use.

May 31, 2016

Speaking in June 2016

I have a few upcoming speaking engagements in June 2016:

  • Nerdear.la – June 9-10 2016 – Buenos Aires, Argentina – never been to this event but MariaDB Corporation are sponsors and I’m quite excited to be back in Buenos Aires. I’m going to talk about the MySQL ecosystem in 2016.
  • SouthEast LinuxFest – June 10-12 2016 – Charlotte, NC, USA – I have a few talks here, a bit bummed that I’m going to be missing the speaker dinner, but I expect this to be another great year. Learn about MariaDB Server/MySQL Security Essentials, the MySQL ecosystem in 2016, and about distributions from the view of a packager.
  • NYC MySQL Meetup – June 27 2016 – New York, USA – I’m going to give a talk on lessons you can learn from other people’s database failures. I did this at rootconf.in, and it was well received so I’m quite excited to give this again.
  • Community Open House for MongoDB – June 30 2016 – New York, USA – I’m going to give my first MongoDB talk at the Community Open House for MongoDB – My First Moments with MongoDB, from the view of someone who’s been using MySQL for a very long time.

So if you’re in Buenos Aires, Charlotte or New York, I’m looking forward to seeing you to talk all things databases and open source.

May 30, 2016

Is Western Leadership Required? More Social Systems, and More

Western Leadership stuff: From time to time you hear things about Western leadership being 'required'. Some of it sounds borderline authoritarian/dictatorial at times. I wanted to take a look further at this: - examples of such quotes include "Instead, America should write the rules. America should call the shots. Other countries should play by the rules that America and our partners set, and

plsh2

PL/sh is a nice extension to PostgreSQL that allows writing stored procedures in an interpreted language, e.g. bash, python, perl, php, etc.

I found it useful, though it has a major drawback: the amount of data you can pass via the arguments of such procedures may hit command line limitations, i.e. no more than 254 spaces and no more than 2MB (or even less).

So I have made a change: the value of the first argument is passed via stdin to the script implementing the stored procedure, while the rest of the arguments are passed as $1, $2, $3, etc. This change allows the above-mentioned limitations to be overcome when a large amount of data is passed via one parameter.

Here is a tiny example I have added to the test suite demonstrating the new functionality:


CREATE FUNCTION perl_concat2(text, text) RETURNS text LANGUAGE plsh2 AS '
#!/usr/bin/perl
print while (<STDIN>);
print $ARGV[0];
';
SELECT perl_concat2('pe', 'rl');
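Since the script prints stdin (the first argument, passed the new way) followed by $ARGV[0] (the second), the SELECT should return:

 perl_concat2
--------------
 perl
(1 row)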

You may get the modified PL/sh from my repository on GitHub: github.com/Maxime2/plsh. It has been implemented as a new procedural language, plsh2, so you do not need to change anything in already created procedures/functions using plsh (and you can continue to use it as before).

How to find out which process is listening on a port

Say that you notice UDP port 323 is open (perhaps via netstat -lun) and you’ve no idea what that is!

With lsof it’s easy to find out which process is guilty:


[15:27 chris ~]$ sudo lsof -i :323
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
chronyd 1044 chrony 1u IPv4 19197 0t0 UDP localhost:323
chronyd 1044 chrony 2u IPv6 19198 0t0 UDP localhost:323

In this case, it’s chrony, the modern time keeping daemon.

As John pointed out in the comments, you can also use netstat with the -p flag.

For example, show all listening sockets (-l) for both TCP (-t) and UDP (-u), with numeric ports (-n) and the owning process (-p), then grep for port 323 to see what’s running:

[19:08 chris ~]$ sudo netstat -lutnp |grep 323
udp 0 0 127.0.0.1:323 0.0.0.0:* 1030/chronyd
udp6 0 0 ::1:323 :::* 1030/chronyd
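
If you want the same answer programmatically, here is a minimal Python sketch using the third-party psutil library (the hard-coded port is my choice for this example; as with lsof and netstat, you generally need root to see sockets owned by other users):

import psutil  # third-party: pip install psutil

PORT = 323

# walk every TCP/UDP socket on the machine and report who owns the port
for conn in psutil.net_connections(kind='inet'):
    if conn.laddr and conn.laddr[1] == PORT and conn.pid:
        proc = psutil.Process(conn.pid)
        print(proc.name(), conn.pid, conn.laddr)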

Codec 2 Masking Model Part 5

In the last post in this series I was getting close to a fully quantised 700 bit/s codec. However as I pushed through I discovered a bug in the post-filter. I was accidentally cheating and using some of the encoder information in the decoder. When I corrected the bug the quality dropped significantly. I’ve hit these sorts of bugs before – the simulation code is complex and it’s easy to “declare victory” prematurely.

So I have abandoned the AbyS approach for now. Oh well, that’s “research and disappointment” for you. Plenty of new ideas though….

For the last few months I have been working on another solution that vector quantises a “fixed rate” version of the spectrum. The masking functions are still used to smooth the spectrum before sampling at the fixed rate. Much like we low pass filter time domain samples before sampling, the masking functions reduce the “bandwidth” and hence sample “rate” we need to represent the spectrum. Here is a block diagram of the current “700C” candidate codec:

The bit allocation is pitch (Wo) 6 bits, 1 bit for voicing, 16 bits for the amplitude VQ, 4 bits for energy and 1 bit spare, all updated every 40ms. As a sanity check: 6 + 1 + 16 + 4 + 1 = 28 bits per 40 ms frame, and 28/0.04 = 700 bit/s. The new work is in the “Decimate in Frequency” block, expanded here:

As the pitch of the speech varies, the number of harmonics used to represent the speech, L, varies. The goal is to take a vector of L amplitude samples, vector quantise it, and send it over the channel. To vector quantise we need fixed-length vectors, so a Discrete Fourier Transform (DFT) is used to resample the L amplitude samples to fixed vectors of length 2k=20 (I have chosen k=10).

BTW a DFT is the generic form of a Fast Fourier Transform (FFT); an FFT is just a computationally efficient (fast) way of computing a DFT.
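
To make the “Decimate in Frequency” step concrete, here is a rough Python/numpy sketch of the idea as I read it. The function name and the normalisation are mine, and the real Octave code also applies the masking function and the cyclic suffix (described below) first:

import numpy as np

def decimate_in_frequency(amps, k=10):
    # amps: the L harmonic amplitude samples for one frame (L varies with pitch)
    amps = np.asarray(amps, dtype=float)
    # the DC term (frame energy) is quantised separately, so remove the mean
    amps = amps - amps.mean()
    coeffs = np.fft.fft(amps) / len(amps)   # DFT of the amplitude "signal"
    low = coeffs[:k]                         # keep the k lowest-frequency bins
    # stack real and imaginary parts into a fixed 2k=20 element real
    # vector, ready for the vector quantiser
    return np.concatenate([low.real, low.imag])

Whatever L is for the frame, the output is always 20 numbers, which is what the vector quantiser needs.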

The steps are similar to sampling a time domain signal. The bandwidth of the signal is limited by using the masking function to smooth the variations in the amplitude envelope. The use of masking functions means the smoothing matches the response of the ear, and no perceptually important information is lost.

I’ve recently been playing with OFDM modems, so I used a “cyclic suffix” to further smooth the DFT coefficients. DFTs like cyclic signals. If you have a DFT of an 8kHz signal, the sample at 3900 Hz is “close” to the sample at 0 Hz. If there is a step jump in amplitude you get a lot of high-frequency information in the DFT coefficients, which is harder to quantise. So I throw away the last 500Hz of the speech signal (3500-4000 Hz), and replace it with a curve that ensures a smooth match between 3500 Hz and 0 Hz.
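
As a sketch of the cyclic suffix idea, assuming a simple linear ramp for the replacement curve (the exact curve used in the codec may differ):

import numpy as np

def add_cyclic_suffix(env):
    # env: amplitude envelope samples spanning 0 to 4000 Hz
    env = np.asarray(env, dtype=float)
    n = len(env)
    keep = n - n // 8                 # keep 0-3500 Hz, drop the top 500 Hz
    # replace the discarded band with a ramp back to the 0 Hz sample so
    # the envelope wraps smoothly and its DFT has little HF energy
    suffix = np.linspace(env[keep - 1], env[0], n - keep)
    return np.concatenate([env[:keep], suffix])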

Yeah, I don’t know how I dream this stuff up either… do I use the Force? Too much red wine or espresso? Experience? A life misspent on computers? Subconscious innovation? Plagiarism?

In the past I’ve tried to resample and VQ the spectrum of sinusoidal codecs a few times, without much success. Jean Marc also suggested something similar a few posts back. Anyhoo, getting somewhere this time around.

Here are some plots that show the algorithm in action for a frame of female speech:

Here are the amplitude samples (red crosses). The blue line has the cyclic suffix, note how it meets the first amplitude sample near 0Hz.

This figure shows the difference in the DFT coefficients with (blue) and without (green) the cyclic suffix:

Here is the cumulative energy of DFT coefficients, note that with the cyclic suffix (blue) low frequency energy dominates:

This figure shows a typical 2k=20 length vector that we vector quantise. Note it has zero mean – we extract the DC coefficient and separately quantise this as the frame energy.

Samples

Sample 1300 700C Candidate
hts1a Listen Listen
hts2a Listen Listen
forig Listen Listen
ve9qrp_10s Listen Listen
mmt1 Listen Listen
vk5qi Listen Listen
cq_ref Listen Listen

Through a couple of years of on-air operation we have established that the 1300 bit/s codec (as used in FreeDV 1600 with 300 bit/s of FEC) has acceptable speech quality for HF. So the goal of this work is similar quality at 700 bit/s.

For some samples above (e.g. hts1a and mmt1), 1300 is superior to the current 700C candidate. For others (e.g. hts2a and vk5qi) 700 sounds a little better. So I think I’m in the ball park.

There’s a bit of clipping at the start of cq_ref, and some level variations between the two modes on some samples. The 700C candidate has a few problems with unvoiced sounds, e.g. the intake of breath on ve9qrp_10s, and the “ch” sound at the start of “chicken” in hts2a. Not sure why.

The cq_ref_1300 sample is a bit poor as the LPC technique used for spectral amplitudes falls over when the spectral dynamic range is high. In this sample the LF part of the spectrum has much higher energy than the HF, i.e. a strong “Low Pass Filter” effect or spectral slope.

Next step is some refactoring – the Octave code is an untidy mess of 6 months of dead ends and false starts. A mirror of real world R&D I guess. Creating something new is not a tidy process. At least in my head. So many aspects of this algorithm that I could explore but I’d rather get this on the air and see if we really have something here. Would love to have some help with a port from Octave to C. Contact me if you’d like to work in this area.

May 27, 2016

Oryx and Crake




ISBN: 9780385721677
LibraryThing
I bought this book ages ago, on the recommendation of a friend (I don't remember who), but I only just got around to reading it. It's a hard book to read in places -- it's not hopeful, or particularly fun, and it's confronting in places -- especially the plot that revolves around child exploitation. There's very little to like about the future society that Atwood posits here, but perhaps that's the point.

Despite not being a happy fun story, the book made me think about things like genetic engineering in a way I didn't before and I think that's what Atwood was seeking to achieve. So I'd have to describe the book as a success.

Tags for this post: book margaret_atwood apocalypse genetic_engineering
Related posts: The Exterminator's Want Ad; Cyteen: The Vindication; East of the Sun, West of the Moon; The White Dragon; Runner; Cyteen: The Betrayal

Using OpenVPN on iOS and OSX

I have written instructions on how to connect to your own OpenVPN server using Network Manager as well as Android.

Here is how to do it on iOS and OSX assuming you have followed my instructions for the server setup.

Generate new keys

From the easy-rsa directory you created while generating the server keys, create a new keypair for your phone:

./build-key iphone   # "iphone" as Name, no password

and for your laptop:

./build-key osx      # "osx" as Name, no password

Using OpenVPN Connect on iOS

The app you need to install from the App Store is OpenVPN Connect.

Once it's installed, connect your phone to your computer and transfer the following files using iTunes:

  • ca.crt
  • iphone.crt
  • iphone.key
  • iphone.ovpn
  • ta.key

You should then be able to select it after launching the app. See the official FAQ if you run into any problems.

iphone.ovpn is a configuration file that you need to supply since the OpenVPN Connect app doesn't have a configuration interface. You can use this script to generate it or write it from scratch using this template.

On Linux, you can also create a configuration file using Network Manager 1.2 with the following command:

nmcli connection export hafnarfjordur > iphone.ovpn

though that didn't quite work in my experience.

Here is the config I successfully used to connect to my server:

client
remote hafnarfjordur.fmarier.org 1194
ca ca.crt
cert iphone.crt
key iphone.key
cipher AES-256-CBC
auth SHA384
comp-lzo yes
proto udp
tls-remote server
remote-cert-tls server
ns-cert-type server
tls-auth ta.key 1
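
If you would rather generate that file than write it by hand, here is a tiny Python stand-in for the generation script linked above (the function and its parameters are my own invention; it simply writes out the config shown, parameterised by hostname and key name):

TEMPLATE = """\
client
remote {host} 1194
ca ca.crt
cert {name}.crt
key {name}.key
cipher AES-256-CBC
auth SHA384
comp-lzo yes
proto udp
tls-remote server
remote-cert-tls server
ns-cert-type server
tls-auth ta.key 1
"""

def write_ovpn(host, name):
    # writes e.g. iphone.ovpn next to the key material
    with open(name + ".ovpn", "w") as f:
        f.write(TEMPLATE.format(host=host, name=name))

write_ovpn("hafnarfjordur.fmarier.org", "iphone")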

Using Viscosity on Mac OSX

One of the possible OpenVPN clients you can use on OSX is Viscosity.

Here are the settings you'll need to change when setting up a new VPN connection:

  • General
    • Remote server: hafnarfjordur.fmarier.org
  • Authentication
    • Type: SSL/TLS client
    • CA: ca.crt
    • Cert: osx.crt
    • Key: osx.key
    • Tls-Auth: ta.key
    • direction: 1
  • Options
    • peer certificate: require server nsCertType
    • compression: turn LZO on
  • Networking
    • send all traffic on VPN
  • Advanced
    • add the following extra OpenVPN configuration commands:

      cipher AES-256-CBC
      auth SHA384
      

Adventures in Container Sharding – SQLite performance problem and the pivot point.

Hey world, it’s been a while; turns out I’m not much of a blogger. But I know how useful it is for me to do write-ups occasionally so I can actually find things later. Having said that: in my last post I mentioned I was an OpenStack developer… and it’s still true. I spend my time hacking on OpenStack Swift, the awesome open source object storage cluster.

One thing I’ve been trying to tackle recently is container sharding in Swift. I won’t go into full details as there is a relatively recent Swift spec, and I’ve also given a high-level talk on it at LCA in Geelong.

The tl;dr: Swift accounts and containers (the metadata layer of Swift) are SQLite databases that get treated like objects themselves and replicated throughout the cluster. This works amazingly well, until you add millions and millions of objects to a container. (What I’m talking about here is container-level object metadata, not the objects themselves.) At that point SQLite, being a single file, starts to have latency and locking issues, as one would expect. The solution is to shard these container databases throughout the cluster, which is what I’ve been working on.

At the last OpenStack summit in Austin, the awesome people at SwiftStack, with whom I work quite closely in the community, gave me a container database they had generated with 700,000,000 objects in it (again, just metadata). The SQLite file is about 105G, so not small. Plugging it into the small cluster I use to test my sharding implementation has been interesting, to say the least.

When sharding a container down we have a simple idea: split it in half. That is to say, find someplace in the object table to pivot on. We can then keep pivoting, giving us a list of ranges (which can be treated as a binary tree). The problem is finding the pivot point. In all my testing up till now I had what I thought was the perfect and simple way:

SELECT name
FROM object
WHERE deleted=0 ORDER BY name LIMIT 1 OFFSET (
SELECT object_count / 2
FROM policy_stat);
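-- i.e. the name of the live row that sits object_count/2 rows into the index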

This did amazingly well in all my tests… but I obviously never tested big enough. This simple SQL statement would do plenty well if sharding in Swift were turned on from day dot. However, the plan is to add sharding to Swift, once this POC has proven it, as a beta feature that can be turned on container by container; after it graduates from beta it remains a switch, until finally, once we are confident in its ability, it is on permanently. In the latter case containers would never get big enough to worry about. In the earlier stages, however, a user would only turn it on when a container is already _very_ slow.

Running the pivot SQL statement on the large container I was given ground to a halt. I’m sure it would have come back eventually, but I got tired of waiting after what seemed like ages… there has to be a better way.

Turns out that OFFSET in SQLite, even when hitting an index, still does a scan to find the offset, which is slow once you get to a very large table. Under the hood an index is stored as a B-tree, and since it doesn’t keep row counts per subtree there is no way to jump straight to the Nth entry; I was struggling to think of a good way to find a good enough middle value that didn’t involve some scanning. You can see from the SQL statement that we know how many objects are in the container, but because Swift is eventually consistent we also need to temporarily store rows for objects that have been deleted. So randomly picking a row offset doesn’t help, and the rows won’t necessarily be in name order.

So on really large containers OFFSET needs to be thrown out the window. Fortunately the sharding implementation can deal with shrinking the number of shards, merging smaller ranges together, not just growing/splitting. This means we don’t need to be exact, and we don’t need to split on an existing object, just on a name that falls somewhere in the middle; so long as it cuts the large container down, it’s good enough. So what can we do?

Turns out there is something SQLite can do really quickly: because an index is ordered, going straight to its first or last element is cheap. So that’s what I’ve done:

SELECT min(name) as name FROM object WHERE deleted = 0;
SELECT max(name) as name FROM object WHERE deleted = 0;
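-- each statement walks straight to one end of the existing index, so both are fast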

These two statements are blindingly fast because we already have a compound index on name and deleted (for cleaning up). Note however that they have to be run as two separate commands; combine the two into one and you lose the optimisation and end up scanning all elements. Having the min and max name is a good start, and even when dealing with already-sharded containers they are just smaller ranges, so this still works. The question is: now what?

In a perfect world we have an even distribution of objects between the min and max names, so we just need to find a middle name between the two to pivot on. Even in an unevenly distributed container we will still be shrinking it, in the worst case by only a few objects, but those lopsided shards will be cleaned up later (merged into a neighbour range by the implementation). So long as the container keeps getting smaller, eventually it will shrink small enough to be usable.

The next step is finding the middle value; to do this I just wrote some Python:

from itertools import izip_longest
import sys

lower = unicode(sys.argv[1])
upper = unicode(sys.argv[2])

def middle_str(str1, str2):
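    # average the two strings' code points position by position; any
    # shared prefix averages to itself, so it is preserved automatically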
    result = []
    for l, u in izip_longest(map(ord, str1), map(ord, str2), fillvalue=0):
        result.append((l + u) // 2)
    return u''.join(map(unichr, result))

if __name__ == "__main__":
    print(middle_str(lower, upper))

What does it do? Calling middle_str(min, max) takes the unicode versions of the two strings, turns them into their integer code points, finds the middle of each pair, and turns the result back into a word, preserving any shared prefix along the way. So:

$ python middle_str.py 'aaaaaaaaa' 'zzzzzzzzz'
mmmmmmmmm

$ python middle_str.py 'aaaaaaaaa' 'aazzzzzzz'
aammmmmmm

$ python middle_str.py 'DFasjiojsaoi' 'ZZsdkmfi084f'
OPjkjkjiQLQg

I am now plugging this into my implementation; let’s tackle that large container again.

May 26, 2016

More Social and Economic Systems, Music Stuff, and More

- modern economics just gibberish. St Louis Federal Reserve Database. Unemployment data distorted. One hour at soup kitchen (even volunteering) is 'employed'? International rule not Western centric. This rule (and others) developed in last 20 years. US still in sustained slump? Shadow stats. Neo-liberalism assumes that private sector will allocate more effectively? Using margin to control and

May 23, 2016

Third party testing with Turbo-Hipster

Why is this hipster voting on my code?!

Soon you are going to see a new robot barista leaving comments on Nova code reviews. He is obsessed with espresso, that band you haven’t heard of yet, and easing the life of OpenStack operators.

Doing a large OpenStack deployment has always been hard when it comes to database migrations. Running a migration requires downtime, and when you have giant datasets that downtime could be hours. To help catch these issues, Turbo-Hipster (http://josh.people.rcbops.com/2013/09/building-a-zuul-worker/) will now run your patchset’s migrations against copies of real databases. This will give you valuable feedback on the success of the patch, and on how long it might take to migrate.

Depending on the results, Turbo-Hipster will add a review to your patchset that looks something like this:

Example turbo-hipster post

What should I do if Turbo-Hipster fails?

That depends on why it has failed. Here are some scenarios and steps you can take for different errors:

FAILURE – Did not find the end of a migration after a start

  • If you look at the log you should find that a migration began but never finished. Hopefully there’ll be a traceback for you to follow through to get some hints about why it failed.

WARNING – Migration %s took too long

  • In this case your migration took a long time to run against one of our test datasets. You should reconsider what operations your migration is performing and see if there are any optimisations you can make, or if each step is really necessary. If there is no way to speed up your migration you can email us at rcbau@rcbops.com for an exception.

FAILURE – Final schema version does not match expectation

  • Somewhere along the line the migrations stopped and did not reach the expected version. The datasets start at previous releases and have to upgrade all the way through. If you see this, inspect the log for tracebacks or other hints about the failure.

FAILURE – Could not setup seed database. FAILURE – Could not find seed database.

  • These two are internal errors. If you see either of these, contact us at rcbau@rcbops.com to let us know so we can fix and rerun the tests for you.

FAILURE – Could not import required module.

  • This error probably shouldn’t happen as Jenkins should catch it in the unit tests before Turbo-Hipster launches. If you see this, please contact us at rcbau@rcbops.com and let us know.

If you receive an error that you think is a false positive, leave a comment on the review with the sole contents of “recheck migrations”.

If you see any false positives, or have any questions or problems, please contact us at rcbau@rcbops.com.

git.openstack.org adventures

Over the past few months I started to notice occasional issues when cloning repositories (particularly nova) from git.openstack.org.

It would fail with something like

git clone -vvv git://git.openstack.org/openstack/nova .
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

The problem would occur sporadically during our 3rd party CI runs causing them to fail. Initially these went somewhat ignored as rechecks on the jobs would succeed and the world would be shiny again. However, as they became more prominent the issue needed to be addressed.

When a patch merges in gerrit it is replicated out to 5 different cgit backends (git0[1-5].openstack.org). These are then balanced by two HAProxy frontends which are on a simple DNS round-robin.

                          +-------------------+
                          | git.openstack.org |
                          |    (DNS Lookup)   |
                          +--+-------------+--+
                             |             |
                    +--------+             +--------+
                    |           A records           |
+-------------------v----+                    +-----v------------------+
| git-fe01.openstack.org |                    | git-fe02.openstack.org |
|   (HAProxy frontend)   |                    |   (HAProxy frontend)   |
+-----------+------------+                    +------------+-----------+
            |                                              |
            +-----+                                    +---+
                  |                                    |
            +-----v------------------------------------v-----+
            |    +---------------------+  (source algorithm) |
            |    | git01.openstack.org |                     |
            |    |   +---------------------+                 |
            |    +---| git02.openstack.org |                 |
            |        |   +---------------------+             |
            |        +---| git03.openstack.org |             |
            |            |   +---------------------+         |
            |            +---| git04.openstack.org |         |
            |                |   +---------------------+     |
            |                +---| git05.openstack.org |     |
            |                    |  (HAProxy backend)  |     |
            |                    +---------------------+     |
            +------------------------------------------------+

Reproducing the problem was difficult. At first I was unable to reproduce locally, or even on an isolated turbo-hipster run. Since the problem appeared to be specific to our 3rd party tests (little evidence of it in 1st party runs) I started by adding extra debugging output to git.

We were originally cloning repositories via the git:// protocol. The debugging information was unfortunately limited and provided no useful diagnosis. Switching to https allowed for more curl output (when using GIT_CURL_VERBOSE=1 and GIT_TRACE=1), but this in itself just created noise. It actually took me a few days to remember that the servers are running arbitrary code anyway (a side effect of testing) and therefore cloning via the potentially insecure http protocol didn’t add any further risk.

Over http we got a little more information, but still nothing that was conclusive at this point:

git clone -vvv http://git.openstack.org/openstack/nova .

error: RPC failed; result=18, HTTP code = 200
fatal: The remote end hung up unexpectedly
fatal: protocol error: bad pack header

After a bit it became more apparent that the problems occurred mostly during high (patch) traffic times, that is, when a lot of tests need to be queued. This led me to think that either the network turbo-hipster was on was flaky when doing multiple git clones in parallel, or the git servers were flaky. The lack of similar upstream failures led me to initially suspect the former. In order to reproduce it I decided to use Ansible to do multiple clones of repositories and see if that would uncover the problem. If needed, I would then have extended this to orchestrating other parts of turbo-hipster, in case the problem was a symptom of something else.

Firstly I needed to clone from a bunch of different servers at once to simulate the network failures more closely (rather than doing multiple clones on the one machine, or from the one IP in containers, for example). To simplify this I decided to learn some Ansible and launch a bunch of nodes on Rackspace (instead of doing it by hand).

Using the pyrax module I put together a crude playbook to launch a bunch of servers. There are likely much neater and better ways of doing this, but it suited my needs. The playbook takes care of placing appropriate ssh keys so I could continue to use the nodes later.

    ---
    - name: Create VMs
      hosts: localhost
      vars:
        ssh_known_hosts_command: "ssh-keyscan -H -T 10"
        ssh_known_hosts_file: "/root/.ssh/known_hosts"
      tasks:
        - name: Provision a set of instances
          local_action:
            module: rax
            name: "josh-testing-ansible"
            flavor: "4"
            image: "Ubuntu 12.04 LTS (Precise Pangolin) (PVHVM)"
            region: "DFW"
            count: "15"
            group: "raxhosts"
            wait: yes
          register: raxcreate

        - name: Add the instances we created (by public IP) to the group 'raxhosts'
          local_action:
            module: add_host
            hostname: "{{ item.name }}"
            ansible_ssh_host: "{{ item.rax_accessipv4 }}"
            ansible_ssh_pass: "{{ item.rax_adminpass }}"
            groupname: raxhosts
          with_items: raxcreate.success
          when: raxcreate.action == 'create'

        - name: Sleep to give time for the instances to start ssh
          #there is almost certainly a better way of doing this
          pause: seconds=30

        - name: Scan the host key
          shell: "{{ ssh_known_hosts_command}} {{ item.rax_accessipv4 }} &gt;&gt; {{ ssh_known_hosts_file }}"
          with_items: raxcreate.success
          when: raxcreate.action == 'create'

    - name: Set up sshkeys
      hosts: raxhosts
      tasks:
       - name: Push root's pubkey
         authorized_key: user=root key="{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"

From here I can use Ansible to work on those servers using the rax inventory. This allows me to address any nodes within my tenant and then log into them with the seeded sshkey.

The next step of course was to run tests. Firstly I just wanted to reproduce the issue, so the following playbook crudely sets up an environment in which each node simply clones nova.

    ---
    - name: Prepare servers for git testing
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Install git
          apt: name=git state=present update_cache=yes
        - name: remove nova if it is already cloned
          shell: 'rm -rf nova'

    - name: Clone nova and monitor tcpdump
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Clone nova
          shell: "git clone http://git.openstack.org/openstack/nova"

By default Ansible runs with 5 forked processes, meaning that it works on 5 servers at a time. We want to exercise git heavily (in the same way turbo-hipster does), so we use the --forks parameter to run the clone on all the servers at once. The plan was to keep launching servers until the error reared its head from the load.

To my surprise this happened with very few nodes (fewer than 15, which I kept as my minimum for testing). To confirm, I also ran the tests after launching further nodes to see it fail at 50 and 100 concurrent clones. It turned out that the more I cloned, the higher the failure rate was.

Now that I had the problem reproducing, it was time to do some debugging. I modified the playbook to capture tcpdump information during the clone. Initially git was cloning over IPv6 so I turned that off on the nodes to force IPv4 (just in case it was a v6 issue, but the problem did present itself on both networks). I also locked git.openstack.org to one IP rather than randomly hitting both front ends.

    ---
    - name: Prepare servers for git testing
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Install git
          apt: name=git state=present update_cache=yes
        - name: remove nova if it is already cloned
          shell: 'rm -rf nova'

    - name: Clone nova and monitor tcpdump
      hosts: josh-testing-ansible*
      serial: "100%"
      vars:
        cap_file: tcpdump_{{ ansible_hostname }}_{{ ansible_date_time['epoch'] }}.cap
      tasks:
        - name: Disable ipv6 1/3
          sysctl: name="net.ipv6.conf.all.disable_ipv6" value=1 sysctl_set=yes
        - name: Disable ipv6 2/3
          sysctl: name="net.ipv6.conf.default.disable_ipv6" value=1 sysctl_set=yes
        - name: Disable ipv6 3/3
          sysctl: name="net.ipv6.conf.lo.disable_ipv6" value=1 sysctl_set=yes
        - name: Restart networking
          service: name=networking state=restarted
        - name: Lock git.o.o to one host
          lineinfile: dest=/etc/hosts line='23.253.252.15 git.openstack.org' state=present
        - name: start tcpdump
          command: "/usr/sbin/tcpdump -i eth0 -nnvvS -w /tmp/{{ cap_file }}"
          async: 6000000
          poll: 0 
        - name: Clone nova
          shell: "git clone http://git.openstack.org/openstack/nova"
          #shell: "git clone http://github.com/openstack/nova"
          ignore_errors: yes
        - name: kill tcpdump
          command: "/usr/bin/pkill tcpdump"
        - name: compress capture file
          command: "gzip {{ cap_file }} chdir=/tmp"
        - name: grab captured file
          fetch: src=/tmp/{{ cap_file }}.gz dest=/var/www/ flat=yes

This gave us a bunch of compressed capture files that I was then able to debug with the help of my colleagues (particular thanks to Angus Lees). The results from an early run can be seen here: http://119.9.51.216/old/run1/

Gus determined that the problem was due to an RST packet coming from the source at roughly 60 seconds. This indicated we were likely hitting a timeout at the server or a firewall during the git-upload-pack phase of the clone.

The solution turned out to be rather straightforward. The git-upload-pack had simply grown too large and would time out depending on the load on the servers. There was a timeout in apache as well as in the HAProxy config for both frontend and backend responsiveness. The relevant patches can be found at https://review.openstack.org/#/c/192490/ and https://review.openstack.org/#/c/192649/

While upping the timeout avoids the problem, certain projects are clearly pushing the infrastructure to its limits. As such a few changes were made by the infrastructure team (in particular James Blair) to improve git.openstack.org’s responsiveness.

Firstly, git.openstack.org is now a higher-performance (30GB) instance, a large step up from the (8GB) instances that previously served as frontends. Moving to one frontend additionally meant the HAProxy algorithm could be changed to leastconn to help balance connections better (https://review.openstack.org/#/c/193838/).

                          +--------------------+
                          | git.openstack.org  |
                          | (HAProxy frontend) |
                          +----------+---------+
                                     |
                                     |
            +------------------------v------------------------+
            |  +---------------------+  (leastconn algorithm) |
            |  | git01.openstack.org |                        |
            |  |   +---------------------+                    |
            |  +---| git02.openstack.org |                    |
            |      |   +---------------------+                |
            |      +---| git03.openstack.org |                |
            |          |   +---------------------+            |
            |          +---| git04.openstack.org |            |
            |              |   +---------------------+        |
            |              +---| git05.openstack.org |        |
            |                  |  (HAProxy backend)  |        |
            |                  +---------------------+        |
            +-------------------------------------------------+

All that was left was to see if things had improved. I reran the test across 15, 30 and then 45 servers. These were all able to clone nova reliably where they had previously been failing. I then upped it to 100 servers, where the cloning began to fail again.

Post-fix logs for those interested:
http://119.9.51.216/run15/
http://119.9.51.216/run30/
http://119.9.51.216/run45/
http://119.9.51.216/run100/
http://119.9.51.216/run15per100/

At this point, however, I’m basically performing a Distributed Denial of Service attack against git. So while the servers aren’t immune to a DDoS, the original problem appears to be fixed.

New Blog

Welcome to my new blog.

You can find my old one here: http://josh.opentechnologysolutions.com/blog/joshua-hesketh

I intend on back-porting those posts into this one in due course. For now though, I’m going to start posting about my adventures in OpenStack!