Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

October 10, 2015

World Mental Health Day 2015

On this year’s World Mental Health Day, some info from Lifeline and Mental Health Australia:

Mental Health Begins with Me

Did you know 70% of people with mental health issues don’t seek help? […] As a community we can encourage others to take care of their mental health by breaking down the barriers that stop people seeking help when they need it.

How can you help?

Make your mental health promise and share it today.  Encourage your friends and family to do the same and share their promises here or on social media using the hashtag #WMHD2015.

Here are some great tips and promises to make to yourself this 10/10 (October 10th):

  1. Sleep well
  2. Enjoy healthy food
  3. Plan and prioritise your day
  4. Tune into the music you love
  5. Cut down on bad food and booze
  6. Switch off your devices and tune out
  7. Hang out with people who make you feel good
  8. Join in, participate and connect
  9. Exercise your body and mind
  10. Seek advice and support when you need it


October 09, 2015

Julia cluster install with MPI packages

Julia is a high-level, high-performance dynamic language for technical computing. With a "just in time" compiler, it is very fast, especially compared to languages like MATLAB, Octave and R. However, it is relatively new, and a cluster installation and package deployment have a few quirks.

Download the source from GitHub. By default you will be building the latest unstable version of Julia from the git checkout. However, we want to run 0.4.0 (or rather, 0.4rc4), which is the last stable release.

# cd /usr/local/src/JULIA


October 08, 2015

Troubleshooting MySQL ERROR 1136 (21S01)

I encountered this gem of an error working with MySQL today:

ERROR 1136 (21S01) at line 144: Column count doesn't match value count at row 1


A mismatch between the number of destination columns and the number of columns specified in your MySQL script will generate this error.

In Detail

I was converting the database for one application (Storyboard) to another (Phabricator). This is something I've not done before, and the code was more than six months old, a hand-me-down from a previous dev. It worked then, but the SQL schema for both has changed since.

This is the code snippet it choked on:

insert into user
    select
     if(full_name is NULL, username, full_name),
     if(updated_at is NULL, unix_timestamp(now()), unix_timestamp(updated_at)),
     NULL, 0, 0, '', storyboard.make_cert(255),
     0, 0, is_superuser, 'UTC', 1, 1
   from storyboard.users;

It turns out that this error is telling me that the destination table (phabricator_users.user) does not have the same number of columns as the select statement from storyboard.users.

Examining storyboard.users first, I found this:

mysql> desc users;
| Field        | Type         | Null | Key | Default | Extra          |
| id           | int(11)      | NO   | PRI | NULL    | auto_increment |
| created_at   | datetime     | YES  |     | NULL    |                |
| updated_at   | datetime     | YES  |     | NULL    |                |
| email        | varchar(255) | YES  | UNI | NULL    |                |
| is_staff     | tinyint(1)   | YES  |     | NULL    |                |
| is_active    | tinyint(1)   | YES  |     | NULL    |                |
| is_superuser | tinyint(1)   | YES  |     | NULL    |                |
| last_login   | datetime     | YES  |     | NULL    |                |
| openid       | varchar(255) | YES  |     | NULL    |                |
| full_name    | varchar(255) | YES  | MUL | NULL    |                |
| enable_login | tinyint(1)   | NO   |     | 1       |                |
11 rows in set (0.00 sec)

Examining phabricator_users.user I found this:

mysql> desc user;
| Field              | Type             | Null | Key | Default | Extra          |
| id                 | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| phid               | varchar(64)      | NO   | UNI | NULL    |                |
| userName           | varchar(64)      | NO   | UNI | NULL    |                |
| realName           | varchar(128)     | NO   | MUL | NULL    |                |
| sex                | char(1)          | YES  |     | NULL    |                |
| translation        | varchar(64)      | YES  |     | NULL    |                |
| passwordSalt       | varchar(32)      | YES  |     | NULL    |                |
| passwordHash       | varchar(32)      | YES  |     | NULL    |                |
| dateCreated        | int(10) unsigned | NO   |     | NULL    |                |
| dateModified       | int(10) unsigned | NO   |     | NULL    |                |
| profileImagePHID   | varchar(64)      | YES  |     | NULL    |                |
| consoleEnabled     | tinyint(1)       | NO   |     | NULL    |                |
| consoleVisible     | tinyint(1)       | NO   |     | NULL    |                |
| consoleTab         | varchar(64)      | NO   |     | NULL    |                |
| conduitCertificate | varchar(255)     | NO   |     | NULL    |                |
| isSystemAgent      | tinyint(1)       | NO   |     | 0       |                |
| isDisabled         | tinyint(1)       | NO   |     | NULL    |                |
| isAdmin            | tinyint(1)       | NO   |     | NULL    |                |
| timezoneIdentifier | varchar(255)     | NO   |     | NULL    |                |

phabricator_users.user has 19 columns, yet the original MySQL syntax has 24 columns listed. So there's been a schema change in the intervening 6+ months since the script was last run.
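ERROR 1136 boils down to simple arithmetic, so the check can be automated before the script is even run. Below is a rough Python sketch (my own illustration, not part of the original migration script) that counts the top-level comma-separated expressions in a select list. The parser deliberately ignores commas nested inside parentheses; it would need extending to handle commas inside string literals:

```python
# Count top-level comma-separated expressions in a SQL select list,
# so the count can be compared against the destination table's columns.

def count_select_expressions(select_list: str) -> int:
    """Count expressions separated by commas at parenthesis depth zero."""
    depth = 0
    count = 1
    for ch in select_list:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        elif ch == ',' and depth == 0:
            count += 1  # a top-level comma separates two expressions
    return count

# The select list from the corrected stanza in this post:
select_list = """
    if(full_name is NULL, email, full_name),
    if(updated_at is NULL, unix_timestamp(now()), unix_timestamp(updated_at)),
    NULL, 0, 0, '', storyboard.make_cert(255),
    0, 0, is_superuser, 'UTC'
""".strip()

print(count_select_expressions(select_list))
```

Comparing that number against the column count reported by `desc user` (or against an explicit column list in the INSERT) flags the mismatch without waiting for MySQL to reject the statement at row 1.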

Identifying missing fields and removing surplus columns from the script resulted in this version of the stanza:

insert into user
    select
     if(full_name is NULL, email, full_name),
     if(updated_at is NULL, unix_timestamp(now()), unix_timestamp(updated_at)),
     NULL, 0, 0, '', storyboard.make_cert(255),
     0, 0, is_superuser, 'UTC'
   from storyboard.users;

... which unsurprisingly now worked.

October 07, 2015

Lunch Time Science is returning

Right, the next step in getting things back on track, Angry Beanie-wise, is rebooting the #lunchtimescience daily updates and the podcast that will be attached to them.

So starting next Monday, #lunchtimescience will return and I'm already on the hunt for the first scientist to profile for the show.

For Science!

Two for our gummint to think upon...


...oh, and one more for a person who can do no wrong... in their own eyes, anyhow... you do actually have a possible future, one which centres on choosing to no longer be totally self-righteous, and sticking to that choice:

MariaDB Developers Meeting + User Group NL

Next week, all the MariaDB Server developers will descend on Amsterdam for the developers' meeting. As you know, the meeting is open to all interested parties, so we hope to see you in Amsterdam from Tuesday Oct 13 to Thursday Oct 15. The schedule is now online as well.

In addition to that, on Monday Oct 12 2015 there is also a meetup planned with the MySQL User Group NL. As the organiser Daniël van Eeden wrote, this is a one-of-a-kind meetup: “This is a very unique event, it is not often possible to find so many MariaDB developers together and speaking about what they work on.”

Yes, we’re doing it lightning talk style (ok, not strictly, 5 minutes), but everyone will talk about something they’re working on or passionate about that you don’t find in MySQL. I understand that there will be pizza and beverages too.

All in, a packed week in Amsterdam, and here’s to focusing on the MariaDB Server 10.2 release cycle.

Lunchtime geocaching

So, I didn't get to sleep last night until 4:30am because of a combination of work meetings and small children, and today was a pretty weird day for me. I was having a lot of trouble concentrating at lunch time, so I decided a walk was the least worst thing I could do with my time. I decided to knock off some of the few remaining geocaches in southern Tuggeranong that I haven't found yet.

This walk was odd -- it started and ended in a little bit of Theodore they never got around to actually building, and I can't find any documentation online about why. It then proceeded through a nice little green strip that has more than its share of dumped rubbish; Clean Up Australia needs to do a visit here! Then there were the Aboriginal axe grinding grooves, just kind of in the middle of the green strip with no informational signage or anything. Finally, a geocache at an abandoned lookout, which would have been much nicer if it wasn't now being used as an unofficial dump.

That said, a nice little walk, but I have no real desire to revisit this one any time soon.


Interactive map for this route.

Tags for this post: blog pictures 20151007 photo canberra bushwalk


October 06, 2015

Call for Media Participation

2016 Geelong - LCA By the Bay - is now seeking applications from suitable media outlets to participate in one of the most respected technical conferences in the Asia Pacific region.

When: Monday 1st February to Friday 5th February

Where: Deakin University, Geelong Waterfront Campus, Geelong CBD

Up to five Media Passes will be available, providing:

  • free Professional registration to all five days of the conference, valued at around $AUD 1000, including access to the Penguin Dinner and the Professional Delegates Networking Session
  • access to specially designated Media areas
  • Access to interview Keynotes and Speakers will be via the Conference Media Liaison, and by agreement with the Keynotes and Speakers themselves.

Travel and accommodation are not included in the Media Pass, and Media Representatives will need to make their own way to the conference.

To be considered for a Media Pass, please provide the following information via email to

  • The registered business name of your media outlet, and ABN or ACN
  • Examples of the online or physical media you produce
  • Whether you’ve attended in the past, and if so, examples of media coverage generated at the event
  • The name of the staff member who would be attending
  • Acknowledgement that you agree with the Conference’s Code of Conduct

Decisions to grant a Media Pass will be made based on the following principles:

  • ensuring diverse channel and cohort coverage
  • prioritising media outlets who have a significant focus on Linux and / or open source technologies
  • that the holder of a Media Pass has not previously breached the Conference Code of Conduct

We encourage you to remain up to date with conference news through the following channels:

October 05, 2015

Geo-Politics, Soundcloud Scraper/Downloader, Apple Issues and More

- if you don't quite understand the difference between Western and Russian media, and why some Russian media is branded as being propagandist, watch some of the interviews of Putin and compare them with other news outlets. In general things are much more 'controlled', too 'coherent' (with regards to perspective), and at times it feels as though the questions and answers have been prepared beforehand

Vladimir Putin 60 Minutes interview FULL 9-27-15 Vladimir Putin 60 minutes Interview Charlie Rose

Putin Speaks English for CNN

Vladimir Putin: An Enigmatic Leader's Rise To Power - Best Documentary 2015

Vladimir Putin Rage


Putin: Who gave NATO right to kill Gaddafi? 
Putin: We won't let anyone achieve military dominance over Russia

Putin: America is a bully and threat to stability

Putin slams US in address to nation

Putin on US Foreign Policy Elite

Putin: Quit lecturing Russia on democracy!

Putin talks NSA, Syria, Iran, drones in exclusive RT interview (FULL VIDEO)

'Do you realise what you've done?' Putin addresses UNGA 2015 (FULL SPEECH)

Vladimir Putin: "KGB Spymaster"

- I think a lot of people underestimate Putin. They know that he's attempting to look after Russia's (and his own) best interests, but the thing I'm wondering is whether or not they realise how far he's willing to push back and how multi-faceted he really is. It's clear that he can come off as a thug, but look at the USSR's history. There is no way that he could look after Russia's best interests without at least projecting strength. I'm not sure he could have lasted long within the KGB/FSB if he was the pure thug/'gangsta' he seems to be portrayed as by some people

- at times, I look at Putin's reactions and it feels as though there was some tacit agreement to have him bring Russia back to a position of global strength. Hearing some stories about him (and other heads of state of Russia as well as other USSR member states), it feels as though every time Russia has tried to help the West, the West has not returned the favour (the truth is probably somewhere in the middle). This is especially the case with the perceived lack of sufficient investment into Russia, the expansion of NATO, and Western interests close to and inside former USSR states (all of this going against earlier documented promises). Many Westerners have been booted out of former USSR states for appearing to want to interfere with internal politics. The problem is that if this is true, Putin will feel as though he's being pushed into a corner from which he has no option except to react forcefully. The irony is that this time the West isn't dealing with a pure politician. As stated previously, I feel he's far more intelligent and multi-faceted than that. Think carefully: with the moves that he's currently making in the Middle East, some of his other moves in other USSR states, as well as in the East, any possible new Eurasian Union (if it comes off) is a much stronger (and better prepared) power bloc to challenge the current Western powers (China's influence and future success is a different issue altogether...)

- this point is pivotal in the Syrian conflict. It also gives perspective of how the Chinese/Russians view the world and what they will do in future if they continue to get stronger

- as stated previously, I don't think that any confrontation between the supposed Eurasian powers and the West and its allies is going to be as clear cut as some people say. In the past you could put this down to 'propaganda', but the fact is they have demonstrated their technologies and have footage of it. Nearly everything you've thought of, both sides have also thought of as well. Estimates of how far China is behind the West in defense technology (on a broad basis) vary anywhere between 5-30 years. My guess is that it's about 10-20 years (more likely towards the upper end with regards to development; mass production and other issues are another problem entirely). Less, if they allocate resources correctly, increase their defense budget, gain further intelligence, and can make certain breakthroughs....

China's new YJ-18 missile: 'S'-shape movement at supersonic terminal speed

China Missile 中国导弹 WU-14 10 times sound speed can tear apart US anti-missile network

U.S. and Chinese Air Superiority Capabilities

An Assessment of Relative Advantage, 1996–2017

- one of the things that I think Westerners generally misinterpret is that freedom doesn't necessarily require choice. If that were the case, the Middle East and many parts of Eurasia would have fallen apart a long time ago. Look at the way the Chinese government has handled their overheating sharemarket. In the West, investors and institutions would blame the government (for recent massive/drastic falls) but would understand that that is part of life. In China, the response in interviews with some people is identical to the response that is given by a lot of former Soviet spies. Failure and betrayal are much more closely aligned

- people keep on arguing about how much countries spend on defense and how spending equates to quality. The problem is that price doesn't necessarily equate to value. Anybody who has lived long enough knows this.... Who cares if it's cheap or expensive if it's effective in fulfilling its goal?

- guess this answers my previous thought about how far the Chinese are willing to project out. With respect to the functioning of the UN, it is fascinating to see how the perspectives of the Russians and Chinese will play out in the future, especially if they continue their pathway towards strong, sustainable economic growth. What has surprised me is how early (relatively) they've been to push out

- people (in any country) get hysterical at times in this discussion of who will 'lead the world' in future. Moreover, it is at this point that power projection and deterrence begin to take on bizarre dimensions. Think about how strange it sounds when someone who projects power considers that a deterrent against someone who considers an immobile object a deterrent

- I don't think China wants to win back Taiwan (or other contested territories) by having to resort to armed conflict. They want these territories to come back willingly to the 'motherland'. If they don't have that choice, they want to have the exact same option that Russia has (with other former states of the USSR). Moreover, if they invade/take over contested territory, they want their military to be strong enough that they don't have to resort to nuclear weapons to intimidate others into backing down. They don't see it as that either. They see it as recovery of lost territory that has been documented (the same goes for other countries in the region though)

- with some of the recent moves in the Middle East, one has to wonder how much respect countries in that region actually have for the West

- turning local populations can take decades, and even then they may still want you gone. This means choosing your battles (and scoping them) more carefully, staying there for the long haul, or ensuring that the side that you back will be able to take control. Ironically, this potentially means coming to an agreement with Russia and having at least partial representation by former elements of Syria's current government. The Middle East is becoming more and more bizarre (and confusing) by the day. There are few if any clean hands in our world now

- if you've never heard of Chomsky, his perspective on the world can come off as incredibly paranoid if you've never heard too many other non-Western perspectives. It is interesting how little (and how much, at other times) separates many of us though

Bernie Sanders + Noam Chomsky: Deciphering Foreign Policy Jargon

Noam Chomsky: US is world's biggest terrorist

Noam Chomsky: US terrorism (2015)

2014 "Noam Chomsky": Why you can not have a Capitalist Democracy!

"Who does control the world?" - Noam Chomsky - BBC interview 2003

Noam Chomsky: Rebel without a Pause - Documentary

- the more you read, the more obvious it is why there are so many defectors from the West rather than the other way around. While things are brutal in many non-Western countries, they are more up front. In the West, things are at a different level, often less obvious and often hidden in the shadows. Potential agents and employees only get an idea of what the 'real world' is like when they join the service/s. I guess this is also the reason why, if there are non-Western defectors, they often defect on ideological grounds

- if you know enough about finance and economics, you'll realise that most GDP figures are distorted, since everyone chooses different constituent parts. It's not an issue related to China alone. In fact, in the past there were stories about them under-reporting GDP figures because technically their measures were different

This script is to facilitate automated retrieval of music from the website, after it was found that existing website download programs such as Teleport Pro, HTTrack, and FlashGet were too inefficient. 

It works by reverse engineering the storage scheme of files on the website and the lack of any need for registration or login credentials, and taking advantage of this so that we end up with a more efficient automated download tool.

Obviously, the script can be modified on an ad-hoc basis to be able to download from virtually any website. As this is the very first version of the program (and I didn't have access to the original server while I was cleaning it up), it may be VERY buggy. Please test prior to deployment in a production environment.
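As a rough illustration of the approach described (when files are stored under a predictable, sequential naming scheme, their URLs can be generated directly instead of crawled), here is a hypothetical Python sketch. The URL pattern, numeric range, and function name are invented for the example and are not from the actual script:

```python
# Hypothetical sketch: generate candidate file URLs from a guessed
# sequential storage scheme, rather than crawling every page.
from urllib.parse import urljoin

def build_track_urls(base: str, first: int, last: int) -> list[str]:
    """Return candidate URLs for sequentially numbered files."""
    # "tracks/<n>.mp3" is an assumed naming scheme for illustration only.
    return [urljoin(base, f"tracks/{n}.mp3") for n in range(first, last + 1)]

print(build_track_urls("https://example.com/", 100, 103))
```

In a real downloader each candidate URL would then be fetched (e.g. with `urllib.request` or `requests`), skipping any that return 404, which is where the efficiency over a general-purpose crawler comes from.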

OS X: About OS X Recovery

How to Make an OS X Yosemite Boot Installer USB Drive

How to install Windows using Boot Camp

How to Create a Windows 10 Installer USB Drive from Mac OS X

If all you want is to try a later version of Mac OS X then try virtualisation...

I can log into my iTunes account but can not access my account details, what's wrong?

Came across a bizarre wireless bug recently on Mac OS X Snow Leopard

This is a bunch of quotes that I've collected recently.

- Colonialism was neither romantic nor beautiful. It was exploitative and brutal. The legacy of colonialism still lives quite loudly to this day. Scholars have argued that poor economic performance, weak property rights and tribal tensions across the continent can be traced to colonial strategies. So can other woes. In a place full of devastation and lawlessness, diseases spread like wildfire, conflict breaks out and dictators grab power.

- The United States makes an improper division between surveillance conducted on residents of the United States, and the surveillance that is conducted with almost no restraint upon the rest of the world. This double standard has proved poisonous to the rights of Americans and non-Americans alike. In theory, Americans enjoy better protections. In practice there are no magical sets of servers and Internet connections that carry only American conversations. To violate the privacy of everyone else in the world, the U.S. inevitably scoops up its own citizens' data. Establishing nationality as a basis for discrimination also encourages intelligence agencies to make the obvious end-run: spying on each other's citizens, and then sharing that data. Treating two sets of innocent targets differently is already a violation of international human rights law. In reality, it reduces everyone to the same, lower standard.

- Australian actively managed global funds continue to deliver woeful returns, with 67 per cent performing worse than the S&P benchmark indexes, rising to 85 per cent over three years and almost 90 per cent over five years.

"On average, international equity funds posted a strong gain of 23.4 per cent in the past one-year period. However, the majority of funds in this peer group, at 67.3 per cent, underperformed the S&P Developed Ex-Australia LargeMidCap, which recorded a return of 25.5 per cent over the same period," Ms Luk said.

Every single Australian bond fund has underperformed the index this year, and the longer term results are not significantly more promising: 83.4 per cent underperformed over the last three years, and 86 per cent over five years.

- Thursday’s speech was not the first time the Pope has spoken out about the arms trade. He referred to it as “the industry of death” in a talk with Italian schoolchildren in May. “Why do so many powerful people not want peace? Because they live off war,” he said.

“This is serious. Some powerful people make their living with the production of arms and sell them to one country for them to use against another country,” he said. “The economic system orbits around money and not men, women. … So war is waged in order to defend money. This is why some people don’t want peace: They make more money from war, although wars make money but lose lives, health, education.”

- A politics and solidarity that depend on demonizing others, that draws on religious sectarianism or narrow tribalism or jingoism may at times look like strength in the moment, but over time its weakness will be exposed. And history tells us that the dark forces unleashed by this type of politics surely makes all of us less secure. Our world has been there before. We gain nothing from going back

- The fall of Kunduz may also be a good time to look at whether the Afghan Army needs to shuffle assets around, he adds. In the immediate aftermath of the Taliban takeover, the government in Kabul rushed well-regarded Afghan commandos to the region, for example.

That’s to be expected, but “militarily, you want to make sure you know what the situation is before you throw a bunch of forces into it,” Barno notes. This includes assessing the level of training and capability of Afghan forces posted up there. “Are there enough forces, and were those forces trained and led properly?” he adds.

Finally, it’s worth keeping in mind that up until this point, there have been essentially two models for dealing with non-governed spaces in the post-9/11 world, Scharre argues.

“First, you can send 100,000 troops in and occupy and try to rebuild it – that’s a model that has cost millions in dollars and thousands in lives,” he says.

The other model is drones and air attacks, “which don’t seem to ever fully solve the problem,” Scharre adds. “In Syria, in Anbar, Iraq we’re grappling with this.”

Kunduz could underline the need to consider new models, he says – “one where US soldiers aren’t fighting, but some level of support is reasonable.”

- “Many military conflicts started with the silent connivance to the ideas of one people’s superiority over others. In this sense the modern ideologies of exceptionalism are extremely dangerous,” Naryshkin stated.

- In the heady days of the Cold War, Americans tended to view Soviet decision making as a black box: You know what goes in, you know what comes out, but you are clueless about what is happening inside. Soviet policy was thus believed to be both enigmatic and strategic. There was little room for personality or personal philosophy; understanding the system was the only way.

- There's a quote that's often attributed to Winston Churchill: "Russia is never as strong as you fear or as weak as you hope."

- Both sides of the debate are correct—but neither side is telling the whole story. As a good friend on the Hill recently told me: “In political communications, facts are an interesting aside, but are completely irrelevant. What we do here is spin.” That’s exactly what’s happening here—both sides are selectively cherry picking facts to make their case—spin.

- Danny Dalton: Some trust fund prosecutor, got off-message at Yale thinks he's gonna run this up the flagpole? Make a name for himself? Maybe get elected some two-bit congressman from nowhere, with the result that Russia or China can suddenly start having, at our expense, all the advantages we enjoy here? No, I tell you. No, sir! Corruption charges! Corruption? Corruption is government intrusion into market efficiencies in the form of regulations. That's Milton Friedman. He got a goddamn Nobel Prize. We have laws against it precisely so we can get away with it. Corruption is our protection. Corruption keeps us safe and warm. Corruption is why you and I are prancing around in here instead of fighting over scraps of meat out in the streets. Corruption is why we win.

- Bryan Woodman: But what do you need a financial advisor for? Twenty years ago you had the highest Gross National Product in the world, now you're tied with Albania. Your second largest export is secondhand goods, closely followed by dates which you're losing five cents a pound on... You know what the business community thinks of you? They think that a hundred years ago you were living in tents out here in the desert chopping each other's heads off and that's where you'll be in another hundred years, so, yes, on behalf of my firm I accept your money.

- “The ‘Russian’ attitude,” Isaiah Berlin wrote, “is that man is one and cannot be divided.” You can’t divide your life into compartments, hedge your bets and live with prudent half-measures. If you are a musician, writer, soldier or priest, integrity means throwing your whole personality into your calling in its purest form.

- Russia is a more normal country than it used to be and a better place to live, at least for the young. But when you think of Russia’s cultural impact on the world today, you think of Putin and the oligarchs. Now the country stands for grasping power and ill-gotten money.

There’s something sad about the souvenir stands in St. Petersburg. They’re selling mementos of things Russians are sort of embarrassed by — old Soviet Army hats, Stalinist tchotchkes and coffee mugs with Putin bare-chested and looking ridiculous. Of the top 100 universities in the world, not a single one is Russian, which is sort of astonishing for a country so famously intellectual.

This absence leaves a mark. There used to be many countercultures to the dominant culture of achievement and capitalism and prudent bourgeois manners. Some were bohemian, or religious or martial. But one by one those countercultures are withering, and it is harder for people to see their situations from different and grander vantage points. Russia offered one such counterculture, a different scale of values, but now it, too, is mainly in the past.

- 1) Xi removed over 28,000 officials in 2 years. This is old data from early 2015. Officials no longer go to high-end restaurants or wear luxury. Most senior officials who sent their kids and wives to foreign countries have recalled their kids and wives back. Those who didn't were told crystal clear that they will be sidelined. Can any other leader around the world do that, at such a large scale?

2) CCP turned itself from a communist dictatorship and autarky in 1978 to a capitalist technocratic oligarchy and largest trading country in 2015, gradually, without major political turmoil. (Viewed from today's color revolution standard, Tiananmen Square in 1989 is child's play.) Can any other polity in the world claim the same success?

TPP, Russia and Mandatory Data Retention

It's 11:26pm on Monday night, we're watching Dark Matter and the news has come out that the Trans Pacific Partnership has been signed off. Given that we've not actually been allowed to see what the full detail of the treaty contains and the only parts that we have seen have been leaks that actively threaten our ability to make our own laws and maintain our institutions (hello PBS), this is not a thing that I am happy about.

So on top of that, news has also come out that a Russian jet has violated Turkish airspace. This is also a thing that fills me with not happiness. Especially considering the fact that when Russia first fully entered the conflict in Syria they demanded that NATO stay out of Syrian airspace. The amount of dick swinging going on at the moment pretty much ensures a massive cock up.

Oh yes, the icing on the cake is the fact that the mandatory data retention regime backed by both the Coalition and the Labor party is going to come into play in about a week. This is despite the fact that there are still questions about who the hell is going to pay for it.

So wheee, it's a wonderful world really.


The flow of things ….

The theme of this blog entry was triggered by a set of slides that were presented at OSCON this year on the topic of flow. Flow being the wonderful energised state where you are fully focused upon and enjoying the activity at hand.

For reference the presentation was: OSCON2015: Coding in the FLOW (Slides)


The conference presentation goes on to describe what the presenter thinks are the criteria needed for flow when you are coding, but I think there is a degree of generality here that can be applied to anything technical or skilled. They were described as:

  • G = Clear, attainable goals
  • F = Immediate and relevant feedback
  • S = Matched Skill and Challenge

For myself, I think I can add at least one other criterion:

  • A = Available Time

In terms of my tinkering away at little software projects, my most recent project has been npyscreenreactor. npyscreen is a Python library around the Python curses bindings. npyscreenreactor is an implementation interfacing that library with the Python Twisted library. Twisted is an event-driven networking engine for Python. The reactor part of the name refers to a design pattern for how to write event-based service handlers and have them run concurrently. (See Reactor Pattern)
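As a generic illustration of the reactor pattern (a toy sketch of the design pattern only, not the actual npyscreenreactor or Twisted code), a reactor keeps a registry of handlers and dispatches each incoming event to the matching one:

```python
# Toy reactor: register handlers for event types, then dispatch a
# stream of (event, payload) pairs to whichever handler matches.

class Reactor:
    def __init__(self):
        self._handlers = {}

    def register(self, event, handler):
        """Associate an event type with a callable handler."""
        self._handlers[event] = handler

    def dispatch(self, events):
        """Run each queued event through its registered handler."""
        results = []
        for event, payload in events:
            handler = self._handlers.get(event)
            if handler is not None:
                results.append(handler(payload))
        return results

reactor = Reactor()
reactor.register("data", lambda payload: payload.upper())
reactor.register("tick", lambda payload: payload + 1)
print(reactor.dispatch([("data", "hello"), ("tick", 41)]))  # ['HELLO', 42]
```

A real reactor such as Twisted's additionally blocks on an OS-level readiness notification (select/poll/epoll) before dispatching, which is what lets many handlers run concurrently on a single thread.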

The project was written to support virtualcoke. virtualcoke is an emulator of the behaviour of the PLC that drives the UCC Coke Machine. It was written primarily so that club members can test code that speaks to the machine without needing access to the coke machine itself, and the development of the reactor was needed to enable use of the PyModbus Twisted module.

This project, npyscreenreactor, has taken some time to come to fruition: an initial working release of the code in March 2015, some bug fixing in June, working examples in August, and what will now probably be a stable version in September.

For this the goal, feedback, and skill have been there. However, the available time/energy has not (due to other commitments, such as work).  The wider project that will use virtualcoke, I still need to throw some energy at, but it is now lower down my list of priorities.

In things apart from this, flow has been less forthcoming of late and I’ll need to work on it.  The challenge being to set up a positive reinforcing cycle where achieving the goal generates warm fuzzies and more enthusiasm to work harder.

Storage Limitations on Android Devices

Many Android devices come with storage configurations that are surprising to end-users. A product that is advertised as having 32 gigabytes of memory may in fact turn out to have much less available for installing applications.

read more

October 04, 2015

Submission on Trans-Pacific Partnership

Status of the Submission

As of August 15, the Department of Foreign Affairs and Trade of the Commonwealth of Australia stated that it "continues to welcome public submissions and comments on Australia's participation in TPP negotiations".

read more

Twitter posts: 2015-09-28 to 2015-10-04

October 01, 2015

A searchable database of walk waypoints

Over the last year I've become increasingly interested in bush walking, especially around the ACT. It quickly became evident that John Evan's site is an incredibly valuable resource, especially if you're interested in trig points or border markers.

However, I do most of my early walk planning and visualization in Google Earth before moving to Garmin Basecamp to generate walkable maps. I wanted a way to hook John's database of GPS logs into Google Earth, so that I could plan walks more effectively. For example, John often marks gates in fences, underpasses under major roads, and good routes through scrub in his GPS tracks.

After a fair bit of playing, I ended up with this KML file which helps me do those things. It's basically magic -- the file is just a link to a search engine which has a database of GPS waypoints based off walks John and I have logged. These are then rendered in Google Earth as if they were in a static KML file. You can also download the search results as KML for editing and so forth.

So, I'd be interested in other people's thoughts on whether this is a useful thing. I'd also be very interested in other donated GPS logs of walks and bike rides around Canberra, especially if they have waypoints marked for interesting things. If you have any comments at all, please email me at

Tags for this post: walks gps search google earth

Related posts: HP iPaq GPS FA256A; MelbourneIT are into search engine optimisation?; Historical revisionism; Searching for a technorati search plug in for Mozilla Firefox; Well, that's Google blog search live then; Google book search


Garran green strip

When I was a teenager my best mate lived in a house which backs onto this smallish reserve and we used to walk his dog here heaps. I had a few spare moments yesterday, so I was keen to do a quick explore and see what it's like now. The short answer is that it's still nice -- good terrain, nice mature trees, and a few geocaches. I think this one would be a good walk for cubs.


Interactive map for this route.

Tags for this post: blog pictures 20151001 photo canberra bushwalk


PAPR spec publicly available to download

PAPR is the Power Architecture Platform Reference document. It’s a short read at only 890 pages and defines the virtualised environment that guests run in on PowerKVM and PowerVM (i.e. what is referred to as ‘pseries’ platform in the Linux kernel).

As part of the OpenPower Foundation, we’re looking at ensuring this is up to date, documents KVM specific things as well as splitting out the bits that are common to OPAL and PAPR into their own documents.

September 30, 2015

Wandering around Curtin

I decided to go on a little walk on the way home from a work lunch and I don't regret it. This is a nice area, which I was exploring for geocaches. I probably wouldn't have come here at all, but it was the second part of the "Trees of Curtin" walk from Best Bush, Town and Village Walks in and around the ACT that I had done the first half of ages ago.

I am glad I came back for the second half -- to be honest I was pretty bored with the first half (a bike path beside a major road mostly), whereas this is much more like walking around in nature. The terrain is nice, no thistles, and plenty of horses. A nice afternoon walk overall.

Now back to reviewing Mitaka specs.


Interactive map for this route.

Tags for this post: blog pictures 20150930 photo canberra bushwalk


September 29, 2015

Second trail run

I went for my second trail run last night. This one was on much rockier terrain, and I ended up tweaking my right knee. I think that was related to the knee having to stabilize as I ran over uneven rocks. I'll experiment by finding a different, less awkward trail to run and see what happens, I suppose.

Interactive map for this route.

Tags for this post: blog canberra trail run

Related posts: First trail run; Chicken run; Update on the chickens; Boston; Random learning for the day


Julian Burnside: What sort of country are we? | The Conversation

On Teaching Programming

Being involved with teaching young students to code, I have come to the tentative conclusion that many coding kids have not actually been taught programming. This has been going on for a while, so some of this cohort are now themselves teaching others.

I have noticed that many people doing programming actually lack many of the fundamental skills that would make their programs efficient, less buggy and even just functional.

A few years back, Esther Schindler wrote an article Old-school programming techniques you probably don’t miss (ComputerWorld, April 2009).

Naturally, many (most!) of the things described there are familiar to me, and it’s interesting to review them. But contrary to Esther, I still do apply some of those techniques – I don’t want to miss them, as they serve a very important purpose, in understanding as well as for producing better code. And I teach them to students.

Programming is about smartly applied laziness. Students are typically aghast when I use that word, which is exactly why I use it, but the point is that smartly applied laziness is not the same as slackness. It’s simply a juicy way of describing “efficient”.

Suppose you need to shift some buckets of water.  You could carry one bucket at a time, but you’ll quickly find that it’s hard on your arm and shoulders, as well as wasting the other arm you have. So we learn that if you have more than one bucket to shift, carrying only one bucket at a time is not the best way of going about it. Similarly, trying to carry three or more buckets is probably going to cost more time than it saves, as well as likely spilling water all over the place.

Thus, and this was of course worked out many centuries ago, carrying two buckets works best and is the most efficient as well as being quite comfortable – particularly when using a neat yet simple tool called a yoke.

Inevitably, most kids will have at some time explored this issue themselves (perhaps while camping), and generally come to the same conclusion and insight. This is possible because the issue is fairly straightforward, and not obscured by other factors. In programming, things are not always so transparent.

Our modern programming tools (high-level languages, loose typing, visual programming, extensive APIs and libraries) enable us to have more convenience. But that convenience can only be applied judiciously when the programmer has the knowledge and skills required to make appropriate judgements. Without that, code can still be produced rapidly, but the results are not so good.

Some would say “good enough”, and that is somewhat true – when you have an abundance of computing power, memory and storage, what do a few bytes or cycles matter? But add together many of those inefficiencies, and it does become a rather dreadful mess. These days the luxury of abundance has become seriously abused. In our everyday life using laptops, smart-phones, tablets and other devices, we frequently encounter the consequences, and somehow regard it as “normal”. However, crashing apps (extreme case but very common) are not normal, and we should not regard any of this as good enough.

I see kids being taught to code using tools such as MIT’s Scratch. I reckon that’s fine as a tool, but in my observations so far the kids are only being shown how the system works. Some kids will have a natural knack for it and figure out how to do things properly, others will plod along and indeed get through by sheer determination, and some will give up – they might conclude that programming is not for them. I think that’s more than a pity. It’s wrong.

When you think about it, what’s actually happening… in natural language, do we just give a person a dictionary and some reference to grammar, and expect them to effectively use that language? We wouldn’t (well actually, it is what my French teachers did, which is why I didn’t pick up that language in school). And why would computer programming languages be different?

Given even a few fundamental programming techniques, the students become vastly more competent and effective and produce better code that actually works reliably. Is such understanding an optional extra that we don’t really care about, or should it be regarded as essential to the teaching?
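As one hypothetical example of the kind of fundamental I mean (the names and data here are made up, but the lesson is real): choosing a data structure that matches the access pattern. Checking membership in a list scans every element; a set does a hash lookup.

```python
# Hypothetical example: the same membership test with two different
# data structures. A list scans every element on each lookup (O(n));
# a set does an average O(1) hash lookup.
banned = ["alice", "bob", "carol"]   # list: linear scan per lookup
banned_set = set(banned)             # set: hash lookup per lookup

def is_banned(user, collection):
    return user in collection

# Both give the same answer; only the cost differs, and for thousands
# of lookups against thousands of names the difference is dramatic.
assert is_banned("bob", banned) == is_banned("bob", banned_set) == True
assert is_banned("dave", banned) == is_banned("dave", banned_set) == False
```

A student who knows why these differ writes code that keeps working when the input grows; one who doesn't ships something that is merely "good enough" on small data.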

I think we should set the bar higher. I believe that anyone learning programming should learn fundamentals of how and why a computer works the way it does, and the various techniques that make a computer program efficient and maintainable (among other attributes). Because programming is so much more than syntax.

NASA Confirms Signs of Water Flowing on Mars, Possible Niches for Life | NY Times

September 28, 2015

Old Joe and Goorooyarroo

Steve, Mel, Michael and I went for a walk to Old Joe trig yesterday. I hadn't been to Goorooyarroo at all before, and was quite impressed. The terrain is nice, with some steep bits as you get close to the border (it's clear from a walk around here that the border follows the water catchment). Plenty of nice trees, not too many thistles, and good company. A nice morning walk.

We bush bashed to the trig straight up the side of the hill, and I think there were gentler (but longer) approaches available -- like for instance how we walked down off the hill following the fence line. That said, the bush bash route wasn't terrible and it's probably what I'd do again.

I need to come back here and walk this border segment, that looks like fun. There are also heaps of geocaches in this area to collect.


Interactive map for this route.

Tags for this post: blog pictures 20150928 photo canberra bushwalk


UCI brain-computer interface enables paralyzed man to walk

Proof-of-concept study shows possibilities for mind-controlled technology.


In the preliminary proof-of-concept study, led by UCI biomedical engineer Zoran Nenadic and neurologist An Do, a person with complete paralysis in both legs due to spinal cord injury was able – for the first time – to take steps without relying on manually controlled robotic limbs.

So this is using brainwave-detecting technology to reconnect a person’s brain with part of their body. A very practical example of how science can (re)enable people, in this case give them back their freedom of mobility. That’s fantastic.

Complementarily, Honda’s ASIMO robot research can enable people to walk with artificial legs.

Don’t think this is just something that happens in labs! The basic tech is accessible. I have a single sensor EEG headset here, and some years ago I did a demo at a conference entitled “look ma, no hands” where I controlled the slide advance of the presentation on my laptop by doing a “long blink”.

September 27, 2015

Twitter posts: 2015-09-21 to 2015-09-27

Interactive Self-Care Guide

Interesting find:

[…] interactive flow chart for people who struggle with self care, executive dysfunction, and/or who have trouble reading internal signals. It’s designed to take as much of the weight off of you as possible, so each decision is very easy and doesn’t require much judgement.

Some readers may find it of use. I think it’d be useful to have the source code for this available so that a broad group of people can tweak and improve it, or make personalised versions.

September 24, 2015

New episode out this weekend

So I finally managed to catch up with Chris Arnade and have a chat about his Faces of Addiction project last Friday night. I don't think I was at my best (but, after a year off and the interview being at 11pm, I'm going to cut myself a little slack).

I'll be putting the episode out this weekend and will let everyone know when it's up.


Trolling Self-Driving Cars

XKCD’s Randall nails it beautifully, as usual…

Sure, you can code around this particular “attack vector”, but there are infinite possibilities… these are things we do have to consider along the way.

How we got to test_init_instance_retries_reboot_pending_soft_became_hard

I've been asked some questions about a recent change to nova that I am responsible for, and I thought it would be easier to address those in this format than trying to explain what's happening in IRC. That way whenever someone compliments me on possibly the longest unit test name ever written, I can point them here.

Let's start with some definitions. What is the difference between a soft reboot and a hard reboot in Nova? The short answer is that a soft reboot gives the operating system running in the instance an opportunity to respond to an ACPI power event gracefully before the rug is pulled out from under the instance, whereas a hard reboot just punches the instance in the face immediately.

There is a bit more complexity than that of course, because this is OpenStack. A hard reboot also re-fetches image meta-data, and rebuilds the XML description of the instance that we hand to libvirt. It also re-populates any missing backing files. Finally it ensures that the networking is configured correctly and boots the instance again. In other words, a hard reboot is kind of like an initial instance boot, in that it makes fewer assumptions about how much you can trust the current state of the instance on the hypervisor node. Additionally, a soft reboot which fails (probably because the instance operating system didn't respond to the ACPI event in a timely manner) is turned into a hard reboot after libvirt.wait_soft_reboot_seconds. So, we already perform hard reboots when a user asked for a soft reboot in certain error cases.

It's important to note that the actual reboot mechanism is similar though -- it's just how patient we are and what side effects we create that change -- in libvirt they both end up as a shutdown of the virtual machine and then a startup.
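That soft-then-hard escalation can be sketched like this (a hypothetical simplification -- none of these names are the real nova or libvirt APIs):

```python
# Hypothetical simplification of the soft-then-hard fallback, NOT the
# actual nova libvirt driver code. A soft reboot asks the guest OS to
# shut down via ACPI and waits; if the guest doesn't comply in time,
# we escalate to a hard reboot (destroy and restart).

def reboot(domain, wait_soft_reboot_seconds=120):
    if domain.soft_shutdown(timeout=wait_soft_reboot_seconds):
        domain.start()
        return "SOFT"
    # Guest ignored the ACPI event: punch it in the face instead.
    domain.destroy()
    domain.start()
    return "HARD"


class FakeDomain:
    """Test double standing in for a libvirt domain."""
    def __init__(self, cooperative):
        self.cooperative = cooperative

    def soft_shutdown(self, timeout):
        return self.cooperative

    def start(self):
        pass

    def destroy(self):
        pass


print(reboot(FakeDomain(cooperative=True)))   # -> SOFT
print(reboot(FakeDomain(cooperative=False)))  # -> HARD
```

Either way the guest ends up stopped and started again; the difference is only how politely we asked first.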

Bug 1072751 reported an interesting edge case with a soft reboot though. If nova-compute crashes after shutting down the virtual machine, but before the virtual machine is started again, then the instance is left in an inconsistent state. We can demonstrate this with a devstack installation:

Setup the right version of nova:

    cd /opt/stack/nova
    git checkout dc6942c1218279097cda98bb5ebe4f273720115d

Patch nova so it crashes on a soft reboot:

    cat - > /tmp/patch <<EOF
    > diff --git a/nova/virt/libvirt/ b/nova/virt/libvirt/
    > index ce19f22..6c565be 100644
    > --- a/nova/virt/libvirt/
    > +++ b/nova/virt/libvirt/
    > @@ -34,6 +34,7 @@ import itertools
    >  import mmap
    >  import operator
    >  import os
    > +import sys
    >  import shutil
    >  import tempfile
    >  import time
    > @@ -2082,6 +2083,10 @@ class LibvirtDriver(driver.ComputeDriver):
    >          # is already shutdown.
    >          if state == power_state.RUNNING:
    >              dom.shutdown()
    > +
    > +        # NOTE(mikal): temporarily crash
    > +        sys.exit(1)
    > +
    >          # NOTE(vish): This actually could take slightly longer than the
    >          # FLAG defines depending on how long the get_info
    >          # call takes to return.
    > EOF
    patch -p1 < /tmp/patch

Restart nova-compute inside devstack to make sure you're running the patched version. Then boot a victim instance:

    cd ~/devstack
    source openrc admin
    glance image-list
    nova boot --image=cirros-0.3.4-x86_64-uec --flavor=1 foo

Soft reboot, and verify it's gone:

    nova list
    nova reboot cacf99de-117d-4ab7-bd12-32cc2265e906
    sudo virsh list

virsh list should now show no virtual machines running, as nova-compute crashed before it could start the instance again. However, nova-api knows that the instance should be rebooting:

    $ nova list
    +--------------------------------------+------+--------+----------------+-------------+------------------+
    | ID                                   | Name | Status | Task State     | Power State | Networks         |
    +--------------------------------------+------+--------+----------------+-------------+------------------+
    | cacf99de-117d-4ab7-bd12-32cc2265e906 | foo  | REBOOT | reboot_started | Running     | private=         |
    +--------------------------------------+------+--------+----------------+-------------+------------------+

Start nova-compute again; nova-compute detects the missing instance on boot, and tries to start it up again:

    sg libvirtd '/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf' \
        & echo $! >/opt/stack/status/stack/; fg || \
        echo "n-cpu failed to start" | tee "/opt/stack/status/stack/n-cpu.failure"
    [...snip...]
    Traceback (most recent call last):
      File "/opt/stack/nova/nova/conductor/", line 444, in _object_dispatch
        return getattr(target, method)(*args, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/", line 213, in wrapper
        return fn(self, *args, **kwargs)
      File "/opt/stack/nova/nova/objects/", line 728, in save
        columns_to_join=_expected_cols(expected_attrs))
      File "/opt/stack/nova/nova/db/", line 764, in instance_update_and_get_original
        expected=expected)
      File "/opt/stack/nova/nova/db/sqlalchemy/", line 216, in wrapper
        return f(*args, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/oslo_db/", line 146, in wrapper
        ectxt.value = e.inner_exc
      File "/usr/local/lib/python2.7/dist-packages/oslo_utils/", line 195, in __exit__
        six.reraise(self.type_, self.value, self.tb)
      File "/usr/local/lib/python2.7/dist-packages/oslo_db/", line 136, in wrapper
        return f(*args, **kwargs)
      File "/opt/stack/nova/nova/db/sqlalchemy/", line 2464, in instance_update_and_get_original
        expected, original=instance_ref))
      File "/opt/stack/nova/nova/db/sqlalchemy/", line 2602, in _instance_update
        raise exc(**exc_props)
    UnexpectedTaskStateError: Conflict updating instance cacf99de-117d-4ab7-bd12-32cc2265e906.
        Expected: {'task_state': [u'rebooting_hard', u'reboot_pending_hard', u'reboot_started_hard']}.
        Actual: {'task_state': u'reboot_started'}

So what happened here? This is a bit confusing because we asked for a soft reboot of the instance, but the error we are seeing here is that a hard reboot was attempted -- specifically, we're trying to update an instance object but all the task states we expect the instance to be in are related to a hard reboot, but the task state we're actually in is for a soft reboot.

We need to take a tour of the compute manager code to understand what happened here. nova-compute is implemented at nova/compute/ in the nova code base. Specifically, ComputeVirtAPI.init_host() sets up the service to start handling compute requests for a specific hypervisor node. As part of startup, this method calls ComputeVirtAPI._init_instance() once per instance on the hypervisor node. This method tries to do some sanity checking for each instance that nova thinks should be on the hypervisor:

  • Detecting if the instance was part of a failed evacuation.
  • Detecting instances that are soft deleted, deleting, or in an error state and ignoring them apart from a log message.
  • Detecting instances which we think are fully deleted but aren't in fact gone.
  • Moving instances we thought were booting, but which never completed into an error state. This happens if nova-compute crashes during the instance startup process.
  • Similarly, instances which were rebuilding are moved to an error state as well.
  • Clearing the task state for uncompleted tasks like snapshots or preparing for resize.
  • Finishing the deletion of instances which were partially deleted last time we saw them.
  • And finally, if the instance should be running but isn't, tries to reboot the instance to get it running.
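A heavily condensed, hypothetical skeleton of that triage looks like this (the real ComputeVirtAPI._init_instance() is much longer and uses the real nova state constants, not these made-up strings):

```python
# Hypothetical, heavily condensed skeleton of the _init_instance()
# triage described above; the real nova code handles many more cases
# and uses nova.compute.vm_states / task_states constants.
def init_instance_triage(instance, power_state):
    if instance['vm_state'] in ('soft-delete', 'deleting', 'error'):
        return 'ignore'            # just log and move on
    if instance['task_state'] in ('spawning', 'rebuilding'):
        return 'error'             # boot or rebuild never finished
    if instance['task_state'] in ('image_snapshot', 'resize_prep'):
        return 'clear_task_state'  # abandon the half-done task
    if instance['vm_state'] == 'active' and power_state != 'running':
        return 'reboot'            # should be running but isn't
    return 'ok'

# The case from this post: the instance should be up but libvirt says
# it's shut down, so we head down the reboot path.
assert init_instance_triage(
    {'vm_state': 'active', 'task_state': None}, 'shutdown') == 'reboot'
```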

It is this final state which is relevant in this case -- we think the instance should be running and its not, so we're going to reboot it. We do that by calling ComputeVirtAPI.reboot_instance(). The code which does this work looks like this:

    try_reboot, reboot_type = self._retry_reboot(context, instance)
    current_power_state = self._get_power_state(context, instance)

    if try_reboot:
        LOG.debug("Instance in transitional state (%(task_state)s) at "
                  "start-up and power state is (%(power_state)s), "
                  "triggering reboot",
                  {'task_state': instance.task_state,
                   'power_state': current_power_state},
                  instance=instance)
        self.reboot_instance(context, instance, block_device_info=None,
                             reboot_type=reboot_type)
        return

    [...snip...]

    def _retry_reboot(self, context, instance):
        current_power_state = self._get_power_state(context, instance)
        current_task_state = instance.task_state
        retry_reboot = False
        reboot_type = compute_utils.get_reboot_type(current_task_state,
                                                    current_power_state)

        pending_soft = (current_task_state == task_states.REBOOT_PENDING and
                        instance.vm_state in vm_states.ALLOW_SOFT_REBOOT)
        pending_hard = (current_task_state == task_states.REBOOT_PENDING_HARD
                        and instance.vm_state in vm_states.ALLOW_HARD_REBOOT)
        started_not_running = (current_task_state in
                               [task_states.REBOOT_STARTED,
                                task_states.REBOOT_STARTED_HARD] and
                               current_power_state != power_state.RUNNING)

        if pending_soft or pending_hard or started_not_running:
            retry_reboot = True

        return retry_reboot, reboot_type

So, we ask ComputeVirtAPI._retry_reboot() if a reboot is required, and if so what type. ComputeVirtAPI._retry_reboot() just uses nova.compute.utils.get_reboot_type() (aliased as compute_utils.get_reboot_type) to determine what type of reboot to use. This is the crux of the matter. Read on for a surprising discovery!

nova.compute.utils.get_reboot_type() looks like this:

    def get_reboot_type(task_state, current_power_state):
        """Checks if the current instance state requires a HARD reboot."""
        if current_power_state != power_state.RUNNING:
            return 'HARD'
        soft_types = [task_states.REBOOT_STARTED,
                      task_states.REBOOT_PENDING,
                      task_states.REBOOTING]
        reboot_type = 'SOFT' if task_state in soft_types else 'HARD'
        return reboot_type

So, after all that it comes down to this. If the instance isn't running, then it's a hard reboot. In our case, we shut down the instance but haven't started it yet, so it's not running. This will therefore be a hard reboot. This is where our problem lies -- we chose a hard reboot. The code doesn't blow up until later though -- when we try to do the reboot itself.
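To make that decision concrete, here is the same logic as a self-contained snippet, with the nova constants stubbed out as plain strings for illustration (the real ones live in nova.compute.power_state and nova.compute.task_states):

```python
# Self-contained rendering of the get_reboot_type() decision, with the
# nova state constants stubbed out as plain strings.
RUNNING = 'running'
REBOOT_STARTED, REBOOT_PENDING, REBOOTING = (
    'reboot_started', 'reboot_pending', 'rebooting')

def get_reboot_type(task_state, current_power_state):
    if current_power_state != RUNNING:
        return 'HARD'
    soft_types = [REBOOT_STARTED, REBOOT_PENDING, REBOOTING]
    return 'SOFT' if task_state in soft_types else 'HARD'

# The bug scenario: a soft reboot was requested (task state is
# reboot_started), but nova-compute crashed after the shutdown, so the
# instance is not running -- and we therefore pick a HARD reboot.
assert get_reboot_type(REBOOT_STARTED, 'shutdown') == 'HARD'

# Had the instance still been running, the same task state would have
# stayed a soft reboot.
assert get_reboot_type(REBOOT_STARTED, RUNNING) == 'SOFT'
```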

    @wrap_exception()
    @reverts_task_state
    @wrap_instance_event
    @wrap_instance_fault
    def reboot_instance(self, context, instance, block_device_info,
                        reboot_type):
        """Reboot an instance on this host."""
        # acknowledge the request made it to the manager
        if reboot_type == "SOFT":
            instance.task_state = task_states.REBOOT_PENDING
            expected_states = (task_states.REBOOTING,
                               task_states.REBOOT_PENDING,
                               task_states.REBOOT_STARTED)
        else:
            instance.task_state = task_states.REBOOT_PENDING_HARD
            expected_states = (task_states.REBOOTING_HARD,
                               task_states.REBOOT_PENDING_HARD,
                               task_states.REBOOT_STARTED_HARD)

        context = context.elevated()
        LOG.info(_LI("Rebooting instance"), context=context, instance=instance)

        block_device_info = self._get_instance_block_device_info(context,
                                                                 instance)
        network_info = self.network_api.get_instance_nw_info(context, instance)

        self._notify_about_instance_usage(context, instance, "reboot.start")

        instance.power_state = self._get_power_state(context, instance)
        [...snip...]

And there's our problem. We have a reboot_type of HARD, which means we set the expected_states to those matching a hard reboot. However, the state the instance is actually in will be one correlating to a soft reboot, because that's what the user requested. We therefore experience an exception when we try to save our changes to the instance. This is the exception we saw above.

The fix in my patch is simply to change the current task state for an instance in this situation to one matching a hard reboot. It all just works then.

So why do we decide to use a hard reboot if the current power state is not RUNNING? This code was introduced in this patch and there isn't much discussion in the review comments as to why a hard reboot is the right choice here. That said, we already fall back to a hard reboot in error cases of a soft reboot inside the libvirt driver, and a hard reboot requires less trust of the surrounding state for the instance (block device mappings, networks and all those side effects mentioned at the very beginning), so I think it is the right call.

In conclusion, we use a hard reboot for soft reboots that fail, and a nova-compute crash during a soft reboot counts as one of those failure cases. So, when nova-compute detects a failed soft reboot, it converts it to a hard reboot and tries again.

Tags for this post: openstack reboot nova nova-compute

Related posts: One week of Nova Kilo specifications; Specs for Kilo; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic


Suicide doesn’t take away the pain, it gives it to someone else

This is something that I feel quite strongly about. Both of my parents tried to commit suicide when I was young, at different times and stages of my life. The first attempt was when I was about 11 and I don’t remember too much about it; there was a lot of pain flying around the family at that time and I was probably shielded from the details. The second parent (by then long divorced from the other parent) tried when I was 21 and away at uni in a different city. That one I remember vividly, even though I wasn’t there.

My reactions to the second were still those of a child. Perhaps when it’s a parent, one’s reactions are always those of a child. For me the most devastating thought was a purely selfish one (as fits a child) “Do I mean that little to them? Am I not even worth staying alive for?” The pain of that thought was overwhelming.

At the time I was young, saw myself as an optimist and simply could not relate in any way to the amount of pain that would bring one to such an action. I was angry. I described suicide as “the most selfish act anyone could do”.

Now decades of time and a world of life experience later, I have stared into that dark abyss myself and I know the pain that leads one there. I know how all-encompassing the pain and darkness seems and how the needs of others fade. An end to the pain is all one wants and it seems inconceivable that one’s life has any relevance any more. In fact, one can even argue to oneself that others would be better off without one there.

In those dark times it was the certain knowledge of that pain I had experienced myself as one (almost) left behind that kept me from that road more firmly than anything else. By then I was a parent myself and there was just no way I was going to send my children the message that they meant so little to me they were not even worth living for.  Although living seemed to be the hardest thing I could do, there was no hesitation that they were worth it.

And beyond the children there are always others. Others who will be affected by a suicide, no matter of whom. None of us is truly alone. We all have parents, we may have siblings. Even if all our family is gone and we feel we have no friends, it is likely that there are people who care. The person at the corner shop from whom you buy milk on weekends and who may think “should I have known? Is there anything I could have done?” Even if you can argue that there is no-one that would notice or care, let’s be frank, someone is going to have to deal with the body and winding up of financial and other affairs. And I’m sure it’s really going to make their day!

Whenever I hear about trains being delayed because of incidents on the track I am immediately concerned for those on the train, not least of all the drivers. What have they ever done to that person to deserve the images that will now be impossible to erase from memory, which will haunt their nights and dark moments, and which may lead them to require therapy?

There are many people, working for many organisations, some sitting at telephones in shifts 24 hrs a day, who want more than anything else to help people wrestling with these dark issues. They care. They really do. About everyone.

Help is always available. So let’s all acknowledge that suicide Always causes pain to others.

Need help?

APM:Plane 3.4.0 released

The ArduPilot development team is proud to announce the release of version 3.4.0 of APM:Plane. This is a major release with a lot of changes so please read the notes carefully!

First release with EKF by default

This is also the first release that enables the EKF (Extended Kalman Filter) for attitude and position estimation by default. This has been in development for a long time, and significantly improves flight performance. You can still disable the EKF if you want to using the AHRS_EKF_USE parameter, but it is strongly recommended that you use the EKF. Note that if an issue is discovered with the EKF in flight it will automatically be disabled and the older DCM system will be used instead. That should be very rare.

In order to use the EKF we need to be a bit more careful about the setup of the aircraft. That is why in the last release we enabled arming and pre-arm checks by default. Please don't disable the arming checks, they are there for very good reasons.

Last release with APM1/APM2 support

This will be the last major release that supports the old APM1/APM2 AVR based boards. We have finally run out of flash space and memory. In the last few releases we spent quite a bit of time trying to squeeze more and more into the small flash space of the APM1/APM2, but it had to end someday if ArduPilot is to continue to develop. I am open to the idea of someone else volunteering to keep doing development of APM1/APM2 so if you have the skills and inclination do please get in touch. Otherwise I will only do small point release changes for major bugs.

Even to get this release onto the APM1/APM2 we had to make sacrifices in terms of functionality. The APM1/APM2 release is missing quite a few features that are on the Pixhawk and other boards. For example:

  • no rangefinder support for landing
  • no terrain following
  • no EKF support
  • no camera control
  • no CLI support
  • no advanced failsafe support
  • no HIL support (sorry!)
  • support for far fewer GPS types

That is just the most obvious set of major features missing on APM1/APM2; there are also numerous other smaller things where we need to take shortcuts. Some of these features were available on older APM1/APM2 releases but needed to be removed to allow us to squeeze the new release onto the board. So if you are happy with a previous release on your APM2 and want a feature that is in that older release and not in this one, then perhaps you shouldn't upgrade.

PID Tuning

While most people are happy with autotune to tune the PIDs for their planes, it is nice also to be able to do fine tuning by hand. This release includes new dataflash and mavlink messages to help with that tuning. You can now see the individual contributions of the P, I and D components of each PID in the logs, allowing you to get a much better picture of the performance.

A simple application of this new tuning is that you can easily see if your trim is off. If the Pitch I term is constantly contributing a significant positive factor then you know that ArduPilot is having to constantly apply up elevator, which means your plane is nose heavy. The same goes for roll, and the same data can also be used to help tune your ground steering.
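As a sketch of that kind of check (the sample values and the 0.5 threshold here are made up for illustration, not actual ArduPilot log fields or limits):

```python
# Hypothetical check: if the logged pitch I-term sits well above zero
# on average, the autopilot is holding constant up elevator.
def mean(samples):
    return sum(samples) / len(samples)

# Made-up pitch I-term samples from a level cruise
pitch_i = [0.9, 1.1, 1.0, 0.8, 1.2]

bias = mean(pitch_i)
if bias > 0.5:   # threshold chosen arbitrarily for this sketch
    print("pitch I term biased positive: plane may be nose heavy")
```

The same idea applied to the roll I term would flag a lateral balance problem.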

Vibration Logging

This release includes a lot more options for diagnosing vibration issues. You will notice new VIBRATION messages in MAVLink and VIBE messages in the dataflash logs. Those give you a good idea of your (unfiltered) vibration levels. For really detailed analysis you can set up your LOG_BITMASK to include raw logging, which gives you every accel and gyro sample on your Pixhawk. You can then do an FFT on the result and plot the distribution of vibration level with frequency. That is great for finding the cause of vibration issues. Note that you need a very fast microSD card for that to work!
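A minimal version of that FFT analysis might look like this (the sample rate and accel data here are synthetic stand-ins; real samples would come out of the raw dataflash log):

```python
import numpy as np

fs = 1000.0                       # assumed sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)
# Synthetic Z-accel: gravity plus a fake 180 Hz vibration component
accel = 9.81 + 0.5 * np.sin(2 * np.pi * 180 * t)

# Remove the DC offset, then look at the magnitude spectrum
spectrum = np.abs(np.fft.rfft(accel - accel.mean()))
freqs = np.fft.rfftfreq(accel.size, 1 / fs)
print("dominant vibration at %.0f Hz" % freqs[spectrum.argmax()])
```

With real log data the peak frequency points you at the vibration source (prop imbalance, motor mount resonance, and so on).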

Rudder Disarm

This is the first release that allows you to disarm using the rudder if you want to. It isn't enabled by default (due to the slight risk of accidentally disarming while doing aerobatics). You can enable it with the ARMING_RUDDER parameter by setting it to 2. It will only allow you to disarm if the autopilot thinks you are not flying at the time (thanks to the "is_flying" heuristics from Tom Pittenger).

More Sensors

This release includes support for a bunch more sensors. It now supports 3 different interfaces for the LightWare range of Lidars (serial, I2C and analog), and also supports the very nice Septentrio RTK dual-frequency GPS (the first dual-frequency GPS we have support for). It also supports the new "blue label" Lidar from Pulsed Light (both on I2C and PWM).

For the uBlox GPS, we now have a lot more configurability of the driver, with the ability to set the GNSS mode for different constellations. Also in the uBlox driver we support logging of the raw carrier phase and pseudo range data, which allows for post-flight RTK analysis with raw-capable receivers for really accurate photo missions.

Better Linux support

This release includes a lot of improvements to the Linux based autopilot boards, including the NavIO+, the PXF and ERLE boards, the BBBMini and the new RasPilot board. If you like the idea of flying with Linux then please try it out!

On-board compass calibrator

We also have a new on-board compass calibrator, which also adds calibration for soft iron effects, allowing for much more accurate compass calibration. Support for starting the compass calibration in the various ground stations is still under development, but it looks like this will be a big improvement to compass calibration.

Lots of other changes!

The above list is just a taste of the changes that have gone into this release. Thousands of small changes have gone into this release with dozens of people contributing. Many thanks to everyone who helped!

Other key changes include:

  • fixed return point on geofence breach
  • enable messages for MAVLink gimbal support
  • use 64 bit timestamps in dataflash logs
  • added realtime PID tuning messages and PID logging
  • fixed a failure case for the px4 failsafe mixer
  • added DSM binding support on Pixhawk
  • added ALTITUDE_WAIT mission command
  • added vibration level logging
  • ignore low voltage failsafe while disarmed
  • added delta velocity and delta angle logging
  • fix LOITER_TO_ALT to verify headings towards waypoints within the loiter radius
  • allow rudder disarm based on ARMING_RUDDER parameter
  • fix default behaviour of flaps
  • prevent mode switch changes changing WP tracking
  • make TRAINING mode obey stall prevention roll limits
  • disable TRIM_RC_AT_START by default
  • fixed parameter documentation spelling errors
  • send MISSION_ITEM_REACHED messages on waypoint completion
  • fixed airspeed handling in SITL simulators
  • enable EKF by default on plane
  • Improve gyro bias learning rate for plane and rover
  • Allow switching primary GPS instance with 1 sat difference
  • added NSH over MAVLink support
  • added support for mpu9250 on pixhawk and pixhawk2
  • Add support for logging ublox RXM-RAWX messages
  • lots of updates to improve support for Linux based boards
  • added ORGN message in dataflash
  • added support for new "blue label" Lidar
  • switched to real hdop in uBlox driver
  • improved auto-config of uBlox
  • raise accel discrepancy arming threshold to 0.75
  • improved support for tcp and udp connections on Linux
  • switched to delta-velocity and delta-angles in DCM
  • improved detection of which accel to use in EKF
  • improved auto-detections of flow control on pixhawk UARTs
  • Failsafe actions are not executed if already on final approach or land.
  • Option to trigger GCS failsafe only in AUTO mode.
  • added climb/descend parameter to CONTINUE_AND_CHANGE_ALT
  • added HDOP to uavcan GPS driver
  • improved sending of autopilot version
  • prevent motor startup with bad throttle trim on reboot
  • log zero rangefinder distance when unhealthy
  • added PRU firmware files for BeagleBoneBlack port
  • fix for recent STORM32 gimbal support
  • changed sending of STATUSTEXT severity to use correct values
  • added new RSSI library with PWM input support
  • fixed MAVLink heading report for UAVCAN GPS
  • support LightWare I2C rangefinder on Linux
  • improved staging of parameters and formats on startup to dataflash
  • added new on-board compass calibrator
  • improved RCOutput code for NavIO port
  • added support for Septentrio GPS receiver
  • support DO_MOUNT_CONTROL via command-long interface
  • added CAM_RELAY_ON parameter
  • moved SKIP_GYRO_CAL functionality to INS_GYR_CAL
  • added detection of bad lidar settings for landing

Note that the documentation hasn't yet caught up with all the changes in this release. We are still working on that, but meanwhile if you see a feature that interests you and it isn't documented yet then please ask.

September 23, 2015

SNR and Eb/No Worked Example

German Hams Helmut and Alfred have been doing some fine work with FreeDV 700B at power levels as low as 50mW and SNRs down to 0dB over a 300km path. I thought it might be useful to show how SNR relates to Eb/No and Bit Error Rate (BER). Also I keep having to work this out myself on scraps of paper so nice to get it written down somewhere I can Google.

This plot shows Eb/No versus BER for a bunch of modems and channels. The curves show how much energy per bit (Eb/No) we need for a certain Bit Error Rate (BER). Click for a larger version.

The lower three curves show the performance of modems in an AWGN channel – a channel that just has additive noise (like a very slow fading HF channel or VHF). The Blue curve just above the Red (ideal QPSK) is the cohpsk modem in an AWGN channel. Time for some math:

The energy per bit is Eb = power/bit rate = S/Rb. The total noise the demod sees is No (noise power in a 1 Hz bandwidth) multiplied by the bandwidth B, so N = NoB. Re-arranging a bit we get:

    SNR = S/N = EbRb/(NoB)

or in dB:

    SNR(dB) = Eb/No(dB) + 10log10(Rb/B)

So for FreeDV 700B, the bit rate Rb = 700, B = 3000 Hz (for SNR in a 3000Hz bandwidth) so we get:

    SNR = Eb/No – 6.3

Now, say we need a BER of 2% or 0.02 for speech, the lower Blue curve says we need an Eb/No = 4dB, so we get:

    SNR = 4 – 6.3 = -2.3dB

So if the modem is working down to “just” 0dB we are about 2dB worse than theoretical. This is due to the extra bandwidth taken by the pilot symbols (which translates to 1.5dB), some implementation “loss” in the sync algorithms, and non linearities in the system.
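The arithmetic above is easy to script; here is a quick check of the numbers, using the FreeDV 700B figures from the text:

```python
import math

def snr_db(eb_no_db, rb, bw):
    """SNR in a bandwidth bw (Hz) for bit rate rb (bit/s), given Eb/No in dB."""
    return eb_no_db + 10 * math.log10(rb / bw)

# FreeDV 700B: Rb = 700 bit/s, SNR measured in B = 3000 Hz
print(round(10 * math.log10(700 / 3000), 1))  # bandwidth term: -6.3 dB
print(round(snr_db(4.0, 700, 3000), 1))       # at the 2% BER point: -2.3 dB
```

The same function works for any modem: plug in the Eb/No read off the BER curve and the bit rate, and out comes the SNR in your chosen noise bandwidth.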

I thought it worth explaining this a little more. These skills will be just as important to people experimenting with the radios of the 21st century as Ohm’s law was in the 20th.

LUV Main October 2015 Meeting: Networking Fundamentals / High Performance Open Source Storage

Oct 6 2015 18:30
Oct 6 2015 20:30

6th Floor, 200 Victoria St. Carlton VIC 3053


• Fraser McGlinn, Networking Fundamentals, Troubleshooting and Packet Analysis

• Sam McLeod, High Performance, Open Source Storage

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


read more

First trail run

So, now I trail run apparently. This was a test run for a hydration vest (thanks Steven Hanley for the loaner!). It was fun, but running up hills is evil.

Interactive map for this route.

Tags for this post: blog canberra trail run

Related posts: Second trail run; Chicken run; Update on the chickens; Boston; Random learning for the day


Reset keyboard shortcuts in GNOME

Recently we had a Korora user ask how to reset the keybindings in GNOME, which they had changed.

I don’t think that the shortcuts program has a way to reset them, but you can use dconf-editor.

Open the dconf-editor program and browse to:


Anything that’s been modified should be in bold font. Select it, then at the bottom right click the “Set to Default” button.

Hope that helps!

Government as an API: how to change the system

A couple of months ago I gave a short speech about Gov as an API at an AIIA event. Basically I believe that unless we make government data, content and transaction services API enabled and mashable, then we are simply improving upon the status quo. 1000 services designed to be much better are still 1000 services that could be integrated for users, automated at the backend, or otherwise transformed into part of a system rather than the unique siloed systems that we have today. I think the future is mashable government, and the private sector has already gone down this path so governments need to catch up!

When I rewatched it I felt it captured my thoughts around this topic really well, so below is the video and the transcript. Enjoy! Comments welcome.

The first thing is I want to talk about gov as an API. This is kind of like on steroids, but this goes way above and beyond data and gets into something far more profound. But just a step back, to the concept of Government as a platform. Around the world a lot of Governments have adopted the idea of Government as a platform: let’s use common platforms, let’s use common standards, let’s try and be more efficient and effective. It’s generally been interpreted as creating platforms within Government that are common. But I think that we can do a lot better.

So Government as an API is about making Government one big conceptual API. Making the stuff that Government does discoverable programmatically, making the stuff that it does consumable programmatically, making Government the platform or a platform on which industry and citizens and indeed other Governments can actually innovate and value add. So there are many examples of this which I’ll get to but the concept here is getting towards the idea of mashable Government. Now I’m not here representing my employers or my current job or any of that kind of stuff. I’m just here speaking as a geek in Government doing some cool stuff. And obviously you’ve had the Digital Transformation Office mentioned today. There’s stuff coming about that but I’m working in there at the moment doing some cool stuff that I’m looking forward to telling you all about. So keep an eye out.

But I want you to consider the concept of mashable Government. So Australia is a country where we have a fairly egalitarian democratic view of the world. So in our minds and this is important to note, in our minds there is a role for Government. Now there’s obviously some differences around the edges about how big or small or how much I should do or shouldn’t do or whatever but the concept is that, that we’re not going to have Government going anywhere. Government will continue to deliver things, Government has a role of delivering things. The idea of mashable Government is making what the Government does more accessible, more mashable. As a citizen when you want to find something out you don’t care which jurisdiction it is, you don’t care which agency it is, you don’t care in some cases you know you don’t care who you’re talking to, you don’t care what number you have to call, you just want to get what you need. Part of the problem of course is what are all the services of Government? There is no single place right now. What are all of the, you know what’s all the content, you know with over a thousand websites or more but with lots and lots of websites just in the Federal Government and thousands more across the state and territories, where’s the right place to go? And you know sometimes people talk about you know what if we had improved SEO? Or what if we had improved themes or templates and such. If everyone has improved SEO you still have the same exact problem today, don’t you? You do a google search and then you still have lots of things to choose from and which one’s authoritative? Which one’s the most useful? Which one’s the most available?

The concept of Government as an API is making content, services, API’s, data, you know the stuff that Government produces either directly or indirectly more available to collate in a way that is user centric. That actually puts the user at the centre of the design but then also puts the understanding that other people, businesses or Governments will be able to provide value on top of what we do. So I want to imagine that all of that is available and that everything was API enabled. I want you to imagine third party re-use new applications, I mean we see small examples of that today. So to give you a couple of examples of where Governments already experimenting with this idea. obviously my little baby is one little example of this, it’s a microcosm. But whilst ever data, open data was just a list of things, a catalogue of stuff it was never going to be that high value.

So what we did when we re-launched a couple of years ago was we said what makes data valuable to people? Well programmatic access. Discovery is useful but if you can’t get access to it, it’s almost just annoying to be able to find it but not be able to access it. So how do we make it most useful? How do we make it most reusable, most high value in capacity shall we say? In potentia? So it was about programmatic access. It was about good meta data, it was about making it so it’s of value to citizens and industry but also to Government itself. If a Government agency needs to build a service, a citizen service to do something, rather than building an API to an internal system that’s privately available only to their application which would cost them money you know they could put the data in. Whether it’s spatial or tabular and soon to be relational, you know different data types have different data provision needs so being able to centralise that function reduces the cost of providing it, making it easy for agencies to get the most out of their data, reduce the cost of delivering what they need to deliver on top of the data also creates an opportunity for external innovation. And I know that there’s already been loads of applications and analysis and uses of data that’s on and it’s only increasing every day. Because we took open data from being a retrospective, freedom of information, compliance issue, which was never going to be sexy, right? We moved it towards how you can do things better. This is how we can enable innovation. This is how agencies can find each other’s data better and re-use it and not have to keep continually reinventing the wheel. So we built a business proposition for it that started to make it successful. So that’s been cool.

There’s been experimentation of gov as an API in the ATO. With the SBR API. With the ABN lookup or ABN lookup API. There’s so many businesses out there. I’m sure there’s a bunch in the room. When you build an application where someone puts in a business name into an app or into an application or a transaction or whatever. You can use the ABN lookup API to validate the business name. So you know it’s a really simple validation service, it means that you don’t have, as unfortunately we have right now in the whole of Government contracts data set, 279 different spellings for the Department of Defence. You can start to actually get that, use what Government already has as validation services, as something to build upon. You know I really look forward to having whole of Government up to date spatial data that’s really available so people can build value on top of it. That’ll be very exciting. You know at some point I hope that happens but. Industry experimented with this with the energy ratings data set. It’s a very quick example, they had to build an app as you know Ministers love to see. But they built a very, very useful app to actually compare when you’re in the store. You know your fridges and all the rest of it to see what’s best for you. But what they found, by putting the data on they saved money immediately and there’s a brilliant video if you go looking for this that the Department of Industry put together with Martin Hoffman that you should have a look at, which is very good. But what they found is by having the data out there, all the companies, all the retail companies that have to by law put the energy rating of every electrical device they sell on their brochures traditionally they did it by googling, right? What’s the energy rating of this, whatever other retail companies are using we’ll use that.

Completely out of date and unauthorised and not true, inaccurate. So by having the data set publicly available kept up to date on a daily basis, suddenly they were able to massively reduce the cost of compliance for a piece of regulation you know, so it actually reduced red tape. And then other applications started being developed that were very useful and you know Government doesn’t have all the answers and no one pretends that. People love to pretend also that Government also has no answers. I think there’s a healthy balance in between. We’ve got a whole bunch of cool innovators in Government doing cool stuff but we have to work in partnership and part of that includes using our stuff to enable cool innovation out there.

ABS obviously does a lot of work with API’s and that’s been wonderful to see. But also the National Health Services Directory. I don’t know who, how many people here know that? But you know it’s a directory of thousands, tens of thousands, of health services across Australia. All API enabled. Brilliant sort of work. So API enabled computing and systems and modular program design, agile program design is you know pretty typical for all of you. Because you’re in industry and you’re kind of used to that and you’re used to getting up to date with the latest thing that’ll make you competitive.

Moving Government towards that kind of approach will take a little longer but you know, but it has started. But if you take an API enabled approach to your systems design it is relatively easy to progress to taking an API approach to exposing that publically.

So, I think I only had ten minutes so imagine if all the public Government information services were carefully, were usefully right, usefully discoverable. Not just through using a google search, but with appropriate metadata, and even consumable in some cases, you know what if you could actually consume some of those transaction systems or information or services and be able to then re-use it somewhere else. Because when someone is you know about to I don’t know, have a baby, they google for it first right and then they go to probably a baby, they don’t think to come to government in the first instance. So we need to make it easier for Government to go to them. When they go to, why wouldn’t we be able to present to them the information that they need from Government as well. This is where we’re starting to sort of think when we start following the rabbit warren of gov as an API.

So, start thinking about what you would use. If all of these things were discoverable or if even some of them were discoverable and consumable, how would you use it? How would you innovate? How would you better serve your customers by leveraging Government as an API? So Government has and always will play a part. This is about making Government just another platform to help enable our wonderful egalitarian and democratic society. Thank you very much.

Postnote: adopting APIs as a strategy, not just a technical side effect is key here. Adopting modular architecture so that agencies can adopt the best of breed components for a system today, tomorrow and into the future, without lock in. I think just cobbling APIs on top of existing systems would miss the greater opportunity of taking a modular architecture design approach which creates more flexible, adaptable, affordable and resilient systems than the traditional single stack solution.

September 22, 2015

More JSF Thoughts, Theme Hospital, George W. Bush Vs Tony Abbott, and More

- people in charge of running the PR behind the JSF program have handled it really badly at times. If anyone wants to really put the BVR combat debate back into perspective they should point back to the history of other 'stealth aircraft' such as the B-2 instead of simply repeating the mantra that it will work in the future. People can judge the past; they can only speculate about the future and watch as problem after problem seems to be highlighted with the program

- for a lot of countries the single engined nature of the aircraft makes little sense. It will be interesting to see how the end game plays out. It seems clear that some countries have been coerced into purchasing the JSF rather than the JSF earning its stripes entirely on merit

Norway to reduce F-35 order?

F-35 - Runaway Fighter - the fifth estate

- one thing I don't like about the program is the fact that if there is a crack in the security of the program all countries participating in the program are in trouble. Think about computer security. Once upon a time it was claimed that Apple's Mac OS X and Google's Android were impervious to security threats. It's become clear that these beliefs are nonsensical. If all allies switch to stealth based technologies all enemies will switch to trying to find a way to defeat it

- one possible attack against stealth aircraft I've been thinking of revolves around sensory deprivation of the aircraft's sensors. It is said that the AESA RADAR capability of the JSF is capable of frying other aircraft's electronics. I'd be curious to see how attacks against airspeed, attitude, and other sensors would work. Both the B-2 and F-22 have had trouble with this...

- I'd do what the US military does, to be honest. Purchase in limited numbers early on and test it, or let others do the same thing. Watch and see how the program progresses before joining in

- never, ever make the assumption that the US will give back technology that you have helped to develop alongside them if they have iterated on it. A good example of this is the Japanese F-2 program which used higher levels of composites in the airframe structure and the world's first AESA RADAR. Always have a backup or keep a local research effort going even if the US promises to transfer knowledge back to a partner country

- as I've stated before the nature of deterrence as a core defensive theory means that you are effectively still at war because it diverts resources from other industries back into defense. I'm curious to see how economies would change if everyone mutually agreed to drop weapons and platforms with projected power capabilities (a single US aircraft carrier alone costs about $14B USD, a B-2 bomber $2B, a F-22 fighter $250M USD, a F-35 JSF ~$100M USD, etc...) and only worried about local, regional defense...

- people often accuse the US of poking into areas where they shouldn't. The problem is that they have so many defense agreements that it's difficult for them not to. They don't really have a choice sometimes. The obvious thing is whether or not they respond in a wise fashion

- in spite of what armchair generals keep on saying the Chinese and Russians would probably make life at least a little difficult for the US and her allies if things came to a head. It's clear that a lot of weapons platforms and systems that are now being pursued are struggles for everyone who is engaged in them (technically as well as cost wise) and they already have some possible counter measures in place. How good they actually are is the obvious question though. I'm also curious how good their OPSEC is. If they're able to seal off their scientists entirely in internal test environments then details regarding their programs and capabilities will be very difficult to obtain owing to the heavy dependence by the West purely on SIGINT/COMINT capabilities. They've always had a hard time gaining HUMINT but not the other way around...

- some analysts/journalists say that the 'Cold War' never really ended, that it's effectively been in hibernation for a while. The interesting thing is that in spite of what China has said regarding a peaceful rise it is pushing farther out with its weapons systems and platforms. You don't need an aircraft carrier to defend your territory. You just need longer range weapons systems and platforms. It will be interesting to see how far China chooses to push out; in spite of what is said by some public servants and politicians it is clear that China wants to take a more global role

- technically, the US wins many of the wars that it chooses. Realistically, though, it's not so clear. Nearly every single adversary now engages in longer term, guerrilla style tactics. In Afghanistan, Iraq, Iran, Libya, and elsewhere they've basically been waiting for allied forces to clear out before taking their opportunity

- a lot of claims regarding US defense technology superiority make no sense. If old Soviet era SAM systems are so worthless against US manufactured jets then why bother going to such extents with regard to cyberwarfare when it comes to shutting them down? I am absolutely certain that the claim that some classes of aircraft have never been shot down is untrue

- part of me wonders just exactly how much effort and resources are the Chinese and Russians genuinely throwing at their 5th gen fighter programs. Is it possible that they are simply waiting until most of the development is completed by the West and then they'll 'magically' have massive breakthroughs and begin full scale production of their programs? They've had a history of stealing and reverse engineering a lot of technology for a long time now

- the US defense budget seems exorbitant. True, their requirements are substantially different but look at the way they structure a lot of programs and it becomes obvious why as well. They're often very ambitious with multiple core technologies that need to be developed in order for the overall program to work. Part of me thinks that there is almost a zero sum game at times. They think that they can throw money at some problems and they will be solved. It's not as simple as that. They've been working on some core problems like directed energy weapons and rail guns for a long time now and have had limited success. If they want a genuine chance at this they're better off understanding the problem and then funding the core science. It's much like their space and intelligence programs where a lot of other spin off technologies were subsequently developed

- reading a lot of stuff online and elsewhere it becomes much clearer that both sides often underestimate one another (less often by people in the defense or intelligence community). You should track and watch things based on what people do, not what they say

- a lot of countries just seem to want to stay out of the geo-political game. They don't want to choose sides and couldn't care less. Understandable, seeing the role that both countries play throughout the world now

- the funny thing is that some of the countries that are pushed back (Iran, North Korea, Russia, etc...) don't have much to lose. US defense alone has struggled to identify targets worth bombing in North Korea and how do you force a country to comply if they have nothing left to lose, such as Iran or North Korea? It's unlikely China or Russia will engage in all out attack in the near to medium future. It's likely they'll continue to do the exact same thing and skirt around the edges with cyberwarfare and aggressive intelligence collection

- It's clear that the superpower struggle has been underway for a while now. The irony is that this is a game of economies as well as technology. If the West attempts to compete purely via defense technology/deterrence then part of me fears it will head down the same pathway that the USSR went. It will collapse under the strain of a defense industry (and other industries) that is largely worthless (under most circumstances) and does nothing for the general population. Of course, this is partially offset by a potential new trade pact in the APAC region but I am certain that this will inevitably still be in favour of the US especially with their extensive SIGINT/COMINT capability, economic intelligence, and their use of it in trade negotiations

- you don't really realise how many jobs and how much money are on the line with regards to the JSF program until you do the numbers

An old but still enjoyable/playable game with updates to run under Windows 7

Watching footage about George W. Bush it becomes much clearer that he was somewhat of a clown who realised his limitations. It's not the case with Tony Abbott who can be scary and hilarious at times

Last Week Tonight with John Oliver: Tony Abbott, President of the USA of Australia (HBO)

Must See Hilarious George Bush Bloopers! - VERY FUNNY

Once upon a time I read about a Chinese girl who used a pin in her soldering iron to do extremely fine soldering work. I use solder paste or wire glue. Takes less time and using sticky/masking tape you can achieve a really clean finish!

Lightning network thoughts

I’ve been intrigued by micropayments for, like, ever, so I’ve been following Rusty’s experiments with bitcoin with interest. Bitcoin itself, of course, has a roughly 10 minute delay, and a fee of effectively about 3c per transaction (or $3.50 if you count inflation/mining rewards) so isn’t really suitable for true microtransactions; but pettycoin was going to be faster and cheaper until it got torpedoed by sidechains, and more recently the lightning network offers the prospect of small payments that are effectively instant, and have fees that scale linearly with the amount (so if a $10 transaction costs 3c like in bitcoin, a 10c transaction will only cost 0.03c).

(Why do I think that’s cool? I’d like to be able to charge anyone who emails me 1c, and make $130/month just from the spam I get. Or you could have a 10c signup fee for webservice trials to limit spam but not have to tie everything to your facebook account or undergo Turing trials. You could have an open wifi access point, that people don’t have to register against, and just bill them per MB. You could maybe do the same with tor nodes. Or you could set up bittorrent so that in order to receive a block I pay maybe 0.2c/MB to whoever sent it to me, and I charge 0.2c/MB to anyone who wants a block from me — leechers paying while seeders earn a profit would be fascinating. It’d mean you could set up a webstore to sell apps or books without having to sell your soul to a corporate giant like Apple, Google, Paypal, Amazon, Visa or Mastercard. I’m sure there’s other fun ideas)

A bit over a year ago I critiqued sky-high predictions of bitcoin valuations on the basis that “I think you’d start hitting practical limitations trying to put 75% of the world’s transactions through a single ledger (ie hitting bandwidth, storage and processing constraints)” — which is currently playing out as “OMG the block size is too small” debates. But the cool thing about lightning is that it lets you avoid that problem entirely; hundreds, thousands or millions of transactions over weeks or years can be summarised in just a handful of transactions on the blockchain.

(How does lightning do that? It sets up a mesh network of “channels” between everyone, and provides a way of determining a route via those channels between any two people. Each individual channel is between two people, and each channel is funded with a particular amount of bitcoin, which is split between the two people in whatever way. When you route a payment across a channel, the amount of that payment’s bitcoins moves from one side of the channel to the other, in the direction of the payment. The amount of bitcoins in a channel doesn’t change, but when you receive a payment, the amount of bitcoins on your side of your channels does. When you simply forward a payment, you get more money in one channel, and less in another by the same amount (or less a small handling fee). Some bitcoin-based crypto-magic ensues to ensure you can’t steal money, and that the original payer gets a “receipt”. The end result is that the only bitcoin transactions that need to happen are to open a channel, close a channel, or change the total amount of bitcoin in a channel. Rusty gave a pretty good interview with the “Let’s talk bitcoin” podcast if the handwaving here wasn’t enough background)
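The channel mechanics above can be sketched as a toy model: each channel has a fixed total of funds split between two parties, payments shift the split, and forwarding nodes keep a small cut. This is purely illustrative — names, amounts and the flat 1% fee are made up, and there's no crypto, routing or real protocol here:

```python
# Toy model of lightning-style channels (illustration only).

class Channel:
    def __init__(self, a, b, a_funds, b_funds):
        self.balance = {a: a_funds, b: b_funds}   # total never changes

    def pay(self, src, dst, amount):
        if self.balance[src] < amount:
            raise ValueError("not enough funds on this side of the channel")
        self.balance[src] -= amount               # payer's side shrinks...
        self.balance[dst] += amount               # ...payee's side grows

def route_payment(channels, hops, amount, fee=0.01):
    """Push a payment along a path; each forwarding node keeps a small cut."""
    for i, (chan, (src, dst)) in enumerate(zip(channels, hops)):
        if i > 0:
            amount -= amount * fee                # handling fee at each hop
        chan.pay(src, dst, amount)
    return amount                                 # what the recipient gets

# Alice buys a $5 coffee from Emma, routed through Bob
ab = Channel('alice', 'bob', 200, 0)
be = Channel('bob', 'emma', 600, 0)
received = route_payment([ab, be], [('alice', 'bob'), ('bob', 'emma')], 5.00)
```

After the payment, Alice's side of her channel is down $5, Emma's side of hers is up $4.95, and Bob is up the 5c difference — no on-chain transaction needed.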

Of course, this doesn’t work very well if you’re only spending money: it doesn’t take long for all the bitcoins on your lightning channels to end up on the other side, and at that point you can’t spend any more. If you only receive money over lightning, the reverse happens, and you’re still stuck just as quickly. It’s still marginally better than raw bitcoin, in that you have two bitcoin transactions to open and close a channel worth, say, $200, rather than forty bitcoin transactions, one for each $5 you spend on coffee. But that’s only a fairly minor improvement.

You could handwave that away by saying “oh, but once lightning takes off, you’ll get your salary paid in lightning anyway, and you’ll pay your rent in lightning, and it’ll all be circular, just money flowing around, lubricating the economy”. But I think that’s unrealistic in two ways: first, it won’t be that way to start with, and if things don’t work when lightning is only useful for a few things, it will never take off; and second, money doesn’t flow around the economy completely fluidly, it accumulates in some places (capitalism! profits!) and drains away from others. So it seems useful to have some way of making degenerate scenarios actually work — like someone who only uses lightning to spend money, or someone who receives money by lightning but only wants to spend cold hard cash.

One way you can do that is if you imagine there’s someone on the lightning network who’ll act as an exchange — who’ll send you some bitcoin over lightning if you send them some cash from your bank account, or who’ll deposit some cash in your bank account when you send them bitcoins over lightning. That seems like a pretty simple and realistic scenario to me, and it makes a pretty big improvement.

I did a simulation to see just how well that actually works out. With “Alice” as a coffee consumer, who does nothing with lightning but buy $5 espressos from “Emma” and refill her lightning wallet by exchanging cash with “Xavier” who runs an exchange, converting dollars (or gold or shares etc) to lightning funds. Bob, Carol and Dave run lightning nodes and take a 1% cut of any transactions they forward. I uploaded a video to youtube that I think helps visualise the payment flows and channel states (there’s no sound):

It starts off with Alice and Xavier putting $200 in channels in the network; Bob, Carol and Dave putting in $600 each, and Emma just waiting for cash to arrive. The statistics box in the top right tracks how much each player has on the lightning network (“ln”), how much profit they’ve made (“pf”), and how many coffees Alice has ordered from Emma. About 3000 coffees later, it ends up with Alice having spent about $15,750 in real money on coffee ($5.05/coffee), Emma having about $15,350 in her bank account from making Alice’s coffees ($4.92/coffee), and Bob, Carol and Dave having collectively made about $400 profit on their $1800 investment (about 22%, or the $0.13/coffee difference between what Alice paid and Emma received). At that point, though, Bob, Carol and Dave have pretty much all the funds in the lightning network, and since they only forward transactions but never initiate them, the simulation grinds to a halt.

You could imagine a few ways of keeping the simulation going: Xavier could refresh his channels with another $200 via a blockchain transaction, for instance. Or Bob, Carol and Dave could buy coffees from Emma with their profits. Or Bob, Carol and Dave could cash some of their profits out via Xavier. Or maybe they buy some furniture from Alice. Basically, whatever happens, you end up relying on “other economic activity” happening either within lightning itself, or in bitcoin, or in regular cash.

But grinding to a halt after earning 22% and spending/receiving $15k isn’t actually too bad even as it is. So as a first pass, it seems like a pretty promising indicator that lightning might be feasible economically, as well as technically.

One somewhat interesting effect is that the profits don’t get distributed particularly evenly — Bob, Carol and Dave each invest $600 initially, but make $155.50 (25.9%), $184.70 (30.7%) and $52.20 (8.7%) respectively. I think that’s mostly a result of how I chose to route payments — it optimises the route to choose channels with the most funds in order to avoid payments getting stuck, and Dave just ends up handling less transaction volume. Having a better routing algorithm (that optimises based on minimum fees, and relies on channel fees increasing when they become unbalanced) might improve things here. Or it might not, and maybe Dave needs to quote lower fees in general or establish a channel with Xavier in order to bring his profits up to match Bob and Carol.

Building a Rope Bridge Using Quadcopters

Or, how to do something really useful with these critters…

Quadcopters are regularly in the news, as they’re fairly cheap and lots of people are playing about with them and quite often creating a nuisance or even dangerous situations. I suppose it’s a phase, but I don’t blame people for wondering what positive use quadcopters can have.

At STEM and Management University ETH Zurich (Switzerland), software tools have been developed to calculate the appropriate structure for a rope bridge, after a physical location has been measured up. The resulting structure is also virtually tested before the quadcopters start, autonomously, with the actual build.

The finished physical structure can support people crossing it. Imagine this being used in disaster areas, to help save people. Just one example… quite fabulous, isn’t it!

The experiments are done in the Flying Machine Arena of ETH Zurich, a 10x10x10 meter space with fast motion capture cameras.

Camp Cottermouth

I spent the weekend at a Scout camp at Camp Cottermouth. The light on the hills here in the mornings is magic.


Interactive map for this route.

Tags for this post: blog pictures 20150920 photo canberra bushwalk


September 21, 2015

Mocking python objects and object functions using both class-level and function-level mocks

Had some fun solving an issue with partitions larger than 2Tb, and came across a little gotcha when it comes to mocking in python when a) you want to mock both an object and a function in that object, and b) when you want to mock.patch.object at both the test class and test method level.

Say you have a function you want to test that looks like this:

def make_partitions(...):
        dp = disk_partitioner.DiskPartitioner(...)

where the DiskPartitioner class looks like this:

class DiskPartitioner(object):

    def __init__(self, ...):

    def add_partition(self, ...):

and you have existing test code like this:

@mock.patch.object(utils, 'execute')
class MakePartitionsTestCase(test_base.BaseTestCase):

and you want to add a new test function with an additional patch just for that test.

You want to verify that the class is instantiated with the right options, and you need to mock the add_partition method as well. How do you use the existing test class (with the mock of the execute function), add a new mock for the DiskPartitioner.add_partition function, and the __init__ of the DiskPartitioner class?

After a little trial and error, this is how:

    @mock.patch.object(disk_partitioner, 'DiskPartitioner')
    def test_make_partitions_with_gpt(self, mock_dp, mock_exc):

        # Need to mock the function as well
        mock_dp.add_partition = mock.Mock(return_value=None)
        disk_utils.make_partitions(...)   # Function under test

Things to note:

1) The ordering of the mock parameters to test_make_partitions_with_gpt isn't immediately intuitive (at least to me).  You specify the function level mocks first, followed by the class level mocks.

2) You need to manually mock the instance method of the mocked class.  (i.e. the add_partition function)
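Here's a self-contained sketch of the same pattern. The "modules" are faked with SimpleNamespace so the example runs on its own; the names mirror the post, not the real ironic code:

```python
import types
import unittest
from unittest import mock

# Stand-in "modules" (made up for this sketch)
utils = types.SimpleNamespace(execute=lambda cmd: None)

class DiskPartitioner(object):
    def __init__(self, device):
        self.device = device
    def add_partition(self, size):
        raise RuntimeError("real partitioning -- should be mocked in tests")

disk_partitioner = types.SimpleNamespace(DiskPartitioner=DiskPartitioner)

def make_partitions(device):
    dp = disk_partitioner.DiskPartitioner(device)
    dp.add_partition(1024)
    utils.execute('partprobe')

@mock.patch.object(utils, 'execute')                         # class-level mock
class MakePartitionsTestCase(unittest.TestCase):

    @mock.patch.object(disk_partitioner, 'DiskPartitioner')  # function-level mock
    def test_make_partitions(self, mock_dp, mock_exc):
        # Function-level mocks arrive first, class-level mocks last.
        make_partitions('/dev/sda')
        mock_dp.assert_called_once_with('/dev/sda')          # __init__ args
        mock_dp.return_value.add_partition.assert_called_once_with(1024)
        mock_exc.assert_called_once_with('partprobe')
```

Run it with `python -m unittest` against the file. The function-level decorator sits closest to the test method, so its mock is the first argument after self.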

You can see the whole enchilada over here in the review.

September 20, 2015

Codec 2 Masking Model Part 2

I’ve been making steady progress on my new ideas for amplitude quantisation for Codec 2. The goal is to increase speech quality, in particular for the very low bit rate 700 bit/s modes.

Here are the signal processing steps I’m working on:

The signal processing algorithms I have developed since Part 1 are coloured in blue. I still need to nail the yellow work. The white stuff has been around for years.

Actually I spent a few weeks on the yellow steps but wasn’t satisfied so looked for something a bit easier to do for a while. The progress has made me feel like I am getting somewhere, and pumped me up to hit the tough bits again. Sometimes we need to organise the engineering to suit our emotional needs. We need to see (or rather “feel”) constant progress. Research and Disappointment is hard!

Transformations and Sample Rate Changes

The goal of a codec is to reduce the bit rate, but still maintain some target speech quality. The “quality bar” varies with your application. For my current work low quality speech is OK, as I’m competing with analog HF SSB. Just getting the message through after a few tries is a lower bar, the upper bar being easy conversation over that nasty old HF channel.

While drawing the figure above I realised that a codec can be viewed as a bunch of processing steps that either (i) transform the speech signal or (ii) change the sample rate. An example of transforming is performing a FFT to convert the time domain speech signal into the frequency domain. We then decimate in the time and frequency domain to change the sample rate of the speech signal.

Lowering the sample rate is an effective way to lower the bit rate. This process is called decimation. In Codec 2 we start with a bunch of sinusoidal amplitudes that we update every 10ms (100Hz sampling rate). We then throw away 3 out of every 4 to give a sample rate of 25Hz. This means there are fewer samples every second, so the bit rate is reduced.

At the decoder we use interpolation to smoothly fill in the missing gaps, raising the sample rate back up to 100Hz. We eventually transform back to the time domain using an inverse FFT to play the signal out of the speaker. Speakers like time domain signals.
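The decimate-then-interpolate round trip can be sketched in a few lines (plain linear interpolation in numpy here; Codec 2's actual interpolation is more involved, and the frame values below are made up):

```python
import numpy as np

def decimate(amps, m=4):
    """100 Hz -> 25 Hz: keep one frame in four, throw the rest away."""
    return amps[::m]

def interpolate(dec, m=4):
    """25 Hz -> 100 Hz: linearly fill the gaps back in at the decoder."""
    n = (len(dec) - 1) * m + 1
    return np.interp(np.arange(n), np.arange(0, n, m), dec)

# 21 amplitude "frames" at 100 Hz (200 ms of speech), varying smoothly
amps = np.linspace(1.0, 2.0, 21)
recovered = interpolate(decimate(amps))   # close to amps when amps vary slowly
```

As long as the amplitudes change slowly between frames, the decoder's interpolated track stays close to the original — which is why throwing away 3 in 4 samples is survivable.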

In the figure above we start with chunks of speech samples in the time domain, then transform into the frequency domain, where we fit a sinusoidal, then masking model.

The sinusoidal model takes us from a 512 point FFT to 20-80 amplitudes. It fits a sinusoidal speech model to the incoming signal. The number of sinusoidal amplitudes varies with the pitch of the incoming voice. It is time varying, which complicates our life if we desire a constant bit rate.

The masking model fits a smoothed envelope that represents the way we produce and hear speech. For example we don’t talk in whistles (unless you are R2D2) so no point wasting bits in being able to code very narrow bandwidth signals. The ear masks weak tones near strong ones so no point coding them either. The ear also has a log frequency and amplitude response so we take advantage of that too.

In this way the speech signal winds its way through the codec, being transformed this way and that, as we carve off samples until we get something that we can send over the channel.

Next Steps

Need to sort out those remaining yellow blocks, and come up with a fully quantised codec candidate.

An idea that occurred to me while drawing the diagram – can we estimate the mask directly from the FFT samples? We may not need the intermediate estimation of the sinusoidal amplitudes any more.

It may also be possible to analyse/synthesise using filters modelling the masks running in the time domain. For example on the analysis side we could look at the energy at the output of a bunch of masking filters spaced closely enough that we can’t perceive the difference.

Writing stuff up on a blog is cool. It’s “the cardboard colleague” effect: the process of clearly articulating your work can lead to new ideas and bug fixes. It doesn’t matter who you articulate the problems to, just talking about them can lead to solutions.

Twitter posts: 2015-09-14 to 2015-09-20

Terry Motor Upgrade -- no stopping it!

I have now updated the code and PID control for the new RoboClaw and HD Planetary motor configuration. As part of the upgrade I had to move to using a lipo battery because these motors stall at 20 amps. While it is a bad idea to leave it stalled, it's a worse idea to have the battery have issues due to drawing too much current. It's always best to choose where the system will fail rather than letting the cards fall where they may. In this case, leaving it stalled will result in drive train damage in the motors, not a controller board failure, or a lipo issue.
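For the curious, the PID control mentioned above looks roughly like this in skeleton form — a generic sketch, not Terry's actual code; the gains, output limit and numbers are invented for illustration:

```python
# Generic PID speed controller sketch (hypothetical gains and limits).
class PID:
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured, dt):
        err = setpoint - measured
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        # Clamp so a stalled motor can't wind the output up indefinitely
        return max(-self.out_limit, min(self.out_limit, out))

pid = PID(kp=1.2, ki=0.5, kd=0.05, out_limit=127)
cmd = pid.update(setpoint=100.0, measured=80.0, dt=0.02)  # encoder ticks/s
```

In a real loop you'd call update() at a fixed rate with the quadrature encoder speed as the measured value, and send the clamped output to the motor controller.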

One of the more telling images is below which compares not only the size of the motors but also the size of the wires servicing the power to the motors. I used 14AWG wire with silicon coating for the new motors so that a 20A draw will not cause any issues in the wiring. Printing out new holders for the high precision quadrature encoders took a while. Each print was about 1 hour long and there was always a millimetre or two that could be changed in the design which then spurred another print job.

Below is the old controller board (the 5A roboclaw) with the new controller sitting on the bench in front of Terry (45A controller). I know I only really needed the 30A controller for this job, but when I decided to grab the items the 30A was sold out so I bumped up to the next model.

The RoboClaw is isolated from the channel by being attached via nylon bolts to a 3d printed cross over panel.

One of the downsides to the 45A model, which I imagine will fix itself in time, was that the manual didn't seem to be available. The commands are largely the same as for the other models in the series, but I had to work out the connections for the quad encoders and have currently powered them off the BEC because the screw terminal version of the RoboClaw doesn't have +/- terminals for the quads.

One little surprise was that these motors are quite magnetic without power. Nuts and the like want to move in and the motors will attract each other too. Granted it's not like they will attract themselves from any great distance, but it's interesting compared to the lower torque motors I've been using in the past.

I also had a go at wiring 4mm connectors to 10AWG cable. Almost got it right after a few attempts but the lugs are not 100% fixed into their HXT plastic chassis because of some solder or flux debris I accidentally left on the job. I guess some time soon I'll be wiring my 100A monster automotive switch inline in the 10AWG cable for solid battery isolation when Terry is idle. ServoCity has some nice bundles of 14AWG wire (which are the yellow and blue ones I used to the motors) and I got a bunch of other wire from HobbyKing.

September 19, 2015

Hooking into docking and undocking events to run scripts

In order to automatically update my monitor setup and activate/deactivate my external monitor when plugging my ThinkPad into its dock, I found a way to hook into the ACPI events and run arbitrary scripts.

This was tested on a T420 with a ThinkPad Dock Series 3 as well as a T440p with a ThinkPad Ultra Dock.

The only requirement is the ThinkPad ACPI kernel module which you can find in the tp-smapi-dkms package in Debian. That's what generates the ibm/hotkey events we will listen for.

Hooking into the events

Create the following ACPI event scripts as suggested in this guide.

Firstly, /etc/acpi/events/thinkpad-dock:

event=ibm/hotkey LEN0068:00 00000080 00004010
action=su francois -c "/home/francois/bin/external-monitor dock"

Secondly, /etc/acpi/events/thinkpad-undock:

event=ibm/hotkey LEN0068:00 00000080 00004011
action=su francois -c "/home/francois/bin/external-monitor undock"

then restart udev:

sudo service udev restart

Finding the right events

To make sure the events are the right ones, check the output of:

sudo acpi_listen

and ensure that your script is actually running by adding:

logger "ACPI event: $*"

at the beginning of it and then looking in /var/log/syslog for lines like:

logger: external-monitor undock
logger: external-monitor dock

If that doesn't work for some reason, try using an ACPI event script like this:

action=logger %e

to see which event you should hook into.

Using xrandr inside an ACPI event script

Because the script will be running outside of your user session, the xrandr calls must explicitly set the display variable (-d). This is what I used:

logger "ACPI event: $*"
xrandr -d :0.0 --output DP2 --auto
xrandr -d :0.0 --output eDP1 --auto
xrandr -d :0.0 --output DP2 --left-of eDP1

September 17, 2015

Phase from Magnitude Spectra

For my latest Codec 2 brainstorms I need to generate a phase spectra from a magnitude spectra. I’m using cepstral/minimum phase techniques. Despite plenty of theory and even code on the Internet it took me a while to get something working. So I thought I’d post a worked example here. I must admit the theory still makes my eyes glaze over. However a working demo is a great start to understanding the theory if you’re even nerdier than me.

Codec 2 just transmits the magnitude of the speech spectrum to the decoder. The phases are estimated at the encoder but take too many bits to encode, and aren’t that important for communications quality speech. So we toss them away and reconstruct them at the decoder using some sort of rule based approach. I’m messing about with a new way of modeling the speech spectrum so needed a new way to generate the phase spectra at the decoder.

Here is the mag_to_phase.m function, which is a slightly modified version of this Octave code that I found in my meanderings on the InterWebs. I think there is also a Matlab/Octave function called mps.m which does a similar job.

I decided to test it using a 10th order LPC synthesis filter. These filters are known to have a minimum-phase phase spectra. So if the algorithm is working it will generate exactly the same phase spectra.
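The cepstral trick itself fits in a few lines. Here's a numpy sketch of the method — a rough equivalent of mag_to_phase.m, not the actual Octave code, and checked against a simple first-order filter rather than a 10th order LPC filter for brevity:

```python
import numpy as np

def mag_to_phase(mag):
    """Minimum-phase spectrum from a full (length-N, N even) magnitude spectrum."""
    N = len(mag)
    c = np.fft.ifft(np.log(mag)).real      # real cepstrum
    w = np.zeros(N)
    w[0] = w[N // 2] = 1.0                 # keep DC and Nyquist terms once
    w[1:N // 2] = 2.0                      # fold negative quefrencies forward
    return np.fft.fft(w * c).imag          # imag part of log spectrum = phase

# Sanity check against a known minimum-phase filter, H(z) = 1/(1 - 0.9 z^-1)
N = 512
H = 1.0 / np.fft.fft([1.0, -0.9], N)
phase = mag_to_phase(np.abs(H))            # should match np.angle(H)
```

For a minimum-phase signal the complex cepstrum is causal, so folding the real cepstrum forward recovers it exactly, and the imaginary part of its FFT is the phase spectrum.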

So we start with 40ms of speech:

Then we find the phase spectra (bottom) given the magnitude spectrum (top):

On the bottom the green line is the measured phase spectrum of the filter, and the blue line is what the mag_to_phase.m function came up with. They are identical, I’ve just offset them by 0.5 rads on the plot. So it works Yayyyy – we can find a minimum phase spectra from just the magnitude spectra of a filter.

This is the impulse response, which the algorithm spits out as an intermediate product. One interpretation of minimum phase (so I’m told) is that the energy is all collected near the start of the pulse:

As the DFT is cyclical the bit on the right is actually concatenated with the bit on the left to make one continuous pulse centered on time = 0. All a bit “Dr Who” I know but this is DSP after all! With a bit of imagination you can see it looks like one period of the original input speech in the first plot above.

September 16, 2015

Exploring for a navex

I feel like I need more detailed maps of Mount Stranger than I currently have in order to lay out a possible navex. I therefore spent a little time this afternoon wandering down the fire trail to mark all the gates in the fence. I need to do a little more of this before it's ready for a navex.

Interactive map for this route.

Tags for this post: blog canberra bush walk

Related posts: Walking to work; First jog, and a walk to Los Altos


10 Foot Pound Boots for Terry

A sad day when your robot outgrows its baby motors. On carpet this happened when the robot started to tip the scales at over 10kg. So now I have some lovely new motors that can generate almost 10 foot pounds of torque.

This has caused me to move to a more rigid motor attachment and a subsequent modification and reprint of the rotary encoder holders (not shown above). The previous motors were spur motors, so I could rotate the motor itself within its mounting bracket to mate the large gear to the encoders. Not so anymore. Apart from looking super cool, the larger alloy gear gives me an 8 to 1 reduction to the encoders; nothing like the feeling of picking up 3 bits of extra precision.

This has also meant using some rather sizable cables. The yellow and purple cables are 14 AWG silicon wires. For the uplink I have an almost store bought 12AWG and some hand made 10 AWG monsters. Each motor stalls at 20A so there is the potential for a noticeable amount of current to flow around the base of Terry now.

September 15, 2015

Returning to data and Gov 2.0 from the DTO

I have been working at the newly created Digital Transformation Office in the Federal Government since January this year helping to set it up, create a vision, get some good people in and build some stuff. I was working in and then running a small, highly skilled and awesome team focused on how to dramatically improve information (websites) and transaction services across government. This included a bunch of cool ideas around whole of government service analytics, building a discovery layer (read APIs) for all government data, content and services, working with agencies to improve content and SEO, working on reporting mechanisms for the DTO, and looking at ways to usefully reduce the huge number of websites currently run by the Federal public service amongst other things. You can see some of our team blog posts about this work.

It has been an awesome trip and we built some great stuff, but now I need to return to my work on data, gov 2.0 and supporting the Australian Government CTO John Sheridan in looking at whole of government technology, procurement and common platforms. I can also work more closely with Sharyn Clarkson and the Online Services Branch on the range of whole of government platforms and solutions they run today, particularly the highly popular GovCMS. It has been a difficult choice but basically it came down to where my skills and efforts are best placed at this point in time. Plus I miss working on open data!

I wanted to say a final public thank you to everyone I worked with at the DTO, past and present. It has been a genuine privilege to work in the diverse teams and leadership from across over 20 agencies in the one team! It gave me a lot of insight to the different cultures, capabilities and assumptions in different departments, and I think we all challenged each other and created a bigger and better vision for the effort. I have learned much and enjoyed the collaborative nature of the broader DTO team.

I believe the DTO has two major opportunities ahead: as a force of awesome and a catalyst for change. As a force of awesome, the DTO can show how delivery and service design can be done with modern tools and methods, can provide a safe sandpit for experimentation, can set the baseline for the whole APS through the digital service standard, and can support genuine culture change across the APS through training, guidance and provision of expertise/advisers in agencies. As a catalyst for change, the DTO can support the many, many people across the APS who want transformation, who want to do things better, and who can be further empowered, armed and supported to do just that through the work of the DTO. Building stronger relationships across the public services of Australia will be critical to this broader cultural change and evolution to modern technologies and methodologies.

I continue to support the efforts of the DTO and the broader digital transformation agenda and I wish Paul Shetler and the whole team good luck with an ambitious and inspiring vision for the future. If we could all make an approach that was data/evidence driven, user centric, mashable/modular, collaborative and cross government(s) the norm, we would overcome the natural silos of government, we would establish the truly collaborative public service we all crave and we would be better able to support the community. I have long believed that the path of technical integrity is the most important guiding principle of everything I do, and I will continue to contribute to the broader discussions about “digital transformation” in government.

Stay tuned for updates on the blog, and I look forward to spending the next 4 months kicking a few goals before I go on maternity leave :)

First PETW in over two years!

So tomorrow night I'm going to be conducting my first interview for Purser Explores The World in over two years, wootness!

And even more awesome it's going to be with New York based photographer Chris Arnade who has been documenting the stories of people battling addiction and poverty in the New York neighbourhood of the South Bronx via the Faces of Addiction project.

I'm excited, both because this is the first PETW episode in aaaages, and also because the stories that Chris tells, through his photography as well as his facebook page and other media, humanise people who have long been swept under the rug by society.

FreeDV Voice Keyer and Spotting Demo

I’ve added a Voice Keyer feature to the FreeDV GUI program. It will play a pre-recorded wave file, key your transmitter, then pause to listen. Use the Tools-PTT menu to select the wave file to use, the rx pause duration, and the number of times to repeat. If you hit space bar the keyer exits. It also stops if it detects a valid FreeDV sync for 5 seconds, to avoid congesting the band if others are using it.

I’m going to leave the voice keyer running while I’m working at my bench, to stimulate local FreeDV activity.

Spotting Demo

FreeDV has a low bit rate text message stream that allows you to send information such as your call-sign and location. Last year I added some code to parse the received text messages, and generate a system command if certain patterns are received. In the last few hours I worked up a simple FreeDV “spotting” system using this feature and a shell script.

Take a look at this screen shot of Tool-Options:

I’m sending myself the text message “s=vk5dgr hi there from David” as an example. Every time FreeDV receives a text message it issues a “rx_txtmsg” event. This is then parsed by the regular expressions on the left “rx_txtmsg s=(.*)”. If there is a match, the system command on the right is executed.

In this case any events with “rx_txtmsg s=something” will result in the call to the shell script “ something”, passing the text to the shell script. Here is what the script looks like:



echo `date -u` "  " $1 "<br>" >> $SPOTFILE
tail -n 25 $SPOTFILE > /tmp/spot.tmp1
mv /tmp/spot.tmp1 $SPOTFILE
lftp -e "cd www;put $SPOTFILE;quit" $FTPSERVER

So this script adds a time stamp, limits the file to the last 25 lines, then ftps it to my webserver. You can see the web page here. It’s pretty crude, but you get the idea. It needs proper HTML formatting, a title, and a way to prevent the same person’s spot being repeated all the time.

You can add other regular expressions and system commands if you like. For example you could make a bell ring if someone puts your callsign in a text message, or put a pin on a map at their grid coordinates. Or send a message to FreeDV QSO finder to say you are “on line” and listening. If a few of us set up spotters around the world it will be a useful testing tool, like websdr for FreeDV.
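The event-to-command matching described above is just regexp matching plus substitution. A hypothetical Python sketch (FreeDV's real implementation differs; the rule shown is the example from this post):

```python
import re

def match_event(event, rules):
    """Return the expanded system command for the first matching rule, or None."""
    for pattern, template in rules:
        m = re.match(pattern, event)
        if m:
            return template.format(*m.groups())
    return None

rules = [(r"rx_txtmsg s=(.*)", " {0}")]
cmd = match_event("rx_txtmsg s=vk5dgr hi there from David", rules)
# cmd could then be handed to subprocess.run or os.system
```

Each rule pairs a regexp with a command template; the captured groups get substituted into the command, which is then executed.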

To help debug you can mess with the regular expressions and system commands in real time, just click on Apply.

I like to use full duplex (Tools-PTT Half duplex unchecked) and “modprobe snd-aloop” to loopback the modem audio when testing. Talking to myself is much easier than two laptops.

If we get any really useful regexp/system commands we can bake them into FreeDV. I realise not everyone is up to regexp coding!

I’ll leave this running for a bit on 14.236 MHz, FreeDV 700B. See if you can hit it!

FreeDV QSO Party Weekend

A great weekend with the AREG team working FreeDV around VK; and despite some poor band conditions, the world.

We were located at Younghusband, on the banks of the river Murray, a 90 minute drive due East of Adelaide:

We had two K3 radios, one with a SM1000 on 20M, and one using laptop based FreeDV on 40M:

Here is the enormous 40M beam we had available, with my young son at the base:

It was great to see FreeDV 700B performing well under quite adverse conditions. Over time I became more and more accustomed to the sound of 700B, and could understand it comfortably from a laptop loudspeaker across the room.

When we had good quality FreeDV 1600 signals, it really sounded great, especially the lack of SSB channel noise. As we spoke to people, we noticed a lot of other FreeDV traffic popping up around our frequency.

We did have some problems with S7 power line hash from a nearby HT line. The ambient HF RF noise issue is a problem for HF radio everywhere these days. I have some ideas for DSP based active noise cancellation using 2 or more receivers that I might try in 2016. Mark, VK5QI, had a novel solution. He connected FreeDV on his laptop to Andy’s (VK5AKH) websdr in Adelaide. With the lower noise level we successfully completed a QSO with Gerhard, OE3GBB, in Austria. Here are Andy (left) and Mark working Gerhard:

I was in the background most of the time, working on FreeDV on my laptop! Thank you very much AREG and especially Chris, VK5CP, for hosting the event!

September 14, 2015

Setting up a network scanner using SANE

Sharing a scanner over the network using SANE is fairly straightforward. Here's how I shared a scanner on a server (running Debian jessie) with a client (running Ubuntu trusty).

Install SANE

The packages you need on both the client and the server are:

You should check whether or not your scanner is supported by the latest stable release or by the latest development version.

In my case, I needed to get a Canon LiDE 220 working so I had to grab the libsane 1.0.25+git20150528-1 package from Debian experimental.

Test the scanner locally

Once you have SANE installed, you can test it out locally to confirm that it detects your scanner:

scanimage -L

This should give you output similar to this:

device `genesys:libusb:001:006' is a Canon LiDE 220 flatbed scanner

If that doesn't work, make sure that the scanner is actually detected by the USB stack:

$ lsusb | grep Canon
Bus 001 Device 006: ID 04a9:190f Canon, Inc.

and that its USB ID shows up in the SANE backend it needs:

$ grep 190f /etc/sane.d/genesys.conf 
usb 0x04a9 0x190f
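The two checks above can be rolled into one quick script. Here is a sketch using the sample IDs from this post; the files written to /tmp are hypothetical stand-ins for the real lsusb output and backend config, so the logic can be exercised without the hardware attached:

```shell
# Stand-in data mimicking `lsusb` output and the genesys backend config.
echo "Bus 001 Device 006: ID 04a9:190f Canon, Inc." > /tmp/lsusb.txt
echo "usb 0x04a9 0x190f" > /tmp/genesys.conf

# Field 6 of the lsusb line is the vendor:product pair, e.g. 04a9:190f.
ID=$(awk '{print $6}' /tmp/lsusb.txt)
VENDOR="0x${ID%:*}"
PRODUCT="0x${ID#*:}"

# SANE backend configs list the same pair as "usb 0xVVVV 0xPPPP".
if grep -q "usb $VENDOR $PRODUCT" /tmp/genesys.conf; then
    echo "backend knows this scanner"
fi
```

Substituting the real lsusb output and the backend config for your scanner gives a quick yes/no on whether the backend should recognise it.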

To do a test scan, simply run:

scanimage > test.ppm

and then take a look at the (greyscale) image it produced (test.ppm).
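If you want a scripted sanity check that the scan produced a valid PNM file (PPM/PGM/PBM), something like this works; the one-pixel image written here is a hand-made stand-in, since no scanner runs in a test:

```shell
# Stand-in for a real scan: a one-pixel greyscale PGM written by hand.
printf 'P2\n1 1\n255\n0\n' > test.ppm

# All PNM images (PBM/PGM/PPM) start with a "P1".."P6" magic number.
magic=$(head -c 2 test.ppm)
case "$magic" in
    P[1-6]) echo "looks like a PNM image" ;;
    *)      echo "not a PNM image" ;;
esac
```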

Configure the server

With the scanner working locally, it's time to expose it to network clients by adding the client IP addresses to /etc/sane.d/saned.conf:

## Access list

and then opening the appropriate port on your firewall (typically /etc/network/iptables in Debian):

-A INPUT -s -p tcp --dport 6566 -j ACCEPT

Then you need to ensure that the SANE server is running by setting the following in /etc/default/saned:

RUN=yes
if you're using the sysv init system, or by running this command:

systemctl enable saned.socket

if using systemd.

I actually had to reboot to make saned visible to systemd, so if you still run into these errors:

$ service saned start
Failed to start saned.service: Unit saned.service is masked.

you're probably just one reboot away from getting it to work.

Configure the client

On the client, all you need to do is add the following to /etc/sane.d/net.conf:

myserver
connect_timeout = 60

where myserver is the hostname or IP address of the server running saned.

Test the scanner remotely

With everything in place, you should be able to see the scanner from the client computer:

$ scanimage -L
device `net:myserver:genesys:libusb:001:006' is a Canon LiDE 220 flatbed scanner

and successfully perform a test scan using this command:

scanimage > test.ppm

September 13, 2015

On running

I've been running for a little while now, but I don't mention it here much. I think I mostly don't mention it because I normally just post photos here, and I don't tend to stop and take happy snaps on my runs. The runs started off pretty modest -- initially I struggled with shin splints after more than a couple of minutes. I've worked through that and a couple of injuries along the way and am consistently doing 5km runs now.

That said, my longest runs have been in the last week when I did a 7.5km and an 8.1km. I'm building up to 10km, mostly because it's a nice round number. I think ultimately trail running might be the thing for me, as I get quite bored running around suburbs over and over again.

It's interesting that I come from an aggressively unsporting family, and yet my middle-aged siblings and I have all started running in the last year or two. A mid-life crisis thing, perhaps?

Interactive map for this route.

Tags for this post: blog running fitness sport

Related posts: First jog, and a walk to Los Altos; Martin retires from his work netball league


September 12, 2015

Software Freedom Day Meeting 2015

Sep 19 2015 11:00
Sep 19 2015 16:00

Electron Workshop 31 Arden Street, North Melbourne.

There will not be a regular LUV Beginners workshop for the month of September. Instead, you're going to be in for a much bigger treat!

This month, Free Software Melbourne[1], Linux Users of Victoria[2] and Electron Workshop[3] are joining forces to bring you the local Software Freedom Day event for Melbourne.

The event will take place on Saturday 19th September between 11am and 4pm at:

Electron Workshop

31 Arden Street, North Melbourne.


Electron Workshop is on the south side of Arden Street, about half way between Errol Street and Leveson Street. Public transport: 57 tram, nearest stop at corner of Errol and Queensberry Streets; 55 and 59 trams run a few blocks away along Flemington Road; 402 bus runs along Arden Street, but nearest stop is on Errol Street. On a Saturday afternoon, some car parking should be available on nearby streets.

LUV would like to acknowledge Red Hat for their help in obtaining the Trinity College venue and VPAC for hosting.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

September 19, 2015 - 11:00


Adjourned 2015 LUV Annual General Meeting

Sep 19 2015 15:30
Sep 19 2015 16:15

Boardroom, Electron Workshop, 31 Arden Street, North Melbourne

Confirmation of adjourned LUV 2015 AGM

This notice is to confirm that Linux Users of Victoria Inc. will be holding the adjournment of its Annual General Meeting, on Saturday 19th September 2015. The meeting will be held in the Boardroom of Electron Workshop at 3.30pm.

Electron Workshop is on the south side of Arden Street, about half way between Errol Street and Leveson Street. Public transport: 57 tram, nearest stop at corner of Errol and Queensberry Streets; 55 and 59 trams run a few blocks away along Flemington Road; 402 bus runs along Arden Street, but nearest stop is on Errol Street. On a Saturday afternoon, some car parking should be available on nearby streets.

LUV would like to thank Electron Workshop for making their boardroom available for this meeting, also Red Hat for their help in obtaining the Trinity College venue and VPAC for hosting.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

September 19, 2015 - 15:30


CBC Navigation Course

So today was the day long map and compass class with the Canberra Bushwalking Club. I liked this walk a lot. It was a good length at about 15km, and included a few things I'd wanted to do for a while like wander around McQuoids Hill and the northern crest of Urambi hills. Some nice terrain, and red rocks gorge clearly requires further exploration.



Interactive map for this route.

Tags for this post: blog pictures 20150912 photo canberra bushwalk




  • Keynote, Brenda Wallace, State of Copyright. Lots of interesting information, NZ centric though (naturally) video
  • Keynote, Allison Kaptur, Ways we can be more effective learners. Focusing on the idea of a mindset, the context of our learning and different approaches, a highlight of the conference for me. video
  • Keynote, Katie Bell, Python as a Teaching Language video
  • Brian Thorne, Blind Analytics, algorithms on encrypted data. I’ll probably have to watch this fifty times to actually understand. Video
  • Katie McLaughlin, Build a Better Hat Rack. Being nice in open source, not assuming that people know your work is appreciated. Not all work is code.
  • Chris Neugebauer, Python’s type hints, in comparison to JavaScript’s. A very promising talk about the future of type hinting in Python. video
  • Martin Henschke, Eloise “Ducky” Macdonald-Meyer, Coding workshops for school kids in Tasmania. Similar outcomes to the keynote. video
  • Tim Mitchell, Database Migrations using alembic, programmatically upgrading database schemas; highlights for me were the use cases. The example used was a good one as it’s multi-staged (it’s a bad example for other reasons :) video
  • Jeremy Stott, Practical Web Security, takes a hobby project and secures it. Lots of little tips. video
  • Thomi Richards, Connascence in Python, a language for talking about the different types of coupling video
  • Cory Benfield, You Don’t Care About Efficiency. All about sync code wasting CPU cycles while IO is happening; don’t do that. Doesn’t cover threading though. video
  • Lee Symes, Why Python is awesome, always good to see how languages are learning off each other video
  • Various (including myself), Lightning talks video


  • Ben Shaw, Micro-Services: Is HTTP the only way? video
  • Chris LeBlanc, Cython, I’ve done a lot of Cython, but there are a lot of features and it’s a fast-moving target video
  • Steve Baker, The Pythonista’s 3D printing toolchain video
  • Tom Eastmen, security. Tom gave a lightning talk about serial protocols everywhere; based on that, this should be good video
  • Fraser Tweedale, Integrating Python apps with Centralised Identity systems. I believe that this talk is mostly focused on configuring your web server to do authnz, rather than coding it incorrectly. video
  • Rand Huso, MPI and IoC video
  • Gagan Sharma, Simon Salinas, Custom Python Applications in Neuroscience video


  • Fei Long Wang, Zaqar. I struggled to find a reason this exists; it might just need to exist to be an open replacement for SQS. video
  • WxPython Tuning app for FreeEMS, a Python app taking serial data from a car control system. They jumped straight to using threads and mutexes rather than doing an async read from the serial port, even though they were already using a GUI mainloop. I asked why not, but they didn’t seem to understand my question. There doesn’t appear to be a video, maybe because of a poorly named command line tool that sounds like a swear word.

Filed under: Uncategorized

September 11, 2015

Running a Shell in a Daemon Domain

allow unconfined_t logrotate_t:process transition;

allow logrotate_t { shell_exec_t bin_t }:file entrypoint;

allow logrotate_t unconfined_t:fd use;

allow logrotate_t unconfined_t:process sigchld;

I recently had a problem with SE Linux policy related to logrotate. To test it out I decided to run a shell in the domain logrotate_t to interactively perform some of the operations that logrotate performs when run from cron. I used the above policy to allow unconfined_t (the default domain for a sysadmin shell) to enter the daemon domain.
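To actually load rules like the four above, they need to be wrapped in a policy module with a require block. Here is a sketch (the module name is my own) of what that module source could look like, which would then be compiled with checkmodule/semodule_package and installed with semodule:

```
module logrotate_shell 1.0;

require {
    type unconfined_t;
    type logrotate_t;
    type shell_exec_t;
    type bin_t;
    class process { transition sigchld };
    class file entrypoint;
    class fd use;
}

# The same four rules as above: let a sysadmin shell transition into
# logrotate_t, and let that domain interact with its parent shell.
allow unconfined_t logrotate_t:process transition;
allow logrotate_t { shell_exec_t bin_t }:file entrypoint;
allow logrotate_t unconfined_t:fd use;
allow logrotate_t unconfined_t:process sigchld;
```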

Then I used the command “runcon -r system_r -t logrotate_t bash” to run a shell in the domain logrotate_t. The utility runcon will attempt to run a program in any SE Linux context you specify, but to succeed the system has to be in permissive mode or you need policy to permit it. I could have written policy to allow the logrotate_t domain to be in the role unconfined_r but it was easier to just use runcon to change roles.

Then I had a shell in the logrotate_t domain to test out the post-rotate scripts. It turned out that I didn’t really need to do this (I had misread the output of an earlier sesearch command). But this technique can be used for debugging other SE Linux related problems so it seemed worth blogging about.

2015 Science in Society Journalism Awards winners announced


You may or may not know that I’ve been moonlighting with Python for a number of years now.  This year I have written a new book Python for Kids for Dummies and, as of this week, it’s available from Amazon:

When I was engaged to do the book it was only going to be released in the US. I’ve recently learnt that it will be available in Australia (and elsewhere I guess) as well.

It is set to be available in Australia in a week or two (as at 11 Sept) from places like:



Angus and Robertson

Robinsons (Melbourne)

QBD (Hello Banana Benders!)

and Dymocks (or so I’m told, but it’s not on their website. I’ve always found Dymocks’ website difficult) – update saw a copy in Dymocks today (20 Sept 2015)

September 10, 2015

Ironic on a NUC - part 2 - Running services, building and deploying images, and testing

This is a continuation of the previous post on Ironic on a NUC - setting things up.  If you're following along at home, read that first.

Creating disk images for deployment

Now let's build some images for use with Ironic.  First off, we'll need a deploy ramdisk image for the initial load, and we'll also need the image that we want to deploy to the hardware.  We can build these using diskimage-builder, part of the triple-o effort.

So let's do that in a virtual environment:

mrda@irony:~/src$ mkvirtualenv dib

(dib)mrda@irony:~/src$ pip install diskimage-builder six

And because we want to use some of the triple-o elements, we'll refer to these as we do the build. Once the images are built we'll put them in a place where tftpd-hpa can serve them.

(dib)mrda@irony:~/src$ export ELEMENTS_PATH=~/src/tripleo-image-elements/elements
(dib)mrda@irony:~/src$ mkdir images
(dib)mrda@irony:~/src$ cd images
(dib)mrda@irony:~/src/images$ disk-image-create ubuntu baremetal localboot dhcp-all-interfaces local-config -o my-image

(dib)mrda@irony:~/src/images$ ramdisk-image-create ubuntu deploy-ironic -o deploy-ramdisk

(dib)mrda@irony:~/src/images$ cp -rp * /tftpboot

Starting Ironic services

I like to do my development in virtualenvs, so we'll run our services there. Let's start with ironic-api.

(dib)mrda@irony:~/src/images$ deactivate

mrda@irony:~/src/images$ cd ~/src/ironic/

mrda@irony:~/src/ironic (master)$ tox -evenv --notest

mrda@irony:~/src/ironic (master)$ source .tox/venv/bin/activate
(venv)mrda@irony:~/src/ironic (master)$ ironic-api -v -d --config-file etc/ironic/ironic.conf.local

Now in a new terminal window for our VM, let's run ironic-conductor:

mrda@irony:~/src/images$ cd ~/src/ironic/

mrda@irony:~/src/ironic (master)$ source .tox/venv/bin/activate
(venv)mrda@krypton:~/src/ironic (master)$ python setup.py develop

(venv)mrda@krypton:~/src/ironic (master)$ ironic-conductor -v -d --config-file etc/ironic/ironic.conf.local

(If you get an error about unable to load the pywsman library, follow the workaround over here in a previous blog post)

Running Ironic Client

Let's open a new window on the VM for running an ironic command-line client to exercise what we've built:

mrda@irony:~$ cd src/python-ironicclient/

mrda@irony:~/src/python-ironicclient (master)$ tox -evenv --notest

mrda@irony:~/src/python-ironicclient (master)$ source .tox/venv/bin/activate

Now we need to fudge authentication, and point at our running ironic-api:

(venv)mrda@irony:~/src/python-ironicclient (master)$ export OS_AUTH_TOKEN=fake-token

(venv)mrda@irony:~/src/python-ironicclient (master)$ export IRONIC_URL=http://localhost:6385/

Let's try it out and see what happens, eh?

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic driver-list


+---------------------+----------------+
| Supported driver(s) | Active host(s) |
+---------------------+----------------+
| pxe_amt             | test-host      |
+---------------------+----------------+


Looking good! Let's try registering the NUC as an Ironic node, specifying the deployment ramdisk:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-create -d pxe_amt -i amt_password='<the-nuc-admin-password>' -i amt_username='admin' -i amt_address='' -i deploy_ramdisk='file:///tftpboot/deploy-ramdisk.initramfs' -i deploy_kernel='file:///tftpboot/deploy-ramdisk.kernel' -n thenuc
| Property     | Value                                                                    |
| uuid         | 924a5447-930e-4d27-837e-6dd5d5f10e16                                     |
| driver_info  | {u'amt_username': u'admin', u'deploy_kernel': u'file:///tftpboot/deploy- |
|              | ramdisk.kernel', u'amt_address': u'', u'deploy_ramdisk':       |
|              | u'file:///tftpboot/deploy-ramdisk.initramfs', u'amt_password':           |
|              | u'******'}                                                               |
| extra        | {}                                                                       |
| driver       | pxe_amt                                                                  |
| chassis_uuid |                                                                          |
| properties   | {}                                                                       |
| name         | thenuc                                                                   |

Again more success!  Since we're not using Nova to manage or kick-off the deploy, we need to tell ironic where the instance we want deployed is, along with some of the instance information:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-update thenuc add instance_info/image_source='file:///tftpboot/my-image.qcow2' instance_info/kernel='file:///tftpboot/my-image.vmlinuz' instance_info/ramdisk='file:///tftpboot/my-image.initrd' instance_info/root_gb=10
| Property               | Value                                                                   |
| target_power_state     | None                                                                    |
| extra                  | {}                                                                      |
| last_error             | None                                                                    |
| updated_at             | None                                                                    |
| maintenance_reason     | None                                                                    |
| provision_state        | available                                                               |
| clean_step             | {}                                                                      |
| uuid                   | 924a5447-930e-4d27-837e-6dd5d5f10e16                                    |
| console_enabled        | False                                                                   |
| target_provision_state | None                                                                    |
| provision_updated_at   | None                                                                    |
| maintenance            | False                                                                   |
| inspection_started_at  | None                                                                    |
| inspection_finished_at | None                                                                    |
| power_state            | None                                                                    |
| driver                 | pxe_amt                                                                 |
| reservation            | None                                                                    |
| properties             | {}                                                                      |
| instance_uuid          | None                                                                    |
| name                   | thenuc                                                                  |
| driver_info            | {u'amt_username': u'admin', u'amt_password': u'******', u'amt_address': |
|                        | u'', u'deploy_ramdisk': u'file:///tftpboot/deploy-            |
|                        | ramdisk.initramfs', u'deploy_kernel': u'file:///tftpboot/deploy-        |
|                        | ramdisk.kernel'}                                                        |
| created_at             | 2015-09-10T00:55:27+00:00                                               |
| driver_internal_info   | {}                                                                      |
| chassis_uuid           |                                                                         |
| instance_info          | {u'ramdisk': u'file:///tftpboot/my-image.initrd', u'kernel':            |
|                        | u'file:///tftpboot/my-image.vmlinuz', u'root_gb': 10, u'image_source':  |
|                        | u'file:///tftpboot/my-image.qcow2'}                                     |

Let's see what we've got:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-list


+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| f8af4d4e-e3da-4a04-9596-8e4fef15e4eb | thenuc | None          | None        | available          | False       |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+


We now need to create a network port in ironic, and associate it with the mac address of the NUC.  But I'm lazy, so let's extract the node UUID first:

(venv)mrda@irony:~/src/python-ironicclient (master)$ NODEUUID=$(ironic node-list | tail -n +4 | head -n -1 | awk -F "| " '{print $2}')
(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic port-create -n $NODEUUID -a <nuc-mac-address>
| Property  | Value                                |
| node_uuid | 924a5447-930e-4d27-837e-6dd5d5f10e16 |
| extra     | {}                                   |
| uuid      | c6dddc3d-b9b4-4fbc-99e3-18b8017c7b01 |
| address   | <nuc-mac-address>                    |
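The tail/head/awk pipeline used to extract the UUID can be rehearsed offline against a captured copy of the table. A sketch using sample node-list output from this post (with default whitespace splitting, a slight variation on the field separator used above):

```shell
# A saved copy of the `ironic node-list` table (trimmed to three columns).
cat > /tmp/node-list.txt <<'EOF'
+--------------------------------------+--------+---------------+
| UUID                                 | Name   | Instance UUID |
+--------------------------------------+--------+---------------+
| f8af4d4e-e3da-4a04-9596-8e4fef15e4eb | thenuc | None          |
+--------------------------------------+--------+---------------+
EOF

# Drop the three header lines and the trailing border, then pick the
# second whitespace-separated field (the first field is the leading "|").
NODEUUID=$(tail -n +4 /tmp/node-list.txt | head -n -1 | awk '{print $2}')
echo "$NODEUUID"
```

The same pipeline applied to live `ironic node-list` output gives you the UUID without any copy-and-paste.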

So let's validate everything we've done, before we try this out in anger:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-validate thenuc


+------------+--------+---------------+
| Interface  | Result | Reason        |
+------------+--------+---------------+
| boot       | True   |               |
| console    | None   | not supported |
| deploy     | True   |               |
| inspect    | None   | not supported |
| management | True   |               |
| power      | True   |               |
| raid       | None   | not supported |
+------------+--------+---------------+


And one more thing to do before we really start things rolling - ensure the NUC is listening to us:

(venv)mrda@irony:~/src/python-ironicclient (master)$ telnet 16992
Connected to
Escape character is '^]'.

telnet> close
Connection closed.

You might have to try that a couple of times to wake up the AMT interface, but it's important that you do to ensure you don't get a failed deploy.

And then we take the node active, which will DHCP the deploy ramdisk, which will in turn write the user image to the disk - if everything goes well.  This will also take quite a long time, so time to go make that cup of tea :-)

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-set-provision-state thenuc active

Your NUC should have just booted into your user image and should be ready for you to use!

Postscript #1:

Actually, that's not what really happened.  It's what I would have liked to happen.

But there were some issues. Firstly, ironic-conductor complained about not being able to find 'ironic-rootwrap'.  And then once I symlinked that into place, it couldn't find the config for rootwrap, so I symlinked that into place.  Then it complained that iscsiadm didn't have the correct permissions in rootwrap to do its thing...

So I gave up, and did the thing I didn't want to.  Back on the VM I ended up doing a "sudo python setup.py install" in the ironic directory so everything got installed into the correct system place and then I could restart ironic-conductor.

It should all work in develop mode, but clearly it doesn't, so in the interests of getting something up and going (and finishing this blog post :) I did the quick solution and installed system-wide. Perhaps I'll circle back and work out why someday :)

Postscript #2:

When doing this, the deployment can fail for a number of reasons.  To recover, you need to delete the enrolled node and start again once you've worked out what the problem is, and worked out how to fix it.  To do that you need to do:

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-set-maintenance thenuc on

(venv)mrda@irony:~/src/python-ironicclient (master)$ ironic node-delete thenuc

Isn't it great that we can use names for nodes instead of UUIDs for most operations :)

Ironic on a NUC - part 1 - Setting things up

Just because a few people have asked, here is what I did to get a standalone Ironic installation going and running in an Intel NUC.

Why a NUC?  Well, the Intel NUC is a cute little piece of hardware that is well suited as a test lab machine that can sit on my desk.  I'm using a DC53427HYE, which is an i5 with vPro.  vPro is a summary term for a bunch of Intel technologies, including AMT (Active Management Technology). This allows us to remotely manage this desktop for things like power management - think of this as an analogy to IPMI for servers.

Getting the VM ready

I like to do my development in VMs, after all, isn't that what the cloud is for? :-) So first off using your virtualisation technology of choice, build a VM with Ubuntu 14.04.2 server on it.  I've allocated 2GB RAM and 30GB disk.  The reason for the larger than average disk is so that I have room for building ramdisk and deployment disk images. I've called this box 'irony'.

On the VM you'll need a few extra things installed once you've got the base OS installed:

mrda@irony:~$ sudo apt-get install python-openwsman ack-grep python-dev python-pip libmysqlclient-dev libxml2-dev git rabbitmq-server mysql-server isc-dhcp-server tftpd-hpa syslinux syslinux-common libxslt1-dev qemu-utils libpq-dev python-yaml open-iscsi

mrda@irony:~$ sudo pip install virtualenvwrapper six tox mysql-python

Thinking about the network

For this set up, I'm going to run separate networks for the control plane and data plane.  I've added a USB NIC to the NUC so I can separate the networks. My public net connection to the internet will be on the 192.168.1.X network, whereas the service net control plane will be on 10.x.x.x.  To do this I've added a new network interface to the VM, changed the networking to bridging for both NICs, and assigned eth0 and eth1 appropriately, and updated /etc/network/interfaces in the VM, so the right adapter is on the right network.  It ended up looking like this in /etc/network/interfaces:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary (public) network interface
auto eth0
iface eth0 inet dhcp

# Control plane
auto eth1
iface eth1 inet static

Setting up DHCP 

We need to make sure we're listening for DHCP requests on the right interface

mrda@irony:~$ sudo sed -i 's/INTERFACES=""/INTERFACES="eth1"/' /etc/default/isc-dhcp-server

Now configure your DHCP server to hand out an address to the NUC, accounting for some of the uniqueness of the device :) The tail of my /etc/dhcp/dhcpd.conf looks a bit like this:

allow duplicates;

ignore-client-uids true;


subnet netmask {

    group {

        host nuc {

            hardware ethernet <your-nucs-mac-address>;

            fixed-address; # NUC's IP address

            allow booting;

            allow bootp;

            next-server <this-servers-ip-address>;

            filename "pxelinux.0";
        }
    }
}

There's some more background on this in a previous blog post.

Setting up TFTP

mrda@irony:~$ sudo mkdir /tftpboot

mrda@irony:~$ sudo chmod a+rwx /tftpboot/

We'll need to configure tftpd-hpa rather specifically, so /etc/default/tftpd-hpa looks like this:




TFTP_OPTIONS="-vvvv --map-file /tftpboot/map-file"

We'll also need to create /tftpboot/map-file which will need to look like this:

re ^(/tftpboot/) /tftpboot/\2
re ^/tftpboot/ /tftpboot/
re ^(^/) /tftpboot/\1
re ^([^/]) /tftpboot/\1

This is because of a weird combination of the feature sets of tftpd-hpa, isc-dhcp-server, ironic and diskimage-builder. Basically the combination of relative and dynamic paths are incompatible, and we need to work around the limitations by setting up a map-file. This would be a nice little patch to send upstream one day to one or more of these projects. Of course, if you're deploying ironic in a production Openstacky way where you use neutron and dnsmasq, you don't need the map file - it's only when you configure all these things by hand that you face this problem.

And we'll want to make sure the PXE boot stuff is all in place ready to be served over TFTP.

mrda@irony:~$ sudo cp /usr/lib/syslinux/pxelinux.0 /tftpboot/

mrda@irony:~$ sudo cp /usr/lib/syslinux/chain.c32 /tftpboot/

And now let's start these services

mrda@irony:~$ service tftpd-hpa restart

mrda@irony:~$ service isc-dhcp-server restart

Installing Ironic

Just install straight from github (HEAD is always installable, right?)

mrda@irony:~$ mkdir ~/src; cd ~/src

mrda@irony:~/src$ git clone 

mrda@irony:~/src$ git clone 

mrda@irony:~/src$ git clone

Configuring Ironic

Now we'll need to configure ironic to work standalone. There's a few config options that'll need to be changed from the default including changing the authentication policy, setting the right driver for AMT, setting a hostname and turning off that pesky power state syncing task.

mrda@irony:~$ cd src/ironic/

mrda@irony:~/src/ironic (master)$ cp etc/ironic/ironic.conf.sample etc/ironic/ironic.conf.local

mrda@irony:~/src/ironic (master)$ sed -i "s/#auth_strategy=keystone/auth_strategy=noauth/" etc/ironic/ironic.conf.local

mrda@irony:~/src/ironic (master)$ sed -i "s/#enabled_drivers=pxe_ipmitool/enabled_drivers=pxe_amt/" etc/ironic/ironic.conf.local

mrda@irony:~/src/ironic (master)$ sed -i "s/#host=.*/host=test-host/" etc/ironic/ironic.conf.local

mrda@irony:~/src/ironic (master)$ sed -i "s/#sync_power_state_interval=60/sync_power_state_interval=-1/" etc/ironic/ironic.conf.local

mrda@irony:~/src/ironic (master)$ sed -i "s%#api_url=<None>%api_url=" etc/ironic/ironic.conf.local

mrda@irony:~/src/ironic (master)$ sed -i "s/#dhcp_provider=neutron/dhcp_provider=none/" etc/ironic/ironic.conf.local
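Those sed edits are easy to get subtly wrong, so it can be worth rehearsing them against a throwaway file first. A sketch using a minimal stand-in for the sample config (just the default lines the commands rewrite):

```shell
# Minimal stand-in for ironic.conf.sample, holding only the commented
# defaults that the sed commands above rewrite.
cat > /tmp/ironic.conf.test <<'EOF'
#auth_strategy=keystone
#enabled_drivers=pxe_ipmitool
#sync_power_state_interval=60
#dhcp_provider=neutron
EOF

sed -i "s/#auth_strategy=keystone/auth_strategy=noauth/" /tmp/ironic.conf.test
sed -i "s/#enabled_drivers=pxe_ipmitool/enabled_drivers=pxe_amt/" /tmp/ironic.conf.test
sed -i "s/#sync_power_state_interval=60/sync_power_state_interval=-1/" /tmp/ironic.conf.test
sed -i "s/#dhcp_provider=neutron/dhcp_provider=none/" /tmp/ironic.conf.test

# Every default should now be uncommented and overridden.
cat /tmp/ironic.conf.test
```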

There's also the little matter of making sure the image directories are ready:

mrda@irony:~/src/ironic (master)$ sudo mkdir -p /var/lib/ironic/images

mrda@irony:~/src/ironic (master)$ sudo mkdir -p /var/lib/ironic/master_images

mrda@irony:~/src/ironic (master)$ sudo chmod a+rwx /var/lib/ironic/images

mrda@irony:~/src/ironic (master)$ sudo chmod a+rwx /var/lib/ironic/master_images

Initialising the database

Since we've decided to use MySQL instead of SQLite, we'll need to setup the schema and update the database connection string.

mrda@irony:~/src/ironic (master)$ mysql -u root -p -e "create schema ironic"
mrda@irony:~/src/ironic (master)$ sed -i "s/#connection=.*/connection=mysql:\/\/root:<database-password>@localhost\/ironic/" etc/ironic/ironic.conf.local

mrda@irony:~/src/ironic (master)$ ironic-dbsync --config-file etc/ironic/ironic.conf.local create_schema

And that's everything that needs to be done to prepare the VM for running ironic. The next post will cover starting the ironic services, building images for deployment, and poking ironic from the command line.

Sympathy - Empathy - Compassion

Sympathy - Empathy - Compassion
Communication can be surprisingly hard.
Human beings talk a lot.  Well, many humans do.  Some less so, some more so, but I reckon, in general, there's a lot of daily jibber jabber.
Some of that talking is light-hearted "Small Talk". Some of it is world-changing speeches.  Some of it is the daily to and fro we need to get stuff done; at work, at play, at home.
Some talking leads to conflict.  Whilst thinking and talking about how to resolve conflict in more constructive ways, Gina Likins and I puzzled over how we could talk about compassion, and the important part it plays in conflict resolution.  We talked about Empathy and Sympathy, and examined various definitions of all these words.
In the end, this is what we came up with. We were thinking about Free and Open Source Software communities. But it seems to have resonated with others, so, I thought I'd post it here, and expand a little on each point.
Sympathy, Empathy and Compassion are "Big feels".  So, this is just my take on how we might more easily distinguish them from each other.

Sympathy
This is when we feel sorry for someone.  We acknowledge that something has happened which isn't good for them, and is causing them some level of distress.  In the software world, it might be like noticing that someone has reported a bug.

Empathy
This goes a step further. This is when we really acknowledge something is wrong. Perhaps we've experienced it too; we understand the issue is real. In software terms, perhaps we can replicate the bug, and acknowledge it's an issue that needs addressing.

Compassion
The next step comes when we are motivated to help. We've not just acknowledged there's a problem, but are willing to do something to help fix it. In software terms, it means stepping up to help fix the bug that's been reported.
Bug trackers and issue queues can be sources of minor conflict.  We lack the non-verbal cues provided by tone of voice and body language that help when raising issues.  It's natural to feel defensive.  But perhaps taking a moment to reflect on how we communicate, what we hear, what we mean, and how we respond could really help us get more done, together.

CONFIRMED: The Last Great Prediction Of The Big Bang! | Medium

Seventy years ago, we had taken fascinating steps forward in our conception of the Universe. Rather than living in a Universe governed by absolute space and absolute time, we lived in one where space and time were relative, depending on the observer. We no longer lived in a Newtonian Universe, but rather one governed by general relativity, where matter and energy cause the fabric of spacetime itself to curve.


[Now] A leftover glow unlike any other — of neutrinos — has finally been seen.

Read the full article.

“Two degrees above absolute zero was never so hot.”

UW scientists are pioneering research on ‘body maps’ in babies’ brains | UW Today

The pattern of infants’ brain activity corresponded to the body parts being used, providing the first evidence that watching someone else use a specific body part prompts a corresponding pattern of activity in the infant neural body map.

And much more… interesting research!

September 09, 2015

NASA releases 3D-printable models to the public

As part of its continuing program of education and outreach, NASA has released 22 printable models of NASA and European space probes, asteroids, and planetary landscapes for the hobbyist and space enthusiast.

The 3D models are available from the NASA website for free and are printable on any desktop 3D printer using plastic filaments. It’s the latest in a long tradition of NASA science and technology artwork made available to the public going back to its founding in 1958.

Danish Happiness

This article “Denmark is the Happiest Country” (Huffington Post, Oct 2013) is from a while ago, but quite worthwhile particularly in the current federal political context in Australia.

Relevant aspects are personal welfare as well as the overall social and political climate, and economic prosperity. It’s a pretty complete package.

The article notes that people feeling well cared for is a key factor, and that’s obviously not about seeing more police on the streets or politicians making scary announcements about terrorism. It’s also about caring for those around us. That’s a local social aspect, as well as an international issue (how to respond to a refugee crisis).

Other matters of importance are equality (in every respect), education, healthcare.

Worth a read. Of course, those who most need to learn these things are the people who don’t think there’s anything wrong with what they’re currently doing. But still.

State of Electronics – Getting Started

My friend Jon Oxer (Freetronics) contributed to a short film by Karl Moller (State of Electronics) about how different people (including well knowns such as Dick Smith) first became interested in electronics. Beautifully done, it’s a worthwhile watch.

Jon also writes about his first electronics experience:

When I was in primary school, we all went on a visit to a local community radio station somewhere around Northcote. They let each of us speak on the radio for a moment, which was very exciting for a little kid!

However, the really significant thing was that one of the staff found out I was interested in electronics so he drew the circuit diagram of a crystal receiver on a scrap of paper for me to take home with me. I later scrounged up the various parts required to assemble it, and spent hours listening to stations such as 3RPH.

It blew my mind that something as simple as a crystal set could receive radio signals without even needing a battery, so it would just sit there playing the sound 24/7 and never go flat. It was like having magical powers to cast a spell on an inanimate object and make it do something interesting.

September 08, 2015

Lost and alone in the dark

I've been doing the Canberra Bush Walking Club navigation course for the last couple of weeks, and last night's exercise was a night time dead reckoning navigation session. The course is really good by the way and I've been enjoying it a lot.

It should be pointed out that I also wasn't lost, or alone, but it sure was dark.

Anyway, the basic idea of the exercise is that you're given a hand drawn map, and a set of markers. You determine the bearing from each marker to the next, and the distance to walk. You then set off on your adventure. Getting a bearing or distance wrong matters, because you either need to stop and find the next way point, or carry the mistake on to the next marker. The markers were generally things like "gate in fence" or "two big trees".
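The arithmetic behind each leg is just basic trigonometry. Here's an illustrative Python sketch (my own, not part of the club's course materials) that accumulates bearing/distance legs into a grid offset from the start point:

```python
import math

def dead_reckon(legs):
    """Accumulate (bearing_deg, distance_m) legs into an (east, north)
    offset in metres from the start point, flat-ground approximation.
    Bearings are degrees clockwise from grid north."""
    east = north = 0.0
    for bearing_deg, distance_m in legs:
        theta = math.radians(bearing_deg)
        east += distance_m * math.sin(theta)
        north += distance_m * math.cos(theta)
    return east, north

# Two legs: 500 m due east, then 300 m due north.
e, n = dead_reckon([(90.0, 500.0), (0.0, 300.0)])
print(round(e, 1), round(n, 1))  # → 500.0 300.0
```

A consistent veer (like my drift to the right) shows up as a small rotation applied to every leg, which is why knowing your personal error lets you correct for it.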

It turns out for me the hardest part is walking in a straight line when it's dark. If you look at the GPS logged map below, you can see that the consistent error is that I tend to veer slowly to the right. That's a pretty useful thing to know, because it means I can correct a bit more for it next time. I got the line of march (not the bearing!) pretty badly wrong on the way to the dam, and that resulted in a bit of an adventure to find that way point. I think we missed the next way point as well because we carried the mistake on by setting off from the wrong point on the dam for the next leg.

I really enjoyed this little walk, and I think I need to do a few more of these to get better at this skill. It seems arbitrary, but if my GPS ever fails and the weather is terrible it might come down to a skill like this keeping me moving in the right direction or not.

Also, I think this would make a super good exercise for scouts. Now to try and convince them it's fun...

Interactive map for this route.

Tags for this post: blog canberra bushwalk navigation

Related posts: Outline mode numbering of headings; Destinator 3 GPS navigation for the PocketPC


Testing the FreeDV API

James Ahlstrom (the author of the Quisk SDR software) submitted a patch for the FreeDV API to change the sample rate used for FreeDV 700 from the obscure 7500 Hz to a more useful 8000 Hz. FreeDV 700(B) mode uses the coherent PSK modem (COHPSK) which I spent the first six months of 2015 developing but haven’t written much about yet. This modem is responsible for much of the low SNR performance improvement of FreeDV 700(B) over 1600.

There were a couple of challenges with this patch:

  1. 8000 to 7500Hz is not a neat ratio.
  2. To accommodate differences in the tx and rx sample clocks, the demodulator has a time varying number of input samples. Occasionally the demodulator asks for a few more, or a few less samples.
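On the first point: a rational resampler needs the two rates reduced to their smallest integer ratio, which determines how many polyphase filter phases it must carry. A quick sketch (illustrative only, not the actual FreeDV resampler code, which is in C):

```python
from math import gcd

def rational_ratio(fs_in, fs_out):
    """Reduce a sample rate change to the smallest integer L/M ratio:
    interpolate by L, then decimate by M."""
    g = gcd(fs_in, fs_out)
    return fs_out // g, fs_in // g

# 8000 -> 7500 Hz reduces to 15/16: interpolate by 15, decimate by 16,
# far less neat than a trivial ratio like 8000 -> 4000 Hz.
print(rational_ratio(8000, 7500))  # → (15, 16)
print(rational_ratio(8000, 4000))  # → (1, 2)
```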

On a personal note it’s really wonderful (and unusual) to have a high quality, challenging DSP patch submitted – thank you so much Jim!

I thought it would be worth documenting how I tested the patch; (i) so others can appreciate the magic of extremely long command line Linux DSP simulations and (ii) so I don’t forget it for next time. This blog is useful as my on-line log book as it’s searchable.

If anyone has any questions, just ask in the comments.

1/ I tested it with clock offsets of +5 and -5 Hz, e.g.

$ ./freedv_tx 700B ../../raw/ve9qrp.raw - --testframes | sox -t raw -r 8000 -s -2 - -t raw -r 7995 -s -2 -t raw - | ./freedv_rx 700B - /dev/null --testframes

bits: 78512 errors: 0 BER: 0.00

2/ Looking at freedv_rx_log.txt, I can see “nin” getting occasional low values:

frame: 1345  demod sync: 1  nin:640 demod snr: 21.25 dB  bit errors: 0
frame: 1346  demod sync: 1  nin:613 demod snr: 21.38 dB  bit errors: 0
frame: 1347  demod sync: 1  nin:640 demod snr: 21.57 dB  bit errors: 0

That’s the demod asking for “a few less samples please” to accommodate the sample clock offset. If something is going to go wrong, this is where it will be. However we can see sync is not lost and no bit errors introduced. CHECK

3/ Then I tested in an AWGN channel at an Eb/No of around 4 dB, which for coherent PSK should give us a Bit Error Rate (BER) of around 1%. Let's throw in a 10 Hz frequency offset as well:

$ ./freedv_tx 700B ../../raw/ve9qrp.raw - --testframes | sox -t raw -r 8000 -s -2 - -t raw -r 7995 -s -2 -t raw - | ./cohpsk_ch - - -22 10 0 1 | ./freedv_rx 700B - /dev/null --testframes

NodB: -22.00 foff: 10.00 Hz fading: 0 inclip: 1.00

peak pwr.....:  137.73
av input pwr.:   20.04
av pwr fading:   20.03
noise pwr....:   23.62
clipping.....:    0.00 %
PAPR (dB)....:    8.37 (target 7.00)
C/No (dB)....:   35.03
SNR3k........:    0.26
Eb/No(Rb=700):    4.82

bits: 78512 errors: 925 BER: 0.01
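As a sanity check on those numbers: for ideal coherent BPSK/QPSK the per-bit error rate in AWGN is 0.5·erfc(√(Eb/No)), and SNR in a given bandwidth follows directly from C/No. A back-of-envelope sketch in Python (illustrative only, not part of the FreeDV tools):

```python
import math

def bpsk_ber(ebno_db):
    """Theoretical bit error rate for ideal coherent BPSK/QPSK in AWGN."""
    ebno = 10 ** (ebno_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebno))

def snr_in_bw(cno_db, bw_hz):
    """SNR in dB over bandwidth bw_hz, given C/No in dB-Hz."""
    return cno_db - 10 * math.log10(bw_hz)

# Ideal BER at the reported Eb/No of 4.82 dB is about 0.7%; the measured
# 925/78512 = 1.2% also includes real-modem implementation loss.
print(bpsk_ber(4.82))

# The reported SNR3k line checks out: 35.03 - 10*log10(3000) ≈ 0.26 dB.
print(round(snr_in_bw(35.03, 3000), 2))  # → 0.26
```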



4/ For comparison, the Fs=7500 Hz modem without the patch:

$ ./freedv_tx 700B ../../raw/ve9qrp.raw - --testframes | sox -t raw -r 7500 -s -2 - -t raw -r 7505 -s -2 -t raw - | ./cohpsk_ch - - -22 10 0 1 | ./freedv_rx 700B - /dev/null --testframes


NodB: -22.00 foff: 10.00 Hz fading: 0 inclip: 1.00

peak pwr.....:  136.33
av input pwr.:   20.06
av pwr fading:   20.05
noise pwr....:   23.62
clipping.....:    0.00 %
PAPR (dB)....:    8.32 (target 7.00)
C/No (dB)....:   35.03
SNR3k........:    0.26
Eb/No(Rb=700):    4.82

bits: 77952 errors: 1162 BER: 0.01

Ok so with the patch the modem gets better (fewer bit errors, 925 versus 1162)! Wellllll, I think it’s due to this line in cohpsk_ch.c where we work out the noise power:

    /*  N = var = No*Fs */
    No = pow(10.0, NodB/10.0);
    variance = COHPSK_FS*No;

We scale the noise power according to Fs, which is still set to 7500Hz, despite our sample rate being raised to 8000Hz. So we are injecting a little less noise than we should be, lowering the BER. I need cohpsk_ch to test the core cohpsk modem (cohpsk_tx, cohpsk_rx) that runs at Fs=7500Hz AND freedv_tx/rx that now runs at Fs=8000Hz. So I'm not sure how to handle this atm. Maybe a freedv_ch that rebuilds cohpsk_ch with FS #defined to 8000? Meh, worry about that later.
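The size of the under-injection is easy to quantify: scaling the noise variance by 7500 instead of 8000 makes the simulated channel about 0.28 dB easier than intended. A quick illustrative check in Python:

```python
import math

def noise_shortfall_db(fs_used, fs_actual):
    """dB by which injected noise power falls short when the variance
    is scaled by fs_used but samples actually arrive at fs_actual."""
    return 10 * math.log10(fs_actual / fs_used)

# Scaling by 7500 instead of 8000 injects ~0.28 dB less noise than
# intended, enough to nudge the measured BER down a little.
print(round(noise_shortfall_db(7500, 8000), 2))  # → 0.28
```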

Now I need to refactor the FreeDV GUI program for these API changes…

Reading Further

This blog post on Testing a FDMDV modem has some examples of nasty sample clock offsets caused by PC sound cards. The FDMDV modem is used for FreeDV 1600.

Council Minutes 12 August 2015

Wed, 2015-08-12 19:45 - 20:50

1. Meeting overview and key information


Josh H, Josh S, Craige M, Sae Ra, James, Chris, Tony



Meeting opened by Josh H at 1945hrs and quorum was achieved

MOTION that the previous minutes of 28 July are correct

Moved: Sae Ra Germaine

Seconded: Josh H

Carried with 2 abstentions

2. Log of correspondence

Motions moved on list


Motions carried over from list

MOTION by JOSHUA HESKETH to APPROVE the JoomlaDay Brisbane 2015 Budget version as of the 27/07/15

Seconded: James Iseppi

Passed with 3 abstentions

General correspondence

Mirroring of OpenVZ files from SERGEY BRONNIKOV

Forwarded to the Mirroring Team about the ability to take care of the mirror.

Currently the server may not have the capacity, but to ask again in 6 months' time.

Drupal 8 Invoice

The invoice was paid but there was a discrepancy with the invoice.

This is due to the Overseas Bank Fees.

Marked as resolved.

3. Review of action items from previous meetings

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.

Discussion held about means of finding people willing to assist with both the maintenance of the website platform as well as the content available on this.

JOSH H to speak to Donna regarding this

UPDATE: Ongoing

UPDATE: to be moved to a general action item. To do a call for help to work on the website. Could this be treated as a project?

We need to at least get the website to D8 and automate the updating process.

ACTION: Josh to get a backup of the site to Craige

ACTION: Craige to stage the website to see how easy it is to update.

UPDATE: Craige to log in to the website to elevate permissions.

UPDATE: Still in progress

ACTION: Josh H to tarball the site.

Outstanding action

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.

Update: Still in progress

ACTION WordCamp Brisbane - JOSH H to contact Brisbane members who may possibly be able to attend conference closing

ACTION: Sae Ra to send through notes on what to say to James.

UPDATE: James delivered a thank you message to WordCamp.

WordCamp was a successful event. Thank you to the organisers.

ACTION: Josh H to get a wrap up/closing report

UPDATE: Organisers are meeting and will submit a closing report.

UPDATE: Still in progress

ACTION: Josh H to follow-up on Invoices from WordCamp Sydney

UPDATE: Would be interested in changing the subcommittee structure for ongoing conferences. Conference committees to draft a policy.

UPDATE: Currently being followed up.

Admin Team draft budget from STEVEN WALSH

UPDATE: Awaiting a firmer budget

UPDATE: Still awaiting

Sponsorship for DrupalCamp event:

for Silver Sponsorship valued at $500

MOTION: Approve the expense of $500 for DrupalCamp Sydney

Seconded James

Passed unanimously.

ACTION: Josh H to let DrupalCamp know.

4. Items for discussion

LCA2016 update

Call for papers has closed. Would be interesting to see the unique entry numbers.

Papers F2F in 2 weeks time in Sydney

LCA2017 update

No news as yet. Team to meet in the next month or so.

LCA2018 update

There have been no expressions of interest yet.

PyCon AU update

Was a raging success and highly praised. Currently wrapping up the finances. Books will be closed soon and a closure report will be sought. Thank you to the PyConAU Brisbane 2014 and 2015 teams.

New subcommittee proposal to be submitted for PyConAU 2016.

Drupal South

Correspondence: Quotes from Venue

UPDATE: Waiting on a budget

UPDATE: The treasurer has stepped down. The Lead is looking at options and will get back to us.

ACTION: Tony to remove Si’s access from Xero and Westpac

WordCamp Brisbane

Wrapping up and will be sending through a closure report


ACTION: Josh H to follow-up on budget status

ACTION: Josh to ping OSDConf

Payment for venue has been processed, banking and finances have been sorted.

UPDATE: Josh H to ping OSDConf


Wrapped up and voting will come to a close. 3 finalists have been picked.

UPDATE: Motion will be moved on list tonight.

Red Carpet awards:

Craige McWhirter is able to go to represent LA.

Josh and Josh will reply on list if they are able to go.


Teleconf has been conducted and Xero and Westpac has been introduced to the key members of the team.

Risks have been covered.

Tony to rename the accounts in Westpac.

5. Items for noting

Second F2F

Dates have been set for 14-15-16th F2F

UPDATE: to be deferred until the next Council meeting.

Meeting room for AGM has been booked.

6. Other business

Membership of auDA

Relationship already exists.

LA has the potential to influence the decisions that are made.

ACTION: Council to investigate and look into this further. To be discussed at next fortnight.

UPDATE: Carried to next meeting

MOTION: Josh H moves that LA becomes a Demand Class member of auDA

Seconded: Tony B

Passed unanimously.

ACTION: Josh H to sign up with LA CC

UPDATE: In progress


David would like to keep working on ZooKeepr.

We will need to find a solution that does not block volunteers from helping work on ZooKeepr.

ACTION: James to look at ZooKeepr

UPDATE: In Progress.

UPDATE: to be completed during PyconAU

UPDATE: Migration work has been completed and things are improving. 3 actions are still outstanding with the registration page.

ACTION: Josh S to catch up with David Bell regarding the documentation.

UPDATE: In progress.

UPDATE: James has sent through a request of high priority actions

To be removed from future agendas

Grant Request from Kathy Reid for Renai LeMay’s Frustrated State

MOTION by Josh H given the timing the council has missed the opportunity to be involved in the Kickstarter campaign. The council believes this project is still of interest to its members and will reach out to Renai on what might be helpful in an in kind, financial or other way. Therefore the grant request is no longer current and to be closed.

Seconded Sae Ra Germaine

Passed unanimously

ACTION: Josh Stewart to contact Renai

UPDATE: Contact has been made.

Meetup payments for LCA, Humbug, LibrePlanet.

Clinton Roy has been funding the account.

We are currently paying for the SLUG meetup.

Deferred until current meetup account is evaluated and if it can use the LA account or if we have to create a new account.

LCA is now under the Linux Australia account

ACTION: Josh H to find out how to consume the other Meetup events: Humbug, LibrePlanet.

7. In camera

3 items were discussed in camera

2050hrs close.

Council Minutes 29 July 2015

Wed, 2015-07-29 19:48 - 20:21

Meeting overview and key information


Josh Hesketh, Sae Ra Germaine, Craige McWhirter, James Iseppi, Josh Stewart


Tony Breeds, Chris Neugebauer

Meeting opened by Josh H at 1948hrs and quorum was achieved

1. MOTION that the previous minutes of 15 July are correct

Moved: Josh H

Seconded: Craige

Passed with one abstention

2. Log of correspondence

Motions moved on list


General correspondence


3. Review of action items from previous meetings

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.

Discussion held about means of finding people willing to assist with both the maintenance of the website platform as well as the content available on this.

JOSH H to speak to Donna regarding this

UPDATE: Ongoing

UPDATE: to be moved to a general action item. To do a call for help to work on the website. Could this be treated as a project?

We need to at least get the website to D8 and automate the updating process.

ACTION: Josh to get a backup of the site to Craige

ACTION: Craige to stage the website to see how easy it is to update.

UPDATE: Craige to log in to the website to elevate permissions.

UPDATE: Still in progress

ACTION: Josh H to tarball the site.

Outstanding action

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.

Update: Still in progress

ACTION WordCamp Brisbane - JOSH H to contact Brisbane members who may possibly be able to attend conference closing

ACTION: Sae Ra to send through notes on what to say to James.

UPDATE: James delivered a thank you message to WordCamp.

WordCamp was a successful event. Thank you to the organisers.

ACTION: Josh H to get a wrap up/closing report

UPDATE: Organisers are meeting and will submit a closing report.

ACTION: Josh H to follow-up on Invoices from WordCamp Sydney

UPDATE: Would be interested in changing the subcommittee structure for ongoing conferences. Conference committees to draft a policy.

UPDATE: Currently being followed up.

GovHack 2015 as a subcommittee

MOTION by Josh H We accept govhack as an LA Sub-committee with the task of running GovHack at a national level with:

Geoff Mason - lead

Alysha Thomas

Pia Waugh - as the liaison to LA

Sharen Scott

Diana Ferry

Alex Sadleir

Richard Tubb

Jan Bryson

Keith Moss

Under the Sub-committee policy v1 to allow the committee to run with autonomy and to use an external entity for administration.

Seconded Chris

Passed Unanimously

The old Subcommittee policy will need to come into effect

UPDATE: Bill from GovHack for Tony to process.

Need to discuss the Linux Australia Prize. Who will be judging the prize. The prize value is at $2000 “Best use or contribution to Open Source”.

For future GovHack events an option to offer tickets to a LA run conference.

MOTION: Josh H Moves LA sponsors GovHack for the prize of “Best use of or contribution to Open Source” for the value of $2000.

Seconded by Sae Ra

Passed unanimously.

UPDATE: Josh H to find out if we need to do anything to Judge the OpenSource Bounty.

5th September Awards ceremony.

Admin Team draft budget from STEVEN WALSH

UPDATE: Awaiting a firmer budget

UPDATE: Still awaiting

4. Items for discussion

LCA2016 update

CFP closes this sunday. Get those submissions in!

LCA2017 update

No items

LCA2018 update

There have been no expressions of interest.

Reminder to go out after CFP have closed.

PyCon AU update

The PyCon AU graph is up-to-date -- 431 registrations (vs 377 on the same day last year); everything is looking pretty excellent so far

Everything is very positive.

It’s on this weekend!

Drupal South

Correspondence: Quotes from Venue

UPDATE: Waiting on a budget

WordCamp Brisbane

Wrapping up and will be sending through a closure report


ACTION: Josh H to follow-up on budget status

ACTION: Josh to ping OSDConf

Payment for venue has been processed, banking and finances have been sorted.


Mentioned previously. 1 Month from now Prizes will be finalised.


Budget has been submitted by CARLY WILLATS

Deferred to on list.

5. Items for noting

Second F2F

Dates have been set for 14-15-16th F2F

ACTION: Josh H to book the Hotel and conference room.

Tony has advised that he is no longer available.

Possibly change dates to 12/13 September or October?

6. Other business

Membership of auDA

Relationship already exists.

LA has the potential to influence the decisions that are made.

ACTION: Council to investigate and look into this further. To be discussed at next fortnight.

UPDATE: Carried to next meeting

MOTION: Josh H moves that LA becomes a Demand Class member of auDA

Seconded: Tony B

Passed unanimously.

ACTION: Josh H to sign up with LA CC

UPDATE: In progress


David would like to keep working on ZooKeepr.

We will need to find a solution that does not block volunteers from helping work on ZooKeepr.

ACTION: James to look at ZooKeepr

UPDATE: In Progress.

UPDATE: to be completed during PyconAU

ACTION: Josh S to catch up with David Bell regarding the documentation.

UPDATE: In progress.

ACTION: Josh H to catch up with James at PyconAU

Grant Request from Kathy Reid for Renai LeMay’s Frustrated State

MOTION by Josh H given the timing the council has missed the opportunity to be involved in the Kickstarter campaign. The council believes this project is still of interest to its members and will reach out to Renai on what might be helpful in an in kind, financial or other way. Therefore the grant request is no longer current and to be closed.

Seconded Sae Ra Germaine

Passed unanimously

ACTION: Josh Stewart to contact Renai

UPDATE: Contact has been made.

Meetup payments for LCA, Humbug, LibrePlanet.

Clinton Roy has been funding the account.

We are currently paying for the SLUG meetup.

Deferred until current meetup account is evaluated and if it can use the LA account or if we have to create a new account.

7. In Camera

2 items were discussed in camera

2021hrs close.

September 07, 2015

The Effect of Screens Before Bedtime

Staring at screens right before sleep turns out to be a lot worse than previously thought. Dr. Dan Siegel, clinical professor of psychiatry at the UCLA School of Medicine, lays out all of the negative effects that bedtime screen viewing can have on the brain and body.

And of course, that’s aside from the effects of the contents! If you read any news, you may find yourself all riled up and annoyed at something you can do absolutely nothing about (or at least not at that moment). If you check your work email, that’s just as problematic.

Caterpillar Gets Operation and Goes on Tour

Our robotic caterpillar mascot needed a little operation today: its middle horizontal servo had been acting up in recent weeks, so I replaced it. I’m happy to report it’s made a full recovery!

OpenSTEM’s robotic caterpillar gets a spinal operation

Later today we’re visiting Chermside library (during their renovation they’re currently located at North Regional Business Centre, 960 Gympie Road, opposite the Westfield Chermside Shopping Centre).

We’ll also be demonstrating the Mirobot drawing turtles from our popular Robotics Program.

Song Lyrics Flow Chart

I’m not a great fan of flow-charts when it comes to programming, but I reckon this one with the lyrics of the classic Beatles song Hey Jude is pretty cool:


If you like that kind of geeky fun… a few years ago there was a Twitter trend #songsincode (links to the archive) in which thousands of people captured the title of a song using a bit of (pseudo-)code (of whatever programming language), of course also bound by Twitter’s 140 character message limit.

It saw crafty things like

substring("the tiger",6,1)

(by @antallan),

baabaa=new sheep{color:black};echo(baabaa.hasWool?string.format("{0}!\n{0}, {1} bags full","Yes Sir",baa.bags.count):"No");

(by @web_bod) and many many more.

OpenSTEM at Open Source Developers’ Conference 2015

Hooray! Arjen and Claire have both been selected to speak at the Open Source Developers’ Conference 2015, scheduled for 27-29 October in beautiful Hobart, Tasmania:

OSDC is a great conference, low cost (about $300 including lunches and the conference dinner!) with a friendly, knowledgeable and engaging group of people and sessions. Always enjoyable and highly educational.

We hear that OSDC registration has been opened, the draft schedule is up and limited earlybird tickets are available.


Of course we’ll bring along caterpillar (he hasn’t travelled outside QLD yet), and assorted other goodies and gadgets for everybody to see and explore.

The Making of Elite (Video Game)

I’ve written earlier that my first “real” computer was the Acorn BBC Micro. I did have a Sinclair ZX80 a bit before then, but that little machine only had 1KB of RAM and a very shabby flat touch keyboard. The Acorn BBC had a real properly clicky keyboard, vastly more RAM (32KB 😉) and twice the CPU speed, as well as many more options for attaching devices and hacking around in both hardware and software space. I didn’t quite realise all that, but I knew I wanted one and it was an amazing experience to play with and learn from.

In 1984 a new game came out, Elite. Very different from the other stuff that had been around. It was 3D, in space, and didn’t have the typical “3 lives, a few challenges, and N minutes of game play”. In the video below, BBC’s Peter Snow visits the Elite authors David Braben and Ian Bell, to explore how the game was developed. It’s a most interesting story. I didn’t know the 3D space radar screen had been a last minute addition. Indeed it turned out to be one of the key features!

The video doesn’t mention that many aspects of Elite were based on the Classic Traveller RPG (Role Playing Game), including ship designs and trading approach. The authors did acknowledge this elsewhere. It’s fine, just good to know about heritage and influences.

Of course the gaming world has moved on, itself influenced in a major way by Elite. The game Elite itself is also still around, with David Braben most recently developing Elite: Dangerous.

Elite taught interesting lessons for game developers, which are sometimes forgotten today. It’s not all about the fancy graphics, the game play experience is much more than that (and actually not necessarily about the graphics at all). Size matters, even today – inevitably, if you have to load more data from disk, it’s slightly slower. So if you’re more space efficient, the broader game will appear snappier: more responsive and faster.

“Tight code” and design is not typically something programmers focus on now, but it’s a real asset. It requires proper understanding of a system, down the software layers into the hardware. It’s worthwhile, even if you’re not planning on writing the next amazing video game. I don’t like to delve into the common Intel based PCs as they’re overly complex and really quite messy. But looking at Arduino micro-controller environments is entirely feasible, and also Raspberry Pi. And that’s also the kind of understanding that we work on in our OpenSTEM workshops and programs for school classes.

The Raspberry Pi is succeeding in ways its makers almost imagined | The Register

Bebras Computational Thinking Challenge

Bebras is an international initiative whose goal is to promote computational thinking for teachers and students (ages 8-17 / school years 3-12). Bebras is aligned with and supports the new Australian Digital Technologies Curriculum. Bebras Australia is run by NICTA under the Digital Careers program, funded by the Australian Government as represented by the Department of Communications.

The Bebras Australia Computational Thinking Challenge 2015 is from 7 to 18th September 2015. It’s completely free and looks cool (I couldn’t resist and did a couple of the challenges – fun!)

Teachers can register and get things organised now (download the coordinators’ handbook to learn how to set up your students in the competition server; you must also obtain a parental consent form for each student).

Anyone can go to the Bebras challenge site and play with last year’s challenges already to get a feel for what it’s like.

Solar Team Eindhoven presents vehicle that generates more power than it uses

Solar Team Eindhoven presented Stella Lux, an intelligent, solar-powered family car that generates more power than it uses. Read all about it + photos + specs.

Generating more power than you use is very beneficial in an electric car, as it means you’ll be able to charge the batteries while driving – that is, driving will not actually use any battery charge as long as you have enough sunlight. So you only draw on battery charge at other times: seriously cloudy spells, or night. It also means the car generates so much power that even with less sun, plenty is still produced (high efficiency).

Extremely useful.

Press coverage for OpenSTEM Robotics Program at Grovely State School


A journalist and photographer from Brisbane’s North-West News visited Grovely State School, providing this very nice write-up. This is a great acknowledgement of all the work and achievements by the students in the senior classes on electronics soldering, robotics and programming!


It’s been fabulous working with the students and staff at Grovely, and everybody is having a great time – almost forgetting that the OpenSTEM Robotics Program is real curriculum related school work rather than just an incursion experience!

Serendipitously, the Queensland government has recently announced an intention to focus specifically on programming and robotics in education:

“Our goal is to make sure our students are at the cutting edge of innovation through the development of skills to become the technology architects of the digital age,” Queensland Premier Annastacia Palaszczuk said, “This will include an assessment of coding and computer science, as well as early stage robotics, something I firmly believe should be a part of our education system.”

‘Advance Queensland’ package announcement (July 2015)

We’d love to work with your school too, contact us today! We’re currently accepting expressions of interest for the second half of Term 4 (2015) and 2016, and we’re also happy to visit you to meet and discuss your ideas and needs. We love our Robotics Program, but we do much more!

The best person who ever lived is an unknown Ukrainian man

Out of everyone who ever existed, who has done the most good for humanity? It’s a difficult question.


In 1958, Viktor Zhdanov was a deputy minister of health for the Soviet Union. In May of that year, at the Eleventh World Health Assembly meeting in Minneapolis, Minnesota, during the Soviet Union’s first appearance in the assembly after a nine-year absence, Zhdanov presented a lengthy report with a visionary plan to eradicate smallpox. At the time, no disease had ever before been eradicated. No one knew if it could even be done. And no one expected such a suggestion to come from the Soviet Union; in fact, Zhdanov had had to fight internal pressure from the USSR to convince them of his plans. When he spoke to the assembly of the WHO, he conveyed his message with passion, conviction, and optimism, boldly suggesting that the disease could be eradicated within ten years.

Two Documentaries Introduce Delia Derbyshire, the Pioneer in Electronic Music | OpenCulture

September 06, 2015

Twitter posts: 2015-08-31 to 2015-09-06

Parallel and MPI Octave

There are some excellent packages for GNU Octave, the free and open-source numerical computation language that is "highly compatible" with the proprietary and closed-source MATLAB (tm).

read more

Some Interesting R Library Quirks

A researcher uses an HPC system to analyse DNA methylation data. However, when they try to install some related libraries, the installation fails.

read more

September 05, 2015

A Long Term Review of Android Devices

Xperia X10

My first Android device was the Sony Ericsson Xperia X10i [1]. One of the reasons I chose it was its large 4″ screen; nowadays the desirable phones (the ones that are marketed as premium products) are all bigger than that (the Galaxy S6 is 5.1″), and even the slightly less expensive phones are bigger. At the moment Aldi is advertising an Android phone with a 4.5″ screen for $129. But at the time there was nothing better in the price range that I was willing to pay.

I devoted a lot of my first review to the default apps for SMS and Email. Shortly after that I realised that the default email app is never going to be adequate (I now use K9 mail) and the SMS app is barely adequate (but I mostly use instant messaging). I’ve got used to the fact that most apps that ship with an Android device are worthless, the camera app and the app to make calls are the only built in apps I regularly use nowadays.

In the bug list from my first review the major issue was lack of Wifi tethering which was fixed by an update to Android 2.3. Unfortunately Android 2.3 ran significantly more slowly which decreased the utility of the phone.

The construction of the phone is very good. Over the last 2 years the 2 Xperia X10 phones I own have been on loan to various relatives, many of whom aren’t really into technology and can’t be expected to take good care of things. But they have not failed in any way. Apart from buying new batteries there has been no hardware failure in either phone. While 2 is a small sample size I haven’t seen any other Android device last nearly as long without problems. Unfortunately I have no reason to believe that Sony has continued to design devices as well.

The Xperia X10 phones crash more often than most Android phones with spontaneous reboots being a daily occurrence. While that is worse than any other Android device I’ve used it’s not much worse.

My second review of the Xperia X10 had a section about ways of reducing battery use [2]. Wow, I’d forgotten how much that sucked! When I was last using the Xperia X10 the Life360 app that my wife and I use to track each other was taking 15% of the battery, on more recent phones the same app takes about 2%. The design of modern phones seems to be significantly more energy efficient for background tasks and the larger brighter displays use more energy instead.

My father is using one of the Xperia phones now, when I give him a better phone to replace it I will have both as emergency Wifi access points. They aren’t useful for much else nowadays.

Samsung Galaxy S

In my first review of the Galaxy S I criticised it for being thin, oddly shaped, and slippery [3]. After using it for a while I found the shape convenient as I could easily determine the bottom of the phone in my pocket and hold it the right way up before looking at it. This is a good feature for a phone that’s small enough to rotate in my pocket – the Samsung Galaxy Note series of phones is large enough to not rotate in a pocket. In retrospect I think that being slippery isn’t a big deal as almost everyone buys a phone case anyway. But it would still be better for use on a desk if the bulge was at the top.

I wrote about my Galaxy S failing [4]. Two of my relatives had problems with those phones too. Including a warranty replacement I’ve seen 4 of those phones in use and only one worked reliably. The one that worked reliably is now being used by my mother, it’s considerably faster than the Xperia X10 because it has more RAM and will probably remain in regular use until it breaks.


I tried using CyanogenMod [5]. The phone became defective 9 months later so even though CyanogenMod is great I don’t think I got good value for the amount of time spent installing it. I haven’t tried replacing the OS of an Android phone since then.

I really wish that they would start manufacturing phones that can have the OS replaced as easily as a PC.

Samsung Galaxy S3 and Wireless Charging

The Galaxy S3 was the first phone I owned which competes with phones that are currently on sale [6]. A relative bought one at the same time as me and her phone is running well with no problems. But my S3 had some damage to its USB port which means that the vast majority of USB cables don’t charge it (only Samsung cables can be expected to work).

After I bought the S3 I bought a Qi wireless phone charging device [7]. One of the reasons for buying that is so if a phone gets a broken USB port then I can still use it. It’s ironic that the one phone that had a damaged USB port also failed to work correctly with the Qi card installed.

The Qi charger is gathering dust.

One significant benefit of the S3 (and most Samsung phones) is that it has a SD socket. I installed a 32G SD card in the S3 and now one of my relatives is happily using it as a media player.

Nexus 4

I bought a Nexus 4 [8] for my wife as she needed a better phone but didn’t feel like paying for a Galaxy S3. The Nexus 4 is a nice phone in many ways but the lack of storage is a serious problem. At the moment I’m only keeping it to use with Google Cardboard, I will lend it to my parents soon.

In retrospect I made a mistake buying the Nexus 4. If I had spent a little more money on another Galaxy S3 then I would have had a phone with a longer usage life as well as being able to swap accessories with my wife.

The Nexus 4 seems reasonably solid, the back of the case (which is glass) broke on mine after a significant impact but the phone continues to work well. That’s a tribute to the construction of the phone and also the Ringke Fusion case [9].

Generally the Nexus 4 is a good phone so I don’t regret buying it. I just think that the Galaxy S3 was a better choice.

Galaxy Note 2

I got a Samsung Galaxy Note 2 in mid 2013 [10]. In retrospect it was a mistake to buy the Galaxy S3, the Note series is better suited to my use. If I had known how good it is to have a larger phone I’d have bought the original Galaxy Note when it was first released.

Generally everything is good about the Note 2. While it only has 16G of storage (which isn’t much by today’s standards) it has an SD socket to allow expansion. It’s currently being used by a relative as a small tablet. With a 32G SD card it can fit a lot of movies.

Bluetooth Speakers

I received Bluetooth speakers in late 2013 [11]. I was very impressed by them but ended up not using them for a while. After they gathered dust for about a year I started using them again recently. While nothing has changed regarding my review of the Hive speakers (which I still like a lot) it seems that my need for such things isn’t as great as I thought. One thing that made me start using the Bluetooth speakers again is that my phone case blocks the sound from my latest phone and makes it worse than phone sound usually is.

I bought Bluetooth speakers for some relatives as presents, the relatives seemed to appreciate them but I wonder how much they actually use them.

Nexus 5

The Nexus 5 [12] is a nice phone. When I first reviewed it there were serious problems with overheating when playing Ingress. I haven’t noticed such problems recently so I think that an update to Android might have made it more energy efficient. In that review I was very impressed by the FullHD screen and it made me want a Note 3, at the time I planned to get a Note 3 in the second half of 2014 (which I did).

Galaxy Note 3

Almost a year ago I bought the Samsung Galaxy Note 3 [13]. I’m quite happy with it at the moment but I don’t have enough data for a long term review of it. The only thing to note so far is that in my first review I was unhappy with the USB 3 socket as that made it more difficult to connect a USB cable in the dark. I’ve got used to the socket and I can now reliably plug it in at night with ease.

I wrote about Rivers jeans being the only brand that can fit a Samsung Galaxy Note series phone in the pocket [14]. The pockets of my jeans have just started wearing out and I think that it’s partly due to the fact that I bought an Armourdillo Hybrid case [15] for my Note 3. I’ve had the jeans for over 3 years with no noticeable wear apart from the pockets starting to wear out after 10 months of using the Armourdillo case.

I don’t think that the Armourdillo case is bad, but the fact that it has deep grooves and hard plastic causes it to rub more on material when I take the phone out of my pocket. As I check my phone very frequently this causes some serious wear. This isn’t necessarily a problem given that a phone costs 20× more than a pair of jeans; if the case was actually needed to save the phone then it would be worth having some jeans wear out. But I don’t think I need more protection than a gel case offers.

Another problem is that the Armourdillo case is very difficult to remove. This isn’t a problem if you don’t need access to your phone, IE if you use a phone like the Nexus 5 that doesn’t permit changing batteries or SD cards. But if you need to change batteries, SD cards, etc then it’s really annoying. My wife seems quite happy with her Armourdillo case but I don’t think it was a good choice for me. I’m considering abandoning it and getting one of the cheap gel cases.

The sound on the Note 3 is awful. I don’t know how much of that is due to a limitation in the speaker and how much is due to the case. It’s quite OK for phone calls but not much good for music.


I’m currently on my third tablet. One was too cheap and nasty so I returned it. Another was still cheap and I hardly ever used it. The third is a Galaxy Note 10.1 which works really well. I guess the lesson is to buy something worthwhile so you can use it. A tablet that’s slower and has less storage than a phone probably isn’t going to get used much.

Phone Longevity

I owned the Xperia X10 for 22 months before getting the Galaxy S3. As that included 9 months of using a Galaxy S I only had 13 months of use out of that phone before lending it to other people.

The Galaxy S3 turned out to be a mistake as I replaced it in only 7 months.

I had the Note 2 for 15 months before getting the Note 3.

I have now had the Note 3 for 11 months and have no plans for a replacement any time soon – this is the longest I’ve owned an Android phone and been totally satisfied with it. Also I only need to use it for another 4 months to set a record for using an Android phone.

The Xperia was “free” as part of a telco contract. The other phones were somewhere between $500 and $600 each when counting the accessories (case, battery, etc) that I bought with them. So in 4 years and 7 months I’ve spent somewhere between $1500 and $1800 on phones plus the cost of the Xperia that was built in to the contract. The Xperia probably cost about the same so I’ll assume that I spent $2000 on phones and accessories. This seems like a lot. However that averages out to about $1.20 per day (and hopefully a lot less if my Note 3 lasts another couple of years). I could justify $1.20 per day for either the amount of paid work I do on Android phones or the amount of recreational activities that I perform (the Galaxy S3 was largely purchased for Ingress).
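That per-day figure is easy to sanity-check (assuming 4 years and 7 months is about 55 months of roughly 30.4 days each):

```shell
# ~$2000 spent on phones and accessories over roughly 55 months
awk 'BEGIN { printf "%.2f\n", 2000 / (55 * 30.44) }'
# prints 1.19, i.e. about $1.20 per day
```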


I think that phone companies will be struggling to maintain sales of high end phones in the future. When I chose the Xperia X10 I knew I was making a compromise, the screen resolution was an obvious limitation on the use of the device (even though it was one of the best devices available). The storage in the Xperia was also a limitation. Now FullHD is the minimum resolution for any sort of high-end device and 32G of storage is small. I think that most people would struggle to observe any improvement over a Nexus 5 or Note 3 at this time. I think that this explains the massive advertising campaign for the Galaxy S6 that is going on at the moment. Samsung can’t sell the S6 based on it being better than previous phones because there’s not much that they can do to make it obviously better. So they try and sell it for the image.

Outernet modem and LDPC using Octave

I’ve recently started a project to develop an L-band satellite modem with the good people at Outernet. The application is one-way broadcast of data into the developing world. The “earth station”, or user terminal hardware, is an Outernet LNB and purpose-built patch antenna connected to a low cost USB SDR dongle receiver. The modem will use QPSK and an LDPC code, and deliver payload data at around 2400 bit/s in a 5 kHz RF bandwidth, with the modem and LDPC codec running in software on a small Linux machine.
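Those numbers are self-consistent; as a quick sanity check, assuming a rate 1/2 code (the code rate is my assumption here) and 2 bits per QPSK symbol:

```shell
# hedged sanity check of the modem's symbol rate
payload=2400      # payload bit/s
rate_den=2        # assumed rate 1/2 LDPC code
bits_per_sym=2    # QPSK carries 2 bits per symbol
coded=$(( payload * rate_den ))        # 4800 coded bit/s
symbols=$(( coded / bits_per_sym ))    # 2400 symbols/s
echo "$symbols symbols/s, which fits in a 5 kHz channel"
```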

Here is the prototype Outernet L-band antenna:

These days I’m choosy about accepting contract work, as I am very focused on my HF/VHF Digital Voice work. I’m not interested in closed source jobs just to earn money, and/or build a closed product for commercial reasons alone. There needs to be a social bottom line. However, like me, Outernet are focused on “improving the world a little bit”, and they agreed to LGPL the code. So I’m in!

As well as satellite communication, this modem will be useful for terrestrial data. If you have the MIPS, it will scale to different bit rates. As usual, I will be producing Octave simulations and a real-time gcc-compilable C version of the modem.

CML LDPC and Octave

As a first step I’ve been bringing back to life some QPSK modem and LDPC Octave simulations that I developed as prototypes for HF FreeDV. They use the Coded Modulation Library (CML). These simulations were based on Octave code sent to me by Bill, VK5DSP, who helped me work out how to use the CML library.

These simulations will be useful for anyone who wants to try LDPC in Octave. I have documented how to install and use the CML library in ldpc.m, and written ldpcut.m to demonstrate how to use the CML LDPC functions.

Here is a plot of a 1152 bit codeword, rate 1/2 code, compared to uncoded PSK. Notice how the bit errors drop off a cliff at an Eb/No of 1.5dB? Pretty cool. It runs at about 20,000 bit/s on a single core of my modest laptop, without any attempt at optimisation.

September 04, 2015

Distracting adventures in ZFS upgrades

Last week I wanted to play around with some software packages for logging and charting of environmental measurements and events (specifically, two packages: OpenHAB and EmonCMS).

Wanting to save time (sweet irony!), rather than building up a VM and manually configuring the tools, I figured I’d use docker. Except that the workstation I wanted to use was running Debian Squeeze and was still on kernel 3.2, which doesn’t support docker. Oh, and it had a ZfsOnLinux (ZoL) zraid for the root filesystem.

So the steps to get to docker involved upgrading the kernel, ZFS, and by the way, the nvidia drivers.

Mistake #1. I should have just built a Xubuntu 14.04 VM and run docker inside that!

Before upgrading the kernel from Debian backports, I decided to ensure ZfsOnLinux was updated. I (correctly, confirmed) anticipated the most problems with ZoL. Anyway, I knew that upgrading ZoL would be fraught with danger so I read all the documentation, and upgrade advice, and so on, and took all the recommended precautions.

But of course, after going through two cycles of apt-get and dpkg-reconfigure and  rebuilding the initramfs and so on, after rebooting, BAM! A variant of the dreaded “failed to mount the root filesystem” error. Reported close by was a missing kernel module error for something called zcommon.

After a bit of digging and breaking the virtual glass on an emergency boot partition I worked out that I had missed upgrading one of the packages required for ZoL. Why it was not an automatic dependency I don’t know, but after installing something called “libnvpair” the system booted further. And then stopped again.

This one would take rather a bit more work to track down. Semi-helpfully, the entire error message was:

Manually import the root pool at the command prompt and then exit.
Hint: Try: zpool import -R / -N ${ZFS_RPOOL}

At this point the initramfs was dropping my system to a rescue shell and advising me, via the above message, to import the ZFS pool containing the root filesystem. So I tried its suggested ‘zpool import’ command, which actually succeeded, and after some more fiddling to manually mount various file systems I was able to boot the system. However, this manual process only got me out of trouble once, and the underlying problem still needed to be resolved.
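For the record, the manual recovery sequence from the initramfs rescue shell looked roughly like this (the pool and dataset names below are placeholders; use whatever ‘zpool import’ reports on your system):

```
# zpool import -R / -N rpool       # import the root pool without mounting anything
# zfs mount rpool/ROOT/debian      # manually mount the root dataset
# exit                             # leave the rescue shell and continue booting
```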

To get further I had to instrument the initramfs file scripts/zfs with a bunch of echo statements and rebuild the initramfs. (The script files bundled in when rebuilding initramfs on Debian are located under /usr/share/initramfs-tools/scripts) This let me reboot and work out where the zpool import was failing (or not even being called at all.)

As it turns out, zpool was not being called, at all, in a way that would work for my partitioning scheme. The logic in scripts/zfs runs a whole bunch of permutations trying to locate the pool, but if a variable called ROOT is empty it skips executing zpool as required. The solution, as it turns out, was to update my grub with ‘root=zfs:AUTO‘ – previously, my kernel did not require this kernel argument, but now, having upgraded ZoL, from 0.6.2 to 0.6.4, it did.
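In grub.cfg terms the fix amounted to one extra kernel argument (the kernel image name and other arguments below are illustrative, not copied from my config):

```
linux /vmlinuz-3.16.0-0.bpo.4-amd64 root=zfs:AUTO ro quiet
```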

So, what caused this? There were a lot of year or so old threads discussing upgrade errors related to ZfsOnLinux but none of them quite matched my specific scenario.

One possibility is this:

* I run a separate boot filesystem from the usual /boot, containing a hand crafted grub, which can execute various tools such as Gparted, various minimal linux installs for rescue purposes, memcheckx86 and other tools.

* Whenever I upgrade the kernel on this system I need to copy over the vmlinux and initramfs files to this originating boot filesystem from /boot (which is never used by my grub)

* I wonder if ZoL  may have added the root=zfs:AUTO option to the Debian grub update facility, but I neglected to check for changes to the generated /boot/grub/grub.cfg and apply any changes to  my real grub.cfg. And wham!

However, I couldn’t find any references to zfs in /etc/grub.d, so this hypothesis may well be wrong. Via Occam’s razor, perhaps it’s just that my setup on this particular workstation is more complex or unusual than that of most ZfsOnLinux users. Anyway, onward and upwards.

I’m shortly to decide which of OpenHAB or EmonCMS I’ll be using for my Hackaday Prize finals entry. Stay tuned!

September 03, 2015

Cross-compiling a PowerPC64 LE kernel and hitting a GCC bug

Being new at OzLabs I’m dipping my toes into various projects and having a play with PowerPC and so I thought I’d cross-compile the Linux kernel on Fedora. Traditionally PowerPC has been big endian, however it also supports little endian so I wanted to build all the things.

Fedora uses a single cross toolchain that can build all four variants, whereas Debian/Ubuntu splits this out into two different toolchains (a BE and an LE one).

Install dependencies in Fedora:

$ sudo dnf install gcc make binutils-powerpc64-linux-gnu gcc-powerpc64-linux-gnu gcc-c++-powerpc64-linux-gnu bc ncurses-devel

Get the v4.2 kernel:

$ git clone --branch v4.2 --depth 1 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git linux && cd linux

Successful big endian build of the kernel, using the default config for pseries:

$ ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- make pseries_defconfig

$ ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- make -j$(nproc)

# clean after success

$ ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- make clean

$ ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- make mrproper

Building a little endian kernel however, resulted in a linker problem:

$ ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- make pseries_defconfig

$ ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- make menuconfig

# change architecture to little endian:

# Endianness selection (Build big endian kernel) --->

# (X) Build little endian kernel

$ ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- make V=1

Here was the result:

powerpc64-linux-gnu-gcc -mlittle-endian -mno-strict-align -m64 -Wp,-MD,arch/powerpc/kernel/vdso64/ -nostdinc -isystem /usr/lib/gcc/powerpc64-linux-gnu/5.2.1/include -I./arch/powerpc/include -Iarch/powerpc/include/generated/uapi -Iarch/powerpc/include/generated -Iinclude -I./arch/powerpc/include/uapi -Iarch/powerpc/include/generated/uapi -I./include/uapi -Iinclude/generated/uapi -include ./include/linux/kconfig.h -D__KERNEL__ -Iarch/powerpc -DHAVE_AS_ATHIGH=1 -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -std=gnu89 -msoft-float -pipe -Iarch/powerpc -mtraceback=no -mabi=elfv2 -mcmodel=medium -mno-pointers-to-nested-functions -mcpu=power7 -mno-altivec -mno-vsx -mno-spe -mspe=no -funit-at-a-time -fno-dwarf2-cfi-asm -mno-string -Wa,-maltivec -fno-delete-null-pointer-checks -O2 --param=allow-store-data-races=0 -Wframe-larger-than=2048 -fno-stack-protector -Wno-unused-but-set-variable -fomit-frame-pointer -fno-var-tracking-assignments -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -Werror=implicit-int -Werror=strict-prototypes -Werror=date-time -DCC_HAVE_ASM_GOTO -Werror -shared -fno-common -fno-builtin -nostdlib -Wl, -Wl,--hash-style=sysv -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(" -D"KBUILD_MODNAME=KBUILD_STR(" -Wl,-T arch/powerpc/kernel/vdso64/ arch/powerpc/kernel/vdso64/sigtramp.o arch/powerpc/kernel/vdso64/gettimeofday.o arch/powerpc/kernel/vdso64/datapage.o arch/powerpc/kernel/vdso64/cacheflush.o arch/powerpc/kernel/vdso64/note.o arch/powerpc/kernel/vdso64/getcpu.o -o arch/powerpc/kernel/vdso64/

/usr/bin/powerpc64-linux-gnu-ld: arch/powerpc/kernel/vdso64/sigtramp.o: file class ELFCLASS64 incompatible with ELFCLASS32

/usr/bin/powerpc64-linux-gnu-ld: final link failed: File in wrong format

collect2: error: ld returned 1 exit status

arch/powerpc/kernel/vdso64/Makefile:26: recipe for target 'arch/powerpc/kernel/vdso64/' failed

make[2]: *** [arch/powerpc/kernel/vdso64/] Error 1

scripts/ recipe for target 'arch/powerpc/kernel/vdso64' failed

make[1]: *** [arch/powerpc/kernel/vdso64] Error 2

Makefile:949: recipe for target 'arch/powerpc/kernel' failed

make: *** [arch/powerpc/kernel] Error 2

All those files were 64bit, however:

arch/powerpc/kernel/vdso64/cacheflush.o: ELF 64-bit LSB relocatable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), not stripped

arch/powerpc/kernel/vdso64/datapage.o: ELF 64-bit LSB relocatable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), not stripped

arch/powerpc/kernel/vdso64/getcpu.o: ELF 64-bit LSB relocatable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), not stripped

arch/powerpc/kernel/vdso64/gettimeofday.o: ELF 64-bit LSB relocatable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), not stripped

arch/powerpc/kernel/vdso64/note.o: ELF 64-bit LSB relocatable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), not stripped

arch/powerpc/kernel/vdso64/sigtramp.o: ELF 64-bit LSB relocatable, 64-bit PowerPC or cisco 7500, version 1 (SYSV), not stripped

An strace of the failing powerpc64-linux-gnu-gcc command above showed that collect2 (and ld) were being called with an option setting the format to 32bit:

24904 execve("/usr/libexec/gcc/powerpc64-linux-gnu/5.2.1/collect2", ["/usr/libexec/gcc/powerpc64-linux"..., "-plugin", "/usr/libexec/gcc/powerpc64-linux"..., "-plugin-opt=/usr/libexec/gcc/pow"..., "-plugin-opt=-fresolution=/tmp/cc"..., "--sysroot=/usr/powerpc64-linux-g"..., "--build-id", "--no-add-needed", "--eh-frame-hdr", "--hash-style=gnu", "-shared", "--oformat", "elf32-powerpcle", "-m", "elf64lppc", "-o", ...], [/* 66 vars */]

Alan Modra tracked it down to some 32bit hard-coded entries in GCC sysv4.h and sysv4le.h and submitted a patch to the GCC mailing list (Red Hat bug).

I re-built the Fedora cross-gcc package with his patch and it solved the linker problem for me. Hurrah!

September 02, 2015

Getting started with Xmonad

Over the years I've found window managers and desktop environments increasingly getting in my way. It was for this reason I ditched GNOME for Enlightenment 0.17 and again why I recently ditched Enlightenment 0.19 for Xmonad.

I'd tried Xmonad back in 2013 because Haskell but walked away after a fairly half-hearted look. I gave it a much more serious look this time and I'm happy to say that it's here to stay.

Xmonad is light, fast and behaves exactly how I expect it to.

Here's how I went about settling into Xmonad on Debian Jessie:

An Overview of My Xmonad Environment

The current environment I've settled on with Xmonad looks like this:


For the impatient, follow these steps below and you should be on your way:

$ sudo apt-get install xmonad libghc-xmonad-contrib-dev libghc-xmonad-dev suckless-tools trayer xscreensaver scrot
$ wget myconfig and install it here
$ wget this xinitrc to here
$ wget -O /tmp/
$ mkdir -p ~/.fonts/OpenSans
$ unzip -d ~/.fonts/OpenSans /tmp/
$ fc-cache -fv

Select "Default Session" when logging in via GDM, LightDM etc. Enjoy.

Get the Gear

For those of us that are a little more Zen about learning, let's install the packages:

$ sudo apt-get install xmonad libghc-xmonad-contrib-dev libghc-xmonad-dev suckless-tools trayer xscreensaver scrot
Optional Font Installation:

The configuration I provide for xmobar uses the Open Sans font. You can change this to any font that you desire. However if you wish to use Open Sans, here is how you go about getting and installing it:

  1. Download the font:
$ mkdir ~/.fonts/OpenSans
$ wget -O /tmp/
  2. Unzip the font:
$ cd ~/.fonts/OpenSans
$ unzip /tmp/
  3. Rebuild the font cache information files:
$ fc-cache -fv

This will provide you with all the tools that make up my Xmonad environment.

Configure Your Xsession

The very minimum that you'll need to do is edit / create ~/.xsession to have at least this line:

exec xmonad
You can get more creative than this if you like, I recommend reading Getting started with xmonad and How can I use xmonad with a display manager?.

Configure Xmonad

Xmonad, unsurprisingly, uses Haskell as its configuration language. Here is the ~/.xmonad/xmonad.hs I used at the time of writing:

import XMonad
import XMonad.Hooks.DynamicLog
import XMonad.Hooks.ManageDocks
import XMonad.Util.Run(spawnPipe)
import XMonad.Util.EZConfig(additionalKeys)
import System.IO

main = do
        -- Spawn the xmobar status bar
        xmproc <- spawnPipe "/usr/bin/xmobar ~/.xmobarrc"
        xmonad $ defaultConfig
                -- Set your preferred mouse focus behaviour
                { focusFollowsMouse = False
                -- Set your default terminal
                , terminal = "terminology"
                -- Configure some default interactions between Xmonad and xmobar
                , manageHook = manageDocks <+> manageHook defaultConfig
                , layoutHook = avoidStruts $ layoutHook defaultConfig
                , logHook = dynamicLogWithPP $ xmobarPP
                        { ppOutput = hPutStrLn xmproc
                        , ppTitle = xmobarColor "green" "" . shorten 50
                        }
                , modMask = mod4Mask -- Rebind Mod to the Windows key
                -- Configure some additional keys
                } `additionalKeys`
                        -- Lock the screen with Windows+Shift+z
                        [ ((mod4Mask .|. shiftMask, xK_z), spawn "xscreensaver-command -lock")
                        -- Capture the current window with control+PrintScreen
                        , ((controlMask, xK_Print), spawn "sleep 0.2; scrot -s")
                        -- Capture the current desktop with PrintScreen
                        , ((0, xK_Print), spawn "scrot")
                        -- Turn off the display port with Windows+d - Useful for undocking my laptop
                        , ((mod4Mask, xK_d), spawn "/usr/bin/xrandr --output DP1 --off")
                        -- Turn on the display port and set it as the primary display with Windows+Shift+d - tailor to your screen
                        , ((mod4Mask .|. shiftMask, xK_d), spawn "/usr/bin/xrandr --output DP1 --primary ; /usr/bin/xrandr --output LVDS1 --mode 1280x800; /usr/bin/xrandr --output DP1 --mode 2560x1440; /usr/bin/xrandr --output DP1 --left-of LVDS1")
                        -- Volume control keys, if needed. Yours may be different.
                        , ((0 , 0x1008FF11), spawn "amixer set Master 2%-")
                        , ((0 , 0x1008FF13), spawn "amixer set Master 2%+")
                        , ((0 , 0x1008FF12), spawn "amixer set Master toggle")
                        ]

Configure xmobar

Xmobar uses ~/.xmobarrc as its configuration file. I am using the Open Sans font. If you did not download and install that in the earlier steps, you may want to change the font = line to a font that you do have installed.

Here is my .xmobarrc file:

Config { font = "xft:Open Sans:size=10:antialias=true"
        , bgColor = "black"
        , fgColor = "grey"
        , position = TopW L 90
        , lowerOnStart = True
        , commands = [ Run Cpu ["-L","3","-H","70","--normal","green","--high","red", "-t", "CPU <total>"] 10
                , Run CpuFreq ["-t", "<cpu0>GHz <cpu1>GHz"] 10
                , Run Memory ["-t", "MEM <usedratio>%"] 10
                , Run Date "%a %b %_d %H:%M" "date" 10
                , Run Battery ["-t","BAT <left>", "-h", "green", "60", "-l", "red", "10"] 10
                , Run StdinReader
                , Run Weather "YSSY" ["-t"," <tempC>C","-L","10","-H","35","--normal","green","--high","red","--low","lightblue"] 36000
                , Run DynNetwork [] 10
                ]
        , sepChar = "%"
        , alignSep = "}{"
        , template = "%StdinReader% }{ %cpu% %cpufreq% | %memory% | %dynnetwork% | %battery% %YSSY% <fc=#ee9a00>%date%</fc>"
        }

I'd recommend looking up your nearest weather station here and replacing YSSY with that station's code. Unless you live in Sydney, Australia, in which case, you're set.

Configure Trayer

I use trayer as my system tray and I place it to the right of the screen in the gap left by my xmobar configuration:

trayer --edge top --align right --SetDockType true --SetPartialStrut true \
 --expand true --width 10  --transparent true --alpha 0 --tint 0x000000 --height 19 &

I currently run this manually but it could easily be launched automatically elsewhere.
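If you'd rather not run it manually, one option (my suggestion, not part of the original setup) is to launch it from ~/.xsession just before the window manager starts:

```
# ~/.xsession: start the system tray in the background, then the window manager
trayer --edge top --align right --SetDockType true --SetPartialStrut true \
 --expand true --width 10 --transparent true --alpha 0 --tint 0x000000 --height 19 &
exec xmonad
```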

Login & Get Comfortable

There's a useful list of default key bindings and a handy cheatsheet which you'll need to familiarise yourself with.

I've found xmonad to be fast and unobtrusive with default behaviour that continually surprises me by doing exactly what I expected.

Very happy convert here.

tears in rain | Wil Wheaton

I walked out of the loading dock, through a cloud of rotting garbage, and into the alleyway behind the theater. A curtain of rain fell between me and my destination, a little over a block away.

“Do you want to wait here, while I get you an umbrella?” Liz, the producer from Wizards of the Coast, asked me.

“No,” I said, stepping into the rain, extending my arms outward and turning my palms and face to the sky, “it’s been so long since I felt rain fall on my body, I’m not going to let this opportunity pass me by.”

I walked down the sidewalk, surrounded by other PAX attendees. Some were not bothered by the rain, while others held up programs and newspapers and other things to keep it away. A man walked his dog next to me. The dog was unperturbed by the weather. We got to the corner and waited for the light to change. The rain intensified and it was glorious.

“Are you sure this is okay?” She said.

“Oh yes, this is so much more than okay,” I answered, “this is perfect.”


I’ve been feeling pretty much the opposite of awesome for several weeks, now, and actually getting to sit down, face to face, in a semi-quiet few moments with real people who wanted to be there with me was … restorative, I guess is the best word. One player told me, “Thank you for everything you do. From Tabletop to Titansgrave — which is the best thing I’ve ever seen — to talking so openly about anxiety and depression.”


Read Wil’s entire post at

September 01, 2015

Native Instrument's 'pkg' File Format, Web Documentation Compiler/Scraper, Thoughts on the JSF/Counter-Stealth Technologies, and More

Sometimes you get installation errors in Windows and it is absolutely infuriating: the manufacturer offers no easy way of correcting the file (or even of understanding where the error actually is, thanks to arcane error messages) short of re-downloading the entire ISO, etc... This is when some basic reverse engineering skills come in handy.

As you can see below, Native Instruments' 'pkg' file format is actually just an xar archive which itself contains several other archived files.

xar -x -f "Maschine 2 Factory Library Library Part 5.pkg"

user@machine:/media/sda1/NIMCN2FL100$ cd Folder

user@machine:/media/sda1/NIMCN2FL100/Folder$ ls

Bom  PackageInfo  Payload  Scripts

user@machine:/media/sda1/NIMCN2FL100/Folder$ file *

Bom:         Mac OS X bill of materials (BOM) file

PackageInfo: XML document text

Payload:     gzip compressed data, from Unix

Scripts:     gzip compressed data, from Unix

user@machine:/media/sda1/NIMCN2FL100/Folder$ mv Payload Payload.gz

user@machine:/media/sda1/NIMCN2FL100/Folder$ mv Scripts Scripts.gz

user@machine:/media/sda1/NIMCN2FL100/Folder$ gunzip Scripts.gz

user@machine:/media/sda1/NIMCN2FL100/Folder$ ls

Bom  PackageInfo  Payload.gz  Scripts

user@machine:/media/sda1/NIMCN2FL100/Folder$ vim Scripts

user@machine:/media/sda1/NIMCN2FL100/Folder$ gunzip Payload.gz

user@machine:/media/sda1/NIMCN2FL100/Folder$ ls

Bom  PackageInfo  Payload  Scripts

user@machine:/media/sda1/NIMCN2FL100/Folder$ file *

Bom:         Mac OS X bill of materials (BOM) file

PackageInfo: XML document text

Payload:     ASCII cpio archive (pre-SVR4 or odc)

Scripts:     ASCII cpio archive (pre-SVR4 or odc)

The trend towards placing all documentation online can be infuriating at times: it means you need a permanent connection whenever you want to reference anything. The only option is to run a web crawler/website downloader over it, but as I've discovered in the past, the performance of such programs can be frustrating. Recently, I had a similar encounter with some trading software. I wrote a custom script to download all relevant files and then compiled them into a single PDF file.

This reminds me: you don't always have to resort to multi-threading to achieve parallelism/higher performance (I've come across some people who seem to assume you do). In fact, in some languages you can't even use threads.
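As a sketch of that point: in a shell you can fan work out to background processes (no threads involved) and wait for them all. Here the downloads are simulated with sleep; in practice each job would run wget or curl against a real URL:

```shell
# Five simulated "downloads" run concurrently as separate child processes.
start=$(date +%s)
for i in 1 2 3 4 5; do
  ( sleep 1; echo "fetched page $i" ) &   # & backgrounds a separate process
done
wait                                      # block until all five finish
echo "elapsed: $(( $(date +%s) - start ))s"   # ~1s total, not 5s
```

Process-level parallelism like this sidesteps threading entirely, which is exactly why it works even from languages or shells with no thread support.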

If you've ever watched some of the online courses from MOOCs and seen some of the better quality YouTube productions (or other free video upload sites out there), you're sometimes left wondering why you or others would want to pay. Anyhow, as stated previously, I don't like being online all the time and want to download things for offline perusal. Recently, I had a problem with merging the audio though. One, the merge wasn't being done properly, and two, the MP3 tag information was corrupted. I found that mp3wrap and vbrfix did the job.

You may have to change some code in mp3wrap if you use more than 256 files though, then rebuild with ./configure, make, make install...

mp3wrap.h:#define MAXNUMFILE 512
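For reference, the merge-and-fix itself is a two-step affair; a sketch with invented filenames (mp3wrap appends _MP3WRAP to its output name, and vbrfix rewrites the VBR header into a new file):

```
mp3wrap merged.mp3 part001.mp3 part002.mp3 part003.mp3
vbrfix merged_MP3WRAP.mp3 merged-fixed.mp3
```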

Can't believe that in this day and age we don't have better MP3 file verification options. I'm guessing I haven't found the right tool yet?

- as stated before, the US seems more guarded with regards to the program than most other nations
- most advances in 'stealth' have so far come from iterative science and technology improvements. I think the next major advance will likely come from left field though: something which affects the science in general rather than something specific to stealth/defense technology alone. Like 'stealth', my guess is that it may take a bit of time before we feel the impact of such technology in the real world

- if you look at the program more carefully it's much more obvious how savings can accrue over the life of the JSF program. It's clear that the core designs across the variants are no longer as similar as was originally intended, but modular design, self-diagnostics/testing, etc... will still play a role over the entire lifetime of the program

Marine In The F-35 Test Force Shares His Experiences

- the irony is that some of the primary mechanisms that are currently used to reduce temperature for IR stealth are actually used in satellite technology and more inauspicious areas like motorsport technology

- one of the odd things which has struck me about the Russians/Chinese with regards to their defense/intelligence setup is that even if they have the ability to reproduce some Western technologies, they sometimes choose not to. Think about the Buran. They're generally more practical and economical with regards to use of technology. Look at the way finances were handled during the KGB era: they were miserly when it came to budgeting for possible assets compared to their Western counterparts. That's why I'm not entirely certain that the reason they are behind the eight ball on aircraft stealth is simply that they don't have the ability to bridge the gap... At various times throughout history they've held the lead with regards to submarine, missile, and various other core defense technologies. It may simply come down to them trying to allocate their resources in such a way as to achieve their defensive needs for the best price. Either way, I don't think that a war involving the US and near-peer threats such as Russia and China is going to be as clear cut as some people think (especially once the modernisation of their militaries is complete). It will be somewhat of a slugfest...

- even if the JSF can perform CAS duties relatively well there's something we're missing here. The JSF is incredibly expensive, and the way that stealth is so integral to the aircraft means that every time the aircraft gets hit its RCS increases. Moreover, the cost of the shell of the aircraft is exorbitant compared to current technology. For anyone to assume that the JSF is not going to get hit in CAS duties is nonsensical, especially if it's going to try to take over the role of the A-10 (in a like-for-like replacement) at some point. I still prefer a group of loitering drones that can be called in for immediate support at any point if and when required. It should be cheaper, quicker, and more survivable (if designed correctly)... This could be a moot point if Allied nations continue to engage only in non near-peer engagements, as has been the case recently, and the trend continues towards higher-altitude CAS...

- the most common argument that the Russians/Chinese have when the West accuses them of something is that the West isn't any different. The funny thing about this is that technically they're right; it's just the level that each side is willing to stoop to. The US spies on friend and foe alike using mostly technological means, while the former are more reliant on HUMINT. In the context of economic intelligence I'd be very interested to know just how the numbers add up, knowing how much the West spends on technical intelligence, and the same goes for the Russians/Chinese as well... Both sides sound ridiculous at times accusing one another of any wrongdoing...

- if you think about the nature of defense now it's somewhat bemusing. Our concept of defense seems to be almost completely focused around the notion of force projection and qualitative/quantitative superiority. To me, some of what is done has little to do with defense anymore; it's more about attempting to shape the world in the way we want simply because we can...

- there's so much information out there regarding a lot of sensitive military technology that I find it hard to believe some of the classification levels that are applied, and why we would hold people accountable for material that is already out in the open and confirmed by official sources. Sometimes it seems as though much of what the Russians/Chinese need to reverse engineer some technologies can either be purchased or obtained from free and open sources

JSF Making Stealth productionJuly48 TEXT READABLE.pdf


F-35 High Energy Laser

Stealth question - Reduction in RCS

- as stated previously, most people who watch this space know that stealth bombers can be tracked from thousands of kilometers away provided equipment is tuned correctly, environmental conditions are favourable, etc... To me, a lot of the power projection capabilities (or anything which facilitates them) make it feel as though they have homing beacons attached. This includes sensor technology which relies on any active measures, such as long-range RADAR, AWACS, re-fuelling, AEGIS-class warships, aircraft carriers, etc... They're just asking to be hit (by near-peer threats), which probably explains the reasons behind the increasing sensory capabilities of 5th gen aircraft such as the JSF and the increasing de-centralisation of capabilities in 6th gen aircraft designs

- if you know a little bit about military technology you'll be aware that one thing of slight concern is electronic hardware which, when illuminated by certain unique frequencies, basically becomes a homing beacon. The thing is, if you think about this for a bit, isn't it possible to achieve the same thing using microchips (or anything that is symmetrical on an aircraft)? I mean, one of the core tenets of RADAR stealth is incorporating non-planform design: namely, reducing parallel and symmetrical shapes. The legs on a microchip are spaced evenly and symmetrically apart. Provided sufficient power and favourable angles, isn't it also possible to achieve the same thing using the electronics (and other objects) aboard most aircraft (especially if their designs are unique)? The main issues would be power and projection of course...

- in the Iraq war much of the RADAR capability was knocked out extremely early. The opposite has been true in Syria, where much equipment has been turned off and only turned on periodically. Much like my beacon idea, I'm wondering why we couldn't use the same concept to detect SAM and RADAR systems. If we know the rough design, then we should know the rough frequency/wavelength that they operate with... Radiate at sufficient power and at the right angle and they should re-radiate? Perhaps a job for drones, which would search for equipment based on rough intelligence estimates, for mobile equipment in particular?

- people often harp on about how Western defense technology is superior, but we haven't seen a genuine encounter between near-peer threats for a long time. It's also clear that neither side... Operationally, both Russia and the United States have never really given in to their partners on national security concerns. Their partners often don't receive the same information, nor do they receive the same technology (the same fear the US has regarding the F-22 Raptor: they don't want to have to go to war against equally adept equipment which they themselves built. They also worry about the OPSEC of allied nations, believing that allies could leak information since they spend less on national security). I wouldn't be surprised if (much like during the Cold War) much of the publicly available information we have regarding upper-end equipment is substantially wrong

- much has been made of DAS in the JSF. Some Russian fighter aircraft have had much of this basic, core functionality (all around sensory capability) for decades

- a RAND defense analyst recently floated the idea of a slow-moving aircraft with large payload capabilities as an alternative to conventional fighter jets. It could actually render 5th gen fighters completely irrelevant if implemented correctly...

- I've been looking at the design of the PAK-FA slightly more closely and noticed how it has multi-band RADAR capabilities for various purposes. The thing which struck me was the updated R-77 capability whereby the jet provides information in unison with the missile's own internal targeting system. Even if one fails, the other system has enough redundancy to potentially re-acquire a lock. Interestingly, what if we used the same idea in combination with jets and ships, or jets and large ground RADAR? Updated information from longer-wavelength RADAR, in combination with the missiles' or jets' own targeting systems, would allow for an increased chance of lock and a reduced chance of being outwitted by decoys as well...

- modern rules of engagement may mean that visual identification is necessary before a pilot can launch an attack, rendering any BVR capabilities a moot point

- for a while now the defense and intelligence communities have used animals such as dolphins and sea lions for various purposes, including recon as well as force protection. Something I've been curious about is whether or not we can use animals as radiators undersea as well as in the air... Think about this: if you are in a room with furniture and you scream, it sounds different than if the room were empty. It's the same if you scream in front of someone versus with nothing in front of you. If you can activate all the animals in your surrounding area (using beacons at abnormal frequencies, or by training them and attaching 'radiators' to them) to create sound, you can detect the presence of other objects in your immediate vicinity (without giving away your own position). Roughly the same principle allowed the detection and shooting down of an F-117 stealth fighter

If you've ever worked with laptops (or any device) with dead batteries you've probably wondered about how to restore them to working condition...

August 31, 2015

Inspecting ODF round trips for attribute retention

Given an office application, one might like to know which attributes are preserved properly across a load and save cycle. For example, is the background color or margin size mutated just by loading and saving an ODF file with OfficeAppFoo version 0.1?

The odfautotests project includes many tests on simple ODF documents to see how well each office application preserves the information in the document. Testing ODF attribute preservation, though, might not be as simple as one might first imagine. Consider the below document with a single paragraph using a custom style:


  <text:p text:style-name="style">hello world</text:p>


In the styles.xml file one might see something like the following:






<style:style style:name="TestStyle" style:family="paragraph">
     <style:text-properties fo:background-color="transparent" />
</style:style>


This input is obviously designed to see how well the fo:background-color style information is preserved by office applications. One thing to notice is that the style:family attribute in the above is paragraph.

If one loads and saves a document with the above fragments in it using LibreOffice 4.3.x then they might see something like the following in the output ODF file. In content.xml:

<text:p text:style-name="TestStyle">hello world</text:p>

And in the styles.xml file the background-color attribute is preserved:

<style:style style:name="TestStyle" style:family="paragraph">
      <style:text-properties fo:background-color="transparent"/>
</style:style>


One can test if the attribute has been preserved using XPath selecting on the @style-name of the text:p and then making sure that the matching style:style has the desired fo:background-color sub attribute.

The XPath might look something like the below, which has been formatted for display:



//s:style[@s:display-name='TestStyle'
  or (not(@s:display-name) and @s:name='TestStyle')]
  /s:text-properties/@fo:background-color


Performing the load and save using Word 2016 is quite interesting. The resulting content.xml file might have:

<style:style style:name="P1" style:family="paragraph"
             style:parent-style-name="TestStyle">
     <style:paragraph-properties fo:break-before="page"/>
</style:style>



<office:text text:use-soft-page-breaks="true">

  <text:p text:style-name="P1">hello world</text:p>


and in styles.xml the background-color setting is pushed up to the paragraph style level.

<style:style style:name="TestStyle" style:family="paragraph">
      <style:text-properties fo:hyphenate="false"/>
</style:style>


<style:default-style style:family="paragraph">
  <style:text-properties ... fo:background-color="transparent"/>
</style:default-style>

So to see if the output ODF has the fo:background-color setting one has to consider not just the directly used style "P1" but also parent style elements which might contain the attribute instead. In this case it was pushed right up to the paragraph style.

For the Word output the above XPath doesn't necessarily work: if the attribute we are looking for has been pushed up to the paragraph style then we should look for it there instead. If we are looking at the paragraph level then we also need to be sure that there is no attribute directly at the lower, TestStyle, level. It also helps to ensure in the selection that the paragraph style is really a parent of TestStyle (or P1 in the above).

After a bit of pondering I found an interesting solution that can be evaluated using plain XPath 1.0. To test the value I pick off the fo:background-color from both the TestStyle and the paragraph level. If those values are passed to concat() then, when the attribute appears at only one of the two levels, we get a string that can be compared against the expected value. If the attribute appears at both levels, we are in trouble.

For example:

<style:style style:name="TestStyle" style:family="paragraph">
  <style:text-properties ... fo:background-color="transparent" />
</style:style>

<style:default-style style:family="paragraph">
  <style:text-properties ... fo:background-color="#FF0000"/>
</style:default-style>


Considering the semantic XPath query of concat( TestStyle/@fo:background-color, paragraph/@fo:background-color ), the result would be transparent#FF0000, which would not match a string comparison with 'transparent'.

The trick is to use a predicate ([...]) on the second item in the concat() call. If we only return the paragraph/@fo:background-color value when there is no value associated with the TestStyle, then the concat will effectively return one or the other (the value directly on TestStyle, or nothing on TestStyle plus the attribute from paragraph).
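Concretely, the combined selector might look something like the below (a hand-written sketch: the s and fo prefixes are assumed bound to the ODF style and XSL-FO namespaces, and the paths are simplified relative to a real styles.xml):

```
concat(
  //s:style[@s:name='TestStyle']/s:text-properties/@fo:background-color,
  //s:default-style[@s:family='paragraph']/s:text-properties
      [not(//s:style[@s:name='TestStyle']/s:text-properties/@fo:background-color)]
      /@fo:background-color
)
```

Comparing the result of that expression with 'transparent' then succeeds whichever of the two levels the application moved the attribute to.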

With this the query can allow the office application to move the information to a parent of the style and still match for a test.

Walk to the Southern Most Point

I've just realized that I didn't post any pics of my walk to the southernmost point of the ACT. The CBC had a planned walk to the southernmost point on the ACT border and I was immediately intrigued. So, I took a day off work and gave it a go. It was well worth the experience, especially as Matthew the guide had a deep knowledge of the various huts and so forth in the area. A great day.


See more thumbnails

Interactive map for this route.

Tags for this post: blog pictures 20150818-southernmostpoint photo canberra bushwalk


An awesome day at CMAC

Anyone who has been involved in aeromodelling for a while dreams of having one of those days when everything works right. It doesn't happen often, but when it does it sure is nice!

CanberraUAV had one of those days yesterday. It was a wonderful sunny winter's day at our local flying field and we were test flying our latest creations.

First up was the "Vampire Mark 2", a combined plane/quadcopter built by Jack Pittar. It consists of a Senior Telemaster with a 15cc petrol engine, plus four quadcopter motors. Yesterday was its maiden flight, and it was set up with a Pixhawk controlling the quad motors and the rest controlled manually as a normal RC model. We flew with two pilots (Justin Galbraith and Jack Pittar). The takeoff was vertical as a quadcopter, and it then transitioned to fixed wing flight using the extremely simple method of advancing the throttle on the plane while lowering the throttle on the quadcopter. Transition was very easy and the plane reached 31m/s in forward flight at full throttle. The landing transition was equally easy: Jack lowered the throttle on the plane while Justin raised the throttle on the quadcopter. No problems!

Given this was the first flight of a highly experimental aircraft we were pretty pleased with the result! Jack is thinking of building an even bigger version soon that will be able to complete the 2016 OBC mission with plenty of room for equipment.

Next up was our JS90 helicopter, originally built by Ryan Pope and adapted for autonomous flight.

This is the flybar version of the JS90-v2 heli from HobbyKing, with an OS GT15HZ petrol engine fitted, along with a Pixhawk2 and a new "Blue Label" Lidar from PulsedLight. We've been flying (and crashing!) this heli for a while now, but yesterday was finally the day when we got to try high speed autonomous flight.

Apart from a small gap where we lost telemetry in the north-west corner, you can see the tracking in the auto mission was great. Once we learned how to tune a flybar heli (which turns out to be extremely simple!) it flies really well. We did have some issues getting it to fly as fast as we want: above about 17m/s it occasionally pulled back and stopped for a second before continuing. We knew it could do more, as it happily flew at over 27m/s in ALT_HOLD mode. With some help from Randy and a small code change to help with tuning, we think we've tracked down the cause of that issue and expect to be doing 27m/s AUTO missions next weekend.

Next up was another quad plane, this one quite different from the big telemaster build!

We had been trying to track down a problem with loiter on the Parrot Bebop when running ArduPilot. We suspected there may have been a GPS lag issue, so we wanted to get some flight data that would allow us to compare the performance of a uBlox GPS with the GPS in the Bebop for dynamic flight. We thought a good way to do that would be to strap the Bebop to a plane and take it for a fly. The results were very interesting! For this flight we saw a lag on the Bebop GPS of over 5 seconds, which must be some sort of buffering issue. We'll chat to Julien from Parrot to see if we can track down the issue.

Next we thought it would be fun to see if something else could lift the tiny Bebop. Peter had his Solo there, so we strapped the Bebop to it and went for a fast fly in drift mode. Great fun!

Overall it was a fantastic day! Next week we're really looking forward to trying the Trex700 petrol conversion that Greg has built which you can see in the background in this photo of our build day on Saturday. The build looks really good and we expect it to perform even better than the JS90, as Greg has managed to fit a Pixhawk while still being able to install the canopy. That should reduce drag quite a lot.

The switch of focus for CanberraUAV to VTOL aircraft has been a lot of work, but the results are really paying off and we're having a lot of fun in the process. We hope that we'll have a lot more weekends like this one in the future.

FreeDV SM1000 v SSB demo

Great demo of FreeDV 1600 (SM1000 at one end) and SSB between AA6E and K5MVP. You can really notice the reduced audio bandwidth and ever-present noise of SSB compared to FreeDV. This is just the start: we are gradually improving the low-SNR robustness and speech quality of FreeDV. Thanks so much Martin for posting this.

I like watching the fading FreeDV signal. I think there is a “lowpass” effect on the signal – more power allocated to low frequency carriers. This may be due to the transmitter tx filter, or possibly the SM1000. FreeDV is only as good as the SNR of the weakest carrier. Ideally they should all be the same power. This is one of the “tuning” issues I’d like to look into over the next few months.

August 30, 2015

Twitter posts: 2015-08-24 to 2015-08-30