Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

August 31, 2015

FreeDV SM1000 v SSB demo

Great demo of FreeDV 1600 (SM1000 at one end) and SSB between AA6E and K5MVP. You can really notice the reduced audio bandwidth and ever present noise of SSB compared to FreeDV. This is just the start – we are gradually improving the low SNR robustness and speech quality of FreeDV. Thanks so much Martin for posting this.

I like watching the fading FreeDV signal. I think there is a “lowpass” effect on the signal – more power allocated to low frequency carriers. This may be due to the transmitter tx filter, or possibly the SM1000. FreeDV is only as good as the SNR of the weakest carrier. Ideally they should all be the same power. This is one of the “tuning” issues I’d like to look into over the next few months.

August 30, 2015

Twitter posts: 2015-08-24 to 2015-08-30

August 29, 2015

Letting someone ssh into your laptop using Pagekite

In order to investigate a bug I was running into, I recently had to give my colleague ssh access to my laptop behind a firewall. The easiest way I found to do this was to create an account for him on my laptop and set up a pagekite frontend on my Linode server and a pagekite backend on my laptop.

Frontend setup

Setting up my Linode server in order to make the ssh service accessible and proxy the traffic to my laptop was fairly straightforward.

First, I had to install the pagekite package (already in Debian and Ubuntu) and open up a port on my firewall by adding the following to both /etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules:

-A INPUT -p tcp --dport 10022 -j ACCEPT

Then I created a new CNAME for my server in DNS:   3600    IN  CNAME

With that in place, I started the pagekite frontend using this command:

pagekite --clean --isfrontend --rawports=virtual --ports=10022

Backend setup

After installing the pagekite and openssh-server packages on my laptop and creating a new user account:

adduser roc

I used this command to connect my laptop to the pagekite frontend:

pagekite --clean --service_on=raw/

Client setup

Finally, my colleague needed to add the following entry to ~/.ssh/config:

  CheckHostIP no
  ProxyCommand /bin/nc -X connect -x %h:10022 %h %p

and install the netcat-openbsd package since other versions of netcat don't work.

On Fedora, we used netcat-openbsd-1.89 successfully, but this newer package may also work.

He was then able to ssh into my laptop.
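For reference, a complete client-side entry might look like this (the kite name here is hypothetical – substitute whatever name the backend was registered under; the Host pattern has to match the kite name, since %h serves as both the proxy host and the ssh target):

```text
Host ssh.example.pagekite.me
  CheckHostIP no
  ProxyCommand /bin/nc -X connect -x %h:10022 %h %p
```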

Making settings permanent

I was quite happy setting things up temporarily on the command-line, but it's also possible to persist these settings and to make both the pagekite frontend and backend start up automatically at boot. See the documentation for how to do this on Debian and Fedora.

August 28, 2015

Running OPAL in qemu – the powernv platform

Ben has a qemu tree up with some work-in-progress patches to qemu to support the PowerNV platform. This is the “bare metal” platform like you’d get on real POWER8 hardware running OPAL, and it allows us to use qemu like my previous post used the POWER8 Functional Simulator – to boot OpenPower firmware.

To build qemu for this, follow these steps:

apt-get -y install gcc python g++ pkg-config libz-dev libglib2.0-dev \
  libpixman-1-dev libfdt-dev git
git clone
cd qemu
./configure --target-list=ppc64-softmmu
make -j `grep -c processor /proc/cpuinfo`

This will leave you with a ppc64-softmmu/qemu-system-ppc64 binary. Once you’ve built your OpenPower firmware to run in a simulator, you can boot it!

Note that this qemu branch is under development, and is likely to move/change or even break.

I do it like this:

cd ~/op-build/output/images;  # so skiboot.lid is in pwd
~/qemu/ppc64-softmmu/qemu-system-ppc64 -m 1G -M powernv \
-kernel zImage.epapr -nographic \
-cdrom ~/ubuntu-vivid-ppc64el-mini.iso

and this lets me test that we launch the Ubuntu vivid installer correctly.

You can easily add other qemu options such as additional disks or networking and verify that it works correctly. This way, you can do development on some skiboot functionality or a variety of kernel and op-build userspace (such as the petitboot bootloader) without needing either real hardware or using the simulator.

This is useful if, say, you’re running on ppc64el, on which the POWER8 functional simulator is currently not available.

doing nothing on modern CPUs

Sometimes you don’t want to do anything. This is understandably human, and probably a sign you should either relax or get up and do something.

For processors, you sometimes do actually want to do absolutely nothing. Often this will be while waiting for a lock. You want to do nothing until the lock is free, but you want to be quick about it: you want to start work as soon as possible once that lock is free.

On CPU cores with more than one thread (e.g. hyperthreading on Intel, SMT on POWER) you likely want to let the other threads have all of the resources of the core if you’re sitting there waiting for something.

So, what do you do? On x86 there’s been the PAUSE instruction for a while and on POWER there’s been the SMT priority instructions.

The x86 PAUSE instruction delays execution of the next instruction for some amount of time. On POWER, each executing thread in a core has a priority, and this is how chip resources are handed out (you can set different priorities using special no-op instructions, as well as by setting the Relative Priority Register to map how these coarse-grained priorities are interpreted by the chip).

So, when you’re writing spinlock code (or similar, such as the implementation of mutexes in InnoDB) you want to check if the lock is free, and if not, spin for a bit, but at a lower priority than the code running in the other thread that’s doing actual work. The idea being that when you do finally acquire the lock, you bump your priority back up and go do actual work.

Usually, you don’t continually check the lock, you do a bit of nothing in between checking. This is so that when the lock is contended, you don’t just jam every thread in the system up with trying to read a single bit of memory.

So you need a trick to do nothing that the compiler isn’t going to optimize away.

Current (well, MySQL 5.7.5, but it’s current in MariaDB 10.0.17+ too, and other MySQL versions) code in InnoDB to “do nothing” looks something like this:

ulint
ut_delay(ulint delay)
{
        ulint   i, j = 0;
        UT_LOW_PRIORITY_CPU();
        for (i = 0; i < delay * 50; i++) {
                j += i;
                UT_RELAX_CPU();
        }
        if (ut_always_false) {
                ut_always_false = (ibool) j;
        }
        UT_RESUME_PRIORITY_CPU();
        return(j);
}
On x86, UT_RELAX_CPU() ends up being the PAUSE instruction.

On POWER, the UT_LOW_PRIORITY_CPU() and UT_RESUME_PRIORITY_CPU() macros tune the SMT thread priority (and on x86 they’re defined as nothing).

If you want an idea of when this was all written, this comment may be a hint:

/*!< in: delay in microseconds on 100 MHz Pentium */

But, if you’re not on x86 you don’t have the PAUSE instruction, instead, you end up getting this code:

# elif defined(HAVE_ATOMIC_BUILTINS)
#  define UT_RELAX_CPU() do { \
     volatile lint      volatile_var; \
     os_compare_and_swap_lint(&volatile_var, 0, 1); \
   } while (0)

Which you may think “yep, that does nothing and is not optimized away by the compiler”. Except you’d be wrong! What it actually does is generate a lot of memory traffic. You’re now sitting in a tight loop doing atomic operations, which have to be synchronized between cores (and sockets), since there’s no real way the hardware can work out that this is only a local variable that is never accessed from anywhere else.

Additionally, the ut_always_false and j variables are also attempts to trick the compiler into not optimizing the loop away, and since ut_always_false is a global, you’re generating traffic to a single global variable too.

Instead, what’s needed is a compiler barrier. This simple bit of nothing tells the compiler “pretend memory has changed, so you can’t optimize around this point”.

__asm__ __volatile__ ("":::"memory")

So we can eliminate all sorts of useless non-work and instead do what we want: do nothing (a for loop for X iterations that isn’t optimized away by the compiler) and don’t have side effects.
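As a minimal sketch (my reconstruction of the idea, not the actual submitted patch), a do-nothing loop built on that barrier looks like this – the loop body does real arithmetic, but generates no loads, stores, or atomics:

```c
/* Compiler barrier: "pretend memory has changed", so the compiler
 * can neither reorder around this point nor delete the loop. It
 * emits no instructions at all. */
#define COMPILER_BARRIER() __asm__ __volatile__ ("" ::: "memory")

/* Spin doing nothing for delay*50 iterations: real arithmetic in
 * the loop, no memory traffic, and no risk of the loop being
 * optimized away. */
static unsigned long do_nothing_delay(unsigned long delay)
{
    unsigned long i, j = 0;

    for (i = 0; i < delay * 50; i++) {
        j += i;
        COMPILER_BARRIER();
    }
    return j;
}
```

On POWER you would bracket the loop with the SMT priority instructions as well; the barrier is the part that replaces the bogus compare-and-swap.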

In MySQL bug 74832 I detailed this with the appropriately produced POWER assembler. Unfortunately, this patch (submitted under the OCA) has sat since November 2014 (so, over 9 months) with no action. I’m a bit disappointed by that to be honest.

Anyway, the real moral of this story is: don’t implement your own locking primitives. You’re either going to get it wrong or you’ll be wrong in a few years when everything changes under you.

See also:

August 27, 2015

D8 Accelerate - Game over?

D8 Accelerate Chook Raffle - Game Over!

The Drupal 8 Accelerate campaign has raised over two hundred and thirty thousand dollars ($233,519!!).  That's a lot of money! But our goal was to raise US$250,000 and we're running out of time. I've personally helped raise $12,500 and I'm aiming to raise 8% of the whole amount, which equals $20,000. I've got less than $7500 now to raise. Can you help me? Please chip in.

Most of my colleagues on the board have contributed anchor funding via their companies. As a micro-enterprise, my company Creative Contingencies is not in a position to do that, so I set out to crowdfund my share of the fundraising effort.

I'd really like to shout out and thank EVERYONE who has made a contribution to get me this far. Whether you donated cash, or helped to amplify my voice, thank you SO so soooo much. I am deeply grateful for your support.

If you can't, or don't want to contribute because you already do enough for Drupal, that's OK! I completely understand. You're awesome. :) But perhaps you know someone else who is using Drupal, or who will be using Drupal, whom you could ask to help us? Do you know someone, or an organisation, who gets untold value from the effort of our global community? Please ask them, on my behalf, to Make a Donation

If you don't know anyone, perhaps you can help simply by sharing my plea? I'd love that help. I really would!

And if you, like some others I've spoken with, don't think people should be paid to make Free Software then I urge you to read Ashe Dryden's piece on the ethics of unpaid labor in the Open Source Community. It made me think again.

Do you want to know more about how the money is being spent? 


Perhaps you want to find out how to apply to spend it on getting Drupal8 done?


Are you curious about the governance of the program?


And just once more, with feeling, I ask you to please consider making a donation.

So how much more do I need to get it done? To get to GAME OVER?

  • 1 donation x $7500 = game over!
  • 3 donations x $2500
  • 5 donations x $1500
  • 10 donations x $750
  • 15 donations x $500 <== average donation
  • 75 donations x $100 <== most common donation
  • 100 donations x $75
  • 150 donations x $50
  • 500 donations x $15
  • 750 donations x $10 <== minimum donation

Thank you for reading this far. Really :-)

August 26, 2015

Linux Security Summit 2015 – Wrapup, slides

The slides for all of the presentations at last week’s Linux Security Summit are now available at the schedule page.

Thanks to all of those who participated, and to all the events folk at Linux Foundation, who handle the logistics for us each year, so we can focus on the event itself.

As with the previous year, we followed a two-day format, with most of the refereed presentations on the first day, with more of a developer focus on the second day.  We had good attendance, and also this year had participants from a wider field than the more typical kernel security developer group.  We hope to continue expanding the scope of participation next year, as it’s a good opportunity for people from different areas of security, and FOSS, to get together and learn from each other.  This was the first year, for example, that we had a presentation on Incident Response, thanks to Sean Gillespie who presented on GRR, a live remote forensics tool initially developed at Google.

The keynote by sysadmin Konstantin Ryabitsev was another highlight, one of the best talks I’ve seen at any conference.

Overall, it seems the adoption of Linux kernel security features is increasing rapidly, especially via mobile devices and IoT, where we now have billions of Linux deployments out there, connected to everything else.  It’s interesting to see SELinux increasingly play a role here, on the Android platform, in protecting user privacy, as highlighted in Jeffrey Vander Stoep’s presentation on whitelisting ioctls.  Apparently, some major corporate app vendors, who were not named, have been secretly tracking users via hardware MAC addresses, obtained via ioctl.

We’re also seeing a lot of deployment activity around platform Integrity, including TPMs, secure boot and other integrity management schemes.  It’s gratifying to see the work our community has been doing in the kernel security/ tree being used in so many different ways to help solve large scale security and privacy problems.  Many of us have been working for 10 years or more on our various projects  — it seems to take about that long for a major security feature to mature.

One area, though, that I feel we need significantly more work, is in kernel self-protection, to harden the kernel against coding flaws from being exploited.  I’m hoping that we can find ways to work with the security research community on incorporating more hardening into the mainline kernel.  I’ve proposed this as a topic for the upcoming Kernel Summit, as we need buy-in from core kernel developers.  I hope we’ll have topics to cover on this, then, at next year’s LSS.

We overlapped with Linux Plumbers, so LWN was not able to provide any coverage of the summit.  Paul Moore, however, has published an excellent write-up on his blog. Thanks, Paul!

The committee would appreciate feedback on the event, so we can make it even better for next year.  We may be contacted via email per the contact info at the bottom of the event page.

August 24, 2015

The Legacy of Autism and the Future of Neurodiversity

The New York Times published an interesting review of a book entitled “NeuroTribes: The Legacy of Autism and the Future of Neurodiversity”, authored by Steve Silberman (534 pp. Avery/Penguin Random House).

Silberman describes how autism was discovered by a few different people around the same time, but with each the publicity around their work is warped by their environment and political situation.

This means that we mainly know the angle that one of the people took, which in turn warps our view of Aspergers and autism. Ironically, the lesser known story is actually that of Hans Asperger.

I reckon it’s an interesting read.

Mark got a booboo

Mark Latham losing his AFR column because an advertiser thought his abusive tweets and articles weren't worth being associated with isn't actually a freedom of speech issue.

Nope, not even close to it.

Do you know why?


No one is stopping Latho from spouting his particular down home "outer suburban dad" brand of putrescence.

Hell, all he has to do to get back up and running is go and set up a wordpress account, and he can be back emptying his bile duct on the internet along with the rest of us who don't get cushy newspaper jobs because we managed to completely screw over our political career in a most spectacular way.

Hey, he could set up a Patreon account and everyone who wants to can support him directly, either via a monthly sub or a per-flatulence rate.

This whole thing reeks of a massive sense of entitlement, both from Latho himself and his media supporters. Bolt, Devine and others who have leapt to his defence all push this idea that any move to expose writers to consequences arising from their rantings is some sort of mortal offence against democracy and freedom. Of course, while they do this, they demand the scalps of anyone who dares to write abusive rants against their own positions.


Oh and as I've been reminded, Australia doesn't actually have Freedom of Speech as they do in the US.


Dual Rav 4 SM1000 Installation

Andy VK5AKH, and Mark VK5QI, have mirror image SM1000 mobile installations, same radio, even the same car! Some good lessons learned on testing and debugging microphone levels that will be useful for other people installing their SM1000. Read all about it on Mark’s fine blog.

Codec 2 Masking Model Part 1

Many speech codecs use Linear Predictive Coding (LPC) to model the short term speech spectrum. For very low bit rate codecs, most of the bit rate is allocated to this information.

While working on the 700 bit/s version of Codec 2 I hit a few problems with LPC and started thinking about alternatives based on the masking properties of the human ear. I’ve written Octave code to prototype these ideas.

I’ve spent about 2 weeks on this so far, so thought I better write it up. Helps me clarify my thoughts. This is hard work for me. Many of the steps below took several days of scratching on paper and procrastinating. The human mind can only hold so many pieces of information. So it’s like a puzzle with too many pieces missing. The trick is to find a way in, a simple step that gets you a working algorithm that is a little bit closer to your goal. Like evolution, each small change needs to be viable. You need to build a gentle ramp up Mount Improbable.

Problems with LPC

We perceive speech based on the position of peaks in the speech spectrum. These peaks are called formants. To clearly perceive speech the formants need to be distinct, e.g. two peaks with a low level (anti-formant) region between them.

LPC is not very good at modeling anti-formants, the space between formants. As it is an all-pole model, it can only explicitly model peaks in the speech spectrum. This can lead to unwanted energy in the anti-formants, which makes speech sound muffled and hard to understand. The Codec 2 LPC postfilter improves the quality of the decoded speech by suppressing inter-formant energy.

LPC attempts to model spectral slope and other features of the speech spectrum which are not important for speech perception. For example, “flat”, high pass, or low pass filtered speech is equally easy for us to understand. We can pass speech through a Q=1 bandpass or notch filter and it will still sound OK. However LPC wastes bits on these features, and gets into trouble with large spectral slope.

LPC has trouble with high pitched speakers where it tends to model individual pitch harmonics rather than formants.

LPC is based on “designing” a filter to minimise mean square error rather than on the properties of the human ear. For example, it works on a linear frequency axis rather than the log frequency axis of the human ear. This means it tends to allocate bits evenly across frequency, whereas an allocation weighted towards low frequencies would be more sensible. LPC often produces large errors near DC, an important area of human speech perception.

LPC puts significant information into the bandwidth of filters or width of formants, however due to masking the ear is not very sensitive to formant bandwidth. What is more important is sharp definition of the formant and anti-formant regions.

So I started thinking about a spectral envelope model with these properties:

  1. Specifies the location of formants with just 3 or 4 frequencies. Focuses on good formant definition, not the bandwidth of formants.
  2. Doesn’t care much about the relative amplitude of formants (spectral slope). This can be coarsely quantised or just hard coded using, e.g. voiced speech has a natural low pass spectral slope.
  3. Works in the log amplitude and log frequency domains.

Auditory Masking

Auditory masking refers to the “capture effect” of the human ear, a bit like an FM receiver. If you hear a strong tone, then you can’t hear slightly weaker tones nearby. The weaker ones are masked. If you can’t hear these masked tones, there is no point sending them to the decoder. So we can save some bits. Masking is often used in (relatively) high bit rate audio codecs like MP3.

I found some Octave code for generating masking curves (Thanks Jon!), and went to work applying masking to Codec 2 amplitude modelling.

Masking in Action

Here are some plots to show how it works. Let’s take a look at frame 83 from hts2a, a female speaker. First, 40ms of the input speech:

Now the same frame in the frequency domain:

The blue line is the speech spectrum, the red the amplitude samples {Am}, one for each harmonic. It’s these samples we would like to send to the decoder. The goal is to encode them efficiently. They form a spectral envelope that describes the speech being articulated.

OK, so let’s look at the effect of masking. Here is the masking curve for a single harmonic (m=3, the highest one):

Masking theory says we can’t hear any harmonics beneath the level of this curve. This means we don’t need to send them over the channel and can save bits. Yayyyyyy.

Now let’s plot the masking curves for all harmonics:

Wow, that’s a bit busy and hard to understand. Instead, let’s just plot the top of all the masking curves (green):

Better. We can see that the entire masking curve is dominated by just a few harmonics. I’ve marked the frequencies of the harmonics that matter with black crosses. We can’t really hear the contribution from other harmonics. The two crosses near 1500Hz can probably be tossed away as they just describe the bottom of an anti-formant region. So that leaves us with just three samples to describe the entire speech spectrum. That’s very efficient, and worth investigating further.

Spectral Slope and Coding Quality

Some speech signals have a strong “low pass filter” slope between 0 and 4000 Hz. Others have a “flat” spectrum – the high frequencies are about the same level as the low frequencies.

Notice how the high frequency harmonics spread their masking down to lower frequencies? Now imagine we bumped up the level of the high frequency harmonics, e.g. with a first order high pass filter. Their masks would then rise, masking more low frequency harmonics, e.g. those near 1500Hz in the example above. Which means we could toss the masked harmonics away, and not send them to the decoder. Neat. Only down side is the speech would sound a bit high pass filtered. That’s no problem as long as it’s intelligible. This is an analog HF radio SSB replacement, not Hi-Fi.

This also explains why “flat” samples (hts1a, ve9qrp) with relatively less spectral slope code well, whereas others (kristoff, cq_ref) with a strong spectral slope are harder to code. Flat speech has improved masking, leaving less perceptually important information to model and code.

This is consistent with what I have heard about other low bit rate codecs. They often employ pre-processing such as equalisation to make the speech signal code better.

Putting Masking to work

Speech compression is the art of throwing stuff away. So how can we use this masking model to compress the speech? What can we throw away? Well, let’s start by assuming only the samples with the black crosses matter. This means we get to toss quite a bit of information away. This is good. We only have to transmit a subset of {Am}. How, I’m not sure yet. Never mind that for now. At the decoder, we need to synthesise the speech, just from the black crosses. Hopefully it won’t sound like crap. Let’s work on that for now, and see if we are getting anywhere.

Attempt 1: Let’s toss away any harmonics that have a smaller amplitude than the mask (Listen). Hmm, that sounds interesting! Apart from not being very good, I can hear a tinkling sound, like trickling water. I suspect (but haven’t proved) this is because harmonics are coming and going quickly as the masking model puts them above and below the mask. Little packets of sine waves. I’ve heard similar sounds on other codecs when they are nearing their limits.

Attempt 2: OK, so how about we set the amplitude of all harmonics to exactly the mask level (Listen)? Hmmm, sounds a bit artificial and muffled. Now, I’ve learned that muffled means the formants are not well formed. It needs more difference between the formant and anti-formant regions. I guess this makes sense: if all samples are exactly on the masking curve, we can just hear ALL of them. The LPC post filter I developed a few years ago increased the definition of formants, which had a big impact on speech quality. So let’s try….

Attempt 3: Rather than deleting any harmonics beneath the mask, let’s reduce their level a bit. That way we won’t get tinkling – harmonics will always be there rather than coming and going. We can use the mask instead of the LPC post filter to decide which harmonics we need to attenuate (Listen).

That’s better! Close enough to using the original {Am} (Listen), however with lots of information removed.

For comparison here is Codec 2 700B (Listen) and Codec 2 1300 (aka FreeDV 1600 when we add FEC) (Listen). This is the best I’ve done with LPC/LSP to date.

The post filter algorithm is very simple. I set the harmonic magnitudes to the mask (green line), then boost only the non-masked harmonics (black crosses) by 6dB. Here is a plot of the original harmonics (red), and the version (green) I mangle with my model and send to the decoder for synthesis:

Here is a spectrogram (thanks Audacity) for Attempt 1, 2, and 3 for the first 1.6 seconds (“The navy attacked the big”). You can see the clearer formant representation with Attempt 3, compared to Attempt 2 (lower inter-formant energy), and the effect of the post filter (dark line in center of formants).
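The post filter step itself is tiny. A sketch of it (my paraphrase of the algorithm described above; the array names are made up):

```c
/* Attempt 3 post filter: set every harmonic magnitude (in dB) to
 * the mask level, then lift only the non-masked ("black cross")
 * harmonics 6 dB clear of the mask to sharpen the formants. */
static void mask_postfilter(int n, double am_db[],
                            const double mask_level_db[],
                            const int masked[])
{
    for (int m = 0; m < n; m++) {
        am_db[m] = mask_level_db[m];
        if (!masked[m])
            am_db[m] += 6.0;
    }
}
```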

Command Line Kung Fu

If you want to play along:

~/codec2-dev/build_linux/src$ ./c2sim ../../raw/kristoff.raw --dump kristoff


octave:49> newamp_batch("../build_linux/src/kristoff");


~/codec2-dev/build_linux/src$ ./c2sim ../../raw/kristoff.raw --amread kristoff_am.out -o - | play -t raw -r 8000 -e signed-integer -b 16 - -q

The “newamp_fbf” script lets you single step through frames.


To synthesise the speech at the decoder I also need to come up with a phase for each harmonic. Phase and speech is still a bit of a mystery to me. Not sure what to do here. In the zero phase model, I sampled the phase of the LPC synthesis filter. However I don’t have one of them any more.

Let’s think about what the LPC filter does with the phase. We know that at resonance the phase shifts rapidly:

The sharper the resonance the faster it swings. This has the effect of dispersing the energy in the pitch pulse exciting the filter.

So with the masking model I could just choose the center of each resonance, and swing the phase about madly. I know where the center of each resonance is, as we found that with the masking model.

Next Steps

The core idea is to apply a masking model to the set of harmonic magnitudes {Am} and select just 3-4 samples of that set that define the mask. At the decoder we use the masking model and a simple post filter to reconstruct a set of {Am_} that we use to synthesise the decoded speech.

Still a few problems to solve, however I think this masking model holds some promise for high quality speech at low bit rates. As it’s completely different to conventional LPC/LSP I’m flying blind. However the pieces are falling into place.

I’m currently working on i) how to reduce the number of samples to a low number ii) how to determine which ones we really need (e.g. discarding interformant samples); and iii) how to represent the amplitude of each sample with a low or zero number of bits. There are also some artifacts with background noise and chunks of spectrum coming and going.

I’m pretty sure the frequencies of the samples can be quantised coarsely, say 3 bits each using scalar quantisation, or perhaps 8 bits/frame using VQ. There will also be quite a bit of correlation between the amplitudes and frequencies of each sample.

For voiced speech there will be a downwards (low pass) slope in the amplitudes, for unvoiced speech more energy at high frequencies. This suggests joint VQ of the sample frequencies and amplitudes might be useful.

The frequency and amplitude of the mask samples will be highly correlated in time (small frame to frame variations) so will have good robustness to bit errors if we apply trellis decoding techniques. Compared to LPC/LSP the bandwidth of formants is “hard coded” by the masking curves, so the dreaded LSPs-too-close due to bit errors R2D2 noises might be a thing of the past. I’ll explore robustness to bit errors when we get to the fully quantised stage.

August 23, 2015

Twitter posts: 2015-08-17 to 2015-08-23

August 22, 2015

A Miserable Debt Free Life Part 2

The first post was very popular, and sparked debate all over the Internet. I’ve read many of the discussions, and would like to add a few points.

Firstly I don’t feel I did a very good job of building my assets – plenty of my friends have done much better in terms of net worth and/or early retirement. Many have done the Altruism thing better than I. Sites like Mr. Money Moustache do a better job at explaining the values I hold around money. Also I’ve lost interest in more accumulation, but my lifestyle seems interesting to people, hence these posts.

The Magical 10%

The spreadsheet I put up was not for you. It was just a simple example, showing how compound interest, savings and time can work for you. Or against you, if you like easy credit and debt. A lot of people seem hung up on the 10% figure I used.

I didn’t spell out exactly what my financial strategy is for good reason.

You need to figure out how to achieve your goals. Maybe it’s saving, maybe it’s getting educated to secure a high income, or maybe it’s nailing debt early. Some of my peers like real estate. I like shares, a good education, professional experience, and small business. I am mediocre at most of them. I looked at other people’s stories, then found something that worked for me.

But you need to work this out. It’s part of the deal, and you are not going to get the magic formula from a blog post by some guy sitting on a couch with too much spare time on his hands and an Internet connection.

The common threads are spending less than you earn, investment, and time. And yes, this is rocket science. The majority of the human race just can’t do it. Compound interest is based on exponential growth – which is completely under-appreciated by the human race. We just don’t get exponential growth.
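To make the exponential growth point concrete, here is a sketch of the kind of calculation that example spreadsheet does (the deposit and rate are purely illustrative assumptions, not a prediction):

```c
/* Future value of depositing `deposit` at the start of each year
 * for `years` years, compounding at annual return `rate`
 * (0.10 = the 10% used in the example). Illustrative only:
 * real returns vary from year to year. */
static double future_value(double deposit, double rate, int years)
{
    double balance = 0.0;

    for (int y = 0; y < years; y++)
        balance = (balance + deposit) * (1.0 + rate);
    return balance;
}
```

Run it for 20 years and then 30 and compare: the final decade contributes more than the first two combined, which is exactly the under-appreciated part.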


Another issue around the 10% figure is risk. People want guarantees, zero risk, a cook book formula. Life doesn’t work like that. I had to deal with shares tumbling after 9/11 and the GFC, and a divorce. No one on a forum in the year 2000 told me about those future events when I was getting serious about saving and investing. Risk and return are a part of life. The risk is there anyway – you might lose your job tomorrow or get sick or divorced or have triplets. It’s up to you if you want to put that risk to work or shy away from it.

Risk can be managed, plan for it. For example you can say “what happens if my partner loses his job for 12 months”, or “what happens if the housing market dips 35% overnight”. Then plug those numbers in and come up with a strategy to manage that risk.

Let’s look at the downside. If the magical 10% is not achieved, or even if a financial catastrophe strikes, who is going to be in a better position? Someone who is frugal and can save, or someone maxed out on debt who can’t live without the next pay cheque?

There is a hell of lot more risk in doing nothing.

Make a Plan and Know Thy Expenditure

Make your own plan. There is something really valuable in simply having a plan. Putting some serious thought into it. Curiously, I think this is more valuable than following the plan. I’m not sure why, but the process of planning has been more important to me than the actual plan. It can be a couple of pages of dot points and a single page spreadsheet. But write it down.

Some people commented that they know what they spend, for example they have a simple spreadsheet listing their expenses or a budget. Just the fact that they know their expenditure tells me they have their financial future sorted. There is something fundamental about this simple step. The converse is also true. If you can’t measure it, you can’t manage it.

No Magic Formula – It’s Hard Work

If parts of my experience don’t work for you, go and find something that does. Anything of value is 1% inspiration and 99% perspiration. Creating your own financial plan is part of the 99%. You need to provide that. Develop the habit of saving. Research investment options that work for you. Talk to your successful friends. Learn to stop wasting money on stuff you don’t need. Understand compound interest in your saving and in your debt. Whatever it takes to achieve your goals. These things are hard. No magic formula. This is what I’m teaching my kids.

Work your System

There is nothing uniquely Australian about middle class welfare, socialised medicine, or high priced housing. Well it is quite nice here but we do speak funny and the drop bears are murderous. And don’t get me started on Tony Abbott. The point is that all countries have their risks and opportunities. Your system will be different to mine. Health care may suck where you live but maybe house prices are still reasonable, or the average wage in your profession is awesome, or the cost of living really low, or you are young without dependents and have time in front of you. Whatever your conditions are, learn to make them work for you.

BTW why did so few people comment on the Altruism section? And why so many on strategies for retiring early?

Cracking a Combination Lock, Some Counter-Stealth Thoughts, and More Apple Information

Someone was recently trying to sell a safe but they didn't have the combination (they had proof of ownership if you're wondering). Anybody who has been faced with this situation is often torn, because sometimes the item in question is valuable but the safe can be of comparable value, so it's a lose-lose situation. If you remember the original combination then all is well (I first encountered this situation in a hotel when I locked something but forgot the combination. It took me an agonising amount of time to recall the unlock code). If not, you're left with physical destruction of the safe to get back in, etc...

Tips on getting back in:

- did you use mnemonics of some sort to get at the combination?

- is there a limitation on the string that can be entered (any side intelligence is useful)?

- is there a time lock involved?

- does changing particular variables make it easier to get back in non-destructively?

- keep a log of the combinations that you have tried to ensure you don't re-cover the same territory

In this case, things were a bit odd. The safe had rubber buttons which, when removed, exposed membrane-type switches that could be interfaced via an environmental sensor acquisition and interface device (something like an Arduino). If you're curious, the safe was designed and produced by a well known international security firm, proving that brand doesn't always equate to quality. Once you program the device and wire things up correctly, it's simply a case of letting your robot and program run until you open the safe. Another option is a more robust robot which physically pushes the buttons, but obviously this takes quite a bit more hardware to get working (which can make the project pretty expensive and potentially unworthwhile).
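The enumeration loop itself is trivial once the hardware interface exists. A hedged sketch, with `try_code` as a hypothetical stand-in for whatever actually drives the switches, and the attempt log from the tips above:

```python
from itertools import product

# Sketch only: exhaustively try keypad codes, logging each attempt so the
# same territory is never re-covered. try_code() is a hypothetical stand-in
# for the hardware driver (e.g. an Arduino toggling the membrane switches).
def crack(try_code, digits="0123456789", length=4, logfile="tried.log"):
    with open(logfile, "a") as log:
        for combo in product(digits, repeat=length):
            code = "".join(combo)
            log.write(code + "\n")
            if try_code(code):     # returns True when the safe opens
                return code
    return None                    # exhausted the space without success

# Simulated lock standing in for the real safe:
print(crack(lambda code: code == "4931"))   # → 4931
```

With 10 digits and a 4-digit code that is at most 10,000 attempts – the real limit is how fast the hardware can physically enter them.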

As I covered in my book on 'Cloud and Internet Security', please use proper locks with adequate countermeasures (time locks, variable string lengths, abnormal characters, shim proof, relatively unbreakable, etc...) and have a backup in case something goes wrong.

Been thinking about stealth design and countermeasures a bit more.

- when you look at the 2D thrust vectoring configuration of the F-22 Raptor you wonder at times why they didn't go 3D. One possible reason may be the 'letterbox effect'. It was designed predominantly as an air superiority fighter that relies heavily on BVR capabilities. From front on, the plume effect is diminished (think about particle/energy weapon implementation problems), making it more difficult to detect. Obviously, this potentially reduces sideward movement (particularly in comparison with 3D TVT options. Pure turn is more difficult but combined bank and turn isn't). The obvious tactic is to force the F-22 into sideward movements if it is ever on your tail (unlikely, due to apparently better sensor technology though)

- the above is a moot point if you factor in variable thrust (one engine firing at a higher rate of thrust relative to the other) but that may result in feedback issues. People who have experience with fly by wire systems or undertuned high performance race cars will better understand this

- people keep harping on about how 5th gen fighters can rely more heavily on BVR capabilities. Something which is often little spoken of is the relatively low performance of AAM (Air to Air Missile) systems (moreover, there is a difference between seeing, achieving RADAR lock, and achieving a kill). There must be upgrades along the way/in the pipeline to make 5th gen fighters a viable/economic option into the future

- the fact that several allied nations (Japan, Korea, and Turkey are among them currently) (India, Indonesia, and Russia are among those who are developing their own based on non-Western design) are developing their own indigenous 5th gen fighters which have characteristics more similar to the F-22 Raptor (the notable exception may be Israel, who are maintaining and upgrading their F-15 fleet) and have air superiority in mind tells us that the F-35 is a much poorer brother to the F-22 Raptor, in spite of what is being publicly said

Warplanes: No Tears For The T-50

- it's clear that the US and several allied nations believe that current stealth may have limited utility in the future. In fact, the Israelis have said that within 5-10 years the JSF may lose any significant advantage that it currently has without upgrades

- everyone knows of the limited utility of AAM (Air to Air Missile) systems. It will be interesting to see whether particle/energy weapons are retrofitted to the JSF or whether they will be reserved entirely for 6th gen fighters. I'd be curious to know how much progress they've made on this, particularly with regards to energy consumption

- even if there have been/are intelligence breaches in the design of new fighter jets, there's still the problem of production. The Soviets basically had the complete blueprints for NASA's Space Shuttle but ultimately decided against using it on a regular basis/producing more because, like the Americans, they discovered that it was extremely uneconomical. For a long time, the Soviets have trailed the West with regards to semiconductor technology, which means that their sensor technology may not have caught up. This mightn't be the case with the Chinese. Ironically, should the Chinese fund the Russians and the two work together, they may achieve greater progress than working independently

- some of the passive IRST systems out there have publicly acknowledged ranges of around the 100-150km mark

- disorientation of gyroscopes has been used as a strategy against UCAVs/UAVs. I'd be curious how such technology would work against modern fighters, which often go into failsafe mode when the pilot blacks out (nobody wants to lose a fighter jet worth 8 or more figures, hence the technology)... The other interesting thing would be how on-field technologies such as temporary sensory deprivation (blinding, deafening, disorientation, etc...) could be used in unison from longer range. All technologies which have been tested and used against ground based troops before

- I've been thinking/theorising about some light based detection technologies for aircraft in general. One option I've been considering is somewhat like a spherical ball composed of lenses which focus in on a centre made up of sensors – a hybrid technology based on the photoelectric effect and spectroscopic theory. The light would automatically trigger a voltage (much like a solar cell), while use of diffraction/spectroscopic theory would enable identification of aircraft from long range using light. The theory behind this is based on the way engine plumes work and the way jet fuels differ. Think about this carefully. Russian rocket fuel is very different from Western rocket fuel. I suspect it's much the same for jet fuel. We currently identify star/planet composition on roughly the same theory. Why not fighter aircraft? Moreover, there are other distinguishing aspects of jet fighter nozzle exhausts (see my previous post and the section on LOAN systems). Think about the length and shape of each plume based on the current flight mode (full afterburner, cruising, etc...) and the way most engine exhausts are unique (due to a number of different reasons including engine design, fuel, etc...). Clearly, the F-22, F-35, B-2, and other stealth aircraft have very unique nozzle shapes when compared to current 4th gen fighter options and among one another. The other thing is that, given sufficient research (and I suspect a lot of time), I believe the difference between night and day flight will/could be largely mitigated. Think about the way in which light and camera filters (and night vision) work. They basically screen out based on frequency/wavelength to make things more visible. You should be able to achieve the same thing during daylight. The other bonus of such technology is that it is entirely passive, giving the advantage back to the party in defense, and intelligence is relatively easy to collect. Just show up at a demonstration or near an airfield...

- such technology may be a moot point, as we have already made progress on cloaking (effectively invisible to the naked eye) technology (though exact details are classified, as are a lot of other details regarding particle/energy weapons and shielding technologies)... There's also the problem of straight lines. For practical purposes, light travels in straight lines... OTH type capabilities are beyond such technology (for the time being. Who knows what will happen in the future?)

- someone may contest that I seem to be focusing in on exhaust only, but as you are aware this style of detection should also work against standard objects as well (though its practicality would be somewhat limited). Just like RADAR though, you give up on being able to power through weather and other physical anomalies, because you can't use a conventional LASER. For me, this represents a balance between being detected from an attacker's perspective and being able to track them from afar... If you've ever been involved in a security/bug sweep you will know that even a LASER of modest power can be seen from quite a distance away

- everybody knows how dependent allied forces are upon integrated systems (sensors, re-fuelling, etc...)

- never fly straight and level against a 5th gen fighter. Weave up and down and side to side, even on patrols, to maximise the chances of detecting it earlier in the game, because not all of them have genuine all aspect stealth

- I've been thinking of other ways of defending against low observability aircraft. The first is based on 'loitering' weapons – weapons which move at low velocity/loiter until they come within targeting range of aircraft, then 'activate' and chase their target much like a 'moving mine' (a technology often seen in cartoons?). Another is essentially turning off all of your sensors once they come within targeting range. Once they end up in passive detection range, you fire in massive, independent volleys, knowing full well that low observability aircraft have low payload capability owing to compromises in their design

- as stated previously, I very much doubt that the JSF is as bad as some people are portraying

- it's clear that defense has become more integrated with economics now, by virtue of the fact that most of our current defense theory is based on the notion of deterrence. I believe that the only true way forward is reform of the United Nations, increased use of un-manned technologies, and perhaps people coming to terms with their circumstances differently (unlikely given how long humanity has been around), etc... There is a strong possibility that the defense establishment's belief that future defense programs could be unaffordable will come true within the context of deterrence and our need to want to control affairs around the world. We need cheaper options with the ability to 'push up' when required...

All of this is a moot point though, because genuine 5th gen fighters should be able to see you from a mile off, and most countries who have entered the stealth technology arena are struggling to build 5th gen options (including Russia, which has a long history in defense research and manufacturing). For the most part, they're opting for a combination of direct confrontation and damage limitation through reduction of defensive projection capability – long range weapons such as aircraft carrier destroying missiles, targeting of AWACS/refuelling systems, etc... – and like for like battle options...

I've been working on more Apple based technology of late (I've been curious about the software development side for a while). It's been intriguing taking a closer look at their hardware. Most people I've come across have been impressed by the Apple ecosystem. To be honest, the more I look at the technology borne from this company the more 'generic' it seems. Much of the technology is simply repackaged, but in a better way. They've had more than their fair share of problems.

How to identify MacBook models

How to identify MacBook Pro models

A whole heap of companies, including graphic card, game console, and computer manufacturers, were caught out with BGA implementation problems (basically, people tried to save money by reducing the quality of solder; these problems have largely been fixed, much like the earlier capacitor saga). Apple weren't immune.

Lines on the screen of an Apple iMac can be due to software settings, firmware, or hardware

Apparently, MacBooks get noisy headphone jacks from time to time. This can be due to software settings or hardware failure

One of the strangest things I've found is that, in spite of a complete failure of the primary storage device, people still try to sell hardware for almost the current market value of a perfectly functional machine. Some people still go for it, but I'm guessing they have spare hardware lying around

There are some interesting aspects to their MagSafe power adapters. Some aspects are similar to authentication protocols used by manufacturers such as HP to ensure that everything is safe and that only original OEM equipment is used. Something tells me they don't do enough testing though. They seem to have a continuous stream of anomalous problems. It could be similar to the Microsoft Windows security problem though: do you want an OS delivered in a timely fashion, or one that is deprecated but secure at a later date (a point delivered in a lecture by a Microsoft spokesman a while back)? You can't predict everything that happens when things move into mass scale production, but I would have thought that the 'torquing' problem would have been obvious from a consumer engineering/design perspective from the outset...

Upgrading Apple laptop hard drives is similar in complexity to that of PC based laptops

One thing has to be said of Apple hardware construction: it's radically different to that of PC based systems. To be honest, I'd rather deal with a business class laptop that is designed to be upgraded and probably exhibits greater reliability. Opening a lot of their devices has told me that form takes too much precedence over function

One frustrating aspect of the Apple ecosystem is that they gradually phase out support of old hardware by inserting pre-requisite checking. Thankfully, as others (and I) have discovered bypassing some of their checks can be trivial at times

August 21, 2015

Hamburgers versus Oncology

On a similar, but slightly lighter note, this blog was pointed out to me. The subject is high (saturated) fat versus carbohydrate based diets, which is an ongoing area of research, and may (may) be useful in managing diabetes. This gentleman is a citizen scientist (and engineer no less) like myself. Cool. I like the way he uses numbers and in particular the way data is presented graphically.

However I tuned out when I saw claims of “using ketosis to fight cancer”, backed only by an anecdote. If you are interested, this claim is thoroughly debunked on

Bullshit detection 101 – if you find a claim of curing cancer, it’s pseudo-science. If the evidence cited is one person’s story (an anecdote) it’s rubbish. You can safely move along. It shows a dangerous leaning towards dogma, rather than science. Unfortunately, these magical claims can obscure useful research in the area. For example exploring a subtle, less sensational effect between a ketogenic diet and diabetes. That’s why people doing real science don’t make outrageous claims without very strong evidence – it kills their credibility.

We need short circuit methods for discovering pseudo science. Otherwise you can waste a lot of time and energy investigating spurious claims. People can get hurt or even killed. It takes a lot less effort to make a stupid claim than to prove it’s stupid. These days I can make a call by reading about one paragraph; the tricks used to apply a scientific veneer to magical claims are pretty consistent.

A hobby of mine is critical thinking, so I enjoy exploring magical claims from that perspective. I am scientifically trained and do R&D myself, in a field that I earned a PhD in. Even with that background, I know how hard it is to create new knowledge, and how easy it is to fool myself when I want to believe.

I’m not going to try bacon double cheeseburger (without the bun) therapy if I get cancer. I’ll be straight down to Oncology and take the best that modern, evidence based medicine can give, from lovely, dedicated people who have spent 20 years studying and treating it. Hit me with the radiation and chemotherapy Doc! And don’t spare the Sieverts!

Is Alt-Med Responsible for 20% of Cancer Deaths?

In my meanderings on the InterWebs this caught my eye:

As a director of a cancer charity I work with patients every day; my co-director has 40 years’ experience at the cancer coalface. We’re aware there are many cancer deaths that can be prevented if we could reduce the number of patients delaying or abandoning conventional treatment while experimenting with alt/med. It is ironic that when national cancer deaths are falling the numbers of patients embracing alt/med is increasing, and that group get poor outcomes. If about 46,000 patients die from cancer in 2015, we suspect 10-20% will be caused by alt/med reliance. This figure dwarfs the road toll, deaths from domestic violence, homicide, suicide and terrorism in this country.

This comment was made by Pip Cornell, in the comments on this article discussing declining cancer rates. OK, so Pip’s views are anecdotal. She works for a charity that assists cancer sufferers. I’m putting it forward as a theory, not a fact. More research is required.

The good news is evidence based medicine is getting some traction with cancer. The bad news is that Alt-med views may be killing people. I guess this shouldn’t surprise me, Alt-med (non evidence-based medicine) has been killing people throughout history.

The Australian Government has recently introduced financial penalties for parents who do not vaccinate. Raw milk has been outlawed after it killed a toddler. I fully support these developments. Steps in the right direction. I hope they take a look at the effect of alt-med on serious illness like cancer.

August 19, 2015

The Purpose of a Code of Conduct

On a private mailing list there have been some recent discussions about a Code of Conduct which demonstrate some great misunderstandings. The misunderstandings don’t seem particular to that list so it’s worthy of a blog post. Also people tend to think more about what they do when their actions will be exposed to a wider audience so hopefully people who read this post will think before they respond.


The first discussion concerned the issue of making “jokes”. When dealing with the treatment of other people (particularly minority groups) the issue of “jokes” is a common one. It’s fairly common for people in positions of power to make “jokes” about people with less power and then complain if someone disapproves. The more extreme examples of this concern hate words which are strongly associated with violence; one of the most common is a word used to describe gay men which has often been associated with significant violence and murder. Men who are straight and who conform to the stereotypes of straight men don’t have much to fear from that word, while men who aren’t straight will associate it with a death threat and tend not to find any amusement in it.

Most minority groups have words that are known to be associated with hate crimes. When such words are used they usually send a signal that the minority groups in question aren’t welcome. The exception is when the words are used by other members of the group in question. For example if I was walking past a biker bar and heard someone call out “geek” or “nerd” I would be a little nervous (even though geeks/nerds have faced much less violence than most minority groups). But at a Linux conference my reaction would be very different. As a general rule you shouldn’t use any word that has a history of being used to attack any minority group other than one that you are a member of, so black rappers get to use a word that was historically used by white slave-owners but because I’m white I don’t get to sing along to their music. As an aside we had a discussion about such rap lyrics on the Linux Users of Victoria mailing list some time ago, hopefully most people think I’m stating the obvious here but some people need a clear explanation.

One thing that people should consider regarding “jokes” is the issue of punching-down vs punching-up [1] (there are many posts about this topic, I linked to the first Google hit which seems quite good). The basic concept is that making jokes about more powerful people or organisations is brave while making “jokes” about less powerful people is cowardly and serves to continue the exclusion of marginalised people. When I raised this issue in the mailing list discussion a group of men immediately complained that they might be bullied by lots of less powerful people making jokes about them. One problem here is that powerful people tend to be very thin skinned due to the fact that people are usually nice to them. While the imaginary scenario of less powerful people making jokes about rich white men might be unpleasant if it happened in person, it wouldn’t compare to the experience of less powerful people who are the target of repeated “jokes” in addition to all manner of other bad treatment. Another problem is that the impact of a joke depends on the power of the person who makes it, EG if your boss makes a “joke” about you then you have to work on your CV, if a colleague or subordinate makes a joke then you can often ignore it.

Who does a Code of Conduct Protect

One member of the mailing list wrote a long and very earnest message about his belief that the CoC was designed to protect him from off-topic discussions. He analysed the results of a CoC on that basis and determined that it had failed due to the number of off-topic messages on the mailing lists he subscribes to. Being so self-centered is strongly correlated with being in a position of power; he seems to sincerely believe that everything should be about him, that he is entitled to all manner of protection, and that any rule which doesn’t protect him is worthless.

I believe that the purpose of all laws and regulations should be to protect those who are less powerful, the more powerful people can usually protect themselves. The benefit that powerful people receive from being part of a system that is based on rules is that organisations (clubs, societies, companies, governments, etc) can become larger and achieve greater things if people can trust in the system. When minority groups are discouraged from contributing and when people need to be concerned about protecting themselves from attack the scope of an organisation is reduced. When there is a certain minimum standard of treatment that people can expect then they will be more willing to contribute and more able to concentrate on their contributions when they don’t expect to be attacked.

The Public Interest

When an organisation declares itself to be acting in the public interest (EG by including “Public Interest” in the name of the organisation) I think that we should expect even better treatment of minority groups. One might argue that a corporation should protect members of minority groups for the sole purpose of making more money (it has been proven that more diverse groups produce better quality work). But an organisation that’s in the “Public Interest” should be expected to go way beyond that and protect members of minority groups as a matter of principle.

When an organisation is declared to be operating in the “Public Interest” I believe that anyone who’s so unable to control their bigotry that they can’t refrain from being bigoted on the mailing lists should not be a member.

August 18, 2015

The next step in the death of the regional networks

So we were flicking around youtube this evening as we are wont to do and we came across this ad

Now, an ad on youtube is nothing special, however what is special about this one is the fact that it's a local ad. That fishing shop is fifteen minutes from where I live and it's not the first local ad that I've seen on Youtube lately.

This means two things: Youtube can tell that I'm from the area the ad is targeted at, and local businesses now have an alternative to the local TV networks for advertising – an alternative that is available across multiple platforms, has a constant source of new content, and is deeply embedded in the internet enabled culture that the networks have been ignoring for the past fifteen years.

Getting rid of the 2/3 rule, or removing the 75% reach rule won't save the networks. Embracing the internet and engaging with people in that space, just might.

Watching (some) Bluray movies on Ubuntu 14.04 using VLC

While the Bluray digital restrictions management system is a lot more crippling than the one preventing users from watching their legally purchased DVDs, it is possible to decode some Bluray discs on Linux using vlc.

First of all, install the required packages as root:

apt install vlc libaacs0 libbluray-bdj libbluray1
mkdir /usr/share/libbluray/
ln -s /usr/share/java/libbluray-0.5.0.jar /usr/share/libbluray/libbluray.jar

The last two lines are there to fix an error you might see on the console when opening a Bluray disc with vlc:

libbluray/bdj/bdj.c:249: libbluray.jar not found.
libbluray/bdj/bdj.c:349: BD-J check: Failed to load libbluray.jar

This error is apparently due to a bug in libbluray.

Then, as a user, you must install some AACS decryption keys. The most interesting source at the moment seems to be

mkdir ~/.config/aacs
cd ~/.config/aacs

but it is still limited in the range of discs it can decode.
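Once you have obtained a key database from such a source, libaacs expects to find it as `KEYDB.cfg` in that directory. A sketch of the remaining steps (the downloaded file name and the `/dev/sr0` device node are assumptions; adjust both for your system):

```shell
# libaacs looks for the key database at ~/.config/aacs/KEYDB.cfg
mkdir -p ~/.config/aacs
cp KEYDB.cfg ~/.config/aacs/

# Then point vlc at the disc (device node varies between systems):
vlc bluray:///dev/sr0
```

If a disc's keys are not in the database, vlc will still refuse to play it – hence the limited range of discs mentioned above.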

OLPC and Measuring if Technology Helps

I have a penchant for dating teachers who have worked in Australia’s 3rd world. This has given me a deep, personal appreciation of just how hard developing world education can be.

So I was wondering: where has the OLPC project gone? And in particular, has it helped people? I have had some experience with this wonderful initiative, and it was the subject of much excitement in my geeky, open source community.

I started to question the educational outcomes of the OLPC project in 2011. Too much tech buzz, and I know from my own experiences (and those of friends in the developing world) that parachuting rich white guy technology into the developing world then walking away just doesn’t work. It just makes geeks and the media feel good, for a little while at least.

Turns out 2.5M units have been deployed world wide, quite a number for any hardware project. One Education alone has an impressive 50k units in the field, and are seeking to deploy many more. Rangan Srikhanta from One Education Australia informed me (via a private email) that a 3 year study has just kicked off with 3 Universities, to evaluate the use of the XO and other IT technology in the classroom. Initial results in 2016. They have also tuned their deployment strategy to address better use of deployed XOs.

Other studies have questioned the educational outcomes of the OLPC project. Quite a vigorous debate in the comments there! I am not a teacher, so don’t profess to have the answers, but I did like this quote:

He added: “…the evidence shows that computers by themselves have no effect on learning and what really matters is the institutional environment that makes learning possible: the family, the teacher, the classroom, your peers.”

Measurement Matters

It’s really important to make sure the technology is effective. I have direct experience of developing world technology deployments that haven’t reached critical mass despite a lot of hard work by good people. With some initiatives like OLPC, even after 10 years (an eternity in IT, but not long in education) there isn’t any consensus. This means it’s unclear if the resources are being well spent.

I have also met some great people from other initiatives like AirJaldi and Inveneo who have done an excellent job of using geeky technology to consistently help people in the developing world.

This matters to me. These days I am developing technology building blocks (like HF Digital Voice), rather than working directly on deployments in the developing world. Not as sexy, I don’t get to sweat amongst the palm trees, or show videos of “unboxing” shiny technology in dusty locations. But for me at least, a better chance to “improve the world a little bit” using my skills and resources.

Failure is an Option

When I started Googling for recent OLPC developments I discovered many posts declaring OLPC to be a failure. I’m not so sure. It innovated in many areas, such as robust, repairable, eco-friendly IT technology purpose designed for education in the developing world. They have shipped 2.5M units, which I have never done with any of my products. It excited and motivated a lot of people (including me).

When working on the Village Telco I experienced difficult problems with interference on mesh networks and frustration working with closed source chip set vendors. I started asking fundamental questions about sending voice over radio, which led me to my current HF Digital Voice work – which is 1000 times (60dB) more efficient than VOIP over WiFi and completely open source.

Pushing developing world education and telecommunications forward is a huge undertaking. Mistakes will be made, but without trying we learn nothing, and get no closer to solutions. So I say GO failure.

I have learned to push for failure early – get that shiny tech out in the field and watch how it breaks. Set binary pass/fail conditions. Build in ways to objectively measure its performance. Avoid gold plating and long development cycles before fundamental assumptions have been tested.

Measuring the Effectiveness of my Own Work

Let’s put the spotlight on me. Can I measure the efficacy of my own work in hard numbers? This blog gets visited by 5000 unique IPs a day (150k/month). Unique IPs is a reasonable measure for a blog, and it’s per day, so it shows some recurring utility.
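As a sketch of where a “unique IPs per day” figure can come from – counting distinct client IPs per day in a common/combined-format web access log, where the first field is the client IP (the log lines below are fabricated examples):

```python
from collections import defaultdict

def unique_ips_per_day(lines):
    """Map each day to the number of distinct client IPs seen that day."""
    days = defaultdict(set)
    for line in lines:
        parts = line.split()
        if len(parts) < 4:
            continue                                   # skip malformed lines
        ip = parts[0]                                  # client IP, first field
        day = parts[3].lstrip("[").split(":")[0]       # e.g. "18/Aug/2015"
        days[day].add(ip)
    return {day: len(ips) for day, ips in days.items()}

# Fabricated example lines in common log format:
log = [
    '1.2.3.4 - - [18/Aug/2015:10:00:00 +1000] "GET / HTTP/1.1" 200 512',
    '1.2.3.4 - - [18/Aug/2015:10:05:00 +1000] "GET /feed HTTP/1.1" 200 99',
    '5.6.7.8 - - [18/Aug/2015:11:00:00 +1000] "GET / HTTP/1.1" 200 512',
]
print(unique_ips_per_day(log))   # {'18/Aug/2015': 2}
```

Using a set per day means repeat visits from the same address count once, which is what makes the per-day figure a measure of recurring readership rather than raw hits.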

OK, so how about my HF radio digital voice software? Like the OLPC project, that's a bit harder to measure. Quite a few people are trying FreeDV, but an unknown number of them are walking away after an initial tinker. A few people are saying publicly it's not as good as SSB. So “downloads”, like the number of XO laptops deployed, is not a reliable metric of the utility of my work.

However there is another measure. An end-user can directly compare the performance of FreeDV against analog SSB over HF radio. Your communication is either better or it is not. You don’t need any studies, you can determine the answer yourself in just a few minutes. So while I may not have reached my technical goals quite yet (I’m still tweaking FreeDV 700), I have a built-in way for anyone to determine if the technology I am developing is helping anyone.

BTRFS Training

Some years ago Barwon South Water gave LUV 3 old 1RU Sun servers for any use related to free software. We gave one of those servers to the Canberra makerlab, another is used as the server for the LUV mailing lists and web site, and the third was put aside for training. The servers have hot-swap 15,000rpm SAS disks – i.e. disks that have a replacement cost greater than the budget we have for hardware. As we were given a spare 70G disk (and a 140G disk can replace a 70G disk), the LUV server has 2*70G disks and the 140G disks (which can’t be replaced) are in the server used for training.

On Saturday I ran a BTRFS and ZFS training session for the LUV Beginners’ SIG. This was inspired by the amount of discussion of those filesystems on the mailing list and the amount of interest when we have lectures on those topics.

The training went well, the meeting was better attended than most Beginners’ SIG meetings and the people who attended it seemed to enjoy it. One thing that I will do better in future is clearly documenting commands that are expected to fail and documenting how to login to the system. The users all logged in to accounts on a Xen server and then ssh’d to root at their DomU. I think that it would have saved a bit of time if I had aliased commands like “btrfs” to “echo you must login to your virtual server first” or made the shell prompt at the Dom0 include instructions to login to the DomU.

Each user or group had a virtual machine. The server has 32G of RAM and I ran 14 virtual servers that each had 2G of RAM. In retrospect I should have configured fewer servers and asked people to work in groups, that would allow more RAM for each virtual server and also more RAM for the Dom0. The Dom0 was running a BTRFS RAID-1 filesystem and each virtual machine had a snapshot of the block devices from my master image for the training. Performance was quite good initially as the OS image was shared and fit into cache. But when many users were corrupting and scrubbing filesystems performance became very poor. The disks performed well (sustaining over 100 writes per second) but that’s not much when shared between 14 active users.

The ZFS part of the tutorial was based on RAID-Z (I didn’t use RAID-5/6 in BTRFS because it’s not ready to use and didn’t use RAID-1 in ZFS because most people want RAID-Z). Each user had 5*4G virtual disks (2 for the OS and 3 for BTRFS and ZFS testing). By the end of the training session there was about 76G of storage used in the filesystem (including the space used by the OS for the Dom0), so each user had something like 5G of unique data.

We are now considering what other training we can run on that server. I’m thinking of running training on DNS and email. Suggestions for other topics would be appreciated. For training that’s not disk intensive we could run many more than 14 virtual machines, 60 or more should be possible.

Below are the notes from the BTRFS part of the training, anyone could do this on their own if they substitute 2 empty partitions for /dev/xvdd and /dev/xvde. On a Debian/Jessie system all that you need to do to get ready for this is to install the btrfs-tools package. Note that this does have some risk if you make a typo. An advantage of doing this sort of thing in a virtual machine is that there’s no possibility of breaking things that matter.

  1. Making the filesystem
    1. Make the filesystem, this makes a filesystem that spans 2 devices (note you must use the -f option if there was already a filesystem on those devices):

      mkfs.btrfs /dev/xvdd /dev/xvde
    2. Use file(1) to see basic data from the superblocks:

      file -s /dev/xvdd /dev/xvde
    3. Mount the filesystem (can mount either block device, the kernel knows they belong together):

      mount /dev/xvdd /mnt/tmp
    4. See a BTRFS df of the filesystem, shows what type of RAID is used:

      btrfs filesystem df /mnt/tmp
    5. See more information about FS device use:

      btrfs filesystem show /mnt/tmp
    6. Balance the filesystem to change it to RAID-1 and verify the change (note that some parts of the filesystem were single and RAID-0 before this change):

      btrfs balance start -dconvert=raid1 -mconvert=raid1 -sconvert=raid1 --force /mnt/tmp

      btrfs filesystem df /mnt/tmp
    7. See if there are any errors, shouldn’t be any (yet):

      btrfs device stats /mnt/tmp
    8. Copy some files to the filesystem:

      cp -r /usr /mnt/tmp
    9. Check the filesystem for basic consistency (only checks checksums):

      btrfs scrub start -B -d /mnt/tmp
  2. Online corruption
    1. Corrupt the filesystem:

      dd if=/dev/zero of=/dev/xvdd bs=1024k count=2000 seek=50
    2. Scrub again, should give a warning about errors:

      btrfs scrub start -B /mnt/tmp
    3. Check error count:

      btrfs device stats /mnt/tmp
    4. Corrupt it again:

      dd if=/dev/zero of=/dev/xvdd bs=1024k count=2000 seek=50
    5. Unmount it:

      umount /mnt/tmp
    6. In another terminal follow the kernel log:

      tail -f /var/log/kern.log
    7. Mount it again and observe it correcting errors on mount:

      mount /dev/xvdd /mnt/tmp
    8. Run a diff, observe kernel error messages and observe that diff reports no file differences:

      diff -ru /usr /mnt/tmp/usr/
    9. Run another scrub, this will probably correct some errors which weren’t discovered by diff:

      btrfs scrub start -B -d /mnt/tmp
  3. Offline corruption
    1. Umount the filesystem, corrupt the start, then try mounting it again which will fail because the superblocks were wiped:

      umount /mnt/tmp

      dd if=/dev/zero of=/dev/xvdd bs=1024k count=200

      mount /dev/xvdd /mnt/tmp

      mount /dev/xvde /mnt/tmp
    2. Note that the filesystem was not mountable due to a lack of a superblock. It might be possible to recover from this but that’s more advanced so we will restore the RAID.

      Mount the filesystem in a degraded RAID mode, this allows full operation.

      mount /dev/xvde /mnt/tmp -o degraded
    3. Add /dev/xvdd back to the RAID:

      btrfs device add /dev/xvdd /mnt/tmp
    4. Show the filesystem devices, observe that xvdd is listed twice, the missing device and the one that was just added:

      btrfs filesystem show /mnt/tmp
    5. Remove the missing device and observe the change:

      btrfs device delete missing /mnt/tmp

      btrfs filesystem show /mnt/tmp
    6. Balance the filesystem, not sure this is necessary but it’s good practice to do it when in doubt:

      btrfs balance start /mnt/tmp
    7. Umount and mount it, note that the degraded option is not needed:

      umount /mnt/tmp

      mount /dev/xvdd /mnt/tmp
  4. Experiment
    1. Experiment with the “btrfs subvolume create” and “btrfs subvolume delete” commands (which act like mkdir and rmdir).
    2. Experiment with “btrfs subvolume snapshot SOURCE DEST” and “btrfs subvolume snapshot -r SOURCE DEST” for creating regular and read-only snapshots of other subvolumes (including the root).

Making good Canon LP-E6 battery-pack contacts

battery-pack contacts

Canon LP-E6 battery packs (such as those used in my 70D camera) have two fine connector wires used for charging them.  These seem to be a weak point, as (if left to themselves) they eventually fail to connect well, which means that they do not charge adequately, or (in the field) do not run the equipment at all.

One experimenter discovered that scrubbing them with the edge of a stiff business card helped to make them good.  So I considered something more extensive.

with (non-Canon this time) charger contacts

Parts: squeeze-bottle of cleaner (I use a citrus-based cleaner from PlanetArk, which seems to be able to clean almost anything off without being excessively invasive); spray-can of WD-40; cheap tooth-brush; paper towels (or tissues, or bum-fodder).

equipment required

Method: lightly spray cleaner onto contacts. Gently but vigorously rub along the contacts with the toothbrush. Paper-dry the contacts.

brush head

Lightly spray WD-40 onto contacts. Gently but vigorously rub along the contacts with toothbrush. Paper-dry the contacts.

wider view of brush on contacts

(optional) When thoroughly dry, add a touch of light machine oil. This wards off moisture.

This appears to be just as effective with 3rd-party battery packs.

August 17, 2015

Rethreading the Beanie

So there hasn't been any activity over at (actual podcast wise) since October last year.

I keep meaning to reboot things but I never quite get around to it and I've been thinking about why.

I think it boils down to two problems:

Firstly, I suffer from "Been there done that itis", that is once I've done something I tend to start looking around for the next challenge.

Secondly I suffer from a severe case of "creators block".

For instance, I have twelve different episode ideas for Purser Explores The World, a series I really enjoy making because it means I get to learn new things and talk to interesting people. I mean I've covered everything from Richard the Third's remains being discovered, to what it means to be a geek, to using crowd-sourcing to deal with disasters.

Some of the ideas I want to cover in new episodes include a visit to HARS, a local Air Museum with some really fascinating exhibits, the recent controversy over Amnesty International's new approach to sex workers' rights and the idea that we've already gone past the point of no return with regards to AI controlled weapons systems.

Then there's For Science. With Magdeline Lum and Maia Sauren, then Mel Thompson, we covered everything from Space Archeology to Salami made from infant bacteria and How not to science.

I want to start podcasting about science again. Since the last episode of For Science I've tried to keep things up with #lunchtimescience but I miss the audio production side of things and I really miss the back and forth that we had going. I'm planning on dipping my toes back into the water with a weekly Lunchtime For Science podcast. This will be a shorter format, coming at the end of each week to summarise the news and hopefully introduce a couple of new segments.

And finally there's WTF Australia. Not sure what to do about this one. Bernie and I had a lot of fun doing the weekly hangouts but at the end we just sort of drifted. 

So as you can see, all the plans, just a lack of the ability to get over the blockage.


August 16, 2015

Twitter posts: 2015-08-10 to 2015-08-16

Exploring the solar system

At last week's telescope driver training I found out that Melbourne contains a 1 to 1 billion scale model of the solar system. It's an artwork by Cameron Robbins and Christopher Lansell.

The Sun is located at St Kilda marina and the planets are spaced out along the beach and foreshore back towards the city.

Since the weather was lovely today, I thought ... why not? The guide says you can walk from the Sun to Pluto in about an hour and a half, which would make your speed approximately three times the speed of light, or warp 1.44, if you like.
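That speed claim is easy to sanity-check: 5.9 km of model from the Sun to Pluto, walked in about 90 minutes, multiplied back up by the one-to-one-billion scale. A quick sketch using awk (figures are the ones quoted above):

```shell
# 5.9 km of model in 1.5 h, scaled up a billion times,
# expressed as a multiple of the speed of light (299792458 m/s)
awk 'BEGIN { printf "%.2f\n", 5.9e3 * 1e9 / (1.5 * 3600) / 299792458 }'
```

which prints 3.64 – a bit over the "approximately three times" above, depending on how briskly you walk and how long you linger over the ice cream.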



The Sun. It's big.


Mercury, 4.9mm across and still at the car park, 58m from the Sun.


Venus, 1.2cm across on the beach at 108m from the Sun.

Earth Moon

Earth (1.3cm) and Moon (3.5mm) are on the beach as well, at 42m from Venus (and 150m from the Sun).


Mars is 6.8mm across and at the far end of the beach, 228m from the Sun.

The walk now takes you past the millions of tiny grains of sand that make up the asteroid belt and the beach.


Jupiter is 14cm across and lives 778m from the Sun, near the Sea Baths.

The outer solar system is rather large, so it would be wise to purchase an ice cream at this point.


Saturn (12cm, rings 28cm), near St Kilda harbour at 1.4km from the Sun, is nearly twice as far out as Jupiter was.


Uranus (5cm) is so far from the Sun (2.9km) that it's actually in the next suburb, near the end of Wright Street in Middle Park.


Neptune (4.9cm) is in the next suburb again (Port Melbourne) at 4.9km from the Sun.


No longer a planet, but still included. Pluto is a 2mm pellet on the beach at Garden City (yet another suburb along again) at 5.9km from the Sun.


The nearest star (Proxima Centauri, an unassuming red dwarf) is about 40 trillion kilometers away from the Sun. Which, on a one to one billion scale, happens to be about the same as once around the Earth. 


Proxima Centauri

Proxima Centauri is just on the other side of the Sun from the rest of the solar system, a cool 4.2 light years away.


Ye Mappe

Google map of the Melbourne solar system.

LUV Main September 2015 Meeting: Cross-Compiling Code for the Web / Annual General Meeting

Sep 1 2015 18:30
Sep 1 2015 20:30

6th Floor, 200 Victoria St. Carlton VIC 3053


• Ryan Kelly, Cross-Compiling Code for the Web

• Annual General Meeting and lightning talks

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


August 15, 2015

Broadband Speeds, New Data

Thanks to edmundedgar on reddit I have some more accurate data to update my previous bandwidth growth estimation post: OFCOM UK's November 2014 report on average broadband speeds.  Whereas Akamai's numbers could be lowered by the increase in mobile connections, this directly measures actual broadband speeds.

Extracting the figures gives:

  1. Average download speed in November 2008 was 3.6Mbit/s
  2. Average download speed in November 2014 was 22.8Mbit/s
  3. Average upload speed in November 2008 to April 2009 was 0.43Mbit/s
  4. Average upload speed in November 2014 was 2.9Mbit/s

So in 6 years, downloads went up by 6.333 times, and uploads went up by 6.75 times.  That’s an annual increase of 36% for downloads and 37% for uploads; that’s good, as it implies we can use download speed factor increases as a proxy for upload speed increases (as upload speed is just as important for a peer-to-peer network).
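Those per-annum figures are just sixth roots of the overall growth factors; for example, taking the download factor from above:

```shell
# annual growth factor implied by a 6.333x increase over 6 years:
# exp(ln(6.333)/6), i.e. the sixth root of 6.333
awk 'BEGIN { printf "%.4f\n", exp(log(6.333)/6) }'
```

which gives 1.3602, i.e. the 36% per annum quoted above.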

This compares with my previous post’s Akamai’s UK numbers of 3.526Mbit in Q4 2008 and 10.874Mbit in Q4 2014: only a factor of 3.08 (about 21% per annum).  Given how close Akamai’s numbers were to OFCOM’s in November 2008 (a year after the iPhone UK release, but probably too early for mobile to have significant effect), it’s reasonable to assume that mobile plays a large part of this difference.

If we assume Akamai’s numbers reflected real broadband rates prior to November 2008, we can also use it to extend the OFCOM data back a year: this is important since there was almost no bandwidth growth according to Akamai from Q4 2007 to Q4 2008: ignoring that period gives a rosier picture than my last post, and smells of cherrypicking data.

So, let’s say the UK went from 3.265Mbit in Q4 2007 (Akamai numbers) to 22.8Mbit in Q4 2014 (OFCOM numbers).  That’s a factor of 6.98, or 32% increase per annum for the UK. If we assume that the US Akamai data is under-representing Q4 2014 speeds by the same factor (6.333 / 3.08 = 2.056) as the UK data, that implies the US went from 3.644Mbit in Q4 2007 to 11.061 * 2.056 = 22.74Mbit in Q4 2014, giving a factor of 6.24, or 30% increase per annum for the US.

As stated previously, China is now where the US and UK were 7 years ago, suggesting they’re a reasonable model for future growth for that region.  Thus I revise my bandwidth estimates; instead of 17% per annum this suggests 30% per annum as a reasonable growth rate.


August 14, 2015

The Basics: The Age of Entitlement

The first new album from The Basics in several years, and it’s a good ‘un.

They’ve been making use of their time off – Wally became an international star as Gotye, Kris spent 2 years working in Kenya with the Red Cross, and both Tim and Kris ran for parliament in the recent Victorian state election.

Now, mix those experiences together and you have an idea of what’s on the new album. A combination of politically aware rock – locally focussed 1 with Whatever Happened To The Working Class?, globally focussed with Tunaomba Saidia – and some Gotye-esque pop such as Good Times, Sunshine!.

Anyway, listen to it now below, then go out and buy it.

  1. Is it too much to call them a modern Midnight Oil? I don’t think so.

August 13, 2015

Your "Infrastructure as Code" is still code!

Whether you’re a TDD zealot, or you just occasionally write a quick script to reproduce some bug, it’s a rare coder who doesn’t see value in some sort of automated testing. Yet, somehow, in all of the new-age “Infrastructure as Code” mania, we appear to have forgotten this, and the tools that are commonly used for implementing “Infrastructure as Code” have absolutely woeful support for developing your Infrastructure Code. I believe this has to change.

At present, the state of the art in testing system automation code appears to be, “spin up a test system, run the manifest/state/whatever, and then use something like serverspec or testinfra to SSH in and make sure everything looks OK”. It’s automated, at least, but it isn’t exactly a quick process. Many people don’t even apply that degree of rigour to their system config systems, and rely on manual testing, or even just “doing it live!”, to shake out the bugs they’ve introduced.

Speed in testing is essential. As the edit-build-test-debug cycle gets longer, frustration grows exponentially. If it takes two minutes to get a “something went wrong” report out of my tests, I’m not going to run them very often. If I’m not running my tests very often, then I’m not likely to write tests much, and suddenly… oops. Everything’s on fire. In “traditional” software development, the unit tests are the backbone of the “fast feedback” cycle. You’re running thousands of tests per second, ideally, and an entire test run might take 5-10 seconds. That’s the sweet spot, where you can code and test in a rapid cycle of ever-increasing awesomeness.

Interestingly, when I ask the users of most infrastructure management systems about unit testing, they either get a blank look on their face, or, at best, point me in the direction of the likes of Test Kitchen, which is described quite clearly as an integration platform, not a unit testing platform.

Puppet has rspec-puppet, which is a pretty solid unit testing framework for Puppet manifests – although it isn’t widely used. Others, though… nobody seems to have any ideas. The “blank look” is near-universal.

If “infrastructure developers” want to be taken seriously, we need to learn a little about what’s involved in the “development” part of the title we’ve bestowed upon ourselves. This means knowing what the various types of testing are, and having tools which enable and encourage that testing. It also means things like release management, documentation, modularity and reusability, and considering backwards compatibility.

All of this needs to apply to everything that is within the remit of the infrastructure developer. You don’t get to hand-wave away any of this just because “it’s just configuration!”. This goes double when your “just configuration!” is a hundred lines of YAML interspersed with templating directives (SaltStack, I’m looking at you).

August 12, 2015

Developer On Fire

I recently had a chat with Dave from Developer On Fire. We talked about a bunch of things – what I work on, how I got started in programming, as well as some thoughts on remote working and maintaining a healthy work/life balance.

The post is here, or you can listen to the audio below.

A Miserable Debt Free Life

I have a lifestyle that is different to many people, and I have been encouraged to write about it. According to my friends, I am “living the dream”. I get up in the morning and can choose to do anything I want. I don’t work for money any more. I’ve been able to do this since I was 38 (I’m 48 now). I don’t appear to want for anything (material).

This is not a HowTo on retiring young, just a little about my story. Use it as a source of ideas.

Ten years ago I had an executive job in the sat-com industry, and prior to that I had a moderately successful small business, and a stint in academia. Although I was an effective manager, small businessman, and engineer, I was consistently dissatisfied. I did enjoy some parts of these jobs: Digital Signal Processing (DSP), open source, helping people, engineering, teaching thereof, annoying my managers, doing coffee and extremely long pub lunches. Rather than knuckle under and be a good corporate lad I decided to focus on what I enjoyed most. Especially the coffee and pub lunches.

So I quit corporate life to be a full time “hacker”. I use the term hacker in the positive sense: I develop clever technology. Then, rather than using it to make a profit, I give the technology away in the hope that it will help people. I’ve had some success at this goal over the last 10 years.

My corporate wardrobe is now my pajamas. I spend most of the day sitting on my couch (thanks Dave for the couch BTW!) hacking on my laptop, with daily forays on my bike to a cafe by the beach. This gets me exercise, some social connection, and caffeine.

Once a month I travel interstate to a friend’s house, borrow their bike, cook for them, and sit on their couch and hack. Mixing a bit of travel with my “work”.

At the moment I average 6 hours of real, focused, head over the laptop work a day. Which is the equivalent of 2 days in a “real job” where you have meetings, managers to annoy, and pub lunches to attend.

Some of the influences that set me on this path:


  • In my final years of corporate life I listened to podcasts about using technology to help people by a guy called David Bornstein. This idea was quite appealing, a good use of my skills.
  • At the same time a couple of friends (Scott and Horse) put the idea in my head of lifestyles not aimed merely at continual material accumulation. One of them had paid off his house but didn’t see any reason to “upgrade” with more debt; the other just bailed on an engineering career to play volleyball and guitar, living off his savings for a bit. Huh? I found myself admiring them.
  • Volunteer work my Father did for disabled people.
  • A book called Affluenza by Clive Hamilton, which deals with our growing addiction to materialistic lifestyles.
  • Travel. Especially to the developing world, Timor Leste, India, townships in South Africa.

But How Do You Get Money to Live?

Money you need = income – how much you spend.

I live frugally, but am always happy to spend money on my kids (for stuff they really need) or entertaining friends. Most nights I dine in rather than going out, and can cook a bunch of meals in 10 minutes that feed 4 people for $10.

My living costs (including food, bills, housing, schooling, medical, transport) are about $40,000/pa, before any discretionary purchases like new IT or holidays. That is for a household of 2.5 people (I share care of one child).

I drive an electric car which costs very little to drive and maintain, which I supplement with the occasional loan/rental of petrol cars.

My kids go to public schools; my peers spend up to $40k/year on private education. I am home every day for them when school ends, can help them in almost any subject (although they never ask of course as I am their Dad), and attend every interview to monitor their progress at school.

I am not convinced there is any significant advantage from private schools, but acknowledge the emotional buttons and peer group pressure around private education is strong for many parents. My kids are doing pretty well, e.g. one at University, another getting good grades at the best science and maths school in the state.

I get income from a variety of sources, but the total is rather low compared to my peers. And that’s OK. Currently there is some income from SM1000 sales. In the past it’s been from VOIP products like the IP0X VOIP systems and a little contract work. I have some passive income from shares, enough to cover my rent. So it’s a bit like owning my home. These shares have been accumulated over 20 years simply by saving and reinvesting. No get rich quick schemes here.

Planning is good if you want to get somewhere. Here is a simple financial plan: start with $10k, save $100/week (5% more every year), and invest at 10% (you get to work out where). Repeat for 20 years and you have enough to buy a house, or generate some passive income. Yes, I know it doesn’t include inflation, and returns vary over time, blah, blah, blah. Your turn – come up with a model that does include these factors.
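For anyone who wants to play with the model, here is a minimal sketch of that plan as a loop (annual compounding, with each year's deposits added before that year's growth – one of several reasonable conventions):

```shell
# $10k start, $100/week deposits rising 5% each year, 10% p.a. return, 20 years
awk 'BEGIN {
    balance = 10000
    weekly  = 100
    for (year = 1; year <= 20; year++) {
        balance = (balance + weekly * 52) * 1.10  # add a year of deposits, grow 10%
        weekly *= 1.05                            # raise the weekly savings 5%
    }
    printf "After 20 years: $%.0f\n", balance
}'
```

which lands a little over half a million dollars – in the ballpark of an average Australian house.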

In Australia the government gives much of the population “middle class welfare”, a few $100/week which covers much of my food and bills. We also have free public health care. So the country you live in helps. On the down side the houses here are really expensive to buy, an average of $500,000 (10 years average income), and public transport is poor. Every country has its pros and cons.

I have modest financial skills and good habits. Primarily the ability to spend less than I earn and avoidance of debt. I use a trusted share broker to choose conservative shares, but I decide the overall strategy. Some people like real estate for investment, it doesn’t really matter.

Saving and time is the key. Conversely if you can’t save, it doesn’t matter if you earn $200k. At the end of your time you will have nothing but a pile of debt, useless possessions, an endless need to work hard, broken relationships, and stress.

Every few years I go without income and live off savings for 12 months while I develop a new product. Living off savings for a while has ceased to worry me. However I understand most people are 1 pay cheque away from serious financial trouble. How about you? How long could you live without a pay cheque? It’s a good check on your financial health.

Effective Altruism

I’ve recently realised that working for free to help people is a form of altruism. So instead of traveling to a village to help less fortunate people I’d like to invent a widget (or part of a widget) that might let thousands of villages communicate. Or something like that. At the moment I’m focused on digital voice over HF radio, which has many applications, e.g. in humanitarian and remote communications. I’m inventing technology building blocks that let me help the world a little bit, and stretch my professional skills (it’s post doctoral R&D in my field of signal processing).

Now, if you can name an enterprise, you can engineer it. Use numbers to make it better.

So, I’m currently reading a book called The Most Good You Can Do by Peter Singer. This guy is applying a numerical framework to “Effective Altruism”. Engineering it. For example he calculates the impact of donating a kidney (really!), or saving a person’s life with $x versus preventing blindness in 10 people with the same $x. Is it better to work for $200k and donate $150k to a charity, or work for $50k in the same charity? Quite an easy to read book, but some fascinating ideas.


I have to think carefully and be sensitive to connect with the life of people with “real jobs”. When my friends head off to work every day, it’s a mental shift for me to understand. Yes I understand we need to be fed and housed. However I note a large portion of this work seems to be around paying for things we don’t really need, or making minimum payments on enormous debt. Wage slavery? But hey, my bank shares keep going up, and the dividends keep getting bigger, so who am I to question this system!

Until age 38 I was very focused on material accumulation. I had a Porsche 911 (called Helmut), several investment properties, a wife, and several pairs of trousers. At that stage I just ran out of things to buy. That was disconcerting for someone who came of age in the 1980′s. It felt, inexplicably, like a miserable debt free life. So I had a bit of a think. It’s taken me a little while to shift my attitude to money, however I am gradually letting go. Old habits die hard. I still feel bad when not saving and accumulating.

Re money: all you need is the ability to save, and time. Many people seem compelled to piss every cent they get down the drain. This is encouraged by the “growth paradigm” that governments push, easy debt, materialism, and lack of financial literacy.

Here is how I’m making my kids rich. It’s working too – they are in a better position than I was at their age and a million miles ahead of their peers. Or for that matter a lot of people my age (30 years older than them). It’s not just their net worth either: I get them involved, building their skills in handling money and investing. Showing a 9 year old a dividend statement. Getting a 17 year old to build a spreadsheet predicting growth of his assets over 10 years. Giving a child part of my web business to run. Making them wait for new material possessions. Not giving them everything they want, or that their friends have. Forming good habits early.

I struggle with the idea of debt for non-essential items, or huge, barely serviceable debt that is impossible to pay down quickly. Certainly not for a bigger house or a $500 outfit or gadgets “paid” for on a credit card. Or strongly depreciating items like cars (unless it’s electric of course). Useless debt turns the financial model above on its head. If you waste $100/week now YOU get to pay the banks $500k over 20 years. Then go back to work to do it again for another 20!

I am very fortunate and feel I must help others with my good fortune. We are all going to die one day. Everything that matters to us, everything we ever owned, every problem we have – will all be dust. Quickly forgotten, after a few kind words at your funeral. However helping improve the lives of others matters. That can endure. I can’t imagine a life where I am not helping others. Working just for more toys or my own needs is not enough.

So now I’m going to sit on my couch, do some hacking, then give it away.

And no – you are not going to see me in my PJs!

Reading Further

A Miserable Debt Free Life Part 2

August 11, 2015

LUV Beginners August Meeting: BTRFS and ZFS on Linux hands on tutorial

Aug 15 2015 12:30
Aug 15 2015 16:30
Aug 15 2015 12:30
Aug 15 2015 16:30

RMIT Building 91, 110 Victoria Street, Carlton South

Russell Coker will demonstrate how to setup and use BTRFS and ZFS filesystems and recover from errors that are fatal to other filesystems.

This tutorial will be run on Xen servers run by Russell. The OS images will be available on USB sticks for anyone who wants to run it on their own virtual machines. Xen and KVM should work without much effort, and other virtual machines should work with a little more effort.

Russell has done lots of Linux development over the years, mostly involved with Debian.

LUV would like to acknowledge Red Hat for their help in obtaining the Trinity College venue and VPAC for hosting.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


FreeDV Robustness Part 7 – FreeDV 700B

For the last few weeks I’ve been working on improving the quality of the 700 bit/s Codec 2 mode. It’s been back to “research and disappointment” however I’m making progress, even uncovering some mysteries that have eluded me for some time.

I’ve just checked in a “FreeDV 700B” mode, and ported it to the FreeDV API and FreeDV GUI application. Here are some samples:

Clean SNR=20dB 700 700B 1600 SSB
HF fast fading SNR=2dB 700 700B 1600 SSB

I think 700B is an incremental improvement on 700, but still way behind FreeDV 1600 on clean speech. Then again it’s half the bit rate of 1600 and doesn’t fall over in HF channels. On air reports suggest the difference in robustness between 1600 and 700 on real HF channels is even more marked than above. Must admit I never thought I’d say the FreeDV 1600 quality sounds good!

FreeDV 700B uses a wider Band Pass Filter (BPF) than 700, and a 3 stage Vector Quantiser (VQ) for the LSPs, rather than scalar quantisers. VQ tends to deliver better quality for a given bit rate as it takes into account correlations between LSPs. This lets us lower the bit rate for a given quality. The down side is VQs tend to be more sensitive to bit errors, use more storage, and more MIPs. As HF is a high bit error rate channel, I have shied away from VQ until now.
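To illustrate the core VQ operation, here is a minimal sketch of codebook-based vector quantisation in Python. The codebook values below are made up for illustration; the real Codec 2 codebooks (in codec2-dev/src/codebook) are trained on speech data and searched with a multi-stage mbest algorithm rather than this brute-force scan.

```python
import numpy as np

def vq_encode(vec, codebook):
    """Return the index of the nearest codebook entry (squared-error metric)."""
    dists = np.sum((codebook - vec) ** 2, axis=1)
    return int(np.argmin(dists))

def vq_decode(index, codebook):
    """Look the quantised vector back up from its index."""
    return codebook[index]

# Toy 2-bit codebook of 2-dimensional "LSP-like" vectors (made-up values).
codebook = np.array([[0.1, 0.2],
                     [0.3, 0.5],
                     [0.6, 0.7],
                     [0.8, 0.9]])

vec = np.array([0.32, 0.48])
idx = vq_encode(vec, codebook)        # nearest entry is [0.3, 0.5]
quantised = vq_decode(idx, codebook)
```

The whole vector costs just 2 bits here, versus a separate scalar quantiser per element. The downside mentioned above also shows up in this sketch: a single bit error in `idx` selects a completely different vector, whereas a scalar quantiser bit error perturbs only one element.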

Cumulative Quality, Additive Distortion

Using the c2sim program I can gradually build up the complete codec (see below for command line examples):

Sinusoidal Model
p=6 LPC Amplitude Modelling
phase0 Phase Model
Decimated to 40ms
Fully Quantised 700B

As each processing step is added, the quality drops. There is also another drop in quality when you get bit errors over the channel. It’s possible to mix and match c2sim command line options to homebrew your own speech codec, or test the effect of a particular processing step. It’s also a very good idea to try different samples, vk5qi and ve9qrp tend to code nicely, tougher samples are hts1a, hts2a, cq_ref, and kristoff.

I have a model in my mind “Overall Quality = 0BER quality – effect of bit errors”. Now 700B isn’t as robust as 700 as it uses Vector Quantisation (VQ). However as 700B starts from a higher baseline, when errors occur the two modes sound roughly the same. This was news to me, and a welcome development – as VQ is a powerful technique for high quality while lowering the bit rate. A good reason to release early and often – push a prototype through to a testable state.

The model also seems to apply for the various processing and quantisation steps. The BPF p=6 LPC model causes quite a quality drop, so you have to be very careful with LSP quantisation to maintain quality after that step. With p=10 LPC, the quality starts off high, so it appears less sensitive to quantisation.

Quantiser Design and Obligatory Plots to break up Wall of Text

A quantiser takes a floating point number, and represents it with a finite number of bits. For example it might take the pitch in the range of 50 to 500 Hz, and convert it to a 7 bit integer. Your sound card takes a continuous analog voltage and converts it to a 16 bit number. We can then send those bits over a channel. Fewer bits is better, as it lowers your bit rate. The trade off is distortion: as the number of bits drops, you start to introduce quantisation noise.
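As a concrete sketch, here is a minimal uniform scalar quantiser in Python. The 50 to 500 Hz pitch range and 7 bit resolution follow the example above; the actual Codec 2 quantisers use non-uniform look-up tables, so treat this purely as an illustration of the idea.

```python
def quantise(x, lo, hi, bits):
    """Uniform scalar quantiser: map x in [lo, hi] to an integer index."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    index = int((x - lo) / step)
    return min(max(index, 0), levels - 1)   # clamp out-of-range inputs

def dequantise(index, lo, hi, bits):
    """Reconstruct the value at the centre of the quantiser bin."""
    step = (hi - lo) / (2 ** bits)
    return lo + (index + 0.5) * step

# Pitch in 50..500 Hz with 7 bits -> 128 levels, ~3.5 Hz per step.
idx = quantise(187.0, 50.0, 500.0, 7)
rec = dequantise(idx, 50.0, 500.0, 7)
```

The reconstruction error is bounded by the step size, about 3.5 Hz here; halving the number of bits doubles the step size and hence the worst-case quantisation noise.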

A quantiser can be implemented as a look up table, there are a bunch of those in codec2-dev/src/codebook, please take a look.

Quantiser design is one of those “nasty details” of codec development. It reminds me of fixed point DSP, or echo cancellation. Lots of tricks no one has really documented, and the theory never seems to quite work without experience-based tweaking. No standard engineering practice.

Anyhoo, I came up with a simple test to see if all quantiser indexes are being used. In this case it was a 3 bit quantiser for the LPC energy. This is effectively the volume or gain of the coded speech in the current frame. So I ran a couple of speech samples through c2sim, dumped the energy quantiser index, and tabulated the results:

vk5qi (quieter)


octave:40> tabulate(eind)
     bin     Fa       Fr%        Fc
       1    306     22.58%      306
       2    164     12.10%      470
       3    298     21.99%      768
       4    329     24.28%     1097
       5    199     14.69%     1296
       6      9      0.66%     1305
       7      1      0.07%     1306


ve9qrp_10s (normal volume)


octave:42> tabulate(eind)
     bin     Fa       Fr%        Fc
       1     73      7.30%       73
       2     68      6.80%      141
       3     88      8.80%      229
       4    240     24.00%      469
       5    328     32.80%      797
       6    132     13.20%      929
       7     44      4.40%      973

This looks reasonable to me. The louder sample has a distribution skewed towards the higher energy bins, as expected. Although I note the 8th bin is never used. This means we are effectively “wasting” bits. So perhaps we could reduce the range of the quantiser, or it could be a bug. The speech sounds pretty similar with/without the energy quantiser applied in c2sim. This is good.
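The same index-usage check can be sketched in a few lines of Python. The `eind` values below are hypothetical, standing in for the indexes dumped by c2sim; the point is simply that any bin with a zero count is a codeword (and hence bits) going unused.

```python
from collections import Counter

def tabulate(indexes, bits):
    """Count how often each quantiser index occurs, like Octave's tabulate()."""
    counts = Counter(indexes)
    total = len(indexes)
    rows = []
    for b in range(2 ** bits):
        fa = counts.get(b, 0)
        rows.append((b, fa, 100.0 * fa / total))
    return rows

# Hypothetical stream of 3-bit energy indexes; bin 7 never occurs,
# which shows up as a wasted codeword.
eind = [0, 1, 1, 2, 3, 3, 3, 4, 5, 6]
rows = tabulate(eind, 3)
unused = [b for b, fa, _ in rows if fa == 0]
```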

Here are some plots I generated in the last few weeks that illustrate quantiser design. My first attempt to improve 700 involved new scalar quantisers for the 6 LSPs (click for a larger image):

These are histograms of the indexes of each quantiser. A graphical form of the tabulations above, except for the 6 LSP quantisers rather than energy. Note how some indexes are hardly used? This may indicate wasted bits. Or not. It could be those few samples that hardly register on the histogram REALLY matter for the speech quality. Welcome to speech coding…..

Here is a single frame frozen in time:

The “A enc” and “A dec” lines are the LPC spectrum before and after quantisation. This is represented by 6 LSP frequencies that are the vertical lines plotted at the bottom. Notice how the two spectrums and LSP frequencies are slightly different? This is the effect of quantisation.

Here is a another view of many frames, using a measure called Spectral Distortion (SD):

SD is the difference between the original and quantised LPC spectrums, averaged over all frames and measured in dB. The top plot shows a histogram of the SD for each frame. Most frames have a small SD but there is a long tail of outliers. Some of these matter to us, and some don’t. For example no one cares about a large SD when coding background noise.

The bottom plot shows how SD is distributed across frequency. The high SD at higher frequencies is intentional, as the ear is less sensitive there. The SD drops to zero after 2600 Hz due to the band pass filter roll off.
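For reference, here is one common way to compute per-frame SD in Python: the RMS difference between two power spectra on a dB scale. The exact weighting and frequency range used in codec2-dev may differ, so this is a sketch of the measure, not the project's implementation.

```python
import numpy as np

def spectral_distortion_db(spec_a, spec_b):
    """RMS difference between two power spectra on a log (dB) scale."""
    a_db = 10.0 * np.log10(spec_a)
    b_db = 10.0 * np.log10(spec_b)
    return float(np.sqrt(np.mean((a_db - b_db) ** 2)))

# Identical spectra give 0 dB; a uniform factor-of-2 power error gives
# a constant 10*log10(2) ~ 3.01 dB distortion.
spec = np.array([1.0, 2.0, 4.0, 8.0])
sd_same = spectral_distortion_db(spec, spec)
sd_half = spectral_distortion_db(spec, spec * 2.0)
```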

Next Steps

I’m pleased that with a few weeks work I could incrementally improve the codec quality. Lots more work could be done here, I had to skip over a bunch of ideas in order to get something usable early.

I was also very pleased that once I had the Codec 2 700B mode designed and tested in c2sim, I could quickly get it “on the air”. I “pushed” it through the FreeDV “stack” in a few hours. This involves adding the new mode to codec2.c as separate encoder and decoder functions, modifying c2enc/c2dec, modifying the FreeDV API (including freedv_tx/freedv_rx), testing it over a fading channel simulation with cohpsk_ch, and adding the new mode to the FreeDV GUI program. The speed of integration is very pleasing, and is a sign of a good set of tools, good design, and a well thought out and partitioned implementation.

OK, so back to work. After listening to a few samples on the 700/700B/1600 modes I had the brainstorm of trying p=10 LPC using the same VQ design as 700B. The “one small step leads to another” R&D technique that is the outcome of steady, consistent work. Initial results are very encouraging, as good as FreeDV 1600 for some samples. At half the bit rate. So I’ll hold off on a general release of 700B until I’ve had a chance to run this 700C candidate to ground.

Command Line Kung Fu

Here’s how I simulate operation over a HF channel:

~/codec2-dev/build_linux/src$ ./freedv_tx 700B ../../raw/ve9qrp_10s.raw - | ./cohpsk_ch - - -24 0 2 1 | ./freedv_rx 700B - - | play -t raw -r 8000 -s -2 -

Here’s how I use c2sim to “build” a fully quantised codec. I start with the basic sinusoidal model:

~/codec2-dev/build_linux/src$ ./c2sim ../../wav/vk5qi.wav -o - | play -t raw -r 8000 -e signed-integer -b 16 - -q

Lets add 6th order LPC modelling:

~/codec2-dev/build_linux/src$ ./c2sim ../../wav/vk5qi.wav --bpfb --lpc 6 --lpcpf -o - | play -t raw -r 8000 -e signed-integer -b 16 - -q

Hmm, lets compare to 10th order LPC (this doesn’t need the band pass filter as explained here):

~/codec2-dev/build_linux/src$ ./c2sim ../../wav/vk5qi.wav --lpc 10 --lpcpf -o - | play -t raw -r 8000 -e signed-integer -b 16 - -q

Unfortunately we can’t send the sinusoidal phases over the channel, so we replace them with the phase0 model.

~/codec2-dev/build_linux/src$ ./c2sim ../../wav/vk5qi.wav --phase0 --postfilter --bpfb --lpc 6 --lpcpf -o - | play -t raw -r 8000 -e signed-integer -b 16 - -q

Next step is to use an 18 bit VQ for the LSPs, decimate from 10ms to 40ms frames, and quantise the pitch and energy. Which gives us the fully quantised codec.

~/codec2-dev/build_linux/src$ ./c2sim ../../wav/vk5qi.wav --phase0 --postfilter --bpfb --lpc 6 --lpcpf --lspmel --lspmelvq --dec 4 --sq_pitch_e_low -o - | play -t raw -r 8000 -e signed-integer -b 16 - -q

To make a real world usable codec we then need to split the signal processing into a separate encoder and decoder (via functions in codec2.c). However this won’t change the quality, the decoded speech will sound exactly the same.

If you are keen enough to try any of the above and have any questions please email the codec2-dev list, or post a comment below.

Last Few Weeks Progress

Jotting this down for my own record, so I don’t forget any key points. However feel free to ask any questions:

  • I have been working with LSPs using a warped frequency (mel scale) axis that models the log frequency response of our ear.
  • Tried a new set of scalar quantisers but not happy with quality.
  • Studied effect of microphones on low bit rate speech coding. Systematically tracking down anything that can affect speech quality. Every little bit helps, and improves my understanding of the problems I need to solve.
  • Finally worked out why pathological samples (cq_ref, kristoff, k6hx) don’t code well with LPC models.
  • Explored p=6 LPC models, and found out just how important clear formant definition is for speech perception.
  • Explored vector quantisation for p=6 LPC model, including Octave and C mbest search implementations.
  • Engineered an improved 700 bit/s mode, implemented in Octave, C, integrated into FreeDV API and FreeDV GUI program.
  • Extended my simulation and test software, c2sim, melvq.m
  • Formed ideas on additive distortion in speech coding, importance of clear definition of formants
  • Came up with an idea for a non-LPC model that preserves clear formants with less emphasis on factors that don’t affect intelligibility, such as HP/LP spectral slope and formant widths. LPC/LSP has some weaknesses and only indirectly preserves the attributes of the speech spectrum that matter, such as formant/anti-formant structure.
Reading Further

Codec 2 Page

August 10, 2015

Council Minutes Wednesday 15 July 2015

Wed, 2015-07-15 19:47 - 20:29

1. Meeting overview and key information


Josh Hesketh, Josh Stewart, Craige McWhirter, Sae Ra Germaine


James Iseppi, Chris Neugebauer, Tony Breeds

Meeting opened by Josh H at 1947hrs and quorum was achieved

MOTION that the previous minutes of 1 July are correct

Moved: Josh H

Seconded: Sae Ra

Passed with 2 abstentions.

2. Log of correspondence

Motions moved on list


General correspondence

GovHack 2015 as a subcommittee

MOTION by Josh H We accept govhack as an LA Sub-committee with the task of running GovHack at a national level with:

- Geoff Mason - lead

- Alysha Thomas

- Pia Waugh - as the liaison to LA

- Sharen Scott

- Diana Ferry

- Alex Sadleir

- Richard Tubb

- Jan Bryson

- Keith Moss

Under the Sub-committee policy v1 to allow the committee to run with autonomy and to use an external entity for administration.

Seconded Chris

Passed Unanimously

The old Subcommittee policy will need to come into effect

UPDATE: Bill from GovHack for Tony to process.

Need to discuss the Linux Australia Prize. Who will be judging the prize. The prize value is at $2000 “Best use or contribution to Open Source”.

For future GovHack events an option to offer tickets to a LA run conference.

A few tickets could be given

MOTION: Josh H Moves LA sponsors GovHack for the prize of “Best use of or contribution to Open Source” for the value of $2000.

Seconded by Sae Ra

Passed unanimously.

MOTION: Josh H Moves LA to reach out to the Geelong GovHack team and Geelong LCA team to discuss local sponsorship of LCA tickets

Seconded Chris

Passed unanimously.

Josh H to take this action

UPDATE: Josh H to find out if we need to do anything to Judge the OpenSource Bounty.

Invoice from LESLIE POOLE - Reminder notice from Donna and Leslie have arrived.

Supporting the Drupal Accelerate fund

UPDATE: In Progress. Tony to process in Xero

UPDATE: Drupal to send through an updated invoice. - In Progress

UPDATE: Paid and processed, to be removed from the next agenda

Admin Team draft budget from STEVEN WALSH

UPDATE: Awaiting a firmer budget

UPDATE: Still awaiting

Insurance claim for GovHack

The TV has been damaged.

MOTION by Josh Hesketh: Approve Spacecubed to purchase a replacement up to $600

Seconded: Josh Stewart

Passed unanimously

ACTION: Josh H to action this.

JoomlaDay Brisbane Subcommittee:

Proposed members:

- Jeff Wilson: Site Chair

- Carly Willats: Treasurer

- Julian Murray

- Tim Plummer: Community Member

- Shane Thorpe: Community Member

MOTION: Josh H approve the formation of JoomlaDay Brisbane, with the members mentioned above. With Tim and Shane as Community members

Seconded: Sae Ra Germaine

Passed Unanimously

DrupalSouth 2016 Subcommittee:

Proposed Subcommittee members:

- Vladimir Roudakov: Site Chair, Community Brisbane

- Simon Hobbs: Treasurer,

- Josh Waihi*: Observer, Community NZ

- Brian Gilbert*: Observer

- Josh Li*: Observer, Community Canberra

- Tony Aslett*: Community Brisbane

- Sean Cleary*: Community Brisbane

- Murray Woodman: Community Sydney

- Josh Martin: Community Melb

- Jamie Jones: Community Gold Coast

- Owen Lansbury*: Observer

MOTION: Josh H approve the formation of DrupalSouth 2016 subcommittee with the members mentioned above.

Seconded: Sae Ra Germaine

Passed Unanimously

3. Review of action items from previous meetings

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.

Discussion held about means of finding people willing to assist with both the maintenance of the website platform as well as the content available on this.

JOSH H to speak to Donna regarding this

UPDATE: Ongoing

UPDATE: to be moved to a general action item. To do a call for help to work on the website. Could this be treated as a project.

We need to at least get the website to D8 and automate the updating process.

ACTION: Josh to get a backup of the site to Craig

ACTION: Craige to stage the website to see how easy it is to update.

UPDATE: Craige to log in to the website to elevate permissions.

UPDATE: Still in progress

ACTION: Josh H to tarball the site.

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.

Update: In progress

Update: To be done on friday.

Update: Still in progress

ACTION WordCamp Brisbane - JOSH H to contact Brisbane members who may possibly be able to attend conference closing

ACTION: Sae Ra to send through notes on what to say to James.

UPDATE: James delivered a thank you message to WordCamp.

WordCamp was a successful event. Thank you to the organisers.

ACTION: Josh H to get a wrap up/closing report

UPDATE: Still in progress

ACTION: Josh H to follow-up on Invoices from WordCamp Sydney

UPDATE: Would be interested in changing the subcommittee structure for ongoing conferences. Conference committees to draft a policy.

UPDATE: Still in progress

4. Items for discussion

- LCA2016 update

Going along very well.

- LCA2017 update

Going along well

- LCA2018 update

There have been no expressions of interest.

- PyCon AU update

In a few weeks

Sales are going well.

- Drupal South

Correspondence: Quotes from Venue

UPDATE: require some final details to finalise the sub-committee. DrupalSouth to submit a budget.

Covered in the points above

- WordCamp Brisbane

Wrap up report required

- OSDConf

ACTION: Josh H to follow-up on budget status

ACTION: Josh to ping OSDConf

Payment for venue has been processed, banking and finances have been sorted.

- GovHack

Logos have been updated

Insurance claim

Judging needs to be sorted.

- JoomlaDay

Covered in previous notes.

5. Items for noting

Second F2F

Dates have been set for 14-15-16th F2F

ACTION: Josh H to book the Hotel and conference room.

6. Other business

Membership of auDA

Relationship already exists.

LA has the potential to influence the decisions that are made.

ACTION: Council to investigate and look into this further. To be discussed at next fortnight.

UPDATE: Carried to next meeting

MOTION: Josh H moves that LA becomes a Demand Class member of auDA

Seconded: Tony B

Passed unanimously.

ACTION: Josh H to sign up with LA CC

UPDATE: In progress


David would like to keep working on ZooKeepr.

We will need to find a solution that does not block volunteers from helping work on ZooKeepr.

ACTION: James to look at ZooKeepr

UPDATE: In Progress.

ACTION: Josh S to catch up with David Bell regarding the documentation.

UPDATE: In progress.

ACTION: Josh H to catch up with James at PyconAU

Grant Request from Kathy Reid for Renai LeMay’s Frustrated State

MOTION by Josh H given the timing the council has missed the opportunity to be involved in the Kickstarter campaign. The council believes this project is still of interest to its members and will reach out to Renai on what might be helpful in an in kind, financial or other way. Therefore the grant request is no longer current and to be closed.

Seconded Sae Ra Germaine

Passed unanimously

ACTION: Josh Stewart to contact Renai

UPDATE: Contact has been made.

Mailing list request from RUSSELL COKER regarding Science Cafe

To be discussed next Council Meeting.

MOTION by Josh Hesketh Linux Australia approves Russell Coker’s Science Cafe Mailing List

Seconded: Sae Ra Germaine

Passed unanimously

7. In camera

2 Items were discussed in camera

2029PM close.

Minutes of Council Meeting 1 July 2015

Wed, 2015-07-01 19:50 - 20:39

1. Meeting overview and key information


Chris Neugebauer, Tony Breeds, Sae Ra Germaine, James Iseppi, Josh Hesketh


Josh Stewart, Craige McWhirter

Meeting opened by Josh H at 1950hrs and quorum was achieved

MOTION that the previous minutes of 17 June are correct

Moved: Josh H

Seconded: Chris

Passed with 2 abstentions

2. Log of correspondence

Motions moved on list


General correspondence

GovHack 2015 as a subcommittee

MOTION by Josh H We accept govhack as an LA Sub-committee with the task of running GovHack at a national level with:

- Geoff Mason - lead

- Alysha Thomas

- Pia Waugh - as the liaison to LA

- Sharen Scott

- Diana Ferry

- Alex Sadleir

- Richard Tubb

- Jan Bryson

- Keith Moss

Under the Sub-committee policy v1 to allow the committee to run with autonomy and to use an external entity for administration.

Seconded Chris

Passed Unanimously

The old Subcommittee policy will need to come into effect

UPDATE: Bill from GovHack for Tony to process.

Need to discuss the Linux Australia Prize. Who will be judging the prize. The prize value is at $2000 “Best use or contribution to Open Source”.

For future GovHack events an option to offer tickets to a LA run conference.

A few tickets could be given

MOTION: Josh H Moves LA sponsors GovHack for the prize of “Best use of or contribution to Open Source” for the value of $2000.

Seconded by Sae Ra

Passed unanimously.

MOTION: Josh H Moves LA to reach out to the Geelong GovHack team and Geelong LCA team to discuss local sponsorship of LCA tickets

Seconded Chris

Passed unanimously.

Josh H to take this action

Invoice from LESLIE POOLE - Reminder notice from Donna and Leslie have arrived.

Supporting the Drupal Accelerate fund

UPDATE: In Progress. Tony to process in Xero

UPDATE: Drupal to send through an updated invoice. - In Progress

Admin Team draft budget from STEVEN WALSH

UPDATE: Awaiting a firmer budget

UPDATE: Still awaiting

3. Review of action items from previous meetings

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.

Discussion held about means of finding people willing to assist with both the maintenance of the website platform as well as the content available on this.

JOSH H to speak to Donna regarding this

UPDATE: Ongoing

UPDATE: to be moved to a general action item. To do a call for help to work on the website. Could this be treated as a project.

We need to at least get the website to D8 and automate the updating process.

ACTION: Josh to get a backup of the site to Craig

ACTION: Craige to stage the website to see how easy it is to update.

UPDATE: Craige to log in to the website to elevate permissions.

UPDATE: Still in progress

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.

Update: In progress

Update: To be done on friday.

Update: Still in progress

ACTION WordCamp Brisbane - JOSH H to contact Brisbane members who may possibly be able to attend conference closing

ACTION: Sae Ra to send through notes on what to say to James.

UPDATE: James delivered a thank you message to WordCamp.

WordCamp was a successful event. Thank you to the organisers.

ACTION: Josh H to get a wrap up/closing report

Potential sponsorship of GovHack.

More information is required on the types of sponsorship that LA can look at.

Clarify with GovHack. LA may not be able to sponsor a prize as you would also need to

UPDATE: Criteria would need to be developed. LA would be able to provide their own judge. Josh S to come with some wording and criteria motion to be held on list.

Value of the prize also to be discussed after budget has been analysed by Josh H and Tony B.

To be removed from further agendas

ACTION: Josh H to follow-up on Invoices from WordCamp Sydney

UPDATE: Would be interested in changing the subcommittee structure for ongoing conferences. Conference committees to draft a policy.

4. Items for discussion

- LCA2016 update

All going well. CFP closes on Monday

- LCA2017 update


- LCA2018 update

Formal call for bids to go out

UPDATE: Tony to send out a formal call.

- PyCon AU update

Sponsorship is going well. Registrations are tracking well.

- Drupal South

ACTION: Follow-up on DrupalSouth 2016 enquiry. will need to setup a sub-committee

UPDATE: To work out the sub-committee details with organisers.

Correspondence: Quotes from Venue

UPDATE: require some final details to finalise the sub-committee. DrupalSouth to submit a budget.

- WordCamp Brisbane

In Progress

- OSDConf

ACTION: Josh H to follow-up on budget status

ACTION: Josh to ping OSDConf

Payment for venue needs to be processed.

- GovHack

Report from Pia

Do we need to get a Logo to them for the website?

Also one for the LA website?

ACTION: Sae Ra to update LA Website with GovHack logo.

5. Items for noting

Nil items for noting

6. Other business

Backlog of minutes

MOTION by Josh H Minutes to be published to

Seconded: Craige

Passed unanimously

Bank account balances need rebalancing

ACTION: Tony to organise transfers to occur including NZ account.

Appropriate treasurers to be notified.

UPDATE: to be discussed on friday

To be removed from further agendas

Membership of auDA

Relationship already exists.

LA has the potential to influence the decisions that are made.

ACTION: Council to investigate and look into this further. To be discussed at next fortnight.

UPDATE: Carried to next meeting

MOTION: Josh H moves that LA becomes a Demand Class member of auDA

Seconded: Tony B

Passed unanimously.

ACTION: Josh H to sign up with LA CC


David would like to keep working on ZooKeepr.

We will need to find a solution that does not block volunteers from helping work on ZooKeepr.

ACTION: James to look at ZooKeepr

UPDATE: In Progress.

ACTION: Josh S to catch up with David Bell regarding the documentation.

Grant Request from Kathy Reid for Renai LeMay’s Frustrated State

MOTION by Josh H given the timing the council has missed the opportunity to be involved in the Kickstarter campaign. The council believes this project is still of interest to its members and will reach out to Renai on what might be helpful in an in kind, financial or other way. Therefore the grant request is no longer current and to be closed.

Seconded Sae Ra Germaine

Passed unanimously

ACTION: Josh S to contact Renai

UPDATE: In Progress

Mailing list request from RUSSELL COKER regarding Science Cafe

To be discussed next Council Meeting.

7. In Camera

3 Items were discussed in camera.

PyCon Australia 2015!

I was at PyCon Australia 2015 in Brisbane last week, and I presented a couple of talks!

  • Python’s New Type Hints In Action… In JavaScript looked at the tarpit surrounding PEP 484, by introducing Pythonistas to TypeScript, an implementation of the same type system but for JavaScript. There’s a video on youtube and notes on github.
  • Test-Driven Repair looked at the issue of adding tests to code that hadn’t really considered it. I proposed some ideas about how to go about adding tests and refactoring your code to make future testing easy. There was a lot of good discussion after this talk, and this one represents an improvement over the version I presented at OSCON a week earlier. Once again, there’s a video on YouTube and notes on Github.

This was the second year of PyCon Australia in Brisbane, it was pretty excellent. I’m looking forward to next year’s, which will be in Melbourne!

LinuxCon North America in Seattle

I’m excited to be at LinuxCon North America in Seattle next week (August 17-19 2015). I’ve spoken at many LinuxCon events, and this one won’t be any different. Part of the appeal of the conference is being able to visit a new place every year.

MariaDB Corporation will have a booth, so you’ll always be able to see friendly Rod Allen camped there. In between talks and meetings, there will also be Max Mether and quite possibly all the other folk that live in Seattle (Kolbe Kegel, Patrick Crews, Gerry Narvaja).

For those in the database space, don’t forget to come attend some of our talks (represented by MariaDB Corporation and Oracle Corporation):

  1. MariaDB: The New MySQL is Five Years Old & Everywhere by Colin Charles
  2. MySQL High Availability in 2015 by Colin Charles
  3. Handling large MySQL and MariaDB farms with MaxScale by Max Mether
  4. The Proper Care and Feeding of a MySQL Database for a Linux Administrator by Dave Stokes
  5. MySQL Security in a Cloudy World by Dave Stokes

See you in Seattle soon!

City2Surf 2015

It was a Big Run in Sydney yesterday - City2Surf 2015, with 80,000+ participants and more than $4.1 million raised for various charities.

This year I have entered the Blue start:

City2Surf 2015: Blue start

And finished in 1:22:08, 5 minutes 1 second faster than last year! :)

After finish 2015

A friend of mine who also participated in City2Surf 2015 is raising donations for Operation Smile Australia, who fund cleft surgeries in developing countries. The goal of funding two new smiles has been reached with the help of many supporters, though we need a little bit more to make it four! Please consider donating!

August 09, 2015

Twitter posts: 2015-08-03 to 2015-08-09

Advanced VHF Digital Radio using Codec 2

Justin, VK7TW, has published a video of my recent presentation at Gippstech, which was held in July 2015. Good summary of FreeDV, the SM1000, and the exciting possibilities for VHF Digital Voice. Thanks Justin! Here are the Open Office slides of the presentation.

Oh, yes and in the cut out image below my head doesn’t really have the right side missing, although after a week of speech coding R&D and/or a night on red wine it does often feel that way :-)

August 07, 2015

An Open Letter to Prime, WIN and Southern Cross

So, you've decided to launch a campaign to "Save our voices" to try and "rescue" regional voices in news and current affairs.

That's nice and all, but what exactly are you proposing here?

I mean seriously, what do you think it will take to "rescue" an industry that has essentially been left to rot because the people in charge have completely missed the biggest shift in media consumption since the advent of the radio?

If you really want to "Save our voices" and ensure that regional Australia still has access to regional news, then may I suggest that the first thing you do is hire someone who knows something about the internet, someone who understands that the world has moved on and that if companies like WIN and Prime want to survive, then they're going to have to compete with not only the Netflixs and Youtubes, but their own content partners.

Demanding that the government relax media ownership laws or any other sort of government intervention isn't going to save the regional media industry. People have a whole wide internet to go for their entertainment and news needs, there is absolutely no reason for them to stick with their local media, we're so far beyond the age of the 6 o'clock bulletin being the core around which people organised their evenings it's not funny.

Oh and while I'm at it, dear WIN Television, it's a bit hard to take your call for support for local news services seriously when you don't have any news on your website.

So, as someone who has worked in the industry (albeit over 11 years ago), someone who relies on regional media to find out what's going on in his neck of the woods, and someone who is EXTREMELY frustrated that we're still having the "What even is the internet?" discussion in 2015, I am begging you to please hire someone with clue.

For all of the good hard working people within your organisations who are facing the very real possibility of job losses if you don't switch gears, I am begging you to please hire someone with clue.

Just do it. Really.



Apple iCloud Device Locking and General Apple Information

If you work in IT you probably have people ask you random questions out of nowhere from time to time. I was recently asked about how to bypass Apple iCloud device locking.

First of all, my opinion of this. I just try to avoid this space (from any perspective). If it sounds too good/cheap to be true it probably is, yadayada...

There do seem to be some tools online to enable checking prior to purchase, but obviously even that isn't foolproof. For example, if the seller knows that the goods have been locked but never connects to Apple servers, then it is unlikely that the device in question will appear locked prior to the sale. They could also feign ignorance when confronted; law enforcement and the legal system may offer no avenue for recourse, etc...

Safe to give out the serial number of a Mac I'm selling?

iPhone 6 Plus Are "Stolen Goods" from Futu_Online eBay Promotion

If you've been watching this space for a while you'll know about the Doucli bypass. This seems to work based on MITM (Man in the Middle attack) principles (I haven't taken too close a look at this).

For those who don't know what this means: any communications that go from Apple to your device now go through a third party (Doucli). Doucli filters out any traffic which relates to iCloud locking, or simply inserts a different set of communications which can then unlock the device. For anyone who knows how this is done, it can be extremely tedious and difficult, especially if the defender has taken extensive counter-measures against attack.

If you are interested in possible avenues of attacking it here goes:

- preventing it from locking your device should be simple enough: don't connect it to the Internet and allow it to hook up with Apple servers. Earlier versions of the Doucli hack depend on DNS hosts file hacking. Later versions of Apple software seem to block this behaviour though. The easiest way around this is to set up a layered defence/attack with DNS redirects occurring at multiple points between you and Apple, whether via software (relevant configuration files, virtual machines, containers, etc...) and/or hardware (networking hardware, servers, etc...)

- the network/server setup of Apple systems is such that the authentication servers may not be isolated from store purchases, making things slightly more difficult (there are plenty of programs out there to do this). If you must, use a second/intermediary system which downloads music/software and use it to transfer to another system which is never connected online. This allows you to have the benefits of purchasing online while not having to deal with iCloud authentication issues. Your device can not be locked without relevant identifying information being transferred between yourself and Apple (obviously, if this becomes a widespread means of bypassing iCloud then there'll be counter-measures deployed, etc...)

- the game keeps on changing. As cracks in the protocol/system are identified, attackers and Apple have to continually change the game. If you really want to understand it, you're best trying to understand live packet manipulation and reverse engineering/cracking of DRM systems

- I've looked at this and for me the easiest way to attack is via direct hardware if your device is locked. It requires no advance knowledge of the software/protocol and relies entirely on the way in which data is stored on the device itself (obviously, this only makes the problem slightly easier to deal with). It's similar to the way in which firmware reset mode works on embedded devices such as eBook readers, and to the way in which bypasses are achieved in physical security systems. The only troubling thing may be access. They're BGA! Realistically this could mean that this type of attack is nigh on impossible (I think it may be possible though. When I have dead hardware lying around I often play around with it. A single copper fibre and the right type of signal/voltage may be enough to create the type of data corruption that I require). Effectively, the type of attack that I envisage revolves around storage corruption. Since everything is stored via a combination of encrypted keys at multiple layers, my belief is that destroying/corrupting the storage and restoring iOS clean, bypassing Apple servers, is easier than engaging in a continual race against Apple (making the assumption that restoration of iOS can be completed independently of iCloud lock checking)

Toshiba THGBX2G7B2JLA01 16 GB NAND Flash


- clearly, I'm working on the premise that attacking hardware is easier than attacking software since it is more difficult to change. To change the pin-out structure on a single chip requires re-tooling on a mass scale for chips that may also be used in other devices making it un-economical for both Apple and flash chip manufacturers to engage in. Once a design is out there, we can just figure it out and it should work across that entire design specification/model though... Of course, this could be somewhat of a moot point because a lot of Apple devices aren't easily upgradeable, change layout on each iteration, etc...

- another type of attack revolves around changing identifying information on the device and then clearing iOS. That said, you don't know whether or not Apple may have some sort of unique/class based identification system which may block non-Apple identified systems from accessing their servers. Either way, it requires a second system to act as an intermediary

- an insider at Apple who removes the lock and gives you a 'clean sheet'

- that said, much of what I'm saying here is theoretical. I don't have access to an iPod/iPad at the moment so I don't know for certain. The best I've been able to manage are online teardowns

Cracking Open: Apple iPad Air 2

- I just don't get why some groups simply don't release downloadable software which can be used to bypass the lock. A local/loopback proxy would likely have minimal system impact if the protocol break is as simple as it seems it could possibly be. My guess is that at least some hacker/cracker groups are using the (supposedly) free and altruistic bypasses as a means of gaining access to people's private details. All the more reason to avoid these third party hacks and buy equipment 'clean'...

- if you're used to researching DRM and disassembly/reverse engineering of files some of the above may seem foreign to you. Believe me, it's not that much of a leap up. Conceptually, many of the same techniques and theories are employed. You just have to get used to a new setting. That's all...

Identify your iPod model

Diagnostic mode for Apple iPod devices

Sources/options for replacement storage on iPod Classics

Source for replacement of Apple parts locally

Enabling alternative filesystem support on Mac OS X Yosemite

Booting Live Linux discs on an Apple Macbook

Mac OS X Live discs are an interesting option for those who are interested in testing/trying Mac OS X without wanting to purchase hardware beforehand.

How to install latest Mac OS X on iMac without original DVD

Create a bootable installer for OS X Mavericks or Yosemite

The Crossroad

ISBN: 9781743519103


Written by a Victoria Cross recipient, this is the true story of a messed up kid who made something of himself. Mark's dad died of cancer when he was young, and his mum was murdered. Mark then went through a period of being a burden on society, breaking windows for fun and generally being a pain in the butt. But then one day he decided to join the army...

This book is very well written, and super readable. I enjoyed it a lot, and I think it's an important lesson about how troubled teenagers are sometimes that way because of pain in their past, and can often still end up being valued contributors to society. I have been recommending this book to pretty much everyone I meet since I started reading it.

Tags for this post: book mark_donaldson combat sas army afghanistan

Related posts: Goat demoted
Comment Recommend a book

Terrible pong

The kids at coding club have decided that we should write an implementation of pong in python. I took a look at some options, and decided tkinter was the way to go. Thus, I present a pong game broken up into stages which are hopefully understandable to an 11 year old: Operation Terrible Pong.

Tags for this post: coding_club python game tkinter

Related posts: More coding club; Implementing SCP with paramiko; Coding club day one: a simple number guessing game in python; Packet capture in python; mbot: new hotness in Google Talk bots; Calculating a SSH host key with paramiko


August 06, 2015

Translation services from Google and Yandex

You're perhaps aware of Google Translation services, and if you know more than one human language you can contribute and help to improve this service via Google Translate Community (BETA).

You also might be interested to know that Yandex, a Russian google, has their Yandex Translation Service running, which in many cases gives better translation for Russian - English pair of languages.

August 05, 2015

Snow in the Forest

“How’d you solve the icing problem?”

“Icing problem?”

“Might want to look into it.”

Iron Man (2008)

On a particularly cold Saturday morning a couple of years ago, my mobile phone couldn’t get any signal for a few hours. But I didn’t really care, because I had breakfast to eat, animals to feed, and nobody I urgently needed to talk to at the time. Also it came good again shortly after midday.

The following week the same thing happened, but for rather longer, i.e. well into the evening. This was enough to prompt me to use my landline to call Optus (our mobile phone provider) and report a fault. The usual dance ensued:

“Have you tried turning it off and on again?”


“Have you tried a different handset?”


“A different SIM?”


“Holding the phone in your other hand?”


“Sacrificing a lamb to the god Mercury?”


I might be misremembering the details of the conversation, but you get the idea. Long story short, I got a fault lodged.

Later I received a call – on my mobile – asking if my mobile was working again. “Indeed it is, and you wouldn’t have been able to make this call if it wasn’t”, I replied. Then I asked what the problem had been. “Let me check”, said the support person. “Uhm… It says here there was… 100mm of ice on the local tower.”

Flash forwards to a couple of days ago, when snow fell down to sea level for the first time since 2005, and my mobile phone was dead again. I can only assume they haven’t solved the icing problem, and that maybe the local NBN fixed wireless tower suffers from the same affliction, as that was dead too for something like 24 hours.

It was very pretty though.

Snow in the Forest

August 04, 2015

Searching for open bugs in a launchpad project

The launchpad API docs are OMG terrible, and it took me way too long to work out how to do this, so I thought I'd document it for later. Here's how you list all the open bugs in a launchpad project using the API:

    import argparse
    import os
    from launchpadlib import launchpad
    LP_INSTANCE = 'production'
    CACHE_DIR = os.path.expanduser('~/.launchpadlib/cache/')
    def main(username, project):
        lp = launchpad.Launchpad.login_with(username, LP_INSTANCE, CACHE_DIR)
        for bug in lp.projects[project].searchTasks(status=["New",
                                                            "In Progress"]):
            print bug
    if __name__ == '__main__':
        parser = argparse.ArgumentParser(description='Fetch bugs from launchpad')
        parser.add_argument('username', help='your launchpad username')
        parser.add_argument('project', help='the launchpad project to search')
        args = parser.parse_args()
        main(args.username, args.project)

Tags for this post: launchpad api

Related posts: Taking over a launch pad project; Juno nova mid-cycle meetup summary: the next generation Nova API


The Bitcoin Blocksize: A Summary

There’s a significant debate going on at the moment in the Bitcoin world; there’s a great deal of information and misinformation, and it’s hard to find a cogent summary in one place.  This post is my attempt, though I already know that it will cause me even more trouble than that time I foolishly entitled a post “If you didn’t run code written by assholes, your machine wouldn’t boot”.

The Technical Background: 1MB Block Limit

The bitcoin protocol is powered by miners, who gather transactions into blocks, producing a block every 10 minutes (but it varies a lot).  They get a 25 bitcoin subsidy for this, plus whatever fees are paid by those transactions.  This subsidy halves every 4 years: in about 12 months it will drop to 12.5.
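The halving schedule is easy to sketch. This is a simplified float model of my own (the real protocol counts in integer satoshis and halves every 210,000 blocks, which works out to roughly four years):

```python
def block_subsidy(height, initial=50.0, halving_interval=210000):
    """Block subsidy in BTC: halves every 210,000 blocks (~4 years)."""
    return initial / (2 ** (height // halving_interval))

print(block_subsidy(0))        # 50.0 -- at launch in 2009
print(block_subsidy(210000))   # 25.0 -- the current subsidy
print(block_subsidy(420000))   # 12.5 -- after the next halving
```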

Full nodes on the network check transactions and blocks, and relay them to others.  There are also lightweight nodes which simply listen for transactions which affect them, and trust that blocks from miners are generally OK.

A normal transaction is 250 bytes, and there’s a hard-coded 1 megabyte limit on the block size.  This limit was introduced years ago as a quick way of avoiding a miner flooding the young network, though the original code could only produce 200kb blocks, and the default reference code still defaults to a 750kb limit.
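Those two numbers pin down bitcoin's raw throughput. A quick back-of-envelope, using the average 10-minute block interval from above:

```python
BLOCK_LIMIT = 1000000    # bytes: the hard-coded block size limit
TX_SIZE = 250            # bytes: a normal transaction
BLOCK_INTERVAL = 600     # seconds: one block every ~10 minutes on average

txs_per_block = BLOCK_LIMIT // TX_SIZE    # 4000 transactions per block
tps = txs_per_block / BLOCK_INTERVAL      # ~6.7 transactions per second
print(txs_per_block, round(tps, 1))
```

That global ceiling of a handful of transactions per second is why runs of full blocks matter.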

In the last few months there have been increasing runs of full blocks, causing backlogs for a few hours.  More recently, someone deliberately flooded the network with normal-fee transactions for several days; any transactions paying less fees than those had to wait for hours to be processed.

There are 5 people who have commit access to the bitcoin reference implementation (aka. “bitcoin-core”), and they vary significantly in their concerns on the issue.

The Bitcoin Users’ Perspective

From the bitcoin users’ perspective, blocks should be infinite, and fees zero or minimal.  This is the basic position of respected (but non-bitcoin-core) developer Mike Hearn, and has support from bitcoin-core ex-lead Gavin Andresen.  They work on the wallet and end-user side of bitcoin, and they see the issue as the most urgent.  In an excellent post arguing why growth is so important, Mike raises the following points, which I’ve paraphrased:

  1. Currencies have network effects. A currency that has few users is simply not competitive with currencies that have many.
  2. A decentralised currency that the vast majority can’t use doesn’t change the amount of centralisation in the world. Most people will still end up using banks, with all the normal problems.
  3. Growth is a part of the social contract. It always has been.
  4. Businesses will only continue to invest in bitcoin and build infrastructure if they are assured that the market will grow significantly.
  5. Bitcoin needs users, lots of them, for its political survival. There are many people out there who would like to see digital cash disappear, or be regulated out of existence.

At this point, it’s worth mentioning another bitcoin-core developer: Jeff Garzik.  He believes that the bitcoin userbase has been promised that transactions will continue to be almost free.  When a request to change the default mining limit from 750kb to 1M was closed by the bitcoin lead developer Wladimir van der Laan as unimportant, Jeff saw this as a symbolic moment:

What Happens If We Don’t Increase Soon?

Mike Hearn has a fairly apocalyptic view of what would happen if blocks fill.  That was certainly looking likely when the post was written, but due to episodes where the blocks were full for days, wallet designers are (finally) starting to estimate fees for timely processing (miners process larger fee transactions first).  Some wallets and services didn’t even have a way to change the setting, leaving users stranded during high-volume events.

It now seems that the bursts of full blocks will arrive with increasing frequency; proposals to allow users to increase fees after the fact are fairly mature now, which (if all goes well) could make for a fairly smooth transition from the current “fees are tiny and optional” mode of operation to a “there will be a small fee” one.

But even if this rosy scenario is true, it avoids the bigger question of how high fees can become before bitcoin becomes useless.  1c?  5c?  20c? $1?

So What Are The Problems With Increasing The Blocksize?

In a word, the problem is miners.  As mining has transitioned from a geek pastime to a semi-hobbyist industry, then to large operations with cheap access to power, it has become more concentrated.

The only difference between bitcoin and previous cryptocurrencies is that instead of a centralized “broker” to ensure honesty, bitcoin uses an open competition of miners. Given bitcoin’s endurance, it’s fair to count this a vital property of bitcoin.  Mining centralization is the long-term concern of another bitcoin-core developer (and my coworker at Blockstream), Gregory Maxwell.

Control over half the block-producing power and you control who can use bitcoin and cheat anyone not using a full node themselves.  Control over 2/3, and you can force a rule change on the rest of the network by stalling it until enough people give in.  Central control is also a single point to shut the network down; that lets others apply legal or extra-legal pressure to restrict the network.

What Drives Centralization?

Bitcoin mining is more efficient at scale. That was to be expected[7]. However, the concentration has come much faster than expected because of the invention of mining pools.  These pools tell miners what to mine, in return for a small (or in some cases, zero) share of profits.  They save setup costs, they’re easy to use, and miners get more regular payouts.  This has caused bitcoin to reel from one centralization crisis to another over the last few years; the number of full nodes has declined precipitously by some measures[5] and continues to fall[6].

Consider the plight of a miner whose network is further away from most other miners.  They find out about new blocks later, and their blocks get built on later.  Both these effects cause them to create blocks which the network ignores, called orphans.  Some orphans are the inevitable consequence of miners racing for the same prize, but the orphan problem is not symmetrical.  Being well connected to the other miners helps, but there’s a second effect: if you discover the previous block, you’ve a head-start on the next one.  This means a pool which has 20% of the hashing power doesn’t have to worry about delays at all 20% of the time.

If the orphan rate is very low (say, 0.1%), the effect can be ignored.  But as it climbs, the pressure to join a pool (the largest pool) becomes economically irresistible, until only one pool remains.
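The asymmetry can be captured in a toy model (my own simplification, not from any of the cited sources): propagation-delay orphan risk only applies to the fraction of blocks where someone *else* found the previous block, so a pool's effective orphan rate scales down with its hashing share:

```python
def effective_orphan_rate(pool_share, base_rate):
    """Toy model: a pool has a head start on the next block whenever it
    found the previous one (probability = pool_share), so only the
    remaining (1 - pool_share) of blocks face the base orphan risk."""
    return base_rate * (1 - pool_share)

# With a 1% base orphan rate:
tiny = effective_orphan_rate(0.001, 0.01)   # ~0.00999 for a tiny miner
big = effective_orphan_rate(0.20, 0.01)     # 0.008 for a 20% pool
```

Even in this crude model the big pool loses 20% fewer blocks to orphans, and the gap widens as the base rate climbs, which is the economic pressure towards a single pool described above.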

Larger Blocks Are Driving Up Orphan Rates

Large blocks take longer to propagate, increasing the rate of orphans.  This has been happening as blocks increase.  Blocks with no transactions at all are smallest, and so propagate fastest: they still get a 25 bitcoin subsidy, though they don’t help bitcoin users much.

Many people assumed that miners wouldn’t overly centralize, lest they cause a clear decentralization failure and drive the bitcoin price into the ground.  That assumption has proven weak in the face of climbing orphan rates.

And miners have been behaving very badly.  Mining pools orchestrate attacks on each other with surprising regularity; DDOS and block withholding attacks are both well documented[1][2].  A large mining pool used their power to double spend and steal thousands of bitcoin from a gambling service[3].  When it was noticed, they blamed a rogue employee.  No money was returned, nor any legal action taken.  It was hoped that miners would leave for another pool as they approached majority share, but that didn’t happen.

If large blocks can be used as a weapon by larger miners against small ones[8], it’s expected that they will be.

More recently (and quite by accident) it was discovered that over half the mining power aren’t verifying transactions in blocks they build upon[4].  They did this in order to reduce orphans, and one large pool is still doing so.  This is a problem because lightweight bitcoin clients work by assuming anything in the longest chain of blocks is good; this was how the original bitcoin paper anticipated that most users would interact with the system.

The Third Side Of The Debate: Long Term Network Funding

Before I summarize, it’s worth mentioning the debate beyond the current debate: long term network support.  The minting of new coins decreases with time; the plan of record (as suggested in the original paper) is that total transaction fees will rise to replace the current mining subsidy.  The schedule of this is unknown and generally this transition has not happened: free transactions still work.

The block subsidy as I write this is about $7000.  If nothing else changes, miners would want $3500 in fees in 12 months when the block subsidy halves, or about $2 per transaction.  That won’t happen; miners will simply lose half their income.  (Perhaps eventually they form a cartel to enforce a minimum fee, causing another centralization crisis? I don’t know.)
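As a back-of-envelope check on that $2 figure (the per-block transaction count here is my assumption; it corresponds to blocks roughly half full of typical 250-byte transactions):

```python
subsidy_usd = 7000.0
fees_needed = subsidy_usd / 2      # income lost when the subsidy halves: $3500/block
txs_per_block = 1750               # assumed: ~440 kB of 250-byte transactions
fee_per_tx = fees_needed / txs_per_block
print(fee_per_tx)                  # 2.0 dollars per transaction
```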

It’s natural for users to try to defer the transition as long as possible, and the practice in bitcoin-core has been to aggressively reduce the default fees as the bitcoin price rises.  Core developers Gregory Maxwell and Pieter Wuille feel that signal was a mistake; that fees will have to rise eventually and users should not be lulled into thinking otherwise.

Mike Hearn in particular has been holding out the promise that it may not be necessary.  On this he is not widely supported: the hope is that some users would offer to pay more so other users can continue to pay less.

It’s worth noting that some bitcoin businesses rely on the current very low fees and don’t want to change; I suspect this adds bitterness and vitriol to many online debates.


The bitcoin-core developers who deal with users most feel that bitcoin needs to expand quickly or die, that letting fees emerge now will kill expansion, and that the infrastructure will improve over time if it has to.

Other bitcoin-core developers feel that bitcoin’s infrastructure is dangerously creaking, that fees need to emerge anyway, and that if there is a real emergency a blocksize change could be rolled out within a few weeks.

At least until this is resolved, don’t count on future bitcoin fees being insignificant, nor promise others that bitcoin has “free transactions”.

[1] “Bitcoin Mining Pools Targeted in Wave of DDOS Attacks” Coinbase 2015

[2] “Block Withholding Attacks – Recent Research” N T Courtois 2014

[3] “GHash.IO and double-spending against BetCoin Dice” mmtech et. al 2013

[4] “Questions about the July 4th BIP66 fork”

[5] “350,000 full nodes to 6,000 in two years…” P Todd 2015

[6] “Reachable nodes during the last 365 days.”

[7] “Re: Scalability and transaction rate” Satoshi 2010

[8] “[Bitcoin-development] Mining centralization pressure from non-uniform propagation speed” Pieter Wuille 2015

August 03, 2015

FreeDV QSO Party Weekend – September 12/13th

My good friends at the Amateur Radio Experimenters Group (AREG), an Adelaide based Ham radio club, have organised a special FreeDV QSO Party Weekend on Sat/Sun September 12/13th. This is a great chance to try out FreeDV, work VK5 using open source HF digital voice, and even talk to me!

All the details including paths, frequencies, and times over on the AREG site.

How not to report abuse

This is the entire complaint that was received:

We are an IT security company from Spain.

We have detected sensitive information belonging to Banesco Banco Universal, C.A. clientes.

As authorized representative in the resolution of IT security incidents affecting Banesco Banco Universal, C.A., we demand the deletion of the content related to Banesco Banco Universal, C.A, clients. This content violates the law about electronic crime in Venezuela (see “Ley Especial sobre Delitos Informáticos de Venezuela”, Chapter III, Articles 20 y 23).

Note the complete lack of any information regarding what URLs, or even which site(s), they consider to be problematic. Nope, just “delete all our stuff!” and a wave of the “against the law somewhere!” stick.

August 02, 2015

Craige McWhirter: PyConAu - Day Two

Keynote: Consequences of an Insightful Algorithm

by Carina C. Zona

Carina gave a fantastic talk of the very real consequences of our interactions with data mining and the algorithms used to target us. A must watch when the video comes out.

Update: Video is now available here. Slides are over there.

Adventures in pip land

by Robert Collins

A humorous and constructive talk on the pitfalls of pip. Strongly recommended that people no longer use it.

Arrested Development - surviving the awkward adolescence of a microservices-based application

By Scott Triglia

  • Logging is a super power.
  • Be explicit
  • Measure everything
  • Scale via automation

Rapid prototyping with teenagers

by Katie Bell

  • Run coding camps and competitions for kids with no prior experience required.
  • Courses and competitions designed to bootstrap kids into programming basics.

Test-Driven Repair

by Christopher Neugebauer

  • Bad tests are better than no tests.
  • Write tests early.
  • Results in better interfaces.
  • Clearer separation.
  • Write tests first.
  • An interface is anything that gives OK isolation to the code you want to test.
  • Good interface tests will test every branch.
  • Measure all the things.
  • Measure setup time
  • Measure execution time
  • Run tests thoroughly nightly.
  • Make it easy to translate a bug report into a test case.
  • Retain your test cases and run them regularly.

Playing to lose: making sensible security decisions by assuming the worst

by Tom Eastman

  • Minimise the attack surface.
  • Reduce the privileges of the account used by the app.
  • Eliminate SQL injections.
  • White list input validation.
  • "Escape" all data appropriately.
  • Use Content Security Policy to protect against cross site scripting.
  • Don't have all your eggs in one basket.
  • You can always have better logging. Off site logging is useful.
  • Take a look at the ELK stack.

Twitter posts: 2015-07-27 to 2015-08-02

The End of All Things

ISBN: 1447290496


I don't read as much as I should these days, but one author I always make time for is John Scalzi. This is the next book in the Old Man's War universe, and it continues from where The Human Division ended on a cliff hanger. So, let's get that out of the way -- ending a book on a cliff hanger is a dick move and John is a bad bad man. Then again I really enjoyed The Human Division, so I will probably forgive him.

I don't think this book is as good as The Human Division, but it's a solid book. I enjoyed reading it and it wasn't a chore like some books this far into a universe can be (I'm looking at you, Asimov sharecropped books). The conclusion to the story arc is sensible, and not something I would have predicted, so overall I'm going to put this book on my mental list of the very many non-terrible Scalzi books.

Tags for this post: book john_scalzi combat aliens engineered_human old_mans_war age colonization human_backup cranial_computer personal_ai

Related posts: The Last Colony ; The Human Division; Old Man's War ; The Ghost Brigades ; Old Man's War (2); The Ghost Brigades (2)
Comment Recommend a book

Korora 22 (Selina) available

We’ve finally had time to finalise Korora 22 and images are now available. I strongly recommend downloading with BitTorrent if you can.


We are not shipping Adobe Flash by default from 22 onwards, due to consistent security flaws. We still include the repository however, so users can install via the package manager or command line if they really want it:

sudo dnf install flash-plugin

Alternatively, install Google Chrome which includes the latest version of Flash.

Also, KDE 4 is not available for this release, so if you are not ready to move to KDE 5, then please stick to Korora 21.

August 01, 2015

Setting the wifi regulatory domain on Linux and OpenWRT

The list of available wifi channels is slightly different from country to country. To ensure access to the right channels and transmit power settings, one needs to set the right regulatory domain in the wifi stack.


For most Linux-based computers, you can look and change the current regulatory domain using these commands:

iw reg get
iw reg set CA

where CA is the two-letter country code for where the device is located.

On Debian and Ubuntu, you can make this setting permanent by putting the country code in /etc/default/crda.

Finally, to see the list of channels that are available in the current config, use:

iwlist wlan0 frequency


On OpenWRT-based routers (including derivatives like Gargoyle), looking and setting the regulatory domain temporarily works the same way (i.e. the iw commands above).

In order to persist your changes though, you need to use the uci command:

uci set wireless.radio0.country=CA
uci set wireless.radio1.country=CA
uci commit wireless

where wireless.radio0 and wireless.radio1 are the wireless devices specific to your router. You can look them up using:

uci show wireless

To test that it worked, simply reboot the router and then look at the selected regulatory domain:

iw reg get

Scanning the local wifi environment

Once your devices are set to the right country, you should scan the local environment to pick the least congested wifi channel. You can use the Kismet spectools (free software) if you have the hardware, otherwise WifiAnalyzer (proprietary) is a good choice on Android (remember to manually set the available channels in the settings).

Craige McWhirter: PyConAu - Day One

Keynote: Designed for education: a Python solution

by Carrie Anne Philbin

Excellent keynote on using Python in education. Many interesting insights into what's being done well, what needs improving and how to contribute.

Slow Down, Compose Yourself - How Composition Can Help You Write Modular, Testable Code

by Amber "Hawkie" Brown

  • Modularity and testability traits are desirable.
  • Tests are the only way to ensure your code works
  • When designing a system, plan for these traits
  • Fake components can be used to return hard to replicate conditions ie: a database returning "no more space"
  • Ensure you have correctness at every level.
  • Suggests using practices from better (functional programming) languages like Haskell with Python.

Tales from Managing an Open Source Python Game

by Josh Bartlett

  • Trosnoth is a 2D side scrolling game.
  • Team strategy game
  • Involve people at every step of the process.
  • Have a core group of interested people and allow them to contribute and become involved to increase commitment.
  • Build community.
  • Get people excited
  • Be prepared to be wrong.
  • Encourage people to be involved.
  • Start with "good enough".
  • Re-writing everything blocked contributions for the period of the re-write.
  • Look for ideas anywhere.
  • Mistakes are a valuable learning opportunity.
  • Your people and your community are what makes the project work.

Ansible, Simplicity, and the Zen of Python

by Todd Owen

  • Ansible is a push model of configuration management and automation.
  • Pull model of Chef and Puppet is unnecessarily complex.
  • Ansible's simplicity is a great asset
  • Graceful and clear playbooks.
  • Simple is better than complex.
  • Complex is better than complicated.
  • Flat is better than nested.
  • Ansible modules are organised without hierarchy.
  • Ansible only uses name spaces for roles.
  • Explicit is better than implicit. Implicit can become invisible to users.
  • Ansible is very consistent.
  • Simple by design.
  • Easy to learn.

Docker + Python

by Tim Butler

  • Prefers to think of containers as application containers.
  • Docker is an abstraction layer focussed on the application it delivers.
  • Easy, repeatable deployments.
  • Not a fad; containers were introduced 15 years ago.
  • Google launch over 2 billion containers per week.
  • Docker is fast, lightweight, isolated and easy.
  • Rapidly changing and improving.
  • Quickly changing best practices.
  • Containers are immutable.
  • No mature multi-host deployment.
  • Security issues addressed via stopping containers and starting a patched one (milliseconds)
  • Simple and clear subcommands.
  • Docker compose for orchestration.
  • Docker Machine creates the Docker host for you.
  • Future
  • Volume plugin
  • New networking system so it actually works at scale.

Testing ain't hard, even for SysAdmins

by Geoff Crompton

  • Salt stack provides:
  • configuration management
  • loosely coupled infrastructure coordination.
  • Orchestration
    • remote execution
    • discovery
  • unittest 101
  • create a test directory
  • write a test
  • gave an example of a "hello world" of tests.
  • uses unittest.main.
  • Nose extends unittest to better facilitate multiple unit tests.
  • Mock allows the replacement of modules for testing.
  • Keep tests small.

Python on the move: The state of mobile Python

by Russell Keith-Magee

  • iOS
  • Claims Python on mobile is very viable.
  • Most of the failing bugs are not services you want on mobile anyway.
  • libffi problems need to be resolved.
  • Android
  • Compiler issues not quite resolved.
  • libffi problems need to be resolved.
  • Why not Jython? It does not compile on Android, nor will many of its dependencies.
  • Jython is probably not the right solution anyway.
  • Thinks you may be able to compile Python directly to Java...uses Byterun as an example.
  • Kivy works now and worth using.
  • His project Toga is coming along nicely.
  • Admits he may be on a fool's errand but thinks this is achievable.

Journey to becoming a Developer: an Administrator's story

by Victor Palma

  • The sysadmin mindset of get things fixed as fast as possible needs to be shed.
  • Take the time to step back and consider problems.
  • Keep things simple, explicit and consistent.
  • Do comments in reStructured Text.
  • Unless you're testing, you're not really coding.
  • Tools include tox and nose.
  • Don't be afraid to experiment.
  • Find something you are passionate about and manipulate that data.

Guarding the gate with Zuul

by Joshua Hesketh

  • Gerrit does code review.
  • Zuul tests things right before they merge and will only merge them if they pass.
  • Only Zuul can commit to master.
  • Zuul uses gearman to manage Jenkins jobs.
  • Uses NNFI - nearest non-failing item
  • Use jenkins-gearman instead of jenkins-gerrit to reproduce the work flow.

July 31, 2015

Craige McWhirter: OpenStack Miniconf at PyConAu

OpenStack: A Vision for the Future

by Monty Taylor

  • Create truth in realistic acting
  • Know what problem you're trying to solve.
  • Develop techniques to solve the problem.
  • Don't confuse the techniques with the result.
  • Willingness to change with new information.

What Monty Wants

  • Provide computers and networks that work.
  • Should not chase 12-factor apps.
  • Kubernetes / CoreOS are already providing these frameworks
  • OpenStack should provide a place for these frameworks to work.
  • By default give a directly routable IP.

The Future of Identity (Keystone) in OpenStack

by Morgan Fainberg

  • Moving to Fernet Tokens as the default, everywhere.
  • Lightweight
  • No database requirement
  • Limited token size
  • Will support all the features of existing token types.
  • Problems with UUID or PKI tokens:
  • SQL back end
  • PKI tokens are too large.
  • Moving from bespoke WSGI to Flask
  • Moving to a KeystoneAuth Library to remove the need for the client to be everywhere.
  • Keystone V3 API...everywhere. Focus on removing technical debt.
  • V2 API should die.
  • Deprecating the Keystone client in favour of the openstack client.
  • Paste.ini functionality being moved to core and controlled via policy.json

Orchestration and CI/CD with Ansible and OpenStack

by Simone Soldateschi

  • Gave a great overview of OpenStack / CoreOS / Containers
  • All configuration management sucks. Ansible sucks less.
  • CI/CD pipelines are repeatable.

Practical Federation

by Jamie Lennox

  • SAML is the initially supported WebSSO.
  • Ipsilon has SAML frontend, supports SSSD / PAM on the backend.
  • Requires Keystone V3 API everywhere.
  • Jamie successfully did a live demo that demonstrated the work flow.


by Angus Lees

  • Uses Linux kernel separation to restrict available privileges.
  • Gave a brief history of rootwrap.
  • Fast and safe.
  • Still in beta

OpenStack Works, so now what?

by Monty Taylor

  • Shade's existence is a bug.
  • Take OpenStack back to basics
  • Keeps things simple.

July 30, 2015

Podcasting with WPTavern

Earlier in the week, I joined Jeff from WPTavern to chat about WordPress Security, the recent WordPress 4.2.3 release, and my favourite food.

Check out the full article here.

Customising a systemd unit file

Once in a while you want to start a daemon with differing parameters from the norm.

For example, the default parameters to Fedora's packaging of ladvd give too much access to unauthenticated remote network units when it allows those units to set the port description on your interfaces[1]. So let's use that as our example.

With systemd, unit files in /etc/systemd/system/ shadow those in /usr/lib/systemd/system/. So we could copy the ladvd.service unit file from /usr/lib/... to /etc/..., but we're old, experienced sysadmins and we know that this will lead to trouble in the long run: /usr/lib/systemd/system/ladvd.service will be updated to support some new systemd feature and we'll miss that update in our copy of the file.

What we want is an "include" command which will pull in the text of the distributor's configuration file. Then we can set about changing it. Systemd has a ".include" command. Unfortunately its parser also checks that some commands occur exactly once, so we can't modify those commands as including the file consumes that one definition.

In response, systemd allows a variable to be cleared; when the variable is set again it is counted as being set once.

Thus our modification of ladvd.service occurs by creating a new file /etc/systemd/system/ladvd.service containing:

.include /usr/lib/systemd/system/ladvd.service
# was ExecStart=/usr/sbin/ladvd -f -a -z
# but -z allows string to be passed to kernel by unauthed external user
# clear the inherited value first, then set it exactly once
ExecStart=
ExecStart=/usr/sbin/ladvd -f -a


[1] At the very least, a security issue equal to the "rude words in SSID lists" problem. At its worst, an overflow attack vector.

July 29, 2015

Low Order LPC and Bandpass Filtering

I’ve been working on the Linear Predictive Coding (LPC) modeling used in the Codec 2 700 bit/s mode to see if I can improve the speech quality. Given this mode was developed in just a few days I felt it was time to revisit it for some tuning.

LPC fits a filter to the speech spectrum. We update the LPC model every 40ms for Codec 2 at 700 bit/s (10 or 20ms for the higher rate modes).

Speech Codecs typically use a 10th order LPC model. This means the filter has 10 coefficients, and every 40ms we have to send them to the decoder over the channel. For the higher bit rate modes I use about 37 bits/frame for this information, which is the majority of the bit rate.

However I discovered I can get away with a 6th order model, if the input speech is filtered the right way. This has the potential to significantly reduce the bit rate.

The Ear

Our ear perceives speech based on the frequency of peaks in the speech spectrum. When the peaks in the speech spectrum are indistinct, we have trouble understanding what is being said. The speech starts to sound muddy. With analog radio like SSB (or in a crowded room), the troughs between the peaks fill with noise as the SNR degrades, and eventually we can’t understand what’s being said.

The LPC model is pretty good at representing peaks in the speech spectrum. With a 10th order LPC model (p=10) you get 10 poles. Each pair of poles can represent one peak, so with p=10 you get up to 5 independent peaks, with p=6, just 3.

I discovered that LPC has some problems if the speech spectrum has big differences between the low and high frequency energy. To find the LPC coefficients, we use an algorithm that minimises the mean square error. It tends to “throw poles” at the highest energy part of signal (frequently near DC), while ignoring the still important, lower energy peaks at higher frequencies above 1000Hz. So there is a mismatch in the way LPC analysis works and how our ears perceive speech.
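The fitting step described above (minimising the mean square error) is conventionally implemented with the autocorrelation method plus the Levinson-Durbin recursion. Here is a minimal sketch of that standard textbook algorithm (not Codec 2's actual implementation; the AR(1) demo signal is my own):

```python
import numpy as np

def lpc(x, order):
    """Fit an all-pole (LPC) model: autocorrelation method + Levinson-Durbin."""
    n = len(x)
    # autocorrelation lags 0..order
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i]
        for j in range(1, i):
            acc += a[j] * r[i - j]
        k = -acc / err              # reflection coefficient
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)        # residual (prediction error) power
    return a, err

# Demo: synthesise an AR(1) signal x[n] = 0.9*x[n-1] + e[n];
# an order-2 fit should recover coefficients close to [1, -0.9, 0].
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
x = np.zeros_like(e)
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + e[i]
a, err = lpc(x, 2)
```

Each conjugate pole pair of the resulting filter can sit on one spectral peak, which is why p=6 buys you only three independent peaks.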

For example I found that samples like hts1a and ve9qrp code quite well, but cq_ref and kristoff struggle. The former have just 12dB between the LF and HF parts of the speech spectrum, the latter 40dB. This may be due to microphones, input filtering, or analog shaping.

Another problem with using an unconventionally low LPC order like p=6 is that the model “runs out of poles”. Some speech signals may have 4 or 5 peaks, so the poor LPC model gets all confused and tries to reach a compromise that just sounds bad.

My Experiments

I messed around with a bunch of band pass filters that I applied to the speech samples before LPC modeling. These filters whip the speech signal into a shape that the LPC model can work with. I ran various samples (hts1a, hts2a, cq_ref, ve9qrp_10s, kristoff, mmt1, morig, forig, x200_ext, vk5qi) through them to come up with the best compromise for the 700 bits/mode.

Here is what p=6 LPC modeling sounds like with no band pass filter. Here is a sample of p=6 LPC modeling with a 300 to 2600Hz input band pass filter with very sharp edges.

Even though the latter sample is band limited, it is easier to understand as the LPC model is doing a better job of clearly representing those peaks.

Filter Implementation

After some experimentation with sox I settled on two different filter types: a sox “bandpass 1000 2000” worked on some, whereas on others with more low frequency content “bandpass 1500 2000” sounded better. Some helpful discussions with Glen VK1XX had suggested that a two band AGC was common in broadcast audio pre-processing, and might be useful here.

However through a process of frustrated experimentation (I was stuck on cq_ref for a day) I found that a very sharp skirted filter between 300 and 2600Hz did a pretty good job. Like p=6 LPC, a 2600Hz cut off is quite uncommon for speech coding, but SSB users will find it strangely familiar…….

Note that for the initial version of the 700 bit/s mode (currently in use in FreeDV 700) I have a different band pass filter design I chose more or less at random on the day that sounds like this with p=6 LPC. This filter now appears to be a bit too severe.


Here is a little chunk of speech from hts1a:

Below are the original (red) and p=6 LPC models (green line) without and with a sox “bandpass 1000 2000” filter applied. If the LPC model was perfect green and red would be superimposed. Open each image in a new browser tab then jump back and forth. See how the two peaks around 550 and 1100Hz are better defined with the bandpass filter? The error (purple) in the 500 – 1000 Hz region is much reduced, better defining the “twin peaks” for our long suffering ears.

Here are three spectrograms of me saying “D G R”. The dark lines represent the spectral peaks we use to perceive the speech. In the “no BPF” case you can see the spectral peaks between 2.2 and 2.3 seconds are all blurred together. That’s pretty much what it sounds like too – muddy and indistinct.

Note that compared to the original, the p=6 BPF spectrogram is missing the pitch fundamental (dark line near 0 Hz), and a high frequency peak at around 2.5kHz is indistinct. Turns out neither of these matter much for intelligibility – they just make the speech sound band limited.

Next Steps

OK, so over the last few weeks I’ve spent some time looking at the effects of microphone placement, and input filtering on p=6 LPC models. Now time to look at quantisation of the 700 mode parameters then try it again over the air and see if the speech quality is improved. To improve performance in the presence of bit errors I’d also like to get the trellis based decoding into a real world usable form. When the entire FreeDV 700 mode (codec, modem, error handling) is working OK compared to SSB, time to look at porting to the SM1000.

Command Line Magic

I’m working with the c2sim program, which lets me explore Codec 2 in a partially quantised or incomplete state. I pipe audio in and out between various sox stages.

Note these simulations sound a lot better than the final Codec 2 at 700 bit/s as nothing else is quantised/decimated, e.g. it’s all at a 10ms frame rate with original phases. It’s a convenient way to isolate the LPC modeling step with as much fidelity as we can.

If you want to sing along here are a couple of sample command lines. Feel free to ask me any questions:

sox -r 8000 -s -2 ../../raw/hts1a.raw -r 8000 -s -2 -t raw - bandpass 1000 2000 | ./c2sim - --lpc 6 --lpcpf -o - | play -t raw -r 8000 -s -2 -


sox -r 8000 -s -2 ../../raw/cq_ref.raw -r 8000 -s -2 -t raw - sinc 300 sinc -2600 | ./c2sim - --lpc 6 --lpcpf -o - | play -t raw -r 8000 -s -2 -

Reading Further

Open Source Low Rate Speech Codec Part 2

LPC Post Filter for Codec 2

Geocaching with a view

I went to find a couple of geocaches in a jet lag fuelled caching walk this morning. Quite scenic!


Interactive map for this route.

Tags for this post: blog pictures 20150729 photo sydney

Related posts: In Sydney!; In Sydney for the day; A further update on Robyn's health; RIP Robyn Boland; Weekend update; Bigger improvements


July 28, 2015

Chet and I went on an adventure to LA-96

So, I've been fascinated with American nuclear history for ages, and Chet and I got talking about what if any nuclear launch facilities there were in LA. We found LA-96 online and set off on an expedition to explore. An interesting site, its a pity there are no radars left there. Apparently SF-88 is the place to go for tours from vets and radars.


See more thumbnails

I also made a quick and dirty 360 degree video of the view of LA from the top of the nike control radar tower:

Interactive map for this route.

Tags for this post: blog pictures 20150727-nike_missile photo california

Related posts: First jog, and a walk to Los Altos; Did I mention it's hot here?; Summing up Santa Monica; Noisy neighbours at Central Park in Mountain View; So, how am I getting to the US?; Views from a lookout on Mulholland Drive, Bel Air


July 27, 2015

July 26, 2015

Geocaching with TheDevilDuck

In what amounts to possibly the longest LAX layover ever, I've been hanging out with Chet at his place in Altadena for a few days on the way home after the Nova mid-cycle meetup. We decided that being the dorks that we are we should do some geocaching. This is just some quick pics some unexpected bush land -- I never thought LA would be so close to nature, but this part certainly is.


Interactive map for this route.

Tags for this post: blog pictures 20150727 photo california bushwalk

Related posts: A walk in the San Mateo historic red woods; First jog, and a walk to Los Altos; Goodwin trig; Did I mention it's hot here?; Big Monks; Summing up Santa Monica


Twitter posts: 2015-07-20 to 2015-07-26

July 25, 2015

Microphone Placement and Speech Codecs

This week I have been looking at the effect different speech samples have on the performance of Codec 2. One factor is microphone placement. In radio (from broadcast to two way HF/VHF) we tend to use microphones closely placed to our lips. In telephony, hands free, or more distance microphone placement has become common.

People trying FreeDV over the air have obtained poor results from using built-in laptop microphones, but good results from USB headsets.

So why does microphone placement matter?

Today I put this question to the codec2-dev and digital voice mailing lists, and received many fine ideas. I also chatted to such luminaries as Matt VK5ZM and Mark VK5QI on the morning drive time 70cm net. I’ve also been having an ongoing discussion with Glen, VK1XX, on this and other Codec 2 source audio conundrums.

The Model

A microphone is a bit like a radio front end:

We assume linearity (the microphone signal isn’t clipping).

Imagine we take exactly the same mic and try it 2cm and then 50cm away from the speaker's lips. As we move it away the signal power drops and (given the same noise figure) SNR must decrease.
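As a rough illustration, treating the voice as an idealised point source in free field (my assumption, not a measurement from this post), moving the mic from 2cm to 50cm costs about 28dB of signal level:

```python
import math

def level_drop_db(d1, d2):
    """Free-field (inverse-square) level drop moving a mic from d1 to d2 metres."""
    return 20 * math.log10(d2 / d1)

print(round(level_drop_db(0.02, 0.50), 1))  # → 28.0
```

Real rooms and real heads are messier than this, but it shows why the SNR hole is so deep before any gain stage is even reached.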

Adding extra gain after the microphone doesn’t help the SNR, just like adding gain down the track in a radio receiver doesn’t help the SNR.

When we are very close to a microphone, the low frequencies tend to be boosted, this is known as the proximity effect. This is where the analogy to radio signals falls over. Oh well.

A microphone 50cm away picks up multi-path reflections from the room, laptop case, and other surfaces that start to become significant compared to the direct path. Summing a delayed version of the original signal will have an impact on the frequency response and add reverb – just like a HF or VHF radio signal. These effects may be really hard to remove.

Science in my Lounge Room 1 – Proximity Effect

I couldn’t resist – I wanted to demonstrate this model in the real world. So I dreamed up some tests using a couple of laptops, a loudspeaker, and a microphone.

To test the proximity effect I constructed a wave file with two sine waves at 100Hz and 1000Hz, and played it through the speaker. I then sampled using the microphone at different distances from a speaker. The proximity effect predicts the 100Hz tone should fall off faster than the 1000Hz tone with distance. I measured each tone power using Audacity (spectrum feature).
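A two-tone test file like the one described can be generated with a few lines of Python (sample rate, duration, and tone levels are my assumptions; the post doesn't state them):

```python
import numpy as np
import wave

fs, dur = 8000, 5.0                      # assumed: 8 kHz sample rate, 5 seconds
t = np.arange(int(fs * dur)) / fs
# Equal-amplitude 100 Hz and 1000 Hz tones, scaled to avoid clipping.
x = 0.4 * np.sin(2 * np.pi * 100 * t) + 0.4 * np.sin(2 * np.pi * 1000 * t)
pcm = (np.clip(x, -1, 1) * 32767).astype('<i2')

with wave.open('two_tones.wav', 'wb') as w:
    w.setnchannels(1)       # mono
    w.setsampwidth(2)       # 16-bit samples
    w.setframerate(fs)
    w.writeframes(pcm.tobytes())
```

Play the file through the speaker, record at each mic distance, and compare the two tone powers in Audacity's spectrum view as described.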

This spreadsheet shows the results over a couple of runs (levels in dB).

So in Test 1, we can see the 100Hz tone falls off 4dB faster than the 1000Hz tone. That seems a bit small, could be experimental error. So I tried again with the mic just inside the speaker aperture (hence -1cm) and the difference increased to 8dB, just as expected. Yayyy, it worked!

Apparently this effect can be as large as 16dB for some microphones. Apparently radio announcers use this effect to add gravitas to their voice, e.g. leaning closer to the mic when they want to add drama.

In my case it means unwanted extra low frequency energy messing with Codec 2 with some closely placed microphones.

Science in my Lounge Room 2 – Multipath

So how can I test the multipath component of my model above? Can I actually see the effects of reflections? I set up my loudspeaker on a coffee table and played a 300 to 3000 Hz swept sine wave through it. I sampled close up and with the mic 25cm away.

The idea is get a reflection off the coffee table. The direct and reflected wave will be half a wavelength out of phase at some frequency, which should cause a notch in the spectrum.

Let's take a look at the frequency response close up and at 25cm:

Hmm, they are both a bit of a mess. Apparently I don’t live in an anechoic chamber. Hmmm, that might be handy for kids parties. Anyway I can observe:

  1. The signal falls off a cliff at about 1000Hz. Well that will teach me to use a speaker with an active cross over for these sorts of tests. It’s part of a system that normally has two other little speakers plugged into the back.
  2. They both have a resonance around 500Hz.
  3. The close sample is about 18dB stronger. Given both have same noise level, that’s 18dB better SNR than the other sample. Any additional gain after the microphone will increase the noise as much as the signal, so the SNR won’t improve.

OK, let's look at the reflections:

A bit of Googling reveals reflections of acoustic waves from solid surfaces are in phase (not reversed 180 degrees). Also, the angle of incidence is the same as reflection. Just like light.

Now the microphone and speaker aperture is 16cm off the table, and the mic 25cm away. Couple of right angle triangles, bit of Pythagoras, and I make the reflected path length 40.6cm. This means a path difference of 40.6 – 25 = 15.6cm. So when wavelength/2 = 15.6cm, we should get a notch in the spectrum, as the two waves will cancel. Now v = f × wavelength, and v = 340m/s, so with wavelength = 2 × 0.156 = 0.312m we expect a notch at f = 340/0.312 = 1090Hz.
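The geometry can be checked numerically with the same figures (the post's values come out of the rounding at the end):

```python
import math

h, d = 0.16, 0.25            # height above the table and direct mic distance, metres
# The reflection point is midway along the table, so each leg of the
# reflected path is the hypotenuse of a (d/2, h) right triangle.
reflected = 2 * math.hypot(d / 2, h)
diff = reflected - d          # path difference
v = 340.0                     # speed of sound, m/s
f_notch = v / (2 * diff)      # cancellation when diff = wavelength/2
print(round(reflected, 3), round(diff, 3), round(f_notch))
```

This reproduces the ~40.6cm reflected path, ~15.6cm difference, and a notch near 1090Hz.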

Looking at a zoomed version of the 25cm spectrum:

I can see several notches: 460Hz, 1050Hz, 1120Hz, and 1300Hz. I’d like to think the 1050Hz notch is the one predicted above.

Can we explain the other notches? I looked around the room to see what else could be reflecting. The walls and ceiling are a bit far away (which means low freq notches). Hmm, what about the floor? It’s big, and it’s flat. I measured the path length directly under the table as 1.3m. This table summarises the possible notch frequencies:

Note that notches will occur at any frequency where the path difference is half a wavelength, so wavelength/2, 3(wavelength)/2, 5(wavelength)/2…..hence we get a comb effect along the frequency axis.
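For the floor bounce (1.3m path against the 25cm direct path, so a difference of roughly 1.05m) the comb of predicted notch frequencies can be listed the same way:

```python
v, d = 340.0, 1.05   # speed of sound (m/s) and floor-bounce path difference (m)
# In-phase reflections cancel at odd multiples of half a wavelength.
notches = [(2 * k + 1) * v / (2 * d) for k in range(5)]
print([round(f) for f in notches])  # → [162, 486, 810, 1133, 1457]
```

The 162Hz entry falls below the swept range, and the rest line up with the notches discussed next.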

OK I can see the predicted notch at 486Hz, and 1133Hz, which means the 1050 Hz is probably the one off the table. I can’t explain the 1300Hz notch, and no sign of the predicted notch at 810Hz. With a little imagination we can see a notch around 1460Hz. Hey, that’s not bad at all for a first go!

If I was super keen I’d try a few variations like the height above the table and see if the 1050Hz notch moves. But it’s Friday, and nearly time to drink red wine and eat pizza with my friends. So that’s enough lounge room acoustics for now.

How to break a low bit rate speech codec

Low bit rate speech codecs make certain assumptions about the speech signal they compress. For example the time varying filter used to transmit the speech spectrum assumes the spectrum varies slowly in frequency, and doesn’t have any notches. In fact, as this filter is “all pole” (IIR), it can only model resonances (peaks) well, not zeros (notches). Codecs like mine tend to fall apart (the decoded speech sounds bad) when the input speech violates these assumptions.

This helps explain why clean speech from a nicely placed microphone is good for low bit rate speech codecs.

Now Skype and (mobile) phones do work quite well in “hands free” mode, with rather distant microphone placement. I often use Skype with my internal laptop microphone. Why is this OK?

Well the codecs used have a much higher bit rate, e.g. 10,000 bit/s rather than 1,000 bits/s. This gives them the luxury to employ codecs that can, to some extent, code arbitrary waveforms as well as speech. These employ algorithms like CELP that use a hybrid of model based (like Codec 2) and waveform based (like PCM). So they faithfully follow the crappy mic signal, and don’t fall over completely.


In Sep 2014 I had some interesting discussions around the effect of microphones, small speakers, and speech samples with Mike, OH2FCZ, who is an audio professional. Thanks Mike!

July 24, 2015

Linux Security Summit 2015 Update: Free Registration

In previous years, attending the Linux Security Summit (LSS) has required full registration as a LinuxCon attendee.  This year, LSS has been upgraded to a hosted event.  I didn’t realize that this meant that LSS registration was available entirely standalone.  To quote an email thread:

If you are only planning on attending the The Linux Security Summit, there is no need to register for LinuxCon North America. That being said you will not have access to any of the booths, keynotes, breakout sessions, or breaks that come with the LinuxCon North America registration.  You will only have access to The Linux Security Summit.

Thus, if you wish to attend only LSS, then you may register for that alone, at no cost.

There may be a number of people who registered for LinuxCon but who only wanted to attend LSS.   In that case, please contact the program committee at

Apologies for any confusion.

July 23, 2015

Virtualenv and library fun

Doing python development means using virtualenv, which is wonderful.  Still, sometimes you find a gotcha that trips you up.

Today, for whatever reason, inside a venv inside a brand new Ubuntu 14.04 install,  I could not see a system-wide install of pywsman (installed via sudo apt-get install python-openwsman)

For example:

mrda@host:~$ python -c 'import pywsman'

# Works

mrda@host:~$ tox -evenv --notest
(venv)mrda@host:~$ python -c 'import pywsman'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named pywsman
# WAT?

Let's try something else that's installed system-wide
(venv)mrda@host:~$ python -c 'import six'
# Works

Why does six work, and pywsman not?
(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/six*
-rw-r--r-- 1 root root  1418 Mar 26 22:57 /usr/lib/python2.7/dist-packages/six-1.5.2.egg-info
-rw-r--r-- 1 root root 22857 Jan  6  2014 /usr/lib/python2.7/dist-packages/
-rw-r--r-- 1 root root 22317 Jul 23 07:23 /usr/lib/python2.7/dist-packages/six.pyc
(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/*pywsman*
-rw-r--r-- 1 root root  80590 Jun 16  2014 /usr/lib/python2.7/dist-packages/
-rw-r--r-- 1 root root 293680 Jun 16  2014 /usr/lib/python2.7/dist-packages/

The only thing that comes to mind is that pywsman wraps a .so

A work-around is to tell venv that it should use the system-wide install of pywsman, like this:

# Kill the old venv first
(venv)mrda@host:~$ deactivate
mrda@host:~$ rm -rf .tox/venv

Now startover
mrda@host:~$ tox -evenv --notest --sitepackages pywsman
(venv)mrda@host:~$ python -c "import pywsman"
# Fun and Profit!

Self Replacing Secure Code, our Strange World, Mac OS X Images Online, Password Recovery Software, and Python Code Obfuscation

A while back (several years ago) I wrote about self replacing code in my 'Cloud and Security' report (p.399-402)(I worked on it on and off over an extended period of time) within the context of building more secure codebases. DARPA are currently funding projects within this space. Based on I've seen it's early days. To be honest it's not that difficult to build if you think about it carefully and break it down. Much of the code that is required is already in wide spread use and I already have much of the code ready to go. The problem is dealing with the sub-components. There are some aspects that are incredibly tedious to deal with especially within the context of multiple languages.

If you're curious, I also looked at fully automated network defense (as in the CGC (Cyber Grand Challenge)) in all of my three reports, 'Building a Cloud Computing Service', 'Convergence Effect', and 'Cloud and Internet Security' (I also looked at a lot of other concepts such as 'Active Defense' systems which involve automated network response/attack but there are a lot of legal, ethical, technical, and other conundrums that we need to think about if we proceed further down this path...). I'll be curious to see what the final implementations will be like...

If you've ever worked in the computer security industry you'll realise that it can be incredibly frustrating at times. As I've stated previously it can sometimes be easier to get information from countries under sanction than legitimately (even in a professional setting in a 'safe environment') for study. I find it very difficult to understand this perspective especially when search engines allow independent researchers easy access to adequate samples and how you're supposed to defend against something if you (and many others around you) have little idea of how some attack system/code works.,infosec-firms-oppose-misguided-exploit-export-controls.aspx

It's interesting how the West views China and Russia via diplomatic cables (WikiLeaks). They say that China is being overly aggressive particularly with regards to economics and defense. Russia is viewed as a hybrid criminal state. When you think about it carefully the world is just shades of grey. A lot of what we do in the West is very difficult to defend when you look behind the scenes and realise that we straddle such a fine line and much of what they do we also engage in. We're just more subtle about it. If the general public were to realise that Obama once held off on seizing money from the financial system (proceeds of crime and terrorism) because there was so much locked up in US banks that it would cause the whole system to crash would they see things differently? If the world in general knew that much of southern Italy's economy was from crime would they view it in the same way as they saw Russia? If the world knew exactly how much 'economic intelligence' seems to play a role in 'national security' would we think about the role of state security differently?

If you develop across multiple platforms you'll have discovered that it is just easier to have a copy of Mac OS X running in a Virtual Machine rather than having to shuffle back and forth between different machines. Copies of the ISO/DMG image (technically, Mac OS X is free for those who don't know) are widely available and as many have discovered most of the time setup is reasonably easy.

If you've ever lost your password to an archive, password recovery programs can save a lot of time. Most of the free password recovery tools deal only with a limited number of filetypes and passwords.

There are some Python bytecode obfuscation utilities out there but like standard obfuscators they are of limited utility against skilled programmers.

July 22, 2015

Configuring Zotero PDF full text indexing in Debian Jessie


Zotero is an excellent reference and citation manager. It runs within Firefox, making it very easy to record sources that you encounter on the web (and in this age of publication databases almost everything is on the web). There are plugins for LibreOffice and for Word which can then format those citations to meet your paper's requirements. Zotero's Firefox application can also output for other systems, such as Wikipedia and LaTeX. You can keep your references in the Zotero cloud, which is a huge help if you use different computers at home and work or school.

The competing product is EndNote. Frankly, EndNote belongs to a previous era of researcher methods. If you use Windows, Word and Internet Explorer and have a spare $100 then you might wish to consider it. For me there's a host of showstoppers, such as not running on Linux and not being able to bookmark a reference from my phone when it is mentioned in a seminar.

Anyway, this article isn't a Zotero versus EndNote smackdown, there's plenty of those on the web. This article shows how to configure Zotero's full text indexing on the Raspberry Pi and other Debian machines.

Installing Zotero

There are two parts to install: a plugin for Firefox, and extensions for Word or LibreOffice. (OpenOffice works too, but to be frank again, LibreOffice is the mainstream project of that application these days.)

Zotero keeps its database as part of your Firefox profile. Now if you're about to embark on a multi-year research project you may one day have trouble with Firefox and someone will suggest clearing your Firefox profile, and Firefox once again works fine. But then you wonder, "where are my years of carefully-collected references?" And then you cry before carefully trying to re-sync.

So the first task in serious use of Zotero on Linux is to move that database out of Firefox. After installing Zotero on Firefox press the "Z" button, press the Gear icon, and select "Preferences" from the dropdown menu. On the resulting panel select "Advanced" and "Files and folders". Press the radio button "Data directory location -- custom" and enter a directory name.

I'd suggest using a directory named "/home/vk5tu/zotero" (amended for your own userid, of course). The standalone Zotero client uses a directory named "/home/vk5tu/.zotero", but there are advantages to not keeping years of precious data in some hidden directory.

After making the change quit from Firefox. Now move the directory in the Firefox profile to whereever you told Zotero to look:

$ cd
$ mv .mozilla/firefox/*.default/zotero .zotero

Full text indexing of PDF files

Zotero can create a full-text index of PDF files. You want that. The directions for configuring the tools are simple.

Too simple. Because downloading a statically-linked binary from the internet which is then run over PDFs from a huge range of sources is not the best of ideas.

The page does have instructions for manual configuration but the page lacks a worked example. Let's do that here.

Manual configuration of PDF full indexing utilities on Debian

Install the pdftotext and pdfinfo programs:

    $ sudo apt-get install poppler-utils

Find the kernel and architecture:

$ uname --kernel-name --machine
Linux armv7l

In the Zotero data directory create a symbolic link to the installed programs. The printed kernel-name and machine are part of the link's name:

$ cd ~/.zotero
$ ln -s $(which pdftotext) pdftotext-$(uname -s)-$(uname -m)
$ ln -s $(which pdfinfo) pdfinfo-$(uname -s)-$(uname -m)

Install a small helper script to alter pdftotext parameters:

$ cd ~/.zotero
$ wget -O
$ chmod a+x

Create some files named *.version containing the version numbers of the utilities. The version number appears in the third field of the first line on stderr:

$ cd ~/.zotero
$ pdftotext -v 2>&1 | head -1 | cut -d ' ' -f3 > pdftotext-$(uname -s)-$(uname -m).version
$ pdfinfo -v 2>&1 | head -1 | cut -d ' ' -f3 > pdfinfo-$(uname -s)-$(uname -m).version
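To see what that pipeline extracts, you can feed it a captured banner (the version string here is illustrative):

```shell
# The third space-separated field of the first line is the version number.
printf 'pdftotext version 0.26.5\nCopyright 2005-2014 The Poppler Developers\n' \
  | head -1 | cut -d ' ' -f3
# → 0.26.5
```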

Start Firefox; Zotero's gear icon, "Preferences", "Search" should then report something like:

PDF indexing
  pdftotext version 0.26.5 is installed
  pdfinfo version 0.26.5 is installed

Do not press "check for update". The usual maintenance of the operating system will keep those utilities up to date.

LUV Main August 2015 Meeting: Open Machines Building Open Hardware / VLSCI: Supercomputing for Life Sciences

Aug 4 2015 18:30
Aug 4 2015 20:30

200 Victoria St. Carlton VIC 3053


• Jon Oxer, Open Machines Building Open Hardware

• Chris Samuel, VLSCI: Supercomputing for Life Sciences

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.



I've put together a little test network at home for doing some Ironic testing on hardware using NUCs.  So far it's going quite well, although one problem that had me stumped for a while was getting the NUC to behave itself when obtaining an IP address with DHCP.

Each time I booted the network, a different IP address from the pool was being allocated (i.e. the next one in the DHCP address pool).

There's already a documented problem with isc-dhcp-server for devices where the BMC and host share a NIC (including the same MAC address), but this was even worse: on closer examination, a different Client UID was being presented as part of the DHCPDISCOVER for the node each time. (Fortunately the NUC's BMC doesn't do this as well.)

So I couldn't really find a solution online, but the answer was there all the time in the man page - there's a cute little option "ignore-client-uids true;" that ensures only the MAC address is used for DHCP lease matching, and not the Client UID.  Turning this on means that on each deploy the NUC receives the same IP address - and not just for the node, but also for the BMC - it works around the aforementioned bug as well.  Woohoo!
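For reference, here's a minimal sketch of how that looks in dhcpd.conf (the subnet and address range are made-up values for a small test network like this one):

```
# /etc/dhcp/dhcpd.conf (sketch)
ignore-client-uids true;   # match leases on MAC address only, not Client UID

subnet netmask {
    range;
}
```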

There's still one remaining problem: I can't seem to get a fixed IP address returned in the DHCPOFFER, so I have to configure a dynamic pool instead (which is fine because this is a test network with limited nodes in it).  One to resolve another day...

Self Driving Cars

I’m a believer in self driving car technology, and predict it will have enormous effects, for example:

  1. Our cars currently spend most of the time doing nothing. They could be out making money for us as taxis while we are at work.
  2. How much infrastructure and frustration (home garage, driveways, car parks, finding a park) do we devote to cars that are standing still? We could park them a few km away in a “car hive” and arrange to have them turn up only when we need them.
  3. I can make interstate trips lying down, sleeping or working.
  4. Electric cars can recharge themselves.
  5. It throws personal car ownership into question. I can just summon a car on my smart phone then send the thing away when I’m finished. No need for parking, central maintenance. If they are electric, and driverless, then very low running costs.
  6. It will decimate the major cause of accidental deaths, saving untold misery. Imagine if your car knew the GPS coordinates of every car within 1000m, even if outside of visual range, like around a corner. No more t-boning, or even car doors opening in the path of my bike.
  7. Speeding and traffic fines go away, which will present a revenue problem for governments like mine that depend on the statistical likelihood of people accidentally speeding.
  8. My red wine consumption can set impressive new records as the car can drive me home and pour me into bed.

I think the time will come when computers do a lot better than we can at driving. The record of these cars in the US is impressive. The record for humans is dismal - car accidents are a leading cause of death.

We already have driverless planes (autopilot, anti-collision radar, autoland) that do a pretty good job with up to 500 lives at a time.

I can see a time (say 20 years) when there will be penalties (like a large insurance excess) if a human is at the wheel during an accident. Meat bags like me really shouldn’t be in control of 1000kg of steel hurtling along at 60 km/hr. Incidentally that’s about 139 kJ of kinetic energy. A 9mm bullet exits a pistol with 0.519 kJ of energy. No wonder cars hurt people.
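A quick back-of-envelope check of the car’s figure (awk, since shell arithmetic is integer-only):

```shell
# 1000 kg at 60 km/h: KE = 1/2 * m * v^2
awk 'BEGIN {
    v  = 60 / 3.6                   # km/h to m/s
    ke = 0.5 * 1000 * v * v         # joules
    printf "%.1f kJ\n", ke / 1000   # → 138.9 kJ
}'
```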

However many people are concerned about “blue screens of death”. I recently had an email exchange on a mailing list, here are some key points for and against:

  1. The cars might be hacked. My response is that computers and micro-controllers have been in cars for 30 years. Hacking of safety critical systems (ABS or EFI or cruise control) is unheard of. However unlike a 1980′s EFI system, self driving cars will have operating systems and connectivity, so this does need to be addressed. The technology will (initially at least) be closed source, increasing the security risk. Here is a recent example of a modern car being hacked.
  2. Planes are not really “driverless”, they have controls and pilots present. My response is that long distance commercial aircraft are under autonomous control for the majority of their flying hours, even if manual controls are present. Given the large number of people on board an aircraft it is of course prudent to have manual control/pilot back up, even if rarely used.
  3. The drivers of planes are sometimes a weak link. As we saw last year and on Sep 11 2001, there are issues when a malicious pilot gains control. Human error is also behind a large number of airplane incidents, and most car accidents. It was noted that software has been behind some airplane accidents too – a fair point.
  4. Compared to aircraft the scale is much different for cars (billions rather than 1000s). The passenger payload is also very different (1.5 people in a car on average?), and the safety record of cars much much worse – it’s crying out for improvement via automation. So I think automation of cars will eventually be a public safety issue (like vaccinations) and controls will disappear.
  5. Insurance companies may refuse a claim if the car is driverless. My response is that insurance companies will look at the actuarial data as that’s how they make money. So far all of the accidents involving Google driverless cars have been caused by meat bags, not silicon.

I have put my money where my mouth is and invested in a modest amount of Google shares based on my belief in this technology. This is also an ethical buy for me. I’d rather have some involvement in an exciting future that saves lives and makes the world a better place than invest in banks and mining companies which don’t.

July 21, 2015

Joint Strike Fighter F-35 Notes

Below are a bunch of thoughts, collation of articles about the F-35 JSF, F-22 Raptor, and associated technologies...

- every single defense analyst knows that compromises had to be made in order to achieve a blend of cost effectiveness, stealth, agility, etc... in the F-22 and F-35. What's also clear is that once things get up close and personal things mightn't be as clear cut as we're being told. I was of the impression that the F-22 would basically outdo anything and everything in the sky all of the time. It's clear based on training exercises that unless the F-22's have been backing off it may not be as phenomenal as we're being led to believe (one possible reason to deliberately back off is to not provide intelligence on the maximum performance envelope, to provide less of a target for near peer threats with regards to research and engineering). There are actually a lot of low speed manoeuvres that I've seen a late model 3D-vectored Sukhoi perform that a 2D-vectored F-22 has not demonstrated. The F-35 is dead on arrival in many areas (at the moment, definitely from a WVR perspective) as many people have stated. My hope and expectation is that it will have significant upgrades throughout its lifetime

F22 vs Rafale dogfight video

Dogfight: Rafale vs F22 (Close combat)


- in the past public information/intelligence regarding some defense programs/equipment has been limited to reduce the chances of setting off an arms race. That way the side which has disseminated the mis-information can be guaranteed an advantage should there be a conflict. Here's the problem though: while some of this may be such, I doubt that all of it is. My expectation is that some of the intelligence leaks (many terabytes; some details of the breach are available publicly) regarding designs of the ATF (F-22) and JSF (F-35) programs are also causing problems. They need to overcome technical problems as well as problems posed by previous intelligence leaks. Some of what is being said makes no sense as well. Most of what we're being sold on doesn't actually work (yet) (fusion, radar, passive sensors, identification friend-or-foe, etc...)...

- if production is really as problematic as they say it could be, without possible recourse, then the only thing left is to bluff. Deterrence is based on the notion that your opponent will not attack because you have a qualitative or quantitative advantage... Obviously, if there is actual conflict we have a huge problem. We purportedly want to be able to defend ourselves should anything potentially bad occur. The irony is that our notion of self defense often incorporates force projection in far off, distant lands...

F22 Raptor Exposed - Why the F22 Was Cancelled

F-35 - a trillion dollar disaster


JSF 35 vs F18 superhornet

- we keep on giving Lockheed Martin a tough time regarding development and implementation but we keep on forgetting that they have delivered many successful platforms including the U-2, the Lockheed SR-71 Blackbird, the Lockheed F-117 Nighthawk, and the Lockheed Martin F-22 Raptor

f-22 raptor crash landing

- SIGINT/COMINT often produces a lot of false positives. Imagine overhearing every single conversation about you. Would you possibly be concerned about your security? Probably more than usual, despite whatever you might say? As I said previously in posts on this blog it doesn't make sense that we would have such money invested in SIGINT/COMINT without a return on investment. I believe that we may be involved in far more 'economic intelligence' than we may be led to believe

- despite what is said about the US (and what they say about themselves), they do tell half-truths/falsehoods. They said that the Patriot missile defense systems were a complete success upon release with ~80% success rates when first released. Subsequent revisions of past performance have indicated actual success rate of about half that. It has been said that the US has enjoyed substantive qualitative and quantitative advantages over Soviet/Russian aircraft for a long time. Recently released data seems to indicate that it is closer to parity (not 100% sure about the validity of this data) when pilots are properly trained. There seems to be indications that Russian pilots may have been involved in conflicts where they shouldn't have been or were unknown to be involved...

- the irony between the Russians and US is that each denies the other's technology is worth pursuing and yet time seems to indicate otherwise. A long time ago Russian scientists didn't bother with stealth because they thought it was overly expensive without enough of a gain (especially in light of updated sensor technology) and yet the PAK-FA/T-50 is clearly a test bed for such technology. Previously, the US denied that thrust vectoring was worth pursuing and yet the F-22 clearly makes use of it

- based on some estimates that I've seen the F-22 may be capable of close to Mach 3 (~2.5 based on some of the estimates that I've seen) under limited circumstances

- people keep on saying maintaining a larger, indigenous defense program is simply too expensive. I say otherwise. Based on what has been leaked regarding the bidding process many people basically signed on without necessarily knowing everything about the JSF program. If we had more knowledge we may have proceeded a little bit differently

- a lot of people who would/should have classified knowledge of the program are basically implying that it will work and will give us a massive advantage given more development time. The problem is that there is so much core functionality that is so problematic that this is difficult to believe...

- the fact that pilots are being briefed not to allow for particular circumstances tells us that there are genuine problems with the JSF

- judging by the opinions in the US military many people are guarded regarding the future performance of the aircraft. We just don't know until it's deployed and see how others react from a technological perspective

- proponents of the ATF/JSF programs keep on saying that since you can't see it you can't shoot it. If that's the case, I just don't understand why we don't push up development of 5.5/6th gen fighters (stealth drones basically) and run a hybrid force composed of ATF, JSF, and armed drones (some countries including France are already doing this)? Drones are somewhat of a better known quantity and without life support issues to worry about should be able to go head to head with any manned fighter even with limited AI and computing power. Look at the following videos and you'll notice that the pilot is right on the physical limit in a 4.5 gen fighter during an exercise with an F-22. A lot of stories are floating around indicating that the F-22 enjoys a big advantage but that under certain circumstances it can be mitigated. Imagine going up against a drone where you don't have to worry about the pilot blacking out, pilot training (incredibly expensive to train. Experience has also told us that pilots need genuine flight time not just simulation time to maintain their skills), a possible hybrid propulsion system (for momentary speed changes/bursts (more than that provided by afterburner systems) to avoid being hit by a weapon or being acquired by a targeting system), and has more space for weapons and sensors? I just don't understand how you would be better off with a mostly manned fleet as opposed to a hybrid fleet unless there are technological/technical issues to worry about (I find this highly unlikely given some of the prototypes and deployments that are already out there)

F22 vs Rafale dogfight video

Dogfight: Rafale vs F22 (Close combat)


- if I were a near peer aggressor or looking to defend against 5th gen threats I'd go straight to 5.5/6th gen armed drone fighter development. You wouldn't need to fulfil all the requirements and with the additional lead time you may be able to achieve not just parity but actual advantages while possibly being cheaper with regards to TCO (Total Cost of Ownership). There are added benefits going straight to 5.5/6th gen armed drone development. You don't have to compromise so much on design. The bubble shaped (or not) canopy to aid dogfighting affects aerodynamic efficiency and is actually one of the main causes of increased RCS (Radar Cross Section) on a modern fighter jet. The pilot and additional equipment (ejector seat, user interface equipment, life support systems, etc...) would surely add a large amount of weight which can now be removed. With the loss in weight and increase in aerodynamic design flexibility you could save a huge amount of money. You also have a lot more flexibility in reducing RCS. For instance, some of the biggest reflectors of RADAR signals are the canopy (a film is used to deal with this) and the pilot's helmet, and one of the biggest supposed selling points of stealth aircraft is RAM coatings. They're incredibly expensive though and wear out (look up the history of the B-2 Spirit and the F-22 Raptor). If you have a smaller aircraft to begin with though you have less area to paint, leading to lower costs of ownership while retaining the advantages of low observable technology

- the fact that it has already been speculated that 6th gen fighters may focus less on stealth and speed and more on weapons capability means that the US is aware of increasingly effective defense systems against 5th gen fighters such as the F-22 Raptor and F-35 JSF which rely heavily on low observability 

- based on Wikileaks and other OSINT (Open Source Intelligence) everyone involved with the United States seems to acknowledge that they get a raw end of the deal to a certain extent but they also seem to acknowledge/imply that life is easier with them than without them. Read enough and you'll realise that even when classified as a closer partner rather than just a purchaser of their equipment you sometimes don't/won't receive much extra help

- if we had the ability I'd be looking to develop our own indigenous defense programs. At least when we make procurements we'd be in a better position to be able to make a decision as to whether what was being presented to us was good or bad. We've been burnt on so many different programs with so many different countries... The only issue that I may see is that the US may attempt to block us from this. It has happened in the past with other supposed allies before...

- I just don't get it sometimes. Most of the operations and deployments that the US and allied countries engage in are counter-insurgency and CAS, with significant parts of our operations involving mostly un-manned drones (armed or not). 5th gen fighters help but they're overkill. Based on some of what I've seen the only two genuine near peer threats are China and Russia, both of whom have known limitations in their hardware (RAM coatings/films, engine performance/endurance, materials design and manufacturing, etc...). Sometimes it feels as though the US looks for enemies that mightn't even exist. Even a former Australian Prime-Ministerial adviser said that China doesn't want to lead the world, "China will get in the way or get out of the way." The only thing I can possibly think of is that the US has intelligence that may suggest that China intends to project force further outwards (which it has done) or else they're overly paranoid. Russia is a slightly different story though... I'm guessing it would be interesting reading up more about how the US (overall) interprets Russian and Chinese actions behind the scenes (look up training manuals for allied intelligence officers for an idea of our interpretation of what their intelligence services are like)

- sometimes people say that the F-111 was a great plane but in reality there was no great use of it in combat. It could be the exact same circumstance with the F-35

- there could be a chance the aircraft could become like the B-2 and the F-22: seldom used because the actual true cost of running it is horribly high. Also imagine the ramifications/blowback of losing such an expensive piece of machinery should there be a chance that it can be avoided

- defending against 5th gen fighters isn't easy but it isn't impossible. Sensor upgrades, sensor blinding/jamming technology, integrated networks, artificial manipulation of weather (increased condensation levels increase RCS), faster and more effective weapons, layered defense (with strategic use of disposable (and non-disposable) decoys so that you can hunt down departing, basically unarmed, fighters), experimentation with cloud seeding with substances that may help to speed up RAM coating removal or else reduce the effectiveness of stealth technology (the less you have to deal with the easier your battles will be), forcing the battle into unfavourable conditions, etc... Interestingly, there have been some accounts/leaks of being able to detect US stealth bombers (B-1) lifting off from some US air bases from Australia using long range RADAR. Obviously, it's one thing to be able to detect and track versus achieving a weapons quality lock on a possible target

RUSSIAN RADAR CAN NOW SEE F-22 AND F-35 Says top US Aircraft designer

- following are rough estimates of the RCS of various modern defense aircraft. It's clear that while Chinese and Russian technology isn't entirely on par it makes the contest uncomfortably close. Estimates on the PAK-FA/T-50 indicate an RCS somewhere between the F-35 and F-22. Ultimately this comes back down to a sensor game. Rough estimates seem to indicate a slight edge to the F-22 in most areas. Part of me thinks that the RCS of the PAK-FA/T-50 must be propaganda; the other part leads me to believe that there is no way countries would consider purchase of the aircraft if it didn't offer a competitive RCS

- it's somewhat bemusing that you can't take pictures/videos from certain angles of the JSF in some of the videos mentioned here and yet there are heaps of pictures of LOAN systems online, including high resolution images of the back end of the F-35 and F-22

F 22 Raptor F 35 real shoot super clear

- people keep on saying that if you can't see and you can't lock on to stealth aircraft, they'll basically be gone before you can react. The converse is true. Without some form of targeting system the fighter in question can't lock on to his target. Once you understand how AESA RADAR works you also understand that, given sufficient computing power, good implementation skills, etc... it's also subject to the same issue that faces the other side. You can't shoot what you can't see, and by targeting you give away your position. My guess is that detection of tracking by RADAR is somewhat similar to a lot of de-cluttering/de-noising algorithms (while making use of wireless communication/encryption & information theories as well) but much more complex... which is why there has been such heavy investment and interest in more passive systems (infra-red, light, sound, etc...)

F-35 JSF Distributed Aperture System (EO DAS)

Lockheed Martin F-35 Lightning II- The Joint Strike Fighter- Full Documentary.

4195: The Final F-22 Raptor

Rafale beats F 35 & F 22 in Flight International

Eurofighter Typhoon fighter jet Full Documentary

Eurofighter Typhoon vs Dassault Rafale

DOCUMENTARY - SUKHOI Fighter Jet Aircrafts Family History - From Su-27 to PAK FA 50

Green Lantern : F35 v/s UCAVs

Building the LLVM Fuzzer on Debian.

I've been using the awesome American Fuzzy Lop fuzzer since late last year but had also heard good things about the LLVM Fuzzer. Getting the code for the LLVM Fuzzer is trivial, but when I tried to use it, I ran into all sorts of road blocks.

Firstly, the LLVM Fuzzer needs to be compiled with and used with Clang (GNU GCC won't work) and it needs to be Clang >= 3.7. Now Debian does ship a clang-3.7 in the Testing and Unstable releases, but that package has a bug (#779785) which means the Debian package is missing the static libraries required by the Address Sanitizer options. Use of the Address Sanitizers (and other sanitizers) increases the effectiveness of fuzzing tremendously.
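Before resorting to a source build, it's worth checking whether the clang already on your PATH meets the requirement. A small sketch (the banner parsing is an assumption; clang's version output format varies between builds):

```shell
# Compare dotted version strings: succeeds if $1 >= $2 (uses GNU sort -V).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -1)" = "$2" ]
}

ver=$(clang --version 2>/dev/null | grep -o '[0-9]\+\.[0-9]\+' | head -1)
if [ -n "$ver" ] && version_ge "$ver" 3.7; then
    echo "clang $ver is new enough for the LLVM Fuzzer"
else
    echo "need clang >= 3.7 (found: ${ver:-none}); build from source instead"
fi
```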

This bug meant I had to build Clang from source, which, unfortunately, is rather poorly documented (I intend to submit a patch to improve this) and I only managed it with help from the #llvm IRC channel.

Building Clang from the git mirror can be done as follows:

  mkdir LLVM
  cd LLVM/
  git clone
  (cd llvm/tools/ && git clone
  (cd llvm/projects/ && git clone
  (cd llvm/projects/ && git clone
  (cd llvm/projects/ && git clone

  mkdir -p llvm-build
  (cd llvm-build/ && cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX=$HOME/Clang/3.8 ../llvm)
  (cd llvm-build/ && make install)

If all the above works, you will now have working clang and clang++ compilers installed in $HOME/Clang/3.8/bin and you can then follow the examples in the LLVM Fuzzer documentation.

FreeDV Robustness Part 6 – Early Low SNR Results

Anyone who writes software should be sentenced to use it. So for the last few days I’ve been radiating FreeDV 700 signals from my home in Adelaide to this websdr in Melbourne, about 800km away. This has been very useful, as I can sample signals without having to bother other Hams. Thanks John!

I’ve also found a few bugs and improved the FreeDV diagnostics to get a feel for how the system is working over real world channels.

I am using a simple end fed dipole a few meters off the ground and my IC7200 at maximum power (100W I presume, I don’t have a power meter). A key goal is comparable performance to SSB at low SNRs on HF channels – that is where FreeDV has struggled so far. This has been a tough nut to crack. SSB is really, really good on HF.

Here is a sample taken this afternoon, in a marginal channel. It consists of analog/DV/analog/DV speech. You might need to listen to it a few times, it’s hard to understand first time around. I can only get a few words in analog or DV. It’s right at the lower limit of intelligibility, which is common in HF radio.

Take a look at the spectrogram of the off air signal. You can see the parallel digital carriers; the diagonal stripes are the frequency selective fading. In the analog segments every now and again some low frequency energy pops up above the noise (speech is dominated by low frequency energy).

This sample had a significant amount of frequency selective fading, which occasionally drops the whole signal down into the noise. The DV mutes in the middle of the 2nd digital section as the signal drops out completely.

There was no speech compressor on SSB. I am using the “analog” feature of FreeDV, which allows me to use the same microphone and quickly swap between SSB and DV to ensure the HF channel is roughly the same. I used my laptop's built-in microphone, and haven't tweaked the SSB or DV audio with filtering or level adjustment.

I did confirm the PEP power is about the same in both modes using my oscilloscope with a simple “loop” antenna formed by clipping the probe ground wire to the tip. It picked up a few volts of RF easily from the nearby antenna. The DV output audio level is a bit quiet for some reason, have to look into that.

I’m quite happy with these results. In a low SNR, barely usable SSB channel, the new coherent PSK modem is hanging on really well and we could get a message through on DV (e.g. phonetics, a signal report). When the modem locks it’s noise free, a big plus over SSB. All with open source software. Wow!

My experience is consistent with this FreeDV 700 report from Kurt KE7KUS over a 40m NVIS path.

Next step is to work on the DV speech quality to make it easy to use conversationally. I’d say the DV speech quality is currently readability 3 or 4/5. I’ll try a better microphone, filtering of the input speech, and see what can be done with the 700 bit/s Codec.

One option is a new mode where we use the 1300 bit/s codec (as used in FreeDV 1600) with the new, cohpsk modem. The 1300 bit/s codec sounds much better but would require about 3dB more SNR (half an s-point) with this modem. The problem is bandwidth. One reason the new modem works so well is that I use all of the SSB bandwidth. I actually send the 7 x 75 symbol/s carriers twice, to get 14 carriers total. These are then re-combined in the demodulator. This “diversity” approach makes a big difference in the performance on frequency selective fading channels. We don’t have room for that sort of diversity with a codec running much faster.

So time to put the thinking hat back on. I’d also like to try some nastier fading channels, like 20m around the world, or 40m NVIS. However I’m very pleased with this result. I feel the modem is “there”, however a little more work required on the Codec. We’re making progress!

July 20, 2015

Why DANE isn't going to win

In a comment to my previous post, Daniele asked the entirely reasonable question,

Would you like to comment on why you think that DNSSEC+DANE are not a possible and much better alternative?

Where DANE fails to be a feasible alternative to the current system is that it is not “widely acknowledged to be superior in every possible way”. A weak demonstration of this is that no browser has implemented DANE support, and very few other TLS-using applications have, either. The only thing I use which has DANE support that I’m aware of is Postfix – and SMTP is an application in which the limitations of DANE have far less impact.
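As an aside, turning DANE on in Postfix is a two-line change to main.cf (a sketch; it assumes Postfix 2.11 or later and a local DNSSEC-validating resolver):

```
# /etc/postfix/main.cf
smtp_dns_support_level = dnssec   # request DNSSEC validation from the resolver
smtp_tls_security_level = dane    # use TLSA records when the zone is signed
```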

My understanding of the limitations of DANE, for large-scale deployment, are enumerated below.

DNS Is Awful

Quoting Google security engineer Adam Langley:

But many (~4% in past tests) of users can’t resolve a TXT record when they can resolve an A record for the same name. In practice, consumer DNS is hijacked by many devices that do a poor job of implementing DNS.

Consider that TXT records are far, far older than TLSA records. It seems likely that TLSA records would fail to be retrieved greater than 4% of the time. Extrapolate to what the failure rate for lookup of TLSA records would likely be, and imagine what that would do to the reliability of DANE verification. It would either be completely unworkable, or else would cause a whole new round of “just click through the security error” training. Ugh.

This also impacts DNSSEC itself. Lots of recursive resolvers don’t validate DNSSEC, and some providers mangle DNS responses in some way, which breaks DNSSEC. Since OSes don’t support DNSSEC validation “by default” (for example, by having the name resolution APIs indicate DNSSEC validation status), browsers would essentially have to ship their own validating resolver code.

Some people have concerns around the “single point of control” for DNS records, too. While the “weakest link” nature of the CA model is terribad, there is a significant body of opinion that replacing it with a single, minimally-accountable organisation like ICANN isn’t a great trade.

Finally, performance is also a concern. Having to go out-of-band to retrieve TLSA records delays page generation, and nobody likes slow page loads.


Lots of people don’t like DNSSEC, for all sorts of reasons. While I don’t think it is quite as bad as people make out (I’ve deployed it for most zones I manage), there are some legitimate issues that mean browser vendors aren’t willing to rely on DNSSEC.

1024 bit RSA keys are quite common throughout the DNSSEC system. Getting rid of 1024 bit keys in the PKI has been a long-running effort; doing the same for DNSSEC is likely to take quite a while. Yes, rapid rotation is possible, by splitting key-signing and zone-signing (a good design choice), but since it can’t be enforced, it’s entirely likely that long-lived 1024 bit keys for signing DNSSEC zones are the rule, rather than the exception.

DNS Providers are Awful

While we all poke fun at CAs who get compromised, consider how often someone’s DNS control panel gets compromised. Now ponder the fact that, if DANE is supported, TLSA records can be manipulated in that DNS control panel. Those records would then automatically be DNSSEC signed by the DNS provider and served up to anyone who comes along. Ouch.

In theory, of course, you should choose a suitably secure DNS provider, to prevent this problem. Given that there are regular hijackings of high-profile domains (which, presumably, the owners of those domains would also want to prevent), there is something in the DNS service provider market which prevents optimal consumer behaviour. Market for lemons, perchance?


None of these problems are unsolvable, although none are trivial. I like DANE as a concept, and I’d really, really like to see it succeed. However, the problems I’ve listed above are all reasonable objections, made by people who have their hands in browser codebases, and so unless they’re fixed, I don’t see that anyone’s going to be able to rely on DANE on the Internet for a long, long time to come.

July 19, 2015

Twitter posts: 2015-07-13 to 2015-07-19

Craige McWhirter: How To Configure Debian to Use The Tiny Programmer ISP Board

So, you've gone and bought yourself a Tiny Programmer ISP, you've plugged it into your Debian system and excitedly run avrdude, only to be greeted with this:

% avrdude -c usbtiny -p m8

avrdude: error: usbtiny_transmit: error sending control message: Operation not permitted
avrdude: initialization failed, rc=-1
         Double check connections and try again, or use -F to override
         this check.

avrdude: error: usbtiny_transmit: error sending control message: Operation not permitted

avrdude done.  Thank you.

I resolved this permissions error by adding the following line to /etc/udev/rules.d/10-usbtinyisp.rules:

SUBSYSTEM=="usb", ATTR{idVendor}=="1781", ATTR{idProduct}=="0c9f", GROUP="plugdev", MODE="0660"

Then restarting udev:

% sudo systemctl restart udev

Plugged the Tiny Programmer ISP back into the laptop and ran avrdude again:

% avrdude -c usbtiny -p m8

avrdude: AVR device initialized and ready to accept instructions

Reading | ################################################## | 100% 0.00s

avrdude: Device signature = 0x1e9587
avrdude: Expected signature for ATmega8 is 1E 93 07
         Double check chip, or use -F to override this check.

avrdude done.  Thank you.

You should now have avrdude love.

Enjoy :-)

July 18, 2015

Casuarina Sands to Kambah Pool

I did a walk with the Canberra Bushwalking Club from Casuarina Sands (in the Cotter) to Kambah Pool (just near my house) yesterday. It was very enjoyable. I'm not going to pretend to be excellent at write ups for walks, but will note that the walk leader John Evans already has a very detailed blog post about the walk up. We found a bunch of geocaches along the way, with John doing most of the work and ChifleyGrrrl and me providing encouragement and scrambling skills. A very enjoyable day.


See more thumbnails

Interactive map for this route.

Tags for this post: blog pictures 20150718-casurina_sands_to_kambah_pool photo canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches


July 17, 2015

OSX Bundling Soprano and other joys

Libferris has been moving to use more Qt/KDE technologies over the years. Ferris is also a fairly substantial software project in its own right, with many plugins and support for multiple libraries. Years back I moved from using raw redland to using soprano for RDF handling in libferris.

Over recent months, from time to time, I've been working on an OSX bundle for libferris. The idea is to make installation as simple as copying to /Applications. I've done some OSX packaging before, so I've been exposed to the whole library-paths-inside-dylibs business, and also to the freedesktop specs expecting things in /etc or wherever when you really want them to look into /Applications/YourApp/Contents/Resources/.../etc/whatever.

The silver test for packaging is to rename the area that was used to build the source to something unexpected and see if you can still run the tools. The gold test is obviously to install from the app.dmg onto a fresh machine and see that it runs.

I discovered a few gotchas during silver testing and soprano usage. If you get things half right then you can get to a state that allows the application to run but that does not allow a redland RDF model to ever be created. If your application assumes that it can always create an in memory RDF store, a fairly secure bet really, then bad things will befall the app bundle on osx.

Plugins are found by searching for the desktop files first and then loading the shared library plugin as needed. The desktop files can be found with the first line below, while the second line allows the plugin shared libraries to be found and loaded.

export SOPRANO_DIRS=/Applications/

export LD_LIBRARY_PATH=/Applications/

You have to jump through a few more hoops. You'll find that the plugin ./lib/soprano/ links to lib/librdf.0.dylib and librdf will link to other redland libraries which themselves link to things like libxml2 which you might not have bundled yet.

There are also many cases of things linking to QtCore and other Qt libraries. These links are normally to nested paths like Library/Frameworks/QtCore.framework/Versions/4/QtCore which will not pass the silver test. Actually, links inside dylibs like that tend to cause the show to segv and you are left to work out where and why that happened. My roll by hand solution is to create softlinks to these libraries like QtCore in the .../lib directory and then resolve the dylib links to these softlinks.

In the end I'd also like to make an app bundle for specific KDE apps. Just being able to install okular by drag and drop would be very handy. It is my preferred reader for PDF files and having a binary that doesn't depend on a build environment (homebrew or macports) makes it simpler to ensure I can always have okular even when using an osx machine.

Selling Software Online, Installer, Packaging, and Packing Software, Desktop Automation, and More

Selling software online is deceptively simple. Actually making money out of it can be much more difficult.

Heaps of packaging/installer programs out there. Some cross platform solutions out there as well. Interestingly, just like a lot of businesses out there (even a restaurant that I frequent will offer you a free drink if you 'Like' them via Facebook), they now make use of guerrilla style marketing techniques. Write a blog article for them and they may provide you with a free license.

I've always wondered how much money software manufacturers make from bloatware and other advertising... It can vary drastically. Something to watch for is silent/delayed installs though, namely installation of software even though it doesn't show up in the Windows 'Control Panel'.

Even though product activation/DRM can be simple to implement (depending on the solution), cost can vary drastically depending on the company and solution that is involved.

Sometimes you just want to know what packers and obfuscation a company may have used to protect/compress their program. It's been a while since I looked at this and it looks like things were just like last time. A highly specialised tool with few genuinely good, quality candidates...

A nice way of earning some extra/bonus (and legal) income if you have a history of being able to spot software bugs.

If you've never used screen/desktop automation software before, there are actually quite a few options out there. Think of it as 'macros' for the Windows desktop. The good thing is that a lot of them may use a scripting language for the backend and have other unexpected functionality as well, opening up further opportunities for productivity and automation gains.

A lot of partition management software claim to be able to basically handle all circumstances. The strange thing is that disk cloning to an external drive doesn't seem to be handled as well. The easiest/simplest way seems to be just using a caddy/internal in combination with whatever software you may be using.

There are some free Australian accounting solutions out there. A bit lacking feature wise though.

Every once in a while someone sends you an email in a 'eml' format which can't be decoded by your local mail client. Try using 'ripmime'...

Terry && EL

After getting headlights Terry now has a lighted arm. This is using the 3 meter EL wire and a 2xAA battery inverter to drive it. The around $20 entry point to bling is fairly hard to resist. The EL tape looks better IMHO but seems to be a little harder to work with from what I've read about cutting the tape and resoldering / reconnecting.

I have a 1 meter red EL tape which I think I'll try to wrap around the pan/tilt assembly. From an initial test it can make it around the actobotics channel length I'm using around twice. I'll probably print some mounts for it so that the tape doesn't have to try to make right angle turns at the ends of the channel.

July 16, 2015

Terry - Lights, EL and solid Panner

Terry the robot now has headlights! While the Kinect should be happy in low light I found some nice 3 watt LEDs on sale and so headlights had to happen. The lights want a constant current source of 700mA so I grabbed an all in one chip solution to do that and mounted the lights in series. Yes, there are a load of tutorials on building a constant current driver for a few bucks around the net, but sometimes I don't really want to dive in and build every part. I think it will be interesting at some stage to test some of the constant current setups and see the ripple and various metrics of the different designs. That part of the analysis is harder to find around the place.

And just how does this all look when the juice is flowing, I hear you ask. I have tilted the lights ever so slightly downwards to save the eyes from the full blast. Needless to say, you will be able to see Terry coming now, and it will surely see you in full colour 1080 glory as you come into its sights. I thought about mounting the lights on the pan and tilt head unit, but I really don't want these to ever get to angles that are looking right into a person's eyes as they are rather bright.

On another note, I now have some EL wire and EL tape for Terry itself. So the robot will be glowing in a subtle way itself. The EL tape is much cooler looking than the wire IMHO but the tape is harder to cut (read I probably won't be doing that). I think the 1m of tape will end up wrapped around the platform on the pan and tilt board.

Behind the LED is quite a heatsink, so they shouldn't pop for quite some time. In the top right you can just see the heatshrink direct connected wires on the LED driver chip and the white wire mounts above it. I have also trimmed down the quad encoder wires and generally cleaned up that area of the robot.

A little while ago I moved the pan mechanism off axle. The new axle is hollow and set up to accommodate a slip ring at the base. I now have said slip ring and am printing a crossover plate for that to mount to channel. Probably by the next post Terry will be able to continuously rotate the panner without tangling anything up. The torque multiplier of the brass to alloy wheels together with the 6 rpm gearmotor having very high torque means that the panner will tend to stay where it is. Without powering the motor the panner is nearly impossible to move; the grub screws will fail before the motor gives way.

Although the EL tape is tempting, the wise move is to fit the slip ring first.

July 15, 2015


I am on vacation this week, so I took this afternoon to do some walking and geocaching...

That included a return visit to Narrabundah trig to clean up some geocaches I missed last visit:


Interactive map for this route.

And exploring the Lindsay Pryor arboretum because I am trying to collect the complete set of arboretums in Canberra:


Interactive map for this route.

And then finally the Majura trig, which was a new one for me:


See more thumbnails

Interactive map for this route.

I enjoyed the afternoon. I found a fair few geocaches, and walked for about five hours (not including driving between the locations). I would have spent more time geocaching at Majura, except I made it back to the car after sunset as it was.

Tags for this post: blog pictures 20150715-wanderings photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger


July 14, 2015

8 Mega Watts in your bare hands

I recently went on a nice road trip to Gippstech, an interstate Ham radio conference, with Andrew, VK5XFG. On the way, we were chatting about Electric Cars, and how much of infernal combustion technology is really just a nasty hack. Andrew made the point that if petrol cars had been developed now, we would have all sorts of Hazmat rules around using them.

Take refueling. Gasoline contains 42MJ of energy in every litre. On one of our stops we took 3 minutes to refuel 36 litres. That’s 42*36/180 or 8.4MJ/s. Now one watt is 1J/s, so that’s a “power” (the rate energy is moving) of 8.4MW. Would anyone be allowed to hold an electrical cable carrying 8.4MW? That’s like 8000V at 1000A. Based on an average household electricity consumption of 2kW, that’s like hanging onto the HT line supplying 4200 homes.
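The back-of-the-envelope arithmetic above can be re-run in a few lines (same figures as the post; the 2 kW household average is the assumption stated there):

```python
# Re-running the refuelling "power" calculation from the paragraph above
energy_per_litre_mj = 42          # MJ of chemical energy per litre of petrol
litres = 36
seconds = 3 * 60                  # a 3 minute fuel stop
power_mw = energy_per_litre_mj * litres / seconds  # MJ/s, i.e. megawatts
amps = power_mw * 1e6 / 8000      # current if delivered electrically at 8000 V
homes = power_mw * 1e6 / 2000     # equivalent number of 2 kW households
print(power_mw, amps, homes)      # 8.4 MW, 1050 A, 4200 homes
```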

But it’s OK, as long as you don’t smoke or hold a mobile phone!

The irony is that while I was sitting on 60 litres of high explosive, spraying fumes along the Princes Highway and bitching about petrol cars I was enjoying the use of one. Oh well, bring on the Tesla charge stations and low cost EVs. Infrastructure, the forces of mass production and renewable power will defeat the evils of fossil fuels.

Reading Further

Energy Equivalents of a Krispy Kreme Factory.

Fuel Consumption of a Pedestrian Crossing

LUV Beginners July Meeting: Ask the experts

Jul 18 2015 12:30
Jul 18 2015 16:30

RMIT Building 91, 110 Victoria Street, Carlton South

This month we'll be asking attendees for their pressing Linux and Open Source issues, and our resident Linux experts will then attempt to explain the topics to your satisfaction! If you've got something you always wanted to know more about, or something you'd like to know how to do, come along and ask.

There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue.

LUV would like to acknowledge Red Hat for their help in obtaining the Trinity College venue and VPAC for hosting.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Minutes of Council Meeting 17 June 2015

Wed, 2015-06-17 19:49 - 20:41

1. Meeting overview and key information


Josh Hesketh, Sae Ra Germaine, Christopher Neugebauer, Craige McWhirter, Josh Stewart, James Iseppi


Tony Breeds

Meeting opened by Josh H at 1949hrs and quorum was achieved

Key Stats

Stats not recorded this fortnight

MOTION that the previous minutes of 03 June are correct

Moved: Josh H

Seconded: Chris

Passed Unanimously

2. Log of correspondence

Motions moved on list


General correspondence

GovHack 2015 as a subcommittee

MOTION by Josh H We accept govhack as an LA Sub-committee with the task of running GovHack at a national level with:

Geoff Mason - lead

Alysha Thomas

Pia Waugh - as the liaison to LA

Sharen Scott

Diana Ferry

Alex Sadleir

Richard Tubb

Jan Bryson

Keith Moss

Under the Sub-committee policy v1 to allow the committee to run with autonomy and to use an external entity for administration.

Seconded Chris

Passed Unanimously

The old Subcommittee policy will need to come into effect

Invoice from LESLIE POOLE - Reminder notice from Donna and Leslie have arrived.

Supporting the Drupal Accelerate fund

UPDATE: In Progress. Tony to process in Xero

Admin Team draft budget from STEVEN WALSH

UPDATE: To be discussed when Tony is available and Council Budget has been revised.

Also includes the requirement of a wildcard cert for *

MOTION by Josh H accepts the expenditure of $150 per year on a wildcard SSL certificate on

Seconded: James Iseppi

Passed unanimously.

UPDATE: Awaiting for a more firm budget

3. Review of action items from previous meetings

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.

Discussion held about means of finding people willing to assist with both the maintenance of the website platform as well as the content available on this.

JOSH H to speak to Donna regarding this

UPDATE: Ongoing

UPDATE: to be moved to a general action item. To do a call for help to work on the website. Could this be treated as a project.

We need to at least get the website to D8 and automate the updating process.

ACTION: Josh to get a backup of the site to Craige

ACTION: Craige to stage the website to see how easy it is to update.

UPDATE: Craige to log in to the website to elevate permissions.

ACTION with Josh Hesketh to ensure 3 year server support package in progress

Actions are in progress with Admin Team

UPDATE: A budget will be put forward by the admin team. An initial online hackfest has been conducted. Pending item.

UPDATE: Ongoing.

Update: To be removed from the agenda.

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.

Update: In progress

Update: To be done on friday.

ACTION: Josh H to check with PyconAU to check their budgetary status.

UPDATE: Budget looks fine and trust the treasurer’s accounting abilities.

ACTION: JOSH to seek actuals in budget from PyconAU committee

UPDATE: Completed

Update: to be removed from agenda

ACTION WordCamp Brisbane - JOSH H to contact Brisbane members who may possibly be able to attend conference closing

ACTION: Sae Ra to send through notes on what to say to James.

UPDATE: James delivered a thank you message to WordCamp.

WordCamp was a successful event. Thank you to the organisers.

ACTION: Josh H to get a wrap up/closing report

Potential sponsorship of GovHack.

More information is required on the types of sponsorship that LA can look at.

Clarify with GovHack. LA may not be able to sponsor a prize as you would also need to

UPDATE: Criteria would need to be developed. LA would be able to provide their own judge. Josh S to come with some wording and criteria motion to be held on list.

Value of the prize also to be discussed after budget has been analysed by Josh H and Tony B.

ACTION: Josh H to follow-up on Invoices from WordCamp Sydney

4. Items for discussion

LCA2016 update

Cfp has opened and going very well.

LCA2017 update

Nothing to report

PyCon AU update

Registrations opened. Early birds are looking to sell out very quickly.

Sponsorship is looking good and

ACTION: Sae Ra to approve payment

Drupal South

ACTION: Follow-up on DrupalSouth 2016 enquiry. will need to setup a sub-committee

UPDATE: To work out the sub-committee details with organisers.

WordCamp Brisbane

Seeking a closure report


ACTION: Josh H to follow-up on budget status

5. Items for noting

6. Other business

Backlog of minutes

ACTION: Josh H to help Sae Ra with updating the website and mailing list.

UPDATE: Ongoing.

UPDATE: Completed.

MOTION by Josh H Minutes to be published to

Seconded: Craige

Passed unanimously

Bank account balances need rebalancing

ACTION: Tony to organise transfers to occur including NZ account.

Appropriate treasurers to be notified.

UPDATE: to be discussed on friday

Membership of auDA

Relationship already exists.

LA has the potential to influence the decisions that are made.

ACTION: Council to investigate and look into this further. To be discussed at next fortnight.



David would like to keep working on ZooKeepr.

We will need to find a solution that does not block volunteers from helping work on ZooKeepr.

ACTION: James to look at ZooKeepr

ACTION: Josh S to catch up with David Bell regarding the documentation.

Grant Request from Kathy Reid for Renai LeMay’s Frustrated State

MOTION by Josh H given the timing the council has missed the opportunity to be involved in the Kickstarter campaign. The council believes this project is still of interest to its members and will reach out to Renai on what might be helpful in an in kind, financial or other way. Therefore the grant request is no longer current and to be closed.

Seconded Sae Ra Germaine

Passed unanimously

ACTION: Josh S to contact Renai

7. In Camera

2 Items were discussed in camera

2041 Close

MySQL on NUMA machines just got better!

A followup to my previous entry , my patch that was part of Bug #72811 Set NUMA mempolicy for optimum mysqld performance has been merged!

I hope it’s enabled by default so that everything “just works”.

I also hope it filters down through MariaDB and Percona Server fairly quickly.

Also, from the release notes on that bug, I think we can expect 5.7.8 any day now.

July 13, 2015

Quartz trig

A morning of vacation geocaching, wandering, and walking to Quartz trig. Quartz was a disappointment as it's just a bolt in the ground, but this was a really nice area I am glad I wandered around in. This terrain would be very good for cubs and inexperienced scouts.


See more thumbnails

Interactive map for this route.

Tags for this post: blog pictures 20150713-quartz photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger


July 12, 2015

Twitter posts: 2015-07-06 to 2015-07-12

Git: Renaming/swapping “master” with a branch on Github

I was playing around with some code and after having got it working I thought I’d make just one more little quick easy change to finish it off and found that I was descending a spiral of additional complexity due to the environment in which it had to work. As this was going to be “easy” I’d been pushing the commits to master on Github (I’m the only one using this code) and of course a few reworks in I’d realised that this was never going to work out well and needed to be abandoned.

So, how to fix this? The ideal situation would be to just disappear all the commits after the last good one, but that’s not really an option, so what I wanted was to create a branch from the last good point and then swap master and that branch over. Googling pointed me to some possibilities, including this “deprecated feedback” item from “githubtraining” which was a useful guide so I thought I should blog what worked for me in case it helps others.

  1. git checkout -b good $LAST_GOOD_COMMIT # This creates a new branch from the last good commit
  2. git branch -m master development # This renames the "master" branch to "development"
  3. git branch -m good master # This renames the "good" branch to "master".
  4. git push origin development # This pushes the "development" branch to Github
  5. In the Github web interface I went to my repo’s “Settings” on the right hand side (just above the “clone URL” part) and changed the default branch to “development“.
  6. git push origin :master # This deletes the "master" branch on Github
  7. git push --all # This pushes our new master branch (and everything else) to Github
  8. In the Github web interface I went and changed my default branch back to “master“.

…and that was it, not too bad!

You probably don’t want to do this if anyone else is using this repo though. 😉

This item originally posted here:

Git: Renaming/swapping “master” with a branch on Github

Trellis Decoding for Codec 2

OK, so FreeDV 700 was released a few weeks ago and I’m working on some ideas to improve it. Especially those annoying R2D2 noises due to bit errors at low SNRs.

I’m trying some ideas to improve the speech quality without the use of Forward Error Correction (FEC).

Speech coding is the art of “what can I throw away”. Speech codecs remove a bunch of redundant information. As much as they can. Hopefully with what’s left you can still understand the reconstructed speech.

However there is still a bit of left over redundancy. One sample of a model parameter can look a lot like the previous and next sample. If our codec quantisation was really clever, adjacent samples would look like noise. The previous and next samples would look nothing like the current one. They would be totally uncorrelated, and our codec bit rate would be minimised.

This leads to a couple of different approaches to the problem of sending coded speech over a channel with bit errors:

The first, conventional approach is to compress the speech as much as we can. This lowers the bit rate but makes the coded speech very susceptible to bit errors. One bit error might make a lot of speech sound bad. So we insert Forward Error correction (FEC) bits, raising the overall bit rate (not so great), but protecting the delicate coded speech bits.

This is also a common approach for sending data over dodgy channels. For data, we cannot tolerate any bit errors, so we use FEC, which can correct every error (or die trying).

However speech is not like data. If we get a click or a pop in the decoded speech we don’t care much. As long as we can sorta make out what was said. Our “Brain FEC” will then work out what the message was.

Which leads us to another approach. If we leave a little redundancy in the coded speech, we can use that to help correct or at least smooth out the received speech. Remember that for speech, it doesn’t have to be perfect. Near enough is good enough. That can be exploited to get us gain over a system that uses FEC.

Turns out that in the Bit Error Rate (BER) ranges we are playing with (5-10%) it’s hard to get a good FEC code. Many of the short ones break – they introduce more errors than they correct. The really good ones are complex with large block sizes (1000s of bits) that introduce unacceptable delay. For example at 700 bit/s, a 7000 bit FEC codeword is 10 seconds of coded speech. Ooops. Not exactly push to talk. And don’t get me started on the memory, MIPs, implementation complexity, and modem synchronisation issues.

These ideas are not new, and I have been influenced by some guys I know who have worked in this area (Philip and Wade if you’re out there). But not influenced enough to actually look up and read their work yet, lol.

The Model

So the idea is to exploit the fact that each codec model parameter changes fairly slowly. Another way of looking at this is the probability of a big change is low. Take a look at the “trellis” diagram below, drawn for a parameter that is represented by a 2 bit “codeword”:

Lets say we know our current received codeword at time n is 00. We happen to know it’s fairly likely (50%) that the next received bits at time n+1 will be 00. A 11, however, is very unlikely (0%), so if we receive a 11 after a 00 there is very probably an error, which we can correct.

The model I am using works like this:

  1. We examine three received codewords: the previous, current, and next.
  2. Given a received codeword we can work out the probability of each possible transmitted codeword. For example we might BPSK modulate the two bit codeword 00 as -1 -1. However when we add noise the receiver will see -1.5 -0.25. So the receiver can then say, well … it’s most likely -1 -1 was sent, but it also could have been a -1 1, and maybe the noise messed up the last bit.
  3. So we work out the probability of each sequence of three codewords, given the probability of jumping from one codeword to the next. For example here is one possible “path”, 00-11-00:

    total prob =

    (prob a 00 was sent at time n-1) AND

    (prob of a jump from 00 at time n-1 to 11 at time n) AND

    (prob a 11 was sent at time n) AND

    (prob of a jump from 11 at time n to 00 at time n+1) AND

    (prob a 00 was sent at time n+1)

  4. All possible paths of the three received values are examined, and the most likely one chosen.

The transition probabilities are pre-computed using a training database of coded speech, although it would also be possible to measure them on the fly, training up to each speaker.

I think this technique is called maximum likelihood decoding.
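The path search in steps 3 and 4 can be sketched in a few lines. This is my own toy illustration, not the trellis.m simulation: the transition probabilities and noise variance below are invented, and it uses 2 bit codewords rather than the 4+ bit parameters of the real codec:

```python
import itertools
import math

NOISE_VAR = 0.5  # assumed AWGN variance per BPSK symbol

def codeword_loglik(received, state):
    """log P(received soft symbols | 2-bit codeword `state` sent as BPSK +/-1)."""
    bits = [(state >> 1) & 1, state & 1]
    symbols = [1.0 if b else -1.0 for b in bits]
    return sum(-(r - s) ** 2 / (2 * NOISE_VAR)
               for r, s in zip(received, symbols))

def logp(p):
    return math.log(p) if p > 0 else -1e9  # log domain: multiplies become adds

# trans[a][b] = log P(next codeword b | current codeword a), invented numbers
trans = [[logp(p) for p in row] for row in [
    [0.50, 0.25, 0.20, 0.05],   # from 00: big jumps to 11 are rare
    [0.25, 0.40, 0.25, 0.10],
    [0.10, 0.25, 0.40, 0.25],
    [0.05, 0.20, 0.25, 0.50],
]]

def decode_middle(rx):
    """rx = three pairs of received BPSK symbols. Exhaustively scores every
    possible 3-codeword path and returns the middle codeword of the best one."""
    obs = [[codeword_loglik(r, s) for s in range(4)] for r in rx]
    best_score, best_path = -float("inf"), None
    for a, b, c in itertools.product(range(4), repeat=3):
        score = (obs[0][a] + trans[a][b] + obs[1][b] +
                 trans[b][c] + obs[2][c])
        if score > best_score:
            best_score, best_path = score, (a, b, c)
    return best_path[1]

# A noisy middle codeword that hard decision would read as 11 (both symbols
# slightly positive), sandwiched between two clean 00s:
rx = [(-1.0, -1.0), (0.2, 0.1), (-1.0, -1.0)]
print(decode_middle(rx))  # -> 0: the unlikely 00-11-00 path loses to 00-00-00
```

With clean 00s either side, the improbable 00 to 11 jumps outweigh the slightly-11-looking middle symbols, so the decoder smooths the error away, which is the effect described above.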

Demo and Walk through

To test this idea I wrote a GNU Octave simulation called trellis.m

Here is a test run for a single trellis decode. The internal states are dumped for your viewing pleasure. You can see the probability calculations for each received codeword, the transition probabilities for each state, and the exhaustive search of all possible paths through the 3 received codewords. At the end, it gets the right answer: the middle codeword is decoded as a 00.

For convenience the probability calculations are done in the log domain, so rather than multiplies we can use adds. So a large negative “probability” means really unlikely, a positive one likely.

Here is a plot of 10 seconds of a 4 bit LSP parameter:

You can see some segments where it is relatively stable, and some others where it’s bouncing around. This is a mesh plot of the transition probabilities, generated from a small training database:

It’s pretty close to an “eye” matrix. For example, if you are in state 10, it’s fairly likely the next state will be close by, and less likely you will jump to a remote state like 0 or 15.

Here is test run using data from several seconds of coded speech:

octave:130> trellis

loading training database and generating tp .... done

loading test database .... done

Eb/No: 3.01 dB nerrors 28 29 BER: 0.03 0.03 std dev: 0.69 1.76

We are decoding using trellis based decoding, and simple hard decision decoding. Note how the number of errors and BER is the same? However the std dev (distance) between the transmitted and decoded codewords is much better for trellis based decoding. This plot shows the decoder errors over 10 seconds of a 4 bit parameter:

See how the trellis decoding produces smaller errors?

Not all bit errors are created equal. The trellis based decoding favours small errors that have a smaller perceptual effect (we can’t hear them). Simple hard decision decoding has a random distribution of errors. Sometimes you get the Most Significant Bit (MSB) of the binary codeword flipped which is bad news. You can see this effect above, with a 4 bit codeword, a MSB error means a jump of +/- 8. These large errors are far less likely with trellis decoding.
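The point about error magnitude can be shown directly: flipping bit k of a natural binary codeword changes the decoded value by exactly 2^k, so an MSB hit on a 4 bit parameter is a jump of 8 levels. A quick sketch:

```python
# Error magnitude caused by flipping each single bit of a 4-bit codeword
word = 0b0101  # value 5, arbitrary example
for bit in range(4):
    flipped = word ^ (1 << bit)
    print(f"bit {bit} flipped: {word} -> {flipped}, error {abs(flipped - word)}")
# The MSB (bit 3) flip gives the largest error: |5 - 13| = 8
```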


Here are some samples that compare trellis based decoding to simple hard decision decoding, when applied to Codec2 at 700 bit/s on an AWGN channel using PSK. Only the 6 LSP parameters are tested (short term spectrum); no errors or correction are applied to the excitation parameters (voicing, pitch, energy).

Eb/No (dB) BER Trellis Simple (hard dec)
big 0.00 Listen Listen
3.0 0.02 Listen Listen
0.0 0.08 Listen Listen

At 3dB, the trellis based decoding removes most of the effects of bit errors, and it sounds similar to the no error reference. Compared to simple decoding, the bloops and washing machine noises have gone away. At 0dB Eb/No, the speech quality is improved, with some exceptions. Fast changes, like the “W” in double-you, and the “B” in Bruce become indistinct. This is because when the channel noise is high, the probability model favours slow changes in the parameters.

Still – getting any sort of speech at 8% bit error rates with no FEC is pretty cool.

Further Work

These techniques could also be applied to FreeDV 1600, improving the speech quality with no additional overhead. Further work is required to extend these ideas to all the codec parameters, such as pitch, energy, and voicing.

I need to train the transition probabilities with a larger database, or make it train in real time using off air data.

We could include other information in the model, like the relationship of adjacent LSPs, or how energy and pitch change slowly in strongly voiced speech.

Now 10% BER is an interesting, rarely explored area. The data guys start to sweat above 1E-6, and assume everyone else does. At 10% BER FEC codes don’t work well; you need a really long block size or a low FEC rate. Modems struggle due to synchronisation issues. However at 10% the Eb/No versus BER curves start to get flat, so a few dB either way doesn’t change the BER much. This suggests small changes in intelligibility (not much of a threshold effect). Like analog.


For speech, we don’t need to correct all errors; we just need to make it sound like they are corrected. By leaving some residual redundancy in the coded speech parameters we can use probability models to correct errors in the decoded speech with no FEC overhead.
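A minimal sketch of the idea (hypothetical costs and transition model, not the Codec2 implementation): a trellis search over a quantised parameter trajectory, where a transition prior that penalises big frame-to-frame jumps pulls an isolated MSB-flip error back toward the trajectory, with no FEC bits sent.

```python
# Toy trellis (Viterbi) decode of a quantised parameter trajectory.
# Hypothetical hand-picked costs stand in for trained probabilities.
def viterbi(observed, levels=16, jump_penalty=1.0, obs_weight=0.5):
    """Most likely level sequence given noisy observations.

    Per-frame cost = obs_weight*|level - observed|   (channel likelihood)
                   + jump_penalty*|level - previous|  (transition prior).
    """
    # best[s] = (cost, path) of the cheapest sequence ending in state s
    best = {s: (obs_weight * abs(s - observed[0]), [s]) for s in range(levels)}
    for obs in observed[1:]:
        new = {}
        for s in range(levels):
            cost, path = min(
                (best[p][0] + jump_penalty * abs(s - p), best[p][1])
                for p in range(levels)
            )
            new[s] = (cost + obs_weight * abs(s - obs), path + [s])
        best = new
    return min(best.values())[1]

# A slowly varying parameter with one MSB flip (5 -> 13) in frame 3:
print(viterbi([5, 5, 6, 13, 6, 5]))  # the outlier is smoothed away
```

Here jump_penalty plays the role of the trained transition probabilities; in practice these would come from a speech database rather than a hand-picked constant.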

This work is another example of experimental work we can do with an open source codec. It combines knowledge of the channel, the demodulator and the codec parameters to produce a remarkable result – improved performance with no FEC.

This work is in its early stages. But the gains all add up. A few more dB here and there.


  1. I found this description of Soft Decision Viterbi decoding useful.
  2. Last year I did some related work on natural versus gray coding.

July 11, 2015

Electronics (TV) Repair, Working at Amazon, and Dealing With a Malfunctioning Apple iDevice

I obviously do component level electronics repair from time to time (I've been doing electronics repair/modification since I was fairly young, on devices ranging from food processors all the way up to advanced component level repair of laptops). One of my recent experiments was with large screen flat panel (Plasma, LCD, LED, etc.) television sets. Some general notes:

- take precautions. If you've ever watched some of those guys on YouTube, you'll realise that they are probably amateur electricians who have probably never been shocked before. It's one thing to work with small electronic devices; it's an entirely different matter to be working with mains voltage. Be careful...

- a lot of the time electronic failures occur gradually over time (although the amount of time can vary drastically)

- don't just focus on repairing it so that power can flow through the circuit once more; it's possible that it will just fail again. Home in on the problem area and make sure everything's working, so that you don't have to keep dealing with other difficulties down the track

- it may only be possible to test some components out of circuit. While testing components with a multimeter will help, you may need to purchase more advanced and expensive diagnostic equipment to figure out the true cause of the problem

- set up a proper test environment. Ideally, one on a separate circuit, with safety mechanisms in place to reduce the chances of a total blackout in your house and to increase your personal safety

- any information that you take from this is at your own risk. Please don't think that any of the information here will turn you into a qualified electronics technician or will allow you to solve most problems that you will face

- a lot of the time information on the Internet can be helpful but only applies to particular conditions. Try to understand and work the problem rather than just blindly following what other people do. It may save you a bit of money over the long term

Philips 32PFL5522D/05 - Completely dead (no power LED or signs of life) - Diagnosis and repair

- electronics repair is becoming increasingly un-economical. Parts may be impossible to find, and replacing the TV rather than fixing it may actually be cheaper (especially when the screen is cracked; a replacement screen is almost certain to cost more than the set itself). The only circumstances where it's likely to be worth it is if you have cheap spare parts on hand or the failure involves a relatively small, minor component. The other thing you should know is that while the device may be physically structured in such a way as to appear modularised, it may not fail in such a fashion. I've been reading about boards which fail but have no mechanism to stop the fault from bleeding into other modules, which means you end up in a failure loop: replace one bad board with a good one, and the leftover apparently-good board eventually fails and takes out the new, good board. The cycle then continues until the technician realises this or news of such a design spreads. You may have to replace both boards at the same time, which then makes the repair un-economical

- spare parts can be extremely difficult to source or are incredibly expensive. Moreover, replacement parts can vary drastically in quality. If at all possible work with a source of known quality. Else, ask for demo parts, particularly with Asian suppliers, who may provide them for free as a means of establishing a longer term business relationship

- be careful when replacing parts. Try to do your best to replace like for like. Certain systems will operate in a degraded state if/when using sub-par replacements but will ultimately fail down the line

- use all your senses (and head) to track down a failure more quickly (sight and smell in particular for burnt out components). Sometimes, it may not be obvious where the actual failure is as opposed to where it may appear to be coming from. For instance, one set I looked at had a chirping power supply. It had actually suffered from failures of multiple components which made it appear/sound as though the transformer had failed. Replacement of all relevant components (not the transformer) resulted in a functional power supply unit and stopping of the chirping sound

- as with musical instruments, teardowns may be the best that you can get with regards to details of how a device should work. This is nothing like school/University where you are given a rough idea of how it should work. You may be completely blind here...

- components may be shared across different manufacturers. It doesn't mean that they will work if swapped though; they could be using different versions of the same base reference board (similar to the way in which graphics, sound, telecommunications, and network cards rely on reference designs in the ICT sector)


Magnavox has a very similar layout to a similar size Philips LCD TV

Apparently, Amazon are interested in some local talent.

There are some bemusing tales of recruitment and the experience of working there though.

If your iPhone, iPad, or iPod touch doesn't respond or doesn't turn on: if the device is in a lot of trouble I often just run the following command on the storage, 'dd if=/dev/zero of=/dev/[iPod storage node]'. This deliberately corrupts the filesystem and forces restoration of iOS to factory settings/setup. (Obviously destructive: all data on the device is lost.)

Sometimes digitizers play up. Apparently, a lot of strange behaviour can occur if certain cables are bent improperly or if there isn't enough space/insulation between certain components.

Identify your iPad model.

If your device is suffering from device corruption issues you may need to backup your music first...

A lot of substances can be used to remove scratches from your electronic device. Some of them not so obvious in the way that they actually work (solvents and abrasives are the most common techniques that are used).

Labor on refugees

Sorry, technical folk, this is going to be a political blog post.

I recently got an email from my local member, Andrew Leigh, that raised an issue I feel passionately about; here is my response.

On 09/07/15 14:55, Andrew Leigh wrote:[snip]
> ▪ Some people have asked me *why Labor supported the government’s bill to
> continue regional processing*. This is a tough question, on which reasonable
> people can disagree, but the best answer to this is to read Bill Shorten’s
> speech to the House of Representatives
> on the day the legislation was introduced.
Hi Andrew,

I'm sorry, but I cannot agree with the logic Bill Shorten and the Labor party has expressed in that speech.

Firstly, anyone watching the international problems with refugees will realise that Australia's intake is pitiful and stingy compared to some of its key allies and comparable nations and especially when compared to its population size and lifestyle. It is hypocritical to say "we don't want people to risk journeying across the sea from Indonesia, but we're happy for them to remain illegal immigrants there", especially when you look at the life that those people face as refugees there.

As an aside, though, I would say that it is still partly correct - it is more humane for them to remain in Indonesia than to be detained indefinitely in the inhuman, under-resourced and torturous conditions on Manus Island and Nauru. It is shameful to me that the Labor party can ignore this obvious contradiction.

But more importantly, the logic that we're somehow denying "people smugglers a product to sell" by pushing boats back into international waters shows no understanding of people smuggling as a business. Australia is still very much a destination, it's just that people now come with visas on planes and they pay even more for this than they used to. There is still a thriving trade in getting people into Australia, it's just been made more expensive - in the same way that making heroin illegal has not caused it to suddenly vanish from the face of the earth.

All we're doing by punishing people who come by boat to seek refuge in Australia is punishing the very desperate, the worst off, the people who have literally fled with their clothes and nothing else.

Other people with money still arrive, overstay their visas, get jobs as illegal immigrants or on tourism visas. The ABC has exposed some of these ridiculous, unethical companies trading on foreign tourists and grey market labourers. The Labor party, of all parties, should be standing up for these people's rights yet it seems remarkably silent on this issue.

The point that I think Labor needs to learn and the point I ask you to express to your colleagues there is that we don't want Labor to return to its policies in 2010. We thought those were inhuman and unjust then, and we still do now. Invoking them as a justification for supporting the Government now is bad.

Personally, I want Labor to do three things with regard to refugees:

  1. Move back to on-shore detention and processing. The current system is vastly more expensive than it needs to be, and makes it more difficult for UN officials and our own members of parliament and judiciary to be able to examine the conditions of detention. The Coalition keeps telling everyone about how expensive their budget is but seems remarkably silent on why we're paying so much to keep refugees offshore.
  2. Provide better ways of settling refugees, such that one can cut the "people smuggler" middle men out of the deal.

    For example, set up refugee processing in places such as Sri Lanka and Afghanistan where many refugees come from. Set a fixed price per person for transportation and processing in Australia, such that it undercuts the people smugglers - according to figures I read in 2010 this could be $10,000 and still be 50% less than black market figures.

  3. Ensure accountability and transparency of the companies such as Serco that are running these centres. If the government was running them and people were being abused, the government would be held accountable; when private companies do this the government wipes its hands and doesn't do a thing.
And on a more conversational note, I'd be interested in your views on this as an economist. There is obviously an economy of people smuggling - do we understand it? Is there any economic justification for offshore detention? All markets must work with a certain amount of illegal activity - can we work _with_ the black market rather than trying to work against it?

I do appreciate your updates and information and I look forward to more of your podcasts.

All the best,


Gather 2015 – Afternoon Sessions

Panel: “How we work” featuring Lance Wiggs, Dale Clareburt, Robyn Kamira, Amie Holman – Moderated by Nat Torkington

  • Flipside of Startups given by Nat
  • Amie – UX and Services Designer for the Govt, thinks her job is pretty cool. Puts services online.
  • Lance – Works for NZTE's Better by Capital programme. Also runs an early stage fund. Multiple starts and fails
  • Dale – Founder of Weirdly. Worked her way up to the top of a recruitment company (small to big). Decided to found something for herself.
  • Robyn – Started business 25 years ago. IT consultant, musician, writer.
  • Nat – Look at what you are getting from the new job. Transition to new phase in life. Want to be positive.
  • Types of jobs: Working for someone else, work for yourself, hire other people, investor. Each has its own perks, rewards and downsides.
  • Self employed
    • Big risk around income, peaks and troughs. Robyn always lived at the bottom-of-the-trough level of income. Some people have a big fear of where the next job is coming from.
    • Robyn – Charged Govt as much as possible. Later on charged just below what the really big boys charged. Also has lower rates for community orgs. Sniffed around to find out the rates. Sometimes asked the client. Often RFPs don’t explicitly say, so you have to ask.
    • Pricing – You should be embarrassed about how much you charge for services.
    • Robyn – Self promotion is really hard. Found that contracts came out of Wellington. Book meetings in cafes back to back. Chat to people, don’t sell directly.
  • Working for others
    • Amie – Working in a new area of government, but it is an area that is growing. Fairly permissive area, lots of gaps that they can fill.
    • Dale – Great experience as an employee. In environment with lot of autonomy in a fast growing company.
    • Lance – Worked for Mobil – lots of training courses, overseas 6 months after being hired. 4 years, 4 different cities, steep learning curve, subsidized housing etc. “Learning curve stopped after 4 years and then I left”.
    • Big companies downside: Multiple stakeholders, Lots of rules
    • Big company upside: Can do a startup on the side, eg a family. Secure income. Get to play with big money and big toys.
  • Startup
    • Everything on steroids
    • Really exciting
    • Starting all parts of a company at once
    • Responsibility for business and people in it
    • Crazy ups and downs. Brutal emotional roller-coaster
    • Lance lists 5 businesses off the top of his head that failed that he was at. 3 of which he was the founder
    • Worst that can happen is that you can lose your house
    • Is this life for everyone? – Dale “yes it can be, need to go in with your eyes open”.  “Starting a business can be for everyone. I’m the poorest I’ve ever been now but I’m the happiest I’ve ever been”
    • At a startup you are not working for yourself, you are working for everybody else. Dale says she tries to avoid that.
    • Robyn – “If your life is gone when you are in a business then you are doing it wrong.”
    • If you are working from home you can get isolated, get some peer support and have a drink, coffee with some others.
  • Robyn – Recommends “How to Win Friends and Influence People”
  • Dale
    • Jobhunters – Look for companies 1st and specific job 2nd
    • Startup – Meet everyone that you know and ask their opinion on your pitch
    • Young People going to Uni – You have to get work experience, as a recruiter she looks at experience 1st and pure academic history second.
  • Lance
    • Balance between creating income, creating wealth, learning
    • Know what you are passionate about and good at
    • It is part of our jobs to support everyone around us. Promote other people
  • Amie
    • Find the thing that is your passion
    • When you are delivering your passion then you are delivering something relevant

 Pick and Mix

  • Random Flag generator – @polemic
    • See Wikipedia page for parts of a flag
    • 3 hex numbers are the palette
    • 4 numbers represent the pattern
    • Next number will be the location
    • next number which color will be assigned
    • Last number will be a tweak number
    • Up to 8 or 9 of the above
    • Took python pyevolve and ran evolution on them.
  • Alex @4lexNZ , @overtime
    • E-sports corporate gaming league
    • untested in NZ
    • Someone suggested cold calling CEOs or writing them letters
  • Simon @slyall (yes me)
    • Low volume site for announcements
  • Mutation testing
    • Tweak values in the code under test; the reverse of fuzzing
  • Landway learning  – @kiwimrdee
    • Looking for computers to borrow for class
    • They teach lots of stuff
  • Poetry for computers – @kiwimrdee
    • Hire somebody with an English/arts background who understands language rather than somebody from a CS background who understands machines
    • Lossless image compression for the web
    • Tools vary across the platform
  • Glen – Make computers learn to play Starcraft 1
    • Takes replays of humans playing starcraft
    • Getting computer to learn to play from that DB
    • It is struggling
  • Emergent political structures in tabletops games

Never check in a bag – How to pack

  • 48 hour bag
    • Laptop and power
    • Always – Zip up pouch, tissues, hand sanitizer, universal phone charger, breath mints, the littlest power plug (check it will work on multiple voltages), food bar, chocolate.
    • If more than 48 hours – notebook, miso soup, headphones, pen, laptop charger, apple plugs ( See “world travel kit” on apple site)
    • Get smallest power plug that will charge your laptop
    • Bag 3 – Every video adapter in the world, universal power adapter, airport express.
    • TP-link battery powered wifi adapters
    • If going away, just move laptop etc to this bag
    • Packing Cell
      • Enough clothes to get me through 48 hours
      • 2 * rolled tshirts (ranger rolling)
      • 2 pairs of underwear
      • 2 pairs of socks
      • Toiletries. Ziplock bag that complies with TSA rules for gels etc.
      • Other toiletries in different bag
      • Rip off stuff from hotels, also Kmart and local stores.
      • Put toiletries ziplock near door to other bag so easy to get out for security.
      • Leave packing cell in Hotel when you go out
    • Learn to Ranger roll socks and shirts etc.
  • 6 weeks worth of stuff
    • In the US you can have huge carry-on
    • Packs 2 weeks worth of clothes
    • Minaal Bag (expensive but cool).
    • Schnozzel bag – Vacuum pack clothing bag
  • Airlines allow 1 carryon bag up to 7 kgs + 1 bag for other items (heavy stuff can go into that)
  • Pick multi-color packing cells so you can color-code them.
  • Elizabeth Holmes and Matilda Kahl and Steve Jobs all wear same stuff every day.
  • Wear ballet heels on the plane
  • Women: no more than 2 pairs of shoes ever, one of which must be good for walking long distances
  • Always be charging

 Show us your stack

  • I was running this session so didn’t take any notes.
  • We had people from about 5 companies give a quick overview of some stuff they are running.
  • A bit of chat beforehand
  • Next year if I do this I probably need to do 5 minutes time limits for everyone

Close from Rochelle

  • Thanks to Sponsors
  • Thanks to Panellists
  • Thanks to catering and volunteer teams
  • Will be back in 2016



July 10, 2015

Gather 2015 – Morning Sessions

Today I’m at the Gather 2015 conference. This was originally “Barcamp Auckland” before they got their own brand and went off to do random stuff. This is about my 5th year or so here (I missed one or two).

Website is . They do random stuff as well as the conference.


  • Welcome and intro to conference history from Ludwig
  • Rochelle thanks the sponsors
  • Where to go for dinner, no smoking, watch out for non-lanyard people, fire alarms, etc
  • Quiet room etc

Lessons learnt from growing a small team from 5 to 15

  • Around 30 people. Run by Ben, works at sitehost, previously worked at Pitch
  • Really hard work. Takes a lot of time and real effort to build a great team
  • Need to dedicate time and resources to growing the team. Need someone who is focussed on growing the team and keeping the current team working
  • Cringe when people say “HR” but you need someone in that sort of role, and early on.
    • At around 16 people and doesn’t have a full HR person yet. Before hiring full-time, have someone with scheduled time to focus on team or company culture. In an ideal world that person might not be in a manager role but be a bit senior (so they hear what the lower level employees say).
  • Variety and inclusion are key to a happy team
    • Once you are at 10+ members team will be diverse so “one size fits all” won’t work anymore. Need to vary team activities, need to vary rewards. Even have team lunches at different places.
  • Hire for culture and fit
    • From the first person
    • Easier to teach someone skills than to teach them to be a good team member
    • Anecdote: Hired somebody who didn’t fit culture, was abrasive, good worker but lost productivity from others.
    • Give people a short term trial to see if they fit in.
  • You will need to change the way you communicate as a team as it grows
    • A passing comment is not enough to keep everybody in the loop
    • Nobody wants to feel alienated
    • Maybe chat software, noticeboard, shared calendar.
  • Balance the team work the members do
    • Everybody needs to enjoy the work.
    • Give people interesting rewarding work, new tech, customer interaction
    • Share the no-fun stuff too. Even roster if you have to. Even if somebody volunteers don’t make them always do it.
  • Appreciate you team members
    • Praise them if they have put a lot of work into something
    • Praise them before you point out the problems
    • Listen to ideas no matter who they come from.
    • 5 Questions/Minutes rule
  • If someone is not working well, wonder if the problem is elsewhere in their life. Maybe talk to them. Job of everyone in the team
  • Appreciate your teams work, reward them for it
  • Do what feels right for your team. What works for some teams might not work for all. No “one size fits all”
  • Building great teams isn’t science it is an art. Experiment a bit.
  • Taking the time to listen to 10 people instead of just 5 takes longer. Maybe this can be naturally taken on by others in the team, not just the “boss”.
  • Have a buddy for each new hire. But make sure the buddies don’t get overloaded by constantly doing this with each new hire.
  • Going from 10 to 100 people: the same thing doesn’t work at each company size.
  • The point where you can get everybody in a room till when you can’t. At that point you have multiple teams and tribalism.
  • If you have a project across multiple teams then try and put everybody in that project together in a room.
  • Have people go to each others standups
  • Hire people who can handle change
  • Problem if you buy a small company: the small company may want to keep its culture.
  • Company that does welcome dinners not farewell dinners
  • Make sure people can get working when they arrive, have an email address etc, find out if they have preferences like nice keyboard.
  • Don’t hire when you are so busy that you can’t properly get them on board (or you may pick the wrong person). Never hire impulsively. Hire ahead of time.
  • Don’t expect them to be fully productive straight away. Give them something small to start on, not too complicated, not too dependent on your crazy internal systems. But make sure it is within their skill level in case they struggle.
  • Maybe summer student projects. Find good people without being stuck with someone. Give them a project that isn’t high enough priority for the FT people.
  • Create training material

 Writing for fun and profit

  • Run by Peter Ravlich
  • Scrivener – IDE for writing
  • Writing full time (with support from partner), currently doing 4 projects simultaneously
  • Less community for Fantasy Writers than for literary writers. Bias in NZ against Genre fiction
  • Community – SpecFicNZ – for speculative fiction. SciFi con each year and a stand at Armageddon each year. $30 per year
  • If you write roleplaying games look at selling via
  • Remember if publishing with Amazon then remember to be non-exclusive
  • For feature writing you need to know editors who like you and like your work.
  • “Just keep writing” , only way you’ll ever get better
  • Writing a weekly column:
    • The Best way: Write articles week ahead of time, edited by his wife, sent to the editor well in advance.
    • Leaving it to the last minute with no pre-editing: quality varies, and the speaker’s column got dropped
  • Find the type of writing that you like and are good at.
  • Run everything past a reading group. “Am I on the right track?”
  • Treated writing as a job. Scheduled “Write for an hour, edit for 30 minutes, lunch, then repeat”. Make yourself.
  • Lots of sites that push you to write a set number of words. Give you badges, pictures of kittens or punishments to keep you to a wordcount
  • Join a online writing group and post regular updates and get a bit of feedback
  • Daily Routines Blog or spinoff book for some ideas
  • Developmental editor or Structural editor
    • Developmental editor – Go to early, guidelines of what you should be doing, what direction you should be going. What is missing. Focused at plot level.
    • Structural Editor – Goes through line-by-line
  • Need to find an editor who suits your style of writing; knowing the genre is important. Look for those who have edited books/authors in your area.
  • Self editing – set aside novel, change font, new device, read though again. Change context so looking at it with new eyes.
  • Get contract with editor reviewed by Lawyer with experience in the industry (and on your side)
  • Most traditional publishers expect to see an edited novel
  • Talk to agents, query those who work with authors in similar areas to you.
  • Society of Authors
    • Have some legal experts, give you a reference
  • Kindle K-boards, a bit romance orientated but very good for technical stuff.
  • Go to poetry or reading/writing group. Get stuff out to other people. Once you have got it out to some, even just a small group then small jump to send it out to billions.
  • Have a strategy on how to handle reviews; probably don’t engage with them.
  • Ann Friedman – Disapproval Matrix
  • You are your own toughest reviewer
  • Often people who went to journalism school, although not many actual journalists
  • Starling Literary Journal
  • Lists of Competitions and festivals in various places
  • Hackathon ( Step it up 2015 ) coming up, one group they want is for journalists who want to get more money into the job

The World of Vexillology – Flag Design

  • Dan Newman
  • NZ Flag design cutoff this coming Thursday (the 16th of July)
  • People interested in how flag designs originate, eg how naval custom influences designs
  • 6000 odd submissions -> 60 shortlist -> 4 voted in referendum -> 1 vs current
  • 60 people at meeting in Wellington, less in other places.
  • Government Website
  • First time a country has changed its flag by referendum rather than at the time of a significant event (eg independence)
  • A lot of politicians are openly republican, but less push and thought in rest of population
  • Concern that silver flag looks like corporate logo
  • Easier to pretend you are an Australian and ask them “What would the NZ flag look like?”. Eg “Green kangaroo on yellow”, “White silver fern or Kiwi on black background”
  • Also lots of other countries use the Southern Cross
  • In most countries the national team colors are close to those of the flag
  • Feeling if even flag changes now, then after “full independence” will change again
  • What will happen if celebs come out in favour of a specific design
  • Different colours have different associations ( in different places )
  • All sorts of reasons why different colours are on a flag
  • The silver fern looks like a fish to some
  • Needs to look good scaled down to emoji size

Bootstrapping your way to freedom

  • From Mark Zeman – Speedcurve
  • Previous gather sessions have been orientated toward VC and similar funding
  • There is an alternative where you self-fund
  • Design teacher – all students wanted to work on LOTR because it was where all the publicity was.
  • Bootstrapping – Doing it your way, self funded, self sustaining, usually smaller
  • Might take Capital later down the track
  • 3Bs seen as derogatory
  • Lots of podcasts, conferences and books etc
  • See Jason Cohen; many bits of this presentation are taken from him
  • The “ideal” bootstrapped business. Look at it from your own constraints
  • Low money, low time, self funded, try to create a cash machine
  • SAAS business lower end is very low. Very small amount per year
  • Low time if working on the side
  • Trying to get to maybe $10k/month to go fulltime
  • Recurring revenue. 150 customers at $66/month. Not many customers, not a huge value product, but it has to be a reasonable amount.
  • Maybe not one-off product
  • Enterprise vs consumer space
  • Hard to get there with $0.99 one-offs in App store
  • Annual plans create cashflow
  • Option: Boutique product. Be honest about who you are and how big you really are; don’t pretend to be a big company
  • B2B is a good space to be in. You can call 150 business and engage with them.
  • Not critical, Not real time (unless you want to be up at 3am)
  • Pick something that has “naturally recurring pain”. Eg “Not a wedding planner”; probably multiple times per month
  • Aftermarkets. Eg plugins for WordPress: something small, put 20 hours into it, put it up for $5. See also Xero, Salesforce.
  • Pick Big Markets, lots of potential customers
  • “Few NZ clients great for GST since I just get refunds”
  • Better By Design. Existing apps mean there is already a market. Took an existing open source product and put a nice wrapper on it
  • A number of companies have published their numbers. Look at their early days and how long it took them to get to $10k/month (eg many took a year or two to get there).
  • Option to do consultancy on the side if you go “full time”. Cover the gap between your new business and your old wage. Had a 1 year contract that let him go half time on the new biz but cover old expenses.
  • Don’t have false expectations on how quickly it will happen
  • Hard when it was a second job. Good because it was different from the day-job, but a lot of work.
  • Prototype and then validate. In most cases you should go the other way around.
  • If you want to talk to someone have something to offer. Have a pay it forward.
  • Big enterprises have people too. Connect to one guy inside and they can buy your product out of his monthly credit card bill.
  • Not everybody is doing all the cool techniques. Even if you are a “B” then you are ahead of a lot of the “C”s . eg creating sites with responsive design.
  • 1/3 each – Building Business, Building Audience, Building Product
  • Loves doing his GST etc
  • In his case he did each in turn: Product, then Audience, then Business
  • Have a goal. Do you want to be a CEO? Or just a little company?
  • His success measures – Fun, time with kids, travel, money, flexibility, learning, holidays, adventures, ideas, sharing
  • Resources: Startups for the Rest of us. A Smart Bear Blog, Amy Hoy – Unicorn Free, GrowthHacker TV, Microconf Videos


Landscape design: a great analogy for the web

Map of the garden at Versailles

I often find myself describing the digital domain to people who don't live and breathe it like I do. It's an intangible thing, and many of the concepts are coded in jargon. It doesn't help that every technology tool set uses its own specific language, sometimes using the same words for very different things, or different words for the same things. What's a page? A widget? A layout? A template? A module, plugin or extension? It varies. The answer "depends".

Analogies can be a helpful communication tool to get the message across, and get everyone thinking in parallel.

One of my favourites is to compare a web development project to a landscape design project.

One of the first things you need to know is who this landscape is for and what sort of landscape it is. The design required for a public park is very different to one suitable for the back courtyard of an inner city terrace house.

You also need to know what the maintenance resources will be. Will this be watered and tended daily? What about budget? Can we afford established plants, or should we plan to watch the garden grow from seeds or seedlings?

The key point of comparison, is that a garden, whether big or small, is a living thing. It will change, it will grow. It may die from neglect. It may become an un-manageable jungle without regular pruning and maintenance.

What analogies do you use to talk about digital design and development?

Image: XIIIfromTOKYO - Plan of the gardens of Versailles - Wikipedia - CC-BY-SA 3.0