Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

## April 17, 2014

Zoe slept until 7:45am this morning, which is absolutely unheard of in our house. She did wake up at about 5:15am yelling out for me because she'd kicked her doona off and lost Cowie, but went back to sleep once I sorted that out.

She was super grumpy when she woke up, which I mostly attributed to being hungry, so I got breakfast into her as quickly as possible and she perked up afterwards.

Today there was a free magic show at the Bulimba Library at 10:30am, so we biked down there. I really need to work on curbing Zoe's procrastination. We started trying to leave the house at 10am, and as it was, we only got there with 2 minutes to spare before the show started.

Magic Glen put on a really good show. He was part comedian, part sleight of hand magician, and he did a very entertaining show. There were plenty of gags in it for the adults. Zoe started out sitting in my lap, but part way through just got up and moved closer to the front to sit with the other kids. I think she enjoyed herself. I'd have no hesitation hiring this guy for a future birthday party.

Zoe had left her two stuffed toys from the car at Megan's house on Tuesday after our Port of Brisbane tour, and so after the magic show we biked to her place to retrieve them. It was close to lunch by this stage, so we stayed for lunch, and the girls had a bit of a play in the back yard while Megan's little sister napped.

It was getting close to time to leave for our flu shots, so I decided to just bike directly to the doctor from Megan's place. I realised after we left that we'd still left the stuffed toys behind, but the plan was to drive back after our flu shots and have another swim in their neighbour's pool, so it was all good.

We got to the doctor, and waited for Sarah to arrive. Sarah and I weren't existing patients at Zoe's doctor, but we'd decided to get the flu shot as a family to try and ease the experience for Zoe. We both had to do new patient intake stuff before we had a consult with Zoe's doctor and got prescriptions for the flu shot.

I popped next door to the adjacent pharmacy to get the prescriptions filled, and then the nurse gave us the shots.

For the last round of vaccinations that Zoe received, she needed three, and she screamed the building down at the first jab. The poor nurse was very shaken, so we've been working to try and get her to feel more at ease about this one.

Zoe went first, and she took a deep breath, and she was winding up to freak out when she had her shot, but then it was all over, and she let the breath go, and looked around with a kind of "is that it?" reaction. She didn't even cry. I was so proud of her.

I got my shot, and then Sarah got hers, and we had to sit in the waiting room for 10 minutes to make sure we didn't turn into pumpkins, and we were on our way.

We biked home, I grabbed our swim gear, and we drove back to Megan's place.

The pool ended up being quite cold. Megan didn't want to get in, and Zoe didn't last long either. Megan's Mum was working back late, so I invited Megan, her Dad and her sister over for dinner, and we headed home so I could prepare it. One of Zoe's stuffed toys had been located.

We had a nice dinner of deviled sausages made in the Thermomix, and for a change I didn't have a ton of leftovers. Jason had found the other stuffed toy in his truck, so we'd finally tracked them both down.

After Megan and family went home, I got Zoe to bed without much fuss, and pretty much on time. I think she should sleep well tonight.

## April 16, 2014

I ordered some alginate the other day, and it arrived yesterday, but we were out, so I had to pick it up from the post office this morning.

Anshu and I picked it up before Zoe was dropped off. We had a couple of attempts at making some, but didn't quite get the ratios or the quantity right, and we were too slow, so we'll have to try again. The plan is to try and make a cast of Zoe's hand, since we were messing around with plaster of Paris recently. I've found a good Instructable to try and follow.

Nana and her dragon boating team were competing in the Australian Dragon Boat Championships over Easter, and her first race was today. It also ended up that today was the best day to try and go and watch, so when she called to say her first race would be around noon, I quickly decided we should jump in the car and head up to Kawana Waters.

We abandoned the alginate, and I slapped together a picnic lunch for Zoe and me, and we bid Anshu farewell and drove up.

Zoe's fever seemed to break yesterday afternoon after Sarah picked her up, and she slept well, but despite all that, she napped in the car on the way up, which was highly unusual, but helped pass the time. She woke up when we arrived. I managed to get a car park not too far from the finish line, and we found Nana, whose team was about to enter the marshalling area.

Her boat was closest to the shore we were watching from, and her boat came second in their qualifying round for the 200 metre race, meaning they went straight through to the semi-finals.

The semi-finals were going to be much later, and I wanted to capitalise on the fact that we were going to have to drive right past my Mum and Dad's place on the way home to try and see my sister and her family, since we missed them on Monday.

We headed back after lunch and a little bit of splashing around in the lake, and ended up staying for dinner at Mum and Dad's. Zoe had a great time catching up with her cousin Emma, and fooling around with Grandpa and Uncle Michael.

She got to bed a little bit late by the time we got home, but I'm hopeful she'll sleep well tonight.

## April 15, 2014

Tonight Melbourne got to experience the tail end of a lunar eclipse as the moon rose in eclipse at 17:48. We took a friend on a trip up to the (apparently now closed) Olinda Golf Course to view the moon rise. It was nice and clear, and after roaming around a bit to find a place where we should have been able to see the eclipsed moon, we found a suitable spot but couldn't see the moon itself. Mars was visible in the right area, but of course the salient point of a lunar eclipse is that the moon is in the Earth's shadow, and so it wasn't findable until it started to exit at third contact. Got a few photos, of which this was the best.

We had to head back down the hill as Donna had an appointment at 7pm but later on our friend called up and said excitedly “Have you seen the moon? Go and look!”. I went out to see but the hills were still in the way then, so later on I headed out with the camera once the moon was visible and got some more photos as the moon headed towards fourth contact (when it exits the shadow of the Earth).

This item originally posted here:

Lunar Eclipse 15th April 2014

Sarah dropped Zoe around this morning at about 8:30am. She was still a bit feverish, but otherwise in good spirits, so I decided to stick with my plan for today, which was a tour of the Port of Brisbane.

Originally the plan had been to do it with Megan and her Dad, Jason, but Jason had some work to do on his house, so I offered to take Megan with us to allow him more time to work on the house uninterrupted.

I was casting around for something to do to pass the time until Jason dropped Megan off at 10:30am, and I thought we could do some foot painting. We searched high and low for something I could use as a foot washing bucket, other than the mop bucket, which I didn't want to use because of potential chemical residue. I gave up because I couldn't find anything suitable, and we watched a bit of TV instead.

Jason dropped Megan around, and we immediately jumped in the car and headed out to the Port. I missed the on ramp for the M4 from Lytton Road, and so we took the slightly longer Lytton Road route, which was fine, because we had plenty of time to kill.

The plan was to get there for about 11:30am, have lunch in the observation cafe on the top floor of the visitor's centre building, and then get on the tour bus at 12:30pm. We ended up arriving much earlier than 11:30am, so we looked around the foyer of the visitor's centre for a bit.

It was quite a nice building. The foyer area had some displays, but the most interesting thing (for the girls) was an interactive webcam of the shore bird roost across the street. There was a tablet where you could control the camera and zoom in and out on the birds roosting on a man-made island. That passed the time nicely. One of the staff also gave the girls Easter eggs as we arrived.

We went up to the cafe for lunch next. The view was quite good from the 7th floor. On one side you could look out over the bay, notably Saint Helena Island, and on the other side you got quite a good view of the port operations and the container park.

Lunch didn't take all that long, and the girls were getting a bit rowdy, running around the cafe, so we headed back downstairs to kill some more time looking at the shore birds with the webcam, and then we boarded the bus.

It was just the three of us and three other adults, which was good. The girls were pretty fidgety, and I don't think they got that much out of it. The tour didn't really go anywhere that you couldn't go yourself in your own car, but you did get running commentary from the driver, which made all the difference. The girls spent the first 5 minutes trying to figure out where his voice was coming from (he was wired up with a microphone).

The thing I found most interesting about the port operations was the amount of automation. There were three container terminals, and the two operated by DP World and Hutchison Ports employed fully automated overhead cranes for moving containers around. Completely unmanned, they'd pick a container from the stack and place it on a waiting truck below.

What I found even more fascinating was the Patrick terminal, which used fully automated straddle carriers, which would, completely autonomously, move about the container park, pick up a container, and then move over to a waiting truck in the loading area and place it on the truck. There were 27 of these things moving around the container park at a fairly decent clip.

Of course the girls didn't really appreciate any of this, and half way through the tour Megan was busting to go to the toilet, despite going before we started the tour. I was worried about her having an accident before we got back, but she didn't, so it was all good.

As excursions go, I'd score it about a 4 out of 10, because the girls didn't really enjoy the bus tour all that much. I was hoping we'd see more ships, but there weren't many (if any) in port today. They did enjoy the overall outing, though. Megan spontaneously thanked me as we were leaving, which was sweet.

We picked up the blank cake I'd ordered from Woolworths on the way home, and then dropped Megan off. Zoe wanted to play, so we hung around for a little while before returning home.

Zoe watched a bit more TV while we waited for Sarah to pick her up. Her fever picked up a bit more in the afternoon, but she was still very perky.

## April 14, 2014

We had a bit of a rough night last night. I noticed Zoe was pretty hot when she had a nap yesterday after not really eating much lunch. She still had a mild fever after her nap, so I gave her some paracetamol (aka acetaminophen, that one weirded me out when I moved to the US) and called for a home doctor to check her ears out.

Her ears were fine, but her throat was a little red. The doctor said it was probably a virus. Her temperature wasn't so high at bed time, so I skipped the paracetamol, and she went to bed fine.

She did wake up at about 1:30am and it took me until 3am to get her back to bed. I think it was a combination of the fever and trying to phase out her white noise, but she just didn't want to sleep in her bed or her room. At 3am I admitted defeat and let her sleep with me.

She had only a slightly elevated temperature this morning, and otherwise seemed in good spirits. We were supposed to go to a family lunch today, because my sister and brother are in town with their respective families, but I figured we'd skip it: Zoe may have still had something, and coupled with the poor night's sleep, I wasn't sure how much socialising she'd be up for.

My ear has still been giving me grief, and I had a home doctor check it yesterday as well, and he said the ear canal was 90% blocked. First thing this morning I called up to make an appointment with my regular doctor to try and get it flushed out. The earliest appointment I could get was 10:15am.

So we trundled around the corner to my doctor after a very slow start to the day. I got my ear cleaned out and felt like a million bucks afterwards. We went to Woolworths to order an undecorated mud slab cake, so I can try doing a trial birthday cake. I've given up on trying to do the sitting minion, and significantly scaled back to just a flat minion slab cake. It should be ready tomorrow.

The family thing was originally supposed to be tomorrow, and was only moved to today yesterday. My original plan had been to take Zoe to a free Dora the Explorer live show that was on in the Queen Street Mall.

I decided to revert to the original plan, but by this stage it was too late to catch the 11am show, so the 1pm show was the only other option. We had a "quick" lunch at home, which involved Zoe refusing to eat the sandwich I made for her and me convincing her otherwise.

Then I got a time-sensitive phone call from a friend, and once I'd finished dealing with that, there wasn't enough time to take any form of public transport and get there in time, so I decided to just drive in.

We parked in the Myer Centre car park, and quickly made our way up to the mall, and made it there comfortably with 5 minutes to spare.

The show wasn't much to write home about. It was basically just 20 minutes of someone in a giant Dora suit acting out what was essentially a typical episode of Dora the Explorer, on stage, with a helper. Zoe started out wanting to sit on my lap, but made a few brief forays down to the "mosh pit" down the front with the other kids, dancing around.

After the show finished, we had about 40 minutes to kill before we could get a photo with Dora, so we wandered around the Myer Centre. I let Zoe choose our destinations initially, and we browsed a cheap accessories store that was having a sale, and then we wandered downstairs to one of the underground bus station platforms.

After that, we made our way up to Lincraft, and browsed. We bought a $5 magnifying glass, and I let Zoe do the whole transaction by herself. After that it was time to make our way back down for the photo. Zoe made it first in line, so we were in and out nice and quick. We got our photos, and they gave her a little activity book as well, which she thought was cool, and then we headed back down to the car park. In my haste to park and get topside, I hadn't really paid attention to where we'd parked, and we came down via different elevators than we went up, so by the time I'd finally located the car, the exit gate was trying to extract an extra $5 parking fee out of me. Fortunately I was able to use the intercom at the gate and tell my sob story of being a nincompoop, and they let us out without further payment.

We swung by the Valley to clear my PO box, and then headed home. Zoe spontaneously announced she'd had a fun day, so that was lovely.

We only had about an hour and a half to kill before Sarah was going to pick up Zoe, so we just mucked around. Zoe looked at stuff around the house with her magnifying glass. She helped me open my mail. We looked at some of the photos on my phone. Dayframe and a Chromecast is a great combination for that. We had a really lovely spell on the couch where we took turns to draw on her Magna Doodle. That was some really sweet time together.

Zoe seemed really eager for her mother to arrive, and kept asking how much longer it was going to be, and going outside our unit's front door to look for her.

Sarah finally arrived, and remarked that Zoe felt hot, and so I checked her temperature, and her fever had returned, so whatever she has she's still fighting off.

I decided to do my Easter egg shopping in preparation for Sunday. A friend suggested this cool idea of leaving rabbit paw tracks all over the house in baby powder, and I found a template online and got that all ready to go.

I had a really great yoga class tonight. Probably one of the best I've had in a while in terms of being able to completely clear my head.

I'm looking forward to an uninterrupted night's sleep tonight.

## April 13, 2014

Minesweeper Master is Problem C in the Google Code Jam 2014 Qualification Round.

Here is my solution in C:


```c
#include <stdio.h>
#include <string.h>

#define N 50

char mines[N][N];

/* Try to place M mines in the sub-grid [R0, R1) x [C0, C1).
 * f is set when the click cell is the only free cell on the board.
 * Returns the number of mines that could not be placed (0 on success). */
int putMines(int R0, int R1, int C0, int C1, int M, int f) {
    int j;
    int sR = R1 - R0;
    int sC = C1 - C0;
    if (sR == 0) return M;
    if (sC == 0) return M;

    if (sR > sC && sR > 2 && M >= C1 - C0) {
        /* Fill the top row completely and recurse on the rest. */
        M -= C1 - C0;
        for (j = C0; j < C1; j++) mines[R0][j] = '*';
        return (M > 0) ? putMines(R0 + 1, R1, C0, C1, M, f) : 0;
    }
    if (sC > 2 && M >= R1 - R0) {
        /* Fill the left column completely and recurse on the rest. */
        M -= R1 - R0;
        for (j = R0; j < R1; j++) mines[j][C0] = '*';
        return (M > 0) ? putMines(R0, R1, C0 + 1, C1, M, f) : 0;
    }
    if (sR > 2 && M >= C1 - C0) {
        M -= C1 - C0;
        for (j = C0; j < C1; j++) mines[R0][j] = '*';
        return (M > 0) ? putMines(R0 + 1, R1, C0, C1, M, f) : 0;
    }
    if (sR > sC && (sC > 2 || f)) {
        /* Partially fill the left column, leaving the last two cells
         * free (2 * f relaxes this when only one free cell is needed). */
        for (j = R0; M > 0 && j < R1 - 2 + 2 * f; j++) {
            mines[j][C0] = '*';
            M--;
        }
        return (M > 0) ? putMines(R0, R1, C0 + 1, C1, M, f) : 0;
    }
    if (sR > 2 || f) {
        /* Partially fill the top row under the same constraint. */
        for (j = C0; M > 0 && j < C1 - 2 + 2 * f; j++) {
            mines[R0][j] = '*';
            M--;
        }
        return (M > 0) ? putMines(R0 + 1, R1, C0, C1, M, f) : 0;
    }
    return M;
}

int main(void) {
    int i, T;
    int R, C, M;
    int j, k;
    int cr, cc;

    scanf("%d", &T);
    for (i = 0; i < T; i++) {
        printf("Case #%d:\n", i + 1);
        scanf("%d %d %d", &R, &C, &M);

        /* Click in the bottom-right corner, the last cell left free. */
        cr = R - 1;
        cc = C - 1;

        memset(mines, '.', N * N);
        M = putMines(0, R, 0, C, M, (R * C - M) == 1);

        if (M) printf("Impossible\n");
        else {
            mines[cr][cc] = 'c';
            for (k = 0; k < R; k++) {
                for (j = 0; j < C; j++) {
                    printf("%c", mines[k][j]);
                }
                printf("\n");
            }
        }
    }
    return 0;
}
```


I got one of those rare opportunities to calibrate Zoe's outlook on people on Friday. I feel pretty happy with the job I did.

Once we arrived at the New Farm Park ferry terminal, the girls wanted to have some morning tea, so we camped out in the terminal to have something to eat. Kim had packed two poppers (aka "juice boxes") for Sarah so they both got to have one. Nice one, Kim!

Not long after we started morning tea, an older woman with some sort of presumably intellectual disability and her carer arrived to wait for a ferry. I have no idea what the disability was, but it presented as her being unable to speak. She'd repeatedly make a single grunting noise, held her hands a bit funny, and would repeatedly stand up, walk in a circle, and try to rummage through the rubbish bin next to her. I exchanged a smile with her carer. The girls were a little bit wary of her because she was acting strangely. Sarah whispered something to me inquiring what was up with her. Zoe asked me to accompany her to the rubbish bin to dispose of her juice box.

I didn't feel like talking about the woman within her earshot, so I waited until they'd boarded their ferry, and we'd left the terminal before talking about the encounter. It also gave me a little bit of time to construct my explanation in my head.

I specifically wanted to avoid phrases like "something wrong" or "not right". For all I knew she could have had cerebral palsy, and had a perfectly good brain trapped inside a malfunctioning body.

So I explained that the woman had "special needs" and that people with special needs have bodies or brains that don't work the same way as ours, and so, just like little kids, they need an adult carer to take care of them so they don't hurt themselves or get lost. In the case of the woman we'd just seen, she needed a carer to make sure she didn't get lost or rummage through the rubbish bin.

That explanation seemed to go down pretty well, and that was the end of that. Maybe next time such circumstances permit, I'll try striking up a conversation with the carer.

Cookie Clicker Alpha is Problem B of the Google Code Jam 2014 Qualification Round.

Here is my solution in C:


```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int T, i, c, p;
    double C, F, X, Y;

    /* Each state in the search: y seconds spent buying farms so far,
     * with a current rate of a cookies per second. */
    struct Item {
        double y;
        double a;
    } *H;

    H = malloc(10000000 * sizeof(struct Item));

    scanf("%d", &T);
    for (i = 0; i < T; i++) {
        printf("Case #%d: ", i + 1);
        scanf("%lf %lf %lf", &C, &F, &X);

        c = p = 0;
        Y = 1e18;          /* best finishing time found so far */
        H[0].y = 0.0;
        H[0].a = 2.0;      /* starting rate: 2 cookies per second */

        while (c <= p) {
            /* Option 1: stop buying farms and just wait for X cookies. */
            double finish = H[c].y + X / H[c].a;
            if (finish < Y) Y = finish;

            /* Option 2: save up for one more farm (cost C, rate +F);
             * only worth exploring if it could still beat the best. */
            double buy = H[c].y + C / H[c].a;
            if (buy < Y) {
                p++;
                H[p].y = buy;
                H[p].a = H[c].a + F;
            }
            c++;
        }
        printf("%.7lf\n", Y);
    }
    return 0;
}
```


Magic Trick is Problem A of the Google Code Jam 2014 Qualification Round.

Here is my solution in C:


```c
#include <stdio.h>

int main(void) {
    int i, j, T, first, second;
    int F[4], S[4], t[4], r, n;

    scanf("%d", &T);
    for (i = 0; i < T; i++) {
        printf("Case #%d: ", i + 1);
        scanf("%d", &first);
        for (j = 0; j < 4; j++) {
            if (j == first - 1) {
                /* Keep the row the volunteer pointed at the first time. */
                scanf("%d %d %d %d", F, F + 1, F + 2, F + 3);
            } else {
                scanf("%d %d %d %d", t, t + 1, t + 2, t + 3);
            }
        }
        scanf("%d", &second);
        for (j = 0; j < 4; j++) {
            if (j == second - 1) {
                scanf("%d %d %d %d", S, S + 1, S + 2, S + 3);
            } else {
                scanf("%d %d %d %d", t, t + 1, t + 2, t + 3);
            }
        }
        /* The card must lie in the intersection of the two rows. */
        n = 0;
        for (j = 0; j < 4; j++) {
            if (F[j] == S[0] || F[j] == S[1] || F[j] == S[2] || F[j] == S[3]) {
                n++;
                r = F[j];
            }
        }
        switch (n) {
            case 0: printf("Volunteer cheated!\n"); break;
            case 1: printf("%d\n", r); break;
            default: printf("Bad magician!\n"); break;
        }
    }
    return 0;
}
```


## April 12, 2014

Accepting a position on the Graduate Development Program will involve a change in your personal circumstances. For some, it may mean leaving home and relocating, for others, it will be your first full-time role, and for others, it will mean new work and a new team. Please outline what sort of changes you anticipate you will need to consider to commence work.

As much as I hate writing answers to selection criteria, sometimes the questions posed do make me think.


The main reason that credit cards need to be replaced is that they have a single set of numbers that is used for all transactions. If credit cards had been designed properly for modern use (i.e. since 2000 or so), they would act as a smart-card as the recommended way of payment in store. Currently I have a Mastercard and an Amex card; the Mastercard (issued about a year ago) has no smart-card feature, and as Amex is rejected by most stores I've never had a chance to use the smart-card part of a credit card. If all American credit cards had a smart-card feature which was recommended by store staff, then the problems that Brian documents would never have happened, the attacks on Target and other companies would have got very few card numbers, and the companies that make cards wouldn't have a backlog of orders.

If a bank was to buy USB smart-card readers for all their customers then they would be very cheap (the hardware is simple and therefore the unit price would be low if purchasing a few million). As banks are greedy they could make customers pay for the readers and even make a profit on them. Then for online banking at home the user could use a code that's generated for the transaction in question and thus avoid most forms of online banking fraud – the only possible form of fraud would be to make a $10 payment to a legitimate company become a $1000 payment to a fraudster, but that's a lot more work and a lot less money than other forms of credit card fraud.

A significant portion of all credit card transactions performed over the phone are made from the customer’s home. Of the ones that aren’t made from home a significant portion would be done from a hotel, office, or other place where a smart-card reader might be conveniently used to generate a one-time code for the transaction.

The main remaining problem seems to be the use of raised numbers. Many years ago it used to be common for credit card purchases to involve using some form of “carbon paper”, and the raised numbers made an impression on the credit card transfer form. I don’t recall ever using a credit card in that way; I’ve only had credit cards for about 18 years, and my memories of the raised numbers on credit cards being used to make an impression on paper only involve watching my parents pay when I was young. It seems likely that someone who likes paying by credit card and does so at small companies might have some recent experience of “carbon paper” payment, but anyone who prefers EFTPOS and cash probably wouldn’t.

If the credit card number (used for phone and Internet transactions in situations where a smart card reader isn’t available) wasn’t raised then it could be changed by posting a sticker with a new number that the customer could apply to their card. The customer wouldn’t even need to wait for the post before their card could be used again as the smart card part would never be invalid. The magnetic stripe on the card could be changed at any bank and there’s no reason why an ATM couldn’t identify a card by its smart-card and then write a new magnetic stripe automatically.

These problems aren’t difficult to solve. The amounts of effort and money involved in solving them are tiny compared to the costs of cleaning up the mess from a major breach such as the recent Target one, the main thing that needs to be done to implement my ideas is widespread support of smart-card readers and that seems to have been done already. It seems to me that the main problem is the incompetence of financial institutions. I think the fact that there’s no serious competitor to Paypal is one of the many obvious proofs of the incompetence of financial companies.

The effective operation of banks is essential to the economy and the savings of individuals are guaranteed by the government (so when a bank fails a lot of tax money will be used). It seems to me that we need to have national banks run by governments with the aim of financial security. Even if banks were good at their business (and they obviously aren’t) I don’t think that they can be trusted with it, an organisation that’s “too big to fail” is too big to lack accountability to the citizens.

## April 11, 2014

This week I set up an old Dell Optiplex 755 tower with Ubuntu 12.04.4, TvHeadEnd and a Realtek RTL2832U USB tuner to perform some DVB-T recordings. The TvHeadEnd installation is the exact same one I documented some months back when I used the same USB tuner on a Raspberry Pi. You can read about it here.

The installation was flawless and as simple as you’d expect. The system has been running a few days now and capturing what I want. It also allows me to point a VLC client on other machines at the system to network-stream any of the DVB-T channels the tuner can tune (also shown in the previous post linked above).

To be honest, I’m thinking of buying another tuner, so I can record from two different channels that don’t share the same stream/multiplex id.

Today I woke nearly an hour earlier than I'm used to, and got on a plane at a barely undignified hour, to travel for over three hours to visit a good friend of mine, Peter Miller, in Gosford.

Peter may be known to my readers, so I won't be otiose in describing him merely as a programmer with great experience who's worked in the Open Source community for decades. For the last couple of years he's been battling Leukaemia, a fight which has taken its toll, not only on him physically but also on his work and his coding output. It's a telling point for all good coders to consider that he wrote tests on his good days - so that when he was feeling barely up to it but still wanted to do some coding he could write something that could be verified as correct.

I arrived while he was getting a blood transfusion at a local hospital, and we spent a pleasurable hour talking about good coding practices, why people don't care about how things work any more, how fascinating things that work are (ever seen inside a triple lay-shaft synchronous mesh gearbox?), how to deal with frustration and bad times, how inventions often build on one another and analogies to the open source movement, and many other topics. Once done, we went back to his place where I cooked him some toasted sandwiches and we talked about fiction, the elements of a good mystery, what we do to plan for the future, how to fix the health care system (even though it's nowhere near as broken as, say, the USA's), dealing with road accidents and fear, why you can never have too much bacon, what makes a good Linux Conference, and many other things.

Finally, we got around to talking about code. I wanted to ask him about a project I've talked about before - a new library for working with files that allows the application to insert, overwrite, and delete any amount of data anywhere in the file without having to read the entire file into memory, massage it, and write it back out again. Happily for me this turned out to be something that Peter had also given thought to, apropos of talking with Andrew Cowie about text editors (which was one of my many applications for such a system). He'd also independently worked out that such a system would also allow a fairly neat and comprehensive undo and versioning system, which was something I thought would be possible - although we differed on the implementation details, I felt like I was on the right track.

We discussed how such a system would minimise on-disk reads and writes, how it could offer transparent, randomly seekable, per-block compression, how to recover from partial file corruption, and what kind of API it should offer. Then Peter's son arrived and we talked a bit about his recently completed psychology degree, why psychologists are treated the same way that scientists and programmers are at parties (i.e. like a form of social death), and how useful it is to consider human beings as individuals when trying to help them. Then it was time for my train back to Sydney and on to Canberra and home.

Computing is famous, or denigrated, as an industry full of introverts, who would rather hack on code than interact with humans. Yet many of us are extroverts who don't really enjoy this mould we are forced into. We want to talk with other people - especially about code! For an extrovert like myself, having a chance to spend time with someone knowledgeable, funny, human, and sympathetic is to see sun again after long days of rain. I'm fired up to continue work on something that I thought was only an idle, personal fantasy unwanted by others.

I can only hope it means as much to Peter as it does to me.

Oh man, am I exhausted.

I've known my friend Kim for longer than either of us can remember. Until Zoe was born, I thought the connection was purely that our grandmothers knew each other. After Zoe was born, and we gave her my birth mother's name as her middle name, Kim's mother sent me a message indicating that she knew my mother. More on that in a moment.

Kim and I must have interacted when we were small, because it predates my memory of her. My earliest memories are of being a pen pal with her when she lived in Kingaroy. She had a stint in South Carolina, and then in my late high school years, she moved relatively close to me, at Albany Creek, and we got to have a small amount of actual physical contact.

Then I moved to Canberra, and she moved to Melbourne, and it was only due to the wonders of Facebook that we reconnected while I was in the US.

Fast forward many years, and we're finally all back in Brisbane again. Kim is married and has a daughter named Sarah, who is a couple of years older than Zoe and could actually pass off as her older sister. She also has a younger son. Since we've been back in Brisbane, we've had many a play date at each other's homes, and the girls get along famously, to the point where Sarah was talking about her "best friend Zoe" at show and tell at school.

The other thing I've learned since reconnecting with Kim in the past year is that Kim's aunt and my mother were in the same grade at school. Kim actually arranged for me to have a coffee with her aunt when she was visiting from Canberra, and she told me a bunch of stuff about my Mum that I didn't know, so that was really nice.

Kim works from home part time, and I offered to look after Sarah for a day in the school holidays as an alternative to her having to go to PCYC holiday care. Today was that day.

I picked up Zoe from Sarah's place this morning, as it was roughly in the same direction as Kim's place and made more sense, and we headed over to Kim's place to pick up Sarah. We arrived only a couple of minutes later than the preferred pick-up time, so I was pretty happy with how that worked out.

The plan was to bring Sarah back to our place, and then head over to New Farm Park on the CityCat and have a picnic lunch and a play in the rather fantastic playground in the park over there.

I hadn't made Zoe's lunch prior to leaving the house, so after we got back home again, I let the girls have a play while I made it. After some play with Marble Run, the girls started doing a craft activity all on their own on the balcony. It was cute watching them try to copy each other's creations. One of them tried gluing two paper cups together by the narrow end. It didn't work terribly well because there wasn't much surface in contact.

I helped the girls with their craft activity briefly, and then we left on foot to walk to the CityCat terminal. Along the way, I picked up some lunch for myself at the Hawthorne Garage and added it to the small Esky I was carrying with Zoe's lunchbox in it. It was a beautiful day for a picnic. It was warm and clear. I think Sarah found the walk a bit long, but we made it to the ferry terminal relatively incident free. We got lucky, and a ferry was just arriving, and as it happened, they had to change boats, as they do from time to time at Hawthorne, so we would have had plenty of time regardless, as everyone had to get off one boat and onto a new one.

We had a late morning tea at the New Farm Park ferry terminal after we got off, and then headed over to the playground. I claimed a shady spot with our picnic blanket and the girls did their thing.

I alternated between closely shadowing them around the playground and letting them run off on their own. Fortunately they stuck together, so that made keeping track of them slightly easier.

For whatever reason, Zoe was in a bit of a grumpier mood than normal today, and wasn't taking too kindly to the amount of turn taking that was necessary to have a smoothly oiled operation. Sarah (justifiably) got a bit whiny when she didn't get an equitable amount of time to call the shots on what they did, but aside from that they got along fine.

There was another great climbing tree, which had kids hanging off it all over the place. Both girls wanted to climb it, but needed a little bit of help getting started. Sarah lost her nerve before Zoe did, but even Zoe was surprisingly trepidatious about it, and after shimmying a short distance along a good (but high) branch, wanted to get down.

The other popular activity was a particularly large rope "spider web" climbing frame, which Sarah was very adept at scaling. It was a tad too big for Zoe to manage though, and she couldn't keep up, which frustrated her quite a bit. I was particularly proud of how many times she returned to it to try again, though.

We had our lunch, a little more play time, and the obligatory ice cream. I'd contemplated catching the CityCat further up-river to Sydney Street to then catch the free CityHopper ferry, but the thought of then trying to get two very tired girls to walk from the Hawthorne ferry terminal back home didn't really appeal to me all that much, so I decided to just head back home.

That ended up being a pretty good call, because as it was, trying to get the two of them back home was like herding cats. Sarah was fine, but Zoe was really dragging the chain and getting particularly grumpy. I had to deploy every positive parenting trick that I currently have in my book to keep Zoe moving, but we got there eventually. Fortunately we didn't have any particular deadline.

The girls did some more playing at home while I collapsed on the couch for a bit, and then wanted to do some more craft. We made a couple of crowns and hot-glued lots of bling onto them.

We drove back to Kim's place after that, and the girls played some more there. Sarah nearly nodded off in the car, while Zoe was surprisingly chipper. The dynamic changed completely once we were back at Sarah's house: Zoe seemed fine to take Sarah's direction on everything, so I wonder how much of the morning's friction was territorial, with Sarah not used to Zoe calling the shots at Zoe's place.

Kim invited us to stay for dinner. I wasn't really feeling like cooking, and the girls were having a good time, so I decided to stay for dinner, and after they had a bath together we headed home. Zoe stayed awake all the way home, and went to bed without any fuss.

It's pretty hot tonight, and I'm trialling Zoe sleeping without white noise, so we'll see how tonight pans out.

If you are a MySQL power user in Korea, it's well worth joining the Korean MySQL Power User Group. This is a group led by senior DBAs at many Korean companies. From what I gather, there is experience there using MySQL, MariaDB, Percona Server and Galera Cluster (many on various 5.5, some on 5.6, and quite a few testing 10.0). No one is using WebScaleSQL (yet?). The discussion group is rather active, and I've got a profile there (I get questions translated for me).

This is just a natural evolution of the DBA Dinners that were held once every quarter. Organised by OSS Korea, and sometimes funded by SkySQL, people would eat & drink while hearing a short update on the MySQL world (usually from me, though we've had special guests like Werner Vogels, CTO of Amazon; more recently Monty, Patrik Sallner and Michael Carney have made appearances, where mostly all we do is eat & drink).

So we've gone from quarterly meetups to getting information online, quickly. There's much hunger for open source in Korea, and very smart people working on services feeding the population (some of which even make it outside the local market). The future of open source in Korea is definitely very bright.


## April 10, 2014

If you use Amazon Elastic Compute Cloud (EC2), you are always given choices of AMIs (by default; there are plenty of other AMIs available for your base-os): Amazon Linux AMI, Red Hat Enterprise Linux, SUSE Enterprise Server and Ubuntu. In terms of cost, the Amazon Linux AMI is the cheapest, followed by SUSE then RHEL.

I use EC2 a lot for testing, and recently had to pay a "RHEL tax" as I needed to run a RHEL environment. For most uses I'm sure you can be satisfied by the Amazon Linux AMI. The latest numbers suggest Amazon Linux is #2 in terms of usage on EC2.

Anyway, recently Amazon Linux AMI came out with the 2014.03 release (see release notes). You can install MySQL 5.1.73 or MySQL 5.5.36 (the latter makes the most sense today) easily without additional repositories.
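For anyone spinning one up, here's a quick sketch of what that install looks like on a 2014.03 Amazon Linux AMI instance. The exact package names are my assumption, so verify them with `yum search mysql` on your instance:

```shell
# Amazon Linux AMI 2014.03 -- MySQL 5.5 straight from the default repos.
# Package names are an assumption; confirm with: yum search mysql
sudo yum install -y mysql55-server mysql55
sudo service mysqld start
sudo chkconfig mysqld on   # start at boot
mysql --version            # should report a 5.5.x client
```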

The most interesting part of the release notes, though? When the 2014.09 release comes out, it will mark 3 years since the Amazon Linux AMI went GA. They are likely to remove MySQL 5.1 (it's old and deprecated upstream). And:

We are considering switching from MySQL to MariaDB.

This should be interesting going forward. MariaDB in the EC2 AMI would naturally be a welcome addition. I do wonder if the choice will be offered in RDS too. I will be watching the forums closely.


Today was jam packed, from the time Zoe got dropped off to the time she was picked up again.

I woke up early to go to my yoga class. It had moved from 6:15am to 6:00am, but was closer to home. I woke up a bunch of times overnight because I wanted to make sure I got up a little bit earlier (even though I had an alarm set) so I was a bit tired.

Sarah dropped Zoe off, and we quickly inspected our plaster fish from yesterday. Because the plaster had gotten fairly thick, it didn't end up filling the molds completely, so the fish weren't smooth. Zoe was thrilled with them nonetheless, and wanted to draw all over them.

After that, we jumped in the car to head out to The Workshops Rail Museum. We were meeting Megan there.

We arrived slightly after opening time. I bought an annual membership last time we were there, and I'm glad we did. The place is pretty good. It's all indoors, and it's only lightly patronised, even for school holidays, so it was nice and quiet.

Megan and her Dad and sister arrived about an hour later, which was good, because it gave Zoe and me a bit of time to ourselves. We had plenty of time on the diesel engine simulator without anyone else breathing down our necks wanting a turn.

The girls all had a good time. We lost Megan and Zoe for a little bit when they decided to take off and look at some trains on their own. Jason and I were frantically searching the place before I found them.

There was a puppet show at 11am, and the room it was in was packed, so we plonked all three kids down on the floor near the stage, and waited outside. That was really nice, because the kids were all totally engrossed, and didn't miss us at all.

After lunch and a miniature train ride we headed home. Surprisingly, Zoe didn't nap on the way home.

Jason was house sitting for some of his neighbours down the street, and he'd invited us to come over and use their pool, so we went around there once we got back home. The house was great. They also had a couple of chickens.

The pool was really well set up. It had a zip line that ran the length of the pool. Zoe was keen to give it a try, and she did really well, hanging on all the way. They also had a little plastic fort with a slippery slide that could be placed at the end of the pool, and the girls had a great time sliding into the pool that way.

We got back home from all of that fun and games about 15 minutes before Sarah arrived to pick Zoe up, so it was really non-stop day.

## April 09, 2014

Zoe slept in even later this morning. I'm liking this colder weather. We had nothing particular happening first thing today, so we just snuggled in bed for a bit before we got started.

Tumble Tastics were offering free trial classes this week, so I signed Zoe up for one today. She really enjoyed going to Gold Star Gymnastics in the US, and has asked me about finding a gym class over here every now and then.

Tumble Tastics is a much smaller affair than Gold Star, but at 300 metres from home on foot, it's awesomely convenient. Zoe scootered there this morning.

It seems to be physically part of what I'm guessing used to be the Church of Christ's church hall, so it's not big at all, but the room that Zoe had her class in still had plenty of equipment in it. There were 8 kids in her class, all about her size. I peeked around the door and watched.

Most of the class was instructor led and mainly mat work, but then part way through, the parents were invited in, and the teacher walked us all through a course around the room, using the various equipment, and the parents had to spot for their kids.

The one thing that cracked me up was when the kids were supposed to be tucking into a ball and rocking on their backs. Zoe instead did a Brazilian Jiu-Jitsu break-fall and fell backwards slapping the mat instead. It was good to see that some of what she learned in those classes has kicked in reflexively.

She really enjoyed the rope swing and hanging upside down on the uneven bars.

The class ran for 50 minutes (I was only expecting it to last 30 minutes) and Zoe did really well straight off. I think we'll make this her 4th term extra-curricular activity.

We scootered home the longer way, because we were in no particular hurry. Zoe did some painting when we got home, and then we had lunch.

After lunch we goofed off for a little bit, and then we did quiet time. Zoe napped for about two and a half hours, and then we did some plaster play.

I'd picked up a fish ice cube tray from IKEA on the weekend for 99 cents (cue Thrift Shop), and I'd bought a bag of plaster of Paris a while back but hadn't had a chance to do anything with it yet. I bribed Zoe into doing quiet time by telling her we'd do something new with the ice cube tray I'd bought.

We mixed up a few paper cups with plaster of Paris in them and then I squirted some paint in. I'm not sure if the paint caused a reaction, or the plaster was already starting to set by the time the paint got mixed in, but it became quite viscous as soon as the paint was mixed in. We did three different colours and used tongue depressors to jam it into the tray. Zoe seemed to twig that it was the same stuff as the impressions of her baby feet, which I thought was a pretty clever connection to make.

After that, there was barely enough time to watch a tiny bit of TV before Sarah arrived to pick Zoe up. I told her that her plaster would be set by the time she got dropped off in the morning.

I procrastinated past the point of no return and didn't go for a run. Instead I decided to go out to Officeworks and print out some photos to stick in the photo frame I bought from IKEA on the weekend.

For some months now, there have been some back & forth emails with Matt, one of the senior DBAs behind the popular messaging service KakaoTalk (yes, they are powered by MariaDB). Today I got some positive information: the book published entirely in the Korean language, titled Real MariaDB, is now available.

It covers MariaDB 10.0. Where appropriate, there are also notes on MySQL 5.6 (especially with regard to differences). This is Matt's fourth MySQL-related book, and there's a community around it as well. The foreword is written by Monty and me.

If you read the Korean language, this is the manual to read. It should push MariaDB further in this market, and the content is quite advanced, covering a lot of optimization explanations, configuration options, etc. At 628 pages, it is much, much better than the Korean translation of the Knowledge Base!


Welcome to April already! Last month's talk on OpenDCP had a great reception, and I hope you're all not too busy getting new keys after that OpenSSL Heartbleed vulnerability.

NOTE: for this month only, TasLUG will be meeting in the downstairs room at SoHo rather than upstairs.

When: Thursday, April 17th, 18:00 for an 18:30 start

Where: DOWNSTAIRS, Hotel Soho, 124 Davey St, Hobart. (Map)

Agenda:

• 18:00 - early mingle, chin wagging, etc
• 18:30 - Question and answer session, News of Note.

• 19:00 - Mathew Oakes - The open-source graphics train wreck

train wreck

1.

a chaotic or disastrous situation that holds a peculiar fascination for observers.

"his train wreck of a private life guaranteed front-page treatment"

• 20:00 - Meeting end. Dinner and drinks are available at the venue during the meeting.

We will probably get to a discussion on the Hobart LCA 2017 bid, ideas for upcoming Software Freedom Day in September, the Statewide meetup, Committee nomination and voting, so our pre-talk discussion should be packed full of jam.

Note for May: There will be no Hobart meeting in May - instead we should all be heading to our statewide meetup at Ross! If you need a lift, contact one of us on the mailing list or IRC so that as many of us as possible can get along, and bring your open source stuff to show off!

Also in April:

26th - Launceston meeting

May:

24th - Statewide Meet-up - Ross Town Hall

June:

19th - Hobart: No talk scheduled, idea being thrown about to make it an OpenStack short talk night.

July:

11-13th - Gov Hack 2014 - There's at least a Hobart venue for this event.

September:

20th - Software Freedom Day - events in Hobart and Launceston

## April 08, 2014

Zoe did indeed sleep in this morning, by a whole 30 minutes. It was nice. She seemed no worse for wear for her lip injury, and it was looking better this morning.

Wow, "bimonthly" is ambiguous. I had my "every two months" in-person co-parenting sync-up lunch with Sarah today. Phew, that was a mouthful. Normally that would fall on a Kindergarten day, but it's school holidays, so we paid Grandma and Grandpa a visit, and they looked after Zoe for me so I could make the meeting.

Mum and Dad have been away on a driving holiday, so Zoe hasn't seen them for a while, and it's been even longer since we've been to their house. She really loves going to their house because it's big, with a big back yard with a swing set. There's all sorts of exciting things like grandpa's worm farm, a sand pit, a china tea set, a piano, a tricycle and remote controlled cars. Zoe basically just works her way around the house entertaining herself. It's great. I usually get to put my feet up and read the newspaper.

After I got back from my lunch meeting, we headed over to Greenslopes Private Hospital to visit my cousin, who's just had major surgery. On the way, Zoe napped in the car. I made a brief side trip to clear my post office box along the way.

Amusingly, Zoe wakes up from short naps in the car way better than at Kindergarten. I don't know if it has anything to do with the quality of sleep she's getting or what it is, but I easily woke her up and extracted her from the car when we arrived at the hospital. No meltdowns. And that's pretty typical of car naps.

I've had a discomfort in my right ear for the last couple of days, and it grew into increasing pain throughout the day today. It got to the point where, while I was driving home, I decided to get it looked at by a doctor, ASAP. One of my favourite things about being back in Australia is the availability of home visiting doctors.

It was actually faster and cheaper for me to get a home doctor out to look at me tonight than it would have been to get an appointment with my regular doctor. I wouldn't have gotten an appointment until some time tomorrow at the earliest (assuming he had appointments available), because I made the decision to see a doctor after 5pm, when they'd closed. Instead, I had a doctor at my door within a little more than 2 hours of making the request. It also worked out cheaper, because the home doctor bulk bills Medicare, whereas my regular doctor does not.

Add in the massive convenience of not having to lug a small child anywhere while I get seen by a doctor, and it's a clear win. I love socialised healthcare.

It turned out I have an outer ear infection. So all we had to do after the doctor came was find a pharmacy that was still open after 7pm to get my ear drop prescription filled.

All of that mucking around meant that Zoe got to bed a little later than usual. It's another cool night tonight, so I'm hoping she'll sleep well and have another sleep in.

The ardupilot development team is proud to announce the release of version 3.0.0 of APM:Plane. This is a major release with a lot of new features.

For each release I try to highlight the two or three key new features that have gone in since the last release. That is a more difficult task this time around because there are just so many new things. Still, I think the most important ones are the new Extended Kalman Filter (EKF) for attitude/position estimation, the extensive dual sensor support and the new AP_Mission library.

We have also managed to still keep support for the APM1 and APM2, although quite a few of the new features are not available on those boards. We don't yet know for how long we'll be able to keep going on supporting these old boards, so if you are thinking of getting a new board then you should get a Pixhawk, and if you want the best performance from the APM:Plane code then you should swap to a Pixhawk now. It really is a big improvement.

New Extended Kalman Filter

The biggest change for the 3.0.0 release (and in fact the major reason why we are calling it 3.0.0) is the new Extended Kalman Filter from Paul Riseborough. Using an EKF for attitude and position estimation was never an option on the APM2 as it didn't have the CPU power or memory to handle it. The Pixhawk does have plenty of floating point performance, and Paul has done a fantastic job of taking full advantage of the faster board.

As this is the first stable release with the EKF code we have decided to not enable it by default. It does however run all the time in parallel with the existing DCM code, and both attitude/position solutions are logged both to the on-board SD card and over MAVLink. You can enable the EKF code using the parameter AHRS_EKF_USE=1, which can be set and unset while flying, allowing you to experiment with using the EKF either by examining your logs with the EKF disabled to see how it would have done or by enabling it while flying.

The main thing you will notice with the EKF enabled is more accurate attitude estimation and better handling of sensor glitches. A Kalman filter has an internal estimate of the reliability of each of its sensor inputs, and is able to weight them accordingly. This means that if your accelerometers start giving data that is inconsistent with your other sensors then it can cope in a much more graceful way than our old DCM code.

The result is more accurate flying, particularly in turns. It also makes it possible to use higher tuning gains, as the increased accuracy of the attitude estimation means that you can push the airframe harder without it becoming unstable. You may find you can use a smaller value for NAVL1_PERIOD, giving tighter turns, and higher gains on your roll and pitch attitude controllers.

Paul has written up a more technical description of the new EKF code here:

Dual Sensors

The second really big change for this release is support for dual sensors. We now take full advantage of the dual accelerometers and dual gyros in the Pixhawk, and can use dual GPS for GPS failover. We already had dual compass support, so the only main sensors we don't support two of now are the barometer and the airspeed sensor. I fully expect we will support dual baro and dual airspeed in a future release.

You might wonder why dual sensors is useful, so let me give you an example. I fly a lot of nitro and petrol planes, and one of my planes (a BigStik 60) had a strange problem where it would be flying perfectly in AUTO mode, then when the throttle reached a very specific level the pitch solution would go crazy (sometimes off by 90 degrees). I managed to recover in MANUAL each time, but it certainly was exciting!

A careful analysis of the logs showed that the culprit was accelerometer aliasing. At a very specific throttle level the Z accelerometer got a DC offset of 11 m/s/s. So when the plane was flying along nice and level the Z accelerometer would change from -10 m/s/s to +1 m/s/s. That resulted in massive errors in the attitude solution.

This sort of error happens because of the way the accelerometer is sampled. In the APM code the MPU6000 (used on both the APM2 and Pixhawk) samples the acceleration at 1kHz. So if you have a strong vibrational mode that is right on 1kHz then you are sampling the "top of the sine wave", and get a DC offset.
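The aliasing mechanism described above is easy to demonstrate numerically. The sketch below is purely illustrative (it is not ArduPilot code): a zero-mean 1kHz vibration sampled at exactly 1kHz lands on the same point of the sine wave every time, so it reads as a constant DC offset, while the same vibration sampled at 800Hz averages back out to zero.

```python
import math

def sample_vibration(vib_hz, fs_hz, phase, n):
    # Take n samples of a unit-amplitude sine vibration at rate fs_hz.
    return [math.sin(2 * math.pi * vib_hz * i / fs_hz + phase)
            for i in range(n)]

# 1 kHz vibration sampled at 1 kHz: every sample hits the same phase,
# so the zero-mean vibration shows up as a pure DC offset.
aliased = sample_vibration(1000, 1000, math.pi / 2, 800)
dc_offset = sum(aliased) / len(aliased)   # close to 1.0, not 0.0

# The same vibration sampled at 800 Hz averages back out to ~zero.
clean = sample_vibration(1000, 800, math.pi / 2, 800)
residual = sum(clean) / len(clean)
```

This is also why a second accelerometer sampled at a different rate makes such an effective cross-check: a vibration mode would have to alias at both sample rates simultaneously to fool both sensors.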

The normal way to fix this issue is to improve the physical anti-vibration mounting in the aircraft, but I don't like to fix problems like this by making changes to my aircraft, as if I fix my aircraft it does nothing for the thousands of other people running the same code. As the lead APM developer I instead like to fix things in software, so that everyone benefits.

The solution was to take advantage of the fact that the Pixhawk has two accelerometers: one is a MPU6000, and the 2nd is a LSM303D. The LSM303D is sampled at 800Hz, whereas the MPU6000 is sampled at 1kHz. It would be extremely unusual to have a vibration mode with aliasing at both frequencies at once, which means that all we needed to do was work out which accelerometer is accurate at any point in time. For the DCM code that involved matching each accelerometer at each time step to the combination of the GPS velocity vector and current attitude, and for the EKF it was a matter of producing a weighting for the two accelerometers based on the covariance matrix.

The result is that the plane flew perfectly with the new dual accelerometer code, automatically switching between accelerometers as aliasing occurred.

Since adding that code I have been on the lookout for signs of aliasing in other logs that people send me, and it looks like it is more common than we expected. It is rarely as dramatic as it was on my BigStik, but often results in some pitch error in turns. I am hopeful that with a Pixhawk and the 3.0 release of APM:Plane these types of problems will now be greatly reduced.

For the dual gyro support we went with a much simpler solution and just average the two gyros when both are healthy. That reduces noise, and works well, but doesn't produce the dramatic improvements that the dual accelerometer code resulted in.
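The averaging approach is simple enough to sketch. This is an illustration of the idea only, not the actual AP_InertialSensor code, and the health-flag interface is my assumption:

```python
def fused_gyro(gyro_a, gyro_b, healthy_a=True, healthy_b=True):
    """Blend two 3-axis gyro readings (rad/s): average while both
    sensors are healthy, otherwise fall back to the good one."""
    if healthy_a and healthy_b:
        # Averaging two sensors with uncorrelated noise reduces
        # the noise on each axis.
        return [(a + b) / 2.0 for a, b in zip(gyro_a, gyro_b)]
    if healthy_a:
        return list(gyro_a)
    if healthy_b:
        return list(gyro_b)
    raise RuntimeError("no healthy gyro available")
```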

Dual GPS was also quite a large development effort. We now support connecting a 2nd GPS to the serial4/5 port on the Pixhawk. This allows you to protect against GPS glitches, and has also allowed us to gather a lot of logs showing that even with two identical GPS modules it is quite common for one of them to get a significant error during a flight. The new code currently switches between the two GPS modules based on the lock status and number of satellites, but we are working on a more sophisticated switching mechanism.

Supporting dual GPS has also made it easier to test new GPS modules. This has enabled us to do more direct comparisons between the Lea6 and the Neo7 for example, and found the Neo7 performs very well. It also helps with developing completely new GPS drivers, such as the Piksi driver (see notes below).

New AP_Mission library

Many months ago Brandon Jones re-worked our mission handling code to be a library, making it much cleaner and fixing a number of long term annoyances with the behaviour. For this release Randy built upon the work that Brandon did and created the new AP_Mission library.

The main feature of this library from the point of view of the developers is that it has a much cleaner interface, but it also has some new user-visible features. The one that many users will be glad to hear is that it no longer needs a "dummy waypoint" after a jump. That was always an annoyance when creating complex missions.

The real advantage of AP_Mission will come in future releases though, as it has the ability to look ahead in the mission to see what is coming, allowing for more sophisticated navigation. The copter code already takes advantage of this with the new spline waypoint feature, and we expect to take similar advantage of this in APM:Plane in future releases.

New Piksi GPS driver

One of the most exciting things to happen in the world of GPS modules in the last couple of years is the announcement by SwiftNav that they would be producing a RTK-capable GPS module called the Piksi at a price that (while certainly expensive!) is within reach of more dedicated hobbyists. It offers the possibility of decimetre and possibly even centimetre level relative positioning, which has a lot of potential for small aircraft, particularly for landing control and more precise aerial mapping.

This release of APM:Plane has the first driver for the Piksi. The new driver is written by Niels Joubert, and he has done a great job. It is only a start though, as this is a single point positioning driver. It will allow you to use your new Piksi if you were part of the kickstarter, but it doesn't yet let you use it in RTK mode. Niels and the SwiftNav team are working on a full RTK driver which we hope will be in the next release.

Support for more RC channels

This release is the first to allow use of more than 8 RC input channels. We now support up to 18 input channels on SBus on Pixhawk, with up to 14 of them able to be assigned to functions using the RCn_FUNCTION settings. For my own flying I now use a FrSky Taranis with X8R and X6R receivers and they work very nicely. Many thanks to the PX4 team, and especially to Holger and Lorenz for their great work on improving the SBus code.

Flaperon Support

This release is the first to have integrated flaperon support, and also includes much improved flap support in general. You can now set a FLAP_IN_CHANNEL parameter to give an RC channel for manual flap control, and set up a FLAPERON_OUTPUT to configure your ailerons for both manual and automatic flaperon control.

We don't yet have a full wiki page on setting up flaperons, but you can read about the parameters here:

http://plane.ardupilot.com/wiki/arduplane-parameters/#Flap_input_channel_ArduPlaneFLAP_IN_CHANNEL

Geofence improvements

Michael Day has made a number of significant improvements to the geo-fencing support for this release. It is now possible to enable/disable the geofence via MAVLink, allowing ground stations to control the fence.

There are also three new fence control parameters. One is FENCE_RET_RALLY which when enabled tells APM to fly back to the closest rally point on a fence breach, instead of flying to the centre of the fence area. That can be very useful for more precise control of fence breach handling.

The second new parameter is FENCE_AUTOENABLE, which allows you to automatically enable a geofence on takeoff, and disable when doing an automatic landing. That is very useful for fully automated missions.

The third new geofence parameter is FENCE_RETALT, which allows you to specify a return altitude on fence breach. This can be used to override the default (half way between min and max fence altitude).
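Putting the return-altitude behaviour together, the selection logic reads something like the sketch below. This is illustrative only; in particular, treating a zero FENCE_RETALT as "use the default" is my assumption for the example, not confirmed behaviour:

```python
def fence_return_altitude(fence_minalt, fence_maxalt, fence_retalt=0):
    # FENCE_RETALT overrides the breach return altitude when set
    # (assumed here: zero means "unset"); otherwise fall back to the
    # default of halfway between the min and max fence altitudes.
    if fence_retalt > 0:
        return float(fence_retalt)
    return (fence_minalt + fence_maxalt) / 2.0
```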

Automatic Landing improvements

Michael has also been busy on the automatic landing code, with improvements to the TECS speed/height control when landing and new TECS_LAND_ARSPD and TECS_LAND_THR parameters to control airspeed and throttle when landing. This is much simpler to setup than DO_CHANGE_SPEED commands in a mission.

Michael is also working on automatic path planning for landing, based on the rally points code. We hope that will get into a release soon.

Detailed Pixhawk Power Logging

One of the most common causes of issues with autopilots is power handling, with poor power supplies leading to brownouts or sensor malfunction. For this release we have enabled detailed logging of the information available from the on-board power management system of the Pixhawk, allowing us to log the status of 3 different power sources (brick input, servo rail and USB) and log the voltage level of the servo rail separately from the 5v peripheral rail on the FMU.

This new logging should make it much easier for us to diagnose power issues that users may run into.

New SERIAL_CONTROL protocol

This release adds a new SERIAL_CONTROL MAVLink message which makes it possible to remotely control a serial port on a Pixhawk from a ground station. This makes it possible to do things like upgrade the firmware on a 3DR radio without removing it from an aircraft, and will also make it possible to attach to and control a GPS without removing it from the plane.

There is still work to be done in the ground station code to take full advantage of this new feature and we hope to provide documentation soon on how to use u-Blox uCenter to talk to and configure a GPS in an aircraft and to offer an easy 3DR radio upgrade button via the Pixhawk USB port.

Lots of other changes!

There have been a lot of other improvements in the code, but to stop this turning into a book instead of a set of release notes I'll stop the detailed description there. Instead here is a list of the more important changes not mentioned above:

• raised default LIM_PITCH_MAX to 20 degrees
• support a separate steering channel from the rudder channel
• faster mission upload on USB
• new mavlink API for reduced memory usage
• fixes for the APM_OBC Outback Challenge module
• fixed accelerometer launch detection with no airspeed sensor
• greatly improved UART flow control on Pixhawk
• added BRD_SAFETYENABLE option to auto-enable the safety switch on PX4 and Pixhawk on boot
• fixed pitot tube ordering bug and added ARSPD_TUBE_ORDER parameter
• fixed log corruption bug on PX4 and Pixhawk
• new Replay tool for detailed log replay and analysis
• flymaple updates from Mike McCauley
• fixed norm_input for cruise mode attitude control
• added UBX monitor messages for detailed hardware logging of u-Blox status
• added MS4525 I2C airspeed sensor voltage compensation

I hope that everyone enjoys flying this new APM:Plane release as much as we enjoyed producing it! It is a major milestone in the development of the fixed wing code for APM, and I think puts us in a great position for future development.

Happy flying!

## April 07, 2014

Today was a really good day, right up until the end, when it wasn't so good, but could have been a whole lot worse, so I'm grateful for that.

I've been wanting to walk out to King Island at low tide with Zoe for a while, but it's taken about a month to get the right combination of availability, weather and low tide timing to make it possible.

Today, there was a low tide at about 10:27am, which I thought would work out pretty well. I wasn't sure if the tide needed to be dead low to get to King Island, so I thought we could get there a bit early and possibly follow the tide out. I invited Megan and Jason to join us for the day and make a picnic of it.

It turned out that we didn't need a really low tide: the sand bar connecting King Island to Wellington Point was well and truly accessible well before low tide, so we headed out as soon as we arrived.

I'd brought Zoe's water shoes, but from looking at it, I thought it would be walkable in bare feet. We got about 10 metres out on the sand and Zoe started freaking out about crabs. I think that incident with the mud crab on Coochiemudlo Island has left her slightly phobic of crabs.

So I went back to Jason's car and got her water shoes. I tried to allay her fears a bit by sticking my finger in some of the small holes in the sand, and even got her to do it too.

I'm actually glad that I did get her water shoes, because the shell grit got a bit sharp and spiky towards King Island, so I probably would have needed to carry her more than I did otherwise.

Along the way to the island we spotted a tiny baby mud crab, and Zoe was brave enough to hold it briefly, so that was good.

We walked all the way out and partially around the island and then across it before heading back. The walk back was much slower because there was a massive headwind. Zoe ran out of steam about half way back. She didn't like the sand getting whipped up and stinging her legs, and the wind was forcing the brim of her hat down, so I gave her a ride on my shoulders for the rest of the way back.

We had some lunch after we got back to Wellington Point, and Zoe found her second wind chasing seagulls around the picnic area.

After an ice cream, we went over to the playground and the girls had a great time playing. It was a pretty good park. There was this huge tree with a really big, thick, horizontal branch only about a metre or two off the ground. All the kids were climbing on it and then shimmying along the branch to the trunk. Zoe's had a few climbs in trees and seems not afraid of it, so she got up and had a go. She did really well and did a combination of scooting along, straddling the branch and doing a Brazilian Jiu-Jitsu-style "bear crawl" along the branch.

It was funny seeing different kids' limits. Zoe was totally unfazed by climbing the tree. Megan was totally freaking out. But when it came to walking in bare feet in an inch of sea water, Zoe wanted to climb up my leg like a rat up a rope, in case there were crabs. Each to their own.

Zoe wanted to have a swim in the ocean, so I put her into her swimsuit, but had left the water shoes back in the car. Once again, she freaked out about crabs as soon as we got ankle deep in the water, and was freaking out Megan as well, so the girls elected to go back to playing in the park.

After a good play in the park, we headed back home. We'd carpooled in Jason's truck, with both girls in the back. I'd half expected Zoe to fall asleep on the way back, but the girls were very hyped up and had a great time playing games and generally being silly in the back.

When we got back to our place, Jason was in need of a coffee, so we walked to the Hawthorne Garage and had coffee and babyccinos, before Megan and Jason went home.

It was about 3:30pm at this point, and I wanted to make a start on dinner. I was making a wholemeal pumpkin quiche, which I've made a few times before, and I discovered we were low on linseed. I thought I'd push things and see if Zoe was up for a scooter ride to the health food shop to get some more and kill some time.

She was up for it, but ran out of steam part way across Hawthorne Park. Fortunately she was okay with walking and didn't want me to carry her and the scooter. It took us about an hour to get to the health food shop.

Zoe immediately remembered the place from the previous week where we'd had to stop for a post-meltdown pit stop and declared she needed to go to the toilet again.

We finally made it out of the shop. I wasn't looking forward to the long walk back home, but there were a few people waiting for a bus at the bus stop near the health food shop, and on checking the timetable, the bus was due in a couple of minutes, so we just waited for the bus. That drastically shortened the trip back.

Zoe managed to drop the container of linseed on the way home from the bus stop, but miraculously the way it landed didn't result in the loss of too much of the contents; it just split the container. So I carefully carried it the rest of the way home.

By this stage it was quite a bit later than I had really wanted to be starting dinner, but we got it made, and Zoe really liked the pumpkin quiche, and ate a pretty good dinner.

It was after dinner when things took a turn for the worse.

Zoe was eating an ice block for dessert, and for whatever reason, she'd decided to sit in the corner of the kitchen next to the dishwasher, while I was loading it. I was carrying over one of the plates, and the knife managed to fall off the plate, bounce off the open dishwasher door and hit her in the mouth, splitting her lip.

Zoe was understandably upset, and I was appalled that the whole thing had happened. She never sits on the kitchen floor, let alone in the corner where the dishwasher is. And this knife came so close to her eye.

Fortunately the lip didn't look too bad. It stopped bleeding quickly, and we kept some ice on it and the swelling went down.

I hate it when accidents happen on my watch. I feel like I'm fighting the stigma of the incompetent single Dad, or the abusive single Dad, so when Zoe sustains an injury to the face like a fat lip, which could be misinterpreted, I, well, really hate it. This was such a freak accident, and it could have gone so much worse. I'm just so glad she's okay.

Zoe recovered pretty well from it, and I was able to brush her teeth without aggravating her lip. She went to bed well, and I suspect she's going to sleep really well. It's a bit cooler tonight, so I'm half-expecting a sleep in in the morning with any luck.

For many years, MySQL had only supported a small part of UTF-8: the section commonly referred to as plane 0, the “Basic Multilingual Plane”, or the BMP. Unicode is divided into “planes”, and plane 0 contains the most commonly used characters. For a long time, this was reasonably sufficient for MySQL’s purposes, and WordPress made do with this limitation.

It has always been possible to store all UTF-8 characters in the latin1 character set, though latin1 has shortcomings. While it recognises the connection between upper and lower case characters in Latin alphabets (such as English, French and German), it doesn’t recognise the same connection for other alphabets. For example, it doesn’t know that ‘Ω’ and ‘ω’ are the upper and lower-case versions of the Greek letter omega. This creates problems for searching text, when you generally want to match characters regardless of their case.

With the release of MySQL 5.5, however, the utf8mb4 character set was added, and a whole new world opened up. Plane 1 contains many practical characters from historic scripts, music notation and mathematical symbols. It also contains fun characters, such as Emoji and various game symbols. Plane 2 is dedicated to CJK Ideographs, an attempt to create a common library of Chinese, Japanese and Korean characters.
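The practical consequence is byte length: BMP characters need at most three bytes in UTF-8, while plane 1 and 2 characters need four, which is exactly what MySQL's legacy three-byte utf8 type cannot store. A quick illustration in Python:

```python
# BMP characters encode to 1-3 bytes in UTF-8; anything in plane 1 or
# above (such as Emoji) needs 4 bytes, which MySQL's legacy 3-byte
# "utf8" character set cannot store -- hence utf8mb4.
for ch in ("A", "\u03c9", "\U0001F600"):  # 'A', Greek omega, grinning-face Emoji
    print("U+%05X encodes to %d bytes" % (ord(ch), len(ch.encode("utf-8"))))
```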

For many websites, being able to use Emoji without installing an extra plugin is an excellent reason to switch your WordPress database to utf8mb4, but unfortunately it’s not quite that simple. MySQL still has a few more limitations that cause problems with utf8mb4.

Without further ado, here’s how to configure MySQL, so that WordPress can use utf8mb4. If you don’t have the ability to configure your MySQL server directly, you should speak to your host. If they don’t want to, it’s probably time to look for a new host.

You need to be running MySQL 5.5.14 or higher. If you’re not already running at least MySQL 5.5 (ideally 5.6), you should be doing that anyway, as it provides significant performance and stability improvements over previous versions. For help with upgrading MySQL, check out the MySQL manual.

### Configure MySQL

Before we convert your tables, we need to configure MySQL correctly.

In your my.cnf file, add the following settings to the [mysqld] section. Remember to double check that you’re not duplicating settings in your my.cnf file – if any of these options are already set to something different, you’ll need to change that setting, rather than add a new setting.

default-storage-engine=InnoDB

innodb-file-format=barracuda
innodb-file-per-table=true
innodb-large-prefix=true

collation-server=utf8mb4_unicode_ci
character-set-server=utf8mb4


### Use InnoDB

Next, convert your WordPress tables to InnoDB and utf8mb4:

ALTER TABLE wp_posts ENGINE=InnoDB ROW_FORMAT=DYNAMIC;
ALTER TABLE wp_posts CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;


You’ll need to run these two queries for each WordPress table; I used wp_posts as an example. This is a bit tedious, but the good news is that you’ll only ever need to run them once. A word of warning: you should be prepared for some downtime if you have particularly large tables.
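If typing those out for every table is too tedious, the statements can be generated. Here's a rough Python sketch; the table list assumes a default "wp_" prefix and only the core tables, so adjust it for your install and any plugin tables:

```python
def conversion_statements(tables, charset="utf8mb4", collation="utf8mb4_unicode_ci"):
    """Yield the two ALTER TABLE statements needed to convert each table."""
    for table in tables:
        yield "ALTER TABLE %s ENGINE=InnoDB ROW_FORMAT=DYNAMIC;" % table
        yield "ALTER TABLE %s CONVERT TO CHARACTER SET %s COLLATE %s;" % (
            table, charset, collation)

# Core tables for a default "wp_" prefix; a real install may have more.
core_tables = [
    "wp_commentmeta", "wp_comments", "wp_links", "wp_options", "wp_postmeta",
    "wp_posts", "wp_term_relationships", "wp_term_taxonomy", "wp_terms",
    "wp_usermeta", "wp_users",
]
for statement in conversion_statements(core_tables):
    print(statement)
```

You can then paste the output into your SQL console, or pipe it straight into the mysql client.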

### Configure WordPress

Finally, you can tell WordPress to use utf8mb4 by changing your DB_CHARSET setting in your wp-config.php file.

define( 'DB_CHARSET', 'utf8mb4' );


And there you have it. I know, it’s not pretty. I’d really like to add this to WordPress core so you don’t need to go through the hassle, but currently only a very small percentage of WordPress sites are using MySQL 5.5+ and InnoDB – in order to justify it, we need to see lots of sites upgrading! You can head on over to the core ticket for further reading, too – log in and click the star at the top to show your support; there’s no need to post “+1” comments.

Oh, and a final note on Emoji – Chrome support is pretty broken. There’s an extension to add Emoji to Chrome, but it interferes with WordPress’ post editor. If you really want to use Emoji in your posts, Safari or Firefox would be better options.

Dug out my LiPo 3S batteries on the weekend with the intent of charging them up for some use in the Traxxas E-Revo brushless. Unfortunately it appeared they had gone bad. When I tried to revive one of them I found one of the cells had died; the other I never bothered tinkering with.

I’ve since written them off as failed and purchased a new set. Now I’m waiting for the new ones to arrive so I can have battery connectors soldered onto them.

## April 06, 2014

Disclaimer: The below is my personal opinion and does not represent the views of the 2015 LCA organising committee. Some details have been left out, stuff may change, names may be wrong, may contain nuts, etc.

In January 2015 the Linux.conf.au conference will be held in Auckland, New Zealand. Each year the conference brings together 600 (±100) Linux developers and users for 5 days of talks, chat and social events. LCA 2015 will be the 12th Linux.conf.au I’ve attended (every year since 2004) and the first I’ve helped organise. It will be the 3rd time the conference has been held in New Zealand.

Each year’s LCA is held in a different city by a group who bid for and run it. The Auckland team consists of a “core team” of about 10 under the overall lead of Cherie Ellis, plus another dozen “supporters” (including me). Other volunteers will be recruited closer to the time, and there are also external groups like the papers committee and people from Auckland University doing various jobs.

The majority of the conference will be held in the Owen G Glenn Building at Auckland University. It is a single big building with several large lecture theatres along with big central areas and smaller rooms. The current plan is for just about the whole conference proper to happen there.

Over half the attendees will probably stay at nearby student accommodation, which is cheap, close by, and lets people mingle with other attendees after-hours. There will also be some planned social events (like the conference dinner) elsewhere in Auckland.

Since January 2014, when Auckland was announced as the winning bid for 2015, the pace has gradually been picking up. Over 30 main positions have been filled (most with both a main and a backup person), the core team is meeting (usually online) weekly, and the second supporters meeting is coming up.

The amount of stuff to organise is pretty big. As well as the venues, there is food, travel, accommodation, swag, the programme, the websites, network, dinners, registration, etc etc. A huge amount of stuff which will take up many hours per week for the rest of 2014.

At the end of March there was a “Ghosts visit”, where half a dozen previous conference organisers (“Ghosts of conferences past”) came over for a weekend to look over the setup and talk to the group. The purpose is twofold: the Ghosts check that everything is on track and look for problems, while the 2015 organisers get to pick the Ghosts’ brains.

Large Brain possibly belonging to Ghost

Even the Ghosts’ event itself is a small test of the organisers’ ability. They have to fly in, meet, accommodate, host, feed and otherwise look after half a dozen people: a mini rehearsal for the full conference.

Apr 19 2014 12:30
Apr 19 2014 17:00
Location:

VPAC Training Room, 110 Victoria Street, Carlton South

Configuring the Gnome 3 Desktop, by Terry Kemp.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


There is a BTRFS bug in kernel 3.13 which is triggered by Kmail and causes Kmail index files to become seriously corrupt. Another bug in BTRFS causes a kernel GPF when an application tries to read such a file, which results in a SEGV being sent to the application. After that the kernel ceases to operate correctly for any files on that filesystem, and no command other than “reboot -nf” (hard reset without flushing write-back caches) can be relied on to work correctly. The second bug should be fixed in Linux 3.14; I’m not sure about the first one.

In the mean time I have several systems running Kmail on BTRFS which have this problem.

(strace tar cf - . | cat > /dev/null) 2>&1 | tail

To discover which file is corrupt I run the above command after a reboot. Below is a sample of the typical output, which shows that the file named “.trash.index” is corrupt. After discovering the file name I run “reboot -nf” and then delete the file (the file can be deleted on a clean system but not after a kernel GPF). Recently I’ve been doing this about once every 5 days, so on average each Kmail/BTRFS system has been getting disk corruption every two weeks. Fortunately every time the corruption has been on an index file, so I don’t need to restore from backups.

newfstatat(4, ".trash.index", {st_mode=S_IFREG|0600, st_size=33, …}, AT_SYMLINK_NOFOLLOW) = 0

openat(4, ".trash.index", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_NOFOLLOW|O_CLOEXEC) = 5

fstat(5, {st_mode=S_IFREG|0600, st_size=33, …}) = 0

+++ killed by SIGSEGV +++
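Picking the file name out of the tail by eye works, but it can also be scripted. A small sketch (assuming strace output shaped like the sample above) that pulls the path from the last openat() call before the crash:

```python
import re

def last_opened_file(strace_output):
    """Return the path from the last openat() call in the strace output,
    which in this failure mode is the corrupt file; None if no openat()
    call was seen."""
    last = None
    for line in strace_output.splitlines():
        match = re.search(r'openat\(\d+, "([^"]+)"', line)
        if match:
            last = match.group(1)
    return last

sample = """\
newfstatat(4, ".trash.index", {st_mode=S_IFREG|0600, st_size=33, ...}, AT_SYMLINK_NOFOLLOW) = 0
openat(4, ".trash.index", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_NOFOLLOW|O_CLOEXEC) = 5
fstat(5, {st_mode=S_IFREG|0600, st_size=33, ...}) = 0
+++ killed by SIGSEGV +++"""
print(last_opened_file(sample))  # .trash.index
```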

So it turns out that doing Jury Duty leaves you with a bit of thinking to do afterwards.

For those who don't know, I spent last week sitting on a jury that was dealing with a particularly not nice case. I won't go into details about which case: a) I'm not actually sure how much I am allowed to say, and b) in a way, it's not really that important. Suffice to say, it wasn't a minor case and it dealt with areas that were very uncomfortable.

First off I have to say I was massively impressed with the court staff. They were helpful, friendly and most important, understanding of the sort of pressures twelve people from very diverse backgrounds found themselves under.

Secondly, I probably couldn't have asked for a better group of people to be empanelled with. Each of us approached the case with what I could call a "professional" outlook. We were very conscious of the responsibility we bore and the possible consequences of our decision, whichever way it went.

Trying to look at the experience dispassionately, it was interesting. We turned up on Monday with about fifty other people, were shown a video about jury duty and assigned a jury number. Then we all filed into the court for the jury selection process, which consists of the Judge's assistant pulling 12 numbers from a box at random.

Once the jury box is full, the Defence and Prosecution teams get to challenge jurors they feel may not be the sort of person they want on the jury.

I was drawn early in the piece and "survived" the challenge process, which meant that I was now committed to however long the trial was going to take.

The next four days were a mix of boredom (procedural faffing looks exciting on TV, but tends to lead to yawns in real life), avid interest as evidence was presented, and finally apprehension as we were directed to retire to consider our verdict.

At the end of it, we delivered our verdict and, after being thanked by the Judge, were dismissed to rejoin the real world.

Except of course you can't just leave this sort of thing in the court house. For the last couple of days I've been swinging through a whole variety of emotions, ranging from relief that the experience was over through frustration, sadness, anger and pride that I was able to approach things dispassionately and with an eye to evidence over feeling.

Tomorrow I return to the real world. I'll be on the train at 6:42am and won't get back home until 6:50pm. In between, I'll be working with my colleagues, solving problems, writing code and generally getting back on track. However, I think it's going to be a while before jury duty really fades from my mind.


I've been going to the ACT Woodcraft Guild for the last year or so learning to turn wood on a lathe. I'm by no means an expert, but here are some of my early efforts.


## April 04, 2014

Zoe slept through last night, which was lovely. We had nothing really planned for today.

The Bulimba Library has a story time thing at 10:30am on a Friday. I've never bothered to take Zoe because it would have been too much of a hassle to get her there after her Brazilian Jiu-Jitsu class, particularly by bike. Since we didn't have a BJJ class today, getting to story time was much easier.

So we had a nice lazy start to the day, and then Zoe just wanted to procrastinate anyway, so since we were in no particular rush, I let her play in her room for a bit and I read for a while. We eventually had teeth and hair brushed and biked over to the library.

Not having been to the story time thing before, I assumed it was in the kids' area, but it turned out it was downstairs in the general purpose room, which we only discovered after story time had started. I'm certainly glad I never busted a gut to get to the library in time for it, because it wasn't anything particularly exciting. They did give all the kids a colouring sheet and there was some colouring and a couple of songs, but really it was nothing to write home about.

I did run into Jacob and his mum Laura from Kindergarten. They live locally, and so I invited them over for lunch and a play date. They had some grocery shopping to do, and I had to go to the pharmacy to get a prescription filled, so we agreed to meet at our place in about an hour.

We biked to the pharmacy to get my prescription filled and some more sunscreen for Zoe, and the pharmacist gave her a free Chupa Chups.

We biked home, and grabbed some balloons from the convenience store next door for our Science Friday activity. I was slightly more organised today, and figured out what I wanted to do in the morning while Zoe was watching TV. We did yeast and sugar and warm water in a bottle with a balloon on top and watched it inflate itself with all the carbon dioxide produced by the yeast eating the sugar.

The only problem was the balloons we bought were total rubbish. They'd been on the shelf for too long and they'd all stuck to themselves and either popped when I tried to blow them up or had holes in them. So we marched back to the corner store in our lab coats to get some more, and the store keeper gave Zoe one of the little flashlights she keeps playing with in the bowl at the check out.

We were able to complete our Science Friday activity before Laura and Jacob and his baby brother Ethan came over for lunch. Zoe didn't have a particularly good lunch; I think it was the distraction of having Jacob there as well.

After lunch, they couldn't really agree on anything to play. Laura said Jacob just plays out in the yard at Kindergarten the whole day. The one thing they both managed to agree on briefly was bubbles.

Ethan was getting tired and needed a nap by about 2pm, so they left. Zoe wanted to ride her scooter, but I had to clean up from lunch and the aftermath of the play date on the balcony first, so she went and played some games on her Nexus 7 while I did that, and then we headed out on the scooter.

It was about 2:30pm by this point, and the Hawthorne Markets, which have now moved to a Friday "twilight" thing, were due to start at 4pm, so I figured we could just kill time on the scooter.

It's funny how the scooter is now the in thing as of Tuesday. It's sat unused on the balcony for most of the year, but I'm glad she wants to use it now because it guarantees she moves faster than if she were to walk and want to be picked up every 5 minutes.

We scootered around to the playground and she played there until about 3:45pm, when we headed around to the markets.

We ran into Nicky Noo from the Ooniverse Family Cafe, who we'd met on Tuesday; she made Zoe another dog balloon, and Zoe got her face painted.

After that, it was the obligatory jumping castle for a while. It was getting time to head home for Sarah to pick her up when I dropped the dog balloon on the ground and it popped. That made Zoe very sad. Then the lack of a decent lunch kicked in. Zoe wanted some poffertjes, but they would have taken too long, so I gave her a few of the free samples to tide her over, and we headed home.

Sarah was waiting for her when we got home, so we parted ways at that point.

I like how the days that have nothing planned often end up as full as, if not fuller than, the days when I do have something planned.

## April 03, 2014

Zoe slept pretty well last night. She only woke up briefly at 4am because Cowie had fallen out of bed and she couldn't find her.

Today was the last Playgroup of the term. Megan, her little sister and her Dad came as well to check it out, which was nice, because Zoe then had someone to actively play with in addition to me.

After Playgroup, we went to the adjacent Bulimba Memorial Park with Megan, and then had some lunch at Grill'd. Megan's Dad wanted to do some work on their house while Megan's little sister napped, so I offered to give Megan a play date at our place.

The plan was to watch a movie and chill out. The girls picked Ratatouille and I made a batch of popcorn for them. Unfortunately Megan seemed to be less of a square eyes than Zoe, and she lost interest after a bit, so we stopped watching the movie and moved out to the balcony to do some craft.

Zoe had been wanting to make a crown for Mummy's boss for a while, so we made a couple of crowns with the hot glue gun. I had bought this bag of mixed craft "jewels" and it's probably the best single craft thing I've bought. Zoe loves gluing them onto everything.

After that, Zoe pulled out the bag of coloured foil confetti. If the gems were the best thing I've bought, this would have to be the worst. So far, all it's done is leak in the drawer it's been stored in, and I've been avoiding using it because it was going to be messy.

Today, Zoe wanted to glue it onto the outside of her cardboard box, so I decided to give in and embrace the mess, and boy, did we make a mess.

It probably ended up being the longest bit of cooperative play the girls did. They'd alternate between handing each other a fistful of confetti while I applied globs of glue where directed. Probably about 10 percent of each handful ended up stuck to the rocket, so the balcony looked like quite a mess by the end of it all, but at least it was a dry mess, so I could just vacuum it all up. I suspect I'll be encountering dregs for quite a while, because I doubt it's stuck to the cardboard particularly well.

After that, the girls played indoors for a bit, and watched a bit more of the movie, but Megan seemed to be scared of Anton Ego, so I think that was why it wasn't holding her attention.

The other activity that the girls seemed to thoroughly enjoy was tearing around the living room squealing while they took turns at throwing a grapefruit-sized beach ball at me, and I threw it back at them.

Jason came back to pick up Megan, and I started dinner. Not that long after Megan left, Sarah arrived to watch Zoe for me so I could go visit my cousin in hospital. I had dinner on the table pretty much as soon as she walked in the door, and headed out.

A couple of months ago, following the news of PayPal being partially responsible for a person’s identity theft, I activated Two Factor Authentication on my PayPal account. First up, I was fairly unimpressed with their configuration options. In order to use 2FA, my options were to buy a dongle to generate the security codes, or have the codes SMSed to me. Neither of these are particularly good – I don’t want to have to pay for and carry around a dongle everywhere, and SMS isn’t a secure protocol, as SIM cards can be cloned or hacked. If someone really wanted to get into my account, then this wouldn’t present much of a barrier.

Then, there’s the login process. For some reason, PayPal doesn’t automatically send me an SMS, I need to click an extra button for that while logging in. This isn’t so much a security problem as a weird UX. Also, the Android app doesn’t support 2FA, so I can’t use that at all.

The real fun started last night, however. I tried to login to my PayPal account, and was prompted to enter my security code. No problem, I clicked the Send SMS button, and waited. And waited. I clicked it again. Waited. Tried to login again, and repeated the process a few times. No luck.

Okay, so their SMS service was having issues. Apart from the security issues with SMS, it’s also a notoriously unreliable protocol, regularly causing problems exactly like this. While I was pondering this, I noticed there was an option to bypass the 2FA. I clicked the button, and was prompted to answer my two security questions: my favourite author, and my favourite movie. Unfortunately, I’d set these questions 10 years ago when I first created my PayPal account, and never thought about them since. It turns out that 22-year-old me had very different taste in film and literature than 32-year-old me, and I had no idea what the answers were. Defeated, I went to bed.

This morning, I decided to try again, with the same result. This time, I called their customer support centre, to see if they could at least give me an update on when SMSes would be working again. Unfortunately, it seemed the customer support representative wasn’t familiar with how PayPal’s 2FA worked, so after a bit of back-and-forth explaining the situation, the CSR said they’d “reset my account” (I don’t know what this means), and it should be working again in 15 minutes.

Half an hour later, still not working, so I call back. Fortunately, this CSR was aware of the SMS issues they were having, and was able to fill me in. Unfortunately, it seems PayPal hadn’t really thought about the implications of their policy for this situation, as he immediately offered to disable 2FA on my account for me.

I’ll just let that sink in for a moment. At this point, I’d only loosely identified myself – I had an identity code from the PayPal support site, that I was able to get with just my username and password. The support systems probably showed my current phone number as matching my 2FA phone number, but they shouldn’t be relying on that at all – the source phone number can be easily spoofed, Skype even offers this as a service.

Sadly, it’s clearly evident that PayPal’s 2FA is broken in a bunch of different ways. You can still keep your account secure by choosing a strong password, and making sure you only log in to your PayPal account on devices you trust.

Even if PayPal are in no hurry to mend their ways, here are some things developers can do to make sure their own 2FA systems are secure:

• Don’t offer SMS as the only option. SMS-based 2FA is okay for guarding against mass account hijacking, but cannot prevent a targeted attack. As we’ve seen, it’s also wildly unreliable.
• You should be using a standard method for generating your 2FA codes, such as RFC 6238, which is used by a bunch of different websites, like Google and WordPress.com.
• Make your 2FA system as easy to use as possible – your users should want to use it, because it doesn’t get in their way, but makes their account safe.
• Teach your support reps the 2FA mantra: “Something you have, and something you know”. In the case of PayPal, they’d already confirmed something I know (my password), so they could’ve easily confirmed something I have, like my ID or my credit card.
• If you’re going to use security questions, prompt your users to re-enter them occasionally, so they don’t forget.
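For reference, RFC 6238 codes are straightforward to generate: an HMAC-SHA1 over the number of 30-second steps since the Unix epoch, dynamically truncated to a few digits. A minimal Python sketch, checked against a test vector from the RFC's Appendix B:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the count of `step`-second intervals
    since the epoch, dynamically truncated (RFC 4226) to `digits` digits."""
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# time = 59 seconds -> expected 8-digit code 94287082.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

In a real deployment the shared secret is generated per user, shown once (usually as a QR code), and codes are compared with a constant-time comparison such as hmac.compare_digest.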

## April 02, 2014

I had a busy morning this morning. I had a small flood of emails in response to various inquiries I made yesterday, so it took me a while to get my inbox under control.

One of Zoe's Kindergarten teachers is returning to New Zealand as of the end of term 1, so they were having an afternoon tea for her today. Parents were requested to "bring a plate", so I thought I'd bake some spinach, pumpkin and feta muffins that I've been meaning to try making. After my chiropractic adjustment I got stuck into that.

I'd hoped to finalise my US taxes today, but I learned I could deduct my moving expenses, so now I have to dig up the documentation for them.

I had a later than usual massage, and drove directly from there to Kindergarten, muffins in hand. They'd omitted rest time today to facilitate the afternoon tea, so the upside of that was Zoe was happy.

Zoe's Brazilian Jiu-Jitsu teacher had said that she could try out one of the 4-7 year old classes, so for her last class of the 10 week block, we went to one of them this afternoon. I wasn't sure if today would be a good day for it or not, given she'd had Kindergarten and hadn't napped, but I gave her the option, and she wanted to do it.

It worked out well, aside from Zoe really not wanting to wear her gi (I managed to convince her to wear the pants). There were three other kids of varying school ages in the class, and Zoe followed the instructions really well.

I'm really glad we signed up for the classes. I'm pretty sure Zoe has enjoyed herself, and she's definitely bonded with the teacher. I'll miss it.

I can't believe how fast this term has gone by. There are now two weeks of "school holidays", during which I'll have to entertain Zoe all week. I'm not too worried about finding things to do; I'm more worried about contention with all the other kids on school holidays.

Zoe was pretty tired tonight, I'd say from the combination of Kindergarten, no nap and BJJ class. Nonetheless, she procrastinated all the way through the bedtime routine anyway. When I finally got her into bed, she fell asleep without a peep. She went on her second "bushwalk" at Kindergarten yesterday and picked up three mosquito bites, so I'm hoping they don't trouble her overnight.

I noticed that the Simpana 10 Advanced Client Properties – Firewall – Outgoing Routes user interface changed a little between SP5a and SP6. It may have happened in SP5b (I couldn’t test at the time), but I certainly saw the difference in our jump from SP5a to SP6.

Check out the pictures below to see what I mean. Looks like they relabelled some items.

Advanced Client Properties – Outgoing routes – SP5a

Advanced Client Properties – Outgoing routes – SP6

## April 01, 2014

The Guardian is collecting experiences from students regarding mental health at university. I must have missed this item earlier as there are only a few days left now to get your contribution in. Please take a look and put in your thoughts! It’s always excellent to see mental health discussed. It helps us and society as a whole.

Late last year I compared the prices of mobile providers after Aldi started getting greedy [1]. Now Aldi have dramatically changed their offerings [2] so at least some of the phones I manage have to be switched to another provider.

There are three types of use that are of interest to me. One is for significant use, that means hours of calls per month, lots of SMS, and at least 2G of data transfer. Another is for very light use, maybe a few minutes of calls per month where the aim is to have the lowest annual price for an almost unused phone. The third is somewhere in between – and being able to easily switch between plans for moderate and significant use is a major benefit.

Firstly please note that I have no plans to try and compare all telcos, I’ll only compare ones that seem to have good offers. Ones with excessive penalty clauses or other potential traps are excluded.

### Sensible Plans

The following table shows the minimum costs for plans where the amount paid counts as credit for calls and data; this makes it easy to compare those plans.

| Plan | Cost per min or SMS | Data | Minimum cost |
|------|---------------------|------|--------------|
| AmaySIM As You Go [3] | $0.12 | $0.05/meg; $19.90 for 2.5G in 30 days; $99.90 for 10G in 365 days | $10 per 90 days |
| AmaySIM Flexi [4] | $0.09 | 500M included; free calls to other AmaySIM users; $19.90 for 2.5G in 30 days; $99.90 for 10G in 365 days | $19.90 per 30 days |
| Aldi pre-paid [5] | $0.12 | $0.05/meg; $30 for 3G in 30 days | $15 per 365 days |

Amaysim has a $39.90 “Unlimited” plan which doesn’t have any specific limits on the number of calls and SMS (unlike Aldi “Unlimited”) [6]; that plan also offers 4G of data per month. The only down-side is that changing between plans is difficult enough to discourage people from doing so, but if you use your phone a lot every month then this would be OK. AmaySIM uses the Optus network.

### Which One to Choose

For my relatives who only rarely use their phones, the best options are the AmaySIM “As You Go” [3] plan, which costs $40 per 360 days, and the Aldi pre-paid, which costs $15 per year. Those relatives are already on Aldi and it seems that the best option for them is to keep using it.

My wife typically uses slightly less than 1G of data per month and makes about 25 minutes of calls and SMS. For her use, the best option is the AmaySIM “As You Go” [3] plan, which will cost her about $4 in calls per month plus $99.90 for 10G of data, which will last 10 months. That will average out to about $13 per month. It could end up being a bit less, because the 10G of data that can be used in a year gives an incentive to reduce data use, while previously with Aldi she had no reason to use less than 2G of data per month. Her average cost will be $11.30 per month if she can make 10G of data last a year.

The TeleChoice “Global Liberty Starter” [9] plan is also appealing, but it is a little more expensive at $20 per month; it would be good value for someone who averages more than 83 minutes of calls per month and also uses almost 1G of data.

Some of my relatives use significantly less than 1G of data per month. For someone who uses less than 166MB of billable data per month, the Aldi pre-paid rate of $0.05 per meg [5] is the best, but with a modern phone that does so many things in the background, and a plan that rounds up data use, it seems almost impossible to be billed for less than 300MB/month. Even when you tell the phone not to use any mobile data, some phones still do; on a Nexus 4 and a Nexus 5 I’ve found that the only way to prevent being billed for 3G data transfer is to delete the APN from the phone’s configuration. So it seems that the AmaySIM “As You Go” [3] plan with a 10G annual data pack is the best option.

One of my relatives needs less than 1G of data per month and not many calls, but needs to be on the Telstra network because their holiday home is out of range of Optus. For them the TeleChoice Global Liberty Starter [9] plan seems best.

I have been averaging a bit less than 2G of data transfer per month. If I use the AmaySIM “As You Go” [3] plan with the 10G data packs then I would probably average about $18 worth of data per month. If I could keep my average number of phone calls below $10 (83 minutes) then that would be the cheapest option. However, I sometimes spend longer than that on the phone (one client with a difficult problem can involve an hour on the phone). So the TeleChoice i28 plan looks like the best option for me; it gives $650 of included calls at a rate of $0.97 per minute plus a $0.40 connection fee (that’s $58.60 for an hour-long call – I can make 11 of those calls in a month) and 2G of data. The Telstra coverage is an advantage for TeleChoice: I can run my phone as a Wifi access point so my wife can use the Internet when we are out of Optus range.
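As a sanity check, the per-call arithmetic above can be sketched as follows (rates as quoted in the post; the helper name is mine):

```python
# Cost of a call on the TeleChoice i28 rates quoted above:
# $0.97 per minute plus a $0.40 connection fee, with $650 of included call value.
def call_cost(minutes, per_min=0.97, connection=0.40):
    return minutes * per_min + connection

hour_long_call = call_cost(60)               # 60 * 0.97 + 0.40 = $58.60
calls_per_month = int(650 / hour_long_call)  # 11 hour-long calls fit in $650
```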

Please let me know if there are any good Australian telcos you think I’ve missed or if there are any problems with the above telcos that I’m not aware of.

parenting and a haircut with a spot of painting

How's that for some alliteration?

Today was a really good day. And that's before I started drinking red wine.

I got up this morning and successfully banged out a 10km run. It wasn't pretty, but I did it in under an hour, so I was happy.

I got home, and after breakfast I pretty much flopped on the couch with my laptop and procrastinated instead of doing my taxes. But it was productive procrastination. I:

• booked flights to the US for our trip in July
• sought some quotes for outsourcing the production of Zoe's birthday cake
• made some pizza dough for dinner with Anshu
• booked a haircut for Zoe and I
• got taken hook, line and sinker by an April Fools joke
• found a couple of patent lawyers who will give me an initial consultation for free instead of charging me $250 plus GST (yay River City Labs)

I also (finally) got my taxes to a point where I'm ready to send them off to my US accountant and deal with the rest of it incrementally. So it was a productive day!

I had a follow-up appointment with my podiatrist in the afternoon to see how my orthotics were going. I biked to Kindergarten early, ditched the bike trailer, and then biked over to the podiatrist, and made it back to Kindergarten about 10 minutes before pick-up time.

Zoe was, unsurprisingly, fast asleep. I decided to try applying sunscreen to her while she was asleep as a way of killing two birds with one stone. I got as far as getting her legs done before she woke up and had a massive meltdown. Poor kid really doesn't deal well with being woken up.

One of the teachers took pity on us and distracted Zoe by letting her cuddle one of the baby chicks, which snapped her out of it for the duration, but she had another meltdown once it was over. Another teacher gave her a cuddle for a bit, and she eventually calmed down enough for me to get sunscreen on her arms.

I'd foolishly left the bike trailer separated from the bike, so I had to drag the trailer back to the bike whilst carrying Zoe. Fortunately another teacher took pity on us and helped me with the trailer. It turns out that trying to drag a single-wheeled trailer single-handed, whilst carrying a toddler and having excess sunscreen on my hands, is extremely difficult.

We finally got the trailer on the bike, and Zoe in the trailer, and headed towards home. Zoe's ballerina pumps aren't great on the bike because the straps on the pedals cross the tops of her exposed feet and irritate her, so there were multiple meltdowns on the way home, culminating in needing to go to the toilet "right now" before we got home. I stopped at the health food shop on the way home to see if they had a toilet we could use.
Luckily I'm a customer, and the naturopath let us use the toilet in the clinic. Zoe had another meltdown in there, announcing she "didn't like being woken up". Poor kid. It wasn't a good afternoon for her. I'm just glad I was in a sufficiently good mood to be able to deal with it all in a satisfactorily positive parenting way.

We finally made it home. I'd promised her we could have a big cuddle on the couch once we got home, so we did that and read a library book, and then it was time to head to the hairdresser for our haircuts.

We started out on foot, and had made it one block from home when she saw another kid on a scooter and announced she wanted to ride her scooter too. I initially tried saying we couldn't do it this time, because we'd be late, but she was on the verge of having another meltdown, so I capitulated, and we went back and grabbed it. I'm actually really glad I did, because she was as happy as Larry from that point on, and we were only a couple of minutes late to the hairdresser.

We did her fringe trim first, and then she had a great play in the kids' corner while I got my haircut. She even cleaned up the corner better than she found it, without argument.

We then had plenty of time to scooter back home, so I decided to check out the 'OO'niverse Family Cafe, which is next door to the Hawthorne Cinemas. It's a place I'd never gotten around to checking out, and it was the second thing I was glad I did this afternoon. It's not a big place, and it was super quiet; there was just us and two twin Kindergarten-aged girls being babysat. Zoe had a banana milkshake, got a dolphin painted on her arm and a balloon dog made, and I had a coffee, and we just chilled out for a bit while I chatted with the owner (who was babysitting the twins).

By this stage Anshu was already at my place, and Sarah wasn't far off leaving work to pick up Zoe, so we made our way back home. Zoe had a great time playing with Anshu until Sarah arrived.
So I was basically really happy that I managed to turn around a massively molten afternoon and give Zoe a really good afternoon instead.

Anshu and I then proceeded to make a couple of really fantastic pizzas. I really love my Thermomix. I made the pizza dough earlier today, and tonight I made some pizza sauce, some pesto sauce and caramelized some onions in it, and we were still done with dinner by 8pm.

I was asked to write up a MySQL-related post on how to back up MySQL with the Simpana 10 MySQL iDA and how to configure a test machine. I am currently working on this and should be posting it in the next month, so keep an eye out. It will follow along the same lines as the PostgreSQL one I posted earlier.

I've been referring to Anshu as "my girlfriend" in all my blog posts because I hadn't gotten around to writing this post. I've finally gotten around to it.

Anshu and I met at a speed dating event 8 months ago. I quite enjoyed the speed dating experience, and having done it, would prefer it over Internet dating. I think it helped that at the time I was working from home, getting above and beyond the amount of alone time that my introversion requires for me to recharge, so I was in the right frame of mind for it. I did pretty well: I got 6 matches from the night, one of which was Anshu.

Anshu is an Indian-Australian dual national. She emigrated about 12 years ago to do her Masters degree here, and decided to stay. This is my first inter-ethnic relationship, and it's been a very interesting expansion of my cultural horizons. Anshu is vegetarian, so I've expanded my vegetarian cooking repertoire significantly since we've been seeing each other. I already do "Meatless Monday" with Zoe, so it wasn't that difficult a transition for me.

## March 31, 2014

Zoe slept really well last night, and had a good breakfast of porridge this morning. We biked to Kindergarten for the first time in ages, as it wasn't raining. Drop-off went nice and smoothly.
I can't believe this is the last week of term 1 already.

Today was an exceptional day, because Sarah had the day off and picked up Zoe from Kindergarten instead of me. As Zoe is with her tonight, I got about 3 extra hours up my sleeve. The house was a bit of a mess, so I decided to swap today with my Wednesday "clean the house day" and use the extra time to do a more thorough clean. Part way through that, an acquaintance who recently separated from his wife dropped by for a chat. We ended up chatting for about 3 hours, so I dialed back my cleaning to something more standard.

My business debit card arrived in the mail today. It was exciting to see something with my name and my company name on it. I've scheduled a bank transfer to fund my business with the first loan I'll be making to it, so it'll have some cash as of the start of the second quarter. All I need now is for the cheque book to arrive, and I can go pay the patent lawyer a visit.

I had contemplated going for a run tonight before my yoga class, but I ended up faffing around trying to fix the song order on the USB stick that has all of Zoe's music on it. The new head unit isn't playing one album in the right order, and it's phenomenally annoying. To this end, I discovered fatsort, which is a godsend.

Yoga was in the new studio tonight for the first time. I'm really happy that my teacher is growing her business. The new studio is even closer to home than the old one, which is lovely.

Typing Animal wrote an interesting article about the dangers of stainless steel in a medical environment [1]. Apparently silver and copper are best due to the oligodynamic effect. Instead of stainless steel drinking bottles they should sell silver-plated drinking bottles for kids; I’m sure that lots of parents would pay extra for that.

Mark Kendall gave an interesting TED talk about a replacement for the hypodermic syringe in vaccinations [2].
His invention can reduce the cost of immunisation while increasing the effectiveness and avoiding problems with people who have a needle phobia.

Waleed Aly wrote an insightful article about George Brandis’ attempt to change the Racial Discrimination Act specifically to allow Andrew Bolt to be racist [10]. He describes it as “the whitest piece of proposed legislation I’ve encountered”, which is significant in a country with as much racism as Australia. Really we need stronger laws against racism; there should be no right to be bigoted.

A German court has ruled that “non commercial” licenses don’t permit non-commercial organisations to re-publish material [11]. This seems bogus to me; I’d be happy to have my non-commercially licensed work published by a non-commercial publishing organisation – just as long as they don’t run adverts on the page.

Susie Hill wrote an article about the SPARX computer game that is designed to treat adolescent depression [13]. They are working on a “rainbow” edition for GLBT kids and a version for Maoris. Unfortunately their web site is down right now, and the version at archive.org says that it’s currently only available to participants in a clinical trial.

Josh Sanburn wrote an article for Time about people in the Deep South who claim to be Christian giving away guns to encourage people to attend church [17]. This is the same part of the world where people who claimed to be Christian used their “religion” as an excuse for supporting slavery. I’m quitting bourbon; too much evil comes from that part of the world and I’m not buying anything that comes from there.

## March 30, 2014

Bitcoincerns — as in Bitcoin concerns! Get it? Hahaha.

Despite having an interest in ecash, I haven’t invested in any bitcoins. I haven’t thought about it in any depth, but my intuition says I don’t really trust it. I’m not really sure why, so I thought I’d write about it to see if I could come up with some answers.
The first thing about bitcoin that bothered me when I first heard about it was the concept of burning CPU cycles for cash — ie, set up a bitcoin miner, get bitcoins, …, profit. The idea of making money by running calculations that don’t provide any benefit to anyone is actually kind of offensive IMO. That’s one of the reasons I didn’t like hashcash-style proposals (such as Microsoft’s Penny Black) back in the day. I think that’s not actually correct, though: the calculations being run by miners are actually useful, in that they ensure the validity of bitcoin transfers.

I’m not particularly bothered by the deflationary expectations people have of bitcoin. The “wild success” cases I’ve seen for bitcoin estimate its value by hand-wavy arguments where you take a crazy big number, divide it by the 20M max bitcoins that are available, and end up with a crazy big number per bitcoin.

Here’s the argument I’d make: someday many transactions will take place purely online using bitcoin, let’s say 75% of all transactions in the world by value. Gross World Product (GDP globally) is $40T, so 75% of that is $30T per year. With bitcoin, each coin can participate in a transaction every ten minutes, so that’s up to about 52,000 transactions a year, and there are up to 20M bitcoins. So if each bitcoin is active 100% of the time, you’d end up with a GWP of 1.04T bitcoins per year, and an exchange rate of $28 per bitcoin, growing with world GDP. If, despite accounting for 75% of all transactions, each bitcoin is only active once an hour, multiply that figure by six for $168 per bitcoin. That assumes bitcoins are used entirely as a medium of exchange, rather than hoarded as a store of value.

If bitcoins got so expensive that a satoshi could only just represent a single Vietnamese Dong, then 21,107 satoshi would be worth $1 USD, and a single bitcoin would be worth $4737 USD.
You’d then only need 739k bitcoins each participating in a transaction once an hour to take care of 75% of the world’s transactions, with the remaining 19M bitcoins acting as a value store worth about $91B. In the grand scheme of things, that’s not really very much money. I think if you made bitcoins much more expensive than that, you’d start cutting into the proportion of the world’s transactions that you can actually account for, which would start forcing you to use other cryptocurrencies for microtransactions.
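The arithmetic in that valuation argument is easy to reproduce. A quick back-of-envelope sketch (figures from the post, which rounds $28.85 down to $28 and hence quotes $168 rather than ~$173 for the once-an-hour case):

```python
# Back-of-envelope bitcoin valuation, following the argument above.
GWP = 40e12                    # gross world product, USD per year
SHARE = 0.75                   # fraction of world transactions assumed in bitcoin
COINS = 20e6                   # maximum bitcoins (the post rounds 21M down to 20M)
TX_PER_COIN_PER_YEAR = 52_000  # one transaction per coin every ten minutes, approx.

flow = GWP * SHARE                                        # $30T/year through bitcoin
price_all_active = flow / (COINS * TX_PER_COIN_PER_YEAR)  # ~$28.85 per bitcoin
price_hourly = price_all_active * 6                       # coins active once an hour

# The Vietnamese Dong floor: 1 satoshi = 1 VND at roughly 21,107 VND per USD (2014).
price_dong_floor = 1e8 / 21_107                           # ~$4,738 per bitcoin
```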

Ultimately, I think you’d start hitting practical limitations trying to put 75% of the world’s transactions through a single ledger (ie hitting bandwidth, storage and processing constraints), and for bitcoin, that would mean having alternate ledgers which is equivalent to alternate currencies. That would involve some tradeoffs — for bitcoin-like cryptocurrencies you’d have to account for how volatile alternative currencies are, and how amenable the blockchains are to compromise, but, provided there are trusted online exchanges to convert one cryptocurrency into another, that’s probably about it. Alternate cryptocurrencies place additional constraints on the maximum value of bitcoin itself, by reducing the maximum amount of GWP happening in bitcoin versus other currencies.

It’s not clear to me how much value bitcoin has as a value store. Compared to precious metals, it is much easier to transport, much easier to access, and much less expensive to store and secure. On the other hand, it’s much easier to destroy or steal. It’s currently also very volatile. As a store of value, the only things that would make it better or worse than an alternative cryptocurrency are (a) how volatile it is, (b) how easy it is to exchange for other goods (liquidity), and (c) how secure the blockchain/algorithms/etc are. Of those, volatility seems like the biggest sticking point.

I don’t think it’s unrealistic to imagine wanting to store, say, $1T in cryptocurrency (rather than gold bullion, say), but with only 20M bitcoins, that would mean each bitcoin was worth at least $50,000. Given a current price of about $500, that’s a long way away — and since there are a lot of things that could happen in the meantime, I think high volatility at present is a pretty plausible outcome.

I’m not sure if it’s possible or not, but I have to wonder whether a bitcoin-based cryptocurrency designed to be resistant to volatility would be implementable. I’m thinking (a) a funded exchange guaranteeing a minimum exchange rate for the currency, and (b) a maximum number of coins and coin generation rate for miners that makes that exchange plausible. The exchange for, let’s call it “bitbullion”, should self-fund to some extent by selling new bitbullion at a price of 10% above guidance, and buying at a price of 10% below guidance (and adjusting guidance up or down slightly any time it buys or sells, purely in order to stay solvent).

I don’t know what the crypto underlying the bitcoin blockchain actually is. I’m surprised it’s held up long enough to get to where bitcoin already is, frankly. There’s nominally $6B worth of bitcoins out there, so it would seem like you could make a reasonable profit if you could hack the algorithm.
If there were hundreds of billions or trillions of dollars worth of value stored in cryptocurrency, that would be an even greater risk: being able to steal $1B would tempt a lot of people, being able to destroy$100B, especially if you could pick your target, would tempt a bunch more.

So in any event, the economic/deflation concerns seem assailable to me. The volatility not so much, but I’m not looking to replace my bank at the moment, so that doesn’t bother me either.

I’m very skeptical about the origins of bitcoin. The fact that it’s the first successful cryptocurrency, and also the first definitively non-anonymous one, is pretty intriguing in my book. Previous cryptocurrencies like Chaum’s ecash focussed on allowing Alice to pay Bob $1 without there being a record of anything other than that Alice is $1 poorer and Bob is $1 richer. Bitcoin does exactly the opposite, providing nothing more than a globally verifiable record of who paid whom how much at what time. That seems like a dream come true for law enforcement — you don’t even have to get a warrant to review the transactions for an account, because everyone’s accounts are already completely public. Of course, you still have to find some way to associate a bitcoin wallet id with an actual person, but I suspect that’s a challenge with any possible cryptocurrency.

I’m not quite sure what the status of the digicash/ecash patents are/were, but they were due to expire sometime around now (give or take a few years), I think.

The second thing that strikes me as odd about bitcoin is how easily it’s avoided being regulated to death. I had expected the SEC to decide that bitcoins are a commodity with no real difference to a share certificate, and that as a consequence they can only be traded using regulated exchanges by financial professionals, or similar. Even if bitcoins still count as new enough to only have gotten a knee-jerk regulatory response rather than a considered one (which, at $500 a pop and with significant mainstream media coverage, I doubt), I would have expected something more along the lines of “bitcoin trading is likely to come under regulation XYZ, operating or using an unregulated exchange is likely to be a crime, contact a lawyer” rather than “we’re looking into it”.
That makes it seem like bitcoin has influential friends who aren’t being very vocal in public, and conspiracy theories involving NSA and CIA/FBI folks suggesting leaving bitcoin alone for now might help fight crime, seem more plausible than ones involving Gates or Soros or someone secretly creating a new financial world order.

The other aspect is that it seems like there’s only really four plausible creators of bitcoin: one or more super smart academic types, a private startup of some sort, an intelligence agency, or a criminal outfit. It seems unlikely to me that a criminal outfit would create a cryptocurrency with a strong audit trail, but I guess you never know. It seems massively unlikely that a legitimate private company would still be secret, rather than cashing out. Likewise it seems unlikely that people who’d just done it because it seemed like an interesting idea would manage to remain anonymous still; though that said, cryptogeeks are weird like that.

If it was created by an intelligence agency, then its life to date makes some sense: advertise it as anonymous online cash that’s great for illegal stuff like buying drugs and can’t be tracked, sucker in a bunch of criminals to using it, then catch them, confiscate the money, and follow the audit trail to catch more folks. If that’s only worked for silk road folks, that’s probably pretty small-time. If bitcoin was successfully marketed as “anonymous, secure cryptocurrency” to organised crime or terrorists, and that gave you another angle to attack some of those networks, you could be on to something. It doesn’t seem like it would be difficult to either break into MtGox and other trading sites to gain an initial mapping between bitcoins and real identities, or to analyse the blockchain comprehensively enough to see through most attempts at bitcoin laundering.

Not that I actually have a problem with any of that. And honestly, if secret government agencies lean on other secret government agencies in order to create an effective and efficient online currency to fight crime, that’s probably a win-win as far as I’m concerned. One concern I do have, though, is that if a bunch of law-enforcement cryptonerds built bitcoin, they might also have a way of “turning it off” — perhaps a real compromise in the crypto that means they can easily create forks of the blockchain and make bitcoins useless, or just enough processor power that they can break it by brute force, or even just some partial results on how to break bitcoin that would destroy confidence in it, and destroy the value of any bitcoins. It’d be fairly risky to know of such a flaw and trust that it wouldn’t be uncovered by the public crypto research community, though.

All that said, if you ignore the criminal and megalomaniacal ideas for bitcoin, and assume the crypto’s sound, it’s pretty interesting. At the moment a satoshi is worth 5/10,000ths of a cent, which would be awesome for microtransactions if the transaction fee weren’t around 5c. Hmm, it looks like dogecoin probably has the right settings for microtransactions to work. Maybe I should have another go at the pay-per-byte wireless capping I was thinking of that one time… Apart from microtransactions, some of the conditional/multiparty transaction possibilities are probably pretty interesting too.

For a while now I’ve had multiple wifi routers, all providing access points and connections to each other, using a feature called WDS. All of the routers run OpenWRT. Recently one of them died and everything kind of stopped working properly. I actually had the following configuration: TP-LINK <--wired,bridged--> ASUS WL500G <--wireless,WDS,bridged--> Linksys […]

I went on to complete the backyard work and sort out another area down the side of the house. I just need to circle back and fix up some drainage, which I will do in the next 2-3 weeks, as I can’t make a mess of the backyard until after my son’s birthday party. I put some posts in on the side of the house so we can use them to block the dog from going down the side when we have events on in the backyard. It also keeps him from running down that side of the house when it’s wet and destroying it.

A few more pictures below:

## March 28, 2014

Today was another wet day. Sarah dropped Zoe around in the morning, and she watched a little bit of TV before we drove to Brazilian Jiu-Jitsu class. My cycling fitness is going to go completely to hell.

Zoe's really taken to her teacher, Patrick. The classes have pretty much been just Zoe and I on Fridays, with the occasional other kid, and so she's formed a pretty close relationship with her teacher. The last few weeks, Zoe's really liked to help Patrick set up and tidy up the space before and after the classes. She loves to tell Patrick about what's been going on in her life. It's really sweet to watch. Today there was another 3 year old boy, but he was much less focused than Zoe.

I will miss the classes, because I've seen real self-defense value in what they've been teaching. For a proper comparison I should check out some tae kwon do classes as well, but currently I'd be pretty happy with what I've seen taught in Brazilian Jiu-Jitsu. I can see the practical application of it. We're going to go watch Patrick compete in May, and I'm looking forward to seeing some adult-level stuff.

Megan's Dad was looking for something to do with his kids, so I suggested we meet at Lollipops Playland & Cafe in Cannon Hill after BJJ class. We were a bit late in getting there, mostly because Zoe wanted to hang around after class for a bit, but we got there eventually, and Zoe and Megan had a great time.

We stayed for lunch and the girls played a bit more. It's the first time I've used a play cafe since coming back to Australia, and I found it almost overwhelmingly loud and busy. These places must love it when it rains. It was nice that Zoe's at an age now where she can go off and play on her own. Having Megan there was a great help in that regard, but the girls did keep losing each other, although that wasn't a big problem for either of them.

While we were at the play cafe, I received an email saying that the Kindergarten working bee for tomorrow had been canceled on account of the wet weather, so that's opened up our Saturday significantly.

After that, we headed down to Bunkers to place the order for the bunk bed I want to get Zoe for her birthday. I'd expected with all the morning's activities, for her to pass out in the car immediately, but surprisingly she lasted the distance, and had a great time sampling all the different bunk beds at the store.

Even on the way back home, it took her a little while to fall asleep, so I extended the car ride by driving around the Port of Brisbane to see how much there was to see that might be interesting for Zoe when she's awake.

After that, we went to the car wash to get the car cleaned, because it's been a while. Zoe had a babyccino and we sorted through the DreamWorks cards we'd traded with Megan that morning.

It was time for dinner after the car wash, so we headed home, and Zoe watched some TV while I prepared dinner.

At bath time, I realised that we hadn't had time to do anything for Science Friday. Yesterday at Bunnings I bought a roll of clear tubing for the heck of it, so we made a siphon in the bath. Zoe thought the tube was great fun, and had a good time blowing bubbles in the bath with it. Tomorrow we'll have to use some bubble bath for added entertainment.

Bedtime went smoothly enough.

I am going to assume this is a test deployment, so I'll expect that you have installed your CentOS 5.10 x64 Linux system the way you want it. From that point, here is what I needed to do to get the distribution release of PostgreSQL working with the Simpana 10 PostgreSQL iDA to perform a backup. Some assumed knowledge is, of course, present.
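One note before the steps: step 7 below edits pg_hba.conf by hand in vi. If you'd rather make that change non-interactively, a hedged sed sketch like the following does the same flip from ident to md5. The helper name and the idea of scripting it are mine, not from the Simpana or PostgreSQL docs; try it on a copy of the file first, as your whitespace may differ.

```shell
#!/bin/sh
# flip_to_md5: change the trailing auth method on the "local all all ident"
# line of a pg_hba.conf-style file to md5. Writes a .bak backup alongside.
# Sketch only - run it against a copy of your real pg_hba.conf first.
flip_to_md5() {
  sed -i.bak '/^local[[:space:]]/ s/ident[[:space:]]*$/md5/' "$1"
}
```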

1. Install the PostgreSQL packages onto your CentOS client.

$ sudo yum install postgresql84 postgresql-server postgresql-devel

2. Start the PostgreSQL server for the first time. For the first run only, you need to run the initdb switch instead of start.

3. $ sudo service postgresql initdb

4. We should also enable the service to run at boot moving forward:

$ sudo chkconfig postgresql on

5. Before we change the authentication method below, we need to set a password that we know for the postgres user in the postgresql database. To do this, change to the postgres user, connect to the postgresql database, and update the password for the user to something we know:

$ sudo su -
# su - postgres
$ psql

6. Now, at the postgres prompt, update the password for the postgres user (unless you want to create your own user; I won't discuss how, just how to set the postgres user's password). Be sure to remember what you set the password to; it will be required later on.

postgres=# ALTER USER postgres WITH PASSWORD 'password';
ALTER ROLE
postgres=# \q

7. The PostgreSQL packages distributed with CentOS don't use md5 password authentication; they default to peer/ident based authentication. In this example we will flip this to md5 based authentication (we will touch on a peer/ident based authentication example in a later post). Perform the changes below to enable md5 authentication:

$ cd /var/lib/pgsql/data
$ sudo vi pg_hba.conf

Find the line at the bottom of the file that looks like the one below:

local   all   all   ident

Change the ident on the end to md5, and save the changes.

8. Now restart postgresql for the changes to take effect (required):

$ sudo service postgresql stop
$ sudo service postgresql start

9. Now you can test that this has worked by executing the command below as root, and when prompted for the postgres user's password, authenticating with the password set in step 6:

# psql -U postgres

If it worked, you will get the famous postgres=# prompt, at which you can enter \q to quit.

10. Next up, we need to enable archive logs. Edit the postgresql.conf file, which on a CentOS rpm-based install lives in /var/lib/pgsql/data, and add the lines below in the Archiving section:

archive_mode = on
archive_command = 'cp %p /var/postgresql/archive/%f'

Save those additions and move on.

11. Make sure to create the folder/destination used in the archive_command above, and ensure the postgres user can write to it.

12. Now restart postgresql for the changes to take effect (required):

$ sudo service postgresql stop

$ sudo service postgresql start

13. Install the Simpana PostgreSQL iDA.

14. Once installed, refresh the Simpana Console and attempt to create your PostgreSQL instance. See the dialog below for the values I used in this configuration. The username is of course the postgres user, with the password we configured in step 6. Note the archive log directory is the one we used in the archive_command string at step 10 too.

15. If everything goes to plan you should have your instance created, and you can now do configuration against the DumpBaseBackupSet subclient and/or FSBasedBackupSet subclient. For the difference between what each does, I recommend you review the documentation, as each backupset has its own unique capabilities. See the bottom of the Backup documentation page for explanations.

16. Assign a Storage Policy to each subclient and run a backup of each to confirm it works.

CommVault Documentation references:

## March 27, 2014

My yoga teacher was out sick this morning. I had grand plans of biking to the pool for a swim instead, but when my alarm went off and the weather outside was grey and miserable, an extra half hour lie-in seemed more attractive. I think I made the right choice, because I felt like a million bucks today.

I did all of the preparation to bake a batch of carrot and kale muffins before Zoe arrived, and we baked a batch as soon as she arrived and had them out of the oven in time to drive to Playgroup.

I was expecting a larger crowd today on account of the wet weather, but it turned out quite the opposite. That said, Zoe still had a good time. There's really no other kids her age though, so it still winds up being "play with Dad in a different environment".

My ABN came through yesterday, so after Playgroup we walked to the bank to give it to them and sort out a business credit card. Zoe was super well behaved while I did that, so we grabbed a fresh hot cross bun from Brumby's across the street afterwards.
I needed some more stamps, so I figured we could walk down to the post office at the other end of Oxford Street while we were there. That was slow going, but Zoe was enjoying walking with her umbrella in the rain. We got to the post office, and I discovered that postage is going up to 70 cents next week; they couldn't sell me 70 cent stamps yet, and buying 60 cent stamps would be pointless after Monday, so I left empty handed.

Zoe was eyeing off the umbrellas they had for sale in the post office. No sooner had we walked out of the post office than Zoe managed to walk all over her umbrella and totally destroy it. If I hadn't seen it happen, I'd have said she did it deliberately to get a new umbrella, but it really was an accident, so we had to turn around and buy one of the umbrellas from the post office.

We made our way back down Oxford Street, and stopped in the boutique toy shop there. Zoe was particularly enthralled by the musical jewellery boxes, and really wanted one. I negotiated with her for it to be a birthday present, and got it gift wrapped. She seemed fine with the idea of not being able to have it until her birthday.

We then made our way back to the car and drove home for a rather late lunch. I'm glad Zoe had the hot cross bun after the bank to keep her going, as she didn't seem to mind the late lunch at all. I made fritters with some left over corned beef, and then after we'd had lunch I thought we might as well get out of the house.

Zoe's Kindergarten has a working bee on Saturday (I'm expecting it'll get canceled due to the wet weather, though). Incidentally, I have no idea where the "bee" in "working bee" comes from. Megan's Dad is Welsh, and I was lost for words when it came to explaining what a working bee was. Anyway, I wanted to get some gardening gloves, so we went to Bunnings. I managed to find some cute little kids' ones as well, so Zoe can help.

After Bunnings, we went to the pet shop to get some more cat litter.
The pet shop seems to be a great source of entertainment for Zoe. She absolutely loved playing around with the hutches they had on display, and checking out all the fish and the aquarium accessories. We eventually made it out of the pet shop, and went around to the adjacent shopping mall in search of some craft supplies at some of the cheap shops there. It turns out the cheap shops are also a great source of entertainment.

By the time we were done there, it was time to get home so Sarah could pick Zoe up, so we headed home. There was enough time for Zoe to watch a little bit of TV before Sarah arrived.

So despite the weather, and no real plan for the day, we managed to completely fill the day, and Zoe had a great time. She seemed no worse for wear for powering through without a nap. There were a few small tantrums at Bunnings, but those mostly revolved around her tipping over her miniature shopping trolley.

We're excited to tell you that fellow LUGgers Chris Neugebauer and Craige McWhirter have laid the groundwork to put in a bid to bring Linux.conf.au back to Hobart for 2017!

Starting this year, Linux Australia began announcing the successful bids for future conferences on a two year cycle, so while we have plenty of time before an actual LCA would arrive in Hobart, we need to get our bid in now! The intention to bid was announced on the Linux Australia mailing list here.

If you would like to help out by volunteering to be part of the bid team, and later to help bring about a successful conference, please get in contact with Chris - you will find his email in the above announcement link, drop a note on the TasLUG mailing list, or drop by our IRC channel #taslug on FreeNode.

I came across an interesting condition today which took me a bit of testing to identify why the job would go into a pending state. This one relates to Simpana 10 on a Linux client with a File System iDA where a PrePost command is being executed.
In my test below, the script does nothing special; it's merely something to execute to show the behaviour. I've provided it below purely for reference.

[root@jldb1 bin]# cat pre-scan.sh
#!/bin/sh
# test
echo $1 $2 $3 $4 $5 $6 $7 $8 $9 >> /root/pre-scan.log
exit 0
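As an aside, the arguments Simpana passes to a PrePost command show up later in the startPrePostCmd log (`-bkplevel 1 -attempt 7 -job 21`). If you want the script to actually record them, a small helper like the following would do; this is my own sketch, not something from the CommVault documentation, and the real script would call it and then exit 0 so the phase can continue.

```shell
#!/bin/sh
# log_prepost_args: append a timestamped line containing whatever arguments
# the caller received to a log file. Sketch only - the log path is arbitrary.
log_prepost_args() {
  logfile="$1"; shift
  echo "$(date '+%Y-%m-%d %H:%M:%S') args: $*" >> "$logfile"
}
```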

The job goes pending, producing the errors and output below;

JPR (Job Pending Record)

Error Code: [7:75]

Description: Unable to run [/usr/local/bin/pre-scan.sh] on client.

Source: jwcs, Process: startPrePostCmd

[JobManager.log – commserve]

3024  d88   03/27 18:16:26 21  Scheduler  Set pending cause [Unable to run [/usr/local/bin/pre-scan.sh] on the client.                 ]::Client [jwcs] Application [startPrePostCmd] Message Id [117440587] RCID [0] ReservationId [0].  Level [0] flags [0] id [0] overwrite [0] append [0] CustId[0].
3024  118c  03/27 18:16:26 21  Scheduler  Phase [Failed] message received from jwcs.lab.heimic.net] Module [startPrePostCmd] Token [21:3:1] restartPhase [0]
3024  118c  03/27 18:16:26 21  JobSvr Obj Phase [3-Pre Scan] for Backup Job Failed. Backup will continue with phase [Pre Scan].

[startPrePostCmd.log - commserve]

4940  e4c   03/27 20:21:46 ### Init() - Initializing job control [token=21:3:7,cn=jwcs], serverName [jwcs.lab.heimic.net], ControlFlag [1], Job Id [21]
4940  e4c   03/27 20:21:47 ### Cvcl::init() - CVCL: Running in FIPS Mode
4940  e4c   03/27 20:21:48 ### CVJobCtrlLog::registerProcess(): successfully created file [C:\Program Files\CommVault\Simpana\Base\JobControl\4.940]
4940  e4c   03/27 20:21:48 ### ::main() - jobId 21 - restoreTaskId = 0
4940  e4c   03/27 20:21:48 ### ::main() - jobId 21 - adminTaskId = 0
4940  e4c   03/27 20:21:48 ### ::getBackupCmdAndMachine() - jobId 21 - before construct application id
4940  e4c   03/27 20:21:49 ### ::getBackupCmdAndMachine() - appTypeId = 29
4940  e4c   03/27 20:21:49 ### ::getBackupCmdAndMachine() - jobId 21 - symbolic AppId = 2:20
4940  e4c   03/27 20:21:49 ### ::getBackupCmdAndMachine() - jobId 21 - prePostId = 1
4940  e4c   03/27 20:21:49 ### ::getBackupCmdAndMachine() - jobId 21 - preifind cmd = /usr/local/bin/pre-scan.sh
4940  e4c   03/27 20:21:49 ### ::main() - jobId 21 - commandPath = /usr/local/bin/pre-scan.sh
4940  e4c   03/27 20:21:49 21  ::main() - jobId 21 - before execute cmd
4940  e4c   03/27 20:21:49 21  ::main() - jobId 21 - Use Local System Acct.
4940  e4c   03/27 20:21:49 21  ::main() - jobId 21 - remoteexename = [/usr/local/bin/pre-scan.sh]
4940  e4c   03/27 20:21:49 21  ::main() - jobId 21 - args = [ -bkplevel 1 -attempt 7 -job 21]
4940  e4c   03/27 20:21:49 21  executePrePostCmd() -  Attempting to execute remote command on client [jldb1]..
4940  e4c   03/27 20:21:49 21  executePrePostCmd() - jobId 21 - Received error text from server cvsession [Unknown Error]
4940  e4c   03/27 20:21:49 21  executePrePostCmd() - jobId 21 - Error [0] returned from executeRemoteCommand /usr/local/bin/pre-scan.sh
4940  e4c   03/27 20:21:49 21  EvEvent::setMsgEventArguments() - MsgId[0x0700004b], Arg[1] = [117440623]
4940  e4c   03/27 20:21:49 21  EvEvent::setMsgEventArguments() - MsgId[0x0700004b], Arg[2] = [/usr/local/bin/pre-scan.sh]
4940  e4c   03/27 20:21:49 21  EvEvent::setMsgEventArguments() - MsgId[0x0700004b], Arg[3] = []
4940  e4c   03/27 20:21:49 21  EvEvent::setMsgEventArguments() - [MsgId[0x0700004b][]: [3] Args Pushed, [1] Args expected.
4940  e4c   03/27 20:21:49 21  ::exitHere() - jobId 21 - Exiting due to failure.
4940  e4c   03/27 20:21:49 21  BKP CALLED COMPLETE (PHASE Status::FAIL), 21. Token [21:3:7]
4940  e4c   03/27 20:21:53 21  ::exitHere() - jobId 21 - startPrePostCmd Terminating Event.
4940  238c  03/27 20:21:53 21  CVJobCtrlLog::unregisterProcess(): successfully removed file [C:\Program Files\CommVault\Simpana\Base\JobControl\4.940]

[cvd.log – client]

30846 427e0940 03/27 20:21:50 ### [CVipcD] Requests from non-CS with hostname [jwcs.lab.heimic.net] and clientname [jwcs] to execute in user entered path are not allowed

I worked out that this problem is caused by the sCSGUID regkey having no value, in the location below;

/etc/CommVaultRegistry/Galaxy/Instance001/CommServe/.properties

Sample below;

[root@jldb1 ]# cat /etc/CommVaultRegistry/Galaxy/Instance001/CommServe/.properties | more
bCSConnectivityAvailable 1
sCSCLIENTNAME jwcs
sCSGUID
sCSHOSTNAME jwcs.lab.heimic.net
sCSHOSTNAMEinCSDB jwcs.lab.heimic.net

sCSGUID should be populated; its lack of a value causes this condition with pre-scan script execution.
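A quick way to check for this condition without eyeballing the file is something like the following; this is a sketch of my own, not a CommVault tool, and the path to point it at is the .properties file shown above.

```shell
#!/bin/sh
# check_scsguid: print "populated" if the sCSGUID key in the given
# .properties file has a value after it, or "empty" if the key is bare.
# Sketch only - point it at
# /etc/CommVaultRegistry/Galaxy/Instance001/CommServe/.properties
check_scsguid() {
  if grep -Eq '^sCSGUID[[:space:]]+[^[:space:]]' "$1"; then
    echo "populated"
  else
    echo "empty"
  fi
}
```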

Fix:

The easiest way to recreate this regkey value is to do a local uninstall of the Simpana services on the client, revoke the client certificate in the Simpana Console (via Control Panel – Certificate Administration) for the client in question, and then reinstall.

Observation:

Subclients that have no scripts executed as part of the backup will run fine if this regkey value is missing; you will never see a problem until you add a script. In addition, clients that have a Simpana firewall configuration will be broken, and there even subclients without scripts will break, as I believe (based on my testing) the regkey value is used for the Simpana firewall configuration exchange.

Hope you enjoy my post… drop me a comment if you like the content and/or it helps you.

I am often asked about my “accent”. The most common guess is that it’s a “British” accent; while I lived in London for about a year, I don’t think my accent changed much during that time (people have commented on the way I speak since I was in primary school). Also, there isn’t a single “British accent” anyway: the Wikipedia page on Regional Accents of English has its first three sections devoted to accents within the island of Britain (and Northern Ireland is part of the United Kingdom, which is what people often mean when they say “Britain”). Received Pronunciation is the main BBC accent and the accent most associated with Britain/England/the UK (which are three different things, even though most people don’t know it), and I don’t think I sound like that at all.

I’ve had various other guesses, the Netherlands (where I lived for a few years but mostly spoke to other foreigners), New Zealand (which I’ve visited a couple of times for conferences), Denmark (the closest I got was attending a conference in Sweden), and probably others I can’t remember.

If I actually had developed an accent from another country then it would probably be from the US. The amount of time I’ve spent watching Hollywood movies and watching US TV shows greatly exceeds the amount of time I’ve spent listening to people from all other countries. The fact that among all the people who wanted to try and guess where my accent supposedly originated none have ever included the US seems like strong evidence to suggest that I don’t have any sort of accent that really derives from another country. Also I have never had someone mistake me for being a resident of their own country based on accent which seems like clear evidence that all claims about me having a foreign accent are bogus.

Autism forums such as WrongPlanet.net [1] always turn up plenty of results for a search on “accent”. In such discussions it seems that a “British accent” is the most common mistake, and there are often theories raised about why that is – often related to speaking in a formal or precise way, or to using a large vocabulary. Also in such discussions the list of countries that people supposedly have accents from is very inclusive; it seems that any country the listener has heard of but doesn’t know well is a good candidate. The fact that Aspies from outside the US are rarely regarded as having an American accent could be due to Hollywood having made most of the world’s population aware of what most American accents sound like.

Also if I really had some sort of accent from another country then probably someone would comment on that when I’m outside Australia. When I’m travelling people tend to recognise my accent as Australian, while it doesn’t please me when someone thinks that I sound like Crocodile Dundee (as happened in the Netherlands) it might not be entirely inaccurate.

The way the issue of accent is raised is generally in the form of people asking where I’m from, which seems to imply that they don’t think I belong in Australia because of the way I speak. It’s particularly annoying when people seem unable to realise that they are being obnoxious after the first wrong guess. When I reply “no” to the first “are you from $COUNTRY” question and don’t offer any further commentary, it’s not an invitation to play 20 questions regarding where I’m supposedly from; it’s actually an indication that I’m not interested in a conversation on that topic. A Social Skills 101 course would include teaching people that when someone uses one-word answers to your questions, it usually means that they either don’t like your questions or don’t want to talk to you.

### Social Skills vs Status

The combination of persistence and misreading a social situation involved when someone interrogates me about my supposed accent are both parts of the diagnostic criteria for Autism. But I generally don’t get questions about my “accent” in situations where there are many Aspies (IE anything related to the Free Software community). I think this is because my interactions with people in the Free Software community are based around work (with HR rules against being a jerk) and community events where no-one would doubt that I belong. I mostly get questions about my “accent” from random middle-class white people who feel entitled to query other people about their status, whom I meet in situations where there is nothing restraining them from being a jerk – for example, random people I meet on public transport.

## March 26, 2014

My late biological maternal grandmother ("Nana") remarried late in her life, to a long-time friend named Bryce. I was probably an early teenager at the time. He was a nice guy, and I kept in loose contact with him after my Nana passed away.
After my Nana passed away, he moved out of the retirement home he'd been living in with my Nana, and in with one of his sons. Sometime before I moved back to Australia, in ailing physical health, he moved from his son's place into Masonic Care's aged-care hostel in Sandgate.

He turned 90 last year. Mentally, he's doing pretty well. Physically, he's very wobbly on his legs. He's had a few falls, which were the main catalyst for moving from his son's place to the aged-care hostel. Other than that, he's in pretty good physical health.

I remember the first time I visited him in the hostel. After I left, I wept uncontrollably. Here was a man who was literally just waiting out the rest of his life in a small cupboard of a room. I was appalled at how small the room was, and the fact that he was just sitting around waiting to die really upset me.

I've visited him a few times since I've been back. I've taken him over to my parents' place when I've taken Zoe to visit them, just so he gets out. I should say that I'm sure his own family do spend some time with him, so it's not like he's spending all his time rotting in this place, but probably still a fair chunk of it. Growing old sucks.

Yesterday, when the weather forecast for today was looking pretty wet and miserable, I decided I'd use the day to take Zoe to Underwater World (which I've since learned has rebranded itself as "Sea Life Mooloolaba"). I had the presence of mind to call up Bryce yesterday to see if he'd like to join us today. We had to pass in his general direction to get up there, so it wasn't particularly out of my way. He informed me that he was now in a wheelchair, which I thought was fine for this excursion.

So this morning, after we got ourselves going, we stopped at Sandgate to pick up Bryce, and made it to Underwater World by about 10am.
I was a bit leery of the drive, because from home it was another 30 minutes on top of the drive to Wet and Wild, and 15 minutes on top of the drive to Sea World, so I wasn't sure how Zoe would take a car trip of that length. It turned out she took it pretty well. She started getting a bit restless in the last 30 minutes, but it was manageable.

I was a little apprehensive about how wrangling Zoe and looking after a frail 90 year old in a wheelchair was going to work out. It turned out just fine. I could leave Bryce wherever he was if I had to chase after Zoe, and Zoe quite liked helping push the wheelchair around. Towards the end of the day, when she got tired, I could just pop her in Bryce's lap and push the pair of them around.

It was a really good outing. I have only vague memories of visiting the place in my childhood, and it's become significantly better since then. Zoe really enjoyed going through the glass tunnels under the main ocean exhibit; we did several laps of that. We were fortunate enough to catch the sting ray feeding almost immediately upon arrival, and we also saw the seal show and made the otter feeding. The place was more focused on salt water aquatic life, hence the name, but there were also some freshwater exhibits.

I never thought that much of the Monterey Aquarium, much preferring the California Academy of Sciences' aquariums, especially in terms of drive time accessibility. If you ignore the freshwater/salt water diversity, I think Sea Life is even better than the California Academy of Sciences.

We left at about 2pm, and after a lot of hunting around, tracked down the photo they took when we entered, and then drove home closer to 3pm. To my surprise, Zoe didn't fall asleep immediately, but she did fall asleep on the way back to Bryce's place.
She woke up to say goodbye to him, and then we drove home, stopping off in the Valley to check my post office box along the way, and arrived back home about 15 minutes before Sarah arrived to pick her up.

It ended up being a very full day. Bryce really enjoyed himself, and I felt really happy that I was able to relatively easily brighten up his day. I've resolved to try another such outing again; I just need to figure out what to do.

I thought I'd try for a 10km run, but it started to rain at the 4km mark. I was also not feeling particularly confident about lasting the distance, so I decided to just turn it into a 5km run instead.

ISBN: 9781451638189 LibraryThing

This book follows on from Live Free or Die and Citadel. This time we focus solely on Dana as she is transferred to a new unit. The story is interesting, although perhaps it focusses on the dysfunction of the Latin American countries a little more than is really necessary. More interestingly, the book ends the series (as best as I can tell) in an unusual manner for a book like this, with the humans not winning a simple outright victory -- moral or otherwise. Overall, a fun light read.

Tags for this post: book john_ringo alien invasion combat troy_rising
Related posts: Citadel; Live Free or Die; Isaac Asimov's Robot City: Robots and Aliens: Humanity; Isaac Asimov's Robot City: Robots and Aliens: Maverick; Dragon's Egg; Starquake

One good thing about going to the MySQL UC is that you interact with many people and benefit from the social events in the evenings. In its heyday, I recall getting no more than 4 hours of sleep a night, because you're busy with people for up to 20 hours a day. Meetings, drinks, and the hallway track are all very interesting. That's the added value of going to an event besides just the learning.

Monday is open source appreciation day, and I know there will be drinks planned on Monday evening (31.03), at least from the CentOS Dojo crew.
Tuesday (01.04) brings the welcome reception (4.30-6.30pm), while Wednesday is the community dinner at Pedro's (7-10pm). MariaDB.com (SkySQL) has graciously offered to pay the first $500 of the bar bill, and as a Pedro's regular I can tell you the martinis are pretty good.

Thursday (03.04) is the Community Network Reception (5.30-8.30pm) with the awards and lightning talks, which is a must attend event. While not part of the conference, after the reception I’d personally head over to Taste restaurant for more community bonding.

Friday is sadly the day many of us will leave (I am no exception). I expect to be around the Hyatt most of the time, as well as at the Evolution Cafe/Bar (the hotel bar), which is where a lot of the conversations happen.

Bits of advice: drink plenty of water. It is costly in the hotel, but I'm sure you can be creative with getting a bottle and filling it regularly. Bring some cash – split dinners are hard to do with a credit card, so cash goes a long way. For the non-Americans reading this, have some dollar bills – tipping is customary. Bring plenty of business cards, and carry a notebook + pen in your pocket at all times (you will have a long list of action items post-conference, I'm sure of it).

Related posts:

A recent addition in Simpana 10's Oracle iDA over Simpana 9 is the ability to specify Media Parameters for RMAN Command Line Operations, which wasn't possible in Simpana 9.

Below is an example on its use, and the documentation link for review is here.

The client in this example is "jwora1", running Windows 2008 R2 x64 and an Oracle 11gR2 64-bit release. Simpana 10 with SP4 is installed on the client and on the CommServe, "jwcs".

RMAN Script:

run {
allocate channel ch1 type 'sbt_tape' PARMS="BLKSIZE=262144,ENV=(CVOraSbtParams=C:\p.txt,CvClientName=jwora1,CvInstanceName=Instance001)" trace 2;
backup current controlfile;
}

Contents of p.txt file below;

[sp]
SP_Main-jwma1

[mediaagent]
jwma1

Below is a look at the GUI configuration for the Oracle instance "orcl" on client "jwora1", which shows that third party command line backups should use Storage Policy (SP) "SP_Main-jwcs". However, as you will note from the running of the job with the Media Parameters, it uses a different SP and MediaAgent, as defined by the p.txt file I passed.

subclient not configured with any SP

orcl properties showing command line backup should use SP – SP_Main-jwcs by default.

orcl properties showing log backups would use SP – SP_Main_jwcs by default

sample execution of my rman backup script – current control file backup

Commserve Job Controller showing the running job. Note which MediaAgent is used and SP.

If you find my posts of value, please send me some feedback. Especially if you find this post and it helps you in your travels.

UPDATE: And to follow on from the example above, the following is also possible. If you don't pass CvClientName and CvInstanceName on the channel allocation, you can pull those from the parameters file too. Below is a sample of the alternative backup script syntax and parameters file contents, all documented at the documentation link at the top of this post.

RMAN Script:

run {
allocate channel ch1 type 'sbt_tape' PARMS="BLKSIZE=262144,ENV=(CVOraSbtParams=C:\p2.txt)" trace 2;
backup current controlfile;
}

Contents of p2.txt file:

[sp]
SP_Main-jwma1
[mediaagent]
jwma1
[CvClientName]
jwora1
[CvInstanceName]
Instance001

The parameter file can have blank lines between the definitions, as in the top example; I prefer that, as it makes the file easier to read. The p2.txt file has no extra blank lines, which also works, but I personally find it harder to read.
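Either layout can be read mechanically. As a hedged sketch (a helper of my own, not part of Simpana), the following pulls the value that follows a [section] header in a CVOraSbtParams-style file, tolerating the optional blank lines:

```shell
#!/bin/sh
# param_value: print the first non-blank line that follows the [section]
# header in a CVOraSbtParams-style parameters file. Works with or without
# blank lines between entries. Sketch only.
param_value() {
  awk -v s="[$2]" '$0 == s { want = 1; next } want && NF { print; exit }' "$1"
}
```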

Enjoy.

I wrote previously about Percona Live Santa Clara 2014, and I want to bring to your attention something Percona has done that is very nice for open source communities: holding an open source appreciation day.

It's before the conference (so on the Monday), and you get a choice between the CentOS Dojo (a great lineup there, including many from Red Hat, Monty from MariaDB, and PeterZ from Percona) or OpenStack Today (another great lineup). I'd split my time between both events if time permitted, except I'm flying in on that day.

I can highly recommend going to either, as registration (free) gets you access to the expo hall & keynotes as well. That's a saving of $75! Remember to register for the conference, where the discount code is still SeeMeSpeak. As a bonus, Serg and I have additional talks now, so there will be more MariaDB goodness at the conference. See you next week!

Related posts:

## March 25, 2014

There was quite the torrential downpour overnight. It woke me up, and it woke Zoe up too, and at about 1:45am she ended up in bed with me. I think she managed to invent a new baby sleep position, where she was on my pillow, perpendicular to me along the bed head, and had somehow ejected the pillow that was on her side of the bed.

We got up, with a slow start, and the weather was still looking a bit dicey, so Zoe wanted to go to Kindergarten by car. That actually meant we were the first ones there, because I'd been working to a timetable of leaving home by bike. One of the chickens was starting to hatch (and subsequently hatched around noon), so that was exciting. Funnily enough, Zoe had spent all morning telling me how she didn't want to go to Kindergarten, but by the time we got there, she didn't seem to mind being there all that much.

After I got home from Kindergarten, I biked down to Bulimba to go to the bank to finalise opening my business bank accounts. I've since discovered that one can't do much without an ABN - I can't even get a cheque book - so I've sicked my accountant onto that one for me.

I got stuck into my US taxes some more today, and made a very satisfactory amount of progress on them. I think I should be able to finish them off in the next session I get to work on them.

I felt like getting out of the house after that. It was looking quite like rain again, which ruled out picking up Zoe by bike, so I ran a few errands in the car before getting to Kindergarten quite early. Zoe was fast asleep, but I let her have a long, slow wake up, and that made our departure much easier.
She got to have a little hold of one of the baby chicks before we left. Today I learned that baby chicks smell absolutely divine.

We got home, and Zoe did some self-directed craft for a bit, and then wanted to play hide and seek, so we did that, and finally got around to looking at all of the Woolworths DreamWorks Heroes cards she's been collecting. I was disappointed to discover there's only a single card per pack, so that's going to mean I have to spend at least $840 on groceries, excluding duplicates, before we get all of them. I'm glad the checkout operators aren't sticking to the rules and are handing them out a little more generously than that.

After that, we did a bit more rough and tumble play, and then it was time to start making dinner, so Zoe watched a DVD.

We got dinner out of the way relatively early, so I practiced plaiting her hair (I've surprised myself with the half-decent job I can do) and then did bath time and bed time.

Bed time was a little protracted (she didn't like her bedroom and wanted to sleep in my bed) but otherwise seems to have gone smoothly.

I’ve just been given an Armourdillo Hybrid case for the Nexus 5 [1] to review. The above pictures show the back of the case, the front of the case, the stand, and the front of the case with the screen blank. When I first photographed the case the camera focused on a reflection of the window, I include that picture for amusement and to demonstrate how reflective the phone screen is.

This case is very hard. The green plastic is the soft inner layer, which is still harder than the plastic in a typical “gel case”. The black part is polycarbonate, which is very hard and also a little slippery. The case is designed with lots of bumps for grip (a little like the sole of a running shoe), so it’s not likely to slip out of your hand. But the polycarbonate slides easily on plastic surfaces such as the dash of a car. It’s fortunate that modern cars have lots of “cup holders” that can be used for holding a phone.

I haven’t dropped the phone since getting the new case, but I expect that the combination of a hard outer shell and a slightly softer inner shell (to cushion the impact) will protect it well. All the edges of the case extend above the screen so dropping the phone face down on a hard flat surface shouldn’t cause any damage.

The black part has a stand for propping the phone on its side to watch a movie. The stand is very solid and is in the ideal position for use on soft surfaces such as a doona or pillow for watching TV in bed.

### Appearance

This case is mostly designed to protect the phone, and the bumps that are used for grip detract from the appearance IMHO. I think that the Ringke Fusion case for my Nexus 4 [2] looks much better; it’s a trade-off between appearance and functionality.

My main criteria for this case were good protection (better than a gel case) and small size (not one of the heavy waterproof cases). It was a bonus to get a green case for the Enlightened team in Ingress. NB Armourdillo also offers a blue case for the Resistance team in Ingress as well as other colors.

## March 24, 2014

I got up this morning with the intent of knocking out a 10km run. I managed to last 8km today, so it's an improvement, but I don't know what's up with my running fitness at the moment.

After that, I pretty much did Debian stuff all day. I managed an upload of dstat and found a potential security bug in another of my packages when I was trying to update it, so I raised that issue with the package's upstream.

I also mostly sorted out opening a bank account for my company. I just have to visit the branch in person tomorrow.

Sarah had indicated to me that Zoe had slept poorly last night, on top of a big weekend, and that I should probably pick her up in the car, so I drove to Kindergarten expecting to find her fast asleep and not take too kindly to being woken. Instead, she was wide awake, not having napped at all.

The highlight of her day was that they had some baby chickens at Kindergarten. They had four day-old hatchlings, with more eggs in an incubator.

Megan wanted Zoe to have a coffee with her, so we stopped at the local coffee shop, with her Dad and little sister, for a babyccino on the way home.

I had a pretty big weekend away, and didn't feel up to doing the grocery shopping yesterday afternoon when I got home, so we went to the supermarket on the way home to do the weekly grocery shop. After we got home, I got stuck into making dinner while Zoe watched TV.

My girlfriend came around after work and joined us for dinner, and the three of us had a nice dinner together.

Zoe started showing signs of being particularly tired during dinner, and was a bit uncooperative around bath time, but we got through it all, and I managed to get her down to bed a little bit early, and she fell asleep without too much trouble. It's a fairly warm night. Hopefully she'll sleep well.

As my previous post documented, I’ve experimented with localbitcoins.com.  Following the arrest of two Miami men for trading on localbitcoins, I decided to seek legal advice on the situation in Australia.

Online research led me to Nick Karagiannis of Kelly and Co, who was already familiar with Bitcoin: I guess it’s a rare opportunity for excitement in financial regulatory circles!  This set me back several thousand dollars (in fiat, unfortunately), but the result was reassuring.

They’ve released an excellent summary of the situation, derived from their research.  I hope that helps other bitcoin users in Australia, and I’ll post more in future should the legal situation change.

I remember finding a website some time ago where you can generate your Debian sources.list file. Turns out I needed to do just that today, so I did a Google search and found the link.

Such a great idea and it works so nicely. Check it out here.

Kudos to the author.

## March 23, 2014

Well, it looks like CommVault Simpana 10 Service Pack 6 is now GA. It appears to have come out over the weekend. I guess some of us will be upgrading as time permits and change requests get approved.

Don’t forget to read the release notes…

Aldi Mobile has made a significant change to their offerings. They previously had an offer of $35 for “unlimited” calls and 2.5G of data in a month, for which they had to publicly apologise for misleading customers, as 2500 minutes of calls a month (83 minutes per day) is nowhere near unlimited [1]. They also had an offer of $15 for 2G of data.

The 2G data bolt-on was really good for some of my relatives; when they used that and configured their phones not to update software over 3G, they never had to ask me about any problems related to excess data use. So my mother-in-law is facing an extra $5 per month (or maybe more depending on data use) and more time spent calling me for tech support.

The data bolt-on that Aldi is going to offer in future is $30 for 3G of data, replacing the previous offer of $15 for 2G of data. The cost will be unchanged for anyone who uses between 2G and 3G a month; for everyone who uses less than 2G or more than 3G, the data bolt-on will cost more. There is simply no possibility for any Aldi data-only customer to save money. The only way someone who uses a moderate amount of data could save money is if they use more than 160 minutes of calls and less than 1G of data.

### Disclaimer

My analysis above is based on interpreting the Aldi web site. As with most telcos, they aren’t trying to make things easy in this regard; it seems that the consensus of opinion among telcos is to use complex pricing to make offers difficult to compare and so reduce competitive pressure. I blame any inaccuracies in my analysis on the Aldi web site.

### Why Aldi Shouldn’t Mislead Customers

Aldi isn’t primarily a mobile phone company; their main business is running a supermarket. The trust of customers is important to them. Raising prices when competition goes away is one thing, but misleading customers about it is another. If Aldi were to honestly say “now that Kogan Mobile no longer exists there is nothing forcing us to have low prices” then I’d have a lot more respect for their company and be more inclined to shop at their supermarket. It’s a sad indictment of our society that I need to include a “why lying is wrong” section in such a blog post.

I’ve now moved on to the next phase of work in the backyard. Below are a couple of photos that show the progress over the weekend.
I put the timber in a week or so earlier, and put in two trailer loads of soil to help bring up the level of the area. That was followed by a load of 20mm river pebbles today; it looks like we will certainly need more. On the flip side, our wheelbarrow has a badly damaged tube, which I will now need to repair before next weekend when I get another load. I think we will probably need another two loads, but we'll see how we go. Pictures below.

I was fixing up a neighbour's Macintosh, and I suspect that because the disk had been filled to 100% it ended up corrupted. When I run verify/repair in Disk Utility it seems to work, but we still see the free space reported by Finder and Disk Utility disagreeing. Check out the picture below to see what I mean. Never, and I repeat never, fill up your boot volume on OS X, as you will cause lots of problems. I suspect the machine that had this issue will need to be formatted and reinstalled to correct the condition.

Apr 1 2014, 19:00 to 21:00

Location: The Buzzard Lecture Theatre, Evan Burge Building, Trinity College, Melbourne University Main Campus, Parkville.

Bianca Gibson: Preventing Volunteer Burnout
Russell Coker: Current Status of BTRFS

The Buzzard Lecture Theatre, Evan Burge Building, Trinity College Main Campus, Parkville. Melways Map: 2B C5.

Notes: Trinity College's Main Campus is located off Royal Parade. The Evan Burge Building is located near the Tennis Courts. See our Map of Trinity College. Additional maps of Trinity and the surrounding area (including its relation to the city) can be found at http://www.trinity.unimelb.edu.au/about/location/map

Parking can be found along or near Royal Parade, Grattan Street, Swanston Street and College Crescent. Parking within Trinity College is unfortunately only available to staff.

For those coming via public transport, the number 19 tram (North Coburg - City) passes by the main entrance of Trinity College (get off at Morrah St, Stop 12).
This tram departs from the Elizabeth Street tram terminus (Flinders Street end) and goes past Melbourne Central. Timetables can be found on-line at: http://www.metlinkmelbourne.com.au/route/view/725

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue, VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

## March 22, 2014

I’ve been doing a bit of playing around with REST APIs lately, both at work and for my own amusement. One of the things that was frustrating me a bit was that actually accessing the APIs was pretty baroque — you’d have to construct urls manually with string operations, manually encode any URL parameters or POST data, then pass that to a requests call with params to specify auth and SSL validation options and possibly cookies, and then parse whatever response you get to work out if there’s an error and to get at any data. Not a great look, especially compared to XML-RPC support in python, which is what REST APIs are meant to obsolete. Compare, eg:

```python
server = xmlrpclib.Server("http://foo/XML-RPC")
print server.some.function(1, 2, 3, {"foo": "bar"})
```

with:

```python
base_url = "https://api.github.com/"
resp = requests.get(base_url + "/repos/django/django")
if resp.ok:
    res = resp.json()
else:
    raise Exception(resp.json())
```

That’s not to say the python way is bad or anything — it’s certainly easier than trying to do it in shell, or with urllib2 or whatever.
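That boilerplate is mechanical enough to suggest wrapping it; here's a toy sketch (my own stdlib-only Python 3 illustration, not any real library) of the trick of mapping attribute access onto URL path segments:

```python
import json
from urllib.request import urlopen  # a real client would use requests


class ToyBag:
    """Toy sketch: turn attribute access into URL path segments.

    ToyBag("https://api.github.com").repos.django.django() would GET
    https://api.github.com/repos/django/django and return parsed JSON.
    """

    def __init__(self, base):
        self._base = base.rstrip("/")

    def __getattr__(self, name):
        # Each attribute lookup appends one path segment.
        return ToyBag(self._base + "/" + name)

    def __call__(self):
        with urlopen(self._base) as resp:
            return json.load(resp)
```

This only shows the URL-building part; the genuinely fiddly bits (auth, query parameters, non-GET verbs, error handling) are exactly what a real client library still has to solve.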
But I like using python because it makes the difference between pseudocode and real code small, and in this case, the xmlrpc approach is much closer to the pseudocode I’d write than the requests code. So I had a look around to see if there were any nice libraries to make REST API access easy from the client side. Ended up getting kind of distracted by reading through various arguments that the sorts of things generally called REST APIs aren’t actually “REST” at all according to the original definition of the term, which was to describe the architecture of the web as a whole. One article that gives a reasonably brief overview is this take on REST maturity levels. Otherwise doing a search for the ridiculous acronym “HATEOAS” probably works. I did some stream-of-consciousness posts on Google-Plus as well, see here, here and here.

The end result was I wrote something myself, which I called beanbag. I even got to do it mostly on work time and release it under the GPL. I think it’s pretty cool:

```python
github = beanbag.BeanBag("https://api.github.com")
x = github.repos.django.django()
print x["name"]
```

As per the README in the source, you can throw in a session object to do various sorts of authentication, including Kerberos and OAuth 1.0a. I’ve tried it with github, twitter, and xero’s public APIs with decent success. It also seems to work with Magento and some of Red Hat’s internal tools without any hassle.

Things are different now. That’s certain. Or at least that’s what one of the marketing sites for my new employer has to say. Back in March I started at Red Hat’s Brisbane office working in release engineering (or the “Release Configuration Management” team). Short summary: it’s been pretty fun so far.

Googling just now for something to link that provides some sort of context, I came upon a video with my boss (John Flanagan) and one of my colleagues (Jesse Keating) — neither of whom I’ve actually met yet — giving a talk to the ACM chapter at Northeastern University in Boston.
(It’s an hour long, and doesn’t expect much background knowledge of Linux; but doesn’t go into anything in any great depth either.)

My aim in deciding to go job hunting late last year was to get a large change of scenery and get to work with people who understood what I was doing — it eventually gets a bit old being a black box where computer problems go in, solutions come out, and you can only explain what happens in between with loose analogies before seeing eyes glaze over. Corporate environment, Fedora laptop, enterprise customers, and a zillion internal tools that are at least new to me, certainly counts as a pretty neat change of scenery; and I think I’ve now got about five layers of technical people between me and anyone who doesn’t have enough technical background to understand what I do on the customer side. Also, money appears in my bank account every couple of weeks, without having to send anyone a bill! It’s like magic!

The hiring process was a bit odd — mostly, I gather, because while I applied for an advertised position, the one I ended up getting was something that had been wanted for a while, but hadn’t actually had a request open. So I did a bunch of interviews for the job I was applying for, then got redirected to the other position, and did a few interviews for that without either me or the interviewers having a terribly clear idea what the position would involve.
(I guess it didn’t really help that my first interview, which was to be with my boss’s boss, got rearranged because he couldn’t make it in due to water over the roads, and then Brisbane flooded; that the whole point of the position is that they didn’t have anyone working in that role closer than the Czech Republic is probably also a factor…)

As it’s turned out, that’s been a pretty accurate reflection of the role: I’ve mostly been setting my own priorities, which mostly means balancing between teaching myself how things work, helping out the rest of my team, and working with the bits of Red Hat that are local, or at least operate in compatible timezones. Happily, that seems to be working out fairly okay. (And at least the way I’ve been doing it isn’t much different to doing open source in general: “gah, this program is doing something odd. okay, find the source, see what it’s doing and why, and either (a) do something different to get what you want, or (b) fix the code. oh, and also, you now understand that program”)

As it turned out, that leads into the main culture shock I had on arriving: what most surprised me was actually the lack of differences compared to being involved in Debian — which admittedly might have been helped by a certain article hitting LWN just in time for my first day. “Ah, so that list is the equivalent of debian-devel. Good to know.” There’s a decent number of names that pop up that are familiar from Debian too, which is nice.

Other comfortingly familiar first day activities were subscribing to more specific mailing lists, joining various IRC channels, getting my accounts set up and setting up my laptop. (Fedora was suggested, “not Debian” was recommended ;)

Not that everything’s the same — there’s rpm/yum versus dpkg/apt obviously, and there’s a whole morass of things to worry about working for a public company.
But a lot of it fits into either “different to Debian, but not very” or “well, duh, Red Hat’s a for-profit, you have to do something like this, and that’s not a bad way of doing it”.

Hmm, not sure what else I can really talk about without at least running it by someone else to make sure it’s okay to talk about in public. I think there’s only a couple of things I’ve done so far that have gone via Fedora and are thus easy — the first was a quick python script to make publishing fedora torrents easier, and the other was a quick patch to the fedora buildsystem software to help support analytics. Not especially thrilling, though. I think Dennis is planning on throwing me into more Fedora stuff fairly soon, so hopefully that might change.

Reading through some of the comments from last year’s Linux Australia Survey, a couple struck me as interesting. One’s on Java:

> linux.conf.au seems to have a bias against Java. Since Java is an open source language and has a massive open source infrastructure, this has not made a lot of sense to me. It seems that Python, Perl, PHP, Ruby are somehow superior from an open source perspective even though they are a couple of orders of magnitude less popular than Java in the real world. This bias has not changed since openjdk and I’m guessing is in the DNA of the selectors and committee members. Hence *LUG* has lost a lot of appeal to me and my team. It would be good if there was an inclusive open source conference out there…

and the other’s more general:

> I appreciate LCA’s advocacy of open source, but I feel that a decoupling needs to be made in the mindshare between the terms “open source” and “Linux”. Unfortunately, for people involved in open source operating systems that aren’t Linux, we may feel slightly disenfranchised by what appears to be a hijacking of the term “open source” (as in “the only open source OS is linux” perception).
My impression is that bias is mostly just self-selection; people don’t think Java talks will get accepted, so don’t submit Java talks. I guess it’s possible that there’s a natural disconnect too: linux.conf.au likes to have deep technical talks on topics, and maybe there’s not much overlap between what’s already there and what people with deep technical knowledge of Java stuff find interesting, so they just go to other conferences. That said, it seems like it’d be pretty easy to propose either a mini-conference for Java, or BSD, or non-traditional platforms in general (covering free software development for, say, BSD, JVM, MacOS and Windows) and see what happens. Especially given Greg Lehey’s on the Ballarat organising team from what I’ve been told, interesting BSD related content seems like it’d have a good chance of getting in…

Martin Pool linked to an old post by Evan Miller on how writing tests could be more pleasant if you could just do the setup and teardown parts once, and (essentially) rely on backtracking to make sure it happens for every test. He uses a functional language for his example, and it’s pretty interesting. But it is overly indented, and hey, I like my procedural code, so what about trying the same thing in Python? Here’s my go at it. The code under test was the simplest thing I could think of — a primality checker:

```python
def is_prime(n):
    if n == 1:
        return False
    i = 2
    while i*i <= n:
        if n % i == 0:
            return False
        i += 1
    return True
```

My test function then tests a dozen numbers which I know are prime or not, returning True if is_prime got the right answer, and False otherwise.
It makes use of a magic "branch" function to work out which number to test:

```python
def prime_test(branch):
    if branch(True, False):
        n = branch(2, 3, 5, 7, 1231231)
        return is_prime(n)
    else:
        n = branch(1, 4, 6, 8, 9, 10, 12312312)
        return not is_prime(n)
```

In order to get all the tests run, we need a loop, so the test harness looks like:

```python
for id, result in run_tests(prime_test):
    print id, result
```

(Counting up successes, and just printing the ids of failures, would make more sense, probably.) In any event, the output looks like:

```
[True, 2] True
[True, 3] True
[True, 5] True
[True, 7] True
[True, 1231231] True
[False, 1] True
[False, 4] True
[False, 6] True
[False, 8] True
[False, 9] True
[False, 10] True
[False, 12312312] True
```

Obviously all the magic happens in run_tests, which needs to work out how many test cases there'll end up being, and provide the magic branch function which will give the right values. Using Python's generators to keep some state makes that reasonably straightforward, if a bit head-twisting:

```python
def run_tests(test_fn):
    def branch(*options):
        if len(idx) == state[0]:
            idx.append(0)
        n = idx[state[0]]
        if n+1 < len(options):
            state[1] = state[0]
        state[0] += 1
        vals.append(options[n])
        return options[n]
    idx = []
    while True:
        state, vals = [0, None], []
        res = test_fn(branch)
        yield (vals, res)
        if state[1] is None:
            break
        idx[state[1]] += 1
        idx[state[1]+1:] = []
```

This is purely a coding optimisation -- any setup and teardown in prime_test is performed each time; there's no caching. I don't think there'd be much difficulty writing the same thing in C or similar either -- there's no real use being made of laziness or similar here -- I'm just passing a function that happens to have state around rather than a struct that happens to include a function pointer. Anyway, kinda nifty, I think!
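For contrast with the all-True output above, the same backtracking trick can be pointed at deliberately broken code so that failing paths actually show up. A self-contained sketch (my own Python 3 restatement of the harness, not the original code):

```python
def run_tests(test_fn):
    # Backtracking harness, as described above, restated for Python 3.
    idx = []  # option index chosen at each branch depth
    while True:
        state, vals = [0, None], []  # [depth, deepest branch with options left]

        def branch(*options):
            if len(idx) == state[0]:
                idx.append(0)
            n = idx[state[0]]
            if n + 1 < len(options):
                state[1] = state[0]
            state[0] += 1
            vals.append(options[n])
            return options[n]

        yield (vals, test_fn(branch))
        if state[1] is None:
            break
        idx[state[1]] += 1
        del idx[state[1] + 1:]


def bad_abs(x):
    return x  # deliberately wrong for negative inputs


def abs_test(branch):
    x = branch(-2, -1, 0, 1, 2)
    return bad_abs(x) >= 0 and bad_abs(x) == bad_abs(-x)


for path, ok in run_tests(abs_test):
    print(path, ok)
```

Only the `[0]` path passes; the four other paths report False, each labelled with the choices that led there.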
(Oh, this is also inspired by some of the stuff Clinton was doing with abusing fork() to get full coverage of failure cases for code that uses malloc() and similar, by using LD_PRELOAD.)

One thing that keeps me procrastinating about writing programs I have in mind is doing up a user interface for them. It just seems like so much hassle writing GUI code or HTML, and if I just write for the command line, no one else will use it. Of course, most of the reason I don’t mind writing for the command line is that interaction is so easy, and much of that is thanks to the wonders of “printf”. But why not have a printf for GUIs? So I (kinda) made one:

```python
f, n = guif.guif("%t %(edit)250e \n %(button)b",
                 "Enter some text", "", "Press me!")
```

In theory, you can specify widget sizes using something like “%10,12t” to get a text box with a width of 10 and a height of 12, but it doesn’t seem to actually work at the moment, and might be pixel based instead of character based, which I’m not sure is a win. I was figuring you could say “%-t” for left aligned, and “%+t” for right aligned; and I guess you could do “%^t” for top and “%_t” for bottom alignment. I’ve currently just got it doing a bunch of rows laid out separately — you’d have to specify explicit widths to get things lined up; but the logical thing to do would be to use “\t” to automatically align things. It also doesn’t handle literals inside the format string, so you can’t say “Enter some text: %e\n%b”.

At the moment the two objects it returns are the actual frame (f), and a dictionary of named elements (n) in case you want to reference them later (to pull out values, or to make buttons actually do something, etc). That probably should be merged into a single object though.
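Format strings like these are simple enough to tokenise with a single regex. This is a hypothetical sketch of such a parser, not the actual guif internals; the three widget codes t/e/b and the optional (name) and width are taken from the examples above:

```python
import re

# %t = static text, %e = edit box, %b = button; each may carry an
# optional (name) and width, and "\n" in the format separates rows.
SPEC = re.compile(r'%(?:\((?P<name>\w+)\))?(?P<width>\d+)?(?P<kind>[teb])')


def parse_format(fmt):
    """Split a guif-style format string into rows of (name, width, kind)."""
    rows = []
    for line in fmt.split('\n'):
        widgets = [(m.group('name'), m.group('width'), m.group('kind'))
                   for m in SPEC.finditer(line)]
        if widgets:
            rows.append(widgets)
    return rows
```

On the example format above, `parse_format("%t %(edit)250e \n %(button)b")` yields two rows: an anonymous text widget plus a 250-wide edit box named "edit", then a button named "button".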
I guess what I’d like to be able to write is a complete program that creates and displays a simple gui with little more than:

```python
#!/usr/bin/env python
import guif, wx

f = guif.guif("Enter some text: %(edit)250e \n %(done)b",
              "", "Done!", stopon=("done", wx.EVT_BUTTON))
print "Hey, you entered %s!" % f.edit.GetValue()
f.Close()
```

I figure that should be little enough effort over the command line equivalent to be pleasant:

```python
#!/usr/bin/env python
import sys

print "Enter some text:",
x = sys.stdin.readline()
print "Hey, you entered %s!" % x.strip()
```

“We will be releasing a 50-page document that summarises the NBN Co business case,” Ms Gillard said.

So the 50 page NBN Co business case summary came out yesterday. It runs to 36 pages, including the table of contents. According to the document, they’re going to be wholesale providers to retail ISPs/telcos, and be offering a uniform wholesale price across the country (6.3). There’ll be three methods of delivery — fibre, wireless and satellite, though I didn’t notice any indication of whether people would pay more for content over satellite than over fibre. They’re apparently expecting to undercut the wholesale prices for connectivity offered today (6.3.1).

They’ve pulled some “market expectation” data from Alcatel/Lucent which has a trend line of exponential increase in consumer bandwidth expectations, up to 100Mb/s in 2015 or so, and 1Gb/s around 2020 for fixed broadband — and a factor of 100 less for wireless broadband (6.3.2, chart 1). Contrary to that expectation, their own “conservative” projections A1 and A2 (6.3.2, exhibit 2) have about 50Mb/s predicted for 2015, and 100Mb/s for 2020 — with A2 projecting no growth in demand whatsoever after 2020, and A1 hitting 1Gb/s a full 20 years later than the Alcatel/Lucent expectations. Even that little growth in demand is apparently sufficient to ensure the NBN Co’s returns will “exceed the long term government bond rate”.
To me, that seems like they’re assuming that the market rates for bandwidth in 2015 or 2020 (or beyond) will be comparable to rates today — rather than exponentially cheaper. In particular, while the plan goes on to project significant increases in demand for data usage (GB/month) in addition to speed (Mb/s), there’s no indication of how the demand for data and speed gets translated into profits over the fifteen-year timespan they’re looking at. By my recollection, 15 years ago data prices in .au were about 20c/MB, compared to maybe 40c/GB ($60/mo for 150GB on Internode’s small easy plan) today.
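Spelling that comparison out (my own back-of-envelope arithmetic, using the figures quoted above and 1024MB per GB):

```python
# 15 years ago: ~20c/MB; today: $60/mo for 150GB (the Internode example above).
old_per_gb = 0.20 * 1024        # about $205 per GB then
new_per_gb = 60.0 / 150.0       # $0.40 per GB now
drop = old_per_gb / new_per_gb  # roughly a 500x fall over 15 years
yearly = drop ** (1 / 15.0)     # i.e. prices roughly halving every ~20 months

print(round(new_per_gb, 2), round(drop), round(yearly, 2))
```

If per-GB prices keep falling at anything like that historical rate, projections that implicitly hold today's rates flat for another 15 years look optimistic.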

Given NBN Co will be a near-monopoly provider of bandwidth, and has to do cross-subsidisation for rural coverage (and possibly wireless and satellite coverage as well), trying to inflate the cost per GB seems likely to me: getting wires connected up to houses is hard (which is why NBN Co is budgeting almost $10B in payments to Telstra to avoid it where possible), and competing against wires with wireless is hard too (see the 100x difference in speed mentioned earlier), so you’re going to end up paying NBN Co whatever they want you to pay them.

However they plan on managing it, they’re expecting to be issuing dividends from 2020 (6.7) that will “repay the government’s entire investment by 2034”. That investment is supposedly $27.1B, which would mean at least about $2B per year in profits. For comparison, Telstra’s current profits (across all divisions, and known as they are for their generous pricing) are just under $4B per year. I don’t think inflation helps there, either; and there’s also the other $20B or so of debt financing they’re planning on, which they’ll have to pay back, along with the 12-25% risk premium they’re expecting to have to pay (6.8, chart 5).

I’m not quite sure I follow the “risk premium” analysis — for them to default on the debt financing, as far as I can see, NBN Co would have to go bankrupt, which would require selling their assets, which would be all that fibre and access to ducts and whatnot: effectively meaning NBN Co would be privatised, with first dibs going to all the creditors. I doubt the government would accept that, so it seems to me more likely that they’d bail out NBN Co first; there’s therefore very, very little risk in buying NBN Co debt compared to buying Australian government debt, but a 12-25% upside thrown in anyway. As a potential shareholder, this all seems pretty nice; as a likely customer, I’m not really terribly optimistic.
While I was still procrastinating doing the altosui and Google Earth mashup I mentioned last post, Keith pointed out that Google Maps has a static API, which means it’s theoretically possible to have altosui download maps of the launch site before you leave, then draw on top of them to show where your rocket’s been.

The basic API is pretty simple — you get an image back centred on a given latitude and longitude; you get to specify the image size (up to 640×640 pixels), and a zoom level. A zoom level of 0 gives you the entire globe in a 256×256 square, and each time you increase the zoom level you turn each pixel into four new ones. Useful zoom levels seem to be about 15 or so. But since it’s a Mercator projection, you don’t have to zoom in as far near the poles as you do around the equator — which means the “or so” is important, and varies depending on the latitude.

Pulling out the formula for the projection turns out to be straightforward — though as far as I can tell, it’s not actually documented. Maybe people who do geography stuff don’t need docs to work out how to convert between lat/long and pixel coordinates, but I’m not that clever. Doing a web search didn’t seem to offer much certainty either; but decoding the javascript source turned out to not be too hard. Formulas turn out to be (in Java):

```java
Point2D.Double latlng2coord(double lat, double lng, int zoom) {
    double scale_x = 256/360.0 * Math.pow(2, zoom);
    double scale_y = 256/(2.0*Math.PI) * Math.pow(2, zoom);
    Point2D.Double res = new Point2D.Double();
    res.x = lng*scale_x;
    double e = Math.sin(Math.toRadians(lat));
    e = limit(e, -1+1.0E-15, 1-1.0E-15);
    res.y = 0.5*Math.log((1+e)/(1-e))*-scale_y;
    return res;
}
```

That gives you an absolute coordinate relative to the prime meridian at the equator, so by the time you get to zoom level 15, you’ve got an 8 million pixel by 8 million pixel coordinate system, and you’re only ever looking at a 640×640 block of that at a time.
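For reference, here is the same formula transcribed into Python, together with the inverse it implies; the inverse is my own derivation from the forward formula, not something decoded from the Maps javascript:

```python
import math

TILE = 256  # pixels covering the whole world at zoom 0


def latlng_to_pixel(lat, lng, zoom):
    # Forward Mercator projection, as in the Java version above.
    scale_x = TILE / 360.0 * 2 ** zoom
    scale_y = TILE / (2.0 * math.pi) * 2 ** zoom
    x = lng * scale_x
    e = math.sin(math.radians(lat))
    e = max(-1 + 1e-15, min(1 - 1e-15, e))  # the limit() clamp
    y = 0.5 * math.log((1 + e) / (1 - e)) * -scale_y
    return x, y


def pixel_to_latlng(x, y, zoom):
    # Inverse: recover the lat/long of a pixel, e.g. for working out the
    # centrepoints of adjacent tiles when assembling a larger map.
    scale_x = TILE / 360.0 * 2 ** zoom
    scale_y = TILE / (2.0 * math.pi) * 2 ** zoom
    lng = x / scale_x
    lat = math.degrees(math.atan(math.sinh(-y / scale_y)))
    return lat, lng
```

Round-tripping a coordinate through both functions gets back the original lat/long to well within a pixel, which is all the tiling needs.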
Fortunately, you also know the lat/long of the center pixel of whatever tile you’re looking at — it’s whatever you specified when you requested it. The inverse function of the above gives you the latitude and longitude for the centrepoints of adjacent maps, which then lets you tile the images to display a larger map; choosing a consistent formula for the tiling lets you download the right map tiles to cover an area before you leave, without having to align the map tiles exactly against your launch site coordinates.

In Java, the easy way to deal with that seems to be to set up a JScrollable area containing a GridBagLayout of the tiles, each of which is an image set as the icon of a JLabel. Using the Graphics2D API lets you draw lines and circles and similar on the images, and voila, you have a trace.

Currently the “UI” for downloading the map images is that it’ll print out some wget lines on stdout, and if you run them, next time you run altosui for that location, you’ll get maps. (And in the meantime, you’ll just get a black background.)

…except when it is:

Anyhoo, somehow or other I’m now a Tripoli certified rocket scientist, with some launches and data to show for it. Bunches of fun — and the data collection gives you an excuse to relive the flight over and over again while you’re analysing it. Who couldn’t love that?

Anyway, as well as the five or six rocket flights I’ve done without collecting data (back in 2007 with a Rising Star, and after DebConf 10 at Metra with a Li’l Grunt), I’ve now done three flights on my Little Dog Dual Deploy (modified so it can be packed slightly more efficiently — it fits in my bag that’s nominally ok for carry-on, and in my bike bag), all of which came back with data. I’ve done posts on the Australian Rocketry Forums on the first two flights and the third flight. There’s also some video of the third flight. But anyway!
One of the things rocketeering focusses on as far as analysis goes is the motor behaviour: how much total impulse it provides, average thrust, burn time, whether the thrust is even over the burn time or if it peaks early or late, and so on. Commercial motors tend to come with stats and graphs telling you all this, and there are XML files you can feed into simulators that will model your rocket’s behaviour. All very cool.

However, a lot of the guys at the Metra launch make their own motors, and since it tends to be way more fun to stick your new motor in a rocket and launch it than to put it on a testing platform, they only tend to have guesses at how it performs rather than real data. But Keith mentioned it ought to be possible to derive the motor characteristics from the flight data (you need to subtract off gravity and drag from the sensed acceleration, then multiply by the mass to get force, ideally taking into account the fact that the motor is losing mass as it burns, that drag varies according to speed and potentially air pressure, and that gravity may not be exactly aligned with your flight path), and I thought that sounded like a fun thing to do.

Unfortunately, when I looked at my data (which comes, of course, from Bdale and Keith’s Telemetrum board and AltOS software), it turned out there was a weird warble in my acceleration data while it was coasting, which stymied my plan to calculate drag, and raised a question about the precision of the acceleration-under-boost data too. After hashing around ideas on what could be causing it on IRC (airframe vibration? board not tied down? wind?), I eventually did the sensible thing and tried recording data while it was sitting on the ground. Result: exactly the same weird warbling in the accel data, even when it’s just sitting there. As it turned out, it was a pretty regular warble too: basically a square wave with a period of 100ms.
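Setting the warble aside for a moment, the motor-characterisation idea sketches out roughly as follows. This is a deliberately crude version: it assumes vertical flight, neglects drag entirely, and guesses a linear propellant burn; none of the names here are real AltOS APIs:

```java
class MotorStats {
    // Derive a rough thrust curve from logged accelerometer samples.
    // Assumptions (all dubious, per the text): vertical flight, drag
    // neglected, propellant mass decreasing linearly over the burn.
    static double[] thrustCurve(double[] accel, double dryMass, double propMass) {
        int n = accel.length;
        double[] thrust = new double[n];
        for (int i = 0; i < n; i++) {
            // linear burn: full propellant at i=0, none at i=n-1
            double m = dryMass + propMass * (1.0 - (double) i / (n - 1));
            thrust[i] = m * accel[i];  // F = m*a, with drag ignored
        }
        return thrust;
    }

    // Total impulse: trapezoidal integration of the thrust curve.
    static double totalImpulse(double[] thrust, double dt) {
        double sum = 0;
        for (int i = 1; i < thrust.length; i++)
            sum += 0.5 * (thrust[i-1] + thrust[i]) * dt;
        return sum;
    }
}
```

A real version would subtract the drag and gravity terms as described above, which is exactly what the warble gets in the way of.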
That seemed to tie in with the radio, which was sending out telemetry packets ten times a second between launch and apogee. Of course, there wasn’t any reason for the radio to be influencing the accelerometer; they’re even operating off separate voltages (the accelerometer being the one 5V part on the board). Hacking the firmware to send out telemetry packets at a different rate confirmed the diagnosis though: the accelerometer was reporting lower acceleration while the radio was sending data.

Passing the buck to Keith, it turned out that being the one 5V part was a problem: the radio was using enough current to cause the supply voltage to drop slightly, which caused all the other sensors to scale proportionally (and thus still be interpreted correctly), but the accelerometer kept operating at 5V, leading to a higher output voltage which gets interpreted as lower acceleration. One brief idea was to try comparing the acceleration sensor to the 1.25V generated by the cpu/radio chip, but unfortunately that gets pulled down worse than the 3V does. Fortunately this happens on more than just my board (though not all of them), so hopefully someone’ll think up a fix. I’m figuring I’ll just rely on cleaning up the data in post-processing; since it’s pretty visible and regular, that shouldn’t be too hard.

Next on the agenda though is trying some real-time integration with Google Earth: basically letting altosui dump telemetry data as normal, but also watching the output file for updates, running a separate altosui process to generate a new KML file from it, which Google Earth is watching and displaying in turn. I think I’ve got all the pieces for that pretty ready, mostly just waiting for next weekend’s QRS launch, and crossing my fingers my poor HP Mini 2133 can handle the load. In any event, I hacked up some scripts to simulate the process using data from my third flight, and it seemed to work ok.
Check out the recording. BTW, if that sounds like fun (and if it doesn’t, you’re doing it wrong), now would probably be a good time to register for lca and sign up to the rocketry miniconf; there’s apparently still a couple of days left before early bird prices run out.

I saw a couple of things over the last couple of days about progressive taxation. One was a Malcolm Gladwell video on youtube about how a top tax rate of 91% is awesome and Manhattan Democrats are way smarter than Floridian Republicans; the other an article by Greg Mankiw in the New York Times about how he wants to write articles, but is disinclined to because if he does, Obama will steal from his kids.

Gladwell’s bit seems like almost pure theatre to me. The only bit of data is that during and after WW2 the US had a top marginal tax rate of just over 90% on incomes of $200,000 (well, except that WW2 and the debt the US accrued in fighting it isn’t actually mentioned). Gladwell equates that to a present day individual income of two million a year, which seems to be based on the official inflation rate; comparing it against median income at the time (PDF) gives a multiplier of 13.5 ($50,000/$3,700) for a top-tax-bracket household income of $5.4 million ($2.7 million individual). I find it pretty hard to reason about making that much money, but I think it’s interesting to notice that the tax rate of households earning 5x the median income (ie $250,000 now, $18,500 then) is already pretty similar: 33% now, 35% then. Of course in 1951 the US was paying off debt, rather than accruing it…

(I can’t find a similar table of income tax rates or median incomes for Australia; but our median household income is about $67,000 now, and a household earning $250,000 a year would have a marginal rate between 40% and 45%, which seems to have been about 75% for a few years after WW2.)

Meanwhile, Mankiw’s point comes down to some simple compound interest maths: getting paid $1000 now and investing it at 8% to give to your kids in 30 years would result in: (1) a $10,000 inheritance if it weren’t taxed, or (2) a $1,000 inheritance after income tax, dividend tax and estate tax; so effectively those taxes add up to a 90% tax rate anyway. If you’re weighing up whether to spend the money now or save it for your kids, you get two other options: (3) spend $523 on yourself, or (4) spend $1000 through your company. An inflation rate of just 2.2% (the RBA aims for between 3% and 4%) says (3) is better than (2), and if you want to know why evil corporations are so popular, comparing (3) and (4) might give it away…

An approach to avoiding that problem is switching to consumption taxes like the GST instead of income taxes, so you discourage people spending money rather than earning it. At first glance that doesn’t make a difference: there’s no point earning money if you can’t spend it. But it does make a huge difference to savings. For Mankiw’s example: 47.7% income tax ($1000 – $477 = $523) equates to a 91.2% consumption tax (as compared to 10% GST); but your kids get $10,000, so they can buy $5,230 worth of goods and still afford the additional $4,770 in taxes, as opposed to only getting $1,000 worth of goods without any consumption taxes.
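The income-tax/consumption-tax equivalence above is just arithmetic; a quick sketch of the conversions (the 47.7% and 91.2% figures, and the $1000-at-8%-for-30-years calculation):

```java
class TaxEquiv {
    // Equivalent consumption-tax rate for a given income-tax rate:
    // earning $1 and paying income tax t leaves (1-t) to spend tax-free;
    // alternatively, spending the whole $1 at consumption rate c buys
    // 1/(1+c) worth of goods. Setting (1-t) = 1/(1+c) gives:
    static double consumptionRate(double incomeRate) {
        return 1.0 / (1.0 - incomeRate) - 1.0;
    }

    // Compound growth of savings, e.g. $1000 at 8% for 30 years.
    static double futureValue(double principal, double rate, int years) {
        return principal * Math.pow(1.0 + rate, years);
    }
}
```

Plugging in 47.7% gives a 91.2% consumption tax, and a 10% GST works back to a 9.1% income tax, matching the figures in the text.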

The other side of the coin is what happens to government revenues. In Mankiw’s example, the government would receive $477 in the first year’s tax return, $1,173 over the next thirty years (about $40 per year), and $571 when the funds are inherited, for a total of $2,221. That would work out pretty much the same if the government instead sold 30-year treasury bonds to match that income, and then paid off that debt once it collected the consumption tax. Since US Treasuries currently yield 3.75% at 30 years, that turns into $3,900 worth of debt after thirty years; which in turn leaves the government better off by $870. The improvement is due to the difference between the private return on saving (8%) versus the government’s cost of borrowing (3.75%).

Given the assumptions then, everyone wins: the parent, the kids, the government. It’s possible that would be the case in reality too, though it’s not certain. The main challenges are in the rates: if there’s a lot more saving going on (because it’s taxed less and thus more effective), then interest rates are liable to go down unless there’s a corresponding uptick in demand, which for interest rates means an uptick in economic activity. If Mankiw is representative in being more inclined to work more in that scenario, that’s at least a plausible outcome. Similarly, if there’s a lot more government borrowing going on (because their revenue is becoming more deferred), then their rates might rise. In the scenario above, a bond rate of 4.85% is the break-even point in terms of a single 91.2% consumption tax matching a 47.7% tax rate on income and dividends and a 35% inheritance tax.

Not worrying about taxing income makes a bunch of things easier: there’s no more worrying about earned income, versus interest income, versus superannuation income, versus dividend income, versus capital gains, versus fringe benefits, etc.
One thing it makes harder is having a progressive tax system, which is to say one where people who are “worth” more are forced to contribute a higher share of their “worth” to government finances. With a progressive income tax, that means people who earn more pay more. With a progressive consumption tax, that would mean that people who spend more pay more; so someone buying discount soup might pay 10% GST (equivalent to 9.1% income tax), someone buying a wide screen tv might pay 50% (33% income tax) and someone buying a yacht might pay 150% (60% income tax). Because hey, if your biggest expenses are cans of soup, you probably can’t afford to contribute much to the government, but if you’re buying yachts…

One way to handle that would be to make higher GST rates kick in at higher prices: you pay 10% for things costing up to $100, 50% for things costing up to $10,000, and 150% for things costing more than that. The disadvantage there is that the difference in your profit margin between selling something for $9,999 including 50% GST and $16,668 including 150% GST is $1.20, which is going to distort things. Why spend $60,000 on a nice car at 150% GST, if you can spend $9,999 on a basic car, $9,999 on electronics, $9,999 on other accessories, and $9,999 on labour to get them put together, and end up with a nicer car, happier salesmen, and $20,000 in savings?

Another way to get a progressive consumption tax would be by doing tax refunds: everyone pays the highest rate when they buy stuff, but you then submit a return with your invoices, and get a refund. If you spend $20,000 on groceries over the year, at say 20% GST, then reducing your GST to 10% would be a refund of $1,667. If you spend $50,000 on groceries and a car, you might only get to reduce your GST to an average of 15%, for a refund of $2,090. If you spend $1,000,000 on groceries, a car, and a holiday home, you might be up to an average of 19.5% for a refund of just $4,170. Coming up with a formula that always gives you more dollars the more expenditure you report (so there’s no advantage to under-reporting), but also applies a higher rate the more you spend (so it’s still progressive) isn’t terribly hard.
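One formula with both properties, with numbers entirely made up for illustration (this isn’t any actual proposal): pay 20% GST upfront, and have the refund approach a fixed cap as reported spending grows, so reporting more always returns more dollars while the effective rate still climbs toward the full 20%:

```java
class GstRefund {
    // Hypothetical refund schedule (illustrative numbers only): everyone
    // pays 20% GST upfront; the refund approaches a $4,200 cap as
    // reported spending grows. The refund is strictly increasing in
    // spending, but bounded, so the effective rate still rises.
    static double refund(double spending) {
        double cap = 4200.0, scale = 20000.0;
        return cap * (1.0 - Math.exp(-spending / scale));
    }

    // Effective GST rate after the refund, on the pre-tax amount.
    static double effectiveRate(double spending) {
        double pretax = spending / 1.2;      // strip the 20% GST
        double paid = spending - pretax;     // GST actually handed over
        return (paid - refund(spending)) / pretax;
    }
}
```

Any bounded, strictly increasing refund function has the same two properties; the exponential is just a convenient example.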

The downside is that paying upfront is harshest on the poorest: if you’re spending $2,000 a month on food, it doesn’t help to know that $1,200 of that is 150% GST and you’ll get most of it back next year if you’re only earning $900 a month. But equally it wouldn’t be hard to have Centrelink offices just hand out $1,200 a month to anyone who asks (and provides their tax file number), and confidently expect to collect it back in GST pretty quickly. Having the “danger” be that you hand out $1,200 to someone who doesn’t end up spending $2,000 a month or more doesn’t seem terribly bad to me. And there’s no problem handing out $1,200 to someone making thousands a week, because you can just deduct it from whatever they were going to claim on their return anyway.

As I understand it, there’s not much problem with GST avoidance, for three structural reasons: one is that at 10%, it’s just not that big a deal; another is that since it’s nationwide, avoiding it legally tends to involve other problems, whether it be postage/shipping costs, delays, timezone differences, legal complexities or something else; and the third is that because businesses get to claim tax credits for their purchases, there are paper trails at both ends, meaning it’s hard to do any significant off-book work without getting caught. Increasing the rate substantially (from 10% to 150%) could end up encouraging imports: why buy a locally built yacht for $750,000 (150% GST) when you could buy it overseas for $360,000 (20% VAT, say) and get it shipped here for $50,000? I don’t know if collecting GST at the border is a sufficiently solved problem to cope with that sort of incentive… On the other hand, having more people getting some degree of refund means it’s harder to avoid getting caught by the auditors if you’re not passing on the government’s tithe, so that’s possibly not too bad.

It appears the first draft of the linux.conf.au 2011 schedule (described by some as a thing of great beauty) is up as of this morning. Looks promising to me.

Of note:

• There’s lots of electronics-related talks (Arduino miniconf, Rocketry miniconf, Lunar Numbat, Freeing Production, “Use the Force, Linus”, All Chips No Salsa, e4Meter, Growing Food with Open Source, Lightweight Messaging, Misterhouse, and the Linux Powered Coffee Roaster). If you count mesh telephony too and don’t count the TBD slot, you can spend every day but Wednesday ensconced in hardware-hacking talks of one sort or another.
• There seems to be reasonable female representation: Haecksen miniconf, LORE, HTML5 Video, Documentation, Intelligent Web, Incubation and Mentoring, Perl Best Practices, Project Managers, Growing Food with Open Source; so 7% of the miniconfs and 13% of the talks so far announced.
• Speaking of oppressed minorities, there’s also a couple of talks about non-Linux OSes: pf and pfsync on OpenBSD, and HaikuOS. Neato.
• Maybe it’s just me, but there seems to be a lot of “graphics” talks this year: GLSL, OptIPortal, Pixels from a Distance, X and the Future of Linux Graphics, HTML5 Video, Anatomy of a Graphics Driver; and depending on your point of view Print: The Final Frontier, Non-Visual Access, Can’t Touch This, and the X Server Development Process.
• The cloud/virtualisation stuff seems low-key this year: there’s Freeing the Cloud, Roll Your Own Cloud, Virtual Networking Performance, Virtualised Network Bandwidth Control, and ACID in the Cloud (that somehow doesn’t include an acid rain pun in the abstract). Of course, there’s also the “Freedom in the Cloud” and “Multicore and Parallel Computing” miniconfs which are probably pretty on point, not to mention the Sysadmin and Data Storage miniconfs which could see a bunch of related talks too.

And a bunch of other talks too, obviously. What looks like eight two-hour tutorial slots are yet to be published, maybe six more talks to be added, and three more keynotes (or given the arrangement of blank slots, maybe two more talks and four more keynotes). Also, there’s the PDNS on Wednesday, Penguin Dinner on Thursday, both over the river. And then there’s Open Day on Saturday, and an as yet not completely organised rocket launch sometime too…

I’ve been playing with some graphing tools lately, in particular Dan Vanderkam’s dygraphs JavaScript Visualization Library. So far I’ve translated the RC bug list (the “official” one, not the other one) into the appropriate format, generated some numbers for an LD50 equivalent for bugs, and on Wouter’s suggestion the buildd stats.

One of the nice things about the dygraphs library is it lets you dynamically play with the date range you’re interested in; and you can also apply a rolling average to smooth out some of the spikiness in the data. Using that to restrict the above graphs to the lenny development cycle (from etch’s release in April 2007 to lenny’s release in February 2009) gives some interesting stats; remember that the freeze started in late July 2008 (Debconf 8 was a couple of weeks later, in August 2008).

RC bugs first:

Not sure there’s a lot of really interesting stuff to deduce from that, but there are a couple of things worth noting. One is that before the freeze there were some significant spikes in the bug count (July 2007, September 2007, November 2007, and April 2008, in particular), but after the freeze the spikes above trend were very minor, both in size and duration. Obviously all of those are trivial in comparison to the initial spurt in bugs between April and June 2007, though. Also interesting is that by about nine months in, lenny had fewer RC bugs than the stable release it was replacing (etch); and given that’s against a 22 month dev cycle, that’s only 40% of etch’s life as stable. Of course, some of that may simply be due to a lack of accuracy in tracking RC bugs in stable, or a lack of accuracy in the RC bugcount software.

Quite a bit more interesting is the trend in the number of bugs (of all sorts: wishlist, minor, normal, RC, etc) filed each week. It varies quite a bit up until the freeze, but without any particular trend; then pretty much as soon as the freeze is announced it trends steadily downward until the release occurs, at which point there are about half as many bugs being filed each week as there were before the freeze. And after lenny’s release it starts going straight back up. There are a few possible explanations: it might be due to fewer changes being uploaded due to the freeze, and thus fewer bugs being filed; it might be due to people focussing on fixing bugs rather than finding them; or it might be due to something completely unrelated.

A measure of development activity that I find intriguing is what I’m calling the “LD50”: the median number of days it takes a bug to be closed, or the “lethal dosage of development time for 50% of the bug population”. That’s not the same as a half life, because there’s not necessarily an exponential decay behaviour (I haven’t looked into that at all yet), but it’s a similar idea. Anyway, working out the LD50 for cohorts of bugs filed in each week brings out some useful info. In particular, for bugs filed up until the lenny freeze, the median days until a fix ranged from as low as 40 days up to 120 days; but when the freeze was declared, that shot straight up to 180 days. Since then it’s gradually dropped back down, but it’s still quite high. As far as I can tell, this feature was unique to the lenny release; previous releases didn’t have the same effect, at least not anywhere near that scale. As to the cause: maybe the bugs got harder to fix, or people started prioritising previously filed bugs (eg RC bugs), or were working on things that aren’t tracked in the BTS. But it’s interesting to note that it was happening at the same time that fewer bugs were being filed each week; and indeed it suggests an alternative explanation for that: maybe people noticed that Debian bugs weren’t getting fixed as quickly, and didn’t bother reporting them as often.
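For concreteness, the LD50 of a weekly cohort is just a median over each bug’s days-to-close; a sketch (the input format here is assumed, not the actual BTS export):

```java
import java.util.Arrays;

class BugStats {
    // "LD50" for a cohort of bugs: the median number of days from filing
    // to close. One entry per bug filed in the cohort week; still-open
    // bugs would need to be handled separately (e.g. censored at "today").
    static double ld50(double[] daysToClose) {
        double[] d = daysToClose.clone();
        Arrays.sort(d);
        int n = d.length;
        return (n % 2 == 1) ? d[n/2] : (d[n/2 - 1] + d[n/2]) / 2.0;
    }
}
```

Computing this per weekly cohort and graphing it against the freeze date is all the analysis above amounts to.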

This is a look at the buildd “graph2”, which is each architecture’s percentage of (source) packages that are up to date, out of the packages actually uploaded on that architecture. (The buildd “graph” is similar, but is a percentage of all packages that are meant to be built on the architecture.) Without applying the rolling average it’s a bit messy. Doing a rolling average over two weeks makes things much simpler to look at, even if that doesn’t turn out that helpful in this case:

Really the only interesting spots I can see in those graphs are that all the architectures except i386 and amd64 had serious variability in how up to date their builds were right up until the freeze; and even then there was still a bit of inconsistency just a few months before the actual release. And, of course, straight after both the etch and lenny releases, the proportion of up to date packages for various architectures drops precipitously.

Interestingly, comparing those properties to the current spot in squeeze’s development seems to indicate things are promising for a release: the buildd up-to-dateness for all architectures looks like it’s stabilised above 98% for a couple of months; the weekly number of bugs filed has dropped down from a high of 1250 a week to about 770 at the moment; and the LD50 has dropped from 170 days shortly after lenny’s freeze to just under 80 days currently (though that’s still quite a bit higher than the 40 days just before lenny’s freeze). The only downside is the RC bug count is still fairly high (at 550), though the turmzimmer RC count is a little better at only 300, currently.

At the end of my previous post I mentioned some thoughts on dealing with more interesting initial states ($$q^0$$). We’ll define our initial state by choosing the amount of funds we’re willing to lose, $$F$$, and a set of initial prices $$0 < p_i(q^0) < 1$$. Unless $$p_i(q^0) = \frac{1}{n}$$ for all $$i$$, we will be forced to set $$q^0_i > 0$$ in some (possibly all) cases. We will treat this as implying a virtual payout from the market maker to itself.

The maximum loss is then given by $$C(q^0) - \min(q^0_i) = F$$ (since the final payout will be $$q_j - q^0_j$$, the money collected will be $$C(q)-C(q^0)$$, and $$C(q) \ge q_j$$).

If we wish to restrict quantities $$q_i$$ to be integers, we face a difficulty at this point. Working from the relationship between $$p_i(q^0)$$ and $$q^0_i$$ gives:

\begin{aligned} p_i(q^0) & = \frac{e^{q^0_i/\beta}}{\sum_{j=1}^{n}{e^{q^0_j/\beta}}} \\ & = \frac{e^{q^0_i/\beta}}{e^{C(q^0)/\beta}} \\ & = e^{q^0_i/\beta - C(q^0)/\beta} \\ \beta \ln( p_i(q^0) ) & = q^0_i - C(q^0) \\ q^0_i & = C(q^0) + \beta \ln( p_i(q^0 ) ) \end{aligned}

Since $$C(q^0)$$ is independent of $$i$$, we can immediately see that the $$i$$ with minimal $$q^0_i$$ will be the one with minimal price. Without loss of generality, assume that this is when $$i=1$$, then we can see:

\begin{aligned} F & = C(q^0) - q^0_1 \\ & = C(q^0) - \left( C(q^0) + \beta \ln(p_1(q^0)) \right) \\ & = - \beta \ln(p_1(q^0)) \\ \beta & = \frac{F}{\ln\left(p_1(q^0)^{-1}\right)} \end{aligned}

In the case where $$p_i(q^0) = \frac{1}{n}$$ is common for all outcomes this simplifies to the formula seen in the previous post.

Note that this is unlikely to result in a value of $$\beta$$ that is particularly easy to work with. However simply rounding down to the nearest representable number works fine — since $$\beta$$ is in direct proportion to the amount of funds at risk, this simply rounds down the amount of funds at risk at the same rate.
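Putting that derivation into code: given $$F$$ and the initial prices (summing to one), we can choose $$\min(q^0_i) = 0$$, so that $$C(q^0) = F$$ and $$q^0_i = \beta \ln(p_i(q^0)/p_{\min})$$. A sketch:

```java
class LmsrInit {
    // beta = F / ln(1/p_min), from the derivation above.
    static double beta(double F, double[] p) {
        return F / Math.log(1.0 / min(p));
    }

    // Initial quantities normalised so min(q0_i) = 0, giving C(q0) = F:
    //   q0_i = beta * ln(p_i / p_min)
    static double[] initialQ(double F, double[] p) {
        double b = beta(F, p), pmin = min(p);
        double[] q0 = new double[p.length];
        for (int i = 0; i < p.length; i++)
            q0[i] = b * Math.log(p[i] / pmin);
        return q0;
    }

    private static double min(double[] p) {
        double m = p[0];
        for (double pi : p) m = Math.min(m, pi);
        return m;
    }
}
```

For uniform prices this reduces to $$\beta = F/\ln(n)$$ and $$q^0_i = 0$$, matching the previous post.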

Likewise, keeping track of $$p_i(q)$$ as an implementation choice will restrict us to rational prices, and thus likely irrational values for $$q_i$$. However it’s likely we’d prefer to only offer precisely defined payoffs for precisely defined costs, even if only for ease of accounting. In order to deal with this, we can treat $$q_i = m_i(q) + g_i(q)$$ where $$m_i(q) \ge q^0_i$$ represents the (possibly increasing) virtual payout the market maker will receive, and $$g_i(q)$$ are the (integer) payouts participants will receive. In particular, we might restrict $$q^0_i \le m_i(q) < q^0_i + 1$$, so that we can calculate costs and payouts using the normal floor and ceiling functions and ensure any proceeds go to participants.

This gets us very close to being able to adjust the outcomes being considered dynamically, so that we can either split a single outcome into distinct categories to achieve a more precise estimate, or merge multiple outcomes into a single category to reduce the complexity of calculations.

If we look at changing the $$m \dotso n$$th outcomes from $$q$$ into new outcomes $$m' \dotso n'$$ in $$r$$, then our presumed constraints are as follows.

First, if this is the most accurate assignment between the old states and the new states we can come up with (and if it’s not, use those assignments instead), then we need to set the payout for all the new cases to the worst case payout for the old cases:

$$g_{i'}(r) = \left\{ \begin{array}{l l} g_i(q) & \quad 1 \le i' < m \\ \max_{m \le i \le n}(g_i(q)) & \quad m' \le i' \le n' \\ \end{array} \right.$$

Also, since we’re not touching the prices for the first $$m-1$$ outcomes, and our prices need to add up to one, we have:

\begin{aligned} p_{i'}(r) & = p_{i'}(q) \quad \forall 1 \le i' < m \\ \sum_{i'=m'}^{n'} p_{i'}(r) & = \sum_{i=m}^{n} p_i(q) \end{aligned}

And most importantly, we wish to limit the additional funds we commit to $$\Delta F$$ (possibly zero or negative), and thus $$C(r) = C(q) + \Delta F$$.

Using the relationship between $$p_i(r)$$ and $$r_i$$ again, gives:

\begin{aligned} C(r) & = r_i - \gamma \ln(p_i(r)) \\ & = m_i(r) + g_i(r) - \gamma \ln(p_i(r)) \\ \Delta F & = m_i(r) + g_i(r) - C(q) - \gamma \ln(p_i(r)) \\ \Delta F & \ge g_i(r) - C(q) - \gamma \ln(p_i(r)) \\ \Delta F & \ge \left\{ \begin{array}{l l} g_i(q) - C(q) - \gamma \ln(p_i(q)) & \quad 1 \le i < m \\ g_i(r) - C(q) - \gamma \ln(p_i(r)) & \quad m \le i \le n \\ \end{array} \right. \\ \Delta F & \ge \left\{ \begin{array}{l l} g_i(q) - C(q) - \left(q_i - C(q)\right) & \quad 1 \le i < m \\ \max_{m \le j \le n}(g_j(q)) - C(q) - \gamma \ln(p_i(r)) & \quad m \le i \le n \\ \end{array} \right. \\ \Delta F & \ge \left\{ \begin{array}{l l} -m_i(q) & \quad 1 \le i < m \\ \max_{m \le j \le n}\left( g_j(q) - \left( q_j - \beta \ln(p_j(q)) \right) \right) - \gamma \ln(p_i(r)) & \quad m \le i \le n \\ \end{array} \right. \\ \Delta F & \ge \left\{ \begin{array}{l l} -m_i(q) & \quad 1 \le i < m \\ \max_{m \le j \le n}\left( -m_j(q) + \beta \ln(p_j(q)) \right) - \gamma \ln(p_i(r)) & \quad m \le i \le n \\ \end{array} \right. \\ \end{aligned}

Setting $$\mu$$ to be the modified outcome with maximum payout (that is, $$g_\mu(q) = \max_{m \le j \le n}(g_j(q))$$, $$m \le \mu \le n$$) and $$\nu$$ to be the outcome with the least new price (so $$m' \le \nu \le n'$$ such that $$p_\nu(r) = \min_{m' \le j \le n'}(p_j(r))$$) lets us simplify this to:

$$\Delta F \ge -m_i(q) \quad \forall 1 \le i < m$$

and

$$\gamma \le \frac{\Delta F + m_\mu(q) + \beta \ln\left(p_\mu(q)^{-1}\right)}{\ln\left(p_\nu(r)^{-1}\right)}$$

Since $$m_i(q) \ge 0$$, one simple approach to satisfying the inequalities is to simply drop the $$m_i(q)$$ terms, giving:

$$\Delta F \ge 0 \quad \text{and} \quad \gamma \le \frac{\Delta F + \beta \ln(p_\mu(q)^{-1})}{\ln(p_\nu(r)^{-1})}$$

This has the drawback of not providing the maximum liquidity for the funds thought to be at risk, however.
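As code, the simplified bound is a one-liner. A useful sanity check: with no additional funds and an unchanged minimum price, $$\gamma$$ comes out equal to $$\beta$$:

```java
class LmsrResize {
    // Simplified upper bound on the new liquidity parameter gamma when
    // re-partitioning outcomes (the m_i(q) terms are dropped, so this
    // understates the liquidity available, as noted in the text):
    //   gamma <= (deltaF + beta*ln(1/p_mu)) / ln(1/p_nu)
    static double maxGamma(double deltaF, double beta, double pMu, double pNu) {
        return (deltaF + beta * Math.log(1.0 / pMu)) / Math.log(1.0 / pNu);
    }
}
```

Committing extra funds ($$\Delta F > 0$$) strictly raises the bound, as you’d expect.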

Some additional notes on implementing Hanson’s Logarithmic Market Scoring Rule, based on David Pennock’s post from 2006.

Usage is to pick $$n$$ distinct outcomes, such that exactly one will be true, and then to trade contracts that correspond with each outcome, so that if the outcome occurs the corresponding contract has a unit payoff, and otherwise is worthless. The market scoring rule provides a way for a market maker to set and update prices for the outcomes no matter how they might be bought and sold. While the market maker’s worst-case loss is limited to a fixed amount, $$F$$, this is also the usual outcome.

The scoring rule uses a cost function, defined as:

$C(q) = \beta \ln\left( \sum_{i=1}^{n}{e^{\frac{q_i}{\beta}}} \right)$

At any point, if event $$i$$ occurs, the payoff owed to participants is $$q_i$$. In order to achieve any given combination of payouts per outcome, a participant need simply pay $$C(q+\delta) - C(q)$$ where $$\delta_i$$ is the participant’s desired payout for event $$i$$.

Prices thus vary non-linearly, depending on both the payoffs already promised and the payoff desired. However a number of properties can be easily verified. First, the total payout for any event $$j$$ is no more than $$C(q)$$:

\begin{aligned} C(q) & = \beta \ln\left( \sum_{i=1}^{n}{e^{\frac{q_i}{\beta}}} \right) \\ e^{\frac{C(q)}{\beta}} & = \sum_{i=1}^{n}{e^{\frac{q_i}{\beta}}} \\ & \ge e^{\frac{q_j}{\beta}} \\ C(q) & \ge q_j \end{aligned}

If we define the initial state, $$q^0$$, by $$q^0_i=0$$ for all $$i$$, then $$C(q^0)=\beta \ln(n)$$. $$C(q^0)$$ is the maximum amount we can lose (since we will have received the remaining $$C(q)-C(q^0)$$ from participants), and as such, we can define $$\beta$$ in terms of the funds the market maker can afford to lose, $$F$$, as:

$\beta = \frac{F}{\ln(n)}$

We can see that the cost of buying a payout of $$p$$ in all scenarios (which we will denote as $$\delta = p\iota$$, meaning $$\delta_i=p$$ for all $$i$$) is exactly $$p$$:

\begin{aligned} C(q+p\iota) & = \beta \ln\left( \sum_{i=1}^{n}{e^{\frac{q_i+p}{\beta}}} \right) \\ & = \beta \ln\left( \sum_{i=1}^{n}{e^{\frac{q_i}{\beta}} e^{\frac{p}{\beta}}} \right) \\ & = \beta \ln\left( \sum_{i=1}^{n}{e^{\frac{q_i}{\beta}}} \right) + \beta \ln\left( e^{\frac{p}{\beta}} \right) \\ & = C(q) + p \\ \end{aligned}

The instantaneous price of each contract is given by the derivative of the cost function, which works out to be:

$p_i(q) = \frac{\partial{C(q)}}{\partial{q_i}} = \frac{e^{\frac{q_i}{\beta}}}{\sum_{j=1}^{n}{e^{\frac{q_j}{\beta}}}}$

We can directly observe from this that at any time the instantaneous prices of all the events will be between 0 and 1, and that they will sum to exactly 1. Furthermore, if we maintain a record of the values of $$C(q)$$ (which represents the sum of funds received from participants and the maximum loss) and $$p_i(q)$$, we can calculate $$q_i$$:

\begin{aligned} p_i(q) & = \frac{e^{\frac{q_i}{\beta}}}{\sum_{j=1}^{n}{e^{\frac{q_j}{\beta}}}} \\ & = \frac{e^{\frac{q_i}{\beta}}}{e^{\frac{C(q)}{\beta}}} \\ & = e^{\frac{q_i}{\beta} - \frac{C(q)}{\beta}} \\ & = e^{\frac{q_i - C(q)}{\beta}} \\ \beta \ln\left( p_i(q) \right) &= q_i - C(q) \\ q_i &= C(q) + \beta \ln\left( p_i(q) \right) \\ \end{aligned}

Note that since $$0 \lt p_i(q) \lt 1$$, then $$\beta \ln\left( p_i(q) \right) \lt 0$$ and $$q_i \lt C(q)$$ as expected.
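Transcribing the cost function, prices and trade payments into code gives a naive sketch of a market maker (a real implementation would also worry about overflow in the exponentials, e.g. via log-sum-exp):

```java
class Lmsr {
    final double beta;

    Lmsr(double funds, int n) {
        this.beta = funds / Math.log(n);  // beta = F / ln(n)
    }

    // C(q) = beta * ln(sum_i e^{q_i/beta})
    double cost(double[] q) {
        double sum = 0;
        for (double qi : q) sum += Math.exp(qi / beta);
        return beta * Math.log(sum);
    }

    // p_i(q): the derivative of C at q, i.e. the instantaneous price.
    double price(double[] q, int i) {
        double sum = 0;
        for (double qj : q) sum += Math.exp(qj / beta);
        return Math.exp(q[i] / beta) / sum;
    }

    // What a participant pays to move the payout vector from q to q+delta.
    double trade(double[] q, double[] delta) {
        double[] r = q.clone();
        for (int i = 0; i < r.length; i++) r[i] += delta[i];
        return cost(r) - cost(q);
    }
}
```

The properties above are easy to check against this: the initial cost is $$F$$, uniform prices start at $$\frac{1}{n}$$, and buying the same payout in every outcome costs exactly that payout.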

If we partition the possible states into three disjoint sets, $$W$$, $$L$$ and $$I$$, such that

$\delta_i = \left\{ \begin{array}{l l} -c & \quad \mbox{ iff $$i \in L$$ } \\ 0 & \quad \mbox{ iff $$i \in I$$ } \\ g & \quad \mbox{ iff $$i \in W$$ } \end{array} \right.$

For notational convenience, we will write $$p_S(q) = \sum_{i \in S}{p_i(q)}$$. If we set $$c > 0$$ and $$C(q+\delta) = C(q)$$, we then can determine $$g$$:

\begin{aligned} C(q+\delta) & = C(q) \\ \beta \ln\left( \sum_{i=1}^{n}{e^{\frac{q_i+\delta_i}{\beta}}} \right) & = \beta \ln\left( \sum_{i=1}^{n}{e^{\frac{q_i}{\beta}}} \right) \\ \sum_{i=1}^{n}{e^{\frac{q_i+\delta_i}{\beta}}} & = \sum_{i=1}^{n}{e^{\frac{q_i}{\beta}}} \\ \sum_{i \in L}{e^{\frac{q_i-c}{\beta}}} + \sum_{i \in I}{e^{\frac{q_i}{\beta}}} + \sum_{i \in W}{e^{\frac{q_i+g}{\beta}}} & = \sum_{i \in L}{e^{\frac{q_i}{\beta}}} + \sum_{i \in I}{e^{\frac{q_i}{\beta}}} + \sum_{i \in W}{e^{\frac{q_i}{\beta}}} \\ \sum_{i \in L}{e^{-\frac{c}{\beta}} e^{\frac{q_i}{\beta}}} + \sum_{i \in W}{e^{\frac{g}{\beta}} e^{\frac{q_i}{\beta}}} & = \sum_{i \in L}{e^{\frac{q_i}{\beta}}} + \sum_{i \in W}{e^{\frac{q_i}{\beta}}} \\ e^{-\frac{c}{\beta}} \sum_{i \in L}{e^{\frac{q_i}{\beta}}} + e^{\frac{g}{\beta}} \sum_{i \in W}{e^{\frac{q_i}{\beta}}} & = \sum_{i \in L}{e^{\frac{q_i}{\beta}}} + \sum_{i \in W}{e^{\frac{q_i}{\beta}}} \\ e^{-\frac{c}{\beta}} \sum_{i \in L}{p_i(q)} + e^{\frac{g}{\beta}} \sum_{i \in W}{p_i(q)} & = \sum_{i \in L}{p_i(q)} + \sum_{i \in W}{p_i(q)} \\ e^{-\frac{c}{\beta}} p_L(q) + e^{\frac{g}{\beta}} p_W(q) & = p_L(q) + p_W(q) \\ e^{\frac{g}{\beta}} p_W(q) & = p_L(q) + p_W(q) - e^{-\frac{c}{\beta}} p_L(q) \\ e^{\frac{g}{\beta}} & = 1 + \frac{p_L(q)}{p_W(q)} \left(1 - e^{-\frac{c}{\beta}} \right) \\ g & = \beta \ln\left( 1 + \frac{p_L(q)}{p_W(q)} \left(1 - e^{-\frac{c}{\beta}} \right) \right) \\ \end{aligned}

Since $$c > 0$$, $$e^{-\frac{c}{\beta}} \lt 1$$ and $$g$$ is well defined and positive. This allows us to purchase $$c\iota$$ and $$\delta$$ at a total cost of $$c$$, with the result that we end up losing $$c$$ if an event in $$L$$ occurs, we end up breaking even if an event in $$I$$ occurs, and we gain $$g$$ if an event in $$W$$ occurs.

This provides a fairly straightforward way to calculate gains for a given cost using the prices, rather than the cost function directly.
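As a sanity check, that final expression is easy to evaluate directly. A minimal Python sketch, assuming hypothetical values for the summed prices and for $$\beta$$:

```python
import math

def gain(p_L, p_W, beta, c):
    """Gain g for paying c, given the summed prices of the losing set
    (p_L) and winning set (p_W); beta is the parameter from the cost
    function C(q) above. All input values here are hypothetical."""
    return beta * math.log(1 + (p_L / p_W) * (1 - math.exp(-c / beta)))

# e.g. losing outcomes totalling 0.3, winning outcomes totalling 0.6:
g = gain(0.3, 0.6, 100.0, 10.0)
```

With c = 0 the gain is zero, and the gain grows with the losing set's price mass, matching the derivation.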

Rather than choosing a particular amount to pay for a particular gain, it’s possible to determine how much it will cost to change the prices in a particular way. We might take the same sets, $$W$$, $$I$$, and $$L$$ and instead decide to adjust the prices as follows:

$p_i(q+\delta) = p_i(q) \cdot \left\{ \begin{array}{l l} y & \quad \mbox{ iff $$i \in L$$ } \\ 1 & \quad \mbox{ iff $$i \in I$$ } \\ x & \quad \mbox{ iff $$i \in W$$ } \end{array} \right.$

If we take $$p_W(q+\delta) = p_W(q) \cdot x = p_W(q) + \rho$$ then, since the prices always add to one, $$p_L(q+\delta) = p_L(q) \cdot y = p_L(q) - \rho$$ and $$y = 1 - \frac{p_W(q)}{p_L(q)}(x-1)$$

The corresponding $$c$$ and $$g$$ values are then

\begin{aligned} x & = e^{\frac{g}{\beta}} \\ g & = \beta \ln(x) \\ & = \beta \ln\left( \frac{p_W(q) + \rho}{p_W(q)} \right) \\ & = \beta \ln(p_W(q+\delta)) - \beta \ln(p_W(q)) \\ \\ y & = e^{\frac{-c}{\beta}} \\ c & = -\beta \ln(y) \\ & = -\beta \ln\left( 1 - \frac{p_W(q)}{p_L(q)}(x-1) \right) \\ & = -\beta \ln\left( 1 - \frac{p_W(q)}{p_L(q)}\left( 1+\frac{\rho}{p_W(q)}-1 \right) \right) \\ & = -\beta \ln\left( 1 - \frac{\rho}{p_L(q)} \right) \\ & = \beta \ln\left( \frac{p_L(q)}{p_L(q) - \rho} \right) \\ & = \beta \ln(p_L(q)) - \beta \ln(p_L(q+\delta)) \\ \end{aligned}

Note that this assumes each price in $$W$$ is multiplied by the same amount, and similarly for each price in $$L$$. This also has the benefit that it maintains the relative prices within $$W$$ and $$L$$.
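These two expressions can likewise be computed from hypothetical prices; the sketch below also lets us confirm that this c and g pair is consistent with the earlier formula for g in terms of c:

```python
import math

def cost_and_gain(p_L, p_W, rho, beta):
    """Cost c and gain g when rho of price mass moves from the losing
    set L to the winning set W (all inputs hypothetical)."""
    g = beta * math.log((p_W + rho) / p_W)
    c = beta * math.log(p_L / (p_L - rho))
    return c, g

# e.g. p_L = 0.3, p_W = 0.6, moving rho = 0.1 of price mass:
c, g = cost_and_gain(0.3, 0.6, 0.1, 100.0)
```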

Obviously, $$I$$ can be the empty set. The advantage of having outcomes in $$I$$ is it allows participants to make their estimates conditional. For instance the statement “If X happens, Y will happen” should warrant a gain if “X and Y” happens, a loss if “X but not Y” happens, and no change if either “not X and not Y” or “not X but Y” happen.

This can also be used if a market may need to be cancelled. This may be done by having a “cancellation” outcome such that every action a participant may take results in that outcome being in $$I$$. This prevents people from exiting the market place at a profit before the outcome is known, however.

Splitting and merging outcomes is an interesting possibility — if the price of “it will happen on Tuesday” is $$p$$, then splitting that event into two events “it will happen on Tuesday morning” and “it will happen on Tuesday afternoon”, each with price $$p/2$$ would allow more precise predictions. Having this happen dynamically (such as when $$p$$ rises above a particular limit) would allow for precision only when it’s needed.

The drawback is that it may require an increase in $$F$$ (but not always — once Tuesday has been split into morning and afternoon, splitting Wednesday as well can simply reuse the same extra funds). Having different “sized” regions may also require some care. Representation also becomes a possible issue. Some of the maths for handling this might also help with handling initial state $$q^0$$ with different prices $$p_i(q^0) \ne p_j(q^0)$$.

MathJax is pretty cool — it’s essentially a client-side JavaScript implementation of LaTeX, so you can write maths in ASCII, like “x^n + y^n = z^n”, surround it with dollar signs, and have it look like:

$$x^n + y^n = z^n$$

And, of course, you can be more complicated if you like:

$$C(\mathbf{q}) = b(\mathbf{q}) \log\left( \sum_i e^{\frac{q_i}{b(\mathbf{q})}} \right)$$

Inclusion in WordPress is easy: you unpack the MathJax beta on your website, add a “script” line so that the MathJax javascript is loaded, and it dynamically displays the maths when the page is loaded. It also manages to do it with real fonts, so you can select bits of the equations, and not have to deal with ugly images — oh, and it zooms nicely.

Of course, there’s a downside to having a client side script redisplay the formulas, and I suspect everyone reading via RSS will have already picked up on what it is…

Quite a long time ago I read a fascinating article on semiotics and user-interface design. My recollection is that it made the argument that computer user interfaces could be broken up into roughly three branches: “menus”, where you have a few options to choose between, and that’s it; “WIMP paradigm” where you’ve got windows, icons, menus and a pointer and can gesticulate to get things done; and “command oriented” where you type commands in to have things happen.

While the WIMP paradigm is obviously pretty good, it’s restricted by its “metaphoric” nature: you have to represent everything you want to do with a picture — so if you don’t have a picture for something, you can’t do anything with it. In effect, it reduces your interaction with computers to point-and-grunt, which is really kind of demeaning for its operators. Can you imagine if the “communication skills” that were expected of you in a management role in business were the ability to point accurately and be able to make two distinct grunting noises?

On the other hand, if your system’s smart enough to actually do what you want just based on a wave of your hand that is pretty appealing — it’s just that when you want something unusual — or when your grunts and handwaving aren’t getting your point across — you can’t sit down and explain what you want merely with more grunts and pointing.

Obviously that’s where programming and command lines come in — both of which give you a range of fairly powerful languages to communicate with computers, and both of which are what people end up using when they want to get new and complicated things done.

It’s probably fair to say that the difference between programming languages and command line invocations is similar to essays and instant messaging — programs and essays tend to be long and expect certain formulas to be followed, but also tend to remain relevant for an extended period; an IM or a command line invocation tends to be brief, often a bit abbreviated, and only really interesting exactly when it’s written. Perhaps “tweet” or “facebook status update” would be a more modern version of IM — what can I say, I’m an old fogey. In any event, my impression is that the command line approach is often a good compromise when point-and-grunt fails: it’s not too much more effort, but brings you a lot more power. For instance,


$ for a in *.htm; do mv "$a" "${a%.htm}.html"; done

isn’t a very complicated way of saying “rename all those .htm files to .html”, compared to first creating a program like:

#!/usr/bin/env python
import os
for name in os.listdir("."):
    if name.endswith(".htm"):
        os.rename(name, name[:-4]+".html")

and then running it. And obviously, one of the advantages of Unix systems is that they have a very powerful command line system.

In any event, one of the things that strikes me about all the SaaS and cloud stuff is that there really isn’t much of a linguistic equivalent to the command line for the web. If I want to do something with gmail, or flickr, or facebook I’m either pointing and grunting, or delving deeply into HTML, javascript, URLs, REST interfaces and whatever else to make use of whatever arbitrary APIs happen to be available. A few services do have specialised command line tools of course — there’s GoogleCL, various little things to upload to flickr, the bts tool in devscripts to play with the Debian bug tracking system, and so forth. But one of the big advantages of the web is that you aren’t meant to need special client side tools — you just have a browser, and leave the smarts on whichever web server you’re accessing. And you don’t get that if you have to install a silly little app to interface with whichever silly little website you happen to be interested in.

So I think there ought to be a standard “command line” API for webapps, so that you can say something like:

$ web www.google.com search -q='hello world'

to do a Google search for ‘hello world’. The mapping from the above command line to a URL is straightforward: up until the option arguments, each word gets converted into a portion of the URL path, so the base url is http://www.google.com/search, and options get put after a question mark and separated by ampersands, with regular URL quoting (spaces become plusses, irregular characters get converted to a percent and a hex code), in this case ?q=hello+world.
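That mapping is simple enough to sketch in a few lines of Python; the function name and argument layout here are mine, not part of any existing tool:

```python
from urllib.parse import quote_plus

def to_url(words, options):
    """Map 'web' command words and key/value options to a URL:
    the first word is the host, remaining words become path segments,
    and options go after a question mark, URL-quoted."""
    path = "/".join(quote_plus(w) for w in words[1:])
    url = "http://%s/%s" % (words[0], path)
    if options:
        url += "?" + "&".join(
            "%s=%s" % (k, quote_plus(v)) for k, v in options)
    return url

# web www.google.com search -q='hello world'
to_url(["www.google.com", "search"], [("q", "hello world")])
# → 'http://www.google.com/search?q=hello+world'
```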

The obvious advantage is you can then use the same program for other webapps, such as the Debian BTS:

$ web bugs.debian.org cgi-bin bugreport.cgi --bug=123456 --mbox=yes
From mech...@...debian.net Tue Dec 11 11:32:47 2001
Received: (at submit) by bugs.debian.org; 11 Dec 2001 17:32:47 +0000
Return-path:
Received: from gent-smtp1.xs4all.be [195.144.67.21] (root) by master.debian.org with esmtp (Exim 3.12 1 (Debian)) id 16Dqlr-0007yg-00; Tue, 11 Dec 2001 11:32:47 -0600
...

It obviously looks cleaner when you use the shorter url (web bugs.debian.org 123456), although due to the way the BTS is set up, you also lose the ability to specify things like mbox format then.

Of course, web pages are in all sorts of weird formats, too: having Google’s HTML and javascript splatter all over your terminal isn’t very pleasant, for instance. But that’s what pipes are for, right?

$ web chart.apis.google.com chart --cht=p3 \
  --chs=400x150 --chd=t:2,3,5,10,20,60 \
  --chl='Alice|Bob|Carol|Dave|Ella|Fred' | display

It’d probably be interesting to make “web” clever enough to automatically pipe images to display and HTML to firefox and so on, depending on what media type is returned.

Obviously you can use aliases just like you’d use bookmarks on the web, so saying:

$ alias gchart='web chart.apis.google.com chart'
$ alias debbug='web bugs.debian.org cgi-bin bugreport.cgi'

lets you type a little less.

Anyway, I think that makes for a kind-of interesting paradigm for looking at the web. And the “web” app above is pretty trivial too — as described all it does is convert arguments into a URL according to the given formula.

Things get a little more interesting if you try to make things interactive; a webapp that asks you your name, waits for you to tell it, then greets you by name is made unreasonably difficult if you try to do it on a single connection (with FastCGI and nginx for instance, the client has to supply the exact length of all the information you’re going to send before it will receive anything, and if you don’t know what you’re going to need to send up front…). Which means that so far my attempts to have web localhost bash behave as expected aren’t getting very far.

The other thing that would be nice would be passing files to remote web apps — being able to say “upload this avi to youtube” would be more elegant as web youtube.com upload ./myvideo.avi than web youtube.com upload <./myvideo.avi, but when web doesn’t know what “youtube” or “upload” actually means, that’s a bit hard to arrange. After all, maybe you were trying to tell youtube to do the uploading to your computer, and ./myvideo.avi was where you wanted it to end up.

Anyway. Thoughts appreciated.

My previous post apparently didn’t do the economics for the resource rent analysis quite right — it seems that the idea is a cleverer company would be able to use the resource rent tax to find cheaper sources of funding, which changes things…

The idea then would be that you start your mining project seeking 60% in risky funding (they get whatever profits you make and the totality of the loss), and 40% in risk-free funding (they get the same return as they would if they invested in government bonds, whether the project succeeds or fails). That’s as opposed to the current approach of seeking 100% in risky funding.

So say you’ve raised $5B. You spend your $5B doing surveys, setting up your mine, etc. Failure here means you declare bankruptcy and the government gives you enough money to pay back the $2B of risk-free investment, plus interest, presuming the Greens don't have their way.

On the other hand, your mine might be a success, and you might, eg, start getting $1.5B in revenue, against $500M in expenses. At this point you first have to pay your "super profit" tax, which is, apparently, 40% of:

• gross receipts: $1.5B
• less depreciation: assuming 20 year expected life, 5% of $5B = $250M
• less running expenses: $500M
• less "normal return" on debt/equity: 6% of $5B = $300M
• totalling: $450M

So $180M on resource rents. You then pay corporate income tax of 30% (eventually 28%) of:

• gross receipts: $1.5B
• less depreciation: assuming 20 year expected life, 5% of $5B = $250M
• less running expenses: $500M
• less resource rent: $180M
• totalling: $570M

So $171M ($159.6M at 28% in 2014 or so). You then pay the risk-free return to your risk-free investors, which is 6% of $2B or $120M. (Actually, this might be tax deductible too.)

So after paying expenses ($500M), resource rents ($180M), income tax ($171M) and the risk-free dividend ($120M), your $1.5B of earnings is down to $529M. Issuing all that to your risky investors gives an annual return of 17.63%, fully-franked.

That compares to doing things the current way as follows: you raise $5B of risky investment; your mine succeeds and makes $1.5B in revenue, against $500M in expenses. You just pay company tax at 30% after expenses and depreciation, so that's 30% of $750M, or $225M. That leaves you $775M to pay in dividends, which is an annual return of 15.5%, fully-franked.

That, obviously, is an entirely convincing investment. It relies on the government refunding the $2B of “risk-free” investment in the event that the mine falls apart, though — which, as I understand it, is the part of the plan the Greens oppose. But otherwise, the above’s fairly plausible.

The difference in those sums — profit rising from 15.5% to 17.63% — is due to the level of depreciation in the above sums. If those formulas for calculating the rent and company taxes are correct, then your return on investment increases by two-thirds of your annual depreciation compared to the initial investment and decreases by a fifth of the risk-free rate. In the above case, annual depreciation was 5% of the entirety of the initial investment, and the risk-free rate was 6%, which implies an improvement of 2/3*5% - 6%/5, which is the 2.13% we saw.
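Those sums are small enough to check numerically. A sketch, with all figures in $M and the function names and parameter defaults mine:

```python
# Figures from the worked example, in $M; names and defaults are mine.
def rent_scheme(I=5000, R=1500, E=500, d=0.05, rf=0.06, risky=0.6):
    rent = 0.40 * (R - d * I - E - rf * I)    # 40% resource rent tax
    tax = 0.30 * (R - d * I - E - rent)       # company tax at 30%
    rf_div = rf * (1 - risky) * I             # return to risk-free investors
    dividends = R - E - rent - tax - rf_div
    return dividends / (risky * I)            # return on the risky 60%

def current_scheme(I=5000, R=1500, E=500, d=0.05):
    tax = 0.30 * (R - E - d * I)              # company tax only
    return (R - E - tax) / I

# rent_scheme() ≈ 0.1763, current_scheme() ≈ 0.155,
# and the gap matches 2/3·d - rf/5 ≈ 0.0213
```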

In reality, you’d probably need to offer a higher return to your “risk-free” investors — because if you didn’t, they’d probably just buy bonds directly from the government in the first place. And if I’m not mistaken, you still need to repay the principal for your risk-free investors over the life of your mine. So hopefully that simply evens out in the end.

There’s not a lot of difference in that scenario to having the government borrow enough to maintain 40% ownership in every mining operation in Australia. They’ll then receive 40% of the after-tax profits, and have to pay interest on their borrowings at the long term bond rate, which would mean (in the above example) getting $225M in company tax, then $310M in franked dividends, then paying out $120M in interest costs, for a total of $415M extra per-annum. That’s more than the total of $351M in receipts in the above example, I think due to the depreciation deduction in the resource rent tax calculation.

Mechanically, there’s a few differences: the company has to gain two sorts of investment (risky shares and risk-free bonds, for instance), if it fails it has to go to a lot more trouble to pay back the risk-free investors (getting the tax office to issue a refund in cash), and the government gets to keep it mostly off its books (doesn’t have to raise funds directly, investment losses turn into tax refunds).

In any event, that should make it easier for mining companies to raise funds — they only need to raise 60% of the amount at the risky level, for the same return they previously offered. I don’t see anything stopping you from being tricky and doing a two stage capital raising: raising $3B of risky funds to do exploration; and if that fails, repaying your investors 40% ($1.2B) of their capital — then doing the risk-free fund raising to get enough cash to start production. The initial fund raising then has a chance at a 17% ongoing return, or a 60% loss — compared to currently having a chance at a 15% return or a 100% loss. Again, that should make it easier to raise funds for new projects.

On the other hand, I also still don’t see anything stopping you from transferring your profits. Say you’re a public investment company. You’ve got plenty of money from offering superannuation products or what not, and you want to get into mining because you hear it gives a high return for your investors. So you allocate a few billion to start a mining company, which does some prospecting and opens a mine. That works out, and it starts making super profits. You decide you want to reduce your tax, and get more dividends. So instead of having one privately held subsidiary mining company, whose balance sheet looks like:

• Revenue: $1500M
• Expenses: $500M
• Resource rent tax: $180M
• Company tax: $171M
• Dividends: $649M

you decide to invest in a transport company as well. Hopefully one that’s already making a decent profit, but paying a bit more than market value works too. You then have them make an agreement that the mine will exclusively use your transport company for the next 10 or 20 years, for whatever excuse satisfies appropriate laws. Then have the transport company seriously jack up the price. Your balance sheets should then look like:

|  | Mine | Mine change | Transport change | Total change |
| --- | --- | --- | --- | --- |
| Revenue | $1500M | - | +$700M | +$700M |
| Expenses | $1200M | +$700M | - | +$700M |
| Resource rent tax | $0M | -$180M | n/a | -$180M |
| Company tax | $15M | -$156M | +$210M | +$54M |
| Dividends | $285M | -$364M | +$490M | +$126M |

And voila, your resource rent tax has been reallocated to your dividends (except for the 30% that goes to company tax, of course). It doesn’t have to be a transport company, either — any private company that you can buy outright, that isn’t hit by the resource tax, and that you can find some excuse to make your exclusive supplier of a necessary product/service will do fine. And even better, as far as I can see, even when you get rid of all the resource rent proceeds the government was hoping for from your mine, they’ve still covered 40% of your initial risk…

## March 21, 2014

For a very long time, OpenMPI has described itself as "an open source, freely available implementation of both the MPI-1 and MPI-2 documents", which allows for parallel programming. The team has just released version 1.7.5, and they can proudly announce that Open MPI is now fully MPI-3.0 compliant. This is a "feature release" and will be part of the 1.8 series.

I woke up at 2am, and then spent over an hour trying to get back to sleep. That was phenomenally annoying. Then at 5am, Zoe's white noise app on her Nexus 7 decided to go a bit wonky (as it does from time to time), which caused her to wake up. I couldn't convince her tablet to behave, so I just let her get in bed with me, but neither of us went back to sleep. I think at about 6am I told her she could turn on the TV so I could try and squeeze in a bit more rest.

I managed to forget to put the oats in the Thermomix when I made porridge this morning. It took me a while to figure out why the consistency and colour didn't look quite right when I dished it up. Yep, it was one of those mornings.

Zoe had her trial ballet class this morning, at 9:45am in Coorparoo.
I managed to get Zoe organised a bit early, so we were the first ones there. It's in a church hall, and has a nice floor, with a mirrored wall and barres on each long wall. Eventually the other girls arrived; there were 5 regulars in total. A good sized class.

I had mixed feelings when I discovered that parents weren't normally able to be present in the dance studio to watch the classes. I was today, because it was a trial, but once Zoe starts regular classes, I'll have to wait outside. I'm disappointed I won't be able to watch, but it'll be nice to have 45 minutes to myself.

Zoe did really well in the class today. She picked everything up very quickly, and I don't think she'll be negatively impacted by having entered in Term 2. She seemed very comfortable and looked like she enjoyed herself.

Not long after we arrived, the heavens opened in a massive downpour, which continued when we had to leave. It was a fair dash back to the car. I'd had plans of a picnic lunch in New Farm Park, but scrapped them on account of the weather.

We got home, and had a late morning tea, and then Zoe wanted to play with her Marble Run. After that, we gave Smudge a good brushing and then Zoe wanted to do some craft and painting.

I let Zoe set herself up, and it was fun to see what bits and pieces she chose to dig out of the arts and craft bins. She wanted to make another crown for Mummy's boss, but we don't have any crown making stuff at home (yet), so she ended up starting to decorate a wand and gluing a pom pom on a shoe box, before moving on to painting her cardboard box cubby house and rocket.

She wanted to start doing some hand painting right around lunch time, but I managed to convince her to do that after lunch. We had some lunch, and then she did a little bit more painting and stickering before her nap. She passed out for her nap very quickly.

After her nap, I asked her if she wanted to paint or do science, and without hesitation, she chose science!
I delved into the "Weather" chapter of 365 Science Experiments and we did a couple with ice cubes and salt, and then she saw one where you stick a balloon over the neck of a glass bottle and put it in hot water and the balloon "inflates", so we did that one as well. We also popped a balloon to simulate thunder. It was all nice and easy, without a lot of preparation required.

After that, we did a bit of a clean up and then Zoe watched some of her DVDs from the library while she waited for Sarah to arrive to pick her up.

It was a good day today. No meltdowns.

I have written before about the naming of network items, but someone asked for a summary.

You carve out a part of DNS as yours, and .net.example.edu.au is a good choice, as that prevents name clashes with "user-visible" DNS names. You name devices within that with no further DNS hierarchy. Obviously we do need some hierarchy; we simply don't use DNS dots to do it, as they are elided by almost all network management systems.

I strongly recommend a geographical-then-function-then-counter naming system. Sort of like b4-f1-sw2 for the management interface of Building #4, Floor 1, switch 2. But you'll need to adjust the basic outline so it works well for your local circumstances. Do not put the make and model in the name; upgrading a switch should be cheap and simple, so the less to change the better. Try to avoid using L or O or I next to digits. Names are in lower case only.

For the non-management interfaces DNS hierarchy is a good idea. Take VLAN4 of eth0 of the router b4-f1-r1; that would get the DNS name vlan4.eth0.b4-f1-r1.net.example.edu.au. Use the vendor's name for the interface, so if they call it GigabitEthernet0 then you say vlan4.gigabitethernet0.b4-f1-r1.net.example.edu.au and if they call it ge-0/0/0 then you say vlan4.ge-0-0-0.b4-f1-r1.net.example.edu.au.
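A hypothetical helper (names and signature mine) shows how such interface names compose:

```python
def interface_fqdn(device, vendor_ifname, subif=None,
                   domain="net.example.edu.au"):
    """Compose a DNS name like vlan4.ge-0-0-0.b4-f1-r1.net.example.edu.au.
    Slashes in vendor interface names (ge-0/0/0) become hyphens, and
    names are lower case only, per the conventions above."""
    iface = vendor_ifname.lower().replace("/", "-")
    parts = [subif, iface, device, domain]
    return ".".join(p for p in parts if p)

interface_fqdn("b4-f1-r1", "ge-0/0/0", subif="vlan4")
# → 'vlan4.ge-0-0-0.b4-f1-r1.net.example.edu.au'
```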
But do use your own names for the subinterface technology, as this rapidly brings errors to the fore: "vlan4", "isl4", "lane4", "mpls0", "gre0", and so on. Use the same names for IPv4 and IPv6.

Sometimes you'll provide the IP address for use by someone else. Name these gateway links with additional hierarchy: both the customer identifier (an order number, or customer ID, or ASN for a peering link) and a counter. If you are doing split service delivery (unicast here, multicast there) then note that in the naming too. Typically gw1.cust1234.vlan4.eth0.b4-f1-r1.net.example.edu.au, but a worst case could be as ugly as gw1.multicast.ipv6.as666.vlan4.eth0.b4-f1-r1.net.example.edu.au.

##### Accessing your management interfaces

You can, and should, place management access in a VLAN away from users. In a small network there is no need to route that VLAN. Instead drop a couple of bastion servers on the management LAN. Connect those servers to your non-management network and give those servers plenty of backdoor connectivity: PSTN modems attached to a POTS phone line, GSM modems, an ADSL service.

Set up a SSH CA and use that to control SSH access. Run your standard authentication, but put a replica of the authentication on each of the bastions (maybe subsetting that if you have a lot of users). That way the bastion users are running the corporate password policy, but network engineers are able to log in when the only device in the entire network which is reachable is the bastion server (via one of its backdoor links).

Also have those servers run a lot of VPN options: the typical network engineer workstation can simply bring up that VPN rather than SSH through the bastion. The network management platforms can be on the "real" network, getting the benefits of easy administration (eg, puppet) whilst running VPN tunnels to all of the bastion servers to query the management interfaces of the networking devices.

Run one bastion server per network core.
Remember that these servers need not be large; a low power 1RU server with a couple of ethernet interfaces is overkill. You might need a USB hub to be able to plug in all of the PSTN, GSM and ADSL modems. If you have new switches with USB consoles then plug them into the hub as well. If you have the older RS-232 consoles then plug them into an 8 or 16 port RS-232/USB concentrator.

You can buy "console servers" commercially, and they are worth a look. Usually they fail as bastion servers, as they can't replicate the corporate authentication database and don't run a range of VPN choices. In a big datacentre use a console server and plug it into the management VLAN and use the bastion server to reach it. Some console servers have two management interfaces, and one can run directly to the bastion server.

After doing all of this good work, make sure the system administrators don't do an end-run around it. Their KVM switches go onto the management network, as do their ILMI interfaces. As the number of hosts on this network can get large, do try to run IPv6-only wherever possible (just doing that for the ILMI interfaces can save a lot of IPv4 addressing). Furthermore, using IPv6's autoconf is much more resilient to networking issues than IPv4's DHCP. You should use static addressing on major assets, and obviously on routers. Closet switches and the like should all use DHCP to obtain their address (even if you statically assign the addressing back at the DHCP server), because the aim there is to drive the administration overhead to the floor.

Ignore people who tell you to use NAT for the management VLAN. One day you'll find yourself on a router with no VRFs where you want to suppress all private networks whilst trying to keep the range you need for management connectivity. You will then hunt that person down and kill them. With a spoon, because it is more painful.
Use an iptables rule to set DSCP = CS6 (control) for ssh as packets exit out of the bastion server into the management network. Set them back to DSCP = BE as they exit out of the bastion server into the customer network (otherwise they'll get policed by the router). Also set that DSCP on the devices in the management network ("ip ssh dscp 48"). Resist the temptation to set DSCP for SNMP, as it can be like a bulk data transfer at times. Use the same QoS design in the management network as you do for the customer network.

Some auditors don't like the network management names appearing in DNS. In that case run a DNS forwarder on the bastion servers. Set that forwarder up to look in the zones for .net.example.edu.au and the reverse zone for the IP addressing, then to forward other zones. Or you can use dnsmasq to do the same with a hosts file. Using dnsmasq has the advantage that you can suppress the DNS names of the router's forwarding interfaces too. Obviously the network elements point to the DNS forwarder at their nearest core. When people connect using the VPN then it can replace their laptop's DNS resolver list and then they'll automatically see the verboten names. But really, getting better auditors is the correct answer, because making your network more difficult to diagnose is to increase the risks to the business, not to decrease them.

##### Tip

Typing:

$ ssh b4-f1-sw2.net.example.edu.au

can be painful, especially from a corporate laptop where you can't change the DNS search list. Use an SSH ~/.ssh/config file to do the expansion:

Host *-*-*
Hostname %h.net.example.edu.au

Ask the systems administrators to set up an ssh CA and generate keys for devices. That's easy to do when starting out, and a major pain to do afterwards.

I am running Linux from a flash memory SD Card à la Raspberry Pi, which the system logger is writing to. The typical configuration of the system logger is to force the disk to write the more important messages immediately. That means it might do 50 writes per 4 kilobyte disk block. If something goes wrong and the system logger is hammered, then the writes can lead to failure of the SD Card in short order (which makes things worse, as the logger then tries to write the I/O error messages at high priority).

##### A workaround

The system logger can be told not to sync files. Traditional system loggers do that using a hyphen before the log file name. So go through /etc/syslog.conf or equivalent and put a hyphen before every filename on the SD Card.

*.*;auth,authpriv.none  /var/log/syslog

becomes

*.*;auth,authpriv.none  -/var/log/syslog
##### A hack

The kernel has a ring-buffer logger. This is displayed using dmesg. I could use that for user-space messages too. The device /dev/kmsg will place messages into the dmesg ring buffer. I could alter /etc/rsyslog.conf to make /dev/kmsg the destination for all log messages:

*.info /dev/kmsg

That will loop, because the system logger is logging messages from dmesg to the system log, and the system log is dmesg, and .... So let's save some of the Raspberry Pi's 700MHz for real work by setting /etc/rsyslog.conf to not listen to kernel messages by deleting the line:

$ModLoad imklog

But anyone can type dmesg. Allowing anyone to look at the super-secret log messages isn't the best idea. There is a sysctl to restrict looking at the kernel's ring buffer to root. Edit /etc/sysctl.d/local.conf to set that:

kernel.dmesg_restrict = 1

That's more like it:

$ dmesg
dmesg: klogctl failed: Operation not permitted

This works, but it is not really optimal. The kernel is very inefficient at writing messages: for some issues the only tool the authors have is the log messages, so they trade off efficiency for increasing the probability the log message will be seen. Kernel facilities also lack the resource allocation and control of user space: the message will be logged, whatever other more useful work is pending.

##### Doing it right

The needed programs are:

• A daemon holding a ring-buffer of log messages and their access permissions.

• A module for rsyslog which sends log messages to that daemon's ring-buffer. By hanging off rsyslog we get logging of journald output thrown in for free.

• A command which requests log messages from that daemon's ring-buffer and displays them.

This can be efficient (we're not absolutely desperate to see all messages come hell or high water) and fair (if the system is busy then logging can get a fair share). It would be useful for large embedded systems and embedded-like systems such as the RPi.
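The first of those pieces, the ring-buffer itself, can be sketched in a few lines of Python using a bounded deque; the class and method names are hypothetical, not an existing tool:

```python
import collections

class LogRing:
    """Minimal in-memory ring buffer of (priority, message) pairs:
    a sketch of the daemon's core. A bounded deque discards the oldest
    messages once the buffer is full, like the kernel's dmesg buffer."""

    def __init__(self, size=1000):
        self.buf = collections.deque(maxlen=size)

    def log(self, priority, message):
        self.buf.append((priority, message))

    def read(self, max_priority=7):
        # syslog numeric priorities: 0 = emerg ... 7 = debug
        return [m for p, m in self.buf if p <= max_priority]

ring = LogRing(size=3)
for i in range(5):
    ring.log(6, "info %d" % i)   # priority 6 = info
ring.read()
# → ['info 2', 'info 3', 'info 4']
```

A real daemon would add a socket interface and per-user access checks on top, but the bounded deque already gives the discard-oldest behaviour dmesg users expect.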

## March 20, 2014

Despite having a good, unbroken night's sleep, Zoe seemed a bit out of sorts this morning. She didn't have a particularly good breakfast. I'm not sure if that was because I made the porridge with cow's milk instead of almond milk (because I had some leftover cow's milk), or she was just having a bad day.

Zoe has a trial dance class booked for tomorrow, which clashes with her normal Brazilian Jiu-Jitsu class, so we skipped Playgroup and did a BJJ class this morning.

It was a good class, because there were 3 other kids there, a regular boy and girl, and a girl who was taking a trial class, so it was nice to have a bit more happening, but the two other regular kids were younger and a little less focused, and I think that distracted Zoe a bit. I think the lack of a decent breakfast also kicked in, and she went on strike a couple of times. Overall, it was a pretty good class though. The one downside was when I accidentally closed her fingers in the door on the way out. I didn't realise she was holding onto the door frame when I let go of the door I was holding open for her. Luckily it wasn't a heavy door or it could have ended more badly.

Zoe had chosen to go to BJJ class by car today. I wondered if it was because of the improved music selection with the new radio. Her library books were due back soon, and since I don't have her this weekend, I thought we could just go to the library after BJJ class.

I gave her a muesli bar to keep her going after BJJ class, but I think the lack of a good breakfast was really starting to show after we left the library. I was feeling like lunch out, and hadn't been to Grill'd for a while, nor sampled their kids menu, so I asked Zoe if she wanted a burger for lunch, and she said she did.

We got to Grill'd, and I eventually managed to extract an order out of Zoe. She went for the mini chicken burger. The kid's menu items were pretty good. They came in a little box, with the burger, some chips, a popper (a.k.a. "juice box") and a colouring sheet with a set of colouring pencils.

Unfortunately Zoe had a massive meltdown because she couldn't get the angle of her straw to her liking, and wasn't doing very well expressing how she wanted it ("down and then up" didn't make a whole lot of sense without it having bendy bits in places it didn't have bendy bits). Eventually I got her to eat.

I'd say that food was definitely the issue, because after lunch she perked up significantly, and we had a nice little play in Bulimba Memorial Park, where I'd parked the car, and then we went home. We read a few of the new library books, and then she passed out for a late nap without much complaint.

After her nap, she was still a little bit out of sorts, but we managed a walk down to the Hawthorne Garage to grab some lamb cutlets for dinner, and then she watched some TV and ate quite a good amount of fruit while I organised dinner.

After a handwashing-related meltdown and dinner, we went back to the Hawthorne Garage for a babyccino. She'd perked up somewhat by this stage, and it was a nice outing. Many dogs were patted while there and on the way home.

Bath time and bedtime went pretty well. It's another warm night, so I hope she sleeps well.

## March 19, 2014

If you’ve been interested in WebRTC and haven’t lived under a rock, you will know about Google’s open source testing application for WebRTC: AppRTC.

When you go to the site, a new video conferencing room is automatically created for you and you can share the provided URL with somebody else and thus connect (make sure you’re using Google Chrome, Opera or Mozilla Firefox).

We’ve been using this application forever to check whether any issues with our own WebRTC applications are due to network connectivity issues, firewall issues, or browser bugs, in which case AppRTC breaks down, too. Otherwise we’re pretty sure to have to dig deeper into our own code.

Now, AppRTC creates a pretty poor quality video conference, because the browsers use a 640×480 resolution by default. However, there are many query parameters that can be added to the AppRTC URL through which the connection can be manipulated.

Here are my favourite parameters:

• hd=true : turns on high definition, ie. minWidth=1280,minHeight=720
• stereo=true : turns on stereo audio
• debug=loopback : connect to yourself (great to check your own firewalls)
• tt=60 : by default, the channel is closed after 30min – this gives you 60 (max 1440)

For example, here’s what a stereo, HD loopback test looks like: https://apprtc.appspot.com/?r=82313387&hd=true&stereo=true&debug=loopback.

This is not the limit of the available parameters, though. Here are some others that you may find interesting for some more in-depth geekery:

• ss=[stunserver] : in case you want to test a different STUN server to the default Google ones
• ts=[turnserver] : in case you want to test a different TURN server to the default Google ones
• audio=true&video=false : audio-only call
• audio=false : video-only call
• audio=googEchoCancellation=false,googAutoGainControl=true : disable echo cancellation and enable gain control
• audio=googNoiseReduction=true : enable noise reduction (more Google-specific parameters)
• asc=ISAC/16000 : preferred audio send codec is ISAC at 16kHz (use on Android)
• arc=opus/48000 : preferred audio receive codec is opus at 48kHz
• dtls=false : disable datagram transport layer security
• dscp=true : enable DSCP
• ipv6=true : enable IPv6
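Since all of these are ordinary query-string keys, assembling test URLs can be scripted. As a small sketch (the `apprtc_url` helper is mine, not part of AppRTC; the room number is arbitrary):

```python
from urllib.parse import urlencode


def apprtc_url(room, **params):
    """Build an AppRTC room URL with the given query parameters."""
    query = {"r": room, **params}
    return "https://apprtc.appspot.com/?" + urlencode(query)


# The stereo, HD loopback test from above:
print(apprtc_url("82313387", hd="true", stereo="true", debug="loopback"))
# → https://apprtc.appspot.com/?r=82313387&hd=true&stereo=true&debug=loopback
```

Handy when you want to generate a batch of test rooms with different codec or STUN/TURN settings instead of hand-editing URLs.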

AppRTC’s source code is available here. And here is the file with the parameters (in case you want to check if they have changed).

Have fun playing with the main and always up-to-date WebRTC application: AppRTC.

Some time late last Sunday night, I stumbled upon a fight discussion on Twitter. It turns out there are actually more threads to it than I’ve reproduced here, so if you’re really keen, do feel free to click through on individual tweets to see where it went. Here I’ve only reproduced the thread I read at the time. It starts with this:

I’m pretty sure this next tweet is the one I first noticed in my timeline:

At this point my stupidity got the better of me, and I decided to engage:

On reflection, that’s not too terrible an ending. But this exchange got me thinking about the words we use. Various ancient cultures had a notion that words had power; that to name a thing would cause it to come into existence. So I tried this myself today. I said, in the most solemn voice I could manage, “there is a bacon cheeseburger on the corner of my desk”. Alas, it didn’t work. I was stuck with my linux.conf.au 2014 lanyard, my headset, sunglasses, a stapler, some Ceph and HP stickers, a stack of SUSE post-it notes and a pile of folded up Tasmanian state election propaganda I’ve been meaning to set fire to.

Perhaps I’m not as adept as the ancients at manipulating reality. Perhaps “bacon cheeseburger” isn’t actually a word of power. Or perhaps that notion was simply never meant to be taken literally. Maybe it was more that the words we use frame the way we think. More disturbingly, that the words we use put walls up around the way we are able to think.

Cassy O’Connor said “rape”, which I (with the benefit of never having actually been raped; apparently it helps to be a reasonably sized, allegedly scary looking, bearded white male) took to be a rather evocative analogy for the violence that can be wrought upon forests. But, she was shot down for this usage, because it was seen to be “disrespectful toward those who have been raped”.

Rob from Taroona seems to be referring to forests as “resources”, and while it’s apparent he understands that there’s a balance to be struck between the existence of forests and our use of them, for me the term “resource” is problematic. Dig up “resource” in a dictionary if you still have one (or just go the lazy approach), and it tends to be defined along the lines of “something that one uses to achieve an objective”.

I can’t bring myself to see forests that way. Rather I see timber as a resource, and trees as life forms.

And I wonder to what extent the words I choose to describe things trap my thinking inside little mental boxes.

Today was a very busy day for me.

First up, I had my chiropractic adjustment. Then I had my half-yearly dental checkup. Then I got stuck into cleaning the house. Fortunately I'd done a bit the afternoon before, otherwise I'd never have gotten it done before my massage. I'm starting to seriously consider getting a cleaner. I can make much better use of the limited time I get than cleaning the house.

Zoe's Kindergarten has a Parent Advisory Group or something like that, which meets monthly. I wasn't able to attend the first one, but the second one was today, about an hour before pickup, so I grabbed a quick lunch after my massage and biked over to Kindergarten early.

The meeting wasn't particularly interesting. The majority of it revolved around fund raising.

At pickup time, Zoe was fast asleep again, and had another massive meltdown when I woke her up. It took forever to get her out. She wanted to give Megan a hug, but Megan had long gone. Then, rather fortuitously, Megan's Dad texted me to tell me they were in the playground on the neighbouring school grounds, so I managed to use that as an incentive to get Zoe moving.

I didn't know until today, but at the foothill of the neighbouring primary school, there's some play equipment. A few of the Kindergarten kids seem to have a play over there after Kindergarten. Megan and Zoe played there for a bit before it was time to go.

Once we got home, we unpacked and then walked down to the Hawthorne Garage to get some vegetables, and then, after buying some milk from the corner store and another grumpy meltdown, I figured a quiet afternoon was in order, so Zoe watched a DVD for a bit while I prepared dinner.

I'd bumped into the mother of Zoe's friends Eva and Layla from her old day care while I was at the dentist in the morning, and arranged for her to drop by to pick up a copy of the photos I'd taken at their 4th birthday party. I'd hoped that she'd bring the girls with her, and had told Zoe they'd be dropping by, but unfortunately she came on the way to picking them up from day care, so Zoe was a bit disappointed. We did arrange to have them over for dinner next Friday, so I'm sure that will be very exciting.

After that, we had dinner, and a fairly uneventful bedtime. It's hot again, so I hope that doesn't mess with her sleep.

I bought 10 BTC to play with back in 2011, and have been slowly spending them to support bitcoin adoption.  One thing which I couldn’t get reliable information on was how to buy and sell bitcoin within Australia, so over the last few months I decided to sell a few via different methods and report the results here (this also helps my budget, since I’m headed off on paternity leave imminently!).

All options listed here use two-factor authentication, otherwise I wouldn’t trust them with more than cents.  And obviously you shouldn’t leave your bitcoins in an exchange for any longer than necessary, since most exchanges over time have gone bankrupt.

### Option 1: MtGox AUD

Yes, I transferred some BTC into MtGox and sold them.  This gave the best price, but after over two months of waiting the bank transfer to get my money hadn’t been completed.  So I gave up, bought back into bitcoins (fewer, since the price had jumped) and thus discovered that MtGox was issuing invalid BTC transactions so I couldn’t even get those out.  Then they halted transactions altogether blaming TX malleability.  Then they went bankrupt.  Then they leaked my personal data just for good measure.  The only way their failure could be more complete is if my MtGox Yubikey catches on fire and burns my home to the ground.

Volume: Great (5M AUD/month)

Price Premium: $25–$50 / BTC

Charge: 0.65%

Hassle: Infinite

Summary: 0/10

### Option 2: localbitcoins.com

According to bitcoincharts.com, localbitcoins is the largest volume method for AUD exchange.  It’s not an exchange, so much as a matching and escrow service, though there are a number of professional traders active on the site.  The bulk of AUD trades are online, though I sold face to face (and I’ll be blogging about the range of people I met doing that).

localbitcoins.com is a great place for online BTC buyers, since they have been around for quite a while and have an excellent reputation with no previous security issues, and they hold bitcoins in escrow as soon as you hit “buy”.  It’s a bit more work than an exchange, since you have to choose the counter-party yourself.

For online sellers, transfers from stolen bank accounts is a real issue.  Electronic Funds Transfer (aka “Pay Anyone”) is reversible, so when the real bank account owner realizes their money is missing, the bank tends to freeze the receiving (ie. BTC seller’s) bank account to make sure they can’t remove the disputed funds.  This process can take weeks or months, and banks’ anti-fraud departments generally treat bitcoin sellers who get defrauded with hostility (ANZ is reported to be the exception here).  A less common scam is fraudsters impersonating the Australian Tax Office and telling the victim to EFT to the localbitcoins seller.

Mitigations for sellers include any combination of:

1. Only accepting old-fashioned cash deposits via a branch (though I’m aware of one US report where a fraudster convinced the teller to reverse the deposit, I haven’t heard of that in Australia)
2. Insisting on “localbitcoins.com” in the transfer message (to avoid the ATO fraud problem)
3. Only dealing with buyers with significant reputation (100+ trades with over 150 BTC is the Gold Standard)
4. Insisting on real ID checking (eg. Skype chat of buyer with drivers’ license)
5. Only dealing with buyers whose accounts are older than two weeks (most fraudsters are in and out before then, though their reputation can be very good until they get caught)
6. Only allowing internal transfers between the same bank (eg. Commonwealth), relying on the bank’s use of two factor authentication to reduce fraud.

Many buyers on localbitcoins.com are newcomers, so anticipate honest mistakes for the most part.  The golden rule always applies: if someone is offering an unrealistic price, it’s because they’re trying to cheat you.

Volume: Good (1M AUD/month)

Price Premium: $5–$20 / BTC

Hassle: Medium

Summary: 7/10

### Option 3: btcmarkets.net

You’ll need to get your bank account checked to use this fairly low-volume exchange, but it’s reasonably painless.  Their issues are their lack of exposure (I found out about them through bitcoincharts.com) and lack of volume (about a quarter of the localbitcoins.com volume), but they also trade litecoin if you’re into that.  You can leave standing orders, or just manually place one which is going to be matched instantly.

They seem like a small operation, based in Sydney, but my interactions with them have been friendly and fast.

Volume: Low (300k AUD/month)

Price Premium: $0 / BTC

Charge: 1%

Hassle: Low

Summary: 7/10

### Option 4: coinjar.io

I heard about this site from a well-circulated blog post on Commonwealth Bank closing their bank account last year.  I didn’t originally consider them since they don’t promote themselves as an exchange, but you can use their filler to sell them bitcoins at a spot rate.  It’s limited to $4000 per day according to their FAQ.

They have an online ID check, using the usual sources, which didn’t quite work for me due to out-of-date electoral information, but they cleared that manually within a day.  They deposit 1c into your bank account to verify it, but that hasn’t worked for me, so I’ve no way to withdraw my money, and they haven’t responded to the query I sent 5 days ago, leaving me feeling nervous.  A search of reddit points to common delays, and the founder’s links to the hacked-and-failed Bitcoinica give me a distinct “magical gathering” feel.

Volume: Unknown (self-reports indicate ~250k/month?)