Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

July 29, 2014

Android Screen Saving

Just over a year ago I bought a Samsung Galaxy Note 2 [1]. About 3 months ago I noticed that some of the Ingress menus had burned into the screen. Back in ancient computer times there were “screen saver” programs that blanked the screen to avoid this; then the “screen saver” programs transitioned to displaying a variety of fancy graphics which didn’t really fulfill the purpose of saving the screen. With LCD screens I have the impression that screen burn wasn’t an issue, but now with modern phones we have LED displays which have the problem again.

Unfortunately there doesn’t seem to be a free screen-saver program for Android in the Google Play store. While I can turn the screen off entirely, there are some apps such as Ingress that I’d like to keep running while the screen is off or greatly dimmed. Now I sometimes pull the notification menu down when I’m going to leave Ingress idle for a while; this doesn’t stop the screen burning, but it does cause different parts to burn, which alleviates the problem.

It would be nice if apps were designed to alleviate this. A long running app should have an option to change the color of its menus, and ideally it would randomly change the color on startup. If the common menus such as the “COMM” menu appeared in either red, green, or blue (the 3 primary colors of light) in a ratio according to the tendency to burn (blue burns fastest so should display least) then it probably wouldn’t cause noticeable screen burn after 9 months. The next thing that they could do is to slightly vary the position of the menus: instead of a thin line that’s strongly burned into the screen there would be a fat line lightly burned in, which should be easier to ignore.

It’s good when apps have the option of a “dark” theme; less light coming from the screen should reduce both battery use and screen burn. A dark theme should be the default, and probably mandatory, for long running apps. Fortunately a dark theme is the only option for Ingress.

I am a little disappointed with my phone. I’m not the most intensive Ingress player so I think that the screen should have lasted for more than 9 months before being obviously burned.

[life] Day 181: Kindergarten, startup stuff, tennis and haircuts

Zoe had a massive sleep last night. I had her in bed by 7:20pm. She woke up a little before 6am because she'd lost Cowie, and went back to sleep until 7:30am. I had planned to try biking to Kindergarten for the first time in ages, but we got to Kindergarten late enough as it was driving.

I pretty much spent the day studying for my real estate license. I selected finalists for the design contest I'm running on 99designs. If you'd like to vote, I'm running a poll.

I picked up Zoe from Kindergarten and walked her next door to her tennis lesson. She really didn't want to do it this afternoon, and it took some firm encouragement to get her to participate. I'm never sure where to draw the line, but based on the grinning and running around within seconds of her finally joining in, I think I made the right decision. I think the problem was she was too hot. It was quite a warm day today.

The plan after that had been to go back to Megan's house for a play date, but her little sister had come home from day care early, showing signs of conjunctivitis, so we instead went to the local coffee shop for a babyccino with Megan and her Dad. While we were there, I managed to snag an appointment for a haircut for me, and a fringe trim for Zoe, so we headed over there afterwards.

After our haircuts, it was pretty much time to start making dinner, so Zoe watched some TV, and I prepared dinner.

I managed to get Zoe to bed early. It'll be interesting to see if she has another massive sleep again.

Happiness and Lecture Questions

I just attended a lecture about happiness comparing Australia and India at the Australia India Institute [1]. The lecture was interesting but the “questions” were so bad that it makes a good case for entirely banning questions from public lectures. Based on this and other lectures I’ve attended I’ve written a document about how to recognise worthless questions and cut them off early [2].

As you might expect from a lecture on happiness there were plenty of stupid comments from the audience about depression, as if happiness is merely the absence of depression.

Then they got onto stupidity about suicide. One “question” claimed that Australia has a high suicide rate; Wikipedia, however, places Australia 49th out of 110 countries, which means Australia is only slightly above the median suicide rate per country. Given some of the dubious statistics in the list (for example the countries claiming to have no suicides and the low numbers reported by some countries with extreme religious policies) I don’t think we can be sure that Australia would be above the median if we had better statistics. Another “question” claimed that Sweden had the highest suicide rate in Europe, while Greenland, Belgium, Finland, Austria, France, Norway, Denmark, Iceland, and most of Eastern Europe are higher on the list.

But the bigger problem in regard to discussing suicide is that the suicide rate isn’t about happiness. When someone kills themself because they have a terminal illness, that doesn’t mean that they were unhappy for the majority of their life, and it doesn’t mean that they were any unhappier than terminally ill people who don’t do that. Some countries, Japan for example, have a culture that is more positive towards suicide, which would increase the incidence. While people who kill themselves in Japan are probably quite unhappy at the time, I don’t think that there is any reason to believe that they are more unhappy than people in other countries who only keep living because suicide is considered to be wrong.

It seems to me that the best strategy when giving or MCing a lecture about a potentially contentious topic is to plan ahead for what not to discuss. For a lecture about happiness it would make sense to rule out all discussion of suicide, anti-depressants, and related issues as they aren’t relevant to the discussion and can’t be handled in an appropriate manner in question time.

Pettycoin Alpha01 Tagged

As with all software, it took longer than I expected, but today I tagged the first version of pettycoin.  Now, lots more polish and features, but at least there’s something more than the git repo for others to look at!

brendanscott

The Cabinet Office has announced the adoption of its open standards:

“The selected standards, which are compatible with commonly used document applications, are:

PDF/A or HTML for viewing government documents

Open Document Format (ODF) for sharing or collaborating on government documents

The move supports the government’s policy to create a level playing field for suppliers of all sizes, with its digital by default agenda on track to make cumulative savings of £1.2 billion in this Parliament for citizens, businesses and taxpayers.”

Imagine a world in which there is the possibility of competition for office suites.  One day Australia might join that world too.



July 28, 2014

[life] Day 180: Kindergarten, recovery and an afternoon play date

I was away all weekend with Anshu, so I had to play weekend catch up when I got home this morning. After I'd unpacked the car and sorted out some lunch, I did the grocery shopping, and by the time I'd unpacked from that it was pretty much time to pick up Zoe and Megan from Kindergarten.

On the way home from Kindergarten, Zoe asked if they could go to the playground. I'd been intending to offer them the playground or a ferry ride, so this worked out nicely.

Zoe wanted to ride her scooter to the park, and Megan seemed happy to run alongside her, so this seemed like a win-win situation. There were a few other kids from Kindergarten at the playground as well.

The small world factor struck this afternoon. There was a mother at the playground that I'd seen at pick up time at Kindergarten, who I didn't recognise, so I struck up a conversation with her. It turns out she's the mother of a boy who was in Zoe's swim class last year. I'd previously spoken with her husband at swim school. They were from Melbourne, had had a stint up in Brisbane, returned to Melbourne, decided they liked Brisbane better, and just relocated back again. Their son, Miller, had gone to Zoe's Kindergarten last year as well, and his Dad had had good things to say about it to Sarah at Zoe's swim class.

After the stint in the park, we came back home, and Zoe and Megan watched a bit of TV while I prepared dinner, and then Jason came to pick up Megan.

We had a nice dinner, and I got Zoe to bed a little bit early.

[life] Day 177: Bike riding practice, picnic

Friday was another loosely planned day. Zoe indicated that she'd like to practice riding her bike, and it was a nice day, so we made a picnic lunch of it.

We went to Minnippi again, and Zoe did pretty well. I used the gentle downhill part of the path this time to give Zoe a bit more momentum, and there were a few brief periods where I let go of the bike completely and she stayed upright. I definitely think she's getting better, and her confidence is improving. Hopefully a few more practices will have her riding on her own.

After she got tired of riding her bike, we checked out the aviation-themed play area. We had some fun alternating between being the "pilot" and the "control tower". We had our picnic lunch up in that part of the park.

Shortly after lunch, another little girl, Lilian, arrived with her mother, and Zoe befriended her, although she didn't want to play with her all that much. I struck up a bit of a conversation with her mother, and when they migrated over to the duck pond, we went as well, as we had some crusts to feed to the ducks.

There was a guy over there with a big loaf of bread, which he was feeding to the ducks unsuccessfully. When Zoe and Lilian arrived, he donated the remainder of the bread to them to feed to the ducks.

After that, we all went to the other play structure for a while.

When Lilian left, we headed back to Cannon Hill to get some more kitty litter and a tennis racquet. We also dropped into Bunnings for a coffee and babyccino. Bunnings has a bit of an indoor play area, so Zoe checked that out too.

It was getting on in the afternoon by this stage, so we headed home and pottered around for a little bit. Zoe watched some TV, Anshu arrived, and then Sarah arrived to pick up Zoe. It was a nice day.

brendanscott

Getup alleges that someone has copied one of Getup’s videos, then issued a takedown against Getup over the same video. With copyright enforcement rhetoric running hysterical for years now, it comes as no surprise that the provider of the video site would remove Getup’s video. Takedown procedures are specifically designed to be swift and effective against allegations, regardless of the justice of the matter. Given such a lopsided approach to rights, it is surprising that it has taken until now for people to start abusing the system. It will get worse in the future.



Vale Peter Miller

Sad to receive news this morning that a long time friend and colleague, Peter Miller, had passed.

Peter Miller

“After fighting cancer for many years, finally lost”. No, not lost; if there was ever anyone who fought the battle of life and won it was Peter. Even knowing he was in his last days he was unbowed. When we visited him last week he proudly showed us the woodworking plans and cut lists for some cabinets he was making for his wife MT. He had created the diagrams himself, writing C++ code to manually drive a drawing library, outputting PostScript. Let’s see you do architectural drawing without a CAD program. The date on the printout was two weeks ago.

“The world is a less interesting place today,” wrote another friend. No. Peter firmly believed that interest comes from within. The world is there to be explored, I can hear him saying. He taught us to go forth, wonder, and understand. And so we should.

AfC

July 27, 2014

Twitter posts: 2014-07-21 to 2014-07-27

July 24, 2014

[life] Day 176: Museum and swimming

Today was a pretty chill day, after yesterday's crazy busy one.

Zoe jumped into bed with me at 5:40am, but snoozed again until about 6:30am. It was exciting to get up and watch the inverter showing ever-increasing power production as the sun rose.

I let Zoe choose what she wanted to do, which is code for "I had nothing in particular planned". She chose the museum by bus this morning, so we were out the door by 9am and on a bus not long after.

The museum had never mailed out my new membership cards from a month ago, so I stopped by the ticket desk first to try and sort that out. They were very apologetic, and gave me two free tickets to the Deep Oceans show. They're valid until October, so we'll go back and check that out another day.

Zoe mostly just wanted to go to the Science Centre, so after some morning tea, we headed over there. The place was almost totally empty, so we had free run, which was pretty cool. That took us through until lunch time.

I was trying to make the 12:34pm bus home, but we managed to miss it by maybe 20 metres, which was a bit of a bummer. The lady who did the indoor air quality testing was going to come back at some point after 1pm. Fortunately she didn't end up coming until closer to 2pm, so we were fine getting the 1:04pm bus instead.

After she'd been, we briefly dropped in on one of our neighbours on the way out to grab a few things for dinner from the Hawthorne Garage.

Zoe wanted to go to the pool, which was going to be a bit tight, but we made it out to Colmslie for a brief splash around in the pool before I had to get home to put dinner on.

I wanted to get out to a seminar about company boards at 5:30pm, and Sarah was coming around to babysit Zoe for me, so I wanted to get dinner on the table at 5 before I had to leave. That didn't work out quite to plan, so I had to leave with dinner about 15 minutes from being ready.

I managed to order a taxi and get it almost immediately, and it got me into the city within 15 minutes, which was pretty good. On the way home afterwards, I managed to hail a taxi within minutes of leaving the building, so overall, the transport piece worked really well.

The seminar itself was vaguely interesting. I'm curious about getting on a company board, as I think it could be a good use of my experience, and also a non-9-to-5 way of making some income. I'm not quite sure how to get that first board seat though, and exactly what to expect from a time commitment.

First Step with Clojure: Terror

$ sudo apt-get install -y leiningen
[...]
$ lein new scratch
[...]
$ cd scratch
$ lein repl
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.pom from repository central at http://repo1.maven.org/maven2
Transferring 5K from central
Downloading: org/sonatype/oss/oss-parent/5/oss-parent-5.pom from repository central at http://repo1.maven.org/maven2
Transferring 4K from central
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.jar from repository central at http://repo1.maven.org/maven2
Transferring 3311K from central
[...]

Wait… what? lein downloads some random JARs from a website over HTTP [1], with, as far as I can tell, no verification that what I’m asking for is what I’m getting (has nobody ever heard of Man-in-the-Middle attacks in Maven land?). It downloads a .sha1 file to (presumably) do integrity checking, but that’s no safety net – if I can serve you a dodgy .jar, I can serve you an equally-dodgy .sha1 file, too (also, SHA256 is where all the cool kids are at these days). Finally, jarsigner tells me that there’s no signature on the .jar itself, either.
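Mechanically, the integrity check itself is trivial – what’s missing is a trusted channel for the digest. Here’s a minimal Python sketch (the file name and expected digest are placeholders; it assumes you obtained the digest out-of-band from somewhere you actually trust):

import hashlib

def file_digest(path, algorithm="sha256"):
    # Hash the file in 64KB chunks so large JARs needn't fit in memory.
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder values, for illustration only.
expected = "..."  # a digest obtained over a trusted channel
if file_digest("clojure-1.3.0.jar") != expected:
    raise SystemExit("digest mismatch: refusing to trust this JAR")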

It gets better, though. The repo1.maven.org site is served by the fastly.net [2] pseudo-CDN [3], which adds another set of points in the chain which can be subverted to hijack and spoof traffic. More routers, more DNS zones, and more servers.

I’ve seen Debian take a kicking more than once because packages aren’t individually signed, or because packages aren’t served over HTTPS. But at least Debian’s packages can be verified by chaining to a signature made by a well-known, widely-distributed key, signed by two Debian Developers with very well-connected keys.

This repository, on the other hand… oy gevalt. There are OpenPGP (GPG) signatures available for each package (tack .asc onto the end of the .jar URL), but no attempt was made to download the signatures for the .jar I downloaded. Even if the signature was downloaded and checked, there’s no way for me (or anyone) to trust the signature – the signature was made by a key that’s signed by one other key, which itself has no signatures. If I were an attacker, it wouldn’t be hard for me to replace that key chain with one of my own devising.
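For completeness, the verification step itself would be easy to script – it’s the trust in the key that’s missing. A sketch of what a client could do, shelling out to GnuPG (the file names here are assumptions):

import subprocess

def verify_detached_signature(jar_path, asc_path):
    # `gpg --verify <sig> <data>` exits non-zero for a bad signature or a
    # missing key; note it still succeeds for keys you've never assigned
    # any trust to, which is exactly the problem described above.
    result = subprocess.run(["gpg", "--verify", asc_path, jar_path])
    return result.returncode == 0

if not verify_detached_signature("clojure-1.3.0.jar",
                                 "clojure-1.3.0.jar.asc"):
    raise SystemExit("bad or unverifiable signature")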

Even ignoring everyone living behind a government- or company-run intercepting proxy, and everyone using public wifi, it’s pretty well common knowledge by now (thanks to Edward Snowden) that playing silly-buggers with Internet traffic isn’t hard to do, and there’s no shortage of evidence that it is, in fact, done on a routine basis by all manner of people. Serving up executable code to a large number of people, in that threat environment, with no way for them to have any reasonable assurance that code is trustworthy, is very disappointing.

Please, for the good of the Internet, improve your act, Maven. Putting HTTPS on your distribution would be a bare minimum. There are attacks on SSL, sure, but they’re a lot harder to pull off than sitting on public wifi hijacking TCP connections. Far better would be to start mandating signatures, requiring signature checks to pass, and having all signatures chain to a well-known, widely-trusted, and properly secured trust root. Signing all keys that are allowed to upload to maven.org with a “maven.org distribution root” key (itself kept in hardware and only used offline), and then verifying that all signatures chain to that key, wouldn’t be insanely difficult, and would greatly improve the security of the software supply chain. Sure, it wouldn’t be perfect, but don’t make the perfect the enemy of the good. Cost-effective improvements are possible here.

Yes, security is hard. But you don’t get to ignore it just because of that, when you’re creating an attractive nuisance for anyone who wants to own up a whole passel of machines by slipping some dodgy code into a widely-used package.


  1. To add insult to injury, it appears to ignore my http_proxy environment variable, and the repo1.maven.org server returns plain-text error responses with Content-Type: text/xml. But at this point, that’s just icing on the shit cake.

  2. At one point in the past, my then-employer (a hosting provider) blocked Fastly’s caching servers from their network because they took down a customer site with a massive number of requests to a single resource, and the incoming request traffic was indistinguishable from a botnet-sourced DDoS attack. The requests were coming from IP space registered to a number of different ISPs, with no distinguishing rDNS (184-106-82-243.static.cloud-ips.com doesn’t help me to distinguish between “I’m a professionally-run distributed proxy” and “I’m a pwned box here to hammer your site into the ground”).

  3. Pretty much all of the new breed of so-called CDNs aren’t actually pro-actively distributing content, they’re just proxies. That isn’t a bad thing, per se, but I rather dislike the far-too-common practice of installing varnish (and perhaps mod_pagespeed, if they’re providing “advanced” capabilities) on a couple of AWS instances, and hanging out your shingle as a CDN. I prefer a bit of truth in my advertising.

July 23, 2014

[tech] Going solar

With electricity prices in Australia seeming to be only going up, and solar being surprisingly cheap, I decided it was a no-brainer to invest in a solar installation to reduce my ongoing electricity bills. It also paves the way for getting an electric car in the future. I'm also a greenie, so having some renewable energy happening gives me the warm and fuzzies.

So today I got solar installed. I've gone for a 2 kW system, consisting of eight 250 watt Seraphim panels (I'm not entirely sure which model) and an Aurora UNO-2.0-I-OUTD inverter.

It was totally a case of decision fatigue when it came to shopping around. Everyone claims the particular panels they want to sell are the best, and it's pretty much impossible to make a decent assessment of their claims. In the end, I went with the Seraphim panels because they scored well on the PHOTON tests, and they're also apparently one of the few panels that pass the Thresher test, which tests for durability. That said, I've had other solar companies tell me the PHOTON tests aren't indicative of Australian conditions. It's hard to know who to believe.

The harder choice was the inverter. I'm told that yield varies wildly by inverter, and narrowed it down to Aurora or SunnyBoy. Jason's got a SunnyBoy, and the appeal with it was that it supported Bluetooth for data gathering, although I don't much care for the aesthetics of it. Then I learned that there was a WiFi card coming out soon for the Aurora inverter, and that struck me as better than Bluetooth, so I went with the Aurora inverter. I discovered at the eleventh hour that the model of Aurora inverter that was going to be supplied wasn't supported by the WiFi card, but was able to switch models to the one that was. I'm glad I did, because the newer model looks really nice on the wall.

The whole system was up and running just in time to catch the setting sun, so I'm looking forward to seeing it in action tomorrow.

Apparently the next step is for Energex to come out and replace my analog power meter with a digital one.

I'm grateful that I was able to get Body Corporate approval to use some of the roof. Being on the top floor helped make the installation more feasible too, I think.

[life] Day 175: Kindergarten, cleaning, swim class and a lot of general madness

Today was ridiculously busy.

I woke up pretty early, but ended up not getting out of bed until about 7:30am. While I was in the shower, the guy from Origin buzzed to get let in because he wanted to replace the building's hot water meters. Then I raced next door for my chiropractic adjustment.

I got back home, had breakfast, and started cleaning the house, which I mostly finished by 11am, then I biked over for my massage. While I was getting my massage, the solar installer tried calling me because they'd arrived. Fortunately they didn't have to wait too long.

I did a bit more cleaning for 45 minutes, raced out to Grill'D to grab some lunch and then over to Kindergarten to chair the PAG meeting.

After the meeting, I picked up Zoe and Megan, and we went home to see how the solar installers were going.

They were making a spectacular mess, and we didn't have a lot of time before we had to head out again for Zoe's swim class. We drove over to the pool, and discovered a few other kids from Zoe's Kindergarten were in the preceding classes. Zoe's swim school is running a 2 for 1 thing this term because of the cold weather, to try and keep kids enrolled. I figured twice as many swim classes could only help, so jumped at the chance.

Megan was happy to play around while we waited for Zoe to have her class, and then we went home again. The solar installers were just finishing up.

No sooner had they walked out the door than the woman I'd organised to do indoor air quality testing arrived. I want to rule out any contribution from living on a busy road to Zoe's suspected asthma.

I was making a new Thermomix recipe for dinner, and Laura was coming over for dinner after she picked up Megan's little sister from day care. Dinner turned out really well, but with all of the preceding madness, I didn't get it started until a bit later than I had hoped, and so it was on the table later than I'd have liked.

Once Laura left with her kids, I chucked Zoe in the shower and got her down to bed only about 20 minutes later than normal. She slept through the night last night for Sarah, so I'm hoping she'll sleep through the night again tonight.

Per-repo update hooks with gitolite

Gitolite is a popular way to manage collections of git repositories entirely from the command line – it’s configured using files stored in a git repo, which is nicely self-referential. Providing per-branch access control and a wide range of addons, it’s quite a valuable system.

In recent versions (3.6), it added support for configuring per-repository git hooks from within the gitolite-admin repo itself – something which previously required directly jiggering around with the repo metadata on the filesystem. It allows you to “chain” multiple hooks together, too, which is a nice touch. You can, for example, define hooks for “validate style guidelines”, “submit patch to code review” and “push to the CI server”. Then for each repo you can pick which of those hooks to execute. It’s neat.
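From memory, the gitolite.conf side of that looks something like the following (the hook script names here are invented for the example; the scripts themselves live under the directory named by LOCAL_CODE in the rc file):

repo widgets
    RW+                         =   alice
    # chain two repo-specific hooks, run in order
    option hook.post-receive    =   code-review ci-push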

There’s one glaring problem, though – you can only use these chained, per-repo hooks on the pre-receive, post-receive, and post-update hooks. The update hook is special, and gitolite wants to make sure you never, ever forget it. You can hook into the update processing chain by using something called a “virtual ref”; they’re stored in a separate configuration directory, use a different syntax in the config file, and if you’re trying to learn what they do, you’ll spend a fair bit of time on them. The documentation describes VREFs as “a mechanism to add additional constraints to a push”. The association between that and the update hook is one you get to make for yourself.
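By contrast, hooking into update-time processing with a VREF happens in the access rules themselves – something like this (going from the documentation; COUNT is one of the VREFs that ships with gitolite, rejecting pushes that change more than the given number of files):

repo widgets
    RW+                 =   alice
    -   VREF/COUNT/5    =   @all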

The interesting thing is that there’s no need for this gratuitous difference in configuration methods between the different hooks. I wrote a very small and simple patch that makes the update hook configurable in exactly the same way as the other server-side hooks, with no loss of existing functionality.

The reason I’m posting it here is that I tried to submit it to the primary gitolite developer, and was told “I’m not touching the update hook […] I’m not discussing this […] take it or leave it”. So instead, I’m publicising this patch for anyone who wants to locally patch their gitolite installation to have a consistent per-repo hook UI. Share and enjoy!

July 22, 2014

Public Lectures About FOSS

Eventbrite

I’ve recently started using the Eventbrite Web site [1] and the associated Eventbrite Android app [2] to discover public events in my area. Both the web site and the Android app lack features for searching (I’d like to save alerts for my account and have my phone notify me when new events are added to their database) but they are basically functional. The main issue is content: Eventbrite has a lot of good events in their database (I’ve got tickets for 6 free events in the next month). I assume that Eventbrite also has many people attending their events, otherwise the events wouldn’t be promoted there.

At this time I haven’t compared Eventbrite to any similar services; Eventbrite events have taken up much of my available time for the next 6 weeks (I appreciate the button on the app to add an entry to my calendar), so I don’t have much incentive to find other web sites that list events. I would appreciate comments from users of competing event registration systems and may write a post in future comparing different systems. Also I have only checked for events in Melbourne, Australia, as I don’t have any personal interest in events in other places. For the topic of this post Eventbrite is good enough: it meets all requirements for Melbourne, and I’m sure that if it isn’t useful in other cities then there are competing services.

I think that we need to have free FOSS events announced through Eventbrite. We regularly have experts in various fields related to FOSS visiting Melbourne who give a talk for the Linux Users of Victoria (and sometimes other technical groups). This is a good thing but I think we could do better. Most people in Melbourne probably won’t attend a LUG meeting and if they did they probably wouldn’t find it a welcoming experience.

Also I recommend that anyone who is looking for educational things to do in Melbourne visit the Eventbrite web site and/or install the Android app.

Accessible Events

I recently attended an Eventbrite event where a professor described the work of his research team; it was a really good talk that made the topic of his research accessible to random members of the public like me. Then when it came to question time the questions were mostly opinion pieces disguised as questions, which used a lot of industry specific jargon and probably lost the interest of most people in the audience who weren’t from the university department that hosted the lecture. I spent the last 15 minutes in that lecture hall reading Wikipedia and resisted the temptation to load an Android game.

Based on this lecture (and many other lectures I’ve seen) I get the impression that when the speaker or the MC addresses a member of the audience by name (EG “John Smith has a question”) then it’s strongly correlated with a low quality question. See my previous post about the Length of Conference Questions for more on this topic [3].

It seems to me that when running a lecture everyone involved has to agree about whether it’s a public lecture (IE one that is for any random people) as opposed to a society meeting (which while free for anyone to attend in the case of a LUG is for people with specific background knowledge). For a society meeting (for want of a better term) it’s OK to assume a minimum level of knowledge that rules out some people. If 5% of the audience of a LUG don’t understand a lecture that doesn’t necessarily mean it’s a bad lecture, sometimes it’s not possible to give a lecture that is easily understood by those with the least knowledge that also teaches the most experienced members of the audience.

For a public lecture the speaker has to give a talk for people with little background knowledge. Then the speaker and/or the MC have to discourage or reject questions that are for a higher level of knowledge.

As an example of how this might work consider the case of an introductory lecture about how an OS kernel works. When one of the experienced Linux kernel programmers visits Melbourne we could have an Eventbrite event organised for a lecture introducing the basic concepts of an OS kernel (with Linux as an example). At such a lecture any questions about more technical topics (such as specific issues related to compilers, drivers, etc) could be met with “we are having a meeting for more technical people at the Linux Users of Victoria meeting tomorrow night” or “we are having coffee at a nearby cafe afterwards and you can ask technical questions there”.

Planning Eventbrite Events

When experts in various areas of FOSS visit Melbourne they often offer a talk for LUV. For any such experts who read this post please note that most lectures at LUV meetings are by locals who can reschedule, so if you are only in town for a short time we can give you an opportunity to speak at short notice.

I would like to arrange to have some of those people give a talk aimed at a less experienced audience which we can promote through Eventbrite. The venue for LUV talks (Melbourne University 7PM on the first Tuesday of the month) might not work for all speakers so we need to find a sponsor for another venue.

I will contact Linux companies that are active in Melbourne and ask whether they would be prepared to sponsor the venue for such a talk. The fallback option would be to have such a lecture at a LUV meeting.

I will talk to some of the organisers of science and technology events advertised on Eventbrite and ask why they chose the times that they did. Maybe they have some insight into which times are best for getting an audience. Also I will probably get some idea of the best times by just attending many events and observing the attendance. I think that the aim of an Eventbrite event is to attract delegates who wouldn’t attend other meetings, so it is a priority to choose a suitable time and place.

Finally please note that while I am a member of the LUV committee I’m not representing LUV in this post. My aim is that community feedback on this post will help me plan such events. I will discuss this with the LUV committee after I get some comments here.

Please comment if you would like to give such a public lecture, attend such a lecture, or if you just have any general ideas.

[debian] Day 174: Kindergarten, startup stuff, tennis

I picked up Zoe from Sarah this morning and dropped her at Kindergarten. Traffic seemed particularly bad this morning, or I'm just out of practice.

I spent the day powering through the last two parts of the registration block of my real estate licence training. I've got one more piece of assessment to do, and then it should be done. The rest is all dead-tree written stuff that I have to mail off to get marked.

Zoe's doing tennis this term as her extra-curricular activity, and it's on a Tuesday afternoon after Kindergarten at the tennis court next door.

I'm not sure what proportion of the class is continuing on from previous terms, and so how far behind the eight ball Zoe will be, but she seemed to do okay today, and she seemed to enjoy it. Megan's in the class too, and that didn't seem to result in too much cross-distraction.

After that, we came home and just pottered around for a bit and then Zoe watched some TV until Sarah came to pick her up.

[debian] Day 173: Investigation for bug #749410 and fixing my VMs

I have a couple of virt-manager virtual machines for doing DHCP-related work. I have one for the DHCP server and one for the DHCP client, and I have a private network between the two so I can simulate DHCP requests without messing up anything else. It works nicely.

I got a bit carried away and use LVM snapshots for the work I do, so that when I'm done I can throw away the virtual machine's disks and work with a new snapshot next time I want to do something.

I have a cron job that, on a good day, fires up the virtual machines using the master logical volumes and does a dist-upgrade on a weekly basis. It seems to have varying degrees of success though.

So I fired up my VMs to do some investigation of the problem for #749410 and discovered that they weren't booting, because the initramfs couldn't find the root filesystem.

Upon investigation, the problem seemed to be that the logical volumes weren't getting activated. I didn't get to the bottom of why, but a manual activation of the logical volumes allowed the instances to continue booting successfully, and after doing manual dist-upgrades and kernel upgrades, they booted cleanly again. I'm not sure if I got hit by a passing bug in unstable, or what the problem was. I did burn about 2.5 hours just fixing everything up though.

Then I realised that there'd been more activity on the bug since I'd last read it while I was on vacation, and half the investigation I needed to do wasn't necessary any more. Lesson learned.

I haven't got to the bottom of the bug yet, but I had a fun day anyway.

July 21, 2014

Our Call For Papers has closed

The Call For Papers is now closed. The last 6 weeks have been very exciting as we’ve watched all of those paper submissions flow in.

To those of you who have submitted a presentation to us - good luck, and thank you! You should hear from us in September whether you have succeeded.

There are more and more wonderful things happening each day.

The LCA 2015 Auckland Team

Creating a modern tiling desktop environment using i3

Modern desktop environments like GNOME and KDE involve a lot of mousing around, and I much prefer using the keyboard where I can. This is why I switched to the Ion tiling window manager back when I interned at Net Integration Technologies, and I kept using it until I noticed it had been removed from Debian.

After experimenting with awesome for 2 years and briefly considering xmonad, I finally found a replacement I like in i3. Here is how I customized it and made it play nice with the GNOME and KDE applications I use every day.

Startup script

As soon as I log into my desktop, my startup script starts a few programs, including:

Because of a bug in gnome-settings-daemon which makes the mouse cursor disappear as soon as gnome-settings-daemon is started, I had to run the following to disable the offending gnome-settings-daemon plugin:

dconf write /org/gnome/settings-daemon/plugins/cursor/active false

Screensaver

In addition, gnome-screensaver didn't automatically lock my screen, so I installed xautolock and added it to my startup script:

xautolock -time 30 -locker "gnome-screensaver-command --lock" &

to lock the screen using gnome-screensaver after 30 minutes of inactivity.

I can also trigger it manually using the following shortcut defined in my ~/.i3/config:

bindsym Ctrl+Mod1+l exec xautolock -locknow

Keyboard shortcuts

While keyboard shortcuts can be configured in GNOME, they don't work within i3, so I added a few more bindings to my ~/.i3/config:

# volume control
bindsym XF86AudioLowerVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '-5%'
bindsym XF86AudioRaiseVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '+5%'

# brightness control
bindsym XF86MonBrightnessDown exec xbacklight -steps 1 -time 0 -dec 5
bindsym XF86MonBrightnessUp exec xbacklight -steps 1 -time 0 -inc 5

# show battery stats
bindsym XF86Battery exec gnome-power-statistics

to make volume control, screen brightness and battery status buttons work as expected on my laptop.

These bindings require the following packages:

Keyboard layout switcher

Another thing that used to work with GNOME and that I had to re-create in i3 is the ability to quickly toggle between two keyboard layouts using the keyboard.

To make it work, I wrote a simple shell script and assigned a keyboard shortcut to it in ~/.i3/config:

bindsym $mod+u exec /home/francois/bin/toggle-xkbmap
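The script itself isn't shown here; a hypothetical Python equivalent of the idea, toggling between two layouts with setxkbmap (the layout pair is my assumption), might look like:

#!/usr/bin/env python
# Toggle between two keyboard layouts; "us"/"fr" are assumed here.
import subprocess

LAYOUTS = ("us", "fr")

def current_layout():
    # `setxkbmap -query` prints lines like "layout:     us".
    out = subprocess.check_output(["setxkbmap", "-query"]).decode()
    for line in out.splitlines():
        if line.startswith("layout"):
            return line.split()[-1]
    return LAYOUTS[0]

new = LAYOUTS[1] if current_layout() == LAYOUTS[0] else LAYOUTS[0]
subprocess.check_call(["setxkbmap", new])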

Suspend script

Since I run lots of things in the background, I have set my laptop to avoid suspending when the lid is closed by putting the following in /etc/systemd/logind.conf:

HandleLidSwitch=lock

Instead, when I want to suspend to ram, I use the following keyboard shortcut:

bindsym Ctrl+Mod1+s exec /home/francois/bin/s2ram

which executes a custom suspend script to clear the clipboards (using xsel), flush writes to disk and lock the screen before going to sleep.
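The script isn't reproduced in the post, but based on that description it might look something like this minimal sketch (the exact xsel flags and step ordering are my assumptions):

#!/usr/bin/env python
# Minimal sketch of an s2ram-style suspend script.
import os
import subprocess

# Clear the X primary selection and the clipboard using xsel.
subprocess.call(["xsel", "--primary", "--clear"])
subprocess.call(["xsel", "--clipboard", "--clear"])

# Flush pending writes to disk.
os.sync()

# Lock the screen, then suspend to RAM (pm-suspend needs root,
# hence the sudoers entry below).
subprocess.call(["gnome-screensaver-command", "--lock"])
subprocess.call(["sudo", "pm-suspend"])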

To avoid having to type my sudo password every time pm-suspend is invoked, I added the following line to /etc/sudoers:

francois  ALL=(ALL)  NOPASSWD:  /usr/sbin/pm-suspend

Window and workspace placement hacks

While tiling window managers promise to manage windows for you so that you can focus on more important things, you will most likely want to customize window placement to fit your needs better.

Working around misbehaving applications

A few applications make too many assumptions about window placement and are just plain broken in tiling mode. Here's how to automatically switch them to floating mode:

for_window [class="VidyoDesktop"] floating enable

You can get the Xorg class of the offending application by running this command:

xprop | grep WM_CLASS

before clicking on the window.

Keeping IM windows on the first workspace

I run Pidgin on my first workspace and I have the following rule to keep any new window that pops up (e.g. in response to a new incoming message) on the same workspace:

assign [class="Pidgin"] 1

Automatically moving workspaces when docking

Here's a neat configuration blurb which automatically moves my workspaces (and their contents) from the laptop screen (eDP1) to the external monitor (DP2) when I dock my laptop:

# bind workspaces to the right monitors
workspace 1 output DP2
workspace 2 output DP2
workspace 3 output DP2
workspace 4 output DP2
workspace 5 output DP2
workspace 6 output eDP1

You can get these output names by running:

xrandr --display :0 | grep " connected"

Drupal in the Enterprise (aka Vote for my DrupalCon Session)

TL; DR: [spam]Please vote for my DrupalCon Denver proposal on Drupal workflows in the enterprise.[/spam]

For the last few months I've been working for Technocrat on a new Drupal based site for the Insurance Australia Group's Direct Insurance brands. The current sites are using Autonomy Teamsite.

The basics of the build are relatively straightforward: around 1000 nodes, a bunch of views and a bit of glue to hold it all together. Where things get complicated is the workflow. The financial services sector in Australia is subject to strict control of representations being made about products. The workflow system needs to ensure IAG complies with these requirements.

During the evaluation we found that Drupal workflows are generally based around publishing a single piece of content on the production site. In the IAG case a collection of nodes needs to be published as a piece of work, along with a new block. These changes need to be reviewed by stakeholders and then deployed. This led us to build a job based workflow system.

We are using the Features module to handle all configuration and the Deploy module for entities, with some additional tools, including Symfony, Jenkins and drush, to hold it all together.

I've proposed the session for Drupal Downunder in January and will refine the session based on feedback from there in preparation for Denver. If you want to learn more about Drupal Workflows in the Enterprise, please vote for my session.

Interacting with the Acquia Cloud API using Python

The Acquia Cloud API makes it easy to manage sites on the platform. The API allows you to perform many administrative tasks including creating, destroying and copying databases, deploying code, managing domains and copying files.

Acquia offers 2 official clients. The primary client is a drush plugin which can only be downloaded from Acquia Insight. The other is a PHP library which states in the README that it is "[n]ot ready for production usage".

On a recent project using WF Tools we needed some pretty advanced deployment scripts for sites hosted on Acquia Cloud. We had tried using a mix of bash and PHP, but that created a maintenance nightmare, so we switched to Python.

I was unable to find a high quality Python library, so I wrote a Python client for the Acquia Cloud API. The library implements only the features that we needed, so there are a few things missing.

Chaining complex commands together is easy because the library implements a fluent interface. An extreme example of what is possible is below:


import acapi

# Instantiate the client
c = acapi.Client('user@example.com', 'acquia-token')

# Copy the prod db to dev, make a backup of the dev db and download it to /tmp
c.site('mysite').environment('prod').db('mysite').copy('dev').backups().create().download('/tmp/backup.sql.gz')

Some of the code is "borrowed" from the Python client for Twilio. The library is licensed under the terms of the MIT license.

I am continuing to develop the library. Consider this a working alpha. Improving error handling, creating a comprehensive test suite and implementing the missing API calls are all on the roadmap. Pull requests are welcome.

The code is PEP 8 (coding standards) and PEP 257 (documentation standards) compliant and uses numpydoc conventions for code documentation.

Check out the Python client for Acquia's Cloud API on github.

[life] Day 170: The flight back

I have no idea if I'm getting my day numbers right any more with all the crossings of the international date line, but we'll call Friday day 170 and be done with it.

The flight back went pretty well. Zoe had a good time watching some movies, and also slept for a reasonable chunk of the flight. Zoe's cold had progressed into her typical runny nose/nasty cough combination, but neither was particularly bad. She did cough a bit in her sleep, but it didn't seem to stop her sleeping, and she was pretty happy for the duration of the flight. She was definitely impatient to land, because she knew she'd be seeing her mother.

We must have been the first flight into Brisbane on Friday morning, so we breezed through passport control quickly, and the car seat helpfully came out on the same carousel as the suitcases, so we were able to collect everything and exit quarantine relatively quickly.

Sarah met us outside, and dropped me home, and took the day off to spend with Zoe. I used the day to unpack and run a few errands.

I was super impressed with how well Zoe traveled overall. She's such a good little traveler. She's the perfect age/height for her Trunki now, and that made traversing airports at close to normal walking pace very doable. I'm also happy with how I handled solo-parent international travel. I've done a flight to Townsville with Zoe before, and a flight to Melbourne with Zoe and Anshu, but long-haul international for nearly 3 weeks is a totally different ball game, and aside from me needing to learn to pack a bit better when leaving a location (checklists, checklists, checklists!) everything went really well. The only thing I forgot to pack was my own swimwear, and that was easily fixed.

July 20, 2014

Twitter posts: 2014-07-14 to 2014-07-20

Why Linux is the Future of Computing

Presentation to the La Trobe Valley Linux Miniconference, Saturday July 19, 2014

The Fridge Magnets

Last Thursday night was the TasLUG OpenStack 4th Birthday meetup. We had some nice nibbly food, some drinks, and four OpenStacky talks:

  • An update from the OpenStack Foundation (presented by me, with slides provided by the Foundation).
  • A talk about the NeCTAR cloud and using the command line tools to work with images, by Scott Bragg.
  • A talk on spinning up instances with Nova and Heat, by Stewart Wilde.
  • A talk by me on Ceph, and how it can be used as the storage backend for an OpenStack cloud.

We also had some posters, stickers and fridge magnets made up. The fridge magnets were remarkably popular. If you weren’t at TasLUG last night, and you want a fridge magnet, first download this image (the full-res one linked to, not the inline one):

Then, go to Vistaprint and place an order for Magnetic Business Cards, using this image. You can get 25 done for about $10, plus shipping.

Finally, I would like to publicly thank the OpenStack Foundation for supporting this event.

July 19, 2014

[life] Day 168: Homeward bound

It's all a bit hazy now, but I think Zoe slept all night and woke up a bit early and came down to my room. Graydon appeared not long after. I made us all breakfast and then got stuck into packing.

After we were all packed up, and Zoe and Graydon had played a bit, Neal took us to REI and Best Buy to do a spot of shopping, and then dropped us at Hertz to pick up the rental car.

After lunch, we packed up the car and headed on our way to Dallas.

The drive went really well. I'd rented some sort of Chevy SUV, and it had a nice interior, and the car radio supported Pandora and had a big display. I stuck Zoe's car seat in the middle, and she was happy being able to see out the front and also see the cover art for what Pandora was dishing up. As I hoped, she napped for a couple of hours on the way up.

The drive took about three and a half hours, and I'd wanted to stop for a break along the way, but missed the exit for the only decent looking rest stop, so pressed on.

We made it to the airport with a comfortable margin of time, and had enough time for dinner. The highlight of the evening was hearing Kim Kardashian get paged twice. Everyone looked at each other and wondered if it was that Kim Kardashian and considered going to the gate she was paged to to find out.

Our flight ended up leaving a little bit late, due to needing to unload some of the cargo to make the distance and also to ensure we didn't arrive before the 5am curfew in Brisbane airport.

July 17, 2014

Adverse Childhood Experience (ACE) questionnaire | acestoohigh.com

NOTE: the links referred to in this post may contain triggers. Make sure you have appropriate support available.

http://acestoohigh.com/got-your-ace-score/

There are 10 types of childhood trauma measured in the ACE Study, personal as well as ones related to other family members. Once you have your score, there are many useful insights later in the article.

The origin of this study was actually in an obesity clinic.

OpenPower firmware up on github!

With the whole OpenPower thing, a lot of low level firmware is being open sourced, which is really exciting for the platform – the less proprietary code sitting in memory the better in my books.

If you go to https://github.com/open-power you’ll see code for a bunch of the low level firmware for OpenPower and POWER8.

Hostboot is the bit of code that brings up the CPU, and skiboot both sets up hardware and provides runtime services to Linux (such as talking to the service processor, if one is present).

Patches to https://github.com/open-power/skiboot/blob/master/doc/overview.txt are (of course) really quite welcome. It shouldn’t be too hard to get your head around the basics.

To see the Linux side of the OPAL interface, go check out linux/arch/powerpc/platforms/powernv – there you can see how we ask OPAL to do things for us.

If you buy a POWER8 system from IBM running PowerKVM you’re running this code.

Update on MySQL on POWER8

About 1.5 months ago I blogged on MySQL 5.6 on POWER and talked about what I had to poke at to make modern MySQL versions run and run well on shiny POWER8 systems.

One of those bugs, MySQL bug 47213 (InnoDB mutex/rw_lock should be conscious of memory ordering other than Intel) was recently marked as CLOSED by the Oracle MySQL team and the upcoming 5.6.20 and 5.7.5 releases should have the fix!

This is excellent news for those wanting to run MySQL on SMP systems that don’t have an Intel-like memory model (e.g. POWER and MIPS64).

This was the most major and invasive patch in the patchset for MySQL on POWER. It’s absolutely fantastic that this has made it into 5.6.20 and 5.7.5 and may mean that these new versions will work out-of-the-box on POWER (I haven’t checked… but from glancing back at my patchset there was only one other patch that could be related to correctness rather than performance).

API Bug of the Week: getsockname().

A “non-blocking” IPv6 connect() call was, in fact, blocking. Tracking that down made me realize the IPv6 address was mostly random garbage, which was caused by this function:

bool get_fd_addr(int fd, struct protocol_net_address *addr)
{
   union {
      struct sockaddr sa;
      struct sockaddr_in in;
      struct sockaddr_in6 in6;
   } u;
   socklen_t len = sizeof(len);
   if (getsockname(fd, &u.sa, &len) != 0)
      return false;
   ...
}

The bug: “sizeof(len)” should be “sizeof(u)”.  But when presented with a too-short length, getsockname() truncates, and otherwise “succeeds”; you have to check the resulting len value to see what you should have passed.

Obviously an error return would be better here, but the writable len arg is pretty useless: I don’t know of any callers who check the length return and do anything useful with it.  Provide getsocklen() for those who do care, and have getsockname() take a size_t as its third arg.

Oh, and the blocking?  That was because I was calling “fcntl(fd, F_SETFD, …)” instead of “F_SETFL”!

July 16, 2014

[life] Day 167: Hamilton Pool and Reimers Ranch Park

Zoe slept all night, but woke up with signs of coming down with a cold. She was also mighty grumpy. The plan had been to go swimming at Hamilton Pool today, and I was initially thinking we should skip it, but Eva pointed out it was like 100°F and it wouldn't really change much, so we stuck with the original plan.

Hamilton Pool allows a limited number of vehicles in at a time, and so Neal was aiming to be there at 9am when the park opened to guarantee we'd get in. We arrived right at the crack of 9am, and there were a few cars in front of us already, but we made it in successfully.

Zoe did really well walking down from the car park to the pool, and we swam around for a bit. It was out of my comfort zone for swimming (rocky floor, poor visibility, over my head water depth), but I swam across it anyway. It was a very beautiful pool carved out of the limestone by Hamilton Creek. There were a couple of points where the creek trickled over the edge overhead and made little showers.

After a couple of hours there, we returned to the car (Zoe again did really well hiking up) and drove to neighbouring Reimers Ranch, where we had our picnic lunch under cover while a rain shower passed over. We then walked down to the Pedernales River and had a swim around in there.

Zoe wore a life jacket at both swimming locations, and really enjoyed the independence of being able to float around in the deep water.

We had to be back home by 3pm, which we were, so it was a shorter day than yesterday, but a good one nevertheless. The inclement weather also seemed to drop the temperature by about 5 degrees Celsius, so it was a good day overall. Aside from the morning grumpies, Zoe was in a fabulous mood all day.

July 15, 2014

Linux Security Summit 2014 Schedule Published

The schedule for the 2014 Linux Security Summit (LSS2014) is now published.

The event will be held over two days (18th & 19th August), starting with James Bottomley as the keynote speaker.  The keynote will be followed by refereed talks, group discussions, kernel security subsystem updates, and break-out sessions.

The refereed talks are:

  • Verified Component Firmware – Kees Cook, Google
  • Protecting the Android TCB with SELinux – Stephen Smalley, NSA
  • Tizen, Security and the Internet of Things – Casey Schaufler, Intel
  • Capsicum on Linux – David Drysdale, Google
  • Quantifying and Reducing the Kernel Attack Surface – Anil Kurmus, IBM
  • Extending the Linux Integrity Subsystem for TCB Protection – David Safford & Mimi Zohar, IBM
  • Application Confinement with User Namespaces – Serge Hallyn & Stéphane Graber, Canonical

Discussion session topics include Trusted Kernel Lock-down Patch Series, led by Kees Cook; and EXT4 Encryption, led by Michael Halcrow & Ted Ts’o.   There’ll be kernel security subsystem updates from the SELinux, AppArmor, Smack, and Integrity maintainers.  The break-out sessions are open format and a good opportunity to collaborate face-to-face on outstanding or emerging issues.

See the schedule for more details.

LSS2014 is open to all registered attendees of LinuxCon.  Note that discounted registration is available until the 18th of July (end of this week).

See you in Chicago!

[life] Day 166: The Neal Tanner tour of Austin

Alas, Zoe woke up at about 1am very sad. I'm not sure if she woke up and was so sad because of the lack of Cowie or disorientation due to the new house, but I managed to calm her down in my room downstairs and get her to go back to bed in Graydon's room, and she slept until about 7:30am. Miraculously, she didn't seem to wake up Graydon or Wiley.

Neal had some time off, and with the au pair looking after Wiley, he was able to give Zoe and I a tour of Austin with Graydon tagging along.

First stop was the Capitol building in Austin. It was a beautiful building, bigger than the Capitol building in Washington D.C. (everything's bigger in Texas). We tacked ourselves onto the end of a tour, and broke away a couple of times to check things out at our own pace.

Unfortunately the Senate wing was closed for remodeling, and the House of Representatives was being used for a mock government thing (I learned that Texas only has a part time legislature), so we weren't able to see these wings thoroughly, but we were able to go into the public gallery of the House of Representatives while the mock government thing was happening.

Zoe and Graydon had lots of fun chasing each other around the rotunda under the dome, and no one seemed to care.

After that, we drove over to Zilker Park for a picnic lunch.

After lunch, we went to Barton Springs Pool, a natural pool fed by underground springs, for a swim. The water was a very refreshing 20°C. The bottom was a bit slippery, but manageable. Once Zoe adjusted to the breathtakingly cold temperature, she was fine. It was a good day to cool off, because it got up to 37°C.

After the swim, Graydon rode his bike, and Zoe borrowed his balance bike, and we made our way along the trail that ran along the edge of Town Lake, and took in a spectacular view of downtown Austin.

It was seriously hot by this stage, and Zoe was struggling a bit, so we slowly made our way back to the car. I'd spotted a frozen custard place in our travels, so we sampled some of that on the way back home.

For dinner, Neal and I popped out to Rudy's for some more tasty BBQ take out. It was quite the experience just ordering.

Hookup wires can connect themselves...

A test of precision movement of the Lynxmotion AL5D robot arm, seeing if it could pluck a hookup wire from a whiteboard and insert it into an Arduino Uno. The result: yes it certainly can! To go from a Fritzing layout file to an automatic real-world jumper setup, wires would have to be inserted in a specific order, so that the gripper could overhang the unwired part of the header as it went along.
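
The ordering constraint is simple to express in code. Here's a minimal sketch of the idea (hypothetical data, not the code driving the arm): if the gripper reaches in from the high-numbered end of the header, inserting wires in ascending pin order means the gripper only ever overhangs pins that are still empty.

    # Hypothetical sketch: order jumper insertions so the gripper always
    # overhangs the not-yet-wired end of the header.
    # wires: (net_name, header_pin) pairs taken from a parsed Fritzing
    # layout (the parsing itself is out of scope here).
    def insertion_order(wires, gripper_from_high_end=True):
        # Gripper approaching from the high-pin end => wire low pins first,
        # so occupied pins are never underneath the gripper.
        return sorted(wires, key=lambda w: w[1], reverse=not gripper_from_high_end)

    wires = [("LED", 13), ("SDA", 18), ("GND", 14), ("TX", 1)]
    for net, pin in insertion_order(wires):
        print("insert %s into header pin %d" % (net, pin))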





Lynxmotion AL5D moving a jumper to an Arduino, from Ben Martin on Vimeo.

July 14, 2014

[life] Day 165: Switching homes, World Cup

We had a leisurely start to the day today. Zoe actually woke up and went downstairs without coming into my room. Apparently she tried waking up Vincent, but failed, and went downstairs and played on her own. I got to have a lie in until 8am, when I figured she must be sleeping in and got up to check on her to discover I was the last one up.

I packed up our suitcases and then we had one last swim in the pool. At the end of our time in the pool, Henner discovered a baby snake in the pool filter, so we rescued it, and after Zoe and Vincent had a look at it and a hold, we walked it down to the end of the street to return it to the wild. Hopefully it survived Vincent flinging it into the unknown.

After lunch, we packed up the cars and went around to Neal and Eva's place to watch the World Cup. Eva's half-German, and takes her German heritage seriously, so Zoe and I became honorary Germans for the afternoon. I was given a German team soccer jersey to wear, and we both put on German flag face paint. I'm no soccer fan, but it was fun anyway.

Zoe and Graydon don't seem to really remember each other from when they were next door to each other, but I think they must on a subconscious level, because they've gotten on spectacularly well. After a dinner of grilled chicken fajitas, the Schliebs bid us farewell, and I threw all the kids in the bath. After Neal read Zoe and Graydon a story, they went to bed. I went upstairs to check on them about 20 minutes after lights out and they were still giggling away to each other. It was very cute. I think Zoe's going to have a lot of fun for the next couple of days.

I discovered when I was unpacking that Cowie was missing. After checking with Susanne, and her scouring the house, Cowie was discovered tucked into bed in Greta's princess tent, where I wouldn't have had a hope of finding her, so we'll have to do a retrieval run tomorrow at some point. I've managed to convince Zoe to go to bed with some substitute stuffed toys without much push back, but I don't know if it'll cut the mustard for the full night. We shall see.

Call for Proposals and Mini-Confs extended for one week

The Call for Proposals (CFP) opened on the 9th July, and the quality of submissions so far has been fantastic. There have been some requests for an extension from potential speakers, and we want to make sure that everyone has a chance to have their proposal considered! Originally scheduled to close on Sunday 13th July, the CFP has been extended by the papers committee until midnight Sunday 20th July, as we know there are more stories out there that deserve attention.

For those of you still considering submitting a proposal, there is only one rule:

Your proposal must be related to open source

This year the papers committee is going to be focused on open source in education as well as our usual focus on deep technical content.

If you have been working on something interesting, now is the time to tell the world! Visit linux.conf.au/cfp to register and for more information, including some very useful tips and tricks for submitting a proposal.

Important Dates:

  • Call for proposals now closes: Midnight 20 July 2014 NZ Time
  • Email notifications from conference organisers: September 2014
  • Early Bird registrations open: 23 September 2014
  • Conference dates: Monday 12 - Friday 16 January 2015

About linux.conf.au:

LCA (linux.conf.au) is a meeting place for the free and open source software communities. It will be held in Auckland at the University of Auckland Business School from Monday 12 to Friday 16 January, 2015, and provides a unique opportunity for open source developers, students, users and hackers to come together, share new ideas and collaborate.

The LCA2015 team

July 13, 2014

Twitter posts: 2014-07-07 to 2014-07-13

[life] Day 164: San Antonio Zoo

A big day yesterday on top of a big day the day before (and a late night).

We drove down to San Antonio to go to the San Antonio Zoo. The drive took about an hour and a half, but Zoe was happy watching a movie in the back with Vincent.

We rendezvoused with Eva and Neal and Graydon and Wiley there, and worked our way around the Zoo.

One of the things I love about America is the zoos are so much more affordable. I'll have to do a separate post about the price differences between US zoos and Australian zoos some time.

My favourite exhibit was the hippopotamus one. The hippos were submerged in an exhibit with a glass wall, so you could see above and below the waterline, and you could watch them coming up for air, and the fish nibbling away at their skin while they sat on the bottom. It held Zoe's attention for a while too.

We had a good time, and had at least superficially covered everything by early afternoon. The boys went on the train, which was a massive 20 minute ride, while Zoe and I had an ice cream.

Zoe did pretty well, but the combination of the late night the night before, the heat, and being a bit hungry before lunch made her a bit irritable. I probably carried her on my shoulders for 75% or more of the excursion. It certainly was hot.

We all stopped off for Tex Mex at Chuy's on the way home for dinner.

Zoe went to bed nice and early and slept solidly for 12 hours.

July 12, 2014

[life] Day 163: Montessori, shopping, BBQ and bats

We had quite the full day yesterday.

Susanne had arranged with Vincent's school, where he had been going to the holiday program, for Zoe to attend on Friday as well (if she wanted to). We'd been telling Zoe about it since Wednesday, and she'd been saying she didn't want to go, but I wanted her to actually see what she was declining before we made a final decision.

We tagged along on Friday morning when Susanne was dropping off Vincent. Once Zoe realised that it was just like her Kindergarten, and that the teachers seemed nice, she became more receptive to the idea, but in the end still couldn't quite bring herself to stay.

I borrowed Henner's monster pickup truck and Zoe and I went to Barton Creek Square to do some clothes shopping. It was good timing, because there were heaps of sales on. I was glad that I had Zoe with me, because that way I could choose outfits for her that she actually liked.

It was the first time I've driven on the other side of the road for over a year, and in a monster pickup truck to boot. It was quite the experience.

We got back just before Susanne was going to head back to pick Vincent up, and Zoe really wanted to pick up Vincent, so Zoe went with Susanne and I popped out to a nearby mall to get something for myself and some lunch at VERTS Kebap, which is apparently a bit of a thing.

We played around in the pool in the afternoon, and then went out for dinner at The County Line, where we caught up with our old next-door neighbours, Neal and Eva and Graydon and their new addition, Wiley. Much meat was eaten.

After dinner, Henner, Vincent, Zoe and I went in to Austin to view the bats that roost under South Congress Bridge. We sat in the park under the bridge and waited for the right time for them to all fly out. There was a lady doing the rounds answering questions, and Zoe gave her a really good grilling.

We got back quite late from viewing the bats, and put the kids straight to bed.

OpenStack miniconf programme for PyCon AU

The miniconf organisers are pleased to announce their first draft of a schedule for the PyCon Australia OpenStack miniconf:

09:00 – 09:15 Welcome

(Robert Collins, HP & Joshua Hesketh, Rackspace Australia)
09:15 – 09:45 OpenStack Identity and Federation

(Jamie Lennox, Red Hat)

Keystone is the central authentication and authorization point for OpenStack.

It already handles managing users via LDAP and SQL, however, as OpenStack and the number of possible identity sources grows, Keystone is evolving to rely primarily on external sources of identity using protocols like SAML. Keystone’s role then becomes one of pure authorization and mapping those identities into an OpenStack context.

For the unfamiliar, we’ll start with a quick recap of the role of Keystone and the permission models of OpenStack, then look at the challenges of handling many distinct authentication sources and the in-development changes required for federated identity providers.
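
As a concrete illustration of that mapping role, here is a hedged sketch of the kind of identity-mapping rule Keystone’s federation support consumes (the real rules are JSON; this is the equivalent Python structure, and the group id is a placeholder):

    # Sketch of a federation mapping rule: an attribute asserted by an
    # external identity provider (e.g. via SAML) is mapped onto a local
    # user name and a group that carries the actual authorization.
    mapping = {
        "rules": [
            {
                "remote": [
                    {"type": "REMOTE_USER"},  # attribute asserted by the IdP
                ],
                "local": [
                    {"user": {"name": "{0}"}},                # user name taken from the attribute
                    {"group": {"id": "FEDERATED_GROUP_ID"}},  # placeholder group id
                ],
            }
        ]
    }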



Jamie is Australia’s Keystone core developer, working for Red Hat in Brisbane, and currently the primary developer on Kite. He enjoys tinkering with anything security related, and has recently been involved in making the client side of OpenStack more usable for developers.

09:55 – 10:25 Python Build Reasonableness and Semantic Versioning

(Robert Collins, HP)

Semver and Python with PBR

PBR – Python Build Reasonableness

PBR is a setuptools plugin which OpenStack developed to provide simple and consistent minimal-boilerplate build definitions for its projects. Now used by all the OpenStack projects, PBR provides integration glue for core features:

  • testing
  • binary package creation for Linux distributors
  • inclusion of files in tarballs
  • changelog and authors file creation
  • pypi summary creation
  • version number creation
  • sphinx doc stub creation and manpage enablement
  • unified requirements management – for both easy-install and pip with single-file control

The most interesting part is the version number creation, since coming up with the right version number can be a contentious discussion in some projects. Semver provides simple and robust rules for deciding on version numbers, and I’m in the middle of implementing automation for these in PBR itself, with integration glue to export them in PEP-440, dpkg and rpm format. The only dependencies PBR has are git + a recent pip, so this should be useful for many attendees – and while PBR is an OpenStack invention we’re very interested in making sure it’s useful and reliable for anyone that wants to use it.
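
To give a feel for the minimal-boilerplate claim, a typical PBR-based setup.py is nearly empty (a sketch of common PBR usage; the actual metadata lives in setup.cfg and requirements.txt rather than here):

    # setup.py for a PBR-based project: near-empty by design.
    # Name, authors, entry points and requirements are read from
    # setup.cfg and requirements.txt; the version number is derived
    # from git tags following semver-style rules.
    import setuptools

    setuptools.setup(
        setup_requires=['pbr'],
        pbr=True,
    )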





10:25 – 10:45 Morning Tea
10:50 – 11:20 Tempest: OpenStack Integrated Testing

(Matthew Treinish, HP)

Tempest is OpenStack’s integrated test suite which aims to provide validation that OpenStack is working. As such it is run as a gating job on all proposed commits to OpenStack. It is designed to run against an operational OpenStack cloud, which includes everything from a devstack deployment to a public cloud. Tempest originally started as just a small number of integration tests to verify that the various OpenStack projects worked together. It has since grown into one of the top 5 most active OpenStack projects, with several different classes of testing and validation.

This talk will provide an overview of what Tempest is and how it works, explaining the philosophy behind the project and giving insight into why things are set up a certain way. Additionally, it will cover some of the features in Tempest and how to configure and run it locally with or without devstack.
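
As a rough illustration of the local-run workflow (illustrative only; the option names below are from 2014-era Tempest, so check etc/tempest.conf.sample in your checkout for the authoritative set):

    # Generate a minimal tempest.conf pointing Tempest at an existing cloud.
    import configparser

    conf = configparser.ConfigParser()
    conf["identity"] = {
        "uri": "http://keystone.example.org:5000/v2.0/",  # hypothetical endpoint
        "username": "demo",
        "password": "secret",
        "tenant_name": "demo",
    }
    with open("etc/tempest.conf", "w") as f:
        conf.write(f)

    # Then run the suite (or a subset of it) with testr, e.g.:
    #   testr run tempest.api.compute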



Matthew Treinish is a part of the HP OpenStack team working to make OpenStack better. He’s been an active contributor to OpenStack since the Folsom cycle. He is the QA program PTL for the Juno development cycle, a core contributor on Tempest, elastic-recheck, and a couple of smaller projects and a member of the stable-maint team.

11:30 – 12:00 Changing the world with ZeroVM and Swift

(Jakub Krajcovic, Rackspace Australia)

ZeroVM is a new generation of virtualization. It provides a secure sandbox for executing code and is able to spawn in under 5ms. It natively supports Python execution out of the box.

This talk aims to outline how ZeroVM and Swift can start changing how we fundamentally think about computing and the consumption of IT resources. ZeroVM could be thought of as a “micro-hypervisor” that creates a sandboxed environment for code execution and, through Python middleware, can run natively on top of a Swift storage cluster. This talk will present ZeroVM, describe how it plugs into Swift, and cover what capabilities the combination of these technologies opens.

The most immediate “killer apps” that the Swift+ZeroVM combination offers are in data processing:

– oil, geo, mining

– TV and movies

– photo and picture processing

Some less obvious but even more interesting possibilities include completely redesigning an SQL DB and creating a structured query language that processes binary unstructured data instead of rows and columns.



Jakub Krajcovic is a solutions architect with very varied professional experience. He has a background in computer science, started out as an HP-UX engineer and Linux enthusiast, but over the course of time found himself working on two feature films, including the first of the Hobbit trilogy, and gradually found a home in the world of cloud computing. In his spare time he also worked with the Open Group on translating parts of the TOGAF 9 Framework. He is passionate about complex problems surrounding large quantities of data and making utility computing a reality. Jakub currently leads Rackspace’s cloud architecture team in Australia.

12:00 – 13:30 Lunch
13:30 – 14:00 Deploy your python app into an OpenStack cloud using Solum

(Angus Salkeld, Rackspace Australia)

This talk will give an introduction to Solum and show how you can use Solum to deploy an application into an OpenStack cloud.

A short video of Solum in action will be shown to give the audience an idea of the problem that Solum is trying to solve.

The presenter will then go through some of the capabilities of Solum and some future features that are being worked on.



I am a developer at Rackspace Hosting, working on multiple OpenStack projects (Solum, Heat, Mistral). Prior to Rackspace I worked at Red Hat on OpenStack and before that Clustering.

14:10 – 14:40 OpenStack security

(Grant Murphy, Red Hat)

This session will look at the historical trend of vulnerabilities that have been found in OpenStack, how they are managed, and the initiatives the OpenStack security group is currently undertaking to help reduce future occurrences.



Grant Murphy is a security engineer based in Brisbane, currently working on product security for Red Hat. He is also a member of the upstream OpenStack vulnerability management team and a Python enthusiast.

14:50 – 15:20 TripleO – What, Why, How

(James Polley, HP)

What is(n’t) TripleO?

Why do I want to use it?

How do I get started?



After spending too many years as a sysadmin, James has forsaken the glorious excitement that is being woken in the middle of the night by broken systems with the glorious excitement that is creating and documenting systems that break and wake other people in the middle of the night. He fully intended to come back and update this bio prior to the conference, but embarrassingly forgot to do so.

15:20 – 15:40 Afternoon Tea
15:45 – 16:15 How to Read the Logs

(Anita Kuno, HP)

OpenStack generates a lot of log files in its testing process. Learn how to find and identify common patterns in log files from a member of the OpenStack Infrastructure team. Understand the different kinds of log files to expect, how to evaluate them to find why your patch is failing and what steps to take when you identify the failure. Learn how to search for bugs and how to file a bug report. Understanding the value that elastic-recheck provides will also be covered.



Anita Kuno is a Cloud Automation and Distribution Engineer at HP. She works on upstream OpenStack as part of the Infrastructure team. She has mentored many new contributors on how to develop with the OpenStack Infrastructure system, which includes Gerrit and Jenkins. She is a Gerrit upstream contributor.



Anita has served as an election official for the OpenStack Program Technical Lead Election and the Technical Committee Election for two consecutive release cycles. She is also an astrologer and acupuncturist.

16:25 – 16:55 Large Scale Identification of Race Conditions (In OpenStack CI)

(Joseph Gordon, HP)

Does your project have a CI system that suffers from an ever-growing set of race conditions? We have the tool for you: it has enabled increased velocity despite project growth.

When talking about the GNU HURD kernel, Richard Stallman once said, “it turned out that debugging these asynchronous multithreaded programs was really hard.” With 30+ asynchronous services developed by over 1000 people, the OpenStack project is an object lesson in this problem. One of the consequences is that race conditions often leak into code with no obvious defect. Just before OpenStack’s most recent stable release we were pushing the boundaries of what was possible with manual tracking of race conditions. To address this problem we have developed an ElasticSearch based toolchain called “elastic-recheck.” This helps us track race conditions so developers can fix them, and identify when CI failures are related to the failed patch or are due to a known pre-existing race condition. Automated tracking of over 70 specific race conditions has allowed us to quickly determine which bugs are hurting us the most, allowing us to prioritize debugging efforts. Immediate and automated classification of test failures into genuine and false failures has saved countless hours that would have been wasted digging through the over 350MB of logs produced by a single test run.
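
The core idea is easy to sketch (a toy illustration, not the elastic-recheck codebase; the host name, log signature and build id below are made up, and it assumes the elasticsearch Python client): each known race condition is stored as a search query, and a failed run whose logs match a query is classified as that known bug rather than as a genuine failure of the patch.

    # Toy sketch: classify a failed CI run by matching its logs against a
    # known bug's stored query in an Elasticsearch/Logstash cluster.
    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://logstash.example.org:9200"])  # hypothetical host

    # A known race condition expressed as a query: a tell-tale log line,
    # scoped to the build that just failed.
    known_bug_query = {
        "query": {
            "bool": {
                "must": [
                    {"match_phrase": {"message": "Connection to neutron failed"}},
                    {"match": {"build_uuid": "deadbeef"}},  # placeholder build id
                ]
            }
        }
    }

    result = es.search(index="logstash-*", body=known_bug_query)
    if result["hits"]["hits"]:
        print("failure matches a known race condition; recheck")
    else:
        print("genuine failure; investigate the patch")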



Joe Gordon works full time on the open source project OpenStack on behalf of HP. He has spoken at, and co-chaired at, OpenStack summits, and has given talks at events such as Europython 2013, CloudOpen (Japan 2014, Europe 2013), and OpenStack Israel.

16:55 – 17:10 Close

(Robert Collins, HP & Joshua Hesketh, Rackspace Australia)

July 11, 2014

Improving Computer Reliability

In a comment on my post about Taxing Inferior Products [1] Ben pointed out that most crashes are due to software bugs. Both Ben and I work on the Debian project and have had significant experience of software causing system crashes for Debian users.

But I still think that the widespread adoption of ECC RAM is a good first step towards improving the reliability of the computing infrastructure.

Currently when software developers receive bug reports they always wonder whether the bug was caused by defective hardware. So when bugs can’t be reproduced (or can’t be reproduced in a way that matches the bug report) they often get put in a list of random crash reports and no further attention is paid to them.

When a system has ECC RAM and a filesystem that uses checksums for all data and metadata we can have greater confidence that random bugs aren’t due to hardware problems. For example, if a user reports a file corruption bug that they can’t repeat, which occurred when using the Ext3 filesystem on a typical desktop PC, I’ll wonder about the reliability of the storage and RAM in their system. If however the same bug report came from someone who had ECC RAM and used the ZFS filesystem then I would be more likely to consider it a software bug.

The current situation is that every part of a typical PC is unreliable. When a bug could be attributed to any of several pieces of hardware, the OS kernel, or even malware (in the case of MS Windows), it’s hard to know where to start in tracking it down. Most users have given up and accepted that crashing periodically is just what computers do. Even experienced Linux users sometimes give up on tracking down bugs properly because it’s often very difficult to file a good bug report. For the typical computer user (who doesn’t have the power that a skilled Linux user has) it’s much worse; filing a bug report seems about as useful as praying.

One of the features of ECC RAM is that the motherboard can inform the user (either at boot time, after an NMI reboot, or through system diagnostics) of the problem so it can be fixed. A feature of filesystems such as ZFS and BTRFS is that they can inform the user of drive corruption problems, sometimes before any data is lost.
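
On Linux, those ECC reports surface through the kernel’s EDAC subsystem in sysfs. A minimal sketch of reading the counters (assuming ECC hardware with an edac driver loaded; the paths are the standard EDAC sysfs ones):

    # Read corrected/uncorrected ECC error counts from the EDAC sysfs
    # interface; non-zero uncorrected errors mean a DIMM needs attention.
    import glob

    for mc in sorted(glob.glob("/sys/devices/system/edac/mc/mc*")):
        with open(mc + "/ce_count") as f:
            ce = int(f.read())  # corrected errors (ECC did its job)
        with open(mc + "/ue_count") as f:
            ue = int(f.read())  # uncorrected errors (data was at risk)
        print("%s: %d corrected, %d uncorrected" % (mc, ce, ue))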

My recommendation of BTRFS in regard to system integrity does have a significant caveat: currently the system reliability decrease due to crashes outweighs the reliability increase due to checksums. This isn’t all bad, because at least when BTRFS crashes you know what the problem is, and BTRFS is rapidly improving in this regard. When I discuss BTRFS in posts like this one I’m considering the theoretical issues related to the design, not the practical issues of software bugs. That said, I’ve twice had a BTRFS filesystem seriously corrupted by a faulty DIMM on a system without ECC RAM.

[life] Day 162: Trampoline, zoo and more time in the pool

Zoe woke up at around her usual 1:30am, with her mosquito bites troubling her. It seems that it's always around 1:30am that they interrupt her sleep. I put some cream on them and put her back to sleep in my bed.

I think at about 3:30am I awoke to find Vincent at the foot of my bed whispering something I couldn't make out. After I woke up enough to realise what was going on, I tried to talk to him, but he got upset and left. I decided to let him go.

Zoe woke up at around 7am, and went in to Vincent's room and found Henner asleep in there with him, and came back to my room, so it sounds like it was a fun night all round. Hopefully tonight will go better.

After breakfast, Susanne dropped Vincent's little sister Greta off at day care, and Zoe and Vincent had a great time on Vincent's trampoline. I joined in with them for a while and we had much fun just mucking around on the trampoline.

After that, we were going to go somewhere (I've forgotten), but ran into traffic from President Obama's visit, so we aborted that and went to the Austin Zoo and Animal Sanctuary (it's amazing how much my muscle memory wants me to type "Zoe" when I try to type "Zoo").

It was a pretty decent Zoo. Not huge, but not half-baked either. They had a good collection of big cats, with the tigers being pretty accessible. It was hot, in the 35°C range, but the lack of humidity made the heat quite tolerable. Zoe had a lot of fun feeding some (very large) goats.

We had what felt like one of the longest miniature train rides I've ever had, for the price of $2.50 a ticket, and then headed off to get some sandwiches for lunch.

After lunch, Vincent and Zoe had a splash around in the "splash pad" of a strip mall near Henner and Susanne's house, and then after an ice cream we went home and jumped in the pool.

Another nice day in Austin. I put Zoe to bed early again without any fuss. Hopefully the mosquito bites won't give her grief tonight.

[life] Day 161: Atlanta to Austin

Yesterday was a pretty full day. We were up relatively early to head to the airport for a 10am flight. Chris dropped us off at about 8:30am. Check in was pretty quick, and I had the most amazing TSA experience I've ever had.

I think we must have gotten randomly opted into TSA Pre✓ or something, because I didn't have to take my laptop out of my bag or remove my shoes. It was amazing. We were through security in a matter of minutes. I was so astounded I almost felt the need to find a supervisor and thank them.

We made it to our gate comfortably before boarding time. Unfortunately I hadn't managed to convince Zoe to have a good breakfast, so she was already complaining of being hungry. I didn't want to miss the opportunity to pre-board by going to forage for food, so we just had to ride it out.

After we boarded and were about to push back, the plane's auxiliary power conked out. I was worried the plane was going to be broken and we'd have to get off and wait for a new one, but fortunately they managed to just hook it up to gate power and restart it, and we continued with our departure without it being too much later than scheduled.

The cabin crew took a while to start the refreshments, and Zoe was politely reminding me every 30 seconds that she was still hungry. The refreshments started, and then while they were half a dozen rows in front of us, the pilot announced we were heading for some turbulence and they were suspending the beverage service. Poor Zoe was very disappointed, but handled it very well.

We eventually got our snacks and drinks, which tided her over until we arrived in Austin. Susanne and Vincent sailed up to the kerb just as we emerged from the terminal, so it was very good timing.

Zoe and Vincent got along like long lost best friends. It was a battle to get Zoe to eat lunch, she was so distracted.

After some lunch and a splash around in the pool, Vincent's swim teacher arrived to give him a lesson. Zoe had been invited to join in as well, and after a few minutes watching and hesitating, she joined in. Having the swim teacher do in-home one on one lessons is awesome.

I put Zoe down to bed about half an hour earlier than normal, because that was Vincent's bedtime, and they were sharing a room. I figured with all the day's excitement, she could use the extra sleep anyway.

Austin is delightfully warm and dry. I think it was around 34°C and not much humidity. The pool was a great place to be.

July 10, 2014

Welcome To Australia


Ghosts of Conference Past

Wow, has it really been two weeks since the Capital Cabal had a bunch of the Ghosts of Conference Past in Wellington? It feels like it was, well, this weekend - and it's only Saturday morning!



One of the traditions of linux.conf.au that works really well is the Ghosts Weekend. This is when the current organisers invite a selection of previous organisers somewhere for a weekend so they can perform a brain dump about things that went right for them and, in some cases more importantly, what went wrong. They also go over the organisers' plans to make sure that they're on track.



This all leads to a pretty intensive weekend!



The ghosts
We started on Friday with picking everyone up from the airport (my final pickup was 00:30, then we had to find some soda water for Ben & Leah's duty-free gin!). Saturday morning was a walk around all the venues (yes, they're all within walking distance of one another). After a fantastic lunch at Coyote's we then hit a meeting room and got down to discussions.



If you're running a discussion like this, and most people in the room have laptops, then I highly recommend using Gobby to take your notes. Everyone has a slightly different take on things, so they note down slightly different things. Fantastic.



Our discussions continued over dinner on Saturday night which was at Fujiyama Teppanyaki, which is certainly a fun place to have dinner. Stewart did an amazing job at dodging the bit of cooked egg that one of the chefs flung his way!



nom nom nom
Sunday was back in the meeting room for continuing discussions. Followed by a few well earned beers and dinner at Wellington Brewery Bar. nom, nom, nom.



All in all, it was a fantastic weekend and I'd like to thank the Ghosts who attended, give apologies to the Ghosts we couldn't invite, and give a huge thank you to Susanne for organising a fantastic weekend that went incredibly smoothly!

LCA2010 in Wellington, New Zealand!

Awesome! Susanne and I have just announced at LCA2009 that linux.conf.au in 2010 will be held in Wellington, New Zealand! The website for LCA2010 is at www.penguinsvisiting.org.nz.



During our announcement we wanted to show a fantastic little clip from 42 Below about Wellington. But following the tradition of Hobart's announcement at LCA2008 we had sound FAIL, so it didn't work. Please, go and check out the video clip on YouTube. (But ignore the bit about winning a trip to Wellington, the competition has finished.)



Follow the signs, visit Wellington!

Keysigning - done!

Wow, this is probably the first time I've managed to sign the verified keys from the GPG keysigning within 24 hours of the keysigning finishing.



One task crossed off my imaginary todo list!

MythTV Mini-conference - a success

The MythTV mini-conference has been and gone and from my point of view from organising it, it was a total success. We had a stack of great talks (including an impromptu talk by Paul Wayper on the customised wooden panel on his Dell laptop).



Nigel Pearson was great, giving us all an insight into the development of MythTV: not only how the development process works, but also the complexity of its internals. That was really interesting.



An unexpected bonus was that all the talks were recorded. We weren't expecting the A/V stuff to be set up until Wednesday, but it was all ready to go (after one minor cable change) on Monday morning in our room. Fantastic! A big thank you to Josh and his A/V team. The recordings aren't up for downloading yet; I'll post here (and on the mythtv-users mailing list) when they're ready.



At the end of the day we had a large discussion about what features people wanted to see in MythTV, and in particular what features people wanted to see in MythTV for a 1.0 release. The general consensus was that MythTV as it stands has more than enough features for 1.0. Once the recordings are up I'll write a list of the key items that were discussed.



Finally, thank you to all my speakers, and everyone who came along to listen, and who took part in the discussion afterwards.



Cheers!

MythTV Lightning Talks

For the MythTV Mini-conference at LCA 2009 we're going to have some lightning talks, and have a few spare slots.



If you'd like to tell people how great your setup is; your current pet project; or almost anything else, then please let me know.

MythTV Mini-conference Schedule

The MythTV Mini-conference schedule has now been published here. If you want to attend the mini-conference you have to be registered for linux.conf.au! The talks are:
  • MythNetTV by Michael Still
  • Transcoding in MythTV by Paul Wayper
  • MythTV Internals by Nigel Pearson
  • Debugging DTV by Steven Ellis
  • MythTV Development from '04 to Now by Nigel Pearson
  • Lightning Talks
  • Presence Awareness by Jonathan Oxer
  • Panel
It is an awesome line-up and I'm certainly looking forward to hearing all about them!

MythTV Mini-conference

I just realised that I hadn't blogged about this at all. I'm hosting the MythTV mini-conference at linux.conf.au 2009 in Hobart, Australia in January. How could I have forgotten about blogging this?! The little blurb about the mini-conference is:
MythTV is a personal video recorder (PVR) for Linux which allows
you to decide what you want to watch; when you want to watch
it. MythTV has been increasing in popularity since Isaac Richards
first started working on it in 2002. It is now very usable, and has
several dedicated Linux distributions, as well as several books
written about it.

This mini-conference is intended to bring together both users and
developers of MythTV to discuss topics of interest to both groups.
The programme is looking really good, and it should be a rocking conference. I'll post another blog entry once the programme is officially released.

LCA2015 - Debian Miniconf submitted

Phew, I've submitted a proposal to run a Debian Miniconf at linux.conf.au 2015; here's hoping that it is accepted!



The Debian Miniconf was held in 2008 in Melbourne, so I feel it is well overdue to run it again.

LCA2010: On LinkedIn

Attending LCA2010? Use LinkedIn?



How about showing you're attending by joining the LCA2010 event on LinkedIn!

LCA2010 - Call for Miniconfs are now open!

WELLINGTON, New Zealand - Monday 15th June 2009 - Linux.conf.au announced the opening of its Call for Miniconfs for LCA2010. Miniconfs provide the opportunity to host 1-day mini-conferences on a variety of topics; they run for 2 out of the 5 days of linux.conf.au. The Call for Miniconfs will remain open until 17 July 2009, after which time the best twelve proposals will be selected for inclusion in the programme for LCA2010 and the successful proposers notified.

"Miniconfs are an important part of Linux.conf.au each year, and provide a great opportunity to host an entire day of sessions specific to a topic", says Andrew Ruthven, LCA2010 Director. "We're proud of hosting LCA2010 in Wellington, New Zealand and hope to see a variety of Miniconf Proposals to showcase the technical expertise of world's leading experts in free and open source software. When you gather IT experts together like this, the collective energy can help shape the future direction of emerging projects and developing technologies. That's what LCA2010 is all about – people getting together and making a difference".

IT businesses, government and community groups from around the world will also have the opportunity to showcase their work through presentations, displays and demonstrations at the LCA2010 Open Day, which will be held on Saturday 23rd January 2010. The conference will open its doors to the general public and highlight the best of breed Free and Open Source technology.

LCA2010 is easily affordable for professionals and hobbyists alike, thanks to generous sponsorship by leading proponents of free and open source software, and because the conference - much like the software - is largely organised by volunteers. If your business or organisation would like to take this opportunity to support LCA2010, please visit http://www.lca2010.org.nz/sponsors/why_sponsor.

Registrations to LCA2010 will open to delegates in September 2009.

About linux.conf.au

Linux.conf.au is one of the world's best conferences for free and open source software! The coming Linux.conf.au, LCA2010, will be held at the Wellington Convention Centre in Wellington, New Zealand from Monday 18th January to Saturday 23rd January 2010. LCA2010 is fun, informal and seriously technical, bringing together Free and Open Source developers, users and community champions from around the world. LCA2010 is the second time linux.conf.au has been held in New Zealand, with the first being Dunedin in 2006.

For more information see: http://www.lca2010.org.nz/

About Linux Australia

Linux Australia (http://www.linux.org.au/) is the peak body for Linux User Groups (LUGs) around Australia, and as such represents approximately 5000 Australian Linux users and developers. Linux Australia facilitates the organisation of this international Free Software conference in a different Australasian city each year.

For more information see: http://www.linux.org.au/

Emperor Penguin Sponsors

LCA2010 is proud to acknowledge the support of our Emperor Penguin Sponsor, InternetNZ.

For more information see: http://www.internetnz.org.nz/

Media Enquiries

LCA2010 Organisers

Email: media@lca2010.org.nz

Phone/Fax: +64 (4) 802 0422

A Linux Conference as a Ritual

Sociological Images has an interesting post by Jay Livingston PhD about a tennis final as a ritual [1]. The main point is that you can get a much better view of the match on your TV at home with more comfort and less inconvenience, so what you get for the price of the ticket (and all the effort of getting there) is participating in the event as a spectator.

It seems to me that the same idea applies to community Linux conferences (such as LCA) and some Linux users group meetings. In terms of watching a lecture there are real benefits to downloading it after the conference so that you can pause it and study related web sites or repeat sections that you didn’t understand. Also, wherever you might sit at home to watch a video of a conference lecture, you will be a lot more comfortable than in a university lecture hall. Some people don’t attend conferences and users’ group meetings because they would rather watch a video at home.

Benefits of Attending (Apart from a Ritual)

One of the benefits of attending a lecture is the ability to ask questions. But that seems to mostly apply to the high-status people who ask most of the questions. I’ve previously written about speaking stacks and my observations about who asks questions vs the number that can reasonably be asked [2].

I expect that most delegates ask no questions for the entire conference. I created a SurveyMonkey survey to discover how many questions people ask [3]. I count LCA as a 3 day conference because I am only counting the days where there are presentations that have been directly approved by the papers committee; approving a mini-conf (and thus delegating the ability to approve speeches) is different.

Another benefit of attending is the so-called “hallway track” where people talk to random other people. But that seems to be of most benefit to people who have some combination of high status in the community and good social skills. In the past I’ve attended the “Professional Delegates Networking Session”, which is an event for speakers and people who pay the “Professional” registration fee. Sometimes at such events there has seemed to be a great divide between speakers (who mostly knew each other before the conference) and “Professional Delegates”, which diminishes the value of the event to anyone who couldn’t achieve similar benefits without it.

How to Optimise a Conference as a Ritual

To get involvement of people who take the ritualistic approach, one could emphasise being part of the event. For example, to get people to attend the morning keynote speeches (which are sometimes poorly attended due to partying the night before) one could emphasise that anyone who doesn’t attend the keynote isn’t really attending the conference.

Conference shirts seem to be strongly correlated with the ritual aspect of conferences; the more “corporate” conferences don’t seem to offer branded clothing to delegates. If an item of branded schwag was given out before each keynote then that would increase the attendance by everyone who follows the ritual aspect (as well as everyone who just likes free stuff).

Note that I’m not suggesting that organisers of LCA or other conferences go to the effort of giving everyone schwag before the morning keynote, that would be a lot of work. Just telling people that anyone who misses the keynote isn’t really attending the conference would probably do.

I’ve always wondered why conference organisers want people to attend the keynotes and award prizes to random delegates who attend them. Is a keynote lecture a ritual that is incomplete if the attendance isn’t good enough?

CFP closes in ONE week!

We have only 1 week of the CFP left, and although we're getting some brilliant proposals we need a great deal more in order to build the wonderful conference that we want it to be. The linux.conf.au 2015 papers committee is looking for a broad range of proposals, and will consider submissions on anything from programming and software, to desktop, mobile, gaming, userspace, community, government, space, and education. There is only one rule:

Your proposal must be related to open source.

This year the papers committee is going to be focused on open source in education as well as our usual focus on deep technical content. We will also welcome presentations on:

  • Clouds, datacenters and scalability.
  • Community challenges; legal threats; education and outreach.
  • Documentation for open source projects.
  • HTML5; multimedia codecs.
  • Kernel developments; new architectures.
  • Open hardware; embedded systems; wearable computing.
  • Security; privacy; anonymity.
  • Networking; Software defined networking; bufferbloat; network function virtualization.
  • Software development; programming languages.
  • Sysadmin and automation.
  • ...
So...

What would you like to present at LCA 2015 in Auckland?

Yes? Then to submit your proposal, create an account, and select Submit a Proposal from the menu on the left hand side.

Essays: Improving the Public Policy Cycle Model

I don’t have nearly enough time to blog these days, but I am doing a bunch of writing for university. I decided I would publish a selection of the (hopefully) more interesting essays :) Please note, my academic writing is pretty awful, but hopefully some of the ideas, research and references are useful.

For this essay, I had the most fun in developing my own alternative public policy model at the end of the essay. Would love to hear your thoughts. Enjoy and comments welcome!

Question: Critically assess the accuracy of and relevance to Australian public policy of the Bridgman and Davis policy cycle model.

The public policy cycle developed by Peter Bridgman and Glyn Davis is both relevant to Australian public policy and simultaneously not an accurate representation of developing policy in practice. This essay outlines some of the ways the policy cycle model both assists and distracts from quality policy development in Australia, and provides an alternative model as a thought experiment, based on the author’s policy experience and on reflections on the research conducted around the applicability of Bridgman and Davis’ policy cycle model.

Background

In 1998 Peter Bridgman and Glyn Davis released the first edition of The Australian Policy Handbook, a guide developed to assist public servants to understand and develop sound public policy. The book includes a policy cycle model, developed by Bridgman and Davis, which portrays a number of cyclic logical steps for developing and iteratively improving public policy. This policy model has attracted much analysis, scrutiny, criticism and debate since it was first developed, and it continues to be taught as a useful tool in the kit of any public servant. The fifth and most recent edition of the Handbook was released in 2012, and includes Catherine Althaus, who joined Bridgman and Davis as a co-author on the fourth edition in 2007.

The policy cycle model

The policy cycle model presented in the Handbook is below:

[Figure: the Bridgman and Davis policy cycle model]

The model consists of eight steps in a circle that is meant to encourage an ongoing, cyclic and iterative approach to developing and improving policy over time with the benefit of cumulative inputs and experience. The eight steps of the policy cycle are:

  1. Issue identification – a new issue emerges through some mechanism.

  2. Policy analysis – research and analysis of the policy problem to establish sufficient information to make decisions about the policy.

  3. Policy instrument development – the identification of which instruments of government are appropriate to implement the policy. Could include legislation, programs, regulation, etc.

  4. Consultation (which permeates the entire process) – garnering of external and independent expertise and information to inform the policy development.

  5. Coordination – once a policy position is prepared it needs to be coordinated through the mechanisms and machinations of government. This could include engagement with the financial, Cabinet and parliamentary processes.

  6. Decision – a decision is made by the appropriate person or body, often a Minister or the Cabinet.

  7. Implementation – once approved the policy then needs to be implemented.

  8. Evaluation – an important process to measure, monitor and evaluate the policy implementation.

In the first instance it is worth reflecting on the stages of the model, which imply the entire policy process is centrally managed and coordinated by the policy makers, which is rarely true. The model thus gives very little indication of who is involved, where policies originate, the external factors and pressures, or how policies go from a concept to being acted upon. Even to develop a position, resources must be allocated, and the development of one policy is thus prioritised above the development of some other policy competing for resourcing. Bridgman and Davis establish very little in helping the policy practitioner or entrepreneur to understand this broader picture, which is vital to the development and successful implementation of a policy.

The policy cycle model is relevant to Australian public policy in two key ways: 1) it presents a useful reference model for identifying various potential parts of policy development; and 2) it is instructive for policy entrepreneurs to understand the expectations and approach taken by their peers in the public service, given that the Bridgman and Davis model has been taught to public servants for a number of years. In the first instance the model presents a basic framework that policy makers can use to go about the thinking of and planning for their policy development. In practice, some stages may be skipped, reversed or compressed depending upon the context, or a completely different approach altogether may be taken, but the model gives a starting point in the absence of anything formally imposed.

Bridgman and Davis themselves paint a picture of vast complexity in policy making whilst holding up their model as both an explanatory and prescriptive approach, albeit with some caveats. This is problematic because public policy development almost never follows a cleanly structured process. Many criticisms of the policy cycle model question its accuracy as a descriptive model given it doesn’t map to the experiences of policy makers. This draws into question the relevance of the model as a prescriptive approach as it is too linear and simplistic to represent even a basic policy development process. Dr Cosmo Howard conducted many interviews with senior public servants in Australia and found that the policy cycle model developed by Bridgman and Davis didn’t broadly match the experiences of policy makers. Although they did identify various aspects of the model that did play a part in their policy development work to varying degrees, the model was seen as too linear, too structured, and generally not reflective of the at times quite different approaches from policy to policy (Howard, 2005). The model was however seen as a good starting point to plan and think about individual policy development processes.

Howard also discovered that political engagement changed throughout the process and from policy to policy depending on government priorities, making a consistent approach to policy development quite difficult to articulate. The common need for policy makers to respond to political demands and tight timelines often leads to an inability to follow a structured policy development process resulting in rushed or pre-canned policies that lack due process or public consultation (Howard, 2005). In this way the policy cycle model as presented does not prepare policy-makers in any pragmatic way for the pressures to respond to the realities of policy making in the public service. Colebatch (2005) also criticised the model as having “not much concern to demonstrate that these prescriptions are derived from practice, or that following them will lead to better outcomes”. Fundamentally, Bridgman and Davis don’t present much evidence to support their policy cycle model or to support the notion that implementation of the model will bring about better policy outcomes.

Policy development is often heavily influenced by political players and agendas, which is not captured in the Bridgman and Davis policy cycle model. Some policies are effectively handed over to the public service to develop and implement, but often policies have strong political involvement, with the outcomes of policy development ultimately given to the respective Minister for consideration, who may also take the policy to Cabinet for final ratification. This means even the most evidence based, logical, widely consulted and highly researched policy position can be overturned entirely at the behest of the government of the day (Howard, 2005). The policy cycle model does not capture this process, nor prepare public servants for how to manage it. Arguably, the most important aspects of successful policy entrepreneurship lie outside the policy development cycle entirely, in the mapping and navigation of the treacherous waters of stakeholder and public management, myriad political and other agendas, and other policy areas competing for prioritisation and limited resources.

The changing role of the public in the 21st century is not captured by the policy cycle model. The proliferation of digital information and communications creates new challenges and opportunities for modern policy makers. They must now compete for influence and attention in an ever expanding and contestable market of experts, perspectives and potential policies (Howard, 2005), which is a real challenge for policy makers used to being the single trusted source of knowledge for decision makers. This has moved policy development and influence away from the traditional Machiavellian bureaucratic approach of an internal, specialised, tightly controlled monopoly on advice, towards a more transparent and inclusive though more complex approach to policy making. Although Bridgman and Davis go part of the way to reflecting this post-Machiavellian approach to policy by explicitly including consultation and the role of various external actors in policy making, they still maintain the Machiavellian role of the public servant at the centre of the policy making process.

The model does not clearly articulate the need for public buy-in and communication of the policy throughout the cycle, from development to implementation. There are a number of recent examples of policies that were developed and implemented well by any traditional public service standards, but that the general public saw as complete failures due to a lack of, or a negative, public narrative around the policies. Key examples include the Building the Education Revolution policy and the insulation scheme. In the case of both, the policy implementation largely met the policy goals, and independent analysis showed the policies to be quite successful through quantitative and qualitative assessment. However, both policies were announced very publicly and politically prior to implementation and then had little to no public narrative throughout implementation, leaving the public narrative around both to be determined by media reporting on issues and by the Government Opposition, who were motivated to undermine the policies. The policy cycle model, in focusing on consultation, ignores the necessity of a public engagement and communication strategy throughout the entire process.

The Internet also presents significant opportunities for policy makers to get better policy outcomes through public and transparent policy development. The model does not reflect how to strengthen a policy position in an open environment of competing ideas and expertise (aka, the Internet), though it is arguably one of the greatest opportunities to establish evidence-based, peer reviewed policy positions with a broad range of expertise, experience and public buy-in from experts, stakeholders and those who might be affected by a policy. This establishes a public record for consideration by government. A Minister or the Cabinet has the right to deviate from these publicly developed policy recommendations as our democratically elected representatives, but it increases the accountability and transparency of political decision making regarding policy development, thus improving the likelihood of an evidence-based rather than purely political outcome. History has shown that transparency in decision making tends to improve outcomes, as it aligns the motivations of those involved to pursue what they can defend publicly. Currently the lack of transparency at the political end of policy decision making has led to a number of examples where policy makers are asked to rationalise policy decisions rather than investigate the best possible policy approach (Howard, 2005). Within the public service there is a joke about developing policy-based evidence rather than the generally desired public service approach of developing evidence-based policy.

Although there are clearly issues with any policy cycle model in practice, due to the myriad factors involved and the at times quite complex landscape of influences, by constantly referencing throughout their book the importance of “good process” to “help create better policy” (Bridgman & Davis, 2012), the authors imply their model is a “good process” and subtly encourage a check-box style, formally structured and iterative approach to policy development. The policy cycle in practice becomes impractical and inappropriate for much policy development (Everett, 2003). Essentially, it gives new and inexperienced policy makers a false sense of confidence in a model put forward as descriptive which is at best just a useful point of reference. In a book review of the 5th edition of the Handbook, Kevin Rozzoli supports this by criticising the policy cycle model as being too generic and academic rather than practical, and compares it to the relatively pragmatic policy guide by Eugene Bardach (2012).

Bridgman and Davis do concede that their policy cycle model is not an accurate portrayal of policy practice, calling it “an ideal type from which every reality must curve away” (Bridgman & Davis, 2012). However, they still teach it as a prescriptive and normative model from which policy developers can begin. This unfortunately provides policy developers with an imperfect model that can’t be implemented in practice, and little guidance on how to tell when it is implemented well or how to successfully “curve away”. At best, the model establishes some useful ideas that policy makers should consider, but as a normative model it rapidly loses traction, as every implementation of the model inevitably will “curve away”.

The model also embeds in the minds of public servants some subtle assumptions about policy development that are questionable, such as: the role of the public service as a source of policy; the idea that good policy will be naturally adopted; a simplistic view of implementation when that is arguably the most tricky aspect of policy-making; a top-down approach to policy that doesn’t explicitly engage or value input from administrators, implementers or stakeholders throughout the entire process; and very little assistance (including no framework in the model) for the healthy termination or finalisation of policies. Bridgman and Davis effectively promote the virtues of a centralised policy approach whereby the public service controls the process, inputs and outputs of public policy development. However, this perspective is somewhat self-serving according to Colebatch, as it supports a central agency agenda approach. The model reinforces a perspective that policy makers control the process and consult where necessary, as opposed to being just one part of a necessarily diverse ecosystem where they must engage with experts, implementers, the political agenda, the general public and more to create robust policy positions that might be adopted and successfully implemented. The model and handbook as a whole reinforce the somewhat dated and Machiavellian idea of policy making as a standalone profession, with policy makers the trusted source of policies. Although Bridgman and Davis emphasise that consultation should happen throughout the process, modern policy development requires ongoing input and indeed co-design from independent experts, policy implementers and those affected by the policy. This is implied, but the model offers no pragmatic way to do policy engagement in this way. Without these three perspectives built into any policy proposal, the outcomes are unlikely to be informed, pragmatic, measurable, implementable or easily accepted by the target communities.

The final problem with the Bridgman and Davis public policy development model is that by focusing so completely on the policy development process, not looking at implementation, and not considering the engagement of policy implementers in the policy development process, the policy is unlikely to be pragmatic or take implementation opportunities and issues into account. Basically, the policy cycle model encourages policy makers to focus on a policy itself, iterative and cyclic though it may be, as an outcome, rather than on practical outcomes that support the policy goals. The means are mistaken for the ends. This approach artificially delineates policy development from implementation, and the motivations of those involved in each are not necessarily aligned.

The context of the model in the handbook is also somewhat misleading, which affects the accuracy and relevance of the model. The book oversimplifies the roles of various actors in policy development, placing policy responsibility clearly in the domain of Cabinet, Ministers, the Department of Prime Minister & Cabinet and senior departmental officers (Bridgman and Davis, 2012, Figure 2.1). Arguably, this conflicts with the book’s supposed purpose of supporting even quite junior or inexperienced public servants throughout a government administration to develop policy. It does not match reality in practice, confusing students at best, or at worst establishing misplaced confidence in outcomes derived from policies developed according to the Handbook.

[Image: spheres of government]

An alternative model

Part of the reason the Bridgman and Davis policy cycle model has had such traction is that it was created in the absence of much pragmatic advice for policy makers, and thus filled a need, regardless of how effective it has been in doing so. The authors have, however, not significantly revisited the model since it was developed in 1998. A revision would be quite useful, given that new technologies have established both new mechanisms for public engagement and new public expectations to co-develop, or at least have a say about, the policies that shape people’s lives.

From my own experience, policy entrepreneurship in modern Australia requires a highly pragmatic approach that takes into account the various new technologies, influences, motivations, agendas, competing interests, external factors and policy actors involved. This means researching the landscape in the first instance and then shaping the policy development process accordingly, to maximise the quality and potential adoptability of the policy position developed. As a bit of a thought experiment, below is my attempt at a more accurately descriptive, and thus potentially more useful prescriptive, policy model. I have included the main aspects involved in policy development, along with a number of additional factors that might be useful to policy makers and policy entrepreneurs looking to successfully develop and implement new and iterative policies.

[Image: policy model]

It is also important to identify the inherent motivations of the various actors involved in the pursuit, development and implementation of a policy. In this way it is possible to align motivations with policy goals, or vice versa, to get the best and most sustainable policy outcomes. Where these motivations conflict or leave gaps in achieving the policy goals, it is unlikely a policy will be successfully implemented or sustainable in the medium to long term. This process of proactively identifying motivations and effectively dealing with them is missing from the policy cycle model.

Conclusion

The Bridgman and Davis policy cycle model is demonstrably inaccurate, and yet is held up by its authors as a reasonable descriptive and prescriptive normative approach to policy development. Evidence is lacking for both the model’s accuracy and any tangible benefits of applying it to a policy development process, and research into policy development across the public service continually deviates from and often directly contradicts the model. Although Bridgman and Davis concede that policy development in practice will deviate from their model, they offer very little useful guidance as to how to implement the model or deviate from it effectively. The model is also inaccurate in that it oversimplifies policy development, leaving policy practitioners to learn for themselves about external factors, the various policy actors involved throughout the process, the changing nature of public and political expectations, and the myriad other realities that affect modern policy development and implementation in the Australian public service.

Regardless of the policy cycle model’s inaccuracy, it has existed and been taught for nearly sixteen years. It has shaped the perspectives and processes of countless public servants, and thus remains relevant in the Australian public service in so far as it has been used as a normative model or starting point for countless policy developments and provides a common understanding and lexicon for engaging with these policy makers.

The model is therefore both inaccurate and relevant to policy entrepreneurs in the Australian public service today. I believe a review and rewrite of the model would greatly improve the advice and guidance available for policy makers and policy entrepreneurs within the Australian public service and beyond.

References

(Please note that, as is usually the case with academic references, most of these are not freely available to the public. Sorry. It is an ongoing bugbear of mine and of many others.)

Althaus, C., Bridgman, P. and Davis, G. 2012, The Australian Policy Handbook, 5th ed., Allen and Unwin, Sydney.

Bardach, E. 2012, A Practical Guide for Policy Analysis: The Eightfold Path to More Effective Problem Solving, 4th ed., Chatham House Publishers, New York.

Bridgman, P. and Davis, G. 2004, The Australian Policy Handbook, 3rd ed., Allen and Unwin, Sydney.

Everett, S. 2003, ‘The Policy Cycle: Democratic Process or Rational Paradigm Revisited?’, The Australian Journal of Public Administration, 62(2), pp. 65–70.

Howard, C. 2005, ‘The Policy Cycle: A Model of Post-Machiavellian Policy Making?’, The Australian Journal of Public Administration, 64(3), pp. 3–13.

Rozzoli, K. 2013, book review of The Australian Policy Handbook: Fifth Edition, Australasian Parliamentary Review, 28(1), Autumn 2013.

Taxing Inferior Products

I recently had a medical appointment cancelled due to a “computer crash”. Apparently the reception computer crashed and lost all bookings for a day, and they just made new bookings for whoever called – anyone who had a previous booking simply missed out. I’ll probably never know whether they really had a computer problem or just used one as an excuse for a mistake. But even if it wasn’t a real computer problem, the fact that computers are so unreliable overall that a “computer crash” is an acceptable excuse indicates a problem with the industry.

The problem of unreliable computers is a cost to everyone; it’s effectively a tax on all business and social interactions that involve computers. While I spent the extra money on a server with ECC RAM for my home file storage, I have no control over the computers purchased by all the companies I deal with – which are mostly the cheapest available. I also have no option to buy a laptop with ECC RAM, because companies like Lenovo have decided not to manufacture them.

It seems to me that the easiest way of increasing the overall reliability of computers would be to use ECC RAM everywhere. In the early ’90s all IBM-compatible PCs had parity RAM: for each byte there was one extra bit, which would report 100% of single-bit errors and 50% of errors involving random memory corruption. Then manufacturers decided to save a tiny amount of money by using 8/9 as many memory chips in desktop and laptop systems (dropping the parity bit) – and probably to make more money selling servers with ECC RAM. If the government were to impose a 20% tax on computers that lack ECC RAM, manufacturers would immediately start using it everywhere, and the end result would be no overall price increase, as it’s cheaper to design desktop systems and servers around the same motherboards – apparently some desktop systems already have motherboard support for ECC RAM, but don’t ship with suitable RAM or advertise their support for it.
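To make the parity arithmetic concrete, here is a small sketch (Python, purely illustrative) of why a single parity bit catches every single-bit error but only half of all random corruption:

    def parity(byte):
        # The stored parity bit is the number of 1 bits, modulo 2.
        return bin(byte).count("1") % 2

    stored = 0b10110100
    one_bit_error = stored ^ 0b00000100   # one bit flipped
    two_bit_error = stored ^ 0b00000110   # two bits flipped

    # An odd number of flipped bits changes the parity, so it is always detected:
    assert parity(one_bit_error) != parity(stored)
    # An even number of flips leaves the parity unchanged, so it slips through;
    # random corruption has a 50/50 chance of flipping an even number of bits:
    assert parity(two_bit_error) == parity(stored)

ECC goes a step further than parity: with a few extra check bits per word it can correct single-bit errors and detect double-bit ones, rather than merely reporting that something went wrong.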

This principle applies to many other products too. One obvious example is cars: a car manufacturer can sell cheap cars with few safety features, and then when occupants of those cars and other road users are injured, the government ends up paying for medical expenses and disability pensions. If there were a tax for every car with a poor crash-test rating, and a tax for every car company that performs badly in real-world use, car companies would have some incentive to manufacture safer vehicles.

Now there are situations where design considerations preclude such features. For example, implementing ECC RAM in mobile phones might involve technical difficulties (particularly for 32-bit phones), and making some trucks and farm equipment safer might be difficult. But when a company produces multiple similar products that differ significantly in quality, such as PCs with and without ECC RAM or cars with and without air-bags, there would be no difficulty in making all of them higher quality.

I don’t think that we will have a government that implements such ideas any time soon; it seems that our government is more interested in giving money to corporations than taxing them. But one thing that could be done is to adopt a policy of only giving money to companies that produce high-quality products. If a car company is to be given hundreds of millions of dollars for not closing a factory, then that factory should produce cars with all possible safety features. If a computer company is going to be given significant tax breaks for doing R&D, then it should be developing products that won’t crash.

[life] Day 160: Chattahoochee Nature Centre

Zoe woke up early this morning, but mid-breakfast decided to go back to bed for a bit. I don't think she actually went back to sleep. Not sure what that was all about.

After breakfast, we drove over to the Chattahoochee Nature Centre to check it out.

Alongside the Chattahoochee River, the place was beautiful. The main thing we saw was the butterfly garden and a butterfly enclosure in a large screened tent with some flowers. Everyone was given a sugar-water drenched sponge on a stick, and the kids had a great time trying to entice butterflies onto the sticks.

There were also a few birds of prey in cages we took a look at, and then we walked around the lake, stopping to have lunch, before we headed back. It was a lovely morning out.

When we got home, the girls played in the wading pool in the backyard and generally around the house until dinner time. I used the time to try and pack as best I could.

I found a place that did Chicago-style deep dish pizza, so I ordered a pizza from them and Chris picked it up on the way home. It was delicious. I think I need to figure out how to make deep dish pizza myself.

Tomorrow morning, we head off to Austin. It's been a lovely time in Decatur, Georgia.

[life] Day 159: Georgia Aquarium and a walk in the woods

I had a broken night's sleep for some reason. Zoe woke up at one point and ended up in bed with me. She slept quite late, and I ended up needing to wake her up so that we could get away on time. Even as it was, we left 30 minutes later than planned.

We spent the morning at the Georgia Aquarium. It's a pretty impressive facility. I've never been that impressed by the Monterey Bay Aquarium, but this one is certainly good. Zoe had a good time checking out everything, but it was a bit of a challenge herding four kids in the same direction at various times.

All the busy days may have been taking a bit of a toll on her, because she had a couple of minor meltdowns at the aquarium today, but recovered quickly and seemed okay otherwise.

After lunch and a little more looking around, we headed back home so James could take his nap. Briana ducked out and ran a few errands, and I held the fort down. The girls mostly disbanded and played on their own, and I finally got on top of all the photos I've been taking.

Clara had a piano lesson at 4:30pm, so we dropped her off for that, just around the corner, and went for a walk in the woods nearby. It was a beautiful little loop that reminded me a little of the woods in Zurich. Unfortunately Lucy got stung by something while we were there.

I managed to get Zoe down for bed pretty early again tonight, and we don't have any tight time constraints in the morning, so I should be able to let her sleep in for as long as she likes tomorrow morning.

July 08, 2014

Doing Password Complexity Wrong

I just made an account on yet another web service. On the suggestion of my password manager, I attempted to use the password “W:9[$X*F”. It was rejected because “Password must contain at least one non-alphabet character, one lowercase letter, one uppercase letter”. OK, how about “Passw0rd”? Yep, that’s fine.
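For illustration, here is roughly what such a complexity check amounts to (a Python sketch, not the actual service’s code):

    import re

    def complexity_ok(password):
        # "At least one non-alphabet character, one lowercase letter,
        # one uppercase letter"; nothing said about length or predictability.
        return bool(re.search(r"[a-z]", password)
                    and re.search(r"[A-Z]", password)
                    and re.search(r"[^A-Za-z]", password))

    print(complexity_ok("W:9[$X*F"))  # False: no lowercase letter, so rejected
    print(complexity_ok("Passw0rd"))  # True: a dictionary word, gladly accepted

An eight-character password drawn at random from the ~94 printable ASCII characters has about 52 bits of entropy; “Passw0rd” is a dictionary word with two predictable substitutions, which is among the first things any cracking wordlist tries.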

Anyone want to guess which of those two passwords is going to fall victim to a brute-force attack first? Go on, don’t be shy, take a wild shot in the dark!

July 07, 2014

Starcraft: The Zerg Campaign

It appears that Andrew Lee has finished the Zerg Campaign in Starcraft. Amusement follows.

Overmind: Greetings underlings. I’m so cool I shit ice-cream. And I’ve got a new toy - this super important mega-death weapon thing that’s currently breeding in a Chrysalis. It’s a big big secret what’s inside but when she pops out, she’ll lay the smackdown on everything! I’m not telling anyone what’s inside the chrysalis so it’ll be a total surprise when she awakens. Also, since hot babe Kerrigan just died in Chapter 1, it can’t possibly be her. And if it is her (which it’s not), it’ll be such a cunning plot twist that you’ll wet yourself.



It’s like totally important that this Chrysalis thing is protected. So, amongst all my legions of creatures, I am going to choose… my most young and inexperienced Cerebrate to protect it.



All the other Cerebrates: Erm… boss… you sure about that?



Overmind: Of course I’m sure! I’m the Overmind! Okay… so little young cutie teacher’s pet newbie Cerebrate, there’s a few things you need to know before you begin your life as protector of the chrysalis… are you ready?



Newbie Cerebrate: Yes boss.



Overmind: Ok. This… is a drone.



All the other Cerebrates: Oh for fuck’s sake.



Overmind: And when you have enough minerals, you can build a Pwning Spool.



Newbie Cerebrate: Minerals… Pwning Spool… Got it.



Cerebrate Daggoth: Things sure are dull now that Overmind is cuddling Newbie over there. Hey Zasz, can I ask you something?



Cerebrate Zasz: Sure Daggoth.



Daggoth: So, we’re like masters of evolution right?



Zasz: Yeah.



Daggoth: So we can evolve from little larvae things to become anything at all right?



Zasz: Yeah.



Daggoth: So, we could look like… I dunno… Jessica Alba… or the Asian chick from Battlestar Galactica… but instead we look like pulsating grey pieces of shit.



Zasz: Sucks dunnit. Anyway, that’s why Overmind nabbed Kerrigan. He’s been totally getting into Japanese tentacle porn…



Daggoth: …



Overmind: And then you right click and build a Hydralisk Den. That lets you build Hydralisks. That’s why it’s called a Hydralisk Den.



Newbie: Hey boss. I’m grateful that you’re taking the time to explain all this to me. But it’s been like 15 levels now and I think I’m ready to do a bit more. How about we push the storyline a bit?



Overmind: Ok. Go kill Terrans.



Newbie: (Kills Terrans)



Overmind: Go kill Protoss.



Newbie: (Kills Protoss)



Overmind: Go kill renegade Zerg.



Newbie: (Kills renegade Zerg) Script writers took a break on this chapter huh?



Overmind: yeah. Go kill more stuff till the Chrysalis thing hatches.



Newbie: Okay.



Raynor: Hey, I’ve been having these wacky dreams… as if Kerrigan were calling out to me. But I know that’s impossible because she died in Chapter 1.



Zerg Kerrigan: Hi everyone! I’m back. It was me in the Chrysalis. How cool and unexpected was that!



Everyone with IQ over 7: Shit me a brick! We all just wet ourselves!

Raynor: Oh my God. Sarah! What have they done to you?



Zerg Kerrigan: I’m a Zerg now. (wins Most Obvious Statement award)



Raynor: Well, I’ve still got a hard-on for you. So…are we going to bonk or are you going to kill me?



Zerg Kerrigan: Well, I want to kill you. But some strange lingering emotion inside me compels me to let you go. Some emotion stronger than any Zerg power over me. I… I don’t know what it could be. Leave Jimmy. Leave now.. before it’s too late. Must… control… (but how about we get together in the expansion set)

Raynor: Ok. Bye.

Zeratul: I’m an invisible Dark Templar Protoss. No one can see me and I’ve got this cool Jedi Lightsabre! I’m hunting Cerebrates… Charrge! (kills Zasz)



Zasz: Arrrggh (dies)

Overmind: Shame about Zasz. But when Zeratul shoved his lightsabre up Zasz’s ass, I connected with Zeratul’s mind and now I know where the Protoss Homeworld is! Suck that, Zeratul.



Zeratul: Bugger. (runs away)

Overmind: Cool, let’s go invade the Protoss Homeworld. The most important thing there is the Khaydarin Crystals, the source of all power!!! Since this is the most important thing ever after protecting the Chrysalis, I’m going to pick the Newbie Cerebrate again to take control of all my forces. Newbie, go to the Protoss Homeworld, steal the crystals and then kill all the Protoss there.



Newbie: Hey big boss, can I ask you a few questions?



Overmind: Of course my little cupcake:



Newbie: Well firstly, in Chapter 1, Mengsk and Kerrigan turned on that one Psi-emitter Rubik’s Cube and “Zerg from across the galaxy” were lured to it. So how is it that the Protoss Homeworld, filled with 10 billion psychic Protoss, is completely invisible to us?



Overmind: Err…



Newbie: And then there’s this thing about the Khaydarin Crystals. We only just found out about the Protoss Homeworld five minutes ago when Zeratul shoved his sabre up Zasz’s ass and you looked into his mind… so how is it that we know all about the Crystals and exactly where they are, so much so that we’ve set up a big friggin’ neon-lit sign over them saying “BRING DRONE HERE”? But the Protoss, who have lived on the planet since forever AND who have a special upgrade called “Khaydarin Amulet”, don’t have a clue about the Crystals?



Overmind: Shut the fuck up. Now, go steal the crystals. Just look for the beacon.



Newbie: Yes boss. (steals the Crystals)



Overmind: Now do the Crystal Thing… and I can plant my fat ass down on the Protoss Homeworld. YEAH!

Xcode Distributed Builds Performance

[Sorry if you get this post twice—let’s say that our internal builds of RapidWeaver 4.0 are still a little buggy, and I needed to re-post this ;)]

Xcode, Apple’s IDE for Mac OS X, has this neat ability to perform distributed compilations across multiple computers. The goal, of course, is to cut down on the build time. If you’re sitting at a desktop on a local network and have a Mac or two to spare, distributed builds obviously make a lot of sense: there’s a lot of untapped power that could be harnessed to speed up your build. However, there’s another scenario where distributed builds can help, and that’s if you work mainly off a laptop and occasionally join a network that has a few other Macs around. When your laptop’s offline, you can perform a distributed build with just your laptop; when your laptop’s connected to a few other Macs, they can join in the build and speed it up.

There’s one problem with this idea, though, which is that distributed builds add overhead. I had a strong suspicion that a distributed build using only the local machine was significantly slower than a simple individual build. Since it’s all talk unless you have benchmarks, lo and behold, a few benchmarks later, I proved my suspicion right.

  • Individual build: 4:50.6 (first run), 4:51.7 (second run)
  • Shared network build with local machine only: 6:16.3 (first run), 6:16.3 (second run)

This was a realistic benchmark: it was a full build of RapidWeaver including all its sub-project dependencies and core plugins. The host machine is a 2GHz MacBook with 2GB of RAM. The build process includes a typical number of non-compilation phases, such as running a shell script or two (which takes a few seconds), copying files to the final application bundle, etc. So, for a typical Mac desktop application project like RapidWeaver, turning on shared network builds without any extra hosts incurs a pretty hefty speed penalty: ~30% in my case. Ouch. You don’t want to leave shared network builds on when your laptop disconnects from the network. To add to the punishment, Xcode will recompile everything from scratch if you switch from individual builds to distributed builds (and vice versa), so flipping the switch when you disconnect from a network or reconnect to it is going to require a full rebuild.

Of course, there’s no point to using distributed builds if there’s only one machine participating. So, what happens when we add a 2.4GHz 20” Aluminium Intel iMac with 2GB of RAM, via Gigabit Ethernet? Unfortunately, not much:

  • Individual build: 4:50.6 (first run), 4:51.7 (second run)
  • Shared network build with local machine + 2.4GHz iMac: 4:46.6 (first run), 4:46.6 (second run)

You shave an entire four seconds off the build time by getting a 2.4GHz iMac to help out a 2GHz MacBook. A ~1% speed increase isn’t very close to the ~40% build time reduction that you’re probably hoping for. Sure, a 2.4GHz iMac is not exactly a build farm, but you’d hope for something a little better than 1% from doubling the horsepower, no? Amdahl’s Law strikes again: parallelism is hard, news at 11.
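The shape of the problem is the classic one: only the parallelisable part of the build can be sped up, while the serial remainder (shell scripts, copying files, linking) plus the distribution overhead eats the rest. A back-of-the-envelope sketch, where the parallel fraction p is a guess rather than a measurement:

    def distributed_build_time(base, p, machines):
        # Amdahl's Law: the parallelisable fraction p is divided across
        # machines; the serial remainder (1 - p) is not helped at all.
        return base * ((1 - p) + p / machines)

    base = 4 * 60 + 51  # the 4:51 individual build, in seconds
    for p in (0.25, 0.50, 0.75):
        print(p, round(distributed_build_time(base, p, 2)))
    # -> 255 s, 218 s and 182 s respectively

Even with a generous p = 0.75 and zero distribution overhead, two machines would only get the build down to about 3:02; the measured 4:46 suggests that the genuinely distributable fraction of this build, after overhead, is tiny.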

I also timed Xcode’s dedicated network builds (which are a little different from its shared network builds), but buggered if I know where I put the results for that. I vaguely remember that dedicated network builds were very similar to shared network builds with my two hosts, but my memory’s hazy.

So, lesson #1: there’s no point using distributed builds unless there’s usually at least one machine available to help out, otherwise your builds are just going to slow down. Lesson #2: you need to add a significant amount more CPUs to save a significant amount of time with distributed builds. A single 2.4GHz iMac doesn’t appear to help much. I’m guessing that adding a quad-core or eight-core Mac Pro to the build will help. Maybe 10 × 2GHz Intel Mac minis will help, but I’d run some benchmarks on that setup before buying a cute Mac mini build farm — perhaps the overhead of distributing the build to ten other machines is going to nullify any timing advantage you’d get from throwing another 20GHz of processors into the mix.

WWDC Craziness

  • Meet new people (✓),
  • Catch up with fellow Aussies I haven’t seen in years (✓),
  • Go to parties (✓),
  • Behave appropriately at said parties (✓),
  • Use the phrase “Inconceivable!” inappropriately (✓),
  • Work on inspiring new code (✓),
  • Keep up with Interblag news (✗),
  • Keep up with RSS feeds (✗),
  • Keep up with personal email (✗),
  • Keep up with work email (✗),
  • Install Leopard beta (✓),
  • Port code to work on Leopard (✗),
  • Successfully avoid Apple Store, Virgin, Banana Republic et al in downtown San Francisco (✓),
  • Keep family and friends at home updated (✓),
  • Mention the words “Erlang”, “Haskell” and “higher-order messaging” to puny humans… er, fellow Objective-C programmers (✓),
  • Write up HoPL III report (✗),
  • Find and beat whoever wrote NSTokenField with a large dildo (✗),
  • Get food poisoning again (✗),
  • Sleep (✗),
  • Actually attend sessions at the conference (✗).

Goodbye, World

Yeah. So the other day, I walk into my local AppleCentre to buy myself a nice new STM bag. Of course, since I’m there pretty much every third hour of the day…

  • Me: “Can I have myself an André discount at all?”
  • Manager: “Hmmm… well, normally I would, but I can’t do that today. How about I throw in a free copy of World of Warcraft? Yes, that sounds like an excellent idea…”

Nooooooooooooooooooooo! Tom, I officially hate you. Do you know how long I’ve been trying to avoid playing this frigtard game? Goodbye sunshine, it’s been nice knowing you. If I don’t reply to any emails from now on, I’m either dead, or I’m playing this bloody MMORPG that I’ve been avoiding so successfully up until now. Bye all!

Die evil Cisco VPN client, die

If you have a VPN at your workplace, chances are good that it’s one of those Cisco 3000 VPN Concentrator things, which seem to be an industry standard for VPN equipment. Chances are also good that you’ve been forced to use the evil, evil proprietary Cisco VPN client, which has been known to be a source of angsta majora for Mac OS X and Linux users. (And if you think Windows users have it good, think again: the Cisco VPN client completely hosed a friend’s 64-bit Windows XP system to the point where it wouldn’t even boot.)

Enter vpnc, an open-source VPN client that works just fine on both Mac OS X and Linux. Linux people, I assume that you know what you’re doing — all you should need is a TUN/TAP kernel driver and you’re good to go. Mac OS X folks, you’ll need to be comfortable with the UNIX terminal to use it; unfortunately no GUI has been written for it yet. If you’re a Terminal geek, here’s a small guide for you:

  • Download and install a tun/tap driver for Mac OS X.
  • Download and install libgcrypt. If you have MacPorts (née DarwinPorts) installed, simply do “port install libgcrypt”. Otherwise, grab it from the libgcrypt FTP site and install it manually.
  • You’ll need to check out the latest version of the vpnc code from its Subversion repository: “svn checkout http://svn.unix-ag.uni-kl.de/vpnc/”. The latest official release (0.3.3, as of this writing) will not compile properly on Mac OS X, which is why you need the code from the Subversion trunk.
  • After doing the standard “make && make install” mantra, run your Cisco VPN .pcf profile through the pcf2vpnc tool and save the resulting .vpnc file in /etc/vpnc. (There’s a sketch of what one looks like just after this list.)
  • ./vpnc YourProfile.vpnc, and that should be it. While you’re debugging it, the --nodetach and --debug 1 options may be useful.
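For reference, the .vpnc profile that pcf2vpnc produces is just a small text file of key/value lines. A minimal one looks something like this (every value below is a placeholder, not a working example):

    IPSec gateway vpn.example.com
    IPSec ID MyVPNGroup
    IPSec secret TheGroupPassword
    Xauth username myusername
    Xauth password mypassword

If you leave the Xauth password line out, vpnc should prompt for it at connect time instead, which is probably wiser than keeping it on disk in plain text.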

Muchas gracias to Mario Maceratini at Rising Sun Pictures for hunting down vpnc for me.

Australians: 5GB mobile broadband for $39/month

This is a Public Service Announcement for Australians: if you’re looking for mobile broadband access for your laptop (and what geek isn’t?), Vodafone are doing a pretty spectacular deal at the moment for ‘net access via their 3G/HSDPA network.

For $39/month, you get 5GB of data; no time limits; no speed caps; and fallback from 3G to GPRS in regional areas where HSDPA isn’t available yet. It’s a fantastic deal for people who live in metropolitan areas and work on the road a lot.

The main catch is that it’s a 24-month contract, which is a somewhat long time to be locked in to a plan. However, I have a feeling that no other mobile Internet offering is going to be competitive with 5GB for $39/month within the next two years. (Hell, $39/month for decent mobile Internet access is competitive with even some fixed-line ADSL2 providers.) One other small catch is that you can’t use multiple devices on the plan: it’s tied to the single SIM card that you purchase with the plan. So, all you cool kids with 3G/GPRS-capable mobile phones, you can’t include that device as part of the bundle (looks sadly at iPhone). Other than that, it’s really a pretty bloody good deal.

To compare this with other plans:

  • Vodafone themselves offer a craptacular 100MB for $29/month, which is barely enough to just check email these days. (And that doesn’t include the modem, which is another $200). A mere 1GB of data is $59/month, or $99 per month with no contract!
  • Telstra are even worse (this is my surprised face): $59 for 200MB. I’ll say that again: $59 per month for 200MB. 1GB is $89.
  • Bigpond (who are different from Telstra1) offer vaguely competitive plans if you’re OK with a 10-hour-per-month time limit: that goes for $35/month. (This translates to around 30 minutes per business day, which may be OK if you just hop online occasionally to check email.) The $35 plan is the only timed plan, though: other than that, it’s $55 for 200MB (puke), or $85 for 1GB.
  • I can’t even find out whether Optus have mobile broadband plans available. Comments?
  • Virgin Mobile Broadband used to be pretty spectacular at $10/month for 1GB, and is still somewhat OK at $80/month for the same 1GB if it’s bundled with a phone plan. Considering that Vodafone’s deal is $39/month for 5GB, though, you can pair it with a phone plan of your choice and have 5GB instead of 1GB.
  • Three (or 3, or whatever) just launched the next best alternative with their new X-Series plans. Their Gold plan is $30/month for 1GB, and their Platinum plan is $40/month for 2GB. Interestingly, the X-Series plans give you a ton of free Skype minutes (2000 minutes on the 1GB plan and 4000 minutes on the 2GB plan), so if you’re a really heavy Skype person and chat for the full 4000 minutes (about 66 hours) per month, the Three deal may be better than Vodafone’s.

The 3G modem they use is a Huawei E220, which looks like it’s the same modem used by Virgin and Three. There appears to be Linux support for it, and I can confirm that it works fine on Mac OS X 10.5 (Leopard) thanks to an alternative driver.

So, if you’re interested, visit the Vodafone 5GB webpage. You can sign up through the Internet on the spot. However, you can also sign up over the phone, and if you do, you have a 30-day “cooling off” period where you can opt out of your contract if you’re not happy with the service. (Stupidly enough, you can’t get the 30-day cooling off period if you pop into a Vodafone store, because phone service has different conditions to face-to-face service. Ja, whatever man.) Hurry though: the deal expires on December 31, 2007. Get it as a late Christmas present for yourself, I guess!

1 Telstra Mobility Broadband is a completely separate service from Bigpond Broadband, and Telstra and Bigpond are separate entities. I found this out the hard way, when I was on a 10-hour-per-month CDMA/EVDO plan with Telstra, and couldn’t upgrade to the 10-hour-per-month 3G plan with Bigpond, because Telstra and Bigpond are separate things. Ahuh. (I couldn’t upgrade to a 10h plan on Telstra, because Telstra doesn’t even offer hourly plans anymore.) Way to go for rewarding all your mobile Internet early adopters that braved EVDO, you frigtards.

VMware Fusion Beta 3 vs Parallels

Parallels Desktop for Mac was the first kid on the block to support virtualisation of other PC operating systems on Mac OS X. However, in the past fortnight, I’ve found out that:

  1. Parallels allocates just a tad too many unnecessary Quartz windows1, which causes the Mac OS X WindowServer to start going bonkers on larger monitors. I’ve personally seen the right half of a TextEdit window disappear, and Safari not being able to create a new window while Parallels is running, even with no VM running. (I’ve started a discussion about this on the Parallels forums.)
  2. Parallels does evil things with your Windows XP Boot Camp partition, such as replacing your ntoskrnl.exe and hal.dll files and rewriting the crucial boot.ini file. This causes some rather hard-to-diagnose problems with some low-level software, such as MacDrive, a fine product that’s pretty much essential for my Boot Camp use. Personally, I’d rather not use a virtualisation program that decides to screw around with my operating system kernel, hardware abstraction layer, and boot settings, thank you very much.

VMware Fusion does none of these dumbass things, and provides the same, simple drag’n’drop support and shared folders to share files between Windows XP and Mac OS X. I concur with stuffonfire about VMware Fusion Beta 3: even in beta, it’s a lot better than Parallels so far. Far better host operating system performance, better network support, hard disk snapshots (albeit not with Boot Camp), and DirectX 8.1 support to boot. (A good friend o’ mine reckons that 3D Studio runs faster in VMware Fusion on his Core 2 Duo MacBook Pro than it does natively on his dedicated Athlon 64 Windows machine. Nice.) The only major feature missing from VMware Fusion is Coherence, and I can live without that. It’s very cool, but hardly necessary.

Oh yeah, and since VMware Fusion is in beta right now, it’s free as well. Go get it.

1 Strictly speaking, allocating a ton of Quartz windows is Qt’s fault, not Parallels’s fault. Google Earth has the same problem. However, I don’t really care if it’s Qt’s fault, considering that it simply means running Parallels at all (even with no VM open) renders my machine unstable.

Choice Isn't Always a Good Thing

You know that Which Operating System Are You quiz?

Well, they’re gonna have to expand it to include all six versions of Windows Vista, whenever that decides to be unleashed unto the world. Hello, six versions? With the starter edition only being able to use 256MB of RAM and run three applications at once? Even eWeek says that “you would be better off running Windows 98”. You know what, instead of choosing between Vista Starter, Vista Home Basic, Vista Home Premium, Vista Business, Vista Enterprise or Vista Ultimate, how about I just run Mac OS X or Linux instead, you stupid tossers?

Jesus, the excellent lads over at Microsoft Research (who produce some truly world-class work) must be just cringing when they hear their big brother company do totally insane stuff like this.

Vimacs Downloads are Back

Whoops, those of you who had problems downloading Vimacs will find that the download links work properly now. (What the hell, people besides me actually use Vimacs?)

Video iPod Can Do Better Than 640x480

One of the features of the new video iPod (the “Generation 5.5” one) is that it handles videos bigger than 640×480 just fine. This shouldn’t be surprising for geeks who own the older video iPod that plays 320×240 video, since the alpha geeks will know that the older video iPods could play some videos bigger than 320×240 just fine.

A nice side-effect of this is that if you are ripping DVDs to MPEG-4, you can very likely rip them at native resolution: I had zero problems playing Season 2 of Battlestar Galactica on a new video iPod, and it had a native resolution of 704×400. (Note: This is with a standard MPEG-4 video profile, not H.264 baseline low-complexity.) This is pretty cool, since you can now hook up a little video iPod direct to a big-ass TV and know that video resolution is no longer a differentiating factor between DVDs and MPEG-4 video. Now if only the iPod had component video connectors available…

California USA 2007 Tour

Where’s André in June?

If you’ll be in town on any of those dates or going to HoPL or WWDC, drop me an email!

As an aside, HoPL III looks incredible: Waldemar Celes (Lua), Joe Armstrong (Erlang), Bjarne Stroustrup (C++), David Ungar (Self), and the awesome foursome from Haskell: Paul Hudak, John Hughes, Simon Peyton Jones and Phil Wadler. (Not to mention William Cook’s great paper on AppleScript, which I’ve blogged about before.) Soooo looking forward to it.

UCS-2 vs UTF-16

I always used to get confused between UCS-2 and UTF-16. Which one’s the fixed-width encoding and which one’s the variable-length encoding that supports surrogate pairs?

Then, I learnt this simple little mnemonic: you know that UTF-8 is variable-length encoded1. UTF = variable-length. Therefore UTF-16 is variable-length encoded, and therefore UCS-2 is fixed-length encoded. (Just don’t extend this mnemonic to UTF-32.)
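If you’d rather see the difference than memorise it, a couple of lines of Python (illustrative; any language with Unicode strings would do) make it obvious:

    # U+0041 and U+20AC sit inside the Basic Multilingual Plane; U+1D11E does not.
    for ch in ("A", "\u20ac", "\U0001d11e"):
        print(hex(ord(ch)), len(ch.encode("utf-16-be")))
    # -> 0x41: 2 bytes, 0x20ac: 2 bytes, 0x1d11e: 4 bytes (a surrogate pair)

UCS-2 is stuck at a fixed two bytes per character, so it simply cannot represent U+1D11E at all; UTF-16 spends a surrogate pair on it.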

Just thought I’d pass that trick on.

1 I’m assuming you know what UTF-8 is, anyway. If you don’t, and you’re a programmer, you should probably learn it sometime…

Transitions

I’m not too sure that I can go much farther

I’m really not sure things are even getting better

I’m so tired of the me that has to disagree

I’m so tired of the me that’s in control

I woke up to see the…

Sun shining all around me

How could it shine down on me?

You think that it would notice that I can’t take any more

Had to ask myself,

… what’s it really for?

Everything I tried to do, it didn’t matter

Now I might be better off just rolling over

‘cos you know I try so hard but couldn’t change a thing

And it hurts so much I might as well let go

I can’t really take the…

Sun shining all around me

Why would it shine down on me?

You think that it would notice that I no longer believe

Can’t help telling myself

… it don’t mean a thing.

I woke up to see the…

Sun shining all around me

How could it shine down on me?

Sun shining all its beauty

Why would it shine down on me?

You think that it would notice that I can’t take any more

Just had to ask myself,

… what’s it really for?

—Yoko Kanno and Emily Curtis, What’s It For

Trust in love to save, baby. Bring on 2007!

Today is a Good Day

First, fuel costs are down:



Second, I actually finished an entire tube of Blistex before I lost the stupid thing. I believe this is the second time in my life that this has happened:

Third:



Fourth, my personal inbox looks like this right now:



Zero messages, baby. Yeah! (Well, OK, my work inboxes still have a ton of messages… but zero personal mails left is really pretty nice.)

Plus, this is being published from Auckland airport, on the way to San Francisco. Not a bad day at all.

The Problem with Threads

If you haven’t had much experience with the wonderful world of multithreading and don’t yet believe that threads are evil1, Edward A. Lee has an excellent essay named “The Problem with Threads”, which challenges you to solve a simple problem: write a thread-safe Observer design pattern in Java. Good luck. (Non-Java users who scoff at Java will often fare even worse, since Java is one of the few languages with some measure of in-built concurrency control primitives—even if those primitives still suck.)
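To see why the challenge is nastier than it looks, here is the naive attempt, sketched in Python rather than the paper’s Java (the pitfall is identical):

    import threading

    class Subject:
        def __init__(self):
            self._lock = threading.Lock()
            self._observers = []

        def subscribe(self, observer):
            with self._lock:
                self._observers.append(observer)

        def notify(self, event):
            with self._lock:              # looks safe...
                for observer in self._observers:
                    observer(event)       # ...but runs arbitrary code under the lock

    s = Subject()
    # An observer that reacts to an event by subscribing another observer:
    s.subscribe(lambda event: s.subscribe(print))
    s.notify("hello")  # deadlocks right here on the non-reentrant lock

The standard fixes each trade one problem for another: snapshot the observer list before iterating and you may notify an observer that was just removed; use a reentrant lock and the list can mutate underneath the iteration. That is precisely Lee’s point: even the textbook pattern has no obviously correct threaded version.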

His paper’s one of the best introductory essays I’ve read about the problems with shared state concurrency. (I call it an essay since it really reads a lot more like an essay than a research paper. If you’re afraid of academia and its usual jargon and formal style, don’t be: this paper’s an easy read.) For those who aren’t afraid of a bit of formal theory and maths, he presents a simple, convincing explanation of why multithreading is an inherently complex problem, using the good ol’ explanation of computational interleavings of sets of states.

His essay covers far more than just the problem of inherent complexity, however: Lee then discusses how bad threading actually is in practice, along with some software engineering improvements such as OpenMP, Tony Hoare’s idea of Communicating Sequential Processes2, Software Transactional Memory, and Actor-style languages such as Erlang. Most interestingly, he discusses why programming languages aimed at concurrency, such as Erlang, won’t succeed in the main marketplace.

Of course, how can you refuse to read a paper that has quotes such as these?

  • “… a folk definition of insanity is to do the same thing over and over again and to expect the results to be different. By this definition, we in fact require that programmers of multithreaded systems be insane. Were they sane, they could not understand their programs.”
  • “I conjecture that most multi-threaded general-purpose applications are, in fact, so full of concurrency bugs that as multi-core architectures become commonplace, these bugs will begin to show up as system failures. This scenario is bleak for computer vendors: their next generation of machines will become widely known as the ones on which many programs crash.”
  • “Syntactically, threads are either a minor extension to these languages (as in Java) or just an external library. Semantically, of course, they thoroughly disrupt the essential determinism of the languages. Regrettably, programmers seem to be more guided by syntax than semantics.”
  • “… non-trivial multi-threaded programs are incomprehensible to humans. It is true that the programming model can be improved through the use of design patterns, better granularity of atomicity (e.g. transactions), improved languages, and formal methods. However, these techniques merely chip away at the unnecessarily enormous non-determinism of the threading model. The model remains intrinsically intractable.” (Does that “intractable” word remind you of anyone else?)
  • “… adherents to… [a programming] language are viewed as traitors if they succumb to the use of another language. Language wars are religious wars, and few of these religions are polytheistic.”

If you’re a programmer and aren’t convinced yet that shared-state concurrency is evil, please, read the paper. Please? Think of the future. Think of your children.

1 Of course, any non-trivial exposure to multithreading automatically implies that you understand they are evil, so the latter part of that expression is somewhat superfluous.

2 Yep, that Tony Hoare—you know, the guy who invented Quicksort?

The Long Road to RapidWeaver 4

Two years ago, I had a wonderful job working on a truly excellent piece of software named cineSync. It had the somewhat simple but cheery job of playing back movies in sync across different computers, letting people write notes about particular movie frames and scribbling drawings on them. (As you can imagine, many of the drawings that we produced when testing cineSync weren’t really fit for public consumption.) While it sounds like a simple idea, oh boy did it make some people’s lives a lot easier and a lot less stressful. People used to do crazy things like fly from city to city just to be the same room with another guy for 30 minutes to talk about a video that they were producing; sometimes they’d be flying two or three times per week just to do this. Now, they just fire up cineSync instead and get stuff done in 30 minutes, instead of 30 minutes and an extra eight hours of travelling. cineSync made the time, cost and stress savings probably an order of magnitude or two better. As a result, I have immense pride and joy in saying that it’s being used on virtually every single Hollywood movie out there today (yep, even Iron Man). So, hell of a cool project to work on? Tick ✓.

Plus, it was practically a dream coding job when it came to programming languages and technologies. My day job consisted of programming with Mac OS X’s Cocoa, the most elegant framework I’ve ever had the pleasure of using, and working with one of the best C++ cross-platform code bases I’ve seen. I also did extensive hacking in Erlang for the server code, so I got paid to play with one of my favourite functional programming languages, which some people spend their entire life wishing for. And I got schooled in just so much stuff: wielding C++ right, designing network protocols, learning about software process, business practices… so, geek nirvana? Tick ✓.

The ticks go on: great workplace ✓; fantastic people to work with ✓; being privy to the latest movie gossip because we were co-located with one of Australia’s premier visual effects companies ✓; sane working hours ✓; being located in Surry Hills and sampling Crown St for lunch nearly every day ✓; having the luxury of working at home and at cafés far too often ✓. So, since it was all going so well, I decided that it was obviously time to make my life a lot harder, so I resigned, set up my own little software consulting company, and started working on Mac shareware full-time.

Outside of the day job on cineSync, I was doing some coding on a cute little program for building websites named RapidWeaver. RapidWeaver’s kinda like Dreamweaver, but a lot simpler (and hopefully just as powerful), and it’s not stupidly priced. Or, it’s kinda like iWeb, but a lot more powerful, with hopefully most of the simplicity. I first encountered RapidWeaver as a normal customer and paid my $40 for it since I thought it was a great little program, but after writing a little plugin for it, I took on some coding tasks.

And you know what? The code base sucked. The process sucked. Every task I had to do was a chore. When I started, there wasn’t even a revision control system in place: developers would commit their changes by emailing entire source code files or zip archives to each other. There was no formal bug tracker. Not a day went by when I shook my fist, lo, with great anger, and thunder and lightning appeared. RapidWeaver’s code base had evolved since version 1.0 from nearly a decade before, written by multiple contractors with nobody being an overall custodian of the code, and it showed. I saw methods that were over a thousand lines long, multithreaded bugs that would make Baby Jesus cry, method names prefixed with Java-style global package namespacing (yes, we have method names called com_rwrp_currentlySelectedPage), block nesting that got so bad that I once counted thirteen tabs before the actual line of code started, dozens of lines of commented-out code, classes that had more than a hundred and twenty instance variables, etc, etc. Definitely no tick ✗.

But the code—just like PHP—didn’t matter, because the product just plain rocked. (Hey, I did pay $40 for it, which surprised me quite a lot because I moved to the Mac from the Linux world, and sneered off most things at the time that cost more than $0.) Despite being a tangled maze of twisty paths, the code worked. I was determined to make the product rock more. After meeting the RapidWeaver folks at WWDC 2007, I decided to take the plunge and see how it’d go full-time. So, we worked, and we worked hard. RapidWeaver 3.5 was released two years ago, in June 2006, followed by 3.5.1. 3.6 followed in May 2007, followed by a slew of upgrades: 3.6.1, 3.6.2, 3.6.3… all the way up to 3.6.7. Slowly but surely, the product improved. On the 3rd of August 2007, we created the branch for RapidWeaver 3.7, which we didn’t realise yet was going to be such a major release that it eventually became 4.0.

And over time, it slowly dawned on me just how many users we had. A product that I initially thought had a few thousand users was much closer to about 100,000 users. I realised I was working on something that was going to affect a lot of people, so when we decided to call it version 4.0, I was a little nervous. I stared at the code base and it stared back at me; was it really possible to ship a major new revision of a product, add features to it, and maintain my sanity?

I decided in my naïvety to refactor a huge bunch of things. I held conference calls with other developers to talk about what needed to change in our plugin API, and how I was going to redo half of the internals so they wouldn’t suck anymore. Heads nodded; I was happy. After about two weeks of being pleased with myself and ripping up many of our central classes, reality set in as I realised that I was very far behind on implementing all the new features, because those two weeks were spent on nothing but refactoring. After doing time estimation on all the tasks we had planned for 4.0 and realising that we were within about one day of the target date, I realised we were completely screwed, because nobody sane does time estimation for software without multiplying the total estimate by about 1.5-2x. 4.0 was going to take twice as long as we thought it would, and since the feature list was not fixed, it was going to take even longer than that.

So, the refactoring work was dropped, and we concentrated on adding the new required features, and porting the bugfixes from the 3.6 versions to 4.0. So, now we ended up with half-refactored code, which is arguably just as bad as no refactored code. All the best-laid plans that I had to clean up the code base went south, as we soldiered on towards feature completion for 4.0, because we simply didn’t have the time. I ended up working literally up until the last hour to get 4.0 to code completion state, and made some executive decisions to pull some features that were just too unstable in their current state. Quick Look support was pulled an hour and a half before the release as we kept finding and fixing bugs with it that crashed RapidWeaver while saving a document, which was a sure-fire way to lose customers. Ultimately, pulling Quick Look was the correct decision. (Don’t worry guys, it’ll be back in 4.0.1, without any of that crashing-on-save shenanigans.)

So, last Thursday, it became reality: RapidWeaver 4.0 shipped out the door. While I was fighting against the code, Dan, Aron, Nik and Ben were revamping the website, which is now absolutely bloody gorgeous, all the while handling the litany of support requests and being their usual easygoing sociable selves on the Realmac forums. I was rather nervous about the release: did we, and our brave beta testers, catch all the show-stopper bugs? The good news is that it seems to be mostly OK so far, although no software is ever perfect, so there’s no doubt we’ll be releasing 4.0.1 soon (if only to re-add Quick Look support).



A day after the release, it slowly dawned on me that the code for 4.0 was basically my baby. Sure, I’d worked on RapidWeaver 3.5 and 3.6 and was the lead coder for that, but the 3.5 and 3.6 goals were much more modest than 4.0. We certainly had other developers work on 4.0 (kudos to Kevin and Josh), but if I had a bad coding day, the code basically didn’t move. So all the blood, sweat and tears that went into making 4.0 was more-or-less my pride and my responsibility. (Code-wise, at least.)



If there’s a point to this story, I guess that’d be it: take pride and responsibility in what you do, and love your work. The 4.0 code base still sucks, sitting there sniggering at me in its half-refactored state, but we’ve finally suffered the consequences of its legacy design for long enough that we have no choice but to give it a makeover with a vengeance for the next major release. Sooner or later, everyone pays the bad code debt.

So, it’s going to be a lot more hard work to 4.1, as 4.1 becomes the release that we all really wanted 4.0 to be. But I wouldn’t trade this job for pretty much anything else in this world right now, because it’s a great product loved by a lot of customers, and making RapidWeaver better isn’t just a job anymore, it’s a need. We love this program, and we wanna make it so good that you’ll just have to buy the thing if you own a Mac. One day, I’m sure I’ll move on from RapidWeaver to other hopefully great things, but right now, I can’t imagine doing anything else. We’ve come a long way from RapidWeaver 3.5 in the past two years, and I look forward to the long road ahead for RapidWeaver 5. Tick ✓.

Two new mixes

I’ve been pretty dormant in my music for the past few years, but I have been working on two new mixes in my sparse spare time: Tes Lyric, a weird blend of electronica, classical and rock, and Stage Superior, a progressive house mix. They’re up on my music page now; enjoy!

svk and the Psychological Effect of Fast Commits

svk—a distributed Subversion client by Chia Liang Kao and company—is now an essential part of my daily workflow. I’ve been using it almost exclusively for the past year on the main projects that I work with, and it’s fantastic being able to code when you’re on the road and do offline commits, syncing back to the main tree when you’re back online. Users of other distributed revision control systems do, of course, get these benefits, but svk’s ability to work with existing Subversion repositories is the killer reason to use it. (I’m aware that Bazaar has some Subversion integration now, but it’s still considered alpha, whereas svk has been very solid for a long time now.)

The ability to do local checkins with a distributed revision control client has a nice side-effect: commits are fast. They typically take around two seconds with svk. A checkin from a non-distributed revision control client such as Subversion requires a round-trip to the server. This isn’t too bad on a LAN, but even for a small commit, it can take more than 10 or 15 seconds to a server on the Internet. The key point is that these fast commits have a psychological effect: having short commit times encourages you to commit very regularly. I’ve found that since I’ve switched to svk, not only can I commit offline, but I commit much more often: sometimes half a dozen times inside of 10 minutes. (svk’s other cool feature of dropping files from the commit by deleting them from the commit message also helps a lot here.) Regular commits are always better than irregular commits, because either (1) you’re committing small patches that are easily reversible, and/or (2) you’re working very prolifically. Both of these are a win!

So, if you’re still using Subversion, try svk out just to get the benefits of this and its other nifty features. The svk documentation is quite sparse, but there are some excellent tutorials that are floating around the ‘net.

Steven Seagal

My dad’s been on a Steven Seagal action movie rampage, recently. How many friggin’ movies has this guy made, you think? A half-dozen? A dozen? Nope, thirty-two. And they’re all exactly the damn same, although some of them have hilarious titles (such as Today You Die, Half Past Dead and Out for a Kill) with equally hilarious taglines (“Whoever set him up is definitely going down”).

Please add Steven Seagal to the list of heroes who I want to be when I grow up. Life just can’t be that bad when you keep starring in action movies with hot Asian chicks in half of them.

Static and Dynamic Typing: Fight!

It’s rare that I find a good, balanced article on the (dis)advantages of static vs dynamic typing, mostly because people on each side are too religious (or perhaps just stubborn) to see the benefits of the other. Stevey’s blog rant comparing static vs dynamic typing is one of the most balanced ones that I’ve seen, even if I think half his other blog posts are on crack.

I lean pretty far toward the static typing end of the spectrum, but I also think that dynamic typing not only has its uses, but is absolutely required in some applications. One of my favourite programming languages is Objective-C, which seems to be quite unique in its approach: the runtime system is dynamically typed, but you get a reasonable amount of static checking at compile time by using type annotations on variables. (Survey question: do you know of any Objective-C programmers who simply use id types everywhere, rather than the more specific types such as NSWindow* and NSArray*? Yeah, I didn’t think so.) Note that I think Objective-C could do with a more powerful type system: some sort of parameterised type system similar in syntax to C++ templates/Java generics/C# generics would be really useful just for the purposes of compile-time checking, even though it’s all dynamically typed at runtime.

One common thread in both Stevey’s rant and what I’ve personally experienced is that dynamic typing is the way to go when your program really needs to be extensible: if you have any sort of plugin architecture or long-lived servers with network protocols that evolve (hello Erlang), it’s really a lot more productive to use a dynamic typing system. However, I get annoyed every time I do some coding in Python or Erlang: it seems that 50% of the errors I make are type errors. While I certainly don’t believe that static type systems guarantee that “if it compiles, it works”, it’s foolish to say that they don’t help catch a large class of errors (especially if your type system’s as powerful as Haskell’s or Ocaml’s), and it’s also foolish to say that unit tests are a replacement for a type system.

So, the question I want to ask is: why are programming languages today so polarised into the static and dynamic camps? The only languages I know of that strive to accommodate the benefits of both are Objective-C, Perl (though I’d say that writing Perl without use strict is an exercise in pain, since its only three types are scalars, arrays and hashes), and (gasp) Visual Basic. Programming languages and programming language research should have looked at integrating static and dynamic typing a long time ago. C’mon guys, it’s obvious to me that both approaches have good things to offer, and I ain’t that smart. I think a big reason they haven’t is largely religious: people on both sides are too blinded to even attempt to see each other’s point of view. How many academic papers have there been that address this question?

I hope that in five years, we’ll at least have one mainstream programming language that we can write production desktop and server applications in that offers the benefits of both static and dynamic typing. (Somebody shoot me, now I’m actually agreeing with Erik Meijer.) Perhaps a good start is for the current generation of programmers to actually admit that both approaches have their merits, rather than simply getting defensive whenever one system is critiqued. It was proved a long time ago that dynamic typing is simply staged type inference and can be subsumed as part of a good-enough static type system: point to static typing. However, dynamic typing is also essential for distributed programming and extensibility: point to dynamic typing. Get over it, type zealots.
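For what it’s worth, the hybrid being asked for here looks a lot like optional type annotations checked by an external tool. A sketch using Python’s (much later) annotation syntax, as an illustration rather than something that existed when this was written:

    def mean(xs: list) -> float:
        # Dynamically typed at runtime (the annotations change nothing here),
        # but a static checker such as mypy flags misuses without running the code.
        return sum(xs) / len(xs)

    mean([1.0, 2.0, 3.0])  # fine both statically and dynamically
    mean("oops")           # checker: incompatible argument type; runtime: TypeError

The program still runs exactly as before if you never invoke the checker; the annotations just recover the large class of errors that static typing catches, which is more or less the best-of-both-worlds compromise the post wishes for.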

P.S. Google Fight reckons that dynamic typing beats static typing. C’mon Haskell and C++ guys, unite! You’re on the same side! Down with those Pythonistas and Rubymongers! And, uhh, down with Smalltalk and LISP too, even though they rule! (Now I’m just confusing myself.)

Starcraft: The Terran Campaign

One o’ me good mates sent me an email recently with a rather chortle-worthy summary of the Human (Terran) plot of Starcraft I. Without further ado:

Raynor: Oh shit, nasty alien things with big teeth. Let’s put our faith in the supremely experienced commander who will save us (after I teach him how to build a command center, barracks and use the ‘repair’ button).

Zergling: Grrrrowl! Yummm!

Raynor: (shoots gun from his cool looking vulture bike)

Zergling: Gah! (dies)

Duke: Raynor. You’re a bad bad man. Why did you kill that cute little Zergling?

Raynor: It tried to eat me.

Duke: Well, tough. You should’ve asked me permission first because I’m the big boss of the Confederation and no one can take a shit unless I say so. Off to prison with you!

Raynor: Help! Get me outa here!

Mengsk: I’ll help! (opens the door) Hi. I’ve got this terrorist label but actually I’m a nice guy. The Confederation are the REAL baddies. Just to prove it, let me introduce you to my hot babe assistant. Remember, only good guys have hot babe assistants.

Kerrigan: Hi!

Raynor: …

Kerrigan: Wha! You perv!

Raynor: Huh what? How did you know that I was thinking about having hot monkey sex with you up against the side of my bike whilst wearing a ballerina’s tutu?

Mengsk: She’s a telepath.

Kerrigan: Well, yes. And you’re staring at my tits.

Duke: Erm… fellas? Sorry to bother you… my ship sorta crashed in the middle of all these Zerg and they want to eat me.

Raynor: Suck ass!

Mengsk: I’ll save you.

Raynor: WHAT!?!?

Mengsk: Well… Raynor will save you.

Raynor: WHAT!?!?! Oh… ok. (saves Duke)

Duke: Mengsk? YOU! I hate you with the white hot intensity of a thousand suns. I teach children to eat the liver of your dogs. You are a blight on all humanity and the scourge of the universe. The Confederation will never rest until we destroy you entirely!

Mengsk: Join me!

Duke: Ok.

Mengsk: Cool… Duke, what do you say you and I go kill all your former buddies in the Confederation?

Duke: Ok.

Raynor: Uh… what about the Zerg?

Mengsk: No no. Much more important to kill Confederates.

Raynor / Kerrigan: Uh… why?

Mengsk: Just do as you’re told.

Tassadar: Hey guys. We Protoss honour, respect and revere all sentient life. Therefore, we’re going to incinerate your planet.

Mengsk: Oh shit. That means we don’t get to kill Confederates. Kerrigan, go kill the Protoss so that I can use this Psi-transmitter gadget to lure the Zerg into killing Confederates.

Kerrigan / Raynor: Erm… is this making any sense?

Kerrigan: I must do as I’m told because I’m a hot babe assistant. Ok, off I go to kill Protoss and lure Zerg and plant this Psi thingy.

Zerg Overmind: Who’s the hot chick in the catsuit? She’d look even cooler with green blood. I’m going to infest her… this will be fun. (infests Kerrigan) (The Zerg then go on to kill all the Confederates.)

Mengsk: YAY! All the Confederates died!

Raynor: You suck, Mengsk. You too, Duke. I’m leaving and coming back in a later chapter filled with vengeance to whomp your sorry asses. I’ll probably fall in love with Kerrigan since I’m the obvious hero and she’s the obvious heroine, but she’s infested with Zerg blood now… but I’ll use love to reach into the depths of her heart and rescue her and turn her back to the light side.

Andrew, please finish the Zerg and Protoss campaigns soon; we’d love to hear more.

MacDev 2009

I have the small honour of being a speaker at the maiden conference of MacDev 2009, a grass-roots, independently run, Mac developer conference in the UK that’s being held in April next year. MacDev looks like it’ll be the European equivalent of C4, which was absolutely the best Mac developer conference I’ve ever been to; I’d say it’s the Mac equivalent of Linux.conf.au. If you’re a Mac developer at all, come along; it should be great fun, and it’ll give your liver a nice workout. Plus, how can you ignore such a sexy list of speakers?

Update: My talk abstract is now available…

One reason for Mac OS X’s success is Objective-C, combining the dynamism of a scripting language with the performance of a compiled language. However, how does Objective-C work its magic and what principles is it based upon? In this session, we explore the inner workings of the Objective-C runtime, and see how a little knowledge about programming language foundations—such as lambda calculus and type theory—can go a long way to tackling difficult topics in Cocoa such as error handling and concurrency. We’ll cover a broad range of areas such as garbage collection, blocks, and data structure design, with a focus on practical tips and techniques that can immediately improve your own code’s quality and maintainability.
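
To give a taste of the runtime magic the abstract alludes to (a minimal sketch under my own assumptions, not material from the talk): every Objective-C message send bottoms out in the runtime’s central dispatch function, objc_msgSend, which looks up the method implementation by selector at runtime. That single indirection is where most of the language’s dynamism lives.

    #import <Foundation/Foundation.h>
    #import <objc/message.h>

    int main(void) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        id obj = [NSDate date];

        // The compiler turns [obj description] into (roughly) a call
        // through objc_msgSend, which resolves -description on the
        // receiver's class at runtime:
        NSString *desc =
            ((NSString *(*)(id, SEL))objc_msgSend)(obj, @selector(description));

        NSLog(@"%@", desc);
        [pool drain];
        return 0;
    }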

I am a great believer in getting the foundations right. Just as bad code design or architecture often leads to a ton of bugs that simply wouldn’t exist in well-designed code, building a complex system on unsteady foundations can produce a lot of unnecessary pain. What are the foundations of your favourite programming language?

It’s 2008 and we’re still seeing buffer overflows in C.

Speaking at DevWorld 2008

For Mac developers in Australia, I’ll be speaking at the inaugural conference of DevWorld 2008, which will be held on September 29-30 this year in Melbourne. You can check out the full list of sessions; I’ll be giving the very last talk on that page: The Business of Development.

Coding is just one part of what makes a great product, but there’s always so much else to do and learn. So, what can you do to help ship a great product—besides coding—if you’re primarily a developer? In this talk, learn about important commercial and business issues that you, as a coder, can help to define and shape in your company, such as licensing and registration keys, adopting new technologies, software updates, handling support, your website, and crash reports.

Note that DevWorld 2008 is unfortunately only open to staff and students at an Australian university (“AUC member university”, to be exact), so unless you’re a student or staff member right now at one of those unis, you’ll have to miss out on this incredibly exciting opportunity to hear me talk at you for an hour (snort). I hear the story behind this is that if this year’s DevWorld is successful, next year’s will be a more standard open conference. Anyway, hopefully I’ll catch some of you there in September!

Solid State Society

The traditional hard disk that’s likely to be in your computer right now is made out of a few magnetic circular platters, with a head attached to an actuator arm above the platter that reads and writes the data to it. The head’s such a microscopic distance away from the platter that it’s equivalent to a Boeing 747 flying at 600 miles per hour about six inches off the ground. So, when you next have a hard disk crash (and that’s when, not if), be amazed that the pilot in the 747 flying six inches off the ground didn’t crash earlier.

Enter solid-state drives (SSDs). Unlike hard disks, SSDs contain no moving parts, and are made out of solid-state memory instead. This has two big advantages: first, SSDs don’t crash (although this is a small lie—more on that later). Second, since SSDs are made out of memory, it’s much faster to get to a particular piece of data on the disk. In other words, they have random access times that are orders of magnitude faster than their magnetic cousins: a hard disk needs to wait for the platter to rotate around before the head can read the data off the drive, which costs several milliseconds per random read, while an SSD simply fetches the data directly from a memory column & row in a fraction of a millisecond. In modern desktop computers, random access I/O is often the main performance bottleneck, so if you can speed that up an order of magnitude, you could potentially make things a lot faster.

Unfortunately, while SSDs are orders of magnitude faster than a hard disk for random access, they’re also an order of magnitude more expensive. That was until May this year, when this thing appeared on the scene:

(Image courtesy of itechnews.net.)

That boring-looking black box is a 120GB Super Talent Masterdrive MX. As far as SSDs go, the Masterdrive MX is not particularly remarkable for its performance: it has a sustained write speed of just 40MB per second, which is a lot lower than many other SSDs and typical hard disks.

However, it’s a lot cheaper than most other SSDs: the 120GB drive is USD$699. That’s not exactly cheap (you could easily get a whopping two terabytes of data if you spent that money on hard disks), but it’s cheap enough that people with more dollars than sense might just go buy it… people like me, for instance. I’ve had that SSD sitting in my lovely 17” MacBook Pro for the past two months, as an experiment with solid-state drives. So, how’d it go?

I’ll spare you the benchmarks: if you’re interested in the raw numbers, there are a number of decent Masterdrive MX reviews floating around the Web now. I was more interested in the subjective performance of the drive. Does it feel faster for everyday tasks? Is it simply a better experience?

The overall answer is: yes, it’s better, but it’s not so much better that I’d buy the SSD again if I could go back in time. With a hard disk, things occasionally get slow. I’m sure I’m not the only one to witness the Spinning Beachball of Death while waiting 5-10 seconds for the hard disk to finally complete the I/O operations that programs are waiting on. With a hard disk, launching a program from the dock would sometimes take 20-30 seconds under very heavy I/O load, such as when Spotlight’s indexing the disk and Xcode’s compiling something. With the SSD, those delays just went away: I can’t even remember a time where I saw the evil Beachball due to system I/O load.

The most notable difference was in boot time. A lot of people love how Mac OS X is pretty fast to boot (and I agree with them), but when you go to log in, it’s a very different story. If, like me, you’ve got about ten applications and helper programs that launch when you log in, it can take literally minutes before Mac OS X becomes responsive. I clocked my MacBook Pro at taking just over a minute to log in with my current setup on a hard disk (which launches a mere half a dozen programs); the SSD took about 5 seconds. 5… 4… 3… 2… 1… done. What is thy bidding, my master? I wish I’d made a video to demonstrate the difference, because it’s insanely faster when you see it. 10x faster login speed is nothing to sneeze at.

However, aside from boot-up time, normal day-to-day operation really was about the same. Sure, it was nice that applications launched faster and it booted so fast that you don’t need to make a coffee anymore when logging in, but those were the only major performance differences that I saw. Mac OS X and other modern operating systems cache data so aggressively that I guess most of the data you read and write usually hits the cache anyway. The lower sustained write performance didn’t end up being a problem at all: the only time I noticed it was when I was copying large torrented files around on the same drive, but that wasn’t slow enough for me to get annoyed. The one benchmark that I really cared about—compiling—turned out to take exactly as long on the SSD as the hard disk. I thought that random I/O write speed might be a bottleneck for gcc; it turns out that’s not true at all. (I’ll also point out that I was using Xcode to drive most of the compilation benchmarks, which is one of the fastest build systems I’ve seen that uses gcc; no spastic libtool/automake/autoconf/autogoat insanity here.) Sorry to disappoint the other coders out there.

Aside from performance, the total silence of the SSD was a nice bonus, but it’s not something that you can’t live without once you’ve experienced it. In most environments, there’s enough background noise that you usually don’t hear the quiet hard disk hum anyway, so the lack of noise from the SSD doesn’t really matter. It was, however, very cool knowing that you could shake your laptop while it was on without fear of causing damage to your data. I’m usually pretty careful about moving my laptop around while it’s on, but with an SSD in there, I was quite happy to pick up the machine with one hand and wave it around in the air (as much as you can with a 17” MacBook Pro, anyway).

So, with all the nice small advantages of the SSD, you may be wondering why it’s no longer in my MacBook Pro. Here are some reviews of the disk on newegg.com that may give you a hint:

(Screenshots of newegg.com customer reviews.)

It turns out those reviewers were right. Two months after I bought it, the Masterdrive MX completely died, which seemed like a pretty super talent for an SSD. The Mac didn’t even recognise the disk; couldn’t partition it; couldn’t format it. So much for SSDs not crashing, eh?

While SSDs don’t crash in the traditional manner that a hard disk may, there are a whole bunch of other reasons why one might die. RAM’s known to go wonky; there’s no reason why that can’t happen to solid-state memory too. Maybe the SATA controller on the disk died. No matter what the cause, you have the same problem as a traditional hard disk crash: unless you have backups, you’re f*cked. Plus, since I was on holiday down at Mount Hotham, my last backup was two weeks old, made just before I left for holiday. All my Mass Effect saved games went kaboom, and I’d just finished the damn game. André not very happy, grrr.

So, what’s the PowerPoint summary?

  • The Super Talent Masterdrive MX would be a great buy if it didn’t friggin’ crash and burn your data with scary reliability. Even if you’re a super storage geek, avoid this drive until they have the reliability problems sorted out.
  • The Powerbook Guy on Market St in San Francisco is awesome. They were the guys who installed the SSD in my MacBook Pro, and were extremely fast (two-hour turnaround time), professional, and had reasonable prices. (I would’ve done it myself, but I’d rather keep the warranty on my A$5000 computer, thanks.) Plus, they sold me the coolest German screwdriver ever for $6. (“This one screwdriver handles every single screw in a MacBook Pro”. Sold!)
  • The MacCentric service centre in Chatswood in Sydney is equally awesome. When the SSD died, they quoted me the most reasonable price I had ever seen for a hard disk swap in a MacBook Pro (have you seen how many screws that thing has?), and also had a two-hour turnaround time. Yeah, I know, decent Mac service in Australia! Woooooah.
  • Back up.
  • SSDs are great. I think they’ll complement rather than replace hard disks in the near future, and possibly replace them entirely if the price tumbles down enough. Next-generation SSDs are going to completely change the storage and filesystem games as they do away with the traditional stupid block-based I/O crap and become directly addressable like RAM is today. Just don’t believe the hype about SSDs not crashing.

I, for one, welcome the solid state society. Bring on the future!

Six Months in San Francisco

I feel like there have been four stages to my life. The first stage was being a youngling at primary school: I don’t remember much from there except that I fantasised about handball being an Olympic sport. The second stage was the PC demoscene, where I grew interested in many things that I love today about computing: art, music, and my first experience with a community and culture that you could love and immerse yourself in. The third stage was my twenties: an introduction to university, Linux, coding, the Mac, Haskell, research conferences, industry conferences, the working life, and balancing it all with healthy doses of relaxation, food and the beautiful world that Sydney had to offer. The fourth stage was tearing myself away from that fairly sheltered life and my emotional base, and moving to San Francisco.

I’ve been here for six months. It’s felt like two years. It has been a truly wonderful experience: making new friends, learning a new culture that’s significantly yet subtly different, and doing it all without my family nearby, who’ve been my anchor and support for the past three decades. Part of the motivation was proving to myself that I could make it on my own: proving myself worthy in the eyes of my peers, being social enough to make genuine friends here who I cared about and who cared about me, living on my own, and simply paying the rent. Part of the motivation was to shake things up from a cruisy life in Sydney and experience new things. I’m glad to report that the experiment’s going pretty well so far.

San Francisco is a city of immense contrast. For every stupid hipster who thinks that owning a Prius absolves them of their environmental debt to society, there are remarkable individuals who understand and challenge the daunting realism of politics, lobbying, energy, transformity and limits to growth. For every poser upstart get-rich-quick guy chasing after VC funding for Facebook apps, there are the quiet anonymous developers at Apple, Google, and startups you’ve never heard of who work on all the amazing technologies that the entire world takes for granted today. The Tenderloin, so unpleasant to walk through, has some of the very best restaurants and bars that the city has to offer. The nouveau shiny high-rises of South Beach contrast with the destitute run-down feel of western SoMa, only a few blocks away.

It’s a make-or-break city: rents are insanely high despite the rent control laws, and there’s no lower-middle-class population here, because either you’re flying high or you’re not flying at all. It’s natural selection in action: either you keep up with the pack and continue being successful, or you fall and get left behind. And so, in contrast to the relaxed lifestyle of Sydney, San Francisco is full of ambition. While it lacks the non-stop pace of New York or the late-night industry of Detroit and Chicago, the people here want to change the world, and they have the innovation, the smarts and the determination to do so.

The tech industry here is simply amazing. Despite being here for half a year, I’m still floored when I go to a party and every person I meet there ends up being a Web designer, or a coder, or a sysadmin, or a DBA, or a network engineer, or a manager of a bunch of coders, or a VC funding a tech company, or a lawyer or accountant or marketing or PR person working for a tech company, or a level designer or artist working for a games company. Even the girls. It boggles me. It’s like the entire Bay Area simply exists to build software and iPhones and tech solutions. I was truly daunted in the first few months to find out that everyone around me was, well, just like me. A few months ago, I was at my favourite little tea shop in San Francisco decompressing and minding my own business, when three people sat down next to me and started talking about VGA BIOS exploits. (Turns out that they work for VMware.) I mean, seriously?

I wouldn’t say that I’m totally acclimated to the Bay Area yet, and perhaps I never will be. Visiting Australia just a month ago reminded me just how different the two cities are in their lifestyles. People are always doing something in San Francisco: there are so many interesting people here that you feel like you need to divide your time between groups, let alone having time to yourself. Even the serious introverts here are out on most schoolnights. The people here are always switched on; even at a party, there’s an air of networking going on and the feeling of opportunities to be seized. You almost always end up talking shop at any event, because people here are defined by what they do: one of the very first questions you’re usually asked is “Where do you work?” or “What do you do for a living?”. In Sydney, asking that question so soon would just be a little bit weird. You usually save that for far later in the conversation, when you’re running out of things to say to the pretty girl you’re trying to hook up with. (And don’t even get me started about the American dating scene.)

And so, for all the wonderful parks, bars, tacos, restaurants, pirate shops and museums of the city; the incredible beauty and varied terrain of the North Bay; the charm and chilled suburbia of North Berkeley in the East; and the innovation and serenity of Silicon Valley just south, I still miss Sydney and the culture I grew up with for twenty years. I don’t mean that in a yearning way or mean to imply that San Francisco is somehow inadequate, because it rocks: I’m having a wonderful time experiencing new things, and it was the right decision to move here. This is where I should be at this stage in my life. Sydney will always be where my heart is, but right now, San Francisco is home, and it’s as fantastic as I hoped it would be.

Self-Reflection

R. A. Salvatore, Road of the Patriarch, p. 280:

The point of self-reflection is, foremost, to clarify and to find honesty. Self-reflection is the way to throw self-lies out and face the truth—however painful it might be to admit that you were wrong. We seek consistency in ourselves, and so when we are faced with inconsistency, we struggle to deny.

Denial has no place in self-reflection, and so it is incumbent upon a person to admit his errors, to embrace them and to move along in a more positive direction.

We can fool ourselves for all sorts of reasons. Mostly for the sake of our ego, of course, but sometimes, I now understand, because we are afraid.

For sometimes we are afraid to hope, because hope breeds expectation, and expectation can lead to disappointment.

… Reality is a curious thing. Truth is not as solid and universal as any of us would like it to be; selfishness guides perception, and perception invites justification. The physical image in the mirror, if not pleasing, can be altered by the mere brush of fingers through hair.

And so it is true that we can manipulate our own reality. We can persuade, even deceive. We can make others view us in dishonest ways. We can hide selfishness with charity, make a craving for acceptance into magnanimity, and amplify our smile to coerce a hesitant lover.

… a more difficult alteration than the physical is the image that appears in the glass of introspection, the pureness or rot of the heart and the soul.

For many, sadly, this is not an issue, for the illusion of their lives becomes self-delusion, a masquerade that revels in the applause and sees in a pittance to charity a stain remover for the soul.

… There are those who cannot see the stains on their souls. Some lack the capacity to look in the glass of introspection, perhaps, and others alter reality without and within.

It is, then, the outward misery of Artemis Entreri that has long offered me hope. He doesn’t lack passion; he hides from it. He becomes an instrument, a weapon, because otherwise he must be human. He knows the glass all too well, I see clearly now, and he cannot talk himself around the obvious stain. His justifications for his actions ring hollow—to him most of all.

Only there, in that place, is the road of redemption, for any of us. Only in facing honestly that image in the glass can we change the reality of who we are. Only in seeing the scars and the stains and the rot can we begin to heal.

For Rebecca, who holds that glass of introspection higher than anyone else I’ve ever known. Thank you for everything.