Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

August 28, 2014

Raspberry Pi versus Cray X-MP supercomputer

It's often said that the phone in our pocket is more powerful than the supercomputers of the past. Let's see if that is true.

The Raspberry Pi contains a Broadcom BCM2835 system-on-chip. The CPU within that system is a single ARM11 core (an ARMv6 design) clocked at 700MHz. Stacked on top of the system-on-chip is 512MB of RAM -- this arrangement is called "package-on-package". Besides the RPi, the BCM2835 SoC was also used in some phones; these days they are the cheapest of smartphones.

The Whetstone benchmark was widely used in the 1980s to measure the performance of supercomputers. It gives a result in millions of floating point operations per second (MFLOPS). Running Whetstone on the Raspberry Pi gives 380 MFLOPS. See Appendix 1 for the details.

Let's see which supercomputer comes closest to 380 MFLOPS. That would be the Cray X-MP/EA/164 supercomputer from 1988. That is a classic supercomputer: the X-MP was a 1982 revision of the 1975 Cray 1. So good was Steve Chen's revision work that its performance rivalled the company's own later Cray 2. The Cray X-MP was the workhorse supercomputer for most of the 1980s. The EA series was the last version of the X-MP, and its major feature was a selectable address size -- either 24-bit or 32-bit -- which allowed the X-MP to run programs designed for the Cray 1 (24-bit), Cray X-MP (24-bit) or Cray Y-MP (32-bit).

Let's do some comparisons of the shipped hardware.

 

Basic specifications. Raspberry Pi versus Cray X-MP/EA/164
Item Cray X-MP/EA/164 Raspberry Pi Model B+
Price US$8m (1988) A$38
Price, adjusted for inflation US$16m A$38
Number of CPUs 1 1
Address size 24-bit or 32-bit 32-bit
RAM 64MB 512MB
Cooling Air cooled, heatsinks, fans Air cooled, no heatsink, no fan

 

Neither unit comes with a power supply. The Cray does come with housing, famously including a free bench seat. The RPi requires third-party housing, typically for A$10; bench seats are not available as an option.

The Cray had the option of solid-state storage. A Secure Digital card is needed to contain the RPi's boot image and, usually, its operating system and data.

 

I/O systems. Raspberry Pi versus Cray X-MP/EA/164
Item Cray Raspberry Pi
SSD size 512MB Third party, minimum of 4096MB
Price US$6m (1988) A$20
Price, adjusted for inflation US$12m A$20

 

Of course the Cray X-MP also had rotating disk. Each disk unit could hold 1.2GB and had a peak transfer rate of 10MBps. This was achieved by using a large number of platters to compensate for the low density of the recorded data, giving the typical "top-loading washing machine" look of disks of that era. Each disk attached to an I/O channel; a channel could connect many disks, collectively called a "string" of disks. The Cray X-MP had two to four I/O channels, each capable of 13MBps.

In comparison the Raspberry Pi's SDHC connector attaches one SDHC card at a speed of 25MBps. The performance of the SD cards themselves varies hugely, ranging from 2MBps to 30MBps.
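Card speed claims like these are easy to check empirically. Here is a minimal stdlib-Python sketch that measures sequential read throughput; the path passed in is whatever large file you happen to have on the card (the example path is hypothetical):

```python
# Rough sequential-read throughput check for an SD card (or any storage).
import time

CHUNK = 1024 * 1024  # read in 1 MiB chunks


def read_throughput_mbps(path):
    """Return approximate sequential read speed in MB per second."""
    total = 0
    start = time.monotonic()
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total / elapsed / 1e6

# e.g. print(read_throughput_mbps('/home/pi/bigfile.img'))  # hypothetical path
```

Note that the OS page cache will inflate the result unless the file is larger than RAM or the caches are dropped first.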

Analysis

What is clear from the numbers is that the floating point performance of the Cray X-MP/EA has fared better with the passage of time than the other aspects of the system. That's because floating point performance was the major design goal of supercomputers of that era. Floating point performance aside, the Raspberry Pi handily beats every computer in the Cray X-MP range.
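To put the prices quoted above together with the matched ~380 MFLOPS Whetstone figure, here is a rough price-for-performance calculation (treating A$ and US$ as comparable, which is fine at this order of magnitude):

```python
# Rough MFLOPS-per-dollar comparison, using the figures from the tables above.
mflops = 380                 # both machines, per Whetstone
cray_price = 16_000_000      # US$, inflation-adjusted
rpi_price = 38               # A$

cray_mflops_per_dollar = mflops / cray_price   # ~0.000024
rpi_mflops_per_dollar = mflops / rpi_price     # 10.0

print(f"Cray: {cray_mflops_per_dollar:.6f} MFLOPS/$")
print(f"RPi:  {rpi_mflops_per_dollar:.1f} MFLOPS/$")
print(f"The RPi delivers {cray_price / rpi_price:,.0f}x more MFLOPS per dollar")
```

On these numbers the RPi is around 400,000 times cheaper per MFLOPS, before even counting the Cray's electricity bill.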

Would Cray have been surprised by these results? I doubt it. Seymour Cray left CDC when it decided to build a larger supercomputer. He viewed this as CDC not "getting it": larger computers have longer wires, more electronics to drive those wires, more heat from the electronics, more design issues such as crosstalk, and more latency. Cray's main design insight was that computers needed to be as small as possible. You can't make a computer much smaller than a system-on-chip.

So why aren't today's supercomputers systems-on-chip? The answer has two parts. Firstly, the chip would be too small to remove the heat from; this is why "chip packaging" has moved to near the centre of chip design. Secondly, chip design, verification and preparation for manufacture (culminating in "tape-out") are astonishingly expensive for advanced chips. It's simply not affordable. You can afford a small variation on a proven design, but that is about the extent of the financial risk which designers care to take. A failed tape-out was one of the causes of the downfall of Procket Networks' network processor design.

Appendix 1. Whetstone benchmark

The whets.c benchmark was downloaded from Roy Longbottom's PC Benchmark Collection.

Compiling this for the RPi is simple enough. Since benchmark geeks care about the details, here they are.

$ diff -d -U 0 whets.c.orig whets.c
@@ -886 +886 @@
-#ifdef UNIX
+#ifdef linux
$ gcc --version | head -1
gcc (Debian 4.6.3-14+rpi1) 4.6.3

$ gcc -O3 -lm -s -o whets whets.c

Here's the run. This is using a Raspbian updated to 2014-08-23 on a Raspberry Pi Model B+ with the "turbo" overclocking to 1000MHz (this runs the RPi between 700MHz and 1000MHz depending upon demand and the SoC temperature). The Model B+ has 512MB of RAM. The machine was in multiuser text mode. There was no swap used before and after the run.

$ uname -a
Linux raspberry 3.12.22+ #691 PREEMPT Wed Jun 18 18:29:58 BST 2014 armv6l GNU/Linux

$ cat /etc/debian_version 
7.6

$ ./whets
##########################################
Single Precision C/C++ Whetstone Benchmark

Calibrate
       0.04 Seconds          1   Passes (x 100)
       0.19 Seconds          5   Passes (x 100)
       0.74 Seconds         25   Passes (x 100)
       3.25 Seconds        125   Passes (x 100)

Use 3849  passes (x 100)

          Single Precision C/C++ Whetstone Benchmark

Loop content                  Result              MFLOPS      MOPS   Seconds

N1 floating point     -1.12475013732910156       138.651              0.533
N2 floating point     -1.12274742126464844       143.298              3.610
N3 if then else        1.00000000000000000                 971.638    0.410
N4 fixed point        12.00000000000000000                   0.000    0.000
N5 sin,cos etc.        0.49911010265350342                   7.876   40.660
N6 floating point      0.99999982118606567       122.487             16.950
N7 assignments         3.00000000000000000                 592.747    1.200
N8 exp,sqrt etc.       0.75110864639282227                   3.869   37.010

MWIPS                                            383.470            100.373

It is worthwhile making the point that this took maybe ten minutes. Cray Research had multiple staff working on making benchmark numbers such as Whetstone as high as possible.

Terry is getting In-Terry-gence.

I had hoped to use a quad core ARM machine running ROS to spruce up Terry the robot: performing tasks like robotic pick and place, controlling Tiny Tim, and autonomous "docking". Unfortunately I found that trying to use a Kinect from an ARM-based Linux machine can make for some interesting times. So I thought I'd dig into the ultra-low-end Intel chipset SBCs. Below is a J1900 Atom machine which can take up to 8GB of RAM and sports the features one expects from a contemporary desktop machine: gigabit Ethernet, USB 3.0, SATA 3, and even PCI-e expansion.





A big draw to this is the "DC" version, which takes a normal laptop-style power connector instead of the much larger ATX connectors. This makes it much simpler to hook up to a battery pack for mobile use. The board runs nicely from a laptop extension battery, even if the on button is a bit funky looking. On the left is a nice battery pack which is running the whole PC.



An interesting feature of this motherboard is that it has no LEDs at all. I had sort of gotten used to Intel boards having power and activity LEDs and the like.

There should be enough CPU grunt to handle the Kinect and start looking at doing DSLAM and thus autonomous navigation.

[life] Day 211: A trip to the museum with Megan and a swim assessment

Today was a nice full day. It was go go go from the moment my feet hit the floor, though.

I had a magnificent 9 hours of uninterrupted sleep courtesy of a sleeping pill, some pseudoephedrine and a decongestant spray.

I started off with a great yoga class, and then headed over to Sarah's to pick up Zoe. The traffic was phenomenally bad on the way there, and I got there quite late, so I offered to drop Sarah at work on the way back to my place, since I had no particular schedule.

After I'd dropped Sarah off, I was pondering what to do today, as the weather looked a bit dubious. I asked Zoe if she wanted to go to the museum. She was keen for that, and asked if Megan could come too, so I called up Jason on the way home to see if she wanted to come with us, and picked her up directly on the way home.

We briefly stopped at home to grab my Dad Bag and some snacks, and headed to the bus stop. We managed to walk (well, make a run for) straight onto the bus and headed into the museum.

The Museum and the Sciencecentre are synonymous to Zoe, despite the latter requiring admission (we've got an annual membership). In trying to use my membership to get a discounted Sciencecentre ticket for Megan, I managed to score a free family pass instead, which I was pretty happy about.

We got into the Sciencecentre, which was pretty busy with a school excursion, and the girls started checking it out. The problem was they both wanted to call the shots on what they did, and didn't like not getting their way. Once I instituted some turn-taking, everything went much more smoothly, and they had a great time.

We had some lunch in the cafe and then spent some more time in the museum itself before heading for the bus. We narrowly missed the bus 30 minutes earlier than I was aiming for, so I asked the girls if they wanted to take the CityCat instead. Megan was very keen for that, so we walked over to Southbank and caught the CityCat instead.

I was half expecting Zoe to want to be carried back from the CityCat stop, but she was good with walking back. Again, some turn taking as to who was the "leader" walking back helped keep the peace.

I had to get over to Chandler for Zoe's new potential swim school to assess her for level placement, so I dropped Megan off on the way, and we got to the pool just in time.

Zoe did me very proud and did a fantastic job of swimming, and was placed in the second-highest level of their learn to swim program. We also ran into her friend from Kindergarten, Vaeda, who was killing time while her brother had a swim class. So Zoe and Vaeda ended up splashing around in the splash pool for a while after her assessment.

Once I'd managed to extract Zoe from the splash pool, and got her changed, we headed straight back to Sarah's place to drop her off. So we pretty much spent the entire day out of the house. Zoe and Megan had a good day together, and I managed to figure out pretty quickly how to keep the peace.

When testing goes bad

I've recently started working on a large, mature code base (some 65,000 lines of Python code). It has 1048 unit tests implemented in the standard unittest.TestCase fashion using the mox framework for mocking support (I'm not surprised you've not heard of it).

Recently I fixed a bug which was causing a user interface panel to display when it shouldn't have been. The fix basically amounts to a couple of lines of code added to the panel in question:

+    def can_access(self, context):
+        # extend basic permission-based check with a check to see whether 
+        # the Aggregates extension is even enabled in nova 
+        if not nova.extension_supported('Aggregates', context['request']):
+            return False
+        return super(Aggregates, self).can_access(context)

When I ran the unit test suite I discovered to my horror that 498 of the 1048 tests now failed. The reason is that the can_access() method here is called as a side-effect of those 498 tests, and nova.extension_supported (which is a REST call under the hood) needed to be mocked correctly to support being called.

I quickly discovered that given the size of the test suite, and the testing tools used, each of those 498 tests must be fixed by hand, one at a time (if I'm lucky, some of them can be knocked off two at a time).

The main cause is mox's mocking of callables like the one above which enforces the order that those callables are invoked. It also enforces that the calls are made at all (uncalled mocks are treated as test failures).

This means there is no way to provide a blanket mock for nova.extension_supported. Tests with existing calls to that API need careful attention to ensure the ordering is correct. Tests which don't result in the side-effect call to the above method will raise an error, so even adding a mock setup in TestCase.setUp() doesn't work in most cases.

It doesn't help that the codebase is so large, and has been developed by so many people over years. Mocking isn't consistently implemented; even the basic structure of tests in TestCases is inconsistent.

It's worth noting that the ordering check that mox provides is never used as far as I can tell in this codebase. I haven't sighted an example of multiple calls to the same mocked API without the additional use of the mox InAnyOrder() modifier. mox does not provide a mechanism to turn the ordering check off completely.

The pretend library (my go-to for stubbing) splits out the mocking step and the verification of calls so the ordering will only be enforced if you deem it absolutely necessary.
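To make the contrast concrete, here is a minimal hand-rolled sketch of that stub-then-verify-separately style. It mirrors the shape of pretend's call_recorder but is written from scratch against the standard library, so the names here are illustrative, not pretend's actual API: the stub just records calls as a side channel, and verification is a separate, opt-in step with no ordering enforced.

```python
def call_recorder(fn):
    """Wrap fn so that every call is recorded as an (args, kwargs) pair."""
    def wrapper(*args, **kwargs):
        wrapper.calls.append((args, kwargs))
        return fn(*args, **kwargs)
    wrapper.calls = []
    return wrapper

# A blanket stub: always claim the extension is enabled, no matter how
# often or in what order it is called -- exactly what mox forbids.
extension_supported = call_recorder(lambda name, request: True)

assert extension_supported('Aggregates', 'req-a') is True
assert extension_supported('Volumes', 'req-b') is True

# Verification is separate and opt-in: a test inspects .calls only if the
# ordering or the arguments actually matter to it.
assert len(extension_supported.calls) == 2
assert extension_supported.calls[0] == (('Aggregates', 'req-a'), {})
```

An uncalled stub here is simply a stub with an empty .calls list, not a test failure, which is exactly the behaviour a blanket setUp() mock needs.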

The choice to use unittest-style TestCase classes makes managing fixtures much more difficult (it becomes a nightmare of classes and mixins and setUp() super() calls or alternatively a nightmare of mixin classes and multiple explicit setup calls in test bodies). This is exacerbated by the test suite in question introducing its own mock-generating decorator which will generate a mock, but again leaves the implementation of the mocking to the test cases. py.test's fixtures are a far superior mechanism for managing mocking fixtures, allowing simpler, central creation of mocks and overriding of them through fixture dependencies.

The result is that I spent some time working through some of the test suite and discovered that in an afternoon I could fix about 10% of the failing tests. I have decided that spending a week fixing the tests for my 5 line bug fix is just not worth it, and I've withdrawn the patch.

August 27, 2014

[life] Day 210: Running and a picnic, with a play date and some rain

I had a rotten night's sleep last night. Zoe woke up briefly around midnight wanting a cuddle, and then I woke up again at around 3am and couldn't get back to sleep. I'm surprised I'm not more trashed, really.

It was a nice day today, so I made a picnic lunch, and we headed out to Minnippi Parklands to do a run with Zoe in the jogging stroller. It was around 10am by the time we arrived, and I had grand plans of running 10 km. I ran out of steam after about 3.5 km, conveniently at the "Rocket Park" at Carindale, which Zoe's been to a few times before.

So we stopped there for a bit of a breather, and then I ran back again for another 3 km or so, in a slightly different route, before I again ran out of puff, and walked the rest of the way back.

We then proceeded to have our picnic lunch and a bit of a play, before I dropped her off at Megan's house for a play while I chaired the PAG meeting at Kindergarten.

After that, and extracting Zoe, which is never a quick task, we headed home to get ready for swim class. It started to rain and look a bit thundery, and as we arrived at swim class we were informed that lessons were canceled, so we turned around and headed back home.

Zoe watched a bit of TV and then Sarah arrived to pick her up. I'm going to knock myself out with a variety of drugs tonight and hope I get a good night's sleep with minimum of cold symptoms.

August 26, 2014

Central Sydney WordPress Meetup: E-mail marketing

Andrew Beeston from Clicky! speaks about email marketing at Central Sydney WordPress meetup:

[life] Day 209: Startup stuff, Kindergarten, tennis and a play date

Last night was not a good night for sleep. I woke up around 12:30am for some reason, and then Zoe woke up around 2:30am (why is it always 2:30am?), but I managed to get her to go back to bed in her own bed. I was not feeling very chipper this morning.

Today was photo day at Kindergarten, so I spent some extra time braiding Zoe's hair (at her request) before we headed to Kindergarten.

When I got home, I got stuck into my real estate license assessment and made a lot of progress on the current unit today. I also mixed in some research on another idea I'm running with at the moment, which I'm very excited about.

I biked to Kindergarten to pick Zoe up, and managed to get her all sorted out in time for her tennis class, and she did the full class without any interruptions.

After tennis, we went to Megan's house for a bit. As we were leaving, her neighbour asked if we could help video one of her daughters doing the ALS ice bucket challenge thing, so we got a bit waylaid doing that, before we got home.

I managed to get Zoe down to bed a bit early tonight. My cold is really kicking my butt today. I hope we both sleep well tonight.

About your breakfast

We know that eating well (good nutritional balance) and at the right times is good for your mental as well as your physical health.

There’s some new research out on breakfast. The article I spotted (Breakfast no longer ‘most important meal of the day’ | SBS) goes a bit popular and funny on it, so I’ll phrase it independently in an attempt to get the real information out.

One of the researchers makes the point that skipping breakfast is not the same as deferring it. So consider the reason: are you going to eat properly a bit later, or are you not eating at all?

When you do have breakfast, note that really most cereals contain an atrocious amount of sugar (and other carbs) that you can’t realistically burn off even with a hard day’s work. And from my own personal observation, there’s often way too much salt in there also. Check out Kellogg’s Cornflakes for a neat example of way-too-much-salt.

Basically, the research comes back to the fact that just eating is not the point, it’s what you eat that actually really does matter.

What do you have for breakfast, and at what point/time in your day?

August 25, 2014

APM:Rover 2.46 released

The ardupilot development team is proud to announce the release of version 2.46 of APM:Rover. This is a major release with a lot of new features and bug fixes.



This release is based on a lot of development and testing that happened prior to the AVC competition where APM based vehicles performed very well.



Full changes list for this release:

  • added support for higher baudrates on telemetry ports, to make it easier to use high rate telemetry to companion boards. Rates of up to 1.5MBit are now supported to companion boards.
  • new Rangefinder code with support for a wider range of rangefinder types including a range of Lidars (thanks to Allyson Kreft)
  • added logging of power status on Pixhawk
  • added PIVOT_TURN_ANGLE parameter for pivot based turns on skid steering rovers
  • lots of improvements to the EKF support for Rover, thanks to Paul Riseborough and testing from Tom Coyle. Using the EKF can greatly improve navigation accuracy for fast rovers. Enable with AHRS_EKF_USE=1.
  • improved support for dual GPS on Pixhawk. Using a 2nd GPS can greatly improve performance when in an area with an obstructed view of the sky
  • support for up to 14 RC channels on Pixhawk
  • added BRAKING_PERCENT and BRAKING_SPEEDERR parameters for better braking support when cornering
  • added support for FrSky telemetry via SERIAL2_PROTOCOL parameter (thanks to Matthias Badaire)
  • added support for Linux based autopilots, initially with the PXF BeagleBoneBlack cape and the Erle robotics board. Support for more boards is expected in future releases. Thanks to Victor, Sid and Anuj for their great work on the Linux port.
  • added StorageManager library, which expands available FRAM storage on Pixhawk to 16 kByte. This allows for 724 waypoints on Pixhawk.
  • improved reporting of magnetometer and barometer errors to the GCS
  • fixed a bug in automatic flow control detection for serial ports in Pixhawk
  • fixed use of FMU servo pins as digital inputs on Pixhawk
  • imported latest updates for VRBrain boards (thanks to Emile Castelnuovo and Luca Micheletti)
  • updates to the Piksi GPS support (thanks to Niels Joubert)
  • improved gyro estimate in DCM (thanks to Jon Challinger)
  • improved position projection in DCM in wind (thanks to Przemek Lekston)
  • several updates to AP_NavEKF for more robust handling of errors (thanks to Paul Riseborough)
  • lots of small code cleanups thanks to Daniel Frenzel
  • initial support for NavIO board from Mikhail Avkhimenia
  • fixed logging of RCOU for up to 12 channels (thanks to Emile Castelnuovo)
  • code cleanups from Silvia Nunezrivero
  • improved parameter download speed on radio links with no flow control



Many thanks to everyone who contributed to this release, especially Tom Coyle and Linus Penzlien for their excellent testing and feedback.



Happy driving!

APM:Plane 3.1.0 released

The ardupilot development team is proud to announce the release of version 3.1.0 of APM:Plane. This is a major release with a lot of new features and bug fixes.



The biggest change in this release is the addition of automatic terrain following. Terrain following allows the autopilot to guide the aircraft over varying terrain at a constant height above the ground using an on-board terrain database. Uses include safer RTL, more accurate and easier photo mapping and much easier mission planning in hilly areas.



There have also been a lot of updates to auto takeoff, especially for tail dragger aircraft. It is now much easier to get the steering right for a tail dragger on takeoff.



Another big change is the support of Linux based autopilots, starting with the PXF cape for the BeagleBoneBlack and the Erle robotics autopilot.



Full list of changes in this release

  • added terrain following support. See http://plane.ardupilot.com/wiki/common- ... following/
  • added support for higher baudrates on telemetry ports, to make it easier to use high rate telemetry to companion boards. Rates of up to 1.5MBit are now supported to companion boards.
  • added new takeoff code, including new parameters TKOFF_TDRAG_ELEV, TKOFF_TDRAG_SPD1, TKOFF_ROTATE_SPD, TKOFF_THR_SLEW and TKOFF_THR_MAX. This gives fine grained control of auto takeoff for tail dragger aircraft.
  • overhauled glide slope code to fix glide slope handling in many situations. This makes transitions between different altitudes much smoother.
  • prevent early waypoint completion for straight ahead waypoints. This makes for more accurate servo release at specific locations, for applications such as dropping water bottles.
  • added MAV_CMD_DO_INVERTED_FLIGHT command in missions, to change from normal to inverted flight in AUTO (thanks to Philip Rowse for testing of this feature).
  • new Rangefinder code with support for a wider range of rangefinder types including a range of Lidars (thanks to Allyson Kreft)
  • added support for FrSky telemetry via SERIAL2_PROTOCOL parameter (thanks to Matthias Badaire)

  • added new STAB_PITCH_DOWN parameter to improve low throttle behaviour in FBWA mode, making a stall less likely (thanks to Jack Pittar for the idea).
  • added GLIDE_SLOPE_MIN parameter for better handling of small altitude deviations in AUTO. This makes for more accurate altitude tracking in AUTO.
  • added support for Linux based autopilots, initially with the PXF BeagleBoneBlack cape and the Erle robotics board. Support for more boards is expected in future releases. Thanks to Victor, Sid and Anuj for their great work on the Linux port. See http://diydrones.com/profiles/blogs/fir ... t-on-linux for details.
  • prevent cross-tracking on some waypoint types, such as when initially entering AUTO or when the user commands a change of target waypoint.
  • fixed servo demo on startup (thanks to Klrill-ka)
  • added AFS (Advanced Failsafe) support on 32 bit boards by default. See http://plane.ardupilot.com/wiki/advance ... iguration/
  • added support for monitoring voltage of a 2nd battery via BATTERY2 MAVLink message
  • added airspeed sensor support in HIL
  • fixed HIL on APM2. HIL should now work again on all boards.
  • added StorageManager library, which expands available FRAM storage on Pixhawk to 16 kByte. This allows for 724 waypoints, 50 rally points and 84 fence points on Pixhawk.
  • improved steering on landing, so the plane is actively steered right through the landing.
  • improved reporting of magnetometer and barometer errors to the GCS
  • added FBWA_TDRAG_CHAN parameter, for easier FBWA takeoffs of tail draggers, and better testing of steering tuning for auto takeoff.
  • fixed failsafe pass through with no RC input (thanks to Klrill-ka)
  • fixed a bug in automatic flow control detection for serial ports in Pixhawk
  • fixed use of FMU servo pins as digital inputs on Pixhawk
  • imported latest updates for VRBrain boards (thanks to Emile Castelnuovo and Luca Micheletti)
  • updates to the Piksi GPS support (thanks to Niels Joubert)
  • improved gyro estimate in DCM (thanks to Jon Challinger)
  • improved position projection in DCM in wind (thanks to Przemek Lekston)
  • several updates to AP_NavEKF for more robust handling of errors (thanks to Paul Riseborough)
  • improved simulation of rangefinders in SITL
  • lots of small code cleanups thanks to Daniel Frenzel
  • initial support for NavIO board from Mikhail Avkhimenia
  • fixed logging of RCOU for up to 12 channels (thanks to Emile Castelnuovo)
  • code cleanups from Silvia Nunezrivero
  • improved parameter download speed on radio links with no flow control

Many thanks to everyone who contributed to this release, especially our beta testers Marco, Paul, Philip and Iam.



Happy flying!

[life] Day 208: Kindergarten, running, insurance assessments, home improvements, BJJ and a babyccino

Today was a pretty busy day. I started off with a run, and managed to do 6 km this morning. I feel like I'm coming down with yet another cold, so I'm happy that I managed to get out and run at all, let alone last 6 km.

Next up, I had to get the car assessed after a minor rear-end collision it suffered on Saturday night (nobody was hurt, I wasn't at fault). I was really impressed with NRMA Insurance's claim processing, it was all very smooth. I've since learned that they even have a smartphone app for ensuring that one gets all the pertinent information after an accident.

I dropped into Bunnings on the way home to pick up a sliding rubbish bin. I've been wanting one of these ever since I moved into my place, and finally got around to doing it. I also grabbed some LED bulbs from Beacon.

After I got home, I spent the morning installing and reinstalling the rubbish bin (I suck at getting these things right first go) and swapping light bulbs around. Overall, it was a very satisfying morning scratching a few itches around the house that had been bugging me for a while.

I biked over to Kindergarten for pick up again, and we biked back home and didn't have a lot of time before we had to head out for Zoe's second freebie Brazilian Jiu-Jitsu class. This class was excellent, there were 8 kids in total, and 2 other girls. Zoe got to do some "rolling" with a partner. It was so cute to watch. They just had to try and block each other from touching their knees, and if they failed, they had to drop to the floor and hop back up again. For each of Zoe's partners they were very civilized and took turns at failing to block.

Zoe was pretty tired after the class. It was definitely the most strenuous class she's had to date, and she briefly fell asleep in the car on the way home. We had to make a stop at the Garage to grab some mushrooms for the mushroom soup we were making for dinner.

Zoe helped me make the mushroom soup, and after dinner we popped out for a babyccino. It's been a while since we've had a post-dinner one, and it was nice to do it again. We also managed to get through the entire afternoon without any TV, which I thought was excellent.

My Media Server

Over the years, I’ve experimented with a bunch of different methods for media servers, and I think I’ve finally arrived at something that works well for me. Here are the details:

The Server

An old Dell Zino HD I had lying around, running Windows 7. Pretty much any server will be sufficient; this is just the one I had available. Dell doesn’t sell micro-PCs anymore, so just choose your favourite brand that sells something small and with low power requirements. The main things you need from it are a reasonable processor (fast enough to handle transcoding a few video streams in at least realtime) and lots of HD space. I don’t bother with RAID, because I won’t be sad about losing videos that I can easily re-download (the internet is my backup service).

Downloading

I make no excuses, nor apologies for downloading movies and TV shows in a manner that some may describe as involving “copyright violation”.

If you’re in a similar position, there are plenty of BitTorrent sites that allow you register and add videos to a personal RSS feed. Most BitTorrent clients can then subscribe to that feed, and automatically download anything added to it. Some sites even allow you to subscribe to collections, so you can subscribe to a TV show at the start of the season, and automatically get new episodes as soon as they arrive.

For your BitTorrent client, there are two features you need: the ability to subscribe to an RSS feed, and the ability to automatically run a command when the download finishes. I’ve found qBittorrent to be a good option for this.

Sorting

Once a file is downloaded, you need to sort it. By using a standard file layout, you have a much easier time loading your files into your media server later. For automatically sorting your files when they download, nothing compares to the amazing FileBot, which will automatically grab info about the download from TheMovieDB or TheTVDB and pass it on to your media server. It’s entirely scriptable, but you don’t need to worry about that, because there’s already a great script to do all this for you, called Automated Media Center (AMC). The initial setup for this was a bit annoying, so here’s the command I use (you can tweak the file locations for your own server, and you’ll need to fix the %n if you use something other than qBittorrent):

"C:/Program Files/FileBot/filebot" -script fn:amc --output "C:/Media/Other" --log-file C:/Media/amc.log --action hardlink --conflict override -non-strict --def "seriesFormat=C:/Media/TV/{n}/{'S'+s}/{fn}" "movieFormat=C:/Media/Movies/{n} {y}/{fn}" excludeList=C:/Media/amc-input.txt plex=localhost "ut_dir=C:/Media/Downloads/%n" "ut_kind=multi" "ut_title=%n"

Media Server

Plex is the answer to this question. It looks great, it’ll automatically download extra information about your media, and it has really nice mobile apps for remote control. Extra features include offline syncing to your mobile device, so you can take your media when you’re flying, and Chromecast support so you can watch everything on your TV.

The FileBot command above will automatically tell Plex that a new file has arrived, which is great if you choose to have your media stored on a NAS (Plex may not be able to automatically watch a NAS directory for new files).

Backup

Having a local server is great for keeping a local backup of things that do matter – your photos and documents, for example. I use CrashPlan to sync my most important things to my server, so I have a copy immediately available if my laptop dies. I also use CrashPlan’s remote backup service to keep an offsite backup of everything.

Conclusion

While I’ve enjoyed figuring out how to get this all working smoothly, I’d love to be able to pay a monthly fee for an Rdio or Spotify style service, where I get the latest movies and TV shows as soon as they’re available. If you’re wondering what your next startup should be, get onto that.

August 24, 2014

Guitar

Does anyone still blog? It seems nearly everyone has moved on to Twitter/Facebook. I miss being able to express thoughts in more than 140 characters.



I went to a picnic recently, and some people were passing a steel string guitar around. I'm not a good acoustic player, but it was fun so I had a bash. Someone played Under The Bridge, and took liberties with the chord voicings. So I was inspired to pick up my guitar and work through the official transcription, which I own. While the basic form of the song is pretty simple, as you can hear, the clever part is the choice of chord voicings and fills. I'll be practicing that one for a while.



I've also started working through the Berklee method books again, starting at Volume 2. I learned by playing by ear and memorising, so sight reading is still something I'm getting used to, and sometimes I'm not disciplined enough to do it properly. But I'm getting better at that. I'll be so happy when I start to get good at position playing.

Physiotherapy

I'm at Elevate in Sydney CBD. For a long time, I've struggled with flexibility issues, and I began to think something must be wrong. It turned out something is wrong: I have a minor skeletal deformity in my left hip joint, and my muscles have developed in a strangely imbalanced way to compensate. Except it isn't working: I have a severely reduced range of motion and chronic pain in my left hip.

My physio thinks he can correct the problem, but it's going to take a while. So I'll be off training for at least six weeks, and more likely two months or more. But it will be worth it if my joint pain goes away and I can move like the other people in my judo class.

Posted via LiveJournal app for iPhone.

 
"Curiously enough, the only thing that went through the mind of the bowl of petunias as it fell was 'Oh no, not again.'" - Hitchhiker's Guide to the Galaxy.

One for the stats nerds

At USyd, we did all our stats in R. Now I'm working at the Department of Health, and we do most of our stats in SAS. SAS is pretty different to R, and so I've needed to work hard to try to learn it.



This is a rite of passage that most trainee biostatisticians go through, and so people have shoved various books into my hands to help me get up to speed. I'll omit the names of most of the books to protect the guilty, but the most useful book someone pressed into my hands was The Little SAS Book, which I read cover to cover in two sittings.



The Little SAS Book is more technical than the others, hence more suitable for programmers, and actually gives you an inkling of what the designers of the language were thinking. That's helped me begin to think in the language, which is something none of the other books have helped me to do.



The best comparison I can come up with for now is that SAS is like German, whereas R is like Japanese. SAS has lots of compound statements, each of which does a lot, while R has many small statements which each do a little bit. So would you like to be able to speak German or Japanese? The correct answer is, of course, both, each at the appropriate time :)

Gainfully employed

A while ago, I applied for a job as a biostatistician in public health. I made it to the interview stage, and that seemed to go quite well. They said they'd contact me in seven to ten days. I didn't hear anything for a while, but eventually I bumped into one of my referees who said he'd spoken with my interviewers, and they sounded "very positive". I poked my other referee, and he said they'd spoken to him too. So that was sounding pretty good.



On the advice of my girlfriend, I asked them how things were going, and they said they were waiting for a criminal record check to complete. Owing to my misspent youth, there's no criminal record to check, so now I was feeling very positive indeed. To cut a long story short, today they offered me a position, and I accepted.



So all's well that ends well!

British Medical Journal publishes paper on the risks of head banging

Head and neck injury risks in heavy metal



This is the funniest thing I've seen in ages. I particularly like the mitigation options: many injuries could be prevented if AC/DC would stop playing Back In Black live, and instead play Moon River ;)
 
Sometimes I feel like I'm not so much in control of my body as sending it memos. And it responds with things like "Tough job, hip rotation. Can you come back Monday?".

True Temperament

I've been playing a few more jazz chords and moveable chord forms of late, and my ear's been getting a little better. Unfortunately, this means I've become more sensitive to how out of tune the notes in some of those chords sound as you move up the neck.



To some extent, this problem is inherent in the guitar's construction. There are some very determined people at a company called True Temperament who've decided to do something about this by making custom necks with strange looking curved frets. The FAQ on that site also goes into some depth about tuning methods and why the problem is unsolvable on a standard guitar.



So I'm going to have to live with it, or pick up a saxophone or something :)

Judo

Judo is going well. I've been working on the fundamentals a lot recently, especially breakfalling. I'm finally getting to the point where I'm losing my fear of being thrown - the people in my club are nice and aren't there to hurt people, and I've learnt that by just going with the throw rather than resisting, you can breakfall more cleanly. That way, you're very unlikely to get hurt. And if you pay attention while you're being thrown, you tend to learn more about how the throw works, so when it comes to your turn, you'll do it better.



Being a lightweight, I've got to fight differently to the heavier judoka, so in the next little while I'm going to focus on improving my speed and perfecting my technique for the basic throws and sweeps. If you're stronger you can just apply more force to get bad throwing techniques to work, but this isn't an option for me. I think ultimately it's a good thing, because I'll have to learn the throws properly.



Always more things to work on. Practice, practice, practice.
 
Having spent the last few weeks being sick, now that I'm beginning to get better I'm finding I'm pretty hyper.
 
Just getting over this illness, whatever it is. I just want to eat and sleep.
 
I'm using a surprising amount of maths in my current job. Recently, we've been trying out measures of diversity. Today, I'm taking a look at Shannon entropy.
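As a toy illustration of the entropy calculation (the counts of 5, 3 and 2 observations across three categories are made up, not from any real dataset), Shannon entropy in bits can be computed with a scrap of awk:

```shell
# Shannon entropy H = -sum(p_i * log2(p_i)) over the category proportions p_i.
# Hypothetical counts: 5, 3 and 2 observations in three categories.
H=$(printf '5\n3\n2\n' | awk '
  { n[NR] = $1; total += $1 }
  END {
    for (i in n) {
      p = n[i] / total
      if (p > 0) H -= p * log(p) / log(2)   # awk log() is natural log
    }
    printf "%.4f", H
  }')
echo "Shannon entropy: $H bits"   # 1.4855 bits; the maximum for 3 categories is log2(3) = 1.585
```

Higher values mean observations spread more evenly across categories, which is why it works as a diversity measure.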

The tyranny of distance

At the moment, I'm commuting between two and two-and-a-half hours each way to and from work. That's between twenty and twenty-five hours a week. And it's costing me more than $65 a week to do all that travelling. This doesn't make any sense. I'm so looking forward to moving.
 
I've been sick for a few weeks. I thought it was a bad cold, but my mum thinks it's something more serious. She suggests it might be sinusitis. We're not doctors, so it's off to visit a GP tomorrow to find out what's wrong.

An open letter to Peter Ryan regarding police treatment of cyclists

Hon Peter Ryan,



I am writing because I am concerned at the number of recent incidents where a driver has collided with a cyclist and the case hasn't been followed up by the police. Such incidents, and the publicity surrounding them, do nothing to encourage road users to obey the law when they realise they will most likely get away with not doing so.



A week ago in Ballarat, a 13-year-old boy was hit by a car, and the police said the boy had right of way[1]. Despite this, the article linked states that the police will not charge the driver. This, despite her having broken Australian Road Rules 67 to 72, 84 or 86 depending on circumstances at the stated intersection, or perhaps 140 to 144 if travelling in the same direction. She was likely negligent in allowing the collision to happen in the first place, which, by my understanding, is a criminal offence, especially since serious injury was involved. If she used the usual excuse that "she didn't see him", then that's an admission of guilt in failing to obey ARR 297 - driver having proper control of vehicle.



Also recently, there was a highly publicised case where Shane Warne had an altercation with a bicycle rider. In that case, the fact that Warne hit the cyclist from behind (ARR 126) after overtaking unsafely (ARR 144) is undisputed[2]. The fact that details were not exchanged following the collision is also undisputed (ARR 287). It is also well established that Warne was stopped unnecessarily in a bike lane (ARR 125; 153)[3]. And yet the police will not investigate[4].



Going back a number of years, I also have not had good experiences getting the police to follow up on cases. In my most recent case (11/10/2005; I do not know the case number sorry, all I know was that I was attended to by Angove & Auchterlonie from Boroondara police), the driver also failed to obey ARR 287 (as well as a slew of other offences, such as ARR 46 and 148 - changing lanes without indicating sufficiently and without due care). The police refused to prosecute the driver, and also would not hand over the driver's details or insurer details, based on some misguided privacy policy, asking me instead to fork out for a freedom of information request. Given that I was a broke student at the time, this was not a feasible thing to do and I never did receive compensation from the driver for damage to my bicycle, clothes, and large out of pocket expenses for travel to medical care for several years that the TAC didn't cover. The police also displayed a lack of knowledge of the law, initially thinking that I had broken ARR 141.



I can't imagine why the police aren't investigating these cases: in each one, clear evidence is at hand and not disputed, and the identities of all parties are known. Each should be an open-and-shut case. Without the police laying charges, the rider in each case will have a much harder time claiming from the driver's insurance (if the boy was not admitted overnight, his TAC excess will be an enormous burden on his family). The driver in each case will not be discouraged from driving in a similar fashion next time. And other drivers will also know that they will most likely get away with any offences they commit if a bicycle is involved. This is a perverse reversal of the situation we should have, in which drivers are encouraged to exercise due diligence around cyclists. It almost seems that in Australia the police always assume the cyclist is at fault unless proven otherwise, whereas most other countries with an established bicycling culture assume that the driver is at fault unless proven otherwise, since the driver bears the burden of operating the more deadly vehicle and so should be required to take due care.



If the laws aren't adequate to prosecute the driver in the above cases, has your department been contacted to update the laws, and what is being done? Keep in mind that cyclists have no protection other than the law, and as the more vulnerable road users, the laws should focus on their safety and on ensuring that transgressions are dealt with effectively.



Can you please encourage the police in each of these cases to follow them up to the full extent that the law currently allows?





Sincerely,





[1] http://www.theage.com.au/victoria/teen-cyclist-struck-by-car-20120110-1ps85.html

[2] http://theage.drive.com.au/motor-news/warnes-tirade-triggers-bike-rego-call-20120118-1q5k0.html

[3] http://www.cyclingtipsblog.com/2012/01/cyclist-versus-warnie-the-cyclists-story/

[4] http://www.heraldsun.com.au/news/more-news/warne-blasts-cyclists-on-twittershane-warne-clashes-with-cyclist-on-way-home-from-training-session/story-fn7x8me2-1226246735306

Breaking windows

Another letter in The Age today. Unedited text below:





Ian Porter (Without car manufacturing, we are on the road to ruin, The Age, 13 Jan) believes that the government needs to keep throwing money at the car industry in order to support other industry in Australia. I'm surprised that, as an industry analyst, he hasn't heard of the broken window fallacy.



Throwing good money after a bad, unsustainable industry that can't adapt is just a waste. It's exactly like sending soldiers to dig holes only to fill them back up again, just to keep them employed and off the streets. The money could be better spent doing useful things that will remain useful into the future. Yes, paying people to break windows and then paying the glazier to repair them will keep people employed, but couldn't the glazier be better employed building things that then keep other people employed into the future?



Why don't we do something useful with the money instead, like building modern intra- and inter-city rail infrastructure? Rail won't become a stranded asset when cheap oil becomes unavailable. We won't be left with vast tracts of useless motorways - we will continue to be able to use the rail infrastructure well past these boom times.

Police a bit rich

Hrrrfm. The Age didn't publish my letter:





I find it a bit rich that the police union are upset that information alongside a photograph was distributed about one of their members without his consent. I understand that truth is not considered a defence to libel in Australia, so it was perhaps unwise to distribute such a photo. But it is common police practice to photograph protesters without our consent, and to store these photos with profiles in national databases without a right of appeal or review. I probably find myself on some watchlist now just for attending some of last night's Occupy Melbourne general assembly.



Maybe there would be no need for a photograph to be distributed if police correctly wore their own name badges (and if the name badges weren't deliberately too small to read). Or if there was some accountability, as opposed to the protectionism that police have demonstrated in the past with the likes of their disgusting behaviour at the APEC protests.

on the hardships of living with minimal amounts of RAM (4GB or so)

I just got 200MB/s read/write rate from my swap device on my laptop. Fast laptop eh? OK, so I'm cheating by using the compcache/zram module from the staging tree.



When I bought my two laptops, I was upgrading from 256MB to 4GB. I thought that would be enough to last me for years. The video card on the first laptop came with more memory than the system memory of the machine I was upgrading from. Alas, I forgot to factor in Opera and Firefox (we're now in the era when Emacs is officially lightweight). And being laptops with the particular chipsets they have, 4GB is it, I'm afraid.



Then there's the fact that Linux's VMM, for me, has never really handled the case of a machine running with a working set not all that much smaller than physical RAM. If I add up resident process sizes plus cache plus buffers plus slab plus anything else I can find, I always come up about 25% short of what's actually in the machine - ever since those 256MB days (about half the RAM went "missing" on the 128MB machine before that). And even when your working set, including any reasonable allowance for what ought to be cacheable, falls far short of RAM, it still manages to swap excessively, killing interactive performance (yes, I've tried /proc/sys/vm/swappiness). When I come in in the morning, it's paged everything out to make backups through the night marginally faster (not that I cared about that - I was asleep). Then it pages everything back in again at 3MB/s, despite the disk being capable of 80MB/s. Pity it's not smart enough to realise that I need the entire contiguous block of swapped pages back in, so it might as well dump the whole wasted cache and read swap back in contiguously at 80MB/s, rather than seeking everywhere and getting nowhere.



What I really wanted was compressed RAM. Reading from disk with lots of seeks is a heck of a lot slower than decompressing pages in RAM. I vaguely recall such an idea exists if you're running inside VMware or the like. But this is a desktop: I want to display to my physical screen without having to virtualise my X11 display.



But the zram module might be what I want. It's pretty easy to set up (in the early days, it required a backing swap device and was kinda fiddly). Here's the hack I've got in rc.local, along with a reminder emailed to myself at reboot that I've still got this configured:

# Remind myself at each reboot that zram is still configured here
echo 'rc.local setting zramfs to 3G in size - with a 32% compression ratio (zram_stats), that means we take up 980M for the ramfs swap' | mail -s 'zram' tconnors
# Size the zram device to 3GiB, then format and enable it as high-priority swap
echo $((3*1024*1024*1024)) > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 5 /dev/zram0


It presents as a block device, by default 25% of RAM in size (but I've chosen 3GB above), and as you write to that device, compressed versions of the pages end up in physical memory. Eventually you'd run out of physical memory, and hopefully you have a second swap device (of lower priority) configured where it can page out for real. In my case, I'm using the Debian swapspace utility. Be warned: if you plan to hibernate your laptop, don't forget to have a real swap partition handy :)



zram_stats tells me I'm currently swapping 570MB compressed down to 170MB, for a compression ratio of 28%. That 170MB has to be subtracted from the memory the machine has, so it appears to really only have 3.8 or so GB. No huge drawback. At that compression ratio, if I were to swap another 3GB out, the physical RAM stolen by zram would only be 1GB. My machine would appear to have 3GB of physical RAM, 3GB of blindingly fast swap, and a dynamic amount (via swapspace) of slow disk-based swap. I'd be swapping more because I had 1GB less than I originally had. But at least I'd be swapping so quickly I ought not to notice it ('course, I haven't benchmarked this). And I'd be able to have 2GB more in my working set before paging really starts to become unbearable.
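The back-of-envelope arithmetic is easy to check in the shell (numbers taken from the paragraph above; this is just a sanity-check sketch, not anything zram itself reports):

```shell
# At a 28% compression ratio, swapping a further 3GB (3072MB) out to zram
# steals only about 860MB of physical RAM - roughly the 1GB quoted above.
ratio=28                        # percent, as reported by zram_stats
swapped_mb=$((3 * 1024))        # 3GB of pages pushed to the zram swap device
stolen_mb=$((swapped_mb * ratio / 100))
echo "${stolen_mb}MB of RAM holds ${swapped_mb}MB of swapped pages"
```
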



So, with an uptime of 4 hours, I haven't even swapped to disk yet (I know this because swapspace hasn't allocated any swapfiles). The machine hasn't yet thrashed half to death. That must be a recent record for me.





Yes, the module is in the staging tree. It's already deadlocked on me once, getting things stuck in the D state. And the machine has deadlocked for unexplained reasons a couple of other times recently (with X having had the screensaver going at the time, so no traces, and no idea whether it's general kernel 3.0 flakiness or zram in particular; I had forgotten until tonight that I'd even configured zram previously, back in the 2.6.32 days).



What I really *really* want, since I lack the ability to add more RAM to the machines, is a volatile-RAM eSATA device, used purely as a swap device and reinitialised each boot (i.e., battery backup is just pointless complexity and expense, and SSD is slow, fragile and prone to bad behaviour when you can't use TRIM, for the amount of writes involved in a swap device). There is the Gigabyte i-Ram and GC-RAMDISK and similar devices, but they're kinda pricey, even without the RAM included in the price. Why is SSD so much cheaper than plain old simple RAM these days? I thought the complexity involved would very much make it go the other way around.



What I really *really* **really** want, is for software to be less bloaty crap.

We do things differently, here

Another slightly edited letter published in The Age today. Maybe I should become a media mogul. Original here:



I've just come back from a tour of Europe. Wind farms are everywhere: near tourist attractions, and along roads where it is particularly windy. Anywhere appropriate, and especially near townships and individual houses, because that's where the consumers of electricity are. No one seems to have a problem with them. People there don't suffer from increased rates of cancer or bogus self-imagined afflictions. They don't ban building windmills within 2km of towns. They don't ban building along a windy stretch of road because tourists happen to drive along there.



They also don't develop in green wedges, allow cattle to graze in their national parks, or try to make it easier to log old growth forest and make it harder to protect such lands.



Europeans seem to have no problem accepting that, despite their per-capita emissions being way below ours, something has to change. It's a pity we are not led by leaders. The best I can manage out of my local member, Ted Baillieu, is a form letter in reply to my concerns, uttering vague niceties about planning and the economy, and attacking the opposition.

Failed and now discarded Victorian cycling strategy

I've always been a crazy cat man who sends letters to the editor and his local parliamentary representatives. Except that I have no cat. Anyway, a shortened version of this letter was published in The Age today.





Dear Sir/Madam (CCed my State local member, and The Age letters),



The auditor general's report into the Victorian government cycling strategy said "the Department of Transport and VicRoads had not ... addressed conflicts and delays where cyclists crossed busy roads, and where cyclists and pedestrians shared paths." ("No way to spin it, the wheels are off", The Age, Aug 18). It's not just busy crossings that cause unnecessary delays to people not in the precious car.



A trip along Southbank is usually hampered by two traffic-light-controlled crossings where up to 100 pedestrians and riders wait at the lights, with an average waiting time of around 5 minutes between them. That's 10 minutes wasted per return trip. We wait for just an occasional car (each with just one person in it) and an empty tram to pass. The cars are frequent enough to make running ("jaywalking") across the road a little unsafe, but not frequent enough that the road is anywhere near capacity, so adding more frequent red lights for the car traffic is not going to harm the flow into and out of the city at all. And it will help the 100 waiting pedestrians.



In Holland, the land where cycling is taken seriously, they want cyclists to have to wait no more than 15 seconds, and they're experimenting with microwave sensors to extend the green cycle to let cyclists cross safely. Here, the Southbank story is repeated across the city. Outside Swinburne University, a pedestrian light regularly sees several dozen students wait several minutes at lunchtime to cross, while a small handful of cars (each with just one person in them) pass just frequently enough to make a quick dash across the road, down from the lights, impossible. The vehicle sensor loops at Camberwell Junction have for years been tuned to not even be sensitive enough to detect cyclists, so at 11pm you can wait through two cycles of the lights that completely bypass turning green on the road you're travelling on, while no cars cross the other legs of the junction at all, before you give up and cross illegally (completely safely, because there's no traffic at all at that time of night and you can see for miles).



Given that it's been conclusively proven over the last 40 years that you can't build your way out of car congestion, perhaps it's time to stop encouraging people to get in their cars and promote other forms of travel?

New Media

The Age published an article about http://theconversation.edu.au/, a new media outlet run by the former editor-in-chief of The Age. Not only have I seen intelligent articles on it, but their editors and authors understand Creative Commons licenses.

Mary Poppins

Small country towns and petrol. I should have learned by now. I was running low on fuel sooner than I expected[1], but the maps told me there was a little town up the road with fuel, so I stopped there. It's Sunday. In a small town. That's OK, I've got 36km of fuel left according to the onboard computer, and a guy reckons a town 20km back had fuel. I didn't see it, and it was a very small town, so I decide to push my luck and head over the mountains, where I know a town 63km away (Mansfield) has fuel. I've stopped before when the computer reckoned there was 7km left and I really had 1.5L of fuel remaining, which should be good for 30km. So 36+30km should get me there with 3km to spare! Except that mountains take juice. So I absolutely babied it over. Then I thought I was lost because of a GPS stuffup (problem exists between touchscreen and bike seat). I never went above 80km/h (and associated with the want not to have to roll on the throttle is the want not to put on too much brake and waste too much energy. Not braking and mountains aren't really a clever mix). And it should have been a really fun road. I still hadn't reached the top of the mountain when the computer said there was 0km left. I couldn't listen to music, because I had to listen for the telltale signs of the engine missing, or other signs that I should immediately turn off the ignition lest the fuel pump bearings burn up. Anyway, 20km later, no signs of trouble, and I roll into the fuel station. 1.5L left still.



I've got a Mary Poppins fuel tank.



Anyhoo, that blew the cobwebs away. 2 days ought to be enough holiday between jobs, right? Wish me luck tomorrow! I've reccied where it is that I'm working, so now I've just got to find out what I'm actually doing. Oh, and find a house.



[1] At the Caltex in the main street of Wagga Wagga, I got the worst quality fuel I've ever had: 5.8L/100km for premium grade, despite travelling most of the time at the speed limit and riding pretty conservatively (compared to the rest of my trip in NSW!). The previous fuel gave 5.1L/100km, and the next gave 5.4L/100km despite only the lower grade being available.

The wrong politic

It seems that Senator Carr didn't like the frank and fearless advice his public servants were offering him, and the Chief Scientist's position became untenable. Sure, you're not meant to offer that frank and fearless advice through the media, but what's the point of having a chief scientist, or indeed any publicly paid scientist, if they're only allowed to toe the party line and not allowed to tell the public what they need to know? We see this time and again. CSIRO researchers have been completely barred from making any public comments without going through the central media office. What's the use of public funding if the public isn't allowed to be told about the research?



Tony Rabbit wanted to remove the Chief Scientist's office because it was too political (I did read this in the SMH a few months ago, but can't find the cite). Senator Carr wanted to remove the officer because she was the wrong politic and was telling too much truth.

Why I don't donate to natural disasters in Australia anymore

I donated towards the Black Saturday fires, and then the donation policy of Red Cross became "we'll forward donations to people with insurance and people with holiday homes that got burned down". I wanted my money to go to people who can't afford to pay for insurance, and certainly to people who can't afford holiday and investment homes. Insurance will cover those who can afford it. The rest truly deserve a break. The Qld flood donations are going to people who simply won't need it.



And as to who would pay for it, and whether Australia should postpone bringing ourselves back into budget surplus: if we didn't dump the mining resource rent tax, we'd be fine. Not only would the annual amount generated by the tax neatly match the amount that needs to be spent repairing Qld, but if it was framed ideally (i.e., applied to all mining companies) it would come from companies that are largely responsible for the worsening of these severe storms. That is, they wouldn't be able to externalise their costs onto the rest of society so much any more - those that actually consume more would end up paying for the damage it does, which would then partly fund the mitigation costs we all endure. Actually, it should come partly from farmers too. What did you expect would happen when you cleared the land of its natural ability to regulate water flow?



The guy who texted in to JJJ talkback saying we should just drop the National Broadband Network instead, on the basis that it would be obsolete by the time it was built, made me laugh. Yes, sure: if we don't build something, then the next thing we can't build will be even better!

Wrong technological fixes to problems vol. #8123

The state of Arizona apparently spent $1B attempting to automate the detection of people crossing a 53-mile section of the Mexican border.



If we expect the lifetime of such a project to be 15 years before the infrastructure completely falls apart and needs to be renewed, then for the same money over that 15-year period we could employ 33333/15 = 2222 staff at what seem to be typical US wage rates (neglecting inflation - but since the US economy is a basket case, I might be justified in doing that). Along that 53 miles, we could space those 2222 guards every 40 metres in a line, or a bit more sparsely if you wanted a grid of guards to detect tunnelling.
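For the curious, the figures sketch out like this in the shell. The US$30,000/year wage is my assumption (it's what makes $1B over 15 years come out at 33333 person-years); everything else is from the paragraph above:

```shell
# Back-of-envelope: how many guards does $1B buy over the project lifetime,
# and how far apart can they stand along 53 miles (roughly 85,000 metres)?
budget=1000000000   # USD
wage=30000          # assumed typical US wage, USD/year
years=15            # assumed project lifetime
person_years=$((budget / wage))     # 33333 person-years of labour
staff=$((person_years / years))     # 2222 guards employed concurrently
spacing_m=$((85000 / staff))        # ~38m, i.e. one guard every ~40 metres
echo "$staff guards, one every ${spacing_m}m"
```
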



<hint of sarcasm="maybe">

I'm sure most government spending is useful, and I'm sure the expense of the project could be entirely justified. I'm sure the article is just ill-informed.

</hint of sarcasm>

Vale Purrple

I'm not having much luck selecting cats for longevity rather than character. Still, I'd rather character than longevity.







Purrple has been living with Mum since Phred died, because cats always deserve fellow playthings, and I wasn't about to get another cat. The signs of her (presumed) cancer started showing in October, but the tests the vet did didn't reveal anything (he wasn't searching for cancer, though). On Monday this week, she started showing other signs - those of kidney failure. In the end, she went the same way Phred did.



I didn't get to see her at the end - the curse of long-distance, part-time veterinarians. He made the call: it could either be sew her back up, give her drugs and transport her back to us, or put her down.



And it's only just hit me. I had to type that up.

Requiem for a species

Yikes. I'm reading Clive Hamilton's "Requiem for a Species: Why We Resist the Truth About Climate Change". (All tyops are mine.)





To date, governments have shunned geoengineering for fear of being accused of wanting to avoid their responsibilities with science fiction solutions. The topic is not mentioned in the Stern report and receives only one page in Australia's Garnaut report. As a sign of its continuing political sensitivity, when in April 2009 it was reported that President Obama's new science adviser John Holdren had said that geoengineering is being vigorously discussed as an emergency option in the White House, he immediately felt the need to issue a "clarification" claiming that he was only expressing his personal views. Holdren is one of the sharpest minds in the business and would not be entertaining what is now known as 'Plan B'— engineering the planet to head off catastrophic warming — unless he was fairly sure Plan A would fail.





It is far easier, on the face of it (and certainly politically), to perform geoengineering than to slow down the generation of CO2. So cheap that one country can afford it, instead of it being such a huge (political) task that not even all of the world's countries acting cooperatively will be able to pull it off. So great, let's go servo the eco-system. Control systems are easy, right? They never break into unwanted oscillations while you're still learning their response function.





The implications are sobering. In August 1883 the painter Edvard Munch witnessed an unusual blood-red sunset over Oslo. He was shaken by it, writing that he 'felt a great, unending scream piercing through nature'. The incident inspired him to create his famous work, The Scream. The sunset he saw that evening followed the eruption of Krakatoa off the coast of Java. The explosion, one of the most violent in recorded history, sent a massive plume of ash into the stratosphere, causing the Earth to cool by more than one degree and disrupting weather patterns for several years. More vivid sunsets would be one of the consequences of using sulphate aerosols to engineer the climate; but a more disturbing effect of enhanced dimming would be the permanent whitening of daytime skies. A washed-out sky would become the norm. If the nations of the world resort to climate engineering as an expedient response to global heating, and in doing so relieve pressure to cut carbon emissions, then as the concentration of carbon dioxide in the atmosphere continued to rise so would the latent warming that must be suppressed. It would then become impossible to stop sulphur injections into the stratosphere, even for a year or two, without an immediate jump in temperature. It's estimated that, if we did stop, the backup of greenhouse gases could see warming rebound at a rate 10-20 times faster than in the recent past, a phenomenon referred to, apparently without irony, as the "termination problem". Once we start manipulating the atmosphere we could be trapped, forever dependent on a program of sulphur injections into the stratosphere. In that case, human beings would never see a blue sky again.





Please read his book. It goes down many paths -- human psychology, politics, science. It's bloody depressing, but people need to understand why we're not going down a better route.

Health and Safety

It has always frustrated me that the medical profession, the unions and the like are always pushing the health and safety barrow without critical thought.



Putting up safety fencing to the point that no one pays attention anymore, because they assume the safety fencing will always be everywhere. (I am reminded here of work: you have to pay a lot more attention now, lest your attention lapse when you go near a fence with a hole in it, because the engineering simply makes it impossible to make everything safe.) (I could rant about how the unions have forced legislation on how brightly lit my office has to be, to the point where it hurts my eyes if I don't wear sunglasses, but I'm going off-topic here.)



Putting sensors in cars that are so safe that you don't need to pay attention anymore, so that most people drive like they're driving a Volvo. (I fear and loathe the research into automatically driven cars − unless those cars all limit themselves to 40km/h, or cyclists and kangaroos are legislated off the roads, the research will be a failure, safety-wise.) Rear-view cameras being mandated in cars simply because a few people in 4WDs are too stupid to look backwards before running over their spawn? How is that going to protect a child that is lying underneath the car?



But when it comes to mandatory helmet legislation, it seems the medical profession are just blind and dogmatic, and lack any critical thinking skills.



The head of Montreal’s trauma unit, Dr Raziz, really needs to come out to Australia and have a look in the trauma units of Melbourne's hospitals some time. There he'll see that helmets do very little against cars that hit helmeted cyclists; after all, the standards only test to impact speeds of 19.5km/h - a fall of 1.5 metres, without any additional velocity components of heavy blunt metal. Helmets do nothing when a cyclist is ejected over the handlebars face-first into the tarmac because their handlebars got caught in the wheel-well of an excessively high 4WD. Cars are driven recklessly by drivers who have never gotten used to, nor tolerated, cyclists, because most cyclists were driven off the road 20 years ago precisely because of the legislation that people like him were trying to push. When you only look at one small part of the picture, you only see a very small part of the picture. Get out there and look at the big picture. Getting more people cycling is the solution.



Something that makes people feel safer, but is not actually safer (bicycle helmets), just leads to risk compensation. Forcing people to wear helmets is anything but safe. The choice to wear a helmet should be your choice, and your choice only (or your parents', if you are of an age where it is deemed that you can't legally decide for yourself).



A far more effective piece of legislation to introduce would be to ban vehicles from having a bonnet higher than that of your typical sedan. The aggressivity ratings of 4WDs are unacceptably high, so they have poor crash compatibility with other road users. If only the legislation worked to minimise risky practices, rather than forcing passive safety and adopting other practices that lead to risk compensation.

Conga line of suckholes

I've got more respect for the ex-leader of the ALP, Mark Latham, than I currently do for Prime Minister Julia Gillard and Attorney-General Robert McClelland.



Conga line of suckholes indeed.

Google Maps Fun

Google maps is fun. Google maps API is even more fun!



Just so many cool things to do. My first Google maps hack lets you draw routes on the map (or sat image) and print out the distance associated with it. I've got some other cool things coming up, but I thought I'd get that one out now.



Oh, it is currently centered on work's new building, but zoomed out to see most of the city.

Python Generators

I didn't understand the coolness of Python generators until today. I wanted to generate a list of files in a directory. Previously I had to write a class and do __iter__ tricks. Now I can simply do this:



def file_walk(path):
    for dn, ignore, files in os.walk(path):
        for fn in files:
            yield dn + os.sep + fn
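To use it you just iterate -- the generator is lazy, so nothing is walked until you ask for values. A self-contained sketch (the path "." is only an example):

```python
import os

def file_walk(path):
    # lazily yield the path of every file beneath `path`
    for dn, ignore, files in os.walk(path):
        for fn in files:
            yield dn + os.sep + fn

# nothing is read from disk until the loop starts pulling values
for fn in file_walk("."):
    print(fn)
```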

Python decorators

I love Python because it is so easy to write unreadable code in.



I wanted to handle sub-commands and didn't want to go to all the hassle of maintaining a dispatch dictionary. So I decided I could be really seedy and do:



globals()[args[0]](args[1:])



However this sucks if someone picks a function that wasn't intended as a subcommand.



Python decorators to the rescue! My main problem with the dispatch dictionary was having to update commands in two places. So with decorators I can do this:



commands = {}

def command(fn):
    commands[fn.func_name] = fn
    return fn

@command
def foo(args): pass



and then



commands[args[0]](args[1:])



and I don't have to be so seedy. Of course I'm sure people still won't understand what I'm doing, but that's ok ;)
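Putting it all together, here is a self-contained sketch of the idea -- the subcommand names are invented for illustration, and modern Python spells func_name as __name__:

```python
commands = {}

def command(fn):
    # register fn under its own name, and return it so the decorated
    # function is still usable directly
    commands[fn.__name__] = fn
    return fn

@command
def greet(args):
    return "hello " + " ".join(args)

@command
def add(args):
    return sum(int(a) for a in args)

def dispatch(args):
    # unlike the globals() trick, only registered subcommands are reachable
    if args[0] not in commands:
        raise SystemExit("unknown command: " + args[0])
    return commands[args[0]](args[1:])

print(dispatch(["add", "1", "2"]))   # → 3
```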

Hoary Hedgehog

Today I decided to join the droves trying out Hoary. I would love to say it was a simple, seamless experience, but it wasn't.



Not that it was particularly painful mind you, just a couple of little annoyances.



Firstly, SATA didn't quite work for me. My BIOS has something called "combination mode", which seems to make the SATA disks appear as normal ATA disks as well as SATA disks, which confused the hell out of the kernel. Setting it to "Normal" mode in the BIOS made this problem go away.



The other problem was buggy media. The CD seemed a bit scratchy but after retrying about 5 times it finally worked.



Anyway, it seemed to work and let me get at a terminal. I haven't got rid of the GNOME stuff and installed ratpoison yet, but this will mainly be a remote login box anyway, so I don't care too much about the GUI.

Switching ALSA audio outputs

While it's all well and good mixing audio output, so you can hear both the CD you are listening to and the audio from a VLC stream, it can get kind of confusing listening to two things at once.



So I wanted to write a simple shell script which would mute the currently active output stream and unmute the other, which I could then easily bind to a key in ratpoison.



I thought it would be easy, which it was, but the result is fairly gross. Surely someone can show me a better way to do this:



#!/bin/sh

if [ `amixer cget iface=MIXER,name="PCM Playback Switch" | tail -1 | cut -d\, -f2` = off ]; then 
    amixer cset iface=MIXER,name="PCM Playback Switch" 1 > /dev/null; 
    amixer cset iface=MIXER,name="CD Playback Switch" 0 > /dev/null; 
else 
    amixer cset iface=MIXER,name="PCM Playback Switch" 0 > /dev/null; 
    amixer cset iface=MIXER,name="CD Playback Switch" 1 > /dev/null; 
fi
 
Yay, I got my paper into LCA 2005. Hooray!



In other news I picked up my big day out tickets today. Hooray!



Finally, I leave you with a great (very nerdy) quote from Ted Ts'o:



The way the kernel will deal with C++ language being a complete
disaster (where something as simple as "a = b + c + d +e" could
involve a dozen or more memory allocations, implicit type conversions,
and overloaded operators) is to not use it.  Think about the words of
wisdom from the movie Wargames: "The only way to win is not to play
the game".
 
So instead of going to SLUG last night, Jamie and I went and played pool at the Clare Hotel. Anyway, long story short, it turns out we aren't the only ones who play the penis game in Sydney.

have cable!

Of course I don't have a computer yet. I have a nice new PowerBook on order which should show up soon.



So anyway, this post is coming courtesy of my Palm Pilot connecting to cable via a new D-Link 624+ wireless router.
 
This is a cool (though for very nerdy reasons) piece of art.

Meme meme wonderful meme

So I was explaining to Suzy exactly what a meme was the other day, and now I have a more thorough example to show. It seems that the "23rd post, 5th line" meme is a mutation of another: "grab nearest book, 23rd page, 5th line" (or more likely the mutation went the other way around). The point of the story is to look at how that meme managed to adapt to the LiveJournal environment to ensure its continued replication. (Or so the meme freaks would have you believe ;)



Anyway, the meme lives on and I will give it to you in book format, from "PCI System Architecture, 4th Ed." (yes, I am that lame...)



"and the return of the bus to the idle state. It defined how a device must respond"



So there you go, benchmark still hasn't finished and Angel still isn't on TV yet.

Screwing over business

So I am waiting for benchmarks to run, so I will actually update this thing.



These latest wage rises are really quite silly for two main reasons:



1/ It really doesn't help the low income worker by much (which is what the ACTU has been saying).

2/ It screws over small and medium businesses, which will end up putting people out of work because they can't afford the wage rise.



This article in the Australian explains it better than I will, but the short version is: for the $19 wage rise, businesses will end up paying more than $19 per employee due to on-costs, and of the $19, families will see less than $4 once you take out the tax and the loss of other benefits.



This is really dumb; basically everyone loses -- well, except, that is, for the government. By not having to pay as many benefits out, and by raking in extra payroll tax from business, the government will be able to afford to give middle-income earners a tax break, and get themselves re-elected. Or am I being too cynical?

The Free Trade Agreement

I'm sure you all realise that one person's crusade is another person's "whatever", but can I urge you all to read about the problems with the FTA.



This bill was all about being able to sell wheat and stuff to the US, but in return we have to agree to the United States' draconian copyright and digital rights management laws. This will make it illegal for certain software to be developed in Australia, with the chance of programmers being branded criminals and thrown in gaol.



Of course with all the other problems in the world I'm sure this will go largely unnoticed :(.
 
I don't know what is more annoying: not being able to eat or not being able to shave.



On the not eating side of things, does anyone have any suggestions for liquid meals?
 
So to explain my last post: whilst playing rugby on the weekend, one of the opposition's elbows came into contact with the side of my head. Normally this is pretty by the by: elbow swings, jaw moves, keep playing. In this case however I was lying on the ground, and hence my jaw couldn't really move, and the force behind the elbow was the entire weight of the player assisted by gravity. So the jaw has nowhere to go and breaks under the force.



Of course I don't actually remember any of this happening; I was knocked unconscious. Hell, I didn't even feel the pain in my jaw until hours later.



Luckily it is an undisplaced fracture, which basically means I have a crack, but the jaw didn't move anywhere, so I don't need to be wired up or anything. I wish I could scan in the x-rays; they are pretty cool.



So now I'm out of rugby for at least 6 weeks while the bone heals.



Also of note is how damn cool the OPG (no idea what it stands for) x-ray machine is: you stand still and it zips around your head taking incremental x-rays to give a flat projection of your mouth. Really classy.



I should also note how cool drugs are. Codeine and paracetamol really hit the spot. I'm not sure how good any of the code I write whilst on them will be, but oh well.
 
Broken jaws are painful.

Blog blog blog... over here

So I've jumped ship as it were and am now mostly blogging on my own site, so point your browser to http://benno.id.au/blog/ (or your aggregator to http://benno.id.au/blog/feed/)

NSW geocoding

So my maps stuff now lets you look up addresses in NSW.

Posting from the air!

I'm currently flying somewhere over the Black Sea, en route to London. Singapore Airlines is really cool and has the interweb on board. Rock!

GIMP and colour depth

So I'm playing with a new embedded board, which has a nice 16-bit 640x480 colour LCD. To make a nice splash-screen I fired up the GIMP and did a nice gradient effect and some text. This looked really nice on my 32-bit colour display, but once you move to a 16-bit display the gradient is no longer nice and smooth.



So all I want to do is convert my 24-bit colour RGB image to a 16-bit colour RGB image, and get the GIMP to do dithering. This is something that should be relatively straightforward. Alas I couldn't find any way to do it, and neither could anyone else in the lab.



I ended up using ImageMagick's convert, which worked pretty well:



convert -treedepth 5 -colors 65535 -dither logo24.bmp logo.bmp



But it still amazes me that the GIMP can't do this. If anyone knows how to do it, please let me know!
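For what it's worth, the effect can be sketched in a few lines of NumPy. This is my own illustration of ordered dithering down to the 5/6/5 bit depths of a 16-bit display, not what ImageMagick actually implements internally:

```python
import numpy as np

def rgb888_to_rgb565_dithered(img):
    """Quantize an HxWx3 uint8 image to the levels a 16-bit (RGB565)
    display can show, using a 4x4 Bayer ordered-dither threshold map."""
    bayer = np.array([[ 0,  8,  2, 10],
                      [12,  4, 14,  6],
                      [ 3, 11,  1,  9],
                      [15,  7, 13,  5]], dtype=np.float32) / 16.0
    h, w, _ = img.shape
    t = np.tile(bayer, (h // 4 + 1, w // 4 + 1))[:h, :w]
    out = np.empty_like(img)
    for c, bits in enumerate((5, 6, 5)):          # red 5, green 6, blue 5 bits
        levels = (1 << bits) - 1
        chan = img[..., c].astype(np.float32) / 255.0 * levels
        q = np.floor(chan + t).clip(0, levels)    # threshold map breaks up banding
        out[..., c] = np.round(q / levels * 255).astype(np.uint8)
    return out
```

The per-pixel threshold pushes neighbouring pixels to alternate between the two nearest representable levels, which is what smooths the banding in a gradient.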



Here is the image if you want to give it a try.

OMG! You would not believe the picture of a shark I saw today!





In fact this whole shark site is pretty hilarious. Don't even ask why I was googling for sharks.

More GoogleMaps fun.

I've updated my google maps interface again.



There is a new tool, "Handy Places", which gets a bunch of different map views from a backend database and gives you easy access to them. Of course I still need to allow people to add their own bookmarks here, but that should be doable soon.



Length finder has some rudimentary UI for saving stuff, but this doesn't actually work yet :). Otherwise I still think it's the best length finder available for Google Maps at the moment.



Current Place has been updated to correctly give you an idea of the size of the earth you are currently looking at. The coolest thing is seeing how the map gets distorted as you get closer to the poles.

Round-up of options for disabling Mac OS X Lion's feature of auto-restoring windows

As part of the "Back to the Mac" theme in OS X Lion (10.7), Apple have taken what they have learnt from user interfaces and experience on iOS devices such as the iPhone and iPad, and tried to bring some of those concepts home to the full-fledged Mac experience.





There are many changes in relation to this, but one of the most noticeable, and sometimes annoying, features is that applications will now often restore the documents and windows you had open when you last closed them. Not only that, your entire desktop will re-open every application and document you were last using.





While this is sometimes useful, it can also be very annoying. For myself, the most common application this bugs me with is Preview. I am quite used to building up a pile of documents in Preview, only to quit it to clear them out, so that next time I start it the build-up is gone. This also happens with Google Chrome and TextEdit.





Fortunately, there are some solutions at hand! There are three main options:





Close all windows in an application once-off



If you hold the Option key when selecting the application menu, you will note most applications gain an "and Close All Windows" suffix on the Quit item. This will quit the application and not remember the open windows.



Additionally, you can simply use Apple-Option-Q as a single keyboard shortcut to quit an application including all of its windows.





Disable the feature entirely



You can disable the feature wholesale, so that no application will re-open windows and your desktop won't re-open applications. This is a fairly simple toggle.







  1. Select the Apple menu, then move halfway down to "System Preferences" and select it.


  2. Select the "Show All" button from the top menu bar


  3. Select the very first icon under the "Personal" category, "General"


  4. Look towards the bottom and untick the option "Restore windows when quitting and re-opening apps"








Disable the behaviour on a per application basis.



It can be done manually with "defaults write" in many cases; however, a free application, RestoreMeNot, has been developed and released which automates this behaviour.



It's a free download and allows you to exclude specific applications from this behavior, in my case Preview was first cab off the rank.
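For reference, the "defaults write" route looks like the following. To the best of my knowledge the preference key is NSQuitAlwaysKeepsWindows and com.apple.Preview is Preview's bundle identifier, but verify both on your own system before relying on them:

```sh
# opt Preview out of window restoration on OS X Lion
defaults write com.apple.Preview NSQuitAlwaysKeepsWindows -bool false
```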

Watch out for hostname changes when using replication!

Cross-post from my blog at sun



For one reason or another, many times we find ourselves changing the hostname of a machine. It's been repurposed or moved - or perhaps the original installer didn't know what name it should have. To achieve this on most modern Linux distributions there are two key files you need to update.





  1. /etc/hostname needs to be updated with the correct hostname to be set on boot, and


  2. /etc/hosts needs to be updated for DNS lookups of the local hostname. This is more important than you might think and will break many applications if not updated.






Some people also take the third step of updating the hostname on the fly with the 'hostname' tool; if you do that, the gotcha I'm about to describe can take you completely unaware some weeks or months later.



If you are using MySQL replication, there are two key options which depend on the hostname. These are 'log-bin' and 'relay-log', which set the binary log and relay log paths respectively. The problem is that not only do the logs themselves depend on the hostname, so does the index file which tells the server where to find them. So if you restart the server, it will look for a new index file and won't find it - causing errors such as:



090825 17:17:15 [ERROR] Failed to open the relay log './mellie-relay-bin.000002' (relay_log_pos 339)

090825 17:17:15 [ERROR] Could not find target log during relay log initialization
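(A preventive aside of my own, not part of the original incident: you can pin the log base names in my.cnf so that neither path is ever derived from the hostname in the first place.)

```ini
# my.cnf -- explicit base names survive a hostname change
[mysqld]
log-bin   = mysql-bin
relay-log = mysql-relay-bin
```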




There are several possible solutions to this. One involves combining the old and new files (which you can find documented here), but that's a more pro-active approach. The second is to completely restart the replication process, which in my opinion is cleaner. So I will detail that approach.



First we need to stop the slave process, just to make sure.



mysql> STOP SLAVE;




Then we can get the slave status, to see what position on the master the slave is currently at. You can see what that looks like here. The important values to note are the following two:



mysql> SHOW SLAVE STATUS\G

*************************** 1. row ***************************

Master_Host: localhost

Master_User: root

Master_Port: 3306

Relay_Master_Log_File: gbichot-bin.005

Exec_Master_Log_Pos: 79





These values tell us the current position in the master's binary logs that the slave has executed up to, as well as the basic master details. The reason this is important is that we are going to tell the slave to completely forget its current replication information and fetch the data fresh.



While normally you could just change the master log file and position, since the slave can't open the relay log at all, replication does not start and we must completely reset and specify all of the details again. The above information contains everything you need except the password for the replication user. You can find that by reading the 'master.info' file, or from prior knowledge. Attempting CHANGE MASTER at this point without resetting first fails with:



ERROR 1201 (HY000): Could not initialize master info structure; more error messages can be found in the MySQL error log




To reset that, we run RESET SLAVE like so:



mysql> RESET SLAVE;




Then we need to construct a CHANGE MASTER statement with the above information.



In our case the statement is

mysql> RESET SLAVE;

mysql> CHANGE MASTER TO MASTER_HOST="localhost", MASTER_USER="root", MASTER_PASSWORD="test", MASTER_LOG_FILE="gbichot-bin.005", MASTER_LOG_POS=79;


... but be sure to use your own values from the SLAVE STATUS, and make sure the log and position are from Relay_Master_Log_File and Exec_Master_Log_Pos - there are other values that look similar, so don't confuse them. Once this is done we can start the slave, and check the status to make sure it is replicating correctly.



mysql> START SLAVE;

*wait a few moments*

mysql> SHOW SLAVE STATUS\G




When the slave status is displayed, make sure that both the IO and SQL threads are running and there are no errors.





mysql> SHOW SLAVE STATUS\G

*************************** 1. row ***************************

Slave_IO_Running: Yes

Slave_SQL_Running: Yes

Seconds_Behind_Master: 8




Also keep an eye on the Seconds_Behind_Master value and make sure that it is reducing to 0.





That's this week's situation. If you do change your hostname, it pays to reboot your machine to make sure everything has worked, so that issues will not pop up down the track when your machine is unexpectedly rebooted by a crash or some other circumstance. You don't need unexpected changes causing problems!



This applies to any situation where you might change a configuration file but apply its effects separately. This is very common in MySQL, where you might want to change a dynamic variable and also edit the my.cnf file. If you make a syntax error, you won't know until your server restarts. So it helps to be very careful, and preferably do the actual MySQL or server restart.

Checking ldirectord is alive

This week's blog post is not strictly to do with MySQL, but a little more to do with highly available clusters using heartbeat and ldirectord.



While many people use heartbeat with MySQL, ldirectord usage is less common in these scenarios and is more often found in clustered web and mail servers. It is, however, sometimes used with a selection of MySQL slave servers.



ldirectord is a tool that manages IPVS in the kernel. Essentially you give it a list of servers that act as a "cluster" for a service - for example, 2 web servers. It will then set up the Linux load balancer IPVS to direct traffic to these 2 machines. What makes it useful is that it keeps monitoring the servers, and if one of them goes away, it removes it from the cluster pool.



The problem I have run into a number of times is that ldirectord gets stuck and stops monitoring services. When it does this, services stop getting updated, and if a service goes down, it won't be removed from the cluster. On top of that, I have had it get stuck on start-up after a failover; in that case, not many of the services had had a chance to come up yet, and you are stuck with a cluster, often with 0 nodes available to service some requests - which causes downtime.



So last night, Shane Short and I wrote a patch to ldirectord and a Nagios plugin, in order to make sure that ldirectord is doing its job and hasn't got stuck. It works by hooking into ldirectord's '_check_real' function, which is called whenever a server is checked. It will then adjust the timestamp on a 'pulse' file, which we later check with our Nagios script.



Here is the patch to ldirectord: http://lathiat.net/files/ldirectord.patch



So now, in the default configuration we will end up with a file at /var/run/ldirector.ldirectord.pulse (this file is actually in the same location as your ldirectord PID file, so if you have moved that, it will be with it) which has its timestamp updated with each service check.



Now, we make a plugin for nagios, which I have here:



http://lathiat.net/files/check_ldirectord_pulse



And you can configure this plugin into Nagios or Nagios NRPE as normal. You can test that it is working by running the plugin manually; you should get a result like this:





OK: ldirectord pulse is regular (4 seconds since last beat)





The default timeout is 120 seconds (2 minutes) for a warning, and 300 seconds (5 minutes) for a critical alert.
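The core idea of the plugin is just "how stale is the pulse file?". A minimal Python sketch of that logic (my illustration, not the actual check_ldirectord_pulse script linked above):

```python
import os
import time

def check_pulse(path, warn=120, crit=300):
    """Nagios-style check: return 0/1/2 based on the age of a heartbeat file."""
    try:
        age = int(time.time() - os.stat(path).st_mtime)
    except OSError:
        print("CRITICAL: pulse file %s is missing" % path)
        return 2
    if age >= crit:
        print("CRITICAL: %d seconds since last beat" % age)
        return 2
    if age >= warn:
        print("WARNING: %d seconds since last beat" % age)
        return 1
    print("OK: ldirectord pulse is regular (%d seconds since last beat)" % age)
    return 0

# usage: check_pulse("/var/run/ldirector.ldirectord.pulse")
```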



Hope this helps some people using ldirectord! Personally, this has caused downtime for us a few times when ldirectord got stuck. I would love to hear any feedback, or whether you are successfully (or unsuccessfully) using this; you can contact me here:

http://lathiat.net/contact

At Linux.conf.au 2009 and starting on Planet MySQL

Greetings Planet MySQL readers,



I am new to Planet MySQL, so I thought I should introduce myself.



I am Trent Lloyd (some may know me online as 'lathiat'), based in Perth, Western Australia and presently working for Sun Microsystems as a MySQL Support Engineer, providing support to Sun's MySQL customers. I have been with Sun for 12 months, and prior to that worked for MySQL AB for 8 months before it was acquired. My background before that is in the ISP industry, working for HostAway in a combined system/network administration and support role.



I have also given a number of papers, often related to either Avahi or IPv6, at a few conferences, mainly Linux.conf.au - you can view them on my web-site, www.lathiat.net.



I have an open-source/free-software community background: I co-authored Avahi (an mDNS/DNS-SD/link-local IP stack for Linux and other *nixes) with Lennart Poettering, and dabbled my hand in the likes of the Ubuntu MOTU and GNOME communities.



Working in the MySQL Support Team, with a very large customer base to source problems from, we run into many problems and gotchas on our customer systems which people don't hit every day, but which are very difficult to track down when you do. I hope that by sharing some of these experiences I can save you some of that pain.





With that introduction out of the way: I have just landed in Hobart, Tasmania, Australia to attend linux.conf.au (which was a very last-minute arrangement). I have attended this conference every year since 2003 (that's 7 of them!); it's always great and I am glad I am able to attend again.



I am sitting at the "Open Source Databases" mini-conference, which has a strong MySQL presence. There are quite a few Sun folk here, which is fantastic - unfortunately a few others were unable to make it, but it should be great nevertheless.



I will be bringing you more Linux.conf.au updates throughout the week and MySQL tips into the future!

Linkin Park - No More Sorrow

I just noticed that as of a few days ago (22nd August), Avahi is now 3 years old.



Here's my original release announcement:

http://lists.freedesktop.org/archives/avahi/2005-August/000227.html



That's a crazy amount of time. Since then I have presented at Linux.conf.au 2006 and GUADEC 2007 (Birmingham), as well as giving some presentations at GNOME mini-confs and the like.



Fun and games... also for anyone following this blog, I regularly twitter as lathiat

http://twitter.com/lathiat



Feel free to follow me, but it tends to be more personal stuff rather than Tech/Open-Source related

Matchbox 20 - Push

Dear World,



Please remind me why I bought a mac?



Love,

Trent



[i was typing away the other day and *blip* my backlight stopped working, permanently, seriously, wtf.]

Traveling to sydney...

Howdy all,



I will be traveling to Sydney on Sunday the 20th and Monday the 21st - if you're in Sydney and want to catch up for a beer or something, let me know.



Only timeslot will be Sunday (20th) afternoon as I fly out Monday night (21st)



- lathiat

Pete Murray - Opportunity

As most of you reading this are likely aware, I work as a Support Engineer for MySQL AB - which was recently acquired by Sun Microsystems.



If you have been living under some variety of rock-like object, you can see the press release here:

http://www.mysql.com/news-and-events/sun/







As part of that, as of this month I now work for Sun Microsystems Australia.





Trent Lloyd

MySQL Support Engineer

Sun Microsystems





However, I am really doing the same job I was before... so far the acquisition seems to have gone fairly positively. I was a little concerned about some of the employment/IP restrictions that our new-fangled contracts bring, but in the end I was able to sort out enough to satisfy me.



I am still working from Home, and there are no plans (at least for me) to move around within Sun - I plan to continue my role as MySQL Support Engineer.



I really hope this acquisition is beneficial for MySQL in the long run.. there are potential ups and downs but so far for me the experience has been very positive and the Sun team have been very welcoming.



The MySQL Users Conference is upon us this week, which means there will likely be some interesting announcements from various MySQL-related companies, as tends to be the case around major conferences (there is already some talk of a new product from "Kickfire" that sounds interesting, albeit sketchy at this stage).



In more personal news, they sold the house I am renting so I have to move. For more amusement, Aleesha is overseas until after I have to move, so I have to move before she gets back. Oh well. I applied for a house today and the property manager seemed very enthusiastic, so I guess that's a good sign.



It's a very nice 3x2 townhouse:

http://gallery.mac.com/lathiat#100525&bgcolor=black&view=grid



It's also only a couple of hundred metres from the local telephone exchange, so ADSL2+ speeds should be fairly close to the full 24/1 Mbit, which will be nice :)



To follow me closer, check out my twitter: twitter.com/lathiat

Britney Spears - Piece of me

Howdy all,



I know there are some followers of my blog.. and I know I don't update it very much (sorry!)



However I update my twitter quite regularly, so check out http://twitter.com/lathiat (you can subscribe as an RSS if you want)



Trent

Matchbox 20: The Burn

Back home in Perth from Linux.conf.au... a couple of surprises when I got in.



First up, I made Qantas Silver Frequent flyer. This is mainly a yay for the priority check-in - and I get 1 free Qantas club invitation.



Had I thought about it when I started travelling, and taken Qantas the whole time, I probably would have been Gold by now, but I took 2 big European trips on Emirates... and a whole stack of Virgin Blue domestic. Oh well.



Secondly, I left my air conditioner on... It's a crappy old wall unit, and it has apparently continually frozen up and thawed over the week, and there's a nice trail of water down the wall onto my mattress, which is very damp and has several water stain marks. It's obviously been through the cycle a few times. It also managed to eat a bit into the plasterboard under it. -woops-. When I got in, it was frozen solid out to the filter and nearly out to the plastic grating.



Note to self: check this in future.



And last but not least, it's too damn hot in Perth.

Poker

While I'm in Melbourne for linux.conf.au (which finished yesterday), I thought I'd head down to the Crown Poker Room to get some action.



Unfortunately I missed the Aussie Millions by a couple of weeks, but I did run into one of the final tablists while there. He made a lot of money.



I set myself a limit of $300, so first I tackled the Sunday $100 ($85+$15) NLHE tournament. I didn't fare so well: I got up in chips early (doubled up), then had a really unfortunate hand.



I was dealt [Kd] exposed, the next card was [Ac], and my redealt card was [6s]. I elected to fold, and out flopped [Kh][Kc][3d] - I was not happy! I definitely would have been in with [Kd][Ac], and my trips would more than likely have got me a healthy stack. D'oh.



I ended up going out just before level 4 with my [Kh][As] vs [Jh][10s]. The flop was a blank and a [10c] came on the river, sealing my fate.



From there I moved onto the $2/$3 NLHE cash game. I bought in for $100 and about an hour later was up to $254, cashing out when my seat in a $50 Sit'n'Go came up.



I didn't do too well in the Sit'n'Go either: I busted out in 6th place, calling all-in with my [As][10s] against my opponent's [Ah][Ac]. The aces held up.



Then I nearly left, but decided to head back to the $2/$3 NLHE cash game for another $100. Only about 30 minutes in I was up to $306. I felt guilty leaving the table a couple of hands after winning an $80 or so pot, but I knew it was starting to go to my head and that I'd loosen up and likely lose it all, and probably more, back. So I left, $210 up for the day.



Not a bad effort! Tournaments bad, cash games good. Probably a bit lucky, but the tables were quite soft, very often folding to re-raises or calling down with bugger all.



Woo!

MacBook even more broken than I thought... Asus EeePC is good...

So, having taken my MacBook to the Apple store on Tuesday, they replaced the logic board and subsequently found out that not only was the logic board toast, so was the hard drive.



How the hell did I manage that?



So anyway, it being 10AM Tuesday morning with a talk to give at 1:30PM, I cruised down to Myer (by way of a $6 taxi; I didn't realise how close it was) and bought an Asus EeePC.



I then rewrote the talk in <2 hours and did it. It was a bit of a rush job but it worked OK.



I'm fairly impressed by the EeePC: it's got 4GB flash, 512MB RAM and a webcam, and comes with a pre-installed, customised Xandros-based OS. It has all the good utilities (Pidgin, Firefox, Skype, OpenOffice, etc.), although I hope to install Ubuntu on it later.



It did manage to compile a kernel in under an hour earlier, so it's not *that* slow, although the kernel config was a little minimalistic, with just the features needed for it.

MacBook broken

I've really hit a wall with my MacBook.



It has stopped booting after a kernel panic, which was caused by unplugging a Sony Ericsson phone that was just charging.



I can't even boot a CD or get into FireWire target disk mode.



This is the 3rd repair trip now for something going wrong with it, dammit. I think life is telling me to go back to a PC and Ubuntu!



I guess it's kind of poetic that my MacBook running Mac OS X broke at Linux.conf.au.



Posted from my iPod touch.

Feel Like Home (feat. Styles Of Beyond) - Fort Minor

Well, I made it through my first day in Melbourne for linux.conf.au 2008



Due to horrible travel planning, I arrived in Melbourne at 6AM but was unable to check in to Trinity College until 1PM. Thus I had to waste a couple of hours, then met some of the organisers and helped shuffle boxes of conference bags out of one room and into cars, to then be unpacked down at the venue.



Following that I spent some time folding up t-shirts and finally I was able to register and check in at the college.



Having taken the midnight flight over, I hadn't slept and so I carked it from 4PM to 8PM. This is going to really throw my sleeping out! But oh well..



Following on from my lazing, I located the Trinity common area, with semi-working wireless (some rogue client is handing out useless DHCP leases, so you have to refresh 30 times).



So far I have run into lots of people I know from work and past LCAs. I'll certainly miss some, but they include: Stuart Young, Stuart Smith, Colin Charles, Giuseppe Maxia, James Iseppi, Arjen Lentz, Joel Stanley, Leon Brooks, Donna Benjamin, Daniel Stone, Grant Diffey, Michael Kedzierski (sp?), Ryan Verner, Tim Ansell... there are a few people who escape me at the moment!



As wireless is only available in the common area, my 2GB of 3G data with Three is certainly proving helpful :)



Until tomorrow... (or well, today)

Veronicas - Hook Me Up

Heading off shortly to Linux.conf.au 2008 in Melbourne, Victoria.



I've got the red-eye, so I'll be landing at 6AM Melbourne time. For any other LCAers, I'll be staying at Trinity College.



I'm giving a talk at the MySQL Mini-conf at 1:30PM on Tuesday. It will be about my experiences as a MySQL Support Engineer, the common sorts of problems that people run into, and how to avoid them!



Should be a good week for the conference, and maybe I can lose^H^H^H^Hwin a few dollars at the Crown with my handy dandy poker skills :)



Anyone up for a game? Unfortunately I've misplaced my small, portable and travel-friendly chip set - I really must find that.

Anika-Robert Picardo-Extreme Bob-More Parodies, Travesties & Anomalies

Thanks to work, I spent this week in Cupertino, CA. Having never been anywhere in the USA before, it was somewhat exciting to be staying literally up the road from the Apple Campus.





Me @ 1 Infinite Loop



I was also able to go to Google in Mountain View to attend the Silicon Valley MySQL Users Group at the Visitors Center. Unfortunately I didn't come across any large Google logos to take my photo with, but I did find building number 42.







I also spotted the offices of Symantec, Trend Micro, Packeteer, Solid, Microsoft, Borland and MySQL (surprise!) along the way. Certainly "exciting" for someone that's never been to the Bay Area.



I had hoped to make it up to San Francisco and do a little sightseeing of the Golden Gate Bridge and some other stuff, but was too tired to do it this afternoon, and I am attending BarCampBlock today in Palo Alto from early until I fly out tonight - so unfortunately I am going to miss out this trip. Hopefully work will send me back this way again sometime next year.

GUADEC 2007

So, thanks to GUADEC 2007 kindly accepting my talk and paying for my travel expenses, I had a good time.



I think my Avahi talk went well... we had a good 15+ minute discussion about future possibilities, which is what I was hoping for.

Lennart's PulseAudio talk was also very good; it has advanced quite a bit since his talk at LCA, so it's good to see progress.



I got to meet Sjoerd, and generally the entire Collabora team which was pretty cool - I've fixed some bugs in Avahi so they can progress with the Salut support for OLPC which is good.



I have uploaded my photo collection to flickr, so you can check them out here.



Here are some of my favourites...



I managed to crash the in-flight entertainment system, apparently it uses svgalib..





I love these power transformers with pins pointing out all sides; as much as I'm sure they are safe, it just irks me slightly ;)





Primitive mapping technology...





Walkabout - the Australian Bar - Dude in a kangaroo suit





Me with an iFailPhone





Dress code... no football tops... except England.





www.wtfisthisgame.com





All in all, it was a good trip :)

Weird Al Yankovic - Pancreas

Just over 18 months ago I started working at HostAway, initially doing some casual 2-week phone support they needed at the time. My role became permanent and I quickly expanded into both network and systems administration, while still doing a lot of day-to-day customer support (small organisations tend to demand wider skill ranges :)



Nonetheless I felt, for various reasons, it was time to move on. As such, I have just completed my first week working at MySQL AB as a Support Engineer.



The job is home based, and I am currently working the hours of 5AM-1PM local time. Now you may think "wow, you're crazy" for working those hours, but so far I am actually quite enjoying it. I feel I am getting a lot more out of my day at the moment because I seem to be awake longer, getting up earlier in the day (usually about 4:30AM).



I do have the option to start at 7AM on most days, and given I have no travel time this doesn't put me very far behind a "normal" job as far as messing up the daily schedule goes.



Being a free software 'person', working for a company with heavy involvement in the free software world is pretty exciting for me.  I guess time will tell how it goes, but if this week is anything to go by I think I should be happy in the long term.



In other news, I have discovered that listening to Weird Al Yankovic continually is horrible for your sanity, mostly because he picks a lot of very catchy songs to parody, which proceed to persistently circle around inside my head for the next week...

The bird that really likes my carport..

So yesterday I came home to this bird, in my carport







From some googling I *think* it is a pigeon, maybe a dove, but I didn't check very hard, so I'm far from even moderately sure :) The tag on its foot says "AUST 2006" and some other stuff I couldn't make out.



It was there all yesterday afternoon, I managed to get quite close to take the above photo, but it did flinch when I tried to get any closer.



This morning it was still there, and this afternoon when I got home it was still there, although now it's finding its way around the floor.



I do wonder why it likes my carport so much, It doesn't *seem* hurt, and it managed to get up here





So I assume that it can fly somewhat, I guess I'll wait and see if it's there tomorrow... If anyone knows anything feel free to leave a comment :)



I turned on the sprinklers and it was right at the very edge of the big double door out to the world, drinking the water splashing in, so it's not stuck or anything.



Weird!

Green Day - Boulevard of Broken Dreams

[for the freedom lovers out there, this is totally and completely unrelated to anything freedom :)]



How the hell did I just lose the following poker hand?



I pull pocket sixes and then flop Ace-6-Ace, i.e. a full house, sixes full of aces. After some serious betting and deceiving I go all in.



HAND



(my hand, circled in black: pocket sixes making sixes full of aces; my opponent's hand, circled in red: pocket aces making four of a kind)

He beat me with 4 of a kind aces.



Opponent Before:





Opponent After:





Note the completely evil look on his face! A conspiracy I tell you...



This after a long string of really stupid pair and occasionally two-pair wins, sheesh!

GNOME in Jericho

Watching Jericho S01E14, noticed this...







Looks suspiciously like GNOME 1.4 to me :)

Avahi Scalability: "Is it good or is it bad?"

Lennart rightfully pointed out that I didn't really draw any conclusion from the results of my little test. The reason for this is, really, "I'm not sure".



Certainly, it seems to be OK: the number of transmitted packets, by my rough calculation, makes sense. I would be interested to see what the realistic practical throughput of multicast on wireless is when you have many hosts transmitting at once. I know that in 802.11b multicast is transmitted at the "basic rate" of 1 or 2 Mbit (or so I believe); I'm not sure if 802.11g changes this.



My quick gut feeling is "I think this would work" (on wireless); I have no doubt this is fine on a wired network.



More testing to be done...

Some random non-scientific Avahi "scaling" figures

Talking to sjoerd and others on IRC (for the benefit of the OLPC project), I decided to attempt to get some kind of idea of the amount of traffic Avahi generates on a large network.



I booted up 80 UMLs running 2.6.20.2 on my AMD Athlon64 X2 4200+ (overclocked to 2.5GHz per core) with 2GB of RAM.



Each was running with 16MB RAM and a base Debian etch install with Avahi 0.6.16.



Interestingly with 80 VMs running my memory usage looked like this:

Mem: 2076124k total, 2012064k used, 64060k free, 18436k buffers

Swap: 996020k total, 8k used, 996012k free, 1476504k cached






I configured a 'UML switch' with a tap device on the host attached (tun1), and told each VM to come up and use avahi-autoipd to obtain a link-local IP.
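For those unfamiliar with avahi-autoipd: RFC 3927 link-local addressing boils down to picking a random candidate in 169.254.0.0/16 and ARP-probing for it. A rough Python sketch of just the address-selection step (a hypothetical helper, not Avahi's actual code):

```python
import ipaddress
import random

def random_link_local(seed=None):
    """Pick a candidate IPv4 link-local address roughly as RFC 3927 describes:
    a random host in 169.254.0.0/16, avoiding the reserved first
    (169.254.0.x) and last (169.254.255.x) blocks. The real daemon then
    ARP-probes the candidate and retries on conflict."""
    rng = random.Random(seed)
    offset = rng.randrange(256, 255 * 256)  # skip the two reserved /24s
    return ipaddress.IPv4Address(int(ipaddress.IPv4Address("169.254.0.0")) + offset)

addr = random_link_local(seed=1)
```

With 80 VMs each probing for its own candidate, you get the burst of ARP traffic seen in the counts below.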



I had each VM set to advertise 3 services, via the static service advertisement files



  • _olpc_presence._tcp
  • _activity._tcp (subtype _RSSActivity._sub._activity._tcp)
  • _activity._tcp (subtype _WebActivity._sub._activity._tcp)


plus it was configured with Avahi defaults, so it would announce a workstation service (the default 'ssh' service was, however, NOT present) and the magic services that indicate what kinds of services are being announced.
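For reference, each of those static advertisements lives in an XML file under /etc/avahi/services/. A minimal example for the first service type might look like this (the port number here is a guess for illustration, not from the original setup):

```xml
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <!-- %h expands to the hostname of the advertising machine -->
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_olpc_presence._tcp</type>
    <port>6972</port>
  </service>
</service-group>
```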



So I started Wireshark and IPTraf and started booting the 80 VMs at a pace of 1 every 10 seconds. After roughly 10-15 minutes, the following numbers of packets had been seen on the host tun1 interface:



704 UDP (56.3%)

390 ARP (21.2%)

156 OTHER (12.5%)




The ARPs are from avahi-autoipd, and the UDP packets are avahi-daemon speaking mDNS. iptraf reported:



Incoming Bytes: 417,391



I then gave my local machine an IP, which bumped the packet counts to 712, 395 and 157.



I then started 'avahi-browse _activity._tcp', which would result in 2 services from each machine being returned. After things settled down, the packet count was at:



935 UDP

Incoming Bytes: 496,901

Outgoing Bytes: 28,787 (30 packets according to iptraf)




Now this *really* gave my machine a heart attack: many 'linux' (UML) processes were eating as much CPU as they could get, and it took a good 10+ seconds for my machine to start responding again. I suspect if I was running the SKAS3 patch it might be a little less harsh.



After cancelling that, I ran 'avahi-browse -r _activity._tcp', which causes Avahi to resolve each of the services. Following that run:



UDP 1287

Incoming: 570,000 bytes (1,384 packets)

Outgoing: 185,000 bytes (227 packets)




In this case most of the services were cached and I just had to resolve each one.



I forgot to watch the traffic counts, so I re-ran the above test and iptraf claimed 165kbit/second at peak for one 5-second interval. In this time I noticed a bunch of the service resolution queries timed out; I suspect this may have to do with it causing my machine to lock hard for a bit while it does its magic... ;)



So that's the end of my very simple, basic run of doing some real (rather than theoretical) tests of the number of packets seen flying around a network of 80 hosts running Avahi with a few services, and the impact of people running a browse/resolve on a popular service type.
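For the curious, each browse above boils down to a single multicast DNS PTR query. A rough sketch of what that query looks like on the wire (plain Python following the DNS wire format; an illustration, not how Avahi itself is implemented):

```python
import socket
import struct

def mdns_ptr_query(name):
    """Build a minimal DNS query packet asking for PTR records of `name`.
    This mirrors what a service browse sends to the mDNS multicast group."""
    header = struct.pack(">6H", 0, 0, 1, 0, 0, 0)  # id=0, flags=0, 1 question
    qname = b"".join(struct.pack("B", len(p)) + p.encode()
                     for p in name.split("."))
    question = qname + b"\x00" + struct.pack(">2H", 12, 1)  # QTYPE=PTR, QCLASS=IN
    return header + question

packet = mdns_ptr_query("_activity._tcp.local")

# To actually put it on the wire (needs a multicast-capable network):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("224.0.0.251", 5353))
```

The interesting part is that the query itself is tiny; the traffic above is dominated by the 80 hosts all answering (and caching each other's answers).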



I'm going to try to commandeer some more hardware to run some faster tests and collect some more useful data.


So I was fiddling around with my new phone (Sony Ericsson K800i) and noticed I could play videos as a ringtone; I was interested in how that worked...



So I downloaded the music video of "The Veronicas - When It All Falls Apart" from the Three Music store, at a cost of $3.00 (which, BTW, came down at 30K/s - not bad for mobile data...).



Once my phone had downloaded it, I had two options: "View" (which works fine) and "Ringtone". Having selected the latter, the phone stated "This video is restricted against that kind of use".



Sigh.



I wonder what Three would say if I asked for a refund ;)

DOA - Dead or Alive (The Movie): Featuring: Partial linux source

I was watching the movie "Dead or Alive" this afternoon, and was curious to see that the source code scrolling past was from the Linux kernel.







Interestingly, they have blotted out bits and pieces, notably the copyright declaration. They also appear to lack the ability to render tabs.



You can compare it to arch/alpha/kernel/err_impl.h (taken from Ubuntu's linux-source-2.6.19); I'll include the excerpt pictured above here:





/*
 * linux/arch/alpha/kernel/err_impl.h
 *
 * Copyright (C) 2000 Jeff Wiedemeier (Compaq Computer Corporation)
 *
 * Contains declarations and macros to support Alpha error handling
 * implementations.
 */

union el_timestamp;
struct el_subpacket;
struct ev7_lf_subpackets;

struct el_subpacket_annotation {
        struct el_subpacket_annotation *next;
        u16 class;
        u16 type;
        u16 revision;
        char *description;
        char **annotation;
};





I'm not sure how legal this is, but it's interesting nonetheless...

Twitter posts: 2014-08-18 to 2014-08-24

Creating certs and keys for services using FreeIPA (Dogtag)

The default installation of FreeIPA includes the Dogtag certificate management system, a Certificate Authority for your network. It manages expiration of certificates and can automatically renew them. Any client machines on your network will trust the services you provide (though you may need to import the IPA CA cert first).

There are a number of ways to make certificates. You can generate a certificate signing request yourself, or you can have Dogtag manage the whole process for you. You can also create individual cert and key files, or put them into an NSS database. My preferred method is to use individual files and have Dogtag do the work for me.

If you so desire, you can join your servers to the realm in just the same manner as a desktop client. However, even if they are not joined to the realm you can still create certs for them! You will need to run a few additional steps though, namely creating DNS records and adding the machine manually.

Let’s create a certificate for a web server on www.test.lan (192.168.0.100) which has not joined our realm.

SSH onto your IPA server and get a kerberos ticket.

[user@machine ~]$ ssh root@ipa-server.test.lan

[root@ipa-server ~]# kinit admin

If the host is not already in the realm, create DNS entries and add the host.

[root@ipa-server ~]# ipa dnsrecord-add test.lan www --a-rec 192.168.0.100

[root@ipa-server ~]# ipa dnsrecord-add 0.168.192.in-addr.arpa. 100 --ptr-rec www.test.lan.

[root@ipa-server ~]# ipa host-add www.test.lan

Add a web service for the www machine.

[root@ipa-server ~]# ipa service-add HTTP/www.test.lan

By default only the target machine can create a certificate for itself (IPA uses the host kerberos ticket), so to be able to create the certificate on your IPA server you need to allow it to manage the web service for the www host.

[root@ipa-server ~]# ipa service-add-host --hosts=ipa-server.test.lan HTTP/www.test.lan

Now create the cert and key.

[root@ipa-server ~]# ipa-getcert request -r \
    -f /etc/pki/tls/certs/www.test.lan.crt \
    -k /etc/pki/tls/private/www.test.lan.key \
    -N CN=www.test.lan -D www.test.lan -K HTTP/www.test.lan

Now copy that key and certificate to your web server host and configure apache as required.

[root@ipa-server ~]# rsync -P /etc/pki/tls/certs/www.test.lan.crt /etc/pki/tls/private/www.test.lan.key root@www.test.lan:
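On the web server, a minimal Apache TLS virtual host using those files might look like this (a sketch only, not from the original post; the rsync above lands the files in root's home directory, so move them into /etc/pki/tls first and make sure mod_ssl is enabled):

```apache
<VirtualHost *:443>
    ServerName www.test.lan
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/www.test.lan.crt
    SSLCertificateKeyFile /etc/pki/tls/private/www.test.lan.key
    DocumentRoot /var/www/html
</VirtualHost>
```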

You can also easily delete keys so that they aren’t tracked and renewed any more. First, get the request id:

[root@ipa-server ~]# ipa-getcert list

Take note of the id for the certificate you want to delete.

[root@ipa-server ~]# getcert stop-tracking -i [request id]

A CRL (certificate revocation list) is automatically maintained and published on the IPA server at ​https://ipa-server.test.lan/ipa/crl/MasterCRL.bin

August 23, 2014

Raspberry Pi Virtual Machine Automation

Several months ago now I was doing some development for Raspberry Pi. I guess that shows how busy with life things have been. (I have a backlog of lots of things I would like to blog about that didn’t get blogged yet, pardon my grammar!) Now the Pi runs on an SD card and it […]

Papers Committee weekend - who will be presenting at LCAuckland

This weekend is the Papers Committee weekend, and Steven (Ellis) is now on his way over to Sydney to join our revered Papers Committee for a fun-packed weekend deciding which of the many submitted presentations to choose for our conference next year.

It’s a very important job, crucial, even! I don't envy them, trying to foresee what is going to be at the top of everyone’s must-see list, predicting what will be trending in 6 months’ time, and what will have died a sad, lonely death or sputtered out after a brief burst of glory in the meantime.

Then there’s the programme... Who fits together? Who shouldn’t be opposite whom? And on it goes. It will be hard work! After speaking with the Chairs of the committee (Michael Davies and Michael Still) we've learned that this is traditionally a passionately fought process with each and every person focussed intently on ensuring that our delegates have access to the best presentations currently and soon-to-be available.

“The Michaels” know the conference and its audience and the rest of the committee is made up of past organisers, some FOSS celebrities and past presenters - most of whom have done this job many times now. Steve has been sent with some strict instructions about the presentations our team wants to see and the format of the conference itself that has some new, exciting ideas.

To those in the Papers Committee gathering together this weekend to make these important decisions - we wish you all a safe journey there and back again, and we say Stand Your Ground!

To those of you who have submitted a presentation we say "Good luck - you are all wonderful in our eyes!"

All the best

The LCA 2015 team

Do Anti-Depressants work?

In the middle of 2013 I had a nasty bout of depression and was prescribed anti-depressant drugs. Although undiagnosed, I think I may have suffered low level depression for a few years, but had avoided anti-depressants and indeed other treatment for a couple of reasons:

  • I am a man, and men are bad at looking after their own health.
  • The stigma around mental health. It’s tough to face it and do something about it. Consider how you react to these two statements “I broke my leg and took 6 months to recover”, and ” I broke my mind and took 6 months to recover”.
  • The opinion of people influential in my life at that time. My GP friend Michael presented a statistic that anti-depressants were only marginally better than placebos (75% versus 70%) in treating depression. I was also in a close relationship with a person who held an “all drugs are bad”, anti-western-medicine mentality. At the time I lacked the confidence to make health choices that were right for me.

Combined, these factors cost me 18 months of rocky mental health.

When my health collapsed, the mental health care professionals recommended the combination of anti-depressants and counselling with a psychologist or psychiatrist. The good news is that this treatment, combined with a lot of hard work, and putting positive, supportive relationships around me, is working. I came off the bottom quite quickly (a few months), and have continued to improve. I am currently weaning myself off the anti-depressants, and life is good, and getting better, as I “re-wire” my thought process.

That’s the difficult, personal bit out of the way. Lets talk about anti-depressants and science.

Did Anti-deps help me?

Due to Michael’s statistic above (anti-deps only 5% better than placebo) I was left with lingering doubts about anti-depressants. Could I be fooling myself, using something that didn’t work? This was too much for the scientist in me, so I felt compelled to check the evidence myself!

Now, the fact that I “got better” is not good enough. I may have improved from the counselling alone. Or through the “natural history” of disease, just like we automatically heal in 1-2 weeks from a common cold.

The health care professionals I worked with are confident anti-depressants function as advertised, based on their training and years of experience. This has some weight, but the causes and effects in mental health are complex. Professionals can hold mistaken beliefs. Indeed, a wise professional will adapt as medical science advances and old therapies are replaced by new ones. They are not immune to unconscious bias. So the views of professionals, even based on years of experience, are not proof.

Trust Me. I’m a Doctor

I am a “Dr”, but not a medical one. I have a PhD in Electronic Engineering. I don’t know much about medicine, but I do know something about research. In a PhD you create a tiny piece of new knowledge, something human kind didn’t know before. It’s hard, and takes years, and even then the “contribution” you make is usually minor and left to gather dust on a shelf in a university library.

But you do learn how to find out what is real and what is not. How to separate facts from bullshit. You learn about scientific rigour. You do that by performing “research and disappointment” for four years, finding out just how wrong you can be so many times before finally you get to the core of something real. You learn that what you want to believe, your opinion, means nothing when it gets tested against the laws of nature.

So with the help of Michael and a great (and very funny) book on how medical trials work called Snake Oil Science, I did a little research of my own.

Drilling into a few studies

What I was looking for were “quality” studies, which have been carefully designed to sort out what’s true from what’s not. So my approach was to look into a few studies that supported the negative hypothesis. Get beyond the headlines.

One high quality study with the widely presented conclusion “anti-deps useless for mild and moderate depression” was (JAMA 2010). This paper and its conclusion have been debunked here. Briefly, they used the results from 3 studies of just one SSRI (Paxil) and used that under-representation to draw impossibly broad conclusions.

Ben Goldacre is campaigning against publication bias. This is the tendency for journals only to publish positive results. This is a real problem and I support Ben’s work. Unfortunately, it also feeds alt-med conspiracy theories about big pharma.

Ben has a great TED Talk on the problem of publication bias in drug trials. To lend credibility he cites a journal paper (NEJM 358 Turner). Ben presents numbers from this paper that suggest anti-depressants don’t work, due to selective publishing of only positive trials.

Here are a couple of frames from Ben’s TED talk (at the 7:30 mark). Big pharma supplied the FDA with these results to get their nasty western meds approved:

However here are the real results with all trials included:

Looks like a damning case against anti-deps and big pharma. Nope. I took the simple step of reading the paper, rather than accepting the argument from authority that comes from a physician quoting a journal paper in a TED talk. Here is a direct quote from the paper Ben cited:

“We wish to clarify that non-significance in a single trial does not necessarily indicate lack of efficacy. Each drug, when subjected to meta-analysis, was shown to be superior to placebo. On the other hand, the true magnitude of each drug’s superiority to placebo was less than a diligent literature review would indicate.”

Just to summarise: Every drug. Superior to a placebo. This means they work.

The paper continues: averaging all the data, the overall mean effect size across all studies (published and not, all drugs) was 32% over placebo. That’s actually quite positive.

So while Ben’s argument of publication bias is valid, his dramatic implication that anti-deps don’t work is wrong, at least from this study.

Yes publication bias is a big problem and needs to be addressed. However science is at work, self correcting, and it’s good to see guys like Ben working on it. It’s a classic trick used by alt-med as well: just quote good results, and ignore the results that show the alt-med therapies to be ineffective. This is Bad Science.

However this doesn’t discredit science, and shouldn’t make us abandon high quality trials and fall back on even poorer science like anecdotes and personal experience.

Breathless Headlines

This article from CBC News has no references to clinical studies, some leading questions, and a few personal opinions. So it’s just a hypothesis, but no more than that. A lack of understanding of the chemical functionality of a drug doesn’t invalidate its use. This isn’t the first time an effective drug’s function wasn’t well understood; for example, Paracetamol isn’t completely understood even today.

As usual, a little digging reveals a very different slant that makes the CBC article look misleading. The author of the book is quoted in Wikipedia:

“Whitaker acknowledges that psychiatric medications do sometimes work but believes that they must be used in a ‘selective, cautious manner’. It should be understood that they’re not fixing any chemical imbalances. And honestly, they should be used on a short-term basis.”

I am attracted to the short term approach, and it is the approach suggested by the mental health care professionals that has helped me. Like a bandage or cast, anti-deps can support one while other mental health repairs are going on.

In contrast, the CBC article (first para):

“But people are questioning whether these drugs are the appropriate treatment for depression, and if they could even be causing harm.”

Poor journalism and cherry picking.

My Conclusions

My little investigation is by no means comprehensive. However the high quality journal papers I’ve studied so far support the hypothesis that anti-deps work and debunk the “anti-depressants are not effective compared to placebo” argument to my satisfaction.

I would like to read more studies of the combination of psycho-therapy and SSRIs – if anyone has any references to high quality journal papers on these subjects please let me know. The mental health nurse that treated me last year suggested recovery was about “40% SSRIs + 60% therapy”. I can visualise this treatment as a couple of normal distribution curves overlapping, with the means added together to be your mental health.

Medicine and Engineering

I was initially aghast at some of the crappy science even I can pick up in these “journal” papers. “This would never happen in engineering,” I thought. However, I bet similar tricks are at play: there are pressures to “publish, patent” etc. that would encourage bad science there too. For example, signal processing papers rarely publish their source code, so it’s very hard to reproduce a competing algorithm; all you have is a few of the core equations. If I make a bug while simulating a competitor's algorithm, it gives me the “right” answer: oh look, mine is better!

In my research: some people using Codec 2 say it sounds bad and doesn’t work well for HF comms. Other people say it’s great and much better than the legacy analog technology. Huh? Well, I could average them out in a meta-study and say “it’s about the same as analog”. Or use my internal bias and self-esteem to simply conclude Codec 2 is awesome.

But what I am actually doing is saying “Hmm, that’s interesting - why can two groups of sensible people get opposite results? Let’s look into that”. Turns out different microphones make Codec 2 behave in different ways. This is leading me to investigate the effect of the input speech filtering. So through this apparent conflict we are learning more and improving Codec 2. What an awesome result!

I suspect it’s the same with anti-deps. Other factors are at play and we need better study design. Frustrating – we all want definitive answers. But no one said Science was easy. Just that it’s self correcting.

That’s why IFL Science.

August 22, 2014

Raspberry Pi and 802.11 wireless (WiFi) networks

A note to readers

There are many ways to configure wireless networking on Debian. Far too many. What is described here is the simplest option, using the programs and configuration files which ship in an unaltered Raspbian distribution. This lets people bring up wireless networking to their home access point with a minimum of fuss. More advanced configurations may be more easily done with other tools, such as NetworkManager. Now back to your originally programmed channel…

The Raspberry Pi does not come with wireless onboard. But it's simple enough to buy a small USB wireless dongle. Element14 sells them for A$9.31. It's unlikely you'll see them in shops for such a low price, so it is well worth ordering a WiFi dongle with your RPi.

Raspbian already comes with the necessary software installed. Let's say our home wireless network has an SSID of example and a pre-shared key (aka password) of TGAB…Klsh. Edit /etc/wpa_supplicant/wpa_supplicant.conf. You will see some existing lines:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

Now add some lines describing your wireless network:

network={
  ssid="example"
  psk="TGABpPpabLkgX0aE2XOKIjsXTVSy2yEF0mtUgFjapmMXwNNQ3yYJmtA9pGYKlsh"
  scan_ssid=1
}

The parameter scan_ssid=1 allows the WiFi dongle to connect to a wireless access point which does not broadcast its SSID.

Now plug the dongle in. Check dmesg to confirm that udev loaded the dongle's device driver:

$ dmesg
[    3.873335] usb 1-1.4: new high-speed USB device number 5 using dwc_otg
[    4.005018] usb 1-1.4: New USB device found, idVendor=0bda, idProduct=8176
[    4.030075] usb 1-1.4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[    4.050034] usb 1-1.4: Product: 802.11n WLAN Adapter
[    4.060398] usb 1-1.4: Manufacturer: Realtek
[    4.069904] usb 1-1.4: SerialNumber: 000000000001

[    8.586604] usbcore: registered new interface driver rtl8192cu

A new interface will have appeared:

$ ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr 00:11:22:33:44:55  
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 KiB)  TX bytes:0 (0.0 KiB)

The IPv4 DHCP client should then run and populate the interface with addresses:

$ ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.0.2.1  Bcast:192.0.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:100 errors:0 dropped:0 overruns:0 frame:0
          TX packets:100 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 KiB)  TX bytes:0 (0.0 KiB)

If you use multiple wireless networks, then add additional network={} stanzas to wpa_supplicant.conf. wpa_supplicant will choose the correct stanza based on which SSIDs are visible.
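For example, a wpa_supplicant.conf covering two networks might look like this (the SSIDs and keys here are illustrative placeholders; the priority= parameter tells wpa_supplicant which stanza to prefer when more than one of the networks is in range, larger values winning):

```
network={
  ssid="example-home"
  psk="home-key-goes-here-generate-a-long-random-one"
  scan_ssid=1
  priority=2
}

network={
  ssid="example-office"
  psk="office-key-goes-here-generate-a-long-random-one"
  priority=1
}
```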

IPv6

If you are using IPv6 (enabled by deleting /etc/modprobe.d/ipv6.conf) then IPv6's zeroconf and SLAAC will run, and you will also get an IPv6 link-local address, and perhaps a global address if your network has IPv6 connectivity beyond the subnet.

$ ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.0.2.1  Bcast:192.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::211:22ff:fe33:4455/64 Scope:Link
          inet6 addr: 2001:db8:abcd:1234:211:22ff:fe33:4455/64 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:100 errors:0 dropped:0 overruns:0 frame:0
          TX packets:100 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 KiB)  TX bytes:0 (0.0 KiB)

Commonly occurring issues

If the interface is not populated with addresses then try to restart the interface. You will need to do this if you plugged the dongle in prior to editing wpa_supplicant.conf.

$ sudo ifdown wlan0
$ sudo ifup wlan0

If you still have trouble then look at the messages in /var/log/daemon.log, especially those from wpa_supplicant. Also check dmesg, ensuring that the device driver isn't printing messages indicating misbehaviour.

Also check that the default route points to where you expect; that is, the default route line reads default via gateway dev wlan0.

$ ip route show
default via 192.168.255.254 dev wlan0
192.168.255.0/24 dev wlan0  proto kernel  scope link  src 192.168.255.1

$ ip -6 route show
2001:db8:abcd:1234::/64 dev wlan0  proto kernel  metric 256  expires 10000sec
fe80::/64 dev wlan0  proto kernel  metric 256 
default via fe80::1 dev wlan0  proto ra  metric 1024  expires 1000sec

If you have edited /etc/network/interfaces then you may need to restore these lines to that file:

allow-hotplug wlan0
iface wlan0 inet manual
 wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp

Security

As this example shows, the pre-shared key should be long — up to 63 characters — and very random. The entire strength of WPA2-PSK relies on the length and randomness of the key. If your current key is neither long nor random then you might want to generate a new key and configure it into the access point.
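To put numbers on that: pwgen -s draws each character from 62 alphanumeric symbols, so a full-length 63-character key carries roughly 375 bits of entropy — comfortably more than the 256-bit master key WPA2 derives from the passphrase, so the key itself is no longer the weak point. A quick back-of-the-envelope check (a sketch, not part of the original post):

```python
import math

ALPHABET = 62  # pwgen -s uses A-Z, a-z and 0-9
KEY_LEN = 63   # the maximum WPA2 passphrase length

# Entropy in bits is the key length times log2 of the alphabet size.
bits = KEY_LEN * math.log2(ALPHABET)
print(round(bits))  # -> 375
```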

An easy way to generate a key is:

$ sudo apt-get install pwgen
$ pwgen -s 63 1
TGABpPpabLkgX0aE2XOKIjsXTVSy2yEF0mtUgFjapmMXwNNQ3yYJmtA9pGYKlsh

This works even better if you use the Raspberry Pi's hardware random number generator.
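For example, a key can be distilled straight from a random device with tr and head. This is a sketch: the /dev/hwrng path assumes the Pi's hardware RNG module is loaded, so the script falls back to the kernel's /dev/urandom when it isn't.

```shell
#!/bin/sh
# Generate a 63-character alphanumeric WPA2 pre-shared key.
# Prefer the hardware RNG when readable, otherwise use /dev/urandom.
if [ -r /dev/hwrng ]; then
    src=/dev/hwrng
else
    src=/dev/urandom
fi
# Keep only alphanumeric bytes and stop after 63 of them.
tr -dc 'A-Za-z0-9' < "$src" | head -c 63
echo
```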

There is only one secure wireless protocol which you can use at home: Wi-Fi Protected Access version two with a pre-shared key, known as “WPA2-PSK” or “WPA2 Personal”. The only secure encryption is CCMP -- this uses the Advanced Encryption Standard and is sometimes named “AES” in access point configurations. The only secure authentication algorithm for use with WPA2-PSK is OPEN: this doesn't mean “open access point for use by all, so no authentication” but the reverse: “Open Systems Authentication”.

You can configure wpa_supplicant.conf to insist on these secure options as the only technology it will use with your home network.

network={
  ssid="example"
  psk="TGABpPpabLkgX0aE2XOKIjsXTVSy2yEF0mtUgFjapmMXwNNQ3yYJmtA9pGYKlsh"
  scan_ssid=1

  # Prevent backsliding into insecure protocols
  key_mgmt=WPA-PSK
  auth_alg=OPEN
  proto=WPA2
  group=CCMP
  pairwise=CCMP
}

[life] Day 205: Rainy day play, a Brazilian Jiu-Jitsu refresher

I had grand plans of doing a 10 km run in the Minnippi Parklands, pushing Zoe in the stroller, followed by some bike riding practice for Zoe and a picnic lunch. Instead, it rained. We had a really nice day, nevertheless.

Zoe slept well again, and I woke up pretty early and was already well and truly awake when she got out of bed, so as a result we were ready to hit the road reasonably early. Since it was raining, I thought a visit to Lollipops Play Cafe would be a fun treat.

We got there about 10 minutes before the play cafe opened, so after some puddle stomping, we popped into Bunnings to get a few things, and then went to Lollipops.

Unfortunately Jason was tied up, so Megan couldn't join us. I did run into Mel, a mother from Kindergarten, who was there with her son, Matthew, and daughter. So instead of practising my knots or doing my real estate license assessment, I ended up having a chat with her, which was nice. She mentioned that she had some stuff to try and do in the afternoon, so I asked if Matthew wanted to come over for a play date for a couple of hours. He was keen for that.

So we went home, and I made some lunch for us, and then Mel dropped Matthew off at around 1pm, and they had a great time playing. I think first up they played a game of hide and seek, and then my practice rope got used for quite a bit of tug-o-war, and then we did some craft. After that I busted out the kinetic sand, and that kept them occupied for ages. They also just had a bit of a play with all the boxes on the balcony. It was a really nice play session. I like it when boys come over for a play date, as the dynamic is totally different, and Zoe and Matthew played really well together.

I dropped Matthew back home on the way to Zoe's Brazilian Jiu Jitsu class. Infinity Martial Arts was running a "please come back" promotion, where you could have two free lessons and a new uniform, so I figured, why not? I'd like to give Zoe the choice of Brazilian Jiu Jitsu again or gymnastics for Term 4, and this seemed like a good way of refreshing her memory as to what Brazilian Jiu Jitsu was. I'm hoping that Tumbletastics will do a free lesson in the school holidays as well, so Zoe will be able to make a reasonably informed choice.

Zoe's now in the "4 to 7" age group for BJJ classes, and there was just one other boy in the class today. She did really well, and the new black Gi looks really good on her. She also had the same teacher, Patrick, who she's really fond of, so it was a good afternoon all round. We stayed and watched a little bit of the 7 to 11 age group class that followed before heading back home.

We'd barely gotten home and Sarah arrived to pick up Zoe, so the day went quite quickly really, without being too hectic.

Juno nova mid-cycle meetup summary: conclusion

There's been a lot of content in this series about the Juno Nova mid-cycle meetup, so thanks to those who followed along with me. I've also received a lot of positive feedback about the posts, so I am thinking the exercise is worthwhile, and will try to be more organized for the next mid-cycle (and therefore get these posts out earlier). To recap quickly, here's what was covered in the series:

The first post in the series covered social issues: things like how we organized the mid-cycle meetup, how we should address core reviewer burnout, and the current state of play of the Juno release. Bug management has been an ongoing issue for Nova for a while, so we talked about it. We are making progress on this issue, but more needs to be done and it's going to take help from everyone to get there. There was also discussion about proposals on how to handle review workload in the Kilo release, although nothing has been finalized yet.

The second post covered the current state of play for containers in Nova, as well as our future direction. Unexpectedly, this was by far the most read post in the series if Google Analytics is to be believed. There is clear interest in support for containers in Nova. I expect this to be a hot topic at the Paris summit as well. Another new feature we're working on is the Ironic driver merge into Nova. This is progressing well, and we hope to have it fully merged by the end of the Juno release cycle.

At a superficial level the post about DB2 support in Nova is a simple tale of IBM's desire to have people use their database. However, to the skilled observer it's deeper than that -- it's a tale of love and loss, as well as a discussion of how to safely move our schema forward without causing undue pain for our large deployments. We also covered the state of cells support in Nova, with the main issue being that we really need cells to be feature complete. Hopefully people are working on a plan for this now. Another internal refactoring is the current scheduler work, which is important because it positions us for the future.

We also discussed the next gen Nova API, and talked through the proposed upgrade path for the transition from nova-network to neutron.

For those who are curious, there are 8,259 words (not that I am counting or anything) in this post series including this summary post. I estimate it took me about four working days to write (ED: and about two days for his trained team of technical writers to edit into mostly coherent English). I would love to get your feedback on whether you found the series useful, as it's a pretty big investment in time.

Tags for this post: openstack juno nova mid-cycle summary

Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots

Men Commenting on Women’s Issues

A lecture at LCA 2011 which included some inappropriate slides was followed by long discussions on mailing lists. In February 2011 I wrote a blog post debunking some of the bogus arguments in two lists [1]. One of the noteworthy incidents in the mailing list discussion concerned Ted Ts’o (an influential member of the Linux community) debating the definition of rape. My main point on that issue in Feb 2011 was that it’s insensitive to needlessly debate the statistics.

Recently Valerie Aurora wrote about another aspect of this on The Ada Initiative blog [2] and on her personal blog. Some of her significant points are that conference harassment doesn’t end when the conference ends (it can continue on mailing lists etc), that good people shouldn’t do nothing when bad things happen, and that free speech doesn’t mean freedom from consequences or the freedom to use private resources (such as conference mailing lists) without restriction.

Craig Sanders wrote a very misguided post about the Ted Ts’o situation [3]. One of the many things wrong with his post is his statement “I’m particularly disgusted by the men who intervene way too early – without an explicit invitation or request for help or a clear need such as an immediate threat of violence – in womens’ issues“.

I believe that as a general rule when any group of people are involved in causing a problem they should be involved in fixing it. So when we have problems that are broadly based around men treating women badly the prime responsibility should be upon men to fix them. It seems very clear that no matter what scope is chosen for fixing the problems (whether it be lobbying for new legislation, sociological research, blogging, or directly discussing issues with people to change their attitudes) women are doing considerably more than half the work. I believe that this is an indication that overall men are failing.

Asking for Help

I don’t believe that members of minority groups should have to ask for help. Asking isn’t easy; having someone spontaneously offer help because it’s the right thing to do can be a lot easier to accept psychologically than having to beg for help. There is a book named “Women Don’t Ask” which has a page on the geek feminism Wiki [4]. I think the fact that so many women relate to a book named “Women Don’t Ask” is an indication that we shouldn’t expect women to ask directly, particularly in times of stress. The Wiki page notes a criticism of the book that some specific requests are framed as “complaining”, so I think we should consider a “complaint” from a woman as a direct request to do something.

The geek feminism blog has an article titled “How To Exclude Women Without Really Trying” which covers many aspects of one incident [5]. Near the end of the article is a direct call for men to be involved in dealing with such problems. The geek feminism Wiki has a page on “Allies” which includes “Even a blog post helps” [6]. It seems clear from public web sites run by women that women really want men to be involved.

Finally when I get blog comments and private email from women who thank me for my posts I take it as an implied request to do more of the same.

One thing that we really don’t want is to have men wait and do nothing until there is an immediate threat of violence. There are two massive problems with that plan, one is that being saved from a violent situation isn’t a fun experience, the other is that an immediate threat of violence is most likely to happen when there is no-one around to intervene.

Men Don’t Listen to Women

Rebecca Solnit wrote an article about being ignored by men titled “Men Explain Things to Me” [7]. When discussing women’s issues the term “Mansplaining” is often used for that sort of thing, the geek feminism Wiki has some background [8]. It seems obvious that the men who have the greatest need to be taught some things related to women’s issues are the ones who are least likely to listen to women. This implies that other men have to teach them.

Craig says that women need “space to discover and practice their own strength and their own voices“. I think that the best way to achieve that goal is to listen when women speak. Of course that doesn’t preclude speaking as well, just listen first, listen carefully, and listen more than you speak.

Craig claims that when men like me and Matthew Garrett comment on such issues we are making “women’s spaces more comfortable, more palatable, for men“. From all the discussion on this it seems quite obvious that what would make things more comfortable for men would be for the issue to never be discussed at all. It seems to me that two of the ways of making such discussions uncomfortable for most men are to discuss sexual assault and to discuss what should be done when you have a friend who treats women in a way that you don’t like. Matthew has covered both of those so it seems that he’s doing a good job of making men uncomfortable – I think that this is a good thing, a discussion that is “comfortable and palatable” for the people in power is not going to be any good for the people who aren’t in power.

The Voting Aspect

It seems to me that when certain issues are discussed we have a social process that is some form of vote. If one person complains then they are portrayed as crazy. When other people agree with the complaint then their comments are marginalised to try to preserve the narrative of one crazy person. It seems that in the case of the discussion about Rape Apology and LCA2011 most men who comment regard it as one person (either Valerie Aurora or Matthew Garrett) causing a dispute. There is even some commentary which references my blog post about Rape Apology [9] but somehow manages to ignore me when it comes to counting more than one person agreeing with Valerie. For reference David Zanetti was the first person to use the term “apologist for rapists” in connection with the LCA 2011 discussion [10]. So we have a count of at least three men already.

These same patterns always happen so making a comment in support makes a difference. It doesn’t have to be insightful, long, or well written, merely “I agree” and a link to a web page will help. Note that a blog post is much better than a comment in this regard, comments are much like conversation while a blog post is a stronger commitment to a position.

I don’t believe that the majority is necessarily correct. But an opinion which is supported by too small a minority isn’t going to be considered much by most people.

The Cost of Commenting

The Internet is a hostile environment: when you comment on a contentious issue there will be people who demonstrate their disagreement in uncivilised and even criminal ways. S. E. Smith wrote an informative post for Tiger Beatdown about the terrorism that feminist bloggers face [11]. I believe that men face fewer threats than women when they write about such things, and that the threats are less credible. I don’t believe that any of the men who have threatened me have the ability to carry out their threats, but I expect that many women who receive such threats will consider them to be credible.

The difference in the frequency and nature of the terrorism (and there is no other word for what S. E. Smith describes) experienced by men and women gives a vastly different cost to commenting. So when men fail to address issues related to the behavior of other men that isn’t helping women in any way. It’s imposing a significant cost on women for covering issues which could be addressed by men for minimal cost.

It’s interesting to note that there are men who consider themselves to be brave because they write things which will cause women to criticise them or even accuse them of misogyny. I think that the women who write about such issues even though they will receive threats of significant violence are the brave ones.

Not Being Patronising

Craig raises the issue of not being patronising, which is of course very important. I think that the first thing to do to avoid being perceived as patronising in a blog post is to cite adequate references. I’ve spent a lot of time reading what women have written about such issues and cited the articles that seem most useful in describing the issues. I’m sure that some women will disagree with my choice of references and some will disagree with some of my conclusions, but I think that most women will appreciate that I read what women write (it seems that most men don’t).

It seems to me that a significant part of feminism is about women not having men tell them what to do. So when men offer advice on how to go about feminist advocacy it’s likely to be taken badly. It’s not just that women don’t want advice from men, but that advice from men is usually wrong. There are patterns in communication which mean that the effective strategies for women communicating with men are different from the effective strategies for men communicating with men (see my previous section on men not listening to women). Also there’s a common trend of men offering simplistic advice on how to solve problems, one thing to keep in mind is that any problem which affects many people and is easy to solve has probably been solved a long time ago.

Often when social issues are discussed there is some background in the life experience of the people involved. For example Rookie Mag has an article about the street harassment women face which includes many disturbing anecdotes (some of which concern primary school students) [12]. Obviously anyone who has lived through that sort of thing (which means most women) will instinctively understand some issues related to threatening sexual behavior that I can’t easily understand even when I spend some time considering the matter. So there will be things which don’t immediately appear to be serious problems to me but which are interpreted very differently by women. The non-patronising approach to such things is to accept the concerns women express as legitimate, to try to understand them, and not to argue about it. For example the issue that Valerie recently raised wasn’t something that seemed significant when I first read the email in question, but I carefully considered it when I saw her posts explaining the issue and what she wrote makes sense to me.

I don’t think it’s possible for a man to make a useful comment on any issue related to the treatment of women without consulting multiple women first. I suggest a pre-requisite for any man who wants to write any sort of long article about the treatment of women is to have conversations with multiple women who have relevant knowledge. I’ve had some long discussions with more than a few women who are involved with the FOSS community. This has given me a reasonable understanding of some of the issues (I won’t claim to be any sort of expert). I think that if you just go and imagine things about a group of people who have a significantly different life-experience then you will be wrong in many ways and often offensively wrong. Just reading isn’t enough, you need to have conversations with multiple people so that they can point out the things you don’t understand.

This isn’t any sort of comprehensive list of ways to avoid being patronising, but it’s a few things which seem like common mistakes.

Anne Onne wrote a detailed post advising men who want to comment on feminist blogs etc [13], most of it applies to any situation where men comment on women’s issues.