Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

April 25, 2015

The True Meaning of Myki

Those around Victoria will be familiar with our public transport payment system called “Myki”, which has had, shall we say, some teething troubles. It appears this was well known to the Vikings over 1,000 years ago, as this list of Old Norse words that made it into English includes:

muck – myki (cow dung)

So there you go, Myki is actually Old Norse for bullshit. :-)

This item originally posted here: The True Meaning of Myki

Tuggeranong Trig (again)

The cubs at my local scout group are interested in walking to a trig, but have some interesting constraints around mobility for a couple of their members. I therefore offered to re-walk Tuggeranong Trig in Oxley with an eye out for terrain. I think this walk would be very doable for cubs -- it's 650 meters with only about 25 meters of vertical change. I think the path would also be OK for a wheelchair.






Interactive map for this route.



Tags for this post: blog pictures 20150415-tuggeranong_trig photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger




April 24, 2015

Peace and Freedom

Over 1000 women gathered in The Hague in 1915

It's ANZAC day.

It's the 100-year anniversary of a particularly bad battle in Turkey that has somehow come to represent the apex of Australian and New Zealand glorification of war. Sure, we say it's not glorifying war - but seriously, how is this wall-to-wall coverage not glorification? The coverage in all media over the past week has numbed my senses, not made me reflect on sacrifice.

All our focus on this one stupid battle? I'd like to put some focus on those efforts to stop the slaughter.

Gallipoli was ultimately a battle lost for the ANZACs.

So too was the attempt by over 1000 women who came together in 1915 to try to stop war. To call for resolutions for peace. To identify and disarm the causes of conflict. If only we could reflect more on that effort.

The Women's International League for Peace and Freedom - http://www.wilpf.org.au/centenary/100years

Image: Screengrab from http://honesthistory.net.au/wp/wp-content/uploads/WILPF_posters_72dpi-FI...

Text in the image says:

As the British army, including Anzacs, is invading Turkey more than 1000 women from both warring and neutral nations meet in The Hague for the International Congress of Women. They set out resolutions for ending all war and resolve to take them immediately to all heads of state in Europe and the USA. They name themselves the International Committee of Women for Permanent Peace.

"I know that the idea that lasting peace can be gained through war is nonsense" - Eleanor Moore

April 23, 2015

Why is the Arduino IDE so stupid?

If I perform the following actions:

  • File, New

    Opens a new editor window. Reasonable enough, although I would have preferred a default single-window GUI model like QtCreator or even gedit.
  • File, Save

    Opens a save as dialog. In spite of the Arduino ‘sketchbook’ directory, it opens in my home directory.
  • New Folder

    Creates a directory New Folder, but doesn’t shift the focus to it, leaving you confused when this is done in a directory with a lot of files…
  • Click on ‘New Folder’ and rename it, say, Test123
  • Navigate into Test123/
  • Type in a filename for the project, say, TestTest1
  • Hit save.

    So now Arduino IDE dutifully ignores what I typed and proceeds to create a tab called ‘Test123’.

    It will even do this if ‘Test123/’ already existed.

    What?
  • File, Save As.

    It forgets where you were in the hierarchy and starts in the home directory again(!)
  • Navigate to Test123/ intending to use it as a container for multiple projects
  • Type in a filename, say Hello, then hit Save
  • The sketch is _still_ called Test123.

    What?

So, insanely enough, it seems you essentially create a directory and that’s where the sketch gets its name. Within that directory it creates a file of the same name with the extension ‘.ino’.

Let’s try something else:

  • From the shell, create a directory, Test456 and create a readme.txt file, and a directory Test456a and a file Test456a/readme2.txt
  • File New
  • File save
  • Navigate into Test456
  • Type in helloworld for the name
  • Again, the project gets called Test456
  • But take a look in the directory Test456: the contents are now gone (all of them, including the subdirectory Test456a) and replaced with Test123.ino

    Wait, what? THIS SHOULD NEVER EVER HAPPEN!!!!

Luckily I discovered this in a directory in a git working copy with no modifications so I didn’t lose anything important.

Testing was done using Arduino 1.5.8 amd64 for Digispark. So it’s a little out of date, but not exactly the oldest either.

I have used Arduino before and to be honest I don’t recall it being this stupid, but maybe I just got lucky.

One difference is that this time I got sick of the massive latency opening the windows and tried a few different Java JREs (openjdk6, openjdk7, gcc4.7-jre) before discovering that with gcc4.7-jre the menus are as snappy as the openbox right-click menu, or even a (*sharp intake of breath*) brand new Windows 7 corporate desktop… maybe there is some API implementation difference between the JREs that affects the file save dialog functionality.

I don’t seem to have any issues opening projects.

So my workflow for creating a new project now consists of:

  • From the shell, create a directory in the relevant part of the git working copy I am using
  • Create a new empty .ino text file with the same name as the directory (or copy a template I made)
  • Open it with the IDE and start working
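In shell terms that workflow looks roughly like this (a sketch only; the paths, the template file and the arduino launcher invocation are my own assumptions, so adjust for your setup):

mkdir -p ~/src/robot/BlinkTest
cp ~/templates/template.ino ~/src/robot/BlinkTest/BlinkTest.ino
arduino ~/src/robot/BlinkTest/BlinkTest.ino &

Because the directory name and the .ino name already agree, the IDE opens the sketch without trying to rename or relocate anything.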

 

April 19, 2015

Twitter posts: 2015-04-13 to 2015-04-19

April 18, 2015

A letter sent, a disappointment received.

While I was distracted from the whole blogging thing, something did actually get me hacking at the keyboard on something that wasn't code. That was the metadata laws and the actions of the Labor party in allowing them to pass through with a few amendments that in the long run are going to be meaningless.

So I hacked out an email to my local federal member Stephen Jones (which I've included below).

I didn't actually receive a response from Stephen Jones until after the legislation passed through the Senate, and I have to say that I was seriously disappointed. I don't expect a lot from my representatives, but what I would like is something that actually addresses the points that I set out in the original email. What I got was the stock standard "we need to do this because [INSERT SOMETHING ABOUT TERRISMS HERE]".

Sigh.

Dear Stephen Jones,

I've always found you to be a decent person and someone who cares for his electorate. However, I am deeply concerned that you and Labor seem to have allowed the government's Data Retention Legislation to pass without either looking at the amendments or in fact seriously considering whether it is needed at all.

Leaving aside the near-Orwellian prospect of the entire nation's communications being tracked for a rolling period of two years, there are a huge number of problems that seem to have been overlooked in the name of "national security".

- No warrants. There is no judicial oversight of the access to this data. I cannot believe that this is a thing that is supported. Why do we now think that it's not possible for police and other services to misuse their powers? Checks and balances exist for a reason and any move to water them down is dangerous.

- The actual data to be retained still has not been defined. In fact, the legislation with amendments requires the ISPs to determine what "type" of communication it is, which means that the ISP will need to look at the content of the packet. This is not just "envelope" stuff; this is looking inside the envelope and working out what the letter you're mailing is about. This is bad.

- There doesn't appear to actually be a need for it. What problem does it solve that hasn't already been solved? The police and intelligence services seem to be operating quite well already, arresting those who would do us harm and relying on targeted communications intercepts.

- The possibilities for abuse are through the roof. Not only for official abuse, but you've also just created a massive honeypot for every script kiddie and cracker around.

- What safeguards are there against using this information retrospectively? Say a new government comes into power and decides that something should be illegal and it should be illegal retrospectively? What's to stop them using this great store of data to start prosecuting people?

Labor and the government have both just told the Australian public that they are now suspect. That their every action needs to be tracked, just in case they may do something wrong. This is not something I am comfortable with, and frankly neither should you be.

 


April 16, 2015

Dare Devil

So we've been watching Dare Devil over the last couple of nights; we're up to episode 3, and I have to say I'm really impressed.

I've never been a big fan of Dare Devil the character, and dear god the movie was complete shite (up there with the first Hulk movie featuring Eric Bana for badness), but this series has really sucked me in.

For episode 3 what really brought it home for me was Ben Urich. A journalist for the Daily Bugle in the comics, Urich represents the everyman and is often used to tell the story of the normal people caught up in the semi-regular destruction rained down upon New York (which has included the Hulk taking over, everyone in Manhattan being turned into spider creatures, the almost annual flooding by Namor and of course an alien invasion or two).

I'm really liking the shorter series formats for the Marvel shows as well (leaving aside Agents of Shield). They carry the comic book story arc feel much better than trying to drag things out for 23 episodes. Agent Carter proved that, and now Dare Devil is doing the same.

All in all, I'm pretty happy with the state of Marvel's Cinematic Universe, and am looking forward to seeing the next tranche.


April 14, 2015

 
Imagine a roadblock which is a wall of perfectly transparent AeroGel.



Here you are, barrelling down a highway at the speed limit, when suddenly you realise that you have come to a halt, so gently that you weren’t even aware of having slowed down.



Voilà! You now have some idea of what it is like to have been gaslighted for over 5000 days by a person who is an emotional vampire: their goal is not to kill you, it’s to keep sucking away your self in order to present a façade of having a self themselves.



If you have been “told,” tens of thousands of times in indirect ways (never directly: you only become aware of an increasing number of knives accumulating in your back over a span of time), that you cannot succeed, that establishes just such an emotional roadblock.



Right now, teaching a Raspberry Pi to sing is not happening. I know what needs to be done. The resources to discover exactly how to do it are freely available. It simply does not happen. Welcome to the AeroGel roadblock.



The self-righteous Psychopath who spent so much time installing this roadblock in my mind can do no wrong in their own eyes. To actually imply that their integrity is less than complete inspires a rage attack (which is not the same as anger: there is no control at all). Deprogramming each of these blocks will not take place instantly.

April 13, 2015

Get off my Lawn

When I was about the right age to first think that taking compromising photos of myself might be good for a lark, technology was a little different. Mobile phones that weren’t actually bricks anymore could show maybe two lines of pixelated text on an unpleasantly glowing background, terrible quality digital cameras were barely affordable, and connecting to the internet actually had a sound – kind of like KSShhh-aaa-KWEO-pung-pung-drhdrhdrhd-KHH, but it went for longer than that. Or maybe it was: mobile phones only existed in gangster movies where they were installed as part of a car, digital cameras didn’t exist, and I only had access to a few local BBSes. I forget the specifics, but that’s not the point – the point is that when I was in my teens, technology was shit, and nobody had any of it. Now, technology is excellent, everybody has all of it, it’s really easy to use, and the ways in which we interact with our technology shape the ways we expect our technology to work.

If I write an email to someone, I’m thinking “I will type my message in this box here, hit SEND, and then they will receive the message and read it”. I am not thinking “I will type my message in here, hit SEND, then it will be transmitted in plain text across a vast network of computer systems, through a number of mail servers, possibly be recorded by several government agencies in case I’m a terrorist, be stored for a little while in a mail spool and possibly backed up by some ISP, before eventually being downloaded and read by the intended recipient”.

Same with photos: “I will take a picture and share it with my wife” is a distinctly personal experience (regardless of what it’s a photo of), and that’s what I’m thinking at the time. I am not thinking “I will take a picture with my phone which will then be uploaded across that same vast network to a cloud system somewhere and stored for Eris-only-knows how long in some other jurisdiction which can probably be hacked by script kiddies”.

Technology now is all about communicating with people, and about sharing our experiences, and that we can do this without having any idea what’s actually going on is fantastic. The price though is that with each service we use, we give up a certain amount of privacy, and what privacy we give up is not necessarily obvious.

To go back to the compromising photo example: When all I had was a little film camera, nobody I knew ever took photos they wouldn’t be happy for random strangers to see, because we all knew that we had to take the film to get processed – the mechanics of how the technology worked were at least somewhat obvious to the people using the technology. As far as I am aware, there are no nude photos in existence of my teenage self and partners, because we didn’t want those perverts in the photo shop to see them.

I want a world where user experience accurately reflects potential privacy – not “sharing to circles”, or allegedly private “private messages”, but where any share that could conceivably result in non-private communication is preceded by a dialog that states “I hope you know that this will go down on your permanent record”. Because privacy is important – as Bruce Schneier said: “Privacy is not about something to hide. Privacy is about human dignity. Privacy is about individuality. Privacy is about being able to decide when and how we show ourselves to other people.”

Managing Variables in Drupal 7

A couple of times recently the issue of managing variables in Drupal 7 has come up in conversation with other developers. This post outlines the various ways of managing variables in Drupal sites. This guide ensures three things:

  • Sensitive data is kept secure
  • Variables are correct in each environment
  • You are able to track your variables (and when they changed)

The Variables Table

The most common place you'll find configuration variables is in Drupal's variable table (aka {variable}). The values in this table are often managed via admin forms that use system_settings_form(). Users enter the values, click "Save configuration", and the data is stored in the database.

If you prefer to manage your configuration via the command line and know the variable you wish to set, you can use drush vset. This does exactly the same thing as the admin form, without needing to click a mouse.
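For example, to set a variable and then read it back from the shell (site_slogan is just an illustrative core variable):

drush vset site_slogan "Managing variables the easy way"
drush vget site_slogan

drush vget is handy for confirming what is currently stored in the {variable} table.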

$conf Array

While the variables table is great at storing our variables, there are times when you want to enforce a setting. This might be because you want to prevent users from changing it (accidentally or otherwise) or because you need it to be different in each environment. The $conf array in settings.php always overrides any values in the variable table.

Acquia, Pantheon and platform.sh all provide environment variables so you can use different values in your $conf array depending on the environment.
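As a rough sketch, keying off Acquia's AH_SITE_ENVIRONMENT variable (Pantheon and platform.sh expose their own equivalents, and the variables being set here are only examples), the settings.php overrides might look like:

// In settings.php: these values always win over the {variable} table.
if (getenv('AH_SITE_ENVIRONMENT') === 'prod') {
  $conf['error_level'] = 0;   // Hide errors from visitors in production.
}
else {
  $conf['error_level'] = 2;   // Show everything in dev and staging.
  $conf['site_mail'] = 'dev@example.com';
}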

Exporting Variables

In Drupal 7, the common way to export your variables is by using Strongarm with Features. I'm not going to cover how to do this as there is loads of documentation already available on this topic.

If your variable changes on a per-environment basis, or if it is calculated on the fly, then you won't want to use Strongarm + Features, as the exported values are static. You will need to put them in settings.php.

Note to self: I should debug and reroll my patch for adding alter hook support to Strongarm.

My settings.php is Out of Control!

This is a common problem, especially on more complex sites. To avoid this I recommend creating sites/default/settings/settings.[env].php files. Your settings.php file should check for the environment in an environment variable and then include the appropriate settings.[env].php file.
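A minimal sketch of that pattern (the DRUPAL_ENV variable name and the 'local' fallback are assumptions; use whatever your hosting provides):

// At the bottom of sites/default/settings.php.
$env = getenv('DRUPAL_ENV') ? getenv('DRUPAL_ENV') : 'local';
$settings_file = dirname(__FILE__) . '/settings/settings.' . $env . '.php';
if (file_exists($settings_file)) {
  include $settings_file;
}

Each settings.[env].php then carries only the database credentials and $conf overrides for that one environment, which keeps settings.php itself short and identical everywhere.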

What About Sensitive Data?

You can encrypt variables on a case by case basis using the encrypt module and some custom code similar to what I recently implemented in the Acquia SDK module (see on store and on read examples). Warning: This doesn't encrypt the data if you're using drush vset.

If you are storing sensitive data in your variables table I would recommend you implement hook_sql_sync_sanitize() which will delete the sensitive data from your db when drush sql-sanitize or drush sql-sync --sanitize are run.
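With that hook in place, sanitised copies come along for free whenever you move databases around with Drush (the site aliases below are placeholders):

drush sql-sync @mysite.prod @mysite.local --sanitize
drush sql-sanitize

The first command pulls production down to local and strips the sensitive values on the way; the second sanitises a database you have already copied.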

How to Decide?

This little code snippet should help you decide.

<?php

// Don't try using this code in your Drupal site.

if (!using_version_control()) {
  // Seriously there is no point in doing this without version control.
  abandon_all_hope();
  drupal_exit();
}

if (is_data_sensitive($var)) {
  $var = encrypt_var($var);
  if (!we_use_drush_based_workflows()) {
    learn_and_implement_drush_based_workflows();
    // I'm serious!
  }
  implement_hook_sql_sync_sanitize($var);
}

if (is_unique_per_environment($var)) {
  store_conf_array($var);
}
else {
  store_in_db($var);
  if (!we_use_features_based_workflow()) {
    learn_and_implement_features_based_workflows();
    // I'm serious!
  }
  export_using_strongarm($var);
}
 
Would a book entitled “I married a Psychopath” or the like sell well?



One of the risks here for even a strong Empath is that there are no “red flags” in the differences between feelings and expression of them (body-language etc), for the very simple reason that there are no feelings, so there are no differences to sense.



It must be a lonely, empty life for someone who consists only of an empty bubble of Ego. Yet they are the only person who could change that. It begins with genuine humility (which has nothing to do with acting humble). They need to think nothing of themselves.



This may not sound so difficult until you understand that they think everything of themselves, full time. Religion (including Atheism) is not possible for them, as the only person they worship is themselves.





April 12, 2015

Twitter posts: 2015-04-06 to 2015-04-12

Pigs and Bread

In farming related news, we have pigs again, and I’ve finally written up my bread recipe on our new blog at downsouthfarm.com. My random commentary about food and farming related matters will henceforth be posted there, while everything else I usually rattle on about at length will remain here.

Enjoy :-)

One Tree and Painter

Paul and I set off to see two trigs today. One Tree is on the ACT border and is part of the centenary trail. Painter is a suburban trig in Belconnen. Much fun was had; I hope I didn't make Paul too late for the wedding he had to go to.



 



Interactive map for this route.



Interactive map for this route.



Tags for this post: blog pictures 20150412-one_tree_painter photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; A walk around Mount Stranger; Forster trig




April 11, 2015

So, it's been a while

Well as you can see it's been a while since I last posted here, just over a year in fact, so it's time for a bit of a clean up.

As you can see I've started redesigning things: I've updated the theme so that it's a bit more mobile friendly (as in, it will be viewable on mobile), added in the feeds from Angry Beanie, and I'll be doing more work around including information about the projects I'm working on, such as Govchecker and Zooborns for Android.

I'm also going to try and do more writing here. I think I've fallen into the trap of not writing because I use twitter or facebook instead. Blogging though helps me to focus my thoughts a bit more so we'll see how that goes.

Anyway, this blog as ever is a work in progress, so we'll see what comes.

Oh, and one more thing: you'll see that I've replaced the Drupal comments system with Disqus. This way we can hopefully avoid the comment spam problem I was getting before.


April 10, 2015

Tiny Tim improves and gets Smaller

I finally switched Tiny Tim over to a lipo battery. Almost everything worked when I tested the new battery; the only thing that failed in a major way was the two 2812 LEDs, which either didn't come on or came on for a very brief moment and went dark. So Tim is now smaller again, without the "huge" AA battery pack at its tail.





The 2812 story was interesting. It wasn't going to be happy jumping to the 7.6v of the 2S lipo. So I tried various voltage divider setups which didn't work either. I ended up using a common 5v regulator and the lights work fine again. I think I was maybe using too high resistor values in the divider and the 2812s didn't like it. At any rate, they apparently want a good regulated power source, and I wasn't giving it one before I switched over to using the regulator.



On the whole, going from the 5-6v of the AA pack to 7.6v has made it a snappier mover. I tried it initially with the battery on the bench and found it would lift the back off the desk under hard braking.



Next up is probably attaching a claw or drop mechanism and ultrasound sensor and then take on the Sparkfun autonomous ping-pong ball into cup challenge. I'll probably control it via wireless from a second on board micro-controller. The drop, ultrasound, and autonomous navigation micro (and additional battery) can all be put into a single "module" that I can then bolt to Tim. All the navigation micro needs to do is control the differential drive like a remote control would. This way, the existing micro etc on Tim doesn't change at all in order for the challenge to be accepted.





clintonroy

Writing your first conference proposal can be difficult, so we’re running a working bee at UQ on Saturday the 11th (in conjunction with Humbug). If you’ve never written a conference proposal before, or you’d like yours given the once-over, please come along; register over at meetup.




Towards (and beyond) ONE MILLION queries per second

At Percona Live MySQL Conference 2015 next week I’ll be presenting on “Towards One MILLION queries per second” on 14th April at 4:50pm in Ballroom A.

This is the story of work I’ve been doing to get MySQL executing ONE MILLION SQL queries per second. It involves tales of MySQL, tales of the POWER8 Processor and a general amount of fun in extracting huge amounts of performance.

As I speak, I’m working on some even more impressive benchmark results! New hardware, new MySQL versions and really breaking news on MySQL scalability.

April 09, 2015

Thinking time

I've had a lot of things to think about this week, so I've gone on a few walks. I found some geocaches along the way, but even better I think my head is a bit more sorted out now.



Interactive map for this route.



Interactive map for this route.



Interactive map for this route.



Tags for this post: blog canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches




April 08, 2015

brendanscott

The WSJ has an interesting article about an investor who is funding claims to invalidate patents. The logic is that he shorts the stock. When the patent is invalidated, the stock plummets. He sells the stock – profit.  Hat tip: Andrew Wilson



Lightning Networks Part IV: Summary

This is the fourth part of my series of posts explaining the bitcoin Lightning Networks 0.5 draft paper.  See Part I, Part II and Part III.

The key revelation of the paper is that we can have a network of arbitrarily complicated transactions, such that they aren’t on the blockchain (and thus are fast, cheap and extremely scalable), but at every point are ready to be dropped onto the blockchain for resolution if there’s a problem.  This is genuinely revolutionary.

It also vindicates Satoshi’s insistence on the generality of the Bitcoin scripting system.  And though it’s long been suggested that bitcoin would become a clearing system on which genuine microtransactions would be layered, it was unclear that we were so close to having such a system in bitcoin already.

Note that the scheme requires some solution to malleability to allow chains of transactions to be built (this is a common theme, so likely to be mitigated in a future soft fork), but Gregory Maxwell points out that it also wants selective malleability, so transactions can be replaced without invalidating the HTLCs which are spending their outputs.  Thus it proposes new signature flags, which will require active debate, analysis and another soft fork.

There is much more to discover in the paper itself: recommendations for lightning network routing, the node charging model, a risk summary, the specifics of the softfork changes, and more.

I’ll leave you with a brief list of requirements to make Lightning Networks a reality:

  1. A soft-fork is required, to protect against malleability and to allow new signature modes.
  2. A new peer-to-peer protocol needs to be designed for the lightning network, including routing.
  3. Blame and rating systems are needed for lightning network nodes.  You don’t have to trust them, but it sucks if they go down as your money is probably stuck until the timeout.
  4. More refinements (eg. relative OP_CHECKLOCKTIMEVERIFY) to simplify and tighten timeout times.
  5. Wallets need to learn to use this, with UI handling of things like timeouts and fallbacks to the bitcoin network (sorry, your transaction failed, you’ll get your money back in N days).
  6. You need to be online every 40 days to check that an old HTLC hasn’t leaked, which will require some alternate solution for occasional users (shut down channel, have some third party, etc).
  7. A server implementation needs to be written.

That’s a lot of work!  But it’s all simply engineering from here, just as bitcoin was once the paper was released.  I look forward to seeing it happen (and I’m confident it will).

Reading the Lord of the Rings aloud

The reading project that I am working on is a re-read of the Lord of the Rings. I’ve read the book/trilogy around a dozen times over the years, but the two main differences this time are that I am reading it aloud and that I am consulting a couple of commentaries as I go. The reference works I am using are The Lord of the Rings: A Reader’s Companion and The Lord of the Rings Reread series by Kate Nepveu. The Companion is a fairly large book (860 pages) that follows the text page by page and gives explanations for words, characters and the history/development of the text. These can range from a simple definition to a couple of pages on a specific topic or character. The Reread has a quick synopsis at the start of the article for each chapter and then some commentary by Kate, followed by some comments from her readers (which I usually only quickly skim).

I started my read-aloud on February 15th 2015 and I am now (April 7th) just past the half-way point (I completed The Fellowship of the Ring on March 27th). My process is to read the text for 30-60 minutes (I’m reading the three-book 1979 3rd edition paperback, which amusingly has various errors that the Reader’s Companion points out as I go), which gets me through 5-10 pages. I read aloud everything on the page, including chapter titles, songs, non-English words and footnotes. A few times I have checked the correct pronunciation of words (Eomer is one) but otherwise I try not to get distracted. Once I finish for the session I open the Reader’s Companion and check the entries for the pages I have just read, and at the end of each chapter (chapters are usually around 20-30 pages) I have a look at Kate’s blog entry. I try to read most days and sometimes do extras on weekends.

One thing I really need to say is that I really am enjoying the whole thing. I love the book (like I said I’ve read it over a dozen times) and reading it aloud makes the experience even better. The main difference is that I do not skip over words/sentences/paragraphs which tends to happen when I read normally. So I don’t miss phrases like the description of Lake Hithoel:

The sun, already long fallen from the noon, was shining in a windy sky. The pent waters spread out into a long oval lake, pale Nen Hithoel, fenced by steep grey hills whose sides were clad with trees. At the far southern end rose three peaks. The midmost stood somewhat forward from the others and sundered from them, an island in the waters, about which the flowing River flung pale shimmering arms. Distant but deep there came up on the wind a roaring sound like the roll of thunder heard far away.


Nor do I skip the other little details that are easy to miss, like Grishnakh and his Mordor Orcs leaving the rest of the group for a couple of days on the plains of Rohan, or the description of the country leading up to the west gate of Moria. Although I do wish I’d seen the link to the map of Helm’s Deep halfway down this page before I’d read the chapter, as it would have made things clearer. The Companion is also good at pointing out how things fit in the chronology, so when somebody gazes at the horizon and sees a cloud of smoke it will say which event elsewhere in the book (or other writings) it is from. You also get a great feel for Tolkien’s language and words and his vivid descriptions of scenes and landscape (often up to a page long), such as the example above. Although I do find he uses “suddenly” an awful lot when he has new events/people break into the narrative.

The Reader’s Companion is a great resource, written by two serious Tolkien scholars but intended for general readers rather than academics. Kate Nepveu’s articles are also very useful in giving a more opinionated and subjective commentary. I would definitely recommend the experience to others who are fans of the Lord of the Rings. I’m not sure how well it would work with other books, but it certainly enhances a work I already know well and love.

At the current rate I am expecting to finish some time in June or July. The next project I’m planning is Shakespeare’s plays. I am planning on reading each one (multiple times, including possibly at least once aloud) and watching the BBC Television Shakespeare and other adaptations and commentaries. My plan is to cover the majority of them, but I’ll see how I go; I’d like to at least get through the major ones.


April 06, 2015

Lightning Networks Part III: Channeling Contracts

This is the third part of my series of posts explaining the bitcoin Lightning Networks 0.5 draft paper.

In Part I I described how a Poon-Dryja channel uses a single in-blockchain transaction to create off-blockchain transactions which can be safely updated by either party (as long as both agree), with fallback to publishing the latest versions to the blockchain if something goes wrong.

In Part II I described how Hashed Timelocked Contracts allow you to safely make one payment conditional upon another, so payments can be routed across untrusted parties using a series of transactions with decrementing timeout values.

Now we’ll join the two together: encapsulate Hashed Timelocked Contracts inside a channel, so they don’t have to be placed in the blockchain (unless something goes wrong).

Revision: Why Poon-Dryja Channels Work

Here’s half of a channel setup between me and you where I’m paying you 1c: (there’s always a mirror setup between you and me, so it’s symmetrical)

Half a channel: we will invalidate transaction 1 (in favour of a new transaction 2) to send funds.

The system works because after we agree on a new transaction (eg. to pay you another 1c), you revoke this by handing me your private keys to unlock that 1c output.  Now if you ever released Transaction 1, I can spend both the outputs.  If we want to add a new output to Transaction 1, we need to be able to make it similarly stealable.

Adding a 1c HTLC Output To Transaction 1 In The Channel

I’m going to send you 1c now via a HTLC (which means you’ll only get it if the riddle is answered; if it times out, I get the 1c back).  So we replace transaction 1 with transaction 2, which has three outputs: $9.98 to me, 1c to you, and 1c to the HTLC: (once we agree on the new transactions, we invalidate transaction 1 as detailed in Part I)

Our Channel With an Output for an HTLC

Note that you supply another separate signature (sig3) for this output, so you can reveal that private key later without giving away any other output.

We modify our previous HTLC design so you revealing the sig3 would allow me to steal this output. We do this the same way we did for that 1c going to you: send the output via a timelocked mutually signed transaction.  But there are two transaction paths in an HTLC: the got-the-riddle path and the timeout path, so we need to insert those timelocked mutually signed transactions in both of them.  First let’s append a 1 day delay to the timeout path:

Timeout path of HTLC, with locktime so it can be stolen once you give me your sig3.

Similarly, we need to append a timelocked transaction on the “got the riddle solution” path, which now needs my signature as well (otherwise you could create a replacement transaction and bypass the timelocked transaction):

Full HTLC: If you reveal Transaction 2 after we agree it’s been revoked, and I have your sig3 private key, I can spend that output before you can, down either the settlement or timeout paths.

Remember The Other Side?

Poon-Dryja channels are symmetrical, so the full version has a matching HTLC on the other side (except with my temporary keys, so you can catch me out if I use a revoked transaction).  Here’s the full diagram, just to be complete:

A complete lightning network channel with an HTLC, containing a glorious 13 transactions.

Closing The HTLC

When an HTLC is completed, we just update transaction 2, and don’t include the HTLC output.  The funds either get added to your output (R value revealed before timeout) or my output (timeout).

Note that we can have an arbitrary number of independent HTLCs in progress at once, and open and/or close as many in each transaction update as both parties agree to.

Keys, Keys Everywhere!

Each output for a revocable transaction needs to use a separate address, so we can hand the private key to the other party.  We use two disposable keys for each HTLC[1], and every new HTLC will change one of the other outputs (either mine, if I’m paying you, or yours if you’re paying me), so that needs a new key too.  That’s 3 keys, doubled for the symmetry, to give 6 keys per HTLC.

Adam Back pointed out that we can actually implement this scheme without the private key handover, and instead sign a transaction for the other side which gives them the money immediately.  This would permit more key reuse, but means we’d have to store these transactions somewhere on the off chance we needed them.

Storing just the keys is smaller, but more importantly, Section 6.2 of the paper describes using BIP 32 key hierarchies so the disposable keys are derived: after a while, you only need to store one key for all the keys the other side has given you.  This is vastly more efficient than storing a transaction for every HTLC, and indicates the scale (thousands of HTLCs per second) that the authors are thinking.

Next: Conclusion

My next post will be a TL;DR summary, and some more references to the implementation details and possibilities provided by the paper.

 


[1] The new sighash types are fairly loose, and thus allow you to attach a transaction to a different parent if it uses the same output addresses.  I think we could re-use the same keys in both paths if we ensure that the order of keys required is reversed for one, but we’d still need 4 keys, so it seems a bit too tricky.

April 05, 2015

Twitter posts: 2015-03-30 to 2015-04-05

April 04, 2015

Bendora Arboretum and Bulls Head trig

Prompted largely by a not very detailed entry in a book, a bunch of friends and I went to explore Bendora Arboretum. The arboretum was planted in the 1940s as a scientific experiment exploring what softwoods would grow well in our climate -- this was prompted by the large amount of wood Australia was importing at the time. There were 34 arboreta originally, but only this one remains; the last three others were destroyed in the 2003 bush fires.



This walk appears in Best Bush, Town and Village Walks in and around the ACT by Marion Stuart, which was the inspiration for this outing. The only thing to note with her description is that the walk is a fair bit longer than she describes -- it's 2km from the locked gate to the hut, which means a 4km return walk before you explore the arboretum at all. The arboretum has received some attention from the ACT government recently, with new signage and a fresh gravel path. Also please note this area might only be accessible by four wheel drive in winter, which is not mentioned in the book.



We also did a side trip to Bulls Head trig, which was interesting as it's not the traditional shape.






Interactive map for this route.



Interactive map for this route.



Tags for this post: blog pictures 20150404-bendora_bulls_head photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger




April 03, 2015

Using OpenVPN on Android Lollipop

I use my Linode VPS as a VPN endpoint for my laptop when I'm using untrusted networks and I wanted to do the same on my Android 5 (Lollipop) phone.

It turns out that it's quite easy to do (doesn't require rooting your phone) and that it works very well.

Install OpenVPN

Once you have installed and configured OpenVPN on the server, you need to install the OpenVPN app for Android (available both on F-Droid and Google Play).

From the easy-rsa directory you created while generating the server keys, create a new keypair for your phone:

./build-key nexus6        # "nexus6" as Name, no password

and then copy the following files onto your phone:

  • ca.crt
  • nexus6.crt
  • nexus6.key
  • ta.key

Create a new VPN config

If you configured your server as per my instructions, these are the settings you'll need to use on your phone:

Basic:

  • LZO Compression: YES
  • Type: Certificates
  • CA Certificate: ca.crt
  • Client Certificate: nexus6.crt
  • Client Certificate Key: nexus6.key

Server list:

  • Server address: hafnarfjordur.fmarier.org
  • Port: 1194
  • Protocol: UDP
  • Custom Options: NO

Authentication/Encryption:

  • Expect TLS server certificate: YES
  • Certificate hostname check: YES
  • Remote certificate subject: server
  • Use TLS Authentication: YES
  • TLS Auth File: ta.key
  • TLS Direction: 1
  • Encryption cipher: AES-256-CBC
  • Packet authentication: SHA384 (not SHA-384)

That's it. Everything else should work with the defaults.

April 01, 2015

Lightning Networks Part II: Hashed Timelock Contracts (HTLCs)

In Part I, we demonstrated Poon-Dryja channels; a generalized channel structure which used revocable transactions to ensure that old transactions wouldn’t be reused.

A channel from me<->you would allow me to efficiently send you 1c, but that doesn’t scale since it takes at least one on-blockchain transaction to set up each channel. The solution to this is to route funds via intermediaries;  in this example we’ll use the fictitious “MtBox”.

If I already have a channel with MtBox’s Payment Node, and so do you, that lets me reliably send 1c to MtBox without (usually) needing the blockchain, and it lets MtBox send you 1c with similar efficiency.

But it doesn’t give me a way to force them to send it to you; I have to trust them.  We can do better.

Bonding Unrelated Transactions using Riddles

For simplicity, let’s ignore channels for the moment.  Here’s the “trust MtBox” solution:

I send you 1c via MtBox; simplest possible version, using two independent transactions. I trust MtBox to generate its transaction after I send it mine.

What if we could bond these transactions together somehow, so that when you spend the output from the MtBox transaction, that automatically allows MtBox to spend the output from my transaction?

Here’s one way. You send me a riddle question to which nobody else knows the answer: eg. “What’s brown and sticky?”.  I then promise MtBox the 1c if they answer that riddle correctly, and tell MtBox that you know.

MtBox doesn’t know the answer, so it turns around and promises to pay you 1c if you answer “What’s brown and sticky?”. When you answer “A stick”, MtBox can pay you 1c knowing that it can collect the 1c off me.

The bitcoin blockchain is really good at riddles; in particular “what value hashes to this one?” is easy to express in the scripting language. So you pick a random secret value R, then hash it to get H, then send me H.  My transaction’s 1c output requires MtBox’s signature, and a value which hashes to H (ie. R).  MtBox adds the same requirement to its transaction output, so if you spend it, it can get its money back from me:

Two Independent Transactions, Connected by A Hash Riddle.

Handling Failure Using Timeouts

This example is too simplistic; when MtBox’s PHP script stops processing transactions, I won’t be able to get my 1c back if I’ve already published my transaction.  So we use a familiar trick from Part I, a timeout transaction which after (say) 2 days, returns the funds to me.  This output needs both my and MtBox’s signatures, and MtBox supplies me with the refund transaction containing the timeout:

Hash Riddle Transaction, With Timeout

MtBox similarly needs a timeout in case you disappear.  And it needs to make sure it gets the answer to the riddle from you within that 2 days, otherwise I might use my timeout transaction and it can’t get its money back.  To give plenty of margin, it uses a 1 day timeout:

MtBox Needs Your Riddle Answer Before It Can Answer Mine

Chaining Together

It’s fairly clear to see that longer paths are possible, using the same “timelocked” transactions. The paper uses 1 day per hop, so if you were 5 hops away (say, me <-> MtBox <-> Carol <-> David <-> Evie <-> you) I would use a 5 day timeout to MtBox, MtBox a 4 day to Carol, etc. A routing protocol is required, but if some routing doesn’t work, two nodes can always cancel by mutual agreement (by creating a timeout transaction with no locktime).

The paper refers to each set of transactions as contracts, with the following terms:

  • If you can produce to MtBox an unknown 20-byte random input data R from a known H, within two days, then MtBox will settle the contract by paying you 1c.
  • If two days have elapsed, then the above clause is null and void and the clearing process is invalidated.
  • Either party may (and should) pay out according to the terms of this contract in any method of the participants choosing and close out this contract early so long as both participants in this contract agree.

The hashing and timelock properties of the transactions are what allow them to be chained across a network, hence the term Hashed Timelock Contracts.

Next: Using Channels With Hashed Timelock Contracts.

The hashed riddle construct is cute, but as detailed above every transaction would need to be published on the blockchain, which makes it pretty pointless.  So the next step is to embed them into a Poon-Dryja channel, so that (in the normal, cooperative case) they don’t need to reach the blockchain at all.

March 31, 2015

PyCon Australia 2015 Call for Proposals is Open!

Closes Friday 8th May

PyCon Australia 2015 is pleased to announce that its Call for Proposals is now open!

The conference this year will be held on Saturday 1st and Sunday 2nd August 2015 in Brisbane. We'll also be featuring a day of Miniconfs on Friday 31st July.

The deadline for proposal submission is Friday 8th May, 2015.

PyCon Australia attracts professional developers from all walks of life, including industry, government, and science, as well as enthusiast and student developers. We’re looking for proposals for presentations and tutorials on any aspect of Python programming, at all skill levels from novice to advanced.

Presentation subjects may range from reports on open source, academic or commercial projects; or even tutorials and case studies. If a presentation is interesting and useful to the Python community, it will be considered for inclusion in the program.

We're especially interested in short presentations that will teach conference-goers something new and useful. Can you show attendees how to use a module? Explore a Python language feature? Package an application?

Miniconfs

Four Miniconfs will be held on Friday 31st July, as a prelude to the main conference. Miniconfs are run by community members and are separate to the main conference. If you are a first time speaker, or your talk is targeted to a particular field, the Miniconfs might be a better fit than the main part of the conference. If your proposal is not selected for the main part of the conference, it may be selected for one of our Miniconfs:

DjangoCon AU is the annual conference of Django users in the Southern Hemisphere. It covers all aspects of web software development, from design to deployment - and, of course, the use of the Django framework itself. It provides an excellent opportunity to discuss the state of the art of web software development with other developers and designers.

The Python in Education Miniconf aims to bring together community workshop organisers, professional Python instructors and professional educators across primary, secondary and tertiary levels to share their experiences and requirements, and identify areas of potential collaboration with each other and also with the broader Python community.

The Science and Data Miniconf is a forum for people using Python to tackle problems in science and data analysis. It aims to cover commercial and research interests in applications of science, engineering, mathematics, finance, and data analysis using Python, including AI and 'big data' topics.

The OpenStack Miniconf is dedicated to talks related to the OpenStack project and we welcome proposals of all kinds: technical, community, infrastructure or code talks/discussions; academic or commercial applications; or even tutorials and case studies. If a presentation is interesting and useful to the OpenStack community, it will be considered for inclusion. We also welcome talks that have been given previously in different events.

First Time Speakers

We welcome first-time speakers; we are a community conference and we are eager to hear about your experience. If you have friends or colleagues who have something valuable to contribute, twist their arms to tell us about it! Please also forward this Call for Proposals to anyone that you feel may be interested.

The most recent call for proposals information can always be found at: http://pycon-au.org/cfp

See you in Brisbane in July!

Important Dates

  1. Call for Proposals opens: Friday 27th March, 2015
  2. Proposal submission deadline: Friday 8th May, 2015
  3. Proposal acceptance: Monday 25 May, 2015

LUV Main April 2015 Meeting: Storytelling for Digital Media / Deploying Microservices Effectively

Apr 7 2015 19:00
Apr 7 2015 21:00
Location: 

The Buzzard Lecture Theatre. Evan Burge Building, Trinity College, Melbourne University Main Campus, Parkville.

Speakers:

• Katherine Phelps: Storytelling for Digital Media

• Daniel Hall: Deploying Microservices Effectively

The Buzzard Lecture Theatre, Evan Burge Building, Trinity College Main Campus Parkville Melways Map: 2B C5

Notes: Trinity College's Main Campus is located off Royal Parade. The Evan Burge Building is located near the Tennis Courts. See our Map of Trinity College. Additional maps of Trinity and the surrounding area (including its relation to the city) can be found at http://www.trinity.unimelb.edu.au/about/location/map

Parking can be found along or near Royal Parade, Grattan Street, Swanston Street and College Crescent. Parking within Trinity College is unfortunately only available to staff.

For those coming via Public Transport, the number 19 tram (North Coburg - City) passes by the main entrance of Trinity College (get off at Morrah St, Stop 12). This tram departs from the Elizabeth Street tram terminus (Flinders Street end) and goes past Melbourne Central. Timetables can be found on-line at:

http://www.metlinkmelbourne.com.au/route/view/725

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Lightning Networks Part I: Revocable Transactions

I finally took a second swing at understanding the Lightning Network paper.  The promise of this work is exceptional: instant, reliable transactions across the bitcoin network. The implementation is complex and the draft paper reads like a grab bag of ideas, but it truly rewards close reading!  It doesn’t involve novel crypto, nor fancy bitcoin scripting tricks.

There are several techniques which are used in the paper, so I plan to concentrate on one per post and wrap up at the end.

Revision: Payment Channels

I open a payment channel to you for up to $10

A Payment Channel is a method for sending microtransactions to a single recipient, such as me paying you 1c a minute for internet access.  I create an opening transaction which has a $10 output, which can only be redeemed by a transaction input signed by you and me (or me alone, after a timeout, just in case you vanish).  That opening transaction goes into the blockchain, and we’re sure it’s bedded down.

I pay you 1c in the payment channel. Claim it any time!

Then I send you a signed transaction which spends that opening transaction output, and has two outputs: one for $9.99 to me, and one for 1c to you.  If you want, you could sign that transaction too, and publish it immediately to get your 1c.

Update: now I pay you 2c via the payment channel.

Then a minute later, I send you a signed transaction which spends that same opening transaction output, and has a $9.98 output for me, and a 2c output for you. Each minute, I send you another transaction, increasing the amount you get every time.

This works because:

  1.  Each transaction I send spends the same output; so only one of them can ever be included in the blockchain.
  2. I can’t publish them, since they need your signature and I don’t have it.
  3. At the end, you will presumably publish the last one, which is best for you.  You could publish an earlier one, and cheat yourself of money, but that’s not my problem.

Undoing A Promise: Revoking Transactions?

In the simple channel case above, we don’t have to revoke or cancel old transactions, as the only person who can spend them is the person who would be cheated.  This makes the payment channel one way: if the amount I was paying you ever went down, you could simply broadcast one of the older, more profitable transactions.

So if we wanted to revoke an old transaction, how would we do it?

There’s no native way in bitcoin to have a transaction which expires.  You can have a transaction which is valid after 5 days (using locktime), but you can’t have one which is valid until 5 days has passed.

So the only way to invalidate a transaction is to spend one of its inputs, and get that input-stealing transaction into the blockchain before the transaction you’re trying to invalidate.  That’s no good if we’re trying to update a transaction continuously (a-la payment channels) without most of them reaching the blockchain.

The Transaction Revocation Trick

But there’s a trick, as described in the paper.  We build our transaction as before (I sign, and you hold), which spends our opening transaction output, and has two outputs.  The first is a $9.99 output for me.  The second is a bit weird; it’s 1c, but needs two signatures to spend: mine and a temporary one of yours.  Indeed, I create and sign such a transaction which spends this output, and send it to you, but that transaction has a locktime of 1 day:

The first payment in a lightning-style channel.

Now, if you sign and publish that transaction, I can spend my $9.99 straight away, and you can publish that timelocked transaction tomorrow and get your 1c.

But what if we want to update the transaction?  We create a new transaction, with a $9.98 output to me and a 2c output to a transaction signed by both me and another temporary address of yours.  I create and sign a transaction which spends that 2c output, has a locktime of 1 day and has an output going to you, and send it to you.

We can revoke the old transaction: you simply give me the temporary private key you used for that transaction.  Weird, I know (and that’s why you had to generate a temporary address for it).  Now, if you were ever to sign and publish that old transaction, I can spend my $9.99 straight away, and create a transaction using your key and my key to spend your 1c.  Your transaction (1a below) which could spend that 1c output is timelocked, so I’ll definitely get my 1c transaction into the blockchain first (and the paper uses a timelock of 40 days, not 1).

Updating the payment in a lightning-style channel: you sent me your private key for sig2, so I could spend both outputs of Transaction 1 if you were to publish it.

So the effect is that the old transaction is revoked: if you were to ever sign and release it, I could steal all the money.  Neat trick, right?

A Minor Variation To Avoid Timeout Fallback

In the original payment channel, the opening transaction had a fallback clause: after some time, it is all spendable by me.  If you stop responding, I have to wait for this to kick in to get my money back.  Instead, the paper uses a pair of these “revocable” transaction structures.  The second is a mirror image of the first, in effect.

A full symmetric, bi-directional payment channel.

So the first output is $9.99, which needs your signature and a temporary signature of mine.  The second is 1c for you.  You sign the transaction, and I hold it.  You create and sign a transaction which has that $9.99 as input, a 1 day locktime, and send it to me.

Since both your and my “revocable” transactions spend the same output, only one can reach the blockchain.  They’re basically equivalent: if you send yours you must wait 1 day for your money.  If I send mine, I have to wait 1 day for my money.  But it means either of us can finalize the payment at any time, so the opening transaction doesn’t need a timeout clause.

Next…

Now we have a generalized transaction channel, which can spend the opening transaction in any way we both agree on, without trust or requiring on-blockchain updates (unless things break down).

The next post will discuss Hashed Timelock Contracts (HTLCs) which can be used to create chains of payments…

Notes For Pedants:

In the payment channel open I assume OP_CHECKLOCKTIMEVERIFY, which isn’t yet in bitcoin.  It’s simpler.

I ignore transaction fees as an unnecessary distraction.

We need malleability fixes, so you can’t mutate a transaction and break the ones which follow.  But I also need the ability to sign Transaction 1a without a complete Transaction 1 (since you can’t expose the signed version to me).  The paper proposes new SIGHASH types to allow this.

[EDIT 2015-03-30 22:11:59+10:30: We also need to sign the other symmetric transactions before signing the opening transaction.  If we released a completed opening transaction before having the other transactions, we might be stuck with no way to get our funds back (as we don’t have a “return all to me” timeout on the opening transaction)]

March 30, 2015

Sleep: How to nap like a pro | BBC Future

March 29, 2015

Twitter posts: 2015-03-23 to 2015-03-29

Challenge for 2015: hackaday prize competition

So the 2015 Hackaday prize is happening, until at least August.

Somehow I’ve currently ended up involved with not one, but two entries!  The good thing is that with four months to go until the first round submission, I have been careful not to bite off more than can be chewed in the time available on weekends, after the kids go to bed, etc., alongside other commitments. Along the way it should be educational and fun, and with any luck I might at least win a T-shirt or something (some electronics test gear would be nice) … I’m under no illusion we will get anywhere near winning a trip to space!

The theme this year is “Build Something that Matters”, around environment, agriculture and energy, with the related facet of solving a problem, not necessarily a world-scale one.

So my first project, on which I am making good progress, is a farm crop monitoring system for Australian conditions.  This utilises the ESP8266 wifi module and will exercise its deep sleep mode and solar power, along with a yet-to-be-determined Linux module for a local base station, and hopefully ISM band telemetry over long distances. I will also be helped by my neighbour, a farmer who can use this system.

The second project, which is not my idea but that of a close friend (though I am presently responsible for maintaining its hackaday.io page), is an Algorithmic Composting machine built out of repurposed parts and cheap electronics.  I’ll probably end up assisting with the embedded electronics, as well as keeping the documentation up to date.

I won’t be posting here in a lot of detail as the contest progresses, as there is a project log built into the hackaday.io site intended for that purpose.  So follow along at http://hackaday.io/project/4758 and http://hackaday.io/project/4991 instead! (And please like our projects if you have a hackaday account!)

 

March 28, 2015

Fedora 21: automatic software updates

The way Fedora does automatic software updates has changed with the replacement of yum(8) with dnf(8).

Start by disabling yum's automatic updates, if installed:

# dnf remove yum-cron yum-cron-daily

Then install the dnf automatic update software:

# dnf install dnf-automatic

Alter /etc/dnf/automatic.conf to change the "apply_updates" line:

apply_updates = yes

Instruct systemd to run the updates periodically:

# systemctl enable dnf-automatic.timer
# systemctl start dnf-automatic.timer

clintonroy

PyCon Australia 2015 is pleased to announce that its Call for Proposals is now open!

The conference this year will be held on Saturday 1st and Sunday 2nd August 2015 in Brisbane. We’ll also be featuring a day of Miniconfs on Friday 31st July.

The deadline for proposal submission is Friday 8th May, 2015.

PyCon Australia attracts professional developers from all walks of life, including industry, government, and science, as well as enthusiast and student developers. We’re looking for proposals for presentations and tutorials on any aspect of Python programming, at all skill levels from novice to advanced.

Presentation subjects may range from reports on open source, academic or commercial projects; or even tutorials and case studies. If a presentation is interesting and useful to the Python community, it will be considered for inclusion in the program.

We’re especially interested in short presentations that will teach conference-goers something new and useful. Can you show attendees how to use a module? Explore a Python language feature? Package an application?

Miniconfs

Four Miniconfs will be held on Friday 31st July, as a prelude to the main conference. Miniconfs are run by community members and are separate to the main conference. If you are a first time speaker, or your talk is targeted to a particular field, the Miniconfs might be a better fit than the main part of the conference. If your proposal is not selected for the main part of the conference, it may be selected for one of our Miniconfs:

DjangoCon AU is the annual conference of Django users in the Southern Hemisphere. It covers all aspects of web software development, from design to deployment – and, of course, the use of the Django framework itself. It provides an excellent opportunity to discuss the state of the art of web software development with other developers and designers.

The Python in Education Miniconf aims to bring together community workshop organisers, professional Python instructors and professional educators across primary, secondary and tertiary levels to share their experiences and requirements, and identify areas of potential collaboration with each other and also with the broader Python community.

The Science and Data Miniconf is a forum for people using Python to tackle problems in science and data analysis. It aims to cover commercial and research interests in applications of science, engineering, mathematics, finance, and data analysis using Python, including AI and ‘big data’ topics.

The OpenStack Miniconf is dedicated to talks related to the OpenStack project and we welcome proposals of all kinds: technical, community, infrastructure or code talks/discussions; academic or commercial applications; or even tutorials and case studies. If a presentation is interesting and useful to the OpenStack community, it will be considered for inclusion. We also welcome talks that have been given previously in different events.

Full details: http://2015.pycon-au.org/cfp



Filed under: Uncategorized

Parallel Importing vs The Economist

For the last few years I have subscribed to the online edition of The Economist magazine. Previously I read it via their website but for the last year or two I have used their mobile app. Both feature the full text of each week’s magazine. Since I subscribed nearly 15 years ago I have paid:

Launched Jun 1997   US$ 48
Jun 1999            US$ 48
Oct 2002            US$ 69
Oct 2003            US$ 69
Dec 2006            US$ 79
Oct 2009            US$ 79
Oct 2010            US$ 95
Oct 2011            US$ 95
Mar 2014            NZ$ 400 (approx US$ 300) 

You will note the steady creep for a few years followed by the huge jump in 2014.

Note: I reviewed my credit card bills for 2012 and 2013 and didn’t see any payments; it is possible I was getting it for free for two years :). Possibly this was due to the transition between using an outside card processor (Worldpay) and doing the subscriptions in-house.

Last year I paid the bill in a bit of a rush and, while I was surprised at the amount, I didn’t think too hard about it. This year, however, I had a closer look. What seems to have happened is that The Economist has changed their online pricing model from “cheap online product” to “discount from the printed price”. This means that instead of online subscribers paying the same everywhere, they now pay slightly less than it would cost to get the printed magazine delivered to the home.

Unfortunately the New Zealand price is very high to (I assume) cover the cost of shipping a relatively small number of magazines via air all the way from the nearest printing location.

[Screenshots: The Economist digital subscription pricing pages for New Zealand and the US.]

So readers in New Zealand are now charged NZ$ 736 for a two-year digital subscription while readers in the US are now charged US$ 223 ( NZ$ 293) for the same product. Thus New Zealanders pay 2.5 times as much as Americans.

Fortunately, since I am a globe-trotting member of the world elite® I was able to change my subscription address to my US office and save a bunch of cash. However, for a magazine that publishes the Big Mac Index comparing prices of products around the world, the huge difference in prices for the same digital product seems a little weird.


March 26, 2015

The Cloud : An Inferior Implementation of HPC

The use of cloud computing as an alternative implementation for high performance computing (HPC) initially seems to be appealing, especially to IT managers and to users who may find the jump from their desktop application to the command line interface challenging. However a careful and nuanced review of metrics should lead to a reconsideration of these assumptions.

read more

sswam

Job control is a basic feature of popular UNIX and Linux shells, such as “bash”.

It can be very useful, so I thought I’d make a little tutorial on it…

^C    press Ctrl-C to interrupt a running job (you know this one!)
^\    press Ctrl-\ (backslash) to QUIT a running job (stronger)
^Z    press Ctrl-Z to STOP a running job, it can be resumed later
jobs  type jobs for a list of stopped jobs (and background jobs)
fg    type fg to continue a job in the foreground
bg    type bg to continue a job in the background
kill  kill a job, e.g. kill %1, or kill -KILL %2
wait  wait for all background jobs to finish

You can also use fg and bg with a job number, if you have several jobs in the list.

You can start a job in the background: put an &-symbol at the end of the command. This works well for jobs that write to a file, but not for interactive jobs. Things might get messy if you have a background job that writes to the terminal.

If you forget the % with kill, it will try to kill by process-id instead of job number.  You don’t want to accidentally kill PID 1!

An example:

vi /etc/apache2/vhosts.d/ids.conf   # start editing a file
^Z                                  # Ctrl-Z stops (suspends) vi; it becomes job %1
jobs                                # shows vi as a stopped job
find / >find.out &                  # start a long find in the background; it becomes job %2
jobs                                # shows both jobs
fg 2                                # bring the find job into the foreground
^Z                                  # stop it again
jobs
bg 2                                # resume the find job in the background
jobs
kill %2                             # kill the find job (note the %)
fg                                  # back to editing in vi


March 25, 2015

Keeping up with noisy blog aggregators using PlanetFilter

I follow a few blog aggregators (or “planets”) and it’s always a struggle to keep up with the number of posts that some of these get. The best strategy I have found so far is to filter them so that I remove the blogs I am not interested in, which is why I wrote PlanetFilter.

Other options

In my opinion, the first step in starting a new free software project should be to look for a reason not to do it :) So I started by looking for another approach and by asking people around me how they dealt with the firehoses that are Planet Debian and Planet Mozilla.

It seems like a lot of people choose to "randomly sample" planet feeds and only read a fraction of the posts that are sent through there. Personally however, I find there are a lot of authors whose posts I never want to miss so this option doesn't work for me.

A better option that other people have suggested is to avoid subscribing to the planet feeds, but rather to subscribe to each of the author feeds separately and prune them as you go. Unfortunately, this whitelist approach is a high maintenance one since planets constantly add and remove feeds. I decided that I wanted to follow a blacklist approach instead.

PlanetFilter

PlanetFilter is a local application that you can configure to fetch your favorite planets and filter the posts you see.

If you get it via Debian or Ubuntu, it comes with a cronjob that looks at all configuration files in /etc/planetfilter.d/ and outputs filtered feeds in /var/cache/planetfilter/.

You can either:

  • add file:///var/cache/planetfilter/planetname.xml to your local feed reader
  • serve it locally (e.g. http://localhost/planetname.xml) using a webserver, or
  • host it on a server somewhere on the Internet.

The software will fetch new posts every hour and overwrite the local copy of each feed.

A basic configuration file looks like this:

[feed]
url = http://planet.debian.org/atom.xml

[blacklist]

Filters

There are currently two ways of filtering posts out. The main one is by author name:

[blacklist]
authors =
  Alice Jones
  John Doe

and the other one is by title:

[blacklist]
titles =
  This week in review
  Wednesday meeting for

In both cases, if a blog entry contains one of the blacklisted authors or titles, it will be discarded from the generated feed.
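
Conceptually the filtering step is very simple. The following Python sketch shows the idea (it is not PlanetFilter's actual code), using the feedparser library to fetch a planet feed and drop blacklisted entries:

import feedparser

BLACKLISTED_AUTHORS = {"Alice Jones", "John Doe"}
BLACKLISTED_TITLES = ["This week in review", "Wednesday meeting for"]

def keep_entry(entry):
    # Drop an entry if its author is blacklisted or its title contains
    # any of the blacklisted title fragments.
    author = entry.get("author", "")
    title = entry.get("title", "")
    if author in BLACKLISTED_AUTHORS:
        return False
    if any(fragment in title for fragment in BLACKLISTED_TITLES):
        return False
    return True

feed = feedparser.parse("http://planet.debian.org/atom.xml")
kept = [e for e in feed.entries if keep_entry(e)]
print("Kept %d of %d entries" % (len(kept), len(feed.entries)))

The real tool then re-serialises the kept entries into a new feed, which is what ends up in /var/cache/planetfilter/.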

Tor support

Since blog updates happen asynchronously in the background, they can work very well over Tor.

In order to set that up in the Debian version of planetfilter:

  1. Install the tor and polipo packages.
  2. Set the following in /etc/polipo/config:

     proxyAddress = "127.0.0.1"
     proxyPort = 8008
     allowedClients = 127.0.0.1
     allowedPorts = 1-65535
     proxyName = "localhost"
     cacheIsShared = false
     socksParentProxy = "localhost:9050"
     socksProxyType = socks5
     chunkHighMark = 67108864
     diskCacheRoot = ""
     localDocumentRoot = ""
     disableLocalInterface = true
     disableConfiguration = true
     dnsQueryIPv6 = no
     dnsUseGethostbyname = yes
     disableVia = true
     censoredHeaders = from,accept-language,x-pad,link
     censorReferer = maybe
    
  3. Tell planetfilter to use the polipo proxy by adding the following to /etc/default/planetfilter:

     export http_proxy="localhost:8008"
     export https_proxy="localhost:8008"
    

Bugs and suggestions

The source code is available on repo.or.cz.

I've been using this for over a month and it's been working quite well for me. If you give it a go and run into any problems, please file a bug!

I'm also interested in any suggestions you may have.

Devops and Old Git Branches

A guest blog post I wrote on managing git branches when doing devops.

When doing Devops we all know that using source code control is a “good thing” — indeed it would be hard to imagine doing Devops without it. But if you’re using Puppet and R10K for your configuration management you can end up having hundreds of old branches lying around — branches like XYZ-123, XYZ-123.fixed, XYZ-123.fixed.old and so on. Which branches to cleanup, which to keep? How to easily cleanup the old branches? This article demonstrates some git configurations and scripts  that make working with hundreds of git branches easier…

Go to Devops and Old Git Branches to read the full article.

March 24, 2015

Skill Improvements versus Interface Designs for eResearchers

The increasing size of datasets is a critical issue for eResearch, especially given that they are expanding at a rate greater than improvements in desktop application speed, suggesting that HPC knowledge is requisite. However, knowledge of such systems is not common.

read more

March 23, 2015

A quick walk through Curtin

What do you do when you accidentally engaged a troll on twitter? You go for a walk of course.



I didn't realize there had been a flash flood in Canberra in 1971 that killed seven people, probably because I wasn't born then. However, when I ask people who were around then, they don't remember without prompting either, which I think is sad. I only learnt about the flood because of the geocache I found hidden at the (not very well advertised) memorial today.



This walk was inspired by one from Best Bush, Town and Village Walks in and around the ACT by Marion Stuart. I was disappointed that the guide book doesn’t mention the flash flood, however, and skips the memorial.



       



Interactive map for this route.



Tags for this post: blog pictures 20150323-curtin photo canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs



Comment

Mee (noodle) and Nasi (rice) Goreng Recipe/s

This is based on recipes online and an interpretation by a local restaurant that I used to frequent. While there are other alternative recipes that possibly taste better, I find that this is the quickest and easiest version.

- sugar

- curry powder

- chilli sauce

- soy sauce

- tomato sauce

- eggs

- chicken, prawns, and/or seafood mix

- egg noodles (any kind)

- lemon juice (optional)

- oyster sauce (optional)

- garlic (optional)

- ginger (optional)

- onion (optional)

- tomatoes (optional)

- tofu (optional)

- vegetables (optional, type is your choice)

Coat chicken with bicarbonate of soda if desired (as a meat tenderiser; this step is not required at all if the chicken is diced into small enough pieces and cooked well) and then wash it off in cold water. Marinate chicken in fish sauce, sugar, garlic and pepper (optional step). Fry off chicken, tofu, onion, garlic, ginger, etc... in a pan. Create a sauce using tomato sauce, soy sauce, chilli sauce, curry powder, sugar, etc... Cook the sauce and add noodles/rice when ready. Garnish everything with chopped lettuce and fried shallots if desired.



The following is what it looks like.

http://indaily.com.au/food-and-wine/2014/09/23/adam-liaws-mee-goreng/

March 22, 2015

Twitter posts: 2015-03-16 to 2015-03-22

March 21, 2015

Narrabundah trig and 16 geocaches

I walked to the Narrabundah trig yesterday, along the way collecting 15 of the 16 NRL themed caches in the area. It would have been all 16, except I can't find the last one for the life of me. I'm going to have to come back.



I really like this area. It’s scenic, has nice trails, and you can’t tell you’re in Canberra unless you really look for it. It seemed lightly used, to be honest; I think I saw three other people the entire time I was there. I encountered more dogs off lead than people.



 



Interactive map for this route.



Tags for this post: blog pictures 20150321-narrabundah photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger; Forster trig



Comment

A more accessible online world will benefit everyone.

An empty wheelchair at the bottom of the stairs

PSA: If you are a web professional, work in a digital agency or build mobile apps, please read this article now: Taking the social model of disability online

Done? Great.

"The social model of disability reframes discussion of disability as a problem of the world, rather than of the individual. The stairs at the train station are the problem, rather than using a wheelchair."

El Gibbs has reminded me of question time during Gian Wild's keynote at Drupal Downunder in 2012. Gian asserts that accessibility guidelines are a legal requirement for everyone, not just Government. There was an audible gasp from the audience.

It's true that our physical environment needs to include ramps, lifts, accessible toilets, reserved parking spaces, etc in order to accommodate those with mobility needs. Multi-lingual societies require multi-lingual signage. There are hearing loops - but for some reason, this "social model" of accessibility doesn't seem to have extended online.

Making the digital world accessible, and counteracting the systemic discriminatory impact of failing to do so is something we must take seriously. We must build this in during planning and design, we must make it easy for content editors to maintain WCAG compliance AFTER a site or app is delivered.

Building accessibility features in from the beginning also means it costs less to implement, and delivers a double win of making the whole team more mindful of these issues to begin with. It should be part of the acceptance criteria, it should be part of the definition of done.

I'd like to see us tackle these issues directly in Drupal core. If you're interested in keeping track of accessibility issues in Drupal, you might like to follow drupala11y on twitter, and check out issues on drupal.org that have been tagged with "accessibility"

Accessibility traps might not affect you now, but they will. This is probably affecting people you know right now. People who silently struggle with small font sizes, poor contrast, cognitive load, keyboard traps, video without captions. 

My own eyesight and hearing are not what they were.  My once able parents now require mobility aids. My cousin requires an electric wheelchair. A friend uses a braille reader, and yet I still forget.  It's not front and centre for me, but it should be. Let's all take a moment to think about how we can focus on making our online and digital world more accessible for everyone. It really does benefit us all.

An Introduction to Supercomputers

Very much a minor update to the presentation I gave in 2013, this talk provides a definition of supercomputers, high performance computing, and parallel programming, their use and current metrics, the importance and dominance of the Linux operating system in these areas, as well as some practical hands-on examples.

An Introduction to Supercomputers. Presentation to Linux Users of Victoria Beginners Workshop, 21st March, 2015

March 20, 2015

2015 Arctic Sea Ice Maximum Annual Extent Is Lowest On Record | NASA

A quick trip to Namadgi

I thought I'd drop down to the Namadgi visitors centre to have a look during lunch because I hadn't been there since I was a teenager. I did a short walk to Gudgenby Hut, and on the way back discovered this original border blaze tree. It's stacked on pallets at the moment, but is apparently intended for display one day. This is how much of the ACT's border was marked originally -- blazes cut on trees.



 



Interactive map for this route.



Tags for this post: blog pictures 20150320-namadgi photo canberra bushwalk namadgi border

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches



Comment

Our privacy is about to be serially infringed – ABC The Drum

March 19, 2015

Links: WW1 Maps, Shawshank, Microservices, Dev Interviewing


Two very different numbers

Omnishambles

While watching another disappointing day in Australian political life unfold,  I wondered: How long has it been since Tony Abbott declared, “good government starts today“?

It’s such a great example of Abbott’s hopelessness: He survived as Prime Minister after a failed party room spill motion only to deliver another classic clanger. Good job, Tony.

Then I checked the date and realised it had only been 38 days since the attempted spill. It seems much longer because barely a day goes by without a spectacular cock-up or gaffe by Abbott or one of his ministers.

Laura Tingle wrote yesterday that “we are being governed by idiots and fools“, excoriating the Abbott government for recklessness and incompetence, hinting at a deeper problem in our political system. Jonathan Green picked up where Tingle left off, suggesting Australia’s next great reforms “will be of this stagnant polity itself”. We can only hope.

I don’t want to juggle the date arithmetic next time I ponder how long we’ve been blessed with “good government”. The obvious* solution is a Twitter account to remind everyone*, right?

Thus, an automated, single-serving Twitter account that tweets the number of days since the spill, with a topical news item and photo, neither of which tend to reflect well on the Prime Minister.

Violence Against Women

Late yesterday evening, a wise voice caught me off guard: There’s a lot of men making a lot of noise about data retention today. Where’s that noise when a woman is killed every week by a partner or ex-partner?

If central Sydney can undergo substantial social and commercial upheaval after the deaths of two young men in “king hit” attacks, surely 8 intimate partner homicides (and 22 total suspicious deaths) of women so far this year would elicit some response? History suggests otherwise.

So we must make more noise.

My meagre contribution today is, yes, a single-serving Twitter account. I know it’s silly and practically meaningless, but hopefully people will see it, share it, and support women like Rosie Batty who are doing the really important work.

It will tweet updated figures from two sources:

First, Guardian Australia has a page for women who have died “where police have later laid charges against their partners or ex-partners”. (It’s a mouthful of legalese because they have to be careful about affecting trials.)

Second, Destroy the Joint’s Counting Dead Women Australia team maintains a Facebook post that documents every woman who has died violently, and follows what happens after. It’s based on a UK project of the same name.

38 days since “good government” began. 22 women violently killed in Australia this year. Two very different numbers.

Minimalist VHF Software Defined Radio Part 2

Shortly after I published the first post on a simple VHF SDR, Brady KC9TPA started making suggestions about optimising the code. So I encouraged him to have a look into the transmit side. How can we take a baseband modem signal (like GMSK) and convert it up to a HF IF frequency like 10.7 MHz using the STM32F4 DAC?

After a busy month (and not much sleep) Brady has done it! The following figures explain how it works:

Normally we would use a baseband DAC, mixer, LO, and crystal filter to generate a signal at HF (top). However Brady has shown it is possible to use a much simpler architecture (bottom).

So with the STM32F4, some clever software, and a buffer amplifier, he has generated a 10.7MHz HF signal. The DAC runs at 2MHz, which creates images (aliases) spaced every 1 MHz. The Band Pass Filter (BPF) selects just the image you want, e.g. 10.7 MHz in our case. The BPF doesn’t have to be very demanding like an Xtal filter, as the other images are 1MHz away. It is possible to tune the exact frequency a few hundred kHz in software.
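
To see why a simple band-pass filter around 10.7 MHz can pick out a usable image, here is a tiny Python sketch that lists where the DAC images land at n*fs ± f0. The digital IF of 0.7 MHz is an assumption chosen purely so that one image falls at 10.7 MHz; the actual IF used in Brady's experiment may differ.

fs = 2.0   # DAC sample rate, MHz
f0 = 0.7   # assumed digital IF, MHz (illustrative only)

images = sorted({n * fs + s * f0
                 for n in range(0, 7)
                 for s in (+1, -1)
                 if n * fs + s * f0 > 0})
print(images)   # ..., 9.3, 10.7, 11.3, ... - a BPF around 10.7 MHz selects the 5*fs + f0 image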

Compared to a baseband IQ design this architecture doesn’t need two DACs, and doesn’t have any IQ balance issues.

He used a GMSK modem signal as the baseband signal, however it could have easily been SSB, analog FM, or FreeDV. This is basically a baseband to HF SSB exciter. With a suitable BPF it could easily be tuned to anywhere on the lower HF bands. Software could then be used to tune the tx frequency within that band.

Brady received and sampled the 10.7MHz signal using an off-the-shelf SDR and it demodulated perfectly. Here are a few photos showing his experimental set up, just an STM32F4 Discovery board and a buffer amplifier connected to the DAC. Note the sharp edges on the scope plot – this indicates lots of juicy HF content that we can tune to. He hasn’t added a BPF yet. The last plot is the GMSK signal as received by our demodulator running in Octave.

Our next step will be to mix this signal to VHF and add a PA to produce a 1 Watt 2M signal, to support our VHF FreeDV work. Please contact us if you can help us with a VHF PA design!

This design and the previous post that demonstrated the HF rx side suggests that the SM1000 could be modified to be a HF SDR transceiver. It already has a microphone and speaker amplifier, and even runs FreeDV out of the box! We would need to add a BPF, PA, and some gain on the rx side.

There is still a question over the STM32F4 internal ADC, e.g. its inter-modulation performance when used in over-sampled mode (thanks Glen English for pointing this out). Some more work is required there. However this architecture is not limited to the STM32F4 – any uC connected to a few M-sample/s DAC and ADC (internal or external) will do. That’s the great thing about radios based on gcc C code and nearly no hardware!

Indo-Chinese Chilli Chicken (or Prawn) Recipe

This is based on recipes online and an interpretation by a local fusion restaurant that I used to frequent. While there are other alternative recipes that possibly taste better, I find that this is the quickest and easiest version.

- chicken (purchase diced for quicker preparation time. This recipe also works very well with prawns if you're more keen on seafood.)

- onion

- capsicum

- tomato sauce

- soy sauce

- chilli sauce

- tomatoes (fresh or canned and diced, optional)

- egg (optional)

- cornflour (optional)

- garlic (optional)

- ginger (optional)

- spring onion (optional)

- lemon juice (optional)



Coat chicken with bicarbonate of soda if desired (as a meat tenderiser; this step is not required at all if the chicken is diced into small enough pieces and cooked well) and then wash it off in cold water. Marinate chicken in cornflour, egg, salt and pepper (optional step). Fry off chicken in a pan. Create a sauce using tomato sauce, soy sauce and chilli sauce and add to the pan (add water to the mixture if it reduces too far over time). Add onion and capsicum to the pan as well to cook through. Add garlic, ginger, lemon juice, etc... to taste. Goes well with rice or crusty bread.



The following is what it looks like.

https://nishkitchen.wordpress.com/category/indo-chinese/

https://nishkitchen.wordpress.com/tag/chilli-chicken-recipe/



http://www.indianfoodforever.com/indo-chinese/chinese-chilli-chicken.html

http://food.ndtv.com/recipe-boneless-chilli-chicken-98809

http://www.boldsky.com/cookery/non-vegetarian/chicken/indo-chinese-chilli-garlic-chicken-recipe-053009.html?PageSpeed=noscript

March 18, 2015

Goodwin trig

I talk about urban trigs, but this one takes the cake. Concrete paths, street lighting, and a 400 meter walk. I bagged this one on the way home from picking something up in Belconnen. To be honest, I can't see myself coming here again.



   



Interactive map for this route.



Tags for this post: blog pictures 20150318-goodwin photo canberra bushwalk trig_point belconnen

Related posts: Harcourt and Rogers Trigs; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger



Comment

LUV Beginners March Meeting: An Introduction to High Performance Computing Using Linux

Mar 21 2015 12:30
Mar 21 2015 16:30
Location: 

RMIT Building 91, 110 Victoria Street, Carlton South

Of the top five hundred supercomputers in the world today, 97% of them run Linux. This is no accident, as Linux offers the best platform for efficient and scalable code. In this introductory session, LUV members will be introduced to the core concepts and architecture behind supercomputing, high-performance computing, and parallel processing, along with an introductory session on an actual HPC system.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

March 21, 2015 - 12:30

read more

March 17, 2015

SM1000 Part 12 – Testing in the US

Walter, K5WH has one of the 3 pre-beta SM1000 units. He writes:

Here’s a pic of the operations setup of the SM1000 on the air today from Houston Texas, into my HPSDR radio. With average Power down to 3 watts even. Made successful contacts to Mel-K0PFX and Gerry-N4DV. After working the audio levels a little, had reports of nice transmitted audio, and the received audio was very clean sounding as well. We were even fortunate enough to have a station breaking in with us from Benin West Africa, TY2BP Pat.

So not only working a couple of stateside stations, but first DX as well. Great success with the SM1000! Walter has used the SM1000 with his HPSDR and TS-480 radios at power levels between 4 and 75W.

Pictures of depression

A warning light is pulsing on the control panel in front of you, but it can wait. You’ll get to it in a moment. So many things to do.

A polite, persistent bleeping began at some point. You weren’t paying enough attention to recall when. It’s ever so slightly out of phase with the warning light.

You feel a dull rumbling through the seat, the floor, between your joints. The room shifts on its axis, as if it’s falling away from under you.

Darkness. A klaxon splutters and honks. Rotating beacons cut the room into contorted still images. Orange, blue, orange, blue.

You watch a wall puncture, crack, and tear. The air around you whistles out into silence.

Metal grinds through metal. It would sound like two trains carving through each other, but for the vacuum.

Then the walls peel away.

Floating. Alone. Adrift. Bewildered.

In depression, no one can hear you scream.

Gravity

Late last year, I had another crash. (Episode is a silly word.) I should’ve seen it coming. Or, I did see it coming, but pretty much anything else short of anchovies is more pleasant than actually dealing with it.

I have no right or reason to be depressed. There are contributing factors, for sure, but no root cause. In every other respect, life is grand. But that’s not how depression works. It’s a parasite, sucking out every feeling until you’re a dead-eyed husk… except guilt. That one it nurtures.

What’s weird is having a graphical representation of the fall. Check it out: Metadata! The quantified self!

Physical memory utilisation on SLIVER

This is a collectd chart of the RAM utilisation in my desktop computer. SLIVER has two big monitors, a nice video card, proper headphones, and so on. It’s where the work gets done, and it’s a dead zone from late November to mid February. My GitHub activity chart looks much the same.

Things improved in February, but I’m still taking a break from work. I need to get my shit together, and don’t want to disappoint anyone if I hit another wall. See that gap in March? Another wall!

But I’m out of the dead zone.

On good days, I’ve been seeing friends, doing personal projects, science experiments, and learning new things. On bad days, sleeping, watching television, reloading web pages. I’m still trading the occasional people-heavy event for a couple of bad days to “recover”. Pfft. That’ll get better.

It sucks being away from work. Lots of big changes and exciting things going on. But I’m grateful for the support, understanding, and time away. Back soon.

– — –

The big difference this time around is hope. Psychologically, I know I can beat depression a hundred times worse, because I did. Financially, I can survive a siege of non-functional depression because I’ve had three years to build a war chest to outlast it. Personally and professionally, I’m more confident because I know where I fit, and what I need to learn.

So, it’s been a shitty few months. But it’s going to be okay.

2001: A Space Baby

Google Breakpad and the post crash experience

Google Breakpad has many components to it, but at the basic level it lets you capture information at the time a crash occurs and upload that to the net. A really cute part of Breakpad is that the binary doesn't need to have the debug symbols in it, you don't even need to have them on the client machine at any location. When you build version $githash then you use a breakpad tool to copy out the debug symbols into separate files. When the user discovers a crash they upload a minidump file to a server of your selecting. Then you can combine the extracted symbols from build time and the minidump file to generate a backtrace with line number information. So software users don't have to know about gdb or lldb or whatnot and how to make a backtrace and where to paste it.







I recently updated FontForge's use of breakpad to use a small server on localhost to report the bug. The application dmg file for fontforge will soon also include the extracted symbols for the build. By telling breakpad to use a local server, that server can look up the symbols that are shipped and generate a human readable backtrace with line number information. Because it's also a web interface and running locally, it can spawn a browser on itself. So instead of getting the Mac dialog supplied by the OS X crash reporter app telling you that there was a crash, you get a web page telling you the same thing. But the web page can use jQuery/Bootstrap (or $ui tool of choice) to ask what the user was doing and offer many ways to proceed from there, depending on how the user wants to report things. The https://gist.github.com/ site can be used to report without any login or user accounts. It's also rather handy as a place to check larger backtraces that might be, maybe, 50-100kb.



But once you can upload to gist, you can get a http and other URL links to the new gist. So it makes sense from there to offer to make a new github issue for the user too. And in that new issue include the link to the gist page so that developers can get at the full backtrace. It turns out that you can do this last part, which requires user login to github, by redirecting to github/.../issues/new and passing title and body GET parameters. While there is a github API, to report a new issue using it you would need to do OAuth first. But in the libre world it's not so simple to have a location to store the OAuth secure token for next time around. So the GET redirect trick nicely gets around that situation.
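
As a rough illustration of that redirect trick, a "new issue" URL with a pre-filled title and body can be built like this in Python (the repository name and text are placeholders, and this is obviously not FontForge's actual reporting code):

from urllib.parse import urlencode

def new_issue_url(owner, repo, title, body):
    # GitHub's /issues/new page accepts title and body as GET parameters,
    # so no OAuth token is needed; the user just logs in and presses submit.
    params = urlencode({"title": title, "body": body})
    return "https://github.com/%s/%s/issues/new?%s" % (owner, repo, params)

url = new_issue_url("example-org", "example-repo",
                    "Crash while editing a glyph",
                    "Full backtrace: https://gist.github.com/...")
print(url)   # open this in a browser to pre-fill the issue form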





For those interested in this, the gist upload and the callback to subsequently make a github issue are both available. Google Breakpad hands over the minidump to a POST method which then massages the minidump into the backtrace and spawns a browser on itself. The GET serves up all the html, css, js, and other assets to the browser, and that served html/js is what I link to at the start of the paragraph; it is where the actual upload/reporting of the backtrace takes place.



The only thing left to do is to respond to the backtraces that come in and everybody gets a more stable FontForge out of the deal. It might be interesting to send off reports to a Socorro server too so that statistics month on month can be easily available.



FWIW

Today I emailed Julie Collins MP, and senators Catryna Bilyk, Carol Brown, Jacqui Lambie, Helen Polley, Lisa Singh and Anne Urquhart concerning data retention. For the record, and in case it helps anyone else who wants to contact their representatives and senators, here’s what I wrote:

Dear NAME/TITLE,

I am writing regarding the Telecommunications (Interception and Access) Amendment (Data Retention) Bill 2014. As I am sure you are very busy, I will be as brief as I can.

The distinction the bill makes between metadata (so-called “non-content data”) and content is grossly misleading; once you have enough of it, metadata is just as privacy invasive, if not more so, than the actual content of communications, and as such should only be collected with proper judicial oversight, i.e. after a warrant is obtained.

Retaining this data for the entire Australian population is mass surveillance, nothing more, nothing less, and is completely inappropriate in a modern democratic society.

Tinkering around the edges as Labor is suggesting with amendments to protect journalists’ sources is misguided at best; the only way to protect such sources effectively would be to not retain the sources’ data either, and given that you can’t know who they are, the only way to achieve this would be to not retain anyone’s data at all.

Finally, mandatory data retention won’t help to catch any criminal with even a shred of intelligence, as it can be trivially circumvented by the use of overseas communications providers, virtual private networks and the like.

In summary, I am completely opposed to mandatory data retention in Australia. As my representative, I’m asking you to reject this bill.

Yours faithfully,

Tim Serong

March 15, 2015

Twitter posts: 2015-03-09 to 2015-03-15

Memorable Quotes

- They like their cats fried with garlic and washed down with beer in Vietnam's specialist restaurants.



Some diners even falsely believe that by eating a cat's spine they will gain the feline's agility.



Although no official statistics are available, eating cat meat in Vietnam is by all accounts almost as popular as eating dog meat, something of a tradition in the South-east Asian nation, despite the import of both meats being illegal.

...

Animal rights groups say cats and dogs are smuggled across the border from China, Thailand and Laos to feed the Vietnamese trade.



Residents of Hanoi say they see few cats and dogs roaming the streets.

http://www.smh.com.au/world/cat-on-the-menu-outrage-at-vietnamese-trade-in-felines-20150205-136ras.html

- Dog vs. Cat

In this gastronomic sampling of Chinese food, I'd have to go with dog. The meat was much more tender with a pleasant flavor. Cat on the other hand was average and not something to really look forward to eating.

http://thoughtcatalog.com/mark-wiens/2013/07/what-does-dog-and-cat-meat-taste-like/

- "My life is rather full. I have a full time job and numerous hobbies in addition to copy editing Wikipedia."...

http://www.mirror.co.uk/news/technology-science/technology/man-makes-47000-wikipedia-edits-5106883

- Statistics: you can sensationalise anything with the right statistics. Case in point being the ad for the TV program about what really happens in Bali: "one Australian dies every 9 days in Bali". With the number of Australians visiting Bali that is probably unsurprising. So if they stayed in Australia they were more of a chance to die as one Australian dies every 3 1/2 minutes in Australia.

https://plus.google.com/109656415348418300485/posts

- A 'notorious molester' at Knox Grammar School had memorial gates erected in his honour with the inscription 'He touched us all', an inquiry has been told.

http://www.skynews.com.au/news/national/2015/02/24/school-put-up-memorial-for--molester-.html

- If you use a search engine, you will discover a wealth of material and, after reading and attmepting to apply some of it to your situation, you will enjoy a sense of satisfaction and achievement that I would in no way wish to deprive you of...

http://unix.stackexchange.com/questions/187140/centos-software-raid-1-issue

- @fruit: To quote the great Homer Simpson, "Aw, you can come up with statistics to prove anything, Kent. Forty percent of all people know that."

https://www.ozbargain.com.au/node/184124

- "It's like watching a truck jack-knife on a road," Oliver says, clearly relishing the suspense. "It's like, 'It didn't crash this time. Let's give it five minutes, then it's going into a ditch'."



Oliver, host of Last Week Tonight, the satirical take on the world's news and current affairs that is fast becoming one of the world's most popular programs of its type, describes Abbott as "a car crash of a human being", albeit terrific fodder for comedy.

...

"Tony Abbott is an objectively fascinating man," Oliver insists. "The fact he's the leader of a country is in itself appealing as a comic. What's nice is being able to present someone who people have not seen here [in the US] and just to give them a glimpse into other people's pain, as well as their own."

http://www.smh.com.au/entertainment/tv-and-radio/tony-abbott-is-a-car-crash-of-a-human-being-says-comedian-john-oliver-20150302-13sog5.html

- "This paint job sends a direct message back to perpetrators that their wild urinating on this wall is not welcome," said Julia Staron of the St. Pauli's Community of Interest group to Reuters. "The paint protects the buildings and the residents and most importantly it sends a signal this behaviour is not on."

http://www.cbc.ca/news/trending/german-city-uses-water-repellent-paint-to-splash-public-urinators-with-their-own-pee-1.2985123

- Work on the embassy was stopped in 1985, after it was determined that the building was so riddled with listening devices implanted by Soviet workers that the structure was in effect a multistory microphone. Washington and Moscow, as well as the Administration and Congress, have been haggling over what to do with the building ever since.

http://www.nytimes.com/1992/06/20/world/deal-made-on-bugged-us-moscow-embassy.html

- The Prime Minister has once again left onlookers shocked, and probably a little confused, but for once it wasn't what was coming out of his mouth, rather what was going into it.



While on a tour of a produce farm in Tasmania, Tony Abbott was seen to be munching on a raw onion - skin and all.

Onion, anyone?



"Better than any other onions I've eaten in a long time," Mr Abbott was reported as saying.



Images of the odd onion-fest immediately began doing the rounds online, with one media outlet labelling Mr Abbott "The Minister for Onions."



Mr Abbott was touring Charlton Farm Produce near Devonport in Tasmania on Friday when all of a sudden, he picked up the onion and ate it skin-on merely commenting that it was delicious and not shedding a single tear.



The PM, who appeared to be grimacing slightly, but otherwise may as well have been eating an apple, took the onion in his hand, and casually chomped into it while watching the grading of the vegetable.

http://www.smh.com.au/federal-politics/political-news/tony-abbott-shocks-as-he-eats-a-raw-onion-whole-20150313-143syz.html

Evolution

I saw this on Twitter today:

I’m going to leave aside the possibility that this is a plot by someone else to ruin Justin D’Agostino’s life by forging an email to Kelly Ellis, as I’ve seen similar sentiments posted too many times (i.e. more than never), and I’m fucking sick of it.

Assuming for a moment that the egg-donor hypothesis is correct, if you are insufficiently evolved to control your urges (or if you share any of the opinions stated in the email above), then you are insufficiently evolved to warrant employment. Please leave and make room for someone else.

Books for Sale – Part 2

I’m doing a book clean-out. The following are all for sale. Remainders will be given away to charity or something. Pickup is from either my house (Dominion Rd/Balmoral, Auckland), or I can meet during the week near my work in Wyndham Street in the Auckland CBD.

Prices as marked; discount if you want to buy more than 5 or so. Links may not match the exact edition I am selling.

If you are interested in any please contact me via email ( simon@darkmere.gen.nz ) or over twitter ( @slyall ). Sale will run to end of April or so.

See Part 1 for more books

Business

Commentary / Opinion / Speculation / Politics

Technical

Travel / Misc

 


Books for sale – Part 1

I’m doing a book clean-out. The following are all for sale. Remainders will be given away to charity or something. Pickup is from either my house (Dominion Rd/Balmoral, Auckland), or I can meet during the week near my work in Wyndham Street in the Auckland CBD.

Prices as marked; discount if you want to buy more than 5 or so. Links may not match the exact edition I am selling.

If you are interested in any please contact me via email ( simon@darkmere.gen.nz ) or over twitter ( @slyall ) Sale will run to end of April or so.

See Part 2 for more books

Science Fiction / Fantasy

Deryni Books by Katherine Kurtz, all paperbacks of used quality unless otherwise named.

  • Deryni Rising – $4
  • Deryni Checkmate – $4
  • High Deryni – $4
  • Camber of Culdi (2 copies) -$4 each
  • The Bishops Heir (Hardback, ripped jacket) – $4
  • The Quest for Saint Camber – $4
  • The Deryni Archives – $4

Science Fiction Short Story Collections

Sci-Fi Novels

Other Fiction

History


March 14, 2015

hacksa2015 – we won!

The previous week we were informed that we were the winners of the first hacksa competition!

This was really awesome; we had put in a bit of work that week and it validates my ideas about how to approach a hackathon, which I blogged about previously. You can see our proof of concept web application at http://phaze.space

I think what helped us over the line was that we had a working web application that actually ‘did something’, or a ‘minimum viable product’ in the parlance: we demonstrated the primary user experience (generate musical playlists when you don’t know what to choose) along with various potential features illustrated by button placeholders.

There was a cash prize, and some music, and headphones, and a membership in a co-working space which we donated to the runner-up because we all have day jobs and wouldn’t be able to use it.  For me though the best prize was tickets to the NetWorkPlay conference held in Adelaide last week.  This was a completely different scene, this was a media industry conference (mostly documentary film-makers, and a mix of other film industry and media) and I met some different and quite interesting people.

One takeaway from NetWorkPlay as a software guy was research showing that most younger people directly use youtube as a search engine instead of google when searching for media. This was interesting; my first instinct (habit?) is to use google or other ‘traditional’ search engines even when searching for videos, which ends up with me on YouTube anyway. I learned quite a few other interesting things, and more importantly had to move out of my comfort zone and had a good time interacting with people I would never have likely crossed paths with.

So thanks to my team members (you know who you are) for an awesome effort, and I’m looking forward to govhack 2015!

I’d also like to thank the competition organisers, including madeinkatana.com , SA Music Development Office, Musitec and Flinders New Ventures Institute, and the sponsors for the generous prizes.

brendanscott

Today is π day, at least in the US, where they think it’s a good idea to order dates by neither most nor least significant digits (3/14/15). The 14th of March is hailed in geekiness as π day because, in US date representations it’s 3.14 – the first 3 digits of the constant π.  However, today isn’t just any old π day. Today is a super π day because the two digits of the year 3.14.15 make up the 3rd and 4th decimal places of the constant.
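
A quick sanity check of that digit claim in Python:

import math

# 3/14/15 read as a decimal is 3.1415, which matches the first six characters of pi.
assert str(math.pi).startswith("3.1415")
print(str(math.pi)[:6])   # prints 3.1415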

Make the most of your super π day, because it won’t be happening again!

Unless, that is, you decide, perhaps temporarily, to join some Orthodox Churches and observe the Julian calendar, in which case you can have your π and eat it again in 13 days’ time.



March 13, 2015

Response to "Adopting Microservices at Netflix: Lessons for Team and Process Design"

NGINX has released an article entitled Adopting Microservices at Netflix: Lessons for Team and Process Design. For a high level article it reads well, but closer consideration suggests that it is fraught with problems, not least some rather simplistic panacea attitudes.

read more

LUV Beginners April Meeting: What's in the Cloud?

Apr 18 2015 12:30
Apr 18 2015 16:30
Location: 

RMIT Building 91, 110 Victoria Street, Carlton South

Daniel Jitnah will have a look at virtualisation technology in Linux, including a brief demo of setting up and managing virtual machines. Then he will talk about what "the cloud" really is and its relation to virtualisation and will demonstrate a very simple cloud system: OpenNebula... just as an example. He'll also have a look at what are the common offerings for Linux based cloud software systems.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

April 18, 2015 - 12:30

read more

March 12, 2015

clintonroy

Walked into work fairly early.

Dead tired by the end of the day.

Signed onto sleepio to give them a burl. Looks like they’d like me to get a fitbit..



Filed under: diary

March 11, 2015

clintonroy

Awful day.

Sleep was disrupted by neighbours moving their cars around at two thirty am to make way for builders coming. Why two thirty? fucked if I know.

Finally dragged myself out of bed tired and headachy, headed into work.

Did manage to get some stuff done at work before the jack hammering started, and I left immediately in the early afternoon.

Struggling to stay awake till a sensible hour while catching up on a few of these here diary entries.



Filed under: diary

clintonroy

Walked home.

Conference stuff.



Filed under: diary

clintonroy

Walked into work stupidly early.

Home early, straight to bed.



Filed under: diary

clintonroy

Library day.



Filed under: diary

clintonroy

Coder dojo, much to my surprise I was introducing my simplified markov chain generator. It didn’t get much traction, but I felt that was because most of the kids had something they were interested in doing.

Next week I’m apparently talking all about 3D printing…



Filed under: diary

clintonroy

Deliberately didn’t walk to work as I’m giving blood today.

Stuffing myself silly with food and water, and only one coffee. So much water.

Was vindicated when I gave blood in record time with no dramas, and I didn’t get even a minor bruise at the extraction point.

Stuffed myself silly straight after.



Filed under: diary

clintonroy

Walked to work stupidly early.

Conference work that night.



Filed under: diary

clintonroy

The monthly Python Users group meeting. There was a great talk about calculating statistics, like variances, in a single pass.
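
For the curious, the single-pass (online) variance idea is usually something like Welford's method; here is a tiny Python sketch of it (not the talk's actual code):

def online_mean_variance(samples):
    # Welford's single-pass algorithm: update the mean and the sum of squared
    # deviations one sample at a time, without storing the whole dataset.
    n = 0
    mean = 0.0
    m2 = 0.0
    for x in samples:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    variance = m2 / (n - 1) if n > 1 else 0.0
    return mean, variance

print(online_mean_variance([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))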



Filed under: diary

clintonroy

Walked home.

Conference stuff.



Filed under: diary

clintonroy

Was on the motorbike for the first time in a while today. Riding into work very early is great, riding home at peak hour is silly however.



Filed under: diary

March 10, 2015

WebVTT Audio Descriptions for Elephants Dream

When I set out to improve accessibility on the Web and we started developing WebSRT – later to be renamed to WebVTT – I needed an example video to demonstrate captions / subtitles, audio descriptions, transcripts, navigation markers and sign language.

I needed a freely available video with spoken text that either already had such data available or that I could create it for. Naturally I chose “Elephants Dream” by the Orange Open Movie Project , because it was created under the Creative Commons Attribution 2.5 license.

As it turned out, the Blender Foundation had already created a collection of SRT files that would represent the English original as well as the translated languages. I was able to reuse them by merely adding a WEBVTT header.

Then there was a need for a textual audio description. I read up on the plot online and finally wrote up a time-aligned audio description. I’m hereby making that file available under the Creative Commons Attribution 4.0 license. I’ve added a few lines to the metadata headers so it doesn’t confuse players. Feel free to reuse at will – I know there are others out there that have a similar need to demonstrate accessibility features.

March 09, 2015

FreeDV and Codec 2 2015 Road Map

Last week I had a great chat with Gary Pearce KN4AG from Ham Radio Now:

Which brings me to my plans for 2015……..

2015 Open Digital Voice Road Map

I’m pretty excited about where Open Source Digital Radio is going in 2015. My goals for this year are:

  • A “sub zero” negative SNR FreeDV HF mode.
  • VHF FreeDV mode(s) that demonstrates a TDMA repeater in a 5kHz channel, diversity reception, high bit rate audio/data, and operation at 10dB less C/No than analog FM or 1st generation DV systems.
  • SM1000 speaker-mic and SM2000 VHF radio in production.

This figure shows the work packages we need to execute to make this happen:

The amount of blue shading indicates progress to date. It’s doable, but we could use some help – see below. The custom VHF radio product is the audacious stretch goal, but we need custom hardware to demonstrate the potential of open source VHF DV. In particular decent modems, TDMA, diversity, and variable bit rate (VBR). Sub-carrier VBR is explained in the next section. I’m no Icom, but do have the huge open source advantage of “owning the stack” and a very impressive brains trust that is forming up behind this work.

Sub Carrier Variable Bit Rate

HF and VHF channels vary wildly in quality due to multipath fading. If the channel is really poor it’s essential to push through with low quality, but intelligible, speech. However if the channel is good, you may have 20dB more signal to noise ratio available. Current digital (and to a lesser extent analog) voice radio systems just toss that 20dB away. Systems are designed for the worst case, with power heaped on to survive the fading.

I’ve been brainstorming a sub-carrier idea. We send the must-have information on a full power, low bit rate carrier to survive the worst case. On HF this might be 450 bit/s. At the same time, we transmit a higher bit rate sub-carrier at, say, 10dB less power. When the channel is good, we use the high bit rate sub-carrier. When it’s poor, we fall back to just the low bit rate carrier.

This is better than a protocol-based system that negotiates the bit rate, as it requires no back channel and can handle rapid changes in channel quality. It does however require a demodulator that can determine when the sub-carrier is viable. This needs some smarts to avoid rapid high/low quality switching, or for Ham radio could be manually switchable so the operator has full control.
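
As an illustration of the “smarts” involved (not actual FreeDV code), the switching decision could be a simple hysteresis state machine; the thresholds, frame counts and SNR estimates below are made-up numbers for the sketch:

# Illustrative sketch only: hysteresis for selecting the high rate sub-carrier.
# Thresholds and SNR figures are invented for the example, not FreeDV parameters.
class SubCarrierSelector:
    def __init__(self, enable_snr_db=8.0, disable_snr_db=4.0, hold_frames=25):
        # The enable threshold sits above the disable threshold (hysteresis), and
        # the condition must hold for several consecutive frames before switching,
        # to avoid rapid high/low quality flipping.
        self.enable_snr_db = enable_snr_db
        self.disable_snr_db = disable_snr_db
        self.hold_frames = hold_frames
        self.use_high_rate = False
        self.count = 0

    def update(self, snr_estimate_db):
        # Call once per demodulated frame with the sub-carrier SNR estimate.
        if self.use_high_rate:
            wants_switch = snr_estimate_db < self.disable_snr_db
        else:
            wants_switch = snr_estimate_db > self.enable_snr_db
        self.count = self.count + 1 if wants_switch else 0
        if self.count >= self.hold_frames:
            self.use_high_rate = not self.use_high_rate
            self.count = 0
        return self.use_high_rate

A manual override for Ham use would simply bypass update() and pin use_high_rate to the operator's choice.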

Sending a second, low power carrier has negligible impact on our total transmit power, e.g. 11 W total rather than 10W (0.4dB). It does use more bandwidth, and will require some implementation tricks (e.g. two class C amplifiers at VHF, but no changes at HF with a linear amplifier). The addition of a low power sub-carrier will have a small impact on peak/average power ratio (PAPR) compared to multiple carriers at the same power. Another bonus.
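
As a quick check on that figure, adding a carrier at one tenth the power to a 10W signal raises the total by about 0.4dB:

# Worked check of the "11 W rather than 10W (0.4dB)" claim above.
import math

main_power_w = 10.0   # full power carrier
sub_power_w = 1.0     # sub-carrier at 10 dB less power
increase_db = 10 * math.log10((main_power_w + sub_power_w) / main_power_w)
print("%.2f dB" % increase_db)   # prints 0.41 dB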

It’s an idea we can implement as we have complete control over the stack. From microphone to antenna and back again. Go Open Source.

I’d like to try this trick for both HF and VHF.

Daniel has illustrated it well for GMSK carriers:

Old school, analog colour TV worked like this. You get the black and white image information and sync signals on one carrier, then a sub-carrier carries the colour information. In weak signal situations you get black and white. Hmm, I wonder if this analog analogy is too dated. Showing my age.

Codec 2 and AMBE 2+

I recently compared a few AMBE 2+ samples with Codec 2, both at 2400 bit/s. I think they are close enough, except for the $100k up-front license fee, several $ per-unit license fee, policy of eternal lock-in through standardisation, patent encumbrances, and being forbidden by law to modify and learn anything about the algorithm. Not sure why Codec 2 is louder. I listen to them through my laptop speaker as that’s close to a radio.

Codec      Female   Male
Original   female   male
AMBE 2+    female   male
Codec 2    female   male

I cheated a bit, and ran both of them through a 200Hz high pass filter. AMBE does a better job between 0 and 200 Hz. However most radio systems use an audio bandwidth of 400 to 2600 Hz. Indeed analog FM in VHF applications is sharply filtered at 400Hz so various control tones are not heard.
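
If you want to reproduce that comparison yourself, a 200Hz high pass filter is a few lines with scipy; the filter order and file names here are just assumptions for the sketch, not the exact processing used on these samples:

# Sketch: apply a 200 Hz high pass filter to a speech sample before comparison.
# Filter order and file names are illustrative only.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, samples = wavfile.read("sample_in.wav")
sos = butter(4, 200, btype="highpass", fs=rate, output="sos")
# axis=0 handles both mono and (samples, channels) layouts
filtered = sosfiltfilt(sos, samples.astype(np.float64), axis=0)
wavfile.write("sample_hp200.wav", rate, filtered.astype(np.int16))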

This isn’t the best we can do with Codec 2, it can be improved further.

Get Involved

There is a fine group of people who are already working with me. They are having fun and working on a really useful and meaningful project. Here are some examples:

  • Rick, KA8BMA and I have had a great year building the SM1000 together.
  • Mel Whitten K0PFX and his band of merry Hams have done a fantastic job promoting HF DV and testing FreeDV.
  • Daniel, VA7DRM has been zooming around British Columbia testing VHF diversity and prototypes of a 2nd generation VHF DV system.
  • Richard Shaw has been doing fine work on cross platform build systems and packaging of FreeDV and Codec 2.
  • Many people have provided invaluable, high level technical advice on RF, Modems, HF and VHF channels and DSP. You know who you are.
  • Many other contributors, large and small, through donations, testing, donations of test equipment, promotion, porting, contract development or small patches, some of which have expressly asked to remain anonymous (e.g. for commercial reasons).

Don’t Just Talk. Act

What did you do during the Open Source Digital Radio revolution?

Contributions count much more than suggestions. Suggestions add to my TODO list, contributions make it shorter. I’m just one guy working part time for nearly zero income and have my limits. If you can help make my TODO list shorter rather than longer please contact me!

If you have a great, must-have suggestion, then I will politely ask you to step up and submit a patch for it. Now you have my attention!

Can you code in C, use Octave/Matlab, write a radio or protocol specification, provide me with RF test equipment, or use a soldering iron to design and make VHF radios? Please contact me.

If you don’t have time or skills then you can still support this work by buying a SM1000 or simply donating. I also need my sig-gen and spec-an either replaced or fixed! Let me know if you need my shipping address :-)


Stromlo and Brown Trigs

So, the Facebook group set off for our biggest Trig walk yet today -- Stromlo and Brown Trigs. This ended up being a 15km walk with a little bit of accidental trespass (sorry!) and about 600 meters of vertical rise. I was expecting Stromlo to be prettier to be honest, but it wasn't very foresty. Brown was in nicer territory, but it's still not the nicest bit of Canberra I've been to. I really enjoyed this walk though, thanks to Simon, Tony and Jasmine for coming along!



                     



Interactive map for this route.



Tags for this post: blog pictures 20150309-stromlo_and_brown photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger



Comment

March 08, 2015

Technocracy: a short look at the impact of technology on modern political and power structures

Below is an essay I wrote for some study that I thought might be fun to share. If you like this, please see the other blog posts tagged as Gov 2.0. Please note, this is a personal essay and not representative of anyone else :)

In recent centuries we have seen a dramatic change in the world brought about by the rise of and proliferation of modern democracies. This shift in governance structures gives the common individual a specific role in the power structure, and differs sharply from more traditional top down power structures. This change has instilled in many of the world’s population some common assumptions about the roles, responsibilities and rights of citizens and their governing bodies. Though there will always exist a natural tension between those in power and those governed, modern governments are generally expected to be a benevolent and accountable mechanism that balances this tension for the good of the society as a whole.

In recent decades the Internet has rapidly further evolved the expectations and individual capacity of people around the globe through, for the first time in history, the mass distribution of the traditional bastions of power. With a third of the world online and countries starting to enshrine access to the Internet as a human right, individuals have more power than ever before to influence and shape their lives and the lives of people around them. It is easier than ever for people to congregate, albeit virtually, according to common interests and goals, regardless of their location, beliefs, language, culture or other age-old barriers to collaboration. This is having a direct and dramatic impact on governments and traditional power structures everywhere, and is both extending and challenging the principles and foundations of democracy.

This short paper outlines how the Internet has empowered individuals in an unprecedented and prolific way, and how this has changed and continues to change the balance of power in societies around the world, including how governments and democracies work.

Democracy and equality

The concept of an individual having any implicit rights or equality isn’t new, let alone the idea that an individual in a society should have some say over the ruling of the society. Indeed the idea of democracy itself has been around since the ancient Greeks in 500 BCE. The basis for modern democracies lies with the Parliament of England in the 11th century at a time when the laws of the Crown largely relied upon the support of the clergy and nobility, and the Great Council was formed for consultation and to gain consent from power brokers. In subsequent centuries, great concerns about leadership and taxes effectively led to a strongly increased role in administrative power and oversight by the parliament rather than the Crown.

The practical basis for modern government structures with elected officials had emerged by the 17th century. This idea was already established in England, but also took root in the United States. This was closely followed by multiple suffrage movements of the 19th and 20th centuries which expanded the right to participate in modern democracies from (typically) adult white property owners to almost all adults in those societies.

It is quite astounding to consider the dramatic change from very hierarchical, largely unaccountable and highly centralised power systems to democratic ones in which those in power are expected to be held to account. This shift from top down power to distributed, representative and accountable power is an important step in understanding modern expectations.

Democracy itself is sustainable only when the key principle of equality is deeply ingrained in the population at large. This principle has been largely infused into Western culture and democracies, independent of religion, including in largely secular and multicultural democracies such as Australia. This is important because an assumption of equality underpins stability in a system that puts into the hands of its citizens the ability to make a decision. If one component of the society feels another doesn’t have an equal right to a vote, then outcomes other than their own are not accepted as legitimate. This has been an ongoing challenge in some parts of the world more than others.

In many ways there is a huge gap between the fearful sentiments of Thomas Hobbes, who preferred a complete and powerful authority to keep the supposed ‘brutish nature’ of mankind at bay, and the aspirations of John Locke who felt that even governments should be held to account and the role of the government was to secure the natural rights of the individual to life, liberty and property. Yet both of these men and indeed, many political theorists over many years, have started from a premise that all men are equal – either equally capable of taking from and harming others, or equal with regards to their individual rights.

Arguably, the Western notion of individual rights is rooted in religion. The Christian idea that all men are created equal under a deity presents an interesting contrast to traditional power structures that assume one person, family or group has more rights than the rest, although ironically various churches have not treated all people equally either. Christianity has deeply influenced many political thinkers and the forming of modern democracies, many of which look very similar to the mixed regime system described by Saint Thomas Aquinas in his Summa Theologiae essays:

Some, indeed, say that the best constitution is a combination of all existing forms, and they praise the Lacedemonian because it is made up of oligarchy, monarchy, and democracy, the king forming the monarchy, and the council of elders the oligarchy, while the democratic element is represented by the Ephors: for the Ephors are selected from the people.

The assumption of equality has been enshrined in key influential documents including the United States Declaration of Independence, 1776:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

More recently in the 20th Century, the Universal Declaration of Human Rights goes even further to define and enshrine equality and rights, marking them as important for the entire society:

Whereas recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world… (1st sentence of the Preamble to the Universal Declaration of Human Rights)

All human beings are born free and equal in dignity and rights. (Article 1 of the United Nations Universal Declaration of Human Rights (UDHR))

The evolution of the concepts of equality and “rights” is important to understand as they provide the basis for how the Internet is having such a disruptive impact on traditional power structures, whilst also being a natural extension of an evolution in human thinking that has been hundreds of years in the making.

Great expectations

Although only a third of the world is online, in many countries this means the vast bulk of the population. In Australia over 88% of households are online as of 2012. Constant online access starts to drive a series of new expectations and behaviours in a community, especially one where equality has already been so deeply ingrained as a basic principle.

Over time a series of Internet-based instincts and perspectives have become mainstream, arguably driven by the very nature of the technology and the tools that we use online. For example, the Internet was developed to “route around damage” which means the technology can withstand technical interruption by another hardware or software means. Where damage is interpreted in a social sense, such as perhaps censorship or locking away access to knowledge, individuals instinctively seek and develop a workaround, and you see something quite profound. A society has emerged that doesn’t blindly accept limitations put upon it. This is quite a challenge for traditional power structures.

The Internet has become both an extension and an enabler of equality and power by massively distributing both to ordinary people around the world. How have power and equality been distributed? When you consider what constitutes power, four elements come to mind: publishing, communications, monitoring and enforcement.

Publishing – in times gone by, the ideas that spread beyond a small geographical area either traveled by word of mouth via trade routes, or made it into a book. Only the wealthy could afford to print and distribute the written word, so publishing and dissemination of information was a power limited to a small number of people. Today the spreading of ideas is extremely easy, cheap and can be done anonymously. Anyone can start a blog, use social media, and the proliferation of information creation and dissemination is unprecedented. How does this change society? Firstly there is an assumption that an individual can tell their story to a global audience, which means an official story is easily challenged not only by the intended audience, but by the people about whom the story is written. Individuals online expect both to have their say, and to find multiple perspectives that they can weigh up, and determine for themselves what is most credible. This presents significant challenges to traditional powers such as governments in establishing an authoritative voice unless they can establish trust with the citizens they serve.

Communications – individuals have always had some method to communicate with individuals in other communities and countries, but up until recent decades these methods have been quite expensive, slow and oftentimes controlled. This has meant that historically, people have tended to form social and professional relationships with those close by, largely out of convenience. The Internet has made it easy to communicate, collaborate with, and coordinate with individuals and groups all around the world, in real time. This has made massive and global civil responses and movements possible, which has challenged traditional and geographically defined powers substantially. It has also presented a significant challenge for governments to predict and control information flow and relationships within the society. It also created a challenge for how to support the best interests of citizens, given that what is good for a geographically defined nation state doesn’t always align with what is good for an online and trans-nationally focused citizen.

Monitoring – traditional power structures have always had ways to monitor the masses. Monitoring helps maintain rule of law through assisting in the enforcement of laws, and is often upheld through self-reporting as those affected by broken laws will report issues to hold detractors to account. In just the last 50 years, modern technologies like CCTV have made monitoring of the people a trivial task, where video cameras can record what is happening 24 hours a day. Foucault spoke of the panopticon gaol design as a metaphor for a modern surveillance state, where everyone is constantly watched on camera. The panopticon was a gaol design wherein detainees could not tell if they were being observed by gaolers or not, enabling, in principle, fewer gaolers to control a large number of prisoners. In the same way prisoners would theoretically behave better under observation, Foucault was concerned that omnipresent surveillance would lead to all individuals being more conservative and limited in themselves if they knew they could be watched at any time. The Internet has turned this model on its head. Although governments can more easily monitor citizens than ever before, individuals can also monitor each other and indeed, monitor governments for misbehaviour. This has led to individuals, governments, companies and other entities all being held to account publicly, sometimes violently or unfairly so.

Enforcement – enforcement of laws is a key role of a power structure, to ensure the rules of a society are maintained for the benefit of stability and prosperity. Enforcement can take many forms including physical (gaol, punishment) or psychological (pressure, public humiliation). Power structures have many ways of enforcing the rules of a society on individuals, but the Internet gives individuals substantial enforcement tools of their own. Power used to be about who had the biggest sword, or gun, or police force. Now that major powers and indeed, economies, rely so heavily upon the Internet, there is a power in the ability to disrupt communications. In taking down a government or corporate website or online service, an individual or small group of individuals can have an impact far greater than in the past on power structures in their society, and can do so anonymously. This becomes quite profound as citizen groups can emerge with their own philosophical premise and the tools to monitor and enforce their perspective.

Property – property has always been a strong basis of law and order and still plays an important part in democracy, though perspectives towards property are arguably starting to shift. Copyright was invented to protect the “intellectual property” of a person against copying at a time when copying was quite a physical business, and when the mode of distributing information was very expensive. Now, digital information is so easy to copy that it has created a change in expectations and a struggle for traditional models of intellectual property. New models of copyright have emerged that explicitly support copying (copyleft) and some have been successful, such as with the Open Source software industry or with remix music culture. 3D printing will change the game again as we will see in the near future the massive distribution of the ability to copy physical goods, not just virtual ones. This is already creating havoc with those who seek to protect traditional approaches to property but it also presents an extraordinary opportunity for mankind to have greater distribution of physical wealth, not just virtual wealth. Particularly if you consider the current use of 3D printing to create transplant organs, or the potential of 3D printing combined with some form of nano technology that could reassemble matter into food or other essential living items. That is starting to step into science fiction, but we should consider the broader potential of these new technologies before we decide to arbitrarily limit them based on traditional views of copyright, as we are already starting to see.

By massively distributing publishing, communications, monitoring and enforcement, and with the coming potential massive distribution of property, technology and the Internet have created an ad hoc, self-determined and grassroots power base that challenges traditional power structures and governments.

With great power…

Individuals online find themselves more empowered and self-determined than ever before, regardless of the socio-political nature of their circumstances. They can share and seek information directly from other individuals, bypassing traditional gatekeepers of knowledge. They can coordinate with like-minded citizens both nationally and internationally and establish communities of interest that transcend geo-politics. They can monitor elected officials, bureaucrats, companies and other individuals, and even hold them all to account.

To leverage these opportunities fully requires a reasonable amount of technical literacy. As such, many technologists are on the front line, playing a special role in supporting, challenging and sometimes overthrowing modern power structures. As technical literacy is permeating mainstream culture more individuals are able to leverage these disrupters, but technologist activists are often the most effective at disrupting power through the use of technology and the Internet.

Of course, whilst the Internet is a threat to traditional centralised power structures, it also presents an unprecedented opportunity to leverage the skills, knowledge and efforts of an entire society in the running of government, for the benefit of all. Citizen engagement in democracy and government beyond the ballot box presents the ability to co-develop, or co-design the future of the society, including the services and rules that support stability and prosperity. Arguably, citizen buy-in and support is now an important part of the stability of a society and success of a policy.

Disrupting the status quo

The combination of improved capacity for self-determination by individuals along with the increasingly pervasive assumptions of equality and rights has led to many examples of traditional power structures being held to account, challenged, and in some cases, overthrown.

Governments are able to be held more strongly to account than ever before. The Open Australia Foundation is a small group of technologists in Australia who create tools to improve transparency and citizen engagement in the Australian democracy. They created Open Australia, a site that made the public parliamentary record more accessible to individuals through making it searchable, subscribable and easy to browse and comment on. They also have projects such as Planning Alerts which notifies citizens of planned development in their area, Election Leaflets where citizens upload political pamphlets for public record and accountability, and Right to Know, a site to assist the general public in pursuing information and public records from the government under Freedom of Information. These are all projects that monitor, engage and inform citizens about government.

Wikileaks is a website and organisation that provides a way for individuals to anonymously leak sensitive information, often classified government information. Key examples include video and documents from the Iraq and Afghanistan wars, about the Guantanamo Bay detention camp, United States diplomatic cables and millions of emails from Syrian political and corporate figures. Some of the information revealed by Wikileaks has had quite dramatic consequences, with the media and citizens around the world responding to the information. Arguably, many of the Arab Spring uprisings throughout the Middle East from December 2010 were provoked by the release of the US diplomatic cables by Wikileaks, as it demonstrated very clearly the level of corruption in many countries. The Internet also played a vital part in many of these uprisings, some of which saw governments deposed, as social media tools such as Twitter and Facebook provided the mechanism for massive coordination of protests, but importantly also provided a way to get citizen coverage of the protests and police/army brutality, creating a global audience, commentary and pressure on the governments and support for the protesters.

Citizen journalism is an interesting challenge to governments because the route to communicate with the general public has traditionally been through the media. The media has presented for many years a reasonably predictable mechanism for governments to communicate an official statement and shape public narrative. But the Internet has made it possible for any individual to publish online to a global audience, and this has resulted in a much more robust exchange of ideas and a less clear cut public narrative about any particular issue, sometimes directly challenging official statements. A particularly interesting case of this was the Salam Pax blog during the 2003 Iraq invasion by the United States. Official news from the US would largely talk about the success of the campaign to overthrow Saddam Hussein. The Salam Pax blog provided the view of a 29 year old educated Iraqi architect living in Baghdad and experiencing the invasion as a citizen, which contrasted quite significantly at times with official US Government reports. This type of contrast will continue to be a challenge to governments.

On the flip side, the Internet has also provided new ways for governments themselves to support and engage citizens. There has been the growth of a global open government movement, where governments themselves try to improve transparency, public engagement and service delivery using the Internet. Open data is a good example of this, with governments going above and beyond traditional freedom of information obligations to proactively release raw data online for public scrutiny. Digital services allow citizens to interact with their government online rather than the inconvenience of having to physically attend a shopfront. Many governments around the world are making public commitments to improving the transparency, engagement and services for their citizens. We now also see more politicians and bureaucrats engaging directly with citizens online through the use of social media, blogs and sophisticated public consultation tools. Governments have become, in short, more engaged, more responsive and more accountable to more people than ever before.

Conclusion

Only in recent centuries have power structures emerged with a specific role for common individual citizens. The relationship between individuals and power structures has long been about the balance between what the power could enforce and what the population would accept. With the emergence of power structures that support and enshrine the principles of equality and human rights, individuals around the world have come to expect the capacity to determine their own future. The growth of and proliferation of democracy has been a key shift in how individuals relate to power and governance structures.

New technologies and the Internet have gone on to massively distribute the traditionally centralised powers of publishing, communications, monitoring and enforcement (with property on the way). This distribution of power through the means of technology has seen democracy evolve into something of a technocracy, a system which has effectively tipped the balance of power from institutions to individuals.

References

Hobbes, T. The Leviathan, ed. by R. Tuck, Cambridge University Press, 1991.

Aquinas, T. Sum. Theol. i-ii. 105. 1, trans. A. C. Pegis, Whether the old law enjoined fitting precepts concerning rulers?

Uzgalis, William, “John Locke”, The Stanford Encyclopedia of Philosophy (Fall 2012 Edition), Edward N. Zalta (ed.), http://plato.stanford.edu/archives/fall2012/entries/locke/.

See additional useful references linked throughout essay.

Twitter posts: 2015-03-02 to 2015-03-08

Static networking in Ansible the quick and dirty way

I’m in the process of setting up a server at home to replace an old one. I’m maintaining the new one via Ansible to try and keep it as tidy as possible. Part of the setup involves setting up a bridge interface so that I can run KVM virtual machines on the box.

In order to make the box a little more stable I decided to make the ethernet settings static rather than assigned via DHCP. Unfortunately Ansible doesn’t really have a nice standard way of setting up network ports (there are a few modules around but none in the main distribution). After looking around I decided just to make a simple Ansible role to handle the files.

The machine is running centos7. The networking initially looked like:

/etc/sysconfig/network-scripts/ifcfg-enp2s0
::::::::::::::
HWADDR=9C:B6:54:07:E8:49
TYPE=Ethernet
BOOTPROTO=dhcp
NAME=enp2s0
ONBOOT=yes
#

I decided the easiest way was to just manually create and copy the files. So I created a static_networking role.

roles/static_networking/handlers/main.yml
roles/static_networking/files/grey/ifcfg-enp2s0
roles/static_networking/files/grey/ifcfg-bridge0
roles/static_networking/tasks/main.yml
roles/static_networking/tasks/setup-redhat.yml

Inside the tasks the main.yml just loads up the setup-redhat.yml which is:

---
- name: copy files if they are listed in var
  copy: src={{ ansible_hostname }}/ifcfg-{{ item }} dest=/etc/sysconfig/network-scripts/ owner=root mode=0644
  with_items: static_interfaces
  notify:
  - restart network

Which is fairly simple. It just goes through a list of “static_interfaces” for a host and copies these files from the local machine to the machine I am setting up. If the copy makes any changes, it sends a notify.

For the machine “grey” I just create some entries in host_vars/grey.yml

static_interfaces:
 - enp2s0
 - bridge0

and then the files themselves:

roles/static_networking/files/grey/ifcfg-bridge0
::::::::::::::
DEVICE="bridge0"
ONBOOT="yes"
TYPE=Bridge
BOOTPROTO=static
IPADDR=10.1.1.28
NETMASK=255.255.255.0
GATEWAY=10.1.1.1
::::::::::::::
roles/static_networking/files/grey/ifcfg-enp2s0
::::::::::::::
DEVICE="enp2s0"
ONBOOT="yes"
NM_CONTROLLED="no"
BOOTPROTO="none"
BRIDGE=bridge0
HWADDR="9c:b6:54:07:e8:49"

which are the actual files to be copied. If any files are actually updated, the handler will be triggered:

roles/static_networking/handlers/main.yml 
---
# Called by "name" when network config files are changed
- name: restart network
  service: name=network state=restarted

Overall it seems to work and I only broke networking once (the IP on enp2s0 kept getting re-added until I forced NetworkManager to forget about it). I wouldn’t really recommend this sort of thing for non-trivial sites though. Keeping per-site configs in roles isn’t really the best way to do things.


March 07, 2015

[tech] Honey, I wrote my first Chrome extension!

I love reading Linux Weekly News. It's a great source of high quality Linux and FOSS journalism, and I've been a subscriber for years.

One mild annoyance I have with the site is the way articles are cross-linked. All the article URIs are in the format /Article/531114/, which isn't particularly descriptive about that article's content.

When faced with an article that links to another article, with perhaps a word of anchor text, it's hard to tell if the new article is worth opening in a tab, is indeed already open in a tab, or has been previously read. (Yes, the "visited link" colour can help to a small degree, but even then, it doesn't tell you which previously read article it is).

This is what God the W3C invented the title attribute for.

Back in April 2011, I emailed Jonathan Corbet and asked if his content management system could just do this, but it was apparently a bit tricky, and it got filed in the "feature request" bucket.

I was sufficiently irritated by this deficiency last Monday, when doing some heavy reading on a topic, and so I decided to take matters into my own hands, and also learn how to write a Chrome Extension into the bargain.

I was delighted to have scratched the itch less than 24 hours later and developed something that solved my particular problem - lwn4chrome, I'm calling it.

I'm just finalising an icon for it, and then I'll have a stab at putting it in the Chrome Web Store as a freebie.

I might even have a crack at writing a Firefox extension as well for completeness, but I suspect the bulk of LWN's readership is using Chrome or Chromium.

March 05, 2015

Acurite 02032CAUDI Weather Station

I found an Acurite Weather Center 02032CAUDI at Costco for $99, which seemed like a pretty good deal.

It includes the "colour" display panel and a 5-in-1 remote sensor that includes temperature, wind-speed and direction, humidity and rain gauge.

The colour in the display is really just a fancy background sticker with the usual calculator-style liquid-crystal display in front. It does seem that for whatever reason the viewing angle is extremely limited; even off centre a little and it becomes very dim. It has an inbuilt backlight that is quite bright; it is either off or on (3-levels) or in "auto" mode, which dims it to the lowest level at certain hours. Hacking in a proximity sensor might be a fun project. The UI is OK; it shows indoor and outdoor temperature/humidity, wind-speed/rain and is able to show you highs and lows with a bit of scrolling.

I was mostly interested in its USB output features. After a bit of fiddling I can confirm I've got it connected up to Meteobridge that is running on a Dlink DIR-505 and reporting to Weather Underground. One caveat is that you do need to plug the weather-station into a powered USB hub, rather than directly into the DIR-505; I believe because the DIR-505 can only talk directly to USB2.0 devices and not older 1.5 devices like the weather station. Another small issue is that the Meteobridge license is €65 which is not insignificant. Of course with some effort you can roll-your-own such as described in this series which is fun if you're looking for a project.

Luckily I had a mounting place that backed onto my small server cupboard, so I could easily run the cables through the wall to power and the DIR-505. Without this the cables might end up a bit of a mess. Combined with the fairly limited viewing angle afforded, finding somewhere practical to put the indoor unit might be one of the hardest problems.

Mounting the outdoor unit was fine, but mine is a little close to the roof-line so I'm not sure the wind-speed and direction are as accurate as if it were completely free-standing (I think official directions for wind-speed are something like free-standing 10m in the air). It needs to face north, both for the wind-direction and so that the included solar-panel that draws air into the temp/humidity sensor is running as much as possible (it works without this, but it's more accurate with the fan). One thing is that it needs to be mounted fairly level for the rain-gauge; it includes a small bubble-level on the top to confirm this. Firstly you'll probably find that most mount points you thought were straight actually aren't! Since the bubble is on the top, if you want to actually see it you need to be above it (obviously) which may not be possible if you're standing on a ladder and mounting it over your head. This may be a situation that inspires a very legitimate use of a selfie-stick.

It's a fun little device and fairly hackable for an overall reasonable price; I recommend.

On VMware and GPL

I do not believe any of the current reporting around the announced case has accurately described the issue; which I see as a much more subtle question of GPL use across API layers. Of course I don't know what the real issue is, because the case is sealed and I have no inside knowledge. I do have some knowledge of the vmkernel, however, and what I read does not match with what I know.

An overview of ESXi is shown below

overview of vmkernel and vmkapi

There is no question that ESXi uses a lot of Linux kernel code and drivers. The question as I see it is more around the interface. The vmkernel provides a well-described API known as vmkapi. You can write drivers directly to this API; indeed some do. You can download a SDK.

A lot of Linux code has been extracted into vmkLinux; this is a shim between Linux drivers and the vmkapi interface. The intent here is to provide an environment where almost unmodified Linux drivers can interface to the proprietary vmkernel. This means vendors don't have to write two drivers, they can re-use their Linux ones. Of course, large parts of various Linux sub-systems' API are embedded in here. But the intent is that this code is modified to communicate to the vmkernel via the exposed vmkapi layer. It is conceivable that you could write a vmkWindows or vmkOpenBSD and essentially provide a shim-wrapper for drivers from other operating systems too.

vmkLinux and all the drivers are GPL, and released as such. I do not think there could be any argument there. But they interface to vmkapi which, as stated, is an available API but part of the proprietary kernel. So, as I see it, this is a much more subtle question than "did VMware copy-paste a bunch of Linux code into their kernel". It goes to where the GPL crosses API boundaries and what is considered a derived work.

If nothing else, I think it would be good for everyone if this enforcement action increased clarity around that point.

March 04, 2015

Fried Fish with Butter Fried Potatoes Recipe

This is based on recipes online and an interpretation by a local restaurant that I used to frequent. While there are other alternative recipes that possibly taste better, I find that this is the quickest and easiest version.

- potatoes

- pre-made frozen, battered, fish

- sour cream

- tartare sauce

- salt

- butter

- bacon bits (optional)

- spring onion (optional)


Fry fish in pan. In the meantime, chop potatoes into rough chunks and place into microwave until soft (to reduce cooking time. Boiling can take a long time). When the fish is cooked, remove it from the heat. Place potato pieces into the pan with a knob of butter to give them a crunchy 'crust'. Add bacon bits to the same pan to add some 'crispness' as well.


To serve, garnish fish with lemon juice and tartare sauce and serve potatoes and bacon bits with salt, chopped spring onion, and sour cream.



http://dtbnguyen.blogspot.com.au/2012/02/butter-fried-potatoes-with-bacon-bits.html



Would go really well with a salad to help you cut through the fatty nature of the dish.

GNU Octave 3.8.2 Source Installation

GNU Octave is a high-level language, primarily intended for numerical computations. It makes a very good alternative to MATLAB.

Download from ftp://ftp.gnu.org/gnu/octave/



wget ftp://ftp.gnu.org/gnu/octave/octave-3.8.2.tar.bz2

Extract to /usr/local/src/OCTAVE



cd /usr/local/src/OCTAVE

tar xvjf octave-3.8.2.tar.bz2

read more

March 01, 2015

Twitter posts: 2015-02-23 to 2015-03-01

clintonroy

A quite full on day.

Woke up early because... that’s what I do. Headed out to Sunnybank library in the morning, CoderDojo, then back to The Edge for minicomicon, where I picked up a few small freebies, but didn’t spot anything that I felt like buying. I spent a little time coding up a simple Markov generator, hopefully simple enough for the CoderDojo folks to follow. After all that, out to Humbug.
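
For the curious, a word-level Markov text generator of the sort that suits a CoderDojo session fits in a couple of dozen lines of Python; the sketch below is only an illustration of the idea, not the code mentioned above.

# Minimal word-level Markov chain text generator (illustrative sketch only).
import random
from collections import defaultdict

def build_chain(text):
    # Map each word to the list of words that follow it in the source text.
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, length=20):
    # Random-walk the chain to produce a plausible-looking run of words.
    word = random.choice(list(chain))
    output = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

chain = build_chain(open("source.txt").read())
print(generate(chain))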



Filed under: diary

February 28, 2015

SM1000 Part 11 – Accepting Pre-orders!

The first batch of 100 SM1000s are being built in China right now and we estimate shipping will start in late March (since revised to late April, see the update below). Due to popular demand I am accepting pre-orders right now!

Australian customers can buy directly from my Store, rest of the world please use the Aliexpress Store for direct shipping from Shenzhen, China.

Thanks Rick KA8BMA and Edwin from Dragino for all your kind help!

Update

We have experienced some quality issues with the enclosure manufacturer. This is frustrating – we now have 100 SM1000s ready to ship but no enclosures for them! Anyway, Edwin has found a new supplier and we are in the process of getting the enclosures made. This means that shipping has slipped until late April. I apologise to those who have pre-purchased SM1000s on the basis of my previous promise of late March shipping. Happy to provide a refund, or please stand by and we will ship your order as soon as we can!

Oh, and we have sold nearly all of the first batch of 100! Thanks!

February 27, 2015

clintonroy

I went to bed really early last night due to my weird ongoing headache. I had a little help getting to sleep. This meant I basically had a full night’s sleep by three o’clock. So I ended up walking to work stupidly early and arriving before five am. I still had some residual effects of the whatever-the-heck headache in the morning, but it was gone by the evening.

The internet was really weird today, llamas and dresses for some reason.

Doing some conf stuff at The Edge. See three friends walk past on the walkway :)



Filed under: diary

Fried Rice Recipe

This is based on a family recipe, recipes online, and an interpretation by local restaurants that I used to frequent. While there are other alternative recipes that possibly taste better, I find that this is the quickest and easiest version.

- chinese sausage

- rice

- eggs

- onion

- garlic

- tomato sauce

- salt
- sugar

- soy sauce

- spring onion (optional)

 - dried shrimp (optional)

- shitake mushrooms (optional)

- lettuce (optional)

- fried shallot (optional)

- prawns (optional)

- Chinese BBQ Pork (also called char-siu/charsiu. See elsewhere on this blog for this recipe)



Sauté onion, garlic and Chinese sausage in a pan. Fry egg and then shred so that it can be mixed through rice more easily later on. Add rice and then add the rest of the diced/chopped ingredients. Add salt, sugar, soy sauce, etc... to taste. Garnish with shredded lettuce and fried shallots.



The following is what it looks like.

http://www.taste.com.au/recipes/1351/chinese+fried+rice

http://www.taste.com.au/recipes/15297/easy+fried+rice

http://www.taste.com.au/recipes/collections/fried+rice+recipes

A(nother) new era of WordPress

The other night at WordPress Sydney, I dropped a five minute brain-dump about some cool things going on in the web ecosystem that herald a new era of WordPress. That’s a decent enough excuse to blog for the first time in two years, right?

I became a WordPress user 9 years ago, not long after the impressive 2.0 release. I was a happy PyBlosxom user, but WordPress 2.0 hit a sweet spot of convenience, ease of use, and compelling features. It was impossible to ignore: I signed up for Linode just so I could use WordPress. You’re reading the same blog on (almost) the same Linode, 9 years later!

WordPress 2.0

WordPress

Fast forward to 2015 and WordPress powers 20% of the web. It’s still here because it is a great product.

It’s a great product because it’s built by a vibrant, diverse Open Source community with a fantastic core team, that cares deeply about user experience, that mentors and empowers new contributors (and grooms or cajoles them to become leaders), and isn’t afraid of the ever-changing web.

Another reason for the long term success of WordPress is that it’s built on the unkillable cockroach of the world wide web: PHP.

I won’t expound on the deficiencies of PHP in this post. Suffice to say that WordPress has thrived on PHP’s ubiquity and ease of adoption, while suffering its mediocrity and recent (albeit now firmly interrupted) stagnation.

HHVM

The HipHop Virtual Machine is Facebook’s high performance PHP runtime. They started work on an alternative because PHP is… wait for it… not very efficient.

Unless you’ve goofed something up, the slowest part of your PHP-based application should be PHP itself. Other parts of your stack may exhibit scaling problems that affect response times, but in terms of raw performance, PHP is the piggy in the middle of your web server and data stores.

“But like I said, performance isn’t everything.” — Andi Gutmans

What is the practical implication of “performance isn’t everything”? Slow response times, unhappy users, more servers, increased power utilisation, climate change, and death.

Facebook’s project was released in 2010 as the HipHop compiler, which transpiled PHP code into C++ code, which was then compiled into a gigantic monolithic binary, HTTP server included.

In early 2013, HipHop was superseded by HHVM, a jitting virtual machine. It still seemed pretty weird and awkward on the surface, but by late 2013 the HHVM developers added support for FastCGI.

So today, deployment of HHVM looks and feels familiar to anyone who has used php-fpm.

Want to strap a rocket to your WordPress platform? I strongly recommend experimenting with HHVM, if not putting it into production… like, say, Wikipedia.

Hack

Not content with nuking PHP runtime stagnation, the HHVM developers decided to throw some dynamite in the pants of PHP language stagnation by announcing their new Hack language. It’s a bunch of incremental improvements to PHP, bringing modern features to the language in a familiar way.

Imagine you could get in a DeLorean, go back to 2005, and take care of PHP development properly. You’d end up with something like Hack.

Hack brings performance opportunities to the table that the current PHP language alone could not. You’ve heard all those JavaScript hipsters (hi!) extolling the virtues of asynchronous programming, right? Hack can do that, without what some describe as “callback hell”.

Asynchronous programming means you can do things while you wait. Such as… turning database rows into HTML while more database rows are coming down the wire. Which is pretty much what WordPress does. Among other things.
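
To illustrate the shape of that idea (in Python's asyncio here, purely as an analogy, since Hack's async syntax differs), rows can be turned into markup as they arrive instead of waiting for the whole result set:

# Analogy in Python asyncio (not Hack): render rows to HTML while more rows
# are still arriving, rather than waiting for the full result set.
import asyncio

async def fetch_rows():
    # Stand-in for an async database cursor that yields rows as they arrive.
    for i in range(5):
        await asyncio.sleep(0.1)   # pretend database/network latency
        yield {"id": i, "title": "Post %d" % i}

async def render():
    html = []
    async for row in fetch_rows():
        # Each row becomes markup as soon as it lands; the event loop is free
        # to do other work during the awaits above.
        html.append("<li>%s</li>" % row["title"])
    return "<ul>" + "".join(html) + "</ul>"

print(asyncio.run(render()))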

Based on the WordPress team’s conservative approach to PHP dependency updates, it’s unlikely we’ll see WordPress using Hack any time soon. But it has let the PHP community (and particularly Zend) taste the chill wind of irrelevance, so PHP is moving again.

WP-API

Much closer to WordPress itself, the big change on the horizon is WP-API, which turns your favourite publishing platform into a complete and easy-to-use publishing API.

If you’re not familiar with APIs, think about it this way: If you cut off all the user interface bits of WordPress, but kept all the commands for managing your data, and then made them really easy to use from other applications or web sites, you’d have a WordPress API.

But what’s the point of stripping off all the user interface bits of WordPress? Aren’t they the famously good bits? Well, yes. But you could make even better ones built on top of the API!

Today, there’s a huge amount of PHP code in WordPress dedicated to making the admin user interface so damn good. There’s also a lot of JavaScript code involved, making it nice and interactive in your browser.

With WP-API, you could get rid of all that PHP code, do less work on the server, and build the entire admin user interface in the browser with JavaScript. That might sound strange, but it’s how most modern web applications are built today. WordPress can adapt… again!

One of the things I love about WordPress is that you can make it look like anything you wish. Most of the sites I’ve worked on don’t look anything like traditional blogs. WP-API kicks that up a notch.

If you’ve ever built a theme, you’ll know about “the loop”. It’s the way WordPress exposes data to themes, in the form of a PHP API, and lots of themers find it frustrating. Instead of WordPress saying, “here are the posts you wanted, do what you like”, it makes you work within the loop API, which drip-feeds posts to you one at a time.

WP-API completely inverts that. You ask WordPress for the data you want — say, the first ten posts in May — then what you do with it, and how, is 100% up to you.
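
As a sketch of what that looks like from the consumer's side, here is "the first ten posts in May" fetched over HTTP. The /wp-json/wp/v2/posts route and its date filters reflect the REST API as it later shipped in WordPress core, so treat the exact paths and parameters as assumptions; the site URL is made up.

# Illustrative sketch: fetch the first ten posts in May via the WordPress REST API.
# Route and parameter names follow the wp/v2 API that later landed in core.
import requests

resp = requests.get(
    "https://example.com/wp-json/wp/v2/posts",
    params={
        "after": "2015-05-01T00:00:00",
        "before": "2015-06-01T00:00:00",
        "per_page": 10,
        "order": "asc",
    },
)
resp.raise_for_status()

for post in resp.json():
    print(post["date"], post["title"]["rendered"])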

There’s way more potential for a WordPress API, though. A fully-featured mobile client, integration with legacy publishing systems at your newspaper, custom posting interfaces for specific kinds of users, etc., etc., etc.

The best bit is that WP-API is going to be part of WordPress. It’s a matter of “when”, not “if”, and core WordPress features are being built today with the WP-API merge in mind.

React

According to its creators, “React is a JavaScript library for building user interfaces”, but it’s way cooler than that. If you’re building complex, interactive interfaces (like, say, the admin back-end of a publishing platform), the React way of thinking is fireworks by the megaton.

For all the hype it enjoys today, Facebook launched React in 2013 to immense wailing and gnashing of teeth. It mixed HTML (presentation) and JavaScript (logic) in a way that reminded developers of the bad old days of PHP. They couldn’t see past it. Some still can’t. But that was always a facile distraction from the key ideas that inspired React.

The guts beneath most user interfaces, on the web or desktop, look like a mad scientist’s chemistry lab. Glass everywhere, weird stuff bubbling over a Bunsen burner at one end, an indecipherable, interdependent maze of piping, and dangerous chemical reactions… you’d probably lose a hand if you moved anything.

React is a champagne pyramid compared to the mad chemistry lab of traditional events and data-binding.

It stresses a one-way flow: Data goes in one end, user interface comes out the other. Data is transformed into interface definitions by components that represent logical chunks of your application, such as a tool bar, notification, or comment form.

Want to make a change? Instead of manipulating a specific part of the user interface, just change the data. The whole user interface will be rebuilt — sounds crazy, right? — but only the changes will be rendered.

The one-way data flow through logical components makes React-based code easy to read, easy to reason about, and cranks your web interface to Ludicrous Speed.

Other libraries and frameworks are already borrowing ideas, but based on adoption to date, number of related projects, and quality of maintenance, I reckon React itself will stick around too.

Connecting the Dots

It won’t happen overnight, but WP-API will dramatically reduce the amount of active PHP code in WordPress, starting with the admin back-end. It will become a JavaScript app that talks to the WP-API sooner than anyone suspects.

Front-end (read: theme) development will change at a slower pace, because rendering HTML on the server side is still the right thing to do for performance and search. But themers will have the option to ditch the traditional loop for an internal, non-remoting version of the WP-API.

There’ll be some mostly-dead code maintained for backwards compatibility (because that’s how the dev team rolls), but on the whole, the PHP side of WordPress will be a lean, mean, API-hosting machine.

Which means there’s going to be even more JavaScript involved. Reckon that’s going to be built the same way as today? Nuh-uh. One taste of React in front of WP-API, and I reckon the jQuery and Backbone era will be finished.

In WordPress itself, most of this will affect how the admin back-end is built, but we’ll also see some great WordPress-as-application examples in the near future. Think Parse-style app development, but with WordPress as the Open Source, self-hosted, user-controlled API services layer behind the scenes.

What about HHVM? You’re going to want your lean, mean, API-hosting machine to run fast and, in some cases, scale big. Unless the PHP team surprises everyone by embracing the JVM, I reckon the future looks more like HHVM than FPM (even with touted PHP 7 performance improvements).

Once HHVM is popular enough, having side-by-side PHP and Hack implementations of  core WordPress data grinding functions will begin to look attractive. If you’ve got MySQL on one side, a JSON consumer on the other, and asynchronous I/O available in between, you may as well do it efficiently. (Maybe PHP will adopt async/await. See you in 2020?)

End

Look, what I’m trying to say is that it’s a pretty good time to be caught up in the world of WordPress, isn’t it? :-)

Champagne Pyramid

February 26, 2015

Chicken Curry Recipe

This is based on a family recipe.

- chicken

- sugar

- salt

- pepper

- garlic

- curry

- onion

- carrot

- potato

- fish sauce

- coconut milk

- curry mix (powder or liquid)(optional)

- tomatoes (optional)



Marinate chicken in sugar/salt/pepper/garlic/curry powder mixture. Brown off chicken in pan. In the meantime, dice vegetables and put into microwave for short period to speed up cooking time. Put all vegetables into pan. Add coconut milk and possibly a curry mix (to boost the flavour) to pan to create sauce. Use fish sauce to taste. Goes well with white rice or else bread.



The following is what it looks like. 

http://www.taste.com.au/recipes/7378/coconut+chicken+curry

http://www.bbcgoodfood.com/recipes/1993658/homestyle-chicken-curry

Szechuan Pork Mince Recipe

This is based on recipes online and an interpretation by local restaurants that I used to frequent. While there are other alternative recipes that possibly taste better, I find that this is the quickest and easiest version.

- pork mince

- salt

- sugar

- pepper

- chilli bean paste

- rice wine

- soy sauce

- tofu (fried or fresh)

- soy sauce

- garlic (optional)

- ginger (optional)

- caramel (optional)

- green beans (optional)



Marinate pork mince in salt/sugar/pepper/rice wine/soy sauce. Fry off mince in wok/pan. Add chilli bean paste. Add sugar, pepper, soy sauce, caramel, etc. to taste. Slice tofu, put into microwave for 30 seconds and drain liquid, and stir through sauce. Fry off green beans in the meantime and add into mixture if you want at this point. Water down sauce if it gets too thick.



Goes well with an Asian chicken soup (use pre-made or make a quick one using carrots, celery, onion, chicken bones, water, salt, pepper, soy sauce, and fish sauce) and steamed white rice.



The following is what it looks like.

http://www.girlichef.com/2014/03/Szechuan-Green-Beans-with-Ground-Pork.html

http://www.cookinglight.com/food/in-season/green-bean-recipes/szechuan-green-beans-ground-pork