Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

October 31, 2014

JackPair – Secure Phone Calls using Codec 2

I’ve just found out about a new Kickstarter for JackPair, a device that enables secure phone calls over a mobile phone. It uses Codec 2.

Over the past 12 months I have been approached by a couple of groups interested in building a similar product (but not JackPair). These groups asked me to develop a modem that could pass data through a cell phone voice codec. Given I know modems and codecs it was a good fit. Quite a challenge too, to get 1200 – 2400 bit/s through a voice codec. To both groups I said I would only do the job if it was open source, and it never went any further.

I feel a product like this must be open source, in order to audit it and know it is really secure. So the software should be GPL and the hardware open. An end user must be able to (re)flash from blank silicon using their own trusted firmware. The paranoid could even do this every time they use it. Or solder their own device from scratch. That’s where I’m heading with my open source radio work – make the radio hardware trivial, and the software open and capable of running on commodity CPU.

The SM1000 has the hardware to build a JackPair type product, e.g. Codec 2, DSP capability, microphone and speaker amps, and line audio interfaces. It would need a different firmware load (modem, crypto). The SM1000 is open hardware, so a good starting point.

Clearly the JackPair is a product whose time has come. I support this sort of project (secure telephony for everybody) as I consider my government's response to terrorism more of a concern than terrorism itself. Good to see it happening, and nice to see Codec 2 helping make the world a better place.

Centrelink's PLAID broken

Jean Paul Degabriele, Victoria Fehr, Marc Fischlin, Tommaso Gagliardoni, Felix Günther, Giorgia Azzurra Marson, Arno Mittelbach, Kenneth G. Paterson. Unpicking PLAID. A cryptographic analysis of an ISO-standards-track authentication protocol.

Upon public release in 2009, PLAID was claimed to have been the subject of three years' cryptanalysis by the then Defence Signals Directorate. With that in mind, the sections at the end of the paper about misuse of CBC are more concerning than the exploitation of shill keys.

Identically partition disks.. the easy way!

I was just looking into a software RAID howto for no reason really, but I'm kinda glad I did! When you set up software RAID you want to make sure all disks are partitioned the same, right? So check this out:

Create partitions on /dev/sda identical to the partitions on /dev/sdb:

sfdisk -d /dev/sdb | sfdisk /dev/sda

That’s a much easier way ;)
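If you want a safety net, the same trick works via an intermediate file, which also leaves you with a backup of the partition table that you can restore later. A small sketch (the dump filename is just an example):

sfdisk -d /dev/sdb > sdb-parts.dump

sfdisk /dev/sda < sdb-parts.dump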

This gem is thanks to: http://www.howtoforge.com/how-to-create-a-raid1-setup-on-an-existing-centos-redhat-6.0-system

NTLM Authentication in Squid using Winbind.

Some old Windows servers require authentication through the old NTLM protocol. Luckily, with help from squid, samba and winbind, we can do this under Linux.

Some URLs much of this information was gathered from:

  • http://wiki.squid-cache.org/ConfigExamples/Authenticate/NtlmCentOS5
  • http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ntlm

HOW TO

In order to authenticate through winbind, we will be using it together with samba to connect to a Windows domain, so you will need a domain and the details for it, or all this will be for naught. I'll use some fake credentials for this post.

Required Packages

Let’s install all the required packages:



yum install squid krb5-workstation samba-common ntp samba-winbind authconfig

NTP (Network Time Protocol)

Kerberos and winbind can be a little touchy about date and time, so it's a good idea to use NTP on your network. I'll assume your domain controller (DC) will also be your NTP server, in which case let's set it up.

Comment out any lines that begin with server and create only one that points to your Active Directory PDC.



# vim /etc/ntp.conf

server pdc.test.lan

Now add it to the default runlevels and start it.



chkconfig ntpd on

/etc/init.d/ntpd start
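Once ntpd has been running for a few minutes, it's worth confirming it is actually syncing against your DC; ntpq should list the PDC as a peer:

# ntpq -p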

Samba, Winbind and Kerberos

We will use the authconfig package/command we installed earlier to configure Samba and Winbind and perform the join in one step, which makes things _SO_ much easier!!!

NOTE: If you don’t have DNS set up then you will need to add the DC to your hosts file, and it is important to use the name the DC machine knows itself as in AD.



authconfig --enableshadow --enablemd5 --passalgo=md5 --krb5kdc=pdc.test.lan \
--krb5realm=TEST.LAN --smbservers=pdc.test.lan --smbworkgroup=TESTLAN \
--enablewinbind --enablewinbindauth --smbsecurity=ads --smbrealm=TEST.LAN \
--smbidmapuid="16777216-33554431" --smbidmapgid="16777216-33554431" --winbindseparator="+" \
--winbindtemplateshell="/bin/false" --enablewinbindusedefaultdomain --disablewinbindoffline \
--winbindjoin=administrator --disablewins --disablecache --enablelocauthorize --updateall

NOTE: Replace pdc.test.lan with the FQDN of your DC server, TESTLAN with your workgroup, TEST.LAN with the full name of the domain/realm, and make sure you set '--winbindjoin' to a domain admin.

If that succeeds, let's test it:



# wbinfo -u

# wbinfo -g



If you are able to enumerate your Active Directory Groups and Users, everything is working.

Next let's test that we can authenticate with winbind:



# wbinfo -a



E.g.:



# wbinfo -a testuser

Enter testuser's password:

plaintext password authentication succeeded

Enter testuser's password:

challenge/response password authentication succeeded

Great, we have been added to the domain, so now we can set up squid for NTLM authentication.

SQUID Configuration

Squid comes with its own NTLM authentication binary (/usr/lib64/squid/ntlm_smb_lm_auth) which uses winbind, but as of Samba 3.x, samba bundles its own, which is the recommended binary to use (according to the squid and samba projects). So the binary we use comes from the samba-winbind package we installed earlier:



/usr/bin/ntlm_auth

Add the following configuration elements to the squid.conf to enable NTLM authentication:



#NTLM

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp

auth_param ntlm children 5

auth_param ntlm keep_alive on

acl ntlm proxy_auth REQUIRED

http_access allow ntlm



NOTE: The above allows anyone access as long as they authenticate themselves via NTLM; you could use further ACLs to restrict this more.

The ntlm_auth binary has other switches that might be of use, such as restricting users by group membership:



auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --require-membership-of=EXAMPLE+ADGROUP
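If authentication isn't working, it can help to test the helper directly from the shell before involving squid at all. A quick sketch using the fake credentials from earlier (the password is obviously a placeholder); a successful run should print something like NT_STATUS_OK:

# /usr/bin/ntlm_auth --username=testuser --password=testpassword

NT_STATUS_OK: Success (0x0)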

Before we are complete there is one more thing we need to do: for squid to be allowed to use winbind, the squid user (which was created when the squid package was installed) needs to be a member of the wbpriv group:



gpasswd -a squid wbpriv

IMPORTANT!

NTLM authentication WILL FAIL if you have "cache_effective_group squid" set; if you do, then remove it! This overrides the effective group, so squid isn't seen as part of the 'wbpriv' group, which breaks authentication!!!

/IMPORTANT!

Add squid to the runlevels and start it:



# chkconfig squid on

# /etc/init.d/squid start

Troubleshooting

Make sure you open the port in iptables; if squid is listening on 3128 then:



# iptables -I INPUT 1 -p tcp --dport 3128 -j ACCEPT

# /etc/init.d/iptables save

NOTE: The '/etc/init.d/iptables save' command saves the current running configuration so the new rule will be applied on reboot.
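If you want to test authentication end-to-end from a client machine, curl can speak NTLM to the proxy. A sketch using the fake domain details from this post (substitute your own user, password and proxy host):

curl --proxy-ntlm --proxy-user 'TESTLAN+testuser:password' -x http://localhost:3128 http://www.example.com/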

Happy squid-ing.

Reverse proxy using squid + Redirection

Squid – Reverse Proxy

In computer networks, a reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client as though they originated from the reverse proxy itself. While a forward proxy is usually situated between the client application (such as a web browser) and the server(s) hosting the desired resources, a reverse proxy is usually situated closer to the server(s) and will only return a configured set of resources.

See: http://en.wikipedia.org/wiki/Reverse_proxy

Configuration

Squid should already be installed, if not then install it:



yum install squid

Then we edit squid config:



vim /etc/squid/squid.conf

We add the following to the top of the file:



http_port 80 vhost

https_port 443 cert=/etc/squid/localhost.crt key=/etc/squid/localhost.key vhost

cache_effective_user squid

cache_effective_group squid

cache_peer 1.2.3.4 parent 80 0 no-query originserver login=PASS name=site1-http

cache_peer 1.2.3.5 parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=site2-ssl

cache_peer_domain site1-http site1.example.lan

cache_peer_domain site2-ssl site2.anotherexample.lan

acl bad_requests urlpath_regex -i cmd.exe \/bin\/sh \/bin\/bash default\.ida?XXX insert update delete select

http_access deny bad_requests

Now I’ll walk us through the above configuration.



http_port 80 vhost

https_port 443 cert=/etc/squid/localhost.crt key=/etc/squid/localhost.key vhost

This sets the http and https ports squid is listening on. Note the cert options for https: we can have clients speak https up to the proxy and use an unencrypted link for the last hop if we want, which is handy if for some reason the backend server doesn't support https.



cache_effective_user squid

cache_effective_group squid



Set the effective user and group for squid.. this may not be required, but doesn’t hurt.



cache_peer 1.2.3.4 parent 80 0 no-query originserver name=site1-http

cache_peer 1.2.3.5 parent 443 0 no-query originserver ssl sslflags=DONT_VERIFY_PEER name=site2-ssl

cache_peer_domain site1-http site1.example.lan

cache_peer_domain site2-ssl site2.anotherexample.lan

This is the magic: the first two lines tell squid which peer to reverse proxy for and what port to use. Note that if you use ssl, 'sslflags=DONT_VERIFY_PEER' is useful; otherwise, if you're using a self-signed cert, you'll have certificate errors.

IMPORTANT: If you want to allow http authentication (auth handled by the web server, such as htaccess) then you need to add 'login=PASS', otherwise authentication will be attempted against squid itself rather than the http server.

The last two lines reference the first two and tell squid the domains to listen for, so if someone connects to squid looking for one of those domains it knows where to go/cache.



acl bad_requests urlpath_regex -i cmd.exe \/bin\/sh \/bin\/bash default\.ida?XXX insert update delete select

http_access deny bad_requests



NOTE: The acl line may appear wrapped over two lines here; it should all be on one line. There should be just the acl line and the http_access line.

These lines set up some bad requests that we deny access to; this helps prevent SQL injection, other hack attempts, etc.

That's it; after a (re)start of squid it will be reverse proxying the domains.
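A handy way to check a site is being proxied correctly before touching DNS is to point curl straight at the squid box and supply the Host header yourself. A sketch (replace 127.0.0.1 with the address of your proxy):

curl -I -H 'Host: site1.example.lan' http://127.0.0.1/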

Redirect to SSL

We had a requirement to automatically redirect to https if someone came in on http. Squid allows redirecting in a variety of ways: you can write a redirect script and get squid to use it, but there is a simpler way, using only squid internals and acls.

Add the following to the entries added in the last section:



acl port80 myport 80

acl site1 dstdomain site1.example.lan

http_access deny port80 site1

deny_info https://site1.example.lan/ site1

acl site2 dstdomain site2.anotherexample.lan

http_access deny port80 site2

deny_info https://site2.anotherexample.lan/ site2

We create an acl for squid's port 80 and then one for the domain we want to redirect. We then use "http_access deny" to make squid deny access to that domain coming in on port 80 (http). This causes a deny, which is caught by the deny_info, which redirects to https.

The order of the acls used in the http_access and the deny_info is important. Squid only remembers the last acl used by an http_access command and will look for a corresponding deny_info matched to that acl. So make sure the last acl matches the acl used in the deny_info statement!

NOTE: See http://www.squid-cache.org/Doc/config/deny_info/
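To confirm the redirect is working, request the http version of a site and look for the Location header. A quick sketch (this assumes the hostname resolves to the proxy):

curl -I http://site1.example.lan/

You should get back a redirect status with a "Location: https://site1.example.lan/" header.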

Appendix

The following is the configuration all put together now.

Reverse proxy + redirection:



http_port 80 vhost

https_port 443 cert=/etc/squid/localhost.crt key=/etc/squid/localhost.key vhost

cache_effective_user squid

cache_effective_group squid

cache_peer 1.2.3.4 parent 80 0 no-query originserver login=PASS name=site1-http

cache_peer 1.2.3.5 parent 443 0 no-query originserver login=PASS ssl sslflags=DONT_VERIFY_PEER name=site2-ssl

cache_peer_domain site1-http site1.example.lan

cache_peer_domain site2-ssl site2.anotherexample.lan

acl bad_requests urlpath_regex -i cmd.exe \/bin\/sh \/bin\/bash default\.ida?XXX insert update delete select

http_access deny bad_requests

acl port80 myport 80

acl site1 dstdomain site1.example.lan

http_access deny port80 site1

deny_info https://site1.example.lan/ site1

acl site2 dstdomain site2.anotherexample.lan

http_access deny port80 site2

deny_info https://site2.anotherexample.lan/ site2

Postfix – Making sense of delays in mail

The maillog

The maillog is easy enough to follow, but understanding what all the delay and delays numbers mean can really help you see what is going on!

A standard email entry in postfix looks like:



Jan 10 10:00:00 testmtr postfix/smtp[20123]: 34A1B160852B: to=, relay=mx1.example.lan[1.2.3.4]:25, delay=0.49, delays=0.2/0/0.04/0.25, dsn=2.0.0, status=sent

Pretty straightforward: date, email identifier in the mailq (34A1B160852B), recipient, and which server the email is being sent to (relay). It is the delay and delays values I'd like to talk about.

Delay and Delays

If we take a look at the example email from above:



Jan 10 10:00:00 testmtr postfix/smtp[20123]: 34A1B160852B: to=, relay=mx1.example.lan[1.2.3.4]:25, delay=0.49, delays=0.2/0/0.04/0.25, dsn=2.0.0, status=sent

The delay parameter (delay=0.49) is fairly self-explanatory: it is the total amount of time this email (34A1B160852B) has been on this server. But what is the delays parameter all about?



delays=0.2/0/0.04/0.25



NOTE: Numbers smaller than 0.01 seconds are truncated to 0, to reduce the noise level in the logfile.

You might have guessed it is a breakdown of the total delay, but what does each number represent?

Well from the release notes we get:



delays=a/b/c/d:

a=time before queue manager, including message transmission;

b=time in queue manager;

c=connection setup time including DNS, HELO and TLS;

d=message transmission time.

Therefore, looking at our example:

  • a (0.2): The time before getting to the queue manager, so the time it took to be transmitted onto the mail server and into postfix.
  • b (0): The time in queue manager, so this email didn’t hit the queues, so it was emailed straight away.
  • c (0.04): The time it took to set up a connection with the destination mail relay.
  • d (0.25): The time it took to transmit the email to the destination mail relay.

However, if the email is deferred, then when delivery is attempted again:



Jan 10 10:00:00 testmtr postfix/smtp[20123]: 34A1B160852B: to=, relay=mx1.example.lan[1.2.3.4]:25, delay=82, delays=0.25/0/0.5/81, dsn=4.4.2, status=deferred (lost connection with mx1.example.lan[1.2.3.4] while sending end of data -- message may be sent more than once)

Jan 10 testmtr postfix/smtp[20123]: 34A1B160852B: to=, relay=mx1.example.lan[1.2.3.4]:25, delay=1092, delays=1091/0.2/0.8/0.25, dsn=2.0.0, status=sent

This time the first entry shows how long it took for the destination mail relay to time out and close the connection:



delays=0.25/0/0.5/81

Therefore: 81 seconds.

The email was deferred, then about 15 minutes later (1009 seconds: the new 'a' value of 1091 minus the 82 second total delay of the previous attempt) another attempt was made.

This time the delay is a lot larger, as the total time this email has spent on the server is a lot longer.

delay=1092, delays=1091/0.2/0.8/0.25



What is interesting though is the value of 'a' is now 1091, which means when an email is resent the 'a' value in the breakdown also includes the amount of time this email has already spent on the system (before this attempt).

So there you go, those delays values are rather interesting and can really help you work out where bottlenecks lie on your system. In the above case we obviously had some problem communicating with the destination mail relay, but it worked the second time, so it isn't a problem with our system… or so I'd like to think.
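One last practical note: since the delay value always appears in the same place, it's easy to pull slow deliveries out of the maillog. A minimal sketch in awk (the 10 second threshold is arbitrary) that prints any line whose total delay exceeds it:

awk -F'delay=' '/delay=/ { split($2, a, ","); if (a[1] + 0 > 10) print }' /var/log/maillog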

Use xmllint and vim to format xml documents

If you want vim to nicely format an XML file (and a xena file in this example, 2nd line) then add this to your ~/.vimrc file:

" Format *.xml and *.xena files by sending them to xmllint

au FileType xml exe ":silent 1,$!xmllint --format --recover - 2>/dev/null"

au FileType xena exe ":silent 1,$!xmllint --format --recover - 2>/dev/null"



This uses the xmllint command to format the XML file; useful on XML docs that aren't already nicely formatted in the file.
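The same command works outside vim too, if you just want to tidy a file from the shell (the file names here are only examples):

xmllint --format --recover messy.xml > pretty.xml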

Debian 6 GNU/KFreeBSD Grub problems on VirtualBox

Debian 6 was released the other day; with this release they not only ship a Linux kernel version, they now support a FreeBSD kernel as well!

So I decided to install it under VirtualBox and check it out…

The install process went smoothly until I got to the end when it was installing and setting up grub2. It installed ok on the MBR but got an error in the installer while trying to set it up. I jumped into the console to take a look around.

I started off trying to run the update-grub command, which failed silently (checking $? showed a return code of 1). On closer inspection I noticed the command had created an incomplete grub config named /boot/grub/grub.cfg.new

So all we need to do is finish off this config file. Jump back into the installer and select to continue without a boot loader; this will pop up a message about what you must set the root partition to when you do set up a boot loader, so take note of it. Mine was /dev/ad0s5.

OK, with that info we can finish off our config file. Firstly, let's copy the incomplete one into place:

cp /boot/grub/grub.cfg.new /boot/grub/grub.cfg

Now my /boot/grub/grub.cfg ended like:

### BEGIN /etc/grub.d/10_kfreebsd ###

menuentry 'Debian GNU/kFreeBSD, with kFreeBSD 8.1-1-amd64' --class debian --class gnu-kfreebsd --class gnu --class os {

insmod part_msdos

insmod ext2




set root='(hd0,1)'

search --no-floppy --fs-uuid --set dac05f8a-2746-4feb-a29d-31baea1ce751

echo 'Loading kernel of FreeBSD 8.1-1-amd64 ...'

kfreebsd /kfreebsd-8.1-1-amd64.gz

So I needed to add the following to finish it off (note: I'll repeat that last part of the existing config):

### BEGIN /etc/grub.d/10_kfreebsd ###

menuentry 'Debian GNU/kFreeBSD, with kFreeBSD 8.1-1-amd64' --class debian --class gnu-kfreebsd --class gnu --class os {

insmod part_msdos

insmod ext2

insmod ufs2




set root='(hd0,1)'

search --no-floppy --fs-uuid --set dac05f8a-2746-4feb-a29d-31baea1ce751

echo 'Loading kernel of FreeBSD 8.1-1-amd64 ...'

kfreebsd /kfreebsd-8.1-1-amd64.gz

set kFreeBSD.vfs.root.mountfrom=ufs:/dev/ad0s5

set kFreeBSD.vfs.root.mountfrom.options=rw

}

Note: My root filesystem was UFS, thus the 'ufs:/dev/ad0s5' in the mountfrom option.

That's it, your Debian GNU/kFreeBSD should now boot successfully :)

Links October 2014

The Verge has an interesting article about Tim Cook (Apple CEO) coming out [1]. Tim says “if hearing that the CEO of Apple is gay can help someone struggling to come to terms with who he or she is, or bring comfort to anyone who feels alone, or inspire people to insist on their equality, then it’s worth the trade-off with my own privacy”.

Graydon2 wrote an insightful article about the right-wing libertarian sock-puppets of silicon valley [2].

George Monbiot wrote an insightful article for The Guardian about the way that double-speak facilitates killing people [3]. He is correct that the media should hold government accountable for such use of language instead of perpetuating it.

Anne Thériault wrote an insightful article for Vice about the presumption of innocence and sex crimes [4].

Dr Nerdlove wrote an interesting article about Gamergate as the “extinction burst” of “gamer culture” [5], we can only hope.

Shweta Narayan wrote an insightful article about Category Structure and Oppression [6]. I can’t summarise it because it’s a complex concept, read the article.

Some Debian users who don’t like Systemd have started a “Debian Fork” project [7], which so far just has a web site and nothing else. I expect that they will never write any code. But it would be good if they did, they would learn about how an OS works and maybe they wouldn’t disagree so much with the people who have experience in developing system software.

A GamerGate terrorist in Utah forces Anita Sarkeesian to cancel a lecture [8]. I expect that the reaction will be different when (not if) an Islamic group tries to get a lecture cancelled in a similar manner.

Model View Culture has an insightful article by Erika Lynn Abigail about Autistics in Silicon Valley [9].

Katie McDonough wrote an interesting article for Salon about Ed Champion and what to do about men who abuse women [10]. It’s worth reading that while thinking about the FOSS community…

Samsung Galaxy Note 3

In June last year I bought a Samsung Galaxy Note 2 [1]. Generally I was very happy with that phone; one problem I had is that less than a year after purchasing it the Ingress menus burned into the screen [2].

2 weeks ago I bought a new Galaxy Note 3. One of the reasons for getting it is the higher resolution screen; I never realised the benefits of a 1920*1080 screen on a phone until my wife got a Nexus 5 [3]. I had been idly considering a Galaxy Note 4, but $1000 is a lot of money to pay for a phone and I'm not sure that a 2560*1440 screen will offer much benefit in that size. Also the Note 3 and Note 4 both have 3G of RAM; as some applications use more RAM when you have a higher resolution screen, the Note 4 will effectively have less usable RAM than the Note 3.

My first laptop cost me $3,800 in 1998, that’s probably more than $6,000 in today’s money. The benefits that I receive now from an Android phone are in many ways greater than I received from that laptop and that laptop was definitely good value for money for me. If the cheapest Android phone cost $6,000 then I’d pay that, but given that the Note 3 is only $550 (including postage) there’s no reason for me to buy something more expensive.

Another reason for getting a new phone is the limited storage space in the Note 2. 16G of internal storage is a limit when you have some big games installed. Also the recent Android update which prevented apps from writing to the SD card meant that it was no longer convenient to put TV shows on my SD card. 32G of internal storage in the Note 3 allows me to fit everything I want (including the music video collection I downloaded with youtube-dl). The Note 2 has 16G of internal storage and an 8G SD card (that I couldn’t fully use due to Android limitations) while the Note 3 has 32G (the 64G version wasn’t on sale at any of the cheap online stores). Also the Note 3 supports an SD card which will be good for my music video collection at some future time, this is a significant benefit over the Nexus 5.

In the past I’ve written about Android service life and concluded that storage is the main issue [4]. So it is a bit unfortunate that I couldn’t get a phone with 64G of storage at a reasonable price. But the upside is that getting a cheaper phone allows me to buy another one sooner and give the old phone to a relative who has less demanding requirements.

In the past I wrote about the warranty support for my wife’s Nexus 5 [5]. I should have followed up on that before, 3 days after that post we received a replacement phone. One good thing that Google does is to reserve money on a credit card to buy the new phone and then send you the new phone before you send the old one back. So if the customer doesn’t end up sending the broken phone they just get billed for the new phone, that avoids excessive delays in getting a replacement phone. So overall the process of Google warranty support is really good, if 2 products are equal in other ways then it would be best to buy from Google to get that level of support.

I considered getting a Nexus 5 as the hardware is reasonably good (not the greatest but quite good enough) and the price is also reasonably good. But one thing I really hate is the way they do the buttons. Having the home button appear on the main part of the display is really annoying. I much prefer the Samsung approach of having a hardware button for home and touch-screen buttons outside the viewable area for settings and back. Also the stylus on the Note devices is convenient on occasion.

The Note 3 has a fake-leather back. The concept of making fake leather is tacky, I believe that it’s much better to make honest plastic that doesn’t pretend to be something that it isn’t. However the texture of the back improves the grip. Also the fake stitches around the edge help with the grip too. It’s tacky but utilitarian.

The Note 3 is slightly smaller and lighter than the Note 2. This is a good technical achievement, but I’d rather they just gave it a bigger battery.

Update USB 3

One thing I initially forgot to mention is that the Note 3 has USB 3. This means that it has a larger socket which is less convenient when you try and plug it in at night. USB 3 seems unlikely to provide any benefit for me as I’ve never had any of my other phones transfer data at rates more than about 5MB/s. If the Note 3 happens to have storage that can handle speeds greater than the 32MB/s a typical USB 2 storage device can handle then I’m still not going to gain much benefit. USB 2 speeds would allow me to transfer the entire contents of a Note 3 in less than 20 minutes (if I needed to copy the entire storage contents). I can’t imagine myself having a real-world benefit from that.

The larger socket means more fumbling when charging my phone at night and it also means that the Note 3 cable can’t be used in any other phone I own. In a year or two my wife will have a phone with USB 3 support and that cable can be used for charging 2 phones. But at the moment the USB 3 cable isn’t useful as I don’t need to have a phone charger that can only charge one phone.

Conclusion

The Note 3 basically does everything I expected of it. It’s just like the Note 2 but a bit faster and with more storage. I’m happy with it.

Terry 2.0 includes ROS!

What started as a little tinker around the edges has resulted in many parts of Terry being updated. The Intel j1900 motherboard is now mounted in the middle of the largest square structure, an SSD is mounted (the OCZ black drive at the bottom), yet another battery is mounted (a large external laptop supply), and the Kinect is now mounted on the pan and tilt mechanism along with the 1080p webcam that was previously there. The BeagleBone Black is moved to its own piece of channel and a breadboard is sunk into the main 2nd top level channel.





I haven't cabled up the j1900 yet. On the SSD are Ubuntu and ROS, including a working DSLAM (strangely, some fun and games getting that to compile and then not segv right away).



I used 3 Actobotics Beams, one IIRC a 7.7 incher and two shorter ones. The long beam lines up along the right side of the motherboard that you see in the image. The beam is attached with nylon bolts and has a 6.6mm standoff between the motherboard and the beam to avoid any undesired electrical shorts. With the two shorter beams on the left side of the motherboard it is rather securely attached to Terry now. The little channel you see on the right side, up a little from the bottom, is there for the 7.7 inch beam to attach to (behind the motherboard), and there is a shorter beam on this side to secure the floating end of the channel to the base channel.







The alloy structure at the top of the pan and tilt now has a Kinect attached. I used a wall mount plastic adaptor whose nut traps, with great luck and convenience, lined up with the Actobotics holes. I have offset the channel like you see so that the centre of gravity is closer to directly above the pan and tilt. Perhaps I will have to add some springs to help the tilt servo when it moves the Kinect back too far from the centre point. I am also considering a counterbalance weight below the tilt, which would also work to try to stabilize the Kinect at the position shown.







I was originally planning to put some gripper on the front of Terry. But now I'm thinking about using the relatively clean back channel to attach a threaded rod and stepper motor so that the gripper can have access to the ground and also table top. Obviously the cameras would have to rotate 180 degrees to be able to see what the gripper was up to. Also for floor pickups the tilt might have to be able to handle a reasonable downward "look" without being too hard on the servo.



There were also some other tweaks. A 6 volt regulator is now inlined into a servo extension cable and the regulator is itself bolted to some of the channel. Nice cooling, and it means that the other end of that servo extension can take something like 7-15v and it will give the servo the 6v it wants. That is actually using the same battery pack as the drive wheels (8xAA).



One thing that might be handy for others who find this post: the BeagleBone Black case from Sparkfun attaches to Actobotics channel fairly easily. I used two cheesehead M3 nylocks and had to force them into the enclosure. The nylocks lined up with the Actobotics channel and so the attachment was very simple. You'll want a "3 big hole" or more bit of channel to attach the enclosure to. I attached it to a 3 big holer and then attached that channel to the top of Terry with a few threaded standoffs. This simplifies attaching and removing should that ever be desired.



I know I need slip rings for the two USB cameras up top. And for the tilt servo as well. I can't use a USB hub up top because both the USB devices can fairly well saturate a USB 2.0 bus. I use the hardware encoded mjpeg from the webcam which helps bandwidth, but I'm going to give an entire USB 2.0 bus to the Kinect.



Keynote Speaker - Professor Eben Moglen

Eben Moglen

The LCA 2015 team is honoured to announce our first Keynote speaker - Professor Eben Moglen, Executive Director of the Software Freedom Law Center and professor of Law and Legal History at Columbia University Law School.

Professor Moglen's presentation is scheduled for 09:00 am Tuesday, 13 January 2015

Professor Moglen has represented many of the world's leading free software developers. He earned his PhD in History and his law degree at Yale University during what he sometimes calls his “long, dark period” in New Haven.

After law school he clerked for Judge Edward Weinfeld of the United States District Court in New York City and for Justice Thurgood Marshall of the United States Supreme Court. He has taught at Columbia Law School since 1987 and has held visiting appointments at Harvard University, Tel Aviv University and the University of Virginia.

In 2003 he was given the Electronic Frontier Foundation's Pioneer Award for efforts on behalf of freedom in the electronic society.

We are especially grateful to Michael Davies for his efforts in bringing Professor Moglen to LCA 2015 in Auckland for us - thank you Michael!

The LCA 2015 Auckland Team

October 30, 2014

2014 GStreamer Conference

I’ve been home from Europe over a week, after heading to Germany for the annual GStreamer conference and Linuxcon Europe.

We had a really great turnout for the GStreamer conference this year

GstConf2k14

as well as an amazing schedule of talks. All the talks were recorded by Ubicast, who got all the videos edited and uploaded in record time. The whole conference is available for viewing at http://gstconf.ubicast.tv/channels/#gstreamer-conference-2014

I gave one of the last talks of the schedule – about my current work adding support for describing and handling stereoscopic (3D) video. That support should land upstream sometime in the next month or two, so more on that in a bit.


There were too many great talks to mention them individually, but I was excited by 3 strong themes across the talks:

  • WebRTC/HTML5/Web Streaming support
  • Improving performance and reducing resource usage
  • Building better development and debugging tools

I’m looking forward to us collectively making progress on all those things and more in the upcoming year.

[life] Day 274: Errands, friends old and new, and swim class

In researching ways to try and help Zoe sleep for longer, I learned that there are basically two triggers for waking up in the morning: light and heat. Because Queenslanders hate daylight saving, the sun gets up ridiculously early in summer. Because Queensland is hot, it also gets very hot pretty early. Our bedrooms are on the eastern side of the apartment to boot.

I already have nice blackout curtains, and I had pelmets installed last summer to try and reduce the light leakage around the curtains. I also had reflective window film put on our bedroom windows last summer in an effort to reduce the morning heat when the sun rose, but I don't think it's made a massive difference to a closed up bedroom. I think Zoe woke up at about 5:40am this morning. I'm not sure what the room temperature was, because the Twine in her room decided not to log it this morning. Air conditioning is the next thing to try.

After breakfast, we ran a few errands, culminating in a trip to the carwash for a babyccino. After that, we headed over to Toowong to pick up Geneal, a friend of my biological mother with whom I've kept in loose contact since I've been an adult. We went over to the Toowong Bowls Club for lunch, and had a nice catch up.

The Toowong Bowls Club has a rather disturbing line on the wall showing the height of the 2011 floods. It's probably taller than my raised arm from the ground level of the building.

After lunch, and dropping Geneal home, we headed over for a play date at the home of Chloe, who will be starting Prep next year at Zoe's school. I met Chloe's Mum, Kelley, at the P&C meeting I went to earlier in the year, and have continued to bump into her at numerous school-related things ever since. She's been a good person to know, having an older daughter at the school as well, and has given me lots of advice.

Zoe and Chloe got along really well, and Chloe seems like a nice kid. After the play date, we walked to school to collect Chloe's older sister, and then to swim class. We were early, but Zoe was happy to hang out.

I am just so loving the vibe I'm getting about the school, and really loving the school community itself. I'm really looking forward to the next seven years here.

After swim class, we walked back to Chloe's house to retrieve the car, and say goodbye to Chloe, and headed home. It was another nice full, but not too full day.

LUV Main November 2014 Meeting: Raspberry Pi update + systemd

Nov 5 2014 19:00
Nov 5 2014 21:00
Location: 

The Buzzard Lecture Theatre. Evan Burge Building, Trinity College, Melbourne University Main Campus, Parkville.

Please note that the November meeting is on Wednesday night rather than Tuesday night due to the Melbourne Cup.

Alec Clews, Raspberry Pi update

Russell Coker, systemd

The Buzzard Lecture Theatre, Evan Burge Building, Trinity College Main Campus Parkville Melways Map: 2B C5

Notes: Trinity College's Main Campus is located off Royal Parade. The Evan Burge Building is located near the Tennis Courts. See our Map of Trinity College. Additional maps of Trinity and the surrounding area (including its relation to the city) can be found at http://www.trinity.unimelb.edu.au/about/location/map

Parking can be found along or near Royal Parade, Grattan Street, Swanston Street and College Crescent. Parking within Trinity College is unfortunately only available to staff.

For those coming via Public Transport, the number 19 tram (North Coburg - City) passes by the main entrance of Trinity College (get off at Morrah St, Stop 12). This tram departs from the Elizabeth Street tram terminus (Flinders Street end) and goes past Melbourne Central. Timetables can be found on-line at:

http://www.metlinkmelbourne.com.au/route/view/725

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


October 29, 2014

New libeatmydata release: 105

Over on the project page and on launchpad you can now download libeatmydata 105.

This release fixes a couple of bugs that came in via the Debian project, including a rather interesting one where some binaries did not run .so ctors to properly init libeatmydata, and the code path in the libeatmydata open() did not really deal with being called first in this situation.
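If you haven't tried libeatmydata before, the usual way to use it is to run an fsync-heavy job under the eatmydata wrapper, or to preload the library directly; a quick sketch (the exact library path can vary by distribution):

eatmydata make test

LD_PRELOAD=libeatmydata.so make test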

Enjoy!

Speaker Feature: Meg Howie, Joshua Hesketh

Meg Howie

Meg Howie

Ask Away: Staking Out the Stakeholders

11:35am Friday 16th January 2015

Meg is a designer and thinker whose practice spans graphic, interactive, film, service and performance design. She is currently undertaking a Master of Design at Massey University and her research explores the influence of open source culture and participatory democracy on civic engagement. Meg’s work is deeply social, and draws from human-centred design, behavioural psychology and collaborative modes of working.

For more information on Meg and her presentation, see here. You can follow her as @howiemeg and don’t forget to mention #LCA2015.



Joshua Hesketh

Joshua Hesketh

Who is Linux Australia?

3:40pm Thursday 15th January 2015

Joshua is a software developer for Rackspace Australia working on upstream OpenStack. He works from his home in Hobart, Tasmania. Joshua is currently President of Linux Australia, previously the co-chair for PyCon Australia and a key organiser for linux.conf.au. He has an interest in robotics, having recently completed a degree in mechatronic engineering. Josh is an active contributor to the openstack-infra and nova projects.

For more information on Josh and his presentation, see here.

[life] Day 273: Kindergarten, more startup stuff, and another Prep day

I had another busy day today. I've well and truly fallen off the running wagon, which I really need to fix rather urgently. I would have liked to have gone for a run this morning, but it didn't happen.

I started off with a chiropractic adjustment, and then a bit of random cooking to use up some perishables, before the cleaners arrived.

While the cleaners were here, I managed to knock over another unit of my real estate course, which I was pretty stoked about. I'll try and get it in the mail tomorrow, and that's the last one from the first half of the course done.

I grabbed a massage, and then headed over to pick up Zoe early from Kindergarten to take her to school for another Prep introduction session. I really like Zoe's school. This year for the first time they're running a four week program where the kids can come for a couple of hours.

Today it was fine and gross motor skills. They divided the group in half, and Zoe's half did fine motor skills first. The kids rotated through three different stations, which all had three or four activities each. Zoe did pretty well with these.

Then the groups swapped over, and we returned to the hall where we started, to do some gross motor skills. I would have thought this would have been right up Zoe's alley, since a lot of it was similar to TumbleTastics, but she was very clingy, and they kept rotating between stations faster than she got warmed up to the activity.

She was a bit overwhelmed in the larger group setting in general. Hopefully next week with a bit of preparation before we come (and no Kindergarten) she'll do better.

After we got home, I showed Zoe a balloon full of water that I'd put in the freezer. She had a great time smashing it on the balcony. I'll have to do that again.

It's a hot night tonight, I hope Zoe sleeps okay. It was definitely time to bust out the fan.

Training and Education in High Performance Computing for eReseachers

"Big data" requires processing. Processing requires HPC. Increased processing results in increased research output. Research organisations that do not increase HPC usage will fall behind. HPC requires either 'dumb down the interface or skill up the user'. Making "user friendly" interfaces may not be the right path to take as HPC use will always have a minimum level of complexity. Training courses that use andragogical technqiues correlate with increased HPC use.

Presentation to eResearch Australasia, Melbourne, October 28, 2014

October 28, 2014

Speaker Feature: Christoph Lameter, Brandon Philips

Christoph Lameter

Christoph Lameter

SL[AUO]B:Kernel memory allocator design and philosophy

12:15pm Friday 16th January 2015

Christoph specializes in High Performance Computing and High Frequency Trading technologies. As an operating system designer and kernel developer he has been developing memory management technologies for Linux to enhance performance and reduce latencies. He is fond of new technologies and new ways of thinking that disrupt existing industries and cause new development communities to emerge.

For more information on Christoph and his presentation, see here. You can follow him as @qant and don’t forget to mention #LCA2015.



Brandon Philips

Brandon Philips

CoreOS: An introduction

11:35 am Friday 16th January 2015

Brandon Philips is helping to build modern Linux server infrastructure at CoreOS. Prior to CoreOS, he worked at Rackspace hacking on cloud monitoring and was a Linux kernel developer at SUSE. In addition to his work at CoreOS, Brandon sits on Docker's governance board and is one of the top contributors to Docker. As a graduate of Oregon State's Open Source Lab he is passionate about open source technologies.

Brandon has also been a speaker at many conferences including Open Source Bridge 2012 and Open Source Conference 2012.

For more information on Brandon and his presentation, see here. You can follow him as @BrandonPhilips and don’t forget to mention #LCA2015.

[life] Day 272: Kindergarten, startup stuff

I had a great, productive day today.

I got stuck into my real estate licence coursework this morning, and finished off a unit. I biked down to the post office to mail it off, and picked up the second half of my coursework. After I finish the unit I started today, I'll have 8 more units to go. Looking at the calendar, if I can punch out a unit a week (which is optimistic, particularly considering that school holidays are approaching) I could be finished by the end of the year. More realistically, I can try to be finished by the time Zoe starts school, which will be perfect, and well inside the 12 month period I'm supposed to get it done in. We shall see how things pan out.

I biked to Kindergarten to pick up Zoe, and she wanted to watch Megan's tennis class for a while, so we hung around. She was pretty wiped out from a water play day at Kindergarten today. We biked home, and then she proceeded to eat everything in the house that wasn't tied down until Sarah arrived to pick her up.

I used the rest of the afternoon to do some more administrative stuff and tidy up a bit, before heading off to my yoga class. I had a really lovely stretch class with just me and my yoga teacher, so we spent the whole class chatting and having a great catch up. It was a great way to end the day.

[life] Day 271: Kindergarten, lots of administrivia and some tinkering

Zoe woke up at about 6am, which gave us a bit of extra time to get moving in the morning, or so I thought.

We biked over to the Kindergarten for drop off, and I left the trailer there to make biking back in the afternoon heat easier.

I had a pretty productive day. It was insanely hot, so I figured I could run the air conditioning more or less guilt (and expense) free courtesy of my solar power. I should check just how much power it draws to see how "free" it is to run.

I mostly cleared lots of random stuff off my to do list, and made a few lengthy phone calls. I also did some more tinkering with my BeagleBone Black, trying to get it set up so I can back up daedalus. It's been fun playing with Puppet again. I now have a pretty nice set up where I can wipe the BeagleBone Black and get it back to how I want it configured in about 5 minutes, thanks to Puppet.

I biked over to Kindergarten to pick up. I got there a few minutes early, and received a very heartening phone call regarding an issue I'd been working on earlier.

Zoe and Megan wanted to have a play date, and since it was hot and I'd left the air conditioning on, I suggested it be at our place. I biked home, and Jason dropped Megan around.

The girls played inside for a bit, but then wanted to do some more craft on the balcony, so I let them get to it, with instructions to put stuff away before they took more stuff out, and the balcony ended up significantly cleaner as a result. I used the time to do some more tinkering with my backups and to book a flight down to Sydney to help a friend out with some stuff.

A massive storm rolled in, not long after Anshu arrived, so we all went out on the balcony to watch the lightning, and then Sarah arrived to pick up Zoe. Megan hung out for a bit longer until Jason arrived to pick her up.

October 27, 2014

Speaker Feature: Lillian Grace, David Rowe

Lillian Grace

Lillian Grace

Wiki New Zealand: Winning through collaboration

4:35pm Thursday 15th January 2015

Lillian is the founder and chief of Wiki New Zealand.

Wiki New Zealand is a collaborative website making data about New Zealand visually accessible to everyone. The site presents data in simple, visual form only, so that it remains as unbiased and as accessible to everyone as possible. The content is easy to understand and digest, and is presented from multiple angles, wide contexts and over time, inviting users to compare, contrast and interpret. Lillian is an accomplished presenter who was invited to speak at OSDC 2013, was a keynote speaker at Gather 2014 and a speaker at TEDx Auckland 2013.

For more information on Lillian and her presentation, see here. You can follow her as @GracefulLillian and don’t forget to mention #LCA2015.



David Rowe

David Rowe

The Democratisation of Radio

10:40am Thursday 15th January 2015

David is an electronic engineer living in Adelaide, South Australia. His mission is to improve the world – just a little bit, through designing open hardware and writing open source software for telephony.

In January 2006 David quit corporate life as an Engineering Manager to become an open source developer. He now develops open telephony hardware and software full time. David likes to build advanced telephony technology – then give it away.

For more information on David and his presentation, see here. You can follow him as @davidgrowe67 and don’t forget to mention #LCA2015.

Linux Security Summit 2014 Wrap-Up

The slides from the 2014 Linux Security Summit in August may be found linked at the schedule.

LWN covered both the James Bottomley keynote, and the SELinux on Android talk by Stephen Smalley.

We had an engaging and productive two days, with strong attendance throughout.  We’ll likely follow a similar format next year at LinuxCon.  I hope we can continue to expand the contributor base beyond mostly kernel developers.  We’re doing ok, but can certainly do better.  We’ll also look at finding a sponsor for food next year.

Thanks to those who contributed and attended, to the program committee, and of course, to the events crew at Linux Foundation, who do all of the heavy lifting logistics-wise.

See you next year!

Speaker Feature: Lana Brindley & Alexandra Settle, Olivier Bilodeau

Lana Brindley and Alexandra Settle

Alexandra Settle Lana Brindley

8 writers in under 8 months: from zero to a docs team in no time flat

11:35am Thursday 15th January 2015

Lana and Alexandra are both technical writers at Rackspace, the open cloud company.

Lana has been writing open source technical documentation for about eight years, and right now she's working on documenting OpenStack with Rackspace. She does a lot of speaking, mostly about writing, and also talks about other topics from open source software to geek feminism and working in IT.

Lana is also involved in several volunteer projects including linux.conf.au, Girl Geek Dinners, LinuxChix, OWOOT (Oceania Women of Open Tech), and various Linux Users Groups (LUGs). Alexandra is a technical writer with the Rackspace Cloud Builders Australia team. She began her career as a writer for the cloud documentation team at Red Hat, Australia. Alexandra prefers Fedora over other Linux distributions.

Recently she was part of a team that authored the OpenStack Design Architecture Guide, and hopes to further promote involvement in the OpenStack community within Australia.

For more information on Lana and Alexandra and their presentation, see here. You can follow them as @Loquacities (Lana) or @dewsday (Alexandra) and don’t forget to mention #LCA2015.



Olivier Bilodeau

Olivier Bilodeau

Advanced Linux Server-Side Threats: How they work and what you can do about them

1:20pm Friday 16th January 2015

Olivier is an engineer who loves technology, software, security, open source, Linux, brewing beer, travel and Android.

Coming from the dusty Unix server room world, Olivier evolved professionally through networking, information security and open source software development to finally become a malware researcher at ESET Canada. Presenting at Defcon, publishing in (In)secure Mag, teaching infosec to undergrads (ÉTS), driving the NorthSec Hacker Jeopardy and co-organizing the MontréHack training initiative are among his noteworthy successes.

For more information on Olivier and his presentation, see here. You can follow him as @obilodeau and don’t forget to mention #LCA2015.

October 26, 2014

Twitter posts: 2014-10-20 to 2014-10-26

October 25, 2014

Craige McWhirter: Automating Building and Synchronising Local & Remote Git Repos With Github

I've blogged about some git configurations in the past. In particular working with remote git repos.

I have a particular workflow for most git repos:

  • I have a local repo on my laptop
  • I have a remote git repo on my server
  • I have a public repo on Github that functions as a backup.

When I push to my remote server, a post receive hook automatically pushes the updates to Github. Yay for automation.
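For anyone wanting to replicate that, here is a minimal sketch of such a hook. It assumes the server-side repo already has a remote named github configured, and it lives at hooks/post-receive (made executable):

#!/bin/sh
# mirror every update that arrives on this server out to the github remote
git push --mirror github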

However this wasn't enough automation, as I found myself creating git repos and running through the setup steps more often than I'd like. As a result I created gitweb_repo_build.sh, which takes all the manual steps I go through to set up my workflow and automates them.

The script currently does the following:

  • Builds a git repo locally
  • Adds a README.mdwn and a LICENCE. Commits the changes.
  • Builds a git repo hosted via your remote git server
  • Adds to the remote server, a git hook for automatically pushing to github
  • Adds to the remote server, a git remote for github.
  • Creates a repo at GitHub via the v3 API (roughly as sketched below)
  • Pushes the README and LICENCE to the remote, which pushes to GitHub.
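The GitHub step in that list boils down to a single call against the v3 API; a sketch with curl (the token variable and repo name are placeholders, not taken from the script itself):

curl -H "Authorization: token $GITHUB_TOKEN" -d '{"name":"myrepo"}' https://api.github.com/user/repos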

It's currently written in bash and has no error handling.

I've planned a re-write in Haskell which will have error handling.

If this is of use to you, enjoy :-)

That rare feeling …

… of actually completing things.

Upon reflection, it appears to have been a successful week.

Work – We relocated offices (including my own desk (again)) over the previous week from one slightly pre-used office building to another more well-used office building. My role in this project was to ensure that the mechanics of the move, as far as IT and Comms went, occurred and proceeded smoothly. After recabling the floor and working with networks, telephones and desktop staff, it was an almost flawless move, and everyone was up and running easily on Monday morning. I received lots of positive feedback, which was good.

Choir – The wrap-up SGM for the 62nd Australian Intervarsity Choral Festival Perth 2011, Inc happened. Pending the incorporation of the next festival, it is all over bar a few cheques and paperwork. Overall it was a great festival, and as Treasurer I was pleased with the final financial result (positive).

Hacking – This week's little project has been virtualsnack. This is a curses emulator of the UCC Snack Machine and associated ROM. It is based on a previous emulator written with PyGTK and Glade that had bitrotted over the past ten years to be non-functioning and not worth the effort to resurrect. The purpose of the emulator is to enable development of code to speak to the machine without having to have the real machine available to test against.

I chose to continue to have the code in python and used npyscreen as the curses UI library. One of the intermediate steps was creating a code sample, EXAMPLE-socket.py, which creates a daemon that speaks to a curses interface.

I hereby present V1.0 “Gobbledok” of virtualsnack. virtualsnack is hosted up on Github for the moment, but may move in future. I suspect this item of software will only be of interest to my friends at UCC.

October 24, 2014

[life] Day 268: Science Friday, TumbleTastics, haircuts and a big bike outing

I didn't realise how jam packed today was until we sat down at dinner time and recounted what we'd done today.

I started the day pretty early, because Anshu had to be up for an early flight. I pottered around at home cleaning up a bit until Sarah dropped Zoe off.

After Zoe had watched a bit of TV, I thought we'd try some bottle rocket launching for Science Friday. I'd impulse purchased an AquaPod at Jaycar last year, and haven't gotten around to using it yet.

We wandered down to Hawthorne Park with the AquaPod, an empty 2 litre Sprite bottle, the bicycle pump and a funnel.

My one complaint with the AquaPod would have to be that the feet are too smooth. If you don't tug the string strongly enough you end up just dragging the whole thing across the ground, which isn't really what you want to be doing. Once Zoe figured out how to yank the string the right way, we were all good.

We launched the bottle a few times, but I didn't want to waste a huge amount of water, so we stopped after about half a dozen launches. Zoe wanted to have a play in the playground, so we wandered over to that side of the park for a bit.

It was getting close to time for TumbleTastics, and we needed to go via home to get changed, so we started the longish walk back home. It was slow going in the mid-morning heat with no scooter, but we got there eventually. We had another mad rush to get to TumbleTastics on time, and miraculously managed to make it there just as they were calling her name.

Lachlan wasn't there today, and I was feeling lazy, and Zoe was keen for a milkshake, so we dropped into Ooniverse on the way home. Zoe had a great old time playing with everything there.

After we got home again, we biked down to the Bulimba post office to collect some mail, and then biked over for a haircut.

After our haircuts, Zoe wanted to play in Hardcastle Park, so we biked over there for a bit. I'd been wanting to go and check out the newly opened Riverwalk and try taking the bike and trailer on a CityCat. A CityCat just happened to be arriving when we got to the park, but Zoe wasn't initially up for it. As luck would have it, she changed her mind as the CityCat docked, but it was too late to try and get on that one. We got on the next one instead.

I wasn't sure how the bike and the trailer were going to work out on the CityCat, but it worked out pretty well going from Hawthorne to New Farm Park. We boarded at Hawthorne from the front left hand side, and disembarked at New Farm Park from the front right hand side, so I basically just rolled the bike on and off again, without needing to worry about turning it around. It was a bit tight cornering from the pontoon to the gangway, but the deckhand helped me manoeuvre the trailer.

It was quite a nice little ride through the back streets of New Farm to get to the start of the Riverwalk, and we had a nice quick ride into the city. We biked all the way along the riverside through to the Old Botanic Gardens. We stopped for a little play in the playground that Zoe had played in the other weekend when we were wandering around for Brisbane Open House, and then continued through the gardens, over the Goodwill Bridge, and along the bottom of the Kangaroo Point cliffs.

We wound our way back home through Dockside, and Mowbray Park and along the bikeway alongside Wynnum Road. It was a pretty huge ride, and I'm excited that it's opened up an easy way to access Southbank by bicycle. I'm looking forward to some bigger forays in the near future.

Watching Grass Grow

For Hackweek 11 I thought it’d be fun to learn something about creating Android apps. The basic training is pretty straightforward, and the auto-completion (and auto-just-about-everything-else) in Android Studio is excellent. So having created a “hello world” app, and having learned something about activities and application lifecycle, I figured it was time to create something else. Something fun, but something I could reasonably complete in a few days. Given that Android devices are essentially just high res handheld screens with a bit of phone hardware tacked on, it seemed a crime not to write an app that draws something pretty.

The openSUSE desktop wallpaper, with its happy little Geeko sitting on a vine, combined with all the green growing stuff outside my house (it’s spring here) made me wonder if I couldn’t grow a little vine jungle on my phone, with many happy Geekos inhabiting it.

Android has OpenGL ES, so thinking that might be the way to go I went through the relevant lesson, and was surprised to see nothing on the screen where there should have been a triangle. Turns out the view is wrong in the sample code. I also realised I’d probably have to be generating triangle strips from curvy lines, then animating them, and the brain cells I have that were once devoted to this sort of graphical trickery are so covered in rust that I decided I’d probably be better off fiddling around with beziers on a canvas.

So, I created an app with a SurfaceView and a rendering thread which draws one vine after another, up from the bottom of the screen. Depending on Math.random() it extends a branch out to one side, or the other, or both, and might draw a Geeko sitting on the bottommost branch. Originally the thread lifecycle was tied to the Activity (started in onResume(), killed in onPause()), but this caused problems when you blank the screen while the app is running. So I simplified the implementation by tying the thread lifecycle to Surface create/destroy, at the probable expense of continuing to chew battery if you blank the screen while the app is active.

Then I realised that it would make much more sense to implement this as live wallpaper, rather than as a separate app, because then I’d see it running any time I used my phone. Turns out this simplified the implementation further. Goodbye annoying thread logic and lifecycle problems (although I did keep the previous source just in case). Here’s a screenshot:

Geeko Live Wallpaper

The final source is on github, and I’ve put up a release build APK too in case anyone would like to try it out – assuming of course that you trust me not to have built a malicious binary, trust github to host it, and trust SSL to deliver it safely ;-)

Enjoy!

Update 2014-10-27: The Geeko Live Wallpaper is now up on the Google Play store, although for some reason the “Live Wallpaper” category wasn’t available, so it’s in “Personalization” until (hopefully) someone in support gets back to me and tells me what I’m missing to get it into the right category.

Updated Update: Someone in support got back to me. “Live Wallpaper” can’t be selected as a category in the developer console, rather you have to wait for Google’s algorithms to detect that the app is live wallpaper and recategorize it automatically.

Specs for Kilo

Here's an updated list of the specs currently proposed for Kilo. I wanted to produce this before I start travelling for the summit in the next couple of days because I think many of these will be required reading for the Nova track at the summit.

API

  • Add instance administrative lock status to the instance detail results: review 127139 (abandoned).
  • Add more detailed network information to the metadata server: review 85673.
  • Add separated policy rule for each v2.1 api: review 127863.
  • Add user limits to the limits API (as well as project limits): review 127094.
  • Allow all printable characters in resource names: review 126696.
  • Expose the lock status of an instance as a queryable item: review 85928 (approved).
  • Implement instance tagging: review 127281 (fast tracked, approved).
  • Implement tags for volumes and snapshots with the EC2 API: review 126553 (fast tracked, approved).
  • Implement the v2.1 API: review 126452 (fast tracked, approved).
  • Microversion support: review 127127.
  • Move policy validation to just the API layer: review 127160.
  • Provide a policy statement on the goals of our API policies: review 128560.
  • Support X509 keypairs: review 105034.

Administrative

  • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705 (abandoned).
  • Enforce instance uuid uniqueness in the SQL database: review 128097 (fast tracked, approved).

Containers Service

Hypervisor: Docker

Hypervisor: FreeBSD

  • Implement support for FreeBSD networking in nova-network: review 127827.

Hypervisor: Hyper-V

  • Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190 (approved).

Hypervisor: Ironic

Hypervisor: VMWare

  • Add ephemeral disk support to the VMware driver: review 126527 (fast tracked, approved).
  • Add support for the HTML5 console: review 127283.
  • Allow Nova to access a VMWare image store over NFS: review 126866.
  • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (fast tracked, approved).
  • Enable the mapping of raw cinder devices to instances: review 128697.
  • Implement vSAN support: review 128600 (fast tracked, approved).
  • Support multiple disks inside a single OVA file: review 128691.
  • Support the OVA image format: review 127054 (fast tracked, approved).

Hypervisor: libvirt

Instance features

Internal

  • Move flavor data out of the system_metdata table in the SQL database: review 126620 (approved).
  • Transition Nova to using the Glance v2 API: review 84887.

Internationalization

  • Enable lazy translations of strings: review 126717 (fast tracked).

Performance

  • Dynamically alter the interval nova polls components at based on load and expected time for an operation to complete: review 122705.

Scheduler

  • Add an IOPS weigher: review 127123 (approved).
  • Add instance count on the hypervisor as a weight: review 127871 (abandoned).
  • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530 (abandoned).
  • Convert the resource tracker to objects: review 128964 (fast tracked, approved).
  • Create an object model to represent a request to boot an instance: review 127610.
  • Decouple services and compute nodes in the SQL database: review 126895.
  • Implement resource objects in the resource tracker: review 127609.
  • Isolate the scheduler's use of the Nova SQL database: review 89893.
  • Move select_destinations() to using a request object: review 127612.

Security

  • Provide a reference implementation for console proxies that uses TLS: review 126958 (fast tracked).
  • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.

October 23, 2014

[life] Day 267: An outing to the Valley for lunch, and swim class

I was supposed to go to yoga in the morning, but I just couldn't drag my sorry arse out of bed with my man cold.

Sarah dropped Zoe around, and she watched a bit of TV while we were waiting for a structural engineer to come and take a look at the building's movement-related issues.

While I was downstairs showing the engineer around, Zoe decided she'd watched enough TV and, remembering that I'd said we needed to tidy up her room the previous morning, but not had time to, took herself off to her room and tidied it up. I was so impressed.

After the engineer was finished, we walked to the ferry terminal to take the cross-river ferry over to Teneriffe, and catch the CityGlider bus to the Valley for another one of the group lunches I get invited to.

After lunch, we reversed our travel, dropping into the hairdresser on the way home to make an appointment for the next day. We grabbed a few things from the Hawthorne Garage on the way through.

We pottered around at home for a little bit before it was time to bike to swim class.

After swim class, we biked home, and Zoe watched some TV while I got organised for a demonstration that night.

Sarah picked up Zoe, and I headed out to my demo. Another full day.

Call for Volunteers

The Earlybird registrations are going extremely well – over 50% of the available tickets have sold in just two weeks! This is no longer a conference we are planning – this is a conference that is happening and that makes the Organisation Team very happy!

Speakers have been scheduled. Delegates are coming. We now urgently need to expand our team of volunteers to manage and assist all these wonderful visitors to ensure that LCA 2015 is unforgettable – for all the right reasons.

Volunteers are needed to register our delegates, show them to their accommodation, guide them around the University and transport them here and there. They will also manage our speakers by making sure that their presentations don't overrun, recording their presentations and assisting them in many other ways during their time at the conference.

Anyone who has been a volunteer before will tell you that it’s an extremely busy time, but so worthwhile. It’s rewarding to know that you’ve helped everybody at the conference to get the most out of it. There's nothing quite like knowing that you've made a difference.

But there is more: volunteering has other privileges and advantages! You don't just get to meet the delegates and speakers, you get to know many of them while helping them as well. You get a unique opportunity to get behind the scenes and close to the action. You can forge new relationships with amazing, interesting, wonderful people you might not ever get the chance to meet any other way.

Every volunteer's contribution is valued and vital to the overall running and success of the conference. We need all kinds of skills too – not just the technically savvy ones (although knowing which is the noisy end of a walkie-talkie may help). We want you! We need you! It just wouldn't be the same without you! If you would like to be an LCA 2015 volunteer it's easy to register. Just go to our volunteer page for more information. We review volunteer registrations regularly and if you’re based in Auckland (or would like a break away from wherever you are) then we would love to meet you at one of our regular meetings. Registered volunteers will receive information about these via email.

Assembly Primer Part 7 — Working with Strings — ARM

These are my notes for where I can see ARM varying from IA32, as presented in the video Part 7 — Working with Strings.

I’ve not remotely attempted to implement anything approximating optimal string operations for this part — I’m just working my way through the examples and finding obvious mappings to the ARM arch (or, at least, what seem to be obvious ones). When I do something particularly stupid, leave a comment and let me know :)

Working with Strings

.data
     HelloWorldString:
        .asciz "Hello World of Assembly!"
    H3110:
        .asciz "H3110"

.bss
    .lcomm Destination, 100
    .lcomm DestinationUsingRep, 100
    .lcomm DestinationUsingStos, 100

Here’s the storage that the provided example StringBasics.s uses. No changes are required to compile this for ARM.

1. Simple copying using movsb, movsw, movsl

    @movl $HelloWorldString, %esi
    movw r0, #:lower16:HelloWorldString
    movt r0, #:upper16:HelloWorldString

    @movl $Destination, %edi
    movw r1, #:lower16:Destination
    movt r1, #:upper16:Destination

    @movsb
    ldrb r2, [r0], #1
    strb r2, [r1], #1

    @movsw
    ldrh r3, [r0], #2
    strh r3, [r1], #2

    @movsl
    ldr r4, [r0], #4
    str r4, [r1], #4

More visible complexity than IA32, but not too bad overall.

IA32’s movs instructions implicitly take their source and destination addresses from %esi and %edi, and increment/decrement both. Because of ARM’s load/store architecture, separate load and store instructions are required in each case, but there is support for indexing of these registers:

ARM addressing modes

According to ARM A8.5, memory access instructions commonly support three addressing modes:

  • Offset addressing — An offset is applied to an address from a base register and the result is used to perform the memory access. It’s the form of addressing I’ve used in previous parts and looks like [rN, offset]
  • Pre-indexed addressing — An offset is applied to an address from a base register, the result is used to perform the memory access and also written back into the base register. It looks like [rN, offset]!
  • Post-indexed addressing — An address is used as-is from a base register for memory access. The offset is applied and the result is stored back to the base register. It looks like [rN], offset and is what I’ve used in the example above.

2. Setting / Clearing the DF flag

ARM doesn’t have a DF flag (to the best of my understanding). It could perhaps be simulated through the use of two instructions and conditional execution to select the right direction. I’ll look further into conditional execution of instructions on ARM in a later post.

3. Using Rep

ARM also doesn’t appear to have an instruction quite like IA32’s rep instruction. A conditional branch and a decrement will be the long-form equivalent. As branches are part of a later section, I’ll skip them for now.

    @movl $HelloWorldString, %esi
    movw r0, #:lower16:HelloWorldString
    movt r0, #:upper16:HelloWorldString

    @movl $DestinationUsingRep, %edi
    movw r1, #:lower16:DestinationUsingRep
    movt r1, #:upper16:DestinationUsingRep

    @movl $25, %ecx # set the string length in ECX
    @cld # clear the DF
    @rep movsb
    @std

    ldm r0!, {r2,r3,r4,r5,r6,r7}
    ldrb r8, [r0,#0]
    stm r1!, {r2,r3,r4,r5,r6,r7}
    strb r8, [r1,#0]

To avoid conditional branches, I’ll start with the assumption that the string length is known (25 bytes). One approach would be using multiple load instructions, but the load multiple (ldm) instruction makes it somewhat easier for us — one instruction to fetch 24 bytes, and a load register byte (ldrb) for the last one. Using the ! after the source-address register indicates that it should be updated with the address of the next byte after those that have been read.

The storing of the data back to memory is done analogously. Store multiple (stm) writes 6 registers×4 bytes = 24 bytes (with the ! to have the destination address updated). The final byte is written using strb.

4. Loading string from memory into EAX register

    @cld
    @leal HelloWorldString, %esi
    movw r0, #:lower16:HelloWorldString
    movt r0, #:upper16:HelloWorldString

    @lodsb
    ldrb r1, [r0, #0]

    @movb $0, %al
    mov r1, #0

    @dec %esi  @ unneeded. equiv: sub r0, r0, #1
    @lodsw
    ldrh r1, [r0, #0]

    @movw $0, %ax
    mov r1, #0

    @subl $2, %esi # Make ESI point back to the original string. unneeded. equiv: sub r0, r0, #2
    @lodsl
    ldr r1, [r0, #0]

In this section, we are shown how the IA32 lodsb, lodsw and lodsl instructions work. Again, they have implicitly assigned register usage, which isn’t how ARM operates.

So, instead of a simple, no-operand instruction like lodsb, we have a ldrb r1, [r0, #0] loading a byte from the address in r0 into r1. Because I didn’t use post indexed addressing, there’s no need to dec or subl the address after the load. If I were to do so, it could look like this:

    ldrb r1, [r0], #1
    sub r0, r0, #1

    ldrh r1, [r0], #2
    sub r0, r0, #2

    ldr r1, [r0], #4

If you trace through it in gdb, look at how the value in r0 changes after each instruction.

5. Storing strings from EAX to memory

    @leal DestinationUsingStos, %edi
    movw r0, #:lower16:DestinationUsingStos
    movt r0, #:upper16:DestinationUsingStos

    @stosb
    strb r1, [r0], #1
    @stosw
    strh r1, [r0], #2
    @stosl
    str r1, [r0], #4

Same kind of thing as for the loads. Writes the letters in r1 (being “Hell” — leftovers from the previous section) into DestinationUsingStos (the result being “HHeHell”). String processing on little endian architectures has its appeal.

6. Comparing Strings

    @cld
    @leal HelloWorldString, %esi
    movw r0, #:lower16:HelloWorldString
    movt r0, #:upper16:HelloWorldString
    @leal H3110, %edi
    movw r1, #:lower16:H3110
    movt r1, #:upper16:H3110

    @cmpsb
    ldrb r2, [r0,#0]
    ldrb r3, [r1,#0]
    cmp r2, r3

    @dec %esi
    @dec %edi
    @not needed because of the addressing mode used

    @cmpsw
    ldrh r2, [r0,#0]
    ldrh r3, [r1,#0]
    cmp r2, r3

    @subl $2, %esi
    @subl $2, %edi
    @not needed because of the addressing mode used
    @cmpsl
    ldr r2, [r0,#0]
    ldr r3, [r1,#0]
    cmp r2, r3

Where IA32’s cmps instructions implicitly load through the pointers in %edi and %esi, explicit loads are needed for ARM. The compare then works in pretty much the same way as for IA32, setting condition code flags in the current program status register (cpsr). If you run the above code, and check the status registers before and after execution of the cmp instructions, you’ll see the zero flag set and unset in the same way as is demonstrated in the video.

The condition code flags are:

  • bit 31 — negative (N)
  • bit 30 — zero (Z)
  • bit 29 — carry (C)
  • bit 28 — overflow (V)

There are other flags in that register — all the details are on pages B1-16 and B1-17 of the ARM Architecture Reference Manual.

And with that, I think we’ve made it (finally) to the end of this part for ARM.

Other assembly primer notes are linked here.

October 22, 2014

CFP for Developer, Testing, Release and Continuous Integration Automation Miniconf at linux.conf.au 2015

This is the Call for Papers for the Developer, Testing, Release and Continuous Integration Automation Miniconf at linux.conf.au 2015 in Auckland. See http://linux.conf.au

This miniconf is all about improving the way we produce, collaborate, test and release software.

We want to cover tools and techniques to improve the way we work together to produce higher quality software:

  • code review tools and techniques (e.g. gerrit)
  • continuous integration tools (e.g. jenkins)
  • CI techniques (e.g. gated trunk, zuul)
  • testing tools and techniques (e.g. subunit, fuzz testing tools)
  • release tools and techniques: daily builds, interacting with distributions, ensuring you test the software that you ship
  • applying CI in your workplace/project

We’re looking for talks about open source technology *and* the human side of things.

Speakers at this miniconf must be registered for the main conference (although there are a limited number of miniconf-only tickets available for miniconf speakers if required).

There will be a projector, and there is a possibility the talk will be recorded (depending on whether the conference A/V is up and running) – if recorded, talks will be posted in the same place, with the same CC license, as the main LCA talks.

CFP is open until midnight November 21st 2014.

By submitting a presentation, you’re agreeing to the following:

I allow Linux Australia to record my talk.

I allow Linux Australia to release any recordings of my presentations, tutorials and miniconfs under the Creative Commons Attribution-Share Alike License.

I allow Linux Australia to release any other material (such as slides) from my presentations, tutorials and miniconfs under the Creative Commons Attribution-Share Alike License.

I confirm that I have the authority to allow Linux Australia to release the above material; i.e., if your talk includes any information about your employer, or another person’s copyrighted material, that person has given you authority to release this information.

Any questions? Contact me: stewart@flamingspork.com

 

http://goo.gl/forms/KZI1YDDw8n

[life] Day 266: Prep play date, shopping and a play date

Zoe's sleep seems a bit messed up lately. She yelled out for me at 3:53am, and I resettled her, but she wound up in bed with me at 4:15am anyway. It took me a while to get back to sleep, maybe around 5am, but then we slept in until about 7:30am.

That made for a bit of a mad rush to get out the door to Zoe's primary school for her "Prep Play Date" orientation. We managed to make it out the door by a bit after 8:30am.

15 minutes is what it appears to take to scooter to school, which is okay. With local traffic being what it is, I think this will be a nice way to get to and from school next year, weather permitting.

We signed in, and Zoe got paired up with an existing (extremely tall) Prep student to be her buddy. The other girl was very keen to hold Zoe's hand, which Zoe was a bit dubious about at first, but they got there eventually.

The kids spent about 20 minutes rotating through the three classrooms, with a different buddy in each classroom. They were all given a 9 station name badge when they signed in, and they got a sticker for each station that they visited in each classroom.

It was a really nice morning, and I discovered there's one other girl from Zoe's Kindergarten going to her school, so I made a point of introducing myself to her mother.

I've got a really great vibe about the school, and Zoe enjoyed the morning. I'm looking forward to the next stage of her education.

We scootered home afterwards, and Zoe got the speed wobbles going down the hill and had a spectacular crash, luckily without any injuries thanks to all of her safety gear.

Once we got home, we headed out to the food wholesaler at West End to pick up a few bits and pieces, and then I had to get to Kindergarten to chair the monthly PAG meeting. I dropped Zoe at Megan's place for a play date while I was at the Kindergarten.

After the meeting, I picked up Zoe and we headed over to Westfield Carindale to buy a birthday present for Zoe's Kindergarten friend, Ivy, who is having a birthday party on Saturday.

We got home from Carindale with just enough time to spare before Sarah arrived to pick Zoe up.

I then headed over to Anshu's place for a Diwali dinner.

Speaker Feature: Audrey Lobo-Pulo, Jack Moffitt

Audrey Lobo-Pulo

Audrey Lobo-Pulo

Evaluating government policies using open source models

10:40am Wednesday 14th January 2015

Dr. Audrey Lobo-Pulo is a passionate advocate of open government and the use of open source software in government modelling. Having started out as a physicist developing theoretical models in the field of high speed data transmission, she moved into the economic policy modelling sphere and worked at the Australian Treasury from 2005 till 2011.

Currently working at the Australian Taxation Office in Sydney, Audrey enjoys discussions on modelling economic policy.

For more information on Audrey and her presentation, see here. You can follow her as @AudreyMatty and don’t forget to mention #LCA2015.



Jack Moffitt

Jack Moffitt

Servo: Building a Parallel Browser

10:40am Friday 16th January 2015

Jack’s current project is called Chesspark, an online community for chess players built on top of technologies like XMPP (aka Jabber), AJAX, and Python.

He previously created the Icecast Streaming Media Server, spent a lot of time developing and managing the Ogg Vorbis project, and helped create and run the Xiph.org Foundation. All these efforts exist to create a common, royalty-free, and open standard for multimedia on the Internet.

Jack is also passionate about Free Software and Open Source, technology, music, and photography.

For more information on Jack and his presentation, see here. You can follow him as @metajack and don’t forget to mention #LCA2015.

October 21, 2014

Speaker Feature: Denise Paolucci, Gernot Heiser

Denise Paolucci

Denise Paolucci

When Your Codebase Is Nearly Old Enough To Vote

11:35 am Friday 16th January 2015

Denise is one of the founders of Dreamwidth, a journalling site and open source project forked from Livejournal, and one of only two majority-female open source projects.

Denise has appeared at multiple open source conferences to speak about Dreamwidth, including OSCON 2010 and linux.conf.au 2010.

For more information on Denise and her presentation, see here.



Gernot Heiser

Gernot Heiser

seL4 Is Free - What Does This Mean For You?

4:35pm Thursday 15th January 2015

Gernot is a Scientia Professor and the John Lions Chair for operating systems at the University of New South Wales (UNSW).

He is also leader of the Software Systems Research Group (SSRG) at NICTA. In 2006 he co-founded Open Kernel Labs (OK Labs, acquired in 2012 by General Dynamics) to commercialise his L4 microkernel technology.

For more information on Gernot and his presentation, see here. You can follow him as @GernotHeiser and don’t forget to mention #LCA2015.

OpenStack infrastructure swift logs and performance

Turns out I’m not very good at blogging very often. However I thought I would put what I’ve been working on for the last few days here out of interest.

For a while the OpenStack Infrastructure team have wanted to move away from storing logs on disk to something more cloudy – namely, swift. I’ve been working on this on and off for a while and we’re nearly there.

For the last few weeks the openstack-infra/project-config repository has been uploading its CI test logs to swift as well as storing them on disk. This has given us the opportunity to compare the last few weeks of data and see what kind of effects we can expect as we move assets into object storage.

  • I should add a disclaimer/warning before you read: my methods here will likely make statisticians cringe horribly. For the moment, though, I’m just getting an indication of how things compare.

The setup

Fetching files from an object storage is nothing particularly new or special (CDNs have been doing it for ages). However, for our usage we want to serve logs with os-loganalyze, giving the opportunity to hyperlink to timestamp anchors or filter by log severity.

First though we need to get the logs into swift somehow. This is done by having the job upload its own logs. Rather than using (or writing) a Jenkins publisher we use a bash script to grab the job’s own console log (pulled from the Jenkins web UI) and then upload it to swift using credentials supplied to the job as environment variables (see my zuul-swift contributions).

This does, however, mean part of the logs are missing. For example, the fetching and uploading processes write to Jenkins’ console log, but because the log has already been fetched those entries are missing from the uploaded copy. Therefore the upload wants to be the very last thing you do in a job. I did see somebody do something similar where they keep the download process running in a fork so that they can fetch the full log, but we’ll look at that another time.

When a request comes into logs.openstack.org, it is handled like so:

  1. apache vhost matches the server
  2. if the request ends in .txt.gz, console.html or console.html.gz rewrite the url to prepend /htmlify/
  3. if the requested filename is a file or folder on disk, serve it up with apache as per normal
  4. otherwise rewrite the requested file to prepend /htmlify/ anyway

os-loganalyze is set up as a WSGIScriptAlias at /htmlify/. This means all files that aren’t on disk are sent to os-loganalyze (or, if the file is on disk but matches a file we want to mark up, it is also sent to os-loganalyze). os-loganalyze then does the following:

  1. Checks the requested file path is legitimate (or throws a 400 error)
  2. Checks if the file is on disk
  3. Checks if the file is stored in swift
  4. If the file is found, markup (such as anchors) is optionally added and the request is served
    1. When serving from swift, the file is fetched via the swiftclient by os-loganalyze in chunks and streamed to the user on the fly (see the sketch after this list). Obviously fetching from swift will have larger network consequences.
  5. If no file is found, 404 is returned
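
The chunked fetch in step 4.1 can be done with python-swiftclient's resp_chunk_size option. A minimal sketch, assuming placeholder credentials and container/object names (my illustration, not the os-loganalyze source):

    import swiftclient

    # placeholder auth details for illustration only
    conn = swiftclient.client.Connection(authurl='https://example.com/auth/v1.0',
                                         user='username', key='secret')

    # with resp_chunk_size set, the body comes back as a generator of chunks
    headers, body = conn.get_object('logs', 'console.html', resp_chunk_size=65536)
    for chunk in body:
        pass  # mark up and stream each chunk to the user on the fly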

If the file exists both on disk and in swift then step #2 can be skipped by passing ?source=swift as a parameter (thus only attempting to serve from swift). In our case the files exist both on disk and in swift since we want to compare the performance, so this feature is necessary.
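
Putting the whole decision tree together, a minimal sketch of the serving logic, assuming a made-up document root and a dict standing in for swift (not the actual os-loganalyze code):

    import os

    SWIFT_STORE = {}  # stand-in for the swift container: object name -> bytes

    def handle_request(path, source=None):
        # 1. validate the path, 2. try disk (unless ?source=swift), 3. try swift
        if '..' in path:
            return 400, b''
        local = os.path.join('/srv/static/logs', path.lstrip('/'))  # assumed docroot
        if source != 'swift' and os.path.isfile(local):
            with open(local, 'rb') as f:
                body = f.read()
        elif path in SWIFT_STORE:
            body = SWIFT_STORE[path]
        else:
            return 404, b''
        return 200, body  # the real code adds markup and streams in chunks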

So now that we have the logs uploaded into swift and stored on disk we can get into some more interesting comparisons.

Performance testing process

My first attempt at this was simply to fetch the files from disk and then from swift and compare the results. A crude little python script did this for me: http://paste.openstack.org/show/122630/

The script fetches a copy of the log from disk and then from swift (both through os-loganalyze and therefore marked-up) and times the results. It does this in two scenarios:

  1. Repeatedly fetching the same file (to get a good average)
  2. Fetching a list of recent logs from gerrit (using the gerrit api) and timing those
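
The core of such a script is just timing a fetch of the same log from each source. A stripped-down sketch (Python 2, to match the era; the URL is made up, and the real script splits out request/response/transfer times rather than timing the whole fetch):

    import time
    import urllib2

    def timed_fetch(url):
        # fetch a URL once; return seconds elapsed and bytes received
        start = time.time()
        body = urllib2.urlopen(url).read()
        return time.time() - start, len(body)

    url = 'http://logs.openstack.org/some/job/console.html'  # made-up path
    for suffix in ('', '?source=swift'):  # disk first, then force swift
        elapsed, size = timed_fetch(url + suffix)
        print suffix or 'disk', elapsed, size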

I then ran this in two environments.

  1. On my local network, on the other side of the world from the logserver
  2. On 5 parallel servers in the same DC as the logserver

Running on my home computer likely introduced a lot of errors due to my limited bandwidth, noisy network and large network latency. To help eliminate these errors I also tested on 5 performance servers in the Rackspace cloud next to the log server itself. In this case I used ansible to orchestrate the test nodes, thus running the benchmarks in parallel. I did this since in real-world use there will often be many parallel requests at once affecting performance.

The following metrics are measured for both disk and swift:

  1. request sent – time taken to send the http request from my test computer
  2. response – time taken for a response from the server to arrive at the test computer
  3. transfer – time taken to transfer the file
  4. size – filesize of the requested file

The total time can be found by adding the first 3 metrics together.

 

Results

Home computer, sequential requests of one file

 

The complementary colours are the same metric, with the darker line representing swift’s performance (over the lighter disk performance line). The vertical lines over the plots are the error bars, while the fetched filesize is the column graph down the bottom. Note that the transfer and file size metrics use the right axis for scale while the rest use the left.

As you would expect, the request times for both disk and swift files are more or less comparable. We see a more noticeable difference in the responses though, with swift being slower. This is because disk is checked first, and if the file isn’t found on disk then a connection is made to swift to check there. Clearly this is going to be slower.

The transfer times are erratic and varied. We can’t draw much from these, so let’s dig deeper.

The total time from request to transfer can be seen by adding the times together. I didn’t plot this because, when requesting files of different sizes (as in the next scenario), there is nothing worth comparing. Arguably we could compare them anyway, as the log sizes for identical jobs are similar, but I didn’t think it was interesting.

The file sizes are there for interest sake but as expected they never change in this case.

You might notice that the end of the graph is much noisier. That is because I’ve applied some rudimentary data filtering.

Metric              Std dev (disk)  Std dev (swift)  Mean (disk)   Mean (swift)
request sent (ms)   54.89516183     43.71917948      283.9594368   282.5074598
response (ms)       56.74750291     194.7547117      373.7328851   531.8043908
transfer (ms)       849.8545127     838.9172066      5091.536092   5122.686897
size (KB)           7.121600095     7.311125275      1219.804598   1220.735632

 

I know it’s argued to be poor practice to remove outliers using twice the standard deviation, but I did it anyway to see how it would look. I only did one pass at this, even though I calculated new standard deviations.

 

Metric              Std dev (disk)  Std dev (swift)  Mean (disk)   Mean (swift)
request sent (ms)   13.88664039     14.84054789      274.9291111   276.2813889
response (ms)       44.0860569      115.5299781      364.6289583   503.9393472
transfer (ms)       541.3912899     515.4364601      5008.439028   5013.627083
size (KB)           7.038111654     6.98399691       1220.013889   1220.888889

 

I then moved the outliers to the end of the results list instead of removing them completely and used the newly calculated standard deviation (ie without the outliers) as the error margin.
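
In code terms, the filtering amounts to something like this (my reconstruction, not the original script):

    import numpy

    def shuffle_outliers(samples):
        # move samples more than two standard deviations from the mean
        # to the end of the list, and recompute the stats without them
        mean, std = numpy.mean(samples), numpy.std(samples)
        keep = [s for s in samples if abs(s - mean) <= 2 * std]
        outliers = [s for s in samples if abs(s - mean) > 2 * std]
        return keep + outliers, numpy.mean(keep), numpy.std(keep)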

Then, to get a better indication of the average times, I plotted histograms of each of these metrics.
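
The plotting itself is straightforward matplotlib, something along these lines, where disk_times and swift_times are the measured values for one metric (a sketch, not the original plotting code):

    import matplotlib.pyplot as plt

    def plot_metric(disk_times, swift_times, label):
        # overlaid histograms make the disk/swift comparison easy to eyeball
        plt.hist(disk_times, bins=50, alpha=0.5, label='disk')
        plt.hist(swift_times, bins=50, alpha=0.5, label='swift')
        plt.xlabel(label)
        plt.legend()
        plt.show()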

Here we can see a similar request time.

 

Here it is quite clear that swift is slower at actually responding.

 

Interestingly both disk and swift sources have a similar total transfer time. This is perhaps an indication of my network limitation in downloading the files.

 

Home computer, sequential requests of recent logs

Next from my home computer I fetched a bunch of files in sequence from recent job runs.

 

 

Again I calculated the standard deviation and average to move the outliers to the end and get smaller error margins.

Metric              Std dev (disk)  Std dev (swift)  Mean (disk)   Mean (swift)
request sent (ms)   54.89516183     43.71917948      283.9594368   282.5074598
response (ms)       194.7547117     56.74750291      531.8043908   373.7328851
transfer (ms)       849.8545127     838.9172066      5091.536092   5122.686897
size (KB)           7.121600095     7.311125275      1219.804598   1220.735632

Second pass without outliers:

Metric              Std dev (disk)  Std dev (swift)  Mean (disk)   Mean (swift)
request sent (ms)   13.88664039     14.84054789      274.9291111   276.2813889
response (ms)       115.5299781     44.0860569       503.9393472   364.6289583
transfer (ms)       541.3912899     515.4364601      5008.439028   5013.627083
size (KB)           7.038111654     6.98399691       1220.013889   1220.888889

 

What we are probably seeing here with the large number of slower requests is network congestion in my house. Since the script requests disk, then swift, then disk, and so on, this evens things out, causing similar latency in both sources, as seen.

 

Swift is very much slower here.

 

The transfer times are comparable, though. Again this is likely due to my network limitation.

 

The size histograms don’t really add much here.

 

Rackspace Cloud, parallel requests of same log

Now, to reduce latency and other network effects, I tested fetching the same log over again in 5 parallel streams. Granted, it might have been interesting to see a machine close to the log server do a bunch of sequential requests for the one file (with little other noise), but I didn’t do that at the time unfortunately. Also we need to keep in mind that others may be accessing the log server, so any request, in both my testing and normal use, is going to have competing load.

 

I collected a much larger amount of data here, making it harder to visualise through all the noise and error margins etc. (Sadly I couldn’t find a way of linking to a larger Google spreadsheet graph.) The histograms below give a much better picture of what is going on. However, out of interest, I created a rolling average graph. This graph won’t mean much in reality, but hopefully it will show which is faster on average (disk or swift).
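
The rolling average is nothing fancy: a simple moving-average convolution along these lines (the window size here is an arbitrary choice of mine):

    import numpy

    def rolling_average(values, window=20):
        # smooth noisy per-request timings with a simple moving average
        kernel = numpy.ones(window) / float(window)
        return numpy.convolve(values, kernel, mode='valid')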

 

Now that we’re closer to the server, you can see that swift is noticeably slower. This is confirmed by the averages:

 

Metric              Std dev (disk)  Std dev (swift)  Mean (disk)   Mean (swift)
request sent (ms)   32.42528982     9.749368282      4.87337544    4.05191168
response (ms)       245.3197219     781.8807534      39.51898688   245.0792916
transfer (ms)       1082.253253     2737.059103      1553.098063   4167.07851
size (KB)           0               0                1226          1232

Second pass without outliers:

Metric              Std dev (disk)  Std dev (swift)  Mean (disk)   Mean (swift)
request sent (ms)   1.375875503     0.8390193564     3.487575109   3.418433003
response (ms)       28.38377158     191.4744331      7.550682037   96.65978872
transfer (ms)       878.6703183     2132.654898      1389.405618   3660.501404
size (KB)           0               0                1226          1232

 

Even once outliers are removed we’re still seeing a large latency from swift’s response.

The standard deviations of the requests have now gotten very small. We’ve clearly made a difference by moving closer to the logserver.

 

Very nice and close.

 

Here we can see that for roughly half the requests the response time was the same for swift as for the disk. It’s the other half of the requests bringing things down.

 

The transfer for swift is consistently slower.

 

Rackspace Cloud, parallel requests of recent logs

Finally I ran just over a thousand requests in 5 parallel streams from computers near the logserver for recent logs.

 

Again the graph is too crowded to see what is happening so I took a rolling average.

 

 

Metric              Std dev (disk)  Std dev (swift)  Mean (disk)   Mean (swift)
request sent (ms)   0.7227904332    0.8900549012     3.515711867   3.56191383
response (ms)       434.8600827     909.095546       145.5941102   189.947818
transfer (ms)       1913.9587       2132.992773      2427.776165   2875.289455
size (KB)           6.341238774     7.659678352      1219.940039   1221.384913

Second pass without outliers:

Metric              Std dev (disk)  Std dev (swift)  Mean (disk)   Mean (swift)
request sent (ms)   0.4798803247    0.4966553679     3.379718381   3.405770445
response (ms)       109.6540634     171.1102999      70.31323922   86.16522485
transfer (ms)       1348.939342     1440.2851        2016.900047   2426.312363
size (KB)           6.137625464     7.565931993      1220.318912   1221.881335

 

The averages here are much more reasonable than when we continually tried to request the same file. Perhaps we’re hitting limitations with swift’s serving abilities.

 

I’m not sure why we have a sinc function here. A network expert may be able to tell you more. As far as I know this isn’t important to our analysis, other than the fact that both disk and swift match.

 

Here we can now see swift keeping a lot closer to disk results than when we only requested the one file in parallel. Swift is still, unsurprisingly, slower overall.

 

Swift still loses out on transfers but again does a much better job of keeping up.

 

Error sources

I haven’t accounted for any of the following swift intricacies (in terms of caches etc.):

  • Fetching random objects
  • Fetching the same object over and over
  • Fetching in parallel multiple different objects
  • Fetching the same object in parallel

I also haven’t done anything to account for things like file system caching, network profiling, noisy neighbours etc etc.

os-loganalyze tries to stay authenticated with swift; however:

  • This can time out (causing delays while reconnecting, possibly accounting for some spikes?)
  • This isn’t thread safe (are we hitting those edge cases?)

We could possibly explore getting longer authentication tokens or having os-loganalyze pull from an unauthenticated CDN to add the markup and then serve. I haven’t explored those here though.

os-loganalyze also handles all of the requests, not just those from my testing but also from anybody looking at OpenStack CI logs. In addition, it needs to decompress the gzip stream if required. As such there is potentially a large load on the log server that is unknown to me.

In other words, there are plenty of sources of errors. However, I just wanted to get a feel for the general responsiveness compared to fetching from disk. Both sources had noise in their results, so it should be expected in the real world that downloading logs will never be perfectly consistent.

Conclusions

As you would expect the request times are pretty much the same for both disk and swift (as mentioned earlier) especially when sitting next to the log server.

The response times vary, but looking at the averages and the histograms these are rarely large. Even in the case where requesting the same file over and over in parallel caused responses to slow, the difference was only of the order of 100ms.

The response time is the important one as it indicates how soon a download will start for the user. The total time to stream the contents of the whole log is seemingly less important if the user is able to start reading the file.

One thing that wasn’t tested was streaming of different file sizes. All of the files were roughly the same size (being logs of the same job). For example, what if the asset was a few gigabytes in size, would swift have any significant differences there? In general swift was slower to stream the file but only by a few hundred milliseconds for a megabyte. It’s hard to say (without further testing) if this would be noticeable on large files where there are many other factors contributing to the variance.

Whether or not these latencies are an issue is relative to how the user is using/consuming the logs. For example, if they are just looking at the logs in their web browser on occasion they probably aren’t going to notice a large difference. However if the logs are being fetched and scraped by a bot then it may see a decrease in performance.

Overall I’ll leave deciding on whether or not these latencies are acceptable as an exercise for the reader.

[life] Day 265: Kindergarten and startup stuff

Zoe yelled out for me at 5:15am for some reason, but went back to sleep after I resettled her, and we had a slow start to the day a bit after 7am. I've got a mild version of whatever cold she's currently got, so I'm not feeling quite as chipper as usual.

We biked to Kindergarten, which was a bit of a slog up Hawthorne Road, given the aforementioned cold, but we got there in the end.

I left the trailer at the Kindergarten and biked home again.

I finally managed to get some more work done on my real estate course, and after a little more obsessing over one unit, got it into the post. I've almost got another unit finished as well. I'll try to get it finished in the evenings or something, because I'm feeling very behind, and I'd like to get it into the mail too. I'm due to get the second half of my course material, and I still have one more unit to do after this one I've almost finished.

I biked back to Kindergarten to pick up Zoe. She wanted to watch Megan's tennis class, but I needed to grab some stuff for dinner, so it took a bit of coaxing to get her to leave. I think she may have been a bit tired from her cold as well.

We biked home, and jumped in the car. I'd heard from Matthew's Dad that FoodWorks in Morningside had a good meat selection, so I wanted to check it out.

They had some good roasting meat, but that was about it. I gave up trying to mince my own pork and bought some pork mince instead.

We had a really nice dinner together, and I tried to get her to bed a little bit early. Every time I try to start the bed time routine early, the spare time manages to disappear anyway.

October 20, 2014

SM1000 Part 7 – Over the air in Germany

Michael Wild DL2FW in Germany recently attended a Hamfest where he demonstrated his SM1000. Michael sent me the following email (hint: I used Google translate on the web sites):

Here is the link to the review of our local hamfest.

At the bottom is a video of a short QSO on 40m using the SM-1000 over about 400km. The other station was Hermann (DF2DR). Hermann documented this QSO very well on his homepage also showing a snapshot of the waterfall during this QSO. Big selective fading as you can see, but we were doing well!

He also explains that, when switching to SSB at the same average power level, the voice was almost unintelligible!

SM1000 Beta and FreeDV Update

Rick KA8BMA has been working hard on the Beta CAD work, and fighting a few Eagle DRC battles. Thanks to all his hard work we now have an up to date schematic and BOM for the Betas. He is now working on the Beta PCB layout, and we are refining the BOM with Edwin from Dragino in China. Ike, W3IKIE, has kindly been working with Rick to come up with a suitable enclosure. Thanks guys!

My current estimate is that the Beta SM1000s will be assembled in November. Once I’ve tested a few I’ll put them up on my store and start taking orders.

In the mean time I’ve thrown myself into modem simulations – playing with a 450 bit/s version of Codec 2, LDPC FEC codes, diversity schemes and coherent QPSK demodulation. I’m pushing towards a new FreeDV mode that works on fading channels at negative SNRs. More on that in later posts. The SM1000 and a new FreeDV mode are part of my goals for 2014. The SM1000 will make FreeDV easy to use, the new mode(s) will make it competitive with SSB on HF radio.

Everything is open source, both hardware and software. No vendor lock in, no software licenses and you are free to experiment and innovate.

IBM Pays GlobalFoundries to take Microprocessor Business

Interesting times for IBM, having already divested themselves of the x86 business by selling it on to Lenovo they’ve now announced that they’re paying GlobalFoundries $1.5bn to take pretty much that entire side of the business!

IBM (NYSE: IBM) and GLOBALFOUNDRIES today announced that they have signed a Definitive Agreement under which GLOBALFOUNDRIES plans to acquire IBM’s global commercial semiconductor technology business, including intellectual property, world-class technologists and technologies related to IBM Microelectronics, subject to completion of applicable regulatory reviews. GLOBALFOUNDRIES will also become IBM’s exclusive server processor semiconductor technology provider for 22 nanometer (nm), 14nm and 10nm semiconductors for the next 10 years.

It includes IBM’s IP and patents, though IBM will continue to do research for 5 years and GlobalFoundries will get access to that. Now what happens to those researchers (one of whom happens to be a friend of mine) after that isn’t clear.

When I heard the rumours yesterday I was wondering if IBM was aiming to do an ARM and become a fab-less CPU designer, but this is much more like exiting the whole processor business altogether. The fact that they seem to be paying GlobalFoundries to take this off their hands also makes it sound pretty bad.

What this all means for their Power CPU is uncertain, and if I were nVidia or Mellanox in the OpenPOWER alliance I would be hoping I’d known about this before joining up!


[life] Day 264: Pupil Free Day means lots of park play

Today was a Kindergarten (and it seemed most of the schools in Brisbane) Pupil Free Day.

Grace, the head honcho of Thermomix in Australia, was supposed to be in town for a meet and greet, and a picnic in New Farm Park had been organised, but at the last minute she wasn't able to make it due to needing to be in Perth for a meeting. The plan changed and we had a Branch-level picnic meeting at the Colmslie Beach Reserve.

So after Sarah dropped Zoe off, I whipped up some red velvet cheesecake brownie, which seems to be my go-to baked good when required to bring a plate (it's certainly popular), and since I had some leftover sundried tomatoes, I whipped up some sundried tomato dip as well.

The meet up in the park was great. My group leader's daughters were there, as were plenty of other consultants' kids due to the Pupil Free Day, and Zoe was happy to hang out and have a play. There was lots of yummy food, and we were able to graze and socialise a bit. We called it lunch.

After we got home, we had a bit of a clean up of the balcony, which had quite a lot of detritus from various play dates and craft activities. Once that was done, we had some nice down time in the hammock.

We then biked over to a park to catch up with Zoe's friend Mackensie for a play date. The girls had a really nice time, and I discovered that the missing link in the riverside bike path has been completed, which is rather nice for both cycling and running. (It goes to show how long it's been since I've gone for a run, I really need to fix that).

After that, we biked home, and I made dinner. We got through dinner pretty quickly, and so Zoe and I made a batch of ginger beer after dinner, since there was a Thermomix recipe for it. It was cloudy though, and Zoe was more used to Bundaberg ginger beer, which is probably a bit better filtered, so she wasn't so keen on it.

All in all, it was a really lovely way to spend a Pupil Free Day.

LXC setup on Debian jessie

Here's how to setup LXC-based "chroots" on Debian jessie. While this is documented on the Debian wiki, I had to tweak a few things to get the networking to work on my machine.

Start by installing (as root) the necessary packages:

apt-get install lxc libvirt-bin debootstrap

Network setup

I decided to use the default /etc/lxc/default.conf configuration (no change needed here):

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.hwaddr = 00:FF:AA:xx:xx:xx
lxc.network.ipv4 = 0.0.0.0/24

but I had to make sure that the "guests" could connect to the outside world through the "host":

  1. Enable IPv4 forwarding by putting this in /etc/sysctl.conf:

    net.ipv4.ip_forward=1
    
  2. and then applying it using:

    sysctl -p
    
  3. Ensure that the network bridge is automatically started on boot:

    virsh -c lxc:/// net-start default
    virsh -c lxc:/// net-autostart default
    
  4. and that it's not blocked by the host firewall, by putting this in /etc/network/iptables.up.rules:

    -A INPUT -d 224.0.0.251 -s 192.168.122.1 -j ACCEPT
    -A INPUT -d 192.168.122.255 -s 192.168.122.1 -j ACCEPT
    -A INPUT -d 192.168.122.1 -s 192.168.122.0/24 -j ACCEPT
    
  5. and applying the rules using:

    iptables-apply
    

Creating a container

Creating a new container (in /var/lib/lxc/) is simple:

sudo MIRROR=http://http.debian.net/debian lxc-create -n sid64 -t debian -- -r sid -a amd64

You can start or stop it like this:

sudo lxc-start -n sid64 -d
sudo lxc-stop -n sid64

Connecting to a guest using ssh

The ssh server is configured to require pubkey-based authentication for root logins, so you'll need to log into the console:

sudo lxc-stop -n sid64
sudo lxc-start -n sid64

then install a text editor inside the container because the root image doesn't have one by default:

apt-get install vim

then paste your public key in /root/.ssh/authorized_keys.

Then you can exit the console (using Ctrl+a q) and ssh into the container. You can find out what IP address the container received from DHCP by typing this command:

sudo lxc-ls --fancy

Fixing Perl locale errors

If you see a bunch of errors like these when you start your container:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "fr_CA.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

then log into the container as root and use:

dpkg-reconfigure locales

to enable the same locales as the ones you have configured in the host.

October 19, 2014

Links: Docker, Dollar Vans, Bus Bunching and Languages

Speaker Feature: Pavel Emelyanov, Alasdair Allan

Pavel Emelyanov

Pavel Emelyanov

Libcontainer: One lib to rule them all

10:40 pm Friday 16th January 2015

Pavel Emelyanov is a principal engineer at Parallels working on server virtualization projects. He holds a PhD degree in Applied Mathematics from the Moscow Institute of Physics and Technology. His speaking experience includes the talk on network namespaces at LinuxCon 2009 and the presentation of the Virtuozzo resource management at the joint memory management, storage and filesystem summit in April 2011.

For more information on Pavel and his presentation, see here. You can follow him as @xemulp and don’t forget to mention #LCA2015.



Alasdair Allan

Alasdair Allan

Open Source Protocols and Architectures to Fix the Internet of Things…

3:40pm Friday 16th January 2015

Alasdair is a scientist, author, hacker, tinkerer and co-founder of a startup working on fixing the Internet of Things. He spends much of his time probing current trends in an attempt to determine which technologies are going to define our future.

He has also written articles for Make magazine. The latest, entitled “Pick up your tools and get started!”, was posted on 1 September 2014.

For more information on Alasdair and his presentation, see here. You can follow him as @aallan and don’t forget to mention #LCA2015.

Twitter posts: 2014-10-13 to 2014-10-19

A preponderance of yak shaving….

It is often observed that attempting to undertake one task begets another, with the corollary that two days later you’ve built a bikeshed painted in a multitude of colours.

So, dear readers, this tale of woe begins with the need to update my blog to something useful after 18 months and more of neglect. I had been writing a travel blog from when I took some leave off work to wander the globe. This task needed a new, more generic DNS entry, an upgrade to the WordPress installation, and syndication with my Advogato blog. Easily accomplished, and a sense of progress.

This blog entry is going to be mostly a technical one. I’ll try incorporating more of real life in other entries.

Great, now I can tell the world about my little project toying with Vagrant and Puppet.

It is called “Browser In A Box”. It is up on Github at https://github.com/mtearle/browser-in-a-box

It is very simple: a Vagrantfile and a set of Puppet manifests/modules to launch Chromium in kiosk mode inside a VM to hit a certain URL. This is part of planned later work to look at creating a Vagrant development environment for Concerto.

At this point, I got distracted … aside from the liberal upgrades of bash on various machines to address Shellshock

Then I accidentally purchased a new Ultrabook. My previous netbook had been getting long in the tooth and it was time to upgrade. I ended up purchasing a Toshiba Satellite NB10: a reasonable Intel N2830 processor, 4 gig of RAM and 500 gigs of spinning rust. Those are the nice bits.

On the negatives: a crappy Toshiba keyboard layout with the ~ key in a stupid spot, and a UEFI BIOS. It is now blatantly apparent why Matthew Garrett drinks copious quantities of gin.

Special brickbats go to the Ubuntu installer for repartitioning and eating my Windows installation and recovery partition (the option to install over my test Debian installation got overenthusiastic). The wireless chipset (Atheros) has a known problem where it confuses the access point.

The next distraction ended up being a fit of procrastination in the form of rearranging my tiny apartment. I’ve now modelled it in a program called Sweet Home 3D. It’s easy and straightforward to use; it needs a few more furniture models, but is perfectly functional. I shall use it again next time I move.

Finally, we arrive at the the original task. I want to start syncing my calendars between various locations (written here for my benefit later).

They are:

  • Work stream – From my Work (Exchange) to my private host (Radicale) to Google Calendar (which will get to my Android phone)
  • Personal stream – From my private host (Radicale) to Google Calendar (and back again)
  • Party stream – From Facebook’s ical export to my private host and Google Calendar

In addition, there is various syncing of contacts, but that’s not my primary focus at the moment.

It appears that syncevolution will do most of what I want here. The challenge revolves around how to get it working. Ultimately, I want this to live headless on a virtual machine, not running a desktop.

In a fit of enthusiasm, I decided to attempt building it from source, as opposed to using the packages provided by upstream (to avoid dragging in unnecessary dependencies).

I need to build from HEAD due to recent code added to syncevolution to support the change in Google’s CalDAV API to sit behind OAuth v2.

This was not an overly successful exercise; I ended up getting something built, but it didn’t ultimately work.

Problems encountered were:

  • libwbxml2 – The upstream at opensync.org is down. There appear to be forks, so it's a game of guessing the current head/release version.
  • activesyncd – The build system is currently broken in parts. There appears to be bit rot around the evolution bindings, as the evolution API has changed.

I gave up at that point. I've since spun up a different virtual machine with Debian Jessie and an install of Gnome. The packages from the syncevolution upstream installed cleanly, but I have yet to work out the incantations to make it work. However, that, my friends, is a story for a later blog entry…

Multimedia and Music Miniconf - Call for Papers

The Multimedia and Music Miniconf at LCA2015 will be held in Auckland, New Zealand, on Monday 12 January 2015. We are pleased to formally open the miniconf's Call for Papers. Submissions are encouraged from anyone with a story to tell which is related to open software for multimedia or music.

Examples of possible presentations include:

  • demonstrations of multimedia content authored using Open Source programs
  • audio recording examples
  • Open Source games
  • video and image editing on Linux
  • new multimedia software being written
  • multimedia web APIs and applications
  • unusual uses of Open Source multimedia software
  • codec news

In addition, we are planning to hold an informal jam session at the end of the Miniconf, giving community members a chance to showcase their compositions and multimedia creations. Expressions of interest for this are also invited. If musical instruments are required it is preferable that participants arrange this themselves, but with sufficient lead time it might be possible to arrange a loan from locals in Auckland.

The miniconf website at annodex.org/events/lca2015 has further details about the miniconf.

To submit a proposal or for further information, please email Jonathan Woithe (jwoithe@atrad.com.au) or Silvia Pfeiffer (silviapfeiffer1@gmail.com).

Jonathan Woithe and Silvia Pfeiffer

(Multimedia and Music miniconf organisers)

October 18, 2014

KDE 4/Plasma system tray not displaying applications, notifications appearing twice

So I was trying to get RSIBreak working on my machine today, and for some reason it simply wasn’t displaying an icon in the Plasma system tray as it was meant to.

It took some searching around, but eventually I came across a comment on a KDE bug report that had the answer.

I opened up ~/.kde/share/config/plasma-desktop-appletsrc, searched for "systemtray", and lo and behold, there were two Containments, both using the systemtray plugin. It seems that at some point during the history of my KDE installation, I ended up with two system trays, just with one that wasn't visible.

After running kquitapp plasma to kill the desktop, I removed the first systemtray entry (I made an educated guess and decided that the first one was probably the one I didn’t want any more), saved the file and restarted Plasma.

Suddenly, not only did RSIBreak appear in my system tray, but so did a couple of other applications which I forgot I had installed. This also fixed the problem I was having with all KDE notifications appearing on screen twice, which was really rather annoying and I’m not sure how I coped with it for so long…
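For reference, the whole dance condensed into shell steps (the editor and the Plasma restart command are whatever your distro provides, and on some systems the config prefix is ~/.kde4 rather than ~/.kde):

kquitapp plasma                                                   # stop the desktop shell
grep -n systemtray ~/.kde/share/config/plasma-desktop-appletsrc   # find the duplicate Containments
$EDITOR ~/.kde/share/config/plasma-desktop-appletsrc              # delete the unwanted [Containments][N] group
kstart plasma-desktop                                             # bring Plasma back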



Filed under: Linux, Uncategorized Tagged: KDE, linux, Linux Tips

The r8169 driver and mysterious network problems

A few months ago, a friend of mine was having a problem. When he hooked up his Toshiba laptop to the Ethernet port in his bedroom, it would work under Windows, but not under Linux. When he hooked it up to the port in the room next door, it would work under both.

I headed over with my Samsung Ultrabook, and sure enough – it worked fine under Windows, but not Linux, while the room next door worked under both.

As it turns out, both our laptops used Realtek RTL8168-series Ethernet controllers, which are normally handled just fine by the r8169 driver found in the mainline kernel. However, Realtek also releases an r8168 driver (available in Debian as r8168-dkms). Upon installing that, everything worked fine.
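If you want to try the same swap on Debian, it looks roughly like this (the explicit blacklist step may be redundant, as the package can take care of it for you):

lspci -nn | grep -i ethernet          # confirm it's an RTL8168-family NIC
sudo apt-get install r8168-dkms       # Realtek's out-of-tree driver
echo "blacklist r8169" | sudo tee /etc/modprobe.d/blacklist-r8169.conf
sudo update-initramfs -u              # make sure the blacklist applies at early boot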

(At some point I should probably go back and figure out why it didn’t work under r8169 so I can file a bug…)



Filed under: Hardware, Linux Tagged: Computing, Drivers, linux, Linux Tips

PRINCE2 Checklist and Flowchart

Recently a simple statement of PRINCE2 governance structures was provided. From this it is possible to derive a checklist for project managers to tick off, just to make sure that everything is done. Please note that this checklist is tailored and combines some functions. For example, there is no Business Review Plan as it is argued that any sensible project should incorporate these into the Business Case and the Project Plan.

A simple graphic is provided to assist with this process.

read more

File Creation Time in Linux

Linux offers most of the expected file attributes from the command line, including the owner of a file, the group, the size, the date modified and the name. However, users often want to find out when a file was created. This requires a little bit of extra investigation.
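A quick illustration of the problem, and one ext4-specific way around it (the filename and device are placeholders; run the debugfs step as root):

stat myfile                      # shows access/modify/change times, but no creation time
# ext4 stores a creation time (crtime) on disk; debugfs can read it:
debugfs -R "stat <$(stat -c %i myfile)>" /dev/sda1 | grep crtime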

read more

October 17, 2014

Haskell : A neat trick for GHCi

Just found a really nice little hack that makes working in the GHC interactive REPL a little easier and more convenient. First of all, I added the following line to my ~/.ghci file.

  :set -DGHC_INTERACTIVE

All that line does is define a GHC_INTERACTIVE pre-processor symbol.

Then in a file that I want to load into the REPL, I need to add this to the top of the file:

  {-# LANGUAGE CPP #-}

and then in the file I can do things like:

  #ifdef GHC_INTERACTIVE
  import Data.Aeson.Encode.Pretty
  -- the snippet assumes a lazy ByteString alias; if the module doesn't
  -- already import one, this is needed too:
  import qualified Data.ByteString.Lazy.Char8 as LBS

  prettyPrint :: Value -> IO ()
  prettyPrint = LBS.putStrLn . encodePretty
  #endif

In this particular case, I'm working with some relatively large chunks of JSON and it's useful to be able to pretty print them when I'm in the REPL, but I have no need for that function when I compile that module into my project.
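To see it paying off in a session (the module and value names here are invented):

$ ghci Foo.hs
*Foo> :type prettyPrint
prettyPrint :: Value -> IO ()
*Foo> prettyPrint someParsedValue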

That time that I registered an electric vehicle

So, tell us a story, Uncle Paul.

Sure. One time when I was in Rovers, ...

No, tell us the story of how you got your electric motorbike registered!

Oh, okay then.

It was the 20th of February - a Friday. I'd taken the day off to get the bike registered. I'd tried to do this a couple of weeks before then, but despite being told a month beforehand that the wait for new registrations was only a couple of days, when I came to book it I found out that the earliest they could do was the 20th, two weeks away. So the 20th it was.

That morning I had to get the bike inspected by the engineer, get his sign-off, and take it down to the motor registry to get it inspected at 8:30AM. I also had to meet the plumber at our house, which meant I left a bit late, and by the time I was leaving the engineer it was already 8:15AM and I was in traffic. Say what you like about Canberra being a small town, but people like driving in and the traffic was a crawl. I rang the motor registry and begged for them to understand that I'd be there as soon as possible and that I might be a couple of minutes late. I squeaked into the entrance just as they were giving up hope, and they let me in because of the novelty of the bike and because I wasn't wasting their time.

The roadworthy inspection went fairly harmlessly - I didn't have a certificate from a weighbridge saying how heavy it was, but I knew it was only about eight kilos over the original bike's weight, so probably about 240 kilos? "OK, no worries," they said, scribbling that down on the form. The headlights weren't too high, the indicators worked, and there was no problem with my exhaust being too loud.

(Aside: at the inspection station there they have a wall full of pictures of particularly egregious attempts to get dodgy car builds past an inspection. Exhaust stuffed full of easily-removable steel wool? Exhausts with big burnt patches where they've been oxy'd open and welded shut again? Panels attached with zip ties? Bolts missing? Plastic housings melted over ill-fitted turbos? These people have seen it all. Don't try to fool them.)

Then we came up to the really weird part of my dream. You know, the part where I know how to tap dance, but I can only do it while wearing golf shoes?

Er, sorry. That was something else. Then we came to the weird part of the process.

Modified vehicles have to get a compliance plate, to show that they comply with the National Code of Practice on vehicle conversions. The old process was that the engineer that inspected the vehicle to make sure it complied had blank compliance plates; when you brought the vehicle in and it passed their inspection, they then filled out all the fields on the plate, attached the plate to the vehicle, and then you transported it down to Main Roads. But that was a bit too open to people stealing compliance plates, so now they have a "better" system. What I had to do was:

  1. Get the bike inspected for road worthiness.
  2. They hand me a blank compliance plate.
  3. I then had to take it to the engineer, who told me the fields to fill in.
  4. He then told me to go to a trophy making place, where they have laser etchers that can write compliance plates beautifully.
  5. I arrive there at 11AM. They say it'll be done by about 2PM.
  6. Go and have lunch with friends. Nothing else to do.
  7. Pick etched compliance plate up.
  8. Take compliance plate back to engineer. Because he's busy, borrow a drill and a rivet gun and attach the plate to the bike myself.
  9. Take it back to Main Roads, who check that the plate is attached to the bike correctly and stamp the road worthiness form. Now I can get the bike registered.
Yeah, it's roundabout. Why not just engrave the plates at Main Roads with the details the engineer gives them? But that's the system, so that's what I did.

And so I entered the waiting department. It probably only took about fifteen minutes to come up next in the queue, but it was fifteen minutes I was impatient to see go. We went through the usual hilarious dance with values:

  • Her: What are you registering?
  • Me: An electric motorbike.
  • Her: How many cylinders?
  • Me: Er... it's electric. None.
  • Her: None isn't a value I can put in.
  • Me: (rolls eyes) OK, one cylinder.
  • Her: OK. How many cubic centimetres?
Many months ago I had enquired about custom number plates, and it turns out that motorbikes can indeed have them. Indeed, I could buy "3FAZE" if I wanted. For a mere $2,600 or so. It was very tempting, but when I weighed it up against getting new parts for the bike (which it turned out I would need sooner rather than later, but that's a story for another day) I thought I'd save up for another year.

So I finally picked up my new set of plates, thanked her for her time, and said "Excuse me, but I have to do this:" and then yelled:

"Yes!!!!"

Well, maybe I kept my voice down a little. But I had finally done it - after years of work, several problems, one accident, a few design changes, and lots of frustration and gradual improvement, I had an actual, registered electric motorbike I had built nearly all myself.

I still get that feeling now - I'll be riding along and I'll think, "wow, I'm actually being propelled along by a device I built myself. Look at it, all working, holding together, acting just like a real motorbike!" It feels almost like I've got away with something - a neat hack that turns out to work just as well as all those beautifully engineered mega-budget productions. I'm sure a lot of people don't notice it - it does look a bit bulky, but it's similar enough to a regular motorbike that it probably just gets overlooked as another two-wheeled terror on the roads.

Well, I'll just have to enjoy it myself then :-)

[life] Day 261: Lots of play dates with boys, TumbleTastics, and a fairy gathering

Today was a typical jam-packed day. Zoe had a brief wake up at some point overnight because she couldn't find Cowie, right next to her head, but that was it.

First up, the PAG fundraising committee came over for a quick (well, more like 2-hour) meeting at my place to discuss planning for the sausage sizzle tomorrow. Because I don't have Zoe, I've volunteered to do a lot of the running around, so I'm going to have a busy day.

Mel had brought Matthew and Olivia with her, so Zoe and Matthew had a good time playing, and Olivia kept trying to join in.

That meeting ran right up until I realised we had to head off for TumbleTastics, so Zoe got ready in record time and we scootered over and made it there just as her class was starting. I was sure we were going to be late, so I was happy we made it in time.

Lachlan and his Mum, Laura, and little sister came over for lunch again afterwards, and stayed for a little while.

After they left, we started getting ready for the Fairy Nook's attempt to break the Guinness Book of Records record for the most fairies in one place. We needed to get a wand, so once Zoe was appropriately attired, we walked around the corner to Crackerjack Toys and picked up a wand.

After that, I popped up to Mel's place to collect a whole bunch of eskies that the local councillor had lent us for the sausage sizzle. Mel had also picked up a tutu for Zoe from the local two dollar store in her travels.

We got home, and then walked to the Hawthorne AFL oval where the record attempt was. Initially there were like two other fairies there, but by 4:30pm, there was a pretty good turnout. I don't know what the numbers were, but I'm pretty sure they were well under the 872 they needed. There was a jumping castle and a few of Zoe's friends from Kindergarten, so it was all good.

Sarah arrived to pick up Zoe from there, and I walked home.

October 16, 2014

Speaker Feature: Laura Bell, Michael Cordover

Laura Bell

Laura Bell

Why can't we be friends? Integrating Security into an Existing Agile SDLC

3:40pm Friday 16th January 2015

Laura describes herself as an application security wrangler, repeat dreamer, some-time builder, python juggler, Mom and wife.

For more information on Laura and her presentation, see here. You can follow her as @lady_nerd and don’t forget to mention #LCA2015.



Michael Cordover

Michael Cordover

Using FOI to get source code: the EasyCount experience

3:40pm Wednesday 14th January 2015

Michael is interested in the law, science, politics and everything in between. He has worked in computing, event management and project management. He's a policy wonk and systems-oriented, and he loves variety but is interested in detail.

His life goal as a child was to know everything. He says that's impossible but is still trying to get as close as he can.

For more information on Michael and his presentation, see here. You can follow him as @mjec and don’t forget to mention #LCA2015.

[life] Day 260: Bedwetting, a morning tea play date, and swim class

Zoe woke up at something like 3:30am because she'd wet the bed. She hasn't wet the bed since before she turned 4. In fact, I bought a Connie pad and she promptly never wet the bed again. I was actually thinking about stopping using it just last night, so I obviously jinxed things.

Anyway, she woke up, announced she'd had an accident, and I smugly thought I'd have it all handled, but alas, the pad was too low down, so she'd still managed to wet the mattress, which was annoying. Plan B was to just switch her to the bottom bunk, which still worked out pretty well. I've learned an important lesson about the placement of the Connie pad now.

Unfortunately for me, it seems that if I get woken up after about 4am, I have a low probability of getting back to sleep, and I'd gotten to bed a bit late the night before, so I only wound up with about 5 hours and felt like crap all day.

Vaeda and her mum, Francesca came over for a morning tea play date. I'd been wanting an excuse to try out a new scone recipe that I'd discovered, so I cranked out some scones for morning tea.

Vaeda and Francesca couldn't stay for too long, but it was a nice morning nonetheless. Then we popped out to Woolworths to pick up a $30 gift card that the store had donated towards the weekend sausage sizzle. Not quite 70 kg of free sausages, but better than nothing.

After we got back, we had some lunch, and I tried to convince Zoe to have a nap with me, without success, but we did have a couple of hours of quietish time, and I got to squeeze in some reading.

We biked over to swim class and then biked home, and I made dinner. Zoe was pretty tired, so I got her to bed nice and easily. It'll be an early night for me too.

October 15, 2014

Speaker Feature: John Dickinson, Himangi Saraogi

John Dickinson

John Dickinson

Herding Cats: Getting an open source community to work on the same thing.

2:15pm Thursday 15th January 2015

John is a familiar sight around the world; he has spoken at many conferences, summits, and meetups, including the OpenStack Summit, OSCON, and linux.conf.au.

He is Director of Technology at SwiftStack. SwiftStack is a technology innovator of private cloud storage for today's applications, powered by OpenStack Object Storage.

For more information on John and his presentation, see here. You can follow him as @notmyname and don’t forget to mention #lca2015.



Himangi Saraogi

Himangi Saraogi

Coccinelle: A program matching and transformation tool

1:20pm Wednesday 14th January 2015

Himangi finds contributing to open source a great learning platform; she has been contributing to the Linux kernel herself and has had many patches accepted.

She has experience with tools like checkpatch, sparse and coccinelle.

For more information on Himangi and her presentation, see here. You can follow her as @himangi99 and don’t forget to mention #lca2015.

Sliding around... spinning around.

The wiring and electronics for the new omniwheel robot are coming together nicely. Having wired this up using 4 individual stepper controllers, one sees the value in commissioning a custom base board for the stepper drivers to plug into. I still have to connect an IMU to the beast, so precision strafing will (hopefully) be obtainable. The sparkfun mecanum video has the more traditional two wheels each side design, but does wobble a bit when strafing.





Apart from the current requirements, the new robot is also really heavy, probably heavier than Terry. I'm still working out what battery to use to meet the high current needs of four reasonable steppers on the move.



[life] Day 259: Kindergarten, more demos and play dates

I was pretty exhausted after yesterday, so getting out of bed this morning took some serious effort. I started the day with a chiropractic adjustment and then got stuck into doing the obligatory "pre-cleaner clean" and preparing for my third Thermomix demonstration.

The cleaners arrived and I headed off around the corner. My host thought the demo was starting at 10:30am, so I again had a bit of extra time up my sleeve.

My Group Leader, Maria, came to observe this demo, and I thought she was just going to be incognito, but to my pleasant surprise, she actually helped out with some of the washing up throughout the demo, which made it easier.

The demo went really well, and I was happy with how it went, and Maria gave me really positive feedback as well, so I was really stoked.

I got home with enough time to collapse on the couch with a book for half an hour before I biked to Kindergarten to pick up Zoe.

As we were heading out, I realised I'd left her helmet at home on her scooter. That's what I get for not putting it back on her bike trailer. So I sent her to Megan's house and biked home to pick up the helmet and headed back again. Two runs up Hawthorne Road in the afternoon heat was a good bit of exercise.

After a brief play at Megan's, we headed home, and I started dinner. For some reason, I was super organised tonight and had dinner on the table nice and early, and everything cleaned up afterwards, so we had plenty of time to go out for a babyccino before bath time and bed time, and I still managed to get Zoe to bed a little early, and I didn't have any cleaning up to do afterwards.

It's been a good day.

My entry in the "Least Used Software EVAH" competition

For some reason, I seem to end up writing software for very esoteric use-cases. Today, though, I think I’ve outdone myself: I sat down and wrote a Ruby library to get and set process resource limits – those things that nobody ever thinks about except when they run out of file descriptors.

I didn’t even have a direct need for it. Recently I was grovelling through the EventMachine codebase, looking at the filehandle limit code, and noticed that the pure-ruby implementation didn’t manipulate filehandle limits. I considered adding it, then realised that there wasn’t a library available to do it. Since I haven’t berked around with FFI for a while, I decided to write rlimit. Now to find the time to write that patch for EventMachine…
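For context, the shell equivalent of what the gem exposes in-process via setrlimit(2) is the familiar:

ulimit -n          # print the current soft limit on open file descriptors
ulimit -n 4096     # raise it for this shell and its children

The point of the gem is letting a Ruby process do this to itself, without a wrapper script.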

Since I doubt there are many people who have a burning need to manipulate rlimits in Ruby, this gem will no doubt sit quiet and undisturbed in the dark, dusty corners of rubygems.org. However, for the three people on earth who find this useful: you’re welcome.

October 14, 2014

[life] Day 258: Kindergarten, demonstrations and play dates

I had my second Thermomix demonstration this morning. It was a decent drive away from home, and in my thoroughness to be properly prepared, I somehow managed to misjudge the time and arrived an hour earlier than I needed to. Oops.

It was good to have the additional time up my sleeves though, and I was happy with how the demonstration went, and it left me with a comfortable amount of time to get to Kindergarten to pick Zoe up. It did completely wipe out the day though.

Zoe wanted to watch Megan's tennis class, so I left her with Jason while I popped home to get changed, and then came back in time for the conclusion of tennis class.

Zoe wanted Megan to come over for a play date, so I took Megan back to our place, and the girls had a great afternoon on the balcony doing some self-directed craft. I used the time to play catch up and make a bunch of phone calls.

It wasn't until after Sarah picked up Zoe that I realised I'd barely interacted with her all afternoon though, which was a bit of a shame. I'll be happy once this sausage sizzle on Saturday is done, and the pace of life should slow down a bit more again.

It was a bit of a struggle to force myself to go to yoga class tonight, but I'm glad I did, because it was a really great class.

[life] Day 257: Kindergarten, meetings, and scrounging for sausages

Zoe's Kindergarten has scored the fundraising sausage sizzle rights to the local Councillor's next Movies in the Park event. Since I've been the chairperson of the PAG, which doesn't seem to actually involve much other than being cheerleader in chief and chairing monthly meetings, I thought I'd lend the fundraising committee a hand with the organising of this event. The fundraising committee have worked their butts off this year.

After I dropped Zoe at Kindergarten, I went to the home of one of the committee members and met with the committee to discuss logistics for the upcoming sausage sizzle. I'd previously volunteered to try and get a donation of sausages from a local butcher, but hadn't had a chance to do that yet.

After taking a bus into the city and back for a lunch meeting, and picking up Zoe from Kindergarten afterwards, we set out on a tour of all the local supermarkets and butchers, with an official letter in hand from the Kindergarten.

We were unlucky on all fronts, but Zoe did score a free cheerio at one butcher, so she was pretty happy. Driving all over the place ate up most of the afternoon.

Anshu dropped in after work not long after we got home, and then Sarah arrived not long after that to pick up Zoe.

Linux and Windows 8 Dual Boot : A Brief How-To

As regular readers would know, I make some effort to avoid using closed-source and proprietary software. This includes that popular operating system common on laptops and servers, MS-Windows. However there are a small number of reasons why this O.S. is required, including life-saving medical equipment hardware which, for some unfathomable reason, has been written to only interface with proprietary operating systems. Open source developers?

read more

MariaDB Foundation board

There seems to be a bit of an exodus from the MariaDB Foundation board recently… I'm not sure exactly what to make of it all, but the current members according to https://mariadb.org/en/foundation/ are:

  • Rasmus Johansson (chair)
  • Michael “Monty” Widenius
  • Jeremy Zawodny
  • Sergei Golubchik

Jeremy Zawodny is the only non-MariaDB Corp member.

Recently, Jeremy Cole asked some people about their membership:

I’m a little worried for the project, the idea of a foundation around it and for people I count as friends who work on MariaDB.

MySQL 5.7.5 on POWER – thread priority

Good news everyone!

MySQL 5.7.5 is out with a bunch more patches for running well on POWER in the tree. I haven't yet gone and tried it all out, but since I'm me, I looked at the bugs database and git/bzr history first.

On Intel CPUs, when you’re spinning on a spin lock, you’re meant to execute the PAUSE CPU instruction. This tells the CPU that other execution threads in the same core should be given priority as you are currently not doing anything productive. Without this, you’re likely going to hurt on hyperthreaded CPUs.

In MySQL, there are custom spinlocks in order to do interesting adaptive mutex things to attempt to squeeze the most performance possible out of modern systems.

One of the (not 100% ready, but close) bugs with patches I submitted against MySQL 5.7 was for using the equivalent of the PAUSE instruction for POWER CPUs. On POWER, we're a bit different; you can actually set priorities of threads (which may matter more, as POWER8 CPUs can be in SMT8 mode – where there are *eight* executing threads per core).

So, the good news is that in MySQL 5.7.5, the magic instructions for setting thread priority are in! This should mean great things for performance on POWER systems with any of the SMT modes enabled.
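If you're playing along at home, the SMT mode on a POWER box can be inspected and changed with ppc64_cpu from powerpc-utils (a quick sketch; changing it needs root):

ppc64_cpu --smt          # report the current mode, e.g. SMT=8
ppc64_cpu --smt=4        # switch the machine to SMT4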

The next interesting part of this is how it interacts with other KVM guests on a system. At least on POWER (and on x86 as well, although I won’t go into details here) there’s a hypervisor call that a guest can make saying “hey, I’m spinning here, perhaps you want to make sure other vcpus execute so that at some point I can continue”. On POWER, this is the H_CONFER hcall, where you can basically do a directed yield to another vcpu (the one that holds the lock you’re trying to get is a good idea).

Generally though, it’s only the guest kernel that does this, not userspace. You can see the H_CONFER call in __spin_yield(arch_spinlock_t*) and __rw_yield(arch_rwlock_t*) in arch/powerpc/lib/locks.c in the kernel.

It would be interesting to see what extra we could get out of a system running multiple guests with MySQL servers if InnoDB/MySQL could properly yield to the right vcpu (well, thread I guess).

October 13, 2014

One week of Nova Kilo specifications

It's been one week of specifications for Nova in Kilo. What are we seeing proposed so far? Here's a summary...

API

Administrative

  • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705.

Containers Service

Hypervisor: FreeBSD

  • Implement support for FreeBSD networking in nova-network: review 127827.

Hypervisor: Hyper-V

  • Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190.

Hypervisor: VMware

  • Add ephemeral disk support to the VMware driver: review 126527 (spec approved).
  • Add support for the HTML5 console: review 127283.
  • Allow Nova to access a VMware image store over NFS: review 126866.
  • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (spec approved).
  • Support the OVA image format: review 127054.

Hypervisor: libvirt

  • Add a new linuxbridge VIF type, macvtap: review 117465.
  • Add support for SMBFS as an image storage backend: review 103203.
  • Convert to using built-in libvirt disk copy mechanisms for cold migrations on non-shared storage: review 126979.
  • Support libvirt storage pools: review 126978.
  • Support quiescing filesystems during snapshot: review 126966.

Instance features

  • Allow direct access to LVM volumes if supported by Cinder: review 127318.

Internal

  • Move flavor data out of the system_metadata table in the SQL database: review 126620.

Internationalization

Scheduler

  • Add an IOPS weigher: review 127123 (spec approved).
  • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530.
  • Create an object model to represent a request to boot an instance: review 127610.
  • Decouple services and compute nodes in the SQL database: review 126895.
  • Implement resource objects in the resource tracker: review 127609.
  • Move select_destinations() to using a request object: review 127612.

Scheduling

  • Add instance count on the hypervisor as a weight: review 127871.

Security

  • Provide a reference implementation for console proxies that uses TLS: review 126958.
  • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.

Tags for this post: openstack kilo blueprints spec

Related posts: Compute Kilo specs are open; Specs for Kilo; Blueprints to land in Nova during Juno; On layers; My candidacy for Kilo Compute PTL; Juno nova mid-cycle meetup summary: nova-network to Neutron migration




LUV Beginners October Meeting: Command Line

Oct 18 2014 12:30
Oct 18 2014 16:30
Location: RMIT Building 91, 110 Victoria Street, Carlton South

Wen Lin will be introducing newcomers to Linux to the command line.

Wen Lin is the long-serving treasurer for Linux Users of Victoria and has provided several presentations in the past on Libre/OpenOffice.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Compute Kilo specs are open

From my email last week on the topic:
I am pleased to announce that the specs process for nova in kilo is
now open. There are some tweaks to the previous process, so please
read this entire email before uploading your spec!

Blueprints approved in Juno
===========================

For specs approved in Juno, there is a fast track approval process for
Kilo. The steps to get your spec re-approved are:

 - Copy your spec from the specs/juno/approved directory to the
specs/kilo/approved directory. Note that if we declared your spec to
be a "partial" implementation in Juno, it might be in the implemented
directory. This was rare however.
 - Update the spec to match the new template
 - Commit, with the "Previously-approved: juno" commit message tag
 - Upload using git review as normal

Reviewers will still do a full review of the spec, we are not offering
a rubber stamp of previously approved specs. However, we are requiring
only one +2 to merge these previously approved specs, so the process
should be a lot faster.

A note for core reviewers here -- please include a short note on why
you're doing a single +2 approval on the spec so future generations
remember why.

Trivial blueprints
==================

We are not requiring specs for trivial blueprints in Kilo. Instead,
create a blueprint in Launchpad
at https://blueprints.launchpad.net/nova/+addspec and target the
specification to Kilo. New, targeted, unapproved specs will be
reviewed in weekly nova meetings. If it is agreed they are indeed
trivial in the meeting, they will be approved.

Other proposals
===============

For other proposals, the process is the same as Juno... Propose a spec
review against the specs/kilo/approved directory and we'll review it
from there.
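Mechanically, the fast-track path for a previously approved spec is only a few commands (the spec filename here is invented):

cp specs/juno/approved/my-feature.rst specs/kilo/approved/
# update the copied spec to match the new template, then:
git add specs/kilo/approved/my-feature.rst
git commit -m "Re-propose my-feature for Kilo" -m "Previously-approved: juno"
git review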




After a week I'm seeing something interesting. In Juno the specs process was new, and we saw a pause in the development cycle while people actually wrote down their designs before sending the code. This time around people know what to expect, and there are left over specs from Juno lying around. We're therefore seeing specs approved much faster than in Juno. This should reduce the effect of the "pipeline flush" that we saw in Juno.



So far we have five approved specs after only a week.



Tags for this post: openstack kilo blueprints spec

Related posts: One week of Nova Kilo specifications; Specs for Kilo; Blueprints to land in Nova during Juno; On layers; My candidacy for Kilo Compute PTL; Juno nova mid-cycle meetup summary: nova-network to Neutron migration




October 12, 2014

Twitter posts: 2014-10-06 to 2014-10-12

[life] Day 254: TumbleTastics and opportunistic play dates

The problem with being too busy to blog on the day is that by the time I get around to it, I've forgotten half the details...

I can't remember what we did in the morning before TumbleTastics. I think Sarah dropped Zoe around a bit late because she wasn't going to work.

We popped down to the post office to collect some mail, and then the supermarket. After that, Zoe was a bit tired and grumpy (apparently she'd woken up early) and didn't really want to go to TumbleTastics, but after some morning tea, perked up and reconsidered.

We scootered to TumbleTastics, and discovered that one of the boys from Kindergarten, Lachlan, is in her class. She had another really good class, and I invited Lachie and his Mum and little sister over for lunch afterwards.

Lachie and Zoe had a great time playing together, before and after lunch. I think that was the extent of what happened on Friday that was memorable.

October 11, 2014

Tyan OpenPower

Good news everyone! Tyan has announced the availability of their first OpenPOWER system! They call this a Customer Reference System, which means it’s an excellent machine to start poking at OpenPower and POWER8 (or deploying applications on).

Because it’s an OpenPower machine, it runs the open source Open Power firmware (all up on github) and will happily run Linux (feel free to port your other operating system kernels). I’ll be writing more on the OpenPower firmware soon as, well, technical details are fun!

Ubuntu 14.10 is listed as recommended, as not only have they been building for POWER8, but they have spent some time ensuring things work fairly well out-of-the-box (both as a KVM guest and running native on the bare metal). Or, you can always just boot whatever the mainline kernel is at – build for the POWERNV (POWER non-virtualized) platform (be sure to include all the required drivers) and have fun!

October 10, 2014

Single emergency mode with systemd

Just to remind myself: add systemd.unit=emergency.target to the kernel line, or if that fails, try init=/sbin/sh and remove both the quiet and rhgb options.

Afterwards, exit or:

exec /sbin/init

You can also enable debug mode to help investigate problems, with systemd.log_level=debug
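Strung together on a GRUB kernel line it looks something like this (the kernel image and root device are placeholders):

linux /vmlinuz-3.17.0 root=/dev/sda2 ro systemd.unit=emergency.target systemd.log_level=debug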

You can get a console early on in the boot process by enabling debug-shell:

systemctl enable debug-shell.service

BlueHackers @ Open Source Developers’ Conference 2014

This year, OSDC’s first afternoon plenary will be a specific BlueHackers related topic: Stress and Anxiety, presented by Neville Starick – an experienced Brisbane-based counsellor.

We’ll also have our traditional BlueHackers “BoF” (birds-of-a-feather) session in the evening, usually featuring some general information, as well as the opportunity for safe lightning talks. Some people talk, some people don’t. That’s all fine.

The Open Source Developers’ Conference 2014 is being held at the beautiful Griffith University Gold Coast Campus, 4-7 November. It features a fine program, and if you use this magic link you get a special ticket price, but the regular registration is only around $300 anyhow, $180 for students! This includes all lunches and the conference dinner. Fabulous value.

[life] Day 253: Bike riding practice and swim class

It's been a while since we've done any bike riding practice, and as the day was a bit cooler and more overcast, I thought we could squeeze in a little bit first thing.

We went to the Minnippi Parklands again, and did a little bit of practice. Zoe actually (really briefly) rode her bike for the first time successfully, which was really exciting. We'll have to have another go next week.

After that, we dropped past the supermarket to get a few things for lunch, and then had lunch.

I've managed to forget what happened after lunch, so it can't have been particularly memorable. I miscalculated when swim class was by 30 minutes, and thought we were in more of a rush than we really were. We biked to the Post Office to mail off my US taxes, but I thought we didn't have time to do all the extra paperwork for the mailing, so I paid for the postage, but took all of the stuff with me to swim class to finish filling out. On the way there, I realised I had an extra 30 minutes up our sleeves, which was nice. It gave Zoe some time to have some fruit before class.

Sarah picked up Zoe from swim class, and I biked home to ditch the bike and meet Anshu at the movie theatre to watch Gone Girl. Hoo boy. Good movie.

[life] Day 252: A poorly executed spontaneous outing to Wet and Wild

The forecast maximum was supposed to be 32°C, so I rather spontaneously decided to go for our first visit to Wet and Wild for the season.

I whipped up some lunch, and got our swim gear together, and after a quick phone call with my REIQ tutor to discuss some questions I had about the current unit, I grabbed our gear and we headed off.

I thought that because school had gone back this week, Wet and Wild would be nice and quiet, but boy, did I miscalculate. I think the place was the busiest I've ever seen it, even on a weekend visit. It would appear that a lot of people have decided to just take an extra week off school with the Labour Day public holiday on Monday.

Once we arrived, I discovered that I'd left my hat on my bed at home, so I had to buy a hat from the gift shop while I was queuing up to pay for a locker. Then we went to get changed, and I discovered I'd also left my swimming stuff on my bed as well, so I had to line up again to purchase a pair of board shorts and a rashie. I was pretty annoyed with myself at all the unnecessary added expense, because I'd left in too much of a hurry.

After we'd got ourselves appropriately attired, we had a fantastic day. Zoe's now tall enough for a few of the "big kid" slides, so we went on them together as well. The first one just involved us sitting in a big inflatable tube going down a wide, curving slide, so that was pretty easy. The second one I wasn't sure about, because we had to go separately. I went first, and waited at the bottom to catch her. As I went down, I was a bit worried it was going to be too much for Zoe, but she came down fine, and once she surfaced, her first word was "Again!", so it was all good.

She napped on the way home, so I swung by the Valley to check my post office box, and then we got home and Sarah picked her up. In spite of the extra expense, it was a really good day out.

October 08, 2014

Free and Open Source Software Business Applications

Free and open source software is based on key principles that are firmly grounded in instrumental advantages and ethical commitments. It is heavily used in infrastructure and in new hardware forms; it is less used in user-end applications and traditional forms (laptops and desktops); however, there is a good range of mature and feature-rich business applications available. The development of increasingly ubiquitous and more powerful computing is particularly well-suited for FOSS, and workplaces that make use of such software will gain financial and positional advantages.

Presentation to Young Professionals CPA, October 8, 2014

read more

Lock In







ISBN: 0765375869

LibraryThing

I know I like Scalzi stuff, but each series is so different that I like them all in different ways. I don't think he's written a murder mystery before, and this book was just as good as Old Man's War, which is a pretty high bar. This book revolves around a murder being investigated by someone who can only interact with the real world via personal androids. It's different from anything else I've seen, and a unique idea is pretty rare these days.



Highly recommended.



Tags for this post: book john_scalzi robot murder mystery

Related posts: Isaac Asimov's Robot Short Stories; Prelude To Foundation ; Isaac Asimov's Foundation Series; Caves of Steel; Robots and Empire ; A Talent for War

Earlybird registrations are now open!

The moment that you have been waiting for has finally arrived! It is with many hugs of sincere gratitude to the team that we can announce that Earlybird Registration for LCA2015 is now open.

Now is the time to choose - are you a Professional, a Hobbyist or a Student attending LCA2015? Will you be one of our fantastic army of volunteers? Go to lca2015.linux.org.au to register and buy your Earlybird ticket or register as a volunteer.

All of the information that you need is there - what you receive for your ticket price, accommodation options, THE Penguin Dinner, as well as Partners Program and creche options for those of you who are bringing your family with you. You can also register as a volunteer right now, and begin to get involved with our wonderful conference.

There have been months of anticipation, and several sleepless nights, but we are now at a truly exciting stage of the conference organising process - registration!

We look forward to seeing you all in January 2015 in Auckland.

The LCA2015 team

kexec-tools 2.0.8 Released

I have released version 2.0.8 of kexec-tools, the user-space portion of kexec, a soft-reboot and crash-dump facility of Linux.

This is a feature release.

The code is available as a tarball here and in git here.

More information is available in the announcement email.

kexec-tools 2.0.7 Released

I have released version 2.0.7 of kexec-tools, the user-space portion of kexec, a soft-reboot and crash-dump facility of Linux.

This is a feature release.

The code is available as a tarball here and in git here.

More information is available in the announcement email.

October 07, 2014

Quick MySQL 5.7.5 thoughts

It was great to see the recent announcement of MySQL 5.7.5 over at the MySQL Server Team blog. I’m looking forward to throwing this release at some of the POWER8 systems we have for a couple of really good reasons: 1) Does it work better than previous MySQL 5.7 releases “out of the box” on POWER? 2) What do the scalability improvements in 5.7.5 mean for peak QPS on POWER (and can I set a new record?).

Looking through the list of changes, I’m (casually not) surprised as to the number of features and the amount of work that echoes what we were working on in Drizzle a few years ago.

A closer look at the source for 5.7.5 may also prove enlightening, I wonder how the MySQL team is coping with a lot of the code rot legacy and the absolutely atrocious internal APIs they inherited…

[life] Day 251: Kindergarten, some more training and other general running around

Yesterday was the first day of Term 4. I can't believe we're at Term 4 already. This year has flown by.

I had a letter from the PAG to all of the Kindergarten parents to put into the notice pockets at Kindergarten first up, so I drove to the Kindergarten not long after opening time, and quickly put them in all the pockets.

Then I headed over to Beaurepaires for a free tyre checkup.

After that, I headed over to my Group Leader's house for some more practical training. I'm feeling pretty confident about my ability to conduct a Thermomix demonstration now, especially after having done my first "real" one on Sunday night.

After that, it was time to pick up Zoe from Kindergarten. It was wonderful to see her after a week away. She wanted to go to Megan's house for a play date, but Megan had tennis, so we went and did the grocery shopping first.

After the grocery shopping, we popped around to Megan's for a bit, and then went home.

I made dinner, and Zoe seemed pretty tired, so I got her to bed a little bit early.

Post Receive Git Hook to Push to Github

I self-host my own git repositories. I also use github as a backup for many of them. I have a use case for a number of them where the data changes can be quite large, and pushing to both my own and github's remote services doubles my bandwidth usage in my mostly bandwidth-constrained environments.

To get around those constraints, I wanted to only push to my git repository and have that service then push to github. This is how I went about it, courtesy of Matt Palmer.

Assumptions:

  • You have your own git server
  • You have a github account
  • You're fairly familiar with git

Authentication

It's likely you have not used your remote server to connect to github. To make sure everything happens smoothly, you need to:

  • Add the SSH key for your user account on your server to the authorised keys on your github account
  • SSH just once from your server to github, to accept github's host key.
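Concretely, that's something like this from the server (the key path is assumed to be the default):

cat ~/.ssh/id_rsa.pub    # paste this into your github account's SSH keys page
ssh -T git@github.com    # accept the host key; github replies with a short greeting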

Add a Remote for github

In the bare git repository on your server, you need to add the remote configuration. On a Debian server using gitweb, this file would be located at /var/cache/git/MYREPO/config. Add the below lines to it:

[remote "github"]
    url = git@github.com:MYACCOUNT/MYREPO.git
    fetch = +refs/heads/*:refs/remotes/github/*
    # note: autopush is not a built-in git option; it's a custom key
    # that the post-receive hook below looks for
    autopush = true

Add a post-receive Hook

Now we need to create a post-receive hook to process the push to github. Going with the previous example, edit /var/cache/git/MYREPO/hooks/post-receive

#!/bin/bash

# For each configured remote, push the newly received refs on to it
# if (and only if) it has autopush=true set, like "github" above.
for remote in $(git remote); do
        if [ "$(git config "remote.${remote}.autopush")" = "true" ]; then
                git push "$remote"
        fi
done
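Remember that the hook needs to be executable, and a quick end-to-end test from a working clone doesn't hurt (paths as per the example above):

chmod +x /var/cache/git/MYREPO/hooks/post-receive
# then, from a clone:
git commit --allow-empty -m "test autopush"
git push origin master    # the push output should show the hook pushing on to github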

Happy automated pushing to github.

October 06, 2014

Things that are not news

The following are not news:

  • Human has genitals (which may/may not have been exposed)
  • Absolutely anything about The Bachelor
  • Anything any celebrity is wearing, anyone they’re dating or if they are currently wearing underwear.
  • any list of "top 10" things.
  • TEENAGERS DO THINGS THAT TEENAGERS DO!

(feel free to extend the list)

Miniconf Call for Presentations - Linux.conf.au 2015 Systems Administration Miniconf

As part of the linux.conf.au conference in Auckland, New Zealand in January 2015, we will be holding a one-day mini-conference oriented towards Linux Systems Administration.

The organisers of the Systems Administration Miniconf would like to invite proposals for presentations to be delivered at the Miniconf. Please forward this CFP to your colleagues, social networks, and other relevant mailing lists.

This is our 9th year at Linux.conf.au. To see presentations from our previous years, see the miniconf website: http://sysadmin.miniconf.org/.

Presentation Topics

Topics for presentations could include (but are not limited to):

Systems Administration Best Practice, Infrastructure as a Service (IAAS), Platform as a Service (PAAS), Docker, Containerisation, Software as a Service (SAAS), Virtualisation, "Cloud" Computing, Service Migration, Dealing with Extranets, Coping with the shortage of IPv4 addresses, Software Defined Networking (SDN), DevOps, Configuration Management, Bootstrapping systems, Lifecycle Management, Monitoring, Dealing with BYOD, Backups in a virtual/distributed world, Security in a world where you cannot even trust your shell, Troubleshooting, Log Management, Buying Decisions, Identity Management, Multi-Factor Authentication, Web and Email management, Keeping legacy systems functioning, and other War stories from the Real World.

We strongly welcome topics on best practice, new developments in systems administration and cutting edge techniques to better manage Linux environments both virtual and physical.

Presentations should be of a technical nature and speakers should assume that members of the audience have at least a couple of years' experience in Unix/Linux administration.

Format of Presentations

We are now seeking proposals for presentations at the mini-conference.

We have openings for:

  • 45 minute double-length presentations
  • 20 minute full presentations
  • 10-20 minute "Life in the Real World" presentations
  • 5-10 minute "lightning talks"

Please note, due to the single day available (and whole-LCA keynote before morning tea), we expect the majority of available timeslots to be 20 minutes long or less.

The 10-20 minute "Life in the Real World" presentations are intended for people to talk about their sites or real world projects/problems. Information could include: Servers, talks, tools, people and roles, experiences, software, operating systems, monitoring, alerting, provisioning etc. The intent is to give attendees a picture of what is happening in the "real world" and some understanding of how other sites work, as well as offer you a chance to get suggestions on other tools and practices that might help your site. Discussion of "lessons learned" through really trying to use some of these things is especially welcomed.

Submitting talks

Please note that in order to give a presentation or attend the miniconf you must be registered (and paid up) for the main linux.conf.au conference. Presenting at the Miniconf does not entitle you to discounted or free registration at the main conference nor priority with registration. Unfortunately the Miniconf has no budget for sponsorship of speakers.

Submissions should be made via: http://sysadmin.miniconf.org/proposal15.html

Questions should be sent to: lca2015 at sysadmin.miniconf.org

Dates and Deadlines

To encourage early submissions, priority (both of inclusion and scheduling) will be given to presentations submitted before the 19th of October 2014.

  • 2014-10-19 - Deadline for early submissions
  • 2014-10-26 - Early submissions confirmation
  • 2014-11-16 - Deadline for submissions
  • 2014-11-30 - Confirmation of all presentations
  • 2015-01-13 - Start of Miniconf and 2nd day of linux.conf.au 2015

Contact and Questions

Please see our website for more information on the miniconf, past presentations and presenting at it. If you have any questions please feel free to email the organisers at: lca2015 at sysadmin.miniconf.org

Ewen McNeill

LCA2015 Sysadmin Miniconf Convener

MariaDB 10.0 on POWER

Good news for those wanting to run MariaDB on POWER systems: the latest 10.0 bzr tree (as of a couple of weeks ago) builds and runs well!

I recently pulled the latest MariaDB 10.0 from BZR and built it on a POWER8 system in the lab to run some quick tests. The MariaDB team has done some work on getting MariaDB to run on POWER recently, a bunch of which is based off my work on MySQL on POWER.
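If you want to repeat the experiment, the build is the usual cmake affair (the branch location is from memory, so treat it as an assumption):

bzr branch lp:maria/10.0 maria-10.0
cd maria-10.0
cmake . && make -j"$(nproc)"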

There’s obviously still some work in progress going on, but my initial results show performance within around 10% of MySQL, so with a bit of work we will hopefully see MariaDB reach performance parity.

One interesting find was that the code accounting for thread memory usage uses a single atomic variable: this does not scale, and it ends up showing up on profiles.

I'll comment more on the code in a future post, but it looks like we will have MariaDB functional on POWER in an upcoming release.

MariaDB & Trademarks, and advice for your project

I want to emphasize this for those who have not spent time near trademarks: trademarks are trouble and another one of those things where no matter what, the lawyers always win. If you are starting a company or an open source project, you are going to have to spend a whole bunch of time with lawyers on trademarks or you are going to get properly, properly screwed.

MySQL AB always held the trademark for MySQL. There’s this strange thing with trademarks and free software, where while you can easily say “use and modify this code however you want” and retain copyright on it (for, say, selling your own version of it), this does not translate too well to trademarks as there’s a whole “if you don’t defend it, you lose it” thing.

The law is, in effect, telling you that at some point you have to be an arsehole to not lose your trademark. (You can be various degrees of arsehole about it when you have to, and whenever you do, you should assume that people are acting in good faith and just have not spent the last 40,000 years of their life talking to trademark lawyers like you have.) Basically, you get to spend time telling people that they have to rename their product from “MySQL Headbut” to “Headbut for MySQL” and that this is, in fact, a really important difference.

You also, at some point, get to spend a lot of time talking about when the modifications made by a Linux distribution to package your software constitute sufficient changes that it shouldn’t be using your trademark (basically so that you’re never stuck if some arse comes along, forks it, makes it awful and keeps using your name, to the detriment of your project and business).

If you’re wondering why Firefox isn’t called Firefox in Debian, you can read the Mozilla trademark policy and probably some giant thread on debian-legal I won’t point to.

Of course, there's the MySQL trademark policy, and when I was at Percona, I spent some non-trivial amount of time attempting to ensure we had a trademark policy that would work from a legal angle, a corporate angle, and a get-our-software-into-linux-distros-happily angle.

So, back in 2010, Monty started talking about a draft MariaDB trademark policy (see also, Ubuntu trademark policy, WordPress trademark policy). If you are aiming to create a development community around an open source project, this is something you need to get right. There is a big difference between contributing to a corporate open source product and an open source project – both for individuals and corporations. If you are going to spend some of your spare time contributing to something, the motivation goes down when somebody else is going to directly profit off it (corporate project) versus a community of contributors and companies who will all profit off it (open source project). The most successful hybrid of these two is likely Ubuntu, and I am struggling to think of another (maybe Fedora?).

Linux is an open source project, Red Hat Enterprise Linux is an open source product, and in case it wasn't obvious when OpenSolaris was no longer Open, OpenSolaris was an open source product (and some open source projects have sprung up around the code base, which is great to see!). When a corporation controls the destiny of the name and the entire source code and project infrastructure – it's a product of that corporation, it's not a community around a project.

From the start, it seemed that one of the purposes of MariaDB was to create a developer community around a database server that was compatible with MySQL, and eventually, to replace it. MySQL AB was not very good at having an external developer community; it was very much an open source product and not an open source project (one of the downsides to hiring just about anyone who ever submitted a patch). Things struggled further at Sun and (I think) have actually gotten better for MySQL at Oracle – not perfect, I could pick holes in it all day if I wanted, but certainly better.

When we were doing Drizzle, we were really careful about making sure there was a development community. Ultimately, with Drizzle we made a different fatal error, and one that we knew had happened to another open source project and nearly killed it: all the key developers went to work for a single company. Looking back, this is easily my biggest professional regret and one day I’ll talk about it more.

Brian Aker observed (way back in 2010) that MariaDB was, essentially, just Monty Program. In 2013, I did my own analysis on the source tree of MariaDB 5.5.31 and MariaDB 10.0.3-ish to see if indeed there was a development community (tl;dr; there wasn't, and I had the numbers to prove it). If you look back at the idea of the Open Database Alliance and the MariaDB Foundation, actually, I'm just going to quote Henrik here from his blog post about leaving MariaDB/Monty Program:

When I joined the company over a year ago I was immediately involved in drafting a project plan for the Open Database Alliance and its relation to MariaDB. We wanted to imitate the model of the Linux Foundation and Linux project, where the MariaDB project would be hosted by a non-profit organization where multiple vendors would collaborate and contribute. We wanted MariaDB to be a true community project, like most successful open source projects are – such as all other parts of the LAMP stack.

….

The reality today, confirmed to me during last week, is that:

Those in charge at Monty Program have decided to keep ownership of the MariaDB trademark, logo and mariadb.org domain, since this will make the company more valuable to investors and eventually to potential buyers.

Now, with Monty Program being sold to/merged into (I’m really not sure) SkySQL, it was SkySQL who had those things. So instead of having Monty Program being (at least in theory) one of the companies working on MariaDB and following the Hacker Business Model, you now have a single corporation with all the developers, all of the trademarks, that is, essentially a startup with VC looking to be valuable to potential buyers (whatever their motives).

Again, I’m going to just quote Henrik on the us-vs-them on community here:

Some may already have observed that the 5.2 release was not announced at all on mariadb.org, rather on the Monty Program blog. It is even intact with the “us vs them” attitude also MySQL AB had of its community, where the company is one entity and “outside community contributors” is another. This is repeated in other communication, such as the recent Recently in MariaDB newsletter.

This was, again, back in 2010.

More recently, Jeremy Cole, someone who has pumped a fair bit of personal and professional effort into MySQL and MariaDB over the past (many) years, asked what seemed to be a really simple question on the maria-discuss mailing list. Basically, “What’s going on with the MariaDB trademark? Isn’t this something that should be under the MariaDB foundation?”

The subsequent email thread was as confusing as ever and should be held up as a perfect example about what not to do. Some of us had by now, for years, smelt something fishy going on around the talk of a community project versus the reality. At the time (October 2013), Rasmus Johansson (VP of Engineering at SkySQL and Board Member of MariaDB foundation) said this:

The MariaDB Foundation and SkySQL are currently working on the trademark issue to come up with a solution on what rights to the trademark each entity should have. Expect to hear more about this in a fairly near future.


MariaDB has from its beginning been a very community friendly project and much of the success of MariaDB relies in that fact. SkySQL of course respects that.

(and at the same time, there were pages that were “Copyright MariaDB” which, as it was pointed out, was not an actual entity… so somebody just wasn’t paying attention). Also, just to make things even less clear about where SkySQL the corporation, Monty Program the corporation and the MariaDB Foundation all fit together, Mark Callaghan noticed this text up on mariadb.com:

The MariaDB Foundation also holds the trademark of the MariaDB server and owns mariadb.org. This ensures that the official MariaDB development tree <https://code.launchpad.net/maria> will always be open for the MariaDB developer community.

So…. there’s no actual clarity here. I can imagine attempting to get involved with MariaDB inside a corporation and spending literally weeks talking to a legal department – which thrills significantly less than standing in lines at security in an airport does.

So, if you started off as yay! MariaDB is going to be a developer community around an open source project that’s all about participation, you may have even gotten code into MariaDB at various times… and then started to notice a bit of a shift… there may have been some intent to make that happen, to correct what some saw as some of the failings of MySQL, but the reality has shown something different.

Most recently, SkySQL has renamed themselves to MariaDB. Good luck to anyone who isn’t directly involved with the legal processes around all this differentiating between MariaDB the project, MariaDB Foundation and MariaDB the company and who owns what. Urgh. This is, in no way, like the Linux Foundation and Linux.

Personally, I prefer to spend my personal time contributing to open source projects rather than products. I have spent the vast majority of my professional life closer to the corporate side of open source, some of which you could better describe as closer to the open source product end of the spectrum. I think it is completely and totally valid to produce an open source product. Making successful companies, products and a butt-ton of money from open source software is an absolutely awesome thing to do and I, personally, have benefited greatly from it.

MariaDB is a corporate open source product. It is no different to Oracle MySQL in that way. Oracle has been up front and honest about it the entire time MySQL has been part of Oracle, everybody knew where they stood (even if you sometimes didn’t like it). The whole MariaDB/Monty Program/SkySQL/MariaDB Foundation/Open Database Alliance/MariaDB Corporation thing has left me with a really bitter taste in my mouth – where the opportunity to create a foundation around a true community project with successful business based on it has been completely squandered and mismanaged.

I’d much rather deal with those who are honest and true about their intentions than those who aren’t.

My guess is that this factored heavily into Henrik’s decision to leave in 2010 and (more recently) Simon Phipps’s decision to leave in August of this year. These are two people who I highly respect, never have enough time to hang out with, and would completely trust to do the right thing and be honest when running anything in relation to free and open source software.

Maybe WebScaleSQL will succeed here – it’s a community with a purpose and several corporate contributors. A branch rather than a fork may be the best way to do this (Percona is rather successful with their branch too).

October 05, 2014

Twitter posts: 2014-09-29 to 2014-10-05

October 04, 2014

CanberraUAV Outback Challenge 2014 Debrief

We've now had a few days to get back into a normal routine after the excitement of the Outback Challenge last week, so I thought this was a good time to write up a report on how things went for the team I was part of, CanberraUAV. There have been plenty of posts already about the competition results, so I will concentrate on the technical details of our OBC entry, what went right and what went badly wrong.



For comparison, you might like to have a look at the debrief I wrote two years ago for the 2012 OBC. In 2012 there were far fewer teams, and nobody won the grand prize, although CanberraUAV went quite close. This year there were more than 3 times as many teams and 4 teams met the criteria for completing the challenge, with the winner coming down to points. That means the Outback Challenge is well and truly "done", which reflects the huge advances in amateur UAV technology over the last few years.



The drama for CanberraUAV really started several weeks before the challenge. Our primary competition aircraft was the "Bushmaster", a custom built tail dragger with 50cc petrol motor designed by our chief pilot, Jack Pittar. Jack had designed the aircraft to have both plenty of room inside for whatever payload the rest of the team came up with, and to easily fit in the back of a station wagon. To achieve this he had designed it with a novel folding fuselage. Jack (along with several other team members) had put hundreds of hours into building the Bushmaster and had done a great job. It was beautifully laid out inside, and really was a purpose built Outback Challenge aircraft.



Just a couple of days after our D3 deliverable was sent in we had an unfortunate accident. A member of our local flying club was flying his CAP232 aerobatic plane at the same time we were doing a test mission, and he misjudged a loop. The CAP232 loop went right through the rear of the Bushmaster fuselage, slicing it off. The Bushmaster hit the ground at 180 km/h, with predictable results.



Jack and Greg set to work on a huge effort to build a new Bushmaster, but we didn't manage to get it done in time. Luckily the OBC organisers allowed us to switch to our backup aircraft, a VQ Porter 2.7m RTF with some customisations. We had included the Porter in our D2 deliverable just in case it was needed, and already had plenty of autonomous hours up on it as we had been using it as a test aircraft for all our systems. It was the same basic style as the Bushmaster (a petrol powered tail dragger), but was a bit more cramped inside for fitting our equipment and used a bit smaller engine (a DLE35).



Strategy



Our basic strategy this year was the same as in 2012. We would search at a relatively low altitude (100m AGL this year, compared to 90m AGL in 2012), with an interleaved "mow the lawn" pattern. This year we set up the search with 60% overlap compared with 50% in 2012, and we had a longer turn around area at the end of each search leg to ensure the aircraft got back on track fully before it entered the search area. With that search pattern and a target airspeed of 28m/s we expected to cover the whole search area in 20 minutes, then we would cover it again in the reverse direction (with a 50m offset) if we didn't find Joe on the first pass.
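To give a feel for the arithmetic behind that sort of estimate, here is a back-of-envelope sketch in Python. The 60% overlap and 28m/s airspeed are the numbers above; the camera swath and search area dimensions are made-up placeholders (not our actual mission values), chosen only to land in the same ballpark as the 20 minutes quoted.

camera_swath_m = 200.0  # assumed across-track ground footprint at 100m AGL
overlap = 0.60          # fraction of swath shared with the neighbouring leg
airspeed_ms = 28.0

leg_spacing_m = camera_swath_m * (1.0 - overlap)   # 80m between legs
area_width_m, area_length_m = 800.0, 4000.0        # hypothetical search area

num_legs = area_width_m / leg_spacing_m
total_distance_m = num_legs * area_length_m
print("%.0f legs, one pass takes %.1f minutes"
      % (num_legs, total_distance_m / (airspeed_ms * 60.0)))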



As with 2012 we used an on-board computer to autonomously find Joe. This year we used an Odroid-XU, which is a quad core system running at 1.6GHz. That gave us a lot more CPU power than in 2012 (when we used a pandaboard), which allowed us to use more CPU intensive image recognition code. We did the first level histogram scanning at the full camera resolution this year (1280x960), whereas in 2012 we had run the first level scan at 640x480 to save CPU. That is why we were happy to fly a bit higher this year.



While the basic approach to image recognition was the same, we had improved the details of the implementation a lot in the last two years, with hundreds of small changes to the image scoring, communications and user interface. Using our 2012 image data as a guide (along with numerous test flights at our local flying field) we had refined the cuav code to provide much better object discrimination, and also to cope better with communication failures. We were pretty confident it could find Joe very reliably.
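For the curious, here is a toy Python sketch of what a first-level histogram scan can look like. This is a simplification for illustration only; the real cuav scanner works on colour imagery with much more carefully tuned scoring.

import numpy as np

def interesting_blocks(image, block=32, threshold=0.5):
    # Score each block by how much its grey-level histogram differs from
    # the histogram of the whole frame; unusual blocks (like a blue
    # boiler suit in a brown paddock) score highly.
    global_hist, _ = np.histogram(image, bins=16, range=(0, 255), density=True)
    hits = []
    height, width = image.shape
    for y in range(0, height - block + 1, block):
        for x in range(0, width - block + 1, block):
            blk = image[y:y+block, x:x+block]
            hist, _ = np.histogram(blk, bins=16, range=(0, 255), density=True)
            score = np.abs(hist - global_hist).sum()
            if score > threshold:
                hits.append((x, y, score))
    return hits

# scan one 1280x960 frame (random pixels here, just to exercise the code)
frame = np.random.randint(0, 256, (960, 1280)).astype(np.uint8)
print(len(interesting_blocks(frame)))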



The takeoff



When the running order was drawn from a hat we were the 2nd last team on the list, so we ended up flying on the Friday. We'd been up early each morning anyway in case the order was changed (which does happen sometimes), and we'd actually been called out to the airfield on the Thursday afternoon, but didn't end up flying then due to high wind.



Our time to takeoff finally came just before 8am Friday morning. As with 2012 we did an auto takeoff, using the new tail-dragger takeoff code added to APM:Plane just a couple of months before.



Unfortunately the takeoff did not go as planned. In fact, we were darn lucky the plane got into the air at all! As soon as Jack flicked the switch on the transmitter to put the plane into AUTO it started veering left on the runway, and nearly scraped a wing as it limped its way into the air. A couple of seconds later it came quite close to the tents where the OBC organisers and our GCS was located.



It did get off the ground though, and missed the tents while it climbed, finally switching to normal navigation to the 2nd waypoint once it got to an altitude of 40m. Jack had his finger on the switch on the transmitter which would have taken manual control and very nearly aborted the takeoff. This would have to go down as one of the worst takeoffs in OBC history.



So why did it go so badly wrong? My initial analysis when I looked at the logs later was that the wind had pushed the plane sideways. After examining the logs more carefully though I discovered that while the wind did play a part, the biggest issue was the compass. For some reason the compass offsets we had loaded were quite different from the ones we had been testing with in all our flights in Canberra. I still don't know why the offsets changed, although the fact that we had compass learning enabled almost certainly played a part. We'd done dozens of auto takeoffs in Canberra with no issues, with the plane tracking beautifully down the center line of the runway. To have it go so badly wrong for the flight that matters was a disappointment.



I've decided that the best way to fix the issue for future flights is to make the auto takeoff code completely independent of the compass. Normally the compass is needed to get an initial yaw for the aircraft while it is not moving (as our hobby-grade GPS sensors can't give yaw when not moving), and that initial yaw is used to set the ground heading for the takeoff. That means that with the current code, any compass error at the time you change into AUTO will directly impact the takeoff.



Because takeoff is quite a short process (usually 20s or less), we can take an alternative approach. The gyros won't drift much in 20s, so what we can do is just keep a constant gyro heading until the aircraft is moving fast enough to guarantee a good GPS ground track. At that point we can add whatever gyro based yaw error has built up to the GPS ground course and use that as our navigation heading for the rest of the takeoff. That should make us completely independent of compass for takeoff, which should solve the problem for everyone, rather than just fixing it for our aircraft. I'm working on a set of patches to implement this, and expect it to be in the next release.
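To make that idea concrete, here is a rough Python sketch of the logic. It is not the actual APM:Plane patch; the function name and the 5m/s ground speed threshold are illustrative assumptions.

GPS_SPEED_THRESHOLD = 5.0  # m/s: assumed speed for a trustworthy GPS ground track

def takeoff_heading(initial_yaw, gyro_yaw_now, gps_speed, gps_ground_course):
    # While rolling slowly, hold the yaw captured when AUTO was engaged;
    # the gyros drift very little over a ~20 second takeoff.
    if gps_speed < GPS_SPEED_THRESHOLD:
        return initial_yaw
    # Once moving fast enough, the GPS ground course is reliable. Any yaw
    # change the gyros have accumulated since the start is real turning,
    # so fold it into the GPS course and navigate on that from here on.
    yaw_error = gyro_yaw_now - initial_yaw
    return (gps_ground_course + yaw_error) % 360.0

print(takeoff_heading(85.0, 88.0, 7.5, 92.0))  # -> 95.0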



Note that once the initial takeoff is complete the compass plays almost no role in the flight of a fixed wing plane if you have the EKF enabled, unless you lose GPS lock. The EKF rejected our compass as being inconsistent, and happily got correct yaw from the fusion of GPS velocity and other sensors for the rest of the flight.



The search



After the takeoff things went much more smoothly. The plane headed off to the search area as planned, and tracked the mission extremely well. It is interesting to compare the navigation accuracy of this year's mission with the 2012 mission. In 2012 we were using a simple vector field navigation algorithm, whereas we now use the L1 navigation code. This year we also used Paul's EKF for attitude and position estimation, and the TECS controller for speed/height control. The differences are really remarkable. We were quite pleased with how our Mugin flew in 2012, but this year it was massively better. The tracking along each leg of the search was right down the line, despite the 15 knot winds.



Another big difference from 2012 is that we were using the new terrain following capability that we had added this year. In 2012 we used a python script to generate our waypoints and that script automatically added intermediate waypoints to follow the terrain of the search area. This year we just set all waypoints at 100 meters AGL and let the autopilot do its job. That made things a lot simpler and also resulted in better geo-referencing and search area coverage.



On our GCS imaging display we had 4 main windows up. One is a "live" view from the camera. That is set up to update only if there is plenty of spare bandwidth on our radio links, and is really just there to give us something to watch while the imaging code does its job, plus to give us some situational awareness of what the aircraft is tracking over.

The second window is the map, which shows the mission flight path, the geo-fence, two plane icons (AHRS position estimate and GPS position estimate) plus overlays of thumbnails from the image recognition system of any "interesting" objects the Odroid has found.



The 3rd window is the "Mosaic" window. That shows a grid of thumbnails from the image recognition system, and has menus and hot-keys to allow us to control the image recognition process and sort the thumbnails in various ways. We expected Joe to have an image score of over 1000, but we set the threshold for thumbnails to display in the mosaic at 500. Setting a lower threshold means we get shown a lot of not very interesting thumbnails (things like oddly shaped tree stumps and patches of dirt that are a bit out of the ordinary), but at least it means we can see the system is working, and it stops the search being quite so boring.



The final window is a still image window. That is filled with a full resolution copy of whatever image we have selected for download from the aircraft, allowing us to get some context around a thumbnail if we want it. Often it is much easier to work out what an object is if you can see the surroundings.



On top of those imaging windows we also have the usual set of flight control windows. We had 3 laptops set up in our GCS tent, along with a 40" LCD TV for one of the laptops (my thinkpad). Stephen ran the flight monitoring on his laptop, and Matt helped with the imaging and the antenna tracker from his laptop. Splitting the task up in this way helped prevent overload of any one person, which made the whole experience more manageable.



On Stephen's laptop he had the main MAVProxy console showing the status of the key parts of the autopilot, plus graphs showing the consistency of the various subsystems. The ardupilot code on Pixhawk has a lot of redundancy at both the sensor level and at the higher level algorithm level, and it is useful to plot graphs showing if we get any significant discrepancies, giving us a warning of a potential failure. For this flight everything went smoothly (apart from the takeoff) but it is nice to have enough screens to show this sort of information anyway. It also makes the whole setup look a bit more professional to have a few fancy graphs going :-)



About 20 minutes after takeoff the imaging system spotted Joe lying in his favourite position in a peanut field. We had covered nearly all of the first pass across the search area so we were glad to finally spot him. The images were very clear, so we didn't need to spend any time discussing if it really was Joe or not.



We clicked the "Show Position" menu item we'd added to the MAVProxy map, which pops up a window giving the position in various coordinate systems. Greg passed this to the judges who quickly confirmed it was within the required 100 meters of Joe.



The bottle drop



That initial position was only a rough estimate though. To refine the position we used a new trick we'd added to MAVProxy. The "wp movemulti" command allowed us to move and rotate a pre-defined confirmation flight pattern over the top of Joe. That set up the plane to fly a butterfly pattern over Joe at an altitude of 80 meters, passing over him with wings level. That gives an optimal set of images for our image recognition system to work with, and the tight turns allow the wind estimation algorithm in ardupilot to get an accurate idea of wind direction and speed.
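For illustration, the geometry of moving and rotating a waypoint pattern looks roughly like the Python below. This is a flat-earth sketch with hypothetical offsets and coordinates, not the actual MAVProxy implementation.

import math

def move_rotate_pattern(offsets, centre, bearing_deg):
    # offsets are (north, east) metres in the pattern's own frame;
    # rotate them to the requested bearing and translate to 'centre'.
    lat0, lon0 = centre
    b = math.radians(bearing_deg)
    result = []
    for north, east in offsets:
        n = north * math.cos(b) - east * math.sin(b)
        e = north * math.sin(b) + east * math.cos(b)
        # crude flat-earth metres-to-degrees conversion, fine over a few hundred metres
        lat = lat0 + n / 111320.0
        lon = lon0 + e / (111320.0 * math.cos(math.radians(lat0)))
        result.append((lat, lon))
    return result

# a toy 3-point pattern rotated to a -121 degree bearing at made-up coordinates
print(move_rotate_pattern([(0, 0), (200, 100), (-200, 100)], (-26.58, 151.84), -121))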



In the weeks leading up to the competition we had spent a lot of time refining our bottle drop system. We realised the key to a good drop is timing consistency and accurate wind drift estimation.



To improve the timing consistency we had changed our bottle release mechanism to be in two stages. The bottle was held to the bottom of the Porter by two servos. One servo held the main weight of the bottle inside a plywood harness, while the second servo was attached to the top of the parachute by a wire, using a glider release mechanism.



The idea is that when approaching Joe for the drop the main servo releases first, which leaves the bottle hanging beneath the plane held by the glider release servo with the parachute unfurled. The wind drags the bottle at an angle behind the plane for 2 seconds before the 2nd servo is released. The result is much more consistent timing of the release, as there is no uncertainty in how long the bottle tumbles before the chute unfolds.
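In pseudocode form the sequencing is simple. The sketch below uses a hypothetical set_servo() helper and made-up channel and PWM numbers; on the real aircraft the stages were driven by the autopilot (e.g. via MAVLink servo commands), not a script like this.

import time

def set_servo(channel, pwm):
    # Placeholder: on a real Pixhawk this would be a MAV_CMD_DO_SET_SERVO
    # command or a mission item, not a print.
    print("servo %d -> %d us" % (channel, pwm))

def drop_bottle(harness_servo=6, chute_release_servo=7):
    # Stage 1: release the bottle from the plywood harness. It now hangs
    # below the aircraft on the glider release wire, chute unfurled.
    set_servo(harness_servo, 1900)
    time.sleep(2.0)  # let the chute stream behind the plane
    # Stage 2: release the parachute wire and let the bottle go.
    set_servo(chute_release_servo, 1900)

drop_bottle()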



The second part of the bottle drop problem is good wind drift estimation. We had decided to use a small parachute as we were not certain the bottle would withstand the impact with the ground without a chute. That meant significant wind drift, which meant we really had to know the wind direction and speed quite accurately, and also needed some data on how fast the bottle would drift in the wind.



In the weeks before the competition we did a lot of bottle drops, but the really key one was a drop just a week before we left for Kingaroy. That was the first drop in completely still conditions, which meant it finally gave us a "zero wind" data point, which was important in calculating the wind drift. Combining that drop with some previous drop results we came up with a very simple formula giving the wind drift distance for a drop from 80 meters as 5 times the wind speed in meters/second, as long as we dropped directly into the wind.



We had added a new feature to APM:Plane to allow an exact "acceptance radius" for an individual waypoint to be set, overriding the global WP_RADIUS parameter. So once we had the wind speed and direction it was just a matter of asking MAVProxy to rotate the drop mission waypoints to match the wind (which was coming from -121 degrees) and to set the acceptance radius for the drop to 35 meters (70 meters for zero wind, minus 7 times 5 for the 7m/s wind the EKF had estimated).
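Expressed as code, the rule we used looks like this. The 5x multiplier and the 70m zero-wind radius are the empirically derived numbers above, specific to our bottle and chute; don't treat them as general constants.

DRIFT_PER_MS = 5.0       # metres of drift per m/s of wind, for a drop from 80m
ZERO_WIND_RADIUS = 70.0  # acceptance radius in still air, metres

def drop_acceptance_radius(wind_speed_ms):
    # valid only when dropping directly into the wind
    return ZERO_WIND_RADIUS - DRIFT_PER_MS * wind_speed_ms

print(drop_acceptance_radius(7.0))  # -> 35.0, the value used on the day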



The plane then slowed to 20m/s for the drop, and did a 350m approach to ensure the wings were nice and level at the drop point. As the drop happened we could see the parachute unfurl in the camera view from the aircraft, which was a nice confirmation that the bottle had been successfully dropped.



The mission was set up for the aircraft to go back to the butterfly confirmation pattern after the drop. We had done that to allow us to see where the bottle had actually landed relative to Joe, in case we wanted to do a 2nd drop. We had 3 bottles ready (one on the plane, two back at the airfield), and were ready to fit a new bottle and adjust the drop point if we'd been too far off.



As soon as we did the confirmation pass it became clear we didn't need to drop a 2nd bottle. We measured the distance of the bottle to Joe as under 3 meters using the imaging system (the judges measured it officially as 2.6m), so we asked the plane to come back and land.



The landing



This year we used a laser rangefinder (a SF/02 from LightWare) to help with the landing. Using a rangefinder really helps ensure the flare is at the right altitude and produces much more consistent landings.



The only real drama we had with the landing was that the plane came in a bit fast, and ballooned more than it should have on the flare. The issue was that we hadn't used a shallow enough approach. Combined with the quite strong (14 knot) cross-wind it made for an "interesting" landing.



We also should have set the landing point a bit closer to the end of the runway. We had put it quite a way along the runway as we weren't sure if the laser rangefinder would pick up anything strange as it crossed over the road and airport boundary fence, but in hindsight we'd put the touchdown point a bit too close to the geofence. Full size glider operations running at the same time meant only part of the runway was available for OBC teams to use.



The landing was entirely successful, and was probably better than a manual landing by me would have been in the same wind conditions (I'm only a mediocre pilot, and landings are my weakest point), but we certainly can do better. Paul Riseborough is already looking at ways to improve the autoland, hopefully producing something that earns a round of applause from spectators in future landings.



Radio performance



Another part of the mission that is worth looking at is the radio performance. We had two radio links to the aircraft - one was a RFD900 900MHz radio, and that performed absolutely flawlessly as usual. We had around 40 dB of fade margin at a range of over 6km, which is absolutely huge. Every team that flew in the OBC this year used a RFD900, which is a great credit to Seppo and the team at RFDesign.



The 2nd radio was a Ubiquity Rocket M5, which is a 5.8GHz ethernet bridge. We used an active antenna tracker this year for the 5.8GHz link, with a 28dBi MIMO antenna on the ground, and a 10dBi MIMO omni antenna in the aircraft (the protrusion from the top of the fuselage is for the antenna). The 5.8GHz link gave us lots of bandwidth for the images, but was not nearly as reliable as the RFD900 link. It dropped out 6 times over the course of the mission, with the longest dropout lasting just over a minute. The dropouts were primarily caused by magnetometer calibration errors on the antenna tracker - during the mission we had to add some manual trim to the tracker to improve the alignment. That worked, but really we should have used a ground antenna with a bit less gain (maybe around 24dBi) to give us a wider beam width.



Another alternative would have been to use a lower frequency. The 5.8GHz Rocket gives fantastic bandwidth, but we don't really need that much bandwidth for our system. The Robota team used 2.4GHz Ubiquity radios and much simpler antennas and ended up with a much better link than we had. The difference in path loss between 2.4GHz and 5.8GHz is quite significant.
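The standard free-space path loss formula puts a number on that difference. The calculation below is textbook physics, with the 6km range taken from the mission; real links over terrain will of course differ.

import math

def fspl_db(distance_km, freq_mhz):
    # free-space path loss: distance in km, frequency in MHz
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

d = 6.0  # km, roughly the longest range flown during the mission
print("5.8GHz: %.1f dB, 2.4GHz: %.1f dB, difference: %.1f dB"
      % (fspl_db(d, 5800), fspl_db(d, 2400), fspl_db(d, 5800) - fspl_db(d, 2400)))

The difference of about 7.7dB is independent of range, since doubling the frequency always costs about 6dB.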



The reason we didn't use the 2.4GHz gear is that we do most of our testing at a local MAAA flying club, and we know that if someone crashes their expensive model while we have a powerful 2.4GHz radio running then there will always be the thought that our radio may have caused interference with their 2.4GHz RC link.



So we're now looking into the possibility of using a 900MHz ethernet bridge. The Ubiquity M900 looks like a real possibility. It doesn't offer nearly as much bandwidth as the 5.8GHz or 2.4GHz radios as Australia only has 13MHz of spectrum available in the 900MHz band for ISM use, but that should still be enough for our application. We have heard that the spread spectrum M900 doesn't significantly interfere with the RFD900 in the same band (as the RFD900 is a TDM frequency hopping radio), but we have yet to test that theory.



Alternatively we may use two RFD900s in the same aircraft, with different numbers of hopping channels and different air data rates to minimise interference. One would be dedicated to telemetry and the other to image data. A RFD900 at 128kbps should be plenty for our cuav imaging system as long as the "live camera view" window is set to quite a low resolution and update rate.



Team cooperation



One of the most notable things about this year's competition was just how friendly the discussions between the teams were. The competition has a great spirit of cooperation and it really is a fantastic experience to work closely with so many UAV developers from all over the world.



I don't really have time to go through all the teams that attended, but I do want to mention some of the highlights for me. Top of the list would have to be meeting Ben and Daniel Dyer from team SFWA. They put in an absolutely incredible effort to build their own autopilot from scratch. Their build log at http://au.tono.my/log/index.html is incredible to read, and shows just what can be achieved in a short time with enough talent. It was fantastic that they completed the challenge (the first team to ever do so) and I look forward to seeing how they take their system forward.



I'd also like to offer a tip of the hat to Team Swiss Fang. They used the PX4 native stack on a Pixhawk and it was fantastic to see how far they pushed that autopilot stack in the lead up to the competition. That is one of the things that competitions like the OBC can do for an autopilot - push it to much higher levels.



Team OpenUAS also deserves a mention, and I was especially pleased to meet Christophe who is one of the key people behind the Paparazzi autopilot. Paparazzi is a real pioneer in the field of amateur autopilots. Many of the posts we make on "ardupilot has just achieved X" on diydrones could reasonably be responded to by saying "Paparazzi did that 3 years ago". The OpenUAS team had bad luck in both the 2012 competition and again this year. This time round it was an airspeed sensor failure which led to a crash soon after takeoff, which is really tragic given the effort they have put in and the pedigree of their autopilot stack.



The Robota team also did very well, coming in second behind our team. Particularly impressive was the performance of the Goose autopilot on a quite small foam plane in the wind over Kingaroy. The automatic landing was fantastic. The Robota team used a much simpler approach, just using a 2.4GHz Ubiquity link to send a digital video stream to 3 video monitors and having 3 people staring at those screens to find Joe. Extremely simple, but it worked. They were let down a bit by the drop accuracy in the wind, but a great effort done with style.



I was absolutely delighted when Team Thunder, who were also running APM:Plane, completed the challenge, coming in 4th place. They built their system partly on the image recognition code we had released, which is exactly what we hoped would happen. We want to see UAV developers building on each others work to make better and better systems, so having Team Thunder complete the mission was great.



Overall ardupilot really shone in the competition. Over half the teams that flew in the competition were running ardupilot. Our community has shown that we can put together systems that compete with the best in the world. We've come a long way in the last few years and I'm sure there is a bright future for more developments in the UAV S&R space that will see ardupilot saving lives on a regular basis.



Thank you



On behalf of CanberraUAV I'd like to offer a huge thank you to the OBC organisers for the massive effort they have put in over so many years to run the competition. Back in 2007 when the competition started it took real insight for Rod Walker and Jon Roberts to see that a competition of this nature could push amateur UAV technology ahead so much, and it took a massive amount of perseverance to keep it going to the point that teams were able to finally complete the competition. The OBC has had a huge impact on amateur UAV technology.



We'd also like to thank our sponsors, 3DRobotics, who have been a huge help for CanberraUAV. We really couldn't have done it without you. Working with 3DR on this sort of technology is a great pleasure.



Next steps



Completing the Outback Challenge isn't the end for CanberraUAV and we are already starting discussions on what we want to do next. I've posted some ideas on our mailing list and we would welcome suggestions from anyone who wants to take part. We've come a long way, but we're not yet at the point where putting together an effective S&R aircraft is easy.

(UPDATE) Evaluating the security of OpenWRT (part 3) adventures in NOEXECSTACK’land

Of course, there are more things I had known but not fully internalised yet. Of course.

Many MIPS architectures, and specifically most common router architectures, don’t have hardware support for NX.


Yet. It surely won’t be long.


My own feeling, in this age of Heartbleed and Shellshock, is that we should get ahead of the curve if we can – if a distribution supports NX in the toolchain, then when a newer SOC arrives there is one less thing to forget about.

If I had bothered to keep reading instead of hacking I might have rediscovered this sooner. But then I would know significantly less about embedded toolchains and how to debug and patch them. Anyway, a determined user could also cherry-pick emulated NX protection from PAX.

When they Google this problem they will at least find my work.


How else to learn? :-)

October 03, 2014

Evaluating the security of OpenWRT (part 3) adventures in NOEXECSTACK’land

To recap our experiments to date: out of the box, OpenWRT may appear to have only sporadic coverage of various Linux hardening measures. This can in fact be a false impression – see the update above – but for the uninitiated it takes a bit of digging to check!

One metric of interest not closely examined to date is the NOEXECSTACK attribute on executable binaries and libraries. When coupled with kernel support, this disallows execution of code in the stack memory area of a program, preventing an entire class of vulnerabilities from working.  I mentioned NOEXECSTACK in passing previously; from the checksec report we saw that the x86 build has 100% coverage of NOEXECSTACK, whereas the MIPS build was almost completely lacking it.

For a quick introduction to NOEXECSTACK, see http://wiki.gentoo.org/wiki/Hardened/GNU_stack_quickstart.
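If you just want to check an individual binary without running the whole checksec script, something like the following Python sketch will do it, using the third-party pyelftools library (pip install pyelftools). A binary whose PT_GNU_STACK program header is missing, or has the execute flag set, has an executable stack.

import sys
from elftools.elf.elffile import ELFFile
from elftools.elf.constants import P_FLAGS

def stack_is_executable(path):
    with open(path, 'rb') as f:
        for segment in ELFFile(f).iter_segments():
            if segment['p_type'] == 'PT_GNU_STACK':
                return bool(segment['p_flags'] & P_FLAGS.PF_X)
    return True  # no PT_GNU_STACK header at all: assume the worst

for path in sys.argv[1:]:
    print("%s: %s" % (path, "EXECSTACK!" if stack_is_executable(path) else "NX ok"))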

Down the Toolchain Rabbit Hole

As challenging detective exercises go, this one was a bit of a doozy, for me at least.  Essentially I had to turn the OpenWRT build system inside out to understand how it worked, and then do the same with uClibc, just to work out where to start.  After rather a lot of false starts, the culprit turned out to be somewhere completely different.

First, OpenWRT by default uses uClibc as the C library, which is the bedrock upon which the rest of the user space is built.  The C library is not just a standard package however. OpenWRT, like the majority of embedded Linux systems, employs a “toolchain” or “buildroot” architecture.  Simply put, this combines the C/C++ compiler, the assembler, the linker, the C library and various other core components in a way that lets the remainder of the firmware be built without any knowledge of how this layer is put together.

This is a Good Thing as it turns out, especially when cross-compiling, i.e. when building the OpenWRT firmware for a CPU or platform (the TARGET) that is different from the one where the firmware build happens (the HOST) – especially where everything is bootstrapped from source, as OpenWRT is.

The toolchain is combined from the following components:

  • binutils — provides the linker (ld) and the assembler and code for manipulating ELF files (Linux binaries) and object libraries
  • gcc — provides the C/C++ compiler and, often overlooked, libgcc, a library of various “intrinsic” functions such as optimisations for certain C library functions, amongst others
  • A C library — in this case, uClibc
  • gdb — a debugger

Now all these elements need to be built in concert, and installed to the correct locations, and to complicate matters, the toolchain actually has multiple categories of output:

  • Programs that run on the HOST that produce programs and libraries that run on the TARGET (such as the cross compiler)
  • Programs that run on the TARGET (e.g. ldd, used for scanning dependencies)
  • Programs and libraries that run on the HOST to perform various tasks related to the above
  • Header files that are needed to build other programs that run on the HOST to perform various tasks related to the above
  • Header files that are needed to build programs and libraries that run on the TARGET
  • Even programs that run on the TARGET to produce programs and libraries that run on the TARGET (a target-hosted C compiler!)

Confused yet?

All this magic is managed by the OpenWRT build system in the following way:

  • The toolchain programs are unpacked and built individually under the build_dir/toolchain directory
  • The results of the toolchain build that are designed to run on the host are installed under staging_dir/toolchain
  • The partial toolchain under staging_dir is used to build the remaining items under build_dir, which are finally installed to staging_dir/target/blah-rootfs
  • (this is an approximation; build OpenWRT for yourself to find out all the accurate naming conventions)
  • The kernel headers are an intrinsic part of this because of the C library, so along the way a pass over the target Linux kernel source is required as well.

OpenWRT is flexible enough to allow the C compiler to be changed (e.g. between stock gcc 4.6 and Linaro gcc 4.8), and the binutils version, and even to switch the C library between different implementations (uClibc vs eglibc vs MUSL).

OpenWRT fetches the sources for all these things, then applies a number of local patches, before building.

We will need to refer to this later.

Confirming the Problem and Fishing for Red Herrings.

The first thing to note is that x86 has no problem, but MIPS does, and I want to run OpenWRT on various embedded devices with MIPS SOCs.  Without that I might never have bothered digging deeper!

Of course I dived in initially and took the naive brute force approach.  I patched OpenWRT to apply the override flag to the linker: -Wl,-z,noexecstack. This was a bit unthinking, after all x86 did not need this.

Doing this gave partial success. In fact most programs gained NOEXECSTACK, except for a chunk of the uClibc components, busybox, and tellingly as it turned out, libgcc_s.so. That is, core components used by nearly everything. Of course.

(Spoiler: modern Linux toolchain implementations actually enable NOEXECSTACK by DEFAULT for C code! Which was an important fact I forgot at this point! Silly me.)

At this point, I managed to overlook libgcc_s.so and decided to focus on uClibc. This decision would greatly expand my knowledge of OpenWRT, uClibc and embedded build systems, and do nothing to solve the problem the proper way!

OpenWRT builds uClibc as a host package, which basically means it abuses Makefiles to generate a uClibc configuration file partly derived from the OpenWRT config file settings, and eventually calls the uClibc top level makefile to build uClibc. This can only be done after building binutils and two of the three stages of gcc.

At this point I still did not fully understand how NOEXECSTACK really should be employed, which is probably an artefact of working on this stuff late at night and not reading everything as carefully as I might have.  So I did the obvious and incorrect thing and worked out how to patch uClibc further to force -Wl,-z,noexecstack through it. What I had to do to achieve that could almost fill another blog article, so I’ll skip it for brevity.  Anyway, this did not solve the problem.

Finally I turned on all the debug output and examined the build:

make V=csw toolchain/uClibc/compile

(Aside: the documentation for OpenWRT mentions using V=s to turn on some verboseness, but to get the actual compiler and linker commands of the toolchain build you need the extra flags. I should probably try and update the OpenWRT wiki but I have invested so much time in this that I might have to leave that as an exercise for the reader)

All the libraries were being linked using the -Wl,-z,noexecstack flag, yet some still failed checksec. Argh!

I should also note that repeating this process over and over gets tedious, taking about 20 minutes to build the toolchain plus a minimal target firmware on my quad core AMD Phenom. Don’t delete the build_dir/host and staging_dir/host directories or it doubles!

So something else was going on.

Upgrades and trampolines, or not.

I sought help from the uClibc developers mailing list, who suggested I first try using up to date software. This was a fair point, as OpenWRT is using a 2 year old release of uClibc and 1 year old release of binutils, etc.

This of course entailed having to learn how to patch OpenWRT to give me that choice.

So another week later, around work and family and life, I found some time to do this, and discovered that the problem persisted.

At this point I revisited the Gentoo hardening guide.  After some detective work I discovered that several MIPS assembler files inside of uClibc did not actually have the recommended code fragments. Aha! I thought. Incorrectly again, as I should have realised; uClibc has already thought of this and when NOEXECSTACK is configured, as it is for OpenWRT, uClibc passes a different flag to the assembler that has the effect of fixing NOEXECSTACK for assembler files. And of course after I patched about 17 .S files and waited another half hour, the checksec scan was still unsuccessful. Red herring again!

I started to get desperate when I read about some compiler systems that use something called a ‘trampoline’. So I went back to the uClibc mailing list.

At this point I would like to thank the uClibc developers for being so patient with me, as the solution was now near at hand.

Cutting edge patches and a wrinkle in time.

One of the uClibc developers pointed me to a patch posted on the gcc mailing list. As fate would have it, the patch was dated 10 September 2014, which was _after_ I started on these investigations.  This actually went full circle back to libgcc_s.so, the small library I had passed over to focus on uClibc.  This target library itself has some assembly files, which were neither built with the noexecstack option nor included the Gentoo-suggested assembly pragma. The patch applies to gcc, not to uClibc, and of course was outside of binutils, which was the other component I had upgraded.  The fact that libgcc_s.so was not clean should maybe have pointed me to look at gcc, and it did cross my mind. But we all have to learn somehow, and without all the above I would know far less about the internals of all these systems.

So I applied this patch, and finally, bliss, a sea of green NX enabled flags. Without in the end any modifications actually required to the older version of uClibc used inside OpenWRT.

This even fixed the busybox NX status without modification.  So this empirically confirms what I had read previously and overlooked to my detriment: NOEXECSTACK status is aggregated from all linked libraries, so if one is missing the flag it pollutes the lot.

Fixed NOEXECSTACK on uClibc

Postscript

Now I have to package the patch so that it can be submitted to OpenWRT, initially against Barrier Breaker given that it has just been released!

Then I will probably need to repeat it against all the supported gcc versions and submit separate patches for those. That will get a bit tedious; thankfully I can start a test build and then go away…

Simple PRINCE2 Governance

Pre-Project and Start-Up (SU)

A project is defined as a temporary organisation created for the purpose of delivering business products with a degree of uniqueness according to an agreed Business Case.

A project mandate must come from those managers authorised to allocate duties and funds, subject to delegated authority ("corporate/programme management").

Authorised individuals must raise a Project Mandate. This should state the basic objectives, scope, quality, and constraints and identify the customer, user, and other interested parties.


Degrowth Economy

Just read this article: Life in a de-growth economy and why you might actually enjoy it.

I like the idea of a steady state economy. Simple maths shows how stupid endless growth is. And yet our politicians cling to it. We will get a steady state, energy neutral economy one day. It’s just a question of whether we are forced, or whether it’s managed.
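The maths really is simple: anything growing at a fixed percentage doubles on a fixed schedule. A quick Python illustration, with example growth rates of my own choosing:

import math

def doubling_time_years(growth_percent):
    return math.log(2) / math.log(1 + growth_percent / 100.0)

for rate in (2, 3, 5):
    print("%d%% growth: doubles every %.0f years, x%d in a century"
          % (rate, doubling_time_years(rate), round((1 + rate / 100.0) ** 100)))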

Some thoughts on the article above:

  • I don’t agree that steady state implies localisation. Trade and specialisation are wonderful inventions. It’s more efficient if I write your speech coding software than you working it out. It’s far more efficient for a farmer to grow food than me messing about in my back yard. What is missing is a fossil fuel free means of transport to sustain trade and transportation of goods from where they are efficiently produced to where they are consumed.
  • Likewise local food production like they do in Cuba. Better to grow lots of food on a Cuban farm; they just lack an efficient way to transport it.
  • I have some problems with “organic” food production in the backyard, or my neighbour’s backyard. To me it’s paying more for chemically identical food to what I buy in the supermarket. Modern, scientific food production has its issues, but these can be solved by science. On a small scale, sure, gardening is fun, and it would be great to meet people in communal gardens. However it’s no way to feed a hungry world.
  • Likewise this articles vision of us repairing/recycling clothing. New is still fine, as long as it’s resource-neutral, e.g. cotton manufactured into jeans using solar powered factories, and transported to my shopping mall in an electric vehicle. Or synthetic fibres from bio-fuels or GM bacteria.
  • Software costs zero to upgrade but can improve our standard of living. So there can be “growth” in some sense at no expense in resources. You can use my speech codec and conserve resources (energy for transmission and radio spectrum). I can send you that software over the Internet, so we don’t need an aircraft to ship you a black box or even a CD.

I live by some anti-growth, anti-consumer principles. I drive an electric car that is based on a 25 year old recycled petrol car chassis. I don’t have a fossil fuel intensive commute. I use my bike more than my car.

I work part time from home mainly on volunteer work. My work is developing software that I can give away to help people. This software (for telecommunications) will in turn remove the need for expensive radio hardware, save power, and yet improve telecommunications.

I live inexpensively compared to my peers who are paying large mortgages due to the arbitrarily high price of land here, and other costs I have managed to avoid or simply say no to. No great luck or financial acumen at work here, although my parents taught me the useful habit of spending less than I earn. I’m not a very good consumer!

I don’t aspire to a larger home in a nice area or more gadgets. That would just mean more house work and maintenance and expense and less time on helping people with my work. In fact I aspire to a smaller home, and less gadgets (I keep throwing stuff out). I am renting at the moment as the real estate prices here are spiralling upwards and I don’t want to play that game. Renting will allow me to down-shift even further when my children are a little older. I have no debt, and no real desire to make more money, a living wage is fine. Although I do have investments and savings which I like tracking on spreadsheets.

I am typing this on a laptop made in 2008. I bought a second, identical one a few years later for $300 and swap parts between them so I always have a back up.

I do however burn a lot of fossil fuel in air travel. My home uses 11 kWh/day of electricity, which, considering this includes my electric car and hence all my “fuel” costs, is not bad.

More

In the past I have written about why I think economic growth is evil. There is a lot of great information on this topic such as this physics based argument on why we will cook (literally!) in a few hundred years if we keep increasing energy use. The Albert Bartlett lectures on exponential growth are also awesome.

Recharging NSW

So those who have been following this blog know that I’ve been a keen EV enthusiast, attempting to grow the number of EVs in Australia and the related charging network.

Some of my frustration at how slowly it has been growing has turned into “why don’t I do something about it?”

So I have. I’m now a director of a new company, Recharging NSW Pty Ltd. The main aim is to encourage and support EV uptake in Australia, by increasing both the number of cars on the road and the amount of public charging.

There isn’t much I can share at present as everything is still in the planning phase, but stay tuned.




[opinion] On Islamaphobia

It's taken me a while to get sufficiently riled up about Australia's current Islamaphobia outbreak, but it's been brewing in me for a couple of weeks.

For the record, I'm an Atheist, but I'll defend your right to practise your religion, just don't go pushing it on me, thank you very much. I'm also not a huge fan of Islam, because it does seem to lend itself to more violent extremism than other religions, and ISIS/ISIL/IS (whatever you want to call them) aren't doing Islam any favours at the moment. I'm against extremism of any stripes though. The Westboro Baptists are Christian extremists. They just don't go around killing people. I'm also not a big fan of the burqa, but again, I'll defend a Muslim woman's right to choose to wear one. The key point here is choice.

I got my carpets cleaned yesterday by an ethnic couple. I like accents, and I was trying to pick theirs. I thought they may have been Turkish. It turned out they were Kurdish. Whenever I hear "Kurd" I habitually stick "Bosnian" in front of it after the Bosnian War that happened in my childhood. Turns out I wasn't listening properly, and that was actually "Serb". Now I feel dumb, but I digress.

I got chatting with the lady while her husband did the work. I got a refresher on where most Kurds are/were (Northern Iraq) and we talked about Sunni versus Shia Islam, and how they differed. I learned a bit yesterday, and I'll have to have a proper read of the Wikipedia article I just linked to, because I suspect I'll learn a lot more.

We briefly talked about burqas, and she said that because they were Sunni, they were given the choice, and they chose not to wear it. That's the sort of Islam that I support. I suspect a lot of the women running around in burqas don't get a lot of say in it, but I don't think banning it outright is the right solution to that. Those women need to feel empowered enough to be able to cast off their burqas if that's what they want to do.

I completely agree that a woman in a burqa entering a secure place (for example Parliament House) needs to be identifiable (assuming that identification is verified for all entrants to Parliament House). If it's not, and they're worried about security, that's what the metal detectors are for. I've been to Dubai. I've seen how they handle women in burqas at passport control. This is an easily solvable problem. You don't have to treat burqa-clad women as second class citizens and stick them in a glass box. Or exclude them entirely.

October 01, 2014

On layers

There's been a lot of talk recently about what we should include in OpenStack and what is out of scope. This is interesting, in that many of us used to believe that we should do "everything". I think what's changed is that we're learning that solving all the problems in the world is hard, and that we need to re-focus on our core products. In this post I want to talk through the various "layers" proposals that have been made in the last month or so. Layers don't directly address what we should include in OpenStack or not, but they are a useful mechanism for trying to break up OpenStack into simpler to examine chunks, and I think that makes them useful in their own right.



I would address what I believe the scope of the OpenStack project should be, but I feel that it makes this post so long that no one will ever actually read it. Instead, I'll cover that in a later post in this series. For now, let's explore what people are proposing as a layering model for OpenStack.



What are layers?



Dean Troyer did a good job of describing a layers model for the OpenStack project on his blog quite a while ago. He proposed the following layers (this is a summary, you should really read his post):



  • layer 0: operating system and Oslo
  • layer 1: basic services -- Keystone, Glance, Nova
  • layer 2: extended basics -- Neutron, Cinder, Swift, Ironic
  • layer 3: optional services -- Horizon and Ceilometer
  • layer 4: turtles all the way up -- Heat, Trove, Moniker / Designate, Marconi / Zaqar




Dean notes that Neutron would move to layer 1 when nova-network goes away and Neutron becomes required for all compute deployments. Dean's post was also over a year ago, so it misses services like Barbican that have appeared since then. Services are only allowed to require services from lower-numbered layers, but can use services from higher-numbered layers as optional add-ins. So Nova for example can use Neutron, but cannot require it until it moves into layer 1. Similarly, there have been proposals to add Ceilometer as a dependency to schedule instances in Nova, and if we were to do that then we would need to move Ceilometer down to layer 1 as well. (I think doing that would be a mistake by the way, and have argued against it during at least two summits).
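To make the rule concrete, here is a small Python sketch of a layer checker, using Dean's layer assignments as summarised above; the dependency lists passed in are illustrative examples, not a complete dependency map.

LAYERS = {
    'Oslo': 0,
    'Keystone': 1, 'Glance': 1, 'Nova': 1,
    'Neutron': 2, 'Cinder': 2, 'Swift': 2, 'Ironic': 2,
    'Horizon': 3, 'Ceilometer': 3,
    'Heat': 4, 'Trove': 4, 'Designate': 4, 'Zaqar': 4,
}

def check_requires(service, requires):
    # A service may only hard-require services in lower-numbered layers;
    # anything in its own layer or above can only be an optional add-in.
    for dep in requires:
        if LAYERS[dep] >= LAYERS[service]:
            print("%s cannot require %s (layer %d vs %d)"
                  % (service, dep, LAYERS[dep], LAYERS[service]))

check_requires('Horizon', ['Keystone', 'Nova'])  # fine, nothing printed
check_requires('Nova', ['Neutron'])  # flagged until Neutron moves to layer 1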



Sean Dague re-ignited this discussion with his own blog post relatively recently. Sean proposes new names for most of the layers, but the intent remains the same -- a compute-centric view of the services that are required to build a working OpenStack deployment. Sean and Dean's layer definitions are otherwise strongly aligned, and Sean notes that the probability of seeing something deployed at a given installation reduces as the layer count increases -- so for example Trove is way less commonly deployed than Nova, because the set of people who want a managed database as a service is smaller than the set of people who just want to be able to boot instances.



Now, I'm not sure I agree with the compute centric nature of the two layers proposals mentioned so far. I see people installing just Swift to solve a storage problem, and I think that's a completely valid use of OpenStack and should be supported as a first class citizen. On the other hand, resolving my concern with the layers model there is trivial -- we just move Swift to layer 1.



What do layers give us?



Sean makes a good point about the complexity of OpenStack installs and how we scare away new users. I agree completely -- we show people our architecture diagrams which are deliberately confusing, and then we wonder why they're not impressed. I think we do it because we're proud of the scope of the thing we've built, but I think our audiences walk away thinking that we don't really know what problem we're trying to solve. Do I really need to deploy Horizon to have working compute? No of course not, but our architecture diagrams don't make that obvious. I gave a talk along these lines at pyconau, and I think as a community we need to be better at explaining to people what we're trying to do, while remembering that not everyone is as excited about writing a whole heap of cloud infrastructure code as we are. This is also why the OpenStack miniconf at linux.conf.au 2015 has pivoted from being a generic OpenStack chatfest to being something more solidly focussed on issues of interest to deployers -- we're just not great at talking to our users and we need to reboot the conversation at community conferences until it's something which meets their needs.





We intend this diagram to amaze and confuse our victims




Agreeing on a set of layers gives us a framework within which to describe OpenStack to our users. It lets us communicate the services we think are basic and always required, versus those which are icing on the cake. It also lets us explain the dependency between projects better, and that helps deployers work out what order to deploy things in.



Do layers help us work out what OpenStack should focus on?



Sean's blog post then pivots and starts talking about the size of the OpenStack ecosystem -- or the "size of our tent" as he phrases it. While I agree that we need to shrink the number of projects we're working on at the moment, I feel that the blog post is missing a logical link between the previous layers discussion and the tent size conundrum. It feels to me that Sean wanted to propose that OpenStack focus on a specific set of layers, but didn't quite get there for whatever reason.



Next Monty Taylor had a go at furthering this conversation with his own blog post on the topic. Monty starts by making a very important point -- he (like all involved) wants the OpenStack community to be as inclusive as possible. I want lots of interesting people at the design summits, even if they don't work directly on projects that OpenStack ships. You can be a part of the OpenStack community without having our logo on your product.



A concrete example of including non-OpenStack projects in our wider community was visible at the Atlanta summit -- I know for a fact that there were software engineers at the summit who work on Google Compute Engine. I know this because I used to work with them at Google when I was a SRE there. I have no problem with people working on competing products being at our summits, as long as they are there to contribute meaningfully in the sessions, and not just take from us. It needs to be a two way street. Another concrete example is Ceph. I think Ceph is cool, and I'm completely fine with people using it as part of their OpenStack deploy. What upsets me is when people conflate Ceph with OpenStack. They are different. They're separate. And that is fine. Let's just not confuse people by saying Ceph is part of the OpenStack project -- it simply isn't because it doesn't fall under our governance model. Ceph is still a valued member of our community and more than welcome at our summits.



Do layers help us work out what to focus OpenStack on for now? I think they do. Should we simply say that we're only going to work on a single layer? Absolutely not. What we've tried to do up until now is have OpenStack be a single big thing, what we call "the integrated release". I think layers give us a tool to find logical ways to break that thing up. Perhaps we need a smaller integrated release, but then continue with the other projects on their own release cycles? Or perhaps they release at the same time, but we don't block the release of a layer 1 service on the basis of release critical bugs in a layer 4 service?



Is there consensus on what sits in each layer?



Looking at the posts I can find on this topic so far, I'd have to say the answer is no. We're close, but we're not aligned yet. For example, one proposal tweaks the previously proposed layer model by moving Cinder, Designate and Neutron down into layer 1 (basic services). The author argues that this is because a stateless cloud isn't particularly useful to users of OpenStack. However, I think this is wrong. I can see that a stateless cloud isn't super useful by itself, but that argument assumes OpenStack is the only piece of infrastructure a given organization has. Perhaps that's true for the public cloud case, but the vast majority of OpenStack deployments at this point are private clouds.



So, you're an existing IT organization and you're deploying OpenStack to increase the flexibility of your compute resources. You don't need to deploy Cinder or Designate to do that. Let's take the storage case for a second -- our hypothetical IT organization probably already has some form of storage: a SAN, NFS appliances, or something like that. Stateful cloud is therefore easy for them -- their instances just mount resources from those existing storage pools like any other machine would. Eventually they'll decide that hand managing that is horrible and move to Cinder, but that comes later, once they've gotten through the initial baby step of deploying Nova, Glance and Keystone.
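
To make that concrete, here's a minimal sketch of that interim, Cinder-less approach from inside a guest. The filer hostname and export path are made up for illustration, and this assumes the instance can reach the existing storage over the network:

# Inside a Nova instance: mount the organization's existing NFS
# appliance directly (hostname and export path are hypothetical).
sudo apt-get install nfs-common
sudo mkdir -p /mnt/data
sudo mount -t nfs filer.example.com:/vol/data /mnt/data

Nothing OpenStack-specific there, which is exactly the point -- the instance consumes the existing storage like any other machine on the network would.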



The first step to using layers to decide what we should focus on is to decide what is in each layer. I think the conversation needs to revolve around that for now, because if we drift off into debating whether sitting in a given layer means you're voted off the OpenStack island, we'll never even come up with an agreed set of layers.



Let's ignore tents for now



The size of the OpenStack "tent" is the metaphor being used at the moment for working out what to include in OpenStack. As I say above, I think we need to reach agreement on what is in each layer before we can move on to that very important conversation.



Conclusion



Given the focus of this post is the layers model, I want to stop introducing new concepts here for now. Instead, let me summarize where I stand so far -- I think the layers model is useful. I also think the layers should form an inverted pyramid -- layer 1, for example, should be as small as possible. This follows from the dependency model the layers propose -- it is important to keep the list of things that a layer 2 service must use as small and coherent as possible. Another reason to keep the lower layers small is that each layer represents the smallest increment of an OpenStack deployment that we think is reasonable. We believe it is currently reasonable to deploy Nova without Cinder or Neutron, for example.



Most importantly of all, having those incremental stages of OpenStack deployment gives us a framework we have been missing in talking to our deployers and users. It makes OpenStack less confusing to outsiders, as it gives them bite-sized morsels to consume one at a time.



So here are the layers as I see them for now:



  • layer 0: operating system, and Oslo
  • layer 1: basic services -- Keystone, Glance, Nova, and Swift
  • layer 2: extended basics -- Neutron, Cinder, and Ironic
  • layer 3: optional services -- Horizon, and Ceilometer
  • layer 4: application services -- Heat, Trove, Designate, and Zaqar
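
One nice property of this model is that it's trivially machine readable. As a sketch only -- the layer assignments simply mirror my list above, and this isn't any kind of official install tool -- a deployer could derive a sane installation order from it:

#!/bin/bash
# Sketch: encode the proposed layer for each service (bash 4
# associative arrays). Layer 0 (the operating system and Oslo)
# is assumed as a prerequisite and not listed.
declare -A LAYER=(
    [keystone]=1 [glance]=1 [nova]=1 [swift]=1
    [neutron]=2 [cinder]=2 [ironic]=2
    [horizon]=3 [ceilometer]=3
    [heat]=4 [trove]=4 [designate]=4 [zaqar]=4
)

# Print services sorted by layer, so lower layers deploy first.
for svc in "${!LAYER[@]}"; do
    echo "${LAYER[$svc]} ${svc}"
done | sort -n

A deployer stops at whatever layer meets their needs -- layer 1 for the minimal compute story above, further down the list as their requirements grow.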




I am not saying that everything inside a single layer must be deployed simultaneously, but I do think it's reasonable for Ceilometer to assume that Swift is installed and functioning. The big difference between my view of the layers and that of Dean, Sean and Monty is that I think Swift is a layer 1 service -- it provides basic functionality that may be assumed to exist by services above it in the model.



I believe that when projects come to the Technical Committee requesting incubation or integration, they should specify which layer they see their project sitting at, and the bar for justifying a lower layer number should be higher than that for a higher one. So, for example, we should be reasonably willing to accept proposals at layer 4, whilst we should be super concerned about the implications of adding another project at layer 1.



In the next post in this series I'll try to address the size of the OpenStack "tent", and what projects we should be focussing on.



Tags for this post: openstack kilo technical committee tc layers

Related posts: One week of Nova Kilo specifications; Specs for Kilo; Compute Kilo specs are open; My candidacy for Kilo Compute PTL; Juno TC Candidacy; Juno nova mid-cycle meetup summary: nova-network to Neutron migration




[life] Day 244: TumbleTastics, photo viewing, bulk goods and a play in the park

Yesterday was another really lovely day. Zoe had her free trial class with TumbleTastics at 10am, and she was asking for a leotard for it, because that's what she was used to wearing when she went to Gold Star gymnastics in Mountain View. After Sarah dropped her off, we dashed out to Cannon Hill to go to K Mart in search of a leotard.

We found a leotard, and got home with enough time to scooter over to TumbleTastics for the class. Zoe had an absolute ball again, and the teacher was really impressed with her physical ability, suggesting that for her regular classes, which start next term, she be slotted up a level. It sounds like her regularly scheduled class will have older kids and more boys, so that should be good. I just love that Zoe's so physically confident.

We scootered back home, and after lunch, drove back to see Hannah to view our photos from last week's photo shoot. There were some really beautiful photos in the set, so now I need to decide which one I want on a canvas.

Since we were already out, I thought we could go and check out the food wholesaler at West End that we'd failed to get to last week. I'm glad we did, because it was awesome. There was a coffee shop attached to the place, so we grabbed a coffee and a babyccino together after doing some shopping there.

While we were out, I thought it was a good opportunity to check out a new park, so we drove down to what I guess was Orleigh Park, and had an absolutely fantastic afternoon down there by the river; the shade levels in the mid-afternoon were great. I'd like to make an all-day outing of it one day on the CityCat and CityGlider: bus one way and take the ferry back the other way, with a picnic lunch in the middle.

We headed home after about an hour playing in the park, and Zoe watched a little bit of TV before Sarah came to pick her up.

Zoe's spending the rest of the school holidays with Sarah, so I'm going to use the extra spare time to try and catch up on my taxes and my real estate licence training, which I've been neglecting.