Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

October 25, 2014

Craige McWhirter: Automating Building and Synchronising Local & Remote Git Repos With Github

I've blogged about some git configurations in the past, in particular working with remote git repos.

I have a particular workflow for most git repos:

  • I have a local repo on my laptop
  • I have a remote git repo on my server
  • I have a public repo on Github that functions as a backup.

When I push to my remote server, a post-receive hook automatically pushes the updates to Github. Yay for automation.
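For anyone wanting to replicate this, the hook itself can be tiny. Here's a sketch of a post-receive hook (written in Python for illustration; a one-line shell script would do equally well), assuming a remote named github has already been configured in the server-side repo:

    #!/usr/bin/env python
    # Sketch of hooks/post-receive in the bare repo on the remote server.
    # Assumes "git remote add github git@github.com:you/repo.git" was run there.
    import subprocess

    # Mirror every ref that was just pushed up to GitHub.
    subprocess.check_call(["git", "push", "--mirror", "github"])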

However, this wasn't enough automation, as I found myself creating git repos and running through the setup steps more often than I'd like. As a result I created gitweb_repo_build.sh, which takes all the manual steps I go through to set up my workflow and automates them.

The script currently does the following:

  • Builds a git repo locally
  • Adds a README.mdwn and a LICENCE. Commits the changes.
  • Builds a git repo hosted on your remote git server
  • Adds to the remote server a git hook for automatically pushing to GitHub
  • Adds to the remote server a git remote for GitHub.
  • Creates a repo at GitHub via API v3 (sketched below).
  • Pushes the README and LICENCE to the remote, which pushes to GitHub.
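The GitHub step is just a single authenticated POST to the v3 API. A minimal Python sketch of that one call (the token and repository name here are placeholders):

    import json
    import urllib.request

    TOKEN = "your-api-token"  # placeholder: a GitHub token with repo scope

    # POST /user/repos creates a repository for the authenticated user.
    req = urllib.request.Request(
        "https://api.github.com/user/repos",
        data=json.dumps({"name": "my-new-repo"}).encode(),
        headers={"Authorization": "token " + TOKEN,
                 "Content-Type": "application/json"},
    )
    print(urllib.request.urlopen(req).getcode())  # 201 on success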

It's currently written in bash and has no error handling.

I've planned a rewrite in Haskell, which will have error handling.

If this is of use to you, enjoy :-)

That rare feeling …

… of actually completing things.

Upon reflection, it appears to have been a successful week.

Work – We relocated offices (including my own desk, again) over the previous week, from one slightly pre-used office building to another more well-used office building. My role in this project was to ensure that the mechanics of the move, as far as IT and comms were concerned, proceeded smoothly. After recabling the floor and working with the networks, telephones and desktop staff, it was an almost flawless move, and everyone was up and running easily on Monday morning. I received lots of positive feedback, which was good.

Choir – The wrap-up SGM for the 62nd Australian Intervarsity Choral Festival Perth 2011, Inc happened. Pending the incorporation of the next festival, it is all over bar a few cheques and some paperwork. Overall it was a great festival, and as Treasurer I was pleased with the final financial result (positive).

Hacking – This week's little project has been virtualsnack. This is a curses emulator of the UCC Snack Machine and associated ROM. It is based on a previous emulator, written with PyGTK and Glade, that had bitrotted over the past ten years to the point of being non-functional and not worth the effort to resurrect. The purpose of the emulator is to enable development of code that speaks to the machine without having to have the real machine available to test against.

I chose to keep the code in Python and used npyscreen as the curses UI library. One of the intermediate steps was creating a code sample, EXAMPLE-socket.py, which creates a daemon that speaks to a curses interface.
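For a taste of the library, a minimal npyscreen application (a generic sketch, not virtualsnack itself) looks something like this:

    import npyscreen

    class SnackUI(npyscreen.NPSApp):
        def main(self):
            # One form with a read-only status field; form.edit() runs the
            # curses event loop until the user leaves the form.
            form = npyscreen.Form(name="virtual snack machine")
            form.add(npyscreen.TitleFixedText, name="Status", value="Insert coin")
            form.edit()

    if __name__ == "__main__":
        SnackUI().run()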

I hereby present V1.0 “Gobbledok” of virtualsnack. virtualsnack is hosted up on Github for the moment, but may move in future. I suspect this item of software will only be of interest to my friends at UCC.

October 24, 2014

[life] Day 268: Science Friday, TumbleTastics, haircuts and a big bike outing

I didn't realise how jam packed today was until we sat down at dinner time and recounted what we'd done today.

I started the day pretty early, because Anshu had to be up for an early flight. I pottered around at home cleaning up a bit until Sarah dropped Zoe off.

After Zoe had watched a bit of TV, I thought we'd try some bottle rocket launching for Science Friday. I'd impulse-purchased an AquaPod at Jaycar last year, and hadn't gotten around to using it yet.

We wandered down to Hawthorne Park with the AquaPod, an empty 2 litre Sprite bottle, the bicycle pump and a funnel.

My one complaint with the AquaPod would have to be that the feet are too smooth. If you don't tug the string strongly enough you end up just dragging the whole thing across the ground, which isn't really what you want to be doing. Once Zoe figured out how to yank the string the right way, we were all good.

We launched the bottle a few times, but I didn't want to waste a huge amount of water, so we stopped after about half a dozen launches. Zoe wanted to have a play in the playground, so we wandered over to that side of the park for a bit.

It was getting close to time for TumbleTastics, and we needed to go via home to get changed, so we started the longish walk back. It was slow going in the mid-morning heat without the scooter, but we got there eventually. We had another mad rush to get to TumbleTastics on time, and miraculously managed to make it there just as they were calling her name.

Lachlan wasn't there today, and I was feeling lazy, and Zoe was keen for a milkshake, so we dropped into Ooniverse on the way home. Zoe had a great old time playing with everything there.

After we got home again, we biked down to the Bulimba post office to collect some mail, and then biked over for a haircut.

After our haircuts, Zoe wanted to play in Hardcastle Park, so we biked over there for a bit. I'd been wanting to go and check out the newly opened Riverwalk and try taking the bike and trailer on a CityCat. A CityCat just happened to be arriving when we got to the park, but Zoe wasn't initially up for it. As luck would have it, she changed her mind as the CityCat docked, but it was too late to try and get on that one. We got on the next one instead.

I wasn't sure how the bike and the trailer were going to work out on the CityCat, but it worked out pretty well going from Hawthorne to New Farm Park. We boarded at Hawthorne from the front left hand side, and disembarked at New Farm Park from the front right hand side, so I basically just rolled the bike on and off again, without needing to worry about turning it around. It was a bit tight cornering from the pontoon to the gangway, but the deckhand helped me manoeuvre the trailer.

It was quite a nice little ride through the back streets of New Farm to get to the start of the Riverwalk, and we had a nice quick ride into the city. We biked all the way along the riverside through to the Old Botanic Gardens. We stopped for a little play in the playground that Zoe had played in the other weekend when we were wandering around for Brisbane Open House, and then continued through the gardens, over the Goodwill Bridge, and along the bottom of the Kangaroo Point cliffs.

We wound our way back home through Dockside, and Mowbray Park and along the bikeway alongside Wynnum Road. It was a pretty huge ride, and I'm excited that it's opened up an easy way to access Southbank by bicycle. I'm looking forward to some bigger forays in the near future.

Watching Grass Grow

For Hackweek 11 I thought it’d be fun to learn something about creating Android apps. The basic training is pretty straightforward, and the auto-completion (and auto-just-about-everything-else) in Android Studio is excellent. So having created a “hello world” app, and having learned something about activities and application lifecycle, I figured it was time to create something else. Something fun, but something I could reasonably complete in a few days. Given that Android devices are essentially just high res handheld screens with a bit of phone hardware tacked on, it seemed a crime not to write an app that draws something pretty.

The openSUSE desktop wallpaper, with its happy little Geeko sitting on a vine, combined with all the green growing stuff outside my house (it's spring here), made me wonder if I couldn't grow a little vine jungle on my phone, with many happy Geekos inhabiting it.

Android has OpenGL ES, so thinking that might be the way to go I went through the relevant lesson, and was surprised to see nothing on the screen where there should have been a triangle. Turns out the view is wrong in the sample code. I also realised I’d probably have to be generating triangle strips from curvy lines, then animating them, and the brain cells I have that were once devoted to this sort of graphical trickery are so covered in rust that I decided I’d probably be better off fiddling around with beziers on a canvas.

So, I created an app with a SurfaceView and a rendering thread which draws one vine after another, up from the bottom of the screen. Depending on Math.random() it extends a branch out to one side, or the other, or both, and might draw a Geeko sitting on the bottom most branch. Originally the thread lifecycle was tied to the Activity (started in onResume(), killed in onPause()), but this causes problems when you blank the screen while the app is running. So I simplified the implementation by tying the thread lifecycle to Surface create/destroy, at the probable expense of continuing to chew battery if you blank the screen while the app is active.

Then I realised that it would make much more sense to implement this as live wallpaper, rather than as a separate app, because then I’d see it running any time I used my phone. Turns out this simplified the implementation further. Goodbye annoying thread logic and lifecycle problems (although I did keep the previous source just in case). Here’s a screenshot:

Geeko Live Wallpaper

The final source is on github, and I’ve put up a release build APK too in case anyone would like to try it out – assuming of course that you trust me not to have built a malicious binary, trust github to host it, and trust SSL to deliver it safely ;-)

Enjoy!

Specs for Kilo

Here's an updated list of the specs currently proposed for Kilo. I wanted to produce this before I start travelling for the summit in the next couple of days because I think many of these will be required reading for the Nova track at the summit.

API

  • Add instance administrative lock status to the instance detail results: review 127139 (abandoned).
  • Add more detailed network information to the metadata server: review 85673.
  • Add separated policy rule for each v2.1 API: review 127863.
  • Add user limits to the limits API (as well as project limits): review 127094.
  • Allow all printable characters in resource names: review 126696.
  • Expose the lock status of an instance as a queryable item: review 85928 (approved).
  • Implement instance tagging: review 127281 (fast tracked, approved).
  • Implement tags for volumes and snapshots with the EC2 API: review 126553 (fast tracked, approved).
  • Implement the v2.1 API: review 126452 (fast tracked, approved).
  • Microversion support: review 127127.
  • Move policy validation to just the API layer: review 127160.
  • Provide a policy statement on the goals of our API policies: review 128560.
  • Support X509 keypairs: review 105034.

Administrative

  • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705 (abandoned).
  • Enforce instance uuid uniqueness in the SQL database: review 128097 (fast tracked, approved).

Containers Service

Hypervisor: Docker

Hypervisor: FreeBSD

  • Implement support for FreeBSD networking in nova-network: review 127827.

Hypervisor: Hyper-V

  • Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190 (approved).

Hypervisor: Ironic

Hypervisor: VMware

  • Add ephemeral disk support to the VMware driver: review 126527 (fast tracked, approved).
  • Add support for the HTML5 console: review 127283.
  • Allow Nova to access a VMware image store over NFS: review 126866.
  • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (fast tracked, approved).
  • Enable the mapping of raw cinder devices to instances: review 128697.
  • Implement vSAN support: review 128600 (fast tracked, approved).
  • Support multiple disks inside a single OVA file: review 128691.
  • Support the OVA image format: review 127054 (fast tracked, approved).

Hypervisor: libvirt

Instance features

Internal

  • Move flavor data out of the system_metadata table in the SQL database: review 126620 (approved).
  • Transition Nova to using the Glance v2 API: review 84887.

Internationalization

  • Enable lazy translations of strings: review 126717 (fast tracked).

Performance

  • Dynamically alter the interval nova polls components at based on load and expected time for an operation to complete: review 122705.

Scheduler

  • Add an IOPS weigher: review 127123 (approved).
  • Add instance count on the hypervisor as a weight: review 127871 (abandoned).
  • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530 (abandoned).
  • Convert the resource tracker to objects: review 128964 (fast tracked, approved).
  • Create an object model to represent a request to boot an instance: review 127610.
  • Decouple services and compute nodes in the SQL database: review 126895.
  • Implement resource objects in the resource tracker: review 127609.
  • Isolate the scheduler's use of the Nova SQL database: review 89893.
  • Move select_destinations() to using a request object: review 127612.

Security

  • Provide a reference implementation for console proxies that uses TLS: review 126958 (fast tracked).
  • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.

Tags for this post: openstack kilo blueprint spec

Related posts: One week of Nova Kilo specifications; Compute Kilo specs are open; On layers; Juno nova mid-cycle meetup summary: slots; My candidacy for Kilo Compute PTL; Juno nova mid-cycle meetup summary: nova-network to Neutron migration

October 23, 2014

[life] Day 267: An outing to the Valley for lunch, and swim class

I was supposed to go to yoga in the morning, but I just couldn't drag my sorry arse out of bed with my man cold.

Sarah dropped Zoe around, and she watched a bit of TV while we were waiting for a structural engineer to come and take a look at the building's movement-related issues.

While I was downstairs showing the engineer around, Zoe decided she'd watched enough TV and, remembering that I'd said the previous morning that we needed to tidy up her room but hadn't had time to, took herself off to her room and tidied it up. I was so impressed.

After the engineer was finished, we walked to the ferry terminal to take the cross-river ferry over to Teneriffe, and catch the CityGlider bus to the Valley for another one of the group lunches I get invited to.

After lunch, we reversed our travel, dropping into the hairdresser on the way home to make an appointment for the next day. We grabbed a few things from the Hawthorne Garage on the way through.

We pottered around at home for a little bit before it was time to bike to swim class.

After swim class, we biked home, and Zoe watched some TV while I got organised for a demonstration that night.

Sarah picked up Zoe, and I headed out to my demo. Another full day.

Call for Volunteers

The Earlybird registrations are going extremely well – over 50% of the available tickets have sold in just two weeks! This is no longer a conference we are planning – this is a conference that is happening and that makes the Organisation Team very happy!

Speakers have been scheduled. Delegates are coming. We now urgently need to expand our team of volunteers to manage and assist all these wonderful visitors to ensure that LCA 2015 is unforgettable – for all the right reasons.

Volunteers are needed to register our delegates, show them to their accommodation, guide them around the University and transport them here and there. They will also manage our speakers by making sure that their presentations don't overrun, recording their presentations and assisting them in many other ways during their time at the conference.

Anyone who has been a volunteer before will tell you that it’s an extremely busy time, but so worthwhile. It’s rewarding to know that you’ve helped everybody at the conference to get the most out of it. There's nothing quite like knowing that you've made a difference.

But there is more: volunteering has other privileges and advantages! You don't just get to meet the delegates and speakers, you get to know many of them while helping them as well. You get a unique opportunity to get behind the scenes and close to the action. You can forge new relationships with amazing, interesting, wonderful people you might not ever get the chance to meet any other way.

Every volunteer's contribution is valued and vital to the overall running and success of the conference. We need all kinds of skills too – not just the technically savvy ones (although knowing which is the noisy end of a walkie-talkie may help). We want you! We need you! It just wouldn't be the same without you! If you would like to be an LCA 2015 volunteer it's easy to register. Just go to our volunteer page for more information. We review volunteer registrations regularly and if you’re based in Auckland (or would like a break away from wherever you are) then we would love to meet you at one of our regular meetings. Registered volunteers will receive information about these via email.

Assembly Primer Part 7 — Working with Strings — ARM

These are my notes for where I can see ARM varying from IA32, as presented in the video Part 7 — Working with Strings.

I’ve not remotely attempted to implement anything approximating optimal string operations for this part — I’m just working my way through the examples and finding obvious mappings to the ARM arch (or, at least what seem to be obvious). When I do something particularly stupid, leave a comment and let me know :)

Working with Strings

.data
     HelloWorldString:
        .asciz "Hello World of Assembly!"
    H3110:
        .asciz "H3110"

.bss
    .lcomm Destination, 100
    .lcomm DestinationUsingRep, 100
    .lcomm DestinationUsingStos, 100

Here’s the storage that the provided example StringBasics.s uses. No changes are required to compile this for ARM.

1. Simple copying using movsb, movsw, movsl

    @movl $HelloWorldString, %esi
    movw r0, #:lower16:HelloWorldString
    movt r0, #:upper16:HelloWorldString

    @movl $Destination, %edi
    movw r1, #:lower16:Destination
    movt r1, #:upper16:Destination

    @movsb
    ldrb r2, [r0], #1
    strb r2, [r1], #1

    @movsw
    ldrh r3, [r0], #2
    strh r3, [r1], #2

    @movsl
    ldr r4, [r0], #4
    str r4, [r1], #4

More visible complexity than IA32, but not too bad overall.

IA32's movs instructions implicitly take their source and destination addresses from %esi and %edi, and increment/decrement both. Because of ARM's load/store architecture, separate load and store instructions are required in each case, but there is support for indexing of these registers:

ARM addressing modes

According to ARM A8.5, memory access instructions commonly support three addressing modes:

  • Offset addressing — An offset is applied to an address from a base register and the result is used to perform the memory access. It’s the form of addressing I’ve used in previous parts and looks like [rN, offset]
  • Pre-indexed addressing — An offset is applied to an address from a base register, the result is used to perform the memory access and also written back into the base register. It looks like [rN, offset]!
  • Post-indexed addressing — An address is used as-is from a base register for memory access. The offset is applied and the result is stored back to the base register. It looks like [rN], offset and is what I’ve used in the example above.

2. Setting / Clearing the DF flag

ARM doesn’t have a DF flag (to the best of my understanding). It could perhaps be simulated through the use of two instructions and conditional execution to select the right direction. I’ll look further into conditional execution of instructions on ARM in a later post.

3. Using Rep

ARM also doesn’t appear to have an instruction quite like IA32′s rep instruction. A conditional branch and a decrement will be the long-form equivalent. As branches are part of a later section, I’ll skip them for now.

    @movl $HelloWorldString, %esi
    movw r0, #:lower16:HelloWorldString
    movt r0, #:upper16:HelloWorldString

    @movl $DestinationUsingRep, %edi
    movw r1, #:lower16:DestinationUsingRep
    movt r1, #:upper16:DestinationUsingRep

    @movl $25, %ecx # set the string length in ECX
    @cld # clear the DF
    @rep movsb
    @std

    ldm r0!, {r2,r3,r4,r5,r6,r7}
    ldrb r8, [r0,#0]
    stm r1!, {r2,r3,r4,r5,r6,r7}
    strb r8, [r1,#0]

To avoid conditional branches, I’ll start with the assumption that the string length is known (25 bytes). One approach would be using multiple load instructions, but the load multiple (ldm) instruction makes it somewhat easier for us — one instruction to fetch 24 bytes, and a load register byte (ldrb) for the last one. Using the ! after the source-address register indicates that it should be updated with the address of the next byte after those that have been read.

The storing of the data back to memory is done analogously. Store multiple (stm) writes 6 registers×4 bytes = 24 bytes (with the ! to have the destination address updated). The final byte is written using strb.

4. Loading string from memory into EAX register

    @cld
    @leal HelloWorldString, %esi
    movw r0, #:lower16:HelloWorldString
    movt r0, #:upper16:HelloWorldString

    @lodsb
    ldrb r1, [r0, #0]

    @movb $0, %al
    mov r1, #0

    @dec %esi  @ unneeded. equiv: sub r0, r0, #1
    @lodsw
    ldrh r1, [r0, #0]

    @movw $0, %ax
    mov r1, #0

    @subl $2, %esi # Make ESI point back to the original string. unneeded. equiv: sub r0, r0, #2
    @lodsl
    ldr r1, [r0, #0]

In this section, we are shown how the IA32 lodsb, lodsw and lodsl instructions work. Again, they have implicitly assigned register usage, which isn’t how ARM operates.

So, instead of a simple, no-operand instruction like lodsb, we have a ldrb r1, [r0, #0] loading a byte from the address in r0 into r1. Because I didn’t use post indexed addressing, there’s no need to dec or subl the address after the load. If I were to do so, it could look like this:

    ldrb r1, [r0], #1
    sub r0, r0, #1

    ldrh r1, [r0], #2
    sub r0, r0, #2

    ldr r1, [r0], #4

If you trace through it in gdb, look at how the value in r0 changes after each instruction.

5. Storing strings from EAX to memory

    @leal DestinationUsingStos, %edi
    movw r0, #:lower16:DestinationUsingStos
    movt r0, #:upper16:DestinationUsingStos

    @stosb
    strb r1, [r0], #1
    @stosw
    strh r1, [r0], #2
    @stosl
    str r1, [r0], #4

Same kind of thing as for the loads. Writes the letters in r1 (being “Hell” — leftovers from the previous section) into DestinationUsingStos (the result being “HHeHell”). String processing on little endian architectures has its appeal.

6. Comparing Strings

    @cld
    @leal HelloWorldString, %esi
    movw r0, #:lower16:HelloWorldString
    movt r0, #:upper16:HelloWorldString
    @leal H3110, %edi
    movw r1, #:lower16:H3110
    movt r1, #:upper16:H3110

    @cmpsb
    ldrb r2, [r0,#0]
    ldrb r3, [r1,#0]
    cmp r2, r3

    @dec %esi
    @dec %edi
    @not needed because of the addressing mode used

    @cmpsw
    ldrh r2, [r0,#0]
    ldrh r3, [r1,#0]
    cmp r2, r3

    @subl $2, %esi
    @subl $2, %edi
    @not needed because of the addressing mode used
    @cmpsl
    ldr r2, [r0,#0]
    ldr r3, [r1,#0]
    cmp r2, r3

Where IA32's cmps instructions implicitly load through the pointers in %edi and %esi, explicit loads are needed for ARM. The compare then works in pretty much the same way as for IA32, setting condition code flags in the current program status register (cpsr). If you run the above code, and check the status registers before and after execution of the cmp instructions, you'll see the zero flag set and unset in the same way as is demonstrated in the video.

The condition code flags are:

  • bit 31 — negative (N)
  • bit 30 — zero (Z)
  • bit 29 — carry (C)
  • bit 28 — overflow (V)

There’s other flags in that register — all the details are on page B1-16 and B1-17 in the ARM Architecture Reference Manual.

And with that, I think we’ve made it (finally) to the end of this part for ARM.

Other assembly primer notes are linked here.

October 22, 2014

CFP for Developer, Testing, Release and Continuous Integration Automation Miniconf at linux.conf.au 2015

This is the Call for Papers for the Developer, Testing, Release and Continuous Integration Automation Miniconf at linux.conf.au 2015 in Auckland.

This miniconf is all about improving the way we produce, collaborate, test and release software.

We want to cover tools and techniques to improve the way we work together to produce higher quality software:

– code review tools and techniques (e.g. gerrit)

– continuous integration tools (e.g. jenkins)

– CI techniques (e.g. gated trunk, zuul)

– testing tools and techniques (e.g. subunit, fuzz testing tools)

– release tools and techniques: daily builds, interacting with distributions, ensuring you test the software that you ship.

– applying CI in your workplace/project

We’re looking for talks about technology *and* the human side of this

Speakers at this miniconf can get a miniconf only pass, but to attend the main conference, you’ll need to organize that yourself.

There will be a projector, and there is a possibility the talk will be recorded (depending on whether the conference A/V is up and running) – if recorded, talks will be posted in the same place with the same CC license as the main LCA talks.

The CFP is open until midnight, November 21st 2014.

http://goo.gl/forms/KZI1YDDw8n

[life] Day 266: Prep play date, shopping and a play date

Zoe's sleep seems a bit messed up lately. She yelled out for me at 3:53am, and I resettled her, but she wound up in bed with me at 4:15am anyway. It took me a while to get back to sleep, maybe around 5am, but then we slept in until about 7:30am.

That made for a bit of a mad rush to get out the door to Zoe's primary school for her "Prep Play Date" orientation. We managed to make it out the door by a bit after 8:30am.

15 minutes is what it appears to take to scooter to school, which is okay. With local traffic being what it is, I think this will be a nice way to get to and from school next year, weather permitting.

We signed in, and Zoe got paired up with an existing (extremely tall) Prep student to be her buddy. The other girl was very keen to hold Zoe's hand, which Zoe was a bit dubious about at first, but they got there eventually.

The kids spent about 20 minutes rotating through the three classrooms, with a different buddy in each classroom. They were all given a 9-station name badge when they signed in, and they got a sticker for each station that they visited in each classroom.

It was a really nice morning, and I discovered there's one other girl from Zoe's Kindergarten going to her school, so I made a point of introducing myself to her mother.

I've got a really great vibe about the school, and Zoe enjoyed the morning. I'm looking forward to the next stage of her education.

We scootered home afterwards, and Zoe got the speed wobbles going down the hill and had a spectacular crash, luckily without any injuries thanks to all of her safety gear.

Once we got home, we headed out to the food wholesaler at West End to pick up a few bits and pieces, and then I had to get to Kindergarten to chair the monthly PAG meeting. I dropped Zoe at Megan's place for a play date while I was at the Kindergarten.

After the meeting, I picked up Zoe and we headed over to Westfield Carindale to buy a birthday present for Zoe's Kindergarten friend, Ivy, who is having a birthday party on Saturday.

We got home from Carindale with just enough time to spare before Sarah arrived to pick Zoe up.

I then headed over to Anshu's place for a Diwali dinner.

Speaker Feature: Audrey Lobo-Pulo, Jack Moffitt

Audrey Lobo-Pulo

Evaluating government policies using open source models

10:40am Wednesday 14th January 2015

Dr. Audrey Lobo-Pulo is a passionate advocate of open government and the use of open source software in government modelling. Having started out as a physicist developing theoretical models in the field of high speed data transmission, she moved into the economic policy modelling sphere and worked at the Australian Treasury from 2005 till 2011.

Currently working at the Australian Taxation Office in Sydney, Audrey enjoys discussions on modelling economic policy.

For more information on Audrey and her presentation, see here. You can follow her as @AudreyMatty and don’t forget to mention #LCA2015.



Jack Moffitt

Servo: Building a Parallel Browser

10:40am Friday 16th January 2015

Jack's current project is called Chesspark and is an online community for chess players built on top of technologies like XMPP (aka Jabber), AJAX, and Python.

He previously created the Icecast Streaming Media Server, spent a lot of time developing and managing the Ogg Vorbis project, and helped create and run the Xiph.org Foundation. All these efforts exist to create a common, royalty-free, and open standard for multimedia on the Internet.

Jack is also passionate about Free Software and Open Source, technology, music, and photography.

For more information on Jack and his presentation, see here. You can follow him as @metajack and don’t forget to mention #LCA2015.

October 21, 2014

Speaker Feature: Denise Paolucci, Gernot Heiser

Denise Paolucci

When Your Codebase Is Nearly Old Enough To Vote

11:35 am Friday 16th January 2015

Denise is one of the founders of Dreamwidth, a journalling site and open source project forked from Livejournal, and one of only two majority-female open source projects.

Denise has appeared at multiple open source conferences to speak about Dreamwidth, including OSCON 2010 and linux.conf.au 2010.

For more information on Denise and her presentation, see here.



Gernot Heiser

seL4 Is Free - What Does This Mean For You?

4:35pm Thursday 15th January 2015

Gernot is a Scientia Professor and the John Lions Chair for operating systems at the University of New South Wales (UNSW).

He is also leader of the Software Systems Research Group (SSRG) at NICTA. In 2006 he co-founded Open Kernel Labs (OK Labs, acquired in 2012 by General Dynamics) to commercialise his L4 microkernel technology.

For more information on Gernot and his presentation, see here. You can follow him as @GernotHeiser and don’t forget to mention #LCA2015.

OpenStack infrastructure swift logs and performance

Turns out I’m not very good at blogging very often. However I thought I would put what I’ve been working on for the last few days here out of interest.

For a while the OpenStack Infrastructure team have wanted to move away from storing logs on disk to something more cloudy – namely, swift. I’ve been working on this on and off for a while and we’re nearly there.

For the last few weeks the openstack-infra/project-config repository has been uploading its CI test logs to swift as well as storing them on disk. This has given us the opportunity to compare the last few weeks of data and see what kind of effects we can expect as we move assets into an object storage.

  • I should add a disclaimer/warning, before you read, that my methods here will likely make statisticians cringe horribly. For the moment though I’m just getting an indication for how things compare.

The set up

Fetching files from an object storage is nothing particularly new or special (CDNs have been doing it for ages). However, for our usage we want to serve logs with os-loganalyze, giving the opportunity to hyperlink to timestamp anchors or filter by log severity.

First, though, we need to get the logs into swift somehow. This is done by having the job upload its own logs. Rather than using (or writing) a Jenkins publisher we use a bash script to grab the job's own console log (pulled from the Jenkins web UI) and then upload it to swift using credentials supplied to the job as environment variables (see my zuul-swift contributions).

This does, however, mean part of the logs are missing. For example the fetching and upload processes write to Jenkins’ console log but because it has already been fetched these entries are missing. Therefore this wants to be the very last thing you do in a job. I did see somebody do something similar where they keep the download process running in a fork so that they can fetch the full log but we’ll look at that another time.
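In rough terms the upload step amounts to the following sketch (the environment variable names here are made up; the real plumbing is in the zuul-swift changes mentioned above):

    import os
    import urllib.request

    import swiftclient  # python-swiftclient

    # Grab the job's own console log from the Jenkins web UI...
    log = urllib.request.urlopen(os.environ["JOB_CONSOLE_URL"]).read()

    # ...and upload it to swift with the credentials supplied to the job.
    conn = swiftclient.Connection(
        authurl=os.environ["SWIFT_AUTH_URL"],
        user=os.environ["SWIFT_USER"],
        key=os.environ["SWIFT_KEY"],
    )
    conn.put_object("logs", os.environ["JOB_LOG_PATH"], contents=log)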

When a request comes into logs.openstack.org, it is handled like so:

  1. apache vhost matches the server
  2. if the request ends in .txt.gz, console.html or console.html.gz rewrite the url to prepend /htmlify/
  3. if the requested filename is a file or folder on disk, serve it up with apache as per normal
  4. otherwise rewrite the requested file to prepend /htmlify/ anyway

os-loganalyze is set up as a WSGIScriptAlias at /htmlify/. This means all files that aren't on disk are sent to os-loganalyze (or if the file is on disk but matches a file we want to mark up, it is also sent to os-loganalyze). os-loganalyze then does the following:

  1. Checks the requested file path is legitimate (or throws a 400 error)
  2. Checks if the file is on disk
  3. Checks if the file is stored in swift
  4. If the file is found, markup (such as anchors) is optionally added and the request is served
    1. When serving from swift the file is fetched via the swiftclient by os-loganalyze in chunks and streamed to the user on the fly. Obviously fetching from swift will have larger network consequences.
  5. If no file is found, 404 is returned

If the file exists both on disk and in swift then step #2 can be skipped by passing ?source=swift as a parameter (thus only attempting to serve from swift). In our case the files exist both on disk and in swift, since we want to compare the performance, so this feature is necessary.
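Boiled right down, the lookup order amounts to something like this sketch (stub helpers standing in for the real os-loganalyze code):

    import os

    SWIFT_STORE = {}  # stand-in for a swift container: path -> bytes

    def markup(text):
        # placeholder for the real on-the-fly HTML-ification (anchors, filters)
        return text

    def serve_log(path, source=None):
        if ".." in path:                                # 1. validate the path
            return 400, ""
        if source != "swift" and os.path.isfile(path):  # 2. check disk first
            with open(path) as f:
                return 200, markup(f.read())
        if path in SWIFT_STORE:                         # 3. then check swift
            return 200, markup(SWIFT_STORE[path].decode())
        return 404, ""                                  # 5. otherwise not found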

So now that we have the logs uploaded into swift and stored on disk we can get into some more interesting comparisons.

Testing performance process

My first attempt at this was simply to fetch the files from disk and then from swift and compare the results. A crude little python script did this for me: http://paste.openstack.org/show/122630/

The script fetches a copy of the log from disk and then from swift (both through os-loganalyze and therefore marked-up) and times the results; the core of the timing loop is sketched after the list below. It does this in two scenarios:

  1. Repeatedly fetching the same file over again (to get a good average)
  2. Fetching a list of recent logs from gerrit (using the gerrit api) and timing those
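That timing loop, boiled down, looks something like this sketch (the URL shown is a hypothetical example):

    import time
    import urllib.request

    def timed_fetch(url):
        # Return (elapsed_seconds, bytes_received) for one request.
        start = time.time()
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
        return time.time() - start, len(body)

    # ?source=swift makes os-loganalyze skip the disk check entirely
    base = "http://logs.openstack.org/some/job/console.html"
    for label, suffix in (("disk", ""), ("swift", "?source=swift")):
        elapsed, size = timed_fetch(base + suffix)
        print("%-5s %8.1f ms  %d bytes" % (label, elapsed * 1000.0, size))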

I then ran this in two environments.

  1. On my local network the other side of the world to the logserver
  2. On 5 parallel servers in the same DC as the logserver

Running on my home computer likely introduced a lot of errors due to my limited bandwidth, noisy network and large network latency. To help eliminate these errors I also tested on 5 performance servers in the Rackspace cloud next to the log server itself. In this case I used ansible to orchestrate the test nodes, running the benchmarks in parallel. I did this since in real-world use there will often be many parallel requests at once affecting performance.

The following metrics are measured for both disk and swift:

  1. request sent – time taken to send the http request from my test computer
  2. response – time taken for a response from the server to arrive at the test computer
  3. transfer – time taken to transfer the file
  4. size – filesize of the requested file

The total time can be found by adding the first 3 metrics together.

 

Results

Home computer, sequential requests of one file

 

The complementary colours are the same metric and the darker line represents swift’s performance (over the lighter disk performance line). The vertical lines over the plots are the error bars while the fetched filesize is the column graph down the bottom. Note that the transfer and file size metrics use the right axis for scale while the rest use the left.

As you would expect, the requests for both disk and swift files are more or less comparable. We see a more noticeable difference in the responses though, with swift being slower. This is because disk is checked first, and if the file isn't found on disk then a connection is made to swift to check there. Clearly this is going to be slower.

The transfer times are erratic and varied. We can't draw much from these, so let's keep analyzing deeper.

The total time from request to transfer can be seen by adding the times together. I didn’t do this as when requesting files of different sizes (in the next scenario) there is nothing worth comparing (as the file sizes are different). Arguably we could compare them anyway as the log sizes for identical jobs are similar but I didn’t think it was interesting.

The file sizes are there for interest sake but as expected they never change in this case.

You might notice that the end of the graph is much noisier. That is because I’ve applied some rudimentary data filtering.

                     Standard Deviation          Mean
                     disk         swift          disk         swift
request sent (ms)    54.89516183  43.71917948    283.9594368  282.5074598
response (ms)        56.74750291  194.7547117    373.7328851  531.8043908
transfer (ms)        849.8545127  838.9172066    5091.536092  5122.686897
size (KB)            7.121600095  7.311125275    1219.804598  1220.735632

 

I know it’s argued as poor practice to remove outliers using twice the standard deviation, but I did it anyway to see how it would look. I only did one pass at this even though I calculated new standard deviations.

 

                     Standard Deviation          Mean
                     disk         swift          disk         swift
request sent (ms)    13.88664039  14.84054789    274.9291111  276.2813889
response (ms)        44.0860569   115.5299781    364.6289583  503.9393472
transfer (ms)        541.3912899  515.4364601    5008.439028  5013.627083
size (KB)            7.038111654  6.98399691     1220.013889  1220.888889

 

I then moved the outliers to the end of the results list instead of removing them completely and used the newly calculated standard deviation (ie without the outliers) as the error margin.

Then, to get a better indication of what the average times are, I plotted the histograms of each of these metrics.

Here we can see a similar request time.

 

Here it is quite clear that swift is slower at actually responding.

 

Interestingly both disk and swift sources have a similar total transfer time. This is perhaps an indication of my network limitation in downloading the files.

 

Home computer, sequential requests of recent logs

Next from my home computer I fetched a bunch of files in sequence from recent job runs.

 

 

Again I calculated the standard deviation and average to move the outliers to the end and get smaller error margins.

                     Standard Deviation          Mean
                     disk         swift          disk         swift
request sent (ms)    54.89516183  43.71917948    283.9594368  282.5074598
response (ms)        194.7547117  56.74750291    531.8043908  373.7328851
transfer (ms)        849.8545127  838.9172066    5091.536092  5122.686897
size (KB)            7.121600095  7.311125275    1219.804598  1220.735632

Second pass without outliers:
                     Standard Deviation          Mean
                     disk         swift          disk         swift
request sent (ms)    13.88664039  14.84054789    274.9291111  276.2813889
response (ms)        115.5299781  44.0860569     503.9393472  364.6289583
transfer (ms)        541.3912899  515.4364601    5008.439028  5013.627083
size (KB)            7.038111654  6.98399691     1220.013889  1220.888889

 

What we are probably seeing here with the large number of slower requests is network congestion in my house. Since the script requests disk, swift, disk, swift, and so on, this evens things out, causing a latency in both sources as seen.

 

Swift is very much slower here.

 

Although comparable in transfer times. Again this is likely due to my network limitation.

 

The size histograms don’t really add much here.

 

Rackspace Cloud, parallel requests of same log

Now to reduce latency and other network effects I tested fetching the same log over again in 5 parallel streams. Granted, it may have been interesting to see a machine close to the log server do a bunch of sequential requests for the one file (with little other noise) but I didn't do it at the time unfortunately. Also we need to keep in mind that others may be accessing the log server, and therefore any request, in both my testing and normal use, is going to have competing load.

 

I collected a much larger amount of data here making it harder to visualise through all the noise and error margins etc. (Sadly I couldn’t find a way of linking to a larger google spreadsheet graph). The histograms below give a much better picture of what is going on. However out of interest I created a rolling average graph. This graph won’t mean much in reality but hopefully will show which is faster on average (disk or swift).

 

Now that we're closer to the server, you can see that swift is noticeably slower. This is confirmed by the averages:

 

                     Standard Deviation           Mean
                     disk          swift          disk          swift
request sent (ms)    32.42528982   9.749368282    4.87337544    4.05191168
response (ms)        245.3197219   781.8807534    39.51898688   245.0792916
transfer (ms)        1082.253253   2737.059103    1553.098063   4167.07851
size (KB)            0             0              1226          1232

Second pass without outliers:
                     Standard Deviation           Mean
                     disk          swift          disk          swift
request sent (ms)    1.375875503   0.8390193564   3.487575109   3.418433003
response (ms)        28.38377158   191.4744331    7.550682037   96.65978872
transfer (ms)        878.6703183   2132.654898    1389.405618   3660.501404
size (KB)            0             0              1226          1232

 

Even once outliers are removed we’re still seeing a large latency from swift’s response.

The standard deviation in the requests has now gotten very small. We've clearly made a difference by moving closer to the logserver.

 

Very nice and close.

 

Here we can see that for roughly half the requests the response time was the same for swift as for the disk. It’s the other half of the requests bringing things down.

 

The transfer for swift is consistently slower.

 

Rackspace Cloud, parallel requests of recent logs

Finally I ran just over a thousand requests in 5 parallel streams from computers near the logserver for recent logs.

 

Again the graph is too crowded to see what is happening so I took a rolling average.

 

 

                     Standard Deviation            Mean
                     disk          swift           disk          swift
request sent (ms)    0.7227904332  0.8900549012    3.515711867   3.56191383
response (ms)        434.8600827   909.095546      145.5941102   189.947818
transfer (ms)        1913.9587     2132.992773     2427.776165   2875.289455
size (KB)            6.341238774   7.659678352     1219.940039   1221.384913

Second pass without outliers:
                     Standard Deviation            Mean
                     disk          swift           disk          swift
request sent (ms)    0.4798803247  0.4966553679    3.379718381   3.405770445
response (ms)        109.6540634   171.1102999     70.31323922   86.16522485
transfer (ms)        1348.939342   1440.2851       2016.900047   2426.312363
size (KB)            6.137625464   7.565931993     1220.318912   1221.881335

 

The averages here are much more reasonable than when we continually tried to request the same file. Perhaps we're hitting limitations with swift's serving abilities.

 

I’m not sure why we have sinc function here. A network expert may be able to tell you more. As far as I know this isn’t important to our analysis other than the fact that both disk and swift match.

 

Here we can now see swift keeping a lot closer to disk results than when we only requested the one file in parallel. Swift is still, unsurprisingly, slower overall.

 

Swift still loses out on transfers but again does a much better job of keeping up.

 

Error sources

I haven’t accounted for any of the following swift intricacies (in terms of caches etc) for:

  • Fetching random objects
  • Fetching the same object over and over
  • Fetching in parallel multiple different objects
  • Fetching the same object in parallel

I also haven’t done anything to account for things like file system caching, network profiling, noisy neighbours etc etc.

os-loganalyze tries to stay authenticated with swift; however:

  • This can timeout (causes delays while reconnecting, possibly accounting for some spikes?)
  • This isn’t thread safe (are we hitting those edge cases?)

We could possibly explore getting longer authentication tokens or having os-loganalyze pull from an unauthenticated CDN to add the markup and then serve. I haven’t explored those here though.

os-loganalyze also handles all of the requests not just from my testing but also from anybody looking at OpenStack CI logs. In addition to this it also needs to deflate the gzip stream if required. As such there is potentially a large unknown (to me) load on the log server.

In other words, there are plenty of sources of errors. However I just wanted to get a feel for the general responsiveness compared to fetching from disk. Both sources had noise in their results so it should be expected in the real world when downloading logs that it’ll never be consistent.

Conclusions

As you would expect the request times are pretty much the same for both disk and swift (as mentioned earlier) especially when sitting next to the log server.

The response times vary, but looking at the averages and the histograms these are rarely large. Even in the case where requesting the same file over and over in parallel caused responses to slow down, these were only in the magnitude of 100ms.

The response time is the important one as it indicates how soon a download will start for the user. The total time to stream the contents of the whole log is seemingly less important if the user is able to start reading the file.

One thing that wasn’t tested was streaming of different file sizes. All of the files were roughly the same size (being logs of the same job). For example, what if the asset was a few gigabytes in size, would swift have any significant differences there? In general swift was slower to stream the file but only by a few hundred milliseconds for a megabyte. It’s hard to say (without further testing) if this would be noticeable on large files where there are many other factors contributing to the variance.

Whether or not these latencies are an issue is relative to how the user is using/consuming the logs. For example, if they are just looking at the logs in their web browser on occasion they probably aren’t going to notice a large difference. However if the logs are being fetched and scraped by a bot then it may see a decrease in performance.

Overall I’ll leave deciding on whether or not these latencies are acceptable as an exercise for the reader.

[life] Day 265: Kindergarten and startup stuff

Zoe yelled out for me at 5:15am for some reason, but went back to sleep after I resettled her, and we had a slow start to the day a bit after 7am. I've got a mild version of whatever cold she's currently got, so I'm not feeling quite as chipper as usual.

We biked to Kindergarten, which was a bit of a slog up Hawthorne Road, given the aforementioned cold, but we got there in the end.

I left the trailer at the Kindergarten and biked home again.

I finally managed to get some more work done on my real estate course, and after a little more obsessing over one unit, got it into the post. I've almost got another unit finished as well. I'll try to get it finished in the evenings or something, because I'm feeling very behind, and I'd like to get it into the mail too. I'm due to get the second half of my course material, and I still have one more unit to do after this one I've almost finished.

I biked back to Kindergarten to pick up Zoe. She wanted to watch Megan's tennis class, but I needed to grab some stuff for dinner, so it took a bit of coaxing to get her to leave. I think she may have been a bit tired from her cold as well.

We biked home, and jumped in the car. I'd heard from Matthew's Dad that FoodWorks in Morningside had a good meat selection, so I wanted to check it out.

They had some good roasting meat, but that was about it. I gave up trying to mince my own pork and bought some pork mince instead.

We had a really nice dinner together, and I tried to get her to bed a little bit early. Every time I try to start the bed time routine early, the spare time manages to disappear anyway.

October 20, 2014

SM1000 Part 7 – Over the air in Germany

Michael Wild DL2FW in Germany recently attended a Hamfest where he demonstrated his SM1000. Michael sent me the following email (hint: I used Google translate on the web sites):

Here is the link to the review of our local hamfest.

At the bottom is a video of a short QSO on 40m using the SM-1000 over about 400km. The other station was Hermann (DF2DR). Hermann documented this QSO very well on his homepage also showing a snapshot of the waterfall during this QSO. Big selective fading as you can see, but we were doing well!

He also explains that, when switching to SSB at the same average power level, the voice was almost not understandable!

SM1000 Beta and FreeDV Update

Rick KA8BMA has been working hard on the Beta CAD work, and fighting a few Eagle DRC battles. Thanks to all his hard work we now have an up to date schematic and BOM for the Betas. He is now working on the Beta PCB layout, and we are refining the BOM with Edwin from Dragino in China. Ike, W3IKIE, has kindly been working with Rick to come up with a suitable enclosure. Thanks guys!

My current estimate is that the Beta SM1000s will be assembled in November. Once I’ve tested a few I’ll put them up on my store and start taking orders.

In the meantime I've thrown myself into modem simulations – playing with a 450 bit/s version of Codec 2, LDPC FEC codes, diversity schemes and coherent QPSK demodulation. I'm pushing towards a new FreeDV mode that works on fading channels at negative SNRs. More on that in later posts. The SM1000 and a new FreeDV mode are part of my goals for 2014. The SM1000 will make FreeDV easy to use; the new mode(s) will make it competitive with SSB on HF radio.

Everything is open source, both hardware and software. No vendor lock in, no software licenses and you are free to experiment and innovate.

IBM Pays GlobalFoundries to take Microprocessor Business

Interesting times for IBM: having already divested themselves of the x86 business by selling it on to Lenovo, they've now announced that they're paying GlobalFoundries $1.5bn to take pretty much that entire side of the business!

IBM (NYSE: IBM) and GLOBALFOUNDRIES today announced that they have signed a Definitive Agreement under which GLOBALFOUNDRIES plans to acquire IBM’s global commercial semiconductor technology business, including intellectual property, world-class technologists and technologies related to IBM Microelectronics, subject to completion of applicable regulatory reviews. GLOBALFOUNDRIES will also become IBM’s exclusive server processor semiconductor technology provider for 22 nanometer (nm), 14nm and 10nm semiconductors for the next 10 years.

It includes IBM’s IP and patents, though IBM will continue to do research for 5 years and GlobalFoundries will get access to that. Now what happens to those researchers (one of whom happens to be a friend of mine) after that isn’t clear.

When I heard the rumours yesterday I was wondering if IBM was aiming to do an ARM and become a fab-less CPU designer but this is much more like exiting the whole processor business altogether. The fact that they seem to be paying Global Foundries to take this off their hands also makes it sound pretty bad.

What this all means for their Power CPU is uncertain, and if I were nVidia or Mellanox in the OpenPOWER alliance I would be hoping I'd known about this before joining up!


[life] Day 264: Pupil Free Day means lots of park play

Today was a Kindergarten (and it seemed most of the schools in Brisbane) Pupil Free Day.

Grace, the head honcho of Thermomix in Australia, was supposed to be in town for a meet and greet, and a picnic in New Farm Park had been organised, but at the last minute she wasn't able to make it due to needing to be in Perth for a meeting. The plan changed and we had a Branch-level picnic meeting at the Colmslie Beach Reserve.

So after Sarah dropped Zoe off, I whipped up some red velvet cheesecake brownie, which seems to be my go-to baked good when required to bring a plate (it's certainly popular), and since I had some leftover sundried tomatoes, I whipped up some sundried tomato dip as well.

The meet-up in the park was great. My group leader's daughters were there, as were plenty of other consultants' kids due to the Pupil Free Day, and Zoe was happy to hang out and have a play. There was lots of yummy food, and we were able to graze and socialise a bit. We called it lunch.

After we got home, we had a bit of a clean up of the balcony, which had quite a lot of detritus from various play dates and craft activities. Once that was done, we had some nice down time in the hammock.

We then biked over to a park to catch up with Zoe's friend Mackensie for a play date. The girls had a really nice time, and I discovered that the missing link in the riverside bike path has been completed, which is rather nice for both cycling and running. (It goes to show how long it's been since I've gone for a run; I really need to fix that.)

After that, we biked home, and I made dinner. We got through dinner pretty quickly, and so Zoe and I made a batch of ginger beer after dinner, since there was a Thermomix recipe for it. It was cloudy though, and Zoe is more used to the Bundaberg ginger beer, which is probably a bit better filtered, so she wasn't so keen on it.

All in all, it was a really lovely way to spend a Pupil Free Day.

LXC setup on Debian jessie

Here's how to set up LXC-based "chroots" on Debian jessie. While this is documented on the Debian wiki, I had to tweak a few things to get the networking to work on my machine.

Start by installing (as root) the necessary packages:

apt-get install lxc libvirt-bin debootstrap

Network setup

I decided to use the default /etc/lxc/default.conf configuration (no change needed here):

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.hwaddr = 00:FF:AA:xx:xx:xx
lxc.network.ipv4 = 0.0.0.0/24

but I had to make sure that the "guests" could connect to the outside world through the "host":

  1. Enable IPv4 forwarding by putting this in /etc/sysctl.conf:

    net.ipv4.ip_forward=1
    
  2. and then applying it using:

    sysctl -p
    
  3. Ensure that the network bridge is automatically started on boot:

    virsh -c lxc:/// net-start default
    virsh -c lxc:/// net-autostart default
    
  4. and that it's not blocked by the host firewall, by putting this in /etc/network/iptables.up.rules:

    -A INPUT -d 224.0.0.251 -s 192.168.122.1 -j ACCEPT
    -A INPUT -d 192.168.122.255 -s 192.168.122.1 -j ACCEPT
    -A INPUT -d 192.168.122.1 -s 192.168.122.0/24 -j ACCEPT
    
  5. and applying the rules using:

    iptables-apply
    

Creating a container

Creating a new container (in /var/lib/lxc/) is simple:

sudo MIRROR=http://http.debian.net/debian lxc-create -n sid64 -t debian -- -r sid -a amd64

You can start or stop it like this:

sudo lxc-start -n sid64 -d
sudo lxc-stop -n sid64

Connecting to a guest using ssh

The ssh server is configured to require pubkey-based authentication for root logins, so you'll need to log into the console:

sudo lxc-stop -n sid64
sudo lxc-start -n sid64

then install a text editor inside the container because the root image doesn't have one by default:

apt-get install vim

then paste your public key in /root/.ssh/authorized_keys.

Then you can exit the console (using Ctrl+a q) and ssh into the container. You can find out what IP address the container received from DHCP by typing this command:

sudo lxc-ls --fancy

Fixing Perl locale errors

If you see a bunch of errors like these when you start your container:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "fr_CA.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

then log into the container as root and use:

dpkg-reconfigure locales

to enable the same locales as the ones you have configured in the host.

October 19, 2014

Links: Docker, Dollar Vans, Bus Bunching and Languages

Speaker Feature: Pavel Emelyanov, Alasdair Allan

Pavel Emelyanov

Libcontainer: One lib to rule them all

10:40 pm Friday 16th January 2015

Pavel Emelyanov is a principal engineer at Parallels working on server virtualization projects. He holds a PhD degree in Applied Mathematics from the Moscow Institute of Physics and Technology. His speaking experience includes the talk on network namespaces at LinuxCon 2009 and the presentation of the Virtuozzo resource management at the joint memory management, storage and filesystem summit in April 2011.

For more information on Pavel and his presentation, see here. You can follow him as @xemulp and don’t forget to mention #LCA2015.



Alasdair Allan

Open Source Protocols and Architectures to Fix the Internet of Things…

3:40pm Friday 16th January 2015

Alasdair is a scientist, author, hacker, tinkerer and co-founder of a startup working on fixing the Internet of Things. He spends much of his time probing current trends in an attempt to determine which technologies are going to define our future.

He has also written articles for Make magazine. The latest, entitled “Pick up your tools and get started!”, was posted on 1 September 2014.

For more information on Alasdair and his presentation, see here. You can follow him as @aallan and don’t forget to mention #LCA2015.

Twitter posts: 2014-10-13 to 2014-10-19

A preponderance of yak shaving….

It is often observed that attempting to undertake one task begets another, with the corollary that two days later you’ve built a bikeshed painted in a multitude of colours.

So, dear readers, this tale of woe begins with the need to update my blog to something useful after 18 months of neglect and more. I had been writing a travel blog from when I took some leave off work to wander the globe. This task called for a new, more generic DNS entry, an upgrade to the WordPress installation, and syndication with my Advogato blog. Easily accomplished, and a sense of progress.

This blog entry is going to be mostly a technical one. I’ll try incorporating more of real life in other entries.

Great, now I can tell the world about my little project toying with Vagrant and Puppet.

It is called “Browser In A Box”. It is up on Github https://github.com/mtearle/browser-in-a-box

It is very simple, a Vagrant file and a set of Puppet manifests/modules to launch Chromium in kiosk mode inside a VM to hit a certain URL. This is part of planned later work to look at creating a Vagrant development environment for Concerto.

At this point, I got distracted … aside from the liberal upgrades of bash on various machines to address Shellshock

Then I accidentally purchased a new Ultrabook. My previous netbook had been getting long in the tooth and it was time to upgrade. I ended up purchasing a Toshiba Satellite NB10, a reasonable processor Intel N2830, 4 Gig of RAM and 500 Gigs of spinning rust. Those are the nice bits.

On the negatives, Crappy Toshiba keyboard layout with the ~ key in a stupid spot and a UEFI bios. It is now blatantly apparent why Matthew Garrett drinks copious quantities of gin.

Special brickbats go to the Ubuntu installer for repartitioning and eating my Windows installation and recovery partition. (The option to install over my test Debian installation got overenthusiastic.) The wireless chipset (Atheros) has a known problem where it confuses the access point.

The next distraction ended up being a fit of procrastination in terms of rearranging my tiny apartment. I've now modelled it in a program called Sweet Home 3D. Easy and straightforward to use. Needs a few more furniture models, but perfectly functional. I shall use it again next time I move.

Finally, we arrive at the original task. I want to start syncing my calendars between various locations (written here for my benefit later).

They are:

  • Work stream – From my Work (Exchange) to my private host (Radicale) to Google Calendar (which will get to my Android phone)
  • Personal stream – From my private host (Radicale) to Google Calendar (and back again)
  • Party stream – From Facebook’s ical export to my private host and Google Calendar

In addition, various syncing of contacts but not my primary focus at the moment.

It appears that syncevolution will do most of what I want here. The challenge revolves around how to get it working. Ultimately, I want this to live headless on a virtual machine that isn't running a desktop.

In a fit of enthusiasm, I decided to attempt to build it from source as opposed to using the packages provided by the upstream (to avoid dragging in unnecessary dependencies).

I need to build from HEAD due to recent code added to syncevolution to support Google's CalDAV API moving behind OAuth v2.

This was not an overly successful exercise, I ended up getting something built but it didn’t ultimately work.

Problems encountered were:

  • libwbxml2 – The upstream at opensync.org is down. There appear to be forks, so it's a game of guessing the current head/release version.
  • activesyncd – The build system is currently broken in parts. There appears to be bit rot around the evolution bindings, as the evolution API has changed.

I gave up at that point. I’ve since spun up a different virtual machine with Debian Jessie and an install of Gnome. The packages from the syncevolution upstream installed cleanly, but have yet to work out the incarnations to make it work. However, that my friends is a story for a later blog entry…

Multimedia and Music Miniconf - Call for Papers

The Multimedia and Music Miniconf at LCA2015 will be held in Auckland, New Zealand, on Monday 12 January 2015. We are pleased to formally open the miniconf's Call for Papers. Submissions are encouraged from anyone with a story to tell which is related to open software for multimedia or music.

Examples of possible presentations include:

  • demonstrations of multimedia content authored using Open Source programs
  • audio recording examples
  • Open Source games
  • video and image editing on Linux
  • new multimedia software being written
  • multimedia web APIs and applications
  • unusual uses of Open Source multimedia software
  • codec news

In addition, we are planning to hold an informal jam session at the end of the Miniconf, giving community members a chance to showcase their compositions and multimedia creations. Expressions of interest for this are also invited. If musical instruments are required, it is preferable that participants arrange this themselves, but with sufficient lead time it might be possible to arrange a loan from locals in Auckland.

The miniconf website at annodex.org/events/lca2015 has further details about the miniconf.

To submit a proposal or for further information, please email Jonathan Woithe (jwoithe@atrad.com.au) or Silvia Pfeiffer (silviapfeiffer1@gmail.com).

Jonathan Woithe and Silvia Pfeiffer

(Multimedia and Music miniconf organisers)

October 18, 2014

KDE 4/Plasma system tray not displaying applications, notifications appearing twice

So I was trying to get RSIBreak working on my machine today, and for some reason it simply wasn’t displaying an icon in the Plasma system tray as it was meant to.

It took some searching around, but eventually I came across a comment on a KDE bug report that had the answer.

I opened up ~/.kde/share/config/plasma-desktop-appletsrc, searched for “systemtray”, and lo and behold, there were two Containments, both using the systemtray plugin. It seems that at some point during the history of my KDE installation, I ended up with two system trays, just with one that wasn’t visible.

After running kquitapp plasma to kill the desktop, I removed the first systemtray entry (I made an educated guess and decided that the first one was probably the one I didn’t want any more), saved the file and restarted Plasma.
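
For anyone hitting the same problem, the whole sequence is roughly this (a sketch; the config path assumes a standard KDE 4 home directory):

kquitapp plasma                                       # stop the Plasma desktop
$EDITOR ~/.kde/share/config/plasma-desktop-appletsrc  # remove the duplicate systemtray Containment
plasma-desktop &                                      # restart Plasma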

Suddenly, not only did RSIBreak appear in my system tray, but so did a couple of other applications which I forgot I had installed. This also fixed the problem I was having with all KDE notifications appearing on screen twice, which was really rather annoying and I’m not sure how I coped with it for so long…



Filed under: Linux, Uncategorized Tagged: KDE, linux, Linux Tips

The r8169 driver and mysterious network problems

A few months ago, a friend of mine was having a problem. When he hooked up his Toshiba laptop to the Ethernet port in his bedroom, it would work under Windows, but not under Linux. When he hooked it up to the port in the room next door, it would work under both.

I headed over with my Samsung Ultrabook, and sure enough – it worked fine under Windows, but not Linux, while the room next door worked under both.

As it turns out, both our laptops used Realtek RTL8168-series Ethernet controllers, which the mainline kernel's r8169 driver normally handles just fine. However, Realtek also releases an r8168 driver (available in Debian as r8168-dkms). Upon installing that, everything worked fine.
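
For the record, the fix on Debian looks roughly like this (a sketch; the package lives in the non-free archive, and blacklisting the in-kernel driver is commonly needed so it doesn't claim the device first):

sudo apt-get install r8168-dkms
echo "blacklist r8169" | sudo tee /etc/modprobe.d/blacklist-r8169.conf
sudo update-initramfs -u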

(At some point I should probably go back and figure out why it didn’t work under r8169 so I can file a bug…)



Filed under: Hardware, Linux Tagged: Computing, Drivers, linux, Linux Tips

PRINCE2 Checklist and Flowchart

Recently a simple statement of PRINCE2 governance structures was provided. From this it is possible to derive a checklist for project managers to tick off, just to make sure that everything is done. Please note that this checklist is tailored and combines some functions. For example, there is no Business Review Plan as it is argued that any sensible project should incorporate these into the Business Case and the Project Plan.

A simple graphic is provided to assist with this process

read more

File Creation Time in Linux

Linux offers most of the expected file attributes from the command line, including the owner of a file, the group, the size, the date modified and name. However often users want find out when a file was created. This requires a little bit of extra investigation.

read more

October 17, 2014

Haskell : A neat trick for GHCi

Just found a really nice little hack that makes working in the GHC interactive REPL a little easier and more convenient. First of all, I added the following line to my ~/.ghci file.

  :set -DGHC_INTERACTIVE

All that line does is define a GHC_INTERACTIVE pre-processor symbol.

Then in a file that I want to load into the REPL, I need to add this to the top of the file:

  {-# LANGUAGE CPP #-}

and then in the file I can do things like:

  #ifdef GHC_INTERACTIVE
  import Data.Aeson.Encode.Pretty
  import qualified Data.ByteString.Lazy.Char8 as LBS  -- needed for LBS.putStrLn below

  prettyPrint :: Value -> IO ()
  prettyPrint = LBS.putStrLn . encodePretty
  #endif

In this particular case, I'm working with some relatively large chunks of JSON and it's useful to be able to pretty print them when I'm in the REPL, but I have no need for that function when I compile that module into my project.

That time that I registered an electric vehicle

So, tell us a story, Uncle Paul.

Sure. One time when I was in Rovers, ...

No, tell us the story of how you got your electric motorbike registered!

Oh, okay then.

It was the 20th of February - a Friday. I'd taken the day off to get the bike registered. I'd tried to do this a couple of weeks before then, but I found out that, despite being told a month beforehand that the workload on new registrations was only a couple of days long, when I came to book it I found out that the earliest they could do was the 20th, two weeks away. So the 20th it was.

That morning I had to get the bike inspected by the engineer, get his sign-off, and take it down to the motor registry to get it inspected at 8:30AM. I also had to meet the plumber at our house, which meant I left a bit late, and by the time I was leaving the engineer it was already 8:15AM and I was in traffic. Say what you like about Canberra being a small town, but people like driving in and the traffic was a crawl. I rang the motor registry and begged for them to understand that I'd be there as soon as possible and that I might be a couple of minutes late. I squeaked into the entrance just as they were giving up hope, and they let me in because of the novelty of the bike and because I wasn't wasting their time.

The roadworthy inspection went fairly harmlessly - I didn't have a certificate from a weighbridge saying how heavy it was, but I knew it was only about eight kilos over the original bike's weight, so probably about 240 kilos? "OK, no worries," they said, scribbling that down on the form. The headlights weren't too high, the indicators worked, and there was no problem with my exhaust being too loud.

(Aside: at the inspection station they have a wall full of pictures of particularly egregious attempts to get dodgy car builds past an inspection. Exhaust stuffed full of easily-removable steel wool? Exhausts with big burnt patches where they've been oxy'd open and welded shut again? Panels attached with zip ties? Bolts missing? Plastic housings melted over ill-fitted turbos? These people have seen it all. Don't try to fool them.)

Then we came up to the really weird part of my dream. You know, the part where I know how to tap dance, but I can only do it while wearing golf shoes?

Er, sorry. That was something else. Then we came to the weird part of the process.

Modified vehicles have to get a compliance plate, to show that they comply with the National Code of Practice on vehicle conversions. The old process was that the engineer that inspected the vehicle to make sure it complied had blank compliance plates; when you brought the vehicle in and it passed their inspection, they then filled out all the fields on the plate, attached the plate to the vehicle, and then you transported it down to Main Roads. But that was a bit too open to people stealing compliance plates, so now they have a "better" system. What I had to do was:

  1. Get the bike inspected for road worthiness.
  2. They hand me a blank compliance plate.
  3. I then had to take it to the engineer, who told me the fields to fill in.
  4. He then told me to go to a trophy making place, where they have laser etchers that can write compliance plates beautifully.
  5. I arrive there at 11AM. They say it'll be done by about 2PM.
  6. Go and have lunch with friends. Nothing else to do.
  7. Pick etched compliance plate up.
  8. Take compliance plate back to engineer. Because he's busy, borrow a drill and a rivet gun and attach the plate to the bike myself.
  9. Take it back to Main Roads, who check that the plate is attached to the bike correctly and stamp the road worthiness form. Now I can get the bike registered.
Yeah, it's roundabout. Why not just engrave the plates at Main Roads with the details the Engineer gives to them? But that's the system, so that's what I did.

And so I entered the waiting department. It probably only took about fifteen minutes to come up next in the queue, but it was fifteen minutes I was impatient to see go. We went through the usual hilarious dance with values:

  • Her: What are you registering?
  • Me: An electric motorbike.
  • Her: How many cylinders?
  • Me: Er... it's electric. None.
  • Her: None isn't a value I can put in.
  • Me: (rolls eyes) OK, one cylinder.
  • Her: OK. How many cubic centimetres?
Many months ago I had enquired about custom number plates, and it turns out that motorbikes can indeed have them. Indeed, I could buy "3FAZE" if I wanted. For a mere $2,600 or so. It was very tempting, but when I weighed it up against getting new parts for the bike (which it turned out I would need sooner rather than later, but that's a story for another day) I thought I'd save up for another year.

So I finally picked up my new set of plates, thanked her for her time, and said "Excuse me, but I have to do this:" and then yelled:

"Yes!!!!"

Well, maybe I kept my voice down a little. But I had finally done it - after years of work, several problems, one accident, a few design changes, and lots of frustration and gradual improvement, I had an actual, registered electric motorbike I had built nearly all myself.

I still get that feeling now - I'll be riding along and I'll think, "wow, I'm actually being propelled along by a device I built myself. Look at it, all working, holding together, acting just like a real motorbike!" It feels almost like I've got away with something - a neat hack that turns out to work just as well as all those beautifully engineered mega-budget productions. I'm sure a lot of people don't notice it - it does look a bit bulky, but it's similar enough to a regular motorbike that it probably just gets overlooked as another two-wheeled terror on the roads.

Well, I'll just have to enjoy it myself then :-)

[life] Day 261: Lots of play dates with boys, TumbleTastics, and a fairy gathering

Today was a typical jam packed day. Zoe had a brief wake up at some point overnight because she couldn't find Cowie, right next to her head, but that was it.

First up, the PAG fundraising committee came over for a quick (well, more like 2 hour) meeting at my place to discuss planning for the sausage sizzle tomorrow. Because I don't have Zoe, I've volunteered to do a lot of the running around, so I'm going to have a busy day.

Mel had brought Matthew and Olivia with her, so Zoe and Matthew had a good time playing, and Olivia kept trying to join in.

That meeting ran right up until I realised we had to head off for TumbleTastics, so Zoe got ready in record time and we scootered over and made it there just as her class was starting. I was sure we were going to be late, so I was happy we made it in time.

Lachlan and his Mum, Laura, and little sister came over for lunch again afterwards, and stayed for a little while.

After they left, we started getting ready for the Fairy Nook's attempt to break the Guinness Book of Records record for the most fairies in one place. We needed to get a wand, so once Zoe was appropriately attired, we walked around the corner to Crackerjack Toys and picked up a wand.

After that, I popped up to Mel's place to collect a whole bunch of eskies that the local councillor had lent us for the sausage sizzle. Mel had also picked up a tutu for Zoe from the local two dollar store in her travels.

We got home, and then walked to the Hawthorne AFL oval where the record attempt was. Initially there were like two other fairies there, but by 4:30pm, there was a pretty good turnout. I don't know what the numbers were, but I'm pretty sure they were well under the 872 they needed. There was a jumping castle and a few of Zoe's friends from Kindergarten, so it was all good.

Sarah arrived to pick up Zoe from there, and I walked home.

October 16, 2014

Speaker Feature: Laura Bell, Michael Cordover

Laura Bell

Why can't we be friends? Integrating Security into an Existing Agile SDLC

3:40pm Friday 16th January 2015

Laura describes herself as an application security wrangler, repeat dreamer, some-time builder, python juggler, Mom and wife.

For more information on Laura and her presentation, see here. You can follow her as @lady_nerd and don’t forget to mention #LCA2015.



Michael Cordover

Using FOI to get source code: the EasyCount experience

3:40pm Wednesday 14th January 2015

Michael is interested in the law, science, politics and everything in between. He has worked in computing, event management and project management. He is a policy wonk and systems-oriented; he loves variety but is also interested in detail.

His life goal as a child was to know everything. He says that's impossible but is still trying to get as close as he can.

For more information on Michael and his presentation, see here. You can follow him as @mjec and don’t forget to mention #LCA2015.

[life] Day 260: Bedwetting, a morning tea play date, and swim class

Zoe woke up at something like 3:30am because she'd wet the bed. She hasn't wet the bed since before she turned 4. In fact, I bought a Connie pad and she promptly never wet the bed again. I was actually thinking about stopping using it just last night, so I obviously jinxed things.

Anyway, she woke up, announced she'd had an accident, and I smugly thought I'd have it all handled, but alas, the pad was too low down, so she'd still managed to wet the mattress, which was annoying. Plan B was to just switch her to the bottom bunk, which still worked out pretty well. I've learned an important lesson about the placement of the Connie pad now.

Unfortunately for me, it seems that if I get woken up after about 4am, I have a low probability of getting back to sleep, and I'd gotten to bed a bit late the night before, so I only wound up with about 5 hours and felt like crap all day.

Vaeda and her mum, Francesca came over for a morning tea play date. I'd been wanting an excuse to try out a new scone recipe that I'd discovered, so I cranked out some scones for morning tea.

Vaeda and Francesca couldn't stay for too long, but it was a nice morning nonetheless. Then we popped out to Woolworths to pick up a $30 gift card that the store had donated towards the weekend sausage sizzle. Not quite 70 kg of free sausages, but better than nothing.

After we got back, we had some lunch, and I tried to convince Zoe to have a nap with me, without success, but we did have a couple of hours of quietish time, and I got to squeeze in some reading.

We biked over to swim class and then biked home, and I made dinner. Zoe was pretty tired, so I got her to bed nice and easily. It'll be an early night for me too.

October 15, 2014

Speaker Feature: John Dickinson, Himangi Saraogi

John Dickinson

Herding Cats: Getting an open source community to work on the same thing.

2:15pm Thursday 15th January 2015

John is a familiar sight around the world; he has spoken at many conferences, summits, and meetups, including the OpenStack Summit, OSCON, and LinuxConf Australia.

He is Director of Technology at SwiftStack. SwiftStack is a technology innovator of private cloud storage for today’s applications, powered by OpenStack Object Storage.

For more information on John and his presentation, see here. You can follow him as @notmyname and don’t forget to mention #lca2015.



Himangi Saraogi

Coccinelle: A program matching and transformation tool

1:20pm Wednesday 14th January 2015

Himangi finds contributing to open source a great learning platform; she has been contributing to the Linux kernel herself and has had many patches submitted and accepted.

She has experience with tools like checkpatch, sparse and coccinelle.

For more information on Himangi and her presentation, see here. You can follow her as @himangi99 and don’t forget to mention #lca2015.

Sliding around... spinning around.

The wiring and electronics for the new omniwheel robot are coming together nicely. Having wired this up using 4 individual stepper controllers, one sees the value in commissioning a custom base board for the stepper drivers to plug into. I still have to connect an IMU to the beast, so precision strafing will (hopefully) be obtainable. The sparkfun mecanum video has the more traditional two wheels each side design, but does wobble a bit when strafing.

Apart from the current requirements, the new robot is also really heavy, probably heavier than Terry. I'm still working out what battery to use to meet the high current needs of four reasonable steppers on the move.



[life] Day 259: Kindergarten, more demos and play dates

I was pretty exhausted after yesterday, so getting out of bed this morning took some serious effort. I started the day with a chiropractic adjustment and then got stuck into doing the obligatory "pre-cleaner clean" and preparing for my third Thermomix demonstration.

The cleaners arrived and I headed off around the corner. My host thought the demo was starting at 10:30am, so I again had a bit of extra time up my sleeve.

My Group Leader, Maria, came to observe this demo, and I thought she was just going to be incognito, but to my pleasant surprise, she actually helped out with some of the washing up throughout the demo, which made it easier.

The demo went really well, and I was happy with how it went, and Maria gave me really positive feedback as well, so I was really stoked.

I got home with enough time to collapse on the couch with a book for half an hour before I biked to Kindergarten to pick up Zoe.

As we were heading out, I realised I'd left her helmet at home on her scooter. That's what I get for not putting it back on her bike trailer. So I sent her to Megan's house and biked home to pick up the helmet and headed back again. Two runs up Hawthorne Road in the afternoon heat was a good bit of exercise.

After a brief play at Megan's, we headed home, and I started dinner. For some reason, I was super organised tonight and had dinner on the table nice and early, and everything cleaned up afterwards, so we had plenty of time to go out for a babyccino before bath time and bed time, and I still managed to get Zoe to bed a little early, and I didn't have any cleaning up to do afterwards.

It's been a good day.

My entry in the "Least Used Software EVAH" competition

For some reason, I seem to end up writing software for very esoteric use-cases. Today, though, I think I’ve outdone myself: I sat down and wrote a Ruby library to get and set process resource limits – those things that nobody ever thinks about except when they run out of file descriptors.

I didn’t even have a direct need for it. Recently I was grovelling through the EventMachine codebase, looking at the filehandle limit code, and noticed that the pure-ruby implementation didn’t manipulate filehandle limits. I considered adding it, then realised that there wasn’t a library available to do it. Since I haven’t berked around with FFI for a while, I decided to write rlimit. Now to find the time to write that patch for EventMachine…

Since I doubt there are many people who have a burning need to manipulate rlimits in Ruby, this gem will no doubt sit quiet and undisturbed in the dark, dusty corners of rubygems.org. However, for the three people on earth who find this useful: you’re welcome.
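
Should you be one of those three, getting started is a one-liner (assuming the gem is published under the name above):

gem install rlimit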

October 14, 2014

[life] Day 258: Kindergarten, demonstrations and play dates

I had my second Thermomix demonstration this morning. It was a decent drive away from home, and in my thoroughness to be properly prepared, I somehow managed to misjudge the time and arrived an hour earlier than I needed to. Oops.

It was good to have the additional time up my sleeves though, and I was happy with how the demonstration went, and it left me with a comfortable amount of time to get to Kindergarten to pick Zoe up. It did completely wipe out the day though.

Zoe wanted to watch Megan's tennis class, so I left her with Jason while I popped home to get changed, and then came back in time for the conclusion of tennis class.

Zoe wanted Megan to come over for a play date, so I took Megan back to our place, and the girls had a great afternoon on the balcony doing some self-directed craft. I used the time to play catch up and make a bunch of phone calls.

It wasn't until after Sarah picked up Zoe that I realised I'd barely interacted with her all afternoon though, which was a bit of a shame. I'll be happy once this sausage sizzle on Saturday is done, and the pace of life should slow down a bit more again.

It was a bit of a struggle to force myself to go to yoga class tonight, but I'm glad I did, because it was a really great class.

[life] Day 257: Kindergarten, meetings, and scrounging for sausages

Zoe's Kindergarten has scored the fundraising sausage sizzle rights to the local Councillor's next Movies in the Park event. Since I've been the chairperson of the PAG, which doesn't seem to actually involve much other than being cheerleader in chief and chairing monthly meetings, I thought I'd lend the fundraising committee a hand with the organising of this event. The fundraising committee have worked their butts off this year.

After I dropped Zoe at Kindergarten, I went to the home of one of the committee members and met with the committee to discuss logistics for the upcoming sausage sizzle. I'd previously volunteered to try and get a donation of sausages from a local butcher, but hadn't had a chance to do that yet.

After taking a bus into the city and back for a lunch meeting, and picking up Zoe from Kindergarten afterwards, we set out on a tour of all the local supermarkets and butchers, with an official letter in hand from the Kindergarten.

We were unlucky on all fronts, but Zoe did score a free cheerio at one butcher, so she was pretty happy. Driving all over the place ate up most of the afternoon.

Anshu dropped in after work not long after we got home, and then Sarah arrived not long after that to pick up Zoe.

Linux and Windows 8 Dual Boot : A Brief How-To

As regular readers would know, I make some effort to avoid using closed-source and proprietary software. This includes that popular operating system common on laptops and servers, MS-Windows. However there are a small number of reasons why this O.S. is required, including life-saving medical equipment hardware which, for some unfathomable reason, has been written to only interface with proprietary operating systems. Open source developers?

read more

MariaDB Foundation board

There seems to be a bit of an exodus from the MariaDB Foundation board recently… I’m not sure exactly what to make of it all, but the current members according to https://mariadb.org/en/foundation/ are:

  • Rasmus Johansson (chair)
  • Michael “Monty” Widenius
  • Jeremy Zawodny
  • Sergei Golubchik

With Jeremy Zawodny being the only non-MariaDB Corp member.

Recently, Jeremy Cole asked some people about their membership:

I’m a little worried for the project, the idea of a foundation around it and for people I count as friends who work on MariaDB.

MySQL 5.7.5 on POWER – thread priority

Good news everyone!

MySQL 5.7.5 is out with a bunch more patches for running well on POWER in the tree. I haven’t yet gone and tried it all out, but since I’m me, I look at bugs database and git/bzr history first.

On Intel CPUs, when you’re spinning on a spin lock, you’re meant to execute the PAUSE CPU instruction. This tells the CPU that other execution threads in the same core should be given priority as you are currently not doing anything productive. Without this, you’re likely going to hurt on hyperthreaded CPUs.

In MySQL, there are custom spinlocks in order to do interesting adaptive mutex things to attempt to squeeze the most performance possible out of modern systems.

One of the (not 100% ready, but close) bugs with patches I submitted against MySQL 5.7 was for using the equivalent of the PAUSE instruction for POWER CPUs. On POWER, we're a bit different: you can actually set priorities of threads (which may matter more, as POWER8 CPUs can be in SMT8 mode – where there are *eight* executing threads per core).

So, the good news is that in MySQL 5.7.5, the magic instructions for setting thread priority are in! This should mean great things for performance on POWER systems with any of the SMT modes enabled.
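
As an aside, if you want to check or change the SMT mode on a POWER box before benchmarking, powerpc-utils ships a small tool for it (a sketch; package naming varies by distro):

ppc64_cpu --smt          # show the current SMT mode
sudo ppc64_cpu --smt=8   # switch into SMT8 mode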

The next interesting part of this is how it interacts with other KVM guests on a system. At least on POWER (and on x86 as well, although I won’t go into details here) there’s a hypervisor call that a guest can make saying “hey, I’m spinning here, perhaps you want to make sure other vcpus execute so that at some point I can continue”. On POWER, this is the H_CONFER hcall, where you can basically do a directed yield to another vcpu (the one that holds the lock you’re trying to get is a good idea).

Generally though, it’s only the guest kernel that does this, not userspace. You can see the H_CONFER call in __spin_yield(arch_spinlock_t*) and __rw_yield(arch_rwlock_t*) in arch/powerpc/lib/locks.c in the kernel.

It would be interesting to see what extra we could get out of a system running multiple guests with MySQL servers if InnoDB/MySQL could properly yield to the right vcpu (well, thread I guess).

October 13, 2014

One week of Nova Kilo specifications

It's been one week of specifications for Nova in Kilo. What are we seeing proposed so far? Here's a summary...

API

Administrative

  • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705.

Containers Service

Hypervisor: FreeBSD

  • Implement support for FreeBSD networking in nova-network: review 127827.

Hypervisor: Hyper-V

  • Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190.

Hypervisor: VMWare

  • Add ephemeral disk support to the VMware driver: review 126527 (spec approved).
  • Add support for the HTML5 console: review 127283.
  • Allow Nova to access a VMWare image store over NFS: review 126866.
  • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (spec approved).
  • Support the OVA image format: review 127054.

Hypervisor: libvirt

  • Add a new linuxbridge VIF type, macvtap: review 117465.
  • Add support for SMBFS as an image storage backend: review 103203.
  • Convert to using built-in libvirt disk copy mechanisms for cold migrations on non-shared storage: review 126979.
  • Support libvirt storage pools: review 126978.
  • Support quiescing filesystems during snapshot: review 126966.

Instance features

  • Allow direct access to LVM volumes if supported by Cinder: review 127318.

Internal

  • Move flavor data out of the system_metadata table in the SQL database: review 126620.

Internationalization

Scheduler

  • Add an IOPS weigher: review 127123 (spec approved).
  • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530.
  • Create an object model to represent a request to boot an instance: review 127610.
  • Decouple services and compute nodes in the SQL database: review 126895.
  • Implement resource objects in the resource tracker: review 127609.
  • Move select_destinations() to using a request object: review 127612.

Scheduling

  • Add instance count on the hypervisor as a weight: review 127871.

Security

  • Provide a reference implementation for console proxies that uses TLS: review 126958.
  • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.

Tags for this post: openstack kilo blueprints spec

Related posts: Compute Kilo specs are open; Specs for Kilo; Blueprints to land in Nova during Juno; On layers; My candidacy for Kilo Compute PTL; Juno nova mid-cycle meetup summary: nova-network to Neutron migration




LUV Beginners October Meeting: Command Line

Oct 18 2014 12:30
Oct 18 2014 16:30
Location: RMIT Building 91, 110 Victoria Street, Carlton South

Wen Lin will be introducing newcomers to Linux to the use of the "command line".

Wen Lin is the long-serving treasurer for Linux Users of Victoria and has provided several presentations in the past on Libre/OpenOffice.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Compute Kilo specs are open

From my email last week on the topic:
I am pleased to announce that the specs process for nova in kilo is
now open. There are some tweaks to the previous process, so please
read this entire email before uploading your spec!

Blueprints approved in Juno
===========================

For specs approved in Juno, there is a fast track approval process for
Kilo. The steps to get your spec re-approved are:

 - Copy your spec from the specs/juno/approved directory to the
specs/kilo/approved directory. Note that if we declared your spec to
be a "partial" implementation in Juno, it might be in the implemented
directory. This was rare however.
 - Update the spec to match the new template
 - Commit, with the "Previously-approved: juno" commit message tag
 - Upload using git review as normal

Reviewers will still do a full review of the spec, we are not offering
a rubber stamp of previously approved specs. However, we are requiring
only one +2 to merge these previously approved specs, so the process
should be a lot faster.

A note for core reviewers here -- please include a short note on why
you're doing a single +2 approval on the spec so future generations
remember why.

Trivial blueprints
==================

We are not requiring specs for trivial blueprints in Kilo. Instead,
create a blueprint in Launchpad
at https://blueprints.launchpad.net/nova/+addspec and target the
specification to Kilo. New, targeted, unapproved specs will be
reviewed in weekly nova meetings. If it is agreed they are indeed
trivial in the meeting, they will be approved.

Other proposals
===============

For other proposals, the process is the same as Juno... Propose a spec
review against the specs/kilo/approved directory and we'll review it
from there.
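
Concretely, the fast-track steps from the email translate to something like this (a sketch; the spec filename is made up):

cp specs/juno/approved/my-feature.rst specs/kilo/approved/
git add specs/kilo/approved/my-feature.rst
# update the spec to match the new template, then:
git commit -m "Re-propose my-feature spec for Kilo" -m "Previously-approved: juno"
git review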

After a week I'm seeing something interesting. In Juno the specs process was new, and we saw a pause in the development cycle while people actually wrote down their designs before sending the code. This time around people know what to expect, and there are left over specs from Juno lying around. We're therefore seeing specs approved much faster than in Juno. This should reduce the effect of the "pipeline flush" that we saw in Juno.



So far we have five approved specs after only a week.



Tags for this post: openstack kilo blueprints spec

Related posts: One week of Nova Kilo specifications; Specs for Kilo; Blueprints to land in Nova during Juno; On layers; My candidacy for Kilo Compute PTL; Juno nova mid-cycle meetup summary: nova-network to Neutron migration




October 12, 2014

Twitter posts: 2014-10-06 to 2014-10-12

[life] Day 254: TumbleTastics and opportunistic play dates

The problem with being too busy to blog on the day is that by the time I get around to it, I've forgotten half the details...

I can't remember what we did in the morning before TumbleTastics. I think Sarah dropped Zoe around a bit late because she wasn't going to work.

We popped down to the post office to collect some mail, and then the supermarket. After that, Zoe was a bit tired and grumpy (apparently she'd woken up early) and didn't really want to go to TumbleTastics, but after some morning tea she perked up and reconsidered.

We scootered to TumbleTastics, and discovered that one of the boys from Kindergarten, Lachlan, is in her class. She had another really good class, and I invited Lachie and his Mum and little sister over for lunch afterwards.

Lachie and Zoe had a great time playing together, before and after lunch. I think that was the extent of what happened on Friday that was memorable.

October 11, 2014

Tyan OpenPower

Good news everyone! Tyan has announced the availability of their first OpenPOWER system! They call this a Customer Reference System, which means it’s an excellent machine to start poking at OpenPower and POWER8 (or deploying applications on).

Because it’s an OpenPower machine, it runs the open source Open Power firmware (all up on github) and will happily run Linux (feel free to port your other operating system kernels). I’ll be writing more on the OpenPower firmware soon as, well, technical details are fun!

Ubuntu 14.10 is listed as recommended as not only have they been building for POWER8 but have spent some time ensuring things work fairly well out-of-the-box (both as a KVM guest and running native on the bare metal). Or, you can always just boot whatever the mainline kernel is at – build for the POWERNV (POWER non-virtualized) platform (be sure to include all the required drivers) and have fun!
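
If you do chase mainline, the build is the usual powerpc cross-compile dance (a sketch, assuming a powerpc64 cross toolchain; ppc64_defconfig should include the POWERNV platform):

make ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- ppc64_defconfig
make ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- -j"$(nproc)"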

October 10, 2014

Single emergency mode with systemd

Just to remind myself: add systemd.unit=emergency.target to the kernel line, or if that fails, try init=/sbin/sh and remove both the quiet and rhgb options.

Afterwards, exit or:

exec /sbin/init

You can also enable debug mode to help investigate problems, using systemd.log_level=debug
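
Put together, an illustrative kernel line (the kernel image and root device here are made up):

linux /vmlinuz-3.16 root=/dev/sda1 ro systemd.unit=emergency.target systemd.log_level=debug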

You can get a console early on in the boot process by enabling debug-shell:

systemctl enable debug-shell.service

BlueHackers @ Open Source Developers’ Conference 2014

This year, OSDC’s first afternoon plenary will be a specific BlueHackers related topic: Stress and Anxiety, presented by Neville Starick – an experienced Brisbane based counsellor.

We’ll also have our traditional BlueHackers “BoF” (birds-of-a-feather) session in the evening, usually featuring some general information, as well as the opportunity for safe lightning talks. Some people talk, some people don’t. That’s all fine.

The Open Source Developers’ Conference 2014 is being held at the beautiful Griffith University Gold Coast Campus, 4-7 November. It features a fine program, and if you use this magic link you get a special ticket price, but the regular registration is only around $300 anyhow, $180 for students! This includes all lunches and the conference dinner. Fabulous value.

[life] Day 253: Bike riding practice and swim class

It's been a while since we've done any bike riding practice, and as the day was a bit cooler and more overcast, I thought we could squeeze in a little bit first thing.

We went to the Minnippi Parklands again, and did a little bit of practice. Zoe actually (really briefly) rode her bike for the first time successfully, which was really exciting. We'll have to have another go next week.

After that, we dropped past the supermarket to get a few things for lunch, and then had lunch.

I've managed to forget what happened after lunch, so it can't have been particularly memorable. I miscalculated when swim class was by 30 minutes, and thought we were in more of a rush than we really were. We biked to the Post Office to mail off my US taxes, but I thought we didn't have time to do all the extra paperwork for the mailing, so I paid for the postage, but took all of the stuff with me to swim class to finish filling out. On the way there, I realised I had an extra 30 minutes up our sleeves, which was nice. It gave Zoe some time to have some fruit before class.

Sarah picked up Zoe from swim class, and I biked home to ditch the bike and meet Anshu at the movie theatre to watch Gone Girl. Hoo boy. Good movie.

[life] Day 252: A poorly executed spontaneous outing to Wet and Wild

The forecast maximum was supposed to be 32°C, so I rather spontaneously decided to go for our first visit to Wet and Wild for the season.

I whipped up some lunch, and got our swim gear together, and after a quick phone call with my REIQ tutor to discuss some questions I had about the current unit, I grabbed our gear and we headed off.

I thought that because school had gone back this week, that Wet and Wild would be nice and quiet, but boy, did I miscalculate. I think the place was the busiest I've ever seen it, even on a weekend visit. It would appear that a lot of people have decided to just take an extra week off school with the Labour Day public holiday on Monday.

Once we arrived, I discovered that I'd left my hat on my bed at home, so I had to buy a hat from the gift shop while I was queuing up to pay for a locker. Then we went to get changed, and I discovered I'd left my swimming stuff on my bed as well, so I had to line up again to purchase a pair of board shorts and a rashie. I was pretty annoyed with myself at all the unnecessary added expense, because I'd left in too much of a hurry.

After we'd got ourselves appropriately attired, we had a fantastic day. Zoe's now tall enough for a few of the "big kid" slides, so we went on them together as well. The first one just involved us sitting in a big inflatable tube going down a wide, curving slide, so that was pretty easy. The second one I wasn't sure about, because we had to go separately. I went first, and waited at the bottom to catch her. As I went down, I was a bit worried it was going to be too much for Zoe, but she came down fine, and once she surfaced, her first word was "Again!", so it was all good.

She napped on the way home, so I swung by the Valley to check my post office box, and then we got home and Sarah picked her up. In spite of the extra expense, it was a really good day out.

October 08, 2014

Free and Open Source Software Business Applications

Free and open source software is based on key principles that are firmly grounded in instrumental advantages and ethical principles. It is heavily used in infrastructure and in new hardware forms; it is less used in user-end applications and traditional forms (laptops and desktops); however there is a good range of mature and feature-rich business applications available. The development of increasingly ubiquitous and more powerful computing is particularly well-suited for FOSS and workplaces that make use of such software will gain financial and positional advantages.

Presentation to Young Professionals CPA, October 8, 2014

read more

Lock In

ISBN: 0765375869

LibraryThing

I know I like Scalzi stuff, but each series is so different that I like them all in different ways. I don't think he's written a murder mystery before, and this book was just as good as Old Man's War, which is a pretty high bar. This book revolves around a murder being investigated by someone who can only interact with the real world via personal androids. It's different from anything else I've seen, and a unique idea is pretty rare these days.



Highly recommended.



Tags for this post: book john_scalzi robot murder mystery

Related posts: Isaac Asimov's Robot Short Stories; Prelude To Foundation ; Isaac Asimov's Foundation Series; Caves of Steel; Robots and Empire ; A Talent for War

Earlybird registrations are now open!

The moment that you have been waiting for has finally arrived! It is with many hugs of sincere gratitude to the team that we can announce that Earlybird Registration for LCA2015 is now open.

Now is the time to choose - are you a Professional, a Hobbyist or a Student attending LCA2015? Will you be one of our fantastic army of volunteers? Go to lca2015.linux.org.au to register and buy your Earlybird ticket or register as a volunteer.

All of the information that you need is there - what you receive for your ticket price, accommodation options, THE Penguin Dinner, as well as Partners Program and creche options for those of you who are bringing your family with you. You can also register as a volunteer right now, and begin to get involved with our wonderful conference.

There have been months of anticipation, and several sleepless nights, but we are now at a truly exciting stage of the conference organising process - registration!

We look forward to seeing you all in January 2015 in Auckland.

The LCA2015 team

kexec-tools 2.0.8 Released

I have released version 2.0.8 of kexec-tools, the user-space portion of kexec, a soft-reboot and crash-dump facility of Linux.

This is a feature release.

The code is available as a tarball here and in git here.

More information is available in the announcement email.

kexec-tools 2.0.7 Released

I have released version 2.0.7 of kexec-tools, the user-space portion of kexec, a soft-reboot and crash-dump facility of Linux.

This is a feature release.

The code is available as a tarball here and in git here.

More information is available in the announcement email.

October 07, 2014

Quick MySQL 5.7.5 thoughts

It was great to see the recent announcement of MySQL 5.7.5 over at the MySQL Server Team blog. I’m looking forward to throwing this release at some of the POWER8 systems we have for a couple of really good reasons: 1) Does it work better than previous MySQL 5.7 releases “out of the box” on POWER? 2) What do the scalability improvements in 5.7.5 mean for peak QPS on POWER (and can I set a new record?).

Looking through the list of changes, I’m (casually not) surprised as to the number of features and the amount of work that echoes what we were working on in Drizzle a few years ago.

A closer look at the source for 5.7.5 may also prove enlightening, I wonder how the MySQL team is coping with a lot of the code rot legacy and the absolutely atrocious internal APIs they inherited…

[life] Day 251: Kindergarten, some more training and other general running around

Yesterday was the first day of Term 4. I can't believe we're at Term 4 already. This year has flown by.

I had a letter from the PAG to all of the Kindergarten parents to put into the notice pockets at Kindergarten first up, so I drove to the Kindergarten not long after opening time, and quickly put them in all the pockets.

Then I headed over to Beaurepaires for a free tyre checkup.

After that, I headed over to my Group Leader's house for some more practical training. I'm feeling pretty confident about my ability to conduct a Thermomix demonstration now, especially after having done my first "real" one on Sunday night.

After that, it was time to pick up Zoe from Kindergarten. It was wonderful to see her after a week away. She wanted to go to Megan's house for a play date, but Megan had tennis, so we went and did the grocery shopping first.

After the grocery shopping, we popped around to Megan's for a bit, and then went home.

I made dinner, and Zoe seemed pretty tired, so I got her to bed a little bit early.

Post Receive Git Hook to Push to Github

I self-host my own git repositories. I also use github as a backup for many of them. I have a use case for a number of them where the data changes can be quite large and pushing to both my own and github's remote services doubles my bandwidth usage in my mostly bandwidth-constrained environments.

To get around those constraints, I wanted to only push to my git repository and have that service then push to github. This is how I went about it, courtesy of Matt Palmer.

Assumptions:

  • You have your own git server
  • You have a github account
  • You're fairly familiar with git

Authentication

It's likely you have not used your remote server to connect to github. To make sure everything happens smoothly, you need to:

  • Add the SSH key for your user account on your server to the authorised keys on your github account
  • SSH just once from your server to github to accept the key from github.
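
A quick way to tick both boxes at once (github will refuse a shell but confirm that your key works):

ssh -T git@github.com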

Add a Remote for github

In the bare git repository on your server, you need to add the remote configuration. On a Debian server using gitweb, this file would be located as /var/cache/git/MYREPO/config. Add the below lines to it:

[remote "github"]
    url = git@github.com:MYACCOUNT/MYREPO.git
    fetch = +refs/heads/*:refs/remotes/github/*
    autopush = true
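
If you prefer not to edit the config by hand, the same result via the git CLI, run from inside the bare repository:

git remote add github git@github.com:MYACCOUNT/MYREPO.git
git config remote.github.autopush true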

Add a post-receive Hook

Now we need to create a post-receive hook to process the push to github. Going with the previous example, edit /var/cache/git/MYREPO/hooks/post-receive

#!/bin/bash

# Push to every remote that has "autopush = true" set in its config
for remote in $(git remote); do
        if [ "$(git config "remote.${remote}.autopush")" = "true" ]; then
                git push "$remote"
        fi
done
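
One easy thing to forget: git will only run the hook if it is executable:

chmod +x /var/cache/git/MYREPO/hooks/post-receive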

Happy automated pushing to github.

October 06, 2014

Things that are not news

The following are not news:

  • Human has genitals (which may/may not have been exposed)
  • Absolutely anything about The Bachelor
  • Anything any celebrity is wearing, anyone they’re dating or if they are currently wearing underwear.
  • any list of “top 10” things.
  • TEENAGERS DO THINGS THAT TEENAGERS DO!

(feel free to extend the list)

Miniconf Call for Presentations - Linux.conf.au 2015 Systems Administration Miniconf

As part of the linux.conf.au conference in Auckland, New Zealand in January 2015 we will be holding a one day mini conference oriented to Linux Systems Administration.

The organisers of the Systems Administration Miniconf would like to invite proposals for presentations to be delivered at the Miniconf. Please forward this CFP to your colleagues, social networks, and other relevant mailing lists.

This is our 9th year at Linux.conf.au. To see presentations from our previous years, see the miniconf website: http://sysadmin.miniconf.org/.

Presentation Topics

Topics for presentations could include (but are not limited to):

Systems Administration Best Practice, Infrastructure as a Service (IAAS), Platform as a Service (PAAS), Docker, Containerisation, Software as a Service (SAAS), Virtualisation, "Cloud" Computing, Service Migration, Dealing with Extranets, Coping with the shortage of IPv4 addresses, Software Defined Networking (SDN), DevOps, Configuration Management, Bootstrapping systems, Lifecycle Management, Monitoring, Dealing with BYOD, Backups in a virtual/distributed world, Security in a world where you cannot even trust your shell, Troubleshooting, Log Management, Buying Decisions, Identity Management, Multi-Factor Authentication, Web and Email management, Keeping legacy systems functioning, and other War stories from the Real World.

We strongly welcome topics on best practice, new developments in systems administration and cutting edge techniques to better manage Linux environments both virtual and physical.

Presentations should be of a technical nature and speakers should assume that members of the audience have at least a couple of years experience in Unix/Linux administration.

Format of Presentations

We are now seeking proposals for presentations at the mini-conference.

We have openings for:

  • 45 minute double-length presentations
  • 20 minute full presentations
  • 10-20 minute "Life in the Real World" presentations
  • 5-10 minute "lightning talks"

Please note, due to the single day available (and whole-LCA keynote before morning tea), we expect the majority of available timeslots to be 20 minutes long or less.

The 10-20 minute "Life in the Real World" presentations are intended for people to talk about their sites or real world projects/problems. Information could include: Servers, talks, tools, people and roles, experiences, software, operating systems, monitoring, alerting, provisioning etc. The intent is to give attendees a picture of what is happening in the "real world" and some understanding of how other sites work, as well as offer you a chance to get suggestions on other tools and practices that might help your site. Discussion of "lessons learned" through really trying to use some of these things is especially welcomed.

Submitting talks

Please note that in order to give a presentation or attend the miniconf you must be registered (and paid up) for the main linux.conf.au conference. Presenting at the Miniconf does not entitle you to discounted or free registration at the main conference nor priority with registration. Unfortunately the Miniconf has no budget for sponsorship of speakers.

Submissions should be made via: http://sysadmin.miniconf.org/proposal15.html

Questions should be sent to: lca2015 at sysadmin.miniconf.org

Dates and Deadlines

To encourage early submissions, priority (both of inclusion and scheduling) will be given to presentations submitted before the 19th of October 2014.

  • 2014-10-19 - Deadline for early submissions
  • 2014-10-26 - Early submissions confirmation
  • 2014-11-16 - Deadline for submissions
  • 2014-11-30 - Confirmation of all presentations
  • 2015-01-13 - Start of Miniconf and 2nd day of linux.conf.au 2015

Contact and Questions

Please see our website for more information on the miniconf, past presentations and presenting at it. If you have any questions please feel free to email the organisers at: lca2015 at sysadmin.miniconf.org

Ewen McNeill

LCA2015 Sysadmin Miniconf Convener

MariaDB 10.0 on POWER

Good news for those wanting to run MariaDB on POWER systems: the latest 10.0 bzr tree (as of a couple of weeks ago) builds and runs well!

I recently pulled the latest MariaDB 10.0 from BZR and built it on a POWER8 system in the lab to run some quick tests. The MariaDB team has done some work on getting MariaDB to run on POWER recently, a bunch of which is based off my work on MySQL on POWER.
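
For anyone wanting to reproduce this, the build was roughly as follows (a sketch, assuming the then-current lp:maria/10.0 branch and a cmake-based build):

bzr branch lp:maria/10.0 mariadb-10.0
mkdir mariadb-build && cd mariadb-build
cmake ../mariadb-10.0 && make -j"$(nproc)"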

There’s obviously still some work in progress going on, but my initial results show performance within around 10% of MySQL, so with a bit of work we will hopefully see MariaDB reach performance parity.

One interesting find was the code to account for thread memory usage uses a single atomic variable: this does not scale and does end up showing up on profiles.

I’ll comment more on the code in a future post, but it looks like we will have MariaDB functional on POWER in an upcoming release.

MariaDB & Trademarks, and advice for your project

I want to emphasize this for those who have not spent time near trademarks: trademarks are trouble and another one of those things where no matter what, the lawyers always win. If you are starting a company or an open source project, you are going to have to spend a whole bunch of time with lawyers on trademarks or you are going to get properly, properly screwed.

MySQL AB always held the trademark for MySQL. There’s this strange thing with trademarks and free software, where while you can easily say “use and modify this code however you want” and retain copyright on it (for, say, selling your own version of it), this does not translate too well to trademarks as there’s a whole “if you don’t defend it, you lose it” thing.

The law is, in effect, telling you that at some point you have to be an arsehole to not lose your trademark. (You can be various degrees of arsehole about it when you have to, and whenever you do, you should assume that people are acting in good faith and just have not spent the last 40,000 years of their life talking to trademark lawyers like you have.) Basically, you get to spend time telling people that they have to rename their product from “MySQL Headbut” to “Headbut for MySQL” and that this is, in fact, a really important difference.

You also, at some point, get to spend a lot of time talking about when the modifications made by a Linux distribution to package your software constitute sufficient changes that it shouldn’t be using your trademark (basically so that you’re never stuck if some arse comes along, forks it, makes it awful and keeps using your name, to the detriment of your project and business).

If you’re wondering why Firefox isn’t called Firefox in Debian, you can read the Mozilla trademark policy and probably some giant thread on debian-legal I won’t point to.

Of course, there’s the MySQL trademark policy, and when I was at Percona, I spent some non-trivial amount of time attempting to ensure we had a trademark policy that would work from a legal angle, a corporate angle, and a get-our-software-into-linux-distros-happily angle.

So, back in 2010, Monty started talking about a draft MariaDB trademark policy (see also, Ubuntu trademark policy, WordPress trademark policy). If you are aiming to create a development community around an open source project, this is something you need to get right. There is a big difference between contributing to a corporate open source product and an open source project – both for individuals and corporations. If you are going to spend some of your spare time contributing to something, the motivation goes down when somebody else is going to directly profit off it (corporate project) versus a community of contributors and companies who will all profit off it (open source project). The most successful hybrid of these two is likely Ubuntu, and I am struggling to think of another (maybe Fedora?).

Linux is an open source project, Red Hat Enterprise Linux is an open source product and in case it wasn’t obvious when OpenSolaris was no longer Open, OpenSolaris was an open source product (and some open source projects have sprung up around the code base, which is great to see!). When a corporation controls the destiny of the name and the entire source code and project infrastructure – it’s a product of that corporation, it’s not a community around a project.

From the start, it seemed that one of the purposes of MariaDB was to create a developer community around a database server that was compatible with MySQL, and eventually, to replace it. MySQL AB was not very good at having an external developer community, it was very much an open source product and not an open source project (one of the downsides to hiring just about anyone who ever submitted a patch). Things struggled further at Sun and (I think) have actually gotten better for MySQL at Oracle – not perfect, I could pick holes in it all day if I wanted, but certainly better.

When we were doing Drizzle, we were really careful about making sure there was a development community. Ultimately, with Drizzle we made a different fatal error, and one that we knew had happened to another open source project and nearly killed it: all the key developers went to work for a single company. Looking back, this is easily my biggest professional regret and one day I’ll talk about it more.

Brian Aker observed (way back in 2010) that MariaDB was, essentially, just Monty Program. In 2013, I did my own analysis on the source tree of MariaDB 5.5.31 and MariaDB 10.0.3-ish to see if indeed there was a development community (tl;dr; there wasn’t, and I had the numbers to prove it). If you look back at the idea of the Open Database Alliance and the MariaDB Foundation, actually, I’m just going to quote Henrik here from his blog post about leaving MariaDB/Monty Program:

When I joined the company over a year ago I was immediately involved in drafting a project plan for the Open Database Alliance and its relation to MariaDB. We wanted to imitate the model of the Linux Foundation and Linux project, where the MariaDB project would be hosted by a non-profit organization where multiple vendors would collaborate and contribute. We wanted MariaDB to be a true community project, like most successful open source projects are – such as all other parts of the LAMP stack.

….

The reality today, confirmed to me during last week, is that:

Those in charge at Monty Program have decided to keep ownership of the MariaDB trademark, logo and mariadb.org domain, since this will make the company more valuable to investors and eventually to potential buyers.

Now, with Monty Program being sold to/merged into (I’m really not sure) SkySQL, it was SkySQL who had those things. So instead of having Monty Program being (at least in theory) one of the companies working on MariaDB and following the Hacker Business Model, you now have a single corporation with all the developers, all of the trademarks, that is, essentially a startup with VC looking to be valuable to potential buyers (whatever their motives).

Again, I’m going to just quote Henrik on the us-vs-them on community here:

Some may already have observed that the 5.2 release was not announced at all on mariadb.org, rather on the Monty Program blog. It is even intact with the “us vs them” attitude also MySQL AB had of its community, where the company is one entity and “outside community contributors” is another. This is repeated in other communication, such as the recent Recently in MariaDB newsletter.

This was, again, back in 2010.

More recently, Jeremy Cole, someone who has pumped a fair bit of personal and professional effort into MySQL and MariaDB over the past (many) years, asked what seemed to be a really simple question on the maria-discuss mailing list. Basically, “What’s going on with the MariaDB trademark? Isn’t this something that should be under the MariaDB foundation?”

The subsequent email thread was as confusing as ever and should be held up as a perfect example about what not to do. Some of us had by now, for years, smelt something fishy going on around the talk of a community project versus the reality. At the time (October 2013), Rasmus Johansson (VP of Engineering at SkySQL and Board Member of MariaDB foundation) said this:

The MariaDB Foundation and SkySQL are currently working on the trademark issue to come up with a solution on what rights to the trademark each entity should have. Expect to hear more about this in a fairly near future.


MariaDB has from its beginning been a very community friendly project and much of the success of MariaDB relies in that fact. SkySQL of course respects that.

(and at the same time, there were pages that were “Copyright MariaDB” which, as it was pointed out, was not an actual entity… so somebody just wasn’t paying attention). Also, just to make things even less clear about where SkySQL the corporation, Monty Program the corporation and the MariaDB Foundation all fit together, Mark Callaghan noticed this text up on mariadb.com:

The MariaDB Foundation also holds the trademark of the MariaDB server and owns mariadb.org. This ensures that the official MariaDB development tree <https://code.launchpad.net/maria> will always be open for the MariaDB developer community.

So…. there’s no actual clarity here. I can imagine attempting to get involved with MariaDB inside a corporation and spending literally weeks talking to a legal department – which thrills significantly less than standing in lines at security in an airport does.

So, if you started off thinking “yay! MariaDB is going to be a developer community around an open source project that’s all about participation”, you may have even gotten code into MariaDB at various times… and then started to notice a bit of a shift… there may have been some intent to make that happen, to correct what some saw as some of the failings of MySQL, but the reality has shown something different.

Most recently, SkySQL has renamed themselves to MariaDB. Good luck to anyone who isn’t directly involved with the legal processes around all this differentiating between MariaDB the project, MariaDB Foundation and MariaDB the company and who owns what. Urgh. This is, in no way, like the Linux Foundation and Linux.

Personally, I prefer to spend my personal time contributing to open source projects rather than products. I have spent the vast majority of my professional life closer to the corporate side of open source, some of which you could better describe as closer to the open source product end of the spectrum. I think it is completely and totally valid to produce an open source product. Making successful companies, products and a butt-ton of money from open source software is an absolutely awesome thing to do and I, personally, have benefited greatly from it.

MariaDB is a corporate open source product. It is no different to Oracle MySQL in that way. Oracle has been up front and honest about it the entire time MySQL has been part of Oracle, everybody knew where they stood (even if you sometimes didn’t like it). The whole MariaDB/Monty Program/SkySQL/MariaDB Foundation/Open Database Alliance/MariaDB Corporation thing has left me with a really bitter taste in my mouth – where the opportunity to create a foundation around a true community project with successful business based on it has been completely squandered and mismanaged.

I’d much rather deal with those who are honest and true about their intentions than those who aren’t.

My guess is that this factored heavily into Henrik’s decision to leave in 2010 and (more recently) Simon Phipps’s decision to leave in August of this year. These are two people who I both highly respect, never have enough time to hang out with and I would completely trust to do the right thing and be honest when running anything in relation to free and open source software.

Maybe WebScaleSQL will succeed here – it’s a community with a purpose and several corporate contributors. A branch rather than a fork may be the best way to do this (Percona is rather successful with their branch too).

October 05, 2014

Twitter posts: 2014-09-29 to 2014-10-05

October 04, 2014

CanberraUAV Outback Challenge 2014 Debrief

We've now had a few days to get back into a normal routine after the excitement of the Outback Challenge last week, so I thought this is a good time to write up a report on how things went for the team I was part of, CanberraUAV. There have been plenty of posts already about the competition results, so I will concentrate on the technical details of our OBC entry, what went right and what went badly wrong.



For comparison, you might like to have a look at the debrief I wrote two years ago for the 2012 OBC. In 2012 there were far fewer teams, and nobody won the grand prize, although CanberraUAV went quite close. This year there were more than 3 times as many teams and 4 teams met the completion criteria for the challenge, with the winner coming down to points. That means the Outback Challenge is well and truly "done", which reflects the huge advances in amateur UAV technology over the last few years.



The drama for CanberraUAV really started several weeks before the challenge. Our primary competition aircraft was the "Bushmaster", a custom built tail dragger with 50cc petrol motor designed by our chief pilot, Jack Pittar. Jack had designed the aircraft to have both plenty of room inside for whatever payload the rest of the team came up with, and to easily fit in the back of a station wagon. To achieve this he had designed it with a novel folding fuselage. Jack (along with several other team members) had put hundreds of hours into building the Bushmaster and had done a great job. It was beautifully laid out inside, and really was a purpose built Outback Challenge aircraft.



Just a couple of days after our D3 deliverable was sent in we had an unfortunate accident. A member of our local flying club was flying his CAP232 aerobatic plane at the same time we were doing a test mission, and he misjudged a loop. The CAP232 loop went right through the rear of the Bushmaster fuselage, slicing it off. The Bushmaster hit the ground at 180 km/h, with predictable results.



Jack and Greg set to work on a huge effort to build a new Bushmaster, but we didn't manage to get it done in time. Luckily the OBC organisers allowed us to switch to our backup aircraft, a VQ Porter 2.7m RTF with some customisations. We had included the Porter in our D2 deliverable just in case it was needed, and already had plenty of autonomous hours up on it as we had been using it as a test aircraft for all our systems. It was the same basic style as the Bushmaster (a petrol powered tail dragger), but was a bit more cramped inside for fitting our equipment and used a slightly smaller engine (a DLE35).



Strategy



Our basic strategy this year was the same as in 2012. We would search at a relatively low altitude (100m AGL this year, compared to 90m AGL in 2012), with an interleaved "mow the lawn" pattern. This year we set up the search with 60% overlap compared with 50% last year, and we had a longer turn around area at the end of each search leg to ensure the aircraft got back on track fully before it entered the search area. With that search pattern and a target airspeed of 28m/s we expected to cover the whole search area in 20 minutes, then we would cover it again in the reverse direction (with a 50m offset) if we didn't find Joe on the first pass.
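To make the pattern concrete, here is a minimal Python sketch of the leg generation (an illustration only, not our actual mission tooling; the camera footprint width and area dimensions are made-up numbers):

# Illustrative sketch only -- not the actual CanberraUAV mission script.
def search_legs(area_width_m, leg_length_m, footprint_m, overlap=0.6, turnaround_m=200):
    """Generate (x, y) waypoints for an interleaved "mow the lawn" search."""
    spacing = footprint_m * (1.0 - overlap)  # lateral distance between legs
    waypoints = []
    x, outbound = 0.0, True
    while x <= area_width_m:
        # extend each leg by a turnaround area so the aircraft is back on
        # track before it re-enters the search area
        lo, hi = -turnaround_m, leg_length_m + turnaround_m
        waypoints += [(x, lo), (x, hi)] if outbound else [(x, hi), (x, lo)]
        outbound = not outbound
        x += spacing
    return waypoints

legs = search_legs(area_width_m=2000, leg_length_m=5000, footprint_m=100)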



As with 2012 we used an on-board computer to autonomously find Joe. This year we used an Odroid-XU, which is a quad core system running at 1.6GHz. That gave us a lot more CPU power than in 2012 (when we used a pandaboard), which allowed us to use more CPU intensive image recognition code. We did the first level histogram scanning at the full camera resolution this year (1280x960), whereas in 2012 we had run the first level scan at 640x480 to save CPU. That is why we were happy to fly a bit higher this year.
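To give a feel for what a first-level histogram scan involves, here is a toy sketch (not the real cuav algorithm, which is considerably more sophisticated; it assumes the frame is an 8-bit RGB numpy array):

# Toy sketch of a first-level histogram scan -- not the actual cuav code.
import numpy as np

def score_blocks(img, block=32, bins=8):
    """Score each block by how far its colour histogram deviates from the
    image-wide histogram; high-scoring blocks are "interesting"."""
    h, w, _ = img.shape
    edges = [(0, 256)] * 3
    global_hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3, range=edges)
    global_hist /= global_hist.sum()
    scores = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            region = img[y:y + block, x:x + block].reshape(-1, 3)
            hist, _ = np.histogramdd(region, bins=(bins,) * 3, range=edges)
            hist /= hist.sum()
            scores[(x, y)] = np.abs(hist - global_hist).sum()  # L1 distance
    return scores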



While the basic approach to image recognition was the same, we had improved the details of the implementation a lot in the last two years, with hundreds of small changes to the image scoring, communications and user interface. Using our 2012 image data as a guide (along with numerous test flights at our local flying field) we had refined the cuav code to provide much better object discrimination, and also to cope better with communication failures. We were pretty confident it could find Joe very reliably.



The takeoff



When the running order was drawn from a hat we were the 2nd last team on the list, so we ended up flying on the Friday. We'd been up early each morning anyway in case the order was changed (which does happen sometimes), and we'd actually been called out to the airfield on the Thursday afternoon, but didn't end up flying then due to high wind.



Our time to takeoff finally came just before 8am Friday morning. As with 2012 we did an auto takeoff, using the new tail-dragger takeoff code added to APM:Plane just a couple of months before.



Unfortunately the takeoff did not go as planned. In fact, we were darn lucky the plane got into the air at all! As soon as Jack flicked the switch on the transmitter to put the plane into AUTO it started veering left on the runway, and nearly scraped a wing as it limped its way into the air. A couple of seconds later it came quite close to the tents where the OBC organisers and our GCS were located.



It did get off the ground though, and missed the tents while it climbed, finally switching to normal navigation to the 2nd waypoint once it got to an altitude of 40m. Jack had his finger on the switch on the transmitter which would have taken manual control and very nearly aborted the takeoff. This would have to go down as one of the worst takeoffs in OBC history.



So why did it go so badly wrong? My initial analysis when I looked at the logs later was that the wind had pushed the plane sideways. After examining the logs more carefully though I discovered that while the wind did play a part, the biggest issue was the compass. For some reason the compass offsets we had loaded were quite different from the ones we had been testing with in all our flights in Canberra. I still don't know why the offsets changed, although the fact that we had compass learning enabled almost certainly played a part. We'd done dozens of auto takeoffs in Canberra with no issues, with the plane tracking beautifully down the center line of the runway. To have it go so badly wrong for the flight that matters was a disappointment.



I've decided that the best way to fix the issue for future flights is to make the auto takeoff code completely independent of the compass. Normally the compass is needed to get an initial yaw for the aircraft while it is not moving (as our hobby-grade GPS sensors can't give yaw when not moving), and that initial yaw is used to set the ground heading for the takeoff. That means that with the current code, any compass error at the time you change into AUTO will directly impact the takeoff.



Because takeoff is quite a short process (usually 20s or less), we can take an alternative approach. The gyros won't drift much in 20s, so what we can do is just keep a constant gyro heading until the aircraft is moving fast enough to guarantee a good GPS ground track. At that point we can add whatever gyro based yaw error has built up to the GPS ground course and use that as our navigation heading for the rest of the takeoff. That should make us completely independent of compass for takeoff, which should solve the problem for everyone, rather than just fixing it for our aircraft. I'm working on a set of patches to implement this, and expect it to be in the next release.
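In rough Python (a sketch of the idea only, not the actual APM:Plane patch; the speed threshold is a made-up value):

# Sketch of compass-free takeoff heading. Angles in degrees, speeds in m/s.
GPS_COURSE_MIN_SPEED = 5.0  # hypothetical groundspeed where GPS course is trusted

class TakeoffHeading:
    def __init__(self, gyro_yaw_at_start):
        self.initial_gyro_yaw = gyro_yaw_at_start
        self.target_heading = gyro_yaw_at_start  # hold a constant gyro heading
        self.locked_to_gps = False

    def update(self, gyro_yaw, gps_speed, gps_course):
        if not self.locked_to_gps and gps_speed >= GPS_COURSE_MIN_SPEED:
            # add the gyro yaw error built up so far onto the GPS ground
            # course, and use that for the rest of the takeoff
            drift = gyro_yaw - self.initial_gyro_yaw
            self.target_heading = (gps_course + drift) % 360
            self.locked_to_gps = True
        return self.target_heading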



Note that once the initial takeoff is complete the compass plays almost no role in the flight of a fixed wing plane if you have the EKF enabled, unless you lose GPS lock. The EKF rejected our compass as being inconsistent, and happily got correct yaw from the fusion of GPS velocity and other sensors for the rest of the flight.



The search



After the takeoff things went much more smoothly. The plane headed off to the search area as planned, and tracked the mission extremely well. It is interesting to compare the navigation accuracy of this years mission compared to the 2012 mission. In 2012 we were using a simple vector field navigation algorithm, whereas we now use the L1 navigation code. This year we also used Paul's EKF for attitude and position estimation, and the TECS controller for speed/height control. The differences are really remarkable. We were quite pleased with how our Mugin flew in 2012, but this year it was massively better. The tracking along each leg of the search was right down the line, despite the 15 knot winds.



Another big difference from 2012 is that we were using the new terrain following capability that we had added this year. In 2012 we used a python script to generate our waypoints and that script automatically added intermediate waypoints to follow the terrain of the search area. This year we just set all waypoints as 100 meters AGL and let the autopilot do its job. That made things a lot simpler and also resulted in better geo-referencing and search area coverage.



On our GCS imaging display we had 4 main windows up. One is a "live" view from the camera. That is set up to only update if there is plenty of spare bandwidth on our radio links, and is really just there to give us something to watch while the imaging code does its job, plus to give us some situational awareness of what the aircraft is tracking over.

The second window is the map, which shows the mission flight path, the geo-fence, two plane icons (AHRS position estimate and GPS position estimate) plus overlays of thumbnails from the image recognition system of any "interesting" objects the Odroid has found.



The 3rd window is the "Mosaic" window. That shows a grid of thumbnails from the image recognition system, and has menus and hot-keys to allow us to control the image recognition process and sort the thumbnails in various ways. We expected Joe to have an image score of over 1000, but we set the threshold for thumbnails to display in the mosaic at 500. Setting a lower threshold means we get shown a lot of not very interesting thumbnails (things like oddly shaped tree stumps and patches of dirt that are a bit out of the ordinary), but at least it means we can see the system is working, and it stops the search being quite so boring.



The final window is a still image window. That is filled with a full resolution image of whatever image we have selected for download from the aircraft, allowing us to get some context around a thumbnail if we want it. Often it is much easier to work out what an object is if you can see the surroundings.



On top of those imaging windows we also have the usual set of flight control windows. We had 3 laptops setup in our GCS tent, along with a 40" LCD TV for one of the laptops (my thinkpad). Stephen ran the flight monitoring on his laptop, and Matt helped with the imaging and the antenna tracker from his laptop. Splitting the task up in this way helped prevent overload of any one person, which made the whole experience more manageable.



On Stephen's laptop he had the main MAVProxy console showing the status of the key parts of the autopilot, plus graphs showing the consistency of the various subsystems. The ardupilot code on Pixhawk has a lot of redundancy at both the sensor level and at the higher level algorithm level, and it is useful to plot graphs showing whether we get any significant discrepancies, giving us a warning of a potential failure. For this flight everything went smoothly (apart from the takeoff) but it is nice to have enough screens to show this sort of information anyway. It also makes the whole setup look a bit more professional to have a few fancy graphs going :-)



About 20 minutes after takeoff the imaging system spotted Joe lying in his favourite position in a peanut field. We had covered nearly all of the first pass across the search area so we were glad to finally spot him. The images were very clear, so we didn't need to spend any time discussing if it really was Joe or not.



We clicked the "Show Position" menu item we'd added to the MAVProxy map, which pops up a window giving the position in various coordinate systems. Greg passed this to the judges who quickly confirmed it was within the required 100 meters of Joe.



The bottle drop



That initial position was only a rough estimate though. To refine the position we used a new trick we'd added to MAVProxy. The "wp movemulti" command allowed us to move and rotate a pre-defined confirmation flight pattern over the top of Joe. That set up the plane to fly a butterfly pattern over Joe at an altitude of 80 meters, passing over him with wings level. That gives an optimal set of images for our image recognition system to work with, and the tight turns allow the wind estimation algorithm in ardupilot to get an accurate idea of wind direction and speed.



In the weeks leading up to the competition we had spent a lot of time refining our bottle drop system. We realised the key to a good drop is timing consistency and accurate wind drift estimation.



To improve the timing consistency we had changed our bottle release mechanism to be in two stages. The bottle was held to the bottom of the Porter by two servos. One servo held the main weight of the bottle inside a plywood harness, while the second servo was attached to the top of the parachute by a wire, using a glider release mechanism.



The idea is that when approaching Joe for the drop the main servo releases first, which leaves the bottle hanging beneath the plane held by the glider release servo with the parachute unfurled. The wind drags the bottle at an angle behind the plane for 2 seconds before the 2nd servo is released. The result is much more consistent timing of the release, as there is no uncertainty in how long the bottle tumbles before the chute unfolds.
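The sequencing itself is trivial to sketch (the servo helpers here are hypothetical stand-ins; the real releases were driven through the autopilot's servo outputs):

# Minimal sketch of the two-stage release -- servo helpers are hypothetical.
import time

def drop_bottle(release_main, release_chute, hang_time=2.0):
    release_main()          # bottle now hangs by the glider release wire,
                            # parachute unfurled behind the plane
    time.sleep(hang_time)   # drag the bottle behind the plane for 2 seconds
    release_chute()         # second stage: bottle falls away with chute open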



The second part of the bottle drop problem is good wind drift estimation. We had decided to use a small parachute as we were not certain the bottle would withstand the impact with the ground without a chute. That meant significant wind drift, which meant we really had to know the wind direction and speed quite accurately, and also needed some data on how fast the bottle would drift in the wind.



In the weeks before the competition we did a lot of bottle drops, but the really key one was a drop just a week before we left for Kingaroy. That was the first drop in completely still conditions, which meant it finally gave us a "zero wind" data point, which was important in calculating the wind drift. Combining that drop with some previous drop results we came up with a very simple formula giving the wind drift distance for a drop from 80 meters as 5 times the wind speed in meters/second, as long as we dropped directly into the wind.



We had added a new feature to APM:Plane to allow an exact "acceptance radius" for an individual waypoint to be set, overriding the global WP_RADIUS parameter. So once we had the wind speed and direction it was just a matter of asking MAVProxy to rotate the drop mission waypoints to match the wind (which was coming from -121 degrees) and to set the acceptance radius for the drop to 35 meters (70 meters for zero wind, minus 7 times 5 for the 7m/s wind the EKF had estimated).
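The arithmetic is simple but worth spelling out, using the numbers above:

# Worked example of the drop acceptance radius calculation (80m drop).
DRIFT_PER_MPS = 5.0      # metres of drift per m/s of wind, from our test drops
ZERO_WIND_RADIUS = 70.0  # acceptance radius in still air, in metres

def drop_acceptance_radius(wind_speed_mps):
    """Acceptance radius when dropping directly into the wind."""
    return ZERO_WIND_RADIUS - DRIFT_PER_MPS * wind_speed_mps

print(drop_acceptance_radius(7.0))  # EKF-estimated 7 m/s wind -> 35.0 metres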



The plane then slowed to 20m/s for the drop, and did a 350m approach to ensure the wings were nice and level at the drop point. As the drop happened we could see the parachute unfurl in the camera view from the aircraft, which was a nice confirmation that the bottle had been successfully dropped.



The mission was setup for the aircraft to go back to the butterfly confirmation pattern after the drop. We had done that to allow us to see where the bottle had actually landed relative to Joe, in case we wanted to do a 2nd drop. We had 3 bottles ready (one on the plane, two back at the airfield), and were ready to fit a new bottle and adjust the drop point if we'd been too far off.



As soon as we did the confirmation pass it became clear we didn't need to drop a 2nd bottle. We measured the distance of the bottle to Joe as under 3 meters using the imaging system (the judges measured it officially as 2.6m), so we asked the plane to come back and land.



The landing



This year we used a laser rangefinder (a SF/02 from LightWare) to help with the landing. Using a rangefinder really helps ensure the flare is at the right altitude and produces much more consistent landings.



The only real drama we had with the landing was that we came in a bit fast, and the plane ballooned more than it should have on the flare. The issue was that we hadn't used a shallow enough approach. Combined with the quite strong (14 knot) cross-wind, it made for an "interesting" landing.



We also should have set the landing point a bit closer to the end of the runway. We had put it quite a way along the runway as we weren't sure if the laser rangefinder would pick up anything strange as it crossed over the road and airport boundary fence, but in hindsight we'd put the touchdown point a bit too close to the geofence. Full size glider operations running at the same time meant only part of the runway was available for OBC teams to use.



The landing was entirely successful, and was probably better than a manual landing would have been by me in the same wind conditions (I'm only a mediocre pilot, and landings are my weakest point), but we certainly can do better. Paul Riseborough is already looking at ways to improve the autoland, hopefully to the point where it earns a round of applause from spectators in future landings.



Radio performance



Another part of the mission that is worth looking at is the radio performance. We had two radio links to the aircraft - one was a RFD900 900MHz radio, and that performed absolutely flawlessly as usual. We had around 40 dB of fade margin at a range of over 6km, which is absolutely huge. Every team that flew in the OBC this year used a RFD900, which is a great credit to Seppo and the team at RFDesign.



The 2nd radio was a Ubiquiti Rocket M5, which is a 5.8GHz ethernet bridge. We used an active antenna tracker this year for the 5.8GHz link, with a 28dBi MIMO antenna on the ground, and a 10dBi MIMO omni antenna in the aircraft (the protrusion from the top of the fuselage is for the antenna). The 5.8GHz link gave us lots of bandwidth for the images, but was not nearly as reliable as the RFD900 link. It dropped out 6 times over the course of the mission, with the longest dropout lasting just over a minute. The dropouts were primarily caused by magnetometer calibration on the antenna tracker - during the mission we had to add some manual trim to the tracker to improve the alignment. That worked, but really we should have used a ground antenna with a bit less gain (maybe around 24dBi) to give us a wider beam width.



Another alternative would have been to use a lower frequency. The 5.8GHz Rocket gives fantastic bandwidth, but we don't really need that much bandwidth for our system. The Robota team used 2.4GHz Ubiquiti radios and much simpler antennas and ended up with a much better link than we had. The difference in path loss between 2.4GHz and 5.8GHz is quite significant.



The reason we didn't use the 2.4GHz gear is that we do most of our testing at a local MAAA flying club, and we know that if someone crashes their expensive model while we have a powerful 2.4GHz radio running then there will always be the thought that our radio may have caused interference with their 2.4GHz RC link.



So we're now looking into the possibility of using a 900MHz ethernet bridge. The Ubiquiti M900 looks like a real possibility. It doesn't offer nearly as much bandwidth as the 5.8GHz or 2.4GHz radios as Australia only has 13MHz of spectrum available in the 900MHz band for ISM use, but that should still be enough for our application. We have heard that the spread spectrum M900 doesn't significantly interfere with the RFD900 in the same band (as the RFD900 is a TDM frequency hopping radio), but we have yet to test that theory.



Alternatively we may use two RFD900s in the same aircraft, with different numbers of hopping channels and different air data rates to minimise interference. One would be dedicated to telemetry and the other to image data. A RFD900 at 128kbps should be plenty for our cuav imaging system as long as the "live camera view" window is set to quite a low resolution and update rate.



Team cooperation



One of the most notable things about this year's competition was just how friendly the discussions between the teams were. The competition has a great spirit of cooperation and it really is a fantastic experience to work closely with so many UAV developers from all over the world.



I don't really have time to go through all the teams that attended, but I do want to mention some of the highlights for me. Top of the list would have to be meeting Ben and Daniel Dyer from team SFWA. They put in an absolutely incredible effort to build their own autopilot from scratch. Their build log at http://au.tono.my/log/index.html is incredible to read, and shows just what can be achieved in a short time with enough talent. It was fantastic that they completed the challenge (the first team to ever do so) and I look forward to seeing how they take their system forward.



I'd also like to offer a tip of the hat to Team Swiss Fang. They used the PX4 native stack on a Pixhawk and it was fantastic to see how far they pushed that autopilot stack in the lead up to the competition. That is one of the things that competitions like the OBC can do for an autopilot - push it to much higher levels.



Team OpenUAS also deserves a mention, and I was especially pleased to meet Christophe who is one of the key people behind the Paparazzi autopilot. Paparazzi is a real pioneer in the field of amateur autopilots. Many of the posts we make on "ardupilot has just achieved X" on diydrones could reasonably be responded to by saying "Paparazzi did that 3 years ago". The OpenUAS team had bad luck in both the 2012 competition and again this year. This time round it was an airspeed sensor failure leading to a crash soon after takeoff, which is really tragic given the effort they have put in and the pedigree of their autopilot stack.



The Robota team also did very well, coming in second behind our team. Particularly impressive was the performance of the Goose autopilot on a quite small foam plane in the wind over Kingaroy. The automatic landing was fantastic. The Robota team used a much simpler approach, just using a 2.4GHz Ubiquiti link to send a digital video stream to 3 video monitors and having 3 people staring at those screens to find Joe. Extremely simple, but it worked. They were let down a bit by the drop accuracy in the wind, but a great effort done with style.



I was absolutely delighted when Team Thunder, who were also running APM:Plane, completed the challenge, coming in 4th place. They built their system partly on the image recognition code we had released, which is exactly what we hoped would happen. We want to see UAV developers building on each others work to make better and better systems, so having Team Thunder complete the mission was great.



Overall ardupilot really shone in the competition. Over half the teams that flew in the competition were running ardupilot. Our community has shown that we can put together systems that compete with the best in the world. We've come a long way in the last few years and I'm sure there is a bright future for more developments in the UAV S&R space that will see ardupilot saving lives on a regular basis.



Thank you



On behalf of CanberraUAV I'd like to offer a huge thank you to the OBC organisers for the massive effort they have put in over so many years to run the competition. Back in 2007 when the competition started it took real insight for Rod Walker and Jon Roberts to see that a competition of this nature could push amateur UAV technology ahead so much, and it took a massive amount of perseverance to keep it going to the point that teams were able to finally complete the competition. The OBC has had a huge impact on amateur UAV technology.



We'd also like to thank our sponsors, 3DRobotics, who have been a huge help for CanberraUAV. We really couldn't have done it without you. Working with 3DR on this sort of technology is a great pleasure.



Next steps



Completing the Outback Challenge isn't the end for CanberraUAV and we are already starting discussions on what we want to do next. I've posted some ideas on our mailing list and we would welcome suggestions from anyone who wants to take part. We've come a long way, but we're not yet at the point where putting together an effective S&R aircraft is easy.

(UPDATE) Evaluating the security of OpenWRT (part 3) adventures in NOEXECSTACK’land

Of course, there are more things I had known but not fully internalised yet. Of course.

Many MIPS architectures, and specifically, most common router architectures, don’t have hardware support for NX.


Yet. It surely won’t be long.


My own feeling in this age of Heartbleed and Shellshock is we should get ahead of the curve if we can – if a distribution supports NX in the toolchain then when a newer SOC arrives there is one less thing that we can forget about.

If I had bothered to keep reading instead of hacking I may have rediscovered this sooner. But then I would know significantly less about embedded toolchains and how to debug and patch them. Anyway, a determined user could also cherry-pick emulated NX protection from PaX.

When they Google this problem they will at least find my work.


How else to learn? :-)

October 03, 2014

Evaluating the security of OpenWRT (part 3) adventures in NOEXECSTACK’land

To recap our experiments to date: out of the box, OpenWRT may give the impression of having sporadic coverage of various Linux hardening measures. This can in fact be a false impression – see the update – but for the uninitiated it can take a bit of digging to check!

One metric of interest not closely examined to date is the NOEXECSTACK attribute on executable binaries and libraries. When coupled with kernel support, this disallows execution of code in the stack memory area of a program, preventing an entire class of vulnerabilities from working. I mentioned NOEXECSTACK in passing previously; from the checksec report we saw that the x86 build has 100% coverage of NOEXECSTACK, whereas the MIPS build was almost completely lacking.

For a quick introduction to NOEXECSTACK, see http://wiki.gentoo.org/wiki/Hardened/GNU_stack_quickstart.
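If you want to check a binary yourself, the relevant bit is the GNU_STACK program header. Here is a rough Python sketch of the check (assuming a readelf for your target architecture is on the PATH; checksec does much the same thing):

# Sketch of a NOEXECSTACK check -- assumes a (cross-)readelf on the PATH.
import subprocess

def has_noexecstack(path, readelf="readelf"):
    """True if the GNU_STACK segment is present and not executable."""
    out = subprocess.check_output([readelf, "-lW", path], text=True)
    for line in out.splitlines():
        if "GNU_STACK" in line:
            return "RWE" not in line  # executable stacks show flags RWE
    return False  # no GNU_STACK header at all: treat as executable stack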

Down the Toolchain Rabbit Hole

As challenging detective exercises go, this one was a bit of a doozy, for me at least. Essentially I had to turn the OpenWRT build system inside out to understand how it worked, and then do the same with uClibc, so that I could work out where to begin. After rather a few false starts, the culprit turned out to be somewhere completely different.

First, OpenWRT by default uses uClibc as the C library, which is the bedrock upon which the rest of the user space is built. The C library is not just a standard package however. OpenWRT, like the majority of typical embedded Linux systems, employs a “toolchain” or “buildroot” architecture. Simply put, this combines the C/C++ compiler, the assembler, the linker, the C library and various other core components in such a way that the remainder of the firmware can be built without any knowledge of how this layer is put together.

This is a Good Thing as it turns out, especially when cross-compiling, i.e. when building the OpenWRT firmware for a CPU or platform (the TARGET) that is different from that where the firmware build happens (the HOST.)  Especially where everything is bootstrapped from source, as OpenWRT is.

The toolchain is combined from the following components:

  • binutils — provides the linker (ld) and the assembler and code for manipulating ELF files (Linux binaries) and object libraries
  • gcc — provides the C/C++ compiler and, often overlooked, libgcc, a library of various “intrinsic” functions, such as optimisations for certain C library functions, amongst others
  • A C library — in this case, uClibc
  • gdb — a debugger

Now all these elements need to be built in concert, and installed to the correct locations, and to complicate matters, the toolchain actually has multiple categories of output:

  • Programs that run on the HOST that produce programs and libraries that run on the TARGET (such as the cross compiler)
  • Programs that run on the TARGET (e.g. ldd, used for scanning dependencies)
  • Programs and libraries that run on the HOST to perform various tasks related to the above
  • Header files that are needed to build other programs that run on the HOST to perform various tasks related to the above
  • Header files that are needed to build programs and libraries that run on the TARGET
  • Even, programs that run on the TARGET to produce programs and libraries that run on the TARGET (a target-hosted C compiler!)

Confused yet?

All this magic is managed by the OpenWRT build system in the following way:

  • The toolchain programs are unpacked and built individually under the build_dir/toolchain directory
  • The results of the toolchain build that are designed to run on the host are installed under staging_dir/toolchain
  • The partial toolchain under staging_dir is used to build the remaining items under build_dir which are finally installed to staging_dir/target/blah-rootfs
  • (this is an approximation; maybe build OpenWRT for yourself to find out all the accurate naming conventions)
  • The kernel headers are an intrinsic part of this because of the C library, so along the way a pass over the target Linux kernel source is required as well.

OpenWRT is flexible enough to allow the C compiler to be changed (e.g. between stock gcc 4.6 and Linaro gcc 4.8), and the binutils version, and even to switch the C library between different project implementations (uClibc vs eglibc vs MUSL).

OpenWRT fetches the sources for all these things, then applies a number of local patches, before building.

We will need to refer to this later.

Confirming the Problem and Fishing for Red Herrings.

The first thing to note is that x86 has no problem, but MIPS does, and I want to run OpenWRT on various embedded devices with MIPS SOCs. Without that I may never have bothered digging deeper!

Of course I dived in initially and took the naive brute force approach. I patched OpenWRT to apply the override flag to the linker: -Wl,-z,noexecstack. This was a bit unthinking; after all, x86 did not need this.

Doing this gave partial success. In fact most programs gained NOEXECSTACK, except for a chunk of the uClibc components, busybox, and tellingly as it turned out, libgcc_s.so. That is, core components used by nearly everything. Of course.

(Spoiler: modern Linux toolchain implementations actually enable NOEXECSTACK by DEFAULT for C code! Which was an important fact I forgot at this point! Silly me.)

At this point, I managed to overlook libgcc_s.so and decided to focus on uClibc. This decision would greatly expand my knowledge of OpenWRT, uClibc and embedded build systems, and do nothing to solve the problem the proper way!

OpenWRT builds uClibc as a host package, which basically means it abuses Makefiles to generate a uClibc configuration file partly derived from the OpenWRT config file settings, and eventually calls the uClibc top level makefile to build uClibc. This can only be done after building binutils and two of the three stages of gcc.

At this point I still did not fully understand how NOEXECSTACK really should be employed, which is probably an artefact of working on this stuff late at night and not reading everything as carefully as I might have. So I did the obvious and incorrect thing and worked out how to patch uClibc further to force -Wl,-z,noexecstack through it. What I had to do to achieve that could almost fill another blog article, so I’ll skip it for brevity. Anyway, this did not solve the problem.

Finally I turned on all the debug output and examined the build:

make V=csw toolchain/uClibc/compile

(Aside: the documentation for OpenWRT mentions using V=s to turn on some verboseness, but to get the actual compiler and linker commands of the toolchain build you need the extra flags. I should probably try and update the OpenWRT wiki but I have invested so much time in this that I might have to leave that as an exercise for the reader)

All the libraries were being linked using the -Wl,-z,noexecstack flag, yet some still failed checksec. Argh!

I should also note that repeating this process over and over gets tedious, taking about 20 minutes to build the toolchain plus a minimal target firmware on my quad core AMD Phenom. Don’t delete the build_dir/host and staging_dir/host directories or it doubles!

So something else was going on.

Upgrades and trampolines, or not.

I sought help from the uClibc developers mailing list, who suggested I first try using up to date software. This was a fair point, as OpenWRT is using a 2 year old release of uClibc and 1 year old release of binutils, etc.

This of course entailed having to learn how to patch OpenWRT to give me that choice.

So another week later, around work and family and life, I found some time to do this, and discovered that the problem persisted.

At this point I revisited the Gentoo hardening guide. After some detective work I discovered that several MIPS assembler files inside uClibc did not actually have the recommended code fragments (the .note.GNU-stack section note). Aha! I thought. Incorrectly again, as I should have realised: uClibc has already thought of this, and when NOEXECSTACK is configured, as it is for OpenWRT, uClibc passes a flag to the assembler (--noexecstack) that has the effect of fixing NOEXECSTACK for assembler files. And of course after I patched about 17 .S files and waited another half hour, the checksec scan was still unsuccessful. Red herring again!

I started to get desperate when I read about some compiler systems that use something called a ‘trampoline’. So I went back to the uClibc mailing list.

At this point I would like to thank the uClibc developers for being so patient with me, as the solution was now near at hand.

Cutting edge patches and a wrinkle in time.

One of the uClibc developers pointed me to a patch posted on the gcc mailing list. As fate would have it, it was dated 10 September 2014, which was _after_ I started on these investigations. This actually went full circle back to libgcc_s.so, which was the small library I passed over to focus on uClibc. This target library itself has some assembly files, which were neither built with the noexecstack option nor included the Gentoo-suggested assembly pragma. This patch also applies to gcc, not to uClibc, and of course was outside of binutils, which was the other component I had to upgrade. The fact that libgcc_s.so was not clean should maybe have pointed me to look at gcc, and it did cross my mind. But we all have to learn somehow. Without all the above my knowledge of the internals of all these systems would be the poorer.

So I applied this patch, and finally, bliss: a sea of green NX enabled flags. In the end, no modifications were actually required to the older version of uClibc used inside OpenWRT.

This even fixed the busybox NX status without modification. So empirically this confirms what I read previously and also overlooked to my detriment: NOEXECSTACK status is aggregated from all linked libraries, and if one library is missing the flag it pollutes the lot.

Fixed NOEXECSTACK on uClibc

Postscript

Now I have to package the patch so that it can be submitted to OpenWRT, initially against Barrier Breaker given that it has just been released!

Then I will probably need to repeat it against all the supported gcc versions and submit separate patches for those. That will get a bit tedious, thankfully I can start a test build then go away…

Simple PRINCE2 Governance

Pre-Project and Start-Up (SU)

A project is defined as a temporary organisation created for the purpose of delivering business products with a degree of uniqueness according to an agreed Business Case.

A project mandate must come from those managers authorised to allocate duties and funds, subject to delegated authority ("corporate/programme management").

Authorised individuals must raise a Project Mandate. This should state the basic objectives, scope, quality, and constraints and identify the customer, user, and other interested parties.


Degrowth Economy

Just read this article: Life in a de-growth economy and why you might actually enjoy it.

I like the idea of a steady state economy. Simple maths shows how stupid endless growth is. And yet our politicians cling to it. We will get a steady state, energy neutral economy one day. It’s just a question of whether we are forced into it, or whether it’s managed.

Some thoughts on the article above:

  • I don’t agree that steady state implies localisation. Trade and specialisation are wonderful inventions. It’s more efficient if I write your speech coding software than you working it out yourself. It’s far more efficient for a farmer to grow food than me messing about in my back yard. What is missing is a fossil fuel free means of transport to sustain trade and the transportation of goods from where they are efficiently produced to where they are consumed.
  • Likewise local food production like they do in Cuba. Better to grow lots of food on a Cuban farm, they just lack an efficient way to transport it.
  • I have some problems with “organic” food production in the backyard, or my neighbours’ backyards. To me it’s paying more for food chemically identical to what I buy in the supermarket. Modern, scientific food production has its issues, but these can be solved by science. On a small scale, sure, gardening is fun, and it would be great to meet people in communal gardens. However it’s no way to feed a hungry world.
  • Likewise this articles vision of us repairing/recycling clothing. New is still fine, as long as it’s resource-neutral, e.g. cotton manufactured into jeans using solar powered factories, and transported to my shopping mall in an electric vehicle. Or synthetic fibres from bio-fuels or GM bacteria.
  • Software costs zero to upgrade but can improve our standard of living. So there can be “growth” in some sense at no expense in resources. You can use my speech codec and conserve resources (energy for transmission and radio spectrum). I can send you that software over the Internet, so we don’t need an aircraft to ship you a black box or even a CD.

I live by some anti-growth, anti-consumer principles. I drive an electric car that is based on a 25 year old recycled petrol car chassis. I don’t have a fossil fuel intensive commute. I use my bike more than my car.

I work part time from home mainly on volunteer work. My work is developing software that I can give away to help people. This software (for telecommunications) will in turn remove the need for expensive radio hardware, save power, and yet improve telecommunications.

I live inexpensively compared to my peers who are paying large mortgages due to the arbitrarily high price of land here, and other costs I have managed to avoid or simply say no to. No great luck or financial acumen at work here, although my parents taught me the useful habit of spending less than I earn. I’m not a very good consumer!

I don’t aspire to a larger home in a nice area or more gadgets. That would just mean more house work and maintenance and expense and less time on helping people with my work. In fact I aspire to a smaller home, and less gadgets (I keep throwing stuff out). I am renting at the moment as the real estate prices here are spiralling upwards and I don’t want to play that game. Renting will allow me to down-shift even further when my children are a little older. I have no debt, and no real desire to make more money, a living wage is fine. Although I do have investments and savings which I like tracking on spreadsheets.

I am typing this on a laptop made in 2008. I bought a second, identical one a few years later for $300 and swap parts between them so I always have a back up.

I do however burn a lot of fossil fuel in air travel. My home uses 11 kWh/day of electricity, which, considering this includes my electric car and hence all my “fuel” costs, is not bad.

More

In the past I have written about why I think economic growth is evil. There is a lot of great information on this topic such as this physics based argument on why we will cook (literally!) in a few hundred years if we keep increasing energy use. The Albert Bartlett lectures on exponential growth are also awesome.

Recharging NSW

So those who have been following this blog know that I’ve been a keen enthusiast for EVs, attempting to grow the number of EVs in Australia and the related charging network.

Some of my frustration on how slowly it has been growing has turned into why don’t I do something about it.

So I have. I’m now a director of a new company, Recharging NSW Pty Ltd. The main aim is to encourage and support EV uptake in Australia, by increasing both the number of cars on the road and the amount of public charging.

There isn’t much I can share at present as everything is still in the planning phase, but stay tuned.


[opinion] On Islamophobia

It's taken me a while to get sufficiently riled up about Australia's current Islamophobia outbreak, but it's been brewing in me for a couple of weeks.

For the record, I'm an Atheist, but I'll defend your right to practise your religion, just don't go pushing it on me, thank you very much. I'm also not a huge fan of Islam, because it does seem to lend itself to more violent extremism than other religions, and ISIS/ISIL/IS (whatever you want to call them) aren't doing Islam any favours at the moment. I'm against extremism of any stripes though. The Westboro Baptists are Christian extremists. They just don't go around killing people. I'm also not a big fan of the burqa, but again, I'll defend a Muslim woman's right to choose to wear one. The key point here is choice.

I got my carpets cleaned yesterday by an ethnic couple. I like accents, and I was trying to pick theirs. I thought they may have been Turkish. It turned out they were Kurdish. Whenever I hear "Kurd" I habitually stick "Bosnian" in front of it after the Bosnian War that happened in my childhood. Turns out I wasn't listening properly, and that was actually "Serb". Now I feel dumb, but I digress.

I got chatting with the lady while her husband did the work. I got a refresher on where most Kurds are/were (Northern Iraq) and we talked about Sunni versus Shia Islam, and how they differed. I learned a bit yesterday, and I'll have to have a proper read of the Wikipedia article I just linked to, because I suspect I'll learn a lot more.

We briefly talked about burqas, and she said that because they were Sunni, they were given the choice, and they chose not to wear it. That's the sort of Islam that I support. I suspect a lot of the women running around in burqas don't get a lot of say in it, but I don't think banning it outright is the right solution to that. Those women need to feel empowered enough to be able to cast off their burqas if that's what they want to do.

I completely agree that a woman in a burqa entering a secure place (for example Parliament House) needs to be identifiable (assuming that identification is verified for all entrants to Parliament House). If it's not, and they're worried about security, that's what the metal detectors are for. I've been to Dubai. I've seen how they handle women in burqas at passport control. This is an easily solvable problem. You don't have to treat burqa-clad women as second class citizens and stick them in a glass box. Or exclude them entirely.

October 01, 2014

On layers

There's been a lot of talk recently about what we should include in OpenStack and what is out of scope. This is interesting, in that many of us used to believe that we should do "everything". I think what's changed is that we're learning that solving all the problems in the world is hard, and that we need to re-focus on our core products. In this post I want to talk through the various "layers" proposals that have been made in the last month or so. Layers don't directly address what we should include in OpenStack or not, but they are a useful mechanism for trying to break up OpenStack into simpler to examine chunks, and I think that makes them useful in their own right.



I would address what I believe the scope of the OpenStack project should be, but I feel that would make this post so long that no one would ever actually read it. Instead, I'll cover that in a later post in this series. For now, let's explore what people are proposing as a layering model for OpenStack.



What are layers?



Dean Troyer did a good job of describing a layers model for the OpenStack project on his blog quite a while ago. He proposed the following layers (this is a summary, you should really read his post):



  • layer 0: operating system and Oslo
  • layer 1: basic services -- Keystone, Glance, Nova
  • layer 2: extended basics -- Neutron, Cinder, Swift, Ironic
  • layer 3: optional services -- Horizon and Ceilometer
  • layer 4: turtles all the way up -- Heat, Trove, Moniker / Designate, Marconi / Zaqar




Dean notes that Neutron would move to layer 1 when nova-network goes away and Neutron becomes required for all compute deployments. Dean's post was also over a year ago, so it misses services like Barbican that have appeared since then. Services are only allowed to require services from lower numbered layers, but can use services from higher numbered layers as optional add-ins. So Nova for example can use Neutron, but cannot require it until it moves into layer 1. Similarly, there have been proposals to add Ceilometer as a dependency to schedule instances in Nova, and if we were to do that then we would need to move Ceilometer down to layer 1 as well. (I think doing that would be a mistake by the way, and have argued against it during at least two summits).
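The rule is mechanical enough to express directly as data (a sketch only, with layer numbers taken from Dean's list above; same-layer requirements, like Nova on Keystone, are allowed):

# Sketch of the layering rule: a service may only *require* services at the
# same or a lower layer; higher layers may be used as optional add-ins.
LAYERS = {
    "oslo": 0,
    "keystone": 1, "glance": 1, "nova": 1,
    "neutron": 2, "cinder": 2, "swift": 2, "ironic": 2,
    "horizon": 3, "ceilometer": 3,
    "heat": 4, "trove": 4, "designate": 4, "zaqar": 4,
}

def may_require(service, dependency):
    return LAYERS[dependency] <= LAYERS[service]

print(may_require("nova", "keystone"))  # True: both are basic services
print(may_require("nova", "neutron"))   # False, until Neutron moves to layer 1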



Sean Dague re-ignited this discussion with his own blog post relatively recently. Sean proposes new names for most of the layers, but the intent remains the same -- a compute-centric view of the services that are required to build a working OpenStack deployment. Sean and Dean's layer definitions are otherwise strongly aligned, and Sean notes that the probability of seeing something deployed at a given installation reduces as the layer count increases -- so for example Trove is way less commonly deployed than Nova, because the set of people who want a managed database as a service is smaller than the set of people who just want to be able to boot instances.



Now, I'm not sure I agree with the compute centric nature of the two layers proposals mentioned so far. I see people installing just Swift to solve a storage problem, and I think that's a completely valid use of OpenStack and should be supported as a first class citizen. On the other hand, resolving my concern with the layers model there is trivial -- we just move Swift to layer 1.



What do layers give us?



Sean makes a good point about the complexity of OpenStack installs and how we scare away new users. I agree completely -- we show people our architecture diagrams which are deliberately confusing, and then we wonder why they're not impressed. I think we do it because we're proud of the scope of the thing we've built, but I think our audiences walk away thinking that we don't really know what problem we're trying to solve. Do I really need to deploy Horizon to have working compute? No of course not, but our architecture diagrams don't make that obvious. I gave a talk along these lines at pyconau, and I think as a community we need to be better at explaining to people what we're trying to do, while remembering that not everyone is as excited about writing a whole heap of cloud infrastructure code as we are. This is also why the OpenStack miniconf at linux.conf.au 2015 has pivoted from being a generic OpenStack chatfest to being something more solidly focussed on issues of interest to deployers -- we're just not great at talking to our users, and we need to reboot the conversation at community conferences until it's something which meets their needs.





We intend this diagram to amaze and confuse our victims




Agreeing on a set of layers gives us a framework within which to describe OpenStack to our users. It lets us communicate the services we think are basic and always required, versus those which are icing on the cake. It also lets us explain the dependency between projects better, and that helps deployers work out what order to deploy things in.



Do layers help us work out what OpenStack should focus on?



Sean's blog post then pivots and starts talking about the size of the OpenStack ecosystem -- or the "size of our tent" as he phrases it. While I agree that we need to shrink the number of projects we're working on at the moment, I feel that the blog post is missing a logical link between the previous layers discussion and the tent size conundrum. It feels to me that Sean wanted to propose that OpenStack focus on a specific set of layers, but didn't quite get there for whatever reason.



Next Monty Taylor had a go at furthering this conversation with his own blog post on the topic. Monty starts by making a very important point -- he (like everyone involved) wants the OpenStack community to be as inclusive as possible. I want lots of interesting people at the design summits, even if they don't work directly on projects that OpenStack ships. You can be a part of the OpenStack community without having our logo on your product.



A concrete example of including non-OpenStack projects in our wider community was visible at the Atlanta summit -- I know for a fact that there were software engineers at the summit who work on Google Compute Engine. I know this because I used to work with them at Google when I was a SRE there. I have no problem with people working on competing products being at our summits, as long as they are there to contribute meaningfully in the sessions, and not just take from us. It needs to be a two way street. Another concrete example is Ceph. I think Ceph is cool, and I'm completely fine with people using it as part of their OpenStack deploy. What upsets me is when people conflate Ceph with OpenStack. They are different. They're separate. And that is fine. Let's just not confuse people by saying Ceph is part of the OpenStack project -- it simply isn't because it doesn't fall under our governance model. Ceph is still a valued member of our community and more than welcome at our summits.



Do layers help us work out what to focus OpenStack on for now? I think they do. Should we simply say that we're only going to work on a single layer? Absolutely not. What we've tried to do up until now is have OpenStack be a single big thing, what we call "the integrated release". I think layers give us a tool to find logical ways to break that thing up. Perhaps we need a smaller integrated release, but then continue with the other projects on their own release cycles? Or perhaps they release at the same time, but we don't block the release of a layer 1 service on the basis of release critical bugs in a layer 4 service?



Is there consensus on what sits in each layer?



Looking at the posts I can find on this topic so far, I'd have to say the answer is no. We're close, but we're not aligned yet. For example, one proposal has a tweak to the previously proposed layer model that adds Cinder, Designate and Neutron down into layer 1 (basic services). The author argues that this is because stateless cloud isn't particularly useful to users of OpenStack. However, I think this is wrong, to be honest. I can see that stateless cloud isn't super useful by itself, but that assumes OpenStack is the only piece of infrastructure that a given organization has. Perhaps that's true for the public cloud case, but the vast majority of OpenStack deployments at this point are private clouds. So, you're an existing IT organization and you're deploying OpenStack to increase the level of flexibility in compute resources. You don't need to deploy Cinder or Designate to do that. Let's take the storage case for a second -- our hypothetical IT organization probably already has some form of storage -- a SAN, or NFS appliances, or something like that. So stateful cloud is easy for them -- they just have their instances mount resources from those existing storage pools like they would any other machine. Eventually they'll decide that hand managing that is horrible and move to Cinder, but that's probably later, once they've gotten through the initial baby step of deploying Nova, Glance and Keystone.



The first step to using layers to decide what we should focus on is to decide what is in each layer. I think the conversation needs to revolve around that for now, because if we drift off into whether existing in a given layer means you're voted off the OpenStack island, then we'll never even come up with a set of agreed layers.



Let's ignore tents for now



The size of the OpenStack "tent" is the metaphor being used at the moment for working out what to include in OpenStack. As I say above, I think we need to reach agreement on what is in each layer before we can move on to that very important conversation.



Conclusion



Given the focus of this post is the layers model, I want to stop introducing new concepts here for now. Instead let me summarize where I stand so far -- I think the layers model is useful. I also think the layers should be an inverted pyramid -- layer 1 should be as small as possible for example. This is because of the dependency model that the layers model proposes -- it is important to keep the list of things that a layer 2 service must use as small and coherent as possible. Another reason to keep the lower layers as small as possible is because each layer represents the smallest possible increment of an OpenStack deployment that we think is reasonable. We believe it is currently reasonable to deploy Nova without Cinder or Neutron for example.



Most importantly of all, having those incremental stages of OpenStack deployment gives us a framework we have been missing in talking to our deployers and users. It makes OpenStack less confusing to outsiders, as it gives them bite-sized morsels to consume one at a time.



So here are the layers as I see them for now:



  • layer 0: operating system, and Oslo
  • layer 1: basic services -- Keystone, Glance, Nova, and Swift
  • layer 2: extended basics -- Neutron, Cinder, and Ironic
  • layer 3: optional services -- Horizon, and Ceilometer
  • layer 4: application services -- Heat, Trove, Designate, and Zaqar




I am not saying that everything inside a single layer is required to be deployed simultaneously, but I do think it's reasonable for Ceilometer to assume that Swift is installed and functioning. The big difference here between my view of layers and that of Dean, Sean and Monty is that I think that Swift is a layer 1 service -- it provides basic functionality that may be assumed to exist by services above it in the model.



I believe that when projects come to the Technical Committee requesting incubation or integration, they should specify what layer they see their project sitting at, and the justification for a lower layer number should be harder than that for a higher layer. So for example, we should be reasonably willing to accept proposals at layer 4, whilst we should be super concerned about the implications of adding another project at layer 1.



In the next post in this series I'll try to address the size of the OpenStack "tent", and what projects we should be focussing on.



Tags for this post: openstack kilo technical committee tc layers

Related posts: One week of Nova Kilo specifications; Specs for Kilo; Compute Kilo specs are open; My candidacy for Kilo Compute PTL; Juno TC Candidacy; Juno nova mid-cycle meetup summary: nova-network to Neutron migration




[life] Day 244: TumbleTastics, photo viewing, bulk goods and a play in the park

Yesterday was another really lovely day. Zoe had her free trial class with TumbleTastics at 10am, and she was asking for a leotard for it, because that's what she was used to wearing when she went to Gold Star gymnastics in Mountain View. After Sarah dropped her off, we dashed out to Cannon Hill to go to K Mart in search of a leotard.

We found a leotard, and got home with enough time to scooter over to TumbleTastics for the class. Zoe had an absolute ball again, and the teacher was really impressed with her physical ability, and suggested for her regular classes that start next term, that she be slotted up a level. It sounds like her regularly scheduled class will have older kids and more boys, so that should be good. I just love that Zoe's so physically confident.

We scootered back home, and after lunch, drove back to see Hannah to view our photos from last week's photo shoot. There were some really beautiful photos in the set, so now I need to decide which one I want on a canvas.

Since we were already out, I thought we could go and check out the food wholesaler at West End that we'd failed to get to last week. I'm glad we did, because it was awesome. There was a coffee shop attached to the place, so we grabbed a coffee and a babyccino together after doing some shopping there.

While we were out, I thought it was a good opportunity to check out a new park, and we drove down to what I guess was Orleigh Park, and had an absolutely fantastic afternoon down there by the river; the shade levels in the mid-afternoon were great. I'd like to make an all day outing one day on the CityCat and CityGlider, and bus one way and take the ferry back the other way with a picnic lunch in the middle.

We headed home after about an hour playing in the park, and Zoe watched a little bit of TV before Sarah came to pick her up.

Zoe's spending the rest of the school holidays with Sarah, so I'm going to use the extra spare time to try and catch up on my taxes and my real estate licence training, which I've been neglecting.

September 30, 2014

Oracle Linux ships MariaDB

I can’t remember why I was installing Oracle Enterprise Linux 7 on Oracle VirtualBox a while back, but I did notice something interesting: like CentOS 7, it ships MariaDB Server 5.5. Presumably, this means that MariaDB is now supported by Oracle, too ;-) [jokes aside, it’s likely because OEL7 is meant to be 100% compatible with RHEL7]


The only reason I mention this now is that Vadim Tkachenko probably got his most retweeted tweet recently, stating just that. If you want to upgrade to MariaDB 10, don’t forget that the repository download tool provides CentOS 7 binaries, which should “just work”.
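
For reference, the repository configuration that tool generates for CentOS 7 looks roughly like this (a sketch only -- the tool’s output is authoritative, and the baseurl here assumes x86_64):

# /etc/yum.repos.d/MariaDB.repo -- sketch; generate the real file
# with the repository download tool at mariadb.org
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos7-amd64
gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1

After that, a yum install MariaDB-server MariaDB-client should get you MariaDB 10.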

If you want to switch to MySQL, there is a Public Yum repository that MySQL provides (and also don’t forget to check the Extras directory of the full installation – from OEL7 docs sub-titled: MySQL Community and MariaDB Packages). Be sure to read the MySQL docs about using the Yum repository. I also just noticed that the docs now have information on replacing a third-party distribution of MySQL using the MySQL yum repository.

Links September 2014

Matt Palmer wrote a short but informative post about enabling DNSSEC in a zone [1]. I really should setup DNSSEC on my own zones.

Paul Wayper has some insightful comments about the Liberal party’s nasty policies towards the unemployed [2]. We really need a Basic Income in Australia.

Joseph Heath wrote an interesting and insightful article about the decline of the democratic process [3]. While most of his points are really good I’m dubious of his claims about twitter. When used skillfully twitter can provide short insights into topics and teasers for linked articles.

Sarah O wrote an insightful article about NotAllMen/YesAllWomen [4]. I can’t summarise it well in a paragraph, I recommend reading it all.

Betsy Haibel wrote an informative article about harassment by proxy on the Internet [5]. Everyone should learn about this before getting involved in discussions about “controversial” issues.

George Monbiot wrote an insightful and interesting article about the referendum for Scottish independence and the failures of the media [6].

Mychal Denzel Smith wrote an insightful article “How to know that you hate women” [7].

Sam Byford wrote an informative article about Google’s plans to develop and promote cheap Android phones for developing countries [8]. That’s a good investment in future market share by Google and good for the spread of knowledge among people all around the world. I hope that this research also leads to cheap and reliable Android devices for poor people in first-world countries.

Deb Chachra wrote an insightful and disturbing article about the culture of non-consent in the IT industry [9]. This is something we need to fix.

David Hill wrote an interesting and informative article about the way that computer game journalism works and how it relates to GamerGate [10].

Anita Sarkeesian shares the most radical thing that you can do to support women online [11]. Wow, the world sucks more badly than I realised.

Michael Daly wrote an article about the latest evil from the NRA [12]. The NRA continues to demonstrate that claims about “good people with guns” are lies, the NRA are evil people with guns.

Blueprints implemented in Nova during Juno

As we get closer to releasing the RC1 of Nova for Juno, I've started collecting a list of all the blueprints we implemented in Juno. This was mostly done because it helps me write the release notes, but I am posting it here because I am sure that others will find it handy too.



Process



  • Reserve 10 sql schema version numbers for back ports of Juno migrations to Icehouse. launchpad specification




Ongoing behind the scenes work



Object conversion
  • Support sub-classing objects. launchpad specification



Scheduler
  • Stop using the scheduler run_instance method. Previously the scheduler would select a host, and then boot the instance. Instead, let the scheduler select hosts, but then return those so the caller boots the instance. This will make it easier to move the scheduler to being a generic service instead of being internal to nova. launchpad specification
  • Refactor the nova scheduler into being a library. This will make splitting the scheduler out into its own service later easier. launchpad specification
  • Move nova to using the v2 cinder API. launchpad specification
  • Move prep_resize to conductor in preparation for splitting out the scheduler. launchpad specification




API
  • Use JSON schema to strongly validate v3 API request bodies. Please note this work will later be released as v2.1 of the Nova API. launchpad specification
  • Provide a standard format for the output of the VM diagnostics call. This work will be exposed by a later version of the v2.1 API. launchpad specification
  • Move to the OpenStack standard name for the request id header, in a backward compatible manner. launchpad specification
  • Implement the v2.1 API on the V3 API code base. This work is not yet complete. launchpad specification




Other
  • Refactor the internal nova API to make the nova-network and neutron implementations more consistent. launchpad specification




General features



Instance features



Networking



Scheduling
  • Extensible Resource Tracking. The set of resources tracked by nova is hard-coded; this change makes that extensible, which will allow plug-ins to track new types of resources for scheduling. launchpad specification
  • Allow a host to be evacuated, but with the scheduler selecting destination hosts for the instances moved. launchpad specification
  • Add support for host aggregates to scheduler filters. launchpad: disk; instances; and IO ops specification




Other
  • i18n enablement for Nova: turn on the lazy translation support from Oslo i18n, and update Nova to adhere to the restrictions this adds to translatable strings. launchpad specification
  • Offload periodic task sql query load to a slave sql server if one is configured. launchpad specification
  • Only update the status of a host in the sql database when the status changes, instead of every 60 seconds. launchpad specification
  • Include status information in API listings of hypervisor hosts. launchpad specification
  • Allow API callers to specify more than one status to filter by when listing services. launchpad specification
  • Add quota values to constrain the number and size of server groups a user can create. launchpad specification




Hypervisor driver specific



Hyper-V



Ironic



libvirt



vmware
  • Move the vmware driver to using the oslo vmware helper library. launchpad specification
  • Add support for network interface hot plugging to vmware. launchpad specification
  • Refactor the vmware driver's spawn functionality to be more maintainable. This work was internal, but is mentioned here because it significantly improves the supportability of the VMWare driver. launchpad specification




Tags for this post: openstack juno blueprints implemented

Related posts: One week of Nova Kilo specifications; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Review priorities as we approach juno-3




Encrypted mailing list on Debian and Ubuntu

Running an encrypted mailing list is surprisingly tricky. One of the first challenges is that you need to decide what the threat model is. Are you worried about someone compromising the list server? One of the subscribers stealing the list of subscriber email addresses? You can't just "turn on encryption"; you have to think about what you're trying to defend against.

I decided to use schleuder. Here's how I set it up.

Requirements

What I decided to create was a mailing list where people could subscribe and receive emails encrypted to them from the list itself. In order to post, they need to send an email encrypted to the list's public key and signed using the private key of a subscriber.

What the list then does is decrypt the email and re-encrypt it individually for each subscriber. This protects the emails while in transit, but is vulnerable to the list server itself being compromised, since every list email transits through there in plain text at some point.
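
As a sketch of what posting looks like from the subscriber's side (the addresses and filenames here are hypothetical), it is just normal GnuPG usage:

# import the list's public key (obtainable from the -sendkey address)
gpg --import list-pubkey.asc

# encrypt to the list's key and sign with your own private key;
# the resulting ASCII-armoured output becomes the body of your mail
gpg --armor --encrypt --sign \
    --recipient list@example.org \
    --local-user subscriber@example.net \
    message.txt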

Installing the schleuder package

The first thing to know about installing schleuder on Debian or Ubuntu is that at the moment it unfortunately depends on ruby 1.8. This means that you can only install it on Debian wheezy or Ubuntu precise: trusty and jessie won't work (until schleuder is ported to a more recent version of ruby).

If you're running wheezy, you're fine, but if you're running precise, I recommend adding my ppa to your /etc/apt/sources.list to get a version of schleuder that actually lets you create a new list without throwing an error.

Then, simply install this package:

apt-get install schleuder

Postfix configuration

The next step is to configure your mail server (I use postfix) to handle the schleuder lists.

This may be obvious but if you're like me and you're repurposing a server which hasn't had to accept incoming emails, make sure that postfix is set to the following in /etc/postfix/main.cf:

inet_interfaces = all

Then follow the instructions from /usr/share/doc/schleuder/README.Debian and finally add the following line (thanks to the wiki instructions) to /etc/postfix/main.cf:

local_recipient_maps = proxy:unix:passwd.byname $alias_maps $transport_maps

Creating a new list

Once everything is set up, creating a new list is pretty easy. Simply run schleuder-newlist list@example.org and follow the instructions.

After creating your list, remember to update /etc/postfix/transports and run postmap /etc/postfix/transports.
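
A transport entry for a single list might look something like this -- the exact format is described in the schleuder README, so treat this as a hypothetical illustration:

# /etc/postfix/transports
list@example.org    schleuder: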

Then you can test it by sending an email to LISTNAME-sendkey@example.org. You should receive the list's public key.

Adding list members

Once your list is created, the list admin is the only subscriber. To add more people, you can send an admin email to the list or follow these instructions to do it manually:

  1. Get the person's GPG key: gpg --recv-key KEYID
  2. Verify that the key is trusted: gpg --fingerprint KEYID
  3. Add the person to the list's /var/lib/schleuder/HOSTNAME/LISTNAME/members.conf:
    - email: francois@fmarier.org
      key_fingerprint: 8C470B2A0B31568E110D432516281F2E007C98D1
    
  4. Export the public key: gpg --export -a KEYID
  5. Paste the exported key into the list's keyring: sudo -u schleuder gpg --homedir /var/lib/schleuder/HOSTNAME/LISTNAME/ --import

My candidacy for Kilo Compute PTL

This is mostly historical at this point, but I forgot to post it here when I emailed it a week or so ago. So, for future reference:



I'd like another term as Compute PTL, if you'll have me.

We live in interesting times. openstack has clearly gained a large
amount of mind share in the open cloud marketplace, with Nova being a
very commonly deployed component. Yet, we don't have a fantastic
container solution, which is our biggest feature gap at this point.
Worse -- we have a code base with a huge number of bugs filed against
it, an unreliable gate because of subtle bugs in our code and
interactions with other openstack code, and have a continued need to
add features to stay relevant. These are hard problems to solve.

Interestingly, I think the solution to these problems calls for a
social approach, much like I argued for in my Juno PTL candidacy
email. The problems we face aren't purely technical -- we need to work
out how to pay down our technical debt without blocking all new
features. We also need to ask for understanding and patience from
those feature authors as we try and improve the foundation they are
building on.

The specifications process we used in Juno helped with these problems,
but one of the things we've learned from the experiment is that we
don't require specifications for all changes. Let's take an approach
where trivial changes (no API changes, only one review to implement)
don't require a specification. There will of course sometimes be
variations on that rule if we discover something, but it means that
many micro-features will be unblocked.

In terms of technical debt, I don't personally believe that pulling
all hypervisor drivers out of Nova fixes the problems we face, it just
moves the technical debt to a different repository. However, we
clearly need to discuss the way forward at the summit, and come up
with some sort of plan. If we do something like this, then I am not
sure that the hypervisor driver interface is the right place to do
that work -- I'd rather see something closer to the hypervisor itself
so that the Nova business logic stays with Nova.

Kilo is also the release where we need to get the v2.1 API work done
now that we finally have a shared vision for how to progress. It took
us a long time to get to a good shared vision there, so we need to
ensure that we see that work through to the end.

We live in interesting times, but they're also exciting as well.




I have since been elected unopposed, so thanks for that!



Tags for this post: openstack kilo compute ptl

Related posts: One week of Nova Kilo specifications; Specs for Kilo; Juno Nova PTL Candidacy; Review priorities as we approach juno-3; Thoughts from the PTL; Compute Kilo specs are open




September 29, 2014

Trip report: LinuxCon North America, CentOS Dojo Paris, WebExpo Prague

I had quite a good time at LinuxCon North America/CloudOpen North America 2014, alongside my colleague Max Mether – between us, we gave a total of five talks. I noticed that this year there was a database heavy track — Morgan Tocker from Oracle’s MySQL Team had a few talks as did Martin MC Brown from Continuent. 

The interest in MariaDB stems from the fact that people are starting to just see it appear in CentOS 7, and it's just everywhere (you can even get it from the latest Ubuntu LTS). This makes for giving interesting talks, since many are shipping MariaDB 5.5 as the default choice, but that's something we released over 2 years ago; clearly there are many interesting new bits in MariaDB 10.0 that need attention!

Chicago is a fun place to be — the speaker gift was an architectural tour of Chicago by boat, probably one of the most useful gifts I’ve ever received (yes, I took plenty of photos!). The Linux Foundation team organised the event wonderfully as always, and I reckon the way the keynotes were set up with the booths in the same room was a clear winner — pity we didn’t have a booth there this year. 

Shortly afterwards, I headed to Paris for the CentOS Dojo. The room was full (some 50 attendees?), who were mainly using CentOS, and it's clear that CentOS 7 comes with MariaDB, so this was a talk to get people up to speed with what's different with MySQL 5.5, what's missing from MySQL 5.6, and when to look at MariaDB 10. We want to build CentOS 7 packages for the MariaDB repository (10.0 is already available with MariaDB 10.0.14), so watch MDEV-6433 in the meantime for the latest 5.5 builds.

Then there was WebExpo Prague, with over 1,400 attendees, held in various theatres around Prague. Lots of people here are also using MariaDB, and there were some rather interesting conversations on having a redis front-end, how we power many sites, etc. It's clear that there is a need for a meetup group here; there's plenty of usage.

[life] Day 243: Day care for a day

I had to resort to using Zoe's old day care today so I could do some more Thermomix Consultant training. Zoe's asked me on and off if she could go back to her old day care to visit her friends and her old teachers, so she wasn't at all disappointed when she could today. Megan was even there as well, so it was a super easy drop off. She practically hugged me and sent me on my way.

When I came back at 3pm to pick her up, she wanted to stay longer, but wavered a bit when I offered to let her stay for another hour and ended up coming home with me.

We made a side trip to the Valley to check my post office box, and then came home.

Zoe watched a bit of TV, and then Sarah arrived to pick her up. After some navel gazing, I finished off the day with a very strenuous yoga class.

Git and mercurial abort: revision cannot be pushed

I’ve been migrating some repositories from Mercurial to Git; as part of this migration process some users want to keep using Mercurial locally until they have time to learn git.

First install the hg-git tools; for example on Ubuntu:

sudo aptitude install python-setuptools python-dev
sudo easy_install hg-git

Make sure the following is in your ~/.hgrc:

[extensions]
hgext.bookmarks =
hggit = 

Then, in your existing mercurial repository, add a new remote that points to the git repository. For example for a BitBucket repository:

cd <mercurial repository>
cat .hg/hgrc
[paths]
# the original hg repository
default = https://username@abcde.org/foo/barhg
# the git version (on BitBucket in this case)
bbgit   = git+ssh://git@bitbucket.org:foo/bar.git

Then you can run hg push bbgit to push from your local hg repository to the remote git repository.

mercurial abort: revision cannot be pushed

You may get the error mercurial abort: revision cannot be pushed since it doesn’t have a ref when pushing from hg to git, or you might notice that your hg work isn’t being pushed. The solution here is to reset the hg bookmark for git’s master branch:

hg book -f -r tip master
hg push bbgit

If you find yourself doing this regularly, this small shell function (in your ~/.bashrc) will help:

hggitpush () {
   # $1 is hg remote name in hgrc for repo
   # $2 is branch (defaults to master)
   hg book -f -r tip ${2:-master}
   hg push $1
}

Then from your shell you can run commands like:

hggitpush bbgit dev
hggitpush foogit      # defaults to pushing to master

September 28, 2014

Twitter posts: 2014-09-22 to 2014-09-28

SM1000 Part 6 – Noise and Radio Tests

For the last few weeks I have been debugging some noise issues in “analog mode”, and testing the SM1000 between a couple of HF radios.

The SM1000 needs to operate in “analog” mode as well as support FreeDV Digital Voice (DV mode). In analog mode, the ADC samples the mic signal, and sends it straight to the DAC where it is sent to the mic input of the radio. This lets you use the SM1000 for SSB as well as DV, without unplugging the SM1000 and changing microphones. Analog mode is a bit more challenging as electrical noise in the SM1000, if not controlled, makes it through to the transmit audio. DV mode is less sensitive, as the modem doesn’t care about low level noise.

Tracking down noise sources involves a lot of detail work, not very exciting but time consuming. For example I can hear a noise in the received audio, is it from the DAC or ADC side? Write software so I can press a button to send 0 samples to the DAC so I can separate the DAC and ADC at run time. OK it’s the ADC side, is it the ADC itself or the microphone amplifier? Break net and terminate ADC with 1k resistor to ground (thanks Matt VK5ZM for this suggestion). OK it’s the microphone amplifier, so is it on the input side or the op-amp itself? Does the noise level change with the mic gain control? No, then it must not be from the input. And so it goes.

I found noise due to the ADC, the mic amp, the mic bias circuit, and the 5V switcher. Various capacitors and RC filters helped reduce it to acceptable levels. The switcher caused high frequency hiss; this was improved with a 100nF cap across R40, and a 1500 ohm/1nF RC filter between U9 and the ADC input on U1 (prototype schematic). The mic amp and mic bias circuit were picking up 50Hz noise at the frame rate of the DSP software, which was fixed with a 220uF cap across R40 and a 100 ohm/220uF RC filter in series with R39, the condenser mic bias network.

To further improve noise, Rick and I are also working on changes to the PCB layout. My analog skills are growing and I am now working methodically. It’s nice to learn some new skills, useful for other radio projects as well. Satisfying.

Testing Between Two Radios

Next step is to see how the SM1000 performs over real radios. In particular how does it go with nearby RF energy? Does the uC reset itself, is there RF noise getting into the sensitive microphone amplifier and causing runaway feedback in analog mode? Also user set up issues: how easy is it to interface to the mic input of a radio? Is the level reaching the radio mic input OK?

The first step was to connect the SM1000 to an FT817 as the transmit radio, then to an IC7200 via 100dB of attenuation. The IC7200 receive audio was connected to a laptop running FreeDV. The FT817 was set to 0.5W output so I wouldn’t let the smoke out of my little in-line attenuators. This worked pretty well, and I obtained SNRs of up to 20dB from FreeDV. It’s always a little lower through real radios, but that’s acceptable. The PTT control from the SM1000 worked well. It was at this point that I heard some noises using the SM1000 in “analog” mode that I chased down as described above.

At the IC7200 output I recorded this file demonstrating audio using the stock FT817 MH31 microphone, the SM1000 used in analog mode, and the SM1000 in DV mode. The audio levels are unequal (MH31 is louder), but I am satisfied there are no strange noises in the SM1000 audio (especially in analog mode) when compared to the MH31 microphone. The levels can be easily tweaked.

Then I swapped the configuration to use the IC7200 as the transmitter. This has up to 100W PEP output, so I connected it to an end fed dipole, and used the FT817 with the (non-resonant) VHF antenna as the receiver. It took me a while to get the basic radio configuration working. Even with the stock IC7200 mic I could hear all sorts of strange noises in the receive audio due to the proximity of the two radios. Separating them (walking up the street with the FT817) or winding the RF gain all the way down helped.

However the FreeDV SNR was quite low, a maximum of 15dB. I spent some time trying to work out why but didn’t get to the bottom of it. I suspect there is some transmit pass-band filtering in the IC7200, making some FDMDV carriers a few dB lower than others; this showed up as an x-shaped scatter diagram and a sloped spectrum.

However the main purpose of these tests was to see how the SM1000 handled high RF fields. So I decided to move on.

I tested a bunch of different combinations, all with good results:

  • IC7200 with stock HM36 mic, SM1000 in analog mode, SM1000 in DV mode (high and low drive)
  • Radios tuned to 7.05, 14.235 and 28.5 MHz.
  • Tested with IC7200 and SM1000 running from the same 12V battery (breaking transformer isolation).
  • Had a 1m headphone cable plugged into the SM1000 act as an additional “antenna”.
  • Rigged up an adaptor to plug the FT817 MH31 mic into the CN5 “ext mic” connector on the SM1000. Total of 1.5m in mic lead, so plenty of opportunity for RF pick up.
  • Running full power into low and 3:1 SWR loads. (Matt, VK5ZM, suggested that high SWR loads are a harsh RF environment.)

Here are some samples: SM1000 analog, stock IC7200 mic, SM1000 DV low drive, SM1000 DV high drive. There are some funny noises on the analog and stock mic samples due to the proximity of the rx to the tx, but they are consistent across both samples. No evidence of runaway RF feedback or obvious strange noises. Once again the DV level is a bit lower. All the nasty HF channel noise is gone too!

Change Control

Rick and I are coordinating our work with a change log text file that is under SVN version control. As I perform tests and make changes to the SM1000, I record them in the change log. Rick then works from this document to modify the schematic and PCB, making notes on the change log. I can then review his notes against the latest schematic and PCB files. The change log, combined with email and occasional Skype calls, is working really well, despite us being half way around the planet from each other.

SM1000 Enclosure

One open issue for me is what enclosure we provide for the Beta units. I’ve spoken to a few people about this, and am open to suggestions from you, dear reader. Please comment below on your needs or ideas for a SM1000 enclosure. My requirements are:

  1. Holes for loudspeaker, PTT switch, many connectors.
  2. Support operation in “hand held” or “small box next to the radio” form factor.
  3. Be reasonably priced, quick to produce for the Qty 100 beta run.

It’s a little over two months since I started working on the SM1000 prototype, and just 6 months since Rick and I started the project. I’m very pleased with progress. We are on track to meet our goal of having Betas available in 2014. I’ve kicked off the manufacturing process with my good friend Edwin from Dragino in China, ordering parts and working together with Rick on the BOM.

September 27, 2014

Ubiquitous survelliance, VPNs, and metadata

My apologies for the lack of diagrams accompanying this post. I had not realised when I selected LiveJournal to host my blog that it did not host images.

There have been a lot of remarks, not least by a minister, about the use of VPNs to avoid metadata collection. Unfortunately VPNs cannot be presumed to be effective in avoiding metadata collection, because of the sheer ubiquity of surveillance and the traffic analysis opportunities that ubiquity makes possible.

By ‘metadata’ I mean the production of flow records, one record per flow, with no sampling or aggregation.

By ‘ubiquitous surveillance’ I mean the ability to completely tap and record the ingress and egress data of a computer. Furthermore, the sharing of that data with other nations, such as via the Five Eyes programme. It is a legal quirk in the US and in Australia that a national spy agency may not analyse the data of its own citizens directly without a warrant or reasonable cause, but can obtain that same information via a Five Eyes partner without either.

By ‘VPN service’ I mean an overseas service which sells subscriber-based access to an OpenVPN or similar gateway. The subscriber runs an OpenVPN client, the service runs an OpenVPN server. The traffic from within that encrypted VPN tunnel is then NATed and sent out the Internet-facing interface of the OpenVPN server. The traffic from the subscriber appears to have the IP address of the VPN server; this makes VPN services popular for avoiding geo-locked Internet content from Hulu, Netflix and BBC iPlayer.

The theory is that this IP address misdirection also defeats ubiquitous surveillance. An agency producing metadata from the subscriber's traffic sees only communication with the VPN service. An agency tapping the subscriber's traffic sees only the IP address of the subscriber exchanging encrypted content with the IP address of the VPN service.

Unfortunately ubiquitous surveillance is ubiquitous: if a national spy agency cannot tap the traffic itself then it can ask its Five Eyes partner to do the tap. This means that the traffic of the VPN service is also tapped. One interface contains traffic with the VPN subscribers; the other interface contains unencrypted traffic from all subscribers to the Internet. Recall that the content of the traffic with the VPN subscribers is encrypted.

Can a national spy agency relate the unencrypted Internet traffic back to the subscriber's connections? If so, then it can tap content and metadata as if the VPN service was not being used.

Unfortunately it is trivial for a national spy agency to do this. ‘Traffic analysis’ is the examination of patterns of traffic. TCP traffic is very vulnerable to traffic analysis:

  • Examining TCP traffic we see a very prominent pattern at the start of every connection. This ‘TCP three-way handshake’ sends one small packet all by itself for the entire round-trip time, receives one small packet all by itself for the entire round-trip time, then sends one large packet. Within a small time window we will see the same pattern in the VPN service's encrypted traffic with the subscriber and in the VPN service's unencrypted Internet traffic.

  • Examining TCP traffic we see a very prominent pattern when a connection encounters congestion. This ‘TCP multiplicative decrease’ halves the rate of transmission when the sender has not received an Acknowledgement packet within the expected time. Within a small time window we will see the same pattern in the VPN service's encrypted traffic with the subscriber and in the VPN service's unencrypted Internet traffic.

These are only the gross features. It doesn't take much imagination to see that the interval between Acks can be used to group connections with the same round-trip time. Or that the HTTP GET and response is also prominent. Or that jittering in web streaming connections is prominent.

In short, by using traffic analysis a national spy agency can — with a high probability — assign the unencrypted traffic on the Internet interface to the encrypted traffic from the VPN subscriber. That is, given traffic with (Internet site IP address, VPN service Internet-facing IP address) and (VPN service subscriber-facing IP address, Subscriber IP address) then traffic analysis allows a national spy agency to reduce that to (Internet site IP address, Subscriber IP address). That is, the same result as if the VPN service was not used.
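
To illustrate how little effort this takes, here is a sketch of collecting the raw material for such a correlation with tshark (the interface names and the subscriber address 192.0.2.10 are hypothetical; matching the two files by timestamp is then a few lines of scripting):

# Internet-facing side: timestamps of new TCP connections
tshark -i inet0 -Y 'tcp.flags.syn==1 && tcp.flags.ack==0' \
    -T fields -e frame.time_epoch -e ip.dst > internet-side.txt

# subscriber-facing side: timing and sizes of the encrypted tunnel
# packets -- the handshake's small/small/large signature is visible
# in the sizes even though the payload is opaque
tshark -i subs0 -f 'host 192.0.2.10' \
    -T fields -e frame.time_epoch -e frame.len > subscriber-side.txt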

The only question that remains is whether the premier national spy agencies are actually exchanging tables of (datetime, VPN service subscriber-facing IP address, Internet site IP address, Subscriber IP address) to allow national taps of (datetime, VPN server IP address, Subscriber IP address) to be transformed into (datetime, Internet site IP address, Subscriber IP address). There is nothing technical to prevent them from doing so. Based upon the revealed behaviour of the Five Eyes agencies it is reasonable to expect that this is being done.

Dear ASIO

Since the Senate passed legislation expanding your surveillance powers on Thursday night, you’ve copped an awful lot of flack on Twitter. Part of the problem, I think – aside from the legislation being far too broad – is that we don’t actually know who you are, or what exactly it is you get up to. You could be part of a spy novel, a movie or a decades-long series of cock ups. You could be script kiddies with a budget. Or you could be something else entirely.

At times like this I try to remind myself to assume good faith; to remember that most people are basically decent and are trying to live a good life. Some people are even trying to make the world a better place, whatever that might mean.

For those of you then who are decent people, and who are trying to keep Australia safe from whatever mysterious threats are out there that we don’t know about – all without wishing to impinge on or risk destroying the freedoms that we enjoy here – you have my thanks.

For those of you involved in the formulation of The National Security Legislation Amendment Bill 2014 (No 1) – you who might be reading this post as I type it, rather than after I publish it – I have tried very, very hard to imagine that you honestly believe you are making the world a better place. And maybe you do actually think that, but for my part I cannot see the powers granted as anything other than a direct assault on our democracy. As Glenn Greenwald pointed out, I should be more worried about bathroom accidents, restaurant meals and lightning strikes than terrorism. As a careful bath user with a strong stomach and a sturdy house to hide in, I think I’m fairly safe on that front. Frankly I’m more worried about climate change. Do you have anyone on staff who can investigate that threat to our national security?

Anyway, thanks for reading, and I’ll take it as a kindness if you don’t edit this post without asking first.

Regards,

Tim Serong

September 26, 2014

LUV Main October 2014 Meeting: MySQL + CCNx

Oct 7 2014 19:00
Oct 7 2014 21:00
Location: 

The Buzzard Lecture Theatre. Evan Burge Building, Trinity College, Melbourne University Main Campus, Parkville.

Stewart Smith, A History of MySQL

Hank, Content-Centric Networking

The Buzzard Lecture Theatre, Evan Burge Building, Trinity College Main Campus Parkville Melways Map: 2B C5

Notes: Trinity College's Main Campus is located off Royal Parade. The Evan Burge Building is located near the Tennis Courts. See our Map of Trinity College. Additional maps of Trinity and the surrounding area (including its relation to the city) can be found at http://www.trinity.unimelb.edu.au/about/location/map

Parking can be found along or near Royal Parade, Grattan Street, Swanston Street and College Crescent. Parking within Trinity College is unfortunately only available to staff.

For those coming via Public Transport, the number 19 tram (North Coburg - City) passes by the main entrance of Trinity College (Get off at Morrah St, Stop 12). This tram departs from the Elizabeth Street tram terminus (Flinders Street end) and goes past Melbourne Central. Timetables can be found on-line at:

http://www.metlinkmelbourne.com.au/route/view/725

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


[life] Day 240: A day of perfect scheduling

Today was a perfectly lovely day, the schedule just flowed so nicely.

I started the day making a second batch of pizza sauce for the Riverfire party I'm hosting tomorrow night. Once that was finished, we walked around the corner to my dentist for a check up.

Zoe was perfect during the check up, she just sat in the corner of the room and watched and also played on her phone. The dentist commented on how well behaved she was. It blew my mind to run into Tanya there for the second time in a row. We're obviously on the same schedules, but it's just crazy to always wind up with back to back appointments.

After the appointment, we pretty much walked onto a bus to the city, so we could meet Nana for lunch. While we were on the bus, I called up and managed to get haircut appointments for both of us at 3pm. I figured we could make the return trip via CityCat, and the walk home would take us right past the hairdresser.

The bus got us in about 45 minutes early, so we headed up to the Museum of Brisbane in City Hall to see if we could get into the clock tower. We got really lucky, and managed to get onto the 11:45am tour.

Things have changed since I was a kid and my Nana used to take me up the tower. They no longer let you be up there when the bells chime, which is a shame, but apparently it's very detrimental to your hearing.

Zoe liked the view, and then we went back down to King George Square to wait for Nana.

We went to Jo Jo's for lunch, and they somehow managed to lose Zoe's and my lunch order, and after about 40 minutes of waiting, I chased it up, and it still took a while to sort out. Zoe was very patient waiting the whole time, despite being starving.

After lunch, she wanted to see Nana's work, so we went up there. On the way back out, she wanted to play with the Drovers statues on Ann Street for a bit. After that, we made our way to North Quay and got on a CityCat, which nicely got us to the hairdresser in time for our appointment.

After that, we walked home, and drove around to check out a few bulk food places that I've learned about from my Thermomix Consultant training. We checked out a couple in Woolloongabba, and they had some great stuff available to the public.

It was getting late, so after a failed attempt at finding one in West End, we returned home so I could put dinner on.

It was a smoothly flowing day today, and Zoe handled it so well.

The Decline and Fall of IBM: End of an American Icon?







ISBN: 0990444422

LibraryThing

This book is quite readable, which surprised me given the relatively dry topic. Whilst obviously not everyone will agree with the author's thesis, it is clear that IBM hasn't been managed for long term success in a long time and there are a lot of very unhappy employees. The book is an interesting perspective on a complicated problem.



Tags for this post: book robert_cringely ibm corporate decline

Related posts: Phones; Your first computer?; Advertising inside the firewall; Corporate networks; Loyalty; Dead IBM DeveloperWorks

September 25, 2014

[life] Day 239: Cousin catch up, ice skating and a TM5 pickup

My sister, brother-in-law and niece are in town for a wedding on the weekend, so after collecting Zoe from the train station, we headed out to Mum and Dad's for the morning to see them all. My niece, Emma, has grown heaps since I last saw her. She and Zoe had some nice cuddles and played together really well.

I'd also promised Zoe that I'd take her ice skating, so that dovetailed pretty well with the visit, as instead of going to Acacia Ridge, we went to Boondall after lunch and skated there.

Zoe was very confident this time on the ice. She spent more time without her penguin than with it, so I think next time she'll be fine without one at all. She only had a couple of falls, the first one I think was a bit painful for her and a shock, but after that she was skating around really well. I think she was quite proud of herself.

My new Thermomix had been delivered to my Group Leader's house, so after that, we drove over there so I could collect it and get walked through how I should handle deliveries for customers. Zoe napped in the car on the way, and woke up without incident, despite it being a short nap. She had a nice time playing with Maria's youngest daughter while Maria walked me through everything, which was really lovely.

Time got away on me a bit, and we hurried home so that Sarah could pick Zoe up. I then got stuck into making some pizza sauce for our Riverfire pizza party on Saturday night.

Enabling OpenStack Roles To Resize Volumes Via Policy

If you have volume-backed OpenStack instances, you may need to resize them. In most usage cases you'll want to have unprivileged users resize the instances. This documents how you can modify the Cinder policy to allow tenant members assigned to a particular role to have permission to resize volumes.

Assumptions:

  • You've already created your OpenStack tenant.
  • You've already created your OpenStack user.
  • You know how to allocate roles to users in tenants.

Select the Role

You will need to create or identify a suitable role. In this example I'll use "Support".

Modify policy.json

Once the role has been created or identified, add these lines to the /etc/cinder/policy.json on the Cinder API server(s):

"context_is_support": [["role:Support"]],
"admin_or_support":  [["is_admin:True"], ["rule:context_is_support"]],

Modify "volume_extension:volume_admin_actions:reset_status" to use the new context:

"volume_extension:volume_admin_actions:reset_status": [["rule:admin_or_support"]],

Add users to the role

Add users who need privileges to resize volumes to the "Support" role in their tenant.

The users you have added to the "Support" role should now be able to resize volumes.
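
As a quick sketch of the end-to-end flow with the 2014-era command line clients (the user, role and tenant names and the volume ID are hypothetical):

# grant the role in the user's tenant
keystone user-role-add --user alice --role Support --tenant acme

# then, as alice, exercise the newly permitted admin action
cinder reset-state --state available VOLUME_ID

Depending on your Cinder version, you may need to restart cinder-api for it to pick up the edited policy.json.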

September 24, 2014

EVSE for Sun Valley Tourist Park

So you might have seen a couple of posts about Sun Valley Tourist Park; that is because we visit there a lot to see grandma and grandpa (wife’s parents). Because it’s outside of our return range, we have to charge there to get home if we take the I-MIEV, but the Electric Vehicle Supply Equipment (EVSE) that comes with the car limits the charge rate to 10 amps max. So we convinced the park to install a 32 amp EVSE. This allows us to charge at the I-MIEV’s full rate of 13 amps, so 30% faster.

Aeroviroment EVSE-RS at Sun Valley

If you want to know more about the EVSE, it’s an Aeroviroment EVSE-RS. It should work fine with the Holden Volt, Mitsubishi Outlander PHEV, I-MIEV 2012 or later (may not work with 2010 models) and the Nissan LEAF.

If you are in the Central Coast and want somewhere to charge, you can find the details on how to contact the park on plugshare. It’s available for public use, depending on how busy the park is, provided the driver pays a nominal fee and phones ahead during office hours.


[life] Day 238: Picnic play date in Roma Street Parklands with a side trip to the museum

School holidays are a good time for Zoe to have a weekday play date with my friend Kim's daughter Sarah, and we'd lined up a picnic in Roma Street Parklands today.

Zoe had woken up at about 1:30am with a nightmare, and subsequently slept in. It had taken me forever to get back to sleep, so I was pretty tired and slept a bit late too.

We got going eventually, and narrowly missed a train, so had to wait for the next one. We got into the Parklands pretty much on time, and despite the drizzly weather, had a nice morning making our way around the gardens.

The weather progressively improved by lunchtime, and after an early lunch, Kim and kids headed home, and we headed into the museum.

Unfortunately I was wrong about which station we had to get off to go to the museum, and we got off at Southbank rather than South Brisbane and had a long, slow walk of shame to get to the museum.

We used the freebie tickets I'd gotten to see the Deep Oceans exhibit, before heading home. I love the museum's free cloaking service, as it allowed me to divest myself of picnic blankets, my backpack and the Esky while we were at the museum.

While we were making the long walk of shame to the museum, I got a call from the car repairer to say that my car was ready, so after we returned to the rental car at the train station we drove directly to the repairer and collected the car, which involved a lot of shuffling of car contents and car seats. I then thought I'd lost my car key, and that involved an unnecessary second visit back to the car rental place on foot before I discovered it was in my pocket all along.

When we got home, Zoe wanted to play pirates again with our chocolate gold coins. What we wound up playing was a variant of "hide the thimble" in her bedroom, where she hid the chocolate gold coins all over the place, and then proceeded to show me where she'd hidden them all. It was very cute.

There was a tiny bit of TV before Sarah arrived to pick up Zoe.

[life] Day 237: A day with the grandparents and a lot cooking

Yesterday was a pretty full on day. I had to drop the car off to get the rear bumper replaced, and I also had to get to my Thermomix Consultant practical training by 9:30am.

I'd arranged to drop the car off at 8am and then pick up a rental car, and Mum was coming to collect Zoe at 8:30am. Zoe woke up at a good time, and we managed to get going extra early, so I dropped the car off early and was picking up the rental car before 8am.

Mum also arrived extra early, so I used the additional time to swing by the Valley to check my PO box, as I had a suspicion my Thermomix Consultant kit might have arrived, and it had.

I then had to get over to my Group Leader's house to do the practical training, which consisted of watching and giving a demo, with a whole bunch of advice and feedback along the way. It was a long day of much cooking, but it was good to get all of the behind the scenes tricks on how to prepare for a demo, give the demo and have it all run smoothly and to schedule.

I then headed over to Mum and Dad's for dinner. Zoe had had a great day, and my Aunty Peggy was also down from Toowoomba. We stayed for dinner and then headed home. I managed to get Zoe to bed more or less on time.

Something Like a Public Consultation

The Australian government often engages in public consultation on a variety of matters. This is a good thing, because it provides an opportunity for us to participate in our governance. One such recent consultation was from the Attorney-General’s Department on Online Copyright Infringement. I quote:

On 30 July 2014, the Attorney-General, Senator the Hon George Brandis, and the Minister for Communications Malcolm Turnbull MP released a discussion paper on online copyright infringement.

Submissions were sought from interested organisations and individuals on the questions outlined in the discussion paper and on other possible approaches to address this issue.

Submissions were accepted via email, and there was even a handy online form where you could just punch in your answers to the questions provided. The original statement on publishing submissions read:

Submissions received may be made public on this website unless otherwise specified. Submitters should indicate whether any part of the content should not be disclosed to the public. Where confidentiality is requested, submitters are encouraged to provide a public version that can be made available.

This has since been changed to:

Submissions received from peak industry groups, companies, academics and non-government organisations that have not requested confidentiality are being progressively published on the Online copyright infringement—submissions page.

As someone who, in a fit of inspiration late one night (well, a fit of some sort, but I’ll call it inspiration), put in an individual submission, I am deeply disappointed that submissions from individuals are apparently not being published. Geordie Guy has since put in a Freedom of Information request for all individual submissions, but honestly the AGD should be publishing these. It was, after all, a public consultation.

For the record then, here’s my submission:

Question 1: What could constitute ‘reasonable steps’ for ISPs to prevent or avoid copyright infringement?

In our society, internet access has become a necessary public utility.  We communicate with our friends and families, we do our banking, we purchase and sell goods and services, we participate in the democratic process; we do all these things online.  It is not the role of gas, power or water companies to determine what their customers do with the gas, power or water they pay for.  Similarly, it is not the role of ISPs to police internet usage.

Question 2: How should the costs of any ‘reasonable steps’ be shared between industry participants?

Bearing in mind my answer to question 1, any costs incurred should rest squarely with the copyright owners.

Question 3: Should the legislation provide further guidance on what would constitute ‘reasonable steps’?

The legislation should explicitly state that:

  1. Disconnection is not a reasonable step given that internet access is a necessary public utility.
  2. Deep packet inspection, or any other technological means of determining the content, or type of content being accessed by a customer, is not a reasonable step as this would constitute a gross invasion of privacy.

Question 4: Should different ISPs be able to adopt different ‘reasonable steps’ and, if so, what would be required within a legislative framework to accommodate this?

Given that it is not the role of ISPs to police internet usage (see answer to question 1), there are no reasonable steps for ISPs to adopt.

Question 5: What rights should consumers have in response to any scheme or ‘reasonable steps’ taken by ISPs or rights holders? Does the legislative framework need to provide for these rights?

Consumers need the ability to freely challenge any infringement notice, and there must be a guarantee they will not be disconnected.  The fact that an IP address does not uniquely identify a specific person should be enshrined in legislation.  The customer’s right to privacy must not be violated (see point 2 of answer to question 3).

Question 6: What matters should the Court consider when determining whether to grant an injunction to block access to a particular website?

As we have seen with ASIC’s spectacularly inept use of section 313 of Australia’s Telecommunications Act to inadvertently block access to 250,000 web sites, such measures can and will result in wild and embarrassing unintended consequences.  In any case, any means employed in Australia to block access to overseas web sites is exceedingly trivial to circumvent using freely available proxy servers and virtual private networks.  Consequently the Court should not waste its time granting injunctions to block access to web sites.

Question 7: Would the proposed definition adequately and appropriately expand the safe harbour scheme?

The proposed definition would seem to adequately and appropriately expand the safe harbour scheme, assuming the definition of “service provider” extends to any person or entity who provides internet access of any kind to any other person or entity.  For example, if my personal internet connection is also being used by a friend, a family member or a random passerby who has hacked my wifi, I should be considered a service provider to them under the safe harbour scheme.

Question 8: How can the impact of any measures to address online copyright infringement best be measured?

I am deeply dubious of the efficacy and accuracy of any attempt to measure the volume and impact of copyright infringement.  Short of actively surveilling the communications of the entire population, there is no way to accurately measure the volume of copyright infringement at any point in time, hence there is no way to effectively quantify the impact of any measures designed to address online copyright infringement.

Even if the volume of online copyright infringement could be accurately measured, one cannot assume that an infringing copy equates to a lost sale.  At one end of the spectrum, a single infringing copy could have been made by someone who would never have been willing or able to pay for access to that work.  At the other end of the spectrum, a single infringing copy could expose a consumer to a whole range of new media, resulting in many purchases that never would have occurred otherwise.

Question 9: Are there alternative measures to reduce online copyright infringement that may be more effective?

There are several alternative measures that may be more effective, including:

  1. Content distributors should ensure that their content is made available to the Australian public at a reasonable price, at the same time as releases in other countries, and absent any Digital Restrictions Management technology (DRM, also sometimes erroneously termed Digital Rights Management, which does more to inconvenience legitimate purchasers than it does to curb copyright infringement).
  2. Content creators and distributors should be encouraged to update their business models to accommodate and take advantage of the realities of ubiquitous digital communications.  For example, works can be made freely available online under liberal licenses (such as Creative Commons Attribution Share-Alike) which massively increases exposure, whilst also being offered for sale, perhaps in higher quality on physical media, or with additional bonus content in the for-purchase versions.  Public screenings, performances, displays, commissions and so forth (depending on the media in question) will contribute further income streams all while reducing copyright infringement.
  3. Australian copyright law could be amended such that individuals making copies of works (e.g. downloading works, or sharing works with each other online) on a noncommercial basis does not constitute copyright infringement.  Changing the law in this way would immediately reduce online copyright infringement, because a large amount of activity currently termed infringement would no longer be seen as such.

Finally, as a member of Pirate Party Australia it would be remiss of me not to provide a link to the party’s rather more detailed and well-referenced submission, which thankfully was published by the AGD. We’ve also got a Pozible campaign running to raise funds for an English translation of the Dutch Pirate Bay blocking appeal trial ruling, which will help add to the body of evidence demonstrating that web site blocking is ineffective.

Resizing a Root Volume for an Openstack Instance

This documents how to resize an OpenStack instance that has its root partition backed by a volume. In this circumstance "nova resize" will not resize the disk space as expected, because the root disk is a Cinder volume rather than ephemeral storage managed by Nova.

Assumptions:

  • You have the nova and cinder command line clients installed and your OpenStack credentials loaded.
  • Your account has sufficient privileges to run "cinder reset-state" (typically an admin-only operation).

Shut down the instance you wish to resize

Check the status of the source VM and stop it if it isn't already stopped:

$ nova list
+--------------------------------------+-----------+--------+------------+-------------+------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks               |
+--------------------------------------+-----------+--------+------------+-------------+------------------------+
| 4fef1b97-901e-4ab1-8e1f-191cb2f75969 | ResizeMe0 | ACTIVE | -          | Running     | Tutorial=192.168.0.107 |
+--------------------------------------+-----------+--------+------------+-------------+------------------------+
$ nova stop ResizeMe0
$ nova list
+--------------------------------------+-----------+---------+------------+-------------+------------------------+
| ID                                   | Name      | Status  | Task State | Power State | Networks               |
+--------------------------------------+-----------+---------+------------+-------------+------------------------+
| 4fef1b97-901e-4ab1-8e1f-191cb2f75969 | ResizeMe0 | SHUTOFF | -          | Shutdown    | Tutorial=192.168.0.107 |
+--------------------------------------+-----------+---------+------------+-------------+------------------------+

Identify and extend the volume

Obtain the ID of the volume attached to the instance:

$ nova show ResizeMe0 | grep volumes
| os-extended-volumes:volumes_attached | [{"id": "616dbaa6-f5a5-4f06-9855-fdf222847f3e"}]         |

Set the volume's state to "available" so we can resize it:

$ cinder reset-state --state available 616dbaa6-f5a5-4f06-9855-fdf222847f3e
$ cinder show 616dbaa6-f5a5-4f06-9855-fdf222847f3e | grep " status "
| status | available |

Extend the volume to the desired size (the new size is given in GB):

$ cinder extend 616dbaa6-f5a5-4f06-9855-fdf222847f3e 4
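
To double-check that the extend took effect, the same grep trick can be pointed at the size field (the exact column padding of cinder's output varies between versions):

$ cinder show 616dbaa6-f5a5-4f06-9855-fdf222847f3e | grep " size "
|  size  | 4 |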

Set the status back to being "in-use" (reset-state only updates the state recorded in Cinder's database, it doesn't touch the volume itself):

$ cinder reset-state --state in-use 616dbaa6-f5a5-4f06-9855-fdf222847f3e
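
A quick check, mirroring the status check above, should now report the volume as "in-use" again:

$ cinder show 616dbaa6-f5a5-4f06-9855-fdf222847f3e | grep " status "
| status | in-use |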

Start the instance back up again

Start the instance again:

$ nova start ResizeMe0

Voila! Your old instance is now running with an increased disk size as requested.
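
One caveat worth noting: extending the volume only grows the underlying block device, not the filesystem on it. Many cloud images run cloud-init's growpart at boot and will expand the root filesystem automatically; if yours doesn't, something along these lines from inside the guest should work. This is a sketch that assumes an ext4 root filesystem on the first partition of /dev/vda; check your actual layout with lsblk first:

$ lsblk                      # confirm the root device and partition layout
$ sudo growpart /dev/vda 1   # grow partition 1 to fill the enlarged device
$ sudo resize2fs /dev/vda1   # grow the ext4 filesystem to fill the partition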

Cheap 3G Data in Australia

The Request

I was asked for advice about cheap 3G data plans. One of the people who asked me has a friend with no home Internet access, the friend wants access but doesn’t want to pay too much. I don’t know whether the person in question can’t use ADSL/Cable (maybe they are about to move house) or whether they just don’t want to pay for it.

3G data in urban areas in Australia is fast enough for most Internet use. But it’s not good for online games or VOIP. It’s also not very useful for Youtube and other online video. There is a variety of 3G speed testing apps for Android phones and there are presumably similar apps for the iPhone. Before signing up for 3G at home it’s probably best to get a friend who’s on the network in question to test Internet speed at your house, it would be annoying to sign up for an annual contract and then discover that your home is in a 3G dead spot.

Cheapest Offers

The best offer at the moment for moderate data use seems to be Amaysim with 10G for $99.90 and an expiry time of 365 days [1]. 10G in a year isn’t a lot, but it’s pre-paid so the user can buy another 10G of data whenever they want. At the moment $10 for 1G of data in a month and $20 for 2G of data in a month seem to be common offerings for 3G data in Australia. If you use exactly 1G per month then Amaysim isn’t any better than a number of other telcos, but if your usage varies (as it does with most people) then spreading the data use over several months offers significant savings without the need to save big downloads for the last day of the month.

For more serious Internet use Virgin has pre-paid offerings of 6G for $30 and 12G for $40, which have to be used in a month [2]. Anyone who uses an average of more than 3G per month will get better value from the Virgin offers.

If anyone knows of cheaper options than Amaysim and Virgin then please let me know.

Better Coverage

Both Amaysim and Virgin use the Optus network which covers urban areas quite well. I used Virgin a few years ago (and presume that it has only improved since then) and my wife uses Amaysim now. I haven’t had any great problems with either telco. If you need better coverage than the Optus network provides then Telstra is the only option. Telstra have a number of prepaid offers, the most interesting is $100 for 10G of data that expires in 90 days [3].

That Telstra offer is the same price as the Amaysim offer and only slightly more expensive than Virgin if you average 3.3G per month: $100 for 10G over 90 days works out to about $33 per month, against $30 per month for Virgin’s 6G offer. It’s a really good deal if you average 3.3G per month as you can expect it to be faster and have better coverage.

Which One to Choose?

I think that the best option for someone who is initially connecting their home via 3G is to start with Amaysim. Amaysim is the cheapest for small usage and they have an Amaysim Android app and web page for tracking usage. After using a few gig of data on Amaysim it should be possible to determine which plan is going to be most economical in the long term.

Connecting to the Internet

To get the best speed you need a 4G (AKA LTE) connection. But given that 3G is already fast enough to chew through an expensive amount of data, 4G doesn’t seem necessary to me. I’ve done a lot of work over the Internet with 3G from Virgin, Kogan, Aldi, and Telechoice and haven’t felt a need to pay for anything faster.

I think that the best thing to do is to use an old phone running Android 2.3 or iOS 4.3 as a Wifi access point. The cost of a dedicated 3G Wifi AP is enough to significantly change the economics of such Internet access and most people have access to old smart phones.

what-poles-for-the-tent

So Monty and Sean have recently blogged about the structures (1, 2) they think may work better for OpenStack. I like the thrust of their thinking but had some mumblings of my own to add.

Firstly, I very much like the focus on social structure and needs – what our users and deployers need from us. That seems entirely right.

And I very much like the getting away from TC picking winners and losers. That was never an enjoyable thing when I was on the TC, and I don’t think it has made OpenStack better.

However, the thing that picking winners and losers did was that it allowed users to pick an API and depend on it. Because it was the ‘X API for OpenStack’. If we don’t pick winners, then there is no way to say that something is the ‘X API for OpenStack’, and that means that there is no forcing function for consistency between different deployer clouds. And so this appears to be why Ring 0 is needed: we think our users want consistency in being able to deploy their application to Rackspace or HP Helion. They want vendor neutrality, and by giving up winners-and-losers we give up vendor neutrality for our users.

That’s the only explanation I can come up with for needing a Ring 0 – because it’s still winners and losers: picking an arbitrary project (e.g. keystone) and grandfathering it in, if you will. If we really want to get out of the role of selecting projects, I think we need to avoid this. And we need to avoid it without losing vendor neutrality (or we need to give up the idea of vendor neutrality).

One might say that we must pick winners for the very core just by its nature, but I don’t think that’s true. If the core is small, many people will still want vendor neutrality higher up the stack. If the core is large, then we’ll have a larger percentage of APIs covered and stable, granting vendor neutrality. So a core with fixed APIs will be under constant pressure to expand: not just from developers of projects, but from users that want API X to be fixed and guaranteed available and working a particular way at [most] OpenStack clouds.

Ring 0 also fulfils a quality aspect – we can check that it all works together well in a realistic timeframe with our existing tooling. We are essentially proposing to pick functionality that we guarantee to users; and an API for that which they have everywhere, and the matching implementation we’ve tested.

To pull from Monty’s post:

“What does a basic end user need to get a compute resource that works and seems like a computer? (end user facet)

What does Nova need to count on existing so that it can provide that.”

He then goes on to list a bunch of things, but most of them are not needed for that:

We need Nova (it’s the only compute API in the project today). We don’t need keystone (Nova can run in noauth mode and deployers could just have e.g. Apache auth on top). We don’t need Neutron (Nova can do that itself). We don’t need cinder (use local volumes). We need Glance. We don’t need Designate. We don’t need a tonne of stuff that Nova has in it (e.g. quotas) – end users kicking off a simple machine have -very- basic needs.

Consider the things that used to be in Nova: Deploying containers. Neutron. Cinder. Glance. Ironic. We’ve been slowly decomposing Nova (yay!!!) and if we keep doing so we can imagine getting to a point where there truly is a tightly focused code base that just does one thing well. I worry that we won’t get there unless we can ensure there is no pressure to be inside Nova to ‘win’.

So there’s a choice between a relatively large set of APIs that makes the guaranteed available APIs comprehensive, or a small set that will give users what they need just at the beginning but might not be broadly available, leaving us depending on some unspecified process for the deployers to agree and consolidate around which ones they make available consistently.

In short, one of the big reasons we were picking winners and losers in the TC was to consolidate effort around a single API – not implementation (keystone is already on its second implementation). All the angst about defcore and compatibility testing is going to be multiplied when there is lots of ecosystem choice around APIs above Ring 0, and the only reason that won’t be a problem for Ring 0 is that we’ll still be picking winners.

How might we do this?

One way would be to keep picking winners at the API definition level but not the implementation level, and make the competition be able to replace something entirely if they implement the existing API [and win hearts and minds of deployers]. That would open the door to everything being flexible – and it’s happened before with Keystone.

Another way would be to not even have a Ring 0. Instead have a project/program that is aimed at delivering the reference API feature-set built out of a single, flat Big Tent – and allow that project/program to make localised decisions about what components to use (or not). Testing that all those things work together is not much different than the current approach, but we’d have separated out as a single cohesive entity the building of a product (Ring 0 is clearly a product) from the projects that might go into it. Projects that have unstable APIs would clearly be rejected by this team; projects with stable APIs would be considered etc. This team wouldn’t be the TC: they too would be subject to the TC’s rulings.

We could even run multiple such teams – as hinted at by Dean Troyer in one of the email thread posts. Running with that, I’d then be suggesting:

  • IaaS product: selects components from the tent to make OpenStack/IaaS
  • PaaS product: selects components from the tent to make OpenStack/PaaS
  • CaaS product (containers)
  • SaaS product (storage)
  • NaaS product (networking – but things like NFV, not the basic Neutron we love today). Things where the thing you get is useful in its own right, not just as plumbing for a VM.

So OpenStack/NaaS would have an API or set of APIs, and they’d be responsible for considering maturity, feature set, and so on, but wouldn’t ‘own’ Neutron, or ‘Neutron incubator’ or any other component – they would be a *cross project* team, focused at the product layer, rather than the component layer, which nearly all of our folk end up locked into today.

Lastly, Sean has also pointed out that with a large N of projects we have N^2 communication issues – I think I’m proposing to drive the scope of any one project down to a minimum, which gives us more N, but shrinks the size within any project, so folk don’t burn out as easily, *and* so that it is easier to predict the impact of changes – clear contracts and APIs help a huge amount there.



September 23, 2014

Opportunities and Issues in Free Software

Presentation to Software Freedom Day (Melbourne), September 2014