Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

July 07, 2015

Linux Security Summit 2015 Schedule Published

The schedule for the 2015 Linux Security Summit is now published!

The refereed talks are:

  • CC3: An Identity Attested Linux Security Supervisor Architecture – Greg Wettstein, IDfusion
  • SELinux in Android Lollipop and Android M – Stephen Smalley, NSA
  • Linux Incident Response – Mike Scutt and Tim Stiller, Rapid7
  • Assembling Secure OS Images – Elena Reshetova, Intel
  • Linux and Mobile Device Encryption – Paul Lawrence and Mike Halcrow, Google
  • Security Framework for Constraining Application Privileges – Lukasz Wojciechowski, Samsung
  • IMA/EVM: Real Applications for Embedded Networking Systems – Petko Manolov, Konsulko Group, and Mark Baushke, Juniper Networks
  • Ioctl Command Whitelisting in SELinux – Jeffrey Vander Stoep, Google
  • IMA/EVM on Android Device – Dmitry Kasatkin, Huawei Technologies

There will be several discussion sessions:

  • Core Infrastructure Initiative – Emily Ratliff, Linux Foundation
  • Linux Security Module Stacking Next Steps – Casey Schaufler, Intel
  • Discussion: Rethinking Audit – Paul Moore, Red Hat

Also featured are brief updates on kernel security subsystems, including SELinux, Smack, AppArmor, Integrity, Capabilities, and Seccomp.

The keynote speaker will be Konstantin Ryabitsev, sysadmin for kernel.org.  Check out his Reddit AMA!

See the schedule for full details, and any updates.

This year’s summit will take place on the 20th and 21st of August, in Seattle, USA, as a LinuxCon co-located event.  As such, all Linux Security Summit attendees must be registered for LinuxCon. Attendees are welcome to attend the Weds 19th August reception.

Hope to see you there!

It's 10pm, do you know where your SSL certificates are?

The Internet is going encrypted. Revelations of mass surveillance of Internet traffic have given the Internet community the motivation to roll out encrypted services – the biggest of which is undoubtedly HTTP.

The weak point, though, is SSL Certification Authorities. These are “trusted third parties” who are supposed to validate that a person requesting a certificate for a domain is authorised to have a certificate for that domain. It is no secret that these companies have failed to do the job entrusted to them, again, and again, and again. Oh, and another one.

However, at this point, doing away with CAs and finding some other mechanism isn’t feasible. There is no clear alternative, and the inertia in the current system is overwhelming, to the point where it would take a decade or more to migrate away from the CA-backed SSL certificate ecosystem, even if there was something that was widely acknowledged to be superior in every possible way.

This is where Certificate Transparency comes in. This protocol, which works as part of the existing CA ecosystem, requires CAs to publish every certificate they issue, in order for the certificate to be considered “valid” by browsers and other user agents. While it doesn’t guarantee to prevent misissuance, it does mean that a CA can’t cover up or try to minimise the impact of a breach or other screwup – their actions are fully public, for everyone to see.
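For the curious, each log exposes a simple public HTTP API (defined in RFC 6962) that anyone can poll. Here is a minimal Python sketch of the idea; the log URL is just an example, and a real monitor would also decode the MerkleTreeLeaf blobs to extract the certificates, which is omitted here.

# Minimal sketch of polling a Certificate Transparency log via the
# RFC 6962 HTTP API (get-sth and get-entries endpoints).
import json
from urllib.request import urlopen

LOG = "https://ct.googleapis.com/pilot"  # example log URL

def get_sth(log=LOG):
    """Fetch the Signed Tree Head: current log size, root hash, signature."""
    return json.loads(urlopen(log + "/ct/v1/get-sth").read().decode())

def get_entries(start, end, log=LOG):
    """Fetch log entries start..end inclusive (base64 MerkleTreeLeaf blobs)."""
    url = "{}/ct/v1/get-entries?start={}&end={}".format(log, start, end)
    return json.loads(urlopen(url).read().decode())["entries"]

sth = get_sth()
print("log currently holds", sth["tree_size"], "certificates")
entries = get_entries(sth["tree_size"] - 32, sth["tree_size"] - 1)
print("fetched", len(entries), "recent entries")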

Much of Certificate Transparency’s power, however, is diminished if nobody is looking at the certificates which are being published. That is why I have launched sslaware.com, a site for searching the database of logged certificates. At present it is rather minimalist; however, I intend to add more features, such as real-time notifications (if a new cert for your domain or organisation is logged, you’ll get an e-mail about it) and more advanced searching capabilities.

If you care about the security of your website, you should check out SSL Aware and see what certificates have been issued for your site. You may be unpleasantly surprised.

July 06, 2015

Bitcoin Core CPU Usage With Larger Blocks

Since I was creating large blocks (41662 transactions), I added a little code to time how long they take once received (on my laptop, which is only an i3).

The obvious place to look is CheckBlock: a simple 1MB block takes a consistent 10 milliseconds to validate, and an 8MB block took 79 to 80 milliseconds, which is nice and linear.  (A 17MB block took 171 milliseconds).

Weirdly, that’s not the slow part: promoting the block to the best block (ActivateBestChain) takes 1.9-2.0 seconds for a 1MB block, and 15.3-15.7 seconds for an 8MB block.  At least it’s scaling linearly, but it’s just slow.

So, 16 Seconds Per 8MB Block?

I did some digging.  Just invalidating and revalidating the 8MB block only took 1 second, so something about receiving a fresh block makes it worse. I spent a day or so wrestling with benchmarking[1]…

Indeed, ConnectTip does the actual script evaluation: CheckBlock() only does a cursory examination of each transaction.  I’m guessing Bitcoin Core is not smart enough to parallelize a chain of transactions like mine, hence the 2 seconds per MB.  On normal transaction patterns, even my laptop should be about 4 times faster than that (but I haven’t actually tested it yet!).

So, 4 Seconds Per 8MB Block?

But things are going to get better: I hacked in the currently-disabled libsecp256k1, and the time for the 8MB ConnectTip dropped from 18.6 seconds to 6.5 seconds.

So, 1.6 Seconds Per 8MB Block?

I re-enabled optimization after my benchmarking, and the result was 4.4 seconds; that’s with libsecp256k1 and an 8MB block.

Let’s Say 1.1 Seconds for an 8MB Block

This is with some assumptions about parallelism; and remember this is on my laptop which has a fairly low-end CPU.  While you may not be able to run a competitive mining operation on a Raspberry Pi, you can pretty much ignore normal verification times in the blocksize debate.
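To recap the arithmetic in one place (these are the measurements above restated, nothing new is measured; the parallelism figure is the untested guess from earlier):

# Seconds to process an 8MB block at each stage of the investigation.
connect_tip_unoptimized = 18.6  # -O0 build, stock OpenSSL verification
connect_tip_secp256k1   = 6.5   # -O0 build with libsecp256k1 hacked in
connect_tip_optimized   = 4.4   # optimized build with libsecp256k1
parallelism_guess       = 4     # assumed speedup on normal (non-chained)
                                # transaction patterns -- untested!

print("%.2f seconds per MB, worst case" % (connect_tip_optimized / 8))
print("%.1f seconds per 8MB block, assuming parallelism"
      % (connect_tip_optimized / parallelism_guess))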



[1] I turned on -debug=bench, which produced impenetrable and seemingly useless results in the log.

So I added a print with a sleep, so I could run perf.  Then I disabled optimization, so I’d get understandable backtraces with perf.  Then I rebuilt perf, which is part of the kernel source package, because Ubuntu’s perf doesn’t demangle C++ symbols. (Are we having fun yet?)  I even hacked up a small program to help run perf on just that part of bitcoind.  Finally, after perf failed me (it doesn’t show 100% CPU, no idea why; I’d expect to see main in there somewhere…) I added stderr prints and ran strace on the thing to get timings.

July 05, 2015

Twitter posts: 2015-06-29 to 2015-07-05

CCR at OSCON

What is conflict?

I've given a "Constructive Conflict Resolution" talk twice now. First at DrupalCon Amsterdam, and again at DrupalCon Los Angeles. It's something I've been thinking about since joining the Drupal community working group a couple of years ago. I'm giving the talk again at OSCON in a couple of weeks. But this time, it will be different. Very different. Here's why.

After seeing tweets about Gina Likins' keynote at ApacheCon earlier this year, I reached out to her to ask if she'd be willing to collaborate with me on conflict resolution in open source, and ended up inviting her to co-present with me at OSCON. We've been working together over the past couple of weeks. It's been a joy, and a learning experience! I'm really excited about where the talk is heading now. If you're going to be at OSCON, please come along. If you're interested, please follow our tweets tagged #osconCCR.

Jen Krieger from Opensource.com interviewed Gina and me about our talk - here's the article: Teaching open source communities about conflict resolution

In the meantime, do you have stories of conflict in Open Source Communities to share?

  • How were they resolved?
  • Were they intractable?
  • Do the wounds still fester?
  • Was positive change an end result?
  • Do you have resources for dealing with conflict?

Tweet your thoughts to me @kattekrab

July 03, 2015

Python Decompilation, Max4Live Programming, Ableton Push Colour Calibration, Automated DJ'ing and More

I was recently discussing with someone how Ableton programming/scripting works. This was particularly within the context of the Ableton Push device and possible hacking of other devices to allow for more sophisticated functionality. Apparently, many of the core scripts use Python. They need to be decompiled to allow you to have a proper look at them, though. Obviously, some of the scripts are non-trivial and will require a sufficient understanding of both music and programming to be useful.



A decompilation of all files in the following directory,

C:\ProgramData\Ableton\Live9Suite\Resources\MIDI Remote Scripts\

is available here. The reason I've done it is that others who have previously done so have removed the results from their websites.



http://julienbayle.net/ableton-live-9-midi-remote-scripts/

http://blogs.bl0rg.net/netzstaub/2008/08/15/writing-ableton-control-surface-scripts/

http://remotescripts.blogspot.com.au/

http://julienbayle.net/PythonLiveAPI_documentation/Live9.0.6.xml



The decompilation was achieved using two small scripts which I created (available here); they use uncompyle2, https://github.com/Mysterie/uncompyle2 , at their core. Since the current code contains an error which prevents a successful RPM build, I've had to make a small modification.


For those who want to know, uncompyle2 currently only works with Python 2.7. To get it running in a Debian-based environment I had to change a symlink so that /usr/bin/python points to python2.7 as opposed to python2.6.
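If you'd like to reproduce the batch decompilation, the sketch below captures the general idea. It must run under Python 2.7, and the uncompyle_file() entry point is my assumption about the library's interface; the uncompyle2 command-line script that ships with the package does the same job.

# Walk Ableton's MIDI Remote Scripts tree and decompile every .pyc file.
import os
import uncompyle2

SRC = r"C:\ProgramData\Ableton\Live9Suite\Resources\MIDI Remote Scripts"
DST = r"C:\temp\decompiled"

for dirpath, _, filenames in os.walk(SRC):
    for name in filenames:
        if not name.endswith(".pyc"):
            continue
        rel = os.path.relpath(os.path.join(dirpath, name), SRC)
        out_path = os.path.join(DST, rel[:-1])  # foo.pyc -> foo.py
        out_dir = os.path.dirname(out_path)
        if not os.path.isdir(out_dir):
            os.makedirs(out_dir)
        with open(out_path, "w") as out:
            try:
                uncompyle2.uncompyle_file(os.path.join(dirpath, name), out)
            except Exception as e:
                print("failed: %s (%s)" % (rel, e))  # some files won't decompile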



To get the RPM build working I had to copy README.rst to README.
Running 'python setup.py bdist_rpm' gives me an RPM package. Running 'alien' allows conversion of the RPM to a DEB package for easy installation on a Debian-based platform.

http://sourceforge.net/projects/easypythondecompiler/

http://stackoverflow.com/questions/8189352/decompile-python-2-7-pyc

http://depython.com/

http://reverseengineering.stackexchange.com/questions/1701/decompiling-pyc-files


Successful RPM and DEB packages are available from my website, https://sites.google.com/site/dtbnguyen/

The following ZIP archive contains updated code, RPM, and DEB packages.
The following ZIP archive contains the decompiled code and scripts to automate decompilation of the Ableton code.


For those who are interested, Max4Live programming looks rather interesting for building devices and effects. It also looks like a perfect choice for those who may be on a limited budget and looking to extend Ableton's capabilities.

https://www.ableton.com/en/blog/programming-in-max-for-live/

http://www.youtube.com/playlist?list=PLasl9I6VeCCrNLAoOiKibDqJc1rsjLSDi

http://www.patrickmuller.de/n-e-w-s/max-msp-programming/

https://docs.cycling74.com/max5/vignettes/intro/doclive.html

https://www.ableton.com/en/help/article/how-get-started-max-live-9/

http://www.abletonop.com/2012/07/sell-your-live-devices-on-abletonop/

http://www.synthtopia.com/content/2009/11/24/5-reasons-to-avoid-max-for-live/

http://roberthenke.com/technology/m4l.html

https://cycling74.com/support/faq-maxforlive/

http://community.akaipro.com/akai_professional/topics/apc-mini-sequencers

https://www.youtube.com/watch?v=bmn8eJYEe9s

http://www.maxforlive.com/library/device.php?id=877



There have been some grumbles regarding Ableton Push quality control with regard to inconsistent colouring of LEDs. (Novation has had somewhat similar problems with their Launchpad series, but it hasn't been as obvious because most current models rely on a limited set of colours. Note that this issue isn't actually covered by warranty, and it's a difficult problem to fix from a manufacturing perspective; hence the need for this particular solution.) A small application was created but never publicly released. It's called 'PUSH_RGB_Calibration_Tool.zip' and basically allows for calibration of white on the device by altering the internal colour balance of the primary colours. It's available on some file-sharing websites. You'll require firmware version 1.7 for it to run.

https://forum.ableton.com/viewtopic.php?f=55&t=191939&start=45

https://www.ableton.com/en/help/article/push-firmware-release-notes/

https://archive.is/zLbnS

http://filepi.com/i/J9mGlId

https://www.virustotal.com/en/file/687a48127a65226eaae13ded393aafaccee46e445cec33a63964ece921fddb51/analysis/



Someone recently asked me about automated DJ options. I've seen a few but they seem to be becoming increasingly sophisticated.

https://www.youtube.com/results?search_query=how+to+dj

How To DJ - Phil K (Intermediate Level)

https://www.youtube.com/watch?v=4r3Pw8VJtq0
http://www.mixmeister.com/products-comparison.php

http://forum.djtechtools.com/showthread.php?t=21834

http://www.virtualdj.com/wiki/Automix.html

https://www.native-instruments.com/forum/threads/is-there-an-auto-dj-function.28501/

http://djtechtools.com/2012/08/06/what-controller-is-right-for-you-all-in-one-vs-modular-dj-set-ups/



Apparently, some of my ideas and perspectives regarding the modern world and capitalism are similar to those of Thomas Piketty. However, the way in which we would set about rebalancing global economics to ensure a fairer and more just economic system for all is somewhat different. More on this in time...

http://www.smh.com.au/world/pushing-back-on-socialism-ecuador-vents-its-presidential-ire-on-the-streets-20150702-gi3ew2.html

https://en.wikipedia.org/wiki/Thomas_Piketty

https://en.wikipedia.org/wiki/Capital_in_the_Twenty-First_Century

http://blog.melbournemusiccentre.com.au/2011/07/why-are-the-same-or-similar-items-cheaper-overseas/



Some options for purchasing used music equipment locally.

http://www.musicswopshop.com.au/

http://www.yourinstrument.com.au/

http://www.musosales.com.au/

http://melbourneexchange.com.au/

http://www.quicksales.com.au/



In case you've ever wanted to download videos from various websites, there are quite a few options out there.

http://www.clipconverter.cc/

http://keepvid.com/

http://www.flvdown.com/

http://www.flvdown.com/docs.php?doc=api

https://addons.mozilla.org/en-us/firefox/addon/flashgot/

http://sourceforge.net/projects/ytd2/

http://stackoverflow.com/questions/4032766/how-to-download-videos-from-youtube-on-java

http://superuser.com/questions/114196/how-to-find-the-stream-behind-a-flash-player



If you've had minor scratches on your optical discs you know that they can be extraordinarily frustrating. There are quite a few solutions out there for it though.

http://www.wisebread.com/quickly-removing-scratches-from-cds-and-dvds

http://www.wikihow.com/Fix-a-Scratched-CD

http://www.apartmenttherapy.com/7-bizarre-home-remedies-that-r-152502

http://howto.wired.com/wiki/Fix_a_Scratched_CD

http://www.instructables.com/id/Re-surfacing-CDs-so-they-work-again./



If you ever have to use automated imaging/partitioning software, sometimes things don't turn out perfectly. Hidden partitions can appear when they shouldn't, wreaking havoc with links throughout your system. Changing the partition type is the solution, though the actual 'type/code/number' may vary depending on the circumstances.

https://forums.lenovo.com/t5/Lenovo-P-Y-and-Z-series/Disk-Partitioning-and-OneKey-Recovery-Feature/td-p/8036

https://forums.lenovo.com/t5/Windows-7-Discussion/How-to-re-hide-OEM-partition/td-p/278948

https://forums.lenovo.com/t5/Lenovo-U-and-S-Series-Notebooks/help-me-with-OKR/m-p/133135#M11494

http://www.smh.com.au/digital-life/digital-life-news/meet-australian-company-ipechelon-one-of-the-biggest-antipiracy-operations-in-the-world-20150430-1mw93k.html



Options for locking down a device in case it is lost or stolen are increasingly popular nowadays, even in consumer-class devices. It's interesting how far some companies are willing to take this and what their implementations are like.

http://www.computerworld.com/article/2481347/endpoint-security/the-down-side-of-hard-drive-passwords.html

http://www.tomshardware.com/forum/258614-32-access-password-protected-caddy

http://www.sevenforums.com/hardware-devices/246015-toshiba-hdd-locked.html

http://www.computing.net/answers/hardware/how-to-clear-hard-drive-password/48527.html

http://forum.thinkpads.com/viewtopic.php?t=104873

http://www.freakyacres.com/remove_computrace_lojack

http://en.wikipedia.org/wiki/LoJack_for_Laptops

http://www.pcworld.com/product/1344774/acer-aspire-s3-951-2634g25nss-ultrabook.html

http://www.manualslib.com/manual/546525/Acer-Aspire-S5-391.html?page=56

https://www.technibble.com/forums/threads/unlockhd-exe-help.42725/

http://www.experts-exchange.com/Hardware/Laptops_Notebooks/Q_26614042.html

http://www.allservice.ro/forum/viewtopic.php?p=7958&sid=6d91be00086c530afa8311af27da81c5

http://www.allservice.ro/acer/

http://www.tomsguide.com/forum/61372-35-locked-laptop-harddrive

http://www.techrepublic.com/pictures/cracking-open-acer-aspire-s3-ultrabook/

http://www.techrepublic.com/blog/cracking-open/acer-aspire-s3-teardown-good-hardware-lackluster-construction/



Help evaluate, test, and design Windows 10.

https://insider.windows.com/

WTF Internal Combustion?

At the moment I’m teaching my son to drive in my Electric Car. Like my daughter before him, it’s his first driving experience. Recently, he has started to drive his grandfather’s pollution generator, which has a manual transmission. So I was trying to explain why the clutch is needed, and it occurred to me just how stupid internal combustion engines are.

Dad: So if you dump the clutch too early the engine stops.

Son: Why?

Dad: Well, a petrol engine needs a certain amount of energy to keep it running, for like compression for the next cycle. If you put too big a load on the engine, it doesn’t have enough power to move the car and keep the engine running.

Dad: Oh yeah and that involves a complex clutch that can be burnt out if you don’t use it right. Or an automatic transmission that requires a complex cooling system and means you use even more (irreplaceable) fossil fuel as it’s less efficient.

Dad: Oh, and petrol motors only work well in a very narrow range of RPM so we need complex gearboxes.

Dad thinks to himself: WTF internal combustion?

Electric motors aren’t like that. Mine works better at 0 RPM (more torque), not worse. When the car stops my electric motor stops. It’s got one moving part and one gear ratio. Why on earth would you keep using irreplaceable fossil fuels when stopped at the traffic lights? It just doesn’t make sense.

The reason of course is energy density. We need to store a couple of hundred km worth of energy in a reasonable amount of weight. Petrol has about 44 MJ/kg. Let’s see: one of my Lithium cells weighs 3.3kg, and is rated at 100AH at 3.2V. So that’s (100AH)(3600 seconds/H)(3.2V)/(3.3kg) = 0.35MJ/kg, or about 125 times worse than petrol. However that’s not the whole story: an EV is about 85% efficient in converting that energy into movement, while a dinosaur juice combuster is only about 15% efficient.
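Running those numbers with the efficiency factored in (a quick sketch using only the figures above):

# Petrol vs my Lithium cells: raw and efficiency-adjusted energy density.
cell_mj = 100 * 3600 * 3.2 / 1e6   # 100AH x 3600 s/H x 3.2V = 1.152 MJ
battery_mj_per_kg = cell_mj / 3.3  # ~0.35 MJ/kg
petrol_mj_per_kg = 44.0

print("raw ratio: %.0fx" % (petrol_mj_per_kg / battery_mj_per_kg))  # ~125x
# Apply drivetrain efficiency: EV ~85%, petrol ~15%.
effective = (petrol_mj_per_kg * 0.15) / (battery_mj_per_kg * 0.85)
print("effective ratio: %.0fx" % effective)  # ~22x -- much less dire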

Anyhoo it’s now possible to make EVs with 500 km range (hello Tesla) so energy density has been nailed. The rest is a business problem, like establishing a market for smart phones. We’re quite good at solving business problems, as someone tends to get rich.

I mean, if we can make billions of internal combustion engines with 1000′s of moving parts, cooling systems, gearboxes, anti-pollution, fuel injection, engine management, controlled detonation of an explosive (they also make napalm out of petrol) and countless other ancillary systems, I am sure humankind can make a usable battery!

Internal combustion is just a bad hack.

History is going to judge us as very stupid. We are chewing through every last drop of fossil fuel to keep driving to and from homes in the suburbs that we can’t afford, to buy stuff we don’t need, making plastic for gadgets we throw away, and flying 1000′s of km to exotic locations for holidays, and overheating the planet using our grandchildren’s legacy of hydrocarbons that took 75 million years to form.

Oh that’s right. It’s for the economy.

Wrapper for running perf on part of a program.

Linux’s perf competes with early git for the title of least-friendly Linux tool.  Because it’s tied to kernel versions, and the interface changes fairly randomly, you can never figure out how to use the version you need to use (hint: always use -g).

But when it works, it’s very useful.  Recently I wanted to figure out where bitcoind was spending its time processing a block; because I’m a cool kid, I didn’t use gprof, I used perf.  The problem is that I only want information on that part of bitcoind.  To start with, I put a sleep(30) and a big printf in the source, but that got old fast.

Thus, I wrote “perfme.c“.  Compile it (requires some trivial CCAN headers) and link perfme-start and perfme-stop to the binary.  By default it runs/stops perf record on its parent, but an optional pid arg can be used for other things (eg. if your program is calling it via system(), the shell will be the parent).
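If you'd rather not compile C, the same trick can be lashed up in a few lines of Python; this is just a sketch of the idea (perfme_start/perfme_stop here are illustrative names, not the actual perfme.c interface):

# Attach `perf record -g` to a process for just the interesting window.
import os
import signal
import subprocess

def perfme_start(pid=None):
    """Start profiling pid (default: our parent, as perfme.c does)."""
    target = pid if pid is not None else os.getppid()
    return subprocess.Popen(["perf", "record", "-g", "-p", str(target)])

def perfme_stop(perf_proc):
    """SIGINT makes perf flush and write perf.data, as if you hit Ctrl-C."""
    perf_proc.send_signal(signal.SIGINT)
    perf_proc.wait()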

July 02, 2015

LUV Main July 2015 Meeting: Ansible / BTRFS / Educating People to become Linux Users

Jul 7 2015 18:30
Jul 7 2015 20:30
Location: 

200 Victoria St. Carlton VIC 3053

Speakers:

• Andrew Pam, An introduction to Ansible

• Russell Coker, BTRFS update

• Lev Lafayette, Educating People to become Linux Users: Some Key Insights from Adult Education

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Certification: Necessary Evil?


I wrote this as a comment in response to Dries' post about the Acquia certification program - I thought I'd share it here too. I've commented there before.

I've also been conflicted about certifications. I still am. And this is because I fully appreciate the pros and cons. The more I've followed the issue, the more conflicted I've become about it.



My current stand is this: certifications are a necessary evil. Let me say a little on why that is.

I know many in the Drupal community are not in favour of certification, mostly because it can't possibly adequately validate their experience.



It also feels like an insult to be expected to submit to external assessment after years of service contributing to the code-base, and to the broader landscape of documentation, training, and professional service delivery.



Those in the know, know how to evaluate a fellow Drupalist. We know what to look for, and more importantly where to look. We know how to decode the secret signs. We can mutter the right incantations. We can ask people smart questions that uncover their deeper knowledge, and reveal their relevant experience.



That's our massive head start. Or privilege. 



Drupal is now a mature platform for web and digital communications. The new challenge that comes with that maturity is that non-Drupalists are using Drupal, and non-specialists are tasked with ensuring sites are built by competent people. These people don't have time to learn what we know. The best way we can help them is to support some form of certification.



But there's a flip side. We've all laughed at the learning curve cartoon about Drupal. Because it's true. It is hard. And many people don't know where to start. Whilst a certification isn't going to solve this completely, it will help to solve it, because it begins to codify the knowledge many of us take for granted.



Once that knowledge is codified, it can be studied. Formally in classes, or informally through self-directed exploration and discovery.



It's a starting point.



I empathise with the nay-sayers. I really do. I feel it too. But on balance, I think we have to do this. But even more, I hope we can embrace it with more enthusiasm.



I really wish the Drupal Association had the resources to run and champion the certification system, but the truth is, as Dries outlines above, it's a very time-consuming and expensive proposition to do this work.



So, Acquia - you have my deep, albeit somewhat reluctant, gratitude!



:-)



Thanks Dries - great post.



cheers,

Donna

(Drupal Association board member)

Tell your MP you support Same Sex Marriage

If you support the right for two people to get married regardless of gender, then please respectfully and politely contact your local federal member and let them know.

Those who oppose this have already started up their very effective networks, and we will need to work very hard to counter it.

If you're not sure who your local MP or Senators are, I recommend you use http://www.openaustralia.org.au/ to find out. Just punch in your post code and it will let you know, as well as give you a run down of their voting history.

Do it, DO IT NOW!

This message brought to you by the realisation that I'm going to be rainbow haired soon.


July 01, 2015

Hunting for GC1D1NB

I went for an after work walk to try and find GC1D1NB on Tuggeranong Hill yesterday. It wasn't a great success. I was in the right area but I just couldn't find it. Eventually I ran out of time and had to turn back. I am sure I'll have another attempt at this one soon.






Interactive map for this route.



Tags for this post: blog pictures 20150701-tuggeranong_hill photo canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches




FreeDV Robustness Part 5 – FreeDV 700

We’ve just released FreeDV v0.98 GUI software, which includes the new FreeDV 700 mode. This new mode has poorer speech quality than FreeDV 1600 but is far more robust, close to SSB on low SNR fading HF channels. Mel Whitten and the test team have made contacts over 1000 km using just 1 Watt!

You can download the Windows version of FreeDV 0.98 here.

To build it you need the latest codec2-dev and fdmdv2-dev from SVN; follow the Quickstart 1 instructions in fdmdv2-dev/README.txt. I’ve been cross compiling for Windows on my Ubuntu Linux machine, which is a time saver for me. Thanks Richard Shaw for your help with the cmake build system.

Mel and the team have been testing the software for the past few weeks and we’ve removed most of the small UI bugs. Thanks guys! I’m working on some further improvements to the robustness which I will release in a few weeks. Once we are happy with the FreeDV 700 mode, it will be ported to the SM1000. If you have time, and gcc/embedded experience I’d love to have some help with this!

It sounds pretty bad at 700 bit/s, but so does SSB at 0dB SNR. The new modem uses a pilot symbol assisted coherent PSK modem (FreeDV 1600 uses a differential PSK modem). The new modem also has diversity: the 7 x 75 symb/s QPSK carriers are copied to form a total of 14 half-power carriers. Overall this gives us a significantly lower operating point SNR than FreeDV 1600 for fading channels. However the bandwidth is a little wider (800 – 2400 Hz); let’s see how that goes through real radios.

Simulations indicate it has readability 4/5 at 0dB SNR on CCIR poor (fast) fading channels. It also has a PAPR of 7dB so if your PA can handle it you can hammer out 5dB more power than FreeDV 1600 (be careful).

For those of you who are integrating FreeDV into your own applications, the FreeDV API now contains the 700 bit/s mode, and freedv_tx and freedv_rx have been updated to demo it. The API has changed: we now have variables for the number of modem and speech samples, which change with the mode. The coherent PSK modem has the very strange sample rate of 7500 Hz, which at this stage the user (that’s you) has to deal with (libresample is your friend).
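To give a feel for what dealing with that rate looks like, here is a rough sketch using scipy's polyphase resampler standing in for libresample; 48000/7500 is exactly 32/5, so the conversion is clean:

# Convert between the modem's 7500 Hz rate and a 48 kHz sound card.
from scipy.signal import resample_poly

def modem_to_soundcard(samples_7500):
    """Upsample 7500 Hz modem output to 48 kHz for playback."""
    return resample_poly(samples_7500, 32, 5)

def soundcard_to_modem(samples_48000):
    """Downsample 48 kHz capture to the modem's 7500 Hz input rate."""
    return resample_poly(samples_48000, 5, 32)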

The 700 bit/s codec (actually 650 bit/s plus 2 data bits/frame) band-limits the input speech to between 600 and 2200 Hz to reduce the amount of information we need to encode. This might be something we can tweak; however, Mel and the team have shown we can communicate OK using this mode. Here are some samples at 1300 bit/s (the codec rate used in FreeDV 1600) and 700 bit/s, with no errors, for comparison.
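As a rough illustration of that band-limiting step (a Butterworth bandpass standing in for whatever filtering the codec actually does; codec2 operates on 8 kHz sampled speech):

# Keep roughly 600-2200 Hz of an 8 kHz speech signal before encoding.
from scipy.signal import butter, lfilter

FS = 8000.0  # speech sample rate, Hz

def bandlimit(speech):
    nyquist = FS / 2
    b, a = butter(4, [600.0 / nyquist, 2200.0 / nyquist], btype="band")
    return lfilter(b, a, speech)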

Lots more to talk about. I’ll blog some more when I pause and take a breath.

Comparing D7 and D8 outta the box

I did another video the other day. This time I've got a D7 and D8 install open side by side, and compare the process of adding an article.

Linux Australia council meeting minutes to be published on the planet

Wed, 2015-07-01 11:33

Last fortnight the Linux Australia council resolved to begin publishing their minutes to planet.linux.org.au.

While meeting minutes may seem boring, they in fact contain a lot of useful and interesting information about what the organisation and its various subcommittees are up to. As such we felt that this was useful information to publish more widely, and starting from now we'll be publishing them to the planet.

If you are interested in previous meetings and minute notes, you can find them at http://linux.org.au/news

June 30, 2015

New Charger for my EV

On Sunday morning I returned home and plugged in my trusty EV to feed it some electrons. Hmm, something is wrong. No lights on one of the chargers. Oh, and the charger circuit breaker in the car has popped. Always out for adventure, and being totally incompetent at anything above 5V and 1 Amp, I connected it directly to the mains. The shed lights started to waver ominously. Humming sounds like a Mary Shelley novel. And still no lights on the charger.

Oh Oh. Since disposing of my nasty carbon burner a few years ago I only have one car and it’s the EV. So I needed a way to get on the road quickly.

But luck was with me. I scoured my local EV association web site, and found a 2nd hand Zivan NG3 charger that was configured for a 120V lead acid pack. I have a 36 cell Lithium pack that is around 120V when charged. Different batteries have different charging profiles, for example the way current tapers. However all I really need is a bulk current source; my external Battery Management System will shut down the charger when the cells are charged.

Using some residual charge I EVed down the road where I met Richard, a nice man, fellow engineer, and member of our local EV association. I arranged to buy his surplus NG3, took it home and fired it up. Away it went, fairly hosing electrons into my EV at 20A. The old charger was just 10A so this is a bonus – my charging time will be halved. I started popping breakers again, as I was sucking 2.4kW out of the AC. So I re-arranged a few AC wires, ripped out the older chargers, rewired the BMS module loop a little and away I went with the new charger.

Here is the lash up for the initial test. The new Zivan NG3 is the black box on the left, the dud charger the yellow box on the right. The NG3 replaces the 96V dud charger and two 12V chargers (all wired in series) that I needed to charge the entire pack. My current clamp meter (so useful!) is reading 17A.

Old chargers removed and looking a bit neater. I still need to secure the NG3 somehow. My BMS controller is the black box behind the NG3. It shuts down the AC power to the chargers when the batteries signal they are full.

Pretty red lights in the early morning. Each Lithium cell has a BMS module across it that monitors the cell voltage. The red light means “just about full”. When the first cell hits 4.1V, it signals the BMS controller to shut down the charger. Richard pointed out that the BMS modules are shunt regulators, so they will discharge each cell back down to about 3.6V, ensuring they are all at about the same state of charge.

This is the only reason I go to petrol stations. For air. There is so little servicing on EVs that I forget to check the air for a year, some tyres were a bit low.

The old charger lasted 7 years and was used almost every day (say 2000 times) so I can’t complain. The NG3 was $875 2nd hand. Since converting to the Lithium pack in 2009 I have replaced the electric motor armature (about $900) as I blew it up from overheating, 2 cells ($150 ea) as we over discharged them, a DC-DC converter ($200 ish) and now this charger. Also tyres and brakes last year, which are the only wearing mechanical parts left. In that time I’ve done 45,000 electric km.

Percival trig

I had a pretty bad day, so I knocked off early and went for a walk before going off to the meeting at a charity I help out with. The walk was to Percival trig, which I have to say was one of the more boring trigs I've been to. Some of the forest nearby was nice enough, but the trig itself is stranded out in boring grasslands. Meh.






Interactive map for this route.



Tags for this post: blog pictures 20150630-percival photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger




June 29, 2015

A team walk around Red Hill

My team at work is trying to get a bit more active, so a contingent from the Canberra portion of the team went for a walk around Red Hill. I managed to sneak in a side trip to Davidson trig, but it was cheating because it was from the car park at the top of the hill. A nice walk, with some cool geocaches along the way.






Interactive map for this route.



Tags for this post: blog pictures 20150629-davidson photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger




The Value of Money - Part 4

- I previously remarked that since we use the concept of 'deterrence' so readily throughout the world, we are in a de facto state of 'Cold War' whose weapons are defense, intelligence, and economics. There's a lot of interesting information out there...

http://blogs.telegraph.co.uk/news/shashankjoshi/100224247/france-should-remember-its-own-history-before-complaining-too-much-about-american-espionage/

https://wikileaks.org/gifiles/docs/11/1172615_-ct-analysis-an-economic-security-role-for-european-spy.html

http://www.wikileaks-forum.com/nsa/332/r-james-woolsey-why-we-spy-on-our-allies-17-03-2000/24575/

http://www.abc.net.au/news/2013-11-08/australian-nsa-involvement-explained/5079786

http://www.abc.net.au/news/2013-11-08/the-chinese-embassy-bugging-controversy/5079148

http://www.news.com.au/national/australia-must-choose-between-chinese-cash-and-loyalty-to-the-us-as-se-asia-tensions-rise/story-fncynjr2-1227364070887

http://rt.com/news/270529-nsa-france-economy-wikileaks/ 

http://www.bloomberg.com/news/articles/2015-06-30/why-china-wants-a-strong-euro-as-greece-teeters

http://www.smh.com.au/federal-politics/political-news/china-not-fit-for-global-leadership-says-top-canberra-official-michael-thawley-20150630-gi1o1f.html 

- it makes sense that companies try to run lean rather than try to create. Everybody knows how to save. It's much more difficult to create something of value

- advertising is a broadcast means of achieving increased transactions, but in spite of targeted advertising it is still incredibly inefficient. Based on previous experience, even single-digit click-through rates for online advertising are considered suspect/possibly fraudulent
http://adage.com/article/guest-columnists/study-advertising-half-effective-previously-thought/228409/

- the easiest way of establishing the difference between what's needed and what's wanted is to turn off all advertising around you. Once you've done that, the difference between need and want becomes very strange and the efficacy of advertising on your perspective becomes much, much clearer

- most businesses fail. A lot of people basically have trouble running a business, have flawed business models, or don't achieve enough transactions to make it worthwhile

http://www.forbes.com/sites/ericwagner/2013/09/12/five-reasons-8-out-of-10-businesses-fail/

https://www.linkedin.com/pulse/20140915223641-170128193-what-are-the-real-small-business-survival-rates

http://www.smh.com.au/business/the-economy/google-says-give-rd-tax-breaks-to-small-techies-not-big-guys-20150407-1mfy30.html

http://smallbiztrends.com/2012/09/failure-rates-by-sector-the-real-numbers.html

http://www.isbdc.org/small-business-failure-rates-causes/

http://www.washingtonpost.com/blogs/fact-checker/wp/2014/01/27/do-9-out-of-10-new-businesses-fail-as-rand-paul-claims/

- immigration is a good thing provided that the people in question bring something to the economy. I look at the Japanese situation and wonder whether or not immigration is a more cost-effective means of dealing with their ageing problem than 'Abenomics'. Even if all they do is repatriate former nationals...

http://www.koreaherald.com/view.php?ud=20150628000326

- if you run through their numbers carefully, and think about where many of the world's top companies are headed, the performance (net profit in particular) of some of them isn't anywhere near as impressive (percentage-wise) as the share price growth in recent history. There are many small/mid cap firms that would outdo them (% net profit wise) if you're looking to invest

http://www.gurufocus.com/financials/AAPL&affid=45223

https://finance.yahoo.com/q/ks?s=MSFT+Key+Statistics

http://www.marketwatch.com/investing/stock/amzn/financials

http://www.marketwatch.com/investing/stock/goog/financials

https://investor.google.com/financial/tables.html

- in software engineering people continually harp on about the benefits of Agile, Extreme Programming and so on. Basically, all it amounts to is maintaining regular contact between staff members to get the best out of a piece of work. Peer pressure and continual oversight also force you to remain productive. Think about this in the real world. The larger the teams are, the more difficult it is to maintain oversight, particularly if the manager in question is of a poor standard and there are no systems in place to maintain standards. There is also a problem with unfettered belief in this methodology. If, in general, the team members are unproductive or of a poor standard, this will ripple throughout your team

- GDP is a horrible measure of productivity. As I've stated previously, the difference between perceived, effective, and actual value basically disguises where true value lies. Go spend some time in other parts of the world. I guarantee that there will be a massive difference in the way you view productivity (productivity means amount of work completed per unit time, not overall work)

- a good measure of a person's productivity/value is what happens if they take a day off or have a break. Observe the increase in workload for the other staff members and how they deal with it

- people keep harping on about self-interest as the best way of maintaining productivity and encouraging people to work hard. However, I have a huge problem with this, as it is incredibly hard to differentiate between actual, effective, and perceived value sometimes. At one particular firm, we had difficulties with this as well. I was therefore tasked with writing an application to monitor things (if you intend to write something along these lines, please be mindful of relevant HR and surveillance laws in your jurisdiction. Also, keep the program 'silent': staff will likely alter their behaviour if they know that the program is running.). The funny thing is that even people you think are productive tend to work in bursts. The main difference is the amount of time that transpires between each piece of work and the rate of work that occurs during each burst. The other thing that you should know is that even with senior members of staff, when you look at a lot of metrics it can be extremely difficult to justify their wage. Prepare to be surprised if you currently have poor oversight in your organisation. Lack of proper oversight breeds nepotism, lack of productivity, etc...

- you'll be shocked at what poor staff can do to your team. If the member in question is particularly bad, he in effect takes a number of other staff out of the equation at the same time. Think about this. You have all been recruited for highly skilled jobs, but one team member is poor. If he continually has to rely on other staff then he in effect takes out another member of your team simultaneously (possibly more). Think about this when training new staff. Give them enough time/training to get a gauge of what they'll be like, but if they can't hold up their part of the deal, be prepared to move them elsewhere within the organisation or let go of them. The same is also true in the opposite direction. Good employees have a multiplier effect. You'll only figure out the difference with proper oversight and monitoring. Without this, perceived value may completely throw you off

http://programmers.stackexchange.com/questions/179616/a-good-programmer-can-be-as-10x-times-more-productive-than-a-mediocre-one

http://swreflections.blogspot.com.au/2015/01/we-cant-measure-programmer-productivity.html

http://stackoverflow.com/questions/966800/mythical-man-month-10-lines-per-developer-day-how-close-on-large-projects

- we like to focus on large companies because they supposedly bring in a lot of business. The problem is if they have a monopoly. If they strangle the market of all value and don't put back in via taxes, employment, etc..., the state in question could be in a lot of trouble down the line. If/when the company moves, the economy will have evolved to see these companies as a core component. Other surrounding businesses will likely be poorly positioned to adapt when they leave for a place which offers better terms and/or conditions. The other problem is this: based on experience, people are willing to accept a lower wage to work for such firms (mostly for reasons of financial safety). There is no guarantee that you will be paid what you are worth

http://techcrunch.com/2015/06/28/policy-after-uber/

http://www.businessinsider.com/greeces-former-tax-collection-chief-harry-theoharis-explains-tax-evasion-problem-2015-7

http://www.irishtimes.com/business/economy/smes-account-for-99-7-of-business-enterprises-in-republic-1.2035800

http://www.irishtimes.com/business/economy/economy-primed-for-sustained-growth-says-goldman-sachs-1.2143071

http://www.wsj.com/articles/SB10001424127887324787004578496803472834948

http://www.afr.com/technology/technology-companies/ireland-scraps-google-tech-company-tax-breaks-20141019-119m80

https://en.wikipedia.org/wiki/Double_Irish_arrangement

http://blogs.cfainstitute.org/investor/2015/06/11/solutions-to-a-misbehaving-finance-industry/

http://www.theguardian.com/commentisfree/2015/jun/28/david-cameron-is-abusing-magna-carta-in-abolishing-our-rights

http://www.theguardian.com/world/2015/mar/25/irelands-economy-starting-to-fire-all-cylinders-imf-report

http://www.irishtimes.com/business/economy/who-owes-more-money-the-irish-or-the-greeks-1.2236034

http://www.theguardian.com/us-news/2015/feb/02/barack-obama-tax-profits-president-budget-offshore

http://www.smh.com.au/business/multinationals-channel-more-money-through-hubs-in-singapore-switzerland-than-ever-before-tax-office-says-20150204-1363u5.html

http://www.smh.com.au/business/retail/jeff-kennett-tells-coles-to-pay-12m-to-suppliers-20150630-gi19wv.html 

- when and if a large company collapses or moves, the problem is the number of others who rely on it for business

- people keep on saying that there are industries safe from off-shoring and automation. I think they're naive or haven't spent enough time around good technologists. Good employees will try to automate or develop processes to get things done more efficiently. Virtually all industries (or vast chunks of them) can be automated fully given time (trust me on this. I like to read a lot...).

http://www.technologyreview.com/view/519241/report-suggests-nearly-half-of-us-jobs-are-vulnerable-to-computerization/

http://www.futuristspeaker.com/2012/02/2-billion-jobs-to-disappear-by-2030/

http://www.forbes.com/sites/jmaureenhenderson/2012/08/30/careers-are-dead-welcome-to-your-low-wage-temp-work-future/

http://theconversation.com/australia-must-prepare-for-massive-job-losses-due-to-automation-43321

http://www.theguardian.com/business/2015/jun/16/computers-could-replace-five-million-australian-jobs-within-two-decades

The only way to keep yourself safe is to be multi-skilled and entrepreneurial, or else extremely skilled at a particular profession. Even then there's no guarantee that you'll be safe

http://time.com/3938678/obamacare-supreme-court-uber/

http://techcrunch.com/2015/06/28/policy-after-uber/

- sometimes I think people just don't get it. A small number of outliers is all it takes to change group behaviour. Even if we ban/regulate automation there will be those who adopt it without any misgivings, much like organised crime, use of illegal migrants, the cash economy, etc... The only real way is to force a cashless society so that we can run algorithms to check for unusual behaviour and breed a more puritan society

- minimal but effective regulation helps to level out the playing field. Making it too complex creates possible avenues for loopholes to be exploited. Too simple and without enough coverage and you have the same problem

- obvious ways to make sustained, long-term money include creating something that others need or want, having the ability to change perception, being able to see changes and adapt, arbitrage, and using a broadcast structure

- personal experience and the history of others with emerging markets such as Asia and Africa says that results can be extremely variable. Without on-the-ground knowledge and oversight you can just as easily make a substantial profit as a massive loss through fraud. There is very little you can do about this apart from taking due diligence and having measures/knowledge to be able to deal with it should it actually occur

http://timesofindia.indiatimes.com/world/uk/India-UKs-3rd-largest-job-creator-in-2014/articleshow/47714406.cms

- in reality, very few have a genuine chance of making it 'big', "Americans raised at the top and bottom of the income ladder are likely to remain there themselves as adults. Forty-three percent of those who start in the bottom are stuck there as adults, and 70 percent remain below the middle quintile. Only 4 percent of adults raised in the bottom make it all the way to the top, showing that the "rags-to-riches" story is more often found in Hollywood than in reality."

http://www.forbes.com/sites/jmaureenhenderson/2012/08/30/careers-are-dead-welcome-to-your-low-wage-temp-work-future/

- use first mover advantage as quickly as you can but have defensive measures in place

http://www.news.com.au/finance/business/is-the-free-ride-over-for-uber/story-fnda1bsz-1227419310284

- investment from third parties (angel investment, venture capital, etc...) can vary drastically, though more and more of them want at least a guaranteed return on investment

- based on what I've experienced, VC is much more difficult to get locally than in Europe or the United States. Luckily, more companies are willing to invest provided you are posting good numbers. One other thing I've discovered locally is that they are too lazy/unwilling to help even if the idea/s may be good (though this is changing)

http://www.afr.com/business/health/pharmaceuticals/merck-ceo-ken-frazier-on-keytruda-and-why-australians-miss-out-on-new-drugs-20150628-ghyisc

- we don't want to live day by day or have creditors/shareholders to report to, so we seek the highest profit whenever possible

- you can select a lot of numbers and prove essentially anything in business, but there are certain numbers that you simply can't ignore, such as net profit/income

- pay a person with cash by the hour, where he has to do the numbers, versus a lump sum, and he will look at things very differently. That goes for any profession, even high earning ones

- growth is great but only if it can be sustained and it is genuine. If you have substantial variation in growth, such as having a few fantastic years of growth and then a sudden drop-off that is fed by massive debt, you could be in a bit of trouble. You may say that you can just sell off assets. If the growth wasn't good enough, then do you see a problem? Moreover, what if you don't have something that is considered worthwhile or easy to sell off? For a state/business, your credit risk suddenly shoots up and you may possibly be priced out of the market. Targeted, sustainable growth should be the aim, not growth at all costs. The Chinese position towards economic management is actually making a lot more sense to me now, though I'm not certain that it would work quite as easily or be accepted in other states. You may say that we'll invest during good times? The problem is that we're often not wise enough to know when and where to invest

http://www.businessinsider.com/krugman-europe-greece-2015-6

http://www.businessinsider.com/el-erian-on-how-greece-will-impact-markets-2015-6

http://www.dawn.com/news/1162195/putins-next-challenge-propping-up-russias-troubled-banks

- in many places you are seeing a rise of left-wing parties. The worrying thing is that they'll lose sight of the benefits of capitalism and fall into the trap of a more puritan communist/socialist system, which hasn't really worked over the long term in the past. The other thing to be concerned about is that a lot of them don't have solid policies or answers to the problems which currently face us

http://theconversation.com/postcard-from-spain-where-now-for-the-quiet-revolution-43779

http://blogs.channel4.com/paul-mason-blog/greece-referendum-euro-die/3978

- if more people could distinguish real value from perceived and effective value, and needs from wants, we would have fewer asset bubbles and less price gouging across the board

http://www.news.com.au/finance/real-estate/bis-shrapnel-report-reveals-property-prices-to-fall/story-fncq3era-1227416605503?from=google_rss&google_editors_picks=true

https://www.ozbargain.com.au/node/104348

http://www.news.com.au/world/breaking-news/nz-govt-slammed-over-10m-ny-apartment/story-e6frfkui-1227416038766

- there will be those who say: who cares about the collective? Capitalism is composed of boom and bust cycles. Here's the problem. Most companies require debt to survive. If they can't survive the bust cycle they will be part of a collective collapse in the economy. Moreover, based on information I've come across, other developed countries have looked at the plans for the Eurozone and its ways of dealing with high debt, and are basically using that as the blueprint for the future. Your assets can and will be raided in the event of the state or systemic entities getting into trouble

http://www.heraldsun.com.au/business/greeks-stashing-money-in-homes-as-deadline-looms-for-debt-repayments/story-fni0d2cj-1227403214181?from=google_rss&google_editors_picks=true

http://www.nytimes.com/2015/06/30/business/dealbook/the-hard-line-on-greece.html?_r=0

http://www.usatoday.com/story/news/2015/06/29/evening-news-roundup-monday/29466899/

http://www.bbc.co.uk/news/world-europe-33324363

http://www.washingtonpost.com/blogs/wonkblog/wp/2015/06/30/7-questions-about-greeces-huge-crisis-you-were-too-embarrassed-to-ask/ 

- people say that we should get educated in order to have a high-paying job, but the problem is that we are increasingly siloed into specific roles. If we can't use the knowledge, the time and money we've spent on education has been for nothing. We require better alignment between educational curricula and professional settings

http://www.financialexpress.com/article/companies/infosys-wipro-tech-mahindra-it-giants-revamp-culture-to-attract-young-talent-battle-start-ups/86718/

- even if governments are aware that there are problems cropping up with our version of capitalism, it's possible that there are those who may be saying that we have no choice but to keep the cycle going. It's the best of the worst

http://www.bbc.co.uk/news/world-europe-33303105

- globalisation essentially buys us more time before things come to a head (if they do). Most of the scenarios point to organised debt forgiveness as a means of dealing with the problem. Private asset seizure is something that is being mentioned everywhere. Raw commodities stored at secure locations may be your only source of safety if things look bad and you are a private citizen

http://www.washingtonpost.com/blogs/wonkblog/wp/2015/06/29/greece/

http://www.news.com.au/finance/small-business/those-selling-safes-are-cashing-in-on-greeces-financial-uncertainty/story-fn9evb64-1227422325045

http://www.news.com.au/finance/economy/what-a-grexit-would-look-like/story-e6frflo9-1227422412614

http://www.telegraph.co.uk/finance/economics/11712098/Europe-has-suffered-a-reputational-catastrophe-in-Greece.html 

- if you want a resilient economy you need to maintain a level playing field and a flexible workforce, and possibly limit the size and influence of major companies in your economy

http://www.vice.com/en_uk/read/the-irish-emigration-crisis--a-new-century-an-old-problem

- I don't get it. Heaps of countries have adequate blocking technology to help deal with this if they deem it illegal. Deploy it correctly and your rioting problem is over with...

http://www.arkansasonline.com/news/2015/jun/27/hollande-uber-unit-illegal-dismantle-it/

http://www.theguardian.com/technology/2015/jun/26/uber-expansion-meets-global-revolt-and-crackdown

http://timesofindia.indiatimes.com/tech/tech-news/Officials-hint-at-possible-win-for-Uber-in-Mexico-City/articleshow/47861342.cms

- as stated previously, I've come to the conclusion that a lot of financial instruments are useless. They effectively provide a means of making money under any conditions. If we remove these instruments from play then I think it may be possible to return to less speculative markets that depend more on fundamentals

- anyone can create something of value. The issue is whether it is negligible versus tangible value. This will also determine your business model

- you may know that there is a bubble, but as China and local experiences have demonstrated, popping it gracefully is far from easy. Moreover, by the time you figure out there's a bubble it may often be too late. Too many people may have too many vested interests

http://www.reuters.com/article/2015/06/29/us-usa-puertorico-restructuring-idUSKCN0P903Q20150629

http://www.businessinsider.com/puerto-rico-is-struggling-to-repay-its-debt-2015-6

http://jamaica-gleaner.com/article/commentary/20150630/editorial-jamaica-no-greece

http://www.heraldsun.com.au/business/breaking-news/world-bank-warns-china-on-reforms/story-fnn9c0hb-1227423791391?nk=5438df4578f2af3f2d269863d041c50c-1435746465 

- theory helps but you won't figure out how market economies work without first hand experience


http://www.afr.com/news/policy/budget/big-government-flourishes-under-tony-abbott-and-joe-hockey-20150513-gh0sgr

http://www.dailytelegraph.com.au/news/nsw/joe-blasts-welfare-rich-who-have-more-money-to-spend-than-workers/story-fni0cx12-1227357517141

http://www.smh.com.au/business/australiachina-free-trade-agreement-favours-chinese-investors-20150621-ghthjr.html

http://www.afr.com/technology/telstra-cuts-broadband-plan-fees-to-counter-rivals-20150626-ghyir7

http://www.afr.com/opinion/columnists/trophy-trade-deals-wont-change-the-imfs-dismal-outlook-20150628-ghysnn

http://www.brisbanetimes.com.au/act-news/uberx-australian-drivers-working-as-coequals-to-rideshare-tech-company-20150629-ghvjx1.html

http://www.dailytelegraph.com.au/business/breaking-news/hockey-flying-blind-on-negative-gearing/story-fnn9c0gv-1227417798217?nk=0b226f408634f8d8ba57220c3d074f55-1435471944

http://www.abc.net.au/news/2013-11-08/the-chinese-embassy-bugging-controversy/5079148

http://www.macleans.ca/news/world/why-refugees-are-fleeing-france-for-britain/

http://www.businessinsider.com.au/facebooks-shot-at-cisco-just-got-deadly-2015-3

http://www.theglobeandmail.com/globe-drive/culture/technology/gyroscopes-will-allow-bike-to-stay-upright-when-stopped/article24920123/

http://www.businessinsider.in/5-things-Elon-Musk-believed-would-change-the-future-of-humanity-in-1995/articleshow/46831594.cms

Craige McWhirter: How To Delete a Cinder Snapshot with a Status of error or error_deleting With Ceph Block Storage

When deleting a volume snapshot in OpenStack you may sometimes get an error message stating that Cinder was unable to delete the snapshot.

There are a number of reasons why Ceph may report a snapshot as unable to be deleted; however, the most common reason in my experience has been that a Cinder client connection has not yet been closed, possibly because a client crashed.

If you were to look at the snapshots in Cinder, the status is usually error or error_deleting:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |     Status     |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z |  40  |
| 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error_deleting | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-05-18T00:00:01Z |  40  |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
| 52c43ec8-e713-4f87-b329-3c681a3d31f2 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error_deleting | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-24T14:00:02Z |  40  |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z |  40  |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+

When you check Ceph you may find the following snapshot list:

# rbd snap ls my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46
SNAPID NAME                                              SIZE
  2069 snapshot-2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 40960 MB
  2526 snapshot-52c43ec8-e713-4f87-b329-3c681a3d31f2 40960 MB
  2558 snapshot-47fbbfe8-643c-4711-a066-36f247632339 40960 MB

The astute will notice that there are only 3 snapshots listed in Ceph yet 5 listed in Cinder. We can immediately exclude 47fbbfe8, which is available in both Cinder and Ceph, so there are no issues there.

You will also notice that the snapshots with the status error are not in Ceph, while the two with error_deleting are. My take on this is that for the error status, Cinder never received the message from Ceph stating that the snapshot had been deleted successfully, whereas for the error_deleting status, Cinder had been unsuccessful in offloading the request to Ceph.

Each status will need to be handled separately. I'm going to start with the error_deleting snapshots, which are still present in both Cinder and Ceph.

In MariaDB, set the status from error_deleting to available:

MariaDB [cinder]> update snapshots set status='available' where id = '2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

MariaDB [cinder]> update snapshots set status='available' where id = '52c43ec8-e713-4f87-b329-3c681a3d31f2';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
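
Equivalently, both rows can be flipped in one statement. A minimal sketch, assuming the standard mysql client on the Cinder database host:

# mysql cinder -e "UPDATE snapshots SET status='available' WHERE id IN ('2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0', '52c43ec8-e713-4f87-b329-3c681a3d31f2');"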

Check in Cinder that the status of these snapshots has been updated successfully:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |     Status     |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z |  40  |
| 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-05-18T00:00:01Z |  40  |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
| 52c43ec8-e713-4f87-b329-3c681a3d31f2 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-24T14:00:02Z |  40  |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z |  40  |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+

Delete the newly available snapshots from Cinder:

% cinder snapshot-delete 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0
% cinder snapshot-delete 52c43ec8-e713-4f87-b329-3c681a3d31f2

Then check the results in Cinder and Ceph:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |     Status     |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z |  40  |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z |  40  |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+

# rbd snap ls my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46
SNAPID NAME                                              SIZE
  2558 snapshot-47fbbfe8-643c-4711-a066-36f247632339 40960 MB

So we are done with Ceph now, as the error snapshots do not exist there. As they only exist in Cinder, we need to mark them as deleted in the Cinder database:

MariaDB [cinder]> update snapshots set status='deleted', deleted='1' where id = '07d75992-bf3f-4c9c-ab4e-efccdfc2fe02';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

MariaDB [cinder]> update snapshots set status='deleted', deleted='1' where id = 'a595180f-d5c5-4c4b-a18c-ca56561f36cc';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Now check the status in Cinder:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |   Status  |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | available | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+

Now your errant Cinder snapshots have been removed.

Enjoy :-)

June 28, 2015

Twitter posts: 2015-06-22 to 2015-06-28

RAID Pain

One of my clients has a NAS device. Last week they tried to do what should have been a routine RAID operation: they added a new larger disk as a hot-spare and told the RAID array to replace one of the active disks with the hot-spare. The aim was to replace the disks one at a time to grow the array. But one of the other disks had an error during the rebuild and things fell apart.

I was called in after the NAS had been rebooted when it was refusing to recognise the RAID. The first thing that occurred to me is that maybe RAID-5 isn’t a good choice for the RAID. While it’s theoretically possible for a RAID rebuild to not fail in such a situation (the data that couldn’t be read from the disk with an error could have been regenerated from the disk that was being replaced) it seems that the RAID implementation in question couldn’t do it. As the NAS is running Linux I presume that at least older versions of Linux have the same problem. Of course if you have a RAID array that has 7 disks running RAID-6 with a hot-spare then you only get the capacity of 4 disks. But RAID-6 with no hot-spare should be at least as reliable as RAID-5 with a hot-spare.

Whenever you recover from disk problems the first thing you want to do is make a read-only copy of the data; then you can’t make things worse. This is a problem when you are dealing with 7 disks; fortunately they were only 3TB disks and each had only 2TB in use. So I found some space on a ZFS pool and bought a few 6TB disks which I formatted as BTRFS filesystems. For this task I only wanted filesystems that support snapshots, so I could work on snapshots rather than the original copy.
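
As a minimal sketch of that kind of setup (paths and device names hypothetical): take a read-only snapshot of the subvolume holding the disk images, then attach an image as a read-only loop device so nothing can write to it:

btrfs subvolume snapshot -r /mnt/recovery/images /mnt/recovery/images-snap
losetup -r /dev/loop0 /mnt/recovery/images-snap/disk0.img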

I expect that at some future time I will be called in when an array of 6+ disks of the largest available size fails. This will be a more difficult problem to solve as I don’t own any system that can handle so many disks.

I copied a few of the disks to a ZFS filesystem on a Dell PowerEdge T110 running kernel 3.2.68. Unfortunately that system seems to have a problem with USB: when copying from 4 disks at once each disk was reading about 10MB/s, and when copying from 3 disks each disk was reading about 13MB/s. It seems that the system has an aggregate USB bandwidth of 40MB/s, slightly greater than USB 2.0 speed. This made the process take longer than expected.

One of the disks had a read error, which was presumably the cause of the original RAID failure. dd has the option conv=noerror to make it continue after a read error. This initially seemed good, but the resulting file was smaller than the source partition: it seems that conv=noerror doesn’t seek the output file to maintain input and output alignment. If I had a hard drive filled with plain ASCII that MIGHT even be useful, but for a filesystem image it’s worse than useless. The only option was to repeatedly run dd with matching skip and seek options, incrementing by 1K until it had passed the section with errors.
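
A rough sketch of that workaround, with hypothetical names: START is the first bad block and COUNT the length of the bad region, in 1K units. (Note that conv=noerror,sync, which pads failed reads with zeros, is another way of keeping input and output aligned for whole-block errors.)

for i in $(seq 0 $COUNT) ; do dd if=/dev/sdg1 of=sdg1.img bs=1024 count=1 skip=$((START+i)) seek=$((START+i)) conv=noerror,notrunc 2>/dev/null ; done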

for n in /dev/loop[0-6] ; do echo $n ; mdadm --examine -v -v --scan $n | grep Events ; done

Once I had all the images I had to assemble them. The Linux Software RAID didn’t like the array because not all the devices had the same event count. The way Linux Software RAID (and probably most RAID implementations) work is that each member of the array has an event counter that is incremented when disks are added, removed, and when data is written. If there is an error then after a reboot only disks with matching event counts will be used. The above command shows the Events count for all the disks.

Fortunately different event numbers aren’t going to stop us. After assembling the array (which failed to run) I ran “mdadm -R /dev/md1”, which kicked some members out. I then added them back manually and forced the array to run. Unfortunately attempts to write to the array failed (presumably due to mismatched event counts).
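
A rough sketch of that sequence (device names hypothetical; which members get kicked out will vary):

mdadm --assemble --force /dev/md1 /dev/loop[0-6]  # assemble despite mismatched event counts
mdadm -R /dev/md1                                 # try to run it; stale members may be ejected
mdadm /dev/md1 --re-add /dev/loop3                # manually re-add an ejected member
mdadm -R /dev/md1                                 # force the degraded array to run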

Now my next problem is that I can make a 10TB degraded RAID-5 array which is read-only but I can’t mount the XFS filesystem because XFS wants to replay the journal. So my next step is to buy another 2*6TB disks to make a RAID-0 array to contain an image of that XFS filesystem.

Finally backups are a really good thing…

June 27, 2015

git.openstack.org adventures

Over the past few months I started to notice occasional issues when cloning repositories (particularly nova) from git.openstack.org.

It would fail with something like

git clone -vvv git://git.openstack.org/openstack/nova .
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

The problem would occur sporadically during our 3rd party CI runs, causing them to fail. Initially these failures were somewhat ignored, as rechecks on the jobs would succeed and the world would be shiny again. However, as they became more prominent the issue needed to be addressed.

When a patch merges in gerrit it is replicated out to 5 different cgit backends (git0[1-5].openstack.org). These are then balanced by two HAProxy frontends which are on a simple DNS round-robin.

                          +-------------------+
                          | git.openstack.org |
                          |    (DNS Lookup)   |
                          +--+-------------+--+
                             |             |
                    +--------+             +--------+
                    |           A records           |
+-------------------v----+                    +-----v------------------+
| git-fe01.openstack.org |                    | git-fe02.openstack.org |
|   (HAProxy frontend)   |                    |   (HAProxy frontend)   |
+-----------+------------+                    +------------+-----------+
            |                                              |
            +-----+                                    +---+
                  |                                    |
            +-----v------------------------------------v-----+
            |    +---------------------+  (source algorithm) |
            |    | git01.openstack.org |                     |
            |    |   +---------------------+                 |
            |    +---| git02.openstack.org |                 |
            |        |   +---------------------+             |
            |        +---| git03.openstack.org |             |
            |            |   +---------------------+         |
            |            +---| git04.openstack.org |         |
            |                |   +---------------------+     |
            |                +---| git05.openstack.org |     |
            |                    |  (HAProxy backend)  |     |
            |                    +---------------------+     |
            +------------------------------------------------+

Reproducing the problem was difficult. At first I was unable to reproduce locally, or even on an isolated turbo-hipster run. Since the problem appeared to be specific to our 3rd party tests (little evidence of it in 1st party runs) I started by adding extra debugging output to git.

We were originally cloning repositories via the git:// protocol. The debugging information was unfortunately limited and provided no useful diagnosis. Switching to https allowed for more CURL output (when using GIT_CURL_VERBOSE=1 and GIT_TRACE=1) but this in itself just created noise. It actually took me a few days to remember that the servers are running arbitrary code anyway (a side effect of testing) and therefore cloning over the potentially insecure http protocol didn’t add any further risk.
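
For example, the kind of verbose invocation used (a sketch):

GIT_CURL_VERBOSE=1 GIT_TRACE=1 git clone -vvv https://git.openstack.org/openstack/nova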

Over http we got a little more information, but still nothing that was conclusive at this point:

git clone -vvv http://git.openstack.org/openstack/nova .

error: RPC failed; result=18, HTTP code = 200
fatal: The remote end hung up unexpectedly
fatal: protocol error: bad pack header

After a bit it became more apparent that the problems would occur mostly during high (patch) traffic times, that is, when a lot of tests need to be queued. This led me to think that either the network turbo-hipster was on was flaky when doing multiple git clones in parallel, or the git servers were flaky. The lack of similar upstream failures led me to initially suspect the former. In order to reproduce the failure I decided to use Ansible to do multiple clones of repositories and see if that would uncover the problem. If needed I would have then extended this to orchestrating other parts of turbo-hipster, in case the problem stemmed from something else.

Firstly, I needed to clone from a bunch of different servers at once to simulate the network failures more closely (rather than doing multiple clones on the one machine, or from the one IP in containers, for example). To simplify this I decided to learn some Ansible and launch a bunch of nodes on Rackspace (instead of doing it by hand).

Using the pyrax module I put together a crude playbook to launch a bunch of servers. There are likely much neater and better ways of doing this, but it suited my needs. The playbook takes care of placing appropriate ssh keys so I could continue to use them later.

    ---
    - name: Create VMs
      hosts: localhost
      vars:
        ssh_known_hosts_command: "ssh-keyscan -H -T 10"
        ssh_known_hosts_file: "/root/.ssh/known_hosts"
      tasks:
        - name: Provision a set of instances
          local_action:
            module: rax
            name: "josh-testing-ansible"
            flavor: "4"
            image: "Ubuntu 12.04 LTS (Precise Pangolin) (PVHVM)"
            region: "DFW"
            count: "15"
            group: "raxhosts"
            wait: yes
          register: raxcreate

        - name: Add the instances we created (by public IP) to the group 'raxhosts'
          local_action:
            module: add_host
            hostname: "{{ item.name }}"
            ansible_ssh_host: "{{ item.rax_accessipv4 }}"
            ansible_ssh_pass: "{{ item.rax_adminpass }}"
            groupname: raxhosts
          with_items: raxcreate.success
          when: raxcreate.action == 'create'

        - name: Sleep to give time for the instances to start ssh
          #there is almost certainly a better way of doing this
          pause: seconds=30

        - name: Scan the host key
          shell: "{{ ssh_known_hosts_command}} {{ item.rax_accessipv4 }} >> {{ ssh_known_hosts_file }}"
          with_items: raxcreate.success
          when: raxcreate.action == 'create'

    - name: Set up sshkeys
      hosts: raxhosts
      tasks:
       - name: Push root's pubkey
         authorized_key: user=root key="{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"

From here I can use Ansible to work on those servers using the rax inventory. This allows me to address any nodes within my tenant and then log into them with the seeded sshkey.
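
For example, a quick check that every launched node is reachable (a sketch, assuming the standard rax.py dynamic inventory script):

ansible raxhosts -i rax.py -m ping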

The next step of course was to run tests. Firstly I just wanted to reproduce the issue, so I crudely set up an environment that simply clones nova multiple times.

    ---
    - name: Prepare servers for git testing
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Install git
          apt: name=git state=present update_cache=yes
        - name: remove nova if it is already cloned
          shell: 'rm -rf nova'

    - name: Clone nova and monitor tcpdump
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Clone nova
          shell: "git clone http://git.openstack.org/openstack/nova"

By default Ansible runs with 5 forked processes, meaning that it works on 5 servers at a time. We want to exercise git heavily (in the same way turbo-hipster does), so we use the --forks parameter to run the clone on all the servers at once. The plan was to keep launching servers until the error reared its head from the load.
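
For example, to run the clone play against every node simultaneously (playbook name hypothetical):

ansible-playbook -i rax.py clone-nova.yaml --forks 100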

To my surprise the failures appeared with very few nodes (fewer than 15, which I kept as my minimum test size). To confirm, I also ran the tests after launching further nodes to see it fail at 50 and 100 concurrent clones. It turned out that the more I cloned, the higher the failure rate.

Now that I had the problem reproducing, it was time to do some debugging. I modified the playbook to capture tcpdump information during the clone. Initially git was cloning over IPv6 so I turned that off on the nodes to force IPv4 (just in case it was a v6 issue, but the problem did present itself on both networks). I also locked git.openstack.org to one IP rather than randomly hitting both front ends.

    ---
    - name: Prepare servers for git testing
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Install git
          apt: name=git state=present update_cache=yes
        - name: remove nova if it is already cloned
          shell: 'rm -rf nova'

    - name: Clone nova and monitor tcpdump
      hosts: josh-testing-ansible*
      serial: "100%"
      vars:
        cap_file: tcpdump_{{ ansible_hostname }}_{{ ansible_date_time['epoch'] }}.cap
      tasks:
        - name: Disable ipv6 1/3
          sysctl: name="net.ipv6.conf.all.disable_ipv6" value=1 sysctl_set=yes
        - name: Disable ipv6 2/3
          sysctl: name="net.ipv6.conf.default.disable_ipv6" value=1 sysctl_set=yes
        - name: Disable ipv6 3/3
          sysctl: name="net.ipv6.conf.lo.disable_ipv6" value=1 sysctl_set=yes
        - name: Restart networking
          service: name=networking state=restarted
        - name: Lock git.o.o to one host
          lineinfile: dest=/etc/hosts line='23.253.252.15 git.openstack.org' state=present
        - name: start tcpdump
          command: "/usr/sbin/tcpdump -i eth0 -nnvvS -w /tmp/{{ cap_file }}"
          async: 6000000
          poll: 0 
        - name: Clone nova
          shell: "git clone http://git.openstack.org/openstack/nova"
          #shell: "git clone http://github.com/openstack/nova"
          ignore_errors: yes
        - name: kill tcpdump
          command: "/usr/bin/pkill tcpdump"
        - name: compress capture file
          command: "gzip {{ cap_file }} chdir=/tmp"
        - name: grab captured file
          fetch: src=/tmp/{{ cap_file }}.gz dest=/var/www/ flat=yes

This gave us a bunch of compressed capture files, with which I was then able to seek the help of my colleagues to debug (a particular thanks to Angus Lees). The results from an early run can be seen here: http://119.9.51.216/old/run1/

Gus determined that the problem was due to a RST packet coming from the source at roughly 60 seconds. This indicated it was likely we were hitting a timeout at the server or a firewall during the git-upload-pack of the clone.
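
For those reading the captures back, a reset like that is straightforward to spot with a filter; a sketch, using the capture naming from the playbook above:

tcpdump -nn -ttt -r tcpdump_host_epoch.cap 'tcp[tcpflags] & tcp-rst != 0'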

The solution turned out to be rather straightforward. The git-upload-pack had simply grown too large and would time out depending on the load on the servers. There was a timeout in apache as well as in the HAProxy config for both frontend and backend responsiveness. The relevant patches can be found at https://review.openstack.org/#/c/192490/ and https://review.openstack.org/#/c/192649/

While upping the timeout avoids the problem, certain projects are clearly pushing the infrastructure to its limits. As such a few changes were made by the infrastructure team (in particular James Blair) to improve git.openstack.org’s responsiveness.

Firstly, git.openstack.org is now a higher performance (30GB) instance, a large step up from the (8GB) instances previously used as the frontend. Moving to one frontend additionally meant the HAProxy algorithm could be changed to leastconn to help balance connections better (https://review.openstack.org/#/c/193838/).

                          +--------------------+
                          | git.openstack.org  |
                          | (HAProxy frontend) |
                          +----------+---------+
                                     |
                                     |
            +------------------------v------------------------+
            |  +---------------------+  (leastconn algorithm) |
            |  | git01.openstack.org |                        |
            |  |   +---------------------+                    |
            |  +---| git02.openstack.org |                    |
            |      |   +---------------------+                |
            |      +---| git03.openstack.org |                |
            |          |   +---------------------+            |
            |          +---| git04.openstack.org |            |
            |              |   +---------------------+        |
            |              +---| git05.openstack.org |        |
            |                  |  (HAProxy backend)  |        |
            |                  +---------------------+        |
            +-------------------------------------------------+

All that was left was to see if things had improved. I re-ran the test across 15, 30 and then 45 servers. These were all able to clone nova reliably where they had previously been failing. I then upped it to 100 servers, where the cloning began to fail again.

Post-fix logs for those interested:

http://119.9.51.216/run15/

http://119.9.51.216/run30/

http://119.9.51.216/run45/

http://119.9.51.216/run100/

http://119.9.51.216/run15per100/

At this point, however, I’m basically performing a Distributed Denial of Service attack against git. As such, while the servers aren’t immune to a DDoS, the problem appears to be fixed.

June 26, 2015

The Value of Money - Part 3

The Western world generally saw the collapse of the Soviet Union as proof positive of the superiority of capitalism over communism/socialism. Most of the arguments ran along the lines that the sheer scale of centrally managing an economy, the nepotism and corruption it bred, the stifling of innovation, and the failure to feed into the needs and wants of its constituents were the reasons for the failure. The irony is that many of the same flaws we saw in communism and socialism are visible in capitalism now. Given that more and more developed economies are getting into trouble, I wonder whether this is the true way forward. The European Union, United States, Japan, and others have all recently endured serious economic difficulty and are projected to continue to experience prolonged issues.



My belief is that if capitalism and free market economics are to work into the future, constraints must be placed on the size of firms relative to the size of the market/economy. Below are some reasons for this belief, as well as some other notes regarding market economics:
- I believe that one of the reasons we favour free market economics is that it limits the severity of problems if/when someone/something collapses. If a government collapses you have trouble everywhere. If a company collapses it only impacts the company and the immediate supply chain, distributors, retailers, etc...

- the other problem is that most of the companies that grow to this size have no choice but to be driven by greed. Even if they pay their fair share of taxes, most of them rely on debt of some sort in order to maintain a viable business. Without cash flow from the stock market, their creditors, etc... they can't continue to pay the bills. Hence, they must satisfy their own needs as well as those of their shareholders and creditors at the expense of those in the wider community. An example of this is the large retail chains that operate in many of the more developed countries. The problem is that their power can now rival that of the state. For instance, in Australia, "Almost 40 cents in every dollar we spend at the shops is now taken by a Woolworths or Wesfarmers-owned retail entity", with their interests including "interests in groceries, fuel, liquor, gambling, office supplies, electronics, general merchandise, insurance and hardware, sparking concerns that consumers will pay more."

http://www.news.com.au/finance/money/coles-and-woolworths-receive-almost-40-per-cent-of-australian-retail-spending/story-e6frfmd9-1226043866311

If the chain collapses it's likely that hundreds of thousands of jobs will be lost in the event of administration/receivership. I'm arguing that we need to spread the risk a bit: if one part collapses it doesn't bring the whole thing crashing down around you.

http://www.instantshift.com/2010/02/03/22-largest-bankruptcies-in-world-history/

- despite politicians' complaints about MNCs/TNCs not contributing their fair share towards the tax base, they aren't willing to make enough of an effort to change things to create those circumstances. There needs to be an understanding that without someone to buy their products and services these companies will go bankrupt. Large firms need employees and consumers as much as we need their tax revenue

- the irony is that we believe that since companies are large they are automatically successful and that we should support them. Think about many of the recent large defense programs that were undertaken by large firms. As indicated previously, there's currently no incentive for them to help the state. They just want to survive and generate profits. The JSF program was deliberately structured in such a way that we've ended up with a fighter jet that isn't up to the original design spec, is well and truly over the desired budgetary parameters, and is way beyond the original design constraints, putting the national security of many allied nations at risk

https://medium.com/war-is-boring/test-pilot-admits-the-f-35-can-t-dogfight-cdb9d11a875

http://www.janes.com/article/52715/jpo-counters-media-report-that-f-35-cannot-dogfight

- progress within the context of market economics is often only facilitated through proper competition and regulation. At the moment, many of the largest donors to political parties are large companies. This results in a significant distortion of the playing field and of what the ultimate decision makers deem to be important issues.

http://www.nytimes.com/2015/06/25/business/obama-bolsters-his-leverage-with-trade-victory-but-at-a-cost.html?_r=0

https://en.wikipedia.org/wiki/Global_saving_glut

Think about the nature of the pharmaceutical and electronics/IT industries. They both complain that progress (research and development) is difficult. The irony is that it's difficult to argue this if you're not making any worthwhile attempts at it. Both sectors sit atop enough savings to be able to cure much of the world's current woes but they have absolutely no incentive to bring it back on shore for it to be properly taxed or spent

http://money.cnn.com/2015/03/20/investing/stocks-companies-record-cash-level-oil/

http://www.telegraph.co.uk/finance/11038180/Global-firms-sitting-on-7-trillion-war-chest.html

http://www.telegraph.co.uk/finance/budget/9150406/Budget-2012-UK-companies-are-sitting-on-billions-of-pounds-so-why-arent-they-spending-it.html

http://www.theguardian.com/commentisfree/2013/may/13/tax-havens-hidden-billions

Moreover, they more often than not just use their existing position to continue to exploit the market. A lot of electronics now is simply uneconomical or impossible to repair locally, which means that you have to purchase new products once they have gone out of warranty and failed due to engineered lifecycles (they are designed to fail after a particular period; if they didn't, they would suffer the same fate that some car manufacturers have been complaining about: if cars don't fail, no one buys new ones). My belief is that there should be tax concessions if they are willing to invest, or they should be forced to invest, into SME firms (which comprise the bulk of the economy) via secondary small capitalisation type funds (especially if the company doesn't know what to do with spare cash and it is left 'stagnant'). Ironically, longer term returns on broad based funds in this area more often than not exceed the growth of the company in question as well as the economy in general

- sometimes I wonder whether or not managing an economy (from a political perspective) is much the same as operating as a market analyst. You're effectively taking calculated bets on how the world will end up in the future. Is it possible that good economic managers need to be more lucky than skillful?

http://www.amazon.com/Random-Walk-Down-Wall-Street/dp/0393330338

https://en.wikipedia.org/wiki/A_Random_Walk_Down_Wall_Street

- in some cases, the nature of capitalism is such that the state has grown so large (because in general government services aren't profitable) that it is beginning to groan under the pressure that many of the more developed nations are now feeling. This is a case of both mismanagement and a misunderstanding of how to use capitalism to your advantage

- one of the biggest contradictions in business is that it should all come down to the bottom line. The stupid aspect of this is that most companies have double digit staff turnover yet continue to make the excuse that you should simply put up with whatever is thrown at you. If workplaces were generally more civilised and conditions better then a huge cost would be removed from the business (loss of employees, advertising, training, etc...)

- normally, when people are taught about life, we start with small and simple examples and then move on to more complex and advanced ones. The irony is that this is often the opposite of the way we are taught about business. We are taught to dream big and win big, or else crash and burn and learn your place in society. There is a major problem with this. In the Australian economy, SME business accounts for 96% of the economy. It is similar elsewhere. People leaving our educational institutions basically aren't equipped to make money by themselves right out of school. Help them and teach them how, and you could help the overall economy as well as these students, by equipping them to look after their own needs, reducing the burden on the social welfare system, and giving them valuable employment experience that may be worthwhile later down the track. Most students are equipped to work for other people, not to start their own company or operate as individuals

http://www.smartcompany.com.au/technology/information-technology/31806-number-of-businesses-in-australia-continues-to-stagnate-abs.html

- all politicians (and people in general) like to talk about the success of their country in being able to attract MNCs/TNCs to employ people locally. However, the problem is that they aren't the main employment drivers in the economy. Across most of the world's economies small businesses are the driving force ("Such firms comprise around 99% of all businesses in most economies and between half and three quarters of the value added. They also make a significant contribution to employment and are of interest to governments primarily for their potential to create more jobs."). One wonders whether, even with the increased business (direct and indirect) around a large firm when it exists in a country, you are getting value for money (especially if you are subsidising its local existence)?

http://theconversation.com/growing-the-global-economy-through-small-to-medium-enterprise-the-g20-sme-conference-28307

- we actually do ourselves somewhat of a disservice by creating a perception that dreaming and living big is what you should want. Popular culture makes it feel as though if you don't go to the right schools, work for the right companies, and so on, you are a failure. The irony is that if every single graduate were taught how to commercialise their ideas while at school, I believe that we would have a far more flexible, innovative economy. Moreover, both they as well as the economy in general would get a return on investment. It's no good telling people to be entrepreneurial if they don't know how to be entrepreneurial.

- the irony of the large donor phenomenon is that SME business accounts for most of the activity within the economy...

http://www.smartcompany.com.au/technology/information-technology/31806-number-of-businesses-in-australia-continues-to-stagnate-abs.html

- as we've discussed previously on this blog, the primary ways you can make money are to create something of value or to change the perceived value of something such that people will want to buy it no matter what the disparity between perceived value and effective value. Once upon a time I looked at German prestige and performance vehicles as the pinnacle of automotive engineering. The more I've learned about them the less impressed I've become. If I told you the evidence points to them being the least reliable, the vehicles which depreciate the most (within any given time frame), the most expensive to repair, the most expensive to insure and service, of average safety, and often only having comparable technology to other cars (once you cut through the marketing speak), you'd think that people would be incredibly stupid to purchase them. Yet, this trend continues...

http://usedcars.about.com/od/research/fl/10-Least-Reliable-Used-Car-Brands.htm

http://www.bbc.com/news/business-32332210

http://rac.com.au/motoring/motoring-advice/buying-a-car/running-costs

http://rac.com.au/news-community/road-safety-and-transport/safe-cars/how-safe-is-your-car/used-car-safety-ratings

Another good example is the upper echelons of professional sport and artistry (including music, art, etc...). If anybody told you that you were paying several hundred dollars an hour to watch a group of individuals kick a ball you'd think that they were mad. The horrible part is when you realise that top-tier amateur competitions, which are free to watch, can be just as entertaining and skillful

- in reality, pure market forces are very rarely at play, and it often takes people a lot of time to realise that for all the stuff/theory that you learn at school there's a lot more that you will also learn in the real world

- most industries fit into the following categories; something that you need or something that you want. By selling people a dream we can turn what you want into something you need and create employment from it

- if you want to make abnormal (excess) profits it's mainly about being able to distinguish between perceived, effective, and actual value. Once you can establish this you can exploit it. This is easier said than done though. Let's say you discovered Lionel Messi playing in the streets of Somalia versus Paris. More than likely, you'd value him much less if he were found in Somalia. Sometimes it can be pretty obvious; at other times it's not much different from predicting the future. For instance, the iPod was essentially a remodelled MP3 player with an integrated software solution/ecosystem, and Coke is basically just a sweet, fizzy drink which is actually beaten by Pepsi in blind tests

- we like short term thinking because we like the notion that we can make a lot of money in a short space of time. That means that we can retire early and purchase luxury goods and services. The irony is that this feeds into a disparity between actual, perceived, and effective value, which means that flawed businesses can continue to work. This flaw works in practice, but in the long term it can result in asset bubbles. Valuation at the correct level is in the collective's overall interest

- risk isn't necessarily directly related to reward if your modelling is good. One way to reduce risk is to let others take it first. You might not make a massive name for yourself but you should at least not break the bank on a high risk project. This has been a common theme in the Russian and Chinese defense establishments, where they have often taken significant cues from American technology

- it's becoming clearer to me that many financial instruments actually aren't required. The industry itself relies on the fact that many will fall for the perceived notion that you can make a lot of money in a small amount of time or for little labour. However, the reality is that most will make a lot less than what is perceived to be the case. An example of this is the following: many financial instruments are created for the express purpose of increasing risk exposure and therefore possible profits/losses. In reality, most people lose. It's like a casino where the house wins most of the time. The other irony is the following: while liquidity can have a direct correlation with volatility (it allows you to reach a more valid price earlier, especially if many are involved in pricing), the same is also true in the opposite direction. It only takes a few minor outliers to change the perception of where value within the market exists

http://blogs.cfainstitute.org/investor/2015/06/11/solutions-to-a-misbehaving-finance-industry/

- many SME firms collapse within a short time frame, but easy credit makes it easier for bad business models to continue to exist. The same is also true of the United States economy, where uncompetitive industries were allowed to continue to exist for a long time behind trade barriers. If the barriers are lifted we should create circumstances that force companies to alter their strategies earlier or force them to re-structure/collapse/declare bankruptcy. It will help to reduce the impact of providing credit to flawed companies which ultimately collapse

- the way we measure credit risk is often questionable. Financial institutions often turn away low income earners because they are considered a credit risk. I looked at this further and the rates that they are actually charged are diabolical. At one particular place, they were charging 10% for a one week loan, 15% for a 3 week loan, and 25% for a month long loan. If the industry were so risky, though, how would it continue to exist? Most of the people who understand the problem have basically said that people who require this money simply have a hard time budgeting and managing their affairs. Essentially, providing them with a lump sum every once in a while makes them believe that they can spend freely. The irony is that the rest of society is also somewhat guilty of this. If we were paid cash by the hour (rather than regular lump sum payments) and had to pay a component of our bills and other expenses each day, we would look at our purchases very differently

http://www.news.com.au/finance/real-estate/stamp-duty-scandal-tony-abbott-under-pressure-to-scrap-our-worst-tax-amid-disastrous-poll/story-fndban6l-1227398035046?from=google_rss&google_editors_picks=true

http://www.perthnow.com.au/news/breaking-news/welfare-card-trial-sites-still-undecided/story-fnhrvfuw-1227398497797?nk=2dc00eb5accf0aef95bbb39faeb08ba0-1434358050

At the other end of the scale, there exists another paradox/contradiction. I've heard stories about people with relatively high incomes being denied credit even though their credit history was good (companies can't make money if you don't breach credit conditions every once in a while). Despite what we say about free market economics, regulatory frameworks, etc... the system is corrupt. It's just not as overt and no one likes/wants to admit it.

- despite what many may think of him, I think Vladimir Putin is actually trying to look after his country's best interests. The collapse of the Soviet Union gave rise to the oligarchs, a circumstance facilitated by the nature of free market economics without an adequate framework (rules and regulations such as those provided by law). Essentially, the state was replaced by private enterprise, where the needs of the many were placed lower on the pecking order than they would have been had the state still been in charge. I understand his perspective but I don't believe in the way he has gone about things

https://en.wikipedia.org/wiki/Revolutions_of_1989

https://en.wikipedia.org/wiki/Socialism

https://en.wikipedia.org/?title=Communism

https://en.wikipedia.org/wiki/Capitalism

http://www.msnbc.com/msnbc/pope-francis-rejects-communism-critique

http://ncronline.org/blogs/francis-chronicles/pope-francis-concern-poor-sign-gospel-not-red-flag-communism

http://www.marxist.com/kievs-contemporary-anti-communism-and-the-crimes-of-the-oligarchys-very-existence.htm

- people say that we should do more and spend more in the fight against organised crime. The stupid, ironic thing is that when society is unfair and unjust organised crime grows much stronger because it provides people with a way of making a living. In Europe, the Italian mafia has grown much stronger with the advent of the European economic difficulties and it was much the same in Japan when their asset bubble burst during the 90s

https://en.wikipedia.org/wiki/Lost_Decade_%28Japan%29

- the EU was born of the fact that no one wanted war again in Europe. It feels like much the same with the rest of the world: we've used progress and better living conditions as an argument against going to war. However, the world has essentially ended up engaging in an effective 'Cold War'. Much of the world's spending revolves around the notion of deterrence, namely: if I want to go to war with you, I know that I'll suffer just as much damage (if not more)

https://en.wikipedia.org/wiki/List_of_countries_by_military_expenditures

http://www.globalissues.org/article/75/world-military-spending

There are a number of ways around this: by reaching a consensus that countries will no longer attempt to project power outwards (defend yourself only, don't interfere with others; highly unlikely), that invasion will no longer be part of the future landscape (other countries will come to the aid of those in trouble; unlikely, especially with the rise of terrorism), or else by collapsing economies such that countries will no longer be able to afford to spend on defense. The troubling thing is that the last scenario has actually been outlined in various US intelligence and defense reports. It's essentially war without war. If you can wreak havoc in someone's economy then they'll no longer be a problem for you. The irony is that the larger your intelligence apparatus, the more likely you can engage in this style of activity. Previously leaked reports and WikiLeaks have made me highly skeptical that the average country doesn't engage in this style of activity.

http://www.theguardian.com/world/2013/aug/29/us-intelligence-spending-double-9-11-secret-budget

https://en.wikipedia.org/wiki/United_States_intelligence_budget

The irony is that if you don't engage in these activities you may lose a significant advantage. If you do, you're sort of left to question whether or not you are the good guy in this affair

- people who haven't spent enough time in the real world often only understand the theory. Once you understand how things actually work your whole perspective changes. Let's take the housing asset/bubble that we may be going through. As stated previously, making abnormal profits is about managing the difference between perceived, actual, and effective value. It's clear that in theory boosting supply may change things. The thing I've discovered is that in free market economics it only takes a small thing to change perception. Once the perception snowballs you're stuck with the same problem. This is the same whether it is a new home buyer or a foreign investor purchasing in the local market

http://www.smh.com.au/nsw/mike-bairds-400m-boost-for-infrastructure-fund-to-tackle-housing-affordability-crisis-20150621-ghtfr8.html

http://www.theglobeandmail.com/report-on-business/milk-surplus-forcing-canadas-dairy-industry-to-dump-supply/article25030753/

http://www.brisbanetimes.com.au/it-pro/rental-growth-slowdown-signals-residential-property-bust-on-the-way-20150626-ghxkdr

http://www.news.com.au/finance/real-estate/economists-claim-australia-in-midst-of-largest-housing-bubble-on-record/story-fncq3era-1227410053643?from=google_rss&google_editors_picks=true

- a business structure is simply a focal point of communication between business and consumer. It also affords the opportunity for a government to tax it more effectively

- by being so insistent on upskilling and education we make low labour costs almost impossible to achieve. This makes a lot of infrastructure projects in developed countries impossible because they are economically unviable. Good examples of this are 457 visas in Australia, and illegal immigration in the United States (especially from Mexico), which are often used and abused to achieve lower labour costs than would otherwise have been possible. Another example is the Snowy River Hydroelectricity project: it's said that hardly anyone on site knew English and that often people just learned on the job.

http://www.politico.com/story/2015/06/donald-trump-calls-jeb-bush-unhappy-119153.html?ml=ri

Another recent project put this into perspective. It was said that building infrastructure (tunnels, office blocks, etc...) in China, shipping it, and then assembling it here in Australia would be more cost effective than building it here alone. We need to give people a chance no matter what their education or skill level if we are to balance government budgets and reduce the incidence of off-shoring, without necessarily having to resort to often expensive anti-offshoring techniques such as tariffs, rebates, taxes, etc...

- our perception of success feels odd sometimes. If you look up the backgrounds of Rupert Murdoch, Donald Trump, and several others you'll see that they have continually been on the point of bankruptcy. Under normal circumstances anyone continually on the verge of losing everything would be considered mediocre, but in the business world they're considered successful because they can keep the whole thing going... Also, look at the poverty figures for the United States, Germany, United Kingdom, United Arab Emirates, Iran, and Japan. Notice the odd one out? Iran has been under sanctions for a long time for its alleged nuclear research activities and yet the level of poverty in Iran is comparable to all these others.

https://en.wikipedia.org/wiki/List_of_countries_by_percentage_of_population_living_in_poverty

- the only other way to achieve lower costs in developed countries is to resort to automation and robots (or else tap developing countries for lower priced components). I've looked at Australian car manufacturing plants and American and European plants for mass produced vehicles. The level of automation in American and European plants seems to be significantly higher, with build quality that is comparable

http://www.kyodonews.net/news/2015/06/20/21340

http://forums.whirlpool.net.au/archive/2050953

- the perception is that we should always hire the best and brightest in order to get the job done and that we should try our best to make them happy. The irony is that I've worked on both sides of the fence. By hiring only those perceived to be the best and brightest (from what I've seen, the actual best and brightest don't necessarily get hired) and only settling on them, we force wages up across the board and make work more difficult for the existing workforce. It may even be more difficult to keep them happy. The other irony is that there are many wealthy global companies who can afford to hire away your best staff, forcing prices up even further. Complete free trade works in favour of those who are already wealthy and makes it harder for those down the chain to make a living and to progress

- if all the best and brightest are hired by the same companies then (based on personal experience) you aren't necessarily always going to get the best out of them. Companies have an increasing tendency (due to regulatory as well as political issues) to pigeonhole them into specific roles, which doesn't allow them to realise their full and complete potential. The individual, the company, as well as the collective lose out

http://blogs.cfainstitute.org/investor/2015/06/11/solutions-to-a-misbehaving-finance-industry/

- we believe in our current style of capitalism because we have a perception that it gives everyone a chance in life to be and do whatever they want. In reality, it's a lot more complicated. At its very core I think it's very much like Winston Churchill's opinion of the Westminster parliamentary system: "Democracy is the worst form of government, except for all the others."

http://www.goodreads.com/quotes/267224-democracy-is-the-worst-form-of-government-except-for-all

- it's clear that I believe in limited capitalism, and for the most part we should try to work with those within our regions to reduce the chances of a systemic collapse. Currency manipulation, foreign investment law, tariffs, taxation, etc... are all lawful means of changing the playing field. In fact, the exact same techniques that countries use to protect against trade sanctions can be used to guard the economic safety of citizens locally. By playing by the current rules of free trade we are essentially playing into the hands of the larger companies of the world (mostly based in the United States). It's a form of imperialism/conquest (deliberate or not) without necessarily having to engage in open warfare, with the effective ruler being the United States and these companies acting as proxies

http://www.theglobeandmail.com/report-on-business/international-business/european-business/europe-shutters-as-greek-banks-bleed-cash/article25033867/

http://rt.com/business/250497-obama-economy-china-trade/

- making or saving money can sometimes be counterintuitive. If you've ever worked in the IT industry in any sort of support role then you'll realise that no matter what level of support you operate at, one of the main aims is to establish whether or not the problem lies within your own area of oversight. If it does, you try to fix it; if not, you ignore it and basically tell the other end to kindly go away, because you often don't have the expertise to fix it, nor the oversight to be able to. The medical and pharmaceutical industry is much the same. The irony is that this perspective can result in more long term harm than good. The United States budget is out of whack, with one of the major causes being the high cost of drugs as well as the short sighted perspective of medical practitioners who tend not to treat the problem until it's fixed but keep on managing it. Fix it if you can and the problem goes away, and your budget is in better shape

http://ourfiniteworld.com/2011/04/08/whats-behind-united-states-budget-problems/

http://www.businessinsider.com/us-budget-deficit-2011-7

- if so many countries are so concerned about profit shifting, why don't they simply make it un-economical/impossible to re-locate from now on? That way existing financial centres for such activity can adapt in the meantime while other countries can begin to regain some of their investment

- every company engages in anti-competitive behaviour. Even though Google (and others) are supposed proponents of the 'Don't be Evil' mantra, they still have shareholders to report to, meaning that even if they don't want to engage in it, they have to

- if too few countries make changes, their companies are going to be subject to foreign takeover interest (friendly and non-friendly) if adequate measures aren't taken to protect them. Moreover, they will be at a competitive disadvantage when attempting to branch out. The only way to look after these interests is to look at the way companies are structured, in order to look after the needs of both the individual and the collective simultaneously

- making changes for a fairer and more equitable society isn't easy, and the irony is that those who are already successful will always appeal to reduce the chances of the status quo changing. They will insist that since they 'made it', so can others. Moreover, there are always those within the political and public services who will have differing opinions on how to achieve the same thing

http://www.smh.com.au/world/us-supreme-court-hands-obama-major-victory-on-obamacare-healthcare-reform-20150625-ghy1xq.html#content

http://www.seattletimes.com/seattle-news/a-pathological-refusal-to-see-any-shred-of-good-in-obamacare/

- people say that globalisation and free market capitalism are a guard against collapse: someone in the system is always going to be looking for money, or someone is always going to have money. The problem is that there's no incentive to do this. Moreover, it has been proven in the United States and Europe that pure private, free trade capitalism isn't necessarily going to fill the void should there be significant underlying problems. Even states and unions cannot hold back the dam should the market burst. Moreover, firms have shareholders and creditors to report to. Without adequate safeguards in place, the needs of the many are never going to be met by the few who are lucky enough to have survived (there is only one exception to this: strong leadership/management in the private sector, of which I haven't seen too many instances)

https://en.wikipedia.org/wiki/Great_Recession

http://www.afr.com/markets/commodities/energy/saudis-seen-escalating-battle-for-global-oil-market-share-20150618-ghrxws

https://en.wikipedia.org/wiki/2007%E2%80%9308_world_food_price_crisis

https://en.wikipedia.org/wiki/2000s_energy_crisis

http://www.news.com.au/national/breaking-news/govt-to-explore-social-impact-bonds/story-e6frfku9-1227416495203

http://www.news.com.au/world/breaking-news/pope-talking-drivel-catholic-economist/story-e6frfkui-1227416020721

June 25, 2015

Dutch Court orders Netherlands Government cut CO2 emissions by 25 percent by 2020 | Climate Citizen

http://takvera.blogspot.com.au/2015/06/dutch-court-orders-netherlands.html

A Dutch court in a landmark legal case has just handed down a verdict that the Netherlands Government has the legal duty to take measures against #climate change. Further, the court ordered that a 25% reduction of CO2 emissions, based on 1990 levels, must be accomplished by 2020 by the Dutch government in accordance with IPCC scientific recommendations for industrial countries.

[…]

Sue Higginson, Principal Solicitor for the Environmental Defenders Office (EDO) NSW, said that the same legal arguments are unlikely to be used in Australia. “Dutch civil laws are much more specific in their terms than Australian laws,” she said.

[…]

With Australia, such a case would be much less straightforward as we do not have the incorporation of international human rights or general duty of care directly in our constitution or legal framework.

Hashing Speed: SHA256 vs Murmur3

So I did some IBLT research (as posted to bitcoin-dev ) and I lazily used SHA256 to create both the temporary 48-bit txids, and from them to create a 16-bit index offset.  Each node has to produce these for every bitcoin transaction ID it knows about (ie. its entire mempool), which is normally less than 10,000 transactions, but we’d better plan for 1M given the coming blopockalypse.

For txid48, we hash an 8 byte seed with the 32-byte txid; I ignored the 8 byte seed for the moment, and measured various implementations of SHA256 hashing 32 bytes on my Intel Core i3-5010U CPU @ 2.10GHz laptop (though note we’d be hashing 8 extra bytes for IBLT). The implementations are in CCAN:

  1. Bitcoin’s SHA256: 527.7+/-0.9 nsec
  2. Optimizing the block ending on bitcoin’s SHA256: 500.4+/-0.66 nsec
  3. Intel’s asm rorx: 314.1+/-0.3 nsec
  4. Intel’s asm SSE4: 337.5+/-0.5 nsec
  5. Intel’s asm RORx-x8ms: 458.6+/-2.2 nsec
  6. Intel’s asm AVX: 336.1+/-0.3 nsec

So, if you have 1M transactions in your mempool, expect it to take about 0.62 seconds of hashing to calculate the IBLT.  This is too slow (though it’s fairly trivially parallelizable).  However, we just need a universal hash, not a cryptographic one, so I benchmarked murmur3_x64_128:

  1. Murmur3-128: 23 nsec

That’s more like 0.046 seconds of hashing, which seems like enough of a win to add a new hash to the mix.
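
A quick sanity check of those figures, assuming two hash invocations per transaction (one to create the txid48, one to derive the index offset from it) and the fastest rorx timing for the SHA256 case; the per-transaction invocation count is my inference from the description above:

awk 'BEGIN { printf "%.3f sec\n", 1e6 * 2 * 314.1e-9 }'
0.628 sec
awk 'BEGIN { printf "%.3f sec\n", 1e6 * 2 * 23e-9 }'
0.046 sec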

Toolchains for OpenPower petitboot environments

Since we're using buildroot for the OpenPower firmware build infrastructure, it's relatively straightforward to generate a standalone toolchain to build add-ons to the petitboot environment. This toolchain will allow you to cross-compile from your build host to an OpenPower host running the petitboot environment.

This is just a matter of using op-build's toolchain target, and specifying the destination directory in the BR2_HOST_DIR variable. For this example, we'll install into /opt/openpower/:

sudo mkdir /opt/openpower/
sudo chown $USER /opt/openpower/
op-build BR2_HOST_DIR=/opt/openpower/ toolchain

After the build completes, you'll end up with a toolchain based in /opt/openpower.

Using the toolchain

If you add /opt/openpower/usr/bin/ to your PATH, you'll have the toolchain binaries available.

[jk@pecola ~]$ export PATH=/opt/openpower/usr/bin/:$PATH
[jk@pecola ~]$ powerpc64le-buildroot-linux-gnu-gcc --version
powerpc64le-buildroot-linux-gnu-gcc (Buildroot 2014.08-git-g80a2f83) 4.9.0
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Currently, this toolchain isn't relocatable, so you'll need to keep it in the original directory for tools to correctly locate other toolchain components.

OpenPower doesn't (yet) specify an ABI for the petitboot environment, so there are no guarantees that a petitboot plugin will be forwards- or backwards-compatible with other petitboot environments.

Because of this, if you use this toolchain to build binaries for a petitboot plugin, you'll need to either:

  • ensure that your op-build version matches the one used for the target petitboot image; or
  • provide all necessary libraries and dependencies in your distributed plugin archive.

We're working to address this though, by defining the ABI that will be regarded as stable across petitboot builds. Stay tuned for updates.

Using the toolchain for subsequent op-build runs

Because op-build has a facility to use an external toolchain, you can re-use the toolchain built above for subsequent op-build invocations, where you want to build actual firmware binaries. If you're using multiple op-build trees, or are regularly building from scratch, this can save a lot of time as you don't need to continually rebuild the toolchain from source.

This is a matter of configuring your op-build tree to use an "External Toolchain", in the "Toolchain" screen of the menuconfig interface:

You'll need to set the toolchain path to the path you used for BR2_HOST_DIR above, with /usr appended. The other toolchain configuration parameters (kernel header series, libc type, features enabled) will need to match the parameters that were given in the initial toolchain build. Fortunately, the buildroot code will check that these match and print a helpful error message if there are any inconsistencies.

For the example toolchain built above, these are the full configuration parameters I used:

BR2_TOOLCHAIN=y
BR2_TOOLCHAIN_USES_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_PREINSTALLED=y
BR2_TOOLCHAIN_EXTERNAL_PATH="/opt/openpower/usr/"
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_PREFIX="$(ARCH)-linux"
BR2_TOOLCHAIN_EXTERNAL_PREFIX="$(ARCH)-linux"
BR2_TOOLCHAIN_EXTERNAL_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL_HEADERS_3_15=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL_INET_RPC=y
BR2_TOOLCHAIN_EXTERNAL_CXX=y
BR2_TOOLCHAIN_EXTRA_EXTERNAL_LIBS=""
BR2_TOOLCHAIN_HAS_NATIVE_RPC=y
BR2_TOOLCHAIN_HAS_THREADS=y
BR2_TOOLCHAIN_HAS_THREADS_DEBUG=y
BR2_TOOLCHAIN_HAS_THREADS_NPTL=y
BR2_TOOLCHAIN_HAS_SHADOW_PASSWORDS=y
BR2_TOOLCHAIN_HAS_SSP=y

Once that's done, anything you build using that op-build configuration will refer to the external toolchain, and use that for the general build process.
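
As a sketch of what that looks like in practice: once the external toolchain options are set via menuconfig, a subsequent full firmware build is simply

op-build

and buildroot should pick up the pre-built toolchain from /opt/openpower/usr/ rather than building one from source. (Bare op-build invoking the default build target is my assumption here; adjust to your usual build invocation.)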

June 24, 2015

PyCon Australia 2015 Programme Released

PyCon Australia is proud to release our programme for 2015, spread over the weekend of August 1st and 2nd, following our Miniconfs on Friday 31 July.

Following our largest ever response to our Call for Proposals, we are able to present two keynotes, forty eight talks and two tutorials. The conference will feature four full tracks of presentations, covering all aspects of the Python ecosystem, presented by experts and core developers of key Python technology. Our presenters cover a broad range of backgrounds, including industry, research, government and academia.

We are still finalising our Miniconf timetable, but we expect another thirty talks for Friday. We’d like to highlight the inaugural running of the Education Miniconf whose primary aim is to bring educators and the Python community closer together.

The full schedule for PyCon Australia 2015 can be found at http://2015.pycon-au.org/programme/about

PyCon Australia has endeavoured to keep tickets as affordable as possible. We are able to do so, thanks to our Sponsors and Contributors. Registrations for PyCon Australia 2015 are now open, with prices starting at AU$50 for students, and tickets for the general public starting at AU$240. All prices include GST, and more information can be found at http://2015.pycon-au.org/register/prices

We have also worked out favourable deals with accommodation providers for PyCon delegates. Find out more about the options at http://2015.pycon-au.org/register/accommodation

To begin the registration process, and find out more about each level of ticket, visit http://2015.pycon-au.org/register/prices

Important Dates to Help You Plan

June 29: Financial Assistance program closes.

July 8: Last day to Order PyCon Australia 2015 T-shirts

July 19: Last day to Advise Special Dietary Requirements

July 31 : PyCon Australia 2015 Begins

About PyCon Australia

PyCon Australia is the national conference for the Python Programming Community. The sixth PyCon Australia will be held on July 31 through August 4th, 2015 in Brisbane, bringing together professional, student and enthusiast developers with a love for developing with Python. PyCon Australia informs the country’s Python developers with presentations, tutorials and panel sessions by experts and core developers of Python, as well as the libraries and frameworks that they rely on.

To find out more about PyCon Australia 2015, visit our website at http://pycon-au.org or e-mail us at contact@pycon-au.org.

PyCon Australia is presented by Linux Australia (www.linux.org.au) and acknowledges the support of our Platinum Sponsors, Red Hat Asia-Pacific, and Netbox Blue; and our Gold sponsors, The Australian Signals Directorate and Google Australia. For full details of our sponsors, see our website.


Custom kernels in OpenPower firmware

As of commit 2aff5ba6 in the op-build tree, we're able to easily replace the kernel in an OpenPower firmware image.

This commit adds a new partition (called BOOTKERNEL) to the PNOR image, which provides the petitboot bootloader environment. Since it's now in its own partition, we can replace the image with a custom build. Here's a little guide to doing that, using as an example a separate branch of op-build that provides a little-endian kernel.

You can check if your currently-running firmware has this BOOTKERNEL partition by running pflash -i on the BMC. It should list BOOTKERNEL in the partition table listing:

# pflash -i
Flash info:
-----------
Name          = Micron N25Qx512Ax
Total size    = 64MB 
Erase granule = 4KB 

Partitions:
-----------
ID=00            part 00000000..00001000 (actual=00001000)
ID=01            HBEL 00008000..0002c000 (actual=00024000)
[...]
ID=11            HBRT 00949000..00ca9000 (actual=00360000)
ID=12         PAYLOAD 00ca9000..00da9000 (actual=00100000)
ID=13      BOOTKERNEL 00da9000..01ca9000 (actual=00f00000)
ID=14        ATTR_TMP 01ca9000..01cb1000 (actual=00008000)
ID=15       ATTR_PERM 01cb1000..01cb9000 (actual=00008000)
[...]
#  

If your partition table does not contain a BOOTKERNEL partition, you'll need to upgrade to a more recent PNOR image to proceed.

First (if you don't have one already), grab a suitable version of op-build. In this example, we'll use my le branch, which has little-endian support:

git clone --recursive git://github.com/jk-ozlabs/op-build.git
cd op-build
git checkout -b le origin/le
git submodule update

Then, prepare our environment and configure for the relevant platform - in this case, habanero:

. op-build-env
op-build habanero_defconfig

If you'd like to change any of the kernel config (for example, to add or remove drivers), you can do that now, using the 'linux-menuconfig' target. This is only necessary if you wish to make changes. Otherwise, the default kernel config will work.

op-build linux-menuconfig

Next, we build just the userspace and kernel parts of the firmware image, by specifying the linux26-rebuild-with-initramfs build target:

op-build linux26-rebuild-with-initramfs

If you're using a fresh op-build tree, this will take a little while, as it downloads and builds a toolchain, userspace and kernel. Once that's complete, you'll have a built kernel image in the output tree:

 output/build/images/zImage.epapr

Transfer this file to the BMC, and flash using pflash. We specify the -P <PARTITION> argument to write to a single PNOR partition (-e erases the partition before -p programs the new image):

pflash -P BOOTKERNEL -e -p /tmp/zImage.epapr

And that's it! The next boot will use your newly-built kernel in the petitboot bootloader environment.

Out-of-tree kernel builds

If you'd like to replace the kernel from op-build with one from your own external source tree, you have two options: either point op-build at your own tree, or build your own kernel using the initramfs that op-build has produced.

For the former, you can override certain op-build variables to reference a separate source. For example, to use an external git tree:

op-build LINUX_SITE=git://github.com/jk-ozlabs/linux LINUX_VERSION=v3.19

See Customising OpenPower firmware for other examples of using external sources in op-build.

The latter option involves doing a completely out-of-op-build build of a kernel, but referencing the initramfs created by op-build (which is in output/images/rootfs.cpio.xz). From your kernel source directory, add the CONFIG_INITRAMFS_SOURCE argument, specifying the relevant initramfs. For example:

make O=obj ARCH=powerpc \
    CONFIG_INITRAMFS_SOURCE=../op-build/output/images/rootfs.cpio.xz

Smart Phones Should Measure Charge Speed

My first mobile phone lasted for days between charges. I never really found out how long its battery would last because there was no way that I could use it to deplete the charge in any time that I could spend awake. Even if I had managed to run the battery out, the phone was designed to accept 4*AA batteries (its rechargeable battery pack was exactly that size) so I could buy spare batteries at any store.

Modern phones are quite different in physical design (phones that weigh less than 4*AA batteries aren’t uncommon), functionality (fast CPUs and big screens suck power), and use (games really drain your phone battery). This requires much more effective chargers; when some phones are intensively used (e.g. playing an action game with Wifi enabled) they can’t be charged, as they use more power than the plug-pack supplies. I’ve previously blogged some calculations about resistance and thickness of wires for phone chargers [1]; it’s obvious that there are some technical limitations to phone charging based on the decision to use a long cable at ~5V.
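
As a rough illustration of those limitations (the wire gauge and current here are assumptions for the sake of the example, not measurements from my earlier post): a 1 metre cable with 28AWG conductors at about 0.21 ohms/metre has roughly 0.42 ohms of round-trip resistance, so a 2A charge current loses about 0.84V of the nominal 5V in the cable alone, before any connector losses:

awk 'BEGIN { printf "cable drop: %.2fV\n", 2 * (0.21 * 2) }'
cable drop: 0.84V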

My calculations about phone charge rate were based on the theoretical resistance of wires based on their estimated cross-sectional area. One problem with such analysis is that it’s difficult to determine how thick the insulation is without destroying the wire. Another problem is that after repeated use of a charging cable some conductors break due to excessive bending. This can significantly increase the resistance and therefore increase the charging time. Recently a charging cable that used to be really good suddenly became almost useless. My Galaxy Note 2 would claim that it was being charged even though the reported level of charge in the battery was not increasing; it seems that the cable only supplied enough power to keep the phone running, not enough to actually charge the battery.

I recently bought a USB current measurement device which is really useful. I have used it to diagnose power supplies and USB cables that didn’t work correctly. But one significant way in which it fails is in the case of problems with the USB connector. Sometimes a cable performs differently when connected via the USB current measurement device.

The CurrentWidget program [2] on my Galaxy Note 2 told me that all of the dedicated USB chargers (the 12V one in my car and all the mains powered ones) supply 1698mA (including the ones rated at 1A) while a PC USB port supplies ~400mA. I don’t think that the Note 2 measurement is particularly reliable. On my Galaxy Note 3 it always says 0mA; I guess that feature isn’t implemented. An old Galaxy S3 reports 999mA of charging even when the USB current measurement device says ~500mA. It seems to me that the method CurrentWidget uses to get the current isn’t accurate, if it works at all.

Android 5 on the Nexus 4/5 phones will tell you the amount of time until the phone is charged in some situations (on the Nexus 4 and Nexus 5 that I used for testing it didn’t always display it and I don’t know why). This is useful, but it’s still not good enough.

I think that what we need is to have the phone measure the current that’s being supplied and report it to the user. Then, when a phone charges slowly because apps are using power, that won’t be mistaken for a phone charging slowly due to a defective cable or connector.

June 23, 2015

One Android Phone Per Child

I was asked for advice on whether children should have access to smart phones, it’s an issue that many people are discussing and seems worthy of a blog post.

Claimed Problems with Smart Phones

The first thing that I think people should read is this XKCD post with quotes about the demise of letter writing from 99+ years ago [1]. Given the lack of evidence cited by people who oppose phone use I think we should consider to what extent the current concerns about smart phone use are just reactions to changes in society. I’ve done some web searching for reasons that people give for opposing smart phone use by kids and addressed the issues below.

Some people claim that children shouldn’t get a phone when they are so young that it will just be a toy. That’s interesting given the dramatic increase in the amount of money spent on toys for children in recent times. It’s particularly interesting when parents buy game consoles for their children but refuse mobile phone “toys” (I know someone who did this). I think this is more of a social issue regarding what is a suitable toy than any real objection to phones used as toys. Obviously the educational potential of a mobile phone is much greater than that of a game console.

It’s often claimed that kids should spend their time reading books instead of using phones. When visiting libraries I’ve observed kids using phones to store lists of books that they want to read, this seems to discredit that theory. Also some libraries have Android and iOS apps for searching their catalogs. There are a variety of apps for reading eBooks, some of which have access to many free books but I don’t expect many people to read novels on a phone.

Cyber-bullying is the subject of a lot of anxiety in the media. At least with cyber-bullying there’s an electronic trail, anyone who suspects that their child is being cyber-bullied can check that while old-fashioned bullying is more difficult to track down. Also while cyber-bullying can happen faster on smart phones the victim can also be harassed on a PC. I don’t think that waiting to use a PC and learn what nasty thing people are saying about you is going to be much better than getting an instant notification on a smart phone. It seems to me that the main disadvantage of smart phones in regard to cyber-bullying is that it’s easier for a child to participate in bullying if they have such a device. As most parents don’t seem concerned that their child might be a bully (unfortunately many parents think it’s a good thing) this doesn’t seem like a logical objection.

Fear of missing out (FOMO) is claimed to be a problem, apparently if a child has a phone then they will want to take it to bed with them and that would be a bad thing. But parents could have a policy about when phones may be used and insist that a phone not be taken into the bedroom. If it’s impossible for a child to own a phone without taking it to bed then the parents are probably dealing with other problems. I’m not convinced that a phone in bed is necessarily a bad thing anyway, a phone can be used as an alarm clock and instant-message notifications can be turned off at night. When I was young I used to wait until my parents were asleep before getting out of bed to use my PC, so if smart-phones were available when I was young it wouldn’t have changed my night-time computer use.

Some people complain that kids might use phones to play games too much or talk to their friends too much. What do people expect kids to do? In recent times the fear of abduction has led to children playing outside a lot less, it used to be that 6yos would play with other kids in their street and 9yos would be allowed to walk to the local park. Now people aren’t allowing 14yo kids to walk to the nearest park alone. Playing games and socialising with other kids has to be done over the Internet because kids aren’t often allowed out of the house. Play and socialising are important learning experiences that have to happen online if they can’t happen offline.

Apps can be expensive. But it’s optional to sign up for a credit card with the Google Play store and the range of free apps is really good. Also the default configuration of the app store is to require a password entry before every purchase. Finally it is possible to give kids pre-paid credit cards and let them pay for their own stuff, such pre-paid cards are sold at Australian post offices and I’m sure that most first-world countries have similar facilities.

Electronic communication is claimed to be somehow different and lesser than old-fashioned communication. I presume that people made the same claims about the telephone when it first became popular. The only real difference between email and posted letters is that email tends to be shorter because the reply time is smaller, you can reply to any questions in the same day not wait a week for a response so it makes sense to expect questions rather than covering all possibilities in the first email. If it’s a good thing to have longer forms of communication then a smart phone with a big screen would be a better option than a “feature phone”, and if face to face communication is preferred then a smart phone with video-call access would be the way to go (better even than old fashioned telephony).

Real Problems with Smart Phones

The majority opinion among everyone who matters (parents, teachers, and police) seems to be that crime at school isn’t important. Many crimes that would result in jail sentences if committed by adults receive either no punishment or something trivial (such as lunchtime detention) if committed by school kids. Introducing items that are both intrinsically valuable and which have personal value due to the data storage into a typical school environment is probably going to increase the amount of crime. The best options to deal with this problem are to prevent kids from taking phones to school or to home-school kids. Fixing the crime problem at typical schools isn’t a viable option.

Bills can potentially be unexpectedly large due to kids’ inability to restrain their usage and telcos deliberately making their plans tricky to profit from excess usage fees. The solution is to only use pre-paid plans, fortunately many companies offer good deals for pre-paid use. In Australia Aldi sells pre-paid credit in $15 increments that lasts a year [2]. So it’s possible to pay $15 per year for a child’s phone use, have them use Wifi for data access and pay from their own money if they make excessive calls. For older kids who need data access when they aren’t at home or near their parents there are other pre-paid phone companies that offer good deals, I’ve previously compared prices of telcos in Australia, some of those telcos should do [3].

It’s expensive to buy phones. The solution to this is to not buy new phones for kids, give them an old phone that was used by an older relative or buy an old phone on ebay. Also let kids petition wealthy relatives for a phone as a birthday present. If grandparents want to buy the latest smart-phone for a 7yo then there’s no reason to stop them IMHO (this isn’t a hypothetical situation).

Kids can be irresponsible and lose or break their phone. But the way kids learn to act responsibly is by practice. If they break a good phone and get a lesser phone as a replacement or have to keep using a broken phone then it’s a learning experience. A friend’s son head-butted his phone and cracked the screen – he used it for 6 months after that, I think he learned from that experience. I think that kids should learn to be responsible with a phone several years before they are allowed to get a “learner’s permit” to drive a car on public roads, which means that they should have their own phone when they are 12.

I’ve seen an article about a school finding that tablets didn’t work as well as laptops which was touted as news. Laptops or desktop PCs obviously work best for typing. Tablets are for situations where a laptop isn’t convenient and when the usage involves mostly reading/watching, I’ve seen school kids using tablets on excursions which seems like a good use of them. Phones are even less suited to writing than tablets. This isn’t a problem for phone use, you just need to use the right device for each task.

Phones vs Tablets

Some people think that a tablet is somehow different from a phone. I’ve just read an article by a parent who proudly described their policy of buying “feature phones” for their children and tablets for them to do homework etc. Really a phone is just a smaller tablet, once you have decided to buy a tablet the choice to buy a smart phone is just about whether you want a smaller version of what you have already got.

The iPad doesn’t appear to be able to make phone calls (but it supports many different VOIP and video-conferencing apps) so that could technically be described as a difference. AFAIK all Android tablets that support 3G networking also support making and receiving phone calls if you have a SIM installed. It is awkward to use a tablet to make phone calls but most usage of a modern phone is as an ultra portable computer not as a telephone.

The phone vs tablet issue doesn’t seem to be about the capabilities of the device. It’s about how portable the device should be and the image of the device. I think that if a tablet is good then a more portable computing device can only be better (at least when you need greater portability).

Recently I’ve been carrying a 10″ tablet around a lot for work; sometimes a tablet will do for emergency work when a phone is too small and a laptop is too heavy. Even though tablets are thin and light they’re still inconvenient to carry, and the issue of size and weight is a greater problem for kids. 7″ tablets are a lot smaller and lighter, but that’s getting close to a 5″ phone.

Benefits of Smart Phones

Using a smart phone is good for teaching children dexterity. It can also be used for teaching art in situations where more traditional art forms such as finger painting aren’t possible (I have met a professional artist who has used a Samsung Galaxy Note phone for creating art work).

There is a huge range of educational apps for smart phones.

The Wikireader (that I reviewed 4 years ago) [4] has obvious educational benefits. But a phone with Internet access (either 3G or Wifi) gives Wikipedia access including all pictures and is a better fit for most pockets.

There are lots of educational web sites and random web sites that can be used for education (Googling the answer to random questions).

When it comes to preparing kids for “the real world” or “the work environment” people often claim that kids need to use Microsoft software because most companies do (regardless of the fact that most companies will be using radically different versions of MS software by the time current school kids graduate from university). In my typical work environment I’m expected to be able to find the answer to all sorts of random work-related questions at any time and I think that many careers have similar expectations. Being able to quickly look things up on a phone is a real work skill, and a skill that’s going to last a lot longer than knowing today’s version of MS-Office.

There are a variety of apps for tracking phones. There are non-creepy ways of using such apps for monitoring kids. Also with two-way monitoring kids will know when their parents are about to collect them from an event and can stay inside until their parents are in the area. This combined with the phone/SMS functionality that is available on feature-phones provides some benefits for child safety.

iOS vs Android

Rumour has it that iOS is better than Android for kids diagnosed with Low Functioning Autism. There are apparently apps that help non-verbal kids communicate with icons and for arranging schedules for kids who have difficulty with changes to plans. I don’t know anyone who has a LFA child so I haven’t had any reason to investigate such things. Anyone can visit an Apple store and a Samsung Experience store as they have phones and tablets you can use to test out the apps (at least the ones with free versions). As an aside the money the Australian government provides to assist Autistic children can be used to purchase a phone or tablet if a registered therapist signs a document declaring that it has a therapeutic benefit.

I think that Android devices are generally better for educational purposes than iOS devices because Android is a less restrictive platform. On an Android device you can install apps downloaded from a web site or from a 3rd party app download service. Even if you stick to the Google Play store there’s a wider range of apps to choose from because Google is apparently less restrictive.

Android devices usually allow installation of a replacement OS. The Nexus devices are always unlocked and have a wide range of alternate OS images, and the other commonly used devices can usually have an alternate OS installed. This allows kids who have the interest and technical skill to extensively customise their device and learn all about its operation. iOS devices are designed to be sealed against the user. Admittedly there probably aren’t many kids with the skill and desire to replace the OS on their phone, but I think it’s good to have the option.

Android phones have a range of sizes and features while Apple only makes a few devices at any time and there’s usually only a couple of different phones on sale. iPhones are also a lot smaller than most Android phones, according to my previous estimates of hand size the iPhone 5 would be a good tablet for a 3yo or good for side-grasp phone use for a 10yo [5]. The main benefits of a phone are for things other than making phone calls so generally the biggest phone that will fit in a pocket is the best choice. The tiny iPhones don’t seem very suitable.

Also buying one of each is a viable option.

Conclusion

I think that mobile phone ownership is good for almost all kids even from a very young age (there are many reports of kids learning to use phones and tablets before they learn to read). There are no real down-sides that I can find.

I think that Android devices are generally a better option than iOS devices. But in the case of special needs kids there may be advantages to iOS.

June 22, 2015

sswam

I learned a useful trick with the bash shell today.

We can use printf “%q ” to escape arguments to pass to the shell.

This can be useful in combination with ssh, in case you want to pass arguments containing shell special characters or spaces. It can also be used with su -c, and sh -c.

The following will run a command exactly on a remote server:

sshc() {
        remote=$1 ; shift
        ssh "$remote" "`printf "%q " "$@"`"
}

Example:

sshc user@server touch "a test file" "another file"
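
The same escaping works with sh -c and su -c. A minimal sketch (the wrapper name shc is my own, not from the post):

shc() {
        sh -c "`printf "%q " "$@"`"
}

shc touch "a third file"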


June 21, 2015

Twitter posts: 2015-06-15 to 2015-06-21

June 20, 2015

Yet another possible cub walk

Jacqui and Catherine kindly agreed to come on another test walk for a possible cub walk. This one was the Sanctuary Loop at Tidbinbilla. To be honest this wasn't a great choice for cubs -- whilst being scenic and generally pleasant, the heavy use of black top paths and walkways made it feel like a walk in the Botanic Gardens, and the heavy fencing made it feel like an exhibit at a zoo. I'm sure it's great for a weekend walk or for tourists, but if you're trying to have a cub adventure it's not great.

Interactive map for this route.

Tags for this post: blog pictures 20150620-tidbinbilla photo canberra bushwalk

BTRFS Status June 2015

The version of btrfs-tools in Debian/Jessie is incapable of creating a filesystem that can be mounted by the kernel in Debian/Wheezy. If you want to use a BTRFS filesystem on Jessie and Wheezy (which isn’t uncommon with removable devices) the only options are to use the Wheezy version of mkfs.btrfs or to use a Jessie kernel on Wheezy. I recently got bitten by this issue when I created a BTRFS filesystem on a removable device with a lot of important data (which is why I wanted metadata duplication and checksums) and had to read it on a server running Wheezy. Fortunately KVM in Wheezy works really well so I created a virtual machine to read the disk. Setting up a new KVM isn’t that difficult, but it’s not something I want to do while a client is anxiously waiting for their data.

BTRFS has been working well for me apart from the Jessie/Wheezy compatibility issue (which was an annoyance but didn’t stop me doing what I wanted). I haven’t written a BTRFS status report for a while because everything has been OK and there has been nothing exciting to report.

I regularly get errors from the cron jobs that run a balance, supposedly due to running out of free space. I have the cron jobs due to past problems with BTRFS running out of metadata space. In spite of the jobs often failing the systems keep working so I’m not too worried at the moment. I think this is a bug, but there are many more important bugs.
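
For reference, the kind of cron entry I mean looks something like this (the schedule and usage filters here are illustrative rather than my exact configuration):

0 3 * * 6 /sbin/btrfs balance start -dusage=50 -musage=50 /

The -dusage/-musage filters restrict the balance to mostly-empty chunks, which keeps the run time down compared to a full rebalance.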

Linux kernel version 3.19 was the first version to have working support for RAID-5 recovery. This means version 3.19 was the first version to have usable RAID-5 (I think there is no point even having RAID-5 without recovery). It wouldn’t be prudent to trust your important data to a new feature in a filesystem. So at this stage if I needed a very large scratch space then BTRFS RAID-5 might be a viable option but for anything else I wouldn’t use it. BTRFS has still had little performance optimisation; while this doesn’t matter much for SSDs and for single-disk filesystems, for a RAID-5 of hard drives it would probably hurt a lot. Maybe BTRFS RAID-5 would be good for a scratch array of SSDs. The reports of problems with RAID-5 don’t surprise me at all.

I have a BTRFS RAID-1 filesystem on 2*4TB disks which is giving poor performance on metadata, simple operations like “ls -l” on a directory with ~200 subdirectories take many seconds to run. I suspect that part of the problem is due to the filesystem being written by cron jobs with files accumulating over more than a year. The “btrfs filesystem” command (see btrfs-filesystem(8)) allows defragmenting files and directory trees, but unfortunately there is no option to recursively defragment directories while skipping the files in them. I really wish there was a way to get BTRFS to put all metadata on SSD and all data on hard drives. Sander suggested the following command on the BTRFS mailing list to defragment directories:

find / -xdev -type d -execdir btrfs filesystem defrag -c {} +

Below is the output of “zfs list -t snapshot” on a server I run, it’s often handy to know how much space is used by snapshots, but unfortunately BTRFS has no support for this.

NAME                        USED  AVAIL  REFER  MOUNTPOINT
hetz0/be0-mail@2015-03-10  2.88G      -   387G  -
hetz0/be0-mail@2015-03-11  1.12G      -   388G  -
hetz0/be0-mail@2015-03-12  1.11G      -   388G  -
hetz0/be0-mail@2015-03-13  1.19G      -   388G  -

Hugo pointed out on the BTRFS mailing list that the following command will give the amount of space used for snapshots. $SNAPSHOT is the name of a snapshot and $LASTGEN is the generation number of the previous snapshot you want to compare with.

btrfs subvolume find-new $SNAPSHOT $LASTGEN | awk '{total = total + $7}END{print total}'

One upside of the BTRFS implementation in this regard is that the above btrfs command without being piped through awk shows you the names of files that are being written and the amounts of data written to them. Through casually examining this output I discovered that the most written files in my home directory were under the “.cache” directory (which wasn’t exactly a surprise).

Now I am configuring workstations with a separate subvolume for ~/.cache for the main user. This means that ~/.cache changes don’t get stored in the hourly snapshots and less disk space is used for snapshots.
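
The setup is just a matter of replacing the directory with a subvolume (a sketch only; cache contents are disposable so there's no need to preserve them):

rm -rf ~/.cache
btrfs subvolume create ~/.cache

Snapshots don't recurse into nested subvolumes, so snapshots of the parent subvolume simply skip ~/.cache.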

Conclusion

My observation is that things are going quite well with BTRFS. It’s more than 6 months since I had a noteworthy problem, which is pretty good for a filesystem that’s still under active development. But I still run many systems which could benefit from the data integrity features of ZFS and BTRFS, but which don’t have the resources to run ZFS and need more reliability than I can expect from an unattended BTRFS system.

At this time the only servers I run with BTRFS are located within a reasonable drive from my home (not the servers in Germany and the US) and are easily accessible (not the embedded systems). ZFS is working well for some of the servers in Germany. Eventually I’ll probably run ZFS on all the hosted servers in Germany and the US, I expect that will happen before I’m comfortable running BTRFS on such systems. For the embedded systems I will just take the risk of data loss/corruption for the next few years.

June 19, 2015

Mining on a Home DSL connection: latency for 1MB and 8MB blocks

I like data.  So when Patrick Strateman handed me a hacky patch for a new testnet with a 100MB block limit, I went to get some.  I added 7 digital ocean nodes, another hacky patch to prevent sendrawtransaction from broadcasting, and a quick utility to create massive chains of transactions.

My home DSL connection is 11Mbit down, and 1Mbit up; that’s the fastest I can get here.  I was CPU mining on my laptop for this test, while running tcpdump to capture network traffic for analysis.  I didn’t measure the time taken to process the blocks on the receiving nodes, just the first propagation step.

1 Megabyte Block

Naively, it should take about 10 seconds to send a 1MB block up my DSL line from first packet to last (1MB is 8Mbit, which takes 8 seconds at 1Mbit up, plus overhead).  Here’s what actually happens, in seconds for each node:

  1. 66.8
  2. 70.4
  3. 71.8
  4. 71.9
  5. 73.8
  6. 75.1
  7. 75.9
  8. 76.4

The packet dump shows they’re all pretty much sprayed out simultaneously (bitcoind may do the writes in order, but the network stack interleaves them pretty well).  That’s why it’s 67 seconds at best before the first node receives my block (a bit longer, since that’s when the packet left my laptop).

8 Megabyte Block

I increased my block size, and one node dropped out, so this isn’t quite the same, but the times to send to each node are about 8 times worse, as expected:

  1. 501.7
  2. 524.1
  3. 536.9
  4. 537.6
  5. 538.6
  6. 544.4
  7. 546.7

Conclusion

Using the rough formula of 1-exp(-t/600), I would expect orphan rates of 10.5% generating 1MB blocks, and 56.6% with 8MB blocks; that’s a huge cut in expected profits.
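
Plugging the measured best-case first-node times into that formula (t is the propagation time in seconds, 600 the average block interval) reproduces those rates:

awk 'BEGIN { print 1 - exp(-66.8/600); print 1 - exp(-501.7/600) }'
0.105359
0.566631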

Workarounds

  • Get a faster DSL connection.  Though even an uplink 10 times faster would mean 1.1% orphan rate with 1MB blocks, or 8% with 8MB blocks.
  • Only connect to a single well-connected peer (-maxconnections=1), and hope they propagate your block.
  • Refuse to mine any transactions, and just collect the block reward.  Doesn’t help the bitcoin network at all though.
  • Join a large pool.  This is what happens in practice, but raises a significant centralization problem.

Fixes

  • We need bitcoind to be smarter about ratelimiting in these situations, and to stream serially.  Done correctly (which is hard), this could also help with bufferbloat, which makes running a full node at home so painful when it propagates blocks.
  • Some kind of block compression, along the lines of Gavin’s IBLT idea. I’ve done some preliminary work on this, and it’s promising, but far from trivial.

 

June 18, 2015

Further adventures in the Jerrabomberra wetlands

There was another walk option for cubs I wanted to explore at the wetlands, so I went back during lunch time yesterday. It was raining really quite heavily during this walk, but I still had fun. I think this route might be the winner -- it's a bit longer, and a bit more interesting as well.

Interactive map for this route.

Tags for this post: blog pictures 20150618-jerrabomberra_wetlands photo canberra bushwalk

June 17, 2015

Exploring possible cub walks

I've been exploring possible cub walks for a little while now, and decided that Jerrabomberra Wetlands might be an option. Most of these photos will seem a bit odd to readers, unless you realize I'm mostly interested in the terrain and its suitability for cubs...

Interactive map for this route.

Tags for this post: blog pictures 20150617-jerrabomerra_wetlands photo canberra bushwalk

June 16, 2015

Abide the Slide

The holonomic drive robot takes its first rolls! This is what you get when you contort a 3d printer into a cross format and attach funky wheels; quite literally, as the control board is an Arduino Mega board with Atmel 2650 MCU and a RAMPS 1.4 stepper controller board plugged into it. The show is controlled over an rf24 link from a hand made controller. Yes folks, a regression to teleoperating for now. I'll have to throw the thing onto scales later; the steppers themselves add considerable weight to the project, but there doesn't seem to be much problem moving the thing around under its own power.

The battery is a little underspecced; it will surely supply enough current, and doesn't get hot after operation, but the overall battery capacity is low so the show is over fairly quickly. A problem that is easily solved by throwing more dollars at the battery. The next phase is to get better mechanical stability by tweaking things and changing the software to account for the fact that one wheel axis is longer than the other. From there some sensor feedback (IMU) and a fly by wire mode will be on the cards.

This might end up going into ROS land too, encapsulating the whole current setup into being a "robot base controller" and using other hardware above to run sensors, navigation, and decision logic.

OPAL firmware specification, conformance and documentation

Now that we have an increasing amount of things that run on top of OPAL:

  1. Linux
  2. hello_world (in skiboot tree)
  3. ppc64le_hello (as I wrote about yesterday)
  4. FreeBSD

and that the OpenPower ecosystem is rapidly growing (especially around people building OpenPower machines), the need for more formal specification, conformance testing and documentation for OPAL is increasing rapidly.

If you had looked at the documentation in the skiboot tree late last year, you’d have noticed a grand total of seven text files. Now, we’re a lot better (although far from complete).

I’m proud to say that I won’t merge new code that adds/modifies an OPAL API call or anything in the device tree that doesn’t come with accompanying documentation, and this has meant that although it may not be perfect, we have something that is a decent starting point.

We’re in the interesting situation of starting with a working system: mainline Linux kernels have now been bootable by skiboot and able to run on powernv hardware for over a year (maybe even 18 months), though the more modern the kernel the better.

So…. if anyone loves going through deeply technical documentation… do I have a project you can contribute to!

June 15, 2015

On Removal of Citizenship – Short Cuts | London Review of Books

PyCon Australia 2015 Early Bird Registrations Now Open!

We are delighted to announce that online registration is now open for PyCon Australia 2015. The sixth PyCon Australia is being held in Brisbane, Queensland from July 31st to August 4th at the Pullman Brisbane and is expected to draw hundreds of Python developers, enthusiasts and students from Australasia and afar.

Starting today, early bird offers are up for grabs. To take advantage of these discounted ticket rates, be among the first 100 to register. Early bird registration starts from $50 for full-time students, $180 for enthusiasts and $460 for professionals. Offers this good won’t last long, so head straight to http://2015.pycon-au.org and register right away.

PyCon Australia has endeavoured to keep tickets as affordable as possible. We are able to do so, thanks to our Sponsors and Contributors.

We have also worked out favourable deals with accommodation providers for PyCon delegates. Find out more about the options at http://2015.pycon-au.org/register/accommodation

To begin the registration process, and find out more about each level of ticket, visit http://2015.pycon-au.org/register/prices

Important Dates to Help You Plan

June 8: Early Bird Registration Opens — open to the first 100 tickets

June 29: Financial Assistance program closes.

July 8: Last day to Order PyCon Australia 2015 T-shirts

July 19: Last day to Advise Special Dietary Requirements

July 31 : PyCon Australia 2015 Begins

About PyCon Australia

PyCon Australia is the national conference for the Python Programming Community. The sixth PyCon Australia will be held on July 31 through August 4th, 2015 in Brisbane, bringing together professional, student and enthusiast developers with a love for developing with Python. PyCon Australia informs the country’s Python developers with presentations, tutorials and panel sessions by experts and core developers of Python, as well as the libraries and frameworks that they rely on.

To find out more about PyCon Australia 2015, visit our website at http://pycon-au.org or e-mail us at contact@pycon-au.org.

PyCon Australia is presented by Linux Australia (www.linux.org.au) and acknowledges the support of our Platinum Sponsors, Red Hat Asia-Pacific, and Netbox Blue; and our Gold sponsors, The Australian Signals Directorate and Google Australia. For full details of our sponsors, see our website.

June 14, 2015

FreeBSD on OpenPower

There’s been some work on porting FreeBSD over to run natively on top of OPAL, that is, on bare metal OpenPower machines (not just under KVM).

This is one of four possible things to run natively on an OPAL system:

  1. Linux
  2. hello_world (in skiboot tree)
  3. ppc64le_hello (as I wrote about yesterday)
  4. FreeBSD

It’s great to see that another fully featured OS is getting ported to POWER8 and OPAL. It’s not yet at a stage where you could say it was finished or anything (PCI support is pretty preliminary for example, and fancy things like disks and networking live on PCI).

Twitter posts: 2015-06-08 to 2015-06-14

hello world as ppc64le OPAL payload!

While the in-tree hello-world kernel (originally by me, and Mikey managed to CUT THE BLOAT of a whole SEVENTEEN instructions down to a tiny ten) is very, very dumb (and does one thing, print “Hello World” to the console), there’s now an alternative for those who like to play with a more feature-rich Hello World rather than booting a more “real” OS such as Linux. In case you’re wondering, we use the hello world kernel as a tiny test that we haven’t completely and utterly broken things when merging/developing code.

https://github.com/andreiw/ppc64le_hello is a wonderful example of a small (INTERACTIVE!) starting point for a PowerNV (as it’s called in Linux) or “bare metal” (i.e. non-virtualised) OS on POWER.

What’s more impressive is that this was all developed using the simulator rather than real hardware (although I think somebody has tried it on some now).

Kind of neat!

June 13, 2015

The Value of Money - Part 2

This is obviously a continuation of my last post.


No one wants to live from day to day, week to week and for the most part you don't have that when you have a salaried job. You regularly receive a lump sum each fortnight or month from which you draw down to pay for life's expenses.



Over time you actually discover it's an illusion though. A former teacher of mine once said that a salary of about 70-80K wasn't all that much. To kids that seemed like a lot of money though. Now it actually makes a lot more sense. Factor in tax, life expenses, rental, etc... and most of it dries up very quickly.



When you head to business or law school it's the same thing. You regularly deal with millions, billions, and generally gratuitous amounts of money. This doesn't change all that much when you head out into the real world. The real world creates a perception whereby consumption and possession of certain material goods are almost a necessity in order to live and work comfortably within your profession. Ultimately, this means that no matter how much you earn it still doesn't seem like it's enough.



The greatest irony is that you only really discover that the perception of the value of such (gratuitous) goods changes drastically when you are on your own or building a company.



I semi-regularly receive offers of business/job opportunities through this blog and other avenues (scams as well as real offers. Thankfully, most of the 'fishy ones' are picked up by SPAM filters). The irony is this. I know that no matter how much money is thrown at a business there is still no guarantee of success and a lot of the time savings can dry up in a very short space of time (especially if it is a 'standard business'. Namely, one that doesn't have a crazy level of growth ('real growth' not anticipated or 'projected growth')).



This is particularly the case if specialist 'consultants' (they can charge you a lot of money for what seems like obvious advice) need to be brought in. The thing I'm seeing is that basically a lot of what we sell one another is 'mumbo jumbo'. Stuff that we generally don't need but ultimately convince one another of in order to make a living and perhaps even allow us to do something we enjoy.



What complicates this further is that no matter how much terminology and theory we throw at something, ultimately most people don't value things the same way. A good example of this is asking random people what the value of a used iPod Classic 160GB is. I remember questioning the value (200) quoted by a salesman. He justified the store price by stating that people were selling it for 600-700 on eBay. A struggling student would likely value it at closer to 150. A person in hospitality valued it at 240. The average, knowledgeable community member would perceive (most likely remember) the associated value at the highest mark though.



Shift this back into the workplace and things become even more complicated. Think about the 'perception' of your profession. A short while back I met a sound engineer who made a decent salary (around 80K) but had to work 18 hour days continuously based on his description. His quality of life was ultimately shot and his wage should have obviously been much higher. His perceived value was 80K. His effective value was much lower.



Think about 'perception' once more. Some doctors/specialists who migrate but have the skills to practice but not the money to purchase insurance, re-take certification exams, etc... become taxi drivers in their new country. Their effective value (as a worker) becomes that of a taxi driver, nothing more.



Many skilled professions actually require extended periods of study/training, an apprenticeship of some form, a huge amount of hours put in, or just time trying to market your skills. A good chunk of people may end up making a lot of money but most don't. Perceived value is the end salary but actual value is much lower.


Think about 'perception' in IT. In some companies they look down upon you if you work in this particular area. What's interesting is what they use you for. They basically shove more menial tasks downwards into the IT department because 'nobody else wants to do it'. The perceived value of the worker in question doesn't seem much different from that of a labourer.



The irony is that they're often just as well qualified as anybody else in the firm in question, and the work can often be varied enough to make you wonder what exactly the actual value of an average IT worker is. I've been trying to do the calculations. The average IT graduate is worth about 55K.

http://www.abs.gov.au/ausstats/abs@.nsf/Lookup/4125.0main+features2320Jan%202013

http://www.payscale.com/research/AU/Job=Graduate_Software_Engineer/Salary

http://www.graduatecareers.com.au/research/researchreports/graduatesalaries/



Assuming he works at an SME (any industry, not just IT) he'll be doing a lot of varied tasks (a lot of firms will tend to pigeonhole you into becoming a specialist). At a lot of service providers and SME firms I've looked at, one hour of downtime equates to about five figures. If you work in the right firm or you end up really good at your job you end up saving your firm somewhere between 5-7 figures each year. At much larger firms this figure is closer to 6-8 figures each year.



At a lot of firms we suffer from hardware failure. The standard procedure is to simply purchase new hardware to deal with the problem (it's quicker and technically free despite the possible loss of downtime due to diagnosis and response time). The thing I've found out is that if you are actually able to repair/re-design the hardware itself you can actually save/make a lot (particularly telecommunications and network hardware). This is especially the case if the original design cut corners. Once again savings are similar to the previous point.



In an average firm there may be a perception that IT is simply there to support the function of a business. It's almost like a utility now (think electricity, water, gas, etc... that's how low some companies perceive technology: they see it as a mere cost rather than something that can benefit their business). What a lot of people neglect is how much progress can be made given the use of appropriate technology. Savings/productivity gains are similar to the previous points.



What stops us from realising exactly what our value is, is the siloed nature of the modern business world (specialists rather than generalists a lot of the time) and the fact that various laws, regulations, and so on are designed to help stop us from being exploited.



The only way you actually realise what you're worth is if you work as an individual or start a company.



Go ahead, break down what you actually do in your day. You'll be surprised at how much you may actually be worth.



What you ultimately find out though is that (if you're not lazy) you're probably underpaid. The irony is that if the company were to pay you exactly what you were worth, it would go bankrupt. Moreover, you only realistically have a small number of chances/opportunities to demonstrate your true worth. A lot of the time jobs are conducted on the basis of intermittency. Namely, you're there to do something specialised and difficult every once in a while, not necessarily all the time.



It would be a really interesting world if we didn't have company structures/businesses. I keep finding out, over and over again, that you simply get paid more for more skills as an individual. This is especially the case if there is no artificial barrier between you and getting the job done. The work mightn't be stable, but once you deal with that you have a very different perspective of the world, even if it's only a part-time job.



If you have some talent, I'd suggest you try starting your own company or working as an individual at some point in your life. The obvious problem will be coming up with an idea which will make money. Don't worry about it. You will find opportunities along the way as you gain more life experience and understand where value comes from. At that point, start doing the numbers and run a few tests to see whether your business instincts are correct. You may be surprised at what you end up finding out.

http://forums.whirlpool.net.au/archive/1505450



Here are some other things I've worked out:

  • if you need a massive and complex business plan in order to justify your business's existence (particularly to investors) then you should rethink your business
  • if you need to 'spin things' or else have a bloated marketing department then there's likely nothing much special about the product or service that you are selling
  • if your business is fairly complex at a small scale, think about what it will be like when it scales up. Try to remove as many obstacles as you can while your company is still young, to ensure future success if unexpected growth comes your way
  • if you narrow yourself to one particular field you can limit your opportunities. In the normal world it can lead to stagnation (no real change in salary/value) or to specialisation (an increase in salary/value), though neither is a given. In smaller companies, holding multiple roles may be critical to the survival/profitability of that particular company. The obvious risk is that if such a person leaves, you're left trying to fill in for multiple roles
  • a lot of goods and services exist in a one-to-one relationship. You can only sell it once and you have to maximise the profit on that. Through the use of broadcast-style technologies we can achieve one-to-many relationships, allowing us to build substantial wealth easily and quickly. This makes valuation of technology companies much more difficult. However, once you factor in overheads and the risk of success versus failure, things tend to normalise
  • perception means a lot. Think about a pair of Nike runners versus standard supermarket-branded ones. There is sometimes very little difference in quality, though the price of the Nike runners may be double. The same goes for some of the major fashion labels. They are sometimes produced en masse in cheap Asian/African countries
  • if there are individuals and companies offering the opportunity to engage in solid business ventures, take them. Your perspective on life and lifestyle will change drastically if things turn out successfully
  • in reality, there are very few businesses where you can genuinely say the future looks bright for all of eternity. This is the same across every single sector
  • make friends with everyone. You'll be surprised at what you can learn and what opportunities you may be able to find
  • the meaning of 'market value' largely dissolves into nothingness in the real world. Managing perception accounts for a good deal of what you can charge for something
  • just like investments the value of a good or service will normalise over time. You need volatility (this can be achieved via any means) to be able to make abnormal profits though
  • for companies whose goods and services have high overheads, 7-8 figures a week/month/year can mean nothing. If the overheads are high enough it's possible that the company may go under in a very short space of time. Find something which doesn't have such overheads and focus on that, whether it be a primary or side business
  • the more you know, the better off you'll be if you're willing to take calculated risks, are patient, and persevere. Most of the time things will normalise
  • in general, the community perception is that making more with high expenses is more successful than making less with no expenses
  • comments from people like Joe Hockey make a lot of sense to those who have had a relatively privileged background, but they also go to the core of the matter. There are a lot of impediments in life now. I recall once walking past an Aboriginal man who was begging. A white middle-to-upper-class man simply admonished him to get a job. If you've ever worked with people in that situation, or ever factored in his background, you'll realise that this is almost impossible. Everybody has a go at people who work within the 'cash economy' and do not contribute to the tax base of the country, but it's easy to understand why a lot of people do it. There are a lot of impediments in life despite whatever anyone says, whether you're working at the top or bottom end of the scale
http://forums.whirlpool.net.au/archive/1937638

http://www.abc.net.au/news/2015-06-10/janda-its-not-hockeys-job-comment-that-should-worry-us/6535484

http://www.smh.com.au/comment/smh-letters/joe-hockey-doesnt-grasp-simple-economics-20150610-ghkl9v.html

http://www.bbc.co.uk/news/education-33109052

  • throw in some weirdness like strange pay for seemingly unskilled jobs and everything looks bizarre. A good example of this is a nightfill worker (stock stacker) at a supermarket in Australia. He can actually earn a lot more than those in skilled professions. It's not just about skills or knowledge when it comes to earning a high wage
http://forums.whirlpool.net.au/archive/2219972

http://forums.whirlpool.net.au/archive/1937638
  • there are a lot of overqualified people out there (but there are a hell of a lot more underqualified people out there as well. I've worked both sides of the equation). If you are lucky someone will give you a chance at something appropriate to your level, but a lot of the time you'll just have to make do
  • you may be shocked at how, who, and what makes money and vice-versa (how, who, and what doesn't make money). For instance, something which you can get for free can still be sold, while some products/services which have had a lot of effort put into them may not get any sales
https://www.ozbargain.com.au/node/197991

  • there are very few companies that you could genuinely say are 100% technology orientated. Even in companies that are supposedly technology orientated there are still political issues that you must deal with
  • by using certain mechanisms you can stop resales of your products/services, which can force purchases only through known avenues. This is a common strategy in the music industry with MIDI controllers and stops erosion/cannibalisation of sales of new product through minimisation of sales of used products
  • it's easy to be impressed by people who are simply quoting numbers. Do your research. People commonly quote high growth figures, but in reality most aren't as impressive as they seem. They seem even less impressive when you factor in inflation, Quantitative Easing programs, etc... In a lot of cases companies/industries (even many countries if you think about it) would actually be at a standstill or else going backwards.
http://www.inc.com/sageworks/the-15-most-profitable-industries-for-private-companies.html

https://biz.yahoo.com/p/sum_qpmd.html

http://www.forbes.com/sites/sageworks/2013/04/28/the-most-profitable-businesses-to-start/

Feeds I follow: Citylab, Commitstrip, MKBHD, Offsetting Behaviour

I thought I’d list the feeds/blogs/sites I currently follow. Mostly I do this via RSS using Newsblur.


June 12, 2015

Logical Volume Management with Debian on Amazon EC2

The recent AWS introduction of the Elastic File System gives you an automatic grow-and-shrink capability as an NFS mount, an exciting option that takes away the previous overhead in creating shared block file systems for EC2 instances.

However, it should be noted that the same auto-management of capacity is not true of the EC2 instance’s Elastic Block Store (EBS) block storage disks; sizing (and resizing) is left to the customer. With current 2015 EBS, one cannot simply increase the size of an EBS Volume as the storage becomes full; (as at June 2015) an EBS volume, once created, has a fixed size. For many applications, that lack of a resize function on its local EBS disks is not a problem; many server instances come into existence for a brief period, process some data and then get Terminated, so long term management is not needed.

However, for a long-term data store on an instance (instead of S3, which I would recommend looking at closely for durability and pricing fit), where I want to harness the capacity to grow (or shrink) the disk for my data, I will need to leverage some slightly more advanced disk management. And just to make life interesting, I wish to do all this while the data is live and in-use, if possible.

Enter: Logical Volume Management, or LVM. It’s been around for a long, long time: LVM 2 made a debut around 2002-2003 (2.00.09 was Mar 2004) — and LVM 1 was many years before that — so it’s pretty mature now. It’s a powerful layer that sits between your raw storage block devices (as seen by the operating system), and the partitions and file systems you would normally put on them.

In this post, I’ll walk through the process of getting set up with LVM on Debian in the AWS EC2 environment, and how you’d do some basic maintenance to add and remove (where possible) storage with minimal interruption.

Getting Started

First a little prep work for a new Debian instance with LVM.

As I’d like to give the instance its own ability to manage its storage, I’ll want to provision an IAM Role for EC2 Instances for this host. In the AWS console, visit IAM, Roles, and I’ll create a new Role I’ll name EC2-MyServer (or similar), and at this point I’ll skip giving it any actual privileges (later we’ll update this). As at this date, we can only associate an instance role/profile at instance launch time.

Now I launch a base-image Debian EC2 instance with this IAM Role/Profile; the root file system is an EBS Volume. I am going to put the data that I’ll be managing on a separate disk from the root file system.

First, I need to get the LVM utilities installed. It’s a simple package to install: the lvm2 package. From my EC2 instance I need to get root privileges (sudo -i) and run:

apt update && apt install lvm2

After a few moments, the package is installed. I’ll choose a location that I want my data to live in, such as /opt/.  I want a separate disk for this task for a number of reasons:

  1. Root EBS volumes cannot currently be encrypted using Amazon’s Encrypted EBS Volumes. If I want to also use AWS’ encryption option, it’ll have to be on a non-root disk. Note that instance-size restrictions also exist for EBS Encrypted Volumes.
  2. It’s possibly not worth making a snapshot of the Operating System at the same time as the user content data I am saving. The OS install (except the /etc/ folder) can almost entirely be recreated from a fresh install, so why snapshot that as well (unless that’s your strategy for preserving /etc, /home, etc).
  3. The type of EBS volume that you require may be different for different data: today (Apr 2015) there is a choice of Magnetic, General Purpose 2 (GP2) SSD, and Provisioned IO/s (PIOPS) SSD, each with different costs; and depending on our volume, we may want to select one for our root volume (operating system), and something else for our data storage.
  4. I may want to use EBS snapshots to clone the disk to another host, without the base OS bundled in with the data I am cloning.

I will create this extra volume in the AWS console and present it to this host. I’ll start by using a web browser (we’ll use CLI later) with the EC2 console.

The first piece of information we need to know is where my EC2 instance is running. Specifically, the AWS Region and Availability Zone (AZ). EBS Volumes only exist within the one designated AZ. If I accidentally make the volume(s) in the wrong AZ, then I won’t be able to connect them to my instance. It’s not a huge issue, as I would just delete the volume and try again.

I navigate to the “Instances” panel of the EC2 Console, and find my instance in the list:

A (redacted) list of instances from the EC2 console.

Here I can see I have located an instance and it’s running in US-East-1A: that’s AZ A in Region US-East-1. I can also grab this with a wget from my running Debian instance by asking the MetaData server:

wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone

The returned text is simply: “us-east-1a”.

Time to navigate to “Elastic Block Store“, choose “Volumes” and click “Create“:

Creating a volume in AWS EC2: ensure the AZ is the same as your instance.

You’ll see I selected that I wanted AWS to encrypt this; as noted above, at this time that doesn’t include the t2 family. However, you have the option of using encryption with LVM – where the customer looks after the encryption key – see LUKS.

What’s nice is that I can do both — have AWS Encrypted Volumes, and then use encryption on top of this, but I have to manage my own keys with LUKS, and should I lose them, then I get to keep all the ciphertext!

I deselected this for my example (with a t2.micro), and continued; I could see the new volume in the list as “creating”, and then shortly afterwards as “available”. Time to attach it: select the disk, and either right-click and choose “Attach“, or from the menu at the top of the list, choose “Actions” -> “Attach” (both do the same thing).

Attaching a volume to an instance: you’ll be prompted with the compatible instances in the same AZ.

At this point in time your EC2 instance will now notice a new disk; you can confirm this with “dmesg |tail“, and you’ll see something like:

[1994151.231815]  xvdg: unknown partition table

(Note the time-stamp in square brackets will be different).

Previously at this juncture you would format the entire disk with your favourite file system, mount it in the desired location, and be done. But we’re adding in LVM here – between this “raw” device, and the filesystem we are yet to make.

Marking the block device for LVM

Our first operation with LVM is to put a marker on the volume to indicate it’s being used for LVM – so that when we scan the block device, we know what it’s for. It’s a really simple command:

pvcreate /dev/xvdg

The device name above (/dev/xvdg) should correspond to the one we saw from the dmesg output above. The output of the above is rather straightforward:

  Physical volume "/dev/xvdg" successfully created

Checking our EBS Volume

We can check on the EBS volume – which LVM sees as a Physical Volume – using the “pvs” command.

# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/xvdg       lvm2 ---  5.00g 5.00g

Here we see the entire disk is currently unused.

Creating our First Volume Group

Next step, we need to make an initial LVM Volume Group which will use our Physical volume (xvdg). The Volume Group will then contain one (or more) Logical Volumes that we’ll format and use. Again, a simple command to create a volume group by giving it its first physical device that it will use:

# vgcreate  OptVG /dev/xvdg
  Volume group "OptVG" successfully created

And likewise we can check our set of Volume Groups with “vgs”:

# vgs
  VG    #PV #LV #SN Attr   VSize VFree
  OptVG   1   0   0 wz--n- 5.00g 5.00g

The Attribute flags here indicate this is writable, resizable, and allocating extents in “normal” mode. Let’s proceed to make our (first) Logical Volume in this Volume Group:

# lvcreate -n OptLV -L 4.9G OptVG
  Rounding up size to full physical extent 4.90 GiB
  Logical volume "OptLV" created

You’ll note that I have created our Logical Volume as almost the same size as the entire Volume Group (which is currently one disk) but I left some space unused: the reason for this comes down to keeping some space available for any jobs that LVM may want to use on the disk – and this will be used later when we want to move data between raw disk devices.

If I wanted to use LVM for Snapshots, then I’d want to leave more space free (unallocated) again.
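
Should you leave that extra room, a snapshot is then a one-liner; a minimal sketch (the OptSnap name and 500 MB size here are just examples, and the size must fit within the Volume Group’s free extents):

# lvcreate -s -n OptSnap -L 500M /dev/OptVG/OptLV

The snapshot consumes free extents in the Volume Group as blocks diverge from the origin volume, and can be discarded with lvremove when no longer needed.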

We can check on our Logical Volume:

# lvs
  LV    VG    Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  OptLV OptVG -wi-a----- 4.90g

The attributes indicate that the Logical Volume is writeable, is allocating its data to the disk in inherit mode (ie, as the Volume Group is doing), and that it is active. At this stage you may also discover we have a device /dev/OptVG/OptLV, and this is what we’re going to format and mount. But before we do, we should review what file system we’ll use.


Filesystems

Popular Linux file systems
Name    Shrink        Grow   Journal   Max File Sz   Max Vol Sz
btrfs   Y             Y      N         16 EB         16 EB
ext3    Y (off-line)  Y      Y         2 TB          32 TB
ext4    Y (off-line)  Y      Y         16 TB         1 EB
xfs     N             Y      Y         8 EB          8 EB
zfs*    N             Y      Y         16 EB         256 ZB

For more details see the Wikipedia comparison. Note that ZFS requires a 3rd-party kernel module or a FUSE layer, so I’ll discount that here. BTRFS only went stable with Linux kernel 3.10, so with Debian Jessie that’s a possibility; but for tried and trusted, I’ll use ext4.

The selection of ext4 also means that I’ll only be able to shrink this file system off-line (unmounted).

I’ll make the filesystem:

# mkfs.ext4 /dev/OptVG/OptLV
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 1285120 4k blocks and 321280 inodes
Filesystem UUID: 4f831d17-2b80-495f-8113-580bd74389dd
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

And now mount this volume and check it out:

# mount /dev/OptVG/OptLV /opt/
# df -HT /opt
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/OptVG-OptLV ext4  5.1G   11M  4.8G   1% /opt

Lastly, we want this to be mounted next time we reboot, so edit /etc/fstab and add the line:

/dev/OptVG/OptLV /opt ext4 noatime,nodiratime 0 0

With this in place, we can now start using this disk. I chose here not to update the filesystem metadata every time I access a file or folder – updates get logged as normal, but access times are just ignored.

Time to expand

After some time, our 5 GB /opt/ disk is rather full, and we need to make it bigger, but we wish to do so without any downtime. Amazon EBS doesn’t support resizing volumes, so our strategy is to add a new larger volume, and remove the older one that no longer suits us; LVM and ext4’s online resize ability will allow us to do this transparently.

For this example, we’ll decide that we want a 10 GB volume. It can be a different type of EBS volume to our original – we’re going to online-migrate all our data from one to the other.

As when we created the original 5 GB EBS volume above, create a new one in the same AZ and attach it to the host (perhaps a /dev/xvdh this time). We can check the new volume is visible with dmesg again:

[1999786.341602]  xvdh: unknown partition table

And now we initialise this as a Physical Volume for LVM:

# pvcreate /dev/xvdh
  Physical volume "/dev/xvdh" successfully created

And then add this disk to our existing OptVG Volume Group:

# vgextend OptVG /dev/xvdh
  Volume group "OptVG" successfully extended

We can now review our Volume group with vgs, and see our physical volumes with pvs:

# vgs
  VG    #PV #LV #SN Attr   VSize  VFree
  OptVG   2   1   0 wz--n- 14.99g 10.09g
# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/xvdg  OptVG lvm2 a--   5.00g 96.00m
  /dev/xvdh  OptVG lvm2 a--  10.00g 10.00g

There are now 2 Physical Volumes – we have a 4.9 GB filesystem taking up space, so 10.09 GB of unallocated space in the VG.

Now it’s time to stop using the /dev/xvdg volume for any new requests:

# pvchange -x n /dev/xvdg
  Physical volume "/dev/xvdg" changed
  1 physical volume changed / 0 physical volumes not changed

At this time, our existing data is on the old disk, and our new data is on the new one. It’s now that I’d recommend running GNU screen (or similar) so you can detach from this shell session and reconnect, as the process of migrating the existing data can take some time (hours for large volumes):

# pvmove /dev/xvdg /dev/xvdh
  /dev/xvdg: Moved: 0.1%
  /dev/xvdg: Moved: 8.6%
  /dev/xvdg: Moved: 17.1%
  /dev/xvdg: Moved: 25.7%
  /dev/xvdg: Moved: 34.2%
  /dev/xvdg: Moved: 42.5%
  /dev/xvdg: Moved: 51.2%
  /dev/xvdg: Moved: 59.7%
  /dev/xvdg: Moved: 68.0%
  /dev/xvdg: Moved: 76.4%
  /dev/xvdg: Moved: 84.7%
  /dev/xvdg: Moved: 93.3%
  /dev/xvdg: Moved: 100.0%

During the move, checking the Monitoring tab in the AWS EC2 Console for the two volumes should show one with a large data Read metric, and one with a large data Write metric – clearly data should be flowing off the old disk, and on to the new.

A note on disk throughput

The above move was of a pretty small, mostly empty volume. Larger disks will take longer, naturally, so getting some speed out of the process may be key. There are a few things we can do to tweak this:

  • EBS Optimised: a launch-time option that reserves network throughput from certain instance types back to the EBS service within the AZ. Depending on the size of the instance this is 500 MB/sec up to 4GB/sec. Note that for the c4 family of instances, EBS Optimised is on by default.
  • Size of GP2 disk: the larger the disk, the longer it can sustain high IO throughput – but read this for details.
  • Size and speed of PIOPs disk: if consistent high IO is required, then moving to Provisioned IO disk may be useful. Looking at the (2 weeks) history of Cloudwatch logs for the old volume will give me some idea of the duty cycle of the disk IO.

Back to the move…

Upon completion I can see that the disk in use is the new disk and not the old one, using pvs again:

# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/xvdg  OptVG lvm2 ---   5.00g 5.00g
  /dev/xvdh  OptVG lvm2 a--  10.00g 5.09g

So all 5 GB is now unused (compare to above, where only 96 MB was PFree). With that disk not containing data, I can tell LVM to remove the disk from the Volume Group:

# vgreduce OptVG /dev/xvdg
  Removed "/dev/xvdg" from volume group "OptVG"

Then I cleanly wipe the labels from the volume:

# pvremove /dev/xvdg
  Labels on physical volume "/dev/xvdg" successfully wiped

If I really want to clean the disk, I could choose to use shred(1) on the disk to overwrite it with random data. This can take a long time.

Now that the disk is completely unused and disassociated from the VG, I can return to the AWS EC2 Console, and detach the disk:

Detaching an EBS volume from an EC2 instance.

Wait for a few seconds, and the disk is then shown as “available“; I then chose to delete the disk in the EC2 console (and stop paying for it).

Back to the Logical Volume – it’s still 4.9 GB, so I add 4.5 GB to it:

# lvresize -L +4.5G /dev/OptVG/OptLV
  Size of logical volume OptVG/OptLV changed from 4.90 GiB (1255 extents) to 9.40 GiB (2407 extents).
  Logical volume OptLV successfully resized

We now have 0.6GB free space on the physical volume (pvs confirms this).

Finally, it’s time to expand our ext4 file system:

# resize2fs /dev/OptVG/OptLV
resize2fs 1.42.12 (29-Aug-2014)
Filesystem at /dev/OptVG/OptLV is mounted on /opt; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/OptVG/OptLV is now 2464768 (4k) blocks long.

And with df we can now see:

# df -HT /opt/
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/OptVG-OptLV ext4  9.9G   12M  9.4G   1% /opt

Automating this

The IAM Role I made at the beginning of this post is now going to be useful. I’ll start by adding an IAM Policy to the Role to permit me to List Volumes, Create Volumes, Attach Volumes and Detach Volumes for my instance-id. Let’s start with creating a volume, with a policy like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateNewVolumes",
      "Action": "ec2:CreateVolume",
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:AvailabilityZone": "us-east-1a",
          "ec2:VolumeType": "gp2"
        },
        "NumericLessThanEquals": {
          "ec2:VolumeSize": "250"
        }
      }
    }
  ]
}

This policy puts some restrictions on the volumes that this instance can create: only within the given Availability Zone (matching our instance), only GP2 SSD (no PIOPs volumes), and size no more than 250 GB. I’ll add another policy to permit this instance role to tag volumes in this AZ that don’t yet have a tag called InstanceId:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TagUntaggedVolumeWithInstanceId",
      "Action": [
        "ec2:CreateTags"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:us-east-1:1234567890:volume/*",
      "Condition": {
        "Null": {
          "ec2:ResourceTag/InstanceId": "true"
        }
      }
    }
  ]
}

Now that I can create (and then tag) volumes, this becomes a simple procedure as to what else I can do to this volume. Deleting and creating snapshots of this volume are two obvious options, and the corresponding policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateDeleteSnapshots-DeleteVolume-DescribeModifyVolume",
      "Action": [
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot",
        "ec2:DeleteVolume",
        "ec2:DescribeSnapshotAttribute",
        "ec2:DescribeVolumeAttribute",
        "ec2:DescribeVolumeStatus",
        "ec2:ModifyVolumeAttribute"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/InstanceId": "i-123456"
        }
      }
    }
  ]
}

Of course it would be lovely if I could use a variable inside the policy condition instead of the literal string of the instance ID, but that’s not currently possible.

Clearly some of the more important actions I want to take are to attach and detach a volume to my instance:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1434114682836",
      "Action": [
        "ec2:AttachVolume"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:us-east-1:123456789:volume/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/InstanceID": "i-123456"
        }
      }
    },
    {
      "Sid": "Stmt1434114745717",
      "Action": [
        "ec2:AttachVolume"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:us-east-1:123456789:instance/i-123456"
    }
  ]
}

Now with this in place, we can start to fire up the AWS CLI we spoke of. We’ll let the CLI inherit its credentials from the IAM Instance Role and the policies we just defined.

AZ=`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone`

Region=`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone|rev|cut -c 2-|rev`

InstanceId=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`

VolumeId=`aws ec2 --region ${Region} create-volume --availability-zone ${AZ} --volume-type gp2 --size 1 --query "VolumeId" --output text`

aws ec2 --region ${Region} create-tags --resources ${VolumeId} --tags Key=InstanceId,Value=${InstanceId}

aws ec2 --region ${Region} attach-volume --volume-id ${VolumeId} --instance-id ${InstanceId} --device /dev/sdg

…and at this stage, the above manipulation of the raw block device with LVM can begin. Likewise you can then use the CLI to detach and destroy any unwanted volumes if you are migrating off old block devices.
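
That clean-up could look something like the following minimal sketch, assuming the volume has already been migrated off, removed from the VG and pvremove’d as above (note that the ec2:DetachVolume action would also need to be allowed in the Role’s policy, in the same way as AttachVolume):

aws ec2 --region ${Region} detach-volume --volume-id ${VolumeId}

aws ec2 --region ${Region} wait volume-available --volume-ids ${VolumeId}

aws ec2 --region ${Region} delete-volume --volume-id ${VolumeId}

The wait subcommand (available in recent AWS CLI versions) simply blocks until the volume has finished detaching, so the delete doesn’t race against it.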

clintonroy

We are delighted to announce that online registration is now open for PyCon Australia 2015. The sixth PyCon Australia is being held in Brisbane, Queensland from July 31st to August 4th at the Pullman Brisbane and is expected to draw hundreds of Python developers, enthusiasts and students from Australasia and afar.

Starting today, early bird offers are up for grabs. To take advantage of these discounted ticket rates, be among the first 100 to register. Early bird registration starts from $50 for full-time students, $180 for enthusiasts and $460 for professionals. Offers this good won’t last long, so head straight to http://2015.pycon-au.org and register right away.

PyCon Australia has endeavoured to keep tickets as affordable as possible. We are able to do so, thanks to our Sponsors and Contributors.

We have also worked out favourable deals with accommodation providers for PyCon delegates. Find out more about the options at http://2015.pycon-au.org/register/accommodation

To begin the registration process, and find out more about each level of ticket, visit http://2015.pycon-au.org/register/prices

Important Dates to Help You Plan

June 8: Early Bird Registration Opens — open to the first 100 tickets

June 29: Financial Assistance program closes.

July 8: Last day to Order PyCon Australia 2015 T-shirts

July 19: Last day to Advise Special Dietary Requirements

July 31: PyCon Australia 2015 Begins

About PyCon Australia

PyCon Australia is the national conference for the Python Programming Community. The sixth PyCon Australia will be held on July 31 through August 4th, 2015 in Brisbane, bringing together professional, student and enthusiast developers with a love for developing with Python. PyCon Australia informs the country’s Python developers with presentations, tutorials and panel sessions by experts and core developers of Python, as well as the libraries and frameworks that they rely on.

To find out more about PyCon Australia 2015, visit our website at http://pycon-au.org or e-mail us at contact@pycon-au.org.

PyCon Australia is presented by Linux Australia (www.linux.org.au) and acknowledges the support of our Platinum Sponsors, Red Hat Asia-Pacific, and Netbox Blue; and our Gold sponsors, The Australian Signals Directorate and Google Australia. For full details of our sponsors, see our website.




gcov code coverage for OpenPower firmware

For skiboot (which provides the OPAL boot and runtime firmware for OpenPower machines), I’ve been pretty interested in getting some automated code coverage data for booting on real hardware (as well as in a simulator). Why? Well, it’s useful to see that various test suites are actually testing what you think they are, and it helps you define more tests to increase what you’re covering.

The typical way to do code coverage is to make GCC build your program with GCOV, which is pretty simple if you’re a userspace program. You build with gcov enabled, run the program, and at the end you’re left with files on disk that contain all the coverage information for a tool such as lcov to consume. For the Linux kernel, you can also do this, and then extract the GCOV data out of debugfs and get code coverage for all/part of your kernel. It’s a little bit more involved for the kernel, but not too much so.
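
For the userspace case, the whole flow is only a handful of commands; a minimal sketch (the file names here are just examples):

gcc --coverage -O0 -o myprog myprog.c
./myprog
lcov --capture --directory . --output-file coverage.info
genhtml coverage.info --output-directory coverage-html

The --coverage flag writes the .gcno notes file at compile time, running the binary writes out the .gcda counters at exit, and lcov/genhtml turn both into a browsable HTML report.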

To achieve this, the kernel has to implement a bunch of stub functions itself rather than linking to the gcov library, as well as parse the GCOV data structures that GCC generates and emit the gcda files in debugfs when read. Basically, you replace the part of the GCC-generated code that writes the files out. This works really nicely as Linux has fancy things like a VFS and debugfs.

For skiboot, we have no such things. We are firmware; we don’t have a damn file system interface. So, what do we do? Write a userspace utility to parse a dump of the appropriate region of memory, easy! That’s exactly what I did: a (relatively) simple user space app to parse out the gcov gcda files from a skiboot memory image – something we can dump easily out of the simulator, relatively easily (albeit more slowly) from the FSP on an IBM POWER system, and even just directly out of a running system (if you boot a Linux kernel with the appropriate config).

So, we can now get a (mostly automated) code coverage report simply for the act of booting to petitboot: https://open-power.github.io/skiboot/boot-coverage-report/ along with our old coverage report which was just for the unit tests (https://open-power.github.io/skiboot/coverage-report/). My current boot-coverage-report is just on POWER7 and POWER8 IBM FSP based systems – but you can see that a decent amount of code both is (and isn’t) touched simply from the act of booting to the bootloader.

The numbers we get are only approximate for any code run on more than one CPU as GCC just generates code that does a load/add/store rather than using an atomic increment.

One interesting observation was that (at least on smaller systems, which are still quite large by many people’s standards), boot time was not really noticeably increased.

For more information on running with gcov, see the in-tree documentation: https://github.com/open-power/skiboot/blob/master/doc/gcov.txt

June 11, 2015

LUV Beginners June Meeting: Getting started with Raspberry Pi

Jun 20 2015, 12:30 to 16:30
Location: 

RMIT Building 91, 110 Victoria Street, Carlton South

Wen Lin will introduce the wonder of Raspberry Pi - a project that has taken the world by storm since the first RasPi SBC was introduced in Feb 2012. After some introduction, he will take the audience through a brief overview of Raspberry Pi's latest development around the world, as well as a quick glance at a sample of Raspberry Pi-related projects in the community. Then, in the second half of his session, Wen will go into some hands-on demos, focussing on getting a Raspberry Pi up and running as a micro computer running Linux.

LUV would like to acknowledge Red Hat for their help in obtaining the Trinity College venue and VPAC for hosting.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.



June 10, 2015

Repairing Musical Instruments/Electrical Equipment, the Value of Money, and Dating

If you've been reading this blog for a while now you've noticed that I do a lot of tinkering. One of the things I've been tinkering with a lot of late has been electronic music hardware/software. Some things to note:


- you should make the assumption that no one is going to help you with regards to circuit diagrams when it comes to fixing machines, re-designing/modifying them, etc... The best that you'll be able to manage are teardown pictures/diagrams posted by others out on the 'Interwebs'.


Don't make the assumption that your problem is exactly the same as others out there. Most of the time, though, they'll be the usual problems that other electronic devices face, such as improper contact (also referred to as dry soldering) or failed electronic components. The biggest problems that you will face will be the intermittent issues. For instance, thermally related or physical contact problems that haven't quite made themselves completely obvious. I had something like this recently. A screen on a Maschine was malfunctioning from time to time. The owner told me to press down on the screen to make it work. I tried it and it seemed to work. After tearing it down and trying to fix various contacts it became obvious that this one was slightly more difficult to fix. Putting pressure across the board didn't provide any further clues until a capacitor (C207, halfway across the PCB) fell off (at which point the problem became consistent). Re-soldering seems to have fixed the problem.


Interesting facts. Maschine screens are interchangeable from side to side in case you want/need to repair one of these. They are their own separate module (except in the Mikro based on the description I'm seeing). They are not soldered on to the PCB but are connected via ZIF connectors. 
Repairing a lot of (non-trivial) electronics is a balance between luck, skill, perseverance, etc... Tips for dealing with intermittent problems include applying physical pressure at strategic points to narrow down the source, purchasing better diagnostic equipment (sometimes your only choice), and using hair dryers/compressed air as a means of temperature regulation.

https://www.gearslutz.com/board/electronic-music-instruments-electronic-music-production/812965-maschine-mikro-has-met-beer-how-crack-open.html

http://www.illmuzik.com/forums/threads/maschine-mk1-controller-help.32980/

http://maschinemusic.com/forum/topics/maschine-mk2-defective-screens

https://www.native-instruments.com/forum/threads/maschine-studio-screens.211153/page-2

https://www.native-instruments.com/forum/threads/hardware-screens-not-working-properly.230738/

http://www.illmuzik.com/forums/threads/maschine-mk1-controller-help.32980/

https://www.native-instruments.com/forum/threads/i-got-2-maschines-and-on-botth-the-screens-are-flickering-out-after-not-even-1-year.193729/



- same with software interfacing. Some companies build their equipment with the express purpose of linking their hardware and software. They have no incentive to help you build something that will interface with their hardware/software. It will take luck, perseverance and knowledge of reverse engineering to do what you need (see the relevant chapters in my book on 'Cloud and Internet Security' for further details regarding this).

http://www.native-instruments.com/forum/threads/midi-keyboard-in-maschine-help.149559/

http://www.youtube.com/watch?v=JkDKV9ys3z8

http://www.youtube.com/watch?v=P2zFEHyBoZU

https://play.google.com/store/books/author?id=Binh+Nguyen

http://www.amazon.com/mn/search/?_encoding=UTF8&camp=1789&creative=390957&field-author=Binh%20Nguyen&linkCode=ur2&search-alias=digital-text&sort=relevancerank&tag=bnsb-20&linkId=3BWQJUK2RCDNUGFY

http://www.mpc-tutor.com/understanding-midi-on-the-akai-mpc/

http://www.acidboxblues.com/2012/07/so-ableton-live-crashes-and-you-think.html



- there is a good chance that you may be electrocuted at some point. Take measures to reduce the amount of current that can pass through your body. I often work with rubber gloves, wear rubber-soled shoes, etc... Isolate the problem as much as you can and work across modules. If in doubt, order in a new module rather than doing component-level repair. It will reduce the chances of you getting 'zapped' and may sometimes be the most viable, economic option available once you factor in the amount of time you must spend working on the problem. Finally, if in doubt, send it off to someone more accomplished to have the repair completed. This seems obvious, but I've come across some people who have tried to scrimp and have done more harm than good when attempting to 'repair' something.

http://dtbnguyen.blogspot.com.au/2013/03/repairing-laptop-power-bricks.html


- you will come across 'smelly equipment' from time to time. I recently came across a Maschine that had been used in a 'smoky environment'. It was so 'smoky' that I actually felt as though I was getting high from simply being around it. I had to tear it down and soak the control pads (which seem to be made of a silicone and rubber compound - not the device itself) in hot water and bleach, twice, for several hours before I could operate 'normally' within its vicinity. A tip: if you do have to use solvents or other cleaning chemicals, test at a lower concentration and amount first. You don't want to find out later down the line that the substance you used was actually highly corrosive and may have damaged sensitive electronic components.

https://www.gearslutz.com/board/so-much-gear-so-little-time/811452-how-remove-smoke-odour-gear-2.html

https://www.gearslutz.com/board/so-much-gear-so-little-time/632928-anyway-remove-cigarette-odor-used-gear.html

https://www.gearslutz.com/board/electronic-music-instruments-electronic-music-production/456754-how-do-remove-smell-smoke-y-synthesizer.html

http://forums.whirlpool.net.au/archive/1041569

https://www.gearslutz.com/board/so-much-gear-so-little-time/632928-anyway-remove-cigarette-odor-used-gear.html

http://en.wikipedia.org/wiki/Sodium_hydroxide

http://www.ebay.com/gds/How-to-Eliminate-Smoke-Smell-from-Your-eBay-Purchases-/10000000001669988/g.html

http://www.head-fi.org/t/60646/cigarette-smoke-smell-in-electronics-how-to-get-rid-of



As I was growing up, people often told me to "do something you enjoy". Others told me to "do something which will help you make heaps of money". Now I'm a little older, and a little bit wiser. I say: try to find a nice balance between the two.



You don't really realise what the value of money is until you actually are forced to consider what you earn and what you actually spend. For instance, the general belief is that everyone goes to school and works hard in an effort to find a good, high earning profession at the end of all of it. Recently, I've been looking at the numbers more carefully and for everything you have to put up with in some places you really wonder whether it's all worth it.



Increasingly, many of us are working extended hours (your job description may say 9 to 5, but in reality your hours are much longer, or else you have to deal with an undue number of 'off hour' incidents) with unrealistic expectations, lack of training, favouritism/nepotism, and unsupportive/directionless management and/or team mates, for not much more pay. After you've factored in travel time/costs, bills, day-to-day living costs and so on, there's not enough left over to say that it was actually worth it, especially if it's not a role that you particularly enjoy.



Even if you make heaps of money you've given up so much time during the week that you can be too burnt out to enjoy it.



Ironically, it's much the same even in some of the 'glamour industries' such as law, medicine, finance, and IT.

http://forums.whirlpool.net.au/forum-replies.cfm?t=2413007

http://forums.whirlpool.net.au/archive/2345937

http://www.amazon.com/The-Striped-Prison-Lisa-Pryor/dp/0330423509

http://skepticlawyer.com.au/2009/12/21/pin-striped-prison/



Moreover, it's the same with a lot of businesses. Live long enough and you basically see that, in spite of the impressive numbers (7 to 8 figures a month/year) that a lot of businesses may report, it doesn't seem like they're going anywhere. They just seem to be struggling to stay afloat a lot of the time. It makes a lot more sense to me now why a lot of companies seem so paranoid when it comes to profit margins and maintaining large amounts of cash savings on hand in case something goes bad (Microsoft has been somewhat notorious when it comes to this).



The obvious answer to this conundrum is to run your own business (or search for your 'dream job'). Unless you've actually been involved in a startup, or in building a company from the ground up, you don't realise how much stress is involved. Unless you actually enjoy the work, you're essentially stuck in the same doom-loop scenario. Moreover, finding your 'dream job' is made much more difficult by the lack of opportunities, the competition, and recruiters who may not be entirely up front about the job in question. The only thing I've been learning over and over again is to try and find a balance between time, money, and doing what you enjoy. Moreover, once you find something you enjoy and are making money out of it, make the most of it and stick at it for as long as you possibly can (whether that be your own business or working for someone else).



It's pretty darn obvious people use the information on this blog for all sorts of weird and wonderful things. For those girls who have supposedly been lusting after the man behind this blog, please send photos!!! :-) For those who are looking for immigration benefits, though, please send photos and money too!!! :-)

June 09, 2015

Custom MIDI (Hardware and Software) Controllers, MP3 Players, and SD Card Experiments

If you're like me (a technologist who has an interest in music) you've probably looked at a variety of MIDI controllers on the market but haven't found one that quite ticks all the boxes for everything that you want to do. It's also likely that you've looked at having multiple controllers and/or some of the higher end equipment but as always you can't always justify the cost of what you want versus what you actually need.



Of late, I've been looking at building my own (MIDI controllers). After all, these devices are relatively simple and often use highly standardised components (membrane-based switches, encoders/knobs, some chips, etc...). Look at the following links/teardowns and you'll notice that there is very little to distinguish between them, with many components being available from your local electronics store.

https://www.flickr.com/photos/psychlist1972/sets/72157631489556008/detail/

http://www.illuminatedsounds.com/?cat=23

http://www.illuminatedsounds.com/?p=744

http://bangbang-nyc.com/2013/05/ableton-push-disassembled/

http://pushmod.blogspot.com.au/
http://www.synthtopia.com/content/2013/08/26/ableton-push-stripped-bare/

http://www.mpcstuff.com/akstst.html



I've looked at starting from scratch for hardware builds, but that has proven to be prohibitively expensive for my experiment (3D printing is an increasingly viable option, especially as public libraries make printers available for free public use, but there are limitations, especially with regards to construction. For instance, many printers will require multiple sessions before a complete device can be constructed, there are durability concerns, etc...). Instead I've been looking at using existing electronics to interface with.

http://www.umidi.co/index.html

http://custommidicontrollers.com/

http://www.instructables.com/id/Custom-Built-MIDI-Controller/



For instance, you can find something suitable to turn into a MIDI controller (calculators and toy pianos spring to mind). The circuitry is often very simple, and basically all you need to do is hook it up to an environmental control interface device with multiple sensors. A hardware interface (such as an Arduino device) is then used to provide electrical signal to MIDI control translation. The other option is to analyse the electrical signal on a case-by-case basis, then use this as the basis for writing a translation program which will turn the electrical signal into a MIDI signal that can be used to interface with other equipment, your existing software, etc...
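
Once something is emitting MIDI, it's easy to sanity-check the messages from a Linux box with ALSA's amidi utility; a minimal sketch (the hw:1,0 port here is just an example - check amidi -l for your own device):

amidi -l
amidi -p hw:1,0 -S '90 3C 64'
amidi -p hw:1,0 -d

The -S line sends a raw note-on (status 0x90, middle C, velocity 100) to the device, while -d dumps whatever the controller itself is sending - handy when working out a translation mapping.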

http://www.musicradar.com/reviews/tech/akai-mpd24-22920

http://vvvv.org/contribution/mpd24-akai-midi-mapper

http://mods-n-hacks.wonderhowto.com/how-to/build-simple-midi-controller-251069/

http://shiftmore.blogspot.com.au/2009/12/calculator-midi-usb-controller.html

http://www.codetinkerhack.com/2012/11/how-to-turn-piano-toy-into-midi.html

http://www.codetinkerhack.com/2013/01/how-to-add-velocity-aftertouch-midi.html

http://makezine.com/2010/11/30/usbhacking/

https://wiki.python.org/moin/PythonInMusic

http://www.native-instruments.com/forum/threads/turning-any-usb-hardware-into-a-midi-device.47017/

http://createdigitalmusic.com/2009/11/novation-releases-all-midi-details-for-launchpad/

http://www.widisoft.com/english/widi-audio-to-midi-vst.html

http://code.google.com/p/audio2midi/

http://www.synthtopia.com/content/2013/07/26/midimorphosis-converts-audio-to-midi/

http://www.nativekontrol.com/



Another option I've been looking at is using third-party electronic devices (such as a tablet, or else cheaper MIDI control devices in combination with other software) to provide emulation of often much more expensive hardware. Good examples of this include the high-end hardware controllers such as Native Instruments' Maschine, Ableton's Push, and Akai's MPC/APC series, etc... (Even when purchased second hand these devices can often fetch up to around 80-90% of their retail value. Factor in the problem that few retailers are willing to provide demonstration equipment for them (StoreDJ is an exception) and you can understand why so many people re-sell their equipment, with explanations often stating that the piece of equipment quite simply didn't fit into their setup.)

http://motscousus.com/stuff/2011-07_Novation_Launchpad_Ableton_Live_Scripts/
http://www.afrodjmac.com/blog/2013/03/14/more-ways-to-turn-your-launchpad-into-a-push

http://beatwise.proboards.com/thread/1315/free-preset-carbon-push-emulation

http://www.reddit.com/r/abletonlive/comments/1aopop/push_emulation_now_available_on_apc40_free_to/?



There are several main options to look at including TouchOSC, MIDI Designer, and Lemur. The two I've been most curious about are Lemur and TouchOSC though. Installation and setup consist of a daemon/service on your computer, an application of some sort on your tablet, and an editor that can be tablet or computer based. Thereafter, there are often 'templates' which are basically skins and underlying software code which allows you to design a MIDI interface from scratch and interface with other equipment/software directly from your tablet.

https://liine.net/en/products/lemur/

http://iosmidi.com/

http://mididesigner.com/

http://hexler.net/software/touchosc-android

http://djtechtools.com/2013/01/28/how-to-dj-using-liines-lemur-app-for-ipad/

http://www.youtube.com/watch?v=KJxAnm3j8TI

http://createdigitalmusic.com/2014/11/lemur-now-android-supports-cabled-connections-want-touch-app/
https://liine.net/en/community/user-library/view/421/



There are obvious issues here. Apple iPads are almost as expensive as some of the MIDI controllers we're looking at in this document. One option is to purchase the iPad Mini or something second hand. Basically, what I've been reading indicates that either option will do but that the screen size of the iPad Mini may make things a bit fiddly particularly if you have large hands. The other option is to use Android only applications. The only problem is that the iOS universe is often much more diverse than the Android one.

http://www.kvraudio.com/forum/viewtopic.php?t=397495

http://support.liine.net/customer/portal/questions/1244470-ipad-mini-compatibility-with-lemur-drum-pad-

http://forum.liine.net/viewtopic.php?f=25&t=2391

https://www.ableton.com/en/help/article/control-live-mobile-device/

https://www.gearslutz.com/board/electronic-music-instruments-electronic-music-production/700437-ni-maschine-mikro-vs-ipad-lemur.html

http://digitaldjtools.net/mappings/

http://forum.watmm.com/topic/76701-considering-an-ipad-mini/

https://documentation.meraki.com/SM/Monitoring_and_Reporting/Activation_Lock_Bypass_for_iOS_Devices


http://cydiamate.net/doulci-ios-8-3-activation-lock-bypass/



The other thing that needs to be considered is how you should interface. In theory, wireless is a great option. In practice I've been seeing stories about consistently lost connections. Look at a hardware USB interface if need be.

http://www.djcity.com.au/irig-midi-interface-for-iphone-and-ipad

http://www.djcity.com.au/irig-midi2



To be honest though, a lot of the emulators for the Push (and other devices) aren't perfect. You lose a bit of functionality (in some cases you gain a lot of extra functionality, but the emulation still isn't perfect). It's likely going to make you want to purchase these devices more, or ward you off them completely because they don't fit into your workflow.



With the cessation of production of the iPod Classic and other high-capacity music player options, I've been looking at alternatives on and off for a while. Clearly, high-capacity SD-based storage options are extremely expensive at this stage at the high end. One alternative, though, is using adapter cards in inexpensive, readily available, older low-capacity MP3 players which utilise hard drives. The adapters required are available for around $10-20. Obvious problems using SD-based storage include speed limitations, capacity limitations, high prices, etc... Moreover, some of the adapters won't fit in the case, or need workarounds. For instance, currently there aren't enough 128GB SD cards at a reasonable price locally, so running multiple SD cards in a RAID configuration may be the compromise that you have to make for the immediate future.

http://www.ebay.com/bhp/sd-card-to-ide

http://cubicgarden.com/2013/05/05/upgrading-the-pacemakers-hard-drive/

http://www.head-fi.org/t/566780/official-ipod-video-classic-5g-5-5g-6g-6-5g-7g-ssd-mod-thread/270

http://www.ebay.com/itm/SD-SDHC-MMC-Card-to-1-8-ZIF-LIF-CE-SSD-Adapter-40pin-ZIF-LIF-cable-/111091174857



One interesting piece of information that I've come across recently is that there isn't much stopping people using SDXC cards in supposedly SDHC-only card readers (either drivers or simple hardware blocks are the limitations). Basically, the primary difference between SDHC and SDXC is the default file format: SDHC uses FAT32 while SDXC uses exFAT. Clearly this limitation can be overcome with the right tools and knowledge. For instance, Windows by default won't format a large card as FAT32, so other options need to be employed.
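
On Linux the reformat itself is trivial; a minimal sketch, assuming the card shows up as /dev/sdb with a single partition (the device name here is just an example - check dmesg or lsblk first, as formatting the wrong device destroys its contents):

umount /dev/sdb1
mkfs.vfat -F 32 -n SDCARD /dev/sdb1

mkfs.vfat -F 32 will happily build FAT32 on a 64GB+ SDXC card - exactly what the built-in Windows formatter refuses to do (third-party tools such as the ridgecrop fat32format utility linked below do the same job on Windows).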

https://gbatemp.net/threads/how-to-use-a-64gb-micro-sdxc-in-your-sdhc-compliant-flash-cart.335912/

http://www.ridgecrop.demon.co.uk/

http://www.tarkan.info/20121226/tutorials/ipod-and-sdhc-sdxc-cards

http://arstechnica.com/civis/viewtopic.php?t=1151548

http://en.wikipedia.org/wiki/Secure_Digital

https://www.ifixit.com/Guide/iPod+5th+Generation+%28Video%29+CF+or+SDHC-SDXC+Memory+instead+of+HDD+Replacement/7492

https://www.raspberrypi.org/forums/viewtopic.php?f=2&t=2252



http://superuser.com/questions/282202/which-consumes-more-power-hard-drive-or-sd-card-card-reader

http://raspberrypi.stackexchange.com/questions/1765/possible-to-connect-sata-device-to-the-sd-slot

http://www.techbuy.com.au/p/208703/HARD_DRIVE_-_EXTERNAL_DRIVE_CASE_SATA_-_USB_2.5/8WARE/WI21.asp

http://www.warcom.com.au/shop/flypage/computer-parts/media-players/49000?gclid=CKXk-bfN2sUCFUsHvAod-gQAGg

http://www.i-tech.com.au/products/144200_8ware_Portable_Wireless_Streaming.aspx

June 07, 2015

Twitter posts: 2015-06-01 to 2015-06-07

Thoughts on the white spots of Ceres

If you’ve been paying attention to the world of planetary exploration you’ll have noticed the excitement about the unexpected white spots on the dwarf planet Ceres. Here’s an image from May 29th that shows them well.

Ceres with white spots

Having looked at a few images, my theory is that impacts are exposing some much higher albedo material, which you can see here at the top of the rebound peak at the center of the crater, and that the impact has thrown some of this material up; that material has then fallen back as Ceres rotated slowly beneath it, giving rise to the blobs to the side of the crater.

If my theory is right, then if you know Ceres’ gravity, its rotational speed, and the distance between the rebound peak and the other spots, you should be able to work out how far up the material was thrown. That might tell you something about the size of the impact (depending on how much you know about the structure of Ceres itself).
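
As a back-of-envelope sketch, under the post’s simplifying picture (ejecta launched straight up at speed $v$ over a surface rotating at angular rate $\Omega$ with local radius $R$, surface gravity $g$, and offset $d$ between the peak and the secondary spots):

\[ t = \frac{2v}{g}, \qquad d \approx \Omega R t \;\Rightarrow\; v = \frac{g d}{2 \Omega R}, \qquad h = \frac{v^2}{2g} = \frac{g d^2}{8 \Omega^2 R^2} \]

Note this ignores that real ejecta shares the surface’s tangential velocity (a proper treatment would be a Coriolis calculation), so treat it as a toy upper bound rather than anything rigorous.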

As an analogy, here’s an impact on Mars, captured by the HiRISE camera on MRO, that shows an area of ice exposed by the impact.


June 05, 2015

Hiring Subsystem Maintainers

The regular LWN kernel development stats have been posted here for version 4.1 (if you really don’t have a subscription, email me for a free link).  In this, Jon Corbet notes:

over 60% of the changes going into this kernel passed through the hands of developers working for just five companies. This concentration reflects a simple fact: while many companies are willing to support developers working on specific tasks, the number of companies supporting subsystem maintainers is far smaller. Subsystem maintainership is also, increasingly, not a job for volunteer developers.

As most folks reading this would know, I lead the mainline Linux Kernel team at Oracle.  We do have several people on the team who work in leadership roles in the kernel community (myself included), and what I’d like to make clear is that we are actively looking to support more such folk.

If you’re a subsystem maintainer (or acting in a comparable leadership role), please always feel free to contact me directly via email to discuss employment possibilities.  You can also contact Oracle kernel folk who may be presenting or attending Linux conferences.

More coding club

This is the second post about the coding club at my kid's school. I was away for four weeks travelling for work and then getting sick, so I am still getting back up to speed with what the kids have been up to while I've been away. This post is an attempt to gather some resources that I hope will be useful during the session today -- it remains to be seen how this maps to what the kids actually did while I was away.



First off, the adults have decided to give Python for Kids a go as a teaching resource. The biggest catch with this book is that it's kind of expensive -- at AUD $35 a copy, we can't just issue a copy to every kid in the room. That said, perhaps the kids don't each need a copy, as long as the adults are just using it as a guide for what things to cover.



It appears that while I was away chapters 1 through 4 were covered. Chapter 1 is about installing Python, and chapters 2 and 3 introduce language constructs: things like what a variable is, mathematical operators, strings, tuples and lists. So, that's all important but kind of dull. On the other hand, chapter 4 covers turtle graphics, which I didn't even realize Python had a module for.
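
For anyone playing along at home, here's a minimal sketch of the sort of thing the turtle module makes possible (my own example, not one from the book):

import turtle

# Draw a square, the classic first logo-style exercise
t = turtle.Turtle()
for _ in range(4):
    t.forward(100)   # move forward 100 pixels
    t.right(90)      # turn 90 degrees clockwise
turtle.done()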



I have fond memories of doing logo graphics as a kid at school. Back in my day we'd sometimes even use actual robots to do some of the graphics, although most of it was simulated on Apple II machines of various forms. I think it's important to let the kids of today know that these strange exercises they're doing used to relate to physical hardware that schools actually owned.


So, I think that's what we'll keep going with this week -- I'll let the kids explain where they got to with turtle graphics and then we'll see how far we can take that without it becoming a chore.



Tags for this post: coding_club kids coding python turtle graphics logo

Related posts: Coding club day one: a simple number guessing game in python; JPEG 2 MPEG howto; Graphics from the command line; Implementing SCP with paramiko; Packet capture in python; I'm glad I've turned on comments here




June 04, 2015

Geocaching at the border

Today's lunch walk was around Tuggeranong Pines again. At the back of the pine forest is the original train line from the 1880s which went down to Cooma. I walked as far as the old Tuggeranong siding before turning back. It's interesting, as there is evidence that track work has been done here in the last ten years or so, even though the line hasn't been used since 1989.






Interactive map for this route.



Tags for this post: blog pictures 20150604-geocaching photo canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches




June 03, 2015

What Transactions Get Crowded Out If Blocks Fill?

What happens if bitcoin blocks fill?  Miners choose transactions with the highest fees, so low fee transactions get left behind.  Let’s look at what makes up blocks today, to try to figure out which transactions will get “crowded out” at various thresholds.

Some assumptions need to be made here: we can’t automatically tell the difference between me taking a $1000 output and paying you 1c, and me paying you $999.99 and sending myself the 1c change.  So my first attempt was very conservative: only look at transactions with two or more outputs which were under the given thresholds (I used a nice round $200 / BTC price throughout, for simplicity).

(Note: I used bitcoin-iterate to pull out transaction data, and rebuild blocks without certain transactions; you can reproduce the csv files in the blocksize-stats directory if you want).
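
In rough Python terms, the conservative filter boils down to something like this (the list-of-outputs representation is mine, not bitcoin-iterate's actual output format):

PRICE = 200                  # USD per BTC, as assumed throughout
SATOSHI_PER_BTC = 100000000

def usd_to_satoshi(usd):
    return int(usd * SATOSHI_PER_BTC / PRICE)

def is_small_payment(output_values, threshold):
    # output_values: all of a transaction's output amounts, in satoshis.
    # Conservative rule: two or more outputs under the threshold, so both
    # the payment and the change are small.
    return len([v for v in output_values if v < threshold]) >= 2

# $1 at $200/BTC is the 500000 satoshi threshold used below
assert usd_to_satoshi(1) == 500000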

Paying More Than 1 Person Under $1 (< 500000 Satoshi)

Here’s the result (against the current blocksize):

Sending 2 Or More Sub-$1 Outputs

Let’s zoom in to the interesting part first, since there’s very little difference before block 220,000 (February 2013). You can see that only about 18% of transactions are sending less than $1 and getting less than $1 in change:

Since March 2013…

Paying Anyone Under 1c, 10c, $1

The above graph doesn’t capture the case where I have $100 and send you 1c. If we eliminate any transaction which has any output less than various thresholds, we’ll catch that. The downside is that we also capture the “sending myself tiny change” case, but I’d expect that to be rarer:

Blocksizes Without Small Output Transactions

This eliminates far more transactions. We can see only 2.5% of the block size is taken by transactions with sub-1c outputs (the dark red line following the “current blocks” line), but the green line shows about 20% of the block used for 10c transactions. And about 45% of the block is transactions moving $1 or less.

Interpretation: Hard Landing Unlikely, But Microtransactions Lose

If the block size doesn’t increase (or doesn’t increase in time): we’ll see transactions get slower, and fees become the significant factor in whether your transaction gets processed quickly.  People will change behaviour: I’m not going to spend 20c to send you 50c!

Because block finding is highly variable and many miners are capping blocks at 750k, we already see backlogs at times; these bursts will happen with increasing frequency from now on. This will put pressure on SatoshiDice and similar services, who will be highly incentivized to use StrawPay or roll their own channel mechanism for off-blockchain microtransactions.

I’d like to know what timescale this happens on, but the graph shows that we grow (and occasionally shrink) in bursts.  A logarithmic graph prepared by Peter R of bitcointalk.org suggests that we hit 1M mid-2016 or so; expect fee pressure to bend that graph downwards soon.

The bad news is that even if fees hit (say) 25c and that prevents all the sub-$1 transactions, we only double our capacity, giving us perhaps another 18 months. (At that point miners are earning $1000 from transaction fees as well as $5000 (@ $200/BTC) from block reward, which is nice for them I guess.)
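
For the record, the arithmetic behind that parenthetical, assuming roughly 4000 transactions per 1MB block (about 250 bytes each):

txs_per_block = 1000000 // 250   # ~4000 transactions in a full 1MB block
fees = txs_per_block * 0.25      # at a 25c fee per transaction
reward = 25 * 200                # 25 BTC block reward at $200/BTC

print("fees per block:   $%d" % fees)    # $1000
print("reward per block: $%d" % reward)  # $5000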

My Best Guess: Larger Blocks Desirable Within 2 Years, Needed by 3

Personally I think 5c is a reasonable transaction fee, but I’d prefer not to see it until we have decentralized off-chain alternatives.  I’d be pretty uncomfortable with a 25c fee unless the Lightning Network was so ubiquitous that I only needed to pay it twice a year.  Higher than that would have me reaching for my credit card to charge my Lightning Network account :)

Disclaimer: I Work For BlockStream, on Lightning Networks

Lightning Networks are a marathon, not a sprint.  The development timeframes in my head are even vaguer than the guesses above.  I hope it’s part of the eventual answer, but it’s not the bandaid we’re looking for.  I wish it were different, but we’re going to need other things in the mean time.

I hope this provided useful facts, whatever your opinions.

Current Blocksize, by graphs.

I used bitcoin-iterate and gnumeric to render the current bitcoin blocksizes, and here are the results.

My First Graph: A Moment of Panic

This is block sizes up to yesterday; I’ve asked gnumeric to derive an exponential trend line from the data (in black; the red one is linear)

Woah! We hit 1M blocks in a month! PAAAANIC!

That trend line hits 1000000 at block 363845.5, which we’d expect in about 32 days time!  This is what is freaking out so many denizens of the Bitcoin Subreddit. I also just saw a similar inaccurate [correction: misleading] graph reshared by Mike Hearn on G+ :(

But Wait A Minute

That trend line says we’re on 800k blocks today, and we’re clearly not.  Let’s add a 6 hour moving average:

Oh, we’re only halfway there….

In fact, if we cluster into 36 blocks (i.e. 6 hours worth), we can see how misleading the terrible exponential fit is:

What! We’re already over 1M blocks?? Maths, you lied to me!
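
If you want to reproduce this, the smoothing is nothing fancy; something along these lines, with the block sizes as a plain Python list (however you extract them):

def moving_average(sizes, n=36):
    # Trailing average over n blocks; 36 blocks is roughly 6 hours
    # at one block per ten minutes.
    return [sum(sizes[i - n:i]) / float(n) for i in range(n, len(sizes) + 1)]

def cluster_means(sizes, n=36):
    # Non-overlapping n-block buckets, one point per bucket.
    return [sum(sizes[i:i + n]) / float(n)
            for i in range(0, len(sizes) - n + 1, n)]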

Clearer Graphs: 1 week Moving Average

Actual Weekly Running Average Blocksize

So, not time to panic just yet, though we’re clearly growing, and in unpredictable bursts.

June 02, 2015

Melrose trig

I went for a short geocaching walk at lunch today. Three geocaches in 45 minutes, so not too shabby. One of those caches was at the Melrose trig point, so I bagged that too. There is some confusion here, as John Evans and I thought that Melrose was on private land. However, there is no signage to that effect in the area and the geocache owner asserts this is public land. ACTMAPi says the area is Tuggeranong Rural Block 35, but isn't clear on whether the lease holder exists. Color me confused, and possibly an accidental trespasser.






Interactive map for this route.



Tags for this post: blog pictures 20150602-melrose photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger




June 01, 2015

In A Sunburned Country







ISBN: 0965000281

LibraryThing

This is the first Bill Bryson book I've read, and I have to say I enjoyed it. Bill is hilarious and infuriating at the same time, which surprisingly to me makes for a very entertaining combination. I'm sure he's not telling the full story in this book -- it's just not possible for someone so ill-prepared to not just die in the outback somewhere. Take his visit to Canberra for example -- he drives down from Sydney, hits the first hotel he finds and then spends three days there. No wonder he's bored. Eventually he bothers to drive for another five minutes and finds there is more to the city than one hotel. On the other hand, he maligns my home town in such a hilarious manner I just can't be angry at him.



I loved this book, highly recommended.



Tags for this post: book bill_bryson australia travel

Related posts: In Sydney!; American visas for all!; Melbourne; Sydney Australia in Google Maps; Top Gear Australia; Linux presence at Education Expo

The linux.conf.au 2016 Call For Proposals is open!

The OpenStack community has been well represented at linux.conf.au over the last few years, which I think is reflective both of the growing level of interest in OpenStack in the general Linux community and of the fact that OpenStack is one of the largest Python projects around these days. linux.conf.au is one of the region's biggest Open Source conferences, and has a solid reputation for deep technical content.



It's time to make it all happen again, with the linux.conf.au 2016 Call For Proposals opening today! I'm especially keen to encourage talk proposals which are somehow more than introductions to various components of OpenStack. It's time to talk in detail about how people's networking deployments work, what container solutions we're using, and how we're deploying OpenStack in the real world to do seriously cool stuff.



The conference is in the first week of February in Geelong, Australia. I'd be happy to chat with anyone who has questions about the CFP process.



Tags for this post: openstack conference linux.conf.au lca2016

Related posts: LCA 2007 Video: CFQ IO; LCA 2006: CFP closes today; I just noticed...; LCA2006 -- CFP opens soon!; I just noticed...; Updated: linux.conf.au 2007 MythTV tutorial homework




Block size: rate of internet speed growth since 2008?

I’ve been trying not to follow the Great Blocksize Debate raging on reddit.  However, the lack of any concrete numbers has kind of irked me, so let me add one for now.

If we assume bandwidth is the main problem with running nodes, let’s look at average connection growth rates since 2008.  Google led me to NetMetrics (who seem to charge), and Akamai’s State Of The Internet (who don’t).  So I used the latter, of course:

Akamai’s Average Connection Speed Chart Q4/07 to Q4/14

I tried to pick a range of countries, and here are the results:

Country       % Growth Over 7 Years   Per Annum
Australia     348                     19.5%
Brazil        349                     19.5%
China         481                     25.2%
Philippines   258                     14.5%
UK            333                     18.8%
US            304                     17.2%

The countries which had the best bandwidth grew about 17% a year, so I think that’s the best model for future growth patterns (China is now where the US was 7 years ago, for example).

If bandwidth is the main centralization concern, you’ll want block size growth below 15% a year. That implies we could jump the cap to 3MB next year, and grow it 15% thereafter. Or if you’re less conservative, 3.5MB next year, and 17% thereafter.
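
For what it's worth, here's the arithmetic behind the per-annum column and the cap schedule above, reading the growth column as the current speed expressed as a percentage of the 2008 speed:

factor = 3.48                     # Australia: 348% of its 2008 speed
per_annum = factor ** (1.0 / 7) - 1
print("%.1f%% per annum" % (per_annum * 100))   # ~19.5%

cap = 3.0                         # proposed starting cap, MB
for year in range(2016, 2021):
    print("%d: %.1f MB" % (year, cap))
    cap *= 1.15                   # conservative 15% yearly growth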

May 31, 2015

Twitter posts: 2015-05-25 to 2015-05-31

Dates confirmed for #lca2016

We're thrilled to announce that dates for linux.conf.au 2016 - LCA By the Bay have been confirmed as 1st-5th February 2016 at the wonderful Deakin University Waterfront campus in vibrant Geelong.

May 28, 2015

Square Rock and Mount Franklin

I'm not really sure why it took me so long to write this set of walks up -- I think I just got lost in preparations for the most recent OpenStack summit and simply forgot. That said, here they are...



Tony, Steven and I mounted an expedition to Mount Franklin, which is one of the trigs I hadn't been to yet. It's right on the ACT border with NSW, and despite not being a super long walk it's verging on inaccessible in winter (think several feet of snow). So, we decided to get it done while we could.






Interactive map for this route.



We also tacked on a trip to Square Rock based on the strong recommendation of a good friend. Square Rock has amazing views, highly recommended.






Interactive map for this route.



Tags for this post: blog pictures 20150426-square_rock_franklin photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger




May 27, 2015

Las Vegas Style Food Recipes

We interrupt our regular blog posts with a word from our sponsor... LOL



Seriously though, times are tough in Las Vegas, so instead of resorting to standard marketing techniques they've been trying to convince food bloggers (including me) to do their work for them... Just look at the condition of the place! Why would I ever want to go there?



http://www.vegas.com/

http://lasvegasrestaurants.com/best-restaurants/

http://www.lasvegas.com/restaurants/



Anyhow, recently someone from Vegas.com (a company that specialises in promoting hotels, restaurants, locations, and other events in Las Vegas) contacted me and asked me to do a take on some of the dishes available in Las Vegas (A copy of the menu is included, https://sites.google.com/site/dtbnguyen/Vegas_EatDrink_v03.pdf)...



More precisely, dishes from the Aria, Caesars Palace, Bellagio, and The Palazzo. I'm going to take a stab at a few of these dishes in a way that is inexpensive, quick, and hopefully tasty.



http://www.aria.com/

https://www.caesars.com/caesars-palace

http://www.bellagio.com/

http://www.palazzo.com/



The point of these is also to make them more accessible by substituting ingredients (a lot of the original ingredients quite simply aren't easily available in other parts of the world, and to be honest it's hard to be impressed by something you know little about).



The following three desserts are designed to be eaten like sundaes.



- ice-cream (vanilla, coffee, or rum-raisin will work best for this)

- crushed peanuts or crushed roasted almonds

- chopped up chocolate bar (Snickers, Picnic, or anything which contains nougat/nuts in its core. Tip - chop it up in a way that the temperature of the ice cream is unlikely to cause it to freeze hard. The texture/perception of the dish can be changed quite a lot by this) (optional)

- strawberries (or another berry) which have been sliced and left in the fridge in an ice/sugar syrup mix (half an hour is enough. We're just trying to get rid of the extreme tartness of many fresh berries)

- a drizzle of caramel/chocolate/coffee sauce

- cocoa/coffee powder (optional)

Scoop ice cream into bowl or cup. Drizzle other ingredients on top.



- ice-cream (vanilla, coffee, or rum-raisin will work best for this)

- raisins which have been drenched in rum overnight

- crushed peanuts or crushed roasted almonds

- drizzle of caramel/chocolate/coffee sauce

- cocoa/coffee powder (optional)

Scoop ice cream into bowl or cup. Drizzle other ingredients on top.



http://www.lifestylefood.com.au/recipes/15383/chocolate-marzipan-cherries

http://www.daringgourmet.com/2014/06/23/how-to-make-marzipan-almond-paste/

http://en.wikipedia.org/wiki/Marzipan

http://www.taste.com.au/recipes/18597/basic+truffles



- ice-cream (vanilla will work best for this)

- some form of cake (can be made or purchased. My preference is towards something darker such as chocolate or coffee flavour. If baking it yourself, slightly overcook it, as it will be mixed with the ice cream; this stops it from going soggy too quickly and adds a bit of texture to the dish).

- some form of alcohol/liquor (we're targeting aroma here. Use whatever you have, but I think rum, cognac, or something else suitably sweet would do well)

Scoop ice cream into bowl or cup. Break up the cake and drop it around in chunks around the ice cream. Drizzle alcohol/liquor around and over the top.



The following is a dessert which is meant to be eaten/drunk like an 'affogato'.

http://en.wikipedia.org/wiki/Affogato

- ice-cream (vanilla will work best for this)

- crushed macaroon biscuits (can be made or purchased. My preference is towards chocolate or coffee flavours. Texture is to be slightly crusty with a chewy interior. Don't bother making the cream if you don't want to)

https://www.howtocookthat.net/public_html/easy-macaron-macaroon-recipe/

http://amazingalmonds.com.au/2012/11/01/almond-macaroons/

- a side drink of coffee, cappuccino, latte, or Milo (chocolate malt) (I'd probably go for a powdered cappuccino/latte drink which only requires boiling water, to get this done quick and tasty)

- cocoa/coffee powder (optional)

Scoop ice cream into bowl or cup. Drizzle other ingredients on top.



The following is obviously my take on a deluxe steak sandwich.

- sandwich bread slices

- steak

- onions

- lettuce

- tomatoes

- bacon

- cheese

- egg

- tomato sauce

- balsamic vinegar (optional)

- mayonnaise (optional)

- mustard (optional) 

Toast or grill the sandwich slices. Add cheese as the first layer. Fry an egg and add this as the next layer. Fry some bacon and add this as the next layer. Fry off steak slices with some onion, garlic, salt, sugar, pepper, and maybe a tiny drop of balsamic vinegar (I would probably caramelise this slightly in a pan to remove some of the tartness before adding it to the sandwich, or not add it at all) and add this as the next layer. Slice the vegetables and add them as the next layer. Put the tomato sauce (mayonnaise and/or mustard are optional, depending on your taste) on the top layer, as this stops it from drenching the sandwich before you've finished preparing it. Season to taste.



The following is more savoury and is obviously meant to be a main meal.

- roasted chicken (can be made or purchased)

- pasta in a white sauce (the 'Bacon and Mushroom Carbonara with Pasta' recipe from, http://dtbnguyen.blogspot.com.au/2015/02/simple-pasta-recipes.html would work well here)

- asparagus

- cheese

- potatoes (use the recipes at, http://dtbnguyen.blogspot.com.au/2015/03/fried-fish-with-butter-fried-potatoes.html or http://dtbnguyen.blogspot.com.au/2012/02/butter-fried-potatoes-with-bacon-bits.html and remove relevant ingredients (bacon, cream, and cheese for me) to suit the dish)

Cook the pasta. Fry the asparagus with garlic, butter and oil, or else blanch it. Put it in a microwave for a few seconds with a slice of cheese on top to give it a bit of extra flavour (optional). Serve with roasted chicken and fried potatoes. Season the dish to taste. You may need to serve this dish with a salad, as it can be very rich or fatty depending on your interpretation.



http://www.taste.com.au/recipes/8071/basic+roast+chicken

http://www.simplyrecipes.com/recipes/kellers_roast_chicken/

http://www.simplyrecipes.com/recipes/classic_baked_chicken/



http://en.wikipedia.org/wiki/Arroz_con_pollo

http://en.wikipedia.org/wiki/Refried_beans

http://www.foodterms.com/encyclopedia/broccoli-rabe/index.html



http://en.wikipedia.org/wiki/Vin_jaune

http://en.wikipedia.org/wiki/Fleur-de-lis

http://en.wikipedia.org/wiki/Tomatillo

brendanscott

YouTube has done wonders for lots of people, but frankly, my reaction to the vast majority of videos is that they are largely or wholly content-free.  The cases where a visual demonstration actually assists are exceedingly rare (some digital illustration videos for example, but even those don’t necessarily show you what you want). Watching videos on ostensibly informative topics is an exercise in entertainment and almost always a waste of my time.  If you have a transcript, at least you can jump around to see if it’s got the info you’re looking for. With videos, even if you jump around, you’re still pulling down info at the rate people speak (i.e. slowly). Next time you watch a documentary, count the average number of words spoken in a minute. It’s ridiculously low.

It’s something of a farce that for my CLE requirements I can listen to some 5-year-out “senior associate” um and ah through some talk at a firm, or do some facile online tutorial (are there other kinds?), and get an hour’s credit, but if I read an entire book by an expert in the area or research the cases myself I get exactly 0 points.



May 26, 2015

So Bill is going to bring a Bill

So Bill Shorten has announced that he and the Deputy Leader of the Opposition, Tanya Plibersek, will be putting a bill to the House to allow Same Sex Marriage.

Honestly I'm torn.

The cynical part of me thinks the whole thing is an exercise in futility. Unless the Coalition allows a free vote amongst its members, the bill is doomed to die in the House of Reps. If I was going to be really cynical I'd think this was an attempt to take the wind out of the sails of the Greens, who were proposing a similar bill to start in the Senate.

On the other hand, this is probably the first sign I've seen of Shorten actually stepping forward on an issue that hasn't been focus grouped to death. SSM doesn't have universal support within the Labor party (hi Joe deBruyn you reactionary old fart), and by putting his name directly on the bill Shorten is showing some leadership at last.

If you support Same Sex marriage, or as it's known in other parts of the world, Marriage, I'd urge you to let your local MP know how you feel. Do it politely, do it succinctly but make sure you do it. 

If you want to find out if your local MP or Senator supports or opposes SSM this site is a great resource


May 25, 2015

MrBayes HPC Installation

MrBayes is a program for Bayesian inference and model choice across a wide range of phylogenetic and evolutionary models.

Download and extract. Note that the developers have produced a tarbomb, so you will need to create a separate directory before extracting. This has been raised as a bug.

Note that more recent versions of MrBayes make much better use of autoconfiguration tools.



cd /usr/local/src/MRBAYES

mkdir mrbayes-3.2.5

cd mrbayes-3.2.5


May 24, 2015

Twitter posts: 2015-05-18 to 2015-05-24

How I Would Solve Plugin Dependencies

lol, I wouldn’t [1].


[1] If I absolutely had to, I wouldn’t do it the same as Ryan.

WordPress isn’t (and will never be) Linux

ZYpp is the dependency solver used by openSUSE (Composer uses a PHP port of it); it was born of the need to solve complex dependency trees. The good news is, WordPress doesn’t have the same problem, and we shouldn’t create that problem for ourselves.

One of the most common-yet-complex issues is determining how to handle different version requirements by different packages. If My Amazing Plugin requires WP-API 1.9, but Your Wonderful Plugin requires WP-API 2.0, we have a problem. There are two ways to solve it – Windows solves it by installing multiple versions of the dependency, and loading the correct version for each package. This isn’t a particularly viable option in PHP, because trying to load two different versions of the same code in the same process is not my idea of a fun time.

The second option, which ZYpp solves, is to try and find a mutually compatible version of the dependency that each plugin can use. The biggest problem with this method is that it can’t always find a solution. If there’s no compatible way of installing the libraries, it has to throw the decision back to the user. This isn’t a viable option, as 99.999% of users wouldn’t be able to tell the difference between WP-API versions 1.9 and 2.0, and nor should they.

But there’s a third option.

Technical Debt as a Service

Code libraries are, by their nature, developer facing. A user never really needs to know that they exist, in the same way that they don’t need to know about WP_Query. In WordPress Core, we strive for (and often achieve) 100% backwards compatibility between major versions. If we were going to implement plugin dependencies, I would make it a requirement that the code libraries shoulder the same burden: don’t make a user choose between upgrades, just always keep the code backwards compatible. If you need to make architectural changes, include a backwards compatible shim to keep things working nicely.

This intentionally moves the burden of upgrading to the developer, rather than the end user.

What Version?

If we’re going to require library developers to maintain backwards compatibility, we can do away with version requirements (and thus, removing the need for a dependency solver). If a plugin needs a library, it can just specify the library slug.

Better Living Through Auto Updates

If we were to implement plugin dependencies, I think it’d be a great place to introduce auto updates being enabled by default. There’s no existing architecture for us to take into account, so we can have this use the current WordPress best practices. On top of that, it’s a step towards enabling auto updates for all Core releases, and it encourages developers to create backwards compatible libraries, because their library will almost certainly be updated before a plugin using it is.

Let’s Wrap This Up

I’m still not convinced plugin dependencies is a good thing to put in Core – it introduces significant complexities to plugin updates, as well as adding another dependency on WordPress.org to Core. But it’s definitely a conversation worth having.

May 23, 2015

Usual Debian Server Setup

I manage a few servers for myself, friends and family as well as for the Libravatar project. Here is how I customize recent releases of Debian on those servers.

Hardware tests

apt-get install memtest86+ smartmontools e2fsprogs

Prior to spending any time configuring a new physical server, I like to ensure that the hardware is fine.

To check memory, I boot into memtest86+ from the grub menu and let it run overnight.

Then I check the hard drives using:

smartctl -t long /dev/sdX
badblocks -swo badblocks.out /dev/sdX

Configuration

apt-get install etckeeper git sudo vim

To keep track of the configuration changes I make in /etc/, I use etckeeper to keep that directory in a git repository and make the following changes to the default /etc/etckeeper/etckeeper.conf:

  • turn off daily auto-commits
  • turn off auto-commits before package installs
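
Concretely, that amounts to something like the following in /etc/etckeeper/etckeeper.conf (option names from memory, so double-check against the shipped file):

AVOID_DAILY_AUTOCOMMITS=1
AVOID_COMMIT_BEFORE_INSTALL=1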

To get more control over the various packages I install, I change the default debconf level to medium:

dpkg-reconfigure debconf

Since I use vim for all of my configuration file editing, I make it the default editor:

update-alternatives --config editor

ssh

apt-get install openssh-server mosh fail2ban

Since most of my servers are set to UTC time, I like to use my local timezone when sshing into them. Looking at file timestamps is much less confusing that way.

I also ensure that the locale I use is available on the server by adding it to the list of generated locales:

dpkg-reconfigure locales

Other than that, I harden the ssh configuration and end up with the following settings in /etc/ssh/sshd_config (jessie):

HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key

KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com

UsePrivilegeSeparation sandbox

AuthenticationMethods publickey
PasswordAuthentication no
PermitRootLogin no

AcceptEnv LANG LC_* TZ
LogLevel VERBOSE
AllowGroups sshuser

or the following for wheezy servers:

HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256

On those servers where I need duplicity/paramiko to work, I also add the following:

KexAlgorithms ...,diffie-hellman-group-exchange-sha1
MACs ...,hmac-sha1

Then I remove the "Accepted" filter in /etc/logcheck/ignore.d.server/ssh (first line) to get a notification whenever anybody successfully logs into my server.

I also create a new group and add the users that need ssh access to it:

addgroup sshuser
adduser francois sshuser

and add a timeout for root sessions by putting this in /root/.bash_profile:

TMOUT=600

Security checks

apt-get install logcheck logcheck-database fcheck tiger debsums corekeeper
apt-get remove john john-data rpcbind tripwire

Logcheck is the main tool I use to keep an eye on log files, which is why I add a few additional log files to the default list in /etc/logcheck/logcheck.logfiles:

/var/log/apache2/error.log
/var/log/mail.err
/var/log/mail.warn
/var/log/mail.info
/var/log/fail2ban.log

while ensuring that the apache logfiles are readable by logcheck:

chmod a+rx /var/log/apache2
chmod a+r /var/log/apache2/*

and fixing the log rotation configuration by adding the following to /etc/logrotate.d/apache2:

create 644 root adm

I also modify the main logcheck configuration file (/etc/logcheck/logcheck.conf):

INTRO=0
FQDN=0

Other than that, I enable daily checks in /etc/default/debsums and customize a few tiger settings in /etc/tiger/tigerrc:

Tiger_Check_RUNPROC=Y
Tiger_Check_DELETED=Y
Tiger_Check_APACHE=Y
Tiger_FSScan_WDIR=Y
Tiger_SSH_Protocol='2'
Tiger_Passwd_Hashes='sha512'
Tiger_Running_Procs='rsyslogd cron atd /usr/sbin/apache2 postgres'
Tiger_Listening_ValidProcs='sshd|mosh-server|ntpd'

General hardening

apt-get install harden-clients harden-environment harden-servers apparmor apparmor-profiles apparmor-profiles-extra

While the harden packages are configuration-free, AppArmor must be manually enabled:

perl -pi -e 's,GRUB_CMDLINE_LINUX="(.*)"$,GRUB_CMDLINE_LINUX="$1 apparmor=1 security=apparmor",' /etc/default/grub
update-grub

Entropy and timekeeping

apt-get install haveged rng-tools ntp

To keep the system clock accurate and increase the amount of entropy available to the server, I install the above packages and add the tpm_rng module to /etc/modules.

Preventing mistakes

apt-get install molly-guard safe-rm sl

The above packages are all about catching mistakes (such as accidental deletions). However, in order to extend the molly-guard protection to mosh sessions, one needs to manually apply a patch.

Package updates

apt-get install apticron unattended-upgrades deborphan debfoster apt-listchanges update-notifier-common aptitude popularity-contest

These tools help me keep packages up to date and remove unnecessary or obsolete packages from servers. On Rackspace servers, a small configuration change is needed to automatically update the monitoring tools.

In addition to this, I use the update-notifier-common package along with the following cronjob in /etc/cron.daily/reboot-required:

#!/bin/sh
cat /var/run/reboot-required 2> /dev/null || true

to send me a notification whenever a kernel update requires a reboot to take effect.

Handy utilities

apt-get install renameutils atool iotop sysstat lsof mtr-tiny

Most of these tools are configuration-free, except for sysstat, which requires enabling data collection in /etc/default/sysstat to be useful.

Apache configuration

apt-get install apache2-mpm-event

While configuring apache is often specific to each server and the services that will be running on it, there are a few common changes I make.

I enable these in /etc/apache2/conf.d/security:

<Directory />
    AllowOverride None
    Order Deny,Allow
    Deny from all
</Directory>
ServerTokens Prod
ServerSignature Off

and remove cgi-bin directives from /etc/apache2/sites-enabled/000-default.

I also create a new /etc/apache2/conf.d/servername which contains:

ServerName machine_hostname

Mail

apt-get install postfix

Configuring mail properly is tricky but the following has worked for me.

In /etc/hostname, put the bare hostname (no domain), but in /etc/mailname put the fully qualified hostname.

Change the following in /etc/postfix/main.cf:

inet_interfaces = loopback-only
myhostname = (fully qualified hostname)
smtp_tls_security_level = may
smtp_tls_protocols = !SSLv2, !SSLv3

Set the following aliases in /etc/aliases:

  • set francois as the destination of root emails
  • set an external email address for francois
  • set root as the destination for www-data emails

before running newaliases to update the aliases database.

Create a new cronjob (/etc/cron.hourly/checkmail):

#!/bin/sh
ls /var/mail

to ensure that email doesn't accumulate unmonitored on this box.

Finally, set reverse DNS for the server's IPv4 and IPv6 addresses and then test the whole setup using mail root.

Network tuning

To reduce the server's contribution to bufferbloat I change the default kernel queueing discipline (jessie or later) by putting the following in /etc/sysctl.conf:

net.core.default_qdisc=fq_codel

May 22, 2015

General Atomic and Molecular Electronic Structure System HPC Installation

GAMESS (General Atomic and Molecular Electronic Structure System (GAMESS)) is a general ab initio quantum chemistry package. You will need to agree to the license prior to download, which will provide a link to gamess-current.tar.gz

Download and extract, load the environment variables for atlas and gcc.



cd /usr/local/src/

tar xvf gamess-current.tar.gz

cd gamess

module load atlas/3.10.2

module load gcc/4.9.1


Craige McWhirter: How To Resolve a Volume is Busy Error on Cinder With Ceph Block Storage

When deleting a volume in OpenStack you may sometimes get an error message stating that Cinder was unable to delete the volume because the volume was busy:

2015-05-21 23:31:41.160 16911 ERROR cinder.volume.manager [req-6f77ef4d-bbff-4ff4-8a3e-4c6b264ac5ca \
04b7cb61dd3f4f2f8f80bbd9833addbd 5903e3bda1e840d492fe79fb840acacc - - -] Cannot delete volume \
f8867d43-bc82-404e-bcf5-6d345c32269e: volume is busy

There are a number of reasons why a volume may be reported by Ceph as busy; however, the most common reason in my experience has been that a Cinder client connection has not yet been closed, possibly because a client crashed.

If you look at the volume in Cinder, the status is usually "available" and the record looks in order. When you check Ceph, you'll see that the volume still exists there too.

% cinder show f8867d43-bc82-404e-bcf5-6d345c32269e | grep status
|    status    |    available    |

 # rbd -p my.ceph.cinder.pool ls | grep f8867d43-bc82-404e-bcf5-6d345c32269e
 volume-f8867d43-bc82-404e-bcf5-6d345c32269e

Perhaps there's a lock on this volume. Let's check for locks and then remove them if we find one:

# rbd lock list my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e

If there are any locks on the volume, you can use lock remove using the id and locker from the previous command to delete the lock:

# rbd lock remove <image-name> <id> <locker>
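
For example, the list output and the matching remove command look roughly like this (the ID and locker values here are invented; substitute the ones your own lock list prints):

# rbd lock list my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e
There is 1 exclusive lock on this image.
Locker         ID                    Address
client.426411  auto 139643345791488  10.0.0.12:0/1020423

# rbd lock remove my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e "auto 139643345791488" client.426411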

What if there are no locks on the volume but you're still unable to delete it from either Cinder or Ceph? Let's check for snapshots:

# rbd -p my.ceph.cinder.pool snap ls volume-f8867d43-bc82-404e-bcf5-6d345c32269e
SNAPID NAME                                              SIZE
  2072 snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3 25600 MB

When you attempt to delete that snapshot you will get the following:

# rbd snap rm my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e@snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3
rbd: snapshot 'snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3' is protected from removal.
2015-05-22 01:21:52.504966 7f864f71c880 -1 librbd: removing snapshot from header failed: (16) Device or resource busy

This reveals that it was the snapshot that was busy and locked all along.

Now we need to unprotect the snapshot:

# rbd snap unprotect my.ceph.cinder.pool/volume-f8867d43-bc82-404e-bcf5-6d345c32269e@snapshot-33c4309a-d5f7-4ae1-946d-66ba4f5cdce3

You should now be able to delete the volume and its snapshot via Cinder.

Enjoy :-)

May 21, 2015

JAGS (Just Another Gibbs Sampler) Installation

JAGS is Just Another Gibbs Sampler. It is a program for analysis of Bayesian hierarchical models using Markov Chain Monte Carlo (MCMC) simulation not wholly unlike BUGS.



cd /usr/local/src/JAGS

wget http://downloads.sourceforge.net/project/mcmc-jags/JAGS/3.x/Source/JAGS-...

tar xvf JAGS-3.4.0.tar.gz

mv JAGS-3.4.0 jags-3.4.0

cd jags-3.4.0

../config

make

make check

make install

make installcheck

The config script takes the following form



#!/bin/bash

install=$(basename $(pwd) | sed 's%-%/%')
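
Going by the identical config scripts in the PROJ.4 and GDAL posts below, the script presumably finishes by invoking configure with that prefix:

./configure --prefix=/usr/local/$install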


MuTect Installation

MuTect is a method developed at the Broad Institute for the reliable and accurate identification of somatic point mutations in next generation sequencing data of cancer genomes.

For complete details, please see the publication in Nature Biotechnology:

Cibulskis, K. et al. Sensitive detection of somatic point mutations in impure and heterogeneous cancer samples. Nature Biotechnology (2013). doi:10.1038/nbt.2514

Download after login.


PROJ.4 Cartographic Projections library installation

The PROJ.4 Cartographic Projections library was originally written by Gerald Evenden then of the USGS.

Download, extract, install.



cd /usr/local/src/PROJ

wget http://download.osgeo.org/proj/proj-4.9.1.tar.gz

tar xvf proj-4.9.1.tar.gz

cd proj-4.9.1

../config

make

make check

make install

The config file is a quick executable.



#!/bin/bash

./configure --prefix=/usr/local/$(basename $(pwd) | sed 's#-#/#')


Geospatial Data Abstraction Library Installation

GDAL (Geospatial Data Abstraction Library) is a translator library for raster and vector geospatial data formats.

Download, extract, install.



cd /usr/local/src/GDAL

wget http://download.osgeo.org/gdal/1.11.2/gdal-1.11.2.tar.gz

tar xvf gdal-1.11.2.tar.gz

cd gdal-1.11.2

../config

make

make install

The config file is a quick executable.



#!/bin/bash

./configure --prefix=/usr/local/$(basename $(pwd) | sed 's#-#/#')


Rosetta Proteins with SCons (and jam and cream)

Rosetta is a library based object-oriented software suite which provides a robust system for predicting and designing protein structures, protein folding mechanisms, and protein-protein interactions.

You'll need a license

Download, extract, load scons, and compile.



cd /usr/local/src/ROSETTA

tar xvf rosetta_src_2015.19.57819_bundle.tgz

cd rosetta_src_2015.19.57819_bundle/main/src

module load scons

scons


SCons with Modules

SCons is a software construction tool (build tool, or make tool) implemented in Python, that uses Python scripts as "configuration files" for software builds.



cd /usr/local/src/SCONS

wget http://prdownloads.sourceforge.net/scons/scons-2.3.4.tar.gz

tar xvf scons-2.3.4.tar.gz

cd scons-2.3.4

python setup.py install --prefix=/usr/local/scons/2.3.4

Change to the appropriate modules directory, check for .desc, .version and .base, and create a symlink to .base.



cd /usr/local/Modules/modulefiles/scons

ln -s .base 2.3.4


Freesurfer cluster installation

Freesurfer is a set of tools for analysis and visualization of structural and functional brain imaging data.

Check system requirements and download. Note that registration and a license key are required for functionality, but not for installation.

Create a source directory, change to it, download, extract, discover that everything is bundled, create the application directory and move everything across.


May 20, 2015

Movement at the Angry Beanie station

Good news everybody!

This week I've started pulling everything together to bring both For Science! and Purser Explores The World back to the internet airwaves :)

I won't reveal what the return episode of Purser Explores The World is going to be about, but suffice to say it's going to continue the same explorations and interview style that previous episodes had.

For Science! of course is going to be the return of Mel, Mags and I doing our thing about science news and getting our rant on (well, Mags and Mel more than me, but anyway). I'm also going to be looking at either expanding the show to include a new segment, or creating a smaller podcast talking to researchers around the country -- not more than say 15 or 20 minutes long, in which we find out a bit more about the work the researcher is doing, how they got started in science and so on.

I have some other thoughts about Angry Beanie and its direction, but they are for another blog post I think.


APM:Plane 3.3.0 released

APM:Plane 3.3.0 released



The ardupilot development team is proud to announce the release of version 3.3.0 of APM:Plane. This is a major release with a lot of changes. Please read the release notes carefully!



The last stable release was 3 months ago, and since that time we have applied over 1200 changes to the code. It has been a period of very rapid development for ArduPilot. Explaining all of the changes that have been made would take far too long, so I've chosen some key changes to explain in detail, and listed the most important secondary changes in a short form. Please ask for details if there is a change you see listed that you want some more information on.



Arming Changes



This is the first release of APM:Plane where ARMING_CHECK and ARMING_REQUIRE both default to enabled. That means when you upgrade if you didn't previously have arming enabled you will need to learn about arming your plane.



Please see this page for more information on arming:



http://plane.ardupilot.com/wiki/arming-your-plane/



I know many users will be tempted to disable the arming checks, but please don't do that without careful thought. The arming checks are an important part of ensuring the aircraft is ready to fly, and a common cause of flight problems is to takeoff before ArduPilot is ready.



Re-do Accelerometer Calibration



Due to a change in the maximum accelerometer range on the Pixhawk all users must re-do their accelerometer calibration for this release. If you don't then your plane will fail to arm with a message saying that you have not calibrated the accelerometers.



Only 3D accel calibration



The old "1D" accelerometer calibration method has now been removed, so you must use the 3D accelerometer calibration method. The old method was removed because a significant number of users had poor flights due to scaling and offset errors on their accelerometers when they used the 1D method. My apologies for people with very large aircraft who find the 3D method difficult.



Note that you can do the accelerometer calibration with the autopilot outside the aircraft which can make things easier for large aircraft.



Auto-disarm



After an auto-landing the autopilot will now by default disarm after LAND_DISARMDELAY seconds (with a default of 20 seconds). This feature is to prevent the motor from spinning up unexpectedly on the ground after a landing.



HIL_MODE parameter



It is now possible to configure your autopilot for hardware in the loop simulation without loading a special firmware. Just set the parameter HIL_MODE to 1 and this will enable HIL for any autopilot. This is designed to make it easier for users to try HIL without having to find a HIL firmware.



SITL on Windows



The SITL software in the loop simulation system has been completely rewritten for this release. A major change is to make it possible to run SITL on native windows without needing a Linux virtual machine. There should be a release of MissionPlanner for Windows soon which will make it easy to launch a SITL instance.



The SITL changes also include new backends, including the CRRCSim flight simulator. This gives us a much wider range of aircraft we can use for SITL. See http://dev.ardupilot.com/wiki/simulation-2/ for more information.



Throttle control on takeoff



A number of users had problems with pitch control on auto-takeoff, and with the aircraft exceeding its target speed during takeoff. The auto-takeoff code has now been changed to use the normal TECS throttle control which should solve this problem.



Rudder only support



There is a new RUDDER_ONLY parameter for aircraft without ailerons, where roll is controlled by the rudder. Please see the documentation for more information on flying with a rudder only aircraft:



http://plane.ardupilot.com/wiki/ardupla ... udder_only



APM1/APM2 Support



We have managed to keep support for the APM1 and APM2 in this release, but in order to fit it in the limited flash space we had to disable some more features when building for those boards. For this release the AP_Mount code for controlling camera mounts is disabled on APM1/APM2.



At some point soon it will become impractical to keep supporting the APM1/APM2 for planes. Please consider moving to a 32 bit autopilot soon if you are still using an APM1 or APM2.



New INS code



There have been a lot of changes to the gyro and accelerometer handling for this release. The accelerometer range on the Pixhawk has been changed to 16g from 8g to prevent clipping on high vibration aircraft, and the sampling rate on the lsm303d has been increased to 1600Hz.



An important bug has also been fixed which caused aliasing in the sampling process from the accelerometers. That bug could cause attitude errors in high vibration environments.



Numerous Landing Changes



Once again there have been a lot of improvements to the automatic landing support. Perhaps most important is the introduction of a smooth transition from landing approach to the flare, which reduces the tendency to pitch up too much on flare.



There is also a new parameter TECS_LAND_PMAX which controls the maximum pitch during landing. This defaults to 10 degrees, but for many aircraft a smaller value may be appropriate. Reduce it to 5 degrees if you find you still get too much pitch up during the flare.



Other secondary changes in this release include:



  • a new SerialManager library which gives much more flexible management of serial port assignment
  • changed the default FS_LONG_TIMEOUT to 5 seconds
  • raised default IMAX for roll/pitch to 3000
  • lowered default L1 navigation period to 20
  • new BRD_SBUS_OUT parameter to enable SBUS output on Pixhawk
  • large improvements to the internals of PX4Firmware/PX4NuttX for better performance
  • auto-formatting of microSD cards if they can't be mounted on boot (PX4/Pixhawk only)
  • a new PWM based driver for the PulsedLight Lidar to avoid issues with the I2C interface
  • fixed throttle forcing to zero when disarmed
  • only reset mission on disarm if not in AUTO mode
  • much better handling of steep landings
  • added smooth transition in landing flare
  • added HIL_MODE parameter for HIL without a special firmware
  • lowered default FS_LONG_TIMEOUT to 5 seconds
  • mark old ELEVON_MIXING mode as deprecated
  • fixed 50Hz MAVLink support
  • support DO_SET_HOME MAVLink command
  • fixed larger values of TKOFF_THR_DELAY
  • allow PulsedLight Lidar to be disabled at a given height
  • fixed bungee launch (long throttle delay)
  • fixed a bug handling entering AUTO mode before we have GPS lock
  • added CLI_ENABLED parameter
  • removed 1D accel calibration
  • added EKF_STATUS_REPORT MAVLink message
  • added INITIAL_MODE parameter
  • added TRIM_RC_AT_START parameter
  • added auto-disarm after landing (LAND_DISARMDELAY)
  • added LOCAL_POSITION_NED MAVLink message
  • avoid triggering a fence breach in final stage of landing
  • rebuild glide slope if we are above it and climbing
  • use TECS to control throttle on takeoff
  • added RUDDER_ONLY parameter to better support planes with no ailerons
  • updated Piksi RTK GPS driver
  • improved support for GPS data injection (for Piksi RTK GPS)
  • added NAV_LOITER_TO_ALT mission item
  • fixed landing approach without an airspeed sensor
  • support RTL_AUTOLAND=2 for landing without coming to home first
  • disabled camera mount support on APM1/APM2
  • added support for SToRM32 and Alexmos camera gimbals
  • added support for Jaimes mavlink enabled gimbal
  • improved EKF default tuning for planes
  • updated support for NavIO and NavIO+ boards
  • updated support for VRBrain boards
  • fixes for realtime threads on Linux
  • added simulated sensor lag for baro and mag in SITL
  • made it possible to build SITL for native Windows
  • switched to faster accel sampling on Pixhawk
  • added coning corrections on Pixhawk
  • set ARMING_CHECK to 1 by default
  • disable NMEA and SiRF GPS on APM1/APM2
  • support MPU9255 IMU on Linux
  • updates to BBBMINI port for Linux
  • added TECS_LAND_PMAX parameter
  • switched to synthetic clock in SITL
  • support CRRCSim FDM backend in SITL
  • new general purpose replay parsing code
  • switched to 16g accel range in Pixhawk
  • added FENCE_AUTOENABLE=2 for disabling just fence floor
  • added POS dataflash log message
  • changed GUIDED behaviour to match copter
  • added support for a 4th MAVLink channel
  • support setting AHRS_TRIM in preflight calibration
  • fixed a PX4 mixer out of range error

Many thanks to everyone who contributed to this release. We have a lot of new developers contributing, which is really great to see! Also, apologies to those who have contributed a pull request but have not yet had it incorporated (or had feedback on the change). We will be trying to get to as many PRs as we can soon.



Best wishes to all APM:Plane users from the dev team, and happy flying!

May 18, 2015

Learning to Cook

I recently noticed a significant spike in traffic to this blog, and it's become pretty obvious why: the food recipes... If you're curious why they've been going up online, I'm a firm believer in the following philosophy.

Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it ;)

http://en.wikiquote.org/wiki/Linus_Torvalds


Seriously though, I have a tendency to lose things sometimes, and thought that posting them here would be my best chance of never losing them. Since they needed to be presented in public, it would also force me into writing more complete recipes rather than simply scrawling down whatever seemed pertinent at the time. (I never thought that I would be presented with opportunities through this. More on this later.)



In spite of all this, you're probably wondering why the recipes still lack a bit of detail, and how I ended up with this particular style of cooking.



As you can guess from my name, I have an Asian (Vietnamese, to be more precise) background. Growing up, I learnt that our cooking was often extremely tedious, required a lot of preparation, and tasted great but often didn't fill me. Ultimately, this meant that my family wanted me to spend less time helping in the kitchen and more time tending to my studies. To a certain extent, this family policy has served us well. Many of the kids are well educated and have done well professionally.



The problem is that if you've ever worked a standard week over any period of time, you ultimately realise that a lot of the time you don't want to spend heaps of time cooking, whether for yourself or for others (this style doesn't work long term).


This is where I radically differ from my family. Many of them see cooking as a necessary chore (who wants to die, right? :-)) and they labour over it, or else they love it with such a passion that they lose sight of the fact that there are only 24 hours in a day (there are/have been some professional chefs in the family). Ultimately, they end up wearing themselves out day after day, but I've learnt to strip back recipes to their core flavours so that I can cook decent tasting food in reasonable amounts of time.



Like others, I went through multiple phases from a culinary perspective. As a child I loved to eat most things thrown at me (but my family didn't want me in the kitchen). In my teenage years, I used to enjoy and revel in fast and fatty foods, but I basically grew out of it as I discovered that it wasn't all that filling and could result in poor health. Just like the protagonist of 'Super Size Me', I found out that some of my bodily functions didn't work quite as well on this particular diet.



http://en.wikipedia.org/wiki/Super_Size_Me

http://www.imdb.com/title/tt0390521/



Eating out was much the same, because restaurants often added unhealthy elements to meals (high levels of MSG, sugar, salt, etc... to boost the taste). Not to mention that serving sizes could sometimes be small and prices relatively high. I basically had no choice but to learn to cook for myself. In the beginning, I began by badly reproducing restaurant meals. I didn't have the repertoire to be able to reproduce and balance flavours well enough to do a half decent job. Over time, I spent more time exploring cheap restaurants, diners, etc... around where I studied and/or worked. I also watched, read, and in general spent more time in the grocer just trying random sauces, spices, and so on... I developed a sense of flavour and of how to achieve it from base ingredients.



This is why none of the recipes contain exact amounts of ingredients (at the moment). It's also because that was the way I learnt to cook (I was taught a bit by some of my aunts); some of the less talented members of the family had a tendency to fiddle constantly, so listing amounts was basically useless; some people (family or not) aren't willing to share ingredients, so you just have to figure it out when and if you have to; and finally, I figured out that it was the easiest way for me to learn to cook. When you look at a recipe, you're often doing mental arithmetic in order to make it 'taste right'. By developing a better sense of taste I could mostly forgo this, and not have to suffer the consequences of a mathematical screw up (it happened enough times in the family for me to learn not to become so reliant on it).



In general my perspective with regards to food are the following:
  • kids will eventually learn what fills them, and fast food will make them feel horrible. They will grow out of it and eventually eat properly if they are exposed to the right foods
  • rely on machinery when you can. Why waste your time cutting food perfectly if you can get it done in a fraction of the time using the right equipment?
  • why bother with perfection if you can achieve 95% of the taste with 50% of the apparent effort?
  • I'd much rather spend time enjoying food than cooking it
  • prior to marinating any piece of meat, I create the core sauce/marinade separately first and then add the meat. There's no chance of food poisoning and I get an idea of what it will taste like
  • balance of flavours is more important than reproducing exact amounts over and over again. You may also have different preferences from time to time. Obviously, the converse is also true: exact amounts give you a basis to work from
  • don't think that more resources will make you a better chef. It's possible that the exact opposite is true at times. Think about the food of the wealthy versus that of the poor. The poor have to make the most of everything that is thrown at them, extracting every last ounce of flavour from something small/cheap, while the wealthy can basically mix and match the very best each and every time. From a chef's perspective this means they don't have the chance to understand flavours at a more elemental/core level
  • shop from specialist butchers, fishmongers, etc. They will often be able to get you unusual cuts/meats, have better knowledge, do extra things like cutting down large bones for soup stocks, and they are also often quite a bit cheaper
  • don't freeze if you can avoid it (or at least avoid freezing some foods). Some people I know use it as a technique to save time. For some dishes this works, but for others it can alter the actual structure (and sometimes the taste) of the food, leaving it a mess when you finally prepare and eat it. Think about soups versus meats when they are thawed correctly and incorrectly
  • fresh means fresh. Leave fish (and some meats) in the fridge for even a day after leaving the better/stable environment at a supermarket or fishmonger and it will begin to smell and taste slightly rank. This effect increases exponentially over time
  • try everything, whether that be sauces, spices, restaurants, cultures, etc. You will find cheap opportunities if you go to the right places, and ultimately you will end up healthier (you learn that better-tasting food is often healthier as well), happier (variety is the spice of life), and possibly wealthier because of it (you can save a lot by learning to cook well). The wider your vocabulary, the better your cooking will become...
  • balance of flavours is key. Even if you stuff up a recipe you can rescue it if you know enough about this. Added too much sugar? Use sourness to balance it out, etc...
  • don't learn from a single source. If you learn purely through celebrity chefs and books you'll realise that a lot of what they do is quite gimmicky. A lot of the ingredients they use aren't very accessible and are expensive, in spite of what they say. Use your head to strip the recipes back to core flavours to save time and money (in procuring them)
  • learning to cook well takes time. Have patience. It took me a long while to build a sufficient 'vocabulary' before I could build dishes that were worth staying at home for. It took me more time to learn how to reverse-engineer dishes at restaurants. Use every resource at your disposal (the Internet has heaps of free information, remember?).
On a side note, based on the contents of my blog (and other places), people have semi-regularly asked to write here, and for me to write for them. I'm more than happy to do this provided I have the time and the task is interesting enough... on any topic.

May 17, 2015

[debian] Fixing some issues with changelogs.debian.net

I got an email last year pointing out a cosmetic issue with changelogs.debian.net. I think at the time of the email, the only problem was some bitrot in PHP's built-in server variables making some text appear incorrectly.

I duly added something to my TODO list to fix it, and it subsequently sat there for like 13 months. In the ensuing time, Debian changed some stuff, and my code started incorrectly handling a 302 as well, which actually broke it good and proper.

I finally got around to fixing it.

I also fixed a problem where sometimes there can be multiple entries in the Sources file for a package (switching to using api.ftp-master.debian.org would also address this), which sometimes caused an incorrect version of the changelog to be returned.

In the resulting tinkering, I learned about api.ftp-master.debian.org, which is totally awesome. I could stop maintaining and parsing a local copy of sid's Sources file, and just make a call to this instead.
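
As a rough sketch of what that call might look like (this is not the redirector's actual code; the dsc_in_suite endpoint and the field names in its JSON response are my assumptions about the public API):

    # Sketch: ask api.ftp-master.debian.org for a package's source entries
    # in sid, instead of parsing a local copy of the Sources file.
    # The endpoint path and the JSON field names here are assumptions.
    import json
    import urllib.request

    def sid_source_entries(package):
        url = "https://api.ftp-master.debian.org/dsc_in_suite/sid/" + package
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)  # expected: a list of dicts, one per entry

    for entry in sid_source_entries("coreutils"):
        print(entry.get("version"))  # 'version' field name is assumed

With something like this, picking the right changelog version reduces to choosing among the returned entries rather than re-parsing Sources.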

Finally, I added linking to CVEs, because it was a quick thing to do, and adds value.

In light of api.ftp-master.debian.org, I'm very tempted to rewrite the redirector. The code is very old and hard for present-day Andrew to maintain, and I despise PHP. I'd rather write it in Python today, with some proper test coverage. I could also potentially host it on AppEngine instead of locally, just so I get some experience with AppEngine.

It's also been suggested that I fold the changes into the changelog hosting on ftp-master.debian.org. I'm hesitant to do this, as it would require changing the output from plain text to HTML, which would mess up consumers of the plain text (like the current implementation of changelogs.debian.net).

Twitter posts: 2015-05-11 to 2015-05-17

May 15, 2015

Lower SNR limit of Digital Voice

I’m currently working on a Digital Voice (DV) mode that will work at negative SNRs. So I started thinking about where the theoretical limits are:

  1. Let's assume we have a really good rate 0.5 FEC code that approaches the Shannon Limit, perfectly correcting random bit errors up to a channel BER of 12%.
  2. A real-world code this good requires a FEC frame size of 1000s of bits, which will mean long latency (seconds). Let's assume that's OK.
  3. A large frame size with perfect error correction means we can use a really low bit rate speech codec that can analyse seconds of speech at a time and remove all sorts of redundant information (like silence). This will allow us to code more efficiently and lower the bit rate. Also, we only want speech quality just on the limits of intelligibility. So let's assume a 300 bit/s speech codec.
  4. Using rate 0.5 FEC, that's a bit rate over the channel of 600 bit/s.
  5. Let's assume QPSK on an AWGN channel. It's possible to make a fading channel behave like an AWGN channel if we use diversity, e.g. a long code with interleaving (time diversity), or spread spectrum (frequency diversity).
  6. QPSK at around 12% BER requires an Eb/No of -1dB, or an Es/No of Eb/No + 3 = 2dB. If the bit rate is 600 bit/s, the QPSK symbol rate is 300 symbols/s.

So we have SNR = Es/No – 10*log10(NoiseBW/SymbolRate) = 2 – 10*log10(3000/300) = -8dB. Untrained operators find SSB very hard to use beneath 6dB, however I imagine many Ham contacts (especially brief exchanges of callsigns and signal reports) are made well beneath that. DV at -8dB would be completely noise free, but of low quality (e.g. a little robotic) and high latency.

For VHF applications C/No is a more suitable measurement; this is C/No = SNR + 10*log10(3000) = 26.7dBHz (FM is a very scratchy readability 5 at around 43dBHz). That's roughly a 16dB (40 x) power improvement over FM!
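
As a quick sanity check, here is the arithmetic above in a few lines of Python. Nothing here is modem code; it's just the back-of-envelope numbers, with the -1dB Eb/No figure taken as the assumption from point 6:

    # Back-of-envelope DV link budget, reproducing the arithmetic above.
    import math

    bit_rate = 600         # bit/s over the channel (300 bit/s speech, rate 0.5 FEC)
    bits_per_symbol = 2    # QPSK
    symbol_rate = bit_rate / bits_per_symbol   # 300 symbols/s
    noise_bw = 3000        # Hz measurement bandwidth

    eb_no = -1.0                                            # dB, assumed (point 6)
    es_no = eb_no + 10 * math.log10(bits_per_symbol)        # 2 dB
    snr = es_no - 10 * math.log10(noise_bw / symbol_rate)   # -8 dB
    c_no = snr + 10 * math.log10(noise_bw)                  # ~26.8 dBHz

    print("Es/No = %.1f dB, SNR = %.1f dB, C/No = %.1f dBHz" % (es_no, snr, c_no))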

May 14, 2015

Leaking Information in Drupal URLs

Update: It turns out the DA was trolling. We all now know that DrupalCon North America 2017 will be in New Orleans. I've kept this post up as I believe the information about handling unpublished nodes is relevant. I have also learned that m4032404 is enabled by default in govCMS.

When a user doesn't have access to content in Drupal, a 403 Forbidden response is returned. This is the case out of the box for unpublished content. The problem with this is that sensitive information may be contained in the URL. A great example of this is the DrupalCon site.

The way to avoid this is to use the m4032404 module, which changes a 403 response to a 404. This simple module prevents your site leaking information via URLs.
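
The idea generalises beyond Drupal. As a minimal sketch of the logic (in Python purely for illustration; this is not the module's code), the point is that the two failure cases become indistinguishable to the requester:

    # Illustration of the m4032404 idea: answer 404 rather than 403, so a URL
    # for unpublished content doesn't confirm that the content exists.
    # Not Drupal code; just the access-check logic in miniature.
    def status_for(node_exists, user_can_view):
        if not node_exists:
            return 404            # genuinely not found
        if not user_can_view:
            return 404            # exists, but we pretend it doesn't
        return 200                # accessible as normal

    # Both failure cases now look identical from outside:
    assert status_for(node_exists=False, user_can_view=False) == \
           status_for(node_exists=True, user_can_view=False) == 404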


lifeless

in the grit and sunbaked red

you can imagine moonscapes,

endless hot dry emptiness



but the ants commute even on this hot sand

lizards patrol their freeways with quick tongues



improbable silvergrey leaves stand isolated

sand and sticks collecting under the windward edge of any plant.



at dusk the restless kangaroos cross the landscape

in the evenings there are yowls of dingoes

dense clouds of insects orbit the lights

dawn is patterned with tracks



and if it rains

a magician's bouquet

life explodes, instant spring packetmix

just add water.

[tech] LWN Chrome extension published

I finally got around to finishing off and publishing the LWN Chrome extension that I wrote a couple of months ago.

I received one piece of feedback from someone who read my blog via Planet Debian, but didn't appear to email me from a usable email address, so I'll respond to the criticisms here.

I wrote a Chrome extension because I use Google Chrome. To the best of my knowledge, it will work with Chromium as well, but as I've never used it, I can't really say for sure. I've chosen to licence the source under the Apache License, and make it freely available. So the extension is available to anyone who cares to download the source and "side load" it, if they don't want to use the Chrome Web Store.

As for whether a userscript would have done the job, maybe, but I have no experience with them.

Basically, I had an itch, and I scratched it, for the browser I choose to use, and I also chose to share it freely.

May 13, 2015

Keeping Songs Fresh

If you've been keeping an eye on this blog, you will have noticed that I've been dabbling in music production of late. This is part of something that I've been working on, on and off, while learning more about the craft.




This is a playlist that I've created to upload such things.




Clearly, the track sample outlined above is fairly early in its inception, but it gives you an idea of some of the stuff that I am likely to produce in future.


As to the purpose of this particular post, it's basically about how to keep a song fresh by altering various aspects of it. For instance, think about the following:
  • alter tempo (don't restrict yourself to a single tempo throughout. Listen to grid music specialists (such as 'Jeremy Ellis', https://www.youtube.com/watch?v=HdFlFxJFnfY) and finger drummers (such as 'Mad Zach', http://www.madzach.com/, https://www.youtube.com/results?search_query=mad+zach) and you'll see that the sound is a lot more natural; there is a lot to be gained by not adhering too stringently to a single tempo)
  • change key/scales (if you know enough music theory, you'll be aware that by altering 'modes/scales' you can change the entire feel of a song through that alone. Also remember that in the world of artificial sounds, such as those produced by synthesisers, scales can sometimes mean very little. Just go by ear in such cases...)
  • alter instruments for the same section (it's astonishing how much variety in software and outboard gear you can get. Even if you just work with free stuff you'll have more than enough to build quality songs). While we're at it, give each and every instrument a chance. An example of this is 'Doctor Rockit' in 'Cafe De Flore', https://www.youtube.com/watch?v=LFDyalRZbfY
  • the human voice (even when re-modulated/synthesised) can completely alter the feel of a song. The timbre itself can only sort of be reproduced by artificial means, which means you lose out on a lot by rejecting it. Listen to 'Kanye West's' version of 'Harder, Better, Faster, Stronger', https://www.youtube.com/watch?v=3CgkWmKJLuE, as opposed to the original version, https://www.youtube.com/watch?v=GDpmVUEjagg, and you'll understand what I mean
  • if you have difficulty finding a vocalist, try social networks specific to this purpose, such as http://vocalizr.com/ and https://blend.io/
  • or else just become really good with instruments, like 'Chicane' in 'Offshore', https://www.youtube.com/watch?v=ltpCS5P0zCw
  • alter phase/time between tracks (slight changes in phase can have quite a different effect)
  • alter notes and their sequence (sounds obvious, but doesn't always seem to be, particularly when listening to heaps of club/dance music)
  • which leads us to the following point: learn to improvise and harmonise. I grew up on a lot of RnB and Hip Hop but ended up branching out. Without this basis you'll find it very difficult to make something that doesn't sound overly repetitive. Examples of great harmony include 'Boyz II Men' in 'End of The Road', https://www.youtube.com/watch?v=zDKO6XYXioc, 'Four Seasons of Loneliness', https://www.youtube.com/watch?v=fUSOZAgl95A, and 'I'll Make Love to You', https://www.youtube.com/watch?v=fV8vB1BB2qc
  • play around with the usual effects mid-sound, such as envelopes, modulation, LFOs, phasing, flanging, etc... A good example of this is 'Flume', https://www.youtube.com/watch?v=fpyDJWxCep8&list=PLfk_Bv3x7xZLaDTrnJwvsJwQD_qJ2PmZ-
  • use of polyrhythms. Can be a little bit confusing to work with but can also achieve good results, http://en.wikipedia.org/wiki/Polyrhythm
  • use of effects such as panning, reverb, delay, EQ, etc... (be careful though. If you plan on deploying to clubs remember that their systems are often monophonic so some of your work may be for nothing. Also, a lot of people's standard stereo systems just don't have the range/ability to be able to do what you may want.)
  • use of automation in order to change relative volume of tracks/instruments in relation to one another
  • production and mixing techniques such as side-chain ducking, parallel processing, etc... Note that sometimes you can go overboard and the sound can lose a lot of its body
  • split, explode, change sequence, ghost, reverse MIDI sections and/or audio samples
  • 'layering' sounds by having instruments play the same MIDI notes/sequences
  • think about push/pull aspects when dealing with 'fills'. Hear this in parts of Groove Armada's 'Lovebox', https://www.youtube.com/watch?v=izMBLSEt16o
  • add random samples/sounds into the mixture every once in a while. A good example and common user of this technique is 'Daft Punk' in 'Around the World', https://www.youtube.com/watch?v=7nApiS9UTvc
  • gradually build into sections. It keeps things sounding like a song rather than a bunch of clips that have been assembled together, and creates a sense of fluidity. An example of this is 'Bob Sinclar' in 'World, Hold On', https://www.youtube.com/watch?v=yzlE7w9wRQk
  • this takes me to my next point: take your time when it comes to building a song. I've been dealing with this problem constantly. It's not just a bunch of clips put together. It's like a story: it's composed of words, phrases, pages, and ultimately a book. Tell the story completely. An example of this is 'Tom Novy's' song 'Take It', https://www.youtube.com/watch?v=_8ddq2NsjjU
  • that said, when pushing/pulling/building into different sections, one technique you can use to add a bit of 'freshness' is giving listeners a hint of a section here and there before hitting them with the complete thing
  • think about utilising the entire frequency range. I've heard heaps of songs cramp their sound into too narrow a frequency range, and they end up losing some expressiveness
  • think about extending notes in breakdowns. A good example of this is 'When the Lights Go Out' by 'Five', https://www.youtube.com/watch?v=mpdcKmaHk_s
  • good songs start with a solid base. Even if they aren't electronic, they start with a solid base/beat and build their way up into something great. Listen to 'Kaskade's' song 'This Rhythm' for an example of this, https://www.youtube.com/watch?v=cGRiFhIeWHM, as well as 'Mousse T' in 'Horny', https://www.youtube.com/watch?v=mGkHc11kSKs
  • use silence to your advantage. If you're just starting out, you might think you need to fill every single moment in time with sound. Silence in the right places can change the entire feel of a section
  • don't think that pure digital or analogue is best. Fusing the two can produce wonderful results, even if they are emulated via software. An example of this is using 'saturation', 'distortion', or white-noise effects to cut through the artificial/pure nature of the sounds that would otherwise be on show
  • use different sounds as well as effects during section transitions. A good example of this is by 'Doctor Rockit' in 'Cafe De Flore', https://www.youtube.com/watch?v=LFDyalRZbfY
  • listen to heaps of different artists and read a lot. A lot of what I've learnt has actually been from 'Computer Music Magazine' (a lot of its content is duplicated by other music magazine publishers, and articles are often superficially updated and re-published; you can save a bit of money by being watchful for these things, http://www.musicradar.com/computermusic). Don't limit yourself, to keep both yourself and your listeners interested

May 12, 2015

LUV Main June 2015 Meeting: Using deep mutational scanning to understand protein function / Drupal8 out of the box

Jun 2 2015 19:00
Jun 2 2015 21:00
Location: 

200 Victoria St. Carlton VIC 3053

Speakers:

• Alan Rubin: Using deep mutational scanning to understand protein function

• Donna Benjamin: Drupal8 out of the box

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


May 11, 2015

Tired but looking up

Ye gods I'm tired.

Mostly I think this is due to the prodigious amount of travel that I've been doing to Melbourne for work, which, while interesting, has left me with very little time to get a handle on things that I want to do outside of work.

There is a light on the horizon though: it looks like the Melbourne sojourn is coming to an end, and that means that, at the very least psychologically, I can start looking at committing to the things around Angry Beanie that I've been meaning to do for the past six months.

So here's a rundown of the tasks I've set myself.

Rebuild the Angry Beanie website using Django. This is mostly an exercise in learning Django and Python, both tools I've looked at before but never really had a project to get my teeth into.

Restart production of For Science and Purser Explores The World. I've been trying to get this restarted all year, but the aforementioned Melbourne trips have really thrown a kink in things.

On the subject of PETW I'd love to hear any subjects that you'd like me to cover. I already have a list of topics I want to look at, but I'm always up for more.


Upcoming opportunities to talk MySQL/MariaDB in May 2015

May is quickly shaping up to be a month filled with activity in the MySQL/MariaDB space. Just a quick note about where I’ll be; I'm looking forward to meeting folk to talk shop.

  1. The London MySQL Meetup Group – May 13 2015 – organized by former colleague & friend Ivan Zoratti; we will be doing a wrap-up of recent announcements at Percona Live Santa Clara, and I’ll be showing off some of the spiffy new features we are building into MariaDB 10.
  2. MariaDB Roadshow London – May 19 2015 – I’m going to give an overview of our roadmap, and there will be many excellent talks by colleagues there. I believe MariaDB Corporation CEO Patrik Sallner and Stu Schmidt, President at Zend, will also be there. Should be a fun-filled day.
  3. Internet Society (ISOC) Hong Kong World Internet Developer Summit – May 21-22 2015 – I’ll be giving a keynote about MariaDB and how we are trying to make it important Internet infrastructure, as well as making it developer friendly.
  4. O’Reilly Velocity 2015 – May 27-29 2015 – In 90 minutes I will attempt to give attendees (over 100 have already pre-registered) an overview of MySQL High Availability options and what their choices are in 2015. Expect a lot of talk on replication improvements from both MySQL & MariaDB, Galera Cluster, as well as tools around the ecosystem.

Oh Hai 👶

 

Rohan Victor

Rohan Victor Pendergast, born 18:33 on 10/5/15.

Boy and Bear seemed to be what got him moving, so here they are now.

(PS: Should you ever find yourself in such a position, I highly recommend finding a hospital that gives the parents champagne for having a baby on Mother’s Day.)

May 10, 2015

Installing Ubuntu 15.04 on Acer Aspire E 11 E3-112-C6YY

The Acer Aspire E 11 E3-112-C6YY is a nice 11 inch notebook which I found suitable for doing some work while commuting. And it costs only A$299 at Dick Smith (or on eBay). Here is a step-by-step guide to setting up Ubuntu 15.04 alongside the preinstalled Microsoft Windows 8.1 (dual boot):

  1. Before you begin, it is best to install all pending updates to the Windows 8.1 installation on the notebook.
  2. Download the latest desktop version of Ubuntu for the amd64 architecture from cdimage.ubuntu.com/daily-live/current/ (vivid-desktop-amd64.iso at the moment). Then create a bootable Ubuntu USB flash drive using Rufus, choosing the GPT partition scheme for UEFI computers and the Ubuntu ISO image just downloaded. You need a USB drive of 2GB or more; all data on it will be lost in the process.
  3. Create a recovery drive with the preinstalled Acer eRecovery Management application. This is optional, but you take some risk by not doing it. You will need a USB drive of 16GB or more, and all data on that drive will be lost in the process.
  4. Shrink the main Windows partition (drive C:) by the amount you would like to allocate for Ubuntu. The minimum required is only about 6.6GB, though I took the maximum possible (about 50% of the 500GB drive).
  5. Turn off fast startup in Windows 8.1.
  6. Insert the USB drive with Ubuntu into a USB port.
  7. Disable Secure Boot (select the Notebook section at this link), and before exiting the BIOS setup utility move the USB HDD to the top of the Boot priority list in the Boot menu. Save changes and exit.
  8. Now you should see the GRUB menu offering either to try Ubuntu without installing it, or to install it. You may want to try it first to ensure everything works fine, and then install it.
  9. When you choose to install Ubuntu, the first step is to choose the language of the system. It is English by default.
  10. The next step is configuring an Internet connection, usually via Wi-Fi. It is better to have it configured and running so that updates can be installed during the installation.
  11. Then the installer checks the requirements for a successful installation: you should have enough free space, be plugged into a power source, and be connected to the Internet. You may tick the boxes permitting installation of updates and third-party software if you like.
  12. After that you need to select the installation type. As we are booting from a UEFI-enabled USB drive, choose the default: Install Ubuntu alongside Windows Boot Manager. Then hit Install Now and confirm the changes to be made. This is the point of no return; you must finish the installation once past this point.
  13. In the next setup dialogues you choose your time zone and keyboard layout, specify your name and the name of your laptop, and choose the username and password (if required) to be used.
  14. Now wait until the installation completes, ending with a dialogue suggesting you reboot the system to start using it. Reboot and remove the USB drive, and the laptop will boot... into Windows. That is expected.
  15. Move the mouse pointer to the bottom left corner, right-click on the Windows icon and choose "Command Prompt (Admin)". Within the administrator's command prompt, type the following command (it points the Windows Boot Manager at GRUB, so GRUB will load by default from now on):
    
        bcdedit /set "{bootmgr}" path \EFI\ubuntu\grubx64.efi
    
  16. Shut down Windows 8.1 again; power the laptop on and hit F2 once the Acer logo appears to enter the BIOS setup utility. Move the USB HDD below HDD: xxxxxxx-xxx in the Boot priority list in the Boot menu. Save changes and exit.
  17. Now you should see the GRUB menu offering to boot either Ubuntu or Windows Boot Manager. Boot Ubuntu first, then Windows 8.1, to verify everything works fine.
