Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

July 28, 2015

Chet and I went on an adventure to LA-96

So, I've been fascinated with American nuclear history for ages, and Chet and I got talking about what, if any, nuclear launch facilities there were in LA. We found LA-96 online and set off on an expedition to explore. An interesting site; it's a pity there are no radars left there. Apparently SF-88 is the place to go for tours from vets and radars.









I also made a quick and dirty 360 degree video of the view of LA from the top of the Nike control radar tower:







Interactive map for this route.



Tags for this post: blog pictures 20150727-nike_missile photo california

Related posts: First jog, and a walk to Los Altos; Did I mention it's hot here?; Summing up Santa Monica; Noisy neighbours at Central Park in Mountain View; So, how am I getting to the US?; Views from a lookout on Mulholland Drive, Bel Air





July 26, 2015

Geocaching with TheDevilDuck

In what amounts to possibly the longest LAX layover ever, I've been hanging out with Chet at his place in Altadena for a few days on the way home after the Nova mid-cycle meetup. We decided that, being the dorks that we are, we should do some geocaching. These are just some quick pics of some unexpected bush land -- I never thought LA would be so close to nature, but this part certainly is.






Interactive map for this route.



Tags for this post: blog pictures 20150727 photo california bushwalk

Related posts: A walk in the San Mateo historic red woods; First jog, and a walk to Los Altos; Goodwin trig; Did I mention it's hot here?; Big Monks; Summing up Santa Monica




Twitter posts: 2015-07-20 to 2015-07-26

July 25, 2015

Microphone Placement and Speech Codecs

This week I have been looking at the effect different speech samples have on the performance of Codec 2. One factor is microphone placement. In radio (from broadcast to two way HF/VHF) we tend to use microphones placed close to our lips. In telephony, hands-free or more distant microphone placement has become common.

People trying FreeDV over the air have obtained poor results from using built-in laptop microphones, but good results from USB headsets.

So why does microphone placement matter?

Today I put this question to the codec2-dev and digital voice mailing lists, and received many fine ideas. I also chatted to such luminaries as Matt VK5ZM and Mark VK5QI on the morning drive time 70cm net. I’ve also been having an ongoing discussion with Glen, VK1XX, on this and other Codec 2 source audio conundrums.

The Model

A microphone is a bit like a radio front end:

We assume linearity (the microphone signal isn’t clipping).

Imagine we take exactly the same mic and try it 2cm and then 50cm away from the speaker's lips. As we move it away the signal power drops and (given the same noise figure) SNR must decrease.

Adding extra gain after the microphone doesn’t help the SNR, just like adding gain down the track in a radio receiver doesn’t help the SNR.
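To put rough numbers on that (a toy free-field calculation in Python; it ignores the room and the proximity effect discussed next):

  import math

  # Free-field: signal power falls with the square of distance, so with
  # a fixed noise floor the SNR falls by 20*log10(d2/d1) dB. Gain added
  # after the mic boosts signal and noise equally, so it can't win any
  # of this back.
  def snr_loss_db(d1_m, d2_m):
      return 20 * math.log10(d2_m / d1_m)

  print(snr_loss_db(0.02, 0.50))   # 2cm -> 50cm: ~28dB worse SNR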

When we are very close to a microphone the low frequencies tend to be boosted; this is known as the proximity effect. This is where the analogy to radio signals falls over. Oh well.

A microphone 50cm away picks up multi-path reflections from the room, laptop case, and other surfaces that start to become significant compared to the direct path. Summing a delayed version of the original signal will have an impact on the frequency response and add reverb – just like a HF or VHF radio signal. These effects may be really hard to remove.

Science in my Lounge Room 1 – Proximity Effect

I couldn’t resist – I wanted to demonstrate this model in the real world. So I dreamed up some tests using a couple of laptops, a loudspeaker, and a microphone.

To test the proximity effect I constructed a wave file with two sine waves at 100Hz and 1000Hz, and played it through the speaker. I then sampled it using the microphone at different distances from the speaker. The proximity effect predicts the 100Hz tone should fall off faster than the 1000Hz tone with distance. I measured each tone's power using Audacity (spectrum feature).
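In case anyone wants to reproduce the test signal, here is a minimal sketch using just the Python standard library (the 48kHz sample rate, 5 second length, levels, and file name are my choices, not anything from the original test):

  import math, struct, wave

  fs, secs = 48000, 5
  with wave.open("twotone.wav", "w") as w:
      w.setnchannels(1)        # mono
      w.setsampwidth(2)        # 16 bit samples
      w.setframerate(fs)
      for n in range(fs * secs):
          t = n / fs
          # two tones at 100Hz and 1000Hz, scaled to avoid clipping
          s = 0.4 * math.sin(2 * math.pi * 100 * t) \
            + 0.4 * math.sin(2 * math.pi * 1000 * t)
          w.writeframes(struct.pack("<h", int(s * 32767)))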

This spreadsheet shows the results over a couple of runs (levels in dB).

So in Test 1, we can see the 100Hz tone falls off 4dB faster than the 1000Hz tone. That seems a bit small; it could be experimental error. So I tried again with the mic just inside the speaker aperture (hence -1cm) and the difference increased to 8dB, just as expected. Yayyy, it worked!

Apparently this effect can be as large as 16dB for some microphones. Apparently radio announcers use this effect to add gravitas to their voice, e.g. leaning closer to the mic when they want to add drama.

In my case it means unwanted extra low frequency energy messing with Codec 2 for some closely placed microphones.

Science in my Lounge Room 2 – Multipath

So how can I test the multipath component of my model above? Can I actually see the effects of reflections? I set up my loudspeaker on a coffee table and played a 300 to 3000 Hz swept sine wave through it. I sampled close up and with the mic 25cm away.

The idea is to get a reflection off the coffee table. The direct and reflected waves will be half a wavelength out of phase at some frequency, which should cause a notch in the spectrum.

Let's take a look at the frequency response close up and at 25cm:

Hmm, they are both a bit of a mess. Apparently I don't live in an anechoic chamber. Hmmm, that might be handy for kids' parties. Anyway I can observe:

  1. The signal falls off a cliff at about 1000Hz. Well, that will teach me to use a speaker with an active crossover for these sorts of tests. It's part of a system that normally has two other little speakers plugged into the back.
  2. They both have a resonance around 500Hz.
  3. The close sample is about 18dB stronger. Given both have the same noise level, that's 18dB better SNR than the other sample. Any additional gain after the microphone will increase the noise as much as the signal, so the SNR won't improve.

OK, let's look at the reflections:

A bit of Googling reveals reflections of acoustic waves from solid surfaces are in phase (not reversed 180 degrees). Also, the angle of incidence is the same as reflection. Just like light.

Now the microphone and speaker aperture are 16cm off the table, and the mic 25cm away. A couple of right angle triangles, a bit of Pythagoras, and I make the reflected path length 40.6cm. This means a path difference of 40.6 - 25 = 15.6cm. So when wavelength/2 = 15.6cm, we should get a notch in the spectrum, as the two waves will cancel. Now v = f x wavelength, and v = 340m/s, so we expect a notch at f = 340/(2 x 0.156) = 1090Hz.

Looking at a zoomed version of the 25cm spectrum:

I can see several notches: 460Hz, 1050Hz, 1120Hz, and 1300Hz. I’d like to think the 1050Hz notch is the one predicted above.

Can we explain the other notches? I looked around the room to see what else could be reflecting. The walls and ceiling are a bit far away (which means low freq notches). Hmm, what about the floor? It’s big, and it’s flat. I measured the path length directly under the table as 1.3m. This table summarises the possible notch frequencies:

Note that notches will occur at any frequency where the path difference is an odd number of half wavelengths, i.e. wavelength/2, 3 x wavelength/2, 5 x wavelength/2..... hence we get a comb effect along the frequency axis.
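Here is the same arithmetic as a quick Python sketch, covering both the table and floor bounces (assuming v = 340m/s and the measurements above; output values in Hz):

  import math

  V = 340.0   # speed of sound, m/s

  def reflected_path(direct_m, height_m):
      # Bounce path off a flat surface height_m below both the speaker
      # aperture and the mic, which are direct_m apart (two right angle
      # triangles, a bit of Pythagoras).
      return 2 * math.hypot(direct_m / 2, height_m)

  def notches(path_diff_m, fmax=1500.0):
      # Notches at odd multiples of the first cancellation frequency
      f0 = V / (2 * path_diff_m)
      return [round(n * f0) for n in range(1, int(fmax / f0) + 1, 2)]

  print(notches(reflected_path(0.25, 0.16) - 0.25))  # table: [1090]
  print(notches(1.30 - 0.25))   # floor: [162, 486, 810, 1133, 1457]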

OK, I can see the predicted notches at 486Hz and 1133Hz, which means the 1050Hz one is probably the reflection off the table. I can't explain the 1300Hz notch, and there's no sign of the predicted notch at 810Hz. With a little imagination we can see a notch around 1460Hz. Hey, that's not bad at all for a first go!

If I was super keen I’d try a few variations like the height above the table and see if the 1050Hz notch moves. But it’s Friday, and nearly time to drink red wine and eat pizza with my friends. So that’s enough lounge room acoustics for now.

How to break a low bit rate speech codec

Low bit rate speech codecs make certain assumptions about the speech signal they compress. For example the time varying filter used to transmit the speech spectrum assumes the spectrum varies slowly in frequency, and doesn’t have any notches. In fact, as this filter is “all pole” (IIR), it can only model resonances (peaks) well, not zeros (notches). Codecs like mine tend to fall apart (the decoded speech sounds bad) when the input speech violates these assumptions.
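To illustrate the all-pole point, here is a toy second order example (not Codec 2's actual filter; the pole radius and angle are made up). Its magnitude response has a sharp resonance at the pole angle, but with no zeros in the numerator there is no frequency where it can form a notch:

  import cmath, math

  r, theta = 0.95, math.pi / 4   # pole radius and angle (made up)

  def mag_db(w):
      # H(z) = 1 / (1 - 2r*cos(theta)z^-1 + r^2 z^-2), all-pole
      z = cmath.exp(1j * w)
      a = 1 - 2 * r * math.cos(theta) / z + r ** 2 / z ** 2
      return 20 * math.log10(abs(1 / a))

  for w in (0.2, math.pi / 4, 1.5):
      print("%.2f rad: %5.1f dB" % (w, mag_db(w)))
  # peaks near w = pi/4 (the pole angle); no deep notch anywhere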

This helps explain why clean speech from a nicely placed microphone is good for low bit rate speech codecs.

Now Skype and (mobile) phones do work quite well in "hands free" mode, with rather distant microphone placement. I often use Skype with my internal laptop microphone. Why is this OK?

Well, the codecs used have a much higher bit rate, e.g. 10,000 bit/s rather than 1,000 bit/s. This gives them the luxury of coding, to some extent, arbitrary waveforms as well as speech. They use algorithms like CELP that are a hybrid of model based (like Codec 2) and waveform based (like PCM) coding. So they faithfully follow the crappy mic signal, and don't fall over completely.

July 24, 2015

Linux Security Summit 2015 Update: Free Registration

In previous years, attending the Linux Security Summit (LSS) has required full registration as a LinuxCon attendee.  This year, LSS has been upgraded to a hosted event.  I didn’t realize that this meant that LSS registration was available entirely standalone.  To quote an email thread:

If you are only planning on attending the Linux Security Summit, there is no need to register for LinuxCon North America. That being said, you will not have access to any of the booths, keynotes, breakout sessions, or breaks that come with the LinuxCon North America registration.  You will only have access to the Linux Security Summit.

Thus, if you wish to attend only LSS, then you may register for that alone, at no cost.

There may be a number of people who registered for LinuxCon but who only wanted to attend LSS.   In that case, please contact the program committee at lss-pc_AT_lists.linuxfoundation.org.

Apologies for any confusion.

July 23, 2015

Virtualenv and library fun

Doing Python development means using virtualenv, which is wonderful.  Still, sometimes you find a gotcha that trips you up.



Today, for whatever reason, inside a venv inside a brand new Ubuntu 14.04 install, I could not see a system-wide install of pywsman (installed via sudo apt-get install python-openwsman).



For example:

mrda@host:~$ python -c 'import pywsman'

# Works


mrda@host:~$ tox -evenv --notest
(venv)mrda@host:~$ python -c 'import pywsman'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named pywsman
# WAT?


Let's try something else that's installed system-wide
(venv)mrda@host:~$ python -c 'import six'
# Works


Why does six work, and pywsman not?
(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/six*
-rw-r--r-- 1 root root  1418 Mar 26 22:57 /usr/lib/python2.7/dist-packages/six-1.5.2.egg-info
-rw-r--r-- 1 root root 22857 Jan  6  2014 /usr/lib/python2.7/dist-packages/six.py
-rw-r--r-- 1 root root 22317 Jul 23 07:23 /usr/lib/python2.7/dist-packages/six.pyc
(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/*pywsman*
-rw-r--r-- 1 root root  80590 Jun 16  2014 /usr/lib/python2.7/dist-packages/pywsman.py
-rw-r--r-- 1 root root 293680 Jun 16  2014 /usr/lib/python2.7/dist-packages/_pywsman.so


The only thing that comes to mind is that pywsman wraps a .so.


A work-around is to tell venv that it should use the system-wide install of pywsman, like this:


# Kill the old venv first
(venv)mrda@host:~$ deactivate
mrda@host:~$ rm -rf .tox/venv


Now start over
mrda@host:~$ tox -evenv --notest --sitepackages pywsman
(venv)mrda@host:~$ python -c "import pywsman"
# Fun and Profit!
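Incidentally, outside of tox, plain virtualenv has an equivalent knob; a sketch I'd expect to behave the same way (untested here):

mrda@host:~$ virtualenv --system-site-packages venv
mrda@host:~$ . venv/bin/activate
(venv)mrda@host:~$ python -c 'import pywsman'
# Works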




Self Replacing Secure Code, our Strange World, Mac OS X Images Online, Password Recovery Software, and Python Code Obfuscation

A while back (several years ago) I wrote about self replacing code in my 'Cloud and Security' report (p.399-402) (I worked on it on and off over an extended period of time) within the context of building more secure codebases. DARPA are currently funding projects within this space. Based on what I've seen, it's early days. To be honest it's not that difficult to build if you think about it carefully and break it down. Much of the code that is required is already in widespread use and I already have much of the code ready to go. The problem is dealing with the sub-components. There are some aspects that are incredibly tedious to deal with, especially within the context of multiple languages.



If you're curious, I also looked at fully automated network defense (as in the CGC (Cyber Grand Challenge)) in all of my three reports, 'Building a Cloud Computing Service', 'Convergence Effect', and 'Cloud and Internet Security' (I also looked at a lot of other concepts, such as 'Active Defense' systems which involve automated network response/attack, but there are a lot of legal, ethical, technical, and other conundrums that we need to think about if we proceed further down this path...). I'll be curious to see what the final implementations will be like...

https://en.wikipedia.org/wiki/DARPA

http://www.darpa.mil/

https://play.google.com/store/books/author?id=Binh+Nguyen

http://www.amazon.com/mn/search/?_encoding=UTF8&camp=1789&creative=390957&field-author=Binh%20Nguyen&linkCode=ur2&search-alias=digital-text&sort=relevancerank&tag=bnsb-20&linkId=3BWQJUK2RCDNUGFY

http://www.brisbanetimes.com.au/it-pro/security-it/csail-fixes-software-bugs-automatically-in-any-language-by-copying-from-safer-applications-20150720-gifyo3

http://www.smh.com.au/it-pro/security-it/csail-fixes-software-bugs-automatically-in-any-language-by-copying-from-safer-applications-20150720-gifyo3

http://www.theverge.com/2015/7/8/8911493/darpa-cyber-grand-challenge-finalists-defcon

http://www.networkworld.com/article/2945443/security0/darpas-4m-cyber-threat-clash-down-to-seven-challengers.html



If you've ever worked in the computer security industry you'll realise that it can be incredibly frustrating at times. As I've stated previously, it can sometimes be easier to get information from countries under sanction than legitimately (even in a professional setting in a 'safe environment') for study. I find this perspective very difficult to understand, especially when search engines allow independent researchers easy access to adequate samples, and when you can hardly defend against an attack system/code if you (and many others around you) have little idea of how it works.

http://www.itnews.com.au/News/406655,infosec-firms-oppose-misguided-exploit-export-controls.aspx

http://www.theaustralian.com.au/business/technology/australian-firms-under-attack-every-week-centrify/story-e6frgakx-1227444081288?nk=2712dc6b13f189c643cb547351652f41-1437018757



It's interesting how the West views China and Russia via diplomatic cables (WikiLeaks). They say that China is being overly aggressive, particularly with regards to economics and defense. Russia is viewed as a hybrid criminal state. When you think about it carefully, the world is just shades of grey. A lot of what we do in the West is very difficult to defend when you look behind the scenes and realise what a fine line we straddle: much of what they do, we also engage in; we're just more subtle about it. If the general public were to realise that Obama once held off on seizing money from the financial system (proceeds of crime and terrorism) because there was so much locked up in US banks that it would cause the whole system to crash, would they see things differently? If the world in general knew that much of southern Italy's economy was from crime, would they view it in the same way as they saw Russia? If the world knew exactly how much 'economic intelligence' seems to play a role in 'national security', would we think about the role of state security differently?

http://www.theguardian.com/world/2010/nov/29/wikileaks-cables-china-reunified-korea

https://en.wikipedia.org/wiki/Contents_of_the_United_States_diplomatic_cables_leak_%28People%27s_Republic_of_China%29

https://en.wikipedia.org/wiki/Reactions_to_the_United_States_diplomatic_cables_leak

https://en.wikipedia.org/wiki/Reception_of_WikiLeaks

http://www.theguardian.com/world/2010/dec/01/wikileaks-cables-russia-mafia-kleptocracy

http://www.telegraph.co.uk/news/worldnews/wikileaks/8304654/WikiLeaks-cables-US-agrees-to-tell-Russia-Britains-nuclear-secrets.html

https://www.techdirt.com/articles/20130910/13145824474/former-nsa-officer-wikileaks-is-front-russian-intelligence-snowdens-probably-spy.shtml

http://www.news.com.au/finance/work/the-human-tragedy-of-mh17-could-boost-vladimir-putins-popularity-despite-damning-video-evidence/story-fn5tas5k-1227448079141



If you develop across multiple platforms you'll have discovered that it is just easier to have a copy of Mac OS X running in a virtual machine than to shuffle back and forth between different machines. Copies of the ISO/DMG image (technically, Mac OS X is free, for those who don't know) are widely available, and as many have discovered, most of the time setup is reasonably easy.

http://www.reddit.com/r/osx/comments/1oey0b/download_os_x_109_mavericks_gm_final_dmg/

http://www.techglobex.net/2013/10/download-os-x-109-mavericks-gm-final.html



If you've ever lost your password to an archive, password recovery programs can save a lot of time. Most of the free password recovery tools deal only with a limited number of filetypes and passwords.

http://www.lostpassword.com/kit-forensic.htm

http://www.top-password.com/download.html

http://www.passwordsrecoverytool.com/download/

http://www.passwordsrecoverytool.com/downloads/

https://www.elcomsoft.com/eprb.html

http://www.password-changer.com/

http://www.password-changer.com/download.htm

http://www.livecd.com/

http://livecd.com/DataStudio/download.htm

http://pcsupport.about.com/od/toolsofthetrade/tp/passrecovery.htm

http://www.techrepublic.com/blog/five-apps/five-trustworthy-password-recovery-tools/



There are some Python bytecode obfuscation utilities out there, but like standard obfuscators they are of limited utility against skilled programmers.

http://reverseengineering.stackexchange.com/questions/1943/what-are-the-techniques-and-tools-to-obfuscate-python-programs

http://stackoverflow.com/questions/14997414/obfuscating-python-bytecode-through-interpreter-mutation

http://stackoverflow.com/questions/261638/how-do-i-protect-python-code

July 22, 2015

Configuring Zotero PDF full text indexing in Debian Jessie

Background

Zotero is an excellent reference and citation manager. It runs within Firefox, making it very easy to record sources that you encounter on the web (and in this age of publication databases almost everything is on the web). There are plugins for LibreOffice and for Word which can then format those citations to meet your paper's requirements. Zotero's Firefox application can also output for other systems, such as Wikipedia and LaTeX. You can keep your references in the Zotero cloud, which is a huge help if you use different computers at home and work or school.

The competing product is EndNote. Frankly, EndNote belongs to a previous era of research methods. If you use Windows, Word and Internet Explorer and have a spare $100 then you might wish to consider it. For me there's a host of showstoppers, such as not running on Linux and not being able to bookmark a reference from my phone when it is mentioned in a seminar.

Anyway, this article isn't a Zotero versus EndNote smackdown; there are plenty of those on the web. This article shows how to configure Zotero's full text indexing on the Raspberry Pi and other Debian machines.

Installing Zotero

There are two parts to install: a plugin for Firefox, and extensions for Word or LibreOffice. (OpenOffice works too, but to be frank again, LibreOffice is the mainstream project of that application these days.)

Zotero keeps its database as part of your Firefox profile. Now if you're about to embark on a multi-year research project, you may one day have trouble with Firefox, someone will suggest clearing your Firefox profile, and Firefox will once again work fine. But then you wonder, "where are my years of carefully-collected references?" And then you cry, before carefully trying to re-sync.

So the first task in serious use of Zotero on Linux is to move that database out of Firefox. After installing Zotero on Firefox press the "Z" button, press the Gear icon, and select "Preferences" from the drop-down menu. On the resulting panel select "Advanced" and "Files and folders". Press the radio button "Data directory location -- custom" and enter a directory name.

I'd suggest using a directory named "/home/vk5tu/.zotero" or "/home/vk5tu/zotero" (amended for your own userid, of course). The standalone client uses a directory named "/home/vk5tu/.zotero" but there are advantages to not keeping years of precious data in some hidden directory.

After making the change quit from Firefox. Now move the directory in the Firefox profile to wherever you told Zotero to look:

$ cd
$ mv .mozilla/firefox/*.default/zotero .zotero

Full text indexing of PDF files

Zotero can create a full-text index of PDF files. You want that. The directions for configuring the tools are simple.

Too simple. Because downloading a statically-linked binary from the internet which is then run over PDFs from a huge range of sources is not the best of ideas.

The page does have instructions for manual configuration, but it lacks a worked example. Let's do that here.

Manual configuration of PDF full indexing utilities on Debian

Install the pdftotext and pdfinfo programs:

    $ sudo apt-get install poppler-utils

Find the kernel and architecture:

$ uname --kernel-name --machine
Linux armv7l

In the Zotero data directory create symbolic links to the installed programs. The printed kernel-name and machine are part of each link's name:

$ cd ~/.zotero
$ ln -s $(which pdftotext) pdftotext-$(uname -s)-$(uname -m)
$ ln -s $(which pdfinfo) pdfinfo-$(uname -s)-$(uname -m)

Install a small helper script to alter pdftotext parameters:

$ cd ~/.zotero
$ wget -O redirect.sh https://raw.githubusercontent.com/zotero/zotero/4.0/resource/redirect.sh
$ chmod a+x redirect.sh

Create some files named *.version containing the version numbers of the utilities. The version number appears in the third field of the first line on stderr:

$ cd ~/.zotero
$ pdftotext -v 2>&1 | head -1 | cut -d ' ' -f3 > pdftotext-$(uname -s)-$(uname -m).version
$ pdfinfo -v 2>&1 | head -1 | cut -d ' ' -f3 > pdfinfo-$(uname -s)-$(uname -m).version
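At this point the data directory should contain the links, the version files, and the helper script (names will differ on other architectures), alongside the Zotero database files moved earlier. On the Raspberry Pi used here that looks something like:

$ ls ~/.zotero
pdfinfo-Linux-armv7l           pdftotext-Linux-armv7l
pdfinfo-Linux-armv7l.version   pdftotext-Linux-armv7l.version
redirect.sh                    ...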

Start Firefox; Zotero's gear icon, "Preferences", "Search" should then report something like:

PDF indexing
  pdftotext version 0.26.5 is installed
  pdfinfo version 0.26.5 is installed

Do not press "check for update". The usual maintenance of the operating system will keep those utilities up to date.

LUV Main August 2015 Meeting: Open Machines Building Open Hardware / VLSCI: Supercomputing for Life Sciences

Aug 4 2015 18:30
Aug 4 2015 20:30
Location: 

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Speakers:

• Jon Oxer, Open Machines Building Open Hardware

• Chris Samuel, VLSCI: Supercomputing for Life Sciences


Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.



DHCP and NUCs

I've put together a little test network at home for doing some Ironic testing on hardware using NUCs.  So far it's going quite well, although one problem that had me stumped for a while was getting the NUC to behave itself when obtaining an IP address with DHCP.



Each time I booted the network, a different IP address was being allocated to the node (i.e. the next one in the DHCP address pool).



There's already a documented problem with isc-dhcp-server for devices where the BMC and host share a NIC (including the same MAC address), but this was even worse: on closer examination, a different Client UID was being presented as part of the node's DHCPDISCOVER each time. (Fortunately the NUC's BMC doesn't do this as well.)



I couldn't really find a solution online, but the answer was there all the time in the man page - there's a cute little option "ignore-client-uids true;" that ensures only the MAC address is used for DHCP lease matching, and not the Client UID.  Turning this on means that on each deploy the NUC now receives the same IP address - and not just for the node, but also for the BMC - so it works around the aforementioned bug as well.  Woohoo!
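For reference, the relevant dhcpd.conf stanza looks something like this minimal sketch (the subnet and range here are made-up illustrations):

  # /etc/dhcp/dhcpd.conf
  ignore-client-uids true;    # match leases on MAC address only, not Client UID

  subnet 192.168.99.0 netmask 255.255.255.0 {
      range 192.168.99.100 192.168.99.150;    # dynamic pool for the NUCs
  }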



There's still one remaining problem: I can't seem to get a fixed IP address returned in the DHCPOFFER; I have to configure a dynamic pool instead (which is fine because this is a test network with limited nodes in it).  One to resolve another day...

Self Driving Cars

I’m a believer in self driving car technology, and predict it will have enormous effects, for example:

  1. Our cars currently spend most of the time doing nothing. They could be out making money for us as taxis while we are at work.
  2. How much infrastructure and frustration (home garage, driveways, car parks, finding a park) do we devote to cars that are standing still? We could park them a few km away in a “car hive” and arrange to have them turn up only when we need them.
  3. I can make interstate trips lying down, sleeping or working.
  4. Electric cars can recharge themselves.
  5. It throws personal car ownership into question. I can just summon a car on my smart phone then send the thing away when I’m finished. No need for parking, central maintenance. If they are electric, and driverless, then very low running costs.
  6. It will decimate the major cause of accidental deaths, saving untold misery. Imagine if your car knew the GPS coordinates of every car within 1000m, even if outside of visual range, like around a corner. No more t-boning, or even car doors opening in the path of my bike.
  7. Speeding and traffic fines go away, which will present a revenue problem for governments like mine that depend on the statistical likelihood of people accidentally speeding.
  8. My red wine consumption can set impressive new records as the car can drive me home and pour me into bed.

I think the time will come when computers do a lot better than we can at driving. The record of these cars in the US is impressive. The record for humans is dismal - car accidents are a leading cause of death.

We already have driverless planes (autopilot, anti-collision radar, autoland), that do a pretty good job with up to 500 lives at a time.

I can see a time (say 20 years) when there will be penalties (like a large insurance excess) if a human is at the wheel during an accident. Meat bags like me really shouldn't be in control of 1000kg of steel hurtling along at 60 km/hr. Incidentally that's about 139 kJ of kinetic energy (0.5 x 1000kg x (16.7m/s)^2). A 9mm bullet exits a pistol with 0.519 kJ of energy. No wonder cars hurt people.

However, many people are concerned about "blue screens of death". I recently had an email exchange on a mailing list; here are some key points for and against:

  1. The cars might be hacked. My response is that computers and micro-controllers have been in cars for 30 years. Hacking of safety critical systems (ABS or EFI or cruise control) is unheard of. However unlike a 1980′s EFI system, self driving cars will have operating systems and connectivity, so this does need to be addressed. The technology will (initially at least) be closed source, increasing the security risk. Here is a recent example of a modern car being hacked.
  2. Planes are not really “driverless”, they have controls and pilots present. My response is that long distance commercial aircraft are under autonomous control for the majority of their flying hours, even if manual controls are present. Given the large number of people on board an aircraft it is of course prudent to have manual control/pilot back up, even if rarely used.
  3. The drivers of planes are sometimes a weak link. As we saw last year and on Sep 11 2001, there are issues when a malicious pilot gains control. Human error is also behind a large number of airplane incidents, and most car accidents. It was noted that software has been behind some airplane accidents too – a fair point.
  4. Compared to aircraft the scale is much different for cars (billions rather than 1000s). The passenger payload is also very different (1.5 people in a car on average?), and the safety record of cars much much worse – it’s crying out for improvement via automation. So I think automation of cars will eventually be a public safety issue (like vaccinations) and controls will disappear.
  5. Insurance companies may refuse a claim if the car is driverless. My response is that insurance companies will look at the actuarial data as that’s how they make money. So far all of the accidents involving Google driverless cars have been caused by meat bags, not silicon.

I have put my money where my mouth is and invested a modest amount in Google shares based on my belief in this technology. This is also an ethical buy for me. I'd rather have some involvement in an exciting future that saves lives and makes the world a better place than invest in banks and mining companies which don't.

July 21, 2015

Joint Strike Fighter F-35 Notes

Below are a bunch of thoughts and a collation of articles about the F-35 JSF, F-22 Raptor, and associated technologies...



- every single defense analyst knows that compromises had to be made in order to achieve a blend of cost effectiveness, stealth, agility, etc... in the F-22 and F-35. What's also clear is that once things get up close and personal things mightn't be as clear cut as we're being told. I was of the impression that the F-22 would basically outdo anything and everything in the sky all of the time. Based on training exercises it's clear that unless the F-22s have been backing off, it may not be as phenomenal as we're being led to believe (one possible reason to deliberately back off is to not provide intelligence on the maximum performance envelope, to provide less of a target for near peer threats with regards to research and engineering). There are actually a lot of low speed manoeuvres that I've seen a late model 3D-vectored Sukhoi perform that a 2D-vectored F-22 has not demonstrated. The F-35 is dead on arrival in many areas (at the moment, definitely from a WVR perspective) as many people have stated. My hope and expectation is that it will have significant upgrades throughout its lifetime

https://medium.com/war-is-boring/don-t-think-the-f-35-can-fight-it-does-in-this-realistic-war-game-fc10706ba9f4

https://medium.com/war-is-boring/one-analyst-predicted-the-f-35s-s-dogfight-failure-50a942d0cf8a

http://asia.rbth.com/why_the_indonesian_air_force_wants_the_su-35_45943.html 

https://defenseissues.wordpress.com/2013/12/21/on-rafale-vs-f-22-bfm/

http://www.flightglobal.com/blogs/the-dewline/2009/11/rafale-beats-f-35-f-22-in-flig/

http://bestfighter4canada.blogspot.com.au/2014/09/fighter-jet-fight-club-f-35-vs-gripen.html

http://asia.rbth.com/science_and_tech/2013/10/09/su-35s_overtakes_american_f-22_in_terms_intellect_30639.html 

http://www.migflug.com/jetflights/p-i-r-a-t-e-versus-raptor.html

https://defenseissues.wordpress.com/2012/10/20/cleaning-up-red-flag-alaska-f-22-vs-typhoon-debate/

http://www.businessinsider.com/f-22-wont-win-a-dogfight-thrust-vectoring-raptor-typhoon-eurofighter-2013-2

F22 vs Rafale dogfight video

https://www.youtube.com/watch?v=ioTTnjxNc7o

Dogfight: Rafale vs F22 (Close combat)

https://www.youtube.com/watch?v=KOswfrc7Xtg

F-22 RAPTOR vs F-15 EAGLE

https://www.youtube.com/watch?v=wr-8dSkfs8Y

- in the past, public information/intelligence regarding some defense programs/equipment has been limited to reduce the chances of setting off an arms race. That way the side which has disseminated the mis-information can be guaranteed an advantage should there be a conflict. Here's the problem though: while some of this may be such, I doubt that all of it is. My expectation is that some of the intelligence leaks (many terabytes; some details of the breach are available publicly) regarding designs of the ATF (F-22) and JSF (F-35) programs are also causing some problems. They need to overcome technical problems as well as problems posed by previous intelligence leaks. Some of what is being said makes no sense as well. Most of what we're being sold on doesn't actually work (yet) (fusion, radar, passive sensors, identification friend-or-foe, etc...)...

http://www.wired.com/2013/03/f-35-blind-spot/

http://blogs.crikey.com.au/planetalking/2009/07/12/the-f-35-jsf-predator-or-prey/

https://medium.com/war-is-boring/no-the-f-35-can-t-fight-at-long-range-either-5508913252dd

https://medium.com/war-is-boring/one-analyst-predicted-the-f-35s-s-dogfight-failure-50a942d0cf8a

https://medium.com/war-is-boring/fd-how-the-u-s-and-its-allies-got-stuck-with-the-worlds-worst-new-warplane-5c95d45f86a5 

http://english.pravda.ru/opinion/columnists/30-04-2009/107481-jsf_swindle-0/

http://www.wired.com/2012/04/f35-videos/

http://aviationweek.com/defense/opinion-joint-strike-fighter-debate-enters-new-phase

http://www.vanityfair.com/news/2013/09/joint-strike-fighter-lockheed-martin

http://www.theaustralian.com.au/national-affairs/policy/jsf-the-only-way-to-fly-into-future/story-fn59nlz9-1226936460799

http://www.dailytech.com/Report+Air+Forces+Spoiled+F35+Superjet+Has+No+Code+to+Shoot+Its+Gun/article37043.htm

http://www.smh.com.au/national/china-stole-plans-for-a-new-fighter-plane-spy-documents-have-revealed-20150118-12sp1o.html

- if production is really as problematic as they say it could be, without possible recourse, then the only thing left is to bluff. Deterrence is based on the notion that your opponent will not attack because you have a qualitative or quantitative advantage... Obviously, if there is actual conflict we have a huge problem. We purportedly want to be able to defend ourselves should anything potentially bad occur. The irony is that our notion of self defense often incorporates force projection in far off, distant lands...

F22 Raptor Exposed - Why the F22 Was Cancelled

https://www.youtube.com/watch?v=KaoYz90giTk

F-35 - a trillion dollar disaster

https://www.youtube.com/watch?v=39AO-axUd-k

4/6 F-35 JOINT STRIKE FIGHTER IS A LEMON

https://www.youtube.com/watch?v=ojPnp2hwqaE

JSF 35 vs F18 superhornet

https://www.youtube.com/watch?v=IUf_hhxngK4

- we keep on giving Lockheed Martin a tough time regarding development and implementation but we keep on forgetting that they have delivered many successful platforms including the U-2, the Lockheed SR-71 Blackbird, the Lockheed F-117 Nighthawk, and the Lockheed Martin F-22 Raptor

https://en.wikipedia.org/wiki/Lockheed_Martin

https://en.wikipedia.org/wiki/Skunk_Works

https://en.wikipedia.org/wiki/The_Boeing_Company

https://en.wikipedia.org/wiki/Boeing_Phantom_Works

f-22 raptor crash landing

https://www.youtube.com/watch?v=faB5bIdksi8

- SIGINT/COMINT often produces a lot of false positives. Imagine being able to overhear every single conversation about you. Would you possibly be concerned about your security? Probably more than usual, despite whatever you might say? As I said previously in posts on this blog, it doesn't make sense that we would have so much money invested in SIGINT/COMINT without a return on investment. I believe that we may be involved in far more 'economic intelligence' than we may be led to believe

http://dtbnguyen.blogspot.com/2015/06/the-value-of-money-part-4.html

- despite what is said about the US (and what they say about themselves), they do tell half-truths/falsehoods. They said that the Patriot missile defense systems were a complete success upon release, with ~80% success rates. Subsequent revisions of past performance have indicated an actual success rate of about half that. It has been said that the US has enjoyed substantive qualitative and quantitative advantages over Soviet/Russian aircraft for a long time. Recently released data seems to indicate that it is closer to parity (not 100% sure about the validity of this data) when pilots are properly trained. There seem to be indications that Russian pilots may have been involved in conflicts where they shouldn't have been, or weren't known to be involved...

https://en.wikipedia.org/wiki/MIM-104_Patriot

https://en.wikipedia.org/wiki/Aegis_Ballistic_Missile_Defense_System

https://en.wikipedia.org/wiki/Anti-ballistic_missile

https://en.wikipedia.org/wiki/Missile_defense_systems_by_country

https://en.wikipedia.org/wiki/Missile_defense

- the irony between the Russians and the US is that they both deny that the other's technology is worth pursuing, and yet time seems to indicate otherwise. A long time ago Russian scientists didn't bother with stealth because they thought it was overly expensive without enough of a gain (especially in light of updated sensor technology), and yet the PAK-FA/T-50 is clearly a test bed for such technology. Previously, the US denied that thrust vectoring was worth pursuing, and yet the F-22 clearly makes use of it

- based on some estimates that I've seen, the F-22 may be capable of close to Mach 3 (~2.5 in some estimates) under limited circumstances

- people keep on saying that maintaining a larger, indigenous defense program is simply too expensive. I say otherwise. Based on what has been leaked regarding the bidding process, many people basically signed on without necessarily knowing everything about the JSF program. If we had had more knowledge we may have proceeded a little differently

- a lot of people who would/should have classified knowledge of the program are basically implying that it will work and will give us a massive advantage given more development time. The problem is that there is so much core functionality that is so problematic that this is difficult to believe...

http://www.thedailybeast.com/articles/2014/04/28/new-u-s-stealth-jet-can-t-hide-from-russian-radar.html

http://www.thedailybeast.com/articles/2014/12/31/new-u-s-stealth-jet-can-t-fire-its-gun-until-2019.html

http://www.vanityfair.com/news/2013/09/joint-strike-fighter-lockheed-martin

https://en.wikipedia.org/wiki/Joint_Strike_Fighter_program

http://www.dailytech.com/Report+Air+Forces+Spoiled+F35+Superjet+Has+No+Code+to+Shoot+Its+Gun/article37043.htm

- the fact that pilots are being briefed not to allow for particular circumstances tells us that there are genuine problems with the JSF

- judging by the opinions in the US military, many people are guarded regarding the future performance of the aircraft. We just don't know until it's deployed and we see how others react from a technological perspective

- proponents of the ATF/JSF programs keep on saying that if you can't see it you can't shoot it. If that's the case, I just don't understand why we don't push up development of 5.5/6th gen fighters (stealth drones basically) and run a hybrid force composed of ATF, JSF, and armed drones (some countries, including France, are already doing this)? Drones are somewhat of a better known quantity and, without life support issues to worry about, should be able to go head to head with any manned fighter even with limited AI and computing power. Look at the following videos and you'll notice that the pilot is right on the physical limit in a 4.5 gen fighter during an exercise with an F-22. A lot of stories are floating around indicating that the F-22 enjoys a big advantage but that under certain circumstances it can be mitigated. Imagine going up against a drone where you don't have to worry about the pilot blacking out, pilot training (incredibly expensive; experience has also told us that pilots need genuine flight time, not just simulation time, to maintain their skills), a possible hybrid propulsion system (for momentary speed changes/bursts (more than that provided by afterburner systems) to avoid being hit by a weapon or being acquired by a targeting system), and which has more space for weapons and sensors? I just don't understand how you would be better off with a mostly manned fleet as opposed to a hybrid fleet unless there are technological/technical issues to worry about (I find this highly unlikely given some of the prototypes and deployments that are already out there)

https://defenseissues.wordpress.com/2013/12/21/on-rafale-vs-f-22-bfm/

F22 vs Rafale dogfight video

https://www.youtube.com/watch?v=ioTTnjxNc7o

Dogfight: Rafale vs F22 (Close combat)

https://www.youtube.com/watch?v=KOswfrc7Xtg

F-22 RAPTOR vs F-15 EAGLE

https://www.youtube.com/watch?v=wr-8dSkfs8Y

https://en.wikipedia.org/wiki/Dogfight

http://www.defenseindustrydaily.com/f-22-raptor-capabilities-and-controversies-019069/

http://www.wired.com/2012/07/f-22-germans/

http://www.defensenews.com/story/defense/air-space/strike/2015/07/15/typhoon-eurofighter-aerodynamic-modifications-agility/30181011/

- if I were a near peer aggressor or looking to defend against 5th gen threats I'd go straight to 5.5/6th gen armed drone fighter development. You wouldn't need to fulfil all the requirements, and with the additional lead time you may be able to achieve not just parity but actual advantages, while possibly being cheaper with regards to TCO (Total Cost of Ownership). There are added benefits in going straight to 5.5/6th gen armed drone development. You don't have to compromise so much on design. The bubble shaped (or not) canopy to aid dogfighting affects aerodynamic efficiency and is actually one of the main causes of increased RCS (Radar Cross Section) on a modern fighter jet. The pilot and additional equipment (ejector seat, user interface equipment, life support systems, etc...) would surely add a large amount of weight which can now be removed. With the loss in weight and increase in aerodynamic design flexibility you could save a huge amount of money. You also have a lot more flexibility in reducing RCS. For instance, some of the biggest reflectors of RADAR signals are the canopy (a film is used to deal with this) and the pilot's helmet, and one of the biggest supposed selling points of stealth aircraft is RAM coatings. They're incredibly expensive though, and wear out (look up the history of the B-2 Spirit and the F-22 Raptor). If you have a smaller aircraft to begin with, though, you have less area to paint, leading to lower costs of ownership while retaining the advantages of low observable technology

https://en.wikipedia.org/wiki/Radar-absorbent_material

https://en.wikipedia.org/wiki/Stealth_technology

https://en.wikipedia.org/wiki/Northrop_Grumman_B-2_Spirit

https://en.wikipedia.org/wiki/Lockheed_Martin_F-22_Raptor

http://www.defensenews.com/story/defense/air-space/strike/2015/01/21/northrop-6th-gen-fighter/22089857/

https://en.wikipedia.org/wiki/Sixth-generation_jet_fighter

https://en.wikipedia.org/wiki/Next_Generation_Air_Dominance

http://asia.rbth.com/science_and_tech/2013/08/30/russian_air_force_views_unmanned_fighters_as_the_future_29375.html

- the fact that it has already been speculated that 6th gen fighters may focus less on stealth and speed and more on weapons capability means that the US is aware of increasingly effective defense systems against 5th gen fighters such as the F-22 Raptor and F-35 JSF which rely heavily on low observability

https://en.wikipedia.org/wiki/Next_Generation_Air_Dominance 

- based on Wikileaks and other OSINT (Open Source Intelligence), everyone involved with the United States seems to acknowledge that they get a raw end of the deal to a certain extent, but they also seem to acknowledge/imply that life is easier with them than without them. Read enough and you'll realise that even when classified as a closer partner, rather than just a purchaser of their equipment, you sometimes don't/won't receive much extra help

http://blogs.crikey.com.au/planetalking/2009/07/12/the-f-35-jsf-predator-or-prey/

http://larvatusprodeo.net/archives/2007/10/us-sold-us-crippled-hornets-in-80s-according-to-beazley/

http://www.darkgovernment.com/news/australia-cracked-u-s-radar-codes/

- if we had the ability I'd be looking to develop our own indigenous defense programs. At least when we make procurements we'd be in a better position to judge whether what was being presented to us was good or bad. We've been burnt on so many different programs with so many different countries... The only issue that I can see is that the US may attempt to block us from this. It has happened in the past with other supposed allies before...

https://en.wikipedia.org/wiki/Stealth_aircraft

http://www.telegraph.co.uk/news/newstopics/howaboutthat/5773358/Nazis-were-close-to-building-stealth-bomber-that-could-have-changed-course-of-history.html

- I just don't get it sometimes. Most of the operations and deployments that the US and allied countries engage in are counter-insurgency and CAS, with significant parts of our operations involving mostly unmanned drones (armed or not). 5th gen fighters help, but they're overkill. Based on some of what I've seen, the only two genuine near peer threats are China and Russia, both of whom have known limitations in their hardware (RAM coatings/films, engine performance/endurance, materials design and manufacturing, etc...). Sometimes it feels as though the US looks for enemies that mightn't even exist. Even a former Australian Prime-Ministerial adviser said that China doesn't want to lead the world: "China will get in the way or get out of the way." The only thing I can possibly think of is that the US has intelligence that may suggest that China intends to project force further outwards (which it has done), or else they're overly paranoid. Russia is a slightly different story though... I'm guessing it would be interesting to read up more about how the US (overall) interprets Russian and Chinese actions behind the scenes (look up training manuals for allied intelligence officers for an idea of our interpretation of what their intelligence services are like)

https://en.wikipedia.org/wiki/Russian_Air_Force

https://en.wikipedia.org/wiki/People%27s_Liberation_Army_Air_Force

http://www.smh.com.au/federal-politics/political-news/china-not-fit-for-global-leadership-says-top-canberra-official-michael-thawley-20150630-gi1o1f.html

http://thediplomat.com/2015/07/china-wants-to-develop-a-new-long-range-strategic-bomber/

http://www.defenceaviation.com/2008/07/pakda-a-russian-stealth-bomber.html

http://theaviationist.com/2013/10/30/usaf-lrs/

http://hamptonroads.com/2015/07/stealthy-f22-jet-serves-escort-ensures-other-warfighting-aircraft-survive

http://www.washingtonpost.com/blogs/post-politics/wp/2015/07/21/obama-defends-iran-deal-decries-over-reliance-on-military-force/

http://warontherocks.com/2015/07/chinas-new-intelligence-war-against-the-united-states/?singlepage=1

http://nationalinterest.org/feature/gun-hire-5-russian-weapons-war-sale-13411

http://www.forbes.com/sites/lorenthompson/2015/07/23/f-35-fighter-engines-how-the-pentagon-will-make-sure-pratt-whitney-performs/

- sometimes people say that the F-111 was a great plane but in reality there was no great use of it in combat. It could be the exact same circumstance with the F-35

http://australianaviation.com.au/2014/07/f-35-rollout-highlights-raafs-greatest-opportunity-for-evolutionary-change/

https://en.wikipedia.org/wiki/General_Dynamics_F-111_Aardvark

- there could be a chance the aircraft could become like the B-2 and the F-22: seldom used, because the true cost of running it is horribly high. Also imagine the ramifications/blowback of losing such an expensive piece of machinery should there be a chance that it can be avoided

- defending against 5th gen fighters isn't easy but it isn't impossible. Sensor upgrades, sensor blinding/jamming technology, integrated networks, artificial manipulation of weather (increased condensation levels increase RCS), faster and more effective weapons, layered defense (with strategic use of disposable (and non-disposable) decoys so that you can hunt down departing, effectively unarmed fighters), experimentation with cloud seeding with substances that may help to speed up RAM coating removal or else reduce the effectiveness of stealth technology (the less you have to deal with, the easier your battles will be), forcing the battle into unfavourable conditions, etc... Interestingly, there have been some accounts/leaks of being able to detect US stealth bombers (B-1) lifting off from some US air bases from Australia using long range RADAR. Obviously, it's one thing to be able to detect and track versus achieving a weapons quality lock on a possible target

http://news.usni.org/2014/05/14/can-chinas-new-destroyer-find-u-s-stealth-fighters

RUSSIAN RADAR CAN NOW SEE F-22 AND F-35 Says top US Aircraft designer

https://www.youtube.com/watch?v=Z_vXqtCkVy8

https://en.wikipedia.org/wiki/Jindalee_Operational_Radar_Network

https://en.wikipedia.org/wiki/Over-the-horizon_radar

- following are rough estimates on the RCS of various modern defense aircraft. It's clear that while Chinese and Russian technology isn't entirely on par, they make the contest uncomfortably close. Estimates on the PAK-FA/T-50 indicate an RCS somewhere between the F-35 and F-22. Ultimately this comes back down to a sensor game. Rough estimates seem to indicate a slight edge to the F-22 in most areas. Part of me thinks that the RCS of the PAK-FA/T-50 must be propaganda; the other part leads me to believe that there is no way countries would consider purchase of the aircraft if it didn't offer a competitive RCS

http://www.globalsecurity.org/military/world/stealth-aircraft-rcs.htm

http://www.ausairpower.net/APA-NOTAM-300309-1.html

http://www.f-16.net/forum/viewtopic.php?t=4408

https://www.youtube.com/watch?v=Z_vXqtCkVy8

http://www.ausairpower.net/APA-2011-03.html

http://www.flightglobal.com/blogs/the-dewline/2009/02/growler-power-ea-18g-boasts-f-/

http://www.theage.com.au/technology/technology-news/revolutionary-f35-joint-strike-fighter-pilots-smart-helmet-will-cost-a-bomb-20150224-13ko9d.html

http://www.news.com.au/technology/the-1-trillion-f35-tries-to-be-all-things-but-succeeds-at-few-say-critics-but-is-australias-new-weapon-now-too-big-to-fail/story-e6frfrnr-1226950254330

http://in.rbth.com/blogs/2013/04/08/why_australia_should_scratch_the_f-35_and_fly_sukhois_23629.html

http://ozzyblizzard.blogspot.com.au/2008/12/air-power-australia-flanker-analysis.html

https://www.facebook.com/notes/f-22-raptor/t-50-advantages-over-f22-and-why-f22-is-the-only-fighter-which-can-match-its-cou/10151667295298040?_fb_noscript=1

- it's somewhat bemusing that you can't take pictures/videos from certain angles of the JSF in some of the videos mentioned here, and yet there are heaps of pictures online of LOAN systems, including high resolution images of the back end of the F-35 and F-22

http://www.f-16.net/f-16_versions_article20.html

http://air-attack.com/images/single/740/The-F-35B-Lightning-II-rotates-its-engine-nozzle.html

http://defence.pk/threads/low-observable-nozzles-exhausts-on-stealth-aircraft.328253/

F 22 Raptor F 35 real shoot super clear

https://www.youtube.com/watch?v=FmLa-5R6TrI

- people keep on saying that if you can't see stealth aircraft you can't lock on to them, and they'll basically be gone by the time you do. The converse is true. Without some form of targeting system the fighter in question can't lock on to his target either. Once you understand how AESA RADAR works, you also understand that given sufficient computing power, good implementation skills, etc... it's subject to the same issue that faces the other side: you can't shoot what you can't see, and by targeting you give away your position. My guess is that detection of tracking by RADAR is somewhat similar to a lot of de-cluttering/de-noising algorithms (while making use of wireless communication/encryption and information theories as well) but much more complex... which is why there has been such heavy investment and interest in more passive systems (infra-red, light, sound, etc...)

https://en.wikipedia.org/wiki/Active_electronically_scanned_array

https://en.wikipedia.org/wiki/Dassault_Rafale

https://en.wikipedia.org/wiki/Euroradar_CAPTOR

https://en.wikipedia.org/wiki/Infra-red_search_and_track

https://en.wikipedia.org/wiki/Optronique_secteur_frontal

https://en.wikipedia.org/wiki/Targeting_pod

https://en.wikipedia.org/wiki/Forward_looking_infrared

http://aviationweek.com/defense/opinion-joint-strike-fighter-debate-enters-new-phase

F-35 JSF Distributed Aperture System (EO DAS)

https://www.youtube.com/watch?v=9fm5vfGW5RY



Lockheed Martin F-35 Lightning II- The Joint Strike Fighter- Full Documentary.

https://www.youtube.com/watch?v=AA2nvhHG6y4

4195: The Final F-22 Raptor

https://www.youtube.com/watch?v=fYEx9BiJNfE

https://en.wikipedia.org/wiki/Joint_Strike_Fighter_program

http://theaviationist.com/2015/07/13/f-35-pilot-about-flight-helmet/

http://www.wesh.com/politics/the-f35-is-it-worth-the-cost/34194708

http://forums.bharat-rakshak.com/viewtopic.php?f=3&t=5400&start=320

http://www.vanityfair.com/news/2013/09/joint-strike-fighter-lockheed-martin

http://www.f-16.net/forum/viewtopic.php?p=94991

http://forum.keypublishing.com/archive/index.php/t-81329-p-2.html

http://aviationweek.com/blog/read-f-35-accident-report

Rafale beats F 35 & F 22 in Flight International

https://www.youtube.com/watch?v=Bq4-TxgE8iU

Eurofighter Typhoon fighter jet Full Documentary

https://www.youtube.com/watch?v=WkBHpSBNnM4

Eurofighter Typhoon vs Dassault Rafale

https://www.youtube.com/watch?v=2wWkHKYcvos

DOCUMENTARY - SUKHOI Fighter Jet Aircrafts Family History - From Su-27 to PAK FA 50

https://www.youtube.com/watch?v=CYAw-FhTxHw

Green Lantern : F35 v/s UCAVs

https://www.youtube.com/watch?v=XtXPQNW2HqE

Building the LLVM Fuzzer on Debian.

I've been using the awesome American Fuzzy Lop fuzzer since late last year, but had also heard good things about the LLVM Fuzzer. Getting the code for the LLVM Fuzzer is trivial, but when I tried to use it, I ran into all sorts of roadblocks.

Firstly, the LLVM Fuzzer needs to be compiled with and used with Clang (GNU GCC won't work), and it needs to be Clang >= 3.7. Now Debian does ship a clang-3.7 in the Testing and Unstable releases, but that package has a bug (#779785) which means the Debian package is missing the static libraries required by the Address Sanitizer options. Use of the Address Sanitizer (and the other sanitizers) increases the effectiveness of fuzzing tremendously.

This bug meant I had to build Clang from source, which, unfortunately, is rather poorly documented (I intend to submit a patch to improve this) and I only managed it with help from the #llvm IRC channel.

Building Clang from the git mirror can be done as follows:

  mkdir LLVM
  cd LLVM/
  git clone http://llvm.org/git/llvm.git
  (cd llvm/tools/ && git clone http://llvm.org/git/clang.git)
  (cd llvm/projects/ && git clone http://llvm.org/git/compiler-rt.git)
  (cd llvm/projects/ && git clone http://llvm.org/git/libcxx.git)
  (cd llvm/projects/ && git clone http://llvm.org/git/libcxxabi.git)

  mkdir -p llvm-build
  (cd llvm-build/ && cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX=$HOME/Clang/3.8 ../llvm)
  (cd llvm-build/ && make install)

If all the above works, you will now have working clang and clang++ compilers installed in $HOME/Clang/3.8/bin and you can then follow the examples in the LLVM Fuzzer documentation.
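From there, building and running a fuzz target goes roughly like this sketch of the workflow in the LLVM Fuzzer docs of the time (target.cc and corpus_dir are illustrative names, not anything canonical):

  export PATH=$HOME/Clang/3.8/bin:$PATH

  # Build the Fuzzer library from the LLVM tree checked out above
  clang++ -c -g -O2 -std=c++11 llvm/lib/Fuzzer/*.cpp -Illvm/lib/Fuzzer

  # Build your fuzz target with ASan and coverage instrumentation
  clang++ -g -fsanitize=address -fsanitize-coverage=edge target.cc Fuzzer*.o

  # Run against a directory of sample inputs; the fuzzer mutates them
  # and adds interesting new inputs to the directory as it goes
  ./a.out corpus_dir/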

FreeDV Robustness Part 6 – Early Low SNR Results

Anyone who writes software should be sentenced to use it. So for the last few days I’ve been radiating FreeDV 700 signals from my home in Adelaide to this websdr in Melbourne, about 800km away. This has been very useful, as I can sample signals without having to bother other Hams. Thanks John!

I’ve also found a few bugs and improved the FreeDV diagnostics to get a feel for how the system is working over real world channels.

I am using a simple end fed dipole a few meters off the ground and my IC7200 at maximum power (100W I presume, I don’t have a power meter). A key goal is comparable performance to SSB at low SNRs on HF channels – that is where FreeDV has struggled so far. This has been a tough nut to crack. SSB is really, really good on HF.

Here is a sample taken this afternoon, in a marginal channel. It consists of analog/DV/analog/DV speech. You might need to listen to it a few times, it’s hard to understand first time around. I can only get a few words in analog or DV. It’s right at the lower limit of intelligibility, which is common in HF radio.

Take a look at the spectrogram of the off air signal. You can see the parallel digital carriers; the diagonal stripes are the frequency selective fading. In the analog segments every now and again some low frequency energy pops up above the noise (speech is dominated by low frequency energy).

This sample had a significant amount of frequency selective fading, which occasionally drops the whole signal down into the noise. The DV mutes in the middle of the 2nd digital section as the signal drops out completely.

There was no speech compressor on SSB. I am using the “analog” feature of FreeDV, which allows me to use the same microphone and quickly swap between SSB and DV to ensure the HF channel is roughly the same. I used my laptop's built-in microphone, and haven't tweaked the SSB or DV audio with filtering or level adjustment.

I did confirm the PEP power is about the same in both modes using my oscilloscope with a simple “loop” antenna formed by clipping the probe ground wire to the tip. It picked up a few volts of RF easily from the nearby antenna. The DV output audio level is a bit quiet for some reason, have to look into that.

I’m quite happy with these results. In a low SNR, barely usable SSB channel, the new coherent PSK modem is hanging on really well and we could get a message through on DV (e.g. phonetics, a signal report). When the modem locks it’s noise free, a big plus over SSB. All with open source software. Wow!

My experience is consistent with this FreeDV 700 report from Kurt KE7KUS over a 40m NVIS path.

Next step is to work on the DV speech quality to make it easy to use conversationally. I’d say the DV speech quality is currently readability 3 or 4/5. I’ll try a better microphone, filtering of the input speech, and see what can be done with the 700 bit/s Codec.

One option is a new mode where we use the 1300 bit/s codec (as used in FreeDV 1600) with the new, cohpsk modem. The 1300 bit/s codec sounds much better but would require about 3dB more SNR (half an s-point) with this modem. The problem is bandwidth. One reason the new modem works so well is that I use all of the SSB bandwidth. I actually send the 7 x 75 symbol/s carriers twice, to get 14 carriers total. These are then re-combined in the demodulator. This “diversity” approach makes a big difference in the performance on frequency selective fading channels. We don’t have room for that sort of diversity with a codec running much faster.
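
To make the diversity idea concrete, here is a sketch of equal-gain combining for BPSK in C++ (illustrative only — not the actual cohpsk demodulator code, which also has to estimate and correct per-carrier phase and amplitude before the copies can be summed coherently):

  #include <complex>
  #include <vector>

  // Sketch of diversity combining: each symbol arrives on two carriers,
  // and summing the two received soft symbols before the hard decision
  // means a deep fade on one carrier can be covered by the other.
  std::vector<int> diversity_combine(const std::vector<std::complex<float>>& rx_a,
                                     const std::vector<std::complex<float>>& rx_b) {
    std::vector<int> bits;
    bits.reserve(rx_a.size());
    for (size_t i = 0; i < rx_a.size() && i < rx_b.size(); ++i) {
      std::complex<float> combined = rx_a[i] + rx_b[i];  // equal-gain combine
      bits.push_back(combined.real() >= 0.0f ? 0 : 1);   // BPSK hard decision
    }
    return bits;
  }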

So time to put the thinking hat back on. I’d also like to try some nastier fading channels, like 20m around the world, or 40m NVIS. However I’m very pleased with this result. I feel the modem is “there”, however a little more work required on the Codec. We’re making progress!

July 20, 2015

Why DANE isn't going to win

In a comment to my previous post, Daniele asked the entirely reasonable question,

Would you like to comment on why you think that DNSSEC+DANE are not a possible and much better alternative?

Where DANE fails to be a feasible alternative to the current system is that it is not “widely acknowledged to be superior in every possible way”. A weak demonstration of this is that no browser has implemented DANE support, and very few other TLS-using applications have, either. The only thing I use which has DANE support that I’m aware of is Postfix – and SMTP is an application in which the limitations of DANE have far less impact.

The limitations of DANE for large-scale deployment, as I understand them, are enumerated below.

DNS Is Awful

Quoting Google security engineer Adam Langley:

But many (~4% in past tests) of users can’t resolve a TXT record when they can resolve an A record for the same name. In practice, consumer DNS is hijacked by many devices that do a poor job of implementing DNS.

Consider that TXT records are far, far older than TLSA records, so it seems likely that TLSA records would fail to be retrieved more than 4% of the time. Extrapolate to what the failure rate for TLSA record lookups would be, and imagine what that would do to the reliability of DANE verification. It would either be completely unworkable, or else would cause a whole new round of “just click through the security error” training. Ugh.

This also impacts DNSSEC itself. Lots of recursive resolvers don’t validate DNSSEC, and some providers mangle DNS responses in some way, which breaks DNSSEC. Since OSes don’t support DNSSEC validation “by default” (for example, by having the name resolution APIs indicate DNSSEC validation status), browsers would essentially have to ship their own validating resolver code.

Some people have concerns around the “single point of control” for DNS records, too. While the “weakest link” nature of the CA model is terribad, there is a significant body of opinion that replacing it with a single, minimally-accountable organisation like ICANN isn’t a great trade.

Finally, performance is also a concern. Having to go out-of-band to retrieve TLSA records delays page generation, and nobody likes slow page loads.

DNSSEC Is Awful

Lots of people don’t like DNSSEC, for all sorts of reasons. While I don’t think it is quite as bad as people make out (I’ve deployed it for most zones I manage), there are some legitimate issues that mean browser vendors aren’t willing to rely on DNSSEC.

1024 bit RSA keys are quite common throughout the DNSSEC system. Getting rid of 1024 bit keys in the PKI has been a long-running effort; doing the same for DNSSEC is likely to take quite a while. Yes, rapid rotation is possible, by splitting key-signing and zone-signing (a good design choice), but since it can’t be enforced, it’s entirely likely that long-lived 1024 bit keys for signing DNSSEC zones are the rule, rather than the exception.

DNS Providers are Awful

While we all poke fun at CAs who get compromised, consider how often someone’s DNS control panel gets compromised. Now ponder the fact that, if DANE is supported, TLSA records can be manipulated in that DNS control panel. Those records would then automatically be DNSSEC signed by the DNS provider and served up to anyone who comes along. Ouch.

In theory, of course, you should choose a suitably secure DNS provider, to prevent this problem. Given that there are regular hijackings of high-profile domains (which, presumably, the owners of those domains would also want to prevent), there is something in the DNS service provider market which prevents optimal consumer behaviour. Market for lemons, perchance?

Conclusion

None of these problems are unsolvable, although none are trivial. I like DANE as a concept, and I’d really, really like to see it succeed. However, the problems I’ve listed above are all reasonable objections, made by people who have their hands in browser codebases, and so unless they’re fixed, I don’t see that anyone’s going to be able to rely on DANE on the Internet for a long, long time to come.

July 19, 2015

Twitter posts: 2015-07-13 to 2015-07-19

Craige McWhirter: How To Configure Debian to Use The Tiny Programmer ISP Board

So, you've gone and bought yourself a Tiny Programmer ISP, you've plugged it into your Debian system, and excitedly run avrdude, only to be greeted with this:

% avrdude -c usbtiny -p m8


avrdude: error: usbtiny_transmit: error sending control message: Operation not permitted
avrdude: initialization failed, rc=-1
         Double check connections and try again, or use -F to override
         this check.


avrdude: error: usbtiny_transmit: error sending control message: Operation not permitted

avrdude done.  Thank you.

I resolved this permissions error by adding the following line to /etc/udev/rules.d/10-usbtinyisp.rules:

SUBSYSTEM=="usb", ATTR{idVendor}=="1781", ATTR{idProduct}=="0c9f", GROUP="plugdev", MODE="0666"

Then restarting udev:

% sudo systemctl restart udev

Plugged the Tiny Programmer ISP back into the laptop and ran avrdude again:

% sudo avrdude -c usbtiny -p m8

avrdude: AVR device initialized and ready to accept instructions

Reading | ################################################## | 100% 0.00s

avrdude: Device signature = 0x1e9587
avrdude: Expected signature for ATmega8 is 1E 93 07
         Double check chip, or use -F to override this check.

avrdude done.  Thank you.

You should now have avrdude love.

Enjoy :-)

July 18, 2015

Casuarina Sands to Kambah Pool

I did a walk with the Canberra Bushwalking Club from Casuarina Sands (in the Cotter) to Kambah Pool (just near my house) yesterday. It was very enjoyable. I'm not going to pretend to be excellent at write ups for walks, but will note that the walk leader John Evans has a very detailed blog post about the walk up already. We found a bunch of geocaches along the way, with John doing most of the work and ChifleyGrrrl and I providing encouragement and scrambling skills. A very enjoyable day.



                                       






Interactive map for this route.



Tags for this post: blog pictures 20150718-casurina_sands_to_kambah_pool photo canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches




July 17, 2015

OSX Bundling Soprano and other joys

Libferris has been moving to use more Qt/KDE technologies over the years. Ferris is also a fairly substantial software project in its own right, with many plugins and support for multiple libraries. Years back I moved from using raw redland to using soprano for RDF handling in libferris.



Over recent months, from time to time, I've been working on an OSX bundle for libferris. The idea is to make installation as simple as copying Ferris.app to /Applications. I've done some OSX packaging before, so I've been exposed to the whole library-paths-inside-dylibs stuff, and also the freedesktop specs expecting things in /etc or wherever, when you really want it to look into /Applications/YourApp/Contents/Resources/.../etc/whatever.



The silver test for packaging is to rename the area that is used to build the source to something unexpected and see if you can still run the tools. The Gold test is obviously to install from the app.dmg onto a fresh machine and see that it runs.



I discovered a few gotchas during silver testing and soprano usage. If you get things half right then you can get to a state that allows the application to run but that does not allow a redland RDF model to ever be created. If your application assumes that it can always create an in memory RDF store, a fairly secure bet really, then bad things will befall the app bundle on osx.



Plugins are found by searching for the desktop files first and then loading the shared library plugin as needed. The desktop files can be found with the first line below, while the second line allows the plugin shared libraries to be found and loaded.



export SOPRANO_DIRS=/Applications/Ferris.app/Contents/Resources/usr/share

export LD_LIBRARY_PATH=/Applications/Ferris.app/Contents/Resources/usr/local/lib/soprano/



You have to jump through a few more hoops. You'll find that the plugin ./lib/soprano/libsoprano_redlandbackend.so links to lib/librdf.0.dylib and librdf will link to other redland libraries which themselves link to things like libxml2 which you might not have bundled yet.



There are also many cases of things linking to QtCore and other Qt libraries. These links are normally to nested paths like Library/Frameworks/QtCore.framework/Versions/4/QtCore which will not pass the silver test. Actually, links inside dylibs like that tend to cause the show to segv and you are left to work out where and why that happened. My roll by hand solution is to create softlinks to these libraries like QtCore in the .../lib directory and then resolve the dylib links to these softlinks.



In the end I'd also like to make an app bundle for specific KDE apps. Just being able to install okular by drag and drop would be very handy. It is my preferred reader for PDF files and having a binary that doesn't depend on a build environment (homebrew or macports) makes it simpler to ensure I can always have okular even when using an osx machine.





Selling Software Online, Installer, Packaging, and Packing Software, Desktop Automation, and More

Selling software online is deceptively simple. Actually making money out of it can be much more difficult.

http://www.cio.com/article/2388308/enterprise-software/14-tips-for-selling-software-and-services-online.html

http://www.quora.com/What-is-best-way-to-sell-software-online

http://www.softwarecasa.com/sell-software-i-4.html?ModPagespeed=noscript

http://www.forbes.com/sites/kathycaprino/2013/05/21/why-your-online-program-just-wont-sell/

http://www.fastspring.com/selling-software-online



Heaps of packaging/installer programs out there, and some cross-platform solutions as well. Interestingly, just like a lot of other businesses (even a restaurant that I frequent will offer you a free drink if you 'Like' them via Facebook), they now make use of guerilla-style marketing techniques. Write a blog article for them and they may provide you with a free license.

https://en.wikipedia.org/wiki/List_of_installation_software
http://www.techrepublic.com/blog/five-apps/five-apps-for-creating-installation-packages/

http://www.advancedinstaller.com/free-license.html

http://www.jrsoftware.org/isinfo.php

https://en.wikipedia.org/wiki/List_of_software_package_management_systems

http://www.flexerasoftware.com/producer/products/software-installation/installshield-software-installer/

http://www.flexerasoftware.com/producer/resources/free-trials/#installshield



I've always wondered how much money software manufacturers make from bloatware and other advertising... It can vary drastically. Something to watch for is silent/delayed installs though, namely installation of software that doesn't show up in the Windows 'Control Panel'.

http://www.lifehacker.com.au/2015/05/crapware-is-a-horrible-problem-and-its-all-our-fault/

http://www.howtogeek.com/168691/how-to-avoid-installing-junk-programs-when-downloading-free-software/?PageSpeed=noscript

http://www.lifehacker.com.au/2013/11/unchecky-ensures-you-never-accidentally-install-bundleware-again/

http://unchecky.com/

http://www.makeuseof.com/tag/fight-toolbar-installer-bloatware-opinion/

https://www.google.com/admob/monetize.html

http://www.mobyaffiliates.com/blog/how-to-make-more-money-from-your-app-monetization-tips-from-appflood/

http://www.codefuel.com/developers

http://www.incomediary.com/7-best-plugins-for-monetization

http://www.amonetize.com/

http://installmonetizer.com/

http://www.sterkly.com/installer-monetization/
https://unityads.unity3d.com/help/Frequently%20Asked%20Questions/faq

http://www.revenyou.com/

http://www.buzinga.com.au/buzz/how-to-make-money-from-apps/



Even though product activation/DRM can be simple to implement (depending on the solution), cost can vary drastically depending on the company and solution that is involved.

https://en.wikipedia.org/wiki/Product_activation

http://stackoverflow.com/questions/3481594/how-to-program-a-super-simple-software-activation-system

https://activatar.codeplex.com/

https://www.fingoo.net/lib/asp/packages.asp

http://stackoverflow.com/questions/822468/is-there-an-open-source-drm-solution

http://www.fatbit.com/fab/launch-best-gaana-clone-script-features-website-details-confirm/

http://www.fileopen.com/

https://en.wikipedia.org/wiki/Digital_rights_management

https://en.wikipedia.org/wiki/Copy_protection



Sometimes you just want to know what packers and obfuscation a company may have used to protect/compress their program. It's been a while since I looked at this and it looks like things are just as they were last time: a highly specialised class of tools with few genuinely good, quality candidates...

https://en.wikibooks.org/wiki/Reverse_Engineering/File_Formats

http://stackoverflow.com/questions/1271550/how-to-detect-what-was-the-pe-packer-used-on-the-given-exe

http://www.woodmann.com/collaborative/tools/index.php/Category:Packer_Identifiers

http://reverseengineering.stackexchange.com/questions/3184/packers-protectors-for-linux

http://ntinfo.biz/

https://www.digitalocean.com/community/tutorials/how-to-install-and-get-started-with-packer-on-an-ubuntu-12-04-vps

https://en.wikipedia.org/wiki/Executable_compression

http://upx.sourceforge.net/

https://malwr.com



A nice way of earning some extra/bonus (and legal) income if you have a history of being able to spot software bugs.

https://bugcrowd.com/list-of-bug-bounty-programs

http://www.businessinsider.com.au/twitter-hackerone-bounty-program-2014-9

http://www.siteslike.com/similar/vupen.com

https://en.wikipedia.org/wiki/Pwn2Own



If you've never used screen/desktop automation software before, there are actually quite a few options out there. Think of it as 'Macros' for the Windows desktop. The good thing is that a lot of them use a scripting language for the backend and have other unexpected functionality as well, opening up further opportunities for productivity and automation gains.

http://alternativeto.net/software/sikuli/

http://stackoverflow.com/questions/11497613/what-better-tool-than-sikuli-to-use-for-screen-automation-on-windows-7-or-prefe

https://answers.launchpad.net/sikuli/+question/141373

http://stackoverflow.com/questions/6337629/how-to-send-ctrl-c-in-sikuli

https://answers.launchpad.net/sikuli/+question/185777

https://answers.launchpad.net/sikuli/+question/232900



A lot of partition management software claims to be able to handle basically all circumstances. The strange thing is that disk cloning to an external drive doesn't seem to be handled as well. The easiest/simplest way seems to be just using a caddy/internal drive in combination with whatever software you may be using.

http://forum.easeus.com/viewtopic.php?t=20183

http://kb.easeus.com/art.php?id=10039

http://www.partition-tool.com/easeus-partition-manager/disk-copy.htm



There are some free Australian accounting solutions out there. A bit lacking feature wise though.

http://www.flyingsolo.com.au/forums/index.php?threads/free-accounting-software-australia-recommendations.29338/

http://www.bit.com.au/Review/344651,7-accounting-packages-for-australian-small-businesses-compared-including-myob-quickbooks-online-reckon-xero.aspx

http://bas-i.com.au/

http://l-lists.com/en/lists/rn52ao.html



Every once in a while someone sends you an email in 'eml' format which can't be decoded by your local mail client. Try using 'ripmime'...

http://superuser.com/questions/187106/extract-save-a-mail-attachment-using-bash

Terry && EL

After getting headlights Terry now has a lighted arm. This is using the 3 meter EL wire and a 2xAA battery inverter to drive it. The around $20 entry point to bling is fairly hard to resist. The EL tape looks better IMHO but seems to be a little harder to work with from what I've read about cutting the tape and resoldering / reconnecting.



I have a 1 meter red EL tape which I think I'll try to wrap around the pan/tilt assembly. From an initial test it can make it around the actobotics channel length I'm using around twice. I'll probably print some mounts for it so that the tape doesn't have to try to make right angle turns at the ends of the channel.



July 16, 2015

Terry - Lights, EL and solid Panner

Terry the robot now has headlights! While the Kinect should be happy in low light I found some nice 3 watt LEDs on sale and so headlights had to happen. The lights want a constant current source of 700mA so I grabbed an all-in-one chip solution to do that and mounted the lights in series. Yes, there are a load of tutorials on building a constant current driver for a few bucks around the net, but sometimes I don't really want to dive in and build every part. I think it will be interesting at some stage to test some of the constant current setups and see the ripple and various metrics of the different designs. That part of the analysis is harder to find around the place.





And just how does this all look when the juice is flowing, I hear you ask. I have tilted the lights ever so slightly downwards to save the eyes from the full blast. Needless to say, you will be able to see Terry coming now, and it will surely see you in full colour 1080 glory as you come into its sights. I thought about mounting the lights on the pan and tilt head unit, but I really don't want these to ever get to angles that are looking right into a person's eyes as they are rather bright.





On another note, I now have some EL wire and EL tape for Terry itself. So the robot will be glowing in a subtle way itself. The EL tape is much cooler looking than the wire IMHO but the tape is harder to cut (read: I probably won't be doing that). I think the 1m of tape will end up wrapped around the platform on the pan and tilt board.



Behind the LED is quite a heatsink, so they shouldn't pop for quite some time. In the top right you can just see the heatshrink direct connected wires on the LED driver chip and the white wire mounts above it. I have also trimmed down the quad encoder wires and generally cleaned up that area of the robot.





A little while ago I moved the pan mechanism off axle. The new axle is hollow and set up to accommodate a slip ring at the base. I now have said slip ring and am printing a crossover plate for that to mount to channel. Probably by the next post Terry will be able to continuously rotate the panner without tangling anything up. The torque multiplier of the brass to alloy wheels together with the 6 rpm gearmotor having very high torque means that the panner will tend to stay where it is. Without powering the motor the panner is nearly impossible to move; the grub screws will fail before the motor gives way.





Although the EL tape is tempting, the wise move is to fit the slip ring first.

July 15, 2015

Wanderings

I am on vacation this week, so I took this afternoon to do some walking and geocaching...



That included a return visit to Narrabundah trig to clean up some geocaches I missed last visit:



               



Interactive map for this route.



And exploring the Lindsay Pryor arboretum because I am trying to collect the complete set of arboretums in Canberra:



                   



Interactive map for this route.



And then finally the Majura trig, which was a new one for me:



   






Interactive map for this route.



I enjoyed the afternoon. I found a fair few geocaches, and walked for about five hours (not including driving between the locations). I would have spent more time geocaching at Majura, except I made it back to the car after sunset as it was.



Tags for this post: blog pictures 20150715-wanderings photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger




July 14, 2015

8 Mega Watts in your bare hands

I recently went on a nice road trip to Gippstech, an interstate Ham radio conference, with Andrew, VK5XFG. On the way, we were chatting about Electric Cars, and how much of infernal combustion technology is really just a nasty hack. Andrew made the point that if petrol cars had been developed now, we would have all sorts of Hazmat rules around using them.

Take refueling. Gasoline contains 42MJ of energy in every litre. On one of our stops we took 3 minutes to refuel 36 litres. That’s 42*36/180 or 8.4MJ/s. Now one watt is 1J/s, so that’s a “power” (the rate energy is moving) of 8.4MW. Would anyone be allowed to hold an electrical cable carrying 8.4MW? That’s like 8000V at 1000A. Based on an average household electricity consumption of 2kW, that’s like hanging onto the HT line supplying 4200 homes.
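
If you want to check the arithmetic yourself, here it is as a trivial snippet, using the numbers quoted above:

  #include <cstdio>

  // The refueling arithmetic from the text, as a sanity check.
  int main() {
    const double mj_per_litre = 42.0;   // energy density of petrol, MJ/L
    const double litres = 36.0;         // fuel transferred
    const double seconds = 180.0;       // a 3 minute refuel
    const double megawatts = mj_per_litre * litres / seconds;  // MJ/s == MW
    const double household_kw = 2.0;    // average household consumption
    std::printf("Refueling power: %.1f MW\n", megawatts);
    std::printf("Equivalent households: %.0f\n", megawatts * 1000.0 / household_kw);
    return 0;
  }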

But it’s OK, as long as you don’t smoke or hold a mobile phone!

The irony is that while I was sitting on 60 litres of high explosive, spraying fumes along the Princes Highway and bitching about petrol cars, I was enjoying the use of one. Oh well, bring on the Tesla charge stations and low cost EVs. Infrastructure, the forces of mass production and renewable power will defeat the evils of fossil fuels.

Reading Further

Energy Equivalents of a Krispy Kreme Factory.

Fuel Consumption of a Pedestrian Crossing

LUV Beginners July Meeting: Ask the experts

Jul 18 2015 12:30
Jul 18 2015 16:30
Location: 

RMIT Building 91, 110 Victoria Street, Carlton South

This month we'll be asking attendees for their pressing Linux and Open Source issues, and our resident Linux experts will then attempt to explain the topics to your satisfaction! If you've got something you always wanted to know more about, or something you'd like to know how to do, come along and ask.

There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue.

LUV would like to acknowledge Red Hat for their help in obtaining the Trinity College venue and VPAC for hosting.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Minutes of Council Meeting 17 June 2015

Wed, 2015-06-17 19:49 - 20:41

1. Meeting overview and key information

Present

Josh Hesketh, Sae Ra Germaine, Christopher Neugebauer, Craige McWhirter, Josh Stewart, James Iseppi

Apologies:

Tony Breeds

Meeting opened by Josh H at 1949hrs and quorum was achieved

Key Stats

Stats not recorded this fortnight

MOTION that the previous minutes of 03 June are correct

Moved: Josh H

Seconded: Chris

Passed Unanimously

2. Log of correspondence

Motions moved on list

Nil

General correspondence

GovHack 2015 as a subcommittee

MOTION by Josh H We accept govhack as an LA Sub-committee with the task of running GovHack at a national level with:

Geoff Mason - lead

Alysha Thomas

Pia Waugh - as the liaison to LA

Sharen Scott

Diana Ferry

Alex Sadleir

Richard Tubb

Jan Bryson

Keith Moss

Under the Sub-committee policy v1 to allow the committee to run with autonomy and to use an external entity for administration.

Seconded Chris

Passed Unanimously

The old Subcommittee policy will need to come into effect

Invoice from LESLIE POOLE - Reminder notice from Donna and Leslie have arrived.

Supporting the Drupal Accelerate fund

UPDATE: In Progress. Tony to process in Xero

Admin Team draft budget from STEVEN WALSH

UPDATE: To be discussed when Tony is available and Council Budget has been revised.

Also includes the requirement of a wildcard cert for *.linux.org.au

MOTION by Josh H accepts the expenditure of $150 per year on a wildcard SSL certificate on linux.org.au

Seconded: James Iseppi

Passed unanimously.

UPDATE: Awaiting for a more firm budget

3. Review of action items from previous meetings

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.

Discussion held about means of finding people willing to assist with both the maintenance of the website platform as well as the content available on this.

JOSH H to speak to Donna regarding this

UPDATE: Ongoing

UPDATE: to be moved to a general action item. To do a call for help to work on the website. Could this be treated as a project.

We need to at least get the website to D8 and automate the updating process.

ACTION: Josh to get a backup of the site to Craige

ACTION: Craige to stage the website to see how easy it is to update.

UPDATE: Craige to log in to the website to elevate permissions.

ACTION with Josh Hesketh to ensure 3 year server support package in progress

Actions are in progress with Admin Team

UPDATE: A budget will be put forward by the admin team. An initial online hackfest has been conducted. Pending item.

UPDATE: Ongoing.

Update: To be removed from the agenda.

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.

Update: In progress

Update: To be done on friday.

ACTION: Josh H to check with PyconAU to check their budgetary status.

UPDATE: Budget looks fine and trust the treasurer’s accounting abilities.

ACTION: JOSH to seek actuals in budget from PyconAU committee

UPDATE: Completed

Update: to be removed from agenda

ACTION WordCamp Brisbane - JOSH H to contact Brisbane members who may possibly be able to attend conference closing

ACTION: Sae Ra to send through notes on what to say to James.

UPDATE: James delivered a thank you message to WordCamp.

WordCamp was a successful event. Thank you to the organisers.

ACTION: Josh H to get a wrap up/closing report

Potential sponsorship of GovHack.

More information is required on the types of sponsorship that LA can look at.

Clarify with GovHack. LA may not be able to sponsor a prize as you would also need to

UPDATE: Criteria would need to be developed. LA would be able to provide their own judge. Josh S to come with some wording and criteria motion to be held on list.

Value of the prize also to be discussed after budget has been analysed by Josh H and Tony B.

ACTION: Josh H to follow-up on Invoices from WordCamp Sydney

4. Items for discussion

LCA2016 update

Cfp has opened and going very well.

LCA2017 update

Nothing to report

PyCon AU update

Registrations opened. Early birds are looking to sell out very quickly.

Sponsorship is looking good and

ACTION: Sae Ra to approve payment

Drupal South

ACTION: Follow-up on DrupalSouth 2016 enquiry. will need to setup a sub-committee

UPDATE: To work out the sub-committee details with organisers.

WordCamp Brisbane

Seeking a closure report

OSDConf

ACTION: Josh H to follow-up on budget status

5. Items for noting

6. Other business

Backlog of minutes

ACTION: Josh H to help Sae Ra with updating the website and mailing list.

UPDATE: Ongoing.

UPDATE: Completed.

MOTION by Josh H Minutes to be published to planet.linux.org.au

Seconded: Craige

Passed unanimously

Bank account balances need rebalancing

ACTION: Tony to organise transfers to occur including NZ account.

Appropriate treasurers to be notified.

UPDATE: to be discussed on friday

Membership of auDA

Relationship already exists.

LA has the potential to influence the decisions that are made.

ACTION: Council to investigate and look into this further. To be discussed at next fortnight.

UPDATE:

zookeepr

David would like to keep working on ZooKeepr.

We will need to find a solution that does not block volunteers from helping work on ZooKeepr.

ACTION: James to look at ZooKeepr

ACTION: Josh S to catch up with David Bell regarding the documentation.

Grant Request from Kathy Reid for Renai LeMay’s Frustrated State

MOTION by Josh H given the timing the council has missed the opportunity to be involved in the Kickstarter campaign. The council believes this project is still of interest to its members and will reach out to Renai on what might be helpful in an in kind, financial or other way. Therefore the grant request is no longer current and to be closed.

Seconded Sae Ra Germaine

Passed unanimously

ACTION: Josh S to contact Renai

7. In Camera

2 Items were discussed in camera

Meeting closed at 2041hrs

MySQL on NUMA machines just got better!

A followup to my previous entry: my patch that was part of Bug #72811 (Set NUMA mempolicy for optimum mysqld performance) has been merged!

I hope it’s enabled by default so that everything “just works”.

I also hope it filters down through MariaDB and Percona Server fairly quickly.

Also, from the release notes on that bug, I think we can expect 5.7.8 any day now.

July 13, 2015

Quartz trig

A morning of vacation geocaching, wandering, and walking to Quartz trig. Quartz was a disappointment as it's just a bolt in the ground, but this was a really nice area I am glad I wandered around in. This terrain would be very good for cubs and inexperienced scouts.



                                       






Interactive map for this route.



Tags for this post: blog pictures 20150713-quartz photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger




July 12, 2015

Twitter posts: 2015-07-06 to 2015-07-12

Git: Renaming/swapping “master” with a branch on Github

I was playing around with some code and after having got it working I thought I’d make just one more little quick easy change to finish it off and found that I was descending a spiral of additional complexity due to the environment in which it had to work. As this was going to be “easy” I’d been pushing the commits to master on Github (I’m the only one using this code) and of course a few reworks in I’d realised that this was never going to work out well and needed to be abandoned.

So, how to fix this? The ideal situation would be to just disappear all the commits after the last good one, but that’s not really an option, so what I wanted was to create a branch from the last good point and then swap master and that branch over. Googling pointed me to some possibilities, including this “deprecated feedback” item from “githubtraining”, which was a useful guide, so I thought I should blog what worked for me in case it helps others.

  1. git checkout -b good $LAST_GOOD_COMMIT # This creates a new branch from the last good commit
  2. git branch -m master development # This renames the "master" branch to "development"
  3. git branch -m good master # This renames the "good" branch to "master".
  4. git push origin development # This pushes the "development" branch to Github
  5. In the Github web interface I went to my repos “Settings” on the right hand side (just above the “clone URL” part) and changed the default branch to “development“.
  6. git push origin :master # This deletes the "master" branch on Github
  7. git push --all # This pushes our new master branch (and everything else) to Github
  8. In the Github web interface I went and changed my default branch back to “master“.

…and that was it, not too bad!

You probably don’t want to do this if anyone else is using this repo though. 😉


Trellis Decoding for Codec 2

OK, so FreeDV 700 was released a few weeks ago and I’m working on some ideas to improve it. Especially those annoying R2D2 noises due to bit errors at low SNRs.

I’m trying some ideas to improve the speech quality without the use of Forward Error Correction (FEC).

Speech coding is the art of “what can I throw away”. Speech codecs remove a bunch of redundant information. As much as they can. Hopefully with what’s left you can still understand the reconstructed speech.

However there is still a bit of left over redundancy. One sample of a model parameter can look a lot like the previous and next sample. If our codec quantisation was really clever, adjacent samples would look like noise. The previous and next samples would look nothing like the current one. They would be totally uncorrelated, and our codec bit rate would be minimised.

This leads to a couple of different approaches to the problem of sending coded speech over channel with bit errors:

The first, conventional approach is to compress the speech as much as we can. This lowers the bit rate but makes the coded speech very susceptible to bit errors. One bit error might make a lot of speech sound bad. So we insert Forward Error correction (FEC) bits, raising the overall bit rate (not so great), but protecting the delicate coded speech bits.

This is also a common approach for sending data over dodgy channels. For data, we cannot tolerate any bit errors, so we use FEC, which can correct every error (or die trying).

However speech is not like data. If we get a click or a pop in the decoded speech we don’t care much. As long as we can sorta make out what was said. Our “Brain FEC” will then work out what the message was.

Which leads us to another approach. If we leave a little redundancy in the coded speech, we can use that to help correct or at least smooth out the received speech. Remember that for speech, it doesn’t have to be perfect. Near enough is good enough. That can be exploited to get us gain over a system that uses FEC.

Turns out that in the Bit Error Rate (BER) ranges we are playing with (5-10%) it’s hard to get a good FEC code. Many of the short ones break – they introduce more errors than they correct. The really good ones are complex with large block sizes (1000s of bits) that introduce unacceptable delay. For example at 700 bit/s, a 7000 bit FEC codeword is 10 seconds of coded speech. Ooops. Not exactly push to talk. And don’t get me started on the memory, MIPs, implementation complexity, and modem synchronisation issues.

These ideas are not new, and I have been influenced by some guys I know who have worked in this area (Philip and Wade if you’re out there). But not influenced enough to actually look up and read their work yet, lol.

The Model

So the idea is to exploit the fact that each codec model parameter changes fairly slowly. Another way of looking at this is the probability of a big change is low. Take a look at the “trellis” diagram below, drawn for a parameter that is represented by a 2 bit “codeword”:

Let’s say we know our current received codeword at time n is 00. We happen to know it’s fairly likely (50%) that the next received bits at time n+1 will be 00. An 11, however, is very unlikely (0%), so if we receive an 11 after a 00 there is very probably an error, which we can correct.

The model I am using works like this:

  1. We examine three received codewords: the previous, current, and next.
  2. Given a received codeword we can work out the probability of each possible transmitted codeword. For example we might BPSK modulate the two bit codeword 00 as -1 -1. However when we add noise the receiver will see -1.5 -0.25. So the receiver can then say, well … it’s most likely -1 -1 was sent, but it also could have been a -1 1, and maybe the noise messed up the last bit.
  3. So we work out the probability of each sequence of three codewords, given the probability of jumping from one codeword to the next. For example here is one possible “path”, 00-11-00:

    total prob =

    (prob a 00 was sent at time n-1) AND

    (prob of a jump from 00 at time n-1 to 11 at time n) AND

    (prob a 11 was sent at time n) AND

    (prob of a jump from 11 at time n to 00 at time n+1) AND

    (prob a 00 was sent at time n+1)

  4. All possible paths of the three received values are examined, and the most likely one chosen.

The transition probabilities are pre-computed using a training database of coded speech, although it would also be possible to measure them on the fly, training up to each speaker.

I think this technique is called maximum likelihood decoding.
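
To show the shape of that exhaustive search, here is a sketch in C++ (the real implementation is the Octave simulation described below; the probabilities here are logs, so the ANDs above become additions):

  #include <vector>

  // Sketch of the 3-codeword maximum likelihood search.
  // obs_logp[t][c]  : log prob codeword c was sent at time t (t = 0,1,2),
  //                   derived from the received soft symbols.
  // trans_logp[a][b]: log prob of jumping from codeword a to codeword b,
  //                   pre-computed from a training database.
  int most_likely_middle_codeword(
      const std::vector<std::vector<float>>& obs_logp,
      const std::vector<std::vector<float>>& trans_logp) {
    const int n = (int)obs_logp[0].size();  // e.g. 16 states for a 4 bit codeword
    float best = -1e30f;
    int best_mid = 0;
    // Exhaustive search over every path prev -> mid -> next.
    for (int prev = 0; prev < n; ++prev)
      for (int mid = 0; mid < n; ++mid)
        for (int next = 0; next < n; ++next) {
          float logp = obs_logp[0][prev] + trans_logp[prev][mid] +
                       obs_logp[1][mid] + trans_logp[mid][next] +
                       obs_logp[2][next];
          if (logp > best) { best = logp; best_mid = mid; }
        }
    return best_mid;  // decoded value of the current (middle) codeword
  }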

Demo and Walk through

To test this idea I wrote a GNU Octave simulation called trellis.m

Here is a test run for a single trellis decode. The internal states are dumped for your viewing pleasure. You can see the probability calculations for each received codeword, the transition probabilities for each state, and the exhaustive search of all possible paths through the 3 received codewords. At the end, it gets the right answer: the middle codeword is decoded as a 00.

For convenience the probability calculations are done in the log domain, so rather than multiplies we can use adds. So a large negative “probability” means really unlikely, a positive one likely.

Here is a plot of 10 seconds of a 4 bit LSP parameter:

You can see a segments where it is relatively stable, and some others where it’s bouncing around. This is a mesh plot of the transition probabilities, generated from a small training database:

It’s pretty close to an “eye” (identity) matrix. For example, if you are in state 10, it’s fairly likely the next state will be close by, and less likely you will jump to a remote state like 0 or 15.

Here is test run using data from several seconds of coded speech:



octave:130> trellis

loading training database and generating tp .... done

loading test database .... done

Eb/No: 3.01 dB nerrors 28 29 BER: 0.03 0.03 std dev: 0.69 1.76

We are decoding using trellis based decoding, and simple hard decision decoding. Note how the number of errors and the BER are the same? However the std dev (distance) between the transmitted and decoded codewords is much better for trellis based decoding. This plot shows the decoder errors over 10 seconds of a 4 bit parameter:

See how the trellis decoding produces smaller errors?

Not all bit errors are created equal. The trellis based decoding favours small errors that have a smaller perceptual effect (we can’t hear them). Simple hard decision decoding has a random distribution of errors. Sometimes you get the Most Significant Bit (MSB) of the binary codeword flipped which is bad news. You can see this effect above, with a 4 bit codeword, a MSB error means a jump of +/- 8. These large errors are far less likely with trellis decoding.

Listen

Here are some samples that compare trellis based decoding to simple hard decision decoding, when applied to Codec 2 at 700 bit/s on an AWGN channel using PSK. Only the 6 LSP parameters are tested (short term spectrum); no errors or correction are applied to the excitation parameters (voicing, pitch, energy).

Eb/No (dB)   BER    Trellis   Simple (hard dec)
big          0.00   Listen    Listen
3.0          0.02   Listen    Listen
0.0          0.08   Listen    Listen

At 3dB, the trellis based decoding removes most of the effects of bit errors, and it sounds similar to the no error reference. Compared to simple decoding, the bloops and washing machine noises have gone away. At 0dB Eb/No, the speech quality is improved, with some exceptions. Fast changes, like the “W” in double-you, and the “B” in Bruce become indistinct. This is because when the channel noise is high, the probability model favours slow changes in the parameters.

Still – getting any sort of speech at 8% bit error rates with no FEC is pretty cool.

Further Work

These techniques could also be applied to FreeDV 1600, improving the speech quality with no additional overhead. Further work is required to extend these ideas to all the codec parameters, such as pitch, energy, and voicing.

I need to train the transition probabilities with a larger database, or make it train in real time using off air data.

We could include other information in the model, like the relationship of adjacent LSPs, or how energy and pitch change slowly in strongly voiced speech.

Now 10% BER is an interesting, rarely explored area. The data guys start to sweat above 1E-6, and assume everyone else does. At 10% BER FEC codes don’t work well; you need a really long block size or a low FEC rate. Modems struggle due to synchronisation issues. However at 10% the Eb/No versus BER curves start to get flat, so a few dB either way doesn’t change the BER much. This suggests small changes in intelligibility (not much of a threshold effect). Like analog.

Summary

For speech, we don’t need to correct all errors; we just need to make it sound like they are corrected. By leaving some residual redundancy in the coded speech parameters we can use probability models to correct errors in the decoded speech with no FEC overhead.

This work is another example of experimental work we can do with an open source codec. It combines knowledge of the channel, the demodulator and the codec parameters to produce a remarkable result – improved performance with no FEC.

This work is in its early stages. But the gains all add up. A few more dB here and there.

References

  1. I found this description of Soft Decision Viterbi decoding useful.
  2. Last year I did some related work on natural versus gray coding.

July 11, 2015

Electronics (TV) Repair, Working at Amazon, and Dealing With a Malfunctioning Apple iDevice

I obviously do component level electronics repair from time to time (I've been doing electronics repair/modification since I was fairly young, on devices ranging from food processors all the way up to advanced component level repair of things such as laptops). One of my recent experiments was with large screen flat panel (Plasma, LCD, LED, etc...) television sets. Some general notes:



- take precautions. If you've ever watched some of those guys on YouTube, you'll realise that they are probably amateur electricians who have probably never been shocked/electrocuted before. It's one thing to work with small electronic devices. It's an entirely different matter to be working with mains voltage. Be careful...

- a lot of the time electronic failure will occur gradually over time (although the amount of time can vary drastically obviously)

- don't just focus on repairing it so that power can flow through the circuit once more. It's possible that it will just fail once more. Home in on the problem area, and make sure everything's working. That way you don't have to keep on dealing with other difficulties down the track

http://electronics.stackexchange.com/questions/25055/slow-blow-vs-fast-acting-fuse

http://en.wikipedia.org/wiki/Fuse_%28electrical%29

- it may only be possible to test components out of circuit. While testing components with a multimeter will help, you may need to purchase more advanced and expensive diagnostic equipment to really figure out what the true cause of the problem is

http://www.eevblog.com/forum/beginners/is-there-any-way-to-test-capacitors-while-on-the-circuit-board/

http://electronics.stackexchange.com/questions/95663/how-to-test-capacitors-of-non-working-circuit-board-using-capacitor-meter

http://en-us.fluke.com/training/training-library/test-tools/digital-multimeters/how-to-measure-capacitance-with-a-digital-multimeter.html

http://www.learningaboutelectronics.com/Articles/How-to-test-a-capacitor

http://www.tpub.com/neets/book7/24l.htm

http://www.ladyada.net/library/metertut/resistance.html?PageSpeed=noscript

https://en.wikipedia.org/wiki/Oscilloscope

- set up a proper test environment. Ideally, one where you have a separate circuit and where there are safety mechanisms in place to reduce the chances of a total blackout in your house and to increase your personal safety

- any information that you take from this is at your own risk. Please don't think that any of the information here will turn you into a qualified electronics technician or will allow you to solve most problems that you will face

- a lot of the time information on the Internet can be helpful but only applies to particular conditions. Try to understand and work the problem rather than just blindly following what other people do. It may save you a bit of money over the long term

http://en.wikipedia.org/wiki/Failure_modes_of_electronics

http://en.wikipedia.org/wiki/Cascading_failure

https://ca.answers.yahoo.com/question/index?qid=20100213153443AAc1xPb

http://electronics.stackexchange.com/questions/8129/how-do-components-fail

http://www.angelfire.com/planet/funwithtransistors/Book_TS_CHAP-3.html

http://www.angelfire.com/electronic/funwithtubes/Restore_cap.html

http://www.avsforum.com/forum/167-plasma-flat-panel-displays/1490867-samsung-pn60e550d1f-no-picture-has-sound-capacitors-look-ok.html

http://www.avsforum.com/forum/167-plasma-flat-panel-displays/1490976-samsung-51d490-sound-but-no-picture-issue.html

http://forum.allaboutcircuits.com/threads/lg-42-lcd-tv-42lg60fr-picture-goes-black-sound-still-on.86475/

http://www.gamersyde.com/forum_lg_lcd_tv_47lg50_blacked_out_-33_34874_1_en.html

http://www.avsforum.com/forum/166-lcd-flat-panel-displays/1337855-tv-turns-there-sound-but-no-picture.html

https://www.youtube.com/channel/UCUfgq9Gn8S041qQFl0C-CEQ

https://www.youtube.com/user/GrantsPassTVRepairs?feature=hovercard

http://www.explainthatstuff.com/plasmatv.html

http://www.badcaps.net/forum/showthread.php?t=18542

http://www.badcaps.net/forum/showthread.php?t=33333

https://www.ifixit.com/Answers/View/136156/color+on+LCD+fades+to+white+after+10+seconds

http://support-us.samsung.com/cyber/popup/iframe/pop_troubleshooting_fr.jsp?idx=28929&modelname=LT-P227W

https://us.en.kb.sony.com/app/answers/detail/a_id/42696/~/the-tv-turns-off-and-on-by-itself,-reboots,-or-the-standby-light-is-blinking

https://answers.yahoo.com/question/index?qid=20080129052848AAdAtpt

http://forums.cnet.com/7723-13973_102-530929/horizontal-black-lines-on-samsung-50-plasma-tv/

http://forums.cnet.com/7723-13973_102-511585/plasma-tv-has-3-thin-black-lines/

http://everydaylife.globalpost.com/causes-horizontal-lines-lcd-panel-37988.html

Philips 32PFL5522D/05 - Completely dead (no power LED or signs of life) - Diagnosis and repair

https://www.youtube.com/watch?v=TrphsEw8slw
how fix tv site:blogspot.com

- electronics repair is becoming increasingly uneconomical. Parts may be impossible to find, and replacing the TV rather than fixing it may actually be cheaper (especially when the screen is cracked; it's almost certain that a replacement screen is going to cost more than the set itself). The only circumstances where it's likely to be worth it is if you have cheap spare parts on hand or the failure involves a relatively small, minor component. The other thing you should know is that while the device may be physically structured in such a way as to appear modularised, it may not fail in such a fashion. I've been reading about boards which fail but have no mechanism to stop the failure bleeding into other modules, which means you can end up in an infinite failure loop: replace one bad component with a good one, and the leftover apparently good component fails and eventually takes out the new, good board. The cycle then continues forever until the technician realises this or news of such a design spreads. You may have to replace both boards at the same time, which then makes the repair uneconomical

http://resistor.cherryjourney.pt/

http://www.eeweb.com/toolbox/4-band-resistor-calculator

http://www.1728.org/resisclr.htm

http://www.camradio.net/resistors.html

- spare parts can be extremely difficult to source or are incredibly expensive. Moreover, the quality of the replacement parts can vary drastically in quality. If at all possible work with a source of known quality. Else, ask for demo parts particularly with Asian suppliers who may provide them for free and as a means of establishing a longer term business relationship

http://www.aliexpress.com/store/product/37PFL7422-93-37TA2800-Power-Supply-715T2484-5-Original-parts/715046_32223080629.html

http://www.aliexpress.com/store/product/Non-Replacement-37PFL7422-Power-Board-715T2484-5/1395137_32228251009.html

http://en.wikipedia.org/wiki/Mains_electricity_by_country

http://www.worldstandards.eu/electricity/plug-voltage-by-country/

https://answers.yahoo.com/question/index?qid=20060619172834AADo9qt

http://electronics.stackexchange.com/questions/120983/what-happens-if-a-240v-appliance-is-connected-in-a-120v-ac-power-supply

http://electronics.stackexchange.com/questions/29259/charging-devices-voltage-and-amperage

https://answers.yahoo.com/question/index?qid=20060619172834AADo9qt

- be careful when replacing parts. Try to do your best to replace like for like. Certain systems will operate in a degraded state if/when using sub-par replacements but will ultimately fail down the line

http://electronics.stackexchange.com/questions/86530/substituting-capacitors

http://www.antiqueradio.org/recap.htm

- use all your senses (and head) to track down a failure more quickly (sight and smell in particular for burnt out components). Sometimes, it may not be obvious where the actual failure is as opposed to where it may appear to be coming from. For instance, one set I looked at had a chirping power supply. It had actually suffered failures of multiple components, which made it appear/sound as though the transformer had failed. Replacement of all the relevant components (not the transformer) resulted in a functional power supply unit and the chirping sound stopping

http://www.cnet.com/au/news/samsung-power-defect-causes-some-tvs-to-fail-and-a-class-action-suit-follows/

http://techreport.com/forums/viewtopic.php?f=37&t=62360&start=960

http://www.tomshardware.com/forum/363126-28-making-clicking-chirping-noises

https://answers.yahoo.com/question/index?qid=20111006212112AAbUauu

http://www.badcaps.net/forum/showthread.php?t=24657

http://www.badcaps.net/forum/showthread.php?t=18603

http://www.badcaps.net/forum/showthread.php?t=17379

http://superuser.com/questions/487729/motherboard-makes-high-pitched-chirping-noise-will-it-stop-by-itself-or-what-d

http://sound.westhost.com/troubleshooting.htm

http://www.instructables.com/id/Repair-your-electronics-by-replacing-blown-capacit/

http://www.justanswer.com/tv-repair/62e22-philips-42-inch-lcd-no-picture-just-chirping-sound-when-turned.html

http://www.fixya.com/tags/chirping/flat_panel_televisions/philips

http://www.ehow.com/how_7387574_philips-turn-making-chirping-noise.html

http://www.capacitorlab.com/visible-failures/

http://www.badcaps.net/pages.php?vid=5

http://www.angelfire.com/electronic/funwithtubes/Testing_caps.html

http://conradhoffman.com/capchecktut.htm

http://www.justanswer.com/tv-repair/71602-fix-vertical-lines-samsung-plasma-tv.html

http://www.digikey.com.au/product-detail/en/ECE-A1AKA470/P806-ND/6913

- as with musical instruments, teardowns may be the best that you can get with regards to details of how a device should work. This is nothing like school/University where you are given a rough idea of how it should work. You may be completely blind here...

http://dtbnguyen.blogspot.com.au/2015/06/repairing-musical-instrumentselectrical.html

- components may be shared across different manufacturers. It doesn't mean that they will work if swapped though. They could be using different version of the same base reference board (similar to the way in which graphics, sound, telecommunications, and network cards rely on reference designs in the ICT sector)

https://www.ifixit.com/

715t2484-3-schematic.pdf

Magnavox has a very similar layout to a similar size Philips LCD TV

https://www.youtube.com/watch?v=qSS6ycUxS98



Apparently, Amazon are interested in some local talent.

http://aws.amazon.com/careers/aus-nz-event/

There are some bemusing tales of recruitment and the experience of working there though.

http://gawker.com/working-at-amazon-is-a-soul-crushing-experience-1573522379

http://www.theguardian.com/money/2014/nov/28/being-homeless-is-better-than-working-for-amazon

http://www.glassdoor.com.au/Overview/Working-at-Amazon-com-EI_IE6036.11,21.htm

http://www.salon.com/2014/02/23/worse_than_wal_mart_amazons_sick_brutality_and_secret_history_of_ruthlessly_intimidating_workers/



If your iPhone, iPad, or iPod touch doesn't respond or doesn't turn on, and the device is in a lot of trouble, I often just run the following command on the storage: 'dd if=/dev/zero of=/dev/[iPod storage node]'. This will create a corrupted filesystem and force restoration of iOS to factory settings/setup.

https://support.apple.com/en-au/HT201412

https://www.ifixit.com/Wiki/iPod_Classic_Troubleshooting

https://www.ifixit.com/Answers/View/15461/Won%27t+turn+on+or+charge

https://discussions.apple.com/thread/2468023?tstart=0

https://support.apple.com/en-au/HT201263

http://www.howtogeek.com/216839/what-to-do-when-your-iphone-or-ipad-won%E2%80%99t-turn-on/?PageSpeed=noscript



Sometimes digitizers play up. Apparently, a lot of strange behaviour can occur if certain cables are bent improperly or if there isn't enough space/insulation between certain components.

https://www.ifixit.com/Answers/View/137418/Digitizer+freaks+out+when+laid+flat+on+the+frame



Identify your iPad model.

https://support.apple.com/en-au/HT201471



If your device is suffering from corruption issues you may need to back up your music first...

http://www.wikihow.com/Copy-Music-from-Your-iPod-to-Your-Computer 

http://www.syncios.com/how-to-backup-ipod-music-to-computer.html

http://lifehacker.com/5869827/how-to-copy-music-from-your-iphone-ipad-or-ipod-touch-to-your-computer-for-free

https://discussions.apple.com/thread/5025610

http://geeknizer.com/how-to-fix-corrupted-ipod/



A lot of substances can be used to remove scratches from your electronic device. Some of them are not so obvious in how they actually work (solvents and abrasives are the two most common techniques).

http://www.macworld.com/article/1046291/scratchremove.html

http://www.instructables.com/id/Buffing-Your-Ipod/

Labor on refugees

Sorry, technical folk, this is going to be a political blog post.

I recently got an email from my local member, Andrew Leigh, that raised an issue I feel passionately about; here is my response.

On 09/07/15 14:55, Andrew Leigh wrote:[snip]
> 
> ▪ Some people have asked me *why Labor supported the government’s bill to
> continue regional processing*. This is a tough question, on which reasonable
> people can disagree, but the best answer to this is to read Bill Shorten’s
> speech to the House of Representatives
> 
> on the day the legislation was introduced.
Hi Andrew,

I'm sorry, but I cannot agree with the logic Bill Shorten and the Labor party have expressed in that speech.

Firstly, anyone watching the international refugee situation will realise that Australia's intake is pitiful and stingy compared to those of some of its key allies and comparable nations, especially when compared to its population size and lifestyle. It is hypocritical to say "we don't want people to risk journeying across the sea from Indonesia, but we're happy for them to remain illegal immigrants there", especially when you look at the life those people face as refugees there.

As an aside, though, I would say that it is still partly correct - it is more humane for them to remain in Indonesia than to be detained indefinitely in the inhuman, under-resourced and torturous conditions on Manus Island and Nauru. It is shameful to me that the Labor party can ignore this obvious contradiction.

But more importantly, the logic that we're somehow denying "people smugglers a product to sell" by pushing boats back into international waters shows no understanding of people smuggling as a business. Australia is still very much a destination, it's just that people now come with visas on planes and they pay even more for this than they used to. There is still a thriving trade in getting people into Australia, it's just been made more expensive - in the same way that making heroin illegal has not caused it to suddenly vanish from the face of the earth.

All we're doing by punishing people who come by boat to seek refuge in Australia is punishing the very desperate, the worst off, the people who have literally fled with their clothes and nothing else.

Other people with money still arrive, overstay their visas, get jobs as illegal immigrants or on tourism visas. The ABC has exposed some of these ridiculous, unethical companies trading on foreign tourists and grey market labourers. The Labor party, of all parties, should be standing up for these people's rights yet it seems remarkably silent on this issue.

The point that I think Labor needs to learn and the point I ask you to express to your colleagues there is that we don't want Labor to return to its policies in 2010. We thought those were inhuman and unjust then, and we still do now. Invoking them as a justification for supporting the Government now is bad.

Personally, I want Labor to do three things with regard to refugees:

  1. Move back to on-shore detention and processing. The current system is vastly more expensive than it needs to be, and makes it more difficult for UN officials and our own members of parliament and judiciary to be able to examine the conditions of detention. The Coalition keeps telling everyone about how expensive their budget is but seems remarkably silent on why we're paying so much to keep refugees offshore.
  2. Provide better ways of settling refugees, such that one can cut the "people smuggler" middle men out of the deal.

    For example, set up refugee processing in places such as Sri Lanka and Afghanistan where many refugees come from. Set a fixed price per person for transportation and processing in Australia, such that it undercuts the people smugglers - according to figures I read in 2010 this could be $10,000 and still be 50% less than black market figures.

  3. Ensure accountability and transparency of the companies such as Serco that are running these centres. If the government was running them and people were being abused, the government would be held accountable; when private companies do this the government wipes its hands and doesn't do a thing.
And on a more conversational note, I'd be interested in your views on this as an economist. There is obviously an economy of people smuggling - do we understand it? Is there any economic justification for offshore detention? All markets must work with a certain amount of illegal activity - can we work _with_ the black market rather than trying to work against it?

I do appreciate your updates and information and I look forward to more of your podcasts.

All the best,

Paul

Gather 2015 – Afternoon Sessions

Panel: “How we work” featuring Lance Wiggs, Dale Clareburt, Robyn Kamira, Amie Holman – Moderated by Nat Torkington

  • Flipside of Startups given by Nat
  • Amie – UX and Services Designer for the Govt, thinks her job is pretty cool. Puts services online.
  • Lance – Works for NZTE's Better by Capital programme. Also runs an early-stage fund. Multiple starts and fails
  • Dale – Founder of Weirdly. Worked her way up to the top of a recruitment company (small to big). Decided to found something for herself.
  • Robyn – Started business 25 years ago. IT consultant, musician, writer.
  • Nat – Look at what you are getting from the new job. Transition to a new phase in life. Want to be positive.
  • Types of jobs: Working for someone else, work for yourself, hire other people, investor. Each has own perks, rewards and downsides.
  • Self employed
    • Big risk around income, peaks and troughs. Robyn always lived at the bottom of the trough level of income. Some people have big fear where next job is coming from.
    • Robyn – Charged the Govt as much as possible. Later on charged just below what the really big boys charged. Also has lower rates for community orgs. Sniffed around to find out the rates. Sometimes asked the client. Often RFPs don't explicitly say, so you have to ask.
    • Pricing – You should be embarrassed about how much you charge for services.
    • Robyn – Self promotion is really hard. Found that contracts came out of Wellington. Book meetings in cafes back to back. Chat to people, don’t sell directly.
  • Working for others
    • Amie – Working in a new area of government. But it is an area that is growing. A fairly permissive area, with lots of gaps that they can fill.
    • Dale – Great experience as an employee. In environment with lot of autonomy in a fast growing company.
    • Lance – Worked for Mobil – lots of training courses, overseas 6 months after being hired. 4 years, 4 different cities, steep learning curve, subsidized housing etc. “Learning curve stopped after 4 years and then I left”.
    • Big companies downside: Multiple stakeholders, Lots of rules
    • Big company upside: Can do a startup on the side, e.g. a family. Secure income. Get to play with big money and big toys.
  • Startup
    • Everything on steroids
    • Really exciting
    • Starting all parts of a company at once
    • Responsibility for business and people in it
    • Crazy ups and downs. Brutal emotional roller-coaster
    • Lance lists 5 failed businesses he was at off the top of his head, 3 of which he founded
    • Worst that can happen is that you can lose your house
    • Is this life for everyone? – Dale “yes it can be, need to go in with your eyes open”.  “Starting a business can be for everyone. I’m the poorest I’ve ever been now but I’m the happiest I’ve ever been”
    • At a startup you are not working for yourself, you are working for everybody else. Dale says she tries to avoid that.
    • Robyn – “If your life is gone when you are in a business then you are doing it wrong.”
    • If you are working from home you can get isolated, get some peer support and have a drink, coffee with some others.
  • Robyn – Recommends “How to Win Friends and Influence People”
  • Dale
    • Jobhunters – Look for companies 1st and specific job 2nd
    • Startup – Meet everyone that you know and ask their opinion on your pitch
    • Young People going to Uni – You have to get work experience, as a recruiter she looks at experience 1st and pure academic history second.
  • Lance
    • Balance between creating income, creating wealth, learning
    • Know what you are passionate about and good at
    • It is part of our jobs to support everyone around us. Promote other people
  • Amie
    • Find the thing that is your passion
    • When you are delivering your passion you are delivering something relevant

Pick and Mix

  • Random Flag generator – @polemic
    • See Wikipedia page for parts of a flag
    • 3 hex numbers are the palette
    • 4 numbers represent the pattern
    • Next number will be the location
    • Next number determines which colour will be assigned
    • Last number will be a tweak number
    • Up to 8 or 9 of the above
    • Took Python pyevolve and ran evolution on them (see the sketch after this list)
  • Alex @4lexNZ , @overtime
    • E-sports corporate gaming league
    • untested in NZ
    • Someone suggested cold calling CEOs or writing them letters
  • Simon @slyall (yes me)
    • Low volume site for announcements
  • Mutation testing
    • Tweak values in the code under test – like fuzzing in reverse
  • Landway learning  – @kiwimrdee
    • Looking for computers to borrow for class
    • They teach lots of stuff
  • Poetry for computers – @kiwimrdee
    • Hire somebody with an English/arts background who understands language rather than somebody from a CS background who understands machines
  • the.dosprompt.com
    • Lossless image compression for the web
    • Tools vary across the platform
  • Glen – Make computers learn to play Starcraft 1
    • Takes replays of humans playing starcraft
    • Getting computer to learn to play from that DB
    • It is struggling
  • Emergent political structures in tabletop games
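
As a rough illustration of the flag-generator encoding described above, here is a minimal sketch of how such a genome might decode. The field layout and names are my assumptions for illustration, not @polemic's actual code (which used pyevolve for the evolution step):

import random

PATTERNS = ["horizontal_bands", "vertical_bands", "diagonal", "canton"]

def random_genome():
    # A hypothetical flag genome: 3 hex colours (the palette), then a
    # handful of digits for pattern, location, colour choice and a tweak.
    palette = ["%06x" % random.randrange(0xFFFFFF) for _ in range(3)]
    digits = [random.randrange(10) for _ in range(4)]
    return palette + digits

def decode(genome):
    # Decode a genome into a human-readable flag description.
    palette, (pattern, location, colour, tweak) = genome[:3], genome[3:]
    return {
        "palette": ["#" + c for c in palette],
        "pattern": PATTERNS[pattern % len(PATTERNS)],
        "location": location,        # where the pattern element sits
        "colour_index": colour % 3,  # which palette entry it uses
        "tweak": tweak,              # small per-flag variation
    }

print(decode(random_genome()))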

Never check in a bag – How to pack

  • 48 hour bag
    • Laptop and power
    • Always – Zip-up pouch, tissues, hand sanitizer, universal phone charger, breath mints, the littlest power plug (check it will work on multiple voltages), food bar, chocolate.
    • If more than 48 hours – notebook, miso soup, headphones, pen, laptop charger, apple plugs ( See “world travel kit” on apple site)
    • Get smallest power plug that will charge your laptop
    • Bag 3 – Every video adapter in the world, universal power adapter, airport express.
    • TP-link battery powered wifi adapters
    • If going away, just move the laptop etc. to this bag
    • Packing Cell
      • Enough clothes to get me through 48 hours
      • 2 * rolled tshirts (ranger rolling)
      • 2 pairs of underwear
      • 2 pairs of socks
      • Toiletries. Ziplock bag that complies with TSA rules for gels etc.
      • Other toiletries in different bag
      • Rip off stuff from hotels, also Kmart and local stores.
      • Put toiletries ziplock near door to other bag so easy to get out for security.
      • Leave packing cell in Hotel when you go out
    • Learn to Ranger roll socks and shirts etc.
  • 6 weeks worth of stuff
    • In the US you can have huge carry-on
    • Packs 2 weeks worth of clothes
    • Minaal Bag (expensive but cool).
    • Schnozzel bag – Vacuum pack clothing bag
  • Airlines allow 1 carryon bag up to 7 kgs + 1 bag for other items (heavy stuff can go into that)
  • Pick multi-coloured packing cells so you can colour-code them.
  • Elizabeth Holmes and Matilda Kahl and Steve Jobs all wear same stuff every day.
  • Wear ballet flats on the plane
  • Women: no more than 2 pairs of shoes ever, one of which must be good for walking long distances
  • Always be charging

Show us your stack

  • I was running this session so didn’t take any notes.
  • We had people from about 5 companies give a quick overview of some stuff they are running.
  • A bit of chat beforehand
  • Next year if I do this I should probably set a 5-minute time limit for everyone

Close from Rochelle

  • Thanks to Sponsors
  • Thanks to Panellists
  • Thanks to catering and volunteer teams
  • Will be back in 2016

 


July 10, 2015

Gather 2015 – Morning Sessions

Today I’m at the Gather 2015 conference. This was originally “Barcamp Auckland” before they got their own brand and went off to do random stuff. This is about my 5th year or so here (I missed one or two).

Website is gathergather.co.nz . They do random stuff as well as the conference.

Welcome

  • Welcome and intro to conference history from Ludwig
  • Rochelle thanks the sponsors
  • Where to go for dinner, no smoking, watch out for non-lanyard people, fire alarms, etc
  • Quiet room etc

Lessons learnt from growing a small team from 5 to 15

  • Around 30 people. Run by Ben, works at sitehost, previously worked at Pitch
  • Really hard work. Takes  a lot of time and real effort to build a great team
  • Need to dedicate time and resources to growing the team. Need someone who is focussed on growing the team and keeping the current team working
  • People cringe when they hear “HR”, but you need someone in that sort of role, and early on.
    • At around 16 people they don't have a full HR person yet. Before going FT, have someone with scheduled time to focus on team or company culture. In an ideal world that person might not be in a manager role but be a bit senior (so they hear what the lower-level employees say)
  • Variety and inclusion are key to a happy team
    • Once you are at 10+ members team will be diverse so “one size fits all” won’t work anymore. Need to vary team activities, need to vary rewards. Even have team lunches at different places.
  • Hire for culture and fit
    • From the first person
    • It's easier to teach someone skills than to teach them to be a good team member
    • Anecdote: Hired somebody who didn’t fit culture, was abrasive, good worker but lost productivity from others.
    • Give people a short term trial to see if they fit in.
  • You will need to change the way you communicate as a team as it grows
    • A passing comment is not enough to keep everybody in the loop
    • Nobody wants to feel alienated
    • Maybe chat software, noticeboard, shared calendar.
  • Balance the team work the members do
    • Everybody needs to enjoy the work.
    • Give people interesting rewarding work, new tech, customer interaction
    • Share the no-fun stuff too. Even roster it if you have to. Even if somebody volunteers, don't make them always do it.
  • Appreciate your team members
    • Praise them if they have put a lot of work into something
    • Praise them before you point out the problems
    • Listen to ideas no matter who they come from.
    • 5 Questions/Minutes rule
  • If someone is not working well, consider whether the problem is elsewhere in their life. Maybe talk to them. That's the job of everyone in the team
  • Appreciate your teams work, reward them for it
  • Do what feels right for your team. What works for some teams might not work for all. No “one size fits all”
  • Building great teams isn’t science it is an art. Experiment a bit.
  • Taking the time to listen to 10 people instead of just 5 takes longer. Maybe this can be naturally taken on by others in the team, not just the “boss”.
  • Have a buddy for each new hire. But make sure the buddies don't get overloaded by constantly doing this with each new hire.
  • Going from 10 to 100 people: the same thing doesn't work at each company size.
  • There's a point where you can no longer get everybody in a room. At that point you have multiple teams and tribalism.
  • If you have a project across multiple teams then try and put everybody in that project together in a room.
  • Have people go to each others standups
  • Hire people who can handle change
  • Problems arise if you buy a small company; the small company may want to keep its culture.
  • Company that does welcome dinners not farewell dinners
  • Make sure people can get working when they arrive, have an email address etc, find out if they have preferences like nice keyboard.
  • Don’t hire when you are extremely busy that you can’t properly get them onboard (or you may pick the wrong person). Never hire impulsively. Hire ahead of time.
  • Don’t expect them to be fully productive straight away. Give them something small to start on, no too complicated, no to crazy dependant on your internal crazy systems. But make sure it is within their skill level in case they struggle.
  • Maybe summer student projects. Find good people without being stuck with someone. Give them a project that isn’t high enough priority for the FT people.
  • Create training material

Writing for fun and profit

  • Run by Peter Ravlich
  • Scrivener – IDE for writing
  • Writing full time (with support from his partner), currently doing 4 projects simultaneously
  • Less community for Fantasy Writers than for literary writers. Bias in NZ against Genre fiction
  • Community – SpecficNZ – For speculative fiction. SciFi con each year and have a stand at Armageddon each year. $30 per year
  • If you write roleplaying games look at selling via rpgnow.com
  • If publishing with Amazon, remember to be non-exclusive
  • For feature writing you need to know editors who like you and like your work.
  • “Just keep writing” , only way you’ll ever get better
  • Writing a weekly column:
    • The best way: write articles a week ahead of time, edited by his wife, sent to the editor well in advance.
    • Leaving it to the last minute without pre-editing means quality varies; the speaker's column got dropped
  • Find the type of writing that you like and are good at.
  • Run everything past a reading group. “Am I on the right track?”
  • Treated writing as a job. Scheduled “Write for an hour, edit for 30 minutes, lunch, then repeat”. Make yourself do it.
  • Lots of sites that push you to write a set number of words. They give you badges, pictures of kittens or punishments to keep you to a wordcount
  • Join a online writing group and post regular updates and get a bit of feedback
  • Daily Routines Blog or spinoff book for some ideas
  • Developmental editor or Structural editor
    • Developmental editor – Go to early, guidelines of what you should be doing, what direction you should be going. What is missing. Focused at plot level.
    • Structural Editor – Goes through it line by line
  • Need to find an editor who suits your style of writing; one who knows the genre is important. Look for those who have edited books/authors in your area.
  • Self editing – set aside the novel, change the font, use a new device, read through it again. Change the context so you are looking at it with new eyes.
  • Get the contract with the editor reviewed by a lawyer with experience in the industry (and on your side)
  • Most traditional publishers expect to see an edited novel
  • Talk to agents, query those who work with authors in similar areas to you.
  • Society of Authors
    • Have some legal experts, give you a reference
  • Kindle K-boards, a bit romance orientated but very good for technical stuff.
  • Go to poetry or reading/writing group. Get stuff out to other people. Once you have got it out to some, even just a small group then small jump to send it out to billions.
  • Have a strategy for how to handle reviews; probably don't engage with them.
  • Ann Friedman – Disapproval Matrix
  • You are your own toughest reviewer
  • Often people who went to journalism school, although not many actual journalists
  • Starling Literary Journal
  • Lists of Competitions and festivals in various places
  • Hackathon (Step it up 2015) coming up; one group they want is journalists who want to get more money into the job

The World of Vexillology – Flag Design

  • Dan Newman
  • flagdesign.nz + flagtest.nz
  • NZ Flag design cutoff this coming Thursday (the 16th of July)
  • People are interested in how flag designs originate, e.g. how naval custom influences designs
  • 6000 odd submissions -> 60 shortlist -> 4 voted in referendum -> 1 vs current
  • 60 people at meeting in Wellington, less in other places.
  • Government Website
  • First time a country has changed its flag by referendum rather than at the time of a significant event (e.g. independence)
  • A lot of politicians are openly republican, but less push and thought in rest of population
  • Concern that the silver fern flag looks like a corporate logo
  • Easier to pretend you are an Australian and ask them “What would the NZ flag look like?”. E.g. “Green kangaroo on yellow”, “White silver fern or kiwi on black background”
  • Also lots of other countries use the Southern Cross
  • In most countries the national team colours are close to those of the flag
  • Feeling that even if the flag changes now, it will change again after “full independence”
  • What will happen if celebs come out in favour of a specific design?
  • Different colours have different associations ( in different places )
  • All sorts of reasons why different colours are on a flag
  • The silver fern looks like a fish to some
  • Needs to look good scaled down to emoji size

Bootstrapping your way to freedom

  • From Mark Zeman – Speedcurve
  • Previous gather sessions have been orientated toward VC and similar funding
  • There is an alternative where you self-fund
  • Design teacher – all students wanted to work on LOTR cause it was where all the publicity was.
  • Bootstrapping – Doing it your way: self-funded, self-sustaining, usually smaller
  • Might take Capital later down the track
  • 3Bs seen as derogatory
  • Lots of podcasts, conferences and books etc
  • See Jason Cohen; many bits in this presentation are taken from him
  • The “ideal” bootstrapped business. Look at it from your own constraints
  • Low money, low time, self funded, try to create a cash machine
  • The lower end of SaaS business is very low – a very small amount per year
  • Low time if working on the side
  • Trying to get to maybe $10k/month to go fulltime
  • Recurring revenue. 150 customers at $66/month. Not many customers, not a huge-value product, but it has to be a reasonable amount.
  • Maybe not one-off product
  • Enterprise vs consumer space
  • Hard to get there with $0.99 one-offs in App store
  • Annual plans create cashflow
  • Option: Boutique product. Be honest about who you are and how big you really are; don't pretend to be a big company
  • B2B is a good space to be in. You can call 150 business and engage with them.
  • Not critical, Not real time (unless you want to be up at 3am)
  • Pick something that has “naturally recurring pain”, e.g. not a wedding planner – ideally pain that recurs multiple times per month
  • Aftermarkets, e.g. plugins for WordPress. Something small: put 20 hours into it, put it up for $5. See also Xero, Salesforce.
  • Pick Big Markets, lots of potential customers
  • “Few NZ clients great for GST since I just get refunds”
  • Better By design. Existing apps mean there is already a market. Took an existing Open source product (webpagetest.org) and put a nice wrapper on it
  • A number of companies have published their numbers. Look at their early days and how long it took them to get to $10k/month (e.g. many took a year or two to get there).
  • Option to do consultancy on the side if you go “full time”. Cover the gap between your new business and your old wage. He had a 1-year contract that let him go half time on the new biz but cover old expenses.
  • Don’t have false expectations on how quickly it will happen
  • Hard when it was a second job. Good because it was different from the day-job, but a lot of work.
  • Prototype and then validate. In most cases you should go the other way around.
  • If you want to talk to someone have something to offer. Have a pay it forward.
  • Big enterprises have people too. Connect to one guy inside and they can buy your product out of his monthly credit card bill.
  • Not everybody is doing all the cool techniques. Even if you are a “B” then you are ahead of a lot of the “C”s . eg creating sites with responsive design.
  • 1/3 each – Building Business, Building Audience, Building Product
  • Loves doing his GST etc
  • In his case he did each in turn: product, then audience, then business
  • Have a goal. Do you want to be a CEO? Or just a little company?
  • His success measures – fun, time with kids, travel, money, flexibility, learning, holidays, adventures, ideas, sharing
  • Resources: Startups for the Rest of Us, A Smart Bear blog, Amy Hoy – Unicorn Free, GrowthHacker TV, MicroConf videos


Landscape design: a great analogy for the web

Map of the garden at Versailles

I often find myself describing the digital domain to people who don't live and breathe it like I do. It's an intangible thing, and many of the concepts are coded in jargon. It doesn't help that every technology tool set uses its own specific language, sometimes using the same words for very different things, or different words for the same things. What's a page? A widget? A layout? A template? A module, plugin or extension? It varies. The answer is “it depends”.

Analogies can be a helpful communication tool to get the message across, and get everyone thinking in parallel.

One of my favourites is to compare a web development project to a landscape design project.

One of the first things you need to know is who this landscape is for, and what sort of landscape it is. The design required for a public park is very different to one suitable for the back courtyard of an inner-city terrace house.

You also need to know what the maintenance resources will be. Will this be watered and tended daily? What about budget? Can we afford established plants, or should we plan to watch the garden grow from seeds or seedlings?

The key point of comparison is that a garden, whether big or small, is a living thing. It will change, it will grow. It may die from neglect. It may become an unmanageable jungle without regular pruning and maintenance.

What analogies do you use to talk about digital design and development?

Image: XIIIfromTOKYO - Plan of the gardens of Versailles - Wikipedia - CC-BY-SA 3.0

NetHui 2015 – Friday afternoon

Safety and security in SMEs

  • Biggest challenge for one SME IT person: very bad password practices
  • PABX issues, default passwords on voicemail resulting in calls getting forwarded overseas, racking up a big bill
    • Disable countries you don’t need
    • Credit Limits on your account
    • Good firewall practice
    • Good pin/password practice
  • SMEs wanted problem to go away since they had a business to run.
  • No standards for IT in small business; every setup is different
  • 9 times out of 10 IT stifles business and makes things worse.
  • Small businesses recognise value, don’t want to spend on stuff that doesn’t return value
  • So many attack directions very hard to secure.
  • If you let other people use your business devices it's a huge risk. Do you let your kids play with your work phone/laptop?
  • Biometrics don’t seem to be there yet
  • Maybe cloud-based software is a solution.

Disaster recovery

  • Pictures of before/after of the satellite downlink and comms centre in Vanuatu after Cyclone Pam
  • Cellular network survived, Datacentre survived, Fibre network survived
  • One month after disaster 80% of comms were restored
  • NZ team just sent over material via Govt CIO
  • Various other groups on the ground
  • Lots of other people doing stuff. Some were uncoordinated with main efforts
  • NZ people (Dean, Andy) Spent 90% of time on logistics and 10% of time on IT stuff
  • Vanuatu people very busy. eg offshore people had own mailing list to discuss things and then filter them through to people on the ground
  • Lots of offers from people.
  • A plan was not in place in Vanuatu; they now have one though
  • What people wanted was generators and satellite phones. Both are hard to ship via air due to petrol/lithium restrictions.
  • Very hard for non-regular organisations (not the top 5 NGOs) to get access to shipping in military planes etc
  • Echo from people who had similar problems in Christchurch working with the regular agencies
  • Guy from Vodafone said their company (globally) has a cellphone site that can be split up across normal plane luggage
  • Twitter accounts for Wellington suburbs had a meeting with council
  • Some community outreach from the councils to coordinate with others. community resilience. Paying for street BBQs etc. “Neighbours day”
  • Vital infrastructure needs to have capacity in disaster.
  • Orgs need to have plans in place beforehand
  • Good co-operation between telcos in Christchurch Earthquake
  • Mobile app for 111 currently being looked at
  • Some parts of the privacy act can be loosened when disasters are declared to enable information sharing with agencies
  • Options for UPS on UFB “modems”

Panel: Digital inclusion – Internet for everybody

  • Panelists: Vanisa Dhiru (2020 Communications Trust), Bob Hinden (Internet Society), Professor Charles Crothers (Auckland University of Technology), Robyn Kamira (Mitimiti on the Grid Project).
  • Charles
    • Cure-all quick technical fix
    • attitude to non-users
    • Recognise the dark-side of the Internet
    • What sorts of uses do we want to see?
    • Facilitating active vs passive users
    • Various stats on users. At around 80%. Elderly catching up with other groups
  • Vanisa
    • Digital inclusion projects; best known is “Computers in Homes”, in 1500 homes per year
    • Does “digitally disadvantaged” just mean poor, or other groups too?
  • Tim
    • Network for Learning ( N4L)
    • Connecting up schools to managed network, many over RBI
    • School gets: router with firewall, some services on top of that
    • Means teachers don’t have to worry about the technical issues
    • map.n4l.co.nz is website with map of connected schools
    • Target 90% by end of the year. Getting down to smaller and more remote schools
    • Not just about having fibre connections and handing out tablets to every student
    • Raspberry Pi at each site so they can remote in to test the network
  • Robyn
    • Issues 10 years ago about theft of data and concepts
    • Today we still see instances where models will have [Maori Chin Tattoo] and similar
    • Wellbeing – health, education
    • Cultural preservation: creation too, not a museum piece
    • Economic development: how do we participate in the development of NZ
    • Mitimiti on the Grid. Very small school in Hokianga Harbour
  • I was tweeting a bit too much rather than typing here.
  • What is “inclusion”
  • Where should the leadership come from?
    • “everyone” . We live in a country small enough for everyone to do that.

 

General NetHui Feedback, some minor negatives...

  • Need filtering of questions. Too many in all sessions turned into long statements. Val Aurora outlines a good method to prevent this.
  • I went to the “Quiet Room” once and there were people holding a noisy conversation
  • Heard there was a bit of an aggressive questioner during the e-Mental Health session


July 09, 2015

NetHui 2015 – Friday Morning

Panel: Adapt or die? News media, new media, transmedia

  • Panelists: Megan Whelan (Radio New Zealand), Alex Lee (Documentary Edge Festival), Walid Al-Saqaf (Internet Society), Tim Watkin (Executive Producer of The Nation and blogger), Carrie Stoddart-Smith (blogger).
  • Panel moderator: Paul Brislen.
  • Intro Megan
    • Been at Radio NZ for 10 years. Website back then just frequencies and fax number
    • Good at Radio, not doing Internet very well
    • New Job as community engagement editor
    • Internet completely changed how the job is done.
    • Sacrifice accuracy and context sometimes to get the story out fast
    • Because people can now get there first and publish, they are no longer the gatekeepers of information. Getting used to others knowing more than we do
  • Alex
    • Sees himself as creative entrepreneur
    • A few years ago content meant seeing documentaries play in the cinema
    • Storytelling being distributed. Communities already telling their own stories.
    • 2 types of people in Audience. Skimmers and people wanting to do a deep dive
    • Storytellers know how to tell the story, sometimes not so much about the technology
    • Developing collaboration between technologists and creatives
  • Carrie
    • Blogging and social media provided new spaces for stories
    • Maori TV. Maori people in the “Ngati blogosphere”
    • Telling our own stories not just having others telling them
    • Media still highlights negative rather than positive stories about Maori
    • “Social media & blogging can facilitate stories and getting to know each other online”
    • But the Internet allows Maori to bypass the media to get positive stories out to a national/international audience
  • Walid
    • Internet should be empowering tool
    • The problem with the Internet is the people on it, not the Internet itself
    • A characteristic of traditional media is that there is a gatekeeper
    • New media is that everybody is responsible for their own actions
    • 60% of what is on social media is fake.
  • Tim
    • Every newsroom in NZ is running digital-first
    • No sustainable profit model for media orgs online
    • Digital tools give the media a lot more tools and ability to tell stories
    • With speed comes a loss of quality, a loss of subeditors
    • The Internet has sucked a lot of money out of journalism (especially the loss of classifieds)
    • Nostalgia forgets how bad journalism used to be.
    • So much pressure on resources but less money
    • Example of real-time fact checking during interviews
  • Question for Alex.
    • You want people to interact with docos, but hasn't the past shown people don't really?
    • Alex says that people have in the past
    • Refers to National Film Board of Canada websites and interaction with their documentaries
    • These days all new docs are required by broadcasters and funders to have an interaction and social media strategy
  • Mixing of Advertising and journalism undermine content?
    • A bit but it is a source of money that helps keep the rest afloat.
  • Is mainstream media actually verified compared to Social media
    • Yes it is
    • Use verified accounts on Twitter to at least ensure the person is real
  • Opinion on tools such as “data miner” which takes news across internet and aggregates it?
    • Newsrooms have a lot of expertise
    • But less now as newsrooms get hollowed out
    • 8 feature editors at NZ Herald 10 years ago. Just 1.5 now
  • People can now fact-check journalism instantly
    • Good in one way
    • But diversity of knowledge means fact checking harder
  • What about the economic side of this? Where do you see economic support for high-quality content coming from?
    • Sugar Daddy. Eg Washington Post supported by Jeff Bezos
    • Some kind of paywall seems to be the main option
  • Responsibility to highlight stories and come back to old/ongoing stories
    • Yes they are revisited by media
  • How far through a digital day could somebody go and only experience Maori?
    • Some people only tweet in Maori
    • Work at places where people primarily work in Maori
  • If money is tight and media companies consolidate does media have the room to push against the “powers that be”
    • Pretty much has always been the case
    • Getting harder but not astronomically harder than it used to be.

NZ culture online

  • Facilitators: Amber Craig, Bronwyn Holloway-Smith, Dan Shannan
  • Amber
    • How does NZ tell our story online with youtube etc
    • How to compete with other countries
  • Dan
    • Documentary NZ Trust
    • Looking into content and presentation of content
    • Lots of new platforms. Hard to negotiate with each of them
    • Have to reach people outside the main centres and people not able to attend festival events
    • Funding bodies want to fund content about NZ only. They won't fund NZers telling stories about other places or non-NZ stuff; it has to be told from an NZ point of view
  • Bronwyn
    • Artist
    • People using various media

I switched to a different session after 10 minutes

Slowing the fast lane – Net Neutrality

  • There's a legacy that pervades the NZ Internet: “marketing by confusion”
  • Incumbents offer inferior products to smaller ISP and exploit consumer ignorance
  • Almost all NZ ISPs do “not net neutral” stuff to save costs and improve user experience. eg they cache Google.
  • Netflix effect driving up demand across network ( 40% growth in 3 months) . Need to find a way of pricing that. Why shouldn’t we look at options to manage the network and push pricing signals back up the line
  • Why does Spark not have Netflix caches? Why does Spark not peer? Spark guy refuses to answer.
  • Spark gets away since it is the legacy incumbent Telco, the default ISP to get away with a lot of stuff.
  • Spark is expert at manipulating the outcome
  • UFB levels the playing field. Market Failure will come from small ISPs not getting the scale to compete.
  • Data caps are now high enough that zero-rating is no longer a thing. However, packet prioritisation is still a thing that ISPs can hang over providers.
  • Alternative IXs are being created due to dissatisfaction regarding port costs at the current IXs
  • Prediction is that if NZ goes down net neutrality path it will fail, cause it won’t moderate unfair use of market power. Legislation will be narrow and based on the technology of the day, will be left behind too quickly.
  • Vote with your feet away from Spark and Vodafone. While they have the market share they will keep abusing it.
  • Spark customer: Spark works okay, don't care about the random politics, they work okay.
  • Peering can go via the Telecommunications Commissioner; it doesn't need a politician
  • The Peering policy is damaging the NZ Internet.
  • ISPs that do not peer are less robust than those who do (especially in emergencies)

Copyright on the Internet

  • Facilitators: Hadyn Green, Matthew Jackson, Trish Hepworth
  • Trish
    • The Internet makes it easy to copy things and violate copyright
    • From the Internet point of view copying is the core function; restricting that directly impacts the Internet
    • Website blocking – providers get sites blocked by copyright holders. Collateral damage is a problem: other sites and other services
    • Tracking people and sending warning notices. To what extent should monitoring be allowed “just” to prevent copyright infringement?
    • Should technologies like VPNs be allowed if they are copyright-circumvention applications
    • Should TPM/DRM be put on everything?
    • Can you copyright an API? Can you restrict people from using it everywhere? How important is interoperability without explicit permission?
    • Copyright/Patents over software
    • How do you regulate a digital single market across multiple countries and multiple jurisdictions and cultures?
    • Should data be protected by copyright? Should text mining be allowed?
  • Paula Browning
    • From the “We create” , creative sector
    • Instead of thinking copyright is broken because you cannot watch GoT at 1pm, consider that the Internet is a massive opportunity for NZ. Copyright is needed for the industry to make money.
    • Games industry will generate $500 million this year
  • Paddy Buckley – Quickflicks
    • Problem is the current licensing model of content. Licensed by territory
    • Challenge people to name 5 TV series you cannot watch via NZ services
    • Need to keep territory-specific licensing, because otherwise services are not going to have any local focus
    • Also content creators make the rules and they want territory licensing
  • You've got to respect their rules because it is their content... Maybe not, because random rules sometimes don't make sense anymore
  • 28% of people use VPNs or some other place changing technology
  • Should we take things to court to get judicial clarity?
  • Copyright originally had to do with regulation of the printing press (i.e. “technology”), not the regulation of content.
  • Copyright has, from the start until now, always been oriented towards one-to-many distribution. New systems are many-to-many and involve “copying” for all interactions.
  • Why are VPNs a problem since money is still going to creators? – That is not how the model works; you have to pay for something before creating it. Selling to different territories is how stuff is funded. Future business models are still not developed enough for everyone.
  • Worry about the amount of effort that goes into enforcing the current business models rather than looking at new ones. Especially what is this happening in NZ which is not solved by the current model.
  • TPP criminalise breaking DRM even when you legally have access to the resulting content
  • Vertical integration like Disney's allows resale of content across songs, TV, parks, re-releases, etc. over a period of 50+ years
  • Regional licensing allows a single provider to pay for something and the content creator to get a lump sum of money. The local provider can also promote the show locally.
  • Content providers already do the sums of global vs regional deals
  • Physical goods are already restricted to different places via exclusive import agreements.
  • Same with software: one example where the NZ version of software was 10x more expensive and 2 versions behind.
  • “It seems clumsy attempts to secure copyright are still driving users to piracy. Cost and complexity are not being addressed”
  • Copyright laws should not be written solely by the content industry since they will solely reflect the interest of that industry.
  • Is there a shortage of music, movies, or TV shows right now? Is copyright really killing the industry?
  • Bollywood movie industry often legitimately released on Youtube after a few years of standard release.
  • Only reason that publishers have control of copyright (of books) is market failure. You have to go via intermediaries because you can’t do it yourself.
  • VPNs have lots of other uses beyond copyright evasion. Shouldn’t be banned just to prevent that.
  • Getting rid of “work for hire” would seem to be a problem when something like 5,000 people have worked on a movie.
  • Suggestion that there should be a licence like APRA's to allow people to download what they want for a fixed fee each year.
  • Need to sort out the problem with lengthening copyright period and orphan works.

 


dpsearch-4.54-2015-07-06

A new snapshot version of DataparkSearch Engine has been released. You can get it on Google Drive.

Here is the list of changes since previous snapshot:

  • The Crossword section now includes the value of the TITLE attribute of the IMG tag, and the values of the ALT and TITLE attributes of A and LINK tags, found on documents pointing to the document being indexed
  • Meta PROPERTY is now indexed
  • URL info data is now stored for all documents with HTTP status code < 400
  • configure now understands the --without-libextractor switch to build dpsearch without libextractor support even if it has been installed
  • robots.txt support is enabled for sites crawled using the HTTPS scheme
  • An AuthPing command has been added to send an authorisation request before getting documents from a website. See details below.
  • A Cookie command has been added.
  • Added support for SOCKS5 proxies without authorisation and with username authorisation. See details below.
  • A number of minor fixes

AuthPing command

Some websites may serve different content to a logged-in user. In most cases the login process consists of sending a POST or GET HTTP request to a specific URL before you start to receive the targeted content. You can use the AuthPing command to send such an authentication request before requesting any documents from the website.

E.g.:


AuthPing "POST https://commercial-site.ext.au/user/login.php u=bot%40user.ext.au&p=super%40pass"

This command specifies a POST request to be sent to the URL https://commercial-site.ext.au/user/login.php with the following CGI payload: u=bot%40user.ext.au&p=super%40pass

The AuthPing command should be specified before each Server/Realm/Subnet command it affects. The specified request is sent each time an indexing thread accesses a web server for the first time in a run session.

Using SOCKS5 proxy

The Proxy command now accepts a proxy type option with the value http or socks5. If you need to use username authentication with a SOCKS5 proxy, please use the ProxyAuthBasic command to specify the username and password.

E.g.:


Proxy socks5 localhost:9050

This example specifies a SOCKS5 proxy connection to a local Tor instance, which uses no authentication method for the connection.

Ableton and Ableton Push Hacking

For those who have been tracking this blog, it's been obvious that I've recently been spending more and more time with the Ableton Push.



If you don't know what this is please see the following...

https://www.ableton.com/en/help/learn-push/

http://blog.dubspot.com/learned-30-days-ableton-push/

http://sonicbloom.net/en/two-weekends-with-ableton-push-part-1-review/

http://emilehoogenhout.com/reviews/ableton-push-an-exploration-emile-hoogenhout/

Easy to miss Push features

https://forum.ableton.com/viewtopic.php?f=55&t=196560

Helpful Push information.

https://forum.ableton.com/viewtopic.php?f=55&t=215744



Jeremy Ellis meets Ableton Push

https://www.youtube.com/watch?v=EdCgA54g-aM

Mad Zach Push Performance Walkthrough

https://www.youtube.com/watch?v=iSZm4ECbE5k

Decap Push Performance

https://www.youtube.com/watch?v=ZOtj5WtkR1Q



It's basically an advanced, modern musical instrument/MIDI controller.



Others have attempted to decompile and extend/modify the behaviour of the device, but while the information and extensions that have been provided are interesting and useful, they have been somewhat limited.

https://github.com/gluon/AbletonLive9_RemoteScripts

http://julienbayle.net/ableton-live-9-midi-remote-scripts/

http://julienbayle.net/ableton-push/

ableton: just release the py midi remote scripts

https://forum.ableton.com/viewtopic.php?f=1&t=134338

Live 9 MIDI Remote Scripts revealed...

https://forum.ableton.com/viewtopic.php?f=1&t=190069

http://livecontrol.q3f.org/ableton-liveapi/articles/introduction-to-the-framework-classes/



I'm beginning to understand why. The following link provides an update to automatically generated documentation (via epydoc) of decompiled Ableton Remote Script code (my scripts for decompilation and automated documentation are included in the package).


https://wiki.python.org/moin/DocumentationTools

http://stackoverflow.com/questions/1125970/python-documentation-generator

http://sphinx-doc.org/

https://packages.debian.org/search?keywords=python-sphinx

https://en.wikipedia.org/wiki/Comparison_of_documentation_generators

https://packages.debian.org/stable/python/python-epydoc

https://packages.debian.org/wheezy/python-epydoc
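
The author's own decompilation/documentation scripts ship in his package. Purely as a hypothetical sketch of that style of workflow - assuming a decompiler such as uncompyle6 (or uncompyle2, depending on the bytecode version) and epydoc are installed, and with the paths adjusted for your Live install - something like the following batch-processes the Remote Script .pyc files and generates HTML API docs:

import os
import subprocess

# Hypothetical paths - adjust for your Live install and output location.
SCRIPTS_DIR = r"C:\ProgramData\Ableton\Live 9\Resources\MIDI Remote Scripts"
OUT_DIR = "decompiled"

# Decompile every .pyc under the Remote Scripts tree into OUT_DIR,
# preserving the directory layout so package imports still line up.
for root, _dirs, files in os.walk(SCRIPTS_DIR):
    for name in files:
        if name.endswith(".pyc"):
            rel = os.path.relpath(root, SCRIPTS_DIR)
            dest = os.path.join(OUT_DIR, rel)
            if not os.path.isdir(dest):
                os.makedirs(dest)
            subprocess.call(["uncompyle6", "-o", dest,
                             os.path.join(root, name)])

# Generate HTML API documentation from the recovered sources.
subprocess.call(["epydoc", "--html", "-o", "apidocs", OUT_DIR])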



If you want to make any additional modifications of behaviour you'll need to be aware of the following:

- you'll need to catch up on your Python coding

http://stackoverflow.com/questions/36901/what-does-double-star-and-star-do-for-python-parameters

http://stackoverflow.com/questions/400739/what-does-asterisk-mean-in-python
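
For reference, the star/double-star parameters those links cover behave like this (a generic Python 3 refresher, not Ableton-specific code):

def log_call(*args, **kwargs):
    # *args collects positional arguments into a tuple,
    # **kwargs collects keyword arguments into a dict.
    print("positional:", args)
    print("keyword:", kwargs)

log_call(1, 2, colour="red")  # positional: (1, 2)  keyword: {'colour': 'red'}

# The same stars also unpack sequences/dicts at call sites:
params = {"colour": "green"}
log_call(*[3, 4], **params)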

https://en.wikipedia.org/wiki/Pickle_%28Python%29

http://stackoverflow.com/questions/514371/whats-the-bad-magic-number-error

http://stackoverflow.com/questions/12233837/bad-magic-number-while-trying-to-import-pyc-module

http://stackoverflow.com/questions/6034621/bad-magic-number-error-persists-even-after-rebuilding-the-pyc-file

https://shankaraman.wordpress.com/tag/how-to-fix-runtimeerror-bad-magic-number-in-pyc-file/
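
A "bad magic number" error usually means the .pyc was compiled by a different interpreter version than the one importing it (Live 9 embedded a Python 2 interpreter). A quick check, sketched here for Python 2 (on Python 3 use importlib.util.MAGIC_NUMBER instead of imp.get_magic()), compares the file's first four bytes against your interpreter's magic value:

import imp
import sys

def pyc_magic_matches(path):
    # A .pyc file starts with a 4-byte magic number identifying the
    # bytecode version that produced it.
    with open(path, "rb") as f:
        file_magic = f.read(4)
    return file_magic == imp.get_magic()

if __name__ == "__main__":
    path = sys.argv[1]
    print("%s matches this interpreter: %s" % (path, pyc_magic_matches(path)))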

http://www.tutorialspoint.com/python/python_exceptions.htm

http://www.linuxtopia.org/online_books/programming_books/python_programming/python_ch17s03.html

- you'll need knowledge of how the device works, music and mathematical theory, Ableton, and core computing knowledge. It is not sufficient to know how they work separately. You need to know how everything fits together.

- sounds obvious, but start small and move up. This is critical, particularly with reference to the awkward style of programming that they can sometimes resort to. More on this below

- the code can vary in quality and style quite significantly at times. At times it seems incredibly clean, elegant, and well documented. At other times there is no documentation at all, and the code doesn't seem to be well designed or engineered, or to have been written with maintenance in mind. For instance, a commonly used design pattern is MVC; this code doesn't seem to follow it. They use a heap of sentinels throughout their code. Moreover, the characters that are used can be a bit confusing. They don't use preprocessor-style constants where they may be better suited. If you break certain aspects of the code you can end up breaking a whole lot of other parts. This may be deliberate (to reduce the chances of third-party modification, which is likely, particularly as there seem to be some authentication/handshake mechanisms in the code to stop it from working with 'uncertified devices') or not (they lack resources or just have difficult timelines to deal with)

https://mail.python.org/pipermail/tutor/2003-July/024206.html

http://stackoverflow.com/questions/2682745/creating-constant-in-python
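
Python has no preprocessor constants; the usual conventions (which the links above discuss) are uppercase module-level names and dedicated sentinel objects. A generic illustration, not taken from Ableton's code:

# Convention only: Python won't stop reassignment, but uppercase
# module-level names signal "treat this as a constant".
NUM_PAD_ROWS = 8
NUM_PAD_COLS = 8

# A unique sentinel distinguishes "argument not supplied" from
# legitimate values such as None.
_UNSET = object()

def get_setting(name, default=_UNSET):
    settings = {"tempo": 120}
    if name in settings:
        return settings[name]
    if default is _UNSET:
        raise KeyError("no setting named %r" % name)
    return default

print(get_setting("tempo"))        # 120
print(get_setting("swing", None))  # None rather than a KeyError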

- be prepared to read through a lot of code just to understand or make a change to something very small. As stated previously, strictly speaking they at times don't adhere to good practice. That said, other aspects can be changed extremely easily without breaking other components

- due to the previous two points it should seem obvious that it can be very difficult to debug things sometimes. Here's the other thing you should note,

- the reason why Ableton suffers from strange crashes and hangs from time to time becomes much more obvious when you look at the way they code. In the past, I've built programs (ones which rely on automated code generation in particular) that relied on a lot of consecutive steps that required proper completion/sequencing for things to work properly. When things work well, things are great. When things break, you feel and look incredibly silly

- you may need to figure out a structure for ensuring and maintaining a clean coding environment. I try to have two screens, one for clean code and another for modified code. Be prepared to restart from scratch by reverting to clean pyc code and only one or a small number of modified py files.

- caching occurs in situations where you may not entirely expect. If you can not explain what is happening and suspect caching just restart the system. Better yet, maintain your development environment in a virtual machine to reduce hardware stress caused by continual restarts.

- you will need patience. As stated previously, due to the way code has been structured (sometimes) you'll need to understand it properly to allow you to make changes without breaking other parts. Be prepared to modify, delete, or add code just to help you understand it

- if you've ever dealt with firmware or embedded devices on a regular basis you would be entirely familiar with some of what I'm talking about. Like a lot of embedded devices you'll have limited feedback if something goes wrong and you'll be scratching your head with regards to how to work the problem

http://dtbnguyen.blogspot.com.au/2012/07/if-only-reading-were-easier.html

http://dtbnguyen.blogspot.com.au/2012/08/funky-firmware.html

You may require a lot of Linux/UNIX based tools and other debugging utilities such as IDA Pro, Process Explorer and Process Monitor from the Sysinternals Suite. Once you examine Ableton using such utilities, it becomes much clearer how the program has been structured, engineered, and designed. One thing that can cause mayhem in particular is the Ableton Indexer which when it kicks in at the wrong time can make it feel as though the entire system has frozen.

https://technet.microsoft.com/en-us/sysinternals/bb545021.aspx

https://en.wikipedia.org/wiki/Sysinternals

Ableton indexing crashes

https://forum.ableton.com/viewtopic.php?f=1&t=206205

(42474187) Disable "Ableton Index" possible?

https://forum.ableton.com/viewtopic.php?f=52&t=191573

http://audiosex.pro/index.php?/topic/11798-ableton-9-killing-index-process-to-speed-up-workflow/

http://audiosex.pro/index.php?/topic/11815-a-little-app-to-speed-up-live-9-mac-only/

https://www.ableton.com/en/help/article/indexer-crash/

The actual index file/s are located at

C:\Users\[username]\AppData\Roaming\Ableton\Live 9.1.7\Database

- the most relevant log file is located at,

C:\Users\[username]\AppData\Roaming\Ableton\Live 9.1.7\Preferences\Log.txt

The timestamps are based on the amount of time since program startup. The time of startup is clearly outlined.

Ableton takes 130 - 2 mins to start up ?

https://forum.ableton.com/viewtopic.php?f=1&t=6308

Delete it if you need to, e.g. if you get confused about how it works.

https://www.ableton.com/en/help/article/live-crash-packs/

https://www.ableton.com/en/help/article/live-crashing-what-should-i-do-now/

https://www.ableton.com/en/help/article/how-to-out-of-memory/

- be aware that there are some things that you can't do anything about. The original Novation Launchpad was considered somewhat sluggish in terms of refresh rate and latency. The electronics were subsequently updated in the Novation Launchpad S to deal with it. You may encounter similar circumstances here.

Push browser - slow, freezing, sluggish :(

https://forum.ableton.com/viewtopic.php?f=55&t=215539

- they have a strong utility/systems engineering mentality. A lot of files are archives which include relatively unobfuscated content. For instance, look in

C:\Users\[username]\AppData\Roaming\Ableton\Live 9.1.7\Live Reports

and you'll find a lot of 'alp' files. These are 'Crash Reports' which are sent to Ableton to help debug problems. Rename them to a gz file extension and run them through hexedit. Same with 'adg' audio device group files: rename to gz and gunzip to see a flat XML file containing some encoded information but mostly free/human-readable content. It will be interesting to see how much of this can be manually altered, achieving flexibility in programming without having to understand the underlying file format.
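
Since these files are just gzip archives, you don't even need to rename them; a few lines of Python will dump the underlying XML (the path below is a placeholder):

import gzip

# An .adg/.alp file is gzip-compressed; an .adg's payload is
# (mostly) human-readable XML. The path is a placeholder.
path = r"C:\some\where\MyRack.adg"

with gzip.open(path, "rb") as f:
    xml = f.read()

print(xml[:500])  # peek at the start of the XML document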

https://www.ableton.com/en/help/article/filetypes-used-by-ableton/

- each version of Ableton seems to include a small version of Python. To make certain advanced extensions work, others have suggested installing libraries separately or a different version of Python...

- be prepared to learn multiple protocols and languages in order to make the changes that you want

How to control the Push LCD text with sysex messages

https://forum.ableton.com/viewtopic.php?f=55&t=193744
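
As a hedged sketch of what that thread describes: community documentation suggests each of the Push's four LCD lines is written with a sysex message of the form F0 47 7F 15 <24+line> 00 45 00 <68 characters> F7. Treat those byte values as assumptions taken from forum posts and verify them against the thread; the port name also varies by OS (list yours with mido.get_output_names()). This sketch uses the third-party mido library:

import mido

def lcd_line_message(line, text):
    # Header/command bytes follow community forum documentation for
    # Push 1 (unverified assumption); mido adds the F0/F7 framing.
    payload = [ord(c) & 0x7F for c in text.ljust(68)[:68]]
    data = [0x47, 0x7F, 0x15, 24 + line, 0x00, 0x45, 0x00] + payload
    return mido.Message('sysex', data=data)

# Port name is an assumption - check mido.get_output_names() first.
with mido.open_output('Ableton Push User Port') as port:
    port.send(lcd_line_message(0, 'Hello from Python'))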
http://archive.monome.org/community/discussion/1648/share-your-arduinomonome-code/p1

http://hackaday.com/2008/08/13/rgb-monome-clone/

https://en.wikipedia.org/wiki/Protofuse

http://julienbayle.net/works/creation/protodeck-midi-controller-for-ableton-live/



For a lot of people, the device seems incredibly expensive for what amounts to a MIDI controller. It was much the same with me. The difference is that it's becoming increasingly clear how flexible the device can be with adequate knowledge of the platform.

http://www.nativekontrol.com/

http://blog.dubspot.com/5-ableton-push-software-devices/

Push feature requests

https://forum.ableton.com/viewtopic.php?f=55&t=192110

The Ableton Push is a good platform but it will never realise its full potential if the software isn't upgraded.



If you are interested in signing up to test the latest Beta version of Ableton please see the following...

https://ableton.centercode.com/signup/

NetHui 2015 – Thursday Afternoon

Domains: growth, change, transition

  • Transition of .nz to second level domains
  • Some stuff re moving root zone control away from the US
  • Problem with non-ASCII domains (IDNs). They work okay, but not in 3rd-party apps or apps in organisations, e.g. you can't register one on Facebook or other websites.
  • 60% of Government Depts don’t accept IDNs as email addresses, lots of other orgs
  • 1/3 of all new .nz domains created at second level
  • Around 95k of 600k .nz domains are now at the second level (about 2/3 of these come from rights as existing 3LD holders)
  • Some people, when you give them your address.nz, change it into address.co.nz
  • 1st principles of .nz whois public policy.
  • People are in danger if their address is published
  • But what about the ability to contact the real owner of a domain?
  • 4 people in room with signed domains
  • 300 signed .nz domains. 150 with DS record
  • Around 3 people in room with new TLDs. See ntldstats.com for current stats

Internet of Things

  • Where does the data from your house appliances go?
  • Forwarded to other companies
  • Issues need to be understandable by ordinary citizens especially terms and conditions
  • Choose the data that you choose to share with the company rather than company choosing what it shares with you (and others)
  • In health care area people worried about sharing data if it will affect their insurance premiums or coverage
  • Many people don’t understand what their data is; they don’t understand that every time they do something (on a device) it is stored and can be used later. How to educate people without sounding paranoid?
  • “IoT is connecting things whose primary purpose is not connecting to the Internet”
  • “The cost of sharing is bearable, because the sharing is valuable.”
  • More granularities of trust. No current standards or experience or feeling for this since such a new area and rapidly evolving
  • NZ law should override overly aggressive agreements (by overseas companies)
  • Some discussion about standards, lots of them, full stack, piecemeal, rapidly changing
  • Will the IoT make everything useless after the zombie apocalypse?
  • “Denial of Service attack on your IoT pill bottle would be bad!”
  • Concern that something like a pill bottle failing can put life in danger. Very high level of reliability needed which is rare and hard in software

Panel: Parliamentary Internet Forum

  •  With Gareth Hughes (Green Party), Clare Curran (Labour Party), Brett Hudson (National Party), Ria Bond (NZ First), Karen Melhuish Spencer (Core Education), Nigel Robertson (University of Waikato)
  • What role does the education system play in the Internet?
    • National guy mostly talked about UFB and RBI programmes, computers in homes
    • Gareth Hughes adopts the “I went out to XYZ School” story. Pushes that teachers are not trained and 1 in 4 homes don’t have Internet access.
    • Clare – Got distracted by discussion re her pants. But she said 40% of jobs are at risk over the next 10-15 years due to the impact of technology
    • Karen – I got distracted by another clothing related discussion on twitter
    • Nigel – 1. Use the Internet to do what we already do, better. 2. Help people to use the Internet better (digital literacy)
  • Lots of discussion about retraining older people to handle jobs in the future as their present jobs go away
  • How much should government be leading vs getting out of the way and just funding it?
    • Nigel – Government should provide direction. Different in tertiary and other sectors
    • Karen – Collaborative and connected but not mandating
  • “We need to prepare people not just for the jobs of the future, but also to create the companies of the future” – Martin Danner
  • Lots of other stuff but I got distracted.


#PerconaLive Amsterdam – schedule now out

The schedule is out for Percona Live Europe: Amsterdam (September 21-23 2015), and you can see it at: https://www.percona.com/live/europe-amsterdam-2015/program.

From MariaDB Corporation/Foundation, we have 1 tutorial: Best Practices for MySQL High Availability – Colin Charles (MariaDB)

And 5 talks:

  1. Using Docker for Fast and Easy Testing of MariaDB and MaxScale – Andrea Tosatto (Colt Engine s.r.l.) (I expect Maria Luisa is giving this talk together with him – she’s a wonderful colleague from Italy)
  2. Databases in the Hosted Cloud – Colin Charles (MariaDB)
  3. Database Encryption on MariaDB 10.1 – Jan Lindström (MariaDB Corporation), Sergei Golubchik (Monty Program Ab)
  4. Meet MariaDB 10.1 – Colin Charles (MariaDB), Monty Widenius (MariaDB Foundation)
  5. Anatomy of a Proxy Server: MaxScale Internals – Ivan Zoratti (ScaleDB Inc.)

OK, Ivan is from ScaleDB now, but he was the SkySQL Ab ex-CTO, and one of the primary architects behind MaxScale! We may have more talks as there are some TBD holes to be filled up, but the current schedule looks pretty amazing already.

What are you waiting for, register now!

July 08, 2015

NetHui 2015 – Thursday Morning

Ministerial address: Hon. Amy Adams, Minister for Communications

  • Mentions she was at a community group meeting where people were “shocked” when it was suggested that minutes be sent via email
  • Talk up of the UFB rollout. Various stats about how it is going
  • Also mentioned that Mobile build is part of UFB, better cellular connectivity in rural regions
  • Notes that this will never be 100% complete. The bar keeps moving
  • Very different takeup in different regions. 2% in some, 19% in others. Local organisations pushing
  • Good Internet is especially important for remote countries like New Zealand
  • Talk about getting better access in common areas (eg shared driveways) for network builds
  • Notes how Broadcasting and Communications as well as other areas are converging. Previously they were separate silos. Similar for other areas.
  • Harmful Digital Communications Act.
    • Says it’s a new framework; adjustment may be needed while it beds down in the courts.
    • Says that majority of cases will go to mediation
    • A similar Act in Australia saw very few things go to court
    • Gave similarly silly literal readings of other acts (e.g. the RMA requires a permit to sneeze)
  • 5 “Questions” to minister. 2 on TPP, 1 on Captions, 1 pushing some project and one actual question that she got to answer.
  • Maybe they should look at this idea for the Questions

Keynote: Kathy Brown, ISOC CEO

  • GDP of a nation is highly correlated with the growth of the Internet
  • 75% of the benefit of the Internet goes to existing businesses
  • ISOC Global Internet Report 2015
  • Huge growth in Mobile Internet
  • “94% of the global population is covered by mobile networks. Mobile broadband covers 48% of global population”
  • Huge gap between developed and developing countries
  • Report is Online and “Interactional”
  • Challenges
    • Openness of the Internet means information is out there, exposed and gettable by the wrong people sometimes
    • Generational divide in attitude to privacy
  • Privacy is a matter of personal choice. The tools should be available should you wish to use them

Govt 2.0: Digital by default

  • Rachel Prosser and David Farrar facilitating.
  • Room full
  • Result 10 programme background
  • NZ Government Web toolkit
  • 50,000 registered with NZ Realme site
  • Shared rules between local governments; problems with making the same rules work everywhere. Some limitations. Perhaps at least similar technical standards
  • People don’t care about government structure, they just want a service; don’t care how depts are arranged.


NetHui 2015 – InTac afternoon

Building an access network for demand and scale – new challenges – Kurt Rogers, Chorus

  • Over 1 million broadband connections on access network
  • 70-80% of BB connections
  • Average connection speed now near 20Mb/s due to VDSL and Fibre
  • Busiest 15 minute period (around 9pm Thursday) of week averaging 0.5Mb/s per user (up from 100kb/s just 3 years ago)
  • Jump in mid-2013 when Netflix and Lightbox launched
  • Average bandwidth per user growing 50%/year. Grown that much in 1st half of 2015
  • Quite a few people still on ADSL1 modems when ADSL2 would work
  • Similarly, a lot of people can get VDSL but don’t realise it
  • Lots of people on 30Meg fibre plan at the start, now most going for 100Mb/s
  • Rural broadband (RBI)
    • 85k lines upgraded to FTTN
    • Average speed jumped 5.6Mb/s to 15Mb/s after a single rural cabinet upgraded cause everybody could now use ADSL2 and faster uplink. One fibre guy got 48Mb/s on VDSL, other 37Mb/s
    • More speed out there than some people realize
  • VDSL bandplan moving from 997 to 998. Trial average speed increases were from 32 to 46Mb/s downstream. Minimal change on upstream speed.
  • Capacity
    • Aggregation link bandwidth. Alert threshold at 70%, Max threshold at 90%
  • Technology down the road to speed up aggregation links with Next Generation PON technology

The new smart ISP – Colin Brown, GM of Networks at Spark

  • Working on caching infrastructure, bigger and closer to their edge
  • Big traffic growth this year
  • Big growth in mobile traffic especially upload
  • 60% of phones in stores are 4G capable
  • Providers investing a lot of money, profits lower. Less like banks, more like airlines
  • Technology refresh every 5 years rather than every 10


The Megatransaction: Why Does It Take 25 Seconds?

Last night f2pool mined a 1MB block containing a single 1MB transaction.  This scooped up some of the spam which has been going to various weakly-passworded “brainwallets”, gaining them 0.5569 bitcoins (on top of the normal 25 BTC subsidy).  You can see the megatransaction on blockchain.info.

It was widely reported to take about 25 seconds for bitcoin core to process this block: this is far worse than my “2 seconds per MB” result in my last post, which was considered a pretty bad case.  Let’s look at why.

How Signatures Are Verified

The algorithm to check a transaction input (of this form) looks like this:

  1. Strip the other inputs from the transaction.
  2. Replace the input script we’re checking with the script of the output it’s trying to spend.
  3. Hash the resulting transaction with SHA256, then hash the result with SHA256 again.
  4. Check the signature correctly signed that hash result.
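In rough Python the per-input work looks something like this (a sketch only: the tx object with an inputs list and a serialize() method is an illustrative interface, not bitcoind's actual classes):

import hashlib
from copy import deepcopy

def sha256d(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def sighash_all(tx, index, prev_script_pubkey):
    tx2 = deepcopy(tx)            # bitcoind copies the transaction too -- the slow part
    for txin in tx2.inputs:
        txin.script = b''         # blank the other input scripts (the "stripping" in step 1)
    tx2.inputs[index].script = prev_script_pubkey
    # Serialize, append the 4-byte SIGHASH_ALL suffix, then double SHA256 (steps 3-4).
    return sha256d(tx2.serialize() + (1).to_bytes(4, 'little'))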

Now, for a transaction with 5570 inputs, we have to do this 5570 times.  And the bitcoin core code does this by making a copy of the transaction each time, and using the marshalling code to hash that; it’s not a huge surprise that we end up spending 20 seconds on it.

How Fast Could Bitcoin Core Be If Optimized?

Once we strip the inputs, the result is only about 6k long; hashing 6k 5570 times takes about 265 milliseconds (on my modern i3 laptop).  We have to do some work to change the transaction each time, but we should end up under half a second without any major backflips.

Problem solved?  Not quite….

This Block Isn’t The Worst Case (For An Optimized Implementation)

As I said above, the amount we have to hash is about 6k; if a transaction has larger outputs, that number changes.  We can fit in fewer inputs though.  A simple simulation shows the worst case for a 1MB transaction has 3300 inputs, and 406000 byte output(s): simply doing the hashing for input signatures takes about 10.9 seconds.  That’s only about two or three times faster than the bitcoind naive implementation.
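A rough back-of-envelope shows where that comes from (assuming ~41 bytes per stripped input: 32-byte txid, 4-byte index, 1-byte empty script, 4-byte sequence):

inputs, output_bytes = 3300, 406000
stripped_tx = inputs * 41 + output_bytes   # ~541 kB hashed per input
total = inputs * stripped_tx               # quadratic in the input count
print(total / 1e9)                         # ~1.8 GB of double-SHA256 work for one block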

This problem is far worse if blocks were 8MB: an 8MB transaction with 22,500 inputs and 3.95MB of outputs takes over 11 minutes to hash.  If you can mine one of those, you can keep competitors off your heels forever, and own the bitcoin network… Well, probably not.  But there’d be a lot of emergency patching, forking and screaming…

Short Term Steps

An optimized implementation in bitcoind is a good idea anyway, and there are three obvious paths:

  1. Optimize the signature hash path to avoid the copy, and hash in place as much as possible.
  2. Use the Intel and ARM optimized SHA256 routines, which increase SHA256 speed by about 80%.
  3. Parallelize the input checking for large numbers of inputs.

Longer Term Steps

A soft fork could introduce an OP_CHECKSIG2, which hashes the transaction in a different order.  In particular, it should hash the input script replacement at the end, so the “midstate” of the hash can be trivially reused.  This doesn’t entirely eliminate the problem, since the sighash flags can require other permutations of the transaction; these would have to be carefully explored (or only allowed with OP_CHECKSIG).
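Python's hashlib illustrates why hashing the variable part last matters: a hash object can be copied mid-stream, so the invariant prefix only ever gets hashed once:

import hashlib

prefix = hashlib.sha256()
prefix.update(b'...common serialized transaction prefix...')

for script in (b'input 0 script', b'input 1 script'):
    h = prefix.copy()                # resume from the shared midstate
    h.update(script)                 # only the per-input tail is new work
    digest = hashlib.sha256(h.digest()).digest()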

This soft fork could also place limits on how big an OP_CHECKSIG-using transaction could be.

Such a change will take a while: there are other things which would be nice to change for OP_CHECKSIG2, such as new sighash flags for the Lightning Network, and removing the silly DER encoding of signatures.

The sad state of MySQL and NUMA

Way back in 2010, MySQL Bug 57241 was filed, pointing out that the “swap insanity” problem was getting serious on x86 systems – with NUMA being more and more common back then.

The swapping problem is due to running out of memory on a NUMA node and having to swap things to other nodes (see Jeremy Cole‘s blog entry also from 2010 on the topic of swap insanity). This was back when 64GB and dual quad core CPUs was big – in the past five years big systems have gotten bigger.

Back then there were two things you could do to have your system be usable: 1) numa=off as a kernel boot parameter (this likely has other implications though) and 2) “numactl --interleave=all” in the mysqld_safe script (I think MariaDB currently has this built in if you set an option but I don’t think MySQL does, otherwise perhaps the bug would have been closed).

Anyway, it’s now about 5 years since this bug was opened, and even though there’s been a patch in the Twitter MySQL branch for a while (years?), and my Oracle Contributor Agreement signed patch has been attached to bug 72811 since May 2014 (over a year), we still haven’t seen any action.

My patch takes the approach that things allocated at server startup (e.g. the buffer pool) should be interleaved across nodes, while runtime allocations are probably per connection and are thus fine (in fact, better) as node-local allocations.

Without a patch like this, or without running mysqld with the right numactl incantation, you end up either having all your memory on one NUMA node (potentially not utilising full memory bandwidth of the hardware), or you end up with swap insanity, or you end up with some other not exactly what you’d expect situation.
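You can see which situation you're in by tallying /proc/<pid>/numa_maps, where fields like N0=1234 give the page count on each node (a quick sketch; substitute mysqld's actual pid for the placeholder):

import re
from collections import Counter

# Tally resident pages per NUMA node for one process.
pages = Counter()
with open('/proc/1/numa_maps') as f:   # placeholder pid; use mysqld's
    for line in f:
        for node, n in re.findall(r'N(\d+)=(\d+)', line):
            pages[int(node)] += int(n)
print(pages)   # a heavily skewed count is a candidate for swap insanity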

While we could have MySQL be more NUMA aware and perhaps do a buffer pool instance per NUMA node or some such thing, it’s kind of disappointing that for dedicated database servers bought in the past 7+ years (according to one comment on one of the bugs) this crippling issue hasn’t been addressed upstream.

Just to make it even more annoying, on certain workloads you end up with a lot of mutex contention, which can end up meaning that binding MySQL to fewer NUMA nodes (memory and CPU) ends up increasing performance (cachelines don’t have as far to travel) – this is a different problem than swap insanity though, and one that is being addressed.

Update: My patch as part of https://bugs.mysql.com/bug.php?id=72811 has been merged! MySQL on NUMA machines just got a whole lot better. I just hope it’s enabled by default…

July 07, 2015

NetHui 2015 – InTac morning

Introduction – Dean Pemberton, InternetNZ

Dean was going to do an intro but got cock-blocked by some guy in a High-Vis vest.

The People Factor: what users want – Paul Brislen, ex-CEO of TUANZ

  • Working from home since 1999, 30kb/s at first. Made it work
  • Currently has 10Mb/s shared with busy family, often congested, not using much TV yet
  • Television driving demand.
  • Some infrastructure showing the strain
  • Southern Cross replacement will be via Sydney. A couple of thousand km in the wrong direction when going to the US
  • Rural broadband still to deliver on the promise, no uptake stats, not great service level
  • Internet access is on the critical path for economic development. Lack of political will
  • Dean got to do his intro talk now.
  • Will Internet be priced on peak usage? A: Already offpeak discounts, some ISPs manage home/biz customer ratio to keep traffic balanced
  • Average usage per customer is 5Mb/s for a streaming-orientated ISP (account sold with device).
  • 60% of International traffic going to Aus (to CDNS)
  • Consumers don’t accept buffering; they want high quality video (bitrate and production quality). Want TV to just work.
  • NZ doesn’t want a “rural” level of Internet access, equivalent to a farm in more connected countries
  • Could multicast work for live events like sport?
  • Hard to get overage pricing to work when people leave the TV on all day
  • Plenty of people in Auckland not getting UFB till 2017 (or later)

The connected home and the Internet of Things – Amber Craig, ANZ

  • At top of Hype cycle
  • Has home Switches on Wemo (have to get upgraded)
  • Lots of devices generating a lot of data
  • Video Blogging – 10GB of raw data, 1GB of finished for just 5 minutes. Uploading to shared drives, sending back and forth through multiple edits
  • Network capacity is probably not much of an issue for IoT compared to video, but the home will be a source of a lot more uploads
  • With IPv6 maybe less NAT, harder to manage (since people are not used to it).
  • Whose responsibility is it to ensure that Internet works in every room
  • Building standards, what are customers, government, ISP each prepared to pay for?
  • What about medical dependency people who need Internet. A lot of this goes over GSM since that is more “reliable”

Lightbox – content delivery in New Zealand – Kym Nyblock, Chief Executive of Lightbox

  • Lightbox is part of Spark ventures, morepork, skinny, bigpipe
  • Lightbox – online TV service, $12.99/month for thousands of hours of online content
  • 40% of US households have SVOD, but pay-TV only down 25%
  • Many providers around the world, multiple providers in many countries. Youtube also bit player in the corner
  • SVOD have some impact on piracy, especially those who only pirate cause they want content same day as programme airs in the US
  • Lots of screens now in the house, TV not only viewed on TVs
  • Lightbox challenges
    • Rights issues, lots of competition with other providers, some with fuzzy launch dates
    • NZ Internet not too bad
    • Had to work within an existing company
  • Existing providers
    • Sky – 850k homes, announced own product, has most sports
    • Netflix – approx 30k homes, coming to NZ soon
  • From Biz plan to launch in 12 months
  • Marketing job to be very simple – “Grandma Rule” ( can be explained to Grandma, used by her)
  • Express service delivers content right after views in the US. Lots of views for the episodes that are brand new. One new episode can be 10% of days total views
  • Very agile company, plans changed a lot.
  • Future
    • Customers will have several providers and change often
    • Multiple providers in the market, more to come
    • Premium and exclusive content will drive, simple interface will keep it
    • Rights issues are a problem but locked into the studio system
    • Try to “grow the category”, majority on consumers still using linear, scheduled TV
    • Try to address local rights ownership. This is the bit where they dug at US based providers and people using them.
    • Working on a Sports offering
    • and then she showed a Lightbox ad :(
    • Question re costs to other ISPs of delivering good Lightbox performance due to charges from Spark-Wholesale for bandwidth exchanged. Not really answered

Quickflix – another view of content delivery in New Zealand – Paddy Buckley, MD of Quickflix NZ

  • 1st service to launch in March 2012
  • Subscription service for movies and TV shows and Standalone pay-per-view service for new-release movies and some TV shows
  • Across lots of devices, Smart TVs, phones, computers, games consoles, tablets, tivo, chromecast. No Linux Client :(
  • Just 15% of views via the website now
  • Content: New release movies, subscriptions content movies, TV shows
  • Uses Akamai for delivery. Hosting Centers in Sydney and Perth. AWS/Azure
  • Unwritten 5 second rule. Content should play within 5 seconds of pressing play
  • The future
    • Multiple Models, Not just SVOD, eg TVOD, AVOD, EVOD, EST
    • More fibre, fast home wifi and better hardware
    • VOD content getting nearer to the viewer. HbbTV combines broadcast and on-demand being done by freeview
    • Android TV
    • Viewing levels to increase (volume and frequency), people will pick and mix between providers
    • Aiming at 50% of households; 1 million is quite a lot at any scale.
  • Coming soon
    • 1080p/4K , 5.1 surround sound
    • Fewer device limits. All services and all devices
    • More streams
    • Changing release windows
    • Live streaming
    • PPV options to complement
    • Download now, view later
  • What we need from ISPs
    • Significant bandwidth
    • Mooorrreee bandwidth
    • People will change ISPs if the ISP can’t provide the level of service
    • Netflix is naming and shaming. Netflix best/worst list
  • Prediction that NZ could hit 50% SVOD within a couple of years
  • Asked if they will be going broke in next few months. Says he’s done a deal with Presto in Aus which will ease funding problems, but business as normal in NZ
  • SVOD has evolved from back-catalog TV shows a few years ago to first-run now. Will probably keep going forward with individual shows being provider-exclusive for now, especially since services are fairly low cost per month
  • A few questions about subtitles. Usually available (although can cost extra) but not good support in end devices for turning them on/off.


Linux Security Summit 2015 Schedule Published

The schedule for the 2015 Linux Security Summit is now published!

The refereed talks are:

  • CC3: An Identity Attested Linux Security Supervisor Architecture – Greg Wettstein, IDfusion
  • SELinux in Android Lollipop and Android M – Stephen Smalley, NSA
  • Linux Incident Response – Mike Scutt and Tim Stiller, Rapid7
  • Assembling Secure OS Images – Elena Reshetova, Intel
  • Linux and Mobile Device Encryption – Paul Lawrence and Mike Halcrow, Google
  • Security Framework for Constraining Application Privileges – Lukasz Wojciechowski, Samsung
  • IMA/EVM: Real Applications for Embedded Networking Systems – Petko Manolov, Konsulko Group, and Mark Baushke, Juniper Networks
  • Ioctl Command Whitelisting in SELinux – Jeffrey Vander Stoep, Google
  • IMA/EVM on Android Device – Dmitry Kasatkin, Huawei Technologies

There will be several discussion sessions:

  • Core Infrastructure Initiative – Emily Ratliff, Linux Foundation
  • Linux Security Module Stacking Next Steps – Casey Schaufler, Intel
  • Discussion: Rethinking Audit – Paul Moore, Red Hat

Also featured are brief updates on kernel security subsystems, including SELinux, Smack, AppArmor, Integrity, Capabilities, and Seccomp.

The keynote speaker will be Konstantin Ryabitsev, sysadmin for kernel.org.  Check out his Reddit AMA!

See the schedule for full details, and any updates.

This year’s summit will take place on the 20th and 21st of August, in Seattle, USA, as a LinuxCon co-located event.  As such, all Linux Security Summit attendees must be registered for LinuxCon. Attendees are welcome to attend the Weds 19th August reception.

Hope to see you there!

It's 10pm, do you know where your SSL certificates are?

The Internet is going encrypted. Revelations of mass-surveillance of Internet traffic have given the Internet community the motivation to roll out encrypted services – the biggest of which is undoubtedly HTTP.

The weak point, though, is SSL Certification Authorities. These are “trusted third parties” who are supposed to validate that a person requesting a certificate for a domain is authorised to have a certificate for that domain. It is no secret that these companies have failed to do the job entrusted to them, again, and again, and again. Oh, and another one.

However, at this point, doing away with CAs and finding some other mechanism isn’t feasible. There is no clear alternative, and the inertia in the current system is overwhelming, to the point where it would take a decade or more to migrate away from the CA-backed SSL certificate ecosystem, even if there was something that was widely acknowledged to be superior in every possible way.

This is where Certificate Transparency comes in. This protocol, which works as part of the existing CA ecosystem, requires CAs to publish every certificate they issue, in order for the certificate to be considered “valid” by browsers and other user agents. While it doesn’t guarantee to prevent misissuance, it does mean that a CA can’t cover up or try to minimise the impact of a breach or other screwup – their actions are fully public, for everyone to see.
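The logs themselves are queryable over a simple public HTTP API (RFC 6962). As a quick example, here's a Python sketch fetching the signed tree head of Google's "pilot" log (one log among several; they all expose the same endpoints):

import json
import urllib.request

url = 'https://ct.googleapis.com/pilot/ct/v1/get-sth'
with urllib.request.urlopen(url) as resp:
    sth = json.loads(resp.read().decode())
print(sth['tree_size'])   # number of certificates the log has accepted so far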

Much of Certificate Transparency’s power, however, is diminished if nobody is looking at the certificates which are being published. That is why I have launched sslaware.com, a site for searching the database of logged certificates. At present it is rather minimalist; however, I intend to add more features, such as real-time notifications (if a new cert for your domain or organisation is logged, you’ll get an e-mail about it), and more advanced searching capabilities.

If you care about the security of your website, you should check out SSL Aware and see what certificates have been issued for your site. You may be unpleasantly surprised.

July 06, 2015

Bitcoin Core CPU Usage With Larger Blocks

Since I was creating large blocks (41662 transactions), I added a little code to time how long they take once received (on my laptop, which is only an i3).

The obvious place to look is CheckBlock: a simple 1MB block takes a consistent 10 milliseconds to validate, and an 8MB block took 79 to 80 milliseconds, which is nice and linear.  (A 17MB block took 171 milliseconds).

Weirdly, that’s not the slow part: promoting the block to the best block (ActivateBestChain) takes 1.9-2.0 seconds for a 1MB block, and 15.3-15.7 seconds for an 8MB block.  At least it’s scaling linearly, but it’s just slow.

So, 16 Seconds Per 8MB Block?

I did some digging.  Just invalidating and revalidating the 8MB block only took 1 second, so something about receiving a fresh block makes it worse. I spent a day or so wrestling with benchmarking[1]…

Indeed, ConnectTip does the actual script evaluation: CheckBlock() only does a cursory examination of each transaction.  I’m guessing bitcoin core is not smart enough to parallelize a chain of transactions like mine, hence the 2 seconds per MB.  On normal transaction patterns even my laptop should be about 4 times faster than that (but I haven’t actually tested it yet!).

So, 4 Seconds Per 8MB Block?

But things are going to get better: I hacked in the currently-disabled libsecp256k1, and the time for the 8MB ConnectTip dropped from 18.6 seconds to 6.5 seconds.

So, 1.6 Seconds Per 8MB Block?

I re-enabled optimization after my benchmarking, and the result was 4.4 seconds; that’s with libsecp256k1 and an 8MB block.

Let’s Say 1.1 Seconds for an 8MB Block

This is with some assumptions about parallelism; and remember this is on my laptop which has a fairly low-end CPU.  While you may not be able to run a competitive mining operation on a Raspberry Pi, you can pretty much ignore normal verification times in the blocksize debate.


 

[1] I turned on -debug=bench, which produced impenetrable and seemingly useless results in the log.

So I added a print with a sleep, so I could run perf.  Then I disabled optimization, so I’d get understandable backtraces with perf.  Then I rebuilt perf from the kernel source package, because Ubuntu’s perf doesn’t demangle C++ symbols. (Are we having fun yet?)  I even hacked up a small program to help run perf on just that part of bitcoind.  Finally, after perf failed me (it doesn’t show 100% CPU, no idea why; I’d expect to see main in there somewhere…) I added stderr prints and ran strace on the thing to get timings.

July 05, 2015

Twitter posts: 2015-06-29 to 2015-07-05

CCR at OSCON

What is conflict?

I've given a "Constructive Conflict Resolution" talk twice now. First at DrupalCon Amsterdam, and again at DrupalCon Los Angeles. It's something I've been thinking about since joining the Drupal community working group a couple of years ago. I'm giving the talk again at OSCON in a couple of weeks. But this time, it will be different. Very different. Here's why.

After seeing tweets about Gina Likins’ keynote at ApacheCon earlier this year I reached out to her to ask if she’d be willing to collaborate with me on Conflict Resolution in open source, and ended up inviting her to co-present with me at OSCON. We’ve been working together over the past couple of weeks. It’s been a joy, and a learning experience! I’m really excited about where the talk is heading now. If you’re going to be at OSCON, please come along. If you’re interested, please follow our tweets tagged #osconCCR.

Jen Krieger from Opensource.com interviewed Gina and me about our talk – here’s the article: Teaching open source communities about conflict resolution

In the meantime, do you have stories of conflict in Open Source Communities to share?

  • How were they resolved?
  • Were they intractable?
  • Do the wounds still fester?
  • Was positive change an end result?
  • Do you have resources for dealing with conflict?

Tweet your thoughts to me @kattekrab

Here's the slides

July 03, 2015

Python Decompilation, Max4Live Programming, Ableton Push Colour Calibration, Automated DJ'ing and More

I was recently discussing with someone how Ableton programming/scripting works. This was particularly within the context of the Ableton Push device and possible hacking of other devices to allow for more sophisticated functionality. Apparently, many of the core scripts use Python. They need to be decompiled to allow you to have a proper look at them though. Obviously, some of the scripts are non-trivial and will require a sufficient understanding of both music and programming to be useful.



A decompilation of all files in the following directory,

C:\ProgramData\Ableton\Live9Suite\Resources\MIDI Remote Scripts\

is available here. The reason why I've done it is because others who have previously done it have removed it from their websites.



http://julienbayle.net/ableton-live-9-midi-remote-scripts/

http://blogs.bl0rg.net/netzstaub/2008/08/15/writing-ableton-control-surface-scripts/

http://remotescripts.blogspot.com.au/

http://julienbayle.net/PythonLiveAPI_documentation/Live9.0.6.xml



The decompilation was achieved using two small scripts which I created (available here); they use uncompyle2, https://github.com/Mysterie/uncompyle2, at their core. Since the current code contains an error which prevents a successful RPM build, I've had to make a small modification.
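The core of such a script is tiny. A minimal sketch (assuming uncompyle2's uncompyle_file entry point, which you should check against the version you install; the source path matches the directory above):

import os
import uncompyle2   # Python 2.7 only

SRC = r'C:\ProgramData\Ableton\Live9Suite\Resources\MIDI Remote Scripts'

for root, dirs, files in os.walk(SRC):
    for name in files:
        if name.endswith('.pyc'):
            src = os.path.join(root, name)
            with open(src[:-1], 'w') as out:    # Foo.pyc -> Foo.py
                uncompyle2.uncompyle_file(src, out)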


For those who want to know, uncompyle2 currently only works with Python 2.7. To get it running in a Debian based environment I had to change a symlink so that /usr/bin/python -> python2.7 as opposed to /usr/bin/python -> python2.6



To get the RPM build working I had to copy README.rst to README.
Running 'python setup.py bdist_rpm' would give me an RPM package. Running 'alien' allows conversion of the RPM to a DEB package for easy installation on a Debian based platform.

http://sourceforge.net/projects/easypythondecompiler/

http://stackoverflow.com/questions/8189352/decompile-python-2-7-pyc

http://depython.com/

http://reverseengineering.stackexchange.com/questions/1701/decompiling-pyc-files


Successful RPM and DEB packages are available from my website, https://sites.google.com/site/dtbnguyen/

The following ZIP archive contains updated code, RPM, and DEB packages.
The following ZIP archive contains the decompiled code and scripts to automate decompilation of the Ableton code.


For those who are interested, Max4Live programming looks rather interesting for building devices and effects. It also looks like a perfect choice for those who may be on a limited budget and looking to extend Ableton's capabilities.

https://www.ableton.com/en/blog/programming-in-max-for-live/

http://www.youtube.com/playlist?list=PLasl9I6VeCCrNLAoOiKibDqJc1rsjLSDi

http://www.patrickmuller.de/n-e-w-s/max-msp-programming/

https://docs.cycling74.com/max5/vignettes/intro/doclive.html

https://www.ableton.com/en/help/article/how-get-started-max-live-9/

http://www.abletonop.com/2012/07/sell-your-live-devices-on-abletonop/

http://www.synthtopia.com/content/2009/11/24/5-reasons-to-avoid-max-for-live/

http://roberthenke.com/technology/m4l.html

https://cycling74.com/support/faq-maxforlive/

http://community.akaipro.com/akai_professional/topics/apc-mini-sequencers

https://www.youtube.com/watch?v=bmn8eJYEe9s

http://www.maxforlive.com/library/device.php?id=877



There have been some grumbles regarding Ableton Push quality control with regards to inconsistent colouring of LEDs. (Novation has had similar problems with their Launchpad series, but it hasn't been as obvious because most current models rely on a limited set of colours. Note to others: this issue isn't actually covered by warranty either, and it's a difficult problem to fix from a manufacturing perspective. Hence the need for this particular solution.) There was a small application that was created but wasn't publicly released. It's called 'PUSH_RGB_Calibration_Tool.zip' and basically allows for calibration of white on the device by altering the internal colour balance of the primary colours. It's available on some file sharing websites. You'll require firmware version 1.7 for it to run.

https://forum.ableton.com/viewtopic.php?f=55&t=191939&start=45

https://www.ableton.com/en/help/article/push-firmware-release-notes/

https://archive.is/zLbnS

http://filepi.com/i/J9mGlId

https://www.virustotal.com/en/file/687a48127a65226eaae13ded393aafaccee46e445cec33a63964ece921fddb51/analysis/



Someone recently asked me about automated DJ options. I've seen a few but they seem to be becoming increasingly sophisticated.

https://www.youtube.com/results?search_query=how+to+dj

How To DJ - Phil K (Intermediate Level)

https://www.youtube.com/watch?v=4r3Pw8VJtq0
http://www.mixmeister.com/products-comparison.php

http://forum.djtechtools.com/showthread.php?t=21834

http://www.virtualdj.com/wiki/Automix.html

https://www.native-instruments.com/forum/threads/is-there-an-auto-dj-function.28501/

http://djtechtools.com/2012/08/06/what-controller-is-right-for-you-all-in-one-vs-modular-dj-set-ups/



Apparently, some of my ideas and perspectives regarding the modern world and capitalism are similar to that of Thomas Piketty. However, the way in which we would set about rebalancing global economics to ensure a more fair and just global economic system for all is somewhat different. More on this in time...

http://www.smh.com.au/world/pushing-back-on-socialism-ecuador-vents-its-presidential-ire-on-the-streets-20150702-gi3ew2.html

https://en.wikipedia.org/wiki/Thomas_Piketty

https://en.wikipedia.org/wiki/Capital_in_the_Twenty-First_Century

http://blog.melbournemusiccentre.com.au/2011/07/why-are-the-same-or-similar-items-cheaper-overseas/



Some options for purchasing used music equipment locally.

http://www.musicswopshop.com.au/

http://www.yourinstrument.com.au/

http://www.musosales.com.au/

http://melbourneexchange.com.au/

http://www.quicksales.com.au/



In case you've ever wanted to download videos from various websites, there are quite a few options out there.

http://www.clipconverter.cc/

http://keepvid.com/

http://www.flvdown.com/

http://www.flvdown.com/docs.php?doc=api

https://addons.mozilla.org/en-us/firefox/addon/flashgot/

http://sourceforge.net/projects/ytd2/

http://stackoverflow.com/questions/4032766/how-to-download-videos-from-youtube-on-java

http://superuser.com/questions/114196/how-to-find-the-stream-behind-a-flash-player



If you've had minor scratches on your optical discs you know that they can be extraordinarily frustrating. There are quite a few solutions out there for it though.

http://www.wisebread.com/quickly-removing-scratches-from-cds-and-dvds

http://www.wikihow.com/Fix-a-Scratched-CD

http://www.apartmenttherapy.com/7-bizarre-home-remedies-that-r-152502

http://howto.wired.com/wiki/Fix_a_Scratched_CD

http://www.instructables.com/id/Re-surfacing-CDs-so-they-work-again./



If you ever have to use automated imaging/partitioning software, sometimes things don't turn out perfectly. Hidden partitions appear when they shouldn't, wreaking havoc with links throughout your system. Changing the partition type is the solution, though the actual 'type/code/number' may vary depending on the circumstances.

https://forums.lenovo.com/t5/Lenovo-P-Y-and-Z-series/Disk-Partitioning-and-OneKey-Recovery-Feature/td-p/8036

https://forums.lenovo.com/t5/Windows-7-Discussion/How-to-re-hide-OEM-partition/td-p/278948

https://forums.lenovo.com/t5/Lenovo-U-and-S-Series-Notebooks/help-me-with-OKR/m-p/133135#M11494

http://www.smh.com.au/digital-life/digital-life-news/meet-australian-company-ipechelon-one-of-the-biggest-antipiracy-operations-in-the-world-20150430-1mw93k.html



Options for locking down a device in case it is lost or stolen are increasingly popular nowadays, even in consumer class devices. It's interesting how far some companies are willing to take this and what their implementation is like.

http://www.computerworld.com/article/2481347/endpoint-security/the-down-side-of-hard-drive-passwords.html

http://www.tomshardware.com/forum/258614-32-access-password-protected-caddy

http://www.sevenforums.com/hardware-devices/246015-toshiba-hdd-locked.html

http://www.computing.net/answers/hardware/how-to-clear-hard-drive-password/48527.html

http://forum.thinkpads.com/viewtopic.php?t=104873

http://www.freakyacres.com/remove_computrace_lojack

http://en.wikipedia.org/wiki/LoJack_for_Laptops

http://www.pcworld.com/product/1344774/acer-aspire-s3-951-2634g25nss-ultrabook.html

http://www.manualslib.com/manual/546525/Acer-Aspire-S5-391.html?page=56

https://www.technibble.com/forums/threads/unlockhd-exe-help.42725/

http://www.experts-exchange.com/Hardware/Laptops_Notebooks/Q_26614042.html

http://www.allservice.ro/forum/viewtopic.php?p=7958&sid=6d91be00086c530afa8311af27da81c5

http://www.allservice.ro/acer/

http://www.tomsguide.com/forum/61372-35-locked-laptop-harddrive

http://www.techrepublic.com/pictures/cracking-open-acer-aspire-s3-ultrabook/

http://www.techrepublic.com/blog/cracking-open/acer-aspire-s3-teardown-good-hardware-lackluster-construction/



Help evaluate, test, and design Windows 10.

https://insider.windows.com/

WTF Internal Combustion?

At the moment I’m teaching my son to drive in my Electric Car. Like my daughter before him it’s his first driving experience. Recently, he has started to drive his grandfather’s pollution generator, which has a manual transmission. So I was trying to explain why the clutch is needed, and it occurred to me just how stupid internal combustion engines are.

Dad: So if you dump the clutch too early the engine stops.

Son: Why?

Dad: Well, a petrol engine needs a certain amount of energy to keep it running, for like compression for the next cycle. If you put too big a load on the engine, it doesn’t have enough power to move the car and keep the engine running.

Dad: Oh yeah and that involves a complex clutch that can be burnt out if you don’t use it right. Or an automatic transmission that requires a complex cooling system and means you use even more (irreplaceable) fossil fuel as it’s less efficient.

Dad: Oh, and petrol motors only work well in a very narrow range of RPM so we need complex gearboxes.

Dad thinks to himself: WTF internal combustion?

Electric motors aren’t like that. Mine works better at 0 RPM (more torque), not worse. When the car stops my electric motor stops. It’s got one moving part and one gear ratio. Why on earth would you keep using irreplaceable fossil fuels when stopped at the traffic lights? It just doesn’t make sense.

The reason of course is energy density. We need to store a couple of hundred km worth of energy in a reasonable amount of weight. Petrol has about 44 MJ/kg. Let’s see, one of my Lithium cells weighs 3.3kg, and is rated at 100AH at 3.2V. So that’s (100AH)(3600 seconds/H)(3.2V)/(3.3kg) ≈ 0.35MJ/kg, or about 125 times worse than petrol. However that’s not the whole story: an EV is about 85% efficient in converting that energy into movement while a dinosaur juice combuster is only about 15% efficient.
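Folding both efficiencies into the comparison (a back-of-envelope using the numbers above):

petrol = 44.0                           # MJ/kg
cell = 100 * 3600 * 3.2 / 1e6 / 3.3     # 100AH x 3.2V over 3.3kg ~= 0.35 MJ/kg
print(petrol / cell)                    # ~125x raw advantage to petrol
print((petrol * 0.15) / (cell * 0.85))  # ~22x once drivetrain efficiency is counted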

Anyhoo it’s now possible to make EVs with 500 km range (hello Tesla) so energy density has been nailed. The rest is a business problem, like establishing a market for smart phones. We’re quite good at solving business problems, as someone tends to get rich.

I mean, if we can make billions of internal combustion engines with 1000’s of moving parts, cooling systems, gearboxes, anti-pollution, fuel injection, engine management, controlled detonation of an explosive (they also make napalm out of petrol) and countless other ancillary systems I am sure human kind can make a usable battery!

Internal combustion is just a bad hack.

History is going to judge us as very stupid. We are chewing through every last drop of fossil fuel to keep driving to and from homes in the suburbs that we can’t afford, to buy stuff we don’t need, making plastic for gadgets we throw away, and flying 1000’s of km to exotic locations for holidays, and overheating the planet using our grandchildren’s legacy of hydrocarbons that took 75 million years to form.

Oh that’s right. It’s for the economy.

Wrapper for running perf on part of a program.

Linux’s perf competes with early git for the title of least-friendly Linux tool.  Because it’s tied to kernel versions, and the interfaces change fairly randomly, you can never figure out how to use the version you need to use (hint: always use -g).

But when it works, it’s very useful.  Recently I wanted to figure out where bitcoind was spending its time processing a block; because I’m a cool kid, I didn’t use gprof, I used perf.  The problem is that I only want information on that part of bitcoind.  To start with, I put a sleep(30) and a big printf in the source, but that got old fast.

Thus, I wrote “perfme.c“.  Compile it (requires some trivial CCAN headers) and link perfme-start and perfme-stop to the binary.  By default it runs/stops perf record on its parent, but an optional pid arg can be used for other things (eg. if your program is calling it via system(), the shell will be the parent).
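If linking C helpers into your program is awkward, the same lash-up can be sketched in Python (an illustration of the idea: attach perf record to the parent pid, then stop it with SIGINT so perf flushes perf.data):

import os
import signal
import subprocess

def perfme_start(pid=None):
    # -g for call graphs (see above); -p attaches to an existing process.
    return subprocess.Popen(['perf', 'record', '-g', '-p', str(pid or os.getppid())])

def perfme_stop(perf_proc):
    perf_proc.send_signal(signal.SIGINT)   # perf writes perf.data on SIGINT
    perf_proc.wait()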

July 02, 2015

LUV Main July 2015 Meeting: Ansible / BTRFS / Educating People to become Linux Users

Jul 7 2015 18:30
Jul 7 2015 20:30
Location: 

200 Victoria St. Carlton VIC 3053

Speakers:

• Andrew Pam, An introduction to Ansible

• Russell Coker, BTRFS update

• Lev Lafayette, Educating People to become Linux Users: Some Key Insights from Adult Education

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Certification: Necessary Evil?


I wrote this as a comment in response to Dries' post about the Acquia certification program - I thought I'd share it here too. I've commented there before.

I've also been conflicted about certifications. I still am. And this is because I fully appreciate the pros and cons. The more I've followed the issue, the more conflicted I've become about it.



My current stand is this. Certifications are a necessary evil. Let me say a little on why that is.

I know many in the Drupal community are not in favour of certification, mostly because it can't possibly adequately validate their experience.



It also feels like an insult to be expected to submit to external assessment after years of service contributing to the code-base, and to the broader landscape of documentation, training, and professional service delivery.



Those in the know, know how to evaluate a fellow Drupalist. We know what to look for, and more importantly where to look. We know how to decode the secret signs. We can mutter the right incantations. We can ask people smart questions that uncover their deeper knowledge, and reveal their relevant experience.



That's our massive head start. Or privilege. 



Drupal is now a mature platform for web and digital communications. The new challenge that comes with that maturity is that non-Drupalists are using Drupal. And non-specialists are tasked with ensuring sites are built by competent people. These people don't have time to learn what we know. The best way we can help them is to support some form of certification.



But there's a flip side. We've all laughed at the learning curve cartoon about Drupal. Because it's true. It is hard. And many people don't know where to start. Whilst a certification isn't going to solve this completely, it will help to solve it, because it begins to codify the knowledge many of us take for granted.



Once that knowledge is codified, it can be studied. Formally in classes, or informally through self-directed exploration and discovery.



It's a starting point.



I empathise with the nay-sayers. I really do. I feel it too. But on balance, I think we have to do this. But even more, I hope we can embrace it with more enthusiasm.



I really wish the Drupal Association had the resources to run and champion the certification system, but the truth is, as Dries outlines above, it's a very time-consuming and expensive proposition to do this work.



So, Acquia - you have my deep, albeit somewhat reluctant, gratitude!



:-)



Thanks Dries - great post.



cheers,

Donna

(Drupal Association board member)

Tell your MP you support Same Sex Marriage

If you support the right for two people to get married regardless of gender, then please respectfully and politely contact your local federal member and let them know.

Those who oppose this have already started up their very effective networks, and we will need to work very hard to counter it.

If you're not sure who your local MP or Senators are, I recommend you use http://www.openaustralia.org.au/ to find out. Just punch in your post code and it will let you know, as well as give you a run down of their voting history.

Do it, DO IT NOW!

This message brought to you by the realisation that I'm going to be rainbow haired soon.


July 01, 2015

Hunting for GC1D1NB

I went for an after work walk to try and find GC1D1NB on Tuggeranong Hill yesterday. It wasn't a great success. I was in the right area but I just couldn't find it. Eventually I ran out of time and had to turn back. I am sure I'll have another attempt at this one soon.



   



Interactive map for this route.



Tags for this post: blog pictures 20150701-tuggeranong_hill photo canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches



Comment

FreeDV Robustness Part 5 – FreeDV 700

We’ve just released FreeDV v0.98 GUI software, which includes the new FreeDV 700 mode. This new mode has poorer speech quality than FreeDV 1600 but is far more robust, close to SSB on low SNR fading HF channels. Mel Whitten and the test team have made contacts over 1000 km using just 1 Watt!

You can download the Windows version of FreeDV 0.98 here.

To build it you need the latest codec2-dev and fdmdv2-dev from SVN; follow the Quickstart 1 instructions in fdmdv2-dev/README.txt. I’ve been cross compiling for Windows on my Ubuntu Linux machine which is a time saver for me. Thanks Richard Shaw for your help with the cmake build system.

Mel and the team have been testing the software for the past few weeks and we’ve removed most of the small UI bugs. Thanks guys! I’m working on some further improvements to the robustness which I will release in a few weeks. Once we are happy with the FreeDV 700 mode, it will be ported to the SM1000. If you have time, and gcc/embedded experience I’d love to have some help with this!

It sounds pretty bad at 700 bit/s, but so does SSB at 0dB SNR. The new modem uses a pilot symbol assisted coherent PSK modem (FreeDV 1600 uses a differential PSK modem). The new modem also has diversity; the 7 x 75 symb/s QPSK carriers are copied to form a total of 14 half power carriers. Overall this gives us a significantly lower operating point SNR than FreeDV 1600 for fading channels. However the bandwidth is a little wider (800 – 2400 Hz); let’s see how that goes through real radios.

Simulations indicate it has readability 4/5 at 0dB SNR on CCIR poor (fast) fading channels. It also has a PAPR of 7dB so if your PA can handle it you can hammer out 5dB more power than FreeDV 1600 (be careful).
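If you want a feel for where a PAPR number like that comes from, here's a toy numpy estimate for a sum of 14 equal QPSK carriers (random symbols, no pulse shaping, an assumed 100 Hz spacing -- an illustration only, not the actual FreeDV 700 waveform):

import numpy as np

rng = np.random.default_rng(0)
n_carriers, n_samples, fs = 14, 8000, 7500
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n_carriers)))
freqs = 800 + 100 * np.arange(n_carriers)     # assumed spacing inside 800-2400 Hz
t = np.arange(n_samples) / fs
x = np.real(symbols @ np.exp(2j * np.pi * np.outer(freqs, t)))
print(10 * np.log10(np.max(x**2) / np.mean(x**2)))   # peak-to-average power, dB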

For those of you who are integrating FreeDV into your own applications the FreeDV API now contains the 700 bit/s mode and freedv_tx and freedv_rx have been updated to demo it. The API interface has changed, we now have variables for the number of modem and speech samples which change with the mode. The coherent PSK modem has the very strange sample rate of 7500 Hz which at this stage the user (that’s you) has to deal with (libresample is your friend).

The 700 bit/s codec (actually 650 bit/s plus 2 data bits/frame) band limits the input speech between 600 and 2200 Hz to reduce the amount of information we need to encode. This might be something we can tweak, however Mel and the team have shown we can communicate OK using this mode. Here are some samples at 1300 (the codec rate used in FreeDV 1600) and 700 bit/s with no errors for comparison.

Lots more to talk about. I’ll blog some more when I pause and take a breath.

Comparing D7 and D8 outta the box

I did another video the other day. This time I've got a D7 and D8 install open side by side, and compare the process of adding an article.

Linux Australia council meeting minutes to be published on the planet

Wed, 2015-07-01 11:33

Last fortnight the Linux Australia council resolved to begin publishing their minutes to planet.linux.org.au.

While meeting minutes may seem boring, they in fact contain a lot of useful and interesting information about what the organisation and its various subcommittees are up to. As such we felt that this was useful information to publish more widely, and starting from now we'll be publishing them to the planet.

If you are interested in previous meetings and minute notes, you can find them at http://linux.org.au/news

June 30, 2015

New Charger for my EV

On Sunday morning I returned home and plugged in my trusty EV to feed it some electrons. Hmm, something is wrong. No lights on one of the chargers. Oh, and the charger circuit breaker in the car has popped. Always out for adventure, and being totally incompetent at anything above 5V and 1 Amp, I connected it directly to the mains. The shed lights started to waver ominously. Humming sounds like a Mary Shelley novel. And still no lights on the charger.

Oh Oh. Since disposing of my nasty carbon burner a few years ago I only have one car and it’s the EV. So I needed a way to get on the road quickly.

But luck was with me. I scoured my local EV association web site, and found a 2nd hand Zivan NG3 charger, that was configured for a 120V lead acid pack. I have a 36 cell Lithium pack that is around 120V when charged. Different batteries have different charging profiles, for example the way current tapers. However all I really need is a bulk current source, my external Battery Management System will shut down the charger when the cells are charged.

Using some residual charge I EVed down the road where I met Richard, a nice man, fellow engineer, and member of our local EV association. I arranged to buy his surplus NG3, took it home and fired it up. Away it went, fairly hosing electrons into my EV at 20A. The old charger was just 10A so this is a bonus – my charging time will be halved. I started popping breakers again, as I was sucking 2.4kW out of the AC. So I re-arranged a few AC wires, ripped out the older chargers, rewired the BMS module loop a little and away I went with the new charger.

Here is the lash up for the initial test. The new Zivan NG3 is the black box on the left, the dud charger the yellow box on the right. The NG3 replaces the 96V dud charger and two 12V chargers (all wired in series) that I needed to charge the entire pack. My current clamp meter (so useful!) is reading 17A.

Old chargers removed and looking a bit neater. I still need to secure the NG3 somehow. My BMS controller is the black box behind the NG3. It shuts down the AC power to the chargers when the batteries signal they are full.

Pretty red lights in the early morning. Each Lithium cell has a BMS module across it that monitors the cell voltage. The red light means “just about full”. When the first cell hits 4.1V, it signals the BMS controller to shut down the charger. Richard pointed out that the BMS modules are shunt regulators, so will discharge each cell back down to about 3.6V, ensuring they are all at about the same state of charge.

This is the only reason I go to petrol stations. For air. There is so little servicing on EVs that I forget to check the air for a year, some tyres were a bit low.

The old charger lasted 7 years and was used almost every day (say 2000 times) so I can’t complain. The NG3 was $875 2nd hand. Since converting to the Lithium pack in 2009 I have replaced the electric motor armature (about $900) as I blew it up from overheating, 2 cells ($150 ea) as we over discharged them, a DC-DC converter ($200 ish) and now this charger. Also tyres and brakes last year, which are the only wearing mechanical parts left. In that time I’ve done 45,000 electric km.

Percival trig

I had a pretty bad day, so I knocked off early and went for a walk before going off to the meeting at a charity I help out with. The walk was to Percival trig, which I have to say was one of the more boring trigs I've been to. Some of the forest nearby was nice enough, but the trig itself is stranded out in boring grasslands. Meh.



   



Interactive map for this route.



Tags for this post: blog pictures 20150630-percival photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger



Comment

June 29, 2015

A team walk around Red Hill

My team at work is trying to get a bit more active, so a contingent from the Canberra portion of the team went for a walk around Red Hill. I managed to sneak in a side trip to Davidson trig, but it was cheating because it was from the car park at the top of the hill. A nice walk, with some cool geocaches along the way.



 



Interactive map for this route.



Tags for this post: blog pictures 20150629-davidson photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger



Comment

The Value of Money - Part 4

- I previously remarked that since we use the concept of 'deterrence' so readily throughout the world we are in a de-facto state of 'Cold War' whose weapons are defense, intelligence, and economics. There's a lot of interesting information out there...

http://blogs.telegraph.co.uk/news/shashankjoshi/100224247/france-should-remember-its-own-history-before-complaining-too-much-about-american-espionage/

https://wikileaks.org/gifiles/docs/11/1172615_-ct-analysis-an-economic-security-role-for-european-spy.html

http://www.wikileaks-forum.com/nsa/332/r-james-woolsey-why-we-spy-on-our-allies-17-03-2000/24575/

http://www.abc.net.au/news/2013-11-08/australian-nsa-involvement-explained/5079786

http://www.abc.net.au/news/2013-11-08/the-chinese-embassy-bugging-controversy/5079148

http://www.news.com.au/national/australia-must-choose-between-chinese-cash-and-loyalty-to-the-us-as-se-asia-tensions-rise/story-fncynjr2-1227364070887

http://rt.com/news/270529-nsa-france-economy-wikileaks/ 

http://www.bloomberg.com/news/articles/2015-06-30/why-china-wants-a-strong-euro-as-greece-teeters

http://www.smh.com.au/federal-politics/political-news/china-not-fit-for-global-leadership-says-top-canberra-official-michael-thawley-20150630-gi1o1f.html 

- it makes sense that companies try to run lean rather than try to create. Everybody knows how to save. It's much more difficult to create something of value

- advertising is a broadcast means of achieving increased transactions, but in spite of targeted advertising it is still incredibly inefficient. Based on previous experience, even single digit click through rates for online advertising are considered suspect/possibly fraudulent
http://adage.com/article/guest-columnists/study-advertising-half-effective-previously-thought/228409/

- the easiest way of establishing the difference between what's needed and what's wanted is to turn off all advertising around you. Once you've done that, the difference between need and want becomes very stark and the effect advertising has on your perspective becomes much, much clearer

- most businesses fail. A lot of people basically have trouble running a business, have flawed business models, or don't achieve enough transactions to make it worthwhile

http://www.forbes.com/sites/ericwagner/2013/09/12/five-reasons-8-out-of-10-businesses-fail/

https://www.linkedin.com/pulse/20140915223641-170128193-what-are-the-real-small-business-survival-rates

http://www.smh.com.au/business/the-economy/google-says-give-rd-tax-breaks-to-small-techies-not-big-guys-20150407-1mfy30.html

http://smallbiztrends.com/2012/09/failure-rates-by-sector-the-real-numbers.html

http://www.isbdc.org/small-business-failure-rates-causes/

http://www.washingtonpost.com/blogs/fact-checker/wp/2014/01/27/do-9-out-of-10-new-businesses-fail-as-rand-paul-claims/

- immigration is a good thing provided that the people in question bring something to the economy. I look at the Japanese situation and wonder whether immigration is a more cost-effective means of dealing with their ageing problem than 'Abenomics'. Even if all they do is repatriate former nationals...

http://www.koreaherald.com/view.php?ud=20150628000326

- if you run through their numbers carefully, and think about where many of the world's top companies are headed, the performance (net profit in particular) of some of them isn't anywhere near as impressive (percentage-wise) as the share price growth in recent history. There are many small/mid cap firms that would outdo them (% net profit wise) if you're looking to invest

http://www.gurufocus.com/financials/AAPL&affid=45223

https://finance.yahoo.com/q/ks?s=MSFT+Key+Statistics

http://www.marketwatch.com/investing/stock/amzn/financials

http://www.marketwatch.com/investing/stock/goog/financials

https://investor.google.com/financial/tables.html

- in software engineering people continually harp on about the benefits of Agile, Extreme Programming and so on. Basically, all it is is maintaining regular contact between staff members to get the best out of a piece of work. Peer pressure and continual oversight also force you to remain productive. Think about this in the real world. The larger the teams are the more difficult it is to maintain oversight, particularly if the manager in question is of a poor standard and there are no systems in place to maintain standards. There is also a problem with unfettered belief in this methodology. If, in general, the team members are unproductive or of a poor standard this will ripple throughout your team

- GDP is a horrible measure of productivity. As I've stated previously, the difference between perceived, effective, and actual value basically disguises where true value lies. Go spend some time in other parts of the world. I guarantee that there will be a massive difference in the way you view productivity (productivity means the amount of work completed per unit time, not overall work)

- a good measure of a person's productivity/value is what happens if they take a day off or have a break. Observe the increase in workload for the other staff members and how they deal with it

- people keep on harping on about self interest as the best way of maintaining productivity and encouraging people to work hard. However, I have a huge problem with this as it is sometimes incredibly hard to differentiate between actual, effective, and perceived value. At one particular firm, we had difficulties with this as well. I was therefore tasked with writing an application to monitor things (if you intend to write something along these lines please be mindful of relevant HR and surveillance laws in your jurisdiction. Also, keep the program 'silent'. Staff will likely alter their behaviour if they know that the program is running.). The funny thing is that even people you think are productive tend to work in bursts. The main difference is the amount of time that transpires between each piece of work and the rate of work that occurs during each burst. The other thing that you should know is that even with senior members of staff, when you look at a lot of metrics it can be extremely difficult to justify their wage. Prepare to be surprised if you currently have poor oversight in your organisation. Lack of proper oversight breeds nepotism, lack of productivity, etc...

- you'll be shocked at what poor staff can do to your team. If the member in question is particularly bad he in effect takes a number of other staff out of the equation at the same time. Think about this. You are all recruited for highly skilled jobs but one team member is poor. If he continually has to rely on other staff then he in effect takes out another member of your team simultaneously (possibly more). Think about this when training new staff. Give them enough time/training to get a gauge of what they'll be like, but if they can't hold up their part of the deal be prepared to move them elsewhere within the organisation or let them go. The same is also true in the opposite direction. Good employees have a multiplier effect. You'll only figure out the difference with proper oversight and monitoring. Without this, perceived value may completely throw you off

http://programmers.stackexchange.com/questions/179616/a-good-programmer-can-be-as-10x-times-more-productive-than-a-mediocre-one

http://swreflections.blogspot.com.au/2015/01/we-cant-measure-programmer-productivity.html

http://stackoverflow.com/questions/966800/mythical-man-month-10-lines-per-developer-day-how-close-on-large-projects

- we like to focus in on large companies because they supposedly bring in a lot of business. The problem is if they have a monopoly. If they strangle the market of all value and don't put back in via taxes, employment, etc... the state in question could be in a lot of trouble down the line. If/when the company moves, the economy will have evolved to see these companies as a core component. Other surrounding businesses will likely be poorly positioned to adapt when they leave for a place which offers better terms and/or conditions. The other problem is this: based on experience, people are willing to accept a lower wage to work for such firms (mostly for reasons of financial safety). There is no guarantee that you will be paid what you are worth

http://techcrunch.com/2015/06/28/policy-after-uber/

http://www.businessinsider.com/greeces-former-tax-collection-chief-harry-theoharis-explains-tax-evasion-problem-2015-7

http://www.irishtimes.com/business/economy/smes-account-for-99-7-of-business-enterprises-in-republic-1.2035800

http://www.irishtimes.com/business/economy/economy-primed-for-sustained-growth-says-goldman-sachs-1.2143071

http://www.wsj.com/articles/SB10001424127887324787004578496803472834948

http://www.afr.com/technology/technology-companies/ireland-scraps-google-tech-company-tax-breaks-20141019-119m80

https://en.wikipedia.org/wiki/Double_Irish_arrangement

http://blogs.cfainstitute.org/investor/2015/06/11/solutions-to-a-misbehaving-finance-industry/

http://www.theguardian.com/commentisfree/2015/jun/28/david-cameron-is-abusing-magna-carta-in-abolishing-our-rights

http://www.theguardian.com/world/2015/mar/25/irelands-economy-starting-to-fire-all-cylinders-imf-report

http://www.irishtimes.com/business/economy/who-owes-more-money-the-irish-or-the-greeks-1.2236034

http://www.theguardian.com/us-news/2015/feb/02/barack-obama-tax-profits-president-budget-offshore

http://www.smh.com.au/business/multinationals-channel-more-money-through-hubs-in-singapore-switzerland-than-ever-before-tax-office-says-20150204-1363u5.html

http://www.smh.com.au/business/retail/jeff-kennett-tells-coles-to-pay-12m-to-suppliers-20150630-gi19wv.html 

- when and if a large company collapses or moves the problem is the number of others who rely on it for business

- people keep on saying that there are industries safe from off-shoring and automation. I think they're naive or haven't spent enough time around good technologists. Good employees will try to automate or develop processes to get things done more efficiently. Virtually all industries (or vast chunks of them) can be fully automated given time (trust me on this. I like to read a lot...).

http://www.technologyreview.com/view/519241/report-suggests-nearly-half-of-us-jobs-are-vulnerable-to-computerization/

http://www.futuristspeaker.com/2012/02/2-billion-jobs-to-disappear-by-2030/

http://www.forbes.com/sites/jmaureenhenderson/2012/08/30/careers-are-dead-welcome-to-your-low-wage-temp-work-future/

http://theconversation.com/australia-must-prepare-for-massive-job-losses-due-to-automation-43321

http://www.theguardian.com/business/2015/jun/16/computers-could-replace-five-million-australian-jobs-within-two-decades

The only way to keep yourself safe is to be multi-skilled and entrepreneurial, or else extremely skilled at a particular profession. Even then there's no guarantee that you'll be safe

http://time.com/3938678/obamacare-supreme-court-uber/

http://techcrunch.com/2015/06/28/policy-after-uber/

- sometimes I think people just don't get it. A small number of outliers is all it takes to change group behaviour. Even if we ban/regulate automation there will be those who adopt it without any misgivings, much like organised crime, the use of illegal migrants, the cash economy, etc... The only real way is to force a cashless society so that we can run algorithms to check for unusual behaviour and breed a more puritan society

- minimal but effective regulation helps to level out the playing field. Making it too complex creates possible avenues for loopholes to be exploited. Make it too simple, without enough coverage, and you have the same problem

- obvious ways to make sustained, long term money include creating something that others need or want, being able to change perception, being able to see changes and adapt, arbitrage, and using a broadcast structure

- personal experience, and the history of others, with emerging markets such as Asia and Africa says that results can be extremely variable. Without on-the-ground knowledge and oversight you can just as easily make a substantial profit as a massive loss through fraud. There is very little you can do about this apart from doing due diligence and having the measures/knowledge to deal with it should it actually occur

http://timesofindia.indiatimes.com/world/uk/India-UKs-3rd-largest-job-creator-in-2014/articleshow/47714406.cms

- in reality, very few have a genuine chance of making it 'big', "Americans raised at the top and bottom of the income ladder are likely to remain there themselves as adults. Forty-three percent of those who start in the bottom are stuck there as adults, and 70 percent remain below the middle quintile. Only 4 percent of adults raised in the bottom make it all the way to the top, showing that the "rags-to-riches" story is more often found in Hollywood than in reality."

http://www.forbes.com/sites/jmaureenhenderson/2012/08/30/careers-are-dead-welcome-to-your-low-wage-temp-work-future/

- use first mover advantage as quickly as you can but have defensive measures in place

http://www.news.com.au/finance/business/is-the-free-ride-over-for-uber/story-fnda1bsz-1227419310284

- investment from third parties (angel investment, venture capital, etc...) can vary drastically. More and more of them want at least a guaranteed return on investment, though

- based on what I've experienced, VC is much more difficult to get locally than in Europe or the United States. Luckily, more companies are willing to invest provided you are posting good numbers. One other thing I've discovered locally is that investors are often too lazy/unwilling to help even if the idea/s may be good (though this is changing)

http://www.afr.com/business/health/pharmaceuticals/merck-ceo-ken-frazier-on-keytruda-and-why-australians-miss-out-on-new-drugs-20150628-ghyisc

- we don't want to live day by day or have creditors/shareholders to report to, so seek the highest profit whenever possible

- you can select a lot of numbers and prove essentially anything in business, but there are certain numbers that you simply can't ignore, such as net profit/income

- pay a person cash by the hour, where he has to do the numbers, versus a lump sum, and he will look at things very differently. That goes for any profession, even high earning ones

- growth is great but only if it can be sustained and it is genuine. If you have substantial variation in growth, such as a few fantastic years followed by a sudden drop off, fed by massive debt, you could be in a bit of trouble. You may say that you can just sell off assets. If the growth wasn't good enough then do you see the problem? Moreover, what if you don't have something that is considered worthwhile or easy to sell off? For a state/business, your credit risk suddenly shoots up and you may be priced out of the market. Targeted, sustainable growth should be the goal, not growth at all costs. The Chinese position towards economic management is actually making a lot more sense to me now, though I'm not certain that it would work quite as easily or be accepted in other states. You may say that we'll invest during the good times? The problem is that we're often not wise enough to know when and where to invest

http://www.businessinsider.com/krugman-europe-greece-2015-6

http://www.businessinsider.com/el-erian-on-how-greece-will-impact-markets-2015-6

http://www.dawn.com/news/1162195/putins-next-challenge-propping-up-russias-troubled-banks

- in many places you are seeing a rise of left wing parties. The worrying thing is that they'll lose sight of the benefits of capitalism and fall into the trap of a more puritan communist/socialist system which hasn't really worked over the long term in the past. The other thing to be concerned about is that a lot of them don't have solid policies or answers to the problems which currently face us

http://theconversation.com/postcard-from-spain-where-now-for-the-quiet-revolution-43779

http://blogs.channel4.com/paul-mason-blog/greece-referendum-euro-die/3978

- if more people could distinguish real value from perceived and effective value, and needs from wants, we would have fewer asset bubbles and less price gouging across the board

http://www.news.com.au/finance/real-estate/bis-shrapnel-report-reveals-property-prices-to-fall/story-fncq3era-1227416605503?from=google_rss&google_editors_picks=true

https://www.ozbargain.com.au/node/104348

http://www.news.com.au/world/breaking-news/nz-govt-slammed-over-10m-ny-apartment/story-e6frfkui-1227416038766

- there will be those who say: who cares about the collective? Capitalism is composed of boom and bust cycles. Here's the problem: most companies require debt to survive. If they can't survive the bust cycle they will be part of a collective collapse in the economy. Moreover, based on information I've come across, other developed countries have looked at the Eurozone's plans and ways of dealing with high debt and are basically using them as the blueprint for the future. Your assets can and will be raided in the event of the state or systemic entities getting into trouble

http://www.heraldsun.com.au/business/greeks-stashing-money-in-homes-as-deadline-looms-for-debt-repayments/story-fni0d2cj-1227403214181?from=google_rss&google_editors_picks=true

http://www.nytimes.com/2015/06/30/business/dealbook/the-hard-line-on-greece.html?_r=0

http://www.usatoday.com/story/news/2015/06/29/evening-news-roundup-monday/29466899/

http://www.bbc.co.uk/news/world-europe-33324363

http://www.washingtonpost.com/blogs/wonkblog/wp/2015/06/30/7-questions-about-greeces-huge-crisis-you-were-too-embarrassed-to-ask/ 

- people say that we should get educated in order to have a high paying job, but the problem is that we are increasingly siloed into specific roles. If we can't use the knowledge, the time and money we've spent on education has been for nothing. We require better coordination between educational curricula and professional settings

http://www.financialexpress.com/article/companies/infosys-wipro-tech-mahindra-it-giants-revamp-culture-to-attract-young-talent-battle-start-ups/86718/

- even if governments are aware that there are problems cropping up with our version of capitalism, it's possible that some are saying we have no choice but to keep the cycle going. It's the best of the worst

http://www.bbc.co.uk/news/world-europe-33303105

- globalisation essentially buys us more time before things come to a head (if they do). Most of the scenarios point to organised debt forgiveness as a means of dealing with the problem. Private asset seizure is something that is being mentioned everywhere. If you are a private citizen, raw commodities stored at secure locations may be your only source of safety if things look bad

http://www.washingtonpost.com/blogs/wonkblog/wp/2015/06/29/greece/

http://www.news.com.au/finance/small-business/those-selling-safes-are-cashing-in-on-greeces-financial-uncertainty/story-fn9evb64-1227422325045

http://www.news.com.au/finance/economy/what-a-grexit-would-look-like/story-e6frflo9-1227422412614

http://www.telegraph.co.uk/finance/economics/11712098/Europe-has-suffered-a-reputational-catastrophe-in-Greece.html 

- if you want a resilient economy you need to maintain a level playing field, keep a flexible workforce, and possibly limit the size and influence of major companies in your economy

http://www.vice.com/en_uk/read/the-irish-emigration-crisis--a-new-century-an-old-problem

- I don't get it. Heaps of countries have adequate blocking technology to help deal with this if they deem it illegal. Deploy it correctly and your rioting problem is over with...

http://www.arkansasonline.com/news/2015/jun/27/hollande-uber-unit-illegal-dismantle-it/

http://www.theguardian.com/technology/2015/jun/26/uber-expansion-meets-global-revolt-and-crackdown

http://timesofindia.indiatimes.com/tech/tech-news/Officials-hint-at-possible-win-for-Uber-in-Mexico-City/articleshow/47861342.cms

- as stated previously, I've come to the conclusion that a lot of financial instruments are useless. They effectively provide a means of making money under any conditions. If we remove these instruments from play then I think it's possible we may return to less speculative markets that depend more on fundamentals

- anyone can create something of value. The issue is whether it is of negligible or tangible value. This will also determine your business model

- you may know that there is a bubble but, as China and local experience have demonstrated, popping it gracefully is far from easy. Moreover, by the time you figure out there's a bubble it may often be too late. Too many people may have too many vested interests

http://www.reuters.com/article/2015/06/29/us-usa-puertorico-restructuring-idUSKCN0P903Q20150629

http://www.businessinsider.com/puerto-rico-is-struggling-to-repay-its-debt-2015-6

http://jamaica-gleaner.com/article/commentary/20150630/editorial-jamaica-no-greece

http://www.heraldsun.com.au/business/breaking-news/world-bank-warns-china-on-reforms/story-fnn9c0hb-1227423791391?nk=5438df4578f2af3f2d269863d041c50c-1435746465 

- theory helps but you won't figure out how market economies work without first hand experience


http://www.afr.com/news/policy/budget/big-government-flourishes-under-tony-abbott-and-joe-hockey-20150513-gh0sgr

http://www.dailytelegraph.com.au/news/nsw/joe-blasts-welfare-rich-who-have-more-money-to-spend-than-workers/story-fni0cx12-1227357517141

http://www.smh.com.au/business/australiachina-free-trade-agreement-favours-chinese-investors-20150621-ghthjr.html

http://www.afr.com/technology/telstra-cuts-broadband-plan-fees-to-counter-rivals-20150626-ghyir7

http://www.afr.com/opinion/columnists/trophy-trade-deals-wont-change-the-imfs-dismal-outlook-20150628-ghysnn

http://www.brisbanetimes.com.au/act-news/uberx-australian-drivers-working-as-coequals-to-rideshare-tech-company-20150629-ghvjx1.html

http://www.dailytelegraph.com.au/business/breaking-news/hockey-flying-blind-on-negative-gearing/story-fnn9c0gv-1227417798217?nk=0b226f408634f8d8ba57220c3d074f55-1435471944

http://www.macleans.ca/news/world/why-refugees-are-fleeing-france-for-britain/

http://www.businessinsider.com.au/facebooks-shot-at-cisco-just-got-deadly-2015-3

http://www.theglobeandmail.com/globe-drive/culture/technology/gyroscopes-will-allow-bike-to-stay-upright-when-stopped/article24920123/

http://www.businessinsider.in/5-things-Elon-Musk-believed-would-change-the-future-of-humanity-in-1995/articleshow/46831594.cms

Craige McWhirter: How To Delete a Cinder Snapshot with a Status of error or error_deleting With Ceph Block Storage

When deleting a volume snapshot in OpenStack you may sometimes get an error message stating that Cinder was unable to delete the snapshot.

There are a number of reasons why a snapshot may be reported by Ceph as unable to be deleted, however the most common reason in my experience has been that a Cinder client connection has not yet been closed, possibly because a client crashed.

If you were to look at the snapshots in Cinder, the status is usually error or error_deleting:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |     Status     |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z |  40  |
| 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error_deleting | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-05-18T00:00:01Z |  40  |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
| 52c43ec8-e713-4f87-b329-3c681a3d31f2 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | error_deleting | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-24T14:00:02Z |  40  |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z |  40  |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+

When you check Ceph you may find the following snapshot list:

# rbd snap ls my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46
SNAPID NAME                                              SIZE
  2069 snapshot-2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 40960 MB
  2526 snapshot-52c43ec8-e713-4f87-b329-3c681a3d31f2 40960 MB
  2558 snapshot-47fbbfe8-643c-4711-a066-36f247632339 40960 MB

The astute will notice that there are only 3 snapshots listed in Ceph yet 5 listed in Cinder. We can immediately exclude 47fbbfe8, which is available in both Cinder and Ceph, so there are no issues there.

You will also notice that the snapshots with the status error are not in Ceph and the two with error_deleting are. My take on this is that for the error status, Cinder never received the message from Ceph stating that the snapshot had been deleted successfully, whereas for the error_deleting status, Cinder had been unsuccessful in offloading the request to Ceph.
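
If you want to enumerate the affected snapshots up front, a query along these lines will list them (a sketch; it assumes the snapshots table and status values used in the updates below):

MariaDB [cinder]> select id, status from snapshots where deleted = 0 and status in ('error', 'error_deleting');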

Each status will need to be handled separately. I'm going to start with the error_deleting snapshots, which are still present in both Cinder and Ceph.

In MariaDB, set the status from error_deleting to available:

MariaDB [cinder]> update snapshots set status='available' where id = '2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

MariaDB [cinder]> update snapshots set status='available' where id = '52c43ec8-e713-4f87-b329-3c681a3d31f2';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Check in Cinder that the status of these snapshots has been updated successfully:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |     Status     |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z |  40  |
| 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-05-18T00:00:01Z |  40  |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
| 52c43ec8-e713-4f87-b329-3c681a3d31f2 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-24T14:00:02Z |  40  |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z |  40  |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+

Delete the newly available snapshots from Cinder:

% cinder snapshot-delete 2db84ec7-6e1a-41f8-9dc9-1dc14e6ecef0
% cinder snapshot-delete 52c43ec8-e713-4f87-b329-3c681a3d31f2

Then check the results in Cinder and Ceph:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |     Status     |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+
| 07d75992-bf3f-4c9c-ab4e-efccdfc2fe02 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-26T14:00:02Z |  40  |
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |   available    | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
| a595180f-d5c5-4c4b-a18c-ca56561f36cc | 3004d6e9-7934-4c95-b3ee-35a69f236e46 |     error      | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-25T14:00:02Z |  40  |
+--------------------------------------+--------------------------------------+----------------+------------------------------------------------------------------+------+

# rbd snap ls my.pool.cinder.block/volume-3004d6e9-7934-4c95-b3ee-35a69f236e46
SNAPID NAME                                              SIZE
  2558 snapshot-47fbbfe8-643c-4711-a066-36f247632339 40960 MB

So we are done with Ceph now, as the error snapshots do not exist there. As they only exist in Cinder, we need to mark them as deleted in the Cinder database:

MariaDB [cinder]> update snapshots set status='deleted', deleted='1' where id = '07d75992-bf3f-4c9c-ab4e-efccdfc2fe02';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

MariaDB [cinder]> update snapshots set status='deleted', deleted='1' where id = 'a595180f-d5c5-4c4b-a18c-ca56561f36cc';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Now check the status in Cinder:

% cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+
|                  ID                  |              Volume ID               |   Status  |                           Display Name                           | Size |
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+
| 47fbbfe8-643c-4711-a066-36f247632339 | 3004d6e9-7934-4c95-b3ee-35a69f236e46 | available | tuttle:3004d6e9-7934-4c95-b3ee-35a69f236e46:2015-06-29T03:00:14Z |  40  |
+--------------------------------------+--------------------------------------+-----------+------------------------------------------------------------------+------+

Now your errant Cinder snapshots have been removed.

Enjoy :-)

June 28, 2015

Twitter posts: 2015-06-22 to 2015-06-28

RAID Pain

One of my clients has a NAS device. Last week they tried to do what should have been a routine RAID operation: they added a new larger disk as a hot-spare and told the RAID array to replace one of the active disks with the hot-spare. The aim was to replace the disks one at a time to grow the array. But one of the other disks had an error during the rebuild and things fell apart.

I was called in after the NAS had been rebooted when it was refusing to recognise the RAID. The first thing that occurred to me is that maybe RAID-5 isn't a good choice for the RAID. While it's theoretically possible for a RAID rebuild to not fail in such a situation (the data that couldn't be read from the disk with an error could have been regenerated from the disk that was being replaced) it seems that the RAID implementation in question couldn't do it. As the NAS is running Linux I presume that at least older versions of Linux have the same problem. Of course if you have a RAID array that has 7 disks running RAID-6 with a hot-spare then you only get the capacity of 4 disks (7 disks, minus 2 for parity, minus 1 for the hot-spare). But RAID-6 with no hot-spare should be at least as reliable as RAID-5 with a hot-spare.

Whenever you recover from disk problems the first thing you want to do is to make a read-only copy of the data. Then you can't make things worse. This is a problem when you are dealing with 7 disks; fortunately they were only 3TB disks and each only had 2TB in use. So I found some space on a ZFS pool and bought a few 6TB disks which I formatted as BTRFS filesystems. For this task I only wanted filesystems that support snapshots so I could work on snapshots, not on the original copy.
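
Creating a writable snapshot of a pristine copy to work on is a one-liner with BTRFS (a sketch; the mount point and snapshot name are hypothetical):

btrfs subvolume snapshot /mnt/images /mnt/images/work1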

I expect that at some future time I will be called in when an array of 6+ disks of the largest available size fails. This will be a more difficult problem to solve as I don’t own any system that can handle so many disks.

I copied a few of the disks to a ZFS filesystem on a Dell PowerEdge T110 running kernel 3.2.68. Unfortunately that system seems to have a problem with USB: when copying from 4 disks at once each disk was reading about 10MB/s, and when copying from 3 disks each disk was reading about 13MB/s. It seems that the system has an aggregate USB bandwidth of 40MB/s, slightly greater than USB 2.0 speed. This made the process take longer than expected.

One of the disks had a read error, which was presumably the cause of the original RAID failure. dd has the option conv=noerror to make it continue after a read error. This initially seemed good but the resulting file was smaller than the source partition. It seems that conv=noerror doesn't seek the output file to maintain input and output alignment. If I had a hard drive filled with plain ASCII that MIGHT even be useful, but for a filesystem image it's worse than useless. The only option was to repeatedly run dd with matching skip and seek options incrementing by 1K until it had passed the section with errors.
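
A minimal sketch of that process, stepping one 1K block at a time through the bad region (the device name, image file, and offset range here are hypothetical; the real offsets come from where dd stopped):

for i in $(seq 1000000 1001024) ; do dd if=/dev/sdg of=disk.img bs=1024 count=1 skip=$i seek=$i conv=noerror 2> /dev/null ; done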

for n in /dev/loop[0-6] ; do echo $n ; mdadm --examine -v -v --scan $n|grep Events ; done

Once I had all the images I had to assemble them. The Linux Software RAID didn't like the array because not all the devices had the same event count. The way Linux Software RAID (and probably most RAID implementations) works is that each member of the array has an event counter that is incremented when disks are added, removed, and when data is written. If there is an error then after a reboot only disks with matching event counts will be used. The above command shows the Events count for all the disks.

Fortunately different event numbers aren’t going to stop us. After assembling the array (which failed to run) I ran “mdadm -R /dev/md1” which kicked some members out. I then added them back manually and forced the array to run. Unfortunately attempts to write to the array failed (presumably due to mismatched event counts).
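
For reference, the assembly steps were roughly along these lines (a sketch reconstructed from the description above, not the exact command history; the loop device chosen for re-adding is illustrative):

mdadm --assemble --force /dev/md1 /dev/loop[0-6]
mdadm -R /dev/md1
mdadm --manage /dev/md1 --re-add /dev/loop3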

Now my next problem is that I can make a 10TB degraded RAID-5 array which is read-only but I can’t mount the XFS filesystem because XFS wants to replay the journal. So my next step is to buy another 2*6TB disks to make a RAID-0 array to contain an image of that XFS filesystem.

Finally backups are a really good thing…

June 27, 2015

git.openstack.org adventures

Over the past few months I started to notice occasional issues when cloning repositories (particularly nova) from git.openstack.org.

It would fail with something like

git clone -vvv git://git.openstack.org/openstack/nova .
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed

The problem would occur sporadically during our 3rd party CI runs causing them to fail. Initially these went somewhat ignored as rechecks on the jobs would succeed and the world would be shiny again. However, as they became more prominent the issue needed to be addressed.

When a patch merges in gerrit it is replicated out to 5 different cgit backends (git0[1-5].openstack.org). These are then balanced by two HAProxy frontends which are on a simple DNS round-robin.

                          +-------------------+
                          | git.openstack.org |
                          |    (DNS Lookup)   |
                          +--+-------------+--+
                             |             |
                    +--------+             +--------+
                    |           A records           |
+-------------------v----+                    +-----v------------------+
| git-fe01.openstack.org |                    | git-fe02.openstack.org |
|   (HAProxy frontend)   |                    |   (HAProxy frontend)   |
+-----------+------------+                    +------------+-----------+
            |                                              |
            +-----+                                    +---+
                  |                                    |
            +-----v------------------------------------v-----+
            |    +---------------------+  (source algorithm) |
            |    | git01.openstack.org |                     |
            |    |   +---------------------+                 |
            |    +---| git02.openstack.org |                 |
            |        |   +---------------------+             |
            |        +---| git03.openstack.org |             |
            |            |   +---------------------+         |
            |            +---| git04.openstack.org |         |
            |                |   +---------------------+     |
            |                +---| git05.openstack.org |     |
            |                    |  (HAProxy backend)  |     |
            |                    +---------------------+     |
            +------------------------------------------------+

Reproducing the problem was difficult. At first I was unable to reproduce locally, or even on an isolated turbo-hipster run. Since the problem appeared to be specific to our 3rd party tests (little evidence of it in 1st party runs) I started by adding extra debugging output to git.

We were originally cloning repositories via the git:// protocol. The debugging information was unfortunately limited and provided no useful diagnosis. Switching to https allowed for more CURL output (when using GIT_CURL_VERBOSE=1 and GIT_TRACE=1) but this in itself just created noise. It actually took me a few days to remember that the servers are running arbitrary code anyway (a side effect of testing) and therefore cloning via the potentially insecure http protocol didn't add any further risk.

Over http we got a little more information, but still nothing that was conclusive at this point:

git clone -vvv http://git.openstack.org/openstack/nova .

error: RPC failed; result=18, HTTP code = 200
fatal: The remote end hung up unexpectedly
fatal: protocol error: bad pack header

After a bit it became apparent that the problems occurred mostly during high (patch) traffic times, that is, when a lot of tests need to be queued. This led me to think that either the network turbo-hipster was on was flaky when doing multiple git clones in parallel, or the git servers themselves were flaky. The lack of similar upstream failures initially led me to think it was the former. In order to reproduce the problem I decided to use Ansible to do multiple clones of repositories and see if that would uncover it. If needed I would then have extended this to orchestrating other parts of turbo-hipster, in case the problem was symptomatic of something else.

Firstly I needed to clone from a bunch of different servers at once to simulate the network failures more closely (rather than doing multiple clones on the one machine, or from the one IP in containers, for example). To simplify this I decided to learn some Ansible and launch a bunch of nodes on Rackspace (instead of doing it by hand).

Using the pyrax module I put together a crude playbook to launch a bunch of servers. There are likely much neater and better ways of doing this, but it suited my needs. The playbook takes care of placing appropriate sshkeys so I could continue to use them later.

    ---
    - name: Create VMs
      hosts: localhost
      vars:
        ssh_known_hosts_command: "ssh-keyscan -H -T 10"
        ssh_known_hosts_file: "/root/.ssh/known_hosts"
      tasks:
        - name: Provision a set of instances
          local_action:
            module: rax
            name: "josh-testing-ansible"
            flavor: "4"
            image: "Ubuntu 12.04 LTS (Precise Pangolin) (PVHVM)"
            region: "DFW"
            count: "15"
            group: "raxhosts"
            wait: yes
          register: raxcreate

        - name: Add the instances we created (by public IP) to the group 'raxhosts'
          local_action:
            module: add_host
            hostname: "{{ item.name }}"
            ansible_ssh_host: "{{ item.rax_accessipv4 }}"
            ansible_ssh_pass: "{{ item.rax_adminpass }}"
            groupname: raxhosts
          with_items: raxcreate.success
          when: raxcreate.action == 'create'

        - name: Sleep to give time for the instances to start ssh
          #there is almost certainly a better way of doing this
          pause: seconds=30

        - name: Scan the host key
          shell: "{{ ssh_known_hosts_command }} {{ item.rax_accessipv4 }} >> {{ ssh_known_hosts_file }}"
          with_items: raxcreate.success
          when: raxcreate.action == 'create'

    - name: Set up sshkeys
      hosts: raxhosts
      tasks:
       - name: Push root's pubkey
         authorized_key: user=root key="{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"

From here I can use Ansible to work on those servers using the rax inventory. This allows me to address any nodes within my tenant and then log into them with the seeded sshkey.
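
As a quick sanity check that the dynamic inventory and seeded keys work (a sketch, assuming the stock rax.py inventory script that ships with Ansible):

ansible "josh-testing-ansible*" -i rax.py -u root -m ping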

The next step of course was to run tests. Firstly I just wanted to reproduce the issue, so I crudely set up an environment that could simply clone nova multiple times.

    ---
    - name: Prepare servers for git testing
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Install git
          apt: name=git state=present update_cache=yes
        - name: remove nova if it is already cloned
          shell: 'rm -rf nova'

    - name: Clone nova and monitor tcpdump
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Clone nova
          shell: "git clone http://git.openstack.org/openstack/nova"

By default Ansible runs with 5 forked processes, meaning that Ansible works on 5 servers at a time. We want to exercise git heavily (in the same way turbo-hipster does) so we use the --forks param to run the clone on all the servers at once. The plan was to keep launching servers until the error reared its head from the load.
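
Concretely, that meant invocations along these lines (the playbook file name here is a placeholder):

ansible-playbook -i rax.py --forks 100 clone_nova.yml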

To my surprise this happened with very few nodes (less than 15, but I left that as my minimum testing). To confirm I also ran the tests after launching further nodes to see it fail at 50 and 100 concurrent clones. It turned out that the more I cloned the higher the failure rate percentage was.

Now that I had the problem reproducing, it was time to do some debugging. I modified the playbook to capture tcpdump information during the clone. Initially git was cloning over IPv6 so I turned that off on the nodes to force IPv4 (just in case it was a v6 issue, but the problem did present itself on both networks). I also locked git.openstack.org to one IP rather than randomly hitting both front ends.

    ---
    - name: Prepare servers for git testing
      hosts: josh-testing-ansible*
      serial: "100%"
      tasks:
        - name: Install git
          apt: name=git state=present update_cache=yes
        - name: remove nova if it is already cloned
          shell: 'rm -rf nova'

    - name: Clone nova and monitor tcpdump
      hosts: josh-testing-ansible*
      serial: "100%"
      vars:
        cap_file: tcpdump_{{ ansible_hostname }}_{{ ansible_date_time['epoch'] }}.cap
      tasks:
        - name: Disable ipv6 1/3
          sysctl: name="net.ipv6.conf.all.disable_ipv6" value=1 sysctl_set=yes
        - name: Disable ipv6 2/3
          sysctl: name="net.ipv6.conf.default.disable_ipv6" value=1 sysctl_set=yes
        - name: Disable ipv6 3/3
          sysctl: name="net.ipv6.conf.lo.disable_ipv6" value=1 sysctl_set=yes
        - name: Restart networking
          service: name=networking state=restarted
        - name: Lock git.o.o to one host
          lineinfile: dest=/etc/hosts line='23.253.252.15 git.openstack.org' state=present
        - name: start tcpdump
          command: "/usr/sbin/tcpdump -i eth0 -nnvvS -w /tmp/{{ cap_file }}"
          async: 6000000
          poll: 0 
        - name: Clone nova
          shell: "git clone http://git.openstack.org/openstack/nova"
          #shell: "git clone http://github.com/openstack/nova"
          ignore_errors: yes
        - name: kill tcpdump
          command: "/usr/bin/pkill tcpdump"
        - name: compress capture file
          command: "gzip {{ cap_file }} chdir=/tmp"
        - name: grab captured file
          fetch: src=/tmp/{{ cap_file }}.gz dest=/var/www/ flat=yes

This gave us a bunch of compressed capture files, which I then debugged with the help of my colleagues (a particular thanks to Angus Lees). The results from an early run can be seen here: http://119.9.51.216/old/run1/

Gus determined that the problem was due to a RST packet coming from the source at roughly 60 seconds. This indicated it was likely we were hitting a timeout at the server or a firewall during the git-upload-pack of the clone.
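
If you want to spot the same thing in the captures, filtering for RST packets is enough (the capture file name follows the pattern set in the playbook above):

tcpdump -r tcpdump_hostname_epoch.cap 'tcp[tcpflags] & tcp-rst != 0'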

The solution turned out to be rather straightforward. The git-upload-pack had simply grown too large and would time out depending on the load on the servers. There were timeouts in Apache as well as in the HAProxy config for both frontend and backend responsiveness. The relevant patches can be found at https://review.openstack.org/#/c/192490/ and https://review.openstack.org/#/c/192649/
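
The shape of the HAProxy side of the fix is simply more generous client/server timeouts, something like the following (values illustrative only; the exact settings are in the linked reviews):

defaults
    timeout client  5m
    timeout server  5m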

While upping the timeout avoids the problem, certain projects are clearly pushing the infrastructure to its limits. As such a few changes were made by the infrastructure team (in particular James Blair) to improve git.openstack.org’s responsiveness.

Firstly git.openstack.org is now a higher performance (30GB) instance. This is a large step up from the (8GB) instances previously used as the frontends. Moving to one frontend additionally meant the HAProxy algorithm could be changed to leastconn to help balance connections better (https://review.openstack.org/#/c/193838/).

                          +--------------------+
                          | git.openstack.org  |
                          | (HAProxy frontend) |
                          +----------+---------+
                                     |
                                     |
            +------------------------v------------------------+
            |  +---------------------+  (leastconn algorithm) |
            |  | git01.openstack.org |                        |
            |  |   +---------------------+                    |
            |  +---| git02.openstack.org |                    |
            |      |   +---------------------+                |
            |      +---| git03.openstack.org |                |
            |          |   +---------------------+            |
            |          +---| git04.openstack.org |            |
            |              |   +---------------------+        |
            |              +---| git05.openstack.org |        |
            |                  |  (HAProxy backend)  |        |
            |                  +---------------------+        |
            +-------------------------------------------------+

All that was left was to see if things had improved. I re-ran the test across 15, 30 and then 45 servers. These were all able to clone nova reliably where they had previously been failing. I then upped it to 100 servers, where the cloning began to fail again.

Post-fix logs for those interested:

http://119.9.51.216/run15/

http://119.9.51.216/run30/

http://119.9.51.216/run45/

http://119.9.51.216/run100/

http://119.9.51.216/run15per100/

At this point, however, I’m basically performing a Distributed Denial of Service attack against git. As such, while the servers aren’t immune to a DDoS the problem appears to be fixed.

June 26, 2015

The Value of Money - Part 3

The Western world generally saw the collapse of the Soviet Union as proof positive of the superiority of capitalism over communism/socialism. Most of the arguments ran along the lines that the sheer scale of centrally managing an economy, the nepotism and corruption it bred, the innovation it stifled, and its failure to feed into the needs and wants of its constituents were the reasons for the failure. The irony is that you can now see many of the same flaws in capitalism that were evident in communism and socialism. Given the fact that more and more developed economies are getting into trouble I wonder whether this is the true way forward. The European Union, United States, Japan, and others have all recently endured serious economic difficulty and are projected to continue to experience prolonged issues.



My belief is that if capitalism and free market economics are to work into the future, constraints must be placed on the size of firms relative to the size of the market/economy. Below are some reasons for this belief as well as some other notes regarding market economics:
- I believe that one of the reasons we favour free market economics is that it limits the severity of problems if/when someone/something collapses. If a government collapses you have trouble everywhere. If a company collapses it only impacts the company and the immediate supply chain, distributors, retailers, etc...

- the other problem is that most of the companies that grow to this size have no choice but to be driven by greed. Even if they pay their fair share of taxes, most of them rely on debt of some sort in order to maintain a viable business. Without cash flow from the stock market, their creditors, etc... they can't continue to pay the bills. Hence, they must satisfy their own needs as well as those of their shareholders and creditors at the expense of those in the wider community. An example of this is the large retail chains that operate in many of the more developed countries. The problem is that their power can now rival that of the state. For instance, in Australia, "Almost 40 cents in every dollar we spend at the shops is now taken by a Woolworths or Wesfarmers-owned retail entity" with their interests including "interests in groceries, fuel, liquor, gambling, office supplies, electronics, general merchandise, insurance and hardware, sparking concerns that consumers will pay more."

http://www.news.com.au/finance/money/coles-and-woolworths-receive-almost-40-per-cent-of-australian-retail-spending/story-e6frfmd9-1226043866311

If such a chain collapses it's likely that hundreds of thousands of jobs will be lost in the event of administration/receivership. I'm arguing that we need to spread the risk a bit, so that if one part collapses it doesn't bring the whole thing crashing down around you

http://www.instantshift.com/2010/02/03/22-largest-bankruptcies-in-world-history/

- despite politicians' complaints about MNCs/TNCs not contributing their fair share towards the tax base, they aren't willing to make enough of an effort to change things. There needs to be an understanding that without someone to buy their products and services these companies will go bankrupt. Large firms need employees and consumers as much as we need their tax revenue

- the irony is that we believe that since companies are large they are automatically successful, so we should support them. Think about many of the recent large defense programs that were undertaken by large firms. As indicated previously, there's currently no incentive for them to help the state. They just want to survive and generate profits. The JSF program was deliberately structured in such a way that we've ended up with a fighter jet that isn't up to the original design spec and is well and truly beyond the desired budgetary parameters, putting the national security of many allied nations at risk

https://medium.com/war-is-boring/test-pilot-admits-the-f-35-can-t-dogfight-cdb9d11a875

http://www.janes.com/article/52715/jpo-counters-media-report-that-f-35-cannot-dogfight

- progress within the context of market economics is often only facilitated through proper competition and regulation. At the moment, many of the largest donors to political parties are large companies. This results in a significant distortion of the playing field and of what the ultimate decision makers deem to be important issues.

http://www.nytimes.com/2015/06/25/business/obama-bolsters-his-leverage-with-trade-victory-but-at-a-cost.html?_r=0

https://en.wikipedia.org/wiki/Global_saving_glut

Think about the nature of the pharmaceutical and electronics/IT industries. They both complain that progress (research and development) is difficult. The irony is that it's difficult to argue this if you're not making any worthwhile attempts at it. Both sectors sit atop enough savings to be able to cure much of the world's current woes but they have absolutely no incentive to bring it back on shore to be properly taxed or spent

http://money.cnn.com/2015/03/20/investing/stocks-companies-record-cash-level-oil/

http://www.telegraph.co.uk/finance/11038180/Global-firms-sitting-on-7-trillion-war-chest.html

http://www.telegraph.co.uk/finance/budget/9150406/Budget-2012-UK-companies-are-sitting-on-billions-of-pounds-so-why-arent-they-spending-it.html

http://www.theguardian.com/commentisfree/2013/may/13/tax-havens-hidden-billions

Moreover, they more often than not just use their existing position to continue to exploit the market. A lot of electronics now is simply uneconomical or impossible to repair locally, which means that you have to purchase new products once they have gone out of warranty and failed due to engineered lifecycles (they are designed to fail after a particular period. If they didn't, they would suffer the same fate some car manufacturers have been complaining about: if cars don't fail, no one buys new ones). My belief is that there should be tax concessions if they are willing to invest in SME firms (which comprise the bulk of the economy) via secondary small capitalisation type funds, or they should be forced to (especially if the company doesn't know what to do with spare cash and it is left 'stagnant'). Ironically, longer term returns on broad based funds in this area more often than not exceed the growth of the company in question as well as the economy in general

- sometimes I wonder whether managing an economy (from a political perspective) is much the same as operating as a market analyst. You're effectively taking calculated bets on how the world will end up in the future. Is it possible that good economic managers need to be more lucky than skillful?

http://www.amazon.com/Random-Walk-Down-Wall-Street/dp/0393330338

https://en.wikipedia.org/wiki/A_Random_Walk_Down_Wall_Street

- in some cases, the nature of capitalism is such that the state has grown so large (because in general government services aren't profitable) that it is beginning to groan under the pressure that many of the more developed nations are now feeling. This is a case of both mismanagement and a misunderstanding of how to use capitalism to your advantage

- one of the biggest contradictions in business is that it should all come down to the bottom line. The stupid aspect of this is that most companies have double digit employee turnover and continue to make the excuse that you should simply put up with whatever is thrown at you. If workplaces were generally more civilised and conditions better then you would have a huge cost removed from your business (loss of employees, advertising, training, etc...)

- normally, when people are taught about life, we start with small and simple examples and are then pushed into more complex and advanced ones. The irony is that this is often the opposite of the way we are taught about business. We are taught to dream big and win big, or else crash and burn and learn your place in society. There is a major problem with this. In the Australian economy, SME business accounts for 96% of the economy. It is similar elsewhere. People leaving our educational institutions basically aren't equipped to make money by themselves right out of school. Help them/teach them how and you could help the overall economy as well as the students themselves, by equipping them to look after their own needs, reducing the burden on the social welfare system, and giving them valuable employment experience that may be worthwhile later down the track. Most students are equipped to work for other people, not to start their own company or operate as individuals

http://www.smartcompany.com.au/technology/information-technology/31806-number-of-businesses-in-australia-continues-to-stagnate-abs.html

- all politicians (and people in general) like to talk about the success of their country in attracting MNCs/TNCs to employ people locally. However, the problem is that they aren't the main employment drivers in the economy. Across most of the world's economies small businesses are the driving force ("Such firms comprise around 99% of all businesses in most economies and between half and three quarters of the value added. They also make a significant contribution to employment and are of interest to governments primarily for their potential to create more jobs."). One wonders whether, even with the increased business (direct and indirect) around a large firm, you are getting value for money (especially if you are subsidising their local existence)?

http://theconversation.com/growing-the-global-economy-through-small-to-medium-enterprise-the-g20-sme-conference-28307

- we actually do ourselves somewhat of a disservice by creating a perception that dreaming and living big is what you should want. Popular culture makes it feel as though if you don't go to the right schools, work for the right companies, and so on, you are a failure. The irony is that if every single graduate were taught how to commercialise their ideas while at school I believe we would have a far more flexible, innovative economy. Moreover, both they and the economy in general would get a return on investment. It's no good telling people to be entrepreneurial if they don't know how to be entrepreneurial.

- the irony of the large donor phenomenon is that SME business accounts for most of the activity within the economy...

http://www.smartcompany.com.au/technology/information-technology/31806-number-of-businesses-in-australia-continues-to-stagnate-abs.html

- as we've discussed previously on this blog, the primary ways you can make money are to create something of value, or to change the perceived value of something such that people will want to buy it no matter what the disparity between perceived value and effective value. Once upon a time I looked at German prestige and performance vehicles as the pinnacle of automotive engineering. The more I've learned about them the less impressed I've become. If I told you the evidence points to them being the least reliable, depreciating the most (within any given time frame), the most expensive to repair, insure and service, of average safety, and often only having comparable technology to other cars (once you cut through the marketing speak), you'd think people would be incredibly stupid to purchase them. Yet the trend continues...

http://usedcars.about.com/od/research/fl/10-Least-Reliable-Used-Car-Brands.htm

http://www.bbc.com/news/business-32332210

http://rac.com.au/motoring/motoring-advice/buying-a-car/running-costs

http://rac.com.au/news-community/road-safety-and-transport/safe-cars/how-safe-is-your-car/used-car-safety-ratings

Another good example is the upper echelons of professional sport and artistry (including music, art, etc...). If anybody told you that you were paying several hundred dollars an hour to watch a group of individuals kick a ball, you'd think they were mad. The horrible part is when you realise top-tier amateur competitions, which are free to watch, can be just as entertaining and skillful.

- in reality, pure market forces are very rarely at play, and it often takes a lot of time for people to realise that for all the theory you learn at school, there's a lot more you will learn in the real world

- most industries fit into one of two categories: something that you need, or something that you want. By selling people a dream we can turn what you want into something you need, and create employment from it

- if you want to make abnormal (excess) profits, it's mainly about being able to distinguish between perceived, effective, and actual value. Once you can establish this you can exploit it. This is easier said than done, though. Let's say you discovered Lionel Messi playing in the streets of Somalia rather than Paris. More than likely, you'd value him much less if he were found in Somalia. Sometimes it can be pretty obvious; at other times it's not much different from predicting the future. For instance, the iPod was essentially a re-modelled MP3 player with an integrated software solution/ecosystem, and Coke is basically just a sweet, fizzy drink which is actually beaten by Pepsi in blind tests

- we like short term thinking because we like the notion that we can make a lot of money in a short space of time. That means we can retire early and purchase luxury goods and services. The irony is that this feeds into a disparity between actual, perceived, and effective value, which means that flawed businesses can continue to work. The flaw holds in practice, but in the long term it can result in asset bubbles. Valuation at the correct level is in the collective's overall interest

- risk isn't necessarily directly related to reward if your modelling is good. One way to reduce risk is to let others take it first. You might not make a massive name for yourself, but you should at least not break the bank on a high risk project. This has been a common theme in the Russian and Chinese defense establishments, where they have often taken significant cues from American technology

- it's becoming clearer to me that many financial instruments actually aren't required. The industry itself relies on the fact that many will fall for the perceived notion that you can make a lot of money in a small amount of time or for little labour. The reality is that most will make a lot less than what is perceived to be the case. An example: many financial instruments are created for the express purpose of increasing risk exposure and therefore possible profits/losses. In reality, most people lose. It's like a casino, where the house wins most of the time. The other irony is the following: while liquidity can have a direct correlation with volatility (it allows you to reach a more valid price earlier, especially if many are involved in pricing), the same is also true in the opposite direction. It only takes a few minor outliers to change the perception of where value within the market exists

http://blogs.cfainstitute.org/investor/2015/06/11/solutions-to-a-misbehaving-finance-industry/

- many SME firms collapse within a short time frame, but easy credit makes it easier for bad business models to continue to exist. The same is also true of the United States economy, where uncompetitive industries were allowed to continue to exist for a long time without adequate trade barriers. If the barriers are lifted we create circumstances where companies are forced to alter their strategies earlier, or else re-structure/collapse/declare bankruptcy. This will help to reduce the impact of providing credit to flawed companies which ultimately collapse

- the way we measure credit risk is often questionable. Financial institutions often turn away low income earners because they are considered a credit risk. I looked at this further, and the rates they are actually charged are diabolical. At one particular place, they were charging 10% for a one week loan, 15% for a 3 week loan, and then 25% for a month long loan. If the industry is so risky, though, how does it continue to exist? Most of the people who understand the problem have basically said that people who require this money simply have a hard time budgeting and managing their affairs. Essentially, providing them with a lump sum component every once in a while makes them believe they can spend freely. The irony is that the rest of society is also somewhat guilty of this. If we were paid cash by the hour (rather than in regular lump sum payments) and had to pay a component of our bills and other expenses each day, we would look at our purchases very differently

http://www.news.com.au/finance/real-estate/stamp-duty-scandal-tony-abbott-under-pressure-to-scrap-our-worst-tax-amid-disastrous-poll/story-fndban6l-1227398035046?from=google_rss&google_editors_picks=true

http://www.perthnow.com.au/news/breaking-news/welfare-card-trial-sites-still-undecided/story-fnhrvfuw-1227398497797?nk=2dc00eb5accf0aef95bbb39faeb08ba0-1434358050

At the other end of the scale, there exists another paradox/contradiction. I've heard stories about people with relatively high incomes being denied credit even though their credit history was good (companies can't make money if you don't breach credit conditions every once in a while). Despite what we say about free market economics, regulatory frameworks, etc..., the system is corrupt. It's just not as overt, and no one likes or wants to admit it.

- despite what many may think of him, I think Vladimir Putin is actually trying to look after his country's best interests. The collapse of the Soviet Union gave rise to the oligarchs, a circumstance facilitated by free market economics without an adequate framework (rules and regulations such as those provided by law). Essentially, the state was replaced by private enterprise, where the needs of the many were placed lower on the pecking order than they would have been had the state still been in charge. I understand his perspective, but I don't believe in the way he has gone about things

https://en.wikipedia.org/wiki/Revolutions_of_1989

https://en.wikipedia.org/wiki/Socialism

https://en.wikipedia.org/?title=Communism

https://en.wikipedia.org/wiki/Capitalism

http://www.msnbc.com/msnbc/pope-francis-rejects-communism-critique

http://ncronline.org/blogs/francis-chronicles/pope-francis-concern-poor-sign-gospel-not-red-flag-communism

http://www.marxist.com/kievs-contemporary-anti-communism-and-the-crimes-of-the-oligarchys-very-existence.htm

- people say that we should do more and spend more in the fight against organised crime. The stupid, ironic thing is that when society is unfair and unjust, organised crime grows much stronger, because it provides people with a way of making a living. In Europe, the Italian mafia has grown much stronger with the advent of the European economic difficulties, and it was much the same in Japan when their asset bubble burst during the 90s

https://en.wikipedia.org/wiki/Lost_Decade_%28Japan%29

- the EU was born of the fact that no one wanted war in Europe again. It feels much the same with the rest of the world. We've used progress and better living conditions as an argument against going to war. However, the world has essentially ended up engaging in an effective 'Cold War'. Much of the world's spending revolves around the notion of deterrence: namely, if I go to war with you, I know that I'll suffer just as much damage (if not more)

https://en.wikipedia.org/wiki/List_of_countries_by_military_expenditures

http://www.globalissues.org/article/75/world-military-spending

There are a number of ways around this: reaching a consensus that countries will no longer attempt to project power outwards (defend yourself only, don't interfere with others; highly unlikely), agreeing that invasion will no longer be part of the future landscape (other countries will come to the aid of those in trouble; unlikely, especially with the rise of terrorism), or else collapsing economies such that countries will no longer be able to afford to spend on defense. The troubling thing is that the last scenario has actually been outlined in various US intelligence and defense reports. It's essentially war without war. If you can wreak havoc in someone's economy then they'll no longer be a problem for you. The irony is that the larger your intelligence apparatus, the more likely it is that you can engage in this style of activity. Previously leaked reports and WikiLeaks have made me highly skeptical of the notion that the average country doesn't engage in this style of activity.

http://www.theguardian.com/world/2013/aug/29/us-intelligence-spending-double-9-11-secret-budget

https://en.wikipedia.org/wiki/United_States_intelligence_budget

The irony is that if you don't engage in these activities you may lose a significant advantage, but if you do, you're left to question whether or not you are the good guy in this affair

- people who haven't spent enough time in the real world often understand only the theory. Once you understand how things actually work, your whole perspective changes. Take the housing asset bubble that we may be going through. As stated previously, making abnormal profits is about managing the difference between perceived, actual, and effective value. It's clear that in theory boosting supply may change things. The thing I've discovered is that in free market economics it only takes a small thing to change perception. Once the perception snowballs you're stuck with the same problem, whether the buyer is a new home buyer or a foreign investor purchasing in the local market

http://www.smh.com.au/nsw/mike-bairds-400m-boost-for-infrastructure-fund-to-tackle-housing-affordability-crisis-20150621-ghtfr8.html

http://www.theglobeandmail.com/report-on-business/milk-surplus-forcing-canadas-dairy-industry-to-dump-supply/article25030753/

http://www.brisbanetimes.com.au/it-pro/rental-growth-slowdown-signals-residential-property-bust-on-the-way-20150626-ghxkdr

http://www.news.com.au/finance/real-estate/economists-claim-australia-in-midst-of-largest-housing-bubble-on-record/story-fncq3era-1227410053643?from=google_rss&google_editors_picks=true

- a business structure is simply a focal point of communication between business and consumer. It also affords a government the opportunity to tax it more effectively

- by being so insistent on upskilling and education we make low labour costs almost impossible to achieve. This makes a lot of infrastructure projects in developed countries economically unviable. Good examples are 457 visas in Australia and illegal immigration in the United States (especially from Mexico), which are often used and abused to achieve lower labour costs than would otherwise be possible. Another example is the Snowy Mountains Hydro-electric Scheme: it's said that hardly anyone on site knew English and that people often just learned on the job.

http://www.politico.com/story/2015/06/donald-trump-calls-jeb-bush-unhappy-119153.html?ml=ri

Another recent project put this into perspective. It was said that building infrastructure (tunnels, office blocks, etc...) in China, shipping it, and then assembling it here in Australia would be more cost effective than building it here outright. We need to give people a chance no matter what their education or skill level if we are to balance government budgets and reduce the incidence of off-shoring, without necessarily having to resort to often expensive anti-off-shoring techniques such as tariffs, rebates, taxes, etc...

- our perception of success feels odd sometimes. If you look up the backgrounds of Rupert Murdoch, Donald Trump, and several others, you'll see that they have repeatedly been on the point of bankruptcy. Under normal circumstances anyone continually on the verge of losing everything would be considered mediocre, but in the business world they're considered successful because they can keep the whole thing going... Also, look at the poverty figures for the United States, Germany, the United Kingdom, the United Arab Emirates, Iran, and Japan. Notice the odd one out? Iran has been under sanctions for a long time over its alleged nuclear research activities, and yet the level of poverty in Iran is comparable to all these others.

https://en.wikipedia.org/wiki/List_of_countries_by_percentage_of_population_living_in_poverty

- the only other way to achieve lower costs in developed countries is to resort to automation and robots (or else tap developing countries for lower priced components). I've looked at Australian car manufacturing plants and at American and European plants for mass produced vehicles. The level of automation in the American and European plants seems to be significantly higher, with comparable build quality

http://www.kyodonews.net/news/2015/06/20/21340

http://forums.whirlpool.net.au/archive/2050953

- the perception is that we should always hire the best and brightest in order to get the job done, and that we should do our best to make them happy. The irony is that I've worked on both sides of the fence. By hiring only the (perceived) best and brightest (a lot of the time the best and brightest don't necessarily get hired, based on what I've seen) and only settling on them, we force wages up across the board and make work more difficult for the existing workforce. It may even be more difficult to keep them happy. The other irony is that there are many wealthy global companies who can afford to hire away your best staff, forcing prices up even further. Complete free trade works in favour of those who are already wealthy, and makes it harder for those down the chain to make a living and to progress

- if all the best and brightest are hired by the same companies then (based on personal experience) you aren't necessarily going to get the best out of them. Companies have an increasing tendency (for regulatory as well as political reasons) to pigeonhole them into specific roles which don't allow them to realise their full potential. The individual, the company, and the collective all lose out

http://blogs.cfainstitute.org/investor/2015/06/11/solutions-to-a-misbehaving-finance-industry/

- we believe in our current style of capitalism because we have a perception that it gives everyone a chance in life to be and do whatever they want. In reality, it's a lot more complicated. At its very core I think it's very much like Winston Churchill's opinion of the Westminster parliamentary system: "Democracy is the worst form of government, except for all the others."

http://www.goodreads.com/quotes/267224-democracy-is-the-worst-form-of-government-except-for-all

- it's clear that I believe in limited capitalism, and that for the most part we should try to work with those within our regions to reduce the chances of a systemic collapse. Currency manipulation, foreign investment law, tariffs, taxation, etc... are all lawful means of changing the playing field. In fact, the exact same techniques that countries use to protect against trade sanctions can be used to guard the economic safety of citizens locally. By playing by the current rules of free trade we are essentially playing into the hands of the larger companies of the world (mostly based in the United States). It's a form of imperialism/conquest (deliberate or not) without open warfare, with the effective ruler being the United States and these companies acting as proxies

http://www.theglobeandmail.com/report-on-business/international-business/european-business/europe-shutters-as-greek-banks-bleed-cash/article25033867/

http://rt.com/business/250497-obama-economy-china-trade/

- making or saving money can sometimes be counterintuitive. If you've ever worked in the IT industry in any sort of support role, you'll realise that no matter what level of support you operate at, one of the main aims is to establish whether or not the problem occurs within your own area of oversight. If it does, you try to fix it; if not, you ignore it and basically tell the other end to kindly go away, because you often have neither the expertise nor the oversight to fix it. The medical and pharmaceutical industry is much the same. The irony is that this perspective can result in more long term harm than good. The United States budget is out of whack, with one of the major causes being the high cost of drugs, as well as the short sighted perspective of medical practitioners who tend not to treat the problem until it's fixed but keep on managing it. Fix it if you can and the problem goes away, and your budget is in better shape

http://ourfiniteworld.com/2011/04/08/whats-behind-united-states-budget-problems/

http://www.businessinsider.com/us-budget-deficit-2011-7

- if so many countries are so concerned about profit shifting, why don't they simply make it un-economical/impossible to re-locate from now on? That way existing financial centres for such activity can adapt in the meantime, while other countries can begin to regain some of their investment

- every company engages in anti-competitive behaviour. Even though Google (and others) are supposed proponents of the 'Don't be Evil' mantra, they still have shareholders to report to, meaning that even if they don't want to, they have to

- if too few countries make changes, their companies are going to be subject to foreign takeover interest (friendly and non-friendly) if adequate measures aren't taken to protect them. Moreover, they will be at a competitive disadvantage when attempting to branch out. The only way to look after these interests is to look at the way companies are structured, in order to look after the needs of both the individual and the collective simultaneously

- making changes for a fairer and more equitable society isn't easy, and the irony is that those who are already successful will always appeal against anything that changes the status quo. They will insist that since they 'made it', so can others. Moreover, there are always those within the political and public services who will have differing opinions on how to achieve the same thing

http://www.smh.com.au/world/us-supreme-court-hands-obama-major-victory-on-obamacare-healthcare-reform-20150625-ghy1xq.html#content

http://www.seattletimes.com/seattle-news/a-pathological-refusal-to-see-any-shred-of-good-in-obamacare/

- people say that globalisation and free market capitalism are a guard against collapse: someone in the system is always going to be looking for money, or someone is always going to have money. The problem is that there's no incentive for this to happen. Moreover, it has been proven in the United States and Europe that pure private, free trade capitalism isn't necessarily going to fill the void should there be significant underlying problems. Even states and unions cannot hold back the dam should the market burst. Moreover, firms have shareholders and creditors to report to. Without adequate safeguards in place, the needs of the many are never going to be met by the few who are lucky enough to have survived (the one exception being strong leadership/management in the private sector, of which I haven't seen too many instances)

https://en.wikipedia.org/wiki/Great_Recession

http://www.afr.com/markets/commodities/energy/saudis-seen-escalating-battle-for-global-oil-market-share-20150618-ghrxws

https://en.wikipedia.org/wiki/2007%E2%80%9308_world_food_price_crisis

https://en.wikipedia.org/wiki/2000s_energy_crisis

http://www.news.com.au/national/breaking-news/govt-to-explore-social-impact-bonds/story-e6frfku9-1227416495203

http://www.news.com.au/world/breaking-news/pope-talking-drivel-catholic-economist/story-e6frfkui-1227416020721

June 25, 2015

Dutch Court orders Netherlands Government cut CO2 emissions by 25 percent by 2020 | Climate Citizen

http://takvera.blogspot.com.au/2015/06/dutch-court-orders-netherlands.html

A Dutch court in a landmark legal case has just handed down a verdict that the Netherlands Government has the legal duty to take measures against #climate change. Further, the court ordered that a 25% reduction of CO2 emissions, based on 1990 levels, must be accomplished by 2020 by the Dutch government in accordance with IPCC scientific recommendations for industrial countries.

[…]

Sue Higginson, Principal Solicitor for the Environmental Defenders Office (EDO) NSW, said that the same legal arguments are unlikely to be used in Australia. “Dutch civil laws are much more specific in their terms than Australian laws,” she said.

[…]

In Australia, such a case would be much less straightforward, as we do not have international human rights or a general duty of care incorporated directly in our constitution or legal framework.

Hashing Speed: SHA256 vs Murmur3

So I did some IBLT research (as posted to bitcoin-dev) and I lazily used SHA256 to create both the temporary 48-bit txids, and from them to create a 16-bit index offset.  Each node has to produce these for every bitcoin transaction ID it knows about (ie. its entire mempool), which is normally less than 10,000 transactions, but we’d better plan for 1M given the coming blopockalypse.

For txid48, we hash an 8 byte seed with the 32-byte txid; I ignored the 8 byte seed for the moment, and measured various implementations of SHA256 hashing 32 bytes on my Intel Core i3-5010U CPU @ 2.10GHz laptop (though note we’d be hashing 8 extra bytes for IBLT; implementations in CCAN):

  1. Bitcoin’s SHA256: 527.7+/-0.9 nsec
  2. Optimizing the block ending on bitcoin’s SHA256: 500.4+/-0.66 nsec
  3. Intel’s asm rorx: 314.1+/-0.3 nsec
  4. Intel’s asm SSE4: 337.5+/-0.5 nsec
  5. Intel’s asm RORx-x8ms: 458.6+/-2.2 nsec
  6. Intel’s asm AVX: 336.1+/-0.3 nsec

So, if you have 1M transactions in your mempool, expect it to take about 0.62 seconds of hashing to calculate the IBLT (two hashes per transaction at roughly 314 nsec each, using the fastest implementation above).  This is too slow (though it’s fairly trivially parallelizable).  However, we just need a universal hash, not a cryptographic one, so I benchmarked murmur3_x64_128:

  1. Murmur3-128: 23 nsec

That’s more like 0.046 seconds of hashing, which seems like enough of a win to add a new hash to the mix.
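
As a rough cross-check on other hardware, without building the CCAN code, OpenSSL's built-in benchmark covers SHA256 on small blocks (different implementations to those above, so treat it as a ballpark only):

# prints SHA256 throughput for 16, 64, 256... byte blocks; the 16 and 64
# byte columns bracket the 32-40 byte inputs used here
openssl speed sha256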

Toolchains for OpenPower petitboot environments

Since we're using buildroot for the OpenPower firmware build infrastructure, it's relatively straightforward to generate a standalone toolchain to build add-ons to the petitboot environment. This toolchain will allow you to cross-compile from your build host to an OpenPower host running the petitboot environment.

This is just a matter of using op-build's toolchain target, and specifying the destination directory in the BR2_HOST_DIR variable. For this example, we'll install into /opt/openpower/:

sudo mkdir /opt/openpower/
sudo chown $USER /opt/openpower/
op-build BR2_HOST_DIR=/opt/openpower/ toolchain

After the build completes, you'll end up with a toolchain based in /opt/openpower.

Using the toolchain

If you add /opt/openpower/usr/bin/ to your PATH, you'll have the toolchain binaries available.

[jk@pecola ~]$ export PATH=/opt/openpower/usr/bin/:$PATH
[jk@pecola ~]$ powerpc64le-buildroot-linux-gnu-gcc --version
powerpc64le-buildroot-linux-gnu-gcc (Buildroot 2014.08-git-g80a2f83) 4.9.0
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Currently, this toolchain isn't relocatable, so you'll need to keep it in the original directory for tools to correctly locate other toolchain components.

OpenPower doesn't (yet) specify an ABI for the petitboot environment, so there are no guarantees that a petitboot plugin will be forwards- or backwards- compatible with other petitboot environments.

Because of this, if you use this toolchain to build binaries for a petitboot plugin, you'll need to either:

  • ensure that your op-build version matches the one used for the target petitboot image; or
  • provide all necessary libraries and dependencies in your distributed plugin archive (for example, by linking statically, as sketched below).
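
For the second option, the bluntest instrument is a static link, which avoids depending on the target's libraries entirely. A sketch, assuming the toolchain is on your PATH as above (my-plugin.c is a made-up example source file):

# build a plugin binary with no runtime library dependencies
powerpc64le-buildroot-linux-gnu-gcc -static -o my-plugin my-plugin.c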

We're working to address this though, by defining the ABI that will be regarded as stable across petitboot builds. Stay tuned for updates.

Using the toolchain for subsequent op-build runs

Because op-build has a facility to use an external toolchain, you can re-use the toolchain built above for subsequent op-build invocations, where you want to build actual firmware binaries. If you're using multiple op-build trees, or are regularly building from scratch, this can save a lot of time as you don't need to continually rebuild the toolchain from source.

This is a matter of configuring your op-build tree to use an "External Toolchain", in the "Toolchain" screen of the menuconfig interface.

You'll need to set the toolchain path to the path you used for BR2_HOST_DIR above, with /usr appended. The other toolchain configuration parameters (kernel header series, libc type, features enabled) will need to match the parameters that were given in the initial toolchain build. Fortunately, the buildroot code will check that these match, and print a helpful error message if there are any inconsistencies.

For the example toolchain built above, these are the full configuration parameters I used:

BR2_TOOLCHAIN=y
BR2_TOOLCHAIN_USES_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_PREINSTALLED=y
BR2_TOOLCHAIN_EXTERNAL_PATH="/opt/openpower/usr/"
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_PREFIX="$(ARCH)-linux"
BR2_TOOLCHAIN_EXTERNAL_PREFIX="$(ARCH)-linux"
BR2_TOOLCHAIN_EXTERNAL_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL_HEADERS_3_15=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL_INET_RPC=y
BR2_TOOLCHAIN_EXTERNAL_CXX=y
BR2_TOOLCHAIN_EXTRA_EXTERNAL_LIBS=""
BR2_TOOLCHAIN_HAS_NATIVE_RPC=y
BR2_TOOLCHAIN_HAS_THREADS=y
BR2_TOOLCHAIN_HAS_THREADS_DEBUG=y
BR2_TOOLCHAIN_HAS_THREADS_NPTL=y
BR2_TOOLCHAIN_HAS_SHADOW_PASSWORDS=y
BR2_TOOLCHAIN_HAS_SSP=y

Once that's done, anything you build using that op-build configuration will refer to the external toolchain, and use that for the general build process.

June 24, 2015

PyCon Australia 2015 Programme Released

PyCon Australia is proud to release our programme for 2015, spread over the weekend of August 1st and 2nd, following our Miniconfs on Friday 31 July.

Following our largest ever response to our Call for Proposals, we are able to present two keynotes, forty eight talks and two tutorials. The conference will feature four full tracks of presentations, covering all aspects of the Python ecosystem, presented by experts and core developers of key Python technology. Our presenters cover a broad range of backgrounds, including industry, research, government and academia.

We are still finalising our Miniconf timetable, but we expect another thirty talks for Friday. We’d like to highlight the inaugural running of the Education Miniconf whose primary aim is to bring educators and the Python community closer together.

The full schedule for PyCon Australia 2015 can be found at http://2015.pycon-au.org/programme/about

PyCon Australia has endeavoured to keep tickets as affordable as possible. We are able to do so, thanks to our Sponsors and Contributors. Registrations for PyCon Australia 2015 are now open, with prices starting at AU$50 for students, and tickets for the general public starting at AU$240. All prices include GST, and more information can be found at http://2015.pycon-au.org/register/prices

We have also worked out favourable deals with accommodation providers for PyCon delegates. Find out more about the options at http://2015.pycon-au.org/register/accommodation

To begin the registration process, and find out more about each level of ticket, visit http://2015.pycon-au.org/register/prices

Important Dates to Help You Plan

June 29: Financial Assistance program closes.

July 8: Last day to Order PyCon Australia 2015 T-shirts

July 19: Last day to Advise Special Dietary Requirements

July 31: PyCon Australia 2015 Begins

About PyCon Australia

PyCon Australia is the national conference for the Python Programming Community. The sixth PyCon Australia will be held on July 31 through August 4th, 2015 in Brisbane, bringing together professional, student and enthusiast developers with a love for developing with Python. PyCon Australia informs the country’s Python developers with presentations, tutorials and panel sessions by experts and core developers of Python, as well as the libraries and frameworks that they rely on.

To find out more about PyCon Australia 2015, visit our website at http://pycon-au.org or e-mail us at contact@pycon-au.org.

PyCon Australia is presented by Linux Australia (www.linux.org.au) and acknowledges the support of our Platinum Sponsors, Red Hat Asia-Pacific, and Netbox Blue; and our Gold sponsors, The Australian Signals Directorate and Google Australia. For full details of our sponsors, see our website.


Custom kernels in OpenPower firmware

As of commit 2aff5ba6 in the op-build tree, we're able to easily replace the kernel in an OpenPower firmware image.

This commit adds a new partition (called BOOTKERNEL) to the PNOR image, which provides the petitboot bootloader environment. Since it's now in its own partition, we can replace the image with a custom build. Here's a little guide to doing that, using as an example a separate branch of op-build that provides a little-endian kernel.

You can check if your currently-running firmware has this BOOTKERNEL partition by running pflash -i on the BMC. It should list BOOTKERNEL in the partition table listing:

# pflash -i
Flash info:
-----------
Name          = Micron N25Qx512Ax
Total size    = 64MB 
Erase granule = 4KB 

Partitions:
-----------
ID=00            part 00000000..00001000 (actual=00001000)
ID=01            HBEL 00008000..0002c000 (actual=00024000)
[...]
ID=11            HBRT 00949000..00ca9000 (actual=00360000)
ID=12         PAYLOAD 00ca9000..00da9000 (actual=00100000)
ID=13      BOOTKERNEL 00da9000..01ca9000 (actual=00f00000)
ID=14        ATTR_TMP 01ca9000..01cb1000 (actual=00008000)
ID=15       ATTR_PERM 01cb1000..01cb9000 (actual=00008000)
[...]
#  

If your partition table does not contain a BOOTKERNEL partition, you'll need to upgrade to a more recent PNOR image to proceed.

First (if you don't have one already), grab a suitable version of op-build. In this example, we'll use my le branch, which has little-endian support:

git clone --recursive git://github.com/jk-ozlabs/op-build.git
cd op-build
git checkout -b le origin/le
git submodule update

Then, prepare our environment and configure for the relevant platform - in this case, habanero:

. op-build-env
op-build habanero_defconfig

If you'd like to change any of the kernel config (for example, to add or remove drivers), you can do that now, using the 'linux-menuconfig' target. This is only necessary if you wish to make changes. Otherwise, the default kernel config will work.

op-build linux-menuconfig

Next, we build just the userspace and kernel parts of the firmware image, by specifying the linux26-rebuild-with-initramfs build target:

op-build linux26-rebuild-with-initramfs

If you're using a fresh op-build tree, this will take a little while, as it downloads and builds a toolchain, userspace and kernel. Once that's complete, you'll have a built kernel image in the output tree:

 output/build/images/zImage.epapr

Transfer this file to the BMC, and flash it using pflash, specifying the -P <PARTITION> argument to write to a single PNOR partition.
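
For the transfer, something like this works if the BMC is reachable over SSH (the BMC address here is a placeholder):

# copy the new kernel image to the BMC's /tmp
scp output/build/images/zImage.epapr root@bmc:/tmp/

Then, on the BMC: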

pflash -P BOOTKERNEL -e -p /tmp/zImage.epapr

And that's it! The next boot will use your newly-built kernel in the petitboot bootloader environment.

Out-of-tree kernel builds

If you'd like to replace the kernel from op-build with one from your own external source tree, you have two options. Either point op-build at your own tree, or build your own kernel using the initramfs that op-build has produced.

For the former, you can override certain op-build variables to reference a separate source. For example, to use an external git tree:

op-build LINUX_SITE=git://github.com/jk-ozlabs/linux LINUX_VERSION=v3.19

See Customising OpenPower firmware for other examples of using external sources in op-build.

The latter option involves doing a completely out-of-op-build build of a kernel, but referencing the initramfs created by op-build (which is in output/images/rootfs.cpio.xz). From your kernel source directory, add the CONFIG_INITRAMFS_SOURCE argument, specifying the relevant initramfs. For example:

make O=obj ARCH=powerpc \
    CONFIG_INITRAMFS_SOURCE=../op-build/output/images/rootfs.cpio.xz

Smart Phones Should Measure Charge Speed

My first mobile phone lasted for days between charges. I never really found out how long its battery would last because there was no way that I could use it to deplete the charge in any time that I could spend awake. Even if I had managed to run the battery out, the phone was designed to accept 4*AA batteries (its rechargeable battery pack was exactly that size) so I could buy spare batteries at any store.

Modern phones are quite different in physical phone design (phones that weigh less than 4*AA batteries aren’t uncommon), functionality (fast CPUs and big screens suck power), and use (games really drain your phone battery). This requires much more effective chargers; when some phones are intensively used (EG playing an action game with Wifi enabled) they can’t be charged as they use more power than the plug-pack supplies. I’ve previously blogged some calculations about resistance and thickness of wires for phone chargers [1]; it’s obvious that there are some technical limitations to phone charging based on the decision to use a long cable at ~5V.
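
To put rough numbers on that, here is a back-of-envelope sketch assuming 28 AWG power conductors (about 0.21 ohm per metre, common in cheap cables) in a 1 metre cable carrying 2A:

# voltage lost in the cable: I * R, counting both the +5V and ground wires
echo '2 * (2 * 0.21 * 1)' | bc -l

That’s about 0.84V of the nominal 5V gone before the phone sees it, which is part of why thin or damaged cables charge so slowly.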

My calculations about phone charge rate were based on the theoretical resistance of wires based on their estimated cross-sectional area. One problem with such analysis is that it’s difficult to determine how thick the insulation is without destroying the wire. Another problem is that after repeated use of a charging cable some conductors break due to excessive bending. This can significantly increase the resistance and therefore increase the charging time. Recently a charging cable that used to be really good suddenly became almost useless. My Galaxy Note 2 would claim that it was being charged even though the reported level of charge in the battery was not increasing; it seems that the cable only supplied enough power to keep the phone running, not enough to actually charge the battery.

I recently bought a USB current measurement device which is really useful. I have used it to diagnose power supplies and USB cables that didn’t work correctly. But one significant way in which it fails is in the case of problems with the USB connector. Sometimes a cable performs differently when connected via the USB current measurement device.

The CurrentWidget program [2] on my Galaxy Note 2 told me that all of the dedicated USB chargers (the 12V one in my car and all the mains powered ones) supply 1698mA (including the ones rated at 1A) while a PC USB port supplies ~400mA. I don’t think that the Note 2 measurement is particularly reliable. On my Galaxy Note 3 it always says 0mA, so I guess that feature isn’t implemented. An old Galaxy S3 reports 999mA of charging even when the USB current measurement device says ~500mA. It seems to me that the method CurrentWidget uses to get the current isn’t accurate, if it works at all.

Android 5 on the Nexus 4/5 phones will tell you the amount of time until the phone is charged in some situations (on the Nexus 4 and Nexus 5 that I used for testing it didn’t always display it, and I don’t know why). This is useful, but it’s still not good enough.

I think that what we need is for the phone to measure the current that’s being supplied and report it to the user. Then when a phone charges slowly because apps are using power, that won’t be mistaken for slow charging due to a defective cable or connector.
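
On Linux-based phones the kernel often already measures this; many battery drivers expose an instantaneous current reading via sysfs. A sketch for a rooted device (the node name and its sign convention vary between drivers, so treat this as an assumption to verify):

# report instantaneous current per power supply; current_now is in
# microamps where the driver provides it
for ps in /sys/class/power_supply/*/; do
    if [ -r "$ps/current_now" ]; then
        echo "$(basename "$ps"): $(( $(cat "$ps/current_now") / 1000 )) mA"
    fi
done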

June 23, 2015

One Android Phone Per Child

I was asked for advice on whether children should have access to smart phones, it’s an issue that many people are discussing and seems worthy of a blog post.

Claimed Problems with Smart Phones

The first thing that I think people should read is this XKCD post with quotes about the demise of letter writing from 99+ years ago [1]. Given the lack of evidence cited by people who oppose phone use I think we should consider to what extent the current concerns about smart phone use are just reactions to changes in society. I’ve done some web searching for reasons that people give for opposing smart phone use by kids and addressed the issues below.

Some people claim that children shouldn’t get a phone when they are so young that it will just be a toy. That’s interesting given the dramatic increase in the amount of money spent on toys for children in recent times. It’s particularly interesting when parents buy game consoles for their children but refuse mobile phone “toys” (I know someone who did this). I think this is more of a social issue regarding what is a suitable toy than any real objection to phones used as toys. Obviously the educational potential of a mobile phone is much greater than that of a game console.

It’s often claimed that kids should spend their time reading books instead of using phones. When visiting libraries I’ve observed kids using phones to store lists of books that they want to read, this seems to discredit that theory. Also some libraries have Android and iOS apps for searching their catalogs. There are a variety of apps for reading eBooks, some of which have access to many free books but I don’t expect many people to read novels on a phone.

Cyber-bullying is the subject of a lot of anxiety in the media. At least with cyber-bullying there’s an electronic trail, anyone who suspects that their child is being cyber-bullied can check that while old-fashioned bullying is more difficult to track down. Also while cyber-bullying can happen faster on smart phones the victim can also be harassed on a PC. I don’t think that waiting to use a PC and learn what nasty thing people are saying about you is going to be much better than getting an instant notification on a smart phone. It seems to me that the main disadvantage of smart phones in regard to cyber-bullying is that it’s easier for a child to participate in bullying if they have such a device. As most parents don’t seem concerned that their child might be a bully (unfortunately many parents think it’s a good thing) this doesn’t seem like a logical objection.

Fear of missing out (FOMO) is claimed to be a problem, apparently if a child has a phone then they will want to take it to bed with them and that would be a bad thing. But parents could have a policy about when phones may be used and insist that a phone not be taken into the bedroom. If it’s impossible for a child to own a phone without taking it to bed then the parents are probably dealing with other problems. I’m not convinced that a phone in bed is necessarily a bad thing anyway, a phone can be used as an alarm clock and instant-message notifications can be turned off at night. When I was young I used to wait until my parents were asleep before getting out of bed to use my PC, so if smart-phones were available when I was young it wouldn’t have changed my night-time computer use.

Some people complain that kids might use phones to play games too much or talk to their friends too much. What do people expect kids to do? In recent times the fear of abduction has led to children playing outside a lot less; it used to be that 6yos would play with other kids in their street and 9yos would be allowed to walk to the local park. Now people aren’t allowing 14yo kids to walk to the nearest park alone. Playing games and socialising with other kids has to be done over the Internet because kids aren’t often allowed out of the house. Play and socialising are important learning experiences that have to happen online if they can’t happen offline.

Apps can be expensive. But it’s optional to sign up for a credit card with the Google Play store and the range of free apps is really good. Also the default configuration of the app store is to require a password entry before every purchase. Finally it is possible to give kids pre-paid credit cards and let them pay for their own stuff, such pre-paid cards are sold at Australian post offices and I’m sure that most first-world countries have similar facilities.

Electronic communication is claimed to be somehow different and lesser than old-fashioned communication. I presume that people made the same claims about the telephone when it first became popular. The only real difference between email and posted letters is that email tends to be shorter because the reply time is shorter; you can reply to any questions the same day rather than waiting a week for a response, so it makes sense to expect questions rather than covering all possibilities in the first email. If it’s a good thing to have longer forms of communication then a smart phone with a big screen would be a better option than a “feature phone”, and if face to face communication is preferred then a smart phone with video-call access would be the way to go (better even than old fashioned telephony).

Real Problems with Smart Phones

The majority opinion among everyone who matters (parents, teachers, and police) seems to be that crime at school isn’t important. Many crimes that would result in jail sentences if committed by adults receive either no punishment or something trivial (such as lunchtime detention) if committed by school kids. Introducing items that are both intrinsically valuable and which have personal value due to the data storage into a typical school environment is probably going to increase the amount of crime. The best options to deal with this problem are to prevent kids from taking phones to school or to home-school kids. Fixing the crime problem at typical schools isn’t a viable option.

Bills can potentially be unexpectedly large due to kids’ inability to restrain their usage and telcos deliberately making their plans tricky to profit from excess usage fees. The solution is to only use pre-paid plans, fortunately many companies offer good deals for pre-paid use. In Australia Aldi sells pre-paid credit in $15 increments that lasts a year [2]. So it’s possible to pay $15 per year for a child’s phone use, have them use Wifi for data access and pay from their own money if they make excessive calls. For older kids who need data access when they aren’t at home or near their parents there are other pre-paid phone companies that offer good deals, I’ve previously compared prices of telcos in Australia, some of those telcos should do [3].

It’s expensive to buy phones. The solution to this is to not buy new phones for kids, give them an old phone that was used by an older relative or buy an old phone on ebay. Also let kids petition wealthy relatives for a phone as a birthday present. If grandparents want to buy the latest smart-phone for a 7yo then there’s no reason to stop them IMHO (this isn’t a hypothetical situation).

Kids can be irresponsible and lose or break their phone. But the way kids learn to act responsibly is by practice. If they break a good phone and get a lesser phone as a replacement or have to keep using a broken phone then it’s a learning experience. A friend’s son head-butted his phone and cracked the screen – he used it for 6 months after that, I think he learned from that experience. I think that kids should learn to be responsible with a phone several years before they are allowed to get a “learner’s permit” to drive a car on public roads, which means that they should have their own phone when they are 12.

I’ve seen an article about a school finding that tablets didn’t work as well as laptops which was touted as news. Laptops or desktop PCs obviously work best for typing. Tablets are for situations where a laptop isn’t convenient and when the usage involves mostly reading/watching, I’ve seen school kids using tablets on excursions which seems like a good use of them. Phones are even less suited to writing than tablets. This isn’t a problem for phone use, you just need to use the right device for each task.

Phones vs Tablets

Some people think that a tablet is somehow different from a phone. I’ve just read an article by a parent who proudly described their policy of buying “feature phones” for their children and tablets for them to do homework etc. Really a phone is just a smaller tablet, once you have decided to buy a tablet the choice to buy a smart phone is just about whether you want a smaller version of what you have already got.

The iPad doesn’t appear to be able to make phone calls (but it supports many different VOIP and video-conferencing apps) so that could technically be described as a difference. AFAIK all Android tablets that support 3G networking also support making and receiving phone calls if you have a SIM installed. It is awkward to use a tablet to make phone calls but most usage of a modern phone is as an ultra portable computer not as a telephone.

The phone vs tablet issue doesn’t seem to be about the capabilities of the device. It’s about how portable the device should be and the image of the device. I think that if a tablet is good then a more portable computing device can only be better (at least when you need greater portability).

Recently I’ve been carrying a 10″ tablet around a lot for work; sometimes a tablet will do for emergency work when a phone is too small and a laptop is too heavy. Even though tablets are thin and light they’re still inconvenient to carry, and the issue of size and weight is a greater problem for kids. 7″ tablets are a lot smaller and lighter, but that’s getting close to a 5″ phone.

Benefits of Smart Phones

Using a smart phone is good for teaching children dexterity. It can also be used for teaching art in situations where more traditional art forms such as finger painting aren’t possible (I have met a professional artist who has used a Samsung Galaxy Note phone for creating art work).

There is a huge range of educational apps for smart phones.

The Wikireader (that I reviewed 4 years ago) [4] has obvious educational benefits. But a phone with Internet access (either 3G or Wifi) gives Wikipedia access including all pictures and is a better fit for most pockets.

There are lots of educational web sites and random web sites that can be used for education (Googling the answer to random questions).

When it comes to preparing kids for “the real world” or “the work environment” people often claim that kids need to use Microsoft software because most companies do (regardless of the fact that most companies will be using radically different versions of MS software by the time current school kids graduate from university). In my typical work environment I’m expected to be able to find the answer to all sorts of random work-related questions at any time and I think that many careers have similar expectations. Being able to quickly look things up on a phone is a real work skill, and a skill that’s going to last a lot longer than knowing today’s version of MS-Office.

There are a variety of apps for tracking phones. There are non-creepy ways of using such apps for monitoring kids. Also with two-way monitoring kids will know when their parents are about to collect them from an event and can stay inside until their parents are in the area. This combined with the phone/SMS functionality that is available on feature-phones provides some benefits for child safety.

iOS vs Android

Rumour has it that iOS is better than Android for kids diagnosed with Low Functioning Autism. There are apparently apps that help non-verbal kids communicate with icons and for arranging schedules for kids who have difficulty with changes to plans. I don’t know anyone who has a LFA child so I haven’t had any reason to investigate such things. Anyone can visit an Apple store and a Samsung Experience store as they have phones and tablets you can use to test out the apps (at least the ones with free versions). As an aside the money the Australian government provides to assist Autistic children can be used to purchase a phone or tablet if a registered therapist signs a document declaring that it has a therapeutic benefit.

I think that Android devices are generally better for educational purposes than iOS devices because Android is a less restrictive platform. On an Android device you can install apps downloaded from a web site or from a 3rd party app download service. Even if you stick to the Google Play store there’s a wider range of apps to choose from because Google is apparently less restrictive.

Android devices usually allow installation of a replacement OS. The Nexus devices are always unlocked and have a wide range of alternate OS images, and the other commonly used devices can usually have an alternate OS installed. This allows kids who have the interest and technical skill to extensively customise their device and learn all about its operation. iOS devices are designed to be sealed against the user. Admittedly there probably aren’t many kids with the skill and desire to replace the OS on their phone, but I think it’s good to have the option.

Android phones have a range of sizes and features while Apple only makes a few devices at any time and there’s usually only a couple of different phones on sale. iPhones are also a lot smaller than most Android phones, according to my previous estimates of hand size the iPhone 5 would be a good tablet for a 3yo or good for side-grasp phone use for a 10yo [5]. The main benefits of a phone are for things other than making phone calls so generally the biggest phone that will fit in a pocket is the best choice. The tiny iPhones don’t seem very suitable.

Also buying one of each is a viable option.

Conclusion

I think that mobile phone ownership is good for almost all kids even from a very young age (there are many reports of kids learning to use phones and tablets before they learn to read). There are no real down-sides that I can find.

I think that Android devices are generally a better option than iOS devices. But in the case of special needs kids there may be advantages to iOS.

June 22, 2015

sswam

I learned a useful trick with the bash shell today.

We can use printf "%q " to escape arguments to pass to the shell.

This can be useful in combination with ssh, in case you want to pass arguments containing shell special characters or spaces. It can also be used with su -c, and sh -c.

The following will run a command, with its arguments preserved exactly, on a remote server:

sshc() {
        remote=$1 ; shift
        ssh "$remote" "`printf "%q " "$@"`"
}

Example:

sshc user@server touch "a test file" "another file"
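
The same trick carries over to su -c and sh -c, which also take a single command string. A sketch along the same lines (the function name is my own):

suc() {
        user=$1 ; shift
        su "$user" -c "`printf "%q " "$@"`"
}

Example:

suc root touch "a test file" "another file"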


June 21, 2015

Twitter posts: 2015-06-15 to 2015-06-21

June 20, 2015

Yet another possible cub walk

Jacqui and Catherine kindly agreed to come on another test walk for a possible cub walk. This one was the Sanctuary Loop at Tidbinbilla. To be honest this wasn't a great choice for cubs -- whilst being scenic and generally pleasant, the heavy use of black top paths and walkways made it feel like a walk in the Botanic Gardens, and the heavy fencing made it feel like an exhibit at a zoo. I'm sure it's great for a weekend walk or for tourists, but if you're trying to have a cub adventure it's not great.



                                       



See more thumbnails



Interactive map for this route.



Tags for this post: blog pictures 20150620-tidbinbilla photo canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches



Comment

BTRFS Status June 2015

The version of btrfs-tools in Debian/Jessie is incapable of creating a filesystem that can be mounted by the kernel in Debian/Wheezy. If you want to use a BTRFS filesystem on Jessie and Wheezy (which isn’t uncommon with removable devices) the only options are to use the Wheezy version of mkfs.btrfs or to use a Jessie kernel on Wheezy. I recently got bitten by this issue when I created a BTRFS filesystem on a removable device with a lot of important data (which is why I wanted metadata duplication and checksums) and had to read it on a server running Wheezy. Fortunately KVM in Wheezy works really well so I created a virtual machine to read the disk. Setting up a new KVM isn’t that difficult, but it’s not something I want to do while a client is anxiously waiting for their data.
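
For reference, the throwaway VM can be as simple as booting an existing Jessie guest with the removable disk passed straight through; the device path and guest image name below are assumptions:

# boot a Jessie guest with the removable BTRFS disk passed through
qemu-system-x86_64 -enable-kvm -m 2048 \
  -drive file=jessie.qcow2,format=qcow2 \
  -drive file=/dev/sdc,format=raw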

BTRFS has been working well for me apart from the Jessie/Wheezy compatibility issue (which was an annoyance but didn’t stop me doing what I wanted). I haven’t written a BTRFS status report for a while because everything has been OK and there has been nothing exciting to report.

I regularly get errors from the cron jobs that run a balance, claiming that the filesystem has run out of free space. I have the cron jobs due to past problems with BTRFS running out of metadata space. In spite of the jobs often failing, the systems keep working, so I’m not too worried at the moment. I think this is a bug, but there are many more important bugs.

Linux kernel version 3.19 was the first version to have working support for RAID-5 recovery. This means version 3.19 was the first version to have usable RAID-5 (I think there is no point even having RAID-5 without recovery). It wouldn’t be prudent to trust your important data to a new feature in a filesystem. So at this stage if I needed a very large scratch space then BTRFS RAID-5 might be a viable option, but for anything else I wouldn’t use it. BTRFS has still had little performance optimisation; while this doesn’t matter much for SSDs or single-disk filesystems, for a RAID-5 of hard drives it would probably hurt a lot. Maybe BTRFS RAID-5 would be good for a scratch array of SSDs. The reports of problems with RAID-5 don’t surprise me at all.

I have a BTRFS RAID-1 filesystem on 2*4TB disks which is giving poor performance on metadata; simple operations like “ls -l” on a directory with ~200 subdirectories take many seconds to run. I suspect that part of the problem is due to the filesystem being written by cron jobs with files accumulating over more than a year. The “btrfs filesystem” command (see btrfs-filesystem(8)) allows defragmenting files and directory trees, but unfortunately it doesn’t support recursively defragmenting directory metadata without also defragmenting the files. I really wish there was a way to get BTRFS to put all metadata on SSD and all data on hard drives. Sander suggested the following command to defragment directories on the BTRFS mailing list:

find / -xdev -type d -execdir btrfs filesystem defrag -c {} +

Below is the output of “zfs list -t snapshot” on a server I run; it’s often handy to know how much space is used by snapshots, but unfortunately BTRFS has no support for this.

NAME                       USED  AVAIL  REFER  MOUNTPOINT
hetz0/be0-mail@2015-03-10  2.88G     -   387G  -
hetz0/be0-mail@2015-03-11  1.12G     -   388G  -
hetz0/be0-mail@2015-03-12  1.11G     -   388G  -
hetz0/be0-mail@2015-03-13  1.19G     -   388G  -

Hugo pointed out on the BTRFS mailing list that the following command will give the amount of space used for snapshots. $SNAPSHOT is the name of a snapshot and $LASTGEN is the generation number of the previous snapshot you want to compare with.

btrfs subvolume find-new $SNAPSHOT $LASTGEN | awk '{total = total + $7}END{print total}'
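Wrapped up as a small helper function (just a sketch; field 7 is the byte count in the find-new output, as used above):

snapshot_usage() {
        # $1 = snapshot path, $2 = generation number of the previous snapshot
        btrfs subvolume find-new "$1" "$2" | awk '{total = total + $7} END {print total " bytes"}'
}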

One upside of the BTRFS implementation in this regard is that the above btrfs command without being piped through awk shows you the names of files that are being written and the amounts of data written to them. Through casually examining this output I discovered that the most written files in my home directory were under the “.cache” directory (which wasn’t exactly a surprise).

Now I am configuring workstations with a separate subvolume for ~/.cache for the main user. This means that ~/.cache changes don’t get stored in the hourly snapshots and less disk space is used for snapshots.
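A minimal sketch of that setup (the path and user name are illustrative; run as root while the user is logged out):

mv /home/user/.cache /home/user/.cache.old         # keep the existing contents
btrfs subvolume create /home/user/.cache           # nested subvolumes are skipped by snapshots of the parent
cp -a /home/user/.cache.old/. /home/user/.cache/   # restore the contents
chown user:user /home/user/.cache
rm -rf /home/user/.cache.old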

Conclusion

My observation is that things are going quite well with BTRFS. It’s more than 6 months since I had a noteworthy problem, which is pretty good for a filesystem that’s still under active development. But there are still many systems I run which could benefit from the data integrity features of ZFS and BTRFS, yet don’t have the resources to run ZFS and need more reliability than I can expect from an unattended BTRFS system.

At this time the only servers I run with BTRFS are located within a reasonable drive from my home (not the servers in Germany and the US) and are easily accessible (not the embedded systems). ZFS is working well for some of the servers in Germany. Eventually I’ll probably run ZFS on all the hosted servers in Germany and the US, I expect that will happen before I’m comfortable running BTRFS on such systems. For the embedded systems I will just take the risk of data loss/corruption for the next few years.

June 19, 2015

Mining on a Home DSL connection: latency for 1MB and 8MB blocks

I like data.  So when Patrick Strateman handed me a hacky patch for a new testnet with a 100MB block limit, I went to get some.  I added 7 digital ocean nodes, another hacky patch to prevent sendrawtransaction from broadcasting, and a quick utility to create massive chains of transactions.

My home DSL connection is 11Mbit down, and 1Mbit up; that’s the fastest I can get here.  I was CPU mining on my laptop for this test, while running tcpdump to capture network traffic for analysis.  I didn’t measure the time taken to process the blocks on the receiving nodes, just the first propagation step.

1 Megabyte Block

Naively, it should take about 10 seconds to send a 1MB block up my DSL line from first packet to last: 1MB is 8Mbit, so a bit over 8 seconds at 1Mbit up, plus protocol overhead.  Here’s what actually happens, in seconds for each node:

  1. 66.8
  2. 70.4
  3. 71.8
  4. 71.9
  5. 73.8
  6. 75.1
  7. 75.9
  8. 76.4

The packet dump shows they’re all pretty much sprayed out simultaneously (bitcoind may do the writes in order, but the network stack interleaves them pretty well), so sending to all 8 peers at once means pushing 8MB up the link.  That’s why it’s 67 seconds at best before the first node receives my block (a bit longer, since that’s when the packet left my laptop).

8 Megabyte Block

I increased my block size, and one node dropped out, so this isn’t quite the same, but the times to send to each node are about 8 times worse, as expected:

  1. 501.7
  2. 524.1
  3. 536.9
  4. 537.6
  5. 538.6
  6. 544.4
  7. 546.7

Conclusion

Using the rough formula of 1-exp(-t/600), I would expect orphan rates of 10.5% generating 1MB blocks, and 56.6% with 8MB blocks; that’s a huge cut in expected profits.
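The arithmetic is easy to check from the shell with bc (a quick sketch; 600 is the average number of seconds between blocks, and the times are the best-case propagation figures above):

orphan_rate() { echo "1 - e(-$1 / 600)" | bc -l; }   # e() is bc's exponential function
orphan_rate 66.8    # ~0.105, i.e. 10.5% for 1MB blocks
orphan_rate 501.7   # ~0.567, i.e. 56.6% for 8MB blocks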

Workarounds

  • Get a faster DSL connection.  Though even an uplink 10 times faster would mean 1.1% orphan rate with 1MB blocks, or 8% with 8MB blocks.
  • Only connect to a single well-connected peer (-maxconnections=1), and hope they propagate your block.
  • Refuse to mine any transactions, and just collect the block reward.  Doesn’t help the bitcoin network at all though.
  • Join a large pool.  This is what happens in practice, but raises a significant centralization problem.

Fixes

  • We need bitcoind to be smarter about ratelimiting in these situations, and stream serially.  Done correctly (which is hard), it could also help with the bufferbloat that makes running a full node at home so painful when it propagates blocks.
  • Some kind of block compression, along the lines of Gavin’s IBLT idea. I’ve done some preliminary work on this, and it’s promising, but far from trivial.

 

June 18, 2015

Further adventures in the Jerrabomberra wetlands

There was another walk option for cubs I wanted to explore at the wetlands, so I went back during lunch time yesterday. It was raining quite heavily during this walk, but I still had fun. I think this route might be the winner -- it's a bit longer, and a bit more interesting as well.






See more thumbnails



Interactive map for this route.



Tags for this post: blog pictures 20150618-jerrabomberra_wetlands photo canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches



Comment

June 17, 2015

Exploring possible cub walks

I've been exploring possible cub walks for a little while now, and decided that Jerrabomberra Wetlands might be an option. Most of these photos will seem a bit odd to readers, unless you realize I'm mostly interested in the terrain and its suitability for cubs...






Interactive map for this route.



Tags for this post: blog pictures 20150617-jerrabomerra_wetlands photo canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches



Comment

June 16, 2015

Abide the Slide

The holonomic drive robot takes its first rolls! This is what you get when you contort a 3D printer into a cross format and attach funky wheels. Quite literally, as the control board is an Arduino Mega board with an ATmega2560 MCU and a RAMPS 1.4 stepper controller board plugged into it. The show is controlled over an rf24 link from a hand-made controller. Yes folks, a regression to teleoperation for now. I'll have to throw the thing onto scales later; the steppers themselves add considerable weight to the project, but there doesn't seem to be much problem moving the thing around under its own power.







The battery is a little under-specced: it supplies enough current and doesn't get hot after operation, but the overall capacity is low, so the show is over fairly quickly. A problem that is easily solved by throwing more dollars at the battery. The next phase is to get better mechanical stability by tweaking things, and to change the software to account for the fact that one wheel axis is longer than the other. From there some sensor feedback (IMU) and a fly-by-wire mode will be on the cards.







This might end up going into ROS land too, encapsulating the whole current setup into being a "robot base controller" and using other hardware above to run sensors, navigation, and decision logic.



OPAL firmware specification, conformance and documentation

Now that we have an increasing number of things that run on top of OPAL:

  1. Linux
  2. hello_world (in skiboot tree)
  3. ppc64le_hello (as I wrote about yesterday)
  4. FreeBSD

and that the OpenPower ecosystem is rapidly growing (especially around people building OpenPower machines), the need for more formal specification, conformance testing and documentation for OPAL is increasing rapidly.

If you had looked at the documentation in the skiboot tree late last year, you’d have noticed a grand total of seven text files. Now we’re doing a lot better (although far from complete).

I’m proud to say that I won’t merge new code that adds/modifies an OPAL API call or anything in the device tree that doesn’t come with accompanying documentation, and this has meant that although it may not be perfect, we have something that is a decent starting point.

We’re in the interesting situation of starting with a working system: mainline Linux kernels have been able to be booted by skiboot and run on powernv hardware for over a year now (maybe even 18 months), though the more modern the kernel the better.

So…. if anyone loves going through deeply technical documentation… do I have a project you can contribute to!

June 15, 2015

On Removal of Citizenship – Short Cuts | London Review of Books

PyCon Australia 2015 Early Bird Registrations Now Open!

We are delighted to announce that online registration is now open for PyCon Australia 2015. The sixth PyCon Australia is being held in Brisbane, Queensland from July 31st – 4th August at the Pullman Brisbane and is expected to draw hundreds of Python developers, enthusiasts and students from Australasia and afar.

Starting today, early bird offers are up for grabs. To take advantage of these discounted ticket rates, be among the first 100 to register. Early bird registration starts from $50 for full-time students, $180 for enthusiasts and $460 for professionals. Offers this good won’t last long, so head straight to http://2015.pycon-au.org and register right away.

PyCon Australia has endeavoured to keep tickets as affordable as possible. We are able to do so, thanks to our Sponsors and Contributors.

We have also worked out favourable deals with accommodation providers for PyCon delegates. Find out more about the options at http://2015.pycon-au.org/register/accommodation

To begin the registration process, and find out more about each level of ticket, visit http://2015.pycon-au.org/register/prices

Important Dates to Help You Plan

June 8: Early Bird Registration Opens — open to the first 100 tickets

June 29: Financial Assistance program closes.

July 8: Last day to Order PyCon Australia 2015 T-shirts

July 19: Last day to Advise Special Dietary Requirements

July 31: PyCon Australia 2015 Begins

About PyCon Australia

PyCon Australia is the national conference for the Python Programming Community. The sixth PyCon Australia will be held on July 31 through August 4th, 2015 in Brisbane, bringing together professional, student and enthusiast developers with a love for developing with Python. PyCon Australia informs the country’s Python developers with presentations, tutorials and panel sessions by experts and core developers of Python, as well as the libraries and frameworks that they rely on.

To find out more about PyCon Australia 2015, visit our website at http://pycon-au.org or e-mail us at contact@pycon-au.org.

PyCon Australia is presented by Linux Australia (www.linux.org.au) and acknowledges the support of our Platinum Sponsors, Red Hat Asia-Pacific, and Netbox Blue; and our Gold sponsors, The Australian Signals Directorate and Google Australia. For full details of our sponsors, see our website.

June 14, 2015

FreeBSD on OpenPower

There’s been some work on porting FreeBSD over to run natively on top of OPAL, that is, on bare metal OpenPower machines (not just under KVM).

This is one of four possible things to run natively on an OPAL system:

  1. Linux
  2. hello_world (in skiboot tree)
  3. ppc64le_hello (as I wrote about yesterday)
  4. FreeBSD

It’s great to see that another fully featured OS is getting ported to POWER8 and OPAL. It’s not yet at a stage where you could say it was finished or anything (PCI support is pretty preliminary for example, and fancy things like disks and networking live on PCI).

Twitter posts: 2015-06-08 to 2015-06-14

hello world as ppc64le OPAL payload!

While the in-tree hello-world kernel (originally by me, and Mikey managed to CUT THE BLOAT of a whole SEVENTEEN instructions down to a tiny ten) is very, very dumb (and does one thing, print “Hello World” to the console), there’s now an alternative for those who like to play with a more feature-rich Hello World rather than booting a more “real” OS such as Linux. In case you’re wondering, we use the hello world kernel as a tiny test that we haven’t completely and utterly broken things when merging/developing code.

https://github.com/andreiw/ppc64le_hello is a wonderful example of a small (INTERACTIVE!) starting point for a PowerNV (as it’s called in Linux) or “bare metal” (i.e. non-virtualised) OS on POWER.

What’s more impressive is that this was all developed using the simulator rather than real hardware (although I think somebody has tried it on real hardware now).

Kind of neat!

June 13, 2015

The Value of Money - Part 2

This is obviously a continuation from my last post.


No one wants to live from day to day, week to week, and for the most part you don't have that when you have a salaried job. You regularly receive a lump sum each fortnight or month from which you draw down to pay for life's expenses.



Over time you actually discover it's an illusion though. A former teacher of mine once said that a salary of about 70-80K wasn't all that much. To kids that seemed like a lot of money though. Now it actually makes a lot more sense. Factor in tax, life expenses, rent, etc... and most of it dries up very quickly.



When you head to business or law school it's the same thing. You regularly deal with millions, billions, and generally gratuitous amounts of money. This doesn't change all that much when you head out into the real world. The real world creates a perception whereby consumption and possession of certain material goods are almost a necessity in order to live and work comfortably within your profession. Ultimately, this means that no matter how much you earn it still doesn't seem like it's enough.



The greatest irony of this is that you only really discover that the perception of the value of such (gratuitous) goods changes drastically if you are on your own or you are building a company.



I semi-regularly receive offers of business/job opportunities through this blog and other avenues (scams as well as real offers. Thankfully, most of the 'fishy ones' are picked up by SPAM filters). The irony is this. I know that no matter how much money is thrown at a business there is still no guarantee of success and a lot of the time savings can dry up in a very short space of time (especially if it is a 'standard business'. Namely, one that doesn't have a crazy level of growth ('real growth' not anticipated or 'projected growth')).



This is particularly the case if specialist 'consultants' (they can charge you a lot of money for what seems like obvious advice) need to be brought in. The thing I'm seeing is that basically a lot of what we sell one another is 'mumbo jumbo'. Stuff that we generally don't need but ultimately convince one another of in order to make a living and perhaps even allow us to do something we enjoy.



What complicates this further is that no matter how much terminology and theory we throw at something, ultimately most people don't value things the same way. A good example of this is asking random people what a used 160GB iPod Classic is worth. I remember questioning the value (200) a salesman put on one. He justified the store price by stating that people were selling them for 600-700 on eBay. A struggling student would likely value it at closer to 150. A person in hospitality valued it at 240. The average, knowledgeable community member would most likely remember the highest of these values though.



Shift this back into the workplace and things become even more complicated. Think about the 'perception' of your profession. A short while back I met a sound engineer who made a decent salary (around 80K) but had to work 18 hour days continuously based on his description. His quality of life was ultimately shot and his wage should have obviously been much higher. His perceived value was 80K. His effective value was much lower.



Think about 'perception' once more. Some doctors/specialists who migrate but have the skills to practice but not the money to purchase insurance, re-take certification exams, etc... become taxi drivers in their new country. Their effective value (as a worker) becomes that of a taxi driver, nothing more.



Many skilled professions actually require extended periods of study/training, an apprenticeship of some form, a huge amount of hours put in, or just time spent trying to market your skills. A good chunk of people may end up making a lot of money but most don't. Perceived value is the end salary but actual value is much lower.


Think about 'perception' in IT. In some companies they look down upon you if you work in this particular area. What's interesting is what they use you for. They basically shove more menial tasks downwards into the IT department because 'nobody else wants to do it'. The perceived value of the worker in question doesn't seem much different from that of a labourer.



The irony is that they're often just as well qualified as anybody in the firm in question, and the work can often be varied enough to make you wonder what the actual value of an average IT worker is. I've been trying to do the calculations. The average IT graduate is worth about 55K.

http://www.abs.gov.au/ausstats/abs@.nsf/Lookup/4125.0main+features2320Jan%202013

http://www.payscale.com/research/AU/Job=Graduate_Software_Engineer/Salary

http://www.graduatecareers.com.au/research/researchreports/graduatesalaries/



Assuming he works at an SME (any industry, not just IT) he'll be doing a lot of varied tasks (a lot of firms will tend to pigeonhole you into becoming a specialist). At a lot of service providers and SME firms I've looked at, one hour of down time equates to about five figures. If you work in the right firm or you end up really good at your job you end up saving your firm somewhere between 5-7 figures each year. At much larger firms this figure is closer to 6-8 figures each year.



At a lot of firms we suffer from hardware failure. The standard procedure is to simply purchase new hardware to deal with the problem (it's quicker, and technically free despite the possible downtime during diagnosis and response). The thing I've found out is that if you are actually able to repair/re-design the hardware itself you can actually save/make a lot (particularly with telecommunications and network hardware). This is especially the case if the original design cut corners. Once again savings are similar to the previous point.



In an average firm there may be a perception that IT is simply there to support the function of a business. It's almost like a utility now (think electricity, water, gas, etc... That's how low some companies perceive technology. They perceive it to be a mere cost rather than something that can benefit their business). What a lot of people neglect is how much progress can be made given the use of appropriate technology. Savings/productivity gains are similar to the previous points.



What stops us from realising exactly what our value is, is the siloed nature of the modern business world (specialists rather than generalists a lot of the time) and the fact that various laws, regulations, and so on are designed to stop us from being exploited.



The only way you actually realise what you're worth is if you work as an individual or start a company.



Go ahead, break down what you actually do in your day. You'll be surprised at how much you may actually be worth.



What you ultimately find out though is that (if you're not lazy) you're probably underpaid. The irony is that if the company were to pay you exactly what you were worth they would go bankrupt. Moreover, you only realistically have a small number of chances/opportunities to demonstrate your true worth. A lot of the time jobs are conducted on the basis of intermittency. Namely, you're there to do something specialised and difficult every once in a while, not necessarily all the time.



It would be a really interesting world if we didn't have company structures/businesses. I keep finding out over and over again that you simply get paid more for more skills as an individual. This is especially the case if there is no artificial barrier between you and getting the job done. The work mightn't be stable, but once you deal with that you have a very different perspective of the world, even if it's only a part time job.



If you have some talent, I'd suggest you try starting your own company or work as an individual at some point in your life. The obvious problem will be coming up with an idea which will create money though. Don't worry about it. You will find opportunities along the way as you gain more life experience and understand where value comes from. At that point, start doing the numbers and do a few tests to see whether your business instincts are correct. You may be surprised at what you end up finding out.

http://forums.whirlpool.net.au/archive/1505450



Here are other things I've worked out:

  • if you need a massive and complex business plan in order to justify your business's existence (particularly to investors) then you should rethink your business
  • if you need to 'spin things' or else have a bloated marketing department then there's likely nothing much special about the product or service that you are selling
  • if your business is fairly complex at a small level, think about what it will be like when it scales up. Try to remove as many obstacles as you can while your company is still young to ensure future success if unexpected growth comes your way
  • if you narrow yourself to one particular field you can limit your opportunities. In the normal world it can lead to stagnation (no real change in salary/value) or specialisation (guaranteed increase in salary/value), though neither is a given. In smaller companies multiple roles may be critical to the survival/profitability of that particular company. The obvious risk is that if they leave you're trying to fill in for multiple roles
  • a lot of goods and services exist in a one to one relationship. You can only sell it once and you have to maximise the profit on that. Through the use of broadcast style technologies we can achieve one to many relationships allowing us to build substantial wealth easily and quickly. This makes valuation of technology companies much more difficult. However, once you factor in overheads and risk of success versus failure things tend to normalise
  • perception means a lot. Think about a pair of Nike runners versus standard supermarket branded ones. There is sometimes very little difference in quality though the price of the Nike runners may be double. The same goes for some of the major fashion labels. They are sometimes produced en masse in cheap Asian/African countries
  • if there are individuals and companies offering the opportunity to engage in solid business ventures, take them. Your perspective on life and lifestyle will change drastically if things turn out successfully
  • in reality, there are very few businesses where you can genuinely say the future looks bright for all of eternity. This is the same across every single sector
  • make friends with everyone. You'll be surprised at what you can learn and what opportunities you may be able to find
  • the meaning of 'market value' largely dissolves into nothingness in the real world. Managing perception accounts for a good deal of what you can charge for something
  • just like investments the value of a good or service will normalise over time. You need volatility (this can be achieved via any means) to be able to make abnormal profits though
  • for companies where goods and services have high overheads 7-8 figures a week/month/year can mean nothing. If the overheads are high enough it's possible that the company may go under in a very short space of time. Find something which doesn't have such overheads and focus in on that, whether it be a primary or side business
  • the more you know the better off you'll be if you're willing to take calculated risks, are patient, and persevere. Most of the time things will normalise
  • in general, the community perception is that making more with high expenses is more successful than making less with no expenses
  • comments from people like Joe Hockey make a lot of sense to those who have had a relatively privileged background but they also go to the core of the matter. There are a lot of impediments in life now. I once recall walking past an Aboriginal man who was begging. A white middle-upper class man simply admonished him to get a job. If you've ever worked with people like that, or you've ever factored in his background, you'll realise that this is almost impossible. Everybody has a go at people who work within the 'cash economy' and do not contribute to the tax base of the country, but it's easy to understand a lot of why people do it. There are a lot of impediments in life despite whatever anyone says, whether you're working at the top or bottom end of the scale
http://forums.whirlpool.net.au/archive/1937638

http://www.abc.net.au/news/2015-06-10/janda-its-not-hockeys-job-comment-that-should-worry-us/6535484

http://www.smh.com.au/comment/smh-letters/joe-hockey-doesnt-grasp-simple-economics-20150610-ghkl9v.html

http://www.bbc.co.uk/news/education-33109052

  • throw in some weirdness like strange pay for seemingly unskilled jobs and everything looks bizarre. A good example of this is a nightfill worker (stock stacker) at a supermarket in Australia. He can actually earn a lot more than those in skilled professions. It's not just about skills or knowledge when it comes to earning a high wage 
http://forums.whirlpool.net.au/archive/2219972

http://forums.whirlpool.net.au/archive/1937638
  • there are a lot of overqualified people out there (but there are a hell of a lot more underqualified people out there as well. I've worked both sides of the equation). If you are lucky someone will give you a chance at something appropriate to your level but a lot of the time you'll just have to make do
  • you may be shocked at how, who, and what makes money and vice-versa (how, who, and what doesn't make money). For instance, something which you can get for free you can sell while some products/services which have had a lot of effort put into them may not get any sales
https://www.ozbargain.com.au/node/197991

  • there are very few companies that you could genuinely say are 100% technology orientated. Even in companies that are supposedly technology orientated there are still political issues that you must deal with
  • by using certain mechanisms you can stop resales of your products/services, which can force purchases only from known avenues. This is a common strategy in the music industry with MIDI controllers, and stops erosion/cannibalisation of sales of new product through minimisation of sales of used products
  • it's easy to be impressed by people who are simply quoting numbers. Do your research. People commonly quote high growth figures but in reality most aren't as impressive as they seem. They seem even less impressive when you factor in inflation, Quantitative Easing programs, etc... In a lot of cases companies/industries (even many countries if you think about it) would actually be at a standstill or else going backwards.
http://www.inc.com/sageworks/the-15-most-profitable-industries-for-private-companies.html

https://biz.yahoo.com/p/sum_qpmd.html

http://www.forbes.com/sites/sageworks/2013/04/28/the-most-profitable-businesses-to-start/

Feeds I follow: Citylab, Commitstrip, MKBHD, Offsetting Behaviour

I thought I’d list the feeds/blogs/sites I currently follow. Mostly I do this via RSS using Newsblur.


June 12, 2015

Logical Volume Management with Debian on Amazon EC2

The recent AWS introduction of the Elastic File System gives you an automatic grow-and-shrink capability as an NFS mount, an exciting option that takes away the previous overhead in creating shared block file systems for EC2 instances.

However it should be noted that the same auto-management of capacity is not true of the EC2 instance’s Elastic Block Store (EBS) block storage disks; sizing (and resizing) is left to the customer. As at June 2015, one cannot simply increase the size of an EBS Volume as the storage becomes full: an EBS volume, once created, has a fixed size. For many applications, that lack of resize function on its local EBS disks is not a problem; many server instances come into existence for a brief period, process some data and then get Terminated, so long term management is not needed.

However, for a long term data store on an instance (instead of S3, which I would recommend looking closely at for its durability and pricing), where I want to harness the capacity to grow (or shrink) the disk for my data, I will need to leverage some slightly more advanced disk management. And just to make life interesting, I wish to do all this while the data is live and in-use, if possible.

Enter: Logical Volume Management, or LVM. It’s been around for a long, long time: LVM 2 made a debut around 2002-2003 (2.00.09 was Mar 2004) — and LVM 1 was many years before that — so it’s pretty mature now. It’s a powerful layer that sits between your raw storage block devices (as seen by the operating system), and the partitions and file systems you would normally put on them.

In this post, I’ll walk through the process of getting set up with LVM on Debian in the AWS EC2 environment, and how you’d do some basic maintenance to add and remove (where possible) storage with minimal interruption.

Getting Started

First a little prep work for a new Debian instance with LVM.

As I’d like to give the instance its own ability to manage its storage, I’ll want to provision an IAM Role for EC2 Instances for this host. In the AWS console, visit IAM, Roles, and I’ll create a new Role I’ll name EC2-MyServer (or similar), and at this point I’ll skip giving it any actual privileges (later we’ll update this). As at this date, we can only associate an instance role/profile at instance launch time.

Now I launch a base image Debian EC2 instance launched with this IAM Role/Profile; the root file system is an EBS Volume. I am going to put data that I’ll be managing on a separate disk from the root file system.

First, I need to get the LVM utilities installed. It’s a simple package to install: the lvm2 package. From my EC2 instance I need to get root privileges (sudo -i) and run:

apt update && apt install lvm2

After a few moments, the package is installed. I’ll choose a location that I want my data to live in, such as /opt/.  I want a separate disk for this task for a number of reasons:

  1. Root EBS volumes cannot currently be encrypted using Amazon’s Encrypted EBS Volumes. If I want to also use AWS’ encryption option, it’ll have to be on a non-root disk. Note that instance-size restrictions also exist for EBS Encrypted Volumes.
  2. It’s possibly not worth making a snapshot of the Operating System at the same time as the user content data I am saving. The OS install (except the /etc/ folder) can almost entirely be recreated from a fresh install, so why snapshot that as well (unless that’s your strategy for preserving /etc, /home, etc).
  3. The type of EBS volume that you require may be different for different data: today (Apr 2015) there is a choice of Magnetic, General Purpose 2 (GP2) SSD, and Provisioned IO/s (PIOPS) SSD, each with different costs; and depending on our workload, we may want to select one for our root volume (operating system), and something else for our data storage.
  4. I may want to use EBS snapshots to clone the disk to another host, without the base OS bundled in with the data I am cloning.

I will create this extra volume in the AWS console and present it to this host. I’ll start by using a web browser (we’ll use CLI later) with the EC2 console.

The first piece of information we need to know is where my EC2 instance is running. Specifically, the AWS Region and Availability Zone (AZ). EBS Volumes only exist within the one designated AZ. If I accidentally make the volume(s) in the wrong AZ, then I won’t be able to connect them to my instance. It’s not a huge issue, as I would just delete the volume and try again.

I navigate to the “Instances” panel of the EC2 Console, and find my instance in the list:

EC2 instance list

A (redacted) list of instances from the EC2 console.

Here I can see I have located an instance and it’s running in US-East-1A: that’s AZ A in Region US-East-1. I can also grab this with a wget from my running Debian instance by asking the MetaData server:

wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone

The returned text is simply: “us-east-1a”.

Time to navigate to “Elastic Block Store“, choose “Volumes” and click “Create“:

Creating a volume in AWS EC2: ensure the AZ is the same as your instance

You’ll see I selected that I wanted AWS to encrypt this volume; as noted above, at this time Encrypted EBS doesn’t include the t2 family. However, you have the option of using encryption with LVM – where the customer looks after the encryption key – see LUKS.

What’s nice is that I can do both — have AWS Encrypted Volumes, and then use LUKS encryption on top of this; but I have to manage my own LUKS keys, and should I lose them, all I’ll have left is the ciphertext!

I deselected this for my example (with a t2.micro) and continued; I could see the new volume in the list as “creating”, and then shortly afterwards as “available”. Time to attach it: select the disk, and either right-click and choose “Attach”, or from the menu at the top of the list, choose “Actions” -> “Attach” (both do the same thing).

Attach volume

Attaching a volume to an instance: you’ll be prompted for the compatible instances in the same AZ.

At this point your EC2 instance will notice a new disk; you can confirm this with “dmesg |tail“, and you’ll see something like:

[1994151.231815]  xvdg: unknown partition table

(Note the time-stamp in square brackets will be different).

Previously at this juncture you would format the entire disk with your favourite file system, mount it in the desired location, and be done. But we’re adding in LVM here – between this “raw” device, and the filesystem we are yet to make….

Marking the block device for LVM

Our first operation with LVM is to put a marker on the volume to indicate it’s being used for LVM – so that when we scan the block device, we know what it’s for. It’s a really simple command:

pvcreate /dev/xvdg

The device name above (/dev/xvdg) should correspond to the one we saw from the dmesg output above. The output of the above is rather straightforward:

  Physical volume "/dev/xvdg" successfully created

Checking our EBS Volume

We can check on the EBS volume – which LVM sees as a Physical Volume – using the “pvs” command.

# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/xvdg       lvm2 ---  5.00g 5.00g

Here we see the entire disk is currently unused.

Creating our First Volume Group

Next step, we need to make an initial LVM Volume Group which will use our Physical volume (xvdg). The Volume Group will then contain one (or more) Logical Volumes that we’ll format and use. Again, a simple command to create a volume group by giving it its first physical device that it will use:

# vgcreate  OptVG /dev/xvdg
  Volume group "OptVG" successfully created

And likewise we can check our set of Volume Groups with “vgs”:

# vgs
  VG    #PV #LV #SN Attr   VSize VFree
  OptVG   1   0   0 wz--n- 5.00g 5.00g

The Attribute flags here indicate this is writable, resizable, and allocating extents in “normal” mode. Let’s proceed to make our (first) Logical Volume in this Volume Group:

# lvcreate -n OptLV -L 4.9G OptVG
  Rounding up size to full physical extent 4.90 GiB
  Logical volume "OptLV" created

You’ll note that I have created our Logical Volume as almost the same size as the entire Volume Group (which is currently one disk) but I left some space unused: the reason for this comes down to keeping some space available for any jobs that LVM may want to use on the disk – and this will be used later when we want to move data between raw disk devices.

If I wanted to use LVM for Snapshots, then I’d want to leave more space free (unallocated) again.
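For reference, such a snapshot allocates from the free space in the Volume Group as the origin changes; a sketch with an illustrative size:

lvcreate -s -n OptLV-snap -L 500M /dev/OptVG/OptLV   # copy-on-write snapshot of OptLV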

We can check on our Logical Volume:

# lvs
  LV    VG    Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  OptLV OptVG -wi-a----- 4.90g

The attributes indicate that the Logical Volume is writeable, is allocating its data to the disk in inherit mode (ie, as the Volume Group is doing), and that it is active. At this stage you may also discover we have a device /dev/OptVG/OptLV, and this is what we’re going to format and mount. But before we do, we should review what file system we’ll use.


Filesystems

Popular Linux file systems

Name    Shrink      Grow  Journal  Max File Sz  Max Vol Sz
btrfs   Y           Y     N        16 EB        16 EB
ext3    Y off-line  Y     Y        2 TB         32 TB
ext4    Y off-line  Y     Y        16 TB        1 EB
xfs     N           Y     Y        8 EB         8 EB
zfs*    N           Y     Y        16 EB        256 ZB

For more details see the Wikipedia comparison. Note that ZFS requires a 3rd party kernel module or a FUSE layer, so I’ll discount that here. BTRFS only went stable with Linux kernel 3.10, so with Debian Jessie that’s a possibility; but for tried and trusted, I’ll use ext4.

The selection of ext4 also means that I’ll only be able to shrink this file system off-line (unmounted).
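For completeness, an off-line shrink would look roughly like this (a sketch only; the filesystem must be shrunk before the Logical Volume, never the reverse, and backups are strongly advised):

umount /opt
e2fsck -f /dev/OptVG/OptLV        # resize2fs insists on a freshly checked filesystem
resize2fs /dev/OptVG/OptLV 4G     # shrink the filesystem first...
lvreduce -L 4G /dev/OptVG/OptLV   # ...then shrink the Logical Volume to match
mount /opt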

I’ll make the filesystem:

# mkfs.ext4 /dev/OptVG/OptLV
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 1285120 4k blocks and 321280 inodes
Filesystem UUID: 4f831d17-2b80-495f-8113-580bd74389dd
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

And now mount this volume and check it out:

# mount /dev/OptVG/OptLV /opt/
# df -HT /opt
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/OptVG-OptLV ext4  5.1G   11M  4.8G   1% /opt

Lastly, we want this to be mounted next time we reboot, so edit /etc/fstab and add the line:

/dev/OptVG/OptLV /opt ext4 noatime,nodiratime 0 0

With this in place, we can now start using this disk.  I selected here not to update the filesystem every time I access a file or folder – updates get logged as normal but access time is just ignored.
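A quick way to confirm the fstab entry works without waiting for a reboot:

umount /opt
mount /opt       # re-mounts using the new /etc/fstab entry
df -HT /opt      # confirm the expected device and size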

Time to expand

After some time, our 5 GB /opt/ disk is rather full, and we need to make it bigger, but we wish to do so without any downtime. Amazon EBS doesn’t support resizing volumes, so our strategy is to add a new larger volume, and remove the older one that no longer suits us; LVM and ext4’s online resize ability will allow us to do this transparently.

For this example, we’ll decide that we want a 10 GB volume. It can be a different type of EBS volume to our original – we’re going to online-migrate all our data from one to the other.

As when we created the original 5 GB EBS volume above, create a new one in the same AZ and attach it to the host (perhaps a /dev/xvdh this time). We can check the new volume is visible with dmesg again:

[1999786.341602]  xvdh: unknown partition table

And now we initalise this as a Physical volume for LVM:

# pvcreate /dev/xvdh
  Physical volume "/dev/xvdh" successfully created

And then add this disk to our existing OptVG Volume Group:

# vgextend OptVG /dev/xvdh
  Volume group "OptVG" successfully extended

We can now review our Volume group with vgs, and see our physical volumes with pvs:

# vgs
  VG    #PV #LV #SN Attr   VSize  VFree
  OptVG   2   1   0 wz--n- 14.99g 10.09g
# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/xvdg  OptVG lvm2 a--   5.00g 96.00m
  /dev/xvdh  OptVG lvm2 a--  10.00g 10.00g

There are now 2 Physical Volumes – we have a 4.9 GB filesystem taking up space, so 10.09 GB of unallocated space in the VG.

Now it’s time to stop using the /dev/xvdg volume for any new requests:

# pvchange -x n /dev/xvdg
  Physical volume "/dev/xvdg" changed
  1 physical volume changed / 0 physical volumes not changed

At this time, our existing data is on the old disk, and any new data goes to the new one. It’s now that I’d recommend running GNU screen (or similar) so you can detach from this shell session and reconnect, as the process of migrating the existing data can take some time (hours for large volumes):

# pvmove /dev/xvdg /dev/xvdh
  /dev/xvdg: Moved: 0.1%
  /dev/xvdg: Moved: 8.6%
  /dev/xvdg: Moved: 17.1%
  /dev/xvdg: Moved: 25.7%
  /dev/xvdg: Moved: 34.2%
  /dev/xvdg: Moved: 42.5%
  /dev/xvdg: Moved: 51.2%
  /dev/xvdg: Moved: 59.7%
  /dev/xvdg: Moved: 68.0%
  /dev/xvdg: Moved: 76.4%
  /dev/xvdg: Moved: 84.7%
  /dev/xvdg: Moved: 93.3%
  /dev/xvdg: Moved: 100.0%

During the move, checking the Monitoring tab in the AWS EC2 Console for the two volumes should show one with a large data Read metric, and one with a large data Write metric – clearly data should be flowing off the old disk, and on to the new.

A note on disk throughput

The above move was of a pretty small, mostly empty volume. Larger disks will take longer, naturally, so getting some speed out of the process may be key. There’s a few things we can do to tweak this:

  • EBS Optimised: a launch-time option that reserves network throughput from certain instance types back to the EBS service within the AZ. Depending on the size of the instance this is 500 Mbps up to 4 Gbps. Note that for the c4 family of instances, EBS Optimised is on by default.
  • Size of GP2 disk: the larger the disk, the longer it can sustain high IO throughput – but read this for details.
  • Size and speed of PIOPs disk: if consistent high IO is required, then moving to Provisioned IO disk may be useful. Looking at the (2 weeks) history of Cloudwatch logs for the old volume will give me some idea of the duty cycle of the disk IO.

Back to the move…

Upon completion I can see that the disk in use is the new disk and not the old one, using pvs again:

# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/xvdg  OptVG lvm2 ---   5.00g 5.00g
  /dev/xvdh  OptVG lvm2 a--  10.00g 5.09g

So all 5 GB is now unused (compare to above, where only 96 MB was PFree). With that disk not containing data, I can tell LVM to remove the disk from the Volume Group:

# vgreduce OptVG /dev/xvdg
  Removed "/dev/xvdg" from volume group "OptVG"

Then I cleanly wipe the labels from the volume:

# pvremove /dev/xvdg
  Labels on physical volume "/dev/xvdg" successfully wiped

If I really want to clean the disk, I could choose to use shred(1) on the disk to overwrite it with random data. This can take a long time.

Now the disk is completely unused and disassociated from the VG, I can return to the AWS EC2 Console, and detach the disk:

Detach volume dialog box

Detach an EBS volume from an EC2 instance

Wait for a few seconds, and the disk is then shown as “available“; I then chose to delete the disk in the EC2 console (and stop paying for it).

Back to the Logical Volume – it’s still 4.9 GB, so I add 4.5 GB to it:

# lvresize -L +4.5G /dev/OptVG/OptLV
  Size of logical volume OptVG/OptLV changed from 4.90 GiB (1255 extents) to 9.40 GiB (2407 extents).
  Logical volume OptLV successfully resized

We now have 0.6GB free space on the physical volume (pvs confirms this).

Finally, it’s time to expand our ext4 file system:

# resize2fs /dev/OptVG/OptLV
resize2fs 1.42.12 (29-Aug-2014)
Filesystem at /dev/OptVG/OptLV is mounted on /opt; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/OptVG/OptLV is now 2464768 (4k) blocks long.

And with df we can now see:

# df -HT /opt/
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/OptVG-OptLV ext4  9.9G   12M  9.4G   1% /opt

Automating this

The IAM Role I made at the beginning of this post is now going to be useful. I’ll start by adding an IAM Policy to the Role to permit me to List Volumes, Create Volumes, Attach Volumes and Detach Volumes for my instance-id. Let’s start with creating a volume, with a policy like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateNewVolumes",
      "Action": "ec2:CreateVolume",
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:AvailabilityZone": "us-east-1a",
          "ec2:VolumeType": "gp2"
        },
        "NumericLessThanEquals": {
          "ec2:VolumeSize": "250"
        }
      }
    }
  ]
}

This policy puts some restrictions on the volumes that this instance can create: only within the given Availability Zone (matching our instance), only GP2 SSD (no PIOPs volumes), and size no more than 250 GB. I’ll add another policy to permit this instance role to tag volumes in this AZ that don’t yet have a tag called InstanceId:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TagUntaggedVolumeWithInstanceId",
      "Action": [
        "ec2:CreateTags"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:us-east-1:1234567890:volume/*",
      "Condition": {
        "Null": {
          "ec2:ResourceTag/InstanceId": "true"
        }
      }
    }
  ]
}

Now that I can create (and then tag) volumes, this becomes a simple procedure as to what else I can do to this volume. Deleting and creating snapshots of this volume are two obvious options, and the corresponding policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateDeleteSnapshots-DeleteVolume-DescribeModifyVolume",
      "Action": [
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot",
        "ec2:DeleteVolume",
        "ec2:DescribeSnapshotAttribute",
        "ec2:DescribeVolumeAttribute",
        "ec2:DescribeVolumeStatus",
        "ec2:ModifyVolumeAttribute"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/InstanceId": "i-123456"
        }
      }
    }
  ]
}

Of course it would be lovely if I could use a variable inside the policy condition instead of the literal string of the instance ID, but that’s not currently possible.

Clearly some of the more important actions I want to take are to attach and detach a volume to my instance:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1434114682836",
      "Action": [
        "ec2:AttachVolume"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:us-east-1:123456789:volume/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/InstanceID": "i-123456"
        }
      }
    },
    {
      "Sid": "Stmt1434114745717",
      "Action": [
        "ec2:AttachVolume"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:us-east-1:123456789:instance/i-123456"
    }
  ]
}

Now with this in place, we can start to fire up the AWS CLI we spoke of. We’ll let the CLI inherit its credentials from the IAM Instance Role and the policies we just defined.

AZ=`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone`

Region=`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone|rev|cut -c 2-|rev`

InstanceId=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`

VolumeId=`aws ec2 --region ${Region} create-volume --availability-zone ${AZ} --volume-type gp2 --size 1 --query "VolumeId" --output text`

aws ec2 --region ${Region} create-tags --resources ${VolumeId} --tags Key=InstanceId,Value=${InstanceId}
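Note that a newly created volume takes a few seconds to become “available”, and attach-volume also requires a --device parameter (the device name in the attach command below is illustrative). Recent versions of the AWS CLI ship waiters, so assuming yours does, we can block until the volume is ready:

aws ec2 --region ${Region} wait volume-available --volume-ids ${VolumeId}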

aws ec2 --region ${Region} attach-volume --volume-id ${VolumeId} --instance-id ${InstanceId} --device /dev/sdg

…and at this stage, the above manipulation of the raw block device with LVM can begin. Likewise you can then use the CLI to detach and destroy any unwanted volumes if you are migrating off old block devices.
