Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

December 11, 2018

A Little Evening Bag That Looks Like Rare Jewelry Can Still Light Up Your Beauty

Someone once said: "If you don't love handbags, you don't love life itself." By that measure, the designers at Bvlgari must truly love life, because they have created such an elegant evening bag!

The story of the evening handbag
The handbag's transformation began in the early 16th century at the social dances of European royalty, and since its birth the handbag has been closely linked to elegance. The combination of jewelry and handbags became the finest expression of personality in the 1920s. Those pieces are now around 100 years old, yet they still look new; that is the charm of jewelry handbags.

A popular 18th-century French handbag
Portrait of the French queen Marie Antoinette

As early as the mid-18th century, women's handbags were already popular across the European continent. The fashionable Queen Marie Antoinette especially loved carrying handbags to grand events, and her passion for gambling helped push the fashion for handbags further. The prototype that inspired jewelry evening bags also derives from this period.

17th-18th century casino handbags
An early evening bag

Since the 1920s, when Hollywood stars and socialites began carrying custom evening bags for special occasions, the evening bag has gradually become one of the standards of evening wear.

Street fashion in New York in the 1920s

Times have changed, and simply taking a small bag out is no longer enough; wearing a luxurious jewelry evening bag is now the trend.

Handbags are a woman's most intimate girlfriends, and whenever fashion bloggers shoot, the bag is always a protagonist of the trend that cannot be ignored. The Bvlgari Jewelry Serpenti Clutch evening bag lights up the beauty of anyone who wears it!

Evening bags are mostly rectangular, and most come with a bracelet handle so they can be carried easily. This replica Bvlgari Serpenti Clutch jewelry evening bag is made according to the brand's craft: pure black lambskin that is delicate and soft. The bag closes with a rose-gold metal tab attached to the body with screws, and the snake-head clasp set with green gems is unique and luxurious.

The dark green spun-silk lining has a zip pocket to keep items in order. The bag also has a round jewelry ring that makes it more comfortable to hold, and its scale-like design gives it a special feeling when worn. There is a zipped pocket at the back of the body to keep small things at hand, and the square bottom gives the bag enough support to stand on a table by itself.

December 10, 2018

UNDP 2018: Evidence based vs experimentation based policy

Recently I gave a remote talk at a UNDP event about evidence based versus experimentation based policy. Below are my notes.
  • We invented all of this, and we can reinvent it. We can co-create a better future for everyone, if we choose. But if we settle for making things just a bit better, a bit more sustainable, a bit anything, then we will fundamentally fail the world because change and complexity is growing exponentially, and we need an exponential response to keep up.
  • There is a dramatic shift in paradigm from control to enablement, from being a king in a castle to a node in a network, which assumes a more collaborative approach to governance.
  • Evidence based approaches are great to identify issues, but we need experimentation based approaches, equitably co-designed with communities, to create sustainable and effective solutions. Evidence based solutions are often normative rather than transformative.
  • We need both evidence and experimentation based policy making, combined with system thinking and public engagement to make a real difference.
  • Digital transformation is often mistaken for meaning the digitisation of services or service-design-led improvement of services, but digital transformation means creating institutions that are fit for purpose for the 21st century: from policy, regulation and services to public engagement, a full rethink and redesign of our social, economic and political systems.
  • Historically there has been a disconnect between policy and implementation; the idea of policy as separate from implementation undermines the possibility of meeting the policy intent through implementation.
  • Measurement ends up being limited to the context of function rather than outcomes.
  • Urgently need to reform how we do policy, regulation and legislation, to embrace an outcomes based approach, to bring design thinking and system design into the process from the start, from policy development in the first instance.
  • Working in the open is essential to getting both the demand and supply of evidence based policy, and working openly also means engaging in the shared design of policy and services with the communities we serve, to draw on the experience, expertise and values of the communities.
  • Public Values Management
  • Evidence based AND experimentation based policy.
  • Examples:
    • Service Innovation Lab – NZ
      • Service design and delivery – rapid prototyping is trusted for service design
      • Applying design thinking to regulation and policy
      • Legislation as code – rapid testing of policy and legislation (for example the Holidays Act). It is critical if we want a chance of ensuring traceable, accountable and trusted decision making by the public sector as automated decision making grows with the adoption of AI and ML. (A toy sketch of the idea follows at the end of this list.)
      • Simultaneous legislation and implementation, to ensure implementation has a chance of meeting the original policy intent.
    • Taiwan – Uber case study, civic deliberation
    • Their Future Matters – data driven insights and outcomes mapping and then co-design of solutions, co-design with Aboriginal NGOs
    • 50 year optimistic future – to collaboratively design what a contextual, cultural and values driven “good” looks like for a society, so we can reverse engineer what we need to put in place to get us there.
  • Final point – if we want people to trust our policies, services and legislation, we need to do open government data, models, traceable and accountable decision making, and representative and transparent public participation in policy.
  • Links:
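
To make the "legislation as code" point above a little more concrete, here is a toy sketch: a rule expressed as an ordinary function can be exercised directly against test scenarios, which is what makes rapid testing of policy drafts possible. The rule and numbers below are invented for illustration only and are not taken from the Holidays Act or the Service Innovation Lab's work.

    -- Toy illustration only: an invented leave rule expressed as a function,
    -- so the "policy" can be exercised directly against test scenarios.
    annualLeaveWeeks :: Int -> Int
    annualLeaveWeeks monthsEmployed
      | monthsEmployed >= 12 = 4   -- invented rule: 4 weeks after 12 months
      | otherwise            = 0

    main :: IO ()
    main = mapM_ check [(6, 0), (12, 4), (30, 4)]
      where
        check (months, expected) =
          putStrLn $ show months ++ " months: "
                  ++ (if annualLeaveWeeks months == expected then "PASS" else "FAIL")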

 

It Turns out to be The Protagonist of The Story. Come And Have A Look

In the Garden of Eden, Adam and Eve were tempted to steal the forbidden fruit and were punished by God. The image of the snake has long been poor in the West: sly, ferocious, and jealous. But there are exceptions. When Queen Victoria of England married, she chose an unusual snake-shaped ring, symbolizing eternal life, ancient power and eternal love.

Thanks to Queen Victoria's choice, celebrities from all walks of life in Britain followed suit. In the 19th century snake-shaped jewellery was very popular, mostly in the form of a ring: the snake's mouth biting its own tail, meaning the snake consumes itself, is reborn, and circles forever through flowing time.

In 2010, the British Portrait Gallery exhibited a portrait of Queen Elizabeth I painted in the 16th century. The Queen originally held a bouquet of roses, but after a few hundred years the paint fell off and people found that there was a black snake under the roses.
Why the Queen would hold a black snake in this painting, and why it was later painted over, has become a historical mystery. British experts speculate that it is because the snake's meaning is ambiguous: it is both original sin and a symbol of wisdom, which may reflect the snake's controversial position in Western culture.

The mysterious portrait of the Queen

Perhaps because of this ambiguity, the major luxury brands compete to reinterpret the meaning of the snake in their own products. Inspired by a Mediterranean Eden, Bvlgari's Spring and Summer 2017 collection draws on legendary craftsmanship and is perfected in modern design.
Exquisite accessories capture the geometric patterns of the garden, the innocence of the petals and the seductive temptation of the snake, a profound interpretation of the radiance and darkness of the Garden of Eden. For the spring and summer of 2017 the Bulgari accessories family pays tribute to its skilled creators, of which the Serpenti series is the most popular.

Bulgari brings its jewelry inspiration into its new accessories, creating a collection of exquisite leather goods in colorful, unparalleled craftsmanship that adds to the brand's unique vibrancy, showcases its bold and fearless soul, and blends luxury with avant-garde skill. Each piece is a luxury item created with Bulgari's unique creativity.

The mastery of color is eye-catching, and the brand's exquisite Italian craftsmanship is equally well known.

Serpenti Forever red or blue Serpentage leather snake head jewelry bag

The Serpentage scale snake-head bag was inspired by the 1965 Serpenti fine jewelry collection, turning the entire bag into one large "jewel". Cleverly connecting jewelry and leather bags has made it one of the brand's most popular works.

Chaos Monkeys


A very well written tale of a Wall Street quant who left during the GFC to adventure in startup land and ended up at Facebook attempting to solve their monetization problems for an indifferent employer. Martinez must have been stomping around Mountain View, because his description of the environment and what it's like to work inside a Silicon Valley company rings very true to me.

A good read.

Chaos Monkeys, by Antonio Garcia Martinez (Business & Economics, Harper Paperbacks, July 24, 2018, 320 pages)

The instant New York Times bestseller, now available in paperback and featuring a new afterword from the author--the insider's guide to the Facebook/Cambridge Analytica scandal, the inner workings of the tech world, and who really runs Silicon Valley “Incisive.... The most fun business book I have read this year.... Clearly there will be people who hate this book — which is probably one of the things that makes it such a great read.” — Andrew Ross Sorkin, New York Times Imagine a chimpanzee rampaging through a datacenter powering everything from Google to Facebook. Infrastructure engineers use a software version of this “chaos monkey” to test online services’ robustness—their ability to survive random failure and correct mistakes before they actually occur. Tech entrepreneurs are society’s chaos monkeys. One of Silicon Valley’s most audacious chaos monkeys is Antonio García Martínez. After stints on Wall Street and as CEO of his own startup, García Martínez joined Facebook’s nascent advertising team. Forced out in the wake of an internal product war over the future of the company’s monetization strategy, García Martínez eventually landed at rival Twitter. In Chaos Monkeys, this gleeful contrarian unravels the chaotic evolution of social media and online marketing and reveals how it is invading our lives and shaping our future.


December 09, 2018

Donations 2018

Each year I do the majority of my Charity donations in early December (just after my birthday) spread over a few days (so as not to get my credit card suspended).

I also blog about it to hopefully inspire others. See: 2017, 2016, 2015

All amounts this year are in $US

My main donation was to GiveWell (to allocate to projects as they prioritize). I'm happy that they are making efficient use of donations.

I gave some money to the Software Conservancy to allocate across the projects (mostly open source software) they support and also to Mozilla to support the Firefox browser (which I use) and other projects.

Next were three advocacy and infrastructure projects.

and finally I gave some money to a couple of outlets whose content I consume. Signum University produces various educational material around science fiction, fantasy and medieval literature; in my case I'm following their lectures on YouTube about The Lord of the Rings. The West Wing Weekly is a podcast doing an episode-by-episode review of the TV series The West Wing.

 


December 08, 2018

School-wide Understanding Our World® implementations

Are you considering implementing our integrated HASS+Science program, but getting a tad confused by the pricing?  Our subscription model didn't provide a straightforward calculation for a whole school or year level.  However, it generally works out to $4.40 (inc. GST) per student.  So now we're providing this as an option directly: implement our integrated HASS+Science program […]

December 06, 2018

Codec 2 and TWELP

DSP Innovations have recently published comparisons of Codec 2 with their TWELP codec at 2400 and 600 bit/s.

Along with some spirited rhetoric, they have published some TWELP 600 samples (including source). The comparison, especially in the 600 bit/s range, is very useful to my work.

I’ve extracted a random subset of the 600 bit/s a_eng.wav samples, broken up into small chunks to make them easier to compare. Have a listen, and see what you think:

Sample Source MELP 600e Codec 2 700c TWELP 600
1 Listen Listen Listen Listen
2 Listen Listen Listen Listen
3 Listen Listen Listen Listen
4 Listen Listen Listen Listen
5 Listen Listen Listen Listen
6 Listen Listen Listen Listen

The samples do have quite a bit of background noise. The usual approach for noisy samples is to use a noise suppression algorithm first, e.g. we use the Speex noise suppression in FreeDV. However it’s also a test of the codec’s robustness to background noise, so I didn’t perform any noise suppression for the Codec 2 samples.

Comparison

I am broadly in agreement with their results. Using the samples provided, the TWELP codec appears to be comparable to MELP 2400, with Codec 2 2400 a little behind both. This is consistent with other Codec 2 versus MELP/AMBE comparisons at 2400 bits/s. That’s not a rate I have been focussing on; most of my work has been directed at the lower rates required for HF digital voice.

I think – for these samples – their 600 bit/s codec also does better than Codec 2 700C, but not by a huge margin. Their results support our previous findings that Codec 2 is as good as (or even a little better) than MELP 600e. It does depend on the samples used, as I will explain below.

DSP Innovations have done some fine work in handling non-speech signals, a common weakness with speech codecs in this range.

Technology Claims

As to claims of superior technology, and “30 year old technology”:

  1. MELP 2400 was developed in the 1990s, and DSP Innovations’ own results show similar speech quality, especially at 2400 bits/s.
  2. AMBE is in widespread use, and uses a very similar harmonic sinusoidal model to Codec 2.
  3. The fundamental work on speech compression was done in the 1970s and 80s, and much of what we use today (e.g. in your mobile phone) is based on incremental advances on that.
  4. As any reader of this blog will know, Codec 2 has been under continual development for the past decade. I haven’t finished, still plenty of “DSP Innovation” to come!

While a fine piece of engineering, TWELP isn’t in a class of its own – it’s still a communications quality speech codec in the MELP/AMBE/Codec 2 quality range. They have not released any details of their algorithms, so they cannot be evaluated objectively by peer review.

PESQ and Perceptual evaluation of speech quality

DSP Innovations makes extensive use of the PESQ measure, for both this study and for comparisons to other competitors.

Speech quality is notoriously hard to estimate. The best way is through controlled subjective testing but this is expensive and time consuming. A utility to accurately estimate fine differences in speech quality would be a wonderful research tool. However in my experience (and the speech coding R&D community in general), such a tool does not exist.

The problem is even worse for speech codecs beneath 4 kbit/s, as they distort the signal so significantly.

The P.862 standard acknowledges these limits, and explicitly states in Table 3 “Factors, technologies and applications for which PESQ has not currently been validated … CELP and hybrid codecs < 4 kbit/s". The standard they are quoting does not support use of PESQ for their tests.

PESQ is designed for phone networks, and much higher bit rate codecs. In section 2 of the standard they present best-case correlation results of +/- 0.5 MOS points (note on a scale of 1-5, this is +/- 10% error). That’s when it is used for speech codecs > 4 kbit/s that it is designed for.

So DSP Innovations statements like “Superiority of the TWELP 2400 and MELPe 2400 over CODEC2 2400 is on average 0.443 and 0.324 PESQ appropriately” are unlikely to be statistically valid.

The PESQ algorithm (Figure 4a of the standard) throws away all phase information, keeping just the FFT power spectrum. This means it cannot evaluate aspects of the speech signal that are very important for speech quality. For example, PESQ could not tell the difference between voiced speech (like a vowel) and unvoiced speech (like a consonant) with the same power spectrum.

DSP Innovations haven’t shown any error bars or standard deviations on their results. Even the best subjective tests will have error bars wider than the PESQ results DSP Innovations are claiming as significant.

I do sympathise with them. This isn’t a huge market, they are a small company, and subjective testing is expensive. Numbers look good on a commercial web site from a marketing sense. However I suggest you disregard the PESQ numbers.

Speech Samples

Speech codecs tend to work well with some samples and fall over with others. It is natural to present the best examples of your product. DSP Innovations chose what speech material they would present in their evaluation of Codec 2. I have asked them to give me the same courtesy and code speech samples of my choice using TWELP. I have received no response to my request.

Support and Porting

An open source codec can be ported to another machine in seconds (rather than months that DSP Innovations quote) with a cross compiler. At no cost.

Having the source code makes minor problems easy to fix yourself. We have a community that can answer many questions. For tougher issues, well, I’m available for paid support – just like DSP Innovations.

Also …. well open source is just plain cool. As a reminder, here are the reasons I started Codec 2, nearly 10 years ago.

To quote myself:

A free codec helps a large amount of people and promotes development and innovation. A closed codec helps a small number people make money at the expense of stifled business and technical development for the majority.

Reading Further

Open Source Low Rate Speech Codec Part 1, the post that started Codec 2.
P.862 PESQ standard.
CODEC2 vs TWELP on 2400 bps. DSP Innovations evaluate Codec 2, MELP, and TWELP at 2400 bits/s.
CODEC2 vs TWELP on 700 bps. DSP Innovations evaluate Codec 2, MELP, and TWELP at 600 (ish) bits/s.
AMBE+2 and MELPe 600 Compared to Codec 2. An earlier comparison, using samples from DSP Innovations.

Entrepreneurs’ Mental Health and Well-being Survey

Jamie Pride has partnered with Swinburne University and Dr Bronwyn Eager to conduct the largest mental health and well-being survey of Australian entrepreneurs and founders. This survey will take approx 5 minutes to complete. Can you also please spread the word and share this via your networks!

Getting current and relevant Australian data is extremely important! The findings of this study will contribute to the literature on mental health and well-being in entrepreneurs, and will potentially lead to future improvements in the prevention and treatment of psychological distress.

Jamie is extremely passionate about this cause! Your help is greatly appreciated.

Configuring Solarized Colours in Termonad

I'm currently using Termonad as my terminal of choice. What is Termonad?

Termonad is a terminal emulator configurable in Haskell. It is extremely customizable and provides hooks to modify the default behavior. It can be thought of as the "XMonad" of terminal emulators.

As a long time Xmonad user, this is a rather appealing description as well as a fairly lofty and worthy goal. It's also a niche not currently filled, one that I'm pretty happy to see being filled. By default, Termonad looks like this.

Default Termonad palette

Which is pretty standard as far as terminal defaults go but a long way from the eye-soothing grace of the Solarized palette which I essentially won't work without.

The Solarized dark output looks like this (Haskell in vim):

Solarized (dark) Termonad palette

These are the functions controlling the dark palette:

    solarizedDark1 :: Vec N8 (Colour Double)
    solarizedDark1 =
         sRGB24   0  43  54 -- base03, background
      :* sRGB24 220  50  47 -- red
      :* sRGB24 133 153   0 -- green
      :* sRGB24 181 137   0 -- yellow
      :* sRGB24  38 139 210 -- blue
      :* sRGB24 211  54 130 -- magenta
      :* sRGB24  42 161 152 -- cyan
      :* sRGB24 238 232 213 -- base2
      :* EmptyVec

    solarizedDark2 :: Vec N8 (Colour Double)
    solarizedDark2 =
         sRGB24   7  54  66 -- base02, background highlights
      :* sRGB24 203  75  22 -- orange
      :* sRGB24  88 110 117 -- base01, comments / secondary text
      :* sRGB24 131 148 150 -- base0, body text / default code / primary content
      :* sRGB24 147 161 161 -- base1, optional emphasised content
      :* sRGB24 108 113 196 -- violet
      :* sRGB24 101 123 131 -- base00
      :* sRGB24 253 246 227 -- base3
      :* EmptyVec

The Solarized light output looks like this (Haskell in vim):

Solarized (light) Termonad palette

These are the functions controlling the light palette:

    solarizedLight1 :: Vec N8 (Colour Double)
    solarizedLight1 =
         sRGB24 238 232 213 -- base2, background highlights
      :* sRGB24 220  50  47 -- red
      :* sRGB24 133 153   0 -- green
      :* sRGB24 181 137   0 -- yellow
      :* sRGB24  38 139 210 -- blue
      :* sRGB24 211  54 130 -- magenta
      :* sRGB24  42 161 152 -- cyan
      :* sRGB24   7  54  66 -- base02
      :* EmptyVec

    solarizedLight2 :: Vec N8 (Colour Double)
    solarizedLight2 =
         sRGB24 253 246 227 -- base3, background
      :* sRGB24 203  75  22 -- orange
      :* sRGB24 147 161 161 -- base1, comments / secondary text
      :* sRGB24 101 123 131 -- base00, body text / default code / primary content
      :* sRGB24  88 110 117 -- base01, optional emphasised content
      :* sRGB24 108 113 196 -- violet
      :* sRGB24 131 148 150 -- base0
      :* sRGB24   0  43  54 -- base03
      :* EmptyVec

You can see how I've applied these options in this commit.
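
Roughly, the wiring to plug one of these palettes into a Termonad configuration looks something like the sketch below. I'm recalling the colour-extension API from memory here (createColourExtension, addColourExtension, defaultColourConfig and the palette field), so names may differ slightly between Termonad versions; see the linked commit for the authoritative version.

    -- Sketch only: function, field and module names follow the colour-extension
    -- API as I remember it and may differ between Termonad versions.
    import Termonad.App (defaultMain)
    import Termonad.Config (defaultTMConfig)
    import Termonad.Config.Colour
      ( Palette (ExtendedPalette), addColourExtension, createColourExtension
      , defaultColourConfig, palette )

    main :: IO ()
    main = do
      -- Build a colour extension whose palette is the Solarized dark pair
      -- defined above (swap in solarizedLight1/solarizedLight2 for light).
      colExt <- createColourExtension
        (defaultColourConfig { palette = ExtendedPalette solarizedDark1 solarizedDark2 })
      -- Attach it to an otherwise default Termonad configuration.
      defaultMain (defaultTMConfig `addColourExtension` colExt)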

Update: My complete example is now included in the termonad examples.

December 05, 2018

Audiobooks – November 2018

The Vanity Fair Diaries 1983-1992 by Tina Brown

Well written, although I forgot who was who at times. The author comes across as very real and it is interesting to feel what has and hasn’t changed since the 1980s. 7/10

His Last Bow and The Valley of Fear by Sir Arthur Conan Doyle. Read by Stephen Fry

The Valley of Fear is solid. The short stories are not among my favorites, but everything is well produced. 7/10

First Man: The Life of Neil A. Armstrong by James R. Hansen

I read this prompted by the movie. Unlike the movie, it covers his family, his early and post-moon life, and has a lot more detail throughout. Not overly long however. 8/10

Don’t Make Me Pull Over! : An Informal History of the Family Road Trip by Richard Ratay

Nice combination of the author’s childhood experiences in the early 70s along with a history of hotels, highways and related topics. 8/10

Giants’ Star by James P. Hogan

3rd book in the trilogy. Worth reading if you read and liked the first two. 6/10

U.S.S. Seawolf: Submarine Raider of the Pacific by Joseph Eckberg

First person account of a crew-member of a US sub before & during the first year (up to Jan 1943) of US involvement in WW2. Published during the war and sourced solely from one person, so missing some details due to wartime censorship and lack of reference to other sources. Engaging though. 8/10

Mind of the Raven: Investigations and Adventures with Wolf-Birds by Bernd Heinrich

I didn’t like this quite as much as “Summer World” and “Winter World”, since 100% ravens got a bit much, but it was still well written & got me interested in the birds. 7/10

The Greater Journey: Americans in Paris by David McCullough

Covers American visitors (mostly artists, writers and doctors) to Paris mainly from 1830 to 1900: how they lived and how Paris influenced them, along with some history of the city. 9/10


December 04, 2018

Finding Bugs in an Aercus WS3083

Aercus WS3083

While we're not drought declared, there's been precious little rain over the last 6 months or so, which made it hard to work out if I really had a problem or not.

The rainfall had been light and infrequent, and each time the weather station recorded 0mm of rainfall, which didn't seem unreasonable but was dubious nonetheless.

All the sensors appeared to be working OK. I re-seated, reconnected and reset everything to make sure that the system was connected and working fine but with no change in the result.

I was eventually driven to dig out the manual gauge, which recorded 18mm the next time it rained, while the weather station again recorded 0mm.

No choice this time but to get up the ladder and disassemble the rain gauge.

I removed the cover and everything passed an initial eyeballing.

Aercus WS3083 rain gauge

The Aercus WS3083 uses a lever arm to measure and discard collected rain. Being a simple primate, I attempted to toggle it with my index finger and instantly noted it wasn't moving freely, before a massive grasshopper|cricket flew out from under the arm and into my face. It had made a nice home under the arm, preventing it from dipping down and counting rain.

Unsurprisingly it moved freely after that and has been accurately measuring rain since.

Postscript: I think the chickens are still upset at me for failing to catch the grasshopper|cricket and feed it to them.

December 03, 2018

LUV December 2018 Main Meeting: Linux holiday gift ideas

Dec 4 2018 18:30 to 19:30
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE ONE HOUR DURATION

6:30 PM to 7:30 PM Tuesday, December 4, 2018
Training Room, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

  • Andrew Chalmers
  • Andrew Pam

 

Many of us like to go for dinner nearby after the meeting, typically at Brunetti's or Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.


November 30, 2018

Artemis


It’s been ages since I’ve read a book in a couple of days, let alone stayed up late when I really shouldn’t in order to finish a book. Artemis is the book which broke me out of that rut — this is a fun, clever, light read. It’s quite different when compared to The Martian, but I think that’s good. Weir has attempted to do something new instead of just playing on his previous successes.

An excellent book, and Mr Weir is solidly landing on my buy-everything-he-writes list.

Artemis, by Andy Weir (Fiction, Del Rey, November 13, 2017, 384 pages)

She grew up on the moon, of course she has a dark side... Jazz Bashara is a criminal. Well, sort of. Life on Artemis, the first and only city on the moon, is tough if you're not a rich tourist or an eccentric billionaire. So smuggling in the occasional harmless bit of contraband barely counts, right? Not when you've got debts to pay and your job as a porter barely covers the rent. Everything changes when Jazz sees the chance to commit the perfect crime, with a reward too lucrative to turn down. But pulling off the impossible is just the start of Jazz's problems, as she learns that she's stepped square into a conspiracy for control of Artemis itself - and that now, her only chance at survival lies in a gambit even more unlikely than the first.


November 29, 2018

Turmoil


A very readable set of essays from Robyn Williams, the broadcaster of the Australian Science Show, not the comedian. Covering the state of modern science, journalism, the ABC, and whether modern democracy is doomed in an approachable and very readable form. I enjoyed this book greatly. A good Sunday morning and vacation read if you’re into approachable non-fiction.

Turmoil, by Robyn Williams (Memoir, Newsouth Press)

Robyn Williams, presenter of The Science Show on ABC Radio, reveals all in Turmoil, a searingly honest and often blackly funny reflection on his life, friends, the people he loves and loathes, and a multi-faceted career that includes over forty years on radio. Robyn writes frankly about everything, from performing with Monty Python, his impressions of fellow scientists Richard Dawkins and David Attenborough, and his unique insights on climate change and the recent devaluing of science, to frugality and being treated for bowel cancer.


November 22, 2018

Dark Doodad

It's been a while since I did a blog post, so after twiddling the way the front page of the site displays, it's time to post a new one. The attached photo is of my favourite dark nebula, "The Dark Doodad". What looks like a long thin nebula is apparently a sheet over 40 light years wide that we happen to be seeing edge-on. On the left you can see a few dark tendrils that are part of the Coalsack Nebula.

The Dark Doodad

This is one of the first images created from a stack of subs I took using AstroDSLR. Each exposure is 2 minutes and I stacked 20 of them. My polar alignment was pretty decent, I think!

Simple Design and Unique Pattern: Replica Bvlgari Serpenti Blue Chain Bag

I didn't actually know much about the types of leather, and thought leather accessories were all smooth and delicate. It was not until a party, where I saw my friend's snakeskin bag, that I was immediately attracted by this beautiful, distinctive pattern. I asked about this kind of leather, and then bought a snakeskin bag for myself: a Bvlgari Serpenti bag with a gold chain, an imitation bag made of high-quality snakeskin leather.

I chose a blue flip-top shoulder bag with a snake-head jewel. The size of the bag (25cm x 19cm x 5cm) suits me well; since I needed a daily bag without much demand for size, this standard version was the one I selected. The interior is one main space that can store some necessities, and there is also a zipped flat pocket on the back of the bag to hold smaller, more valuable items. The jewelry decoration of the bag is very particular and has the same appearance as the real one. The leather body with a gold chain shoulder strap is very stylish and luxurious, and I like it very much. I knew little about this type of leather, so I have collected some relevant knowledge to share with friends who like this bag.

Snakeskin features
Snakeskin is like a human fingerprint: each snake is unique, so each piece is unique!

Snakeskin scales are neatly arranged in leopard-like spots, dense and natural.
Each scale remains partly attached to the underlying skin, an effect that no other material can mimic.

The three-dimensional scales are the most important feature of snakeskin. Snakeskin is light, delicate, soft and elastic. Dry scales retain the true touch of leather, giving the fingertips the most beautiful feeling.

The best way to maintain this material is to keep it dry. What leather fears most is a humid environment, and snakeskin is no exception: dampness makes leather prone to mold and bacteria. Some snakeskin finishes leave the scales semi-folded; if the storage environment is too moist the scales will not sit naturally in their half-raised state, and may even stick to the leather. A dry storage environment is therefore really important for snakeskin.

Snakeskin should not be cleaned with any cleaning oil. Only use a clean, dry cloth to wipe the dirt off, and keep in mind that you must wipe in the direction of the scales, or you will destroy the texture of the snakeskin.

A very basic storage method is to stuff the bag with dehumidifying paper, which absorbs moisture and helps the bag keep its shape.

November 21, 2018

OpenPOWER Summit Europe 2018: A Software Developer's Introduction to OpenCAPI

Last month, I was in Amsterdam at OpenPOWER Summit Europe. It was great to see so much interest in OpenPOWER, with a particularly strong contingent of researchers sharing how they're exploiting the unique advantages of OpenPOWER platforms, and a number of OpenPOWER hardware partners announcing products.

(It was also my first time visiting Europe, so I had a lot of fun exploring Amsterdam, taking a few days off in Vienna, then meeting some of my IBM Linux Technology Centre colleagues in Toulouse. I also now appreciate just what ~50 hours on planes does to you!)

One particular area which got a lot of attention at the Summit was OpenCAPI, an open coherent high-performance bus interface designed for accelerators, which is supported on POWER9. We had plenty of talks about OpenCAPI and the interesting work that is already happening with OpenCAPI accelerators.

I was invited to present on the Linux Technology Centre's work on enabling OpenCAPI from the software side. In this talk, I outline the OpenCAPI software stack and how you can interface with an OpenCAPI device through the ocxl kernel driver and the libocxl userspace library.

My slides are available, though you'll want to watch the presentation for context.

Apart from myself, the OzLabs team were well represented at the Summit:

Unfortunately none of their videos are up yet, but they'll be there over the next few weeks. Keep an eye on the Summit website and the Summit YouTube playlist, where you'll find all the rest of the Summit content.

If you've got any questions about OpenCAPI feel free to leave a comment!

November 20, 2018

LPCNet meets Codec 2

The previous post described my attempts to come up to speed with NN based speech synthesis, with the kind help of Jean-Marc Valin and his LPCNet system.

As an exercise, I have adapted LPCNet to use Codec 2 features, and have managed to synthesise high quality speech at a sample rate of 8kHz. Here are the output speech samples:

Sample original LPCNet Codec 2
cq_ref Listen Listen
hts1a Listen Listen
hts2a Listen Listen
mmt1 Listen Listen
morig Listen Listen
speech_orig Listen Listen

I’m happy with all of the samples except cq_ref. That sample has a lot of low freq energy (like the pitch fundamental) which may not have been well represented in the training database. mmt1 has some artefacts, but this system already does better than any other low rate codec on this sample.

This is not quite a quantised speech codec, as I used unquantised Codec 2 parameters (10 Line Spectral Pairs, pitch, energy, and a binary voicing flag). However it does show how LPCNet (and indeed NN synthesis in general) can be trained to use different sets of input features, and the system I have built is close to an open source version of the Codec 2/NN system presented by Kleijn et al.
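
For a concrete picture of the per-frame feature set described above, it can be thought of as a small record like the following. This is purely an illustration of the layout, not code from Codec 2 or LPCNet:

    -- Illustrative only: the unquantised per-frame features described above,
    -- not actual Codec 2 or LPCNet code.
    data Frame = Frame
      { lsps   :: [Double]  -- 10 Line Spectral Pairs describing the spectral envelope
      , pitch  :: Double    -- fundamental frequency estimate
      , energy :: Double    -- frame energy
      , voiced :: Bool      -- binary voicing flag
      }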

Why 8kHz rather than the higher quality 16 kHz? Well, LPCNet requires a set of Linear Prediction Coefficients (LPCs). The LPCs dumped by Codec 2 are sampled at 8kHz. It’s possible, but not straightforward, to resample the LPC spectra at 16 kHz, but I chose to avoid that step for now.

Training

My initial attempts led to good quality speech using samples from within the training database, but poor quality on speech samples (like the venerable hts1a) from outside the training database. In Machine Learning land, this suggests “not enough training data”. So I dug up an old TIMIT speech sample database, and did a bunch of filtering on my speech samples to simulate what I have seen from microphones in my Codec 2 adventures. It’s all described in gory detail here (Training Tips section). Then, much to my surprise, it worked! Clean, good quality speech from all sorts of samples.

Further Work

  • Add code to generate 16 kHz LPCs from 8 kHz LPCs and try for 16 kHz synthesised speech
  • Use quantised Codec 2 parameters from say Codec 2 2400 or 1300 and see how it sounds.
  • Help Jean-Marc convert LPCNet to C and get it running in real time on commodity hardware.
  • Make a real world, over the air contact using NN based speech synthesis and FreeDV.
  • A computationally large part of the LPCNet (and indeed any *Net speech synthesis system) is dedicated to handling periodic pitch information. The harmonic sinusoidal model used in Codec 2 can remove this information and hence much of the CPU load. So a dramatic further reduction in the number of weights (and hence CPU load) is possible, although this may result in some quality reduction. Another way of looking at this (as highlighted by Jean-Marc’s LPCNet paper) is “how do we model the excitation” in source/filter type speech systems.
  • The Kleijn et al paper had the remarkable result that we can synthesise high quality speech from low bit rate Codec 2 features. What is the quality trade-off between the bit rate of the features and the speech quality? How coarsely can we quantise the speech features and still get high quality speech? How much of the quality is due to the NN, and how much to the speech features?

Reading Further

Jean Marc’s blog post on LPCNet, including links to LPCNet source code and his ICASSP 2019 paper.
WaveNet and Codec 2
Source Code for my Codec 2 version of LPCNet

LPCNet – Open Source Neural Net Speech Synthesis

Jean-Marc Valin has been working on Neural Network (NN) based speech synthesis in his project called LPCNet. It has similar speech quality to Wavenet, but is based on an architecture called WaveRNN, and includes many new innovations.

Jean-Marc’s work is aimed at reducing the synthesis CPU load down to the level of a modern CPU, for example a mobile phone or Raspberry Pi, and he has made significant progress in that direction.

As well as being useful for his research – this code is a working, open source reference system for Neural Net (NN) based synthesis projects. He has also written an ICASSP 2019 paper on LPCNet, which explains many of the finer details of NN speech synthesis. Fantastic resources for other people coming up to speed in NN synthesis. Well done Jean-Marc!

Over the past few weeks Jean-Marc has kindly answered many NN-noob questions from me. I have used the answers to comment his code and add to his README. There are still many aspects of how this code works that I do not understand. However I can drive his software well enough to synthesise high quality speech:


The first sample was from inside the training database, the second outside.

The network is driven by some speech codec like parameters, but it’s not actually running as a speech codec at present. However it’s a great starting point for high quality speech (de)coding, or indeed speech synthesis.

How I trained

My GTX1060 GPU isn’t quite up to spec, so for training I had to reduce the batch_size to 16, and run for 60 epochs. I used the TSP speech database discussed in the LPCNet README, and followed Jean-Marc’s suggestion of resampling it twice (once at +5% Fs, once at -5% Fs), to get 3x the training data. It took 14 hours for me to train. Synthesis runs 10 times slower than real time on my GPU, however much of this is overhead. If the Keras code was ported to C – it would be close to real time on a modern laptop/phone CPU.

References

Jean Marc’s blog post on LPCNet, including links to LPCNet source code and his ICASSP 2019 paper.
WaveNet and Codec 2
WaveRNN
FFTNet, some good figures that helped me get my head around the idea of sampling a probability distribution.

November 16, 2018

Hidden in The High-end Jewelry Feast of Wild Pop: Bulgari Serpenti Forever Flap Cover Bag

Bulgari's new Wild Pop jewelry collection was released in Beijing, China, on November 8th, after its launch in Rome. The Wild Pop collection is inspired by the enthusiasm and pop art of the 1980s, bringing people into the wild 80s and the Italian spirit of “Larger Than Life”. Bulgari CEO Jean-Christophe Babin, brand spokespersons Shu Qi and Jolin Tsai, and brand ambassador Lady Kitty Spencer attended the feast.

Lady Kitty Spencer, the Bulgari brand ambassador

Jolin Tsai, Bulgari brand spokesperson

Shu Qi paired colored gemstones and high-quality jewellery with a black dress, chasing a pure sense of self and feeling the boldness and beauty of the new high-end jewellery collection. The green Bulgari Serpenti Forever flap bag in her hand added a touch of colour to her look.

Bulgari's jewelry is perfect. The brand injects that enthusiasm into the new accessories collection, adding a unique color to exquisite leather with colorful tones and unparalleled craftsmanship, and the combination with high-end jewelry makes every bag unique. Bulgari's control of color is also quite eye-catching, and its exquisite Italian craftsmanship is well known. Bulgari accessories redefine the modern paradigm of Italian aesthetics with fine leather, elegant shapes and quality materials.

The Spring/Summer 2017 accessory lines, the Serpenti series and the BVLGARI BVLGARI series, show perfect craftsmanship, and the jeweler's fascinating ingenuity enriches them well. They are the two most iconic collections of Bulgari, showcasing the bold and fearless soul of the brand, blending luxury with avant-garde skill, and creating each luxury item with Bulgari's unique creativity. Today we follow this green Bulgari Serpenti Forever bag of Shu Qi's into the classic Serpenti collection.

Bulgari launched the Serpenti Forever series of high-end bags in 2011, and the classic jewel-inlaid snake head has since become a signature of the brand. This bag is sought after not only because of its stylish appearance, but also because it has the exquisite elegance of jewelry.

The brand combines the rigorous craftsmanship of watchmaking with the bold creativity of jewelry making, giving the bag a delicate heart under a simple appearance. Beyond the decoration of the flap, the lovely bag itself is unforgettable. This leather bag with beautiful lines is complemented by a gold metal chain shoulder strap that looks slender but is solid, so it can be held in the hand or carried across the body. It must be said, however, that this shoulder strap is inconvenient in summer, as it quietly adds pressure on the shoulder. Autumn and winter are the seasons best suited to this chain strap, when its metallic luster adds a lot of fashion to the overall outfit.

November 14, 2018

Most Popular Replica Bvlgari Handbags and Jewelry on Rus.tl

About Rus.tl

Rus.tl is a UK based online retailer of luxury replica handbags, jewelry and watches. We provide affordable items for both men and women who want to buy luxurious and fashionable products, without spending too much money. We stand out from other retailers of replica products in that we provide worldwide free shipping for our customers.

replica bvlgari handbags sale price

Our replica Bvlgari handbags

Our Bvlgari copy handbags are finely embellished and made from prestigious materials, suitable for ladies who want to make a fashion statement. Some of our best selling models include:

  1. Serpenti Forever. This luxurious replica handbag is made from calf leather and is available in ‘Absolute Black’ and ‘Absolute White’ colors, giving it the elegant look of a modern woman. The bag has a timeless yet contemporary design, and embodies the mystic and charming seductive attraction of a serpent. It’s the perfect handbag for young women who want a contemporary blend of diversity and complementarity.
  2. Divas Dream. The copy Divas Dream handbags feature a striking design and high-quality craftsmanship, made to give the wearer a modern urban couture look. These bags are available in various designer colors, including jade green, cream and brown. They are made from python skin and often embossed with details such as gold-plated hardware, a chain strap and an internal metal tag showing the replica Bvlgari logo. The python skin gives each bag a classic look like no other handbag on the market, featuring natural markings and lines that resemble those of the python.

Apart from these two popular models, other replicas that can also be found on Rus.tl are the Bvlgari Cocktail, Serpenti Viper and Hypnotic.

designer replica jewelry bvlgari

Our replica Bvlgari jewelry

Our jewelry collection encompasses replica rings, bracelets, necklaces, cufflinks and earrings from the designer brand. Most of our customers prefer buying copy Bvlgari necklaces from us, since they embody the forms and shades of ancient Rome: women who wear our replica necklaces feel as if they are part of the great city, no matter where in the world they live. Our jewelry is multipurpose and blends well with both formal and informal clothing. Currently, one of our top selling models is the copy B.Zero 1 series, which resembles an 18kt rose gold chain and 18kt pendant with a black ceramic top cover.

quality replica designer websites

Advantages of our replicas

Our Bvlgari replicas are much better value than the original versions since they are far less costly, meaning anyone can afford them without breaking the bank. Likewise, they look just the same as the original brand, and nobody can tell the difference when you wear them. Furthermore, they are high quality and will last a very long time without getting worn out or damaged.

The Best Replica Bvlgari Bags Selling Website

About group.as

Group.as is a top replica handbag website and an all-time leading seller of genuine leather luxury replica bags for ladies in the UK as well as the United States. If you're looking for a site that will provide you with genuine leather replica bags bearing the designer's label, group.as is the place to be. This website offers genuine, top-quality luxury leather replica bags from all the famous brands, such as Bvlgari, Celine, Balenciaga, Hermes, Fendi and Saint Laurent, all of which have won a global reputation for good quality and affordable prices. Once you have ordered your handbag, it is shipped to you for free.

best bvlgari replica bags online for sale

Replica Bvlgari Bags

These replicated Bvlgari bags are made with top quality materials to match the genuine ones. They are made from original calfskin or cowhide leather with gold-plated hardware, and they come in different sizes, textures, styles and colors, making them some of the best ladies' replica designer bags. They are liked by many ladies for their high quality, and you can't tell these replica bags apart from the genuine ones. Here are the best and hottest selling replica Bvlgari bags:

  • 2018 Bvlgari Serpenti Enamel Flap cover Replica Bag

This 2018 Bvlgari Serpenti replica bag is among the hot sellers on group.as. It's a high-quality pink handbag made of quality leather.

Its interior is lined with leather, has several flap pockets, and carries the Bvlgari logo on a metal plate.

Its handle is detachable and has black and white enamel that evokes a snake's body.

These bags come in a metallic texture and in different styles and sizes. The bag has an enameled snake-head closure and is loved by ladies.

  • 2018 Women Bvlgari Serpenti Shoulder Bag

This Bvlgari shoulder bag is among the hot selling replica Bvlgari bags. These bags come in different colors, styles and textures, and this is among the favorite replica Bvlgari bags that women are falling in love with.

It's made of top-quality snakeskin leather, which makes it durable. The exterior is made from snakeskin and has a flip-over flap closure.

It's plated with light gold, and at the center of the flip-over flap there is an enamel serpent head. The top of its shoulder strap is shaped like a snake's body.

The interior is lined with leather, has a metal plate with the Bvlgari logo, and also has several flap pockets. Women like these handbags so much because of their quality.

  • Bvlgari Serpenti Gussets Khaki Leather Long Zipped Wallet

These Bvlgari long zipped leather wallets are among the best-selling products on group.as and among the best ladies' replica designer accessories.

Women love them because they are made from high-quality calfskin and cowhide leather.

They have beautiful styles and textures and come in different colors and sizes.

These wallets have the Bvlgari logo at the top of the front side, a serpenti enamel with green eyes around the zipped closure, and gold plating.

They have various credit card slots as well as a zipped coin pocket in the interior.

high quality designer replica handbags

Conclusion

Group.as sells the best replica Bvlgari bags globally. The quality of the products sold at group.as is the same as the genuine ones, and you can't tell the replica and genuine bags apart. This has earned the website a growing global reputation, because the products it sells meet all standards.

November 13, 2018

About Topbiz.md Replica Gucci Bags

Topbiz.md is a reliable online store in the USA that sells replica products, such as Gucci bags, including crossbody bags, tote bags, and shoulder bags. What makes this site popular among fashion enthusiasts is that it provides the most affordable replicas with the exact materials, styles, and almost the same look as the original versions.

gucci soho bags replica cheap price

Replica Gucci Soho Bags

The replica Gucci Soho bags featured on topbiz.md come in a variety of styles and pair perfectly with a hipster look. Within this series you'll find an evening bag, a clutch bag and other elegant designs, presented in dazzling colors and stamped with the interlocking double G logo.

The best bags are professionally made from original leather or canvas of top quality, which basically means they are designed to last a long time, giving you the best value for your money. The best-selling bag in this series is the Gucci Soho Luxury Gold Cowhide Leather Bag.

Featuring a curved base and a double G logo, this gold cowhide leather bag has an oversized flip-over flap with a magnetic button closure. It also features an elegant brass link chain shoulder strap, which is easily adjusted to suit your height, and on one side of the bag there's a tassel trim that adds a unique charm.

gucci bamboo shopper leather replica

Replica Gucci Bamboo Shopper bags

Counterfeit Gucci Bamboo Shopper bags are among the best selling on this platform. The handbags are designed for an easy-carrying experience while flawlessly highlighting luxurious beauty. The main materials are genuine cowhide leather and original canvas, treated in different vibrant colors to match your taste and style.

We'd like to recommend the Good Price Small Bamboo Shopper Double Top Handles Leather Trimming Women's Earth Yellow Shoulder Bag, one of the best-selling in this group thanks to its high-end features. Made from cowhide leather in Italy, this handbag has a detachable leather strap and a slim central strap with a magnetic touch closure.

The rounded top handles have a hook buckle and a narrow top. The bottom is quite wide and has protective studs. On the interior, it has a central zipper, a flat zipped pocket, two cell phone pockets, and an open flat pocket. The bag is ideal for shopping or attending interviews.

best replica gucci websites usa

Topbiz.md Replica Gucci Bags Vs. Original Version

The topbiz.md high quality replica Gucci bags are crafted using state-of-the-art technology under strict requirements, ensuring that the end result is an exact copy of the real version. Since the quality, design, and style are matched, it becomes hard to differentiate between the original pieces and the replicas. The site is not only popular for providing replicas at affordable prices, but also offers free shipping, great discounts, and a 30-day free return policy.

Top Replica Bvlgari Jewelry on Elog.io

Bvlgari is an esteemed brand name that is synonymous with class and sophistication. With their strong, Italian-inspired styles, each piece of Bvlgari jewelry is easy to recognize. Elog.io in the USA proudly stocks a fine selection of replica Bvlgari jewelry at reasonable prices. Whether you're looking for a fantastic replica Bvlgari necklace, ring, earrings or cufflinks, elog.io has you covered.

best replica jewelry sites australia

Anybody in pursuit of fashion can quench their thirst here. Each jewelry piece on elog.io is delicately crafted, shaped and perfectly sized to suit and accent any taste and splendor. The jewelry comes in various AAA top-quality finishes, including silver plating, that make it stylish and comfortable. Their bracelets, for instance, are elegant and feature unique aesthetics that make them hard to distinguish from real ones. They also have exquisite gold-plated and multi-colored jewelry that showcases grace and nobility.

The creative, unique and bold designs revive memories of the ancient Roman and Greek periods. Bvlgari necklaces, bracelets, rings and cufflinks tend to hover just above the surface of the skin, unlike most conventional brands that cling to the skin instead. They come in different colors and styles to suit varying tastes and genders.

Their necklaces come in double-toned gold designs that make them highly coveted. They are made of 18k gold and come in different custom colors to fit any personality. They are highly appealing and won't go unnoticed, and the polishing and finishing on their surfaces goes far beyond the obvious.

They also have stunning cufflinks, created not only for men but for brave and daring women as well. Elog.io replica Bvlgari cufflinks come in a comprehensive range of vibrant colors such as black, red, blue and purple. If you're looking forward to gracing your wrist with a striking cufflink, then pick the Clone Bvlgari Serpenti Blue Leather Decked Cufflink. This is a hot-selling piece featuring a snake in green enamel that makes it strikingly beautiful for any urbane woman. It has a leather band and can match that beautiful dress.

Bvlgari replica rings aren't just ordinary rings. The women's rings sold on elog.io come in striking designs that combine minimalism with ancient decorative elements and artistic touches. One of the most notable designs is the multiple-band design, which makes them both functional and stylish. Knock-off Bvlgari rings have sturdy solitaire settings that make them highly adaptable and able to match any outfit; they also don't catch on clothing and other fabrics easily. They are ideal for those who adore durable yet stylish rings.

925 sterling silver replica bvlgari jewelry

The imitated Bvlgari pieces of jewelry on elog.io are trendy and fashionable, yet very affordable. Every single feature has been crafted to resemble the originals: the design, shape, color, and materials used never fail to amaze. If you're looking for quality and fashion at a reasonable price, visit elog.io to explore a stunning selection of Bvlgari jewelry.

AAA Quality Replica Bvlgari Watches on Mca.mn

Bvlgari isn't just a great name in the jewelry world. They also make stunning watches that are popular for their precision and fashion. Among the most splendid watches, Bvlgari never fades out of style; the scary part is that these watches come with a price. Mca.mn breaks those bounds to give you a taste of these luxury watches for less, stocking a delightful selection of replica Bvlgari watches for every gender, outfit, and taste.

replica bvlgari watch bands sale

Their replica watches come in varying designs and styles. With an exquisite merging of Swiss style and precision, these watches feature contemporary Italian designs. They are made of quality, highly precious materials such as carbon gold, titanium, bronze, steel or platinum, so you are guaranteed to find the replica Bvlgari watch material that suits your taste. The materials they use are durable, classy and highly functional.

They also have both manual and automatic movements. Automatic movements guarantee accuracy: your watch never veers from the correct time and doesn’t need any intervention. Although the manual types sound ancient, they feature beautiful displays. With their transparent case backs, manual replica Bvlgari watches can accent any persona.

Mca.mn also sells watches with different functions, including hour-minute, mono-retro, minute-repeater, Grande Sonnerie, small second, tourbillon and many more. You can select a comfortable bracelet material that fits your style. The bracelet materials available include rubber, stainless steel, alligator leather, rose gold and more.

Bvlgari B.zero1 101423 BZ30WHDSGL is one hot-selling women’s replica watch on mca.mn. The watch features sapphire crystal glass that guarantees resistance and toughness. Being the second hardest material after diamond, sapphire remains both durable and aesthetically appealing. The watch also features a round case shape and a Swiss steel case material to make it both functional and attractive. The quartz (kinetic) movement guarantees improved accuracy. Other trendy features you’ll love in this watch are the silver band color and the legendary Bvlgari logo in the middle. They make the watch appear new and classy every day.

Bvlgari 2017 Spring Summer Collection BV096 is equipped with triangle hour markers that make it inimitable. This watch has no flaws and perfectly resembles its original counterpart. The black dial and willow-shaped steel hour and minute hands are other stunning features of this watch.

The other best-selling Bvlgari watch on mca.mn is the B.zero1 BZ30BHSL BV007. It features a white dial that makes it appear larger. The round case shape makes it cool, versatile and wearable. There is also a dull-finished stainless steel bezel that enhances the appearance and functionality of this watch, making it ideal for everyday use.

best replica sites for watches

You don’t have to pay more to have a taste of quality. Mca.mn sells quality replica watches online at reasonable prices. Their watches are faithful to the genuine ones and come at discounted prices that you cannot find elsewhere. Visit their site to explore a vast range of replica Bvlgari watches for both men and women.

Best Replica Ladies Audemars Piguet Watches Recommendations on Sciu.Com.Au

If you are fond of stylish watches within your budget then you must know about sciu.com.au. Sciu.com.au is a popular online store that offers a wide variety of well-reviewed top Swiss replica watches from a number of popular brands. In fact, they allow you to enjoy the lavishness of original products which may not otherwise be affordable for you.

audemars piguet replica watch lady

Sciu.com.au in Australia can provide you with a variety of counterfeit ladies’ Audemars Piguet watches which are known for their texture, shape and design along with their technical specialties. Some of these replica watches are briefly discussed below for your consideration.

Royal Oak Offshore Replica Sapphire Lady: It is one of the delicate and cheap Audemars Piguet ceramic watches especially designed for ladies from Singapore. This polygon-pattern watch includes a Japanese quartz movement, a date function and hard-wearing sapphire crystal. The watch, with a black case and face, also includes a case and band made from high-tech ceramic.

AP Royal Oak Offshore Lady Alinghi: This fine and cheap replica watch from Audemars Piguet is a replica of a limited edition watch designed for ladies in Singapore. This AAA+ grade ladies’ watch includes a Japanese quartz calibre with date and chronograph functions, along with hard-wearing sapphire crystal to make it more durable. The case of this polygon-pattern watch is made from Swiss 316L steel and the band is made from rubber. Its face is brown and its case is silver.

trusted replica watches websites australia

Comparing the reviews of some of the ladies’ Audemars Piguet knock-off watches with those of their original models shows that sciu.com.au allows everyone to enjoy the style and luxury of watches that would otherwise be unaffordable. They take care of every aspect while creating replica watches so that they not only look like their originals but also work like them.

November 11, 2018

LUV November 2018 Workshop: Introduction to shell programming

Nov 17 2018 12:30
Nov 17 2018 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Introduction to shell programming

A lot of things in Linux are done on the command line. This talk starts with a very basic knowledge of the command line and describes how to leverage this into a powerful programming system. Transform those snippets of commands into powerful, self-documenting scripts to supercharge your productivity.
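
As a taste of the kind of thing the workshop covers, here is a minimal sketch of that idea: turning a one-off command into a small, self-documenting script. The file name, log path and "ERROR" pattern are made-up examples, not part of the workshop material.

#!/bin/bash
# count-errors.sh - report how many ERROR lines a log file contains.
# Usage: ./count-errors.sh /var/log/myapp.log
set -euo pipefail

logfile=${1:?usage: count-errors.sh <logfile>}

# grep -c prints the number of matching lines; "|| true" keeps the script
# going (with a count of 0) when there are no matches at all.
errors=$(grep -c 'ERROR' "$logfile" || true)
echo "Found ${errors} ERROR lines in ${logfile}"

Once a snippet lives in a script like this, it documents itself, takes arguments, and can be reused and shared.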

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


November 06, 2018

DevOpsDaysNZ 2018 – Day 2 – Session 4

Allen Geer, Amanda Baker – Continuously Testing govt.nz

  • Various .govt.nz sites
  • All Silverstripe and Common Web Platform
  • Many sites out of date, no automated testing, no test metrics, manual testing
  • Micro-waterfall agile
  • Specification by example (prod owner, Devops, QA)  created Gherkin tests
  • Standardised on CircleCI
  • Visualised – Spec by example
  • Prioritised feature tests
  • Ghirkinse
  • Test at start of dev process. Bake Quality in at the start
  • Visualise and display metrics, people could then improve.
  • Path to automation isn’t binary
  • Involve everyone in the team
  • Automation only works if humanised

Jules Clements – Configuration Pipeline : Ruling the One Ring

  • Desired state
  • I didn’t quite understand what he was saying

Nigel Charman – Keep Calm and Carry On Organising

  • 71 Conferences worldwide this year
  • NZ following the rules
  • Lots of help from people
  • Stuff stuff stuff

Jessica DeVita – Retrospecting our Retrospectives

  • Works on Azure DevOps
  • Post-mortems
  • What does it mean to have robust systems and resilience? Is resilience even a property? It just Is. When we fly on planes, we’re trusting machines and automation. Even planes require regular reboots to avoid catastrophic failures, and we just trust that it happens
  • CEO after a million dollar outage said “Can you get me a million dollars of learning out of this?”
  • After the US Navy had accidents caused by sleep deprivation it switched to a new watch structure
  • Postmortems are not magic, they don’t automatically make things change
  • http://stella.report
  • We dedicate a lot of time to below-the-line work, looking at the technology. Not a lot of conversation about above-the-line things like mental models.
  • Resilience is above the line
  • Catching the Apache SNAFU
  • The Ironies of Automation – Lisanne Bainbridge
  • Well facilitated debriefings support recalibration of mental models
  • US Forest Service – Learning Review – Blame discourages people speaking up about problems
  • We never know where the accident boundary is, only when we have crossed it.
    • SRE, Chaos Engineering and Human Factors help handle this
  • In postmortems please be mindful of judging timelines without context. Saying something happened in a short or long period of time is damaging
  • Ask “what made it hard to get that team on the phone?” , “What were you trying to achieve”
  • Etsy Debriefing Guide – lots of important questions.
  • “Moving past shallow incident data” – Adaptive Capacity Labs
  • Safety is a characteristic of systems and not of their components
  • Ask people about their history, ask every person about what they do and how they got there because that is what shapes your culture as an organisation


DevOpsDaysNZ 2018 – Day 2 – Session 3

Kubernetes

I’ll fill this in later.

Observability

  • Honeycomb, Sumologic. Use AI to look at what happened at same time and magically correlate
  • Expensive or hard to send all logs as volumes go up
  • What if the logging is wrong or missing?
  • Metrics
    • Export in Prometheus format (a minimal sketch follows these notes)
    • Read the RED and USE papers
    • Create a company schema with half a dozen metrics that all services expose
  • Have an event or transaction ID that flows across all the microservices so logs can be correlated
  • Non technical solutions
    • Refer to previous incident logs
    • Part of deliverables for product is SLA stats which require logs etc
  • Testing logs
    • Make sure certain events produce a log
  • Chaos Monkey
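
These are session notes rather than a worked example, but as a rough sketch of the "export in Prometheus format" point: each service exposes a plain-text /metrics endpoint that a scraper (or a person) can read. The service name, port and metric names below are placeholders, not anything mentioned in the session.

# Fetch the metrics a (hypothetical) service exposes and pick out one counter.
curl -s http://my-service:9102/metrics | grep '^http_requests_total'

# The Prometheus text format is just "name{labels} value" lines, e.g.:
# http_requests_total{method="GET",code="200"} 1027
# http_requests_total{method="GET",code="500"} 3

A shared company schema then just means every service exposes the same half-dozen metric names, so dashboards and alerts can be reused across teams.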

ANZ Drivetrain

  • Change control cares about
    • Availability
    • Risk
    • Dependencies
    • Rollback
  • But the team doing the change knows about all of these
  • Saw tools out there that seem very opinionated
  • Drivetrain
    • Automated Checklist
    • Work with Change people to create checklist
    • Pipeline talks to drivetrain and tells it what has been done
    • Slack messages sent for manual changes (they login to app to approve)
  • Looked at some other tools (eg chef automate, udeploy )
    • Forced team to work in a certain pattern
  • But use ServiceNow tool as official corporate standard
    • Looking at making Drivetrain fill in ServiceNow forms
  • People worried about stages in tool often didn’t realise the existing process had same limitations
  • Risk assessed at the Story and Feature level. Not release level
  • Not suitable for products that do huge releases every few months with a massive number of changes.

 

 


DevOpsDaysNZ 2018 – Day 2 – Session 2

Interesting article I read today

Why Doctors Hate their Computers by Atul Gawande

Mrinal Mukherjee – A DevOps Confessional

  • Not about accidents, it is about Planned Blunders that people are doing in DevOps
  • One Track DevOps
    • From Infrastructure background
    • Job going into places, automated the low hanging fruit, easy wins
    • Column of tools on resume
    • Started becoming the bottleneck, his team was the only one who knew how the infrastructure worked.
    • Not able to “DevOps” a company since only able to fix the infrastructure, not able to fix testing etc, so not delivering everything that the company expected
    • If you are the only person who understands the infrastructure you are the only one blamed when it goes wrong
    • Fixes
      • Need to take all team on a journey
      • But need to have right expectations set
      • Need to do learning in areas where you have gaps
      • DevOps is not about individual glory, Devops is about delivering value
      • HR needs to make sure they don’t reward the wrong thing
  • MVP-Driven Devops
    • Mostly working on Greenfields products that need to be delivered quickly
    • MVP = Maximum Technical Debt
    • MVP = Delays later and Security audits = Your name attached to the problem
    • Minimum Standard of Engineering
      • Test cases, Documentation, Modular
      • Peer review
    • Evolve architecture, not re-architect
  • Judgemental Devops
    • That team sucks, they are holding things up, playing a different game from us
    • Laughing at other teams
    • Consequence – Stubbornness from the other team
    • Empathy
      • Find out why things are they way they are
    • Collaborate to find common ground and improve
    • Design my system so that I work within the constraints of the other team


November 05, 2018

DevOpsDaysNZ 2018 – Day 2 – Session 1

Alison Polton-Simon – The DevOps Experiments: Reflections From a Scaling Startup

  • Software engineer at Angaza, previously Thoughtworks, “Organizational Anthropologist”
  • Angaza
    • Enable sales of life-changing products (eg solar chargers, water pumps, cook stoves in 3rd world countries)
    • Originally did hardware, changed to software company doing pay-as-you-go of existing devices
    • ~50 people. Kenya and SF, 50% engineering
    • No dedicated Ops
    • Innovate with empathy, Maximise impact
    • Model is to provide software tools to activate/deactivate products sold to people with low credit-scores, plus our software around the activity, like reports for distributors.
  • Reliability
    • Platform is business critical
    • Outages disrupt real people (households without light, farmers without irrigation)
    • Buildkite, Grafana, Zendesk
  • Constraints
    • Operate in 30+ countries
    • Low connectivity, 2G networks best case
    • Rural and peri-urban areas
    • Team growing by 50% in 2018 (2 eng teams in Kenya + 1 QA)
    • Most customers in timezone where day = SF night
  • Eras of experimentation
    • Ad Hoc
    • Tributes (sacrifice yourself for the sake of the team)
    • Collectives (multiple teams)
    • Product teams
  • ad Hoc – 5 eng
    • 1 eng team
    • Ops by day – you broke, you fix
    • Ops by night – Pagerduty rotation
    • Paged on all backend exception, 3 pages = amnesty
    • Good
      • Small but senior
      • JIT maturity
      • Everyone sitting next to each other
    • Bad
      • Each incident highly disruptive
      • prioritized necessity over sustainability
  • Tribute – 5-12 eng
    • One person protecting team from interruptions
    • Introduced support person and triage
    • Expanded PD rotation
    • Good
      • More sustainable
      • Blue-Green deploys
      • Clustered workloads
    • Not
      • Headcount != horizontal scaling
      • Hard to hire
      • Customer service declined
  • Collectives 13-20 engs
    • Support and Ops teams – Ops staffed with devs
    • Other teams build roadmaps and requests
    • Teams rotate quarterly – helps onboarding
    • Good
      • Cross train ppl
      • Allow for focus
      • allowed ppl to get depth
    • Bad
      • Teams don’t op what they built
      • Quarter flies by quickly
      • Context switch is costly
      • Still a juggling act
      • 1m ramping up, 6w going okay, 2w winding down
  • Product teams  21 -? eng
    • 5 teams, 2 in Nairobi
    • Teams aligned with business verticals, KPIs
    • Dev, own and maintain services
    • Per-team tributes
    • No [Dev]Ops team
    • Intended goals
      • Independent teams
      • own what they build
      • Support biz KPIs
      • cross team coordination
    • Expected Challenges
      • Ownership without responsibility
      • Global knowledge sharing
      • Return to tribute system (2w out of the workflow)
  • Next
    • Keep growing team
    • Working groups
    • Eventual SRE
    • 24h global coverage
  • Has a “Constitution” of values that everybody who is hired signs
  • Takeaways
    • Maximise impact
      • Dependable tools over fashionable ones
      • Prefer industry-std tech
      • But get creative when necessary
    • Define what reliability means for your system
    • Evolve with Empathy
      • Don’t be dogmatic without structure
      • Serve your customers and your team
      • Adapt when necessary
      • Talk to people
      • Be explicit as to the tradeoffs you are making

Anthony Borton – Four lessons learnt from Microsoft’s DevOps Transformation

  • Microsoft starting in 1975
  • 93k odd engineers at Microsoft
    • 78k deployments per day
    • 2m commits per month
    • 4.4 builds/month
    • 500 million tests/day
  • 2018 State of Devops reports looks at Elite performers in the space
  • TFS – Team Foundation Server
    • Move product to the cloud
    • Moved on-prem to one instance
    • Each account had its own DB (broke stuff at 11k DBs)
  • 4 lessons
    • Customer focussed
      • Listen to customers, uservoice.com
      • Lots of team members keep eye on it
      • Stackoverflow
      • Embed with customers
      • Feedback inside product
      • Have to listen in all the channels
    • Failure is an opportunity to learn
    • Definition of done
      • Live in prod, collecting telemetry that examines hypotheses that it was created to prove
    • “For those of you who don’t know who Encarta is, look it up on Wikipedia”
  • Team Structure
    • Combined engineering = devs + testing
      • Some testers left team or organisation
    • Feature team
      • Physical team rooms
      • Cross discipline
      • 10-12 people
      • self managing
      • Clear charter and goals
      • Intact for 12-18 months
    • Sticky note exercise, people pick which teams they would like to join (first 3 choices)
      • 20% choose to change
      • 100% get the choice
  • New constants and requirements
    • Problems
      • Tests took too long – 22h to 2days
      • Tests failed frequently – only 60% of runs passed 100%
      • Quality signal unreliable in master
    • Publish VSTS Quality vision
      • Sorted by external dependencies
      • Unit tests
        • L0 – in-memory unit tests
        • L1 – More with SQL
      • Functional Tests
        • L2 – Functional tests against testable service deployment
        • L3 – Restricted class integration tests that run against production
      • 83k L0 tests run against all pulls very fast
  • Deploy to rings of users
    • Ring 0 – Internal Only
    • Ring 1 – Small Datacentre 1-1.5m accounts in Brazil (same TZ as US)
    • Ring 2 – Public accounts, medium-large data centre
    • Ring 3 – Large internal accounts
    • Ring 4-5 – everyone else
    • Takes about a week for normal releases.
    • Binaries go out and then the database changes
    • Delays of minutes (up to 75) during the deploys to allow bugs to manifest
    • Some customers have access to feature flags
    • Customers who are risk tolerant can opt in to early deploys. Allows them to get faster feedback from people who are able to provide it
  • More features delivered in 2016 than previous 4 years. 50% more in 2017

 


DevOpsDaysNZ 2018 – Day 1 – Session 4

Everett Toews – A Trip Down CI/CD Lane

I missed most of this talk. Sounded Good.

Jeff Smith – Creating Shared Contexts

  • Ideas and viewpoints differ from person to person
  • Happens in organisation, need to make sure everybody is on the same page
  • Build a shared context via conversations
  • Info exchange
  • Communications tools
  • Context Tools
  • X/Y Problem
  • Data can bridge conversations. Shared reality.
  • Use the same tools as other teams so you know what they are doing
  • Give context with your requests; ask others for it too, and eventually it will happen automatically.

Peter Sellars – 2018: A Build Engineers Odyssey

  • Hungry, Humble and Smart

Katrina Clokie – Testing in DevOps for Engineers

  • We can already write, so how hard can it be to write a novel?
  • Hopefully some of you are doing testing already
  • Problem is that people overestimate their testing skills, not interested in finding out anything else.
  • The testing you are doing now might be with one tool, in one spot. You are probably finding stuff but missing other things
  • Why important
    • Testing is part of you role
    • In Devops testing goes though Operations as well
    • Testing is DevOps is like air, it is all around you in every role
    • The role of testers is to teach people to breathe continuously and naturally.
  • Change the questions that you ask
    • How do you know that something is okay? What questions are you asking of your product?
    • Oracles are the ways that we recognise a problem
    • Devs ask: “Does it work how we think it should?”
    • Ops ask: “Does it work how it usually works?”
    • Devs on claims, Ops on history
    • Does it work like our competitors’, does it meet its purpose without harmful side effects, does it meet legal requirements, does it work like similar services?
    • HICCUPPS – Testing without a Map – Michael Bolton, 2005
    • How do we compare to what other people are doing? (Not just a BA’s job, because the customer will be asking that question and so should you)
    • Flip the Oracle, compare them against other things not just the usual.
    • Audit – Continuous compliance, Always think about if it works like the standards say it should.
    • These are things that the business is asking. If you ask then gain confidence of business
  • Look for Answers in Other Places
    • Number of tests: UI <  Service < Unit
    • The Test Pyramid as a bug catcher. Catch the Simple bugs first and then the subtle ones
  • Testing mesh
    • Unit tests – fine mesh
    • Integration – Bigger/fewer tests but cover more
    • Next few layers: End to End, Alerting, Monitoring, Logging. Each stops different types of bugs
    • Conversation should be “Where do we put our mesh?”, “How far can this bug fall?”.
    • If another layer will pick the bug up, do we need a test?
  • Use Monitoring as testing
    • Push risk really late; not in all cases but it can often work
  • A/B testing
    • Ops needs to monitor
    • Dev needs framework to roll out and put in different options
  • Chaos Engineering
    • Start with something small, communicate well and do during daylight hours.
    • Your customers are testing in production all the time, so why aren’t you too?
  • https://leanpub.com/testingindevops

 


DevOpsDaysNZ 2018 – Day 1 – Session 3

Open Space 1 – Prod Support, who’s responsible

  • Problem that Ops doesn’t know products, devs can’t fix, product support owners not technical enough
  • Xero have embedded Ops and dev in teams. Each person oncall maybe 2 weeks in 20
  • Customer support team does everything?
  • “Ops have big graphs on screens, BI have a couple of BI stats on screens, Devs have …youtube videos”
  • Tiers support vs Product team vs Product support team
  • Tiered support
    • Single point of entry
    • lower paid person can handle easy stuff
    • Context across multiple apps
  • Product Team
    • Buck stops with someone
    • More likely to be able to action
    • Ownership of issues
    • Everyone must be enabled to do stuff
    • Everyone needs to be upskilled
  • Prod Support
    • Big skilled can fix anything team
    • Devs not keen
    • Even the best teams don’t know everything

Open Space 2 – DevOps at NZ Scale

  • Devops team, 3rd silo
    • Sometimes they are the new team doing cool stuff
    • One model is evangelism team
  • Do you want devops culture or do you just want somebody to look after your pipeline?
  • Companies often don’t know what they want to hire
  • Companies get some benefit with the tools (pipelines, agile)  but not the culture. But to get the whole benefit they need to adopt everything.
  • The Way of Ways article by John Cutler

Open Space 3 – Responding Quickly

I was taking notes on the board.


DevOpsDaysNZ 2018 – Day 1 – Session 2

Mark Simpson, Carlie Osborne – Transforming the Bank: pipelines not paperwork

  • Change really is possible even in the least likely places
  • Big and risk averse
    • Lots of paperwork and process, very slow
  • Needed to change – In the beginning – 18 months ago
    • 6 months talking about how we could change things
  • Looked for a pilot project – Internet Banking team – ~80 people – Go-money platform
    • Big monolith, 1m lines of code
    • New release every 6 weeks
    • 10 weeks for feature from start to production
    • Release at midnight on a Friday night, 4-5 hours outage, 20-25 people.
    • Customer complaints about outage at midnight, move to 2am Sunday morning
  • Change to release every single week
    • Has to be middle of the day, no outage
    • How do we do this?
  • Took whole Internet banking team for 12 weeks to create process, did nothing else.
  • What we didn’t do
    • Didn’t replatform, no time
  • What we did
    • Jenkins – Created a single pipeline, from commit to master all the way to production
    • Got rid of selenium tests
    • Switched to cypress.io
      • Just tested 5 key customer journeys
    • Drivetrain – Internal App
      • Wanted to empower the teams, but lots of limits within industry/regulations
      • Centralise decision making
      • Lightweight Rules engine, checks that all the requirements have been done by the team before going to the next stage.
    • Canary Deployments
      • Two versions running, ability to switch users to one or other
  • Learning to Break things down into small chunks
  • Change Process
    • Lots of random rules, eg mandatory standdown times
    • New change process for teams using Drivetrain; certified the process, not each release
  • Lots of times spent talking to people
    • Had to get lots of signoffs
  • Result
    • Successful
    • 16 weeks rather than 12
    • 28 releases in less than 6 months (vs approx 4 previously)
    • 95% less toil for each release
  • Small not Big changes
    • Now takes just 4-5 weeks to cycle though a feature
    • Don’t like saying MVP. Pitch it as quickly delivering a bit of value
    • and iterating
    • 2 week pilot, no iterations -> 8 week pilot, 4 iterations
    • Solution at start -> Solution derived over time
  • Sooner, not later
    • Previously
      • Risk, operations people not engaged until too late
      • Dev team disengaged from getting things into production
    • Now
      • Everybody engaged earlier
  • Other teams adopting similar approach

Ryan McCarvill – Fighting fires with DevOps

  • Lots of information coming into a firetruck, displayed on dashboard
  • Old system was 8-digit codes
  • Rugged server in each truck
    • UPS
    • Raspberry Pi
    • Storage
    • Lots of different networking
  • Requirements
    • Redundant Comms
    • Realtime
    • Offline Maps
    • Offline documentation, site reports, photos, building info
    • Offline Hazards
    • Allow firefighters to update
    • Track appliance and firefighter status
    • Be a hub for an incident
    • Needs to be very secure
  • Stack on the Truck
    • Ansible, git, docker, .NET Core, redis, 20 microservices
  • What happens if update fails?
  • More than 1000 trucks, might be offline for months at a time
  • How to keep secure
  • AND iterate quickly
  • Pipeline
    • Online update when truck is at home
    • Don’t update if moving
    • Blue/Green updates
    • Health probes
  • Visual Studio Team Services -> Azure cont registry
  • Playbooks on git, ansible-pull (a small sketch follows these notes)
  • Nginx in front of blue/green
  • Built – there were problems
    • Some overheating
    • Server in truck taken out of scope, lost offline strategy
    • No money or options to buy new solution
  • MVP requirements
    • Lots of gigs of data, made some of it online-only
    • But many gigs still needed online
    • Create virtual firetruck in the sky, worked for online
    • Still had communication device – 1 core, minimum storage, locked down Linux
  • Put a USB stick in the back device and updated it
    • Can’t use a lot of resources or it will impact comms
    • Hazard search
      • Java/Python app, too much impact on the system
      • Re-wrote in rust, low impact and worked
      • Changed push to rsync and bash
  • Lessons
    • Automation got us flexibility to change
    • Automation gave us flexibility to grow
    • Creativity can solve any problem
    • You can solve new problems with old technology
    • Sometimes the only way to get buy in is to just do it.
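
The "ansible-pull" point above is worth a concrete illustration. This is only a sketch of the general pattern, with a placeholder repository URL, branch and playbook name rather than the actual setup described in the talk: each truck periodically pulls its playbooks from git and applies them to itself, so nothing central ever has to reach the truck.

# Run on the truck (e.g. from cron or a systemd timer) whenever it has connectivity.
ansible-pull \
  --url https://git.example.org/firetruck-playbooks.git \
  --checkout master \
  --directory /var/lib/firetruck-ansible \
  local.yml

Because the pull is initiated from the vehicle, trucks that are offline for months simply catch up the next time they are back at base.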


November 04, 2018

DevOpsDaysNZ 2018 – Day 1 – Session 1

Jeff Smith – Moving from Ops to DevOps: Centro’s Journey to the Promiseland

  • Everyone’s transformation will look a little different
  • Tools are important but not the main problem (dev vs Ops)
  • Hiring a DevOps Manager
    • Just sounds like a normal manager
    • Change it to “Director of Production Operations”
  • A “DevOps Manager” is the 3rd silo on top of Dev and Ops
  • What did people say was wrong when he got there?
    • Paternalistic Ops view
      • Devs had no rights on instances
      • Devs no prod access
      • Devs could not create alerts
  • Fix to reduce Ops load
    • Devs get root to instances, but access to easily destroy and recreate if they broke it
    • Devs get access to common safe tasks, required automation and tools (which Ops also adopted)
    • Migrated to datadog – Single tool for all monitoring that anyone could access/update.
    • Shared info about infrastructure. Docs, lunch and learns. Pairing.
  • Expanding the scope of Ops
    • Included in the training and dev environment, CICD. Customers are internal and external
    • Used same code to build every environment
    • Offering Operation expertise
      • Don’t assume the people who write the software know the best way to run it
  • Behaviour can impact performance
    • See book “Turn the Ship around”
    • Participate in Developer rituals – Standups, Retros
    • Start with “Yes.. But” instead of “No” for requests. Assume you can but make it safe
    • Ask “Can you give me some context?” Don’t just do the request; get the full picture.
  • Metrics to Track
    • Planned vs unplanned work
    • What are you doing lots of times.
  • What we talk about?
    • Don’t allow your ops dept to be a nanny
    • Remove nanny state but maintain operation safety
    • Monitor how your language impacts behaviour
    • Monitor and track the type of work you are doing

François Conil – Monitoring that cares (the end of user based monitoring)

  • User Based monitoring (when people who are affected let you know it is down)
  • Why are we not getting alerts?
    • We are not measuring the right thing
    • Just ignore the dashboard (always orange or red)
    • Just don’t understand the system
  • First Steps
    • Accept that things are not fine
    • Decide what you need to be measuring, who needs to know, etc. First principles
    • A little help goes a long way ( need a team with complementary strengths)
  • Actionable Alerts
    • Something Broken, User affected, I am the best person to fix, I need to fix immediately
    • Unless all 4 apply then nobody should be woken up.
      • Measured: Take to QA or performance engineers to find out the baseline
      • User affected: If nobody is affected do we care? Do people even work nights? How do you gather feedback?
      • Best person to fix: Should ops guys who doesn’t understand it be the first person to page?
      • Does it need to be fixed? – Backup environment, too much detail in the alerts, don’t alert on everything that is broken, just the one causing the problem
  • Fix the cause of the alerts that are happening the most often
  • You need time to get things done
    • Talk to people
    • Find time for fixes
  • You need money to get things done
    • How much is the current situation costing the company?
    • Tech-Debt Friday


November 02, 2018

LUV November 2018 Main Meeting: Computerbank / Designing Open Democracy

Nov 7 2018 18:30
Nov 7 2018 20:30
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE CHANGE OF DAY DUE TO MELBOURNE CUP HOLIDAY

6:30 PM to 8:30 PM Wednesday, November 7, 2018
Training Room, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

Many of us like to go for dinner nearby after the meeting, typically at Brunetti's or Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.


November 01, 2018

Lean data in practice

Mozilla has been promoting the idea of lean data for a while. It's about recognizing both that data is valuable and that it is a dangerous thing to hold on to. Following these lean data principles forces you to clarify the questions you want to answer and think hard about the minimal set of information you need to answer these questions.

Out of these general principles came the Firefox data collection guidelines. These are the guidelines that every team must follow when they want to collect data about our users and that are enforced through the data stewardship program.

As one of the data stewards for Firefox, I have reviewed hundreds of data collection requests and can attest to the fact that Mozilla does follow the lean data principles it promotes. Mozillians are already aware of the problems with collecting large amounts of data, but the Firefox data review process provides an additional opportunity for an outsider to question the necessity of each piece of data. In my experience, this system is quite effective at reducing the data footprint of Firefox.

What does lean data look like in practice? Here are a few examples of changes that were made to restrict the data collected by Firefox to what is truly needed:

  • Collecting a user's country is not particularly identifying in the case of large countries like the USA, but it can be when it comes to very small island nations. How many Firefox users are there in Niue? Hard to know, but it's definitely less than the number of Firefox users in Germany. After I raised that issue, the team decided to put all of the small countries into a single "other" bucket.

  • Similarly, cities generally have enough users to be non-identifying. However, some municipalities are quite small and can lead to the same problems. There are lots of Firefox users in Portland, Oregon for example, but probably not that many in Portland, Arkansas or Portland, Pennsylvania. If you want to tell the Oregonian Portlanders apart, it might be sufficient to bucket Portland users into "Oregon" and "not Oregon", instead of recording both the city and the state.

  • When collecting window sizes and other pixel-based measurements, it's easier to collect the exact value. However, that exact value could be stable for a while and create a temporary fingerprint for a user. In most cases, teams wanting to collect this kind of data have agreed to round the value (as sketched after these examples) in order to increase the number of users in each "bucket" without affecting their ability to answer their underlying questions.

  • Firefox occasionally runs studies which involve collecting specific URLs that users have consented to share with us (e.g. "this site crashes my Firefox"). In most cases though, the full URL is not needed and so I have often been able to get teams to restrict the collection to the hostname, or to at least remove the query string, which could include username and passwords on badly-designed websites.

  • When making use of Google Analytics, it may not be necessary to collect everything it supports by default. For example, my suggestion to trim the referrers was implemented by one of the teams using Google Analytics since while it would have been an interesting data point, it wasn't necessary to answer the questions they had in mind.
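
To make the window-size rounding above concrete, here is a minimal sketch of the idea. The 50-pixel bucket size is purely illustrative, not a value any Firefox team actually uses.

# Report the bucket a measurement falls into rather than the exact value.
width=1366
bucket=$(( width / 50 * 50 ))   # 1366 becomes 1350
echo "reported width bucket: ${bucket}"

The same pattern applies to the country and city examples: report the coarsest category that still answers the question.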

Some of these might sound like small wins, but to me they are a sign that the process is working. In most cases, requests are very easy to approve because developers have already done the hard work of data minimization. In a few cases, by asking questions and getting familiar with the problem, the data steward can point out opportunities for further reductions in data collection that the team may have missed.

October 29, 2018

Installing Vidyo on Ubuntu 18.04

Following these instructions as well as the comments in there, I was able to get Vidyo, the proprietary videoconferencing system that Mozilla uses internally, to work on Ubuntu 18.04 (Bionic Beaver). The same instructions should work on recent versions of Debian too.

Installing dependencies

First of all, install all of the package dependencies:

sudo apt install libqt4-designer libqt4-opengl libqt4-svg libqtgui4 libqtwebkit4 sni-qt overlay-scrollbar-gtk2 libcanberra-gtk-module

Then, ensure you have a system tray application running. This should be the case for most desktop environments.

Building a custom Vidyo package

Download version 3.6.3 from the CERN Vidyo Portal but don't expect to be able to install it right away.

You need to first hack the package in order to remove obsolete dependencies.
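
The linked instructions cover the details; roughly, the approach is to unpack the .deb, edit its control file to drop the dependencies that no longer exist in 18.04, and rebuild it under a new name. A sketch of that follows; the installer file name and the exact dependencies to remove are assumptions that will depend on the version you downloaded.

# Unpack the package contents and its control information.
dpkg-deb -x VidyoDesktopInstaller-*.deb vidyo/
dpkg-deb -e VidyoDesktopInstaller-*.deb vidyo/DEBIAN

# Edit vidyo/DEBIAN/control and remove the obsolete packages from the
# "Depends:" line, then rebuild the package under a new name.
dpkg-deb -b vidyo/ vidyodesktop-custom.deb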

Once that's done, install the resulting package:

sudo dpkg -i vidyodesktop-custom.deb

Packaging fixes and configuration

There are a few more things to fix before it's ready to be used.

First, fix the ownership on the main executable:

sudo chown root:root /usr/bin/VidyoDesktop

Then disable autostart, since you probably don't want to keep the client running all of the time (and listening on the network) given it hasn't received any updates in a long time and has apparently been abandoned by Vidyo:

sudo rm /etc/xdg/autostart/VidyoDesktop.desktop

Remove any old configs in your home directory that could interfere with this version:

rm -rf ~/.vidyo ~/.config/Vidyo

Finally, launch VidyoDesktop and go into the settings to check "Always use VidyoProxy".

October 28, 2018

Audiobooks – October 2018

How to Rig an Election by Nic Cheeseman & Brian Klaas

The authors take experiences in various countries (mostly recent 3rd-world examples) as to how elections are rigged. Some advice for reducing it. 8/10

The Hound of the Baskervilles by Sir Arthur Conan Doyle. Read by Stephen Fry

Once again great reading by Fry and a great story. Works very well with all Holmes and Watson action and no giant backstory. 8/10

Our Oriental Heritage: Story of Civilization Series, Book 1 by Will Durant

Covers the early history of Egypt, the Middle East, India, China and Japan, in some cases up to the 20th century. The book covers arts, religion and philosophy as well as kings and dates. This was written in the 1930s, so it has some material that has been superseded and some out-of-date attitudes to race and religion in places. It is long (50 hours), with another 11 volumes still to go, but it is pretty good if you don’t mind these problems. 7/10

Chasing New Horizons: Inside the Epic First Mission to Pluto by Alan Stern & David Grinspoon

Stern was one of the originators and the principal investigator of the mission, so there are lots of firsthand details about all stages of the project, from first conception through various versions.


October 27, 2018

Six years and 9 months...

The Drupal Association

Six years and 9 months... is a relatively long time. Not as long as some things, longer than others. Relative. As is everything.

But Six years and 9 months is the length of time I've been on the board of the Drupal Association.

I was elected to serve on the board by the community in February 2012, and then nominated to serve for another two terms. That second term expires on 31 October. My original candidate statement makes somewhat nostalgic reading now... and it's now that I wonder, what I achieved. If anything?

But that's the wrong question. There's nothing useful to be gained in trying to answer it.

Instead - I want to reflect on what I learned.

I learned something from everyone at that table. Honestly, I never really lost my sense of imposter syndrome, and I'm freely and gleefully willing to admit that.

Cary Gordon - we shared a passion for DrupalCon. That show grew into the incredible event it is because of seeds you sowed. And your experience running big shows, and supporting small community libraries seemed to be the perfect mix for fueling what Drupal needed.

Steve Purkiss - we were elected together! Your passion for cooperatives, for Drupal, and for getting on with it, and making things happen was infectious! Thank you for standing with me in those weird first few months of being in this weird new place, called the board of the Drupal Association!

Pedro Cambra - I wish I'd heed the lesson you taught me more often. Listen carefully. Speak only when there's something important to say, or to make the case for a perspective that's being missed. But also good humour. And Thank you for helping make the election process better, and helping the DA "own" the mechanics.

Morten - brother. I can't even find the words to say. Your passion for Drupal, for theming, and for our community always inspired me. I miss your energy.

Angie "webchick" Byron - mate! I still can't fathom how you did what you do so effortlessly! Well, I know it's not effortless, but you make it look that way. Your ability to cut through noise, sort things out, get things done, and inspire the Drupal masses to greatness is breathtaking.

Matthew Saunders - you made me appreciate the importance of governance from a different perspective. Thank you for the work you did to strengthen our board processes.

Addison Berry - Sorry Addi - this is a bit shameful, but it was the mezcal, tequila and bourbon lessons that really stuck.

Danese Cooper - I was so grateful for your deep wisdom of Open Source, and the twists and turns of the path it's followed over such a long time. Your eye to pragmatism over zealotry, but steadfast in the important principles.

Shyamala Rajaram - Oh Shyamala! I can't believe we only first met at DrupalCon Mumbai, or perhaps it was only the first time, this time! Thank you for teaching us all how important it is for us to be in India, and embrace our global community.

Ryan Szrama - you stepped onto the board at such a tough moment, but you stepped up into the role of community elected Director, and helped make sense out of what was happening. Sorry not to see you in Drupal Europe.

Rob Gill - Running. I didn't learn this. Sorry.

Tiffany Farriss - You're formidable! You taught me the importance of having principles, and sticking to them. And then using them to build a foundation in the bedrock. You do this with such style, and grace, and good humour. I'm so thankful I've had this time with you.

Jeff Walpole - You made me question my assumptions all the time! You made me laugh, and you gave me excellent bourbon. You always had a way of bringing us back to the real world when we waded too deep into the weeds.

Vesa Palmu - So many things - but the one that still resonates, is we should all celebrate failure. We should create ritual around it, and formalise the lessons failure teaches. We all learn so much more from mistakes, than from successes.

Sameer Verna - For a time, we were the only linux users at the table, and then I defected back to MacOS - I still feel a bit guilty about this, I admit. You championed Free Software at every step - but also, so often, guided us through the strategic mumbo jumbo, to get to the point we needed to.

Steve Francia - "It's not as bad as you all seem to think it is" I don't know why, but I hear this mantra, spoken with your voice, whenever I think of you. Thank you for your Keynote in Nashville, and for everything.

Mike Lamb - I've not yet put into practice the lesson I need to learn from you. To switch off. To really go home, and be home, and switch off the world. I need me some of that, after all of this. Thank you so much for all you've done, but more for your positive, real world perspective. Ta!

Annie - I missed your presence in Germany so much - I feel like I've still got so much to learn from you. You bridged the worlds of digital and marketing, and brought much needed perspective to our thinking. Twas an honour to serve with you.

Audra - With you too, I feel like I was only beginning to get into the groove of the wisdom you're bringing to the table. I hope our paths continue to cross, so I can keep learning!

Baddy Sonja Breidert - A powerful lesson - as volunteers, we have to account for the time, passion and energy we borrow from the rest of our lives, when we give it to Drupal. And Drupal needs to properly recognise it too.

Ingo Rübe - You taught me how to have courage to bring big ideas to the table, and show grace in letting them go.

Michel van Velde - You taught me to interrogate my assumptions, with fun, with good humour, and honest intention of doing good.

George Matthes - You taught me the power of questioning the received wisdom from history. You reminded me of the importance of bringing fresh eyes to every challenge.

Adam Goodman - a simple, but important lesson. That leadership is about caring for people.

Suzanne Dergacheva - newly elected, and about to start your term - I had too little chance to learn from you at the board table, but I already learned that you can teach the whole community kindness by giving them carnations! #DrupalThanks to you too. And power to your arms as you take the oars as a community elected director, and help row us forward!

And to all the staff who've served over the years, your dedication to this organisation and community it serves is incredible. You've all made a difference, together, to all of us. Special mentions for four of you...

Kris - from Munich to Vienna - my constant companion, and my dive bar adventure buddy. Til next time there is cheese...

Holly - Inspiring me to knit! Or, more accurately, to wish I could knit better than I can. To knit with conviction! It's a metaphor for so much, but also very very literally. Also I miss you.

Steph - Your vibrant enthusiasm, and commitment to DrupalCon always inspired me. Your advice on food trucks in Portland nourished me.

Megan - where to start? I'd never finish. Kindness, compassion, steely focus, commercial reality, "operational excellence", and cactus margaritas.

I save my penultimate words for Dries... Thank you for having faith in me. Thank you for creating Drupal, and for sharing it with all of us. Also, thank you for sharing many interesting kinds of Gin!

These final words are for Tim - as you take the reins of this crazy sleigh ride into the future - I feel like I'm leaving just before the party is really about to kick off.

Go you good thing.

Good bye, so long, and thanks for all the fish.

The DA does amazing work.
If you rely on Drupal, you rely on them.

Please consider becoming a member, or a supporting partner.

October 26, 2018

Iterating on Merge Proposals

Developing new WordPress features as plugins has been a wonderfully valuable process for all sorts of features to come into being, from the MP6 Dashboard Redesign, to oEmbed endpoints, and including multiple Customiser enhancements over the years. Thanks to the flexibility that this model offers, folks have been able to iterate rapidly on a wide range of features, touching just about every part of WordPress.

The “Features as Plugins” idea was first introduced during the WordPress 3.7 development cycle, during which the features were merged after a short discussion during a core chat: it was only in the WordPress 3.8 cycle that the idea of a merge proposal post (called “Present Your Feature” back then) came into being. It was envisioned as a way to consult with WordPress leaders, key contributors, and the wider WordPress community on the readiness of this feature to be released. Ultimately, WordPress leaders would make a decision on whether the feature was right for WordPress, and the release lead would decide if it was ready for that release.

Since then, most feature plugins have published some form of merge proposal post before they were ultimately merged into WordPress, and they’ve nearly all benefited to some degree from this process.

The merge proposal process has worked well for smaller features, but it struggled with larger changes.

The REST API is a great example of where the merge proposal process didn’t work. The REST API was a significant change, and trying to communicate the scope of that change within the bounds of a single merge proposal post didn’t really do it justice. It was impossible to convey everything that was changing, how it all worked together, and what it meant for WordPress.

I’d go so far as to say that the shortcomings of the merge proposal process are at least partially responsible for why the REST API hasn’t seen the level of adoption we’d hoped for. It’s managed to gain a moderate amount of popularity with WordPress development agencies, and a handful of plugins use it in some ways, but it never really entered into mainstream usage in the ways it could’ve.

In a project that prides itself on being willing to try new ideas, the merge proposal process has remained largely static for many years.

Gutenberg is the first opportunity since the REST API was merged where we can examine the shortcomings of the merge proposal process, and see how we can apply the original intent of it to the Gutenberg project’s scope and long term vision.

Merge Consultation

Over the last six months, Gutenberg project leads have been consulting with teams across the WordPress project. Helping them get involved when they didn’t have any Gutenberg experience, explaining how their focus fit into the vision for Gutenberg, and listening to feedback on where things needed to be improved. In many circumstances, this consultation process has been quite successful: the WordPress Media and REST API teams are great examples of that. Both teams have got up to speed on the Gutenberg project, and have provided their valuable experience to make it even better.

That’s not to say it’s been entirely successful. There’s been a lot of discussion about Gutenberg and Accessibility recently, much of it boils down to what Joe Dolson summarised as being “too little, too late”. He’s correct, the Accessibility team should’ve been consulted more closely, much earlier in the process, and that’s a mistake I expect to see rectified as the Gutenberg project moves into its next phase after WordPress 5.0. While Gutenberg has always aimed to prioritise accessibility, both providing tools to make the block editor more accessible, as well as encouraging authors to publish accessible content, there are still areas where we can improve.

While there’s much to be discussed following WordPress 5.0, we can already see now that different teams needed to be consulted at different points during the project. Where Gutenberg has aimed to consult with teams earlier than a previous feature plugin would’ve, we need to push that further, ensuring that teams are empowered to get involved earlier still in the process.

All feature plugins in the future, great and small, will benefit from this iteration.

Creating a framework for more fluid feedback over the entire lifecycle of a feature project is beneficial for everyone. WordPress teams can ensure that their feedback is taken on board at the right time, project leads gain experience across the broad range of teams that work on WordPress, and projects themselves are able to produce a better resulting feature.

The important thing to remember throughout all of this is that everything is an experiment. We can try an approach, discover the weaknesses, and iterate. We’re all only human, we all make mistakes, but every mistake is an opportunity to ensure the same mistake can’t happen again. Sometimes that means changing the software, and sometimes that means changing the processes that help build the software. Either way, we’re always able to iterate further, and make WordPress fun for everyone. 🙂

October 18, 2018

Performance Improvements with GPUs for Marine Biodiversity: A Cross-Tasman Collaboration

Identifying probable dispersal routes for marine populations is a data- and processing-intensive task for which traditional high performance computing systems are suitable, even for single-threaded applications. Whilst processing dependencies between the datasets exist, a large level of independence between sets allows the use of job arrays to significantly improve processing time. Identification of bottlenecks within the code base suitable for GPU optimisation has, however, led to additional performance improvements which can be coupled with the existing benefits from job arrays. This case offers an example of how to optimise single-threaded applications for GPU architectures for significant performance improvements. Further development is suggested with the expansion of the GPU capability of the University of Melbourne’s “Spartan” HPC system.

A presentation to EResearchAustralasia 2018.

October 16, 2018

Children in Singapore will no longer be ranked by exam results. Here’s why | World Economic Forum

https://www.weforum.org/agenda/2018/10/singapore-has-abolished-school-exam-rankings-here-s-why The island nation is changing its educational focus to encourage school children to develop the life skills they will need when they enter the world of work.

October 14, 2018

Replica Bulgari Snakeskin Leather Serpenti Forever Medium Shoulder Bag

The Bulgari Serpenti bag collection has been loved by global female stars and fashion icons since it was launched in 2012, and it keeps its high reputation today. Girls who often look through street photos and the latest fashion news on the internet or their smartphones must be familiar with it. Are you moved by those Bulgari Serpenti bags, but put off by their fancy prices? You pay just one tenth of the fancy price for a fine Bulgari Serpenti replica bag, and it will add brilliance to your present splendor. The replica Bulgari Snakeskin Leather Serpenti Forever shoulder bag is available in five different colors, and it fits confident, attractive ladies best.

A serpent is full of mystery, attractive charm and historical culture; even the fairy tales about the serpent can be dated back to the myths of Ancient Greece and Rome. People believe that it is a symbol of wisdom, vitality and energy. Among Bulgari Serpenti bags in different styles, the Serpenti Forever shoulder bag in snakeskin leather reflects the attractive charm of businesswomen and sexy women. Not everyone likes snakeskin leather bags, and they are not versatile for ladies of all ages. The material reminds people of cool, attractive beauty. Does the replica Bulgari Snakeskin Leather Serpenti Forever shoulder bag move you?


Replica Bulgari Snakeskin Leather Serpenti Forever Shoulder Bag
W25 x H20 x D5cm (W9.8″ x H7.8″ x D1.9″)
Replica Bulgari Serpenti Forever flap-cover shoulder bag in snakeskin leather. Brass light gold-plated snake head closure in black and white enamel with eyes in green. Medium size with one gusset and a snake body-shaped chain. The Serpenti replica bag is available in green, brown, turquoise blue, watermelon red and pink. It has only a flip-open pocket and an interior zipper pocket. It can store a phone, some tissues, a lipstick, a mirror and little items. It allows girls to wear it in two ways: across the body or over the shoulder. It’s a good partner in leisure life. Compared to other versions, it features a vivid snake head, a refined snake body-shaped chain and well-made details. The bag body in snakeskin leather is waterproof, so it can be carried on rainy or snowy days.

 

Lots of celebrities chose the Serpenti Forever shoulder bag in snakeskin leather in green, like Beatrice Grimaldi, Jessica Alba and Leighton Meester. The green Serpenti Forever shoulder bag adorns them in daily use, and it can also match fashionable clothes at a dinner or on formal occasions. Miranda Kerr and Zhang Ziyi chose the dark brown, which suits them well. Evan Rachel Wood and her boyfriend were shot in the street, and she was wearing a Bulgari Snakeskin Leather Serpenti Forever shoulder bag in red. Among the five colors, which one is your dream bag?

October 12, 2018

International HPC Certification Program

The HPC community has always considered the training of new and existing HPC practitioners to be of high importance to its growth. The significance of training will increase even further in the era of Exascale, when HPC encompasses even more scientific disciplines. This diversification of HPC practitioners challenges the traditional training approaches, which are not able to satisfy the specific needs of users, often coming from non-traditional HPC disciplines and only interested in learning a particular set of skills. HPC centres are struggling to identify and overcome the gaps in users' knowledge. How should we support prospective and existing users who are not aware of their own knowledge gaps? We are working towards the establishment of an International HPC Certification program that would clearly categorize, define and examine these skills similarly to a school curriculum. Ultimately, we aim for the certificates to be recognized and respected by the HPC community and industry.

International HPC Certification Program, International Supercomputing Conference, Frankfurt, June, 2018

Julian Kunkel (University of Reading), Kai Himstedt (Universität Hamburg), Weronika Filinger (University of Edinburgh), Jean-Thomas Acquaviva (DDN), William Jalby (Université de Versailles Saint-Quentin), Lev Lafayette (University of Melbourne)

October 10, 2018

Audiobooks – September 2018

Lone Star: A History of Texas and the Texans by T. R. Fehrenbach

About 80% of the 40 hour book covers the period 1820-1880. Huge amounts of detail during then but skips over the rest quickly. Great stories though. 8/10

That’s Not English – Britishisms, Americanisms, and What Our English Says About Us by Erin Moore

A series of short chapters (usually one per word) about how the English language is used differently in England from the US. Fun light read. 7/10

The Complacent Class: The Self-Defeating Quest for the American Dream by Tyler Cowen

How American culture (and I’d extend that to countries like NZ) has stopped innovating and gone the safe route in most areas. Main thesis is that pressure is building up and things may break hard. Interesting 8/10

A History of Britain, Volume 2 : The British Wars 1603 – 1776 by Simon Schama

Covering the Civil War, Glorious Revolution and bits of the early empire and American revolution. A nice overview. 7/10

I Find Your Lack of Faith Disturbing: Star Wars and the Triumph of Geek Culture by A. D. Jameson

A personal account of the author’s journey through Geekdom (mainly of the Sci-Fi sort) mixed in with a bit of analysis of how the material is deeper than critics usually credit. 7/10


October 08, 2018

Open Source Firmware Conference 2018

I recently had the pleasure of attending the 2018 Open Source Firmware Conference in Erlangen, Germany. Compared to other more general conferences I've attended in the past, the laser focus of OSFC on firmware and especially firmware security was fascinating. Seeing developers from across the world coming together to discuss how they are improving their corner of the stack was great, and I've walked away with plenty of new knowledge and ideas (and several kilos of German food and drink..).


What was especially exciting though is that I had the chance to talk about my work on Petitboot and what's happened from the POWER8 launch until now. If you're interested in that, or seeing how I talk after 36 hours of travel, check it out here:


OSFC have made all the talks from the first two days available in a playlist on YouTube.
If you're after a few suggestions, here are some, in no particular order:

Ryan O'Leary giving an update on Linuxboot - also known as NERF, Google's approach to a Linux bootloader all written in Go.

Subrata Banik talking about porting Coreboot on top of Intel's FSP

Ron Minnich describing his work with "rompayloads" on Coreboot

Vadim Bendebury describing Google's "Secure Microcontroller" chip

Facebook presenting their use of Linuxboot and "systemboot"

And heaps more, check out the full playlist!

Not The Best Customer Service (laptop.com.au)

You would think that with a website like laptop.com.au you would be sitting on a gold mine of opportunity. It would take real effort not to turn such a domain into a genuine advantage and become the country's specialist and expert provider of laptops. But alas, that effort seems to have been made, and it involves what is, in my considered opinion, not doing the right thing. I leave you, gentle reader, to form your own opinion on the matter from the facts provided.

In mid-August 2018 I purchased a laptop from said provider. I didn't require anything fancy, but it did need to be light and small. The Lenovo Yoga 710-11ISK for $699 seemed to fit the bill. The dispatch notice was sent on August 14, and on August 21 I received the item and noticed that there were a few things wrong. Firstly, the processor was nowhere near as powerful as advertised (and no surprise there - they were advertising the burst speed of a water-cooled processor, not that of a small air-cooled laptop). Further, the system came with only half of the advertised 8GB of RAM.

With the discrepancy pointed out, they offered what I considered a paltry sum of $100 - which would be quite insufficient for the loss of performance, and it was not the kind of system that could be upgraded with ease. Remarkably, they made the claim "We would offer to swap over, however, it's expensive to ship back and forth and we don't have another in stock at this time". I asked why, if this was the case, they were still advertising the supposedly absent model on their website (where, at the time of writing on October 8, it is apparently still available). I pointed out that their own terms and conditions state: "A refund, repair, exchange or credit is available if on arrival the goods are advised faulty or the model or the specifications are incorrect", which was certainly the case here.

Receiving no reply, several days later I had to contact them again. By then the screen on the system had completely ceased to function. I demanded that they refund the cost of the laptop plus postage, as per their own terms and conditions: the system was faulty and the specifications were incorrect. They offered to replace the machine. I told them I preferred a refund, as per Victorian consumer law, since I now had reasonable doubts about their quality control.

I sent the laptop back, express post with signature, and waited. A week later I had to contact them again and provided the Australia Post tracking record to show that it had been delivered (but not collected). It was at this point that, instead of providing a refund as I had requested, they sent a second laptop, completely contrary to my wishes. They responded that they had "replaced machine with original spec that u ordered. Like new condition" and that "We are obliged under consumer law to provide a refund within 30 days of purchase" (any delays were due to their inaction). At that point a case was opened with the Commonwealth Bank (the laptop was paid for via credit card) and with Consumer Affairs Victoria.

But it gets better. They sent the wrong laptop again, this time with a completely different processor, and significantly heavier and larger. It was pointed out to them that they had sent the wrong machine, twice, the second time contrary to my requests. It was pointed out to them that all they had to do was provide a refund as requested for the machine and my postage costs. It was pointed out that it was not my fault that they sent the wrong machine, and that this was their responsibility. It was pointed out that it was not my fault that they sent a second, wrong, machine, contrary to my request, and that this, again, was their responsibility. Indeed, they could benefit from having someone look at their business processes and quality assurance, because there have been many years of this retailer showing less than optimal customer service.

At this point, they buckled and agreed to provide a full refund if I sent the second laptop back - which I have done and will update this 'blog post as the story unfolds.

Now, some of you gentle readers may think that surely it couldn't have been that bad, and that surely there's another side to this story. So, in the public interest and in the spirit of disclosure and transparency, I provide the full correspondence as an attached text file. You can make up your own mind.

October 03, 2018

WordPress 5.0 Needs You!

Yesterday, we started the WordPress 5.0 release cycle with an announcement post.

It’s a very exciting time to be involved in WordPress, and if you want to help make it the best, now’s an excellent opportunity to jump right in.

A critical goal of this release cycle is transparency.

As a member of the WordPress 5.0 leadership team, the best way for me to do my job is to get feedback from the wider WordPress community as early, and as quickly as possible. I think I speak for everyone on the leadership team when I say that we all feel the same on this. We want everyone to be able to participate, which will require some cooperation from everyone in the wider WordPress community.

The release post was published as soon as it was written: we wanted to get it out quickly, so everyone could be aware of what’s going on. Publishing quickly does mean that we’re still writing the more detailed posts about scope, and timeline, and processes. Instead of publishing a completed plan all at once, we intentionally want to include everyone from the start, and evolve plans as we get feedback.

With no other context, the WordPress 5.0 timeline of “release candidate in about a month” would be very short, which is why we’ve waited until Gutenberg had proved itself before setting a timeline. As we mentioned in the post, WordPress 5.0 will be “WordPress 4.9.8 + Gutenberg”. The Gutenberg plugin is running on nearly 500k sites, and WordPress 4.9.8 is running on millions of sites. For comparison, it’s considered a well tested major version if we see 20k installs before the final release date. Gutenberg is a bigger change than we’ve done in the past, so should be held to a higher standard, and I think we can agree that 500k sites is a pretty good test base: it arguably meets, or even exceeds that standard.

We can have a release candidate ready in a month.

The Gutenberg core team are currently focussed on finishing off the last few features. The Gutenberg plugin has evolved exceedingly quickly thanks to their work, it’s moved so much faster than anything we’ve done in WordPress previously. As we transition to bug fixing, you should expect to see the same rapid improvement.

The block editor’s backwards compatibility with the classic editor is important, of course, and the Classic Editor plugin is a part of that: if you have a site that doesn’t yet work with the block editor, please go ahead and install the plugin. I’d be happy to see the Classic Editor plugin getting 10 million or more installs, if people need it. That would both show a clear need for the classic interface to be maintained for a long time, and because it’s the official WordPress plugin for doing it, we can ensure that it’s maintained for as long as it’s needed. This isn’t a new scenario to the WordPress core team, we’ve been backporting security fixes to WordPress 3.7 for years. We’re never going to leave site owners out in the cold there, and exactly the same attitude applies to the Classic Editor plugin.

The broader Gutenberg project is a massive change, and WordPress is a big ship to turn.

It’s going to take years to make this transition, and it’s okay if WordPress 5.0 isn’t everything for everyone. There’ll be a WordPress 5.1, and 5.2, and 5.3, and so on, the block editor will continue to evolve to work for more and more people.

My role in WordPress 5.0 is to “generally shepherd the merge”. I’ve built or guided some of the most complex changes we’ve made in Core in recent years, and they’ve all been successful. I don’t intend to change that record, WordPress 5.0 will only be released when I’m as confident in it as I was for all of those previous projects.

Right now, I’m asking everyone in the WordPress community for a little bit of trust, that we’re all working with the best interests of WordPress at heart. I’m also asking for a little bit of patience, we’re only human, we can only type so fast, and we do need to sleep every now and then. 😉

WordPress 5.0 isn’t the finish line, it’s the starter pistol.

This is a marathon, not a sprint, and the goal is to set WordPress up for the next 15 years of evolution. This can only happen one step at a time though, and the best way to get there will be by working together. We can have disagreements, we can have different priorities, and we can still come together to create the future of WordPress.

October 02, 2018

CNC made close up lens filter holder

Close up filters attach to the end of a camera lens and allow you to take photos closer to the subject than you normally would have been able to do. This is very handy for electronics and other work as you can get clear images of circuit boards and other small detail. I recently got a collection of 3 such filters which didn't come with any sort of real holder, the container they shipped in was not really designed for longer term use.


The above is the starting design for a filter holder, cut in layers from walnut and stacked together to create the enclosure. The inside is shown below: the outer diameter can hold the 80mm black ring, and the inner circles are 70mm and are there to keep the filters from touching each other. Close up filters can be quite fish-eyed looking, with a substantial curve to the lens on the filter, so a gap is needed to keep each filter away from the next one. A little felt is used to cushion each filter from the walnut itself, which adds roughly 1.5mm to the design so the felt layers all have space to live as well.



The bottom has little feet which extend slightly beyond the tangent of the circle so they both make good contact with the ground and there is no rocking. Using two very cheap hinges works well in this design to try to minimize the sideways movement (slop) in the hinges themselves. A small leather strap will finish the enclosure off allowing it to be secured closed.

It is wonderful to be able to turn something like this around. I can only imagine what the world looks like from the perspective of somebody who is used to machining with 5 axis CNC.



September 28, 2018

Helping Migrants to Australia

The end of the school year is fast approaching with the third term either over or about to end and the start of the fourth term looming ahead. There never seems to be enough time in the last term with making sure students have met all their learning outcomes for the year and with final […]

Codec 2 2200 Candidate D

Every time I start working on Deep Learning and Codec 2 I get side tracked! This time, I started developing a reference codec that could be used to explore machine learning, however the reference codec was sounding pretty good early in its development so I have pushed it through to a fully quantised state. For lack of a better name it’s called candidate D, as that’s where I am up to in a series of codec prototypes.

The previous Codec 2 2200 post described candidate C. That also evolved from a “quick effort” to develop a reference codec to explore my deep learning ideas.

Learning about Vector Quantisation

This time, I explored Vector Quantisation (VQ) of spectral magnitude samples. I feel my VQ skills are weak, so I did a bit of reading. I really enjoy learning, especially in areas I have been fooling around in for a while but never really understood. It’s a special feeling when the theory clicks into place with the practical.

So I have these vectors of K=40 spectral magnitude samples that I want to quantise. To get a feel for the data I started out by looking at smaller 2 and 3 dimensional vectors. Forty dimensions is a bit much to handle, so I began by plotting smaller slices. Here are 2D and 3D scatter plots of adjacent samples in the vector:


The data is highly correlated, almost a straight line relationship. An example of a 2-bit, 2D vector quantiser for this data might be the points (0,0) (20,20) (30,30) (40,40). Consider representing the same data with two 1D (scalar) quantisers over the same 2 bit range (0,20,30,40). This would take 4 bits in total, and be wasteful as it would represent points that would never occur, such as (40,0).
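
To make the 2D example above concrete, here is a minimal sketch in C (not the actual Codec 2 training or search code) of the 2 bit, 2D MSE vector quantiser search using those four codebook points:

    #include <float.h>

    /* the four 2D codebook points from the example above */
    static const float vq[4][2] = { {0,0}, {20,20}, {30,30}, {40,40} };

    /* return the index (0..3, i.e. 2 bits) of the codebook vector that is
       closest to target in the minimum Mean Square Error (MSE) sense */
    int vq_quantise(const float target[2])
    {
        int   best   = 0;
        float best_e = FLT_MAX;

        for (int i = 0; i < 4; i++) {
            float e = 0.0f;
            for (int k = 0; k < 2; k++) {
                float d = target[k] - vq[i][k];
                e += d*d;
            }
            if (e < best_e) { best_e = e; best = i; }
        }
        return best;  /* 2 bits describe both dimensions at once */
    }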

[1] helped me understand the relationship between covariance and VQ, using 2D vectors. For Candidate D I extended this to K=40 dimensions, the number of samples I am using for the spectral magnitudes. Then [2] (a thirty year old paper!) showed me how the DCT relates to vector quantisation and the eigenvector/eigenvalue rotations described in [1]. I vaguely remember snoring my way through eigen-thingies at math lectures in University!

My VQ work to date has used minimum Mean Square Error (MSE) to train and match vectors. I have been uncomfortable with MSE matching for a while, as I have observed poor choices in matching vectors to speech. For example if the target vector falls off sharply at high frequencies (say a LPF at 3500 Hz), the VQ will try to select a vector that matches that fall off, and ignore smaller, more perceptually important features like formants.

VQs are often trained to minimise the average error. They tend to cluster VQ points closer to those samples that are more likely to occur. However I have found that beneath a certain threshold, we can’t hear the quantisation error. In Codec 2 it’s hard to hear any distortion when spectral magnitudes are quantised to 6 dB steps. This suggests that we are wasting bits with fine quantiser steps, and there may be better ways to design VQs, for example a uniform grid of points that covers a few standard deviations of data on the scatter plots above.

I like the idea of uniform quantisation across vector dimensions and the concepts I learnt during this work allowed me to do just that. The DCT effectively lets me use scalar quantisation of each vector element, so I can easily choose any quantiser shape I like.
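
As a sketch of that idea (illustrative only, the step size and the DCT normalisation here are not the values used in candidate D): take the DCT of the K=40 magnitude vector, then apply the same uniform scalar quantiser to every coefficient.

    #include <math.h>

    #define K    40
    #define PI_F 3.14159265358979f

    /* DCT-II of a K=40 magnitude vector, followed by uniform scalar
       quantisation of each coefficient with the same step size */
    void dct_quantise(const float mag_dB[K], int q[K], float step_dB)
    {
        for (int k = 0; k < K; k++) {
            float acc = 0.0f;
            for (int n = 0; n < K; n++)
                acc += mag_dB[n] * cosf(PI_F/K * (n + 0.5f) * k);
            q[k] = (int)lroundf(acc / step_dB);  /* uniform steps */
        }
    }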

Spectral Quantiser

Candidate D uses a similar design and bit allocation to Candidate C. Candidate D uses K=40 resampling of the spectral magnitudes, to help preserve narrow high frequency formants that are present for low pitch speakers like hts1a. The DCT of the rate K vectors is computed, and quantised using a Huffman code.

There are not enough bits to quantise all of the coefficients, so we stop when we run out of bits, typically after 15 or 20 (out of a total of 40) DCT coefficients. On each frame the algorithm tries direct or differential quantisation, and chooses the method with the lowest error.
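
Here is a rough sketch of that per-frame decision (hypothetical function and variable names, and an illustrative 6 dB step; the real Octave code differs): quantise the DCT coefficients directly and differentially against the previous frame, then keep whichever gives the lower squared error.

    #include <math.h>

    #define K 40

    /* uniformly quantise x[] into out[] with the given step; return the
       squared error of the result */
    static float quantise(const float x[K], float out[K], float step)
    {
        float e = 0.0f;
        for (int k = 0; k < K; k++) {
            out[k] = step * lroundf(x[k] / step);
            e += (x[k] - out[k]) * (x[k] - out[k]);
        }
        return e;
    }

    /* dct: this frame's DCT, prev: previous frame's quantised DCT.
       Returns 1 if differential quantisation was chosen, 0 for direct. */
    int choose_method(const float dct[K], const float prev[K], float out[K])
    {
        float direct[K], diff_in[K], diff_q[K];

        float e_direct = quantise(dct, direct, 6.0f);

        for (int k = 0; k < K; k++)
            diff_in[k] = dct[k] - prev[k];
        float e_diff = quantise(diff_in, diff_q, 6.0f);

        for (int k = 0; k < K; k++)
            out[k] = (e_diff < e_direct) ? prev[k] + diff_q[k] : direct[k];

        return e_diff < e_direct;
    }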

Results

I have a couple of small databases that I use for listening tests (about 15 samples in total). I feel Candidate D is better than Codec 2 1300, and also Codec 2 2400 for most (but not all) samples.

In particular, Candidate D handles samples with lots of low frequency energy better, e.g. cq_ref and kristoff in the table below.

Sample 1300 2400 2200 D
cq_ref Listen Listen Listen
kristoff Listen Listen Listen
me Listen Listen Listen
vk5local_1 Listen Listen Listen
ebs Listen Listen Listen

For a high quality FreeDV mode I want to improve speech quality over FreeDV 1600 (which uses Codec 2 1300 plus some FEC bits), and provide better robustness to different speakers and recording conditions. As you can hear – there is a significant jump in quality between the 1300 bit/s codec and candidate D. Implemented as a FreeDV mode, it would compare well with SSB at high SNRs.

Next Steps

There are many aspects of Candidate D that could be explored:

  • Wideband audio, like the work from last year.
  • Back to my original aim of exploring deep learning with Codec 2.
  • Computing the DCT coefficients from the rate L (time varying) magnitude samples.
  • Better time/freq quantisation using a 2D DCT rather than the simple difference in time scheme used for Candidate D.
  • Porting to C and developing a real time FreeDV 2200 mode.

The current candidate D 2200 codec is implemented in Octave, so porting to C is required before it is usable for real world applications, plus some more C to integrate with FreeDV.

If anyone would like to help, please let me know. It’s fairly straightforward C coding; I have already done the DSP. You’ll learn a lot, and be part of the open source future of digital radio.

Reading Further

[1] A geometric interpretation of the covariance matrix, really helped me understand what was going on with VQ in 2 dimensions, which can then be extended to larger dimensions.

[2] Vector Quantization in Speech Coding, Makhoul et al.

[3] Codec 2 Wideband, previous DCT based Codec 2 work.

September 27, 2018

2018 Linux Security Summit North America: Wrapup

The 2018 Linux Security Summit North America (LSS-NA) was held last month in Vancouver, BC.

Attendance continued to grow this year, with a record of 220+ attendees.  Our room was upgraded as a result, with spectacular views.

Linux Security Summit NA 2018, Vancouver, BC

We also had many great proposals and the schedule ended up being a very tight fit.  We’ve asked for an extra day for LSS-NA next year — here’s hoping.

Slides of all presentations are available here: https://events.linuxfoundation.org/events/linux-security-summit-north-america-2018/program/slides/

Videos may be found in this youtube playlist.

Once again, as is typical, the conference was focused around development, somewhat uniquely in the world of security conferences.  It’s interesting to see more attention seemingly being paid to the lower parts of the stack: secure booting, firmware, and hardware roots of trust, as well as the continued efforts in hardening the kernel.

LWN provided some excellent coverage of LSS-NA:

Paul Moore has a brief writeup here.

Thanks to everyone involved in the event for 2018: the speakers, attendees, the program committee, the sponsors, and the organizing team at the Linux Foundation.  LSS-NA would not be possible without all of you!

Our interwoven ancestry

In 2008 a new group of human ancestors – the Denisovans, were defined on the basis of a single finger knuckle (phalanx) bone discovered in Denisova cave in the Altai mountains of Siberia. A molar tooth, found at Denisova cave earlier (in 2000) was determined to be of the same group. Since then extensive work […]

September 24, 2018

Straight White Guy Discovers Diversity and Inclusion Problem in Open Source

This is a bit of a strange post for me to write; it’s a topic I’m quite inexperienced in. I’ll warn you straight up: there’s going to be a lot of talking about my thought processes, going off on tangents, and a bit of over-explaining myself for good measure. Think of it as something like high school math, where you had to “show your work”, demonstrating how you arrived at the answer. 20 years later, it turns out there really is a practical use for high school math. 😉


I’m Gary. I come from a middle-class, white, Australian family. My parents both worked, but also had the time to encourage me to do well in school. By doing well in school, I was able to get into a good university, where I could support myself on a part time job because I only had to pay my rent and bar tab. There I met many friends, who’ve helped me along the way. From that, I’ve worked a series of well paid tech jobs, allowing me to have savings, and travel, and live in a comfortable house in the suburbs.

I’ve learned that it’s important for me to acknowledge the privileges that helped me get here. As a “straight white male”, I recognise that a few of my privileges gave me a significant boost that many people aren’t afforded. This is backed up by the data, too. Men are paid more than women. White women are paid more than black women. LGBT people are more likely to suffer workplace bullying. The list goes on and on.

Some of you may’ve heard the term “privilege” before, and found it off-putting. If that’s you, here’s an interesting analogy, take a moment to read it (and if the title bugs you, please ignore it for a moment, we’ll get to that), then come back.

Welcome back! So, are you a straight white male? Did that post title make you feel a bit uncomfortable at being stereotyped? That’s okay, I had a very similar reaction when I first came across the “straight white male” stereotype. I worked hard to get to where I am, so trivialising it as being something I only got because of how I was born hurts. The thing is, this is something that many people who aren’t “straight white males” experience all the time. I have a huge amount of respect for people who have to deal with that on a daily basis, but are still able to be absolute bosses at their job.

My message to my dudes here is: don’t sweat it. A little bit of a joke at your expense is okay, and I find it helps me see things from another person’s perspective, in what can be a light-hearted, friendly manner.

Diversity Makes WordPress Better

My job is to build WordPress, which is used by just shy of a third of the internet. That’s a lot of different people, building a lot of different sites, for a lot of different purposes. I can draw on my experiences to imagine all of those use cases, but ultimately, this is a place where my privilege limits me. Every time I’ve worked on a more diverse team, however, I’m exposed to a wider array of experiences, which makes the things we build together better.

Of course, I’m not even close to being the first person to recognise how diversity can improve WordPress, and I have to acknowledge the efforts of many folks across the community. The WordPress Community team are doing wonderful work helping folks gain confidence with speaking at WordPress events. WordCamps have had a Code of Conduct for some time, and the Community team are working on creating a Code of Conduct for the entire WordPress project. The Design team have built up excellent processes and resources to help folks get up to speed with helping design WordPress. The Core Development team run regular meetings for new developers to learn how to write code for WordPress.

We Can Do Better. I Can Do Better.

As much as I’d love it to be, the WordPress community isn’t perfect. We have our share of problems, and while I do believe that everyone in our community is fundamentally good, we don’t always do our best. Sometimes we’re not as welcoming, or considerate, as we could be. Sometimes we don’t take the time to consider the perspectives of others. Sometimes it’s just a bunch of tech-dude-bros beating their chests. 🙃

Nobody wins when we’re coming from a place of inequality.

So, this post is one of my first steps in recognising there’s a real problem, and learning about how I can help make things better. I’m not claiming to know the answers, I barely know where to start. But I’m hoping that my voice, added to the many that have come before me, and the countless that will come after, will help bring about the changes we need.

September 22, 2018

Scared Weird Frozen Guy


The true life story of a kid from Bribie Island (I’ve been there!) running a marathon in Antarctica, via being a touring musical comedian, doing things like this:

This book is an interesting and light read, and came kindly recommended by Michael Carden, who pretty much insisted I take the book off him at a cafe. I don’t regret reading it and would recommend it to people looking for a light autobiography for a rainy (and perhaps cold) evening or two.

Oh, and the Scared Weird Little Guys of course are responsible for this gem…

This book is highly recommended and now I really want to go for a run.

Scared Weird Frozen Guy
Rusty Berther
Comedians, 2012, 325 pages

After 20 incredible years as part of a musical comedy duo, Scared Weird Little Guy, Rusty Berther found himself running a marathon in Antarctica. What drove him to this? In this hilarious and honest account of his life as a Scared Weird Little Guy, and his long journey attempting an extreme physical and mental challenge at the bottom of the world, Rusty examines where he started from, and where he just might be going to.


September 20, 2018

Words Have Meanings

As a follow-up to my post with Suggestions for Trump Supporters [1] I notice that many people seem to have private definitions of words that they like to use.

There are some situations where the use of a word is contentious and different groups of people have different meanings. One example that is known to most people involved with computers is “hacker”. That means “criminal” according to mainstream media and often “someone who experiments with computers” to those of us who like experimenting with computers. There is ongoing discussion about whether we should try and reclaim the word for its original use or whether we should just accept that’s a lost cause. But generally based on context it’s clear which meaning is intended. There is also some overlap between the definitions; some people who like to experiment with computers conduct experiments with computers they aren’t permitted to use, and some people who are career computer criminals started out experimenting with computers for fun.

But some times words are misused in ways that fail to convey any useful ideas and just obscure the real issues. One example is the people who claim to be left-wing Libertarians. Murray Rothbard (AKA “Mr Libertarian”) boasted about “stealing” the word Libertarian from the left [2]. Murray won that battle, they should get over it and move on. When anyone talks about “Libertarianism” nowadays they are talking about the extreme right. Claiming to be a left-wing Libertarian doesn’t add any value to any discussion apart from demonstrating the fact that the person who makes such a claim is one who gives hipsters a bad name. The first time penny-farthings were fashionable the word “libertarian” was associated with left-wing politics. Trying to have a sensible discussion about politics while using a word in the opposite way to almost everyone else is about as productive as trying to actually travel somewhere by penny-farthing.

Another example is the word “communist” which according to many Americans seems to mean “any person or country I don’t like”. It’s often invoked as a magical incantation that’s supposed to automatically win an argument. One recent example I saw was someone claiming that “Russia has always been communist” and rejecting any evidence to the contrary. If someone was to say “Russia has always been a shit country” then there’s plenty of evidence to support that claim (Tsarist, communist, and fascist Russia have all been shit in various ways). But no definition of “communism” seems to have any correlation with modern Russia. I never discovered what that person meant by claiming that Russia is communist, they refused to make any comment about Russian politics and just kept repeating that it’s communist. If they said “Russia has always been shit” then it would be a clear statement, people can agree or disagree with that but everyone knows what is meant.

The standard response to pointing out that someone is using a definition of a word that is either significantly different to most of the world (or simply inexplicable) is to say “that’s just semantics”. If someone’s “contribution” to a political discussion is restricted to criticising people who confuse “their” and “there” then it might be reasonable to say “that’s just semantics”. But pointing out that someone’s writing has no meaning because they choose not to use words in the way others will understand them is not just semantics. When someone claims that Russia is communist and Americans should reject the Republican party because of their Russian connection it’s not even wrong. The same applies when someone claims that Nazis are “leftist”.

Generally the aim of a political debate is to convince people that your cause is better than other causes. To achieve that aim you have to state your cause in language that can be understood by everyone in the discussion. Would the person who called Russia “communist” be more or less happy if Russia had common ownership of the means of production and an absence of social classes? I guess I’ll never know, and that’s their failure at debating politics.

September 17, 2018

LUV October 2018 Workshop: CoCalc and Nectar

Oct 20 2018 12:30
Oct 20 2018 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

CoCalc and Nectar

Paul Leopardi will give a talk and live demo of the cloud services that he is currently using: CoCalc to host Jupyter notebooks, Python code and the like, and the Nectar research cloud service to host a PostgreSQL database and a Plotly Dash dashboard to disseminate his research results.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


LUV October 2018 Main Meeting: Non-alphabetic languages / Haskell

Oct 2 2018 18:30
Oct 2 2018 20:30
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE RETURN TO ORIGINAL START TIME

6:30 PM to 8:30 PM Tuesday, October 2, 2018
Training Room, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

  • Wen Lin, Using Linux/FOSS in non-alphabetic languages like Chinese
  • Shannon Pace, Haskell

Using Linux/FOSS in non-alphabetic languages like Chinese

Wen will discuss different input methods on a keyboard that was first designed only for alphabet-based languages.

Many of us like to go for dinner nearby after the meeting, typically at Brunetti's or Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.


September 16, 2018

The Mission: Democratise Publishing

It’s exciting to see the Drupal Gutenberg project getting under way, it makes me proud of the work we’ve done ensuring the flexibility of the underlying Gutenberg architecture. One of the primary philosophies of Gutenberg’s technical architecture is platform agnosticism, and we can see the practical effects of this practice coming to fruition across a variety of projects.

Yoast are creating new features for the block editor, as well as porting existing features, which they’re able to reuse in the classic editor.

Outside of WordPress Core, the Automattic teams who work on Calypso have been busy adding Gutenberg support, in order to make the block editor interface available on WordPress.com. Gutenberg and Calypso are large JavaScript applications, built with strong opinions on design direction and technical architecture, and with significant component overlap. That these two projects can function together at all is an engineering feat that’s difficult to fully appreciate.

If we reached the limit of Gutenberg’s platform agnosticism here, it would still be a successful project.

But that’s not where the ultimate goals of the Gutenberg project stand. From early experiments in running the block editor as a standalone application, to being able to compile it into a native mobile component, and now seeing it running on Drupal, Gutenberg’s technical goals have always included a radical level of platform agnosticism.

Better Together

Inside the WordPress world, significant effort and focus has been on ensuring backwards compatibility with existing WordPress sites, plugins, and practices. Given that WordPress is such a hugely popular platform, it’s exceedingly important to ensure this is done right. With Gutenberg expanding outside of the WordPress world, however, we’re seeing different focuses and priorities arise.

The Gutenberg Cloud service is a fascinating extension being built as part of the Drupal Gutenberg project, for example. It provides a method for new blocks to be shared and discovered, and the sample hero block sets a clear tone of providing practical components that can be rapidly put together into a full site. While we’ve certainly seen similar services appear for the various site builder plugins, this is the first one (that I’m aware of, at least) built specifically for Gutenberg.

By making the Gutenberg experience available for everyone, regardless of their technical proficiency, experience, or even preferred platform, we pave the way for a better future for all.

Democratising Publishing

You might be able to guess where this is going. 😉

WordPress’ mission is to “democratise publishing”. It isn’t to “be the most popular CMS”, or to “run on old versions of PHP”, though it’s easy to think that might be the case on the surface. That these statements are true is simply a side effect of the broader principle: All people, regardless of who they are or where they come from, should be able to publish their content as part of a free and open web.

The WordPress mission is not to “democratise publishing with WordPress”.

WordPress has many advantages that make it so popular, but hoarding those to ourselves doesn’t help the open web, it just creates more silos. The open web is the only platform on which publishing can be democratised, so it makes sense for Gutenberg to work anywhere on the open web, not just inside WordPress. Drupal isn’t a competitor here, we’re all working towards the same goal, the different paths we’ve taken have made the open web stronger as a whole.

Much as the block editor has been the first practical implementation of the Gutenberg architecture, WordPress is simply the first practical integration of the block editor into a CMS. The Gutenberg project will expand into site customisation and theming next, and while there’s no requirement that Drupal make use of these, I’d be very interested to see what they came up with if they did. Bringing together our many years of experience in tackling these complex problems can only make the end result better.

I know I’m looking forward to all of us working together for the betterment of the open web.

September 14, 2018

Porting a LDPC Decoder to a STM32 Microcontroller

A few months ago, FreeDV 700D was released. In that post, I asked for volunteers to help port 700D to the STM32 microcontroller used for the SM1000. Don Reid, W7DMR stepped up – and has been doing a fantastic job porting modules of C code from the x86 to the STM32.

Here is a guest post from Don, explaining how he has managed to get a powerful LDPC decoder running on the STM32.

LDPC for the STM32

The 700D mode and its LDPC function were developed and used on desktop (x86) platforms. The LDPC decoder is implemented in the mpdecode_core.c source file.

We’d like to run the decoder on the SM1000 platform which has an STM32F4 processor. This requires the following changes:

  • The code used doubles in several places, while the stm32 has only single precision floating point hardware.
  • It was thought that the memory used might be too much for a system with just 192k bytes of RAM.
  • There are 2 LDPC codes currently supported: HRA_112_112, used in 700D, and H2064_516_sparse, used for Balloon Telemetry. While only the 700D configuration needed to work on the STM32 platform, any changes made to the mainstream code needed to work with the H2064_516_sparse code.

Testing

Before making changes it was important to have a well defined test process to validate new versions. This allowed each change to be validated as it was made. Without this the final debugging would likely have been very difficult.

The ldpc_enc utility can generate standard test frames and the ldpc_dec utility can receive the frames and measure bit errors, so errors can be detected directly and the BER computed. ldpc_enc can also output soft decision symbols to emulate what the modem would receive and pass into the LDPC decoder. A new utility, ldpc_noise, was written to add AWGN to the sample values between the above utilities. Here is a sample run:

$ ./ldpc_enc /dev/zero - --sd --code HRA_112_112 --testframes 100 | ./ldpc_noise - - 1 | ./ldpc_dec - /dev/null --code HRA_112_112 --sd --testframes
single sided NodB = 1.000000, No = 1.258925
code: HRA_112_112
code: HRA_112_112
Nframes: 100
CodeLength: 224 offset: 0
measured double sided (real) noise power: 0.640595
total iters 3934
Raw Tbits..: 22400 Terr: 2405 BER: 0.107
Coded Tbits: 11200 Terr: 134 BER: 0.012

ldpc_noise is passed a “No” (N-zero) level of 1dB, Eb=0, so Eb/No = -1, and we get a 10% raw BER, and 1% after LDPC decoding. This is a typical operating point for 700D.
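
For the curious, here is a minimal sketch (not the actual ldpc_noise source) of adding real AWGN at a given No: the double sided noise power is No/2, which matches the measured value printed in the run above.

    #include <math.h>
    #include <stdlib.h>

    #define PI_F 3.14159265358979f

    /* Box-Muller: one sample of zero mean, unit variance Gaussian noise */
    static float gaussian(void)
    {
        float u1 = (rand() + 1.0f) / ((float)RAND_MAX + 1.0f);
        float u2 = (rand() + 1.0f) / ((float)RAND_MAX + 1.0f);
        return sqrtf(-2.0f * logf(u1)) * cosf(2.0f * PI_F * u2);
    }

    /* add real AWGN with single sided noise density NodB to n soft
       decision samples */
    void add_noise(float sd[], int n, float NodB)
    {
        float No    = powf(10.0f, NodB / 10.0f);  /* 1 dB -> 1.2589     */
        float sigma = sqrtf(No / 2.0f);           /* real noise std dev */

        for (int i = 0; i < n; i++)
            sd[i] += sigma * gaussian();
    }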

A shell script (ldpc_check) combines several runs of these utilities, checks the results, and provides a final pass/fail indication.

All changes were made to new copies of the source files (named *_test*) so that current users of codec2-dev were not disrupted, and so that the behaviour could be compared to the “released” version.

Unused Functions

The code contained several functions which are not used anywhere in the FreeDV/Codec2 system. Removing these made it easier to see the code that was used and allowed the removal of some variables and record elements to reduce the memory used.

First Compiles

The first attempt at compiling for the stm32 platform showed that the code required more memory than was available on the processor. The STM32F405 used in the SM1000 system has 128k bytes of main RAM.

The largest single item was the DecodedBits array, which was used to save the results for each iteration, using 32 bit integers, one per decoded bit.

    int *DecodedBits = calloc( max_iter*CodeLength, sizeof( int ) );

This used almost 90k bytes!

The decode function used for FreeDV (SumProducts) used only the last decoded set. So the code was changed to save only one pass of values, using 8 bit integers. This reduced the ~90k bytes to just 224 bytes!
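
A sketch of what the replacement allocation looks like (the exact variable name and code in codec2-dev may differ): one byte per decoded bit, for the current pass only.

    /* needs <stdlib.h> and <stdint.h> */
    uint8_t *DecodedBits = calloc( CodeLength, sizeof( uint8_t ) );
    /* 224 bytes for HRA_112_112 (CodeLength = 224), down from ~90k */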

The FreeDV 700D mode requires one LDPC decode every 160ms. At this point the code compiled and ran but was too slow – using around 25ms per iteration, or 300 – 2500ms per frame!

C/V Nodes

The two main data structures of the LDPC decoder are c_nodes and v_nodes. Each is an array where each node contains additional arrays. In the original code these structures used over 17k bytes for the HRA_112_112 code.

Some of the elements of the c and v nodes (index, socket) are indexes into these arrays. Changing these from 32 bit to 16 bit integers and changing the sign element into an 8 bit char saved about 6k bytes.

The next problem was the run time. Each 700D frame must be fully processed in 160 ms and the decoder was taking several times this long. The CPU load was traced to the phi0() function, which was calling two maths library functions. After optimising the phi0 function (see below) the largest use of time was the index computations of the nested loops which accessed these c and v node structures.

With each node having separate arrays for index, socket, sign, and message these indexes had to be computed separately. By changing the node structures to hold an array of sub-nodes instead this index computation time was significantly reduced. An additional benefit was about a 4x reduction in the number of memory blocks allocated. Each allocation block includes additional memory used by malloc() and free() so reducing the number of blocks reduces memory use and possible heap fragmentation.
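
A sketch of that layout change (field names are illustrative, not the exact codec2-dev names): instead of one separate array per field, each c or v node holds a single array of sub-nodes with narrower integer types.

    #include <stdint.h>

    struct sub_node {
        uint16_t index;    /* was a 32 bit int in its own array */
        uint16_t socket;   /* was a 32 bit int in its own array */
        int8_t   sign;     /* was a 32 bit int                  */
        float    message;
    };

    struct node {
        int              degree;
        struct sub_node *subs;   /* degree entries per node */
    };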

Additional time was saved by only calculating the degree elements of the c and v nodes at start-up rather than for every frame. That data is kept in memory that is statically allocated when the decoder is initialized. This costs some memory but saves time.

This still left the code calling malloc several hundred times for each frame and then freeing that memory later. This sort of memory allocation activity has been known to cause troubles in some embedded systems and is usually avoided. However the LDPC decoder needed too much memory to allow it to be statically allocated at startup and not shared with other parts of the code.

Instead of allocating an array of sub-nodes for each c or v node, a single array of bytes is passed in from the parent. The initialization function which calculates the degree elements of the nodes also counts up the memory space needed and reports this to its caller. When the decoder is called for a frame, the node’s pointers are set to use the space of this array.
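
The pattern looks something like this sketch (again with illustrative names, reusing the sub-node layout sketched above): the init pass counts bytes, the caller allocates one block, and the decoder carves node pointers out of that block with no per-node malloc().

    #include <stdlib.h>
    #include <stdint.h>

    struct sub_node { uint16_t index, socket; int8_t sign; float message; };
    struct node     { int degree; struct sub_node *subs; };

    /* pass 1: count the workspace bytes needed for all sub-node arrays */
    size_t workspace_bytes(const struct node *nodes, int n)
    {
        size_t bytes = 0;
        for (int i = 0; i < n; i++)
            bytes += nodes[i].degree * sizeof(struct sub_node);
        return bytes;
    }

    /* pass 2: point each node's sub-array into one caller-supplied buffer */
    void attach_workspace(struct node *nodes, int n, uint8_t *buf)
    {
        for (int i = 0; i < n; i++) {
            nodes[i].subs = (struct sub_node *)buf;
            buf += nodes[i].degree * sizeof(struct sub_node);
        }
    }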

Other arrays that the decoder needs were added to this to further reduce the number of separate allocation blocks.

This leaves the decisions of how to allocate and share this memory up to a higher level of the code. The plan is to continue to use malloc() and free() at a higher level initially. Further testing can be done to look for memory leakage and optimise overall memory usage on the STM32.

PHI0

There is a non linear function named “phi0” which is called inside several levels of nested loops within the decoder. The basic operation is:

   phi0(x) = ln( (e^x + 1) / (e^x - 1) )

The original code used double precision exp() and log(), even though the input, output, and intermediate values are all floats. This was probably an oversight. Changing to the single precision versions expf() and logf() provided some improvements, but not enough to meet our CPU load goal.
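
For reference, a single precision version of phi0 straight from the formula above looks something like this sketch (the clamp values follow the input ranges listed below; this is not the optimised table lookup code):

    #include <math.h>

    static float phi0_ref(float x)
    {
        if (x >= 10.0f)          return 0.0f;   /* e^x >> 1, ln(~1) ~ 0 */
        if (x <  1.0f/4096.0f)   return 10.0f;  /* clamp for tiny x     */

        float ex = expf(x);
        return logf((ex + 1.0f) / (ex - 1.0f));
    }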

The original code used piecewise approximation for some input values. This was extended to cover the full range of inputs. The code was also structured differently to make it faster. The original code had a sequence of if () else if () else if () … This can take a long time when there are many steps in the approximation. Instead, two ranges of input values are covered with linear steps implemented as table lookups.

The third range of inputs is non linear and is handled by a binary tree of comparisons to reduce the number of levels. All of this code is implemented in a separate file to allow the original or optimised version of phi0 to be used.

The ranges of inputs are:

             x >= 10      result always 0
      10   > x >=  5      steps of 1/2
       5   > x >= 1/16    steps of 1/16
    1/16   > x >= 1/4096  use 1/32, 1/64, 1/128, .., 1/4096
    1/4096 > x            result always 10

The range of values that will appear as inputs to phi0() can be represented as a fixed point value stored in a 32 bit integer. By converting to this format at the beginning of the function, the code for all of the comparisons and lookups is reduced and uses shifts and integer operations. The step levels use powers of 2, which lets the table index computations use shifts and makes the fraction constants of the comparisons simple ones that the ARM instruction set can create efficiently.

Misc

Two of the configuration values are scale factors that get multiplied inside the nested loops. These values are 1.0 in both of the current configurations so that floating point multiply was removed.

Results

The optimised LDPC decoder produces the same output BER as the original.

The optimised decoder uses 12k of heap at init time and needs another 12k of heap at run time. The original decoder only used heap at run time, which was returned after each call. We have traded off the use of static heap to clean up the many small heap allocations and reduce execution time. It is probably possible to reduce the static space further, perhaps at the cost of longer run times.

The maximum time to decode a frame using 100 iterations is 60.3 ms and the median time is 8.8 ms, far below our budget of 160ms!

Future Possibilities

The remaining floating point computations in the decoder are addition and subtraction so the values could be represented with fix point values to eliminate the floating point operations.

Some values which are computed from the configuration (degree, index, socket) are constants and could be generated at compile time using a utility called by cmake. However this might actually slow down the operation as the index computations might become slower.

The index and socket elements of C and V nodes could be pointers instead of indexes into arrays.

Experiments would be required to ensure these changes actually speed up the decoder.

Bio

Don got his first amateur license in high school but was soon distracted with getting an engineering degree (BSEE, Univ. of Washington), then family and life. He started his IC design career with the CPU for the HP-41C calculator. Then came ICs for printers and cameras, work on IC design tools, and some firmware for embedded systems. Exposure to ARES public service led to a new amateur license – W7DMR – and active involvement with ARES. He recently retired after 42 years and wanted to find an open project that combined radio, embedded systems and DSP.

Don lives in Corvallis, Oregon, USA a small city with the state technical university and several high tech companies.

Open Source Projects and Volunteers

Hi it’s David back again ….

Open source projects like FreeDV and Codec 2 rely on volunteers to make them happen. The typical pattern is people get excited, start some work, then drift away after a few weeks. Gold is the volunteer that consistently works week in, week out until their particular project is done. The number of hours/week doesn’t matter – it’s the consistency that is really helpful to the projects. I have a few contributors/testers/users of FreeDV in this category and I appreciate you all deeply – thank you.

If you would like to help out, please contact me. You’ll learn a lot and get to work towards an open source future for HF radio.

If you can’t help out technically, but would like to support this work, please consider Patreon or PayPal.

Reading Further

LDPC using Octave and the CML library. Our LDPC decoder comes from Coded Modulation Library (CML), which was originally used to support Matlab/Octave simulations.

Horus 37 – High Speed SSTV Images. The CML LDPC decoder was converted to a regular C library, and used for sending images from High Altitude Balloons.

Steve Ports an OFDM modem from Octave to C. Steve is another volunteer who put in a fine effort on the C coding of the OFDM modem. He recently modified the modem to handle high bit rates for voice and HF data applications.

Rick Barnich KA8BMA did a fantastic job of designing the SM1000 hardware. Leading edge, HF digital voice hardware, designed by volunteers.

September 12, 2018

Tony K2MO Tests FreeDV

Tony, K2MO, has recently published some fine videos of FreeDV 1600, 700C, and 700D passing through simulated HF channels. The results are quite interesting.

This video shows the 700C mode having the ability to decode with 50% of its carriers removed:

This 700C modem sends two copies of the tx signal at high and low frequencies, a form of diversity to help overcome selective fading. These are then combined at the receiver.

Tony’s next video shows three FreeDV modes passing through a selective fading HF channel simulation:

This particular channel has slow fading, a notch gradually creeps across the spectrum.

Tony originally started testing to determine which FreeDV mode worked best on NVIS paths. He used path parameters based on VOACAP prediction models, which show the relative time delay and signal power for each propagation mode, i.e. F1, F2:

Note the long delay paths (5ms). The CCIR NVIS path model also suggests a path delay of 7ms. That much delay puts the F-layer at 1000 km (well out into space), which is a bit of a puzzle.

This video shows the results of the VOCAP NVIS path:

In this case 700C does better than 700D. The 700C modem (COHPSK) is a parallel tone design, which is more robust to long multipath delays. The OFDM modem used for 700D is configured for multipath delays of up to 2ms, but tends to fall over after that as the “O” for Orthogonal assumption breaks down. It can be configured for longer delays, at a small cost in low SNR performance.

The OFDM modem gives much tighter packing for carriers, which allows us to include enough bits for powerful FEC, and have a very narrow RF bandwidth compared to 700C. FreeDV 700D has the ability to perform interleaving (Tools-Options “FreeDV 700 Options”), which is a form of time diversity. This feature is not widely used at present, but simulations suggest it is worth up to 4dB.

It would be interesting to combine frequency diversity, LDPC, and OFDM in a wider bandwidth signal. If anyone is interested in doing a little C coding to try this let me know.

I’ve actually seen long delay on NVIS paths in the “real world”. Here is a 40M 700D contact between myself and Mark, VK5QI, who is about 40km away from me. Note at times there are notches on the waterfall 200Hz apart (a differential delay of 1/200Hz = 5ms), indicating a round trip path difference of about 1500km:

Reading Further

Modems for HF Digital Voice Part 1
, explaining the frequency diversity used in 700C
Testing FreeDV 700C, shows how to use some built in test features like noise insertion and interfering carriers.
FreeDV 700D
FreeDV User Guide, including new 700D features like interleaving

September 11, 2018

Thinkpad X1 Carbon Gen 6

In February I reviewed a Thinkpad X1 Carbon Gen 1 [1] that I bought on Ebay.

I have just been supplied the 6th Generation of the Thinkpad X1 Carbon for work, which would have cost about $1500 more than I want to pay for my own gear. ;)

The first thing to note is that it has USB-C for charging. The charger continues the trend towards smaller and lighter chargers and also allows me to charge my phone from the same charger so it’s one less charger to carry. The X1 Carbon comes with a 65W charger, but when I got a second charger it was only 45W but was also smaller and lighter.

The laptop itself is also slightly smaller in every dimension than my Gen 1 version as well as being noticeably lighter.

One thing I noticed is that the KDE power applet disappears when battery is full – maybe due to my history of buying refurbished laptops I haven’t had a battery report itself as full before.

Disabling the touch pad in the BIOS doesn’t work. This is annoying, there are 2 devices for mouse type input so I need to configure Xorg to only read from the Trackpoint.

The labels on the lid are upside down from the perspective of the person using it (but right way up for people sitting opposite them). This looks nice for observers, but means that you tend to put your laptop the wrong way around on your desk a lot before you get used to it. It is also fancier than the older model; the red LED on the cover, for the dot in the I in Thinkpad, is one of the minor fancy features.

As the new case is thinner than the old one (which was thin compared to most other laptops) it’s difficult to open. You can’t easily get your fingers under the lid to lift it up.

One really annoying design choice was to have a proprietary Ethernet socket with a special dongle. If the dongle is lost or damaged it will probably be expensive to replace. An extra USB socket and a USB Ethernet device would be much more useful.

The next deficiency is that it has one USB-C/DisplayPort/Thunderbolt port and 2 USB 3.1 ports. USB-C is going to be used for everything in the near future and a laptop with only a single USB-C port will be as annoying then as one with a single USB 2/3 port would be right now. Making a small laptop requires some engineering trade-offs and I can understand them limiting the number of USB 3.1 ports to save space. But having two or more USB-C ports wouldn’t have taken much space – it would take no extra space to have a USB-C port in place of the proprietary Ethernet port. It also has only a HDMI port for display, the USB-C/Thunderbolt/DisplayPort port is likely to be used for some USB-C device when you want an external display. The Lenovo advertising says “So you get Thunderbolt, USB-C, and DisplayPort all rolled into one”, but really you get “a choice of one of Thunderbolt, USB-C, or DisplayPort at any time”. How annoying would it be to disconnect your monitor because you want to read a USB-C storage device?

As an aside this might work out OK if you can have a DisplayPort monitor that also acts as a USB-C hub on the same cable. But if so requiring a monitor that isn’t even on sale now to make my laptop work properly isn’t a good strategy.

One problem I have is that resume from suspend requires holding down the power button. I’m not sure if it’s a hardware or software issue. But suspend on lid close works correctly, and so does suspend on inactivity when running on battery power. The X1 Carbon Gen 1 that I own doesn’t suspend on lid close or inactivity (due to a Linux configuration issue). So I have one laptop that won’t suspend correctly and one that won’t resume correctly.

The CPU is an i5-8250U which rates 7,678 according to cpubenchmark.net [2]. That’s 92% faster than the i7 in my personal Thinkpad and more importantly I’m likely to actually get that performance without having the CPU overheat and slow down, that said I got a thermal warning during the Debian install process which is a bad sign. It’s also only 114% faster than the CPU in the Thinkpad T420 I bought in 2013. The model I got doesn’t have the fastest possible CPU, but I think that the T420 didn’t either. A 114% increase in CPU speed over 5 years is a long way from the factor of 4 or more that Moore’s law would have predicted.

The keyboard has the stupid positions for the PgUp and PgDn keys I noted on my last review. It’s still annoying and slows me down, but I am starting to get used to it.

The display is FullHD, it’s nice to have a laptop with the same resolution as my phone. It also has a slider to cover the built in camera which MIGHT also cause the microphone to be disconnected. It’s nice that hardware manufacturers are noticing that some customers care about privacy.

The storage is NVMe. That’s a nice feature, although being only 240G may be a problem for some uses.

Conclusion

Definitely a nice laptop if someone else is paying.

The fact that it had cooling issues from the first install is a concern. Laptops have always had problems with cooling and when a laptop has cooling problems before getting any dust inside it’s probably going to perform poorly in a few years.

Lenovo has gone too far trying to make it thin and light. I’d rather have the same laptop but slightly thicker, with a built-in Ethernet port, more USB ports, and a larger battery.

ABC iview and the ‘Australia tax’

Unless you have been living in a cave, it is probable that you heard about a federal parliamentary inquiry into IT pricing (somewhat aptly entitled “At what cost? — IT pricing and the Australia tax”) which reported that, amongst other things, online geo-blocking can as much as double pricing for IT products in what is blatant price discrimination. Not only do Australians pay, on average, 42% more than US’ians for Adobe products, and 66% more for Microsoft products, but music (such as the iTunes Store), video games, and e-books (e.

iPads as in-flight entertainment

I’m writing this whilst sitting on a Qantas flight from Perth to Sydney, heading home after attending the fantastic linux.conf.au 2014. The plane is a Boeing 767, and unlike most flights I have been on in the last decade, this one has no in-flight entertainment system built into the backs of seats. Instead, every passenger is issued with an Apple iPad (located in the back seat pocket), fitted with what appears to be a fairly robust leather jacket emblazoned with the words “SECURITY DEVICE ATTACHED” (presumably to discourage theft).

You reap what you sow

So the ABC has broken iview for all non–Chrome Linux users. How so? Because the ABC moved iview to use a streaming format supported only by the latest versions of Adobe Flash (e.g. version 11.7, which is available on Windows and OS X), but Adobe have ceased Linux support for Flash as of version 11.2 (for reasons I don’t yet understand, some users report that the older Flash 10.3 continues to work with iview).

Turning out the lights

We put an unbelievable amount of data in the hands of third parties. In plain text. Traceable right back to you with minimal effort. For me, giving my data to the likes of Google, Apple, Microsoft, and the rest of the crowd, has always been a tradeoff: convenience vs. privacy. Sometimes, privacy wins. Most of the time, convenience wins. My iPhone reports in to Apple. My Android phone reports in to Google.

We do not tolerate bugs; they are of the devil

I was just reading an article entitled “Nine traits of the veteran network admin”, and this point really struck a chord with me: “Veteran network admin trait No. 7: We do not tolerate bugs; they are of the devil.” On occasion, conventional troubleshooting or building new networks runs into an unexplainable blocking issue. After poring over configurations, sketching out connections, routes, and forwarding tables, and running debugs, one is brought no closer to solving the problem.

SPA525G with ASA 9.1.x

At work, we have a staff member who has a Cisco SPA525G phone at his home that has built-in AnyConnect VPN support. Over the weekend, I updated our Cisco ASA firewall (which sits in front of our UC500 phone system) from version 8.4.7 to 9.1.3 and the phone broke with the odd error “Failed to obtain WebVPN cookie”. Turns out the fix was very simple. Just update the firmware on the SPA525G to the latest version.

Floppy drive music

Some time in 2013 I set up a rig to play music with a set of floppy drives. At linux.conf.au 2015 in Auckland I gave a brief lightning talk about this, and here is a set of photos and some demo music to accompany. The hardware consists of six 3.5″ floppy drives connected to a LeoStick (Arduino) via custom vero board that connects the direction and step pins (18 and 20, respectively) as well as permanently grounding the select pin A (14).

Configuring Windows for stable IPv6 addressing

By default, Windows will use randomised IPv6 addresses, rather than using stable EUI-64 addresses derived from the MAC address. This is great for privacy, but not so great for servers that need a stable address. If you run an Exchange mail server, or need to be able to access the server remotely, you will want a stable IPv6 address assigned. You may think this is possible simply by editing the NIC and setting a manual IPv6 address.
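
One commonly used approach, offered here only as a sketch (the full post presumably goes into more detail), is to disable the randomised identifiers and the temporary privacy addresses with netsh:

netsh interface ipv6 set global randomizeidentifiers=disabled store=persistent
netsh interface ipv6 set privacy state=disabled store=persistent

The first command makes Windows derive the interface identifier from the MAC address (EUI-64 style); the second stops it generating temporary privacy addresses.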

One week with the Nexus 5

My ageing Motorola Milestone finally received a kick to the bucket last week when my shiny new Nexus 5 phone arrived. Though fantastic by 2009 standards, the Milestone could only officially run Android 2.2, and 2.3 with the help of an unofficial CyanogenMod port. Having been end-of-lifed for some time now, and barely being able to render a complex web page without running out of memory, it was time for me to move on.

Restore ASA 5500 configuration from blank slate

The Cisco ASA 5500 series (e.g. 5505, 5510) firewalls have a fairly nice GUI interface called ASDM. It can sometimes be a pain, but it could be a lot worse than it is. One of the nice things ASDM does is let you save a .zip file backup of your entire ASA configuration. It includes your startup-configuration, VPN secrets, AnyConnect image bundles, and all those other little niceties. But when you set up an ASA from scratch to restore from said .

September 09, 2018

Fail2ban

I’ve recently set up fail2ban [1] on a bunch of my servers. Its purpose is to ban IP addresses associated with password guessing – or whatever other criteria for badness you configure. It supports Linux, OpenBSD [2] and probably most Unix type OSs too. I run Debian so I’ve been using the Debian packages of fail2ban.

The first thing to note is that it is very easy to install and configure (for the common cases at least). For a long time installing it had been on my todo list, but I didn’t make the time to do it; after installing it I realised that I should have done it years ago, it was so easy.
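
On Debian installing it really is a single command (a minimal sketch; the package enables the sshd jail by default, as noted below):

$ sudo apt install fail2ban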

Generally to configure it you just create a file under /etc/fail2ban/jail.d with the settings you want; any settings that differ from the defaults will override them. For example, if you have a system running dovecot on the default ports and sshd on port 999 then you could put the following in /etc/fail2ban/jail.d/local.conf:

[dovecot]
enabled = true

[sshd]
port = 999

By default the Debian package of fail2ban only protects sshd.

When fail2ban is running on Linux the command “iptables -L -n -v|grep f2b” will show the rules that match inbound traffic and the names of the chains they direct traffic to. To see if fail2ban has acted to protect a service you can run a command like “iptables -L f2b-sshd -n” to see the iptables rules.
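
If you’d rather not read raw iptables output, the fail2ban-client tool that ships with the package can summarise the same information; a quick sketch (the jail name is whatever you configured):

$ sudo fail2ban-client status
$ sudo fail2ban-client status sshd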

The fail2ban entries in the INPUT chain go before other rules, so it should work with any custom iptables rules you have configured, as long as either fail2ban is the last thing to be started or your custom rules don’t flush old entries.

There are hooks for sending email notifications etc. That seems excessive to me, but it’s always good to have options to extend a program.

In the past I’ve tried using kernel rate limiting to minimise hostile activity. That didn’t work well as there are legitimate end users who do strange things (like a user who set up their web-cam to email them every time it took a photo).

Conclusion

Fail2ban has some good features. I don’t think it will do much good at stopping account compromise: anything that is easily guessed could be guessed using many IP addresses, and anything that has a good password can’t be guessed without many years of brute-force attacks, which would also cause enough noise in the logs to be noticed. What it does do is get rid of some of the noise in log files, which makes it easier to find and fix problems. To me the main benefit is to improve the signal to noise ratio of my log files.

September 08, 2018

Google and Certbot (Letsencrypt)

Like most people I use Certbot AKA Letsencrypt to create SSL certificates for my sites. It’s a great service, very easy to use and it generally works well.
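
For anyone who hasn’t used it, a typical invocation looks something like this (the domain and webroot are placeholders, and the plugin you use may differ):

$ sudo certbot certonly --webroot -w /var/www/html -d www.example.com
$ sudo certbot renew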

Recently the server running www.coker.com.au among other domains couldn’t get a certbot certificate renewed, here’s the error message:

Failed authorization procedure. mail.gw90.de (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: "mail.gw90.de" was considered an unsafe domain by a third-party API, listen.gw90.de (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: "listen.gw90.de" was considered an unsafe domain by a third-party API

IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: mail.gw90.de
   Type:   unauthorized
   Detail: "mail.gw90.de" was considered an unsafe domain by a third-
   party API

   Domain: listen.gw90.de
   Type:   unauthorized
   Detail: "listen.gw90.de" was considered an unsafe domain by a
   third-party API

It turns out that Google Safebrowsing had listed those two sites. Visit https://listen.gw90.de/ or https://mail.gw90.de/ today (and maybe for some weeks or months in the future) using Google Chrome (or any other browser that uses the Google Safebrowsing database) and it will tell you the site is “Dangerous” and probably refuse to let you in.

One thing to note is that neither of those sites has any real content, I only set them up in Apache to get SSL certificates that are used for other purposes (like mail transfer as the name suggests). If Google had listed my blog as a “Dangerous” site I wouldn’t be so surprised, WordPress has had more than a few security issues in the past and it’s not implausible that someone could have compromised it and made it serve up hostile content without me noticing. But the two sites in question have a DocumentRoot that is owned by root and was (until a few days ago) entirely empty, now they have an index.html that just says “This site is empty”. It’s theoretically possible that someone could have exploited a RCE bug in Apache to make it serve up content that isn’t in the DocumentRoot, but that seems unlikely (why waste an Apache 0day on one of the less important of my personal sites). It is possible that the virtual machine in question was compromised (a VM on that server has been compromised before [1]) but it seems unlikely that they would host bad things on those web sites if they did.

Now it could be that some other hostname under that domain had something inappropriate (I haven’t yet investigated all possibilities). But if so Google’s algorithm has a couple of significant problems, firstly if they are blacklisting sites related to one that had an issue then it would probably make more sense to blacklist by IP address (which means including some coker.com.au entries on the same IP). In the case of a compromised server it seems more likely to have multiple bad sites on one IP than multiple bad subdomains on different IPs (given that none of the hostnames in question have changed IP address recently and Google of course knows this). The next issue is that extending blacklisting doesn’t make sense unless there is evidence of hostile intent. I’m pretty sure that Google won’t blacklist all of ibm.com when (not if) a server in that domain gets compromised. I guess they have different policies for sites of different scale.

Both I and a friend have reported the sites in question to Google as not being harmful, but that hasn’t changed anything yet. I’m very disappointed in Google, listing sites, not providing any reason why (it could be a hostname under that domain was compromised and if so it’s not fixed yet BECAUSE GOOGLE DIDN’T REPORT A PROBLEM), and not removing the listing when it’s totally obvious there’s no basis for it.

While it makes sense for certbot to not issue SSL certificates to bad sites, it seems that they haven’t chosen a great service for determining which sites are bad.

Anyway the end result was that some of my sites had an expired SSL certificate for a day. I decided not to renew certificates before they expired to give Google a better chance of noticing their mistake and then I was busy at the time they expired. Now presumably as the sites in question have an invalid SSL certificate it will be even harder to convince anyone that they are not hostile.

September 07, 2018

A floating shelf for tablets

The choice was between replacing the small marble table entirely or trying to "work around" it with walnut. The lower walnut tabletop is about 44cm by 55cm and is just low enough to give easy access to slide laptop(s) under the main table top. The top floating shelf is wide enough to happily accommodate two ipad sized tablets. The top shelf and lower tabletop are attached to the backing by steel brackets which cut through to the back through four CNC created mortises.


Cutting the mortises was interesting; I had to drop back to using a 1/2 inch cutting bit in order to service the 45mm depth of the timber. The back panel was held down with machining clamps, but toggles would have done the trick, it was just what was on hand at the time. I cut the mortises through from the back using an upcut bit and the front turned out very clean without any blow out. You could probably cut yourself on the finish, it was so clean.

The upcut doesn't make a difference in this job but it is always good to plan and see the outcomes for the next time when the cut will be exposed. The fine grain of walnut is great to work with CNC, though most of my bits are upcut for metal work.

I will likely move on to adding a head rest to the eames chair next. But that is a story for another day.

September 06, 2018

Money Matters

I have a few things I need to write, but am still a bit too sick with the flu to put together something novel, so instead I’m going to counter-blog Rob Collins’ recent claim that Money doesn’t matter. Rob’s thoughts are similar to ones I’ve had before, but I think they’re ultimately badly mistaken.

There’s three related, but very different, ways of thinking about money: as a store of value, as a medium of exchange, and as a unit of account. In normal times, dollars (or pounds or euros) work for all three things, so it’s easy to confuse them, but when you’re comparing different moneys some are better at one than another, and when a money starts failing, it will generally fail at each purpose at different rates.

Rob says “Money isn’t wealth” — but that’s wrong. In so far as money serves as a store of value, it is wealth. That’s why having a million dollars in your bank account makes you feel wealthy. The obvious failure mode for store of value is runaway inflation, and that quickly becomes a humanitarian disaster. Money can be one way to store value, but it isn’t the only way: you can store value by investing in artwork, buying property, building a company, or anything else that you expect to be able to sell at some later date. The main difference between those forms of investment versus money is that, ideally, monetary investments have low risk (perhaps the art you bought goes out of fashion and becomes worthless, or the company goes bankrupt, but your million dollars remains a million dollars), and low variance (you won’t make any huge profits, but you won’t make huge losses either). Unlike other assets, money also tends to be very fungible — if you earn $1000, you can spend $100 and have $900 left over; but if you have an artwork worth $1000 it’s a lot harder to sell one tenth of it.

Rob follows up by saying that money is “a thing you can exchange for other things”, which is true — money is a medium of exchange. Ideally it’s cheap and efficient, hard to counterfeit, and easy to verify. This is mostly a matter of technology: pretty gems are good at these things in some ways, coins and paper notes are good in others, cheques kind of work though they’re a bit too easy to counterfeit and a bit too hard to verify, and these days computer networks make credit card systems pretty effective. Ultimately a lot of modern systems have ended up as walled gardens though, and while they’re efficient, they aren’t cheap: whether you consider the 1% fees credit card companies charge, or the 2%-4% fees paypal charges, or the 30% fees from the Apple App Store or Google Play Stores, those are all a lot larger than how much you’d lose accepting a $50 note from someone directly. I have a lot of hope that Bitcoin’s Lightning Network will eventually have a huge impact here. Note that if money isn’t wealth — that is, it doesn’t manage to be a good store of value even in the short term, it’s not a good medium of exchange either: you can’t buy things with it because the people selling will have to immediately get rid of it or they’ll be making a loss; which is why currencies undergoing hyperinflation result in black markets where trade happens in stable currencies.

With modern technology and electronic derivatives, you could (in theory) probably avoid ever holding money. If you’re a potato farmer and someone wants to buy a potato from you, but you want to receive fertilizer for next season’s crop rather than paper money, the exchange could probably be fully automated by an online exchange so that you end up with an extra hundred grams of fertilizer in your next order, with all the details automatically worked out. If you did have such a system, you’d entirely avoid using money as a store of value (though you’d probably be using a credit account with your fertilizer supplier as a store of value), and you’d at least mostly avoid using money as a medium of exchange, but you’d probably still end up using money as a medium of account — that is you’d still be listing the price of potatoes in dollars.

A widely accepted unit of account is pretty important — you need it in order to make contracts work, and it makes comparing different trades much easier. Compare the question “should I sell four apples for three oranges, or two apples for ten strawberries?” with “should I sell four apples for $5, or two apples for $3” and “should I buy three oranges for $5 or ten strawberries for $3?” While I suppose it’s theoretically possible to do finance and economics without a common unit of account, it would be pretty difficult.

This is a pretty key part and it’s where money matters a lot. If you have an employment contract saying you’ll be paid $5000 a month, then it’s pretty important what “$5000” is actually worth. If a few months down the track there’s a severe inflation event, and it’s now worth significantly less, then you’ve just had a severe pay cut (eg, the Argentinian Peso dropped from 5c USD in April to 2.5c USD in September). If you’ve got a well managed currency, that usually means low but positive inflation, so you’ll instead get a 2%-5% pay cut every year — which is considered desirable by economists as it provides an automatic way to devote less resources to less valuable jobs, without managers having to deliberately fire people, or directly cut peoples’ pay. Of course, people tend to be as smart as economists, and many workers expect automatic pay rises in line with inflation anyway.

Rob’s next bit is basically summarising the concept of sticky prices: if there’s suddenly more money to go around, the economy goes weird because people aren’t able to fix prices to match the new reality quickly, causing shortages if there’s more money before there’s higher prices, or gluts (and probably a recession) if there’s less money and people can’t afford to buy all the stuff that’s around — this is what happened in the global financial crisis in 2008/9, though I don’t think there’s really a consensus on whether the blame for less money going around should be put on the government via the Federal Reserve, or the banks, or some other combination of actors.

To summarise so far: money does matter a lot. Having a common unit of account so you can give things meaningful prices is essential, having a convenient store of value that you can use for large and small amounts, and being able to easily trade it for goods and services is a really big deal. Screwing it up hurts people directly, and can be really massively harmful. You could probably use something different for medium of exchange than method of account (eg, a lot of places accepting cryptocurrencies use the cryptocurrency as medium of exchange, but use regular dollars for both store of value and pricing); but without a store of value you don’t have a medium of exchange, and once you’ve got a method of account, having it also work as a store of value is probably too convenient to skip.

But all that said, money is just a tool — generally money isn’t what anyone wants, people want the things they can get with money. Rob phrases that as “resources and productivity”, which is fine; I think the economics jargon would be “real GDP” — ie, the actual stuff that goes into GDP, as opposed to the dollar figure you put on it. Things start going wonky quickly though, in particular with the phrase “If, given the people currently in our country, and what they are being paid to do today, we have enough resources, and enough labour-and-productivity to …” — this starts mixing up nominal and real terms: people expect to be paid in dollars, but resources and labour are real units. If you’re talking about allocating real resources rather than dollars, you need to balance that against paying people in real resources rather than dollars as well, because that’s what they’re going to buy with their nominal dollars.

Why does that matter? Ultimately, because it’s very easy to get the maths wrong and not have your model of the national economy balanced: you allocate some resources here, pay some money there, then forget that the people you paid will use that money to reallocate some resources. If the error’s large enough and systemic enough, you’ll get runaway inflation and all the problems that go with it.

Rob has a specific example here: an unemployed (but skilled) builder, and a homeless family (who need a house built). Why not put the two together, magic up some money to prime the system and build a house? Voila the builder has a job, and the family has a home and everyone is presumably better off. But you can do the same thing without money: give the homeless family a loaded gun and introduce them to the builder: the builder has a job, and the family get a home, and with any luck the bullet doesn’t even get used! The key problem was that we didn’t inspect the magic sufficiently: the builder doesn’t want a job, or even money, he wants the rewards that the job and the money obtain. But where do those rewards come from? Maybe we think the family will contribute to the economy once they have a roof over their heads — if so, we could commit to that: forget the gun, the family goes to a bank, demonstrates they’ll be able to earn an income in future, and takes out a loan, then goes to the builder and pays for their house, and then they get jobs and pay off their mortgage. But if the house doesn’t let the family get jobs and pay for the home, the things the builder buys with his pay have to come from somewhere, and the only way that can happen is by making everyone else in the country a little bit poorer. Do that enough, and everyone who can will move to a different country that doesn’t have that problem.

Loans are a serious answer to the problem in general: if the family is going to be able to work and pay for the house eventually, the problem isn’t one of money, it’s one of risk: whoever currently owns the land, or the building supplies, or whatever doesn’t want to take the risk they’ll never see anything for letting the house get built. But once you have someone with funds who is willing to take the risk, things can start happening without any change in government policies. Loaning directly to the family isn’t the only way; you could build a set of units on spec, and run a charity that finds disadvantaged families, and sets them up, and maybe provide them with training or administrative support to help them get into the workforce, at which point they can pay you back and you can either turn a profit, or help the next disadvantaged family; or maybe both.

Rob then asks himself a bunch of questions, which I’ll answer too:

  • What about the foreign account deficit? (It doesn’t matter in the first place, unless perhaps you’re anti-immigrant, and don’t want foreigners buying property)
  • What about the fact that lots of land is already owned by someone? (There’s enough land in Australia outside of Sydney/Melbourne that this isn’t an issue; I don’t have any idea what it’s like in NZ, but see Tokyo for ways of housing people on very little land if it is a problem)
  • How do we fairly get the family the house they deserve? (They don’t deserve a house; if they want a nice house, they should work and save for it. If they’re going through hard times, and just need a roof over their heads, that’s easily and cheaply done, and doesn’t need a lot of space)
  • Won’t some people just ride on the coat-tails of others? (Yes, of course they will. That’s why you target the assistance to help them survive and get back on their feet, and if they want to get whatever it is they think they deserve, they can work for it, like everyone else)
  • Isn’t this going to require taking things other people have already earnt? (Generally, no: people almost always buy houses with loans, for instance, rather than being given them for free, or buying them outright; there might be a need to raises taxes, but not to fundamentally change them, though there might be other reasons why larger reform is worthwhile)

This brings us back to the claim Rob makes at the start of his blog: that the whole “government cannot pay for healthcare” thing is nonsense. It’s not nonsense: at the extreme, government can’t pay for enough healthcare for everyone to live to 120 while feeling like they’re 30. Even paying enough for everyone to have the best possible medical care isn’t feasible: even if NZ has a uniform health care system with 100% of its economy devoted to caring for the sick and disabled, there’s going to be a specialist facility somewhere overseas that does a better job. If there isn’t a uniform healthcare system (and there won’t be, even if only due to some doctors/nurses being individually more talented), there’ll also be better and worse places to go in NZ. The reason we have worrying fiscal crises in healthcare and aged support isn’t just a matter of money that can be changed with inflation, it’s that the real economic resources we’re expecting to have don’t align with the promises we’re already making. Those resources are usually expressed in dollar terms, but that’s because having a unit of account makes talking about these things easier: we don’t have to explicitly say “we’ll need x surgeons and y administrators and z MRI machines and w beds” but instead can just collect it all and say “we’ll need x billion dollars”, and leave out a whole mass of complexity, while still being reasonably accurate.

(Similar with “education” — there are limits to how well you can educate everyone, and there’s a trade off between how many resources you might want to put into educating people versus how many resources other people would prefer. In a democracy, that’s just something that’s going to get debated. As far as land goes, on the other hand, I don’t think there’s a fundamental limit to the government taking control over land it controls, though at least in Australia I believe that’s generally considered to be against the vibe of the constitution. If you want to fairly compensate land holders for taking their land, that goes back to budget negotiations and government priorities, and doesn’t seem very interesting in the abstract)

Probably the worst part of Rob’s blog is this though: “We get 10% less things done. Big deal.” Getting 10% less things done is a disaster; for comparison, the Great Recession in the US had a GDP drop of less than half that, at -4.2% between 2007Q4 and 2009Q2, and the Great Depression was supposedly about -15% between 1929 and 1932. Also, saying “we’d want 90% of folk not working” is pretty much saying “90% of folk have nothing of value to contribute to anyone else”, because if they did, they could do that, be paid for it, and voila, they’re working. That simply doesn’t seem plausible to me, and I think things would get pretty ugly if it ended up that way despite its implausibility.

(Aside: for someone who’s against carbs, “potato farmer” as the go to example seems an interesting choice… )

September 04, 2018

New Developments in Supercomputing

Over the past 33 years the International Supercomputing Conference in Germany has become one of the world's major computing events, with the twice-yearly announcement of the Top500 systems, which continue to be dominated entirely by Linux systems. In June this year over 3,500 people attended ISC with a programme of tutorials, workshops and miniconferences, poster sessions, student competitions, a vast vendor hall, and numerous other events.

This presentation gives an overview of ISC and makes an attempt to cover many of the new developments and directions in supercomputing, including new systems, metrics and measurement, machine learning, and HPC education. In addition, the presentation will also feature material from the HPC Advisory Council conference held in Fremantle in August.

Kubernetes Fundamentals: Setting up nginx ingress


I’m doing the Linux Foundation Kubernetes Fundamentals course at the moment, and I was very disappointed in the chapter on Ingress Controllers. To be honest it feels like an afterthought — there is no lab, and the provided examples don’t work if you re-type them into Kubernetes (you can’t cut and paste of course, just to add to the fun).

I found this super annoying, so I thought I’d write up my own notes on how to get nginx working as an Ingress Controller on Kubernetes.

First off, the nginx project has excellent installation resources online at github. The only wart with their instructions is that they changed the labels used on the pods for the ingress controller, which means the validation steps in the document don’t work until that is fixed. That is reported in a github issue, and there was a proposed fix that pre-dates the creation of the issue and so wasn’t linked to it.

The basic process, assuming a baremetal Kubernetes install, is this:

$ NGINX_GITHUB="https://raw.githubusercontent.com/kubernetes/ingress-nginx"
$ kubectl apply -f $NGINX_GITHUB/master/deploy/mandatory.yaml
$ kubectl apply -f $NGINX_GITHUB/master/deploy/provider/baremetal/service-nodeport.yaml

Wait for the pods to fetch their images, and then check if the pods are healthy:

$ kubectl get pods -n ingress-nginx
NAME                                       READY     STATUS    RESTARTS   AGE
default-http-backend-6586bc58b6-tn7l5      1/1       Running   0          20h
nginx-ingress-controller-79b7c66ff-m8nxc   1/1       Running   0          20h

That bit is mostly explained by the Linux Foundation course. Well, he links to the github page at least and then you just read the docs. The bit that isn’t well explained is how to set up ingress for a pod. This is partially because kubectl doesn’t have a command line to do this yet — you have to POST an API request to get it done instead.

First, let’s create a target deployment and service:

$ kubectl run ghost --image=ghost
deployment.apps/ghost created
$ kubectl expose deployments ghost --port=2368
service/ghost exposed

The YAML to create an ingress for this new “ghost” service looks like this:

$ cat sample_ingress.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ghost
spec:
  rules:
  - host: ghost.10.244.2.13.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: ghost
          servicePort: 2368

Where 10.244.2.13 is the IP that my CNI assigned to the nginx ingress controller. You can look that up with a describe of the nginx ingress controller pod:

$ kubectl describe pod nginx-ingress-controller-79b7c66ff-m8nxc -n ingress-nginx | grep IP
IP:                 10.244.2.13

Now we can create the ingress entry for this ghost deployment:

$ kubectl apply -f sample_ingress.yaml 
ingress.extensions/ghost created

This causes the nginx configuration to get re-created inside the nginx pod by magic pixies. Now, assuming we have a route from our desktop to 10.244.2.13, we can just go to http://ghost.10.244.2.13.nip.io in a browser and we should be greeted by the default front page for the ghost installation (which turns out to be a publishing platform, who knew?).
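
If you’d rather test from the command line than a browser, a simple request to the same hostname should confirm the ingress is routing (assuming the same route to 10.244.2.13 exists):

$ curl -I http://ghost.10.244.2.13.nip.io/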

To clean up the ingress, you can use the normal “get”, “describe”, and “delete” verbs that you use for other things in kubectl, with the object type of “ingress”.
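
For example (a quick sketch, using the ingress created above):

$ kubectl get ingress
$ kubectl describe ingress ghost
$ kubectl delete ingress ghost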


September 03, 2018

Audiobooks – August 2018

Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari

An interesting listen. Covers both history of humanity and then extrapolates ways things might go in the future. Many plausible ideas (although no doubt some huge misses). 8/10

Higher: A Historic Race to the Sky and the Making of a City by Neal Bascomb

The architects, owners & workers behind the Manhattan Trust Building, the Chrysler Building and the Empire State Building, all being built in New York at the end of the roaring 20s. Fascinating and well done. 9/10

The Invention Of Childhood by Hugh Cunningham

The story of British childhood from the year 1000 to the present. Lots of quotes (by actors) from primary sources such as letters (which is less distracting than sometimes). 8/10

The Sign of Four by Arthur Conan Doyle – Read by Stephen Fry

Very well done reading by Fry. Story excellent of course. 8/10

My Happy Days in Hollywood: A Memoir by Garry Marshall

Memoir by writer, producer (Happy Days, etc) and director (Pretty Woman, The Princess Diaries, etc). Great stories mostly about the positive side of the business. Very inspiring 8/10

Napoleon by J. Christopher Herold

A biography of Napoleon with a fair amount about the history of the rest of Europe during the period thrown in. A fairly short (11 hours) book with some, but not exhaustive, detail. 7/10

Storm in a Teacup: The Physics of Everyday Life by Helen Czerski

A good popular science book linking everyday situations and objects with bigger concepts (eg coffee stains to blood tests). A fun listen. 7/10

All These Worlds Are Yours: The Scientific Search for Alien Life by Jon Willis

The author reviews recent developments in the search for life and suggests places it might be found and how missions to search them (he gives himself a $4 billion budget) should be prioritised. 8/10

Ready Player One by Ernest Cline

I’m right in the middle of the demographic for most of the references here so I really enjoyed it. Good voicing by Wil Wheaton too. Story is straightforward but all pretty fun. 8/10


September 01, 2018

Suggestions for Trump Supporters

I’ve had some discussions with Trump supporters recently. Here are some suggestions for anyone who wants to have an actual debate about political issues. Note that this may seem harsh to Trump supporters. But it seems harsh to me when Trump supporters use a social event to try and push their beliefs without knowing any of the things I list in this post. If you are a Trump supporter who doesn’t do these things then please try to educate your fellow travellers, they are more likely to listen to you than to me.

Facts

For a discussion to be useful there has to be a basis in facts. When one party rejects facts there isn’t much point. Anyone who only takes their news from an ideological echo chamber is going to end up rejecting facts. The best thing to do is use fact checking sites of which Snopes [1] is the best known. If you are involved in political discussions you should regularly correct people who agree with you when you see them sharing news that is false or even merely unsupported by facts. If you aren’t correcting mistaken people on your own side then you do your own cause a disservice by allowing your people to discredit their own arguments. If you aren’t regularly seeking verification of news you read then you are going to be misled. I correct people on my side regularly, at least once a week. How often do you correct your side?

The next thing is that some background knowledge of politics is necessary. Politics is not something that you can just discover by yourself from first principles. If you aren’t aware of things like Dog Whistle Politics [2] then you aren’t prepared to have a political debate. Note that I’m not suggesting that you should just learn about Dog Whistle Politics and think you are ready to have a debate, it’s one of many things that you need to know.

Dog whistle politics is nothing new or hidden, if you don’t know about such basics you can’t really participate in a discussion of politics. If you don’t know such basics and think you can discuss politics then you are demonstrating the Dunning-Kruger effect [3].

The Southern Strategy [4] is well known by everyone who knows anything about US politics. You can think it’s a good thing if you wish and you can debate the extent to which it still operates, but you can’t deny it happened. If you are unaware of such things then you can’t debate US politics.

The Civil rights act of 1964 [5] is one of the most historic pieces of legislation ever passed in the US. If you don’t know about it then you just don’t know much about US politics. You may think that it is a bad thing, but you can’t deny that it happened, or that it happened because of the Democratic party. This was the time in US politics when the Republicans became the party of the South and the Democrats became the centrist (possibly left) party that they are today. It is ridiculous to claim that Republicans are against racism because Abraham Lincoln was a Republican. Ridiculous claims might work in an ideological echo chamber but they won’t convince anyone else.

Words Have Meanings

To communicate we need to have similar ideas of what words mean. If you use words in vastly different ways to other people then you can’t communicate with them. Some people in the extreme right claim that because the Nazi party in Germany was the
“Nationalsozialistische Deutsche Arbeiterpartei” (“NSDAP”) which translates to English as “National Socialist German Workers Party” that means that they were “socialists”. Then they claim that “socialists” are “leftist” so therefore people on the left are Nazis. That claim requires using words like “left” and “socialism” in vastly different ways to most people.

Snopes has a great article about this issue [6], I recommend that everyone read it, even those who already know that Nazis weren’t (and aren’t) on the left side of politics.

The Wikipedia page of the Unite the Right rally [7] (referenced in the Snopes article) has a photo of people carrying Nazi flags and Confederate flags. Those people are definitely convinced that Nazis were not left wing! They are also definitely convinced that people on the right side of politics (which in the US means the Republican party) support the Confederacy and oppose equal rights for Afro-American people. If you want to argue that the Republican party is the one opposed to racism then you need to come up with an explanation for how so many people who declare themselves on the right of politics got it wrong.

Here’s a local US news article about the neo-Nazi who had “commie killer” written on his helmet while beating a black man almost to death [8]. Another data point showing that Nazis don’t like people on the left.

In other news East Germany (the German Democratic Republic) was not a
democracy. North Korea (the Democratic People’s Republic of Korea) is not a democracy either. The use of “socialism” by the original Nazis shouldn’t be taken any more seriously than the recent claims by the governments of East Germany and North Korea.

Left vs right is a poor summary of political positions, the Political Compass [9] is better. While Hitler and Stalin have different positions on economics I think that citizens of those countries didn’t have very different experiences, one extremely authoritarian government is much like another. I recommend that you do the quiz on the Political Compass site and see if the people it places in similar graph positions to you are ones who you admire.

Sources of Information

If you are only using news sources that only have material you agree with then you are in an ideological echo chamber. When I recommend that someone look for other news sources what I don’t expect in response is an email analysing a single article as justification for rejecting that entire news site. I recommend sites like the New York Times as having good articles, but they don’t only have articles I agree with and they sometimes publish things I think are silly.

A news source that makes ridiculous claims such as that Nazis are “leftist” is ridiculous and should be disregarded. A news source that merely has some articles you disagree with might be worth using.

Also if you want to convince people outside your group of anything related to politics then it’s worth reading sites that might convince them. I often read The National Review [10], not because I agree with their articles (that is a rare occurrence) but because they write for rational conservatives and I hope that some of the extreme right wing people will find their ideas appealing and come back to a place where we can have useful discussions.

When evaluating news articles and news sources one thing to consider is Occam’s Razor [11]. If an article has a complex and implausible theory when a simpler theory can explain it then you should be sceptical of that article. There are conspiracies but they aren’t as common as some people believe and they are generally of limited complexity due to the difficulty people have in keeping secrets. An example of this is some of the various conspiracy theories about storage of politicians’ email. The simplest explanation (for politicians of all parties) is that they tell someone like me to “just make the email work” and if their IT staff doesn’t push back and refuse to do it without all issues being considered then it’s the IT staff at fault. Stupidity explains many things better than conspiracies. Regardless of the party affiliation, any time a politician is accused of poor computer security I’ll ask whether someone like me did their job properly.

Covering for Nazis

Decent people have to oppose Nazis. The Nazi belief system is based on the mass murder of people based on race and the murder of people who disagree with them. In Germany in the 1930s there were some people who could claim not to know about the bad things that Nazis were doing and they could claim to only support Nazis for other reasons. Neo-Nazis are not about creating car companies like VolksWagen; all they are about is hatred. The crimes of the original Nazis are well known and well documented, and it’s not plausible that anyone could be unaware of them.

Mitch McConnell has clearly stated “There are no good neo-Nazis” [12] in clear opposition to Trump. While I disagree with Mitch on many issues, this is one thing we can agree on. This is what decent people do, they work together with people they usually disagree with to oppose evil. Anyone who will support Nazis out of tribal loyalty has demonstrated the type of person they are.

Here is an article about the alt-right meeting to celebrate Trump’s victory where Richard Spencer said “hail Trump, hail our people, hail victory” while many audience members give the Nazi salute [13]. You can skip to 42 seconds in if you just want to see that part. Trump supporters try to claim it’s the “Roman salute”, but that’s not plausible given that there’s no evidence of Romans using such a salute and it was first popularised in Fascist Italy [14]. The Wikipedia page for the Nazi Salute [15] notes that saying “hail Hitler” or “hail victory” was standard practice while giving the salute. I think that it’s ridiculous to claim that a group of people offering the Hitler salute while someone says “hail Trump” and “hail victory” are anything but Nazis. I also think it’s ridiculous to claim to not know of any correlation between the alt-right and Nazis and then immediately know about the “Roman Salute” defence.

The Americans used to have a salute that was essentially the same as the Nazi Salute; the Bellamy Salute was officially replaced by the hand-over-heart salute in 1942 [16]. They don’t want anything close to a Nazi salute, and no-one did until very recently when neo-Nazis stopped wearing Klan outfits in the US.

Every time someone makes claims about a supposed “Roman salute” explanation for Richard Spencer’s fans I wonder if they are a closet Nazi.

Anti-Semitism

One final note, I don’t debate people who are open about neo-Nazi beliefs. When someone starts talking about a “Jewish Conspiracy” or uses other Nazi phrases then the conversation is over. Nazis should be shunned. One recent conversation with a Trump supporter ended quickly after he started talking about a “Jewish conspiracy”. He tried to get me back into the debate by claiming “there are non-Jews in the conspiracy too” but I was already done with him.

Decent Trump Supporters

If you want me to believe that you are one of the decent Trump supporters a good way to start is to disclaim the horrible ideas that other Trump supporters endorse. If you can say “I believe that black people and Jews are my equal and I will not stand next to or be friends with anyone who carries a Nazi flag” then we can have a friendly discussion about politics. I’m happy to declare “I have never supported a Bolshevik revolution or the USSR and will never support such things” if there is any confusion about my ideas in that regard. While I don’t think any reasonable person would think that I supported the USSR I’m happy to make my position clear.

I’ve had people refuse to disclaim racism when asked. If you can’t clearly say that you consider people of other races to be your equal then everyone will think that you are racist.

August 31, 2018

Exploring Issues in Event-Based HPC Cloudbursting

The use of cloud compute, especially in proportion to single-node tasks, provides a more effective allocation of financial resources. The introduction of cloud-bursting to scheduling systems could ideally provide on-demand compute resources for High Performance Computing (HPC) systems, where queue wait-times are a source of user consternation.

Using experiential examples in Slurm's Cloudbursting capability (an extension of the scheduler's power management features), initial successes and bug discoveries highlight the problems of replication and latency that limit the scope of cloudbursting. Nevertheless, under such circumstances wrapper scripts for particular subsets of jobs are still considered viable; an example of this approach is indicated by MOAB/NODUS.

A presentation to HPC:AI 2018 Perth Conference

Keystone Federated Swift – Multi-region cluster, multiple federation, access same account

Welcome to the final post in the series, it has been a long time coming. If required/requested I’m happy to delve into any of these topics deeper, but I’ll attempt to explain the situation, the best approach to take and how I got a POC working, which I am calling the brittle method. It definitely isn’t the best approach, but as it was done solely on the Swift side, and as I am an OpenStack Swift dev, it was the quickest and easiest for me when preparing for the presentation.

To first understand how we can build a federated environment where we have access to our account no matter where we go, we need to learn about how keystone authentication works from a Swift perspective. Then we can look at how we can solve the problem.

Swift’s Keystoneauth middleware

As mentioned in earlier posts, there isn’t any magic in the way Swift authentication works. Swift is an end-to-end storage solution and so authentication is handled via authentication middlewares. Further, a single Swift cluster can talk to multiple auth backends, which is where the `reseller_prefix` comes into play. This was the first approach I blogged about in this series.


There is nothing magical about how authentication works; keystoneauth has its own idiosyncrasies, but in general it simply makes a decision about whether a request should be allowed. That makes writing your own middleware simple, and maybe an easy way around the problem, i.e. write an auth middleware to authenticate directly against your existing company LDAP server or authentication system.


To set up keystone authentication, you use keystone’s authtoken middleware and directly afterwards in the pipeline place the Swift keystoneauth middleware, configuring each of them in the proxy configuration:

pipeline = ... authtoken keystoneauth ... proxy-server

The authtoken middleware

Generally every request to Swift will include a token, unless it’s using tempurl, container-sync, or going to a container that has global read enabled, but you get the point.

As the swift-proxy is a python wsgi app, the request first hits the first middleware in the pipeline (left most) and works its way to the right. When it hits the authtoken middleware the token in the request will be sent to keystone to be authenticated.

The resulting metadata, i.e. the user, storage_url, groups, roles etc., is dumped into the request environment and then passed to the next middleware: the keystoneauth middleware.

The keystoneauth middleware

The keystoneauth middleware checks the request environment for the metadata dumped by the authtoken middleware and makes a decision based on that. Things like:

  • If the token was one for one of the reseller_admin roles, then they have access.
  • If the user isn’t a swift user of the account/project the request is for, is there an ACL that will allow it.
  • If the user has a role that identifies them as a swift user/operator of this Swift account then great.


When checking to see if the user has access to the given account (Swift account) it needs to know what account the request is for. This is easily determined as it’s defined by the path of the URL you’re hitting. The URL you send to the Swift proxy is what we call the storage URL, and it is in the form of:

http(s)://<url of proxy or proxy vip>/v1/<account>/<container>/<object>

The container and object elements are optional as it depends on what you’re trying to do in Swift. When the keystoneauth middleware is authenticating, it’ll check that the project_id (or tenant_id) metadata dumped by authtoken, when concatenated with the reseller_prefix, matches the account in the given storage_url. For example let’s say the following metadata was dumped by authtoken:

{
"X_PROJECT_ID": 'abcdefg12345678',
"X_ROLES": "swiftoperator",
...
}

And the reseller_prefix for keystone auth was AUTH_ and we make any member of the swiftoperator role (in keystone) a swift operator (a swift user on the account). Then keystoneauth would allow access if the account in the storage URL matched AUTH_abcdefg12345678.
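
To make that concrete, a request for that account through the proxy would look roughly like this (hypothetical proxy host and token, with the account being the reseller_prefix plus the project_id from the metadata above):

$ curl -i -H "X-Auth-Token: $TOKEN" http://swiftproxy:8080/v1/AUTH_abcdefg12345678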


When you authenticate to keystone the object storage endpoint will point not only to the Swift endpoint (the swift proxy or swift proxy load balancer), but it will also include your account. Based on your project_id. More on this soon.


Does that make sense? Simply put, to use keystoneauth in a multi-federated environment we just need to make sure that no matter which keystone we end up using, asking for the swift endpoint always returns the same Swift account name.

And there lies our problem: the keystone object storage endpoint and the metadata authtoken dumps both use the project_id/tenant_id. This isn’t something that is synced or can be passed via federation metadata.

NOTE: This also means that you’d need to use the same reseller_prefix on all keystones in every federated environment. Otherwise the accounts won’t match.


Keystone Endpoint and Federation Side

When you add an object storage endpoint in keystone, for swift, the url looks something like:

http://swiftproxy:8080/v1/AUTH_$(tenant_id)s


Notice the $(tenant_id)s at the end? This is a placeholder that keystone internally will replace with the tenant_id of the project you authenticated as. $(project_id)s can also be used and maps to the same thing. And this is our problem.
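
For reference, an endpoint of that form is typically registered with something like the following (the region name and interface are placeholders; the URL is the one above):

$ openstack endpoint create --region RegionOne object-store public 'http://swiftproxy:8080/v1/AUTH_$(tenant_id)s'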

When setting up federation between keystones (assuming keystone to keystone federation) you generate a mapping. This mapping can include the project name, but not the project_id. These ids are auto-generated, not deterministic by name, so creating the same project on different federated keystone servers will result in different project_ids. When a keystone service provider (SP) federates with a keystone identity provider (IdP) the mapping they share shows how the provider should map federated users locally. This includes creating a shadow project if a project doesn’t already exist for the federated user to be part of.

Because there is no way to sync project_ids in the mapping, the SP will create the project, which will have a unique project_id. Meaning when the federated user has authenticated, their Swift storage endpoint from keystone will be different; in essence, as far as Swift is concerned, they will have access, but to a completely different Swift account. Let’s use an example: let’s say there is a project on the IdP called ProjectA.

           project_name        project_id
  IdP      ProjectA            75294565521b4d4e8dc7ce77a25fa14b
  SP       ProjectA            cb0d5805d72a4f2a89ff260b15629799

Here we have a ProjectA on both IdP and SP. The one on the SP would be considered a shadow project to map the federated user to. However the project_ids are both different, because they are uniquely generated when the project is created in each keystone environment. Taking the Object Storage endpoint in keystone as our example before we get:


          Object Storage Endpoint
  IdP     http://swiftproxy:8080/v1/AUTH_75294565521b4d4e8dc7ce77a25fa14b
  SP      http://swiftproxy:8080/v1/AUTH_cb0d5805d72a4f2a89ff260b15629799

So when talking to Swift you’ll be accessing different accounts, AUTH_75294565521b4d4e8dc7ce77a25fa14b and AUTH_cb0d5805d72a4f2a89ff260b15629799 respectively. This means objects you write in one federated environment will be placed in a completely different account, so you won’t be able to access them from elsewhere.


Interesting ways to approach the problem

Like I stated earlier the solution would simply be to always be able to return the same storage URL no matter which federated environment you authenticate to. But how?

  1. Make sure the same project_id/tenant_id is used for _every_ project with the same name, or at least the same name in the domains that the federation mapping maps to. This means direct DB hacking, so not a good solution; we should solve this in code, not make OPs go hack databases.
  2. Have a unique id for projects/tenants that can be synced in federation mapping, also make this available in the keystone endpoint template mapping, so there is a consistent Swift account to use. Hey we already have project_id which meets all the criteria except mapping, so that would be easiest and best.
  3. Use something that _can_ be synced in a federation mapping. Like domain and project name. Except these don’t map to endpoint template mappings. But with a bit of hacking that should be fine.

Of the above approaches, 2 would be the best. 3 is good, except that if you pick something mutable like the project name and you ever change it, you’d now authenticate to a completely different swift account, meaning you’d have just lost access to all your old objects! And you may find yourself with grumpy Swift Ops who now need to do a potentially large data migration, or you’d be forced to never change your project name.

Option 2, being unique and immutable, won’t change, though the project id doesn’t make for a very memorable account name. Maybe you could offer people a more memorable immutable project property to use. But to keep the change simple, being able to simply sync the project_id should get us everything we need.


When I was playing with this, it was for a presentation, so I had a time limit, a very strict one. So being a Swift developer and knowing the Swift code base, I hacked together a variant of option 3 that didn’t involve hacking keystone at all. Why? Because I needed a POC and didn’t want to spend most of my time figuring out the inner workings of Keystone, when I could just do a few hacks to have a complete Swift-only version. And it worked. Though I wouldn’t recommend it; option 3 is very brittle.


The brittle method – Swift only side – Option 3b

Because I didn’t have time to simply hack keystone, I took a different approach. The basic idea was to let authtoken authenticate and then finish building the storage URL on the swift side using the meta-data authtoken dumps into the wsgi request environment, thereby modifying the way keystoneauth authenticates slightly.

Step 1 – Give the keystoneauth middleware the ability to complete the storage url

By default we assume the incoming request will point to a complete account, meaning the object storage endpoint in keystone will end with something like:

'<uri>/v1/AUTH_%(tenant_id)s'

So let’s enhance keystoneauth to have the ability, when given only the reseller_prefix, to complete the account itself. So I added a use_dynamic_reseller option.

If you enable use_dynamic_reseller then the keystoneauth middleware will pull the project_id from authtoken‘s meta-data dumped in the wsgi environment. This allows a simplified keystone endpoint in the form:

'<uri>/v1/AUTH_'

This shortcut makes configuration easier, but can only be reliably used when on your own account and providing a token. API elements like tempurl and public containers need the full account in the path.

This still uses project_id so it doesn't solve our problem, but it meant I could get rid of the %(tenant_id)s from the endpoints. Here is the commit in my github fork.
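
To make that concrete, here is a rough sketch of the kind of logic use_dynamic_reseller implies inside the keystoneauth middleware. This is not the actual patch; the method and attribute names are mine, and the env keys are simply the headers authtoken normally copies into the wsgi environment:

def _get_account(self, env, account_from_path):
    # Without the option, trust the account that came from the
    # endpoint path (e.g. AUTH_<project_id>), as before.
    if not self.use_dynamic_reseller:
        return account_from_path

    # authtoken dumps the validated token's metadata into the wsgi
    # environment; the authenticated project id is in X-Project-Id.
    project_id = env.get('HTTP_X_PROJECT_ID')
    return '%s%s' % (self.reseller_prefix, project_id)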

Step 2 – Extend the dynamic reseller to include completing storage url with names

Next, we extend the keystoneauth middleware a little bit more. Give it another option, use_dynamic_reseller_name, to complete the account with either project_name, or domain_name and project_name, but only if you're using keystone authentication version 3.

If you are, and want to have an account based on the name of the project, then you can enable use_dynamic_reseller_name in conjunction with use_dynamic_reseller to do so. The form used for the account would be:

<reseller_prefix><project_domain_name>_<project_name>

So using our earlier example, with a reseller_prefix of AUTH_, a project_domain_name of Domain and a project name of ProjectA, this would generate the account:

AUTH_Domain_ProjectA

This patch is also in my github fork.
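
Again as a rough sketch (the names are mine, not the patch's), the name-based variant simply swaps which token metadata is used to build the account:

def _get_account_from_name(self, env):
    # Only meaningful with keystone v3 tokens, which carry the
    # project domain name as well as the project name.
    domain = env.get('HTTP_X_PROJECT_DOMAIN_NAME')
    project = env.get('HTTP_X_PROJECT_NAME')
    # <reseller_prefix><project_domain_name>_<project_name>
    return '%s%s_%s' % (self.reseller_prefix, domain, project)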

Does this work? Yes! But as I've already mentioned, this is _very_ brittle. It also makes it confusing to know when you need to provide only the reseller_prefix and when you need your full account name. It would be so much easier to just extend keystone to sync and create shadow projects with the same project_id. Then everything would just work without hacking.

Moving to Melbourne

Now that the paperwork has finally all been dealt with, I can announce that I'll be moving down to Melbourne to take up a position with the Australian Synchrotron, basically a super duper x-ray machine used for research of all types. My official position is Senior Scientific Software Engineer. I'll be moving down to Melbourne shortly, staying with friends (you remember that offer you made, months ago?) until I find a rental near Monash Uni, Clayton.

I will be leaving behind Humbug, the computer group that basically opened up my entire career, and The Edge, SLQ, my home-away-from-home study. I do hope to be able to find replacements for these down south.

I’m looking at having a small farewell nearby soon.

A shout out to Netbox Blue for supplying all my packing boxes. Allll of them.

This Week in Australian History

The end of August and beginning of September is traditionally linked to the beginning of Spring in Australia, although the change in seasons is experienced in different ways in different parts of the country and was marked in locally appropriate ways by Aboriginal people. As a uniquely Australian celebration of Spring, National Wattle Day, celebrated […]

Mā te wā, Aotearoa

Today I have some good news and sad news. The good news is that I’ve been offered a unique chance to drive “Digital Government” Policy and Innovation for all of government, an agenda including open government, digital transformation, technology, open and shared data, information policy, gov as a platform, public innovation, service innovation and policy innovation. For those who know me, these are a few of my favourite things :)

The sad news, for some folk anyway, is I need to leave New Zealand Aotearoa to do it.

Over the past 18 months (has it only been that long!) I have been helping create a revolutionary new way of doing government. We have established a uniquely cross-agency funded and governed all-of-government function, a “Service Innovation Lab”, for collaborating on the design and development of better public services for New Zealand. By taking a “life journey” approach, government agencies have a reason to work together to improve the full experience of people rather than the usual (and natural) focus on a single product, service or portfolio. The Service Innovation Lab has a unique value in providing an independent place and way to explore design-led and evidence-based approaches to service innovation, in collaboration with service providers across public, private and non-profit sectors. You can see everything we’ve done over the past year here  and from the first 10 week experiment here. I have particularly enjoyed working with and exploring the role of the Citizen Advice Bureau in New Zealand as a critical and trusted service delivery organisation in New Zealand. I’m also particularly proud of both our work in exploring optimistic futures as a way to explore what is possible, rather than just iterate away from pain, and our exploration of better rules for government including legislation as code. The next stage for the Lab is very exciting! As you can see in the 2017-18 Final Report, there is an ambitious work programme to accelerate the delivery of more integrated and more proactive services, and the team is growing with new positions opening up for recruitment in the coming weeks!

Please see the New Zealand blog (which includes my news) here

Professionally, I get most excited about system transformation. Everything we do in the Lab is focused on systemic change, and it is doing a great job at having an impact on the NZ (and global) system around it, especially for its size. But a lot more needs to be done to scale both innovation and transformation. Personally, I have a vision for a better world where all people have what they need to thrive, and I feel a constant sense of urgency in transitioning our public institutions into the 21st century, from an industrial age to the information age, so they can more effectively support society as the speed of change and complexity exponentially grows. This is going to take a rethink of how the entire system functions, especially at the policy and legislative levels.

With this in mind, I have been offered an extraordinary opportunity to explore and demonstrate systemic transformation of government. The New South Wales Department of Finance, Services and Innovation (NSW DFSI) has offered me the role of Executive Director for Digital Government, a role responsible for the all-of-government policy and innovation for ICT, digital, open, information, data, emerging tech and transformation, including a service innovation lab (DNA). This is a huge opportunity to drive systemic transformation as part of a visionary senior leadership team with Martin Hoffman (DFSI Secretary) and Greg Wells (GCIDO). I am excited to be joining NSW DFSI, and the many talented people working in the department, to make a real difference for the people of NSW. I hope our work and example will help raise the bar internationally for the digital transformation of governments for the benefit of the communities we serve.

Please see the NSW Government media release here.

One of the valuable lessons from New Zealand that I will be taking forward in this work has been in how public services can (and should) engage constructively and respectfully with Indigenous communities, not just because they are part of society or because it is the right thing to do, but to integrate important principles and context into the work of serving society. Our First Australians are the oldest cluster of cultures in the world, and we have a lot to learn from them in how we live and work today.

I want to briefly thank the Service Innovation team, each of whom is utterly brilliant and inspiring, as well as the wonderful Darryl Carpenter and Karl McDiarmid for taking that first leap into the unknown to hire me and see what we could do. I think we did well :) I'm delighted that Nadia Webster will be taking over leading the Lab work and has an extraordinary team to take it forward. I look forward to collaborating between New Zealand and New South Wales, and a race to the top for more inclusive, human centred, digitally enabled and values driven public services.

My last day at the NZ Government Service Innovation Lab is the 14th September and I start at NSW DFSI on the 24th September. We’ll be doing some last celebratory drinks on the evening of the 13th September so hold the date for those in Wellington. For those in Sydney, I can’t wait to get started and will see you soon!

August 30, 2018

Simple Keras “Hello World” Example – Mean Removal

Inspired by the Wavenet work with Codec 2 I’m dipping my toe into the word of Deep Learning (DL) using Keras. I’ve read Deep Learning with Python (quite an enjoyable read) and set up a Linux box with a GTX graphics card that is making my teenage sons weep with jealousy.

So after a couple of days of messing about here is my first “hello world” Keras example: mean_removal.py. It might be helpful for other Keras noobs. Assuming you have all the packages installed, it runs with either Python 2:

$ python mean_removal.py

Or Python 3:

$ python3 mean_removal.py

It removes the mean from vectors, using just a single layer regression model. The script runs OK on a regular PC without a chunky graphics card.

So I start by generating vectors from random numbers with a zero mean. I then add a random offset to each sample in the vector. Here are 5 vectors with random offsets added to them:

The Keras script is then trained to estimate and remove the offsets, so the output vectors look like:

Estimating the offset is the same as finding the “mean” of the vector. Yes I know we can do that with a “mean” function, but where’s the fun in that!
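
For other Keras noobs, here is a minimal sketch of the kind of single-layer regression model described above. It is not the actual mean_removal.py; the vector length, training set size and optimiser settings are all my assumptions:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

N = 32           # samples per vector (assumed)
n_train = 10000  # number of training vectors (assumed)

# Zero-mean random vectors, then add a random offset to every sample
x = np.random.randn(n_train, N)
x -= x.mean(axis=1, keepdims=True)
offsets = np.random.uniform(-20, 20, (n_train, 1))
x_offset = x + offsets

# A single Dense layer with linear activation can learn the
# "subtract the mean" mapping from offset vectors back to the originals
model = Sequential()
model.add(Dense(N, activation='linear', input_shape=(N,)))
model.compile(loss='mse', optimizer='adam')
model.fit(x_offset, x, epochs=20, batch_size=32, validation_split=0.1)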

Here are some other plots that show the training and validation measures, and error metrics at the output:



The last two plots show pretty much all of the offset is removed and it restores the original (non offset) vectors with just a tiny bit of noise. I had to wind back the learning rate to get it to converge without “NAN” losses, possibly as I’m using fairly big input numbers. I’m familiar with the idea of learning rate from NLMS adaptive filters, such as those used for my work in echo cancellation.

Deep Learning for Codec 2

My initial ambitions for DL are somewhat more modest than the sample-by-sample synthesis used in the Wavenet work. I have some problems with Vector Quantisation (VQ) in the low rate Codec 2 modes. The VQ is used to compactly describe the speech spectrum, which carries the intelligibility of the signal.

The VQ gets upset with different microphones, speakers, or minor spectral shaping like gentle high pass/low pass filtering. This shaping often causes a poor vector to be chosen, which results in crappy speech quality. The current VQ error measure can’t tell the difference between spectral features that matter and those that don’t.

So I’d like to try DL to address those issues, and train a system to say “look, this speech and this speech are actually the same. Yes I know one of them has a 3dB/octave low pass filter, please ignore that”.

As emphasised in the text book above, some feature extraction can help with DL problems. For my first pass I’ll be working on parameters extracted by the Codec 2 model (like a compact version of the speech spectrum) rather than speech samples like Wavenet. This will reduce my CPU load significantly, at the expense of speech quality, which will be limited by the unquantised Codec 2 model. But that’s OK as a first step. A notch or two up on Codec 2 at 700 bit/s would be very useful, especially if it can run on a CPU from the first two decades of the 21st century.

Mean Removal on Speech Vectors

So to get started with Keras I chose mean removal. The mean level or constant offset is like the volume or energy in a speech signal; it's the simplest form of spectral shaping I could imagine. I trained and tested it with vectors of random numbers, using numbers in the range of the speech spectral samples that Codec 2 plays with.

It’s a bit like an equaliser, vectors with arbitrary spectral shaping go in, “flat” unshaped vectors come out. They can then be sent to a Vector Quantiser. There are probably smarter ways to do this, but I need to start somewhere.

So as a next step I tried offset removal with vectors that represent the spectrum of a 40ms speech frame:


This is pretty cool – the network was trained on random numbers but works well with real speech frames. You can also see the spectral slope I mentioned above, the speech energy gradually falls off at high frequencies. This doesn’t affect the intelligibility of the speech but tends to upset traditional Vector Quantisers. Especially mine.

Now that I have something super-basic working, the next step is to train and test networks to deal with some non-trivial spectral shaping.

Reading Further

Deep Learning with Python
WaveNet and Codec 2
Codec 2 700C, the current Codec 2 700 bit/s mode. With better VQ we can improve on this.
Codec 2 at 450 bit/s, some fine work from Thomas and Stefan, that uses a form of machine learning to synthesise 16 kHz speech from 8 kHz inputs.
FreeDV 700D, the recently released FreeDV mode that uses Codec 2 700C. A FreeDV Mode also includes a modem, FEC, protocol.
RNNoise: Learning Noise Suppression, Jean-Marc’s DL network for noise reduction. Thanks Jean-Marc for the brainstorming emails!

What’s missing from the ONAP community — an open design process


I've been thinking a fair bit about ONAP and its future releases recently. This is in the context of trying to implement a system for a client which is based on ONAP. It's really hard though, because it's hard to determine how the various components of ONAP are intended to work, or interoperate.

It took me a while, but I’ve realised what’s missing here…

OpenStack has an open design process. If you want to add a new feature to Nova for example, the first step is you need to write down what the feature is intended to do, how it integrates with the rest of Nova, and how people might use it. The target audience for that document is both the Nova development team, but also people who operate OpenStack deployments.

ONAP has no equivalent that I can find. So for example, they say that in Casablanca they are going to implement an "AAI Enricher" to ease lookup of data from external systems in their inventory database, but I can't find anywhere where they explain how the integration between arbitrary external systems and ONAP AAI will work.

I think ONAP would really benefit from a good hard look at their design processes and how approachable they are for people outside their development teams. The current use case proposal process (videos, conference talks, and powerpoint presentations) just isn’t great for people who are trying to figure out how to deploy their software.


August 29, 2018

Software Freedom Day 2018 and LUV AGM

Sep 15 2018 13:00
Sep 15 2018 17:00
Location: 
Electron Workshop, 31 Arden Street North Melbourne 3051

It's time once again to get excited about all the benefits that Free and Open Source Software have given us over the past year and get together to talk about how Freedom and Openness can improve our human rights, our privacy, our security and our communities. It's Software Freedom Day!

Linux Users of Victoria is a subcommittee of Linux Australia.


LUV September 2018 Main Meeting: New Developments in Supercomputing

Sep 4 2018 18:30
Sep 4 2018 20:30
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE RETURN TO ORIGINAL START TIME

6:30 PM to 8:30 PM Tuesday, September 4, 2018
Training Room, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

Many of us like to go for dinner nearby after the meeting, typically at Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.


Band Pass Filter and Power Amplifier for Simple HF Data

Is it possible to move data over HF radio using very simple, low cost hardware and clever SDR software? In the last few posts (here and here) I’ve been constructing and testing building blocks for a simple HF data terminal. This post describes a few more, a 3-8 MHz Band Pass Filter (BPF) and 1W Power Amplifier (PA).

Band Pass Filter

The RTL-SDR samples at 28.8 MHz, capturing a broad chunk of spectrum. In direct mode we just sample the Q-channel, so any energy above 14.4 MHz will be aliased into our passband; e.g. both 21 and 7 MHz will appear as a 7 MHz sampled signal.

In the previous post we determined the ADC overloads at -30dBm, so we want to remove any strong signals above or near that level. One source of strong signals is broadcast band AM radio between 500 to 1600 kHz.

The use case is "100 mile" data links so I'd like the receiver to work on the 80M (3.5 MHz) as well as 40M (7.1 MHz) bands, which sets the BPF passband at 3 to 8 MHz. I hooked up my spec-an to a 40M antenna and could see AM broadcast signals peaking at -40dBm, so I set a BPF specification of > 20dB attenuation at 1.5 MHz to keep the sum of all those signals well away from the -30dBm limit. At the high frequency end I specified > 30dB attenuation at 21 MHz, to reduce any energy aliased down to 7 MHz.

I designed a cascaded High Pass/Low Pass filter using some tables from my ancient (but still excellent) copy of "RF Circuit Design", by Chris Bowick. The Octave rtl_sdr script does the calculations for me. A spreadsheet would work well too.
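
As an illustration of the sort of calculation involved, here is a hedged Python sketch that denormalises low-pass Butterworth prototype values for a given cutoff and impedance. This is textbook filter maths rather than the rtl_sdr.m script itself, and the element ordering assumes a ladder that starts with a shunt capacitor:

import math

def lowpass_lc(g_values, fc_hz, r_ohms=50.0):
    # Denormalise Butterworth prototype g-values to capacitors (F)
    # and inductors (H) for a low-pass with cutoff fc_hz and
    # impedance r_ohms. Odd elements are shunt C, even are series L.
    w = 2 * math.pi * fc_hz
    parts = []
    for n, g in enumerate(g_values, start=1):
        if n % 2:
            parts.append(('C%d' % n, g / (w * r_ohms)))
        else:
            parts.append(('L%d' % n, g * r_ohms / w))
    return parts

# 3rd order Butterworth prototype (1.0, 2.0, 1.0), 8 MHz cutoff
for name, value in lowpass_lc([1.0, 2.0, 1.0], 8e6):
    print(name, value)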

I simulated the BPF using LTSpice, fixed a few bugs, and tweaked it for real world component values. Here is the circuit and frequency response on log and linear scales:



I soldered up the BPF Manhattan style using commercial axial 1uH inductors and ceramic capacitors, then tested it using the spec-an and tracking generator (note linear scale):

The table at the bottom shows the measured attenuation at some important frequencies. The attenuation is a bit low at 21 MHz, perhaps due to the finite Q of the real world inductors. Quite a good match to the LTSpice simulation and close enough for my experiments. The little step at around 10 MHz is a tracking generator artefact.

The next plot shows the effect of the BPF when my spec-an is connected to my 40M dipole (0 to 10MHz span). Yellow is the received signal without the filter, purple with the filter.

The big spike around 0 Hz is an artefact on the spec-an. The filter is doing a good job of nailing the AM broadcast band energy. You can see a peak around 7.4 MHz where the dipole is resonant. Actually this is a bit of a surprise to me, as I want it resonant around 7.2MHz, I better check that out! At 7.2-ish the insertion loss (difference between the purple and yellow) is a few dB as per the tracking generator plot above. It’s more like 6dB at 7.4 MHz (the dipole peak), not quite sure why. An insertion loss of 3dB at 7.2 MHz is OK for my application.

Power Amplifier

A few weeks ago I hooked the rpitx to my 40M dipole and managed to demodulate the 11mW signal a few km away (over an urban channel) using a mag loop and my FT-817. I decided to build a small 1W PA to make the system usable over “100 mile” HF channels. The actual power is not that critical, as we can trade power off against bit rate. For example if a given HF channel supports 100 bit/s at 1W, we then know we can get 1000 bit/s at 10W.

Even low bit rates can be very useful if you have no other communication. A text message or Tweet, allowing for some overhead, averages about 1000 bits. So at 1000 bit/s you can send 1 txt per second, 3600 an hour, or 86,000/day. That’s very useful communication if you are in a disaster situation and want to tell family you are alive. Or perhaps live in a remote area with no other communication. Of course HF channels come and go, so the actual throughput will be less than that.

I explored the junk box and found a partially constructed Beach 40. I isolated the driver and PA stage and poked it with my signal generator. Turns out it had a bit too much gain (the rpitx has plenty of drive) so I ended up with this simple PA circuit:



The only spurious output I can see is the 2nd harmonic at -44 dBc, meeting ACMA specs:

The low pass filter at the output has a 3dB point at about 10 MHz which is a little high. It could be brought down a little to increase stop-band attenuation and reduce the 2nd harmonic further. I haven’t done anything about impedance matching the input, as it hits 1W (30dBm) output with 14dBm drive from the rpitx. The 1 inch square heatsink is quite warm after 10 minutes but I can still hold it. It’s not very efficient, 2.9W DC input power for 1W out, however 16dB power gain is quite good for a PA. Anyhoo, it’s a fine starting point for my experiments, we can optimise the PA later if necessary.

Next Steps

OK, so I have most of the building blocks I need for some over the air HF data experiments. There was a bit of engineering involved in building the BPF and PA, but the designs are very simple and can be constructed for a few $ or even from road kill (recycled) components. We now have a very low cost HF data radio, running high performance modems, connected to a Linux computer and Wifi.

Next I will put some software together to estimate data throughput, set the system up with real antennas, and gather some experimental results over real world HF channels.

Reading Further

Rpitx and 2FSK, first part in this series.
Testing a RTL-SDR with FSK on HF, second part in this series.
rtl_sdr.m script that calculates component values for the BPF.

August 26, 2018

Forking is a Feature

There’s a new WordPress fork called ClassicPress that’s been making some waves recently, with various members of the Twitterati swinging between decrying it as an attempt to fracture the WordPress community, to it being an unnecessary over-reaction, to it being a death knell for WordPress.

Personally, I don’t think it’s any of the above.

Some years ago, Anil Dash wrote an article on this topic (which I totally ripped forked the name from), you should read it for some context.

With that context, I genuinely applaud ClassicPress for exercising their fundamental rights under the GPL. The WordPress Bill of Rights makes it quite clear that forking is not just allowed, it’s encouraged. You can and should fork WordPress if you choose to. This isn’t a flaw in the system, this is how it’s supposed to work.

Forks should always be encouraged.

Forks are a fundamentally healthy aspect of Open Source software. A relatively recent example is the io.js fork of Node.js, which resulted in significant changes to how the Node.js project is governed and developed. WordPress has seen forks in the past, too: Lyceum was a fork that added multi-site support, before it existed in WordPress. WordPress MU was something of a sibling fork which also added multi-site support, and was ultimately merged back into WordPress.

There are examples of forks that went on to become independent projects: WordPress itself is a fork of cafelog/b2. X.org is a fork of XFree86. LibreOffice is a fork of OpenOffice. Blink is a fork of WebKit, which in turn is a fork of KHTML. MariaDB is a fork of MySQL. XBMC has been forked dozens of times. Joomla is a fork of Mambo. (Fun historical coincidence: I very nearly accepted a job offer from Miro, the company behind Mambo, just a couple of months before Joomla came into being!)

Maintaining a fork is hard, thankless work.

All of these independent forks have a common thread: they started with a group of people who were highly experienced in building the software they were forking (often comprising core developers of the original software). That's not to say that non-core developers can't drive a fork, but it does seem to require fairly fundamental knowledge of the strengths and weaknesses of the software, in order to successfully fork it into an independent product.

From a practical perspective, I can tell you that maintaining a fork of WordPress would require an extraordinary amount of work. For example, WordPress.com effectively maintains a fork (which happens to almost exactly match the Core codebase) of WordPress. The task of maintaining this fork falls to a talented team of devops folks, who review and merge each patch.

Now, WordPress.com is really only an internal fork. To maintain a product fork of WordPress would require so much more effort. You’d need to maintain the web infrastructure to push out updates. As the fork diverges from WordPress Core, you would need to figure out how to maintain plugin and theme compatibility. You’d likely need to do your own bug and security fixes, on top of what’s merged from WordPress.

I'm not saying this to dissuade anyone from forking WordPress; rather, it's important to go into this aware of the challenges that lie ahead. For anyone who uses a fork (whether it be a fork of WordPress, or any other software product), I'm sure the maintainer would appreciate a word of thanks for the work they've done to make it possible. 🙂

What’s next for ClassicPress?

Well, that’s ultimately up to the folks building it, and the people who use it. As a member of the WordPress core team, I certainly hold no ill-feelings towards them, and I hope they’ll be open to working with us in the future. I hope we’ll be able to learn from their work, to improve WordPress for everyone.

It’s humbling and inspiring to build something that’s used by so many millions of sites, but at times it involves accepting that we can’t build the tool that will work for 100% of people, 100% of the time. Regardless of WordPress’ future popularity, there’ll always be a place for alternatives, whether that be forks like ClassicPress, different CMSes like Drupal or Joomla, or even different publishing concepts, like MediaWiki or Mastodon.

Ultimately, we all share the same goal: creating a free and open web, for everyone to enjoy. While ClassicPress has styled itself as a protest against Gutenberg for now, I hope they’ll find their voice for something, instead of just against something. 💖

August 25, 2018

AWS Parameter Store

Anyone with a moderate level of AWS experience will have learned that Amazon offers more than one way of doing something. Storing secrets is no exception. 

It is possible to spin up Hashicorp Vault on AWS using an official Amazon quick start guide. The down side of this approach is that you have to maintain it.

If you want an "AWS native" approach, you have 2 services to choose from. As the name suggests, Secrets Manager provides some secrets management tools on top of the store. This includes automagic rotation of AWS RDS credentials on a regular schedule. For the first 30 days the service is free, then you start paying per secret per month, plus API calls.

There is a free option, Amazon's Systems Manager Parameter Store. This is what I'll be covering today.

Structure

It is easy when you first start out to store all your secrets at the top level. After a while you will regret this decision. 

Parameter Store supports hierarchies. I recommend using them from day one. Today I generally use /[appname]-[env]/[KEY]. After some time with this scheme I am finding that /[appname]/[env]/[KEY] feels like it will be easier to manage. IAM permissions support paths and wildcards, so either scheme will work.

If you need to migrate your secrets, use the Parameter Store namespace migration script.

Access Controls

Like most Amazon services, access to Parameter Store is controlled by IAM.

Parameter Store allows you to store your values as plain text or encrypted with a KMS key. For encrypted values the user must have grants on both the Parameter Store value and the KMS key. For consistency I recommend encrypting all your parameters.

If you have a monolith, a key per application per environment is likely to work well. If you have a collection of microservices, having a key per service per environment becomes difficult to manage. In this case, share a key between several services in the same environment.

Here is an IAM policy for a Lambda function to access a hierarchy of values in Parameter Store:

To allow your developers to manage the parameters in dev you will need a policy that looks like this:

Amazon has great documentation on controlling access to Parameter Store and KMS.

Adding Parameters

Amazon allows you to store almost any string up to 4KB in length in the Parameter Store. This gives you a lot of flexibility.

Parameter Store supports deep hierarchies. You will find this becomes annoying to manage. Use hierarchies to group your values by application and environment, and within the hierarchy use a flat structure. I recommend using lower case letters with dashes between words for your paths. For the parameter keys use upper case letters with underscores. This makes it easy to differentiate the two when searching for parameters.

Parameter Store encodes everything as strings. There may be cases where you want to store an integer as an integer, or a more complex data structure. You could use a naming convention to differentiate your different types. I found it easiest to encode everything as JSON. When pulling values from the store I JSON-decode them. The down side is strings must be wrapped in double quotes. This is offset by the flexibility of being able to encode objects and use numbers.

It is possible to add parameters to the store using 3 different methods. I generally find the AWS web console easiest when adding a small number of entries. Rather than walking you through this, Amazon have good documentation on adding values. Remember to always use "secure string" to encrypt your values.

Adding parameters via boto3 is straight forward. Once again it is well documented by Amazon.

Finally, you can maintain parameters with a little bit of code. In this example I do it with Python.
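
As a hedged boto3 sketch of adding an encrypted parameter (the path scheme follows the convention above; the KMS alias is made up):

import json
import boto3

ssm = boto3.client('ssm')

def put_secret(app, env, key, value, kms_key_id):
    # Store a JSON-encoded value as an encrypted parameter.
    ssm.put_parameter(
        Name='/{}-{}/{}'.format(app, env, key),
        Value=json.dumps(value),
        Type='SecureString',
        KeyId=kms_key_id,
        Overwrite=True,
    )

put_secret('my-app', 'dev', 'DB_PASSWORD', 's3cret', 'alias/my-app-dev')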

Using Parameters

I have used Parameter Store from Python and the command line. It is easier to use it from Python.

My example assumes that it is a Lambda function running with the policy from earlier. The function is called my-app-dev. This is what my code looks like:
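
A minimal boto3 sketch of how that might look, assuming the /my-app-dev/ hierarchy and JSON-encoded values described above (the details are my assumptions, not the original code):

import json
import boto3

ssm = boto3.client('ssm')

def load_config(path='/my-app-dev/'):
    # Fetch all parameters under a path and JSON-decode their values.
    config = {}
    paginator = ssm.get_paginator('get_parameters_by_path')
    for page in paginator.paginate(Path=path, Recursive=True,
                                   WithDecryption=True):
        for param in page['Parameters']:
            key = param['Name'].split('/')[-1]
            config[key] = json.loads(param['Value'])
    return config

def handler(event, context):
    config = load_config()
    # ... use config['DB_PASSWORD'] and friends here ...
    return {'ok': True}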

If you want to avoid loading your config each time your Lambda function is called you can store the results in a global variable. This leverages Amazon's feature that doesn't clear global variables between function invocations. The catch is that your function won't pick up parameter changes without a code deployment. Another option is to put in place logic for periodic purging of the cache.
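
A sketch of that caching pattern, building on the load_config() sketch above (the purge interval is arbitrary):

import time

_CONFIG = None
_LOADED_AT = 0
_TTL = 300  # seconds; arbitrary purge interval

def get_config():
    # Load config once per container, refreshing it after _TTL seconds.
    global _CONFIG, _LOADED_AT
    if _CONFIG is None or time.time() - _LOADED_AT > _TTL:
        _CONFIG = load_config()
        _LOADED_AT = time.time()
    return _CONFIG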

On the command line things are a little harder to manage if you have more than 10 parameters. To export a small number of entries as environment variables, you can use this one liner:

Make sure you have jq installed and the AWS cli installed and configured.

Conclusion

Amazon's System Manager Parameter Store provides a secure way of storing and managing secrets for your AWS based apps. Unlike Hashicorp Vault, Amazon manages everything for you. If you don't need the more advanced features of Secrets Manager you don't have to pay for them. For most users Parameter Store will be adequate.

August 24, 2018

Learning from the mistakes that even big projects make


The following is a blog post version of a talk presented at pyconau 2018. Slides for the presentation can be found here (as Microsoft powerpoint, or as PDF), and a video of the talk (thanks NextDayVideo!) is below:

 

OpenStack is an orchestration system for setting up virtual machines and associated other virtual resources such as networks and storage on clusters of computers. At a high level, OpenStack is just configuring existing facilities of the host operating system — there isn’t really a lot of difference between OpenStack and a room full of system admins frantically resolving tickets requesting virtual machines be setup. The only real difference is scale and predictability.

To do its job, OpenStack needs to be able to manipulate parts of the operating system which are normally reserved for administrative users. This talk is the story of how OpenStack has done that thing over time, what we learnt along the way, and what I’d do differently if I had my time again. Lots of systems need to do these things, so even if you never use OpenStack hopefully there are things to be learnt here.

That said, someone I respect suggested last weekend that good conference talks are actionable. A talk full of OpenStack war stories isn't actionable, so I've spent the last week re-writing this talk to hopefully be more of a call to action than just an interesting story. I apologise for any mismatch between the original proposal and what I present here that might therefore exist.

Back to the task in hand though — providing control of virtual resources to untrusted users. OpenStack has gone through several iterations of how it thinks this should be done, so perhaps it's illustrative to start by asking how other similar systems achieve this. There are lots of systems that have a requirement to configure privileged parts of the host operating system. The most obvious example I can think of is Docker. How does Docker do this? Well… it's actually not all that pretty. Docker presents its API over a unix domain socket by default in order to limit control to local users (you can of course configure this). So to provide access to Docker, you add users to the docker group, which owns that domain socket. The Docker documentation warns that "the docker group grants privileges equivalent to the root user". So that went well.

Docker is really an example of the simplest way of solving this problem — by not solving it at all. That works well enough for systems where you can tightly control the users who need access to those privileged operations — in Docker's case by making them have an account in the right group on the system and logging in locally. However, OpenStack's whole point is to let untrusted remote users create virtual machines, so we're going to have to do better than that.

The next level up is to do something with sudo. The way we all use sudo day to day, you allow users in the sudoers group to become root and execute any old command, with a configuration entry that probably looks a little like this:

# Allow members of group sudo to execute any command
%sudo	ALL=(ALL:ALL) ALL

Now that config entry is basically line noise, but it says "allow members of the group called sudo, on any host, to run any command as root". You can of course embed this into your python code using subprocess.call() or similar. On the security front, it's possible to do a little bit better than a "nova can execute anything" entry. For example:

%sudo ALL=/bin/ls

This says that the sudo group on all hosts can execute /bin/ls with any arguments. OpenStack never actually specified the complete list of commands it executed. That was left as a job for packagers, which of course meant it wasn’t done well.


So there's our first actionable thing — if you assume that someone else (packagers, the ops team, whoever) is going to analyse your code well enough to solve the security problem that you can't be bothered solving, then you have a problem. Now, we weren't necessarily deliberately punting here. It's obvious to me how to grep the code for commands run as root to add them to a sudo configuration file, but that's unfair. I wrote some of this code, I am much closer to it than a system admin who just wants to get the thing deployed.


We can of course do better than just raw sudo. Next we tried a thing called rootwrap, which was mostly an attempt to provide a better boundary around exactly what commands you can expect an OpenStack binary to execute. So for example, maybe it's ok for me to read the contents of a configuration file specific to a virtual machine I am managing, but I probably shouldn't be able to read /etc/shadow or whatever. We can do that by doing something like this:

sudo nova-rootwrap /etc/nova/rootwrap.conf /bin/ls /etc

Where nova-rootwrap is a program which takes a configuration file and a command line to run. The contents of the configuration file are used to determine if the command line should be executed.

Now we can limit the sudo configuration file to only needing to be able to execute nova-rootwrap. I thought about putting in a whole bunch of slides about exactly how to configure rootwrap, but then I realised that this talk is only 25 minutes and you can totally google that stuff.


So instead, here’s my second actionable thing… Is there a trivial change you can make which will dramatically improve security? I don’t think anyone would claim that rootwrap is rocket science, but it improved things a lot — deployers didn’t need to grep out the command lines we executed any more, and we could do things like specify what paths we were allowed to do things in. Are there similarly trivial changes that you can make to improve your world?


But wait! Here's my third actionable thing as well — what are the costs of your design? Some of these are obvious — for example with this design executing something with escalated permissions causes us to pay to fork a process. In fact it's worse with rootwrap, because we pay to fork, start a python interpreter to parse a configuration file, and then fork again for the actual binary we wanted in the first place. That cost adds up if you need to execute many small commands, for example when plugging in a new virtual network interface. At one point we measured this for network interfaces and the costs were in the tens of seconds per interface.

There is another cost though which I think is actually more important. The only way we have with this mechanism to do something with escalated permissions is to execute it as a separate process. This is a horrible interface and forces us to do some really weird things. Let's check out some examples…


Which of the following commands are reasonable?

shred -n3 -sSIZE PATH
touch PATH
rm -rf PATH
mkdir -p PATH

These are just some examples, there are many others. The first is probably the most reasonable. It doesn’t seem wise to me for us to implement our own data shredding code, so using a system command for that seems reasonable. The other examples are perhaps less reasonable — the rm one is particularly scary to me. But none of these are the best example…


How about this one?

utils.execute('tee',
              ('/sys/class/net/%s/bridge/multicast_snooping' %
               br_name),
              process_input='0',
              run_as_root=True,
              check_exit_code=[0, 1])

Some commentary first. This code existed in the middle of a method that does other things. It's one of five command lines that method executes. What does it do?

It's actually not too bad. Using root permissions, it writes a zero to the multicast_snooping sysctl for the network bridge being set up. It then checks the exit code and raises an exception if it's not 0 or 1.

That said, it's also horrid. In order to write a single byte to a sysctl as root, we are forced to fork, start a python process, read a configuration file, and then fork again. For an operation that in some situations might need to happen hundreds of times for OpenStack to restart on a node.


This is how we get to the third way that OpenStack does escalated permissions. If we could just write python code that ran as root, we could write this instead:

with open(('/sys/class/net/%s/bridge/multicast_snooping' %
           br_name), 'w') as f:
    f.write('0')

It's not perfect, but it's a lot cheaper to execute and we could put it in a method with a helpful name like "disable multicast snooping" for extra credit. Which brings us to…


Hire Angus Lees and make him angry. Angus noticed this problem well before the rest of us. We were all lounging around basking in our own general cleverness. What Angus proposed is that instead of all this forking and parsing and general mucking around, that we just start a separate process at startup with special permissions, and then send it commands to execute.

He could have done that with a relatively horrible API, for example just sending command lines down the pipe and getting their responses back to parse, but instead he implemented a system of python decorators which let us call a method which is marked up as saying “I want to run as root!”.


So here’s the destination in our journey, how we actually do that thing in OpenStack now:

@nova.privsep.sys_admin_pctxt.entrypoint
def disable_multicast_snooping(bridge):
    path = ('/sys/class/net/%s/bridge/multicast_snooping' %
            bridge)
    if not os.path.exists(path):
        raise exception.FileNotFound(file_path=path)
    with open(path, 'w') as f:
        f.write('0')

The decorator before the method definition is a bit opaque, but basically says “run this thing as root”, and the rest is a method which can be called from anywhere within our code.

There are a few things you need to do to setup privsep, but I don’t have time in this talk to discuss the specifics. Effectively you need to arrange for the privsep helper to start with escalated permissions, and you need to move the code which will run with one of these decorators to a sub path of your source tree to stop other code from accidentally being escalated. privsep is also capable of running with more than one set of permissions — it will start a helper for each set. That’s what this decorator is doing, specifying what permissions we need for this method.
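
For context, here is roughly what defining one of those privsep contexts looks like with oslo.privsep. This sketch is modelled on nova's sys_admin_pctxt, so the exact capability list is an assumption:

from oslo_privsep import capabilities
from oslo_privsep import priv_context

# Methods decorated with @sys_admin_pctxt.entrypoint run in the
# privileged helper process started with these capabilities.
sys_admin_pctxt = priv_context.PrivContext(
    'nova',
    cfg_section='nova_sys_admin',
    pypath=__name__ + '.sys_admin_pctxt',
    capabilities=[capabilities.CAP_CHOWN,
                  capabilities.CAP_DAC_OVERRIDE,
                  capabilities.CAP_NET_ADMIN,
                  capabilities.CAP_SYS_ADMIN],
)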


And here we land at my final actionable thing. Make it easy to do the right thing, and hard to do the wrong thing. Rusty Russell used to talk about this at linux.conf.au when he was going through a phase of trying to clean up kernel APIs — it's important that your interfaces make it obvious how to use them correctly, and make it hard to use them incorrectly.

In the example used for this talk, having command lines executed as root meant that the prevalent example of how to do many things was a command line. So people started doing that even when they didn’t need escalated permissions — for example calling mkdir instead of using our helper function to recursively make a set of directories.

We've cleaned that up, but we've also made it much, much harder to just drop a command line into our code base to run as root, which will hopefully stop some of this problem re-occurring in the future. I don't think OpenStack has reached perfection in this regard yet, but we continue to improve a little each day and that's probably all we can hope for.


privsep can be used for non-OpenStack projects too. There's really nothing OpenStack-specific about most of its underlying libraries, in fact, and there are probably things there which are useful to you. The real problem is working out what is where, because there's so much of it.

One final thing — privsep makes it possible to specify the exact permissions needed to do something. For example, setting up a network bridge probably doesn't need "read everything on the filesystem" permissions. We originally did that, but stepped back to using a single escalated permissions set that maps to what you get with sudo, because working out what permissions a single operation needed was actually quite hard. We were trying to lower the barrier for entry for doing things the right way. I don't think I really have time to dig into that much more here, but I'd be happy to chat about it sometime this weekend or on the Internet later.


So in summary:

  • Don’t assume someone else will solve the problem for you.
  • Are there trivial changes you can make that will drastically improve security?
  • Think about the costs of your design.
  • Hire smart people and let them be annoyed about things that have always "just been that way". Let them fix those things.
  • Make it easy to do things the right way and hard to do things the wrong way.

I’d be happy to help people get privsep into their code, and its very usable outside of OpenStack. There are a couple of blog posts about that on my site at http://www.madebymikal.com/?s=privsep, but feel free to contact me at mikal@stillhq.com if you’d like to chat.


August 23, 2018

Custom output pods for the Stanford Research CG635 Clock Generator

As part of my previously mentioned side project, the ability to replace crystal oscillators in a circuit with a higher quality frequency reference is really handy, as it lets me eliminate a bunch of uncertainty from some test setups.

A simple function generator is the classic way to handle this, although if you need square wave output it quickly gets hard to find options, with arbitrary waveform generators (essentially just DACs) the common option. If you can get away with just sine wave output an RF synthesizer is the other main option.

While researching these I discovered the CG635 Clock Generator from Stanford Research, and some time later picked one of these up used.

As well as being a nice square wave generator at arbitrary voltages these also have another set of outputs on the rear of the unit on an 8p8c (RJ45) connector, in both RS422 (for lower frequencies) and LVDS (full range) formats, as well as some power rails to allow a variety of less common output formats.

All I needed was 1.8v LVCMOS output, and could get that from the front panel output, but I'd then need a coax tail on my boards, as well as potentially running into voltage rail issues so I wanted to use the pod output instead. Unfortunately none of the pods available from Stanford Research do LVCMOS output, so I'd have to make my own, which I did.

The key chip in my custom pod is the TI SN65LVDS4, a 1.8v capable single channel LVDS receiver that operates at the frequencies I need. The only downside is this chip is only available in a single form factor, a 1.5mm x 2mm 10 pin UQFN, which is far too small to hand solder with an iron. The rest of the circuit is just some LED indicators to signal status.


Here's a rendering of the board from KiCad.

Normally "not hand solderable" for me has meant getting the board assembled, however my normal assembly house doesn't offer custom PCB finishes, and I wanted these to have white solder mask with black silkscreen as a nice UX when I go to use them, so instead I decided to try my hand at skillet reflow as it's a nice option given the space I've got in my tiny apartment (the classic tutorial on this from SparkFun is a good read if you're interested). Instead of just a simple plate used for cooking you can now buy hot plates with what are essentially just soldering iron temperature controllers, sold as pre-heaters making it easier to come close to a normal soldering profile.

Sadly, actually acquiring the hot plate turned into a bit of a mess, the first one I ordered in May never turned up, and it wasn't until mid-July that one arrived from a different supplier.

Because of the aforementioned lack of space instead of using stencils I simply hand-applied (leaded) paste, without even an assist tool (which I probably will acquire for next time), then hand-mounted the components, and dropped them on the plate to reflow. I had one resistor turn 90 degrees, and a few bridges from excessive paste, but for a first attempt I was really happy.


Here's a photo of the first two just after being taken off the hot plate.

Once the reflow was complete it was time to start testing, and this was where I ran into my biggest problems.

The two big problems were with the power supply I was using, and with my oscilloscope.

The power supply (a Keithley 228 Voltage/Current source) is from the 80's (Keithley's "BROWN" era), and while it has nice specs, it doesn't have the most obvious UI. Somehow I'd set it to limit at 0mA output current, and if you're not looking at the segment lights it's easy to miss. At work I have an EEZ H24005 which also resets the current limit to zero on clear, however it's much more obvious when limiting, and a power supply with that level of UX is now on my "to buy" list.

The issues with my scope were much simpler. Currently I only have an old Rigol DS1052E scope, and while it works fine it is a bit of a pain to use, but ultimately I made a very simple mistake while testing. I was feeding in a trigger signal direct from the CG635's front outputs, and couldn't figure out why the generator was putting out such a high voltage (implausibly so). To cut the story short, I'd simply forgotten that the scope was set for use with 10x probes, and once I realised that everything made rather more sense. An oscilloscope with auto-detection for 10x probes, as well as a bunch of other features I want in a scope (much bigger screen for one), has now been ordered, but won't arrive for a while yet.

Ultimately the boards work fine, but until the new scope arrives I can't determine their signal quality. At least they're ready for when I'll need them, which is great for flow.

August 19, 2018

Trying Mastodon

It's no secret that Twitter is a red hot garbage fire, so I've signed up for a Mastodon account to give them a try. Because I'm super vain, I decided to create my own Mastodon instance, with a custom domain.

Mastodon is kind of weird to sign up for: think of it as being kind of like email. You can sign up for a big provider like Gmail or Outlook, you can run your own server, or you can pay someone to run a server for you. In my case, I’m paying for a personal account on Masto.host, they’ve been super helpful in getting it all configured. If you’re just looking to try it out for free, here’s a tool to help you choose an instance.

Some Initial Impressions

It’s pretty quiet, of course. I’m currently following 13 people on Mastodon, and 500 on Twitter, there’s going to be a difference in volume.

Mastodon Bridge is a useful tool for you to be able to find your Twitter friends when you start out. I highly recommend using it.

None of the apps have the polish that I’m used to with Twitter apps like Twitterific and Fenix, but they’re evolving quickly. I do miss Tweet Marker support. I’ve settled on using Whalebird for my MacOS app, none of the various Android apps have appealed enough for me to start using them.

Search is pretty bad. There's a confusing limitation (for a reasonable, but not really satisfactory technical reason): you can only search through the Toots that people on the same instance as you have subscribed to. So, because I'm the only person on pento.net, I can only search through the Toots of people I'm immediately subscribed to. If you sign up for a big host, you'll see many more results, but you still won't see everything.

I’m going to have to share this post manually, because Jetpack doesn’t know how to share to Mastodon yet. 😉

Will I Keep Using It?

Yep, for now. There’s a lot of potential, I’m interested to see where it goes.

August 15, 2018

How Fourier Transforms Work

The best explanation of the Discrete Fourier Transform (DFT) I have ever seen, from Bill Cowley on his Low SNR blog.

Improving performance of Phoronix benchmarks on POWER9

Recently Phoronix ran a range of benchmarks comparing the performance of our POWER9 processor against the Intel Xeon and AMD EPYC processors.

We did well in the Stockfish, LLVM Compilation, Zstd compression, and the Tinymembench benchmarks. A few of my colleagues did a bit of investigating into some of the benchmarks where we didn't perform quite so well.

LBM / Parboil

The Parboil benchmarks are a collection of programs from various scientific and commercial fields that are useful for examining the performance and development of different architectures and tools. In this round of benchmarks Phoronix used the lbm benchmark: a fluid dynamics simulation using the Lattice-Boltzmann Method.

lbm is an iterative algorithm - the problem is broken down into discrete time steps, and at each time step a bunch of calculations are done to simulate the change in the system. Each time step relies on the results of the previous one.

The benchmark uses OpenMP to parallelise the workload, spreading the calculations done in each time step across many CPUs. The number of calculations scales with the resolution of the simulation.

Unfortunately, the resolution (and therefore the work done in each time step) is too small for modern CPUs with large numbers of SMT (simultaneous multi-threading) threads. OpenMP doesn't have enough work to parallelise and the system stays relatively idle. This means the benchmark scales relatively poorly, and is definitely not making use of the large POWER9 system.

Also this benchmark is compiled without any optimisation. Recompiling with -O3 improves the results 3.2x on POWER9.

x264 Video Encoding

x264 is a library that encodes videos into the H.264/MPEG-4 format. x264 encoding requires a lot of integer kernels doing operations on image elements. The math and vectorisation optimisations are quite complex, so Nick only had a quick look at the basics. The systems and environments (e.g. gcc version 8.1 for Skylake, 8.0 for POWER9) are not completely apples to apples so for now patterns are more important than the absolute results. Interestingly the output video files between architectures are not the same, particularly with different asm routines and compiler options used, which makes it difficult to verify the correctness of any changes.

All tests were run single threaded to avoid any SMT effects.

With the default upstream build of x264, Skylake is significantly faster than POWER9 on this benchmark (Skylake: 9.20 fps, POWER9: 3.39 fps). POWER9 contains some vectorised routines, so an initial suspicion is that Skylake's larger vector size may be responsible for its higher throughput.

Let's test our vector size suspicion by restricting Skylake to SSE4.2 code (with 128 bit vectors, the same width as POWER9). This hardly slows down the x86 CPU at all (Skylake: 8.37 fps, POWER9: 3.39 fps), which indicates it's not taking much advantage of the larger vectors.

So the next guess would be that x86 just has more and better optimized versions of costly functions (in the version of x264 that Phoronix used there are only six powerpc specific files compared with 21 x86 specific files). Without the time or expertise to dig into the complex task of writing vector code, we'll see if the compiler can help, and turn on autovectorisation (x264 compiles with -fno-tree-vectorize by default, which disables auto vectorization). Looking at a perf profile of the benchmark we can see that one costly function, quant_4x4x4, is not autovectorised. With a small change to the code, gcc does vectorise it, giving a slight speedup with the output file checksum unchanged (Skylake: 9.20 fps, POWER9: 3.83 fps).

We got a small improvement with the compiler, but it looks like we may have gains left on the table with our vector code. If you're interested in looking into this, we do have some active bounties for x264 (lu-zero/x264).

Test                                                Skylake    POWER9
Original - AVX256                                   9.20 fps   3.39 fps
Original - SSE4.2                                   8.37 fps   3.39 fps
Autovectorisation enabled, quant_4x4x4 vectorised   9.20 fps   3.83 fps

Nick also investigated running this benchmark with SMT enabled and across multiple cores, and it looks like the code is not scalable enough to feed 176 threads on a 44 core system. Disabling SMT in parallel runs actually helped, but there was still idle time. That may be another thing to look at, although it may not be such a problem for smaller systems.

Primesieve

Primesieve is a program and C/C++ library that generates all the prime numbers below a given number. It uses an optimised Sieve of Eratosthenes implementation.

The algorithm uses the L1 cache size as the sieve size for the core loop. This is an issue when we are running in SMT mode (aka more than one thread per core), as all threads on a core share the same L1 cache and so will constantly be invalidating each other's cache lines. As you can see in the table below, running the benchmark in single threaded mode is 30% faster than in SMT4 mode!

This means in SMT-4 mode the workload is about 4x too large for the L1 cache. A better sieve size to use would be the L1 cache size / number of threads per core. Anton posted a pull request to update the sieve size.
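
As a rough illustration of the sizing argument, assuming a 32KB L1 data cache per core (the real POWER9 figure may differ):

l1_dcache = 32 * 1024   # bytes per core (assumed)
for smt in (1, 2, 4):
    sieve = l1_dcache // smt
    print("SMT%d: %d KB of L1 per thread" % (smt, sieve // 1024))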

It is interesting that the best overall performance on POWER9 is with the patch applied and in SMT2 mode:

SMT level   baseline   patched
1           14.728s    14.899s
2           15.362s    14.040s
4           19.489s    17.458s

LAME

Despite its name, a recursive acronym for "LAME Ain't an MP3 Encoder", LAME is indeed an MP3 encoder.

Due to configure options not being parsed correctly this benchmark is built without any optimisation regardless of architecture. We see a massive speedup by turning optimisations on, and a further 6-8% speedup by enabling USE_FAST_LOG (which is already enabled for Intel).

LAME                                           Duration   Speedup
Default                                        82.1s      n/a
With optimisation flags                        16.3s      5.0x
With optimisation flags and USE_FAST_LOG set   15.6s      5.3x

For more detail see Joel's writeup.

FLAC

FLAC is an alternative encoding format to MP3. But unlike MP3 encoding it is lossless! The benchmark here was encoding audio files into the FLAC format.

The key issue with this workload is that it has no vector support for POWER8 or POWER9. Anton and Amitay submitted a patch series that adds POWER-specific vector instructions; it also fixes the configure options so that the POWER8 and POWER9 platforms are correctly detected. With this patch series we see about a 3x improvement in this benchmark.
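
For a flavour of what POWER vector code looks like, here is a minimal sketch (not the patch series itself; it assumes a VSX-capable compiler, e.g. gcc with -mcpu=power8 or later):

  #include <altivec.h>

  /* dst[i] += k * src[i], four floats per iteration using VSX intrinsics. */
  void scale_accumulate(float *dst, const float *src, float k, int n)
  {
      vector float vk = vec_splats(k);
      int i;

      for (i = 0; i + 4 <= n; i += 4) {
          vector float s = vec_xl(0, (float *)&src[i]);   /* unaligned load */
          vector float d = vec_xl(0, (float *)&dst[i]);
          vec_xst(vec_madd(s, vk, d), 0, &dst[i]);        /* fused multiply-add */
      }
      for (; i < n; i++)                                  /* scalar tail */
          dst[i] += k * src[i];
  }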

OpenSSL

OpenSSL is, among other things, a cryptographic library. The Phoronix benchmark measures the number of RSA 4096 signs per second:

$ openssl speed -multi $(nproc) rsa4096

Phoronix used OpenSSL-1.1.0f, which on POWER9 achieves only a little over half the throughput of mainline OpenSSL on this benchmark. Mainline OpenSSL has some powerpc multiplication and squaring assembly code which appears to be responsible for most of this speedup.

To see this for yourself, add these four powerpc specific commits on top of OpenSSL-1.1.0f:

  1. perlasm/ppc-xlate.pl: recognize .type directive
  2. bn/asm/ppc-mont.pl: prepare for extension
  3. bn/asm/ppc-mont.pl: add optimized multiplication and squaring subroutines
  4. ppccap.c: engage new multiplication and squaring subroutines

The following results were from a dual 16-core POWER9:

Version of OpenSSL      Signs/s   Speedup
1.1.0f                  1921      n/a
1.1.0f with 4 patches   3353      1.74x
1.1.1-pre1              3383      1.76x

SciKit-Learn

SciKit-Learn is a collection of Python tools for data mining and analysis (aka machine learning).

Joel noticed that the benchmark spends 92% of its time in libblas. Libblas is a very basic BLAS (Basic Linear Algebra Subprograms) library that python-numpy uses to do vector and matrix operations. The default libblas on Ubuntu is only compiled with -O2. Compiling it with -Ofast, or switching to an alternative BLAS with powerpc optimisations (such as libatlas or libopenblas), gives big improvements in this benchmark:

BLAS used        Duration   Speedup
libblas -O2      64.2s      n/a
libblas -Ofast   36.1s      1.8x
libatlas         8.3s       7.7x
libopenblas      4.2s       15.3x
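
Whichever BLAS is installed, numpy ends up funnelling its matrix work into calls like the one below, so swapping the library changes the speed of the benchmark without touching any Python code. A small standalone sketch (build against whichever BLAS you like, e.g. -lblas, -latlas or -lopenblas):

  #include <cblas.h>
  #include <stdio.h>

  int main(void)
  {
      /* C = A * B for 2x2 row-major matrices; numpy.dot() bottoms out in a
       * dgemm call like this one. Expected result: [[19 22] [43 50]]. */
      double A[] = {1, 2, 3, 4};
      double B[] = {5, 6, 7, 8};
      double C[4] = {0};

      cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                  2, 2, 2,      /* M, N, K       */
                  1.0, A, 2,    /* alpha, A, lda */
                  B, 2,         /* B, ldb        */
                  0.0, C, 2);   /* beta, C, ldc  */

      printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);
      return 0;
  }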

You can read more details about this here.

Blender

Blender is a 3D graphics suite that supports image rendering, animation, simulation and game creation. On the surface it appears that Blender 2.79b (the version used by the Phoronix system/blender-1.0.2 test) failed to use more than 15 threads, even when "-t 128" was added to the Blender command line.

It turns out that even though this benchmark was supposed to run on CPUs only (you can choose to render on CPUs or GPUs), the GPU blend file was always being used. The GPU file is configured with a very large tile size (256x256), which is fine for GPUs but not great for CPUs. The ratio of image size (1280x720) to tile size limits the number of jobs created, and therefore the number of threads used (see the quick calculation below).
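
A quick back-of-the-envelope calculation shows exactly where the 15-thread ceiling comes from:

  #include <stdio.h>

  int main(void)
  {
      /* The GPU blend file: a 1280x720 image rendered in 256x256 tiles. */
      int img_w = 1280, img_h = 720, tile = 256;
      int tiles_x = (img_w + tile - 1) / tile;   /* 5 */
      int tiles_y = (img_h + tile - 1) / tile;   /* 3 */

      printf("%d tiles, so at most %d render threads ever have work\n",
             tiles_x * tiles_y, tiles_x * tiles_y);
      return 0;
  }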

To obtain a realistic CPU measurement with more than 15 threads you can force the use of the CPU file by overwriting the GPU file with the CPU one:

$ cp \
  ~/.phoronix-test-suite/installed-tests/system/blender-1.0.2/benchmark/pabellon_barcelona/pavillon_barcelone_cpu.blend \
  ~/.phoronix-test-suite/installed-tests/system/blender-1.0.2/benchmark/pabellon_barcelona/pavillon_barcelone_gpu.blend

As you can see in the image below, all of the cores are now being utilised! [Image: Blender with the CPU blend file]

Fortunately this has already been fixed in pts/blender-1.1.1. Thanks to Daniel's report, it has also been fixed in system/blender-1.1.0.

Pinning the pts/blender-1.0.2 Pabellon Barcelona CPU-only test to a single 22-core POWER9 chip (sudo ppc64_cpu --cores-on=22) and then to two POWER9 chips (sudo ppc64_cpu --cores-on=44) shows a huge speedup:

Benchmark                                     Duration (deviation over 3 runs)   Speedup
Baseline (GPU blend file)                     1509.97s (0.30%)                   n/a
Single 22-core POWER9 chip (CPU blend file)   458.64s (0.19%)                    3.29x
Two 22-core POWER9 chips (CPU blend file)     241.33s (0.25%)                    6.25x

tl;dr

Some of the benchmarks where we don't perform as well as Intel are ones where the benchmark has inline assembly for x86 but falls back to generic compiler-generated code on POWER9. We could probably benefit from some more powerpc-optimised functions.

We also found a couple of things that should result in better performance for all three architectures, not just POWER.

A summary of the performance improvements we found:

Benchmark      Approximate improvement
Parboil        3x
x264           1.1x
Primesieve     1.1x
LAME           5x
FLAC           3x
OpenSSL        2x
SciKit-Learn   7-15x
Blender        3x

There is obviously room for more improvements, especially with the Primesieve and x264 benchmarks, but it would be interesting to see a re-run of the Phoronix benchmarks with these changes.

Thanks to Anton, Daniel, Joel and Nick for the analysis of the above benchmarks.

August 13, 2018

city2surf 2018 wrap up


city2surf 2018 was yesterday, so how did the race go? First off, thanks to everyone who helped out with my fund raising for the Black Dog Institute — you raised nearly $2,000 AUD for this important charity, which is very impressive. Thanks for everyone’s support!

city2surf is 14km, with 166 metres of vertical elevation gain. For the second year running I was in the green start group, which is for people who have previously finished the event in less than 90 minutes. There is one start group before this, red, which is for people who can finish in less than 70 minutes. In reality I think it's unlikely that I'll ever make it to red — it would require me to shave about 30 seconds per kilometre off my time just to scrape in, and I think that would be hard to do.

Training for city2surf last year I tore my right achilles, so I was pretty much starting from scratch for this year's event — at the start of the year I could run about 50 metres before I had issues. Luckily I was referred to an excellent physiotherapist who has helped me build back up safely — I highly recommend Cameron at Southside Physio Therapy if you live in Canberra.

Overall I ran a lot in training for this year — a total of 540 kilometres. I was also a lot more consistent than in previous years, which is something I’m pretty proud of given how cold winters are in Canberra. Cold weather, short days, and getting sick seem to always get in the way of winter training for me.

On the day I was worried about being cold while running, but that wasn’t an issue. It was about 10 degrees when we started and maybe a couple of degrees warmer than that at the end. The maximum for the day was only 16, which is cold for Sydney at this time of year. There was a tiny bit of spitting rain, but nothing serious. Wind was the real issue — it was very windy at the finish, and I think if it had been like that for the entire race it would have been much less fun.

That said, I finished in 76:32, which is about three minutes faster than last year and a personal best. Overall, an excellent experience and I’ll be back again.


August 10, 2018

Keystone Federated Swift – Final post coming

This is a quick post to say the final topology post is coming. It's currently in draft form and I hope to post it soon. I just realised it's been a while, so I thought I'd better give an update.

 

The last post goes into what auth does, what is happening in keystone, and what needs to happen to really make this topology work, and then talks about the brittle POC I created to have something to demo. I'll also discuss better options and alternatives. All this means it's become much more detailed than I originally expected. I hope to get it up by the middle of next week.

 

Thanks for waiting.