Planet Linux Australia
Celebrating Australians & Kiwis in the Linux and Free/Open-Source community...

February 28, 2017

Mildred Dresselhaus, the Queen of Carbon | NY Times

“Dr. Dresselhaus, who helped transform carbon into the superstar of modern materials science, was renowned for her efforts to promote the cause of women in science.”

Millie Dresselhaus (née Spiewak), high school yearbook, 1948. A tribute at Hunter High School.

“Mildred (Millie) Dresselhaus, a professor emerita at the Massachusetts Institute of Technology whose research into the fundamental properties of carbon helped transform it into the superstar of modern materials science and the nanotechnology industry, died on Monday in Cambridge, Mass. She was 86.”

Read more.

February 27, 2017

LUV Main March 2017 Meeting: Multicore World / Patching with quilt

Mar 7 2017 18:30
Mar 7 2017 20:30
Location: 
Level 29, 570 Bourke St. Melbourne

PLEASE NOTE NEW LOCATION

Tuesday, March 7, 2017
6:30 PM to 8:30 PM
Level 29, 570 Bourke St. Melbourne

Speakers:

• Lev Lafayette, MultiCore World 2017 Wellington
• Russell Coker, Patching with quilt

570 Bourke St. Melbourne, between King and William streets

Late arrivals needing access to the building and the twenty-ninth floor please call 0490 627 326.
 

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue.

LUV would like to acknowledge Dell for their help in obtaining the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.



LUV Beginners March Meeting: TBA

Mar 18 2017 12:30
Mar 18 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Workshop TBA

 


There will also be the usual casual hands-on workshop: Linux installation, configuration, assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks, from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so that there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


February 26, 2017

This Week in HASS – term 1, week 5

This week students are exploring a vast range of topics across the year levels. From using a torch and a tennis ball to investigate how the Earth experiences Day and Night, to case studies on natural disasters, celebrations and indigenous peoples, there is plenty to spark interest.

Foundation to Year 3

Our youngest students (Foundation/Prep – Unit F.1) are talking about where they, and other members of their family, were born. Once again, this activity gets them interacting with maps and thinking about how we represent locations, whilst reinforcing their sense of identity and how they relate to others. Students in Years 1 to 3 (Units 1.1, 2.1 and 3.1) are using a torch and a tennis ball to investigate how the Earth experiences Night and Day, Seasons, Equinoxes and Solstices. This activity ties in what we experience as weather, seasons and their related celebrations to the Physics of how it all works, allowing students to draw connections between what they experience and what they are learning, and providing essential context for the more abstract knowledge. Teachers can easily tailor this activity to the needs of each class and explore the concepts in as much detail as required.

Years 3 to 6

Charlotte St, Brisbane, 1893 floods

Students in Years 3 to 6 (Units 3.5, 4.1, 5.1 and 6.1) are looking at a range of different case studies pertinent to their year-level curriculum requirements this week. Year 3 students are examining celebrations in Australia and around the world (the Celebrations Around the World resource has been updated this year and contains some new material, so please check that you have the latest copy and re-download it if necessary), and Year 4 students examine areas of natural beauty in Australia. Year 5 students are looking at the effects of natural disasters, especially here in Australia. Case studies on floods, such as the Brisbane floods of 1893, and bushfires, such as the infamous Black Friday fires in Victoria, are available for more in-depth study by teachers and students wishing to explore the topic in more detail. Year 6 students are examining Indigenous groups of people from Australia and Asia. A range of case studies is available for this topic, from groups within Australia holding Native Title, such as the Quandamooka People, to groups from the mountains of Southern China, such as the Yi people. The larger number of case studies available, which can be found in our store resource category Indigenous Peoples, allows Year 6 students to pursue more individual lines of enquiry, suited to their developing abilities.

Life in Afghanistan, Random Stuff, and More

On Afghanistan: - the place reminds me of a lot of other Middle Eastern countries... Despite this, and their internal stability issues, it is clear that they have their own 'identity'. The deeper you look, the more you'll realise why the US/West is having such difficulties there. Afghanistan, officially the Islamic Republic of Afghanistan, is a landlocked country located within South Asia and Central...

Nyriad: An Agile Startup Done Right

I have recently spent a few days in the company of Nyriad, a New Zealand IT company specialising in GPU software. I wish to make a few observations about the company because they are an example of a startup that uses agile project management (two terms much maligned and subject to justified cynicism) and does it right. Because I have seen so many colleagues burned by companies and organisations which profess such values and do not do it right, I hope the following observations will be useful for future organisations.


February 23, 2017

This Week in HASS – term 1, week 4

This week the Understanding Our World program for primary schools has younger students looking at time passing, both in their own lives and as marked by others, including the seasons recognised by different Aboriginal groups. Older students are looking at how Aboriginal people interacted with the Australian environment, as it changed at the end of the Ice Age, and how they learnt to manage the environment and codified that knowledge into their lore.

Foundation to Year 3

This week our standalone Foundation classes (Unit F.1) are thinking about what they were like as babies. They are comparing photographs or drawings of themselves as babies with how they are now. This is a great week to involve family members and carers in class discussions, if appropriate. Students in multi-age classes and Years 1 to 3 (Units F.5, 1.1, 2.1 and 3.1) are examining how weather and seasons change throughout the year and comparing our system of seasons with those used by different groups of Aboriginal people in different parts of Australia. Students can compare these seasons to the weather where they live and think about how they would divide the year into seasons that work where they live. Students can also discuss changes in weather over time with older members of the community.

Years 3 to 6

Older students, having followed the ancestors of Aboriginal people all the way to Australia, are now examining how the climate changed in Australia after the Ice Age, and how this affected Aboriginal people. They learn how Aboriginal people adapted to their changing environment and learned to manage it in a sustainable way. This vitally important knowledge about how to live with, and manage, the Australian environment was codified into Aboriginal lore and custom and handed down in stories and laws, from generation to generation. Students start to examine the idea of Country/Place in this context.

February 22, 2017

Making a USB powered soldering iron that doesn't suck

Today's evil project was inspired by a suggestion after my talk on USB-C & USB-PD at this year's linux.conf.au Open Hardware miniconf.

Using a knock-off Hakko driver and handpiece I've created what may be the first USB powered soldering iron that doesn't suck (ok, it's not a great iron, but at least it has sufficient power to be usable).

Building this was actually trivial: I just wired the 20V output of one of my USB-C ThinkPad boards to a generic Hakko driver board. The loss of power from using 20V rather than 24V is noticeable, but for small work this would be fine (I solder in either the work lab or my home lab, both of which have very nice soldering stations, so I don't actually expect to ever use this).

If you were to turn this into a real product you could in fact do much better by doing both power negotiation and temperature control in a single micro: the driver could be switched from a simple FET to a boost converter, the power draw could be controlled by controlling the output voltage, and the heater could be turned off by simply disabling the regulator. By chance, the heater resistance of the Hakko 907 clone handpieces is such that, combined with USB-PD power rules, you'd always be boost converting and never need to reduce the voltage.

With such a driver you could run this from anything, starting with a 5V USB-C phone charger or battery (15W for the nicer ones), through 9V at up to 3A off some laptops (for ~25W), all the way up to 20V@5A for those who need an extremely high-power iron. 60W, which happens to be the standard power level of many good irons (such as the Hakko FX-888D), is also a common limit for chargers at 20V@3A (and for many cables as well: only fixed cables, or those specially marked with an ID chip, can go all the way to 5A). As higher-power USB-C batteries start becoming available for laptops, this becomes a real option for on-the-go use.
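As a quick back-of-the-envelope illustration (a purely illustrative Python sketch, not part of the original write-up), the power available at those voltage/current combinations works out as:

# Rough power budget for common USB-PD voltage/current combinations.
# Purely illustrative; what you actually get depends on the charger and cable.
profiles = [
    (5, 3),    # basic USB-C phone charger or battery
    (9, 3),    # some laptops
    (20, 3),   # common charger limit with a standard (3A-rated) cable
    (20, 5),   # needs a fixed cable or one marked with an ID chip
]

for volts, amps in profiles:
    print(f"{volts:>2} V @ {amps} A -> {volts * amps:>3} W")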

Here's a photo of it running from a Chromebook Pixel charger:

February 21, 2017

$AUD 35k available in 2017 Grants Program

Linux Australia is delighted to announce the availability of $AUD 35,000 for open source, open data, open government, open education, open hardware and open culture projects, as part of the organisation’s commitment to free and open source systems and communities in the region.

This year, we have deliberately weighted some areas in which we strongly welcome grant applications.

More information is available at: https://linux.org.au/projects/grants

Please do share this with colleagues who may find it of interest, and feel free to contact the Linux Australia Council if you would like a private discussion.

With kind regards,

Kathy Reid

President, Linux Australia

February 20, 2017

Life in India, Prophets/Pre-Cogs/Stargate Program 7, and More

On India: - your life is very much dependent on how you were born, how much money you have, etc. In spite of being a capitalist democracy, it still bears aspects of being stuck with a caste/feudal/colonial system. There are still electrical power stability issues. Ovens are generally lacking. Pollution and traffic problems exist no matter where you live in India. They have the same number of hours in each...

New LinkedIn Interface Deleted Your Data? Here’s How to Bring it Back.

Over the past few years it has seemed like LinkedIn were positioning themselves to take over your professional address book. Through offering CRM-like features, users were able to see a summary of their recent communications with each connection as well as being able to add their own notes and categorise their connections with tags. It appeared to be a reasonable strategy for the company, and many users took the opportunity to store valuable business information straight onto their connections.

Then at the start of 2017 LinkedIn decided to progressively foist a new user experience upon its users, and features like these disappeared overnight in favour of a more ‘modern’ interface. People who grew to depend on this integration were in for a rude shock: all of a sudden it was missing. Did LinkedIn delete the information? There was no prior warning given, and I still haven’t seen any acknowledgement or explanation (let alone an apology) from LinkedIn/Microsoft for the inconvenience and damage caused.

If anything, this reveals the risks in entrusting your career/business to a proprietary cloud service. Particularly with free/freemium (as in cost) services, the vendor is more likely to change things on a whim or move that functionality to a paid tier.

It’s another reason why I’ve long been an advocate for open standards and free and open source software.

Fortunately there’s a way to export all of your data from LinkedIn. This is what we’ll use to get back your tags and notes. These instructions are relevant for the new interface. Go to your account settings and in the first section (“Basics”) you should see an option called “Getting an archive of your data”.

LinkedIn: Getting an archive of your data

Click on Request Archive and you’ll receive an e-mail when it’s available for download. Extract the resulting zip file and look for a file called Contacts.csv. You can open it in a text editor, or better yet a spreadsheet like LibreOffice Calc or Excel.

In my copy, my notes and tags were in columns D and E respectively. If you have many, it may be a lot of work to manually integrate them back into your address book. I’d love suggestions on how to automate this. Since I use Gmail, I’m currently looking into Google’s address book import/export format, which is CSV based.
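As a rough starting point for that automation, a hypothetical Python sketch like the following could pull the notes and tags out ready for merging elsewhere. The column positions (and the name being in the first column) are assumptions based on my copy of the export; check yours first:

import csv

# Hypothetical sketch: pull notes (column D) and tags (column E) out of a
# LinkedIn Contacts.csv export. Column positions are assumptions based on
# my copy of the export, not a documented format.
with open("Contacts.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f)
    next(reader, None)  # skip the header row
    for row in reader:
        if len(row) < 5:
            continue
        name, notes, tags = row[0], row[3], row[4]
        if notes or tags:
            print(f"{name}: notes={notes!r}, tags={tags!r}")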

As long as Microsoft/LinkedIn provide a full export feature, this is a good way to maintain ownership of your data. It’s good practice to take an export every now and then to give yourself some peace-of-mind and avoid vendor lock-in.

This article has also been published on LinkedIn.

February 19, 2017

HPC/Cloud Hybrids for Efficient Resource Allocation and Throughput

HPC systems running massively parallel jobs need a fairly static software operating environment running on bare metal hardware and a high speed interconnect to reach their full potential, and they offer linear performance scaling for cleverly designed applications. Cloud computing, on the other hand, offers flexible virtual environments and can be used for pleasingly parallel workloads.


CareerNexus: a new way to find work

The start-up that I have co-founded, CareerNexus, is looking for job seekers to take part in a product test and market experiment. If you, or someone you know, wants to know more and potentially take part, message me.

If we can help just a fraction of those people who have difficulty finding work through traditional means — people returning from parental leave, people looking for roles after being made redundant, mature workers, even some highly skilled professionals — we’ll be doing something great.

As an alternate means of finding work, it need not replace any mechanisms that you may already be engaged in. In other words, there is nothing for you to lose and hopefully much for you to gain.

Published in Engineers Without Borders Magazine

Engineers Without Borders asked me to write something for their Humanitarian Engineering magazine about One Laptop per Child. Here is what I wrote.

The school bell rings, and the children filter into the classroom. Each is holding an XO – their own personal learning device.

Students from Doomadgee often use their XOs for outdoors education. The sunlight-readable screen combined with the built-in camera allows for hands-on exploration of their environment.

This is no ordinary classroom. As if by magic, the green and white XOs automatically see each other as soon as they are started up, allowing children to easily share information and collaborate on activities together. The kids converse on how they can achieve the tasks at hand. One girl is writing a story on her XO, and simultaneously on the same screen she can see the same story being changed by a boy across the room. Another group of children are competing in a game that involves maths questions.

Children in Kiwirrkurra, WA, collaborate on an activity with help from teachers.

Through the XO, the learning in this classroom has taken on a peer-to-peer character. By making learning more fun and engaging, children are better equipped to discover and pursue their interests. Through collaboration and connectivity, they can exchange knowledge with their peers and with the world. In the 21st century, textbooks should be digital and interactive. They should be up-to-date and locally relevant. They should be accessible and portable.

Of course, the teacher’s role remains vital, and her role has evolved into that of a facilitator in this knowledge network. She is better placed to provide more individual pathways for learning. Indeed the teacher is a learner as well, as the children quickly adapt to the new technology and learn skills that they can teach back.

A teacher in Jigalong, WA, guides a workgroup of children in their class.

Helping to keep the classroom session smoothly humming along are children who have proven themselves to be proficient with assisting their classmates and fixing problems (including repairing hardware). These kids have taken part in training programmes that award them for their skills around the XO. In the process, they are learning important life skills around problem solving and teamwork.

Dozens of students in Doomadgee State School are proficient in fixing XO hardware.

This is all part of the One Education experience, an initiative from One Laptop per Child (OLPC) Australia. This educational programme provides a holistic educational scaffolding around the XO, the laptop developed by the One Laptop per Child Association that has its roots in the internationally-acclaimed MIT Media Lab in the USA.

The XO was born from a desire to empower each and every child in the world with their own personal learning device. Purpose-built for young children and using solid open source software, the XO provides an ideal platform for classroom learning. Designed for the outdoors, with a rugged case and a high-resolution sunlight-readable screen, the XO means education is no longer confined to a classroom or even to the school grounds. Learning time needn’t stop with the school bell – many children are taking their XOs home. Also important is the affordability and full repairability of the devices, making them cost-effective versus non-durable and ephemeral items such as stationery, textbooks and other printed materials. There are over 3 million XOs in distribution, and in some countries (such as Uruguay) every child owns one.

A One Education classroom in Kenya.

One Education’s mission is to provide educational opportunities to every child, no matter how remote or disadvantaged. The digital divide is a learning divide. This can be conquered through a combination of modern technology, training and support, provided in a manner that empowers local schools and communities. The story told above is already happening in many classrooms around the country and the world.

A One Education classroom in northern Thailand.

With teacher training often being the Achilles’ heel of technology programmes in the field of education, One Education focuses only on teachers who have proven their interest and aptitude through the completion of a training course. Only then are they eligible to receive XOs (with an allocation of spare parts) into their classroom. Certified teachers are eligible for ongoing support from OLPC Australia, and can acquire more hardware and parts as required.

As a not-for-profit, OLPC Australia works with sponsors to heavily subsidise the costs of the One Education programme for low socio-economic status schools. In this manner, the already impressive total cost of ownership can be brought down even further.

High levels of teacher turnover are commonplace in remote Australian schools. By providing courses online, training can be scalable and cost-effective. Local teachers can even undergo further training to gain official trainer status themselves. Some schools have turned this into a business – sending their teacher-trainers out to train teachers in other schools.

Students in Geeveston in Tasmania celebrate their attainment of XO-champion status, recognising their proficiency in using the XO and their helpfulness in the classroom.

With backing from the United Nations Development Programme, OLPC are tackling the Millennium Development Goals by focusing on Goal 2 (Achieve Universal Primary Education). The intertwined nature of the goals means that progress made towards this goal in turn assists the others. For example, education on health can lead to better hygiene and lower infant mortality. A better educated population is better empowered to help themselves, rather than being dependent on hand-outs. For people who cannot attend a classroom (perhaps because of remoteness, ethnicity or gender), the XO provides an alternative. OLPC’s focus on young children means that children are becoming engaged in their most formative years. The XO has been built with a minimal environmental footprint, and can be run off-grid using alternate power sources such as solar panels.

One Education is a young initiative, formed based on experiences learnt from technology deployments in Australia and other countries. Nevertheless, results in some schools have been staggering. Within one year of XOs arriving in Doomadgee State School in northern Queensland, the percentage of Year 3 pupils meeting national literacy standards leapt from 31% to 95%.

A girl at Doomadgee State School very carefully removes the screen from an XO.

2013 will see a rapid expansion of the programme. With $11.7m in federal government funding, 50,000 XOs will be distributed as part of One Education. These schools will be receiving the new XO Duo (AKA XO-4 Touch), a new XO model developed jointly with the OLPC Association. This version adds a touch-screen user experience while maintaining the successful laptop form factor. The screen can swivel and fold backwards over the keyboard, converting the laptop into a tablet. This design was chosen in response to feedback from educators that a hardware keyboard is preferred to a touch-screen for entering large amounts of information. As before, the screen is fully sunlight-readable. Performance and battery life have improved significantly, and it is fully repairable as before.

As One Education expands, there are growing demands on OLPC Australia to improve the offering. Being a holistic project, there are plenty of ways in which we could use help, including in education, technology and logistics. We welcome you to join us in our quest to provide educational opportunities to the world’s children.

February 18, 2017

Choose your own adventure – keynote

This is a blog version of the keynote I gave at linux.conf.au 2017. Many thanks to everyone who gave such warm feedback, and I hope it helps spur people to think about systemic change and building the future. The speech can be watched at https://www.youtube.com/watch?v=J6IqGuxCKa8.

I genuinely believe we are at a tipping point right now. A very important tipping point where we have at our disposal all the philosophical and technical means to invent whatever world we want, but we’re at risk of reinventing the past with shiny new things. This talk is about trying to make active choices about how we want to live in future and what tools we keep or discard to get there. Passive choices are still a choice: they are choosing the status quo. We spend a lot of our time tinkering around the edges of life as it is, providing symptomatic relief for problems we find, but we need to take a broader systems-based view and understand what systemic change we can make to properly address those problems.

We evolved over hundreds of thousands of years using a cooperative competitive social structure that helped us work together to flourish in every habitat, rapidly and increasingly evolve and learn, and establish culture, language, trade and travel. We were constantly building on what came before and we built our tools as we went.

In recent millennia we invented systems of complex differentiated and interdependent skills, leading to increasingly rapid advancements in how we live and organise ourselves physically, politically, economically and socially, especially as we started building huge cities. Lots of people meant a lot of time to specialise, and with more of our basic needs taken care of, we had more time for philosophy and dreaming.

Great progress created great surplus, creating great power, which we generally centralised in our great cities under rulers that weren’t always so great. Of course, great power also created great inequalities so sometimes we burned down those great cities, just to level the playing field. We often took a symptomatic relief approach to bad leaders by replacing them, without fundamentally changing the system.

But in recent centuries we developed the novel idea that all people have inalienable rights and can be individually powerful. This paved the way for a massive culture shift and distribution of power combined with heightened expectations of individuals in playing a role in their own destiny, leading us to the world as we know it today. Inalienable rights paved the way for people thinking differently about their place in the world, the control they had over their lives and how much control they were happy to cede to others. This makes us, individually, the most powerful we have ever been, which changes the game moving forward.

You see, the internet was both a product and an amplifier of this philosophical transition, and of course it lies at the heart of our community. Technology has, in large part, only sped up the cooperative competitive models of adapting, evolving and flourishing we have always had. But the idea that anyone has a right to life and liberty started a decentralisation of power and introduced the need for legitimate governance based on the consent of citizens (thank you Locke).

Citizens have the powers of publishing, communications, monitoring, property, even enforcement. So in recent decades we have shifted fundamentally from kings in castles to nodes in a network, from scarcity to surplus or reuse models, from closed to open systems, and the rate of human progress only continues to grow towards an asymptotic climb we can’t even imagine.

To help capture this, I thought I’d make a handy change.log on human progress to date.

# Notable changes to homo sapiens – change.log
## [2.1.0] – 1990s CE “technology revolution & internet”
### Changed
– New comms protocol to distribute “rights”. Printing press patch unexpectedly useful for distributing resources. Moved from basic multi-core to clusters of independent processors with exponential growth in power distribution.

## [2.0.0] – 1789 CE “independence movements”
### Added
– Implemented new user permissions called “rights”, early prototype of multi-core processing with distributed power & comms.

## [1.2.0] – 1760 CE “industrial revolution”
### Changed
– Agricultural libraries replaced by industrial libraries, still single core but heaps faster.

## [1.1.1] – 1440 CE “gutenberg”
### Patched
– Printing press a minor patch for more efficient instructions distribution, wonder if it’d be more broadly useful?

## [1.1.0] – 2,000 BCE “cities era”
### Changed
– Switched rural for urban operating environment. Access to more resources but still on single core.

## [1.0.0] – 8,000 BCE “agricultural revolution”
### Added
– New agricultural libraries, likely will create surplus and population explosion. Heaps less resource intensive.

## [0.1.0] – 250,000 BCE “homo sapiens”
### Added
– Created fork from homo erectus, wasn’t confident in project direction though they may still submit contributions…

(For more information about human evolution, see https://www.bighistoryproject.com)

The point to this rapid and highly oversimplified historical introduction is threefold: 1) we are more powerful than ever before, 2) the rate of change is only increasing, and 3) we made all this up, and we can make it up again. It is important to recognise that we made all of this up. Intellectually we all understand this but it matters because we often assume things are how they are, and then limit ourselves to working within the constraints of the status quo. But what we invented, we can change, if we choose.

We can choose our own adventure, or we let others choose on our behalf. And if we unthinkingly implement the thinking, assumptions and outdated paradigms of the past, then we are choosing to reimplement the past.

Although we are more individually and collectively powerful than ever before, how often do you hear “but that’s just how we’ve always done it”, “but that’s not traditional”, or “change is too hard”? We are demonstrably and historically utter masters at change, but life has become so big, so fast, and so interrelated that change has become scary for many people, so you see them satisfied by either ignoring change or making iterative improvements to the status quo. But we can do better. We must do better.

I believe we are at a significant tipping point in history. The world and the very foundations our society were built on have changed, but we are still largely stuck in the past in how we think and plan for the future. If we don’t make some active decisions about how we live, think and act, then we will find ourselves subconsciously reinforcing the status quo at every turn and not in a position to genuinely create a better future for all.

So what could we do?

  • Solve poverty and hunger: distributed property through nanotechnology and 3D printing, universal education and income.
  • Work 2 days a week, automate the rest: see “Why the Future is Workless” by Tim Dunlop
  • Embrace and extend our selves: transhumanism, the Paralympics, “He was more than a dolphin, but from another dolphin’s point of view he might have seemed like something less.” — William Gibson, from Johnny Mnemonic. Why are we so conservative about what it means to be human? About our picture of self? Why do we get caught up on what is “natural” when almost nothing we do is natural?
  • Overcome the tyranny of distance: rockets for international travel, interstellar travel, the opportunity to have new systems of organising ourselves
  • Global citizens: Build a mighty global nation where everyone can flourish and have their rights represented beyond the narrow geopolitical nature of states: peer to peer economy, international rights, transparent gov, digital democracy, overcome state boundaries,
  • ?? What else ?? I’m just scratching the surface!

So how can we build a better world? Luckily, the human species has geeks. Geeks, all of us, are special because we are the pioneers of the modern age and we get to build the operating system for all our fellow humans. So it is our job to ensure what we do makes the world a better place.

rOml is going to talk more about future options for open source in the Friday keynote, but I want to explore how we can individually and collectively build for the future, not for the past.

I would suggest, given our role as creators, it is incumbent on us to ensure we build a great future world that supports all the freedoms we believe in. It means we need to be individually aware of our unconscious bias, what beliefs and assumptions we hold, who benefits from our work, whether diversity is reflected in our life and work, what impact we have on society, what we care about and the future we wish to see.

Collectively we need to be more aware of whether we are contributing to future or past models, whether belief systems are helping or hindering progress, how we treat others and what from the past we want to keep versus what we want to get rid of.

Right now we have a lot going on. On the one hand, we have a lot of opportunities to improve things, and the tools and knowledge at our disposal to do so. On the other hand, we have locked up so much of our knowledge and tools, traditional institutions are struggling to maintain their authority and control, citizens are understandably frustrated and increasingly taking matters into their own hands, we have greater inequality than ever before and an obsession with work at the cost of living, and we are expected to sacrifice our humanity at the altar of economics.

Questions to ask yourself:

Who are/aren’t you building for?
What is the default position in society?
What does being human mean to you?
What do we value in society?
What assumptions and unconscious bias do you have?
How are you helping non-geeks help themselves?
What future do you want to see?

What should be the rights, responsibilities and roles of citizens, governments, companies, academia?

Finally, we must also help our fellow humans shift from being consumers to creators. We are all only as free as the tools we use, and though geeks will always be able to route around damage, be that technical or social, many of our fellow humans do not have the same freedoms we do.

Fundamental paradigm shifts we need to consider in building the future.

Scarcity → Surplus
Closed → Open
Centralised → Distributed
Belief → Rationalism
Win/lose → Cooperative competitive
Nationalism → Transnationalism
Normative humans → Formative humans

Open source is the best possible modern expression of cooperative competitiveness that also integrates our philosophical shift towards human rights and powerful citizens, so I know it will continue to thrive and win when pitted against closed models, broadly speaking.

But in inventing the future, we need to be so very careful that we don’t simply rebuild the past with new shiny tools. We need to keep one eye always on the future we want to build, on how what we are doing contributes to that future, and on having enough self-awareness and commitment to ensure we don’t accidentally embed in our efforts the outdated and oftentimes repressive habits of the past.

To paraphrase Gandhi, build the change you want to see. And build it today.

Thank you, and I hope you will join me in forging a better future.

Life in Sudan (technically North and South Sudan), Life in Somalia, and More

- The Kingdom of Kush was one of the first recorded instances of Sudan in history. Strong influence of the ancient Egyptian empire and their gods. Strong religious influence, with Christianity and then Islam thereafter. Has struggled with internal conflict for a long time. Sudan history: https://www.youtube.com/results?search_query=sudan+history https://en.wikipedia.org/wiki/Foreign_relations_of_Sudan -

February 16, 2017

New Research on Our Little Cousins to the North!

Homo floresiensis

Last year, several research papers were published on the ongoing excavations and analysis of material from the island of Flores in Indonesia, where evidence of very small stature hominins was found in the cave of Liang Bua in 2003. The initial dating placed these little people at between 50,000 and about 14,000 years ago, which would have meant that they lived side-by-side with anatomically modern humans in Indonesia in the late Ice Age. The hominins, dubbed Homo floresiensis after the island on which they were found, stood about 1m tall – smaller than any group of modern humans known. Their tiny size included a tiny brain – more in the range of 4-million-year-old Australopithecus than anything else. However, critical areas of higher order thinking in their brains were on par with modern humans.

Baffled by the seeming wealth of contradictions these little people raised, researchers returned to the island, and the cave of Liang Bua, determined to check all of their findings in even more detail. Last year, they reported that they had in fact made some mistakes the first time around. Very, very subtle changes in the sediments of the deposits revealed that the Homo floresiensis bones belonged to some remnant older deposits, which had been eroded away in other parts of the cave and replaced by much younger layers. Despite the samples for dating having been taken from close to the hominin bones, as luck would have it, they were all in the younger deposits! New dates, run on the actual sediments containing the bones, gave ages of between 190,000 and 60,000 years. Dates from close to the stone tools found with the hominins gave dates down to 50,000 years ago, but no later.

Liang Bua. Image by Rosino

The researchers, demonstrating a high level of ethics and absolutely correct scientific procedure, published the amended stratigraphy and dates, showing how the errors had occurred. At another site, Mata Menge, they had also found some ancestral hominins, very similar in body type to the ones from Liang Bua, dated to 700,000 years ago. Palaeoanthropologists were able to find similarities linking these hominins to the early Homo erectus found on Java and dated to about 1.2 million years ago, leading researchers to suggest that Homo floresiensis was a parallel evolution to modern humans, out of early Homo erectus in Indonesia, making them a fairly distant cousin on the grand family tree.

Careful examination of the deposits has now also called into question whether Homo floresiensis could control fire. We know that they made stone tools, of a type pretty much unchanged over more than 600,000 years, and they used these tools to help them hunt Stegodon, an Ice Age dwarf elephant which was as small as 1.5m at the shoulder. However, researchers now think that evidence of controlled fire is only present in layers associated with modern humans. It is this cross-over between Homo floresiensis and modern humans, arriving about 60,000 – 50,000 years ago, that is a focus of current research, including that of teams working there now. At the moment, it looks as if Homo floresiensis disappears at about the same time that modern humans arrive, which, sadly, is not a totally unlikely pattern.

Stegodon. Image by I, Vjdchauhan.

What does this have to do with Australia? Well, it’s always interesting to get information about our immediate neighbours and their history (and prehistory). But beyond that – we know that the ancestors of Aboriginal people (modern humans) were in Australia by about 60,000 – 50,000 years ago, so understanding how they arrived is part of understanding our own story. For more case studies on interesting topics in archaeology and palaeontology see our Archaeology Textbook resources for Year 11 students.

New D&D Cantrip

Name: Alternative Fact
Level: 0
School: EN
Time: 1 action
Range: global, contagious
Components: V, S, M (one racial, cultural or religious minority to blame)
Duration: Permanent (irrevocable)
Classes: Cleric, (Grand) Wizard, Con-man Politician

The caster can tell any lie, no matter how absurd or outrageous (in fact, the more outrageous the better), and anyone hearing it (or hearing about it later) with an INT of 10 or less will believe it instantly, with no saving throw. They will defend their new belief to the death – theirs or yours.

This belief can not be disbelieved, nor can it be defeated by any form of education, logic, evidence, or reason. It is completely incurable. Dispel Magic does not work against it, and Remove Curse is also ineffectual.

New D&D Cantrip is a post from: Errata

Two Weeks’ Notice

Last week, a rather heavy document envelope showed up in the mail.

Inside I found a heavy buff-coloured envelope, along with my passport — now containing a sticker featuring an impressive collection of words, numbers, and imagery of landmarks from the United States of America. I’m reliably informed that sticker is the valid US visa that I’ve spent the last few months applying for.

Having that visa issued has unblocked a fairly important step in my path to moving in with Josh (as well as eventually getting married, but that’s another story). I’m very very excited about making the move, though very sad to be leaving the city I’ve grown up in and come to love, for the time being.

Unrelatedly, I happened to have a trip planned to Montréal to attend ConFoo in March. Since I’ll already be in the area, I’m using that trip as my opportunity to move.

My last day in Hobart will be Thursday 2 March. Following that, I’ll be spending a day in Abu Dhabi (yes, there is a good reason for this), followed by a week in Montréal for ConFoo.

After that, I’ll be moving in with Josh in Petaluma, California on Saturday 11 March.

But until then, I definitely want to enjoy what remaining time I have in Hobart, and catch up with many many people.

Over the next two weeks I’ll be:

  • Attending, and presenting a talk at WD42 — my talk will be one of my pieces for ConFoo, and is entirely new material. Get along!
  • Having a farewell do, *probably* on Tuesday 28 February (but that’s not confirmed yet). I’ll post details about where and when that’ll be in the near future (once I’ve made plans)
  • Madly packing and making sure that I use up as close to 100% of my luggage allowance as possible

If you want to find some time to catch up over the next couple of weeks, before I disappear for quite some time, do let me know.

February 15, 2017

FreeDV 700C

Over the past month the FreeDV 700C mode has been developed, integrated into the FreeDV GUI program version 1.2, and tested. Windows versions (64 and 32 bit) of this program can be downloaded from freedv.org. Thanks Richard Shaw for all your hard work on the release and installers.

FreeDV 700C uses the Codec 2 700C vocoder with the COHPSK modem. Some early results:

  • The US test team report 700C contacts over 2500km at SNRs down to -2dB, in conditions where SSB cannot be heard.
  • My own experience: the 700C speech quality is not quite as good as FreeDV 1600, but usable for conversation. That’s OK – it’s very early days for the 700C codec, and hey, it’s half the bit rate of 1600. I’m actually quite excited that 700C can be used conversationally at this early stage! I experienced a low SNR channel where FreeDV 700C didn’t work but SSB did, however 700C certainly works at much lower SNRs than 1600.
  • Some testers in Europe report 700C falling over at relatively high SNRs (e.g. 8dB). I also experienced this on a 1500km contact. Suspect this is a bug or corner case we can fix, especially in light of the US teams results.

Tony, K2MO, has put together this fine video demonstrating the various FreeDV modes over a simulated HF channel:

It’s early days for 700C, and there are mixed reports. However it’s looking promising. My next steps are to further explore the real world operation of FreeDV 700C, and work on improving the low SNR performance further.

Life in Libya, Going off the Grid, and More

On Libya: - Berber tribes are the indigenous people. For most of its history, Libya has been subjected to varying degrees of foreign control, from Europe, Asia, and Africa. The modern history of independent Libya began in 1951. The history of Libya comprises six distinct periods: Ancient Libya, the Roman era, the Islamic era, Ottoman rule, Italian rule, and the Modern era. Very small population of...

Modems for HF Digital Voice Part 2

In the previous post I argued that pushing bits through a HF channel involves much wailing and gnashing of teeth. Now we shall apply numbers and graphs to the problem, which is – in a nutshell – Engineering.

QPSK Modem Simulation

I have worked up a GNU Octave modem simulation called hf_modem_curves.m. This operates at 1 sample/symbol, i.e. the sample rate is the symbol rate. So we take some random bits, map them to QPSK symbols, add noise, then turn the noisy symbols back into bits and count errors:
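Purely as an illustration of that idea (the real curves below all come from the Octave script), a minimal AWGN-only version of the loop might look something like the following Python/NumPy sketch:

import numpy as np

# Illustrative sketch only; the actual simulation is hf_modem_curves.m (GNU Octave).
# Random bits -> QPSK symbols -> add noise -> hard decisions -> count bit errors.
rng = np.random.default_rng(1)
nbits = 100_000
EbNo_dB = 4.0

bits = rng.integers(0, 2, nbits)
b0, b1 = bits[0::2], bits[1::2]                        # 2 bits per QPSK symbol
symbols = ((1 - 2*b0) + 1j*(1 - 2*b1)) / np.sqrt(2)    # unit-energy constellation, '00' at 45 degrees

EbNo = 10 ** (EbNo_dB / 10)
EsNo = 2 * EbNo                                        # 2 bits/symbol
N0 = 1.0 / EsNo                                        # Es = 1
noise = np.sqrt(N0 / 2) * (rng.standard_normal(symbols.size)
                           + 1j * rng.standard_normal(symbols.size))
rx = symbols + noise

rx_b0 = (rx.real < 0).astype(int)                      # decide by quadrant
rx_b1 = (rx.imag < 0).astype(int)
errors = np.count_nonzero(rx_b0 != b0) + np.count_nonzero(rx_b1 != b1)
print(f"BER at Eb/No = {EbNo_dB} dB: {errors/nbits:.4f}")   # the curve below reads ~1E-2 here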

The simulation ignores a few real world details like timing and phase synchronisation, so is a best case model. That’s OK for now. QPSK uses symbols that each carry 2 bits of information, here is the symbol set or “constellation”:

Four different points, each representing a different 2 bit combination. For example the bits ’00’ would be the cross at 45 degrees, ’10’ at 135 degrees etc. The plot above shows all possible symbols, but we just send one at a time. However it’s useful to plot all of the received symbols like this, as an indication of received signal quality. If the channel is playing nice, we receive something like this:

Each cross is now a fuzzy dot, as noise has been added by the channel. No bit errors yet – a bit error happens when we get enough noise to move received symbols into another quadrant. This sort of channel is called Additive White Gaussian Noise (AWGN). Line of sight UHF radio is a good example of a real world AWGN channel – all you have to worry about is additive noise.

With a fading or multipath channel like HF we end up with something like:

In a fading channel the received symbol amplitudes bounce up and down as the channel fades in and out. Sometimes the symbols dip down into the noise and we get lots of bit errors. Sometimes the signal is reinforced, and the symbol amplitude gets bigger.

The simulation used for the multipath or HF channel uses a two path model, with additive noise as per the AWGN simulation:

Graphs and Modem Performance

Turns out there are some surprisingly good models to help us work out the expected Bit Error Rate (BER) for a modem. By “model” I mean people have worked out the maths to describe the Bit Error Rate (BER) for a QPSK Modem. This graph shows us how to work out the BER for QPSK (and BPSK):

So the red line shows us the BER given Eb/No (E-B on N-naught), which is a normalised form of Signal to Noise Ratio (SNR). Think about Eb/No as a modem running at 1 bit per second, with the noise power measured in 1 Hz of bandwidth. It’s a useful scale for comparing modems and modulation schemes.

Looking at the black lines, we can see that for an Eb/No of 4dB, we can expect a BER of 1E-2 or 0.01, i.e. 1% of our bits will be received in error over an AWGN channel. This curve is for QPSK or BPSK; different curves would be used for other modems like FSK.

Given Eb/No you can work out the SNR if you know the bit rate and noise bandwidth:

    SNR = S/N = EbRb/NoB

or in dB:

    SNR(dB) = Eb/No(dB) + 10log10(Rb/B)

For example at Rb = 1600 bit/s and a noise bandwidth B = 3000 Hz:

    SNR(dB) = 4 + 10log10(1600/3000) = 1.27 dB
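That conversion is handy enough to wrap up; here it is as a small Python helper, purely a convenience that mirrors the formula above:

import math

def snr_from_ebno(ebno_db, rb, b=3000):
    # SNR(dB) = Eb/No(dB) + 10*log10(Rb/B), with B defaulting to a 3000 Hz noise bandwidth
    return ebno_db + 10 * math.log10(rb / b)

print(round(snr_from_ebno(4, 1600), 2))   # 1.27 dB, the worked example above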

OK, so that was for ideal QPSK. Let's add a few more curves to our graph:

We have added the experimental results for our QPSK simulation (green), and for Differential QPSK (DQPSK – blue). Our QPSK modem simulation (green) is right on top of the theoretical QPSK curve (red) – this is good and shows our simulation is working really well.

DQPSK was discussed in Part 1. Phase differences are sent, which helps with phase errors in the channels but costs us extra bit errors. This is evident on the curves – at the 1E-2 BER line, DQPSK requires 7dB Eb/No, 3dB more (double the power) than QPSK.

Now let's look at modem performance for HF (multipath) channels, on this rather busy graph (click for larger version):

Wow, HF sucks. Looking at the theoretical HF QPSK performance (straight red line) to achieve a BER of 1E-2, we need 14dB of Eb/No. That’s 10dB worse than QPSK on the AWGN channel. With DQPSK, we need about 16dB.

For HF, a lot of extra power is required to make a small difference in BER.

Some of the kinks in the HF curves (e.g. green QPSK HF simulated just under red QPSK HF theory) are due to not enough simulation points – it’s not actually possible to do better than theory!

Estimated Performance of FreeDV Modes

Now we have the tools to estimate the performance of FreeDV modes. FreeDV 1600 uses Codec 2 at 1300 bit/s, plus a little FEC at 300 bit/s to give a total of 1600 bit/s. With the FEC, lets say we can get reasonable voice quality at 4% BER. FreeDV 1600 uses a DQPSK modem.

On an AWGN channel, that’s an Eb/No of 4.4dB for DQPSK, and a SNR of:

    SNR(dB) = 4.4 + 10log10(1600/3000) = 1.7 dB

On a multipath channel, that’s an Eb/No of 11dB for DQPSK, and a SNR of:

    SNR(dB) = 11 + 10log10(1600/3000) = 8.3 dB

As discussed in Part 1, FreeDV 700C uses diversity and coherent QPSK, and has a multipath (HF) performance curve plotted in cyan above, and close to ideal QPSK on AWGN channels. The payload data rate is 700 bit/s, however we have an overhead of two pilot symbols for every 4 data symbols. This means we effectively need a bit rate of Rb = 700*(4+2)/4 = 1050 bit/s to pump 700 bit/s through the channel. It doesn't have any FEC (yet, anyway), so we need a BER a little lower than FreeDV 1600, about 2%. Running the numbers:

On an AWGN channel, for 2% BER we need an Eb/No of 3dB for QPSK, and a SNR of:

    SNR(dB) = 3 + 10log10(1050/3000) = -1.5 dB

On a multipath channel, diversity (cyan line) helps a lot, that’s an Eb/No of 8dB, and a SNR of:

    SNR(dB) = 8 + 10log10(1050/3000) = 3.4 dB
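Running the same numbers in Python with the snr_from_ebno helper from earlier reproduces the four estimates above; the Eb/No values are the ones read off the curves, so this is just the formula again, not new data:

import math

def snr_from_ebno(ebno_db, rb, b=3000):
    # same helper as before: SNR(dB) = Eb/No(dB) + 10*log10(Rb/B)
    return ebno_db + 10 * math.log10(rb / b)

rb_700c = 700 * (4 + 2) / 4     # pilot overhead: 2 pilot symbols per 4 data symbols
print("FreeDV 1600, AWGN:      %5.2f dB" % snr_from_ebno(4.4, 1600))
print("FreeDV 1600, multipath: %5.2f dB" % snr_from_ebno(11, 1600))
print("FreeDV 700C, AWGN:      %5.2f dB" % snr_from_ebno(3, rb_700c))
print("FreeDV 700C, multipath: %5.2f dB" % snr_from_ebno(8, rb_700c))
# Output matches the figures above, give or take rounding.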

The diversity model in the simulation uses two carriers. The amplitudes of each carrier after passing through the multipath model are plotted below:

Often when one carrier is faded, the other is not faded, so when we recombine them at the receiver we get an average that is closer to AWGN performance. However diversity is not perfect, occasionally both carriers are wiped out at the same time by a fade.

So we can see FreeDV 700C is about 4dB in front of FreeDV 1600, which matches the best reports from early adopters. I’ve had reports of FreeDV 700C operating at as low as -2dB, which is presumably on channels that don’t have heavy fading and are more like AWGN. Also some reports of 700C falling over at high SNRs (at around 8dB)! However that is probably a bug, e.g. a sync issue or something else we can track down in time.

Real world channels can vary. The multipath model above doesn’t take into account fast or slow fading, it just calculates the average bit error rate. In practice, slow fading is hard to handle in digital voice applications, as the whole channel might be wiped out for a few seconds.

Now that we have a reasonable 700 bit/s codec, we can also consider other schemes, such as a more powerful FEC code rather than diversity. Like diversity, FEC codes provide “coding gain”, moving our operating point to the left. Really good codes operate at 10% BER, right over in the Eb/No = 2dB region of the curve. No free lunch of course – such codes may require long latency (seconds) or be expensive to decode.

Next Steps

I’d like to “instrument” FreeDV 700C and work with the 700C early adopters to find out how well it’s working, why and how it falls over, and work through any obvious bugs. Then start experimenting with ways to make it operate at lower SNRs, such as more powerful FEC codes or even non-redundant techniques like Trellis decoding.

Now we have shown Codec 700C has sufficient quality for conversations over the air, I’m planning another iteration of the Codec 2 700C vocoder design to see if we can improve speech quality.

Links

Modems for HF Digital Voice Part 1.

More Eb/No to SNR worked examples.

Similar modem calculations were used to develop a 100 kbit/s telemetry system to send HD images from High Altitude Balloons.

February 14, 2017

j-core + Numato Spartan 6 board + Fedora 25

A couple of changes to http://j-core.org/#download_bitstream made it easy for me to get going:

  • In order to stop ModemManager from treating the board as a "modem", create /etc/udev/rules.d/52-numato.rules with the following content:
    # Make ModemManager ignore Numato FPGA board
    ATTRS{idVendor}=="2a19", ATTRS{idProduct}=="1002", ENV{ID_MM_DEVICE_IGNORE}="1"
  • You will need to install python3-pyserial and minicom
  • The minicom command line I used was:
    sudo stty -F /dev/ttyACM0 -crtscts && minicom -b 115200 -D /dev/ttyACM0

and along with the instructions on j-core.org, I got it to load a known good build.
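If you prefer to poke at the serial console from Python rather than minicom (python3-pyserial is already on the dependency list above), something like this sketch should be roughly equivalent to the stty settings; it is an illustration only, not part of the j-core instructions:

import serial  # from the python3-pyserial package mentioned above

# Hypothetical sketch: open the board's ACM serial console at 115200 baud
# with hardware flow control off, roughly matching the stty/minicom settings.
with serial.Serial("/dev/ttyACM0", baudrate=115200, rtscts=False, timeout=1) as port:
    port.write(b"\r\n")                              # nudge the console
    print(port.read(256).decode(errors="replace"))   # dump whatever comes back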

This Week in HASS – term 1, week 3

This is a global week in HASS for primary students. Our youngest students are marking countries around the world where they have family members, slightly older students are examining the Mayan calendar, while older students get nearer to Australia, examining how people reached Australia and encountered its unique wildlife.

Foundation to Year 3

Mayan date

Foundation students doing the Me and My Global Family unit (F.1) are working with the world map this week, marking countries where they have family members with coloured sticky dots. Those doing the My Global Family unit (F.6), and students in Years 1 to 3 (Units 1.1; 2.1 and 3.1), are examining the Mayan calendar this week. The Mayan calendar is a good example of an alternative type of calendar, because it is made up of different parts, some of which do not track the seasons, and is cyclical, based on nested circles. The students learn about the 2 main calendars used by the Mayans – a secular and a celebratory sacred calendar, as well as how the Mayans divided time into circles running at different scales – from the day to the millennium and beyond. And no, in case anyone is still wondering – they did not predict the end of the world in 2012, merely the end of one particular long-range cycle, and hence, the beginning of a new one…

Years 3 to 6

Lake Mungo, where people lived at least 40,000 years ago.

Students doing the Exploring Climates unit (3.6), and those in Years 4 to 6 (Units 4.1, 5.1 and 6.1), are examining how people reached Australia during the Ice Age, and what Australia was like when they arrived. People had to cross at least 90km of open sea to reach Australia, even during the height of the Ice Age, and this sea gap led to the relative isolation of animals in Australia from others in Asia. This phenomenon was first recorded by Alfred Wallace, who drew a line on a map marking the change in fauna; as a result, this line became known as the Wallace Line. Students will also examine the archaeological evidence, and sites of the first people in Australia, ancestors of Aboriginal people. The range of sites across Australia, with increasingly early dates, amply demonstrates the depth of antiquity of Aboriginal knowledge and experience in Australia.

February 13, 2017

Life in Syria, Why the JSF isn't Worth It, and More

Given what has happened, I wanted to see what has been happening inside Syria: https://en.wikipedia.org/wiki/Syria http://www.aljazeera.com/topics/country/syria.html http://www.aljazeera.com/programmes/specialseries/2017/02/boy-started-syrian-war-170208093451538.html - complicated colonial history with both the British and the French. Has had limited conflict with some of its neighbours including...

High Power Lustre

(Most of the hard work here was done by fellow blogger Rashmica - I just verified her instructions and wrote up this post.)

Lustre is a high-performance clustered file system. Traditionally the Lustre client and server have run on x86, but both the server and client will also work on Power. Here's how to get them running.

Server

Lustre normally requires a patched 'enterprise' kernel - typically an old RHEL, CentOS or SUSE kernel. We tested with a CentOS 7.3 kernel. We tried to follow the Intel instructions for building the kernel as much as possible - any deviations we had to make are listed below.

Setup quirks

We are told to edit ~/kernel/rpmbuild/SPEC/kernel.spec. This doesn't exist because the directory is SPECS not SPEC: you need to edit ~/kernel/rpmbuild/SPECS/kernel.spec.

I also found there was an extra quote mark in the supplied patch script after -lustre.patch. I removed that and ran this instead:

for patch in $(<"3.10-rhel7.series"); do
      patch_file="$HOME/lustre-release/lustre/kernel_patches/patches/${patch}"
      cat "${patch_file}" >> "$HOME/lustre-kernel-x86_64-lustre.patch"
done

The fact that there is 'x86_64' in the patch name doesn't matter as you're about to copy it under a different name to a place where it will be included by the spec file.

Building for ppc64le

Building for ppc64le was reasonably straight-forward. I had one small issue:

[build@dja-centos-guest rpmbuild]$ rpmbuild -bp --target=`uname -m` ./SPECS/kernel.spec
Building target platforms: ppc64le
Building for target ppc64le
error: Failed build dependencies:
       net-tools is needed by kernel-3.10.0-327.36.3.el7.ppc64le

Fixing this was as simple as a yum install net-tools.

This was sufficient to build the kernel RPMs. I installed them and booted to my patched kernel - so far so good!

Building the client packages: CentOS

I then tried to build and install the RPMs from lustre-release. This repository provides the sources required to build the client and utility binaries.

./configure and make succeeded, but when I went to install the packages with rpm, I found I was missing some dependencies:

error: Failed dependencies:
        ldiskfsprogs >= 1.42.7.wc1 is needed by kmod-lustre-osd-ldiskfs-2.9.52_60_g1d2fbad_dirty-1.el7.centos.ppc64le
    sg3_utils is needed by lustre-iokit-2.9.52_60_g1d2fbad_dirty-1.el7.centos.ppc64le
        attr is needed by lustre-tests-2.9.52_60_g1d2fbad_dirty-1.el7.centos.ppc64le
        lsof is needed by lustre-tests-2.9.52_60_g1d2fbad_dirty-1.el7.centos.ppc64le

I was able to install sg3_utils, attr and lsof, but I was still missing ldiskfsprogs.

It seems we need the lustre-patched version of e2fsprogs - I found a mailing list post to that effect.

So, following the instructions on the walkthrough, I grabbed the SRPM and installed the dependencies: yum install -y texinfo libblkid-devel libuuid-devel

I then tried rpmbuild -ba SPECS/e2fsprogs-RHEL-7.spec. This built but failed tests. Some failed because I ran out of disk space - they were using 10s of gigabytes. I found that there were some comments in the spec file about this with suggested tests to disable, so I did that. Even with that fix, I was still failing two tests:

  • f_pgsize_gt_blksize: Intel added this to their fork, and no equivalent exists in the master e2fsprogs branches. This relates to Intel specific assumptions about page sizes which don't hold on Power.
  • f_eofblocks: This may need fixing for large page sizes, see this bug.

I disabled the tests by adding the following two lines to the spec file, just before make %{?_smp_mflags} check.

rm -rf tests/f_pgsize_gt_blksize
rm -rf tests/f_eofblocks

With those tests disabled I was able to build the packages successfully. I installed them with yum localinstall *1.42.13.wc5* (I needed that rather weird pattern to pick up important RPMs that didn't fit the e2fs* pattern - things like libcom_err and libss)

Following that I went back to the lustre-release build products and was able to successfully run yum localinstall *ppc64le.rpm!

Testing the server

After disabling SELinux and rebooting, I ran the test script:

sudo /usr/lib64/lustre/tests/llmount.sh

This spat out one scary warning:

mount.lustre FATAL: unhandled/unloaded fs type 0 'ext3'

The test did seem to succeed overall, and it would seem that is a known problem, so I pressed on undeterred.

I then attached a couple of virtual harddrives for the metadata and object store volumes, and having set them up, proceeded to try to mount my freshly minted lustre volume from some clients.

Testing with a ppc64le client

My first step was to test whether another ppc64le machine would work as a client.

I tried with an existing Ubuntu 16.04 VM that I use for much of my day to day development.

A quick google suggested that I could grab the lustre-release repository and run make debs to get Debian packages for my system.

I needed the following dependencies:

sudo apt install module-assistant debhelper dpatch libsnmp-dev quilt

With those the packages built successfully, and could be easily installed:

dpkg -i lustre-client-modules-4.4.0-57-generic_2.9.52-60-g1d2fbad-dirty-1_ppc64el.deb lustre-utils_2.9.52-60-g1d2fbad-dirty-1_ppc64el.deb

I tried to connect to the server:

sudo mount -t lustre $SERVER_IP@tcp:/lustre /lustre/

Initially I wasn't able to connect to the server at all. I remembered that (unlike Ubuntu), CentOS comes with quite an aggressive firewall by default. I ran the following on the server:

systemctl stop firewalld

And voila! I was able to connect, mount the lustre volume, and successfully read and write to it. This is very much an over-the-top hack - I should have poked holes in the firewall to allow just the ports lustre needed. This is left as an exercise for the reader.
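
If you'd rather not turn the firewall off entirely, a more surgical sketch (assuming Lustre's default LNET TCP port of 988) would be something like:

# Allow just the LNET acceptor port instead of stopping firewalld
sudo firewall-cmd --permanent --add-port=988/tcp
sudo firewall-cmd --reload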

Testing with an x86_64 client

I then tried to run make debs on my Ubuntu 16.10 x86_64 laptop.

This did not go well - I got the following error:

liblustreapi.c: In function ‘llapi_get_poollist’:
liblustreapi.c:1201:3: error: ‘readdir_r’ is deprecated [-Werror=deprecated-declarations]

This looks like one of the new errors introduced in recent GCC versions, and is a known bug. To work around it, I found the following stanza in lustre/autoconf/lustre-core.m4, and removed the -Werror:

AS_IF([test $target_cpu == "i686" -o $target_cpu == "x86_64"],
        [CFLAGS="$CFLAGS -Wall -Werror"])

Even this wasn't enough: I got the following errors:

/home/dja/dev/lustre-release/debian/tmp/modules-deb/usr_src/modules/lustre/lustre/llite/dcache.c:387:22: error: initialization from incompatible pointer type [-Werror=incompatible-pointer-types]
         .d_compare = ll_dcompare,
                  ^~~~~~~~~~~
/home/dja/dev/lustre-release/debian/tmp/modules-deb/usr_src/modules/lustre/lustre/llite/dcache.c:387:22: note: (near initialization for ‘ll_d_ops.d_compare’)

I figured this was probably because Ubuntu 16.10 has a 4.8 kernel, and Ubuntu 16.04 has a 4.4 kernel. Work on supporting 4.8 is ongoing.

Sure enough, when I fired up a 16.04 x86_64 VM with a 4.4 kernel, I was able to build and install fine.

Connecting didn't work first time - the guest failed to mount, but I did get the following helpful error on the server:

LNetError: 2595:0:(acceptor.c:406:lnet_acceptor()) Refusing connection from 10.61.2.227: insecure port 1024

Refusing insecure port 1024 made me think that perhaps the NATing that qemu was performing for me was interfering - perhaps the server expected to get a connection where the source port was privileged, and qemu wouldn't be able to do that with NAT.

Sure enough, switching NAT to bridging was enough to get the x86 VM to talk to the ppc64le server. I verified that ls, reading and writing all succeeded.

Next steps

The obvious next steps are following up the disabled tests in e2fsprogs, and doing a lot of internal performance and functionality testing.

Happily, it looks like Lustre might be in the mainline kernel before too long - parts have already started to go in to staging. This will make our lives a lot easier: for example, the breakage between 4.4 and 4.8 would probably have already been picked up and fixed if it was the main kernel tree rather than an out-of-tree patch set.

In the long run, we'd like to make Lustre on Power just as easy as Lustre on x86. (And, of course, more performant!) We'll keep you up to date!

(Thanks to fellow bloggers Daniel Black and Andrew Donnellan for useful feedback on this post.)

February 12, 2017

Life in Egypt, Life in Saudi Arabia, and More

On Egypt: - the thing that we mostly know Egypt for is the ancient Egyptian empire. Once upon a time the Middle East was effectively the global centre of knowledge, culture, etc... The pyramids were so spectacular for their age that there have been rumours throughout time of aliens being in contact with them. That would actually make a lot of stories in the Holy Scriptures make a lot more sense as well?

Printer bracket fix

Similar to many 3D printer designs, many of the parts on this 3D printer are plastic. The joint where the Z-axis meets the Y-axis is held in place by two top brackets (near the gear on the stepper there is a bolt into the Z alloy extrusion) and a bottom bracket. One flaw here is that there are no bolts into the Z-axis on the bottom bracket. It was also cracked in two places, so the structural support was low and the X-axis would droop over time. Not so handy.


The plastic is about 12mm thick and smells like a 2.5D job done by a 3D printer 'just because'. So a quick tinker in Fusion 360 and the 1/2 inch thick flatland part was born. After removing the hold-down tabs and flapping the remains away, three M6 bolt holes were hand drilled. Notice the subtle shift on the inside of the part where the extrusion and stepper motor differ in size.


It was quicker to just do that rather than try to remount and register on the CNC, and it might not have even worked with the limited Z range of the machine.


The below image only has two of the three bolts in place. With the addition of the new bolt heading into the Z-axis, the rigidity of the machine went right up. The shaft that the Z-axis is mounted onto goes into the 12mm empty hole in the part.


This does raise the question of how many other parts would be better served by not being made out of plastic.


February 11, 2017

Multicore World 2017

The 6th Multicore World will be held on Monday 20th to Wednesday 22nd of February 2017 at Shed 6 on the Wellington (NZ) waterfront. Nicolás Erdödy (Open Parallel) has once again done an amazing job of finding some of the most significant speakers in the world in parallel programming and multicore systems to attend. Although a short conference - and not an enormous one - the technical quality is always extremely high, dealing with some of the most fundamental problems and recent experiences in these fields.

read more

Librarians take up arms against fake news | Seattle Times

Librarians are stepping into the breach to help students become smarter evaluators of the information that floods into their lives. That’s increasingly necessary in an era in which fake news is a constant.

Spotting fake news – by librarian Janelle Hagen – Lakeside School Seattle

Read more: http://www.seattletimes.com/seattle-news/librarians-take-up-arms-against-fake-news

5 Tech Non Profs you should support right now!


Join 'em, support 'em, donate, promote... whatever. They all do good work. Really good work. And we should all support them as much as we can. Help me, help them, by following them, amplifying their voices, donating or, even better, joining them! And if all you've got is gratitude for the work they do, then drop 'em a line and just say a simple thank you :)

 


Software Freedom Conservancy

Follow: @conservancy

Donate: sfconservancy.org/donate

Join: sfconservancy.org/supporter

 

Open Source Initiative

Follow: @OpenSourceOrg

Donate: opensource.org/donate

Join: opensource.org/join

 

 

Drupal Association

Follow: @drupalassoc

Donate: assoc.drupal.org/donate

Join: www.drupal.org/association/individual-membership

 

 

Internet Archive

Follow: @internetarchive

Donate: archive.org/donate

Join: as above, just choose monthly sustaining member

 

 

Wikimedia Foundation

Follow: @Wikimedia

Donate: donate.wikimedia.org

Join: wikimediafoundation.org/wiki/Volunteer_opportunities

February 10, 2017

This Week in HASS: term 1 week 2

OpenSTEM A0 world map: Country Outlines and Ice Age Coastline

Foundation to Year 3

Our standalone Foundation (Prep/Kindy etc) students are introduced to the World Map this week, as they start putting stickers on it, showing where in the world they and their families come from – the origin of the title of this unit (Me and My Global Family). This helps students to feel connected with each other and to start to understand both the notion of the ‘global family’, as well as the idea that places can be represented by pictures (maps). Of course, we don’t expect most 5 year olds to understand the world map, but the sooner they start working with it, the deeper the familiarity and understanding later on.

Year 1-3 Building Stonehenge Activity - OpenSTEM History/Geography program for Primary Schools

Students building Stonehenge with blocks

All the other younger students are learning about movements of celestial bodies (the Earth and Moon, as they go around the Sun and each other) and that people have measured time in the past with reference to both the Sun and the Moon – Solar and Lunar calendars. To make these ideas more concrete, students study ancient calendars, such as Stonehenge, Newgrange and Abu Simbel, and take part in an activity building a model of Stonehenge from boxes or blocks.

Years 3 to 6

Demon Duck of Doom

Our older primary students are going back into the Ice Age (and who wouldn’t want to, in this weather!), as they explore the routes of modern humans leaving Africa, as part of understanding how people reached Australia. Aboriginal people arrived in Australia as part of the waves of modern humans spreading across the world. However, the Australia they encountered was very different from today. It was cold, dry and very dusty, inhabited by giant Ice Age animals (the Demon Duck of Doom is always a hot favourite with the students!) and overall, a pretty dangerous place. We challenge students to imagine life in those times, and thereby start to understand the basis for some of the Dreamtime stories, as well as the long and intricate relationship between Aboriginal people and the Australian environment.

This Week in HASS: term 1 week 1

We thought it would be fun to track what’s happening in schools using our primary HASS program, on a weekly basis. Now we know that some of you are doing different units and some will start in different weeks, depending on what state you’re in, what term dates you have etc, but we will run these posts based off those schools which are implementing the units in numerical order and starting in the week beginning 30 January, 2017.

Week 1 is an introductory week for all units, and usually sets some foundations for the rest of the unit.

Foundation to Year 3

Our youngest students are still finding their feet in the new big world of school! We have 2 units for Term 1, depending on whether the class is standalone, or integrating with some Year 1 students. This week standalone classes will be starting a discussion about their families – geared towards making our newest students feel welcome and comfortable at school.

Those integrating with Year 1 or possibly Year 2, as well, will start working with their teachers on a Class Calendar, marking terms and holidays, as well as celebrations such as birthdays and public holidays. This helps younger students start to map out the coming year, as well as provide a platform for discussions about how they spent the holidays. Year 2 and 3 students may choose to focus more on discussing which season we are in now, and what the weather’s like at the moment (I’m sure most of you are in agreement that it’s too hot!). Students can track the weather on the calendar as well.

Years 3 to 6

Some Year 3 students may be in classes integrating with Year 4 students, rather than Year 2. Standalone Year 3 classes have a choice of doing either unit. These older students will be undertaking the Timeline Activity and getting a physical sense of history and spans of time. Students love an excuse to get outdoors, even when it’s hot, and this activity gives them a preview of material they will be covering later in the year, as well as giving them a hands-on understanding of how time has passed and how where we are compares to past events. This activity can even reinforce the concept of a number line from Maths, in a very kinaesthetic way.

February 09, 2017

Modems for HF Digital Voice Part 1

The newly released FreeDV 700C mode uses the Coherent PSK (COHPSK) modem which I developed in 2015. This post describes the challenges of building HF modems for DV, and how the COHPSK modem evolved from the FDMDV modem used for FreeDV 1600.

HF channels are tough. You need a lot of SNR to push bits through them. There are several problems to contend with:

When the transmit signal is reflected off the ionosphere, two or more copies arrive at the receiver antenna a few ms apart. These echoes confuse the demodulator, just like a room with bad echo can confuse a listener.

Here is a plot of a BPSK baseband signal (top). Let’s say we receive two copies of this signal, from two paths. The first is identical to what we sent (top), but the second is delayed a few samples and half the amplitude (middle). When you add them together at the receiver input (bottom), it’s a mess:

The multiple paths combine to effectively form a comb filter, notching out chunks of the modem signal. Losing chunks of the modem spectrum is bad. Here is the magnitude and phase frequency response of a channel with the two paths used for the time domain example above:

Note that comb filtering also means the phase of the channel is all over the place. As we are using Phase Shift Keying (PSK) to carry our precious bits, strange phase shifts are more bad news.

All of these impairments are time varying, so the echoes/notches and phase shifts drift as the ionosphere wiggles about. As well as the multipath, the modem must deal with noise and operate at SNRs of around 0dB, and with frequency offsets between the transmitter and receiver of say +/- 100 Hz.

If commodity sound cards are used for the ADC and DAC, the modem must also handle large sample clock offsets of +/-1000 ppm. For example the transmitter DAC sample clock might be 7996 Hz and the receiver ADC 8004 Hz, instead of the nominal 8000 Hz.

As the application is Push to Talk (PTT) Digital Voice, the modem must sync up quickly, in the order of 100ms, even with all the challenges above thrown at it. Processing delay should be around 100ms too. We can’t wait seconds for it to train like a data modem, or put up with several seconds of delay in the receive speech due to processing.

Using standard SSB radio sets we are limited to around 2000 Hz of RF bandwidth. This bandwidth puts a limit on the bit rate we can get through the channel. The amplitude and phase distortion caused by typical SSB radio crystal filters is another challenge.

Designing a modem for HF Digital Voice is not easy!

FDMDV Modem

In 2012, the FDMDV modem was developed as our first attempt at a modem for HF digital voice. This is more or less a direct copy of the FDMDV waveform which was developed by Francesco Lanza, HB9TLK and Peter Martinez G3PLX. The modem software was written in GNU Octave and C, carefully tested and tuned, and most importantly – is open source software.

This modem uses many parallel carriers or tones. We are using Differential QPSK, so every symbol contains 2 bits encoded as one of 4 phases.

Let’s say we want to send 1600 bits/s over the channel. We could do this with a single QPSK carrier at Rs = 800 symbols a second. Eight hundred symbols/s times two bits/symbol for QPSK is 1600 bit/s. The symbol period Ts = 1/Rs = 1/800 = 1.25ms. Alternatively, we could use 16 carriers running at 50 symbols/s (symbol period Ts = 20ms). If the multipath channel has echoes 1ms apart it will make a big mess of the single carrier system, but the parallel tone system will do much better, as 1ms of delay spread won’t upset a 20ms symbol much:

We handle the time-varying phase of the channel using Differential PSK (DPSK). We actually send and receive phase differences. Now the phase of the channel changes over time, but can be considered roughly constant over the duration of a few symbols. So when we take a difference between two successive symbols the unknown phase of the channel is removed.

Here is an example of DPSK for the BPSK case. The first figure shows the BPSK signal (top), and the corresponding DBPSK signal (bottom). When the BPSK signal changes, we get a +1 DBPSK value, when it is the same, we get a -1 DBPSK value.

The next figure shows the received DBPSK signal (top). The phase shift of the channel is a constant 180 degrees, so the signal has been inverted. In the bottom subplot the recovered BPSK signal after differential decoding is shown. Despite the 180 degree phase shift of the channel it’s the same as the original Tx BPSK signal in the first plot above.

This is a trivial example, in practice the phase shift of the channel will vary slowly over time, and won’t be a nice neat number like 180 degrees.

DPSK is a neat trick, but has an impact on the modem Bit Error Rate (BER) – if you get one symbol wrong, the next one tends to be corrupted as well. It’s a two for one deal on bit errors, which means crappier performance for a given SNR than regular (coherent) PSK.

To combat frequency selective fading we use a little Forward Error Correction (FEC) on the FreeDV 1600 waveform. So if one carrier gets notched out, we can use bits in the other carriers to recover the missing bits. Unfortunately we don’t have the bandwidth available to protect all bits, and the PTT delay requirement means we have to use a short FEC code. Short FEC codes don’t work as well as long ones.

COHPSK Modem

Over the next few years I spent some time thinking about different modem designs and trying a bunch of different ideas, most of which failed. Research and disappointment. You just have to learn from your mistakes, talk to smart people, and keep trying. Then, towards the end of 2014, a few ideas started to come together, and the COHPSK modem was running in real time in mid 2015.

The major innovations of the COHPSK modem are:

  1. The use of diversity to help combat frequency selective fading. The baseline modem has 7 carriers. A copy of these is made, and sent at a higher frequency to make 14 tones in total. Turns out the HF channel giveth and taketh away. When one tone is notched out another is enhanced (an anti-fade). So we send each carrier twice and add them back together at the demodulator, averaging out the effect of frequency selective fades:
  2. To use diversity we need enough bandwidth to fit a copy of the baseline modem carriers. This implies the need for a vocoder bit rate of much less than 1600 bit/s – hence several iterations of a 700 bit/s speech codec – a completely different skill set – and another 18 months of my life to develop Codec 2 700C.
  3. Coherent QPSK detection is used instead of differential detection, which halves the number of bit errors compared to differential detection. This requires us to estimate the phase of the channel on the fly. Two known symbols are sent followed by 4 data symbols. These known, or Pilot symbols, allow us to measure and correct for the current phase of each carrier (see the short sketch after this list). As the pilot symbols are sent regularly, we can quickly acquire – then track – the phase of the channel as it evolves.
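
As a rough sketch of the pilot idea (illustrative Octave, not the actual cohpsk code; rx_pilots, tx_pilots and rx_data are assumed to hold the received pilots, the known pilot symbols and the following data symbols for one carrier):

% Rough per-carrier pilot phase estimate
phase_est = angle(sum(rx_pilots .* conj(tx_pilots)));  % average phase offset over the pilots
rx_data_corrected = rx_data * exp(-1j*phase_est);      % de-rotate the data symbols that follow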

Here is a figure that shows how the pilot and data symbols are distributed across one frame of the COHPSK modem. More information of the frame design is available in the cohpsk frame design spreadsheet, including performance calculations which I’ll explain in the next blog post in this series.

Coming Next

In the next post I’ll show how reading a few graphs and adding a few dBs together can help us estimate the performance of the FDMDV and COHPSK modems on HF channels.

Links

Modems for HF Digital Voice Part 2

cohpsk_plots.m Octave script used to generate plots for this post.

FDMDV Modem Page

FreeDV Robustness Part 1

FreeDV Robustness Part 2

FreeDV Robustness Part 3

Life in Iran, Examining Prophets/Pre-Cogs 6/Hyperspace Travel, and More

Wanted to take a look inside Iran given how much trouble it seems to be in: https://en.wikipedia.org/wiki/Iran https://en.wikipedia.org/wiki/Supreme_Leader_of_Iran http://president.ir/en/ http://smartraveller.gov.au/Countries/middle-east/Pages/iran.aspx https://www.theguardian.com/world/iran http://www.aljazeera.com/topics/country/iran.html https://www.lonelyplanet.com/iran - ancient

Adding a Docker Runner to GitLab

In my particular scenario, I need to run both docker and docker-compose to test and build our changes. The first step to achieving this is to add an appropriate GitLab runner.

We especially need to run a privileged runner to make this happen.

Assuming that GitLab Runner has already been successfully installed, head to Admin -> Runner in the webUI of your GitLab instance and note your Registration token.

From a suitable account on your GitLab instance register a shared runner:

% sudo /usr/bin/gitlab-ci-multi-runner register --docker-privileged \
    --url https://gitlab.my.domain/ci \
    --registration-token REGISTRATION_TOKEN \
    --executor docker \
    --description "My Docker Runner" \
    --docker-image "docker:latest"
Your shared runner should now be ready to run.

This applies to a self-hosted GitLab instance. If you are using the gitlab.com hosted service, a suitable runner is already supplied.

There are many types of executors for runners, suiting a variety of scenarios. This example's scenario is that both GitLab and the desired runner are on the same instance.
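
For completeness, here is a minimal, illustrative .gitlab-ci.yml that exercises such a privileged runner with docker-in-docker. The image, job and target names are placeholders, not part of any official setup:

# Hypothetical .gitlab-ci.yml for a privileged docker runner
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375

build:
  script:
    # docker-compose isn't in the docker:latest image; install it here
    # (e.g. via pip) or use a custom image that already ships it
    - docker info
    - docker build -t myapp .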

February 08, 2017

Manage Intel Turbo Boost with systemd

If you have a little laptop with an Intel CPU that supports turbo boost, you might find that it’s getting a little hot when you’re using it on your lap.

For example, taking a look at my CPU:
lscpu |egrep "Model name|MHz"

We can see that it’s a 2.7GHz CPU with turbo boost taking it up to 3.5GHz.

Model name: Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
CPU MHz: 524.633
CPU max MHz: 3500.0000
CPU min MHz: 400.0000

Here’s a way that you can enable and disable turbo boost with a systemd service, which lets you hook it into other services or disable it on boot.

By default, turbo boost is on, so starting our service will disable it.

Create the service.
cat << EOF | sudo tee \
/etc/systemd/system/disable-turbo-boost.service
[Unit]
Description=Disable Turbo Boost on Intel CPU
 
[Service]
ExecStart=/bin/sh -c "/usr/bin/echo 1 > \
/sys/devices/system/cpu/intel_pstate/no_turbo"
ExecStop=/bin/sh -c "/usr/bin/echo 0 > \
/sys/devices/system/cpu/intel_pstate/no_turbo"
RemainAfterExit=yes
 
[Install]
WantedBy=sysinit.target
EOF

Reload systemd manager configuration.
sudo systemctl daemon-reload

Test it by running something CPU intensive and watching the current running MHz.

cat /dev/urandom > /dev/null &
lscpu |grep "CPU MHz"

CPU MHz: 3499.859

Now disable turbo boost and check the CPU speed again.
sudo systemctl start disable-turbo-boost
lscpu |grep "CPU MHz"

CPU MHz: 2699.987

Don’t forget to kill the CPU intensive process 🙂

kill %1

If you want to disable turbo boost on boot by default, just enable the service.

sudo systemctl enable disable-turbo-boost
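
As an example of hooking it into other services, a hypothetical unit for a long-running batch job could pull the service in so turbo boost stays off while the job runs (the unit and binary names here are made up):

[Unit]
Description=Long batch job that runs with turbo boost disabled
Wants=disable-turbo-boost.service
After=disable-turbo-boost.service

[Service]
ExecStart=/usr/local/bin/long-batch-job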

February 07, 2017

Fixing webcam flicker in Linux with udev

I recently got a new Dell XPS 13 (9360) laptop for work and it’s running Fedora pretty much perfectly.

However, when I load up Cheese (or some other webcam program) the video from the webcam flickers. Given that I live in Australia, I had to change the powerline frequency from 60Hz to 50Hz to fix it.

sudo dnf install v4l2-ctl
v4l2-ctl --set-ctrl power_line_frequency=1

I wanted this to be permanent each time I turned my machine on, so I created a udev rule to handle that.

cat << EOF | sudo tee /etc/udev/rules.d/50-dell-webcam.rules
SUBSYSTEM=="video4linux", \
SUBSYSTEMS=="usb", \
ATTRS{idVendor}=="0c45", \
ATTRS{idProduct}=="670c", \
PROGRAM="/usr/bin/v4l2-ctl --set-ctrl \
power_line_frequency=1 --device /dev/%k", \
SYMLINK+="dell-webcam"
EOF

It’s easy to test. Just turn flicker back on, reload the rules and watch the flicker in Cheese automatically disappear 🙂

v4l2-ctl --set-ctrl power_line_frequency=0
sudo udevadm control --reload-rules && sudo udevadm trigger

Of course I also tested with a reboot.

It’s easy to do with any webcam, just take a look on the USB bus for the vendor and product IDs. For example, here’s a Logitech C930e (which is probably the nicest webcam I’ve ever used, and also works perfectly under Fedora).

Bus 001 Device 022: ID 046d:0843 Logitech, Inc. Webcam C930e

So you would replace the following in your udev rule:

  • ATTRS{idVendor}=="046d"
  • ATTRS{idProduct}=="0843"
  • SYMLINK+="c930e"

Note that SYMLINK is not necessary, it just creates an extra /dev entry, such as /dev/c930e, which is useful if you have multiple webcams.

OpenStack and the OpenStack Barcelona Summit

Presentation to Linux Users of Victoria, 7th February, 2017

An overview of cloud computing platforms in general, and OpenStack in particular, introduces this presentation. Cloud computing is one of the most significant changes to IT infrastructure and employment in the past decade, with major corporate services (Amazon, Microsoft) gaining particular significance in the late 2000s. In mid-2010, Rackspace Hosting and NASA jointly launched an open-source cloud-software initiative known as OpenStack, with initial code coming from NASA's Nebula project and Rackspace's Cloud Files project; it soon gained prominence as the largest open-source cloud platform. Although a cross-platform service, it was quickly available on various Linux distributions including Debian, Ubuntu, SuSE (2011), and Red Hat (2012).

OpenStack is governed by the OpenStack Foundation, a non-profit corporate entity established in September 2012. Correlating with the release cycle of the product, OpenStack Summits are held every six months for developers, users and managers. The most recent Summit was held in Barcelona in late October 2016, with over 5000 attendees, almost 1000 organisations and companies, and 500 sessions spread out over three days, plus one day of "Upstream University" prior to the main schedule and one day after the main schedule for contributor working parties. The presentation will cover the major announcements of the conference, a brief overview of the major streams, and the direction of OpenStack as the November Sydney Summit approaches.

read more

February 06, 2017

Life in Venezuela, Examining Prophets/Pre-Cogs 5, and More

Wanted to see what life was like in Venezuela given their recent problems: - complicated colonial history with conflict between Spaniards and local indigenous people (led by Native caciques, such as Guaicaipuro and Tamanaco). One of the first to declare independence in Latin America. History of military strongmen and corruption? Political and economic instability over many years...

SE Linux in Debian/Stretch

Debian/Stretch has been frozen. Before the freeze I got almost all the bugs in policy fixed, both bugs reported in the Debian BTS and bugs that I know about. This is going to be one of the best Debian releases for SE Linux ever.

Systemd with SE Linux is working nicely. The support isn’t as good as I would like; there is still work to be done for systemd-nspawn. But it’s close enough that anyone who needs to use it can use audit2allow to generate the extra rules needed. Systemd-nspawn is not used by default and it’s not something that a new Linux user is going to use; I think that expert users who are capable of using such features are capable of doing the extra work to get them going.

In terms of systemd-nspawn and some other rough edges, the issue is the difference between writing policy for a single system vs writing policy that works for everyone. If you write policy for your own system you can allow access for a corner case without a lot of effort. But if I wrote policy to allow access for every corner case then they might add up to a combination that can be exploited. I don’t recommend blindly adding the output of audit2allow to your local policy (be particularly wary of access to shadow_t and write access to etc_t, lib_t, etc). But OTOH if you have a system that’s running in enforcing mode that happens to have one daemon with more access than is ideal then all the other daemons will still be restricted.
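
For reference, a typical local-policy workflow along those lines looks something like the following (the module name is just an example, and you should review the generated rules before loading them):

ausearch -m AVC -ts recent | audit2allow -M local-nspawn
semodule -i local-nspawn.pp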

As for previous releases, I plan to keep releasing updates to policy packages in my own apt repository. I’m also considering releasing policy source updates that can be applied on existing Stretch systems. So if you want to run the official Debian packages but need updates that came after Stretch then you can get them. Suggestions on how to distribute such policy source are welcome.

Please enjoy SE Linux on Stretch. It’s too late for most bug reports regarding Stretch as most of them won’t be sufficiently important to justify a Stretch update. The vast majority of SE Linux policy bugs are issues of denying wanted access not permitting unwanted access (so not a security issue) and can be easily fixed by local configuration, so it’s really difficult to make a case for an update to Stable. But feel free to send bug reports for Buster (Stretch+1).

February 05, 2017

IPv6 and OpenVPN on Linode Debian/Ubuntu VPS

Here is how I managed to extend my OpenVPN setup on my Linode VPS to include IPv6 traffic. This ensures that clients can route all of their traffic through the VPN and avoid leaking IPv6 traffic, for example. It also enables clients on IPv4-only networks to receive a routable IPv6 address and connect to IPv6-only servers (i.e. running your own IPv6 broker).

Request an additional IPv6 block

The first thing you need to do is get a new IPv6 address block (or "pool" as Linode calls it) from which you can allocate a single address to each VPN client that connects to the server.

If you are using a Linode VPS, there are instructions on how to request a new IPv6 pool. Note that you need to get an address block between /64 and /112. A /116 like Linode offers won't work in OpenVPN. Thankfully, Linode is happy to allocate you an extra /64 for free.

Setup the new IPv6 address

If your server only has a single IPv4 address and a single IPv6 address, then a simple DHCP-backed network configuration will work fine. To add the second IPv6 block, on the other hand, I had to change my network configuration (/etc/network/interfaces) to this:

auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet dhcp
    pre-up iptables-restore /etc/network/iptables.up.rules

iface eth0 inet6 static
    address 2600:3c01::xxxx:xxxx:xxxx:939f/64
    gateway fe80::1
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules

iface tun0 inet6 static
    address 2600:3c01:xxxx:xxxx::/64
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules

where 2600:3c01::xxxx:xxxx:xxxx:939f/64 (bound to eth0) is your main IPv6 address and 2600:3c01:xxxx:xxxx::/64 (bound to tun0) is the new block you requested.

Once you've setup the new IPv6 block, test it from another IPv6-enabled host using:

ping6 2600:3c01:xxxx:xxxx::1

OpenVPN configuration

The only thing I had to change in my OpenVPN configuration (/etc/openvpn/server.conf) was to change:

proto udp

to:

proto udp6

in order to make the VPN server available over both IPv4 and IPv6, and to add the following lines:

server-ipv6 2600:3c01:xxxx:xxxx::/64
push "route-ipv6 2000::/3"

to bind to the right V6 address and to tell clients to tunnel all V6 Internet traffic through the VPN.

In addition to updating the OpenVPN config, you will need to add the following line to /etc/sysctl.d/openvpn.conf:

net.ipv6.conf.all.forwarding=1

and the following to your firewall (e.g. /etc/network/ip6tables.up.rules):

# openvpn
-A INPUT -p udp --dport 1194 -j ACCEPT
-A FORWARD -m state --state NEW -i tun0 -o eth0 -s 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state NEW -i eth0 -o tun0 -d 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

in order to ensure that IPv6 packets are forwarded from the eth0 network interface to tun0 on the VPN server.

With all of this done, apply the settings by running:

sysctl -p /etc/sysctl.d/openvpn.conf
ip6tables-apply
systemctl restart openvpn.service

Testing the connection

Now connect to the VPN using your desktop client and check that the default IPv6 route is set correctly using ip -6 route.

Then you can ping the server's new IP address:

ping6 2600:3c01:xxxx:xxxx::1

and from the server, you can ping the client's IP (which you can see in the network settings):

ping6 2600:3c01:xxxx:xxxx::1002

Once both ends of the tunnel can talk to each other, you can try pinging an IPv6-only server from your client:

ping6 ipv6.google.com

and then pinging your client from an IPv6-enabled host somewhere:

ping6 2600:3c01:xxxx:xxxx::1002

If that works, other online tests should also work.

LUV Main February 2017 Meeting: OpenStack Summit/Data Structures and Algorithms

Feb 7 2017 18:30
Feb 7 2017 18:30
Location: 
6th Floor, Trinity College (EPA Victoria building), 200 Victoria St., Carlton

Tuesday, February 7, 2017
6:30 PM to 8:30 PM
6th Floor, Trinity College (EPA Victoria building)
200 Victoria St., Carlton

Speakers:

• Lev Lafayette, OpenStack and the OpenStack Barcelona Summit
• Jacinta Richardson, Data Structures and Algorithms in the 21st Century

200 Victoria St. Carlton VIC 3053 (the EPA building)

Late arrivals needing access to the building and the sixth floor please call 0490 049 589.
 

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

February 7, 2017 - 18:30

read more

Astrophotography with Mac OS X

It's been a good three years now since I swapped my HP laptop for a Macbook Pro. In the mean time, I've started doing a bit more astrophotography and of course the change of operating system has affected the tools I use to obtain and process photos.

Amateur astronomers have traditionally mostly used Windows, so there are a lot of Windows tools, both freeware and payware, to help. I used to run the freeware ones in Wine on Ubuntu with varying levels of success.

When I first got the Mac, I had a lot of trouble getting Wine to run reliably and eventually ended up doing my alignment and processing manually in The Gimp. However, that's time consuming and rather fiddly and limited to stacking static exposures.

However, I've recently started finding quite a bit of Mac OS based astrophotography software. I don't know if that means it's all fairly new or whether my Google skills failed me over the past years :-)

Software

I thought I'd document what I use, in the hope that I can save others who want to use their Macs some searching.

Some are Windows software, but run OK on Mac OS X. You can turn them into normal double click applications using a utility called WineSkin Winery.

Obtaining data from video camera:

Format-converting video data:

Processing video data:

  • AutoStakkert! (Windows + Wine, free for non-commercial use, donationware)

Obtaining data from DSLR:

Processing and stacking DSLR files and post-processing video stacks:

Post-processing:

Telescope guiding:

  • AstroGuider (Mac OS X, payware, free trial)
  • PHD2 (Mac OS X, free, open source)

Hardware

M42 - Orion Nebula

A few weeks ago I bought a ZWO ASI120MC-S astro camera, as that was on sale and listed by Nebulosity as supported by OSX. Until then I'd messed around with a hacked up Logitech webcam, which seemed to only be supported by the Photo Booth app.

I've not done any guiding yet (I need a way to mount the guide scope on the main scope - d'oh) but the camera works well with Nebulosity 4 and oaCapture. I'm looking forward to being able to grab Jupiter with it in a month or so and Saturn and Mars later this year.

The image to the right is a stack of 24x5 second unguided exposures of the trapezium in M42. Not too bad for a quick test on a half-moon night.

Settings

I've been fiddling with Nebulosity a bit, to try and get it to stack the RAW images from my Nikon D750 as colour. I found a conversion matrix that was supposed to be decent, but as it turns out that made all images far too blue.

The current matrix I use is listed below. If you find a better one, please let me know.

  R G B
R 0.50 0.00 1.00
G 0.00 1.00 0.00
B 1.00 0.00 0.50

February 04, 2017

Career Opportunities

Had a friendly meeting a few days ago with a young person debating their future career path. They had a very good IT-orientated resume (give this person a job, seriously) but were debating whether they should go down the path of a Business Analyst. It was fairly clear that they lived and breathed IT, whereas the BA choice was one of some indifference. Conversely, there was a year when VPAC took on a group of summer school graduates and it quickly became obvious that none of them had any passion for IT.

read more

CMA Equalisation of FSK

We’ve just released a new experimental mode for Digital Voice called FreeDV 800XA. This uses the Codec 2 700C mode, 100 bit/s for synchronisation, and a 4FSK modem, actually the same modem that has been so successful for images from High Altitude Balloons.

FSK has the advantage of being a constant amplitude waveform, so efficient class C amplifiers can be used. However as it currently stands, 800XA has no real protection for the multipath common on HF channels, for example symbols that have an echo delayed by a few ms.

So I decided to start looking at equalisers. Some Googling suggested the Constant Modulus Algorithm (CMA) Equaliser might be a suitable choice for FSK, and turned up some sample code on DSP stack exchange.

I had a bit of trouble getting the algorithm to work for bandpass FSK signals, so posted this question on CMA equalisation for FSK. I received some kind help, and eventually made the equaliser work on a simulated HF channel. Here is the Octave simulation cma.m

How it works

The equaliser attempts to correct for the channel using the received signal, which is corrupted by noise.

There is a “gotcha” in using a FIR filter to equalise a channel response. Consider a channel H(z) with a simple 3 sample impulse response h(n). Now we could equalise this with the exact inverse 1/H(z). Here is a plot of our example channel frequency response and the ideal equaliser which is exactly the inverse:

Now here is a plot of the impulse responses of the channel h(n), and equaliser h'(n):

The ideal equaliser response h'(n) is much longer than the 3 samples of the channel impulse response h(n). The CMA algorithm requires our equaliser to be a FIR filter. Counter-intuitively, we need to use an FIR equaliser with a number of taps significantly larger than the expected channel impulse response we are trying to equalise.

One explanation for this – the channel response can be considered to be a Finite Impulse response (FIR) filter H(z). The exact inverse 1/H(z), when expressed in the time domain, is an Infinite Impulse Response (IIR) filter, which have, you know, an infinitely long impulse response!
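
To make that a little more concrete, here is a rough Octave sketch of the sample-by-sample CMA update. It assumes a complex signal with roughly constant modulus, and the tap count and step size are illustrative, not the values used in cma.m:

% Minimal CMA equaliser sketch
function [y, w] = cma_equalise(rx, ntaps, mu)
  R2 = mean(abs(rx).^4) / mean(abs(rx).^2);  % Godard dispersion constant
  w = zeros(ntaps, 1); w(ceil(ntaps/2)) = 1; % centre-spike initialisation
  y = zeros(size(rx));
  for n = ntaps:length(rx)
    x = rx(n:-1:n-ntaps+1);                  % most recent ntaps samples
    x = x(:);                                % force a column vector
    y(n) = w' * x;                           % equaliser output
    e = (abs(y(n))^2 - R2) * conj(y(n));     % CMA error term
    w = w - mu * e * x;                      % gradient descent step
  end
end

Something like [y, w] = cma_equalise(rx, 20, 1e-3) is then run over the received samples, with the dispersion |y|^2 - R2 shrinking as the taps converge.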

Simulation

The figures below show the CMA equaliser doing its thing in a multipath channel with AWGN noise. In Figure 1 the error is reduced over time, and the lower plot shows the combined channel-equaliser impulse response. If the equaliser was perfect the combined channel-equaliser response would be 1.

Figure 2 below shows the CMA going to work on a FSK signal. The top subplot is the transmitted FSK signal, you can see the two different frequencies in the waveform. The middle plot shows the received signal, after it has been messed up by the multipath channel. It’s clear that the tone amplitudes are different. Looking carefully at the point where the tones transition (e.g. around sample 25 and 65) there is intersymbol interference due to multipath echoes, messing up the start of each FSK symbol.

However in the bottom subplot the equaliser has worked its magic and the waveform is looking quite nice. The tone levels are nearly equal and much of the ISI has been removed. Yayyyyyy.

Figure 3 shows the magnitude frequency response at several stages in the simulation. The top subplot is the channel response. It’s a comb filter, typical of multipath channels. The middle subplot is the equaliser response. Ideally, this should be the exact inverse of the channel. It’s pretty close at the low end but seems to lose its way at very low and high frequencies. The lower plot is the combined response, which is close to 0dB at the low frequencies. Cool.

Figure 4 is the transmit spectrum of the modem signal (top), and the spectrum after the channel has mangled it (lower). Note one tone is now lower than the other. Also note that the modem signal only has energy in the low-mid range of the spectrum. This might explain why the equaliser does a good job in that region of the spectrum – it’s where we have energy to drive the adaption.

Problems for HF Digital Voice

Unfortunately the CMA equaliser only works well at high SNRs, and takes seconds to converge. I am interested in low SNR (around 0dB in a 3000 Hz noise bandwidth) and it’s Push To Talk (PTT) radio, so we need fast initial training, around 100ms. Then it must follow the time varying HF channel, continually retraining on the fly.

For further work I really should measure BER versus Eb/No for a variety of SNRs and convergence times, and measure what BER improvement we are buying with equalisation. BER is King, and much easier than squinting at time domain waveforms.

If the CMA cost function was used with known information (like pilot symbols or the Unique Word we have in 800XA) it might be able to work faster. This would involve deconvolution on the fly, rather than using iterative or adaptive techniques.

February 03, 2017

Trump Background, Random Stuff, and More

Given his recent inauguration, I thought it would be interesting to take a look at the background of the new US president, Donald Trump: https://www.bloomberg.com/politics/articles/2017-01-21/merkel-said-to-scour-trump-archive-for-clues-on-how-to-read-him https://www.rt.com/viral/374666-twitter-gifts-trump-followers/?utm_source=rss&utm_medium=rss&utm_campaign=RSS - well known background,

Nova vendordata deployment, an excessively detailed guide

Nova presents configuration information to instances it starts via a mechanism called metadata. This metadata is made available via either a configdrive, or the metadata service. These mechanisms are widely used via helpers such as cloud-init to specify things like the root password the instance should use. There are three separate groups of people who need to be able to specify metadata for an instance.

User provided data

The user who booted the instance can pass metadata to the instance in several ways. For authentication keypairs, the keypairs functionality of the Nova APIs can be used to upload a key and then specify that key during the Nova boot API request. For less structured data, a small opaque blob of data may be passed via the user-data feature of the Nova API. Examples of such unstructured data would be the puppet role that the instance should use, or the HTTP address of a server to fetch post-boot configuration information from.

Nova provided data

Nova itself needs to pass information to the instance via its internal implementation of the metadata system. Such information includes the network configuration for the instance, as well as the requested hostname for the instance. This happens by default and requires no configuration by the user or deployer.

Deployer provided data

There is however a third type of data. It is possible that the deployer of OpenStack needs to pass data to an instance. It is also possible that this data is not known to the user starting the instance. An example might be a cryptographic token to be used to register the instance with Active Directory post boot -- the user starting the instance should not have access to Active Directory to create this token, but the Nova deployment might have permissions to generate the token on the user's behalf.

Nova supports a mechanism to add "vendordata" to the metadata handed to instances. This is done by loading named modules, which must appear in the nova source code. We provide two such modules:

  • StaticJSON: a module which can include the contents of a static JSON file loaded from disk. This can be used for things which don't change between instances, such as the location of the corporate puppet server.
  • DynamicJSON: a module which will make a request to an external REST service to determine what metadata to add to an instance. This is how we recommend you generate things like Active Directory tokens which change per instance.


Tell me more about DynamicJSON

Having said all that, this post is about how to configure the DynamicJSON plugin, as I think it's the most interesting bit here.

To use DynamicJSON, you configure it like this:

  • Add "DynamicJSON" to the vendordata_providers configuration option. This can also include "StaticJSON" if you'd like.
  • Specify the REST services to be contacted to generate metadata in the vendordata_dynamic_targets configuration option. There can be more than one of these, but note that they will be queried once per metadata request from the instance, which can mean a fair bit of traffic depending on your configuration and the configuration of the instance.


The format for an entry in vendordata_dynamic_targets is like this:

<name>@<url>


Where name is a short string not including the '@' character, and where the URL can include a port number if so required. An example would be:

testing@http://127.0.0.1:125


Metadata fetched from this target will appear in the metadata service at a new file called vendordata2.json, with a path (either in the metadata service URL or in the configdrive) like this:

openstack/2016-10-06/vendor_data2.json


For each dynamic target, there will be an entry in the JSON file named after that target. For example:

        {
            "testing": {
                "value1": 1,
                "value2": 2,
                "value3": "three"
            }
        }


Do not specify the same name more than once. If you do, we will ignore subsequent uses of a previously used name.

The following data is passed to your REST service as a JSON encoded POST:

  • project-id: the UUID of the project that owns the instance
  • instance-id: the UUID of the instance
  • image-id: the UUID of the image used to boot this instance
  • user-data: as specified by the user at boot time
  • hostname: the hostname of the instance
  • metadata: as specified by the user at boot time


Deployment considerations

Nova provides authentication to external metadata services in order to provide some level of certainty that the request came from nova. This is done by providing a service token with the request -- you can then just deploy your metadata service with the keystone authentication WSGI middleware. This is configured using the keystone authentication parameters in the vendordata_dynamic_auth configuration group.

This behavior is optional, however: if you do not configure a service user, nova will not authenticate with the external metadata service.

Deploying the sample vendordata service

There is a sample vendordata service that is meant to model what a deployer would use for their custom metadata at http://github.com/mikalstill/vendordata. Deploying that service is relatively simple:

$ git clone http://github.com/mikalstill/vendordata
$ cd vendordata
$ apt-get install virtualenvwrapper
$ . /etc/bash_completion.d/virtualenvwrapper (only needed if virtualenvwrapper wasn't already installed)
$ mkvirtualenv vendordata
$ pip install -r requirements.txt


We need to configure the keystone WSGI middleware to authenticate against the right keystone service. There is a sample configuration file in git, but it's configured to work with an openstack-ansible all-in-one install that I set up for my private testing, which probably isn't what you're using:

[keystone_authtoken]
insecure = False
auth_plugin = password
auth_url = http://172.29.236.100:35357
auth_uri = http://172.29.236.100:5000
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 5dff06ac0c43685de108cc799300ba36dfaf29e4
region_name = RegionOne


Per the README file in the vendordata sample repository, you can test the vendordata server in a stand alone manner by generating a token manually from keystone:

$ curl -d @credentials.json -H "Content-Type: application/json" http://172.29.236.100:5000/v2.0/tokens > token.json
$ token=`cat token.json | python -c "import sys, json; print json.loads(sys.stdin.read())['access']['token']['id'];"`


We then include that token in a test request to the vendordata service:

curl -H "X-Auth-Token: $token" http://127.0.0.1:8888/


Configuring nova to use the external metadata service

Now we're ready to wire up the sample metadata service with nova. You do that by adding something like this to the nova.conf configuration file:

[api]
vendordata_providers=DynamicJSON
vendordata_dynamic_targets=testing@http://metadatathingie.example.com:8888


Where metadatathingie.example.com is the IP address or hostname of the server running the external metadata service. Now if we boot an instance like this:

nova boot --image 2f6e96ca-9f58-4832-9136-21ed6c1e3b1f --flavor tempest1 --nic net-name=public --config-drive true foo


We end up with a config drive which contains the information our external metadata service returned (in the example case, handy Carrie Fisher quotes):

# cat openstack/latest/vendor_data2.json | python -m json.tool
{
    "testing": {
        "carrie_says": "I really love the internet. They say chat-rooms are the trailer park of the internet but I find it amazing."
    }
}


Tags for this post: openstack nova metadata vendordata configdrive cloud-init
Related posts: One week of Nova Kilo specifications; Specs for Kilo; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic

Comment

February 01, 2017

LUV Beginners February Meeting: Static websites with Jekyll, Hugo and Forestry

Feb 25 2017 12:30
Feb 25 2017 16:30
Feb 25 2017 12:30
Feb 25 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

PLEASE NOTE CHANGE OF DATE THIS MONTH ONLY

Static websites with Jekyll, Hugo and Forestry

Andrew Pam will demonstrate a new way to make websites complete with content management that doesn't require software running on a web server.  This technique enhances both performance and security.  More information at:

 

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

February 25, 2017 - 12:30

read more

LUV Main February 2017 Meeting: OpenStack Barcelona Summit / Data Structures and Algorithms

Feb 7 2017 18:30
Feb 7 2017 20:30
Feb 7 2017 18:30
Feb 7 2017 20:30
Location: 
6th Floor, 200 Victoria St. Carlton VIC 3053

Speakers:

• Lev Lafayette, OpenStack and the OpenStack Barcelona Summit
• Jacinta Richardson, Data Structures and Algorithms in the 21st Century

200 Victoria St. Carlton VIC 3053 (the EPA building)

Late arrivals needing access to the building and the sixth floor please call 0490 049 589.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

February 7, 2017 - 18:30

read more

January 31, 2017

Giving serial devices meaningful names

This is a hack I've been using for ages, but I thought it deserved a write up.

I have USB serial devices. Lots of them. I use them for home automation things, as well as for talking to devices such as the console ports on switches and so forth. For the permanently installed serial devices one of the challenges is having them show up in predictable places so that the scripts which know how to drive each device are talking to the right port.

For the trivial case, this is pretty easy with udev:

$  cat /etc/udev/rules.d/60-local.rules 
KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
    ATTRS{serial}=="A8003Ye7", \
    SYMLINK+="radish"


This says for any USB serial device that is discovered (either inserted post boot, or at boot), if the USB vendor and product ID match the relevant values, to symlink the device to "/dev/radish".

You find out the vendor and product ID from lsusb like this:

$ lsusb
Bus 003 Device 003: ID 0624:0201 Avocent Corp. 
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 007 Device 002: ID 0665:5161 Cypress Semiconductor USB to Serial
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 002: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub


You can play with inserting and removing the device to determine which of these entries is the device you care about.
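
lsusb won't show you the serial number that the ATTRS{serial} match above relies on, but udevadm will. Assuming the device currently shows up as /dev/ttyUSB0, something like this pulls out the relevant attributes:

$ udevadm info -a -n /dev/ttyUSB0 | grep -E 'idVendor|idProduct|\{serial\}'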

So that's great, until you have more than one device with the same USB serial vendor and product id. Then things are a bit more... difficult.

It turns out that you can have udev execute a command on device insert to help you determine what symlink to create. So for example, I have this entry in the rules on one of my machines:

KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", \
    PROGRAM="/usr/bin/usbtest /dev/%k", \
    SYMLINK+="%c"


This results in /usr/bin/usbtest being run with the path of the device file on its command line for every device detection (of a matching device). The stdout of that program is then used as the name of a symlink in /dev.

So, that script attempts to talk to the device and determine what it is -- in my case either a currentcost or a solar panel inverter.
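
The usbtest script itself isn't shown here, but the general shape of such a probe is straightforward. A rough sketch only - this is not the actual script, and the banner matching below is made up for illustration:

#!/bin/bash
# Hypothetical probe: given a serial device path, print the symlink
# name that udev should create for it.
port="$1"

# Configure the port and grab a little output from whatever is attached.
stty -F "$port" 9600 raw -echo 2>/dev/null
banner=$(timeout 2 head -c 64 "$port" 2>/dev/null)

case "$banner" in
  *msg*) echo "currentcost" ;;    # CurrentCost meters chatter XML
  *)     echo "unknown-usb" ;;    # fall back to something recognisable
esac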

Tags for this post: linux udev serial usb usbserial
Related posts: SMART and USB storage; Video4Linux, ov511, and RGB24 palettes; ov511 hackery; Ubuntu, Dapper Drake, and that difficult Dell e310; Roomba serial cables; Via M10000, video, and a Belkin wireless USB thing

Comment

NAMD on NVLink

NAMD is a molecular dynamics program that can use GPU acceleration to speed up its calculations. Recent OpenPOWER machines like the IBM Power Systems S822LC for High Performance Computing (Minsky) come with a new interconnect for GPUs called NVLink, which offers extremely high bandwidth to a number of very powerful Nvidia Pascal P100 GPUs. So they're ideal machines for this sort of workload.

Here's how to set up NAMD 2.12 on your Minsky, and how to debug some common issues. We've targeted this script for CentOS, but we've successfully compiled NAMD on Ubuntu as well.

Prerequisites

GPU Drivers and CUDA

Firstly, you'll need CUDA and the NVidia drivers.

You can install CUDA by following the instructions on NVidia's CUDA Downloads page.

yum install epel-release
yum install dkms
# download the rpm from the NVidia website
rpm -i cuda-repo-rhel7-8-0-local-ga2-8.0.54-1.ppc64le.rpm
yum clean expire-cache
yum install cuda
# this will take a while...

Then, we set up a profile file to automatically load CUDA into our path:

cat > /etc/profile.d/cuda_path.sh <<'EOF'
# From http://developer.download.nvidia.com/compute/cuda/8.0/secure/prod/docs/sidebar/CUDA_Quick_Start_Guide.pdf - 4.4.2.1
export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
EOF

Now, open a new terminal session and check to see if it works:

cuda-install-samples-8.0.sh ~
cd ~/NVIDIA_CUDA-8.0_Samples/1_Utilities/bandwidthTest
make && ./bandwidthTest

If you see a figure of ~32GB/s, that means NVLink is working as expected. A figure of ~7-8GB/s indicates that only the PCIe path is in use, and more debugging is required.
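
If you're stuck in the ~7-8GB/s case, one quick sanity check (assuming nvidia-smi was installed with the driver) is to look at how the GPUs are connected:

nvidia-smi topo -m
# NVLink connections appear as NV1/NV2 entries in the matrix;
# PIX/PHB entries mean the path is plain PCIe.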

Compilers

You need a c++ compiler:

yum install gcc-c++

Building NAMD

Once CUDA and the compilers are installed, building NAMD is reasonably straightforward. The one hitch is that because we're using CUDA 8.0, and the NAMD build scripts assume CUDA 7.5, we need to supply an updated Linux-POWER.cuda file. (We also enable code generation for the Pascal in this file.)

We've documented the entire process as a script which you can download. We'd recommend executing the commands one by one, but if you're brave you can run the script directly.

The script will fetch NAMD 2.12 and build it for you, but won't install it. It will look for the CUDA override file in the directory you are running the script from, and will automatically move it into the correct place so it is picked up by the build system.

The script compiles for a single multicore machine setup, rather than for a cluster. However, it should be a good start for an Ethernet or Infiniband setup.

If you're doing things by hand, you may see some errors during the compilation of charm - as long as charm++ builds successfully at the end, you should be OK.

Testing NAMD

We have been testing NAMD using the STMV files available from the NAMD website:

cd NAMD_2.12_Source/Linux-POWER-g++
wget http://www.ks.uiuc.edu/Research/namd/utilities/stmv.tar.gz
tar -xf stmv.tar.gz
sudo ./charmrun +p80 ./namd2 +pemap 0-159:2 +idlepoll +commthread stmv/stmv.namd

This binds a namd worker thread to every second hardware thread. This is because hardware threads share resources, so using every hardware thread costs overhead and doesn't give us access to any more physical resources.

You should see messages about finding and using GPUs:

Pe 0 physical rank 0 binding to CUDA device 0 on <hostname>: 'Graphics Device'  Mem: 4042MB  Rev: 6.0

This should be significantly faster than on non-NVLink machines - we saw a gain of about 2x in speed going from a machine with Nvidia K80s to a Minsky. If things aren't faster for you, let us know!

Downloads

Other notes

NAMD requires some libraries, some of which are supplied as binary downloads on the NAMD website. Make sure you get the ppc64le versions, not the ppc64 versions, otherwise you'll get errors like:

/bin/ld: failed to merge target specific data of file .rootdir/tcl/lib/libtcl8.5.a(regfree.o)
/bin/ld: .rootdir/tcl/lib/libtcl8.5.a(regerror.o): compiled for a big endian system and target is little endian
/bin/ld: failed to merge target specific data of file .rootdir/tcl/lib/libtcl8.5.a(regerror.o)
/bin/ld: .rootdir/tcl/lib/libtcl8.5.a(tclAlloc.o): compiled for a big endian system and target is little endian

The script we supply should get these right automatically.

linux.conf.au 2017 review

I recently attended LCA 2017, where I gave a talk at the Linux Kernel miniconf (run by fellow sthbrx blogger Andrew Donnellan!) and a talk at the main conference.

I received some really interesting feedback so I've taken the opportunity to write some of it down to complement the talk videos and slides that are online. (And to remind me to follow up on it!)

Miniconf talk: Sparse Warnings

My kernel miniconf talk was on sparse warnings (pdf slides, 23m video).

The abstract read (in part):

sparse is a semantic parser for C, and is one of the static analysis tools available to kernel devs.

Sparse is a powerful tool with good integration into the kernel build system. However, we suffer from warning overload - there are too many sparse warnings to spot the serious issues amongst the trivial. This makes it difficult to use, both for developers and maintainers.

Happily, I received some feedback that suggests it's not all doom and gloom like I had thought!

  • Dave Chinner told me that the xfs team uses sparse regularly to make sure that the file system is endian-safe. This is good news - we really would like that to be endian-safe! (A sketch of how those endian checks are invoked follows this list.)

  • Paul McKenney let me know that the 0day bot does do some sparse checking - it would just seem that it's not done on PowerPC.
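
For anyone who hasn't driven sparse from the kernel build system before, the endian checks Dave mentioned are run something like this (a minimal sketch; on older kernels the endian flag has to be passed explicitly, newer kernels enable it by default):

# C=1 runs sparse only over files being recompiled, C=2 over everything
make C=2 CF="-D__CHECK_ENDIAN__" fs/xfs/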

Main talk: 400,000 Ephemeral Containers

My main talk was entitled "400,000 Ephemeral Containers: testing entire ecosystems with Docker". You can read the abstract for full details, but it boils down to:

What if you want to test how all the packages in a given ecosystem work in a given situation?

My main example was testing how many of the Ruby packages successfully install on Power, but I also talk about other languages and other cool tests you could run.

The 44m video is online. I haven't put the slides up yet but they should be available on GitHub soonish.

Unlike with the kernel talk, I didn't catch the names of most of the people with feedback.

Docker memory issues

One of the questions I received during the talk was about running into memory issues in Docker. I attempted to answer that during the Q&A. The person who asked the question then had a chat with me afterwards, and it turns out I had completely misunderstood the question. I thought it was about memory usage of running containers in parallel. It was actually about memory usage in the docker daemon when running lots of containers in serial. Apparently the docker daemon doesn't free memory during the life of the process, and the question was whether or not I had observed that during my runs.

I didn't have a good answer for this at the time other than "it worked for me", so I have gone back and looked at the docker daemon memory usage.

After a full Ruby run, the daemon is using about 13.9G of virtual memory, and 1.975G of resident memory. If I restart it, the memory usage drops to 1.6G of virtual and 43M of resident memory. So it would appear that the person asking the question was right, and I'm just not seeing it have an effect.
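
For anyone wanting to check this on their own machine, something like the following shows the daemon's memory usage (assuming a recent Docker where the daemon process is called dockerd; older releases use a different process name):

# virtual (VSZ) and resident (RSS) memory of the docker daemon, in kB
ps -o vsz=,rss= -p "$(pgrep -o -x dockerd)"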

Other interesting feedback

  • Someone was quite interested in testing on Sparc, once they got their Go runtime nailed down.

  • A Rackspacer was quite interested in Python testing for OpenStack - this has some intricacies around Py2/Py3, but we had an interesting discussion around just testing to see if packages that claim Py3 support provide Py3 support.

  • A large jobs site mentioned using this technique to help them migrate their dependencies between versions of Go.

  • I was 'gently encouraged' to try to do better with how long the process takes to run - if for no other reason than to avoid burning more coal. This is a fair point. I did not explain very well what I meant with diminishing returns in the talk: there's lots you could do to make the process faster, it just comes at the cost of the simplicity that I really wanted when I first started the project. I am working (on and off) on better ways to deal with this by considering the dependency graph.

January 30, 2017

Linux BASH CLI RSS Reader, Explaining Prophets 4, and More

- built my own RSS feed reader yesterday. It actually took a lot less time than going out to search for one that suited my needs. It's based on someone else's code (credit given in the code, but since that code was so buggy it wouldn't work, I guess it's mine now?) https://sites.google.com/site/dtbnguyen/rssread-1.11.tar.gz https://sites.google.com/site/dtbnguyen/ - code to extract from

My Personal Travel Ban

I plan to avoid any and all travel to the USA for the foreseeable future due to the complete mess unfolding there with Trump’s executive orders banning immigration from some Muslim-majority countries, related protests, illegal detainment, etc. etc. (the list goes on, and I expect it to get longer).

It’s not that I’m from one of the blacklist countries, and I’m not a Muslim. I’m even white. But I no longer consider travel to the USA safe (especially bearing in mind my ridiculous beard and long hair), and even if I did, I’d want to stand in solidarity with the people who are currently being screwed. The notion of banning entire groups of people based on a single shared trait (in this case, probable adherence to a particular religion) is abhorrent; it demonizes our fellow humans, divides us and builds walls – whether metaphorical or physical – between our various communities. The fact that this immigration ban will impact refugees and asylum seekers just makes matters worse. I am deeply ashamed by Australia’s record on that front too, and concerned that our government will not do much better.

So I won’t be putting in any talks for Cephalocon - which is a damn shame, as I’m working on Ceph – or for any other US-based tech conference unless and until the situation over there changes.

I realise this post may not make much difference in the grander scheme of things, but one more voice is one more voice.

A pythonic example of recording metrics about ephemeral scripts with prometheus

In my previous post we talked about how to record information from short lived scripts (I call them ephemeral scripts by the way) with prometheus. The example there was a script which checked the SMART status of each of the disks in a machine and reported that via pushgateway. I now want to work through a slightly more complicated example.

I think you hit the limits of reporting simple values in shell scripts via curl requests fairly quickly. For example with the SMART monitoring script, SMART is capable of returning a whole heap of metrics about the performance of a disk, but we boiled that down to a single "health" value. This is largely because writing a parser for all the other values that smartctl returns would be inefficient and fragile in shell. So for this post, we're going to work through an example of how to report a variety of values from a python script. Those values could be the parsed output of smartctl, but to mix things up a bit, I'm going to use a different script I wrote recently.

This new script uses the Weather Underground API to lookup weather stations near my house, and then generate graphics of the weather forecast. These graphics are displayed on the various Cisco SIP phones I already had around the house. The forecasts look like this:



The script to generate these weather forecasts is relatively simple python, and you can see the source code on github.

My cunning plan here is to use prometheus' time series database and alert capabilities to drive home automation around my house. The first step for that is to start gathering some simple facts about the home environment so that we can do trending and decision making on them. The code to do this isn't all that complicated. First off, we need to add the python prometheus client to our python environment, which is hopefully a venv:

pip install prometheus_client
pip install six


That second dependency isn't a strict requirement for prometheus, but the script I'm working on needs it (because it needs to work out what's a text value, and python 3 is bonkers).

Next we import the prometheus client in our code and setup the counter registry. At the same time I record when the script was run:

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
Gauge('job_last_success_unixtime', 'Last time the weather job ran',
      registry=registry).set_to_current_time()


And then we just add gauges for any values we want to send to the pushgateway:

Gauge('_'.join(field), '', registry=registry).set(value)


Finally, the values don't exist in the pushgateway until we actually push them there, which we do like this:

push_to_gateway('localhost:9091', job='weather', registry=registry)


You can see the entire patch I wrote to add prometheus support on github if you're interested in an example with more context.

Now we can have pretty graphs of temperature and stuff!

Tags for this post: prometheus monitoring python pushgateway
Related posts: Recording performance information from short lived processes with prometheus; Basic prometheus setup; Implementing SCP with paramiko; Mona Lisa Overdrive; Packet capture in python; mbot: new hotness in Google Talk bots

Comment

Creating a home music server using mpd

I recently setup a music server on my home server using the Music Player Daemon, a cross-platform free software project which has been around for a long time.

Basic setup

Start by installing the server and the client package:

apt install mpd mpc

then open /etc/mpd.conf and set these:

music_directory    "/path/to/music/"
bind_to_address    "192.168.1.2"
bind_to_address    "/run/mpd/socket"
zeroconf_enabled   "yes"
password           "Password1"

before replacing the alsa output:

audio_output {
   type    "alsa"
   name    "My ALSA Device"
}

with a pulseaudio one:

audio_output {
   type    "pulse"
   name    "Pulseaudio Output"
}

In order for the automatic detection (zeroconf) of your music server to work, you need to prevent systemd from creating the network socket:

systemctl stop mpd.service
systemctl stop mpd.socket
systemctl disable mpd.socket

otherwise you'll see this in /var/log/mpd/mpd.log:

zeroconf: No global port, disabling zeroconf

Once all of that is in place, start the mpd daemon:

systemctl start mpd.service

and create an index of your music files:

MPD_HOST=Password1@/run/mpd/socket mpc update

while watching the logs to notice any files that the mpd user doesn't have access to:

tail -f /var/log/mpd/mpd.log

Enhancements

I also added the following in /etc/logcheck/ignore.server.d/local-mpd to silence unnecessary log messages in logcheck emails:

^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Started Music Player Daemon.$
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Stopped Music Player Daemon.$
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Stopping Music Player Daemon...$

and created a cronjob in /etc/cron.d/mpd-francois to update the database daily and stop the music automatically in the evening:

# Refresh DB once a day
5 1 * * *  mpd  MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet update
# Think of the neighbours
0 22 * * 0-4  mpd  MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet stop
0 23 * * 5-6  mpd  MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet stop

Clients

To let anybody on the local network connect, I opened port 6600 on the firewall (/etc/network/iptables.up.rules since I'm using Debian's iptables-apply):

-A INPUT -s 192.168.1.0/24 -p tcp --dport 6600 -j ACCEPT
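
A quick way to confirm that remote access works from another machine on the LAN, using the address and password from the configuration above:

MPD_HOST=Password1@192.168.1.2 mpc status
MPD_HOST=Password1@192.168.1.2 mpc play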

Then I looked at the long list of clients on the mpd wiki.

Desktop

The official website suggests two clients which are available in Debian and Ubuntu:

  • gmpc (the GNOME Music Player Client)
  • Ario

Both of them work well, but haven't had a release since 2011, even though there is some activity in 2013 and 2015 in their respective source control repositories.

Ario has a simpler user interface but gmpc has cover art download working out of the box, which is why I might stick with it.

In both cases, it is possible to configure a polipo proxy so that any external resources are fetched via Tor.

Android

On Android, I got these two to work:

I picked M.A.L.P. since it includes a nice widget for the homescreen.

iOS

On iOS, these are the most promising clients I found:

since MPoD and MPaD don't appear to be available on the AppStore anymore.

Extracting Early Boot Messages in QEMU

Be me, you're a kernel hacker, you make some changes to your kernel, you boot test it in QEMU, and it fails to boot. Even worse is the fact that it just hangs without any failure message, no stack trace, no nothing. "Now what?" you think to yourself.

You probably do the first thing you learnt in debugging101 and add abundant print statements all over the place to try and make some sense of what's happening and where it is that you're actually crashing. So you do this, you recompile your kernel, boot it in QEMU and lo and behold, nothing... What happened? You added all these shiny new print statements, where did the output go? The kernel still failed to boot (obviously), but where you were hoping to get some clue to go on you were again left with an empty screen. "Maybe I didn't print early enough" or "maybe I got the code paths wrong" you think, "maybe I just need more prints" even. So let's delve a bit deeper: why didn't you see those prints, where did they go, and how can you get at them?

__log_buf

So what happens when you call printk()? Well what normally happens is, depending on the log level you set, the output is sent to the console or logged so you can see it in dmesg. But what happens if we haven't registered a console yet? Well then we can't print the message can we, so it's logged in a buffer - the kernel log buffer to be exact, helpfully named __log_buf.

Console Registration

So how come I eventually see print statements on my screen? Well at some point during the boot process a console is registered with the printk system, and any buffered output can now be displayed. On ppc it happens that this occurs in register_early_udbg_console() called in setup_arch() from start_kernel(), which is the generic kernel entry point. From this point forward when you print something it will be displayed on the console, but what if you crash before this? What are you supposed to do then?

Extracting Early Boot Messages in QEMU

And now the moment you've all been waiting for, how do I extract those early boot messages in QEMU if my kernel crashes before the console is registered? Well it's quite simple really, QEMU is nice enough to allow us to dump guest memory, and we know the log buffer is in there somewhere, so we just need to dump the correct part of memory which corresponds to the log buffer.

Locating __log_buf

Before we can dump the log buffer we need to know where it is. Luckily for us this is fairly simple, we just need to dump all the kernel symbols and look for the right one.

> nm vmlinux > tmp; grep __log_buf tmp;
c000000000f5e3dc b __log_buf

We use the nm tool to list all the kernel symbols and output this into some temporary file, then grep this for the log buffer (which we know to be named __log_buf). Presto, we are told that it lives at kernel address c000000000f5e3dc; since the kernel's linear mapping on ppc64 starts at 0xc000000000000000, that corresponds to physical address 0xf5e3dc, which is what we'll use below.

Dumping Guest Memory

It's then simply a case of dumping guest memory from the QEMU console. So first we press ^a+c to get us to the QEMU console, then we can use the aptly named dump-guest-memory.

> help dump-guest-memory
dump-guest-memory [-p] [-d] [-z|-l|-s] filename [begin length] -- dump guest memory into file 'filename'.
            -p: do paging to get guest's memory mapping.
            -d: return immediately (do not wait for completion).
            -z: dump in kdump-compressed format, with zlib compression.
            -l: dump in kdump-compressed format, with lzo compression.
            -s: dump in kdump-compressed format, with snappy compression.
            begin: the starting physical address.
            length: the memory size, in bytes.

We just give it a filename for where we want our output to go, we know the starting address, we just don't know the length. We could choose some arbitrary length, but inspection of the kernel code shows us that:

#define __LOG_BUF_LEN (1 << CONFIG_LOG_BUF_SHIFT)
static char __log_buf[__LOG_BUF_LEN] __aligned(LOG_ALIGN);

Looking at the pseries_defconfig file shows us that the LOG_BUF_SHIFT is set to 18, and thus we know that the buffer is 2^18 bytes or 256KB. So now we run:

> dump-guest-memory tmp 0xf5e3dc 262144

And we now get our log buffer in the file tmp. This can simply be viewed with:

> hexdump -C tmp

This gives a readable, if poorly formatted output. I'm sure you can find something better but I'll leave that as an exercise for the reader.
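
One marginally nicer option is to pull out just the printable strings - the log buffer holds structured printk records, but the message text survives intact:

> strings -n 8 tmp | less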

Conclusion

So if like me your kernel hangs somewhere early in the boot process and you're left without your console output you are now fully equipped to extract the log buffer in QEMU and hopefully therein lies the answer to why you failed to boot.

Git hook to help with OpenStack development

I wrote a small Git hook which may be useful in helping OpenStack devs run tests (and any script they like) before a commit is made (see Superuser magazine article).

This way we can save everyone time in the review process by fixing simple issues before they break in the check-pipeline.

Installation is easy (see the GitHub page) and all prompts default to no, so that the dev can easily just hit Enter to skip and continue (but still be reminded).
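
The hook itself lives on the GitHub page above; purely as an illustration of the idea (this is not the actual hook), a pre-commit hook that prompts before running a check might look something like this:

#!/bin/bash
# Sketch of .git/hooks/pre-commit - hooks don't get a terminal on stdin,
# so read the answer from /dev/tty directly.
read -r -p "Run pep8 checks before committing? [y/N] " answer < /dev/tty
if [ "$answer" = "y" ]; then
    tox -e pep8 || exit 1    # a non-zero exit aborts the commit
fi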

January 29, 2017

Installing CentOS 7.2 on IBM Power System S822LC for High Performance Computing (Minsky) with a USB device

Introduction

If you are installing Linux on your IBM Power System S822LC server then the instructions in this article will help you to start and run your system. These instructions are specific to installing CentOS 7 on an IBM Power System S822LC for High Performance Computing (Minsky), but also work for RHEL 7 - just swap CentOS for RHEL.

Prerequisites

Before you power on the system, ensure that you have the following items:

  • Ethernet cables;
  • USB storage device of 7G or greater;
  • An installed ethernet network with a DHCP server;
  • Access to the DHCP server's logs;
  • Power cords and outlet for your system;
  • PC or notebook that has IPMItool level 1.8.15 or greater; and
  • a VNC client.

Download the CentOS ISO file from the CentOS mirror. Select the "Everything" ISO file.

Note: You must use the 1611 release (dated 2016-12-22) or later due to Linux Kernel support for the server hardware.

Step 1: Preparing to power on your system

Follow these steps to prepare your system:

  1. If your system belongs in a rack, install your system into that rack. For instructions, see IBM POWER8 Systems information.
  2. Connect an Ethernet cable to the left embedded Ethernet port next to the serial port on the back of your system and the other end to your network. This Ethernet port is used for the BMC/IPMI interface.
  3. Connect another Ethernet cable to the right Ethernet port to provide a network connection for the operating system.
  4. Connect the power cords to the system and plug them into the outlets.

At this point, your firmware is booting.

Step 2: Determining the BMC firmware IP address

To determine the IP address of the BMC, examine the latest DHCP server logs for the network connected to the server. The IP address will be requested approximately 2 minutes after being powered on.

It is possible to set the BMC to a static IP address by following the IBM documentation on IPMI.
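
For example, on a DHCP server running ISC dhcpd and logging to syslog, you could watch for the new lease with something like this (the log file location is an assumption - adjust it for your distribution):

grep -i DHCPACK /var/log/syslog | tail -5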

Step 3: Connecting to the BMC firmware with IPMItool

After you have a network connection set up for your BMC firmware, you can connect using Intelligent Platform Management Interface (IPMI). IPMI is the default console to use when connecting to the Open Power Abstraction Layer (OPAL) firmware.

The default authentication for servers over IPMI is:

  • Default user: ADMIN
  • Default password: admin

To power on your server from a PC or notebook that is running Linux®, follow these steps:

Open a terminal program on your PC or notebook and activate Serial-Over-LAN using IPMI (see below). Use the other steps here as needed.

For the following ipmitool commands, server_ip_address is the IP address of the BMC from Step 2, and ipmi_user and ipmi_password are the default user ID and password for IPMI.

Power On using IPMI

If your server is not powered on, run the following command to power the server on:

ipmitool -I lanplus -H server_ip_address -U ipmi_user -P ipmi_password chassis power on

Activate Serial-Over-Lan using IPMI

Activate your IPMI console by running this command:

ipmitool -I lanplus -H server_ip_address -U ipmi_user -P ipmi_password sol activate

After powering on your system, the Petitboot interface loads. If you do not interrupt the boot process by pressing any key within 10 seconds, Petitboot automatically boots the first option. At this point the IPMI console will be connected to the operating system's serial console. If you get to this stage accidentally you can deactivate and reboot as per the following two commands.

Deactivate Serial-Over-Lan using IPMI

If you need to power off or reboot your system, deactivate the console by running this command:

ipmitool -I lanplus -H server_ip_address -U user-name -P ipmi_password sol deactivate

Reboot using IPMI

If you need to reboot the system, run this command:

ipmitool -I lanplus -H server_ip_address -U user-name -P ipmi_password chassis power reset

Step 4: Creating a USB device and booting

At this point, your IPMI console should contain a Petitboot bootloader menu as illustrated below and you are ready to install CentOS 7 on your server.

Petitboot menu over IPMI

Use one of the following USB devices:

  • USB attached DVD player with a single USB cable to stay under 1.0 Amps, or
  • 7 GB (or more) 2.0 (or later) USB flash drive.

Follow these instructions:

  1. To create the bootable USB device, follow the instructions in the CentOS wiki How to Set Up a USB to Install CentOS.
  2. Insert your bootable USB device into the front USB port. The CentOS AltArch installer will automatically appear as a boot option on the Petitboot main screen. If the USB device does not appear, select Rescan devices. If your device is still not detected, you might have to try a different type.
  3. Arrow up to select the CentOS boot option. Press e (Edit) to open the Petitboot Option Editor window.
  4. Move the cursor to the Boot arguments section and include the following information: ro inst.stage2=hd:LABEL=CentOS_7_ppc64le:/ console=hvc0 ip=dhcp (if using RHEL the LABEL will be similar to RHEL-7.3\x20Server.ppc64le:/)

Petitboot with the edited "Install CentOS AltArch 7 (64-bit kernel)" option

Notes about the boot arguments:

  • ip=dhcp to ensure network is started for VNC installation.
  • console=hvc0 is needed as this is not the default.
  • inst.stage2 is needed as the boot process won't automatically find the stage2 install on the install disk.
  • append inst.proxy=URL where URL is the proxy URL if installing in a network that requires a proxy to connect externally.

You can find additional options at Anaconda Boot Options.

  1. Select OK to save your options and return to the Main menu
  2. On the Petitboot main screen, select the CentOS AltArch option and then press Enter.

Step 5: Complete your installation

After you select to boot the CentOS installer, the installer wizard walks you through the steps.

  1. If the CentOS installer was able to obtain a network address via DHCP, it will present an option to enable VNC. If no option is presented, check your network cables.
  2. Select the Start VNC option and it will provide an OS server IP address. Note that this will be different to the BMC address previously obtained.
  3. Run a VNC client program on your PC or notebook and connect to the OS server IP address.

VNC of Installer

During the install over VNC, there are a couple of consoles active. To switch between them in the ipmitool terminal, press ctrl-b and then between 1-4 as indicated.

Using the VNC client program:

  1. Select "Install Destination"
  2. Select a device from "Local Standard Disks"
  3. Select "Full disk summary and boot device"
  4. Select the device again from "Selected Disks" with the Boot enabled
  5. Select "Do not install boot loader" from device. Disabling install of boot loader which results in Result after disabling boot loader install.

Without disabling the boot loader, the installer complains about an invalid stage1 device. I suspect it needs a manual PReP partition of 10M to make the installer happy.

If you have a local CentOS repository you can set this by selecting "Install Source" - the directories at this URL should look like CentOS's Install Source for ppc64le.

Step 6: Before reboot and using the IPMI Serial-Over-LAN

Before reboot, generate the grub.cfg file as Petitboot uses this to generate its boot menu:

  1. Using the ipmitool's shell (ctrl-b 2):
  2. Enter the following commands to generate a grub.cfg file
chroot /mnt/sysimage
rm /etc/grub.d/30_os-prober
grub2-mkconfig -o /boot/grub2/grub.cfg
exit

/etc/grub.d/30_os-prober is removed as Petitboot probes the other devices anyway so including it would create lots of duplicate menu items.

The last step is to restart your system.

Note: While your system is restarting, remove the USB device.

After the system restarts, Petitboot displays the option to boot CentOS 7.2. Select this option and press Enter.

Conclusion

After you have booted CentOS, your server is ready to go! For more information, see the following resources:

Random Test Subject

Almost every time I fly, it seems like I get pulled aside for the random explosives trace detection test. I always assumed it was because I usually look like a crazy mountain man (see photo). But, if you google around for “airport random explosives test”, you’ll find forum posts from security staff assuring everyone they’re not doing profiling, and even a helpful FAQ from Newcastle Airport (“Why are you always chosen for ‘explosive testing’?“) which says the process is “as the officer finishes screening one person, they are required to ask the next person walking through screening to undertake the ETD test”.

So maybe it’s just bad luck. Except possibly for that time at Hobart airport last week, where I was seeing off a colleague after linux.conf.au 2017. As far as I could tell, we were the only two people approaching security, and my colleague was in front. He was waved through to the regular security screening, and I was asked over for an explosives test, to which I replied “you’re most welcome to test me if you like, but I’m not actually going through security into departures”. The poor guy looked a bit nonplussed at this, then moved on to the next traveler who’d since appeared in line behind us.

What to do about this in future? Obviously, I need a new t-shirt, with text something like one of these:

"Random Test Subject" / "Pick Me for Random Testing" / "Randomly Chosen Every Time"

If anyone else would like a t-shirt along these lines, the images above conveniently link to my Redbubble store. Or, if you’d rather DIY, there’s PNGs here, here and here (CC-BY-SA as usual, and no, they’re not broken, it’s white text on a transparent background).

Finally, for some real randomness, check out Keith Packard’s ChaosKey To Production presentation. I’m not actually affiliated with Keith, but the ChaosKey sure looks nifty.

South Coast Track Report

Please note this is a work in progress

I had previously stated my intention to walk the South Coast Track. I have now completed this walk and now want a space where I can collect all my thoughts.

Photos: Google Photos album

The sections I’m referring to here come straight from the guide book. Due to the walking weather and tides all being in our favour, we managed to do the walk in six days. We flew in late on the first day and did not finish section one of the walk, on the second day we finished section one and then completed section two and three. On day three it was just the Ironbound range. On day four it was just section five. Day five we completed section six and the tiny section seven. Day six was section eight and day seven was Cockle Creek (TODO something’s not adding up here)

The hardest day, not surprisingly, was day three where we tackled the Ironbound range, 900m up, then down. The surprising bit was how easy the ascent was and how god damn hard the descent was. The guide book says there are three rest camps on the descent, with one just below the peak, a perfect spot for lunch. Either this camp is hidden (e.g. you have to look behind you) or it’s overgrown, as we all missed it. This meant we ended up skipping lunch and were slipping down the wet, muddy, awful descent side for hours. When we came across the mid rest camp stop, because we’d been walking for so long, everyone assumed we were at the lower camp stop and that we were therefore only an hour or so away from camp. Another three hours or so later we actually came across the lower camp site, and by that time all sense of proportion was lost and I was starting to get worried that somehow we’d gotten lost and were not on the right trail and that we’d run out of light. In the end I got into camp about an hour before sundown (approx 8pm) and B&R got in about half an hour before sundown. I was utterly exhausted, got some water, pitched the tent, collapsed in it and fell asleep. Woke up close to midnight, realised I hadn’t had any lunch or dinner, still wasn’t actually feeling hungry. I forced myself to eat a hot meal, then collapsed in bed again.

TODO: very easy to follow trail.
TODO: just about everything worked.
TODO: spork
TODO: solar panel
TODO: not eating properly
TODO: needing more warmth

I could not have asked for better walking companions, Richard and Bec.


Filed under: camping, Uncategorized

January 28, 2017

Charging my ThinkPad X1 Gen4 using USB-C

As with many massive time-sucking rabbit holes in my life, this one starts with one of my silly ideas getting egged on by some of my colleagues in London (who know full well who they are), but for a nice change, this is something I can talk about.

I have a rather excessive number of laptops, at the moment my three main ones are a rather ancient Lenovo T430 (personal), a Lenovo X1 Gen4, and a Chromebook Pixel 2 (both work).

At the start of last year I had a T430s in place of the X1, and was planning on replacing both it and my personal ThinkPad mid-year. However both of those older laptops used Lenovo's long-held (back to the IBM days) barrel charger, which led to me having a heap of them in various locations at home and work, but all the newer machines switched to their newer rectangular "slim" style power connector and while adapters exist, I decided to go in a different direction.

One of the less-touted features of USB-C is USB-PD[1], which allows devices to be fed up to 100W of power, and can do so while using the port for data (or the other great feature of USB-C, alternate modes, such as DisplayPort, great for docks), which is starting to be used as a way to charge laptops, such as the Chromebook Pixel 2, various models of the Apple MacBook line, and more.

Instead of buying a heap of slim-style Lenovo chargers, or a load of adapters (which would inevitably disappear over time) I decided to bridge towards the future by making an adapter to allow me to charge slim-type ThinkPads (at least the smaller ones, not the portable workstations which demand 120W or more).

After doing some research on what USB-PD platforms were available at the time I settled on the TI TPS65986 chip, which, with only an external flash chip, would do all that I needed.

Devkits were ordered to experiment with, and prove the concept, which they did very quickly, so I started on building the circuit, since just reusing the devkit boards would lead to an adapter larger than would be sensible. As the TI chip is a many-pin BGA, and breaking it out on 2-layers would probably be too hard for my meager PCB design skills, I needed a 4-layer board, so I decided to use KiCad for the project.

It took me about a week of evenings to get the schematic fully sorted, with much of the time spent reading the chip datasheet, or digging through the devkit schematic to see what they did there for some cases that weren't clear, then almost a month for the actual PCB layout, with much of the time being sucked up learning a tool that was brand new to me, and also fairly obtuse.

By mid-June I had a PCB which should (but, spoiler, wouldn't) work, however as mentioned the TI chip is a 96-ball 3x3mm BGA, something I had no hope of manually placing for reflow, and of course, no hope of hand soldering, so I would need to get these manufactured commercially. Luckily there are several options for small scale assembly at very reasonable prices, and I decided to try a new company still (at the time of ordering) in closed trials, PCB.NG. They have a nice simple procedure to upload board files, and a slightly custom pick & place file that includes references to the exact component I want by Digikey[link] part number. Best of all the pricing was completely reasonable, with a first test run of six boards only costing me US$30 each.

Late in June I received a mail from PCB.NG telling me that they'd built my boards, but that I had made a mistake with the footprint I'd used for the USB-C connector and they were posting my boards along with the connectors. As I'd had them ship the order to California (at the time they didn't seem to offer international shipping) it took a while for them to arrive in Sydney, courtesy of a coworker.

I tried to modify a connector by removing all through hole board locks, keeping just the surface mount pins, however I was unsuccessful, and that's where the project stalled until mid-October when I was in California myself, and was able to get help from a coworker who can perform miracles of surface mount soldering (while they were working on my board they were also dead-bug mounting a BGA). Sadly, while I now had a board I could test, it simply dropped off my priority list for months.

At the start of January another of my colleagues (a US-based teammate of the London rabble-rousers) asked for a status update, which prompted me to get off my butt and perform the testing. The next day I added some reinforcement to the connector which was only really held on by the surface mount pins, and was highly likely to rip off the board, so I covered it in epoxy. Then I knocked up some USB A plug/socket to bare wires test adapters using some stuff from the junk bin we have at the office maker space for just this sort of occasion (the socket was actually a front panel USB port from an old IBM x-series server). With some trepidation I plugged the board into my newly built & tested adapter, and powered the board from a lab supply set to limit current in case I'd any shorts in the board. It all came up straight away, and even lit the LEDs I'd added for some user feedback.

Next was to load a firmware for the chip. I'd previously used TI's tool to create a firmware image, and after some messing around with the SPI flash programmer I'd purchased, I managed to get the board programmed. However the behaviour of the board didn't change with (what I thought was) real firmware, so I used an oscilloscope to verify the flash was being read, and a Twinkie to sniff the PD negotiation, which confirmed that no request for 20v was being sent. This was where I finished that day.

Over the weekend that followed I dug into what I'd seen and determined that either I'd killed the SPI MISO port (the programmer I used was 5v, not 3.3v), or I just had bad firmware and the chip had some good defaults. I created a new firmware image from scratch, and loaded that.

Sure enough it worked first try. Once I confirmed 20v was coming from the output ports I attached it to my recently acquired HP 6051A DC load where it happily sank 45W for a while, then I attached the cable part of a Lenovo barrel to slim adapter and plugged it into my X1 where it started charging right away.

At linux.conf.au last week I gave (part of) a hardware miniconf talk about USB-C & USB-PD, which open source hardware folk might be interested in. Over the last few days while visiting my dad down in Gippsland I made the edits to fix the footprint and sent a new rev to the manufacturer for some new experiments.

Of course at CES Lenovo announced that this years ThinkPads would feature USB-C ports and allow charging through them, and due to laziness I never got around to replacing my T430, so I'm planning to order a T470 as soon as they're available, making my adapter obsolete.





Rough timeline:
  • April 21st 2016, decide to start working on the project
  • April 28th, devkits arrive
  • May 8th, schematic largely complete, work starts on PCB layout
  • June 14th, order sent to CM
  • ~July 6th, CM ships order to me (to California, then hand carried to me by a coworker)
  • early August, boards arrive from California
  • Somewhere here I try, and fail, to reflow a modified connector onto a board
  • October 13th, California coworker helps to (successfully) reflow a USB-C connector onto a board for testing
  • January 6th 2017, finally got around to reinforce the connector with epoxy and started testing, try loading firmware but no dice
  • January 10th, redo firmware, it works, test on DC load, then modify a ThinkPad-slim adapter and test on a real ThinkPad
  • January 25/26th, fixed USB-C connector footprint, made one more minor tweak, sent order for rev2 to CM, then some back & forth over some tolerance issues they're now stricter on.


1: There's a previous variant of USB-PD that works on the older A/B connector, but, as far as I'm aware, was never implemented in any notable products.

Recording performance information from short lived processes with prometheus

Now that I'm recording basic statistics about the behavior of my machines, I now want to start tracking some statistics from various scripts I have lying around in cron jobs. In order to make myself sound smarter, I'm going to call these short lived scripts "ephemeral scripts" throughout this document. You're welcome.

The promethean way of doing this is to have a relay process. Prometheus really wants to know where to find web servers to learn things from, and my ephemeral scripts are both not permanently around and also not running web servers. Luckily, prometheus has a thing called the pushgateway which is designed to handle this situation. I can run just one of these, and then have all my little scripts just tell it things to add to its metrics. Then prometheus regularly scrapes this one process and learns things about those scripts. It's like a game of Telephone, but for processes really.

First off, let's get the pushgateway running. This is basically the same as the node_exporter from last time:

$ wget https://github.com/prometheus/pushgateway/releases/download/v0.3.1/pushgateway-0.3.1.linux-386.tar.gz
$ tar xvzf pushgateway-0.3.1.linux-386.tar.gz
$ cd pushgateway-0.3.1.linux-386
$ ./pushgateway


Let's assume once again that we're all adults and did something nicer than that involving configuration management and init scripts.

The pushgateway implements a relatively simple HTTP protocol to add values to the metrics that it reports. Note that the values won't change once set until you change them again; they're not garbage collected or aged out or anything fancy. Here's a trivial example of adding a value to the pushgateway:

echo "some_metric 3.14" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job


This is stolen straight from the pushgateway README of course. The above command will have the pushgateway start to report a metric called "some_metric" with the value "3.14", for a job called "some_job". In other words, we'll get this in the pushgateway metrics URL:

# TYPE some_metric untyped
some_metric{instance="",job="some_job"} 3.14


You can see that this isn't perfect because the metric is untyped (what types exist? we haven't covered that yet!), and has these confusing instance and job labels. One tangent at a time, so let's explain instances and jobs first.

On jobs and instances

Prometheus is built for a universe a little bit unlike my home lab. Specifically, it expects there to be groups of processes doing a thing instead of just one. This is especially true because it doesn't really expect things like the pushgateway to be proxying your metrics for you because there is an assumption that every process will be running its own metrics server. This leads to some warts, which I'll explain in a second. Let's start by explaining jobs and instances.

For a moment, assume that we're running the world's most popular wordpress site. The basic architecture for our site is web frontends which run wordpress, and database servers which store the content that wordpress is going to render. When we first started our site it was all easy, as they could both be on the same machine or cloud instance. As we grew, we were first forced to split apart the frontend and the database into separate instances, and then forced to scale those two independently -- perhaps we have reasonable database performance so we ended up with more web frontends than we did database servers.

So, we go from something like this:



To an architecture which looks a bit like this:



Now, in prometheus (i.e. google) terms, there are three jobs here. We have web frontends, database masters (the top one which is getting all the writes), and database slaves (the bottom one which everyone is reading from). For one of the jobs, the frontends, there is more than one instance of the job. To put that into pictures:



So, the topmost frontend job would be job="fe" and instance="0". Google also had a cool way to lookup jobs and instances via DNS, but that's a story for another day.

To harp on a point here, all of these processes would be running a web server exporting metrics in google land -- that means that prometheus would know that its monitoring a frontend job because it would be listed in the configuration file as such. You can see this in the configuration file from the previous post. Here's the relevant snippet again:

  - job_name: 'node'
    static_configs:
      - targets: ['molokai:9100', 'dell:9100', 'eeebox:9100']


The job "node" runs on three targets (instances), named "molokai:9100", "dell:9100", and "eeebox:9100".

However, we live in the ghetto for these ephemeral scripts and want to use the pushgateway for more than one such script, so we have to tell lies via the pushgateway. So for my simple ephemeral script, we'll tell the pushgateway that the job is the script name and the instance can be an empty string. If we don't do that, then prometheus will think that the metric relates to the pushgateway process itself, instead of the ephemeral process.

We tell the pushgateway what job and instance to use like this:

echo "some_metric 3.14" | curl --data-binary @- http://localhost:9091/metrics/job/frontend/instance/0


Now we'll get this at the metrics URL:

# TYPE some_metric untyped
some_metric{instance="",job="some_job"} 3.14
some_metric{instance="0",job="frontend"} 3.14


The first metric there is from our previous attempt (remember when I said that values are never cleared out?), and the second one is from our second attempt. To clear out values you'll need to restart the pushgateway process. For simple ephemeral scripts, I think its ok to leave the instance empty, and just set a job name -- as long as that job name is globally unique.

We also need to tell prometheus to believe our lies about the job and instance for things reported by the pushgateway. The scrape configuration for the pushgateway therefore ends up looking like this:

  - job_name: 'pushgateway'
    honor_labels: true
    static_configs:
      - targets: ['molokai:9091']


Note the honor_labels there, that's the believing the lies bit.

There is one thing to remember here before we can move on. Job names are being blindly trusted from our reporting. So, its now up to us to keep job names unique. So if we export a metric on every machine, we might want to keep the job name specific to the machine. That said, it really depends on what you're trying to do -- so just pay attention when picking job and instance names.

On metric types

Prometheus supports a couple of different types for the metrics which are exported. For now we'll discuss two, and we'll cover the third later. The types are:

  • Gauge: a value which goes up and down over time, like the fuel gauge in your car. Non-motoring examples would include the amount of free disk space on a given partition, the amount of CPU in use, and so forth.
  • Counter: a value which always increases. This might be something like the number of bytes sent by a network card -- the value only resets when the network card is reset (probably by a reboot). These only-increasing types are valuable because its easier to do maths on them in the monitoring system.
  • Histograms: a set of values broken into buckets. For example, the response time for a given web page would probably be reported as a histogram. We'll discuss histograms in more detail in a later post.


I don't really want to dig too deeply into the value types right now, apart from explaining that our previous examples haven't specified a type for the metrics being provided, and that this is undesirable. For now we just need to decide if the value goes up and down (a gauge) or just up (a counter). You can read more about prometheus types at https://prometheus.io/docs/concepts/metric_types/ if you want to.

A typed example

So now we can go back and do the same thing as before, but we can do it with typing like adults would. Let's assume that the value of pi is a gauge, and goes up and down depending on the vagaries of space time. Let's also show that we can add a second metric at the same time because we're fancy like that. We'd therefore need to end up doing something like (again heavily based on the contents of the README):

cat <<EOF | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/frontend/instance/0
# TYPE some_metric gauge
# HELP some_metric approximate value of pi in the current space time continuum
some_metric 3.14
# TYPE another_metric counter
# HELP another_metric Just an example.
another_metric 2398
EOF


And we'd end up with values like this in the pushgateway metrics URL:

# TYPE some_metric gauge
some_metric{instance="0",job="frontend"} 3.14
# HELP another_metric Just an example.
# TYPE another_metric counter
another_metric{instance="0",job="frontend"} 2398


A tangible example

So that's a lot of talking. Let's deploy this in my home lab for something actually useful. The node_exporter does not report any SMART health details for disks, and that's probably a thing I'd want to alert on. So I wrote this simple script:

#!/bin/bash

hostname=`hostname | cut -f 1 -d "."`

for disk in /dev/sd[a-z]
do
  disk=`basename $disk`

  # Is this a USB thumb drive?
  if [ `/usr/sbin/smartctl -H /dev/$disk | grep -c "Unknown USB bridge"` -gt 0 ]
  then
    result=1
  else
    result=`/usr/sbin/smartctl -H /dev/$disk | grep -c "overall-health self-assessment test result: PASSED"`
  fi

  cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/$hostname/instance/$disk
  # TYPE smart_health_passed gauge
  # HELP smart_health_passed whether or not a disk passed a "smartctl -H /dev/sdX"
  smart_health_passed $result
EOF
done


Now, that's not perfect and I am sure that I'll re-write this in python later, but it is actually quite useful already. It will report if a SMART health check failed, and now I could write an alerting rule which looks for disks with a health value of 0 and send myself an email to go to the hard disk shop. Once your pushgateways are being scraped by prometheus, you'll end up with something like this in the console:



I'll explain how to turn this into alerting later.
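
In the meantime, as a rough sketch only (using prometheus 1.x rule syntax, and an assumed rules file location that would need to be listed under rule_files in prometheus.yml), such a rule might look something like:

cat > /etc/prometheus/disk.rules <<'EOF'
ALERT SmartHealthFailed
  IF smart_health_passed == 0
  FOR 10m
  ANNOTATIONS {
    summary = "SMART health check failing on {{ $labels.job }} ({{ $labels.instance }})"
  }
EOF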

Tags for this post: prometheus monitoring ephemeral_script pushgateway
Related posts: A pythonic example of recording metrics about ephemeral scripts with prometheus; Basic prometheus setup; Mona Lisa Overdrive; The Diamond Age ; Buying Time; The System of the World

Comment

January 27, 2017

Basic prometheus setup

I've been playing with prometheus for monitoring. It feels quite familiar to me because it's based on an internal google technology called borgmon, but I suspect that means it feels really weird to everyone else.

The first thing to realize is that everything at google is a web server. Your short lived tool that copies some files around probably runs a web server. All of these web servers have built in URLs which report the progress and status of the task at hand. Prometheus is built to: scrape those web servers; aggregate the data; store the data into a time series database; and then perform dashboarding, trending and alerting on that data.

The most basic example is to just export metrics for each machine on my home network. This is the easiest first step, because we don't need to build any software to do this. First off, let's install node_exporter on each machine. node_exporter is the tool which runs a web server to export metrics for each node. Everything in prometheus land is written in go, which is new to me. However, it does make running node exporter easy -- just grab the relevant binary from https://prometheus.io/download/, untar, and run. Let's do it in a command line script example thing:

$ wget https://github.com/prometheus/node_exporter/releases/download/v0.14.0-rc.1/node_exporter-0.14.0-rc.1.linux-386.tar.gz
$ tar xvzf node_exporter-0.14.0-rc.1.linux-386.tar.gz
$ cd node_exporter-0.14.0-rc.1.linux-386
$ ./node_exporter


That's all it takes to run the node_exporter. This runs a web server at port 9100, which exposes the following metrics:

$ curl -s http://localhost:9100/metrics | grep filesystem_free | grep 'mountpoint="/data"'
node_filesystem_free{device="/dev/mapper/raidvg-srvlv",fstype="xfs",mountpoint="/data"} 6.811044864e+11


Here you can see that the system I'm running on is exporting a filesystem_free value for the filesystem mounted at /data. There's a lot more than that exported, and I'd encourage you to poke around at that URL a little before continuing on.

So that's lovely, but we really want to record that over time. So let's assume that you have one of those running on each of your machines, and that you have it setup to start on boot. I'll leave the details of that out of this post, but let's just say I used my existing puppet infrastructure.

Now we need the central process which collects and records the values. That's the actual prometheus binary. Installation is again trivial:

$ wget https://github.com/prometheus/prometheus/releases/download/v1.5.0/prometheus-1.5.0.linux-386.tar.gz
$ tar xvzf prometheus-1.5.0.linux-386.tar.gz
$ cd prometheus-1.5.0.linux-386


Now we need to move some things around to install this nicely. I did the puppet equivalent of:

  • Moving the prometheus file to /usr/bin
  • Creating an /etc/prometheus directory and moving console_libraries and consoles into it
  • Creating a /etc/prometheus/prometheus.yml config file, more on the contents of this one in a second
  • And creating an empty data directory, in my case at /data/prometheus


The config file needs to list all of your machines. I am sure this could be generated with puppet templating or something like that, but for now here's my simple hard coded one:

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'stillhq'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['molokai:9090']

  - job_name: 'node'
    static_configs:
      - targets: ['molokai:9100', 'dell:9100', 'eeebox:9100']


Here you can see that I want to scrape each of my web servers which exports metrics every 15 seconds, and I also want to calculate values (such as firing alerts) every 15 seconds too. This might not scale if you have bajillions of processes or machines to monitor. I also label all of my values as coming from my domain, so that if I ever aggregate these values with another prometheus from somewhere else the origin will be clear.

The other interesting bit for now is the scrape configuration. This lists the metrics exporters to monitor. In this case it's prometheus itself (molokai:9090), and then each of my machines in the home lab (molokai, dell, and eeebox -- all on port 9100). Remember, port 9090 is the prometheus binary itself and port 9100 is that node_exporter binary we now have running on all of our machines.

Now if we start prometheus, it will do its thing. There is some configuration which needs to be passed on the command line here (instead of in the configuration file), so my command line looks like this:

/usr/bin/prometheus -config.file=/etc/prometheus/prometheus.yml \
    -web.console.libraries=/etc/prometheus/console_libraries \
    -web.console.templates=/etc/prometheus/consoles \
    -storage.local.path=/data/prometheus


Prometheus also presents an interactive user interface on port 9090, which is handy. Here's an example of it graphing the load average on each of my machines (it was something which caused a nice jaggy line):



You can see here that the user interface has a drop down for selecting values that are known, and that the key at the bottom tells you things about each time series in the graph. So for example, if we added {instance="eeebox:9100"} to the end of the value in the text box at the top, then we'd be filtering for values with that label set, and would as a result only show one value in the graph (the one for eeebox).
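For example, assuming the graph is plotting node_load1 (one of the standard node_exporter metrics, which is an assumption about what the screenshot shows), the filtered expression would be:

node_load1{instance="eeebox:9100"}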

If you're interested in very simple dashboarding of basic system metrics, that's actually all you need to do. In my next post about prometheus I'm going to show how to write your own binary which exports values to be graphed. In my case, the temperature outside my house.

Tags for this post: prometheus monitoring node_exporter
Related posts: Recording performance information from short lived processes with prometheus; A pythonic example of recording metrics about ephemeral scripts with prometheus; Mona Lisa Overdrive; The Diamond Age ; Buying Time; The System of the World


January 26, 2017

Life in the Philippines, Duterte Background, and More

- interesting history. The Aeta people were the original inhabitants of the Philippines; now a minority group. Malaysian and Indonesian people settled there thereafter. Then Chinese and Muslims came. Finally, Spanish and US colonialism and now relative independence... Two main wars changed its history: that between its colonisers (US and Spain) and between itself and the US which helped them

Data Centre Preparation: A Summary

When installing a new system (whether HPC, cloud, or just a bunch of servers, disks, etc), they must be housed. Certainly this can be without any specialist environment, especially if one is building a small test cluster; for example with half-a-dozen old but homogeneous systems, each connected with 100BASE-TX ethernet to a switch etc.

read more

January 25, 2017

Does the Military-Industrial Complex Make Sense?, Random Thoughts, and More

Given the role the Military-Industrial Complex now plays in many economies, I thought it'd be worthwhile to take a look at whether it actually makes sense. - the military industrial complex doesn't work the way that a lot of people think. It's actually linked in to the way the financial system works and the way the world panned out after the World Wars. Since the US/West came out

ArduPilot on the CL84 TiltWing VTOL

ArduPilot now supports TiltWing VTOL aircraft! First test flights done earlier today in Canberra

We did the first test flights of the Unique Models CL84 TiltWing VTOL with ArduPilot today. Support for tilt-wing aircraft is in the 3.7.1 stable release, with enhancements to support the tilt mechanism of the CL84 added for the upcoming 3.8.0 release.

This CL84 model operates as a tricopter in VTOL flight, and as a normal aileron/elevator plane in forward flight. The unusual thing from the point of view of ArduPilot is that the tilt mechanism uses a retract style servo, which means it can be commanded to be fully up or fully down, but you can't ask it to hold any angle in between. That makes for some interesting challenges in the VTOL transition code.

For the 3.8.0 release there is a new parameter Q_TILT_TYPE that controls whether the tilt mechanism for tiltrotors and tiltwings is a continuous servo or a binary (retract style) servo. In either case the Q_TILT_RATE parameter sets the rate at which the servo changes angle (in degrees/second).

This aircraft has been previously tested in hover by Greg Covey (see http://discuss.ardupilot.org/t/tiltrotor-support-for-plane/8805) but has not previously been tested with automatic transitions.
Many thanks to Grant, Peter, James and Jack from CanberraUAV for their assistance with testing the CL84 today and the videos and photos!

January 24, 2017

Keeping The Build Directory in EasyBuild and Paraview Plugins

By default, EasyBuild will delete the build directory of a successful installation and will save failures of attempted installs for diagnostic purposes. In some cases, however, one may want to save the build directory. This can be useful, for example, for diagnosis of *successful* builds. Another example is the installation of plugins for applications such as Paraview, which *require* access to the successful build directory.

read more

The Provision of HPC Resources to Top Universities

Recently, the University of Melbourne was ranked #1 in Australia and #33 in the world, according to the Times Higher Education World University Rankings of 2015-2016 [1]. The rankings are based on a balanced metric between citations, industry income, international outlook, research, and teaching.

read more

January 23, 2017

Saving Capitalist Democracy 2, Random Thoughts, and More

Obviously, this next part is a continuation of one of my other posts:  http://dtbnguyen.blogspot.com/2016/11/saving-capitalist-democracy-random.html - just get better at doing things. If countries such as Cuba can deliver good healthcare on limited resources surely more prosperous countries can as well? http://www.smh.com.au/lifestyle/health-and-wellbeing/

OHC2017 zero to firmware in < 2 hours

I thought I'd make some modifications along the way in the build, so I really couldn't do a head to head with the build time I had heard about (a lowish number of minutes). The on/off switch being where it was didn't fit my plans so I made that off-board and also moved the battery off board so that I might use the space below the screen for something, perhaps where the stylus lives in the case.


I did manage to go from opening the packet to firmware environment setup, built, and uploaded in less than 2 hours total. No bridges, no hassles, cable shrinks around the place and 90 degree headers across the bottom of the board for future tinkering.

This is going to look extremely stylish in a CNCed hardwood case. My current plan is to turn it into a smart remote control. Rotary encoder for volume, maybe modal so that the desired "program" can be selected quickly from a list without needing to flick or page through things.

Gods of Metal




ISBN: 9780141982267
LibraryThing
In this follow-up to Command and Control, Schlosser explores the conscientious objectors and protestors who have sought to highlight not just the immorality of nuclear weapons, but the hilariously insecure state the US government stores them in. In all seriousness, we are talking grannies with heart conditions being able to break in.

My only real objection to this book is that it is more of a pamphlet than a book, and feels a bit like things that didn't make it into the main book. That said, it is well worth the read.

Tags for this post: book eric_schlosser nuclear weapons safety protest
Related posts: Command and Control; Random linkage; Fast Food Nation; Starfish Prime; Why you should stand away from the car when the cop tells you to; Random fact for the day

January 20, 2017

Explaining Prophets 3, Random Thoughts, and More

Obvious continuation of my other two posts: http://dtbnguyen.blogspot.com/2016/12/explaining-prophets-fake-news-and-more_26.html http://dtbnguyen.blogspot.com/2017/01/explaining-prophets-2-what-is-liberal.html - many of the world's top scientists seemed to have possessed the same abilities with regards to 'pre-cognition' and 'lucid dreaming'. Like the other scientists I mentioned in the

Linux.conf.au 2017 – Friday – Closing

Code of Conduct and Safety

  • Badge
    • Putting preferred pronoun
    • Emoji
  • Free Childcare
    • Sponsored by Github
    • Approx 10 kids
  • Assistance Grants
  • Attendees
    • Breakdown by gender etc
    • Roughly 25% of attendees and speakers not men
  • More numbers
    • 104 Matrix chat users
    • 554 attendees
    • 2900 coffee cups
    • Network claimed to 7.5Gb/s
    • 1.6 TB over the week, 200Mb/s max
    • 30 Session Chairs
    • 12 Miniconfs
    • 491 Proposals (130 more than the others)
    • 6 Tutorials, 75 talks, 80 speakers
    • 4 Keynote speakers
    • 21 Sponsors

Linux.conf.au 2018 – Sydney

  • A little bit of history repeating
  • 2001, 2007, 2018
  • Venue is UTS
  • 5 minutes to food, train station
  • https://lca2018.org
  • @lca2018 on twitter
  • Looking for a few extra helpers

Raffle

  • In support of Outreachy
  • 3 interns funded

Final Bit

  • Thanks to team members

 

 


Balloon Meets Gum Tree

Today I attended the launch of Horus 38, a high altitude ballon flight carrying 4 payloads, one of which was the latest version of the SSDV system Mark and I have been working on.

Since the last launch, Mark and I have put a lot of work into carefully integrating a rate 0.8 LDPC code developed by Bill, VK5DSP. The coded 115 kbit/s system is now working error free on the bench down to -112dBm, and can transfer a new hi-res image in just a few seconds. With a tx power of 50mW, we estimate a line of sight range of 100km. We are now out-performing commercial FSK telemetry chip sets using our open source system.
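Out of curiosity, here's a back-of-envelope free space path loss check of that 100km figure. The 441MHz frequency is assumed from the receiver tuning used on these flights, and antenna gains and feedline losses are ignored, so it's only a rough sanity check:

import math

freq_mhz = 441.0                   # assumed from the rtl_sdr tuning used for these flights
dist_km = 100.0
tx_dbm = 10 * math.log10(50)       # 50 mW is about 17 dBm
fspl_db = 20 * math.log10(dist_km) + 20 * math.log10(freq_mhz) + 32.45
rx_dbm = tx_dbm - fspl_db          # free space only, no antenna gain or other losses
print(round(rx_dbm, 1))            # about -108 dBm, a few dB above the -112 dBm bench figure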

However disaster struck soon after launch at Mt Barker High School oval. High winds blew the payloads into a tree and three of them were chopped off, leaving the balloon and a lone payload to continue into the stratosphere. One of the payloads that hit the tree was our SSDV, tumbling into a neighboring back yard. Oh well, we’ll have another try in December.

Now I’ve been playing a lot of Kerbal Space Program lately. It’s got me thinking about vectors, for example in Kerbal I learned how to land two space craft at exactly the same point on the Mun (Moon) using vectors and some high school equations of motion. I’ve also taken up sailing – more vectors involved in how sails propel a ship.

The high altitude balloon consists of a latex, helium filled weather balloon a few meters in diameter. Strung out beneath that on 50m of fishing line are a series of “payloads”, our electronic gizmos in little foam boxes. The physical distance helps avoid interference between the radios in each box.

While the balloon was held near the ground, it was keeled over at an angle:

It’s tethered, and not moving, but is acted on by the force of the lift from the helium and drag from the wind. These forces pivot the balloon around an arc with a radius of the tether. If these forces were equal the balloon would be at 45 degrees. Today it was lower, perhaps 30 degrees.

When the balloon is released, it is accelerated by the wind until it reaches a horizontal velocity that matches the wind speed. The payloads will also reach wind speed and eventually hang vertically under the balloon due to the force of gravity. Likewise the lift accelerates the balloon upwards. This is balanced by drag to reach a vertical velocity (the ascent rate). The horizontal and vertical velocity components will vary over time, but lets assume they are roughly constant over the duration of our launch.

Now today the wind speed was 40 km/hr, just over 10 m/s. Mark suggested a typical balloon ascent rate of 5 m/s. The high school oval was 100m wide, so the balloon would take 100/10 = 10s to traverse the oval from one side to the gum tree. In 10 seconds the balloon would rise 5×10 = 50m, approximately the length of the payload string. Our gum tree, however, rises to a height of 30m, and reached out to snag the lower 3 payloads…..

Physics of Road Rage

A few days ago while riding my bike I was involved in a spirited exchange of opinions with a gentleman in a motor vehicle. After said exchange he attempted to run me off the road, and got out of his car, presumably with intent to assault me. Despite the surge of adrenaline I declined to engage in fisticuffs, dodged around him, and rode off into the sunset. I may have been laughing and communicating further with sign language. It’s hard to recall.

I thought I’d apply some year 11 physics to see what all the fuss was about. I was in the middle of the road, preparing to turn right at a T-junction (this is Australia remember). While his motivations were unclear, his vehicle didn’t look like an ambulance. I am assuming he as not an organ-courier, and that there probably wasn’t a live heart beating in an icebox on the front seat as he raced to the transplant recipient. Rather, I am guessing he objected to me being in that position, as that impeded his ability to travel at full speed.

The street in question is 140m long. Our paths crossed half way along at the 70m point, with him traveling at the legal limit of 14 m/s, and me a sedate 5 m/s.

Lets say he intended to brake sharply 10m before the T junction, so he could maintain 14 m/s for at most 60m. His optimal journey duration was therefore 4 seconds. My monopolization of the taxpayer funded side-street meant he was forced to endure a 12 second journey. The 8 second difference must have seemed like eternity, no wonder he was angry, prepared to risk physical injury and an assault charge!

Codec 2 700C

My endeavor to produce a digital voice mode that competes with SSB continues. For a big chunk of 2016 I took a break from this work as I was gainfully employed on a commercial HF modem project. However since December I have once again been working on a 700 bit/s codec. The goal is voice quality roughly the same as the current 1300 bit/s mode. This can then be mated with the coherent PSK modem, and possibly the 4FSK modem for trials over HF channels.

I have diverged somewhat from the prototype I discussed in the last post in this saga. Lots of twists and turns in R&D, and sometimes you just have to forge ahead in one direction leaving other branches unexplored.

Samples

Sample 1300 700C
hts1a Listen Listen
hts2a Listen Listen
forig Listen Listen
ve9qrp_10s Listen Listen
mmt1 Listen Listen
vk5qi Listen Listen
vk5qi 1% BER Listen Listen
cq_ref Listen Listen

Note the 700C samples are a little lower level, an artifact of the post filtering as discussed below. What I listen for is intelligibility, how easy is the sample to understand compared to the reference 1300 bit/s samples? Is it muffled? I feel that 700C is roughly the same as 1300. Some samples a little better (cq_ref), some (ve9qrp_10s, mmt1) a little worse. The artifacts and frequency response are different. But close enough for now, and worth testing over air. And hey – it’s half the bit rate!

I threw in a vk5qi sample with 1% random errors, and it’s still usable. No squealing or ear damage, but perhaps more sensitive than 1300 to the same BER. Guess that’s expected, every bit means more at a lower bit rate.

Some of the samples like vk5qi and cq_ref are strongly low pass filtered, others like ve9qrp are “flat” spectrally, with the high frequencies at about the same level as the low frequencies. The spectral flatness doesn’t affect intelligibility much but can upset speech codecs. Might be worth trying some high pass (vk5qi, cq_ref) or low pass (ve9qrp_10s) filtering before encoding.

Design

Below is a block diagram of the signal processing. The resampling step is the key, it converts the time varying number of harmonic amplitudes to a fixed number (K=20) of samples. They are sampled using the “mel” scale, which means we take more finely spaced samples at low frequencies, with coarser steps at high frequencies. This matches the log frequency response of the ear. I arrived at K=20 by experiment.

The amplitudes and even the Vector Quantiser (VQ) entries are in dB, which is very nice to work in and matches the ear's logarithmic amplitude response. The VQ was trained on just 120 seconds of data from a training database that doesn’t include any of the samples above. More work required on the VQ design and training, but I’m encouraged that it works so well already.

Here is a 3D plot of amplitude in dB against time (300 frames) and the K=20 frequency vectors for hts1a. You can see the signal evolving over time, and the low levels at the high frequency end.

The post filter is another key step. It raises the spectral peaks (formants) and lowers the valleys (anti-formants), greatly improving the speech quality. When the peak/valley ratio is low, the speech takes on a muffled quality. This is an important area for further investigation. Gain normalisation after post filtering is why the 700C samples are lower in level than the 1300 samples. Need some more work here.

The two stage VQ uses 18 bits, energy 4 bits, and pitch 6 bits for a total of 28 bits every 40ms frame. Unvoiced frames are signalled by a zero value in the pitch quantiser removing the need for a voicing bit. It doesn’t use differential in time encoding to make it more robust to bit errors.
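A quick sanity check of those numbers in Python shows where the name 700C comes from:

vq_bits, energy_bits, pitch_bits = 18, 4, 6
bits_per_frame = vq_bits + energy_bits + pitch_bits   # 28 bits
frame_period = 0.04                                    # 40 ms frames
print(bits_per_frame / frame_period)                   # 700.0 bit/s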

Days and days of very careful coding and checks at each development step. It’s so easy to make a mistake or declare victory early. I continually compared the output speech to a few Codec 2 1300 samples to make sure I was in the ball park. This reduced the subjective testing to a manageable load. I used automated testing to compare the reference Octave code to the C code, porting and testing one signal processing module at a time. Sometimes I would just printf rows of vectors from two versions and compare the two, old school but quite effective at spotting the step where the bug crept in.

Command line

The Octave simulation code can be driven by the scripts newamp1_batch.m and newamp1_fby.m, in combination with c2sim.

To try the C version of the new mode:

codec2-dev/build_linux/src$ ./c2enc 700C ../../raw/hts1a.raw - | ./c2dec 700C - -| play -t raw -r 8000 -s -2 -

Next Steps

Some thoughts on FEC. A (23,12) Golay code could protect the most significant bits of 1st VQ index, pitch, and energy. The VQ could be organised to tolerate errors in a few of its bits by sorting to make an error jump to a ‘close’ entry. The extra 11 parity bits would cost 1.5dB in SNR, but might let us operate at significantly lower SNR on an HF channel.
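My reading of that 1.5dB figure is the rate penalty from sending the extra parity bits at the same transmit power; if that interpretation is right, the arithmetic is roughly:

import math

payload_bits = 28    # bits per 40 ms frame, from above
parity_bits = 11     # a (23,12) Golay code adds 11 parity bits
penalty_db = 10 * math.log10((payload_bits + parity_bits) / float(payload_bits))
print(round(penalty_db, 2))   # about 1.44 dB, close to the 1.5 dB quoted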

Over the next few weeks we’ll hook up 700C to the FreeDV API, and get it running over the air. Release early and often – let’s find out if 700C works in the real world and provides a gain in performance on HF channels over FreeDV 1600. If it looks promising I’d like to do another lap around the 700C algorithm, investigating some of the issues mentioned above.

Horus 39 – Fantastic High Speed SSDV Images

A great result from our high speed SSDV image (Wenet) system, which we flew as part of Horus 38 on Saturday Dec 3. A great write up and many images on the AREG web site.

One of my favorite images below, just before impact with the ground. You can see the parachute and the tangled remains of the balloon in the background, the yellow fuzzy line is the nylon rope close to the lens.

Well done to the AREG club members (in particular Mark) for all your hard work in preparing the payloads and ground stations.

High Altitude Balloons is a fun hobby. It’s a really nice day out driving in the country with nice people in a car packed full of technology. South Australia has some really nice bakeries that we stop at for meat pies and donuts on the way. Yum. It was very satisfying to see High Definition (HD) images immediately after take off as the balloon soared above us. Several ground stations were collecting packets that were re-assembled by a central server – we crowd sourced the image reception.

Open Source FSK modem

Surprisingly we were receiving images while mobile for much of the flight. I could see the Eb/No move up and down about 6dB over 3 second cycles, which we guess is due to rotation or swinging of the payload under the balloon. The antennas used are not omnidirectional so the change in orientation of tx and rx antennas would account for this signal variation. Perhaps this can be improved using different antennas or interleaving/FEC.

Our little modem is as good as the Universe will let us make it (near perfect performance against theory) and it lived up to the results predicted by our calculations and tested on the ground. Bill, VK5DSP, developed a rate 0.8 LDPC code that provides 6dB coding gain. We were receiving 115 kbit/s data on just 50mW of tx power at ranges of over 100km. Our secret is good engineering, open source software, $20 SDRs, and a LNA. We are outperforming commercial chipsets with open source.

The same modem has been used for low bit rate RTTY telemetry and even innovative new VHF/UHF Digital Voice modes.

The work on our wonderful little FSK modem continues. Brady O’Brien, KC9TPA has been refactoring the code for the past few weeks. It is now more compact, has a better command line interface, and most importantly runs faster so we are getting close to running high speed telemetry on a Raspberry Pi and fully embedded platforms.

I think we can get another 4dB out of the system, bringing the MDS down to -116dBm – if we use 4FSK and lose the RS232 start/stop bits. What we really need next is custom tx hardware for open source telemetry. None of the chipsets out there are quite right, and our demod outperforms them all so why should we compromise?

Recycled Laptops

The project has had some interesting spin offs. The members of AREG are getting really interested in SDR on Linux resulting in a run on recycled laptops from ASPItech, a local electronics recycler!

Links

Balloon meets Gum Tree
Horus 37 – High Speed SSTV Images
High Speed Balloon Data Link
All Your Modem are Belong To Us
FreeDV 2400A and 2400B Demos
Wenet Source Code
Nov 2016 Wenet Presentation

Horus 37 – High Speed SSTV Images

Today I was part of the AREG team that flew Horus 37 – a High Altitude Balloon flight. The payload included hardware sending Slow Scan TV (SSTV) images at 115 kbit/s, based on the work Mark and I documented in this blog post from earlier this year.

It worked! Using just 50mW of transmit power and open source software we managed to receive SSTV images at bit rates of up to 115 kbit/s:

More images here.

Here is a screen shot of the Python dashboard for the FSK demodulator that Mark and Brady have developed. It gives us some visibility into the demod state and signal quality:

(View-Image on your browser to get a larger version)

The Eb/No plot shows the signal strength moving up and down over time, probably due to motion of our car. The Tone Frequency Estimate shows a solid lock on the two FSK frequencies. The centre of the Eye Diagram looks good in this snapshot.

Octave and C LDPC Library

There were some errors in received packets, which appear as stripes in the images:

On the next flight we plan to add a LDPC FEC code to protect against these errors and allow the system to operate at signal levels about 8dB lower (more than doubling our range).

Bill, VK5DSP, has developed a rate 0.8 LDPC code designed for the packet length of our SSTV software (2064 bits/packet including checksum). This runs with the CML library – C software designed to be called from Matlab via the MEX file interface. I previously showed how the CML library can be used in GNU Octave.

I like to develop modem algorithms in GNU Octave, then port to C for real time operation. So I have put some time into developing Octave/C software to simulate the LDPC encoded FSK modem in Octave, then easily port exactly the same LDPC code to C. For example the write_code_to_C_include_file() Octave function generates a C header file with the code matrices and test vectors. There are test functions that use an Octave encoder and C decoder and compare the results to an Octave decoder. It’s carefully tested and bit exact to 64-bit double precision! Still a work in progress, but has been checked into codec2-dev SVN:

ldpc_fsk_lib.m Library of Octave functions to support LDPC over FSK modems
test_ldpc_fsk_lib.m Test and demo functions for Octave and C library code
mpdecode_core.c CML MpDecode.c LDPC decoder functions re-factored
H2064_516_sparse.h Sample C include file that describes Bill’s rate 0.8 code
ldpc_enc.c Command line LDPC encoder
ldpc_dec.c Command line LDPC decoder
drs232_ldpc.c Command line SSTV deframer and LDPC decoder

This software might be useful for others who want to use LDPC codes in their Matlab/Octave work, then run them in real time in C. With the (2064,512) code, the decoder runs at about 500 kbit/s on one core of my old laptop. I would also like to explore the use of these powerful codes in my HF Digital Voice work.

SSTV Hardware and Software

Mark did a fine job putting the system together and building the payload hardware and its enclosure:

It uses a Raspberry Pi, with a FSK modulator we drive from the Pi’s serial port. The camera aperture is just visible at the front. Mark has published the software here. The tx side is handled by a single Python script. Here is the impressive command line used to start the rx side running:

#!/bin/bash
# 
#	Start RX using a rtlsdr. 
# 
python rx_gui.py & 
rtl_sdr -s 1000000 -f 441000000 -g 35 - | csdr convert_u8_f | csdr bandpass_fir_fft_cc 0.1 0.4 0.05 | csdr fractional_decimator_ff 1.08331 | csdr realpart_cf | csdr convert_f_s16 | ./fsk_demod 2XS 8 923096 115387 - - S 2> >(python fskdemodgui.py --wide) | ./drs232_ldpc - - | python rx_ssdv.py --partialupdate 16

We have piped together a bunch of command line utilities on the Linux command line. A hardware analogy is a bunch of electronic boards on a work bench connected via coaxial jumper leads. It works quite well and allows us to easily prototype SDR radio systems on Linux machines from a laptop to a RPi. However down the track we need to get it all “in one box” – a single, cross platform executable anyone can run.

Next Steps

We did some initial tests with the LDPC decoder today but hit integration issues that flat lined our CPU. Next steps will be to investigate these issues and try LDPC encoded SSTV on the next flight, which is currently scheduled for the end of October. We would love to have some help with this work, e.g. optimizing and testing the software. Please let us know if you would like to help!

Links
Mark’s blog post on the flight
AREG blog post detailing the entire flight, including set up and recovery
High Speed Balloon Data Link – Development and Testing of the SSTV over FSK system
All your Modems are belong to Us – The origin of the “ideal” FSK demod used for this work.
FreeDV 2400A – The C version of this modem developed by Brady and used for VHF Digital Voice
LDPC using Octave and CML – using the CML library LDPC decoder in GNU Octave

OQPSK Modem Simulation

A friend of mine is developing a commercial OQPSK modem and was a bit stuck. I’m not surprised as I’ve had problems with OQPSK in the past as well. He called to run a few ideas past me and I remembered I had developed a coherent GMSK modem simulation a few years ago. Turns out MSK and friends like GMSK can be interpreted as a form of OQPSK.

A few hours later I had a basic OQPSK modem simulation running. At that point we sat down for a bottle of Sparkling Shiraz and some curry to celebrate. The next morning, slightly hung over, I spent another day sorting out the diabolical phase and timing ambiguity issues to make sure it runs at all sorts of timing and phase offsets.

So oqsk.m is a reference implementation of an Offset QPSK (OQPSK) modem simulation, written in GNU Octave. It’s complete, including timing and phase offset estimation, and phase/timing ambiguity resolution. It handles phase, frequency, timing, and sample clock offsets. You could run it over real world channels.

It’s performance is bang on ideal for QPSK:

I thought it would be useful to publish this blog post as OQPSK modems are hard. I’ve had a few run-ins with these beasts over the years and had headaches every time. This business about the I and Q arms being half a symbol offset from each other makes phase synchronisation very hard and does your head in. Here is the Tx waveform, you can see the half symbol time offset in the instant where I and Q symbols change:

As this is unfiltered OQPSK, the Tx waveform is just the Tx symbols passed through a zero-order hold. That’s a fancy way of saying we keep the symbol values constant for M=4 samples then change them.
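In code terms a zero-order hold is just repeating each symbol value M times. Here is a tiny numpy sketch to illustrate the idea (it is not the oqsk.m code, and the variable names are mine):

import numpy as np

M = 4                                  # samples per symbol
symbols = np.array([1, -1, -1, 1])     # example symbol values for one arm
tx = np.repeat(symbols, M)             # hold each value for M samples
print(tx)                              # [ 1  1  1  1 -1 -1 -1 -1 -1 -1 -1 -1  1  1  1  1]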

There are very few complete reference implementations of high quality modems on the Internet. Providing them has become a mission of mine. By “complete” I mean pushing past the textbook definitions to include real world synchronisation. By “high quality” I mean tested against theoretical performance curves with different channel impairments. Or even tested at all. OQPSK is a bit obscure and it’s even harder to find any details of how to build a real world modem. Plenty of information on the basics, but not the nitty gritty details like synchronisation.

The PLL and timing loop simultaneously provides phase and timing estimation. I derived it from a similar algorithm used for the GMSK modem simulation. Unusually for me, the operation of the timing and phase PLL loop is still a bit of a mystery. I don’t quite fully understand it. Would welcome more explanation from any readers who are familiar with it. Parts of it I understand (and indeed I engineered) – the timing is estimated on blocks of samples using a non-linearity and DFT, and the PLL equations I worked through a few years ago. It’s also a bit old school, I’m more familiar with feed forward type estimators and not something this “analog”. Oh well, it works.

Here is the phase estimator PLL loop doing its thing. You can see the Digital Controlled Oscillator (DCO) phase tracking a small frequency offset in the lower subplot:

Phase and Timing Ambiguities

The phase/timing estimation works quite well (great scatter diagram and BER curve), but can sync up with some ambiguities. For example the PLL will lock on the actual phase offset plus integer multiples of 90 degrees. This is common with phase estimators for QPSK and it means your constellation has been rotated by some multiple of 90 degrees. I also discovered that combinations of phase and timing offsets can cause confusion. For example a 90 degree phase shift swaps I and Q. As the timing estimator can’t tell I from Q it might lock onto a sequence like …IQIQIQI… or …QIQIQIQ…. leading to lots of pain when you try to de-map the sequence back to bits.

So I spent a Thursday exploring these ambiguities. I ended up correlating the known test sequence with the I and Q arms separately, and worked out how to detect IQ swapping and the phase ambiguity. This was tough, but it’s now handling the different combinations of phase, frequency and timing offsets that I throw at it. In a real modem with unknown payload data a Unique Word (UW) of 10 or 20 bits at the start of each data frame could be used for ambiguity resolution.

Optional Extras

The modem lacks an initial frequency offset estimator, but the PLL works OK with small freq offsets like 0.1% of the symbol rate. It would be useful to add an outer loop to track these frequency offsets out.

As it uses feedback loops it’s not super fast to sync and is best suited to continuous rather than burst operation.

The timing recovery might need some work for your application, as it just uses the nearest whole sample. So for a small over-sample rate M=4, a timing offset of 2.7 samples will mean it chooses sample 3, which is a bit coarse, although given our BER results it appears unfiltered PSK isn’t too sensitive to timing errors. Here is the timing estimator tracking a sample clock offset of 100ppm, you can see the coarse quantisation to the nearest sample in the lower subplot:

For small M, a linear interpolator would help. If M is large, say 10 or 20, then using the nearest sample will probably be good enough.
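For reference, a fractional-delay read-out via linear interpolation is only a couple of lines. This is a generic Python sketch of the idea, not something lifted from oqsk.m:

import numpy as np

def sample_at(rx, t):
    # read rx at fractional index t by blending the two neighbouring samples
    i = int(np.floor(t))
    frac = t - i
    return (1 - frac) * rx[i] + frac * rx[i + 1]

So a timing estimate of 2.7 samples would blend samples 2 and 3 rather than rounding to the nearest whole sample.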

This modem is unfiltered PSK, so it has broad lobes in the transmit spectrum. Here is the Tx spectrum at Eb/No=4dB:

The transmit filter is just a “zero-order hold” and the receive filter an integrator. Raised cosine filtering could be added if you want a narrow bandwidth. This will probably make it more sensitive to timing errors.

Like everything with modems, test it by measuring the BER. Please.

Links

oqsk.m GNU Octave OQPSK modem simulation

GMSK Modem Simulation blog post that was used as a starting point for the OQPSK modem. With lots more reference links.

Linux.conf.au 2017 – Friday – Lightning Talks

Use #lcapapers to tell Linux.conf.au what you want to see in 2018

Michael Still and Michael Davies get the Rusty Wrench award

Karaoke – Jack Skinner

  • Talk with random slides

Martin Krafft

  • Matrix
  • End to end encrypted communication system
  • No entity owns your conversations
  • Bridge between walled gardens (eg IRC and Slack)
  • In Very late Beta, 450K user accounts
  • Run or Write your own servers or services or client

Cooked – Pete the Pirate

  • How to get into Sous Vide cooking
  • Create home kit
  • Beaglebone Black
  • Rice cooker, fish tank air pump.
  • Also use to germinate seeds
  • Also use this system to brew beer

Emoji Archeology 101 – Russell Keith-Magee

  • 1963 Happy face created
  • 🙂 invented
  • later 🙁 invented
  • Only those emotions imposed by the Unicode consortium can now be expressed

The NTPsec Project – Mark Atwood

  • Since 2014
  • Forked in 2015 from the parent ntp project
  • 1.0.0 release soon
  • Removed 73% of lines from classic
    • Removed commandline tools
    • Got rid of stuff for old OSes
    • Changed to POSIX and modern coding
    • removed experiments
  • Switch to git and bugzilla etc
  • Fun not painful
  • Welcoming community, not angry
  • ntpsec.org

National Computer Science Summer School – Katie Bell

  • Running for 22 years
  • Web stream, Embedded Stream
  • Using BBC Microbit
  • Lots of projects
  • Students in grade 10-11
  • Happens in January
  • Also a 5-week-long online programming competition, the NCSS Competition.

Blockchain – Rusty Russell

  • Blockchain
  • Blockchain
  • Blockchain

Go to Antarctica – Jucinter Richardson

  • Went Twice
  • Go by ship
  • No rain
  • Nice and cool
  • Join the government
  • Positions close
  • Go while it is still there

Cool and Awesome projects you should help with – Tim Ansell

  • Tomu Boards
  • MicroPython on FPGAs
  • Python Devicetree – needs a good library
  • QEMU for LiteX / MiSoC
  • NuttX for LiteX / MiSoC
  • QEMU for Tomu
  • Improving LiteX / MiSoc
  • Cypress FX2
  • Linux to LiteX / MiSoC
  • HDMI2USB
  • j.mp/timpro-lca2017

LoRa TAS – Paul Neumeyer

  • long range (2-3km urban 10km rural)
  • low power (battery ~5 years)
  • Unlicensed radio spectrum 915-928 MHz band (AUS)
  • LoRaWAN is an open standard
  • Ideal for IoT applications (sensing, preventative maintenance, smart)

Roan Kattatow

  • Different languages mix dots and commas and spaces etc to write numbers

ZeroSkip – Ron Gondwana

  • Crash safe embedded database
  • Not fast enough
  • Zeroskip
  • Append only database file
  • Switch files now and then
  • Repack old files together

PyCon Au – Richard Jones

  • Python Conference Australia
  • 7th in Melbourne in Aug 2016 – 650 people, 96 presentations
  • In Melb on 308 of August on 2016
  • 2017.pycon-au.org

Buying a Laptop built for Linux – Paul Wayper

  • Bought from System76
  • Designed for Linux

openQA – Aleksa Sarai

  • Life is too short for manual testing
  • Perl based framework that lets you emulate a user
  • Runs from console, emulates keyboard and mouse
  • Has screenshots
  • Used by SUSE and openSUSE and fedora
  • Fuzzy comparison, using regular expressions
  • open.qa

South Coast Track – Bec, Clinton and Richard

  • What I did in the Holidays
  • 6 day walk in southern tasmania
  • Lots of pretty photos


Linux.conf.au 2017 – Friday – Session 2

Continuously Delivering Security in the Cloud – Casey West

  • This is a talk about operation excellence
  • Why are system attacked? Because they exist
  • Resisting Change to Mitigate Risk – It’s a trap!
  • You have a choice
    • Going fast with unbounded risk
    • Going slow to mitigate risk
  • Advanced Persistent Threat (APT) – The breach that lasts for months
  • Successful attacks have
    • Time
    • Leaked or misused credentials
    • Misconfigured or unpatched software
  • Changing very little slowly helps all three of the above
  • A moving target is harder to hit
  • Cloud-native operability lets platforms move faster
    • Composable architecture (serverless, microservices)
    • Automated Processes (CD)
    • Collaborative Culture (DevOps)
    • Production Environment (Structured Platform)
  • The 3 Rs
    • Rotate
      • Rotate credentials every few minutes or hours
      • Credentials will leak, Humans are weak
      • “If a human being generates a password for you then you should reject it”
      • Computers should generate it, every few hours
    • Repave
      • Repave every server and application every few minutes/hours
      • Implies you have things like LBs that can handle servers adding and leaving
      • Container lifecycle
        • Built
        • Deploy
        • Run
        • Stop
        • Note: No “change” step
      • A Server that doesn’t exist isn’t being compromised
      • Regularly blow away running containers
      • Repave ≠ Patch
      • uptime <= 3600
    • Repair
      • Repair vulnerable runtime environments every few minutes or hours
      • What stuff will need repair?
        • Applications
        • Runtime Environments (eg rails)
        • Servers
        • Operating Systems
      • The Future of security is build pipelines
      • Try to put in credential rotation and upstream imports into your builds
  • Embracing Change to Mitigate Risk
  • Less of a Trap (in the cloud)


Linux.conf.au 2017 – Friday – Session 1

Adventures in laptop battery hacking -Matthew Chapman

  • Lenovo Thinkpad X230T
    • Bought Aug 2013
    • Original capacity 62 Wh – 5 hours at 12W
    • Capacity down to 1.9Wh – 10 minutes
  • 45N1079 replacement bought
    • DRM on laptop claimed it was not genuine and refused to recharge it.
  • Batteries talk SBS protocol to laptop
  • SMBus port and SMClock port
    • sniffed the port with logic analyser
    • Using I2C protocol
    • Looked at spec to see what it means
    • Challenge-response authentication
  • Options
    1. Throw Away
    2. Replace Cells
      • Easy to damage
      • Might not work
    3. Hack firmware on battery
      • Talk at DEFCON 19
      • But this is different model from that
      • Couldn’t work out how to get to firmware
    4. Added something in between
    5. Update the firmware on the machine
      • Embeded Controller (EC)
      • MEC1619
  • Looking though the firmware for Battery Authentication
    • Found routine that looked plausible
    • But other stuff was encrypted
  • EC Update process
    • BIOS update puts EC update in spare flash memory area
    • After that the BIOS grabs it and applies the update
  • Pulled apart the BIOS, found the EcFwUpdateDxe.efi routine that updates the EC
    • Found that stuff sent to the EC was still encrypted.
    • Decryption done by the flasher program
  • Flasher program
    • Encrypted itself (decrypted by the current firmware)
    • JTAG interface for flashing debug
  • JTAG
    • Physically difficult to get to
    • Luckily Russian Hackers have already grabbed a copy
  • The Decryption function in the Flasher program
    • Appears to be blowfish
    • Found the key (in expanded form) in the firmware
    • Enough for the encryption and decryption
  • Checksums
    • Outer checksum checked by BIOS
    • Post-decryption sum – checked by the flasher (bricks EC if bad)
    • Section checksums (also bricks)
  • Applying
    • noop the checks in code
    • noop another check that sometimes failed
    • Different error message
  • Found a second authentication process
    • noop out the 2nd challenge in the BIOS
  • Works!
  • Posted writeup, posted to hacker news
    • 1 million page views
  • Uploaded code to github
    • Other people doing stuff with the embedded controller
    • No longer works on latest laptops, EC firmware appears to be signed
  • Anything can be broken with physical access and significant determination

Election Software – Vanessa Teague

  • Australian Elections use a lot of software
    • Encoding and counting preferential votes
    • For voting in polling places
    • For voting over the internet
  • How do we know this software is correct
  • The Paper ballot box is engineered around a series of problems
    • In the past people bought their own voting paper
    • The Australian Ballot used in many places (eg NZ)
    • France uses a different method with envelopes and glass boxes
    • The US has had lots of problems and different ways
  • Four cases studies in Aus
  • vVote: Victoria
    • Vic state election 2014
    • 1121 votes for overseas Australians voting in Embassies etc
    • Based on Pret a Voter
    • You can verify that what you voted was what went through
    • Source code on bitbucket
    • Crypto signed, verified, open source, etc
    • Not going forward
    • Didn’t get the electoral commission’s input and buy-in.
    • A little hard to use
  • iVote: NSW and WA
    • 280,000 votes over Internet in 2015 NSW state election ( around 5-6% of total votes)
    • Vote on a device of your choosing
    • Vote encrypted and send over Internet
    • Get receipt number
    • Exports to a verification service. You can telephone them, give them your number and they will read back your votes
    • Website used 3rd-party analytics provider with export-grade crypto
      • Vulnerable to injection of content, votes could be read or changed
      • Fixed (after 66k votes cast)
    • NSW iVote really wasn’t verifiable
    • About 5000 people called into service and successfully verified
    • How many tried to verify but failed?
    • Commission said 1.7% of electors verified and none identified any anomalies with their vote (Mar 2015)
    • How many tried and failed? “in the 10s” (Oct 2015)
    • Parliamentary said how many failed? Seven or 5 (Aug 2016)
    • How many failed to get any vote? 627 (Aug 2016)
    • This is a failure rate of about 10%
    • It is believed it was around 200 unique (later in 2016)
  • Vote Counting software
  • Errors in NSW counting
    • NSW legislative voting redistributed votes are selected at random
    • No source code for this
    • Use same source code for lots of other elections
    • Re-ran some of the votes, found randomness could change results. Found one most likely cost somebody a seat, but not till 4 years later.
  • Recommended
    • Generate the random key publicly
    • Open up the source code
    • The electoral people didn’t want to do this.
  • In the 2016 local govt count we found 2 more bugs
    • One candidate should have won with 54% probability but didn’t
  • The Australian Senate Count
  • The AEC consistently refuses to reveal the source code
  • The Senate data is released; you can redo it yourself, so any bugs will become evident
  • What about digitising the ballots?
    • How would we know if that wasn’t working?
    • Only by auditing the paper evidence
  • Auditing
    • The Americans have a history of auditing the paper ballots
    • But the Australian vote is a lot more complex so everything not 100% yet
    • Stuff is online


Retiring from GovHack

It is with a little sadness, but a lot of pride that I announce my retirement from GovHack, at least retirement from the organising team :) It has been an incredible journey with a lot of amazing people along the way and I will continue to be its biggest fan and supporter. I look forward to actually competing in future GovHacks and just joining in the community a little more than is possible when you are running around organising things! I think GovHack has grown up and started to walk, so as any responsible parent, I want to give it space to grow and evolve with the incredible people at the helm, and the new people getting involved.

Just quickly, it might be worth reflecting on the history. The first “GovHack” event was a wonderfully run hackathon by John Allsopp and Web Directions as part of the Gov 2.0 Taskforce program in 2009. It was small with about 40 or so people, but extremely influential and groundbreaking in bringing government and community together in Australia, and I want to thank John for his work on this. You rock! I should also acknowledge the Gov 2.0 Taskforce for funding the initiative, Senator at the time Kate Lundy for participating and giving it some political imprimatur, and early public servants who took a risk to explore new models of openness and collaboration such as Aus Gov CTO John Sheridan. A lot of things came together to create an environment in which community and government could work together better.

Over the subsequent couple of years there were heaps of “apps” competitions run by government and industry. On the one hand it was great to see experimentation; however, unfortunately, several events did silly things like suing developers for copyright infringement, including NDAs for participation, or setting actual work for development rather than experimentation (which arguably amounts to just getting free labour). I could see the tech community, my people, starting to disengage and become entirely and understandably cynical of engaging with government. This would be a disastrous outcome because government needs geeks. The instincts, skills and energy of the tech community can help reinvent the future of government so I wanted to right this wrong.

In 2012 I pulled together a small group of awesome people. Some from that first GovHack event, some from BarCamp, some I just knew and we asked John if we could use the name (thank you again John!) and launched a voluntary, community run, annual and fun hackathon, by hackers for hackers (and if you are concerned by that term, please check out what a hacker is). We knew if we did something awesome, it would build the community up, encourage governments to open data, show off our awesome technical community, and provide a way to explore tricky problems in new and interesting ways. But we had to make it an awesome event for people to participate in.

It worked.

It has been wonderful to see GovHack grow from such humble origins to the behemoth it is today, whilst also staying true to the original purpose, and true to the community it serves. In 2016 (for which I was on maternity leave) there were over 3000 participants in 40 locations across two countries with active participation by Federal, State/Territory and Local Governments. There are always growing pains, but the integrity of the event and commitment to community continues to be a huge part of the success of the event.

In 2015 I stepped back from the lead role onto the general committee, and Geoff Mason did a brilliant job as Head Cat Herder! In 2016 I was on maternity leave and watched from a distance as the team and event continued to evolve and grow under the leadership of Richard Tubb. I feel now that it has its own momentum, strong leadership, an amazing community of volunteers and participation and can continue to blossom. This is a huge credit to all the people involved, to the dedicated national organisers over the years, to the local organisers across Australia and New Zealand, and of course, to all the community who have grown around it.

A few days ago, a woman came up to me at linux.conf.au and told me about how she had come to Australia not knowing anyone, and gone to GovHack after seeing it advertised at her university, and she made all her friends and relationships there and is so extremely happy. It made me teary, but also was a timely reminder. Our community is amazing. And initiatives like GovHack can be great enablers for our community, for new people to meet, build new communities, and be supported to rock. So we need to always remember that the projects are only as important as how much they help our community.

I continue to be one of GovHack’s biggest fans. I look forward to competing this year and seeing where current and future leadership takes the event and they have my full support and confidence. I will be looking for my next community startup after I finish writing my book (hopefully due mid year :)).

If you love GovHack and want to help, please volunteer for 2017, consider joining the leadership, or just come along for fun. If you don’t know what GovHack is, I’ll see you there!

Linux.conf.au 2017 – Friday Keynote – Robert Lefkowitz

Keeping Linux Great

  • Previous keynotes have posed questions; I’ll pose answers
  • What is the future of free and open source software? It has no future
  • FLOSS is yesterday’s gravy
    • Based on where the technology is today. How would FLOSS work with punch cards?
    • Other people have said similar things
    • Software, Linux and similar all going down in google trends
    • But “app” is going up
  • Lithification
    • Small pieces loosely joined
    • Linux used to be great because you could pipe stuff to little programs
    • That is what is happening to software
    • Example – share a page to another app in a mobile interface
    • All apps no longer need to send mail, they just have to talk to the mail app
  • So What should you do?
    • Vendor all your dependencies, just copy everyone else's code into your repo (and list their names if it is BSD) so you can ship everything in one blob (eg Android)
      • Components must be >5 million or >20 million LOC, only a handful of them
      • At the other end apps are smaller since they can depend on the OS or other Apps for lots of functionality so they don’t have to write it themselves.
      • Example node with thousands of dependencies
  • App Freedom
    • “Advanced programming environments conflate the runtime with the devtime” – Bret Victor
    • Open Source software rarely does that
    • “It turns out that Object Orientation didn’t work out, it is another legacy with are stuck with”
    • Having the source code is nice but it is not a requirement. Access to the runtime is what you want. You need to get it where people are using it.
  • Liberal Software
  • But not everybody wants to be a programmer
    • 75% comes from 6 generic web applications (collection, storage, reservation, etc)
  • A lot of functionality requires big data or huge amounts of machines or is centralised so open sourcing the software doesn’t do anything useful
  • If it was useful it could be patented, if it was not useful but literary then it was just copyright


January 19, 2017

Linux.conf.au 2017 – Thursday – Session 3

Open Source Accelerating Innovation – Allison Randal

  • Story of Stallman and the printer
  • People don't talk about the context of the story
    • Stallman was living in a free software domain, propriety software was creeping in
    • Software only became subject to copyright in early 80s
  • First age of software – 1940s – 1960s
    • Software was low value
    • Software was all free and open, given away
  • Precursor – The 1970s
  • Middle Age of Software – 1980s
    • Start of Windows, Mac, Oracle and other big software companies
    • Also start of GNU and BSD
    • Who Leads?
      • Proprietary software was seen as the innovator and always would be.
      • Free Software was seen to be always chasing after Windows
  • The 2000s
    • Free Software caught up with Proprietary
    • Used by big companies
    • “Open Source” name adopted
    • dot-com bubble had burst
    • Web 2.0
    • Economic necessity, everyone else getting it for free
    • Collaborative Process – no silver bullet but a better chance
    • Innovations lead by open source
  • Software Freedoms
    • About Control over our material environment
    • If you don't have other freedoms then you don't have a free society
  • Modern Age of Software
    • Accelerating
    • Companies: in 2010 42% used OS software, in 2015 78% using
    • Using Open Source is now just table stakes
    • Competitive edge for companies is participating in OS
    • Most participation pushes innovation even faster
  • Now What?
    • The New innovative companies
      • Amazing experiences
      • Augment Workers
      • Deliver cool stuff to customers
      • Use Network effects, Brand names
    • Businesses making contribution to society
    • Need to look at software that just doesn’t cover commercial use cases.
  • Next Phase
    • Diversity
    • Myopic monocultures – a risk because they miss the dangers
    • empowered to change the rule for the better

Surviving the Next 30 Years of Free Software – Karen M. Sandler

  • We’re not getting any younger
  • Software Relicensing
    • Need to get approval of authors to re-license
    • Has had to contact surviving spouse and get them to agree to re-license the code
    • One survivor wanted payment. Didn’t understand that code would be written out of the project.
  • There are surely other issues that we have not considered
  • Copyright Assignment is a way around it
    • But not everybody likes that.
  • Bequeathment doesn’t work
    • In some jurisdictions copyrights have to be assessed for their value before being transferred. Taxes could be owed
  • Who is your next of Kin?
    • They might not share your OS values or even think of them
  • Need perpetual care of copyrights
    • Debian Copyright Aggregation Projects
  • A Trust
    • Assign copyrights today, will give you back the rights you want but these expire on your death
    • Would be a registry for free software
    • Companies could participate to
  • Recognize the opportunity with age
    • A lot of people with a lot of spare time


Books referenced in my Organizational Change talk at LCA2017

All of these are available as Kindle books, but I’m sure you can get 3D copies too:

The Five Dysfunctions of a Team: A Leadership Fable by Patrick M. Lencioni
Leading Change by John P. Kotter
Who Says Elephants Can’t Dance? Louis V. Gerstner Jr.
Nonviolent Communication: A language of Life by Marshall B. Rosenberg and Arun Gandhi

Linux.conf.au 2017 – Thursday – Session 2

Content as a driver of change: then and now – Lana Brindley

  • Humans have always told stories
  • Cave Drawings
    • Australian Indigenous art is the oldest continuous art in the world
    • Stories of extinct mega-fauna
    • Stories of morals but sometimes also funny
  • Early Written Manuals
    • We remember the Eureka
  • Religious Leaders
    • Gutenberg
    • The Bible was the only book being reproduced, and it was restricted to the clergy
  • Fairy Tales
    • Charles Perrault versions.
    • The Brothers Grimm
    • Cautionary tales for adults
    • Very gruesome in the originals and many versions
    • Easiest and entertaining way for illiterate people to share moral stories
  • Master and Apprentice
    • Cheap Labour and Learn a Trade
  • Journals and Letters
    • In the early 19th century letter writing started happening
    • Recipe Books

 

  • Recently
  • Paper Manuals
    • Traditionally the proper method for technical docs
  • Whitepapers
    • Printed version will probably go away
    • Digital form may live on
  • Training Courses
    • Face to face training has its benefits
    • Online is where technical stuff is moving
  • Online Books
    • Online version of a printed book
    • Designed to be read from beginning to end, TOC, glossary, etc

 

  • Today
  • MOOCS
    • Quite common
  • Data Typing (DITA)
    • Break down the content into logical pieces
    • Store in a database
    • Mix on the fly
    • This sort of thing has been done since the 1960s and 1970s
  • Single Sourcing
    • Walked away from old idea of telling a story
    • Look at how people consumed and learnt difficult concepts
    • Deliver the same content many ways (beginner user, advanced, reference)
    • Chunks of information we can deliver however we like
  • User-Side Content Curation
    • Organised like a Wikipedia article
    • Imagine a site listing lots of cars for sale; the filters curate the content
  • What comes next?
    • Large datasets and let people filter
    • Power going from producers to consumers
    • Consumers want to filter themselves, not leave the producers to do this
  • References and further reading for talk

I am your user. Why do you hate me? Donna Benjamin

  • Free and open source software suffers from poor usability
  • We’ve struggled with open source software, heard devs talk about users with contempt
  • We define users by what they can’t do
  • How do I hate thee? Let me count the ways
    • Why were we being made to feel stupid when we used free software
    • Software is “made by me for me”, just for brainiac me
    • Lots of stories about stupid users. Should we be calling our users stupid?
    • We often talk/draw about users as faceless icons
    • Take pride in having prickly attitudes
  • Users
    • Whiney, entitled and demanding
    • We wouldn’t want some of them as friends
    • Not talk about those sort of users
  • Lets Chat about chat
    • Slack – used by OS projects, not the freest, proprietary
    • Better in many ways – less friction
  • Steep Learning curves
    • How long does it take to get to the level of (a) not hating it? (b) kicking ass?
    • How do we get people over that level as quickly as possible
    • They don’t want to be badass at using your tool. They want you to be badass at what using your tool allows them to do
    • Badass: Making Users Awesome – Kathy Sierra
  • Perfect is the enemy of the good
  • Understand who your users are; see them as people like your friends and colleagues; not faceless icons

 


Linux.conf.au 2017 – Thursday – Session 1

The Vulkan Graphics API, what it means for Linux – David Airlie

  • What is Vulkan
    • Not OpenGL++
    • From Scratch, Low Level, Open Graphics API
    • Stack
      • Loader (Mostly just picks the driver)
      • Layers (sometimes optional) – Separate from the drivers.
        • Validation
        • Application Bug fixing
        • Tracing
        • Default GPU selection
      • Drivers (ICDs)
    • Open source test suite (“throw it over the wall” open source)
  • Why a new 3D API
    • OpenGL is old, from 1992
    • OpenGL Design based on 1992 hardware model
    • State machine has grown a lot as hardware has changed
    • Lots of stuff in it that nobody uses anymore
    • Some ideas were not so good in retrospect
      • Single context makes multi-threading hard
      • Sharing context is not reliable
      • Orientated around windows, off-screen rendering is a bolt-on
      • GPU hardware has converged to just 3-5 vendors with similar hardware. Not as much need to hide things
    •  Vulkan moves a lot of stuff up to the application (or more likely the OS graphics layer like Unity)
    • Vulkan gives applications access to the queues if they want them.
    • Shading Language – SPIR-V
      • Binary formatted, separate from Vulkan, also used by OpenGL (see the header-parsing sketch after these notes)
      • Write shaders in HLSL or GLSL and they get converted to SPIR-V
    • Driver Development
      • Almost no error checking needed in the driver since it is done in the validation layer
      • Simpler to explicitly build command stream and then submit
    • Linux Support
      • Closed source Drivers
        • Nvidia
        • AMD (amdgpu-pro) – promised open source “real soon now … a year ago”
      • Open Source
        • Intel Linux (anv) –
          • on release day. 3.5 people over 8 months
          • SPIR -> NIR
          • Vulkan X11/Wayland WSI
          • anv Vulkan <– Core driver, not sharable
          • NIR -> i965 gen
          • ISL Library (image layout/tiling)
        • radv (for AMD GPUs)
          • Dave has been working on it since early July 2016 with one other guy
          • End of September Doom worked.
          • One Benchmark faster than AMD Driver
          • Valve hired someone to work on the driver.
          • Similar model to Intel anv driver.
          • Works on the few Vulkan games, working on SteamVR
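
As a concrete illustration of “binary formatted”: every SPIR-V module begins with a five-word header whose magic number is 0x07230203. The following is a minimal sketch (mine, not from the talk) that parses that header in Python; the file name is hypothetical and little-endian encoding is assumed.

    import struct

    def read_spirv_header(path):
        # SPIR-V header: magic, version, generator, bound, schema (five 32-bit words)
        with open(path, "rb") as f:
            magic, version, generator, bound, schema = struct.unpack("<5I", f.read(20))
        if magic != 0x07230203:
            raise ValueError("not a SPIR-V module (bad magic number)")
        major, minor = (version >> 16) & 0xFF, (version >> 8) & 0xFF
        return {"version": f"{major}.{minor}", "generator": generator, "bound": bound}

    print(read_spirv_header("shader.spv"))  # e.g. {'version': '1.0', ...}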

 

Building reliable Ceph clusters – Lars Marowsky-Brée

  • Ceph
    • Storage Project
    • Multiple front ends (S3, Swift, Block IO, iSCSI, CephFS)
    • Built on RADOS data store
    • Software Defined Storage
      • Commodity servers + Ceph + OS + management (eg Open Attic)
      • Makes sense at 4+ servers with 10 drives each
      • Metadata service
      • CRUSH algorithm to spread out the data, no centralised table (client goes directly to the data; see the librados sketch after these notes)
    • Access Methods
      • Use only what you need
      • RADOS Block devices   <– most stable
      • S3 (or Swift) via RadosGW  <– Mature
      • CephFS  <– new and pretty stable; avoid metadata-intensive workloads
    • Introducing Dependability
      • Availability
      • Reliability
        • Durability
      • Safety
      • Maintainability
    • Most outages are caused by Humans
    • At Scale everything fails
      • The Distributed systems are still vulnerable to correlated failures (eg same batch of hard drives)
      • Advantage of heterogeneity – everything breaks differently
      • Homogeneity is unsustainable
    • Failure is inevitable; suffering is optional
      • Prepare for downtime
      • Test if system meets your SLA when under load and when degraded and during recovery
    • How much available do you need?
      • An extra nine will double your price
  • A Bag full of suggestions
    • Embrace diversity
      • Auto recovery requires a >50% majority
      • 3 suppliers?
      • Mix arch and stuff between racks/pods and geography
      • Maybe you just go with manually aided recovery
    • Hardware Choices
      • Vendors have reference architectures
      • Hard to get vendors to mix; they don’t like that and there are fewer docs
      • Hardware certification reduces the risk
      • Small variations can have huge impact
        • A customer bought a network card and switch one step up from the reference architecture; 6 months of problems until a firmware bug was fixed
    • How many monitors do I need?
      • Not performance critical
      • 3 is usually enough as long as well distributed
      • Big envs maybe 5 or 7
      • Don’t converge these (as VMs) with other types of nodes
    • Storage
      • Avoid Desktop Disks and SSDs
    • Storage Node sizing
      • A single node should not be more than 10% of your capacity
      • You need spare capacity at least as big as a single node (to recover after a failure)
    • Durability
      • Erasure encoding gives more durability and a higher percentage of disk used
      • But recovery is a lot slower, higher overhead, etc
      • Different strokes for different pools
    • Network cards: different types, cross connect, use last year’s cards
    • Gateways: test they are okay under failure
    • Config drift: use config management (Puppet etc)
    • Monitoring
      • Perf as system ages
      • SSD degradation
    • Updates
      • Latest software is always the best
      • Usually good to update
      • Can do rolling upgrades
      • But still test a little on a staging server first
      • Always test on your system
        • Don’t trust metrics from vendors
        • Test updates
        • test your processes
        • Use OS to avoid vendor lock in
    • Disaster will strike
      • Have backups and test them and recoveries
    • Avoid Complexity
      • Be aggressive in what you test
      • Be conservative in what you deploy – only what you need
    • Q: Minimum size?
    • A: Not worth it if everything fits on a single server
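
The “client goes directly to the data” point above is what the librados bindings expose. Here is a minimal sketch (mine, not from the talk) using the python-rados bindings; the config path and pool name are assumptions, and the pool must already exist.

    import rados

    # Connect to the cluster using the standard config file (path is an assumption)
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    # Open an I/O context on a pool (pool name is an assumption; CRUSH decides
    # which OSDs store the object, and the client talks to them directly)
    ioctx = cluster.open_ioctx("mypool")
    try:
        ioctx.write_full("greeting", b"hello ceph")
        print(ioctx.read("greeting"))  # b'hello ceph'
    finally:
        ioctx.close()
        cluster.shutdown()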

 


Linux.conf.au 2017 – Thursday Keynote – Nadia Eghbal

Consider the Maintainer

  • Is it alright to compromise or even deliberately ignore the happiness of maintainers so that we can enjoy free software?
  • Huge growth in usage and downloads of Open Source software
  • 2/3s of popular open source projects on github are maintained by one or two people
  • Why so few?
    • Style has changed, lots of smaller projects
    • Being a maintainer isn’t glamorous or fun most of the time
    • 1% are creating the content that 99% of people consume
  • “Rapid evolution [..] poses the risk of introducing errors faster than people can fix them”
  • Consumption scales for most things, not for open source because it creates more work for the maintainer
  • “~80% of contributors on github don’t know how to solve a merge conflict”
  • People see themselves as users of OS software, not potential maintainers – examples of rants by users against maintainers and the software
  • “Need maintainers, not contributors”
  • “Helping people over their first pull request, not helping them triage issues”
  • Why are we not talking about this?
  • Lets take a trip back in History
    • Originally Stallman said Free software was about freedom, not popularity. eg “as is” disclaimer of warranty
    • Some people create software sometimes.
    • Debian Social Contract, 4 freedoms, etc places [OS / Free] software and users first, maintainers often not mentioned.
    • Orientated around the user not the producer
  • Four Freedoms of OS producers
    • Decide to participate
    • Say no to contributions or requests
    • Define the priorities and policies of the project
    • Step down or move on
  • Other Issues maintainers need help with
    • Community best practices
    • Project analytics
    • Tools and bots for maintainers (especially for human coordination)
    • Conveying support status ( for contributors, not just user support )
    • Finding funding
  • People have talked about this before, mostly they concentrated on a few big projects like Linux or Apache (and not much written since 2005)
    • Doesn’t reflect the ecosystem today, thousands of small projects, github, social media, etc
    • Open source today is not what open source was 20 years ago
  • Q&A
    • Q: What do you see as responsibly and potential for orgs like Github?
    • A: Joined github to help with this. Hopes that github can help with tools.
    • Q: How can we get metrics on real projects, not just playthings on github
    • A: People are using stars on github, which is useless. One idea is to look at dependencies. libraries.io is looking. Hope for better metrics.
    • Q: Is it all agile programmings fault?
    • A: Possibly, people these days are learning to code but the average level is lower and they don’t know what is under the hood. Pretty good in general though. “Under the hood it is not just a hammer, it is a human being”
    • Q: Your background is in funding, how does the transition work when a project or some people on it start getting money?
    • A: It is complicated, need some guidelines. Some projects have made it work well ( “jsmobile” I think she said ). Need best practice and to keep things transparent
    • Q: How to we get out to the public (even programmers/tech people at tech companies) what OS is really like these days?
    • A: Example of Rust. Maybe some outreach and general material
    • Q: Is Patreon or other crowd-funding a good way to fund projects?
    • A: Needs a good target, requires a huge following which is hard to people who are not good at marketing. Better for one-time vs recurring. Hard to decide exactly what money should be used for

 


January 18, 2017

Linux.conf.au 2017 – Wednesday – Session 3

Handle Conflict, Like a Boss! – Deb Nicholson

  • Conflict is natural
  • “When they had no outfit for their conflict they turned into Reavers and ate people and stuff”
  • People get caught up in their area not the overall goal for their organisation
  • People associate with a role, don’t like when it gets changed or eliminated
  • Need to go deep, people don’t actually tell you the problem straight away
  • If things get too bad, then go to another project
  • Identify the causes of conflict
  • 3 Styles of handling conflict
    • Avoidance
      • Can let things fester
      • They come across as unconnected
      • Looks like support for the status quo
    • Accommodation
      • Compromise on everything
      • Looks like not taking seriously
    • Assertion
      • Going to wear down everyone else
      • People won’t tell you when things are wrong
  • Going a little deeper
    • People don’t understand history (and why things are weird)
      • go to historical motivations and get buy-in for the strategy that reflects the new reality
    • People are acting to motivations you don’t see
      • Ask about the other persons motivations
    • Fear (often of change)
      • “What is the worst that could happen?”
    • Right Place, wrong time
      • Stuff is going to the wrong person or group
    • Help everyone get perspective
      • Don’t do the same forum, method, people all the time if it always has conflict.
  • What do you do with the Info
    • Put yourself in other persons shoes
    • Find alignment
    • A Word about who is doing this conflict resolution
      • Shouldn’t be just a single person/role
      • Or only women
      • Should be everyone/anyone
      • But if it is within a big org then maybe hire someone
  • Planning for future conflicts
    • Assuming the best
    • No ad hominem (hard to go back)
  • Conflict resolution between groups
    • What could we accomplish if we worked together
    • Doesn’t look good to outsiders
    • More Face-to-Face between projects (towards a common goal)

 

Open Compute Project Down Under – Andrew Ruthven

  • What is Open Compute
    • Vanity-free computing (remove the pretty bits)
    • Stripped down – remove what we don’t need: no video, minimal extra ports
    • Efficient and easy
      • Maintenance, Air flow, Electricity
    • Came out of Facebook, now a foundation
    • 1/10th the number of techs/server
  • Projects and Technologies
    • 9 main areas, over 4000 people working on it.
    • Design and Specs
  • Recent Hardware
    • Some comes in 19″ racks
    • HPE, Microsoft Project Olympus
  • In Aus / NZ
    • Telstra – 2 rack of OCP Decathleon, Open Networking using Hyper Scalers
    • Rackspace
    • Large Gaming site
    • Catalyst IT
  • Why OCP for Catalyst
    • Very Open source software orientated company
    • Have a Cloud Operation
    • Looking at for a while
    • Finally ordered first unit in 2016 (Winterfell)
    • Cumulus Linux switches from Penguin Computing, work off 12 volt in Open Rack
  • Issues for Aus / Nz
    • Very small scale, sometimes too small for vendors
    • Supply chain hard, ended up using an existing integrator
    • Hyper Scalers in Aus, will ship to NZ
    • A number of companies will ship to NZ
  • Lessons
    • Scale is an issue for failures as well as supply
    • Have >1 power shelf
    • Have at least 2 racks with 4 power shelves
    • Too small for vendors to get certification
    • Trust in new hardware
  • Your Own deployment
    • Green field DC
      • Use DC Designs
      • Allow for 48U racks (2.5 metres tall)
      • 2x or 4x 3-phase circuits per rack
    • Existing DCs
      • Consider modifications
      • 19″ servers options
      • 48OU Open rack if you have enough height
      • 22OU if you don’t have enough height
      • Carefully check the specs
    • Open Networking
      • Run collectd etc directly on your switch
    • Supply Chain
    • Community Support
      • OCP has a Aus/NZ Mailing list (ocp-anz)
      • Discussion on what is a priority across Aus and NZ


Linux.conf.au 2017 – Wednesday – Session 2

400,000 ephemeral containers: testing entire ecosystems with Docker – Daniel Axtens

  • A pretty interesting talk. It was largely a demo so I didn’t grab many notes

Community Building Beyond the Black Stump – Josh Simmons

  • How to build communities when you don’t live in a big city
  • Whats in a meetup?
  • Santa Rosa County, north of San Francisco
    • Not easy to get to SF
    • SF meetups not always relevant
  • After meeting with one other person, created “North Bay web Professionals”, minimal existing groups
  • Multidisciplinary community worked better
    • Designers, Marketers, Web Devs, writers, etc
    • Hired each other
    • Seemed to work better, fewer toxic dynamics
    • Safe space for beginners
  • 23 People at first event (worked hard to tell people)
    • Told everyone that we knew even if not interested
    • Contacted the competitors
    • Contacting firms, schools
    • Co-working spaces (formal or de-facto, like cafes)
    • Other meetup groups, even in unrelated areas.
  • Adapting to the needs of the community
    • You might have a vision
    • But you must adapt to who turns up and what they want/need
  • First meeting
    • Asked people to bring food
    • Fluffy start time so could greet people and mingle
    • Went round room and got people to introduce themselves
      • Intro ended up being a thing they always did
      • Helped people remember names
      • Got everyone to say a little
      • put people in a social mindset
    • Framework for events decided
    • Decided on next meeting date, some prep
    • Ended up going late
      • Format became. Social -> talk -> Social on each night.
  • Tools
    • Used facebook and meetup
    • 1/3 of people came just from meetup promoting automatically
    • Go where people already are
  • Renamed from “North Bay Web professions” to “North Bay Web and Interactive Media professionals”
  • “Ask a person, not a search engine”
  • Hosted over 169 events – Core was the monthly meeting
    • Tried to keep the topics a little broad
    • Often the talk was narrow but compensated with a broad Q&A afterwards
  • Thinking of people as “members” not “attendees” , have to work at getting them come back
  • Also hosted
    • Lunches, rotated all around the region so eventually near everywhere, Casual
    • Unconferences
    • Topical meetups
    • Charity Hackathon, teamed up with students and non-profits to do website for non-profit. Student was an apprentice.
    • Hosted Ag+Tech mixers with local farmers groups
    • Helped local cities put out tech RFPs
  • Q: Success measures? A: Survey of members; things like job referrals, what they have learnt

 

 


Linux.conf.au 2017 – Wednesday – Session 1

Servo Architecture: Safety and Performance – Jack Moffitt

  • History
    • 1994 Netscape Navigator
    • 2002 Mozilla Release
    • 2008 multi-core CPU stuff not making firefox faster
    • 2016 CPUs now have on-chip GPUs
    • Very hard to write multi-threaded C++ to allow mozilla to take advantage of many cores
  • How to make Servo Faster?
  • Constellation
    • In the past – Monolithic browser engines
      • Single browser engine handling multiple tabs
      • Two processes – Pool Content processes vs Chrome process
        • If one process dies on a page it doesn’t take out the whole browser
      • Sandboxing lets web page processes have fewer privs
    • Threads
      • Less overhead than whole processes
      • Thread per page
      • More responsive
      • Sandboxing
      • More robust to failure
    • Is this the best we can do?
      • Run Javascript and layout simultaneously
      • Pipeline splitting them up
      • Child pipelines for inner iframes (eg ads)
  • Constellation
    • Rust can fail better
    • Most failures stop at thread boundaries
    • Still do sandboxing and privileges
    • Option to still have some tabs in multiple processes
  • Webrender
    • Using the GPU
      • Frees up main CPU
      • Are VERY fast at some stuff
      • Easiest place to start is rendering
    • Don’t browsers already use the GPU?
      • Only in a limited way for compositing
    • Key ideas
      • Retained mode, not immediate mode (put things in optimal order first)
      • Designed to render CSS content (CSS is actually pretty simple)
      • Draw the whole frame every frame (things are fast enough, simpler to not try to optimise)
    • Pipeline
      • Chop screen into 256×256 tiles
      • Tile assignment
      • Create a big tree
      • merge and assign render targets
      • create and execute batches
    • Text
      • Rasterize on CPU and upload glyphs to GPU
      • Paste and shadow using the GPU
  • Project Quantum
    • Taking technology we made in Servo and putting it in Gecko
  • Research in progress
    • Pathfinder – GPU font rasterizer – Now faster than everything else
    • Magic DOM
      • Wins in JS/DOM integration
      • Fusing reflectors and DOM objects
      • Self hosted JS
    • External collaborations: ML, power management, WebBluetooth, etc
  • Get involved
    • Test nightlies
    • Curated bugs for new contributors
    • servo.org

In Case of Emergency: Break Glass – BCP, DRP, & Digital Legacy – David Bell

  • Definitions
    • BCP = Business continuity Plan
    • A process to prevent and recover from interruptions to the business
    • BIP = Business interruption plan
    • BRP = Business recovery plan
    • RPO = Recovery point objective, targeted recovery point (when you last backed up)
    • RTO = Recovery time objective
  • Why?
    • Because things will go wrong
    • Because things should not go even more wrong
  • Create your BCP
    • Brainstorm
    • Identify events that may interrupt: loss of access to physical site, loss of staff
    • Backups
      • 3 copies
      • 2 different media/formats
      • 1 offsite and online
      • Check how long it will take to download or fetch
    • Test
    • Who has the Authority
    • Communication chains, phone trees, contact details
    • Practice Early, Practice often
      • Real-world scenarios
      • Measure, measure, measure
      • Record your results
      • Convert your results into action items
      • Have different people on the tests
    • Each Biz Unit or team should have their own BCP
    • Recovery can be expensive, make sure you know what your insurance will cover
  • Breaking the Glass
    • Documentation is the Key
    • Secure credentials super important
    • Shamir secret sharing – a threshold number of people is needed to re-create the secret (see the sketch after these notes)
  • Digital Legacy
    • Do the same for your personal data
    • Document
      • Credentials
      • Services
        • What uses them
        • Billing arrangements
        • Credentials
      • What are your wishes for the above.
    • Talk to your family and friends
    • Backups
    • Document backups and backup your documentation
    • Secret sharing, offer to do the same for your friends
  • Other / Questions
    • Think about 2-factor devices
    • Google and some others companies can setup “Next of Kin” contacts
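
The Shamir secret sharing idea above splits a secret so that any threshold number of shares reconstructs it, while fewer reveal nothing. Below is a toy sketch (mine, not from the talk) for illustration only; real deployments should use an audited implementation such as the ssss tool. Requires Python 3.8+ for the modular inverse via pow().

    import random

    PRIME = 2**127 - 1  # a Mersenne prime, large enough for small secrets

    def make_shares(secret, threshold, count):
        # Random polynomial of degree threshold-1 with the secret as constant term
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, count + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret)
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    shares = make_shares(123456789, threshold=3, count=5)
    print(recover(shares[:3]))  # 123456789 -- any 3 of the 5 shares work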

 

 


Linux.conf.au 2017 – Wednesday Keynote – Dan Callahan

Designing for failure: On the decommissioning of Persona

  • Worked for Mozilla on Persona
  • Persona did authentication on the web
    • You would go to a website
    • Type in your email address
    • Redirects via login page by your email provider
    • You login and redirect back
  • Started centralised, designed to be uncentralised as it is taken up
  • Some sites were only offering login via social media
    • Some didn’t offer traditional logins for emails or local usernames
    • Imposes 3rd party between you and your user.
    • Those 3rd parties have their own rules, eg real name requirements
  • Persona Failed
    • Traditional logins now more common
  • Cave Diving
    • Equipment and procedures designed to let you still survive if something fails
    • Training review deaths and determines how can be prevented
    • “5 rules of accident analysis” for cave diving
  • Three weeks ago switched off Persona
    • Encourage others to share mistakes

 

  • Just having a free license is not enough to succeed
  • Had a built in centralisation point
    • Protocol designed so browser could eventually natively implement but initially login.persona.com was using it.
    • Relay between provider and website went via Mozilla until browser natively implemented
    • No ability to fork the project
  • Bits rot more quickly online
    • Stuff that is online must be continually maintained (especially security)
    • Need a way to have software maintained without experts
  • Complexity Limits agency
    • Limits who can run project at all
    • Lots of work for those people who can run it
  • A free license doesn’t further my freedom if we can’t run the software

 

  • Prolong Your Project’s Life
  • Bad ideas
    • We used popups and people reflexively closed them
    • API wasn’t great
  • Didn’t measure the right thing
    • Is persona product or infrastructure?
    • Treated like a product, not a good fit
  • Explicitly define and communicate your scope
    • “Solves authentication” or “Authenticate email addresses”
    • Broke some sites
    • Got used by FireFoxOS which was not a good fit
  • Ruthlessly oppose complexity
    • Trying to do too much meant it was overly complex
    • Complexity is hard to maintain, review and grow
    • Hard for newbies to join
    • If it is complex then it is hard to even test that it is working as expected
    • Focus and simplify
    • Almost no outside contributors, especially bad when mozilla dropped it.

 

  • Plan for Your Projects Failure
  • “Sometimes that [bus failure] is just a commuter bus that picks up that person and takes them to another job”
  • If you know you are dead say it
    • 3 years from when we pulled people off the project until it was officially killed
    • Might work for local software but services cost money to run
    • The sooner you admit you are dead the sooner people can plan for your departure
  • Ensure your users can recover without your involvement
    • Hard to do when you think your project is going to save the world
    • Example firefox sync has a copy of the data locally so even if it dies user will survive
  • Use standard data formats
    • eg OPML for RSS providers (see the sketch after these notes)
  • Minimise the harm caused when your project goes away
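
The OPML point above is the kind of escape hatch meant: the subscription list is written in a standard format any other reader can import. A minimal sketch (mine, not from the talk) using Python’s standard library; the feed entries are made up.

    import xml.etree.ElementTree as ET

    # Hypothetical list of feeds a service might let users take with them
    feeds = [("Example blog", "https://example.com/feed.xml")]

    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = "My subscriptions"
    body = ET.SubElement(opml, "body")
    for title, url in feeds:
        ET.SubElement(body, "outline", type="rss", text=title, xmlUrl=url)

    # Any feed reader that understands OPML can import this file
    ET.ElementTree(opml).write("subscriptions.opml", encoding="utf-8", xml_declaration=True)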

 


January 17, 2017

Linux.conf.au 2017 – Tuesday – Session 3

The Internet of Scary Things – tips to deploy and manage IoT safely Christopher Biggs

  • What you need to know about the Toaster Apocalypse
  • Late 2016 brought to prominence when major sites hit by DDOS from compromised devices
  • Risks around grabbing of images
    • Targeted intrusion
    • Indiscriminate harvesting of images
    • Drive-by pervs
    • State actors
  • Unauthorized control
    • Hit traffic lights, doorbells
  • Takeover of entire devices
    • Used for DDOS
    • Demanding payment for the owner to get control of them back.
  • “The firewall doesn’t divide the scary Internet from the safe LAN, the monsters are in the room”

 

  • Poor Security
    • Mostly just laziness and bad practices
    • Hard for end-users to configure (especially non-techies)
    • Similar to how servers, Internet software and PCs were 20 years ago
  • Low Interop
    • Everyone uses own cloud services
    • Only just started getting common protocols and standards
  • Limited Maint
    • No support, no updates, no patches
  • Security is Hard
  • Laziness
    • Threat surface is too large
    • Telnet is too easy for devs
    • Most things don’t need full Linux installs
  • No incentives
    • Owner might not even notice if compromised
    • No incentive for vendors to make them better

 

  • Examples
    • Cameras with telnet open, default passwords (that cannot be changed)
    • exe to access
    • Send UDP to enable a telnet port
    • Bad Mobile apps

 

  • Selecting a device
    • Accept you will get bad ones, will have to return
    • Scan your own network, you might not know something is even wifi enabled
    • Port scan devices
    • Stick with the “Big 3” frameworks (Apple, Google, Amazon)
    • Make sure it supports open protocols (indicates a serious vendor)
    • Check if open source firmware or clients exist
    • Check for reviews (especially negative ones) or teardowns

 

  • Defensive arch
    • Put it on its own network
    • Turn off or block uPNP opening firewall holes
    • Plan for breaches
      • Firewall rules, rate limited, recheck now and then
    • BYO cloud (don’t use the vendor cloud)
      • HomeBridge
      • Node-RED (Alexa)
      • Zoneminder, Motion for cameras
  • Advice for devs
    • Apple HomeKit (or at least support for Homebridge for less commercial)
    • Amazon Alexa and AWS IoT
      • Protocols open but look nice
    • UCF uPnP and SNP profiles
      • Device discovery and self discovery
      • Reference implementations available
    • NoApp setup as an alternative
      • Have an API
    • Support MQTT (see the sketch after these notes)
    • Long Term support
      • Put copy of docs in device
      • Decide up front what you will support and for how long, and be up front about it
    • Limit what you put on the device
      • Don’t just ship a Unix PC
      • Take out debug stuff when you ship

 

  • Trends
    • Standards
      • BITAG
      • Open Connectivity Foundation
      • Regulation?
    • Google Internet of things
    • Apple HomeKit
    • Amazon Alexa
      • Worry about privacy
    • Open Connectivity Foundation – IoTivity
    • Resin.io
      • Open source etc
      • Linux and Docker based
    • Consumer IDS – FingBox
  • Missing
    • Network access policy framework shipped
    • Initial network authentication
    • Vulnerability alerting
    • Patch distribution
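
“Support MQTT” above refers to the lightweight publish/subscribe protocol commonly used for IoT telemetry. A minimal sketch (mine, not from the talk) using the paho-mqtt client library; the broker hostname and topics are assumptions.

    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        # Called for every message on a subscribed topic
        print(msg.topic, msg.payload.decode())

    client = mqtt.Client()                      # paho-mqtt 1.x style constructor
    client.on_message = on_message
    client.connect("broker.example.com", 1883)  # hostname/port are assumptions
    client.subscribe("home/sensors/#")
    client.publish("home/sensors/livingroom/temperature", "21.5")
    client.loop_forever()                       # blocks, dispatching callbacks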

Rage Against the Ghost in the Machine – Lilly Ryan

  • What is a Ghost?
    • The split between the mind and the body (dualism)
    • The thing that makes you you, separate to the meat of your body
  • Privacy
    • Privacy for information, not physical privacy
    • The mind has been a private place
    • eg “you might have thought about robbing a bank”
    • The thoughts we express are what is public.
    • Always been private since we never had technology to get in there
    • Companies and governments can look into your mind via things like your google queries
    • We can emulate the inner person not just the outer expression
  • How to Summon a Ghost
    • Digital re-creation of a person by a bot or another machine
    • Take information that you post online
    • Likes on facebook, length of time between clicks
  • Ecto-meta-data
    • Take meta data and create something like you that interacts
  • The Smartphone
    • Collects meta-data that doesn’t get posted publicly
    • deleted documents
    • editing of stuff
    • search history
    • pattern of jumping between apps
  • The Public meta-data that you don’t explicitly publish
    • In future you could be emulated from the sum of your public behaviour
  • What do we do with a ghost?
    • Create chatbots or online profiles that emulate a person
    • Talk to a Ghost of yourself
    • Put a Ghost to work. The 3rd party owns the data
    • Customer service bot, PA
    • Chris Hemsworth could be your PA
    • Money will go to facebook or Google
  • Less legal stuff
    • Information can leak from big companies
  • How to Banish a Ghost
    • Option of donating your data to the future
    • currently no regulation or code of conduct
    • Restrict data you send out
      • Don’t use the Internet
      • Be anonymous
      • Hard to do when cookies match you across many sites
        • You can install a cookie blocker
    • Which networks you connect to
      • eg list of Wifi networks match you with places and people
      • Mobile network streams location data
      • location data reveals not just where you go but what stores, houses or people you are near
      • Turn off wifi, bluetooth or data when you are not using them. Use VPNs
    • Law
      • Lobby and push politicians
      • Push back on companies
    • For technologists
      • Collect the minimum, not the maximum

FreeIPA project update (turbo talk) – Fraser Tweedale

  • Central Identity manager
  • LDAP + Kerberos, CA, DNS, admin tools, client. Hooks into AD
  • Manage via web or client
  • Client SSSD. Used by various distros
  • What is in the next release
    • Sub-CAs
    • Can require 2FA for important services
    • KDC Proxy
    • Network bound encryption. ie Needs to talk to local server to unencrypt a disk
    • User Session recording

 

Minimum viable magic

Politely socially engineering IRL using sneaky magician techniques – Alexander Hogue

  • Putting things up your sleeve is actually hard
  • Minimum viable magic
  • Misdirect the eyes
  • Eyes only move in a straight line
  • Exploit pattern recognition
  • Exploit the spot light
  • Your attention is a resource


Linux.conf.au 2017 – Tuesday – Session 2

Stephen King’s practical advice for tech writers – Rikki Endsley

  • Example What and Whys
    • Blog post, press release, talk to managers, tell devs the process
    • 3 types of readers: Lay, Managerial, Experts
  • Resources:
    • Press: The care and Feeding of the Press – Esther Schindler
    • Documentation: RTFM? How to write a manual worth reading

 

  • “On Writing: A memoir of the craft” by Stephen King
  • Good writing requires reading
    • You need to read what others in your area or topic or competition are writing
  • Be clear on Expectations
    • See examples
    • Howto Articles by others
    • Writing an Excellent Post-Event Wrap Up report by Leslie Hawthorn
  • Writing for the Expert Audience
    • New Process for acceptance of new modules in Extras – Greg DeKoenigsberg (Ansible)
    • vs Ansible Extras Modules + You – Robyn Bergeron
      • Defines audience in the intro

 

  • Invite the reader in
  • Opening Line should Invite the reader to begin the story
  • Put in an explicit outline at the start

 

  • Tell a story
  • That is the object of the exercise
  • Don’t do other stuff

 

  • Leave out the boring parts
  • Just provide links to the details
  • But sometimes, if people are not experts, you need to provide more detail

 

  • Sample outline
    • Intro (invite reader in)
    • Brief background
    • Share the news (explain solution)
    • Conclude (include important dates)

 

  • Sample Outline: Technical articles
  • Include a “get technical” section after the news.
  • Too much stuff to copy all down, see slides

 

  • To edit is divine
  • Come back and look at it afterwards
  • Get somebody who will be honest to do this

 

  • Write for OpenSource.com
  • opensource.com/story

 

  • Q: How do you deal with skimmers?   A: Structure, headers
  • Q: Pet peeves?  A: Strong intro, people using “very” or “some”, leaving out important stuff

 

 


Linux.conf.au 2017 – Tuesday Session 1

Fishbowl discussion – GPL compliance Karen M. Sandler

  • Fishbowl format
    • 5 seats at front of the room, 4 must be occupied
    • If person has something to say they come up and sit in spare chair, then one existing person must sit down.
  • Topics
    • Conflicts of Law
    • Mixing licences
    • Implied warranty
    • Corporate Procedures and application
    • Get knowledge of free licences into the law school curriculum
  • “Being the Open Source guy at Oracle has always been fun”
  • “Our large company has spent 2000 hours with a young company trying to fix things up because their license is not GPL compliant”
  • BlackDuck is a commercial company that will review your company’s code looking for GPL violations. Some others do too
    • “Not a perfect magical tool by any stretch”
    • Fossology is alternative open tool
    • Whole business model around license compliance, mixed in with security
    • Some of these companies are Kinda Ambulance chasers
    • “Don’t let those companies tell you how to run your business”
    • “Compliance industry complex” , “Compliance racket”
  • At my employer we have a tool that just greps for a “GPL” license in code, better than nothing (see the sketch after these notes).
  • Lots of fear in this area over Open-source compliance lawsuits
    • Disagreements in community if this should be a good idea
    • More, Less, None?
    • “As a Lawyer I think there should definitely be more lawsuits”
    • “A lot of large organisations will ignore anything less than [a lawsuit] “
    • “Even today I deal with organisations who reference the SCO period and fear widespread lawsuits”
  • Have Lawsuits chilled adoption?
    • Yes
    • Chilled adoption of free software vs GPL software
    • “Android has a policy of no GPL in userspace” , “they would replace the kernel if they could”
    • “Busybox lawsuits were used as a club to get specs so the kernel devs could create drivers” , this is not really applicable outside the kernel
    • “My goal in doing enforcement was to ensure somebody with a busybox device could compile it”
    • “Lawyers hate any license that prevents them getting future work”
    • “The amount of GPL violations skyrocketed with embedded devices shipping with Linux and GPL software”
  • People are working on a freer (eg “Not GPL”) embedded stack to replace Android userspace: Toybox, Toolbox. No kernel replacement yet.
  • Employees and Compliance
    • A large company helping out with charities’ systems was unable to put AGPL software from that company on their laptops
    • “Contributing software upstream makes you look good and makes your company look good” , Encourages others and you can use their contributions
    • Work you do on your company volunteer days does not fall under the software assignment policy etc, but employees still can’t install random stuff on their machines.
  • Websites often are not GPL compliant: heavy restrictions, users giving up their licenses.
  • “Send your lawyers a video of another person in a suit talking about that topic”
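
The “just grep for GPL” tool mentioned above can be approximated in a few lines. A naive sketch (mine, not from the discussion), nothing like a real scanner such as FOSSology; the directory name and marker strings are assumptions.

    import os

    LICENSE_MARKERS = ("GNU General Public License", "GPL-2.0", "GPL-3.0")

    def scan(tree):
        # Naively flag files that mention a GPL licence string
        for root, _dirs, files in os.walk(tree):
            for name in files:
                path = os.path.join(root, name)
                try:
                    with open(path, errors="ignore") as f:
                        text = f.read()
                except OSError:
                    continue
                if any(marker in text for marker in LICENSE_MARKERS):
                    print(path)

    scan("vendor/")  # directory name is an assumption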

U 2 can U2F Rob N ★

  • Existing devices are not terrible and are better than nothing, but usability sucks
  • Universal Two-Factor
    • Open Standard by FIDO alliance
    • USB, NFC, Bluetooth
    • Multiple server and host implementations
    • One device multi-sites
    • Cloning protection
  • Interesting Examples
  • User experience: Login, press the button twice.
  • Under the hood a lot more complicated
    • Challenge from site, device must sign challenge (including the website URL, to prevent a phishing site proxying)
    • Multiple keypairs for each website on device
    • Has a login counter on the device included in the signature, so the server can panic when the counter gets out of sync from a cloned device (see the sketch after these notes)
  • Attestation Certificate
    • Shared across model or production batch
  • Browserland
    • Javascript
    • Chrome-based browser support is good
    • Firefox via extension (Native “real soon now”)
    • Mobile works on Android + Chrome + Google Authenticator
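
A rough sketch (mine, not Rob’s) of the challenge-signing and counter check described above, using the pyca/cryptography library’s ECDSA primitives; real U2F wraps this in the FIDO message formats, and the origin and challenge values here are assumptions.

    import struct
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # "Device" side: a per-site keypair plus a counter that only ever increases
    device_key = ec.generate_private_key(ec.SECP256R1())
    device_counter = 0

    def device_sign(challenge, origin):
        global device_counter
        device_counter += 1
        # Signing over the origin is what stops a phishing site proxying the challenge
        data = origin.encode() + struct.pack(">I", device_counter) + challenge
        return device_counter, device_key.sign(data, ec.ECDSA(hashes.SHA256()))

    # "Server" side: verify the signature and that the counter only goes up
    last_counter = 0

    def server_verify(public_key, challenge, origin, counter, signature):
        global last_counter
        data = origin.encode() + struct.pack(">I", counter) + challenge
        public_key.verify(signature, data, ec.ECDSA(hashes.SHA256()))  # raises on failure
        if counter <= last_counter:
            raise RuntimeError("counter went backwards - possible cloned device")
        last_counter = counter

    challenge = b"random-server-nonce"
    counter, sig = device_sign(challenge, "https://example.com")
    server_verify(device_key.public_key(), challenge, "https://example.com", counter, sig)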
