Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

OpenSTEM: This Week in HASS – term 2, week 4

Mon, 2017-05-08 09:03

It’s NAPLAN week and that means time is short! Fortunately, the Understanding Our World™ program is based on 9 week units, which means that if you run out of time in any particular week, it’s not a disaster. Furthermore, we have made sure that there is plenty of catch-up time within the lessons, so that there is no need to feel rushed. This week students are getting into the nitty gritty of their term projects. Our youngest students are studying their surroundings at school and in the local area. Older students are getting to the core of their research projects.

Foundation to Year 3

Students in our standalone Foundation/Kindy/Prep class (unit F.2) are starting to build a model of their Favourite Place. It is the teacher’s choice whether they build a diorama, make a poster or collage, or how this is done in class. This week students start by drawing or cutting out pictures to show aspects of their favourite place. Students in an integrated Foundation/Kindy/Prep (unit F.6) and Year 1 class are using their senses to investigate their class and school – what can we see, hear, smell, feel and taste? Some ideas can be found in resources such as My Favourite Sounds, and the Teacher Handbook also contains lots of ideas for these investigations. Students in Years 1 (unit 1.2), 2 (unit 2.2) and 3 (unit 3.2) are also discussing how the school and local area have changed through time. The teacher can use old maps, photos or newspaper reports to guide students through these discussions. What information is available in the school? What do local families remember?

Years 3 to 6

Students in Year 3 (unit 3.6), 4 (unit 4.2), 5 (unit 5.2) and 6 (unit 6.2) are continuing to research their explorer. This week year 3 students are focusing on the climates encountered by their explorer. Resources such as Climate Zones of Australia and Climate Zones of the World can help the class to identify these climate areas. Year 4 students examine Environments in Africa and South America, in order to discuss the environments encountered by their explorer. Students in Year 5 can read up about the environment encountered by their explorer in North America, and Year 6 students examine the Environments of Asia. In each case, the student workbook guides the student through this investigation and helps them to isolate pertinent information to include in their presentation. This helps students to gain an understanding of how to research a topic and derive an understanding of what information they need to consider. Teachers can use the workbook to check in and see how students are travelling in their progress towards completing the project, as well as their understanding of the content covered.

Colin Charles: Speaking in May 2017

Fri, 2017-05-05 19:01

It was a big April if you’re in the MySQL ecosystem, so I’m looking forward to other events with a different focus and a different audience, so to speak. See you at:

  • rootconf – May 11-12 2017 – Bangalore, India. My first Rootconf was last year, and it was a great event; I look forward to going there again this year, to talk about capacity planning for your databases. If you register with this link you get a 10% discount.
  • Open Source Data Center Conference – May 16-18 2017 – Berlin, Germany. I’ve enjoyed my trips to OSDC in the last few years, and they’re on their last tickets now – so register if you plan to go!

Tim Serong: How to Really Clean a Roomba

Wed, 2017-05-03 13:04

The official iRobot Roomba instructional videos show a Roomba doing its thing in an immaculately clean house. When it comes time to clean the Roomba itself, an immaculately manicured woman empties a sprinkling of dirt from the Roomba’s hopper into a bin, flicks no dust at all off the rotor brush and then delicately grooms the main brush, before putting the Roomba back on to charge.

It turns out the cleaning procedure is a bit more involved for two long-haired adults and three cats living on a farm. Note that the terminology used in the instructions below was made up by me just now, and may or may not match what’s in the Roomba manual. Also, our Roomba is named Neville.

First, assemble some tools. You will need at least two screwdrivers (one phillips, one slotted), the round red Roomba brush cleaning thingy, and a good sharp knife. You will not need the useless flat red Roomba cleaning thingy the woman in the official video used to groom the main brush.

Brace yourself, then turn the Roomba over (here we see that Neville had an unfortunate encounter with some old cat-related mess, in addition to the usual dirt, mud, hair, straw, wood shavings, chicken feathers, etc.)

Remove the hopper:

Empty the hopper:

Check whether the fan inside the hopper looks like it’s clogged. It’s probably good this time (Neville hasn’t accidentally been run with one filter missing lately), but we may as well open it up anyway.

Take the top off:

Take the filter plate off:

Take the fan cowling off. Only a bit of furry dusty gunk:

Remove the furry dusty gunk:

Next, pop open the roller enclosure:

Remove the rollers and take their end caps off:

Pull the brush roller through the round Roomba brush cleaning thingy:

This will remove most of the hair, pine needles and straw:

Do it again to remove the rest of the hair, pine needles and straw:

Check the end of the axle for even more hair:

This can be removed using your knife:

The rubber roller also needs a good bit of knife action:

The rubber roller probably really needs to be replaced at this point (it’s getting a bit shredded), but I’ll do that next time.

Clean the inside of the roller enclosure using a hand-held Dyson vacuum cleaner:

That didn’t work very well. I assume Neville managed to escape into the bathroom recently and got a bit wet (he’s a free spirit).

Rubbing with a damp cloth helps somewhat:

But this Orange Power stuff is even better:

See?

Next, use your knife to liberate the rotor brush:

Almost there, but we need to take the whole thing off:

Yikes:

Note to self: buy new rotor brush.

One of Neville’s wheels has horrible gunk stuck in its tread. Spraying with Orange Power and wiping helps somewhat (we’ll come back to this later):

Use a screwdriver to pop the little front wheel out, and cut the hair off its axle with a knife. The axle could probably use some WD40 too:

Remove the entire bottom plate:

Use that hand-held vacuum cleaner again to get rid of the dust balls:

Use your fingers or a screwdriver to remove small chicken and/or duck feathers from the wheel housing:

Use a chopstick to scrape the remaining gunk out of the wheel tread:

Voila!

Here’s what came out of the poor little guy:

Finally, reassemble everything, and Neville is ready for next time (or, at least, is ready to go back on the charger):

Stewart Smith: API, ABI and backwards compatibility are a hard necessity

Wed, 2017-05-03 13:00

Recently, I was reading a thread on LKML on a proposal to change the behavior of the open system call when confronted with unknown flags. The thread is worth a read as the topic of augmenting things that exist probably by accident to be “better” is always interesting, as is the definition of “better”.

Keeping API and/or ABI compatibility isn’t a new problem, and it’s one that people are pretty good at messing up from time to time.

This problem does not go away just because “we have cloud now”. In any distributed system, in order to upgrade it (or “be agile” as the kids are calling it), you by definition are going to have either downtime or at least two versions running concurrently. Thus, you have to have your interfaces/RPCs/APIs/ABIs/protocols/whatever cope with changes.

You cannot instantly upgrade the world; it happens gradually. You also have to design for at least three concurrent versions running: one is the original, the second is your upgrade, and the third is the urgent fix, because the upgrade is quite broken in some new way you only discover in production.

So, the way you do this? Never ever EVER design for N-1 compatibility only. Design for going back a long way, much longer than you officially support. You want to have a design and programming culture of backwards compatibility to ensure you can both do new and exciting things and experiment off to the side.

It’s worth going and rereading Rusty’s API levels posts from 2008.

OpenSTEM: This Week in HASS – term 2, week 3

Mon, 2017-05-01 09:06

This week all of our students start to get into the focus areas of their units. For our youngest students that means starting to examine their “Favourite Place” – a multi-sensory examination which helps them to explore a range of different kinds of experiences as they build a representation of their Favourite Place. Students in Years 1 to 3 start mapping their local area and students in Years 3 to 6 start their research topics for the term, each choosing a different explorer to investigate.

Foundation/Kindy/Prep to Year 3

Students doing our stand-alone Foundation/Kindy/Prep unit (F.2) start examining the concept of a Favourite Place this week. This week is an introduction to a 6 week investigation, using all their senses to consider different aspects of places. They are focusing on thinking about what makes their favourite place special to them and how different people like different places. This provides great opportunities for practising skills of considering alternate points of view, having respectful discussions and accepting that others might have opinions different to their own, but no less valid. Students in integrated Foundation/Kindy/Prep (unit F.6) classes and in Years 1 (unit 1.2), 2 (unit 2.2) and 3 (unit 3.2) are doing some mapping this week, learning to represent school buildings, open areas, roads, houses, shops etc in a 2 dimensional plan. This exercise forms the foundation for an examination of the school and local landscape over the next few weeks.

Years 3 to 6

Students in Years 3 to 6 start their research projects this week. Students doing unit 3.6, Exploring Climates, will be investigating people who have explored extreme climates. Options include the first people to reach Australia during the Ice Age, Aboriginal people who lived in Australia’s central deserts, Europeans who explored central Australia, such as Sturt, Leichhardt and others. Students doing unit 4.2 will be investigating explorers of Africa and South America, including Ferdinand Magellan (and Elcano), Walter Raleigh, Amerigo Vespucci and many others. Students doing unit 5.2  are investigating explorers of North America. Far beyond Christopher Columbus, choices include Vikings such as Eric the Red, Leif Erikson and Bjarni Herjolfsson; Vitus Bering (after whom the Bering Strait is named), the French in the colony of Quebec, such as Jacques Cartier, Samuel de Champlain and Pierre François-Xavier de Charlevoix. Some 19th century women such as Isabella Bird (pictured on right) and Nellie Bly are also provided as options for research. Unit 6.2 examines explorers of Asia. In this unit, Year 6 students are encouraged to move beyond a Eurocentric approach to exploration and consider explorers from other areas such as Asia and Africa as well. Thus explorers such as Ibn Battuta, Ahmad Ibn Fadlan, Gan Ying, Ennin and Zheng He, join the list with Willem Barents, William Adams, Marco Polo and Abel Tasman. Women explorers include Gertrude Bell and Ida Pfeiffer. The whole question of women explorers, and the constraints under which they have operated in different cultures and time periods, can form part of a class discussion, either as extension or for classes with a particular interest.

Teachers have the option for students to present the results of their research (which will cover the next 4 weeks) as a slide presentation using software such as PowerPoint, a poster, a narrative, a poem, a short play, or any other format that is useful. Some teachers have managed to combine this with requirements for other subject areas, such as English or Digital Technologies, thereby making the exercise even more time-efficient.

Erik de Castro Lopo: What do you mean ExceptT doesn't Compose?

Sun, 2017-04-30 12:22

Disclaimer: I work at Ambiata (our Github presence), probably the biggest Haskell shop in the southern hemisphere. Although I mention some of Ambiata's coding practices, in this blog post I am speaking for myself and not for Ambiata. However, the way I'm using ExceptT and handling exceptions in this post is something I learned from my colleagues at Ambiata.

At work, I've been spending some time tracking down exceptions in some of our Haskell code that have been bubbling up to the top level and killing a complex multi-threaded program. On Friday I posted a somewhat flippant comment to Google Plus:

Using exceptions for control flow is the root of many evils in software.

Lennart Kolmodin, whom I remember from my very earliest days of using Haskell in 2008 and whom I met for the first time at ICFP in Copenhagen in 2011, responded:

Yet what to do if you want composable code? Currently I have
type Rpc a = ExceptT RpcError IO a
which is terrible

But what do we mean by "composable"? I like the wikipedia definition:

Composability is a system design principle that deals with the inter-relationships of components. A highly composable system provides recombinant components that can be selected and assembled in various combinations to satisfy specific user requirements.

The ensuing discussion, which also included Sean Leather, suggested that these two experienced Haskellers were not aware that with the help of some combinator functions, ExceptT composes very nicely and results in more readable and more reliable code.

At Ambiata, our coding guidelines strongly discourage the use of partial functions. Since the type signature of a function doesn't include information about the exceptions it might throw, the use of exceptions is strongly discouraged. When using library functions that may throw exceptions, we try to catch those exceptions as close as possible to their source and turn them into errors that are explicit in the type signatures of the code we write. Finally, we avoid using String to hold errors. Instead we construct data types to carry error messages and render functions to convert them to Text.

In order to properly demonstrate the ideas, I've written some demo code and made it available in this GitHub repo. It compiles and even runs (providing you give it the required number of command line arguments) and hopefully does a good job demonstrating how the bits fit together.

So let's look at the naive version of a program that doesn't do any exception handling at all.

import Data.ByteString.Char8 (readFile, writeFile)
import Naive.Cat (Cat, parseCat)
import Naive.Db (Result, processWithDb, renderResult, withDatabaseConnection)
import Naive.Dog (Dog, parseDog)
import Prelude hiding (readFile, writeFile)
import System.Environment (getArgs)
import System.Exit (exitFailure)

main :: IO ()
main = do
  args <- getArgs
  case args of
    [inFile1, infile2, outFile] -> processFiles inFile1 infile2 outFile
    _ -> putStrLn "Expected three file names." >> exitFailure

readCatFile :: FilePath -> IO Cat
readCatFile fpath = do
  putStrLn "Reading Cat file."
  parseCat <$> readFile fpath

readDogFile :: FilePath -> IO Dog
readDogFile fpath = do
  putStrLn "Reading Dog file."
  parseDog <$> readFile fpath

writeResultFile :: FilePath -> Result -> IO ()
writeResultFile fpath result = do
  putStrLn "Writing Result file."
  writeFile fpath $ renderResult result

processFiles :: FilePath -> FilePath -> FilePath -> IO ()
processFiles infile1 infile2 outfile = do
  cat <- readCatFile infile1
  dog <- readDogFile infile2
  result <- withDatabaseConnection $ \ db ->
    processWithDb db cat dog
  writeResultFile outfile result

Once built as per the instructions in the repo, it can be run with:

dist/build/improved/improved Naive/Cat.hs Naive/Dog.hs /dev/null
Reading Cat file 'Naive/Cat.hs'
Reading Dog file 'Naive/Dog.hs'.
Writing Result file '/dev/null'.

The above code is pretty naive and there is zero indication of what can and cannot fail or how it can fail. Here's a list of some of the obvious failures that may result in an exception being thrown:

  • Either of the two readFile calls.
  • The writeFile call.
  • The parsing functions parseCat and parseDog.
  • Opening the database connection.
  • The database connection could terminate during the processing stage.

So let's see how the use of the standard Either type, ExceptT from the transformers package, and combinators from Gabriel Gonzalez's errors package can improve things.

Firstly, the types of parseCat and parseDog were ridiculous. Parsers can fail with parse errors, so these should both return an Either type. Just about everything else should be in the ExceptT e IO monad. Let's see what that looks like:

{-# LANGUAGE OverloadedStrings #-}

import Control.Exception (SomeException)
import Control.Monad.IO.Class (liftIO)
import Control.Error (ExceptT, fmapL, fmapLT, handleExceptT, hoistEither, runExceptT)
import Data.ByteString.Char8 (readFile, writeFile)
import Data.Monoid ((<>))
import Data.Text (Text)
import qualified Data.Text as T
import qualified Data.Text.IO as T
import Improved.Cat (Cat, CatParseError, parseCat, renderCatParseError)
import Improved.Db (DbError, Result, processWithDb, renderDbError, renderResult, withDatabaseConnection)
import Improved.Dog (Dog, DogParseError, parseDog, renderDogParseError)
import Prelude hiding (readFile, writeFile)
import System.Environment (getArgs)
import System.Exit (exitFailure)

data ProcessError
  = ECat CatParseError
  | EDog DogParseError
  | EReadFile FilePath Text
  | EWriteFile FilePath Text
  | EDb DbError

main :: IO ()
main = do
  args <- getArgs
  case args of
    [inFile1, infile2, outFile] ->
      report =<< runExceptT (processFiles inFile1 infile2 outFile)
    _ -> do
      putStrLn "Expected three file names, the first two are input, the last output."
      exitFailure

report :: Either ProcessError () -> IO ()
report (Right _) = pure ()
report (Left e) = T.putStrLn $ renderProcessError e

renderProcessError :: ProcessError -> Text
renderProcessError pe =
  case pe of
    ECat ec -> renderCatParseError ec
    EDog ed -> renderDogParseError ed
    EReadFile fpath msg -> "Error reading '" <> T.pack fpath <> "' : " <> msg
    EWriteFile fpath msg -> "Error writing '" <> T.pack fpath <> "' : " <> msg
    EDb dbe -> renderDbError dbe

readCatFile :: FilePath -> ExceptT ProcessError IO Cat
readCatFile fpath = do
  liftIO $ putStrLn "Reading Cat file."
  bs <- handleExceptT handler $ readFile fpath
  hoistEither . fmapL ECat $ parseCat bs
  where
    handler :: SomeException -> ProcessError
    handler e = EReadFile fpath (T.pack $ show e)

readDogFile :: FilePath -> ExceptT ProcessError IO Dog
readDogFile fpath = do
  liftIO $ putStrLn "Reading Dog file."
  bs <- handleExceptT handler $ readFile fpath
  hoistEither . fmapL EDog $ parseDog bs
  where
    handler :: SomeException -> ProcessError
    handler e = EReadFile fpath (T.pack $ show e)

writeResultFile :: FilePath -> Result -> ExceptT ProcessError IO ()
writeResultFile fpath result = do
  liftIO $ putStrLn "Writing Result file."
  handleExceptT handler . writeFile fpath $ renderResult result
  where
    handler :: SomeException -> ProcessError
    handler e = EWriteFile fpath (T.pack $ show e)

processFiles :: FilePath -> FilePath -> FilePath -> ExceptT ProcessError IO ()
processFiles infile1 infile2 outfile = do
  cat <- readCatFile infile1
  dog <- readDogFile infile2
  result <- fmapLT EDb . withDatabaseConnection $ \ db ->
    processWithDb db cat dog
  writeResultFile outfile result

The first thing to notice is that changes to the structure of the main processing function processFiles are minor but all errors are now handled explicitly. In addition, all possible exceptions are caught as close as possible to the source and turned into errors that are explicit in the function return types. Sceptical? Try replacing one of the readFile calls with an error call or a throw and see it get caught and turned into an error as specified by the type of the function.

We also see that despite having many different error types (which happens when code is split up into many packages and modules), a constructor for an error type higher in the stack can encapsulate error types lower in the stack. For example, this value of type ProcessError:

EDb (DbError3 ResultError1)

contains a DbError which in turn contains a ResultError. Nesting error types like this aids composition, as does the separation of error rendering (turning an error data type into text to be printed) from printing.

We also see that the use of combinators like fmapLT, together with the nested error types of the previous paragraph, means that ExceptT monad transformers do compose.

Using ExceptT with the combinators from the errors package to catch exceptions as close as possible to their source and convert them to errors has numerous benefits, including:

  • Errors are explicit in the types of the functions, making the code easier to reason about.
  • It's easier to provide better error messages and more context than what is normally provided by the Show instance of most exceptions.
  • The programmer spends less time chasing the source of exceptions in large complex code bases.
  • More robust code, because the programmer is forced to think about and write code to handle errors instead of error handling being an optional afterthought.

Want to discuss this? Try reddit.

Dave Hall: Continuing the Conversation at DrupalCon and Into the Future

Fri, 2017-04-28 01:02

My blog post from last week was very well received and sparked a conversation in the Drupal community about the future of Drupal. That conversation has continued this week at DrupalCon Baltimore.

Yesterday during the opening keynote, Dries touched on some of the issues raised in my blog post. Later in the day we held an unofficial BoF. The turnout was smaller than I expected, but we had a great discussion.

Drupal moving from a hobbyist and business tool to being an enterprise CMS for creating "ambitious digital experiences" was raised in the Driesnote and in other conversations including the BoF. We need to acknowledge that this has happened and consider it an achievement. Some people have been left behind as Drupal has grown up. There is probably more we can do to help these people. Do we need more resources to help them skill up? Should we direct them towards WordPress, backdrop, squarespace, wix etc? Is it possible to build smaller sites that eventually grow into larger sites?

In my original blog post I talked about "peak Drupal" and used metrics that supported this assertion. One metric missing from that post is dollars spent on Drupal. It is clear that the picture is very different when measuring success using budgets. There is a general sense that a lot of money is being spent on high end Drupal sites. This has resulted in fewer sites doing more with Drupal 8.

As often happens when trying to solve problems with Drupal, the BoF descended into talking about technical solutions. Technical solutions and implementation detail have a place. I think it is important for the community to move beyond this and start talking about Drupal as a product.

In my mind Drupal core should be a content management framework and content hub service for building compelling digital experiences. For the record, I am not arguing Drupal should become API only. Larger users will take this and build their digital stack on top of this platform. This same platform should support an ecosystem of Drupal "distros". These product focused projects target specific use cases. Great examples of such distros include Lightning, Thunder, Open Social, aGov and Drupal Commerce. For smaller agencies and sites a distro can provide a great starting point for building new Drupal 8 sites.

The biggest challenge I see is continuing this conversation as a community. The majority of the community toolkit is focused on facilitating technical discussions and implementations. These tools will be valuable as we move from talking to doing, but right now we need tools and processes for engaging in silver discussions so we can build platinum level products.

Linux Users of Victoria (LUV) Announce: LUV Main May 2017 Meeting: The Plasma programming language

Wed, 2017-04-26 17:02
Start: May 2 2017 18:30
End: May 2 2017 20:30
Location: The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053
Link: http://luv.asn.au/meetings/map

PLEASE NOTE NEW LOCATION

Tuesday, May 2, 2017
6:30 PM to 8:30 PM
The Dan O'Connell Hotel
225 Canning Street, Carlton VIC 3053

Speakers:

• Dr. Paul Bone, the Plasma programming language
• To be announced

Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

Matthew Oliver: Weechat – a trial

Mon, 2017-04-24 13:03

I’m a big fan of console apps. But for IRC, I have been using quassel, as it gives me a client on my phone. But I’ve been cleaning up my cloud accounts, and thought of the good old days when you’d simply run a console IRC client in screen or tmux.

Many years ago I used weechat, it was awesome, so I thought why not go and have a play.

To my surprise weechat has a /relay command which allows other clients to connect. One such client is WeechatAndroid, which in itself isn’t an IRC client, but actually a client that can talk to an already running weechat… This is exactly what I want.

So in case any of you wanted to do the same, this is my current tmux + weechat setup. Mostly gleaned from various internet sources.

Initial WeeChat setup

Once you start weechat, you can simply configure things from within the client; when you do, it writes them to its config file. So really all you need to do is place or edit the config file directly, and once set up you can easily move it. However, as you’ll always want to be connected, it’s nice to know how to change configuration while it’s running… you know, so you can play.

I’ll assume you have installed weechat in whatever distro you’re using. So we’ll start by running weechat (inside a tmux or screen):

weechat

Now while we are inside weechat, lets first start by installing/activating some scripts:

/script install buffers.pl go.py colorize_nicks.py urlserver.py

Note that there are heaps of scripts to install, but let’s explain these:

  • buffers.pl – makes a left side window listing the buffers (or channels).
  • go.py – allows you to quickly search the buffers.
  • urlserver.py – automatically shortens long URLs so they don’t break when they wrap over a line.
  • colorize_nicks.py – is obvious.

The go.py script is useful, but it turns out you can also turn on mouse support with:

/mouse enable

This will then allow you to use your mouse to select a buffer, although it will break normal copy and pasting, so it may not be worth the effort, but I thought I’d mention it.
Of course, normally you’d use the weechat keybindings: <alt+left or right> to go to a buffer, or <alt-a> to go to the last active buffer. But go means you can easily jump around, and if you add a go.py keybinding:

/key bind meta-g /go

Then <alt+g> will launch it, so you can just type away. It’s really easy.

But we are probably getting ahead of ourselves. There are plenty of places that can tell you how to configure weechat to connect to freenode etc. I used: https://weechat.org/files/doc/devel/weechat_quickstart.en.html
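
For the record, adding and connecting to a network looks something like this (a sketch from memory; the exact server address and options are illustrative, so check the quickstart guide above):

/server add freenode chat.freenode.net/6697 -ssl
/connect freenode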

Now that you have some things configured, let’s set up the relay script/plugin.

Setting up /relay

This actually isn’t too hard. But there was a gotcha, which is why I’m writing about it. I first simply followed WeechatAndroid’s guide. But this just sets up an insecure relay:


/relay add weechat 8001
/set relay.network.password "your-secret-password"

NOTE: 8001 is the port, so you can change this, and the password to whatever you want.

However, that’s great for a test, but we probably want SSL, so we need an SSL cert:


mkdir -p ~/.weechat/ssl
cd ~/.weechat/ssl
openssl req -nodes -newkey rsa:2048 -keyout relay.pem -x509 -days 365 -out relay.pem

NOTE: To see or change where weechat looks for the cert, check the value of ‘relay.network.ssl_cert_key’, or just run: /set relay.*

We can tell weechat to load this SSL cert without restarting by running:

/relay sslcertkey

And here’s the gotcha: you will fail to connect via SSL until you create an instance of the weechat relay protocol (listening socket) with ssl in the name of the protocol:
/relay del weechat
/relay add ssl.weechat 8001

Or just be smart and set up the SSL version from the start.

In the weechat buffer you will see the client connecting and disconnecting, so it’s a good way to debug connection issues.
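
If you want to poke at the SSL listener from a shell first (my suggestion, not part of the original setup), openssl can act as a simple test client:

openssl s_client -connect localhost:8001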

Weechat /relay + ssl (TL;DR)

mkdir -p ~/.weechat/ssl
cd ~/.weechat/ssl
openssl req -nodes -newkey rsa:2048 -keyout relay.pem -x509 -days 365 -out relay.pem

In weechat:

/relay sslcertkey
/relay add ssl.weechat 8001
/set relay.network.password "your-secret-password"

OpenSTEM: This Week in HASS – term 2, week 2

Mon, 2017-04-24 09:03

It is hoped that by now the school routine is settling back into place. No doubt you’ve all got ANZAC Day marked on your class calendars, and this may be a good time to revisit some of the celebrations with the younger students. This week our younger students are looking at types of homes and local Aboriginal groups. Students in Year 3 are investigating climate zones and biomes of Australia, while students in Years 4 to 6 are looking at Europe in the ‘Age of Discovery’ (the 15th to 18th centuries).

Foundation/Prep to Year 3

Students in our stand-alone Foundation/Prep class (Unit F.2), in line with the name of the unit “Where We Live”, are examining different types of homes and talking about how people get the things they need (such as shelter, warmth etc) from their homes. Students examine a wide range of different types of homes including freestanding houses, apartments, townhouses, as well as boats, caravans and other less conventional homes.

Students in integrated Foundation/Prep classes (Unit F.6) and in years 1 (Unit 1.2), 2 (Unit 2.2) and 3 (Unit 3.2) are finding out about their local Aboriginal groups, in the area of their school. Students will be considering how the groups are connected to the land and what changes they have seen since they first arrived in that area, thousands of years before. Remember, if you need information about your local Aboriginal group, feel free to contact us and ask.

Years 3 to 6

Students in Year 3, doing the Unit “Exploring Climates” (Unit 3.6) are consolidating work done last week on climate zones and the biomes of Australia. This week they are focusing on matching the climate zone to the region of Australia. Students in Years 4 (Unit 4.2), 5 (Unit 5.2) and 6 (Unit 6.2) are shifting focus across to Europe in the 15th to 18th centuries – the ‘Age of Discovery’.

This sets the scene for further examinations of explorers and the research project students will undertake this term, as well as introducing students to the conditions in Europe which later led to colonisation, thereby providing some important background information for Australian history in Term 3. Students can examine Spain, Portugal and England and the role that they played in exploring the world at this time.

Science! Sailing Ship Science

Did you know: the Understanding Our World™ program also fully covers the Science component of the Australian Curriculum at each year level, integrated with the HASS materials!

In line with the Age of Discovery explorer theme, students start their Science activity: “Ancient Sailing Ships”. A perennial favourite with students, this activity involves making a simple model sailing ship and then examining the forces acting on the ship, the properties of different parts of the ships and the materials from which they were made, examining different types of sails (square-rigged versus lateen-rigged), as well as considering the phases of matter associated with sailing ships.

Some schools set up water troughs and fans and race the ships against each other, which causes much excitement! This activity also helps students understand some of the challenges faced by explorers who travelled the world in similar vessels.

Dave Hall: Many People Want To Talk

Sat, 2017-04-22 03:02

WOW! The response to my blog post on the future of Drupal earlier this week has been phenomenal. My blog saw more traffic in 24 hours than it normally sees in a 2 to 3 week period. Around 30 comments have been left by readers. My tweet announcing the post was the top Drupal tweet for a day. Some 50 hours later it is still number 4.

It seems to have really connected with many people in the community. I am still reflecting on everyone's contributions. There is a lot to take in. Rather than rush a follow up that responds to the issues raised, I will take some time to gather my thoughts.

One thing that is clear is that many people want to use DrupalCon Baltimore next week to discuss this issue. I encourage people to turn up with an open mind and engage in the conversation there.

A few people have suggested a BoF. Unfortunately all of the official BoF slots are full. Rather than that be a blocker, I've decided to run an unofficial BoF on the first day. I hope this helps facilitate the conversation.

Unofficial BoF: The Future of Drupal

When: Tuesday 25 April 2017 @ 12:30-1:30pm
Where: Exhibit Hall - meet at the Digital Echidna booth (#402) to be directed to the group
What: High level discussion about the direction people think Drupal should take.
UPDATE: An earlier version of this post had this scheduled for Monday. It is definitely happening on Tuesday.

I hope to see you in Baltimore.

Dave Hall: Drupal, We Need To Talk

Wed, 2017-04-19 23:02

Update 21 April: I've published a followup post with details of the BoF to be held at DrupalCon Baltimore on Tuesday 25 April. I hope to see you there so we can continue the conversation.

Drupal has a problem. No, not that problem.

We live in a post peak Drupal world. Drupal peaked some time during the Drupal 8 development cycle. I’ve had conversations with quite a few people who feel that we’ve lost momentum. DrupalCon attendances peaked in 2014, Google search impressions haven’t returned to their 2009 level, core downloads have trended down since 2015. We need to accept this and talk about what it means for the future of Drupal.

Technically Drupal 8 is impressive. Unfortunately the uptake has been very slow. A factor in this slow uptake is that from a developer's perspective, Drupal 8 is a new application. The upgrade path from Drupal 7 to 8 is another factor.

In the five years Drupal 8 was being developed there was a fundamental shift in software architecture. During this time we witnessed the rise of microservices. Drupal is a monolithic application that tries to do everything. Don't worry, this isn't trying to rekindle the smallcore debate from last decade.

Today it is more common to see an application built using a handful of Laravel microservices, a couple of golang services and one built with nodejs. These applications often have multiple frontends: web (react, vuejs etc), mobile apps and an API. This is more effort to build out, but it is likely to be less effort to maintain long term.

I have heard so many excuses for why Drupal 8 adoption is so slow. After a year I think it is safe to say the community is in denial. Drupal 8 won't be as popular as D7.

Why isn't this being talked about publicly? Is it because there is a commercial interest in perpetuating the myth? Are the businesses built on offering Drupal services worried about scaring away customers? Adobe, Sitecore and others would point to such blog posts to attack Drupal. Sure, admitting we have a problem could cause some short term pain. But if we don't have the conversation we will go the way of Joomla; an irrelevant product that continues its slow decline.

Drupal needs to decide what is its future. The community is full of smart people, we should be talking about the future. This needs to be a public conversation, not something that is discussed in small groups in dark corners.

I don't think we will ever see Drupal become a collection of microservices, but I do think we need to become more modular. It is time for Drupal to pivot. I think we need to cut features and decouple the components. I think it is time for us to get back to our roots, but modernise at the same time.

Drupal has always been a content management system. It does not need to be a content delivery system. This goes beyond "Decoupled (Headless) Drupal". Drupal should become a "content hub" with pluggable workflows for creating and managing that content.

We should adopt the unix approach: do one thing and do it well. This approach would allow Drupal to be "just another service" that complements the application.

What do you think is needed to arrest the decline of Drupal? What should Drupal 9 look like? Let's have the conversation.

Ben Martin: CNC Alloy Candelabra

Wed, 2017-04-19 18:03
While learning Fusion 360 I thought it would be fun to flex my new knowledge of cutting curved shapes out of alloy. Some donated LED fake candles were all the inspiration needed to design and cut out a candelabra. Yes, it is industrial looking. With vcarve and ball ends I could try to make it more baroque looking, but then that would require more artistic ability than a poor old programmer might have.


It is interesting working out how to fixture the cut for such creations. As of now, Fusion 360 will allow you to put tabs on curved surfaces, but you don't get to manually place them in that case. So it's a bit of fun getting things where you want them by adjusting other parameters.

Also, I have noticed some issues with tabs on curves where exact multiples of the layer depth align perfectly with the top of the tab height. Making sure that case doesn't happen avoids the resulting undesired cuts. So as usual I managed to learn a bunch of stuff while making something that wasn't in my normal comfort zone.

The four candles are run off a small buck converter and wired in parallel at 3 volts to simulate the batteries they normally run off.

I can feel a gnarled brass candle base coming at some stage to help mitigate the floating candle look. Adding some melted real wax has also been suggested to give a more real look.

Chris Smart: Patches for OpenStack Ironic Python Agent to create Buildroot images with Make

Tue, 2017-04-18 21:02

Recently I wrote about creating an OpenStack Ironic deploy image with Buildroot. Doing this manually is good because it helps to understand how it’s pieced together; however, it is slightly more involved.

The Ironic Python Agent (IPA) repo has some imagebuild scripts which make building the CoreOS and TinyCore images pretty trivial. I now have some patches which add support for creating the Buildroot images, too.

The patches consist of a few scripts which wrap the manual build method and a Makefile to tie it all together. Only the install-deps.sh script requires root privileges, and only if it detects missing dependencies; all other Buildroot tasks are run as a non-privileged user. It’s one of the great things about the Buildroot method!

Build

Again, I have included documentation in the repo, so please see there for more details on how to build and customise the image. However in short, it is as simple as:

git clone https://github.com/csmart/ironic-python-agent.git
cd ironic-python-agent/imagebuild/buildroot
make
# or, alternatively:
./build-buildroot.sh --all

These actions will perform the following tasks automatically:

  • Fetch the Buildroot Git repositories
  • Load the default IPA Buildroot configuration
  • Download and verify all source code
  • Build the toolchain
  • Use the toolchain to build:
    • System libraries and packages
    • Linux kernel
    • Python Wheels for IPA and dependencies
  • Create the kernel, initramfs and ISO images

The default configuration points to the upstream IPA Git repository, however you can change this to point to any repo and commit you like. For example, if you’re working on IPA itself, you can point Buildroot to your local Git repo and then build and boot that image to test it!

The following finalised images will be found under ./build/output/images:

  • bzImage (kernel)
  • rootfs.cpio.xz (ramdisk)
  • rootfs.iso9660 (ISO image)

These files can be uploaded to Glance for use with Ironic.
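
For example, uploading them with the OpenStack client might look something like this (a sketch only; the image names are my own, and whether your cloud wants the aki/ari formats shown here may vary):

openstack image create --disk-format aki --container-format aki \
    --file build/output/images/bzImage ipa-buildroot.kernel
openstack image create --disk-format ari --container-format ari \
    --file build/output/images/rootfs.cpio.xz ipa-buildroot.initramfs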

Help

To see available Makefile targets, simply run the help target:

make help

Help is also available for the shell scripts if you pass the --help option:

./build-buildroot.sh --help
./clean-buildroot.sh --help
./customise-buildroot.sh --help

Customisation

As with the manual Buildroot method, customising the build is pretty easy:

make menuconfig
# do buildroot changes, e.g. change IPA Git URL
make

I created the kernel config from scratch (via tinyconfig) and deliberately tried to balance size and functionality. It should boot on most Intel based machines (BIOS and UEFI), however hardware support like hard disk and ethernet controllers is deliberately limited. The goal was to start small and add more support as needed.

Customising the Linux kernel is also pretty easy, though:

make linux-menuconfig
# do kernel changes
make

Each time you run make, it’ll pick up where you left off and re-create your images.

Really happy for anyone to test it out and let me know what you think!

OpenSTEM: This Week in HASS – term 2, week 1

Tue, 2017-04-18 09:03

Welcome to the new school term, and we hope you all had a wonderful Easter! Many of our students are writing NAPLAN this term, so the HASS program provides a refreshing focus on something different, whilst practising skills that will help students prepare for NAPLAN without even realising it! Both literacy and numeracy are foundation skills of much of the broader curriculum and are reinforced within our HASS program as well. Meantime our younger students are focusing on local landscapes this term, while our older students are studying explorers of different continents.

Foundation to Year 3

Our youngest students (Foundation/Prep Unit F.2) start the term by looking at different types of homes. A wide selection of places can be homes for people around the world, so students can compare where they live to other types of homes. Students in integrated Foundation/Prep and Years 1 to 3 (Units F.6; 1.2; 2.2 and 3.2) start their examination of the local landscape by examining how Aboriginal people arrived in Australia 60,000 years ago. They learn how modern humans expanded across the world during the last Ice Age, reaching Australia via South-East Asia. Starting with this broad focus allows them to narrow down in later weeks, finally focusing on their local community.

Year 3 to Year 6

Students in Years 3 to 6 (Units 3.6; 4.2; 5.2 and 6.2) are looking at explorers this term. Each year level focuses on explorers of a different part of the world. Year 3 students investigate different climate zones and explorers of extreme climate areas (such as the Poles, or the Central Deserts of Australia).  Year 4 students examine Africa and South America and investigate how European explorers during the ‘Age of Discovery‘ encountered different environments, animals and people on these continents. The students start with prehistory and this week they are looking at how Ancient Egyptians and Bantu-speaking groups explored Africa thousands of years ago. They also examine Great Zimbabwe. Year 5 students are studying North America, and this week are starting with the Viking voyages to Greenland and Newfoundland, in the 10th century. Year 6 students focus on Asia, and start with a study in Economics by examining the Dutch East India Company of the 17th and 18th centuries. (Remember HASS for years 5 and 6 includes History, Geography, Civics and Citizenship and Economics and Business – we cover it all, plus Science!)

You might be wondering how on earth we integrate such apparently disparate topics for multi-year classes! Well, our Teacher Handbooks are full of tricks to make teaching these integrated classes a breeze. The Teacher Handbooks with lesson plans and hints for how to integrate across year levels are included, along with the Student Workbooks, Model Answers and Assessment Guides, within our bundles for each unit. Teachers using these units have been thrilled at how easy it is to use our material in multi-year level classes, whilst knowing that each student is covering curriculum-appropriate material for their own year level.

Russell Coker: More KVM Modules Configuration

Mon, 2017-04-17 21:02

Last year I blogged about blacklisting a video driver so that KVM virtual machines didn’t go into graphics mode [1]. Now I’ve been working on some other things to make virtual machines run better.

I use the same initramfs for the physical hardware as for the virtual machines. So I need to remove modules that are needed for booting the physical hardware from the VMs as well as other modules that get dragged in by systemd and other things. One significant saving from this is that I use BTRFS for the physical machine and the BTRFS driver takes 1M of RAM!

The first thing I did to reduce the number of modules was to edit /etc/initramfs-tools/initramfs.conf and change “MODULES=most” to “MODULES=dep”. This significantly reduced the number of modules loaded and also stopped the initramfs from probing for a non-existent floppy drive, which added about 20 seconds to the boot. Note that this will result in your initramfs not supporting different hardware. So if you plan to take a hard drive out of your desktop PC and install it in another PC this could be bad for you, but for servers it’s OK, as that sort of upgrade is uncommon for servers and only done with some planning (such as creating an initramfs just for the migration).
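
One extra step worth noting (my addition, assuming the Debian initramfs-tools setup referenced above): the change only takes effect once the initramfs is regenerated, for example with:

update-initramfs -u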

I put the following rmmod commands in /etc/rc.local to remove modules that are automatically loaded:
rmmod btrfs
rmmod evdev
rmmod lrw
rmmod glue_helper
rmmod ablk_helper
rmmod aes_x86_64
rmmod ecb
rmmod xor
rmmod raid6_pq
rmmod cryptd
rmmod gf128mul
rmmod ata_generic
rmmod ata_piix
rmmod i2c_piix4
rmmod libata
rmmod scsi_mod
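
A quick way to see which modules are worth evicting (my addition) is to sort the lsmod output by module size:

lsmod | sort -k 2 -n | tail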

In /etc/modprobe.d/blacklist.conf I have the following lines to stop drivers being loaded. The first line is to stop the video mode being set and the rest are just to save space. One thing that inspired me to do this is that the parallel port driver gave a kernel error when it loaded and tried to access non-existent hardware.
blacklist bochs_drm
blacklist joydev
blacklist ppdev
blacklist sg
blacklist psmouse
blacklist pcspkr
blacklist sr_mod
blacklist acpi_cpufreq
blacklist cdrom
blacklist tpm
blacklist tpm_tis
blacklist floppy
blacklist parport_pc
blacklist serio_raw
blacklist button

On the physical machine I have the following in /etc/modprobe.d/blacklist.conf. Most of this is to prevent loading of filesystem drivers when making an initramfs. I do this because I know there’s never going to be any need for CDs, parallel devices, graphics, or strange block devices in a server room. I wouldn’t do any of this for a desktop workstation or laptop.
blacklist ppdev
blacklist parport_pc
blacklist cdrom
blacklist sr_mod
blacklist nouveau

blacklist ufs
blacklist qnx4
blacklist hfsplus
blacklist hfs
blacklist minix
blacklist ntfs
blacklist jfs
blacklist xfs

Chris Smart: Creating an OpenStack Ironic deploy image with Buildroot

Sun, 2017-04-16 19:02

Ironic is an OpenStack project which provisions bare metal machines (as opposed to virtual).

A tool called Ironic Python Agent (IPA) is used to control and provision these physical nodes, performing tasks such as wiping the machine and writing an image to disk. This is done by booting a custom Linux kernel and initramfs image which runs IPA and connects back to the Ironic Conductor.

The Ironic project supports a couple of different image builders, including CoreOS, TinyCore and others via Disk Image Builder.

These have their limitations, however, for example they require root privileges to be built and, with the exception of TinyCore, are all hundreds of megabytes in size. One of the downsides of TinyCore is limited hardware support and although it’s not used in production, it is used in the OpenStack gating tests (where it’s booted in virtual machines with ~300MB RAM).

Large deployment images mean a longer delay in the provisioning of nodes, and so I set out to create a small, customisable image that solves the problems of the other existing images.

Buildroot

I chose to use Buildroot, a well regarded, simple to use tool for building embedded Linux images.

So far it has been quite successful as a proof of concept.

Customisation can be done via the menuconfig system, similar to the Linux kernel.

Buildroot menuconfig

Source code

All of the source code for building the image is up on my GitHub account in the ipa-buildroot repository. I have also written up documentation which should walk you through the whole build and customisation process.

The ipa-buildroot repository contains the IPA specific Buildroot configurations and tracks upstream Buildroot in a Git submodule. By using upstream Buildroot and our external repository, the IPA Buildroot configuration comes up as an option for regular Buildroot build.

IPA in list of Buildroot default configs
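
If you want to reproduce that by hand, Buildroot’s external tree mechanism is invoked something like this (a sketch; the paths here are illustrative and the exact defconfig name is documented in the repo):

cd ipa-buildroot/buildroot
make BR2_EXTERNAL=.. menuconfig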

Buildroot will compile the kernel and initramfs, then post-build scripts clone the Ironic Python Agent repository and create Python wheels for the target.

This makes the build highly flexible: based on the version of Ironic Python Agent you want to use, you can specify the location and branch of the ironic-python-agent and requirements repositories.

Set Ironic Python Agent and Requirements location and Git version

I created the kernel config from scratch (using tinyconfig) and deliberately tried to balance size and functionality. It should boot on most Intel based machines (BIOS and UEFI), however hardware support like hard disk and ethernet controllers is deliberately limited. The goal was to start small and add more support as needed.

By using Buildroot, customising the Linux kernel is pretty easy! You can just run this to configure the kernel and rebuild your image:

make linux-menuconfig && make

If this interests you, please check it out! Any suggestions are welcome.

Francois Marier: Automatically renewing Let's Encrypt TLS certificates on Debian using Certbot

Fri, 2017-04-14 01:03

I use Let's Encrypt TLS certificates on my Debian servers along with the Certbot tool. Since I use the "temporary webserver" method of proving domain ownership via the ACME protocol, I cannot use the cert renewal cronjob built into Certbot.

Instead, this is the script I put in /etc/cron.daily/certbot-renew:

#!/bin/bash

/usr/bin/certbot renew --quiet --pre-hook "/bin/systemctl stop apache2.service" --post-hook "/bin/systemctl start apache2.service"

pushd /etc/ > /dev/null
/usr/bin/git add letsencrypt
DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
if [ -n "$DIFFSTAT" ] ; then
    /usr/bin/git commit --quiet -m "Renewed letsencrypt certs"
    echo "$DIFFSTAT"
fi
popd > /dev/null

# Generate the right certs for ejabberd and znc
if test /etc/letsencrypt/live/jabber-gw.fmarier.org/privkey.pem -nt /etc/ejabberd/ejabberd.pem ; then
    cat /etc/letsencrypt/live/jabber-gw.fmarier.org/privkey.pem /etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem > /etc/ejabberd/ejabberd.pem
fi
cat /etc/letsencrypt/live/irc.fmarier.org/privkey.pem /etc/letsencrypt/live/irc.fmarier.org/fullchain.pem > /home/francois/.znc/znc.pem

It temporarily disables my Apache webserver while it renews the certificates and then only outputs something to STDOUT (since my cronjob will email me any output) if certs have been renewed.

Since I'm using etckeeper to keep track of config changes on my servers, my renewal script also commits to the repository if any certs have changed.

Finally, since my XMPP server and IRC bouncer need the private key and the full certificate chain to be in the same file, I regenerate these files at the end of the script. In the case of ejabberd, I only do so if the certificates have actually changed, since overwriting ejabberd.pem changes its timestamp and triggers an fcheck notification (since it watches all files under /etc).
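
One small detail worth spelling out (my note, not from the original post): on Debian, run-parts only executes /etc/cron.daily/ scripts that are executable, so remember:

chmod a+x /etc/cron.daily/certbot-renew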

External Monitoring

In order to catch mistakes or oversights, I use ssl-cert-check to monitor my domains once a day:

ssl-cert-check -s fmarier.org -p 443 -q -a -e francois@fmarier.org

I also signed up with Cert Spotter which watches the Certificate Transparency log and notifies me of any newly-issued certificates for my domains.

In other words, I get notified:

  • if my cronjob fails and a cert is about to expire, or
  • as soon as a new cert is issued.

The whole thing seems to work well, but if there's anything I could be doing better, feel free to leave a comment!

Colin Charles: Speaking in April 2017

Sun, 2017-04-09 01:02

It’s been a while since I’ve blogged (will have to catch up soon), but here are a few appearances:

  • How we use MySQL today – April 10 2017 – New York MySQL meetup. I am almost certain this will be very interesting with the diversity of speakers and topics.
  • Percona Live 2017 – April 24-27 2017 – Santa Clara, California. This is going to be huge, as it’s expanded beyond just MySQL to include MongoDB, PostgreSQL, and other open source databases. It might even be the conference with the largest time series track out there. Use code COLIN30 for the best discount at registration.

I will also be in attendance at the MariaDB Developer’s (Un)Conference, and M|17 that follows.

Dave Hall: Remote Presentations

Thu, 2017-04-06 17:02

Living in the middle of nowhere and working most of my hours in the evenings I have few opportunities to attend events in person, let alone deliver presentations. As someone who likes to share knowledge and present at events this is a problem. My work around has been presenting remotely. Many of my talks are available on playlist on my youtube channel.

I've been doing remote presentations for many years. During this time I have learned a lot about what it takes to make a remote presentation successful.

Preparation

When scheduling a remote session you should make sure there is enough time for a test before your scheduled slot. Personally I prefer presenting after lunch as it allows an hour or so for dealing with any gremlins. The test presentation should use the same machines and connections you'll be using for your presentation.

I prefer using Hangouts On Air for my presentations. This allows me to stream my session to the world and have it recorded for future reference. I review every one of my recorded talks to see what I can do better next time.

Both sides of the connection should use wired connections. WiFi, especially at conferences, can be flaky. Organisers should ensure that all presentation machines are using Ethernet, and if possible it should be on a separate VLAN.

Tips for Presenters

Presenting to a remote audience is very different to presenting in front of a live audience. When presenting in person you're able to focus on people in the audience who seem to be really engaged with your presentation or scan the crowd to see if you're putting people to sleep. Even if there is a webcam on the audience it is likely to be grainy and in a fixed position. It is also difficult to pace when presenting remotely.

When presenting in person your slides will be displayed in full screen mode, often with a presenter view in your application of choice. Most remote presentation tools don't allow you to run your slides in full screen mode. This makes it more difficult as a presenter. Transitions won't work, videos won't autoplay and any links Keynote (and PowerPoint) open will open in a new window that isn't being shared, which makes demos trickier. If you don't hide the slide thumbnails to remind you of what is coming next, the audience will see them too. Recently I worked out that printing thumbnails avoids revealing the punchlines prematurely.

Find out as much information as possible about the room your presentation will be held in. How big is it? What is the seating configuration? Where is the screen relative to where the podium is?

Tips for Organisers

Event organisers are usually flat out on the day of the event. Having to deal with a remote presenter adds to the workload. Some preparation can make life easier for the organisers. Well before the event day make sure someone is nominated to be the point of contact for the presenter. If possible share the details (name, email and mobile number) for the primary contact and a fallback. This avoids the presenter chasing random people from the organising team.

On the day of the event communicate delays/schedule changes to the presenter. This allows them to be ready to go at the right time.

It is always nice for the speaker to receive a swag bag and name tag in the mail. If you can afford to send this, your speaker will always appreciate it.

Need a Speaker?

Are you looking for a speaker to talk about Drupal, automation, devops, workflows or open source? I'd be happy to consider speaking at your event. If your event doesn't have a travel budget to fly me in, then I can present remotely. To discuss this further please get in touch using my contact form.