Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Linux Users of Victoria (LUV) Announce: LUV Main May 2017 Meeting

Wed, 2017-04-26 17:02
Start: May 2 2017 18:30 End: May 2 2017 20:30 Location: The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053 Link: http://luv.asn.au/meetings/map

PLEASE NOTE NEW LOCATION

Tuesday, May 2, 2017
6:30 PM to 8:30 PM
The Dan O'Connell Hotel
225 Canning Street, Carlton VIC 3053

Speakers:

• To be announced

The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Matthew Oliver: Weechat – a trial

Mon, 2017-04-24 13:03

I’m a big fan of console apps. But for IRC, I have been using quassel, as it gives me a client on my phone. Lately, though, I’ve been cleaning up my cloud accounts, and thought of the good old days when you’d simply run a console IRC client in screen or tmux.

Many years ago I used weechat, and it was awesome, so I thought why not go and have a play.

To my surprise weechat has a /relay command which allows other clients to connect. One such client is WeechatAndroid, which in itself isn’t an IRC client, but rather a client that can talk to an already running weechat… This is exactly what I want.

So in case any of you want to do the same, this is my current tmux + weechat setup, mostly gleaned from various internet sources.

 

Initial WeeChat setup

Once you start weechat, you can configure things from within the client, and when you do, it writes the changes to its config files. So really all you need to do is place or edit the config files directly, and once set up you can easily move them around. However, as you’ll always want to be connected, it’s nice to know how to change the configuration while it’s running… you know, so you can play.

I’ll assume you have installed weechat in whatever distro you’re using. So we’ll start by running weechat (inside a tmux or screen):

weechat
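If you’re new to tmux, a minimal way to do that looks something like this (the session name “irc” is just an arbitrary choice):

tmux new-session -s irc
weechat
# detach with ctrl-b d, and later reattach with:
tmux attach -t irc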

Now while we are inside weechat, let’s first start by installing/activating some scripts:

/script install buffers.pl go.py colorize_nicks.py urlserver.py

Note that there are heaps of scripts you could install, but let’s explain these:

  • buffers.pl – makes a left side window listing the buffers (or channels).
  • go.py – allows you to quickly search the buffers.
  • urlserver.py – automatically shortens long URLs so they don’t break if they wrap over a line.
  • colorize_nicks.py – is obvious.

The go.py script is useful. But it turns out you can also turn on mouse support with:

/mouse enable

This will then allow you to use your mouse to select a buffer, although it will break normal copy and pasting, so it may not be worth the effort, but I thought I’d mention it.
Of course, normally you’d use the weechat keybindings instead: <alt+left or right> to move between buffers, or <alt+a> to go to the last active buffer. But go means you can easily jump to any buffer, especially if you add a go.py keybinding:

/key bind meta-g /go

Then <alt+g> will launch it, so you can just type away; it’s really easy.

But we are probably getting ahead of ourselves. There are plenty of places that can tell you how to configure weechat to connect to freenode etc. I used: https://weechat.org/files/doc/devel/weechat_quickstart.en.html
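For completeness, the sort of commands the quickstart walks you through look roughly like this; the nick is a placeholder and the server details may differ for your network:

/server add freenode chat.freenode.net/6697 -ssl
/set irc.server.freenode.nicks "mynick"
/set irc.server.freenode.autoconnect on
/connect freenode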

Now that you have some things configured, let’s set up the relay script/plugin.

 

Setting up /relay

This actually isn’t too hard. But there was a gotcha, which is why I’m writing about it. I first simply followed WeechatAndroid’s guide, but that just sets up an insecure relay:


/relay add weechat 8001
/set relay.network.password "your-secret-password"

NOTE: 8001 is the port; you can change this, and the password, to whatever you want.

However, that’s great for a test, but we probably want SSL, so we need an SSL cert:


mkdir -p ~/.weechat/ssl
cd ~/.weechat/ssl
openssl req -nodes -newkey rsa:2048 -keyout relay.pem -x509 -days 365 -out relay.pem

NOTE: To see or change where weechat is looking for the cert, check the value of ‘relay.network.ssl_cert_key’, or just run: /set relay.*
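If you used the paths above you shouldn’t need to change it; I believe the default points at the file we just created, where %h expands to the WeeChat home directory (usually ~/.weechat):

/set relay.network.ssl_cert_key "%h/ssl/relay.pem"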

We can tell weechat to load this SSL cert without restarting by running:

/relay sslcertkey

And here’s the gotcha: you will fail to connect via SSL until you create an instance of the weechat relay protocol (listening socket) with ssl in the name of the protocol:
/relay del weechat
/relay add ssl.weechat 8001

Or just be smart and set up the SSL version from the start.

On the weechat buffer you will see the client connecting and disconnecting, so it is a good way to debug connection issues.

 

Weechat /relay + ssl (TL;DR)

mkdir -p ~/.weechat/ssl
cd ~/.weechat/ssl
openssl req -nodes -newkey rsa:2048 -keyout relay.pem -x509 -days 365 -out relay.pem

In weechat:

/relay sslcertkey
/relay add ssl.weechat 8001
/set relay.network.password "your-secret-password"

OpenSTEM: This Week in HASS – term 2, week 2

Mon, 2017-04-24 09:03

We hope that by now the school routine is settling back into place. No doubt you’ve all got ANZAC Day marked on your class calendars, and this may be a good time to revisit some of the commemorations with the younger students. This week our younger students are looking at types of homes and local Aboriginal groups. Students in Year 3 are investigating climate zones and biomes of Australia, while students in Years 4 to 6 are looking at Europe in the ‘Age of Discovery’ (the 15th to 18th centuries).

Foundation/Prep to Year 3

Students in our stand-alone Foundation/Prep class (Unit F.2), in line with the name of the unit “Where We Live”, are examining different types of homes and talking about how people get the things they need (such as shelter, warmth etc) from their homes. Students examine a wide range of different types of homes including freestanding houses, apartments, townhouses, as well as boats, caravans and other less conventional homes.

Students in integrated Foundation/Prep classes (Unit F.6) and in years 1 (Unit 1.2), 2 (Unit 2.2) and 3 (Unit 3.2) are finding out about their local Aboriginal groups, in the area of their school. Students will be considering how the groups are connected to the land and what changes they have seen since they first arrived in that area, thousands of years before. Remember, if you need information about your local Aboriginal group, feel free to contact us and ask.

Years 3 to 6

Students in Year 3, doing the Unit “Exploring Climates” (Unit 3.6) are consolidating work done last week on climate zones and the biomes of Australia. This week they are focusing on matching the climate zone to the region of Australia. Students in Years 4 (Unit 4.2), 5 (Unit 5.2) and 6 (Unit 6.2) are shifting focus across to Europe in the 15th to 18th centuries – the ‘Age of Discovery’.

This sets the scene for further examinations of explorers and the research project students will undertake this term, as well as introducing students to the conditions in Europe which later led to colonisation, thereby providing some important background information for Australian history in Term 3. Students can examine Spain, Portugal and England and the role that they played in exploring the world at this time.

Science! Sailing Ship Science

Did you know: the Understanding Our World® program also fully covers the Science component of the Australian Curriculum at each year level, integrated with the HASS materials!

In line with the Age of Discovery explorer theme, students start their Science activity: “Ancient Sailing Ships”. A perennial favourite with students, this activity involves making a simple model sailing ship and then examining the forces acting on the ship, the properties of different parts of the ships and the materials from which they were made, examining different types of sails (square-rigged versus lateen-rigged), as well as considering the phases of matter associated with sailing ships.

Some schools set up water troughs and fans and race the ships against each other, which causes much excitement! This activity also helps students understand some of the challenges faced by explorers who travelled the world in similar vessels.

Dave Hall: Many People Want To Talk

Sat, 2017-04-22 03:02

WOW! The response to my blog post on the future of Drupal earlier this week has been phenomenal. My blog saw more traffic in 24 hours than it normally sees in a 2 to 3 week period. Around 30 comments have been left by readers. My tweet announcing the post was the top Drupal tweet for a day. Some 50 hours later it is still number 4.

It seems to have really connected with many people in the community. I am still reflecting on everyone's contributions. There is a lot to take in. Rather than rush a follow up that responds to the issues raised, I will take some time to gather my thoughts.

One thing that is clear is that many people want to use DrupalCon Baltimore next week to discuss this issue. I encourage people to turn up with an open mind and engage in the conversation there.

A few people have suggested a BoF. Unfortunately all of the official BoF slots are full. Rather than that be a blocker, I've decided to run an unofficial BoF on the first day. I hope this helps facilitate the conversation.

Unofficial BoF: The Future of Drupal

When: Tuesday 25 April 2017 @ 12:30-1:30pm
Where: Exhibit Hall - meet at the Digital Echidna booth (#402) to be directed to the group
What: High level discussion about the direction people think Drupal should take.
UPDATE: An earlier version of this post had this scheduled for Monday. It is definitely happening on Tuesday.

I hope to see you in Baltimore.

Dave Hall: Drupal, We Need To Talk

Wed, 2017-04-19 23:02

Update 21 April: I've published a followup post with details of the BoF to be held at DrupalCon Baltimore on Tuesday 25 April. I hope to see you there so we can continue the conversation.

Drupal has a problem. No, not that problem.

We live in a post peak Drupal world. Drupal peaked some time during the Drupal 8 development cycle. I’ve had conversations with quite a few people who feel that we’ve lost momentum. DrupalCon attendances peaked in 2014, Google search impressions haven’t returned to their 2009 level, core downloads have trended down since 2015. We need to accept this and talk about what it means for the future of Drupal.

Technically Drupal 8 is impressive. Unfortunately the uptake has been very slow. A factor in this slow uptake is that from a developer's perspective, Drupal 8 is a new application. The upgrade path from Drupal 7 to 8 is another factor.

In the five years Drupal 8 was being developed there was a fundamental shift in software architecture. During this time we witnessed the rise of microservices. Drupal is a monolithic application that tries to do everything. Don't worry this isn't trying to rekindle the smallcore debate from last decade.

Today it is more common to see an application that is built using a handful of Laravel microservices, a couple of golang services and one built with nodejs. These applications often have multiple frontends: web (react, vuejs etc), mobile apps and an API. This is more effort to build out, but it is likely to be less effort to maintain long term.

I have heard so many excuses for why Drupal 8 adoption is so slow. After a year I think it is safe to say the community is in denial. Drupal 8 won't be as popular as D7.

Why isn't this being talked about publicly? Is it because there is a commercial interest in perpetuating the myth? Are the businesses built on offering Drupal services worried about scaring away customers? Adobe, Sitecore and others would point to such blog posts to attack Drupal. Sure, admitting we have a problem could cause some short term pain. But if we don't have the conversation we will go the way of Joomla; an irrelevant product that continues its slow decline.

Drupal needs to decide what its future is. The community is full of smart people; we should be talking about the future. This needs to be a public conversation, not something that is discussed in small groups in dark corners.

I don't think we will ever see Drupal become a collection of microservices, but I do think we need to become more modular. It is time for Drupal to pivot. I think we need to cut features and decouple the components. I think it is time for us to get back to our roots, but modernise at the same time.

Drupal has always been a content management system. It does not need to be a content delivery system. This goes beyond "Decoupled (Headless) Drupal". Drupal should become a "content hub" with pluggable workflows for creating and managing that content.

We should adopt the unix approach: do one thing and do it well. This approach would allow Drupal to be "just another service" that complements the application.

What do you think is needed to arrest the decline of Drupal? What should Drupal 9 look like? Let's have the conversation.

Ben Martin: CNC Alloy Candelabra

Wed, 2017-04-19 18:03
While learning Fusion 360 I thought it would be fun to flex my new knowledge of cutting out curved shapes from alloy. Some donated LED fake candles were all the inspiration needed to design and cut out a candelabra. Yes, it is industrial looking. With vcarve and ball ends I could try to make it more baroque looking, but then that would require more artistic ability than a poor old programmer might have.


It is interesting working out how to fixture the cut for such creations. As of now, Fusion 360 will allow you to put tabs on curved surfaces, but you don't get to manually place them in that case. So it's a bit of fun getting things where you want them by adjusting other parameters.

Also I have noticed some issues with tabs on curves when exact multiples of the layer depth align perfectly with the top of the tab height. Making sure that case doesn't happen avoids the resulting undesired cuts. So as usual I managed to learn a bunch of stuff while making something that wasn't in my normal comfort zone.

The four candles are run off a small buck converter and wired in parallel at 3 volts to simulate the batteries they normally run off.

I can feel a gnarled brass candle base coming at some stage to help mitigate the floating candle look. Adding some melted real wax has also been suggested to give a more real look.

Chris Smart: Patches for OpenStack Ironic Python Agent to create Buildroot images with Make

Tue, 2017-04-18 21:02

Recently I wrote about creating an OpenStack Ironic deploy image with Buildroot. Doing it manually is good because it helps you understand how it’s pieced together; however, it is slightly more involved.

The Ironic Python Agent (IPA) repo has some imagebuild scripts which make building the CoreOS and TinyCore images pretty trivial. I now have some patches which add support for creating the Buildroot images, too.

The patches consist of a few scripts which wrap the manual build method and a Makefile to tie it all together. Only the install-deps.sh script requires root privileges (and only if it detects missing dependencies); all other Buildroot tasks run as a non-privileged user. It’s one of the great things about the Buildroot method!

Build

Again, I have included documentation in the repo, so please see there for more details on how to build and customise the image. However in short, it is as simple as:

git clone https://github.com/csmart/ironic-python-agent.git
cd ironic-python-agent/imagebuild/buildroot
make
# or, alternatively:
./build-buildroot.sh --all

These actions will perform the following tasks automatically:

  • Fetch the Buildroot Git repositories
  • Load the default IPA Buildroot configuration
  • Download and verify all source code
  • Build the toolchain
  • Use the toolchain to build:
    • System libraries and packages
    • Linux kernel
    • Python Wheels for IPA and dependencies
  • Create the kernel, initramfs and ISO images

The default configuration points to the upstream IPA Git repository, however you can change this to point to any repo and commit you like. For example, if you’re working on IPA itself, you can point Buildroot to your local Git repo and then build and boot that image to test it!

The following finalised images will be found under ./build/output/images:

  • bzImage (kernel)
  • rootfs.cpio.xz (ramdisk)
  • rootfs.iso9660 (ISO image)

These files can be uploaded to Glance for use with Ironic.
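For what it’s worth, a rough sketch of that upload using the openstack client follows; the image names here are made up, and your Ironic deployment may expect extra properties, so treat this as illustrative only:

openstack image create --disk-format aki --container-format aki \
    --file build/output/images/bzImage ipa-buildroot.kernel
openstack image create --disk-format ari --container-format ari \
    --file build/output/images/rootfs.cpio.xz ipa-buildroot.initramfs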

Help

To see available Makefile targets, simply run the help target:

make help

Help is also available for the shell scripts if you pass the --help option:

./build-buildroot.sh --help
./clean-buildroot.sh --help
./customise-buildroot.sh --help

Customisation

As with the manual Buildroot method, customising the build is pretty easy:

make menuconfig
# do buildroot changes, e.g. change IPA Git URL
make

I created the kernel config from scratch (via tinyconfig) and deliberately tried to balance size and functionality. It should boot on most Intel based machines (BIOS and UEFI), however hardware support like hard disk and ethernet controllers is deliberately limited. The goal was to start small and add more support as needed.

Customising the Linux kernel is also pretty easy, though:

make linux-menuconfig
# do kernel changes
make

Each time you run make, it’ll pick up where you left off and re-create your images.

Really happy for anyone to test it out and let me know what you think!

OpenSTEM: This Week in HASS – term 2, week 1

Tue, 2017-04-18 09:03

Welcome to the new school term, and we hope you all had a wonderful Easter! Many of our students are writing NAPLAN this term, so the HASS program provides a refreshing focus on something different, whilst practising skills that will help students prepare for NAPLAN without even realising it! Both literacy and numeracy are foundation skills of much of the broader curriculum and are reinforced within our HASS program as well. Meantime our younger students are focusing on local landscapes this term, while our older students are studying explorers of different continents.

Foundation to Year 3

Our youngest students (Foundation/Prep Unit F.2) start the term by looking at different types of homes. A wide selection of places can be homes for people around the world, so students can compare where they live to other types of homes. Students in integrated Foundation/Prep and Years 1 to 3 (Units F.6; 1.2; 2.2 and 3.2) start their examination of the local landscape by examining how Aboriginal people arrived in Australia 60,000 years ago. They learn how modern humans expanded across the world during the last Ice Age, reaching Australia via South-East Asia. Starting with this broad focus allows them to narrow down in later weeks, finally focusing on their local community.

Year 3 to Year 6

Students in Years 3 to 6 (Units 3.6; 4.2; 5.2 and 6.2) are looking at explorers this term. Each year level focuses on explorers of a different part of the world. Year 3 students investigate different climate zones and explorers of extreme climate areas (such as the Poles, or the Central Deserts of Australia).  Year 4 students examine Africa and South America and investigate how European explorers during the ‘Age of Discovery‘ encountered different environments, animals and people on these continents. The students start with prehistory and this week they are looking at how Ancient Egyptians and Bantu-speaking groups explored Africa thousands of years ago. They also examine Great Zimbabwe. Year 5 students are studying North America, and this week are starting with the Viking voyages to Greenland and Newfoundland, in the 10th century. Year 6 students focus on Asia, and start with a study in Economics by examining the Dutch East India Company of the 17th and 18th centuries. (Remember HASS for years 5 and 6 includes History, Geography, Civics and Citizenship and Economics and Business – we cover it all, plus Science!)

You might be wondering how on earth we integrate such apparently disparate topics for multi-year classes! Well, our Teacher Handbooks are full of tricks to make teaching these integrated classes a breeze. The Teacher Handbooks with lesson plans and hints for how to integrate across year levels are included, along with the Student Workbooks, Model Answers and Assessment Guides, within our bundles for each unit. Teachers using these units have been thrilled at how easy it is to use our material in multi-year level classes, whilst knowing that each student is covering curriculum-appropriate material for their own year level.

Russell Coker: More KVM Modules Configuration

Mon, 2017-04-17 21:02

Last year I blogged about blacklisting a video driver so that KVM virtual machines didn’t go into graphics mode [1]. Now I’ve been working on some other things to make virtual machines run better.

I use the same initramfs for the physical hardware as for the virtual machines. So I need to remove modules that are needed for booting the physical hardware from the VMs as well as other modules that get dragged in by systemd and other things. One significant saving from this is that I use BTRFS for the physical machine and the BTRFS driver takes 1M of RAM!
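If you want to see which loaded modules are costing the most memory on your own systems, sorting lsmod by its size column is a quick way to check:

lsmod | sort -k 2 -n | tail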

The first thing I did to reduce the number of modules was to edit /etc/initramfs-tools/initramfs.conf and change “MODULES=most” to “MODULES=dep”. This significantly reduced the number of modules loaded and also stopped the initramfs from probing for a non-existent floppy drive, which added about 20 seconds to the boot. Note that this will result in your initramfs not supporting different hardware. So if you plan to take a hard drive out of your desktop PC and install it in another PC this could be bad for you, but for servers it’s OK, as that sort of upgrade is uncommon for servers and only done with some planning (such as creating an initramfs just for the migration).
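For reference, the change is a one-liner in the config file, and the initramfs needs regenerating for it to take effect:

# /etc/initramfs-tools/initramfs.conf
MODULES=dep

# then rebuild the initramfs for all installed kernels:
update-initramfs -u -k all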

I put the following rmmod commands in /etc/rc.local to remove modules that are automatically loaded:
rmmod btrfs
rmmod evdev
rmmod lrw
rmmod glue_helper
rmmod ablk_helper
rmmod aes_x86_64
rmmod ecb
rmmod xor
rmmod raid6_pq
rmmod cryptd
rmmod gf128mul
rmmod ata_generic
rmmod ata_piix
rmmod i2c_piix4
rmmod libata
rmmod scsi_mod

In /etc/modprobe.d/blacklist.conf I have the following lines to stop drivers being loaded. The first line is to stop the video mode being set and the rest are just to save space. One thing that inspired me to do this is that the parallel port driver gave a kernel error when it loaded and tried to access non-existent hardware.
blacklist bochs_drm
blacklist joydev
blacklist ppdev
blacklist sg
blacklist psmouse
blacklist pcspkr
blacklist sr_mod
blacklist acpi_cpufreq
blacklist cdrom
blacklist tpm
blacklist tpm_tis
blacklist floppy
blacklist parport_pc
blacklist serio_raw
blacklist button

On the physical machine I have the following in /etc/modprobe.d/blacklist.conf. Most of this is to prevent loading of filesystem drivers when making an initramfs. I do this because I know there’s never going to be any need for CDs, parallel devices, graphics, or strange block devices in a server room. I wouldn’t do any of this for a desktop workstation or laptop.
blacklist ppdev
blacklist parport_pc
blacklist cdrom
blacklist sr_mod
blacklist nouveau

blacklist ufs
blacklist qnx4
blacklist hfsplus
blacklist hfs
blacklist minix
blacklist ntfs
blacklist jfs
blacklist xfs


Chris Smart: Creating an OpenStack Ironic deploy image with Buildroot

Sun, 2017-04-16 19:02

Ironic is an OpenStack project which provisions bare metal machines (as opposed to virtual).

A tool called Ironic Python Agent (IPA) is used to control and provision these physical nodes, performing tasks such as wiping the machine and writing an image to disk. This is done by booting a custom Linux kernel and initramfs image which runs IPA and connects back to the Ironic Conductor.

The Ironic project supports a couple of different image builders, including CoreOS, TinyCore and others via Disk Image Builder.

These have their limitations, however. For example, they require root privileges to be built and, with the exception of TinyCore, are all hundreds of megabytes in size. One of the downsides of TinyCore is limited hardware support, and although it’s not used in production, it is used in the OpenStack gating tests (where it’s booted in virtual machines with ~300MB RAM).

Large deployment images mean a longer delay in the provisioning of nodes, so I set out to create a small, customisable image that solves the problems of the other existing images.

Buildroot

I chose to use Buildroot, a well regarded, simple to use tool for building embedded Linux images.

So far it has been quite successful as a proof of concept.

Customisation can be done via the menuconfig system, similar to the Linux kernel.

Buildroot menuconfig

Source code

All of the source code for building the image is up on my GitHub account in the ipa-buildroot repository. I have also written up documentation which should walk you through the whole build and customisation process.

The ipa-buildroot repository contains the IPA specific Buildroot configurations and tracks upstream Buildroot in a Git submodule. By using upstream Buildroot and our external repository, the IPA Buildroot configuration comes up as an option for regular Buildroot build.

IPA in list of Buildroot default configs
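For a feel of how an external tree like this is normally consumed, here’s a rough sketch; BR2_EXTERNAL is standard Buildroot, but the defconfig name below is an assumption, so check the repo’s documentation for the exact steps:

cd buildroot
make BR2_EXTERNAL=../ipa-buildroot list-defconfigs   # the IPA config should appear in this list
make BR2_EXTERNAL=../ipa-buildroot ipa_defconfig     # assumed defconfig name
make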

Buildroot will compile the kernel and initramfs, then post-build scripts clone the Ironic Python Agent repository and create Python wheels for the target.

This keeps things highly flexible with regard to the version of Ironic Python Agent you want to use (you can specify the location and branch of both the ironic-python-agent and requirements repositories).

Set Ironic Python Agent and Requirements location and Git version

I created the kernel config from scratch (using tinyconfig) and deliberately tried to balance size and functionality. It should boot on most Intel based machines (BIOS and UEFI), however hardware support like hard disk and ethernet controllers is deliberately limited. The goal was to start small and add more support as needed.

By using Buildroot, customising the Linux kernel is pretty easy! You can just run this to configure the kernel and rebuild your image:

make linux-menuconfig && make

If this interests you, please check it out! Any suggestions are welcome.

Francois Marier: Automatically renewing Let's Encrypt TLS certificates on Debian using Certbot

Fri, 2017-04-14 01:03

I use Let's Encrypt TLS certificates on my Debian servers along with the Certbot tool. Since I use the "temporary webserver" method of proving domain ownership via the ACME protocol, I cannot use the cert renewal cronjob built into Certbot.

Instead, this is the script I put in /etc/cron.daily/certbot-renew:

#!/bin/bash

/usr/bin/certbot renew --quiet --pre-hook "/bin/systemctl stop apache2.service" --post-hook "/bin/systemctl start apache2.service"

pushd /etc/ > /dev/null
/usr/bin/git add letsencrypt
DIFFSTAT="$(/usr/bin/git diff --cached --stat)"
if [ -n "$DIFFSTAT" ] ; then
    /usr/bin/git commit --quiet -m "Renewed letsencrypt certs"
    echo "$DIFFSTAT"
fi
popd > /dev/null

It temporarily disables my Apache webserver while it renews the certificates and then only outputs something to STDOUT (since my cronjob will email me any output) if certs have been renewed.

Since I'm using etckeeper to keep track of config changes on my servers, my renewal script also commits to the repository if any certs have changed.

External Monitoring

In order to catch mistakes or oversights, I use ssl-cert-check to monitor my domains once a day:

ssl-cert-check -s fmarier.org -p 443 -q -a -e francois@fmarier.org
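Since it only needs to run once a day, a tiny cron.daily script is enough to wire it up; this is just a sketch and the filename is hypothetical:

#!/bin/sh
# /etc/cron.daily/ssl-cert-check-domains (hypothetical path)
ssl-cert-check -s fmarier.org -p 443 -q -a -e francois@fmarier.org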

I also signed up with Cert Spotter which watches the Certificate Transparency log and notifies me of any newly-issued certificates for my domains.

In other words, I get notified:

  • if my cronjob fails and a cert is about to expire, or
  • as soon as a new cert is issued.

The whole thing seems to work well, but if there's anything I could be doing better, feel free to leave a comment!

Colin Charles: Speaking in April 2017

Sun, 2017-04-09 01:02

It’s been a while since I’ve blogged (will have to catch up soon), but here are a few appearances:

  • How we use MySQL today – April 10 2017 – New York MySQL meetup. I am almost certain this will be very interesting with the diversity of speakers and topics.
  • Percona Live 2017 – April 24-27 2017 – Santa Clara, California. This is going to be huge, as it’s expanded beyond just MySQL to include MongoDB, PostgreSQL, and other open source databases. It might even be the conference with the largest time series track out there. Use code COLIN30 for the best discount at registration.

I will also be in attendance at the MariaDB Developer’s (Un)Conference, and M|17 that follows.

Dave Hall: Remote Presentations

Thu, 2017-04-06 17:02

Living in the middle of nowhere and working most of my hours in the evenings, I have few opportunities to attend events in person, let alone deliver presentations. As someone who likes to share knowledge and present at events, this is a problem. My workaround has been presenting remotely. Many of my talks are available in a playlist on my YouTube channel.

I've been doing remote presentations for many years. During this time I have learned a lot about what it takes to make a remote presentation successful.

Preparation

When scheduling a remote session you should make sure there is enough time for a test before your scheduled slot. Personally I prefer presenting after lunch as it allows an hour or so for dealing with any gremlins. The test presentation should use the same machines and connections you'll be using for your presentation.

I prefer using Hangouts On Air for my presentations. This allows me to stream my session to the world and have it recorded for future reference. I review every one of my recorded talks to see what I can do better next time.

Both sides of the connection should use wired connections. WiFi, especially at conferences, can be flaky. Organisers should ensure that all presentation machines are using Ethernet, and if possible it should be on a separate VLAN.

Tips for Presenters

Presenting to a remote audience is very different to presenting in front of a live audience. When presenting in person you're able to focus on people in the audience who seem to be really engaged with your presentation or scan the crowd to see if you're putting people to sleep. Even if there is a webcam on the audience it is likely to be grainy and in a fixed position. It is also difficult to pace when presenting remotely.

When presenting in person your slides will be displayed in full screen mode, often with a presenter view in your application of choice. Most remote presentation tools don't allow you to run your slides in full screen mode. This makes it more difficult as a presenter. Transitions won't work, videos won't autoplay, and any links Keynote (and PowerPoint) open will open in a new window that isn't being shared, which makes demos trickier. If you keep the slide thumbnails visible to remind you of what is coming next, the audience will see them too. Recently I worked out that printing the thumbnails avoids revealing the punchlines prematurely.

Find out as much information as possible about the room your presentation will be held in. How big is it? What is the seating configuration? Where is the screen relative to where the podium is?

Tips for Organisers

Event organisers are usually flat out on the day of the event. Having to deal with a remote presenter adds to the workload. Some preparation can make life easier for the organisers. Well before the event day make sure someone is nominated to be the point of contact for the presenter. If possible share the details (name, email and mobile number) for the primary contact and a fallback. This avoids the presenter chasing random people from the organising team.

On the day of the event communicate delays/schedule changes to the presenter. This allows them to be ready to go at the right time.

It is always nice for the speaker to receive a swag bag and name tag in the mail. If you can afford to send this, your speaker will always appreciate it.

Need a Speaker?

Are you looking for a speaker to talk about Drupal, automation, devops, workflows or open source? I'd be happy to consider speaking at your event. If your event doesn't have a travel budget to fly me in, then I can present remotely. To discuss this further please get in touch using my contact form.

Lev Lafayette: 'Advanced Computing': An International Journal of Plagiarism

Wed, 2017-04-05 17:05

Advanced Computing: An International Journal was a publication that I was considering writing for. However, it is almost certainly a predatory open-access journal that seeks a "publication charge" without even performing the minimal standards of editorial checking.

I can just tolerate the fact that the most recent issue has numerous spelling and grammatical errors, as I believe that English is not the first language of the authors. It should have been caught by the editors, but we'll let that slide for a far greater crime: that of widespread plagiarism.

The fact that the editors clearly didn't even check for this is an inexcusable oversight.

I have opened this correspondence with the editors in the hope that others will find it prior to submitting, or even considering submission, to the journal in question. I also hope the editors take the opportunity to dramatically improve their editorial standards.


Michael Still: Light to Light, Day Three

Wed, 2017-04-05 11:00
The third and final day of the Light to Light Walk at Ben Boyd National Park. This was a shorter (8 kms), easier walk. A nice way to finish the journey.



Interactive map for this route.

                     

Tags for this post: events pictures 20170313 photo scouts bushwalk
Related posts: Light to Light, Day Two; Exploring the Jagungal; Light to Light, Day One; Scout activity: orienteering at Mount Stranger; Potato Point


Michael Still: Light to Light, Day Two

Wed, 2017-04-05 11:00
Our second day walking the Light to Light walk in Ben Boyd National Park. This second day was about 10 kms and was on easier terrain than the first day. That said, probably a little less scenic than the first day too.



Interactive map for this route.

             

Tags for this post: events pictures 20170312 photo scouts bushwalk
Related posts: Light to Light, Day Three; Exploring the Jagungal; Light to Light, Day One; Scout activity: orienteering at Mount Stranger; Potato Point


Michael Still: Light to Light, Day One

Wed, 2017-04-05 09:00
Macarthur Scouts took a group of teenagers down to Ben Boyd National Park on the weekend to do the Light to Light walk. The first day was 14 kms through lovely undulating terrain. This was the hardest day of the walk, but very rewarding and I think we all had fun.



Interactive map for this route.

                                       


Tags for this post: events pictures 20170311 photo scouts bushwalk
Related posts: Light to Light, Day Three; Light to Light, Day Two; Exploring the Jagungal; Scout activity: orienteering at Mount Stranger; Potato Point


Pia Waugh: Iteration or Transformation in government: paint jobs and engines

Mon, 2017-04-03 11:01

I was recently at an event talking about all things technology with a fascinating group of people. It was a reminder to me that digital transformation has become largely confused with digital iteration, and we need to reset the narrative around this space if we are to realise the real opportunities and benefits of technology moving forward. I gave a speech recently about major paradigm shifts that have brought us to where we are and I encourage everyone to consider and explore these paradigm shifts as important context for this blog post and their own work, but this blog post will focus specifically on a couple of examples of actual transformative change worth exploring.

The TL;DR is simply that you need to be careful not to mistake iteration for transformation. Iteration is an improvement on the status quo. Transformation is a new model of working that is, hopefully, fundamentally better than the status quo. As a rule of thumb, if what you are doing is simply better, faster or cheaper, then it is probably just iterative. There are many examples from innovation and digital transformation agendas which are just improvements on the status quo, but two examples of actual transformation of government that I think are worth exploring are Gov-as-an-API and mutually beneficial partnerships to address shared challenges.

Background

Firstly, why am I even interested in “digital transformation”? Well, I’ve worked on open data in the Australian Federal Government since 2012 and very early on we recognised that open data was just a step towards the idea of “Gov as a Platform” as articulated by Tim O’Reilly nearly 10 years ago. Basically, he spoke about the potential to transform government into Government as a Platform, similar (for those unfamiliar with the “as a platform” idea) to Google Maps, or the Apple/Google app stores. Basically government could provide the data, content, transaction services and even business rules (regulation, common patterns such as means testing, building codes, etc) in a consumable, componentised and modular fashion to support a diverse ecosystem of service delivery, analysis and products by myriad agents, including private and public sector, but also citizens themselves.

Seems obvious right? I mean the private sector (the tech sector in any case) have been taking this approach for a decade.

What I have found in government is a lot of interest in “digital” where it is usually simply digitising an existing process, product or service. Consumable, modular architecture as a strategic approach to achieve greater flexibility and agility within an organisation, whilst enabling a broader ecosystem to build on top, is simply not understood by many. Certainly there are pockets that understand this, especially at the practitioner level, but agencies are naturally motivated to simply deliver what they need in isolation from a whole of government view. It was wonderful to recently see New Zealand picking up a whole of government approach in this vein, but many governments are still focused on simple digitisation rather than transformation.

Why is this a problem? Well, to put it simply, government can’t scale the way it has traditionally worked to meet the needs and challenges of an increasingly changing world. Unless governments can transform to be more responsive, adaptive, collaborative and scalable, then they will become less relevant to the communities they serve and less effective in implementing government policy. Governments need to learn to adapt to the paradigm shifts from centrist to distributed models, from scarcity to surplus resources, from analogue to digital models, from command and control to collaborative relationships, and from closed to open practices.

Gov as an API

One of the greatest impacts of the DTO and the UK Government Digital Service has been to spur a race to the top around user centred design and agile across governments. However, these methods, whilst necessary, are not sufficient for digital transformation, because you too easily see services created that are rapidly developed and better for citizens, but still based on bespoke siloed stacks of technology and content that aren’t reconsumable. Why does this matter? Because there are loads of components needed for multiple services, but siloed service technology stacks lead to duplication, a lack of agility in iterating and improving the user experience on an ongoing basis, a lack of programmatic access to those components which would enable system to system automation, and a complete lack of the “platform” upon which an ecosystem could be built.

When I was at the interim DTO in 2016, we fundamentally realised that no single agency would ever be naturally motivated, funded or mandated to deliver services on behalf of someone else. So rather than assuming a model wherein an agency is expected to do just that, we started considering new models. New systems wherein agencies could achieve what they needed (and were mandated and funded) to do, but where the broader ecosystem could provide multi-channel service delivery where there is no wrong door for citizens to do what they need. One channel might be the magical “life events” lens, another might be third parties, or State and Territory Governments, or citizen mashups. These agents and sectors have ongoing relationships with their users, allowing them to exponentially spread and maintain user-centred design in a way that government by itself cannot afford to do, now or into the future.

This vision was itself just a reflection of Amazon, Google Maps, the Apple “app store” and other platform models so prevalent in the private sector, as described above. But governments everywhere have largely interpreted the “Gov as a Platform” idea as simply common or shared platforms. Whilst common platforms can provide savings and efficiencies, they have not enabled the system transformation needed to get true digital transformation across government.

So what does this mean practically? There are certainly pockets of people doing or experimenting in this space. Here are some of my thoughts to date based on work I’ve done in Australia (at the interim DTO) and in New Zealand (with the Department of Internal Affairs).

Firstly you can largely identify four categories of things involved in any government service:

  • Content – obvious, but taking into account the domain specific content of agencies as well as the kind of custodian or contextual content usually managed by points of aggregation or service delivery
  • Data – any type of list, source of intelligence or statistics, search queries such as ABN lookups
  • Transaction services – anything a person or business interacts with such as registration, payments, claims, reporting, etc. Obviously requires strict security frameworks
  • Business rules – the regulation, legislation, code, policy logic or even reusable patterns such as means testing which are usually hard coded into projects as required. Imagine an authoritative public API with the business logic of government available for consumption by everyone. A good example of pioneering work in this space is the Regulation as a Platform work by Data61.

These categories of components can all be made programmatically available for the delivery of your individual initiative and for broader reuse either publicly (for data, content and business rules) or securely (for transaction services). But you also need some core capabilities that are consumable for any form of digital service, below are a few to consider:

  • Identity and authentication, arguably also taking into account user consent based systems which may be provided from outside of government
  • Service analytics across digital and non digital channels to baseline the user experience and journey with govt and identify what works through evidence. This could also fuel a basic personalisation service.
  • A government web platform to pull together the government “sedan” service
  • Services register – a consumable register of government services (human services) to draw from across the board.

Imagine if we took a conditional approach to matters, where you don’t need to provide documentation to prove your age (birth certificate, licence, passport), all of which give too much information, but rather can provide a verifiable claim that yes, I am over the required age. This would both dramatically reduce the work for gov and improve the privacy of people. See the verifiable claims work by W3C for more info on this concept, but it could be a huge transformation for how gov and privacy operate.

The three key advantages to taking this approach are:

  1. Agency agility – In splitting the front end from a consumable backend, agencies gain the ability to more rapidly iterate the customer experience of the service, taking into account changing user needs and new user platforms (mobile is just the start – augmented reality and embedded computing are just around the corner). When the back end and front end of a service are part of the one monolithic stack, it is simply too expensive and complicated to make many changes to the service.
  2. Ecosystem enablement – As identified above, a key game changer with this model is the ability for others to consume the services to support a multi-channel ecosystem of services, analysis and products delivered by the broader community of government, industry and community players.
  3. Automation – the final and least sexy, though most interesting from a service improvement perspective, is automation. If your data, content, transaction systems and rules are programmatically available, suddenly you create the opportunity for the steps of a life event to be automated, where user consent is granted. The user consent part is really important, just to be clear! So rather than having 17 beautiful but distinct user services that a person has to individually complete, a user could be asked at any one of those entry points whether they’d like the other 16 steps to be automatically completed on their behalf. Perhaps the best way government can serve citizens in many cases is to get out of the way.

Meaningful and mutually beneficial collaboration

Collaboration has become something of a buzzword in government often resulting in meetings, MOUs, principle statements or joint media releases. Occasionally there are genuine joint initiatives but there are still a lot of opportunities to explore new models of collaboration that achieve better outcomes.

Before we talk about how to collaborate, we need to address the elephant in the room: natural motivation. Government often sees consultation as something nice to have, collaboration as a nice way of getting others to contribute to something, and co-design as something to strive for across the business units in your agency. If we consider the idea that government simply cannot meet the challenges or opportunities of the 21st century in isolation, if we acknowledge that government cannot scale at the same pace as the changing domains we serve, then we need to explore new models of collaboration where we actively partner with others for mutual benefit. To do this we need to identify areas for which others are naturally motivated to collaborate.

Firstly, let’s acknowledge there will always be work to do for which there are no naturally motivated partners. Why would anyone else want, at their own cost, to help you set up your mobility strategy, or implement an email server, or provide telephony services? The fact is that a reasonable amount of what any organisation does would be seen as BAU, as commodity, and thus only able to be delivered through internal capacity or contractual relationships with suppliers. So initiatives that try to improve government procurement practices can iteratively improve these customer-supplier arrangements but they don’t lend themselves to meaningful or significant collaboration.

OK, so what sort of things could be done differently? This is where you need to look critically at the purpose of your agency including the highest level goals, and identify who the natural potential allies in those goals could be. You can then approach your natural allies, identify where there are shared interests, challenges or opportunities, and collectively work together to co-design, co-invest, co-deliver and co-resource a better outcome for all involved. Individual allies could use their own resources or contractors for their contribution to the work, but the relationship is one of partnership, the effort and expertise is shared, and the outcomes are more powerful and effective than any one entity would have delivered on their own. In short, the whole becomes greater than the sum of its parts.

I will use the exciting and groundbreaking work of my current employer as a real example to demonstrate the point.

AUSTRAC is the Australian Government financial intelligence agency with some regulatory responsibilities. The purpose of the agency is threefold: 1) to detect and disrupt abuse of the financial system; 2) to strengthen the financial system against abuse; and 3) to contribute to the growth of the Australian economy. So who are natural allies in these goals… banks, law enforcement and fraud-focused agencies, consumer protection organisations, regulatory organisations, fintech and regtech startups, international organisations, other governments, even individual citizens! So to tap into this ecosystem of potential allies, AUSTRAC has launched a new initiative called the “Fintel Alliance” which includes, at its heart, new models of collaborating on shared goals. There are joint intelligence operations on major investigations like the Panama Papers, joint industry initiatives to explore shared challenges and then develop prototypes and reference implementations, active co-design of the new regulatory framework with industry, and international collaborations to strengthen the global financial system against abuse. The model is still in its early days, but already AUSTRAC has shown that a small agency can punch well above its weight by working with others in new and innovative ways.

Other early DTO lessons

I’ll finish with a few lessons from the DTO. I worked at the DTO for the first 8 months (Jan – Sept 2015) when it was being set up. It was a crazy time with people from over 30 agencies thrust together to create a new vision for government services whilst simultaneously learning to speak each other’s language and think in a whole of government(s) way. We found a lot of interesting things, not least of all just how deep the siloed thinking of government ran. For example, internal analysis at the DTO of user research from across government agencies showed that user research tended to be through the narrow lens of an agency’s view of “its customers” and the services delivered by that agency. It was clear that user needs beyond the domain of the agency were seen as out of scope or, at best, treated as a hand-off point.

Whilst at the DTO we started writing a new draft vision, fundamentally based on the idea of an evidence-based, consumable approach to designing and delivering government services, built on reusable components that could be mashed up for a multi-channel ecosystem of service delivery. We tested this with users, agencies and industry, with great feedback. Some of our early thinking, now a year and a half old, is still worth referring back to.

One significant benefit of the DTO and GDS was the cycling of public servants through the agency to experience new ways of working and thinking, and applying an all of government lens across their work. This cultural transformation was then maintained in Australia, at least in part, when those individuals returned to their home agencies. A great lesson for others in this space.

A couple of other lessons learned from the DTO are below:

  • Agencies want to change. They are under pressure from citizens, governments and under budget constraints and know they need better ways to do things.
  • A sandbox is important. Agencies need somewhere to experiment, play with new tools, ideas and methods, draw on different expertise and perspectives, build prototypes and try new ideas. This is ideally best used before major projects are undertaken as a way to quickly test ideas before going to market. It also helps improve expectations of what is possible and what things should cost.
  • Everyone has an agenda; every agency will drive their own agenda using whatever the language of the day is, and agendas will continue to diverge from each other whilst there is no common vision.
  • Evidence is important! And there isn’t generally enough AoG evidence available. Creating an evidence base was a critical part of identifying what works and what doesn’t.
  • Agile is a very specific and useful methodology, but often gets interpreted as something loose, fast, and unreliable. Education about proper agile methods is important.
  • An AoG strategy for transformation is critical. If transformation is seen as a side project, it will never be integrated into BAU.
  • Internal brilliance needs tapping. Too often govt brings in consultants and ignores internal ideas, skills and enthusiasm. There needs to be a combination of public engagement and internal engagement to get the best outcomes.

I want to just finish by acknowledging and thanking the “interim DTO” team and early leadership for their amazing work, vision and collective efforts in establishing the DTO and imagining a better future for service delivery and for government more broadly. It was an incredible time with incredible people, and your work continues to live on and be validated by service delivery initiatives in Australia and across the world. Particular kudos to the team I worked directly with, innovative and awesome public servants all! Sharyn Clarkson, Sean Minney, Mark Muir, Vanessa Roarty, Monique Kenningham, Nigel O’Keefe, Mark McKenzie, Chris Gough, Deb Blackburn, Lisa Howdin, Simon Fisher, Andrew Carter, Fran Ballard and Fiona Payne. Also to our contractors at the time, Ruth Ellison, Donna Spencer and, of course, the incredible and awesome Alex Sadleir.

Francois Marier: Manually expanding a RAID1 array on Ubuntu

Mon, 2017-04-03 05:10

Here are the notes I took while manually expanding a non-LVM encrypted RAID1 array on an Ubuntu machine.

My original setup consisted of a 1 TB drive along with a 2 TB drive, which meant that the RAID1 array was 1 TB in size and the second drive had 1 TB of unused capacity. This is how I replaced the old 1 TB drive with a new 3 TB drive and expanded the RAID1 array to 2 TB (leaving 1 TB unused on the new 3 TB drive).

Partition the new drive

In order to partition the new 3 TB drive, I started by creating a temporary partition on the old 2 TB drive (/dev/sdc) to use up all of the capacity on that drive:

$ parted /dev/sdc
unit s
print
mkpart
print

Then I initialized the partition table and created the EFI partition on the new drive (/dev/sdd):

$ parted /dev/sdd
unit s
mktable gpt
mkpart

Since I want to have the RAID1 array be as large as the smaller of the two drives, I made sure that the second partition (/home) on the new 3 TB drive had:

  • the same start position as the second partition on the old drive
  • the end position of the third partition (the temporary one I just created) on the old drive

I created the partition and flagged it as a RAID one:

mkpart
toggle 2 raid

and then deleted the temporary partition on the old 2 TB drive:

$ parted /dev/sdc
print
rm 3
print

Create a temporary RAID1 array on the new drive

With the new drive properly partitioned, I created a new RAID array for it:

mdadm /dev/md10 --create --level=1 --raid-devices=2 /dev/sdd1 missing

and added it to /etc/mdadm/mdadm.conf:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

which required manual editing of that file to remove duplicate entries.
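The appended entry uses mdadm's usual ARRAY format; the UUID and name below are purely illustrative:

# illustrative example of the appended line, not actual values
ARRAY /dev/md10 metadata=1.2 name=mymachinename:10 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx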

Create the encrypted partition

With the new RAID device in place, I created the encrypted LUKS partition:

cryptsetup -h sha256 -c aes-xts-plain64 -s 512 luksFormat /dev/md10
cryptsetup luksOpen /dev/md10 chome2

I took the UUID for the temporary RAID partition:

blkid /dev/md10

and put it in /etc/crypttab as chome2.
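The resulting /etc/crypttab entry looks something like this; replace the UUID placeholder with whatever blkid printed:

# /etc/crypttab (illustrative entry)
chome2 UUID=<uuid-from-blkid> none luks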

Then, I formatted the new LUKS partition and mounted it:

mkfs.ext4 -m 0 /dev/mapper/chome2
mkdir /home2
mount /dev/mapper/chome2 /home2

Copy the data from the old drive

With the home partitions of both drives mounted, I copied the files over to the new drive:

eatmydata nice ionice -c3 rsync -axHAX --progress /home/* /home2/

making use of wrappers that preserve system responsiveness during I/O-intensive operations.

Switch over to the new drive

After the copy, I switched over to the new drive in a step-by-step way:

  1. Changed the UUID of chome in /etc/crypttab.
  2. Changed the UUID and name of /dev/md1 in /etc/mdadm/mdadm.conf.
  3. Rebooted with both drives.
  4. Checked that the new drive was the one used in the encrypted /home mount using: df -h.

Add the old drive to the new RAID array

With all of this working, it was time to clear the mdadm superblock from the old drive:

mdadm --zero-superblock /dev/sdc1

and then change the second partition of the old drive to make it the same size as the one on the new drive:

$ parted /dev/sdc
rm 2
mkpart
toggle 2 raid
print

before adding it to the new array:

mdadm /dev/md1 -a /dev/sdc1

Rename the new array

To change the name of the new RAID array back to what it was on the old drive, I first had to stop both the old and the new RAID arrays:

umount /home
cryptsetup luksClose chome
mdadm --stop /dev/md10
mdadm --stop /dev/md1

before running this command:

mdadm --assemble /dev/md1 --name=mymachinename:1 --update=name /dev/sdd2

and updating the name in /etc/mdadm/mdadm.conf.

The last step was to regenerate the initramfs:

update-initramfs -u

before rebooting into something that looks exactly like the original RAID1 array but with twice the size.

OpenSTEM: Staying safe during Cyclone Debbie

Thu, 2017-03-30 16:06
Cyclone Debbie as seen from the ISS

We hope all teachers and students are safe in the areas of Queensland and New South Wales affected by the cyclone weather! We understand that many state schools (anywhere south of Agnes Water down to northern New South Wales) are closed today, and the radar shows a very large rain front coming through. Near Brisbane it’s been raining for many hours already, and the wind is now picking up as well. It’s good to be inside, although things are starting to feel moist (reminding Arjen of when he lived in Cairns).

Why not take this opportunity to replace dry old teaching materials? Use coupon code DEBBIE for a 25% discount on any Understanding our World® unit. This special Cyclone Debbie offer ends Sunday 2nd April.

Did you know that, in the Understanding Our World units, Year 5 students do work on Natural Disasters during this term?

Also, do take a peek at the Open Source Earth Wind Patterns site at NullSchool – using live data to create a moving image. All open. Beautiful.