Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Colin Charles: Speaking in April 2017

Sun, 2017-04-09 01:02

It’s been a while since I’ve blogged (I will have to catch up soon), but here are a few appearances:

  • How we use MySQL today – April 10 2017 – New York MySQL meetup. I am almost certain this will be very interesting with the diversity of speakers and topics.
  • Percona Live 2017 – April 24-27 2017 – Santa Clara, California. This is going to be huge, as it’s expanded beyond just MySQL to include MongoDB, PostgreSQL, and other open source databases. It might even be the conference with the largest time series track out there. Use code COLIN30 for the best discount at registration.

I will also be in attendance at the MariaDB Developer’s (Un)Conference, and M|17 that follows.

Dave Hall: Remote Presentations

Thu, 2017-04-06 17:02

Living in the middle of nowhere and working most of my hours in the evenings, I have few opportunities to attend events in person, let alone deliver presentations. As someone who likes to share knowledge and present at events, this is a problem. My workaround has been presenting remotely. Many of my talks are available on a playlist on my YouTube channel.

I've been doing remote presentations for many years. During this time I have learned a lot about what it takes to make a remote presentation successful.

Preparation

When scheduling a remote session you should make sure there is enough time for a test before your scheduled slot. Personally I prefer presenting after lunch as it allows an hour or so for dealing with any gremlins. The test presentation should use the same machines and connections you'll be using for your presentation.

I prefer using Hangouts On Air for my presentations. This allows me to stream my session to the world and have it recorded for future reference. I review every one of my recorded talks to see what I can do better next time.

Both sides of the connection should use wired connections. WiFi, especially at conferences, can be flaky. Organisers should ensure that all presentation machines are using Ethernet, and if possible it should be on a separate VLAN.

Tips for Presenters

Presenting to a remote audience is very different to presenting in front of a live audience. When presenting in person you're able to focus on people in the audience who seem to be really engaged with your presentation, or scan the crowd to see if you're putting people to sleep. Even if there is a webcam on the audience it is likely to be grainy and in a fixed position. It is also more difficult to judge your pacing when presenting remotely.

When presenting in person your slides will be displayed in full screen mode, often with a presenter view in your application of choice. Most remote presentation tools don't allow you to run your slides in full screen mode, which makes things more difficult as a presenter. Transitions won't work, videos won't autoplay, and any links Keynote (and PowerPoint) open will open in a new window that isn't being shared, which makes demos trickier. If you don't hide the slide thumbnails that remind you of what is coming next, the audience will see them too. Recently I worked out that printing the thumbnails avoids revealing the punchlines prematurely.

Find out as much information as possible about the room your presentation will be held in. How big is it? What is the seating configuration? Where is the screen relative to where the podium is?

Tips for Organisers

Event organisers are usually flat out on the day of the event. Having to deal with a remote presenter adds to the workload. Some preparation can make life easier for the organisers. Well before the event day make sure someone is nominated to be the point of contact for the presenter. If possible share the details (name, email and mobile number) for the primary contact and a fallback. This avoids the presenter chasing random people from the organising team.

On the day of the event communicate delays/schedule changes to the presenter. This allows them to be ready to go at the right time.

It is always nice for the speaker to receive a swag bag and name tag in the mail. If you can afford to send this, your speaker will always appreciate it.

Need a Speaker?

Are you looking for a speaker to talk about Drupal, automation, devops, workflows or open source? I'd be happy to consider speaking at your event. If your event doesn't have a travel budget to fly me in, then I can present remotely. To discuss this further please get in touch using my contact form.

Lev Lafayette: 'Advanced Computing': An International Journal of Plagiarism

Wed, 2017-04-05 17:05

Advanced Computing: An International Journal was a publication that I was considering writing for. However, it is almost certainly a predatory open-access journal that seeks a "publication charge" without even performing the minimal standards of editorial checking.

I can just tolerate the fact that the most recent issue has numerous spelling and grammatical errors, as I believe that English is not the first language of the authors. Those errors should have been caught by the editors, but we'll let that slide for a far greater crime - that of widespread plagiarism.

The fact that the editors clearly didn't even check for this is an inexcusable oversight.

I opened this correspondence to the editors in the hope that others will find it prior to submitting or even considering submission to the journal in question. I also hope the editors take the opportunity to dramatically improve their editorial standards.


Michael Still: Light to Light, Day Three

Wed, 2017-04-05 11:00
The third and final day of the Light to Light Walk at Ben Boyd National Park. This was a shorter (8 km), easier walk. A nice way to finish the journey.



Interactive map for this route.


Tags for this post: events pictures 20170313 photo scouts bushwalk
Related posts: Light to Light, Day Two; Exploring the Jagungal; Light to Light, Day One; Scout activity: orienteering at Mount Stranger; Potato Point


Michael Still: Light to Light, Day Two

Wed, 2017-04-05 11:00
Our second day walking the Light to Light walk in Ben Boyd National Park. This second day was about 10 km and was on easier terrain than the first day. That said, it was probably a little less scenic than the first day too.



Interactive map for this route.


Tags for this post: events pictures 20170312 photo scouts bushwalk
Related posts: Light to Light, Day Three; Exploring the Jagungal; Light to Light, Day One; Scout activity: orienteering at Mount Stranger; Potato Point


Michael Still: Light to Light, Day One

Wed, 2017-04-05 09:00
Macarthur Scouts took a group of teenagers down to Ben Boyd National Park on the weekend to do the Light to Light walk. The first day was 14 km through lovely undulating terrain. This was the hardest day of the walk, but very rewarding and I think we all had fun.



Interactive map for this route.



Tags for this post: events pictures 20170311 photo scouts bushwalk
Related posts: Light to Light, Day Three; Light to Light, Day Two; Exploring the Jagungal; Scout activity: orienteering at Mount Stranger; Potato Point


Pia Waugh: Iteration or Transformation in government: paint jobs and engines

Mon, 2017-04-03 11:01

I was recently at an event talking about all things technology with a fascinating group of people. It was a reminder to me that digital transformation has become largely confused with digital iteration, and we need to reset the narrative around this space if we are to realise the real opportunities and benefits of technology moving forward. I gave a speech recently about major paradigm shifts that have brought us to where we are and I encourage everyone to consider and explore these paradigm shifts as important context for this blog post and their own work, but this blog post will focus specifically on a couple of examples of actual transformative change worth exploring.

The TL;DR is simply that you need to be careful not to mistake iteration for transformation. Iteration is an improvement on the status quo. Transformation is a new model of working that is, hopefully, fundamentally better than the status quo. As a rule of thumb, if what you are doing is simply better, faster or cheaper, then it is probably just iterative. There are many examples from innovation and digital transformation agendas which are just improvements on the status quo, but two examples of actual transformation of government I think are worth exploring are Gov-as-an-API and mutually beneficial partnerships to address shared challenges.

Background

Firstly, why am I even interested in “digital transformation”? Well, I’ve worked on open data in the Australian Federal Government since 2012 and very early on we recognised that open data was just a step towards the idea of “Gov as a Platform” as articulated by Tim O’Reilly nearly 10 years ago. Basically, he spoke about the potential to transform government into Government as a Platform, similar (for those unfamiliar with the “as a platform” idea) to Google Maps, or the Apple/Google app stores. The idea is that government could provide the data, content, transaction services and even business rules (regulation, common patterns such as means testing, building codes, etc) in a consumable, componentised and modular fashion to support a diverse ecosystem of service delivery, analysis and products by myriad agents, including private and public sector, but also citizens themselves.

Seems obvious right? I mean the private sector (the tech sector in any case) have been taking this approach for a decade.

What I have found in government is a lot of interest in “digital” where it is usually simply digitising an existing process, product or service. The understanding of consumable, modular architecture as a strategic approach to achieve greater flexibility and agility within an organisation, whilst enabling a broader ecosystem to build on top, is simply not understood by many. Certainly there are pockets that understand this, especially at the practitioner level, but agencies are naturally motivated to simply deliver what they need in isolation from a whole of government view. It was wonderful to recently see New Zealand picking up a whole of government approach in this vein but many governments are still focused on simple digitisation rather than transformation.

Why is this a problem? Well, to put it simply, government can’t scale the way it has traditionally worked to meet the needs and challenges of an increasingly changing world. Unless governments can transform to be more responsive, adaptive, collaborative and scalable, then they will become less relevant to the communities they serve and less effective in implementing government policy. Governments need to learn to adapt to the paradigm shifts from centrist to distributed models, from scarcity to surplus resources, from analogue to digital models, from command and control to collaborative relationships, and from closed to open practices.

Gov as an API

One of the greatest impacts of the DTO and the UK Government Digital Service has been to spur a race to the top around user-centred design and agile across governments. However, these methods, whilst necessary, are not sufficient for digital transformation, because you too easily see services created that are rapidly developed and better for citizens, but still based on bespoke, siloed stacks of technology and content that aren’t reconsumable. Why does this matter? Because there are loads of components needed for multiple services, but siloed service technology stacks lead to duplication, a lack of agility in iterating and improving the user experience on an ongoing basis, a lack of programmatic access to those components which would enable system to system automation, and a complete lack of the “platform” upon which an ecosystem could be built.

When I was at the interim DTO in 2016, we fundamentally realised that no single agency would ever be naturally motivated, funded or mandated to deliver services on behalf of someone else. So rather than assuming a model wherein an agency is expected to do just that, we started considering new models. New systems wherein agencies could achieve what they needed (and were mandated and funded) to do, but where the broader ecosystem could provide multi-channel service delivery so that there is no wrong door for citizens to do what they need. One channel might be the magical “life events” lens, another might be third parties, or State and Territory Governments, or citizen mashups. These agents and sectors have ongoing relationships with their users, allowing them to exponentially spread and maintain user-centred design in a way that government by itself cannot afford to do, now or into the future.

This vision was itself just a reflection of Amazon, Google Maps, the Apple “app store” and other platform models so prevalent in the private sector, as described above. But governments everywhere have largely interpreted the “Gov as a Platform” idea as simply common or shared platforms. Whilst common platforms can provide savings and efficiencies, they have not enabled the system transformation needed to get true digital transformation across government.

So what does this mean practically? There are certainly pockets of people doing or experimenting in this space. Here are some of my thoughts to date based on work I’ve done in Australia (at the interim DTO) and in New Zealand (with the Department of Internal Affairs).

Firstly you can largely identify four categories of things involved in any government service:

  • Content – obvious, but taking into account the domain specific content of agencies as well as the kind of custodian or contextual content usually managed by points of aggregation or service delivery
  • Data – any type of list, source of intelligence or statistics, search queries such as ABN lookups
  • Transaction services – anything a person or business interacts with such as registration, payments, claims, reporting, etc. Obviously requires strict security frameworks
  • Business rules – the regulation, legislation, code, policy logic or even reusable patterns such as means testing which are usually hard coded into projects as required. Imagine an authoritative public API with the business logic of government available for consumption by everyone. A good example of pioneering work in this space is the Regulation as a Platform work by Data61.

These categories of components can all be made programmatically available for the delivery of your individual initiative and for broader reuse either publicly (for data, content and business rules) or securely (for transaction services). But you also need some core capabilities that are consumable for any form of digital service, below are a few to consider:

  • Identity and authentication, arguably also taking into account user consent based systems which may be provided from outside of government
  • Service analytics across digital and non digital channels to baseline the user experience and journey with govt and identify what works through evidence. This could also fuel a basic personalisation service.
  • A government web platform to pull together the government “sedan” service
  • Services register – a consumable register of government services (human services) to draw from across the board.

Imagine if we took a conditional approach to such matters, where you don’t need to provide documentation to prove your age (birth certificate, licence, passport), all of which give too much information, but rather can provide a verifiable claim that yes, I am over the required age. This would both dramatically reduce the work for government and improve the privacy of people. See the verifiable claims work by W3C for more info on this concept, but it could be a huge transformation for how government and privacy operate.

The three key advantages to taking this approach are:

  1. Agency agility – In splitting the front end from a consumable backend, agencies gain the ability to more rapidly iterate the customer experience of the service, taking into account changing user needs and new user platforms (mobile is just the start – augmented reality and embedded computing are just around the corner). When the back end and front end of a service are part of the one monolithic stack, it is simply too expensive and complicated to make many changes to the service.
  2. Ecosystem enablement – As identified above, a key game changer with this model is the ability for others to consume the services to support a multi-channel ecosystem of services, analysis and products delivered by the broader community of government, industry and community players.
  3. Automation – the final and least sexy, though most interesting from a service improvement perspective, is automation. If your data, content, transaction systems and rules are programmatically available, suddenly you create the opportunity for the steps of a life event to be automated, where user consent is granted. The user consent part is really important, just to be clear! So rather than having 17 beautiful but distinct user services that a person has to individually complete, a user could be asked at any one of those entry points whether they’d like the other 16 steps to be automatically completed on their behalf. Perhaps the best way government can serve citizens in many cases is to get out of the way.

Meaningful and mutually beneficial collaboration

Collaboration has become something of a buzzword in government often resulting in meetings, MOUs, principle statements or joint media releases. Occasionally there are genuine joint initiatives but there are still a lot of opportunities to explore new models of collaboration that achieve better outcomes.

Before we talk about how to collaborate, we need to address the elephant in the room: natural motivation. Government often sees consultation as something nice to have, collaboration as a nice way of getting others to contribute to something, and co-design as something to strive for across the business units in your agency. If we consider the idea that government simply cannot meet the challenges or opportunities of the 21st century in isolation, if we acknowledge that government cannot scale at the same pace as the changing domains we serve, then we need to explore new models of collaboration where we actively partner with others for mutual benefit. To do this we need to identify areas for which others are naturally motivated to collaborate.

Firstly, let’s acknowledge there will always be work to do for which there are no naturally motivated partners. Why would anyone else want, at their own cost, to help you set up your mobility strategy, or implement an email server, or provide telephony services? The fact is that a reasonable amount of what any organisation does would be seen as BAU, as commodity, and thus only able to be delivered through internal capacity or contractual relationships with suppliers. So initiatives that try to improve government procurement practices can iteratively improve these customer-supplier arrangements but they don’t lend themselves to meaningful or significant collaboration.

OK, so what sort of things could be done differently? This is where you need to look critically at the purpose of your agency including the highest level goals, and identify who the natural potential allies in those goals could be. You can then approach your natural allies, identify where there are shared interests, challenges or opportunities, and collectively work together to co-design, co-invest, co-deliver and co-resource a better outcome for all involved. Individual allies could use their own resources or contractors for their contribution to the work, but the relationship is one of partnership, the effort and expertise is shared, and the outcomes are more powerful and effective than any one entity would have delivered on their own. In short, the whole becomes greater than the sum of its parts.

I will use the exciting and groundbreaking work of my current employer as a real example to demonstrate the point.

AUSTRAC is the Australian Government financial intelligence agency with some regulatory responsibilities. The purpose of the agency is threefold: 1) to detect and disrupt abuse of the financial system; 2) to strengthen the financial system against abuse; and 3) to contribute to the growth of the Australian economy. So who are natural allies in these goals… banks, law enforcement and fraud focused agencies, consumer protection organisations, regulatory organisations, fintech and regtech startups, international organisations, other governments, even individual citizens! So to tap into this ecosystem of potential allies, AUSTRAC has launched a new initiative called the “Fintel Alliance” which includes, at its heart, new models of collaborating on shared goals. There are joint intelligence operations on major investigations like the Panama Papers, joint industry initiatives to explore shared challenges and then develop prototypes and reference implementations, active co-design of the new regulatory framework with industry, and international collaborations to strengthen the global financial system against abuse. The model is still in its early days, but already AUSTRAC has shown that a small agency can punch well above its weight by working with others in new and innovative ways.

Other early DTO lessons

I’ll finish with a few lessons from the DTO. I worked at the DTO for the first 8 months (Jan – Sept 2015) when it was being set up. It was a crazy time, with people from over 30 agencies thrust together to create a new vision for government services whilst simultaneously learning to speak each other’s language and think in a whole of government(s) way. We found a lot of interesting things, not least of all just how deep the siloed thinking of government ran. For example, internal analysis at the DTO of user research from across government agencies showed that user research tended to be through the narrow lens of an agency’s view of “its customers” and the services delivered by that agency. It was clear that user needs beyond the domain of the agency were seen as out of scope, or, at best, treated as a hand-off point.

We started writing about a new draft vision whilst at the DTO which fundamentally was based on the idea of an evidence based, consumable approach to designing and delivering government services, built on reusable components that could be mashed up for a multi-channel ecosystem of service delivery. We tested this with users, agencies and industry with great feedback. Some of our early thinking is below, now a year and a half old, but worth referring back to:

One significant benefit of the DTO and GDS was the cycling of public servants through the agency to experience new ways of working and thinking, and applying an all of government lens across their work. This cultural transformation was then maintained in Australia, at least in part, when those individuals returned to their home agencies. A great lesson for others in this space.

A couple of other lessons learned from the DTO are below:

  • Agencies want to change. They are under pressure from citizens, governments and under budget constraints and know they need better ways to do things.
  • A sandbox is important. Agencies need somewhere to experiment, play with new tools, ideas and methods, draw on different expertise and perspectives, build prototypes and try new ideas. This is best used before major projects are undertaken, as a way to quickly test ideas before going to market. It also helps improve expectations of what is possible and what things should cost.
  • Everyone has an agenda; every agency will drive their own agenda in whatever the language of the day is, and agendas will continue to diverge from each other whilst there is no common vision.
  • Evidence is important! And there isn’t generally enough AoG evidence available. Creating an evidence base was a critical part of identifying what works and what doesn’t.
  • Agile is a very specific and useful methodology, but often gets interpreted as something loose, fast, and unreliable. Education about proper agile methods is important.
  • An AoG strategy for transformation is critical. If transformation is seen as a side project, it will never be integrated into BAU.
  • Internal brilliance needs tapping. Too often govt brings in consultants and ignores internal ideas, skills and enthusiasm. There needs to be a combination of public engagement and internal engagement to get the best outcomes.

I want to just finish by acknowledging and thanking the “interim DTO” team and early leadership for their amazing work, vision and collective efforts in establishing the DTO and imagining a better future for service delivery and for government more broadly. It was an incredible time with incredible people, and your work continues to live on and be validated by service delivery initiatives in Australia and across the world. Particular kudos to the team I worked directly with, innovative and awesome public servants all: Sharyn Clarkson, Sean Minney, Mark Muir, Vanessa Roarty, Monique Kenningham, Nigel O’Keefe, Mark McKenzie, Chris Gough, Deb Blackburn, Lisa Howdin, Simon Fisher, Andrew Carter, Fran Ballard and Fiona Payne. Also to our contractors at the time, Ruth Ellison, Donna Spencer and, of course, the incredible and awesome Alex Sadleir.

Francois Marier: Manually expanding a RAID1 array on Ubuntu

Mon, 2017-04-03 05:10

Here are the notes I took while manually expanding a non-LVM, encrypted RAID1 array on an Ubuntu machine.

My original setup consisted of a 1 TB drive along with a 2 TB drive, which meant that the RAID1 array was 1 TB in size and the second drive had 1 TB of unused capacity. This is how I replaced the old 1 TB drive with a new 3 TB drive and expanded the RAID1 array to 2 TB (leaving 1 TB unused on the new 3 TB drive).

Partition the new drive

In order to partition the new 3 TB drive, I started by creating a temporary partition on the old 2 TB drive (/dev/sdc) to use up all of the capacity on that drive:

$ parted /dev/sdc
unit s
print
mkpart
print

Then I initialized the partition table and created the EFI partition on the new drive (/dev/sdd):

$ parted /dev/sdd
unit s
mktable gpt
mkpart

Since I want to have the RAID1 array be as large as the smaller of the two drives, I made sure that the second partition (/home) on the new 3 TB drive had:

  • the same start position as the second partition on the old drive
  • the same end position as the third partition (the temporary one I just created) on the old drive

I created the partition and flagged it as a RAID partition:

mkpart
toggle 2 raid

and then deleted the temporary partition on the old 2 TB drive:

$ parted /dev/sdc
print
rm 3
print

Create a temporary RAID1 array on the new drive

With the new drive properly partitioned, I created a new RAID array for it:

mdadm /dev/md10 --create --level=1 --raid-devices=2 /dev/sdd1 missing

and added it to /etc/mdadm/mdadm.conf:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

which required manual editing of that file to remove duplicate entries.

Create the encrypted partition

With the new RAID device in place, I created the encrypted LUKS partition:

cryptsetup -h sha256 -c aes-xts-plain64 -s 512 luksFormat /dev/md10
cryptsetup luksOpen /dev/md10 chome2

I took the UUID for the temporary RAID partition:

blkid /dev/md10

and put it in /etc/crypttab as chome2.

Then, I formatted the new LUKS partition and mounted it:

mkfs.ext4 -m 0 /dev/mapper/chome2
mkdir /home2
mount /dev/mapper/chome2 /home2

Copy the data from the old drive

With the home partitions of both drives mounted, I copied the files over to the new drive:

eatmydata nice ionice -c3 rsync -axHAX --progress /home/* /home2/

making use of wrappers that preserve system responsiveness during I/O-intensive operations.

Switch over to the new drive

After the copy, I switched over to the new drive in a step-by-step way:

  1. Changed the UUID of chome in /etc/crypttab.
  2. Changed the UUID and name of /dev/md1 in /etc/mdadm/mdadm.conf.
  3. Rebooted with both drives.
  4. Checked that the new drive was the one used in the encrypted /home mount using: df -h.
Add the old drive to the new RAID array

With all of this working, it was time to clear the mdadm superblock from the old drive:

mdadm --zero-superblock /dev/sdc1

and then change the second partition of the old drive to make it the same size as the one on the new drive:

$ parted /dev/sdc
rm 2
mkpart
toggle 2 raid
print

before adding it to the new array:

mdadm /dev/md1 -a /dev/sdc1

Rename the new array

To change the name of the new RAID array back to what it was on the old drive, I first had to stop both the old and the new RAID arrays:

umount /home
cryptsetup luksClose chome
mdadm --stop /dev/md10
mdadm --stop /dev/md1

before running this command:

mdadm --assemble /dev/md1 --name=mymachinename:1 --update=name /dev/sdd2

and updating the name in /etc/mdadm/mdadm.conf.

The last step was to regenerate the initramfs:

update-initramfs -u

before rebooting into something that looks exactly like the original RAID1 array but with twice the size.

OpenSTEM: Staying safe during Cyclone Debbie

Thu, 2017-03-30 16:06
Cyclone Debbie as seen from the ISS

We hope all teachers and students are safe in the areas of Queensland and New South Wales affected by the cyclone weather! We understand that many state schools (anywhere south of Agnes Water down to northern New South Wales) are closed today; the radar shows a very large rain front coming through. Near Brisbane it’s been raining for many hours already, and the wind is now picking up as well. It’s good to be inside, although things are starting to feel moist (reminding Arjen of when he lived in Cairns).

Why not take this opportunity to replace dry old teaching materials, using coupon code DEBBIE for a 25% discount on any Understanding our World® unit? This special Cyclone Debbie offer ends Sunday 2nd April.

Did you know that, in the Understanding Our World units, Year 5 students work on Natural Disasters during this term?

Also, do take a peek at the Open Source Earth Wind Patterns site at NullSchool – using live data to create a moving image. All open. Beautiful.

Dan Treacy: Hello world!

Thu, 2017-03-30 06:01

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

Binh Nguyen: Life in Yemen, Blogger2Book BASH script, and More

Thu, 2017-03-30 00:08
- main reasons why it's been in the news is because of recent conflict. A lot of them are saying this is a proxy war between Saudi Arabia and Iran who effectively represent different sects within Islam. Over and over again feels like people in the Middle East just want some dignity and respect from the West? Shortages across the board (food, medicine, petrol, etc...) Yemen people saying Saudi

sthbrx - a POWER technical blog: Evaluating CephFS on Power

Tue, 2017-03-28 23:00
Methodology

To evaluate CephFS, we will create a ppc64le virtual machine, with sufficient space to compile the software, as well as 3 sparse 1TB disks to create the object store.

We will then build and install the Ceph packages, after adding the PowerPC optimisations to the code. This is done because, if the packages are not installed, ceph-deploy will fetch prebuilt packages that do not have the performance patches.

Finally, we will use ceph-deploy to deploy the instance. We will install ceph-deploy via pip, to avoid file conflicts with the packages that we built.

For more information on what each command does, visit the following tutorial, upon which this is based: http://palmerville.github.io/2016/04/30/single-node-ceph-install.html

Virtual Machine Config

Create a virtual machine with at least the following:

  • 16GB of memory
  • 16 CPUs
  • 64GB disk for the root filesystem
  • 3 x 1TB for the Ceph object store
  • Ubuntu 16.04 default install (only use the 64GB disk, leave the others unpartitioned)

Initial config
  • Enable ssh
sudo apt install openssh-server
sudo apt update
sudo apt upgrade
sudo reboot
  • Install build tools
sudo apt install git debhelper

Build Ceph

mkdir $HOME/src
cd $HOME/src
git clone --recursive https://github.com/ceph/ceph.git # This may take a while
cd ceph
git checkout master
git submodule update --force --init --recursive
  • Cherry-pick the Power performance patches:
git remote add kestrels https://github.com/kestrels/ceph.git
git fetch --all
git cherry-pick 59bed55a676ebbe3ad97d8ec005c2088553e4e11
  • Install prerequisites
./install-deps.sh
sudo apt install python-requests python-flask resource-agents curl python-cherrypy python3-pip python-django python-dateutil python-djangorestframework
sudo pip3 install ceph-deploy
cd $HOME/src/ceph
sudo dpkg-buildpackage -J$(nproc) # This will take a couple of hours (16 cpus)
  • Install the packages (note that python3-ceph-argparse will fail, but is safe to ignore)
cd $HOME/src
sudo dpkg -i *.deb

Create the ceph-deploy user

sudo adduser ceph-deploy
echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-deploy
sudo chmod 0440 /etc/sudoers.d/ceph-deploy

Configure the ceph-deploy user environment

su - ceph-deploy
ssh-keygen
node=`hostname`
ssh-copy-id ceph-deploy@$node
mkdir $HOME/ceph-cluster
cd $HOME/ceph-cluster
ceph-deploy new $node # If this fails, remove the bogus 127.0.1.1 entry from /etc/hosts
echo 'osd pool default size = 2' >> ceph.conf
echo 'osd crush chooseleaf type = 0' >> ceph.conf

Complete the Ceph deployment

ceph-deploy install $node
ceph-deploy mon create-initial
drives="vda vdb vdc" # the 1TB drives - check that these are correct for your system
for drive in $drives; do ceph-deploy disk zap $node:$drive; ceph-deploy osd prepare $node:$drive; done
for drive in $drives; do ceph-deploy osd activate $node:/dev/${drive}1; done
ceph-deploy admin $node
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph -s # Check the state of the cluster

Configure CephFS

ceph-deploy mds create $node
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
ceph fs new cephfs cephfs_metadata cephfs_data
sudo systemctl status ceph\*.service ceph\*.target # Ensure the ceph-osd, ceph-mon & ceph-mds daemons are running
sudo mkdir /mnt/cephfs
key=`grep key ~/ceph-cluster/ceph.client.admin.keyring | cut -d ' ' -f 3`
sudo mount -t ceph $node:6789:/ /mnt/cephfs -o name=admin,secret=$key

References
  1. http://docs.ceph.com/docs/master/install/clone-source/
  2. http://docs.ceph.com/docs/master/install/build-ceph/
  3. http://palmerville.github.io/2016/04/30/single-node-ceph-install.html

Donna Benjamin: Making Views in Drupal 7 and Drupal 8

Tue, 2017-03-28 12:03
Wednesday, April 5, 2017 - 11:45

This talk was written for DrupalGov Canberra 2017.

Download a PDF of the slides.

Or view below on slideshare.

 

Making views - DrupalGov Canberra 2017 from Donna Benjamin

Attachment: Making Views in Drupal 7 and Drupal 8 (4.46 MB)

BlueHackers: Busyness: A Modern Health Crisis | LinkedIn

Tue, 2017-03-28 11:35

Benjamin Cardullo writes about an issue that we really have to take (more) seriously.  Particularly with mobile devices enabling us to be “connected” 24/7, being busy (or available) all of that time is not a good thing at all.

How do we measure professional success? Is it by the location of our office or the size of our paycheck? Is it measured by the dimensions of our home or the speed of our car? Ten years ago, those would have been the most prominent answers; however, today when someone is really pulling out the big guns, when they really want to show you how important they are, they’ll tell you all about their busy day and how they never had a moment to themselves.

Read the full article: https://www.linkedin.com/pulse/busyness-modern-health-crisis-benjamin-cardullo

OpenSTEM: This Week in HASS – term 1, week 9

Tue, 2017-03-28 10:09

The last week of our first unit – time to wrap up, round off, finish up any work not yet done and to perhaps get a preliminary taste of what’s to come in future units. Easter holidays are just around the corner. Our youngest students are having a final discussion about celebrations; slightly older students are finishing off their quest for Aunt Madge, by looking at landmarks and the older students are considering democracy in Australia, compared to its early beginnings in Ancient Greece.

Foundation to Year 3

Foundation/Prep (units F.1 and F.6) students are finishing off their discussions about celebrations, just in time for the Easter holidays, by looking at celebrations around the world. Teachers may wish to focus on how other countries celebrate Easter, with passion plays, processions and special meals. Students in Years 1 (unit 1.1), 2 (unit 2.1) and 3 (unit 3.1) are finishing off their Aunt Madge activity, looking at landmarks in Australia and around the world. There is the option for teachers to concentrate on Australian landmarks in this lesson, setting the stage for some local history studies in the next unit, next term.

Years 3 to 6

Ancient Greek pottery with votes scratched into the surface

Older students in Years 3 (unit 3.5), 4 (unit 4.1), 5 (unit 5.1) and 6 (unit 6.1) start looking ahead and laying the foundations for later studies on the Australian system of government and democracy, by comparing democracy as it arose in Ancient Greece, with the modern Australian democratic system. Our word for democracy comes from the Ancient Greek words demos (people) and kratia (power). Students move on from their discussion of Eratosthenes to looking at the Ancient Greek democratic system, which was to lay the groundwork for modern democratic systems around the world. Discussing Ancient Greek democracy leads students to consider the rights and responsibilities of being a citizen, at both the local and international levels. Students also consider who could and could not vote and what this meant for different groups. They can also touch on the ancient practice of ostracism, which can lead to ethical debates around fair election practices. By considering these fundamental concepts, students are better able to relate the ideas around modern democracy to their own lives.

 

David Rowe: AMBE+2 and MELPe 600 Compared to Codec 2

Sun, 2017-03-26 20:03

Yesterday I was chatting on the #freedv IRC channel, and a good question was asked: how close is Codec 2 to AMBE+2 ? Turns out – reasonably close. I also discovered, much to my surprise, that Codec 2 700C is better than MELPe 600!

Samples

Sample tables (each entry in the original post is a “Listen” link, not reproduced here): the first table compares the Original against AMBE+2 3000, AMBE+ 2400, Codec 2 3200 and Codec 2 2400; the second compares the Original against MELPe 600 and Codec 2 700C.

Here are all the samples in one big tar ball.

Discussion

I don’t have an AMBE or MELPe codec handy, so I used the samples from the DVSI and DSP Innovations web sites. I passed the original “DAMA” speech samples found on these sites through Codec 2 (codec2-dev SVN revision 3053) at various bit rates. It turns out the DAMA samples were the same for AMBE and MELPe, which was handy.

These particular samples are “kind” to codecs – I consistently get good results with them when I test with Codec 2. I’m guessing they also allow other codecs to be favorably demonstrated. During Codec 2 development I make a point of using “pathological” samples such as hts1a, cg_ref, kristoff, mmt1 that tend to break Codec 2. There are some samples of AMBE and MELP using my samples on the Codec 2 page.

I usually listen to samples through a laptop speaker, as I figure it’s close to the “use case” of a PTT radio. Small speakers do mask codec artifacts, making them sound better. I also tried a powered loud speaker with the samples above. Through the loudspeaker I can hear AMBE reproducing the pitch fundamental – a bass note that can be heard on some males (e.g. 7), whereas Codec 2 is filtering that out.

I feel AMBE is a little better, Codec 2 is a bit clicky or impulsive (e.g. on sample 1). However it’s not far behind. In a digital radio application, with a small speaker and some acoustic noise about – I feel the casual listener wouldn’t discern much difference. Try replaying these samples through your smart-phone’s browser at an airport and let me know if you can tell them apart!

On the other hand, I think Codec 2 700C sounds better than MELPe 600. Codec 2 700C is more natural. To my ear MELPe has very coarse quantisation of the pitch, hence the “Mr Roboto” sing-song pitch jumps. The 700C level is a bit low, an artifact/bug to do with the post filter. Must fix that some time. As a bonus Codec 2 700C also has lower algorithmic delay, around 40ms compared to MELPe 600’s 90ms.

Curiously, Codec 2 uses just 1 voicing bit which means either voiced or unvoiced excitation in each frame. xMBE’s claim to fame (and indeed MELP) over simpler vocoders is the use of mixed excitation. Some of the spectrum is voiced (regular pitch harmonics), some unvoiced (noise like). This suggests the benefits of mixed excitation need to be re-examined.

I haven’t finished developing Codec 2. In particular Codec 2 700C is very much a “first pass”. We’ve had a big breakthrough this year with 700C and development will continue, with benefits trickling up to other modes.

However the 1300, 2400, 3200 modes have been stable for years and will continue to be supported.

Next Steps

Here is the blog post that kicked off Codec 2 – way back in 2009. Here is a video of my linux.conf.au 2012 Codec 2 talk that explains the motivations, IP issues around codecs, and a little about how Codec 2 works (slides here).

What I spoke about then is still true. Codec patents and license fees are a useless tax on business and stifle innovation. Proprietary codecs borrow as much as 95% of their algorithms from the public domain – which are then sold back to you. I have shown that open source codecs can meet and even exceed the performance of closed source codecs.

Wikipedia suggests that AMBE license fees range from USD$100k to USD$1M. For “one license fee” we can improve Codec 2 so it matches AMBE+2 in quality at 2400 and 3000 bit/s. The results will be released under the LGPL for anyone to use, modify, improve, and inspect at zero cost. Forever.

Maybe we should crowd source such a project?

Command Lines

This is how I generated the Codec 2 wave files:

~/codec2-dev/build_linux//src/c2enc 3200 9.wav - | ~/codec2-dev/build_linux/src/c2dec 3200 - - | sox -t raw -r 8000 -s -2 - 9_codec2_3200.wav

Links

DVSI AMBE sample page

DSP Innovations, MELPe samples. Can anyone provide me with TWELP samples from these guys? I couldn’t find any on the web that includes the input, uncoded source samples.

OpenSTEM: Trying an OpenSTEM unit without a subscription

Sun, 2017-03-26 16:05

We have received quite a few requests for this option, so we’ve made it possible. As we understand it, in many cases an individual teacher wants to try our materials (often on behalf of the school, as a trial) but the teacher has to fund this from their classroom budget, so we appreciate they need to limit their initial outlay.

While purchasing units with an active subscription still works out cheaper (we haven’t changed that pricing), we have tweaked our online store to now also allow the purchase of individual unit bundles, from as little as $49.50 (inc.GST) for the Understanding Our World® HASS+Science program units. That’s a complete term bundle with teacher handbook, student workbook, assessment guide, model answers and curriculum mapping, as well as all the base resource PDFs needed for that unit! After purchase, the PDF materials can be downloaded from the site (optionally many files together in a ZIP).

We’d love to welcome you as a new customer! From experience we know that you’ll love our materials. The exact pricing difference (between subscription and non-subscription) depends on the type of bundle (term unit, year bundle, or multi-year bundle) and is indicated per item.

Try OpenSTEM today! Browse our teacher unit bundles (Foundation Year to Year 6).

This includes units for Digital Technologies, the Ginger Beer Science project, as well as for our popular Understanding Our World® HASS+Science program.

James Morris: Linux Security Summit 2017: CFP Announcement

Sat, 2017-03-25 00:01

The 2017 Linux Security Summit CFP (Call for Participation) is now open!

See the announcement here.

The summit this year will be held in Los Angeles, USA on 14-15 September. It will be co-located with the Open Source Summit (formerly LinuxCon), and the Linux Plumbers Conference. We’ll follow essentially the same format as the 2016 event (you can find the recap here).

The CFP closes on June 5th, 2017.

sthbrx - a POWER technical blog: Erasure Coding for Programmers, Part 2

Fri, 2017-03-24 09:08

We left part 1 having explored GF(2^8) and RAID 6, and asking the question "what does all this have to do with Erasure Codes?"

Basically, the thinking goes "RAID 6 is cool, but what if, instead of two parity disks, we had an arbitrary number of parity disks?"

How would we do that? Well, let's introduce our new best friend: Coding Theory!

Say we want to transmit some data across an error-prone medium. We don't know where the errors might occur, so we add some extra information to allow us to detect and possibly correct for errors. This is a code. Codes are a largish field of engineering, but rather than show off my knowledge about systematic linear block codes, let's press on.

Today, our error-prone medium is an array of inexpensive disks. Now we make this really nice assumption about disks, namely that they are either perfectly reliable or completely missing. In other words, we consider that a disk will either be present or 'erased'. We come up with 'erasure codes' that are able to reconstruct data when it is known to be missing. (This is a slightly different problem to being able to verify and correct data that might or might not be subtly corrupted. Disks also have to deal with this problem, but it is not something erasure codes address!)

The particular code we use is a Reed-Solomon code. The specific details are unimportant, but there's a really good graphical outline of the broad concepts in sections 1 and 3 of the Jerasure paper/manual. (Don't go on to section 4.)

That should give you some background on how this works at a pretty basic mathematical level. Implementation is a matter of mapping that maths (matrix multiplication) onto hardware primitives, and making it go fast.

Scope

I'm deliberately not covering some pretty vast areas of what would be required to write your own erasure coding library from scratch. I'm not going to talk about how to compose the matrices, how to invert them, or anything like that. I'm not sure how that would be a helpful exercise - ISA-L and jerasure already exist and do that for you.

What I want to cover is an efficient implementation of some of the algorithms, once you have the matrices nailed down.

I'm also going to assume your library already provides a generic multiplication function in GF(2^8). That's required to construct the matrices, so it's a pretty safe assumption.
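For reference, a minimal scalar gf_mul() might look like the sketch below. It assumes the field polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d) commonly used for Reed-Solomon; your library's version will almost certainly be table-driven and may use a different representation, so treat this purely as an illustration.

static unsigned char gf_mul(unsigned char a, unsigned char b)
{
    unsigned char p = 0;

    while (b) {
        if (b & 1)
            p ^= a;              /* add (xor) a into the product */
        if (a & 0x80)
            a = (a << 1) ^ 0x1d; /* multiply a by x and reduce */
        else
            a <<= 1;             /* multiply a by x */
        b >>= 1;
    }
    return p;
}

The shift-and-xor form is slow, but it is easy to verify a table-driven implementation against it.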

The beginnings of an API

Let's make this a bit more concrete.

This will be heavily based on the ISA-L API but you probably want to plug into ISA-L anyway, so that shouldn't be a problem.

What I want to do is build up from very basic algorithmic components into something useful.

The first thing we want to be able to do is Galois Field multiplication of an entire region of bytes by an arbitrary constant.

We basically want gf_vect_mul(size_t len, <something representing the constant>, unsigned char * src, unsigned char * dest)

Simple and slow approach

The simplest way is to do something like this:

void gf_vect_mul_simple(size_t len, unsigned char c, unsigned char * src, unsigned char * dest)
{
    size_t i;

    for (i=0; i<len; i++) {
        dest[i] = gf_mul(c, src[i]);
    }
}

That does multiplication element by element using the library's supplied gf_mul function, which - as the name suggests - does GF(2^8) multiplication of a scalar by a scalar.

This works. The problem is that it is very, painfully, slow - in the order of a few hundred megabytes per second.

Going faster

How can we make this faster?

There are a few things we can try: if you want to explore a whole range of different ways to do this, check out the gf-complete project. I'm going to assume we want to skip right to the end and know what is the fastest we've found.

Cast your mind back to the RAID 6 paper (PDF) I talked about in part 1. That had a way of doing an efficient multiplication in GF(2^8) using vector instructions.

To refresh your memory, we split the multiplication into two parts - low bits and high bits, looked them up separately in a lookup table, and joined them with XOR. We then discovered that on modern Power chips, we could do that in one instruction with vpermxor.
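As a scalar reference point (a sketch only, not the vector code), the split-table lookup amounts to the following, assuming tbl_lo[i] holds a * i and tbl_hi[i] holds a * (i << 4):

static unsigned char gf_mul_split(const unsigned char tbl_lo[16],
                                  const unsigned char tbl_hi[16],
                                  unsigned char x)
{
    /* a * x = (a * low nibble) xor (a * high nibble shifted up) */
    return tbl_lo[x & 0x0f] ^ tbl_hi[x >> 4];
}

vpermxor effectively does this lookup-and-xor for 16 bytes at once.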

So, a very simple way to do this would be:

  • generate the table for a
  • for each 16-byte chunk of our input:
    • load the input
    • do the vpermxor with the table
    • save it out

Generating the tables is reasonably straight-forward, in theory. Recall that the tables are a * {{00},{01},...,{0f}} and a * {{00},{10},..,{f0}} - a couple of loops in C will generate them without difficulty. ISA-L has a function to do this, as does gf-complete in split-table mode, so I won't repeat them here.
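As a sketch (assuming the scalar gf_mul() from earlier, and one possible layout with the low-nibble table first), generating the 32-byte chunk could look like this; check which layout your vpermxor sequence actually expects:

static void gf_vect_mul_table(unsigned char a, unsigned char table[32])
{
    int i;

    for (i = 0; i < 16; i++) {
        table[i]      = gf_mul(a, (unsigned char)i);        /* a * {00,01,...,0f} */
        table[16 + i] = gf_mul(a, (unsigned char)(i << 4)); /* a * {00,10,...,f0} */
    }
}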

So, let's recast our function to take the tables as an input rather than the constant a. Assume we're provided the two tables concatenated into one 32-byte chunk. That would give us:

void gf_vect_mul_v2(size_t len, unsigned char * table, unsigned char * src, unsigned char * dest)

Here's how you would do it in C:

void gf_vect_mul_v2(size_t len, unsigned char * table, unsigned char * src, unsigned char * dest)
{
    vector unsigned char tbl1, tbl2, in, out;
    size_t i;

    /* Assume table, src, dest are aligned and len is a multiple of 16 */
    tbl1 = vec_ld(16, table);
    tbl2 = vec_ld(0, table);

    for (i=0; i<len; i+=16) {
        in = vec_ld(i, (unsigned char *)src);
        __asm__("vpermxor %0, %1, %2, %3" : "=v"(out) : "v"(tbl1), "v"(tbl2), "v"(in));
        vec_st(out, i, (unsigned char *)dest);
    }
}

There's a few quirks to iron out - making sure the table is laid out in the vector register in the way you expect, etc, but that generally works and is quite fast - my Power 8 VM does about 17-18 GB/s with non-cache-contained data with this implementation.

We can go a bit faster by doing larger chunks at a time:

for (i=0; i<vlen; i+=64) {
    in1 = vec_ld(i, (unsigned char *)src);
    in2 = vec_ld(i+16, (unsigned char *)src);
    in3 = vec_ld(i+32, (unsigned char *)src);
    in4 = vec_ld(i+48, (unsigned char *)src);
    __asm__("vpermxor %0, %1, %2, %3" : "=v"(out1) : "v"(tbl1), "v"(tbl2), "v"(in1));
    __asm__("vpermxor %0, %1, %2, %3" : "=v"(out2) : "v"(tbl1), "v"(tbl2), "v"(in2));
    __asm__("vpermxor %0, %1, %2, %3" : "=v"(out3) : "v"(tbl1), "v"(tbl2), "v"(in3));
    __asm__("vpermxor %0, %1, %2, %3" : "=v"(out4) : "v"(tbl1), "v"(tbl2), "v"(in4));
    vec_st(out1, i, (unsigned char *)dest);
    vec_st(out2, i+16, (unsigned char *)dest);
    vec_st(out3, i+32, (unsigned char *)dest);
    vec_st(out4, i+48, (unsigned char *)dest);
}

This goes at about 23.5 GB/s.

We can go one step further and do the core loop in assembler - that means we control the instruction layout and so on. I tried this: it turns out that for the basic vector multiply loop, if we turn off ASLR and pin to a particular CPU, we can see an improvement of a few percent (and a decrease in variability) over C code.

Building from vector multiplication

Once you're comfortable with the core vector multiplication, you can start to build more interesting routines.

A particularly useful one on Power turned out to be the multiply and add routine: like gf_vect_mul, except that rather than overwriting the output, it loads the output and xors the product in. This is a simple extension of the gf_vect_mul function so is left as an exercise to the reader.
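A scalar sketch of that multiply-and-add (the vectorised version is the exercise) would be:

static void gf_vect_mad_simple(size_t len, unsigned char c,
                               unsigned char *src, unsigned char *dest)
{
    size_t i;

    /* dest[i] ^= c * src[i], rather than overwriting dest[i] */
    for (i = 0; i < len; i++)
        dest[i] ^= gf_mul(c, src[i]);
}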

The next step would be to start building erasure coding proper. Recall that to get an element of our output, we take a dot product: we take the corresponding input element of each disk, multiply it with the corresponding GF(2^8) coding matrix element and sum all those products. So all we need now is a dot product algorithm.

One approach is the conventional dot product:

  • for each element
    • zero accumulator
    • for each source
      • load input[source][element]
      • do GF(2^8) multiplication
      • xor into accumulator
    • save accumulator to output[element]

The other approach is multiply and add:

  • for each source
    • for each element
      • load input[source][element]
      • do GF(2^8) multiplication
      • load output[element]
      • xor in product
      • save output[element]

The dot product approach has the advantage of fewer writes. The multiply and add approach has the advantage of better cache/prefetch performance. The approach you ultimately go with will probably depend on the characteristics of your machine and the length of data you are dealing with.

For what it's worth, ISA-L ships with only the first approach in x86 assembler, and Jerasure leans heavily towards the second approach.
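For reference, a scalar sketch of the dot product approach is below. Note it takes the raw row of coding-matrix coefficients and uses the scalar gf_mul(); ISA-L's real gf_vect_dot_prod takes the pre-generated 32-byte tables (one per coefficient) instead.

static void gf_vect_dot_prod_simple(size_t len, int k,
                                    const unsigned char *coeffs,
                                    unsigned char **src,
                                    unsigned char *dest)
{
    size_t i;
    int s;

    for (i = 0; i < len; i++) {
        unsigned char acc = 0;

        /* dot product across the k sources for this output element */
        for (s = 0; s < k; s++)
            acc ^= gf_mul(coeffs[s], src[s][i]);
        dest[i] = acc;
    }
}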

Once you have a vector dot product sorted, you can build a full erasure coding setup: build your tables with your library, then do a dot product to generate each of your outputs!

In ISA-L, this is implemented something like this:

/*
 * ec_encode_data_simple(length of each data input, number of inputs,
 *                       number of outputs, pre-generated GF(2^8) tables,
 *                       input data pointers, output code pointers)
 */
void ec_encode_data_simple(int len, int k, int rows, unsigned char *g_tbls,
                           unsigned char **data, unsigned char **coding)
{
    while (rows) {
        gf_vect_dot_prod(len, k, g_tbls, data, *coding);
        g_tbls += k * 32;
        coding++;
        rows--;
    }
}

Going faster still

Eagle eyed readers will notice that however we generate an output, we have to read all the input elements. This means that if we're doing a code with 10 data disks and 4 coding disks, we have to read each of the 10 inputs 4 times.

We could do better if we could calculate multiple outputs for each pass through the inputs. This is a little fiddly to implement, but does lead to a speed improvement.

ISA-L is an excellent example here. Intel goes up to 6 outputs at once: the number of outputs you can do is only limited by how many vector registers you have to put the various operands and results in.

Tips and tricks
  • Benchmarking is tricky. I do the following on a bare-metal, idle machine, with ASLR off and pinned to an arbitrary hardware thread. (Code is for the fish shell)

    for x in (seq 1 50)
        setarch ppc64le -R taskset -c 24 erasure_code/gf_vect_mul_perf
    end | awk '/MB/ {sum+=$13} END {print sum/50, "MB/s"}'
  • Debugging is tricky; the more you can do in C and the less you do in assembly, the easier your life will be.

  • Vector code is notoriously alignment-sensitive - if you can't figure out why something is wrong, check alignment. (Pro-tip: ISA-L does not guarantee the alignment of the gftbls parameter, and many of the tests supply an unaligned table from the stack. For testing __attribute__((aligned(16))) is your friend!)

  • Related: GCC is moving towards assignment over vector intrinsics, at least on Power:

    vector unsigned char a;
    unsigned char * data;

    // good, also handles word-aligned data with VSX
    a = *(vector unsigned char *)data;

    // bad, requires special handling of non-16-byte aligned data
    a = vec_ld(0, (unsigned char *) data);
Conclusion

Hopefully by this point you're equipped to figure out how your erasure coding library of choice works, and write your own optimised implementation (or maintain an implementation written by someone else).

I've referred to a number of resources throughout this series:

If you want to go deeper, I also read the following and found them quite helpful in understanding Galois Fields and Reed-Solomon coding:

For a more rigorous mathematical approach to rings and fields, a university mathematics course may be of interest. For more on coding theory, a university course in electronics engineering may be helpful.

OpenSTEM: Guess the Artefact!

Thu, 2017-03-23 14:04

Today we are announcing a new challenge for our readers – Guess the Artefact! We post pictures of an artefact and you can guess what it is. The text will slowly reveal the answer, through a process of examination and deduction – see if you can guess what it is, before the end. We are starting this challenge with an item from our year 6 Archaeological Dig workshop. Year 6 (unit 6.3) students concentrate on Federation in their Australian History segment – so that’s your first clue! Study the image and then start reading the text below.

Our first question is: what is it? Study the image and see if you can work out what it might be – it’s a dirty, damaged piece of paper. It seems to be old. Does it have a date? Ah yes, there are 3 dates – 23, 24 and 25 October, 1889, so we deduce that it must be old, dating to the end of the 19th century. We will file the exact date for later consideration. We also note references to railways. The layout of the information suggests a train ticket. So we have a late 19th century train ticket!

Now why do we have this train ticket and whose train ticket might it have been? The ticket is First Class, so this is someone who could afford to travel in style. Where were they going? The railways mentioned are Queensland Railways, Great Northern Railway, New South Wales Railways and the stops are Brisbane, Wallangara, Tenterfield and Sydney. Now we need to do some research. Queensland Railways and New South Wales Railways seem self-evident, but what is Great Northern Railway? A brief hunt reveals several possible candidates: 1) a contemporary rail operator in Victoria; 2) a line in Queensland connecting Mt Isa and Townsville and 3) an old, now unused railway in New South Wales. We can reject option 1) immediately. Option 2) is the right state, but the towns seem unrelated. That leaves option 3), which seems most likely. Looking into the NSW option in more detail we note that it ran between Sydney and Brisbane, with a stop at Wallangara to change gauge – Bingo!

Wallangara Railway Station

More research reveals that the line reached Wallangara in 1888, the year before this ticket was issued. Only after 1888 was it possible to travel from Brisbane to Sydney by rail, albeit with a compulsory stop at Wallangara. We note also that the ticket contains a meal voucher for dinner at the Railway Refreshment Rooms in Wallangara. Presumably passengers overnighted in Wallangara before continuing on to Sydney on a different train and rail gauge. Checking the dates on the ticket, we can see evidence of an overnight stop, as the next leg continues from Wallangara on the next day (24 Oct 1889). However, next we come to some important information. From Wallangara, the next leg of the journey represented by this ticket was only as far as Tenterfield. Looking on a map, we note that Tenterfield is only about 25 km away – hardly a day’s train ride, more like an hour or two at the most (steam trains averaged about 24 km/hr at the time). From this we deduce that the ticket holder wanted to stop at Tenterfield and continue their journey on the next day.

We know that we’re studying Australian Federation history, so the name Tenterfield should start to ring a bell – what happened in Tenterfield in 1889 that was relevant to Australian Federation history? The answer, of course, is that Henry Parkes delivered his Tenterfield Oration there, and the date? 24 October, 1889! If we look into the background, we quickly discover that Henry Parkes was on his way from Brisbane back to Sydney, when he stopped in Tenterfield. He had been seeking support for Federation from the government of the colony of Queensland. He broke his journey in Tenterfield, a town representative of those towns closer to the capital of another colony than their own, which would benefit from the free trade arrangements flowing from Federation. Parkes even discussed the issue of different rail gauges as something that would be solved by Federation! We can therefore surmise that this ticket may well be the ticket of Henry Parkes, documenting his journey from Brisbane to Sydney in October, 1889, during which he stopped and delivered the Tenterfield Oration!

This artefact is therefore relevant as a source for anyone studying Federation history – as well as giving us a more personal insight into the travels of Henry Parkes in 1889, it allows us to consider aspects of life at the time:

  • the building of railway connections across Australia, in a time before motor cars were in regular use;
  • the issue of different size railway gauges in the different colonies and what practical challenges that posed for a long distance rail network;
  • the ways in which people travelled and the speed with which they could cross large distances;
  • what rail connections would have meant for small, rural towns, to mention just a few.
  • Why might the railway companies have provided meal vouchers?

These are all sidelines of inquiry, which students may be interested to pursue, and which might help them to engage with the subject matter in more detail.

In our Archaeological Dig Workshops, we not only engage students in the processes and physical activities of the dig, but we provide opportunities for them to use the artefacts to practise deduction, reasoning and research – true inquiry-based learning, imitating real-world processes and far more engaging and empowering than more traditional bookwork.