Planet Linux Australia
– “News” Website
– 5 person DevOps team
– “Something you do because Gartner said it’s cool”
– Sysadmin -> InfraCoder/SRE -> Dev Shepherd -> Dev
– Stuff in the middle somewhere
Company Structure drives DevOps structure
– Lots of products – one team != one product
– Dev teams with very specific focus
– Scale – too big, yet too small
About our team
– Mainly Ops focus
– small number compared to developers
– Operate like an agency model for developers
– “If you buy the Dom Post it would help us grow our team”
– Lots of different vendors with different skill levels and technology
– Use Kanban with Jira
– Works for Ops focussed team
– Not so great for long running projects
War Against OnCall
– Biggest cause of burnout
– focus on minimising callouts
– Zero alarm target
– Love PagerDuty
Commonalities across platforms
– Everyone using compute
– Using Public Cloud
– Using off the shelf version control, deployment solutions
– Don’t get overly creative and make things too complex
– Use proven technology that is well tried and tested, with skills available in the marketplace
– Classic technologies like Nginx, Java and Varnish still have their place. You don’t always need the latest fashion
– Linux, ubuntu
– Adobe AEM Java CMS
– AWS 14x c4.2xlarge
– Varnish in front, used by everybody else. Makes ELB and ALB look like toys
How we use Varnish
– Retries against backends on 500 responses; serves old copies if backends fail
– split routes to various backends
– Control CDN via header
– Dynamic Configuration via puppet
– Keeps online during breaking load
– 90% cache offload
– Management is a bit slow and manual
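The retry-and-serve-stale and route-splitting behaviour above might be sketched in VCL 4.0 along these lines (backend names and addresses are invented for illustration; the talk didn't show config):

```vcl
vcl 4.0;

backend app_backend { .host = "10.0.0.10"; .port = "8080"; }
backend api_backend { .host = "10.0.0.20"; .port = "8080"; }

sub vcl_recv {
    # Split routes to various backends by URL
    if (req.url ~ "^/api/") {
        set req.backend_hint = api_backend;
    } else {
        set req.backend_hint = app_backend;
    }
}

sub vcl_backend_response {
    # Retry the fetch if the backend replies with a server error
    if (beresp.status >= 500 && bereq.retries < 2) {
        return (retry);
    }
    # Keep objects past their TTL so stale copies can be served during outages
    set beresp.grace = 6h;
    # Control the CDN separately from browsers via a surrogate header
    set beresp.http.Surrogate-Control = "max-age=300";
}
```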
– Small batch jobs
– Check mail reputation score
– “Download file from a vendor” type stuff
– Purge cache when static file changes
– Lambda webapps – hopefully soon, a bit immature
Increasing number of microservices
Standards are vital for microservices
– Simple and reasonable
– Shareable with vendors and internally
– grow organically
– Needs to be detailed
– 12 factor App
– 3 languages Node, Java, Ruby
– Common deps (SQL, varnish, memcache, Redis)
– Build pipeline standardised. Using Codeship
– Standardise server builds
– Everything Automated with puppet
– Puppet building Docker containers (with puppet + puppetstry)
– Std Application deployment
– Had a proliferation
– pm2, god, supervisord, sysvinit are out
– systemd and upstart are in
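As a sketch of what a standardised deployment under systemd might look like (service name, user and paths are hypothetical, not from the talk):

```ini
[Unit]
Description=Example Node web app (hypothetical)
After=network.target

[Service]
User=webapp
WorkingDirectory=/srv/example-app
ExecStart=/usr/bin/node server.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
```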
– “Enterprise ___” is always bad
– Educating the business is a forever job
– Be reasonable, set boundaries
More Stuff at
Q: Pull request workflow
A: Largely replaced traditional review
Q: DR eg AWS outage
A: Documented process if codeship dies can manually push, Rest in 2*AZs, Snapshots
Q: Dev teams structure
A: Project specific rather than product specific.
Q: Puppet code tested?
A: Not really. Kinda tested via the pre-prod environment. Would prefer result (serverspec) testing rather than low-level testing of each line
A: Code team have good test coverage though. 80-90% in many cases.
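Result-oriented testing of that kind might look like this serverspec-style sketch (resource names are illustrative; it needs the serverspec gem and a target host, so it's shown only as an illustration):

```ruby
require 'serverspec'
set :backend, :exec

# Assert the end state of the server, not each line of Puppet code
describe service('nginx') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end
```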
Q: Load testing, APM
A: Use New Relic. Not much luck with external load testing companies
Q: What if somebody wants something non-standard?
A: Case-by-case. Allowed if needed but needs a good reason.
Q: What happens when automation breaks?
A: Documentation is actually pretty good.
Theory: Devops is a role that never existed.
In the old days
– Shipping used to be hard and expensive, eg on physical media
– High cost of release
– but everybody else was the same.
– Lots of QA and red tape, no second chances
Then we got the Internet
– Speed became everything
– You just shipped when it was good enough
But Hardware still was a limiting factor
– Virtual machines
This led to complacency
– Still had a physical server under it all
Birth of devops
– Software got faster but still had to have hardware under there somewhere
– Disparity between operations cadence and devs cadence
– things got better
– But we didn’t free ourselves from hardware
– Now everything is much more complex
Developers are now divorced from the platform
– Everything is abstracted
– It is leaky abstractions all the way down
– Educate developers as to what happens under the hood
– Stop reinventing the wheel
– Harmony is much more productive
– Lots of tools means that you don’t have enough expertise on each
– Reduce fiefdoms
– Push responsibility but not ownership (you own it but the devs make some of the changes)
– Live with the code
– Pit of success, easy ways to fail that don’t break stuff (eg test environments, by default it will do the right thing)
– Be Happy. Everybody needs to be a bit devops and know a bit of everything.
Backend developer at Spotify
– 100m active users
– 800+ tech employees
– 120 teams
– Microservices architecture
Walk through a sample artist’s page
– each component (playlist, play count, discography) is a separate service
– Aggregated to send result back to client
Hard to co-ordinate between services as scale grows
– 1000+ services
– Each need to use each others APIs
– Dev teams all around the world
– Teams had docs in different places
– Some in Wiki, Readme, markdown, all different
Current Solution – System Z
– Centralise in one place, as automated as possible
– Internal application
– Web app, catalog of all systems and its parts
– Well integrated with Apollo service
Web Page for each service
– Various tabs
– Configuration (showing versions of build and uptimes)
– API – list of all endpoints for service, scheme, errors codes, etc (automatically populated)
– System tab – Overview on how service is connected to other services, dependencies (generated automatically)
– System Z gets information from Apollo and prod servers about each service that has been registered
– Java libs for writing microservices
– Open source
– Metadata module
– Exposes endpoint with metadata for each service
– instance info – versions, uptime
– configuration – currently loaded config of the service
– endpoints –
– call information – monitors the service and learns what incoming and outgoing calls it actually makes, and to/from which other services
– Automatically builds dependencies
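The metadata endpoint's response might look something like this (field names are invented for illustration; the talk only described the categories):

```json
{
  "instance":      { "version": "1.4.2", "uptime_seconds": 86400 },
  "configuration": { "cache_ttl_seconds": 60 },
  "endpoints":     [ "/v1/playcount/{artist-id}" ],
  "calls": {
    "incoming": [ "artist-page-aggregator" ],
    "outgoing": [ "playlist-service", "discography-service" ]
  }
}
```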
– Quicker access to relevant information
– Automated boring stuff
– All in one place
– Think about growth and scaling at the start of the project
Q: How to handle breaking APIs
A: We create new version of API endpoint and encourage people to move over.
– Works for Datacom
– Consultant in Application performance management team
Story from Start of 2015
– Friday night phone calls from your boss are never good.
– Dropped in application monitoring tools (Dynatrace) on Friday night, watch over weekend
– Prev team pretty sure problem is a memory leak but had not been able to find it (for two weeks)
– If somebody tells you they know what is wrong but can’t give details or fix it, be suspicious
Book: Java Enterprise performance
– Monday prod load goes up and app starts crashing
– Told ops team, but since the crash wasn’t visible yet, was not believed. Waited.
– Java App, Jboss on Linux
– Multiple JVMs
– Oracle DBs, Mulesoft ESB, ActiveMQ, HornetQ
Ah Ha moment
– Had a look at import process
– 2.3 million DB queries per half hour
– With a max of 260 users, that seems way more than what is needed
– Happens even when nobody is logged in
Tip: Typically 80% of all issues can be detected in dev or test if you look for them.
Where did this code come from?
– Process to import a csv into the database
– 1 call to Mule -> 12 calls to AMQ -> 12 calls to the App -> 102 DB queries
– Passes all the tests… But
– Still shows huge growth in queries as we go through layers
– DB queries grow bigger with each run
Tip: Know how your code behaves and track how this behaviour changes with each code change (or even with no code change)
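One cheap way to track that behaviour is to count queries per run from the database's general query log. A self-contained sketch of the idea (the log content below is fabricated for illustration):

```shell
# Fabricated sample of a MySQL general-query-log style file
cat > /tmp/general.log <<'EOF'
2015-01-05T10:00:01 42 Query SELECT * FROM users
2015-01-05T10:00:01 42 Query SELECT * FROM import_rows
2015-01-05T10:00:02 43 Connect app@localhost
2015-01-05T10:00:02 43 Query INSERT INTO audit VALUES (1)
EOF

# Count queries in the run; compare this number across code changes
grep -c ' Query ' /tmp/general.log
```

Running the same import twice and comparing the counts would have made the growth visible long before production.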
Q: Why Dynatrace?
A: Quick to deploy, useful info back in only a couple of hours
Originally in the Marines, environment where burnout not tolerated
Works for Thoughtworks – not a mental health professional
Devops could make this worse
Some clichéd places say: “Teach the devs puppet and fire all the Ops people”
Why should we address burnout?
– Google found psychological safety was the number 1 indicator of an effective team
– Not just a negative, people do better job when feeling good.
What is burnout
– The Truth about burnout – Maslach and Leiter
– The Dimensions of Burnout
– Mismatch between work and the person
– Work overload
– Lack of control
– Insufficient reward
– Breakdown of communication
– Various prioritisation methods
– More load sharing
– Less deploy marathons
– Some orgs see devops as a cost saving
– There is no such thing as a full stack engineer
– team has skills, not a person
Lack of Control
– Team is ultimately responsible for the decisions
– Use the right technology and tools for the team
– This doesn’t mean a “Devops team” controlling what others do
– Actually not a great motivator
Breakdown in communication
– Walls between teams are bad
– Everybody involved with product should be on the same team
– 2 pizza team
– Pairs with different skill sets are common
– Swarming can be done when required ( one on keyboard, everybody else watching and talking and helping on big screen)
– Blameless retrospectives are held
– No “Devops team”, creating a silo is not a solution for silos
Absence of Fairness
– You build it, you run it
– Everybody is responsible for quality
– Everybody is measured in the same way
– example Expedia – *everything* deployed has A/B testing
– everybody goes to release party
– In the broadest possible sense
– eg Company industry and values should match your own
Reminder: it is about you and how you fit in with the above
Pay attention to how you feel
– Increase your self awareness
– Maslach Burnout inventory
– Try not to focus on the negative.
Pay attention to work/life balance
– Ask for it, company might not know your needs
– If you can’t get it then quit
Talk to somebody
– Professional help is the best
– Trained to identify cause and effect
– can recommend treatment
– You’d call them if you broke your arm
Friends and family
– People who care, that you haven’t even met
– Empathy is great , but you aren’t a professional
– Don’t guess cause and effect
– Don’t recommend treatment if not a professional
Q: Is it gender-specific (since IT is male dominated)?
– The “absence of fairness” problem is huge for women in IT
Q: How to promote Psychological safety?
– Blameless post-mortems
Damian Brady – Just let me do my job
After working in govt, went to work for new company and hoped to get stuff done
But whole dev team was unhappy
– Random work assigned
– All deadlines missed
– Lots of waste of time meetings
But 2 years later
– Hitting all deadlines
– Useful meetings
What changes were made?
New boss protected devs from MUD (Meetings, Uncertainty, Distractions)
– In the broad sense: 1-1s, all hands, normal meetings
– People are averaging 7.5 hours/week in meetings
– On average 37% of meeting time is not relevant to person ( ~ $8,000 / year )
– Do meetings have goals and do they achieve those goals?
– 38% without goals
– only half of remaining meet those goals
– around 40% of meetings have and achieve goals
– Might not be wasted. Look at “What has changed as result of this meeting?”
– New Boss went to meetings for us (didn’t need everybody) as a representative
– Set a clear goal and agenda
– Avoid gimmicks
– don’t default to 30min or 1h
– 60% of people interrupted 10 or more times per day
– Good to stay in a “flow state”
– 40% of people say they are regularly focussed in their work, but everybody is at least sometimes
– 35% of the time, people lose focus when interrupted
– Study shows people can take up to 23 mins to get focus back after an interruption
– ~$25,000/year wasted per person due to interruptions
– Allowing headphones, rule not to interrupt people wearing headphones
– “Do not disturb” times
– Little Signs
– Had “the finger” so that you could tell somebody you were busy right now and would come back to them
– Let devs go to meeting rooms or cafes to hide from interruptions
– Allowed “go dark” time where email and chat are turned off
– 82% in a survey said their priorities were clear
– for nearly 60% of people, their top priority changes before they can finish it
– Autonomy, mastery, purpose
– Tried to let people get clear runs at work
– Helped people acknowledge the unexpected work, add to Sprint board
– Established a gate – Business person would have to go through the manager
– Make the requester responsible – made the requester decide what stuff didn’t get done by physically removing stuff from the sprint board to add their own
The new MySQL 8.0.0 milestone release that was recently announced brings something that has been a looooong time coming: the removal of the FRM file. I was the one who implemented this in Drizzle way back in 2009 (July 28th 2009, according to Brian) – and I may have had a flashback to removing the tentacles of the FRM when reading the MySQL 8.0.0 announcement.
As an idea for how long this has been on the cards, I’ll quote Brian from when we removed it in Drizzle:
We have been talking about getting rid of FRM since around 2003. I remember a drive up to northern Finland with Kaj Arnö, where we spent an hour talking about this. I, David, and MontyW have talked about this for years.
Soo… it was a known problem for at least thirteen years. One of the issues removing it was how pervasive all of the FRM related things were. I shudder at the mention of “pack_flag” and Jay Pipes probably does too.
At the time, we tried a couple of approaches as to how things should look. Our philosophy with Drizzle was that it should get out of the way and let the storage engines be the storage engines, and not try to second guess them or keep track of things behind their back. I still think that was the correct architectural approach: the role of Drizzle was to put SQL on top of a storage engine, not to also be one itself.
Looking at the MySQL code, there’s one giant commit 31350e8ab15179acab5197fa29d12686b1efd6ef. I do mean giant too, the diffstat is amazing:

786 files changed, 58471 insertions(+), 25586 deletions(-)
How anyone even remotely did code review on that I have absolutely no idea. I know the only way I could get it to work in Drizzle was to do it incrementally: a series of patches that gradually chiseled out what needed to be taken out so I could put in an API and the protobuf code.
Oh, and in case you’re wondering:

- uint offset,pack_flag;
+ uint offset;
Thank goodness. Now, you may not appreciate that as much as I might, but pack_flag was not the height of design; it was… pretty much a catch-all for some kind of data about a field that wasn’t something that already had a field in the FRM. So it may include information on whether the field could be null or not, if it’s decimal, how many bytes an integer takes, that it’s a number and how many… oh, just don’t ask.
Also gone is the weird interval_id and a whole bunch of limitations because of the FRM format, including one that I either just discovered or didn’t remember: if you used all 256 characters in an enum, you couldn’t create the table as MySQL would pick either a comma or an unused character to be the separator in the FRM!?!
Also changed is how the MySQL server handles default values. For those not aware, the FRM file contains a static copy of the row containing default values. This means the default values are computed once on table creation and never again (there’s a bunch of work arounds for things like AUTO_INCREMENT and DEFAULT NOW()). The new sql/default_values.cc is where this is done now.
For now at least, table metadata is also written to a file that appears to be JSON format. It’s interesting that a SQL database server is using a schemaless file format to describe schema. It appears that these files exist only for disaster recovery or perhaps portable tablespaces. As such, I’m not entirely convinced they’re needed…. it’s just a thing to get out of sync with what the storage engine thinks and causes extra IO on DDL (as well as forcing the issue that you can’t have MVCC into the data dictionary itself).
What will be interesting is to see the lifting of these various limitations and how MariaDB will cope with that. Basically, unless they switch, we’re going to see some interesting divergence in what you can do in either database.
There’s certainly differences in how MySQL removed the FRM file compared to the way we did it in Drizzle. Hopefully some of the ideas we had were helpful in coming up with this different approach, as well as an extra seven years of in-production use.
At some point I’ll write something up as to the fate of Drizzle and a bit of a post-mortem, I think I may have finally worked out what I want to say…. but that is a post for another day.
This is my very first post on Planet PostgreSQL, so thank you for having me here! I’m not sure if you’re aware, but the PostgreSQL Events page lists the conference as something that should be of interest to PostgreSQL users and developers.
There is a PostgreSQL Day on October 4 2016 in Amsterdam, and if you’re planning on just attending a single day, use code PostgreSQLRocks and it will only cost €200+VAT.
I for one am excited to see Patroni: PostgreSQL High Availability made easy, Relational Databases at Uber: MySQL & Postgres, and Linux tuning to improve PostgreSQL performance: from hardware to postgresql.conf.
I’ll write notes here, if time permits we’ll do a database hackers lunch gathering (it’s good to mingle with everyone), and I reckon if you’re coming for PostgreSQL day, don’t forget to also sign up to the Community Dinner at Booking.com.
So, about ten days ago the MySQL Server Team released MySQL 8.0.0 Milestone to the world. One of the most unfortunate things about MySQL development is that it’s done behind closed doors, with the only hints of what’s to come arriving in maybe a note on a bug or such milestone releases that contain a lot of code changes. How much code change? Well, according to the text up on github for the 8.0 branch “This branch is 5714 commits ahead, 4 commits behind 5.7. ”
Way back in 2013, I looked at MySQL Code Size over releases, which I can again revisit and include both MySQL 5.7 and 8.0.0.
While 5.7 was a big jump again, we seem to be somewhat leveling off, which is a good thing. Managing to add features and fix long standing problems without bloating code size is good for software maintenance. Honestly, hats off to the MySQL team for keeping it to around a 130kLOC code size increase over 5.7 (that’s around 5%).
These days I’m mostly just a user of MySQL, pointing others in the right direction when it comes to some issues around it and being the resident MySQL grey(ing)beard (well, if I don’t shave for a few days) inside IBM, as a very much side project to my day job of OPAL firmware.
So, personally, I’m thrilled about no more FRM, better Unicode, SET PERSIST and performance work. With my IBM hat on, I’m thrilled about the fact that it compiled on POWER out of the box and managed to work (I haven’t managed to crash it yet). There seems to be a possible performance issue, but hey, this is a huge improvement over the 5.7 developer milestones when run on POWER.
A lot of the changes are focused around usability, making it easier to manage and easier to run at at least a medium amount of scale. This is long overdue and it’s great to see even seemingly trivial things like SET PERSIST coming (I cannot tell you how many times that has tripped me up).
In a future post, I’ll talk about the FRM removal!
The original article presented two graphs: one of MariaDB searches (which are increasing) and the other showing MySQL searches (decreasing or leveling out). It turns out that the y axis REALLY matters.
I honestly expected better….
— Stewart Smith (@stewartsmith) September 22, 2016
Aaron Sullivan announced on the Rackspace Blog that you can now get your own Barreleye system! What’s great is that the code for the Barreleye platform is upstream in the op-build project, which means you can build your own firmware for them (just like garrison, the “IBM S822LC for HPC” system I blogged about a few days ago).
Remarkably, to build an image for the host firmware, it’s eerily similar to any other platform:

git clone --recursive https://github.com/open-power/op-build.git
cd op-build
. op-build-env
op-build barreleye_defconfig
op-build
…and then you wait. You can cross compile on x86.
Hopefully, someone involved in OpenBMC can write on how to build the BMC firmware.
Linux Users of Victoria (LUV) Announce: LUV Main October 2016 Meeting: Sending Linux to Antarctica, 2012-2017 / Annual General Meeting
6th Floor, 200 Victoria St. Carlton VIC 3053
Link: http://luv.asn.au/meetings/map
• Scott Penrose, Sending Linux to Antarctica: 2012-2017
• Annual General Meeting and lightning talks
200 Victoria St. Carlton VIC 3053 (formerly the EPA building)
Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.
LUV would like to acknowledge Red Hat for their help in obtaining the venue.
Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.
October 4, 2016 - 18:30
DrupalCon Dublin is just around the corner. Earlier today I started my journey to Dublin. This week I'll be in Mumbai for some work meetings before heading to Dublin.
On Tuesday 27 September at 1pm I will be presenting my session Let the Machines do the Work. This lighthearted presentation provides some practical examples of how teams can start to introduce automation into their Drupal workflows. All of the code used in the examples will be available after my session. You'll need to attend my talk to get the link.
As part of my preparation for Dublin I've been road testing my session. Over the last few weeks I delivered early versions of the talk to the Drupal Sydney and Drupal Melbourne meetups. Last weekend I presented the talk at Global Training Days Chennai, DrupalCamp Ghent and DrupalCamp St Louis. It was exhausting presenting three times in less than 8 hours, but it was definitely worth the effort. The 3 sessions were presented using hangouts, so they were recorded. I gained valuable feedback from attendees and became aware that some bits of my talk needed attention.
Just as I encourage teams to iterate on their automation, I've been iterating on my presentation. Over the next week or so I will be recutting my demos and polishing the presentation. If you have a spare 40 minutes I would really appreciate it if you watch one of the session recordings below and leave a comment here with any feedback.

Global Training Days Chennai

DrupalCamp Ghent
Note: I recorded the audience, not my slides.

DrupalCamp St Louis
Note: There was an issue with the mic in St Louis, so there is no audio from their side.
The day before yesterday (at Infoxchange, a non-profit whose mission is “Technology for Social Justice”, where I do a few days/week of volunteer systems & dev work), I had to build a docker container based on an ancient wheezy image. It built fine, and I got on with working with it.
Yesterday, I tried to get it built on my docker machine here at home so I could keep working on it, but the damn thing just wouldn’t build. At first I thought it was something to do with networking, because running curl in the Dockerfile was the point where it was crashing – but it turned out that many programs would segfault – e.g. it couldn’t run bash, but sh (dash) was OK.
I also tried running a squeeze image, and that had the same problem. A jessie image worked fine (but the important legacy app we need wheezy for doesn’t yet run in jessie).
After a fair bit of investigation, it turned out that the only significant difference between my workstation at IX and my docker machine at home was that I’d upgraded my home machines to libc6 2.24-2 a few days ago, whereas my IX workstation (also running sid) was still on libc6 2.23.
Anyway, the point of all this is that if anyone else needs to run a wheezy on a docker host running libc6 2.24 (which will be quite common soon enough), you have to upgrade libc6 and related packages (and any -dev packages, including libc6-dev, you might need in your container that are dependent on the specific version of libc6).
In my case, I was using docker but I expect that other container systems will have the same problem and the same solution: install libc6 from jessie into wheezy. Also, I haven’t actually tested installing jessie’s libc6 on squeeze – if it works, I expect it’ll require a lot of extra stuff to be installed too.
I built a new frankenwheezy image that had libc6 2.19-18+deb8u4 from jessie.
To build it, I had to use a system which hadn’t already been upgraded to libc6 2.24. I had already upgraded libc6 on all the machines on my home network. Fortunately, I still had my old VM that I created when I first started experimenting with docker – crazily, it was a VM with two ZFS ZVOLs, a small /dev/vda OS/boot disk, and a larger /dev/vdb mounted as /var/lib/docker. The crazy part is that /dev/vdb was formatted as btrfs (mostly because it seemed a much better choice than aufs). Disk performance wasn’t great, but it was OK…and it worked. Docker has native support for ZFS, so that’s what I’m using on my real hardware.
I started with the base wheezy image we’re using and created a Dockerfile etc to update it. First, I added deb lines to the /etc/apt/sources.list for my local jessie and jessie-updates mirror, then I added the following line to /etc/apt/apt.conf:

APT::Default-Release "wheezy";
Without that, any other apt-get installs in the Dockerfile will install from jessie rather than wheezy, which will almost certainly break the legacy app. I forgot to do it the first time, and had to waste another 10 minutes or so building the app’s container again.
I then installed the following:

apt-get -t jessie install libc6 locales libc6-dev krb5-multidev comerr-dev zlib1g-dev libssl-dev libpq-dev
To minimise the risk of incompatible updates, it’s best to install the bare minimum of jessie packages required to get your app running. The only reason I needed to install all of those -dev packages was because we needed libpq-dev, which pulled in all the rest. If your app doesn’t need to talk to PostgreSQL, you can skip them. In fact, I probably should try to build it again without them – I added them after the first build failed but before I remembered to set APT::Default-Release (OTOH, it’s working OK now and we’re probably better off with libssl-dev from jessie anyway).
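Put together, the frankenwheezy build boils down to a Dockerfile along these lines (the base image name and mirror URL are placeholders for the real local ones):

```dockerfile
# Base is the existing wheezy image (name is a placeholder)
FROM wheezy-base

# Add jessie sources, but keep wheezy as the default release
RUN echo 'deb http://mirror.example.net/debian jessie main' >> /etc/apt/sources.list && \
    echo 'deb http://mirror.example.net/debian jessie-updates main' >> /etc/apt/sources.list && \
    echo 'APT::Default-Release "wheezy";' >> /etc/apt/apt.conf

# Install the bare minimum from jessie to fix the libc6 segfaults
RUN apt-get update && \
    apt-get -y -t jessie install libc6 locales libc6-dev \
        krb5-multidev comerr-dev zlib1g-dev libssl-dev libpq-dev
```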
Once it built successfully, I exported the image to a tar file, copied it back to my real Docker machine (co-incidentally, the same machine with the docker VM installed) and imported it into docker there and tested it to make sure it didn’t have the same segfault issues that the original wheezy image did. No problem, it worked perfectly.
That worked, so I edited the FROM line in the Dockerfile for our wheezy app to use frankenwheezy and ran make build. It built, passed tests, deployed and is running. Now I can continue working on the feature I’m adding to it, but I expect there’ll be a few more yaks to shave before I’m finished.
When I finish what I’m currently working on, I’ll take a look at what needs to be done to get this app running on jessie. It’s on the TODO list at work, but everyone else is too busy – a perfect job for an unpaid volunteer. Wheezy’s getting too old to keep using, and this frankenwheezy needs to float away on an iceberg.
Last October data.gov.au was moved from the Department of Finance to the Department of Prime Minister and Cabinet (PM&C) and I moved with the team before going on maternity leave in January. In July of this year, whilst still on maternity leave, I announced that I was leaving PM&C but didn’t say what the next gig was. In choosing my work I’ve always tried to choose new areas, new parts of the broader system to better understand the big picture. It’s part of my sysadmin background – I like to understand the whole system and where the config files are so I can start tweaking and making improvements. These days I see everything as a system, and anything as a “config file”, so there is a lot to learn and tinker with!
Over the past 3 months, my little family (including new baby) has been living in New Zealand on a bit of a sabbatical, partly to spend time with the new bub during that lovely 6-8 months period, but partly for us to have the time and space to consider next steps, personally and professionally. Whilst in New Zealand I was invited to spend a month working with the data.govt.nz team which was awesome, and to share some of my thoughts on digital government and what systemic “digital transformation” could mean. It was fun and I had incredible feedback from my work there, which was wonderful and humbling. Although tempting to stay, I wanted to return to Australia for a fascinating new opportunity to expand my professional horizons.
Thus far I’ve worked in the private sector, non-profits and voluntary projects, political sphere (as an advisor), and in the Federal and State/Territory public sectors. I took some time whilst on maternity leave to think about what I wanted to experience next, and where I could do some good whilst building on my experience and skills to date. I had some interesting offers but having done further tertiary study recently into public policy, governance, global organisations and the highly complex world of international relations, I wanted to better understand both the regulatory sphere and how international systems work. I also wanted to work somewhere where I could have some flexibility for balancing my new family life.
I’m pleased to say that my next gig ticks all the boxes! I’ll be starting next week at AUSTRAC, the Australian financial intelligence agency and regulator where I’ll be focusing on international data projects. I’m particularly excited to be working for the brilliant Dr Maria Milosavljevic (Chief Innovation Officer for AUSTRAC) who has a great track record of work at a number of agencies, including as CIO of the Australian Crime Commission. I am also looking forward to working with the CEO, Paul Jevtovic APM, who is a strong and visionary leader for the organisation, and I believe a real change agent for the broader public sector.
It should be an exciting time and I look forward to sharing more about my work over the coming months! Wish me luck
Well, the last 3 months just flew past on our New Zealand adventure! This is the final blog post. We meant to blog more often but between limited internet access and being busy making the most of our much needed break, we ended up just doing this final post. Enjoy!
Photos were added every week or so to the flickr album.
I was invited to spend 4 weeks during this trip working with the Department of Internal Affairs in the New Zealand Government on beta.data.govt.nz and a roadmap for data.govt.nz. The team there were just wonderful to work with as were the various people I met from across the NZ public sector. It was particularly fascinating to spend some time with the NZ Head Statistician Liz MacPherson who is quite a data visionary! It was great to get to better know the data landscape in New Zealand and contribute, even in a small way, to where the New Zealand Government could go next with open data, and a more data-driven public sector. I was also invited to share my thoughts on where government could go next more broadly, with a focus on “gov as an API” and digital transformation. It really made me realise how much we were able to achieve both with data.gov.au from 2013-2015 and in the 8 months I was at the Digital Transformation Office. Some of the strategies, big picture ideas and clever mixes of technology and system thinking created some incredible outcomes, things we took for granted from the inside, but things that are quite useful to others and are deserving of recognition for the amazing public servants who contributed. I shared with my New Zealand colleagues a number of ideas we developed at the DTO in the first 8 months of the “interim DTO”, which included the basis for evidence based service design, delivery & reporting, and a vision for how governments could fundamentally change from siloed services to modular and mashable government. “Mashable government” enables better service and information delivery, a competitive ecosystem of products and services, and the capability to automate system to system transactions – with citizen permission of course – to streamline complex user needs. I’m going to do a dedicated blog post later on some of the reflections I’ve had on that work with both data.gov.au and the early DTO thinking, with kudos to all those who contributed.
I mentioned in July that I had left the Department of Prime Minister and Cabinet (where data.gov.au was moved to in October 2015; I’ve been on maternity leave since January 2016). My next blog post will be about where I’m going and why. You get a couple of clues: yes it involves data, yes it involves public sector, and yes it involves an international component. Also, yes I’m very excited about it!! Stay tuned!
When we planned this trip to New Zealand, Thomas had some big numbers in mind for how many fish we should be able to catch. As it turned out, the main seasonal run of trout was 2 months later than usual, so for the first month and a half of our trip it looked unlikely we would get anywhere near what we’d hoped. We got to about 100 fish, fighting for every single one (and keeping only about 5), and then the run began! For 4 weeks of the best fishing of the season I was working in Wellington Mon-Fri, with Little A accompanying me (as I’m still feeding her), leaving Thomas to hold the fort. I did manage to get some great time on the water after Wellington, with my best fishing session (guided by Thomas) resulting in a respectable 14 fish (over 2 hours). Thomas caught a lazy 42 on his best day (over only 3 hours), coming home in time for breakfast and a cold compress for his sprained arm. All up our household clocked up 535 big trout (mostly Thomas!), of which we only kept 10; all the rest were released to swim another day. A few lovely guests contributed to the numbers, so thank you Bill, Amanda, Amelia, Miles, Glynn, Silvia and John, who together contributed about 40 trout to our tally!
My studies are going well. I now have only 1.5 subjects left in my degree (the famously elusive degree, which was almost finished before my 1st year had to be repeated because I’d done it too long ago for the University to give credit, gah!). The degree is a Politics degree with loads of material useful for my work, like public policy, and quite by chance I chose a topic on White Collar Crime, which was FASCINATING!
Over the course of the 3 months we had a number of wonderful guests who contributed to the experience and had their own enjoyable and relaxing holidays with us in little Turangi: fishing, bushwalking, going to the hot pools and thermal walks, doing high tea at the Chateau Tongariro at Whakapapa Village, Huka Falls in Taupo, and even enjoying some excellent mini golf. Thank you all for visiting, spending time with us and sharing in our adventure. We love you all!
Little A is now almost 8 months old and has developed in leaps and bounds from a little baby to an almost-toddler! She has learned to roll and commando crawl (pulling herself around with her arms only) around the floor. She loves to sit up and play with her toys and is eating her way through a broad range of foods, though pear is still her favourite. She is starting to make a range of noises, and the race is on as to whether she’ll say ma or da first. She has quite the social personality and we adore her utterly! She surprised Daddy with a number of presents on Father’s Day, and helped to make our first family Father’s Day memorable indeed.
And so it’s with mixed feelings that we bid adieu to the sleepy town of Turangi. It’s been a great adventure, with lots of wonderful memories and a much-needed chance to get off the grid for a while, but we’re both looking forward to re-entering respectable society, catching up with those of you that we haven’t seen for a while, and planning our next great adventure. We’ll be back in Turangi in February for a different adventure with friends of ours from the US, but that will be only a week or so. Turangi is a great place, and if you’re ever in the area stop into the local shopping centre and try one of the delicious pork and watercress or lamb, mint and kumara pies available from the local bakeries – reason enough to return again and again.
A few events, but mostly circling around London:
- Open collaboration – an O’Reilly Online Conference, at 10am PT, Tuesday September 13 2016 – I’m going to be giving a new talk titled Forking Successfully. I’ve seen how the platform works, and I’m looking forward to trying this method out (it’s like a webinar, but not quite!)
- September MySQL London Meetup – I’m going to focus on MySQL, its branch Percona Server, and the fork MariaDB Server. This should be interesting: why don’t you see a huge Emacs/XEmacs battle after about 20 years? Feature parity. And the work that’s going into MySQL 8.0 is mighty interesting.
- Operability.io should be a fun event, as the speakers were hand-picked and the content is heavily curated. I look forward to my first visit there.
IBM (my employer) recently announced the new S822LC for HPC server, featuring POWER8 CPUs with NVLink and NVIDIA P100 GPUs (press release, IBM Systems Blog, The Register). The “for HPC” suffix on the model number is significant, as there is also a plain S822LC, which is a different machine. What makes the “for HPC” variant different is that the POWER8 CPU has, in addition to PCIe, logic for NVLink to connect the CPU to NVIDIA GPUs.
There are also the NVIDIA Tesla P100 GPUs, NVIDIA’s latest, in an SXM2 form factor; but instead of delving into GPUs, I’m going to tell you how to compile the firmware for this machine.
You see, this is an OpenPOWER machine. It’s an OpenPOWER machine where the vendor (in this case IBM) has worked to get all the needed code upstream, so you can see exactly what goes into a firmware build.
To build the latest host firmware (you can cross-compile on x86, as we use buildroot to build a cross compiler):
git clone --recursive https://github.com/open-power/op-build.git
cd op-build
. op-build-env
op-build garrison_defconfig
op-build
That’s it! Give it a while and you’ll end up with output/images/garrison.pnor – a firmware image to flash onto PNOR. The machine name is garrison, as that’s the code name for the “S822LC for HPC” (you may see Minsky in the press, but that’s a rather new code name; Garrison has been around for a lot longer).
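Before flashing, it can be worth sanity-checking the build output. Here is a minimal sketch: the image path matches the op-build output above, but the check_pnor helper name and the idea of checking the size are my own assumptions, not part of op-build itself.

```shell
#!/bin/sh
# Sketch: sanity-check an op-build PNOR image before flashing.
# check_pnor is a hypothetical helper, not an op-build tool.
check_pnor() {
    img="$1"
    if [ ! -f "$img" ]; then
        echo "missing: $img" >&2
        return 1
    fi
    # A complete PNOR image is tens of megabytes; a tiny file
    # usually means the build stopped early.
    size=$(wc -c < "$img" | tr -d ' ')
    echo "$img: $size bytes"
}

check_pnor output/images/garrison.pnor || echo "run op-build first"
```

If the file looks sane, you can go on to flash it with whatever tool your setup uses for PNOR (flags and tooling vary, so check your machine's documentation).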
Electron Workshop, 31 Arden Street, North Melbourne. Link: http://www.sfd.org.au/melbourne/
There will not be a regular LUV Beginners workshop for the month of September. Instead, you're going to be in for a much bigger treat!
This month, Free Software Melbourne, Linux Users of Victoria and Electron Workshop are joining forces to bring you the local Software Freedom Day event for Melbourne.
The event will take place on Saturday 17th September between 10am and 4:30pm at:
31 Arden Street, North Melbourne.
Electron Workshop is on the south side of Arden Street, about half way between Errol Street and Leveson Street. Public transport: 57 tram, nearest stop at corner of Errol and Queensberry Streets; 55 and 59 trams run a few blocks away along Flemington Road; 402 bus runs along Arden Street, but nearest stop is on Errol Street. On a Saturday afternoon, some car parking should be available on nearby streets.
LUV would like to acknowledge Red Hat for their help in obtaining the Trinity College venue.
Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.
September 17, 2016 - 10:00
Notice that the high tensile M8 bolt attached to the top suspension is at a slight angle. In the end, the top of the suspension will sit between the two new alloy plates. To do that I need to trim some waste from the plates, and to know what to trim I needed to do a test mount to see where things sit. I now have an idea of what to trim for a final test mount ☺.
Below is a close up view of the coil over showing the good clearance from the tire and wheel assembly and the black markings on the top plate giving an idea of the material that I will be removing so that the top tension nut on the suspension clears the plate.
The mounting hole in the suspension is 8mm in diameter, while the bearing blocks are for 1/4 inch (~6.35mm) shafts. For test mounting I got some 1/4 inch threaded rod and hacked off about the length needed to clear both ends of the assembly. M8 nylock nuts on both sides provide a good first mounting for testing. The crossover plate that I made is secured to the beam by two bolts. At the moment the bearing block is held to the crossover by JB Weld only; I will likely use that to hold the piece, then drill through both chunks of ally and bolt them together too. It's somewhat interesting how well these sorts of JB Weld and threaded rod assemblies seem to work, but a fracture in the adhesive at 20km/h, landing from a jump without a bolt fallback, is asking for trouble.
The top mount is shown below. I originally had the shock around the other way, to give maximum clearance at the bottom so the tire didn't touch the shock. But with the bottom mount out this far I flipped the shock to give maximum clearance to the top mounting plates instead.
So now all I need is to cut down the top plates, drill bolt holes for the bearing to crossover plate at the bottom, sand the new bits smooth, and maybe I'll end up using the threaded rod at the bottom with some JB to soak up the difference from 1/4 inch to M8.
Oh, and another order to get the last handful of parts needed for the mounting.