Planet Linux Australia
10 years ago I first blogged about getting glasses. I’ve just ordered my 4th pair of glasses. When you buy new glasses the first step is to scan your old glasses to use as a base point for assessing your eyes; instead of going in cold and trying lots of different lenses they can just try small variations on your current glasses. Any good optometrist will give you a print-out of the specs of your old glasses and your new prescription after you buy glasses. They may be hesitant to do so if you don’t buy, because some people get a prescription at an optometrist and then buy cheap glasses online. Here are the specs of my new glasses, the ones I’m wearing now that are about 4 years old, and the ones before that which are probably about 8 years old:

             New     4 Years Old   Really Old
    R-SPH    0.00     0.00         -0.25
    R-CYL   -1.50    -1.50         -1.50
    R-AXS    180      179           180
    L-SPH    0.00    -0.25         -0.25
    L-CYL   -1.00    -1.00         -1.00
    L-AXS    5        10            179
The Specsavers website has a good description of what this means. In summary SPH is whether you are long-sighted (positive) or short-sighted (negative). CYL is for astigmatism, which is where the focal lengths for horizontal and vertical aren’t equal. AXS is the angle for astigmatism. There are other fields which you can read about on the Specsavers page, but they aren’t relevant for me.
The first thing I learned when I looked at these numbers is that until recently I was apparently slightly short-sighted. In a way this isn’t a great surprise given that I spend so much time doing computer work and very little time focusing on things further away. What is a surprise is that I don’t recall optometrists mentioning it to me. Apparently it’s common to become more long-sighted as you get older so being slightly short-sighted when you are young is probably a good thing.
Astigmatism is the reason why I wear glasses (the Wikipedia page has a very good explanation of this). For the configuration of my web browser and GUI (which I believe to be default in terms of fonts for Debian/Unstable running KDE and Google-Chrome on a Thinkpad T420 with a 1600×900 screen) I can read my blog posts very clearly while wearing glasses. Without glasses I can read them with my left eye, but it is fuzzy, and with my right eye reading is like reading the last line of an eye test: something I can do if I concentrate a lot for test purposes but would never do by choice. If I turn my glasses 90 degrees (so that they make my vision worse not better) then my ability to read the text with my left eye is worse than with my right eye without glasses. This is as expected, as the 1.00 level of astigmatism in my left eye is doubled when I use the lens in my glasses at 90 degrees to its intended angle.
The AXS numbers are for the angle of astigmatism. I don’t know why some of them are listed as 180 degrees or why that would be different from 0 degrees (if I turn my glasses so that one lens is rotated 180 degrees it works in exactly the same way). The numbers from 179 degrees to 5 degrees may be just a measurement error.
-  https://etbe.coker.com.au/2006/09/20/vision/
-  https://www.specsavers.com.au/glasses/your-prescription
-  https://en.wikipedia.org/wiki/Astigmatism_(eye)
- I’m naturally thrilled to be at Percona Live Europe Amsterdam from Oct 3-5 2016. I have previously talked about some of my sessions, and I think there’s another one on the schedule already.
- LinuxCon Europe – Oct 4-6 2016. I won’t be there for the whole conference, but hope to make the most of my day on Oct 6th.
- MariaDB Developer’s meeting – Oct 6-8 2016 – skipping the first day, but will be there all day 2 and 3. I even have a session on day 3, focused on compatibility with MySQL, a topic I deeply care about (session schedule)
- OSCON London – Oct 17-20 2016 – a bit of a late entrant. I have a talk titled “Forking successfully”, where I wonder whether a branch makes more sense, how to fork, and what happens when parity comes.
- October MySQL London Meetup – Oct 17 2016 – I’m already in London, so I wouldn’t miss this meetup for the world! There’s no agenda yet, but I think the discussion should be fun.
I was asked whether it would be safe to open a link in a spam message with wget. So here are some thoughts about wget security and web browser security in general.
Wget Overview
Some spam messages are designed to attack the recipient’s computer. They can exploit bugs in the MUA, applications that may be launched to process attachments (e.g. MS Office), or a web browser. Wget is a very simple command-line program to download web pages; it doesn’t attempt to interpret or display them.
As with any network facing software there is a possibility of exploitable bugs in wget. It is theoretically possible for an attacker to have a web server that detects the client and has attacks for multiple HTTP clients including wget.
An attacker that aims to compromise online banking accounts probably isn’t going to bother developing or buying an exploit against wget. The number of potential victims is extremely low and the potential revenue benefit from improving attacks against other web browsers is going to be a lot larger than developing an attack on the small number of people who use wget. In fact the potential revenue increase of targeting the most common Linux web browsers (Iceweasel and Chromium) might still be lower than that of targeting Mac users.
However if the attacker doesn’t have a profit motive then this may not apply. There are people and organisations who have deliberately attacked sysadmins to gain access to servers (here is an article by Bruce Schneier about the attack on Hacking Team). It is plausible that someone who is targeting a sysadmin could discover that they use wget and then launch a targeted attack against them. But such an attack won’t look like regular spam. For more information about targeted attacks Brian Krebs’ article about CEO scams is worth reading.
Privilege Separation
If you run wget in a regular Xterm in the same session you use for reading email etc, then if there is an exploitable bug in wget it can be used to access all of your secret data. But it is very easy to run wget from another account. You can run “ssh otheraccount@localhost” and then run the wget command so that it can’t attack you. Don’t run “su - otheraccount” as it is possible for a compromised program to escape from that.
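For example, a minimal sketch of this (the account name and URL are placeholders, and it assumes the throwaway account already exists):

    # run the download as an unprivileged throwaway user, in one step,
    # so wget has no access to your main session's files or agent sockets
    ssh otheraccount@localhost "wget -O /tmp/suspect.html 'http://example.com/suspicious-link'"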
I think that most Linux distributions have supported a “switch user” functionality in the X login system for a number of years. So you should be able to lock your session and then change to a session for another user to run potentially dangerous programs.
It is also possible to use a separate PC for online banking and other high value operations. A 10yo PC is more than adequate for such tasks so you could just use an old PC that has been replaced for regular use for online banking etc. You could boot it from a CD or DVD if you are particularly paranoid about attack.
Browser Features
Google Chrome has a feature to not run plugins unless specifically permitted. This requires a couple of extra mouse actions when watching a TV program on the Internet, but prevents random web sites from using Flash and Java, which are two of the most common vectors of attack. Chrome also has a feature to check a web site against a Google blacklist before connecting. When I was running a medium size mail server I often had to determine whether URLs being sent out by customers were legitimate or spam; if a user sent out a URL that was on Google’s blacklist I would lock their account without doing any further checks.
Conclusion
I think that even among Linux users (who tend to be more careful about security than users of other OSs) using a separate PC and booting from a CD/DVD will generally be regarded as too much effort. Running a full featured web browser like Google Chrome and updating it whenever a new version is released will avoid most problems.
Using wget when you have reason to be concerned is a possibility, but not only is it slightly inconvenient, it also often won’t download the content that you want (e.g. in the case of HTML frames).
-  https://www.schneier.com/blog/archives/2016/04/how_hacking_tea.html
-  https://krebsonsecurity.com/2016/04/fbi-2-3-billion-lost-to-ceo-email-scams/
– Does the job
– People will accept
– Never-ending PoC
– Doesn’t do the job
How to pick
– Budget / Licensing
– does it address your pain points
– Learning cliff
– Community support
– Enterprise acceptability
– Config in version control?
Central tooling team
– Pro: standardisation, education
– Con: constant bottleneck, delays, stifles innovation, not in sync with teams
DevOps != Tool
Tools != DevOps
Tools facilitate it, not define it.
Howard Duff – Eric and his blue boxes
Physical example of Kanban in an underwear factory
Lindsey Holmwood – Deepening people to weather the organisation
Note: Lindsey presents really fast so I missed recording a lot from the talk
His Happy, High performing Team -> He left -> 6 months later half of team had left
How do you create a resilient culture?
What is culture?
– Lots of research in organisation psychology
– Edgar Schein – 3 levels of culture
– Artefacts, Values, Assumptions
– Physical manifestations of our culture
– Standups, Org charts, desk layout, documentation
– actual software written
– Easiest to see and adopt
– Goals, strategies and philosophies
– “we will dominate the market”
– “Management is available”
– “nobody is going to be fired for making a mistake”
– lived values vs aspirational values (people have a good nose for bullshit)
– Example: core values of Enron vs reality
– Work as imagined vs work as actually done
– beliefs, perceptions, thoughts and feelings
– exist on an unconscious level
– hard to discern
– “bad outcomes come from bad people”
– “it is okay to withhold information”
– “we can’t trust that team”
– “profits over people”
If we can change our people, we can change our culture
What makes a good team member?
– Assume the best of others
– Aware of their cognitive bias
– Aware of the fundamental attribution error (judge others by actions, judge ourselves by our intentions)
– Aware of hindsight bias. Hindsight bias is your culture killer
– When bad things happen explain in terms of foresight
– Regular 1:1s
Eliminate performance reviews
Willing to play devil’s advocate
Commit and act
– Shared goal settings
– Don’t solutioneer
– Provide context about strategy, about desired outcome
What makes a good team?
Influence of hiring process
– Willingness to adapt and adopt working in new team
– Qualify team fit, tech talent then rubber stamp from team lead
– have a consistent script, but be prepared to improvise
– Everyone has the veto power
– If leadership is vetoing at the last minute, that’s a systemic problem with team alignment, not with the hiring system
– Benefit: team talks to candidate (without leadership present)
– Many different perspectives
– unblock management bottlenecks
– Risk: uncovering dysfunctions and misalignment in your teams
– Hire good people, get out of their way
Diversity and inclusion
– includes: race, gender, sexual orientation, location, disability, level of experience, work hours
– Seek out diverse candidates.
– Sponsor events and meetups
– Make job description clear you are looking for diverse background
– Must include and embrace differences once they actually join
– Safe mechanism for people to raise criticisms, and acting on them
Leadership and Absence of leadership
– Having a title isn’t required
– If the leader steps away things should continue working right
– Team is their own shit umbrella
– empowerment vs authority
– empowerment is giving permission from above (potentially temporary)
– authority is giving power (granting autonomy)
Part of something bigger than the team
– help people build up for the next job
– Guilds in the Spotify model
– Run them like meetups
– Get senior management to come and observe
– What we’re talking about is tech culture
We can change tech culture
– How to make it resist the culture of the rest of the organisation
– Artefacts influence behaviour
– Artifact: fast builds -> Value: build better quality
– Artifact: post incident reviews -> Value: failure is an opportunity for learning
Q: What is a pre-incident review
A: Brainstorm beforehand (e.g. before a big rollout) what you think might go wrong when something big is coming up,
then afterwards do another review of what actually went wrong
Q: what replaces performance reviews
A: One on ones
Q: Overcoming Resistance
A: Do it and point back at the evidence. Hard to argue with an artifact
Q: First step?
A: One-on-ones
Getting started, reading books by Patrick Lencioni:
– Silos, Politics and Turf Wars
– 5 Dysfunctions of a team
Maybe title should be “Culture is Hard”
Working at HealthLink
– Windows running Java stuff
– Out of date and poorly managed
– Deployments manual, thrown over the wall by devs to ops
Team Death Star
– Destroy bad processes
– Change deployment process
CD and CI Requirements
– Goal: Time to regression test under 2 mins, time to deploy under 2 mins (from 2 weeks each)
– Puppet too slow to deploy code in a minute or two. App deployment vs config management
– Couldn’t (then) use containers on Windows, so not an option
– Puppet for Server config
Smashed the 2 minute target!
– We focused on the tech side and let the people side slip
– Windows shop, hard work even to get a Linux VM at the start
– Devs scared to run on Linux. Some initial deploy problems burnt people
– Lots of different new technologies at once all pushed to devs, no pull from them.
Blackout where we weren’t allowed to talk to them for four weeks
– Should have been a warning sign…
We thought we were ready.
– Ops was not ready
“5 dysfunctions of a team”
– Trust is at the bottom, and we didn’t have that
– We were aware of this, but didn’t follow through
– We were used to disruption but other teams were not
Note: I’m not sure how the story ended up; they sort of left it hanging.
Pavel Jelinek – Kubernetes in production
Works at Movio
– Software for Cinema chains (eg Loyalty cards)
– 100 million emails per month, millions of SMS and push notifications (fewer push because people hate those)
– Started with a MySQL and PHP application
– AWS from the beginning
– On the largest AWS instance but still slow
Decided to go with Microservices
– Put stuff in Docker
– Used Jenkins, puppet, own docker registry, rundeck (see blog post)
– Devs didn’t like writing puppet code and other manual setup
Decided to go to new container management at start of 2016
– Was pushing for Nomad but devs liked Kubernetes
– Built in ports, HA, LB, Health-checks
Concepts in Kubernetes
– Pod – one or more containers
– Deployment, DaemonSet, PetSet – scaling of a pod
– Service- resolvable name, load balancing
– ConfigMap, Volume, Secret – Extended Docker Volume
Devs look after some kub config files
– Brings them closer to how stuff is really working
– Using kubectl to create a pod in his work’s lab env (a rough sketch of the flow follows these notes)
– Add load balancer in front of it
– Add a configmap to update the container’s nginx config
– Make it public
– LB replicas, Rolling updates
– lots of small containers are better
– log to container stdout, preferably as JSON
– Test and know your resource requirements (at movio devs teams specify, check and adjust)
– Be aware of the node sizes
– Stateless please
– if not stateless then clustered please
– Must handle unexpected immediate restarts
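A minimal shell sketch of the kubectl flow described in the notes above (the image names, ConfigMap file and ports are hypothetical, and the flags reflect kubectl of that era):

    # create a deployment running a single pod from a hypothetical image
    kubectl run webapp --image=example/webapp:1.0 --port=8080
    # load an nginx config into a ConfigMap from a local file
    kubectl create configmap webapp-nginx --from-file=nginx.conf
    # make it public behind a cloud load balancer
    kubectl expose deployment webapp --type=LoadBalancer --port=80 --target-port=8080
    # scale out, then roll out a new version
    kubectl scale deployment webapp --replicas=3
    kubectl set image deployment/webapp webapp=example/webapp:1.1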
Here’s a summary of the 2016 Linux Security Summit, which was held last month in Toronto.
Presentation slides are available at http://events.linuxfoundation.org/events/archive/2016/linux-security-summit/program/slides.
This year, videos were made of the sessions, and they may be viewed at https://www.linux.com/news/linux-security-summit-videos — many thanks to Intel for sponsoring the recordings!
LWN has published some excellent coverage:
- Inside the mind of a Coccinelle programmer (Julia Lawall keynote)
- State of the Kernel Self Protection Project (Kees Cook)
- Toward measured boot out of the box (Matthew Garrett)
- Filesystem images and unprivileged containers (James Bottomley)
- On the way to safe containers (Stéphane Graber and Tycho Andersen)
- Minijail (Jorge Lucangeli Obes)
- AMD memory encryption technologies (David Kaplan)
- Audit, namespaces, and containers (Richard Guy Briggs)
This is a pretty good representation of the main themes which emerged in the conference: container security, kernel self-protection, and integrity / secure boot.
Many of the core or low level security technologies (such as access control, integrity measurement, crypto, and key management) are now fairly mature. There’s more focus now on how to integrate these components into higher-level systems and architectures.
One talk I found particularly interesting was Design and Implementation of a Security Architecture for Critical Infrastructure Industrial Control Systems in the Era of Nation State Cyber Warfare. (The title, it turns out, was a hack to bypass limited space for the abstract in the CFP system). David Safford presented an architecture being developed by GE to protect a significant portion of the world’s electrical grid from attack. This is being done with Linux, and is a great example of how the kernel’s security mechanisms are being utilized for such purposes. See the slides or the video. David outlined gaps in the kernel in relation to their requirements, and a TPM BoF was held later in the day to work on these. The BoF was reportedly very successful, as several key developers in the area of TPM and integrity were present.
Attendance at LSS was the highest yet with well over a hundred security developers, researchers and end users.
Special thanks to all of the LF folk who manage the logistics for the event. There’s no way we could stage something on this scale without their help.
Stay tuned for the announcement of next year’s event!
– “News” Website
– 5 person DevOps team
– “Something you do because Gartner said it’s cool”
– Sysadmin -> InfraCoder/SRE -> Dev Shepherd -> Dev
– Stuff in the middle somewhere
Company Structure drives DevOps structure
– Lots of products – one team != one product
– Dev teams with very specific focus
– Scale – too big, yet too small
About our team
– Mainly Ops focus
– small number compared to developers
– Operate like an agency model for developers
– “If you buy the Dom Post it would help us grow our team”
– Lots of different vendors with different skill levels and technology
– Use Kanban with Jira
– Works for Ops focussed team
– Not so great for long running projects
War Against OnCall
– Biggest cause of burnout
– focus on minimising callouts
– Zero alarm target
– Love pagerduty
Commonalities across platforms
– Everyone using compute
– Using Public Cloud
– Using off the shelf version control, deployment solutions
– Don’t get overly creative and make things too complex
– Proven technology that is well tried and tested, with skills available in the marketplace
– Classic technologies like Nginx, Java and Varnish still have their place. Don’t always need the latest fashion
– Linux, ubuntu
– Adobe AEM Java CMS
– AWS 14x c4.2xlarge
– Varnish in front, used by everybody else. Makes ELB and ALB look like toys
How they use Varnish
– Retries against backends on 500 replies, serves old copies
– Splits routes to various backends
– Controls CDN via headers (see the curl sketch after this list)
– Dynamic Configuration via puppet
– Keeps the site online during breaking-news load
– 90% cache offload
– Management is a bit slow and manual
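One way to sanity-check that kind of header-driven caching is to inspect the response headers directly. A hedged sketch (the hostname is a placeholder, and exactly which headers appear depends on the VCL and CDN in use):

    # fetch only the headers and pick out the cache-related ones
    curl -sI https://news.example.com/ | egrep -i 'cache-control|age:|x-cache'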
– Small batch jobs
– Check mail reputation score
– “Download file from a vendor” type stuff
– Purge cache when static file changes
– Lambda webapps – hopefully soon, a bit immature
Increasing number of microservices
Standards are vital for microservices
– Simple and reasonable
– Shareable with vendors and internal teams
– Grow organically
– Need to be detailed
– 12 factor App
– 3 languages Node, Java, Ruby
– Common deps (SQL, varnish, memcache, Redis)
– Build pipeline standardised. Using Codeship
– Standardise server builds
– Everything Automated with puppet
– Puppet building docker containers (w puppet + puppetstry)
– Std Application deployment
– Had proliferation
– pm2, god, supervisord, SysV init are out
– systemd and upstart are in (a minimal unit sketch below)
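Purely as an illustration of what a standardised systemd deployment looks like, a minimal unit sketch (every name and path here is hypothetical, not from the talk):

    # /etc/systemd/system/webapp.service (hypothetical)
    [Unit]
    Description=Example web application
    After=network.target

    [Service]
    User=webapp
    ExecStart=/usr/local/bin/webapp --port 8080
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target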
– “Enterprise ___” is always bad
– Educating the business is a forever job
– Be reasonable, set boundaries
More Stuff at
Q: Pull request workflow
A: Largely replaced traditional review
Q: DR eg AWS outage
A: Documented process if codeship dies can manually push, Rest in 2*AZs, Snapshots
Q: Dev teams structure
A: Project specific rather than product specific.
Q: Puppet code tested?
A: Not really. Kinda tested via the pre-prod environment. Would prefer result (serverspec) testing rather than low-level testing of each line
A: Code team have good test coverage though. 80-90% in many cases.
Q: Load testing, APM
A: Use New Relic. Not much luck with external load testing companies
Q: What if somebody wants something non-standard?
A: Case-by-case. Allowed if needed but needs a good reason.
Q: What happens when automation breaks?
A: Documentation is actually pretty good.
Theory: DevOps is a role that never existed.
In the old days
– Shipping used to be hard and expensive, eg on physical media
– High cost of release
– but everybody else was the same.
– Lots of QA and red tape, no second chances
Then we got the Internet
– Speed became everything
– You just shipped enough
But Hardware still was a limiting factor
– Virtual machines
This led to complacency
– Still had a physical server under it all
Birth of devops
– Software got faster but still had to have hardware under there somewhere
– Disparity between operations cadence and devs cadence
– things got better
– But we didn’t free ourselves from hardware
– Now everything is much more complex
Developers are now divorced from the platform
– Everything is abstracted
– It is leaky abstractions all the way down
– Education of developers as to what happens below the hood
– Stop reinventing the wheel
– Harmony is much more productive
– Lots of tools means that you don’t have enough expertise on each
– Reduce fiefdoms
– Push responsibility but not ownership (you own it but the devs make some of the changes)
– Live with the code
– Pit of success, easy ways to fail that don’t break stuff (eg test environments, by default it will do the right thing)
– Be Happy. Everybody needs to be a bit devops and know a bit of everything.
Backend developer at Spotify
– 100m active users
– 800+ tech employees
– 120 teams
– Microservices architecture
Walk through a sample artist’s page
– each component (playlist, play count, discography) is a separate service
– Aggregated to send result back to client
Hard to co-ordinate between services as scale grows
– 1000+ services
– Each need to use each others APIs
– Dev teams all around the world
– Teams had docs in different places
– Some in Wiki, Readme, markdown, all different
Current Solution – System Z
– Centralise in one place, as automated as possible
– Internal application
– Web app, catalog of all systems and its parts
– Well integrated with the Apollo service framework
Web Page for each service
– Various tabs
– Configuration (showing versions of build and uptimes)
– API – list of all endpoints for service, scheme, errors codes, etc (automatically populated)
– System tab – Overview on how service is connected to other services, dependencies (generated automatically)
– System Z gets information from Apollo and prod servers about each service that has been registered
– Java libs for writing microservices
– Open source
– Metadata module
– Exposes endpoint with metadata for each service
– instance info – versions, uptime
– configuration – currently loaded config of the service
– endpoints –
– call information – monitors the service and learns what incoming and outgoing calls it actually makes, and to/from which other services
– Automatically builds dependencies
– Quicker access to relevant information
– Automated boring stuff
– All in one place
– Think about growth and scaling at the start of the project
Q: How to handle breaking APIs
A: We create new version of API endpoint and encourage people to move over.
– Works for Datacom
– Consultant in Application performance management team
Story from Start of 2015
– Friday night phone calls from your boss are never good.
– Dropped in application monitoring tools (Dynatrace) on Friday night, watch over weekend
– Prev team pretty sure problem is a memory leak but had not been able to find it (for two weeks)
– If somebody tells you they know what is wrong but can’t find it, give details, or fix it, then be suspicious
Book: Java Enterprise performance
– Monday prod load goes up and app starts crashing
– Told the ops team, but since the crash wasn’t visible yet, was not believed. Waited.
– Java App, Jboss on Linux
– Multiple JVMs
– Oracle DBs, Mulesoft ESB, ActiveMQ, HornetQ
Ah Ha moment
– Had a look at import process
– 2.3 million DB queries per half hour
– With a max of 260 users, that seems way more than what is needed
– Happens even when nobody is logged in
Tip: Typically 80% of all issues can be detected in dev or test if you look for them.
Where did this code come from?
– Process to import a csv into the database
– 1 call mule -> 12 calls to AMQ -> 12 calls to App -> 102 db queries
– Passes all the tests… But
– Still shows huge growth in queries as we go through layers
– The number of DB queries grows with each run
Tip: Know how your code behaves and track how this behaviour changes with each code change (or even with no code change)
Q: Why Dynatrace?
A: Quick to deploy, useful info back in only a couple of hours
Originally in the Marines, environment where burnout not tolerated
Works for Thoughtworks – not a mental health professional
Devops could make this worse
Some clichéd places say: “Teach the devs puppet and fire all the Ops people”
Why should we address burnout?
– Google found psychological safety was the number 1 indicator of an effective team
– Not just a negative, people do better job when feeling good.
What is burnout
– The Truth about burnout – Maslach and Leiter
– The Dimensions of Burnout
– Mismatch between work and the person
– Work overload
– Lack of control
– Insufficient reward
– Breakdown of communication
– Various prioritisation methods
– More load sharing
– Less deploy marathons
– Some orgs see devops as a cost saving
– There is no such thing as a full stack engineer
– team has skills, not a person
Lack of Control
– Team is ultimately responsible for the decisions
– Use the right technology and tools for the team
– This doesn’t mean a “DevOps team” controlling what others do
– Actually not a great motivator
Breakdown in communication
– Walls between teams are bad
– Everybody involved with product should be on the same team
– 2 pizza team
– Pairs with different skill sets are common
– Swarming can be done when required ( one on keyboard, everybody else watching and talking and helping on big screen)
– Blameless retrospectives are held
– No “Devops team”, creating a silo is not a solution for silos
Absence of Fairness
– You build it, you run it
– Everybody is responsible for quality
– Everybody is measured in the same way
– example: Expedia – *everything* deployed has A/B testing
– everybody goes to release party
– In the broadest possible sense
– eg Company industry and values should match your own
Reminder: it is about you and how you fit in with the above
Pay attention to how you feel
– Increase your self awareness
– Maslach Burnout Inventory
– Try not to focus on the negative.
Pay attention to work/life balance
– Ask for it, company might not know your needs
– If you can’t get it then quit
Talk to somebody
– Professional help is the best
– Trained to identify cause and effect
– can recommend treatment
– You’d call them if you broke your arm
Friends and family
– People who care, even ones you haven’t met
– Empathy is great, but you aren’t a professional
– Don’t guess cause and effect
– Don’t recommend treatment if not a professional
Q: Is it Gender specific for men (since IT is male dominated) ?
– The “absence of fairness” problem is huge for women in IT
Q: How to promote Psychological safety?
– Blameless post-mortems
Damian Brady – Just let me do my job
After working in govt, went to work for new company and hoped to get stuff done
But whole dev team was unhappy
– Random work assigned
– All deadlines missed
– Lots of waste of time meetings
But 2 years later
– Hitting all deadlines
– Useful meetings
What changes were made?
New boss protected devs from MUD (Meetings, Uncertainty, Distractions)
– In the broad sense: 1:1s, all-hands, normal meetings
– People are averaging 7.5 hours/week in meetings
– On average 37% of meeting time is not relevant to person ( ~ $8,000 / year )
– Do meetings have goals and do they achieve those goals?
– 38% without goals
– only half of remaining meet those goals
– around 40% of meetings have and achieve goals
– Might not be wasted. Look at “What has changed as a result of this meeting?”
– New Boss went to meetings for us (didn’t need everybody) as a representative
– Set a clear goal and agenda
– Avoid gimmicks
– don’t default to 30min or 1h
– 60% of people interrupted 10 or more times per day
– Good to stay in a “flow state”
– 40% of people say they are regularly focussed in their work, but all are sometimes
– 35% of the time people lose focus when interrupted
– Studies show people can take up to 23 minutes to get focus back after an interruption
– ~$25,000/year wasted due to interruptions
– Allowing headphones, rule not to interrupt people wearing headphones
– “Do not disturb” times
– Little Signs
– Had “the finger” so that you could tell somebody you were busy right now and would come back to them
– Let devs go to meeting rooms or cafes to hide from interruptions
– “Go dark” periods where email and chat are turned off
– 82% in survey were clear
– For nearly 60% of people, their top priority changes before they can finish it.
– Autonomy, mastery, purpose
– Tried to let people get clear runs at work
– Helped people acknowledge the unexpected work, add to Sprint board
– Established a gate – Business person would have to go through the manager
– Make the requester responsible – made the requester decide what stuff didn’t get done by physically removing stuff from the sprint board to add their own
The new MySQL 8.0.0 milestone release that was recently announced brings something that has been a looooong time coming: the removal of the FRM file. I was the one who implemented this in Drizzle way back in 2009 (July 28th 2009, according to Brian), and I may have had a flashback to removing the tentacles of the FRM when reading the MySQL 8.0.0 announcement.
As an idea for how long this has been on the cards, I’ll quote Brian from when we removed it in Drizzle:
We have been talking about getting rid of FRM since around 2003. I remember a drive up to northern Finland with Kaj Arnö, where we spent an hour talking about this. I, David, and MontyW have talked about this for years.
Soo… it was a known problem for at least thirteen years. One of the issues removing it was how pervasive all of the FRM related things were. I shudder at the mention of “pack_flag” and Jay Pipes probably does too.
At the time, we tried a couple of approaches as to how things should look. Our philosophy with Drizzle was that it should get out of the way and let the storage engines be the storage engines, and not try to second guess them or keep track of things behind their back. I still think that was the correct architectural approach: the role of Drizzle was to put SQL on top of a storage engine, not to also be one itself.
Looking at the MySQL code, there’s one giant commit 31350e8ab15179acab5197fa29d12686b1efd6ef. I do mean giant too, the diffstat is amazing:

    786 files changed, 58471 insertions(+), 25586 deletions(-)
How anyone even remotely did code review on that I have absolutely no idea. I know the only way I could get it to work in Drizzle was to do it incrementally: a series of patches that gradually chiseled out what needed to be taken out so I could put in an API and the protobuf code.
Oh, and in case you’re wondering:

    - uint offset,pack_flag;
    + uint offset;
Thank goodness. Now, you may not appreciate that as much as I might, but pack_flag was not the height of design. It was… pretty much a catch-all for any kind of data about a field that didn’t already have a field in the FRM. So it may include information on whether the field could be null, if it’s decimal, how many bytes an integer takes, that it’s a number and how many, oh, just don’t ask.
Also gone is the weird interval_id and a whole bunch of limitations because of the FRM format, including one that I either just discovered or didn’t remember: if you used all 256 characters in an enum, you couldn’t create the table as MySQL would pick either a comma or an unused character to be the separator in the FRM!?!
Also changed is how the MySQL server handles default values. For those not aware, the FRM file contains a static copy of the row containing default values. This means the default values are computed once on table creation and never again (there’s a bunch of work arounds for things like AUTO_INCREMENT and DEFAULT NOW()). The new sql/default_values.cc is where this is done now.
For now at least, table metadata is also written to a file that appears to be JSON format. It’s interesting that a SQL database server is using a schemaless file format to describe schema. It appears that these files exist only for disaster recovery or perhaps portable tablespaces. As such, I’m not entirely convinced they’re needed…. it’s just a thing to get out of sync with what the storage engine thinks and causes extra IO on DDL (as well as forcing the issue that you can’t have MVCC into the data dictionary itself).
What will be interesting is to see the lifting of these various limitations and how MariaDB will cope with that. Basically, unless they switch, we’re going to see some interesting divergence in what you can do in either database.
There’s certainly differences in how MySQL removed the FRM file to the way we did it in Drizzle. Hopefully some of the ideas we had were helpful in coming up with this different approach, as well as an extra seven years of in-production use.
At some point I’ll write something up as to the fate of Drizzle and a bit of a post-mortem, I think I may have finally worked out what I want to say…. but that is a post for another day.
This is my very first post on Planet PostgreSQL, so thank you for having me here! I’m not sure if you’re aware, but the PostgreSQL Events page lists the conference as something that should be of interest to PostgreSQL users and developers.
There is a PostgreSQL Day on October 4 2016 in Amsterdam, and if you’re planning on just attending a single day, use code PostgreSQLRocks and it will only cost €200+VAT.
I for one am excited to see Patroni: PostgreSQL High Availability made easy, Relational Databases at Uber: MySQL & Postgres, and Linux tuning to improve PostgreSQL performance: from hardware to postgresql.conf.
I’ll write notes here. If time permits we’ll do a database hackers lunch gathering (it’s good to mingle with everyone), and I reckon if you’re coming for PostgreSQL day, don’t forget to also sign up for the Community Dinner at Booking.com.
So, about ten days ago the MySQL Server Team released MySQL 8.0.0 Milestone to the world. One of the most unfortunate things about MySQL development is that it’s done behind closed doors, with the only hints of what’s to come arriving in maybe a note on a bug or such milestone releases that contain a lot of code changes. How much code change? Well, according to the text up on github for the 8.0 branch “This branch is 5714 commits ahead, 4 commits behind 5.7. ”
Way back in 2013, I looked at MySQL Code Size over releases, which I can again revisit and include both MySQL 5.7 and 8.0.0.
While 5.7 was a big jump again, we seem to be somewhat leveling off, which is a good thing. Managing to add features and fix long standing problems without bloating code size is good for software maintenance. Honestly, hats off to the MySQL team for keeping it to around a 130kLOC code size increase over 5.7 (that’s around 5%).
These days I’m mostly just a user of MySQL, pointing others in the right direction when it comes to some issues around it and being the resident MySQL grey(ing)beard (well, if I don’t shave for a few days) inside IBM, as a very much side project to my day job of OPAL firmware.
So, personally, I’m thrilled about no more FRM, better Unicode, SET PERSIST and performance work. With my IBM hat on, I’m thrilled about the fact that it compiled on POWER out of the box and managed to work (I haven’t managed to crash it yet). There seems to be a possible performance issue, but hey, this is a huge improvement over the 5.7 developer milestones when run on POWER.
A lot of the changes are focused around usability, making it easier to manage and easier to run at at least a medium amount of scale. This is long overdue and it’s great to see even seemingly trivial things like SET PERSIST coming (I cannot tell you how many times that has tripped me up).
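As a quick illustration of why SET PERSIST matters: previously a runtime change survived only until restart unless you also edited my.cnf, whereas now it can be persisted in one step. The variable and value below are just an example:

    # persist a runtime setting across server restarts (MySQL 8.0+)
    mysql -u root -p -e 'SET PERSIST max_connections = 500;'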
In a future post, I’ll talk about the FRM removal!
The original article presented two graphs: one of MariaDB searches (which are increasing) and the other showing MySQL searches (decreasing or leveling out). It turns out that the y axis REALLY matters.
I honestly expected better….
— Stewart Smith (@stewartsmith) September 22, 2016
Aaron Sullivan announced on the Rackspace Blog that you can now get your own Barreleye system! What’s great is that the code for the Barreleye platform is upstream in the op-build project, which means you can build your own firmware for them (just like garrison, the “IBM S822LC for HPC” system I blogged about a few days ago).
Remarkably, to build an image for the host firmware, it’s eerily similar to any other platform:

    git clone --recursive https://github.com/open-power/op-build.git
    cd op-build
    . op-build-env
    op-build barreleye_defconfig
    op-build
…and then you wait. You can cross compile on x86.
Hopefully, someone involved in OpenBMC can write on how to build the BMC firmware.
Linux Users of Victoria (LUV) Announce: LUV Main October 2016 Meeting: Sending Linux to Antarctica, 2012-2017 / Annual General Meeting
6th Floor, 200 Victoria St. Carlton VIC 3053
Link: http://luv.asn.au/meetings/map
• Scott Penrose, Sending Linux to Antarctica: 2012-2017
• Annual General Meeting and lightning talks
200 Victoria St. Carlton VIC 3053 (formerly the EPA building)
Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.
LUV would like to acknowledge Red Hat for their help in obtaining the venue.
Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.
October 4, 2016 - 18:30
DrupalCon Dublin is just around the corner. Earlier today I started my journey to Dublin. This week I'll be in Mumbai for some work meetings before heading to Dublin.
On Tuesday 27 September at 1pm I will be presenting my session Let the Machines do the Work. This lighthearted presentation provides some practical examples of how teams can start to introduce automation into their Drupal workflows. All of the code used in the examples will be available after my session. You'll need to attend my talk to get the link.
As part of my preparation for Dublin I’ve been road testing my session. Over the last few weeks I delivered early versions of the talk to the Drupal Sydney and Drupal Melbourne meetups. Last weekend I presented the talk at Global Training Days Chennai, DrupalCamp Ghent and DrupalCamp St Louis. It was exhausting presenting three times in less than 8 hours, but it was definitely worth the effort. The 3 sessions were presented using hangouts, so they were recorded. I gained valuable feedback from attendees and became aware that some bits of my talk needed attention.
Just as I encourage teams to iterate on their automation, I’ve been iterating on my presentation. Over the next week or so I will be recutting my demos and polishing the presentation. If you have a spare 40 minutes I would really appreciate it if you watch one of the session recordings below and leave a comment here with any feedback.
Global Training Days Chennai
DrupalCamp Ghent
Note: I recorded the audience, not my slides.
DrupalCamp St Louis
Note: There was an issue with the mic in St Louis, so there is no audio from their side.
The day before yesterday (at Infoxchange, a non-profit whose mission is “Technology for Social Justice”, where I do a few days/week of volunteer systems & dev work), I had to build a docker container based on an ancient wheezy image. It built fine, and I got on with working with it.
Yesterday, I tried to get it built on my docker machine here at home so I could keep working on it, but the damn thing just wouldn’t build. At first I thought it was something to do with networking, because running curl in the Dockerfile was the point where it was crashing – but it turned out that many programs would segfault – e.g. it couldn’t run bash, but sh (dash) was OK.
I also tried running a squeeze image, and that had the same problem. A jessie image worked fine (but the important legacy app we need wheezy for doesn’t yet run in jessie).
After a fair bit of investigation, it turned out that the only significant difference between my workstation at IX and my docker machine at home was that I’d upgraded my home machines to libc6 2.24-2 a few days ago, whereas my IX workstation (also running sid) was still on libc6 2.23.
Anyway, the point of all this is that if anyone else needs to run wheezy on a docker host running libc6 2.24 (which will be quite common soon enough), you have to upgrade libc6 and related packages in the container (including any -dev packages, such as libc6-dev, that you might need in your container and that are dependent on the specific version of libc6).
In my case, I was using docker but I expect that other container systems will have the same problem and the same solution: install libc6 from jessie into wheezy. Also, I haven’t actually tested installing jessie’s libc6 on squeeze – if it works, I expect it’ll require a lot of extra stuff to be installed too.
I built a new frankenwheezy image that had libc6 2.19-18+deb8u4 from jessie.
To build it, I had to use a system which hadn’t already been upgraded to libc6 2.24. I had already upgraded libc6 on all the machines on my home network. Fortunately, I still had my old VM that I created when I first started experimenting with docker – crazily, it was a VM with two ZFS ZVOLs, a small /dev/vda OS/boot disk, and a larger /dev/vdb mounted as /var/lib/docker. The crazy part is that /dev/vdb was formatted as btrfs (mostly because it seemed a much better choice than aufs). Disk performance wasn’t great, but it was OK…and it worked. Docker has native support for ZFS, so that’s what I’m using on my real hardware.
I started with the base wheezy image we’re using and created a Dockerfile etc to update it. First, I added deb lines to the /etc/apt/sources.list for my local jessie and jessie-updates mirror, then I added the following line to /etc/apt/apt.conf:

    APT::Default-Release "wheezy";
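The deb lines mentioned above would have looked something like this (the mirror hostname is a placeholder for my local mirror):

    deb http://mirror.example.com/debian jessie main
    deb http://mirror.example.com/debian jessie-updates main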
Without that, any other apt-get installs in the Dockerfile will install from jessie rather than wheezy, which will almost certainly break the legacy app. I forgot to do it the first time, and had to waste another 10 minutes or so building the app’s container again.
I then installed the following:

    apt-get -t jessie install libc6 locales libc6-dev krb5-multidev comerr-dev zlib1g-dev libssl-dev libpq-dev
To minimise the risk of incompatible updates, it’s best to install the bare minimum of jessie packages required to get your app running. The only reason I needed to install all of those -dev packages was because we needed libpq-dev, which pulled in all the rest. If your app doesn’t need to talk to postgresql, you can skip them. In fact, I probably should try to build it again without them – I added them after the first build failed but before I remembered to set APT::Default-Release (OTOH, it’s working OK now and we’re probably better off with libssl-dev from jessie anyway).
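Putting those steps together, the Dockerfile was along these lines – a sketch, assuming the base image is tagged wheezy-base and using the placeholder mirror URL from above:

    FROM wheezy-base
    # pull libc6 and friends from jessie, but keep wheezy as the default release
    RUN echo 'APT::Default-Release "wheezy";' >> /etc/apt/apt.conf \
     && echo 'deb http://mirror.example.com/debian jessie main' >> /etc/apt/sources.list \
     && echo 'deb http://mirror.example.com/debian jessie-updates main' >> /etc/apt/sources.list \
     && apt-get update \
     && apt-get -y -t jessie install libc6 locales libc6-dev krb5-multidev comerr-dev \
        zlib1g-dev libssl-dev libpq-dev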
Once it built successfully, I exported the image to a tar file, copied it back to my real Docker machine (co-incidentally, the same machine with the docker VM installed) and imported it into docker there and tested it to make sure it didn’t have the same segfault issues that the original wheezy image did. No problem, it worked perfectly.
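The export/import step is just docker save and docker load (image name assumed to be frankenwheezy):

    # on the build VM
    docker save frankenwheezy > frankenwheezy.tar
    # copy the tar file across, then on the real docker machine:
    docker load < frankenwheezy.tar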
That worked, so I edited the FROM line in the Dockerfile for our wheezy app to use frankenwheezy and ran make build. It built, passed tests, deployed and is running. Now I can continue working on the feature I’m adding to it, but I expect there’ll be a few more yaks to shave before I’m finished.
When I finish what I’m currently working on, I’ll take a look at what needs to be done to get this app running on jessie. It’s on the TODO list at work, but everyone else is too busy – a perfect job for an unpaid volunteer. Wheezy’s getting too old to keep using, and this frankenwheezy needs to float away on an iceberg.
Last October data.gov.au was moved from the Department of Finance to the Department of Prime Minister and Cabinet (PM&C) and I moved with the team before going on maternity leave in January. In July of this year, whilst still on maternity leave, I announced that I was leaving PM&C but didn’t say what the next gig was. In choosing my work I’ve always tried to choose new areas, new parts of the broader system to better understand the big picture. It’s part of my sysadmin background – I like to understand the whole system and where the config files are so I can start tweaking and making improvements. These days I see everything as a system, and anything as a “config file”, so there is a lot to learn and tinker with!
Over the past 3 months, my little family (including new baby) has been living in New Zealand on a bit of a sabbatical: partly to spend time with the new bub during that lovely 6-8 month period, and partly for us to have the time and space to consider next steps, personally and professionally. Whilst in New Zealand I was invited to spend a month working with the data.govt.nz team, which was awesome, and to share some of my thoughts on digital government and what systemic “digital transformation” could mean. It was fun and I had incredible feedback from my work there, which was wonderful and humbling. Although it was tempting to stay, I wanted to return to Australia for a fascinating new opportunity to expand my professional horizons.
Thus far I’ve worked in the private sector, non-profits and voluntary projects, political sphere (as an advisor), and in the Federal and State/Territory public sectors. I took some time whilst on maternity leave to think about what I wanted to experience next, and where I could do some good whilst building on my experience and skills to date. I had some interesting offers but having done further tertiary study recently into public policy, governance, global organisations and the highly complex world of international relations, I wanted to better understand both the regulatory sphere and how international systems work. I also wanted to work somewhere where I could have some flexibility for balancing my new family life.
I’m pleased to say that my next gig ticks all the boxes! I’ll be starting next week at AUSTRAC, the Australian financial intelligence agency and regulator where I’ll be focusing on international data projects. I’m particularly excited to be working for the brilliant Dr Maria Milosavljevic (Chief Innovation Officer for AUSTRAC) who has a great track record of work at a number of agencies, including as CIO of the Australian Crime Commission. I am also looking forward to working with the CEO, Paul Jevtovic APM, who is a strong and visionary leader for the organisation, and I believe a real change agent for the broader public sector.
It should be an exciting time and I look forward to sharing more about my work over the coming months! Wish me luck!