Planet Linux Australia
The reviews online for this book aren't great, and frankly they're right. The plot is predictable, and there isn't much character development. Just lots and lots of blow-by-blow combat. It gets wearing after a while, and I found this book a bit of a slog. Not recommended.
Tags for this post: book william_c_dietz combat halo engineered_human cranial_computer personal_ai aliens
Related posts: Halo: The Fall of Reach; The Last Colony; The End of All Things; The Human Division; Old Man's War; The Ghost Brigades
Thanks to the absolutely amazing efforts of the LCA video team, they’ve already (only a few days after I gave it) got the video from my linux.conf.au 2016 talk up!

Abstract
In mid 2014, IBM released the first POWER8 based systems with the new Free and Open Source OPAL firmware. Since then, several members of the OpenPower foundation have produced (or are currently producing) machines based on the POWER8 processor with the OPAL firmware.
This talk will cover the POWER8 chip with an open source firmware stack and how it all fits together.
We will walk through all of the firmware components and what they do, including the boot sequence from power being applied up to booting an operating system.
We’ll delve into:
– the time before you have RAM
– the time before you have thermal management
– the time before you have PCI
– runtime processor diagnostics and repair
– the bootloader (and extending it)
– building and flashing your own firmware
– using a simulator instead
– the firmware interface that Linux talks to
– device tree and OPAL calls
– fun in firmware QA and testing
A few weeks ago I noticed a retweet by ESA, asking for expressions of interest from space enthusiasts to attend and social-media (verb) the inauguration of a new antenna at their New Norcia deep space tracking site in Western Australia.
After some um-ing and ah-ing, I decided to apply. After all, when I'm on holiday elsewhere I try to visit observatories and other space related things and am always a bit disappointed when a fence keeps me at a distance.
Last week I got an email with the happy news that I was one of the fifteen lucky people selected to attend!
So, over the next week you'll probably see a lot of space tweets from me with impressive radio hardware, behind the scenes looks at things, and a lot of excited people.
Tags: space, SocialSpaceWA, ESA, deep space, astronomy
Yesterday at linux.conf.au 2016 in Geelong, I had the privilege of being able to introduce our plans for linux.conf.au 2017, which my team and I are bringing to Hobart next year. We’ll be sharing more with you over the coming weeks and months, but until then, here’s some stuff you might like to know:

The Dates
16–20 January 2017.

The Venue
We’re hosting at the Wrest Point Convention Centre. I was involved in the organisation of PyCon Australia 2012 and 2013, which used Wrest Point, and I’m very confident that they deeply understand the needs of our community. Working out of a Convention Centre will reduce the amount of work we need to do as a team to organise the main part of the conference, and will let us focus on delivering an even better social programme for you.
We’ll have preferred rates at the adjoining hotels, which we’ll make available to attendees closer to the conference. We will also have the University of Tasmania apartments available, if you’d rather stay somewhere more affordable. The apartments are modern, have great common spaces, and were super-popular back when lca2009 was in Hobart.

The Theme
Our theme for linux.conf.au 2017 is The Future of Open Source. LCA has a long history as a place where people come to learn from people who actually build the world of Free and Open Source Software. We want to encourage presenters to share with us where they think their projects are heading over the coming years. These thoughts could be deeply technical: presenting emerging Open Source technology, or features of existing projects that are about to become part of every sysadmin’s toolbox.
Thinking about the future, though, also means thinking about where our community is going. Open Source has become massively successful in much of the world, but is this success making us become complacent in other areas? Are we working to meet the needs of end-users? How can we make sure we don’t completely miss the boat on Mobile platforms? LCA gets the best minds in Free Software to gather every year. Next year, we’ll be using that opportunity to help see where our world is heading.
So, that’s where our team has got so far. Hopefully you’re as excited to attend our conference as we are to put it on. We’ll be telling you more about it real soon now. In the meantime, why not visit lca2017.org and find out more about the city, or sign up to the linux.conf.au announcements list, so that you can find out more about the conference as we announce it!
- New Zealand Open Source Society
- LCA 2015 give-aways of ARM chromebooks
- Linux on ARM challenge
- Call to Arms
- x86 != Linux
- Please consider other architectures
- Open Source GPS and MAP sharing
- Android client and IOS to come
- Create a group, Add placemaps, Share location with a group
- Also runs an OpenStreetMap tile server
- stackptr.com/registration – Invite code LCA2016
- Hat Rack
- code is in github, but what about everything else?
- How to acknowledge stuff that isn’t code?
- bit.do/LABHR #LABHR
- Recommend people, especially people not like you
- Melbourne 12-16 August
- DjangoCon Au, Science and Data Miniconf, Python in Education plus more on 1st day
- CFP opens in mid-March
- Financial assistance programme
- Kiwi PyCon
- 2016 in Dunedin
- Town Hall
- 9-11 September
- Have fun
- Open up the government data
- 29-31 July across Aus and NZ
- JMAP: a better way to email
- Lots of email standards, all awful
- $Company API
- json over https
- Single API for email/cal/contacts
- Mobile/battery/network friendly
- Working now at fastmail
- Support friendly (only uses http, just one port for everything).
- Batches commands, uses OOB notification
- Upgrade path – JMAP proxy
- http://jmap.io , https://proxy.jmap.io/
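To make the “json over https” and “batches commands” points above concrete, here is a sketch of what a batched request might look like: one HTTP POST carrying a JSON array of method calls, each tagged with a client id so responses can be matched up. The method names and arguments here are illustrative guesses, not taken from the JMAP spec.

```python
import json

# One HTTP body, several method calls: each entry is
# [method, arguments, client-tag]. Method names here are
# invented for illustration, not the real JMAP vocabulary.
batch = [
    ["getMailboxes", {}, "#0"],
    ["getMessageList", {"filter": {"inMailbox": "inbox"}, "limit": 10}, "#1"],
    ["getMessages", {"ids": "#1/messageIds", "properties": ["subject"]}, "#2"],
]
body = json.dumps(batch)  # POSTed once over HTTPS: one port, one round trip
print(len(json.loads(body)))  # 3
```

Because everything rides over a single HTTPS connection, this is what makes the protocol firewall- and mobile-friendly: no IMAP/SMTP/CalDAV port zoo, and far fewer round trips on a high-latency link.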
- “Devops is just a name for a Sysadmin without any experience”
- Let’s get back to Unix principles with tools
- Machine Learning Demo
- Filk of technical – Lied about being technical/gadget type.
- Randomness at 1MB/s
- Copied from OneRNG
- 4x4mm QFN package attached to USB key
- Driver in Linux 4.1 (good in 4.3)
- Just works!
- Building up smaller batches to test
- Hoping around $30
- Thanks to Speakers
- Clarification about the Speaker Gifts
- Thanks to Sponsors
- Raffle – $9680 raised
- SFC donations with “lcabythebay” in the comment field will be matched (twice) in next week or two.
- Thanks to Main Organisers from LCA President
- Linux.conf.au 2017
- January 16th-20th 2017
- At the Wrest Point casino convention centre. Accommodation on site and at student accommodation
- Thanks to various people
- hdmi2usb.tv is the video setup
Free as in cheap gadgets: the ESP8266 by Angus Gratton
- I missed the start of the talk but he was giving a history of the release and getting software support for it.
- Arduino for ESP8266 very popular
- 2015-2016 maturing
- Lots of development boards
- SparkFun ESP8266 Thing, Adafruit Huzzah, WeMos D1
- Common Projects
- Lots of lighting projects, addressable LED strips
- Wireless power monitoring projects
- Copy of common projects. Smoke alarm project
- ESPlant – speakers project built in Open Hardware Miniconf – solar powered gardening sensor
- Moodlight kickstarter
- Not a lot of documentation compared to other micro-controllers. 1/10 that of similar products
- Weird hardware behaviour. Unusual output
- Default baud rate 74880 bps
- Bad TLS – TLS v1.0, 1.1 only; RSA 512/1024, 2048 might work
- Other examples
- FOSS in ESP8266
- GCC, Lua, Arduino, MicroPython
- axTLS, LWIP, mac80211, wpa_supplicant
- Wrapped APIs, almost no source, mostly missing attribution
- Weird licenses on stuff
- Does this source matter?
- Anecdote: TLS random key same every time due to bad random function (later fixed). But still didn’t initially use the built-in random number generator.
- Reverse Engineering
- Wiki, Tools: foogod/xtobjdis, ScratchABit, radare2 (soon)
- esp-open-rtos – based on the old version that was under MIT
- mbedTLS – TLS 1.2 (and older) , RSA to 4096 and other stuff. Audited and maintained
- Working on a testing setup for regression tests
- For beginners
- Start with Arduino
- Look at dev board
- Hopefully other companies will see success and will bring their own products out
- but with more open licenses
- ESP32 is coming, probably 1y away from being good and ready
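One oddity from the notes above is worth unpacking: the strange default baud rate. A commonly cited explanation is that the boot ROM was written assuming a 40 MHz crystal, but most boards fit a 26 MHz one, so the ROM’s nominal 115200 bps console output appears scaled by 26/40 (treat the exact mechanism as an assumption; the arithmetic, at least, checks out):

```python
# The ESP8266 boot ROM assumes a 40 MHz crystal; most boards ship
# with 26 MHz, so the ROM's 115200 bps console comes out scaled
# by 26/40 -- which is exactly the odd 74880 bps rate observed.
NOMINAL_BAUD = 115200
CRYSTAL_ACTUAL_MHZ, CRYSTAL_ASSUMED_MHZ = 26, 40
observed = NOMINAL_BAUD * CRYSTAL_ACTUAL_MHZ // CRYSTAL_ASSUMED_MHZ
print(observed)  # 74880
```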
secretd – another take on securely storing credentials by Tollef Fog Heen
- Works for Fastly
- What is the problem?
- Code can be secret
- Configuration can be secret
- Credentials are secret
- Secrets start in the following and move to the next..
- directly in code
- then a configuration file
- then a pre-encrypted store
- then an online store
- Problems with stores
- Complex or insecure
- Manual work to re-encrypt
- Updating is hard
- No support for dev/prod split
- Requirements for a fix
- Dynamic environment support
- Central storage
- Policy based access controls, live
- APIs for updating
- Use Case
- Hardware (re)bootstrapping
- Hands-off/live handling
- PCI: auditing
- Machine might have no persistent storage
- pwstore – pre-encrypted
- chef-vault – pre-encrypted
- Hashicorp Vault – distributed, complex, TTL on secrets
- etcd – x509
- tree structure, keys are just strings
- positive ACLs
- PostgreSQL backend
- Apache Licensed
- Client -> json over ssh -> secret-shell -> unix socket -> secretd -> PostgreSQL
- Encrypting secrets on disk
- Admin tools/other UIs
- Tool integration
- Enrolment key support
- Why not SQLite? – Because I wanted a database. Postgres more directly supported the data structure I wanted, also type support
- Why not just use built-in Postgres security stuff? – Features didn’t exist a year ago; also it requires all users to exist as DB users.
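Two of the design points above, keys that are just path-like strings forming a tree, and positive-only ACLs, can be sketched in a few lines. All names here are invented for illustration and are not secretd’s actual API:

```python
# Toy model: keys are path-like strings ("prod/web/tls-key") forming
# a tree, and access is granted only via positive ACL entries -- no
# deny rules to reason about. Not secretd's real API.
class SecretStore:
    def __init__(self):
        self.secrets = {}   # key -> value
        self.acls = set()   # (principal, key_prefix) grants

    def grant(self, principal, prefix):
        self.acls.add((principal, prefix))

    def allowed(self, principal, key):
        return any(p == principal and key.startswith(prefix)
                   for p, prefix in self.acls)

    def put(self, principal, key, value):
        if not self.allowed(principal, key):
            raise PermissionError(key)
        self.secrets[key] = value

    def get(self, principal, key):
        if not self.allowed(principal, key):
            raise PermissionError(key)
        return self.secrets[key]

store = SecretStore()
store.grant("admin", "")            # admin may touch the whole tree
store.grant("web01", "prod/web/")   # web01 only its own subtree
store.put("admin", "prod/web/tls-key", "s3cret")
print(store.get("web01", "prod/web/tls-key"))  # s3cret
```

A nice property of positive-only ACLs is that policy questions reduce to “which grants cover this key?”, which keeps live policy evaluation simple.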
Keynote – Genevieve Bell
- Building the Future
- Lots of roles as an Anthropologist at Intel over last 15 years or so
- Vision of future from 1957 shows what the problems are in 1957 that the future would solve
- Visions of the future seem very clean and linear, in reality it is messy and myriad.
- ATM machine told her “Happy Birthday”
- Imagining “Have you tried turning it off and on again?” at smart city scale is kind of terrifying.
- Many people function well when they are offline, some people used to holiday in places with no cell reception
- Social structures like Sabbath to give people time offline, but devices want us to be always online
- Don’t want to always have seamless between devices, context matters. Want work/home/etc split
- Technology lays bare domestic habits that were previously hidden
- Who else knows what your household habits are -> Gossip
- Big Data
- Messy , incomplete, inaccurate
- Average human tells 6-200 lies per day
- 100% of Americans lie in online profiles
- Men lie about height, Women lie about weight
- More data does not equal more truth. More data just means more data
- May optimise for the wrong things (from the user’s point of view)
- Security and Privacy
- Conversation entwined with conversation about National Security
- Concepts different from around the world
- What is it like to release data under one circumstance and then to realise you have released it under several others
- Cost of memory down to zero, we should just store everything
- What are the usage models
- What if everything you ever did and said was just there, what if you can never get away from it. There are mental illnesses based on this problem
- What is changing? to whose advantage and disadvantage? what does this mean to related areas?
- Our solutions need to be human
- We are the architects of our future
- Explain engineers to the world? – Treated first year at Intel like it was Anthropology fieldwork. Disconnect between what people imagine technologists think/do and what they really do. Need to explain what we do better
Helicopters and rocket-planes by Andrew Tridgell
- The wonderful and crazy world of Open Autopilots
- Outback Challenge
- 90km/h for 45 minutes
- Search pattern for a lost bushwalker with UAV
- Drop them a rescue package
- 2016 is much harder: VTOL, get blood sample. Must do takeoff and landing remotely (30km from team).
- “Not allowed to get blood sample using a propeller”
- VTOL solutions – Helicopters and Quadplanes – tried both solutions
- Communication 15km away, 2nd aircraft as a relay
- Pure electric doesn’t have range. 100km/h for 1h
- “Flying vibration generators with rotating swords at the top”
- Hard to scale up which is needed in this case. 15cc motor, 2m blades, 12-14kg loaded
- Petrol engines efficient VTOL and high energy density
- Very precise control, good in high wind (competition can have ground wind up to 25 knots)
- Normal stable flight vibrates at 6G; showed an example where flight goes bad and starts vibrating at 30+ G within a few seconds due to a control problem (when the pitch controller was adjusted and then started a feedback loop)
- Normal plane with wings but 4 vertically pointing propellers added
- Long range, less vibration
- initially two autopilots plus one more co-ordinating
- electric for takeoff, petrol engine for long range forward flight.
- Hard to scale
- Quadplane v2
- Single auto-pilot
- avoid turning off quad motors before enough speed from forward motor
- Pure electric for all motors
- Forward flight with wings much more efficient.
- Options with scale-up to have forward motor as petrol
- Lohan rocket plane – Offshoot of The Register website
- Mission hasn’t happened yet
- Balloon takes plane to 20km, drops rocket and goes to Mach 2 in 8 seconds. Rocket glides back to earth under autopilot and lands at SpacePort USA
- 3d printed rocket. Needs to wiggle controls during ascent to stop them freezing up.
- This will be its first flight so it has an autotune mode to hopefully learn how to fly for the first time on the way down
- Hardware running Ardupilot
- Bebop drone and 3DR solo runs open autopilot software
- BBBmini fully open source kit
- Qualcomm Flight more locked down
- PXFMini for smaller ones
The world of 100G networking by Christopher Lameter
- Why not?
- Capacity needed
- Machines are pushing 100G to memory
- Everything requires more bandwidth
- Was 10 * 10G standards CFP Cxx
- New standard is 4 * 28Gs QSFP28 . compact and designed to replace 10G and 40G networking
- InfiniBand (EDR)
- Most mature to date, switches and NICs available
- Hopefully available in 2016
- NICs under dev, can reuse EDR adapter
- Redesigned to try to replace InfiniBand
- Comparison connectors
- QSFP28 smaller
- QSFP idea with splitter into 4 * 25G links for some places
- Standard complete in 2016; 50G out there but the standard doesn’t exist yet.
- QSFP is 4 cables
- 100G switches
- 100G x 32 or 50G x64 or 25G x 128
- Models being released this year, hopefully
- Keeping up
- 100G is just 0.01ns per bit , 150ns for 1500MTU packet, 100M packets/second, 50 packets per 10 us
- Hardware distributes packets between cores. Will need 60 cores to handle 100G in CPU; need to offload
- Having multiple servers (say 4) sharing a NIC using PCIe!
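The timing figures above can be sanity-checked with back-of-envelope arithmetic, assuming standard Ethernet framing (18 bytes of header/FCS on a 1500-byte payload, plus 20 bytes of preamble and inter-frame gap on the wire):

```python
# Back-of-envelope check of the 100G numbers quoted in the talk.
LINK_BPS = 100e9

bit_time_ns = 1e9 / LINK_BPS          # time to serialise one bit

def wire_bits(frame_bytes):
    # preamble (8) + inter-frame gap (12) add 20 bytes on the wire
    return (frame_bytes + 20) * 8

t_full_ns = wire_bits(1518) / LINK_BPS * 1e9   # full-size (1500 MTU) frame
pps_min = LINK_BPS / wire_bits(64)             # minimum-size frames

print(round(bit_time_ns, 3))    # 0.01
print(round(t_full_ns))         # 123
print(round(pps_min / 1e6))     # 149
```

So a bit really is 0.01ns, a full-size frame takes roughly 123ns on the wire (the talk’s ~150ns presumably includes some overhead), and minimum-size frames arrive at nearly 149M packets/second, the same order as the quoted 100Mpps.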
- How do you interface with these?
- Socket API
- Looking Ahead
- 100G is going to be a major link speed in data centers soon
- Software needs to mature especially the OS stack to handle bottlenecks
So, for tedious reasons, I was talking to Matthew Garrett about how he was born in Galway, the Republic of Ireland.
At LCA I attended a talk about Unikernels. Here are the reasons why I think that they are a bad idea:

Single Address Space
According to the Unikernel Wikipedia page a significant criterion for a Unikernel system is that it has a single address space. This gives performance benefits as there is no need to change CPU memory mappings when making system calls. But the disadvantage is that any code in the application/kernel can access any other code directly.
In a typical modern OS (Linux, BSD, Windows, etc) every application has a separate address space and there are separate memory regions for code and data. While an application can request the ability to modify its own executable code in some situations (if the OS is configured to allow that) it won’t happen by default. In MS-DOS and in a Unikernel system all code has read/write/execute access to all memory. MS-DOS was the least reliable OS that I ever used. It was unreliable because it performed tasks that were more complex than CP/M but had no memory protection so any bug in any code was likely to cause a system crash. The crash could be delayed by some time (EG corrupting data structures that are only rarely accessed) which would make it very difficult to fix. It would be possible to have a Unikernel system with non-modifiable executable areas and non-executable data areas and it is conceivable that a virtual machine system like Xen could enforce that. But that still wouldn’t solve the problem of all code being able to write to all data.
On a Linux system when an application writes to the wrong address there is a reasonable probability that it will not have write access and you will immediately get a SEGV which is logged and informs the sysadmin of the address of the crash.
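That containment is easy to demonstrate on Linux: deliberately dereference an unmapped address in a child process and observe that only that process dies, with the kernel reporting SIGSEGV to the parent rather than letting the bug silently corrupt anything else. A minimal sketch:

```python
import ctypes
import signal
from multiprocessing import Process

def crash():
    # Read from an unmapped (near-NULL) address. With per-process
    # memory protection this triggers SIGSEGV instead of silently
    # corrupting some other component's data.
    ctypes.string_at(1)

if __name__ == "__main__" or True:  # runs when executed directly
    p = Process(target=crash)
    p.start()
    p.join()
    # A negative exitcode means the child was killed by that signal.
    print(p.exitcode == -signal.SIGSEGV)
```

In a single-address-space Unikernel there is no equivalent boundary: the same wild read or write lands somewhere in the one shared image, and the failure surfaces later, far from its cause.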
When Linux applications have bugs that are difficult to diagnose (EG buffer overruns that happen in production and can’t be reproduced in a test environment) there are a variety of ways of debugging them. Tools such as Valgrind can analyse memory access and tell the developers which code had a bug and what the bug does. It’s theoretically possible to link something like Valgrind into a Unikernel, but the lack of multiple processes would make it difficult to manage.

Debugging
A full Unix environment has a rich array of debugging tools: strace, ltrace, gdb, valgrind and more. If there are performance problems then there are tools like sysstat, sar, iostat, top, iotop, and more. I don’t know which of those tools I might need to debug problems at some future time.
I don’t think that any Internet facing service can be expected to be reliable enough that it will never need any sort of debugging.

Service Complexity
It’s very rare for a server to have only a single process performing the essential tasks. It’s not uncommon to have a web server running CGI-BIN scripts or calling shell scripts from PHP code as part of the essential service. Also many Unix daemons are not written to run as a single process, at least threading is required and many daemons require multiple processes.
It’s also very common for the design of a daemon to rely on a cron job to clean up temporary files etc. It is possible to build the functionality of cron into a Unikernel, but that means more potential bugs and more time spent not actually developing the core application.
One could argue that there are design benefits to writing simple servers that don’t require multiple programs. But most programmers aren’t used to doing that and in many cases it would result in a less efficient result.
One can also argue that a Finite State Machine design is the best way to deal with many problems that are usually solved by multi-threading or multiple processes. But most programmers are better at writing threaded code so forcing programmers to use a FSM design doesn’t seem like a good idea for security.

Management
The typical server programs rely on cron jobs to rotate log files and monitoring software to inspect the state of the system for the purposes of graphing performance and flagging potential problems.
It would be possible to compile the functionality of something like the Nagios NRPE into a Unikernel if you want to have your monitoring code running in the kernel. I’ve seen something very similar implemented in the past, the CA Unicenter monitoring system on Solaris used to have a kernel module for monitoring (I don’t know why). My experience was that Unicenter caused many kernel panics and more downtime than all other problems combined. It would not be difficult to write better code than the typical CA employee, but writing code that is good enough to have a monitoring system running in the kernel on a single-threaded system is asking a lot.
One of the claimed benefits of a Unikernel was that it’s supposedly risky to allow ssh access. The recent ssh security issue was an attack against the ssh client if it connected to a hostile server. If you had a ssh server only accepting connections from management workstations (a reasonably common configuration for running servers) and only allowed the ssh clients to connect to servers related to work (an uncommon configuration that’s not difficult to implement) then there wouldn’t be any problems in this regard.
I think that I’m a good programmer, but I don’t think that I can write server code that’s likely to be more secure than sshd.

On Designing It Yourself
One thing that everyone who has any experience in security has witnessed is that people who design their own encryption inevitably do it badly. The people who are experts in cryptology don’t design their own custom algorithm because they know that encryption algorithms need significant review before they can be trusted. The people who know how to do it well know that they can’t do it well on their own. The people who know little just go ahead and do it.
I think that the same thing applies to operating systems. I’ve contributed a few patches to the Linux kernel and spent a lot of time working on SE Linux (including maintaining out of tree kernel patches) and know how hard it is to do it properly. Even though I’m a good programmer I know better than to think I could just build my own kernel and expect it to be secure.
I think that the Unikernel people haven’t learned this.
Law and technology: impedance mismatch by Michael Cordover
- IP lawyer
- Known as the EasyCount guy
- Lawyers and Politicians don’t get it
- Governing behaviour that is not well understood (especially by lawyers) is hard
- Some laws are passed under the assumption that they won’t always be enforced (eg jaywalking, speed limits). Pervasive monitoring may make this assumption obsolete
- Technology people don’t get the law either
- Good reasons for complexity of the law
- Technology isn’t neutral
- Legal detailed programmatic specifically
- Civil aviation
- Anonymous Data
- Personal information – info from which id can be worked out
- 100s of examples where law is vague and doesn’t well map to technology
- Unauthorised access
- The obvious, easy solution:
- Everybody must know about technology
- NEVER going to happen
- Just make a lot of contracts
- Copyright – works fairly well, eg copyleft
- TOS – works to restrict liability of service providers so services can actually be safely provided
- P3P – Platform for Privacy Preferences
- But doesn’t work well in multiple jurisdictions, small people against big companies, etc
- Laws that are fit for purpose
- An ISP is not an IRC server
- VOIP isn’t PSTN
- Focus on the outcome, sometimes
- A somewhat radical shift in legal approach
- It turns out the Internet is (sometimes) different
- United States vs Causby – 1946 case that said people don’t own the air above their property to infinity. Airplanes could fly above it.
- You can help
- Don’t ignore the law
- Don’t be too technical
- Don’t expect a technical solution
- Think about policy solutions
- Talk to everybody
Machine Ethics and Emerging Technologies by Paul ‘@pjf’ Fenwick
- Arrived late
- Autonomous cars
- Little private ownership of autonomous vehicles
- 250k people driving taxis
- 3.5 million truck drivers, plus more that depend on them
- Most of the cost is the end-to-end on a highway. Humans could do the hard last-mile
- Industrial revolution
- Lots of people put out of jobs
- Capital offence to harm machines
- We still have tailors
- But some jobs have been eliminated – eg Water bearer in cities
- Replacing humans with small amounts of code
- White collar jobs now being replaced
- If more and more people are getting put out of jobs and we live in a society that expects people to have jobs what can we do?
- Education to retrain
- We *are* working less: 1870 = 70h work week, 1988 = 40h work week
- Leisure has much increased: 44k hours -> 122k hours (shorter week + living longer)
- What do people do with more leisure?
- Pictures of cats!
- Increase in innovation
- How would the future work if machines are doing the vast majority of jobs?
- Technological dividend
- Basic income
- “Drones have really taken off in the last few years”
- Delivery drones
- Disaster relief
- Military drones – If autonomous then radio silent
- Solar powered drones with multi-day/week duration
- Good for environmental monitoring
- Enables anonymous warfare: somebody launches it and it kills some people, but you don’t know who to blame
- Machine Intelligence
- Watson getting better at cancer diagnosis and treatment plans than many doctors
- Please focus on the upsides of lethal autonomous robots – Okay with robots, less happy with taking the human out of the loop.
- Why is the work week 40 hours? – Conjecture by Paul – Culture says humans must work, work gives you value, and part time work is seen as much less important
Open Source Tools for Distributed Systems Administration by Elizabeth K. Joseph
- Tools that enable distributed teams to work
- Works day to day on Openstack
- How most projects do infrastructure
- Team or company manages it, or they just use GitHub
- Requests via mailing list or bug/ticketing system
- Priority determined by the core team
- Is there a better way – How Openstack is different – Openstack infrastructure team
- Host own git, wiki, ircbots, mailing lists, web servers and run them themselves
- All configs are open source and tracked in git
- Anyone can submit changes to our project.
- We all work remotely
- Openstack CI system
- 800+ projects
- All projects must work together
- changes can’t break master branch
- code must be clean
- testing must be completely automated
- Tools for CI (* are their own tools)
- Launchpad for Auth
- zuul* – gatekeeper
- Automated Test for infrastructure
- puppet parser validate, puppet lint, puppet application tests
- XML checkers
- Alphabetized files (because people forget the alphabet)
- Permissions on IRC channels
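The “alphabetized files” job above is the sort of check that is trivial to automate; here is a sketch of such a lint (invented here, not OpenStack’s actual implementation):

```python
# Sketch of an "is this file alphabetized?" lint, the kind of
# automated infrastructure test mentioned above. Invented for
# illustration; not OpenStack's actual CI job.
def first_unsorted_pair(lines):
    """Return the first out-of-order adjacent pair, or None if sorted."""
    for a, b in zip(lines, lines[1:]):
        if a.lower() > b.lower():
            return (a, b)
    return None

print(first_unsorted_pair(["apache", "gerrit", "zuul"]))  # None
print(first_unsorted_pair(["zuul", "gerrit"]))            # ('zuul', 'gerrit')
```

Run as a gate job, a check like this rejects a change before any human reviewer has to notice the mistake, which is exactly the point of automating the whole test suite.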
- Peer review means
- Multiple eyes on changes prior to merging
- Good infrastructure for developing new solutions
- No special process to go through commit access
- Trains us to be collaborative by default
- Since anyone can contribute, anyone can devote resources to it
- Gerrit in-line comments
- Automated deployments. Either puppet directly or via vcsrepo
- Can you really manage infrastructure via git commits
- Cacti – cacti.openstack.org
- Cacti graphs are public so anybody can check them
- No active monitoring
- so you can watch changes happening
- Had to change a little so secret stuff not public
- Fairly good since distributed team
- Not quite everything
- Need to look at logs
- Some stuff is manual
- Passwords need to be privately managed (but in private git repo)
- Some complicated migrations are manual
- Maintenance collaboration on Etherpad
- Via IRC various channels
- main + incident + sprint + weekly meetings
- channel/meeting logs
- In-person collaboration at Openstack design summit every 6 months
- And then there are timezones
- The first/root member in a particular region struggles to feel cohesion with the team
- Increased reluctance to land changes into production
- makes on-boarding slower
- Only solved by increasing coverage in that time-zone so they’re not alone
- Reason why no audio/video? – Not recorded or even hard to access if they are
- How to develop a “write documentation” culture – Make that person responsible to write docs so others can still handle it. Helps if it is really easy to do. Wikis never seem to work in practice; docs go through the same process as everything else (common workflow)
- Task visibility – was bugzilla + launchpad – trying storyboard but not working well.
Paul Fenwick posed a journey of questioning what the future might look like in 10,000 years' time, and whether what we're doing today is good for humanity.
- More and more white collar jobs are being automated.
- What are all these masses going to do with their leisure time?
- More leisure time means more innovation.
- Covered the benefits of drones.
- Covered the dark side of drone use.
- LARs (Lethal Autonomous Robots) are a significant issue.
- Enables anonymous warfare
- Long term target monitoring and execution
- Can be used for long term environmental monitoring.
Another excellent, informative and entertaining talk by Paul.

Updated:
Added the talk below.
- Switching from Processor centric computing to memory driven computing
- Described how the memory fabric works.
- Will be able to connect any computing node to the shared memory.
- Illustrated node assembly.
- Next prototype will interconnect 320 terabytes of memory accessible storage.
- Planning to build larger machines.
- Putting in facilities to protect the hardware from a compromised operating system.
- Showed how fabric attached memory connects.
- Linux is being ported to the machine.
- Linux with HPE changes.
- All work is being open sourced.
- Creating a new file system that allocates memory in 8GB units.
- Library File System (LFS)
- Currently focussing on Librarian, machine-wide shared memory allocator.
- Trying to provide a two level allocation scheme
- POSIX API.
- No sparse files.
- Locking is not global.
- Fabric attached memory is not cache coherent
- Read errors are signalled synchronously.
- Write errors are asynchronous and require a barrier.
- Went through all the areas where they're working on Free Software.
Jono Bacon Keynote
- Community 1.0 (ca 1998)
- Observational – No book on how to do it
- Organic – people just created them
- Technical Environment – Had to know C (or LaTeX)
- Community 2.0 (ca 2004, 2005)
- Wikipedia, Redhat, Openstack, Github
- Renaissance – Stuff got written down on how to do it
- Self Organising groups – GNOME, KDE, Apache foundation – push creation of tech and community
- Diversity – including of skills , non-technical people had a seat at the table and a role.
- Company Engagement – Started hiring community managers, sometimes didn’t work very well
- Community 3.0 ?
- “Thoughtful and productive communities make us as a species better”
- Access and power is growing exponentially
- But stuff around is changing
- Cellphones are access method for most
- Cloud computing
- 3D printers, drones, cloud, crowdfunding, Arduino
- Lots of channels to get things to everybody and everybody can participate
- “We need to empower diversity of both people and talent”
- Human brain has not had a upgrade in a long time
- Bold and Audacious Goals
- Openness is at the heart of all of these
- Open source in the middle of many
- Eg Drone
- Runs linux
- Open API
- “Open Source is where Society innovates”
- “Need to make great community leadership accessible to everybody”
- “Predictable collaboration – an aspirational goal where we won’t *need* community managers”
- Not just about technology
- We are all human.
- Tangible value vs Intangible value
- Tangible can be measured and driven to fix the numbers
- Intangible – trust, dignity
- System 1 thinking vs System 2 thinking
- Instant vs considered
- SCARF Model of thinking
- Status – clarity of relative importance, need people to be able to flow between them
- Certainty – Security and predictability
- Autonomy – People really want choices
- Relatedness – I got distracted by twitter, I’m sure it was important
- Fair – fairness
- Two Golden Rules
- We accomplish our goals indirectly
- We influence behaviour with small actions
- We need to concentrate on building an experience for people who join the community
- Community Workflow
- Communication – formal, informal? CoC? Tech to use?
- Release sceduled, support?
- How to participate, tech, hackthons
- Government structure
- Paths for different people
- New developers
- Core Developers
- Downstream Customers
- Opportunity vs Belonging
- Increasing signal-to-noise ratio – trolls are easy[er]; harder for people who are just not deft in communication. Mentorship can help
- Destructive communities (like 4chan) – how can technology be used to work against these? Leaders need to set examples. Make clear that abusive behaviour towards others is unacceptable. Won’t be able to build tools that completely remove bad behaviour. Hard to tell destructive vs direct automatically, but tools can augment human judgement.
- What about Linus-type people? – View is that even though it works for him and is okay with people he knows, viewed from outside it sets a bad example.
Using Persistent Memory for Fun and Profit by Matthew Wilcox
- What is it?
- Retains data without power
- NV-DIMMs available – often copy DRAM to flash when power lost
- Intel 3D XPoint shipping in 2017; will become more of a standard feature
- How could we use it?
- Total System persistence
- But the CPU cache is not backed up, so pending writes vanish
- Application level persistence
- Boot a new kernel but keep the running apps
- CPU cache still a problem
- A completely redesigned operating system to use it
- But we want to use in 2017
- A special purpose filesystem
- Implementation not that great
- A very fast block device
- Used as a very fast cache for apps that really need it. Not really general purpose
- Small modifications to existing file systems
- On top of ext2 (xip)
- Total System persistence
- How do we actually use it
- New CPU instructions (mostly to ensure things are flushed from the CPU cache)
- A special-purpose programming language shouldn’t be needed for interpreted languages, but for compiled code libraries might be needed
- NVML library
- Stuff built on NVML library so far.
- Red-Black tree, B-tree, other data-structures
- Key-value store
- Fuse file system
- Example MySQL storage engine
- In 2017 will we have mix of persistent and non-persistent RAM? – Yes . New Layer in the storage hierarchy
- Performance of 3D XPoint will be a little slower than DRAM but within the ballpark; various trade-offs with other characteristics
- Probably won’t have native crypto
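The load/store-plus-flush programming model described above can be loosely illustrated in Python. This is a minimal sketch using only the standard library: NVML itself is a C library, so an ordinary mmap'd file (with an invented name) stands in for NVDIMM space here; on a DAX-capable pmem filesystem the same pattern reaches persistent media directly, with the CPU cache-flush instructions the talk covers playing the role of `flush()` below.

```python
import mmap
import os
import struct
import tempfile

# An ordinary file stands in for a region of persistent memory.
path = os.path.join(tempfile.mkdtemp(), "counter.pmem")  # hypothetical name
SIZE = 4096

fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

# Load the stored counter, update it in place, then flush the dirty
# data back to stable media before relying on the new value.
(before,) = struct.unpack_from("<Q", buf, 0)
struct.pack_into("<Q", buf, 0, before + 1)
buf.flush()  # analogous to the cache-flush instructions the talk mentions

(after,) = struct.unpack_from("<Q", buf, 0)
buf.close()
os.close(fd)
```

The key point is that durability is explicit: until the flush, the update may exist only in the CPU cache, which is exactly the "total system persistence" gap noted above.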
Dropbox Database Infrastructure by Tammy Butow
- Dropbox for last 4 months, previously DigitalOcean, prev National Australia Bank
- Using MySQL for last 10 years. Now doing it FT.
- 400 Million customers
- Petabytes of data across thousands of servers
- In 2012 Dropbox had just 1 DBA, but it was already huge then.
- In 2016 it has grown to 9 people
- 6000 DB servers -> DB Proxy -> DB as a service (edgestore) -> memcache -> Web Servers (nginx)
- Talk – Go at Dropbox, Zviad Metreveli on YouTube
- Applications talk to edgestore, not directly to the database
- Vitess is a MySQL proxy (by YouTube) similar to what Dropbox wrote. Might move to that
- Percona 5.6
- Constantly upgrading (4 times in last year)
- DBmanager – service we manage mysql via
- Each cluster is primary + 2 replicas
- Use xtrabackup (to HDFS locally and S3)
- Tasks grow and take time
- Automating DB operations
- Web interface with standard operations and status of servers
- Cloning Screen
- Promotion Screen
- Create and restore backups
- WebUI gives you feedback and you can see how things are going. Don’t need magic command lines. Good for other teams to see stuff and do stuff (options right in front of them).
- Database job scheduling and prioritization. Promotion will take priority over anything else.
- Common logging, centralized server and nice gui that everyone can see
- Available on Dropbox's GitHub
- All requests and actions that need to be done by the team are visible
- Improving backup and restore speed.
- Auto-remediation (naoru) – up on github at some point
- Inventory Management
- Machine Database (MDB)
- Has tags for things like kernel versions
- Automated periodic tcpdump
- Tools to kill long running transactions
- List current queries running
- The Future
- Reliability, performance and cost improvements
- Config management
- Love the “Go Programming Language” by Kernighan
- List of Papers they love
- Using Percona, not MariaDB. They also shard rather than cluster DBs
- Big culture change from Bank to Dropbox – at the Bank they tried to decom old systems and reduce risk. At Dropbox everyone is very brave and pushing boundaries
- Machine database largely built automatically
- Predictive Analysis on hardware – Do some , lots of dashboards for hardware team, lifecycle management of hardware. Don’t hug servers. Hug the hardware class instead.
- Rollbacks are okay and should be easy. Always be able to roll back a change to get back to a good state.
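The scheduling point above (a promotion takes priority over anything else queued) can be sketched with a simple priority queue. The job names and priority values below are invented for illustration; they are not DBmanager's actual scheme.

```python
import heapq

# Lower number runs first; values are assumptions for the sketch.
PRIORITY = {"promote": 0, "clone": 1, "backup": 2}

queue = []
counter = 0  # tie-breaker keeps FIFO order within a priority level


def submit(job):
    """Queue a DB operation by its priority class."""
    global counter
    heapq.heappush(queue, (PRIORITY[job], counter, job))
    counter += 1


def next_job():
    """Pop the highest-priority (then oldest) queued job."""
    return heapq.heappop(queue)[2]


submit("backup")
submit("clone")
submit("promote")  # arrives last but jumps the queue

order = [next_job() for _ in range(3)]
```

With this ordering a promotion submitted after hours of queued backups still runs next, which matches the behaviour described in the talk.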
Jono Bacon spoke about how open communities are changing the world and how they may be improved in the future.
Community 1.0
- Early Free Software communities were built from observing other groups around them and figuring things out as they went along.
- Very high technical barrier of entry
- Allowed broader participation, with Wikipedia as an example.
- Knowledge had been built to allow people to start in the community from a common point
- Self organising groups
- Enabled greater diversity
- Companies began engaging with communities.
- How do we build effective reproducible communities?
- Thoughtful and productive communities advance the human race.
- Sharing the knowledge on how to build effective communities is going to be key.
- Covered ubiquitous computing growth, 3D printing, Arduino etc
- Crowd funding as one method of empowering consumers.
- Not just consumption but empowering people to have better lives, key.
- We need to empower diversity in all its forms.
- Openness is the greatest enabler.
- The principles of openness are flowing through all forms of technology, life and work.
- In a world worried about AI, we the people should be ensuring that it's open and taking control.
"Open Source is where society innovates" - Jono Bacon
- We need to crack predictable collaboration. Making great community leadership available to everyone.
- We can do better, we've only scratched the surface with our success thus far.
- For self respect we need to contribute. To contribute we need access.
- Jono realised that his role as community manager was to help other contributors be as effective as possible with their time when they're contributing.
- Discussed the difference between system 1 and system 2 thinking.
- However behavioural economics is hard to apply in practice.
- The principles can be pulled out and used though.
- Discussed SCARF model of social threats and rewards.
- From this model we can figure out how to put this into practice.
- We accomplish goals indirectly. Gave Boeing as an example.
- We influence behaviour with small actions. Recommended the book Lunch.
- Build comprehensive rewarding experiences.
- Need to make building a successfully structured community easy.
- Described experiences from different stakeholder perspectives.
The most important feeling we can create is a sense of belonging.
Jamie Wilkinson gave an overview of the Prometheus monitoring tool, based on the Borgmon white paper released by Google.
- Monitoring complexity was becoming expensive.
- Borgmon inverted the monitoring process
- Was heavily relied upon at Google.
- Prometheus, Bosun, Riemann are stream-based monitoring like Borgmon.
- Prometheus scrapes /varz
- Sends alerts as key value pairs
- Using shards for scaling.
- Defines targets in a YAML file.
- Data storage is in a global database in memory
- Use "higher level abstractions" to lower cost of maintenance.
- Use metrics, not checks
- Design alerts based on service objectives.
Another brilliant monitoring talk from Jamie.
The future belongs to unikernels. Linux will soon no longer be used in Internet facing production systems. by Andrew Stuart
- Stripped down OS running a single application
- Startup time only a few milliseconds
- Many of the current ones are language specific
- The Unikernel Zoo
- MirageOS – Must be written in OCaml
- Rump – Able to run general purpose software, run compiled posix applications, largely unmodified. Can have threading but not forking
- HalVM – Must be coded in Haskell
- Ling – Erlang
- Drawbridge – Microsoft research project
- OSv – More general purpose
- “Something about Unikernels seems to attract the fans of the ‘less common’ languages”
- plus a bunch more..
- Unikernels and security
- A bunch of people pointed out problems with, and alternative solutions to, what unikernels are trying to solve.
An introduction to monitoring and alerting with timeseries at scale, with Prometheus by Jamie Wilkinson
- SRE ultimately responsible for the reliability of google.com, less than 50% of time on ops
- History of monitoring; Nagios doesn’t scale, hard to configure
- Black-box monitoring for alerts
- White-box monitoring for charts
- Borgmon at Google, same tool used by many teams at Google
- Borgmon not Open Source, but instead we’ll look at Prometheus
- Several alternatives exist
- Alert design
- SLI – a measurement
- SLO – a goal
- SLA – economic incentives
- Every time you get paged you should react with sense of urgency
- Those that are not important shouldn’t be paged on, perhaps just to console
- Client exports an interface, usually HTTP; Prometheus polls /metrics on this server and gets a plain page with numbers
- Metrics are numbers, not strings
- Don’t need timestamps in the data
- Tell prometheus where the targets are in the “scrape_configs”
- All sorts of ways to find targets (DNS, etc)
- Variables all have labels: name, things like locations
- Rule evaluation
- recording rules
- tasks run built-in functions like summing data by label (e.g. all machines with the same region label), finding rate of change, etc
- Pretty graphs shown in demo
- Prometheus exporting daemon/proxy
- The language has the ability to support things like flapping detection/ignoring
- Grafana support for Prometheus exists
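The scrape model described above (Prometheus polls /metrics and gets a plain page of numbers) can be sketched with only the Python standard library. Real exporters normally use a client library; the metric name, labels and values below are made up for illustration.

```python
import http.server
import threading
import urllib.request

# Fake instrumentation data; a real service would count live requests.
requests_total = {"GET": 12, "POST": 3}


def render_metrics():
    """Render counters in the Prometheus text exposition format."""
    lines = ["# TYPE http_requests_total counter"]
    for method, count in sorted(requests_total.items()):
        lines.append('http_requests_total{method="%s"} %d' % (method, count))
    return "\n".join(lines) + "\n"


class MetricsHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass


# Serve on an ephemeral port and scrape ourselves once, as Prometheus would.
server = http.server.HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
page = urllib.request.urlopen(
    "http://127.0.0.1:%d/metrics" % server.server_port).read().decode()
server.shutdown()
```

Note how everything exposed is a labelled number, matching the "metrics, not checks" point: Prometheus attaches the timestamps itself at scrape time.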
Andrew Stuart gave an overview of the current state of unikernels:
Overview
- Unikernel zoo is increasing.
- MirageOS is the most mature at present and requires code written in OCaml.
- HalVM requires your code to be written in Haskell
- Ling requires your code to be written in Erlang.
- OSv is not language specific and very minimalist.
- rump kernels is essentially a very stripped down version of NetBSD and will run some other unikernels.
- Threading, not forking.
- Might be a Linux based unikernel coming.
- Suggests machines with user sign-in capabilities will become less common due to security risks.
- Unikernels are not invulnerable.
- MirageOS has a Bitcoin piñata.
Craige McWhirter: Sentrifarm - open hardware telemetry system for Australian farming conditions - Andrew McDonnell - LCA2016
- Low power
- Using radio for communication
- Local storage
- They entered Hackaday - actual entry
- Wanted to learn new skills
- Have fun
- Perhaps produce something useful
- There were lots of discarded prototypes
- So many cheap devices facilitating experimentation.
- Radio links were not quite as open as he would have liked.
- Used LoRa-based ISM-band radio
- Learned how much easier it is to have PCBs fabricated these days.
- Fabrication lead times can be about 6 months.
- 8 devices Carambola2 - Linux OpenWRT board
- Replaces need for Arduino IDE
- Open Source
- IDE agnostic
- Specifically MQTT-SN for low bandwidth
- Gateway runs OpenWRT
Andrew provided an overview of how the gateway processing model worked.
Backend
- Ubuntu 14.04
- Docker 1.8.3
- Carbon + Whisper + Graphite
- Custom Python scripts
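As a sketch of the kind of custom Python script that might feed the Carbon + Whisper + Graphite backend above: Carbon accepts one plaintext `path value timestamp` line per metric on TCP port 2003. The metric path, host and values below are assumptions for illustration, not details from the talk.

```python
import socket
import time


def format_metric(path, value, timestamp=None):
    """Render one line of Carbon's plaintext protocol."""
    if timestamp is None:
        timestamp = int(time.time())
    return "%s %s %d\n" % (path, value, timestamp)


def send_metric(path, value, host="127.0.0.1", port=2003):
    """Push a single datapoint to a Carbon daemon (host/port assumed)."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(format_metric(path, value).encode())


# Formatting only; sending needs a running Carbon daemon.
line = format_metric("sentrifarm.node1.temperature", 21.5, 1453766400)
```

Keeping the wire format this simple is part of why a handful of lines of glue code is enough to land telemetry in Graphite.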
Millions of lines of code and Andrew only had to write 7.
3D printed some components.
- Made a custom holder for the PCB
- Used OpenSCAD to design the component.
- Made the antenna himself with plans off the Internet.
- Got range up to 9km.
Andrew's project is an ingenious solution to a serious problem. I need one of these for myself!
Updated:
Added the talk itself below.