Planet Linux Australia
Since the last launch, Mark and I have put a lot of work into carefully integrating a rate 0.8 LDPC code developed by Bill, VK5DSP. The coded 115 kbit/s system is now working error free on the bench down to -112dBm, and can transfer a new hi-res image in just a few seconds. With a tx power of 50mW, we estimate a line of sight range of 100km. We are now out-performing commercial FSK telemetry chip sets using our open source system.
However, disaster struck soon after launch at the Mt Barker High School oval. High winds blew the payloads into a tree and three of them were chopped off, leaving the balloon and a lone payload to continue into the stratosphere. One of the payloads that hit the tree was our SSDV, tumbling into a neighboring back yard. Oh well, we’ll have another try in December.
Now I’ve been playing a lot of Kerbal Space Program lately. It’s got me thinking about vectors, for example in Kerbal I learned how to land two space craft at exactly the same point on the Mun (Moon) using vectors and some high school equations of motion. I’ve also taken up sailing – more vectors involved in how sails propel a ship.
The high altitude balloon consists of a latex, helium filled weather balloon a few meters in diameter. Strung out beneath that on 50m of fishing line are a series of “payloads”, our electronic gizmos in little foam boxes. The physical distance helps avoid interference between the radios in each box.
While the balloon was held near the ground, it was keeled over at an angle:
It’s tethered, and not moving, but is acted on by the force of the lift from the helium and drag from the wind. These forces pivot the balloon around an arc with a radius of the tether. If these forces were equal the balloon would be at 45 degrees. Today it was lower, perhaps 30 degrees.
When the balloon is released, it is accelerated by the wind until it reaches a horizontal velocity that matches the wind speed. The payloads will also reach wind speed and eventually hang vertically under the balloon due to the force of gravity. Likewise the lift accelerates the balloon upwards. This is balanced by drag to reach a vertical velocity (the ascent rate). The horizontal and vertical velocity components will vary over time, but let’s assume they are roughly constant over the duration of our launch.
Now today the wind speed was 40 km/hr, just over 10 m/s. Mark suggested a typical balloon ascent rate of 5 m/s. The high school oval was 100m wide, so the balloon would take 100/10 = 10s to traverse the oval from one side to the gum tree. In 10 seconds the balloon would rise 5×10 = 50m, approximately the length of the payload string. Our gum tree, however, rises to a height of 30m, and reached out to snag the lower 3 payloads…..
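The back-of-envelope sums above are simple enough to sketch in a few lines of Python (all figures are the rough estimates from the text):

```python
# Rough launch-day numbers from the text (all approximate)
wind_speed = 10.0    # m/s horizontal, about 40 km/h
ascent_rate = 5.0    # m/s vertical, Mark's typical figure
oval_width = 100.0   # m from release point to the gum tree
string_len = 50.0    # m of payload string under the balloon

t = oval_width / wind_speed                   # time to drift across the oval
balloon_height = ascent_rate * t              # balloon altitude at the tree
lowest_payload = balloon_height - string_len  # bottom payload: right at ground level

print(t, balloon_height, lowest_payload)  # 10.0 50.0 0.0
```

With a 30m tree, anything on the string below 30m altitude is in the snag zone.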
A few days ago while riding my bike I was involved in a spirited exchange of opinions with a gentleman in a motor vehicle. After said exchange he attempted to run me off the road, and got out of his car, presumably with intent to assault me. Despite the surge of adrenaline I declined to engage in fisticuffs, dodged around him, and rode off into the sunset. I may have been laughing and communicating further with sign language. It’s hard to recall.
I thought I’d apply some year 11 physics to see what all the fuss was about. I was in the middle of the road, preparing to turn right at a T-junction (this is Australia remember). While his motivations were unclear, his vehicle didn’t look like an ambulance. I am assuming he was not an organ-courier, and that there probably wasn’t a live heart beating in an icebox on the front seat as he raced to the transplant recipient. Rather, I am guessing he objected to me being in that position, as that impeded his ability to travel at full speed.
The street in question is 140m long. Our paths crossed half way along at the 70m point, with him traveling at the legal limit of 14 m/s, and me a sedate 5 m/s.
Let’s say he intended to brake sharply 10m before the T junction, so he could maintain 14 m/s for at most 60m. His optimal journey duration was therefore about 4 seconds. My monopolization of the taxpayer funded side-street meant he was forced to endure a 12 second journey. The 8 second difference must have seemed like an eternity, no wonder he was angry, prepared to risk physical injury and an assault charge!
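For fun, the same year 11 arithmetic in Python (numbers are the rough estimates above; 60/14 ≈ 4.3 s rounds down to the 4 seconds quoted):

```python
# Back-of-envelope journey times (all figures approximate)
crossing_point = 70.0   # m along the 140 m street where our paths crossed
brake_margin = 10.0     # m of hard braking before the T junction
car_speed = 14.0        # m/s, the legal limit
bike_speed = 5.0        # m/s, sedate

full_speed_run = crossing_point - brake_margin  # 60 m at full speed
optimal = full_speed_run / car_speed            # best case, about 4.3 s
behind_bike = full_speed_run / bike_speed       # 12 s stuck behind me

print(round(optimal, 1), behind_bike)  # 4.3 12.0
```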
My endeavor to produce a digital voice mode that competes with SSB continues. For a big chunk of 2016 I took a break from this work as I was gainfully employed on a commercial HF modem project. However since December I have once again been working on a 700 bit/s codec. The goal is voice quality roughly the same as the current 1300 bit/s mode. This can then be mated with the coherent PSK modem, and possibly the 4FSK modem for trials over HF channels.
I have diverged somewhat from the prototype I discussed in the last post in this saga. Lots of twists and turns in R&D, and sometimes you just have to forge ahead in one direction leaving other branches unexplored.
Samples (each compared at 1300 bit/s and 700C; the audio players are in the original post): hts1a, hts2a, forig, ve9qrp_10s, mmt1, vk5qi, vk5qi with 1% BER, cq_ref.
Note the 700C samples are a little lower level, an artifact of the post filtering as discussed below. What I listen for is intelligibility: how easy is the sample to understand compared to the reference 1300 bit/s samples? Is it muffled? I feel that 700C is roughly the same as 1300. Some samples are a little better (cq_ref), some (ve9qrp_10s, mmt1) a little worse. The artifacts and frequency response are different. But close enough for now, and worth testing over air. And hey – it’s half the bit rate!
I threw in a vk5qi sample with 1% random errors, and it’s still usable. No squealing or ear damage, but perhaps more sensitive than 1300 to the same BER. Guess that’s expected, every bit means more at a lower bit rate.
Some of the samples like vk5qi and cq_ref are strongly low pass filtered, others like ve9qrp are “flat” spectrally, with the high frequencies at about the same level as the low frequencies. The spectral flatness doesn’t affect intelligibility much but can upset speech codecs. Might be worth trying some high pass (vk5qi, cq_ref) or low pass (ve9qrp_10s) filtering before encoding.
Below is a block diagram of the signal processing. The resampling step is the key, it converts the time varying number of harmonic amplitudes to fixed number (K=20) of samples. They are sampled using the “mel” scale, which means we take more finely spaced samples at low frequencies, with coarser steps at high frequencies. This matches the log frequency response of the ear. I arrived at K=20 by experiment.
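The resampling idea can be sketched roughly as follows (not the actual Codec 2 code; the mel formula is the standard one, and the linear interpolation and grid endpoints are my assumptions):

```python
import numpy as np

def mel(f_hz):
    # standard mel scale: fine resolution at low frequencies, coarse at high
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)

def mel_inv(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def resample_to_mel(harmonic_amps_db, f0_hz, K=20, fs=8000.0):
    """Map a variable number of harmonic amplitudes (dB) to K mel-spaced samples."""
    L = len(harmonic_amps_db)
    harmonic_freqs = f0_hz * np.arange(1, L + 1)   # harmonics at multiples of F0
    grid_hz = mel_inv(np.linspace(mel(f0_hz), mel(fs / 2 - f0_hz), K))
    return np.interp(grid_hz, harmonic_freqs, harmonic_amps_db)

# e.g. a 100 Hz pitch gives 39 harmonics below 4 kHz, squashed down to K=20
amps_db = np.full(39, 40.0)
print(resample_to_mel(amps_db, 100.0).shape)  # (20,)
```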
The amplitudes and even the Vector Quantiser (VQ) entries are in dB, which is very nice to work in and matches the ear’s logarithmic amplitude response. The VQ was trained on just 120 seconds of data from a training database that doesn’t include any of the samples above. More work required on the VQ design and training, but I’m encouraged that it works so well already.
Here is a 3D plot of amplitude in dB against time (300 frames) and the K=20 frequency vectors for hts1a. You can see the signal evolving over time, and the low levels at the high frequency end.
The post filter is another key step. It raises the spectral peaks (formants) and lowers the valleys (anti-formants), greatly improving the speech quality. When the peak/valley ratio is low, the speech takes on a muffled quality. This is an important area for further investigation. Gain normalisation after post filtering is why the 700C samples are lower in level than the 1300 samples. Need some more work here.
The two stage VQ uses 18 bits, energy 4 bits, and pitch 6 bits for a total of 28 bits every 40ms frame. Unvoiced frames are signalled by a zero value in the pitch quantiser removing the need for a voicing bit. It doesn’t use differential in time encoding to make it more robust to bit errors.
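The bit budget checks out against the 700 bit/s target:

```python
# 700C frame bit budget from the text
vq_bits, energy_bits, pitch_bits = 18, 4, 6
frame_bits = vq_bits + energy_bits + pitch_bits  # 28 bits
frame_period_s = 0.040                           # one frame every 40 ms
bit_rate = frame_bits / frame_period_s

print(frame_bits, bit_rate)  # 28 700.0
```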
Days and days of very careful coding and checks at each development step. It’s so easy to make a mistake or declare victory early. I continually compared the output speech to a few Codec 2 1300 samples to make sure I was in the ball park. This reduced the subjective testing to a manageable load. I used automated testing to compare the reference Octave code to the C code, porting and testing one signal processing module at a time. Sometimes I would just printf rows of vectors from two versions and compare the two, old school but quite effective at spotting the step where the bug crept in.
The Octave simulation code can be driven by the scripts newamp1_batch.m and newamp1_fby.m, in combination with c2sim.
To try the C version of the new mode:

codec2-dev/build_linux/src$ ./c2enc 700C ../../raw/hts1a.raw - | ./c2dec 700C - - | play -t raw -r 8000 -s -2 -
Some thoughts on FEC. A (23,12) Golay code could protect the most significant bits of 1st VQ index, pitch, and energy. The VQ could be organised to tolerate errors in a few of its bits by sorting to make an error jump to a ‘close’ entry. The extra 11 parity bits would cost 1.5dB in SNR, but might let us operate at significantly lower in SNR on a HF channel.
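The 1.5dB figure is just the energy cost of sending the extra parity bits at the same transmit power. A sketch (frame sizes from the text, and treating all 11 parity bits as overhead on the 28 bit frame):

```python
import math

payload_bits = 28   # VQ + energy + pitch per 40 ms frame
parity_bits = 11    # (23,12) Golay parity protecting the MSBs
rate = payload_bits / (payload_bits + parity_bits)

# at fixed tx power, energy per payload bit drops by the code rate
snr_cost_db = -10.0 * math.log10(rate)
print(round(snr_cost_db, 2))  # 1.44
```

Of course the coding gain of the Golay code should more than pay this back on a noisy HF channel.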
Over the next few weeks we’ll hook up 700C to the FreeDV API, and get it running over the air. Release early and often – let’s find out if 700C works in the real world and provides a gain in performance on HF channels over FreeDV 1600. If it looks promising I’d like to do another lap around the 700C algorithm, investigating some of the issues mentioned above.
One of my favorite images below, just before impact with the ground. You can see the parachute and the tangled remains of the balloon in the background, the yellow fuzzy line is the nylon rope close to the lens.
Well done to the AREG club members (in particular Mark) for all your hard work in preparing the payloads and ground stations.
High Altitude Balloons is a fun hobby. It’s a really nice day out driving in the country with nice people in a car packed full of technology. South Australia has some really nice bakeries that we stop at for meat pies and donuts on the way. Yum. It was very satisfying to see High Definition (HD) images immediately after take off as the balloon soared above us. Several ground stations were collecting packets that were re-assembled by a central server – we crowd sourced the image reception.
Open Source FSK modem
Surprisingly we were receiving images while mobile for much of the flight. I could see the Eb/No move up and down about 6dB over 3 second cycles, which we guess is due to rotation or swinging of the payload under the balloon. The antennas used are not omnidirectional so the change in orientation of tx and rx antennas would account for this signal variation. Perhaps this can be improved using different antennas or interleaving/FEC.
Our little modem is as good as the Universe will let us make it (near perfect performance against theory) and it lived up to the results predicted by our calculations and tested on the ground. Bill, VK5DSP, developed a rate 0.8 LDPC code that provides 6dB coding gain. We were receiving 115 kbit/s data on just 50mW of tx power at ranges of over 100km. Our secret is good engineering, open source software, $20 SDRs, and a LNA. We are outperforming commercial chipsets with open source.
The work on our wonderful little FSK modem continues. Brady O’Brien, KC9TPA has been refactoring the code for the past few weeks. It is now more compact, has a better command line interface, and most importantly runs faster, so we are getting close to running high speed telemetry on a Raspberry Pi and fully embedded platforms.
I think we can get another 4dB out of the system, bringing the MDS down to -116dBm – if we use 4FSK and lose the RS232 start/stop bits. What we really need next is custom tx hardware for open source telemetry. None of the chipsets out there are quite right, and our demod outperforms them all so why should we compromise?
The project has had some interesting spin offs. The members of AREG are getting really interested in SDR on Linux resulting in a run on recycled laptops from ASPItech, a local electronics recycler!
Today I was part of the AREG team that flew Horus 37 – a High Altitude Balloon flight. The payload included hardware sending Slow Scan TV (SSTV) images at 115 kbit/s, based on the work Mark and I documented in this blog post from earlier this year.
It worked! Using just 50mW of transmit power and open source software we managed to receive SSTV images at bit rates of up to 115 kbit/s:
More images here.
Here is a screen shot of the Python dashboard for the FSK demodulator that Mark and Brady have developed. It gives us some visibility into the demod state and signal quality:
(View-Image on your browser to get a larger version)
The Eb/No plot shows the signal strength moving up and down over time, probably due to motion of our car. The Tone Frequency Estimate shows a solid lock on the two FSK frequencies. The centre of the Eye Diagram looks good in this snapshot.
Octave and C LDPC Library
There were some errors in received packets, which appear as stripes in the images:
On the next flight we plan to add a LDPC FEC code to protect against these errors and allow the system to operate at signal levels about 8dB lower (more than doubling our range).
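The “more than doubling” claim follows from free space path loss, where received power falls as the square of range (a line-of-sight assumption; real channels will vary):

```python
# 8 dB of extra link margin converted to a range multiplier,
# assuming free-space 1/r^2 path loss
gain_db = 8.0
range_factor = 10.0 ** (gain_db / 20.0)
print(round(range_factor, 2))  # 2.51
```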
Bill, VK5DSP, has developed a rate 0.8 LDPC code designed for the packet length of our SSTV software (2064 bits/packet including checksum). This runs with the CML library – C software designed to be called from Matlab via the MEX file interface. I previously showed how the CML library can be used in GNU Octave.
I like to develop modem algorithms in GNU Octave, then port to C for real time operation. So I have put some time into developing Octave/C software to simulate the LDPC encoded FSK modem in Octave, then easily port exactly the same LDPC code to C. For example the write_code_to_C_include_file() Octave function generates a C header file with the code matrices and test vectors. There are test functions that use an Octave encoder and C decoder and compare the results to an Octave decoder. It’s carefully tested and bit exact to 64-bit double precision! Still a work in progress, but has been checked into codec2-dev SVN:

ldpc_fsk_lib.m – Library of Octave functions to support LDPC over FSK modems
test_ldpc_fsk_lib.m – Test and demo functions for Octave and C library code
mpdecode_core.c – CML MpDecode.c LDPC decoder functions, re-factored
H2064_516_sparse.h – Sample C include file that describes Bill’s rate 0.8 code
ldpc_enc.c – Command line LDPC encoder
ldpc_dec.c – Command line LDPC decoder
drs232_ldpc.c – Command line SSTV deframer and LDPC decoder
This software might be useful for others who want to use LDPC codes in their Matlab/Octave work, then run them in real time in C. With the (2064,512) code, the decoder runs at about 500 kbit/s on one core of my old laptop. I would also like to explore the use of these powerful codes in my HF Digital Voice work.
SSTV Hardware and Software
Mark did a fine job putting the system together and building the payload hardware and its enclosure:
It uses a Raspberry Pi, with an FSK modulator we drive from the Pi’s serial port. The camera aperture is just visible at the front. Mark has published the software here. The tx side is handled by a single Python script. Here is the impressive command line used to start the rx side running:

#!/bin/bash
#
# Start RX using a rtlsdr.
#
python rx_gui.py &
rtl_sdr -s 1000000 -f 441000000 -g 35 - | csdr convert_u8_f | \
  csdr bandpass_fir_fft_cc 0.1 0.4 0.05 | \
  csdr fractional_decimator_ff 1.08331 | \
  csdr realpart_cf | csdr convert_f_s16 | \
  ./fsk_demod 2XS 8 923096 115387 - - S 2> >(python fskdemodgui.py --wide) | \
  ./drs232_ldpc - - | \
  python rx_ssdv.py --partialupdate 16
We have piped together a bunch of command line utilities on the Linux command line. A hardware analogy is a bunch of electronic boards on a work bench connected via coaxial jumper leads. It works quite well and allows us to easily prototype SDR radio systems on Linux machines from a laptop to a RPi. However down the track we need to get it all “in one box” – a single, cross platform executable anyone can run.
We did some initial tests with the LDPC decoder today but hit integration issues that flat lined our CPU. Next steps will be to investigate these issues and try LDPC encoded SSTV on the next flight, which is currently scheduled for the end of October. We would love to have some help with this work, e.g. optimizing and testing the software. Please let us know if you would like to help!
Mark’s blog post on the flight
AREG blog post detailing the entire flight, including set up and recovery
High Speed Balloon Data Link – Development and Testing of the SSTV over FSK system
All your Modems are belong to Us – The origin of the “ideal” FSK demod used for this work.
FreeDV 2400A – The C version of this modem developed by Brady and used for VHF Digital Voice
LDPC using Octave and CML – using the CML library LDPC decoder in GNU Octave
A friend of mine is developing a commercial OQPSK modem and was a bit stuck. I’m not surprised as I’ve had problems with OQPSK in the past as well. He called to run a few ideas past me and I remembered I had developed a coherent GMSK modem simulation a few years ago. Turns out MSK and friends like GMSK can be interpreted as a form of OQPSK.
A few hours later I had a basic OQPSK modem simulation running. At that point we sat down for a bottle of Sparkling Shiraz and some curry to celebrate. The next morning, slightly hung over, I spent another day sorting out the diabolical phase and timing ambiguity issues to make sure it runs at all sorts of timing and phase offsets.
So oqsk.m is a reference implementation of an Offset QPSK (OQPSK) modem simulation, written in GNU Octave. It’s complete, including timing and phase offset estimation, and phase/timing ambiguity resolution. It handles phase, frequency, timing, and sample clock offsets. You could run it over real world channels.
Its performance is bang on ideal for QPSK:
I thought it would be useful to publish this blog post as OQPSK Modems are hard. I’ve had a few run-ins with these beasts over the years and had headaches every time. This business about the I and Q arms being half a symbol offset from each other makes phase synchronisation very hard and does your head in. Here is the Tx waveform; you can see the half symbol time offset in the instant where the I and Q symbols change:
As this is unfiltered OQPSK, the Tx waveform is just the Tx symbols passed through a zero-order hold. That’s a fancy way of saying we keep the symbol values constant for M=4 samples then change them.
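A zero-order hold is trivial to express; a quick sketch in Python (not the oqsk.m code itself):

```python
import numpy as np

def zero_order_hold(symbols, M=4):
    # hold each symbol value constant for M samples
    return np.repeat(symbols, M)

print(zero_order_hold(np.array([1, -1, 1]), M=4))
# [ 1  1  1  1 -1 -1 -1 -1  1  1  1  1]
```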
There are very few complete reference implementations of high quality modems on the Internet, so it’s become a bit of a mission of mine. By “complete” I mean pushing past the textbook definitions to include real world synchronisation. By “high quality” I mean tested against theoretical performance curves with different channel impairments. Or even tested at all. OQPSK is a bit obscure and it’s even harder to find any details of how to build a real world modem. Plenty of information on the basics, but not the nitty gritty details like synchronisation.
The PLL and timing loop simultaneously provide phase and timing estimation. I derived it from a similar algorithm used for the GMSK modem simulation. Unusually for me, the operation of the timing and phase PLL loop is still a bit of a mystery. I don’t quite fully understand it. I would welcome more explanation from any readers who are familiar with it. Parts of it I understand (and indeed I engineered) – the timing is estimated on blocks of samples using a non-linearity and DFT, and the PLL equations I worked through a few years ago. It’s also a bit old school, I’m used to feed-forward type estimators and not something this “analog”. Oh well, it works.
Here is the phase estimator PLL loop doing its thing. You can see the Digital Controlled Oscillator (DCO) phase tracking a small frequency offset in the lower subplot:
Phase and Timing Ambiguities
The phase/timing estimation works quite well (great scatter diagram and BER curve), but can sync up with some ambiguities. For example, the PLL will lock on the actual phase offset plus integer multiples of 90 degrees. This is common with phase estimators for QPSK and it means your constellation has been rotated by some multiple of 90 degrees. I also discovered that combinations of phase and timing offsets can cause confusion. For example, a 90 degree phase shift swaps I and Q. As the timing estimator can’t tell I from Q, it might lock onto a sequence like …IQIQIQ… or …QIQIQI…, leading to lots of pain when you try to de-map the sequence back to bits.
So I spent a Thursday exploring these ambiguities. I ended up correlating the known test sequence with the I and Q arms separately, and worked out how to detect IQ swapping and the phase ambiguity. This was tough, but it’s now handling the different combinations of phase, frequency and timing offsets that I throw at it. In a real modem with unknown payload data a Unique Word (UW) of 10 or 20 bits at the start of each data frame could be used for ambiguity resolution.
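A toy illustration of the idea (my own sketch, not the oqsk.m implementation; the UW patterns are made up): correlate the received I arm against the known Unique Word patterns for both arms, and see which wins:

```python
import numpy as np

def detect_iq_swap(rx_syms, uw_i, uw_q):
    """Return True if the I and Q arms appear swapped.
    Correlates the received I arm against the known UW patterns for
    both arms; a 90 degree phase ambiguity lands the Q pattern on I."""
    n = len(uw_i)
    corr_i = abs(np.dot(rx_syms.real[:n], uw_i))
    corr_q = abs(np.dot(rx_syms.real[:n], uw_q))
    return corr_q > corr_i

# made-up 10 bit UW patterns for each arm (+/-1 symbols)
uw_i = np.array([1, -1, 1, 1, -1, 1, -1, -1, 1, 1])
uw_q = np.array([-1, 1, 1, -1, 1, 1, 1, -1, -1, 1])

rx_ok = uw_i + 1j * uw_q        # arms where they should be
rx_swapped = uw_q + 1j * uw_i   # 90 degree ambiguity swapped them

print(detect_iq_swap(rx_ok, uw_i, uw_q), detect_iq_swap(rx_swapped, uw_i, uw_q))
# False True
```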
The modem lacks an initial frequency offset estimator, but the PLL works OK with small freq offsets like 0.1% of the symbol rate. It would be useful to add an outer loop to track these frequency offsets out.
As it uses feedback loops it’s not super fast to sync, and is best suited to continuous rather than burst operation.
The timing recovery might need some work for your application, as it just uses the nearest whole sample. So for a small over-sample rate M=4, a timing offset of 2.7 samples will mean it chooses sample 3, which is a bit coarse, although given our BER results it appears unfiltered PSK isn’t too sensitive to timing errors. Here is the timing estimator tracking a sample clock offset of 100ppm; you can see the coarse quantisation to the nearest sample in the lower subplot:
For small M, a linear interpolator would help. If M is large, say 10 or 20, then using the nearest sample will probably be good enough.
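A linear interpolator between the two nearest samples is only a couple of lines; a sketch of the idea (not part of the current simulation):

```python
import numpy as np

def sample_at(rx, tau):
    """Linearly interpolate rx at fractional sample time tau."""
    i = int(np.floor(tau))
    frac = tau - i
    return (1.0 - frac) * rx[i] + frac * rx[i + 1]

# a timing estimate of 2.7 samples lands between samples 2 and 3
rx = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(sample_at(rx, 2.7))  # approximately 2.7
```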
This modem is unfiltered PSK, so it has broad lobes in the transmit spectrum. Here is the Tx spectrum at Eb/No=4dB:
The transmit filter is just a “zero-order hold” and the receive filter an integrator. Raised cosine filtering could be added if you want a narrower bandwidth. This will probably make it more sensitive to timing errors.
Like everything with modems, test it by measuring the BER. Please.
oqsk.m – GNU Octave OQPSK modem simulation
GMSK Modem Simulation blog post that was used as a starting point for the OQPSK modem.
Use #lcapapers to tell Linux.conf.au what you want to see in 2018
Michael Still and Michael Davies get the Rusty Wrench award
Karaoke – Jack Skinner
- Talk with random slides
- End to end encrypted communication system
- No entity owns your conversations
- Bridge between walled gardens (eg IRC and Slack)
- In very late beta, 450K user accounts
- Run or Write your own servers or services or client
Cooked – Pete the Pirate
- How to get into Sous Vide cooking
- Create home kit
- Beaglebone Black
- Rice cooker, fish tank air pump.
- Also use to germinate seeds
- Also use this system to brew beer
Emoji Archeology 101 – Russell Keith-Magee
- 1963 Happy face created
Continuously Delivering Security in the Cloud – Casey West
- This is a talk about operation excellence
- Why are system attacked? Because they exist
- Resisting Change to Mitigate Risk – It’s a trap!
- You have a choice
- Going fast with unbounded risk
- Going slow to mitigate risk
- Advanced Persistent Threat (APT) – The breach that lasts for months
- Successful attacks have
- Leaked or misused credentials
- Misconfigured or unpatched software
- Changing very little slowly helps all three of the above
- A moving target is harder to hit
- Cloud-native operability lets platforms move faster
- Composable architecture (serverless, microservices)
- Automated Processes (CD)
- Collaborative Culture (DevOps)
- Production Environment (Structured Platform)
- The 3 Rs
- Rotate credentials every few minutes or hours
- Credentials will leak, Humans are weak
- “If a human being generates a password for you then you should reject it”
- Computers should generate it, every few hours
- Repave every server and application every few minutes/hours
- Implies you have things like LBs that can handle servers adding and leaving
- Container lifecycle
- Note: No “change” step
- A server that doesn’t exist isn’t being compromised
- Regularly blow away running containers
- Repave ≠ Patch
- uptime <= 3600
- Repair vulnerable runtime environments every few minutes or hours
- What stuff will need repair?
- Runtime Environments (eg rails)
- Operating Systems
- The Future of security is build pipelines
- Try to put credential rotation and upstream imports into your builds
- Embracing Change to Mitigate Risk
- Less of a Trap (in the cloud)
- Lenovo Thinkpad X230T
- Bought Aug 2013
- Original capacity 62 Wh – 5 hours at 12W
- Capacity down to 1.9Wh – 10 minutes
- 45N1079 replacement bought
- DRM on laptop claimed it was not genuine and refused to recharge it.
- Batteries talk SBS protocol to laptop
- SMBus port and SMClock port
- Throw Away
- Replace Cells
- Easy to damage
- Might not work
- Hack firmware on battery
- Talk at DEFCON 19
- But this is different model from that
- Couldn’t work out how to get to firmware
- Added something in between
- Update the firmware on the machine
- Embedded Controller (EC)
- Looking through the firmware for Battery Authentication
- Found routine that looked plausible
- But other stuff was encrypted
- EC Update process
- BIOS update puts EC update in spare flash memory area
- Afterwards the BIOS grabs that and applies the update
- Pulled apart the BIOS, found EcFwUpdateDxe.efi routine that updates the EC
- Found that stuff sent to the EC was still encrypted.
- Decryption done by flasher program
- Flasher program
- Encrypted itself (decrypted by the current firmware)
- JTAG interface for flashing debug
- Physically difficult to get to
- Luckily Russian Hackers have already grabbed a copy
- The Decryption function in the Flasher program
- Appears to be blowfish
- Found the key (in expanded form) in the firmware
- Enough for the encryption and decryption
- Outer checksum checked by BIOS
- Post-decryption sum – checked by the flasher (bricks EC if bad)
- Section checksums (also brick it)
- noop the checks in code
- noop another check that sometimes failed
- Different error message
- Found a second authentication process
- noop out the 2nd challenge in the BIOS
- Posted writeup, posted to hacker news
- 1 million page views
- Uploaded code to github
- Other people doing stuff with the embedded controller
- No longer works on latest laptops, EC firmware appears to be signed
- Anything can be broken with physical access and significant determination
- Australian Elections use a lot of software
- Encoding and counting preferential votes
- For voting in polling places
- For voting over the internet
- How do we know this software is correct
- The paper ballot box is engineered around a series of problems
- In the past people brought their own voting paper
- The Australian Ballot used in many places (eg NZ)
- The French use a different method with envelopes and glass boxes
- The US has had lots of problems and different ways
- Four cases studies in Aus
- vVote: Victoria
- Vic state election 2014
- 1121 votes for overseas Australians voting in Embassies etc
- Based on Pret a Voter
- You can verify that what you voted was what went through
- Source code on bitbucket
- Crypto signed, verified, open source, etc
- Not going forward
- Didn’t get the electoral commissions input and buy-in.
- A little hard to use
- iVote: NSW and WA
- 280,000 votes over the Internet in the 2015 NSW state election (around 5-6% of total votes)
- Vote on a device of your choosing
- Vote encrypted and send over Internet
- Get receipt number
- Exports to a verification service. You can telephone them, give them your number and they will read back your votes
- Website used 3rd-party analytics provider with export-grade crypto
- Vulnerable to injection of content, votes could be read or changed
- Fixed (after 66k votes cast)
- NSW iVote really wasn’t verifiable
- About 5000 people called into service and successfully verified
- How many tried to verify but failed?
- Commission said 1.7% of electors verified and none identified any anomalies with their vote (Mar 2015)
- How many tried and failed? “in the 10s” (Oct 2015)
- Parliament asked how many failed? Seven or five (Aug 2016)
- How many failed to get any vote? 627 (Aug 2016)
- This is a failure rate of about 10%
- It is believed it was around 200 unique (later in 2016)
- Vote Counting software
- Errors in NSW counting
- In the NSW Legislative Council count, redistributed votes are selected at random
- No source code for this
- Use same source code for lots of other elections
- Re-ran some of the votes, found randomness could change results. Found one that most likely cost somebody a seat, but not discovered till 4 years later.
- Generate the random key publicly
- Open up the source code
- The electoral people didn’t want to do this.
- In the 2016 local government count we found 2 more bugs
- One candidate should have won with 54% probability but didn’t
- The Australian Senate Count
- AEC consistently refuses to reveal the source code
- The Senate data is released; you can redo the count yourself and any bugs will become evident
- What about digitising the ballots?
- How would we know if that wasn’t working?
- Only by auditing the paper evidence
- The Americans have a history of auditing the paper ballots
- But the Australian vote is a lot more complex so everything is not 100% yet
- Stuff is online
It is with a little sadness, but a lot of pride that I announce my retirement from GovHack, at least retirement from the organising team. It has been an incredible journey with a lot of amazing people along the way and I will continue to be its biggest fan and supporter. I look forward to actually competing in future GovHacks and just joining in the community a little more than is possible when you are running around organising things! I think GovHack has grown up and started to walk, so as any responsible parent, I want to give it space to grow and evolve with the incredible people at the helm, and the new people getting involved.
Just quickly, it might be worth reflecting on the history. The first “GovHack” event was a wonderfully run hackathon by John Allsopp and Web Directions as part of the Gov 2.0 Taskforce program in 2009. It was small with about 40 or so people, but extremely influential and groundbreaking in bringing government and community together in Australia, and I want to thank John for his work on this. You rock! I should also acknowledge the Gov 2.0 Taskforce for funding the initiative, Senator at the time Kate Lundy for participating and giving it some political imprimatur, and early public servants who took a risk to explore new models of openness and collaboration such as Aus Gov CTO John Sheridan. A lot of things came together to create an environment in which community and government could work together better.
Over the subsequent couple of years there were heaps of “apps” competitions run by government and industry. On the one hand it was great to see experimentation; however, unfortunately, several events did silly things like suing developers for copyright infringement, including NDAs for participation, or setting actual work for development rather than experimentation (which arguably amounts to just getting free labour). I could see the tech community, my people, starting to disengage and become entirely and understandably cynical of engaging with government. This would be a disastrous outcome because government needs geeks. The instincts, skills and energy of the tech community can help reinvent the future of government so I wanted to right this wrong.
In 2012 I pulled together a small group of awesome people (some from that first GovHack event, some from BarCamp, some I just knew), and we asked John if we could use the name (thank you again John!) and launched a voluntary, community-run, annual and fun hackathon, by hackers for hackers (and if you are concerned by that term, please check out what a hacker is). We knew if we did something awesome, it would build the community up, encourage governments to open data, show off our awesome technical community, and provide a way to explore tricky problems in new and interesting ways. But we had to make it an awesome event for people to participate in.
It has been wonderful to see GovHack grow from such humble origins to the behemoth it is today, whilst also staying true to the original purpose, and true to the community it serves. In 2016 (for which I was on maternity leave) there were over 3000 participants in 40 locations across two countries with active participation by Federal, State/Territory and Local Governments. There are always growing pains, but the integrity of the event and commitment to community continues to be a huge part of the success of the event.
In 2015 I stepped back from the lead role onto the general committee, and Geoff Mason did a brilliant job as Head Cat Herder! In 2016 I was on maternity leave and watched from a distance as the team and event continued to evolve and grow under the leadership of Richard Tubb. I feel now that it has its own momentum, strong leadership, an amazing community of volunteers and participation and can continue to blossom. This is a huge credit to all the people involved, to the dedicated national organisers over the years, to the local organisers across Australia and New Zealand, and of course, to all the community who have grown around it.
A few days ago, a woman came up to me at linux.conf.au and told me about how she had come to Australia not knowing anyone, and gone to GovHack after seeing it advertised at her university, and she made all her friends and relationships there and is so extremely happy. It made me teary, but also was a timely reminder. Our community is amazing. And initiatives like GovHack can be great enablers for our community, for new people to meet, build new communities, and be supported to rock. So we need to always remember that the projects are only as important as how much they help our community.
I continue to be one of GovHack’s biggest fans. I look forward to competing this year and seeing where current and future leadership takes the event and they have my full support and confidence. I will be looking for my next community startup after I finish writing my book (hopefully due mid year :)).
If you love GovHack and want to help, please volunteer for 2017, consider joining the leadership, or just come along for fun. If you don’t know what GovHack is, I’ll see you there!
Keeping Linux Great
- Previous keynotes have posed questions; I’ll pose answers
- What is the future of open source software? It has no future
- FLOSS is yesterday’s gravy
- Based on where the technology is today. How would FLOSS work with punch cards?
- Other people have said similar things
- Software, Linux and similar terms are all trending down on Google Trends
- But “app” is going up
- Small pieces loosely joined
- Linux used to be great because you could pipe stuff to little programs
- That is what is happening to software
- Example – share a page to another app in a mobile interface
- All apps no longer need to send mail, they just have to talk to the mail app
- So What should you do?
- Vendor all your dependencies: just copy everyone else’s code into your repo (and list their names if it is BSD) so you can ship everything in one blob (e.g. Android)
- Components must be >5 million or >20 million LOC; there are only a handful of them
- At the other end, apps are smaller since they can depend on the OS or other apps for lots of functionality, so they don’t have to write it themselves.
- Example node with thousands of dependencies
- App Freedom
- “Advanced programming environments conflate the runtime with the devtime” – Bret Victor
- Open Source software rarely does that
- “It turns out that Object Orientation didn’t work out; it is another legacy we are stuck with”
- Having the source code is nice but it is not a requirement. Access to the runtime is what you want. You need to get it where people are using it.
- Liberal Software
- But not everyone wants to be a programmer
- 75% comes from 6 generic web applications (collection, storage, reservation, etc)
- A lot of functionality requires big data or huge amounts of machines or is centralised so open sourcing the software doesn’t do anything useful
- If it was useful it could be patented, if it was not useful but literary then it was just copyright
Open Source Accelerating Innovation – Allison Randal
- Story of Stallman and the printer
- People don’t talk about the context of the story
- Stallman was living in a free software domain; proprietary software was creeping in
- Software only became subject to copyright in early 80s
- First age of software – 1940s – 1960s
- Software was low value
- Software was all free and open, given away
- Precursor – The 1970s
- Middle Age of Software – 1980s
- Start of Windows, Mac, Oracle and other big software companies
- Also start of GNU and BSD
- Who Leads?
- Proprietary software was seen as the innovator, and always would be
- Free Software was seen to be always chasing after Windows
- The 2000s
- Free Software caught up with proprietary
- Used by big companies
- “Open Source” name adopted
- dot-com bubble had burst
- Web 2.0
- Economic necessity, everyone else getting it for free
- Collaborative Process – no silver bullet but a better chance
- Innovations lead by open source
- Software Freedoms
- About control over our material environment
- If you don’t have the other freedoms then you don’t have a free society
- Modern Age of Software
- Companies in 2010: 42% used OS software; in 2015: 78%
- Using Open Source is now just table stakes
- Competitive edge for companies is participating in OS
- More participation pushes innovation even faster
- Now What?
- The New innovative companies
- Amazing experiences
- Augment Workers
- Deliver cool stuff to customers
- Use Network effects, Brand names
- Businesses making contribution to society
- Need to look at software that doesn’t just cover commercial use cases.
- Next Phase
- Myopic monocultures are a risk because they miss the dangers
- Be empowered to change the rules for the better
Surviving the Next 30 Years of Free Software – Karen M. Sandler
- We’re not getting any younger
- Software Relicensing
- Need to get approval of authors to re-license
- Has had to contact surviving spouse and get them to agree to re-license the code
- One survivor wanted payment. Didn’t understand that code would be written out of the project.
- There are surely other issues that we have not considered
- Copyright Assignment is a way around it
- But not everybody likes that.
- Bequeathment doesn’t work
- In some jurisdictions copyrights have to be assessed for their value before being transferred. Taxes could be owed
- Who is your next of Kin?
- They might not share your OS values, or even think of them
- Need perpetual care of copyrights
- Debian Copyright Aggregation Projects
- A Trust
- Assign copyrights today, will give you back the rights you want but these expire on your death
- Would be a registry for free software
- Companies could participate too
- Recognize the opportunity with age
- A lot of people with a lot of spare time
All of these are available as Kindle books, but I’m sure you can get 3D copies too:
The Five Dysfunctions of a Team: A Leadership Fable by Patrick M. Lencioni
Leading Change by John P. Kotter
Who Says Elephants Can’t Dance? by Louis V. Gerstner Jr.
Nonviolent Communication: A Language of Life by Marshall B. Rosenberg and Arun Gandhi
Content as a driver of change: then and now – Lana Brindley
- Humans have always told stories
- Cave Drawings
- Australian Indigenous art is the oldest continuous art in the world
- Stories of extinct mega-fauna
- Stories of morals but sometimes also funny
- Early Written Manuals
- We remember the Eureka
- Religious Leaders
- The Bible was the only reproduced book, restricted to the clergy
- Fairy Tales
- Charles Perrault versions.
- The Brothers Grimm
- Cautionary tales for adults
- Very gruesome in the originals and many versions
- Easiest and entertaining way for illiterate people to share moral stories
- Master and Apprentice
- Cheap Labour and Learn a Trade
- Journals and Letters
- In the early 19th century letter writing started happening
- Recipe Books
- Paper Manuals
- Traditionally the proper method for technical docs
- Printed version will probably go away
- Digital form may live on
- Training Courses
- Face to face training has its benefits
- Online is where technical stuff is moving
- Online Books
- Online version of a printed book
- Designed to be read from beginning to end, TOC, glossary, etc
- Quite common
- Data Typing (DITA)
- Break down the content into logical pieces
- Store in a database
- Mix on the fly
- Doing this sort of thing since the 1960s and 1970s
- Single Sourcing
- Walked away from old idea of telling a story
- Look at how people consumed and learnt difficult concepts
- Deliver the same content many ways (beginner user, advanced, reference)
- Chunks of information we can deliver however we like
- User-Side Content Curation
- Organised like a wikipedia article
- Imagine a site listing lots of cars for sale; the filters curate the content
- What comes next?
- Large datasets and let people filter
- Power going from producers to consumers
- Consumers want to filter themselves, not leave the producers to do this
- References and further reading for talk
- Free and open source software suffers from poor usability
- We’ve struggled with open source software, heard devs talk about users with contempt
- We define users by what they can’t do
- How do I hate thee? Let me count the ways
- Why were we being made to feel stupid when we used free software
- Software is “made by me for me”, just for brainiac me
- Lots of stories about stupid users. Should we be calling our users stupid?
- We often talk/draw about users as faceless icons
- Take pride in having prickly attitudes
- Whiney, entitled and demanding
- We wouldn’t want some of them as friends
- Not talk about those sort of users
- Lets Chat about chat
- Slack – used by OS projects, but not the freest: proprietary
- Better in many ways: less friction
- Steep Learning curves
- How long to get to the level of (a) stop hating it? (b) kicking ass?
- How do we get people over that level as quickly as possible
- They don’t want to be badass at using your tool. They want you to be badass at what using your tool allows them to do
- Badass: Making Users Awesome – Kathy Sierra
- Perfect is the enemy of the good
- Understand who your users are; see them as people like your friends and colleagues; not faceless icons
The Vulkan Graphics API, what it means for Linux – David Airlie
- What is Vulkan
- Not OpenGL++
- From Scratch, Low Level, Open Graphics API
- Loader (Mostly just picks the driver)
- Layers (sometimes optional) – separate from the drivers.
- Application Bug fixing
- Default GPU selection
- Drivers (ICDs)
- Open Source test Suite. ( “throw it over the wall Open Source”)
- Why a new 3D API
- OpenGL is old, from 1992
- OpenGL Design based on 1992 hardware model
- State machine has grown a lot as hardware has changed
- Lots of stuff in it that nobody uses anymore
- Some ideas were not so good in retrospect
- Single context makes multi-threading hard
- Sharing context is not reliable
- Orientated around windows, off-screen rendering is a bolt-on
- GPU hardware has converged to just 3-5 vendors with similar hardware. Not as much need to hide things
- Vulkan moves a lot of stuff up to the application (or more likely the OS graphics layer like Unity)
- Vulkan gives applications access to the queues if they want them.
- Shading Language – SPIR-V
- Binary format, separate from Vulkan, also used by OpenGL
- Write shaders in HLSL or GLSL and they get converted to SPIR-V
- Driver Development
- Almost no error checking needed in the driver, since it is done in the validation layer
- Simpler to explicitly build command stream and then submit
- Linux Support
- Closed source Drivers
- AMD (amdgpu-pro) – promised open source “real soon now … a year ago”
- Open Source
- Intel Linux (anv) – available on release day; 3.5 people over 8 months
- SPIR-V -> NIR
- Vulkan X11/Wayland WSI
- anv Vulkan <– Core driver, not sharable
- NIR -> i965 gen
- ISL Library (image layout/tiling)
- radv (for AMD GPUs)
- Dave has been working on it since early July 2016 with one other guy
- End of September Doom worked.
- One Benchmark faster than AMD Driver
- Valve hired someone to work on the driver.
- Similar model to Intel anv driver.
- Works on the few Vulkan games, working on SteamVR
Building reliable Ceph clusters – Lars Marowsky-Brée
- Storage Project
- Multiple front ends (S3, Swift, Block IO, iSCSI, CephFS)
- Built on RADOS data store
- Software Defined Storage
- Commodity servers + ceph + OS + Mngt (eg Open Attic)
- Makes sense at 4+ servers with 10 drives each
- Metadata service
- CRUSH algorithm to spread out the data, no centralised table (client goes directly to data)
- Access Methods
- Use only what you need
- RADOS Block devices <– most stable
- S3 (or Swift) via RadosGW <– Mature
- CephFS <— New and pretty stable; avoid metadata-intensive workloads
- Introducing Dependability
- Most outages are caused by Humans
- At Scale everything fails
- The Distributed systems are still vulnerable to correlated failures (eg same batch of hard drives)
- Advantages of heterogeneity – everything breaks differently
- Homogeneity is unsustainable
- Failure is inevitable; suffering is optional
- Prepare for downtime
- Test if system meets your SLA when under load and when degraded and during recovery
- How much availability do you need?
- An extra nine will double your price
- A Bag full of suggestions
- Embrace diversity
- Auto recovery requires a >50% majority
- 3 suppliers?
- Mix arch and stuff between racks/pods and geography
- Maybe you just go with manually added recovery
- Hardware Choices
- Vendors have reference architectures
- Hard to get vendors to mix; they don’t like that, and there are fewer docs.
- Hardware certification reduces the risk
- Small variations can have huge impact
- Customer bought network card and switch one up from the ref architecture. 6 months of problems till firmware bug fixed.
- How many monitors do I need?
- Not performance critical
- 3 is usually enough as long as well distributed
- Big envs maybe 5 or 7
- Don’t converge these (as VMs) with other types of nodes
- Avoid Desktop Disks and SSDs
- Storage Node sizing
- A single node should not be more than 10% of your capacity
- You need spare capacity at least as big as a single node (to recover after a failure)
- Erasure encoding gives more durability and a higher percentage of disk used
- But recovery a lot slower, high overhead, etc
- Different strokes for different pools
- Network cards, different types, cross connect, use last years cards
- Gateways: tests okay under failure
- Config drift: Use config mngt (puppet etc)
- Perf as system ages
- SSD degradation
- Latest software is always the best
- Usually good to update
- Can do rolling upgrades
- But still test a little on a staging server first
- Always test on your system
- Don’t trust metrics from vendors
- Test updates
- test your processes
- Use OS to avoid vendor lock in
- Disaster will strike
- Have backups and test them and recoveries
- Avoid Complexity
- Be aggressive in what you test
- Be conservative in what you deploy: only what you need
- Q: Minimum size?
- A: Not worth it if you can fit on a single server
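The CRUSH point above is worth a sketch: Ceph clients compute an object’s placement from its name alone, so there is no central lookup table to query or to become a bottleneck. The toy Python below uses rendezvous hashing to get the same property; it is an illustration only, not the real CRUSH algorithm, and the OSD names and replica count are made up:

```python
import hashlib

def toy_placement(obj_name, osds, replicas=3):
    """Deterministically pick `replicas` distinct OSDs for an object.

    Every client runs the same pure function, so no centralised table
    is needed -- the client can go straight to the data. (A toy stand-in
    for CRUSH: no failure domains, weights, or rebalancing.)
    """
    # Rank OSDs by a hash of (object, osd) and take the top `replicas`.
    ranked = sorted(
        osds,
        key=lambda osd: hashlib.sha256(f"{obj_name}:{osd}".encode()).hexdigest(),
    )
    return ranked[:replicas]

osds = [f"osd.{i}" for i in range(10)]
placement = toy_placement("rbd_data.1234", osds)
print(placement)  # same answer on every client, every time
```

Because the function is pure and deterministic, every client independently computes the same placement; real CRUSH additionally handles device weights, failure domains, and rebalancing when the cluster map changes.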
- Is it alright to compromise or even deliberately ignore the happiness of maintainers so that we can enjoy free software?
- Huge growth in usage and downloads of Open Source software
- 2/3s of popular open source projects on github are maintained by one or two people
- Why so few?
- Style has changed, lots of smaller projects
- Being a maintainer isn’t glamorous or fun most of the time
- 1% are creating the content that 99% of people consume
- “Rapid evolution [..] poses the risk of introducing errors faster than people can fix them”
- Consumption scales for most things, but not for open source, because it creates more work for the maintainer
- “~80% of contributors on github don’t know how to solve a merge conflict”
- People see themselves as users of OS software, not potential maintainers – examples of rants by users against maintainers and the software
- “Need maintainers, not contributors”
- “Helping people over their first pull request, not helping them triage issues”
- Why are we not talking about this?
- Lets take a trip back in History
- Originally Stallman said Free software was about freedom, not popularity. eg “as is” disclaimer of warranty
- Some people create software sometimes.
- Debian Social Contract, 4 freedoms, etc places [OS / Free] software and users first, maintainers often not mentioned.
- Orientated around the user not the producer
- Four Freedoms of OS producers
- Decide to participate
- Say no to contributions or requests
- Define the priorities and policies of the project
- Step down or move on
- Other Issues maintainers need help with
- Community best practices
- Project analytics
- Tools and bots for maintainers (especially for human coordination)
- Conveying support status ( for contributors, not just user support )
- Finding funding
- People have talked about this before, mostly they concentrated on a few big projects like Linux or Apache (and not much written since 2005)
- Doesn’t reflect the ecosystem today, thousands of small projects, github, social media, etc
- Open source today is not what open source was 20 years ago
- Q: What do you see as the responsibility and potential for orgs like Github?
- A: Joined github to help with this. Hopes that github can help with tools.
- Q: How can we get metrics on real projects, not just playthings on github?
- A: People are using stars on github, which is useless. One idea is to look at dependencies. libraries.io is looking. Hope for better metrics.
- Q: Is it all agile programming’s fault?
- A: Possibly; people these days are learning to code but the average level is lower and they don’t know what is under the hood. Pretty good in general, but: “Under the hood it is not just a hammer, it is a human being”
- Q: Your background is in funding; how does the transition work when a project or some people on it start getting money?
- A: It is complicated, need some guidelines. Some projects have made it work well (“jsmobile” I think she said). Need best practice and to keep things transparent
- Q: How to we get out to the public (even programmers/tech people at tech companies) what OS is really like these days?
- A: Example of Rust. Maybe some outreach and general material
- Q: Is Patreon or other crowd-funding a good way to fund projects?
- A: Needs a good target, requires a huge following which is hard to people who are not good at marketing. Better for one-time vs recurring. Hard to decide exactly what money should be used for
Handle Conflict, Like a Boss! – Deb Nicholson
- Conflict is natural
- “When they had no outfit for their conflict they turned into Reavers and ate people and stuff”
- People get caught up in their area not the overall goal for their organisation
- People associate with a role, don’t like when it gets changed or eliminated
- Need to go deep, people don’t actually tell you the problem straight away
- If things get too bad, then go to another project
- Identify the causes of conflict
- 3 Styles of handling conflict
- Can let things fester
- They come across as unconnected
- Looks like support for the status quo
- Compromise on everything
- Looks like not taking seriously
- Going to wear down everyone else
- People won’t tell you when things are wrong
- Going a little deeper
- People don’t understand history (and why things are weird)
- go to historical motivations and get buy-in for the strategy that reflects the new reality
- People are acting on motivations you don’t see
- Ask about the other persons motivations
- Fear (often of change)
- “What is the worst that could happen?”
- Right Place, wrong time
- Stuff is going to the wrong person or group
- Help everyone get perspective
- Don’t do the same forum, method, people all the time if it always has conflict.
- What do you do with the Info
- Put yourself in other persons shoes
- Find alignment
- A Word about who is doing this conflict resolution
- Shouldn’t be just a single person/role
- Or only women
- Should be everyone/anyone
- But if it is within a big org then maybe hire someone
- Planning for future conflicts
- Assuming the best
- No ad hominem (hard to go back)
- Conflict resolution between groups
- What could we accomplish if we worked together
- Doesn’t look good to outsiders
- More Face-to-Face between projects (towards a common goal)
Open Compute Project Down Under – Andrew Ruthven
- What is Open Compute
- Vanity-free computing (remove the pretty bits)
- Stripped down – remove what we don’t need: no video, minimal extra ports
- Efficient and easy
- Maintenance, Air flow, Electricity
- Came out of Facebook, now a foundation
- 1/10th the number of techs/server
- Projects and Technologies
- 9 main areas, over 4000 people working on it.
- Design and Specs
- Recent Hardware
- Some comes in 19″ racks
- HPE, Microsoft Project Olympus
- In Aus / NZ
- Telstra – 2 racks of OCP Decathleon, Open Networking using Hyper Scalers
- Large Gaming site
- Catalyst IT
- Why OCP for Catalyst
- Very Open source software orientated company
- Have a Cloud Operation
- Looking at for a while
- Finally ordered first unit in 2016 (Winterfell)
- Cumulus Linux switches from Penguin Computing, works off 12 volt in Open Rack
- Issues for Aus / NZ
- Very small scale, sometimes too small for vendors
- Supply chain hard, ended up using an existing integrator
- Hyper Scalers in Aus, will ship to NZ
- A number of companies will ship to NZ
- Scale is an issue for failures as well as supply
- Have >1 power shelf
- Have at least 2 racks with 4 power shelves
- Too small for vendors to get certification
- Trust in new hardware
- Your Own deployment
- Green field DC
- Use DC Designs
- Allow for 48U racks (2.5 metres tall)
- 2x or 4x 3-phase circuits per rack
- Existing DCs
- Consider modifications
- 19″ servers options
- 48OU Open rack if you have enough height
- 22OU if you don’t have enough height
- Carefully check the specs
- Open Networking
- Run collectd etc directly on your switch
- Supply Chain
- Community Support
- OCP has an Aus/NZ mailing list (ocp-anz)
- Discussion on what is a priority across Aus and NZ
400,000 ephemeral containers: testing entire ecosystems with Docker – Daniel Axtens
- A pretty interesting talk. It was largely a demo so I didn’t grab many notes
Community Building Beyond the Black Stump – Josh Simmons
- How to build communities when you don’t live in a big city
- Whats in a meetup?
- Santa Rosa, Sonoma County, north of San Francisco
- Not easy to get to SF
- SF meetups not always relevant
- After meeting with one other person, created “North Bay web Professionals”, minimal existing groups
- Multidisciplinary community worked better
- Designers, Marketers, Web Devs, writers, etc
- Hired each other
- Seemed to work better, fewer toxic dynamics
- Safe space for beginners
- 23 People at first event (worked hard to tell people)
- Told everyone that we knew even if not interested
- Contacted the competitors
- Contacting firms, schools
- Co-working spaces (formal or de-facto, like cafes)
- Other meetup groups, even in unrelated areas.
- Adapting to the needs of the community
- You might have a vision
- But you must adapt to who turns up and what they want/need
- First meeting
- Asked people to bring food
- Fluffy start time so could greet people and mingle
- Went round room and got people to introduce themselves
- Intro ended up being a thing they always did
- Helped people remember names
- Got everyone to say a little
- put people in a social mindset
- Framework for events decided
- Decided on next meeting date, some prep
- Ended up going late
- Format became. Social -> talk -> Social on each night.
- Used facebook and meetup
- 1/3 of people came just from meetup promoting automatically
- Go where people already are
- Renamed from “North Bay Web Professionals” to “North Bay Web and Interactive Media Professionals”
- “Ask a person, not a search engine”
- Hosted over 169 events – Core was the monthly meeting
- Tried to keep the topics a little broad
- Often the talk was narrow but compensated with a broad Q&A afterwards
- Thinking of people as “members” not “attendees” , have to work at getting them come back
- Also hosted
- Lunches, rotated all around the region so eventually near everywhere, Casual
- Topical meetups
- Charity Hackathon, teamed up with students and non-profits to do website for non-profit. Student was an apprentice.
- Hosted Ag+Tech mixers with local farmers groups
- Helped local cities put out tech RFPs
- Q: Success measures? A: Survey of member, things like Job referrals, what have learnt
Servo Architecture: Safety and Performance – Jack Moffitt
- 1994 Netscape Navigator
- 2002 Mozilla Release
- 2008 multi-core CPU stuff not making Firefox faster
- 2016 CPUs now have on-chip GPUs
- Very hard to write multi-threaded C++ to allow Mozilla to take advantage of many cores
- How to make Servo Faster?
- In the past – Monolithic browser engines
- Single browser engine handling multiple tabs
- Two processes – Pool Content processes vs Chrome process
- If one process dies on a page doesn’t take out whole browser
- Sandboxing lets webpage copies have less privs
- Less overhead than whole processes
- Thread per page
- More responsive
- More robust to failure
- Is this the best we can do?
- Pipeline splitting them up
- Child pipelines for inner iframes (eg ads)
- Rust can fail better
- Most failures stop at thread boundaries
- Still do sandboxing and privileges
- Option to still have some tabs in multiple processes
- Using the GPU
- Frees up main CPU
- Are VERY fast at some stuff
- Easiest place to start is rendering
- Don’t browsers already use the GPU?
- Only in a limited way for compositing
- Key ideas
- Retained mode, not immediate mode (put things in optimal order first)
- Designed to render CSS content (CSS is actually pretty simple)
- Draw the whole frame every frame (things are fast enough, simpler to not try to optimise)
- Chop screen into 256×256 tiles
- Tile assignment
- Create a big tree
- merge and assign render targets
- create and execute batches
- Rasterize on CPU and upload glyphs to GPU
- Paste and shadow using the GPU
- Project Quantum
- Taking technology we made in servo and put it in gecko
- Research in progress
- Pathfinder – GPU font rasterizer – Now faster than everything else
- Magic DOM
- Wins in JS/DOM integration
- Fusing reflectors and DOM objects
- Self hosted JS
- External collaborations: ML, power mngt, WebBluetooth, etc
- Get involved
- Test nightlies
- Curated bugs for new contributors
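The “chop screen into 256×256 tiles” step above is simple to sketch. Below is a hypothetical helper (an illustration, not actual Servo/WebRender code) that covers a frame with fixed-size tiles, clipping the tiles along the right and bottom edges:

```python
TILE = 256  # fixed tile size, per the talk

def tile_grid(width, height, tile=TILE):
    """Return (x, y, w, h) rectangles covering a width x height frame
    with fixed-size tiles; edge tiles are clipped to the frame."""
    tiles = []
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            # Clip the last column/row so tiles never overhang the frame.
            tiles.append((x, y, min(tile, width - x), min(tile, height - y)))
    return tiles

# A 1920x1080 frame comes out as an 8 x 5 grid = 40 tiles.
print(len(tile_grid(1920, 1080)))  # 40
```

Each tile can then be batched and assigned to render targets independently, which is what makes the whole-frame-every-frame approach parallelise well on a GPU.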
In Case of Emergency: Break Glass – BCP, DRP, & Digital Legacy – David Bell
- BCP = Business continuity Plan
- A process to prevent and recover from interruptions to business continuity
- BIP = Business interruption plan
- BRP = Business recovery plan
- RPO = Recovery point objective, the targeted recovery point (when you last backed up)
- RTO = Recovery time objective
- Because things will go wrong
- Because things should not go even more wrong
- Create your BCP
- Identify events that may interrupt, e.g. loss of access to physical site, loss of staff
- 3 copies
- 2 different media/formats
- 1 offsite and online
- Check how long it will take to download or fetch
- Who has the Authority
- Communication chains, phone trees, contact details
- Practice Early, Practice often
- Real-world scenarios
- Measure, measure, measure
- Record your results
- Convert your results into action items
- Have different people on the tests
- Each Biz Unit or team should have their own BCP
- Recovery can be expensive, make sure you know what your insurance will cover
- Breaking the Glass
- Documentation is the Key
- Secure credentials super important
- Shamir secret sharing: need a threshold number of people to re-create the secret
- Digital Legacy
- Do the same for your personal data
- What uses them
- Billing arrangements
- What are your wishes for the above.
- Talk to your family and friends
- Document backups and backup your documentation
- Secret sharing, offer to do the same for your friends
- Other / Questions
- Think about 2-factor devices
- Google and some others companies can setup “Next of Kin” contacts
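The Shamir secret sharing mentioned above is worth unpacking: the secret becomes the constant term of a random polynomial over a prime field, each share is a point on that polynomial, and any k shares recover the secret by Lagrange interpolation while k-1 shares reveal nothing. A minimal Python sketch (toy parameters for illustration, not production crypto; use an audited library for real credentials):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for small secrets

def make_shares(secret, k, n, prime=PRIME):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares, prime=PRIME):
    """Lagrange interpolation at x=0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % prime
                den = den * (xi - xj) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

shares = make_shares(secret=42, k=3, n=5)
print(recover(shares[:3]))  # any 3 of the 5 shares suffice: 42
```

Here any 3 of the 5 shares reconstruct the secret, but with only 2 shares every candidate secret is equally consistent with what you hold, which is exactly why fewer than the threshold of keyholders learn nothing.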
Designing for failure: On the decommissioning of Persona
- Worked for Mozilla on Persona
- Persona did authentication on the web
- You would go to a website
- Type in your email address
- Redirects via login page by your email provider
- You login and redirect back
- Started centralised, designed to be uncentralised as it is taken up
- Some sites were only offering login via social media
- Some didn’t offer traditional logins for emails or local usernames
- Imposes 3rd party between you and your user.
- Those 3rd parties have their own rules, eg real name requirements
- Persona Failed
- Traditional logins now more common
- Cave Diving
- Equipment and procedures designed to let you still survive if something fails
- Training review deaths and determines how can be prevented
- “5 rules of accident analysis” for cave diving
- Three weeks ago switched off Persona
- Encourage others to share mistakes
- Just having a free license is not enough to succeed
- Had a built in centralisation point
- Protocol designed so browsers could eventually implement it natively, but initially login.persona.org was providing it.
- Relay between provider and website went via Mozilla until browser natively implemented
- No ability to fork the project
- Bits rot more quickly online
- Stuff that is online must be continually maintained (especially security)
- Need a way to have software maintained without experts
- Complexity Limits agency
- Limits who can run project at all
- Lots of work for those people who can run it
- A free license doesn’t further my freedom if we can’t run the software
- Prolong Your Project’s Life
- Bad ideas
- We used popups and people reflexively closed them
- API wasn’t great
- Didn’t measure the right thing
- Is persona product or infrastructure?
- Treated like a product, not a good fit
- Explicitly define and communicate your scope
- “Solves authentication” or “Authenticate email addresses”
- Broke some sites
- Got used by Firefox OS, which was not a good fit
- Ruthlessly oppose complexity
- Trying to do too much meant it was overly complex
- Complex hard to maintain and review and grow
- Hard for newbies to join
- If it is complex then it is hard to even test that it is working as expected
- Focus and simplify
- Almost no outside contributors, especially bad when mozilla dropped it.
- Plan for Your Projects Failure
- “Sometimes that [bus failure] is just a commuter bus that picks up that person and takes them to another job”
- If you know you are dead say it
- It was 3 years after we pulled people off the project until it was officially killed
- Might work for local software but services cost money to run
- The sooner you admit you are dead, the sooner people can plan for your departure
- Ensure your users can recover without your involvement
- Hard to do when you think your project is going to save the world
- Example firefox sync has a copy of the data locally so even if it dies user will survive
- Use standard data formats
- eg OPML for RSS providers
- Minimise the harm caused when your project goes away
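The OPML suggestion above is concrete: it is a small XML format, so an exporter fits in a few lines and the output imports into any other feed reader, letting users recover without the project’s involvement. A minimal sketch using only the Python standard library (the feed list and URL here are made up for illustration):

```python
import xml.etree.ElementTree as ET

def export_opml(feeds, title="My subscriptions"):
    """Serialise (name, feed_url) pairs as OPML 2.0, so users can take
    their subscriptions to any other RSS reader if a service shuts down."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for name, url in feeds:
        # One <outline> per feed; xmlUrl is the attribute readers look for.
        ET.SubElement(body, "outline", type="rss", text=name, xmlUrl=url)
    return ET.tostring(opml, encoding="unicode")

doc = export_opml([("Planet Linux Australia", "https://planet.linux.org.au/rss20.xml")])
print(doc)
```

Choosing a boring standard format like this is the cheapest form of the “minimise the harm” advice: the export path keeps working even after the service itself is gone.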