Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Sridhar Dhanapalan: Tweet: Australia struggles to bring equality to its indig…

Tue, 2016-03-08 16:28

Australia struggles to bring equality to its indigenous population j.mp/1oDZNmy https://t.co/WDdv4vH6IB

Sridhar Dhanapalan: Tweet: More inventions than Edison: Artur Fischer https:/…

Tue, 2016-03-08 16:28

More inventions than Edison: Artur Fischer j.mp/1oNX2iZ

Sridhar Dhanapalan: Tweet: Walcome to Malbourne: A curious transformation is…

Tue, 2016-03-08 16:28

Walcome to Malbourne: A curious transformation is happening to Victoria’s vowels, and it’s not going unnoticed j.mp/1oLzOK4

Sridhar Dhanapalan: Tweet: https://t.co/wRV3BIsCLL

Tue, 2016-03-08 16:28

https://t.co/wRV3BIsCLL

Sridhar Dhanapalan: Tweet: Rising inequality holds back economic growth: OECD…

Tue, 2016-03-08 16:28

Rising inequality holds back economic growth: OECD report j.mp/1ohCrDL

OpenSTEM: Credit for the Work

Tue, 2016-03-08 11:31

In our research for OpenSTEM material we often find (or rediscover) that the “famous” person we all know is not the person who actually first did whatever it was. This applies to inventors, scientists and explorers alike.

Marco Polo was not the first to go East and hang out with the heirs of Genghis Khan, Magellan did not actually circumnavigate the world (he died on the way, in the Philippines), and so on.

In the field of science this has also happened quite often, and it’s quite frustrating (to put it mildly). It’s important that the people who do the work get the credit – and in particular that other people don’t claim (or otherwise receive, such as through a Nobel prize) that work as their own. That’s distinctly uncool.

Rosalind Franklin

Rosalind Franklin was an accomplished British chemist and X-ray crystallographer. It was her work that first showed the double-helix form of DNA. Watson & Crick (with Wilkins) ran with it (without her permission even) and they only mentioned her name in a footnote. As we all know, Watson, Crick and Wilkins received the Nobel prize for “discovering DNA”. False history.

X-ray diffraction image of the double helix structure of the DNA molecule, taken 1952 by Raymond Gosling, commonly referred to as “Photo 51”, during work by Rosalind Franklin on the structure of DNA

(Raymond Gosling/King’s College London)

While it’s not exclusively women who get a bad deal here, there are a fair number, and the research shows that this is often the result of some very arrogant people in their surroundings grabbing and running with the work. Sexism and chauvinism have played a big role there.

An article by Katherine Handcock at A Mighty Girl provides a short bio of 15 Women Scientists – many of whom you may never have heard of, but all of whom did critical work. She writes:

For centuries, women have made important contributions to the sciences, but in many cases, it took far too long for their discoveries to be recognized — if they were acknowledged at all. And too often, books and academic courses that explore the history of science neglect the remarkable, ground breaking women who changed the world. In fact, it’s a rare person, child or adult, who can name more than two or three female scientists from history — and, even in those instances, the same few names are usually mentioned time and again.

Read the full article at A Mighty Girl: Those Who Dared To Discover: 15 Women Scientists You Should Know

David Rowe: Codec 2 Masking Model Part 4

Mon, 2016-03-07 16:31

This post describes how the masking model frequency/amplitude pairs are quantised.

This work is very new and there are many different areas to pursue. However, I decided to “release early and often” – push a first pass right through the quantisation process. That way I can write about it, publish the results, and get some feedback. This post presents that first pass, including samples at 700 and 1000 bit/s.

Histograms

Quantisation takes a floating point value (a real number) and represents it with a small number of bits. In this case we have 4 frequencies and 4 amplitudes: eight numbers in total that we must send over the channel. If we sent the floating point values, that would be 8×32 = 256 bits/frame. With a 40ms frame update rate that is 256/0.04 = 6400 bit/s. Too high. So we need to come up with efficient quantisers that minimise the number of bits flowing over the channel, while keeping reasonable speech quality. Speech coding is the art of deciding what to throw away.
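To make the arithmetic concrete, here is the same sum as a quick Python sketch (all the numbers come straight from the paragraph above):

# Cost of sending the raw floats: 4 frequencies + 4 amplitudes per frame
floats_per_frame = 4 + 4
bits_per_frame = floats_per_frame * 32   # 32 bit floats -> 256 bits/frame
frame_period = 0.04                      # 40ms frame update rate

print(bits_per_frame / frame_period)     # 6400.0 bit/s - way too high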

There are a few tricks we can use. The dynamic range (maximum and minimum values) tends to be limited. A good way to look at the range is a histogram of each value. I ran samples of 10 speakers through the simulations and logged the frequencies and amplitudes to generate some histograms.

Here are the frequencies and differences between each frequency. The frequencies were first sorted into ascending order.

Here are the amplitudes, with the mean (frame energy) removed:

Voiced speech tends to have a “low pass” spectral slope – more energy at low frequencies than high frequencies. Unvoiced speech tends to be “high pass”. As discussed in the first post the ear is not very sensitive to fixed “filtering” of speech, ie the absolute value of the formant amplitudes. You can have a gentle band pass filter, some high pass or low pass filtering, and it all sounds fine.

So I reasoned I could fit a straight line the amplitudes, like this:

The first plot is the time domain speech: a frame of Mark saying “five”. The second plot shows the spectral amplitudes Am (red) and the mask (purple) we have fitted to them. The mask is described by just four frequency/amplitude points. The frequencies are labelled by the black crosses.

The last plot shows the four frequency/amplitude points (red), and a straight line fit to them (blue line). The error in the straight line fit to each red point is also shown.
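As a rough illustration (not the actual Octave code – the frequency/amplitude values here are made up), the straight line fit might look like this in Python:

import numpy as np

# Hypothetical mask samples for one frame: 4 frequencies (Hz), 4 amplitudes (dB)
freqs = np.array([500.0, 1200.0, 2300.0, 3100.0])
amps = np.array([68.0, 61.0, 55.0, 42.0])

# Least squares straight line fit of amplitude (dB) against frequency
gradient, y_intercept = np.polyfit(freqs, amps, 1)

# Errors in the fit at each of the 4 points - these are what get quantised
errors = amps - (gradient * freqs + y_intercept)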

So, keeping this straight line fit in mind, let’s get back to the histograms. Here are the histograms of the amplitudes, followed by the histograms of the amplitude errors after the straight line fit:

Note how much narrower the histograms in the second plot are compared to the first. This makes the values “easier” to represent with a small number of bits. In statistics, we would say the variance of these variables is smaller.

OK, but now we need to send the parameters that describe the straight line. That would be the gradient and y-intercept; here are the histograms:

Notice how the mean of the gradient is skewed to negative values? Speech contains more voiced speech (vowels) than unvoiced speech (consonants), and voiced speech is “low pass” (a negative gradient).

The y-intercept is a fancy way of saying the “frame energy”. It goes up and down with the level of the speech.

Quantisation of Frequencies

The frequencies are found in random order by the AbyS algorithm. We can also transmit them in any order, and reconstruct the same spectral envelope at the decoder. For convenience we sort them into ascending order. This reduces the distance between each frequency sample, and lets us delta code the frequencies. You can see above that the histograms of the last 3 delta frequencies cover about the same range.

I used 3 bits for each frequency, giving a total of 12 bits/frame.
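Here is a minimal Python sketch of the sort-and-delta idea; the step size is my guess for illustration, not the value used in newamp.m:

import numpy as np

def delta_code_freqs(freqs, step=500.0, levels=8):
    # Sort into ascending order, then quantise each delta to 3 bits (8 levels)
    deltas = np.diff(np.sort(freqs), prepend=0.0)
    indices = np.clip(np.round(deltas / step), 0, levels - 1).astype(int)
    decoded = np.cumsum(indices * step)   # decoder reconstructs by summing
    return indices, decoded

indices, decoded = delta_code_freqs(np.array([2300.0, 500.0, 3100.0, 1200.0]))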

Quantisation of Amplitudes

I used the straight line fit method for the amplitudes: 3 bits for the gradient, and 3 bits for the errors, in 5dB steps over the range of -15 to 15 dB. I assumed the y-intercept would require as many bits as the frame energy (5 bits/frame) used for the existing Codec 2 modes.
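The error quantiser can be a simple uniform one. A minimal Python sketch of 5dB steps over -15 to 15 dB (7 levels, which fits in 3 bits):

import numpy as np

def quantise_amp_error(error_dB):
    # Uniform quantiser: 5dB steps over -15..+15 dB, 7 levels (3 bits)
    levels = np.arange(-15, 16, 5)             # [-15 -10 -5 0 5 10 15]
    index = int(np.argmin(np.abs(levels - error_dB)))
    return index, float(levels[index])

index, quantised = quantise_amp_error(7.2)     # returns (4, 5.0)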

Bit Allocation

The quantisation work leads us to a simulation of quantised 1000 and 700 bit/s codecs with the following bit allocations. In the 700 bit/s mode, we don’t transmit the straight line fit errors.

Parameter          Bits/frame (High Rate)  Bits/frame (Low Rate)
Pitch (Wo)         7                       7
Voicing            1                       1
Energy             5                       5
Mask freqs         12                      12
Mask amp gradient  3                       3
Mask amp errors    12                      0
Bits/frame         40                      28
Frame period (s)   0.04                    0.04
Bits/s             1000                    700

Samples

Here are some samples of the first pass 700 and 1000 bit/s codecs. Also provided are the samples from Part 3 (unquantised AbyS), and Codec 2 700B and 1300. The synthetic phase spectra are derived from the decoded amplitude spectrum. “newamp” is the name for the simulations using the masking model.

Sample      700B    1300    newamp AbyS  newamp AbyS 700  newamp AbyS 1000
ve9qrp_10s  Listen  Listen  Listen       Listen           Listen
mmt1        Listen  Listen  Listen       Listen           Listen
vk5qi       Listen  Listen  Listen       Listen           Listen

I can hear some “tinkle” and “running water” type sounds on the quantised newamp samples. This could be chunks of spectrum coming and going quickly. There is also some roughness on long vowels, like “five” in the vk5qi samples.

I feel the newamp 700 samples are better than 700B, which is the direction I want to be heading. Please tell me what you think.

Command Lines

codec2-dev SVN revision 2716

octave:49> newamp_batch("../build_linux/src/vk5qi","../build_linux/src/vk5qi_am.out", "../build_linux/src/vk5qi_aw.out")

~/codec2-dev/build_linux/src$ ./c2sim ../../wav/vk5qi.wav --amread vk5qi_am.out --awread vk5qi_aw.out --phase0 --postfilter -o - | play -t raw -r 8000 -s -2 -

Further Work

A bunch of ideas that come to mind:

  • The roughness in long vowels could be frame to frame amplitude variations. It would be interesting to explore these errors, for example by plotting them.
  • Try a delta-time approach. This might help with gentle evolution of the parameters and frame to frame noise. A gentle evolution of the slope of the straight line might sound better than the current scheme, as there will be less frame to frame noise.
  • Plotting trajectories of the parameters over time would give us some more insight into quantisation, and help us determine if we can use Trellis Decoding for additional robustness.
  • It may be possible to weight certain errors on the AbyS loop, for example steer it away from outlier amplitude points that have a poor straight line fit.
  • Can we choose a set of amplitudes that fit exactly to a straight line but still sound OK? Can we modify the model in a way that doesn’t affect the speech quality but helps us quantise to a compact set of bits?
  • Look for quantiser overload – values outside of the quantiser range.
  • Take another look at what the AbyS loop is doing, track down some problem frames.
  • Try a higher and lower (e.g. 3) number of frequency/amplitude points.
  • Frequencies don’t have to be on the harmonic amplitude mWo grid.
  • Experiment with the shape of the masking functions.
  • Try Vector Quantisation.
  • The parameters are all quite orthogonal, which lets us modify them independently. For example we could move the amplitudes a little to aid quantisation, but keep the frequencies the same. Compressing the frame energy will leave intelligibility unchanged.
  • Get something a little better than what we have here and put it on the air and get some real tests over real conversations.
  • Samples like vk5qi have a lot of low pass energy. We might be wasting bits coding that. The 4th frequency histogram has a mean of 3.2kHz. This is very high, and it could be argued we don’t need this sample, or that it could be coded with low resolution.

Links

Codec 2 Masking Model Part 1

Codec 2 Masking Model Part 2

Codec 2 Masking Model Part 3

OpenSTEM: Tumour patient gets world’s first 3D-printed vertebrae | ABC

Mon, 2016-03-07 10:31
An Australian neurosurgeon completed a world-first surgery, removing two cancer-riddled vertebrae from the top of a patient’s neck and replacing them with a 3D-printed titanium body part.

David Rowe: Torturing the Clutch in my EV

Thu, 2016-03-03 20:31

My 17 year old son has recently been given a driver’s license and, like his sister before him, has taken over my EV. Free driving (Dad pays the electricity bills) is kind of irresistible. And also like his sister before him – I was full of fear and angst. You see, my EV is something of a prototype, and likes to be babied by its creator. Lots of traps for the unwary. If you want an EV fit for general consumption, talk to Tesla.

Sure enough, just 4 weeks later, I get “the phone call” from my son. The EV has died. Motor running but making nasty sounds and won’t move. Fortunately, it stopped a few blocks from home. I attended the scene, and all I could hear was a grating sound from the front. We pushed it home and I consulted my motor vehicle brains trust (friends Kyle and Scott), who pronounced a likely transmission failure.

I prepared to drop the motor and gearbox, something I haven’t done since I blew up the armature 6 years ago. Looking forward to the project, as doing something mechanical is a welcome change in my lifestyle. Feeling determined as well, my EV must be kept running!

I was talking about the problem on the local repeater when Gary, VK5FGRY popped up and said he might be able to help. Gary has a fully equipped workshop and years of experience with car repairs. He also has the most important resource of all – time. Gary came around this morning at 9:30am to assess the situation. He suggested we make a start, so we could at least work out what the problem was.

Within a few hours the gearbox was on the ground and the problem found – a stripped spline in the hub of the clutch plate. The noise I could hear was the stripped spline being filed away by the (somewhat harder) gearbox input shaft. Gary also discovered a few minor issues with stripped or missing gearbox mounting bolts. Note the filings on the inside of the hub, and splines mostly gone:

This was a lucky escape – I thought I was up for a new gearbox ($500 second hand). We guessed the torque of the electric motor (200Nm, about twice that of the original infernal combustion engine) had caused the fault. Well it’s been Electric for 8 years and 50,000km, so I guess I can’t complain.

We headed out and bought a new clutch plate ($100), some bolts, and transmission oil. After a nice lunch of home made bread and condiments, we picked up the tools again and by 6pm my little EV was on the road! Yayyyyyyy. Thank you so much Gary!

Here is the clutch (purple) re-assembled and attached to the electric motor, just before the gearbox was re-installed. Silver metal is the adapter plate. Lots of cables all over the place:

It was a great day. Nice change from my usual keyboard and laptop filled life. Lovely summer day, working outside with the tools, getting a bit dirty (but not oily – this is an electric car so no grease under my bonnet). So much nicer to do it with good company – especially someone as experienced as Gary.

I do live an interesting life. What did you do today Dad? “I worked with a friend to fix a home brew Electric Car!”

Links

My EV page

My EValbum page. Lots of pictures and technical stuff.

OpenSTEM: Wintergatan – Marble Machine (music instrument using 2000 marbles)

Thu, 2016-03-03 13:31

A marvellous piece of engineering using mainly wood, 2000 marbles, some Lego technic bits, and some electronics. Hand-driven. Watch and enjoy! (4m30s)



Marble Machine built and composed by Martin Molin

Video filmed and edited by Hannes Knutsson

sthbrx - a POWER technical blog: Learning From the Best

Wed, 2016-03-02 23:00

When I first started at IBM I knew how to alter Javascript and compile it, thanks to my many years playing Minecraft (yes, I am a nerd). Now I have levelled up! I can understand and use Bash, Assembly, Python, Ruby and C! Writing full programs in any of these languages is a difficult prospect, but nonetheless achievable with what I know now, whereas two weeks ago it would have been impossible. Working here even for a short time has been an amazing learning experience for me, plus it looks great on a resume! Learning how to write C has been one of the most useful things I have learnt. I have already written programs for use both in and out of IBM. The first program I wrote was the standard newbie 'hello world' exercise. I have since expanded on that program so that it now says, "Hello world! This is Callum Scarvell". This is done using strings that store my name as a set of characters. Then I used a header file called conio.h or curses.h to recognise 'cal' as the short form of my name, so that I can abbreviate my name more easily. Here's what the code looks like:

#include <stdio.h>
#include <string.h>
#include <curses.h>

int main()
{
    printf("Hello, World! This Is cal");
    char first_name[] = "Callum";
    char last_name[] = "Scarvell";
    char name[100];

    /* testing code */
    if (strncmp(first_name, "Callum", 100) != 0)
        return 1;
    if (strncmp(last_name, "Scarvell", 100) != 0)
        return 1;

    last_name[0] = 'S';
    sprintf(name, "%s %s", first_name, last_name);
    if (strncmp(name, "Callum Scarvell", 100) == 0) {
        printf("This is %s\n", name);
    }
    /*printf("actual string is -%s-\n",name);*/
    return 0;
}

void Name_Rec()
{
    int i, j, k;
    char a[30], b[30];
    clrscr();
    puts("Callum Scarvell : \n");
    gets(a);
    printf("\n\ncal : \n\n%c", a[0]);
    for (i = 0; a[i] != '\0'; i++)

The last two lines have been left out to make it a challenge to recreate. Feel free to test your own knowledge of C to finish the program! My ultimate goal for this program is to make it generate the text 'Hello World! This is Callum Scarvell's computer. Everybody else beware!' (which is easy), then import it into the Linux kernel so it appears at the profile login screen. Then I will have my own unique copy of the kernel, and I could call myself an LSD (Linux system developer). That's just a small pet project I have been working on in my time here. Another pet project of mine is my own very altered copy of the open source game NetHack. It's written in C as well and is very easy to tinker with. I have been able to do things like set my character's starting hit points to 40, give my character awesome starting gear, and keep save files even after the death of a character. These are just a couple of small projects that made learning C so much easier and a lot more fun. And the whole time I was learning C, Ruby or Python, I had some of the best system developers in the world showing me the ropes. This made things even easier, and much more comprehensible. So really it's no surprise that in three short weeks I managed to learn almost four different languages and how to run a blog from the raw source code. The knowledge given to me by the OzLabs team is priceless and invaluable. I will forever remember all the new faces and what they taught me. And the Linux Gods will answer your prayers, whether by e-mail or in person, because they walk among us! So if you ever get an opportunity to do work experience, an internship or a graduate placement, take the chance, because you will learn many things that are not taught in school.

If you would like to review the source code for the blog or my work in general, you can find me at CallumScar.github.com or on Facebook as Callum Scarvell.

And a huge thank you to the OzLabs team for taking me on for the three weeks and for teaching me so much! I am forever indebted to everyone here.

David Rowe: Project Whack a Mole Part 1

Wed, 2016-03-02 14:31

As a side project I’ve been working on a Direction Finding (DF) system. Although it relies on phase it’s very different to Doppler. It uses a mixer to frequency multiplex signals from two antennas into a SDR. Some maths works out the phase difference between the antennas, which can be used to compute a bearing.

The use case is tracking down a troll who is annoying us on our local repeater. He pops up for a few seconds at a time, like the game of Whack a Mole. It’s also fun to work on a new(ish) type of DF system, and play with RF.

I’ve got the system measuring phase angles between two antennas on the bench, so thought I better come up for air and blog on my progress so far.

Hardware

Here is a block diagram of the hardware:

The trick is to get signals from two antennas into the SDR, in such a way that the phase difference can be measured. One approach is to phase lock two or more SDRs. My approach is to frequency shift the a2 signal, which is then summed with a1 and sent to the SDR. I used a Minicircuits ADE-1 mixer (left) and home made hybrid combiner (centre):

For testing on the bench I use a sig-gen and a splitter (right) to generate the a1 and a2 signals. I can vary the phase by varying the cable lengths.

Here is a spec-an plot showing a1 in the centre and the a2 “sidebands”, at +/- 32kHz:

The LO frequency of 32kHz was chosen because (i) it is greater than the 16kHz bandwidth of FM signals, (ii) it means we can use a modest sampling rate of 192kHz to capture the 3 signals, and (iii) we can use a common “watch” crystal to generate it. The LO input on the mixer is rated down to 500kHz, but works OK at 32kHz with a conversion loss of 9dB.

Signal Processing Design

OK so we have the two signals a1 and a2 present at each antenna. Theta is an arbitrary phase offset that both signals experience due to propagation time from the transmitter, and other phase shifts common to both signals, like the SDR’s signal processing. Phi is the phase difference between a1 and a2; this is what we want to compute. Alpha is the phase offset of the local oscillator. Only a2 experiences this phase shift, as it passes through the mixer. Omega-l is the local oscillator frequency, and omega is the carrier frequency. The summed signal presented to the SDR input is called r, which we can derive:

Note we assume the two signals a1 and a2 are complex, but the mixer is real (double sided), so there are a total of three signals at the SDR input. Now let’s mess about with the phase terms of the three signals that make up r:

So the output is 2 times phi, the phase difference between the two antennas. Yayyyyyy. The 2phi output also implies an ambiguity of 180 degrees, which is what we would expect with just 2 antennas. I’ll worry about that later, e.g. with a third channel or mounting the hardware on the edge of our city such that bearings are only expected from one hemisphere.
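Here’s a quick numerical check of that result in Python, with made-up angles; s1, s2 and s3 stand in for the three filtered signals (the real processing lives in df_mixer.m):

import numpy as np

fs = 192000
t = np.arange(fs) / fs
phi, theta, alpha = 0.7, 0.2, 0.4                 # arbitrary test angles
w, wl = 2 * np.pi * 48000, 2 * np.pi * 32000      # carrier and LO (rad/s)

s1 = np.exp(1j * (w * t + phi + theta))           # a1 at the centre
s2 = np.exp(1j * ((w + wl) * t + alpha + theta))  # upper sideband
s3 = np.exp(1j * ((w - wl) * t - alpha + theta))  # lower sideband

# 2*phase1 - (phase2 + phase3) = 2*phi, as derived above
two_phi = s1 * s1 * np.conj(s2) * np.conj(s3)
print(np.angle(np.mean(two_phi)) / 2)             # prints ~0.7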

There are several ways to implement the signal processing. I like the sample by sample approach:

It’s all implemented in df_mixer.m. This can run with a simulated signal or input from a HackRF SDR.

Walk Through and Results

Let’s look at the algorithm in action with a1 and a2 generated on the bench using a splitter and two lengths of coax to set the phase difference. The signal generator was set to 439.048MHz and -30dBm. We sample about 1 second using the HackRF SDR, then run the Octave script:

$ hackrf_transfer -r df1.iq -f 439000000 -n 10000000 -l 20 -g 40

octave:25> df_mixer

Here is the input signal; the wanted signals are at 48kHz (a1), and at 16kHz and 80kHz (a2).

We pass that through these Band Pass Filters (BPFs):

To get the three signals:

After the signal processing magic we can plot the output for each sample on the complex plane. It’s like a scatter plot, and gives us a feel for how reliable the phase estimates are:

We can also find the angle for each sample and plot a histogram. The tighter this histogram is the more confidence we have:

Testing with Cables

So how to test? I ended up inserting short lengths of transmission line, using adapters and attenuators. I guessed the velocity as 2/3 the speed of light. This spreadsheet summarises the results:

When I insert adapters in the opposite antenna line the phase angle reduces. I inserted a 10dB attenuator and the phase angle changed roughly in proportion to the attenuator length. It worked just fine despite the amplitude difference. So it’s doing something sensible. Wow!
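As a sanity check on the cable tests, the expected phase shift per length of inserted line is easy to estimate. A sketch assuming the 2/3 velocity factor guess (the 50mm adapter length is hypothetical):

c = 3.0e8                     # speed of light, m/s
f = 439.048e6                 # test frequency, Hz
v = (2.0 / 3.0) * c           # guessed velocity in the line
wavelength = v / f            # about 0.455 m

length = 0.05                 # a hypothetical 50mm adapter
print(360.0 * length / wavelength)   # roughly 40 degrees of phase shift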

Discussion

The central carrier and two “sidebands” look a lot like an AM signal. I initially thought I could demodulate it using envelope detection. However that was a flop, so I got the paper and pencil out and worked out the math. This was challenging, but I do enjoy a good engineering puzzle. After a few goes over several days I came up with the math above, and tested it using a simulation.

Note we don’t really care what sort of modulation the signal has. It could be a carrier, FM, or SSB. We just look at the phase so it’s insensitive to amplitude differences in the two signals. Any frequency and phase modulation is present on both a1 and a2 and is removed by the signal processing, leaving just the phase difference term. So the algorithm essentially strips modulation.

This means “processing gain” is possible. We can make phase estimates on every sample over say 1 second. We can then average the phase estimates. This may lead to a good phase estimate at SNRs lower than we can demodulate the signal. Plucking DF bearings out of the noise. Just like the FFT of a weak sine wave in noise creates a nice sharp line if you sample the signal long enough.

This system is phase based, so it will be affected by multipath signals. Mounting the system with a direct line of sight to the transmitter is a good idea. The histogram gives us a confidence measure, and may be useful in detecting multipath or multiple bearings. Presenting this histogram information visually on a 3D or intensity map would be a useful area to explore.

The absolute phase estimates are sensitive to frequency offset, for reasons I haven’t worked out yet. The HackRF is about 4kHz off my sig-gen at 439MHz, which shifts the phase estimates. So it might need tuning or re-calibration to a known bearing.

I haven’t worked out where the “noise” in the scatter diagram comes from. The phase is the product of several non-linearities so we expect it to jump around a bit. Given we are just interested in phase, perhaps a limiter or three could be included at some point in the processing.

Off Line Direction Finding

One neat possibility with this approach is off line DF. Imagine every time the squelch opens, we log the SDR baseband Fs=192kHz signal onto a hard disk. A 1 Tbyte disk would store 720 hours at Fs=192kHz (2 byte IQ samples). We can then use a sound editor to jump to the position where our Mole appears for a few seconds, and run the DF signal processing on that segment. We can tweak parameters, even run it a few times, to improve the bearing. We can compare this to the same signal received at different sites across town, to get a cross bearing.
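The storage sum checks out if we assume the HackRF’s 8 bit I and Q samples, i.e. 2 bytes per complex sample:

fs = 192000                    # sample rate, Hz
bytes_per_second = fs * 2      # 1 byte I + 1 byte Q per sample
seconds = 1e12 / bytes_per_second
print(seconds / 3600)          # about 723 hours on a 1 Tbyte disk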

We can do this off line DF-ing days later, or download the samples and process at a location remote to the DF site. It also provides a documented record for ACMA, should evidence be required for prosecution.

Further Work

My next step is to configure the HackRF for high gain so I can try some off-air signals. The repeater output is about -70dBm inside my home office so that will do for a start. If that works I will try DF-ing repeater input signals, perhaps with the hardware mounted on a mast outside. I have a UHF BPF I will insert to prevent overload from out of band signals.

I’m hoping it will be as accurate as Doppler systems, e.g. capable of resolving say 16 different bearings on a “ring of LEDs” or similar virtual display. I bet there are many issues I need to sort out and perhaps a show stopper lurking somewhere. We shall see! It’s good to experiment. Failure is an option.

We could simplify the hardware significantly. Other mixers could be tried. The circuit is insensitive to levels so the combining could be very simple, we don’t need a hybrid. Just connect the two signals to the same node. If the mixer has poor RF-IF isolation (carrier feed-through) there could be a problem. This could be alleviated by ensuring a1 is > 10dB above the a2 carrier feed-through. A very simple approach would be using a UHF transistor for the 32kHz clock oscillator, and injecting a2 into the emitter or base.

The 32kHz transistor clock oscillator I built was hard to start. Here is the saga of getting the 32kHz oscillator to run.

More

Project Whack a Mole Part 2

Latex Source

Have to put this somewhere in case I need it again. I used HostMath to build up the equations and Rogers Online Equations to render it to a PNG.

\begin{array}{lcl}

a_{1} & = & e^{j(\omega t+\phi +\theta)} \\

a_{2} & = & e^{j(\omega t+\theta)} \\

r & = & a_{1}+a_{2}\cos(\omega_{l}t+\alpha ) \\

& = & e^{j(\omega t+\phi +\theta)}+\frac{1}{2}e^{j((\omega+\omega_{l}) t+\alpha +\theta)}+\frac{1}{2}e^{j((\omega-\omega_{l}) t-\alpha +\theta)}

\end{array}

\begin{array}{lcl}

phase_{1} & = & \omega t + \phi +\theta \\

phase_{2} & = & (\omega+\omega_{l})t+\alpha +\theta \\

phase_{3} & = & (\omega-\omega_{l})t-\alpha +\theta \\

phase_{2}+phase_{3} & = & \omega t + \omega_{l}t+\alpha +\theta + \omega t - \omega_{l}t -\alpha + \theta \\

& = & 2\omega t + 2\theta \\

2phase_{1} - (phase_{2}+phase_{3}) & = & 2\omega t + 2\phi + 2\theta -2\omega t - 2\theta \\

& = & 2\phi

\end{array}

Chris Neugebauer: Python in the Caribbean? More of this!

Wed, 2016-03-02 06:26

I don’t often make a point of blogging about the conferences I end up at, but sometimes there are exceptions to be made.

A couple of weekends ago, a happy set of coincidences meant that I was able to attend the first PyCaribbean, in Santo Domingo, capital city of the Dominican Republic. I was lucky enough to give a couple of talks there, too.

This was a superbly well-organised conference. Leonardo and Vivian were truly excellent hosts, and it showed that they were passionate about welcoming the world to their city. They made sure breakfast and lunch at the venue were well catered. We weren’t left wanting in the evenings either, thanks to organised outings to some great local bars and restaurants over each of the evenings.

Better still, the organisers were properly attentive to issues that came up: when the westerners (including me) went up to Leo asking where the coffee was at breakfast (“we don’t drink much of that here”), the situation was resolved within hours. This attitude of resolving mismatches in the expectations of locals vs visitors was truly exceptional, and regional conference organisers can learn a lot from it.

The programme was, in my opinion, better than any first-run conference has a right to be. Most of the speakers were from countries further afield than the Caribbean (though I don’t believe anyone travelled further than me), and the keynotes were all of a standard that I’d expect from much more established conferences. Given that the audience was mostly from the DR – or Central America, at a stretch – the organisers showed that they truly understood the importance of bringing the world’s Python community to their local community. This is a value that it took us at PyCon Australia several years to grok, and PyCaribbean was doing it in their first year.

A wonderful side-effect of this focus on quality is that the programme was of a high enough standard that someone could visit from nearby parts of the US and still enjoy a programme matching some of the best US regional Python conferences.

A bit about the city and venue: even though the DR has a reputation as a touristy island, Santo Domingo is by no means a tourist town. It’s a working city in a developing nation: the harbour laps up very close to the waterfront roads (no beaches here), the traffic patterns help make crossing the road an extreme sport (skilled jaywalking ftw), and toilet paper and soap at the venue were mostly a BYO affair (sigh). Through learning and planning ahead, most of this culture shock subsided after my first day at the event, but it’s very clear that PyCaribbean was no beachside junket.

In Santo Domingo, the language barrier was a lot more confronting than I’d expected, too. Whilst I lucked out on getting a cabbie at the airport who could speak a tiny bit of English, and a receptionist with fluent English at the hotel, that was about the extent of being able to communicate. Especially funny was showing up at the venue and not being allowed in, until I realised the problem was my shorts – they’re not allowed inside government buildings (it took a while to realise that was what the pointing at my legs meant).

You need at least some Spanish to function in Santo Domingo, and whilst I wasn’t the only speaker who was caught out by this, I’m still extremely grateful for the organisers for helping bridge the language barrier when we were all out and about during the evening events. This made the conference all the more enjoyable.

Will I be back for another PyCaribbean? Absolutely. This was one of the best regional Python conferences I’ve ever been to. The organisers had a solid vision for the event, far earlier than most conferences I’ve been to; the local community was grateful, eager to learn, and was rewarded with talks of a very high standard for a regional conference; finally, everyone who flew into Santo Domingo got what felt like a truly authentic introduction to Dominican culture, thanks to the solid efforts of the organisers.

Should you go to the next PyCaribbean? Yes. Should your company sponsor it? Yes. It’s a truly legitimate Python conference that in a couple of years’ time will be amongst the best in the world.

In PyCaribbean, the Python community’s gained a wonderful conference, and the Caribbean has gained a link with the global Python community, and one that it can be truly proud of at that. If you’re anywhere near the area, PyCaribbean is worthy of serious consideration.

Peter Lieverdink: Accidental Space Tourist - SocialSpaceWA

Tue, 2016-03-01 11:27

Like many people, I love the beautiful images we receive from space telescopes and spacecraft that orbit other worlds in the solar system. Also like many other people, I expect, I never really stop to think how we get those images, just assuming they get sent to earth via some magic space internet.

However, there is no internet (magic or otherwise, yet) in space and getting the data to create these pretty images (and to do science) is rather involved.

Quite by accident I got a chance to learn a lot more about that process.

SocialSpaceWA

Whilst not working, I stumbled across a retweet by the European Space Agency, asking for people to apply to visit their deep space tracking station in New Norcia, Western Australia (NNO) as part of their SocialSpace programme. I didn’t really have anything on, I qualified to apply by way of having an ESA member nation passport, and I don’t live more than 16 hours flying away, so I thought “why not?”.

Why not indeed. I applied a few days before the closing date and only a week later I got the happy news I'd been selected to attend. I immediately grabbed some return tickets to Perth and then started fretting about doing this thing with 15 total strangers. Eep!



Time-lapse of land-fall over southern WA, after crossing the Great Australian Bight.

Of course, fretting was totally unwarranted. ESA had organised a bus to drive us all to New Norcia from Perth, and a bunch of delegates organised to meet up with Daniel (the ESA chef-de-mission) before heading to the bus pick-up. Of course, my fellow delegates were all space geeks too and we all got on really well (especially once Daniel started handing out ESA swag :-)

The trip to New Norcia was in a lovely airconditioned bus, which made coping with the heat wave rather easy.

Introductions

As an ice-breaker, we all shared a group dinner that evening at the New Norcia hotel. After a round of 140 character introductions, we split into groups and each group was joined by an ESA engineer, who talked a little about who they were and the work they did on the site.

After dinner, John Goldsmith gave a talk about astrophotography and the sights of the night sky in preparation for an observing session with some people from the Perth Observatory, who'd driven up with cars full of (rather lovely) telescopes. Sadly I missed the talk because I was volunteered to help out with the telescopes. On the up-side, that resulted in my first TV appearance ever on Channel 10 in Perth.

The seeing was excellent (New Norcia has proper dark skies) so it ended up being a fairly late night.

Unfortunately, that meant the morning wasn’t quite as early as I’d hoped it would be. Because of the dark skies, and the three hour time difference with home, I had planned not to go back to Perth for the night. Instead, I wanted to stay in New Norcia and then get up early to catch the planetary alignment in action. I ended up seeing it just fine, but it was getting a little too light at that stage to easily capture all the planets on camera.

Because most delegates elected to stay back in Perth overnight (where the hotels have airco) they wouldn't be back before 10am, which gave me time to have a nice and relaxing early morning at the hotel, with fresh coffee.



Aaaah, the serenity.

Down to business

Once my partners in crime had arrived, we all moved to the ESA education room at the New Norcia monastery for some enlightening sessions about the ESA Tracking Network (ESTRACK) and NNO by ESA engineers.

ESTRACK

Yves Doat spoke about why the ESTRACK network is needed and what it currently consists of. He showed us highlights of some of the missions they've supported over the past decades, from the Giotto mission past Halley's Comet in 1986 through to the current Rosetta/Philae mission to comet 67P Churyumov-Gerasimenko.

Deep Space Comms

Klaus-Jürgen Schulz dove into the details of deep space communications and paid particular attention to the difficulties of communicating with spacecraft that are close to the sun (which is an issue for the BepiColombo mission to Mercury, of course!). He finished his presentation by telling us about the future of deep space communications, using light rather than radio to obtain much higher rates of data transmission.

Ground Station Operations

Next, Marc Roubert explained the operational intricacies of running ground stations. Since they are generally located in relatively remote radio-silent areas, getting construction materials and equipment to the site can pose a real problem. Bush fires, sand storms, snow and the occasional leopard (for the Argentinian site) can interfere with operations as well.

Their location can also pose problems for the power supply. The sites use a lot of power to cryo-cool the amplifiers. Fire can cut power lines, so generators are needed.

All delegates became very excited when he said that due to the cost of power in Australia, NNO was actually going solar. ESA have built a 250kW solar plant on the New Norcia site, which will pay for itself in only 7 years and save about 400 tons of CO2 per year.

They're not yet allowed to feed power back into the grid, because the infrastructure wouldn't be able to cope. But they built the plant to produce only as much power as they need, so there isn't that much to feed back currently anyway.

The trouble with big antennas

Gunther Sessler then gave us the low-down on the new NNO-2 antenna: how it was constructed, and what it can do that the 35m NNO-1 antenna can’t – which is mainly to acquire a signal from spacecraft even if they’re slightly off-course (which can happen easily if a rocket slightly over- or underperforms at launch).

As it turns out, the 35m NNO-1 antenna has a beam width of 60 millidegrees, and to acquire a signal from a spacecraft, it has to be somewhere within that beam. I did the maths on that: 60 millidegrees equates to a circle only about 1km across at a distance of 1000km (e.g. a spacecraft on its way to orbit just clearing the horizon). Now that sounds like a lot, but when you realise a spacecraft is doing upwards of 5km/sec at that point, locking on to it becomes a much harder problem!

That’s where the wider beam width of the 4.5m NNO-2 antenna comes in. It can see a larger part of the sky, so it can pick up spacecraft that are slightly off-course a lot more easily. And if the spacecraft is even more off-course, the 0.75m antenna has a wider beam width still.

With some smarts, once the 0.75m antenna locks on to a spacecraft, it can be used to centre the 4.5m dish on it. And once the 4.5m antenna is locked, its data can in turn be used to lock the 35m NNO-1 on the craft.

Putting it all together

The final presentation was by Peter Droll, who put it all together and gave us an overview of how ESTRACK was used to send the Lisa Pathfinder mission on its way to the L1 Lagrange point. That was done by boosting its orbit with several engine burns, after each of which the craft’s position needed to be known exactly in order to calculate the next burn.

LPF is trialling equipment for detecting gravitational waves in space, and should have started science operations today. Fittingly, this presentation was on the morning of the LIGO announcement :-)

Tour

We had a quick lunch after the presentations and then hopped back on the bus to go see the NNO dishes. The Inmarsat Cricket Team had prepared well and gave us a tour of NNO-1, allowing us to stick our heads absolutely everywhere.

The only spanner in the works in terms of social media was that the inside of the dish is really well shielded against radio interference, so all our phones stopped working! Luckily, the Nikon with borrowed fish-eye lens worked fine.

You can see all of my SocialSpaceWA photos on Flickr.

We toured the NNO-1 dish, as well as the generator and battery buildings and the control room. Two lucky souls managed to score the chance to actually operate NNO-1, and I grabbed a bit of video whilst Matt took the dish for a joyride. I am assured that New Norcia doesn't do hayrides like Parkes, and that nobody plays cricket in the dish either (but they do play football!)



Taking NNO-1 for a joyride.

Inauguration

After the tour, VIPs started arriving for the formal inauguration ceremony. After a welcome to country, we heard talks from the WA deputy premier and the European Union ambassador to Australia, praising the virtues of scientific cooperation. I definitely hope there will be more of that in the future, if only to make more space infrastructure more readily accessible for visiting! :-)

Speeches over, we all hopped back on the bus to finally go and see the new NNO-2 antenna. It's located a few hundred meters away from the main complex and since we were still enjoying the heat wave, the transport was most welcome. That is, until the smaller of the buses couldn't cope with the rather steep hill and we all had to do the last hundred meters or so on foot.

The sun was setting as we arrived at the NNO-2 site and with the thin crescent moon it made a rather lovely backdrop for the blessing of the new facility by three monks from the New Norcia Monastery, followed by the antenna doing a little dance.

Good luck on your mission, NNO-2!



Image: Vaughan Puddey.

Wrap-Up

The formal proceedings over, we were all bused back to the monastery where ESA treated us to a delicious dinner as the stars came out. The monks at New Norcia turn out to make a rather decent drop of wine as well. I'm not a fan of beer, but I'm told their ale is pretty good too :-)

Finally it was time to hop on the bus and head back to Perth and after a final farewell drink, all delegates went their separate ways again.

But one thing we did all agree on: if you ever get the chance to do some accidental space tourism, take that chance with both hands and don't let go!



Thumbs up for New Norcia!

Thank you, ESA, Inmarsat and New Norcia!

Tags: SocialSpaceWA, deep space, space, adventure, ESA

Chris Smart: Configuring Postfix to forward emails via localhost to secure, authenticated GMail

Tue, 2016-03-01 10:30

It’s pretty easy to configure postfix on a local Linux box to forward emails via an external mail server. This way you can just send via localhost in your programs or any system daemons and the rest is automatically handled for you.

Here’s how to forward via GMail using authentication and encryption on Fedora (23 at the time of writing). You should consider enabling two-factor authentication on your gmail account, and generate a password specifically for postfix.

Install packages:

sudo dnf install cyrus-sasl-plain postfix mailx

Basic postfix configuration:

#Only listen on IPv4, not IPv6. Omit if you want IPv6.
sudo postconf inet_protocols=ipv4

#Relay all mail through TLS enabled gmail
sudo postconf relayhost=[smtp.gmail.com]:587

#Use TLS encryption for sending email through gmail
sudo postconf smtp_use_tls=yes

#Enable authentication for gmail
sudo postconf smtp_sasl_auth_enable=yes

#Use the credentials in this file
sudo postconf smtp_sasl_password_maps=hash:/etc/postfix/sasl_passwd

#This file has the certificate to trust gmail encryption
sudo postconf smtp_tls_CAfile=/etc/ssl/certs/ca-bundle.crt

#Require authentication to send mail
sudo postconf smtp_sasl_security_options=noanonymous
sudo postconf smtp_sasl_tls_security_options=noanonymous

By default postfix listens on localhost, which is probably what you want. If you don’t for some reason, you could change the inet_interfaces parameter in the config file, but be warned that then anyone on your network (or potentially the Internet if it’s a public address) could send mail through your system. You may also want to consider using TLS on your postfix server.

By default, postfix sets myhostname to your fully-qualified domain name (check with hostname -f) but if you need to change this for some reason you can. For our instance it’s not really necessary because we’re forwarding email through a relay and not accepting locally.

Check that our configuration looks good:

sudo postconf -n

sudo postfix check

Create a password file using a text editor:

sudoedit /etc/postfix/sasl_passwd

The content should be in this form (the brackets are required, just replace your username@gmail.com address and password):

[smtp.gmail.com]:587 username@gmail.com:password

Hash the password for postfix:

sudo postmap /etc/postfix/sasl_passwd

Tail the postfix log:

sudo journalctl -f -u postfix.service &

Start the service (you should see it start up in the log):

sudo systemctl start postfix

Send a test email, replace username@gmail.com with your real email address:

echo "This is a test." | mail -s "test message" username@gmail.com

You should see the email go through the journalctl log and be forwarded, something like:

Feb 29 04:32:51 hostname postfix/smtp[4115]: 87BE620221: to=, relay=smtp.gmail.com[209.85.146.108]:587, delay=1.9, delays=0.04/0.06/0.55/1.3, dsn=2.0.0, status=sent (250 2.0.0 OK 1456720371 m32sm102235580ksj.52 - gsmtp)

David Rowe: Codec 2 Masking Model Part 3

Mon, 2016-02-29 15:31

I’ve started working on this project again. It’s important as progress will feed into both the HF and VHF work. It’s also kind of cool as it’s very unlike what anyone else is doing out there in speech coding land where it’s all LPC/LSP.

In Part 1 I described how the spectral amplitudes can be modelled by masking curves. The next step is to (i) decimate the model to a small number of samples and (ii) quantise those samples using a modest number of bits/frame.

This post describes the progress I have made in decimating the masking model parameters, the top yellow box here:

Analysis By Synthesis

Back when I was just a wee slip of a speech coder, I worked on Code Excited Linear Prediction (CELP). These codecs use a technique called Analysis by Synthesis (AbyS). To choose the speech model parameters, a bunch of them are tried, the resulting speech synthesised, and the results evaluated. The set of parameters that minimises the difference between the input speech and the synthesised output speech is transmitted to the decoder.

Trying every possible set of parameters keeps the encoder DSPs rather busy, and just getting them to run in real time was quite a challenge at the time (the late 1980s, on 10 MIPS DSPs).

Time goes by, and it’s now 30 years later. After a few dead ends, I’ve worked out a way to use AbyS to select the best 4 amplitude/frequency pairs to describe the speech spectrum. It works like this:

  1. In each frame there are L possible frequency positions, each position being the frequency of each harmonic. For each frequency there is a corresponding harmonic amplitude {Am}.
  2. At each harmonic position, I generate a masking function, and measure the error between that and the target spectral envelope.
  3. After all possible masking functions are evaluated, I choose the one that minimises the error to the target.
  4. The process is then repeated for the next stage, until we have used 4 masking functions in total. As each masking function is “fitted”, the total error gradually reduces.
  5. The output is four frequencies and four amplitudes. These must be sent to the decoder, where they can be used to generate a spectral envelope that approximates the original.
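As a rough Python sketch of the greedy search (the real code is in newamp.m; mask_fn here is a hypothetical helper that returns a masking curve in dB, sampled at the harmonic frequencies, for a given centre frequency and amplitude):

import numpy as np

def abys_search(target_dB, harmonic_freqs, mask_fn, n_points=4):
    # Greedily pick n_points frequency/amplitude pairs to fit target_dB
    chosen = []
    model = np.full_like(target_dB, -100.0)   # start from (near) silence
    for _ in range(n_points):
        best_err, best = None, None
        for freq, amp in zip(harmonic_freqs, target_dB):
            # try a mask centred on this harmonic, at the harmonic's amplitude
            trial = np.maximum(model, mask_fn(freq, amp))
            err = np.mean((target_dB - trial) ** 2)
            if best_err is None or err < best_err:
                best_err, best = err, (freq, amp, trial)
        freq, amp, model = best
        chosen.append((freq, amp))
    return chosen, model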

The following plots show AbyS in action for frame 50 of hts1a:

The red line is the spectral envelope defined by the harmonic amplitudes {Am}. Magenta is the model the decoder uses based on 4 frequency/amplitude samples, and found using AbyS. The black crosses indicate the frequencies found using AbyS.

Here is a plot of the error (actually Mean Square Error) for each mask position at each stage. As we add more samples to the model, the error compared to the target decreases. You can see a sharp dip in the first (blue top curve) around 2500Hz. That is the frequency chosen for the first mask sample. With the first sample fixed, we then search for the best position for the next sample (dark green), which occurs around 500Hz.

Samples

Here are some samples from the AbyS model compared to the Codec 2 700B and 1300 modes. The AbyS frequency/amplitude pairs are unquantised, but other parameters (synthetic phase, pitch, voicing, energy, frame update rate) are the same as Codec 2 700B/1300.

Sample      700B    1300    newamp AbyS
ve9qrp_10s  Listen  Listen  Listen
mmt1        Listen  Listen  Listen
vk5qi       Listen  Listen  Listen

At 700 bit/s we have 28 bits/frame available. Assuming 7 bits for pitch, 1 for voicing, and 5 for frame energy, that leaves us a budget of 15 bits/frame for the AbyS freq/amp pairs. At 1300 bit/s we have 52 bits/frame total, with 39 bits/frame available for the AbyS freq/amp pairs.
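The bit budget arithmetic, as a quick check in Python:

pitch, voicing, energy = 7, 1, 5
print(28 - (pitch + voicing + energy))   # 15 bits/frame left at 700 bit/s
print(52 - (pitch + voicing + energy))   # 39 bits/frame left at 1300 bit/s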

My goal is to get 1300 bit/s quality at 700 bit/s using the AbyS masking model technique. That would significantly boost the quality at 700 bits/s and let us use the COHPSK modem that works really well on HF channels.

Command Lines

newamp.m was configured with decimation_in_time on and set to 4 (40ms frame update rate with interpolation at 10ms intervals). This is the same frame update rate as the Codec 2 700B and 1300 modes. The phase0 model was enabled in c2sim to use synthetic phases and a single voicing bit, just like the Codec 2 modes. The synthetic phases were derived from an LPC model, but can also be synthesised from any amplitude spectra, such as the AbyS masking model.

octave:20> newamp_batch("../build_linux/src/vk5qi")

$ ./c2sim ../../raw/vk5qi.raw --amread vk5qi_am.out --phase0 --postfilter -o - | sox -t raw -r 8000 -s -2 - ~/Desktop/abys/vk5qi.wav

Happy Birthday to Me

This is my 300th blog post in 10 years! Yayyyyyy. That’s about one rather detailed post every two weeks. I started with this one in April 2006 just after I hung up my trousers and departed the corporate world.

This blog currently gets visited by 3500 unique IPs/day although it regularly hits 5000/day. I type posts up in Emacs, then paste them into WordPress for final editing. I draw figures in LibreOffice Impress, and plots using GNU Octave.

I quite like writing, it gives me a chance to exercise the teacher inside me. Reporting on what I have done helps get it straight in my head. If I solve a problem I figure the solution might be useful for others.

I hope this blog has been useful for you too.

Links

Codec 2 Masking Model Part 1

Codec 2 Masking Model Part 2

OpenSTEM: Leap Day Special: 50% off Family and Teacher Subscriptions!

Mon, 2016-02-29 11:31

To celebrate the quirkiness of the leap day, we’re doing a very special offer – just from 29 Feb 2016 until 1 Mar 2016!

Leap years are funny things. Did you know, for instance, that in Ireland and the United Kingdom, when it was expected that men would always ask women to marry them and not the other way around, there was a tradition that it was acceptable for women to ask men to marry them on Leap Year’s Day?

An OpenSTEM subscription provides free access to all our base PDF Resources for an entire year! This is many megabytes of awesome materials for you to use, full of colourful text and images. New PDFs are added all the time.

To make use of this limited offer, simply go to the special Leap Day page, or use the LEAPDAY coupon code when checking out one of the aforementioned subscriptions in the store. You will need to specifically add either the Private Family or One Teacher subscription to your cart.

You can also take a peek at what our different resources look like on our Curriculum Samples page.

Francois Marier: Extracting Album Covers from the iTunes Store

Mon, 2016-02-29 10:49

The iTunes store is a good source of high-quality album cover art. If you search for the album on Google Images, then visit the page and right-click on the cover image, you will get a 170 px by 170 px image. Change the 170x170 in the URL to one of the following values to get a higher resolution image:

  • 170x170
  • 340x340
  • 600x600
  • 1200x1200
  • 1400x1400
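As a trivial Python example of that URL rewrite (the URL here is a made-up placeholder showing the pattern, not a real artwork link):

# Hypothetical artwork URL; only the trailing WxH token matters
url = "http://example.mzstatic.com/image/thumb/Music/v4/some-id/170x170bb.jpg"
hires = url.replace("170x170", "1200x1200")
print(hires)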

Alternatively, use this handy webapp to query the iTunes search API and get to the source image directly.

Colin Charles: Amazon RDS updates February 2016

Mon, 2016-02-29 04:25

I think one of the big announcements that came out from the Amazon Web Services world in October 2015 was the fact that you could spin up instances of MariaDB Server on it. You would get MariaDB Server 10.0.17. As of this writing, you are still getting that (the MySQL shipping then was 5.6.23, and today you can create a 5.6.27 instance, but there were no .24/.25/.26 releases). I’m hoping that there’s active work going on to make MariaDB Server 10.1 available ASAP on the platform.

Just last week you would have noticed that Amazon has rolled out MySQL 5.7.10. The in-place upgrades are not available yet, so updating is via dump/reload or using read replicas. According to the forums, a lot of people have been wanting to use the JSON functionality.

Are you trying MySQL 5.7 on RDS? How about your usage of MariaDB Server 10.0 on RDS? I’d be interested in feedback either as a comment here, or via email.

Colin Charles: SCALE14x trip report

Sun, 2016-02-28 13:25

SCALE14x was held at Pasadena, Los Angeles this year from January 21-24 2016. I think it’s important to note that the venue changed from the Hilton LAX — this is a much bigger space, as the event is much bigger, and you’ll also notice that the expo hall has grown tremendously.

I had a talk in the MySQL track, and that was just one of over 180 talks. There were over 3,600 people attending, and it showed by the number of people coming by the MariaDB Corporation booth. I spent some time there with Rod Allen, Max Mether, and Kurt Pastore, and we picked up a good number of qualified leads. Of course it didn’t hurt that we were also giving away a Sphero BB-8 Droid.

The MySQL track room was generally always full. We learned some interesting tidbits, like Percona Server 5.7 going GA in February 2016 (true!), and quite a bit more, and there was a strong crowd at the MariaDB booth. People are definitely interested in MySQL 5.7’s JSON functionality.

The highlight of my talk, The MySQL Server Ecosystem in 2016, was that it sparked quite a good discussion on Twitter. It’s clear people are very interested in this topic, and there is much opportunity for writing about it!

The Mark Shuttleworth keynote

But there were other SCALE14x highlights, like the keynote by Mark Shuttleworth. It was generally a very moving keynote, and here are a few bits that I took as notes:

  • Technology changes lives
  • Society evolves because it becomes possible to live differently
  • New software moves too fast for distributions (6 months is too long). Look at Github. Speed vs. integration/trust/maintenance (the work of a distro)
  • snapcraft 2.0 (learn more about your first snap): reduce the amount of work to package software. Install software together transactionally.
An overview of a next-gen filesystem

Another talk I found interesting was the talk about bitrot, and filesystems like btrfs and ZFS. Best to read the presentation, and the article that was referenced.

Scaling GlusterFS at Facebook

A talk by Facebook is usually quite full, and I was interested in how they were using GlusterFS and if anyone has managed to successfully run a database over it yet (no). This was a talk given by Richard Wareing who’s been at Facebook for over 5 years:

  • GBs to many PBs, 100s of millions of files. QPS (FOPs) is 10s of billions per day; namespaces (volumes) are TBs to PBs, and bricks number in the 1000s. Version 3.3.x is when they started and now they use 3.6.x (they trail mainline closely)
  • Use cases: archival, backing data store for large scale applications, anything that doesn’t fit into other DBs
  • RAID6, controller is enterprise grade, storage is more consumer grade
  • Primarily using XFS, and are starting to use btrfs (about 20% of the fleet run on it)
  • Closed source: AntFarm, JD, and their IPv6 support (they removed IPv4 support). They have JSON statistic dumps which they contributed upstream.
  • a good mantra, pragmatism over correctness
Some expo hall chatter

There was plenty to follow up on post-SCALE14x, with many having questions about MariaDB Server, or wanting to buy services around it from MariaDB Corporation. I learned, for example, that Rackspace maintains their own IUS repository of packages they think their customers will find important to use. The idea behind it is that it’s Inline with Upstream Stable. Naturally you will find MariaDB Server there, as well as packages for all the engines like CONNECT.

I also learned that Stacki uses MariaDB Server for provisioning, as was evidenced by their github issue.

It’s incredibly rewarding to note that pretty much everyone knew what MariaDB Server was. It’s been a long journey (six years!) but it sure feels sweet. Ilan and his team put on a great SCALE, so I can’t wait to be back again next year.