Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

David Rowe: How Fourier Transforms Work

Thu, 2018-08-16 09:04

The best explanation of the Discrete Fourier Transform (DFT) I have ever seen comes from Bill Cowley on his Low SNR blog.

sthbrx - a POWER technical blog: Improving performance of Phoronix benchmarks on POWER9

Wed, 2018-08-15 14:22

Recently Phoronix ran a range of benchmarks comparing the performance of our POWER9 processor against the Intel Xeon and AMD EPYC processors.

We did well in the Stockfish, LLVM Compilation, Zstd compression, and the Tinymembench benchmarks. A few of my colleagues did a bit of investigating into some of the benchmarks where we didn't perform quite so well.

LBM / Parboil

The Parboil benchmarks are a collection of programs from various scientific and commercial fields that are useful for examining the performance and development of different architectures and tools. In this round of benchmarks Phoronix used the lbm benchmark: a fluid dynamics simulation using the Lattice-Boltzmann Method.

lbm is an iterative algorithm - the problem is broken down into discrete time steps, and at each time step a bunch of calculations are done to simulate the change in the system. Each time step relies on the results of the previous one.

The benchmark uses OpenMP to parallelise the workload, spreading the calculations done in each time step across many CPUs. The number of calculations scales with the resolution of the simulation.

Unfortunately, the resolution (and therefore the work done in each time step) is too small for modern CPUs with large numbers of SMT (simultaneous multi-threading) threads. OpenMP doesn't have enough work to parallelise and the system stays relatively idle. This means the benchmark scales relatively poorly, and is definitely not making use of the large POWER9 system.

Also, this benchmark is compiled without any optimisation. Recompiling with -O3 improves the results by 3.2x on POWER9.

x264 Video Encoding

x264 is a library that encodes videos into the H.264/MPEG-4 format. x264 encoding requires a lot of integer kernels doing operations on image elements. The math and vectorisation optimisations are quite complex, so Nick only had a quick look at the basics. The systems and environments (e.g. gcc version 8.1 for Skylake, 8.0 for POWER9) are not completely apples to apples so for now patterns are more important than the absolute results. Interestingly the output video files between architectures are not the same, particularly with different asm routines and compiler options used, which makes it difficult to verify the correctness of any changes.

All tests were run single threaded to avoid any SMT effects.

With the default upstream build of x264, Skylake is significantly faster than POWER9 on this benchmark (Skylake: 9.20 fps, POWER9: 3.39 fps). POWER9 contains some vectorised routines, so an initial suspicion is that Skylake's larger vector size may be responsible for its higher throughput.

Let's test our vector size suspicion by restricting Skylake to SSE4.2 code (with 128 bit vectors, the same width as POWER9). This hardly slows down the x86 CPU at all (Skylake: 8.37 fps, POWER9: 3.39 fps), which indicates it's not taking much advantage of the larger vectors.

So the next guess would be that x86 just has more and better optimised versions of costly functions (in the version of x264 that Phoronix used there are only six powerpc specific files compared with 21 x86 specific files). Without the time or expertise to dig into the complex task of writing vector code, we'll see if the compiler can help, and turn on autovectorisation (x264 compiles with -fno-tree-vectorize by default, which disables autovectorisation). Looking at a perf profile of the benchmark we can see that one costly function, quant_4x4x4, is not autovectorised. With a small change to the code, gcc does vectorise it, giving a slight speedup with the output file checksum unchanged (Skylake: 9.20 fps, POWER9: 3.83 fps).
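If you want to experiment with this yourself, autovectorisation can be switched back on at configure time. This is a sketch only: --extra-cflags is a standard x264 configure option, but whether it overrides the default -fno-tree-vectorize depends on where the build system places the flags.

$ ./configure --extra-cflags="-O3 -ftree-vectorize"
$ make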

We got a small improvement with the compiler, but it looks like we may have gains left on the table with our vector code. If you're interested in looking into this, we do have some active bounties for x264 (lu-zero/x264).

Test                                               Skylake   POWER9
Original - AVX256                                  9.20 fps  3.39 fps
Original - SSE4.2                                  8.37 fps  3.39 fps
Autovectorisation enabled, quant_4x4x4 vectorised  9.20 fps  3.83 fps

Nick also investigated running this benchmark with SMT enabled and across multiple cores, and it looks like the code is not scalable enough to feed 176 threads on a 44 core system. Disabling SMT in parallel runs actually helped, but there was still idle time. That may be another thing to look at, although it may not be such a problem for smaller systems.

Primesieve

Primesieve is a program and C/C++ library that generates all the prime numbers below a given number. It uses an optimised Sieve of Eratosthenes implementation.

The algorithm uses the L1 cache size as the sieve size for the core loop. This is an issue when we are running in SMT mode (aka more than one thread per core) as all threads on a core share the same L1 cache and so will constantly be invalidating each other's cache lines. As you can see in the table below, running the benchmark in single threaded mode is 30% faster than in SMT4 mode!

This means in SMT-4 mode the workload is about 4x too large for the L1 cache. A better sieve size to use would be the L1 cache size / number of threads per core. Anton posted a pull request to update the sieve size.
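The arithmetic behind the proposed fix is simple enough to sketch in a few lines of Octave (assuming a 32 kB L1 data cache per core; check your actual cache size with lscpu):

% proposed sieve sizing: share the L1 data cache between SMT threads
l1d_bytes = 32 * 1024;        % L1 data cache per core (assumed)
threads_per_core = 4;         % SMT4
sieve_bytes = l1d_bytes / threads_per_core   % 8 kB sieve per thread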

It is interesting that the best overall performance on POWER9 is with the patch applied and in SMT2 mode:

SMT level  baseline  patched
1          14.728s   14.899s
2          15.362s   14.040s
4          19.489s   17.458s

LAME

Despite its name, a recursive acronym for "LAME Ain't an MP3 Encoder", LAME is indeed an MP3 encoder.

Due to configure options not being parsed correctly, this benchmark is built without any optimisation regardless of architecture. We see a massive speedup by turning optimisations on, and a further 6-8% speedup by enabling USE_FAST_LOG (which is already enabled for Intel).

LAME                                          Duration  Speedup
Default                                       82.1s     n/a
With optimisation flags                       16.3s     5.0x
With optimisation flags and USE_FAST_LOG set  15.6s     5.3x
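As a sketch of the kind of workaround involved (illustrative only; the exact invocation depends on how the benchmark harness drives LAME's configure script), the flags can be forced through the standard autoconf variables:

$ ./configure CFLAGS="-O3 -DUSE_FAST_LOG"
$ make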

For more detail see Joel's writeup.

FLAC

FLAC is an alternative encoding format to MP3. But unlike MP3 encoding it is lossless! The benchmark here was encoding audio files into the FLAC format.

The key part of this workload is missing vector support for POWER8 and POWER9. Anton and Amitay submitted this patch series that adds in POWER specific vector instructions. It also fixes the configuration options to correctly detect the POWER8 and POWER9 platforms. With this patch series we see about a 3x improvement in this benchmark.

OpenSSL

OpenSSL is among other things a cryptographic library. The Phoronix benchmark measures the number of RSA 4096 signs per second:

$ openssl speed -multi $(nproc) rsa4096

Phoronix used OpenSSL-1.1.0f, which on POWER9 runs this benchmark at almost half the speed of mainline OpenSSL. Mainline OpenSSL has some powerpc multiplication and squaring assembly code which seems to be responsible for most of this speedup.

To see this for yourself, add these four powerpc specific commits on top of OpenSSL-1.1.0f:

  1. perlasm/ppc-xlate.pl: recognize .type directive
  2. bn/asm/ppc-mont.pl: prepare for extension
  3. bn/asm/ppc-mont.pl: add optimized multiplication and squaring subroutines
  4. ppccap.c: engage new multiplication and squaring subroutines

The following results were from a dual 16-core POWER9:

Version of OpenSSL     Signs/s  Speedup
1.1.0f                 1921     n/a
1.1.0f with 4 patches  3353     1.74x
1.1.1-pre1             3383     1.76x

SciKit-Learn

SciKit-Learn is a bunch of python tools for data mining and analysis (aka machine learning).

Joel noticed that the benchmark spent 92% of the time in libblas. Libblas is a very basic BLAS (basic linear algebra subprograms) library that python-numpy uses to do vector and matrix operations. The default libblas on Ubuntu is only compiled with -O2. By compiling with -Ofast and using alternative BLAS libraries that have powerpc optimisations (such as libatlas or libopenblas), we see big improvements in this benchmark:

BLAS used       Duration  Speedup
libblas -O2     64.2s     n/a
libblas -Ofast  36.1s     1.8x
libatlas        8.3s      7.7x
libopenblas     4.2s      15.3x
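On Ubuntu, the BLAS implementation can usually be swapped without rebuilding python-numpy, because the libraries are managed as alternatives. A sketch, assuming an OpenBLAS package is installed; the exact alternative name varies between releases and architectures:

$ sudo update-alternatives --config libblas.so.3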

You can read more details about this here.

Blender

Blender is a 3D graphics suite that supports image rendering, animation, simulation and game creation. On the surface it appears that Blender 2.79b (the distro package version used by Phoronix's system/blender-1.0.2 test) failed to use more than 15 threads, even when "-t 128" was added to the Blender command line.

It turns out that even though this benchmark was supposed to be run on CPUs only (you can choose to render on CPUs or GPUs), the GPU file was always being used. The GPU file is configured with a very large tile size (256x256) - which is fine for GPUs but not great for CPUs. The image size (1280x720) to tile size ratio limits the number of jobs created, and therefore the number of threads used.

To obtain a realistic CPU measurement with more than 15 threads you can force the use of the CPU file by overwriting the GPU file with the CPU one:

$ cp ~/.phoronix-test-suite/installed-tests/system/blender-1.0.2/benchmark/pabellon_barcelona/pavillon_barcelone_cpu.blend ~/.phoronix-test-suite/installed-tests/system/blender-1.0.2/benchmark/pabellon_barcelona/pavillon_barcelone_gpu.blend

As you can see in the image below, now all of the cores are being utilised!

Fortunately this has already been fixed in pts/blender-1.1.1. Thanks to the report by Daniel it has also been fixed in system/blender-1.1.0.

Pinning the pts/blender-1.0.2, Pabellon Barcelona, CPU-Only test to a single 22-core POWER9 chip (sudo ppc64_cpu --cores-on=22) and two POWER9 chips (sudo ppc64_cpu --cores-on=44) shows a huge speedup:

Benchmark                                    Duration (deviation over 3 runs)  Speedup
Baseline (GPU blend file)                    1509.97s (0.30%)                  n/a
Single 22-core POWER9 chip (CPU blend file)  458.64s (0.19%)                   3.29x
Two 22-core POWER9 chips (CPU blend file)    241.33s (0.25%)                   6.25x

tl;dr

Some of the benchmarks where we don't perform as well as Intel are those where the benchmark has inline assembly for x86 but uses generic C compiler generated assembly for POWER9. We could probably benefit from some more powerpc optimised functions.

We also found a couple of things that should result in better performance for all three architectures, not just POWER.

A summary of the performance improvements we found:

Benchmark     Approximate Improvement
Parboil       3x
x264          1.1x
Primesieve    1.1x
LAME          5x
FLAC          3x
OpenSSL       2x
SciKit-Learn  7-15x
Blender       3x

There is obviously room for more improvements, especially with the Primesieve and x264 benchmarks, but it would be interesting to see a re-run of the Phoronix benchmarks with these changes.

Thanks to Anton, Daniel, Joel and Nick for the analysis of the above benchmarks.

Michael Still: city2surf 2018 wrap up

Mon, 2018-08-13 15:00

city2surf 2018 was yesterday, so how did the race go? First off, thanks to everyone who helped out with my fund raising for the Black Dog Institute — you raised nearly $2,000 AUD for this important charity, which is very impressive. Thanks for everyone’s support!

city2surf is 14kms, with 166 meters of vertical elevation gain. For the second year running I was in the green start group, which is for people who have previously finished the event in less than 90 minutes. There is one start group before this, red, which is for people who can finish in less than 70 minutes. In reality I think it’s unlikely that I’ll ever make it to red — it would require me to shave about 30 seconds per kilometre off my time to just scrape in, and I think that would be hard to do.

Training for city2surf last year I tore my right achilles, so I was pretty much starting from scratch for this year’s event — at the start of the year I could run about 50 meters before I had issues. Luckily I was referred to an excellent physiotherapist who has helped me build back up safely — I highly recommend Cameron at Southside Physio Therapy if you live in Canberra.

Overall I ran a lot in training for this year — a total of 540 kilometres. I was also a lot more consistent than in previous years, which is something I’m pretty proud of given how cold winters are in Canberra. Cold weather, short days, and getting sick seem to always get in the way of winter training for me.

On the day I was worried about being cold while running, but that wasn’t an issue. It was about 10 degrees when we started and maybe a couple of degrees warmer than that at the end. The maximum for the day was only 16, which is cold for Sydney at this time of year. There was a tiny bit of spitting rain, but nothing serious. Wind was the real issue — it was very windy at the finish, and I think if it had been like that for the entire race it would have been much less fun.

That said, I finished in 76:32, which is about three minutes faster than last year and a personal best. Overall, an excellent experience and I’ll be back again.

The post city2surf 2018 wrap up appeared first on Made by Mikal.

Matthew Oliver: Keystone Federated Swift – Final post coming

Fri, 2018-08-10 11:05

This is a quick post to say the final topology post is coming. It’s currently in draft form and I hope to post it soon. I just realised it’s been a while, so I thought I’d better give an update.

The last post goes into what auth does, what is happening in keystone, and what needs to happen to really make this topology work, and then talks about the brittle POC I created to have something to demo. I’ll be discussing other, better options/alternatives. But all this means it’s become much more detailed than I originally expected. I hope to get it up by mid next week.

Thanks for waiting.

David Rowe: Testing a RTL-SDR with FSK on HF

Tue, 2018-08-07 15:04

There’s a lot of discussion about ADC resolution and SDRs. I’m trying to develop a simple HF data system that uses RTL-SDRs in “Direct Sample” mode. This blog post describes how I measured the Minimum Detectable Signal (MDS) of my 100 bit/s 2FSK receiver, and a spreadsheet model of the receiver that explains my results.

Noise in a receiver comes from all sorts of places. There are two sources of concern for this project – HF band noise and ADC quantisation noise. On lower HF frequencies (7MHz and below) I’m guess-timating signals weaker than -100dBm will be swamped by HF band noise. So I’d like a receiver that has a MDS anywhere under that. The big question is, can we build such a receiver using a low cost SDR?

Experimental Set Up

So I hooked up the experimental setup in the figure below:

The photo shows the actual hardware. It was spaced apart a bit further for the actual test:

Rpitx is sending 2FSK at 100 bit/s and about 14dBm Tx power. It then gets attenuated by some fixed and variable attenuators to beneath -100dBm. I fed the signal into a RTL-SDR plugged into my laptop, demodulated the 2FSK signal, and measured the Bit Error Rate (BER).

I tried a command line receiver:

rtl_sdr -s 1200000 -f 7000000 -D 2 - | csdr convert_u8_f | csdr shift_addition_cc `python -c "print float(7000000-7177000)/1200000"` | csdr fir_decimate_cc 25 0.05 HAMMING | csdr bandpass_fir_fft_cc 0 0.1 0.05 | csdr realpart_cf | csdr convert_f_s16 | ~/codec2-dev/build_linux/src/fsk_demod 2 48000 100 - - | ~/codec2-dev/build_linux/src/fsk_put_test_bits -

and also gqrx, using this configuration:

with the very handy UDP output option sending samples to the FSK demodulator:

$ nc -ul 7355 | ./fsk_demod 2 48000 100 - - | ./fsk_put_test_bits -

Both versions demodulate the FSK signal and print the bit error rate in real time. I really love the csdr tools, and gqrx is also great for a more visual look at the signal and the ability to monitor the audio.

For these tests the gqrx receiver worked best. It attenuated nearby interferers better (i.e. better sideband rejection) and worked at lower Rx signal levels. It also has a “hardware AGC” option that I haven’t worked out how to enable in the command line tools. However for my target use case I’ll eventually need a command line version, so I’ll have to improve it some time.

The RF Gods are smiling on me today. This experimental set up actually works better than previous bench tests where we needed to put the Tx in another room to get enough isolation. I can still get 10dB steps from the attenuator at -120dBm (ish) with the Tx a few feet from the Rx. It might be the ferrites on the cable to the switched attenuator.

I tested the ability to get solid 10dB steps using a CW (continuous sine wave) signal using the “test” utility in rpitx. FSK bounces about too much, especially with the narrow spectrum analyser settings I need to measure weak signals. The configuration of the Rigol DSA815 I used to see the weak signals is described at the end of this post on the SM2000.

The switched attenuator just has 10dB steps. I am getting zero bit errors at -115dBm, and the modem fell over on the next step (-125dBm). So the MDS is somewhere in between.

Model

This spreadsheet (click for the file) models the receiver:

By poking the RTL-SDR with my signal generator, and plotting the output waveforms, I worked out that it clips at around -30dBm (a respectable S9+40dB). So that’s the strongest signal it can handle, at least using the rtl_sdr command line options I can find. Even though it’s an 8 bit ADC I figure there are 7 magnitude bits (the samples are unsigned chars). So we get 6dB per bit or 42dB dynamic range.

This lets us work out the power of the quantisation noise (42dB beneath -30dBm). This noise power is effectively spread across the entire bandwidth of the ADC, a little bit of noise power for each Hz of bandwidth. The bandwidth is set by the sample rate of the RTL-SDR’s internal ADC (28.8 MHz). So now we can work out No (N-nought), the power/unit Hz of bandwidth. It’s like a normalised version of the receiver “noise floor”. An ADC with more bits would have less quantisation noise.
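The quantisation noise model is simple enough to check in a few lines of Octave (a sketch of the spreadsheet, using the numbers above):

clip_dbm = -30;               % level where the RTL-SDR clips
dr_db = 7*6;                  % 7 magnitude bits at 6dB/bit = 42dB dynamic range
nq_dbm = clip_dbm - dr_db;    % total quantisation noise power (-72dBm)
Fs = 28.8e6;                  % internal ADC sample rate sets the noise bandwidth
No = nq_dbm - 10*log10(Fs)    % about -146.6 dBm/Hz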

Some modem working follows, which gives us an estimate of the MDS. The MDS of -117.6dBm is between my two measurements above, so we have good agreement between this model and the experimental results. Cool!
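Continuing the sketch, the modem working amounts to adding the bit rate and the Eb/No the demodulator needs (9dB for around 1% BER, the figure used elsewhere in this series):

No = -146.6;                        % dBm/Hz, from the calculation above
Rb = 100;                           % bit rate of the 2FSK modem
EbNo_db = 9;                        % Eb/No required for about 1% BER
MDS = No + 10*log10(Rb) + EbNo_db   % about -117.6 dBm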

Falling through the Noise Floor

The “noise floor” depends on what you are trying to receive. If you are listening to wide 10kHz wide AM signal, you will be slurping up 10kHz of noise, and get a noise power of:

-146.6+10*log10(10000) = -106.6 dBm

So if you want that AM signal to have a SNR of 20dB, you need a received signal level of -86.6dBm to get over the quantisation noise of the receiver.

I’m trying to receive low bit rate FSK which can handle a lot more noise before it falls over, as indicated in the spreadsheet above. So it’s more robust to the quantisation noise and we can have a much lower MDS.

The “noise floor” is not some impenetrable barrier. It’s just a convention, and needs to be considered relative to the bandwidth and type of the signal you want to receive.

One area I get confused about is noise bandwidth. In the model above I assume the noise bandwidth is the same as the ADC sample rate. Please feel free to correct me if that assumption is wrong! With IQ signals we have to consider negative frequencies and complex to real conversions, which all affect noise power. I muddle through this when I play with modems but if anyone has a straightforward explanation of noise bandwidth I’d love to hear it!

Blocking Tests

At the suggestion of Mark, I repeated the MDS tests with a strong CW interferer from my signal generator. I adjusted the Sig Gen and Rx levels until I could just detect the FSK signal. Here are the results, all in dBm:

Sig Gen  2FSK Rx MDS  Difference
-51      -116         65
-30      -96          66

The FSK signal was at 7.177MHz. I tried the interferer at 7MHz (177 kHz away) and 7.170MHz (just 7 kHz away) with the same blocking results. I’m pretty impressed that the system can continue to work with a 65dB stronger signal just 7kHz away.

So the interferer desensitises the receiver somewhat. When listening to the signal on gqrx, I can hear the FSK signal get much weaker when I switch the Sig Gen on. However it often keeps demodulating just fine – FSK is not sensitive to amplitude.

I can also hear spurious tones appearing; the quantisation noise isn’t really white noise any more when a strong signal is present. Makes me feel like I still have a lot to learn about this SDR caper, especially direct sampling receivers!

As with the MDS results – my blocking results are likely to depend on the nature of the signal I am trying to receive. For example a SSB signal or higher data rate might have different blocking results.

Still, 65dB rejection on a $27 radio (at least for my test modem signal) is not too shabby. I can get away with a S9+40dB (-30dBm) interferer just 7kHz away with my rx signal near the limits of where I want to detect (-96dBm).

Conclusions

So I figure for the lower HF bands this receiver’s performance is OK – the ADC quantisation noise isn’t likely to impact performance and the strong signal performance is good enough. An overload of -30dBm (S9+40dB) is also acceptable given the use case is remote communications where there is unlikely to be any nearby transmitters in the input filter passband.

The 100 bit/s signal is just a starting point. I can use that as a reference to help me understand how different modems and bit rates will perform. For example I can increase the bit rate to say 1000 bit/s 2FSK, increasing the MDS by 10dB, and still be well beneath my -100dBm MDS target. Good.

If it does fall over in the real world due to MDS performance, overload or blocking, I now have a good understanding of how it works so it will be possible to engineer a solution.

For example a pre-amp with X dB gain would lower the quantisation noise power by X dB and allow us to detect weaker signals, but then the Rx would overload at -30-X dBm. If we have strong signal problems but our target signal is also pretty strong we can insert an attenuator. If we drop in another SDR I can recompute the quantisation noise from its specs, and estimate how well it will perform.

Reading Further

Rpitx and 2FSK, first part in this series.
Spreadsheet used to do the working for the quantisation noise.

Gary Pendergast: WordPress’ Gutenberg: The Long View

Mon, 2018-08-06 21:04

WordPress has been around for 15 years. 31.5% of sites use it, and that figure continues to climb. We’re here for the long term, so we need to plan for the long term.

Gutenberg is being built as the base for the next 15 years of WordPress. The first phase, replacing the post editing screen with the new block editor, is getting close to completion. That’s not to say the block editor will stop iterating and improving with WordPress 5.0, rather, this is where we feel confident that we’ve created a foundation that we can build everything else upon.

Let’s chat about the long-term vision and benefit of the Gutenberg project.

Francois Marier: Mercurial commit series in Phabricator using Arcanist

Thu, 2018-08-02 02:18

Phabricator supports multi-commit patch series, but it's not yet obvious how to do it using Mercurial. So this is the "hg" equivalent of this blog post for git users.

Note that other people have written tools and plugins to do the same thing and that an official client is coming soon.

Initial setup

I'm going to assume that you've setup arcanist and gotten an account on the Mozilla Phabricator instance. If you haven't, follow this video introduction or the excellent documentation for it (Bryce also wrote additional instructions for Windows users).

Make a list of commits to submit

First of all, use hg histedit to make a list of the commits that are needed:

pick ee4d9e9fcbad 477986 Bug 1461515 - Split tracking annotations from tracki...
pick 5509b5db01a4 477987 Bug 1461515 - Fix and expand tracking annotation tes...
pick e40312debf76 477988 Bug 1461515 - Make TP test fail if it uses the wrong...

Create Phabricator revisions

Now, create a Phabricator revision for each commit (in order, from earliest to latest):

~/devel/mozilla-unified (annotation-list-1461515)$ hg up ee4d9e9fcbad
5 files updated, 0 files merged, 0 files removed, 0 files unresolved
(leaving bookmark annotation-list-1461515)

~/devel/mozilla-unified (ee4d9e9)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
SKIP STAGING Phabricator does not support staging areas for this repository.
Created a new Differential revision:
Revision URI: https://phabricator.services.mozilla.com/D2484
Included changes:
M modules/libpref/init/all.js
M netwerk/base/nsChannelClassifier.cpp
M netwerk/base/nsChannelClassifier.h
M toolkit/components/url-classifier/Classifier.cpp
M toolkit/components/url-classifier/SafeBrowsing.jsm
M toolkit/components/url-classifier/nsUrlClassifierDBService.cpp
M toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
M toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
M xpcom/base/ErrorList.py

~/devel/mozilla-unified (ee4d9e9)$ hg up 5509b5db01a4
3 files updated, 0 files merged, 0 files removed, 0 files unresolved

~/devel/mozilla-unified (5509b5d)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
SKIP STAGING Phabricator does not support staging areas for this repository.
Created a new Differential revision:
Revision URI: https://phabricator.services.mozilla.com/D2485
Included changes:
M toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
M toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
M toolkit/components/url-classifier/tests/mochitest/trackingRequest.html

~/devel/mozilla-unified (5509b5d)$ hg up e40312debf76
2 files updated, 0 files merged, 0 files removed, 0 files unresolved

~/devel/mozilla-unified (e40312d)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
SKIP STAGING Phabricator does not support staging areas for this repository.
Created a new Differential revision:
Revision URI: https://phabricator.services.mozilla.com/D2486
Included changes:
M toolkit/components/url-classifier/tests/mochitest/classifiedAnnotatedPBFrame.html
M toolkit/components/url-classifier/tests/mochitest/test_privatebrowsing_trackingprotection.html

Link all revisions together

In order to ensure that these commits depend on one another, click on that last phabricator.services.mozilla.com link, then click "Related Revisions" then "Edit Parent Revisions" in the right-hand side bar and then add the previous commit (D2485 in this example).

Then go to that parent revision and repeat the same steps to set D2484 as its parent.

Amend one of the commits

As it turns out my first patch wasn't perfect and I needed to amend the middle commit to fix some test failures that came up after pushing to Try. I ended up with the following commits (as viewed in hg histedit):

pick ee4d9e9fcbad 477986 Bug 1461515 - Split tracking annotations from tracki...
pick c24f4d9e75b9 477992 Bug 1461515 - Fix and expand tracking annotation tes...
pick 1840f68978a7 477993 Bug 1461515 - Make TP test fail if it uses the wrong...

which highlights that the last two commits changed and that I would have two revisions (D2485 and D2486) to update in Phabricator.

However, since the only reason the third patch has a different commit hash is that its parent changed, there's no need to upload it again to Phabricator. Lando doesn't care about the parent hash and relies instead on the parent revision ID. It essentially applies diffs one at a time.

The trick was to pass the --update DXXXX argument to arc diff:

~/devel/mozilla-unified (annotation-list-1461515)$ hg up c24f4d9e75b9
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
(leaving bookmark annotation-list-1461515)

~/devel/mozilla-unified (c24f4d9)$ arc diff --no-amend --update D2485
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
SKIP STAGING Phabricator does not support staging areas for this repository.
Updated an existing Differential revision:
Revision URI: https://phabricator.services.mozilla.com/D2485
Included changes:
M browser/base/content/test/general/trackingPage.html
M netwerk/test/unit/test_trackingProtection_annotateChannels.js
M toolkit/components/antitracking/test/browser/browser_imageCache.js
M toolkit/components/antitracking/test/browser/browser_subResources.js
M toolkit/components/antitracking/test/browser/head.js
M toolkit/components/antitracking/test/browser/popup.html
M toolkit/components/antitracking/test/browser/tracker.js
M toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
M toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
M toolkit/components/url-classifier/tests/mochitest/trackingRequest.html

Note that changing the commit message will not automatically update the revision details in Phabricator. This has to be done manually in the Web UI if required.

Simon Lyall: Audiobooks – July 2018

Wed, 2018-08-01 19:04

The Return of Sherlock Holmes by Sir Arthur Conan Doyle

I switched to Stephen Fry for this collection. Very happy with his reading of the stories. He does both standard and “character” voices well and is not distracting. 8/10

Roughing It by Mark Twain

A bunch of anecdotes and stories from Twain’s travels in Nevada & other areas in the American West. Quality varies. Much good but some stories fall flat. Verbose writing (as was the style at the time…) 6/10

Asteroids Hunters by Carrie Nugent

Spin off of a Ted talk. Covers hunting for Asteroids (by the author and others) rather than the Asteroids themselves. Nice level of info in a short (2h 14m) book. 7/10

Things You Should Already Know About Dating, You F*cking Idiot by Ben Schwartz & Laura Moses

100 dating tips (roughly in order of use) in 44 minutes. Amusing and useful enough. 7/10

Protector – A Classic of Known Space by Larry Niven

Filling in a spot in Niven’s universe. Better than many of his Known Space stories. Great background on the Pak in Hard Core package. Narrator gave everybody strong Australian accents for some reason. 7/10

The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future by Kevin Kelly

Good book on 12 long term “deep trends” ( filtering, remixing, tracking, etc ) and how they have worked over the last and next few decades (especially in the context of the Internet). Pretty interesting and mostly plausible. 7/10

Caesar’s Last Breath: Decoding the Secrets of the Air Around Us by Sam Kean

Works it’s way though the gases in & evolution of earth’s atmosphere, their discovery and several interesting asides. Really enjoyed this, would have enjoyed 50% more of it. 9/10

David Rowe: Rpitx and 2FSK

Tue, 2018-07-31 17:04

This post describes tests to evaluate the use of rpitx as a 2FSK transmitter.

About 10 years ago I worked on the Village Telco – a system for community telephone networks based on WiFi. At the time we used WiFi SoCs which were open source at the OS level, but the deeper layers were completely opaque which led (at least for me) to significant frustration.

Since then I’ve done a lot of work on the physical layer, in particular building my skills in modem implementation. Low cost SDR has also become a thing, and CPU power has dropped in price. The physical layer is busy migrating from hardware to software. Software can, and should, be free.

So now we can build open source radios. No more chip sets and closed source.

Sadly, some other aspects haven’t changed. In many parts of the world it’s still very difficult (and expensive) to move an IP packet over the “last 100 miles”. So, armed with some new skills and technology, I feel it’s time for another look at developing world and humanitarian communications.

I’m exploring the use of rpitx as the heart of HF and UHF data terminals. This clever piece of software turns a Raspberry Pi into a radio transmitter. Evariste (F5OEO), the author of rpitx, has recently developed the v2beta branch that has improved performance, and includes some support for FreeDV waveforms.

Running Tests

I have some utilities for the Codec 2 FSK modem that generate frames of test bits. I modified the fsk_mod_ext_vco utility to drive a utility Evariste kindly wrote for FreeDV experiments with rpitx. So here are the command lines that generate 600 seconds (10 minutes) of 100 bit/s 2FSK test frames, and transmit them out of rpitx, using a 7.177 MHz carrier frequency:

$ ./fsk_get_test_bits - 60000 | ./fsk_mod_ext_vco - ~/rpitx/2fsk_100.f 2 --rpitx 800 100
~/rpitx $ sudo ./freedv 2fsk_100.f 7177000

On the receive side I used my FT-817 connected to FreeDV to sample the signal as a wave file, then fed the signal into C and Octave versions of the demodulator. The RPi is top left at rear, the HackRF in the foreground was used initially as a reference transmitter:

Results

It works really well! One of the most important tests for any modem is adding calibrated noise and measuring the Bit Error Rate (BER). I tried Eb/No = 9dB (-5.7dB SNR), and obtained 1% BER, right on theory for a 2FSK modem:

$ ./cohpsk_ch ~/Desktop/2fsk_100_rx_rpi.wav - -26 | ./fsk_demod 2 8000 100 - - | ./fsk_put_test_bits -
FSK BER 0.009076, bits tested 11900, bit errors 108
SNR3k(dB): -5.62

This line takes the sample wave file from the FT-817, adds some noise using the cohpsk_ch utility, then pipes the signal to the FSK demodulator, before counting the bit errors in the test frames. I adjusted the 3rd “No” parameter in cohpsk_ch by hand until I obtained about 1% BER, then checked the SNR against the theoretical SNR for an Eb/No of 9dB.
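Both numbers are easy to sanity check in Octave. The theoretical BER for non-coherent 2FSK is 0.5*exp(-Eb/No/2) in linear terms, and the SNR in a 3000 Hz noise bandwidth follows from the bit rate (a quick check, not part of the test pipeline):

EbNo_db = 9; EbNo = 10^(EbNo_db/10);
ber = 0.5*exp(-EbNo/2)              % about 0.0094, the ~1% BER measured
Rb = 100; B = 3000;
snr_db = EbNo_db + 10*log10(Rb/B)   % about -5.8dB, close to the reported -5.62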

Here are some plots from the Octave version of the demodulator, with no channel noise applied. The first plot shows the time and frequency domain signal at the input of the demodulator. I set the shift at 800 Hz, so you can see one tone at 800 Hz, the other at 1600 Hz:

Here is the output of the FSK demodulator filters (red and blue for the two filter outputs). We can see a nice separation, but the red “high” level is a bit lower than blue. Red is probably the 1600 Hz tone; the FT-817 has a gentle low pass filter in its output, reducing higher frequency tones by a few dB.

There is some modulation on the filter outputs, which I think is explained by the timing offset below:

The sharp jump at 160 samples is expected, that’s normal behaviour for modem timing estimators, where a sawtooth shape is expected. However note the undulation of the timing estimate as it ramps up, indicating the modem sample clock has a little jitter. I guess this is an artefact of rpitx. However the BER results are fine and the average sample clock offset (not shown) is about 50ppm which is better than many sound cards I have seen in use with FreeDV. Many of our previous modem transmitters (e.g. the first pass at Wenet) started with much larger sample clock offsets.

A common question about rpitx is “how clean is the spectrum”. Here is a sample from my Rigol DSA815, with a span of 1MHz around the 7.177 MHz tx frequency. The Tx power is actually 11dBm, but the marker was bouncing around due to FSK modulation. On a wider span all I can see are the harmonics one would expect of a square wave signal. Just like any other transmitter, these would be removed by a simple low pass filter.

So at 7.177 MHz it’s clean to the limits of my spec analyser, and exceeds spectral purity requirements (-43dBc + 10log(Pwatts)) for Amateur (and I presume other service) communications.

Linux Users of Victoria (LUV) Announce: LUV August 2018 Main Meeting: PF on OpenBSD and fail2ban

Tue, 2018-07-31 15:03
Start: Aug 7 2018 19:00
End: Aug 7 2018 21:00
Location: Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053
Link: http://www.melbourne.vic.gov.au/community/hubs-bookable-spaces/kathleen-syme-lib...

PLEASE NOTE CHANGED START TIME

7:00 PM to 9:00 PM Tuesday, August 7, 2018
Training Room, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

Many of us like to go for dinner nearby after the meeting, typically at Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.


Linux Users of Victoria (LUV) Announce: LUV August 2018 Workshop: Dual-Booting Windows 10 & Ubuntu

Tue, 2018-07-31 15:03
Start: Aug 25 2018 12:30
End: Aug 25 2018 16:30
Location: Infoxchange, 33 Elizabeth St. Richmond
Link: http://luv.asn.au/meetings/map

UPDATE: Rescheduled due to a conflict on the original date - please note new date!

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


Francois Marier: Recovering from a botched hg histedit on a mercurial bookmark

Fri, 2018-07-27 15:42

If you are in the middle of a failed Mercurial hg histedit, you can normally do hg histedit --abort to cancel it, though sometimes you also have to reach out for hg update -C. This is the equivalent of git's git rebase --abort and it does what you'd expect.

However, if you go ahead and finish the history rewriting and only notice problems later, it's not as straightforward. With git, I'd look into the reflog (git reflog) for the previous value of the branch pointer and simply git reset --hard to that value. Done.

Based on a Stack Overflow answer, I thought I could undo my botched histedit using:

hg unbundle ~/devel/mozilla-unified/.hg/strip-backup/47906774d58d-ae1953e1-backup.hg

but it didn't seem to work. Maybe it doesn't work when using bookmarks.

Here's what I ended up doing to fully revert my botched Mercurial histedit. If you know of a simpler way to do this, feel free to leave a comment.

Collecting the commits to restore

The first step was to collect all of the commit hashes I needed to restore. Luckily, I had submitted my patch to Try before changing it and so I was able to look at the pushlog to get all of the commits at once.

If I didn't have that, I could also go to the last bookmark I pushed and click on parent commits until I hit the first one that's not mine. Then I could collect all of the commits using the browser's back button:

For that last one, I had to click on the changeset commit hash link in order to get the commit hash instead of the name of the bookmark (/rev/hashstore-crash-1434206).

Recreating the branch from scratch

This is what I did to export patches for each commit and then re-import them one after the other:

for c in 3c31c543e736 7ddfe5ae2fa6 c04b620136c7 2d1bf04fd155 e194843f5b7a 47906774d58d f6a657bca64f 0d7a4e1c0079 976e25b49758 a1a382f2e773 b1565f3aacdb 3fdd157bb698 b1b041990577 220bf5cd9e2a c927a5205abe ; do
    hg export $c > ~/$c.patch
done

hg up ff8505d177b9
hg bookmarks hashstore-crash-1434206-new

for c in 3c31c543e736 7ddfe5ae2fa6 c04b620136c7 2d1bf04fd155 e194843f5b7a 47906774d58d f6a657bca64f 0d7a4e1c0079 976e25b49758 a1a382f2e773 b1565f3aacdb 3fdd157bb698 b1b041990577 220bf5cd9e2a c927a5205abe 4140cd9c67b0 ; do
    hg import ~/$c.patch
done

Copying a bookmark

As an aside, if you want to make a copy of a bookmark before you do an hg histedit, it's not as simple as:

hg up hashstore-crash-1434206
hg bookmarks hashstore-crash-1434206-copy
hg up hashstore-crash-1434206

While that seemed to work at the time, the histedit ended up messing with both of them.

An alternative that works is to push the bookmark to another machine. That way if worst comes to worst, you can hg clone from there and hg export the commits you want to re-import using hg import.

Ian Wienand: Local qemu/kvm virtual machines, 2018

Fri, 2018-07-27 15:02

For work I run a personal and a work VM on my laptop. When I was at VMware I dogfooded internal builds of Workstation, which worked well, but it was always a challenge to keep its additions building against the latest kernels. About 5 and a half years ago, the only practical alternative option was VirtualBox. IIRC SPICE maybe didn't even exist or was very early, and while VNC is OK to fiddle with something, it is completely impractical for primary daily use.

VirtualBox is fine, but there is the promised land of all the great features of qemu/kvm and many recent improvements in 3D integration always calling. I'm trying all this on my Fedora 28 host, with a Fedora 28 guest (which has been in-place upgraded since Fedora 19), so everything is pretty recent. Periodically I try this conversion again, but, spoiler alert, have not yet managed to get things quite right.

As I happened to close an IRC window, somehow my client seemed to crash X11. How odd ... so I thought, everything has just disappeared anyway; I might as well try switching again.

Image conversion has become much easier. My primary VM has a number of snapshots, so I used the VirtualBox GUI to clone the VM and followed the prompts to create the clone with squashed snapshots. Then simply convert the VDI to a RAW image with

$ qemu-img convert -p -f vdi -O raw image.vdi image.raw

Note if you forget the progress meter, send the pid a SIGUSR1 to get it to spit out its progress.
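For example, assuming qemu-img is the only matching process:

$ kill -USR1 $(pidof qemu-img)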

virt-manager has come a long way too. Creating a new VM was trivial. I wanted to make sure I was using all the latest SPICE gl etc., stuff. Here I hit some problems with what seemed to be permission denials on drm devices before even getting the machine started. Something suggested using libvirt in session mode, with the qemu:///session URL -- which seemed more like what I want anyway (a VM for only my user). I tried that, put the converted raw image in my home directory and the VM would boot. Yay!
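To convince yourself which daemon you are talking to, virsh accepts the same URI (a quick sanity check; session mode shows only your own VMs):

$ virsh --connect qemu:///session list --all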

It was a bit much to expect it to work straight away; while GRUB did start, it couldn't find the root disks. In hindsight, you should probably generate a non-host specific initramfs before converting the disk, so that it has a larger selection of drivers to find the boot devices (especially the modern virtio drivers). On Fedora that would be something like

sudo dracut --no-hostonly --regenerate-all -f

As it turned out, I "simply" attached a live-cd and booted into that, then chrooted into my old VM and regenerated the initramfs for the latest kernel manually. After this the system could find the LVM volumes in the image and would boot.

After a fiddly start, I was hopeful. The guest kernel dmesg DRM sections showed everything was looking good for 3D support, along with the glxinfo showing all the virtio-gpu stuff looking correct. However, I could not get what I hoped was trivial automatic window resizing happening no matter what. After a bunch of searching, ensuring my agents were running correctly, etc. it turns out that has to be implemented by the window-manager now, and it is not supported by my preferred XFCE (see https://bugzilla.redhat.com/show_bug.cgi?id=1290586). Note you can do this manually with xrandr --output Virtual-1 --auto to get it to resize, but that's rather annoying.

I thought that it is 2018 and I could live with Gnome, so installed that. Then I tried to ping something, and got another selinux denial (on the host) from qemu-system-x86 creating icmp_socket. I am guessing this has to do with the interaction between libvirt session mode and the usermode networking device (filed https://bugzilla.redhat.com/show_bug.cgi?id=1609142). I figured I'd limp along with ICMP and look into details later...

Finally when I moved the window to my portrait-mode external monitor, the SPICE window expanded but the internal VM resolution would not expand to the full height. It looked like it was taking the height from the portrait-orientation width.

Unfortunately, forced swapping of environments and still having two/three non-trivial bugs to investigate exceeded my practical time to fiddle around with all this. I'll stick with VirtualBox for a little longer; 2020 might be the year!

David Rowe: Prototyping FreeDV 2200

Thu, 2018-07-26 11:04

In the previous post on Codec 2 2200 I described a prototype speech codec for a new, higher quality FreeDV mode. In this post I describe the modem development and initial over the air tests.

Bandwidth, not SNR limited

Turns out I am RF bandwidth limited rather than SNR limited. My target bandwidth over the channel is 2000 Hz, as FreeDV needs to pass through the various crystal filters in Commercial Off the Shelf (COTS) SSB radios. Things get messy near the edge of those filters. I want to stick with QPSK for now as QAM means a 4dB hit in SNR at the same bit rate. By messing with my waveform design spreadsheet it turns out I can get between 2000 and 2400 bit/s of coded speech through the channel by reducing the FEC code rate. This is acceptable for high-ish SNR channels with light fading.

This gives us an AWGN channel operating point of 4dB SNR, so we have 6dB margin for fading if we design for an average channel SNR of 10dB. That feels about right.

For the speech quality target I wanted to use 20ms frame updates (the lower rate Codec 2 1300 and 700C modes use 40ms updates). However that was a bit of a squeeze. I messed about with Vector Quantisation for a while but the quality was slipping beneath the Codec 2 2400 bit/s target I had set for this project. Then I hit on the idea of 25ms updates, and that seems to work well. I can’t hear any difference between 20 and 25ms updates.

Note the interaction between the speech codec frame rate and the modem bandwidth. As the entire system is open source, I can adapt with the codec to help the modem. This is a very powerful engineering technique that is not available to teams using closed source codec or modem software.

FreeDV 2200, like FreeDV 1600 has some “unprotected bits”. The LDPC forward error correction only covers part of the frame. I’m not sure about this approach, and will talk to Bill, VK5DSP sometime about a lower rate code that protects the entire frame.

Initial tests

A digital voice system is complex. Building the real time C code for a new codec, FEC, and modem stack can take a lot of time and effort. FreeDV 700D took around 1000 man-hours. So I tend to simulate the whole thing in GNU Octave first. This allows me to prototype the entire system with the least possible effort. If it works, I then port to C, and I have a working reference to test the C port against. The basic idea is to find any bad news early.

I modified the Octave version of the modem to handle the different bit rate through changing the number of carriers, symbol rate, and FEC. I started by modifying the raw, uncoded OFDM modem simulations (ofdm_tx.m, ofdm_rx.m), then moved to the versions that include the LDPC FEC.

After a few days of careful programming, testing, and refactoring I was able to transmit and receive test frames using the prototype “FreeDV 2200” waveform. Such a surprise when something complex starts working! Engineers are not used to success; we spend most of our time chasing bugs.

Using the new waveform I ran some coded speech through simulated AWGN and HF channels at the calculated operating points. Ouch! That hurts. Some really high amplitude excursions in the decoded speech, especially when the frames are messed up by a fade or during initial synchronisation.

These sorts of noises used to be common in the early days of speech coding. In contrast, my recent codecs have a “softer” response to bit errors – the speech gets messed up, but not in a way that causes pain.

Error Masking

OK so my initial tests showed that even with 10dB average SNR, sometimes that nasty old HF channel wipes out the codec and it makes loud cracks. What to do? I don’t have any bandwidth left over for more FEC and don’t want to dramatically change the Codec design.

The first thing I did was change the variable bit rate allocation Huffman code to a fixed bit allocation, as explained in the Codec 2 2200 blog post. That helped. The next step was to spend a few days developing some “error masking” ideas. These don’t actually fix the problem, but make it sound more acceptable.

The LDPC decoder kindly tells us when it can’t successfully decode a frame. When that happens, the masking algorithm swings into action. We know some of the bits are rubbish, but not exactly which ones. An AGC algorithm adjusts the energy of the current frame to be similar to the last good one. This helps with the big spikes. If we get a burst of errors the level is smoothly reduced, effectively squelching the signal.
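Here is a rough Octave sketch of the idea (the function and variable names are mine, not the actual Codec 2 sources):

function [speech_out, state] = mask_frame(speech_in, ldpc_failed, state)
  % state.burst and state.last_good_energy persist between frames
  frame_energy = sum(speech_in.^2);
  if ldpc_failed                     % decoder flagged unrecoverable errors
    state.burst = state.burst + 1;
    gain = min(1, sqrt(state.last_good_energy/frame_energy));  % pull spikes down
    gain = gain * 0.5^(state.burst-1);  % keep reducing if the burst continues
  else
    state.burst = 0; gain = 1;
    state.last_good_energy = frame_energy;  % track the energy of good frames
  end
  speech_out = gain*speech_in;
end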

After a few days of tweaking I had this working OK. Here are some plots of the speech signal before and after masking:

Here are some plots that illustrate the masking algorithm. The top plot is the number of errors/frame, they appear in bursts with the channel fading. The lower plot is the energy of the decoded speech before and after masking. When there are no errors, the energy is the same. During an error burst we reduce the gain, exponentially dropping the level if an error burst continues.

Here are some speech samples so you can hear the effect of bit errors:

No errors               Listen
HF Channel              Listen
HF Channel Masking      Listen
HF Channel SSB          Listen
HF Channel FreeDV 1600  Listen

The FreeDV 1600 sample takes a while to sync, perhaps due to that deep fade at the start of the sample. The 2200 modem was derived from 700D, which can sync up at very low SNRs. Much to my annoyance the SSB sample doesn’t seem very bothered by the fading – a testament to robustness of SSB and why it’s so hard to beat. The first few seconds of the SSB sample are “down in the mud” and would probably be lost to untrained listeners.

Over the Air Testing with the Peters

So now I had simulations of every part of FreeDV 2200 ready to roll. The modem simulations work on wave files, which can be played over the air through real HF radios. This is a very important test, as real radios have crystal filters, power amplifiers, and audio filtering that can affect the passage of bits through them.

My first test was across the bench. I played the OFDM signal at low power through my IC-7200 tuned to 7.177MHz, and connected to an external dipole. I received the signal on my FT-817 without an antenna, while recording the signal using FreeDV. I took special care to make sure the IC-7200 wasn’t being over driven, adjusting the transmit drive so the ALC wasn’t moving at all.

The scatter diagram was a bit messed up, and SNR about 13dB, which is pretty typical of tests through real radios, even with no noise. I measure SNR from the scatter diagram, so any distortion in the signal will lower the SNR reported by FreeDV. An important figure of merit is the effect on Bit Error Rate (BER), which can be measured in dB. So I generated a transmit signal with additive noise that had an Eb/No of 3dB. When passed directly to the Rx script I measured a BER of 3%. When the same signal was passed through the radios, then to the Rx script, I measured a BER of 5%.

Looking at the PSK BER versus Eb/No curves, this is a difference of 1.2dB. The distortion of the path through the radios is equivalent to 1.2dB more noise in the system. That’s quite acceptable to me; given the wide bandwidth of the signal compared to the crystal filter skirts, PA distortion, likely rx overload as the radios are so close, and other real world factors.

The next step was some ground wave and DX tests. Peter, VK2PTM, happened to be staying with me. So we hopped in his van and set up a receive station about 4km away, where Peter, VK5APR, joined us for the tests. We received a very nice signal on Peter’s (VK2PTM Peter, that is) KX3, in fact a nicer scatter diagram than with my FT-817. Zero BER, despite just a few watts Tx power.

Peter has documented our FreeDV experiments on his blog.

For the final test, we radiated the same test signal from Adelaide to Peter, VK3RV, about 700km away in Sunbury, near Melbourne. Once again we were on 40M, and this time I was using a few 10’s of watts. Hard to tell exactly with these waveforms. Peter kindly recorded the FreeDV 2200 signal, and emailed it back to me.

You don’t have to be called Peter to be a FreeDV early adopter. But it helps.

This time there was some significant fading, and bursts of bit errors. The SNR averaged about 8dB over our 60 second sample but being a real HF channel the SNR wandered all over the place:


Note the deep fade at the start of the sample, which is also evident in the second plot of uncoded and coded errors above. The Scatter diagram is the classic cross shape, as the amplitudes of the QPSK symbols fade in and out. The channel noise is evident in the width of the cross.

When played back through the speech codec it worked OK (Listen), and sounds similar to the simulated HF channel samples above. The deep fades are gently muted and the overall effect was not unpleasant. Long distance HF conditions are poor here at the moment, so I didn’t get a chance to test the system on more benign HF channels.

I’m very pleased with these tests. It’s a very complicated system and so many things can go wrong when it hits the real world. The careful work I put into test and simulation has paid off. I am particularly happy with the 2000 Hz wide modem waveform making it through real radios and real channels as I felt that was a major risk.

Summary

These two posts have described the development to date of a new FreeDV mode. It’s a work in progress, and I feel another iteration on the speech codec quality and FEC would be useful. I’m quite pleased with the higher bit rate version of the OFDM modem. It would also be nice to get the current work into real time, so we can test it on air – using the open source principle of “release early and often”.

The development process for a new FreeDV waveform is not linear. I started with the codec, got a feel for the bit rate, played with the modem waveform design spreadsheet, then headed back to the codec design. When I put the waveform through simulated channels I found bit errors made the speech codec fall over, so it was back to the codec for some rework.

However the cool thing about open source is that we can consider all these factors at the same time. If you are stuck with a closed source codec or modem you have far more limited options and will end up with poorer performance. Ok, I also have the skills to work on both, but that’s why I’m publishing these posts and the source code! It’s a great starting point for anyone else who wants to learn, or build on this work. Yayyy open source!

Help Wanted

An important next step is to put FreeDV 2200 into real time, so we can test with real over the air conversations. This means porting some code from Octave to C, unit tests, and a bunch of integration with the FreeDV API and FreeDV GUI program. Please contact me if you have C programming skills and would like to help. I’ll help you with the Octave side, and you’ll learn a lot!

If you can’t code, but would like to see a high quality FreeDV mode out there, please consider supporting the work using Patreon or PayPal.

Appendix – Command Lines for testing the new Waveform

Here are the command lines I used to test the new FreeDV waveform over the air. I tend to prototype in Octave, to minimise the amount of time it takes to get something over the air that I can listen to. A wrong turn can mean months of work, so an efficient process matters.

Use c2sim to generate files of Codec 2 model parameters from an input speech file:

$ ./c2sim ../../raw/vk5qi.raw --framelength_s 0.0125 --dump vk5qi_l --phase0 --lpc 10 --dump_pitch_e vk5qi_l_pitche.txt

Run an Octave script to encode model parameters to a 2200 (ish) bit/s bit stream:

octave:40> newamp2_batch("../build_linux/src/vk5qi_l","no_output","mode", "enc", "bitstream", "vk5qi_2200_enc.c2");

Run an Octave script to generate OFDM modem transmitter audio. The payload data is known test frames at 2200 bit/s:

octave:41> ofdm_ldpc_tx("ofdm_ldpc_test.raw","2200",1,300);

Three hundred seconds are generated in this example, about 5 minutes.

This raw file can be “played” over the air through a HF SSB radio, then recorded as a wave file by a HF radio receiver. The received wave (or raw) file, is run through the OFDM modem receiver. Using the known test frame data, it measures the Bit Error rate (BER) and plots various statistics. It also generates a file of errors, that shows each bit that was received in error:

octave:42> ofdm_ldpc_rx('~/Desktop/ofdm_ldpc_test_awgn_sunbury001_rx_60.wav',"2200",1,"subury001_error.bin")
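Conceptually the BER measurement boils down to an exclusive-or and a count. Here is a toy Octave sketch of the idea (my illustration, not the actual ofdm_ldpc_rx internals):

% toy BER calculation - tx_bits are the known test frame bits
tx_bits = [0 1 1 0 1 0 0 1];
rx_bits = [0 1 0 0 1 0 1 1];            % bits out of the demodulator
nerrs = sum(xor(tx_bits, rx_bits))      % 2 errors
ber = nerrs/length(tx_bits)             % BER = 0.25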

This C utility “corrupts” the Codec 2 bit stream using the file of errors, simulating the path of real Codec 2 bits through the channel:

$ ../build_linux/src/insert_errors vk5qi_2200_enc.c2 vk5qi_2200_sunbury001_dec.c2 subury001_error.bin

Now decode the bit stream to a set of Codec 2 model parameters:

octave:43> newamp2_batch("../build_linux/src/vk5qi_l","output_prefix","../build_linux/src/vk5qi_l_dec_sunbury001", "mode", "dec", "bitstream", "vk5qi_2200_sunbury001_dec.c2", "mask", "subury001_error.bin");

The “mask” parameter uses the error file to implement an error masking algorithm. In a real world system, the LDPC decoder would tell us which frames were bad, so this is a mild cheat to expedite development.

Finally, run the C part of the Codec 2 decoder to listen to the results:

$ ./c2sim ../../raw/vk5qi.raw --framelength_s 0.0125 --pahw vk5qi_l_dec_sunbury001 --hand_voicing vk5qi_l_dec_sunbury001_v.txt -o - | aplay -f S16

Reading Further

Codec 2 2200
modem waveform design spreadsheet
Steve Ports an OFDM modem from Octave to C

David Rowe: Codec 2 2200

Wed, 2018-07-25 11:04

FreeDV 700D has shown digital voice can outperform SSB at low SNRs over HF multipath channels. With practice the speech quality is quite usable, but it’s not high quality and is sensitive to different speakers and microphones.

However at times it’s like magic! The modem signal is buried in all sorts of HF noise, getting hammered by fading – then one of my friends 800km away breaks the FreeDV squelch (at -2dB!) and it’s like they are in the room. No more hunched-over-the-radio listening to my neighbour’s plasma TV power supply.

There is a lot of promise in FreeDV, and it’s starting to come together.

So now I’d like to see what we can do at a higher SNR, where SSB is an armchair copy. What sort of quality can we develop at 10dB SNR with light fading?

The OFDM modem and LDPC Forward Error Correction (FEC) used for 700D are working well, so the idea is to develop a high bit rate/high(er) quality speech codec, and adapt the OFDM modem and LDPC code.

So over the last few months I have been jointly designing a new speech codec, OFDM modem, and LDPC forward error correction scheme called “FreeDV 2200”. The speech codec mode is called “Codec 2 2200”.

Coarse Quantisation of Spectral Magnitudes

I started with the design used for Codec 2 700C. The resampling step is the key: it converts the time varying number of harmonic amplitudes to a fixed number (K=20) of samples covering 100 to 4000 Hz. They are sampled using the “mel” scale, which means we take more finely spaced samples at low frequencies, with coarser steps at high frequencies. This matches the log frequency response of the ear. I arrived at K=20 by experiment.
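For the curious, here is a minimal Octave sketch of mel spaced sampling, assuming the textbook mel formula (the warping actually used in Codec 2 may differ slightly, and the variable names are my own):

% K=20 sample points between 100 and 4000 Hz, finely spaced at
% low frequencies, coarser at high frequencies
K = 20;
f2mel = @(f) 2595*log10(1 + f/700);      % textbook mel formula
mel2f = @(m) 700*(10.^(m/2595) - 1);     % and its inverse
mel_pts = linspace(f2mel(100), f2mel(4000), K);
sample_freqs_hz = mel2f(mel_pts)
% the harmonic amplitudes would then be interpolated onto
% sample_freqs_hz, e.g. with interp1()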

This Mel-resampling is not perfect. Some samples, like hts1a, have narrow formants at higher frequencies that get undersampled by the Mel resampling, leading to some distortion. Linear Predictive Coding (LPC) (as used in many Codec 2 modes) does a better job for some of these samples. More work required here.

In Codec 2 700C, we vector quantise the K=20 Mel sample vectors. For this new Codec 2 mode, I experimented with directly quantising each sample. To my surprise, I found very little difference in speech quality with coarsely quantised 6dB steps. Furthermore, I found we can limit the rate of change between samples to a maximum of +/-12 dB. This allows each sample to be delta-coded in frequency using a step of 0, -6, +6, -12, or +12dB. Using a 2 or 3 bit Huffman coding approach it turns out we need 45-ish bits/frame for good quality speech.
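Here is an Octave sketch of the coarse quantisation and slope limiting idea (my own illustration with toy data, not the Codec 2 source):

% quantise to 6dB steps, limiting the change between adjacent
% samples to +/-12 dB, so each delta is one of 0, +/-6, +/-12 dB
AmdB = [10 14 27 25 12 8 15];            % toy band amplitudes in dB
q = zeros(size(AmdB));
q(1) = 6*round(AmdB(1)/6);               % first sample on the 6dB grid
for k = 2:length(AmdB)
  step = 6*round((AmdB(k) - q(k-1))/6);  % nearest 6dB delta
  step = max(-12, min(12, step));        % rate limit to +/-12 dB
  q(k) = q(k-1) + step;
end
q                                        % quantised samples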

After some experimentation with bit errors, I ended up using a fixed bit allocation rather than Huffman coding. Huffman coding uses variable length symbols (in my case 2 and 3 bits). After a bit error you don’t know where the next variable length symbol in the string starts and ends. So a single bit error early in the bits describing the spectrum can corrupt the entire spectrum.
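A toy example illustrates the failure mode. Using a hypothetical 1-4 bit prefix code (not the actual Codec 2 tables), flipping a single bit derails the decoder for every symbol that follows:

% hypothetical prefix code, five symbols sent
codes = {"0", "10", "110", "1110"};
tx = [codes{1} codes{2} codes{3} codes{1} codes{2}];   % "010110010"
rx = tx; rx(3) = num2str(1 - str2double(rx(3)));       % flip one bit
for bits = {tx, rx}
  b = bits{1}; dec = []; i = 1;
  while i <= length(b)
    matched = 0;
    for s = 1:length(codes)
      n = length(codes{s});
      if i+n-1 <= length(b) && strcmp(b(i:i+n-1), codes{s})
        dec(end+1) = s; i += n; matched = 1; break;
      end
    end
    if !matched, break; end              % invalid prefix, decoder is lost
  end
  disp(dec)                              % [1 2 3 1 2] for tx
end

The error free stream decodes to the five symbols sent; after the flipped bit the decoder loses the symbol boundaries and gives up. With a fixed bit allocation the damage would be confined to a single sample.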

Parameter            Bits Per Frame
Spectral Magnitude   44
Energy                4
Pitch                 7
Voicing               1
Total                56

At a 25ms frame update rate this is 56/0.025 = 2240 bits/s.

Here are some samples that show the effect of the processing stages. During development it’s useful to listen to the effect each stage has on the codec speech quality, for example to diagnose problems with a sample that codes poorly.

Original: Listen
Codec 2 Unquantised 12.5ms frames: Listen
Synthesised phases: Listen
K=20 Mel: Listen
Quantised to 6dB steps: Listen
Fully Quantised with 44 bits/25ms frame: Listen

Here is a plot of 50 frames of coarsely quantised amplitude samples; the 6dB steps are quite apparent:

Listening Tests

To test the prototype 2200 mode I performed listening tests using 15 samples. I have collected these samples over time as examples that tend to challenge speech codecs, and in particular code poorly using FreeDV 1600 (i.e. Codec 2 1300). During my listening tests I used a small set of powered speakers, and for each sample chose which codec algorithm I preferred, then averaged the results over the 15 samples.

Over the 15 samples I felt 2200 was just in front of 2400 (on average), and well in front of 1300. There were some exceptions of course – it would be useful to explore them some time. I was hoping that I could make 2200 significantly better than 2400, but ran out of time. However I am pleased that I have developed a new method of spectral quantisation for Codec 2 and come up with a fully quantised speech codec based on it at such an early stage.

Here are a few samples processed with various speech codecs. On some of them 2200 isn’t as good as 2400 or even 1300. I’m presenting some of the poorer samples on purpose – the poorly coded samples are the interesting ones worth investigating. Remember the aims of this work are significant improvements over (i) Codec 2 1300, and (ii) SSB at around 10dB SNR. See what you think.

Sample       1300    2400    2200    SSB 10dB
hts1a        Listen  Listen  Listen  Listen
ve9qrp       Listen  Listen  Listen  Listen
mmt1         Listen  Listen  Listen  Listen
vk5qi        Listen  Listen  Listen  Listen
1 vk5local   Listen  Listen  Listen  Listen
2 vk5local   Listen  Listen  Listen  Listen

Further Work

I feel the coarse quantisation trick is a neat way to reduce the information in the speech spectrum and is worth exploring further. There must be more efficient, higher quality ways to encode this information. I did try some vector quantisation but wasn’t happy with the results (I always seem to struggle with VQ). However I just don’t have the time to explore every R&D path this work presents.

Quantisers can be stored very efficiently, as the dynamic range of the spectral sample is low (a few bits/sample) compared to floating point numbers. This simplifies storage requirements, even for large VQs.
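For example, with some hypothetical numbers of my own just to show the scale:

% storage for a hypothetical 4096 entry, K=20 VQ
entries = 4096; K = 20;
packed_bytes = entries*K*3/8             % 3 bit samples: 30720 bytes
float_bytes  = entries*K*4               % 32 bit floats: 327680 bytes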

Machine learning techniques could also be applied, using some of the ideas in this post as “pre-processing” steps. However that’s a medium term project which I haven’t found time for yet.

In the next post, I’ll talk about the modem and LDPC codec required to build FreeDV 2200.

Dave Hall: The Road to DrupalCon Dublin

Wed, 2018-07-25 03:04

DrupalCon Dublin is just around the corner. Earlier today I started my journey to Dublin. This week I'll be in Mumbai for some work meetings before heading to Dublin.

On Tuesday 27 September at 1pm I will be presenting my session Let the Machines do the Work. This lighthearted presentation provides some practical examples of how teams can start to introduce automation into their Drupal workflows. All of the code used in the examples will be available after my session. You'll need to attend my talk to get the link.

As part of my preparation for Dublin I've been road testing my session. Over the last few weeks I delivered early versions of the talk to the Drupal Sydney and Drupal Melbourne meetups. Last weekend I presented the talk at Global Training Days Chennai, DrupalCamp Ghent and DrupalCamp St Louis. It was exhausting presenting three times in less than 8 hours, but it was definitely worth the effort. The 3 sessions were presented using Hangouts, so they were recorded. I gained valuable feedback from attendees and became aware that some bits of my talk needed attention.

Just as I encourage teams to iterate on their automation, I've been iterating on my presentation. Over the next week or so I will be recutting my demos and polishing the presentation. If you have a spare 40 minutes I would really appreciate it if you could watch one of the session recordings below and leave a comment here with any feedback.

Global Training Days Chennai DrupalCamp Ghent

Note: I recorded the audience not my slides.

DrupalCamp St Louis

Note: There was an issue with the mic in St Louis, so there is no audio from their side.

Dave Hall: Trying Drupal

Wed, 2018-07-25 03:04

While preparing for my DrupalCamp Belgium keynote presentation I looked at how easy it is to get started with various CMS platforms. For my talk I used Contentful, a hosted content-as-a-service CMS platform, and contrasted that to the "Try Drupal" experience. Below is the walkthrough of both.

Let's start with Contentful. I start off by visiting their website.

In the top right corner is a blue button encouraging me to "try for free". I hit the link and I'm presented with a sign up form. I can even use Google or GitHub for authentication if I want.

While my example site is being installed I am presented with an overview of what I can do once it is finished. It takes around 30 seconds for the site to be installed.

My site is installed and I'm given some guidance about what to do next. There is even an onboarding tour in the bottom right corner that is waving at me.

Overall this took around a minute and required very little thought. I never once found myself thinking "come on, hurry up".

Now let's see what it is like to try Drupal. I land on d.o. I see a big prominent "Try Drupal" button, so I click that.

I am presented with 3 options. I am not sure why I'm being presented options to "Build on Drupal 8 for Free" or to "Get Started Risk-Free", I just want to try Drupal, so I go with Pantheon.

Like with Contentful I'm asked to create an account. Again I have the option of using Google for the sign up or completing a form. This form has more fields than Contentful's.

I've created my account and I am expecting to be dropped into a demo Drupal site. Instead I am presented with a dashboard. The most prominent call to action is importing a site. I decide to create a new site.

I have to now think of a name for my site. This is already feeling like a lot of work just to try Drupal. If I was a busy manager I would have probably given up by this point.

When I submit the form I must surely be going to see a Drupal site. No, sorry. I am given the choice of installing WordPress, yes WordPress, Drupal 8 or Drupal 7. Despite being very confused I go with Drupal 8.

Now my site is deploying. While this happens there are a bunch of items that update above the progress bar. They're all a bit nerdy, but at least I know something is happening. Why is my only option to visit my dashboard again? I want to try Drupal.

I land on the dashboard. Now I'm really confused. This all looks pretty geeky. I want to try Drupal not deal with code, connection modes and the like. If I stick around I might eventually click "Visit Development site", which doesn't really feel like trying Drupal.

Now I'm asked to select a language. OK so Drupal supports multiple languages, that's nice. Let's select English so I can finally get to try Drupal.

Next I need to choose an installation profile. What is an installation profile? Which one is best for me?

Now I need to create an account. About 10 minutes ago I created an account. Why do I need to create another one? I also named my site earlier in the process.

Finally I am dropped into a Drupal 8 site. There is nothing to guide me on what to do next.

I am left with a sense that setting up Contentful is super easy and Drupal is a lot of work. Most people wanting to try Drupal would have abandoned somewhere along the way. I would love to see the conversion stats for the try Drupal service. It must be minuscule.

It is worth noting that Pantheon has the best user experience of the 3 companies. The process with 1&1 just dumps me at a hosting sign up page. How does that let me try Drupal?

Acquia drops you onto a page where you select your role, then you're presented with some marketing stuff and a form to request a demo. That is unless you're running an ad blocker, in which case when you select your role you get an Ajax error.

The Try Drupal program generates revenue for the Drupal Association. This money helps fund development of the project. I'm well aware that the DA needs money. At the same time I wonder if it is worth it. For many people this is the first experience they have using Drupal.

The previous attempt to have simplytest.me added to the try Drupal page ultimately failed due to the financial implications. While this is disappointing I don't think simplytest.me is necessarily the answer either.

There need to be some minimum standards for the Try Drupal page. One of the key items is the number of clicks to get from d.o to a working demo site. Without this the "Try Drupal" page will drive people away from the project, which isn't the intention.

If you're at DrupalCon Vienna and want to discuss this and other ways to improve the marketing of Drupal, please attend the marketing sprints.


Dave Hall: Drupal Puppies

Wed, 2018-07-25 03:04

Over the years Drupal distributions, or distros as they're more affectionately known, have evolved a lot. We started off passing around database dumps. Eventually we moved on to using installation profiles and features to share par-baked sites.

There are some signs that distros aren't working for people using them. Agencies often hack a distro to meet client requirements. This happens because it is often difficult to cleanly extend a distro. A content type might need extra fields or the logic in an alter hook may not be desired. This makes it difficult to maintain sites built on distros. Other times maintainers abandon their distributions. This leaves site owners with an unexpected maintenance burden.

We should recognise how people are using distros and try to cater to them better. My observations suggest there are 2 types of Drupal distributions: starter kits and targeted products.

Targeted products are easier to deal with. Increasingly, monetising targeted distro products is done through a SaaS offering. The revenue can fund the ongoing development of the product. This can help ensure the project remains sustainable. There are signs that this is a viable way of building Drupal 8 based products. We should be encouraging companies to embrace a strategy built around open SaaS. Open Social is a great example of this approach. Releasing the distros demonstrates a commitment to the business model. Often the secret sauce isn't in the code, it is the team and services built around the product.

Many Drupal 7 based distros struggled to articulate their use case. It was difficult to know if they were a product, a demo or a community project that you extend. Open Atrium and Commerce Kickstart are examples of distros with an identity crisis. We need to reconceptualise most distros as "starter kits" or as I like to call them "puppies".

Why puppies? Once you take a puppy home it becomes your responsibility. Starter kits should be the same. You should never assume that a starter kit will offer an upgrade path from one release to the next. When you install a starter kit you are responsible for updating the modules yourself. You need to keep track of security releases. If your puppy leaves a mess on the carpet, no one else will clean it up.

Sites built on top of a starter kit should diverge from the original version. This shouldn't only be an expectation, it should be encouraged. Installing a starter kit is the starting point of building a unique fork.

Project pages should clearly state that users are buying a puppy. Prospective puppy owners should know if they're about to take home a little lap dog or one that will grow to the size of a pony that needs daily exercise. Puppy breeders (developers) should not feel compelled to do anything once releasing the puppy. That said, most users would like some documentation.

I know of several agencies and large organisations that are making use of starter kits. Let's support people who are adopting this approach. As a community we should acknowledge that distros aren't working. We should start working out how best to manage the transition to puppies.

Dave Hall: Wix aren't the Only Ones Violating the GPL

Wed, 2018-07-25 03:04

Recently Matt Mullenweg called out Wix for violating the GPL. The post by Wix's CEO, Avishai Abrahami, on the company blog missed some of the key issues when it comes to using code licensed under the terms of the GNU General Public License. The GPL doesn't just require you to release the changes made to a dependency. If you use a GPL library in your project, then the entire project is deemed to be a derivative work. This is why the GPL is called a viral license: if one part of your project is covered by the GPL, then it applies to the entirety of your app. Other people more qualified than me have explained this in more detail, more elegantly than I ever could.

This reminded me of a popular platform that is built on a GPL violation. FlightRadar24 allows users to track flights in real time. FR24's data is collected from multiple sources, including the US FAA and a network of volunteers that run tracking equipment. One of the most common tracking setups uses a Raspberry Pi hooked up to a cheap DVB-T USB tuner. librtlsdr is used to tune the USB stick to the right frequency so it can listen to the tracking signals.

The FlightRadar24 software, FR24Feed, is a single binary that includes a statically linked copy of librtlsdr. On multiple occasions I have asked for a full copy of the source code for the FR24Feed binaries that FlightRadar24 are distributing. They have refused.

FlightRadar24 are under the mistaken belief that they only have to distribute the source for their modifications to librtlsdr. They have made their patched version of the code available on the FlightRadar24 downloads page. FlightRadar24 say they won't release the source to ensure the quality of their data. I'm sorry, but the GPL doesn't include a clause that excuses you from distributing the source code if you have a bad architecture.

It is disappointing to see FlightRadar24 refusing to comply with the terms of the GPL. I still have everything I need to run a tracking station sitting in a drawer. I'll set up my tracking station once FlightRadar24 release the full source for FR24Feed.

Dave Hall: Many People Want To Talk

Wed, 2018-07-25 03:04

WOW! The response to my blog post on the future of Drupal earlier this week has been phenomenal. My blog saw more traffic in 24 hours than it normally sees in a 2 to 3 week period. Around 30 comments have been left by readers. My tweet announcing the post was the top Drupal tweet for a day. Some 50 hours later it is still number 4.

It seems to have really connected with many people in the community. I am still reflecting on everyone's contributions. There is a lot to take in. Rather than rush a follow up that responds to the issues raised, I will take some time to gather my thoughts.

One thing that is clear is that many people want to use DrupalCon Baltimore next week to discuss this issue. I encourage people to turn up with an open mind and engage in the conversation there.

A few people have suggested a BoF. Unfortunately all of the official BoF slots are full. Rather than that be a blocker, I've decided to run an unofficial BoF on the first day. I hope this helps facilitate the conversation.

Unofficial BoF: The Future of Drupal

When: Tuesday 25 April 2017 @ 12:30-1:30pm
Where: Exhibit Hall - meet at the Digital Echidna booth (#402) to be directed to the group
What: High level discussion about the direction people think Drupal should take.
UPDATE: An earlier version of this post had this scheduled for Monday. It is definitely happening on Tuesday.

I hope to see you in Baltimore.