Planet Linux Australia
The plastic is about 12mm thick and smells like a 2.5D job done by a 3D printer 'just because'. So a quick tinker in Fusion 360 and the 1/2 inch thick flatland part was born. After removing the hold down tabs and flapping the remains away, three M6 bolt holes were hand drilled. Notice the subtle shift on the inside of the part where the extrusion and stepper motor differ in size.
It was quicker to just do that rather than try to remount and register the part on the CNC, and it might not have even worked with the limited Z range of the machine.
The below image only has two of the three bolts in place. With the addition of the new bolt heading into the z axis the rigidity of the machine went right up. The shaft that the z axis is mounted onto goes into the 12mm empty hole in the part.
This does open up thoughts of how many other parts would be better served by not being made out of plastic.
The 6th Multicore World will be held on Monday 20th to Wednesday 22nd of February 2017 at Shed 6 on the Wellington (NZ) waterfront. Nicolás Erdödy (Open Parallel) has once again done an amazing job of finding some of the most significant speakers in the world in parallel programming and multicore systems to attend. Although a short conference – and not an enormous one – the technical quality is always extremely high, dealing with some of the most fundamental problems and recent experiences in these fields.
Librarians are stepping into the breach to help students become smarter evaluators of the information that floods into their lives. That’s increasingly necessary in an era in which fake news is a constant.

Spotting fake news – by librarian Janelle Hagen – Lakeside School Seattle
Join 'em, support 'em, donate, promote... whatever. They all do good work. Really good work. And we should all support them as much as we can. Help me, help them, by following them, amplifying their voices, donating or, even better, joining them! And if all you've got is gratitude for the work they do, then drop 'em a line and just say a simple thank you :)

Software Freedom Conservancy
Donate: sfconservancy.org/donate

Open Source Initiative
Join: opensource.org/join

Drupal Association
Join: as above, just choose monthly sustaining member
Our standalone Foundation (Prep/Kindy etc) students are introduced to the World Map this week, as they start putting stickers on it, showing where in the world they and their families come from – the origin of the title of this unit (Me and My Global Family). This helps students to feel connected with each other and to start to understand both the notion of the ‘global family’, as well as the idea that places can be represented by pictures (maps). Of course, we don’t expect most 5 year olds to understand the world map, but the sooner they start working with it, the deeper the familiarity and understanding later on.

Students building Stonehenge with blocks
All the other younger students are learning about movements of celestial bodies (the Earth and Moon, as they go around the Sun and each other) and that people have measured time in the past with reference to both the Sun and the Moon – Solar and Lunar calendars. To make these ideas more concrete, students study ancient calendars, such as Stonehenge, Newgrange and Abu Simbel, and take part in an activity building a model of Stonehenge from boxes or blocks.

Years 3 to 6

Demon Duck of Doom
Our older primary students are going back into the Ice Age (and who wouldn’t want to, in this weather!), as they explore the routes of modern humans leaving Africa, as part of understanding how people reached Australia. Aboriginal people arrived in Australia as part of the waves of modern humans spreading across the world. However, the Australia they encountered was very different from today. It was cold, dry and very dusty, inhabited by giant Ice Age animals (the Demon Duck of Doom is always a hot favourite with the students!) and overall, a pretty dangerous place. We challenge students to imagine life in those times, and thereby start to understand the basis for some of the Dreamtime stories, as well as the long and intricate relationship between Aboriginal people and the Australian environment.
We thought it would be fun to track what’s happening in schools using our primary HASS program, on a weekly basis. Now we know that some of you are doing different units and some will start in different weeks, depending on what state you’re in, what term dates you have etc, but we will run these posts based off those schools which are implementing the units in numerical order and starting in the week beginning 30 January, 2017.
Week 1 is an introductory week for all units, and usually sets some foundations for the rest of the unit.

Foundation to Year 3
Our youngest students are still finding their feet in the new big world of school! We have 2 units for Term 1, depending on whether the class is standalone, or integrating with some Year 1 students. This week standalone classes will be starting a discussion about their families – geared towards making our newest students feel welcome and comfortable at school.
Those integrating with Year 1 or possibly Year 2, as well, will start working with their teachers on a Class Calendar, marking terms and holidays, as well as celebrations such as birthdays and public holidays. This helps younger students start to map out the coming year, as well as provide a platform for discussions about how they spent the holidays. Year 2 and 3 students may choose to focus more on discussing which season we are in now, and what the weather’s like at the moment (I’m sure most of you are in agreement that it’s too hot!). Students can track the weather on the calendar as well.

Years 3 to 6
Some Year 3 students may be in classes integrating with Year 4 students, rather than Year 2. Standalone Year 3 classes have a choice of doing either unit. These older students will be undertaking the Timeline Activity and getting a physical sense of history and spans of time. Students love an excuse to get outdoors, even when it’s hot, and this activity gives them a preview of material they will be covering later in the year, as well as giving them a hands-on understanding of how time has passed and how where we are compares to past events. This activity can even reinforce the concept of a number line from Maths, in a very kinaesthetic way.
The newly released FreeDV 700C mode uses the Coherent PSK (COHPSK) modem which I developed in 2015. This post describes the challenges of building HF modems for DV, and how the COHPSK modem evolved from the FDMDV modem used for FreeDV 1600.
HF channels are tough. You need a lot of SNR to push bits through them. There are several problems to contend with:
When the transmit signal is reflected off the ionosphere, two or more copies arrive at the receiver antenna a few ms apart. These echoes confuse the demodulator, just like a room with bad echo can confuse a listener.
Here is a plot of a BPSK baseband signal (top). Let's say we receive two copies of this signal, from two paths. The first is identical to what we sent (top), but the second is delayed a few samples and at half the amplitude (middle). When you add them together at the receiver input (bottom), it’s a mess:
The multiple paths combining effectively form a comb filter, notching out chunks of the modem signal. Losing chunks of the modem spectrum is bad. Here is the magnitude and phase frequency response of a channel with the two paths used for the time domain example above:
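A quick numerical sketch of the comb filter (Python here rather than the post's Octave; the 1 ms, half-amplitude echo matches the example above):

```python
import numpy as np

fs = 8000                      # sample rate, Hz
delay = int(0.001 * fs)        # 1 ms echo delay = 8 samples
h = np.zeros(delay + 1)
h[0] = 1.0                     # direct path
h[delay] = 0.5                 # echo at half amplitude

# Frequency response of the two-path channel
H = np.fft.rfft(h, 1024)
mag_db = 20 * np.log10(np.abs(H))

# Notches occur wherever the echo arrives in anti-phase, spaced
# fs/delay = 1 kHz apart; there the response drops to |1 - 0.5| = -6 dB
print("min |H|: %.2f dB, max |H|: %.2f dB" % (mag_db.min(), mag_db.max()))
```

The 6 dB notches every kHz are exactly the "chunks" lost from the modem spectrum.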
Note that comb filtering also means the phase of the channel is all over the place. As we are using Phase Shift Keying (PSK) to carry our precious bits, strange phase shifts are more bad news.
All of these impairments are time varying, so the echoes/notches and phase shifts drift as the ionosphere wiggles about. As well as the multipath, the modem must deal with noise, operate at SNRs of around 0dB, and handle frequency offsets between the transmitter and receiver of say +/- 100 Hz.
If commodity sound cards are used for the ADC and DAC, the modem must also handle large sample clock offsets of +/-1000 ppm. For example the transmitter DAC sample clock might be 7996 Hz and the receiver ADC 8004 Hz, instead of the nominal 8000 Hz.
As the application is Push to Talk (PTT) Digital Voice, the modem must sync up quickly, in the order of 100ms, even with all the challenges above thrown at it. Processing delay should be around 100ms too. We can’t wait seconds for it to train like a data modem, or put up with several seconds of delay in the receive speech due to processing.
Using standard SSB radio sets we are limited to around 2000 Hz of RF bandwidth. This bandwidth puts a limit on the bit rate we can get through the channel. The amplitude and phase distortion caused by typical SSB radio crystal filters is another challenge.
Designing a modem for HF Digital Voice is not easy!
In 2012, the FDMDV modem was developed as our first attempt at a modem for HF digital voice. This is more or less a direct copy of the FDMDV waveform which was developed by Francesco Lanza, HB9TLK and Peter Martinez G3PLX. The modem software was written in GNU Octave and C, carefully tested and tuned, and most importantly – is open source software.
This modem uses many parallel carriers or tones. We are using Differential QPSK, so every symbol contains 2 bits encoded as one of 4 phases.
Let's say we want to send 1600 bits/s over the channel. We could do this with a single QPSK carrier at Rs = 800 symbols a second. Eight hundred symbols/s times two bit/symbol for QPSK is 1600 bit/s. The symbol period Ts = 1/Rs = 1/800 = 1.25ms. Alternatively, we could use 16 carriers running at 50 symbols/s (symbol period Ts = 20ms). If the multipath channel has echoes 1ms apart it will make a big mess of the single carrier system but the parallel tone system will do much better, as 1ms of delay spread won’t upset a 20ms symbol much:
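The arithmetic is easy to sanity-check (a small Python sketch using the numbers above):

```python
bit_rate = 1600          # target bit/s
bits_per_symbol = 2      # QPSK: 2 bits per symbol

# Option 1: a single carrier
Rs_single = bit_rate // bits_per_symbol      # 800 symbols/s
Ts_single = 1.0 / Rs_single                  # 1.25 ms symbol period

# Option 2: 16 parallel carriers sharing the bit rate
n_carriers = 16
Rs_parallel = Rs_single // n_carriers        # 50 symbols/s per carrier
Ts_parallel = 1.0 / Rs_parallel              # 20 ms symbol period

delay_spread = 0.001                         # 1 ms multipath echo
print("echo is %.0f%% of a single-carrier symbol" % (100 * delay_spread / Ts_single))
print("echo is %.0f%% of a parallel-carrier symbol" % (100 * delay_spread / Ts_parallel))
```

An 80% smear of every symbol versus a 5% smear of each symbol's leading edge: that's why the parallel tone design survives the multipath.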
We handle the time-varying phase of the channel using Differential PSK (DPSK). We actually send and receive phase differences. Now the phase of the channel changes over time, but can be considered roughly constant over the duration of a few symbols. So when we take a difference between two successive symbols the unknown phase of the channel is removed.
Here is an example of DPSK for the BPSK case. The first figure shows the BPSK signal (top), and the corresponding DBPSK signal (bottom). When the BPSK signal changes, we get a +1 DBPSK value, when it is the same, we get a -1 DBPSK value.
The next figure shows the received DBPSK signal (top). The phase shift of the channel is a constant 180 degrees, so the signal has been inverted. In the bottom subplot the recovered BPSK signal after differential decoding is shown. Despite the 180 degree phase shift of the channel it’s the same as the original Tx BPSK signal in the first plot above.
This is a trivial example, in practice the phase shift of the channel will vary slowly over time, and won’t be a nice neat number like 180 degrees.
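A toy version of this in Python (the product form of differential encoding is used; sign conventions here are illustrative, not necessarily those of the FDMDV code):

```python
import numpy as np

tx_bpsk = np.array([1, 1, -1, -1, 1, -1, 1, 1])   # BPSK symbols (+1/-1)

# Differential encoding: each transmitted symbol is the product of the
# current BPSK symbol and the previously transmitted symbol
tx_dbpsk = np.ones(len(tx_bpsk) + 1)
for n in range(len(tx_bpsk)):
    tx_dbpsk[n + 1] = tx_dbpsk[n] * tx_bpsk[n]

# Channel applies a constant 180 degree phase shift (inversion)
rx = -tx_dbpsk

# Differential decoding: the product of successive received symbols
# recovers the BPSK stream; the unknown channel phase cancels out
rx_bpsk = rx[1:] * rx[:-1]
print(rx_bpsk)
```

The two inversions multiply out to +1, so the decoded stream matches the original despite the channel's 180 degree phase shift.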
DPSK is a neat trick, but has an impact on the modem Bit Error Rate (BER) – if you get one symbol wrong, the next one tends to be corrupted as well. It’s a two for one deal on bit errors, which means crappier performance for a given SNR than regular (coherent) PSK.
To combat frequency selective fading we use a little Forward Error Correction (FEC) on the FreeDV 1600 waveform. So if one carrier gets notched out, we can use bits in the other carriers to recover the missing bits. Unfortunately we don’t have the bandwidth available to protect all bits, and the PTT delay requirement means we have to use a short FEC code. Short FEC codes don’t work as well as long ones.
Over the next few years I spent some time thinking about different modem designs and trying a bunch of different ideas, most of which failed. Research and disappointment. You just have to learn from your mistakes, talk to smart people, and keep trying. Then, towards the end of 2014, a few ideas started to come together, and the COHPSK modem was running in real time in mid 2015.
The major innovations of the COHPSK modem are:
- The use of diversity to help combat frequency selective fading. The baseline modem has 7 carriers. A copy of these is made and sent at a higher frequency, making 14 tones in total. It turns out the HF channel giveth and taketh away: when one tone is notched out, another is enhanced (an anti-fade). So we send each carrier twice and add them back together at the demodulator, averaging out the effect of frequency selective fades:
- To use diversity we need enough bandwidth to fit a copy of the baseline modem carriers. This implies the need for a vocoder bit rate of much less than 1600 bit/s – hence several iterations of a 700 bit/s speech codec – a completely different skill set – and another 18 months of my life to develop Codec 2 700C.
- Coherent QPSK detection is used instead of differential detection, which halves the number of bit errors compared to differential detection. This requires us to estimate the phase of the channel on the fly. Two known symbols are sent followed by 4 data symbols. These known, or Pilot symbols, allow us to measure and correct for the current phase of each carrier. As the pilot symbols are sent regularly, we can quickly acquire – then track – the phase of the channel as it evolves.
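A sketch of the pilot-based phase estimation (Python, noise-free for clarity; the 2 pilot / 4 data layout follows the description above, the rest is illustrative):

```python
import numpy as np
rng = np.random.default_rng(1)

# Known pilot symbols followed by data symbols, per the 2-pilot/4-data framing
pilots = np.array([1 + 0j, 1 + 0j])
data = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, 4))  # random QPSK

frame = np.concatenate([pilots, data])

# Channel rotates everything by an unknown (slowly varying) phase
channel_phase = 2.1
rx = frame * np.exp(1j * channel_phase)

# Estimate the channel phase from the received pilots alone
phase_est = np.angle(np.mean(rx[:2] * np.conj(pilots)))

# De-rotate the data symbols with the estimate
data_hat = rx[2:] * np.exp(-1j * phase_est)
print(np.max(np.abs(data_hat - data)))  # ~0 in this noise-free sketch
```

With noise present the pilot estimate is averaged over time, but the principle is the same: known symbols let us measure, then remove, the channel phase without paying DPSK's two-for-one bit error penalty.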
Here is a figure that shows how the pilot and data symbols are distributed across one frame of the COHPSK modem. More information on the frame design is available in the cohpsk frame design spreadsheet, including performance calculations which I’ll explain in the next blog post in this series.
In the next post I’ll show how reading a few graphs and adding a few dBs together can help us estimate the performance of the FDMDV and COHPSK modems on HF channels.
cohpsk_plots.m Octave script used to generate plots for this post.
FDMDV Modem Page
This applies to self-hosting a GitLab instance. If you are using the gitlab.com hosted service, a suitable runner is already supplied.

There are many types of executors for runners, suiting a variety of scenarios. This example's scenario is that both GitLab and the desired runner are on the same instance. We especially need to run a privileged runner to make this happen.

Assuming that GitLab Runner has already been successfully installed, head to Admin -> Runner in the web UI of your GitLab instance and note your Registration token.

From a suitable account on your GitLab instance register a shared runner:

% sudo /usr/bin/gitlab-ci-multi-runner register --docker-privileged \
    --url https://gitlab.my.domain/ci \
    --registration-token REGISTRATION_TOKEN \
    --executor docker \
    --description "My Docker Runner" \
    --docker-image "docker:latest"

Your shared runner should now be ready to run.
If you have a little laptop with an Intel CPU that supports turbo boost, you might find that it’s getting a little hot when you’re using it on your lap.
For example, taking a look at my CPU:

lscpu | egrep "Model name|MHz"
Model name:    Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
CPU MHz:       524.633
CPU max MHz:   3500.0000
CPU min MHz:   400.0000

We can see that it’s a 2.7GHz CPU with turbo boost taking it up to 3.5GHz.
Here’s a way that you can enable and disable turbo boost with a systemd service, which lets you hook it into other services or disable it on boot.
By default, turbo boost is on, so starting our service will disable it.
Create the service.
cat << EOF | sudo tee \
Description=Disable Turbo Boost on Intel CPU
ExecStart=/bin/sh -c "/usr/bin/echo 1 > \
ExecStop=/bin/sh -c "/usr/bin/echo 0 > \
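For reference, a complete unit might look like the following. The unit file path, the [Install] target and the sysfs knob are my assumptions; the no_turbo knob only exists when the intel_pstate driver is in use:

```shell
cat << EOF | sudo tee /etc/systemd/system/disable-turbo-boost.service
[Unit]
Description=Disable Turbo Boost on Intel CPU

[Service]
# oneshot + RemainAfterExit so "stop" runs ExecStop to re-enable turbo
Type=oneshot
RemainAfterExit=yes
# Writing 1 to no_turbo disables turbo boost (intel_pstate driver only)
ExecStart=/bin/sh -c "/usr/bin/echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo"
ExecStop=/bin/sh -c "/usr/bin/echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo"

[Install]
WantedBy=sysinit.target
EOF
```

The [Install] section is only needed if you want to enable the unit at boot with systemctl enable.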
Reload systemd manager configuration.
sudo systemctl daemon-reload
Test it by running something CPU intensive and watching the current running MHz.
cat /dev/urandom > /dev/null &
lscpu |grep "CPU MHz"
CPU MHz: 3499.859
Now disable turbo boost and check the CPU speed again.
sudo systemctl start disable-turbo-boost
lscpu |grep "CPU MHz"
CPU MHz: 2699.987
Don’t forget to kill the CPU intensive process.
I recently got a new Dell XPS 13 (9360) laptop for work and it’s running Fedora pretty much perfectly.
However, when I load up Cheese (or some other webcam program) the video from the webcam flickers. Given that I live in Australia, I had to change the powerline frequency from 60Hz to 50Hz to fix it.
sudo dnf install v4l-utils
v4l2-ctl --set-ctrl power_line_frequency=1
I wanted this to be permanent each time I turned my machine on, so I created a udev rule to handle that.
cat << EOF | sudo tee /etc/udev/rules.d/50-dell-webcam.rules
PROGRAM="/usr/bin/v4l2-ctl --set-ctrl \
power_line_frequency=1 --device /dev/%k", \
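A complete rule might look like the following; the match keys (KERNEL, SUBSYSTEM, ACTION) are my assumption for a V4L2 webcam device:

```shell
cat << EOF | sudo tee /etc/udev/rules.d/50-dell-webcam.rules
# Set 50Hz power line frequency on video4linux devices as they appear;
# %k expands to the kernel device name, e.g. video0
KERNEL=="video[0-9]*", SUBSYSTEM=="video4linux", ACTION=="add", PROGRAM="/usr/bin/v4l2-ctl --set-ctrl power_line_frequency=1 --device /dev/%k"
EOF
```

After writing the rule, reload it with sudo udevadm control --reload-rules before testing.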
It’s easy to test. Just turn the flicker back on, reload the rules and watch the flicker in Cheese automatically disappear.
Presentation to Linux Users of Victoria, 7th February, 2017
An overview of cloud computing platforms in general, and OpenStack in particular, introduces this presentation. Cloud computing is one of the most significant changes to IT infrastructure and employment in the past decade, with major corporate services (Amazon, Microsoft) gaining particular significance in the late 2000s. In mid-2010, Rackspace Hosting and NASA jointly launched an open-source cloud-software initiative known as OpenStack, with initial code coming from NASA's Nebula project and Rackspace's Cloud Files project; it soon gained prominence as the largest open-source cloud platform. Although a cross-platform service, it was quickly available on various Linux distributions, including Debian, Ubuntu, SuSE (2011), and Red Hat (2012).
OpenStack is governed by the OpenStack Foundation, a non-profit corporate entity established in September 2012. Correlating with the release cycle of the product, OpenStack Summits are held every six months for developers, users and managers. The most recent Summit was held in Barcelona in late October 2016, with over 5000 attendees, almost 1000 organisations and companies, and 500 sessions, spread out over three days, plus one day of "Upstream University" prior to the main schedule, and one day after the main schedule for contributor working parties. The presentation will cover the major announcements of the conference, a brief overview of the major streams, and the direction of OpenStack as the November Sydney Summit approaches.
Debian/Stretch has been frozen. Before the freeze I got almost all the bugs in policy fixed, both bugs reported in the Debian BTS and bugs that I know about. This is going to be one of the best Debian releases for SE Linux ever.
Systemd with SE Linux is working nicely. The support isn’t as good as I would like, there is still work to be done for systemd-nspawn. But it’s close enough that anyone who needs to use it can use audit2allow to generate the extra rules needed. Systemd-nspawn is not used by default and it’s not something that a new Linux user is going to use; I think that expert users who are capable of using such features are capable of doing the extra work to get them going.
In terms of systemd-nspawn and some other rough edges, the issue is the difference between writing policy for a single system vs writing policy that works for everyone. If you write policy for your own system you can allow access for a corner case without a lot of effort. But if I wrote policy to allow access for every corner case then they might add up to a combination that can be exploited. I don’t recommend blindly adding the output of audit2allow to your local policy (be particularly wary of access to shadow_t and write access to etc_t, lib_t, etc). But OTOH if you have a system that’s running in enforcing mode that happens to have one daemon with more access than is ideal then all the other daemons will still be restricted.
As with previous releases I plan to keep releasing updates to policy packages in my own apt repository. I’m also considering releasing policy source updates that can be applied on existing Stretch systems. So if you want to run the official Debian packages but need updates that came after Stretch, you can get them. Suggestions on how to distribute such policy source are welcome.
Please enjoy SE Linux on Stretch. It’s too late for most bug reports regarding Stretch as most of them won’t be sufficiently important to justify a Stretch update. The vast majority of SE Linux policy bugs are issues of denying wanted access not permitting unwanted access (so not a security issue) and can be easily fixed by local configuration, so it’s really difficult to make a case for an update to Stable. But feel free to send bug reports for Buster (Stretch+1).
Here is how I managed to extend my OpenVPN setup on my Linode VPS to include IPv6 traffic. This ensures that clients can route all of their traffic through the VPN and avoid leaking IPv6 traffic, for example. It also enables clients on IPv4-only networks to receive a routable IPv6 address and connect to IPv6-only servers (i.e. running your own IPv6 broker).

Request an additional IPv6 block
The first thing you need to do is get a new IPv6 address block (or "pool" as Linode calls it) from which you can allocate a single address to each VPN client that connects to the server.
If you are using a Linode VPS, there are instructions on how to request a new IPv6 pool. Note that you need to get an address block between /64 and /112. A /116 like Linode offers won't work in OpenVPN. Thankfully, Linode is happy to allocate you an extra /64 for free.

Setup the new IPv6 address
If your server only has a single IPv4 address and a single IPv6 address, then a simple DHCP-backed network configuration will work fine. To add the second IPv6 block on the other hand, I had to change my network configuration (/etc/network/interfaces) to this:

auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet dhcp
    pre-up iptables-restore /etc/network/iptables.up.rules

iface eth0 inet6 static
    address 2600:3c01::xxxx:xxxx:xxxx:939f/64
    gateway fe80::1
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules

iface tun0 inet6 static
    address 2600:3c01:xxxx:xxxx::/64
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules
where 2600:3c01::xxxx:xxxx:xxxx:939f/64 (bound to eth0) is your main IPv6 address and 2600:3c01:xxxx:xxxx::/64 (bound to tun0) is the new block you requested.
Once you've setup the new IPv6 block, test it from another IPv6-enabled host using:

ping6 2600:3c01:xxxx:xxxx::1

OpenVPN configuration
The only thing I had to change in my OpenVPN configuration (/etc/openvpn/server.conf) was to change:

proto udp

in order to make the VPN server available over both IPv4 and IPv6, and to add the following lines:

server-ipv6 2600:3c01:xxxx:xxxx::/64
push "route-ipv6 2000::/3"
to bind to the right V6 address and to tell clients to tunnel all V6 Internet traffic through the VPN.
In addition to updating the OpenVPN config, you will need to add the following line to /etc/sysctl.d/openvpn.conf:

net.ipv6.conf.all.forwarding=1
and the following to your firewall (e.g. /etc/network/ip6tables.up.rules):

# openvpn
-A INPUT -p udp --dport 1194 -j ACCEPT
-A FORWARD -m state --state NEW -i tun0 -o eth0 -s 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state NEW -i eth0 -o tun0 -d 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
in order to ensure that IPv6 packets are forwarded from the eth0 network interface to tun0 on the VPN server.
With all of this done, apply the settings by running:

sysctl -p /etc/sysctl.d/openvpn.conf
ip6tables-apply
systemctl restart openvpn.service

Testing the connection
Now connect to the VPN using your desktop client and check that the default IPv6 route is set correctly using ip -6 route.
Then you can ping the server's new IP address:

ping6 2600:3c01:xxxx:xxxx::1

and from the server, you can ping the client's IP (which you can see in the network settings):

ping6 2600:3c01:xxxx:xxxx::1002

Once both ends of the tunnel can talk to each other, you can try pinging an IPv6-only server from your client:

ping6 ipv6.google.com

and then pinging your client from an IPv6-enabled host somewhere:

ping6 2600:3c01:xxxx:xxxx::1002
Linux Users of Victoria (LUV) Announce: LUV Main February 2017 Meeting: OpenStack Summit/Data Structures and Algorithms
Tuesday, February 7, 2017
6:30 PM to 8:30 PM
6th Floor, Trinity College (EPA Victoria building)
200 Victoria St., Carlton
• Lev Lafayette, OpenStack and the OpenStack Barcelona Summit
• Jacinta Richardson, Data Structures and Algorithms in the 21st Century
200 Victoria St. Carlton VIC 3053 (the EPA building)
Late arrivals needing access to the building and the sixth floor please call 0490 049 589.
Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.
LUV would like to acknowledge Red Hat for their help in obtaining the venue.
Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.
It's been a good three years now since I swapped my HP laptop for a Macbook Pro. In the mean time, I've started doing a bit more astrophotography and of course the change of operating system has affected the tools I use to obtain and process photos.
Amateur astronomers have traditionally mostly used Windows, so there are a lot of Windows tools, both freeware and payware, to help. I used to run the freeware ones in Wine on Ubuntu with varying levels of success.
When I first got the Mac, I had a lot of trouble getting Wine to run reliably and eventually ended up doing my alignment and processing manually in The Gimp. However, that's time consuming and rather fiddly and limited to stacking static exposures.
However, I've recently started finding quite a bit of Mac OS based astrophotography software. I don't know if that means it's all fairly new or whether my Google skills failed me over the past years :-)

Software
I thought I'd document what I use, in the hope that I can save others who want to use their Macs some searching.
Some are Windows software, but run OK on Mac OS X. You can turn them into normal double click applications using a utility called WineSkin Winery.
Obtaining data from video camera:
Format-converting video data:
- Handbrake (Mac OS X, free, open source)
Processing video data:
- AutoStakkert! (Windows + Wine, free for non-commercial use, donationware)
Obtaining data from DSLR:
- AstroDSLR (Mac OS X, payware, free trial)
Processing and stacking DSLR files and post-processing video stacks:
- The Gimp (Mac OS X, free, open source)
A few weeks ago I bought a ZWO ASI120MC-S astro camera, as that was on sale and listed by Nebulosity as supported by OSX. Until then I'd messed around with a hacked up Logitech webcam, which seemed to only be supported by the Photo Booth app.
I've not done any guiding yet (I need a way to mount the guide scope on the main scope - d'oh) but the camera works well with Nebulosity 4 and oaCapture. I'm looking forward to being able to grab Jupiter with it in a month or so and Saturn and Mars later this year.
The image to the right is a stack of 24x5 second unguided exposures of the trapezium in M42. Not too bad for a quick test on a half-moon night.
Had a friendly meeting a few days ago with a young person debating their future career path. They had a very good IT-orientated resume (give this person a job, seriously) but were debating whether they should go down the path of a Business Analyst. It was fairly clear that they lived and breathed IT, whereas the BA choice was one of some indifference. In reverse, there was a situation one year at VPAC when it quickly became obvious that none of the summer school graduates had any passion for IT.
We’ve just released a new experimental mode for Digital Voice called FreeDV 800XA. This uses the Codec 2 700C mode, 100 bit/s for synchronisation, and a 4FSK modem – actually the same modem that has been so successful for images from High Altitude Balloons.
FSK has the advantage of being a constant amplitude waveform, so efficient class C amplifiers can be used. However as it currently stands, 800XA has no real protection for the multipath common on HF channels, for example symbols that have an echo delayed by a few ms.
So I decided to start looking at equalisers. Some Googling suggested the Constant Modulus Algorithm (CMA) Equaliser might be a suitable choice for FSK, and turned up some sample code on DSP stack exchange.
I had a bit of trouble getting the algorithm to work for bandpass FSK signals, so posted this question on CMA equalisation for FSK. I received some kind help, and eventually made the equaliser work on a simulated HF channel. Here is the Octave simulation cma.m
How it works
The equaliser attempts to correct for the channel using the received signal, which is corrupted by noise.
There is a “gotcha” in using a FIR filter to equalise a channel response. Consider a channel H(z) with a simple 3 sample impulse response h(n). Now we could equalise this with the exact inverse 1/H(z). Here is a plot of our example channel frequency response and the ideal equaliser which is exactly the inverse:
Now here is a plot of the impulse responses of the channel h(n), and equaliser h'(n):
The ideal equaliser response h'(n) is much longer than the 3 samples of the channel impulse response h(n). The CMA algorithm requires our equaliser to be an FIR filter. Counter-intuitively, we need to use an FIR equaliser with a number of taps significantly larger than the expected channel impulse response we are trying to equalise.
One explanation for this: the channel response can be considered to be a Finite Impulse Response (FIR) filter H(z). The exact inverse 1/H(z), when expressed in the time domain, is an Infinite Impulse Response (IIR) filter, which has, you know, an infinitely long impulse response!
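This is easy to check numerically; here is a short Python sketch (the 3-tap channel values are an arbitrary minimum-phase example, not taken from the post):

```python
import numpy as np

h = np.array([1.0, 0.5, 0.2])      # 3-sample channel impulse response

# Impulse response of the exact inverse 1/H(z): run an impulse through
# the IIR filter y[n] = x[n] - 0.5*y[n-1] - 0.2*y[n-2]
N = 50
h_inv = np.zeros(N)
x = np.zeros(N); x[0] = 1.0
for n in range(N):
    h_inv[n] = x[n]
    if n >= 1: h_inv[n] -= h[1] * h_inv[n - 1]
    if n >= 2: h_inv[n] -= h[2] * h_inv[n - 2]

# The inverse response decays but never strictly ends (IIR), so an FIR
# equaliser needs many more taps than the channel has. A 20-tap
# truncation of the inverse still leaves only a tiny residual:
combined = np.convolve(h, h_inv[:20])
print("tail energy of 20-tap equaliser:", np.sum(combined[1:] ** 2))
```

For a well-behaved (minimum phase) channel the inverse decays geometrically, so a modest number of extra taps suffices; the point is simply that "3-tap channel" does not mean "3-tap equaliser".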
The figures below show the CMA equaliser doing its thing in a multipath channel with AWGN noise. In Figure 1 the error is reduced over time, and the lower plot shows the combined channel-equaliser impulse response. If the equaliser were perfect the combined channel-equaliser response would be 1.
Figure 2 below shows the CMA going to work on a FSK signal. The top subplot is the transmitted FSK signal, you can see the two different frequencies in the waveform. The middle plot shows the received signal, after it has been messed up by the multipath channel. It’s clear that the tone amplitudes are different. Looking carefully at the point where the tones transition (e.g. around sample 25 and 65) there is intersymbol interference due to multipath echoes, messing up the start of each FSK symbol.
However in the bottom subplot the equaliser has worked its magic and the waveform is looking quite nice. The tone levels are nearly equal and much of the ISI is removed. Yayyyyyy.
Figure 3 shows the magnitude frequency response at several stages in the simulation. The top subplot is the channel response. It’s a comb filter, typical of multipath channels. The middle subplot is the equaliser response. Ideally, this should be the exact inverse of the channel. It’s pretty close at the low end but seems to lose its way at very low and high frequencies. The lower plot is the combined response, which is close to 0dB at the low frequencies. Cool.
Figure 4 is the transmit spectrum of the modem signal (top), and the spectrum after the channel has mangled it (lower). Note one tone is now lower than the other. Also note that the modem signal only has energy in the low-mid range of the spectrum. This might explain why the equaliser does a good job in that region of the spectrum – it’s where we have energy to drive the adaption.
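To make the adaptation concrete, here is a minimal real-valued CMA loop (a Python sketch of the same idea as cma.m, not the actual simulation; BPSK through a toy two-path channel stands in for the FSK case):

```python
import numpy as np
rng = np.random.default_rng(0)

# BPSK through a simple two-path channel: direct path plus a
# one-sample echo at half amplitude
tx = 2.0 * rng.integers(0, 2, 20000) - 1.0
rx = tx + 0.5 * np.concatenate([[0.0], tx[:-1]])

ntaps = 11
w = np.zeros(ntaps); w[0] = 1.0   # start as a pass-through filter
mu = 0.002                        # adaption gain
R2 = 1.0                          # constant modulus target, |y|^2 = 1

cost = []
for n in range(ntaps, len(rx)):
    x = rx[n - ntaps + 1:n + 1][::-1]   # regressor, most recent sample first
    y = np.dot(w, x)                    # equaliser output
    err = y * (y * y - R2)              # gradient of the CMA cost (y^2 - R2)^2
    w -= mu * err * x                   # stochastic gradient update
    cost.append((y * y - R2) ** 2)

print("cost over first 1000 symbols: %.3f" % np.mean(cost[:1000]))
print("cost over last 1000 symbols:  %.4f" % np.mean(cost[-1000:]))
```

The cost (deviation from constant modulus) falls as the taps converge towards the channel inverse; no training sequence is needed, which is the attraction of CMA, but as noted below the convergence is slow by PTT standards.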
Problems for HF Digital Voice
Unfortunately the CMA equaliser only works well at high SNRs, and takes seconds to converge. I am interested in low SNR (around 0dB in a 3000 Hz noise bandwidth) and it’s Push To Talk (PTT) radio, so we need fast initial training, around 100ms. Then it must follow the time varying HF channel, continually retraining on the fly.
For further work I really should measure BER versus Eb/No for a variety of SNRs and convergence times, and measure what BER improvement we are buying with equalisation. BER is King, much easier than squinting at time domain waveforms.
If the CMA cost function was used with known information (like pilot symbols or the Unique Word we have in 800XA) it might be able to work faster. This would involve deconvolution on the fly, rather than using iterative or adaptive techniques.