Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Tim Serong: Random Test Subject

Mon, 2017-01-30 00:04

Almost every time I fly, it seems like I get pulled aside for the random explosives trace detection test. I always assumed it was because I usually look like a crazy mountain man (see photo). But, if you google around for “airport random explosives test”, you’ll find forum posts from security staff assuring everyone they’re not doing profiling, and even a helpful FAQ from Newcastle Airport (“Why are you always chosen for ‘explosive testing’?”) which says the process is “as the officer finishes screening one person, they are required to ask the next person walking through screening to undertake the ETD test”.

So maybe it’s just bad luck. Except possibly for that time at Hobart airport last week, where I was seeing off a colleague after linux.conf.au 2017. As far as I could tell, we were the only two people approaching security, and my colleague was in front. He was waved through to the regular security screening, and I was asked over for an explosives test, to which I replied “you’re most welcome to test me if you like, but I’m not actually going through security into departures”. The poor guy looked a bit nonplussed at this, then moved on to the next traveler who’d since appeared in line behind us.

What to do about this in future? Obviously, I need a new t-shirt, with text something like one of these:

If anyone else would like a t-shirt along these lines, the images above conveniently link to my Redbubble store. Or, if you’d rather DIY, there’s PNGs here, here and here (CC-BY-SA as usual, and no, they’re not broken, it’s white text on a transparent background).

Finally, for some real randomness, check out Keith Packard’s ChaosKey To Production presentation. I’m not actually affiliated with Keith, but the ChaosKey sure looks nifty.

Clinton Roy: South Coast Track Report

Sun, 2017-01-29 22:00

Please note this is a work in progress

I had previously stated my intention to walk the South Coast Track. I have now completed this walk and want a space where I can collect all my thoughts.

Photos: Google Photos album

The sections I’m referring to here come straight from the guide book. Due to the walking weather and tides all being in our favour, we managed to do the walk in six days. We flew in late on the first day and did not finish section one of the walk; on the second day we finished section one and then completed sections two and three. On day three it was just the Ironbound range. On day four it was just section five. Day five we completed section six and the tiny section seven. Day six was section eight and day seven was Cockle Creek (TODO something’s not adding up here)

The hardest day, not surprisingly, was day three where we tackled the Ironbound range, 900m up, then down. The surprising bit was how easy the ascent was and how god damn hard the descent was. The guide book says there are three rest camps on the descent, with one just below the peak, a perfect spot for lunch. Either this camp is hidden (e.g. you have to look behind you) or it’s overgrown, as we all missed it. This meant we ended up skipping lunch and were slipping down the wet, muddy, awful descent side for hours. When we came across the mid rest camp stop, because we’d been walking for so long, everyone assumed we were at the lower camp stop and that we were therefore only an hour or so away from camp. Another three hours or so later we actually came across the lower camp site, and by that time all sense of proportion was lost and I was starting to get worried that somehow we’d gotten lost, were not on the right trail, and that we’d run out of light. In the end I got into camp about an hour before sundown (approx eight) and B&R got in about half an hour before sundown. I was utterly exhausted, got some water, pitched the tent, collapsed in it and fell asleep. I woke up close to midnight, realised I hadn’t had any lunch or dinner, and still wasn’t actually feeling hungry. I forced myself to eat a hot meal, then collapsed in bed again.

TODO: very easy to follow trail.
TODO: just about everything worked.
TODO: spork
TODO: solar panel
TODO: not eating properly
TODO: needing more warmth

I could not have asked for better walking companions, Richard and Bec.


Filed under: camping, Uncategorized

Julien Goodwin: Charging my ThinkPad X1 Gen4 using USB-C

Sun, 2017-01-29 00:02
As with many massive time-sucking rabbit holes in my life, this one starts with one of my silly ideas getting egged on by some of my colleagues in London (who know full well who they are), but for a nice change, this is something I can talk about.

I have a rather excessive number of laptops, at the moment my three main ones are a rather ancient Lenovo T430 (personal), a Lenovo X1 Gen4, and a Chromebook Pixel 2 (both work).

At the start of last year I had a T430s in place of the X1, and was planning on replacing both it and my personal ThinkPad mid-year. However, both of those older laptops used Lenovo's long-held (back to the IBM days) barrel charger, which led to me having a heap of them in various locations at home and work. All the newer machines switched to Lenovo's newer rectangular "slim" style power connector, and while adapters exist, I decided to go in a different direction.

One of the less-touted features of USB-C is USB-PD[1], which allows devices to be fed up to 100W of power, and can do so while using the port for data (or the other great feature of USB-C, alternate modes, such as DisplayPort, great for docks), which is starting to be used as a way to charge laptops, such as the Chromebook Pixel 2, various models of the Apple MacBook line, and more.

Instead of buying a heap of slim-style Lenovo chargers, or a load of adapters (which would inevitably disappear over time) I decided to bridge towards the future by making an adapter to allow me to charge slim-type ThinkPads (at least the smaller ones, not the portable workstations which demand 120W or more).

After doing some research on what USB-PD platforms were available at the time I settled on the TI TPS65986 chip, which, with only an external flash chip, would do all that I needed.

Devkits were ordered to experiment with, and prove the concept, which they did very quickly, so I started on building the circuit, since just reusing the devkit boards would lead to an adapter larger than would be sensible. As the TI chip is a many-pin BGA, and breaking it out on 2-layers would probably be too hard for my meager PCB design skills, I needed a 4-layer board, so I decided to use KiCad for the project.

It took me about a week of evenings to get the schematic fully sorted, with much of the time spent reading the chip datasheet, or digging through the devkit schematic to see what they did there for some cases that weren't clear, then almost a month for the actual PCB layout, with much of the time being sucked up learning a tool that was brand new to me, and also fairly obtuse.

By mid-June I had a PCB which should (but, spoiler, wouldn't) work. However, as mentioned, the TI chip is a 96-ball 3x3mm BGA, something I had no hope of manually placing for reflow, and of course no hope of hand soldering, so I would need to get these manufactured commercially. Luckily there are several options for small scale assembly at very reasonable prices, and I decided to try a new company still (at the time of ordering) in closed trials, PCB.NG. They have a nice simple procedure to upload board files, and a slightly custom pick & place file that includes references to the exact component I want by Digikey[link] part number. Best of all the pricing was completely reasonable, with a first test run of six boards only costing me US$30 each.

Late in June I received a mail from PCB.NG telling me that they'd built my boards, but that I had made a mistake with the footprint I'd used for the USB-C connector, and they were posting my boards along with the connectors. As I'd had them ship the order to California (at the time they didn't seem to offer international shipping) it took a while for them to arrive in Sydney, courtesy of a coworker.

I tried to modify a connector by removing all the through hole board locks, keeping just the surface mount pins, however I was unsuccessful, and that's where the project stalled until mid-October when I was in California myself, and was able to get help from a coworker who can perform miracles of surface mount soldering (while they were working on my board they were also dead-bug mounting a BGA). Sadly, while I now had a board I could test, it simply dropped off my priority list for months.

At the start of January another of my colleagues (a US-based teammate of the London rabble-rousers) asked for a status update, which prompted me to get off my butt and perform the testing. The next day I added some reinforcement to the connector, which was only really held on by the surface mount pins and was highly likely to rip off the board, by covering it in epoxy. Then I knocked up some USB A plug/socket to bare wires test adapters using some stuff from the junk bin we have at the office maker space for just this sort of occasion (the socket was actually a front panel USB port from an old IBM x-series server). With some trepidation I plugged the board into my newly built & tested adapter, and powered the board from a lab supply set to limit current in case I had any shorts in the board. It all came up straight away, and even lit the LEDs I'd added for some user feedback.

Next was to load a firmware for the chip. I'd previously used TI's tool to create a firmware image, and after some messing around with the SPI flash programmer I'd purchased I managed to get the board programmed. However the behaviour of the board didn't change with (what I thought was) real firmware, so I used an oscilloscope to verify the flash was being read, and a Twinkie to sniff the PD negotiation, which confirmed that no request for 20v was being sent. This was where I finished that day.

Over the weekend that followed I dug into what I'd seen and determined that either I'd killed the SPI MISO port (the programmer I used was 5v, not 3.3v), or I just had bad firmware and the chip had some good defaults. I created a new firmware image from scratch, and loaded that.

Sure enough it worked first try. Once I confirmed 20v was coming from the output ports I attached it to my recently acquired HP 6051A DC load where it happily sank 45W for a while, then I attached the cable part of a Lenovo barrel to slim adapter and plugged it into my X1 where it started charging right away.

At linux.conf.au last week I gave (part of) a hardware miniconf talk about USB-C & USB-PD, which open source hardware folk might be interested in. Over the last few days while visiting my dad down in Gippsland I made the edits to fix the footprint and sent a new rev to the manufacturer for some new experiments.

Of course at CES Lenovo announced that this year's ThinkPads would feature USB-C ports and allow charging through them, and due to laziness I never got around to replacing my T430, so I'm planning to order a T470 as soon as they're available, making my adapter obsolete.





Rough timeline:
  • April 21st 2016, decide to start working on the project
  • April 28th, devkits arrive
  • May 8th, schematic largely complete, work starts on PCB layout
  • June 14th, order sent to CM
  • ~July 6th, CM ships order to me (to California, then hand carried to me by a coworker)
  • early August, boards arrive from California
  • Somewhere here I try, and fail, to reflow a modified connector onto a board
  • October 13th, California coworker helps to (successfully) reflow a USB-C connector onto a board for testing
  • January 6th 2017, finally got around to reinforcing the connector with epoxy and started testing, tried loading firmware but no dice
  • January 10th, redo firmware, it works, test on DC load, then modify a ThinkPad-slim adapter and test on a real ThinkPad
  • January 25/26th, fixed USB-C connector footprint, made one more minor tweak, sent order for rev2 to CM, then some back & forth over some tolerance issues they're now stricter on.


1: There's a previous variant of USB-PD that works on the older A/B connector, but, as far as I'm aware, was never implemented in any notable products.

Michael Still: Recording performance information from short lived processes with prometheus

Sat, 2017-01-28 16:00
Now that I'm recording basic statistics about the behavior of my machines, I want to start tracking some statistics from various scripts I have lying around in cron jobs. In order to make myself sound smarter, I'm going to call these short lived scripts "ephemeral scripts" throughout this document. You're welcome.

The promethean way of doing this is to have a relay process. Prometheus really wants to know where to find web servers to learn things from, and my ephemeral scripts are both not permanently around and also not running web servers. Luckily, prometheus has a thing called the pushgateway which is designed to handle this situation. I can run just one of these, and then have all my little scripts just tell it things to add to its metrics. Then prometheus regularly scrapes this one process and learns things about those scripts. It's like a game of Telephone, but for processes really.

First off, let's get the pushgateway running. This is basically the same as the node_exporter from last time:

$ wget https://github.com/prometheus/pushgateway/releases/download/v0.3.1/pushgateway-0.3.1.linux-386.tar.gz
$ tar xvzf pushgateway-0.3.1.linux-386.tar.gz
$ cd pushgateway-0.3.1.linux-386
$ ./pushgateway

Let's assume once again that we're all adults and did something nicer than that involving configuration management and init scripts.

The pushgateway implements a relatively simple HTTP protocol to add values to the metrics that it reports. Note that the values won't change once set until you change them again; they're not garbage collected or aged out or anything fancy. Here's a trivial example of adding a value to the pushgateway:

echo "some_metric 3.14" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job

This is stolen straight from the pushgateway README of course. The above command will have the pushgateway start to report a metric called "some_metric" with the value "3.14", for a job called "some_job". In other words, we'll get this in the pushgateway metrics URL:

# TYPE some_metric untyped
some_metric{instance="",job="some_job"} 3.14

You can see that this isn't perfect because the metric is untyped (what types exist? we haven't covered that yet!), and has these confusing instance and job labels. One tangent at a time, so let's explain instances and jobs first.

On jobs and instances

Prometheus is built for a universe a little bit unlike my home lab. Specifically, it expects there to be groups of processes doing a thing instead of just one. This is especially true because it doesn't really expect things like the pushgateway to be proxying your metrics for you because there is an assumption that every process will be running its own metrics server. This leads to some warts, which I'll explain in a second. Let's start by explaining jobs and instances.

For a moment, assume that we're running the world's most popular wordpress site. The basic architecture for our site is web frontends which run wordpress, and database servers which store the content that wordpress is going to render. When we first started our site it was all easy, as they could both be on the same machine or cloud instance. As we grew, we were first forced to split apart the frontend and the database into separate instances, and then forced to scale those two independently -- perhaps we have reasonable database performance so we ended up with more web frontends than we did database servers.

So, we go from something like this:



To an architecture which looks a bit like this:



Now, in prometheus (i.e. google) terms, there are three jobs here. We have web frontends, database masters (the top one which is getting all the writes), and database slaves (the bottom one which everyone is reading from). For one of the jobs, the frontends, there is more than one instance of the job. To put that into pictures:



So, the topmost frontend job would be job="fe" and instance="0". Google also had a cool way to lookup jobs and instances via DNS, but that's a story for another day.

To harp on a point here, all of these processes would be running a web server exporting metrics in google land -- that means that prometheus would know that it's monitoring a frontend job because it would be listed in the configuration file as such. You can see this in the configuration file from the previous post. Here's the relevant snippet again:

- job_name: 'node'
  static_configs:
    - targets: ['molokai:9100', 'dell:9100', 'eeebox:9100']

The job "node" runs on three targets (instances), named "molokai:9100", "dell:9100", and "eeebox:9100".

However, we live in the ghetto for these ephemeral scripts and want to use the pushgateway for more than one such script, so we have to tell lies via the pushgateway. So for my simple ephemeral script, we'll tell the pushgateway that the job is the script name and the instance can be an empty string. If we don't do that, then prometheus will think that the metric relates to the pushgateway process itself, instead of the ephemeral process.

We tell the pushgateway what job and instance to use like this:

echo "some_metric 3.14" | curl --data-binary @- http://localhost:9091/metrics/job/frontend/instance/0

Now we'll get this at the metrics URL:

# TYPE some_metric untyped
some_metric{instance="",job="some_job"} 3.14
some_metric{instance="0",job="frontend"} 3.14

The first metric there is from our previous attempt (remember when I said that values are never cleared out?), and the second one is from our second attempt. To clear out values you'll need to restart the pushgateway process. For simple ephemeral scripts, I think it's ok to leave the instance empty, and just set a job name -- as long as that job name is globally unique.
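As an aside, more recent versions of the pushgateway also accept an HTTP DELETE on the same URL scheme to drop a metric group without a restart -- I haven't checked whether 0.3.1 supports it, so treat this as a pointer rather than a promise:

curl -X DELETE http://localhost:9091/metrics/job/some_job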

We also need to tell prometheus to believe our lies about the job and instance for things reported by the pushgateway. The scrape configuration for the pushgateway therefore ends up looking like this:

- job_name: 'pushgateway'
  honor_labels: true
  static_configs:
    - targets: ['molokai:9091']

Note the honor_labels there, that's the believing the lies bit.

There is one thing to remember here before we can move on: job names are being blindly trusted from our reporting, so it's now up to us to keep job names unique. If we export a metric on every machine, we might want to keep the job name specific to the machine. That said, it really depends on what you're trying to do -- so just pay attention when picking job and instance names.

On metric types

Prometheus supports a couple of different types for the metrics which are exported. For now we'll discuss two, and we'll cover the third later. The types are:

  • Gauge: a value which goes up and down over time, like the fuel gauge in your car. Non-motoring examples would include the amount of free disk space on a given partition, the amount of CPU in use, and so forth.
  • Counter: a value which always increases. This might be something like the number of bytes sent by a network card -- the value only resets when the network card is reset (probably by a reboot). These only-increasing types are valuable because it's easier to do maths on them in the monitoring system.
  • Histograms: a set of values broken into buckets. For example, the response time for a given web page would probably be reported as a histogram. We'll discuss histograms in more detail in a later post.


I don't really want to dig too deeply into the value types right now, apart from explaining that our previous examples haven't specified a type for the metrics being provided, and that this is undesirable. For now we just need to decide if the value goes up and down (a gauge) or just up (a counter). You can read more about prometheus types at https://prometheus.io/docs/concepts/metric_types/ if you want to.

A typed example

So now we can go back and do the same thing as before, but we can do it with typing like adults would. Let's assume that the value of pi is a gauge, and goes up and down depending on the vagaries of space time. Let's also show that we can add a second metric at the same time because we're fancy like that. We'd therefore need to end up doing something like (again heavily based on the contents of the README):

cat <<EOF | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/frontend/instance/0
# TYPE some_metric gauge
# HELP approximate value of pi in the current space time continuum
some_metric 3.14
# TYPE another_metric counter
# HELP another_metric Just an example.
another_metric 2398
EOF

And we'd end up with values like this in the pushgateway metrics URL:

# TYPE some_metric gauge
some_metric{instance="0",job="frontend"} 3.14
# HELP another_metric Just an example.
# TYPE another_metric counter
another_metric{instance="0",job="frontend"} 2398

A tangible example

So that's a lot of talking. Let's deploy this in my home lab for something actually useful. The node_exporter does not report any SMART health details for disks, and that's probably a thing I'd want to alert on. So I wrote this simple script:

#!/bin/bash

hostname=`hostname | cut -f 1 -d "."`

for disk in /dev/sd[a-z]
do
  disk=`basename $disk`

  # Is this a USB thumb drive?
  if [ `/usr/sbin/smartctl -H /dev/$disk | grep -c "Unknown USB bridge"` -gt 0 ]
  then
    result=1
  else
    result=`/usr/sbin/smartctl -H /dev/$disk | grep -c "overall-health self-assessment test result: PASSED"`
  fi

  cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/$hostname/instance/$disk
# TYPE smart_health_passed gauge
# HELP whether or not a disk passed a "smartctl -H /dev/sdX"
smart_health_passed $result
EOF
done

Now, that's not perfect and I am sure that I'll re-write this in python later, but it is actually quite useful already. It will report if a SMART health check failed, and now I could write an alerting rule which looks for disks with a health value of 0 and send myself an email to go to the hard disk shop. Once your pushgateways are being scraped by prometheus, you'll end up with something like this in the console:



I'll explain how to turn this into alerting later.
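In the meantime, here's a rough sketch of what such a rule might look like in the pre-2.0 rule file syntax (something you'd drop into one of the rule_files listed in the previous post's config) -- the alert name, duration and annotation text are just placeholders of mine:

ALERT DiskSmartHealthFailed
  IF smart_health_passed == 0
  FOR 10m
  ANNOTATIONS {
    summary = "SMART health check failing for {{ $labels.instance }} on {{ $labels.job }}",
  }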

Tags for this post: prometheus monitoring ephemeral_script pushgateway
Related posts: A pythonic example of recording metrics about ephemeral scripts with prometheus; Basic prometheus setup; Mona Lisa Overdrive; The Diamond Age ; Buying Time; The System of the World


Michael Still: Basic prometheus setup

Fri, 2017-01-27 16:00
I've been playing with prometheus for monitoring. It feels quite familiar to me because it's based on an internal google technology called borgmon, but I suspect that means it feels really weird to everyone else.

The first thing to realize is that everything at google is a web server. Your short lived tool that copies some files around probably runs a web server. All of these web servers have built in URLs which report the progress and status of the task at hand. Prometheus is built to: scrape those web servers; aggregate the data; store the data into a time series database; and then perform dashboarding, trending and alerting on that data.

The most basic example is to just export metrics for each machine on my home network. This is the easiest first step, because we don't need to build any software to do this. First off, let's install node_exporter on each machine. node_exporter is the tool which runs a web server to export metrics for each node. Everything in prometheus land is written in go, which is new to me. However, it does make running node exporter easy -- just grab the relevant binary from https://prometheus.io/download/, untar, and run. Let's do it in a command line script example thing:

$ wget https://github.com/prometheus/node_exporter/releases/download/v0.14.0-rc.1/node_exporter-0.14.0-rc.1.linux-386.tar.gz
$ tar xvzf node_exporter-0.14.0-rc.1.linux-386.tar.gz
$ cd node_exporter-0.14.0-rc.1.linux-386
$ ./node_exporter

That's all it takes to run the node_exporter. This runs a web server at port 9100, which exposes the following metrics:

$ curl -s http://localhost:9100/metrics | grep filesystem_free | grep 'mountpoint="/data"'
node_filesystem_free{device="/dev/mapper/raidvg-srvlv",fstype="xfs",mountpoint="/data"} 6.811044864e+11

Here you can see that the system I'm running on is exporting a filesystem_free value for the filesystem mounted at /data. There's a lot more than that exported, and I'd encourage you to poke around at that URL a little before continuing on.

So that's lovely, but we really want to record that over time. So let's assume that you have one of those running on each of your machines, and that you have it set up to start on boot. I'll leave the details of that out of this post, but let's just say I used my existing puppet infrastructure.
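If you're not running puppet, a minimal systemd unit along the following lines would do the job -- the binary path and user here are assumptions on my part, so adjust them to match wherever you put node_exporter:

[Unit]
Description=Prometheus node_exporter
After=network.target

[Service]
User=prometheus
ExecStart=/usr/local/bin/node_exporter
Restart=always

[Install]
WantedBy=multi-user.target

Drop that into /etc/systemd/system/node_exporter.service, then systemctl enable --now node_exporter.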

Now we need the central process which collects and records the values. That's the actual prometheus binary. Installation is again trivial:

$ wget https://github.com/prometheus/prometheus/releases/download/v1.5.0/prometheus-1.5.0.linux-386.tar.gz
$ tar xvzf prometheus-1.5.0.linux-386.tar.gz
$ cd prometheus-1.5.0.linux-386

Now we need to move some things around to install this nicely. I did the puppet equivalent of:

  • Moving the prometheus file to /usr/bin
  • Creating an /etc/prometheus directory and moving console_libraries and consoles into it
  • Creating a /etc/prometheus/prometheus.yml config file, more on the contents of this one in a second
  • And creating an empty data directory, in my case at /data/prometheus


The config file needs to list all of your machines. I am sure this could be generated with puppet templating or something like that, but for now here's my simple hard coded one:

# my global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'stillhq'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['molokai:9090']

  - job_name: 'node'
    static_configs:
      - targets: ['molokai:9100', 'dell:9100', 'eeebox:9100']

Here you can see that I want to scrape each of my web servers which exports metrics every 15 seconds, and I also want to calculate values (such as firing alerts) every 15 seconds too. This might not scale if you have bajillions of processes or machines to monitor. I also label all of my values as coming from my domain, so that if I ever aggregate these values with another prometheus from somewhere else the origin will be clear.

The other interesting bit for now is the scrape configuration. This lists the metrics exporters to monitor. In this case it's prometheus itself (molokai:9090), and then each of my machines in the home lab (molokai, dell, and eeebox -- all on port 9100). Remember, port 9090 is the prometheus binary itself and port 9100 is that node_exporter binary we now have running on all of our machines.

Now if we start prometheus, it will do its thing. There is some configuration which needs to be passed on the command line here (instead of in the configuration file), so my command line looks like this:

/usr/bin/prometheus -config.file=/etc/prometheus/prometheus.yml \
    -web.console.libraries=/etc/prometheus/console_libraries \
    -web.console.templates=/etc/prometheus/consoles \
    -storage.local.path=/data/prometheus

Prometheus also presents an interactive user interface on port 9090, which is handy. Here's an example of it graphing the load average on each of my machines (it was something which caused a nice jaggy line):



You can see here that the user interface has a drop down for selecting values that are known, and that the key at the bottom tells you things about each time series in the graph. So for example, if we added {instance="eeebox:9100"} to the end of the value in the text box at the top, then we'd be filtering for values with that label set, and would as a result only show one value in the graph (the one for eeebox).

If you're interested in very simple dashboarding of basic system metrics, that's actually all you need to do. In my next post about prometheus I'm going to show how to write your own binary which exports values to be graphed. In my case, the temperature outside my house.

Tags for this post: prometheus monitoring node_exporter
Related posts: Recording performance information from short lived processes with prometheus; A pythonic example of recording metrics about ephemeral scripts with prometheus; Mona Lisa Overdrive; The Diamond Age ; Buying Time; The System of the World


Binh Nguyen: Life in the Philippines, Duterte Background, and More

Fri, 2017-01-27 04:06
- interesting history. The Aeta people were the original inhabitants of the Philippines. Now a minority group. Malaysian and Indonesian people settled there thereafter. Then Chinese and Muslims came. Finally, Spanish and US colonialism and now relative independence... Two main wars changed its history: that between its colonisers (US and Spain) and that between itself and the US which helped them

Lev Lafayette: Data Centre Preparation: A Summary

Thu, 2017-01-26 18:04

When installing a new system (whether HPC, cloud, or just a bunch of servers, disks, etc), it must be housed. Certainly this can be done without any specialist environment, especially if one is building a small test cluster; for example with half-a-dozen old but homogeneous systems, each connected with 100BASE-TX ethernet to a switch etc.


Binh Nguyen: Does the Military-Industrial Complex Make Sense?, Random Thoughts, and More

Thu, 2017-01-26 03:12
Given the importance the Military-Industrial Complex now plays in many economies, I thought it'd be worthwhile to take a look at whether it actually makes sense. - the military industrial complex doesn't work the way that a lot of people think. It's actually linked in to the way the financial system works and the way the world panned out after the World Wars. Since the US/West came out

Tridge on UAVs: ArduPilot on the CL84 TiltWing VTOL

Wed, 2017-01-25 17:16

ArduPilot now supports TiltWing VTOL aircraft! First test flights done earlier today in Canberra

We did the first test flights of the Unique Models CL84 TiltWing VTOL with ArduPilot today. Support for tilt-wing aircraft is in the 3.7.1 stable release, with enhancements to support the tilt mechanism of the CL84 added for the upcoming 3.8.0 release.

This CL84 model operates as a tricopter in VTOL flight, and as a normal aileron/elevator plane in forward flight. The unusual thing from the point of view of ArduPilot is that the tilt mechanism uses a retract style servo, which means it can be commanded to be fully up or fully down, but you can't ask it to hold any angle in between. That makes for some interesting challenges in the VTOL transition code.

For the 3.8.0 release there is a new parameter Q_TILT_TYPE that controls whether the tilt mechanism for tiltrotors and tiltwings is a continuous servo or a binary (retract style) servo. In either case the Q_TILT_RATE parameter sets the rate at which the servo changes angle (in degrees/second).
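As a sketch only, in a parameter file that would look something like the two lines below for a binary tilt mechanism like the CL84's -- I'm assuming 1 selects the binary (retract style) behaviour, and the rate is purely illustrative, so check the parameter documentation for your firmware version:

Q_TILT_TYPE,1
Q_TILT_RATE,40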

This aircraft has been previously tested in hover by Greg Covey (see http://discuss.ardupilot.org/t/tiltrotor-support-for-plane/8805) but has not previously been tested with automatic transitions.
Many thanks to Grant, Peter, James and Jack from CanberraUAV for their assistance with testing the CL84 today and the videos and photos!

Lev Lafayette: Keeping The Build Directory in EasyBuild and Paraview Plugins

Tue, 2017-01-24 14:04

By default, EasyBuild will delete the build directory of a successful installation and will save failures of attempted installs for diagnostic purposes. In some cases however, one may want to save the build directory. This can be useful, for example, for diagnosis of *successful* builds. Another example is the installation of plugins for applications such as Paraview, which *require* access to the successful builddir.


Lev Lafayette: The Provision of HPC Resources to Top Universities

Tue, 2017-01-24 12:04

Recently, the University of Melbourne was ranked #1 in Australia and #33 in the world, according to the Times Higher Education World University Rankings of 2015-2016 [1]. The rankings are based on a balanced metric between citations, industry income, international outlook, research, and teaching.


Binh Nguyen: Saving Capitalist Democracy 2, Random Thoughts, and More

Tue, 2017-01-24 00:47
Obviously, this next part is a continuation of one of my other posts:  http://dtbnguyen.blogspot.com/2016/11/saving-capitalist-democracy-random.html - just get better at doing things. If countries such as Cuba can deliver good healthcare on limited resources surely more prosperous countries can as well? http://www.smh.com.au/lifestyle/health-and-wellbeing/

Ben Martin: OHC2017 zero to firmware in < 2 hours

Mon, 2017-01-23 22:06
I thought I'd make some modifications along the way in the build, so I really couldn't do a head to head with the build time I had heard about (a lowish number of minutes). The on/off switch being where it was didn't fit my plans, so I moved it off board and also moved the battery off board, so that I might use the space below the screen for something, perhaps where the stylus lives in the case.


I did manage to go from opening the packet to having the firmware environment set up, the firmware built, and uploaded in less than 2 hours total. No bridges, no hassles, cable shrink around the place and 90 degree headers across the bottom of the board for future tinkering.

This is going to look extremely stylish in a CNCed hardwood case. My current plan is to turn it into a smart remote control. Rotary encoder for volume, maybe modal so that the desired "program" can be selected quickly from a list without needing to flick or page through things.

Michael Still: Gods of Metal

Mon, 2017-01-23 22:00



ISBN: 9780141982267
LibraryThing
In this follow-up to Command and Control, Schlosser explores the conscientious objectors and protestors who have sought to highlight not just the immorality of nuclear weapons, but the hilariously insecure state the US government stores them in. In all seriousness, we are talking grannies with heart conditions being able to break in.

My only real objection to this book is that it is more of a pamphlet than a book, and feels a bit like things that didn't make it into the main book. That said, it is well worth the read.

Tags for this post: book eric_schlosser nuclear weapons safety protest
Related posts: Command and Control; Random linkage; Fast Food Nation; Starfish Prime; Why you should stand away from the car when the cop tells you to; Random fact for the day

Binh Nguyen: Explaining Prophets 3, Random Thoughts, and More

Sat, 2017-01-21 03:38
Obvious continuation of my other two posts: http://dtbnguyen.blogspot.com/2016/12/explaining-prophets-fake-news-and-more_26.html http://dtbnguyen.blogspot.com/2017/01/explaining-prophets-2-what-is-liberal.html - many of the world's top scientists seemed to have possessed the same abilities with regards to 'pre-cognition' and 'lucid dreaming'. Like the other scientists I mentioned in the

Simon Lyall: Linux.conf.au 2017 – Friday – Closing

Sat, 2017-01-21 00:03

Code of Conduct and Safety

  • Badge
    • Putting preferred pronoun
    • Emoji
  • Free Childcare
    • Sponsored by Github
    • Approx 10 kids
  • Assistance Grants
  • Attendees
    • Breakdown by gender etc
    • Roughly 25% of attendees and speakers not men
  • More numbers
    • 104 Matrix chat users
    • 554 attendees
    • 2900 coffee cups
    • Network claimed to 7.5Gb/s
    • 1.6 TB over the week, 200Mb/s max
    • 30 Session Chairs
    • 12 Miniconfs
    • 491 Proposals (130 more than the others)
    • 6 Tutorials, 75 talks, 80 speakers
    • 4 Keynote speakers
    • 21 Sponsors

Linux.conf.au 2018 – Sydney

  • A little bit of history repeating
  • 2001, 2007, 2018
  • Venue is UTS
  • 5 minutes to food, train station
  • https://lca2018.org
  • @lca2018 on twitter
  • Looking for a few extra helpers

Raffle

  • In support of Outreachy
  • 3 interns funded

Final Bit

  • Thanks to team members

 

 

Share

David Rowe: Balloon Meets Gum Tree

Fri, 2017-01-20 16:03

Today I attended the launch of Horus 38, a high altitude balloon flight carrying 4 payloads, one of which was the latest version of the SSDV system Mark and I have been working on.

Since the last launch, Mark and I have put a lot of work into carefully integrating a rate 0.8 LDPC code developed by Bill, VK5DSP. The coded 115 kbit/s system is now working error free on the bench down to -112dBm, and can transfer a new hi-res image in just a few seconds. With a tx power of 50mW, we estimate a line of sight range of 100km. We are now out-performing commercial FSK telemetry chip sets using our open source system.

However disaster struck soon after launch at Mt Barker High School oval. High winds blew the payloads into a tree and three of them were chopped off, leaving the balloon and a lone payload to continue into the stratosphere. One of the payloads that hit the tree was our SSDV, tumbling into a neighboring back yard. Oh well, we’ll have another try in December.

Now I’ve been playing a lot of Kerbal Space Program lately. It’s got me thinking about vectors, for example in Kerbal I learned how to land two space craft at exactly the same point on the Mun (Moon) using vectors and some high school equations of motion. I’ve also taken up sailing – more vectors involved in how sails propel a ship.

The high altitude balloon consists of a latex, helium filled weather balloon a few meters in diameter. Strung out beneath that on 50m of fishing line are a series of “payloads”, our electronic gizmos in little foam boxes. The physical distance helps avoid interference between the radios in each box.

While the balloon was held near the ground, it was keeled over at an angle:

It’s tethered, and not moving, but is acted on by the force of the lift from the helium and drag from the wind. These forces pivot the balloon through an arc whose radius is set by the tether. If these forces were equal the balloon would be at 45 degrees. Today it was lower, perhaps 30 degrees.

When the balloon is released, it is accelerated by the wind until it reaches a horizontal velocity that matches the wind speed. The payloads will also reach wind speed and eventually hang vertically under the balloon due to the force of gravity. Likewise the lift accelerates the balloon upwards. This is balanced by drag to reach a vertical velocity (the ascent rate). The horizontal and vertical velocity components will vary over time, but lets assume they are roughly constant over the duration of our launch.

Now today the wind speed was 40 km/hr, just over 10 m/s. Mark suggested a typical balloon ascent rate of 5 m/s. The high school oval was 100m wide, so the balloon would take 100/10 = 10s to traverse the oval from one side to the gum tree. In 10 seconds the balloon would rise 5×10 = 50m, approximately the length of the payload string. Our gum tree, however, rises to a height of 30m, and reached out to snag the lower 3 payloads…..

David Rowe: Physics of Road Rage

Fri, 2017-01-20 16:03

A few days ago while riding my bike I was involved in a spirited exchange of opinions with a gentleman in a motor vehicle. After said exchange he attempted to run me off the road, and got out of his car, presumably with intent to assault me. Despite the surge of adrenaline I declined to engage in fisticuffs, dodged around him, and rode off into the sunset. I may have been laughing and communicating further with sign language. It’s hard to recall.

I thought I’d apply some year 11 physics to see what all the fuss was about. I was in the middle of the road, preparing to turn right at a T-junction (this is Australia remember). While his motivations were unclear, his vehicle didn’t look like an ambulance. I am assuming he was not an organ courier, and that there probably wasn’t a live heart beating in an icebox on the front seat as he raced to the transplant recipient. Rather, I am guessing he objected to me being in that position, as that impeded his ability to travel at full speed.

The street in question is 140m long. Our paths crossed half way along at the 70m point, with him traveling at the legal limit of 14 m/s, and me a sedate 5 m/s.

Let’s say he intended to brake sharply 10m before the T junction, so he could maintain 14 m/s for at most 60m. His optimal journey duration was therefore about 4 seconds. My monopolization of the taxpayer funded side-street meant he was forced to endure a 12 second journey. The 8 second difference must have seemed like an eternity; no wonder he was angry, prepared to risk physical injury and an assault charge!

David Rowe: Codec 2 700C

Fri, 2017-01-20 16:03

My endeavor to produce a digital voice mode that competes with SSB continues. For a big chunk of 2016 I took a break from this work as I was gainfully employed on a commercial HF modem project. However since December I have once again been working on a 700 bit/s codec. The goal is voice quality roughly the same as the current 1300 bit/s mode. This can then be mated with the coherent PSK modem, and possibly the 4FSK modem for trials over HF channels.

I have diverged somewhat from the prototype I discussed in the last post in this saga. Lots of twists and turns in R&D, and sometimes you just have to forge ahead in one direction leaving other branches unexplored.

Samples

Sample         1300     700C
hts1a          Listen   Listen
hts2a          Listen   Listen
forig          Listen   Listen
ve9qrp_10s     Listen   Listen
mmt1           Listen   Listen
vk5qi          Listen   Listen
vk5qi 1% BER   Listen   Listen
cq_ref         Listen   Listen

Note the 700C samples are a little lower level, an artifact of the post filtering as discussed below. What I listen for is intelligibility: how easy is the sample to understand compared to the reference 1300 bit/s samples? Is it muffled? I feel that 700C is roughly the same as 1300. Some samples are a little better (cq_ref), some (ve9qrp_10s, mmt1) a little worse. The artifacts and frequency response are different. But close enough for now, and worth testing over air. And hey – it’s half the bit rate!

I threw in a vk5qi sample with 1% random errors, and it’s still usable. No squealing or ear damage, but perhaps more sensitive than 1300 to the same BER. Guess that’s expected, every bit means more at a lower bit rate.

Some of the samples like vk5qi and cq_ref are strongly low pass filtered, others like ve9qrp are “flat” spectrally, with the high frequencies at about the same level as the low frequencies. The spectral flatness doesn’t affect intelligibility much but can upset speech codecs. Might be worth trying some high pass (vk5qi, cq_ref) or low pass (ve9qrp_10s) filtering before encoding.

Design

Below is a block diagram of the signal processing. The resampling step is the key: it converts the time varying number of harmonic amplitudes to a fixed number (K=20) of samples. They are sampled using the “mel” scale, which means we take more finely spaced samples at low frequencies, with coarser steps at high frequencies. This matches the log frequency response of the ear. I arrived at K=20 by experiment.
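For reference, the commonly used mel mapping (which I'm assuming is close to what's used here) warps a frequency f in Hz as mel(f) = 2595 * log10(1 + f/700), so equally spaced points on the mel axis correspond to progressively wider steps in Hz as frequency increases – hence the finer sampling at the low frequency end.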

The amplitudes and even the Vector Quantiser (VQ) entries are in dB, which is very nice to work in and matches the ear's logarithmic amplitude response. The VQ was trained on just 120 seconds of data from a training database that doesn’t include any of the samples above. More work required on the VQ design and training, but I’m encouraged that it works so well already.

Here is a 3D plot of amplitude in dB against time (300 frames) and the K=20 frequency vectors for hts1a. You can see the signal evolving over time, and the low levels at the high frequency end.

The post filter is another key step. It raises the spectral peaks (formants) and lowers the valleys (anti-formants), greatly improving the speech quality. When the peak/valley ratio is low, the speech takes on a muffled quality. This is an important area for further investigation. Gain normalisation after post filtering is why the 700C samples are lower in level than the 1300 samples. Need some more work here.

The two stage VQ uses 18 bits, energy 4 bits, and pitch 6 bits for a total of 28 bits every 40ms frame, which works out to the 700 bit/s of the mode name (28 bits / 0.04 s = 700 bit/s). Unvoiced frames are signalled by a zero value in the pitch quantiser, removing the need for a voicing bit. Differential-in-time encoding isn’t used, which makes the mode more robust to bit errors.

Days and days of very careful coding and checks at each development step. It’s so easy to make a mistake or declare victory early. I continually compared the output speech to a few Codec 2 1300 samples to make sure I was in the ball park. This reduced the subjective testing to a manageable load. I used automated testing to compare the reference Octave code to the C code, porting and testing one signal processing module at a time. Sometimes I would just printf rows of vectors from two versions and compare the two; old school, but quite effective at spotting the step where the bug crept in.

Command line

The Octave simulation code can be driven by the scripts newamp1_batch.m and newamp1_fby.m, in combination with c2sim.

To try the C version of the new mode:

codec2-dev/build_linux/src$ ./c2enc 700C ../../raw/hts1a.raw - | ./c2dec 700C - -| play -t raw -r 8000 -s -2 -

Next Steps

Some thoughts on FEC. A (23,12) Golay code could protect the most significant bits of the 1st VQ index, pitch, and energy. The VQ could be organised to tolerate errors in a few of its bits by sorting to make an error jump to a ‘close’ entry. The extra 11 parity bits would cost 1.5dB in SNR, but might let us operate at significantly lower SNR on a HF channel.

Over the next few weeks we’ll hook up 700C to the FreeDV API, and get it running over the air. Release early and often – let’s find out if 700C works in the real world and provides a gain in performance on HF channels over FreeDV 1600. If it looks promising I’d like to do another lap around the 700C algorithm, investigating some of the issues mentioned above.

David Rowe: Horus 39 – Fantastic High Speed SSDV Images

Fri, 2017-01-20 16:03

A great result from our high speed SSDV image (Wenet) system, which we flew as part of Horus 38 on Saturday Dec 3. A great write up and many images on the AREG web site.

One of my favorite images below, just before impact with the ground. You can see the parachute and the tangled remains of the balloon in the background, the yellow fuzzy line is the nylon rope close to the lens.

Well done to the AREG club members (in particular Mark) for all your hard work in preparing the payloads and ground stations.

High Altitude Balloons is a fun hobby. It’s a really nice day out driving in the country with nice people in a car packed full of technology. South Australia has some really nice bakeries that we stop at for meat pies and donuts on the way. Yum. It was very satisfying to see High Definition (HD) images immediately after take off as the balloon soared above us. Several ground stations were collecting packets that were re-assembled by a central server – we crowd sourced the image reception.

Open Source FSK modem

Surprisingly we were receiving images while mobile for much of the flight. I could see the Eb/No move up and down about 6dB over 3 second cycles, which we guess is due to rotation or swinging of the payload under the balloon. The antennas used are not omnidirectional so the change in orientation of tx and rx antennas would account for this signal variation. Perhaps this can be improved using different antennas or interleaving/FEC.

Our little modem is as good as the Universe will let us make it (near perfect performance against theory) and it lived up to the results predicted by our calculations and tested on the ground. Bill, VK5DSP, developed a rate 0.8 LDPC code that provides 6dB coding gain. We were receiving 115 kbit/s data on just 50mW of tx power at ranges of over 100km. Our secret is good engineering, open source software, $20 SDRs, and a LNA. We are outperforming commercial chipsets with open source.

The same modem has been used for low bit rate RTTY telemetry and even innovative new VHF/UHF Digital Voice modes.

The work on our wonderful little FSK modem continues. Brady O’Brien, KC9TPA has been refactoring the code for the past few weeks. It is now more compact, has a better command line interface, and most importantly runs faster, so we are getting close to running high speed telemetry on a Raspberry Pi and fully embedded platforms.

I think we can get another 4dB out of the system, bringing the MDS down to -116dBm – if we use 4FSK and lose the RS232 start/stop bits. What we really need next is custom tx hardware for open source telemetry. None of the chipsets out there are quite right, and our demod outperforms them all so why should we compromise?

Recycled Laptops

The project has had some interesting spin offs. The members of AREG are getting really interested in SDR on Linux resulting in a run on recycled laptops from ASPItech, a local electronics recycler!

Links

Balloon meets Gum Tree
Horus 37 – High Speed SSTV Images
High Speed Balloon Data Link
All Your Modem are Belong To Us
FreeDV 2400A and 2400B Demos
Wenet Source Code
Nov 2016 Wenet Presentation