Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

OpenSTEM: Am I a Neanderthal?

Wed, 2017-03-15 10:05
Early reconstruction of Neanderthal

The whole question of how Neanderthals are related to us (modern humans) has been controversial ever since the first Neanderthal bones were found in Germany in the 19th century. Belonging to an elderly, arthritic individual (a good example of how well Neanderthals cared for each other in social groups), the bones were reconstructed to show a stooping individual, with a more ape-like gait, leading to Neanderthals being described as the “Missing Link” between apes and humans, and given the epithet “ape-man”.

Who were the Neanderthals? Modern reconstruction – Smithsonian Museum of Natural History

Neanderthals lived in the lands surrounding the Mediterranean Sea, and as far east as the Altai Mountains in Central Asia, between about 250,000 and about 30,000 years ago. They were a form of ancient human with certain physical characteristics – many of which probably helped them cope with the cold of Ice Ages. Neanderthals evolved out of an earlier ancestor, Homo erectus, possibly through another species – Homo heidelbergensis. They had a larger brain than modern humans, but it was shaped slightly differently, with less development in the prefrontal cortex, which allows critical thinking and problem-solving, and larger development at the back of the skull, and in areas associated with memory in our brains. It is possible that Neanderthals had excellent memory, but poor analytical skills. They were probably not good at innovation – a skill which became vital as the Ice Age ended and the global climate warmed, sea levels rose and plant and animal habitats changed.

Neanderthals were stockier than modern humans, with shorter arms and legs, and probably stronger and all-round tougher. They had a larger rib cage, and probably bigger lungs, a bigger nose, larger eyes and little to no chin. Most of these adaptations would have helped them in Ice Age Europe and Asia – a more compact body stayed warmer more easily and was tough enough to cope with a harsh environment. Large lungs helped oxygenate the blood and there is evidence that they had more blood supply to the face – so probably had warm, ruddy cheeks. The large nose warmed up the air they breathed, before it reached their lungs, reducing the likelihood of contracting pneumonia. Neanderthals are known to have had the same range of hair colours as modern humans and fair skin, red hair and freckles may have been more common.

They made stone tools, especially those of the type called Mousterian, constructed simple dwellings and boats, made and used fire, including for cooking their food, and looked after each other in social groups. Evidence of skeletons with extensive injuries, occurring well before death, shows that these individuals must have been cared for, not only whilst recovering from their injuries, but also afterwards, when they would probably not have been able to obtain food themselves. Whether or not Neanderthals intentionally buried their dead is an area of hot controversy. It was once thought that they buried their dead with flowers in the grave, but the pollen was found to have been introduced accidentally. However, claims of intentional burial at other sites are still debated.

What Happened to the Neanderthals? Abrigo do Lagar Velho

Anatomically modern humans emerged from Africa about 100,000 years ago. Recent studies of human genetics suggest that modern humans had many episodes of mixing with various lineages of human ancestors around the planet. Modern humans moved into Asia and Europe during the Ice Age, expanding further as the Ice Age ended. Modern humans overlapped with Neanderthals for about 60,000 years, before the Neanderthals disappeared. It is thought that a combination of factors led to the decline of Neanderthals. Firstly, the arrival of modern humans, followed by the end of the Ice Age, brought about a series of challenges which Neanderthals might have been unable to adapt to as quickly as necessary. Modern humans have more problem-solving and innovation capability, which might have meant that they were able to out-compete Neanderthals in a changing environment. The longest-held theory is that our ancestors wiped out the Neanderthals in the first genocide in (pre)history. A find of a group of Neanderthals, across a range of ages, some from the same family, who all died at the same time, is one site which might support this theory, although we don’t actually know who (or what) killed the group. Cut marks on their bones show that they were killed by something using stone tools. Finally, there is more and more evidence of what are called “transitional specimens”. These are individuals who have physical characteristics of both groups, and must represent inter-breeding. An example is the 4 year old child from the site of Abrigo do Lagar Velho in Portugal, which seems to have a combination of modern and Neanderthal features. The discovery of Neanderthal genes in many modern people living today is also proof that we must have interbred with Neanderthals in the past. It is thought that the genes were mixed several times, in several parts of the world.

Am I a Neanderthal?

So how do we know if we have Neanderthal genes? Neanderthal genes have some physical characteristics, but also other attributes that we can’t see. In terms of physical characteristics, Neanderthal aspects to the skull include brow ridges (ridges of bone above the eyes, under the eyebrows); a bump on the back of the head – called an occipital chignon, or bun, because it looks like a ‘bun’ hairstyle, built into the bone; a long skull (like Captain Jean-Luc Picard from Star Trek – actor Patrick Stewart); a small, or non-existent chin; a large nose; a large jaw with lots of space for wisdom teeth; wide fingers and thumbs; thick, straight hair; large eyes; red hair, fair skin and freckles! The last may seem a little surprising, but it appears that the genes for these characteristics came from Neanderthals – who had a wide range of hair colours, fair skin and, occasionally, freckles. Increased blood flow to the face also would have given Neanderthals lovely rosy cheeks!

Less obvious characteristics include resistance to certain diseases – parts of our immune systems, especially with reference to European and Asian diseases; less positively, an increased risk of other diseases, such as type 2 diabetes. Certain genes linked to depression are present, but ‘switched off’ in Neanderthals. The way that these genes link to depression, and their role in the lifestyles of early people (where they may have had benefits that are no longer relevant) are future topics for research and may help us understand more about ourselves.

Neanderthal genes are present in modern populations from Europe, Asia, Northern Africa, Australia and Oceania. So, depending on which parts of the world our ancestry is from, we may have up to 4% of our genetics from long-dead Neanderthal ancestors!

Craige McWhirter: Japanese House in Python for Minecraft

Wed, 2017-03-15 09:04

I have kids that I'm teaching to hack. They started off on Scratch (which is excellent) and are ready to move up to Python. They also happen to be mad Minecraft fans, so now they're making their way through Adventures in Minecraft.

Just as I used Scratch when they did, I'm also hacking in Python & Minecraft as they are. It helps if I hit the bumps and hurdles before they do, as well as have a sound handle on the problems they're solving.

Now I've branched out from the tutorial and I'm just having fun with it, leaving behind code the kids can use, hack, whatever. This code is in my minecraft-tools repo (for want of a better name). It's just a collection of random tools I've written for Minecraft that aren't quite up to being their own thing. I expect this will mostly be a collection of python programs to construct things inside Minecraft via CanaryMod and CanaryRaspberryJuicePlugin.

The first bit of code to be shaken out of the tree is japanese_house.py, which produces a Minecraft interpretation of a classic Japanese house. Presently it only produces a single configuration, which is little more than an empty shell.
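For anyone curious, the general shape of such code, using the mcpi-compatible Python API that CanaryRaspberryJuicePlugin exposes, looks something like this (an illustrative sketch, not an excerpt from japanese_house.py):

import mcpi.minecraft as minecraft
import mcpi.block as block

# Connect to the local Minecraft server (defaults to localhost:4711)
mc = minecraft.Minecraft.create()

# Build relative to the player's current position
x, y, z = mc.player.getTilePos()

# A hollow shell: an outer box of wood planks, then carve out the inside
mc.setBlocks(x + 2, y, z + 2, x + 12, y + 5, z + 10, block.WOOD_PLANKS.id)
mc.setBlocks(x + 3, y + 1, z + 3, x + 11, y + 4, z + 9, block.AIR.id)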

I intend to add an interior fit-out plus a whole bunch of optional configurations you can set at run time, but for now it is what it is. Next I'm going to move on to writing geodesic domes and transport / teleport rings (as per the Expanse), which will eventually lead to coding a TARDIS that will, you know, be actually bigger on the inside ;-)

David Rowe: Testing FreeDV 700C

Tue, 2017-03-14 18:06

Since releasing FreeDV 700C I’ve been “instrumenting” the FreeDV GUI program – adding some code to perform various tests of the 700C waveform, especially over the air.

With the kind help of Gerhard OE3GBB, Mark VK5QI, and Peter VK5APR, I have collected some samples and performed some tests. The goals of this work were:

  1. Compare 700C Over the Air (OTA) to simulation on an AWGN channel.
  2. Compare 700C OTA to SSB on an AWGN channel.

Instrumentation

Here is a screen shot of the latest FreeDV GUI Options screen:

I’ve added some features to the top three rows:

Test Frames: Send a payload of known test bits rather than vocoder bits.
Channel Noise: Simulate a channel using AWGN noise.
SNR: SNR of the simulated AWGN noise.
Attn Carrier: Attenuate just one carrier.
Carrier: The 700C carrier (1-14) to attenuate.
Simulated Interference Tone: Enable an interfering sine wave of specified frequency and amplitude.
Clipping: Enable clipping of the 700C tx waveform, to increase RMS power.
Diversity Combine for plots: Scatter and Test Frame plots use combined (7 carrier) information.

To explore these options it is useful to run in full duplex mode (Tools-PTT Half Duplex unchecked) and to use a loopback sound device:

$ sudo modprobe snd-aloop

More information on loopback in the FreeDV GUI README.

Clipping the 700C tx waveform reduces the Peak to Average Power Ratio (PAPR), which may result in a higher average power over the channel. However clipping distorts the waveform and adds some “shoulders” (i.e. noise) to the spectrum adjacent to the 700C waveform:

Several users have noticed this distortion. At this stage I’m unsure if clipping is useful or not.
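To see the PAPR effect in isolation, here is a toy numpy sketch (Gaussian noise standing in for the real multi-carrier waveform, and the clip threshold is arbitrary):

import numpy as np

def papr_db(x):
    # Peak to Average Power Ratio in dB
    return 10 * np.log10(np.max(np.abs(x)) ** 2 / np.mean(x ** 2))

x = np.random.randn(100000)          # stand-in for the 700C tx waveform
clipped = np.clip(x, -2.0, 2.0)      # hard clipping at an arbitrary threshold
print(papr_db(x), papr_db(clipped))  # PAPR drops, so average power can rise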

The Diversity Combine option is useful to explore each of the 14 carriers separately before they are combined into 7 carriers.

Many of these options were designed to explore tx filtering. I have long wondered if any of the FreeDV carriers were receiving less power than others, for example due to ripple or a low pass response from the crystal filter. A low power carrier would have a high bit error rate, adversely affecting overall performance. Plotting the scatter diagram or bit error rate on a carrier by carrier basis can measure the effect of tx filtering – if it exists.

Some of the features above – like attenuating a single carrier – were designed to “test the test”. Much of the work I do on FreeDV (and indeed other projects) involves carefully developing software and writing “code to test the code”. For example, to build the experiments described in this blog post I worked several hours a day for several weeks. Not glamorous, but that is where the real labour lies in R&D: careful, meticulous testing and experimentation. One percent inspiration … then code, test, test.

Comparing Analog SSB to Digital Voice

One of my goals is to develop an HF DV system that is competitive with analog SSB. So we need a way to compare analog and DV at the same SNR. I came up with the idea of wave files of analog SSB and DV which have the same average (RMS) power. If these are fed into an SSB transmitter, then they will be received at the same SNR. I added 10 seconds of a 1000Hz sine wave at the start for good measure – this could be used to measure the actual SNR.
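The idea can be expressed in a few lines of numpy (a sketch only – the file names are hypothetical, and it assumes 16 bit mono wave files):

import numpy as np
from scipy.io import wavfile

def rms(x):
    return np.sqrt(np.mean(x.astype(np.float64) ** 2))

fs, ssb = wavfile.read('ssb.wav')
fs, dv = wavfile.read('dv.wav')

# Scale the DV waveform so both files have the same average (RMS) power
dv = dv * (rms(ssb) / rms(dv))
wavfile.write('dv_equal_power.wav', fs, dv.astype(np.int16))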

I developed two files:

  1. sine_analog_700c
  2. sine_analog_testframes700c

The first has the same voice signal in analog and 700C, the second uses test frames instead of encoded voice.

Interfering Carriers

One feature described above simulates an interfering carrier (like a birdie), something I have seen on the air. Here is a plot of a carrier in the middle of one of the 700C carriers, but about 10dB higher:

The upper RH plot is a rolling plot of bit errors for each carrier. You can see one carrier is really messed up – lots of bit errors. The average bit error rate is about 1%, which is where FreeDV 700C starts to become difficult to understand. These bit errors would not be randomly distributed, but would affect one part of the codec all the time. For example the pitch might be consistently wrong, or part of the speech spectrum. I found that as long as the interfering carrier is below the FreeDV carrier, the effect on bit error rate is negligible.

Take away: The tx station must tune away from any interfering carriers that poke above the FreeDV signal carriers. Placing the interfering tones between FreeDV carriers is another possibility, e.g. a 50Hz shift of the tx signal.

Results – Transmit Filtering

Simulation results suggest 700C should produce reasonable results near 0dB SNR. So that’s the SNR I’m shooting for in Over The Air (OTA) testing.

Mark VK5QI sent me several minutes of test frames so I could determine if there were any carriers with dramatically different bit error rates, which would indicate the presence of some tx filtering. Here is the histogram of BERs for each carrier for Mark’s signal, which was at about 3dB SNR:

There is one bar for each I and Q QPSK bit of the 14 carriers – 28 bars total (note Diversity combination was off). After running for a few minutes, we can see a range of 5E-2 to 8E-2 (5 to 8%). In terms of AWGN modem performance, this is only about 1dB difference in SNR or Eb/No, as per the BER versus Eb/No graphs in this post on the COHPSK modem used for 700C. One carrier being pinned at, say, 20% BER, or a slope of increasing BER with carrier frequency, would have meant tx filtering trouble.
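As a rough sanity check on that 1dB figure, ideal QPSK on an AWGN channel has a per-bit error rate of 0.5*erfc(sqrt(Eb/No)). This little Python scan (ideal theory, not the actual COHPSK modem) shows 5% and 8% BER sit on the order of 1dB apart:

import numpy as np
from scipy.special import erfc

def qpsk_ber(ebno_db):
    # Theoretical QPSK bit error rate on an AWGN channel
    ebno = 10.0 ** (ebno_db / 10.0)
    return 0.5 * erfc(np.sqrt(ebno))

def ebno_for_ber(target):
    # Crude inverse by scanning a grid, fine for a sanity check
    grid = np.linspace(-5.0, 10.0, 1501)
    return grid[np.argmin(np.abs(qpsk_ber(grid) - target))]

print(ebno_for_ber(0.05) - ebno_for_ber(0.08))  # on the order of 1dB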

Peter VK5APR sent me a high SNR signal (he lives just 4 km away). Initially I could see an X-shaped scatter diagram, a possible sign of tx filtering. However this ended up being some amplitude pumping due to Fast AGC on my radio. After I disabled fast AGC, I could see a scatter diagram with 4 clear dots, and no X-shape. Check.

I performed an additional test using my IC7200 as a transmitter, and a HackRF as a receiver. The HackRF has no crystal filter and a very flat response, so any distortion would be due to the IC7200 transmit filtering. Once again – 4 clean dots on the scatter diagram and no X-shape.

So I am happy to conclude that transmit filtering does not seem to be a problem, at least on the radios tested. All performance issues are therefore likely to be caused by me and my algorithms!

Results – Low SNR testing

Peter, VK5APR, configured his station to send the analog/700C equi-power test wave files described above at very low power, such that the received SNR at my station was about 0dB. As we are so close it was reasonable to assume the channel was AWGN, indeed we could see no sign of NVIS fading and the band (40M) was devoid of DX at the 12 noon test time.

Here is the rx signal I received, and the same file run through the 700C decoder. Neither the SSB nor the decoded 700C audio is pretty. However it’s fair to say we could just get a message through on both modes, and that 700C is holding its own next to SSB. The results are close to my simulations, which was the purpose of this test.

You can decode the off air signal yourself if you download the first file and replay it through the FreeDV GUI program using “Tools – Start/Stop Play File from Radio”.

Discussion

While setting up these tests, Peter and I conversed comfortably for some time over FreeDV 700C at a high SNR. This proved to me that for our audience (experienced users of HF radio) – FreeDV 700C can be used for conversational contacts. Given the 700C codec is really just a first pass – that’s a fine result.

However it’s a near thing – the 700C codec adds a lot of distortion just compressing the speech. It’s pretty bad even if the SNR is high. The comments on the Codec 2 700C blog post indicate many lay-people can’t understand speech compressed by 700C. Add any bit errors (due to low SNR or fading) and it quickly becomes hard to understand – even for experienced users. This makes 700C very sensitive to bit errors as the SNR drops. But hey – every one of those 28 bits/frame counts at 700 bit/s so it’s not surprising.

In contrast, SSB scales a bit better with SNR. However even at high SNRs, that annoying hiss is always there – which is very fatiguing. Peter and I really noticed that hiss after a few minutes back on SSB. Yuck.

SSB gets a lot of its low SNR “punch” from making effective use of peak power. Here is a plot of the received SSB:

It’s all noise except for the speech peaks, where the “peak SNR” is much higher than 0dB. Our brains are adept at picking out words from those peaks, integrating the received phonetic symbols (mainly vowel energy) in our squishy biological receive filters. It’s a pity we didn’t evolve to detect coherent PSK. A curse on your evolution!

In contrast – 700C allocates just as much power to the silence between words as the most important parts of speech. This suggests we could do a better job at tailoring the codec and modem to peak power, e.g. allocating more power to parts of the speech that really matter. I had a pass at Time Variable Quantisation a few years ago. A variable rate codec might be called for, tightly integrated to the modem to pack more bits/power into perceptually important parts of speech.

The results above assumed equal average power for SSB and FreeDV 700C. It’s unclear if this happens in the real world. For example we may need to “back off” FreeDV drive further than SSB; SSB may use a compressor; and the PAs we are using are generally designed for PEP rather than average power operation.

Next Steps

I’m fairly happy with the baseline COHPSK modem, it seems to be hanging on OK as long as there aren’t any co-channel birdies. The 700C codec works better than expected, has plenty of room for improvement – but it’s sensitive to bit errors. So I’m inclined to try some FEC next. Aim for error free 700C at 0dB, which I think will be superior to SSB. I’ll swap out the diversity for FEC. This will increase the raw BER, but allow me to run a serious rate 0.5 code. I’ll start just with an AWGN channel, then tackle fading channels.

Links

FreeDV 700C
Codec 2 700C

Matthew Oliver: Setting up a basic keystone for Swift + Keystone dev work

Tue, 2017-03-14 16:07

As a Swift developer, most of my development work happens in a Swift All In One (SAIO) environment. This environment simulates a multinode swift cluster on one box. All the SAIO documentation points to using tempauth for authentication. Why?

Because most of the time authentication isn’t the thing we are working on. Swift has many moving parts, and so tempauth, which only exists for testing swift and is configured in the proxy.conf file, works great.

However, there are times you need to debug or test keystone + swift integration. In this case, we tend to build up a devstack just for the keystone component. But if all we need is keystone, can we just throw one up on a SAIO? … yes. So this is how I do it.

Firstly, I’m going to assume you have a SAIO already set up. If not, go do that first. Not that it really matters much, as we only configure the SAIO keystone components at the end. But I will be making keystone listen on localhost, so if you are doing this on another machine, you’ll have to change that.

Further, this will set up a keystone server in the form you’d expect from a real deploy (setting up the admin and public interfaces).

Step 1 – Get the source, install and start keystone

Clone the sourcecode:
cd $HOME
git clone https://github.com/openstack/keystone.git

Setup a virtualenv (optional):
mkdir -p ~/venv/keystone
virtualenv ~/venv/keystone
source ~/venv/keystone/bin/activate

Install keystone:
cd $HOME/keystone
pip install -r requirements.txt
pip install -e .
cp etc/keystone.conf.sample etc/keystone.conf

Note: We are running the services from source, so the config lives in the etc directory of the source tree.

The fernet key setup seems to assume a full /etc path, so we’ll create it. Maybe I should update this to put all the config there, but for now, meh:
sudo mkdir -p /etc/keystone/fernet-keys/
sudo chown $USER -R /etc/keystone/

Setup the database and fernet:
keystone-manage db_sync
keystone-manage fernet_setup

Finally we can start keystone. Keystone is a wsgi application and so needs a server to pass it requests. The current keystone developer documentation seems to recommend uwsgi, so let’s do that.

First we need uwsgi and the python plugin; on a debian/ubuntu system:
sudo apt-get install uwsgi uwsgi-plugin-python

Then we can start keystone, by starting the admin and public wsgi servers:
uwsgi --http 127.0.0.1:35357 --wsgi-file $(which keystone-wsgi-admin) &
uwsgi --http 127.0.0.1:5000 --wsgi-file $(which keystone-wsgi-public) &

Note: Here I am just backgrounding them; you could run them in tmux or screen, or set up uwsgi to run them all the time. But that’s out of scope for this.

Now a netstat should show that keystone is listening on ports 35357 and 5000:
$ netstat -ntlp | egrep '35357|5000'
tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN 26916/uwsgi
tcp 0 0 127.0.0.1:35357 0.0.0.0:* LISTEN 26841/uwsgi

Step 2 – Setting up keystone for swift

Now that we have keystone started, it’s time to configure it. Firstly you need the openstack client to configure it, so:
pip install python-openstackclient

Next we’ll use all the keystone defaults, so we only need to pick an admin password. For the sake of this how-to I’ll pick the developer documentation example of `s3cr3t`. Be sure to change this. We can then do a basic keystone bootstrap with:
keystone-manage bootstrap --bootstrap-password s3cr3t

Now we just need to set up some openstack env variables so we can use the openstack client to finish the setup. To make it easy to access I’ll dump them into a file you can source. But feel free to dump these in your bashrc or whatever:
cat > ~/keystone.env <<EOF
export OS_USERNAME=admin
export OS_PASSWORD=s3cr3t
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://localhost:5000/v3
EOF

source ~/keystone.env

Great, now we can finish configuring keystone. Let’s first set up a service project (tenant) for our Swift cluster:
openstack project create service

Create a user for the cluster to auth as when checking user tokens, and add the user to the service project. Again we need to pick a password for this user, so `Sekr3tPass` will do… don’t forget to change it:
openstack user create swift --password Sekr3tPass --project service
openstack role add admin --project service --user swift

Now we will create the object-store (swift) service and add the endpoints for the service catalog:
openstack service create object-store --name swift --description "Swift Service"
openstack endpoint create swift public "http://localhost:8080/v1/AUTH_\$(tenant_id)s"
openstack endpoint create swift internal "http://localhost:8080/v1/AUTH_\$(tenant_id)s"

Note: We need to define the reseller_prefix we want to use in Swift. If you change it in Swift, make sure you update it here.

Now we can add roles that will match to roles in Swift, namely an operator (someone who will get a Swift account) and reseller_admins:
openstack role create SwiftOperator
openstack role create ResellerAdmin

Step 3 – Setup some keystone users to auth as.

TODO: create all the tempauth users here

Here it would make sense to create the tempauth users devs are used to using, but I’ll just create one user so you know how to do it. First create a project (tenant) for this example demo:
openstack project create --domain default --description "Demo Project" demo

Create a user:
openstack user create --domain default --password-prompt matt

We’ll also go create a basic user role:
openstack role create user

Now connect the 3 pieces together by adding user matt to the demo project with the user role:
openstack role add --project demo --user matt user

If you wanted user matt to be a swift operator (have an account) you’d:
openstack role add --project demo --user matt SwiftOperator

or even a reseller_admin:
openstack role add --project demo --user matt ResellerAdmin

If you’re in a virtualenv, you can leave it now, because next we’re going to go back to your already set up swift to do the Swift -> Keystone part:
deactivate

Step 4 – Configure Swift

To get swift to talk to keystone we need to add 2 middlewares to the proxy pipeline and, in the case of a SAIO, remove the tempauth middleware. But before we do that we need to install keystonemiddleware to get one of the 2 middlewares, keystone’s authtoken:
sudo pip install keystonemiddleware

Now you want to replace the tempauth middleware in the proxy pipeline with authtoken and keystoneauth, so it looks something like:
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk tempurl ratelimit crossdomain container_sync authtoken keystoneauth staticweb copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

Then in the same ‘proxy-server.conf’ file you need to add the paste filter sections for both of these new middlewares:
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_host = localhost
auth_port = 35357
auth_protocol = http
auth_uri = http://localhost:5000/
admin_tenant_name = service
admin_user = swift
admin_password = Sekr3tPass
delay_auth_decision = True
# cache = swift.cache
# include_service_catalog = False
[filter:keystoneauth]
use = egg:swift#keystoneauth
# reseller_prefix = AUTH
operator_roles = admin, SwiftOperator
reseller_admin_role = ResellerAdmin

Note: You need to make sure that if you change the reseller_prefix here, you change it in keystone too. And notice this is where you map operator_roles and reseller_admin_role in swift to the roles in keystone. Here anyone with the keystone role admin or SwiftOperator is a swift operator, and those with the ResellerAdmin role are reseller_admins.

And that’s it. Now you should be able to restart your swift proxy and it’ll go off and talk to keystone.

You can now use your Python swiftclient to go talk to it, and what’s better, swiftclient understands the OS_* variables, so you can just source your keystone.env and talk to your cluster (as admin), or export some new envs for the user you’ve created. If you want to use curl you can, but it’s much easier to use swiftclient.
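For example, a minimal Python sketch using swiftclient directly (it assumes the matt/demo user created above and whatever password you gave it):

import swiftclient

conn = swiftclient.client.Connection(
    authurl='http://localhost:5000/v3',
    user='matt',
    key='<the password you chose>',
    auth_version='3',
    os_options={'project_name': 'demo',
                'user_domain_name': 'Default',
                'project_domain_name': 'Default'})

# Authenticate and stat the account to prove the whole chain works
print(conn.get_auth())       # (storage_url, auth_token)
print(conn.head_account())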

Tip: You can use `swift auth` to get the auth token if you want to then use curl.

If you want to authenticate via curl then for v3, use the examples at: https://docs.openstack.org/developer/keystone/devref/api_curl_examples.html

Or for v2, I use:
url="http://localhost:5000/v2.0/tokens"
auth='{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "matt", "password": ""}}}'

 

curl -s -d "$auth" -H 'Content-type: application/json' $url |python -m json.tool

or

curl -s -d "$auth" -H 'Content-type: application/json' $url |python -c "import sys, json; print json.load(sys.stdin)['access']['token']['id']"

to just print out the token. Although a simple `swift auth` would do all this for you.

James Morris: LSM mailing list archive: this time for sure!

Tue, 2017-03-14 10:01

Following various unresolved issues with existing mail archives for the Linux Security Modules mailing list, I’ve set up a new archive here.

It’s a mailman mirror of the vger list.

Sridhar Dhanapalan: How to Create a Venn Diagram with Independent Intersections in PowerPoint

Mon, 2017-03-13 22:02

A Venn diagram can be a great way to explain a business concept. This is generally not difficult to create in modern presentation software. I often use Google Slides for its collaboration abilities.

Where it becomes difficult is when you want to add a unique colour/pattern to an intersection, where the circles overlap. Generally you will either get one circle overlapping another, or if you set some transparency then the intersection will become a blend of the colours of the circles.

I could not work out how to do this in Google Slides, so on this occasion I cheated and did it in Microsoft PowerPoint instead. I then imported the resulting slide into Slides.

This worked for me in PowerPoint for Mac 2016. The process is probably the same on Windows.

Firstly, create a SmartArt Venn Diagram

Insert > SmartArt > Relationship > Basic Venn

Separate the Venn circles

SmartArt Design > Convert > Convert to Shapes

Ungroup shapes

Shape Format > Group Objects > Ungroup

Split out the intersections

Shape Format > Merge Shapes > Fragment

From there, you can select the intersection as an independent shape. You can treat each piece separately. Try giving them different colours or even moving them apart.

This can be a simple but impactful way to get your point across.

OpenSTEM: This Week in HASS – term 1, week 7

Mon, 2017-03-13 10:05

This week our youngest students are looking in depth at different types of celebrations; slightly older students are examining how people got around in the ‘Olden Days’; and our older primary students have some extra time to finish their activities from last week.

Foundation to Year 3 First car made in Qld, 1902

In the stand-alone Foundation (Prep) unit (F.1), students are discussing celebrations – which ones do we recognise in Australia, how these compare with celebrations overseas, and what were these celebrations like in days gone by. Our integrated Foundation (Prep) unit (F.5) and students in Years 1 (1.1), 2 (2.1) and 3 (3.1), are examining Transport in the Past – how did their grandparents get around? How did people get around 100 years ago? How did kids get to school? How did people do the shopping? Students even get to dream about how we might get around in the future…

Years 3 to 6 Making mud bricks

At OpenSTEM we recognise that good activities, which engage students and allow for real learning, take time. Nobody likes to get really excited about something and then be rushed through it and quickly moved on to something else. This part of the unit has lots of hands-on activities for Year 3 (3.5) students in an integrated class with Year 4, as well as Year 4 (4.1), 5 (5.1) and 6 (6.1) students. In recognition of that, two weeks are allowed for the students to really get into making Ice Ages and mud bricks, and working out how to survive the challenges of living in a Neolithic village – including how to trade, count and write. Having enough time allows for consolidation of learning, as well as allowing teachers to potentially split the class into different groups engaged in different activities, and then rotate the groups through the activities over a 2 week period.

Binh Nguyen: Prophets/Genesis/Terraforming Mars, Seek Menu, and More

Sun, 2017-03-12 19:02
Obvious continuation from my previous posts with regards to prophets/pre-cogs:
http://dtbnguyen.blogspot.com/2017/03/prophetspre-cogsstargate-program-8.html
http://dtbnguyen.blogspot.com/2017/02/life-in-india-prophetspre-cogsstargate_82.html
http://dtbnguyen.blogspot.com/2017/02/life-in-iran-examining-prophetspre-cogs.html
http://dtbnguyen.blogspot.com/2017/02/

Craige McWhirter: Solarized xmobar, trayer and dmenu for xmonad

Fri, 2017-03-10 17:36

Like many other people, Ethan Schoonover's precision colours for machines and people, known as Solarized, is my preferred colour palette, especially as I spend most of my time in terminals.

I've "Solarized" most things in my work environment except my window manager, xmonad. Well to be more specific, the xmobar, trayer and dmenu applications that I use within xmonad.

The commits:

Modifications were made to xmonad.hs, .xmobarrc and the trayer line in .xsession, switching out the existing default colours for those I've selected from the Solarized palette. I now have a fully Solarized window manager that fits in much better with the rest of my workspace.

Relevant code snippets:

xmonad.hs

, logHook = dynamicLogWithPP $ xmobarPP
    { ppOutput = hPutStrLn xmproc
    , ppCurrent = xmobarColor "#859900" "" . wrap "[" "]"
    , ppVisible = xmobarColor "#2aa198" "" . wrap "(" ")"
    , ppLayout = xmobarColor "#2aa198" ""
    , ppTitle = xmobarColor "#859900" "" . shorten 50
    }
, modMask = mod4Mask -- Rebind Mod to the Windows key
--, borderWidth = 1
} `additionalKeys`
-- Custom dmenu launcher
[ ((mod4Mask, xK_p ), spawn "exe=`dmenu_path | dmenu -fn \"Open Sans-10\" -p \"λ:\" -nb \"#073642\" -nf \"#93a1a1\" -sb \"#002b36\" -sf \"#859900\"` && eval \"exec $exe\"")

.xmobarrc

-- Appearance
font = "xft:OpenSans:size=10:antialias=true"
, bgColor = "#073642"
, fgColor = "#93a1a1"
...
-- Plugins
, commands =
   -- CPU Activity Monitor
   ...
   , "--low"    , "#2aa198"
   , "--normal" , "#859900"
   , "--high"   , "#dc322f"
   ...
   -- CPU core temperature monitor
   ...
   , "--low"    , "#2aa198"
   , "--normal" , "#859900"
   , "--high"   , "#dc322f"
   ...
   -- Memory Usage Monitor
   ...
   , "--low"    , "#2aa198"
   , "--normal" , "#859900"
   , "--high"   , "#dc322f"
   ...
   -- Battery Monitor
   ...
   , "--low"    , "#dc322f"
   , "--normal" , "#859900"
   , "--high"   , "#2aa198"
   , "--" -- battery specific options
          -- discharging status
          , "-o" , "<left>% (<timeleft>)"
          -- AC "on" status
          , "-O" , "<fc=#2aa198>Charging</fc>"
          -- charged status
          , "-i" , "<fc=#859900>Charged</fc>"
   ...
   -- Weather Monitor
   ...
   , "--low"    , "#2aa198"
   , "--normal" , "#859900"
   , "--high"   , "#dc322f"
   ...
   -- Network Activity Monitor
   ...
   , "--low"    , "#2aa198"
   , "--normal" , "#859900"
   , "--high"   , "#dc322f"
   ...
   -- Time and Date Display
   , Run Date "<fc=#268bd2>%a %b %_d %H:%M</fc>" "date" 10
   ...

.xsession

exec /usr/bin/trayer --edge top --align right --SetDockType true --SetPartialStrut true --expand true --width 10 --transparent true --alpha 0 --tint 0x073642 --height 22 &

That at least implements the colours I selected from Solarized and should be a good starting point for anyone else.

James Morris: Hardening the LSM API

Thu, 2017-03-09 22:01

The Linux Security Modules (LSM) API provides security hooks for all security-relevant access control operations within the kernel. It’s a pluggable API, allowing different security models to be configured during compilation, and selected at boot time. LSM has provided enough flexibility to implement several major access control schemes, including SELinux, AppArmor, and Smack.

A downside of this architecture, however, is that the security hooks throughout the kernel (there are hundreds of them) increase the kernel’s attack surface. An attacker with a pointer overwrite vulnerability may be able to overwrite an LSM security hook and redirect execution to other code. This could be as simple as bypassing an access control decision via existing kernel code, or redirecting flow to an arbitrary payload such as a rootkit.

Minimizing the inherent security risk of security features is, I believe, an essential goal.

Recently, as part of the Kernel Self Protection Project, support for marking kernel pages as read-only after init (ro_after_init) was merged, based on grsecurity/pax code. (You can read more about this in Kees Cook’s blog here). In cases where kernel pages are not modified after the kernel is initialized, hardware RO page protections are set on those pages at the end of the kernel initialization process. This is currently supported on several architectures (including x86 and ARM), with more architectures in progress.

It turns out that the LSM hook operations make an ideal candidate for ro_after_init marking, as these hooks are populated during kernel initialization and then do not change (except in one case, explained below). I’ve implemented support for ro_after_init hardening for LSM hooks in the security-next tree, aiming to merge it to Linus for v4.11.

Note that there is one existing case where hooks need to be updated, for runtime SELinux disabling via the ‘disable’ selinuxfs node. Normally, to disable SELinux, you would use selinux=0 at the kernel command line. The runtime disable feature was requested by Fedora folk to handle platforms where the kernel command line is problematic. I’m not sure if this is still the case anywhere. I strongly suggest migrating away from runtime disablement, as configuring support for it in the kernel (via CONFIG_SECURITY_SELINUX_DISABLE) will cause the ro_after_init protection for LSM to be disabled. Use selinux=0 instead, if you need to disable SELinux.

It should be noted, of course, that an attacker with enough control over the kernel could directly change hardware page protections. We are not trying to mitigate that threat here — rather, the goal is to harden the security hooks against being used to gain that level of control.

Rusty Russell: Quick Stats on zstandard (zstd) Performance

Thu, 2017-03-09 12:02

Was looking at using zstd for backup, and wanted to see the effect of different compression levels. I backed up my (built) bitcoin source, which is a decent representation of my home directory, but only weighs in at 2.3GB. zstd -1 compressed it 71.3%, zstd -22 compressed it 78.6%, and here’s a graph showing runtime (on my laptop) and the resulting size:

zstandard compression (bitcoin source code, object files and binaries) times and sizes

For this corpus, sweet spots are 3 (the default), 6 (2.5x slower, 7% smaller), 14 (10x slower, 13% smaller) and 20 (46x slower, 22% smaller). Spreadsheet with results here.
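If you want to repeat a rough version of this experiment, the python zstandard module exposes the same compression levels (a quick sketch – point it at a tarball of your own corpus):

import time
import zstandard as zstd

with open('corpus.tar', 'rb') as f:   # hypothetical tarball of your corpus
    data = f.read()

for level in (1, 3, 6, 14, 20, 22):
    t0 = time.time()
    out = zstd.ZstdCompressor(level=level).compress(data)
    ratio = 100.0 * (1 - len(out) / len(data))
    print('%2d %6.1fs %5.1f%%' % (level, time.time() - t0, ratio))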

Binh Nguyen: Prophets/Pre-Cogs/Stargate Program 8, Github Download Script, and More

Wed, 2017-03-08 20:41
Obvious continuation from my previous posts with regards to prophets/pre-cogs:
http://dtbnguyen.blogspot.com/2017/02/life-in-india-prophetspre-cogsstargate_82.html
http://dtbnguyen.blogspot.com/2017/02/life-in-iran-examining-prophetspre-cogs.html
http://dtbnguyen.blogspot.com/2017/02/life-in-venezuela-examining-prophetspre.html
http://dtbnguyen.blogspot.com/2017/01/

Matthew Oliver: pudb debugging tips

Wed, 2017-03-08 12:06

As an OpenStack Swift dev I obviously write a lot of Python. Further, Swift is a clustered system and so has a bunch of moving pieces, which makes debugging very important. Most of the time I use pudb and then jump into the PyCharm debugger if I get really stuck.

Pudb is a curses-based version of pdb, and I find it pretty awesome; you can use it while ssh’d somewhere. So I thought I’d write up some of the tips I use, mainly so I don’t forget them.
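The most basic tip of all: dropping into pudb at an arbitrary point in your code is one line (assuming pudb is pip-installed):

# Put this wherever you want execution to stop and a debugger to appear
import pudb; pudb.set_trace()

# Or run a whole script under the debugger from the start:
#   python -m pudb myscript.py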

OpenSTEM: Oceanography and the Continents

Wed, 2017-03-08 10:04

Marie Tharp (30 July, 1920 – 23 August, 2006) was an oceanographer and cartographer who mapped the oceans of the world. She worked with Bruce Heezen, who collected data on a ship, mapping the ocean floor.

Tharp and Heezen

Tharp turned the data into detailed maps. At that time women were not allowed to work on research ships, as it was thought that they would bring bad luck! However, Tharp was a skilled cartographer, and as she made her maps of the floor of the oceans of the world, with their ridges and valleys, she realised that there were deep valleys which showed the boundaries of continental plates. She noticed that these valleys were also places with lots of earthquakes and she became convinced of the basics of plate tectonics and continental drift.

Between 1959 and 1963, Tharp was not mentioned in any of the scientific papers published by Heezen, and he dismissed her theories disparagingly as “girl talk”. As this video from National Geographic shows, she stuck to her guns and was vindicated by the evidence, eventually managing to persuade Heezen, and the scientific community at large, of the validity of the theories. In 1977, Heezen and Tharp published a map of the entire ocean floor. Tharp obtained degrees in English, Music, Geology and Mathematics during the course of her life. In 2001, a few weeks before her 81st birthday, Marie Tharp was awarded the Lamont-Doherty Heritage Award at Columbia University, in the USA, as a pioneer of oceanography. She died of cancer in 2006.

The National Geographic video provides an excellent testimony to this woman pioneer in oceanography.

Lev Lafayette: Multicore World 2017: A Review

Tue, 2017-03-07 16:05

Multicore World is a small conference held annually in New Zealand, hosted by Open Parallel. What it lacks in numbers, however, it makes up for in the quality of the presenters. The 2017 conference included a typically impressive array of speakers dealing with some of the most difficult issues facing computational science, and featured several important announcements in the fields of supercomputing, the Internet of Things, and manufacturing.


Craige McWhirter: Adventures in Unemployment

Mon, 2017-03-06 13:15

Due to a recent corporate fire sale, implosion, whatever you'd like to call it, I found myself joining thousands of my former colleagues unemployed and looking for "new opportunities" (hire me, I'm dead set amazing).

As a parent who also has an ex-wife and children it is incumbent upon me to inform the Department of Human Services (DHS) of any changes to my income within strict time frames. So like a dutiful slave of the state, I called them to advise of my new $0 income status.

The following conversation actually happened:

[DHS] "So taking into account your new income of $0, you will need to pay $114 / month."

[McW] "With an income of $0, how would you expect me to pay that?"

[DHS] "Borrow money from family and friends."

[McW] "You know you just said that out loud, right?"

[DHS] "Yes sir."

[McW] "Okay, so let me clarify this. I have an income of $0, 3 dependent children living with me, one dependent adult and the DHS priority is not for me to provide food and shelter for them but to pay child support?"

[DHS] "That is correct."

[McW] "..and this is something you've not only said out loud but on a phone call that's being recorded for 'service quality and training purposes'."

[DHS] "That is the nature of the legislation and what we are trained to say."

[McW] "You do see the problem here, don't you?"

[DHS] "Yes sir, I do."

[McW] "Are there any other things you're trained to say that might help?"

[DHS] "You could apply for work benefits."

[McW] "Okay, let's think this one through. Let's say I did get the dole, which would be about $400 / fortnight, less than my fortnightly rent even before I commence buying food, would the DHS still want $114 from that?

[DHS] "Yes, child support would be taken from the benefits before they were paid to you."

[McW] [long pause] "Back to the $0 income and obvious incapacity to pay, when the inevitable non-payment occurs, what does the DHS do next?"

[DHS] "Despite your excellent payment history, the DHS would have to pursue avenues for collection."

[McW] "So I have a family of 6 to shelter and support and the DHS will still end up going collect to strip us of whatever they can? That's not particularly helpful to anyone, not those I'm directly supporting nor my children for whom the DHS is collecting child support."

[DHS] "That's correct sir, once a child support debt of $1,000 is accrued, DHS will pursue collection avenues. Is there anything else I can help you with?"

[McW] "Unless you can change the legislation, I think we're good here. Thank you."

Having been in the child support "game" for about 13 years, having seen female friends dudded by former male partners and male friends rorted by former female partners, it's not as though I was unaware the system was truly broken and unfair to all parties in so many cases.

This conversation however, was truly breathtaking. I doubt Douglas Adams could have scripted this any better. :-)

OpenSTEM: The Week in HASS – term 1, week 6

Mon, 2017-03-06 10:04

HASS students have a global focus this week. The younger students are looking at calendars, celebrations and which countries classmates are connected to, around the world. Older students are starting to explore what happened at the end of the Ice Age and the beginnings of agriculture and trade. These students will also be applying the scientific method to practical examinations – creating their own mini Ice Ages in a bowl and making mud bricks.

Foundation to Year 3

Our standalone Foundation/Prep classes (F.1) are looking at calendars and celebrations this week, starting to explore the world beyond their own family and gain an identity relative to each other. Integrated Foundation/Prep (F.5) and Year 1 (1.1) classes, as well as Year 2 (2.1) and 3 (3.1) classes, are examining our OpenSTEM blackline world map and putting coloured dots on all the countries that they and their families are connected to, either through relatives, or by having lived there themselves. It is through this sort of exercise that students can start to understand the concept of the “global family”.

Year 3 to Year 6 Making an Ice Age

Students in years 3 (3.5), 4 (4.1), 5 (5.1) and 6 (6.1) are consolidating their learning and expanding into subjects, such as Science and Economics and Business. The ever-popular Ice Ages and Mud Bricks activity links to core Science curricular strands and allows students to explore their learning in very tactile ways. Whilst undertaking the activity, students make a mini Ice Age in a bowl, attempting to predict what will happen to their clay landscape when it is flooded and frozen, and then comparing these predictions to their recorded observations, during empirical testing. Students also make their own mud bricks by hand, once again predicting how to make the bricks strongest and testing different construction techniques. We have even had classes test the strength of their mud brick walls under simulated flood conditions, working inside a tidy tray.

Making mud bricks

Students move on from studying the Ice Age, looking at what happened as the climate changed and global sea levels rose. The pressures that these changes brought to people’s lives are examined by looking at the origins of agriculture with domestic plants and animals. Students consider how people needed to work together to survive. The cooperative Trade and Barter activity allows students to role play life in a Neolithic village. Faced with a range of challenges, such as floods and droughts, students discover how to prioritise their needs for food to survive the winter, against their wants. They also discover that trade, counting and writing all grew out of the need for people to exchange items and help each other to survive. This activity covers all the basic concepts in the Economics and Business curriculum, whilst providing a context that is meaningful to the students and their own experiences. Replicating the way that people developed trade, counting and writing in the historic period, the students’ experiences during the Trade and Barter activity lay the foundations for a deeper understanding of the basic concepts of Economics and Business.

flooding the mud brick wall

Ben Martin: Non self replicating reprap 3d printer

Sun, 2017-03-05 18:21
The reprap is designed to be able to "self replicate" to a degree. If a part on a reprap 3d printer breaks then a replacement part can be printed and attached. Parts can evolve as new ideas come along. Having parts crack or weaken on a 3d printer can be undesirable though.

A part on this printer was a mix of acrylic and PLA, both of which were cracked. Not quite what one would hope for as a foot of the y-axis. It is an interesting design with the two driving rods the same length as the alloy channel at the back of the printer.



A design I thought of called for 1/2 inch alloy in order to wrap the existing alloy extrusion with a 3mm cover. The dog bone on the slot is manually added in Fusion 360 so it is larger than needed. The whole thing being a learning exercise for me as to how to create 2.5D parts. The belt tensioning is on a 6mm subassembly that is mounted on the bracket in the right of the image below.


The bracket and subassembly are shown mounted below. Yes, using four M6 bolts to tension a belt is overkill. I would imagine you can stretch the belt to breaking point quite easily with these bolts. The two rods are locked into place using M3 tapped grub screws. The end brackets are bolted to the back extrusion using two M6 bolts.


The z-axis is now supported by a second 10mm alloy custom bracket. This combination makes it much, much harder to wobble the z-axis than the original design using plastic parts.




Pia Waugh: Service Innovation in New Zealand – the new digital transformation

Fri, 2017-03-03 10:01

Over the past fortnight I have had the pleasure and privilege of working closely with the Service Innovation team in the New Zealand Government to contribute to their next steps in achieving proper service integration. It was an incredible two weeks as part of an informal exchange between our agencies to share expertise and insights. I am very thankful for the opportunity to work with the team and to contribute in some small way to their visionary, ambitious and world leading agenda. I also recommend everyone watch closely the work of Service Innovation team, and contact them if you are interested in giving feedback on the model.

I spent a couple of weeks in New Zealand looking at their “Service Innovation” agenda, which is, I can confidently say, one of the most exciting things I’ve ever seen for genuine digital transformation. The Kiwis have in place already a strong and technically sound vision for service integration, a bunch of useful guidance (including one of the best gov produced API guides I’ve ever seen!) and a commitment to delivering integrated services as a key part of their agenda and programme along with brilliant skills and visionaries across government.

I believe New Zealand will be the one to watch over the coming year and, with a little luck, they could redefine the baseline for what everyone should be aspiring to. They could be the first government to properly demonstrate Gov as a Platform, not just better digital government, which is quite exciting! Systemic change and transformation generally happens once a generation if you are lucky, so do keep an eye on the Kiwis. They are set to leave us all behind!

There is more internal documentation which I encourage the team to publish, like the Federated Services Model Reference Architecture and other gems.

Over the couple of weeks, on the back of a raft of ongoing work, we analysed why, with such great guidance available, siloed approaches were still happening. We found that the natural motivations of agencies would always drive an implementation designed to meet the specific agency's needs rather than the system needs across government. That was unsurprising, but the new insight concerned the service delivery teams themselves: they wanted to do the best possible implementations, but with little time and resource, and high expectations, they couldn't take the time needed to find, read, interpret, translate into practice and verify implementation of the guidance. Which is quite fair! So we looked at models of reducing the barriers for those teams to do things better, by providing reusable infrastructure and reference implementations, and by either changing or tweaking the motivations of agencies themselves.

This is an ongoing piece of work, but fundamentally we looked at the idea that if we made the best technical path also the easiest path for service delivery teams to follow, then there would be a reasonable chance of a consumable, systems approach to delivering these services. If support and skills were available, along with code, dev environments, reference implementations, lab environments and other useful tools for designing and delivering government services faster, better and cheaper, then service delivery teams and agencies would both have a natural motivation to take that approach. Basically, we surmised that vision and guidance probably needed to be supplemented by implementation to make it real, moving from policy to application.

It is great to see other jurisdictions like New Zealand starting to experiment and implement the consumable mashable government model! I want to say a huge thank you to the New Zealand Government for sharing their ideas, but mostly for now picking up and being in such a great position to show everyone what Gov as a Platform and Gov as an API should look like. I wish you luck and hope to be a part of your success, even just in a small way!

Rock on.

Binh Nguyen: Life in Iraq, Random Stuff, and More

Fri, 2017-03-03 06:38
- ~40M people. Main industries are petroleum, chemicals, textiles, leather, construction materials, food processing, fertilizer, metal fabrication/processing. Export goods are crude oil 84%, crude materials excluding fuels 8%, food and live animals 5%. Main export partners: China 22.6%, India 21.1%, South Korea 11.2%, United States 7.8%, Italy 6.7%, Greece 6% (2015). Main import partners: Turkey