Planet Linux Australia


Michael Still: On layers

8 hours 17 min ago
There's been a lot of talk recently about what we should include in OpenStack and what is out of scope. This is interesting, in that many of us used to believe that we should do "everything". I think what's changed is that we're learning that solving all the problems in the world is hard, and that we need to re-focus on our core products. In this post I want to talk through the various "layers" proposals that have been made in the last month or so. Layers don't directly address what we should include in OpenStack or not, but they are a useful mechanism for trying to break up OpenStack into simpler-to-examine chunks, and I think that makes them useful in their own right.

I would address what I believe the scope of the OpenStack project should be, but I feel that would make this post so long that no one would ever actually read it. Instead, I'll cover that in a later post in this series. For now, let's explore what people are proposing as a layering model for OpenStack.

What are layers?

Dean Troyer did a good job of describing a layers model for the OpenStack project on his blog quite a while ago. He proposed the following layers (this is a summary, you should really read his post):

  • layer 0: operating system and Oslo
  • layer 1: basic services -- Keystone, Glance, Nova
  • layer 2: extended basics -- Neutron, Cinder, Swift, Ironic
  • layer 3: optional services -- Horizon and Ceilometer
  • layer 4: turtles all the way up -- Heat, Trove, Moniker / Designate, Marconi / Zaqar

Dean notes that Neutron would move to layer 1 when nova-network goes away and Neutron becomes required for all compute deployments. Dean's post was also over a year ago, so it misses services like Barbican that have appeared since then. Services are only allowed to require services from lower numbered layers, but can use services from higher numbered layers as optional add-ins. So Nova for example can use Neutron, but cannot require it until it moves into layer 1. Similarly, there have been proposals to add Ceilometer as a dependency to schedule instances in Nova, and if we were to do that then we would need to move Ceilometer down to layer 1 as well. (I think doing that would be a mistake by the way, and have argued against it during at least two summits.)
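The dependency rule here is mechanical enough to sketch in code. A minimal illustration follows; the layer assignments come from Dean's model as summarised above, while the function name and structure are mine and not anything OpenStack actually ships. I've treated "same layer" as allowed for hard requirements, since Nova already requires Keystone and Glance at its own layer:

```python
# Layer assignments from Dean Troyer's model, as summarised above.
LAYERS = {
    "oslo": 0,
    "keystone": 1, "glance": 1, "nova": 1,
    "neutron": 2, "cinder": 2, "swift": 2, "ironic": 2,
    "horizon": 3, "ceilometer": 3,
    "heat": 4, "trove": 4, "designate": 4, "zaqar": 4,
}

def may_require(service, dependency):
    """A service may hard-require only services at its own layer or below;
    anything in a higher layer can only ever be an optional add-in."""
    return LAYERS[dependency] <= LAYERS[service]
```

Under this sketch, may_require("nova", "neutron") is false today, and only becomes true once Neutron moves down to layer 1, which is exactly the situation Dean describes.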

Sean Dague re-ignited this discussion with his own blog post relatively recently. Sean proposes new names for most of the layers, but the intent remains the same -- a compute-centric view of the services that are required to build a working OpenStack deployment. Sean and Dean's layer definitions are otherwise strongly aligned, and Sean notes that the probability of seeing something deployed at a given installation reduces as the layer count increases -- so for example Trove is way less commonly deployed than Nova, because the set of people who want a managed database as a service is smaller than the set of people who just want to be able to boot instances.

Now, I'm not sure I agree with the compute centric nature of the two layers proposals mentioned so far. I see people installing just Swift to solve a storage problem, and I think that's a completely valid use of OpenStack and should be supported as a first class citizen. On the other hand, resolving my concern with the layers model there is trivial -- we just move Swift to layer 1.

What do layers give us?

Sean makes a good point about the complexity of OpenStack installs and how we scare away new users. I agree completely -- we show people our architecture diagrams which are deliberately confusing, and then we wonder why they're not impressed. I think we do it because we're proud of the scope of the thing we've built, but I think our audiences walk away thinking that we don't really know what problem we're trying to solve. Do I really need to deploy Horizon to have working compute? No of course not, but our architecture diagrams don't make that obvious. I gave a talk along these lines at pyconau, and I think as a community we need to be better at explaining to people what we're trying to do, while remembering that not everyone is as excited about writing a whole heap of cloud infrastructure code as we are. This is also why the OpenStack miniconf in 2015 has pivoted from being a generic OpenStack chatfest to being something more solidly focussed on issues of interest to deployers -- we're just not great at talking to our users, and we need to reboot the conversation at community conferences until it's something which meets their needs.

We intend this diagram to amaze and confuse our victims

Agreeing on a set of layers gives us a framework within which to describe OpenStack to our users. It lets us communicate the services we think are basic and always required, versus those which are icing on the cake. It also lets us explain the dependencies between projects better, and that helps deployers work out what order to deploy things in.

Do layers help us work out what OpenStack should focus on?

Sean's blog post then pivots and starts talking about the size of the OpenStack ecosystem -- or the "size of our tent" as he phrases it. While I agree that we need to shrink the number of projects we're working on at the moment, I feel that the blog post is missing a logical link between the previous layers discussion and the tent size conundrum. It feels to me that Sean wanted to propose that OpenStack focus on a specific set of layers, but didn't quite get there for whatever reason.

Next Monty Taylor had a go at furthering this conversation with his own blog post on the topic. Monty starts by making a very important point -- he, like everyone involved in this discussion, wants the OpenStack community to be as inclusive as possible. I want lots of interesting people at the design summits, even if they don't work directly on projects that OpenStack ships. You can be a part of the OpenStack community without having our logo on your product.

A concrete example of including non-OpenStack projects in our wider community was visible at the Atlanta summit -- I know for a fact that there were software engineers at the summit who work on Google Compute Engine. I know this because I used to work with them at Google when I was an SRE there. I have no problem with people working on competing products being at our summits, as long as they are there to contribute meaningfully in the sessions, and not just take from us. It needs to be a two way street. Another concrete example is Ceph. I think Ceph is cool, and I'm completely fine with people using it as part of their OpenStack deployment. What upsets me is when people conflate Ceph with OpenStack. They are different. They're separate. And that is fine. Let's just not confuse people by saying Ceph is part of the OpenStack project -- it simply isn't, because it doesn't fall under our governance model. Ceph is still a valued member of our community and more than welcome at our summits.

Do layers help us work out what to focus OpenStack on for now? I think they do. Should we simply say that we're only going to work on a single layer? Absolutely not. What we've tried to do up until now is have OpenStack be a single big thing, what we call "the integrated release". I think layers give us a tool to find logical ways to break that thing up. Perhaps we need a smaller integrated release, but then continue with the other projects on their own release cycles? Or perhaps they release at the same time, but we don't block the release of a layer 1 service on the basis of release critical bugs in a layer 4 service?

Is there consensus on what sits in each layer?

Looking at the posts I can find on this topic so far, I'd have to say the answer is no. We're close, but we're not aligned yet. For example, one proposal has a tweak to the previously proposed layer model that moves Cinder, Designate and Neutron down into layer 1 (basic services). The author argues that this is because stateless cloud isn't particularly useful to users of OpenStack. However, I think this is wrong to be honest. I can see that stateless cloud isn't super useful by itself, but that argument assumes OpenStack is the only piece of infrastructure that a given organization has. Perhaps that's true for the public cloud case, but the vast majority of OpenStack deployments at this point are private clouds. So, you're an existing IT organization and you're deploying OpenStack to increase the level of flexibility in compute resources. You don't need to deploy Cinder or Designate to do that. Let's take the storage case for a second -- our hypothetical IT organization probably already has some form of storage -- a SAN, or NFS appliances, or something like that. So stateful cloud is easy for them -- they just have their instances mount resources from those existing storage pools like they would any other machine. Eventually they'll decide that hand managing that is horrible and move to Cinder, but that's probably later, once they've gotten through the initial baby step of deploying Nova, Glance and Keystone.

The first step to using layers to decide what we should focus on is to decide what is in each layer. I think the conversation needs to revolve around that for now, because if we drift off into debating whether sitting in a given layer means you're voted off the OpenStack island, then we'll never even come up with an agreed set of layers.

Let's ignore tents for now

The size of the OpenStack "tent" is the metaphor being used at the moment for working out what to include in OpenStack. As I say above, I think we need to reach agreement on what is in each layer before we can move on to that very important conversation.


Given the focus of this post is the layers model, I want to stop introducing new concepts here for now. Instead let me summarize where I stand so far -- I think the layers model is useful. I also think the layers should be an inverted pyramid -- layer 1 should be as small as possible for example. This is because of the dependency model that the layers model proposes -- it is important to keep the list of things that a layer 2 service must use as small and coherent as possible. Another reason to keep the lower layers as small as possible is because each layer represents the smallest possible increment of an OpenStack deployment that we think is reasonable. We believe it is currently reasonable to deploy Nova without Cinder or Neutron for example.

Most importantly of all, having those incremental stages of OpenStack deployment gives us a framework we have been missing in talking to our deployers and users. It makes OpenStack less confusing to outsiders, as it gives them bite sized morsels to consume one at a time.

So here are the layers as I see them for now:

  • layer 0: operating system, and Oslo
  • layer 1: basic services -- Keystone, Glance, Nova, and Swift
  • layer 2: extended basics -- Neutron, Cinder, and Ironic
  • layer 3: optional services -- Horizon, and Ceilometer
  • layer 4: application services -- Heat, Trove, Designate, and Zaqar

I am not saying that everything inside a single layer is required to be deployed simultaneously, but I do think it's reasonable for Ceilometer to assume that Swift is installed and functioning. The big difference here between my view of layers and that of Dean, Sean and Monty is that I think that Swift is a layer 1 service -- it provides basic functionality that may be assumed to exist by services above it in the model.

I believe that when projects come to the Technical Committee requesting incubation or integration, they should specify what layer they see their project sitting at, and the justification for a lower layer number should be harder than that for a higher layer. So for example, we should be reasonably willing to accept proposals at layer 4, whilst we should be super concerned about the implications of adding another project at layer 1.

In the next post in this series I'll try to address the size of the OpenStack "tent", and what projects we should be focussing on.

Tags for this post: openstack kilo technical committee tc layers

Related posts: My candidacy for Kilo Compute PTL; Juno TC Candidacy; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic


Andrew Pollock: [life] Day 244: TumbleTastics, photo viewing, bulk goods and a play in the park

9 hours 19 min ago

Yesterday was another really lovely day. Zoe had her free trial class with TumbleTastics at 10am, and she was asking for a leotard for it, because that's what she was used to wearing when she went to Gold Star gymnastics in Mountain View. After Sarah dropped her off, we dashed out to Cannon Hill to go to K Mart in search of a leotard.

We found a leotard, and got home with enough time to scooter over to TumbleTastics for the class. Zoe had an absolute ball again, and the teacher was really impressed with her physical ability, and suggested for her regular classes that start next term, that she be slotted up a level. It sounds like her regularly scheduled class will have older kids and more boys, so that should be good. I just love that Zoe's so physically confident.

We scootered back home, and after lunch, drove back to see Hannah to view our photos from last week's photo shoot. There were some really beautiful photos in the set, so now I need to decide which one I want on a canvas.

Since we were already out, I thought we could go and check out the food wholesaler at West End that we'd failed to get to last week. I'm glad we did, because it was awesome. There was a coffee shop attached to the place, so we grabbed a coffee and a babyccino together after doing some shopping there.

While we were out, I thought it was a good opportunity to check out a new park, and we drove down to what I guess was Orleigh Park, and had an absolutely fantastic afternoon down there by the river; the shade levels in the mid-afternoon were great. I'd like to make an all day outing one day on the CityCat and CityGlider, and bus one way and take the ferry back the other way with a picnic lunch in the middle.

We headed home after about an hour playing in the park, and Zoe watched a little bit of TV before Sarah came to pick her up.

Zoe's spending the rest of the school holidays with Sarah, so I'm going to use the extra spare time to try and catch up on my taxes and my real estate licence training, which I've been neglecting.

Colin Charles: Oracle Linux ships MariaDB

14 hours 19 min ago

I can’t remember why I was installing Oracle Enterprise Linux 7 on Oracle VirtualBox a while back, but I did notice something interesting: like CentOS 7, it ships MariaDB Server 5.5. Presumably, this means that MariaDB is now supported by Oracle, too ;-) [jokes aside, it’s likely because OEL7 is meant to be 100% compatible with RHEL7]

The only reason I mention this now is that Vadim Tkachenko probably got his most retweeted tweet recently, stating just that. If you want to upgrade to MariaDB 10, don’t forget that the repository download tool provides CentOS 7 binaries, which should “just work”.

If you want to switch to MySQL, there is a Public Yum repository that MySQL provides (and also don’t forget to check the Extras directory of the full installation – from OEL7 docs sub-titled: MySQL Community and MariaDB Packages). Be sure to read the MySQL docs about using the Yum repository. I also just noticed that the docs now have information on replacing a third-party distribution of MySQL using the MySQL yum repository.
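For reference, the CentOS 7 repository configuration that the MariaDB download tool generates looks roughly like the fragment below. The baseurl is my recollection of the tool's output for 10.0 on x86_64; treat it as an assumption and generate your own from the official tool rather than copying this verbatim:

```ini
# Sketch of /etc/yum.repos.d/MariaDB.repo for MariaDB 10.0 on CentOS 7.
# Verify the baseurl against the official repository download tool.
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos7-amd64
gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1
```

With that in place, a `yum install MariaDB-server MariaDB-client` should pull in the 10.0 packages instead of the distribution's 5.5.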

Related posts:

  1. MariaDB 10.0.5 storage engines – check the Linux packages
  2. Using MariaDB on CentOS 6
  3. MariaDB 10.0.3: installing the additional engines

Russell Coker: Links September 2014

18 hours 19 min ago

Matt Palmer wrote a short but informative post about enabling DNSSEC in a zone [1]. I really should set up DNSSEC on my own zones.

Paul Wayper has some insightful comments about the Liberal party’s nasty policies towards the unemployed [2]. We really need a Basic Income in Australia.

Joseph Heath wrote an interesting and insightful article about the decline of the democratic process [3]. While most of his points are really good I’m dubious of his claims about twitter. When used skillfully twitter can provide short insights into topics and teasers for linked articles.

Sarah O wrote an insightful article about NotAllMen/YesAllWomen [4]. I can’t summarise it well in a paragraph, I recommend reading it all.

Betsy Haibel wrote an informative article about harassment by proxy on the Internet [5]. Everyone should learn about this before getting involved in discussions about “controversial” issues.

George Monbiot wrote an insightful and interesting article about the referendum for Scottish independence and the failures of the media [6].

Mychal Denzel Smith wrote an insightful article “How to know that you hate women” [7].

Sam Byford wrote an informative article about Google’s plans to develop and promote cheap Android phones for developing countries [8]. That’s a good investment in future market share by Google and good for the spread of knowledge among people all around the world. I hope that this research also leads to cheap and reliable Android devices for poor people in first-world countries.

Deb Chachra wrote an insightful and disturbing article about the culture of non-consent in the IT industry [9]. This is something we need to fix.

David Hill wrote an interesting and informative article about the way that computer game journalism works and how it relates to GamerGate [10].

Anita Sarkeesian shares the most radical thing that you can do to support women online [11]. Wow, the world sucks more badly than I realised.

Michael Daly wrote an article about the latest evil from the NRA [12]. The NRA continues to demonstrate that claims about “good people with guns” are lies, the NRA are evil people with guns.

Related posts:

  1. Links July 2014 Dave Johnson wrote an interesting article for Salon about companies...
  2. Links May 2014 Charmian Gooch gave an interesting TED talk about her efforts...
  3. Links September 2013 Matt Palmer wrote an insightful post about the use of...

Michael Still: Blueprints implemented in Nova during Juno

Tue, 2014-09-30 23:28
As we get closer to releasing the RC1 of Nova for Juno, I've started collecting a list of all the blueprints we implemented in Juno. This was mostly done because it helps me write the release notes, but I am posting it here because I am sure that others will find it handy too.


  • Reserve 10 sql schema version numbers for back ports of Juno migrations to Icehouse. launchpad specification

Ongoing behind the scenes work

Object conversion

  • Support sub-classing objects. launchpad specification
  • Stop using the scheduler run_instance method. Previously the scheduler would select a host, and then boot the instance. Instead, let the scheduler select hosts, but then return those so the caller boots the instance. This will make it easier to move the scheduler to being a generic service instead of being internal to nova. launchpad specification
  • Refactor the nova scheduler into being a library. This will make splitting the scheduler out into its own service later easier. launchpad specification
  • Move nova to using the v2 cinder API. launchpad specification
  • Move prep_resize to conductor in preparation for splitting out the scheduler. launchpad specification
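The run_instance change in the list above (the scheduler selects hosts, the caller boots) can be sketched as follows. This is purely illustrative; none of these names are Nova's real interfaces, and the "filter" is a stand-in for the real scheduler filters:

```python
# Illustrative sketch of the select-then-boot split: the scheduler only
# *selects* candidate hosts, and the caller performs the boot. This keeps
# the scheduler generic rather than internal to nova.

def select_destinations(hosts, request):
    """Scheduler's job: return candidate hosts that can satisfy the request."""
    return [h for h in hosts if h["free_ram_mb"] >= request["ram_mb"]]

def boot_instance(host, request):
    """Caller's job: boot the instance on a chosen host."""
    host["free_ram_mb"] -= request["ram_mb"]
    return {"host": host["name"], "ram_mb": request["ram_mb"]}

hosts = [{"name": "node1", "free_ram_mb": 2048},
         {"name": "node2", "free_ram_mb": 512}]
request = {"ram_mb": 1024}

candidates = select_destinations(hosts, request)   # scheduler service
instance = boot_instance(candidates[0], request)   # compute-side caller
```

The point of the split is visible in the last two lines: scheduling and booting are now separate calls, so the selection step could live in a standalone service.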

  • Use JSON schema to strongly validate v3 API request bodies. Please note this work will later be released as v2.1 of the Nova API. launchpad specification
  • Provide a standard format for the output of the VM diagnostics call. This work will be exposed by a later version of the v2.1 API. launchpad specification
  • Move to the OpenStack standard name for the request id header, in a backward compatible manner. launchpad specification
  • Implement the v2.1 API on the V3 API code base. This work is not yet complete. launchpad specification

  • Refactor the internal nova API to make the nova-network and neutron implementations more consistent. launchpad specification

General features

Instance features


  • Extensible Resource Tracking. The set of resources tracked by nova is hard coded; this change makes that extensible, which will allow plug-ins to track new types of resources for scheduling. launchpad specification
  • Allow a host to be evacuated, but with the scheduler selecting destination hosts for the instances moved. launchpad specification
  • Add support for host aggregates to scheduler filters. launchpad: disk; instances; and IO ops specification

  • i18n enablement for Nova: turn on the lazy translation support from Oslo i18n and update Nova to adhere to the restrictions this adds to translatable strings. launchpad specification
  • Offload periodic task sql query load to a slave sql server if one is configured. launchpad specification
  • Only update the status of a host in the sql database when the status changes, instead of every 60 seconds. launchpad specification
  • Include status information in API listings of hypervisor hosts. launchpad specification
  • Allow API callers to specify more than one status to filter by when listing services. launchpad specification
  • Add quota values to constrain the number and size of server groups a user can create. launchpad specification

Hypervisor driver specific




  • Move the vmware driver to using the oslo vmware helper library. launchpad specification
  • Add support for network interface hot plugging to vmware. launchpad specification
  • Refactor the vmware driver's spawn functionality to be more maintainable. This work was internal, but is mentioned here because it significantly improves the supportability of the VMWare driver. launchpad specification

Tags for this post: openstack juno blueprints implemented

Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Review priorities as we approach juno-3; Thoughts from the PTL


Francois Marier: Encrypted mailing list on Debian and Ubuntu

Tue, 2014-09-30 15:56

Running an encrypted mailing list is surprisingly tricky. One of the first challenges is that you need to decide what the threat model is. Are you worried about someone compromising the list server? One of the subscribers stealing the list of subscriber email addresses? You can't just "turn on encryption", you have to think about what you're trying to defend against.

I decided to use schleuder. Here's how I set it up.


What I decided to create was a mailing list where people could subscribe and receive emails encrypted to them from the list itself. In order to post, they need to send an email encrypted to the list's public key and signed using the private key of a subscriber.

What the list then does is decrypt the email and re-encrypt it individually for each subscriber. This protects the emails while in transit, but is vulnerable to the list server itself being compromised, since every list email transits through there at some point in plain text.
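The decrypt-and-fan-out step can be sketched like this. It is purely illustrative: schleuder itself is Ruby and does the real work with GnuPG, and the function names and the toy "encryption" stand-in below are mine:

```python
def fan_out(plaintext, subscribers, encrypt):
    """Re-encrypt one decrypted post individually for each subscriber.
    `subscribers` maps email address -> public key; `encrypt` is whatever
    encryption primitive the list uses (GnuPG in schleuder's case)."""
    return {addr: encrypt(plaintext, key) for addr, key in subscribers.items()}

# Toy stand-in for GnuPG so the sketch is runnable on its own.
toy_encrypt = lambda msg, key: "enc[%s]:%s" % (key, msg)

out = fan_out("hello list", {"alice@example.com": "KEYA",
                             "bob@example.com": "KEYB"}, toy_encrypt)
```

Note that `fan_out` takes the plaintext as input, which is exactly the window of vulnerability described above: the list server necessarily sees each post decrypted before re-encrypting it per subscriber.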

Installing the schleuder package

The first thing to know about installing schleuder on Debian or Ubuntu is that at the moment it unfortunately depends on ruby 1.8. This means that you can only install it on Debian wheezy or Ubuntu precise: trusty and jessie won't work (until schleuder is ported to a more recent version of ruby).

If you're running wheezy, you're fine, but if you're running precise, I recommend adding my ppa to your /etc/apt/sources.list to get a version of schleuder that actually lets you create a new list without throwing an error.

Then, simply install this package:

apt-get install schleuder

Postfix configuration

The next step is to configure your mail server (I use postfix) to handle the schleuder lists.

This may be obvious, but if you're like me and you're repurposing a server which hasn't had to accept incoming emails, make sure that postfix is set to the following in /etc/postfix/

inet_interfaces = all

Then follow the instructions from /usr/share/doc/schleuder/README.Debian and finally add the following line (thanks to the wiki instructions) to /etc/postfix/

local_recipient_maps = proxy:unix:passwd.byname $alias_maps $transport_maps

Creating a new list

Once everything is set up, creating a new list is pretty easy. Simply run schleuder-newlist and follow the instructions.

After creating your list, remember to update /etc/postfix/transports and run postmap /etc/postfix/transports.

Then you can test it by sending an email to the list address. You should receive the list's public key.

Adding list members

Once your list is created, the list admin is the only subscriber. To add more people, you can send an admin email to the list or follow these instructions to do it manually:

  1. Get the person's GPG key: gpg --recv-key KEYID
  2. Verify that the key is trusted: gpg --fingerprint KEYID
  3. Add the person to the list's /var/lib/schleuder/HOSTNAME/LISTNAME/members.conf:
     - email:
       key_fingerprint: 8C470B2A0B31568E110D432516281F2E007C98D1
  4. Export the public key: gpg --export -a KEYID
  5. Paste the exported key into the list's keyring: sudo -u schleuder gpg --homedir /var/lib/schleuder/HOSTNAME/LISTNAME/ --import

Michael Still: My candidacy for Kilo Compute PTL

Tue, 2014-09-30 12:27
This is mostly historical at this point, but I forgot to post it here when I emailed it a week or so ago. So, for future reference:

I'd like another term as Compute PTL, if you'll have me.

We live in interesting times. OpenStack has clearly gained a large amount of mind share in the open cloud marketplace, with Nova being a very commonly deployed component. Yet we don't have a fantastic container solution, which is our biggest feature gap at this point. Worse -- we have a code base with a huge number of bugs filed against it, an unreliable gate because of subtle bugs in our code and interactions with other OpenStack code, and a continued need to add features to stay relevant. These are hard problems to solve.

Interestingly, I think the solution to these problems calls for a social approach, much like I argued for in my Juno PTL candidacy email. The problems we face aren't purely technical -- we need to work out how to pay down our technical debt without blocking all new features. We also need to ask for understanding and patience from those feature authors as we try and improve the foundation they are building on.

The specifications process we used in Juno helped with these problems, but one of the things we've learned from the experiment is that we don't require specifications for all changes. Let's take an approach where trivial changes (no API changes, only one review to implement) don't require a specification. There will of course sometimes be variations on that rule if we discover something, but it means that many micro-features will be unblocked.

In terms of technical debt, I don't personally believe that pulling all hypervisor drivers out of Nova fixes the problems we face; it just moves the technical debt to a different repository. However, we clearly need to discuss the way forward at the summit, and come up with some sort of plan. If we do something like this, then I am not sure that the hypervisor driver interface is the right place to do that work -- I'd rather see something closer to the hypervisor itself so that the Nova business logic stays with Nova.
Kilo is also the release where we need to get the v2.1 API work done, now that we finally have a shared vision for how to progress. It took us a long time to get to a good shared vision there, so we need to ensure that we see that work through to the end. We live in interesting times, but they're exciting as well.

I have since been elected unopposed, so thanks for that!

Tags for this post: openstack kilo compute ptl

Related posts: Juno Nova PTL Candidacy; Review priorities as we approach juno-3; Thoughts from the PTL; Havana Nova PTL elections; On layers; Expectations of core reviewers


Colin Charles: Trip report: LinuxCon North America, CentOS Dojo Paris, WebExpo Prague

Tue, 2014-09-30 05:25

I had quite a good time at LinuxCon North America/CloudOpen North America 2014, alongside my colleague Max Mether – between us, we gave a total of five talks. I noticed that this year there was a database heavy track — Morgan Tocker from Oracle’s MySQL Team had a few talks as did Martin MC Brown from Continuent. 

The interest in MariaDB stems from the fact that people are starting to just see it appear in CentOS 7, and it's just everywhere (you can even get it from the latest Ubuntu LTS). This makes for giving interesting talks, since many are shipping MariaDB 5.5 as the default choice, but that's something we released over 2 years ago; clearly there are many interesting new bits in MariaDB 10.0 that need attention!

Chicago is a fun place to be — the speaker gift was an architectural tour of Chicago by boat, probably one of the most useful gifts I’ve ever received (yes, I took plenty of photos!). The Linux Foundation team organised the event wonderfully as always, and I reckon the way the keynotes were setup with the booths in the same room was a clear winner — pity we didn’t have a booth there this year. 

Shortly afterwards, I headed to Paris for the CentOS Dojo. The room was full (some 50 attendees?), most of whom were using CentOS, and it’s clear that CentOS 7 comes with MariaDB, so this was a talk to get people up to speed with what’s different from MySQL 5.5, what’s missing from MySQL 5.6, and when to look at MariaDB 10. We want to build CentOS 7 packages for the MariaDB repository (10.0 is already available with MariaDB 10.0.14), so watch MDEV-6433 in the meantime for the latest 5.5 builds.

Then there was WebExpo Prague, with over 1,400 attendees, held in various theatres around Prague. Lots of people here were also using MariaDB, and there were some rather interesting conversations on having a redis front-end, how we power many sites, etc. It’s clear that there is a need for a meetup group here; there’s plenty of usage.

Related posts:

  1. Using MariaDB on CentOS 6
  2. Trip Report: OpenWest Conference
  3. Trip Report: DrupalCon Portland 2013

Andrew Pollock: [life] Day 243: Day care for a day

Mon, 2014-09-29 21:25

I had to resort to using Zoe's old day care today so I could do some more Thermomix Consultant training. Zoe's asked me on and off if she could go back to her old day care to visit her friends and her old teachers, so she wasn't at all disappointed when she could today. Megan was even there as well, so it was a super easy drop off. She practically hugged me and sent me on my way.

When I came back at 3pm to pick her up, she wanted to stay longer, but wavered a bit when I offered to let her stay for another hour and ended up coming home with me.

We made a side trip to the Valley to check my post office box, and then came home.

Zoe watched a bit of TV, and then Sarah arrived to pick her up. After some navel gazing, I finished off the day with a very strenuous yoga class.

Sonia Hamilton: Git and mercurial abort: revision cannot be pushed

Mon, 2014-09-29 11:29

I’ve been migrating some repositories from Mercurial to Git; as part of this migration process some users want to keep using Mercurial locally until they have time to learn git.

First install the hg-git tools; for example on Ubuntu:

sudo aptitude install python-setuptools python-dev
sudo easy_install hg-git

Make sure the following is in your ~/.hgrc:

[extensions]
hgext.bookmarks =
hggit =

Then, in your existing mercurial repository, add a new remote that points to the git repository. For example for a BitBucket repository:

cd <mercurial repository>
cat .hg/hgrc

[paths]
# the original hg repository
default =
# the git version (on BitBucket in this case)
bbgit = git+ssh://

Then you can run hg push bbgit to push from your local hg repository to the remote git repository.

mercurial abort: revision cannot be pushed

You may get the error mercurial abort: revision cannot be pushed since it doesn’t have a ref when pushing from hg to git, or you might notice that your hg work isn’t being pushed. The solution here is to reset the hg bookmark for git’s master branch:

hg book -f -r tip master
hg push bbgit

If you find yourself doing this regularly, this small shell function (in your ~/.bashrc) will help:

hggitpush () {
    # $1 is hg remote name in hgrc for repo
    # $2 is branch (defaults to master)
    hg book -f -r tip ${2:-master}
    hg push $1
}

Then from your shell you can run commands like:

hggitpush bbgit dev
hggitpush foogit  # defaults to pushing to master

Sridhar Dhanapalan: Twitter posts: 2014-09-22 to 2014-09-28

Mon, 2014-09-29 00:26

David Rowe: SM1000 Part 6 – Noise and Radio Tests

Sun, 2014-09-28 14:29

For the last few weeks I have been debugging some noise issues in “analog mode”, and testing the SM1000 between a couple of HF radios.

The SM1000 needs to operate in “analog” mode as well as support FreeDV Digital Voice (DV mode). In analog mode, the ADC samples the mic signal, and sends it straight to the DAC where it is sent to the mic input of the radio. This lets you use the SM1000 for SSB as well as DV, without unplugging the SM1000 and changing microphones. Analog mode is a bit more challenging as electrical noise in the SM1000, if not controlled, makes it through to the transmit audio. DV mode is less sensitive, as the modem doesn’t care about low level noise.

Tracking down noise sources involves a lot of detail work, not very exciting but time consuming. For example I can hear a noise in the received audio, is it from the DAC or ADC side? Write software so I can press a button to send 0 samples to the DAC so I can separate the DAC and ADC at run time. OK it’s the ADC side, is it the ADC itself or the microphone amplifier? Break net and terminate ADC with 1k resistor to ground (thanks Matt VK5ZM for this suggestion). OK it’s the microphone amplifier, so is it on the input side or the op-amp itself? Does the noise level change with the mic gain control? No, then it must not be from the input. And so it goes.

I found noise due to the ADC, the mic amp, the mic bias circuit, and the 5V switcher. Various capacitors and RC filters helped reduce it to acceptable levels. The switcher caused high frequency hiss, this was improved with a 100nF cap across R40, and a 1500 ohm/1nF RC filter between U9 and the ADC input on U1 (prototype schematic). The mic amp and mic bias circuit was picking up 50Hz noise at the frame rate of the DSP software that was fixed with 220uF cap across R40 and a 100 ohm/220uF RC filter in series with R39, the condenser mic bias network.
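As a rough sanity check on those filter values (my arithmetic, not from the original post), the -3 dB corner of a first-order RC low-pass is f = 1/(2πRC):

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """Corner (-3 dB) frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# 1500 ohm / 1 nF filter between the switcher (U9) and the ADC input on U1:
# corner around 106 kHz, so audio passes but high-frequency switcher hiss is cut.
print(rc_cutoff_hz(1500, 1e-9))

# 100 ohm / 220 uF RC filter in series with R39, the condenser mic bias network:
# corner around 7.2 Hz, well below the 50 Hz frame-rate noise it suppresses.
print(rc_cutoff_hz(100, 220e-6))
```

The corner frequencies are consistent with the goals described above: the first filter is transparent to voice-band audio, the second heavily attenuates anything above a few hertz on the bias line.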

To further improve noise, Rick and I are also working on changes to the PCB layout. My analog skills are growing and I am now working methodically. It’s nice to learn some new skills, useful for other radio projects as well. Satisfying.

Testing Between Two Radios

Next step is to see how the SM1000 performs over real radios. In particular, how does it go with nearby RF energy? Does the uC reset itself? Is there RF noise getting into the sensitive microphone amplifier, causing runaway feedback in analog mode? There are also user set-up issues: how easy is it to interface to the mic input of a radio, and is the level reaching the radio mic input OK?

The first step was to connect the SM1000 to an FT817 as the transmit radio, then to an IC7200 via 100dB of attenuation. The IC7200 receive audio was connected to a laptop running FreeDV. The FT817 was set to 0.5W output so I wouldn't let the smoke out of my little in-line attenuators. This worked pretty well, and I obtained SNRs of up to 20dB from FreeDV. It's always a little lower through real radios, but that's acceptable. The PTT control from the SM1000 worked well. It was at this point that I heard some noises using the SM1000 in "analog" mode that I chased down as described above.
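The attenuation arithmetic is easy to check: 0.5 W is about 27 dBm, so after 100 dB of attenuation only about -73 dBm (roughly 50 pW) reaches the IC7200. A quick sketch of the conversion (my numbers, as a worked example):

```python
import math

def watts_to_dbm(p_watts):
    """Power in watts to dBm (dB relative to 1 mW)."""
    return 10.0 * math.log10(p_watts / 1e-3)

def dbm_to_watts(p_dbm):
    """dBm back to watts."""
    return 1e-3 * 10.0 ** (p_dbm / 10.0)

tx_dbm = watts_to_dbm(0.5)   # FT817 at 0.5 W is about +27 dBm
rx_dbm = tx_dbm - 100.0      # through 100 dB of in-line attenuation
print(rx_dbm)                # about -73 dBm
print(dbm_to_watts(rx_dbm))  # about 50 pW at the IC7200 antenna socket
```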

At the IC7200 output I recorded this file demonstrating audio using the stock FT817 MH31 microphone, the SM1000 used in analog mode, and the SM1000 in DV mode. The audio levels are unequal (MH31 is louder), but I am satisfied there are no strange noises in the SM1000 audio (especially in analog mode) when compared to the MH31 microphone. The levels can be easily tweaked.

Then I swapped the configuration to use the IC7200 as the transmitter. This has up to 100W PEP output, so I connected it to an end fed dipole, and used the FT817 with the (non-resonant) VHF antenna as the receiver. It took me a while to get the basic radio configuration working. Even with the stock IC7200 mic I could hear all sorts of strange noises in the receive audio due to the proximity of the two radios. Separating them (walking up the street with the FT817) or winding the RF gain all the way down helped.

However the FreeDV SNR was quite low, a maximum of 15dB. I spent some time trying to work out why but didn’t get to the bottom of it. I suspect there is some transmit pass-band filtering in the IC7200, making some FDMDV carriers a few dB lower than others. Note x-shaped scatter diagram and sloped spectrum below:

However the main purpose of these tests was to see how the SM1000 handled high RF fields. So I decided to move on.

I tested a bunch of different combinations, all with good results:

  • IC7200 with stock HM36 mic, SM1000 in analog mode, SM1000 in DV mode (high and low drive)
  • Radios tuned to 7.05, 14.235 and 28.5 MHz.
  • Tested with IC7200 and SM1000 running from the same 12V battery (breaking transformer isolation).
  • Had a 1m headphone cable plugged into the SM1000 act as an additional “antenna”.
  • Rigged up an adaptor to plug the FT817 MH31 mic into the CN5 “ext mic” connector on the SM1000. Total of 1.5m in mic lead, so plenty of opportunity for RF pick up.
  • Running full power into low and 3:1 SWR loads. (Matt, VK5ZM suggested high SWR loads is a harsh RF environment).

Here are some samples, SM1000 analog, stock IC7200 mic, SM1000 DV low drive, SM1000 high drive. There are some funny noises on the analog and stock mic samples due to the proximity of the rx to the tx, but they are consistent across both samples. No evidence of runaway RF feedback or obvious strange noises. Once again the DV level is a bit lower. All the nasty HF channel noise is gone too!

Change Control

Rick and I are coordinating our work with a change log text file that is under SVN version control. As I perform tests and make changes to the SM1000, I record them in the change log. Rick then works from this document to modify the schematic and PCB, making notes on the change log. I can then review his notes against the latest schematic and PCB files. The change log, combined with email and occasional Skype calls, is working really well, despite us being half way around the planet from each other.

SM1000 Enclosure

One open issue for me is what enclosure we provide for the Beta units. I’ve spoken to a few people about this, and am open to suggestions from you, dear reader. Please comment below on your needs or ideas for a SM1000 enclosure. My requirements are:

  1. Holes for loudspeaker, PTT switch, many connectors.
  2. Support operation in “hand held” or “small box next to the radio” form.
  3. Be reasonably priced, quick to produce for the Qty 100 beta run.

It’s a little over two months since I started working on the SM1000 prototype, and just 6 months since Rick and I started the project. I’m very pleased with progress. We are on track to meet our goal of having Betas available in 2014. I’ve kicked off the manufacturing process with my good friend Edwin from Dragino in China, ordering parts and working together with Rick on the BOM.

Glen Turner: Ubiquitous surveillance, VPNs, and metadata

Sat, 2014-09-27 10:28

My apologies for the lack of diagrams accompanying this post. I had not realised when I selected LiveJournal to host my blog that it did not host images.

There have been a lot of remarks, not the least by a minister, about the use of VPNs to avoid metadata collection. Unfortunately VPNs cannot be presumed to be effective in avoiding metadata collection, because of the sheer ubiquity of surveillance and the traffic analysis opportunities that ubiquity makes possible.

By ‘metadata’ I mean the production of flow records, one record per flow, with no sampling or aggregation.

By ‘ubiquitous surveillance’ I mean the ability to completely tap and record the ingress and egress data of a computer. Furthermore, the sharing of that data with other nations, such as via the Five Eyes programme. It is a legal quirk in the US and in Australia that a national spy agency may not, without a warrant or reasonable cause, be able to analyse the data of its own citizens directly, but can obtain that same information via a Five Eyes partner without a warrant or reasonable cause.

By ‘VPN service’ I mean an overseas service which sells subscriber-based access to an OpenVPN or similar gateway. The subscriber runs an OpenVPN client, the service runs an OpenVPN server. The traffic from within that encrypted VPN tunnel is then NATed and sent out the Internet-facing interface of the OpenVPN server. The traffic from the subscriber appears to have the IP address of the VPN server; this makes VPN services popular for avoiding geo-locked Internet content from Hulu, Netflix and BBC iPlayer.

The theory is that this IP address misdirection also defeats ubiquitous surveillance. An agency producing metadata from the subscriber's traffic sees only communication with the VPN service. An agency tapping the subscriber's traffic sees only the IP address of the subscriber exchanging encrypted content with the IP address of the VPN service.

Unfortunately ubiquitous surveillance is ubiquitous: if a national spy agency cannot tap the traffic itself then it can ask its Five Eyes partner to do the tap. This means that the traffic of the VPN service is also tapped. One interface contains traffic with the VPN subscribers; the other interface contains unencrypted traffic from all subscribers to the Internet. Recall that the content of the traffic with the VPN subscribers is encrypted.

Can a national spy agency relate the unencrypted Internet traffic back to the subscriber's connections? If so then it can tap content and metadata as if the VPN service was not being used.

Unfortunately it is trivial for a national spy agency to do this. ‘Traffic analysis’ is the examination of patterns of traffic. TCP traffic is very vulnerable to traffic analysis:

  • Examining TCP traffic we see a very prominent pattern at the start of every connection. This ‘TCP three-way handshake’ sends one small packet all by itself for the entire round-trip time, receives one small packet all by itself for the entire round-trip time, then sends one large packet. Within a small time window we will see the same pattern in the VPN service's encrypted traffic with the subscriber and in the VPN service's unencrypted Internet traffic.

  • Examining TCP traffic we see a very prominent pattern when a connection encounters congestion. This ‘TCP multiplicative decrease’ halves the rate of transmission when the sender has not received an Acknowledgement packet within the expected time. Within a small time window we will see the same pattern in the VPN service's encrypted traffic with the subscriber and in the VPN service's unencrypted Internet traffic.

These are only the gross features. It doesn't take much imagination to see that the interval between Acks can be used to group connections with the same round-trip time. Or that the HTTP GET and response is also prominent. Or that jittering in web streaming connections is prominent.

In short, by using traffic analysis a national spy agency can — with a high probability — assign the unencrypted traffic on the Internet interface to the encrypted traffic from the VPN subscriber. That is, given traffic with (Internet site IP address, VPN service Internet-facing IP address) and (VPN service subscriber-facing IP address, Subscriber IP address) then traffic analysis allows a national spy agency to reduce that to (Internet site IP address, Subscriber IP address). That is, the same result as if the VPN service was not used.
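A toy sketch of the matching step, using entirely made-up numbers: reduce each flow to a timing signature (its sequence of inter-packet gaps), then correlate the subscriber-side encrypted signature against each candidate flow on the Internet-facing interface. The handshake and congestion patterns survive encryption, so the matching flow stands out.

```python
# Toy illustration of timing-based traffic analysis. All flow data below is
# hypothetical; real analysis would use far richer features than this.

def correlation(a, b):
    """Pearson correlation of two equal-length gap sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# Encrypted subscriber-side flow: the gaps shaped by the three-way handshake
# and congestion events are still visible despite the VPN's encryption.
subscriber_flow = [0.30, 0.30, 0.01, 0.01, 0.15, 0.02, 0.02, 0.16]

# Candidate unencrypted flows seen on the VPN server's Internet interface.
internet_flows = {
    "flow-A": [0.10, 0.05, 0.20, 0.08, 0.05, 0.30, 0.07, 0.05],
    "flow-B": [0.31, 0.29, 0.02, 0.01, 0.14, 0.02, 0.03, 0.15],
}

best = max(internet_flows,
           key=lambda k: correlation(subscriber_flow, internet_flows[k]))
print(best)  # flow-B: its timing pattern matches the encrypted flow
```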

The only remaining question is whether the premier national spy agencies are actually exchanging tables of (datetime, VPN service subscriber-facing IP address, Internet site IP address, Subscriber IP address) to allow national taps of (datetime, VPN server IP address, Subscriber IP address) to be transformed into (datetime, Internet site IP address, Subscriber IP address). There is nothing technical to prevent them from doing so. Based upon the revealed behaviour of the Five Eyes agencies it is reasonable to expect that this is being done.

Tim Serong: Dear ASIO

Sat, 2014-09-27 10:27

Since the Senate passed legislation expanding your surveillance powers on Thursday night, you’ve copped an awful lot of flak on Twitter. Part of the problem, I think – aside from the legislation being far too broad – is that we don’t actually know who you are, or what exactly it is you get up to. You could be part of a spy novel, a movie or a decades-long series of cock ups. You could be script kiddies with a budget. Or you could be something else entirely.

At times like this I try to remind myself to assume good faith; to remember that most people are basically decent and are trying to live a good life. Some people are even trying to make the world a better place, whatever that might mean.

For those of you then who are decent people, and who are trying to keep Australia safe from whatever mysterious threats are out there that we don’t know about – all without wishing to impinge on or risk destroying the freedoms that we enjoy here – you have my thanks.

For those of you involved in the formulation of The National Security Legislation Amendment Bill 2014 (No 1) – you who might be reading this post as I type it, rather than after I publish it – I have tried very, very hard to imagine that you honestly believe you are making the world a better place. And maybe you do actually think that, but for my part I cannot see the powers granted as anything other than a direct assault on our democracy. As Glenn Greenwald pointed out, I should be more worried about bathroom accidents, restaurant meals and lightning strikes than terrorism. As a careful bath user with a strong stomach and a sturdy house to hide in, I think I’m fairly safe on that front. Frankly I’m more worried about climate change. Do you have anyone on staff who can investigate that threat to our national security?

Anyway, thanks for reading, and I’ll take it as a kindness if you don’t edit this post without asking first.


Tim Serong

Linux Users of Victoria (LUV) Announce: LUV Main October 2014 Meeting: MySQL + CCNx

Fri, 2014-09-26 23:29
Start: Oct 7 2014 19:00 End: Oct 7 2014 21:00 Location: 

The Buzzard Lecture Theatre. Evan Burge Building, Trinity College, Melbourne University Main Campus, Parkville.


Stewart Smith, A History of MySQL

Hank, Content-Centric Networking

The Buzzard Lecture Theatre, Evan Burge Building, Trinity College Main Campus Parkville Melways Map: 2B C5

Notes: Trinity College's Main Campus is located off Royal Parade. The Evan Burge Building is located near the Tennis Courts. See our Map of Trinity College. Additional maps of Trinity and the surrounding area (including its relation to the city) can be found at

Parking can be found along or near Royal Parade, Grattan Street, Swanston Street and College Crescent. Parking within Trinity College is unfortunately only available to staff.

For those coming via Public Transport, the number 19 tram (North Coburg - City) passes by the main entrance of Trinity College (Get off at Morrah St, Stop 12). This tram departs from the Elizabeth Street tram terminus (Flinders Street end) and goes past Melbourne Central. Timetables can be found on-line at:

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Andrew Pollock: [life] Day 240: A day of perfect scheduling

Fri, 2014-09-26 21:25

Today was a perfectly lovely day, the schedule just flowed so nicely.

I started the day making a second batch of pizza sauce for the Riverfire party I'm hosting tomorrow night. Once that was finished, we walked around the corner to my dentist for a check up.

Zoe was perfect during the check up, she just sat in the corner of the room and watched and also played on her phone. The dentist commented on how well behaved she was. It blew my mind to run into Tanya there for the second time in a row. We're obviously on the same schedules, but it's just crazy to always wind up with back to back appointments.

After the appointment, we pretty much walked onto a bus to the city, so we could meet Nana for lunch. While we were on the bus, I called up and managed to get haircut appointments for both of us at 3pm. I figured we could make the return trip via CityCat, and the walk home would take us right past the hairdresser.

The bus got us in about 45 minutes early, so we headed up to the Museum of Brisbane in City Hall to see if we could get into the clock tower. We got really lucky, and managed to get onto the 11:45am tour.

Things have changed since I was a kid and my Nana used to take me up the tower. They no longer let you be up there when the bells chime, which is a shame, but apparently it's very detrimental to your hearing.

Zoe liked the view, and then we went back down to King George Square to wait for Nana.

We went to Jo Jo's for lunch, and they somehow managed to lose Zoe's and my lunch order. After about 40 minutes of waiting, I chased it up, and it still took a while to sort out. Zoe was very patient waiting the whole time, despite being starving.

After lunch, she wanted to see Nana's work, so we went up there. On the way back out, she wanted to play with the Drovers statues on Ann Street for a bit. After that, we made our way to North Quay and got on a CityCat, which nicely got us to the hairdresser in time for our appointment.

After that, we walked home, and drove around to check out a few bulk food places that I've learned about from my Thermomix Consultant training. We checked out a couple in Woolloongabba, and they had some great stuff available to the public.

It was getting late, so after a failed attempt at finding one in West End, we returned home so I could put dinner on.

It was a smoothly flowing day today, and Zoe handled it so well.

Michael Still: The Decline and Fall of IBM: End of an American Icon?

Fri, 2014-09-26 18:27

ISBN: 0990444422


This book is quite readable, which surprises me for the relatively dry topic. Whilst obviously not everyone will agree with the author's thesis, it is clear that IBM hasn't been managed for long term success in a long time and there are a lot of very unhappy employees. The book is an interesting perspective on a complicated problem.

Tags for this post: book robert_cringely ibm corporate decline

Related posts: Phones; Your first computer?; Advertising inside the firewall; Corporate networks; Loyalty; Dead IBM DeveloperWorks

Andrew Pollock: [life] Day 239: Cousin catch up, ice skating and a TM5 pickup

Fri, 2014-09-26 09:25

My sister, brother-in-law and niece are in town for a wedding on the weekend, so after collecting Zoe from the train station, we headed out to Mum and Dad's for the morning to see them all. My niece, Emma, has grown heaps since I last saw her. She and Zoe had some nice cuddles and played together really well.

I'd also promised Zoe that I'd take her ice skating, so that dovetailed pretty well with the visit, as instead of going to Acacia Ridge, we went to Boondall after lunch and skated there.

Zoe was very confident this time on the ice. She spent more time without her penguin than with it, so I think next time she'll be fine without one at all. She only had a couple of falls, the first one I think was a bit painful for her and a shock, but after that she was skating around really well. I think she was quite proud of herself.

My new Thermomix had been delivered to my Group Leader's house, so after that, we drove over there so I could collect it and get walked through how I should handle deliveries for customers. Zoe napped in the car on the way, and woke up without incident, despite it being a short nap. She had a nice time playing with Maria's youngest daughter while Maria walked me through everything, which was really lovely.

Time got away on me a bit, and we hurried home so that Sarah could pick Zoe up. I then got stuck into making some pizza sauce for our Riverfire pizza party on Saturday night.

Craige McWhirter: Enabling OpenStack Roles To Resize Volumes Via Policy

Thu, 2014-09-25 14:28

If you have volume-backed OpenStack instances, you may need to resize them, and in many cases you'll want unprivileged users to be able to do the resizing themselves. This post documents how to modify the Cinder policy so that tenant members assigned to a particular role have permission to resize volumes. It assumes that:

  • You've already created your OpenStack tenant.
  • You've already created your OpenStack user.
  • You know how to allocate roles to users in tenants.

Select the Role

You will need to create or identify a suitable role. In this example I'll use "Support".

Modify policy.json

Once the role has been created or identified, add these lines to the /etc/cinder/policy.json on the Cinder API server(s):

"context_is_support": [["role:Support"]], "admin_or_support": [["is_admin:True"], ["rule:context_is_support"]],

Modify "volume_extension:volume_admin_actions:reset_status" to use the new context:

"volume_extension:volume_admin_actions:reset_status": [["rule:admin_or_support"]], Add users to the role

Add users who need privileges to resize volumes to the "Support" role in their tenant.

The users you have added to the "Support" role should now be able to resize volumes.
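One caveat worth remembering: a hand-edited policy.json with a stray trailing comma or unbalanced brace will break the Cinder API. A quick way to catch this is to confirm the file still parses as JSON and contains the new rules. The fragment below is a hypothetical cut-down policy used only to demonstrate the check; against a real deployment you would run json.load() over /etc/cinder/policy.json itself:

```python
import json

# Hypothetical minimal policy containing the rules added above.
policy_text = """
{
    "context_is_support": [["role:Support"]],
    "admin_or_support": [["is_admin:True"], ["rule:context_is_support"]],
    "volume_extension:volume_admin_actions:reset_status": [["rule:admin_or_support"]]
}
"""

policy = json.loads(policy_text)  # raises ValueError on malformed JSON

# Confirm the new rules are actually present.
for rule in ("context_is_support", "admin_or_support",
             "volume_extension:volume_admin_actions:reset_status"):
    assert rule in policy, "missing rule: %s" % rule

print("policy fragment parses cleanly")
```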