Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Ben Martin: A new libferris is coming! 2.0.x

Sun, 2018-06-10 17:11
A while back I ported most of the libferris suite over to using boost for smart pointers and for signals. The latter was not such a problem, but there were always some fringe cases with the former, and this led to a delay in releasing it because there were some known issues.

I have moved that code into a branch locally and reverted to using the Modern C++ Loki library for intrusive reference counting and sigc++. I imported my old test suite into the main libferris repo and will flesh that out over time.

I might do a 2.0.0 or 1.9.9 release soonish so that the entire stack is out there. As this retains the main memory management code that has been working fine for the last 10 years, this shouldn't be any more unstable than it was before.

I was tempted to use travis ci for testing but will likely move to using a local VM. Virtualization has gotten much more convenient and I'm happy to set up a local test VM for this task, which also breaks a dependency on companies that really doesn't need to be there. Yes, I will host releases and a copy of git in some place like github or gitlab or whatever to make that distribution more convenient. On the other hand, anyone who feels the desire can run the test suite, which will be in the main libferris distro.

So after this next release I will slowly, at leisure, work to flesh out the test suite and fix the issues I find by running it over time. This gives a much more incremental development cycle, which will hopefully be friendlier to the limited-time patches that I throw at the project.

One upside of being fully at the mercy of my time is that the project is less likely to die or be taken over by a company and led in an unnatural direction. The downside is that it relies on my free time, which is split over robotics, cnc, and other things as well as libferris.

As some have mentioned, a flatpak or docker image for libferris would be nice. Ironically this makes the whole thing a bit more like plan9, with a filesystem microkernel-like subsystem (container), than just running it natively through rpm or deb, but whatever makes it easier.

OpenSTEM: This Week in Australian History

Fri, 2018-06-08 17:05
Today we introduce a new category for OpenSTEM® Blog articles: “This Week in Australian History”. So many important dates in Australian history become forgotten over time that there seems to be a need to highlight some of them from time to time. For teachers of students from Foundation/Prep/Kindy to Year 6 looking for […]

Matthew Oliver: Keystone Federated Swift – Separate Clusters + Container Sync

Fri, 2018-06-08 15:04

This is the third post in my Keystone Federated Swift series. To bounce back to the start, you can visit the first post.

Separate Clusters + Container Sync

The idea with this topology is to deploy each of your federated OpenStack clusters with its own separate Swift cluster, and then use another Swift feature, container sync, to push objects you create in one federated environment to another.

In this case the keystone servers are federated. A very similar topology could use a single global Swift cluster where each proxy only talks to a single region’s keystone. That would mean a user visiting a different region would authenticate via federation and be able to use the Swift cluster, but under a different account name. In both cases container sync can be used to synchronise the objects, say from the federated account to the original account, because container sync can synchronise containers both between separate clusters and within the same one.


Setting up container sync

Setting up container sync is pretty straightforward, and it is also well documented. At a high level it goes like this. Firstly you need to set up a trust between the different clusters. This is achieved by creating a container-sync-realms.conf file; the online example is:

[realm1]
key = realm1key
key2 = realm1key2
cluster_clustername1 = https://host1/v1/
cluster_clustername2 = https://host2/v1/

[realm2]
key = realm2key
key2 = realm2key2
cluster_clustername3 = https://host3/v1/
cluster_clustername4 = https://host4/v1/


Each realm is a separate trust, and you can have as many clusters in a realm as you want, so as you can see you can build up different realms. In our example we’d only need 1 realm, and let’s use some better names.

[MyRealm]
key = someawesomekey
key2 = anotherkey
cluster_blue = https://blueproxyvip/v1
cluster_green = https://greenproxyvip/v1

NOTE: there is nothing stopping you from defining only 1 cluster, as you can use container sync within a cluster, or from adding more clusters to a single realm.


Now in our example both the green and blue clusters need to have the MyRealm realm defined in their /etc/swift/container-sync-realms.conf file. The 2 keys are there so you can do key rotation. These keys should be kept secret, as they are used to establish trust between the clusters.


The next step is to make sure you have the container_sync middleware in your proxy pipeline. There are 2 parts to container sync: the backend daemon that periodically checks containers for new objects and sends changes to the other cluster, and the middleware that is used to authenticate requests sent by the container sync daemons of other clusters. We tend to place the container_sync middleware before (to the left of) any authentication middleware.
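For example, the relevant bits of a proxy-server.conf might look something like the sketch below. Treat it as an illustration rather than a drop-in config: the exact pipeline will vary with your deployment, and the only point here is that container_sync sits before the auth middlewares (authtoken and keystoneauth in this hypothetical pipeline):

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth copy slo dlo proxy-logging proxy-server

[filter:container_sync]
use = egg:swift#container_sync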


The last step is to tell container sync which containers to keep in sync. This is all done via container metadata, which is controlled by the user. Let’s assume we have 2 accounts, AUTH_matt on the blue cluster and AUTH_federatedmatt on the green, and we want to sync a container called mycontainer. Note, the containers don’t have to have the same name. We’d start by making sure the 2 containers have the same container sync key, which is defined by the owner of the container; this isn’t the realm key, but it works in a similar way. Then we tell 1 container to sync with the other.
NOTE: you can make the relationship go both ways.


Let’s use curl first:

$ curl -i -X POST -H 'X-Auth-Token: <token>' \
-H 'X-Container-Sync-Key: secret' \
'http://blueproxyvip/v1/AUTH_matt/mycontainer'

$ curl -i -X POST -H 'X-Auth-Token: <token>' \
-H 'X-Container-Sync-Key: secret' \
-H 'X-Container-Sync-To: //MyRealm/blue/AUTH_matt/mycontainer' \
'http://greenproxyvip/v1/AUTH_federatedmatt/mycontainer'

Or via the swift client, noting that you need to change identities to set the metadata on each account.

# To the blue cluster for AUTH_matt
$ swift post -k 'secret' mycontainer

# To the green cluster for AUTH_federatedmatt
$ swift post \
  -t '//MyRealm/blue/AUTH_matt/mycontainer' \
  -k 'secret' mycontainer

In a federated environment, you’d just need to set a key on each of the containers you want to work on while you’re away (or all of them, I guess). Then when you visit the other side you can just add the sync-to metadata when you create containers there. Likewise, if you knew the name of your account on the other side, you could set up a sync-to if you needed to work on something over there.


To authenticate, container sync generates and compares an HMAC on both sides, where the HMAC covers both the realm and container keys, the verb, the object name, etc.
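Conceptually it’s something like the hypothetical sketch below (this is not Swift’s actual code; the exact fields and their ordering live in the Swift source):

import hmac
from hashlib import sha1

# Hypothetical illustration of the container sync trust model: the
# signature mixes the shared realm key, the per-container sync key and
# the request details, so a valid HMAC proves knowledge of both keys.
def sync_signature(realm_key, user_key, verb, path, timestamp, nonce):
    message = '\n'.join([verb, path, str(timestamp), nonce, user_key])
    return hmac.new(realm_key.encode(), message.encode(), sha1).hexdigest()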


The obvious next question is: great, but do I need to know the name of each cluster? Well, yes, but you can simply find them by asking Swift via the info call. This is done by hitting the /info endpoint with whatever tool you want. If you’re using the swift client, then it’s:

$ swift info
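Or hit the endpoint directly with curl (assuming /info is exposed on your proxies; the realms and their clusters should show up under a container_sync key in the JSON):

$ curl -s http://blueproxyvip/info | python -m json.tool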

Pros and cons

Pros

The biggest pro of this approach is that you don’t have to do anything special. Whether you have 1 Swift cluster or a bunch throughout your federated environments, all you need to do is set up a container sync trust between them and the users can sync between themselves.


Cons

There are a few I can think of off the top of my head:

  1. You need to manually set the metadata on each container, which might be fine if it’s just you, but if you have an app or something it’s one more thing you need to think about.
  2. Container sync will move the data periodically, so you may not see it in the other container straight away.
  3. More storage is used. Whether it’s 1 cluster or many, the objects will exist in both accounts.

Conclusion

This is an interesting approach, but I think it would be much better to have access to the same set of objects everywhere I go and have it just work. I’ll talk about how to go about that in the next post, as well as 1 specific way I got it working as a POC.


Container sync is pretty cool. SwiftStack have recently open sourced another tool, 1space, that can do something similar. 1space looks awesome but I haven’t had a chance to play with it yet, so I’ll add it to the list of Swift things I want to play with whenever I get a chance.

Gary Pendergast: Podcasting: Tavern Style

Thu, 2018-06-07 15:03

Earlier today, I joined JJJ and Jeff on episode 319 of the WP Tavern’s WordPress Weekly podcast!

We chatted about GitHub being acquired by Microsoft (and what that might mean for the future of WordPress using Trac), the state of Gutenberg, WordCamp Europe, as well as getting into a bit of the philosophy that drives WordPress’ auto-update system.

Finally, Jeff was kind enough to name me a Friend of the Show, despite my previous appearance technically not being a WordPress Weekly episode.

Russell Coker: BTRFS and SE Linux

Wed, 2018-06-06 23:02

I’ve had problems with systems running SE Linux on BTRFS losing the XATTRs used for storing the SE Linux file labels after a power outage.

Here is the link to the patch that fixes this [1]. Thanks to Hans van Kranenburg and Holger Hoffstätte for the information about this patch which was already included in kernel 4.16.11. That was uploaded to Debian on the 27th of May and got into testing about the time that my message about this issue got to the SE Linux list (which was a couple of days before I sent it to the BTRFS developers).

The kernel from Debian/Stable still has the issue. So using a testing kernel might be a good option to deal with this problem at the moment.

Below is the information on reproducing this problem. It may be useful for people who want to reproduce similar problems. Also, all sysadmins should know about “reboot -nffd”: if something really goes wrong with your kernel you may need to run it immediately to prevent corrupted data being written to your disks.

The command “reboot -nffd” (kernel reboot without flushing kernel buffers or writing status) when run on a BTRFS system with SE Linux will often result in /var/log/audit/audit.log being unlabeled. It also results in some systemd-journald files like /var/log/journal/c195779d29154ed8bcb4e8444c4a1728/system.journal being unlabeled, but that is rarer. I think that the same problem afflicts both systemd-journald and auditd, but it’s a race condition that on my systems (both production and test) is more likely to affect auditd.

root@stretch:/# xattr -l /var/log/audit/audit.log
security.selinux:
0000   73 79 73 74 65 6D 5F 75 3A 6F 62 6A 65 63 74 5F    system_u:object_
0010   72 3A 61 75 64 69 74 64 5F 6C 6F 67 5F 74 3A 73    r:auditd_log_t:s
0020   30 00                                               0.

SE Linux uses the xattr “security.selinux”; you can see what it’s doing with xattr(1), but generally using “ls -Z” is easiest.

If this issue just affected “reboot -nffd” then a solution might be to just not run that command. However this affects systems after a power outage.

I have reproduced this bug with kernel 4.9.0-6-amd64 (the latest security update for Debian/Stretch which is the latest supported release of Debian). I have also reproduced it in an identical manner with kernel 4.16.0-1-amd64 (the latest from Debian/Unstable). For testing I reproduced this with a 4G filesystem in a VM, but in production it has happened on BTRFS RAID-1 arrays, both SSD and HDD.

#!/bin/bash
set -e
COUNT=$(ps aux|grep [s]bin/auditd|wc -l)
date
if [ "$COUNT" = "1" ]; then
  echo "all good"
else
  echo "failed"
  exit 1
fi

Firstly, the above is the script /usr/local/sbin/testit. I test for auditd running because it aborts if the context on its log file is wrong: when SE Linux is in enforcing mode, an incorrect or missing label on the audit.log file causes auditd to abort.

root@stretch:~# ls -liZ /var/log/audit/audit.log
37952 -rw-------. 1 root root system_u:object_r:auditd_log_t:s0 4385230 Jun 1 12:23 /var/log/audit/audit.log

Above is before I do the tests.

while ssh stretch /usr/local/sbin/testit ; do
  ssh stretch "reboot -nffd" > /dev/null 2>&1 &
  sleep 20
done

Above is the shell code I run to do the tests. Note that the VM in question runs on SSD storage which is why it can consistently boot in less than 20 seconds.

Fri 1 Jun 12:26:13 UTC 2018
all good
Fri 1 Jun 12:26:33 UTC 2018
failed

Above is the output from the shell code in question. After the first reboot it fails. The probability of failure on my test system is greater than 50%.

root@stretch:~# ls -liZ /var/log/audit/audit.log
37952 -rw-------. 1 root root system_u:object_r:unlabeled_t:s0 4396803 Jun 1 12:26 /var/log/audit/audit.log

Now the result. Note that the inode has not changed. I could understand a newly created file missing an xattr, but this is an existing file which shouldn’t have had its xattr changed. But somehow it gets corrupted.

The first possibility I considered was that SE Linux code might be at fault. I asked on the SE Linux mailing list (I haven’t been involved in SE Linux kernel code for about 15 years) and was informed that this isn’t likely at all. There have been no problems like this reported with other filesystems.


Michael Still: Mirroring all your repos from github

Wed, 2018-06-06 09:00

So let me be clear here, I don’t think it’s a bad thing that Microsoft bought github. No one is forcing you to use their services; in fact, they make it trivial to stop using them. So what’s the big deal?

I’ve posted about a few git mirror scripts I run at home recently: one to mirror gerrit repos; and one to mirror arbitrary github users.

It was therefore trivial to whip up a slightly nicer script intended to help you forklift your repos out of github if you’re truly concerned. It’s posted on github now (irony intended).

Now you can just do something like:

$ pip install -U -r requirements.txt
$ python download.py --github_token=foo --username=mikalstill
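If you’re curious what the core of such a script looks like, here’s a minimal hypothetical sketch of the same idea (it is not the actual download.py, which also handles authentication and pagination):

#!/usr/bin/env python3
# Hypothetical sketch: mirror the public repos of a single github user.

import json
import subprocess
import urllib.request

username = 'mikalstill'

# Fetch the first page of the user's public repos from the github API.
url = 'https://api.github.com/users/%s/repos?per_page=100' % username
with urllib.request.urlopen(url) as response:
    repos = json.load(response)

for repo in repos:
    # A bare mirror clone copies every ref, which is what you want for
    # a backup you might later push somewhere else.
    subprocess.check_call(['git', 'clone', '--mirror', repo['clone_url']])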

I intend to add support for auto-creating and importing gitlab repos into the script, but haven’t gotten around to that yet. Pull requests welcome.


Simon Lyall: Audiobooks – May 2018

Tue, 2018-06-05 11:03

Ramble On by Sinclair McKay

The history of walking in Britain and some of the author’s experiences. A pleasant listen. 7/10

Inherit the Stars by James P. Hogan

Very hard-core Sci Fi (all tech, no character) about a 50,000-year-old astronaut’s body being found on the moon. Dated in places (everybody smokes) but I liked it. 7/10

Sapiens: A Brief History of Humankind by Yuval Noah Harari

A good overview of pre-history of human species plus an overview of central features of cultures (government, religion, money, etc). Interesting throughout. 9/10

The Adventures of Sherlock Holmes II by Sir Arthur Conan Doyle, read by David Timson

Another four Holmes stories. I’m pretty happy with Timson’s version. Each is only about an hour long. 7/10

The Happy Traveler: Unpacking the Secrets of Better Vacations by Jaime Kurtz

Written by a “happiness researcher” rather than a travel expert. A bit different from what I expected. Lots about structuring your trips to maximize your memories. 7/10

Mrs. Kennedy and Me: An Intimate Memoir by Clint Hill with Lisa McCubbin

I’ve read several of Hill’s books about his time in the US Secret Service; this overlaps a lot with those but adds some extra Jackie-orientated material. I’d recommend reading the others first. 7/10

The Lost Continent: Travels in Small Town America by Bill Bryson

The author drives through small-town America making funny observations. Just 3 hours long so good bang for buck. Almost 30 years old so feels a little dated. 7/10

A Splendid Exchange: How Trade Shaped the World by William J. Bernstein

A pretty good overview of the growth of trade. Concentrates on the evolution of routes between Asia and Europe. Only brief coverage post-1945. 7/10

The Adventures of Sherlock Holmes III by Sir Arthur Conan Doyle

The Adventure of the Cardboard Box; The Musgrave Ritual; The Man with the Twisted Lip; The Adventure of the Blue Carbuncle. All well done. 7/10

The Gentle Giants of Ganymede (Giants Series, Book 2) by James P. Hogan

Almost as hard-core as the previous book but with less of a central mystery. Worth reading if you like the 1st in the series. 7/10

An Army at Dawn: The War in North Africa, 1942-1943 – The Liberation Trilogy, Book 1 by Rick Atkinson

I didn’t like this as much as I expected or as much as similar books. Can’t quite place the problem though. Perhaps it works better in print. 7/10

The Adventures of Sherlock Holmes IV by Sir Arthur Conan Doyle

A Case of Identity; The Crooked Man; The Naval Treaty; The Greek Interpreter. I’m happy with Timson’s version. 7/10

Michael Still: Quick note: pre-pulling docker images for ONAP OOM installs

Tue, 2018-06-05 11:00

Writing this down here because it took me a while to figure out for myself…

ONAP OOM deploys ONAP using Kubernetes, which effectively means Docker images at the moment. It needs to fetch a lot of Docker images, so there is a convenient script provided to pre-pull those images to make install faster and more reliable.

The script in the OOM codebase isn’t very flexible, so Jira issue OOM-655 was filed for a better script. The script was covered in code review 30169. Disappointingly, the code reviewer there doesn’t seem to have actually read the jira issue or the code before abandoning the patch — which isn’t very impressive.

So how do you get the nicer pre-pull script?

It’s actually not too hard once you know the review ID. Just do this inside your OOM git clone:

$ git review -d 30169

You might be prompted for your gerrit details because the ONAP gerrit requires login. Once git review has run, you’ll be left sitting in a branch from when the review was uploaded that includes the script:

$ git branch
  master
* review/james_forsyth/30169

Now just rebase that to bring it in line with master and get on with your life:

$ git rebase -i origin
Successfully rebased and updated refs/heads/review/james_forsyth/30169.

You’re welcome. I’d like to see the ONAP community take code reviews a bit more seriously, but ONAP seems super corporate (even compared to OpenStack), so I’m not surprised that they haven’t done a very good job here.
