Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Sridhar Dhanapalan: XO-1 Training Pack

Fri, 2016-07-01 19:02

Our One Education programme is growing like crazy, and many existing deployments are showing interest. We wanted to give them a choice of using their own XOs to participate in the teacher training, rather than requiring them to purchase new hardware. Many have developer-locked XO-1s, necessitating a different approach than our official One Education OS.

The solution is our XO-1 Training Pack. This is a reconfiguration of OLPC OS 10.1.3 to be largely consistent with our 10.1.3-au release. It has been packaged for easy installation.

Note that this is not a formal One Education OS release, and hence is not officially supported by OLPC Australia.

If you’d like to take part in the One Education programme, or have questions, use the contact form on the front page.

Update: We have a list of improvements in the 10.1.3-au builds over the OLPC OS 10.1.3 release. Note that some features are not available in the XO-1 Training Pack owing to the smaller storage space available on XO-1 hardware. The release notes have been updated with more detail.

Update: More information on our One News site.

Sridhar Dhanapalan: OLPC Australia Education Newsletter, Edition 9

Fri, 2016-07-01 19:02

Edition 9 of the OLPC Australia Education Newsletter is now available.

In this edition, we provide a few classroom ideas for mathematics, profile the Jigsaw activity, demystify the Home views in Sugar and hear about the OLPC journey of Girraween Primary School.

To subscribe to receive future updates, send an e-mail to education-newsletter+subscribe@laptop.org.au.

Maxim Zakharov: Apache + YAJL

Fri, 2016-07-01 13:04

//github.com/Maxime2/stan-challenge - here on GitHub is my answer to the Stan code challenge. It is an example of how one can use a SAX-like streaming parser inside an Apache module to process JSON with minimal delay.

A custom-made Apache module gives you some savings on request processing time by avoiding the invocation of an interpreter (such as PHP, Python or Go) to process the request. The stream parser allows processing of the JSON to start as soon as the first buffer is filled with data, while the whole request is still in transmission. And again, as it is an Apache module, the response starts being constructed while the request is still being processed (and still transmitting).


sthbrx - a POWER technical blog: A Taste of IBM

Fri, 2016-07-01 11:45

As a hobbyist programmer and Linux user, I was pretty stoked to be able to experience real work in the IT field that interests me most, Linux. With a mainly disconnected understanding of computer hardware and software, I braced myself to entirely relearn everything and anything I thought I knew. Furthermore, I worried that my usefulness in a world of maintainers, developers and testers would not be enough to provide any real contribution to the company. In actual fact however, the employees at OzLabs (IBM ADL) put a really great effort into making use of my existing skills, were attentive to my current knowledge and just filled in the gaps! The knowledge they've given me is practical, interlinked with hardware and provided me with the foot-up that I'd been itching for to establish my own portfolio as a programmer. I was both honoured and astonished by their dedication to helping me make a truly meaningful contribution!

On applying for the placement, I listed my skills and interests. Having a Mathematics and Science background, I listed among my greatest interests the development of scientific simulation and graphics using libraries such as Python's matplotlib and R. By the first day they had me at work, researching and implementing a routine in R that would qualitatively model the ability of a system to perform common tasks - a benchmark. A series of these microbenchmarks were made; I was in my element and actually able to contribute to a corporation much larger than I could imagine. The team at IBM reinforced my knowledge from the ground up, introducing the rigorous hardware and corporate elements at a level I was comfortable with.

I would say that my greatest single piece of take-home knowledge over the two weeks was knowledge of the Linux Kernel project, Git and GitHub. Having met the arch/powerpc and linux-next maintainers in person placed the Linux and Open Source development cycle in an entirely new perspective. I was introduced to the world of GitHub, and thanks to a few rigorous lessons of Git, I now have access to tools that empower me to safely and efficiently write code, and to build a public portfolio I can be proud of. Most members of the office donated their time to instruct me on all fronts, whether to do with career paths, programming expertise or conceptual knowledge, and the rest were all very good for a chat.

Approaching the tail-end of Year Twelve, I was blessed with some really good feedback and recommendations regarding further study. If during the two weeks I had any query about anything, ranging from work-life to programming expertise, even to which code editor I should use (a source of much contention), the people in the office were very happy to help me. Several employees donated their time to teach me really intensive and long lessons on software development concepts, including (but not limited to!) a thorough and helpful lesson on Git that was just on my level of understanding.

Working at IBM these past two weeks has not only bridged the gap between my hobby and my professional prospects, but more importantly established friendships with professionals in the field of Software Development. Without a doubt this really great experience of an environment that rewards my enthusiasm will fondly stay in my mind as I enter the next chapter of my life!

Tridge on UAVs: Using X-Plane 10 with ArduPilot SITL

Fri, 2016-07-01 10:59

ArduPilot has been able to use X-Plane as a HIL (hardware in the loop) backend for quite some time, but it never worked particularly well as the limitations of the USB interface to the hardware prevented good sensor timings.

We have recently added the ability to use X-Plane 10 as a SITL backend, which works much better. The SITL (software in the loop) system runs ArduPilot natively on your desktop machine, and talks to X-Plane directly using UDP packets.

The above video demonstrates flying a Boeing 747-400 in X-Plane 10 using ArduPilot SITL. It flies nicely, and does an automatic takeoff and landing quite well. You can use almost any of the fixed wing aircraft in X-Plane with ArduPilot SITL, which opens up a whole world of simulation to explore. Many people create models of their own aircraft in order to test out how they will fly or to test them in conditions (such as very high wind) that may be dangerous to test with a real model.

I have written up some documentation on how to use X-Plane 10 with SITL to help people get started. Right now it only works with X-Plane 10 although I may add support for X-Plane 9 in the future.

Michael Oborne has added nice support for using X-Plane with SITL in the latest beta of MissionPlanner, and does nightly builds of the SITL binary for Windows. That avoids the need to build ArduPilot yourself if you just want to fly the standard code rather than modify it.

Limitations

There are some limitations to the X-Plane SITL backend. First off, X-Plane has quite slow network support. On my machine I typically get a sensor data rate of around 27Hz, which is far below the 1200 Hz we normally use for simulation. To overcome this the ArduPilot SITL code does sensor extrapolation to bring the rate up to around 900Hz, which is plenty for SITL to run. That extrapolation introduces small errors which can make the ArduPilot EKF state estimator unhappy. To avoid that problem we run with "EKF type 10" which is a fake AHRS interface that gets all state information directly from the simulator. That means you can't use the X-Plane SITL backend to test EKF settings.

The next limitation is that the simulation fidelity depends somewhat on the CPU load on your machine. That is an unfortunate consequence of X-Plane not supporting lock-step scheduling. So you may notice that simulated aircraft on your machine do not fly identically to the same aircraft on someone else's machine. You can reduce this effect by lowering the graphics settings in X-Plane.

We can currently only get joystick input from X-Plane for aileron, elevator, rudder and throttle. It would be nice to support flight mode switches, flaps and other controls that are normally used with ArduPilot. That is probably possible, but isn't implemented yet. So if you want a full controller you can connect a joystick to SITL directly rather than via X-Plane (for example using the MissionPlanner joystick module or the mavproxy joystick module).

Finally, we only support fixed wing aircraft in X-Plane at the moment. I have been able to fly a helicopter, but I needed to give manual collective control from a joystick as we don't yet have a way to provide collective pitch input over the X-Plane data interface.

Manned Aircraft and ArduPilot

Please don't assume that because ArduPilot can fly full sized aircraft in a simulator that you should use ArduPilot to fly real manned aircraft. ArduPilot is not suitable for manned applications and the development team would appreciate it if you did not try to use it for manned aircraft.

Happy Flying

I hope you enjoy flying X-Plane 10 with ArduPilot SITL!

Russell Coker: Coalitions

Fri, 2016-07-01 03:03

In Australia we are about to have a federal election, so we inevitably have a lot of stupid commentary and propaganda about politics.

One thing that always annoys me is the claim that we shouldn't have small parties. We have two large parties, Liberal (right-wing, somewhat between the Democrats and Republicans in the US) and Labor, which is somewhat similar to the Democrats in the US. In the US the first-past-the-post voting system means that votes for smaller parties usually don't affect the outcome. In Australia we have Instant Runoff Voting (sometimes known as "the Australian Ballot"), which has the side effect of encouraging votes for small parties.

The Liberal party almost never wins enough seats to form government on its own; it forms a coalition with the National party. Election campaigns are often based on the term "The Coalition" being used to describe a Liberal-National coalition, and the expected result if "The Coalition" wins the election is that the leader of the Liberal party will be Prime Minister and the leader of the National party will be the Deputy Prime Minister. Liberal party representatives and supporters often try to convince people that they shouldn't vote for small parties and that small parties are somehow "undemocratic", seemingly unaware of the irony of advocating for "The Coalition" but opposing the idea of a coalition.

If the Liberal and Labor parties wanted to form a coalition they could do so in any election where no party has a clear majority, and do it without even needing the National party. Some people claim that it’s best to have the major parties take turns in having full control of the government without having to make a deal with smaller parties and independent candidates but that’s obviously a bogus claim. The reason we have Labor allying with the Greens and independents is that the Liberal party opposes them at every turn and the Liberal party has a lot of unpalatable policies that make alliances difficult.

One thing that would be a good development in Australian politics is to have the National party actually represent rural voters rather than big corporations. Liberal policies on mining are always opposed to the best interests of farmers and the Liberal policies on trade aren’t much better. If “The Coalition” wins the election then the National party could insist on a better deal for farmers in exchange for their continued support of Liberal policies.

If Labor wins more seats than “The Coalition” but not enough to win government directly then a National-Labor coalition is something that could work. I think that the traditional interest of Labor in representing workers and the National party in representing farmers have significant overlap. The people who whinge about a possible Green-Labor alliance should explain why they aren’t advocating a National-Labor alliance. I think that the Labor party would rather make a deal with the National party, it’s just a question of whether the National party is going to do what it takes to help farmers. They could make the position of Deputy Prime Minister part of the deal so the leader of the National party won’t miss out.

Related posts:

  1. praying for rain Paul Dwerryhouse posted a comment about the Prime Minister asking...
  2. The 2013 Federal Election Seven hours ago I was handing out how to...
  3. Victorian State Election Election Tomorrow On Saturday we will have a Victorian state...

Binh Nguyen: Are we now the USSR?, Brexit, and More

Fri, 2016-07-01 02:44
Look at what's happened and you'll see the parallels: - in many parts of the world the past and current social and economic policies on offer basically aren't delivering. Clear that there is a democratic deficit. The policies at the top aren't dealing with enough of the population's problems CrossTalk BREXIT - GOAL! (Recorded 24 June) https://www.youtube.com/watch?v=kgKIc0bobO4 The Schulz Brexit

Linux Users of Victoria (LUV) Announce: LUV Main July 2016 Meeting: ICT in Education / To Search Perchance to Find

Wed, 2016-06-29 23:03
Start: Jul 5 2016 18:30 End: Jul 5 2016 20:30 Location:

6th Floor, 200 Victoria St. Carlton VIC 3053

Link:  http://luv.asn.au/meetings/map

Speakers:

  • Dr Gill Lunniss and Daniel Jitnah, ICT in Education
  • Tim Baldwin, To Search Perchance to Find: Improving Information Access over
    Technical Web User Forums

200 Victoria St. Carlton VIC 3053

Late arrivals, please call (0490) 049 589 for access to the venue.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat and Infoxchange for their help in obtaining the meeting venues.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Linux Users of Victoria (LUV) Announce: LUV Beginners July Meeting: GNU COBOL

Wed, 2016-06-29 23:03
Start: Jul 16 2016 12:30 End: Jul 16 2016 16:30 Location:

Infoxchange, 33 Elizabeth St. Richmond

Link:  http://luv.asn.au/meetings/map

COBOL is a business-oriented programming language that has been in use since 1959, making it one of the world's oldest programming languages. Despite being much criticised (and for good reasons) it is still a major programming language in the financial sector, although the number of experienced programmers is declining.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)

Late arrivals, please call (0490) 049 589 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


sthbrx - a POWER technical blog: Kernel interfaces and vDSO test

Fri, 2016-06-24 16:30
Getting Suckered

Last week a colleague of mine came up to me, showed me some of the vDSO on PowerPC and asked why on earth it fails vdsotest. I should come clean at this point and admit that I knew very little about the vDSO and hadn't heard of vdsotest. I had to admit to this colleague that I had no idea: everything looked super sane.

Unfortunately (for me) I got hooked: vdsotest was saying it was getting '22' instead of '-1' in the case where the vDSO would call into the kernel. It plagued me all night; 22 is so suspicious. Right before I got to work the next morning I had an epiphany: "I bet 22 is EINVAL".

Virtual Dynamically linked Shared Objects

The vDSO is a mechanism to expose some kernel functionality to userspace to avoid the cost of a context switch into kernel mode. This is a great feat of engineering: avoiding the context switch can be a dramatic speedup for userspace code. Obviously not all kernel functionality can be placed in userspace, and even for the functionality which can, there may be edge cases in which the vDSO needs to ask the kernel.

Who tests the vDSO? The portion that lies exclusively in userspace will escape all testing of the syscall interface, which is really what kernel developers are so focused on not breaking. Enter Nathan Lynch, who has done some great work with vdsotest!

The Kernel

When the vDSO can't get the correct value without the kernel, it simply calls into the kernel, because the kernel is the definitive reference for every syscall. On PowerPC something like this happens (sorry, our vDSO is 100% asm) [1]:

/*
 * Exact prototype of clock_gettime()
 *
 * int __kernel_clock_gettime(clockid_t clock_id, struct timespec *tp);
 *
 */
V_FUNCTION_BEGIN(__kernel_clock_gettime)
  .cfi_startproc
	/* Check for supported clock IDs */
	cmpwi	cr0,r3,CLOCK_REALTIME
	cmpwi	cr1,r3,CLOCK_MONOTONIC
	cror	cr0*4+eq,cr0*4+eq,cr1*4+eq
	bne	cr0,99f

	/* [snip] */

	/*
	 * syscall fallback
	 */
99:
	li	r0,__NR_clock_gettime
	sc
	blr

For those not familiar, this couldn't be simpler: the start checks whether it is a clock ID that the vDSO can handle, and if not it jumps to the 99 label. From there it simply loads the syscall number, jumps into the kernel and branches to the link register, aka 'return'. In this case the 'return' goes back to the userspace code which called the vDSO function.

Wait, having the vDSO call into the kernel gets us the wrong result? Of course it does: vdsotest assumes a C ABI with return values and errno, but the kernel doesn't do that; the kernel ABI is different. How does this even work on x86? Ohhhhh, vdsotest does this [2]:

static inline void record_syscall_result(struct syscall_result *res,
					 int sr_ret, int sr_errno)
{
	/* Calling the vDSO directly instead of through libc can lead to:
	 * - The vDSO code punts to the kernel (e.g. unrecognized clock id).
	 * - The kernel returns an error (e.g. -22 (-EINVAL))
	 * So we need to recognize this situation and fix things up.
	 * Fortunately we're dealing only with syscalls that return -ve values
	 * on error.
	 */
	if (sr_ret < 0 && sr_errno == 0) {
		sr_errno = -sr_ret;
		sr_ret = -1;
	}

	*res = (struct syscall_result) {
		.sr_ret = sr_ret,
		.sr_errno = sr_errno,
	};
}

That little hack isn't working on PowerPC and here's why:

The kernel puts the return value in the ABI-specified return register (r3) and signals errors using a condition register bit (condition register field 0, the SO bit), so unlike on x86, the return value isn't negative on error. To make matters worse, the condition register is very difficult to access from C. Depending on your definition of 'access from C' you might consider it impossible, so a fixup like that just isn't possible.

Lessons learnt
  • vDSO-supplied functions aren't quite the same as their libc counterparts. Unless you have a very good reason (and, to be fair, vdsotest does have a very good reason), always access the vDSO through libc
  • Kernel interfaces aren't C interfaces, yep, they're close but they aren't the same
  • 22 is in fact EINVAL
  • Different architectures are... Different!
  • Variety is the spice of life

P.S. I have a hacky patch awaiting review.

  1. arch/powerpc/kernel/vdso64/gettimeofday.S 

  2. src/vdsotest.h 

Ian Wienand: Zuul and Ansible in OpenStack CI

Wed, 2016-06-22 08:16

In a prior post, I gave an overview of the OpenStack CI system and how jobs were started. In that I said

(It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

Well, some recent security issues with Jenkins and other changes have led to the roll-out of what is being called Zuul 2.5, which has indeed removed Jenkins and makes extensive use of Ansible as the basis for running CI tests in OpenStack. Since I already had the diagram, it seemed worth updating it for the new reality.

OpenStack CI Overview

While the previous post really focused on the image-building components of the OpenStack CI system, the overview here is the same but more focused on the launchers that run the tests.

  1. The process starts when a developer uploads their code to gerrit via the git-review tool. There is no further action required on their behalf and the developer simply waits for results of their jobs.

  2. Gerrit provides a JSON-encoded "fire-hose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a launcher to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on.

  4. A group of Zuul launchers are subscribed to gearman as workers. It is these Zuul launchers that will consume the job requests from the queue and actually get the tests running. However, a launcher needs two things to be able to run a job — a job definition (what to actually do) and a worker node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files. The Zuul launcher knows how to process these files (with some help from Jenkins Job Builder, which despite the name is not outputting XML files for Jenkins to consume, but is being used to help parse templates and macros within the generically defined job definitions). Each Zuul launcher gets these definitions pushed to it constantly by Puppet, thus each launcher knows about all the jobs it can run automatically. Of course Zuul also knows about these same job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customized management tool called nodepool (you can see the details of this capacity at any given time by checking the nodepool configuration). Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at node-type of jobs in the queue (i.e. what platform the job has requested to run on) and decides what types of nodes need to start and which cloud providers have capacity to satisfy demand.

    Nodepool will start fresh virtual machines (from images built daily as described in the prior post), monitor their start-up and, when they're ready, put a new "assignment job" back into gearman with the details of the fresh node. One of the active Zuul launchers will pick up this assignment job and register the new node to itself.

  6. At this point, the Zuul launcher has what it needs to actually get jobs started. With a fresh node registered to it and waiting for something to do, the Zuul launcher can advertise its ability to consume one of the waiting jobs from the gearman queue. For example, if a ubuntu-trusty node is provided to the Zuul launcher, the launcher can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty node type. If you're looking at the launcher code this is driven by the NodeWorker class — you can see this being created in response to an assignment via LaunchServer.assignNode.

    To actually run the job — where the "job hits the metal" as it were — the Zuul launcher will dynamically construct an Ansible playbook to run. This playbook is a concatenation of common setup and teardown operations along with the actual test scripts the job wants to run. Using Ansible to run the job means all the flexibility an orchestration tool provides is now available to the launcher. For example, there is a custom console streamer library that allows us to live-stream the console output for the job over a plain TCP connection, and there is the possibility to use projects like ARA for visualisation of CI runs. In the future, Ansible will allow for better coordination when running multiple-node testing jobs — after all, this is what orchestration tools such as Ansible are made for! While the Ansible run can be fairly heavyweight (especially when you're talking about launching thousands of jobs an hour), the system scales horizontally with more launchers able to consume more work easily.

    When checking your job results on logs.openstack.org you will see a _zuul_ansible directory now which contains copies of the inventory, playbooks and other related files that the launcher used to do the test run.

  7. Eventually, the test will finish. The Zuul launcher will put the result back into gearman, which Zuul will consume (log copying is interesting but a topic for another day). The testing node will be released back to nodepool, which destroys it and starts all over again — nodes are not reused and also have no sensitive details on them, as they are essentially publicly accessible. Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but that is also a topic for another day).
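As a rough sketch (with hypothetical paths and task names, not the launcher's actual output), the dynamically constructed playbook described in step 6 has roughly this shape:

```yaml
# Illustrative sketch only: the real launcher generates this dynamically,
# and the paths and task names here are hypothetical.
- hosts: node            # the fresh nodepool VM registered to this launcher
  tasks:
    # common setup prepended by the launcher
    - name: Copy job scripts to the worker node
      copy:
        src: /opt/zuul-launcher/scripts/
        dest: /usr/local/jenkins/

    # the test the job definition actually asked for
    - name: Run the job's test script
      shell: /usr/local/jenkins/run-tests.sh

    # common teardown appended by the launcher
    - name: Pull logs back for publishing
      synchronize:
        mode: pull
        src: /var/log/testrun/
        dest: /tmp/job-logs/
```

The concatenation is what makes this flexible: the setup and teardown sections stay common across thousands of jobs, while only the middle section varies per job definition.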

Work will continue within OpenStack Infrastructure to further enhance Zuul, including better support for multi-node jobs and "in-project" job definitions (similar to the https://travis-ci.org/ model); for full details see the spec.

Pia Waugh: Pia, Thomas and Little A’s Excellent Adventure – Week 1

Tue, 2016-06-21 11:01

We arrived in Auckland after a fairly difficult flight. Little A had a mild cold and did NOT cope with the cabin pressure well, so there was a lot of walking cuddles around the plane kitchen to not disturb other passengers. After a restful night we picked up our rental car, a roomy 4 wheel drive, and drove to Turangi, a beautiful scenic introduction to our 3 month adventure! Our plan is to spend 3 months in Turangi as a bit of a babymoon: to get to know little A as she goes through that lovely 6-9 month development stage which includes crawling, learning to eat and other fun stuff. We are also planning to catch a LOT of trout (and even keep some!), catch up on some studies and reading, and take the time to plan out the next chapter of our life. I'm also hoping to write a book if I can, but more on that later.

So each week we’ll blog some highlights! Photos will be added every few days to the flickr album.

Arrival

The weather in Turangi has been gorgeous all week. Sunny and much warmer than Canberra, but of course Thomas would rather have rain, as that would get the trout moving in the river. We are renting a 3 bedroom house with woodfire heating which is toasty warm and very comfortable. The only downside is that we have no internet at the house, and the data plan on my phone doesn't work at all there. So we are fairly offline, which has its pros and cons. Good for relaxing, reflection, studying, writing and planning; bad for Pia, who feels like she has lost a limb! Meanwhile, the local library has reasonable WiFi and we have become regular visitors.

Little A

Little A has made some new steps this week. She learned how to do raspberries, which she now does frequently. She also rolled over completely unassisted for the first time and spends a lot of time trying to roll more. Finally, she decided she wanted to start on solids. We know this because when Thomas was holding her whilst eating a banana, he turned away for a second to speak to me and she launched herself onto the banana, gumming furiously! So we have now tried some mashed potato, pumpkin and some water from the sippy cup. In all cases she insists on grabbing the spoon or sippy cup to feed herself.

Studies

Both of us are doing some extra studies whilst on this trip. I’m finishing off my degree this semester with a subject on policy and law, and another on white collar crime. Both are fascinating! Thomas is reading up on some areas of law he wants to brush up on for work and fun.

Book

My book preparations are going well, and I will be blogging about that in a few weeks once I get a bit more done. Basically I'm writing a book about the history and future of our species, focusing on the major philosophical and technological changes that have come and are coming, and the key things we need to carefully think about and change if we are to take advantage of how the world itself has fundamentally changed. It is a culmination of things I've been thinking about and exploring for the last 15 years, so I hope it proves useful in making a better world for everyone.

Fishing

Part of the reason we have based this little sabbatical at Turangi is that it arguably has the best trout fishing in the world, and it is one of Thomas' favourite places. It is a quaint and sleepy little country town with everything we need. The season hasn't really kicked off yet and the fish aren't running upstream, but we still netted 12 fish this week, of which we kept one Rainbow Trout for a delicious meal of Manuka-smoked fish.

Stewart Smith: Building OPAL firmware for POWER9

Mon, 2016-06-20 13:00

Recently, we merged into the op-build project (the build scripts for OpenPOWER Firmware) a defconfig for building OPAL for (certain) POWER9 simulators. I won’t bother linking over to articles on the POWER9 chip or schedule (there’s search engines for that), but with this commit – if you happen to be able to get your hands on a POWER9 simulator, you can now boot to the petitboot bootloader on it!

We’re using upstream Linux 4.7.0-rc3 and upstream skiboot (master), so all of this code is already upstream!

Now, by no means is this complete. There’s some fairly fundamental things that are missing (e.g. PCI) – but how many other platforms can you build open source firmware for before you can even get your hands on a simulator?

Binh Nguyen: Religious Conspiracies, Is Capitalism Collapsing 2?, and More

Fri, 2016-06-17 18:24
This is obviously a continuation of my past post, http://dtbnguyen.blogspot.com/2016/06/is-capitalism-collapsing-random.html You're probably wondering how on earth we've moved on to religious conspiracies. You'll figure this out in a second: - look back far enough and you'll realise that the way religion was practised and embraced in society was very different a long time ago and now. In fact,

Chris Smart: Booting Fedora 24 cloud image with KVM

Fri, 2016-06-17 17:02

Fedora 24 is on the way, here’s how you can play with the cloud image on your local machine.

Download the image:
wget https://alt.fedoraproject.org/pub/alt/stage/24_RC-1.2/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2

Make a new local backing image (so that we don’t write to our downloaded image) called my-disk.qcow2:
qemu-img create -f qcow2 -b Fedora-Cloud-Base-24-1.2.x86_64.qcow2 my-disk.qcow2

The cloud image uses cloud-init to configure itself on boot, setting things like the hostname, usernames, passwords and ssh keys. You can also run specific commands at two stages of the boot process (see bootcmd and runcmd below) and output messages (see final_message below), which is useful for scripted testing.

Create a file called meta-data with the following content:
instance-id: FedoraCloud00
local-hostname: fedoracloud-00

Next, create a file called user-data with the following content:
#cloud-config
password: password
chpasswd: { expire: False }
ssh_pwauth: True
 
bootcmd:
 - [ sh, -c, echo "=========bootcmd=========" ]
 
runcmd:
 - [ sh, -c, echo "=========runcmd=========" ]
 
# add any ssh public keys
ssh_authorized_keys:
  - ssh-rsa AAA...example...SDvZ user1@domain.com
 
# This is for pexpect so that it knows when to log in and begin tests
final_message: "SYSTEM READY TO LOG IN"

cloud-init mounts a CD-ROM on boot looking for this configuration, so create an ISO image out of those files:
genisoimage -output my-seed.iso -volid cidata -joliet -rock user-data meta-data

If you want to SSH in you will need a bridge of some kind. If you’re already running libvirtd then you should have a virbr0 network device (used in the example below) to provide a local network for your cloud instance. If you don’t have a bridge set up, you can still boot it without network support (leave off the -netdev and -device lines below).

Now we are ready to boot this!
qemu-kvm -name fedora-cloud \
-m 1024 \
-hda my-disk.qcow2 \
-cdrom my-seed.iso \
-netdev bridge,br=virbr0,id=net0 \
-device virtio-net-pci,netdev=net0 \
-display sdl

You should see a window pop up with Fedora loading and cloud-init configuring the instance. At the login prompt you should be able to log in with the username fedora and the password that you set in user-data.

sthbrx - a POWER technical blog: Introducing snowpatch: continuous integration for patches

Wed, 2016-06-15 15:33

Continuous integration has changed the way we develop software. The ability to make a code change and be notified quickly and automatically whether or not it works allows for faster iteration and higher quality. These processes and technologies allow products to quickly and consistently release new versions, driving continuous improvement to their users. For a web app, it's all pretty simple: write some tests, someone makes a pull request, you build it and run the tests. Tools like GitHub, Travis CI and Jenkins have made this process simple and efficient.

Let's throw some spanners in the works. What if instead of a desktop or web application, you're dealing with an operating system? What if your tests can only be run when booted on physical hardware? What if instead of something like a GitHub pull request, code changes were sent as plain-text emails to a mailing list? What if you didn't have control over the development of this project, and you had to work with an existing, open community?

These are some of the problems faced by the Linux kernel, and many other open source projects. Mailing lists, along with tools like git send-email, have become core development infrastructure for many large open source projects. The idea of sending code via a plain-text email is simple, not reliant on a proprietary service, and built on universal, well-defined technology. It does have shortcomings, though. How do you take a plain-text patch, sent as an email to a mailing list, and achieve the continuous integration that other tools provide so trivially?

From this problem, snowpatch was born: a continuous integration tool designed to enable these practices for projects that use mailing lists and plain-text patches. By taking patch metadata organised by Patchwork, performing a number of git operations and shipping the results off to Jenkins, snowpatch can enable continuous integration for any mailing list-based project. At IBM OzLabs, we're using snowpatch to automatically test new patches for Linux on POWER, skiboot, snowpatch itself, and more.
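The flow just described — take a patch tracked by Patchwork, apply it with git, then hand the branch to a CI system — can be sketched in a few lines. This is an illustrative Python sketch, not snowpatch's actual implementation (snowpatch is written in Rust), and the function names and branch naming are hypothetical; only the Patchwork per-patch mbox path reflects a real convention.

```python
# Hypothetical sketch of a snowpatch-style loop; names are illustrative.

def patch_mbox_url(patchwork_base, patch_id):
    # Patchwork serves each patch as a plain mbox file at a stable path,
    # which is what makes automated application possible.
    return "{}/patch/{}/mbox/".format(patchwork_base, patch_id)

def apply_commands(mbox_path, test_branch, base_branch="master"):
    # The git operations: branch off the project's base, apply the patch.
    # A runner would execute these, push the branch somewhere Jenkins can
    # see it, trigger a build, and report the result back to Patchwork.
    return [
        ["git", "checkout", "-b", test_branch, base_branch],
        ["git", "am", mbox_path],
    ]
```

A failure of `git am` alone is already a useful CI result: it tells the maintainer the patch no longer applies cleanly to the current tree.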

snowpatch is written in Rust, an exciting new systems programming language with a focus on speed and safety. Rust's amazing software ecosystem, enabled by its package manager Cargo, made development of snowpatch a breeze. Using Rust has been a lot of fun, along with the practical benefits of (in our experience) faster development, and confidence in the runtime stability of our code. It's still a young language, but it's quickly growing and has an amazing community that has always been happy to help.

We still have a lot of ideas for snowpatch that haven't been implemented yet. Once we've tested a patch and sent the results back to a patchwork instance, what if the project's maintainer (or a trusted contributor) could manually trigger some more intensive tests? How would we handle it if the traffic on the mailing list of a project is too fast for us to test? If we were running snowpatch on multiple machines on the same project, how would we avoid duplicating effort? These are unsolved problems, and if you'd like to help us with these or anything else you think would be good for snowpatch, we take contributions and ideas via our mailing list, which you can subscribe to here. For more details, view our documentation on GitHub.

Thanks for taking your time to learn a bit about snowpatch. In future, we'll be talking about how we tie all these technologies together to build a continuous integration workflow for the Linux kernel and OpenPOWER firmware. Watch this space!

This article was originally posted on IBM developerWorks Open. Check that out for more open source from IBM, and look out for more content in their snowpatch section.

Rusty Russell: Minor update on transaction fees: users still don’t care.

Wed, 2016-06-15 13:01

I ran some quick numbers on the last retargeting period (blocks 415296 through 416346 inclusive) which is roughly a week’s worth.

Blocks were full: median 998k, mean 818k (some miners blind mining on top of unknown blocks). Yet of the 1,618,170 non-coinbase transactions, 48% were still paying dumb, round fees (like 5000 satoshis). Another 5% were paying dumb, round-numbered per-byte fees (like 80 satoshi per byte).

The mean fee was 24051 satoshi (~16c), the mean fee rate 60 satoshi per byte. But if we look at the amount you needed to pay to get into a block (using the second cheapest tx which got in), the mean was 16.81 satoshis per byte, or about 5c.

tl;dr: It’s like a tollbridge charging vehicles 7c per ton, but half the drivers are just throwing a quarter as they drive past and hoping it’s enough. It really shows fees aren’t high enough to notice, and transactions don’t get stuck often enough to notice. That’s surprising; at what level will they notice? What wallets or services are they using?
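To make the two "dumb fee" categories above concrete, here is a small illustrative Python check — my own sketch, not the script used for the numbers in the post. The thresholds (multiples of 1000 satoshis for a round total, whole multiples of 10 satoshi per byte for a round rate) simply mirror the examples given.

```python
def fee_category(fee_satoshi, size_bytes):
    """Classify a transaction fee the way the post does: a round total
    (like a flat 5000 satoshis), a round per-byte rate (like 80
    satoshi per byte), or neither."""
    rate = fee_satoshi / size_bytes
    if fee_satoshi % 1000 == 0:
        return "round total"
    if rate == int(rate) and int(rate) % 10 == 0:
        return "round per-byte"
    return "other"
```

Run over a block's transactions, a classifier like this is all it takes to reproduce the 48%/5% split described above from raw fee and size data.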

Ben Martin: Terry & ROS

Tue, 2016-06-14 22:35
After a number of adventures I finally got a ROS stack set up so that move_base, amcl, and my robot base all like each other well enough for navigation to function. Luckily I added some structural support to the physical base, as the self-driving control is a little snappier than how I normally tend to drive the robot by hand.

There was an upgrade from Indigo to Kinetic in the mix, and the coupled update to Ubuntu Xenial to match the ROS platform update. I found that a bunch of ROS packages I used are not currently available for Kinetic, so I had an expanding catkin workspace of self-compiled system packages to complete the update. Really cool stuff like rosserial wasn't available. Then I found that a timeout there caused a bunch of error messages about mismatched read sizes. I downgraded to the Indigo version of rosserial and the error was still there, so I assume it relates to the various serial drivers in the Linux kernel doing different timing than they did before. Still, one would have hoped that rosserial was more resilient to multiple partial packet delivery. But with a timeout bump all works again. FWIW I've seen similar in boost: you try to read 60 bytes and get 43, then need to get that remaining 17 and stuff any excess in a readback buffer for the next packet read attempt. The boost one hit me when going from 6 to 10 channel IO on an RC receiver-to-UART Arduino I created. The "joy" of low-level IO.
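The partial-read pattern described above (ask for 60 bytes, get 43, loop for the remaining 17) is easy to get right with a small helper. A minimal Python sketch, assuming a blocking `read(n)` callable of the kind a serial port or socket provides:

```python
def read_exactly(read, n):
    """Keep calling read(remaining) until exactly n bytes have arrived.

    `read` is any blocking callable (e.g. a serial port or socket read)
    that may legitimately return fewer bytes than requested.
    """
    buf = bytearray()
    while len(buf) < n:
        chunk = read(n - len(buf))
        if not chunk:
            # An empty read means the stream closed mid-packet.
            raise EOFError("stream closed after %d of %d bytes" % (len(buf), n))
        buf.extend(chunk)
    return bytes(buf)
```

Because each call asks only for the remaining count, no excess can arrive here; if your transport can over-deliver, that is where the read-back buffer for the next packet attempt comes in, as described above.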

I found that the issues stopping navigation from working for me out of the box on Indigo were still there in Kinetic. So I now have a very cool way to tell whether somebody actually has navigation working, or is just assuming that what one reads will work out of the box.

Probably the next ROS thing will be trying to get a MoveIt stack for the meArm. I've got one of these cut and so will soon have it built. It seems like an ideal thing to work on MoveIt for, because it's a simple, low-cost arm that anybody can cut out and servo up. I've long wanted a simple tutorial on MoveIt for affordable arms. It might be that I'm the one writing that tutorial rather than just reading it.

Video and other goodness to follow. As usual, persistence is the key™.

OpenSTEM: Celebrating explorers!

Tue, 2016-06-14 15:03

We continue publishing resources on explorers: a very diverse range from around the world and throughout time. Of course James Cook was an interesting person, but isn’t it great to also offer students an opportunity to investigate some other people whose names they hadn’t yet heard? It is good to show the diversity, and how it wasn’t just Europeans who explored.

And did you spot our selection of female explorers? Unfortunately there aren’t that many, but they did awesome work. Nellie Bly is my personal favourite (pictured on the right). Such fabulous initiative.

As a small introductory gift this month for those who haven’t yet got a subscription, use this special link to our Explorers category page to get 50% off the price of one explorer resource PDF; some will then be only $1. If you have come to the site via the link, the discount will automatically be applied to your cart on checkout, to the most expensive item from the Explorers category. Alternatively you can use coupon code NL1606EXPL. This offer is only valid until the end of June 2016.

Which one will you choose? You can write a comment on this post: tell us which explorer, and why!

Linux Australia News: Council Minutes Tuesday 07 June 2016

Mon, 2016-06-13 13:01
Fri, 2016-06-03 19:36 - 20:43

1. Meeting overview and key information
Present
Hugh, Kathy, Katie, Cherie, Sae Ra

Apologies:
Tony, Craige

Meeting opened by Hugh at 1936hrs and quorum was achieved

MOTION that the previous minutes of Hugh are correct
Moved: Kathy
Seconded: Cherie
2 abstentions. Carried

2. Log of correspondence
Motions moved on list
Nil

General correspondence

VPAC Closure
Email notice to be sent out.
ACTION complete, no further action

Insurance
UPDATE: Sohan was chased 25th April, awaiting update
UPDATE: an invoice has been sent. Kathy to raise the invoice and ping Sae Ra to approve in Westpac.
UPDATE: All now approved and paid

3. Review of action items from previous meetings

Membership Team:
Kathy chatted with Agileware and has asked for 2 quotes, one for a trial and one for full hosting

Email from DONNA BENJAMIN regarding website and update to D8 or possible rebuild.
ACTION: Needs analysis of website feature requirements. Should we make things simpler?
Understanding the User Needs of the website
ACTION: Kathy to approach the Membership team Subcommittee.
Draft a survey to better understand the usage of the public facing website. The membership team is in progress.
ACTION Kathy to communicate with the Linux Aus List.
Survey has been sent out to the Linux Aus list.
Need to compile a summary of answers from the survey.
UPDATE: Survey has been completed, and key findings and recommendations sent to the Linux Aus list.
UPDATE: Survey was sent out to the LA list. Look at wireframes and look at candidate platforms. tl;dr: the membership team have been doing stuff.
UPDATE: Quotes received from Agileware and DevApp regarding hosting of CiviCRM, Kathy currently following up.

ACTION: Josh H and Tony to assess an appropriate amount to transfer funds back from NZ to Australia.
High Priority. Tony to update Action Register
UPDATE: still blocked by getting access to the NZ accounts.
UPDATE: Tony still attempting to engage with the bank
There is an outstanding amount for the Ellis’ that needs to be sorted.

Please refer to Action Items list for more items.

4. Items for discussion

Standing item: review activities list for new items
https://github.com/linuxaustralia/constitution_and_policies/blob/master/activities-calendar.md

Call for bids for linux.conf.au 2019
ACTION: Tony to do the Python script magic and amend dates, then to formally announce the call for bids
In Progress

Informal request to Hugh from Ben Dechrai BuzzConf https://buzzconf.io/sponsor-buzzconf-2016/ asking if LA would consider sponsoring. Have said would float informally with Council in first instance.
Very much in the Linux Australia spirit.
Excellent atmosphere.
It is not an open source event. Semi-corporate presenters.
There are no issues with the integrity or running of the event just not sure that LA should be sponsoring the event because it's not open source focussed.
It would be different if BuzzConf were to subscribe to a more open source way of doing things; it is more a case of the event needing corporate sponsorship.
It is worth exploring further
Logo/visibility and what options are available etc. It would be worthwhile looking at numbers on what things we could provide.
ACTION: Hugh to have a chat with Ben
UPDATE: In Progress - Hugh to review action items from F2F re:amount and go from there.

Event Updates:
LCA2016 update
We believe we have processed all payments and finances
Event report well received. It also went to Geelong stakeholders.
Caught up with Donna Benjamin, and Donna now has the large format printer.

LCA2017 update
LCA2017 want to use a different payment provider, Stripe instead of SecurePay. Fees not significantly different.
ACTION: Tony to assess and respond to Chris Neugebauer, allowing us to use SecurePay in future if Stripe doesn’t work out
Papers committee up and running
3 On papers committee, 6/7 women, good diversity of employers
CfP sometime this month
Website/general graphics design took a little longer than expected but now pretty much sorted
F2F meeting at end of month, overall level of comfort high
May end up handling accommodation bookings themselves

LCA2018 update
LCA2018 team to be contacted re moving forward.

PyConAU update
In Progress

Drupal South Gold Coast 2016
Query from Jana if we have Not for Profit organisation
Register of stuff like are we NfP organisation
Kathy to follow-up with status.

OSDConf 2015
Post Event report released

GovHack
Website has launched and looks spiffy. There are 30 events this year. Initial comms have gone out and sponsorship is on track.

JoomlaDay
In progress

DrupalGov
--

WordCamp Sunshine Coast 2016
Books can be closed.

WordCamp Sydney 2016
Very well organised and on top of everything

CiviCRM 2017 interested in aligning with LA
Nothing to report

5. Items for noting

6. Other business
6 Month check, how are we travelling.

Carried Over from Previous Minutes
Linux Australia as charity funds funnel?
It’s been raised that it would be nice to have Linux Australia (or something else?) as a local, tax-deductible vessel via which Australians could donate directly to, for example, the Software Freedom Conservancy.
ACTION: Kathy to reach out to Jon, and try to partner with EFA in regards to Tax Deductible donations. Kathy to respond to the digital rights campaign email.

New Other Business
7. Other business carried from previous Council
ACTION: Cherie to review Minutes from F2F and add to next meeting agenda if further discussion required.

8. In Camera
2 items were discussed in camera

2043AEST close.