Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Jeremy Kerr: Custom kernels in OpenPower firmware

Wed, 2015-06-24 12:27

As of commit 2aff5ba6 in the op-build tree, we're able to easily replace the kernel in an OpenPower firmware image.

This commit adds a new partition (called BOOTKERNEL) to the PNOR image, which provides the petitboot bootloader environment. Since it's now in its own partition, we can replace the image with a custom build. Here's a little guide to doing that, using as an example a separate branch of op-build that provides a little-endian kernel.

You can check if your currently-running firmware has this BOOTKERNEL partition by running pflash -i on the BMC. It should list BOOTKERNEL in the partition table listing:

# pflash -i
Flash info:
-----------
Name          = Micron N25Qx512Ax
Total size    = 64MB
Erase granule = 4KB

Partitions:
-----------
ID=00            part 00000000..00001000 (actual=00001000)
ID=01            HBEL 00008000..0002c000 (actual=00024000)
[...]
ID=11            HBRT 00949000..00ca9000 (actual=00360000)
ID=12         PAYLOAD 00ca9000..00da9000 (actual=00100000)
ID=13      BOOTKERNEL 00da9000..01ca9000 (actual=00f00000)
ID=14        ATTR_TMP 01ca9000..01cb1000 (actual=00008000)
ID=15       ATTR_PERM 01cb1000..01cb9000 (actual=00008000)
[...]
#

If your partition table does not contain a BOOTKERNEL partition, you'll need to upgrade to a more recent PNOR image to proceed.

First (if you don't have one already), grab a suitable version of op-build. In this example, we'll use my le branch, which has little-endian support:

git clone --recursive git://github.com/jk-ozlabs/op-build.git
cd op-build
git checkout -b le origin/le
git submodule update

Then, prepare our environment and configure for the relevant platform - in this case, habanero:

. op-build-env
op-build habanero_defconfig

If you'd like to change any of the kernel config (for example, to add or remove drivers), you can do that now using the 'linux-menuconfig' target; otherwise, the default kernel config will work fine.

op-build linux-menuconfig

Next, we build just the userspace and kernel parts of the firmware image, by specifying the linux26-rebuild-with-initramfs build target:

op-build linux26-rebuild-with-initramfs

If you're using a fresh op-build tree, this will take a little while, as it downloads and builds a toolchain, userspace and kernel. Once that's complete, you'll have a built kernel image in the output tree:

output/build/images/zImage.epapr
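
To copy the image across, something like scp will do if your BMC accepts ssh (a sketch; the BMC hostname and user here are placeholders, and your particular BMC may need a different transfer method):

scp output/build/images/zImage.epapr root@my-bmc:/tmp/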

With the zImage.epapr file on the BMC, flash it using pflash. We specify the -P <PARTITION> argument to write to a single PNOR partition:

pflash -P BOOTKERNEL -e -p /tmp/zImage.epapr
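
If you'd like to sanity-check the result, pflash can also read a partition back out to a file (a sketch; the output filename is arbitrary):

pflash -P BOOTKERNEL -r /tmp/bootkernel.check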

And that's it! The next boot will use your newly-built kernel in the petitboot bootloader environment.

Out-of-tree kernel builds

If you'd like to replace the kernel from op-build with one from your own external source tree, you have two options: either point op-build at your own tree, or build your own kernel using the initramfs that op-build has produced.

For the former, you can override certain op-build variables to reference a separate source. For example, to use an external git tree:

op-build LINUX_SITE=git://github.com/jk-ozlabs/linux LINUX_VERSION=v3.19
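
Since op-build is based on Buildroot, Buildroot's standard source-override mechanism may also work: a sketch, assuming your op-build tree honours a local.mk override file (this is generic Buildroot behaviour, not something the op-build documentation promises):

# local.mk: build the linux package from a local source tree
LINUX_OVERRIDE_SRCDIR = $(HOME)/src/linux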

See Customising OpenPower firmware for other examples of using external sources in op-build.

The latter option involves doing a completely out-of-op-build build of a kernel, but referencing the initramfs created by op-build (which is in output/images/rootfs.cpio.xz). From your kernel source directory, build with the CONFIG_INITRAMFS_SOURCE argument pointing at the relevant initramfs. For example:

make O=obj ARCH=powerpc \
    CONFIG_INITRAMFS_SOURCE=../op-build/output/images/rootfs.cpio.xz
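
The built image should then appear under obj/arch/powerpc/boot/ in your kernel tree (a sketch, assuming your kernel config produces the epapr zImage); transfer it to the BMC and flash it exactly as above:

pflash -P BOOTKERNEL -e -p /tmp/zImage.epapr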

Russell Coker: Smart Phones Should Measure Charge Speed

Wed, 2015-06-24 12:26

My first mobile phone lasted for days between charges. I never really found out how long its battery would last because there was no way that I could use it to deplete the charge in any time that I could spend awake. Even if I had managed to run the battery out, the phone was designed to accept 4*AA batteries (its rechargeable battery pack was exactly that size) so I could buy spare batteries at any store.

Modern phones are quite different in physical design (phones that weigh less than 4*AA batteries aren’t uncommon), functionality (fast CPUs and big screens suck power), and use (games really drain your phone battery). This requires much more effective chargers; when some phones are used intensively (e.g. playing an action game with Wifi enabled) they can’t be charged because they use more power than the plug-pack supplies. I’ve previously blogged some calculations about resistance and thickness of wires for phone chargers [1]; it’s obvious that there are some technical limitations to phone charging based on the decision to use a long cable at ~5V.

My calculations about phone charge rate were based on the theoretical resistance of wires derived from their estimated cross-sectional area. One problem with such analysis is that it’s difficult to determine how thick the insulation is without destroying the wire. Another problem is that after repeated use of a charging cable some conductors break due to excessive bending, which can significantly increase the resistance and therefore the charging time. Recently a charging cable that used to be really good suddenly became almost useless. My Galaxy Note 2 would claim that it was being charged even though the reported level of charge in the battery was not increasing; it seems that the cable only supplied enough power to keep the phone running, not enough to actually charge the battery.

I recently bought a USB current measurement device which is really useful. I have used it to diagnose power supplies and USB cables that didn’t work correctly. But one significant way in which it fails is in the case of problems with the USB connector. Sometimes a cable performs differently when connected via the USB current measurement device.

The CurrentWidget program [2] on my Galaxy Note 2 told me that all of the dedicated USB chargers (the 12V one in my car and all the mains powered ones) supply 1698mA (including the ones rated at 1A) while a PC USB port supplies ~400mA. I don’t think that the Note 2 measurement is particularly reliable. On my Galaxy Note 3 it always says 0mA; I guess that feature isn’t implemented. An old Galaxy S3 reports 999mA of charging even when the USB current measurement device says ~500mA. It seems to me that the method CurrentWidget uses to get the current isn’t accurate, if it works at all.

Android 5 on the Nexus 4/5 phones will tell you the amount of time until the phone is charged in some situations (on the Nexus 4 and Nexus 5 that I used for testing it didn’t always display it and I don’t know why). This is useful, but it’s still not good enough.

I think that what we need is to have the phone measure the current that’s being supplied and report it to the user. Then when a phone charges slowly because apps are drawing power, that won’t be mistaken for a phone charging slowly due to a defective cable or connector.

Related posts:

  1. Cooling Phones According to the bureau of meteorology today is 39C. But...
  2. Dual SIM Phones vs Amaysim vs Contract for Mobile Phones Currently Dick Smith is offering two dual-SIM mobile phones for...
  3. Qi Phone Charging I have just bought a wireless phone charging system based...

Russell Coker: One Android Phone Per Child

Tue, 2015-06-23 12:26

I was asked for advice on whether children should have access to smart phones, it’s an issue that many people are discussing and seems worthy of a blog post.

Claimed Problems with Smart Phones

The first thing that I think people should read is this XKCD post with quotes about the demise of letter writing from 99+ years ago [1]. Given the lack of evidence cited by people who oppose phone use I think we should consider to what extent the current concerns about smart phone use are just reactions to changes in society. I’ve done some web searching for reasons that people give for opposing smart phone use by kids and addressed the issues below.

Some people claim that children shouldn’t get a phone when they are so young that it will just be a toy. That’s interesting given the dramatic increase in the amount of money spent on toys for children in recent times. It’s particularly interesting when parents buy game consoles for their children but refuse mobile phone “toys” (I know someone who did this). I think this is more of a social issue regarding what is a suitable toy than any real objection to phones used as toys. Obviously the educational potential of a mobile phone is much greater than that of a game console.

It’s often claimed that kids should spend their time reading books instead of using phones. When visiting libraries I’ve observed kids using phones to store lists of books that they want to read, this seems to discredit that theory. Also some libraries have Android and iOS apps for searching their catalogs. There are a variety of apps for reading eBooks, some of which have access to many free books but I don’t expect many people to read novels on a phone.

Cyber-bullying is the subject of a lot of anxiety in the media. At least with cyber-bullying there’s an electronic trail, anyone who suspects that their child is being cyber-bullied can check that while old-fashioned bullying is more difficult to track down. Also while cyber-bullying can happen faster on smart phones the victim can also be harassed on a PC. I don’t think that waiting to use a PC and learn what nasty thing people are saying about you is going to be much better than getting an instant notification on a smart phone. It seems to me that the main disadvantage of smart phones in regard to cyber-bullying is that it’s easier for a child to participate in bullying if they have such a device. As most parents don’t seem concerned that their child might be a bully (unfortunately many parents think it’s a good thing) this doesn’t seem like a logical objection.

Fear of missing out (FOMO) is claimed to be a problem, apparently if a child has a phone then they will want to take it to bed with them and that would be a bad thing. But parents could have a policy about when phones may be used and insist that a phone not be taken into the bedroom. If it’s impossible for a child to own a phone without taking it to bed then the parents are probably dealing with other problems. I’m not convinced that a phone in bed is necessarily a bad thing anyway, a phone can be used as an alarm clock and instant-message notifications can be turned off at night. When I was young I used to wait until my parents were asleep before getting out of bed to use my PC, so if smart-phones were available when I was young it wouldn’t have changed my night-time computer use.

Some people complain that kids might use phones to play games too much or talk to their friends too much. What do people expect kids to do? In recent times the fear of abduction has led to children playing outside a lot less; it used to be that 6yos would play with other kids in their street and 9yos would be allowed to walk to the local park, while now people aren’t allowing 14yo kids to walk to the nearest park alone. Playing games and socialising with other kids has to be done over the Internet because kids aren’t often allowed out of the house. Play and socialising are important learning experiences that have to happen online if they can’t happen offline.

Apps can be expensive. But it’s optional to sign up for a credit card with the Google Play store and the range of free apps is really good. Also the default configuration of the app store is to require a password entry before every purchase. Finally it is possible to give kids pre-paid credit cards and let them pay for their own stuff, such pre-paid cards are sold at Australian post offices and I’m sure that most first-world countries have similar facilities.

Electronic communication is claimed to be somehow different and lesser than old-fashioned communication. I presume that people made the same claims about the telephone when it first became popular. The only real difference between email and posted letters is that email tends to be shorter because the reply time is smaller, you can reply to any questions in the same day not wait a week for a response so it makes sense to expect questions rather than covering all possibilities in the first email. If it’s a good thing to have longer forms of communication then a smart phone with a big screen would be a better option than a “feature phone”, and if face to face communication is preferred then a smart phone with video-call access would be the way to go (better even than old fashioned telephony).

Real Problems with Smart Phones

The majority opinion among everyone who matters (parents, teachers, and police) seems to be that crime at school isn’t important. Many crimes that would result in jail sentences if committed by adults receive either no punishment or something trivial (such as lunchtime detention) if committed by school kids. Introducing items that are both intrinsically valuable and which have personal value due to the data storage into a typical school environment is probably going to increase the amount of crime. The best options to deal with this problem are to prevent kids from taking phones to school or to home-school kids. Fixing the crime problem at typical schools isn’t a viable option.

Bills can potentially be unexpectedly large due to kids’ inability to restrain their usage and telcos deliberately making their plans tricky to profit from excess usage fees. The solution is to only use pre-paid plans, fortunately many companies offer good deals for pre-paid use. In Australia Aldi sells pre-paid credit in $15 increments that lasts a year [2]. So it’s possible to pay $15 per year for a child’s phone use, have them use Wifi for data access and pay from their own money if they make excessive calls. For older kids who need data access when they aren’t at home or near their parents there are other pre-paid phone companies that offer good deals, I’ve previously compared prices of telcos in Australia, some of those telcos should do [3].

It’s expensive to buy phones. The solution to this is to not buy new phones for kids, give them an old phone that was used by an older relative or buy an old phone on ebay. Also let kids petition wealthy relatives for a phone as a birthday present. If grandparents want to buy the latest smart-phone for a 7yo then there’s no reason to stop them IMHO (this isn’t a hypothetical situation).

Kids can be irresponsible and lose or break their phone. But the way kids learn to act responsibly is by practice. If they break a good phone and get a lesser phone as a replacement or have to keep using a broken phone then it’s a learning experience. A friend’s son head-butted his phone and cracked the screen – he used it for 6 months after that, I think he learned from that experience. I think that kids should learn to be responsible with a phone several years before they are allowed to get a “learner’s permit” to drive a car on public roads, which means that they should have their own phone when they are 12.

I’ve seen an article about a school finding that tablets didn’t work as well as laptops which was touted as news. Laptops or desktop PCs obviously work best for typing. Tablets are for situations where a laptop isn’t convenient and when the usage involves mostly reading/watching, I’ve seen school kids using tablets on excursions which seems like a good use of them. Phones are even less suited to writing than tablets. This isn’t a problem for phone use, you just need to use the right device for each task.

Phones vs Tablets

Some people think that a tablet is somehow different from a phone. I’ve just read an article by a parent who proudly described their policy of buying “feature phones” for their children and tablets for them to do homework etc. Really a phone is just a smaller tablet, once you have decided to buy a tablet the choice to buy a smart phone is just about whether you want a smaller version of what you have already got.

The iPad doesn’t appear to be able to make phone calls (but it supports many different VOIP and video-conferencing apps) so that could technically be described as a difference. AFAIK all Android tablets that support 3G networking also support making and receiving phone calls if you have a SIM installed. It is awkward to use a tablet to make phone calls but most usage of a modern phone is as an ultra portable computer not as a telephone.

The phone vs tablet issue doesn’t seem to be about the capabilities of the device. It’s about how portable the device should be and the image of the device. I think that if a tablet is good then a more portable computing device can only be better (at least when you need greater portability).

Recently I’ve been carrying a 10″ tablet around a lot for work, sometimes a tablet will do for emergency work when a phone is too small and a laptop is too heavy. Even though tablets are thin and light it’s still inconvenient to carry, the issue of size and weight is a greater problem for kids. 7″ tablets are a lot smaller and lighter, but that’s getting close to a 5″ phone.

Benefits of Smart Phones

Using a smart phone is good for teaching children dexterity. It can also be used for teaching art in situations where more traditional art forms such as finger painting aren’t possible (I have met a professional artist who has used a Samsung Galaxy Note phone for creating art work).

There is a huge range of educational apps for smart phones.

The Wikireader (that I reviewed 4 years ago) [4] has obvious educational benefits. But a phone with Internet access (either 3G or Wifi) gives Wikipedia access including all pictures and is a better fit for most pockets.

There are lots of educational web sites and random web sites that can be used for education (Googling the answer to random questions).

When it comes to preparing kids for “the real world” or “the work environment” people often claim that kids need to use Microsoft software because most companies do (regardless of the fact that most companies will be using radically different versions of MS software by the time current school kids graduate from university). In my typical work environment I’m expected to be able to find the answer to all sorts of random work-related questions at any time and I think that many careers have similar expectations. Being able to quickly look things up on a phone is a real work skill, and a skill that’s going to last a lot longer than knowing today’s version of MS-Office.

There are a variety of apps for tracking phones. There are non-creepy ways of using such apps for monitoring kids. Also with two-way monitoring kids will know when their parents are about to collect them from an event and can stay inside until their parents are in the area. This combined with the phone/SMS functionality that is available on feature-phones provides some benefits for child safety.

iOS vs Android

Rumour has it that iOS is better than Android for kids diagnosed with Low Functioning Autism. There are apparently apps that help non-verbal kids communicate with icons and for arranging schedules for kids who have difficulty with changes to plans. I don’t know anyone who has a LFA child so I haven’t had any reason to investigate such things. Anyone can visit an Apple store and a Samsung Experience store as they have phones and tablets you can use to test out the apps (at least the ones with free versions). As an aside the money the Australian government provides to assist Autistic children can be used to purchase a phone or tablet if a registered therapist signs a document declaring that it has a therapeutic benefit.

I think that Android devices are generally better for educational purposes than iOS devices because Android is a less restrictive platform. On an Android device you can install apps downloaded from a web site or from a 3rd party app download service. Even if you stick to the Google Play store there’s a wider range of apps to choose from because Google is apparently less restrictive.

Android devices usually allow installation of a replacement OS. The Nexus devices are always unlocked and have a wide range of alternate OS images, and the other commonly used devices can usually have an alternate OS installed. This allows kids who have the interest and technical skill to extensively customise their device and learn all about its operation. iOS devices are designed to be sealed against the user. Admittedly there probably aren’t many kids with the skill and desire to replace the OS on their phone, but I think it’s good to have the option.

Android phones have a range of sizes and features while Apple only makes a few devices at any time and there’s usually only a couple of different phones on sale. iPhones are also a lot smaller than most Android phones, according to my previous estimates of hand size the iPhone 5 would be a good tablet for a 3yo or good for side-grasp phone use for a 10yo [5]. The main benefits of a phone are for things other than making phone calls so generally the biggest phone that will fit in a pocket is the best choice. The tiny iPhones don’t seem very suitable.

Also buying one of each is a viable option.

Conclusion

I think that mobile phone ownership is good for almost all kids even from a very young age (there are many reports of kids learning to use phones and tablets before they learn to read). There are no real down-sides that I can find.

I think that Android devices are generally a better option than iOS devices. But in the case of special needs kids there may be advantages to iOS.

Related posts:

  1. Choosing an Android Phone My phone contract ends in a few months, so I’m...
  2. Standardising Android Don Marti wrote an amusing post about the lack of...
  3. My Ideal Mobile Phone Based on my experience testing the IBM Seer software on...

Sam Watkins: sswam

Mon, 2015-06-22 16:30

I learned a useful trick with the bash shell today.

We can use printf "%q " to escape arguments to pass to the shell.

This can be useful in combination with ssh, in case you want to pass arguments containing shell special characters or spaces. It can also be used with su -c and sh -c.

The following will run a command exactly on a remote server:

sshc() {
        remote=$1 ; shift
        ssh "$remote" "`printf "%q " "$@"`"
}

Example:

sshc user@server touch "a test file" "another file"
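
The same trick works for the su -c and sh -c cases mentioned above. A sketch (the username is a placeholder; note that printf %q may emit bash-style $'...' quoting for unusual characters, so the shell that finally parses the string should be bash):

suc() {
        user=$1 ; shift
        su "$user" -c "`printf "%q " "$@"`"
}

suc someuser touch "a test file"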

Sridhar Dhanapalan: Twitter posts: 2015-06-15 to 2015-06-21

Mon, 2015-06-22 00:27

Michael Still: Yet another possible cub walk

Sat, 2015-06-20 19:28
Jacqui and Catherine kindly agreed to come on another test walk for a possible cub walk. This one was the Sanctuary Loop at Tidbinbilla. To be honest this wasn't a great choice for cubs -- whilst being scenic and generally pleasant, the heavy use of black top paths and walkways made it feel like a walk in the Botanic Gardens, and the heavy fencing made it feel like an exhibit at a zoo. I'm sure it's great for a weekend walk or for tourists, but if you're trying to have a cub adventure it's not great.



                                       



Tags for this post: blog pictures 20150620-tidbinbilla photo canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches




Russell Coker: BTRFS Status June 2015

Sat, 2015-06-20 15:26

The version of btrfs-tools in Debian/Jessie is incapable of creating a filesystem that can be mounted by the kernel in Debian/Wheezy. If you want to use a BTRFS filesystem on Jessie and Wheezy (which isn’t uncommon with removable devices) the only options are to use the Wheezy version of mkfs.btrfs or to use a Jessie kernel on Wheezy. I recently got bitten by this issue when I created a BTRFS filesystem on a removable device with a lot of important data (which is why I wanted metadata duplication and checksums) and had to read it on a server running Wheezy. Fortunately KVM in Wheezy works really well so I created a virtual machine to read the disk. Setting up a new KVM isn’t that difficult, but it’s not something I want to do while a client is anxiously waiting for their data.

BTRFS has been working well for me apart from the Jessie/Wheezy compatibility issue (which was an annoyance but didn’t stop me doing what I wanted). I haven’t written a BTRFS status report for a while because everything has been OK and there has been nothing exciting to report.

I regularly get errors from the cron jobs that run a balance, claiming that the filesystem has run out of free space. I have the cron jobs due to past problems with BTRFS running out of metadata space. In spite of the jobs often failing the systems keep working, so I’m not too worried at the moment. I think this is a bug, but there are many more important bugs.
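
For reference, the balance these jobs run is along these lines (a sketch, not the exact cron job; the usage filters only rewrite mostly-empty chunks, which keeps the operation relatively cheap):

btrfs balance start -dusage=50 -musage=50 /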

Linux kernel version 3.19 was the first version to have working support for RAID-5 recovery, which means it was the first version to have usable RAID-5 (I think there is no point even having RAID-5 without recovery). It wouldn’t be prudent to trust your important data to a new feature in a filesystem, so at this stage if I needed a very large scratch space then BTRFS RAID-5 might be a viable option, but for anything else I wouldn’t use it. BTRFS has still had little performance optimisation; while this doesn’t matter much for SSDs or single-disk filesystems, for a RAID-5 of hard drives it would probably hurt a lot. Maybe BTRFS RAID-5 would be good for a scratch array of SSDs. The reports of problems with RAID-5 don’t surprise me at all.

I have a BTRFS RAID-1 filesystem on 2*4TB disks which is giving poor performance on metadata; simple operations like “ls -l” on a directory with ~200 subdirectories take many seconds to run. I suspect that part of the problem is due to the filesystem being written by cron jobs with files accumulating over more than a year. The “btrfs filesystem” command (see btrfs-filesystem(8)) allows defragmenting files and directory trees, but unfortunately it can’t recursively defragment directories without also defragmenting the files in them. I really wish there was a way to get BTRFS to put all metadata on SSD and all data on hard drives. Sander suggested the following command to defragment directories on the BTRFS mailing list:

find / -xdev -type d -execdir btrfs filesystem defrag -c {} +

Below is the output of “zfs list -t snapshot” on a server I run; it’s often handy to know how much space is used by snapshots, but unfortunately BTRFS has no support for this.

NAME                        USED  AVAIL  REFER  MOUNTPOINT
hetz0/be0-mail@2015-03-10  2.88G      -   387G  -
hetz0/be0-mail@2015-03-11  1.12G      -   388G  -
hetz0/be0-mail@2015-03-12  1.11G      -   388G  -
hetz0/be0-mail@2015-03-13  1.19G      -   388G  -

Hugo pointed out on the BTRFS mailing list that the following command will give the amount of space used for snapshots. $SNAPSHOT is the name of a snapshot and $LASTGEN is the generation number of the previous snapshot you want to compare with.

btrfs subvolume find-new $SNAPSHOT $LASTGEN | awk '{total = total + $7}END{print total}'

One upside of the BTRFS implementation in this regard is that the above btrfs command without being piped through awk shows you the names of files that are being written and the amounts of data written to them. Through casually examining this output I discovered that the most written files in my home directory were under the “.cache” directory (which wasn’t exactly a surprise).
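
If you don’t know the generation number of the previous snapshot, find-new itself can report it: asking for changes since an impossibly high generation just prints the current “transid marker”. A sketch (the snapshot paths are placeholders):

LASTGEN=$(btrfs subvolume find-new /snapshots/previous 9999999 | awk '{print $NF}')
btrfs subvolume find-new /snapshots/current $LASTGEN | awk '{total = total + $7}END{print total}'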

Now I am configuring workstations with a separate subvolume for ~/.cache for the main user. This means that ~/.cache changes don’t get stored in the hourly snapshots and less disk space is used for snapshots.
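
A minimal sketch of that setup for a new user, assuming /home is on the BTRFS filesystem in question (for an existing user you’d copy the old contents across first):

btrfs subvolume create /home/someuser/.cache
chown someuser:someuser /home/someuser/.cache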

Conclusion

My observation is that things are going quite well with BTRFS. It’s more than 6 months since I had a noteworthy problem, which is pretty good for a filesystem that’s still under active development. But I still run many systems which could benefit from the data integrity features of ZFS and BTRFS, yet don’t have the resources to run ZFS and need more reliability than I can expect from an unattended BTRFS system.

At this time the only servers I run with BTRFS are located within a reasonable drive from my home (not the servers in Germany and the US) and are easily accessible (not the embedded systems). ZFS is working well for some of the servers in Germany. Eventually I’ll probably run ZFS on all the hosted servers in Germany and the US, I expect that will happen before I’m comfortable running BTRFS on such systems. For the embedded systems I will just take the risk of data loss/corruption for the next few years.

Related posts:

  1. BTRFS Status Dec 2014 My last problem with BTRFS was in August [1]. BTRFS...
  2. BTRFS Status March 2014 I’m currently using BTRFS on most systems that I can...
  3. BTRFS Status July 2014 My last BTRFS status report was in April [1], it...

Rusty Russell: Mining on a Home DSL connection: latency for 1MB and 8MB blocks

Fri, 2015-06-19 13:28

I like data.  So when Patrick Strateman handed me a hacky patch for a new testnet with a 100MB block limit, I went to get some.  I added 7 Digital Ocean nodes, another hacky patch to prevent sendrawtransaction from broadcasting, and a quick utility to create massive chains of transactions.

My home DSL connection is 11Mbit down, and 1Mbit up; that’s the fastest I can get here.  I was CPU mining on my laptop for this test, while running tcpdump to capture network traffic for analysis.  I didn’t measure the time taken to process the blocks on the receiving nodes, just the first propagation step.

1 Megabyte Block

Naively, it should take about 10 seconds to send a 1MB block up my DSL line from first packet to last (1MB is a bit over 8Mbit, which is roughly 8 seconds at 1Mbit/s, plus protocol overhead).  Here’s what actually happens, in seconds for each node:

  1. 66.8
  2. 70.4
  3. 71.8
  4. 71.9
  5. 73.8
  6. 75.1
  7. 75.9
  8. 76.4

The packet dump shows they’re all pretty much sprayed out simultaneously (bitcoind may do the writes in order, but the network stack interleaves them pretty well).  Sending the same 1MB block to eight peers at once means pushing about 8MB up a 1Mbit/s uplink, which is why it’s 67 seconds at best before the first node receives my block (and a bit longer in practice, since that’s only when the packet left my laptop).

8 Megabyte Block

I increased my block size, and one node dropped out, so this isn’t quite the same, but the times to send to each node are about 8 times worse, as expected:

  1. 501.7
  2. 524.1
  3. 536.9
  4. 537.6
  5. 538.6
  6. 544.4
  7. 546.7
Conclusion

Using the rough formula of 1-exp(-t/600), I would expect orphan rates of 10.5% generating 1MB blocks, and 56.6% with 8MB blocks; that’s a huge cut in expected profits.
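
Those numbers are easy to check with bc: e(x) in bc’s math library is the exponential function, and the argument is the propagation delay in seconds against an average 600-second block interval:

orphan_rate() { echo "1 - e(-$1 / 600)" | bc -l; }
orphan_rate 66.8    # ~0.105, i.e. 10.5% for the 1MB block
orphan_rate 501.7   # ~0.567, i.e. 56.6% for the 8MB block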

Workarounds
  • Get a faster DSL connection.  Though even an uplink 10 times faster would mean 1.1% orphan rate with 1MB blocks, or 8% with 8MB blocks.
  • Only connect to a single well-connected peer (-maxconnections=1), and hope they propagate your block.
  • Refuse to mine any transactions, and just collect the block reward.  Doesn’t help the bitcoin network at all though.
  • Join a large pool.  This is what happens in practice, but raises a significant centralization problem.
Fixes
  • We need bitcoind to be smarter about ratelimiting in these situations, and stream serially.  Done correctly (which is hard), it could also help bufferbloat which makes running a full node at home so painful when it propagates blocks.
  • Some kind of block compression, along the lines of Gavin’s IBLT idea. I’ve done some preliminary work on this, and it’s promising, but far from trivial.

 

Michael Still: Further adventures in the Jerrabomberra wetlands

Fri, 2015-06-19 09:28
There was another walk option for cubs I wanted to explore at the wetlands, so I went back during lunch time yesterday. It was raining really quite heavily during this walk, but I still had fun. I think this route might be the winner -- it's a bit longer, and a bit more interesting as well.



                                       



Tags for this post: blog pictures 20150618-jerrabomberra_wetlands photo canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches




Michael Still: Exploring possible cub walks

Wed, 2015-06-17 15:28
I've been exploring possible cub walks for a little while now, and decided that Jerrabomberra Wetlands might be an option. Most of these photos will seem a bit odd to readers, unless you realize I'm mostly interested in the terrain and its suitability for cubs...



                                 



Tags for this post: blog pictures 20150617-jerrabomerra_wetlands photo canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches




Ben Martin: Abide the Slide

Wed, 2015-06-17 08:39
The holonomic drive robot takes its first rolls! This is what you get when you contort a 3d printer into a cross format and attach funky wheels. Quite literally: the control board is an Arduino Mega board with an ATmega2560 MCU and a RAMPS 1.4 stepper controller board plugged into it. The show is controlled over an rf24 link from a hand made controller. Yes folks, a regression to teleoperating for now. I'll have to throw the thing onto scales later; the steppers themselves add considerable weight to the project, but there doesn't seem to be much problem moving the thing around under its own power.







The battery is a little under-specced; it will surely supply enough current, and doesn't get hot after operation, but the overall battery capacity is low so the show is over fairly quickly. A problem that is easily solved by throwing more dollars at the battery. The next phase is to get better mechanical stability by tweaking things and changing the software to account for the fact that one wheel axis is longer than the other. From there some sensor feedback (IMU) and a fly-by-wire mode will be on the cards.







This might end up going into ROS land too, encapsulating the whole current setup into being a "robot base controller" and using other hardware above to run sensors, navigation, and decision logic.



Stewart Smith: OPAL firmware specification, conformance and documentation

Tue, 2015-06-16 11:26

Now that we have an increasing amount of things that run on top of OPAL:

  1. Linux
  2. hello_world (in skiboot tree)
  3. ppc64le_hello (as I wrote about yesterday)
  4. FreeBSD

and that the OpenPower ecosystem is rapidly growing (especially around people building OpenPower machines), the need for more formal specification, conformance testing and documentation for OPAL is increasing rapidly.

If you had looked at the documentation in the skiboot tree late last year, you’d have noticed a grand total of seven text files. Now, we’re a lot better off (although far from complete).

I’m proud to say that I won’t merge new code that adds/modifies an OPAL API call or anything in the device tree that doesn’t come with accompanying documentation, and this has meant that although it may not be perfect, we have something that is a decent starting point.

We’re in the interesting situation of starting with a working system, with mainline Linux kernels now for over a year (maybe even 18 months) being able to be booted by skiboot and run on powernv hardware (the more modern the kernel the better though).

So…. if anyone loves going through deeply technical documentation… do I have a project you can contribute to!

Arjen Lentz: On Removal of Citizenship – Short Cuts | London Review of Books

Mon, 2015-06-15 14:25

Many governments would like to rid themselves of unwanted residents, and those that countenance statelessness threaten to increase rather than reduce the problems associated with any who are poorly integrated. Their efforts are also wrong in principle. Citizenship, Hannah Arendt said, is ‘the right to have rights’. Citizenship isn’t a transient privilege, but an ancient status on which legal order is built. If individuals are accused of wrongdoing, they should be brought to trial, not issued a notice by the Home Office that cuts them loose and exposes them to unregulated and potentially lethal action by another country.

Richard Jones: PyCon Australia 2015 Early Bird Registrations Now Open!

Mon, 2015-06-15 13:26

We are delighted to announce that online registration is now open for PyCon Australia 2015. The sixth PyCon Australia is being held in Brisbane, Queensland from July 31st to August 4th at the Pullman Brisbane and is expected to draw hundreds of Python developers, enthusiasts and students from Australasia and afar.

Starting today, early bird offers are up for grabs. To take advantage of these discounted ticket rates, be among the first 100 to register. Early bird registration starts from $50 for full-time students, $180 for enthusiasts and $460 for professionals. Offers this good won’t last long, so head straight to http://2015.pycon-au.org and register right away.

PyCon Australia has endeavoured to keep tickets as affordable as possible. We are able to do so, thanks to our Sponsors and Contributors.

We have also worked out favourable deals with accommodation providers for PyCon delegates. Find out more about the options at http://2015.pycon-au.org/register/accommodation

To begin the registration process, and find out more about each level of ticket, visit http://2015.pycon-au.org/register/prices

Important Dates to Help You Plan

June 8: Early Bird Registration Opens — open to the first 100 tickets

June 29: Financial Assistance program closes.

July 8: Last day to Order PyCon Australia 2015 T-shirts

July 19: Last day to Advise Special Dietary Requirements

July 31 : PyCon Australia 2015 Begins

About PyCon Australia

PyCon Australia is the national conference for the Python Programming Community. The sixth PyCon Australia will be held on July 31 through August 4th, 2015 in Brisbane, bringing together professional, student and enthusiast developers with a love for developing with Python. PyCon Australia informs the country’s Python developers with presentations, tutorials and panel sessions by experts and core developers of Python, as well as the libraries and frameworks that they rely on.

To find out more about PyCon Australia 2015, visit our website at http://pycon-au.org or e-mail us at contact@pycon-au.org.

PyCon Australia is presented by Linux Australia (www.linux.org.au) and acknowledges the support of our Platinum Sponsors, Red Hat Asia-Pacific, and Netbox Blue; and our Gold sponsors, The Australian Signals Directorate and Google Australia. For full details of our sponsors, see our website.

Stewart Smith: FreeBSD on OpenPower

Mon, 2015-06-15 08:26

There’s been some work on porting FreeBSD over to run natively on top of OPAL, that is, on bare metal OpenPower machines (not just under KVM).

This is one of four possible things to run natively on an OPAL system:

  1. Linux
  2. hello_world (in skiboot tree)
  3. ppc64le_hello (as I wrote about yesterday)
  4. FreeBSD

It’s great to see that another fully featured OS is getting ported to POWER8 and OPAL. It’s not yet at a stage where you could say it was finished or anything (PCI support is pretty preliminary for example, and fancy things like disks and networking live on PCI).

Sridhar Dhanapalan: Twitter posts: 2015-06-08 to 2015-06-14

Mon, 2015-06-15 00:27

Stewart Smith: hello world as ppc64le OPAL payload!

Sun, 2015-06-14 13:27

While the in-tree hello-world kernel (originally by me; Mikey managed to CUT THE BLOAT from a whole SEVENTEEN instructions down to a tiny ten) is very, very dumb (it does one thing: print “Hello World” to the console), there’s now an alternative for those who like to play with a more feature-rich Hello World rather than booting a more “real” OS such as Linux. In case you’re wondering, we use the hello world kernel as a tiny test that we haven’t completely and utterly broken things when merging/developing code.

https://github.com/andreiw/ppc64le_hello is a wonderful example of a small (INTERACTIVE!) starting point for a PowerNV (as it’s called in Linux) or “bare metal” (i.e. non-virtualised) OS on POWER.

What’s more impressive is that this was all developed using the simulator rather than real hardware (although I think somebody has tried it on some now).

Kind of neat!

Binh Nguyen: The Value of Money - Part 2

Sat, 2015-06-13 16:28
This is obviously a continuation from my last post, http://dtbnguyen.blogspot.com.au/2015/06/repairing-musical-instrumentselectrical.html

No one wants to live from day to day, week to week and for the most part you don't have that when you have a salaried job. You regularly receive a lump sum each fortnight or month from which you draw down to pay for life's expenses.



Over time you actually discover it's an illusion though. A former teacher of mine once said that a salary of about 70-80K wasn't all that much. To kids that seemed like a lot of money though. Now it actually makes a lot more sense. Factor in tax, life expenses, rental, etc... and most of it dries up very quickly.



When you head to business or law school it's the same thing. You regularly deal with millions, billions, and generally gratuitous amounts of money. This doesn't change all that much when you head out into the real world. The real world creates a perception whereby consumption and possession of certain material goods are almost a necessity in order to live and work comfortably within your profession. Ultimately, this means that no matter how much you earn it still doesn't seem like it's enough.



The greatest irony of this is that you only really discover that the perception of the value of such (gratuitous) goods changes drastically if you are on your own or you are building a company.



I semi-regularly receive offers of business/job opportunities through this blog and other avenues (scams as well as real offers. Thankfully, most of the 'fishy ones' are picked up by SPAM filters). The irony is this. I know that no matter how much money is thrown at a business there is still no guarantee of success and a lot of the time savings can dry up in a very short space of time (especially if it is a 'standard business'. Namely, one that doesn't have a crazy level of growth ('real growth' not anticipated or 'projected growth')).



This is particularly the case if specialist 'consultants' (they can charge you a lot of money for what seems like obvious advice) need to be brought in. The thing I'm seeing is that basically a lot of what we sell one another is 'mumbo jumbo'. Stuff that we generally don't need but ultimately convince one another of in order to make a living and perhaps even allow us to do something we enjoy.



What complicates this further is that no matter how much terminology and theory we throw at something, ultimately most people don't value things at the same level. A good example of this is asking random people what the value of a used iPod Classic 160GB is. I remember questioning the value (200) a salesman put on one. He justified the store price by stating that people were selling it for 600-700 on eBay. A struggling student would likely value it at closer to 150. A person in hospitality valued it at 240. The average, knowledgeable community member would perceive (most likely remember) the associated value with the highest mark though.



Shift this back into the workplace and things become even more complicated. Think about the 'perception' of your profession. A short while back I met a sound engineer who made a decent salary (around 80K) but had to work 18 hour days continuously based on his description. His quality of life was ultimately shot and his wage should have obviously been much higher. His perceived value was 80K. His effective value was much lower.



Think about 'perception' once more. Some doctors/specialists who migrate but have the skills to practice but not the money to purchase insurance, re-take certification exams, etc... become taxi drivers in their new country. Their effective value (as a worker) becomes that of a taxi driver, nothing more.



Many skilled professions actually require extended periods of study/training, an apprenticeship of some form, a huge amount of hours put in, or just time spent trying to market your skills. A good chunk of people may end up making a lot of money but most don't. Perceived value is the end salary but actual value is much lower.

Think about 'perception' in IT. In some companies they look down upon you if you work in this particular area. What's interesting is what they use you for. They basically shove more menial tasks downwards into the IT department because 'nobody else wants to do it'. The perceived value of the worker in question doesn't seem much different from that of a labourer.



The irony is that they're often just as well qualified as anybody in the firm in question, and the work can often be varied enough to make you wonder what exactly the actual value of an average IT worker is. I've been trying to do the calculations. The average IT graduate is worth about 55K.

http://www.abs.gov.au/ausstats/abs@.nsf/Lookup/4125.0main+features2320Jan%202013

http://www.payscale.com/research/AU/Job=Graduate_Software_Engineer/Salary

http://www.graduatecareers.com.au/research/researchreports/graduatesalaries/



Assuming he works at an SME (any industry, not just IT) he'll be doing a lot of varied tasks (a lot of firms will tend to pigeonhole you into becoming a specialist). At a lot of service providers and SME firms I've looked at, one hour of downtime equates to about five figures. If you work in the right firm or you end up really good at your job you end up saving your firm somewhere between 5-7 figures each year. At much larger firms this figure is closer to 6-8 figures each year.



At a lot of firms we suffer from hardware failure. The standard procedure is to simply purchase new hardware to deal with the problem (it's quicker and technically free, despite the possible downtime due to diagnosis and response time). The thing I've found out is that if you are actually able to repair/re-design the hardware itself you can actually save/make a lot (particularly with telecommunications and network hardware). This is especially the case if the original design cut corners. Once again savings are similar to the previous point.



In an average firm there may be a perception that IT is simply there to support the function of a business. It's almost like a utility now (think electricity, water, gas, etc... That's how low some companies perceive technology. They perceive it to be a mere cost rather than something that can benefit their business). What a lot of people neglect is how much progress can be made given the use of appropriate technology. Savings/productivity gains are similar to the previous points.



What stops us from realising exactly what our value is, is the siloed nature of the modern business world (specialists rather than generalists a lot of the time), and the fact that various laws, regulations, and so on are designed to stop us from being exploited.



The only way you actually realise what you're worth is if you work as an individual or start a company.



Go ahead, break down what you actually do in your day. You'll be surprised at how much you may actually be worth.



What you ultimately find out though is that (if you're not lazy) you're probably underpaid. The irony is that if the company were to pay you exactly what you were worth they would go bankrupt. Moreover, you only realistically have a small number of chances/opportunities to demonstrate your true worth. A lot of the time jobs are conducted on the basis of intermittency; namely, you're there to do something specialised and difficult every once in a while, not necessarily all the time.



It would be a really interesting world if we didn't have company structures/businesses. I keep on finding out over and over again that you simply get paid more for more skills as an individual. This is especially the case if there is no artificial barrier between you and the getting the job done. The work mightn't be stable but once you deal with that you have a very different perspective of the world even if it's only a part time job.



If you have some talent, I'd suggest you try starting your own company or work as an individual at some point in your life. The obvious problem will be coming up with an idea which will create money though. Don't worry about it. You will find opportunities along the way as you gain more life experience and understand where value comes from. At that point, start doing the numbers and do a few tests to see whether your business instincts are correct. You may be surprised at what you end up finding out.

http://forums.whirlpool.net.au/archive/1505450



Here are other things I've worked out:

  • if you need a massive and complex business plan in order to justify your business's existence (particularly to investors) then you should rethink your business
  • if you need to 'spin things' or else have a bloated marketing department then there's likely nothing much special about the product or service that you are selling
  • if your business is fairly complex at a small level, think about what it will be like when it scales up. Try to remove as many obstacles as you can while your company is still young, to ensure future success if unexpected growth comes your way
  • if you narrow yourself to one particular field you can limit your opportunities. In the normal world it can lead to stagnation (no real change in salary/value) or specialisation (guaranteed increase in salary/value), though neither is a given. In smaller companies multiple roles may be critical to the survival/profitability of that particular company. The obvious risk is that if such a person leaves you're trying to fill in for multiple roles
  • a lot of goods and services exist in a one to one relationship. You can only sell it once and you have to maximise the profit on that. Through the use of broadcast style technologies we can achieve one to many relationships, allowing us to build substantial wealth easily and quickly. This makes valuation of technology companies much more difficult. However, once you factor in overheads and the risk of success versus failure, things tend to normalise
  • perception means a lot. Think about a pair of Nike runners versus standard supermarket branded ones. There is sometimes very little difference in quality though the price of the Nike runners may be double. The same goes for some of the major fashion labels. They are sometimes produced en-masse in cheap Asian/African countries
  • if there are individuals and companies offering the opportunity to engage in solid business ventures, take them. Your perspective on life and lifestyle will change drastically if things turn out successfully
  • in reality, there are very few businesses where you can genuinely say the future looks bright for all of eternity. This is the same across every single sector
  • make friends with everyone. You'll be surprised at what you can learn and what opportunities you may be able to find
  • the meaning of 'market value' largely dissolves into nothingness in the real world. Managing perception accounts a good deal for what you can charge for something
  • just like investments the value of a good or service will normalise over time. You need volatility (this can be achieved via any means) to be able to make abnormal profits though
  • for companies where goods and services have high overheads, 7-8 figures a week/month/year can mean nothing. If the overheads are high enough it's possible that the company may go under in a very short space of time. Find something which doesn't and focus in on that, whether it be a primary or side business
  • the more you know the better off you'll be, if you're willing to take calculated risks, are patient, and persevere. Most of the time things will normalise
  • in general, the community perception is that making more with high expenses is more successful than making less with no expenses
  • comments from people like Joe Hockey make a lot of sense to those who have had a relatively privileged background but they also go to the core of the matter. There are a lot of impediments in life now. I once recall walking past a begging 'aboriginal'. A white middle-upper class man simply admonished him to get a job. If you've ever worked with people like that or you've ever factored in his background you'll realise that this is almost impossible. Everybody has a go at people who work within the 'cash economy' and do not contribute to the tax base of the country but it's easy to understand a lot of why people do it. There are a lot of impediments in life despite whatever anyone says whether you're working at the top or bottom end of the scale
http://forums.whirlpool.net.au/archive/1937638

http://www.abc.net.au/news/2015-06-10/janda-its-not-hockeys-job-comment-that-should-worry-us/6535484

http://www.smh.com.au/comment/smh-letters/joe-hockey-doesnt-grasp-simple-economics-20150610-ghkl9v.html

http://www.bbc.co.uk/news/education-33109052

  • throw in some weirdness like strange pay for seemingly unskilled jobs and everything looks bizarre. A good example of this is a nightfill worker (stock stacker) at a supermarket in Australia. He can actually earn a lot more than those in skilled professions. It's not just about skills or knowledge when it comes to earning a high wage
http://forums.whirlpool.net.au/archive/2219972

http://forums.whirlpool.net.au/archive/1937638
  • there are a lot of overqualified people out there (but there are a hell of a lot more underqualified people out there as well. I've worked both sides of the equation). If you are lucky someone will give you a chance at something appropriate to your level but a lot of the time you'll just have to make do
  • you may be shocked at how, who, and what makes money and vice-versa (how, who, and what doesn't make money). For instance, something which you can get for free you can sell while some products/services which have had a lot of effort put into them may not get any sales
https://www.ozbargain.com.au/node/197991

  • there are very few companies that you could genuinely say are 100% technology orientated. Even in companies that are supposedly technology orientated there are still political issues that you must deal with
  • by using certain mechanisms you can stop resales of your products/services, which can force purchases only through known avenues. This is a common strategy in the music industry with MIDI controllers, and stops erosion/cannibalisation of sales of new product through minimisation of sales of used products
  • it's easy to be impressed by people who are simply quoting numbers. Do your research. People commonly quote high growth figures but in reality most aren't as impressive as they seem. They seem even less impressive when you factor in inflation, Quantitative Easing programs, etc... In a lot of cases companies/industries (even many countries if you think about it) would actually be at a standstill or else going backwards.
http://www.inc.com/sageworks/the-15-most-profitable-industries-for-private-companies.html

https://biz.yahoo.com/p/sum_qpmd.html

http://www.forbes.com/sites/sageworks/2013/04/28/the-most-profitable-businesses-to-start/ http://www.forbes.com/sites/sageworks/2014/08/31/the-least-profitable-businesses-in-the-u-s/

http://www.businessinsider.com/sector-profit-margins-sp-500-2012-8



http://www.tradingeconomics.com/country-list/inflation-rate

https://en.wikipedia.org/wiki/List_of_countries_by_inflation_rate

http://data.worldbank.org/indicator/FP.CPI.TOTL.ZG

https://en.wikipedia.org/wiki/Quantitative_easing



http://data.worldbank.org/indicator/NY.GDP.MKTP.KD.ZG

https://en.wikipedia.org/wiki/List_of_countries_by_real_GDP_growth_rate

Simon Lyall: Feeds I follow: Citylab, Commitstrip, MKBHD, Offsetting Bahaviour

Sat, 2015-06-13 15:28

I thought I’d list of the feeds/blogs/sites I currently follow. Mostly I do this via RSS using Newsblur.


James Bromberger: Logical Volume Management with Debian on Amazon EC2

Sat, 2015-06-13 00:27

The recent AWS introduction of the Elastic File System gives you an automatic grow-and-shrink capability as an NFS mount, an exciting option that takes away the previous overhead in creating shared block file systems for EC2 instances.

However it should be noted that the same auto-management of capacity is not true of the EC2 instance’s Elastic Block Store (EBS) block storage disks; sizing (and resizing) is left to the customer. As at June 2015, an EBS volume, once created, has a fixed size; one cannot simply increase it as the storage becomes full. For many applications, that lack of a resize function on local EBS disks is not a problem; many server instances come into existence for a brief period, process some data and then get Terminated, so long term management is not needed.

However, for a long term data store on an instance (instead of S3, which I would recommend looking at closely for its durability and pricing fit), and where I want the capacity to grow (or shrink) the disk for my data, I will need to leverage some slightly more advanced disk management. And just to make life interesting, I wish to do all this while the data is live and in-use, if possible.

Enter: Logical Volume Management, or LVM. It’s been around for a long, long time: LVM 2 made a debut around 2002-2003 (2.00.09 was Mar 2004) — and LVM 1 was many years before that — so it’s pretty mature now. It’s a powerful layer that sits between your raw storage block devices (as seen by the operating system), and the partitions and file systems you would normally put on them.

In this post, I’ll walk through the process of getting set up with LVM on Debian in the AWS EC2 environment, and how you’d do some basic maintenance to add and remove (where possible) storage with minimal interruption.

Getting Started

First a little prep work for a new Debian instance with LVM.

As I’d like to give the instance its own ability to manage its storage, I’ll want to provision an IAM Role for EC2 Instances for this host. In the AWS console, visit IAM, Roles, and I’ll create a new Role I’ll name EC2-MyServer (or similar), and at this point I’ll skip giving it any actual privileges (later we’ll update this). As at this date, we can only associate an instance role/profile at instance launch time.

Now I launch a base Debian EC2 instance with this IAM Role/Profile; the root file system is an EBS Volume. I am going to put the data that I’ll be managing on a separate disk from the root file system.

First, I need to get the LVM utilities installed. It’s a simple package to install: the lvm2 package. From my EC2 instance I need to get root privileges (sudo -i) and run:

apt update && apt install lvm2

After a few moments, the package is installed. I’ll choose a location that I want my data to live in, such as /opt/.  I want a separate disk for this task for a number of reasons:

  1. Root EBS volumes cannot currently be encrypted using Amazon’s Encrypted EBS Volumes. If I want to also use AWS’ encryption option, it’ll have to be on a non-root disk. Note that instance-size restrictions also exist for EBS Encrypted Volumes.
  2. It’s possibly not worth making a snapshot of the Operating System at the same time as the user content data I am saving. The OS install (except the /etc/ folder) can almost entirely be recreated from a fresh install, so why snapshot that as well (unless that’s your strategy for preserving /etc, /home, etc.)?
  3. The type of EBS volume that you require may be different for different data: today (Apr 2015) there is a choice of Magnetic, General Purpose 2 (GP2) SSD, and Provisioned IO/s (PIOPS) SSD, each with different costs; depending on our needs, we may want one type for our root volume (operating system), and something else for our data storage.
  4. I may want to use EBS snapshots to clone the disk to another host, without the base OS bundled in with the data I am cloning.

I will create this extra volume in the AWS console and present it to this host. I’ll start by using a web browser with the EC2 console (we’ll use the CLI later).

The first piece of information we need to know is where my EC2 instance is running. Specifically, the AWS Region and Availability Zone (AZ). EBS Volumes only exist within the one designated AZ. If I accidentally make the volume(s) in the wrong AZ, then I won’t be able to connect them to my instance. It’s not a huge issue, as I would just delete the volume and try again.

I navigate to the “Instances” panel of the EC2 Console, and find my instance in the list:

A (redacted) list of instances from the EC2 console.

Here I can see I have located an instance and it’s running in US-East-1A: that’s AZ A in Region US-East-1. I can also grab this with a wget from my running Debian instance by asking the MetaData server:

wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone

The returned text is simply: “us-east-1a”.

Time to navigate to “Elastic Block Store“, choose “Volumes” and click “Create“:

Creating a volume in AWS EC2: ensure the AZ is the same as your instance

You’ll see I selected that I wanted AWS to encrypt this volume; as noted above, instance-size restrictions apply, and at this time that doesn’t include the t2 family. However, you have an option of using encryption with LVM – where the customer looks after the encryption key – see LUKS.

What’s nice is that I can do both — have AWS Encrypted Volumes, and then use LUKS encryption on top of that; but I have to manage my own keys with LUKS, and should I lose them, all I’m left with is the ciphertext!

I deselected this for my example (with a t2.micro), and continued; I could see the new volume in the list as “creating”, and then shortly afterwards as “available”. Time to attach it: select the disk, and either right-click and choose “Attach“, or from the menu at the top of the list, choose “Actions” -> “Attach” (both do the same thing).

Attaching a volume to an instance: you’ll be prompted for the compatible instances in the same AZ.

At this point your EC2 instance will notice the new disk; you can confirm this with “dmesg | tail“, and you’ll see something like:

[1994151.231815]  xvdg: unknown partition table

(Note the time-stamp in square brackets will be different).

Previously at this juncture you would format the entire disk with your favourite file system, mount it in the desired location, and be done. But we’re adding in LVM here – between this “raw” device and the filesystem we are yet to make.

Marking the block device for LVM

Our first operation with LVM is to put a marker on the volume to indicate it’s being used for LVM – so that when we scan the block device, we know what it’s for. It’s a really simple command:

pvcreate /dev/xvdg

The device name above (/dev/xvdg) should correspond to the one we saw from the dmesg output above. The output of the above is rather straightforward:

  Physical volume "/dev/xvdg" successfully created

Checking our EBS Volume

We can check on the EBS volume – which LVM sees as a Physical Volume – using the “pvs” command.

# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/xvdg       lvm2 ---  5.00g 5.00g

Here we see the entire disk is currently unused.

Creating our First Volume Group

Next step, we need to make an initial LVM Volume Group which will use our Physical volume (xvdg). The Volume Group will then contain one (or more) Logical Volumes that we’ll format and use. Again, a simple command to create a volume group by giving it its first physical device that it will use:

# vgcreate OptVG /dev/xvdg
  Volume group "OptVG" successfully created

And likewise we can check our set of Volume Groups with “vgs”:

# vgs
  VG    #PV #LV #SN Attr   VSize VFree
  OptVG   1   0   0 wz--n- 5.00g 5.00g

The Attribute flags here indicate this is writable, resizable, and allocating extents in “normal” mode. Let’s proceed to make our (first) Logical Volume in this Volume Group:

# lvcreate -n OptLV -L 4.9G OptVG
  Rounding up size to full physical extent 4.90 GiB
  Logical volume "OptLV" created

You’ll note that I have created our Logical Volume at almost the same size as the entire Volume Group (which is currently one disk), but I left some space unused: the reason comes down to keeping some space available for operations that LVM may need to perform on the disk – this will be used later, when we want to move data between raw disk devices.

If I wanted to use LVM snapshots, I’d want to leave even more space free (unallocated).
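
As an aside, a snapshot consumes those free extents. Purely as an illustrative sketch (the OptLVsnap name and /mnt/snap mount point are mine, not part of this setup), a copy-on-write snapshot for a consistent backup might look like:

lvcreate -s -n OptLVsnap -L 512M /dev/OptVG/OptLV    # snapshot LV carved from free extents
mkdir -p /mnt/snap
mount -o ro /dev/OptVG/OptLVsnap /mnt/snap           # read-only view of OptLV at snapshot time
# ... run your backup against /mnt/snap ...
umount /mnt/snap
lvremove -f /dev/OptVG/OptLVsnap                     # drop the snapshot when done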

We can check on our Logical Volume:

# lvs
  LV    VG    Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  OptLV OptVG -wi-a----- 4.90g

The attributes indicate that the Logical Volume is writeable, is allocating its data to the disk in inherit mode (i.e., as the Volume Group is doing), and that it is active. At this stage you may also discover we have a device /dev/OptVG/OptLV, and this is what we’re going to format and mount. But before we do, we should review what file system we’ll use.

Filesystems

Popular Linux file systems:

Name   Shrink      Grow  Journal  Max File Sz  Max Vol Sz
btrfs  Y           Y     N        16 EB        16 EB
ext3   Y off-line  Y     Y        2 TB         32 TB
ext4   Y off-line  Y     Y        16 TB        1 EB
xfs    N           Y     Y        8 EB         8 EB
zfs*   N           Y     Y        16 EB        256 ZB

For more details see the Wikipedia comparison. Note that ZFS requires a third-party kernel module or a FUSE layer, so I’ll discount that here. BTRFS only went stable with Linux kernel 3.10, so with Debian Jessie it’s a possibility; but for tried and trusted, I’ll use ext4.

The selection of ext4 also means that I’ll only be able to shrink this file system off-line (unmounted).
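
For completeness, here is a hedged sketch of what that off-line shrink would look like later on (sizes are illustrative only; the filesystem must be shrunk before the logical volume, and a forced e2fsck comes first):

umount /opt
e2fsck -f /dev/OptVG/OptLV            # forced check, required before shrinking
resize2fs /dev/OptVG/OptLV 3G         # shrink the filesystem first...
lvreduce -L 3G /dev/OptVG/OptLV       # ...then the logical volume to match
mount /dev/OptVG/OptLV /opt

Get that order wrong and you truncate the filesystem, so double-check the sizes before running lvreduce.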

I’ll make the filesystem:

# mkfs.ext4 /dev/OptVG/OptLV
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 1285120 4k blocks and 321280 inodes
Filesystem UUID: 4f831d17-2b80-495f-8113-580bd74389dd
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

And now mount this volume and check it out:

# mount /dev/OptVG/OptLV /opt/
# df -HT /opt
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/OptVG-OptLV ext4  5.1G   11M  4.8G   1% /opt

Lastly, we want this to be mounted next time we reboot, so edit /etc/fstab and add the line:

/dev/OptVG/OptLV /opt ext4 noatime,nodiratime 0 0

With this in place, we can now start using this disk. The noatime and nodiratime options mean the filesystem is not updated every time a file or folder is merely read – writes are journalled as normal, but access-time updates are simply skipped.
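
To sanity-check the fstab entry without rebooting, something like this should do (findmnt is part of util-linux on Debian):

umount /opt && mount -a    # re-mount using the fstab entry just added
findmnt /opt               # confirm the source device and mount options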

Time to expand

After some time, our 5 GB /opt/ disk is rather full, and we need to make it bigger, but we wish to do so without any downtime. Amazon EBS doesn’t support resizing volumes, so our strategy is to add a new larger volume, and remove the older one that no longer suits us; LVM and ext4’s online resize ability will allow us to do this transparently.

For this example, we’ll decide that we want a 10 GB volume. It can be a different type of EBS volume to our original – we’re going to online-migrate all our data from one to the other.

As when we created the original 5 GB EBS volume above, create a new one in the same AZ and attach it to the host (perhaps as /dev/xvdh this time). We can check the new volume is visible with dmesg again:

[1999786.341602]  xvdh: unknown partition table

And now we initialise this as a Physical Volume for LVM:

# pvcreate /dev/xvdh
  Physical volume "/dev/xvdh" successfully created

And then add this disk to our existing OptVG Volume Group:

# vgextend OptVG /dev/xvdh
  Volume group "OptVG" successfully extended

We can now review our Volume group with vgs, and see our physical volumes with pvs:

# vgs
  VG    #PV #LV #SN Attr   VSize  VFree
  OptVG   2   1   0 wz--n- 14.99g 10.09g
# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/xvdg  OptVG lvm2 a--   5.00g 96.00m
  /dev/xvdh  OptVG lvm2 a--  10.00g 10.00g

There are now 2 Physical Volumes – we have a 4.9 GB filesystem taking up space, so 10.09 GB of unallocated space in the VG.

Now it’s time to stop using the /dev/xvdg volume for any new requests:

# pvchange -x n /dev/xvdg
  Physical volume "/dev/xvdg" changed
  1 physical volume changed / 0 physical volumes not changed

At this time, our existing data is on the old disk, and any new data is being written to the new one. It’s now that I’d recommend running GNU screen (or similar) so you can detach from this shell session and reconnect, as the process of migrating the existing data can take some time (hours for large volumes):

# pvmove /dev/xvdg /dev/xvdh
  /dev/xvdg: Moved: 0.1%
  /dev/xvdg: Moved: 8.6%
  /dev/xvdg: Moved: 17.1%
  /dev/xvdg: Moved: 25.7%
  /dev/xvdg: Moved: 34.2%
  /dev/xvdg: Moved: 42.5%
  /dev/xvdg: Moved: 51.2%
  /dev/xvdg: Moved: 59.7%
  /dev/xvdg: Moved: 68.0%
  /dev/xvdg: Moved: 76.4%
  /dev/xvdg: Moved: 84.7%
  /dev/xvdg: Moved: 93.3%
  /dev/xvdg: Moved: 100.0%

During the move, checking the Monitoring tab in the AWS EC2 Console for the two volumes should show one with a large data Read metric, and one with a large data Write metric – clearly data should be flowing off the old disk, and on to the new.

A note on disk throughput

The above move was of a pretty small, mostly empty volume. Larger disks will take longer, naturally, so getting some speed out of the process may be key. There are a few things we can do to tweak this:

  • EBS Optimised: a launch-time option that reserves network throughput from certain instance types back to the EBS service within the AZ. Depending on the size of the instance this is 500 Mbps up to 4 Gbps. Note that for the c4 family of instances, EBS Optimised is on by default.
  • Size of GP2 disk: the larger the disk, the longer it can sustain high IO throughput – but read this for details.
  • Size and speed of PIOPs disk: if consistent high IO is required, then moving to Provisioned IO disk may be useful. Looking at the two-week history of CloudWatch metrics for the old volume will give me some idea of the duty cycle of the disk IO.
Back to the move…

Upon completion I can see that the disk in use is the new disk and not the old one, using pvs again:

# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/xvdg  OptVG lvm2 ---   5.00g 5.00g
  /dev/xvdh  OptVG lvm2 a--  10.00g 5.09g

So all 5 GB is now unused (compare to above, where only 96 MB was PFree). With that disk not containing data, I can tell LVM to remove the disk from the Volume Group:

# vgreduce OptVG /dev/xvdg
  Removed "/dev/xvdg" from volume group "OptVG"

Then I cleanly wipe the labels from the volume:

# pvremove /dev/xvdg
  Labels on physical volume "/dev/xvdg" successfully wiped

If I really want to clean the disk, I could choose to use shred(1) on the device to overwrite it with random data. This can take a long time.
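
As a sketch, one random-data pass is usually enough for this purpose (add a final zero pass with -z to taste):

shred -v -n 1 /dev/xvdg    # -v shows progress; -n 1 is a single random-data pass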

Now that the disk is completely unused and disassociated from the VG, I can return to the AWS EC2 Console, and detach the disk:

Detach an EBS volume from an EC2 instance

Wait for a few seconds, and the disk is then shown as “available“; I then chose to delete the disk in the EC2 console (and stop paying for it).

Back to the Logical Volume – it’s still 4.9 GB, so I add 4.5 GB to it:

# lvresize -L +4.5G /dev/OptVG/OptLV
  Size of logical volume OptVG/OptLV changed from 4.90 GiB (1255 extents) to 9.40 GiB (2407 extents).
  Logical volume OptLV successfully resized

We now have 0.6GB free space on the physical volume (pvs confirms this).

Finally, it’s time to expand our ext4 file system:

# resize2fs /dev/OptVG/OptLV
resize2fs 1.42.12 (29-Aug-2014)
Filesystem at /dev/OptVG/OptLV is mounted on /opt; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/OptVG/OptLV is now 2464768 (4k) blocks long.

And with df we can now see:

# df -HT /opt/
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/OptVG-OptLV ext4  9.9G   12M  9.4G   1% /opt
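
To recap, the whole online expansion boils down to this sequence – a sketch using the device names from the example above, so substitute your own:

pvcreate /dev/xvdh                  # label the new, larger EBS volume for LVM
vgextend OptVG /dev/xvdh            # add it to the existing volume group
pvchange -x n /dev/xvdg             # stop new allocations to the old disk
pvmove /dev/xvdg /dev/xvdh          # migrate extents online (can take hours)
vgreduce OptVG /dev/xvdg            # remove the old disk from the volume group
pvremove /dev/xvdg                  # wipe its LVM label
lvresize -L +4.5G /dev/OptVG/OptLV  # grow the logical volume into the new space
resize2fs /dev/OptVG/OptLV          # grow ext4 online to fill the logical volume

Automating this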

The IAM Role I made at the beginning of this post is now going to be useful. I’ll start by adding an IAM Policy to the Role to permit me to List Volumes, Create Volumes, Attach Volumes and Detach Volumes for my instance-id. Let’s start with creating a volume, with a policy like this:

{   "Version": "2012-10-17",   "Statement": [     {       "Sid": "CreateNewVolumes",       "Action": "ec2:CreateVolume",       "Effect": "Allow",       "Resource": "*",       "Condition": {         "StringEquals": {           "ec2:AvailabilityZone": "us-east-1a",           "ec2:VolumeType": "gp2"         },         "NumericLessThanEquals": {           "ec2:VolumeSize": "250"         }       }     }   ] }

This policy puts some restrictions on the volumes that this instance can create: only within the given Availability Zone (matching our instance), only GP2 SSD (no PIOPs volumes), and size no more than 250 GB. I’ll add another policy to permit this instance role to tag volumes in this AZ that don’t yet have a tag called InstanceId:

{   "Version": "2012-10-17",   "Statement": [     {       "Sid": "TagUntaggedVolumeWithInstanceId",       "Action": [         "ec2:CreateTags"       ],       "Effect": "Allow",       "Resource": "arn:aws:ec2:us-east-1:1234567890:volume/*",       "Condition": {         "Null": {           "ec2:ResourceTag/InstanceId": "true"         }       }     }   ] }

Now that I can create (and then tag) volumes, this becomes a simple procedure as to what else I can do to this volume. Deleting and creating snapshots of this volume are two obvious options, and the corresponding policy:

{   "Version": "2012-10-17",   "Statement": [     {       "Sid": "CreateDeleteSnapshots-DeleteVolume-DescribeModifyVolume",       "Action": [         "ec2:CreateSnapshot",         "ec2:DeleteSnapshot",         "ec2:DeleteVolume",         "ec2:DescribeSnapshotAttribute",         "ec2:DescribeVolumeAttribute",         "ec2:DescribeVolumeStatus",         "ec2:ModifyVolumeAttribute"       ],       "Effect": "Allow",       "Resource": "*",       "Condition": {         "StringEquals": {           "ec2:ResourceTag/InstanceId": "i-123456"         }       }     }   ] }

Of course it would be lovely if I could use a variable inside the policy condition instead of the literal string of the instance ID, but that’s not currently possible.

Clearly some of the more important actions I want to take are to attach and detach a volume to my instance:

{ "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1434114682836", "Action": [ "ec2:AttachVolume" ], "Effect": "Allow", "Resource": "arn:aws:ec2:us-east-1:123456789:volume/*", "Condition": { "StringEquals": { "ec2:ResourceTag/InstanceID": "i-123456" } } }, { "Sid": "Stmt1434114745717", "Action": [ "ec2:AttachVolume" ], "Effect": "Allow", "Resource": "arn:aws:ec2:us-east-1:123456789:instance/i-123456" } ] }

Now with this in place, we can start to fire up the AWS CLI we spoke of. We’ll let the CLI inherit its credentials from the IAM Instance Role and the policies we just defined.

AZ=`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone`
Region=`echo ${AZ} | rev | cut -c 2- | rev`
InstanceId=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`
VolumeId=`aws ec2 --region ${Region} create-volume --availability-zone ${AZ} --volume-type gp2 --size 1 --query "VolumeId" --output text`
aws ec2 --region ${Region} create-tags --resources ${VolumeId} --tags Key=InstanceId,Value=${InstanceId}
aws ec2 --region ${Region} attach-volume --volume-id ${VolumeId} --instance-id ${InstanceId} --device /dev/sdg

(Note that attach-volume requires a --device argument; a device named /dev/sdg will typically appear inside the instance as /dev/xvdg.)

…and at this stage, the above manipulation of the raw block device with LVM can begin. Likewise you can then use the CLI to detach and destroy any unwanted volumes if you are migrating off old block devices.
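
For the clean-up end of a migration, a hedged sketch of detaching and deleting the retired volume from the instance itself (OldVolumeId is a placeholder, and the role would also need ec2:DetachVolume rights, along the lines of the attach policy above):

OldVolumeId=vol-0123456789abcdef0    # placeholder: the volume just removed from the VG
aws ec2 --region ${Region} detach-volume --volume-id ${OldVolumeId}
aws ec2 --region ${Region} wait volume-available --volume-ids ${OldVolumeId}    # block until detached
aws ec2 --region ${Region} delete-volume --volume-id ${OldVolumeId}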