Planet Linux Australia


Binh Nguyen: Diplomacy, Russia Vs USA, and More

Mon, 2016-09-05 03:20
Over and over again, the US and Russia (and other countries) seem to get in each other's way. The irony is that while many out there believe they can come to an 'agreement' of sorts, the more I look the more difficult I find this proposition. First, let's introduce ourselves to modern diplomacy through some videos...

Ben Martin: Houndbot rolling stock upgrade

Sun, 2016-09-04 17:16
After getting Terry the robot to navigate around inside with multiple Kinects as depth sensors, I have now turned my attention to outdoor navigation using two cameras as sensors. The cameras are from a PS4 Eye, which I hacked to be able to connect to a normal machine. The robot originally used 5.4 inch wheels which were run with foam inside them. This sort of arrangement can be seen in many builds in the Radio Controlled (RC) world and worked well when the robot was simple and fairly light. Now that it is well over 10kg the same RC style build doesn't necessarily still work. Foam compresses a bit too easily.

I have upgraded to 12 inch wheels with air tube tires. This jump seemed a bit risky: would the new setup overwhelm the robot? Once I had modified the wheels and come up with an initial mounting scheme to test, I think the 12 inch wheels are closer to what the robot naturally wants. This should boost the maximum speed of the machine to around 20km/h, which is probably as much as you might want on something autonomous. For example, if your robot can outrun you, things get interesting.

I had to get the wheels attached in order to work out clearances for the suspension upgrade. While the original suspension worked great for a robot that you only add 1-2kg to, with an ITX case, two batteries, a fused power supply and so on, things seem to have added up to too much weight for the springs to counter.

I now have some new small 'coil overs' in hand which are taken from mini mountain bike suspension. They are too stiff for what I am doing, at around 600lb/inch compression. I have in mind some places that sell coil overs in between the RC ones and the push bike ones, which I may end up using, and with slightly more travel distance.

As the photo reveals, I don't actually have the new suspension attached yet. I'm thinking about a setup based around two bearing mounts from Sparkfun. I'd order from ServoCity but sfe has cheaper international shipping :o Anyway, two bearing mounts at the top, two at the bottom, and a steel shaft that is 8mm in the middle and 1/4 inch (6.35mm) on the ends. Creating the shafts like that, with the 8mm part just the right length, will trap the shaft between the two bearing mounts for me. I might tack weld on either side of the coil over mounts so there is no side to side movement of the suspension.

Yes, hubs and clamping collars were my first thought for the build and would be nice, but a reasonable result for a manageable price is also a factor.

Danielle Madeley: Websockets + Socket.IO on the ESP8266 w/ Micropython

Sun, 2016-09-04 02:01

I recently learned about the ESP8266 while at PyCon AU. It’s pretty nifty: it’s tiny, it has wifi, a reasonable amount of RAM (for a microcontroller), and, oh, it can run Python. Specifically Micropython. Anyway, I purchased a couple from Adafruit (specifically this one) and installed the Micropython UNIX port on my computer (be aware that the cheaper ESP8266 boards might not be very reflashable, or so I’ve been told; spend the extra money for one with decent flash).

The first thing you learn is that the ports are all surprisingly different in terms of what functionality they support, and the docs don’t make it clear like they do for CPython. I learned the hard way that there is a set of docs per port, which maybe is why the method you’re looking for isn’t there.

The other thing is that even though you’re getting to write in Python, and it has many Pythonic abstractions, many of those abstractions are based around POSIX and leak heavily on microcontrollers. Still, a number of them look implementable without actually reinventing UNIX (probably).

The biggest problem at the moment is there’s no “platform independent” way to do asynchronous IO. On the microcontroller you can set top-half interrupt handlers for IO events (no malloc here, yay!), gate the CPU, and then execute bottom halves from the main loop. However, that’s not going to work on UNIX. Or you can use select, but that’s not available on the ESP8266 (yet). Micropython does support Python 3.5 asyncio coroutines, so hopefully the port of asyncio to the ESP8266 happens soon. I’d be especially ecstatic if I could do await pin.trigger(Pin.FALLING).

There’s a few other things that could really help make it feel like Python. Why isn’t disabling interrupts a context manager/decorator? It’s great that you can try/finally your interrupt code, but the with keyword is so much more Pythonic. Perhaps this is because the code is being written by microprocessor people… which is why they’re so into protocols like MQTT for talking to their devices.
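
A minimal sketch of the context manager I mean, assuming only machine.disable_irq() and enable_irq() from Micropython's machine module (shared_buffer and sample in the usage comment are hypothetical):

from machine import disable_irq, enable_irq

class irqs_disabled:
    """Disable interrupts for the duration of a with-block."""
    def __enter__(self):
        self.state = disable_irq()  # save the current IRQ state
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        enable_irq(self.state)      # restore it, even if the body raised
        return False                # don't swallow exceptions

# Usage:
# with irqs_disabled():
#     shared_buffer.append(sample)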

Don’t get me wrong, MQTT is a great protocol that you can cram onto all sorts of devices, with all sorts of crappy PHYs, but I have wifi, and working SSL. I want to do something more web 2.0. Something like websockets. In fact, I want to take another service’s REST API and websockets, and deliver that information to my device… I could build an HTTP server + MQTT broker, but that sounds like a pain. Maybe I can just build a web server with Socket.IO and connect to that directly from the device?!

The ESP8266 already has some very basic websocket support for its WebREPL, but that’s not very featureful and seems to only implement half of the spec. If we’re going to have Python on a device, maybe we can have something that looks like the great websockets module. Turns out we can! Socket.IO is a little harder: it requires a handshake which is not documented (I reversed it in the end), and decoding an HTTP payload, which is not very clearly documented (I had to read the source). It’s not the most efficient protocol out there, but the chip is more than fast enough to deal with it. Also, fun times: it turns out there’s no platform independent way to return from waiting for IO. Basically it turned out there were a lot of yaks to shave.
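
The plain websockets handshake, at least, is well specified: RFC 6455's accept key is just SHA-1 plus base64. A CPython-flavoured sketch of that computation (on the chip itself you would probably reach for uhashlib/ubinascii instead), not the actual module code:

import base64
import hashlib

WS_GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11'  # fixed by RFC 6455

def ws_accept(sec_websocket_key):
    # The server replies with SHA-1(key + GUID), base64-encoded
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# RFC 6455's own example key gives 's3pPLMBiTxaQ9kYGzzhZRbK+xOo=':
# ws_accept('dGhlIHNhbXBsZSBub25jZQ==')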

Where it all comes into its own, though, is the ability to write what is pretty much everyday, beautiful Python; it’s worth it over Arduino sketches or whatever else takes your fancy.

uwebsockets/usocketio on Github.
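
As a taste, a hedged sketch of what client code can look like, assuming the uwebsockets API mirrors the CPython websockets module it imitates (the echo URI is only an example):

import uwebsockets.client

websocket = uwebsockets.client.connect('ws://echo.websocket.org/')
websocket.send('Hello from the ESP8266')
print(websocket.recv())  # blocks until the echo frame arrives
websocket.close()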

Linux Users of Victoria (LUV) Announce: LUV Main September 2016 Meeting: Spartan / The Future is Awesome

Sun, 2016-09-04 00:03
Start: Sep 6 2016 18:30
End: Sep 6 2016 20:30
Location:

6th Floor, 200 Victoria St. Carlton VIC 3053



  • Lev Lafayette, Spartan: A Linux HPC/Cloud Hybrid
  • Paul Fenwick, The Future is Awesome (and what you can do about it)


Late arrivals, please call (0490) 049 589 for access to the venue.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat and Infoxchange for their help in obtaining the meeting venues.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Chris Smart: Configuring QEMU bridge helper after “access denied by acl file” error

Wed, 2016-08-31 22:03

QEMU has a neat bridge-helper utility which allows a non-root user to easily connect a virtual machine to a bridged interface. In Fedora at least, qemu-bridge-helper is setuid root, and on startup it immediately drops its privileges down to the cap_net_admin capability alone. It also has a simple white/blacklist ACL mechanism in place which limits connections to virbr0, libvirt’s local area network.

That’s all great, but often you actually want a guest to be a part of your real network. This means it must connect to a bridged interface (often br0) on a physical network device.

If your user tries to kick up such a QEMU guest while specifying bridge,br=br0, with something like this (although probably also with a disk, or a kernel and initramfs):

qemu-system-x86_64 \
-machine accel=kvm \
-cpu host \
-netdev bridge,br=br0,id=net0 \
-device virtio-net-pci,netdev=net0

You may run into the following error:

access denied by acl file
qemu-system-ppc64: -netdev bridge,br=br0,id=net0: bridge helper failed

As mentioned above, this is the QEMU bridge config file /etc/qemu/bridge.conf restricting bridged interfaces to virbr0 for all users by default. So how to make this work more nicely?

One way is to simply edit the main config file and change virbr0 to all, however that’s not particularly fine-grained.

Instead, we could create a new config file for the user which specifies any (or all) bridge devices that this user is permitted to connect guests to. This way all other users are restricted to virbr0 while your user can connect to other bridges.

This doesn’t have to be a user; it could also be a group (just substitute the group name for ${USER}, below), and you can also add multiple files.

Instead of allow you can use deny in the same way to prevent a user or group from attaching to any or all bridges.

So, let’s create a new file for our user and give them access to all interfaces (requires sudo):

echo "allow all" | sudo tee /etc/qemu/${USER}.conf
echo "include /etc/qemu/${USER}.conf" | sudo tee --append /etc/qemu/bridge.conf
sudo chown root:${USER} /etc/qemu/${USER}.conf
sudo chmod 640 /etc/qemu/${USER}.conf
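
Assuming the stock default in bridge.conf and a user named alice (a stand-in for ${USER}), the files end up looking like this:

# /etc/qemu/bridge.conf
allow virbr0
include /etc/qemu/alice.conf

# /etc/qemu/alice.conf
allow all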

This user should now be able to successfully kick up the guest connected to br0.
qemu-system-x86_64 \
-machine accel=kvm \
-cpu host \
-netdev bridge,br=br0,id=net0 \
-device virtio-net-pci,netdev=net0

Russell Coker: Monitoring of Monitoring

Mon, 2016-08-29 14:02

I was recently asked to get data from a computer that controlled security cameras after a crime had been committed. Due to the potential issues I refused to collect the computer and insisted on performing the work at the office of the company in question. Hard drives are vulnerable to damage from vibration and there is always a risk involved in moving hard drives or systems containing them. A hard drive with evidence of a crime provides additional potential complications. So I wanted to stay within view of the man who commissioned the work just so there could be no misunderstanding.

The system had a single IDE disk. The fact that it had an IDE disk is an indication of the age of the system. One of the benefits of SATA over IDE is that swapping disks is much easier, SATA is designed for hot-swap and even systems that don’t support hot-swap will have less risk of mechanical damage when changing disks if SATA is used instead of IDE. For an appliance type system where a disk might be expected to be changed by someone who’s not a sysadmin SATA provides more benefits over IDE than for some other use cases.

I connected the IDE disk to a USB-IDE device so I could read it from my laptop. But the disk just made repeated buzzing sounds while failing to spin up. This is an indication that the drive was probably experiencing “stiction” which is where the heads stick to the platters and the drive motor isn’t strong enough to pull them off. In some cases hitting a drive will get it working again, but I’m certainly not going to hit a drive that might be subject to legal action! I recommended referring the drive to a data recovery company.

The probability of getting useful data from the disk in question seems very low. It could be that the drive had stiction for months or years. If the drive is recovered it might turn out to have data from years ago and not the recent data that is desired. It is possible that the drive only got stiction after being turned off, but I’ll probably never know.

Doing it Properly

Ever since RAID was introduced there has been no excuse for having a single disk on its own with important data. Linux Software RAID didn’t support online rebuild back when 10G was a large disk, but since the late 90’s it has worked well and there’s no reason not to use it. The probability of a single IDE disk surviving long enough on its own to capture useful security data is not particularly good.

Even with 2 disks in a RAID-1 configuration there is a chance of data loss. Many years ago I ran a server at my parents’ house with 2 disks in a RAID-1 and both disks developed errors during one hot summer. I wrote a program that’s like ddrescue but which would read from the second disk if the first gave a read error, and ended up not losing any important data AFAIK. BTRFS has some potential benefits for recovering from such situations but I don’t recommend deploying BTRFS in embedded systems any time soon.

Monitoring is a requirement for reliable operation. For desktop systems you can get by without specific monitoring, but that is because you are effectively relying on the user monitoring it themselves. Since I started using mon (which is very easy to set up) I’ve had it notify me of some problems with my laptop that I wouldn’t have otherwise noticed. I think that ideally for desktop systems you should have monitoring of disk space, temperature, and certain critical daemons that need to be running but which the user wouldn’t immediately notice if they crashed (such as cron and syslogd).

There are some companies that provide 3G SIMs for embedded/IoT applications with rates that are significantly cheaper than any of the usual phone/tablet plans if you use small amounts of data or SMS. For a reliable CCTV system the best thing to do would be to have a monitoring contract and have the monitoring system trigger an event if there’s a problem with the hard drive etc, and also if the system fails to send an “I’m OK” message for a certain period of time.

I don’t know if people are selling CCTV systems without monitoring to compete on price or if companies are cancelling monitoring contracts to save money. But whichever is happening it’s significantly reducing the value derived from monitoring.


Steven Hanley: [mtb/events] Oxfam Trailwalker - Sydney 2016 - ARNuts

Mon, 2016-08-29 14:00

A great day out on the trail with friends
It did not really hit me in the lead up, or during the event until half way, that this was yet another 100km, and these are indeed somewhat tough to get through. The day out in the bush with my friends Alex, David and Julie was awesome.

As I say in the short report with the photos linked below, Oxfam is a great charity, and that they have these trailwalker events in many places around the world to fundraise and get people to enjoy some quality outdoor time is pretty awesome. This is a hard course; that it took us 14h30m to get through shows that. But it sure is pretty: amazing native flowers, views (waterways and bush), and the fact that it can get into Manly with you hardly realising you are in the middle of the biggest city in Australia is awesome.

My words and photos are online in my Oxfam Trailwalker - Sydney 2016 - ARnuts gallery. What a fun day out!

Craig Sanders: fakecloud

Sun, 2016-08-28 16:02

I wrote my first Mojolicious web app yesterday, a cloud-init meta-data server to enable running pre-built VM images (e.g. as provided by debian, ubuntu, etc) without having to install and manage a complete, full-featured cloud environment like openstack.

I hacked up something similar several years ago, when I was regularly building VM images at home for openstack at work, with just plain-text files served by apache, but that had pretty much everything hard-coded. fakecloud does a lot more and allows per-VM customisation of user-data (using the IP address of the requesting host). Not bad for a day’s hacking with a new web framework.
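
fakecloud itself is Mojolicious (Perl); purely to illustrate the idea, here is a rough Python sketch of the kind of endpoints a cloud-init meta-data server answers, with per-VM customisation keyed on the requesting IP. The paths follow the EC2-style datasource that cloud-init probes; the responses are illustrative, not fakecloud's:

from http.server import BaseHTTPRequestHandler, HTTPServer

class MetaData(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]  # key per-VM data on the requester's IP
        if self.path == '/latest/meta-data/instance-id':
            body = 'i-' + ip.replace('.', '-')
        elif self.path == '/latest/user-data':
            body = '#cloud-config\nhostname: vm-' + ip.replace('.', '-') + '\n'
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(body.encode())

# cloud-init probes http://169.254.169.254/ for the EC2 datasource, so this
# would sit behind that address (or a DNAT rule); port 80 needs root.
HTTPServer(('', 80), MetaData).serve_forever()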


Chris Smart: Live migrating Btrfs from RAID 5/6 to RAID 10

Fri, 2016-08-26 14:03

Recently it was discovered that the RAID 5/6 implementation in Btrfs is broken, due to the fact that it can miscalculate parity (which is rather important in RAID 5 and RAID 6).

So what to do with an existing setup that’s running native Btrfs RAID 5/6?

Well, fortunately, this issue doesn’t affect non-parity based RAID levels such as 1 and 0 (and combinations thereof) and it also doesn’t affect a Btrfs filesystem that’s sitting on top of a standard Linux Software RAID (md) device.

So if down-time isn’t a problem, we could re-create the RAID 5/6 array using md and put Btrfs back on top and restore our data… or, thanks to Btrfs itself, we can live migrate it to RAID 10!

A few caveats though. When using RAID 10, space efficiency is reduced to 50% of your drives, no matter how many you have (this is because it’s mirrored). By comparison, with RAID 5 you lose a single drive in space, and with RAID 6 it’s two, no matter how many drives you have.

This is important to note, because a RAID 5 setup with 4 drives that is using more than 2/3rds of the total space will be too big to fit on RAID 10. For example, with four 4TB drives, RAID 5 gives 12TB of usable space but RAID 10 only 8TB, which is 2/3rds of it. Btrfs also needs space for System, Metadata and Reserves so I can’t say for sure how much space you will need for the migration, but I expect considerably more than 50%. In such cases, you may need to add more drives to the Btrfs array first, before the migration begins.

So, you will need:

  • At least 4 drives
  • An even number of drives (unless you keep one as a spare)
  • Data in use that is much less than 50% of the total provided by all drives (number of disks / 2)

Of course, you’ll have a good, tested, reliable backup or two before you start this. Right? Good.

Plug any new disks in and partition or luksFormat them if necessary. We will assume your new drive is /dev/sdg, you’re using dm-crypt and that Btrfs is mounted at /mnt. Substitute these for your actual settings.
cryptsetup luksFormat /dev/sdg # encrypt the new drive
UUID="$(cryptsetup luksUUID /dev/sdg)"
echo "luks-${UUID} UUID=${UUID} none" >> /etc/crypttab
cryptsetup luksOpen /dev/sdg luks-${UUID} # device first, then the mapper name
btrfs device add /dev/mapper/luks-${UUID} /mnt

The migration is going to take a long time, so best to run this in a tmux or screen session.

time btrfs balance /mnt # full balance of existing data (old syntax; newer tools want "btrfs balance start --full-balance")
time btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt # convert data and metadata to RAID 10
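
You can check on progress from another terminal while the balance runs:
btrfs balance status /mnt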

After this completes, check that everything has been migrated to RAID 10.
btrfs fi df /mnt
Data, RAID10: total=2.19TiB, used=2.18TiB
System, RAID10: total=96.00MiB, used=240.00KiB
Metadata, RAID10: total=7.22GiB, used=5.40GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

If you still see some RAID 5/6 entries, run the same migrate command and then check that everything has migrated successfully.

Now while we’re at it, let’s defragment everything.
time btrfs filesystem defragment /mnt # this defrags the metadata
time btrfs filesystem defragment -r /mnt # this defrags data

For good measure, let’s rebalance again without the migration (this will also take a while).
time btrfs fi balance start --full-balance /mnt

Francois Marier: Debugging gnome-session problems on Ubuntu 14.04

Thu, 2016-08-25 16:02

After upgrading an Ubuntu 14.04 ("trusty") machine to the latest 16.04 Hardware Enablement packages, I ran into login problems. I could log into my user account and see the GNOME desktop for a split second before getting thrown back into the LightDM login manager.

The solution I found was to install this missing package:

apt install libwayland-egl1-mesa-lts-xenial

Looking for clues in the logs

The first place I looked was the log file for the login manager (/var/log/lightdm/lightdm.log) where I found the following:

DEBUG: Session pid=12743: Running command /usr/sbin/lightdm-session gnome-session --session=gnome
DEBUG: Creating shared data directory /var/lib/lightdm-data/username
DEBUG: Session pid=12743: Logging to .xsession-errors

This told me that the login manager runs the gnome-session command and gets it to create a session of type gnome. That command line is defined in /usr/share/xsessions/gnome.desktop (look for Exec=):

[Desktop Entry]
Name=GNOME
Comment=This session logs you into GNOME
Exec=gnome-session --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME

I couldn't see anything unexpected there, but it did point to another log file (~/.xsession-errors) which contained the following:

Script for ibus started at run_im.
Script for auto started at run_im.
Script for default started at run_im.
init: Le processus gnome-session (GNOME) main (11946) s'est achevé avec l'état 1
init: Déconnecté du bus D-Bus notifié
init: Le processus logrotate main (11831) a été tué par le signal TERM
init: Le processus update-notifier-crash (/var/crash/_usr_bin_unattended-upgrade.0.crash) main (11908) a été tué par le signal TERM

Searching for French error messages isn't as useful as searching for English ones, so I took a look at /var/log/syslog and found this:

gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' respawning too quickly
gnome-session[4134]: CRITICAL: We failed, but the fail whale is dead. Sorry....

It looks like gnome-session is executing gnome-shell and that this last command is terminating prematurely. This would explain why gnome-session exits immediately after login.

Increasing the amount of logging

In order to get more verbose debugging information out of gnome-session, I created a new type of session (GNOME debug) by copying the regular GNOME session:

cp /usr/share/xsessions/gnome.desktop /usr/share/xsessions/gnome-debug.desktop

and then adding --debug to the command line inside gnome-debug.desktop:

[Desktop Entry]
Name=GNOME debug
Comment=This session logs you into GNOME debug
Exec=gnome-session --debug --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME debug

After restarting LightDM (service lightdm restart), I clicked the GNOME logo next to the password field and chose GNOME debug before trying to login again.

This time, I had a lot more information in ~/.xsession-errors:

gnome-session[12878]: DEBUG(+): GsmAutostartApp: starting gnome-shell.desktop: command=/usr/bin/gnome-shell startup-id=10d41f1f5c81914ec61471971137183000000128780000
gnome-session[12878]: DEBUG(+): GsmAutostartApp: started pid:13121
...
/usr/bin/gnome-shell: error while loading shared libraries: libwayland-egl.so.1: cannot open shared object file: No such file or directory
gnome-session[12878]: DEBUG(+): GsmAutostartApp: (pid:13121) done (status:127)
gnome-session[12878]: WARNING: App 'gnome-shell.desktop' exited with code 127

which suggests that gnome-shell won't start because of a missing library.

Finding the missing library

To find the missing library, I used the apt-file command:

apt-file update
apt-file search libwayland-egl.so.1

and found that this file is provided by the following packages:

  • libhybris
  • libwayland-egl1-mesa
  • libwayland-egl1-mesa-dbg
  • libwayland-egl1-mesa-lts-utopic
  • libwayland-egl1-mesa-lts-vivid
  • libwayland-egl1-mesa-lts-wily
  • libwayland-egl1-mesa-lts-xenial

Since I installed the LTS Enablement stack, the package I needed to install to fix this was libwayland-egl1-mesa-lts-xenial.

I filed a bug for this on Launchpad.

Maxim Zakharov: Small fix for AMP WordPress plugin

Wed, 2016-08-24 12:03

If you use the AMP plugin for WordPress to make AMP (Accelerated Mobile Pages) versions of your posts and have some trouble validating them on the AMP validator, you may try this fix for the AMP plugin to make those pages valid.

Russell Coker: Basics of Backups

Sat, 2016-08-20 18:02

I’ve recently had some discussions about backups with people who aren’t computer experts, so I decided to blog about this for the benefit of everyone. Note that this post will deliberately avoid issues that require great knowledge of computers. I have written other posts that will benefit experts.

Essential Requirements

Everything that matters must be stored in at least 3 places. Every storage device will die eventually. Every backup will die eventually. If you have 2 backups then you are covered for the primary storage failing and the first backup failing. Note that I’m not saying “only have 2 backups” (I have many more) but 2 is the bare minimum.

Backups must be in multiple places. One way of losing data is if your house burns down, if that happens all backup devices stored there will be destroyed. You must have backups off-site. A good option is to have backup devices stored by trusted people (friends and relatives are often good options).

It must not be possible for one event to wipe out all backups. Some people use “cloud” backups, there are many ways of doing this with Dropbox, Google Drive, etc. Some of these even have free options for small amounts of storage, for example Google Drive appears to have 15G of free storage which is more than enough for all your best photos and all your financial records. The downside to cloud backups is that a computer criminal who gets access to your PC can wipe it and the backups. Cloud backup can be a part of a sensible backup strategy but it can’t be relied on (also see the paragraph about having at least 2 backups).

Backup Devices

USB flash “sticks” are cheap and easy to use. The quality of some of those devices isn’t too good, but the low price and small size means that you can buy more of them. It would be quite easy to buy 10 USB sticks for multiple copies of data.

Stores that sell office-supplies sell USB attached hard drives which are quite affordable now. It’s easy to buy a couple of those for backup use.

The cheapest option for backing up moderate amounts of data is to get a USB-SATA device. This connects to the PC by USB and has a cradle to accept a SATA hard drive. That allows you to buy cheap SATA disks for backups and even use older disks as backups.

When choosing backup devices, consider the environment that they will be stored in. If you want to store a backup in the glove box of your car (which could be good when travelling) then an SD card or USB flash device would be a good choice because they are resistant to physical damage. Note that if you have no other options for off-site storage then the glove box of your car will probably survive if your house burns down.

Multiple Backups

It’s not uncommon for data corruption or mistakes to be discovered some time after it happens. Also in recent times there is a variety of malware that encrypts files and then demands a ransom payment for the decryption key.

To address these problems you should have older backups stored. It’s not uncommon in a corporate environment to have backups every day stored for a week, backups every week stored for a month, and monthly backups stored for some years.

For a home use scenario it’s more common to make backups every week or so and take backups to store off-site when it’s convenient.

Offsite Backups

One common form of off-site backup is to store backup devices at work. If you work in an office then you will probably have some space in a desk drawer for personal items. If you don’t work in an office but have a locker at work then that’s good for storage too, if there is high humidity then SD cards will survive better than hard drives. Make sure that you encrypt all data you store in such places or make sure that it’s not the secret data!

Banks have a variety of ways of storing items. Bank safe deposit boxes can be used for anything that fits and can fit hard drives. If you have a mortgage your bank might give you free storage of “papers” as part of the service (Commonwealth Bank of Australia used to offer that). A few USB sticks or SD cards in an envelope could fit the “papers” criteria. An accounting firm may also store documents for free for you.

If you put a backup on USB or SD storage in your wallet then that can also be a good offsite backup. For most people losing data from disk is more common than losing their wallet.

A modern mobile phone can also be used for backing up data while travelling. For a few years I’ve been doing that. But note that you have to encrypt all data stored on a phone so an attacker who compromises your phone can’t steal it. In a typical phone configuration the mass storage area is much less protected than application data. Also note that customs and border control agents for some countries can compel you to provide the keys for encrypted data.

A friend suggested burying a backup device in a sealed plastic container filled with desiccant. That would survive your house burning down and in theory should work. I don’t know of anyone who’s tried it.


Testing Backups

On occasion you should try to read the data from your backups and compare it to the original data. It sometimes happens that backups are discovered to be useless after years of operation.

Secret Data

Before starting a backup it’s worth considering which of the data is secret and which isn’t. Data that is secret needs to be treated differently and a mixture of secret and less secret data needs to be treated as if it’s all secret.

One category of secret data is financial data. If your accountant provides document storage then they can store that, generally your accountant will have all of your secret financial data anyway.

Passwords need to be kept secret but they are also very small. So making a written or printed copy of the passwords is part of a good backup strategy. There are options for backing up paper that don’t apply to data.

One category of data that is not secret is photos. Photos of holidays, friends, etc are generally not that secret and they can also comprise a large portion of the data volume that needs to be backed up. Apparently some people have a backup strategy for such photos that involves downloading from Facebook to restore, that will help with some problems but it’s not adequate overall. But any data that is on Facebook isn’t that secret and can be stored off-site without encryption.

Backup Corruption

With the amounts of data that are used nowadays the probability of data corruption is increasing. If you use any compression program with the data that is backed up (even with data that can’t be compressed further, such as JPEGs) then errors will be detected when you extract the data. So if you have backup ZIP files on 2 hard drives and one of them gets corrupt, you will easily be able to determine which one has the correct data.
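
For example (backup.zip standing in for whatever your archive is called), the test option checks every file against its stored checksum without extracting anything:
unzip -t backup.zip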

Failing Systems – update 2016-08-22

When a system starts to fail it may limp along for years and work reasonably well, or it may totally fail soon. At the first sign of trouble you should immediately make a full backup to separate media. Use different media to your regular backups in case the data is corrupt so you don’t overwrite good backups with bad ones.

One traditional sign of problems has been hard drives that make unusual sounds. Modern drives are fairly quiet so this might not be loud enough to notice. Another sign is hard drives that take unusually large amounts of time to read data. If a drive has some problems it might read a sector hundreds or even thousands of times until it gets the data which dramatically reduces system performance. There are lots of other performance problems that can occur (system overheating, software misconfiguration, and others), most of which are correlated with potential data loss.

A modern SSD storage device (as used in a lot of the recent laptops) doesn’t tend to go slow when it nears the end of its life. It is more likely to just randomly fail entirely and then work again after a reboot. There are many causes of systems randomly hanging or crashing (of which overheating is common), but they are all correlated with data loss so a good backup is a good idea.

When in doubt make a backup.

Any Suggestions?

If you have any other ideas for backups by typical home users then please leave a comment. Don’t comment on expert issues though, I have other posts for that.


Colin Charles: Speaking in August 2016

Sat, 2016-08-20 04:01

I know this is a tad late, but there have been some changes, etc. recently, so apologies for the delay of this post. I still hope to meet many of you to chat about MySQL/Percona Server/MariaDB Server, MongoDB, open source databases, and open source in general in the remainder of August 2016.

  • LinuxCon+ContainerCon North America – August 22-24 2016 – Westin Harbour Castle, Toronto, Canada – I’ll be speaking about lessons one can learn from database failures and enjoying the spectacle that is the 25th anniversary of Linux!
  • Chicago MySQL Meetup Group – August 29 2016 – Vivid Seats, Chicago, IL – more lessons from database failures here, and I’m looking forward to meeting users, etc. in the Chicago area

While not speaking, Vadim Tkachenko and I will be present at the @scale conference. I really enjoyed my time there previously, and if you get an invite, it’s truly a great place to learn and network.

sthbrx - a POWER technical blog: Getting In Sync

Wed, 2016-08-17 16:23

Since at least v1.0.0 Petitboot has used device-mapper snapshots to avoid mounting block devices directly. Primarily this is so Petitboot can mount disks and potentially perform filesystem recovery without worrying about messing it up and corrupting a host's boot partition - all changes happen to the snapshot in memory without affecting the actual device.

This of course gets in the way if you actually do want to make changes to a block device. Petitboot will allow certain bootloader scripts to make changes to disks if configured (eg, grubenv updates), but if you manually make changes you would need to know the special sequence of dmsetup commands to merge the snapshots back to disk. This is particularly annoying if you're trying to copy logs to a USB device!

Depending on how recent a version of Petitboot you're running, there are two ways of making sure your changes persist:

Before v1.2.2

If you really need to save changes from within Petitboot, the most straightforward way is to disable snapshots. Drop to the shell and enter

nvram --update-config petitboot,snapshots?=false
reboot

Once you have rebooted you can remount the device as read-write and modify it as normal.

After v1.2.2

To make this easier while keeping the benefit of snapshots, v1.2.2 introduces a new user-event that will merge snapshots on demand. For example:

mount -o remount,rw /var/petitboot/mnt/dev/sda2
cp /var/log/messages /var/petitboot/mnt/dev/sda2/
pb-event sync@sda2

After calling pb-event sync@yourdevice, Petitboot will remount the device back to read-only and merge the current snapshot differences back to disk. You can also run pb-event sync@all to sync all existing snapshots if desired.

Colin Charles: What’s next

Wed, 2016-08-17 16:01

I received an overwhelming number of comments when I said I was leaving MariaDB Corporation. Thank you – it is really nice to be appreciated.

I haven’t left the MySQL ecosystem. In fact, I’ve joined Percona as their Chief Evangelist in the CTO Office, and I’m going to focus on the MySQL/Percona Server/MariaDB Server ecosystem, while also looking at MongoDB and other solutions that are good for Percona customers. Thanks again for the overwhelming response on the various social media channels, and via emails, calls, etc.

Here’s to a great time at Percona to focus on open source databases and solutions around them!

My first blog post on the Percona blog is up: I’m Colin Charles, and I’m here to evangelize open source databases! There is also the press release.

Binh Nguyen: Neo-Colonialism and Neo-Liberalism, Intelligence Analysis, and More

Tue, 2016-08-16 09:07
Watch a lot of media outlets and over and over again you hear the terms 'Neocolonialism' and 'Free Trade' from time to time. Until fairly recently, I wasn't entirely aware of what exactly this meant and how it came to be. As indicated in my last post, up until a certain point wealth was distributed rather evenly throughout the world. Then 'colonialism' happened and the wealth gap between

Colin Charles: Changing of the guard

Tue, 2016-08-16 04:01

I posted a message to the internal mailing lists at MariaDB Corporation. I have departed (I resigned from) the company, but definitely not the community. Thank you all for the privilege of serving the large MariaDB Server community of users, all 12 million+ of you. See you on the mailing lists, IRC, and the developer meetings.

The Japanese have a saying, “leave when the cherry blossoms are full”.

I’ve been one of the earliest employees of this post-merge company, and was on the founding team of MariaDB Server, having been around since 2009. I didn’t make the first company meeting in Mallorca (August 2009) due to the chickenpox, but I’ve been to every one since.

We made the first stable MariaDB Server 5.1 release in February 2010. Our first Linux distribution release was in openSUSE. Our then tagline: MariaDB: Community Developed. Feature Enhanced. Backward Compatible.

In 2013, we had to make a decision: merge with our sister company SkySQL, or take on investment of equal value to compete; the majority of us chose to work with our family.

Our big deal was releasing MariaDB Server 5.5 – Wikipedia migrated, Google wanted in, and Red Hat pushed us into the enterprise space.

Besides managing distributions and other community related activities (and in the pre-SkySQL days Rasmus and I did everything from marketing to NRE contract management, down to even doing press releases – you wear many hats when you’re in a startup of less than 20 people), in this time, I’ve written over 220 blog posts, spoken at over 130 events (an average of 18 per year), and given generally over 250 talks, tutorials and keynotes. I’ve had numerous face-to-face meetings with customers, figuring out what NRE they may need and providing them solutions. I’ve done numerous internal presentations, audience varying from the professional services & support teams, as well as the management team. I’ve even technically reviewed many books, including one of the best introductions by our colleague, Learning MySQL & MariaDB.

It’s been a good run. Seven years. An uncountable number of flights. Too many weekends away working for the cause. A whole bunch of great meetings with many of you. I’ve seen the company go through bootstrap, merger, Series A, and Series B.

It’s been a true privilege to work with many of you. I have the utmost respect for Team MariaDB (and of course my SkySQL brethren!). I’m going to miss many of you. The good thing is that MariaDB Server is an open source project, and I’m not going to leave the project or #maria. I in fact hope to continue speaking and working on MariaDB Server.

I hope to remain connected to many of you.

Thank you for this great privilege.

Kind Regards,
Colin Charles

Steven Hanley: [mtb/events] Razorback Ultra - Spectacular run in the Victorian Alps

Mon, 2016-08-15 14:00

Alex and another Canberran on the Razorback
Alex and I signed up for the Razorback Ultra because it is in an amazing part of the country and sounded like a fun event. I was heading into it a week after Six Foot; however, this is all just training for UTA100, so why not. All I can say is every trail runner should do this event, it is amazing.

The atmosphere at the race is laid back and it is all about heading up into the mountains and enjoying yourself. I will be back for sure.

My words and photos are online in my Razorback Ultra 2016 gallery. This is truly one of the best runs in Australia.

Steven Hanley: [mtb/events] Geoquest 2016 - Port Mac again with Resultz

Mon, 2016-08-15 14:00

My Mirage 730 - Matilda, having a rest while we ran around
I have fun at Geoquest and love doing the event, however I have been a bit iffy about trying to organise a team for a few years. As many say, one of the hardest things in the event is getting 4 people to the start line ready to go.

This year my attitude was similar to last: if I was asked to join a team I would probably say yes. I was asked, and thus ended up racing with a bunch of fun guys under the banner of Michael's company, Resultz Racing. Another great weekend on the mid north NSW coast with some amazing scenery (the two rogaines were highlights, especially the punchbowl waterfall on the second one).

My words and photos are online in my Geoquest 2016 gallery. Always good fun and a nice escape from winter.

Steven Hanley: [mtb] The lots of vert lunch run, reasons to live in Canberra

Mon, 2016-08-15 14:00

Great view of the lake from the single track on the steep side of BM
This run, which is so easy to get out for at lunch, is a great quality climbing session and shows off Canberra beautifully. What fun.

Photos and some words are online on my Lots of vert lunch run page.