Planet Linux Australia
After upgrading an Ubuntu 14.04 ("trusty") machine to the latest 16.04 Hardware Enablement packages, I ran into login problems. I could log into my user account and see the GNOME desktop for a split second before getting thrown back into the LightDM login manager.
The solution I found was to install this missing package:

    apt install libwayland-egl1-mesa-lts-xenial

Looking for clues in the logs
The first place I looked was the log file for the login manager (/var/log/lightdm/lightdm.log) where I found the following:

    DEBUG: Session pid=12743: Running command /usr/sbin/lightdm-session gnome-session --session=gnome
    DEBUG: Creating shared data directory /var/lib/lightdm-data/username
    DEBUG: Session pid=12743: Logging to .xsession-errors
This told me that the login manager runs the gnome-session command and gets it to create a session of type gnome. That command line is defined in /usr/share/xsessions/gnome.desktop (look for Exec=):

    [Desktop Entry]
    Name=GNOME
    Comment=This session logs you into GNOME
    Exec=gnome-session --session=gnome
    TryExec=gnome-shell
    X-LightDM-DesktopName=GNOME
I couldn't see anything unexpected there, but it did point to another log file (~/.xsession-errors) which contained the following:

    Script for ibus started at run_im.
    Script for auto started at run_im.
    Script for default started at run_im.
    init: Le processus gnome-session (GNOME) main (11946) s'est achevé avec l'état 1
    init: Déconnecté du bus D-Bus notifié
    init: Le processus logrotate main (11831) a été tué par le signal TERM
    init: Le processus update-notifier-crash (/var/crash/_usr_bin_unattended-upgrade.0.crash) main (11908) a été tué par le signal TERM

(The French lines say that gnome-session exited with status 1, and that logrotate and update-notifier-crash were killed by SIGTERM.)
Searching for French error messages isn't as useful as searching for English ones, so I took a look at /var/log/syslog and found this:

    gnome-session: WARNING: App 'gnome-shell.desktop' exited with code 127
    gnome-session: WARNING: App 'gnome-shell.desktop' exited with code 127
    gnome-session: WARNING: App 'gnome-shell.desktop' respawning too quickly
    gnome-session: CRITICAL: We failed, but the fail whale is dead. Sorry....
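That exit code 127 is itself a useful clue: it is the status a shell (or the dynamic loader) returns when a command can't be run at all, as opposed to a program starting and then crashing. You can demonstrate it from any shell:

```shell
# 127 means "could not run the command at all" - the command wasn't found,
# or the dynamic loader failed to start it.
sh -c 'this-command-does-not-exist' 2>/dev/null || echo "exit status: $?"
# prints "exit status: 127"
```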
It looks like gnome-session is executing gnome-shell and that this last command is terminating prematurely. This would explain why gnome-session exits immediately after login.

Increasing the amount of logging
In order to get more verbose debugging information out of gnome-session, I created a new type of session (GNOME debug) by copying the regular GNOME session:

    cp /usr/share/xsessions/gnome.desktop /usr/share/xsessions/gnome-debug.desktop
and then adding --debug to the command line inside gnome-debug.desktop:

    [Desktop Entry]
    Name=GNOME debug
    Comment=This session logs you into GNOME debug
    Exec=gnome-session --debug --session=gnome
    TryExec=gnome-shell
    X-LightDM-DesktopName=GNOME debug
After restarting LightDM (service lightdm restart), I clicked the GNOME logo next to the password field and chose GNOME debug before trying to log in again.
This time, I had a lot more information in ~/.xsession-errors:

    gnome-session: DEBUG(+): GsmAutostartApp: starting gnome-shell.desktop: command=/usr/bin/gnome-shell startup-id=10d41f1f5c81914ec61471971137183000000128780000
    gnome-session: DEBUG(+): GsmAutostartApp: started pid:13121
    ...
    /usr/bin/gnome-shell: error while loading shared libraries: libwayland-egl.so.1: cannot open shared object file: No such file or directory
    gnome-session: DEBUG(+): GsmAutostartApp: (pid:13121) done (status:127)
    gnome-session: WARNING: App 'gnome-shell.desktop' exited with code 127
which suggests that gnome-shell won't start because of a missing library.

Finding the missing library
To find the missing library, I used the apt-file command:

    apt-file update
    apt-file search libwayland-egl.so.1
and found that this file is provided by the following packages:
Since I installed the LTS Enablement stack, the package I needed to install to fix this was libwayland-egl1-mesa-lts-xenial.
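As an aside, ldd would have pointed at the same missing library directly: it lists a binary's shared-library dependencies and flags any the loader cannot resolve. The expected output shown in the comment is reconstructed from the error above, not captured at the time:

```shell
# List gnome-shell's shared-library dependencies; unresolved ones are
# flagged "not found". On the broken system this would have shown
# something like:
#   libwayland-egl.so.1 => not found
ldd /usr/bin/gnome-shell | grep "not found"
```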
I filed a bug for this on Launchpad.
I’ve recently had some discussions about backups with people who aren’t computer experts, so I decided to blog about this for the benefit of everyone. Note that this post will deliberately avoid issues that require great knowledge of computers. I have written other posts that will benefit experts.

Essential Requirements
Everything that matters must be stored in at least 3 places. Every storage device will die eventually. Every backup will die eventually. If you have 2 backups then you are covered for the primary storage failing and the first backup failing. Note that I’m not saying “only have 2 backups” (I have many more) but 2 is the bare minimum.
Backups must be in multiple places. One way of losing data is if your house burns down; if that happens, all backup devices stored there will be destroyed. You must have backups off-site. A good option is to have backup devices stored by trusted people (friends and relatives are often good options).
It must not be possible for one event to wipe out all backups. Some people use “cloud” backups; there are many ways of doing this with Dropbox, Google Drive, etc. Some of these even have free options for small amounts of storage, for example Google Drive appears to have 15G of free storage which is more than enough for all your best photos and all your financial records. The downside to cloud backups is that a computer criminal who gets access to your PC can wipe it and the backups. Cloud backup can be a part of a sensible backup strategy but it can’t be relied on (also see the paragraph about having at least 2 backups).

Backup Devices
USB flash “sticks” are cheap and easy to use. The quality of some of those devices isn’t too good, but the low price and small size means that you can buy more of them. It would be quite easy to buy 10 USB sticks for multiple copies of data.
Stores that sell office-supplies sell USB attached hard drives which are quite affordable now. It’s easy to buy a couple of those for backup use.
The cheapest option for backing up moderate amounts of data is to get a USB-SATA device. This connects to the PC by USB and has a cradle to accept a SATA hard drive. That allows you to buy cheap SATA disks for backups and even use older disks as backups.
When choosing backup devices, consider the environment that they will be stored in. If you want to store a backup in the glove box of your car (which could be good when travelling) then an SD card or USB flash device would be a good choice because they are resistant to physical damage. Note that if you have no other options for off-site storage then the glove box of your car will probably survive if your house burns down.

Multiple Backups
It’s not uncommon for data corruption or mistakes to be discovered some time after it happens. Also in recent times there is a variety of malware that encrypts files and then demands a ransom payment for the decryption key.
To address these problems you should have older backups stored. It’s not uncommon in a corporate environment to have backups every day stored for a week, backups every week stored for a month, and monthly backups stored for some years.
For a home use scenario it’s more common to make backups every week or so and take backups to store off-site when it’s convenient.

Offsite Backups
One common form of off-site backup is to store backup devices at work. If you work in an office then you will probably have some space in a desk drawer for personal items. If you don’t work in an office but have a locker at work then that’s good for storage too; if there is high humidity then SD cards will survive better than hard drives. Make sure that you encrypt all data you store in such places or make sure that it’s not the secret data!
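For the encryption itself, GnuPG's symmetric mode needs nothing more than a passphrase, with no key management to get wrong. A sketch, where the file names and passphrase are placeholders (recent GnuPG 2.x versions need the loopback pinentry option shown when scripting):

```shell
# Encrypt a file with a passphrase before storing it off-site, then decrypt
# it again to check it worked. Names and passphrase are placeholders.
echo "account numbers etc." > records.txt
gpg --batch --yes --pinentry-mode loopback --passphrase 'use-a-long-passphrase' \
    --symmetric --cipher-algo AES256 --output records.txt.gpg records.txt
gpg --batch --yes --pinentry-mode loopback --passphrase 'use-a-long-passphrase' \
    --decrypt --output restored.txt records.txt.gpg 2>/dev/null
rm -f records.txt records.txt.gpg restored.txt
```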
Banks have a variety of ways of storing items. Bank safe deposit boxes can hold anything that fits, including hard drives. If you have a mortgage your bank might give you free storage of “papers” as part of the service (the Commonwealth Bank of Australia used to offer that). A few USB sticks or SD cards in an envelope could fit the “papers” criteria. An accounting firm may also store documents for free for you.
If you put a backup on USB or SD storage in your wallet then that can also be a good offsite backup. For most people, losing data from a disk is more common than losing their wallet.
A modern mobile phone can also be used for backing up data while travelling. For a few years I’ve been doing that. But note that you have to encrypt all data stored on a phone so an attacker who compromises your phone can’t steal it. In a typical phone configuration the mass storage area is much less protected than application data. Also note that customs and border control agents for some countries can compel you to provide the keys for encrypted data.
A friend suggested burying a backup device in a sealed plastic container filled with desiccant. That would survive your house burning down and in theory should work. I don’t know of anyone who’s tried it.

Testing
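Reading your backups back and comparing them to the originals doesn't have to be tedious: a checksum manifest makes the comparison mechanical. A sketch on throwaway directories (point the two paths at your real data and its backup to use it):

```shell
# Demo: checksum the originals, then verify the backup copy against the
# manifest. The directories and file here are placeholders.
original=$(mktemp -d); backup=$(mktemp -d); manifest=$(mktemp)
echo "family photos" > "$original/photo1.jpg"
cp "$original/photo1.jpg" "$backup/"
(cd "$original" && find . -type f -exec sha256sum {} +) > "$manifest"
(cd "$backup" && sha256sum --quiet --check "$manifest") && echo "backup verified"
```

With --quiet, sha256sum prints nothing for files that match and an error for any file that is missing or differs, so silence plus "backup verified" means the backup is good.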
On occasion you should try to read the data from your backups and compare it to the original data. It sometimes happens that backups are discovered to be useless after years of operation.

Secret Data
Before starting a backup it’s worth considering which of the data is secret and which isn’t. Data that is secret needs to be treated differently and a mixture of secret and less secret data needs to be treated as if it’s all secret.
One category of secret data is financial data. If your accountant provides document storage then they can store that, generally your accountant will have all of your secret financial data anyway.
Passwords need to be kept secret but they are also very small. So making a written or printed copy of the passwords is part of a good backup strategy. There are options for backing up paper that don’t apply to data.
One category of data that is not secret is photos. Photos of holidays, friends, etc are generally not that secret and they can also comprise a large portion of the data volume that needs to be backed up. Apparently some people have a backup strategy for such photos that involves downloading from Facebook to restore; that will help with some problems but it’s not adequate overall. But any data that is on Facebook isn’t that secret and can be stored off-site without encryption.

Backup Corruption
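Most compression formats store a checksum of the original data, so a corrupted archive can be detected without even extracting it. With gzip, for instance (demo on a throwaway file):

```shell
# gzip records a CRC-32 of the input; "gzip -t" verifies it without extracting.
echo "photo data" > photo.raw
gzip -f photo.raw                        # produces photo.raw.gz
gzip -t photo.raw.gz && echo "archive intact"
# Damage the archive (here, chop off its trailer) and the test fails:
truncate -s -4 photo.raw.gz
gzip -t photo.raw.gz 2>/dev/null || echo "corruption detected"
rm -f photo.raw.gz
```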
With the amounts of data that are used nowadays the probability of data corruption is increasing. If you use any compression program with the data that is backed up (even data that can’t be compressed such as JPEGs) then errors will be detected when you extract the data. So if you have backup ZIP files on 2 hard drives and one of them gets corrupt you will easily be able to determine which one has the correct data.

Failing Systems – update 2016-08-22
When a system starts to fail it may limp along for years and work reasonably well, or it may totally fail soon. At the first sign of trouble you should immediately make a full backup to separate media. Use different media from your regular backups in case the data is corrupt, so you don’t overwrite good backups with bad ones.
One traditional sign of problems has been hard drives that make unusual sounds. Modern drives are fairly quiet so this might not be loud enough to notice. Another sign is hard drives that take unusually large amounts of time to read data. If a drive has some problems it might read a sector hundreds or even thousands of times until it gets the data which dramatically reduces system performance. There are lots of other performance problems that can occur (system overheating, software misconfiguration, and others), most of which are correlated with potential data loss.
A modern SSD storage device (as used in a lot of recent laptops) doesn’t tend to go slow when it nears the end of its life. It is more likely to just randomly fail entirely and then work again after a reboot. There are many causes of systems randomly hanging or crashing (of which overheating is common), but they are all correlated with data loss so a good backup is a good idea.
When in doubt make a backup.

Any Suggestions?
If you have any other ideas for backups by typical home users then please leave a comment. Don’t comment on expert issues though, I have other posts for that.
I know this is a tad late, but there have been some changes, etc. recently, so apologies for the delay of this post. I still hope to meet many of you to chat about MySQL/Percona Server/MariaDB Server, MongoDB, open source databases, and open source in general in the remainder of August 2016.
- LinuxCon+ContainerCon North America – August 22-24 2016 – Westin Harbour Castle, Toronto, Canada – I’ll be speaking about lessons one can learn from database failures and enjoying the spectacle that is the 25th anniversary of Linux!
- Chicago MySQL Meetup Group – August 29 2016 – Vivid Seats, Chicago, IL – more lessons from database failures here, and I’m looking forward to meeting users, etc. in the Chicago area
While not speaking, Vadim Tkachenko and I will be present at the @scale conference. I really enjoyed my time there previously, and if you get an invite, it’s truly a great place to learn and network.
Since at least v1.0.0 Petitboot has used device-mapper snapshots to avoid mounting block devices directly. Primarily this is so Petitboot can mount disks and potentially perform filesystem recovery without worrying about messing it up and corrupting a host's boot partition - all changes happen to the snapshot in memory without affecting the actual device.
This of course gets in the way if you actually do want to make changes to a block device. Petitboot will allow certain bootloader scripts to make changes to disks if configured (eg, grubenv updates), but if you manually make changes you would need to know the special sequence of dmsetup commands to merge the snapshots back to disk. This is particularly annoying if you're trying to copy logs to a USB device!
Depending on how recent a version of Petitboot you're running, there are two ways of making sure your changes persist:

Before v1.2.2
If you really need to save changes from within Petitboot, the most straightforward way is to disable snapshots. Drop to the shell and enter:

    nvram --update-config petitboot,snapshots?=false
    reboot
Once you have rebooted you can remount the device as read-write and modify it as normal.

After v1.2.2
To make this easier while keeping the benefit of snapshots, v1.2.2 introduces a new user-event that will merge snapshots on demand. For example:

    mount -o remount,rw /var/petitboot/mnt/dev/sda2
    cp /var/log/messages /var/petitboot/mnt/dev/sda2/
    pb-event sync@sda2
After calling pb-event sync@yourdevice, Petitboot will remount the device back to read-only and merge the current snapshot differences back to disk. You can also run pb-event sync@all to sync all existing snapshots if desired.
I received an overwhelming number of comments when I said I was leaving MariaDB Corporation. Thank you – it is really nice to be appreciated.
I haven’t left the MySQL ecosystem. In fact, I’ve joined Percona as their Chief Evangelist in the CTO Office, and I’m going to focus on the MySQL/Percona Server/MariaDB Server ecosystem, while also looking at MongoDB and other solutions that are good for Percona customers. Thanks again for the overwhelming response on the various social media channels, and via emails, calls, etc.
Here’s to a great time at Percona to focus on open source databases and solutions around them!
My first blog post on the Percona blog is up: I’m Colin Charles, and I’m here to evangelize open source databases! There’s also the press release.
I posted a message to the internal mailing lists at MariaDB Corporation. I have departed (I resigned) the company, but definitely not the community. Thank you all for the privilege of serving the large MariaDB Server community of users, all 12 million+ of you. See you on the mailing lists, IRC, and the developer meetings.
The Japanese have a saying, “leave when the cherry blossoms are full”.
I was one of the earliest employees of this post-merge company, and was on the founding team of MariaDB Server, having been around since 2009. I didn’t make the first company meeting in Mallorca (August 2009) due to chickenpox, but I’ve been to every one since.
We made the first stable MariaDB Server 5.1 release in February 2010. Our first Linux distribution release was in openSUSE. Our then tagline: MariaDB: Community Developed. Feature Enhanced. Backward Compatible.
In 2013, we had to make a decision: merge with our sister company SkySQL, or take on investment of equal value to compete. The majority of us chose to work with our family.
Our big deal was releasing MariaDB Server 5.5 – Wikipedia migrated, Google wanted in, and Red Hat pushed us into the enterprise space.
Besides managing distributions and other community related activities (in the pre-SkySQL days Rasmus and I did everything from marketing to NRE contract management, down to even doing press releases – you wear many hats when you’re in a startup of fewer than 20 people), in this time I’ve written over 220 blog posts, spoken at over 130 events (an average of 18 per year), and given over 250 talks, tutorials and keynotes. I’ve had numerous face-to-face meetings with customers, figuring out what NRE they may need and providing them solutions. I’ve done numerous internal presentations, with audiences varying from the professional services & support teams to the management team. I’ve even technically reviewed many books, including one of the best introductions by our colleague, Learning MySQL & MariaDB.
It’s been a good run. Seven years. Countless flights. Too many weekends away working for the cause. A whole bunch of great meetings with many of you. I’ve seen the company go from bootstrap, through a merger, to Series A and Series B.
It’s been a true privilege to work with many of you. I have the utmost respect for Team MariaDB (and of course my SkySQL brethren!). I’m going to miss many of you. The good thing is that MariaDB Server is an open source project, and I’m not going to leave the project or #maria. I in fact hope to continue speaking and working on MariaDB Server.
I hope to remain connected to many of you.
Thank you for this great privilege.
Alex and another Canberran on the Razorback (fullsize)
Alex and I signed up for the Razorback Ultra because it is in an amazing part of the country and sounded like a fun event to go do. I was heading into it a week after Six Foot; however, this is all just training for UTA100, so why not? All I can say is every trail runner should do this event, it is amazing.
The atmosphere at the race is laid back and it is all about heading up into the mountains and enjoying yourself. I will be back for sure.
My words and photos are online in my Razorback Ultra 2016 gallery. This is truly one of the best runs in Australia.
My Mirage 730 - Matilda, having a rest while we ran around (fullsize)
I have fun at Geoquest and love doing the event; however, I have been a bit iffy about trying to organise a team for a few years. As many say, one of the hardest things in the event is getting 4 people to the start line ready to go.
This year my attitude was similar to last year's: if I was asked to join a team I would probably say yes. I was asked, and thus ended up racing with a bunch of fun guys under the banner of Michael's company, Resultz Racing. Another great weekend on the mid north NSW coast with some amazing scenery (the two rogaines were highlights, especially the punchbowl waterfall on the second one).
My words and photos are online in my Geoquest 2016 gallery. Always good fun and a nice escape from winter.
Vote Green Maybe

I threw a wish in the well
For a better Australia today
I looked at our leaders today
And now they're in our way

I'll not trade my freedom for them
All our dollars and cents to the rich
I wasn't looking for this
But now they're in our way

Our democracy is squandered
Broken promises
Lies everywhere
Hot nights
Winds are blowing
Freak weather events, climate change

Hey I get to vote soon
And this isn't crazy
But here's my idea
So vote Greens maybe

It's hard to look at our future
But here's my idea
So vote Greens maybe

Hey I get to vote soon
And this isn't crazy
But here's my idea
So vote Greens maybe

And all the major parties
Try to shut us up
But here's my idea
So vote Greens maybe

Liberal and Labor think they should rule
I take no time saying they fail
They gave us nothing at all
And now they're in our way

I beg for a fairer Australia
At first sight our policies are real
I didn't know if you read them
But it's the Greens way

Your vote can fix things
Healthier people
Childrens education
Fairer policies
A change is coming
Where you think you're voting, Greens?

Hey I get to vote soon
And this isn't crazy
But here's my idea
So vote Greens maybe

It's worth a look to a brighter future
But here's my idea
So vote Greens maybe

Before this change in our lives
I see children in detention
I see humans fleeing horrors
I see them locked up and mistreated
Before this change in our lives
I see a way to fix this
And you should know that
Voting Green can help fix this, Green, Green, Green...

It's bright to look at our future
But here's my idea
So vote Greens maybe

Hey I get to vote soon
And this isn't crazy
But here's my idea
So vote Greens maybe

And all the major parties
Try to shut us up
But here's my idea
So vote Greens maybe

Before this change in our lives
I see children in detention
I see humans fleeing horrors
I see them locked up and mistreated
Before this change in our lives
I see a way to fix this
And you should know that
So vote Green Saturday

Call Me Maybe (Carly Rae Jepsen)

I threw a wish in the well
Don't ask me I'll never tell
I looked at you as it fell
And now you're in my way

I trade my soul for a wish
Pennies and dimes for a kiss
I wasn't looking for this
But now you're in my way

Your stare was holding
Ripped jeans
Skin was showing
Hot night
Wind was blowing
Where you think you're going baby?

Hey I just met you
And this is crazy
But here's my number
So call me maybe

It's hard to look right at you baby
But here's my number
So call me maybe

Hey I just met you
And this is crazy
But here's my number
So call me maybe

And all the other boys
Try to chase me
But here's my number
So call me maybe

You took your time with the call
I took no time with the fall
You gave me nothing at all
But still you're in my way

I beg and borrow and steal
At first sight and it's real
I didn't know I would feel it
But it's in my way

Your stare was holding
Ripped jeans
Skin was showing
Hot night
Wind was blowing
Where you think you're going baby?

Hey I just met you
And this is crazy
But here's my number
So call me maybe

It's hard to look right at you baby
But here's my number
So call me maybe

Before you came into my life
I missed you so bad
I missed you so bad
I missed you so so bad
Before you came into my life
I missed you so bad
And you should know that
I missed you so so bad, bad, bad, bad....

It's hard to look right at you baby
But here's my number
So call me maybe

Hey I just met you
And this is crazy
But here's my number
So call me maybe

And all the other boys
Try to chase me
But here's my number
So call me maybe

Before you came into my life
I missed you so bad
I missed you so bad
I missed you so so bad
Before you came into my life
I missed you so bad
And you should know that
So call me, maybe
No reflections (fullsize)
None outside either (fullsize)
Better when full/open (fullsize)
Also better when closed, much brightness (fullsize)

For over a year I have been planning to do this. My crumpler bag (the complete seed), which I bought in 2008, has been my primary commuting and daily use bag since that time, and as much as I love the bag there is one major problem: no reflective marking anywhere on the bag.
Some newer crumplers have reflective strips and other such features, and if I really wanted to spend big I could get them to do a custom bag with whatever colours and reflective bits I can dream up. There are also a number of other brands that do a courier bag with reflective bits, or even entire panels that are reflective. However, this is the bag I own and it is still perfectly good for daily use, so there is no need to go buy something new.
So I got out a $4 sewing kit I had sitting around the house and some great 3M reflective tape material, and finally spent the time to rectify this missing feature. After breaking 3 needles and spending a while getting it done, I now have a much safer bag, especially for commuting home on these dark winter nights. The sewing work is a bit messy; however, it is functional, which is all that matters to me.
Twenty-five years ago, a small band of programmers from the University of Minnesota ruled the internet. And then they didn’t.
The committee meeting where the team first presented the Gopher protocol was a disaster, “literally the worst meeting I’ve ever seen,” says Alberti. “I still remember a woman in pumps jumping up and down and shouting, ‘You can’t do that!’ ”
Among the team’s offenses: Gopher didn’t use a mainframe computer and its server-client setup empowered anyone with a PC, not a central authority. While it did everything the U (University of Minnesota) required and then some, to the committee it felt like a middle finger. “You’re not supposed to have written this!” Alberti says of the group’s reaction. “This is some lark, never do this again!” The Gopher team was forbidden from further work on the protocol.
Read the full article (a good story of Gopher and WWW history!) at https://www.minnpost.com/business/2016/08/rise-and-fall-gopher-protocol
Vicky talked about the importance of non-committing contributors, but the primary focus was on committing contributors due to time limits.
Covered the different types of drive-thru contributors and why they show up.
- Scratching an itch.
- Unwilling / Unable to find an alternative to this project
- They like you.
Why do they leave?
- Itch has been scratched.
- Not enough time.
- No longer using the project.
- Often a high barrier to contribution.
- Absence of appreciation.
- Unpleasant people.
- Inappropriate attribution.
- It takes more time to help them land patches
- Reluctance to help them "as they're not community".
It appears that many projects see community as the foundation, but Vicky contended it is contributors.
More drive-thru contributors are a sign of a healthy project and can lead to a larger community.
- Have better processes in place.
- Faster patch and release times.
- More eyes and shallower bugs
- Better community, code and project reputation.
Leads to a healthier overall project.

Methods for maximising drive-thru contributions:
- Give your project super powers.
- Ensures efficient and successful contributions.
- Minimises questions.
- Standardises processes.
- Vicky provided a documentation quick start guide.
- Code review.
- "Office hours" for communication.
- New contributor events.
- Tag starter bugs
- Contributor SLA
- Use containers / VM of dev environment
- Value contributions and contributors
- Culture of documentation
- Default to assistance
Outreach!
- Gratitude
- Recognition
- Follow-up!
Institute the "No Asshole" rule.
Keith spoke about porting Python to mobile devices. CPython being written in C enables it to leverage the supported platforms of the C language and be compiled for a wide range of platforms.
There was a deep dive into the options and pitfalls when selecting a method for, and then implementing, Python on Android phones.
Ouroboros is a pure Python implementation of the Python standard library.
Most of the tools discussed are at an early stage of development.

Why?
- Being able to run on new or mobile platforms addresses an existential threat.
- The threat also presents an opportunity to grow, broaden and improve Python.
- Wants Python to be a "first contact" language, like (Visual) Basic once was.
- Unlike Basic, Python also supports very complex concepts and operations.
- Presents an opportunity to encourage broader usage by otherwise passive users.
- Technical superiority is rarely enough to guarantee success.
- A breadth of technical domains is required for Python to become this choice.
- Technical problems are the easiest to solve.
- The most difficult problems are social and community ones, and require more attention.
Keith will be putting his focus into BeeWare and related projects.
Fortune favours the prepared mind
One of my clients has an important server running ZFS. They need a filesystem that detects corruption; while regular RAID is good for the case where a disk gives read errors, it doesn’t cover the case where a disk returns bad data and claims it to be good (which I’ve witnessed in BTRFS and ZFS systems). BTRFS is good for the case of a single disk or a RAID-1 array, but I believe that the RAID-5 code for BTRFS is not sufficiently tested for business use. ZFS doesn’t perform very well due to the checksums on data and metadata requiring multiple writes for a single change, which also causes more fragmentation. This isn’t a criticism of ZFS, it’s just an engineering trade-off for the data integrity features.
ZFS supports read-caching on a SSD (the L2ARC) and write-back caching (ZIL). To get the best benefit of L2ARC and ZIL you need fast SSD storage. So now with my client investigating 10 gigabit Ethernet I have to investigate SSD.
For some time SSDs have been in the same price range as hard drives, starting at prices well below $100. Now there are some SSDs on sale for as little as $50. One issue with SATA for server use is that SATA 3.0 (which was released in 2009 and is most commonly used nowadays) is limited to 600MB/s. That isn’t nearly adequate if you want to serve files over 10 gigabit Ethernet. SATA 3.2 was released in 2013 and supports 1969MB/s but I doubt that there’s much hardware supporting that. See the SATA Wikipedia page for more information.
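The arithmetic behind that: 10 gigabit Ethernet moves 10,000 megabits per second, and dividing by 8 bits per byte gives the MB/s figure to compare against SATA:

```shell
# 10GbE line rate in MB/s (ignoring protocol overhead, which lowers it further)
echo "$((10000 / 8)) MB/s"   # prints "1250 MB/s" - well above SATA 3.0's 600MB/s
```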
Another problem with SATA is getting the devices physically installed. My client has a new Dell server that has plenty of spare PCIe slots but no spare SATA connectors or SATA power connectors. I could have removed the DVD drive (as I did for some tests before deploying the server) but that’s ugly and only gives 1 device while you need 2 devices in a RAID-1 configuration for ZIL.

M.2
M.2 is a new standard for expansion cards; it supports SATA and PCIe interfaces (and USB, but that isn’t useful at this time). The Wikipedia page for M.2 is interesting to read for background knowledge but isn’t helpful if you are about to buy hardware.
The first M.2 card I bought had a SATA interface, then I was unable to find a local company that could sell a SATA M.2 host adapter. So I bought a M.2 to SATA adapter which made it work like a regular 2.5″ SATA device. That’s working well in one of my home PCs but isn’t what I wanted. Apparently systems that have a M.2 socket on the motherboard will usually take either SATA or NVMe devices.
The most important thing I learned is to buy the SSD storage device and the host adapter from the same place, so that you are entitled to a refund if they don’t work together.
The alternative to the SATA (AHCI) interface on an M.2 device is known as NVMe (Non-Volatile Memory Express), see the Wikipedia page for NVMe for details. NVMe not only gives a higher throughput but it gives more command queues and more commands per queue which should give significant performance benefits for a device with multiple banks of NVRAM. This is what you want for server use.
Eventually I got a M.2 NVMe device and a PCIe card for it. A quick test showed sustained transfer speeds of around 1500MB/s which should permit saturating a 10 gigabit Ethernet link in some situations.
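For anyone wanting to reproduce that kind of quick test, dd gives a rough sequential number; run it on a filesystem mounted from the device under test (a proper benchmark tool like fio is more rigorous):

```shell
# Write a test file, forcing a final fdatasync so the rate reflects the device
# rather than the page cache, then read it back. dd prints the throughput on
# stderr. Cleans up after itself.
dd if=/dev/zero of=ddtest.bin bs=1M count=256 conv=fdatasync
dd if=ddtest.bin of=/dev/null bs=1M
rm ddtest.bin
```

Note the read-back figure can be inflated by the page cache; use a test file larger than RAM (or drop caches first) for an honest read number.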
One annoyance is that the M.2 devices have a different naming convention to regular hard drives. I have devices /dev/nvme0n1 and /dev/nvme1n1; apparently that is to support multiple storage devices on one NVMe interface. Partitions have device names like /dev/nvme0n1p1 and /dev/nvme0n1p2.

Power Use
I recently upgraded my Thinkpad T420 from a 320G hard drive to a 500G SSD which made it faster but also surprisingly quieter – you never realise how noisy hard drives are until they go away. My laptop seemed to feel cooler, but that might be my imagination.
The i5-2520M CPU in my Thinkpad has a TDP of 35W but uses a lot less than that as I almost never have 4 cores in use. The z7k320 320G hard drive is listed as having 0.8W “low power idle” and 1.8W for read-write, maybe Linux wasn’t putting it in the “low power idle” mode. The Samsung 500G 850 EVO SSD is listed as taking 0.4W when idle and up to 3.5W when active (which would not be sustained for long on a laptop). If my CPU is taking an average of 10W then replacing the hard drive with a SSD might have reduced the power use of the non-screen part by 10%, but I doubt that I could notice such a small difference.
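As a rough back-of-the-envelope check of that figure (my own sketch, using the power numbers above and assuming the HDD sat at its 1.8W read-write draw rather than low-power idle, and the SSD mostly idles):

```python
# Rough power-saving estimate for swapping the z7k320 HDD for the 850 EVO SSD.
# Assumptions: HDD never reached low-power idle; SSD is idle most of the time;
# CPU averages 10W (all figures from the discussion above).
cpu_w = 10.0   # assumed average CPU draw
hdd_w = 1.8    # z7k320 read-write figure
ssd_w = 0.4    # 850 EVO idle figure

before = cpu_w + hdd_w
after = cpu_w + ssd_w
saving = (before - after) / before
print("estimated saving: %.0f%%" % (saving * 100))
```

That comes out at roughly 10-12%, consistent with the estimate above — small enough that it would be hard to notice in practice.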
I’ve read some articles about power use on the net which can be summarised as “SSDs can draw more power than laptop hard drives but if you do the same amount of work then the SSD will be idle most of the time and not use much power”.
I wonder if the SSD being slightly thicker than the HDD it replaced has affected the airflow inside my Thinkpad.
From reading some of the reviews it seems that there are M.2 storage devices drawing over 7W! That's going to create some cooling issues in desktop PCs but should be OK in a server. For laptop use they will hopefully release M.2 devices designed for low power consumption.

The Future
M.2 is an ideal format for laptops due to being much smaller and lighter than 2.5″ SSDs. Spinning media doesn’t belong in a modern laptop and using a SATA SSD is an ugly hack when compared to M.2 support on the motherboard.
Intel has released the X99 chipset with M.2 support (see the Wikipedia page for Intel X99) so it should be commonly available on desktops in the near future. For most desktop systems an M.2 device would provide all the storage that is needed (or 2*M.2 in a RAID-1 configuration for a workstation). That would give all the benefits of reduced noise and increased performance that regular SSDs provide, but with better performance and fewer cables inside the PC.
For a corporate desktop PC I think the ideal design would have only M.2 internal storage and no support for 3.5″ disks or a DVD drive. That would allow a design that is much smaller than a current SFF PC.
This is continuing on from my previous blog about NERSC’s Shifter which lets you safely use Docker containers in an HPC environment.
Getting Shifter to work in Slurm is pretty easy; it includes a plugin that you must install and tell Slurm about. My test config was just:

required /usr/lib64/shifter/shifter_slurm.so shifter_config=/etc/shifter/udiRoot.conf
as I was installing by building RPMs (our preferred method is to install the plugin into our shared filesystem for the cluster so we don't need to have it in the RAM disk of our diskless nodes). Once that is done you can add the shifter program's arguments to your Slurm batch script and then just call shifter inside it to run a process, for instance:

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=debian:wheezy

shifter cat /etc/issue
results in the following on our RHEL compute nodes:

[samuel@bruce Shifter]$ cat slurm-1734069.out
Debian GNU/Linux 7 \n \l
simply demonstrating that it works. The advantage of using the plugin and this way of specifying the images is that the plugin will prep the container for us at the start of the batch job and keep it around until it ends so you can keep running commands in your script inside the container without the overhead of having to create/destroy it each time. If you need to run something in a different image you just pass the --image option to shifter and then it will need to set up & tear down that container, but the one you specified for your batch job is still there.
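As a sketch of what that mixing looks like (my own example, not from the original post — the centos:7 image name is illustrative), a batch script can combine the job-wide image with a one-off override:

```shell
#!/bin/bash
#SBATCH -p debug
#SBATCH --image=debian:wheezy

# These run in the debian:wheezy container the plugin prepared for the
# whole job, so there is no per-command setup cost.
shifter cat /etc/issue
shifter uname -a

# A one-off command in a different image: shifter has to set up and tear
# down this container just for this command, but the job's own container
# remains available afterwards.
shifter --image=centos:7 cat /etc/redhat-release
```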
That’s great for single CPU jobs, but what about parallel applications? Well, it turns out that’s easy too – you just request the configuration you need and slap srun in front of the shifter command. You can even run MPI applications this way successfully. I grabbed the dispel4py/docker.openmpi Docker container with shifterimg pull dispel4py/docker.openmpi and tried its Python version of the MPI hello world program:

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=dispel4py/docker.openmpi
#SBATCH --ntasks=3
#SBATCH --tasks-per-node=1

shifter cat /etc/issue
srun shifter python /home/tutorial/mpi4py_benchmarks/helloworld.py
This prints the MPI rank to demonstrate that the MPI wire-up was successful, and I forced it to run the tasks on separate nodes and print the hostnames to show it’s communicating over a network, not via shared memory on the same node. But the output bemused me a little:

[samuel@bruce Python]$ cat slurm-1734135.out
Ubuntu 14.04.4 LTS \n \l

libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],0]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: bruce001

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
Hello, World! I am process 0 of 3 on bruce001.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],1]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: bruce002

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
Hello, World! I am process 1 of 3 on bruce002.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
--------------------------------------------------------------------------
[[30199,2],2]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: bruce003

Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
Hello, World! I am process 2 of 3 on bruce003.
It successfully demonstrates that it is using an Ubuntu container on 3 nodes, but the warnings are triggered because Open MPI in Ubuntu is built with Infiniband support and it is detecting the presence of the IB cards on the host nodes. This is because Shifter is (as designed) exposing the system's /sys directory to the container. The problem is that this container doesn’t include the Mellanox user-space library needed to make use of the IB cards, so you get warnings that they aren’t working and that it will fall back to a different mechanism (in this case TCP/IP over gigabit Ethernet).
Open MPI allows you to specify which transports to use, so adding one line to my batch script:

export OMPI_MCA_btl=tcp,self,sm
cleans up the output a lot:

Ubuntu 14.04.4 LTS \n \l

Hello, World! I am process 0 of 3 on bruce001.
Hello, World! I am process 2 of 3 on bruce003.
Hello, World! I am process 1 of 3 on bruce002.
This also raises the question: what does this do for latency? The image contains a Python version of the OSU latency testing program, which uses different message sizes between 2 MPI ranks to provide a histogram of performance. Running this over TCP/IP is trivial with the dispel4py/docker.openmpi container, but of course it’s lacking the Mellanox library I need, and as the whole point of Shifter is security I can’t get root access inside the container to install the package. Fortunately the author of dispel4py/docker.openmpi has published their implementation on GitHub, so I forked their repo, signed up for Docker and pushed a version which simply adds the libmlx4-1 package I needed.
Running the test over TCP/IP is simply a matter of submitting this batch script, which forces it onto 2 separate nodes:

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=chrissamuel/docker.openmpi:latest
#SBATCH --ntasks=2
#SBATCH --tasks-per-node=1

export OMPI_MCA_btl=tcp,self,sm

srun shifter python /home/tutorial/mpi4py_benchmarks/osu_latency.py
giving these latency results:

[samuel@bruce MPI]$ cat slurm-1734137.out
# MPI Latency Test
# Size [B]    Latency [us]
0             16.19
1             16.47
2             16.48
4             16.55
8             16.61
16            16.65
32            16.80
64            17.19
128           17.90
256           19.28
512           22.04
1024          27.36
2048          64.47
4096          117.28
8192          120.06
16384         145.21
32768         215.76
65536         465.22
131072        926.08
262144        1509.51
524288        2563.54
1048576       5081.11
2097152       9604.10
4194304       18651.98
To run that same test over Infiniband I just modified the export in the batch script to force it to use IB (and thus fail if it couldn’t talk between the two nodes):

#!/bin/bash
#SBATCH -p debug
#SBATCH --image=chrissamuel/docker.openmpi:latest
#SBATCH --ntasks=2
#SBATCH --tasks-per-node=1

export OMPI_MCA_btl=openib,self,sm

srun shifter python /home/tutorial/mpi4py_benchmarks/osu_latency.py
which then gave these latency numbers:

[samuel@bruce MPI]$ cat slurm-1734138.out
# MPI Latency Test
# Size [B]    Latency [us]
0             2.52
1             2.71
2             2.72
4             2.72
8             2.74
16            2.76
32            2.73
64            2.90
128           4.03
256           4.23
512           4.53
1024          5.11
2048          6.30
4096          7.29
8192          9.43
16384         19.73
32768         29.15
65536         49.08
131072        75.19
262144        123.94
524288        218.21
1048576       565.15
2097152       811.88
4194304       1619.22
So you can see that’s basically an order of magnitude improvement in latency using Infiniband compared to TCP/IP over gigabit Ethernet (which is what you’d expect).
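To put a number on that, a quick sketch (my own, using a few of the measured points from the two tables above) computes the speedup at small, medium and large message sizes:

```python
# Measured OSU latency (microseconds) from the two runs above,
# keyed by message size in bytes.
tcp = {0: 16.19, 4096: 117.28, 4194304: 18651.98}
ib  = {0: 2.52,  4096: 7.29,   4194304: 1619.22}

for size in sorted(tcp):
    speedup = tcp[size] / ib[size]
    print("%8d bytes: %5.1fx faster over IB" % (size, speedup))
```

The speedup ranges from roughly 6x at zero-byte messages up past 10x for mid-sized and large messages, which is where the "order of magnitude" characterisation comes from.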
Because there’s no virtualisation going on here there is no extra penalty to pay when doing this, no need to configure any fancy device pass through, no loss of any CPU MSR access, and so I’d argue that Shifter makes Docker containers way more useful for HPC than virtualisation or even Docker itself for the majority of use cases.
Am I excited about Shifter? Yup! The potential to allow users to build an application stack themselves, right down to the OS libraries, and (with a little careful thought) to have something that can get native interconnect performance is fantastic. Throw in the complexities of dealing with conflicting dependencies between Python modules, system libraries, bioinformatics tools, etc., and the need to provide simple methods for handling these, and the advantages seem clear.
So the plan is to roll this out into production at VLSCI in the near future. Fingers crossed!
So, way back when (sometime in the early 1990s) there was Windows 3.11 and times were… for Workgroups. There was this Windows NT thing, this OS/2 thing and something brewing at Microsoft to attempt to make the PC less… well, bloody awful for a user.
Again, thanks to abandonware sites, it’s possible now to try out very early builds of Microsoft Chicago – what would become Windows 95. With the earliest build I could find (build 56), I set to work. The installer worked from an existing Windows 3.11 install.
I ended up using full system emulation rather than normal qemu later on, as things, well, booted in full emulation and didn’t otherwise (I was building from qemu master… so it could have actually been a bug fix).
Unfortunately, I didn’t have the Plus Pack components (remember Microsoft Plus!? Yes, the exclamation mark was part of the product name; it was the 1990s) and I’m not sure if they even would have existed back then (but the installer did ask).
Obviously if you were testing Chicago, you probably did not want to upgrade your working Windows install if this was a computer you at all cared about. I installed into C:\CHICAGO because, well – how could I not!
I didn’t really try to get networking going; it may not have been fully baked into this build, or maybe just not into this copy of it. The installer there looks a bit familiar, though not like the Windows 95 one – maybe more like NT 3.1/3.51?
But at the end… it installed and it was time to reboot into Chicago:
So… this is what Windows 95 looked like during development back in July 1993 – nearly exactly two years before release. There’s some Windows logos that appear/disappear around the place, which are arguably much cooler than the eventual Windows 95 boot screen animation. The first boot experience was kind of interesting too:
Luckily, there was nothing restricting the beta site ID or anything. I just entered the number 1, and was then told it needed to be 6 digits – so beta site ID 123456 it is! The desktop is obviously different both from Windows 3.x and what ended up in Windows 95.
Those who remember Windows 3.1 may remember Dr Watson as an actual thing you could run; it was part of the whole diagnostics infrastructure in Windows, and here (as you can see) it runs by default. More odd is the “Switch To Chicago” task (which does nothing if opened) and “Tracker”. My guess is that “Switch To Chicago” is the product of some internal thing for launching the new UI. I have no idea what the “Tracker” is, but I think I found a clue in the “Find File” app:
Well, that wasn’t as exciting as I was hoping for (after all, weren’t there interesting database like file systems being researched at Microsoft in the early 1990s?). It’s about here I should show the obligatory About box:
It’s… not polished, and there’s certainly that feel throughout the OS – but two years from release, that’s likely fair enough. Speaking of not perfect:
But, most importantly, Solitaire is present! You can browse the Programs folder, head into Games and play it! One odd thing is that applications have two >> at the end, and there’s a “Parent Folder” entry too.
More unfinished things are found in the “File cabinet”, such as properties for anything:
But let’s jump into Control Panels, which I managed to get to by heading to C:\CHICAGO\Control.sys – which isn’t exactly obvious, but I think you can find it through Programs as well.

The “Window Metrics” application is really interesting! It’s obvious that the UI was not solidified yet and that there was a lot of experimenting to do. This application lets you change all sorts of things about the UI:
Another unfinished thing? That familiar Properties for My Computer, which is actually “Advanced System Features” in the control panel, and from the [Sample Information] at the bottom left, it looks like we may not be getting information about the machine it’s running on.
You do get some information in the System control panel, but a lot of it is unfinished. It seems as if Microsoft was experimenting with a few ways to express information and modify settings.
By Andrew Lonsdale.
- Talked about using python-ppt for collaborating on PowerPoint presentations.
- Covered his journey so far and the lessons he learned.
- Gave some great examples of re-creating XKCD comics in Python (matplotlib_venn).
- Claimed the diversion into Python and Matplotlib has helped his actual research.
- Spoke about how using Python is great for Scientific research.
- Summarised that side projects are good for Science and Python.
- Recommended Elegant SciPy
- Demoed using Emoji to represent bioinformatics data using FASTQE (FASTQ as Emoji).