Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Sridhar Dhanapalan: Twitter posts: 2015-06-08 to 2015-06-14

Mon, 2015-06-15 00:27

Stewart Smith: hello world as ppc64le OPAL payload!

Sun, 2015-06-14 13:27

While the in-tree hello-world kernel (originally by me, and Mikey managed to CUT THE BLOAT of a whole SEVENTEEN instructions down to a tiny ten) is very, very dumb (and does one thing, print “Hello World” to the console), there’s now an alternative for those who like to play with a more feature-rich Hello World rather than booting a more “real” OS such as Linux. In case you’re wondering, we use the hello world kernel as a tiny test that we haven’t completely and utterly broken things when merging/developing code.

https://github.com/andreiw/ppc64le_hello is a wonderful example of a small (INTERACTIVE!) starting point for a PowerNV (as it’s called in Linux) or “bare metal” (i.e. non-virtualised) OS on POWER.

What’s more impressive is that this was all developed using the simulator rather than real hardware (although I think somebody has tried it on some now).

Kind of neat!

Binh Nguyen: The Value of Money - Part 2

Sat, 2015-06-13 16:28
This is obviously a continuation from my last post, http://dtbnguyen.blogspot.com.au/2015/06/repairing-musical-instrumentselectrical.html

No one wants to live from day to day, week to week and for the most part you don't have that when you have a salaried job. You regularly receive a lump sum each fortnight or month from which you draw down to pay for life's expenses.



Over time you actually discover it's an illusion though. A former teacher of mine once said that a salary of about 70-80K wasn't all that much. To kids that seemed like a lot of money though. Now it actually makes a lot more sense. Factor in tax, life expenses, rental, etc... and most of it dries up very quickly.



When you head to business or law school it's the same thing. You regularly deal with millions, billions, and generally gratuitous amounts of money. This doesn't change all that much when you head out into the real world. The real world creates a perception whereby consumption and possession of certain material goods are almost a necessity in order to live and work comfortably within your profession. Ultimately, this means that no matter how much you earn it still doesn't seem like it's enough.



The greatest irony of this is that you only really discover that the perception of the value of such (gratuitous) goods changes drastically if you are on your own or you are building a company.



I semi-regularly receive offers of business/job opportunities through this blog and other avenues (scams as well as real offers; thankfully, most of the 'fishy' ones are picked up by spam filters). The irony is this: I know that no matter how much money is thrown at a business there is still no guarantee of success, and a lot of the time savings can dry up in a very short space of time (especially if it is a 'standard business', namely one that doesn't have a crazy level of growth ('real growth', not anticipated or 'projected growth')).



This is particularly the case if specialist 'consultants' (they can charge you a lot of money for what seems like obvious advice) need to be brought in. The thing I'm seeing is that basically a lot of what we sell one another is 'mumbo jumbo'. Stuff that we generally don't need but ultimately convince one another of in order to make a living and perhaps even allow us to do something we enjoy.



What complicates this further is that no matter how much terminology and theory we throw at something, ultimately most people don't value things at the same level. A good example of this is asking random people what the value of a used iPod Classic 160GB is. I remember questioning the value (200) quoted by a salesman. He justified the store price by stating that people were selling it for 600-700 on eBay. A struggling student would likely value it at closer to 150. A person in hospitality valued it at 240. The average, knowledgeable community member would most likely remember, and perceive, the value at the highest mark though.



Shift this back into the workplace and things become even more complicated. Think about the 'perception' of your profession. A short while back I met a sound engineer who made a decent salary (around 80K) but had to work 18 hour days continuously based on his description. His quality of life was ultimately shot and his wage should have obviously been much higher. His perceived value was 80K. His effective value was much lower.



Think about 'perception' once more. Some doctors/specialists who migrate have the skills to practise but not the money to purchase insurance, re-take certification exams, etc... and so become taxi drivers in their new country. Their effective value (as a worker) becomes that of a taxi driver, nothing more.



Many skilled professions actually require extended periods of study/training, an apprenticeship of some form, a huge number of hours put in, or just time spent trying to market your skills. A good chunk of people may end up making a lot of money but most don't. Perceived value is the end salary but actual value is much lower.

Think about 'perception' in IT. In some companies they look down upon you if you work in this particular area. What's interesting is what they use you for. They basically shove the more menial tasks downwards into the IT department because 'nobody else wants to do it'. The perceived value of the worker in question doesn't seem much different from that of a labourer.



The irony is that they're often just as well qualified as anybody else in the firm in question, and the work can often be varied enough to make you wonder what exactly the actual value of an average IT worker is. I've been trying to do the calculations. The average IT graduate is worth about 55K.

http://www.abs.gov.au/ausstats/abs@.nsf/Lookup/4125.0main+features2320Jan%202013

http://www.payscale.com/research/AU/Job=Graduate_Software_Engineer/Salary

http://www.graduatecareers.com.au/research/researchreports/graduatesalaries/



Assuming he works at an SME (any industry, not just IT) firm he'll be doing a lot of varied tasks (a lot of firms will tend to pigeonhole you into becoming a specialist). At a lot of service providers and SME firms I've looked at, one hour of downtime equates to about five figures. If you work in the right firm or you end up really good at your job you end up saving your firm somewhere between 5-7 figures each year. At much larger firms this figure is closer to about 6-8 figures each year.



At a lot of firms we suffer from hardware failure. The standard procedure is to simply purchase new hardware to deal with the problem (it's quicker and technically free, despite the possible downtime lost to diagnosis and response time). The thing I've found out is that if you are actually able to repair/re-design the hardware itself you can actually save/make a lot (particularly with telecommunications and network hardware). This is especially the case if the original design cut corners. Once again, savings are similar to the previous point.



In an average firm there may be a perception that IT is simply there to support the functions of the business. It's almost like a utility now (think electricity, water, gas, etc... That's how low some companies perceive technology. They perceive it to be a mere cost rather than something that can benefit their business). What a lot of people neglect is how much progress can be made given the use of appropriate technology. Savings/productivity gains are similar to the previous points.



What stops us from realising exactly what our value is, is the siloed nature of the modern business world (specialists rather than generalists a lot of the time) and the fact that various laws, regulations, and so on are designed to help stop us from being potentially exploited.



The only way you actually realise what you're worth is if you work as an individual or start a company.



Go ahead, break down what you actually do in your day. You'll be surprised at how much you may actually be worth.



What you ultimately find out though is that (if you're not lazy) you're probably underpaid. The irony is that if the company were to pay you exactly what you were worth it would go bankrupt. Moreover, you only realistically have a small number of chances/opportunities to demonstrate your true worth. A lot of the time jobs are conducted on the basis of intermittency. Namely, you're there to do something specialised and difficult every once in a while, not necessarily all the time.



It would be a really interesting world if we didn't have company structures/businesses. I keep on finding out, over and over again, that you simply get paid more for more skills as an individual. This is especially the case if there is no artificial barrier between you and getting the job done. The work mightn't be stable, but once you deal with that you have a very different perspective on the world, even if it's only a part-time job.



If you have some talent, I'd suggest you try starting your own company or work as an individual at some point in your life. The obvious problem will be coming up with an idea which will create money though. Don't worry about it. You will find opportunities along the way as you gain more life experience and understand where value comes from. At that point, start doing the numbers and do a few tests to see whether your business instincts are correct. You may be surprised at what you end up finding out.

http://forums.whirlpool.net.au/archive/1505450



Here are some other things I've worked out:

  • if you need a massive and complex business plan in order to justify your business's existence (particularly to investors) then you should rethink your business
  • if you need to 'spin things' or else have a bloated marketing department then there's likely nothing much special about the product or service that you are selling
  • if your business is fairly complex at a small level, think about what it will be like when it scales up. Try to remove as many obstacles as you can while your company is still young to ensure future success if unexpected growth comes your way
  • if you narrow yourself to one particular field you can limit your opportunities. In the normal world it can lead to stagnation (no real change in salary/value) or specialisation (an increase in salary/value), though neither is a given. In smaller companies multiple roles may be critical to the survival/profitability of that particular company. The obvious risk is that if such a person leaves you're trying to fill in for multiple roles
  • a lot of goods and services exist in a one-to-one relationship. You can only sell each once and you have to maximise the profit on that. Through the use of broadcast-style technologies we can achieve one-to-many relationships, allowing us to build substantial wealth easily and quickly. This makes valuation of technology companies much more difficult. However, once you factor in overheads and risk of success versus failure things tend to normalise
  • perception means a lot. Think about a pair of Nike runners versus standard supermarket branded ones. There is sometimes very little difference in quality though the price of the Nike runners may be double. The same goes for some of the major fashion labels. They are sometimes produced en-masse in cheap Asian/African countries
  • if there are individuals and companies offering the opportunity to engage in solid business ventures, take them. Your perspective on life and lifestyle will change drastically if things turn out successfully
  • in reality, there are very few businesses where you can genuinely say the future looks bright for all of eternity. This is the same across every single sector
  • make friends with everyone. You'll be surprised at what you can learn and what opportunities you may be able to find
  • the meaning of 'market value' largely dissolves into nothingness in the real world. Managing perception accounts for a good deal of what you can charge for something
  • just like investments the value of a good or service will normalise over time. You need volatility (this can be achieved via any means) to be able to make abnormal profits though
  • for companies where goods and services have high overheads, 7-8 figures a week/month/year can mean nothing. If the overheads are high enough it's possible that the company may go under in a very short space of time. Find something which doesn't have this problem and focus in on that, whether it be a primary or side business
  • the more you know the better off you'll be if you're willing to take calculated risks, are patient, and persevere. Most of the time things will normalise
  • in general, the community perception is that making more with high expenses is more successful than making less with no expenses
  • comments from people like Joe Hockey make a lot of sense to those who have had a relatively privileged background but they also go to the core of the matter. There are a lot of impediments in life now. I once recall walking past a begging 'aboriginal'. A white middle-upper class man simply admonished him to get a job. If you've ever worked with people like that or you've ever factored in his background you'll realise that this is almost impossible. Everybody has a go at people who work within the 'cash economy' and do not contribute to the tax base of the country but it's easy to understand a lot of why people do it. There are a lot of impediments in life despite whatever anyone says whether you're working at the top or bottom end of the scale
http://forums.whirlpool.net.au/archive/1937638

http://www.abc.net.au/news/2015-06-10/janda-its-not-hockeys-job-comment-that-should-worry-us/6535484

http://www.smh.com.au/comment/smh-letters/joe-hockey-doesnt-grasp-simple-economics-20150610-ghkl9v.html

http://www.bbc.co.uk/news/education-33109052

  • throw in some weirdness like strange pay for seemingly unskilled jobs and everything looks bizarre. A good example of this is a nightfill worker (stock stacker) at a supermarket in Australia. He can actually earn a lot more than those in skilled professions. It's not just about skills or knowledge when it comes to earning a high wage
http://forums.whirlpool.net.au/archive/2219972

http://forums.whirlpool.net.au/archive/1937638
  • there are a lot of overqualified people out there (but there are a hell of a lot more underqualified people out there as well. I've worked both sides of the equation). If you are lucky someone will give you a chance at something appropriate to your level, but a lot of the time you'll just have to make do
  • you may be shocked at how, who, and what makes money and vice-versa (how, who, and what doesn't make money). For instance, something which you can get for free you can sell while some products/services which have had a lot of effort put into them may not get any sales
https://www.ozbargain.com.au/node/197991

  • there are very few companies that you could genuinely say are 100% technology orientated. Even in companies that are supposedly technology orientated there are still political issues that you must deal with
  • by using certain mechanisms you can stop resales of your products/services, which can force purchases only through known avenues. This is a common strategy in the music industry with MIDI controllers and stops erosion/cannibalisation of sales of new product through minimisation of sales of used products
  • it's easy to be impressed by people who are simply quoting numbers. Do your research. People commonly quote high growth figures but in reality most aren't as impressive as they seem. They seem even less impressive when you factor in inflation, Quantitative Easing programs, etc... In a lot of cases companies/industries (even many countries, if you think about it) would actually be at a standstill or else going backwards.
http://www.inc.com/sageworks/the-15-most-profitable-industries-for-private-companies.html

https://biz.yahoo.com/p/sum_qpmd.html

http://www.forbes.com/sites/sageworks/2013/04/28/the-most-profitable-businesses-to-start/

http://www.forbes.com/sites/sageworks/2014/08/31/the-least-profitable-businesses-in-the-u-s/

http://www.businessinsider.com/sector-profit-margins-sp-500-2012-8



http://www.tradingeconomics.com/country-list/inflation-rate

https://en.wikipedia.org/wiki/List_of_countries_by_inflation_rate

http://data.worldbank.org/indicator/FP.CPI.TOTL.ZG

https://en.wikipedia.org/wiki/Quantitative_easing



http://data.worldbank.org/indicator/NY.GDP.MKTP.KD.ZG

https://en.wikipedia.org/wiki/List_of_countries_by_real_GDP_growth_rate

Simon Lyall: Feeds I follow: Citylab, Commitstrip, MKBHD, Offsetting Bahaviour

Sat, 2015-06-13 15:28

I thought I’d list of the feeds/blogs/sites I currently follow. Mostly I do this via RSS using Newsblur.


James Bromberger: Logical Volume Management with Debian on Amazon EC2

Sat, 2015-06-13 00:27

The recent AWS introduction of the Elastic File System gives you an automatic grow-and-shrink capability as an NFS mount, an exciting option that takes away the previous overhead in creating shared block file systems for EC2 instances.

However it should be noted that the same auto-management of capacity is not true of the EC2 instance’s Elastic Block Store (EBS) block storage disks; sizing (and resizing) is left to the customer. With current 2015 EBS, one cannot simply increase the size of an EBS Volume as the storage becomes full; (as at June 2015) an EBS volume, once created, has a fixed size. For many applications, that lack of a resize function on its local EBS disks is not a problem; many server instances come into existence for a brief period, process some data and then get Terminated, so long-term management is not needed.

However for a long term data store on an instance (instead of S3, which I would recommend looking closely at from a durability and pricing fit), and where I want to harness the capacity to grow (or shrink) disk for my data, then I will need to leverage some slightly more advanced disk management. And just to make life interesting, I wish to do all this while the data is live and in-use, if possible.

Enter: Logical Volume Management, or LVM. It’s been around for a long, long time: LVM 2 made a debut around 2002-2003 (2.00.09 was Mar 2004) — and LVM 1 was many years before that — so it’s pretty mature now. It’s a powerful layer that sits between your raw storage block devices (as seen by the operating system), and the partitions and file systems you would normally put on them.

In this post, I’ll walk through the process of getting set up with LVM on Debian in the AWS EC2 environment, and how you’d do some basic maintenance to add and remove (where possible) storage with minimal interruption.

Getting Started

First a little prep work for a new Debian instance with LVM.

As I’d like to give the instance its own ability to manage its storage, I’ll want to provision an IAM Role for EC2 Instances for this host. In the AWS console, visit IAM, Roles, and I’ll create a new Role I’ll name EC2-MyServer (or similar), and at this point I’ll skip giving it any actual privileges (later we’ll update this). As at this date, we can only associate an instance role/profile at instance launch time.

Now I launch a base-image Debian EC2 instance with this IAM Role/Profile; the root file system is an EBS Volume. I am going to put the data that I’ll be managing on a separate disk from the root file system.

First, I need to get the LVM utilities installed. It’s a simple package to install: the lvm2 package. From my EC2 instance I need to get root privileges (sudo -i) and run:

apt update && apt install lvm2

After a few moments, the package is installed. I’ll choose a location that I want my data to live in, such as /opt/.  I want a separate disk for this task for a number of reasons:

  1. Root EBS volumes cannot currently be encrypted using Amazon’s Encrypted EBS Volumes. If I want to also use AWS’ encryption option, it’ll have to be on a non-root disk. Note that instance-size restrictions also exist for EBS Encrypted Volumes.
  2. It’s possibly not worth making a snapshot of the Operating System at the same time as the user content data I am saving. The OS install (except the /etc/ folder) can almost entirely be recreated from a fresh install, so why snapshot that as well (unless that’s your strategy for preserving /etc, /home, etc.).
  3. The type of EBS volume that you require may be different for different data: today (Apr 2015) there is a choice of Magnetic, General Purpose 2 (GP2) SSD, and Provisioned IO/s (PIOPS) SSD, each with different costs; and depending on our volume, we may want to select one for our root volume (operating system), and something else for our data storage.
  4. I may want to use EBS snapshots to clone the disk to another host, without the base OS bundled in with the data I am cloning.

I will create this extra volume in the AWS console and present it to this host. I’ll start by using a web browser (we’ll use CLI later) with the EC2 console.

The first piece of information we need to know is where my EC2 instance is running. Specifically, the AWS Region and Availability Zone (AZ). EBS Volumes only exist within the one designated AZ. If I accidentally make the volume(s) in the wrong AZ, then I won’t be able to connect them to my instance. It’s not a huge issue, as I would just delete the volume and try again.

I navigate to the “Instances” panel of the EC2 Console, and find my instance in the list:

A (redacted) list of instances from the EC2 console.

Here I can see I have located an instance and it’s running in US-East-1A: that’s AZ A in Region US-East-1. I can also grab this with a wget from my running Debian instance by asking the MetaData server:

wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone

The returned text is simply: “us-east-1a”.

Time to navigate to “Elastic Block Store“, choose “Volumes” and click “Create“:

Creating a volume in AWS EC2: ensure the AZ is the same as your instance

You’ll see I selected that I wanted AWS to encrypt this; as noted above, at this time that doesn’t include the t2 family. However, you have the option of using encryption with LVM – where the customer looks after the encryption key – see LUKS.

What’s nice is that I can do both — have AWS Encrypted Volumes, and then use encryption on top of this, but I have to manage my own keys with LUKS, and should I lose them, then I can keep all the cyphertext!

I deselected this for my example (with a t2.micro), and continue; I could see the new volume in the list as “creating”, and then shortly afterwards as “available”. Time to attach it: select the disk, and either right-click and choose “Attach“, or from the menu at the top of the list, chose “Actions” -> “Attach” (both do the same thing).

Attaching a volume to an instance: you’ll be prompted for the compatible instances in the same AZ.

At this point in time your EC2 instance will now notice a new disk; you can confirm this with “dmesg |tail“, and you’ll see something like:

[1994151.231815]  xvdg: unknown partition table

(Note the time-stamp in square brackets will be different).

Previously at this juncture you would format the entire disk with your favourite file system, mount it in the desired location, and be done. But we’re adding in LVM here – between this “raw” device, and the filesystem we are yet to make….
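
For contrast, a minimal sketch of that traditional, LVM-free route (not what we do here) would be something along these lines, assuming the same /dev/xvdg device:

# Put a filesystem straight onto the raw EBS device and mount it
mkfs.ext4 /dev/xvdg
mkdir -p /opt
mount /dev/xvdg /opt
# Simple, but the filesystem is now tied to this single, fixed-size volume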

Marking the block device for LVM

Our first operation with LVM is to put a marker on the volume to indicate it’s being used for LVM – so that when we scan the block device, we know what it’s for. It’s a really simple command:

pvcreate /dev/xvdg

The device name above (/dev/xvdg) should correspond to the one we saw from the dmesg output above. The output of the above is rather straightforward:

  Physical volume "/dev/xvdg" successfully created

Checking our EBS Volume

We can check on the EBS volume – which LVM sees as a Physical Volume – using the “pvs” command.

# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/xvdg       lvm2 ---  5.00g 5.00g

Here we see the entire disk is currently unused.

Creating our First Volume Group

Next step, we need to make an initial LVM Volume Group which will use our Physical volume (xvdg). The Volume Group will then contain one (or more) Logical Volumes that we’ll format and use. Again, a simple command to create a volume group by giving it its first physical device that it will use:

# vgcreate OptVG /dev/xvdg
  Volume group "OptVG" successfully created

And likewise we can check our set of Volume Groups with "vgs":

# vgs
  VG    #PV #LV #SN Attr   VSize VFree
  OptVG   1   0   0 wz--n- 5.00g 5.00g

The Attribute flags here indicate this is writable, resizable, and allocating extents in “normal” mode. Let’s proceed to make our (first) Logical Volume in this Volume Group:

# lvcreate -n OptLV -L 4.9G OptVG
  Rounding up size to full physical extent 4.90 GiB
  Logical volume "OptLV" created

You’ll note that I have created our Logical Volume as almost the same size as the entire Volume Group (which is currently one disk) but I left some space unused: the reason for this comes down to keeping some space available for any jobs that LVM may want to use on the disk – and this will be used later when we want to move data between raw disk devices.

If I wanted to use LVM for Snapshots, then I’d want to leave more space free (unallocated) again.
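
As an aside (not part of this walkthrough), carving a snapshot out of that reserved space would look roughly like the sketch below; the snapshot name and size are illustrative only:

# Create a 500 MB copy-on-write snapshot of OptLV within the same Volume Group
lvcreate --snapshot --name OptLV-snap --size 500M /dev/OptVG/OptLV
# ...take a backup from the snapshot, then drop it when finished
lvremove /dev/OptVG/OptLV-snap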

We can check on our Logical Volume:

# lvs
  LV    VG    Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  OptLV OptVG -wi-a----- 4.90g

The attributes indicate that the Logical Volume is writeable, is allocating its data to the disk in inherit mode (i.e., as the Volume Group is doing), and that it is active. At this stage you may also discover we have a device /dev/OptVG/OptLV, and this is what we’re going to format and mount. But before we do, we should review what file system we’ll use.

Filesystems

Popular Linux file systems:

Name    Shrink        Grow   Journal   Max File Sz   Max Vol Sz
btrfs   Y             Y      N         16 EB         16 EB
ext3    Y (off-line)  Y      Y         2 TB          32 TB
ext4    Y (off-line)  Y      Y         16 TB         1 EB
xfs     N             Y      Y         8 EB          8 EB
zfs*    N             Y      Y         16 EB         256 ZB

For more details see the Wikipedia comparison. Note that ZFS requires a 3rd-party kernel module or the FUSE layer, so I’ll discount that here. Btrfs only went stable with Linux kernel 3.10, so with Debian Jessie that’s a possibility; but for tried and trusted, I’ll use ext4.

The selection of ext4 also means that I’ll only be able to shrink this file system off-line (unmounted).
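
To make that trade-off concrete, a later shrink would have to look something like this sketch (sizes are illustrative; the filesystem must be shrunk before the Logical Volume, never the other way around):

umount /opt
e2fsck -f /dev/OptVG/OptLV        # a forced check is required before an offline shrink
resize2fs /dev/OptVG/OptLV 4G     # shrink the ext4 filesystem first...
lvreduce -L 4G /dev/OptVG/OptLV   # ...then shrink the Logical Volume to match
mount /dev/OptVG/OptLV /opt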

I’ll make the filesystem:

# mkfs.ext4 /dev/OptVG/OptLV
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 1285120 4k blocks and 321280 inodes
Filesystem UUID: 4f831d17-2b80-495f-8113-580bd74389dd
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

And now mount this volume and check it out:

# mount /dev/OptVG/OptLV /opt/
# df -HT /opt
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/OptVG-OptLV ext4  5.1G   11M  4.8G   1% /opt

Lastly, we want this to be mounted next time we reboot, so edit /etc/fstab and add the line:

/dev/OptVG/OptLV /opt ext4 noatime,nodiratime 0 0

With this in place, we can now start using this disk.  I selected here not to update the filesystem every time I access a file or folder – updates get logged as normal but access time is just ignored.
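
An optional, quick sanity check that the new /etc/fstab entry behaves as expected before the next reboot (assuming nothing is using /opt yet):

umount /opt
mount /opt        # re-mount purely from the /etc/fstab entry
findmnt /opt      # confirm the device, the ext4 type and the noatime,nodiratime options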

Time to expand

After some time, our 5 GB /opt/ disk is rather full, and we need to make it bigger, but we wish to do so without any downtime. Amazon EBS doesn’t support resizing volumes, so our strategy is to add a new larger volume, and remove the older one that no longer suits us; LVM and ext4’s online resize ability will allow us to do this transparently.

For this example, we’ll decide that we want a 10 GB volume. It can be a different type of EBS volume to our original – we’re going to online-migrate all our data from one to the other.

As when we created the original 5 GB EBS volume above, create a new one in the same AZ and attach it to the host (perhaps a /dev/xvdh this time). We can check the new volume is visible with dmesg again:

[1999786.341602]  xvdh: unknown partition table

And now we initialise this as a Physical Volume for LVM:

# pvcreate /dev/xvdh
  Physical volume "/dev/xvdh" successfully created

And then add this disk to our existing OptVG Volume Group:

# vgextend OptVG /dev/xvdh
  Volume group "OptVG" successfully extended

We can now review our Volume group with vgs, and see our physical volumes with pvs:

# vgs
  VG    #PV #LV #SN Attr   VSize  VFree
  OptVG   2   1   0 wz--n- 14.99g 10.09g
# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/xvdg  OptVG lvm2 a--   5.00g 96.00m
  /dev/xvdh  OptVG lvm2 a--  10.00g 10.00g

There are now 2 Physical Volumes – we have a 4.9 GB filesystem taking up space, so 10.09 GB of unallocated space in the VG.

Now it’s time to stop using the /dev/xvdg volume for any new requests:

# pvchange -x n /dev/xvdg
  Physical volume "/dev/xvdg" changed
  1 physical volume changed / 0 physical volumes not changed

At this time, our existing data is on the old disk, and our new data is on the new one. It’s now that I’d recommend running GNU screen (or similar) so you can detach from this shell session and reconnect, as the process of migrating the existing data can take some time (hours for large volumes):

# pvmove /dev/xvdg /dev/xvdh
  /dev/xvdg: Moved: 0.1%
  /dev/xvdg: Moved: 8.6%
  /dev/xvdg: Moved: 17.1%
  /dev/xvdg: Moved: 25.7%
  /dev/xvdg: Moved: 34.2%
  /dev/xvdg: Moved: 42.5%
  /dev/xvdg: Moved: 51.2%
  /dev/xvdg: Moved: 59.7%
  /dev/xvdg: Moved: 68.0%
  /dev/xvdg: Moved: 76.4%
  /dev/xvdg: Moved: 84.7%
  /dev/xvdg: Moved: 93.3%
  /dev/xvdg: Moved: 100.0%

During the move, checking the Monitoring tab in the AWS EC2 Console for the two volumes should show one with a large data Read metric, and one with a large data Write metric – clearly data should be flowing off the old disk, and on to the new.

A note on disk throughput

The above move was of a pretty small, mostly empty volume. Larger disks will take longer, naturally, so getting some speed out of the process may be key. There are a few things we can do to tweak this:

  • EBS Optimised: a launch-time option that reserves network throughput from certain instance types back to the EBS service within the AZ. Depending on the size of the instance this is 500 MB/sec up to 4GB/sec. Note that for the c4 family of instances, EBS Optimised is on by default.
  • Size of GP2 disk: the larger the disk, the longer it can sustain high IO throughput – but read this for details.
  • Size and speed of PIOPs disk: if consistent high IO is required, then moving to Provisioned IO disk may be useful. Looking at the (2 weeks) history of Cloudwatch logs for the old volume will give me some idea of the duty cycle of the disk IO.
Back to the move…

Upon completion I can see that the disk in use is the new disk and not the old one, using pvs again:

# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/xvdg  OptVG lvm2 ---   5.00g 5.00g
  /dev/xvdh  OptVG lvm2 a--  10.00g 5.09g

So all 5 GB is now unused (compare to above, where only 96 MB was PFree). With that disk not containing data, I can tell LVM to remove the disk from the Volume Group:

# vgreduce OptVG /dev/xvdg
  Removed "/dev/xvdg" from volume group "OptVG"

Then I cleanly wipe the labels from the volume:

# pvremove /dev/xvdg
  Labels on physical volume "/dev/xvdg" successfully wiped

If I really want to clean the disk, I could choose to use shred(1) on the disk to overwrite it with random data. This can take a long time.
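
If you do go down that path, a single pass of random data over the (now unused) device is as simple as the sketch below; expect it to run for a long while on big volumes:

shred -v -n 1 /dev/xvdg   # -v shows progress, -n 1 does a single overwrite pass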

Now that the disk is completely unused and disassociated from the VG, I can return to the AWS EC2 Console, and detach the disk:

Detach an EBS volume from an EC2 instance

Wait for a few seconds, and the disk is then shown as “available“; I then chose to delete the disk in the EC2 console (and stop paying for it).

Back to the Logical Volume – it’s still 4.9 GB, so I add 4.5 GB to it:

# lvresize -L +4.5G /dev/OptVG/OptLV
  Size of logical volume OptVG/OptLV changed from 4.90 GiB (1255 extents) to 9.40 GiB (2407 extents).
  Logical volume OptLV successfully resized

We now have 0.6GB free space on the physical volume (pvs confirms this).

Finally, it’s time to expand our ext4 file system:

# resize2fs /dev/OptVG/OptLV
resize2fs 1.42.12 (29-Aug-2014)
Filesystem at /dev/OptVG/OptLV is mounted on /opt; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/OptVG/OptLV is now 2464768 (4k) blocks long.

And with df we can now see:

# df -HT /opt/
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/OptVG-OptLV ext4  9.9G   12M  9.4G   1% /opt

Automating this

The IAM Role I made at the beginning of this post is now going to be useful. I’ll start by adding an IAM Policy to the Role to permit me to List Volumes, Create Volumes, Attach Volumes and Detach Volumes for my instance-id. Let’s start with creating a volume, with a policy like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateNewVolumes",
      "Action": "ec2:CreateVolume",
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:AvailabilityZone": "us-east-1a",
          "ec2:VolumeType": "gp2"
        },
        "NumericLessThanEquals": {
          "ec2:VolumeSize": "250"
        }
      }
    }
  ]
}

This policy puts some restrictions on the volumes that this instance can create: only within the given Availability Zone (matching our instance), only GP2 SSD (no PIOPs volumes), and size no more than 250 GB. I’ll add another policy to permit this instance role to tag volumes in this AZ that don’t yet have a tag called InstanceId:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TagUntaggedVolumeWithInstanceId",
      "Action": [
        "ec2:CreateTags"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:us-east-1:1234567890:volume/*",
      "Condition": {
        "Null": {
          "ec2:ResourceTag/InstanceId": "true"
        }
      }
    }
  ]
}

Now that I can create (and then tag) volumes, this becomes a simple procedure as to what else I can do to this volume. Deleting and creating snapshots of this volume are two obvious options, and the corresponding policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateDeleteSnapshots-DeleteVolume-DescribeModifyVolume",
      "Action": [
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot",
        "ec2:DeleteVolume",
        "ec2:DescribeSnapshotAttribute",
        "ec2:DescribeVolumeAttribute",
        "ec2:DescribeVolumeStatus",
        "ec2:ModifyVolumeAttribute"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/InstanceId": "i-123456"
        }
      }
    }
  ]
}

Of course it would be lovely if I could use a variable inside the policy condition instead of the literal string of the instance ID, but that’s not currently possible.

Clearly some of the more important actions I want to take are to attach and detach a volume to my instance:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1434114682836",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:us-east-1:123456789:volume/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/InstanceId": "i-123456"
        }
      }
    },
    {
      "Sid": "Stmt1434114745717",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:us-east-1:123456789:instance/i-123456"
    }
  ]
}

Now with this in place, we can start to fire up the AWS CLI we spoke of. We’ll let the CLI inherit its credentials from the IAM Instance Role and the policies we just defined.

AZ=`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone`
Region=`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone|rev|cut -c 2-|rev`
InstanceId=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`
VolumeId=`aws ec2 --region ${Region} create-volume --availability-zone ${AZ} --volume-type gp2 --size 1 --query "VolumeId" --output text`
aws ec2 --region ${Region} create-tags --resources ${VolumeId} --tags Key=InstanceId,Value=${InstanceId}
aws ec2 --region ${Region} attach-volume --volume-id ${VolumeId} --instance-id ${InstanceId} --device /dev/xvdf   # attach-volume needs a device name; pick one not already in use

…and at this stage, the above manipulation of the raw block device with LVM can begin. Likewise you can then use the CLI to detach and destroy any unwanted volumes if you are migrating off old block devices.
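
For completeness, the reverse clean-up can be scripted the same way once LVM has released the old disk. A minimal sketch, assuming the retiring volume's ID is held in an OldVolumeId variable (a name used here purely for illustration) and that the role permits ec2:DetachVolume and ec2:DeleteVolume on it:

# Detach the now-unused EBS volume from this instance, wait for it to settle, then delete it
aws ec2 --region ${Region} detach-volume --volume-id ${OldVolumeId}
aws ec2 --region ${Region} wait volume-available --volume-ids ${OldVolumeId}
aws ec2 --region ${Region} delete-volume --volume-id ${OldVolumeId}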

Clinton Roy: clintonroy

Fri, 2015-06-12 19:34

We are delighted to announce that online registration is now open for PyCon Australia 2015. The sixth PyCon Australia is being held in Brisbane, Queensland from July 31st – 4th August at the Pullman Brisbane and is expected to draw hundreds of Python developers, enthusiasts and students from Australasia and afar.

Starting today, early bird offers are up for grabs. To take advantage of these discounted ticket rates, be among the first 100 to register. Early bird registration starts from $50 for full-time students, $180 for enthusiasts and $460 for professionals. Offers this good won’t last long, so head straight to http://2015.pycon-au.org and register right away.

PyCon Australia has endeavoured to keep tickets as affordable as possible. We are able to do so, thanks to our Sponsors and Contributors.

We have also worked out favourable deals with accommodation providers for PyCon delegates. Find out more about the options at http://2015.pycon-au.org/register/accommodation

To begin the registration process, and find out more about each level of ticket, visit http://2015.pycon-au.org/register/prices

Important Dates to Help You Plan

June 8: Early Bird Registration Opens — open to the first 100 tickets

June 29: Financial Assistance program closes.

July 8: Last day to Order PyCon Australia 2015 T-shirts

July 19: Last day to Advise Special Dietary Requirements

July 31 : PyCon Australia 2015 Begins

About PyCon Australia

PyCon Australia is the national conference for the Python Programming Community. The sixth PyCon Australia will be held on July 31 through August 4th, 2015 in Brisbane, bringing together professional, student and enthusiast developers with a love for developing with Python. PyCon Australia informs the country’s Python developers with presentations, tutorials and panel sessions by experts and core developers of Python, as well as the libraries and frameworks that they rely on.

To find out more about PyCon Australia 2015, visit our website at http://pycon-au.org or e-mail us at contact@pycon-au.org.

PyCon Australia is presented by Linux Australia (www.linux.org.au) and acknowledges the support of our Platinum Sponsors, Red Hat Asia-Pacific, and Netbox Blue; and our Gold sponsors, The Australian Signals Directorate and Google Australia. For full details of our sponsors, see our website.




Stewart Smith: gcov code coverage for OpenPower firmware

Fri, 2015-06-12 18:26

For skiboot (which provides the OPAL boot and runtime firmware for OpenPower machines), I’ve been pretty interested in getting some automated code coverage data for booting on real hardware (as well as in a simulator). Why? Well, it’s useful to see that various test suites are actually testing what you think they are, and it helps you be able to define more tests to increase what you’re covering.

The typical way to do code coverage is to make GCC build your program with GCOV, which is pretty simple if you’re a userspace program. You build with gcov, run the program, and at the end you’re left with files on disk that contain all the coverage information for a tool such as lcov to consume. For the Linux kernel, you can also do this, and then extract the GCOV data out of debugfs and get code coverage for all/part of your kernel. It’s a little bit more involved for the kernel, but not too much so.
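
For reference, a minimal sketch of that userspace flow (the program and file names are illustrative) looks roughly like:

# Build with coverage instrumentation, run the program, then turn the
# resulting .gcno/.gcda files into an HTML report with lcov/genhtml
gcc --coverage -O0 -o myprog myprog.c
./myprog
lcov --capture --directory . --output-file coverage.info
genhtml coverage.info --output-directory coverage-html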

To achieve this, the kernel has to implement a bunch of stub functions itself rather than link to the gcov library as well as parse the GCOV data structures that GCC generates and emit the gcda files in debugfs when read. Basically, you replace the part of the GCC generated code that writes the files out. This works really nicely as Linux has fancy things like a VFS and debugfs.

For skiboot, we have no such things. We are firmware; we don’t have a damn file system interface. So, what do we do? Write a userspace utility to parse a dump of the appropriate region of memory, easy! That’s exactly what I did: a (relatively) simple user space app to parse out the gcov gcda files from a skiboot memory image – something we can easily dump out of the simulator, relatively easily (albeit more slowly) from the FSP on an IBM POWER system, and even just directly out of a running system (if you boot a Linux kernel with the appropriate config).

So, we can now get a (mostly automated) code coverage report simply for the act of booting to petitboot: https://open-power.github.io/skiboot/boot-coverage-report/ along with our old coverage report which was just for the unit tests (https://open-power.github.io/skiboot/coverage-report/). My current boot-coverage-report is just on POWER7 and POWER8 IBM FSP based systems – but you can see that a decent amount of code both is (and isn’t) touched simply from the act of booting to the bootloader.

The numbers we get are only approximate for any code run on more than one CPU as GCC just generates code that does a load/add/store rather than using an atomic increment.

One interesting observation was that (at least on smaller systems, which are still quite large by many people’s standards), boot time was not really noticeably increased.

For more information on running with gcov, see the in-tree documentation: https://github.com/open-power/skiboot/blob/master/doc/gcov.txt

Linux Users of Victoria (LUV) Announce: LUV Beginners June Meeting: Getting started with Raspberry Pi

Thu, 2015-06-11 15:30
Start: Jun 20 2015 12:30  End: Jun 20 2015 16:30
Location:

RMIT Building 91, 110 Victoria Street, Carlton South

Link:  http://luv.asn.au/meetings/map

Wen Lin will introduce the wonder of the Raspberry Pi - a project that has taken the world by storm since the first RasPi SBC was introduced in Feb 2012. After some intro, he will take the audience through a brief overview of the Raspberry Pi's latest development around the world as well as a quick glance at a sample of Raspberry Pi related projects in the community. Then, in the second half of his session, Wen will go into some hands-on demos, focussing on getting a Raspberry Pi up and running as a micro computer running Linux.

LUV would like to acknowledge Red Hat for their help in obtaining the Trinity College venue and VPAC for hosting.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Binh Nguyen: Repairing Musical Instruments/Electrical Equipment, the Value of Money, and Dating

Wed, 2015-06-10 21:14
If you've been reading this blog for a while now, you've noticed that I do a lot of tinkering. One of the things I've been tinkering with a lot of late has been electronic music hardware/software. Some things to note:

- you should make the assumption that no one is going to help you with regard to circuit diagrams when it comes to fixing machines, re-designing/modifying them, etc... The best that you'll be able to manage are teardown pictures/diagrams posted by others out on the 'Interwebs'.

Don't make the assumption that your problem is exactly the same as others out there. Most of the time, though, they'll be the usual problems that other electronic devices face, such as improper contact (also referred to as dry soldering) or failed electronic components. The biggest problem that you will face will be the intermittent issues. For instance, thermally related or physical contact problems that haven't quite made themselves completely obvious. I had something like this recently. A screen on a Maschine was basically malfunctioning from time to time. The owner told me to press down on the screen to make it work. I tried it and it seemed to work. After tearing it down and trying to fix various contacts it became obvious that this one was slightly more difficult to fix. Putting pressure across the board didn't provide any further clues until a capacitor (C207, halfway across the PCB) fell off (after which the problem seemed to be consistent). Re-soldering seems to have fixed the problem.

Interesting facts. Maschine screens are interchangeable from side to side in case you want/need to repair one of these. They are their own separate module (except in the Mikro, based on the description I'm seeing). They are not soldered on to the PCB but are connected via ZIF connectors.

http://www.soundonsound.com/sos/jun09/articles/nimaschine.htm

http://www.soundonsound.com/sos/jan13/articles/ni-maschine.htm

http://www.soundonsound.com/sos/dec11/articles/ni-maschine-mikro.htm

Repairing a lot of (non-trivial) electronics is a balance between luck, skill, perseverance, etc... Tips on dealing with intermittent problems include using physical pressure applied at strategic points to narrow down the source, purchasing better diagnostic equipment (sometimes your only choice), and using hair dryers/compressed air as a means of temperature regulation.

https://www.gearslutz.com/board/electronic-music-instruments-electronic-music-production/812965-maschine-mikro-has-met-beer-how-crack-open.html

http://www.illmuzik.com/forums/threads/maschine-mk1-controller-help.32980/

http://maschinemusic.com/forum/topics/maschine-mk2-defective-screens

https://www.native-instruments.com/forum/threads/maschine-studio-screens.211153/page-2

https://www.native-instruments.com/forum/threads/hardware-screens-not-working-properly.230738/


https://www.native-instruments.com/forum/threads/i-got-2-maschines-and-on-botth-the-screens-are-flickering-out-after-not-even-1-year.193729/



- same with software interfacing. Some companies build their equipment with the express purpose of linking their hardware and software. They have no incentive to help you build something that will interface with their hardware/software. It will take luck, perseverance and knowledge of reverse engineering to do what you need (see the relevant chapters in my book on 'Cloud and Internet Security' for further details regarding this).

http://www.native-instruments.com/forum/threads/midi-keyboard-in-maschine-help.149559/

http://www.youtube.com/watch?v=JkDKV9ys3z8

http://www.youtube.com/watch?v=P2zFEHyBoZU

https://play.google.com/store/books/author?id=Binh+Nguyen

http://www.amazon.com/mn/search/?_encoding=UTF8&camp=1789&creative=390957&field-author=Binh%20Nguyen&linkCode=ur2&search-alias=digital-text&sort=relevancerank&tag=bnsb-20&linkId=3BWQJUK2RCDNUGFY

http://www.mpc-tutor.com/understanding-midi-on-the-akai-mpc/

http://www.acidboxblues.com/2012/07/so-ableton-live-crashes-and-you-think.html



- there is a good chance that you may be electrocuted at some point. Take measures to reduce the amount of power that can pass through your body. I often work with rubber gloves, wear rubber-soled shoes, etc... Isolate the problem as much as you can and work across modules. If in doubt, order in a new module rather than doing component-level repair. It will reduce the chances of you getting 'zapped' and sometimes may be the most viable, economic option available once you factor in the amount of time you must spend working on the problem. Finally, if in doubt, send it off to someone more accomplished to have the repair completed. This seems obvious but I've come across some people who have tried to scrimp and have done more harm than good when attempting to 'repair' something.

http://dtbnguyen.blogspot.com.au/2013/03/repairing-laptop-power-bricks.html

- you will come across 'smelly equipment' from time to time. I recently came across a Maschine that had been used in a 'smoky environment'. It was so 'smoky' that I actually felt as though I was getting high from simply being around it. I had to tear it down and soak it (the control pads, which seem to be made of a silicone and rubber compound, not the device) in hot water and bleach twice for several hours before I could operate 'normally' within its vicinity. A tip: if you do have to use solvents or other cleaning chemicals, test them at a lower concentration and amount first. You don't want to find out later down the line that the substance you used was actually highly corrosive and may have damaged sensitive electronic components.

https://www.gearslutz.com/board/so-much-gear-so-little-time/811452-how-remove-smoke-odour-gear-2.html

https://www.gearslutz.com/board/so-much-gear-so-little-time/632928-anyway-remove-cigarette-odor-used-gear.html

https://www.gearslutz.com/board/electronic-music-instruments-electronic-music-production/456754-how-do-remove-smell-smoke-y-synthesizer.html

http://forums.whirlpool.net.au/archive/1041569


http://en.wikipedia.org/wiki/Sodium_hydroxide

http://www.ebay.com/gds/How-to-Eliminate-Smoke-Smell-from-Your-eBay-Purchases-/10000000001669988/g.html

http://www.head-fi.org/t/60646/cigarette-smoke-smell-in-electronics-how-to-get-rid-of



As I was growing up, people often told me to, "do something you enjoy". Others told me to, "do something which will help you make heaps of money". Now, I'm a little older, and a little bit wiser. I say, try and find a nice balance between the two.



You don't really realise what the value of money is until you actually are forced to consider what you earn and what you actually spend. For instance, the general belief is that everyone goes to school and works hard in an effort to find a good, high earning profession at the end of all of it. Recently, I've been looking at the numbers more carefully and for everything you have to put up with in some places you really wonder whether it's all worth it.



Increasingly, many of us are working extended hours (your job description may say 9 to 5 but in reality your hours are much longer, or else you have to deal with an undue number of 'off hour' incidents) with unrealistic expectations, lack of training, favouritism/nepotism, and un-supportive/directionless management and/or team mates, for not much more. After you've factored in travel time/costs, bills, day-to-day living costs and so on, there's not enough left over to say that it was actually worth it, especially if it's not in a role that you particularly enjoy.



Even if you make heaps of money you've given up so much time during the week that you can be too burnt out to enjoy it.



Ironically, it's much the same even in some of the 'glamour industries' such as law, medicine, finance, and IT.

http://forums.whirlpool.net.au/forum-replies.cfm?t=2413007

http://forums.whirlpool.net.au/archive/2345937

http://www.amazon.com/The-Striped-Prison-Lisa-Pryor/dp/0330423509

http://skepticlawyer.com.au/2009/12/21/pin-striped-prison/



Moreover, it's the same with a lot of businesses. Live long enough and you basically see that, in spite of the impressive numbers (7 to 8 figures a month/year) that a lot of businesses may report, it doesn't seem like they're going anywhere. They just seem to be struggling to stay afloat a lot of the time. It makes a lot more sense to me now why a lot of companies seem so paranoid when it comes to profit margins and maintaining large amounts of cash savings on hand in case something goes bad (Microsoft has been somewhat notorious when it comes to this).



The obvious answer to this conundrum is to run your own business (or search for your 'dream job'). Unless you've actually been involved in a startup or in building a company from the ground up, you don't realise how much stress is involved. Unless you actually enjoy the work, you're essentially stuck in the same doom-loop scenario. Moreover, finding your 'dream job' is made much more difficult by the lack of opportunities, the competition, and the fact that recruiters may not be entirely up front about the job in question. The only thing I've been learning over and over again is to try and find a balance between time, money, and doing what you enjoy. Moreover, once you find something you enjoy and are making money out of it, make the most of it and stick at it for as long as you possibly can (whether that be your own business or working for someone else).



It's pretty darn obvious people use the information on this blog for all sorts of weird and wonderful things. For those girls who have supposedly been lusting after the man behind this blog, please send photos!!! :-) For those who are looking for immigration benefits, please send photos and money too!!! :-)

http://www.native-instruments.com/en/support/downloads/drivers-other-files/

http://www.native-instruments.com/forum/threads/maschine-hardware-holding-back-software.192152/

http://www.native-instruments.com/forum/threads/how-many-installs-of-maschine-do-we-get.82355/

https://www.native-instruments.com/forum/threads/maschine-sound-libraries-etc.104439/

https://www.native-instruments.com/en/support/knowledge-base/show/421/on-how-many-computers-can-i-activate-my-native-instruments-product/

http://www.native-instruments.com/en/support/knowledge-base/show/1136/i-have-activated-the-maschine-software-but-not-yet-received-my-free-version-of-massive/

http://www.native-instruments.com/en/support/knowledge-base/show/559/how-to-activate-a-native-instruments-product-on-an-offline-computer/

http://www.native-instruments.com/en/support/contact-support/registration-support/

Binh Nguyen: Custom MIDI (Hardware and Software) Controllers, MP3 Players, and SD Card Experiments

Tue, 2015-06-09 19:55
If you're like me (a technologist who has an interest in music) you've probably looked at a variety of MIDI controllers on the market but haven't found one that quite ticks all the boxes for everything that you want to do. It's also likely that you've looked at having multiple controllers and/or some of the higher-end equipment, but you can't always justify the cost of what you want versus what you actually need.



Of late, I've been looking at building my own (MIDI controllers). After all, these devices are relatively simple and often use highly standardised components (membrane-based switches, encoders/knobs/other, some chips, etc...). Look at the following links/teardowns and you'll notice that there is very little to distinguish between them, with many components being available from your local electronics store.

https://www.flickr.com/photos/psychlist1972/sets/72157631489556008/detail/

http://www.illuminatedsounds.com/?cat=23

http://www.illuminatedsounds.com/?p=744

http://bangbang-nyc.com/2013/05/ableton-push-disassembled/

http://pushmod.blogspot.com.au/

http://www.synthtopia.com/content/2013/08/26/ableton-push-stripped-bare/

http://www.mpcstuff.com/akstst.html



I've looked at starting from scratch for hardware builds but it has proven to be prohibitively expensive for my experiment (3D printing is an increasingly viable option, especially as public libraries make printers available for free public use, but there are limitations, particularly with regard to construction. For instance, many printers will require multiple sessions before a complete device can be constructed, there are durability concerns, etc...). Instead I've been looking at using existing electronics to interface with.

http://www.umidi.co/index.html

http://custommidicontrollers.com/

http://www.instructables.com/id/Custom-Built-MIDI-Controller/



For instance, find something suitable to turn into a MIDI controller (calculators and toy pianos spring to mind). The circuitry is often very simple and basically all you need to do is hook it up to an environmental control interface device with multiple sensors. A hardware interface is then used to provide electrical-signal-to-MIDI-control translation (such as an Arduino device). The other option is to analyse the electrical signal on a case-by-case basis, then use this as a basis for writing a translation program which will turn the electrical signal into a MIDI signal which can be used to interface with other equipment, your existing software, etc...

http://www.musicradar.com/reviews/tech/akai-mpd24-22920

http://vvvv.org/contribution/mpd24-akai-midi-mapper

http://mods-n-hacks.wonderhowto.com/how-to/build-simple-midi-controller-251069/

http://shiftmore.blogspot.com.au/2009/12/calculator-midi-usb-controller.html

http://www.codetinkerhack.com/2012/11/how-to-turn-piano-toy-into-midi.html

http://www.codetinkerhack.com/2013/01/how-to-add-velocity-aftertouch-midi.html

http://makezine.com/2010/11/30/usbhacking/

https://wiki.python.org/moin/PythonInMusic

http://www.native-instruments.com/forum/threads/turning-any-usb-hardware-into-a-midi-device.47017/

http://createdigitalmusic.com/2009/11/novation-releases-all-midi-details-for-launchpad/

http://www.widisoft.com/english/widi-audio-to-midi-vst.html

http://code.google.com/p/audio2midi/

http://www.synthtopia.com/content/2013/07/26/midimorphosis-converts-audio-to-midi/

http://www.nativekontrol.com/



Another option I've been looking at is using third party electronic devices (such as a tablet, or cheaper MIDI control devices in combination with other software) to emulate much more expensive hardware. Good examples of this include the high end hardware controllers such as Native Instruments' Maschine, Ableton's Push, Akai's MPC/APC series, etc. (Even when purchased second hand these devices can often fetch up to around 80-90% of their retail value. Factor in the problem that few retailers are willing to provide demonstration equipment for them (StoreDJ is an exception) and you can understand why so many people re-sell their equipment with explanations often stating that the piece of equipment quite simply didn't fit into their setup.)

http://motscousus.com/stuff/2011-07_Novation_Launchpad_Ableton_Live_Scripts/

http://www.sample-hold.com/2011/12/19/make-fun-of-your-launchpad-with-launchplay-vst-plugin/

http://audionewsroom.net/2010/01/novation-launchpad-13-a-hackers-perspective.html

http://www.afrodjmac.com/blog/2013/03/14/more-ways-to-turn-your-launchpad-into-a-push

http://beatwise.proboards.com/thread/1315/free-preset-carbon-push-emulation

http://www.reddit.com/r/abletonlive/comments/1aopop/push_emulation_now_available_on_apc40_free_to/?



There are several main options to look at, including TouchOSC, MIDI Designer, and Lemur. The two I've been most curious about are Lemur and TouchOSC though. Installation and setup consist of a daemon/service on your computer, an application of some sort on your tablet, and an editor that can be tablet or computer based. Thereafter, there are often 'templates', which are basically skins plus underlying software code, allowing you to design a MIDI interface from scratch and drive other equipment/software directly from your tablet.
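To give a feel for the computer side of that daemon/editor/template arrangement, here's a small sketch that just listens for OSC messages coming from a tablet, using the python-osc library (the /1/fader1 address and port 8000 are assumptions based on common TouchOSC-style defaults; a real setup would translate the values into MIDI or hand them to your DAW):

from pythonosc import dispatcher, osc_server

def fader_handler(address, value):
    # A real bridge would turn this 0.0-1.0 value into a MIDI CC message.
    print(f"{address} -> {value:.3f}")

disp = dispatcher.Dispatcher()
disp.map("/1/fader1", fader_handler)                 # assumed layout address
disp.set_default_handler(lambda addr, *args: print(addr, args))

server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 8000), disp)
print("Listening for OSC on UDP port 8000 ...")
server.serve_forever()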

https://liine.net/en/products/lemur/

http://iosmidi.com/

http://mididesigner.com/

http://hexler.net/software/touchosc-android

http://djtechtools.com/2013/01/28/how-to-dj-using-liines-lemur-app-for-ipad/

http://www.youtube.com/watch?v=KJxAnm3j8TI

http://createdigitalmusic.com/2014/11/lemur-now-android-supports-cabled-connections-want-touch-app/

https://liine.net/en/community/user-library/view/421/

https://liine.net/wiki/android_devices

https://liine.net/en/products/lemur/premium/livecontrol-2/

https://www.reddit.com/r/edmproduction/comments/2kdy8d/as_someone_who_is_bad_at_playing_the_keyboard/

http://www.native-instruments.com/forum/threads/ipad-maschine.181052/

There are obvious issues here. Apple iPads are almost as expensive as some of the MIDI controllers we're looking at in this document. One option is to purchase an iPad Mini or something second hand. Basically, what I've been reading indicates that either option will do, but the screen size of the iPad Mini may make things a bit fiddly, particularly if you have large hands. The other option is to use Android-only applications. The only problem is that the iOS universe is often much more diverse than the Android one.

http://www.kvraudio.com/forum/viewtopic.php?t=397495

http://support.liine.net/customer/portal/questions/1244470-ipad-mini-compatibility-with-lemur-drum-pad-

http://forum.liine.net/viewtopic.php?f=25&t=2391

https://www.ableton.com/en/help/article/control-live-mobile-device/

https://www.gearslutz.com/board/electronic-music-instruments-electronic-music-production/700437-ni-maschine-mikro-vs-ipad-lemur.html

http://digitaldjtools.net/mappings/

http://forum.watmm.com/topic/76701-considering-an-ipad-mini/

https://documentation.meraki.com/SM/Monitoring_and_Reporting/Activation_Lock_Bypass_for_iOS_Devices


http://cydiamate.net/doulci-ios-8-3-activation-lock-bypass/



The other thing that needs to be considered is how you should interface. In theory, wireless is a great option. In practice I've been seeing stories about consistently lost connections. Look at a hardware USB interface if need be.

http://www.djcity.com.au/irig-midi-interface-for-iphone-and-ipad

http://www.djcity.com.au/irig-midi2



To be honest though, a lot of the emulators for the Push (and other devices) aren't perfect. You lose a bit of functionality (in some cases you gain a lot of extra functionality, though the emulation still isn't exact). It's likely to either make you want to purchase these devices even more, or put you off them completely because they don't fit into your workflow.



With the cessation of production of the iPod Classic and other high capacity music player options, I've been looking at alternatives on and off for a while. Clearly, high capacity SD based storage options are extremely expensive at this stage. One alternative is using adapter cards in inexpensive, readily available, older low capacity MP3 players which utilise hard drives. The adapters required are available for around $10-20. Obvious problems with SD based storage include speed limitations, capacity limitations, high prices, etc. Moreover, some of the adapters won't fit in the case, or workarounds are needed. For instance, there currently aren't many 128GB SD cards at a reasonable price locally, so running multiple SD cards in a RAID configuration may be the compromise that you have to make for the immediate future.

http://www.ebay.com/bhp/sd-card-to-ide

http://cubicgarden.com/2013/05/05/upgrading-the-pacemakers-hard-drive/

http://www.head-fi.org/t/566780/official-ipod-video-classic-5g-5-5g-6g-6-5g-7g-ssd-mod-thread/270

http://www.ebay.com/itm/SD-SDHC-MMC-Card-to-1-8-ZIF-LIF-CE-SSD-Adapter-40pin-ZIF-LIF-cable-/111091174857



One interesting piece of information that I've come across recently is that there isn't much stopping people using SDXC cards in supposedly SDHC-only card readers (drivers or simple hardware blocks are the usual limitations). Basically, the primary difference between SDHC and SDXC is the default file system: SDHC cards ship formatted as FAT32 while SDXC cards ship as exFAT. Clearly this limitation can be overcome with the right tools and knowledge. For instance, Windows' built-in formatter won't create FAT32 volumes on large cards, so other tools need to be employed.
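As a sketch of what those 'other tools' can look like on Linux (the device path below is a placeholder and the command is destructive, so double-check it; on Windows you'd reach for a third-party FAT32 formatter such as the Ridgecrop tool linked below instead):

# DESTRUCTIVE: reformats the card as FAT32. /dev/sdX1 is a placeholder partition.
import subprocess

DEVICE = "/dev/sdX1"

subprocess.run(["sudo", "mkfs.fat", "-F", "32", "-n", "MUSIC", DEVICE],
               check=True)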

https://gbatemp.net/threads/how-to-use-a-64gb-micro-sdxc-in-your-sdhc-compliant-flash-cart.335912/

http://www.ridgecrop.demon.co.uk/

http://www.tarkan.info/20121226/tutorials/ipod-and-sdhc-sdxc-cards

http://arstechnica.com/civis/viewtopic.php?t=1151548

http://en.wikipedia.org/wiki/Secure_Digital

https://www.ifixit.com/Guide/iPod+5th+Generation+%28Video%29+CF+or+SDHC-SDXC+Memory+instead+of+HDD+Replacement/7492

https://www.raspberrypi.org/forums/viewtopic.php?f=2&t=2252



http://superuser.com/questions/282202/which-consumes-more-power-hard-drive-or-sd-card-card-reader

http://raspberrypi.stackexchange.com/questions/1765/possible-to-connect-sata-device-to-the-sd-slot

http://www.techbuy.com.au/p/208703/HARD_DRIVE_-_EXTERNAL_DRIVE_CASE_SATA_-_USB_2.5/8WARE/WI21.asp

http://www.warcom.com.au/shop/flypage/computer-parts/media-players/49000?gclid=CKXk-bfN2sUCFUsHvAod-gQAGg

http://www.i-tech.com.au/products/144200_8ware_Portable_Wireless_Streaming.aspx

Sridhar Dhanapalan: Twitter posts: 2015-06-01 to 2015-06-07

Mon, 2015-06-08 00:27

Chris Samuel: Thoughts on the white spots of Ceres

Sun, 2015-06-07 10:26

If you’ve been paying attention to the world of planetary exploration you’ll have noticed the excitement about the unexpected white spots on the dwarf planet Ceres. Here’s an image from May 29th that shows them well.

Having looked at a few images, my theory is that impacts are exposing some much higher albedo material, which you can see here at the top of the rebound peak at the center of the crater. The impact has thrown some of this material up, and that material has fallen back as Ceres has rotated slowly beneath it, giving rise to the blobs to the side of the crater.

If my theory is right, then knowing Ceres' gravity, its rotational speed, and the distance between the rebound peak and the other spots should let you work out how far up the material was thrown. That might tell you something about the size of the impact (depending on how much you know about the structure of Ceres itself).
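As a back-of-the-envelope sketch of that calculation (all the numbers are rough assumptions: a surface gravity of about 0.28 m/s², a rotation period of about 9.07 hours, a mean radius of about 470 km, an equatorial crater, a made-up 10 km offset, and the simple picture above of the surface rotating beneath a vertically lobbed blob):

import math

g = 0.28                    # surface gravity, m/s^2 (approximate)
period = 9.07 * 3600        # rotation period, seconds (approximate)
radius = 470e3              # mean radius, metres (approximate)
offset = 10e3               # assumed peak-to-spot distance, metres (made up)

surface_speed = 2 * math.pi * radius / period   # roughly 90 m/s at the equator
flight_time = offset / surface_speed            # time aloft to drift that far
max_height = g * flight_time ** 2 / 8           # h = g*t^2/8 for a vertical lob

print(f"time of flight ~{flight_time:.0f} s, peak height ~{max_height:.0f} m")

With those made-up numbers the material only needs to reach a few hundred metres, so the real constraint is getting a good measurement of the offset and the crater's latitude.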

As an analogy, here’s an impact on Mars captured by the HiRise camera on MRO that shows an area of ice exposed by an impact.

Fading Impact Streaks and Exposed Ice – http://t.co/whWFvVZ0P7 pic.twitter.com/zLQR91uFcI

— HiRISE (@HiRISE) June 6, 2015

This item originally posted here:



Thoughts on the white spots of Ceres

James Morris: Hiring Subsystem Maintainers

Fri, 2015-06-05 16:27

The regular LWN kernel development stats have been posted here for version 4.1 (if you really don’t have a subscription, email me for a free link).  In this, Jon Corbet notes:

over 60% of the changes going into this kernel passed through the hands of developers working for just five companies. This concentration reflects a simple fact: while many companies are willing to support developers working on specific tasks, the number of companies supporting subsystem maintainers is far smaller. Subsystem maintainership is also, increasingly, not a job for volunteer developers..

As most folks reading this would know, I lead the mainline Linux Kernel team at Oracle.  We do have several people on the team who work in leadership roles in the kernel community (myself included), and what I’d like to make clear is that we are actively looking to support more such folk.

If you’re a subsystem maintainer (or acting in a comparable leadership role), please always feel free to contact me directly via email to discuss employment possibilities.  You can also contact Oracle kernel folk who may be presenting or attending Linux conferences.

Michael Still: More coding club

Fri, 2015-06-05 13:28
This is the second post about the coding club at my kid's school. I was away for four weeks travelling for work and then getting sick, so I am still getting back up to speed with what the kids have been up to while I've been away. This post is an attempt to gather some resources that I hope will be useful during the session today -- it remains to be seen how this maps to what the kids actually did while I was away.



First off, the adults have decided to give Python for Kids a go as a teaching resource. The biggest catch with this book is that it's kind of expensive -- at AUD $35 a copy, we can't just issue a copy to every kid in the room. That said, perhaps the kids don't each need a copy, as long as the adults are just using it as a guide for what things to cover.



It appears that while I was away chapters 1 through 4 have been covered. Chapter 1 is about installing Python, and chapters 2-3 are language construct introductions: things like what a variable is, mathematical operators, strings, tuples and lists. So, that's all important but kind of dull. On the other hand, chapter 4 covers turtle graphics, which I didn't even realize Python had a module for.
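For anyone who hasn't seen it, turtle ships with the standard library and a first drawing is only a few lines:

import turtle

t = turtle.Turtle()
for _ in range(4):
    t.forward(100)   # move forward 100 pixels
    t.right(90)      # turn 90 degrees clockwise

turtle.exitonclick()   # keep the window open until it is clicked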



I have fond memories of doing Logo graphics as a kid at school. Back in my day we'd sometimes even use actual robots to do some of the graphics, although most of it was simulated on Apple II machines of various forms. I think it's important to let the kids of today know that these strange exercises they're doing used to relate to physical hardware that schools actually owned. Here are a couple of indicative pictures stolen from the Internet:











So, I think that's what we'll keep going with this week -- I'll let the kids explain where they got to with turtle graphics and then we'll see how far we can take that without it becoming a chore.



Tags for this post: coding_club kids coding python turtle graphics logo

Related posts: Coding club day one: a simple number guessing game in python; JPEG 2 MPEG howto; Graphics from the command line; Implementing SCP with paramiko; Packet capture in python; I'm glad I've turned on comments here



Comment

Michael Still: Geocaching at the border

Thu, 2015-06-04 20:28
Today's lunch walk was around Tuggeranong Pines again. At the back of the pine forest is the original train line from the 1880s which went down to Cooma. I walked as far as the old Tuggeranong siding before turning back. It's interesting, as there is evidence that track work has been done here in the last ten years or so, even though the line hasn't been used since 1989.



                       



Interactive map for this route.



Tags for this post: blog pictures 20150604-geocaching photo canberra bushwalk

Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches



Comment

Rusty Russell: What Transactions Get Crowded Out If Blocks Fill?

Wed, 2015-06-03 14:29

What happens if bitcoin blocks fill?  Miners choose transactions with the highest fees, so low fee transactions get left behind.  Let’s look at what makes up blocks today, to try to figure out which transactions will get “crowded out” at various thresholds.

Some assumptions need to be made here: we can’t automatically tell the difference between me taking a $1000 output and paying you 1c, and me paying you $999.99 and sending myself the 1c change.  So my first attempt was very conservative: only look at transactions with two or more outputs which were under the given thresholds (I used a nice round $200 / BTC price throughout, for simplicity).

(Note: I used bitcoin-iterate to pull out transaction data, and rebuild blocks without certain transactions; you can reproduce the csv files in the blocksize-stats directory if you want).
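(Conceptually the conservative filter is just the following; this is a sketch rather than the actual scripts, and the CSV file and column names are made up:)

import csv

PRICE_USD_PER_BTC = 200
THRESHOLD_SATOSHI = 100_000_000 // PRICE_USD_PER_BTC   # $1 worth of satoshi = 500000

def is_small_payment(tx):
    """True if the transaction has two or more outputs under the threshold."""
    outputs = [int(v) for v in tx['output_values'].split(';')]
    return len([v for v in outputs if v < THRESHOLD_SATOSHI]) >= 2

with open('transactions.csv') as fh:      # hypothetical file name
    txs = list(csv.DictReader(fh))

crowded_out = sum(1 for tx in txs if is_small_payment(tx))
print(f"{crowded_out} of {len(txs)} transactions would be crowded out")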

Paying More Than 1 Person Under $1 (< 500000 Satoshi)

Here’s the result (against the current blocksize):

Sending 2 Or More Sub-$1 Outputs

Let’s zoom in to the interesting part, first, since there’s very little difference before 220,000 (February 2013).  You can see that only about 18% of transactions are sending less than $1 and getting less than $1 in change:

Since March 2013…

Paying Anyone Under 1c, 10c, $1

The above graph doesn’t capture the case where I have $100 and send you 1c.   If we eliminate any transaction which has any output less than various thresholds, we’ll catch that. The downside is that we capture the “sending myself tiny change” case, but I’d expect that to be rarer:

Blocksizes Without Small Output Transactions

This eliminates far more transactions.  We can see only 2.5% of the block size is taken by transactions with 1c outputs (the dark red line following the block “current blocks” line), but the green line shows about 20% of the block used for 10c transactions.  And about 45% of the block is transactions moving $1 or less.

Interpretation: Hard Landing Unlikely, But Microtransactions Lose

If the block size doesn’t increase (or doesn’t increase in time): we’ll see transactions get slower, and fees become the significant factor in whether your transaction gets processed quickly.  People will change behaviour: I’m not going to spend 20c to send you 50c!

Because block finding is highly variable and many miners are capping blocks at 750k, we see backlogs at times already; these bursts will happen with increasing frequency from now on.  This will put pressure on SatoshiDice and similar services, who will be highly incentivized to use StrawPay or roll their own channel mechanism for off-blockchain microtransactions.

I’d like to know what timescale this happens on, but the graph shows that we grow (and occasionally shrink) in bursts.  A logarithmic graph prepared by Peter R of bitcointalk.org suggests that we hit 1M mid-2016 or so; expect fee pressure to bend that graph downwards soon.

The bad news is that even if fees hit (say) 25c and that prevents all the sub-$1 transactions, we only double our capacity, giving us perhaps another 18 months. (At that point miners are earning $1000 from transaction fees as well as $5000 (@ $200/BTC) from block reward, which is nice for them I guess.)

My Best Guess: Larger Blocks Desirable Within 2 Years, Needed by 3

Personally I think 5c is a reasonable transaction fee, but I’d prefer not to see it until we have decentralized off-chain alternatives.  I’d be pretty uncomfortable with a 25c fee unless the Lightning Network was so ubiquitous that I only needed to pay it twice a year.  Higher than that would have me reaching for my credit card to charge my Lightning Network account :)

Disclaimer: I Work For BlockStream, on Lightning Networks

Lightning Networks are a marathon, not a sprint.  The development timeframes in my head are even vaguer than the guesses above.  I hope it’s part of the eventual answer, but it’s not the bandaid we’re looking for.  I wish it were different, but we’re going to need other things in the mean time.

I hope this provided useful facts, whatever your opinions.

Rusty Russell: Current Blocksize, by graphs.

Wed, 2015-06-03 13:29

I used bitcoin-iterate and gnumeric to render the current bitcoin blocksizes, and here are the results.

My First Graph: A Moment of Panic

This is block sizes up to yesterday; I’ve asked gnumeric to derive an exponential trend line from the data (in black; the red one is linear)

Woah! We hit 1M blocks in a month! PAAAANIC!

That trend line hits 1000000 at block 363845.5, which we’d expect in about 32 days time!  This is what is freaking out so many denizens of the Bitcoin Subreddit. I also just saw a similar inaccurate [correction: misleading] graph reshared by Mike Hearn on G+ :(

But Wait A Minute

That trend line says we’re on 800k blocks today, and we’re clearly not.  Let’s add a 6 hour moving average:

Oh, we’re only halfway there….

In fact, if we cluster into 36 blocks (ie. 6 hours worth), we can see how misleading the terrible exponential fit is:

What! We’re already over 1M blocks?? Maths, you lied to me!

Clearer Graphs: 1 week Moving Average

Actual Weekly Running Average Blocksize

So, not time to panic just yet, though we’re clearly growing, and in unpredictable bursts.
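(If you want to reproduce the running average without gnumeric, a pandas sketch does the job; the CSV file and column names here are made up:)

import pandas as pd

blocks = pd.read_csv('blocksizes.csv')                          # hypothetical file
blocks['time'] = pd.to_datetime(blocks['timestamp'], unit='s')  # unix seconds -> datetime
blocks = blocks.set_index('time').sort_index()

weekly_avg = blocks['size'].rolling('7D').mean()                # time-based rolling window
print(weekly_avg.tail())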

Michael Still: Melrose trig

Tue, 2015-06-02 16:30
I went for a short geocaching walk at lunch today. Three geocaches in 45 minutes, so not too shabby. One of those caches was at the Melrose trig point, so I bagged that too. There is some confusion here, as John Evans and I thought that Melrose was on private land. However, there is no signage to that effect in the area and the geocache owner asserts this is public land. ACTMAPi says the area is Tuggeranong Rural Block 35, but isn't clear on whether the lease holder exists. Color me confused and possibly an accidental trespasser.



         



Interactive map for this route.



Tags for this post: blog pictures 20150602-melrose photo canberra bushwalk trig_point

Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger



Comment

Michael Still: In A Sunburned Country

Mon, 2015-06-01 20:29






ISBN: 0965000281

LibraryThing

This is the first Bill Bryson book I've read, and I have to say I enjoyed it. Bill is hilarious and infuriating at the same time, which, surprisingly to me, makes for a very entertaining combination. I'm sure he's not telling the full story in this book -- it's just not possible for someone so ill-prepared to not just die in the outback somewhere. Take his visit to Canberra for example -- he drives down from Sydney, hits the first hotel he finds and then spends three days there. No wonder he's bored. Eventually he bothers to drive for another five minutes and finds there is more to the city than one hotel. On the other hand, he maligns my home town in such a hilarious manner that I just can't be angry at him.



I loved this book, highly recommended.



Tags for this post: book bill_bryson australia travel

Related posts: In Sydney!; American visas for all!; Melbourne; Sydney Australia in Google Maps; Top Gear Australia; Linux presence at Education Expo Comment Recommend a book

Michael Still: The linux.conf.au 2016 Call For Proposals is open!

Mon, 2015-06-01 16:29
The OpenStack community has been well represented at linux.conf.au over the last few years, which I think reflects both the growing level of interest in OpenStack in the general Linux community and the fact that OpenStack is one of the largest Python projects around these days. linux.conf.au is one of the region's biggest Open Source conferences, and has a solid reputation for deep technical content.



It's time to make it all happen again, with the linux.conf.au 2016 Call For Proposals opening today! I'm especially keen to encourage talk proposals which are somehow more than introductions to various components of OpenStack. It's time to talk in detail about how people's networking deployments work, what container solutions we're using, and how we're deploying OpenStack in the real world to do seriously cool stuff.



The conference is in the first week of February in Geelong, Australia. I'd be happy to chat with anyone who has questions about the CFP process.



Tags for this post: openstack conference linux.conf.au lca2016

Related posts: LCA 2007 Video: CFQ IO; LCA 2006: CFP closes today; I just noticed...; LCA2006 -- CFP opens soon!; I just noticed...; Updated: linux.conf.au 2007 MythTV tutorial homework



Comment