Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Francois Marier: Letting someone ssh into your laptop using Pagekite

13 hours 14 min ago

In order to investigate a bug I was running into, I recently had to give my colleague ssh access to my laptop behind a firewall. The easiest way I found to do this was to create an account for him on my laptop and set up a pagekite frontend on my Linode server and a pagekite backend on my laptop.

Frontend setup

Setting up my Linode server in order to make the ssh service accessible and proxy the traffic to my laptop was fairly straightforward.

First, I had to install the pagekite package (already in Debian and Ubuntu) and open up a port on my firewall by adding the following to both /etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules:

-A INPUT -p tcp --dport 10022 -j ACCEPT

Then I created a new CNAME for my server in DNS:

pagekite.fmarier.org. 3600 IN CNAME fmarier.org.

With that in place, I started the pagekite frontend using this command:

pagekite --clean --isfrontend --rawports=virtual --ports=10022 --domain=raw:pagekite.fmarier.org:Password1

Backend setup

After installing the pagekite and openssh-server packages on my laptop and creating a new user account:

adduser roc

I used this command to connect my laptop to the pagekite frontend:

pagekite --clean --frontend=pagekite.fmarier.org:10022 --service_on=raw/22:pagekite.fmarier.org:localhost:22:Password1

Client setup

Finally, my colleague needed to add the following entry to ~/.ssh/config:

Host pagekite.fmarier.org
    CheckHostIP no
    ProxyCommand /bin/nc -X connect -x %h:10022 %h %p

and install the netcat-openbsd package since other versions of netcat don't work.

On Fedora, we used netcat-openbsd-1.89 successfully, but this newer package may also work.

He was then able to ssh into my laptop via ssh roc@pagekite.fmarier.org.

Making settings permanent

I was quite happy setting things up temporarily on the command line, but it's also possible to persist these settings and to make both the pagekite frontend and backend start up automatically at boot. See the documentation for how to do this on Debian and Fedora.
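
As a rough illustration only (the file name and layout here are assumptions, not taken from the pagekite documentation): the Debian package reads options from files under /etc/pagekite.d/, and the .rc format is simply the command-line options without the leading dashes, so a persistent backend configuration roughly equivalent to the command above might look something like:

# hypothetical /etc/pagekite.d/80_sshd.rc -- check the packaged samples
frontend = pagekite.fmarier.org:10022
service_on = raw/22:pagekite.fmarier.org:localhost:22:Password1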

Stewart Smith: Running OPAL in qemu – the powernv platform

Fri, 2015-08-28 13:26

Ben has a qemu tree up with some work-in-progress patches to qemu to support the PowerNV platform. This is the “bare metal” platform like you’d get on real POWER8 hardware running OPAL, and it allows us to use qemu like my previous post used the POWER8 Functional Simulator – to boot OpenPower firmware.

To build qemu for this, follow these steps:

apt-get -y install gcc python g++ pkg-config libz-dev libglib2.0-dev \
    libpixman-1-dev libfdt-dev git
git clone https://github.com/ozbenh/qemu.git
cd qemu
./configure --target-list=ppc64-softmmu
make -j `grep -c processor /proc/cpuinfo`

This will leave you with a ppc64-softmmu/qemu-system-ppc64 binary. Once you’ve built your OpenPower firmware to run in a simulator, you can boot it!

Note that this qemu branch is under development, and is likely to move/change or even break.

I do it like this:

cd ~/op-build/output/images; # so skiboot.lid is in pwd
~/qemu/ppc64-softmmu/qemu-system-ppc64 -m 1G -M powernv \
    -kernel zImage.epapr -nographic \
    -cdrom ~/ubuntu-vivid-ppc64el-mini.iso

and this lets me test that we launch the Ubuntu vivid installer correctly.

You can easily add other qemu options, such as additional disks or networking, and verify that they work correctly. This way, you can do development on skiboot functionality, the kernel, or op-build userspace (such as the petitboot bootloader) without needing real hardware or the simulator.
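
For example (a hedged sketch only: it assumes the standard qemu virtio disk and network devices behave on this in-development powernv branch, and the disk image name is made up):

~/qemu/ppc64-softmmu/qemu-system-ppc64 -m 1G -M powernv \
    -kernel zImage.epapr -nographic \
    -drive file=test-disk.img,if=virtio \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0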

This is useful if, say, you're running on ppc64el, for which the POWER8 functional simulator is currently not available.

Stewart Smith: doing nothing on modern CPUs

Fri, 2015-08-28 12:26

Sometimes you don’t want to do anything. This is understandably human, and probably a sign you should either relax or get up and do something.

For processors, you sometimes do actually want to do absolutely nothing. Often this will be while waiting for a lock. You want to do nothing until the lock is free, but you want to be quick about it: you want to start work as soon as that lock becomes free.

On CPU cores with more than one thread (e.g. hyperthreading on Intel, SMT on POWER) you likely want to let the other threads have all of the resources of the core if you’re sitting there waiting for something.

So, what do you do? On x86 there’s been the PAUSE instruction for a while and on POWER there’s been the SMT priority instructions.

The x86 PAUSE instruction delays execution of the next instruction for some amount of time. On POWER, each executing thread in a core has a priority, and this is how chip resources are handed out (you can set different priorities using special no-op instructions, as well as setting the Relative Priority Register to map how these coarse-grained priorities are interpreted by the chip).

So, when you’re writing spinlock code (or similar, such as the implementation of mutexes in InnoDB) you want to check if the lock is free, and if not, spin for a bit, but at a lower priority than the code running in the other thread that’s doing actual work. The idea being that when you do finally acquire the lock, you bump your priority back up and go do actual work.

Usually, you don’t continually check the lock, you do a bit of nothing in between checking. This is so that when the lock is contended, you don’t just jam every thread in the system up with trying to read a single bit of memory.

So you need a trick to do nothing that the compiler isn't going to optimize away.

Current (well, MySQL 5.7.5, but it’s current in MariaDB 10.0.17+ too, and other MySQL versions) code in InnoDB to “do nothing” looks something like this:

ulint
ut_delay(ulint delay)
{
    ulint   i, j;

    UT_LOW_PRIORITY_CPU();

    j = 0;

    for (i = 0; i < delay * 50; i++) {
        j += i;
        UT_RELAX_CPU();
    }

    if (ut_always_false) {
        ut_always_false = (ibool) j;
    }

    UT_RESUME_PRIORITY_CPU();

    return(j);
}

On x86, UT_RELAX_CPU() ends up being the PAUSE instruction.

On POWER, UT_LOW_PRIORITY_CPU() and UT_RESUME_PRIORITY_CPU() tune the SMT thread priority (on x86 they're defined as nothing).
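
As a rough sketch (illustrative definitions only, not the actual InnoDB macros), the POWER variants boil down to the SMT priority no-op instructions, with everything else compiling to nothing:

#if defined(__powerpc__)
/* "or 1,1,1" drops to low SMT priority, "or 2,2,2" restores medium */
# define UT_LOW_PRIORITY_CPU()    __asm__ __volatile__ ("or 1,1,1")
# define UT_RESUME_PRIORITY_CPU() __asm__ __volatile__ ("or 2,2,2")
#else
# define UT_LOW_PRIORITY_CPU()    ((void) 0)
# define UT_RESUME_PRIORITY_CPU() ((void) 0)
#endif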

If you want an idea of when this was all written, this comment may be a hint:

/*!< in: delay in microseconds on 100 MHz Pentium */

But if you're not on x86 you don't have the PAUSE instruction; instead, you end up getting this code:

# elif defined(HAVE_ATOMIC_BUILTINS)
# define UT_RELAX_CPU() do { \
    volatile lint volatile_var; \
    os_compare_and_swap_lint(&volatile_var, 0, 1); \
  } while (0)

Which you may think “yep, that does nothing and is not optimized away by the compiler”. Except you’d be wrong! What it actually does is generates a lot of memory traffic. You’re now sitting in a tight loop doing atomic operations, which have to be synchronized between cores (and sockets) since there’s no real way that the hardware is going to be able to work out that this is only a local variable that is never accessed from anywhere.

Additionally, the ut_always_false and j variables are also attempts to trick the compiler into not optimizing the loop away, and since ut_always_false is a global, you're generating traffic to a single global variable too.

Instead, what’s needed is a compiler barrier. This simple bit of nothing tells the compiler “pretend memory has changed, so you can’t optimize around this point”.

__asm__ __volatile__ ("":::"memory")

So we can eliminate all sorts of useless non-work and instead do what we want: nothing (a for loop for X iterations that isn't optimized away by the compiler), with no side effects.
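
Here's a minimal sketch of that combination (the names are mine, this is not the submitted patch): a relax macro built on the compiler barrier, plus the x86 PAUSE where available, gives a delay loop that isn't optimized away, has no side effects, and generates no memory traffic:

#if defined(__x86_64__) || defined(__i386__)
# define MY_RELAX_CPU() __asm__ __volatile__ ("pause" ::: "memory")
#else
# define MY_RELAX_CPU() __asm__ __volatile__ ("" ::: "memory")
#endif

/* spin for roughly delay * 50 iterations without touching shared memory */
static inline void
my_delay(unsigned long delay)
{
    unsigned long i;

    for (i = 0; i < delay * 50; i++) {
        MY_RELAX_CPU();
    }
}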

In MySQL bug 74832 I detailed this with the appropriately produced POWER assembler. Unfortunately, this patch (submitted under the OCA) has sat since November 2014 (so, over 9 months) with no action. I’m a bit disappointed by that to be honest.

Anyway, the real moral of this story is: don’t implement your own locking primitives. You’re either going to get it wrong or you’ll be wrong in a few years when everything changes under you.

See also:

Donna Benjamin: D8 Accelerate - Game over?

Thu, 2015-08-27 11:27

The Drupal 8 Accelerate campaign has raised over two hundred and thirty thousand dollars ($233,519!!).  That's a lot of money! But our goal was to raise US$250,000 and we're running out of time. I've personally helped raise $12,500 and I'm aiming to raise 8% of the whole amount, which equals $20,000. I've got less than $7500 now to raise. Can you help me? Please chip in.

Most of my colleagues on the board have contributed anchor funding via their companies. As a micro-enterprise, my company Creative Contingencies is not in a position to be able to do that, so I set out to crowdfund my share of the fundraising effort.

I'd really like to shout out and thank EVERYONE who has made a contribution to get me this far. Whether you donated cash, or helped to amplify my voice, thank you SO so soooo much. I am deeply grateful for your support.

If you can't, or don't want to contribute because you do enough for Drupal, that's OK! I completely understand. You're awesome. :) But perhaps you know someone else who is using Drupal, or who will be using Drupal, whom you could ask to help us? Do you know someone, or an organisation, that gets untold value from the effort of our global community? Please ask them, on my behalf, to Make a Donation.

If you don't know anyone, perhaps you can help simply by sharing my plea? I'd love that help. I really would!

And if you, like some others I've spoken with, don't think people should be paid to make Free Software then I urge you to read Ashe Dryden's piece on the ethics of unpaid labor in the Open Source Community. It made me think again.

Do you want to know more about how the money is being spent? 

See: https://assoc.drupal.org/d8-accelerate-awarded-grants

Perhaps you want to find out how to apply to spend it on getting Drupal8 done?

See: https://assoc.drupal.org/d8-accelerate-application

Are you curious about the governance of the program?

See: https://www.drupal.org/governance/d8accelerate

And just once more, with feeling, I ask you to please consider making a donation.

So how much more do I need to get it done? To get to GAME OVER?

  • 1 donation x $7500 = game over!
  • 3 donations x $2500
  • 5 donations x $1500
  • 10 donations x $750
  • 15 donations x $500 <== average donation
  • 75 donations x $100 <== most common donation
  • 100 donations x $75
  • 150 donations x $50
  • 500 donations x $15
  • 750 donations x $10 <== minimum donation

Thank you for reading this far. Really :-)

James Morris: Linux Security Summit 2015 – Wrapup, slides

Thu, 2015-08-27 05:27

The slides for all of the presentations at last week’s Linux Security Summit are now available at the schedule page.

Thanks to all of those who participated, and to all the events folk at Linux Foundation, who handle the logistics for us each year, so we can focus on the event itself.

As with the previous year, we followed a two-day format, with most of the refereed presentations on the first day and more of a developer focus on the second day.  We had good attendance, and also this year had participants from a wider field than the more typical kernel security developer group.  We hope to continue expanding the scope of participation next year, as it's a good opportunity for people from different areas of security, and FOSS, to get together and learn from each other.  This was the first year, for example, that we had a presentation on Incident Response, thanks to Sean Gillespie who presented on GRR, a live remote forensics tool initially developed at Google.

The keynote by kernel.org sysadmin, Konstantin Ryabitsev, was another highlight, one of the best talks I’ve seen at any conference.

Overall, it seems the adoption of Linux kernel security features is increasing rapidly, especially via mobile devices and IoT, where we now have billions of Linux deployments out there, connected to everything else.  It’s interesting to see SELinux increasingly play a role here, on the Android platform, in protecting user privacy, as highlighted in Jeffrey Vander Stoep’s presentation on whitelisting ioctls.  Apparently, some major corporate app vendors, who were not named, have been secretly tracking users via hardware MAC addresses, obtained via ioctl.

We’re also seeing a lot of deployment activity around platform Integrity, including TPMs, secure boot and other integrity management schemes.  It’s gratifying to see the work our community has been doing in the kernel security/ tree being used in so many different ways to help solve large scale security and privacy problems.  Many of us have been working for 10 years or more on our various projects  — it seems to take about that long for a major security feature to mature.

One area, though, where I feel we need significantly more work is kernel self-protection: hardening the kernel so that coding flaws are harder to exploit.  I'm hoping that we can find ways to work with the security research community on incorporating more hardening into the mainline kernel.  I've proposed this as a topic for the upcoming Kernel Summit, as we need buy-in from core kernel developers.  I hope we'll have topics to cover on this, then, at next year's LSS.

We overlapped with Linux Plumbers, so LWN was not able to provide any coverage of the summit.  Paul Moore, however, has published an excellent write-up on his blog. Thanks, Paul!

The committee would appreciate feedback on the event, so we can make it even better for next year.  We may be contacted via email per the contact info at the bottom of the event page.

BlueHackers: The Legacy of Autism and the Future of Neurodiversity

Tue, 2015-08-25 09:50

The New York Times published an interesting review of a book entitled “NeuroTribes: The Legacy of Autism and the Future of Neurodiversity”, authored by Steve Silberman (534 pp. Avery/Penguin Random House).

Silberman describes how autism was discovered by a few different people around the same time, but in each case the publicity around their work was warped by their environment and political situation.

This means that we mainly know the angle that one of the people took, which in turn warps our view of Asperger's and autism. Ironically, the lesser-known story is actually that of Hans Asperger.

I reckon it’s an interesting read.

James Purser: Mark got a booboo

Mon, 2015-08-24 23:31

Mark Latham losing his AFR column because an advertiser thought his abusive tweets and articles weren't worth being associated with isn't actually a freedom of speech issue.

Nope, not even close to it.

Do you know why?

Because freedom of speech DOES NOT MEAN YOU'RE ENTITLED TO A GODS DAMNED NEWSPAPER COLUMN!!

No one is stopping Latho from spouting his particular down-home "outer suburban dad" brand of putrescence.

Hell, all he has to do to get back up and running is go and set up a WordPress account, and he can be back emptying his bile duct on the internet along with the rest of us who don't get cushy newspaper jobs because we managed to completely screw over our political careers in a most spectacular way.

Hey, he could set up a Patreon account and everyone who wants to can support him directly, either with a monthly sub or a per-flatulence rate.

This whole thing reeks of a massive sense of entitlement, both with Latho himself and his media supporters. Bolt, Devine and others who have leapt to his defence all push this idea that any move to expose writers to consequences arising from their rantings is some sort of mortal offense against democracy and freedom. Of course, while they do this, they demand the scalps of anyone who dares to write abusive rants against their own positions.

Sigh.

Oh and as I've been reminded, Australia doesn't actually have Freedom of Speech as they do in the US.

Blog Categories: media

David Rowe: Dual Rav 4 SM1000 Installation

Mon, 2015-08-24 15:30

Andy VK5AKH, and Mark VK5QI, have mirror image SM1000 mobile installations, same radio, even the same car! Some good lessons learned on testing and debugging microphone levels that will be useful for other people installing their SM1000. Read all about it on Mark’s fine blog.

David Rowe: Codec 2 Masking Model Part 1

Mon, 2015-08-24 11:30

Many speech codecs use Linear Predictive Coding (LPC) to model the short term speech spectrum. For very low bit rate codecs, most of the bit rate is allocated to this information.

While working on the 700 bit/s version of Codec 2 I hit a few problems with LPC and started thinking about alternatives based on the masking properties of the human ear. I’ve written Octave code to prototype these ideas.

I’ve spent about 2 weeks on this so far, so thought I better write it up. Helps me clarify my thoughts. This is hard work for me. Many of the steps below took several days of scratching on paper and procrastinating. The human mind can only hold so many pieces of information. So it’s like a puzzle with too many pieces missing. The trick is to find a way in, a simple step that gets you a working algorithm that is a little bit closer to your goal. Like evolution, each small change needs to be viable. You need to build a gentle ramp up Mount Improbable.

Problems with LPC

We perceive speech based on the position of peaks in the speech spectrum. These peaks are called formants. To clearly perceive speech the formants need to be distinct, e.g. two peaks with a low level (anti-formant) region between them.

LPC is not very good at modeling anti-formants, the space between formants. As it is an all pole model, it can only explicitly model peaks in the speech spectrum. This can lead to unwanted energy in the anti-formants which makes speech sound muffled and hard to understand. The Codec 2 LPC postfilter improves the quality of the decoded speech by suppressing inter-formant energy.

LPC attempts to model spectral slope and other features of the speech spectrum which are not important for speech perception. For example "flat", high pass or low pass filtered speech is equally easy for us to understand. We can pass speech through a Q=1 bandpass or notch filter and it will still sound OK. However LPC wastes bits on these features, and gets into trouble with large spectral slope.

LPC has trouble with high pitched speakers where it tends to model individual pitch harmonics rather than formants.

LPC is based on "designing" a filter to minimise mean square error rather than the properties of the human ear. For example it works on a linear frequency axis rather than log frequency like the human ear. This means it tends to allocate bits evenly across frequency, whereas an allocation weighted towards low frequencies would be more sensible. LPC often produces large errors near DC, an important area of human speech perception.

LPC puts significant information into the bandwidth of filters or width of formants, however due to masking the ear is not very sensitive to formant bandwidth. What is more important is sharp definition of the formant and anti-formant regions.

So I started thinking about a spectral envelope model with these properties:

  1. Specifies the location of formants with just 3 or 4 frequencies. Focuses on good formant definition, not the bandwidth of formants.
  2. Doesn't care much about the relative amplitude of formants (spectral slope). This can be coarsely quantised or just hard coded using, e.g., the fact that voiced speech has a natural low pass spectral slope.
  3. Works in the log amplitude and log frequency domains.

Auditory Masking

Auditory masking refers to the "capture effect" of the human ear, a bit like an FM receiver. If you hear a strong tone, then you can't hear slightly weaker tones nearby. The weaker ones are masked. If you can't hear these masked tones, there is no point sending them to the decoder. So we can save some bits. Masking is often used in (relatively) high bit rate audio codecs like MP3.

I found some Octave code for generating masking curves (Thanks Jon!), and went to work applying masking to Codec 2 amplitude modelling.

Masking in Action

Here are some plots to show how it works. Lets take a look at frame 83 from hts2a, a female speaker. First, 40ms of the input speech:

Now the same frame in the frequency domain:

The blue line is the speech spectrum, the red the amplitude samples {Am}, one for each harmonic. It’s these samples we would like to send to the decoder. The goal is to encode them efficiently. They form a spectral envelope, that describes the speech being articulated.

OK so lets look at the effect of masking. Here is the masking curve for a single harmonic (m=3, the highest one):

Masking theory says we can’t hear any harmonics beneath the level of this curve. This means we don’t need to send them over the channel and can save bits. Yayyyyyy.

Now lets plot the masking curves for all harmonics:

Wow, that’s a bit busy and hard to understand. Instead, lets just plot the top of all the masking curves (green):

Better. We can see that the entire masking curve is dominated by just a few harmonics. I’ve marked the frequencies of the harmonics that matter with black crosses. We can’t really hear the contribution from other harmonics. The two crosses near 1500Hz can probably be tossed away as they just describe the bottom of an anti-formant region. So that leaves us with just three samples to describe the entire speech spectrum. That’s very efficient, and worth investigating further.

Spectral Slope and Coding Quality

Some speech signals have a strong "low pass filter" slope between 0 and 4000Hz. Others have a "flat" spectrum – the high frequencies are about the same level as low frequencies.

Notice how the high frequency harmonics spread their masking down to lower frequencies? Now imagine we bumped up the level of the high frequency harmonics, e.g. with a first order high pass filter. Their masks would then rise, masking more low frequency harmonics, e.g. those near 1500Hz in the example above. Which means we could toss the masked harmonics away, and not send them to the decoder. Neat. Only down side is the speech would sound a bit high pass filtered. That’s no problem as long as it’s intelligible. This is an analog HF radio SSB replacement, not Hi-Fi.

This also explains why “flat” samples (hts1a, ve9qrp) with relatively less spectral slope code well, whereas others (kristoff, cq_ref) with a strong spectral slope are harder to code. Flat speech has improved masking, leaving less perceptually important information to model and code.

This is consistent with what I have heard about other low bit rate codecs. They often employ pre-processing such as equalisation to make the speech signal code better.

Putting Masking to work

Speech compression is the art of throwing stuff away. So how can we use this masking model to compress the speech? What can we throw away? Well lets start by assuming only the samples with the black crosses matter. This means we get to toss quite a bit of information away. This is good. We only have to transmit a subset of {Am}. How I’m not sure yet. Never mind that for now. At the decoder, we need to synthesise the speech, just from the black crosses. Hopefully it won’t sound like crap. Let’s work on that for now, and see if we are getting anywhere.

Attempt 1: Lets toss away any harmonics that have a smaller amplitude than the mask (Listen). Hmm, that sounds interesting! Apart from not being very good, I can hear a tinkling sound, like trickling water. I suspect (but haven't proved) this is because the masking model is putting harmonics above and below the mask from frame to frame, which makes them come and go quickly. Little packets of sine waves. I've heard similar sounds on other codecs when they are nearing their limits.

Attempt 2: OK, so how about we set the amplitude of all harmonics to exactly the mask level (Listen): Hmmm, sounds a bit artificial and muffled. Now I've learned that muffled means the formants are not well formed. Needs more difference between the formants and anti-formant regions. I guess this makes sense if all samples are exactly on the masking curve – we can just hear ALL of them. The LPC post filter I developed a few years ago increased the definition of formants, which had a big impact on speech quality. So lets try….

Attempt 3: Rather than deleting any harmonics beneath the mask, lets reduce their level a bit. That way we won’t get tinkling – harmonics will always be there rather than coming and going. We can use the mask instead of the LPC post filter to know which harmonics we need to attenuate (Listen).

That’s better! Close enough to using the original {Am} (Listen), however with lots of information removed.

For comparison here is Codec 2 700B (Listen) and Codec 2 1300 (aka FreeDV 1600 when we add FEC) (Listen). This is the best I've done with LPC/LSP to date.

The post filter algorithm is very simple. I set the harmonic magnitudes to the mask (green line), then boost only the non-masked harmonics (black crosses) by 6dB. Here is a plot of the original harmonics (red), and the version (green) I mangle with my model and send to the decoder for synthesis:
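
In C terms, the per-frame operation is roughly this (a sketch with made-up names, not the actual Octave code):

/* Am[] and mask[] hold harmonic magnitudes and mask levels in dB for
 * harmonics 1..L; is_peak[] marks the mask-defining (black cross) harmonics */
void
postfilter_sketch(float Am[], const float mask[], const int is_peak[], int L)
{
    int m;

    for (m = 1; m <= L; m++) {
        Am[m] = mask[m];         /* set each harmonic to the mask level */
        if (is_peak[m])
            Am[m] += 6.0f;       /* boost non-masked harmonics by 6dB   */
    }
}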

Here is a spectrogram (thanks Audacity) for Attempt 1, 2, and 3 for the first 1.6 seconds (“The navy attacked the big”). You can see the clearer formant representation with Attempt 3, compared to Attempt 2 (lower inter-formant energy), and the effect of the post filter (dark line in center of formants).

Command Line Kung Fu

If you want to play along:



~/codec2-dev/build_linux/src$ ./c2sim ../../raw/kristoff.raw --dump kristoff

 

octave:49> newamp_batch("../build_linux/src/kristoff");

 

~/codec2-dev/build_linux/src$ ./c2sim ../../raw/kristoff.raw --amread kristoff_am.out -o - | play -t raw -r 8000 -e signed-integer -b 16 - -q

The “newamp_fbf” script lets you single step through frames.

Phases

To synthesise the speech at the decoder I also need to come up with a phase for each harmonic. Phase and speech is still a bit of a mystery to me. Not sure what to do here. In the zero phase model, I sampled the phase of the LPC synthesis filter. However I don’t have one of them any more.

Lets think about what the LPC filter does with the phase. We know at resonance phase shifts rapidly:

The sharper the resonance the faster it swings. This has the effect of dispersing the energy in the pitch pulse exciting the filter.

So with the masking model I could just choose the center of each resonance, and swing the phase about madly. I know where the center of each resonance is, as we found that with the masking model.

Next Steps

The core idea is to apply a masking model to the set of harmonic magnitudes {Am} and select just 3-4 samples of that set that define the mask. At the decoder we use the masking model and a simple post filter to reconstruct a set of {Am_} that we use to synthesise the decoded speech.

Still a few problems to solve, however I think this masking model holds some promise for high quality speech at low bit rates. As it’s completely different to conventional LPC/LSP I’m flying blind. However the pieces are falling into place.

I’m currently working on i) how to reduce the number of samples to a low number ii) how to determine which ones we really need (e.g. discarding interformant samples); and iii) how to represent the amplitude of each sample with a low or zero number of bits. There are also some artifacts with background noise and chunks of spectrum coming and going.

I’m pretty sure the frequencies of the samples can be quantised coarsely, say 3 bits each using scalar quantisation, or perhaps 8 bit/s frame using VQ. There will also be quite a bit of correlation between the amplitudes and frequencies of each sample.

For voiced speech there will be a downwards (low pass) slope in the amplitudes, for unvoiced speech more energy at high frequencies. This suggests joint VQ of the sample frequencies and amplitudes might be useful.

The frequency and amplitude of the mask samples will be highly correlated in time (small frame to frame variations) so will have good robustness to bit errors if we apply trellis decoding techniques. Compared to LPC/LSP the bandwidth of formants is “hard coded” by the masking curves, so the dreaded LSPs-too-close due to bit errors R2D2 noises might be a thing of the past. I’ll explore robustness to bit errors when we get to the fully quantised stage.

Sridhar Dhanapalan: Twitter posts: 2015-08-17 to 2015-08-23

Mon, 2015-08-24 00:27

David Rowe: A Miserable Debt Free Life Part 2

Sun, 2015-08-23 09:31

The first post was very popular, and sparked debate all over the Internet. I’ve read many of the discussions, and would like to add a few points.

Firstly I don’t feel I did a very good job of building my assets – plenty of my friends have done much better in terms of net worth and/or early retirement. Many have done the Altruism thing better than I. Sites like Mr. Money Moustache do a better job at explaining the values I hold around money. Also I’ve lost interest in more accumulation, but my lifestyle seems interesting to people, hence these posts.

The Magical 10%

The spreadsheet I put up was not for you. It was just a simple example, showing how compound interest, savings and time can work for you. Or against you, if you like easy credit and debt. A lot of people seem hung up on the 10% figure I used.

I didn’t spell out exactly what my financial strategy is for good reason.

You need to figure out how to achieve your goals. Maybe it's saving, maybe it's getting educated to secure a high income, or maybe it's nailing debt early. Some of my peers like real estate. I like shares, a good education, professional experience, and small business. I am mediocre at most of them. I looked at other people's stories, then found something that worked for me.

But you need to work this out. It’s part of the deal, and you are not going to get the magic formula from a blog post by some guy sitting on a couch with too much spare time on his hands and an Internet connection.

The common threads are spending less than you earn, investment, and time. And yes, this is rocket science. The majority of the human race just can't do it. Compound interest is based on exponential growth – which is completely under-appreciated by the human race. We just don't get exponential growth.

Risk

Another issue around the 10% figure is risk. People want guarantees, zero risk, a cook book formula. Life doesn’t work like that. I had to deal with shares tumbling after 9/11 and the GFC, and a divorce. No one on a forum in the year 2000 told me about those future events when I was getting serious about saving and investing. Risk and return are a part of life. The risk is there anyway – you might lose your job tomorrow or get sick or divorced or have triplets. It’s up to you if you want to put that risk to work or shy away from it.

Risk can be managed, plan for it. For example you can say “what happens if my partner loses his job for 12 months”, or “what happens if the housing market dips 35% overnight”. Then plug those numbers in and come up with a strategy to manage that risk.

Lets look at the down side. If the magical 10% is not achieved, or even if a financial catastrophe strikes, who is going to be in a better position? Someone who is frugal and can save, or someone maxed out on debt who can’t live without the next pay cheque?

There is a hell of lot more risk in doing nothing.

Make a Plan and Know Thy Expenditure

Make your own plan. There is something really valuable in simply having a plan. Putting some serious thought into it. Curiously, I think this is more valuable than following the plan. I’m not sure why, but the process of planning has been more important to me than the actual plan. It can be a couple of pages of dot points and a single page spreadsheet. But write it down.

Some people commented that they know what they spend, for example they have a simple spreadsheet listing their expenses or a budget. Just the fact that they know their expenditure tells me they have their financial future sorted. There is something fundamental about this simple step. The converse is also true. If you can’t measure it, you can’t manage it.

No Magic Formula – It’s Hard Work

If parts of my experience don’t work for you, go and find something that does. Anything of value is 1% inspiration and 99% perspiration. Creating your own financial plan is part of the 99%. You need to provide that. Develop the habit of saving. Research investment options that work for you. Talk to your successful friends. Learn to stop wasting money on stuff you don’t need. Understand compound interest in your saving and in your debt. Whatever it takes to achieve your goals. These things are hard. No magic formula. This is what I’m teaching my kids.

Work your System

There is nothing unique about Australia, e.g. middle class welfare, socialised medicine, or high priced housing. Well it is quite nice here but we do speak funny and the drop bears are murderous. And don’t get me started on Tony Abbott. The point is that all countries have their risks and opportunities. Your system will be different to mine. Health care may suck where you live but maybe house prices are still reasonable, or the average wage in your profession is awesome, or the cost of living really low, or you are young without dependents and have time in front of you. Whatever your conditions are, learn to make them work for you.

BTW why did so few people comment on the Altruism section? And why so many on strategies for retiring early?

Binh Nguyen: Cracking a Combination Lock, Some Counter-Stealth Thoughts, and More Apple Information

Sat, 2015-08-22 23:55
Someone was recently trying to sell a safe but they didn't have the combination (they had proof of ownership if you're wondering). Anybody who has been faced with this situation is often torn because sometimes the item in question is valuable but the safe can be of comparable value, so it's a lose-lose situation. If you remember the original combination then all is fine and well (I first encountered this situation in a hotel when I locked something but forgot the combination. It took me an agonising amount of time to recall the unlock code). If not, you're left with physical destruction of the safe to get back in, etc...



Tips on getting back in:

- did you use mnemonics of some sort to get at the combination?

- is there a limitation on the string that can be entered (any side intelligence is useful)?

- is there a time lock involved?

- does changing particular variables make it easier to get back in non-destructively?

- keep a log on the combinations that you have tried to ensure you don't re-cover the same territory



In this case, things were a bit odd. It had rubber buttons which when removed exposed membrane type switches which could be interfaced via an environmental sensor acquisition and interface device (something like an Arduino) (if you're curious, this was designed and produced by a well known international security firm, proving that brand doesn't always equate to quality). Once you program it and wire things up correctly, it's simply a case of letting your robot and program run until you open the safe. Another option is a more robust robot which pushes buttons, but obviously this takes quite a bit more hardware (which can make the project pretty expensive and potentially unworthwhile) to get working.

http://techcrunch.com/2015/05/14/this-robot-cracks-open-combination-locks-in-seconds/



As I covered in my book on 'Cloud and Internet Security', please use proper locks with adequate countermeasures (time locks, variable string lengths, abnormal characters, shim proof, relatively unbreakable, etc...) and have a backup in case something goes wrong.

https://play.google.com/store/books/author?id=Binh+Nguyen

http://www.amazon.com/mn/search/?_encoding=UTF8&camp=1789&creative=390957&field-author=Binh%20Nguyen&linkCode=ur2&search-alias=digital-text&sort=relevancerank&tag=bnsb-20&linkId=3BWQJUK2RCDNUGFY



Been thinking about stealth design and counter measures a bit more.



- when you look at the 2D thrust vectoring configuration of the F-22 Raptor you think, why didn't they go 3D at times? One possible reason may be the 'letterbox effect'. It was designed predominantly as an air superiority fighter that relies heavily on BVR capabilities. From front on, the plume effect is diminished (think about particle/energy weapon implementation problems), making it more difficult to detect. Obviously, this potentially reduces sideward movement (particularly in comparison with 3D TVT options. Pure turn is more difficult but combined bank and turn isn't). The obvious tactic is to force the F-22 into sideward movements if it is ever on your tail (unlikely, due to apparently better sensor technology though)

- the above is a null point if you factor in variable thrust (one engine fires at a higher rate of thrust relative to the other) but it may result in feedback issues. People who have experience with fly by wire systems or high performance race cars which are undertuned will better understand this

- people keep on harping on about how 5th gen fighters can rely more heavily on BVR capabilities. Something which is often little spoken of is the relatively low performance of AAM (Air to Air Missile) systems (moreover, there is a difference between seeing, achieving RADAR lock, and achieving a kill). There must be upgrades along the way/in the pipeline to make 5th gen fighters a viable/economic option into the future

- the fact that several allied nations (Japan, Korea, and Turkey are among them currently) (India, Indonesia, and Russia are among those who are developing their own based on non-Western design) are developing their own indigenous 5th gen fighters which have characteristics more similar to the F-22 Raptor (the notable exception may be Israel who are maintaining and upgrading their F-15 fleet) and have air superiority in mind tells us that the F-35 is a much poorer brother to the F-22 Raptor in spite of what is being publicly said

https://www.rt.com/usa/312220-f-35-flying-saucer-tech/

http://www.news1130.com/2015/08/12/f-35-might-not-meet-performance-standards-of-cf-18s-says-u-s-think-tank/

http://www.defensenews.com/story/defense/air-space/strike/2015/08/10/turkey-upgrade-f-16-block-30-aircraft/31408875/

https://en.wikipedia.org/wiki/Mitsubishi_ATD-X

http://www.businessinsider.in/Indo-Russian-5th-Generation-Fighter-Aircraft-program-Delays-and-the-possible-outcomes/articleshow/47655536.cms

http://www.defenseone.com/technology/2015/02/heres-what-youll-find-fighter-jet-2030/104736/

https://en.wikipedia.org/wiki/Fifth-generation_jet_fighter

https://en.wikipedia.org/wiki/TAI_TFX

https://en.wikipedia.org/wiki/KAI_KF-X

http://www.defenseindustrydaily.com/kf-x-paper-pushing-or-peer-fighter-program-010647/

Warplanes: No Tears For The T-50

https://www.strategypage.com/htmw/htairfo/articles/20150421.aspx

- it's clear that the US and several allied nations believe that current stealth may have limited utility in the future. In fact, the Israelis have said that within 5-10 years the JSF may lose any significant advantage that it currently has without upgrades

- everyone knows of the limited utility of AAM (Air to Air Missile) systems. It will be interesting to see whether particle/energy weapons are retrofitted to the JSF or whether they will be reserved entirely for 6th gen fighters. I'd be curious to know how much progress they've made on this, particularly with regards to energy consumption

- even if there have been/are intelligence breaches in the design of new fighter jets there's still the problem of production. The Soviets basically had the complete blueprints for NASA's Space Shuttle but ultimately decided against using it on a regular basis/producing more because, like the Americans, they discovered that it was extremely uneconomical. For a long time, the Soviets have trailed the West with regards to semiconductor technology which means that their sensor technology may not have caught up. This mightn't be the case with the Chinese. Ironically, should the Chinese fund the Russians and they work together, they may achieve greater progress than working independently

http://www.abc.net.au/news/2015-08-18/former-spy-molly-sasson-says-soviet-mole-infiltrated-asio/6704096

https://en.wikipedia.org/wiki/Buran_(spacecraft)

- some of the passive IRST systems out there have current ranges of about the 100-150km mark (that is publicly acknowledged)

http://www.washingtonexaminer.com/the-price-of-stealth/article/2570647

http://aviationweek.com/technology/new-radars-irst-strengthen-stealth-detection-claims

https://en.wikipedia.org/wiki/Stealth_aircraft

http://thediplomat.com/2014/10/how-effective-is-chinas-new-anti-stealth-radar-system-really/

http://www.wired.co.uk/news/archive/2012-10/01/radar-detects-stealth-aircraft

https://en.wikipedia.org/wiki/Radar

http://www.migflug.com/jetflights/p-i-r-a-t-e-versus-raptor.html

http://nationalinterest.org/blog/the-buzz/are-us-fighter-jets-about-become-obsolete-12612

http://nationalinterest.org/feature/are-submarines-about-become-obsolete-12253

http://theminiaturespage.com/boards/msg.mv?id=374487

http://www.navytimes.com/story/military/tech/2015/02/09/greenert-questions-stealth-future/22949703/

http://watchingamerica.com/WA/2015/03/23/the-us-navy-has-already-stopped-believing-in-the-jsf/

- disorientation of gyroscopes has been used as a strategy against UCAV/UAVs. I'd be curious about how such technology would work against modern fighters which often go into failsafe mode (nobody wants to lose a fighter jet worth 8 or more figures. Hence, the technology) when the pilot blacks out... The other interesting thing would be how on-field technologies such as temporary sensory deprivation (blinding, deafening, disorientation, etc...) could be used in unison from longer range. All technologies which have been tested and used against ground based troops before.

http://defensesystems.com/articles/2015/08/10/kaist-researchers-take-out-drones-with-sound.aspx

https://en.wikipedia.org/wiki/Brown_note

- I've been thinking/theorising about some light based detection technologies for aircraft in general. One option I've been considering is somewhat like a spherical ball. The spherical ball is composed of lenses which focus in on a centre composed of sensors, a hybrid technology based on the photoelectric effect and spectroscopic theory. The light would automatically trigger a voltage (much like a solar cell) while use of diffraction/spectroscopic theory would enable identification of aircraft from long range using light. The theory behind this is based on the way engine plumes work and the way jet fuels differ. Think about this carefully. Russian rocket fuel is very different from Western rocket fuel. I suspect it's much the same for jet fuel. We currently identify star/planet composition on roughly the same theory. Why not fighter aircraft? Moreover, there are other distinguishing aspects of the jet fighter nozzle exhausts (see my previous post and the section on LOAN systems, http://dtbnguyen.blogspot.com/2015/07/joint-strike-fighter-f-35-notes.html). Think about the length and shape of each one based on the current flight mode (full afterburner, cruising, etc...) and the way most engine exhausts are unique (due to a number of different reasons including engine design, fuel, etc...). Clearly, the F-22, F-35, B-2, and other stealth aircraft have very distinctive nozzle shapes when compared to current 4th gen fighter options and among one another. The other thing is that, given sufficient research (and I suspect a lot of time), I believe that the benefits of night or day flight will/could be largely mitigated. Think about the way in which light and camera filters (and night vision) work. They basically screen out based on frequency/wavelength to make things more visible. You should be able to achieve the same thing during daylight. The other bonus of such technology is that it is entirely passive, giving the advantage back to the party in defense, and intelligence is relatively easy to collect. Just show up at a demonstration or near an airfield...

https://en.wikipedia.org/wiki/Jet_fuel

http://foxtrotalpha.jalopnik.com/so-what-were-those-secret-flying-wing-aircraft-spotted-1555124270

http://www.globalsecurity.org/military/world/stealth-aircraft-vulnerabilities-contrails.htm

https://en.wikipedia.org/wiki/Electro-optical_sensor

https://en.wikipedia.org/wiki/Optical_spectrometer

https://en.wikipedia.org/wiki/AN/AAQ-37 

- such technology may be a moot point as we have already made progress on cloaking (effectively invisible to the naked eye) technology (though exact details are classified, as are a lot of other details regarding particle/energy weapons and shielding technologies)... There's also the problem of straight lines. For practical purposes, light travels in straight lines... OTH type capabilities are beyond such technology (for the time being. Who knows what will happen in the future?)

- someone may contest that I seem to be focusing in on exhaust only, but as you are aware this style of detection should also work against standard objects as well (though its practicality would be somewhat limited). Just like RADAR though, you give up on being able to power through weather and other physical anomalies because you can't use a conventional LASER. For me, this represents a balance between being detected from an attacker's perspective and being able to track them from afar... If you've ever been involved in a security/bug sweep you will know that a LASER even of modest power can be seen from quite a distance away

- everybody knows how dependent allied forces are upon integrated systems (sensors, re-fuelling, etc...)

- never fly straight and level against a 5th gen fighter. Weave up and down and side to side even on patrols to maximise the chances of detection earlier in the game because all of them don't have genuine all aspect stealth

- I've been thinking of other ways of defending against low observability aircraft. The first is based on 'loitering' weapons. Namely, weapons which move at low velocity/loiter until they come within targeting range of aircraft. Then they 'activate' and chase their target much like a 'moving mine' (a technology often seen in cartoons?). Another is essentially turning off all of your sensors once they come within targeting range. Once they end up in passive detection range, then you fire in massive, independent volleys knowing full well that low observability aircraft have low payload capability owing to compromises in their design

- as stated previously, I very much doubt that the JSF is as bad some people are portraying

http://sputniknews.com/military/20150816/1025815446.html

http://news.usni.org/2015/08/13/davis-f-35b-external-weapons-give-marines-4th-5th-generation-capabilities-in-one-plane

- it's clear that defense has become more integrated with economics now by virtue of the fact that most of our current defense theory is based on the notion of deterrence. I believe that the only true way forward is reform of the United Nations, increased use of un-manned technologies, and perhaps people coming to terms with their circumstances differently (unlikely given how long humanity has been around), etc... There is a strong possibility that the defense establishment's belief that future defense programs could be unaffordable could become true within the context of deterrence and our need to want to control affairs around the world. We need cheaper options with the ability to 'push up' when required...

http://www.thephora.net/forum/showthread.php?t=79496

http://breakingdefense.com/2014/04/f-35s-stealth-ew-not-enough-so-jsf-and-navy-need-growlers-boeing-says-50-100-more/

http://theaviationist.com/2013/06/17/su-35-le-bourget/

http://staugustine.com/news/2015-08-18/pentagon-plans-increase-drone-flights-50-percent



All of this is a moot point though because genuine 5th gen fighters should be able to see you from a mile off, and most countries who have entered into the stealth technology arena are struggling to build 5th gen options (including Russia, who have a long history in defense research and manufacturing). For the most part, they're opting for a combination of direct confrontation and damage limitation: reducing defensive projection capability through long range weapons such as aircraft-carrier-destroying missiles, targeting of AWACS/refuelling systems, etc... and like-for-like battle options...

http://www.businessinsider.com/all-the-weapons-russias-sukhoi-t-50-fighter-jet-is-designed-to-carry-in-one-infographic-2015-8?IR=T

http://www.onislam.net/english/health-and-science/special-coverage/492459-muslim-sibirs-stealth-sukhoi-pak-fa-infographs.html



I've been working on more Apple based technology of late (I've been curious about the software development side for a while). It's been intriguing taking a closer look at their hardware. Most people I've come across have been impressed by the Apple ecosystem. To be honest, the more I look at the technology borne from this company the more 'generic' it seems. Much of the technology is simply repackaged but in a better way. They've had more than their fair share of problems.



How to identify MacBook models

https://support.apple.com/en-au/HT201608

How to identify MacBook Pro models

https://support.apple.com/en-us/HT201300



A whole heap of companies, including graphics card, game console, and computer manufacturers, were caught out with BGA implementation problems (basically, people tried to save money by reducing the quality of solder. These problems have largely been fixed, much like the earlier capacitor saga). Apple weren't immune

https://www.ifixit.com/Guide/Yellow+Light+of+Death+Repair/3654

https://www.ifixit.com/Store/Game-Console/PlayStation-3-Yellow-Light-of-Death-YLOD-Fix-Kit/IF213-028-1

http://www.gamefaqs.com/ps3/927750-playstation-3/answers/66227-any-solutions-on-fixing-ylod-yellow-light-of-death



Lines on a screen of an Apple iMac. Can be due to software settings, firmware, or hardware

https://discussions.apple.com/thread/5625161

https://discussions.apple.com/thread/6604981

https://www.ifixit.com/Answers/View/172653/How+to+fix+%22vertical+lines%22+on+my+iMac+27+late+2009

https://www.ifixit.com/Answers/View/349/Vertical+lines+appearing+on+display



Apparently, MacBooks get noisy headphone jacks from time to time. Can be due to software settings or hardware failure

http://hints.macworld.com/article.php?story=20090729165848939

https://discussions.apple.com/thread/5516994

https://discussions.apple.com/thread/3853844

http://apple.stackexchange.com/questions/8039/how-can-i-make-my-macbook-pros-headphone-jack-stop-humming



One of the strangest things I've found is that in spite of a complete failure of the primary storage device, people still try to sell the hardware for almost the current market value of a perfectly functional machine. Some people still go for it, but I'm guessing they have spare hardware lying around

https://discussions.apple.com/thread/5565827

https://discussions.apple.com/thread/6151526

http://apple.stackexchange.com/questions/158092/a-bad-shutdown-resulting-in-a-flashing-folder-with-question-mark



There are some interesting aspects to their MagSafe power adapters. Some aspects are similar to authentication protocols used by manufacturers such as HP to ensure that everything is safe and that only original OEM equipment is used. Something tells me they don't do enough testing though. They seem to have a continuous stream of anomalous problems. It could be similar to the Microsoft Windows security problem though: do you want an OS delivered in a timely fashion, or one that is deprecated but secure at a later date (a point delivered in a lecture by a Microsoft spokesman a while back)? You can't predict everything that happens when things move into mass scale production, but I would have thought that the 'torquing' problem would have been obvious from a consumer engineering/design perspective from the outset...

https://en.wikipedia.org/wiki/MagSafe

http://www.righto.com/2013/06/teardown-and-exploration-of-magsafe.html

https://www.ifixit.com/Answers/View/34477/Correct+wiring+of+MagSafe+power+adapter

http://www.instructables.com/id/MacBook-Mag-Safe-Charger-Budget-Repair-Disas/step2/Disassembly-of-Power-Brick-Brute-Force-Attack/

http://apple.stackexchange.com/questions/111617/using-85w-magsafe-inplace-of-60w-magsafe-2-for-mbp-retina-13

https://www.ifixit.com/Answers/View/1855/Definitive+answer+on+using+60w+or+85w+power+adapter+with+Macbook+Air



Upgrading Apple laptop hard drives is similar in complexity to that of PC based laptops

http://www.extremetech.com/computing/58220-upgrade-your-macbook-pros-hard-drive-2

http://www.macinstruct.com/node/130



One thing has to be said of Apple hardware construction: it's radically different to that of PC based systems. I'd rather deal with a business class laptop that is designed to be upgraded and probably exhibits greater reliability, to be honest. Opening a lot of their devices has told me that the balance between form and function tilts too far towards form

https://www.ifixit.com/Guide/MacBook+Core+2+Duo+Upper+Case+Replacement/515

https://www.ifixit.com/Guide/MacBook+Core+2+Duo+Logic+Board+Replacement/528

https://www.ifixit.com/Guide/MacBook+Pro+15-Inch+Unibody+Late+2011+Logic+Board+Replacement/7518



One frustrating aspect of the Apple ecosystem is that they gradually phase out support of old hardware by inserting pre-requisite checking. Thankfully, as others (and I) have discovered, bypassing some of their checks can be trivial at times

https://en.wikipedia.org/wiki/OS_X

http://ask.metafilter.com/276359/How-to-best-upgrade-my-2006-MacBook-Pro

http://osxdaily.com/2011/04/08/hack-mac-os-x-lion-for-core-duo-core-solo-mac/

https://www.thinkclassic.org/viewtopic.php?id=425

http://www.macbreaker.com/2013/06/how-to-install-os-x-109-mavericks-dp1.html

http://apple.stackexchange.com/questions/103054/unsupported-hack-or-workaround-to-get-64-bit-os-x-to-install-on-a-macbook-pro-ha

David Rowe: Hamburgers versus Oncology

Sat, 2015-08-22 08:30

On a similar, but slightly lighter note, this blog was pointed out to me. The subject is high (saturated) fat versus carbohydrate based diets, which is an ongoing area of research, and may (may) be useful in managing diabetes. This gentleman is a citizen scientist (and engineer no less) like myself. Cool. I like the way he uses numbers and in particular the way data is presented graphically.

However I tuned out when I saw claims of "using ketosis to fight cancer", backed only by an anecdote. If you are interested, this claim is thoroughly debunked on www.sciencebasedmedicine.org.

Bullshit detection 101 – if you find a claim of curing cancer, it's pseudo-science. If the evidence cited is one person's story (an anecdote) it's rubbish. You can safely move along. It shows a dangerous leaning towards dogma, rather than science. Unfortunately, these magical claims can obscure useful research in the area. For example exploring a subtle, less sensational effect between a ketogenic diet and diabetes. That's why people doing real science don't make outrageous claims without very strong evidence – it kills their credibility.

We need short circuit methods for discovering pseudo science. Otherwise you can waste a lot of time and energy investigating spurious claims. People can get hurt or even killed. It takes a lot less effort to make a stupid claim than to prove it's stupid. These days I can make a call by reading about 1 paragraph; the tricks used to apply a scientific veneer to magical claims are pretty consistent.

A hobby of mine is critical thinking, so I enjoy exploring magical claims from that perspective. I am scientifically trained and do R&D myself, in a field that I earned a PhD in. Even with that background, I know how hard it is to create new knowledge, and how easy it is to fool myself when I want to believe.

I'm not going to try bacon double cheeseburger (without the bun) therapy if I get cancer. I'll be straight down to Oncology and take the best that modern, evidence based medicine can give, from lovely, dedicated people who have spent 20 years studying and treating it. Hit me with the radiation and chemotherapy, Doc! And don't spare the Sieverts!

David Rowe: Is Alt-Med Responsible for 20% of Cancer Deaths?

Sat, 2015-08-22 08:30

In my meanderings on the InterWebs this caught my eye:

As a director of a cancer charity I work with patients everyday; my co-director has 40-yrs experience at the cancer coalface. We’re aware there are many cancer deaths that can be prevented if we could reduce the number of patients delaying or abandoning conventional treatment while experimenting with alt/med. It is ironic that when national cancer deaths are falling the numbers of patients embracing alt/med is increasing and that group get poor outcomes. If about 46,000 patients die from cancer in 2015, we suspect 10-20% will be caused by alt/med reliance. This figure dwarfs the road toll, deaths from domestic violence, homicide, suicide and terrorism in this country.

This comment was made by Pip Cornell, in the comments on this article discussing declining cancer rates. OK, so Pip’s views are anecdotal. She works for a charity that assists cancer sufferers. I’m putting it forward as a theory, not a fact. More research is required.

The good news is that evidence based medicine is getting some traction with cancer. The bad news is that Alt-med views may be killing people. I guess this shouldn’t surprise me; Alt-med (non evidence-based medicine) has been killing people throughout history.

The Australian Government has recently introduced financial penalties for parents who do not vaccinate. Raw milk has been outlawed after it killed a toddler. I fully support these developments. Steps in the right direction. I hope they take a look at the effect of alt-med on serious illness like cancer.

Russell Coker: The Purpose of a Code of Conduct

Wed, 2015-08-19 20:26

On a private mailing list there have been some recent discussions about a Code of Conduct which demonstrate some great misunderstandings. The misunderstandings don’t seem particular to that list so it’s worthy of a blog post. Also people tend to think more about what they do when their actions will be exposed to a wider audience so hopefully people who read this post will think before they respond.

Jokes

The first discussion concerned the issue of making “jokes”. When dealing with the treatment of other people (particularly minority groups) the issue of “jokes” is a common one. It’s fairly common for people in positions of power to make “jokes” about people with less power and then complain if someone disapproves. The more extreme examples of this concern hate words which are strongly associated with violence; one of the most common is a word used to describe gay men which has often been associated with significant violence and murder. Men who are straight and who conform to the stereotypes of straight men don’t have much to fear from that word, while men who aren’t straight will associate it with a death threat and tend not to find any amusement in it.

Most minority groups have words that are known to be associated with hate crimes. When such words are used they usually send a signal that the minority groups in question aren’t welcome. The exception is when the words are used by other members of the group in question. For example if I was walking past a biker bar and heard someone call out “geek” or “nerd” I would be a little nervous (even though geeks/nerds have faced much less violence than most minority groups). But at a Linux conference my reaction would be very different. As a general rule you shouldn’t use any word that has a history of being used to attack any minority group other than one that you are a member of, so black rappers get to use a word that was historically used by white slave-owners but because I’m white I don’t get to sing along to their music. As an aside we had a discussion about such rap lyrics on the Linux Users of Victoria mailing list some time ago, hopefully most people think I’m stating the obvious here but some people need a clear explanation.

One thing that people should consider regarding “jokes” is the issue of punching-down vs punching-up [1] (there are many posts about this topic, I linked to the first Google hit which seems quite good). The basic concept is that making jokes about more powerful people or organisations is brave while making “jokes” about less powerful people is cowardly and serves to continue the exclusion of marginalised people. When I raised this issue in the mailing list discussion a group of men immediately complained that they might be bullied by lots of less powerful people making jokes about them. One problem here is that powerful people tend to be very thin skinned due to the fact that people are usually nice to them. While the imaginary scenario of less powerful people making jokes about rich white men might be unpleasant if it happened in person, it wouldn’t compare to the experience of less powerful people who are the target of repeated “jokes” in addition to all manner of other bad treatment. Another problem is that the impact of a joke depends on the power of the person who makes it, EG if your boss makes a “joke” about you then you have to work on your CV, if a colleague or subordinate makes a joke then you can often ignore it.

Who does a Code of Conduct Protect

One member of the mailing list wrote a long and very earnest message about his belief that the CoC was designed to protect him from off-topic discussions. He analysed the results of a CoC on that basis and determined that it had failed due to the number of off-topic messages on the mailing lists he subscribes to. Being so self-centered is strongly correlated with being in a position of power; he seems to sincerely believe that everything should be about him, that he is entitled to all manner of protection and that any rule which doesn’t protect him is worthless.

I believe that the purpose of all laws and regulations should be to protect those who are less powerful, the more powerful people can usually protect themselves. The benefit that powerful people receive from being part of a system that is based on rules is that organisations (clubs, societies, companies, governments, etc) can become larger and achieve greater things if people can trust in the system. When minority groups are discouraged from contributing and when people need to be concerned about protecting themselves from attack the scope of an organisation is reduced. When there is a certain minimum standard of treatment that people can expect then they will be more willing to contribute and more able to concentrate on their contributions when they don’t expect to be attacked.

The Public Interest

When an organisation declares itself to be acting in the public interest (EG by including “Public Interest” in the name of the organisation) I think that we should expect even better treatment of minority groups. One might argue that a corporation should protect members of minority groups for the sole purpose of making more money (it has been proven that more diverse groups produce better quality work). But an organisation that’s in the “Public Interest” should be expected to go way beyond that and protect members of minority groups as a matter of principle.

When an organisation is declared to be operating in the “Public Interest” I believe that anyone who’s so unable to control their bigotry that they can’t refrain from being bigoted on the mailing lists should not be a member.

Related posts:

  1. Perfect Code vs Quite Good Code Some years ago I worked on a project where software...
  2. The Purpose of Planet Debian An issue that causes ongoing discussion is what is the...
  3. WTF – Let’s write all the code twice There is an interesting web site WorseThanFailure.com (with the slogan...

James Purser: The next step in the death of the regional networks

Tue, 2015-08-18 23:30

So we were flicking around YouTube this evening, as we are wont to do, and we came across this ad.

Now, an ad on YouTube is nothing special; however, what is special about this one is the fact that it's a local ad. That fishing shop is fifteen minutes from where I live and it's not the first local ad that I've seen on YouTube lately.

This means two things: YouTube can tell that I'm from the area the ad is targeted at, and local businesses now have an alternative to the local TV networks for advertising, an alternative that is available across multiple platforms, has a constant source of new content and is deeply embedded in the internet-enabled culture that the networks have been ignoring for the past fifteen years.

Getting rid of the 2/3 rule, or removing the 75% reach rule won't save the networks. Embracing the internet and engaging with people in that space, just might.


Francois Marier: Watching (some) Bluray movies on Ubuntu 14.04 using VLC

Tue, 2015-08-18 16:47

While the Bluray digital restrictions management system is a lot more crippling than the one preventing users from watching their legally purchased DVDs, it is possible to decode some Bluray discs on Linux using vlc.

First of all, install the required packages as root:

apt install vlc libaacs0 libbluray-bdj libbluray1
mkdir /usr/share/libbluray/
ln -s /usr/share/java/libbluray-0.5.0.jar /usr/share/libbluray/libbluray.jar

The last two lines are there to fix an error you might see on the console when opening a Bluray disc with vlc:

libbluray/bdj/bdj.c:249: libbluray.jar not found.
libbluray/bdj/bdj.c:349: BD-J check: Failed to load libbluray.jar

and is apparently due to a bug in libbluray.

Then, as a user, you must install some AACS decryption keys. The most interesting source at the moment seems to be labDV.com:

mkdir ~/.config/aacs
cd ~/.config/aacs
wget http://www.labdv.com/aacs/KEYDB.cfg

but it is still limited in the range of discs it can decode.
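
Once the keys are in place, playback can be tested by pointing vlc straight at the disc. A minimal example, assuming the drive shows up as /dev/sr0 (the device path is an assumption and varies between machines):

vlc bluray:///dev/sr0

If the disc is already mounted, its mount point should work as well, e.g. vlc bluray:///media/$USER/MOVIE_TITLE (where MOVIE_TITLE is just a placeholder for the disc label).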

David Rowe: OLPC and Measuring if Technology Helps

Tue, 2015-08-18 16:30

I have a penchant for dating teachers who have worked in Australia’s 3rd world. This has given me a deep, personal appreciation of just how hard developing world education can be.

So I was wondering where the OLPC project had gone. And in particular, has it helped people? I have had some experience with this wonderful initiative, and it was the subject of much excitement in my geeky, open source community.

I started to question the educational outcomes of the OLPC project in 2011. Too much tech buzz, and I know from my own experiences (and those of friends in the developing world) that parachuting rich white guy technology into the developing world then walking away just doesn’t work. It just makes geeks and the media feel good, for a little while at least.

Turns out 2.5M units have been deployed worldwide, quite a number for any hardware project. One Education alone has an impressive 50k units in the field, and is seeking to deploy many more. Rangan Srikhanta from One Education Australia informed me (via a private email) that a 3 year study has just kicked off with 3 Universities, to evaluate the use of the XO and other IT technology in the classroom. Initial results are due in 2016. They have also tuned their deployment strategy to address better use of deployed XOs.

Other studies have questioned the educational outcomes of the OLPC project. Quite a vigorous debate in the comments there! I am not a teacher, so don’t profess to have the answers, but I did like this quote:

He added: “…the evidence shows that computers by themselves have no effect on learning and what really matters is the institutional environment that makes learning possible: the family, the teacher, the classroom, your peers.”

Measurement Matters

It’s really important to make sure the technology is effective. I have direct experience of developing world technology deployments that haven’t reached critical mass despite a lot of hard work by good people. With some initiatives like OLPC, even after 10 years (an eternity in IT, but not long in education) there isn’t any consensus. This means it’s unclear if the resources are being well spent.

I have also met some great people from other initiatives like AirJaldi and Inveneo who have done an excellent job of using geeky technology to consistently help people in the developing world.

This matters to me. These days I am developing technology building blocks (like HF Digital Voice), rather than working on direct deployments to the developing world. Not as sexy, I don’t get to sweat amongst the palm trees, or show videos of “unboxing” shiny technology in dusty locations. But for me at least, a better chance to “improve the world a little bit” using my skills and resources.

Failure is an Option

When I started Googling for recent OLPC developments I discovered many posts declaring OLPC to be a failure. I’m not so sure. It innovated in many areas, such as robust, repairable, eco-friendly IT technology purpose designed for education in the developing world. They have shipped 2.5M units, which I have never done with any of my products. It excited and motivated a lot of people (including me).

When working on the Village Telco I experienced difficult problems with interference on mesh networks and in working with closed source radio chip set vendors. This led me to ask fundamental questions about sending voice over radio and led me to my current HF Digital Voice work – which is 1000 times (60dB) more efficient than VOIP over WiFi and completely open source.

Pushing developing world education and telecommunications forward is a huge undertaking. Mistakes will be made, but without trying we learn nothing, and get no closer to solutions. So I say GO failure.

Measuring the Effectiveness of my Own Work

Let’s put the spotlight on me. Can I measure the efficacy of my own work in hard numbers? This blog gets visited by 5000 unique IPs a day (150k/month). Unique IPs is a reasonable measure for a blog, and it’s per day, so it shows some recurring utility.

OK, so how about my HF radio digital voice software? Like the OLPC project, that’s a bit harder to measure. Quite a few people trying FreeDV but an unknown number of them are walking away after an initial tinker. A few people are saying publicly it’s not as good as SSB. So “downloads”, like the number of XO laptops deployed, is not a reliable metric of the utility of my work.

However there is another measure. An end-user can directly compare the performance of FreeDV against analog SSB over HF radio. Your communication is either better or it is not. You don’t need any studies; you can determine the answer yourself in just a few minutes. So while I may not have reached my technical goals quite yet (I’m still tweaking FreeDV 700), I have a built-in way for anyone to determine if the technology I am developing is helping them.

Russell Coker: BTRFS Training

Tue, 2015-08-18 15:26

Some years ago Barwon South Water gave LUV 3 old 1RU Sun servers for any use related to free software. We gave one of those servers to the Canberra makerlab and another is used as the server for the LUV mailing lists and web site and the 3rd server was put aside for training. The servers have hot-swap 15,000rpm SAS disks – IE disks that have a replacement cost greater than the budget we have for hardware. As we were given a spare 70G disk (and a 140G disk can replace a 70G disk) the LUV server has 2*70G disks and the 140G disks (which can’t be replaced) are in the server for training.

On Saturday I ran a BTRFS and ZFS training session for the LUV Beginners’ SIG. This was inspired by the amount of discussion of those filesystems on the mailing list and the amount of interest when we have lectures on those topics.

The training went well, the meeting was better attended than most Beginners’ SIG meetings and the people who attended it seemed to enjoy it. One thing that I will do better in future is clearly documenting commands that are expected to fail and documenting how to log in to the system. The users all logged in to accounts on a Xen server and then ssh’d to root at their DomU. I think that it would have saved a bit of time if I had aliased commands like “btrfs” to “echo you must log in to your virtual server first” or made the shell prompt at the Dom0 include instructions to log in to the DomU.
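
A minimal sketch of that alias idea for the Dom0 (the file path and DomU naming are illustrative assumptions, not what was actually used):

# /etc/profile.d/training-reminder.sh on the Dom0 (hypothetical path)
# Intercept commands that students are meant to run on their own DomU
alias btrfs='echo "Please ssh to your DomU first, e.g. ssh root@<your-domU>"'
alias zpool='echo "Please ssh to your DomU first, e.g. ssh root@<your-domU>"'
# Make the prompt itself carry the reminder
PS1='[Dom0 - ssh to your DomU before running commands] \u@\h:\w\$ '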

Each user or group had a virtual machine. The server has 32G of RAM and I ran 14 virtual servers that each had 2G of RAM. In retrospect I should have configured fewer servers and asked people to work in groups, that would allow more RAM for each virtual server and also more RAM for the Dom0. The Dom0 was running a BTRFS RAID-1 filesystem and each virtual machine had a snapshot of the block devices from my master image for the training. Performance was quite good initially as the OS image was shared and fit into cache. But when many users were corrupting and scrubbing filesystems performance became very poor. The disks performed well (sustaining over 100 writes per second) but that’s not much when shared between 14 active users.
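
One way to produce cheap per-student copies like this on a BTRFS Dom0 is to reflink-copy a master disk image for each DomU; this is only a sketch of the general technique (file names and paths are assumptions, and the actual setup may have used subvolume snapshots instead):

# assumes file-backed Xen disk images stored on a BTRFS filesystem
cp --reflink=always /xen/images/master-root.img /xen/images/student01-root.img
cp --reflink=always /xen/images/master-root.img /xen/images/student02-root.img
# each copy shares extents with the master until a student writes to it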

The ZFS part of the tutorial was based on RAID-Z (I didn’t use RAID-5/6 in BTRFS because it’s not ready to use and didn’t use RAID-1 in ZFS because most people want RAID-Z). Each user had 5*4G virtual disks (2 for the OS and 3 for BTRFS and ZFS testing). By the end of the training session there was about 76G of storage used in the filesystem (including the space used by the OS for the Dom0), so each user had something like 5G of unique data.
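
For reference, a RAID-Z pool over three disks can be created along these lines (the pool name and device paths are assumptions, not necessarily the ones used on the day):

zpool create tank raidz /dev/xvdf /dev/xvdg /dev/xvdh
zpool status tank
zfs create tank/test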

We are now considering what other training we can run on that server. I’m thinking of running training on DNS and email. Suggestions for other topics would be appreciated. For training that’s not disk intensive we could run many more than 14 virtual machines, 60 or more should be possible.

Below are the notes from the BTRFS part of the training; anyone could do this on their own if they substitute 2 empty partitions for /dev/xvdd and /dev/xvde. On a Debian/Jessie system all that you need to do to get ready for this is to install the btrfs-tools package. Note that this does have some risk if you make a typo. An advantage of doing this sort of thing in a virtual machine is that there’s no possibility of breaking things that matter.

  1. Making the filesystem
    1. Make the filesystem; this makes a filesystem that spans 2 devices (note you must use the -f option if there was already a filesystem on those devices):

      mkfs.btrfs /dev/xvdd /dev/xvde
    2. Use file(1) to see basic data from the superblocks:

      file -s /dev/xvdd /dev/xvde
    3. Mount the filesystem (can mount either block device, the kernel knows they belong together):

      mount /dev/xvdd /mnt/tmp
    4. See a BTRFS df of the filesystem, shows what type of RAID is used:

      btrfs filesystem df /mnt/tmp
    5. See more information about FS device use:

      btrfs filesystem show /mnt/tmp
    6. Balance the filesystem to change it to RAID-1 and verify the change (note that some parts of the filesystem were single and RAID-0 before this change):

      btrfs balance start -dconvert=raid1 -mconvert=raid1 -sconvert=raid1 --force /mnt/tmp

      btrfs filesystem df /mnt/tmp
    7. See if there are any errors, shouldn’t be any (yet):

      btrfs device stats /mnt/tmp
    8. Copy some files to the filesystem:

      cp -r /usr /mnt/tmp
    9. Check the filesystem for basic consistency (only checks checksums):

      btrfs scrub start -B -d /mnt/tmp
  2. Online corruption
    1. Corrupt the filesystem:

      dd if=/dev/zero of=/dev/xvdd bs=1024k count=2000 seek=50
    2. Scrub again, should give a warning about errors:

      btrfs scrub start -B /mnt/tmp
    3. Check error count:

      btrfs device stats /mnt/tmp
    4. Corrupt it again:

      dd if=/dev/zero of=/dev/xvdd bs=1024k count=2000 seek=50
    5. Unmount it:

      umount /mnt/tmp
    6. In another terminal follow the kernel log:

      tail -f /var/log/kern.log
    7. Mount it again and observe it correcting errors on mount:

      mount /dev/xvdd /mnt/tmp
    8. Run a diff, observe kernel error messages and observe that diff reports no file differences:

      diff -ru /usr /mnt/tmp/usr/
    9. Run another scrub, this will probably correct some errors which weren’t discovered by diff:

      btrfs scrub start -B -d /mnt/tmp
  3. Offline corruption
    1. Umount the filesystem, corrupt the start, then try mounting it again which will fail because the superblocks were wiped:

      umount /mnt/tmp

      dd if=/dev/zero of=/dev/xvdd bs=1024k count=200

      mount /dev/xvdd /mnt/tmp

      mount /dev/xvde /mnt/tmp
    2. Note that the filesystem was not mountable due to a lack of a superblock. It might be possible to recover from this but that’s more advanced so we will restore the RAID.

      Mount the filesystem in a degraded RAID mode, this allows full operation.

      mount /dev/xvde /mnt/tmp -o degraded
    3. Add /dev/xvdd back to the RAID:

      btrfs device add /dev/xvdd /mnt/tmp
    4. Show the filesystem devices, observe that xvdd is listed twice, the missing device and the one that was just added:

      btrfs filesystem show /mnt/tmp
    5. Remove the missing device and observe the change:

      btrfs device delete missing /mnt/tmp

      btrfs filesystem show /mnt/tmp
    6. Balance the filesystem, not sure this is necessary but it’s good practice to do it when in doubt:

      btrfs balance start /mnt/tmp
    7. Umount and mount it, note that the degraded option is not needed:

      umount /mnt/tmp

      mount /dev/xvdd /mnt/tmp
  4. Experiment
    1. Experiment with the “btrfs subvolume create” and “btrfs subvolume delete” commands (which act like mkdir and rmdir).
    2. Experiment with “btrfs subvolume snapshot SOURCE DEST” and “btrfs subvolume snapshot -r SOURCE DEST” for creating regular and read-only snapshots of other subvolumes (including the root); a short worked example is sketched below.
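
As a starting point for that experiment, here is a minimal sequence (the subvolume names are illustrative only):

      btrfs subvolume create /mnt/tmp/data              # new subvolume, behaves like a directory
      cp -r /etc /mnt/tmp/data/                         # put some files in it
      btrfs subvolume snapshot /mnt/tmp/data /mnt/tmp/data-snap     # writable snapshot
      btrfs subvolume snapshot -r /mnt/tmp/data /mnt/tmp/data-ro    # read-only snapshot
      btrfs subvolume list /mnt/tmp                     # list subvolumes and snapshots
      btrfs subvolume delete /mnt/tmp/data-snap         # delete works on snapshots too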

Related posts:

  1. Starting with BTRFS Based on my investigation of RAID reliability [1] I have...
  2. BTRFS vs LVM For some years LVM (the Linux Logical Volume Manager) has...
  3. Why I Use BTRFS I’ve just had to do yet another backup/format/restore operation on...

Leon Brooks: Making good Canon LP-E6 battery-pack contacts

Tue, 2015-08-18 14:29
[Photo: battery-pack contacts]

Canon LP-E6 battery packs (such as those used in my 70D camera) have two fine connector wires used for charging them.  These seem to be a weak point, as (if left to themselves) they eventually fail to connect well, which means that they do not charge adequately, or (in the field) do not run the equipment at all.



One experimenter discovered that scrubbing them with the edge of a stiff business card helped to make them good.  So I considered something more extensive.

[Photo: with (non-Canon this time) charger contacts]



Parts: squeeze-bottle of cleaner (I use a citrus-based cleaner from PlanetArk, which seems to be able to clean almost anything off without being excessively invasive); spray-can of WD-40; cheap tooth-brush, paper towels (or tissues, or bum-fodder).

[Photo: equipment required]



Method: lightly spray cleaner onto contacts. Gently but vigorously rub along the contacts with the toothbrush. Paper-dry the contacts.

[Photo: brush head on contacts]



Lightly spray WD-40 onto contacts. Gently but vigorously rub along the contacts with toothbrush. Paper-dry the contacts.



[Photo: wider view of brush on contacts]

(optional) When thoroughly dry, add a touch of light machine oil. This wards off moisture.



This appears to be just as effective with 3rd-party battery packs.