Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Ben Martin: A floating shelf for tablets

Fri, 2018-09-07 18:45
The choice was between replacing the small marble table entirely or trying to "work around" it with walnut. The lower walnut tabletop is about 44cm by 55cm and is just low enough to give easy access to slide laptop(s) under the main table top. The top floating shelf is wide enough to happily accommodate two iPad-sized tablets. The top shelf and lower tabletop are attached to the backing by steel brackets which pass through to the back via four CNC-created mortises.


Cutting the mortises was interesting; I had to drop back to using a 1/2 inch cutting bit in order to service the 45mm depth of the timber. The back panel was held down with machining clamps, but toggles would have done the trick; the clamps were just what was on hand at the time. I cut the mortises through from the back using an upcut bit and the front turned out very clean without any blow out. You could probably cut yourself on the finish it was so clean.

The upcut bit doesn't make a difference in this job, but it is always good to plan and see the outcomes for the next time, when the cut will be exposed. The fine grain of walnut is great to work with on the CNC, though most of my bits are upcut bits intended for metal work.

I will likely move on to adding a headrest to the Eames chair next. But that is a story for another day.

Anthony Towns: Money Matters

Thu, 2018-09-06 17:00

I have a few things I need to write, but am still a bit too sick with the flu to put together something novel, so instead I’m going to counter-blog Rob Collins’ recent claim that Money doesn’t matter. Rob’s thoughts are similar to ones I’ve had before, but I think they’re ultimately badly mistaken.

There’s three related, but very different, ways of thinking about money: as a store of value, as a medium of exchange, and as a unit of account. In normal times, dollars (or pounds or euros) work for all three things, so it’s easy to confuse them, but when you’re comparing different moneys some are better at one than another, and when a money starts failing, it will generally fail at each purpose at different rates.

Rob says “Money isn’t wealth” — but that’s wrong. In so far as money serves as a store of value, it is wealth. That’s why having a million dollars in your bank account makes you feel wealthy. The obvious failure mode for store of value is runaway inflation, and that quickly becomes a humanitarian disaster. Money can be one way to store value, but it isn’t the only way: you can store value by investing in artwork, buying property, building a company, or anything else that you expect to be able to sell at some later date. The main difference between those forms of investment versus money is that, ideally, monetary investments have low risk (perhaps the art you bought goes out of fashion and becomes worthless, or the company goes bankrupt, but your million dollars remains a million dollars), and low variance (you won’t make any huge profits, but you won’t make huge losses either). Unlike other assets, money also tends to be very fungible — if you earn $1000, you can spend $100 and have $900 left over; but if you have an artwork worth $1000 it’s a lot harder to sell one tenth of it.

Rob follows up by saying that money is “a thing you can exchange for other things”, which is true — money is a medium of exchange. Ideally it’s cheap and efficient, hard to counterfeit, and easy to verify. This is mostly a matter of technology: pretty gems are good at these things in some ways, coins and paper notes are good in others, cheques kind of work though they’re a bit too easy to counterfeit and a bit too hard to verify, and these days computer networks make credit card systems pretty effective. Ultimately a lot of modern systems have ended up as walled gardens though, and while they’re efficient, they aren’t cheap: whether you consider the 1% fees credit card companies charge, or the 2%-4% fees paypal charges, or the 30% fees from the Apple App Store or Google Play Stores, those are all a lot larger than how much you’d lose accepting a $50 note from someone directly. I have a lot of hope that Bitcoin’s Lightning Network will eventually have a huge impact here. Note that if money isn’t wealth (that is, if it doesn’t manage to be a good store of value even in the short term), it’s not a good medium of exchange either: you can’t buy things with it because the people selling will have to immediately get rid of it or they’ll be making a loss; which is why currencies undergoing hyperinflation result in black markets where trade happens in stable currencies.

With modern technology and electronic derivatives, you could (in theory) probably avoid ever holding money. If you’re a potato farmer and someone wants to buy a potato from you, but you want to receive fertilizer for next season’s crop rather than paper money, the exchange could probably be fully automated by an online exchange so that you end up with an extra hundred grams of fertilizer in your next order, with all the details automatically worked out. If you did have such a system, you’d entirely avoid using money as a store of value (though you’d probably be using a credit account with your fertilizer supplier as a store of value), and you’d at least mostly avoid using money as a medium of exchange, but you’d probably still end up using money as a unit of account — that is, you’d still be listing the price of potatoes in dollars.

A widely accepted unit of account is pretty important — you need it in order to make contracts work, and it makes comparing different trades much easier. Compare the question “should I sell four apples for three oranges, or two apples for ten strawberries?” with “should I sell four apples for $5, or two apples for $3” and “should I buy three oranges for $5 or ten strawberries for $3?” While I suppose it’s theoretically possible to do finance and economics without a common unit of account, it would be pretty difficult.

This is a pretty key part and it’s where money matters a lot. If you have an employment contract saying you’ll be paid $5000 a month, then it’s pretty important what “$5000” is actually worth. If a few months down the track there’s a severe inflation event and that $5000 is suddenly worth significantly less, then you’ve just had a severe pay cut (eg, the Argentinian Peso dropped from 5c USD in April to 2.5c USD in September). If you’ve got a well managed currency, that usually means low but positive inflation, so you’ll instead get a 2%-5% pay cut every year — which is considered desirable by economists as it provides an automatic way to devote fewer resources to less valuable jobs, without managers having to deliberately fire people, or directly cut people’s pay. Of course, people tend to be as smart as economists, and many workers expect automatic pay rises in line with inflation anyway.

Rob’s next bit is basically summarising the concept of sticky prices: if there’s suddenly more money to go around, the economy goes weird because people aren’t able to fix prices to match the new reality quickly, causing shortages if there’s more money before there’s higher prices, or gluts (and probably a recession) if there’s less money and people can’t afford to buy all the stuff that’s around — this is what happened in the global financial crisis in 2008/9, though I don’t think there’s really a consensus on whether the blame for less money going around should be put on the government via the Federal Reserve, or the banks, or some other combination of actors.

To summarise so far: money does matter a lot. Having a common unit of account so you can give things meaningful prices is essential; having a convenient store of value that you can use for large and small amounts, and being able to easily trade it for goods and services, is a really big deal. Screwing it up hurts people directly, and can be really massively harmful. You could probably use something different for medium of exchange than unit of account (eg, a lot of places accepting cryptocurrencies use the cryptocurrency as medium of exchange, but use regular dollars for both store of value and pricing); but without a store of value you don’t have a medium of exchange, and once you’ve got a unit of account, having it also work as a store of value is probably too convenient to skip.

But all that said, money is just a tool — generally money isn’t what anyone wants, people want the things they can get with money. Rob phrases that as “resources and productivity”, which is fine; I think the economics jargon would be “real GDP” — ie, the actual stuff that goes into GDP, as opposed to the dollar figure you put on it. Things start going wonky quickly though, in particular with the phrase “If, given the people currently in our country, and what they are being paid to do today, we have enough resources, and enough labour-and-productivity to …” — this starts mixing up nominal and real terms: people expect to be paid in dollars, but resources and labour are real units. If you’re talking about allocating real resources rather than dollars, you need to balance that against paying people in real resources rather than dollars as well, because that’s what they’re going to buy with their nominal dollars.

Why does that matter? Ultimately, because it’s very easy to get the maths wrong and not have your model of the national economy balanced: you allocate some resources here, pay some money there, then forget that the people you paid will use that money to reallocate some resources. If the error’s large enough and systemic enough, you’ll get runaway inflation and all the problems that go with it.

Rob has a specific example here: an unemployed (but skilled) builder, and a homeless family (who need a house built). Why not put the two together, magic up some money to prime the system and build a house? Voila the builder has a job, and the family has a home and everyone is presumably better off. But you can do the same thing without money: give the homeless family a loaded gun and introduce them to the builder: the builder has a job, and the family get a home, and with any luck the bullet doesn’t even get used! The key problem was that we didn’t inspect the magic sufficiently: the builder doesn’t want a job, or even money, he wants the rewards that the job and the money obtain. But where do those rewards come from? Maybe we think the family will contribute to the economy once they have a roof over their heads — if so, we could commit to that: forget the gun, the family goes to a bank, demonstrates they’ll be able to earn an income in future, and takes out a loan, then goes to the builder and pays for their house, and then they get jobs and pay off their mortgage. But if the house doesn’t let the family get jobs and pay for the home, the things the builder buys with his pay have to come from somewhere, and the only way that can happen is by making everyone else in the country a little bit poorer. Do that enough, and everyone who can will move to a different country that doesn’t have that problem.

Loans are a serious answer to the problem in general: if the family is going to be able to work and pay for the house eventually, the problem isn’t one of money, it’s one of risk: whoever currently owns the land, or the building supplies, or whatever, doesn’t want to take the risk that they’ll never see anything for letting the house get built. But once you have someone with funds who is willing to take the risk, things can start happening without any change in government policies. Loaning directly to the family isn’t the only way; you could build a set of units on spec, and run a charity that finds disadvantaged families, and sets them up, and maybe provide them with training or administrative support to help them get into the workforce, at which point they can pay you back and you can either turn a profit, or help the next disadvantaged family; or maybe both.

Rob then asks himself a bunch of questions, which I’ll answer too:

  • What about the foreign account deficit? (It doesn’t matter in the first place, unless perhaps you’re anti-immigrant, and don’t want foreigners buying property)
  • What about the fact that lots of land is already owned by someone? (There’s enough land in Australia outside of Sydney/Melbourne that this isn’t an issue; I don’t have any idea what it’s like in NZ, but see Tokyo for ways of housing people on very little land if it is a problem)
  • How do we fairly get the family the house they deserve? (They don’t deserve a house; if they want a nice house, they should work and save for it. If they’re going through hard times, and just need a roof over their heads, that’s easily and cheaply done, and doesn’t need a lot of space)
  • Won’t some people just ride on the coat-tails of others? (Yes, of course they will. That’s why you target the assistance to help them survive and get back on their feet, and if they want to get whatever it is they think they deserve, they can work for it, like everyone else)
  • Isn’t this going to require taking things other people have already earnt? (Generally, no: people almost always buy houses with loans, for instance, rather than being given them for free, or buying them outright; there might be a need to raise taxes, but not to fundamentally change them, though there might be other reasons why larger reform is worthwhile)

This brings us back to the claim Rob makes at the start of his blog: that the whole “government cannot pay for healthcare” thing is nonsense. It’s not nonsense: at the extreme, government can’t pay for enough healthcare for everyone to live to 120 while feeling like they’re 30. Even paying enough for everyone to have the best possible medical care isn’t feasible: even if NZ has a uniform health care system with 100% of its economy devoted to caring for the sick and disabled, there’s going to be a specialist facility somewhere overseas that does a better job. If there isn’t a uniform healthcare system (and there won’t be, even if only due to some doctors/nurses being individually more talented), there’ll also be better and worse places to go in NZ. The reason we have worrying fiscal crises in healthcare and aged support isn’t just a matter of money that can be changed with inflation, it’s that the real economic resources we’re expecting to have don’t align with the promises we’re already making. Those resources are usually expressed in dollar terms, but that’s because having a unit of account makes talking about these things easier: we don’t have to explicitly say “we’ll need x surgeons and y administrators and z MRI machines and w beds” but instead can just collect it all and say “we’ll need x billion dollars”, and leave out a whole mass of complexity, while still being reasonably accurate.

(Similar with “education” — there are limits to how well you can educate everyone, and there’s a trade off between how many resources you might want to put into educating people versus how many resources other people would prefer. In a democracy, that’s just something that’s going to get debated. As far as land goes, on the other hand, I don’t think there’s a fundamental limit to the government taking control over land it controls, though at least in Australia I believe that’s generally considered to be against the vibe of the constitution. If you want to fairly compensate land holders for taking their land, that goes back to budget negotiations and government priorities, and doesn’t seem very interesting in the abstract)

Probably the worst part of Rob’s blog is this though: “We get 10% less things done. Big deal.” Getting 10% less things done is a disaster: for comparison, the Great Recession in the US had a GDP drop of less than half that, at -4.2% between 2007Q4 and 2009Q2, and the Great Depression was supposedly about -15% between 1929 and 1932. Also, saying “we’d want 90% of folk not working” is pretty much saying “90% of folk have nothing of value to contribute to anyone else”, because if they did, they could do that, be paid for it, and voila, they’re working. That simply doesn’t seem plausible to me, and I think things would get pretty ugly if it ended up that way despite its implausibility.

(Aside: for someone who’s against carbs, “potato farmer” as the go to example seems an interesting choice… )

Lev Lafayette: New Developments in Supercomputing

Tue, 2018-09-04 19:08

Over the past 33 years the International Supercomputing Conference (ISC) in Germany has become one of the world's major computing events, with the twice-yearly announcement of the Top500 systems, which continue to be dominated entirely by Linux systems. In June this year over 3,500 people attended ISC with a programme of tutorials, workshops and miniconferences, poster sessions, student competitions, a vast vendor hall, and numerous other events.

This presentation gives an overview of ISC and makes an attempt to cover many of the new developments and directions in supercomputing, including new systems, metrics measurement, machine learning, and HPC education. In addition, the presentation will also feature material from the HPC Advisory Council conference held in Fremantle in August.

Michael Still: Kubernetes Fundamentals: Setting up nginx ingress

Tue, 2018-09-04 15:00

I’m doing the Linux Foundation Kubernetes Fundamentals course at the moment, and I was very disappointed in the chapter on Ingress Controllers. To be honest it feels like an afterthought — there is no lab, and the provided examples don’t work if you re-type them into Kubernetes (you can’t cut and paste of course, just to add to the fun).

I found this super annoying, so I thought I’d write up my own notes on how to get nginx working as an Ingress Controller on Kubernetes.

First off, the nginx project has excellent installation resources online at github. The only wart with their instructions is that they changed the labels used on the pods for the ingress controller, which means the validation steps in the document don’t work until that is fixed. That is reported in a github issue, and there was a proposed fix (without an associated issue) that pre-dates the creation of that issue.

The basic process, assuming a baremetal Kubernetes install, is this:

$ NGINX_GITHUB="https://raw.githubusercontent.com/kubernetes/ingress-nginx"
$ kubectl apply -f $NGINX_GITHUB/master/deploy/mandatory.yaml
$ kubectl apply -f $NGINX_GITHUB/master/deploy/provider/baremetal/service-nodeport.yaml

Wait for the pods to fetch their images, and then check if the pods are healthy:

$ kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS    RESTARTS   AGE
default-http-backend-6586bc58b6-tn7l5      1/1     Running   0          20h
nginx-ingress-controller-79b7c66ff-m8nxc   1/1     Running   0          20h

That bit is mostly explained by the Linux Foundation course. Well, he links to the github page at least and then you just read the docs. The bit that isn’t well explained is how to set up ingress for a pod. This is partially because kubectl doesn’t have a command line to do this yet — you have to POST an API request to get it done instead.

First, let’s create a target deployment and service:

$ kubectl run ghost --image=ghost
deployment.apps/ghost created
$ kubectl expose deployments ghost --port=2368
service/ghost exposed

The YAML to create an ingress for this new “ghost” service looks like this:

$ cat sample_ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ghost
spec:
  rules:
  - host: ghost.10.244.2.13.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: ghost
          servicePort: 2368

Where 10.244.2.13 is the IP that my CNI assigned to the nginx ingress controller. You can look that up with a describe of the nginx ingress controller pod:

$ kubectl describe pod nginx-ingress-controller-79b7c66ff-m8nxc -n ingress-nginx | grep IP
IP:             10.244.2.13

Now we can create the ingress entry for this ghost deployment:

$ kubectl apply -f sample_ingress.yaml
ingress.extensions/ghost created

This causes the nginx configuration to get re-created inside the nginx pod by magic pixies. Now, assuming we have a route from our desktop to 10.244.2.13, we can just go to http://ghost.10.244.2.13.nip.io in a browser and be greeted by the default front page for the ghost installation (which turns out to be a publishing platform, who knew?).

To cleanup the ingress, you can use the normal “get”, “describe”, and “delete” verbs that you use for other things in kubectl, with the object type of “ingress”.
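For example, for the “ghost” ingress created above, that cleanup might look like this (standard kubectl verbs; nothing here is specific to this setup):

$ kubectl get ingress
$ kubectl describe ingress ghost
$ kubectl delete ingress ghost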

Simon Lyall: Audiobooks – August 2018

Mon, 2018-09-03 11:04

Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari

An interesting listen. Covers the history of humanity and then extrapolates ways things might go in the future. Many plausible ideas (although no doubt some huge misses). 8/10

Higher: A Historic Race to the Sky and the Making of a City by Neal Bascomb

The architects, owners & workers behind the Manhattan Trust Building, the Chrysler Building and the Empire State Building, all being built in New York at the end of the roaring 20s. Fascinating and well done. 9/10

The Invention Of Childhood by Hugh Cunningham

The story of British childhood from the year 1000 to the present. Lots of quotes (read by actors) from primary sources such as letters (which is less distracting than such readings sometimes are). 8/10

The Sign of Four by Arthur Conan Doyle – Read by Stephen Fry

Very well done reading by Fry. Story excellent of course. 8/10

My Happy Days in Hollywood: A Memoir by Garry Marshall

Memoir by writer, producer (Happy Days, etc) and director (Pretty Woman, The Princess Diaries, etc). Great stories, mostly about the positive side of the business. Very inspiring. 8/10

Napoleon by J. Christopher Herold

A biography of Napoleon with a fair amount about the history of the rest of Europe during the period thrown in. A fairly short (11 hours) book with some, but not exhaustive, detail. 7/10

Storm in a Teacup: The Physics of Everyday Life by Helen Czerski

A good popular science book linking everyday situations and objects with bigger concepts (eg coffee stains to blood tests). A fun listen. 7/10

All These Worlds Are Yours: The Scientific Search for Alien Life by Jon Willis

The author reviews recent developments in the search for life and suggests places it might be found and how missions to search them (he gives himself a $4 billion budget) should be prioritised. 8/10

Ready Player One by Ernest Cline

I’m right in the middle of the demographic for most of the references here so I really enjoyed it. Good voicing by Wil Wheaton too. Story is straightforward but all pretty fun. 8/10

Russell Coker: Suggestions for Trump Supporters

Sat, 2018-09-01 21:03

I’ve had some discussions with Trump supporters recently. Here are some suggestions for anyone who wants to have an actual debate about political issues. Note that this may seem harsh to Trump supporters. But it seems harsh to me when Trump supporters use a social event to try and push their beliefs without knowing any of the things I list in this post. If you are a Trump supporter who doesn’t do these things then please try to educate your fellow travellers, they are more likely to listen to you than to me.

Facts

For a discussion to be useful there has to be a basis in facts. When one party rejects facts there isn’t much point. Anyone who only takes their news from an ideological echo chamber is going to end up rejecting facts. The best thing to do is use fact checking sites of which Snopes [1] is the best known. If you are involved in political discussions you should regularly correct people who agree with you when you see them sharing news that is false or even merely unsupported by facts. If you aren’t correcting mistaken people on your own side then you do your own cause a disservice by allowing your people to discredit their own arguments. If you aren’t regularly seeking verification of news you read then you are going to be misled. I correct people on my side regularly, at least once a week. How often do you correct your side?

The next thing is that some background knowledge of politics is necessary. Politics is not something that you can just discover by yourself from first principles. If you aren’t aware of things like Dog Whistle Politics [2] then you aren’t prepared to have a political debate. Note that I’m not suggesting that you should just learn about Dog Whistle Politics and think you are ready to have a debate, it’s one of many things that you need to know.

Dog whistle politics is nothing new or hidden, if you don’t know about such basics you can’t really participate in a discussion of politics. If you don’t know such basics and think you can discuss politics then you are demonstrating the Dunning-Kruger effect [3].

The Southern Strategy [4] is well known by everyone who knows anything about US politics. You can think it’s a good thing if you wish and you can debate the extent to which it still operates, but you can’t deny it happened. If you are unaware of such things then you can’t debate US politics.

The Civil rights act of 1964 [5] is one of the most historic pieces of legislation ever passed in the US. If you don’t know about it then you just don’t know much about US politics. You may think that it is a bad thing, but you can’t deny that it happened, or that it happened because of the Democratic party. This was the time in US politics when the Republicans became the party of the South and the Democrats became the centrist (possibly left) party that they are today. It is ridiculous to claim that Republicans are against racism because Abraham Lincoln was a Republican. Ridiculous claims might work in an ideological echo chamber but they won’t convince anyone else.

Words Have Meanings

To communicate we need to have similar ideas of what words mean. If you use words in vastly different ways to other people then you can’t communicate with them. Some people in the extreme right claim that because the Nazi party in Germany was the “Nationalsozialistische Deutsche Arbeiterpartei” (“NSDAP”), which translates to English as “National Socialist German Workers Party”, that means that they were “socialists”. Then they claim that “socialists” are “leftist” so therefore people on the left are Nazis. That claim requires using words like “left” and “socialism” in vastly different ways to most people.

Snopes has a great article about this issue [6], I recommend that everyone read it, even those who already know that Nazis weren’t (and aren’t) on the left side of politics.

The Wikipedia page of the Unite the Right rally [7] (referenced in the Snopes article) has a photo of people carrying Nazi flags and Confederate flags. Those people are definitely convinced that Nazis were not left wing! They are also definitely convinced that people on the right side of politics (which in the US means the Republican party) support the Confederacy and oppose equal rights for Afro-American people. If you want to argue that the Republican party is the one opposed to racism then you need to come up with an explanation for how so many people who declare themselves on the right of politics got it wrong.

Here’s a local US news article about the neo-Nazi who had “commie killer” written on his helmet while beating a black man almost to death [8]. Another data point showing that Nazis don’t like people on the left.

In other news, East Germany (the German Democratic Republic) was not a democracy. North Korea (the Democratic People’s Republic of Korea) is not a democracy either. The use of “socialism” by the original Nazis shouldn’t be taken any more seriously than the recent claims by the governments of East Germany and North Korea.

Left vs right is a poor summary of political positions, the Political Compass [9] is better. While Hitler and Stalin have different positions on economics I think that citizens of those countries didn’t have very different experiences, one extremely authoritarian government is much like another. I recommend that you do the quiz on the Political Compass site and see if the people it places in similar graph positions to you are ones who you admire.

Sources of Information

If you are only using news sources that only have material you agree with then you are in an ideological echo chamber. When I recommend that someone look for other news sources what I don’t expect in response is an email analysing a single article as justification for rejecting that entire news site. I recommend sites like the New York Times as having good articles, but they don’t only have articles I agree with and they sometimes publish things I think are silly.

A news source that makes ridiculous claims such as that Nazis are “leftist” is ridiculous and should be disregarded. A news source that merely has some articles you disagree with might be worth using.

Also if you want to convince people outside your group of anything related to politics then it’s worth reading sites that might convince them. I often read The National Review [10], not because I agree with their articles (that is a rare occurrence) but because they write for rational conservatives and I hope that some of the extreme right wing people will find their ideas appealing and come back to a place where we can have useful discussions.

When evaluating news articles and news sources one thing to consider is Occam’s Razor [11]. If an article has a complex and implausible theory when a simpler theory can explain it then you should be sceptical of that article. There are conspiracies but they aren’t as common as some people believe and they are generally of limited complexity due to the difficulty people have in keeping secrets. An example of this is some of the various conspiracy theories about storage of politicians’ email. The simplest explanation (for politicians of all parties) is that they tell someone like me to “just make the email work” and if their IT staff doesn’t push back and refuse to do it without all issues being considered then it’s the IT staff at fault. Stupidity explains many things better than conspiracies. Regardless of the party affiliation, any time a politician is accused of poor computer security I’ll ask whether someone like me did their job properly.

Covering for Nazis

Decent people have to oppose Nazis. The Nazi belief system is based on the mass murder of people based on race and the murder of people who disagree with them. In Germany in the 1930s there were some people who could claim not to know about the bad things that Nazis were doing, and they could claim to only support Nazis for other reasons. Neo-Nazis are not about creating car companies like Volkswagen; all they are about is hatred. The crimes of the original Nazis are well known and well documented, it’s not plausible that anyone could be unaware of them.

Mitch McConnell has clearly stated “There are no good neo-Nazis” [12] in clear opposition to Trump. While I disagree with Mitch on many issues, this is one thing we can agree on. This is what decent people do, they work together with people they usually disagree with to oppose evil. Anyone who will support Nazis out of tribal loyalty has demonstrated the type of person they are.

Here is an article about the alt-right meeting to celebrate Trump’s victory where Richard Spencer said “hail Trump, hail our people, hail victory” while many audience members give the Nazi salute [13]. You can skip to 42 seconds in if you just want to see that part. Trump supporters try to claim it’s the “Roman salute”, but that’s not plausible given that there’s no evidence of Romans using such a salute and it was first popularised in Fascist Italy [14]. The Wikipedia page for the Nazi Salute [15] notes that saying “hail Hitler” or “hail victory” was standard practice while giving the salute. I think that it’s ridiculous to claim that a group of people offering the Hitler salute while someone says “hail Trump” and “hail victory” are anything but Nazis. I also think it’s ridiculous to claim to not know of any correlation between the alt-right and Nazis and then immediately know about the “Roman Salute” defence.

The Americans used to have a salute that was essentially the same as the Nazi salute; the Bellamy Salute was officially replaced by the hand-over-heart salute in 1942 [16]. They don’t want anything close to a Nazi salute, and no-one used one until very recently, when neo-Nazis in the US stopped wearing Klan outfits.

Every time someone makes claims about a supposed “Roman salute” explanation for Richard Spencer’s fans I wonder if they are a closet Nazi.

Anti-Semitism

One final note, I don’t debate people who are open about neo-Nazi beliefs. When someone starts talking about a “Jewish Conspiracy” or uses other Nazi phrases then the conversation is over. Nazis should be shunned. One recent conversation with a Trump supporter ended quickly after he started talking about a “Jewish conspiracy”. He tried to get me back into the debate by claiming “there are non-Jews in the conspiracy too” but I was already done with him.

Decent Trump Supporters

If you want me to believe that you are one of the decent Trump supporters a good way to start is to disclaim the horrible ideas that other Trump supporters endorse. If you can say “I believe that black people and Jews are my equal and I will not stand next to or be friends with anyone who carries a Nazi flag” then we can have a friendly discussion about politics. I’m happy to declare “I have never supported a Bolshevik revolution or the USSR and will never support such things” if there is any confusion about my ideas in that regard. While I don’t think any reasonable person would think that I supported the USSR I’m happy to make my position clear.

I’ve had people refuse to disclaim racism when asked. If you can’t clearly say that you consider people of other races to be your equal then everyone will think that you are racist.

Related posts:

  1. Coalitions In Australia we are about to have a federal election,...
  2. Senator Online I’ve been asked for my opinion of senatoronline.org.au which claims...
  3. The Meaning of Godwin’s Law A widely cited unofficial rule on the Internet is known...

Lev Lafayette: Exploring Issues in Event-Based HPC Cloudbursting

Fri, 2018-08-31 23:06

The use of cloud compute, especially for single-node tasks, can provide a more effective allocation of financial resources. The introduction of cloud-bursting to scheduling systems could ideally provide on-demand compute resources for High Performance Computing (HPC) systems, where queue wait-times are a source of user consternation.

Using experiential examples in Slurm's Cloudbursting capability (an extension of the scheduler's power management features), initial successes and bug discoveries highlight the problems of replication and latency that limit the scope of cloudbursting. Nevertheless, under such circumstances wrapper scripts for particular subsets of jobs are still considered viable; an example of this approach is indicated by MOAB/NODUS.

A presentation to HPC:AI 2018 Perth Conference

Matthew Oliver: Keystone Federated Swift – Multi-region cluster, multiple federation, access same account

Fri, 2018-08-31 19:05

Welcome to the final post in the series, it has been a long time coming. If required/requested I’m happy to delve into any of these topics deeper, but I’ll attempt to explain the situation, the best approach to take and how I got a POC working, which I am calling the brittle method. It definitely isn’t the best approach, but as it was solely done on the Swift side and as I am an OpenStack Swift dev it was the quickest and easiest for me when preparing for the presentation.

To first understand how we can build a federated environment where we have access to our account no matter where we go, we need to learn about how keystone authentication works from a Swift perspective. Then we can look at how we can solve the problem.

Swift’s Keystoneauth middleware

As mentioned in earlier posts, there isn’t any magic in the way Swift authentication works. Swift is an end-to-end storage solution and so authentication is handled via authentication middlewares. Further, a single Swift cluster can talk to multiple auth backends, which is where the `reseller_prefix` comes into play. This was the first approach I blogged about in this series.

 

There is nothing magical about how authentication works; keystoneauth has its own idiosyncrasies, but in general it simply makes a decision about whether a request should be allowed. That makes writing your own simple, and maybe an easy way around the problem, i.e. write an auth middleware that auths directly against your existing company LDAP server or authentication system.

 

To set up keystone authentication, you use keystone’s authtoken middleware, and directly afterwards in the pipeline you place the Swift keystoneauth middleware, configuring each of them in the proxy configuration:

pipeline = ... authtoken keystoneauth ... proxy-server

The authtoken middleware

Generally every request to Swift will include a token, unless it’s using tempurl, container-sync, or is going to a container that has global read enabled, but you get the point.

As the swift-proxy is a python wsgi app, the request hits the first middleware in the pipeline (left most) and works its way to the right. When it hits the authtoken middleware, the token in the request will be sent to keystone to be authenticated.

The resulting metadata, ie the user, storage_url, groups, roles etc, is dumped into the request environment and then passed to the next middleware: the keystoneauth middleware.

The keystoneauth middleware

The keystoneauth middleware checks the request environment for the metadata dumped by the authtoken middleware and makes a decision based on that. Things like:

  • If the token was one for one of the reseller_admin roles, then they have access.
  • If the user isn’t a swift user of the account/project the request is for, is there an ACL that will allow it?
  • If the user has a role that identifies them as a swift user/operator of this Swift account then great.

 

When checking to see if the user has access to the given account (Swift account), it needs to know what account the request is for. This is easily determined, as it’s defined by the path of the URL you’re hitting. The URL you send to the Swift proxy is what we call the storage URL, and it is in the form of:

http(s)://<url of proxy or proxy vip>/v1/<account>/<container>/<object>

The container and object elements are optional, as it depends on what you’re trying to do in Swift. When the keystoneauth middleware is authenticating, it’ll check that the project_id (or tenant_id) metadata dumped by authtoken, when concatenated with the reseller_prefix, matches the account in the given storage_url. For example, let’s say the following metadata was dumped by authtoken:

{
  "X_PROJECT_ID": 'abcdefg12345678',
  "X_ROLES": "swiftoperator",
  ...
}

If the reseller_prefix for keystone auth was AUTH_, and we make any member of the swiftoperator role (in keystone) a swift operator (a swift user on the account), then keystoneauth would allow access if the account in the storage URL matched AUTH_abcdefg12345678.
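As a rough sketch of that check (my simplification, not the real keystoneauth code; the environment keys simply follow the example above, and real requests also go through the reseller_admin and ACL checks listed earlier):

def keystoneauth_allows(environ, account_in_path, reseller_prefix="AUTH_"):
    # Simplified illustration of the account check, not the middleware's internals.
    project_id = environ.get("X_PROJECT_ID", "")
    roles = [r.strip() for r in environ.get("X_ROLES", "").split(",")]
    expected_account = reseller_prefix + project_id  # e.g. AUTH_abcdefg12345678
    return "swiftoperator" in roles and account_in_path == expected_account

# keystoneauth_allows({"X_PROJECT_ID": "abcdefg12345678", "X_ROLES": "swiftoperator"},
#                     "AUTH_abcdefg12345678")  ->  True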

 

When you authenticate to keystone, the object storage endpoint will point not only to the Swift endpoint (the swift proxy or swift proxy load balancer), but will also include your account, based on your project_id. More on this soon.

 

Does that make sense? Simply put, to use keystoneauth in a multi-federated environment, we just need to make sure that no matter which keystone we end up using, asking for the swift endpoint always returns the same Swift account name.

And therein lies our problem: the keystone object storage endpoint and the metadata authtoken dumps both use the project_id/tenant_id. This isn’t something that is synced or can be passed via federation metadata.

NOTE: This also means that you’d need to use the same reseller_prefix on all keystones in every federated environment. Otherwise the accounts won’t match.

 

Keystone Endpoint and Federation Side

When you add an object storage endpoint in keystone, for swift, the url looks something like:

http://swiftproxy:8080/v1/AUTH_$(tenant_id)s

 

Notice the $(tenant_id)s at the end? This is a placeholder that keystone internally will replace with the tenant_id of the project you authenticated as. $(project_id)s can also be used and maps to the same thing. And this is our problem.
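To make the substitution concrete, here is a Python-style analogue (illustrative only; keystone performs the substitution internally using its own $(tenant_id)s template syntax):

# The endpoint template with the placeholder, as configured in keystone
endpoint_template = "http://swiftproxy:8080/v1/AUTH_%(tenant_id)s"
# Keystone fills in the tenant_id of the project you authenticated as
print(endpoint_template % {"tenant_id": "75294565521b4d4e8dc7ce77a25fa14b"})
# -> http://swiftproxy:8080/v1/AUTH_75294565521b4d4e8dc7ce77a25fa14b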

When setting up federation between keystones (assuming keystone 2 keystone federation) you generate a mapping. This mapping can include the project name, but not the project_id. These ids are auto-generated, not deterministic by name, so creating the same project on different federated keystone servers will result in different project_ids. When a keystone service provider (SP) federates with a keystone identity provider (IdP) the mapping they share shows how the provider should map federated users locally. This includes creating a shadow project if a project doesn’t already exist for the federated user to be part of.

Because there is no way to sync project_ids in the mapping, the SP will create the project, which will have its own unique project_id. This means that when the federated user has authenticated, their Swift storage endpoint from keystone will be different; in essence, as far as Swift is concerned they will have access, but to a completely different Swift account. Let’s use an example: let’s say there is a project on the IdP called ProjectA.

      project_name   project_id
IdP   ProjectA       75294565521b4d4e8dc7ce77a25fa14b
SP    ProjectA       cb0d5805d72a4f2a89ff260b15629799

Here we have a ProjectA on both IdP and SP. The one on the SP would be considered a shadow project to map the federated user to. However the project_ids are different, because they are uniquely generated when the project is created on each keystone environment. Taking the Object Storage endpoint in keystone as our example before, we get:

 

      Object Storage Endpoint
IdP   http://swiftproxy:8080/v1/AUTH_75294565521b4d4e8dc7ce77a25fa14b
SP    http://swiftproxy:8080/v1/AUTH_cb0d5805d72a4f2a89ff260b15629799

So when talking to Swift you’ll be accessing different accounts, AUTH_75294565521b4d4e8dc7ce77a25fa14b and AUTH_cb0d5805d72a4f2a89ff260b15629799 respectively. This means objects you write in one federated environment will be placed in a completely different account, so you won’t be able to access them from elsewhere.

 

Interesting ways to approach the problem

Like I stated earlier the solution would simply be to always be able to return the same storage URL no matter which federated environment you authenticate to. But how?

  1. Make sure the same project_id/tenant_id is used for _every_ project with the same name, or at least the same name in the domains that the federation mapping maps to. This means direct DB hacking, so not a good solution; we should solve this in code, not make ops go hack databases.
  2. Have a unique id for projects/tenants that can be synced in federation mapping, also make this available in the keystone endpoint template mapping, so there is a consistent Swift account to use. Hey we already have project_id which meets all the criteria except mapping, so that would be easiest and best.
  3. Use something that _can_ be synced in a federation mapping. Like domain and project name. Except these don’t map to endpoint template mappings. But with a bit of hacking that should be fine.

Of the above approaches, 2 would be the best. 3 is good, except that if you pick something mutable like the project name and you ever change it, you’d now authenticate to a completely different swift account, meaning you’d have just lost access to all your old objects! You may also find yourself with grumpy Swift Ops who now need to do a potentially large data migration, or you’d be forced to never change your project name.

Option 2, being unique, won’t change, though it doesn’t look like a very memorable name if you’re using the project id. Maybe you could offer people a more memorable immutable project property to use. But to keep the change simple, being able to simply sync the project_id should get us everything we need.

 

When I was playing with this, it was for a presentation, so I had a time limit, a very strict one. So being a Swift developer and knowing the Swift code base, I hacked together a variant on option 3 that didn’t involve hacking keystone at all. Why? Because I needed a POC and didn’t want to spend most of my time figuring out the inner workings of Keystone, when I could just do a few hacks to have a complete Swift-only version. And it worked. Though I wouldn’t recommend it. Option 3 is very brittle.

 

The brittle method – Swift only side – Option 3b

Because I didn’t have time to simply hack keystone, I took a different approach. The basic idea was to let authtoken authenticate and then finish building the storage URL on the swift side using the metadata authtoken dumps into the wsgi request env, thereby modifying the way keystoneauth authenticates slightly.

Step 1 – Give the keystoneauth middleware the ability to complete the storage url

By default we assume the incoming request will point to a complete account, meaning the object storage endpoint in keystone will end with something like:

'<uri>/v1/AUTH_%(tenant_id)s'

So let’s enhance keystoneauth to have the ability, if given only the reseller_prefix, to complete the account itself. So I added a use_dynamic_reseller option.

If you enable use_dynamic_reseller then the keystoneauth middleware will pull the project_id from authtoken‘s meta-data dumped in the wsgi environment. This allows a simplified keystone endpoint in the form:

'<uri>/v1/AUTH_'

This shortcut makes configuration easier, but can only be reliably used when on your own account and providing a token. API elements like tempurl  and public containers need the full account in the path.

This still used project_id so doesn’t solve our problem, but meant I could get rid of the $(tenant_id)s from the endpoints. Here is the commit in my github fork.

Step 2 – Extend the dynamic reseller to include completing storage url with names

Next, we extend the keystoneauth middleware a little bit more. Give it another option, use_dynamic_reseller_name, to complete the account with either project_name or domain_name and project_name, but only if you’re using keystone authentication version 3.

If you are, and want to have an account based on the name of the project, then you can enable use_dynamic_reseller_name in conjunction with use_dynamic_reseller to do so. The form used for the account would be:

<reseller_prefix><project_domain_name>_<project_name>

So using our example previously, with a reseller_prefix of AUTH_, a project_domain_name of Domain and our project name of ProjectA, this would generate an account:

AUTH_Domain_ProjectA

This patch is also in my github fork.
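As a rough illustration of that account construction (a sketch of the format above, not the actual patch in the fork):

def dynamic_reseller_account(reseller_prefix, project_domain_name, project_name):
    # Sketch of the use_dynamic_reseller_name account format:
    # <reseller_prefix><project_domain_name>_<project_name>
    return "%s%s_%s" % (reseller_prefix, project_domain_name, project_name)

# Using the example above:
# dynamic_reseller_account("AUTH_", "Domain", "ProjectA") -> "AUTH_Domain_ProjectA"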

Does this work? Yes! But as I’ve already mentioned in the last section, this is _very_ brittle. It also makes it confusing to know when you need to provide only the reseller_prefix and when you need your full account name. It would be so much easier to just extend keystone to sync and create shadow projects with the same project_id. Then everything would just work without hacking.

Clinton Roy: Moving to Melbourne

Fri, 2018-08-31 17:00

Now that the paperwork has finally all been dealt with, I can announce that I’ll be moving down to Melbourne to take up a position with the Australian Synchrotron, basically a super duper x-ray machine used for research of all types. My official position is a Senior Scientific Software Engineer. I’ll be moving down to Melbourne shortly, staying with friends (you remember that offer you made, months ago?) until I find a rental near Monash Uni, Clayton.

I will be leaving behind Humbug, the computer group that basically opened up my entire career, and The Edge, SLQ, my home-away-from-home study. I do hope to be able to find replacements for these down south.

I’m looking at having a small farewell nearby soon.

A shout out to Netbox Blue for supplying all my packing boxes. Allll of them.

OpenSTEM: This Week in Australian History

Fri, 2018-08-31 15:06
The end of August and beginning of September is traditionally linked to the beginning of Spring in Australia, although the change in seasons is experienced in different ways in different parts of the country and was marked in locally appropriate ways by Aboriginal people. As a uniquely Australian celebration of Spring, National Wattle Day, celebrated […]

Pia Waugh: Mā te wā, Aotearoa

Fri, 2018-08-31 11:01

Today I have some good news and sad news. The good news is that I’ve been offered a unique chance to drive “Digital Government” Policy and Innovation for all of government, an agenda including open government, digital transformation, technology, open and shared data, information policy, gov as a platform, public innovation, service innovation and policy innovation. For those who know me, these are a few of my favourite things.

The sad news, for some folk anyway, is I need to leave New Zealand Aotearoa to do it.

Over the past 18 months (has it only been that long!) I have been helping create a revolutionary new way of doing government. We have established a uniquely cross-agency funded and governed all-of-government function, a “Service Innovation Lab”, for collaborating on the design and development of better public services for New Zealand. By taking a “life journey” approach, government agencies have a reason to work together to improve the full experience of people rather than the usual (and natural) focus on a single product, service or portfolio. The Service Innovation Lab has a unique value in providing an independent place and way to explore design-led and evidence-based approaches to service innovation, in collaboration with service providers across public, private and non-profit sectors. You can see everything we’ve done over the past year here  and from the first 10 week experiment here. I have particularly enjoyed working with and exploring the role of the Citizen Advice Bureau in New Zealand as a critical and trusted service delivery organisation in New Zealand. I’m also particularly proud of both our work in exploring optimistic futures as a way to explore what is possible, rather than just iterate away from pain, and our exploration of better rules for government including legislation as code. The next stage for the Lab is very exciting! As you can see in the 2017-18 Final Report, there is an ambitious work programme to accelerate the delivery of more integrated and more proactive services, and the team is growing with new positions opening up for recruitment in the coming weeks!

Please see the New Zealand blog (which includes my news) here

Professionally, I get most excited about system transformation. Everything we do in the Lab is focused on systemic change, and it is doing a great job at having an impact on the NZ (and global) system around it, especially for its size. But a lot more needs to be done to scale both innovation and transformation. Personally, I have a vision for a better world where all people have what they need to thrive, and I feel a constant sense of urgency in transitioning our public institutions into the 21st century, from an industrial age to the information age, so they can more effectively support society as the speed of change and complexity exponentially grows. This is going to take a rethink of how the entire system functions, especially at the policy and legislative levels.

With this in mind, I have been offered an extraordinary opportunity to explore and demonstrate systemic transformation of government. The New South Wales Department of Finance, Services and Innovation (NSW DFSI) has offered me the role of Executive Director for Digital Government, a role responsible for the all-of-government policy and innovation for ICT, digital, open, information, data, emerging tech and transformation, including a service innovation lab (DNA). This is a huge opportunity to drive systemic transformation as part of a visionary senior leadership team with Martin Hoffman (DFSI Secretary) and Greg Wells (GCIDO). I am excited to be joining NSW DFSI, and the many talented people working in the department, to make a real difference for the people of NSW. I hope our work and example will help raise the bar internationally for the digital transformation of governments for the benefit of the communities we serve.

Please see the NSW Government media release here.

One of the valuable lessons from New Zealand that I will be taking forward in this work has been in how public services can (and should) engage constructively and respectfully with Indigenous communities, not just because they are part of society or because it is the right thing to do, but to integrate important principles and context into the work of serving society. Our First Australians are the oldest cluster of cultures in the world, and we have a lot to learn from them in how we live and work today.

I want to briefly thank the Service Innovation team, each of whom is utterly brilliant and inspiring, as well as the wonderful Darryl Carpenter and Karl McDiarmid for taking that first leap into the unknown to hire me and see what we could do. I think we did well. I’m delighted that Nadia Webster will be taking over leading the Lab work and has an extraordinary team to take it forward. I look forward to collaborating between New Zealand and New South Wales, and a race to the top for more inclusive, human centred, digitally enabled and values driven public services.

My last day at the NZ Government Service Innovation Lab is the 14th September and I start at NSW DFSI on the 24th September. We’ll be doing some last celebratory drinks on the evening of the 13th September so hold the date for those in Wellington. For those in Sydney, I can’t wait to get started and will see you soon!

David Rowe: Simple Keras “Hello World” Example – Mean Removal

Thu, 2018-08-30 13:04

Inspired by the Wavenet work with Codec 2 I’m dipping my toe into the world of Deep Learning (DL) using Keras. I’ve read Deep Learning with Python (quite an enjoyable read) and set up a Linux box with a GTX graphics card that is making my teenage sons weep with jealousy.

So after a couple of days of messing about here is my first “hello world” Keras example: mean_removal.py. It might be helpful for other Keras noobs. Assuming you have all the packages installed, it runs with either Python 2:

$ python mean_removal.py

Or Python 3:

$ python3 mean_removal.py

It removes the mean from vectors, using just a single layer regression model. The script runs OK on a regular PC without a chunky graphics card.

So I start by generating vectors from random numbers with a zero mean. I then add a random offset to each sample in the vector. Here are 5 vectors with random offsets added to them:

The Keras script is then trained to estimate and remove the offsets, so the output vectors look like:

Estimating the offset is the same as finding the “mean” of the vector. Yes I know we can do that with a “mean” function, but where’s the fun in that!
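For fellow Keras noobs, a minimal sketch along the same lines might look like this (my own illustration, not the actual mean_removal.py; the vector length, offset range, learning rate and training set size are all arbitrary):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

N = 16          # samples per vector (arbitrary for this sketch)
n_vec = 10000   # number of training vectors (arbitrary)

# zero-mean random vectors, then add a random offset to every sample in each vector
x_clean = np.random.randn(n_vec, N)
x_clean -= x_clean.mean(axis=1, keepdims=True)
x_offset = x_clean + 20.0 * np.random.rand(n_vec, 1)

# a single dense layer with linear activation: just a regression model
model = Sequential()
model.add(Dense(N, activation='linear', input_dim=N))
# a modest learning rate helps it converge without NaN losses on larger input values
model.compile(loss='mse', optimizer=SGD(lr=0.005))
model.fit(x_offset, x_clean, batch_size=32, epochs=10, validation_split=0.1)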

Here are some other plots that show the training and validation measures, and error metrics at the output:



The last two plots show pretty much all of the offset is removed and it restores the original (non offset) vectors with just a tiny bit of noise. I had to wind back the learning rate to get it to converge without “NAN” losses, possibly as I’m using fairly big input numbers. I’m familiar with the idea of learning rate from NLMS adaptive filters, such as those used for my work in echo cancellation.

Deep Learning for Codec 2

My initial ambitions for DL are somewhat more modest than the sample-by-sample synthesis used in the Wavenet work. I have some problems with Vector Quantisation (VQ) in the low rate Codec 2 modes. The VQ is used to compactly describe the speech spectrum, which carries the intelligibility of the signal.

The VQ gets upset with different microphones, speakers, or minor spectral shaping like gentle high pass/low pass filtering. This shaping often causes a poor vector to be chosen, which results in crappy speech quality. The current VQ error measure can’t tell the difference between spectral features that matter and those that don’t.

So I’d like to try DL to address those issues, and train a system to say “look, this speech and this speech are actually the same. Yes I know one of them has a 3dB/octave low pass filter, please ignore that”.

As emphasised in the text book above, some feature extraction can help with DL problems. For my first pass I’ll be working on parameters extracted by the Codec 2 model (like a compact version of the speech spectrum) rather than speech samples like Wavenet. This will reduce my CPU load significantly, at the expense of speech quality, which will be limited by the unquantised Codec 2 model. But that’s OK as a first step. A notch or two up on Codec 2 at 700 bit/s would be very useful, especially if it can run on a CPU from the first two decades of the 21st century.

Mean Removal on Speech Vectors

So to get started with Keras I chose mean removal. The mean level or constant offset is like the volume or energy in a speech signal; it’s the simplest form of spectral shaping I could imagine. I trained and tested it with vectors of random numbers, using numbers in the range of the speech spectral samples that Codec 2 plays with.

It’s a bit like an equaliser: vectors with arbitrary spectral shaping go in, “flat” unshaped vectors come out. They can then be sent to a Vector Quantiser. There are probably smarter ways to do this, but I need to start somewhere.

So as a next step I tried offset removal with vectors that represent the spectrum of a 40ms speech frame:


This is pretty cool – the network was trained on random numbers but works well with real speech frames. You can also see the spectral slope I mentioned above: the speech energy gradually falls off at high frequencies. This doesn’t affect the intelligibility of the speech but tends to upset traditional Vector Quantisers. Especially mine.

Now that I have something super-basic working, the next step is to train and test networks to deal with some non-trivial spectral shaping.

Reading Further

Deep Learning with Python
WaveNet and Codec 2
Codec 2 700C, the current Codec 2 700 bit/s mode. With better VQ we can improve on this.
Codec 2 at 450 bit/s, some fine work from Thomas and Stefan, that uses a form of machine learning to synthesise 16 kHz speech from 8 kHz inputs.
FreeDV 700D, the recently released FreeDV mode that uses Codec 2 700C. A FreeDV Mode also includes a modem, FEC, protocol.
RNNoise: Learning Noise Suppression, Jean-Marc’s DL network for noise reduction. Thanks Jean-Marc for the brainstorming emails!

Michael Still: What’s missing from the ONAP community — an open design process

Thu, 2018-08-30 13:00

I’ve been thinking a fair bit about ONAP and its future releases recently. This is in the context of trying to implement a system for a client which is based on ONAP. It’s really hard though, because it’s difficult to determine how the various components of ONAP are intended to work, or interoperate.

It took me a while, but I’ve realised what’s missing here…

OpenStack has an open design process. If you want to add a new feature to Nova for example, the first step is you need to write down what the feature is intended to do, how it integrates with the rest of Nova, and how people might use it. The target audience for that document is both the Nova development team and people who operate OpenStack deployments.

ONAP has no equivalent that I can find. So for example, they say that in Casablanca they are going to implement an “AAI Enricher” to ease lookup of data from external systems in their inventory database, but I can’t find anywhere where they explain how the integration between arbitrary external systems and ONAP AAI will work.

I think ONAP would really benefit from a good hard look at their design processes and how approachable they are for people outside their development teams. The current use case proposal process (videos, conference talks, and powerpoint presentations) just isn’t great for people who are trying to figure out how to deploy their software.

Linux Users of Victoria (LUV) Announce: Software Freedom Day 2018 and LUV AGM

Wed, 2018-08-29 19:03
Start: Sep 15 2018 13:00
End: Sep 15 2018 17:00
Location: Electron Workshop, 31 Arden Street North Melbourne 3051
Link: https://www.openstreetmap.org/node/2556615434

It's time once again to get excited about all the benefits that Free and Open Source Software have given us over the past year and get together to talk about how Freedom and Openness can improve our human rights, our privacy, our security and our communities. It's Software Freedom Day!

Linux Users of Victoria is a subcommittee of Linux Australia.

September 15, 2018 - 13:00


Linux Users of Victoria (LUV) Announce: LUV September 2018 Main Meeting: New Developments in Supercomputing

Wed, 2018-08-29 19:03
Start: Sep 4 2018 18:30
End: Sep 4 2018 20:30
Location: Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053
Link: http://www.melbourne.vic.gov.au/community/hubs-bookable-spaces/kathleen-syme-lib...

PLEASE NOTE RETURN TO ORIGINAL START TIME

6:30 PM to 8:30 PM Tuesday, September 4, 2018
Training Room, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

Many of us like to go for dinner nearby after the meeting, typically at Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.

September 4, 2018 - 18:30


David Rowe: Band Pass Filter and Power Amplifier for Simple HF Data

Wed, 2018-08-29 11:05

Is it possible to move data over HF radio using very simple, low cost hardware and clever SDR software? In the last few posts (here and here) I’ve been constructing and testing building blocks for a simple HF data terminal. This post describes a few more, a 3-8 MHz Band Pass Filter (BPF) and 1W Power Amplifier (PA).

Band Pass Filter

The RTL-SDR samples at 28.8 MHz, capturing a broad chunk of spectrum. In direct mode we just sample the Q-channel, so any energy above 14.4 MHz will be aliased into our passband; e.g. both 21 and 7 MHz will appear as a 7 MHz sampled signal.

In the previous post we determined the ADC overloads at -30dBm, so we want to remove any strong signals above or near that level. One source of strong signals is broadcast band AM radio between 500 and 1600 kHz.

The use case is “100 mile” data links so I’d like the receiver to work on the 80M (3.5 MHz) as well as 40M (7.1 MHz) bands, which sets the BPF passband at 3 to 8 MHz. I hooked up my spec-an to a 40M antenna and could see AM broadcast signals peaking at -40dBm, so I set a BPF specification of > 20dB attenuation at 1.5 MHz to keep the sum of all those signals well away from the -30dBm limit. At the high frequency end I specified > 30dB attenuation at 21 MHz, to reduce any energy aliased down to 7 MHz.

I designed a cascaded High Pass/Low Pass filter using some tables from my ancient (but still excellent) copy of “RF Circuit Design”, by Chris Bowick. The Octave rtl_sdr script does the calculations for me. A spreadsheet would work well too.
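As a quick cross-check of the spec (this is not the Octave script or the Bowick table design, just an idealised model), a few lines of Python with scipy estimate how much attenuation a 3rd order Butterworth band pass with 3 and 8 MHz corners gives at the frequencies of interest:

import numpy as np
from scipy import signal

# Idealised 3rd order Butterworth band pass, 3-8 MHz corners (analog prototype).
b, a = signal.butter(3, [2 * np.pi * 3e6, 2 * np.pi * 8e6],
                     btype='bandpass', analog=True)

# Attenuation at the AM broadcast band edge, the two ham bands, and the 21 MHz alias.
for f in (1.5e6, 3.5e6, 7.1e6, 21e6):
    w, h = signal.freqs(b, a, worN=[2 * np.pi * f])
    print("%5.1f MHz: %6.1f dB" % (f / 1e6, 20 * np.log10(abs(h[0]))))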

I simulated the BPF using LTSpice, fixed a few bugs, and tweaked it for real world component values. Here is the circuit and frequency response on log and linear scales:



I soldered up the BPF Manhattan style using commercial axial 1uH inductors and ceramic capacitors, then tested it using the spec-an and tracking generator (note linear scale):

The table at the bottom shows the measured attenuation at some important frequencies. The attenuation is a bit low at 21 MHz, perhaps due to the finite Q of the real world inductors. Quite a good match to the LTSpice simulation and close enough for my experiments. The little step at around 10 MHz is a tracking generator artefact.

The next plot shows the effect of the BPF when my spec-an is connected to my 40M dipole (0 to 10MHz span). Yellow is the received signal without the filter, purple with the filter.

The big spike around 0 Hz is an artefact on the spec-an. The filter is doing a good job of nailing the AM broadcast band energy. You can see a peak around 7.4 MHz where the dipole is resonant. Actually this is a bit of a surprise to me, as I want it resonant around 7.2 MHz; I’d better check that out! At 7.2-ish the insertion loss (difference between the purple and yellow) is a few dB as per the tracking generator plot above. It’s more like 6dB at 7.4 MHz (the dipole peak); not quite sure why. An insertion loss of 3dB at 7.2 MHz is OK for my application.

Power Amplifier

A few weeks ago I hooked the rpitx to my 40M dipole and managed to demodulate the 11mW signal a few km away (over an urban channel) using a mag loop and my FT-817. I decided to build a small 1W PA to make the system usable over “100 mile” HF channels. The actual power is not that critical, as we can trade power off against bit rate. For example if a given HF channel supports 100 bit/s at 1W, we then know we can get 1000 bit/s at 10W.

Even low bit rates can be very useful if you have no other communication. A text message or Tweet, allowing for some overhead, averages about 1000 bits. So at 1000 bit/s you can send 1 txt per second, 3600 an hour, or 86,400/day. That’s very useful communication if you are in a disaster situation and want to tell family you are alive. Or perhaps live in a remote area with no other communication. Of course HF channels come and go, so the actual throughput will be less than that.
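The arithmetic is easy to check (the 1000 bits per message figure is just the rough estimate above):

bits_per_msg = 1000.0            # rough size of a text message or Tweet with overhead
for watts, bit_rate in ((1, 100), (10, 1000)):
    msgs_per_s = bit_rate / bits_per_msg
    print("%2dW %5d bit/s: %.1f msg/s, %d msg/hour, %d msg/day"
          % (watts, bit_rate, msgs_per_s, msgs_per_s * 3600, msgs_per_s * 86400))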

I explored the junk box and found a partially constructed Beach 40. I isolated the driver and PA stage and poked it with my signal generator. Turns out it had a bit too much gain (the rpitx has plenty of drive) so I ended up with this simple PA circuit:



The only spurious output I can see is the 2nd harmonic at -44 dBC, meeting ACMA specs:

The low pass filter at the output has a 3dB point at about 10 MHz which is a little high. It could be brought down a little to increase stop-band attenuation and reduce the 2nd harmonic further. I haven’t done anything about impedance matching the input, as it hits 1W (30dBm) output with 14dBm drive from the rpitx. The 1 inch square heatsink is quite warm after 10 minutes but I can still hold it. It’s not very efficient, 2.9W DC input power for 1W out, however 16dB power gain is quite good for a PA. Anyhoo, it’s a fine starting point for my experiments, we can optimise the PA later if necessary.
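For the record, the quoted efficiency and gain figures fall straight out of the numbers above:

p_out_w, p_dc_in_w = 1.0, 2.9     # RF out and DC in, from the measurements above
drive_dbm, out_dbm = 14, 30
print("efficiency: %.0f%%" % (100 * p_out_w / p_dc_in_w))   # about 34%
print("power gain: %d dB" % (out_dbm - drive_dbm))          # 16 dB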

Next Steps

OK, so I have most of the building blocks I need for some over the air HF data experiments. There was a bit of engineering involved in building the BPF and PA, but the designs are very simple and can be constructed for a few $ or even from road kill (recycled) components. We now have a very low cost HF data radio, running high performance modems, connected to a Linux computer and Wifi.

Next I will put some software together to estimate data throughput, set the system up with real antennas, and gather some experimental results over real world HF channels.

Reading Further

Rpitx and 2FSK, first part in this series.
Testing a RTL-SDR with FSK on HF, second part in this series.
rtl_sdr.m script that calculates component values for the BPF.

Gary Pendergast: Forking is a Feature

Sun, 2018-08-26 15:04

There’s a new WordPress fork called ClassicPress that’s been making some waves recently, with various members of the Twitterati swinging between decrying it as an attempt to fracture the WordPress community, to it being an unnecessary over-reaction, to it being a death knell for WordPress.

Personally, I don’t think it’s any of the above.

Some years ago, Anil Dash wrote an article on this topic (which I totally ripped forked the name from), you should read it for some context.

Forking is a Feature

While Linus Torvalds is best known as the creator of Linux, it’s one of his more geeky creations, and the social implications of its design, that may well end up being his greatest legacy. Because Linus has, in just a few short years, changed the social dynamic around forking,

Anil Dash

With that context, I genuinely applaud ClassicPress for exercising their fundamental rights under the GPL. The WordPress Bill of Rights makes it quite clear that forking is not just allowed, it’s encouraged. You can and should fork WordPress if you choose to. This isn’t a flaw in the system, this is how it’s supposed to work.

Forks should always be encouraged.

Forks are a fundamentally healthy aspect of Open Source software. A relatively recent example is the io.js fork of Node.js, which resulted in significant changes to how the Node.js project is governed and developed. WordPress has seen forks in the past, too: Lyceum was a fork that added multi-site support, before it existed in WordPress. WordPress MU was something of a sibling fork which also added multi-site support, and was ultimately merged back into WordPress.

There are examples of forks that went on to become independent projects: WordPress itself is a fork of cafelog/b2. X.org is a fork of XFree86. LibreOffice is a fork of OpenOffice. Blink is a fork of WebKit, which in turn is a fork of KHTML. MariaDB is a fork of MySQL. XBMC has been forked dozens of times. Joomla is a fork of Mambo. (Fun historical coincidence: I very nearly accepted a job offer from Miro, the company behind Mambo, just a couple of months before Joomla came into being!)

Maintaining a fork is hard, thankless work.

All of these independent forks have a common thread: they started with a group of people who were highly experienced in building the software they were forking (often comprising core developers of the original software). That’s not to say that non-core developers can’t drive a fork, but it does seem to require fairly fundamental knowledge of the strengths and weaknesses of the software in order to successfully fork it into an independent product.

From a practical perspective, I can tell you that maintaining a fork of WordPress would require an extraordinary amount of work. For example, WordPress.com effectively maintains a fork (which happens to almost exactly match the Core codebase) of WordPress. The task of maintaining this fork falls to a talented team of devops folks, who review and merge each patch.

Now, WordPress.com is really only an internal fork. To maintain a product fork of WordPress would require so much more effort. You’d need to maintain the web infrastructure to push out updates. As the fork diverges from WordPress Core, you would need to figure out how to maintain plugin and theme compatibility. You’d likely need to do your own bug and security fixes, on top of what’s merged from WordPress.

I’m not saying this to dissuade anyone from forking WordPress; rather, it’s important to go into this aware of the challenges that lie ahead. For anyone who uses a fork (whether it be a fork of WordPress, or any other software product), I’m sure the maintainer would appreciate a word of thanks for the work they’ve done to make it possible.

Dave Hall: AWS Parameter Store

Sun, 2018-08-26 01:03

Anyone with a moderate level of AWS experience will have learned that Amazon offers more than one way of doing something. Storing secrets is no exception. 

It is possible to spin up Hashicorp Vault on AWS using an official Amazon quick start guide. The down side of this approach is that you have to maintain it.

If you want an "AWS native" approach, you have 2 services to choose from. The first is Secrets Manager which, as the name suggests, provides some secrets management tools on top of the store. This includes automagic rotation of AWS RDS credentials on a regular schedule. For the first 30 days the service is free, then you start paying per secret per month, plus API calls.

There is a free option, Amazon's Systems Manager Parameter Store. This is what I'll be covering today.

Structure

It is easy when you first start out to store all your secrets at the top level. After a while you will regret this decision. 

Parameter Store supports hierarchies. I recommend using them from day one. Today I generally use /[appname]-[env]/[KEY]. After some time with this scheme I am finding that /[appname]/[env]/[KEY] feels like it will be easier to manage. IAM permissions support paths and wildcards, so either scheme will work.

If you need to migrate your secrets, use the Parameter Store namespace migration script.

Access Controls

Like most Amazon services, access to Parameter Store is controlled by IAM.

Parameter Store allows you to store your values as plain text or encrypted using a KMS key. For encrypted values the user must have grants on both the Parameter Store value and the KMS key. For consistency I recommend encrypting all your parameters.

If you have a monolith, a key per application per environment is likely to work well. If you have a collection of microservices, having a key per service per environment becomes difficult to manage. In this case share a key between several services in the same environment.

Here is an IAM policy for a Lambda function to access a hierarchy of values in Parameter Store:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadParams",
      "Effect": "Allow",
      "Action": [
        "ssm:GetParametersByPath"
      ],
      "Resource": "arn:aws:ssm:us-east-1:1234567890:parameter/my-app/dev/*"
    },
    {
      "Sid": "Decrypt",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt"
      ],
      "Resource": "arn:aws:kms:us-east-1:1234567890:key/20180823-7311-4ced-bad5-653587846973"
    }
  ]
}

To allow your developers to manage the parameters in dev you will need a policy that looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageParams",
      "Effect": "Allow",
      "Action": [
        "ssm:DeleteParameter",
        "ssm:DeleteParameters",
        "ssm:GetParameter",
        "ssm:GetParameterHistory",
        "ssm:GetParametersByPath",
        "ssm:GetParameters",
        "ssm:PutParameter"
      ],
      "Resource": "arn:aws:ssm:us-east-1:1234567890:parameter/my-app/dev/*"
    },
    {
      "Sid": "ListParams",
      "Effect": "Allow",
      "Action": "ssm:DescribeParameters",
      "Resource": "*"
    },
    {
      "Sid": "DecryptEncrypt",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:Encrypt"
      ],
      "Resource": "arn:aws:kms:us-east-1:1234567890:key/20180823-7311-4ced-bad5-653587846973"
    }
  ]
}

Amazon has great documentation on controlling access to Parameter Store and KMS.

Adding Parameters

Amazon allows you to store almost any string up to 4KB in length in Parameter Store. This gives you a lot of flexibility.

Parameter Store supports deep hierarchies. You will find this becomes annoying to manage. Use hierarchies to group your values by application and environment. Within the hierarchy use a flat structure. I recommend using lower case letters with dashes between words for your paths. For the parameter keys use upper case letters with underscores. This makes it easy to differentiate the two when searching for parameters.

Parameter Store encodes everything as strings. There may be cases where you want to store an integer as an integer, or a more complex data structure. You could use a naming convention to differentiate your different types. I found it easiest to encode everything as JSON. When pulling values from the store I JSON decode them. The downside is that strings must be wrapped in double quotes. This is offset by the flexibility of being able to encode objects and use numbers.

It is possible to add parameters to the store using 3 different methods. I generally find the AWS web console easiest when adding a small number of entries. Rather than walking you through this, Amazon have good documentation on adding values. Remember to always use "secure string" to encrypt your values.

Adding parameters via boto3 is straightforward. Once again it is well documented by Amazon.

Finally you can maintain parameters with a little bit of code. In this example I do it with Python.

import boto3

namespace = "my-app"
env = "dev"
kms_uuid = "20180823-7311-4ced-bad5-653587846973"

# Objects must be json encoded then wrapped in quotes because they're stored as strings.
parameters = {"key": '"value"', "MY_INT": 1234, "MY_OBJ": '{"name": "value"}'}

ssm = boto3.client("ssm")

for parameter in parameters:
    ssm.put_parameter(
        Name=f"/{namespace}/{env}/{parameter.upper()}",
        # Everything must go in as a string.
        Value=str(parameters[parameter]),
        Type="SecureString",
        KeyId=kms_uuid,
        # Use with caution.
        Overwrite=True,
    )

Using Parameters

I have used Parameter Store from Python and the command line. It is easier to use it from Python.

My example assumes that it is a Lambda function running with the policy from earlier. The function is called my-app-dev. This is what my code looks like:

import json

import boto3


def load_params(namespace: str, env: str) -> dict:
    """Load parameters from SSM Parameter Store.

    :namespace: The application namespace.
    :env: The current application environment.
    :return: The config loaded from Parameter Store.
    """
    config = {}
    path = f"/{namespace}/{env}/"
    ssm = boto3.client("ssm", region_name="us-east-1")
    more = None
    args = {"Path": path, "Recursive": True, "WithDecryption": True}
    while more is not False:
        if more:
            args["NextToken"] = more
        params = ssm.get_parameters_by_path(**args)
        for param in params["Parameters"]:
            key = param["Name"].split("/")[3]
            config[key] = json.loads(param["Value"])
        more = params.get("NextToken", False)
    return config

If you want to avoid loading your config each time your Lambda function is called you can store the results in a global variable. This leverages Amazon's feature that doesn't clear global variables between function invocations. The catch is that your function won't pick up parameter changes without a code deployment. Another option is to put in place logic for periodic purging of the cache.
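For example, something along these lines (get_config and the TTL value are hypothetical names for illustration, reusing load_params from above):

import time

_CONFIG = None
_LOADED_AT = 0.0
_TTL_SECONDS = 300  # refresh at most every 5 minutes


def get_config(namespace: str = "my-app", env: str = "dev") -> dict:
    """Return cached config, reloading from Parameter Store when the TTL expires."""
    global _CONFIG, _LOADED_AT
    if _CONFIG is None or time.time() - _LOADED_AT > _TTL_SECONDS:
        _CONFIG = load_params(namespace, env)
        _LOADED_AT = time.time()
    return _CONFIG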

On the command line things are a little harder to manage if you have more than 10 parameters. To export a small number of entries as environment variables, you can use this one liner:

$(aws ssm get-parameters-by-path --with-decryption --path /my-app/dev/ | jq -r '.Parameters[] | "export " + (.Name | split("/")[3] | ascii_upcase | gsub("-"; "_")) + "=" + .Value + ";"')

Make sure you have jq installed and the AWS cli installed and configured.

Conclusion

Amazon's Systems Manager Parameter Store provides a secure way of storing and managing secrets for your AWS based apps. Unlike Hashicorp Vault, Amazon manages everything for you. If you don't need the more advanced features of Secrets Manager you don't have to pay for them. For most users Parameter Store will be adequate.

Michael Still: Learning from the mistakes that even big projects make

Fri, 2018-08-24 15:00

The following is a blog post version of a talk presented at pyconau 2018. Slides for the presentation can be found here (as Microsoft powerpoint, or as PDF), and a video of the talk (thanks NextDayVideo!) is below:

 

OpenStack is an orchestration system for setting up virtual machines and other associated virtual resources, such as networks and storage, on clusters of computers. At a high level, OpenStack is just configuring existing facilities of the host operating system — there isn’t really a lot of difference between OpenStack and a room full of system admins frantically resolving tickets requesting that virtual machines be set up. The only real difference is scale and predictability.

To do its job, OpenStack needs to be able to manipulate parts of the operating system which are normally reserved for administrative users. This talk is the story of how OpenStack has done that thing over time, what we learnt along the way, and what I’d do differently if I had my time again. Lots of systems need to do these things, so even if you never use OpenStack hopefully there are things to be learnt here.

That said, someone I respect suggested last weekend that good conference talks are actionable. A talk full of OpenStack war stories isn’t actionable, so I’ve spent the last week re-writing this talk to hopefully be more of a call to action than just an interesting story. I apologise for any mismatch between the original proposal and what I present here that might therefore exist.

Back to the task at hand though — providing control of virtual resources to untrusted users. OpenStack has gone through several iterations of how it thinks this should be done, so perhaps it’s illustrative to start by asking how other similar systems achieve this. There are lots of systems that have a requirement to configure privileged parts of the host operating system. The most obvious example I can think of is Docker. How does Docker do this? Well… it’s actually not all that pretty. Docker presents its API over a unix domain socket by default in order to limit control to local users (you can of course configure this). So to provide access to Docker, you add users to the docker group, which owns that domain socket. The Docker documentation warns that “the docker group grants privileges equivalent to the root user“. So that went well.

Docker is really an example of the simplest way of solving this problem — by not solving it at all. That works well enough for systems where you can tightly control the users who need access to those privileged operations — in Docker’s case by making them have an account in the right group on the system and logging in locally. However, OpenStack’s whole point is to let untrusted remote users create virtual machines, so we’re going to have to do better than that.

The next level up is to do something with sudo. The way we all use sudo day to day, you allow users in the sudoers group to become root and execute any old command, with a configuration entry that probably looks a little like this:

# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL

Now that config entry is basically line noise, but it says “allow members of the group called sudo, on any host, to run any command as root”. You can of course embed this into your python code using subprocess.call() or similar. On the security front, it’s possible to do a little bit better than a “nova can execute anything” entry. For example:

%sudo ALL=/bin/ls

This says that the sudo group on all hosts can execute /bin/ls with any arguments. OpenStack never actually specified the complete list of commands it executed. That was left as a job for packagers, which of course meant it wasn’t done well.

So there’s our first actionable thing — if you assume that someone else (packagers, the ops team, whoever) is going to analyse your code well enough to solve the security problem that you can’t be bothered solving, then you have a problem. Now, we weren’t necessarily deliberately punting here. It’s obvious to me how to grep the code for commands run as root to add them to a sudo configuration file, but that’s unfair. I wrote some of this code, I am much closer to it than a system admin who just wants to get the thing deployed.

We can of course do better than just raw sudo. Next we tried a thing called rootwrap, which was mostly an attempt to provide a better boundary around exactly what commands you can expect an OpenStack binary to execute. So for example, maybe it’s ok for me to read the contents of a configuration file specific to a virtual machine I am managing, but I probably shouldn’t be able to read /etc/shadow or whatever. We can do that by doing something like this:

sudo nova-rootwrap /etc/nova/rootwrap.conf /bin/ls /etc

Where nova-rootwrap is a program which takes a configuration file and a command line to run. The contents of the configuration file are used to determine if the command line should be executed.

Now we can limit the sudo configuration file to only needing to be able to execute nova-rootwrap. I thought about putting in a whole bunch of slides about exactly how to configure rootwrap, but then I realised that this talk is only 25 minutes and you can totally google that stuff.

So instead, here’s my second actionable thing… Is there a trivial change you can make which will dramatically improve security? I don’t think anyone would claim that rootwrap is rocket science, but it improved things a lot — deployers didn’t need to grep out the command lines we executed any more, and we could do things like specify what paths we were allowed to do things in. Are there similarly trivial changes that you can make to improve your world?

But wait! Here’s my third actionable thing as well — what are the costs of your design? Some of these are obvious — for example with this design executing something with escalated permissions causes us to pay to fork a process. In fact it’s worse with rootwrap, because we pay to fork, start a python interpreter to parse a configuration file, and then fork again for the actual binary we wanted in the first place. That cost adds up if you need to execute many small commands, for example when plugging in a new virtual network interface. At one point we measured this for network interfaces and the costs were in the tens of seconds per interface.

There is another cost though which I think is actually more important. The only way we have with this mechanism to do something with escalated permissions is to execute it as a separate process. This is a horrible interface and forces us to do some really weird things. Let’s checkout some examples…

Which of the following commands are reasonable?

shred -n3 -sSIZE PATH
touch PATH
rm -rf PATH
mkdir -p PATH

These are just some examples, there are many others. The first is probably the most reasonable. It doesn’t seem wise to me for us to implement our own data shredding code, so using a system command for that seems reasonable. The other examples are perhaps less reasonable — the rm one is particularly scary to me. But none of these are the best example…

How about this one?

utils.execute('tee',
              ('/sys/class/net/%s/bridge/multicast_snooping' %
               br_name),
              process_input='0',
              run_as_root=True,
              check_exit_code=[0, 1])

Some commentary first. This code existed in the middle of a method that does other things. It’s one of five command lines that method executes. What does it do?

It’s actually not too bad. Using root permissions, it writes a zero to the multicast_snooping sysctl for the network bridge being set up. It then checks the exit code and raises an exception if it’s not 0 or 1.

That said, it’s also horrid. In order to write a single byte to a sysctl as root, we are forced to fork, start a python process, read a configuration file, and then fork again. For an operation that in some situations might need to happen hundreds of times for OpenStack to restart on a node.

This is how we get to the third way that OpenStack does escalated permissions. If we could just write python code that ran as root, we could write this instead:

with open(('/sys/class/net/%s/bridge/multicast_snooping' %
           br_name), 'w') as f:
    f.write('0')

It’s not perfect, but it’s a lot cheaper to execute and we could put it in a method with a helpful name like “disable multicast snooping” for extra credit. Which brings us to…

Hire Angus Lees and make him angry. Angus noticed this problem well before the rest of us. We were all lounging around basking in our own general cleverness. What Angus proposed is that instead of all this forking and parsing and general mucking around, we just start a separate process at startup with special permissions, and then send it commands to execute.

He could have done that with a relatively horrible API, for example just sending command lines down the pipe and getting their responses back to parse, but instead he implemented a system of python decorators which let us call a method which is marked up as saying “I want to run as root!”.

So here’s the destination in our journey, how we actually do that thing in OpenStack now:

@nova.privsep.sys_admin_pctxt.entrypoint
def disable_multicast_snooping(bridge):
    path = ('/sys/class/net/%s/bridge/multicast_snooping' %
            bridge)
    if not os.path.exists(path):
        raise exception.FileNotFound(file_path=path)
    with open(path, 'w') as f:
        f.write('0')

The decorator before the method definition is a bit opaque, but basically says “run this thing as root”, and the rest is a method which can be called from anywhere within our code.

There are a few things you need to do to set up privsep, but I don’t have time in this talk to discuss the specifics. Effectively you need to arrange for the privsep helper to start with escalated permissions, and you need to move the code which will run with one of these decorators to a sub path of your source tree to stop other code from accidentally being escalated. privsep is also capable of running with more than one set of permissions — it will start a helper for each set. That’s what this decorator is doing, specifying what permissions we need for this method.
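To give a flavour of that setup, a privsep context definition looks something like this sketch (loosely based on how Nova declares its contexts in nova/privsep/__init__.py; treat the prefix, config section name and capability list as illustrative rather than exact):

from oslo_privsep import capabilities
from oslo_privsep import priv_context

# The context the @entrypoint decorator above hangs off. Each context gets its
# own long-lived helper process, started with only the listed Linux capabilities.
sys_admin_pctxt = priv_context.PrivContext(
    'nova',
    cfg_section='nova_sys_admin',
    pypath=__name__ + '.sys_admin_pctxt',
    capabilities=[capabilities.CAP_NET_ADMIN,
                  capabilities.CAP_SYS_ADMIN],
)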

And here we land at my final actionable thing. Make it easy to do the right thing, and hard to do the wrong thing. Rusty Russell used to talk about this at linux.conf.au when he was going through a phase of trying to clean up kernel APIs — it’s important that your interfaces make it obvious how to use them correctly, and make it hard to use them incorrectly.

In the example used for this talk, having command lines executed as root meant that the prevalent example of how to do many things was a command line. So people started doing that even when they didn’t need escalated permissions — for example calling mkdir instead of using our helper function to recursively make a set of directories.

We’ve cleaned that up, but we’ve also made it much much harder to just drop a command line into our code base to run as root, which will hopefully stop some of this problem recurring in the future. I don’t think OpenStack has reached perfection in this regard yet, but we continue to improve a little each day and that’s probably all we can hope for.

privsep can be used for non-OpenStack projects too. There’s really nothing specific about most of OpenStack’s underlying libraries in fact, and there’s probably things there which are useful to you. In fact the real problem is working out what is where because there’s so much of it.

One final thing — privsep makes it possible to specify the exact permissions needed to do something. For example, setting up a network bridge probably doesn’t need “read everything on the filesystem” permissions. We originally did that, but stepped back to using a single escalated permissions set that maps to what you get with sudo, because working out what permissions a single operation needed was actually quite hard. We were trying to lower the barrier for entry for doing things the right way. I don’t think I really have time to dig into that much more here, but I’d be happy to chat about it sometime this weekend or on the Internet later.

So in summary:

  • Don’t assume someone else will solve the problem for you.
  • Are there trivial changes you can make that will drastically improve security?
  • Think about the costs of your design.
  • Hire smart people and let them be annoyed about things that have always “just been that way”. Let them fix those things.
  • Make it easy to do things the right way and hard to do things the wrong way.

I’d be happy to help people get privsep into their code, and its very usable outside of OpenStack. There are a couple of blog posts about that on my site at http://www.madebymikal.com/?s=privsep, but feel free to contact me at mikal@stillhq.com if you’d like to chat.

Julien Goodwin: Custom output pods for the Stanford Research CG635 Clock Generator

Thu, 2018-08-23 17:03
As part of my previously mentioned side project, the ability to replace crystal oscillators in a circuit with a higher quality frequency reference is really handy, as it lets me eliminate a bunch of uncertainty from some test setups.

A simple function generator is the classic way to handle this, although if you need square wave output it quickly gets hard to find options, with arbitrary waveform generators (essentially just DACs) being the common choice. If you can get away with just sine wave output an RF synthesizer is the other main option.

While researching these I discovered the CG635 Clock Generator from Stanford Research, and some time later picked one of these up used.

As well as being a nice square wave generator at arbitrary voltages these also have another set of outputs on the rear of the unit on an 8p8c (RJ45) connector, in both RS422 (for lower frequencies) and LVDS (full range) formats, as well as some power rails to allow a variety of less common output formats.

All I needed was 1.8V LVCMOS output. I could get that from the front panel output, but I'd then need a coax tail on my boards, and would potentially run into voltage rail issues, so I wanted to use the pod output instead. Unfortunately none of the pods available from Stanford Research do LVCMOS output, so I'd have to make my own, which I did.

The key chip in my custom pod is the TI SN65LVDS4, a 1.8v capable single channel LVDS receiver that operates at the frequencies I need. The only downside is this chip is only available in a single form factor, a 1.5mm x 2mm 10 pin UQFN, which is far too small to hand solder with an iron. The rest of the circuit is just some LED indicators to signal status.


Here's a rendering of the board from KiCad.

Normally "not hand solderable" for me has meant getting the board assembled, however my normal assembly house doesn't offer custom PCB finishes, and I wanted these to have white solder mask with black silkscreen as a nice UX when I go to use them. So instead I decided to try my hand at skillet reflow, as it's a nice option given the space I've got in my tiny apartment (the classic tutorial on this from SparkFun is a good read if you're interested). Instead of just a simple plate used for cooking, you can now buy hot plates with what are essentially just soldering iron temperature controllers, sold as pre-heaters, making it easier to come close to a normal soldering profile.

Sadly, actually acquiring the hot plate turned into a bit of a mess, the first one I ordered in May never turned up, and it wasn't until mid-July that one arrived from a different supplier.

Because of the aforementioned lack of space, instead of using stencils I simply hand-applied (leaded) paste, without even an assist tool (which I probably will acquire for next time), then hand-mounted the components and dropped them on the plate to reflow. I had one resistor turn 90 degrees, and a few bridges from excessive paste, but for a first attempt I was really happy.


Here's a photo of the first two just after being taken off the hot plate.

Once the reflow was complete it was time to start testing, and this was where I ran into my biggest problems.

The two big problems were with the power supply I was using, and with my oscilloscope.

The power supply (a Keithley 228 Voltage/Current source) is from the 80's (Keithley's "BROWN" era), and while it has nice specs, doesn't have the most obvious UI. Somehow I'd set it to limit at 0mA output current, and if you're not looking at the segment lights it's easy to miss. At work I have an EEZ H24005 which also resets the current limit to zero on clear, however it's much more obvious when limiting, and a power supply with that level of UX is now on my "to buy" list.

The issues with my scope were much simpler. Currently I only have an old Rigol DS1052E scope, and while it works fine it is a bit of a pain to use, but ultimately I made a very simple mistake while testing. I was feeding in a trigger signal direct from the CG635's front outputs, and couldn't figure out why the generator was putting out such a high voltage (implausibly so). To cut the story short, I'd simply forgotten that the scope was set for use with 10x probes, and once I realised that everything made rather more sense. An oscilloscope with auto-detection for 10x probes, as well as a bunch of other features I want in a scope (much bigger screen for one), has now been ordered, but won't arrive for a while yet.

Ultimately the boards work fine, but until the new scope arrives I can't determine signal quality of them, but at least they're ready for when I'll need them, which is great for flow.