Planet Linux Australia
I read this book based on the recommendation of Richard Jones, and it's really, really good. A little sci-fi, a little film noir, and very engaging. I also like that bad things happen to good people in the story -- it's gritty and unclean enough to be believable.
I don't want to ruin the book for anyone, but I really enjoyed this and have already ordered the sequels. Oh, and there's a Netflix series based on these books that I'll now have to watch too.
Tags for this post: book james_sa_corey colonization space_travel mystery aliens first_contact
Related posts: Marsbound; Downbelow Station; The Martian; The Moon Is A Harsh Mistress; Starbound; Rendezvous With Rama
This is a book I am working on, hopefully due for completion by early 2017. The purpose of the book is to explore where we are, where we are going, and how we can get there, in the broadest possible sense. Your comments, feedback and constructive criticism are welcome! The final text of the book will be freely available under a Creative Commons Attribution license. A book version will be sent to nominated world leaders, to hopefully encourage the necessary questioning of the status quo and smarter decisions into the future. Additional elements like references, graphs, images and other materials will be available in the final digital and book versions, and draft content will be published weekly. Please subscribe to the blog posts via the RSS category and/or join the mailing list for updates.
Where are we going and how do we get there? An optimistic book about our future as a species that shows how our global society is changing, what opportunities lie ahead, and what we need to collectively address if we are to create the kind of life we all want to lead. It challenges individuals, governments and corporations to critically assess the status quo, to embrace the opportunities of the new world, and to make intelligent choices for a better future.
We have seen a fundamental shift of several paradigms that underpinned the foundations of our society, but now hold us back. Like a rusty anchor that provided stability in high tide, we are now bound to a dangerous reef as the water lowers. We have seen a shift from central to distributed, from scarcity to surplus and from closed to open systems, wherein the latter of each is proving significantly more successful in the modern context. And yet, many of our assumptions are based on the default idea that centricity, scarcity and closed are the desired state. Are they?
There are many books that talk about technology and the impact it has had on our lives, but technology is only part of the story. The immense philosophical shift, particularly over the past 250 years, has created a modern perspective that all people can be influential, successful and mighty, certainly compared to our peasant ancestors who had very little control over their destinies. People — normal people — are more individually powerful than ever in the history of our species, and this has enormous consequences for where we are heading and the opportunities ahead. This distribution of power started with the novel idea that individuals might have inalienable rights, and has been realised through the dramatic transformation of the Internet and widespread access to modern technologies and communications.
How can we use this power to build a better world? Are we capable of identifying, challenging and ultimately changing the existing ideologies and systems that act to maintain a status quo established in the dark ages? We have come to a fascinating fork in our collective road where we can choose to either maintain a world that relies upon outdated models of scarcity that rely upon inequality, or we can explore new models of surplus and opportunity to see where we go next, together.
This book is in three parts and will include case studies, research and references and questions about the status quo:
- How we got here – looking at the history of modern society including our strengths, weaknesses and major turning points in getting where we are today, including the massive distribution of power from the centre to the periphery over recent centuries and decades. It will also consider the combination of human traits that have served us so well including communication, shared cumulative learning, curiosity, cooperation and competition, experimentation and a constant quest for new forms of stimulation.
- Where we are going – human nature itself hasn’t changed fundamentally and we can look at trends over time and our basic desire for ever more shiny to make some predictions about where we are heading in the short and longer term. It will also consider what great opportunities lie ahead of us such as nanotech and 3D printing to address poverty and hunger, the possibilities of human augmentation given the brain’s capability to adapt to genuinely foreign inputs, the inevitable shift from the Olympics to the Paralympics, and the shift from nationalism to transnationalism, with significant implications for politics and other traditional geopolitically defined power structures.
- How do we get there – the final part of the book will look at the artificial systems, thinking and structures we have put in place that will continue to hold us back from our potential until we address them, systemically. It will cover how the law is always behind reality, how a variety of entrenched systems of thinking present the next major philosophical hurdles to progress, how centrist competitive models are failing against distributed cooperative models, and how our ability to move forward relies on being able to let go of the past. This chapter will cover traditional thinking about property, copyright and law, capitalism and zero sum thinking, traditional belief systems, globalism and digital literacy issues.
Below is a more detailed index of draft chapters, which will be linked as they are written on this blog for your interest and feedback. Many thanks to everyone who has encouraged me in doing this, and I hope to make you all proud. Enjoy!
Foreword & Introduction
Book 1: Where did we come from
The skills, attributes and context that brought us to where we are.
- Clever monkeys – key traits that brought us to where we are
- Many hands make light work – the growth of communities and diversification of skills
- From gods to people – emergence of rationalism, science and democracy
- Emancipation or individualism – human rights, suffrage movements and liberalism
- Kings in castles to nodes in networks – the shift from centralised to distributed power
- Scarcity to surplus – prosperity and surplus changes how we behave and evolve
- The global village – coming into the 21st century, we are increasingly connected
Book 2: Where we are going
Some predictions, opportunities and analysis of where we are likely to go, based on trends and the consistent predictable human attributes explored in Book 1.
- Massive distribution of everything – things will only get further distributed, so what does this mean for how powerful individuals could become?
- Augmented humanism – wearable and embedded tech is just the first step, so what does it mean to be human and how far could we go? Why limit ourselves to replicating human limitations in technology when we could dramatically enhance ourselves?
- Restoring cooperative competition – models of cooperative competition are clearly succeeding, but how far can it go, what is the role of traditional power structures (like government) and how can we enable people rather than things?
- Challenging the bell curve – “normal” was broadly popularised and promoted with mass media (radio and television) but the Internet has laid bare our immense variety. Perhaps there is no norm in the future?
- The ghost in the machine – automation, robotics, AI and how we blend the best of technology and humans for a symbiotic future without outsourcing what makes us human. How does this change us, our lives and work as we know it?
- Competitive citizenships – companies already jurisdiction shop for the most beneficial environment, and citizens have started doing the same. With the reducing cost of travel and access to global work opportunities, nations will have to start properly competing to attract and retain citizens.
- Distributed democracy – how can our lives be more broadly represented in a transnational sense when national institutions are limited to national interests?
Book 3: How do we get there
What are the key things we need to question, and make conscious decisions about, if we are to fully explore new possibilities for the future?
- Open society, open future
- Property and commons
- Overcoming collective amnesia, tribalism and othering
- Competition and cooperation
- Religion and reality
- Economy vs society
- Nationalism vs transnationalism
Conclusion and call to action
Individuals, governments, corporations and all other human created entities, what roles, responsibilities and rights should you have into the future? What sort of future do you want for your children? What can you do about it today?
Note: the index will change over time, as the book develops
The crossover plate, which I thought was going to be the most difficult part, was completed in a day. I had some high-tensile M6 bolts floating around with one additional great feature: the bolt head is nut-shaped, giving a low clearance compared to some bolts like socket heads. The crossover is shown from the top in the image below. I first cut down the original spindle mount and sanded it flat to make the "bearing mount", as I called it. Then the crossover attaches to that, and the spindle mount attaches to the crossover.
Notice the bolts coming through to the bearing mount. The low-profile bolt heads just fit on each side of the round 80mm diameter spindle mount. I did have to do a little dremeling out of the bearing mount to fit the nuts on the other side. This was a trade-off: I wanted those bolts as far out from the centre line as possible to maximize the chance that the spindle mount would bolt on flat without interfering with the bolts that attach the crossover to the bearing mount.
A side profile is shown below. The threaded rod for the z-axis is missing in the picture; it is just a test fit. I may end up putting the spindle in and doing some "dry runs" to make sure that the steppers are happy to move the right distances with the additional weight of the spindle. I did a test run on the z-axis before I started, just resting the spindle on the old spindle and moving the z up and down.
I need to knock up a cabinet of sorts for the CNC before getting into cutting alloy. The last thing I want is alloy chips and drill spirals floating around on the floor and getting trekked into other rooms.
One thing that is not mentioned much is that the spindle and its bracket together weigh around 6-7kg. Below is the spindle hitting 24,000 rpm for the first time.
With this and some other bits a 3040 should be able to machine alloy.
For about two weeks prior and a week after presenting at the OpenStack Summit in Barcelona I had the opportunity to visit several of Europe's major high performance computing facilities, giving each a bit of a standard pitch for the HPC-Cloud hybrid system we had developed at the University of Melbourne.
A bit of tab housekeeping, but I also realise that people seem to have missed announcements, developments, etc. that have happened in the last couple of months (and boy have they been exciting). I think we definitely need something like the now-defunct MySQL Newsletter (and no, DB Weekly or NoSQL Weekly just don’t seem to cut it for me!).
MyRocks
On October 4 at the Percona Live Amsterdam 2016 event, Percona CEO Peter Zaitsev said that MyRocks is coming to Percona Server (blog). On October 6, it was also announced that MyRocks is coming to MariaDB Server 10.2 (note I created MDEV-9658 back in February 2016, and that’s a great place to follow Sergei Petrunia’s progress!).
Rick Pizzi talks about MyRocks: migrating a large MySQL dataset from InnoDB to RocksDB to reduce footprint. His blog also has other thoughts on MyRocks and InnoDB.
With MariaDB MaxScale 2.0 being relicensed under the Business Source License (from GPLv2), almost immediately there was a GPLScale fork; however, I think the more interesting/sustainable fork comes in the form of AirBnB MaxScale (GPLv2 licensed). You can read more about it in their introductory post, Unlocking Horizontal Scalability in Our Web Serving Tier.
Vitess 2.0 has been out for a bit, and a good guide is the talk at Percona Live Amsterdam 2016, Launching Vitess: How to run YouTube’s MySQL sharding engine. It is still insanely easy to get going (if you have a credit card), at their vitess.io site.
I neglected to mention my November appearances but I’ll just write trip reports for all this. December appearances are:
- ACMUG MySQL Special Event – Beijing, China – 10 December 2016 – come learn about Percona Server, MyRocks and lots more!
- A bit of a Japan tour, we will be in Osaka on the 17th, Sapporo on the 19th, and Tokyo on the 21st. A bit of talk of the various proxies as well as the various servers that exist in the MySQL ecosystem.
Looking forward to discussing MySQL and its ecosystem this December!
I was unable to edit a PDF form using Evince on Ubuntu: for some reason, some fields ended up empty once I moved the cursor out of them after entering data, while other fields worked fine.
Fortunately, I found Master PDF Editor 3 from Code Industry, which did the job perfectly. It has a free version for non-commercial use. In addition to Evince's functionality, it supports interactive instructions embedded into the PDF document, helping to fill in the form.
I have tested it on Ubuntu 16.04 and 16.10.
Changes made since previous snapshot:
- added setting reading timeout to socket based on document reading timeout
- added support for wolfssl and mbedtls libraries
- added timeout tracking for https
- removed adjustment on server weight before putting url poprank into url data
- fixed compilation without openssl
- improved OpenSSL detection
- added --enable-mcmodel option for configure
- corrected compilation flags for threadless version of libdpsearch if no apache module selected to build
- switched to CRYPTO_THREADID for OpenSSL 1.0.0 and above
- minor fixes and updates
A big adventure out in the Victorian Alps (fullsize)
So I was keen to see if, after having fun doing a number of 100km trail running events, stepping up to 160km (what the Americans call a 100, due to their use of miles) would be just as fun. So as not to take it easy, I went and entered the hardest in Australia: the Alpine Challenge in the Victorian Alps, 160km on mountain walking trails and fire roads with 7200 metres of climbing.
I had not really done enough training for this one; I expected to do around 30 hours, though would have loved to go under 28 hours. In the end I was close to expectations, after the last 60km became a slow bushwalk. Still, it is a great adventure in some of the most amazing parts of our country. I guess now I have done it I know what is needed to go better, and think I could run a much better race on that course too.
My words and photos are online in my Alpine Challenge 2016 gallery. What a big mountain adventure that was!
MariaDB Server's original goal was to be a drop-in replacement. In fact, this is how it's described (“It is an enhanced, drop-in replacement for MySQL”). We all know that it's becoming increasingly hard for that line to be used these days.
Anyhow, in March 2016 Debian's release team made the decision that going forward, MariaDB Server is what people using Debian Stretch get when they ask for MySQL (i.e. MariaDB Server is the default provider of an application that requires the use of port 3306, and provides a MySQL-like protocol).
All this has brought some interesting bug reports and discussions, so here’s a collection of links that interest me (with decisions that will affect Debian users going forward).
Connectors
- MySQL ODBC in Stretch – do follow the thread
- [debian-mysql] final decision about MySQL r-deps needed / cleaning up the MySQL mess – yes, the MySQL C++ connector is not the same as the MariaDB Connector/C. And let’s not forget the things that depend on the C++ connector, i.e. libreoffice-mysql-connector. Rene Engelhard started this excellent thread with questions that could do with answers.
- Don’t include in stretch – bug#837615 – this is about how MariaDB Server 10.0 (note the version – this matters) should be included, but MySQL 5.6 shouldn’t be.
- MariaDB 10.1? – note that Otto Kekäläinen, CEO of the MariaDB Foundation, says the plan is to skip MariaDB Server 10.1 and go straight to MariaDB Server 10.2. As of this writing, MariaDB Server 10.2 is in its first beta released 27 Sep 2016, so are we expecting a few more betas before the release candidate? History shows there were four betas for 10.1 and one release candidate, while there were three betas and two release candidates of 10.0. There is no response here as to what is gained from skipping MariaDB Server 10.1, but one can guess that this has to do with support cycles.
- default-mysql-client forces removal of mysql-server* and mysql-client* – bug#842011 – bug reporter is a bit hostile towards the package team, but the gist is that “mariadb is NOT a drop-in replacement for mysql.” Users are bound to realise this once Debian Stretch gets more mainstream use.
- [debian-mysql] Bug#840855: Bug#840855: mysql-server: MySQL 5.7? – questioning what happens to MySQL 5.7, and this is really a call to action – if you disagree, email the security and release teams now not after Stretch is released! Quoting Clint Byrum, “The release and security teams have decided that MySQL will live only in unstable for stretch due to the perceived complications with tracking security patches in MySQL.”
- [debian-mysql] About packages that depend on mysql-* / mariadb / virtual-mysql-* – in where we find the API-incompatible libmysqlclient, naming conventions, and more.
Given the impending shutdown of Persona and the lack of a clear alternative to it, I decided to write about some of the principles that guided its design and development, in the hope that it may influence future efforts in some way.
Permission-less system
There was no need for reliers (sites relying on Persona to log their users in) to ask for permission before using Persona. Just like a site doesn't need to ask for permission before creating a link to another site, reliers didn't need to apply for an API key before they got started and authenticated their users using Persona.
Similarly, identity providers (the services vouching for their users' identity) didn't have to be whitelisted by reliers in order to be useful to their users.
Federation at the domain level
Just like email, Persona was federated at the domain name level and put domain owners in control. Just like they can choose who gets to manage emails for their domain, they could:
- run their own identity provider, or
- delegate to their favourite provider.
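As a sketch of how delegation worked (from my understanding of the BrowserID specification; `example.com` and `idp.example.net` are placeholder domains), a domain that wanted another provider to handle its authentication served a small support document at `https://example.com/.well-known/browserid`:

```json
{
  "authority": "idp.example.net"
}
```

A protocol-following client would then fetch `idp.example.net`'s own support document and use that provider's public key, provisioning and authentication endpoints on behalf of `example.com`'s users.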
Site owners were also in control of the mechanism and policies involved in authenticating their users. For example, a security-sensitive corporation could decide to require 2-factor authentication for everyone or put a very short expiry on the certificates they issued.
Alternatively, a low-security domain could get away with a much simpler login mechanism (including a "0-factor" mechanism in the case of http://mockmyid.com!).
Privacy from your identity provider
While identity providers were the ones vouching for their users' identity, they didn't need to know which websites their users were visiting. This is a potential source of control or censorship, and the design of Persona was able to eliminate it.
The downside of this design, of course, is that it becomes impossible for an identity provider to provide their users with a list of all of the sites where they successfully logged in for audit purposes, something that centralized systems can provide easily.
The browser as a trusted agent
The browser, whether it had native support for the BrowserID protocol or not, was the agent that the user needed to trust. It connected reliers (sites using Persona for logins) and identity providers together and got to see all aspects of the login process.
It also held your private keys and therefore was the only party that could impersonate you. This is of course a power which it already held by virtue of its role as the web browser.
Additionally, since it was the one generating and holding the private keys, your browser could also choose how long these keys were valid, and could vary that amount of time depending on factors like a shared computer environment or Private Browsing mode.
Other clients/agents would likely be necessary as well, especially when it comes to interacting with mobile applications or native desktop applications. Each client would have its own key, but they would all be signed by the identity provider and therefore valid.
Bootstrapping a complex system requires fallbacks
Persona was a complex system which involved a number of different actors. In order to slowly roll this out without waiting on every actor to implement the BrowserID protocol (something that would have taken an infinite amount of time), fallbacks were deemed necessary:
- centralized fallback identity provider for domains without native support or a working delegation
- centralized verifier until local verification is done within authentication libraries
In addition, to lessen the burden on the centralized identity provider fallback, Persona experimented with a number of bridges to provide quasi-native support for a few large email providers.
Support for multiple identities
User research has shown that many users choose to present a different identity to different websites. An identity system that would restrict them to a single identity wouldn't work.
Persona handled this naturally by linking identities to email addresses. Users who wanted to present a different identity to a website could simply use a different email address: for example, a work address and a personal address.
No lock-in
Persona was an identity system which didn't stand between a site and its users. It exposed email addresses to sites and allowed them to control the relationship with their users.
Sites wanting to move away from Persona can use the email addresses they have to both:
- notify users of the new login system, and
- allow users to reset (or set) their password via an email flow.
Websites should not have to depend on the operator of an identity system in order to be able to talk to their users.
Short-lived certificates instead of revocation
Instead of relying on the correct use of revocation systems, Persona used short-lived certificates in an effort to simplify this critical part of any cryptographic system.
It offered three ways to limit the lifetime of crypto keys:
- assertion expiry (set by the client)
- key expiry (set by the client)
- certificate expiry (set by the identity provider)
The main drawback of such a pure expiration-based system is the increased window of time between a password change (or a similar signal that the user would like to revoke access) and the actual termination of all sessions. A short expiry can mitigate this problem, but it cannot be eliminated entirely, unlike in a centralized identity system.
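The interplay between these three lifetimes can be illustrated as follows (a minimal sketch with made-up names, not Persona's actual implementation): a login is only usable while every element of the chain is still unexpired, so its effective lifetime is bounded by the earliest of the three expiries.

```python
import time

# Illustrative sketch only -- function and variable names are invented,
# not part of the Persona/BrowserID API.
def chain_valid(now, assertion_expiry, key_expiry, cert_expiry):
    """A credential chain is valid only while all three expiries lie in the future."""
    return now < min(assertion_expiry, key_expiry, cert_expiry)

# Example: a 2-minute assertion signed by a key inside a 24-hour certificate.
issued = time.time()
assertion_expiry = issued + 120        # set by the client
key_expiry = issued + 6 * 3600         # set by the client
cert_expiry = issued + 24 * 3600       # set by the identity provider
```

Revoking access in such a system simply means refusing to issue new certificates; outstanding ones expire on their own within the window described above.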
Linux Users of Victoria (LUV) Announce: LUV Main December 2016 Meeting: HPC Linux in Europe / adjourned SGM / Intro to FreeBSD
6th Floor, 200 Victoria St. Carlton VIC 3053
Link: http://luv.asn.au/meetings/map
• Lev Lafayette, High Performance Linux in Europe
• adjourned Special General Meeting
• Peter Ross, Introduction to FreeBSD
200 Victoria St. Carlton VIC 3053 (the EPA building)
Late arrivals needing access to the building and the sixth floor please call 0490 049 589.
Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.
LUV would like to acknowledge Red Hat for their help in obtaining the venue.
Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.
December 6, 2016 - 18:30
It’s most enjoyable being part of a growing company that’s helping to make a real difference for students and our future generation.
For our physical resources, we purposely don’t keep large stock as that would make things prohibitively expensive. The trade-off is that we can’t always do instant shipments. Typically, we do order in new items when we’re running low. It’s not an entirely straightforward process, since we have to assemble kits such as the Ginger Beer Classroom Kit and the Soldering Kit ourselves, by necessity from different sources.
When a product sees a particular spike in interest, we sometimes briefly run out. Actually that’s quite normal and it happens even with companies that keep lots of stock. When out-of-stock, we can generally fulfil the order within 1-2 weeks. A brief delay, but with the advantage that you get what you want, at a reasonable price, from a trusted Australian supplier with friendly service. We believe that these aspects adequately compensate for the lack of “instant gratification”…
So where are we at right now with our physical resources stock? A brief overview:
- Blackline World Map: updated edition just in, plenty of stock
- Mirobot drawing turtle robot kit: plenty of stock (for now)
- Soldering Kit: a few
- Robot Turtles Board Game: awaiting new shipment – also getting some of the new extension cards packs
- Speed cube (3x3x3): a few
- Ginger Beer Classroom Kit: just out again – generally part of our ginger beer (non-alcoholic) cafe program
If you have any questions about any of our products, please don’t hesitate to ask! Contact us.
I wanted to remind you
of some very basic stuff
The type of thing you may forget
when dev work's getting tough
It is, just a simple thing
That many a mind perplexed
So how many of you
Know what's coming next?
If your custom content field
in your new created node
has a blank bit in the space
for a radio button to load
or your rules on field has value
doesn't show the said conditions
then the answer is the same
Go check your role permissions
If you know you've put that block
in the side bar on the right
but where its s'posed to be
is nothing but all white
Or your user lets you know
that the content type petition
isn't in their own menu
Then again it is permissions
If a visitor to the site
sees the gallery as a void
but they uploaded photos
and now they are annoyed
Or your view display is empty
On the file path position
Then you know what I will say
What about your role permissions
So now I hope that when you are
In front of your machines
doubting your state of mind
And staring at your screens
That you will think of this wee rhyme
In Dr Seuss tradition
and quick smart take yourself
to check on your permissions
Originally published at http://chalcedony.co.nz/gemsofwisdom/permissions
"Permissions by Quartz is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 Unported License."
Infoxchange, 33 Elizabeth St. Richmond
Link: http://luv.asn.au/meetings/map
Website working group
We will be planning modernisation and upgrading of the LUV website. This will include looking at what features are used and what technologies are suitable for the future.
There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.
The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)
Late arrivals, please call (0490) 049 589 for access to the venue.
LUV would like to acknowledge Infoxchange for the venue.
Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.
November 19, 2016 - 12:30
I often have folks asking how the text & video consoles work on OpenPOWER machines. Here's a bit of a rundown on how it's implemented, and what may seem a little different from x86 platforms that you may already be used to.
On POWER machines, we get the console up and working super early in the boot process. This means that we can get debug, error and state information out using text console with very little hardware initialisation, and in a human-readable format. So, we tend to use simpler devices for the console output - typically a serial UART - rather than graphical-type consoles, which require a GPU to be up and running. This keeps the initialisation code clean and simple.
However, we still want a facility for admins who are more used to a keyboard & monitor directly plugged-in to have a console facility too. More about that later though.
The majority of OpenPOWER platforms will rely on the attached baseboard management controller (BMC) to provide the UART console (as of November 2016: unless you've designed your own OpenPOWER hardware, this will be the case for you). This will be based on ASPEED's AST2400 or AST25xx system-on-chip devices, which provide a few methods of getting console data from the host to the BMC.
Between the host and the BMC there's an LPC bus. The host is the master of the LPC bus, and the BMC the slave. One of the facilities that the BMC exposes over this bus is a set of UART devices. Each of these UARTs appears as a standard 16550A register set, so having the host interface to a UART is very simple.
As the host is booting, the host firmware will initialise the UART console, and start outputting boot progress data. First, you'll see the ISTEP messages from hostboot, then skiboot's "msglog" output, then the kernel output from the petitboot bootloader.
Because the UART is implemented by the BMC (rather than a real hardware UART), we have a bit of flexibility about what happens to the console data. On a typical machine, there are four ways of getting access to the console:
- Direct physical connection: using the DB-9 RS232 port on the back of the machine;
- Over network: using the BMC's serial-over-LAN interface, using something like ipmitool [...] sol activate;
- Local keyboard/video/mouse: connected to the VGA & USB ports on the back of the machine, or
- Remote keyboard/video/mouse: using "remote display" functionality provided by the BMC, over the network.
The first option is fairly simple: the RS232 port on the machine is actually controlled by the BMC, and not the host. Typically, the BMC firmware will just transfer data between this port and the LPC UART (which the host is interacting with). Figure 1 shows the path of the console data.
The second is similar, but instead of the BMC transferring data between the RS232 port and the host UART, it transfers data between a UDP serial-over-LAN session and the host UART. Figure 2 shows the redirection of the console data from the host over LAN.
The third and fourth options are a little more complex, but basically involve the BMC rendering the UART data into a graphical format, and displaying that on the VGA port, or sending it over the network. However, there are some tricky details involved...
UART-to-VGA mirroring
Earlier, I mentioned that we start the console super-early. This happens way before any VGA devices can be initialised (in fact, we don't have PCI running; we don't even have memory running!). This means that it's not possible to get these super-early console messages out through the VGA device.
In order to be useful in deployments that use VGA-based management though, most OpenPOWER machines have functionality to mirror the super-early UART data out to the VGA port. During this process, it's the BMC that drives the VGA output, and renders the incoming UART text data to the VGA device. Figure 3 shows the flow for this, with the BMC rendering the text console to the graphical output.
In the case of remote access to the VGA device, the BMC takes the contents of this rendered graphic and sends it over the network, to a BMC-provided web application. Figure 4 illustrates the redirection to the network.
This means we have console output, but no console input. That's okay though, as this is purely to report early boot messages, rather than provide any interaction from the user.
Once the host has booted to the point where it can initialise the VGA device itself, it takes ownership of the VGA device (and the BMC relinquishes it). The first software on the host to start interacting with the video device is the Linux driver in petitboot. From there on, video output is coming from the host, rather than the BMC. Because we may have user interaction now, we use the standard host-controlled USB stack for keyboard & mouse control.
Remote VGA console follows the same pattern - the BMC captures the video data that has been rendered by the GPU, and sends it over the network. In this case, the console input is implemented by virtual USB devices on the BMC, which appear as a USB keyboard and mouse to the operating system running on the host.

Typical console output during boot
Here are a few significant points of the boot process:

3.60212|ISTEP 6. 3
4.04696|ISTEP 6. 4
4.04771|ISTEP 6. 5
10.53612|HWAS|PRESENT> DIMM=00000000AAAAAAAA
10.53612|HWAS|PRESENT> Membuf=0C0C000000000000
10.53613|HWAS|PRESENT> Proc=C000000000000000
10.62308|ISTEP 6. 6
- this is the initial output from hostboot, doing early hardware initialisation in discrete "ISTEP"s

41.62703|ISTEP 21. 1
55.22139|htmgt|OCCs are now running in ACTIVE state
63.34569|ISTEP 21. 2
63.33911|ISTEP 21. 3
[ 63.417465577,5] SkiBoot skiboot-5.4.0 starting...
[ 63.417477129,5] initial console log level: memory 7, driver 5
[ 63.417480062,6] CPU: P8 generation processor(max 8 threads/core)
[ 63.417482630,7] CPU: Boot CPU PIR is 0x0430 PVR is 0x004d0200
[ 63.417485544,7] CPU: Initial max PIR set to 0x1fff
[ 63.417946027,5] OPAL table: 0x300c0940 .. 0x300c0e10, branch table: 0x30002000
[ 63.417951995,5] FDT: Parsing fdt @0xff00000
- here, hostboot has loaded the next firmware stage, skiboot, and we're now executing that.

[ 22.120063542,5] INIT: Waiting for kernel...
[ 22.154090827,5] INIT: Kernel loaded, size: 15296856 bytes (0 = unknown preload)
[ 22.197485684,5] INIT: 64-bit LE kernel discovered
[ 22.218211630,5] INIT: 64-bit kernel entry at 0x20010000, size 0xe96958
[ 22.247596543,5] OCC: All Chip Rdy after 0 ms
[ 22.296864319,5] Free space in HEAP memory regions:
[ 22.304756431,5] Region ibm,firmware-heap free: 9b4b78
[ 22.322076546,5] Region ibm,firmware-allocs-memory@2000000000 free: 10cd70
[ 22.341542329,5] Region ibm,firmware-allocs-memory@0 free: afec0
[ 22.392470901,5] Total free: 11999144
[ 22.419746381,5] INIT: Starting kernel at 0x20010000, fdt at 0x305dbae8 (size 0x1d251)
- next, the skiboot firmware has loaded the petitboot bootloader kernel (in zImage.epapr format), and is setting up memory regions in preparation for running Linux.

zImage starting: loaded at 0x0000000020010000 (sp: 0x0000000020e94ed8)
Allocating 0x1545554 bytes for kernel ...
gunzipping (0x0000000000000000 <- 0x000000002001d000:0x0000000020e9238b)...done 0x13c0300 bytes
Linux/PowerPC load:
Finalizing device tree... flat tree at 0x20ea1520
[ 24.074353446,5] OPAL: Switch to little-endian OS
-> smp_release_cpus()
spinning_secondaries = 159
<- smp_release_cpus()
<- setup_system()
- we then get the output from the zImage wrapper, which expands the actual kernel code and executes it. In recent firmware builds, the petitboot kernel will suppress most of the Linux boot messages, so we should only see high-priority warnings or error messages.
- next up, the petitboot UI will be shown:

Petitboot (v1.2.3-a976d01)                              8335-GCA 2108ECA
──────────────────────────────────────────────────────────────────────────────
 [Disk: sda1 / 590328e2-1095-4fe7-8278-0babaa9b9ca5]
   Ubuntu, with Linux 4.4.0-47-generic (recovery mode)
   Ubuntu, with Linux 4.4.0-47-generic
   Ubuntu
 [Network: enP3p3s0f3 / 98:be:94:67:c0:1b]
   Ubuntu 14.04.x installer
   Ubuntu 16.04 installer
   test kernel
 System information
 System configuration
 Language
 Rescan devices
 Retrieve config from URL
*Exit to shell
──────────────────────────────────────────────────────────────────────────────
 Enter=accept, e=edit, n=new, x=exit, l=language, h=help
During Linux execution, skiboot will retain control of the UART (rather than exposing the LPC registers directly to the host), and provide a method for the Linux kernel to read and write to this console. That facility is provided by the OPAL_CONSOLE_READ and OPAL_CONSOLE_WRITE calls in the OPAL API.

Which one should we use?
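A write through this interface may be partial: firmware reports back how many bytes it actually consumed, and the caller retries with the remainder. As a rough sketch of that caller-side loop (the opal_console_write() function below is a local stub standing in for the real OPAL firmware entry point, which takes the length by reference and updates it; the 8-byte-per-call limit and the sink buffer are invented here purely so the retry loop has something to do):

```c
#include <stdint.h>
#include <string.h>

#define OPAL_SUCCESS 0

static char sink[256];   /* stands in for the firmware console */
static size_t sink_len;

/* Stub with OPAL_CONSOLE_WRITE-like semantics: consumes up to 8 bytes
 * per call and reports the amount consumed back through *len. */
static int64_t opal_console_write(int64_t term, uint64_t *len,
                                  const uint8_t *buf)
{
    (void)term;
    uint64_t n = *len > 8 ? 8 : *len;
    memcpy(sink + sink_len, buf, n);
    sink_len += n;
    *len = n;
    return OPAL_SUCCESS;
}

/* Drain the whole buffer, retrying until firmware has taken it all. */
int console_write_all(const uint8_t *buf, size_t count)
{
    size_t done = 0;
    while (done < count) {
        uint64_t len = count - done;
        if (opal_console_write(0, &len, buf + done) != OPAL_SUCCESS)
            return -1;
        done += len;
    }
    return 0;
}
```

The real Linux-side consumer of this facility is the hvc console driver, which layers tty semantics on top of essentially this pattern.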
We tend to prefer the text-based consoles for managing OpenPOWER machines - either the RS232 port on the machines for local access, or IPMI Serial over LAN (SOL) for remote access. Console connections then consume far less bandwidth, incur lower latency, and the console data takes a simpler path. Serial access is also more reliable during low-level debugging, as it involves fewer components of the hardware, software and firmware stacks.
That said, the VGA mirroring implementation should still work well, and is also accessible remotely with current BMC firmware implementations. If your datacenter is not set up for local RS232 connections, you may want to use VGA for local access, and SOL for remote - or whatever works best in your situation.
We'd love the diydrones community to jump aboard our testing effort for the new ArduPilot sensor system. See http://discuss.ardupilot.org/t/calling-all-testers-new-ardupilot-sensor-drivers/12741 for details.
We have just implemented a major upgrade to the ArduPilot sensor drivers for STM32 based boards (PX4v1, Pixhawk, Pixhawk2, PH2Slim, PixRacer and PixhawkMini).

The new sensor drivers are a major departure from the previous drivers, which used the PX4/Firmware drivers. The new drivers are all "in-tree" drivers, bringing a common driver layer for all our supported boards, so we now use the same drivers on all the Linux boards as we do on the PX4 variants.

We would really appreciate testing on as many different types of boards as possible. The new driver system auto-detects the board type; while we think the detection is correct, validation against a wide variety of boards would be appreciated.

Advantages of the new system
The new device driver system has a number of major advantages over the old one:
* common drivers with our Linux based boards
* significantly lower overhead (faster drivers) resulting in less CPU usage
* significantly less memory usage, leaving more room for other options
* significantly less flash usage, leaving more room for code growth
* remote access to raw SPI and I2C devices via MAVLink2 (useful for development)
* much simpler driver structure, making contributions easier (typical drivers are well under half the lines of code of the PX4 equivalent)
* support for much higher sampling rates for IMUs that support it (ICM-20608 and MPU-9250), leading to better vibration handling