An interesting paper on the use of the Plan 9 operating system for “Grid Computing” (widely dispersed cluster computing using resources from many places – kind of like a Beowulf over the internet) is linked from this site:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.97.122
(Yes, I’m on a ‘computing kick’ at the moment… Whatever is going to come out of Paris was decided months ago and the present show trial is pointless. So I’m mostly waiting to assess the damage on offer at the end. Oh, and wondering how many Great Lies our Dear Leader will serve up…)
It’s a pdf, 8 pages long, so easy to download. (But hard to cut / paste quotes… so you get to read it yourself ;-)
Abstract
This paper describes the use of the “Plan 9 from Bell Labs” distributed operating system as a Grid Computing infrastructure. In particular it compares solutions using the de facto standard middleware toolkit for grids, Globus, to an environment constructed using computers running the Plan 9 operating system. These environments are compared based on the features they offer in the context of grid computing: […]
Here’s the main Plan 9 page:
http://plan9.bell-labs.com/plan9/index.html
Quick summary of it:
http://www.operating-system.org/betriebssystem/_english/bs-plan9.htm
A video of a talk about Plan 9 on the R.Pi, and links to someone playing with it:
http://raspi.tv/2012/plan-9-operating-system-for-the-raspberry-pi-demonstration-by-richard-miller
As my major interest is in just the kind of software development and distributed computing that Plan 9 is geared toward, I’m now interested in trying it out a little. (The “High Concept” being a port of GIStemp and a GCM or two to the Raspberry Pi, and then a “Global Gaggle Of Pi” that could run a simulation over the internet. Who needs a 2048 core $BILLIONs supercomputer when you could have a Global Grid of millions of cores of Pi?… )
Of course, it doesn’t hurt that the logo is a bunny ;-)
Basically, it looks like a much easier way for the “typical person” to get rolling with widely distributed computing, and without as much pain and suffering as the alternatives. It also looks to be securely built from the ground up. For instance, the main Plan 9 file server at Bell Labs is outside their firewall… I like any OS that can be run “in the wild” and not get blown up, hacked, or taken down… Which also means this intersects with my other main computing interest: Secure and Private computing.
It may well take a week or three for me to get around to it, as Things To Do are stacking up at about 5 x the rate of getting them done, sigh… But it’s now “on the ToDo list”.
Interesting!
I ran ten systems on the Seti@home project in 1999 for about a year. Distributed computing has lots of potential.
I am weakening. Actually went to the raspberry site.
Almost on topic question. Regarding the subject of EMP – what do you do about a monitor?
The major risk to the monitor (or computer) is from the power lead or the ethernet/usb cables, if long enough. If long compared to the wavelength of the EMP field they can act as antennas. That obviously always applies to the power lines, so put a good surge suppressor on the wall plug, coil the excess cord length in a coil about 4-5 inches in diameter (this creates an inductance which tends to stretch out any pulse, giving the surge suppressor more time to dump the surge), and put a couple of ferrite choke cores on the power lead right at the back of the monitor.
That will help it survive an EMP surge, but no guarantees. As a fallback, go to a used computer store and get an old working monitor, wrap it in a couple of layers of newspaper or a layer of bubble wrap, then over-wrap that with a couple of layers of heavy duty aluminum foil. Place it in a box in the closet and save it for a really bad day.
EMP protection is a system problem, and to have any reliability you have to test the whole system as installed. All you can do if you don’t have that capability (which is just about everyone) is take your best shot and hope you get lucky. If your protective efforts fail then you can be pretty sure that everyone else not protected also got fried, so it won’t matter much. If you get lucky you might be the only person in the area with a working system.
Read the Plan9 pdf last spring. It appeared to have been created with the right things in mind, but I don’t have the knowledge to really evaluate it. Back to the IO communication bottleneck of your previous post. Super fast CPUs are a waste if the node communications are slow. Plan9 might be worth further examination…pg
@Bill S:
Depends on whether you mean “for storage” or “when operating”. As the Pi can be used with a 7 inch touch panel or a miniature TV with HDMI, for storage you could just wrap it and store it in a metal / iron can.
@PG.:
The utility can be good with slow communication, but not for a typical mix of jobs. Only for high compute, low data jobs like SETI searches, Golomb rulers, protein folding, etc. There’s a spectrum of needs from high data / low compute (increment hours worked in a payroll database) to near no data / lots of compute (Golomb ruler… about 100 bytes a year at length 28). The rule of thumb is for the average mix.
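To make the “near no data, lots of compute” end of that spectrum concrete, here’s a minimal sketch (my own toy illustration in Python, not anything an actual distributed project runs) of a brute-force Golomb ruler search: the input and the answer are a handful of bytes, but the number of candidate rulers explodes with every added mark, so nearly all of the “work” is raw CPU time.

```python
# Toy Golomb ruler search: a "near no data, lots of compute" job.
# A Golomb ruler is a set of integer marks where every pairwise
# difference is distinct; finding short ones is almost pure CPU work.
from itertools import combinations

def is_golomb(marks):
    """True if all pairwise differences between the marks are distinct."""
    diffs = [b - a for a, b in combinations(marks, 2)]
    return len(diffs) == len(set(diffs))

def shortest_golomb(order, max_length=60):
    """Brute-force the shortest Golomb ruler with 'order' marks.

    The result is a few bytes, but the number of candidate rulers
    grows combinatorially with the order, which is why searches at
    length 28 take years of donated CPU time."""
    for length in range(order - 1, max_length + 1):
        # Fix the first mark at 0 and the last at 'length';
        # try every choice of interior marks.
        for middle in combinations(range(1, length), order - 2):
            if is_golomb((0,) + middle + (length,)):
                return (0,) + middle + (length,)
    return None

# Order 5 finds a length-11 ruler (e.g. 0 1 4 9 11) almost instantly;
# each extra mark multiplies the work enormously while the "data"
# that would need shipping around stays tiny.
print(shortest_golomb(5))
```

Contrast that with the payroll example: each update is trivial arithmetic, but every record has to cross the network, so the communication link, not the CPU, sets the pace.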
@Larry:
At one time I had SETI installed as screen saver on dozens of computers in our shop… racked up enough work units to be ranked. Then they changed the software and I didn’t have time to do updates.. Oh Well..
I quit after a year (in the 99th percentile at the time) when Sun Microsystems started using seti@home to burn in their new systems before they put them in production. That was their response to Microsoft’s efforts to push up their numbers. They made it clear that they did not want Microsoft to be at the top of the list, so our folks would put new E10k’s on seti@home for about a week after they were built to get them burned in and make sure they were stable.
It actually was a pretty good test suite for that purpose.
I realized I had no hope of getting a higher ranking when folks like Microsoft, Compaq and Sun were burning through work units by the truckload to get bragging rights. My computation certificate says I did 2,219 cobblestones (their new corrected work unit values) of computation (1.92 quadrillion floating point operations) and was credited (along with a few others) with one interesting hit to be investigated.
All climate change – yes ALL – is 100% natural and has nothing to do with carbon dioxide. It’s obvious that rising CO2 levels have not affected temperatures this century, and there is absolutely no valid physics that any of you can produce to show why climate should be affected by CO2.
The global mean temperature varies in cycles that appear to be regulated by planetary orbits and variations in solar intensity, cosmic rays etc, which probably also relate to planetary orbits. For example, the eccentricity of Earth’s orbit has a cycle of about 100,000 years which is thought to relate to the spacing of glacial periods, this being because the annual mean distance to the Sun varies over that 100,000 year cycle. There are numerous cycles, but the two dominant ones in the space of a few thousand years have periods of about 1,000 years and 60 years. Both were rising in the 30 years to 1998, but now we have slight net cooling for 30 years, and probably about 500 years of long term cooling due to start within 100 years.
Solar intensity at the surface can also vary because of variations in cloud cover. Reflections from clouds affect the albedo by about 20%. For each 1% change (for example to 19% or 21%) there is a temperature change of about 0.9 degree. So all the climate change in the last few thousand years could have been due just to such changes in cloud cover. Clearly there are also other changes in sunspot activity, and you have to ask yourselves whether that could well explain the 1,000 year cycle. What regulates these long term cycles in sunspot activity? Well, the only things that are “regular” are planetary orbits, and it could well be that planetary magnetic fields which reach to the Sun have some effect on sunspots and possibly cosmic ray intensities which, in turn, may affect cloud formation.
It’s ALL natural and you have no proof that it could not be.
Until you understand that planetary surface temperatures are not established by direct radiation reaching such surfaces you have failed to pay due diligence. The explanation as to what really determines such temperatures on all planets is there to be read and studied here. It all started with an explanation by the brilliant 19th century physicist Josef Loschmidt who has now been proven right with modern day experiments which show that force fields do indeed create temperature gradients that are the state of maximum entropy.
Any relation to “Plan 9 from Outer Space,” widely regarded as the worst scifi film ever made?
@Bob Sykes:
The Unix world folks just love bad puns, inside references, and old campy SciFi… Rolling them all up in one reference, and naming your product with it? Priceless….
So yeah, it’s a reference, but not a relation…