Ding-A-Lings kill Ringling’s

It’s been on all the major news shows, so I’m sure most folks have heard this already.

Ringling Brothers Barnum & Bailey Circus is ending.

http://www.npr.org/sections/thetwo-way/2017/01/15/509903805/after-146-years-ringling-bros-and-barnum-bailey-circus-to-shut-down

After 146 Years, Ringling Bros. And Barnum & Bailey Circus To Shut Down

January 15, 20171:04 AM ET

Now you might think that a business model that worked fine for nearly a Century & 1/2 would be something you can rely on. You would be right. So why did it break? IMHO they were challenged by other venues (from Cirque du Soleil doing great ‘center ring’ shows nationwide to zoos getting their acts together), but the final straw was a mix of rising costs (look at the ‘production qualities’ of their current center ring shows…) and the demise of the Elephants.

The Elephants on parade were iconic of the brand, on all their images everywhere. They were right up there with Tigers as The Big Draw. Remove the animals from the circus and it isn’t a circus anymore. It’s a traveling vaudeville show with nauseating carnival rides.

Feld announced the news on the company website Saturday night, citing declining ticket sales — which dipped even lower as the company retired its touring elephants.

“This, coupled with high operating costs, made the circus an unsustainable business for the company,” Feld says.

Ringling has been phasing out elephants as a result of shifting public tastes and criticism from animal rights groups over the well-being of the animals.

Now how hard is it to make a connection between ‘phasing out FOO’ and “dropping ticket sales”? Really? I’m doing FOO and it hurts when I do FOO. Golly, maybe I’ll do more FOO! /sarc;

That’s the depth of their reasoning skill.

A TV interview had a Circus Representative saying ~”When we got rid of the elephants, ticket sales dropped a lot more than we expected”. Well Duh! You were seen as gutting your show to please the PC Police. I’d not let my kid see that kind of shameful knuckle under either. You can make a good product that people want to buy, or you can be Politically Correct and go out of business. You choose “piss off my customers to please the idiots who don’t attend”.

Elephants had been a circus mainstay almost as long as the circus itself has been a staple of American entertainment, since Phineas Taylor Barnum introduced Jumbo, an Asian elephant in 1882.

But before the traveling exhibition evolved into a regular destination for wholesome family fun, Barnum “made a traveling spectacle of animals and human oddities popular, while the five Ringling brothers performed juggling acts and skits from their home base in Wisconsin,” reports the AP. “Eventually, they merged and the modern circus was born. The sprawling troupes traveled around America by train, wowing audiences with the sheer scale of entertainment and exotic animals.”

So let’s see… We had animal acts, and we had floor show, and we mixed them and had spectacular success. THEN we start eliminating one of them and sales drop. Gee… wonder why…

So what have they got to sell now? Oh, a vaudeville / juggling act and bad food.

IMHO, the Feld family most likely had too many other businesses to really care about this one, and just wanted the PC Police to go away.

The Feld family bought Ringling in 1967 and employs about 500 people for both touring shows “Circus Extreme” and “Out of This World.”

IMHO, this is really a lesson in what happens when you give in to PC Police. You die.

The animals will now all die and they will not have progeny. This is good for them how?

As working animals, they had a pretty nice niche. Caregivers who tended them. Doctors when sick. Food brought as needed. Predators kept away. Rehearsal and a few shows a few days a week, then a road tour. Heck, I’d sign up for that! (As, it would seem, did a few hundred other people).

Now THE key thing the PITA folks and related forget is simple: No work for them, and the animals are extirpated. Dead. Gone. No joy. Nobody is going to sink money into keeping those animals alive for generations when there is zero revenue to do it with. They may keep some of them alive on a retirement farm until they expire (just to keep the flak down) but the herd will be extinguished. Do you really think that’s what the herd wants? To all die?

Yes, they had to work for a living. So do the cows in a dairy. So does the dog herding sheep. So does my kid… So?

With any luck, the Feld Family can be coaxed into selling the rights to someone who actually cares about the Circus and the tradition of it. In my dinky farm town, we had a much smaller traveling circus come to town (RB B&B was in the Big City an hour away… got to see it once as a treat). WHY do I have great fondness for Elephants and Tigers? Because I saw them performing and working with people when I was about 5 years old and loved it (and by extension, them). I realized these animals had brains, knew their routine, could hit their spot on the stage, and frankly seemed to enjoy showing off sometimes. THAT was the moment a bear turned from a predator in the forest (to be killed) into a performer with personality. Kill off that experience, and in 50 years nobody will give a damn about Elephants, Tigers, Bears, or much of any other animal. They become vague concepts from a screen, not something experienced with awe.

Does a zoo ‘cut it’ as an alternative? Nope. All you see there is, if you are lucky, a bored to tears animal trying to sleep despite that noise from the other side of the fence. You don’t see the personality nor the intelligence nor the sheer ability of the animals. They become ‘a thing apart’ instead of a ‘kindred spirit in actions’. Zoos are nice, and we visit them every year if possible, but they are not able to showcase the skill and intelligence of the animals. Specimens at a distance are NOT the same as an emotional bond to a performer.

Bottom Line

The lesson all companies need to take away from this is simple:

Give in to the Loony Left PC Police and go out of business.

Orange Pi – First Fire

Well, I’ve got it working, the Orange Pi One that is.

I’ll skip the time spent wandering in the woods and get right down to the #1 issue:

It doesn’t see the HDMI monitor by default using the Armbian images.

Took me three or four iterations of “download os variation, stuff on card, try boot” to eventually just look at the log files on the chip with a Raspberry Pi and see it claimed to be booting. OK…

Then used the WiFi router (and DHCP server) to check that it had registered an IP address…

Then log in via ssh and, there it is.
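
If you can’t easily get at the router’s DHCP table, a quick subnet scan does the same job of finding a headless board. A minimal sketch; the 10.168.168.0/24 subnet and the .41 address are just examples, substitute your own, and the root login is the usual Armbian first-boot account (check the Armbian docs for its default password):

sudo nmap -sn 10.168.168.0/24    # ping-scan the LAN for live hosts (substitute your own subnet)
ip neigh show                    # or just look at the ARP / neighbour table once the board has been up a minute
ssh root@10.168.168.41           # then ssh in to whatever address the board actually got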

Just don’t expect the “desktop version” to come up with, well, a desktop…

At this point “I’m OK with that” as I’m expecting to run this module as a headless compute node, but really, if you were expecting it to work without another computer to get to it, well…

Here’s a screenshot:

Orange Pi via Raspberry Pi first login screen

Remember you can click the image to embiggen…

The Good

It is cheap. About $16 (all up a touch over $25 from Amazon with powersupply).

It is small. It doesn’t fit in any of my Raspberry Pi cases, but it looks about the Altoids Mint Tin size or a bit smaller. Corners are square, though. About a thumb wide and a little finger long. A stack of these would be about 2/3 the size of a similar stack of R.Pi boards with the same quad core 1.x GHz speed.

It looks like you can get a decent Armbian image or two for it. I got mine here:

http://www.armbian.com/download/ which, after enough clicks and things, puts you onto another link from where the actual image comes.

https://dl.armbian.com/orangepione/Debian_jessie_default.7z

I needed to install p7zip to unpack it so:

apt-get install p7zip

and a simple

p7zip -d {name}

unzips it. Then you just dd the image file onto the SD card and the install is done.

dd bs=4M of=/dev/sdX if={very long FOO name.img}

Where of course X in sdX is the actual device letter assigned when you put the chip in a carrier and plug it into your Raspberry Pi…
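
It is worth a sanity check before that dd, since pointing of= at the wrong device will cheerfully eat a real disk. A minimal sketch of the whole “stuff on card” step, with {very long FOO name} left as a placeholder for whatever the unpacked image is actually called:

lsblk                                # confirm which /dev/sdX really is the SD card carrier
p7zip -d Debian_jessie_default.7z    # unpacks the .img into the current directory
dd bs=4M of=/dev/sdX if={very long FOO name}.img
sync                                 # flush the write buffers before pulling the card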

So actual install is pretty trivial. Stick that in the O.Pi and boot. It takes a few minutes with vaguely blinking lights (red and green on the board – green is power I think… plus others near the ethernet connector) then it is up.

The Bad

The problem is you won’t know it is up. The screen very briefly gives an Armbian greeting (maybe a second) and then goes black. Nothing. Now you might think having downloaded the “desktop” version the screen would show something. You would be wrong.

A web search showed several folks and several different O.Pi types complaining about various HDMI issues and all sorts of “fiddle the config of the HDMI” to attempt to get a monitor. Some claim to have one… I am not sure I’m going to bother at this point…

It is helpful if you speak Chinese. Or at least read it. At the Orange Pi web site, the top “download” page has all sorts of nice pictures of OS types you can download and run.

http://www.orangepi.org/downloadresources/

Now despite many web pages saying not to trust their images as the clock speeds are too high or the overvolts are set too high or {whatever} kind of technical over-something was done, none of them were security gripes, so I was going to download an image… Except when you click on one of them you get tossed into a “pick one” page with a pretty bird on it (at least I hope that is what the page said) and clicking a link takes you to what I think is a “download the actual image” page… both of them in Chinese. Is it really that hard to just put a word or two in English on the page for the billion-plus people on the planet who are functional in English?

The So-So

It takes a special power dongle of 4mm? diameter, so my powersupply came as a mini-USB with an adapter (from Lovrpi). This is very nice as it includes an on/off switch. Now the board has a switch button on it too, but near as I can tell that one doesn’t do anything (that changes anything I’m running…) Their powersupply is very nice too, and I’ve used it to drive a Raspberry Pi Model 2 that needs lots of power. This is a nice solution that doesn’t commit your powersupply to the Round Connector forever.

The board, if driven to full performance, will almost certainly need a heat sink that is not included with the board. It looks like the same one for the Pi M2 or Pi M3 will fit the H3 chip in the Orange Pi. That also strongly implies that at full computes the quad core ARM at almost identical clock in the same size package will overheat without a heat sink and throttle your computes (it does that, backs off the clock when hot). So “someday” I’ll need to get a heat sink. There are web pages talking about that, too…

It doesn’t have that strange boot sequence of the R.Pi where it loads a loader to load the boot loader to load the OS… with a binary blob in the GPU running it. What it does do isn’t yet clear to me… Just “stuff and go” with the image… There are web pages saying you can compile the firmware and load it {somehow}, but many of the “how to load the OS” pages were wrong or had stale links, so not as helpful as you might want, which leaves me thinking those firmware how-to pages might not be so right either. (Which is why there are no links to ‘how to’ pages in this article…)

It has systemD on it. Well, what do you expect from a bog-standard Debian Jessie? So along with “how do I get a monitor to work?” I’ll need to do “the usual” swap to Devuan. Which, given that a web search for “Devuan” and “Orange Pi” puts my own pages about doing it at the top… implies I’m going to be the only person on the planet running that particular OS / Hardware combination… Sigh. A “Support group and Forum” of One.

In Conclusion

I’m not unhappy with it.
I’m not happy about it.
It’s OK…

For the extra $15? I’d rather go with the Raspberry Pi and avoid the time sink. Were I making a raw compute cluster with my own OS build, it would save enough money on the hardware to be worth it (once you are in for about a dozen nodes, and especially if using the even cheaper O.Pi Zero at $10… but it was not available when I made the buy).

It looks like a competent bit of hardware, compromised by a painful software download search / process / language barriers, a lack of out-of-the-zip support for HDMI, and missing heatsinks that are essential for full performance (same as the Pi M2 and Pi M3…)

I do like the blue color ;-)

So at this point I’m off to see if I can figure out how to get HDMI to work, put the board on top of the Dogbone case (maybe I can wire-tie it to the top…) and use it headless for low compute load things until I can order a heatsink kit ($5 on Amazon, so need to bundle with other things or the shipping will cost more than the kit…)

Maybe tomorrow I’ll start in on that whole Devuan upgrade bit… (Or maybe just install distcc on it and start a Model E build running on a 16 core cluster ;-)

FWIW, until further notice, I’m likely to stick with the Pi Model 3 as the best combination of Computes/$ and ease of set-up with LOTS of support available. At $35 list and $50 all up with powersupply and heatsinks, if that saves me an hour I’m way ahead of the game for unit count less than 4 or 5. I may dabble with things like the O.Pi or Banana Pi, but that’s just for my own education and experience now that I have a working Raspberry Pi Cluster. Basically, it’s R&D for the question: “IFF Model E needs 88 nodes to run, that’s 22 Pi boards. Can I cut $220 off the board cost with Banana Pi or Orange Pi boards?”. So far the answer is “Yes but… at the cost of $200+ of time”…

TT – One Week (only 7 days)

No, I’m not going to count the hours and minutes and seconds… maybe just the hours ;-)

The Trump Transition “TT” at minus 1 week. Just 7 days.

This is the last in the series. The next Trump posting will be about POTUS Trump.

Prior threads in the chain are:

https://chiefio.wordpress.com/2017/01/07/tt-2-weeks-and-counting-the-days/

https://chiefio.wordpress.com/2016/12/30/tt-only-3-more-weeks/

We skipped 4 due to sloth on my part…

https://chiefio.wordpress.com/2016/12/20/tt-minus-5-weeks/

https://chiefio.wordpress.com/2016/12/14/tt-minus-6-weeks/

https://chiefio.wordpress.com/2016/12/02/tt-minus-7-weeks/

https://chiefio.wordpress.com/2016/11/23/tt-minus-8-weeks/

https://chiefio.wordpress.com/2016/11/08/hillary-trump-the-election-and-aftermath/

https://chiefio.wordpress.com/2016/11/16/democrats-lords-of-chaos-choose-defiance/

We are continuing with the news 4-walling “The Russians Did It!” with the latest breathless chapter looking like a bit of a fraud to me. A “Russian” dossier written in English? Yet processed to look like a multi-generation photocopy? With what look, to me, like digital artifacts? Really? Sheesh. And they got the name right, but forgot to check that the guy in question had never been to Prague… and his passport showed it (as did the people he was with at the time…)

Did the Russians hack Hillary and the DNC? Certainly. I’d bet hard money on it. I’d also bet the Chinese were in, the NSA knew (or at least recorded the traffic and found it later), the Israelis were “in”, Guccifer was known to be in and said so, and likely a half dozen other “script kiddies” too. The problem is in placing BLAME for the Wiki-LEAK on Russia. No evidence for that has been shown. It is rank speculation.

Then we have the spectacle of the Mutual Admiration Society Circle Of Jerks handing out medals to each other (and The Big O in a mental masturbatory, self-congratulatory self-medalling…) Oh, the myopia… even omphaloskeptics would see further…

Then we have the “treat” of Cold Warriors like McCain of Az. dead set against any possibility of actually declaring peace with Putin and Russia, so putting it into law… Hey, Johnny boy, it’s been 1/4 Century now since the USSR became the FSU… Can’t we please “move along” now? Russia is far less “the enemy” than is Soros and his minions.

Speaking of Soros and minions: Looks like he’s funding the Rent-A-Mob not only to disrupt the august Senators in their examination of A Few Good Men, but they intend to be Brats On Parade (or is it Idiots On Parade?) at the inauguration too. I’d break out the Clue Stick (whack.. whack Whack, Whack WHACK WHACK!!) but it seems they are too dense to “catch a clue” when a basket of them is tossed in their faces and arms. But here’s a clue:

Dear Dimocrats and Soros Fellow Travelers: The more you do this crap, the more we detest you and vow to never ever EVER vote for a Democrat again. Ever. I’m one of the “undecided” and the “non-partisan” and the “independent voter” group. You know, the ones who actually decide each election… I voted for Obama in the California Primary against Clinton. I voted for Bernie in the Primary for the same reason. It isn’t like I’m a dyed in the wool Republican… more a Libertarian in the classical sense. So it’s pretty simple. I taught my children to grow up and be adults. Seeing you’all acting with the emotional maturity of a 2 year old and tossing tantrums just, well, makes me think you need a good spanking and no dinner…

Guess WHY we, the Deplorables, elected Trump? To administer that spanking… (and take away the Federal Slush Fund Dinner Trough…) Anything happens to stop that, we are not going away, we are getting even more motivated. You don’t want to see what comes after Trump…

So please, try for just One Whole Week to act like an adult. That’s only 18 years old. I’m sure you can manage it if you really really try hard. Afterwards, you can all have an R&R trip to Colorado… for “herbal therapy”, of course…

With that, let the conversation roll on…

SystemD Depends on Swap

Not exactly a bite on the butt, but an unexpected behaviour in any case.

So I’ve moved to Devuan to avoid “issues” with SystemD.
Why am I reporting an issue with SystemD?

I decided to make a backup copy of my working Devuan chip as my “Daily Driver”, having turned my prior Daily Driver into the Headend chip… so I need to clone that one and ‘back out’ some of the cluster-specific things, isolating the two sets of information. To do that, it can’t be running. Which means something else must be running… which was my old Arch Daily Driver as the fast system of choice.

OK, preparatory to the Chip archive / restore, I decided to clean up the disk space a little. I regularly have done this for years on all sorts of systems using a scriptlette I call DU.

Only the first line is active, the others are left in so you can see some of the evolution over time:

du -BMB -s * .[a-z,A-Z]* | sort -rn > 1DU_`date +%Y%b%d` &

#du -ks * .[a-z]* .[A-Z]* | sort -rn > 1DU_`date +%Y%b%d` &

#du -ks * | sort -rn > 1DU_`date +%Y%b%d%H%M%S` &

The -B option sets the blocksize to count in, and MB means millions in powers of 10 (so no need to deal with 1024 powers…). The .[a-z,A-Z]* glob is there to not miss files in the current working directory starting with a ‘.’, which usually means “don’t display, and don’t let * grab them either”. Then sort it in reverse numeric order (so big is on top, and 10 sorts before 2 instead of with 1…) and stuff it in a file named 1DU_{today’s date}, the ‘1’ causing it to sort to the head of most “ls” listings.

Note that the & at the end launches it as a ‘background job’. A CTL-C will not kill it, you must note the process ID at launch (PID) and do a “kill -HUP PID” in a working terminal window to stop it. This, it seems, may have mattered…
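
One thing the scriptlette does not do is tell the kernel the scan is low priority. A hedged variant, if you want the disks walked more politely, assuming ionice is installed (it is part of util-linux on Debian-family systems) and the I/O scheduler honors the idle class:

# Same scan, but at idle I/O priority and lowest CPU priority, so interactive
# work is not starved while the disks are being walked:
ionice -c3 nice -n19 du -BMB -s * .[a-z,A-Z]* | sort -rn > 1DU_`date +%Y%b%d` &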

Now I’ve done this for years. On all sorts of systems. Big ones. Little ones. TByte disks. Fast and slow. It bogs down any given disk but not the system. Ever. Disk I/O is always glacial compared to the system.

But not this time…

I launched it on the Pi, against 3 disks. 2 x 1 TB ext4, and 1 x 300 GB ext4 that also had a swap partition on it. A real disk swap partition makes your SD chip last longer and makes the system faster.

Well, seems all that disk I/O bogs down the Pi integrated I/O chip (used for all things USB and networking). O.K., I can live with that…

Then the system seemed to hang. No screen updates. No cursor movement. No nothin’. Had we another systemD ‘hang the system’? Perhaps, I thought.

So I looked over at the ‘top’ panel. It wasn’t updating. It showed the DU / du process in D Diskwait status. OK, to be expected. Scanning down, systemd was also D diskwait. As was the kswapd daemon.

I have no screen capture of this as I couldn’t get a terminal window to respond so I could type ‘scrot’.

Well, were we in a lockout? SystemD unable to swap due to diskwait and diskwait unable to release due to lack of swapping for systemD? Who knows, I thought. So off to get a cuppa’, think about it, and watch a bit of news…

Coming back about 10 minutes later, systemD still D diskwait. About 20 minutes later, it was not on the page of processes in top… Clicking in windows, hitting return, etc. still gave me nothing. Watching VERY closely, after a while, I could see a slow line of bits being re-written down the ‘top’ panel. It WAS running, just very very slowly.

I went off for lunch and more news…

Eventually, a few hours later, the machine is running normally again.

My Surmise

And here we get a leap off the cliff of conclusion: I surmise that the way SystemD is written is broken in that it is dependent on swap. Either it is so big it isn’t memory locked, or it is so chatty it must have responsive disk. For “The Unix Way”, certain critical functions are memory locked and cannot swap out, and typically don’t need disk to run. Some of those functions, I would speculate, are now in systemD and it is not being locked into memory (or it writes into so many files and touches so much stuff it can’t effectively run without rapid disk access nearly instantly).

Whichever way this falls out, it would imply that if swap is placed on very active disks, you will sporadically lock out systemD functions and that will then cause your system to run like molasses in January or potentially hang altogether (though honestly, had I not had patience for hours, a keen eye, and curiosity, I’d have assumed my system was hard hung in the first 3 minutes of “unresponsive nothing happening”.)
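
If you want to check whether that is what is biting you, a hedged diagnostic sketch using stock tools, nothing fancy:

swapon -s     # shows which partition swap actually lives on
vmstat 5      # watch the si/so (swap in/out) and wa (I/O wait) columns while the du
              # is grinding; big numbers there point at swap stuck behind the disk scan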

All those decades of system tuning in Unix / Linux land to find just what must be locked into memory and NOT dependent on disk I/O or swap tossed out in the rush to make SystemD the Swiss Army Knife of system functions… so now heavily loaded disk can cause your system to freeze up for the duration of the disk saturation.

OK, that’s my surmise. In any case, it was interesting. Once I’ve got my Daily Driver Devuan chip made, I’ll reboot the same hardware / disks and run the same command and see if it too locks up, or just has slow disk…

Do Not Buy SanDisk USB Sticks.

Before my last ‘road trip’ I bought a SanDisk Ultra USB 3.0 stick 64 GB. My intent was to put any needed files on it for use with the old HP laptop along with a Knoppix boot partition.

I formatted the thing with gparted just fine. Then on the 2nd? or so use it decided to go “read only”. Figuring it was “something I did” and being in a hurry, I let it go until now.

Now I’ve tried most of the morning (and several times in the prior week) to get this damn thing to stop being a brick and accept a new file system. No Joy.

Somewhere along the line the firmware looks to have decided it didn’t like some aspect of operations and has locked up into ‘read only’ to ‘protect my data’ so I can get it off of a failing device. Only problem is I have no data on it (or none that isn’t corrupted), and I just want to format the damn thing so I can use it.

You can’t format it.
You can’t erase it.
You can’t force bits onto it to reset it.

It’s a brick.
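
For the record, these are the sort of checks you can run from a Linux box, assuming the stick shows up as /dev/sdX; none of this got past the firmware lock on mine:

hdparm -r /dev/sdX                           # show the kernel's read-only flag for the device
hdparm -r0 /dev/sdX                          # try to clear it; no effect if the firmware itself is locked
dd if=/dev/zero of=/dev/sdX bs=1M count=1    # try to stomp the partition table; errors out on a locked stick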

And it isn’t just me.

http://forums.sandisk.com/t5/All-SanDisk-USB-Flash-Drives/SanDisk-Cruzer-Blade-16GB-write-protected-error/m-p/279296

Re: SanDisk Cruzer Blade 16GB write protected error
‎07-03-2012 09:46 PM

I am just one more of MANY SanDisk users who is disadvantaged by a “read-only” flash drive. These things should be recalled and redesigned. There are SO many complaints on this forum by now that why would ANYONE want to buy one of these things?
[…]

samjaza
Posts: 1
Registered: ‎07-10-2012
Re: SanDisk Cruzer Blade 16GB write protected error
‎07-10-2012 11:55 AM

Read that less than 1% of thiese things fail?
have had a 32gb micro sdhc for a week, suddenly came up write protected?
tried all sorts of things,
eventually by loading it into a cheap camera I formatted it.
Back into PC! Write protected!

well I got the **bleep** thing for £10, so cheap I bought two, loaded the new, unused one in, and straight away, straight out of the box… write protected!.

less than 1% eh? and why are they so cheap?

Message 86 of 214 (32,772 Views)

Ed_P
SanDisk Guru

Posts: 2,269
Registered: ‎07-23-2010
Re: SanDisk Cruzer Blade 16GB write protected error
‎07-10-2012 03:17 PM

Thank you for posting your experience. I’ve said all along that the problem was a Windows problem rather than a hardware problem. If it was a hardware problem you wouldn’t have been able to format it even in an expensive camera.
Ed

HyperNovaHD

Registered: ‎07-10-2012
Re: SanDisk Cruzer Blade 16GB write protected error
‎07-10-2012 09:33 PM

Hi, I have a Cruzer and I mainly use it for my xbox 360 since I dont have that much storage on it by itself. So I put it into my computer (I dont really know why…) but when I took it out and tried to use it back for my xbox 360, when I tried to copy a file over onto it it said that it cannot do it because there is write protect on it. I dont know if this matters but I have MS Windows 7 Home Premium 32-bit. So, is it possible to get write protection off of it?

That shows this behaviour has been around a long time, and isn’t getting fixed, even in this brand new higher-end device.

Note that was 2012, so 5 years ago. A full half decade.

They are in denial if they think the error isn’t in their devices. I’ve tried MacOS, Linux, Windows and more. ALL the advice in several different forums. So far, I’ve got a brick.

http://forums.sandisk.com/t5/All-SanDisk-USB-Flash-Drives/Cruzer-FIX/m-p/333491/highlight/true#M9667

Cacho
SanDisk Professor

Re: Cruzer FIX
‎10-13-2014 01:12 PM

Smiley Happy Hi Anthony,

Dear member of SanDisk Community, welcome.

[ I Have found this Read Write Protection error with my 8 gig Cruzer Glide. ]

Please, try the Cruzer in other PCs, and see if the problem continues.

If still bad, do not grieve more friend, return it because is dead, ist kaputt…

Write protection errors occur when a UFD detects a potential fault within itself. The drive will go into write-protected mode to prevent data loss.
There is no method to fix this.

You might have a pleasant surprise, if can contact with SanDisk Technical Support who will replace the device if it’s authentic and is within warranty. Link: http://www.sandisk.com/about-sandisk/contact-us/

To obtain the internal data of the UFD, you can use (portable apps, free): – USB Flash ChipGenius 4.00, http://filecloud.io/rqa7etmv
– USB Flash Deview 2.30, http://filecloud.io/srp2hegw
– USB Flash DriveLetter 1.30, http://filecloud.io/vgqzcdyh
– USB Drive Manager 4.20, http://filecloud.io/ydi9jpac

To find out more on this, see here:
http://forums.sandisk.com/t5/All-SanDisk-USB-Flash-Drives/HELP-how-do-i-reomove-write-protected-message-from-my-16gb-flash/m-p/323924#M8925

Luck.

Regards, Alfred.

Note: I’ve not tested those links in the message nor can I say if they are safe or malware. I don’t know who or where “filecloud.io” is.

SanDisk, just say no!

FWIW, I also found speed tests showing the Samsung Evo being MUCH faster on read / write than the SanDisk Ultra SD cards. I’d bought many of those Ultra mini-SD figuring their label read speed indicated something good. Now I’m in serious doubt about it, too.

MPICH2 Installed on Pi Cluster

This conversation begins in “comments” on a prior thread where I posted my first results after finally getting this to work. See here:

https://chiefio.wordpress.com/2017/01/09/cluster-distcc-first-fire/#comment-77561

Which starts at the “get NFS working right” step (before that is celebration of getting distcc going and after it is the rest of the testing of MPI and the program used). There is also a comment about my “issues” with getting ssh to like me and the need to install ‘keyrings’ and then to activate them and save the result on each of the machines participating in the MPICH party.

OK, so what did I do and where did I get the formula?

First off, the internet is a revolution in technical workload. Where, in the past, something might take 4 days of reading manual pages and trying things, it now is often a web search, find a ‘close’ example, try it and proceed directly to debugging.

In this case, I modeled off of an Ubuntu MPICH install / test. Ubuntu is based on Debian, which until Jessie was essentially the same as Devuan in not having SystemD, so many old “How To” pages still have the old SysV init scripts and methods in them. (So “everything you know is wrong” under systemD becomes “everything older than a month or two ago is still right” under Devuan.)

Here is the page on which I modeled my installation:

https://help.ubuntu.com/community/MpichCluster

It goes through several steps I’d already done, so I was “good to go” with skipping things like putting host names into the host file (though I did tune it up a bit) and things like creating a set of equivalent machines with equivalent logins.

Since, as SystemD propagates into more of the world, and Ubuntu changes how things are done, this page just might get a re-write to systemd crap, I’m going to quote heavily from it to preserve the Devuan Friendly content.

The basic interesting ‘trick’ they use is to have the common user id / login name share an nfs mounted home directory across all the systems. You can configure it with separate file systems and separate home directories, but a lot of copying will need to be done ;-)

So I’d set up ‘gcm’ as the user id on my headless nodes. I’d also, prior to that, created a different user account that took the 1001 UID on the headend board, but I’d not used that ID for anything yet. I just went into /etc/passwd and changed the name of it to ‘gcm’. Ditto in /etc/group. BTW, you can’t use a video monitor unless you are in group video in /etc/group, so I just added that name anywhere that ‘pi’ was listed. Probably overkill, but whatever…

In /etc/group:

video:x:44:pi,gcm
sasl:x:45:
plugdev:x:46:pi,gcm
staff:x:50:
games:x:60:pi,gcm
users:x:100:pi,gcm
nogroup:x:65534:
input:x:101:pi,gcm

and anywhere else “pi” is listed as an added member of the group. For /etc/passwd this gets created when you do an “adduser gcm”, but here’s the line anyway:

gcm:x:1001:1001:Global Circulation Model,1,666-6666,666-666-6666:/Climate/home/gcm:/bin/bash

All the 666s are my answers to questions about the work phone number, etc. etc…
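
In hindsight, rather than hand-editing /etc/group, the stock tools can do the same thing. A sketch, run as root, with the group names taken from the listing above:

adduser gcm                                        # creates the account (UID 1001 here) and asks those phone number questions
usermod -aG video,plugdev,games,users,input gcm    # append gcm to the same groups 'pi' is in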

I used the /Climate file system on the headend machine for the home directory, and NFS exported it; it took a while to remember to do the:

service rpcbind start

/etc/init.d/nfs-kernel-server start

It is “just wrong” that exportfs reports the file systems as being exported when they are not, until you do those steps… one site speculates it is a link ordering thing… who knows. Another had worse problems with it under systemD… and a more exotic fix.
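
A quick way to see whether the server is really answering, as opposed to what exportfs claims, is to ask it over RPC. A hedged check, assuming the usual nfs-common and rpcbind packages are installed:

showmount -e localhost    # asks mountd what is actually on offer; fails if rpcbind / the kernel server are not up
rpcinfo -p localhost      # lists the registered RPC services (portmapper, mountd, nfs)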

So, accounts built on all three machines. (I just let it build the home directory in /home on the headless units, so I can log in with that directory if desired, then edited their /etc/passwd files to change /home/gcm to /Climate/home/gcm; on the headend, I copied the pristine home directory into /Climate as the ‘real’ copy.)

I used:

mkdir /Climate/home /Climate/home/gcm

(cd /home/gcm; tar cf - . ) | (cd /Climate/home/gcm; tar xvf -)

That’s old school from when cp was daft. Now I think “cd /home; cp -r gcm /Climate/home/” would do it… though you might need to set a ‘keep ownership and permissions’ flag…
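
That flag exists: -a, the ‘archive’ switch on GNU cp, keeps ownership, permissions, links and timestamps, so the modern one-liner would be roughly:

cp -a /home/gcm /Climate/home/    # copy the directory preserving owner, mode, links and times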

The common exported file system is built, exported, mounted, restarted, and tested.

The three machines are all in /etc/hosts. We pick it up there:

Setting Up an MPICH2 Cluster in Ubuntu

This guide describes how to build a simple MPICH cluster in ubuntu.

To understand the guide, a basic knowledge of command line usage and the principle mpich & clustering is assumed.

Here we have 4 nodes running Ubuntu server with these host names: ub0,ub1,ub2,ub3;

1. Defining hostnames in etc/hosts/

I didn’t use their names, so here’s my /etc/hosts instead. It doesn’t matter what names you use, as long as they are consistent.

gcm@Headless1:~ $  cat /etc/hosts
127.0.0.1	localhost, Headless1
::1		localhost ip6-localhost ip6-loopback
ff02::1		ip6-allnodes
ff02::2		ip6-allrouters

127.0.1.1	raspberrypi
10.168.168.40	Headend, headend
10.168.168.41	Headless1, headless1
10.168.168.42	Headless2, headless2
10.186.168.43	Headless3, headless3

They make a big deal out of NOT having “Headless1” in the /etc/hosts file as part of the loopback interface (127.0.0.1), but looks like I accidentally left it there on one of them.

Here’s the one from the headend machine:

127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters

#127.0.1.1      raspberrypi

10.168.168.40     Headend, headend
10.168.168.41     Headless1, headless1
10.168.168.42     Headless2, headless2
10.168.168.43     Headless3, headless3                           

I’ve not yet built headless3, but put the marker in there anyway… Maybe this weekend…

You can see where I got rid of names other than localhost on 127.0.0.1 in the last one. Maybe it matters ( I did have a bit of oddness with some things…) or maybe they are excessive. A tiny ‘dig here maybe’ for someone for the future.

Then they have you install NFS. Well, I’ve already got that in my build script and besides, it’s a different (slightly) command set.

On the headend with the disk (or whatever machine shares out the partition) you need to export the file system. Here’s my /etc/exports entry:

/Climate	10.168.168.0/24(rw,sync,no_root_squash,no_subtree_check)

After which you do an ‘exportfs -a’ or ‘exportfs -r’ to reexport if already running… and exportfs alone to show what is being exported.

root@Devuan:/# exportfs
/Climate      	10.168.168.0/24
root@Devuan:/# 

To mount it on the headless nodes, you need an entry in /etc/fstab like:

10.168.168.40:/Climate	/Climate	nfs	rw,defaults,nolock,auto,noatime
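
To pick that up without a reboot, and to confirm the headless node really sees the headend’s disk rather than its own SD card, something like this (as root) on each headless node:

mount -a           # mount everything listed in /etc/fstab that is not already mounted
df -h /Climate     # should show 10.168.168.40:/Climate as the source, not a local device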

At this point you ought to be able to log in as ‘gcm’ to any of the three boards and be in the same file system. The real one on the headend and the NFS-mounted one on the headless. At that point I was “reminded” by an hour or two of wandering in the man page desert of the need to do those two commands listed above. The linked web page says (where my headend is their ub0 and my headless1 is their ub1):

2. Installing NFS

NFS allows us to create a folder on the master node and have it synced on all the other nodes. This folder can be used to store programs. To Install NFS just run this in the master node’s terminal:

omid@ub0:~$ sudo apt-get install nfs-server

To install the client program on other nodes run this command on each of them:

omid@ub1:~$ sudo apt-get install nfs-client

Note: if you want to be more efficient in controlling several nodes using same commands, ClusterSSH is a nice tool and you can find a basic two-line tutorial here.

3. Sharing Master Folder

Make a folder in all nodes, we’ll store our data and programs in this folder.

omid@ub0:~$ sudo mkdir /mirror

And then we share the contents of this folder located on the master node to all the other nodes. In order to do this we first edit the /etc/exports file on the master node to contain the additional line

/mirror *(rw,sync)

This can be done using a text editor such as vim or by issuing this command:

omid@ub0:~$ echo "/mirror *(rw,sync)" | sudo tee -a /etc/exports

Now restart the nfs service on the master node to parse this configuration once again.

omid@ub0:~$ sudo service nfs-kernel-server restart

Note than we store out data and programs only in master node and other nodes will access them with NFS.

4. Mounting /master in nodes

Now all we need to do is to mount the folder on the other nodes. This can be done manually each time like this:

omid@ub1:~$ sudo mount ub0:/mirror /mirror
omid@ub2:~$ sudo mount ub0:/mirror /mirror
omid@ub3:~$ sudo mount ub0:/mirror /mirror

But it’s better to change fstab in order to mount it on every boot. We do this by editing /etc/fstab and adding this line:

ub0:/mirror /mirror nfs

and remounting all partitions by issuing this on all the slave nodes:

omid@ub1:~$ sudo mount -a
omid@ub2:~$ sudo mount -a
omid@ub3:~$ sudo mount -a

Well, the short form is: Get NFS working and share a common file system between the nodes… As I’ve talked about getting NFS working in the Debian install and now under the Devuan cluster, well, let me know if you get stuck…

They define a user “mpiu”, but I’d already used “gcm” so stuck with it. They call their shared file system /mirror, I’d already set up /Climate.

5. Defining a user for running MPI programs

We define a user with same name and same userid in all nodes with a home directory in /mirror.

Here we name it “mpiu”! Also we change the owner of /mirror to mpiu:

omid@ub0:~$ sudo chown mpiu /mirror

After that, they install “OpenSSH”. Since ssh was already installed on Devuan, I skipped that step.

This next part got down into the weeds. You might remember that for ‘distcc’ I kludged around the need for keyrings by linking the place machine IDs are stored to /dev/null. Well, this uses all that security stuff, so here it is (just remember to think ‘gcm’ where they have ‘mpiu’ as the user):

7. Setting up passwordless SSH for communication between nodes

First we login with our new user to the master node:

omid@ub0:~$ su - mpiu

Then we generate an RSA key pair for mpiu:

mpiu@ub0:~$ ssh-keygen -t rsa

You can keep the default ~/.ssh/id_rsa location. It is suggested to enter a strong passphrase for security reasons.

Next, we add this key to authorized keys:

mpiu@ub0:~$ cd .ssh
mpiu@ub0:~/.ssh$ cat id_rsa.pub >> authorized_keys

As the home directory of mpiu in all nodes is the same (/mirror/mpiu) , there is no need to run these commands on all nodes. If you didn’t mirror the home directory, though, you can use ssh-copy-id to copy a public key to another machine’s authorized_keys file safely.

To test SSH run:

mpiu@ub0:~$ ssh ub1 hostname

If you are asked to enter a passphrase every time, you need to set up a keychain. This is done easily by installing… Keychain.

mpiu@ub0:~$ sudo apt-get install keychain

And to tell it where your keys are and to start an ssh-agent automatically edit your ~/.bashrc file to contain the following lines (where id_rsa is the name of your private key file):

if type keychain >/dev/null 2>/dev/null; then
keychain --nogui -q id_rsa
[ -f ~/.keychain/${HOSTNAME}-sh ] && . ~/.keychain/${HOSTNAME}-sh
[ -f ~/.keychain/${HOSTNAME}-sh-gpg ] && . ~/.keychain/${HOSTNAME}-sh-gpg
fi

Exit and login once again or do a source ~/.bashrc for the changes to take effect.

Now your hostname via ssh command should return the other node’s hostname without asking for a password or a passphrase. Check that this works for all the slave nodes.

I didn’t at first install keychain on all the headless nodes. Didn’t work. A few hours chasing why… Eventually I went back and did “apt-get install keychain” on all the nodes, AND did an initializing “ssh headless1 hostname” (to execute the ‘hostname’ command on headless1) and similarly “ssh headless2 hostname” on the headend; then the same thing on headless1 to headend and headless2, and again on headless2 to headend and headless1. That seemed to preload all the password checks on every machine to every machine, and the permissions / login failures on running MPICH codes ceased.
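
Were I doing it again, I’d script that all-pairs priming instead of typing it nine times. A rough sketch, run once as ‘gcm’ on each of the boards, host names as in my /etc/hosts:

for h in headend headless1 headless2
do
    ssh "$h" hostname    # first run answers the host-key prompt and gets the keychain pass-phrase cached
done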

They, then, have a step of installing the compilers, but I’ve already done that in my build script. So skipped it.

Finally we get to the actual install of MPICH2, which is curiously anticlimactic:

10. Installing MPICH2

Now the last ingredient we need installed on all the machines is the MPI implementation. You can install MPICH2 using Synaptic by typing:

sudo apt-get install mpich2

Alternatively, MPICH2 can be installed from source as explained in the MPICH installer guide or you can try using some other implementation such as OpenMPI.

To test that the program did indeed install successfully enter this on all the machines:

mpiu@ub0:~$ which mpiexec
mpiu@ub0:~$ which mpirun

Well, I could just add that to the build script… And no, you don’t need to build it from sources unless you want the experience…

The configuration of MPICH is nearly trivial. You list the machines in a file and reference it when you launch mpi. They use the name ‘machinefile’ which I dutifully copied not knowing if it was a ‘special’ name. It isn’t. I’m likely to change the name to something shorter, like maybe “cluster” or even “nodes”…

11. setting up a machinefile

Create a file called “machinefile” in mpiu’s home directory with node names followed by a colon and a number of processes to spawn:

ub3:4 # this will spawn 4 processes on ub3
ub2:2 # this will spawn 2 processes on ub2
ub1 # this will spawn 1 process on ub1
ub0 # this will spawn 1 process on ub0

My machine file is:

gcm@Headless2:/Climate/home/gcm# cat machinefile 
Headend:2
Headless1:4
Headless2:4

That is for now. I’m of course going to add a ‘Headless3’ when the Orange Pi gets integrated. Then I’ll tune things to see just how many cores of the headend I can use and not bog it down. For now, I’m setting it to one-per-cpu on the slave nodes and 1/2 of the cores on the Master node. Notice you can have different files for different tuning invoked at run time…
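
Just as a worked example of that point, a hypothetical “use everything” tuning file (the name machinefile.max is mine, pick anything) would be:

Headend:4
Headless1:4
Headless2:4

and you point mpiexec at it instead of the default:

mpiexec -n 12 -f machinefile.max ./mpi_hello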

Their test program is already in the linked comments on the prior thread, but it is the standard “hello world” run on all things for their debut… I modified it a bit. On running it, the thing was run and gone so fast that I couldn’t check it had actually distributed to the different nodes, despite having active “top” windows running on all of them. I stuck in a POSIX-compliant ‘pause’ that I picked up from here:

http://stackoverflow.com/questions/4869507/how-to-pause-in-c down in a comment:

Under POSIX systems, the best solution seems to use:

#include <unistd.h>

pause ();

If the process receives a signal whose effect is to terminate it (typically by typing Ctrl+C in the terminal), then pause will not return and the process will effectively be terminated by this signal. A more advanced usage is to use a signal-catching function, called when the corresponding signal is received, after which pause returns, resuming the process.

Note: using getchar() will not work if the standard input is redirected; hence this more general solution.

But in retrospect I likely ought to have let it have a timer that ran out, as in this comment:

If you want to just delay the closing of the window without having to actually press a button (getchar() method), you can simply use the sleep() method; it takes the amount of seconds you want to sleep as an argument.

#include <unistd.h>
// your code here
sleep(3); // sleep for 3 seconds

References: sleep() manual

My current test case is:

root@Headless2:/Climate/home/gcm# cat mpi_hello.c
#include <stdio.h>
#include <mpi.h>
#include <unistd.h>

int main(int argc, char** argv) {
    int myrank, nprocs;
    
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);


    printf("Hello World! from Processor %d of %d\n", myrank, nprocs);
    pause ();

    MPI_Finalize();
    return 0;
}

You compile it, and hating to type anything long twice, I stuffed that in a mini-script of one line and made it executable with “chmod +x Makeit”

root@Headless2:/Climate/home/gcm# cat Makeit
mpicc mpi_hello.c -o mpi_hello

So to do the mpicc command to build the output executable of ‘mpi_hello’ I just type “./Makeit”. Lazy? Hell yes! All *nix is based on the idea that any single character you can compress out of your typing will save you months of time over your computing life…

I even have a ‘run script’ to make it go:

root@Headless2:/Climate/home/gcm# cat Doit
mpiexec -n 10 -f machinefile ./mpi_hello

Where “-n 10” says to make 10 copies and “-f machinefile” says to send them to the machines listed in ‘machinefile’ and the thing to run is in my present directory “./” and named “mpi_hello”.

Running it shows it works:

gcm@Devuan:~ $ ./Doit
Hello World! from Processor 0 of 10
Hello World! from Processor 1 of 10
Hello World! from Processor 6 of 10
Hello World! from Processor 7 of 10
Hello World! from Processor 8 of 10
Hello World! from Processor 9 of 10
Hello World! from Processor 2 of 10
Hello World! from Processor 3 of 10
Hello World! from Processor 4 of 10
Hello World! from Processor 5 of 10

But I’ve still got some ssh / permissions things to work out from headless2, which can accept jobs, but doesn’t want to send them out:

gcm@Headless2:~ $ ./Doit
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password).
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password).

not exactly a big deal, since all jobs ought to originate from the Master node in my setup, but I would like to know “why”…

“Why? Don’t ask why. Down that path lies insanity and ruin. -E.M.Smith”

So of course I’m off to explore “why”… Did I miss a step on headless2? Is it asymmetrical somehow?
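
My first stops for that “why” would be the usual ssh suspects; a hedged checklist run as ‘gcm’ on headless2, not a known fix:

ssh -v headend hostname                    # verbose mode shows which keys get offered and why they are refused
ssh-add -l                                 # is the keychain-started agent on this node actually holding id_rsa?
ls -ld ~ ~/.ssh ~/.ssh/authorized_keys     # sshd refuses keys if these are group- or world-writable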

Thus are days lost…

“Consensus Trolling”?

Watching MSNBC, I see they’ve been pushing the DNC Talking Point of “When ALL the Intelligence Agencies…” and “17 Intelligence Agencies said…”, pushing it hard.

It just reeks of the “97% Agree!” line.

All of it based on the logic fault of appeal to authority.

This got me to thinking… Ought we have a new term of art for such a tactic? While the link I provided gives a description of what they are and why they ought to be avoided, many (particularly on the Looney Side Of Left) seem to take this as a “How To Guide” to propaganda. So to highlight that use, perhaps something else is needed?

To that end, I offer “Consensus Trolling”, the act of using “Appeal to The Consensus Of Authority”, as the proposed term of art.

So the news, and many on the left in news conferences, et al., are indulging in Consensus Trolling per the laundry list of TLAs that have said “The Russians Did It!”. Never mind that it is only the politicized top-level administrators who are saying this. Never mind that ‘under the covers’ you can never know who did a hack until you catch them; only make Good Guesses. Never mind that the server was so wide open I’d expect a 1/2 dozen agencies and even private parties could get in. Never mind that any of them might be the actual source of the “leak”. Never mind that one of the Russian agencies (the military one) had only gotten in a few weeks earlier and the mail goes back much further. Roll Out The Consensus Trolls! “We, the Politicized Managerial Elite, agree, therefore YOU (idiots – implied) shut up and sit down.”

Just rankles.

And they wonder why they lost the election…
