Bits and pieces I’d like to keep tabs on, or where something interesting happened, or that just aren’t big enough to make a whole posting. Basically a tidy-up of the dust bunny information bits…
The Trump & Mattress Pee Pages
This is where to get your own PDF download of the reputed “intelligence report” from Russia about Trump in hotels with hookers just to have them pee on a bed. 35 repetitive pages of it.
http://www.documentcloud.org/documents/3259984-Trump-Intelligence-Allegations.html
To me, it just shouts “FAKE!”. There’s the faux copy-of-copy-of-copy look, with wavy lines of text and tilt on the pages (who does repeated photocopies these days? And sharp-edged fonts despite the wavy lines and shaded background? Really?). Then we also have the “looks like Photoshop round smudge tool” “coffee stains” on the bottom half, and more. The text also reads like nothing I’ve ever seen in a real report: more high-school essay, way too repetitive and meandering. Then there’s the whole question of why a Russian report would be written in English (and without Russian artifacts…).
At any rate, it is what it is.
NASA GISS “Forcings” Page
What is used as their “forcings” in the climate models? Here they point to the references:
http://data.giss.nasa.gov/modelforce/
Forcings in GISS Climate Models
We summarize here forcing datasets used in GISS global climate models over the years. Note that the forcings are estimates that may be revised as new information or better understandings of the source data become available. We archive both our current best estimates of the forcings, along with complete sets of forcings used in specific studies. All radiative forcings are with respect to a specified baseline (often conditions in 1850 or 1750).
Forcings can be specified in a number of different ways. Traditionally, forcings have been categorised based on specific components in the radiative transfer calculation (concentrations of greenhouse gases, aerosols, surface albedo changes, solar irradiance, etc.). More recently, attribution of forcings has been made via specific emissions (which may have impacts on multiple atmospheric components) or by processes (such as deforestation) that impact multiple terms at once (e.g., Shindell et al., 2009).
One notes in passing that they are not putting much into the model for tidal forces mixing the oceans and how that changes over time with the lunar orbital cycle (that changes tides rather a lot…) nor much in the way of solar variation cycles driven by planets stirring it about. A model only gives out what you have put into it to find…
Additionally, the definition of how to specify a forcing can also vary. A good description of these definitions and their differences can be found in Hansen et al. (2005). Earlier studies tend to use either the instantaneous radiative imbalance at the tropopause (Fi), or very similarly, the radiative imbalance at the Top-of-the-Atmosphere (TOA) after stratospheric adjustments — the adjusted forcing (Fa). More recently, the concept of an ‘Effective Radiative Forcing’ (Fs) has become more prevalent, a definition which includes a number of rapid adjustments to the imbalance, not just the stratospheric temperatures. For some constituents, these differences are slight, but for some others (particularly aerosols) they can be significant.
In order to compare radiative forcings, one also needs to adjust for the efficacy of the forcing relative to some standard, usually the response to increasing CO2. This is designed to adjust for particular geographical features in the forcing that might cause one forcing to trigger larger or smaller feedbacks than another. Applying the efficacies can then make the prediction of the impact of multiple forcings closely equal the net impact of all of them. This is denoted Fe in the Hansen description. Efficacies can depend on the specific context (i.e. they might be different for a very long term simulation, compared to a short term transient simulation) and don’t necessarily disappear by use of the different forcing definitions above.
Quantifying the actual forcing within a global climate model is quite complicated and can depend on the baseline climate state. This is therefore an additional source of uncertainty. Within a modern complex climate model, forcings other than solar are not imposed as energy flux perturbations. Rather, the flux perturbations are diagnosed after the specific physical change is made. Estimates of forcings for solar, volcanic and well-mixed GHGs derived from simpler models may be different from the effect in a GCM. Forcings from more heterogeneous sources (aerosols, ozone, land use, etc.) are most often diagnosed from the GCMs directly.
etc.etc. Includes nice graphs in the original.
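For my own notes, the efficacy bookkeeping described above works out, as I understand the Hansen et al. (2005) definitions (my paraphrase, not a quote from the GISS page), to roughly:

E_i \equiv \frac{\Delta T_s / F_{a,i}}{\Delta T_s / F_{a,\mathrm{CO_2}}}, \qquad F_{e,i} = E_i \, F_{a,i}, \qquad \Delta T_s \approx \lambda \sum_i F_{e,i}

That is, each adjusted forcing gets scaled by how strongly the model responds to it compared to a CO2 forcing of the same size, and it is those scaled (effective) values that get summed and multiplied by the sensitivity parameter λ.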
How To Make Tomato-Free Marinara Sauce
http://www.adventuresofaglutenfreemom.com/2010/01/tomato-free-marinara-sauce/
My todo list grows about twice as fast as I can get things done… but this is now on it.
From this posting I also learned the word “Ufta!”…
Tomato-Free Marinara Sauce
January 8, 2010 By Adventuresgfmom
This is such a cool recipe! I have not made it in a few years, in fact I forgot about it until I made a post and referenced a disease called Eosinophilic Esophagitis (EE). Sam used to have a playmate with EE and I got the opportunity to play around with different combinations of foods that never would have occurred to me in a million years! Several recipes would have made my stomach churn just by reading the ingredients and this is one of them!
It was trick-or-treat night in 2005 and Sam and his little buddy were going out for the first “real” time to beg for goodies (I say “real” time because I took him out the first time when he was 18 days old, but I don’t think that counts! :-) ). We wanted to make a little party of it and pizza sounded like a great idea, never mind the fact that this little boy could not have tomatoes, dairy, beef, pork, mushrooms… you name it. If it was a “classic” pizza topping, it was pretty much off limits. Of course, I was too stubborn to just give up there! I made the following sauce and we made pizza with just the sauce and added chopped black olives, no cheese or anything else, and it was really good! Think Bruschetta on a pizza crust.
Nothing warms my heart more than to see little sets of eyes light up, especially when those eyes belong to little ones that battle such serious medical issues. Children such as these, who have been through more in one short lifetime than most adults ever go through in 40+ years, remind me of the pure and true joy in life. To make a child a pizza that consists of the following ingredients and to get a reaction greater than if you had bought them the latest “it” toy, is priceless.
I don’t remember where I originally found this recipe but Living Without Magazine has it on their website here. One of the reasons I am making it again is not because I need to stay away from nightshades, but it is a really great way to “sneak” some very powerful vegetables in my family’s diet. While I love pumpkin, beets are another thing altogether! When I see a beet, especially those from a can, all I can think of is going to MCL Cafeteria (Ohio), as a kid with my parents and seeing all the “mature” patrons eating pickled beets with eggs. UFTA!
[…] Tomato-Free Marinara Sauce (printable recipe)
1 Onion, finely chopped
1 Clove Garlic, finely chopped (optional) (I used 2 cloves of garlic)
1/3 cup Extra Virgin Olive Oil
3 Tbs. Fresh Lemon Juice
1 Tbs. Balsamic Vinegar (in the end, I ended up adding a little more, maybe a tsp.)
1 (8-ounce) can Beets,* drained (reserve the liquid) (Get your clothespin or hold your breath! ;-) )
1 (14-15 ounce) can Pumpkin Puree (make sure it is not pumpkin pie filling)
1/2-3/4 cups gluten-free Chicken or Vegetable broth (I used Kitchen Basics Chicken Stock)
1 tsp. Coarse Salt
24 grinds Fresh Black Pepper
1/3-1/2 cup Chopped Fresh Basil (I used 1/4 cup dried Basil, plus I added a couple of Tbs. dried Oregano)
1 ½ tsp. Cornstarch or Arrowroot, moistened with 2 Tbs. reserved beet juice
Sautee onion and garlic in oil until onion is translucent and slightly brown. Add lemon juice and vinegar. Simmer for 5 minutes. Puree beets until very smooth. (I did this in my food processor, but a blender would be fine too. You will need a little liquid to help the pureeing process: use some beet juice or water, maybe 1/4 cup or so.) Add pureed beets, pumpkin puree, salt, pepper and basil to pan. Stir until combined. Whisk in the broth. Simmer over low heat for 5 minutes. Do not over-cook; beets discolor with prolonged cooking. If sauce is too thick, add a little more broth to thin. Whisk in the moistened cornstarch (or arrowroot). Cook for 1 more minute. Taste and adjust seasoning.
*TIP: If you prefer, you can use fresh beets. Roast them in the oven until soft and puree them in a food blender before adding to recipe.
TIP: If the sauce seems too acidic, add a teaspoon or two of sugar. (I added about a tsp. of Agave Nectar instead of sugar)
Rocket Pi?
How to make your own rocket with onboard computer and camera…
Nice pictures too:
https://www.raspberrypi.org/blog/rocket-man/
More powerful than what took Apollo to the moon and back… (I have a 6 inch slide rule of the same model that was their ‘backup’ to the dinky onboard computers…)
James Dougherty, co-founder and owner of Real Flight Systems, was looking at how to increase the performance of his high-altitude rockets…
Rocket Pi High Altitude Rocket
These types of rockets… yeah…
James’s goal was to build a ‘plug and run’ video system within a rocket, allowing high-definition video to be captured throughout the entirety of the flight. He also required a fully functioning Linux system that would allow for the recording of in-flight telemetry.
You can totally see the direction he’s headed in, right?
This requirement called for long battery life, high storage to accommodate up to 1080p video, and a lightweight processor, allowing the rocket to be robust and reliable while in flight.
Unsurprisingly, James decided to use the Raspberry Pi for his build, settling for the model B.
Before starting the build, James removed the HDMI port, composite video output, USB port, audio jack, and Microchip LAN9512. Not only did this lessen the weight of the Pi, but these modifications also lowered the power needed to run the setup, thus decreasing the size of battery needed. This shrunken unit, complete with the addition of a Pi camera, meant the Pi could run for 8-10 hours with the recording quality lowered to 720p60 and no audio captured.
Now he could just use the Pi Zero…
But I’d guess that answers the question of physical strength of the Pi board. Yup, you can make DIY smart munitions with one ;-) (GPS chip sold separately, phone service not included…)
How Powerful The Pi?
Linpack benchmarks here:
https://www.howtoforge.com/tutorial/hpl-high-performance-linpack-benchmark-raspberry-pi/
Introduction
In this tutorial we cover how to go about benchmarking a single processor system, the Raspberry Pi. First we will benchmark a single node, and then continue to benchmark multiple nodes, each node representing a Raspberry Pi. There are a few things to be noted here. Firstly, benchmarking a single node or multiple nodes has a few dependencies to be satisfied, which will be covered in this tutorial. BUT, on multiple nodes there are even more dependencies: an MPI implementation (like MPICH or OpenMPI) has to be built and running for HPL to work. So for benchmarking multiple nodes, I assume that your nodes have MPICH installed and running.
What is HPL?
HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. The HPL package provides a testing and timing program to quantify the accuracy of the obtained solution as well as the time it took to compute it. The best performance achievable by this software on your system depends on a large variety of factors. This implementation is scalable in the sense that its parallel efficiency is maintained constant with respect to the per processor memory usage. Thus we can use it to benchmark a single processor or a series of distributed processors in parallel. So let’s begin installing HPL.
On my perpetual todo list… Compare the Pi Cluster to an Intel box.
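For when I get around to it, a rough sketch of what the multi-node run itself looks like once HPL and MPICH are built (hostnames, rank count, and the HPL.dat grid values are placeholders of mine, not from the linked tutorial):

# machinefile is just one Pi hostname per line; check MPICH can reach them all first
mpiexec -f machinefile -n 4 hostname
# then run the benchmark; xhpl reads HPL.dat (problem size N, block size NB,
# and the P x Q process grid, with P*Q equal to the rank count) from the current directory
mpiexec -f machinefile -n 4 ./xhpl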
Turning Down Power Consumption – H3
Likely works for other ARM chips too.
https://forum.armbian.com/index.php/topic/1614-running-h3-boards-with-minimal-consumption/
Since I wondered why FriendlyARM chose just a 432MHz DRAM clock for their new NanoPi NEO (said to be an IoT node for lightweight stuff), and I also wondered how low consumption could be configured with an H3 device, I decided to simply try it out.
Since I have no NanoPi NEO lying around (and FriendlyARM seems not to ship developer samples) I used Orange Pi Lite instead. Same amount of DRAM (but dual bank configuration therefore somewhat faster), same voltage regulator but Wi-Fi instead of Ethernet. I adjusted the fex file to stay always on the lower VDD_CPUX voltage (1.1V), disabled all unnecessary stuff (Wi-Fi, HDMI/Mali400 and so on, please see modified fex settings), also adjusted /etc/defaults/cpufreq-utils to jump between 240-912MHz cpufreq and added the following to /etc/rc.local to make H3 as slow as an RPi Zero:
echo 0 >/sys/devices/system/cpu/cpu3/online
echo 0 >/sys/devices/system/cpu/cpu2/online
echo 0 >/sys/devices/system/cpu/cpu1/online
echo 408000 >/sys/devices/platform/sunxi-ddrfreq/devfreq/sunxi-ddrfreq/userspace/set_freq
(disabling 3 CPU cores and limiting DRAM clockspeed to 408 MHz — lowering DRAM clockspeed from 672 MHz down to 408 MHz is responsible for a whopping 200mW difference regarding consumption).
With this single core setup OPi Lite remains at 800mW when idling at 912MHz; when running a ‘sysbench --test=cpu --num-threads=1 --cpu-max-prime=20000 run’ consumption increases by 300mW (and H3 is still a bit faster at 912MHz compared to a RPi Zero at 900 MHz: 808 seconds vs. 930 seconds). Further reducing CPU clockspeed or disabling LEDs doesn’t help that much, or at least my powermeter isn’t that precise.
I find it already pretty nice to be able to limit consumption down to 160mA (800mW) by disabling 3 CPU cores (easy to bring back when needed!), downclocking DRAM and limiting VDD_CPUX voltage to 1.1V. That means that on H3 devices featuring the more flexible SY8106A voltage regulator even lower consumption values could be achieved since VDD_CPUX voltage could be lowered even more. And consumption might be reduced further by disabling more stuff. But that’s something someone else with a multimeter has to test since my equipment isn’t precise enough.
To sum it up: By simply tweaking software settings (most of them not even needing a reboot but accessible from user space) average idle consumption of an H3 device can be reduced from 1.5W (300mA) to almost half. In this mode (one single CPU core active at 912MHz and DRAM downclocked to 408MHz) an H3 device is still faster than an RPi Zero while providing way more IO and network bandwidth. And if settings are chosen wisely, performance can be increased a lot from userspace (transforming a single core H3 @ 912MHz into a quad-core H3 @ 1200/1296 MHz with faster DRAM, which translates to roughly 6 times the performance).
So, want a Pi to run on nearly no power? Looks like the higher end chip told to go slow gives more computes with less power suckage… Hmmm….
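And note the “easy to bring back when needed” bit: undoing the slow-down from userspace ought to just be the mirror image of the quoted commands (the 672 MHz figure is the DRAM clock mentioned above; the cpufreq-set line assumes cpufrequtils is installed and that the fex settings allow the higher clocks):

echo 1 >/sys/devices/system/cpu/cpu1/online
echo 1 >/sys/devices/system/cpu/cpu2/online
echo 1 >/sys/devices/system/cpu/cpu3/online
echo 672000 >/sys/devices/platform/sunxi-ddrfreq/devfreq/sunxi-ddrfreq/userspace/set_freq
cpufreq-set -g ondemand -u 1296MHz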
Orange Pi
Some links for where I found things about setting up the Orange Pi. I’ve sent it off to be a headless compute node for now, but have a heatsink kit “on the way” and will want to get it running ‘headful’ someday… “For that day”:
The Orange Pi One board is the most cost-effective development board available on the market today, so I decided to purchase one sample on Aliexpress to try out the firmware, which has not always been perfect, simply because Shenzhen Xunlong focuses on hardware design and manufacturing and spends little time on software development to keep costs low, so the latter mostly relies on the community. Armbian has become a popular operating system for Linux ARM platforms in recent months, so I’ve decided to write a getting started guide for Orange Pi One using a Debian Desktop image released by the armbian community.
Where to find out more about Armbian:
Though I’m thinking I might just use this as a reason to build Devuan from sources. I could use the practice of a ‘from scratch’ build, and it would assure I got just what I wanted on the card…
The forum that discusses varieties of OS available and has links to the scripts to ‘roll your own’:
http://www.orangepi.org/orangepibbsen/forum.php?mod=viewthread&tid=342
The “git hub” site it links to, for building a kernel yourself:
https://github.com/loboris/OrangePi-Kernel
Linux Kernel for OrangePI H3 boards
About
The repository contains Linux kernel sources (3.4.39) adapted for OrangePI H3 boards, gcc toolchain, adapted rootfs and building scripts.
Building
Kernel config files and the files specific to OPI board are placed in build directory.
The included build script build_linux_kernel.sh can be used to build the kernel
./build_linux_kernel.sh [clean | all | 2 | plus] [clean]
[…]
Yada yada cis boom bah!
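My reading of that usage line, for when I try it (these meanings are my guess from the option names alone; the ‘2’ and ‘plus’ options presumably pick board variants, but I haven’t checked):

./build_linux_kernel.sh clean       # presumably wipes the previous build output
./build_linux_kernel.sh all         # presumably builds the kernel for the H3 boards
./build_linux_kernel.sh all clean   # presumably builds, then tidies up afterward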
I’ve not figured out the recipe for user-land yet. But this might be related / it (though it looks like it is the head page for different boards):
https://github.com/loboris/OrangePi-BuildLinux/tree/master/orange
The Users Manual pdf for the ‘mini’ that is very similar:
https://www.manualslib.com/manual/965759/Orange-Pi-Mini.html
UFTA!
In the Peanuts cartoon, Charlie Brown would say “Good Grief” — you can buy a mug or T-shirt.
If this is “sort of” the meaning of UFTA!, then the original is Uff da. (Norwegian)
Spelling changes with national origin of speaker and location.
@EMSmith; Still trying to make that “Blue” Orange Pi go with Devuan? he he he
As it seems that Devuan is the flavor you think might serve best for this project, getting comfortable with its peculiarities would appear to be the best next step. Not sure that the Raspberry Pi-3B is the best hardware solution but it does have a versatile array of IO possibilities. The ability of the computers to cross-talk via WiFi might greatly reduce wiring demands for an extended system…pg
@John F.:
Yup! Norwegian / Swedish / Nordic…
@P.G.:
The Orange Pi goes, for some level of “go”. At present as a headless compute node that thermally limits on sustained loads. Once heatsink arrives and is applied, that throttle point will move further out. (Found an interesting page on that… for another posting).
I’ve settled on Devuan as the OS of Choice (until and unless something goes bump in the night and I’m pushed into a rethink…) and I’d rather have the same thing on all systems. As the other OSs on the OPi are Debian based, an easy path to Devuan ought to exist as with other Debians.
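If I take the easy path, my understanding of the usual Debian-to-Devuan switch (repo URL, suite name, and package names here are from memory, so treat them as needing verification) is roughly: point apt at the Devuan merged repository, pull in their keyring, then dist-upgrade and swap the init:

# added to /etc/apt/sources.list (suite should match the Debian base, e.g. jessie)
deb http://deb.devuan.org/merged jessie main
# then:
apt-get update
apt-get install devuan-keyring
apt-get dist-upgrade
apt-get install sysvinit-core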
But…
I also like building my own systems entirely from local source code. While the use of hashes and signatures has dramatically reduced opportunities for Man In The Middle attacks via the software update process, I still like the “no information leakage” on builds, and the no internet traffic profile / costs / time lag and … So one of my “someday soon” projects is to set up a Debian and Devuan source mirror on disk on site. Download once and you can build a dozen times locally with zero exposure. (Yes, for rolling release folks one ought to update it daily, but you are not forced to…)
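The local mirror part is the easy bit. Something along these lines with debmirror would likely do it (host, suite, architecture and paths below are placeholders for whatever I settle on; a second pass against a Devuan mirror host would cover the Devuan side):

# pull a local mirror of the armhf binaries plus source packages
debmirror --host=deb.debian.org --root=debian \
  --dist=jessie --section=main --arch=armhf --source \
  --method=http --progress \
  --keyring=/usr/share/keyrings/debian-archive-keyring.gpg \
  /srv/mirror/debian
# then point the boards at the local copy, e.g. in their sources.list:
# deb http://buildmaster.local/debian jessie main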
Then with a local build, you also get rid of questions about the folks doing the builds. Sure I trust them and sure they are known names… but not having them in the loop at all also has a benefit.
As I now have a Build Monster Cluster, it no longer takes days to compile, as it did for BSD, anyway. Kernel in 30 minutes, the rest likely in a couple of hours. Then I can also tune the compiler flags for the system built the way I want it. (Set the optimizer to do little for a quick compile of an idea, set it to max for a weekend-long build going into production… tune memory use for 512 MB systems vs 2000 MB, etc.) Remember that I was Build Master and QA Department in earlier employment, so this isn’t new to me…
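For the flavor of it, the sort of knobs I mean (flag values here are illustrative of the idea, not something I’ve benchmarked on this cluster):

# quick "does the idea even compile" pass, farmed out over distcc
make -j16 CC="distcc gcc" CFLAGS="-O0 -pipe"
# weekend production build, tuned for the Pi's Cortex-A53 cores
make -j16 CC="distcc gcc" CFLAGS="-O3 -mcpu=cortex-a53 -mfpu=neon-fp-armv8 -mfloat-abi=hard -pipe"
# (the distcc hosts list lives in ~/.distcc/hosts or $DISTCC_HOSTS)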
So while I’m doing other things, part of the brain is pondering what to do, in what order, on which gear, to make a right and proper Build System that builds Devuan from Debian and Devuan and Pi specific sources housed locally and customized by me.
Since the OPi is most in need of a decent OS build, it is the obvious first target… as the rest are a nicely running Build Monster and taking part of it down to change the OS and tools on it is, er, problematic if you then need to use it to make new OS and tools…
Yes, I could leave the OPi on Armbian and just use distcc on it, or run the climate model FORTRAN on it, but there is a non-zero risk of library incompatibilities and there is the difference in administrative procedures too. Plus the added risks of 2 sets of build people and 2 sets of MITM paths and 2 sets of source downloads and….
Besides, this is the part of the process that I do for fun ;-) Compile, configure and run the model is not so fun 8-{ so an hour a day on fun stuff lets the other 6 hours be tolerated more easily…
BTW: As of now the urge for smaller boards is ended on a compute cluster. Any smaller board size just needs a bigger heat sink… So RPi sized is my default until that changes. Also found some benchmarks of candidate OPI, NanoPI, and RPi boards. In general, the PiM3 wins. To do better needs more heat management (i.e. active fans and bigger heat sinks) and that puts you in the $80 board cost range of Odroids and such. At that point 3 x PiM3 even if heat throttled a little beats on total computes…
Now the ONE Big Question still open: How many threads and MPI jobs can a model be decomposed into and still gain? IF it is small-ish, you rapidly need Intel chips. IFF large, a larger number of ARM chips will be cheaper and need less power. I’ll be testing that after I get the first model to run.
I can make an Intel based “cluster” of about 6 cores with what I have on hand, I think, and that ought to be enough to get relative performance data and do the calculations. That Model E says it needs 88 cores to do extended full resolution runs implies strongly that it WILL use 88 cores effectively. That implies high parallel function possible for it. (At some number of cores it will top out… but unknown how many). The typical ratio of Intel to ARM cores I’ve experienced is about 4 : 1 to 10 : 1 depending on specifics of the codes (how threaded, how pipelined, how CISC vs RISC character, how…) so I can figure one quad core ARM Pi board is about the same as one nice 64 bit Intel chip core. That means my present cluster of 16 cores is well below 88 parallel limit and about the same as a quad core high end Intel… that is supposed to run Model II fine and Model E “ok” for lower res and longer run time simulations. I.e. fine for a test case. AND it makes a nice distcc build cluster for compiles in any case.
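The test itself is simple enough once a model binary exists: time the same short run at increasing rank counts and watch where the speedup flattens (model_run and its rank counts below are stand-ins for whichever model I get going first):

# strong-scaling check: same problem, more MPI ranks, note the wall-clock time
for n in 1 2 4 8 16; do
  /usr/bin/time -f "%e seconds with $n ranks" mpiexec -n $n ./model_run
done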
Sidebar: The Linux Kernel is characterized as painful to compile on a Pi Model 1 (or A or whatever it was called) at about 12 hours. The recommendation was to use an Intel box and get that down to 1/2 hour. Since I already get 1/2 hour kernel builds from the cluster, that validates a very good ratio ( 24 : 1 ) for Pi (original) vs Intel … and 4 : 1 for Pi (M2 / M3 mix) boards vs Intel chipset. So things are in about the right ranges for the estimates.
So “sometime” I get a model running and I run it on a nice Intel system (which has a dead DVD drive and annoying video driver limitations, but computes fine…) and compare that to the Pi Cluster. Using about 100 cores (25 boards) as a likely-to-work number for the parallel upper bound, I can then compute “time to complete a run” and “cost per cluster” for a 100 board Pi Cluster vs a 10 to 25 core set of Intel boxes… (so 2 to 4 high end multi-core PC boards or a larger number of Intel Atom SOC boards). At that point it is mostly just a simple arithmetic problem… $/compute-hour for each. Only real annoyance would be to discover it really wants 160 cores of Intel high end to be reasonable… at which time I’d have to look at de-rezing the model some… Standard High Performance Computing job profiling task… How big? How costly? What architecture? How can software cut those down?
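Just to pin down the shape of that arithmetic (every number below is a made-up placeholder; the real ones come out of the profiling above):

# $/run = (hardware cost amortized per hour + power cost per hour) x hours per run
# 26280 hours = about 3 years of continuous use; kw and rate are draw and $/kWh
awk -v cost=2500 -v life_hours=26280 -v kw=0.25 -v rate=0.15 -v run_hours=72 \
    'BEGIN { printf "%.2f dollars per run\n", (cost/life_hours + kw*rate) * run_hours }'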
But first I need to get the data, and that means getting one of the models to run, and that is not a load of fun… so sometimes I dabble with a cute little board for $15 to let the mind get back in sync and recharged for another round of “WTF Were They Thinking when they wrote that code?”… ;-)
You know the drill… don’t really want to ‘dispatch’ and pluck the old chicken, but collect eggs for an hour and you work your way up to it…
For those who have the affliction, tomato free sauce is good! But I can’t stand beets, so I will just “suffer” with regular sauce. ;-)
@Phil:
The article goes out of its way to point out there is no beet flavor in the sauce!
Here is an article I just saw that talks about how Google is bringing AI to the Raspberry Pi:
http://www.zdnet.com/article/google-is-bringing-ai-to-your-raspberry-pi/
“Face-recognition, speech-to-text translation, and sentiment analysis are among tools that could be coming to the Pi.”
Ah…that’s just what you were hankering for, isn’t it?
I know. I have actually had some before. But the thought…….
The mind over taste buds issue. ;-)