Got a little bit tired of having a stack of cores not doing anything. Now that it’s all configured as I like it, open a panel on each, and pop up an “htop” display. A nice load bar for each core. 16 of them just sitting there near or at zero…
Couldn’t stand the waste… so spent some time looking at how to set up code to run OpenCL on the GPUs. Found a recipe for doing it on the Odroid. It’s not easy, and filled with “issues”. Climate Models not ready to run yet (still digging through all the knobs to set…)
So I decided to go ahead and put “BOINC” on the cluster. At least it would be doing something worthwhile (sort of ;-) and I’d get my burn-in / acceptance test out of the way. Nothing like running a stack of cores at 100% for a few days to find out if it’s going to crash in the middle of a model run… or something else you care about.
Open-source software for volunteer computing
Use the idle time on your computer (Windows, Mac, Linux, or Android) to cure diseases, study global warming, discover pulsars, and do many other types of scientific research. It’s safe, secure, and easy:
For Android devices, get the BOINC app from the Google Play Store; for Kindle, get it from the Amazon App Store.
You can choose to support projects such as Einstein@Home, IBM World Community Grid, and SETI@home, among many others. If you run several projects, try an account manager such as GridRepublic or BAM! .
I’ve run SETI @ Home, on and off, since sometime in the 80s. (Still no aliens, though… dang it.) It was the first one of these distributed things to make a splash. Then they generalized the process and you can now use the BOINC framework to run any of many different projects.
Not all of them run on an ARM chip, though. Interesting to note that some of the PC-oriented ones will now also run on the GPUs in those boxes.
I just semi-randomly signed up for 3 projects: SETI @ Home, an Enigma crack (it seems someone has 3 old Enigma-encrypted messages from W.W.II that have never been decoded, so they are going for a crack of them), and an Asteroids program that is using known astro-data to fully describe all the asteroids they can (rotation etc.). At some point I ought to find out just what all works on the ARM chips and settle on those I think have the most benefit. For now it was just “what runs and looks at all fun?”.
Installing the application is trivial. It’s in the Debian build already. On the headless boards, do “apt-get install boinc-client”. On the master / headend station, do that and do “apt-get install boinc-manager”
But then there’s some configuration bits to do… I’ve bolded the bit that was annoying for me:
If you do only the basic installation as described above, BOINC manager will not be able to automatically connect to the client. To connect the client you will be required to give the GUI RPC password every time you start BOINC manager. That is not a bug, it is a security feature to prevent other users from using the manager to manipulate the client, changing your projects, etc. Another inconvenience is that boinc (the user named boinc) owns /var/lib/boinc-client/ and all the files and directories in it, so you will not be able to edit those files from your regular user account unless you add your username to the boinc group and adjust some permissions as follows, substituting your username for <username>:
Open /etc/group in a text editor.
Look for the line starting with boinc:x:<gid>:
Edit the line to look like boinc:x:<gid>:<username> (<gid> will be a number, do not change it)
Save the file and close the editor.
Open a terminal and enter the following commands, substituting your username for <username>:
sudo ln -s /etc/boinc-client/gui_rpc_auth.cfg /home/<username>/gui_rpc_auth.cfg
sudo ln -s /etc/boinc-client/gui_rpc_auth.cfg /var/lib/boinc-client/gui_rpc_auth.cfg
sudo chown boinc:boinc /home/<username>/gui_rpc_auth.cfg
sudo chown boinc:boinc /var/lib/boinc-client/gui_rpc_auth.cfg
sudo chmod g+rw /var/lib/boinc-client
sudo chmod g+rw /var/lib/boinc-client/*.*
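Concretely, those group-file steps just append your username after the last colon of the boinc line. Here’s a sketch on a scratch copy (the group id 115 and the username “pi” are made-up examples; the real file is /etc/group, and “sudo usermod -aG boinc pi” does the same edit in one command):

```shell
# Scratch copy standing in for /etc/group -- do NOT run sed on the real
# file without a backup.  Group id 115 and user "pi" are example values.
GROUPFILE=$(mktemp)
echo 'boinc:x:115:' > "$GROUPFILE"            # the line before the edit

# Append the username after the final colon (the hand edit from above):
sed -i 's/^boinc:x:115:$/boinc:x:115:pi/' "$GROUPFILE"

grep '^boinc:' "$GROUPFILE"                   # prints: boinc:x:115:pi
```

After the real edit, log out and back in (or run “newgrp boinc”) so the new group membership takes effect.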
So I read that, and proceeded to run the listed commands by rote, but missed the importance of it. There are files in that directory that must be changed.
The other “quirk” was that BOINC manager lets you change which machine you are managing, with a dropdown menu choice for changing computers, but then it prompts for “computer name” and “password”. Whose password? On what machine?
So I tried my login password. Nothing. I tried the boinc account. I changed the boinc account password (as I’d never set it, so what was it?) and still no go. Eventually I found out that it has its own ‘special’ password. It also has a magic file where you must put the IP number of any managing workstation on each of the client headless boards.
Access control for GUI RPC
GUI RPCs are divided into two categories:
Status operations which return information about tasks, project, etc.
Control operations which change the state of BOINC (suspend/resume, add project, etc.).
Some GUI RPCs are authenticated with a GUI RPC password. This is stored in the file gui_rpc_auth.cfg in the BOINC data directory. On a multiuser computer, this should be protected against access by other users. When BOINC client first runs, it generates a random password. You can change it if you like; max length is 255 characters.
A “local” RPC is one that comes from the computer where the BOINC client is running (but perhaps from a different logged-in user).
Local status RPCs are not authenticated. On a multiuser computer, a user can see the status of any other user’s BOINC client.
Local control RPCs are authenticated using the GUI RPC password.
A “remote” RPC is one that comes from a different computer.
All remote RPCs (both status and control) are authenticated using the GUI RPC password.
By default, remote RPCs are not accepted from any host. To specify a set of hosts from which RPCs are allowed, create a file remote_hosts.cfg in your BOINC data directory containing a list of allowed DNS host names or IP addresses (one per line). Only these hosts will be able to connect. The remote_hosts.cfg file can also have comment lines that start with either a # or a ; character.
Now despite what it said, there was no default password in that .cfg file. So in fact, I had to just not put ANY password into the prompt to connect to another node. Just the computer name. HOWEVER, until the IP number of the management station was put in the remote_hosts.cfg file on the headless nodes, it would not allow the connection…
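For anyone else fighting this: the remote_hosts.cfg format is just one host per line, with # or ; comment lines. A sketch of what goes on each headless node (using a scratch directory here; on a real Debian node the file lives in /var/lib/boinc-client, and 192.168.10.2 is a stand-in for your management station’s IP):

```shell
# Scratch stand-in for the BOINC data directory (/var/lib/boinc-client
# on a real Debian node); the IP address is an example for my LAN.
BOINC_DIR=$(mktemp -d)

cat > "$BOINC_DIR/remote_hosts.cfg" <<'EOF'
# management workstation (head-end)
192.168.10.2
EOF

cat "$BOINC_DIR/remote_hosts.cfg"
# On the real node, restart the client so it re-reads the file:
#   sudo service boinc-client restart
```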
Once It Works
Then you get to add “projects”. These want an email account and a new password for the project login on the web page. Then you get the software for that project loaded and it starts.
The Management Station lets you allocate how many cores, and what percentage of CPU, and what times of day, and… lots of other controls. The tasks for a given project run niced to very low priority, so generally get out of the way of other use; but on a Pi if it is also your desktop, you will want to limit the use to 50% or 75% of cores, just so you don’t have to wait for a swap when starting to do something.
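As far as I can tell, that core limit can also be set without the manager, via a local preferences override file; the max_ncpus_pct element caps the share of cores BOINC will use. A sketch, using a scratch directory (on a real Debian node the file would be /var/lib/boinc-client/global_prefs_override.xml):

```shell
# Sketch: cap BOINC at 75% of the cores via the local preferences
# override.  A scratch directory stands in for /var/lib/boinc-client.
PREFS_DIR=$(mktemp -d)

cat > "$PREFS_DIR/global_prefs_override.xml" <<'EOF'
<global_preferences>
   <max_ncpus_pct>75.0</max_ncpus_pct>
</global_preferences>
EOF

cat "$PREFS_DIR/global_prefs_override.xml"
# On a real node, tell the running client to pick up the change:
#   boinccmd --read_global_prefs_override
```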
So as of now, I’ve got 16 cores running full boogie on BOINC. I’ll be leaving the cluster running this way for a day or two, then assess things like stability and core temperatures. Also actual work done.
For now I’m just happy to see all the CPU load bars up there at near 100% showing something is being done ;-)