Lego Pi Parallel SuperComputing

This one has been around a month or two and I’m just now discovering it.

Looks like there are still some creative folks in the world. At a UK university, they ganged together 64 Raspberry Pi boards into a parallel processing “supercomputer”. (Technically it isn’t one, as production supercomputers are far higher in performance now, but it is easily on the scale of what counted as a supercomputer a decade or two back, and it can be scaled to any number of processors, so it’s a ‘supercomputer architecture’ just waiting for money to be applied… and not very much money at that…)

At any rate, as I’m fond of parallel computing, and have built a personal Beowulf Cluster for fun and production, it’s a step I’d not expected so soon. Yet, there it is. Now I’m looking at my old cluster parts (some old “white boxes” mostly in the garage to free up office space) and thinking “I could fit more than that in a cigar box”… At present I only have two RPi boards, but I can see more are in my future. One is already acting as a Torrent Server, and I could easily use another as a dedicated file server, and I’d like a local DNS caching server and maybe email server and…

I will likely “strap together” the two I’ve got as a “cluster of two” just for the experience. The build steps also install a FORTRAN compiler… GIStemp uses FORTRAN… So I could likely clean out the full-sized desktop box that has GIStemp on it (it lives in a small part of a 10 GB disk, so it would easily fit on one SD card). Yes, it would require a re-porting of GIStemp, but as that release is about 4 years out of date, I probably ought to do it anyway. (Gritting my teeth at that… but having “GIStemp on an SD card” would be an interesting “product” to offer…) OTOH, it would make for a fun posting. GIStemp on a postage-stamp-sized card, with the whole thing in the palm of a hand as the photo… “Fate of the world economy fits in palm of hand” ;-)

But “back to the present”:

University Of Southampton

I love the creativity that uses Lego blocks to make the ‘holder’ for the cards… Here’s their site, and a fun video:

http://www.southampton.ac.uk/~sjc/raspberrypi/

U. of Southampton Raspberry Pi Lego Supercomputer

In the background you can see the stack, in the foreground is one module that gets stacked.

They have a PDF of the steps to do it:

http://www.southampton.ac.uk/~sjc/raspberrypi/pi_supercomputer_southampton_web.pdf

Not too hard and nothing I’ve not done before. Though they don’t have enough detail on the Legos ;-) and I need to find out if the spouse pitched them when our kids grew up or if they are in the garage somewhere ;-)

They have an updated HTML page on “how to” that has much more detail on the Legos ;-) and has how to set up SSH (nice, that, as it’s next on my ‘todo’ list…) along with some handy scripts and several very nice pictures:

http://www.southampton.ac.uk/~sjc/raspberrypi/pi_supercomputer_southampton.htm

Lego R.Pi Cluster Computer, 64 nodes

Kickstarter Project with More Mojo

http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone

The Parallella Computing Platform

To make parallel computing ubiquitous, developers need access to a platform that is affordable, open, and easy to use. The goal of the Parallella project is to provide such a platform! The Parallella platform will be built on the following principles:

Open Access: Absolutely no NDAs or special access needed! All architecture and SDK documents will be published on the web as soon as the Kickstarter project is funded.
Open Source: The Parallella platform will be based on free open source development tools and libraries. All board design files will be provided as open source once the Parallella boards are released.
Affordability: Hardware costs and SDK costs have always been a huge barrier to entry for developers looking to develop high performance applications. Our goal is to bring the Parallella high performance computer cost below $100, making it an affordable platform for all.

The Parallella platform is based on the Epiphany multicore chips developed by Adapteva over the last 4 years and field tested since May 2011. The Epiphany chip consists of a scalable array of simple RISC processors, programmable in C/C++, connected together by a fast on-chip network within a single shared-memory architecture.

Here is a link to the Epiphany Architecture Reference Manual
[...]
Once completed, the 64-core version of the Parallella computer would deliver over 90 GFLOPS of performance and would have horsepower comparable to a theoretical 45 GHz CPU [64 CPU cores * 700 MHz] on a board the size of a credit card, while consuming only 5 Watts under typical workloads.
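A quick sanity check on those quoted numbers, in Python (my own back-of-the-envelope arithmetic, not from the Kickstarter page):

cores = 64
clock_ghz = 0.7                                       # 700 MHz per Epiphany core
aggregate_ghz = cores * clock_ghz                     # 44.8, rounded to ~45 GHz in the quote
gflops = 90.0                                         # quoted peak performance
flops_per_core_cycle = gflops / (cores * clock_ghz)   # about 2 FLOPs per core per cycle
print(aggregate_ghz, round(flops_per_core_cycle, 1))  # prints: 44.8 2.0

So the “45 GHz” is just cores times clock, and the 90 GFLOPS figure works out to roughly two floating point operations per core per cycle.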

But it costs an order of magnitude more than a Raspberry Pi… which sounds really big compared to $250 ;-)

End Note

With cheap multicore RISC chips, we’re headed to a whole new level of compute power for not much money. Looks like fun, too.

For any heavily compute-intensive task, parallel computing is going to make a difference; things like video and image processing especially, and robotics and such. For large compiles of complex systems, like Linux, ‘distributed make’ can speed things up considerably. (I did that at one company about a decade back when I was ‘build master’ for about a year.) Besides, it’s really fun to play with ;-)

One of the more interesting uses of “clusters” is for security via distribution. There are many ‘distributed file systems’, including one that requires a ‘quorum’ to open it. Developed in Italy, it looks like something the Mafia asked for ;-) Any one person (or up to a settable number of them) can be compromised and divulge their password; until you get a ‘quorum’ of them, you get nothing… The blocks are spread over many systems all over the place, so take any one system (or several) and you get nothing. It is RAID structured, so kill a system (or take it) and it rebuilds the missing parts. I’m sure you can see why this is of benefit.
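To make the ‘quorum’ idea concrete: it is essentially threshold secret sharing. Here is a toy Python sketch of a k-of-n split (my own illustration with made-up parameters, nothing to do with that Italian product, and not something to use for real security). It assumes Python 3.8+ for the modular inverse:

import random

P = 2**127 - 1  # a large prime; all arithmetic is done in the field mod P

def make_shares(secret, k, n):
    # Random polynomial of degree k-1 whose constant term is the secret
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret)
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P  # pow(den, -1, P) needs Python 3.8+
    return secret

shares = make_shares(123456789, k=3, n=5)  # 5 nodes hold shares; any 3 can reconstruct
print(recover(shares[:3]))                 # 123456789
print(recover(shares[2:]))                 # 123456789 again, from a different quorum

Any two shares on their own reveal nothing about the secret; spread the shares (and the RAID-style data blocks) across nodes and you get the “compromise a few, learn nothing” property described above.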

So one of my “someday” projects is to make such a cluster as a distributed compute and file server. Then spread the parts around via an onion-style routing system and you have a non-stop, non-compromisable compute and file system. The only question / issue is the performance level over multi-hop routes… (One could use a VPN instead, at the risk of contact tracing.)

But it all starts with making a parallel cluster that uses distributed processing…

Back at the Lego RPi:

Not as pretty here, but I’m snagging a copy of the Lego RPi HTML text for my future use…

Steps to make Raspberry Pi Supercomputer

Prof Simon Cox

Computational Engineering and Design Research Group

Faculty of Engineering and the Environment

University of Southampton, SO17 1BJ, UK.

V0.2: 8th September 2012

V0.3: 30th November 2012 [Updated with less direct linking to MPICH2 downloads]

V0.4: 9th January 2013 [Updated step 33]

First steps to get machine up

1. Get image from

http://www.raspberrypi.org/downloads

I originally used: 2012-08-16-wheezy-raspbian.zip

Updated 30/11/12: 2012-10-28-wheezy-raspbian.zip

My advice is to check the downloads page on raspberrypi.org and use the latest version.

2. Use win32 disk imager to put image onto an SD Card (or on a Mac e.g. Disk Utility/ dd)

http://www.softpedia.com/get/CD-DVD-Tools/Data-CD-DVD-Burning/Win32-Disk-Imager.shtml

You will use the “Write” option to put the image from the disk to your card

3. Boot on Pi

4. Expand image to fill card using the option on screen when you first boot. If you don’t do this on first boot, then you need to use

$ sudo raspi-config

http://elinux.org/RPi_raspi-config

5. Log in and change the password

http://www.simonthepiman.com/beginners_guide_change_my_default_password.php

$ passwd

6. Log out and check that you typed it all OK (!)

$ exit

7. Log back in again with your new password

Building MPI so we can run code on multiple nodes

8. Refresh your list of packages in your cache

$ sudo apt-get update

9. Just doing this out of habit; note we are not doing anything more than refreshing the package list (an upgrade would be via “sudo apt-get upgrade”).

10. Get Fortran… after all what is scientific programming without Fortran being a possibility?

$ sudo apt-get install gfortran

11. Read about MPI on the Pi. This is an excellent post to read just to show you are going to make it by the end, but don’t type or get anything just yet; we are going to build everything ourselves:

http://westcoastlabs.blogspot.co.uk/2012/06/parallel-processing-on-pi-bramble.html

There are a few things to note here:

a) Since we put Fortran in we are good to go without excluding anything

b) The packages here are for armel and we need armhf in this case… so we are going to build MPI ourselves

12. Read a bit more before you begin:

http://www.mcs.anl.gov/research/projects/mpich2/documentation/files/mpich2-1.4.1-installguide.pdf

Note: As the version of MPICH2 updates, you are better to go to:

http://www.mpich.org/documentation/guides/

and get the latest installer’s Guide.

We are going to follow the steps from 2.2 (from the Quick Start Section) in the guide.

13. Make a directory to put the sources in

$ mkdir /home/pi/mpich2

$ cd ~/mpich2

14. Get MPI sources from Argonne.

$ wget http://www.mcs.anl.gov/research/projects/mpich2/downloads/tarballs/1.4.1p1/mpich2-1.4.1p1.tar.gz

[Note that as the MPI source updates, you can navigate to:

http://www.mpich.org/downloads/ to get the latest stable release version for MPICH2]

15. Unpack them.

$ tar xfz mpich2-1.4.1p1.tar.gz

[Note: You will need to update this as the version of MPICH2 increments]

16. Make yourself a place to put the compiled stuff – this will also make it easier to figure out what you have newly put on your system. Also you may end up building this a few times…

$ sudo mkdir /home/rpimpi/

$ sudo mkdir /home/rpimpi/mpich2-install

[I just chose the “rpimpi” to replace the “you” in the Argonne guide and I did the directory creation in two steps]

17. Make a build directory (so we keep the source directory clean of build things)

$ mkdir /home/pi/mpich_build

18. Change to the BUILD directory

$ cd /home/pi/mpich_build

19. Now we are going to configure the build

$ sudo /home/pi/mpich2/mpich2-1.4.1p1/configure --prefix=/home/rpimpi/mpich2-install

[Note: You will need to update this as the version of MPICH2 increments]

Make a cup of tea

20. Make the files

$ sudo make

Make another cup of tea

21. Install the files

$ sudo make install

Make another cup of tea – it will finish…

22. Add the place that you put the install to your path

$ export PATH=$PATH:/home/rpimpi/mpich2-install/bin

Note to permanently put this on the path you will need to edit .profile

$ nano ~/.profile

… and add at the bottom these two lines:

# Add MPI to path

PATH="$PATH:/home/rpimpi/mpich2-install/bin"

23. Check whether things did install or not

$ which mpicc

$ which mpiexec

24. Change directory back to home and create somewhere to do your tests

$ cd ~

$ mkdir mpi_testing

$ cd mpi_testing

25. Now we can test whether MPI works for you on a single node

$ mpiexec -f machinefile -n 1 hostname

where machinefile contains a list of IP addresses (in this case just one) for the machines

a) Get your IP address

$ ifconfig

b) Put this into a single file called machinefile

26. $ nano machinefile

Add this line:

192.168.1.161

[or whatever your IP address was]

27. If you use

$ mpiexec -f machinefile -n 1 hostname

Output is:

raspberrypi

28. Now to run a little C code. In the examples subdirectory of where you built MPI is the famous CPI example. You will now use MPI on your Pi to calculate Pi:

$ cd /home/pi/mpi_testing

$ mpiexec -f machinefile -n 2 ~/mpich_build/examples/cpi

Output is

Process 0 of 2 is on raspberrypi

Process 1 of 2 is on raspberrypi

pi is approximately 3.1415926544231318, Error is 0.0000000008333387

Celebrate if you get this far.
Flash me… once

29. We now have a master copy of the main node of the machine with all of the installed files for MPI in a single place. We now want to clone this card.

30. Shutdown your Pi very carefully

$ sudo poweroff

Remove the SD Card and pop it back into your SD Card writer on your PC/ other device. Use Win32 disk imager (or on a Mac e.g. Disk Utility/ dd) to put the image FROM your SD Card back TO your PC:

http://www.softpedia.com/get/CD-DVD-Tools/Data-CD-DVD-Burning/Win32-Disk-Imager.shtml

You will use the “Read” option to read the image from your card into an image file on your PC

Let us call the image “2012-08-16-wheezy-raspbian_backup_mpi_master.img”

31. Eject the card and put a fresh card into your PC/other device. Use win32 disk imager to put image onto an SD Card (or on a Mac e.g. Disk Utility/ dd)

http://www.softpedia.com/get/CD-DVD-Tools/Data-CD-DVD-Burning/Win32-Disk-Imager.shtml

You will use the “Write” option to put the image from the disk to your card and choose the “2012-08-16-wheezy-raspbian_backup_mpi_master.img” image you just created.

[Note that there are probably more efficient ways of doing this – in particular maybe avoid expanding the filesystem in step 4 of the first section.]

32. Put the card into your second Pi and boot this. You should now have two Raspberry Pis on. Unless otherwise stated, all the commands below are typed from the Master Pi that you built first.
Using SSH instead of password login between the Pis

33. Sort out RSA to allow quick log in. This is the best thing to read:

http://steve.dynedge.co.uk/2012/05/30/logging-into-a-rasberry-pi-using-publicprivate-keys/

In summary (working on the MASTER Pi node)

$ cd ~

$ ssh-keygen -t rsa -C "raspberrypi@raspberrypi"

This set a default location of /home/pi/.ssh/id_rsa to store the key

Enter a passphrase e.g. “myfirstpicluster”. If you leave this blank (not such good security) then no further typing of passphrases is needed.

$ cat ~/.ssh/id_rsa.pub | ssh pi@192.168.1.162 "mkdir .ssh;cat >> .ssh/authorized_keys"

34. If you now log into your other Pi and do

$ ls -al ~/.ssh

You should see a file called “authorized_keys” – this is your ticket to ‘no login heaven’ on the nodes

35. Now let us add the new Pi to the machinefile. (Log into it and get its IP address, as above)

Working on the Master Raspberry Pi (the first one you built):

$ nano machinefile

Make it read

192.168.1.161

192.168.1.162

[or whatever the two IP addresses you have for the machines are]

36. Now to run a little C code again. In the examples subdirectory of where you built MPI is the famous CPI example. First time you will need to enter the passphrase for the key you generated above (unless you left it blank) and also the password for the second Pi.

$ cd /home/pi/mpi_testing

$ mpiexec -f machinefile -n 2 ~/mpich_build/examples/cpi

Output is

Process 0 of 2 is on raspberrypi

Process 1 of 2 is on raspberrypi

pi is approximately 3.1415926544231318, Error is 0.0000000008333387

If you repeat this a second time you won’t need to type any passwords in. Hurray.

Note that we have NOT changed the hostnames yet (so yes, the above IS running on the two machines, but they both have the same hostname at the moment).

37. If you change the hostname on your second machine (see Appendix 1 “Hostname Script”) and run:

$ mpiexec -f machinefile -n 2 ~/mpich_build/examples/cpi

Output:

Process 0 of 2 is on raspberrypi

Process 1 of 2 is on iridispi002

Now you can see each process running on the separate nodes.

CONGRATULATIONS – YOU HAVE NOW FINISHED BUILDING A 2-NODE SUPERCOMPUTER

IF YOU FOLLOW THE STEPS BELOW, YOU CAN EXPAND THIS TO 64 (or more) nodes
Acknowledgements

Thanks to all of the authors of the posts linked to in this guide and Nico Maas. Thanks to the team in the lab: Richard Boardman, Steven Johnston, Gereon Kaiping, Neil O’Brien, and Mark Scott. Also to Oz Parchment and Andy Everett (iSolutions). Thanks to Pavittar Bassi in Finance, who made all the orders for equipment happen so efficiently. And, of course, Professor Cox’s son James who provided specialist support on Lego and system testing.
Appendix 1 – Scripts and other things to do
Flash me… one more time (rinse and repeat for each additional node)

1. Power off the worker Pi and eject the card

$ sudo poweroff

We now have a copy of a WORKER node of the machine with all of the installed files for MPI in a single place. We now want to clone this card, as it has the ssh key on it in the right place.

2. Remove the SD Card and pop it back into your SD Card writer on your PC/ other device. Use Win32 disk imager (or on a Mac e.g. Disk Utility/ dd) to put the image FROM your SD Card back to your PC:

http://www.softpedia.com/get/CD-DVD-Tools/Data-CD-DVD-Burning/Win32-Disk-Imager.shtml

You will use the “Read” option to read the image from your card into an image file on your PC

Let us call the image “2012-08-16-wheezy-raspbian_backup_mpi_worker.img”

3. Eject the card and put a fresh card into the machine. Use win32 disk imager to put image onto an SD Card (or on a Mac e.g. Disk Utility/ dd)

http://www.softpedia.com/get/CD-DVD-Tools/Data-CD-DVD-Burning/Win32-Disk-Imager.shtml

You will use the “Write” option to put the image from the disk to your card and choose the “2012-08-16-wheezy-raspbian_backup_mpi_worker.img” image you just created.

[Note that there are probably more efficient ways of doing this – in particular maybe avoid expanding the filesystem in step 4 of the first section.]
Hostname Script

If you want to rename each machine, you can do it from the Master node using:

ssh pi@192.168.1.162 'sudo echo "iridispi002" | sudo tee /etc/hostname'

ssh pi@192.168.1.163 'sudo echo "iridispi003" | sudo tee /etc/hostname'

ssh pi@192.168.1.164 'sudo echo "iridispi004" | sudo tee /etc/hostname'

etc.

You should then reboot each worker node

If you re-run step (36) above, you will get:

$ mpiexec -f machinefile -n 2 ~/mpich_build/examples/cpi

Output:

Process 0 of 2 is on raspberrypi

Process 1 of 2 is on iridispi002

pi is approximately 3.1415926544231318, Error is 0.0000000008333387

This shows the master node still called raspberrypi and the first worker called iridispi002

Using Python

There are various Python bindings for MPI. This guide just aims to show how to get ONE of them working.

1. Let us use mpi4py. More info at

http://mpi4py.scipy.org/

http://mpi4py.scipy.org/docs/usrman/index.html

$ sudo apt-get install python-mpi4py
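For orientation (this bit is my own note, not part of the Southampton guide): a minimal mpi4py script looks something like the sketch below. The file name hello_mpi.py is made up; the bundled demo fetched in the next step is what the guide actually runs.

# hello_mpi.py - minimal mpi4py sketch (illustrative; save under any name you like)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()           # this process's number, 0..size-1
size = comm.Get_size()           # total number of MPI processes
name = MPI.Get_processor_name()  # the hostname, e.g. raspberrypi or iridispi002
print("Hello, World! I am process %d of %d on %s." % (rank, size, name))

Run it the same way as the demo below (mpiexec/mpirun with your machinefile) and you should see one line per process.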

2. We also want to run the demo so let us get the source too

$ cd ~

$ mkdir mpi4py

$ cd mpi4py

$ wget http://mpi4py.googlecode.com/files/mpi4py-1.3.tar.gz

$ tar xfz mpi4py-1.3.tar.gz

$ cd mpi4py-1.3/demo

3. Repeat steps 1 and 2 on each of your other nodes (we did not bake this into the system image)

4. Run an example (on your master node)

$ mpirun.openmpi -np 2 -machinefile /home/pi/mpi_testing/machinefile python helloworld.py

Output is:

Hello, World! I am process 0 of 2 on raspberrypi.

Hello, World! I am process 1 of 2 on iridispi002.

5. $ mpiexec.openmpi -n 4 -machinefile /home/pi/mpi_testing/machinefile python helloworld.py

Output is:

Hello, World! I am process 2 of 4 on raspberrypi.

Hello, World! I am process 3 of 4 on iridispi002.

Hello, World! I am process 1 of 4 on iridispi002.

Hello, World! I am process 0 of 4 on raspberrypi.
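As an aside (my own sketch, not from the guide or the mpi4py demos): the same pattern can reproduce the cpi example in Python. Each rank integrates part of 4/(1+x^2) over [0,1] and a reduce sums the pieces on rank 0.

# pi_mpi.py - rough Python analogue of the C cpi example (illustrative sketch only)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 100000                      # number of intervals in the midpoint rule
h = 1.0 / n
local_sum = 0.0
for i in range(rank, n, size):  # each rank takes every size-th interval
    x = h * (i + 0.5)
    local_sum += 4.0 / (1.0 + x * x)

pi = comm.reduce(local_sum * h, op=MPI.SUM, root=0)  # sum the partial results on rank 0
if rank == 0:
    print("pi is approximately %.16f" % pi)

Run it with something like: mpiexec.openmpi -n 2 -machinefile /home/pi/mpi_testing/machinefile python pi_mpi.py (the script name is mine; adjust paths to taste).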

6. These are handy to remove things if your attempts to get mpi4py don’t quite pan out

$ sudo apt-get remove python-mpi4py

$ sudo apt-get autoremove
Keygen script commands

cat ~/.ssh/id_rsa.pub | ssh pi@192.168.1.161 “cat >> .ssh/authorized_keys”

cat ~/.ssh/id_rsa.pub | ssh pi@192.168.1.162 “cat >> .ssh/authorized_keys”

cat ~/.ssh/id_rsa.pub | ssh pi@192.168.1.163 “cat >> .ssh/authorized_keys”

etc., for sending the key out to each node again if you want to repeat this after generating a new key
Getting Pip for Raspberry Pi

1. We can install Pip, which gives us a nice way to set up Python packages (and uninstall them too). More info is at

http://www.pip-installer.org/en/latest/index.html

http://www.pip-installer.org/en/latest/installing.html

$ cd ~

$ mkdir pip_testing

$ cd pip_testing

2. A prerequisite for pip is “distribute” so let’s get that first and then install pip. The sudo is because the installation of these has to run as root.

$ curl http://python-distribute.org/distribute_setup.py | sudo python

$ curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | sudo python
Notes on making MPI Shared Libraries for Raspberry Pi

MPI libraries can also be built “shared” so that they can be dynamically loaded. This gives library files that end in “.so” rather than “.a”, and we can do that by building the MPI libraries again. This is a repeat of the steps above, but written out again using the suffix “_shared” on the directory names.

1. Make a directory to put the sources in

$ mkdir /home/pi/mpich2_shared

$ cd ~/mpich2_shared

2. Get MPI sources from Argonne.

$ wget http://www.mcs.anl.gov/research/projects/mpich2/downloads/tarballs/1.4.1p1/mpich2-1.4.1p1.tar.gz

[Note that as the MPI source updates, you can navigate to:

http://www.mpich.org/downloads/ to get the latest stable release version]

3. Unpack them.

$ tar xfz mpich2-1.4.1p1.tar.gz

[Note: You will need to update this as the version of MPICH2 increments]

4. Make yourself a place to put the compiled stuff – this will also make it easier to figure out what you have newly put on your system.

$ sudo mkdir /home/rpimpi_shared/

$ sudo mkdir /home/rpimpi_shared/mpich2-install_shared

[I just chose the “rpimpi_shared” to replace the “you” in the Argonne guide and I made the directory creation in two steps]

5. Make a build directory (so we keep the source directory clean of build things)

$ mkdir /home/pi/mpich_build_shared

6. Change to the BUILD directory

$ cd /home/pi/mpich_build_shared

7. Now we are going to configure the build

$ sudo /home/pi/mpich2_shared/mpich2-1.4.1p1/configure --prefix=/home/rpimpi_shared/mpich2-install_shared --enable-shared

[Note: You will need to update this as the version of MPICH2 increments]

8. Make the files

$ sudo make

9. Install the files

$ sudo make install

10. Finally add the place that you put the install to your path

$ export PATH=$PATH:/home/rpimpi_shared/mpich2-install_shared/bin

Note to permanently put this on the path you will need to edit .profile

$ nano ~/.profile

… and add at the bottom these two lines:

# Add MPI Shared to path

PATH="$PATH:/home/rpimpi_shared/mpich2-install_shared/bin"


14 Responses to Lego Pi Parallel SuperComputing

  1. PhilJourdan says:

    I had read about the Pi computer, but not the lego one! Damn! I am going to have to talk to Network world about keeping up!

    I love legos and computers – so needless to say, that is my all time favorite computer picture.

  2. E.M.Smith says:

    @PhilJourdan:

    Since the Legos are just added to the regular Raspberry Pi board, you can make them any color you like… There’s also room for “creativity”… so you could put Lego “People” on the “roof”, or add some of the specialized parts like doors or windows to the sides… ;-)

    You ought to look at their HTML “update” link as it has a half dozen pictures, including the “lego in progress” steps.

  3. PhilJourdan says:

    @E.M. – Of course! also attach wheels to fans. The possibilities are endless! And part of the reason I love Legos!

  4. George B says:

    A Pi cluster running Mosix might be fun. What Mosix does is assigns various processes to other nodes in the cluster and keeps track of them. Say, for example, you want to start Apache and have it configured for 10,000 child processes. Mosix will begin to farm out the children to various nodes and keep track of the load on those nodes and constantly shift processes around in order to keep the CPU load balanced on the various nodes of the cluster. With something like the Pi, it becomes nearly an infinitely scalable processor that depends really only on the bandwidth interconnecting the nodes.

    http://www.mosix.org/txt_about.html

    Unlike “load balancing”, Mosix turns the processors into one huge meta processor.

  5. George B says:

    Hmm, Massively Parallel Pi … + Mosix … + Bitcoin … interesting

  6. Brian says:

    I love the idea and it’s a great student project. However, I can’t justify spending over $4,000 to create a cluster with (optimistically) about 1/4 the double-precision FLOPs of either the latest Intel Xeon “Phi” or NVidia’s “Tesla” GPU coprocessor boards. Both of these solutions are pushing 1 TFLOP of true double-precision calculations for between $2,800 and $3,500. That’s pretty incredible power packed into a full-length, 16-channel PCIe 3.0 card.

  7. DirkH says:

    Re Python: it was always a very slow language. Now the interesting thing is that Javascript has the same powerful semantics, and it was also very slow until 2008, when Apple, Google, Mozilla and a few others started creating just-in-time compilers.

    There’s a list of Javascript / Ecmascript engines on the wikipedia.

    So the idea is switch from Python to Javascript, enjoy the powerful semantics AND up to 50% of raw compiled C++ speed.

    Terms to look for:
    Clang (C++ parser frontend)
    LLVM (compiler backend)
    Emscripten (Software that turns Clang/LLVM output “bitcode” into a fast executing subset of Javascript)
    ASM.JS (A compiler for said subset in Mozilla nightly builds that will deliver said performance)

    There are several more or less powerful converters from Python to javascript.

    Using said toolchain, people have created a version of the Unreal FPS engine and other demos to run in the browser. Graphics output is done via HTML5 / WEBGL which is supported by Firefox.

    More here:

    http://badassjs.com/post/47031840270/link-excellent-article-clarifying-mozillas-asm-js

    LLVM is a very modern compiler backend; I don’t know if they already support Raspberry’s or Parallela’s but I would look out for that. The old GCC backend is a PITA; LLVM was originally written to replace it and has due to apple’s support gained an even bigger role in the iOS/IPhone space.

    Many language- and platform-transgressing possibilities there.

  8. crosspatch says:

    If you are looking for an awesome and fairly easy to master language, try “go” from Google.

  9. Paul Hanlon says:

    Excellent article, E.M.

    Hehe, when I saw the headline I immediately knew you were talking about the University of Southampton computer, which as you say has been on the RasPi site a couple of months. So I thought I would add something new by linking to the Parallela board, but you got there before me on that too:-).

    At the moment, there seems to be a holy war going on between the chip manufacturers, so we’re going to see more improvements going forward in this arena. ARM have the A15 4 core chip out with an A50 series of chips using 16nm tech on the way at a price point around $20-30.

    Only last year I built myself a “supercomputer” (I use the word liberally here) with two dual graphics cards and 32GB RAM plus SSD (15 second bootup time), and already it is made obsolete by the things that are coming out now.

    @DirkH
    I thought that too about Python, but I saw a benchmark between them a few years ago and Python came out ahead of both PHP and Javascript (to my chagrin). There’s been significant advances with Javascript especially with Google’s V8 engine, but it’s a tossup as to whether that has bridged the gap.

    ASM.js looks like a nice library, but it will be years before you are able to use Javascript as we know it today, and have it converted into Assembler. At the moment, you can only use arrays and numbers, and they must be typed. No strings or objects, in fact the code looks more like Assembler with Javascript constructs rather than the other way round, something like High Level Assembler.

    In the meantime you have Cython, which takes Python and converts it directly into C. You’ve also got significantly better scientific and other libraries for Python than Javascript. With the amount of money being invested in Javascript by all the big players that will change, but I wouldn’t be writing off Python just yet.

  10. E.M.Smith says:

    @Brian:

    Yes there’s always a race between a load of cheap parts and one big expensive part. Who wins keeps changing. See that multicore chip at the end of the article. I think it’s the lowest cost / most bang!

    @All:

    Per languages: It’s always a holy war… Personally, I’ll take the ‘reasonably fast and reasonably easy to write’ over the “damn fast and painful” and the “damn slow but cool”…

    @George B:

    I keep wanting to make a Mosix system “go” and then never get around to it… Sigh.

    Oh, and I wonder just how many more Bitcoins are left to mine?…

  11. DirkH says:

    Paul Hanlon says:
    9 May 2013 at 11:46 pm
    “In the meantime you have Cython, which takes Python and converts it directly into C. You’ve also got significantly better scientific and other libraries for Python than Javascript. With the amount of money being invested in Javascript by all the big players that will change, but I wouldn’t be writing off Python just yet.”

    Thanks, Paul; I didn’t know Cython, will look into it.

    E.M.Smith says:
    10 May 2013 at 12:03 am
    “Per languages: Its always a holy war… Personally, I’ll take the ‘reasonably fast and reasonably easy to write’ over the “damn fast and painful” and the “damn slow but cool”… ”

    Me too, I guess we both want to get things done. The tools I mentioned are the panzers of the language war. Help you to occupy the territory of the other language or platform fast without needing to actually speak it.

    search terms to use: transpiler , source-to-source-compiler.

  12. Pingback: When is a 4 core CPU a 2 core CPU? | Musings from the Chiefio
