Raspberry Pi Model 3 – First Impressions

I’ve gotten a new Raspberry Pi Model 3. This is a short note on my first impressions.

First off, it is finally “fast enough” that I’m not occasionally gritting my teeth at it.

The Model 2 is just slow enough at times to be a minor irritant. Yes, usable. Yes ‘enough’ computer.
But also yes, I had to wait for it sometimes.

In fairness, mostly when editing / posting articles in WordPress. That has all sorts of structural issues, like sending all the words of the text upstream for spell checking (whenever you type, or every 30 seconds or some such; some browsers let you tell it when to check, but Firefox is autocheck on/off only). So there are network issues, there are “size of text sent” issues, and then it is all processed in the hideously inefficient way web pages do things (i.e. virtual machines running generic code inside bloated host browsers inside…)

So it didn’t really surprise me that as the length of an article grew, the sloth of spell check took a bigger toll on the Pi Model 2. At some point it would reach “crossover” and I’d start having “typeahead” issues where my fingers outran the text processing and then the repeated “polling” with a block of words to spell check would start to consume enough resources that other things would start acting “less than perfect”. (In particular, mouse sensing would get a tiny bit of lag. Just enough that “marking text” would sometimes happen slightly “too late”. So I’d stop moving the mouse and let up the button then move to another place on the screen… and THEN it would react to the ‘mouse up’ and mark the text to the new location. Not an issue if you are a slow mouser… for me, I’m faster than that… Also it would tend to “tear off a tab” and make it a new browser window way too often – due to not sensing mouse up before I’d start to move the cursor off of a tab, I think.)

With the Pi 3, those issues are substantially gone. Spell check doesn’t seem to be causing any issues (I deliberately made the last articles a bit long [especially the CETA one] with quoted text, and no problems). I still have the occasional “tab tearing”, but there may be an issue with when I start the mouse move vs. button up (i.e. a PIBKAC problem…); while I think it’s ‘the same time’, maybe I’m a fraction late on mouse-up. Frankly, I’d love to just kill that feature, but haven’t had the time to find out which setting (of the millions… it seems) to edit in Firefox.

The Pi 2 vs. 3, per the wiki:


The Raspberry Pi 2 uses a Broadcom BCM2836 SoC with a 900 MHz 32-bit quad-core ARM Cortex-A7 processor, with 256 KB shared L2 cache.

The Raspberry Pi 3 uses a Broadcom BCM2837 SoC with a 1.2 GHz 64-bit quad-core ARM Cortex-A53 processor, with 512 KB shared L2 cache.

So in addition to the 64-bit vs. 32-bit change, you also get a 300 MHz speed uplift, or 300/900 = 1/3 ≈ 33% clock upgrade, and a doubling of the cache size (so on memory-hungry apps, cache misses will be much lower).
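Those uplift numbers are easy to sanity-check (a trivial sketch; the figures are the ones from the wiki quote above):

```python
# Pi 2 vs. Pi 3 uplift, using the wiki figures quoted above.
pi2_clock_mhz, pi3_clock_mhz = 900, 1200
pi2_l2_kb, pi3_l2_kb = 256, 512

clock_gain = (pi3_clock_mhz - pi2_clock_mhz) / pi2_clock_mhz
cache_growth = pi3_l2_kb / pi2_l2_kb
print(f"clock uplift: {clock_gain:.0%}")    # clock uplift: 33%
print(f"cache growth: {cache_growth:.0f}x") # cache growth: 2x
```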

For now, the 64 bits isn’t buying much. Then again, it also isn’t costing much. I’ve just stuck in the Pi 2 Arch Linux card without any recompile. As I understand it, so far there are no custom builds using the full 64-bit ability. (That would take recompilation of the libraries, kernel, apps, etc. in a full-on build.) This isn’t as bad as it sounds.

As one of my instructors put it, decades back: what a 64-bit word size giveth, byte packing and unpacking taketh away. Similarly, storing everything in 64-bit words roughly doubles your memory usage, and 64-bit pointers much the same. (More on that and the “Thumb” instructions below.) The “sweet spot” for most things is about 32 bits. You have some byte-packing issues, but fewer float and double math issues. (Doing double precision math on an 8-bit machine is just painful on the CPU.) So realistically, the use of a 64-bit word and data path doesn’t buy that much more, net. It is also best if done with more than double the memory added, and the Pi 3 doesn’t add any more.
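To make the memory-cost point concrete, here is a small sketch (Python’s array module is just a convenient way to show the sizes; the doubling holds in any language):

```python
from array import array

n = 1000
ints32 = array('i', range(n))     # 32-bit signed integers (4 bytes each)
ints64 = array('q', range(n))     # 64-bit signed integers (8 bytes each)
floats32 = array('f', [0.0] * n)  # single precision (4 bytes each)
floats64 = array('d', [0.0] * n)  # double precision (8 bytes each)

# Same values, same count: the 64-bit versions take double the bytes.
print(ints32.itemsize, ints64.itemsize)      # 4 8
print(floats32.itemsize, floats64.itemsize)  # 4 8
print((n * ints64.itemsize) // (n * ints32.itemsize))  # 2
```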

What this means is that the 64-bit processing ability is there, but unused; and unless you are doing a lot of high precision math, it isn’t going to gain much for you. If you are mostly doing a lot of text processing, the reduction in memory efficiency may hurt more… except the ARM chip has a way of helping on that score: the Thumb instructions. These are smaller instruction words that exist for exactly that space-saving purpose.


1.2.2. The Thumb instruction set

The Thumb instruction set is a subset of the most commonly used 32-bit ARM instructions. Thumb instructions are each 16 bits long, and have a corresponding 32-bit ARM instruction that has the same effect on the processor model. Thumb instructions operate with the standard ARM register configuration, allowing excellent interoperability between ARM and Thumb states.

On execution, 16-bit Thumb instructions are transparently decompressed to full 32-bit ARM instructions in real time, without performance loss.

Thumb has all the advantages of a 32-bit core:

32-bit address space

32-bit registers

32-bit shifter, and Arithmetic Logic Unit (ALU)

32-bit memory transfer.

Thumb therefore offers a long branch range, powerful arithmetic operations, and a large address space.

Thumb code is typically 65% of the size of ARM code, and provides 160% of the performance of ARM code when running from a 16-bit memory system. Thumb, therefore, makes the ARM7TDMI core ideally suited to embedded applications with restricted memory bandwidth, where code density and footprint is important.

The availability of both 16-bit Thumb and 32-bit ARM instruction sets gives designers the flexibility to emphasize performance or code size on a subroutine level, according to the requirements of their applications. For example, critical loops for applications such as fast interrupts and DSP algorithms can be coded using the full ARM instruction set then linked with Thumb code.

I presume that Thumb instructions are preserved in the move to 64-bit cores. “Somewhere” I saw the value of about 8% as the total memory size increase moving to 64-bit object code on an ARM. So way better than a double, but you will still take a memory hit (albeit a small one) on making all the code 64-bit compiled. It also means some attention must be paid to the details of just how you compile what (IIRC there are settings you can choose to enable Thumb instructions, or not).

Thus the lag in getting a 64-bit OS released for the Pi 3. If it isn’t done properly and well, you could end up worse off on some measures than just running the 32-bit release.

So mostly it’s just a clock speedup and more cache that you get. For that reason I’d not been in a big hurry (besides, the Pi 2 was “good enough” almost all the time and “tolerable” the rest).

Living on it a few days

I’ve been living on it for a few days now, somewhere around 3 or maybe 4. I’m a happy camper. It doesn’t give me those “I wish it were faster” moments very often at all. (When first opening a browser with 10 tabs in it, reloading the history, I still get that, yet the CPU monitor doesn’t show it pegged so likely not a “Pi issue”. Also when saving a long posting in one tab while having hit ‘reload’ on the preview in another, I’d like more; but again that is as likely to be WordPress as the Pi… or maybe more likely).

In short, I’m quite happy with it, and the “urge” to occasionally boot up the Evo or the Chromebox is simply gone. This is a “very good thing”, as now it means I’m free to try installing Linux on the HP Chromebox. Something I was a bit reluctant about back when it was my “backup posting box”: what I’d go to when things got too slow on the Pi 2 with a very long posting. It has been “off” for about a month now (or maybe two) and I’ve shown I can live without it, even for posting. This also means that the $35 Pi Model 3 is as usable to me as the $179 HP Chromebox. (In fairness, that was the price 3 years back… and time moves on.) So despite all their differences (the Chromebox has a fairly fast Intel CPU in it and lots of memory) the speed perception is about the same, now. Yet the tendency for the Chrome “straitjacket” to bind remains. It’s the Chrome way or the highway…

As of now, on my “someday list” is to go ahead with that “install real Linux” onto the Chromebox. If I screw it up, the downside is now very small. At present, about the only thing it does better is video (and I’ve not tested video on the Pi 3, so who knows…), so ‘worst case’ is that my intention of making it a media server to the TV (ChromeOS does that nicely) for Netflix gets moved onto some other box.

Anything done locally on the box seems “snappier” and just fine. Editing images in GIMP. Editing pages. I’ve not tried anything exotic yet; and I’m sure it’s not going to impress anyone for raw compute power. Also, the gamers will never be impressed (but I’m not a gamer and don’t need the newest octo-core Intel SuperDuperWattSuckium chip to make real time jet fighting happen…) For those things that I do, it is plenty and is comfortable. Since the Pi 2 was fast enough to recompile a Linux Kernel, this will be more than fast enough. That’s about the biggest one shot thing I do.

Odds & Ends

I got the heat sink kit. A finger applied to the CPU cooling fins shows them ‘mildly warm’ at idle and ‘that seems a bit hot’ when running full tilt. I’m sure that without the heatsink it would be fine, though I’d expect that at full tilt it might be pushing toward the acceptable limit of CPU die temps. (That’s at ‘boiling water’ hot…) So I’d get the heatsink. It likely isn’t needed at all, but it will increase die lifetime and help if you put the thing inside a challenging environment like a closed-box case… I’m using one of those, but with the lid off.

Back at the wiki, per overclocking:

Pi2; 1000 MHz ARM, 500 MHz core, 500 MHz SDRAM, 2 overvolt,
Pi3; 1100 MHz ARM, 550 MHz core, 500 MHz SDRAM, 6 overvolt. In system information CPU speed will appear as 1200 MHz. When in idle speed lowers to 600 MHz.

These are “overclock” settings. I’ve not changed any of those settings on my Pi chip, so I might be running a slower clock than possible or with ‘overvolts’ lower than possible. i.e. I’ve not pushed the limits.

Oddly, or perhaps because of it… I’ve not changed anything from the defaults I had on the Pi 2, yet a check on the max CPU speed (like this: sudo cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq) gave me:

[root@ARCH_pi_64 cpufreq]# cat cpuinfo_max_freq 

so it is running at 1.2 GHz, despite no special settings being made.

But apparently I’m not stressing it much via posting anyway:

[root@ARCH_pi_64 cpufreq]# cat cpuinfo_min_freq 
[root@ARCH_pi_64 cpufreq]# cat cpuinfo_cur_freq 

It is also running fairly cool at the moment:

[root@ARCH_pi_64 cpufreq]# /opt/vc/bin/vcgencmd measure_temp 

All of which is just fine with me!
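For anyone wanting to repeat those checks, here’s the same thing gathered into one guarded sketch. It assumes the standard Linux cpufreq sysfs layout (and notes where the Pi firmware’s vcgencmd would give the temperature); on a machine without cpufreq it just prints nothing:

```python
from pathlib import Path

def read_cpufreq(base="/sys/devices/system/cpu/cpu0/cpufreq"):
    """Return {name: kHz} for whichever cpufreq files are readable."""
    out = {}
    for name in ("cpuinfo_min_freq", "cpuinfo_cur_freq", "cpuinfo_max_freq"):
        path = Path(base) / name
        try:
            out[name] = int(path.read_text())
        except (OSError, ValueError):
            pass  # file absent or not readable on this machine
    return out

for name, khz in read_cpufreq().items():
    print(f"{name}: {khz // 1000} MHz")
# Temperature (Pi only): /opt/vc/bin/vcgencmd measure_temp
```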

(Once I figure out something reasonable to keep it busy and working, I’ll post some stats from that, too. The “at the wall” numbers.)

(Oh, sidebar: Some folks may remember my ‘sporadically crashes when running 100% on 4 cores’ problem on the Pi 2. Well, it was a power supply failure. I was using one with well over 2 Amps capacity, well inside needs, but one morning it started lighting up the ‘low volts’ rainbow square even on low use. It was just marginal on volts, even when lightly loaded, and had drifted down just enough more to let that be known. My guess is that under load it was “OK for a while”, then would have a burst of demand or a sag of supplied volts and the Pi would crash. While I’ve not gone back and stress tested the Pi 2 on another power supply, having a component fail in a way that would have caused the issue is usually diagnostic. So that power supply is in the garbage. It was a cheap “USB charger” cube anyway, not a device sold as a power supply for a computer.)

From the wiki, per video:

Although the Raspberry Pi 3 does not have H.265 decoding hardware, the CPU, more powerful than its predecessors, is potentially able to decode H.265-encoded videos in software. The Open Source Media Center (OSMC) project said in February 2016:

The new BCM2837 based on 64-bit ARMv8 architecture is backwards compatible with the Raspberry Pi 2 as well as the original. While the new CPU is 64-bit, the Pi retains the original VideoCore IV GPU which has a 32-bit design. It will be a few months before work is done to establish 64-bit pointer interfacing from the kernel and userland on the ARM to the 32-bit GPU. As such, for the time being, we will be offering a single Raspberry Pi image for Raspberry Pi 2 and the new Raspberry Pi 3. Only when 64-bit support is ready, and beneficial to OSMC users, will we offer a separate image. The new quad core CPU will bring smoother GUI performance. There have also been recent improvements to H265 decoding. While not hardware accelerated on the Raspberry Pi, the new CPU will enable more H265 content to be played back on the Raspberry Pi than before.
— Raspberry Pi 3 announced with OSMC support

The Pi 3’s GPU has higher clock frequencies—300 MHz and 400 MHz for different parts—than previous versions’ 250 MHz.

Which means that someday when they actually get the code recompiled and the interfacing worked out, the Pi 3 may run video rather nicely. So once bought, it can still improve as the software catches up.

I’ll be playing around a bit with the software side of things. I’ve tried adding a couple of bits of software under Arch via ‘pacman -S’ and gotten odd error messages, but not tried to figure out if it is me, Arch, or the Pi 3 that’s the issue. Example?

[root@ARCH_pi_64 chiefio]# pacman -S sysstat
resolving dependencies...
looking for conflicting packages...

Packages (2) lm_sensors-3.4.0-1  sysstat-11.2.2-1

Total Download Size:   0.36 MiB
Total Installed Size:  1.82 MiB

:: Proceed with installation? [Y/n] y
:: Retrieving packages...
 lm_sensors-3.4.0-1-...   110.3 KiB   552K/s 00:00 [########################] 100%
error: failed retrieving file 'sysstat-11.2.2-1-armv7h.pkg.tar.xz' from mirror.archlinuxarm.org : The requested URL returned error: 404
warning: failed to retrieve some files
error: failed to commit transaction (unexpected error)
Errors occurred, no packages were upgraded.

[root@ARCH_pi_64 chiefio]# pacman -S kernel26-headers file base-devel abs
warning: file-5.26-1 is up to date -- reinstalling
:: There are 25 members in group base-devel:
:: Repository core
   1) autoconf  2) automake  3) binutils  4) bison  5) fakeroot  6) file
   7) findutils  8) flex  9) gawk  10) gcc  11) gettext  12) grep  13) groff
   14) gzip  15) libtool  16) m4  17) make  18) pacman  19) patch  20) pkg-config
   21) sed  22) sudo  23) texinfo  24) util-linux  25) which

Enter a selection (default=all): 
warning: autoconf-2.69-2 is up to date -- reinstalling
warning: automake-1.15-1 is up to date -- reinstalling
warning: binutils-2.26-3 is up to date -- reinstalling
warning: fakeroot-1.20.2-1 is up to date -- reinstalling
warning: skipping target: file
warning: findutils-4.6.0-1 is up to date -- reinstalling
warning: gawk-4.1.3-1 is up to date -- reinstalling
warning: gcc-5.3.0-5 is up to date -- reinstalling
warning: gettext-0.19.7-1 is up to date -- reinstalling
warning: grep-2.24-1 is up to date -- reinstalling
warning: groff-1.22.3-6 is up to date -- reinstalling
warning: gzip-1.7-1 is up to date -- reinstalling
warning: libtool-2.4.6-4 is up to date -- reinstalling
warning: m4-1.4.17-1 is up to date -- reinstalling
warning: make-4.1-3 is up to date -- reinstalling
warning: pacman-5.0.1-2.1 is up to date -- reinstalling
warning: patch-2.7.5-1 is up to date -- reinstalling
warning: pkg-config-0.29.1-1 is up to date -- reinstalling
warning: sed-4.2.2-3 is up to date -- reinstalling
warning: sudo-1.8.16-1 is up to date -- reinstalling
warning: texinfo-6.1-1 is up to date -- reinstalling
warning: util-linux-2.28-1 is up to date -- reinstalling
warning: which-2.21-1 is up to date -- reinstalling
resolving dependencies...
looking for conflicting packages...

Packages (27) abs-2.4.4-2  autoconf-2.69-2  automake-1.15-1  binutils-2.26-3
              bison-3.0.4-1  fakeroot-1.20.2-1  file-5.26-1  findutils-4.6.0-1
              flex-2.6.0-2  gawk-4.1.3-1  gcc-5.3.0-5  gettext-0.19.7-1
              grep-2.24-1  groff-1.22.3-6  gzip-1.7-1  libtool-2.4.6-4
              linux-headers-d3plug-3.4.2-2  m4-1.4.17-1  make-4.1-3
              pacman-5.0.1-2.1  patch-2.7.5-1  pkg-config-0.29.1-1  sed-4.2.2-3
              sudo-1.8.16-1  texinfo-6.1-1  util-linux-2.28-1  which-2.21-1

Total Download Size:     7.16 MiB
Total Installed Size:  200.43 MiB
Net Upgrade Size:       34.63 MiB

:: Proceed with installation? [Y/n] y
:: Retrieving packages...
 linux-headers-d3plu...     4.9 MiB  2.36M/s 00:02 [########################] 100%
 gawk-4.1.3-1-armv7h      904.3 KiB  1615K/s 00:01 [########################] 100%
error: failed retrieving file 'bison-3.0.4-1-armv7h.pkg.tar.xz' from mirror.archlinuxarm.org : The requested URL returned error: 404
warning: failed to retrieve some files
error: failed retrieving file 'sed-4.2.2-3-armv7h.pkg.tar.xz' from mirror.archlinuxarm.org : The requested URL returned error: 404
warning: failed to retrieve some files
error: failed retrieving file 'flex-2.6.0-2-armv7h.pkg.tar.xz' from mirror.archlinuxarm.org : The requested URL returned error: 404
warning: failed to retrieve some files
error: failed retrieving file 'which-2.21-1-armv7h.pkg.tar.xz' from mirror.archlinuxarm.org : The requested URL returned error: 404
warning: failed to retrieve some files
 abs-2.4.4-2-armv7h         9.7 KiB   973K/s 00:00 [########################] 100%
error: failed to commit transaction (unexpected error)
Errors occurred, no packages were upgraded.

Is there something set wrong? Is it something I didn’t do? Is Arch just not ready for it yet? Who knows. For now, I can swap the chip back to the Pi 2 for software additions. The error message implies I ought to do an update first (on Arch, a full ‘pacman -Syu’ sync, since stale package databases give exactly these mirror 404s)… so it is likely to be me.

As my major interest right now is trying out a different OS and doing builds from scratch, I’m more likely to go that way and not worry about these two oddities. Most folks will not be running Arch anyway (and I’m likely to ‘move on’ in a month or two) so not a big issue.

In Conclusion

I’m happy with it. It is a good “daily driver”, without reservations. Sure, it could be faster; everything could be. But I find myself with “no regrets” about booting it up when I look across the room at the HP Chromebox, the Compaq Evo, or even the 64-bit Antec / ASUS box.

In short: The Raspberry Pi Model 3 is everything I need for a desktop, and would be a great learning tool for a tech-oriented kid (or one you would like to be more tech-oriented…). IMHO it has crossed the threshold from “nice toy, mostly usable” into “Hey, I can live on that”.

I’m likely done with buying new kit for a while, since I no longer feel a need for more. Perhaps when I start building out a file server I’ll want SATA support or USB 3; but maybe not. Maybe when I’m doing full system builds, or maybe not. For now, picking ONE standard set of library releases, kernel, and build tools, then getting to work making my infrastructure kit out of the 2 x Pi Model 2s, is more important. I’ve got a perfectly fine desktop in the Pi 3; it’s the two Pi 2s that need a standard OS on them (so they can share compiles and have the right libraries matching up nicely) and need to be configured as the backend compute and file servers. Essentially, I need to build out my back room and set it up for Linux development in a small cluster. ONLY once that is done, and ONLY if it is then found insufficient for the task, will there be anything gnawing at me about it.

The only other “bit of hardware lust” I can see is that I find the tablet fine for “on the road reading” but a PITA for posting. The Pi Model 3 is connected to a full keyboard and monitor. Not exactly Starbucks-on-the-road design. So “whenever” I go “on the road again”, my laptop envy will return. Perhaps before that I’ll get the fan fixed in the HP G6… We’ll see. For now that’s just not a need.

With that, I’m off to download some Linux images, take some test drives, maybe even try building a 64 bit kernel.


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Tech Bits.

13 Responses to Raspberry Pi Model 3 – First Impressions

  1. Larry Ledwick says:

    Sounds like a nice “just enough” step up.
    Drat, now I have to spend more money getting a Pi3. (I know, not much money, but I have too many things on my “must have toys” list right now.)

  2. E.M.Smith says:

    I’d say the Pi 2 was “just barely enough” and the Pi 3 is “comfortably just enough”. IF you have a Pi 2 already and don’t “have issues”, I’d not bother with the upgrade (likely either waiting for the next one a year out, or getting something completely different). IF you are using a Pi and find it “not quite enough”, then I’d say “Go for it!” as your issues will be gone.

    But if you are using an Intel based PC and find it a bit doggy, then moving to a Pi 3 is not likely to make your day. It’s about like a single core 64 bit box from a couple of years back. Folks needing the latest quad core Intel at 3 to 4 GHz will not be happy campers.

    Oh, and remember the basic rule of hardware: IF you can futz and diddle about long enough, the price will come down or the performance will go up, so NEVER buy long in advance of need for “inventory”… that’s what retired gear is for… inventory…

  3. Larry Ledwick says:

    I have two Raspberry Pi 2s which I have not gotten around to messing with; other things took over my project time. I intend to set one of them up as your local DNS filter. Currently I am using an old i5 desktop for “just browsing”. Like you, I don’t game or do system intensive things like fiddle with massive spreadsheets; just mostly checking news, following links to dig here on the current interest topic, and such. I have an i7 MS Windows 7 64-bit system I do my photography on, and an i7 laptop for tasks which require more horsepower, but I want a minimalist, dirt simple, plain jane system for my daily browsing which is locked down pretty well, to get away from the MS Windows 7 64-bit that is on my daily browsing desktop.

  4. p.g.sharrow says:

    Question: what is the practical point where more is not worth the price in effort? Is 64-bit a bridge too far up that river, or is it the last needed bridge? There was no doubt that 16-bit was a huge improvement over 8-bit, and 32-bit a good improvement over 16…pg

  5. E.M.Smith says:


    While there are some exotics, like VLIW (Very Long Instruction Word) machines, for most practical purposes with common architecture CPUs, 32-bit is enough and 64-bit is the limit of gain. To some extent, it depends on what you are doing.

    Heavy double precision math benefits from 64 bit. Engineers and astronavigators.

    For business, 32 bit is about it. Either doing integer math (counting every penny) or single precision floats (Projected growth of 3.525000000000000001 % not so important). For text editing and processing, pretty much 8 bit bytes is enough, 16 bits for multiple languages and fonts / scripts.

    So for some text problems, multiple 8-bit cores would beat a 64-bit single core.

    For the typical home uses, 32 bit is about ideal, but 16 is good.

    In many ways, 64-bit is not really needed, nor as useful as faster 32-bit. Thus the Pi folks saying they chose the Pi 3 SoC for the clock speed, not the 64 bits. IFF interprocessor communication is fast and the compiler multithreads well, I’d rather have a dual core 32 than a single 64. IFF I were doing engineering or science that needed double precision and lots of it, I’d choose 64, since “double math” on 64 bits is one op. On 32-bit, about 8 clocks, as a guess.

    For hard core science and modeling, I’d choose a 32 bit scalar front end to a NVidia vector unit backend, preferably in 64 bit.

    In reality, for most purposes, more data communication speed and faster disks matters more (thus my mention of Cubietruck having faster USB and SATA.)

    Which is an awful lot of words to say “it depends”…
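    To make the precision point concrete: round-trip a number through a 32-bit float and everything past about 7 significant digits is gone. A quick sketch (Python’s struct module here, but the IEEE 754 behavior is the same in any language):

```python
import struct

def as_float32(x):
    """Round-trip a double through IEEE 754 single precision."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

growth = 0.03525      # that 'every last digit' growth rate, as a double
print(growth)              # 0.03525
print(as_float32(growth))  # slightly off: only ~7 digits survive
print(growth == as_float32(growth))  # False
```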

  6. E.M.Smith says:

    Oh, and a 64-bit data path can move double what a 32-bit one does when loading memory or devices; so 64-bit, or wider, memory data paths and DMA hardware move more bits faster. It just doesn’t process them like a CPU does.

    General purpose machines balance all that to “good for most things, most of the time”. For any given class of problem, special HW tweaks can win. The old IBM mainframes, for example, were pretty slow, bland 32-bit, 16-register CPUs, but they had (for the time) damn fast multiple IO channels. For business, you shovel bytes around a whole lot more than you change them or do math: read in a 512-byte record, add 5% to 32 bits of it, write out a 512-byte record… For the old Cray supercomputers, it was the 64 math processors in the vector unit that made them fast on math problems: read in 128 numbers of 64 bits each; one instruction later, write out 64 double precision products… plus the data path from multiple disk heads that let you load that data in parallel. But it was lousy for editing text files… it had 64 MB of memory, but that was only 8 megawords, so editing a 200 MB file was “sub-optimal”… with lots of swaps.

  7. Larry Ledwick says:

    I think the major advantage of 64 bit is in the larger addressable memory and storage limits, not computational precision.

    3 GB is just not big enough for handling multiple very large images in memory without paging and that sort of thing. As such, in my experience it has more to do with secondary features like that than simple processing.

    For example the maximum pixel dimension of a jpeg image is currently 30,000 pixels wide.
    With modern high resolution cameras and image merging into panoramas it is very easy to generate panorama images with a native resolution significantly greater than 30,000 wide. You can see very large gigapixel images assembled this way on http://www.gigapan.com/

    If you have a few hours you want to throw away, browse through their gallery of very large images. The technology and incredible depth of detail has completely outstripped common image formats. None of these images can be stored in their full natural resolution, they have to all be down sampled to display the full image. It takes hours to generate these images (to take the source images) so it works best in uniform lighting conditions.

    Text on display is readable if you zoom into this image
    Panorama size: 2973 megapixels (88584 x 33564 pixels)
    2.97 Gigapixels
    Input images: 312 (24 columns by 13 rows)
    Field of view: 360.0 degrees wide by 136.4 degrees high (top=61.5, bottom=-74.9)

    Monaco Grand Prix 2015, Race Day – Monte Carlo
    Size 23.19 Gigapixels
    You can see individual people on the party boats if you zoom in


    51.42 Gigapixels
    Norways largest panoramic image as of 31.12.2014
    3000 tiles shot with Canon 5D Mark III + 600mm telelens and GigaPan Epic Pro.

    If you zoom in at the base of the bridge (center right of image), you can see people on bicycles crossing the street at a cross walk, and read the street signs.

    Size 21.38 Gigapixels
    Pittsburgh’s entire Golden Triangle area. Consisting of 1748 separate photographs shot with a 600 mm lens, the image totals 21.38 gigapixels

    For example, the Nikon D810 maximum sensor resolution is 7360 x 4912 pixels; building a panorama with some overlap between the images, you exceed 30,000 wide when you merge more than 5 – 8 images into one panorama image. To get a 180 degree panorama, it takes around 12 – 18 images to swing 180 degrees and get sufficient overlap for reliable stitching between the individual tiles of the image.

    If you are only using 3680 pixels of the original image width (i.e. trimming 25% overlap on each end), an 18 tile panorama image would be 66240 pixels wide. At the full image height of 4912 pixels you would have 325,370,880 image pixels, and if using 24-bit true color you would need 7,808,901,120 bits to store just the image data, ignoring the header and meta-data in the image. At 16-bit color depth you drop that down to 5,205,934,080 bits just to store the image data.
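    (That arithmetic checks out; a quick verification with the numbers above:)

```python
# Verifying the panorama numbers above.
tiles = 18
usable_width = 3680   # pixels kept per tile after trimming overlap
height = 4912         # full sensor height

width = tiles * usable_width
pixels = width * height
print(width)          # 66240 pixels wide
print(pixels)         # 325370880 image pixels
print(pixels * 24)    # 7808901120 bits at 24-bit color
print(pixels * 16)    # 5205934080 bits at 16-bit color
```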

    I have the gigapan head equipment to do these images, and one of the reasons I went to 64 bit OS is so I could process these huge images in their native resolutions. The applications to handle the images simply are not capable of processing a full resolution image like this yet.

    Gigapan is the same system used by NASA to assemble the Mars panoramas from the rovers. It has just recently been adopted by scientific groups to document scientific sites at ultra high resolution.

  8. p.g.sharrow says:

    @EM & Larry: I asked a $5 question and got a thousand dollars worth of information! ;-) Thank you both. LoL
    It appears that I/O is still a bottleneck in the utilization of all this computing power. Especially in regard to long term storage…pg

  9. E.M.Smith says:


    Glad to help, even over the top help ;-)

    I really like Larry’s example. For most folks it isn’t important now. In a few years, it will be much more common. Eventually lots of folks will expect to make those images and look at libraries of them. So now, it fits my category of “special hardware” letting you do something unusual. Eventually, it will fit the category of “balanced use” and 64 bit will be pervasive.

    Yet even then, for things like data com (routers, firewalls, etc.) where serial communications presents one byte at a time, an 8 bit computer is fine, thus the system chips in routers being ‘dinky’ compared to desktops today.

    FWIW, in some box somewhere, I have an early handheld computer with dual processors… it has 2 x 4 bit CPUs… so took a couple of cycles to process each byte, but one CPU could be doing that while the other worked on your program…

    Word size is a fascinating part of system design, and system design is fascinating in what can be done with changing the shape of the box, or thinking outside it… There are many varied designs with cool effects and abilities. Unfortunately, many were left on the cutting room floor of history. I learned on one of them (still barely around), the Burroughs B6700. It had a 52-bit word where some bits enforced security (marking runnable programs vs. not, and preventing many kinds of attack).


    Lists it as a name change on a 51 bit machine, but it had 48 + 3 tag bits + 1 parity.


  10. Larry Ledwick says:

    On a side note, the thing I find fascinating about those mega resolution images is that they serve as a proof of concept of what the TLA’s can do if they want to.

    Just gang a few hundred image sensors and high dollar lenses into an array, and they could take a stop action picture of the grandstands of a major event and be able to identify probably 80% of the people in the crowd.

    Street view on a busy street, and face recognition software, no problem – generate a list of everyone who attended an event in near real time.

    There is already some open discussion of this sort of persistent observation capability.
    These “God’s eye” views of the world really change the game as far as surveillance and tracking are concerned.


  11. E.M.Smith says:

    And people wonder why Oracle is so large, why there is such a big emphasis on “Big Data” storage systems and massive data fabrics, and more… It isn’t to do billing… but it pays for the R&D for the few systems needed by the TLAs…

  12. E.M.Smith says:

    While running the Hillary Skates video from the FBI Director (thus using a medium level of CPU – all four cores, but about 50% each) I got these stats:

    [root@ARCH_pi_64 cpufreq]# cat cpuinfo_cur_freq
    [root@ARCH_pi_64 cpufreq]# /opt/vc/bin/vcgencmd measure_temp

    Warmer than at idle, but still well away from “destructive” temps (that, IIRC, are about 200 C).

  13. E.M.Smith says:

    Oh, and not quite related, but sort of… I downloaded several “Void Linux” images. Trying it on the 64-bit ASUS / Antec box failed with the same “block cursor” issue from the wrong (new) video driver as many other releases. Seems very few folks use / supply the video driver for that chipset. I think I’ll need to extract it from CentOS 6 and apply it to others myself… Sigh.

    Further, I couldn’t get the EVO to see the boot media. But that is a known Evo problem (at least for my box…)

    Which leaves me with the Pi images. Sometime ‘later’ I’ll give that a try. So far, bringing up different Linux releases has been far easier on the Pi than on old x86 gear… mostly due to driver issues as the Pi has exactly one set of drivers needed… not 20,000 variations as in the PC world.
