Broken Feature Hell


I’m stuck once again in Broken Feature Hell.

For decades I’ve been working in I.T. doing computer stuff. The whole time has been marked by features that don’t work. Either as ‘bugs’, or as ‘Real Soon Now’ broken promises, or sometimes as “let the customer do QA” (in the Micro$oft model). Sometimes as Sales Guy Hype Not Shipping Yet…

Once, at Apple, I was looking to buy a load of network gear. One Sales Guy had a great product spec. I was ready to buy several and “Make His Day”. Then I said “OK, just bring in a sample and set it up. Show me that it works.”. He was hesitant. Yes, often that is simply because it is a PITA to do the releases and drag the samples out and get the tech guy assigned and all that time he is not selling. But, I was insistent, there would be no sale without a Demo. A working Demo.

So a couple of weeks later they show up with a Demo Unit. We put it on the table. He hands me the manual. I look over the chassis. Nice lights with all the right labels. “OK, plug it in. Let’s fire it up.” says I. He does a sales pitch. “Um, plug it in. Power. Now please.” says I. He looks a bit pained and pale. “Does it work or not? I want to see it run, right now.” says I….

Well, turns out that this prototype unit had worked maybe a few days ago then failed and they were still trying to make it go right back in the lab, but they brought over the nice empty case for me to look at….

A very dramatic case of Broken Feature Hell, in that the whole product didn’t work, and in spectacular fashion, but such is the world of I.T. and new products.

Another time I spent close to a week trying to track down why a client mail server would only deliver mail from outside the company every 20-ish minutes. It worked fine. Just that the mail coming in and going out only happened to move once every 20 minutes or so. For 20 minutes, mail would go out. Often instantly. But no inbound. Then for about 20 minutes, inbound would flow (often instantly) but outbound would not flow. Eventually I worked out that they had 2 “default gateways” set in the configuration. One inside, one outside.

Now the “default gateway” is also known as the router of last resort. That is, there is exactly ONE “LAST” resort. I argued with the client for the better part of a few hours that this was a Very Bad Idea. Eventually, on the Microsoft “support” web site I found a description of this “feature”. Seems that a Windoz box will rotate between multiple “routers of last resort” on about a 20 minute rotation. They stated about this bug “This behavior is by design”…

So with that in hand, I could demonstrate WHY the mail did what it did, and then was allowed to set up the mail server as it ought to have been. A static route to the inside interface for inside corporate mail, and a default gateway that pointed outbound for ‘everything else’. Mail flowed instantly both ways…
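For the record, here is the shape of that fix sketched as modern Linux ip commands. A sketch only: the addresses are made up for illustration, and on the actual client box the same thing was done with the Windows route command.

```shell
# A sketch only; all addresses here are hypothetical.
# Drop the bogus second "default gateway" (the inside one)...
ip route del default via 192.168.1.1
# ...add a static route sending the inside corporate networks
# to the inside interface...
ip route add 10.0.0.0/8 via 192.168.1.1
# ...and leave exactly ONE router of last resort, pointing outbound.
ip route add default via 203.0.113.1
```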

All due to a broken “feature”.

Much of my life has been spent in Broken Feature Hell, and I’ve gotten pretty good at navigating the turf. I can see a broken feature being hyped in a sales call faster than most anyone else; and I can smell the fear when you ask about it…

The Present Land Of Broken Features

So I had this “Bright Idea”. Make a Qemu Bigendian system on a chip so that I could make the bigendian parts of GIStemp work on any old PC. Qemu is a free system emulator. It has SPARC support in it (and I’m pretty sure either a SPARC or SGI MIPS was the base system on which GIStemp was run). Easy Peasy…

The first cut of research showed all the right features present. The docs all said it worked on Windows as well as *nix machines (Linux, Unix, POSIX, etc.). The features stated it had several bigendian chips emulated (SPARC, MIPS, PowerPC, ARM with big endian set…). OK, looks reasonable. Probably a few Broken Features, but ought to be livable.

I’ve also gotten very good at navigating Broken Feature Minefields and charting the course through them that works. As long as there are “enough” features that work, you can usually find a set that lines up. Even if it takes some exploration.

But sometimes… sometimes not so much. Lots of potential paths, but then after a day or two down that road comes the precipice or the road block. Sometimes it is a small one and you can build a bridge over or around it. Install a package or find the ‘just so’ flag settings that let it work enough. Sometimes it is a hard stop and you back up a few days and try again.

In this case, the number of features is large, and the number that don’t work is large too. LOTS of paths to explore, almost all of them ending badly. It is Broken Feature Hell. All the work, none of the progress.

What’s What

Do realize that in software development projects a fair amount of Grim Determination is required. An unwillingness to give up. So this Complaint by me does not mean I’m giving up. Not until every path is explored, and marked as a dead end, can you say that Broken Feature Hell has ended in death of the project. For now it is just a bleat from a hot cliff overlooking a dead valley Yet Again.

OK, first up, the Windows support. It’s slim. Yes, I have it running on my laptop. Yes, it works. Yes I have a SPARC emulation running. But several “features” don’t work. First off, the ‘prebuilt’ system images (Debian on a SPARC 32) limit you to Debian Etch. Debian stopped supporting SPARC 32 back in 4.x release land. Now they are at 7.x land. So old code, not updated, no new features. I’m OK with that, I guess. The current Debian wants to support SPARC 64 chips, but doesn’t have it working yet. Yes, I know, free software and free labor making it go, if you want to you can contribute time to it or pay for it. Still, it means that SPARC is not really working all that well.

Then, there’s that small matter of flags to Qemu. The two prebuilts come in windowing and command line. OK, the command line one is fast enough and all I need for GIStemp anyway. It does work. The windowing one works too, after a fashion. As nothing is using any of the graphics hardware, it is God Damn Slow. Painfully so. OK, I could ignore it I suppose. Except…

At the GIStemp source code download page, the tarball is not in FTP land, but in HTML land. No Worries, as the wget command gets HTTP files too… but attempts to use it from the command line, while working fine on most any other site, give 404 and 403 and other errors on the GIStemp page (depending on what options are set).
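For what it is worth, one common cause of that kind of 403 is a web server that rejects wget’s default User-Agent string. A sketch of the usual workaround; the URL here is a placeholder, not the real GIStemp download path:

```shell
# A sketch only; the URL is a hypothetical stand-in.
# Some servers return 403 to the default "Wget/x.y" User-Agent;
# claiming to be a browser sometimes gets past that filter.
wget --user-agent="Mozilla/5.0" http://example.com/gistemp_sources.tar.gz
```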

So is that a fault of the GIStemp web page? Or of wget? Or of THIS particular Etch version of Debian wget? Or… So a ‘quick’ 20 minute launch and set up of the windowing Qemu SPARC and the browser lets me download the GIStemp code (that I showed in the last posting). OK, I’ve got the code. But…

It is in the windowing prebuilt image, not in the command line image.

No Problem, thinks I, there’s launch options to not do windowing from the windowing version… a few launch attempts later and I realize those “features” don’t work. I can launch it, and it may be running without ANY visible console, but it did not launch a command line version. Sigh. Is there a work around past this? Maybe, hike down those four roads a day or two each and report back…
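For reference, the Qemu flag that is supposed to do this is -nographic: it skips the graphical console entirely and wires the guest serial console to your terminal. Whether it actually behaves in the Windows port is the open question. A sketch of the launch line, with a made-up image name:

```shell
# A sketch only; the disk image name is hypothetical.
# -nographic: no graphics window at all; the guest serial console
# is connected to the terminal you launched from.
qemu-system-sparc -nographic -hda debian_etch_sparc.qcow2
```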

So I can download the code, or I can have a working command line system that is livable, but not both at the same time…

Yes, I can do things like put the source code on a CD, and find out how to mount the CD inside the command line Qemu (IFF that feature works in the Windows port…). There are plenty more paths to explore.
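In Qemu terms the CD idea looks like this. Again a sketch with made-up file names; whether -cdrom works in the Windows port is exactly the feature to be tested:

```shell
# A sketch only; file names are hypothetical.
# Hand the guest an ISO image holding the GIStemp sources.
qemu-system-sparc -nographic -hda debian_etch_sparc.qcow2 -cdrom gistemp_sources.iso
# Then, inside the guest, mount it the usual way:
#   mount /dev/cdrom /mnt
```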

I’m not giving up yet.


Welcome to Broken Feature Hell where you may spend days wandering in the woods only to find yourself back at the last camp. Again.

So far I’ve found a half dozen or so launch flags that look like they don’t work (including the -k en-us flag to let me make the keyboard a US keyboard instead of the GB one it has as shipped; at present I can’t find the ‘pipe’ symbol vertical bar that is essential to *nix command line use…). I also had a ‘failure to configure’ on one instance (apt-get upgrade) but it worked on another. BOTH running from an SD chip. What was different? Not much… and nothing that ought to matter… so a sometimes randomly broken feature…

I now have a FORTRAN compiler installed, but no code yet. And another image has the code, but can’t install the compiler. And a third has the data… and features that ought to let me boot one the same way as the other don’t seem to work, and the interface to peripherals may or may not work and may or may not let me get the code moved; but are painful to set up in any case (all command line options at boot time when that seems to have a lot of broken features already).

And that is how you know you are in Broken Feature Hell.

There’s just enough options left to try that you keep on searching for The One Path. But so many broken that the odds of that path existing are dancing with zero.

At that point, the Grim Determination starts to be a non-feature as you spend way too much time looking for The One True Path and not enough time asking “Is this sane? Is there a better way?”

Often that is asked just before you find the One True Path… so hope springs eternal.


In Conclusion

I’ve not listed all the Broken Features I’ve run into. Things like looking at the MIPS and PowerPC emulations and finding them not all that complete either. This was just a sample for the flavor of it.

I’m not sure exactly what path I’m going to take out of this. Likely continue with the emulator for awhile. I’d been wanting to set up a Raspberry Pi general purpose server (but being little endian it can’t run the last part of GIStemp directly). Networking seems robust on the Qemu SPARC. My current “This Time For Sure!” is an RPi file server with the sources, and a command line Qemu SPARC to unpack and run them. All the parts look like they work. Just ought to be a matter of doing it. I’d guess about 2 days of work (if done straight through).

We’ll see.

Or maybe I’ll just take $100 and buy an old PowerMac and turn it into a Debian box… Forget all the round about stuff and go for straight hardware. Only question being just exactly which of the hundreds of Mac configs actually works well with Debian ;-)

Well, time to get on with the day. Wish me luck as I explore more “features”…


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in GISStemp Technical and Source Code, Tech Bits.

22 Responses to Broken Feature Hell

  1. M Simon says:

    I never got why “little endian”. Representing a 32 bit number on a 16 bit machine (or equivalent manipulations – 64 on a 32) looked painful to me.

  2. Ian W says:

    Reminds me of a long time ago working on a DEC10. It had a 36 bit word which it accessed as 5 × 7-bit bytes. Yes, there was one spare bit in every word. Made using bit and byte arrays an interesting process.

  3. mt says:

    You should try qemu PPC with Debian wheezy. Looks like qemu/sparc is rather out of date and will limit available RAM. It’s a bit tricky, but I just did a qemu/ppc/wheezy installation, installed fortran, and verified big-endian.

  4. E.M.Smith says:

    @M. Simon:

    Little vs Big Endian is just the byte order used to pack words. Is 1,234 saved as 1234 or 4321?

    It doesn’t have anything to do with changed word sizes per se. So 1234 and 12345678 are both the same endian-ness.

    And it isn’t painful at all… unless you mix endianness in your processing… or write endian dependent code on one machine (store data one way) and then try to run it on another.

    Unfortunately, GIStemp did just that in the final steps. There is an unstructured data storage option in FORTRAN that stores things in an endian dependent way, and they used it.

    In theory, there is an endian flag to the FORTRAN compiler, but last time I tried it on a Linux ( Red Hat I think) on my GIStemp computer workstation (about 2008?) it was a broken feature ;-)
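You can see the byte order question directly with a couple of POSIX tools. A quick sketch; nothing endian dependent actually happens here, we just write the bytes of 0x01020304 in each order by hand and dump them back:

```shell
# Write the 32-bit value 0x01020304 to disk in both byte orders by hand,
# then dump the raw bytes back to see which end came first.
printf '\001\002\003\004' > big.bin      # big-endian: most significant byte first
printf '\004\003\002\001' > little.bin   # little-endian: least significant byte first
od -An -tx1 big.bin      # shows: 01 02 03 04
od -An -tx1 little.bin   # shows: 04 03 02 01
```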

    @Ian W:

    It’s interesting how many folks who have now only heard of 8 bit bytes and 16 / 32 / 64 bit words do not know there were other word sizes common in the past.

    My “first machine” at college was a Burroughs B-6700. Novel for the day in having a dual processor. It also had a 52 bit word. This was 48 bits of ‘data’ and 4 ‘tag bits’ that had meta data of a sort about the other bits. There is an interesting project to try to recover / restore one.

    Had 6 M Bytes of memory (!). OS and such were written in ALGOL and I learned ALGOL early on. Actually liked the language, other than the lack of a defined I/O package… (Why language designers regularly leave out one of THE things application writers deal with most, I/O, is beyond me…)

    At any rate, it was an interesting machine. One write-up claims the basic design goals live on in a Unisys machine family:

    While the B5000 architecture is dead, it inspired the B6500 (and subsequent B6700 & B7700). Computers using that architecture are still in production as the Unisys ClearPath Libra servers which run an evolved but compatible version of the MCP operating system first introduced with the B6700.

    Sperry / Univac and Burroughs (and who knows what else) merged to make up Unisys…

    Unique features

    All code automatically reentrant (fig 4.5 from the ACM Monograph shows in a nutshell why): programmers don’t have to do anything more to have any code in any language spread across processors than to use just the two shown simple primitives. This is perhaps the canonical but by no means the only benefit of these major distinguishing features of this architecture:
    Partially data-driven tagged and descriptor-based design
    Hardware was designed to support software requirements
    Hardware designed to exclusively support high-level programming languages
    No Assembly language or assembler; all system software written in an extended variety of ALGOL 60. However, ESPOL had statements for each of the syllables in the architecture.

    A fascinating machine in many ways. Kind of wish I could get a desktop equivalent today ;-)

    IMHO, ALGOL and the Burroughs eventually led to C and Unix. The idea of a machine designed to have the OS in a high level language being a new bit…

    But what about those extra 4 bits? They both limited what a word could do and empowered it… If you didn’t have the right tag bits, you could not execute it as an instruction, nor overwrite it…

    Tagged architecture

    The most defining aspect of the B5000 is that it is a stack machine as treated above. However, two other very important features of the architecture are that it is tag-based and descriptor-based.

    In the original B5000, a flag bit in each control or numeric word was set aside to identify the word as a control word or numeric word. This was partially a security mechanism to stop programs from being able to corrupt control words on the stack.

    Later, when the B6500 was designed, it was realized that the 1-bit control word/numeric distinction was a powerful idea and this was extended to three bits outside of the 48 bit word into a tag. The data bits are bits 0–47 and the tag is in bits 48–50. Bit 48 was the read-only bit, thus odd tags indicated control words that could not be written by a user-level program. Code words were given tag 3. Here is a list of the tags and their function:

    Wouldn’t it be nice if your whole OS could be ‘tagged’ such that things like stack overflow and other ‘hacking’ techniques would fail? …

    Ah, well… We live in a MS Intel world now…

  5. M Simon says:

    It doesn’t have anything to do with changed word sizes per se. So 1234 and 12345678 are both the same endian-ness.

    Yes. But is 12345678 loaded as 87654321 or as 43218765?

    So what is the advantage of big endian? Printing. Little endian is the more logical; big endian prints better (easier to read without having to do a mental conversion).

  6. E.M.Smith says:


    Thanks for that! I’ll give it a try.

    Always nice when someone with clue shares a bit and saves some hours…

    @M Simon:

    The advantage comes in dependent on architecture / word size. Small processors don’t have a lot of space, so getting the least significant bits first makes it easier to do things like adds of longer numbers. Big machines get it all in one fetch, so care more about things like multiple fetches to get the parts. So if you want to add 1035 and 245 in a little endian you get 5+5 then 3+4 (with carry) then 0+2 then 1+null (and all the associated carry bits). In big endian, you get 1035 + 0245 and one run through a big adder… then store. For the small chip to get 1+null of unknown rank, then 0+2 of unknown rank, then 3+4 of unknown rank then 5+5 of one’s place rank, then undo all the unknowns and do the carries backward… it’s a pain. (all in binary of course).

    So now we have 64 bit Intel chips with the endian flavor most suited to 8 or 4 bit chips… Oh Well…

    And yes, to your point: There is both byte endian and word endian. There are some mixed endian where the word order is one way and the byte order the other. Some machines have mixed order with some words one way and some the other. All, largely, hidden from anyone who isn’t doing systems programming… or dealing with old FORTRAN written by folks who didn’t think about it when choosing their data structure on write… ;-)

  7. M Simon says:

    And don’t even get me started on C. It is a productivity killer. Back when there were software shoot-outs, C never won. It usually came in last, beaten by all the Forth teams.

    My typical result when competing against a C team was 10X faster. I occasionally was 100x more productive. OH. Yeah. My programs were easier to read than the typical C stuff. Here is a bit on that:


    Q. Why is Forth not as widely used as C?
    A. C is much better suited to most business environments.

    Q. So why use Forth?
    A. Because it is possible to write better programs in Forth than in C.

    Q. Could you define “better” in this context?
    A. Easier to read and maintain, fewer bugs, smaller, faster development.

    Q. So why doesn’t everyone use Forth?
    A. Because it is also possible to write worse programs in Forth than in C.

    Q. Perhaps you should also define “worse” …
    A. Impossible to read and maintain, more bugs, bigger, longer development.

    Q. Why is this?
    A. Because Forth does not impose any restrictions on what you do or how you do it. However large your C program gets the source for it will still be C. Since Forth is extensible, you can extend the language so that part of your program can be written in a superset of Forth, specifically tailored to the application.


    I had a team of 3 well disciplined (by me) Forth programmers. We consistently beat a team of 30 C programmers. We got the job done in 1 month. They were still struggling after 6 months. This went on for 2 years. The government inspector who looked at our code said it was the best written code (in any language) that he had seen in 3 years. (the project was a government R&D shoot out for a military radio)

    So why is C used by business? Because you can apply 30 mediocre programmers to the job. Why should Forth be used? – you only need 3 good programmers. Even if you pay them double you get a 5X cost advantage. Not to mention the time saved. Which multiplies the cost advantage.

    Forth is a multiplier. It makes good programmers better. It makes bad programmers worse.

    Argo’s ensemble of sonar, lights and cameras was orchestrated by an array of computers that were each programmed in a different computer language. The computer on the unmanned Argo itself was programmed in Forth, a concise but versatile language originally designed to regulate movement of telescopes and also used to control devices and processes ranging from heart monitors to special-effects video cameras. The computer on the Knorr was programmed in C, a powerful but rather cryptic language capable of precisely specifying computer operations. The telemetry system at either end of the finger-thick coax cable connecting the vessels, which in effect enabled their computers to talk to each other, was programmed in a third, rudimentary tongue known as assembly language.

    Forth was the only high-level language that could be used on the submersible Argo’s computer.
    Why would that be? Well C has the notorious problem of code bloat. Which means you need a “bigger” chip. Which means more power.

    As you might have guessed I’m into Forth. I do control stuff with small micros:

  8. M Simon says:

    So now we have 64 bit Intel chips with the endian flavor most suited to 8 or 4 bit chips… Oh Well…

    Intel was always big on backward compatibility.

    I wrote a 32 bit by 32 bit multiplier using Forth style assy lang for the 8051 (a byte machine at the hardware level). It was for the A320. A buddy copied that code and used it on an F16 project. So yeah. I know about carries. And the need to save intermediates. But if you think it through and use the stack wisely it is not difficult. BTW almost all processors until the advent of the 68000 were byte machines despite some 16 bit registers. Loved the 68000 series. Very clean. Almost totally orthogonal. The ARM stuff is designed for C and its stack thrashes. But we get around that. Some. It does complicate the assy lang.

  9. M Simon says:
    Ultrasonic Head Controlled Wheelchair

    This interface for a motorized wheelchair permits individuals with quadriplegia to control the wheelchair’s speed and direction by tilting their head in the desired direction of travel.


    Used my hardware (Z-80) with a built in (ROM) Forth.

  10. M Simon says:

    The advantage comes in dependent on architecture / word size. Small processors don’t have a lot of space, so getting the least significant bits first makes it easier to do things like adds of longer numbers.

    Nothing prevents you from doing that in little endian machines. Address, fetch, operation, increment address, fetch, operation, etc. The only advantage I ever saw in big endian was in the print outs. 4321 -> prints in the “right” order. 1234 prints in reversed order. But a lot of that has to do with how printing is done. Little endian is more logical and you don’t have troubles like out of sequence byte order in larger numbers on some machines.


    Well any way good luck dealing with it.

  11. Steve C says:

    Your experience with “features” reminds me of when I was learning some basic shopfitting skills from an ex-theatre carpenter. (It’s the same business. As long as it looks OK from the front, it doesn’t matter what horrors are holding it all together round the back!) Anything you couldn’t easily get rid of, but which the customer would notice, got turned into a “bijou featurette” (a theatrical phrase, I suspect, not a shopfitting one). A dirty great plaster moulding in the middle of the ceiling? Paint it a contrasting colour and stick a big triple spotlight fitting in the middle of it … job done! Sounds like the software biz takes the same approach.

    I seem to recall that, when Tallbloke had his run-in with the Fuzz, he mentioned that his collection included a few old SPARC workstations. Possibly he could help?

    The pipe symbol is shift-backslash on a UK keyboard, to the left of the Z. I occasionally run into the complementary problem with live CDs that ship in US keyboard mode. I’d need to know the syntax to locate the other wandering characters, though. Still, if it had shipped in French, you’d find the “AZERTY” keyboard lost you letters as well … ;-)

    @M. Simon: Fraternal greetings from a co-religionist, in programming matters at least. I concur with your comments both on the world’s finest language and the execrable C (with or without however many plusses). Admire the best, don’t give a FIG for the rest!

  12. Steve C says:

    Hate to add to the Broken Feature Hell, but this may be one right here (or at least, back at the control panel).

    I noted a while back on TD that comments there weren’t getting added to the Recent Comments sidebar. I noted a few days ago that they still weren’t. Dunno if it’s a WordPress “feature” overflowing or what, but it ain’t doing what it oughta. Reckon it’s TE time ;-)

    Also, whilst at T, a word in favour of not being restricted to mere hexadecimal in numbering subsequent pages, but going totus porcus and using base 36. (in which base, of course, Fig Forth used to store the CPU type, aye, even unto the Z80!)

    OT, but better on a current thread than on TD at the mo’, there seems to be the threat of further damage to the Standard Model of Physics we know and love, if this is right. Having no experience with Planck vacuum fluctuations or the charge radius of the proton, I hope someone more nucleonically literate than me can translate the implications for the rest of us, but this seems to be pretty central:

    “… the fact that Haramein utilizes Planck vacuum fluctuations to predict an incredibly accurate value for the charge radius of the proton is extremely significant to physical theory. However, even more noteworthy is that Haramein in the same swoop solves the quantum gravity issue (the fact that gravity does not fit well at the quantum scale) by demonstrating that both cosmological gravitation can be described in terms of Planck vacuum fluctuations and that the strong force that binds the protons together is that very same force acting on them, producing a unified view of physics. These are non-trivial results …” (my bold)

    If ’tis so, “non-trivial” could be distinctly non-false.

  13. Steve C says:

    (my attempted bold)

  14. E.M.Smith says:

    @M. Simon:

    Why I’ve always liked shell script programming. It is “Forth Like” in that you can create new ‘words’ via script entries in the bin directory. I’ve never programmed in Forth, but studied it. It is Reverse Polish in execution, so my old HP Calculator training made me like it right off.

    Only real downside is that it is a stack based machine orientation. Register machines can go a bit faster on more complicated problems. I suppose a gaggle of small stack based interpreters could run many instances of the Forth machine in parallel…

    144 fully fledged computers.
    Just one chip.

    This gem is our GA144 multi-computer chip. It is designed to give you options that have never before existed and to place them under your control by writing software.

    With 144 independent computers, it enables parallel or pipelined programming on an unprecedented scale. Map a data flow diagram or an analog block diagram onto its array of computers for continuous processes without interrupts or context switching.
    With instruction times as low as 1400 picoseconds and consuming as little as 7 picojoules of energy, each of the 144 computers can do its work with unprecedented speed for a microcontroller and yet at unprecedentedly low energy cost, transitioning between running and suspended states in gate delay times. When suspended, each of the computers uses less than 100 nanowatts.

    It’s part of the package:
    arrayForth® Developer Tools

    Our software development tools are available free of charge to our customers. Collectively called arrayForth, they are the foundation of our proprietary CAD system used for chip design. It includes assembler/compiler, example source code including all ROM on each chip, a full software-level simulator for each chip, and an Interactive Development Environment for use with real chips. arrayForth will always be the gold-standard software for working with our chips, so we feel we owe it to our customers to give it to them. Free.

    So there you go. A single massively parallel cpu chip with 144 cores that you can run with FORTH built in.

    Enjoy ;-)

    I really like the 68xxx family too. Being Mac biased, I’ve only transitioned to using Intel for most things in the last decade. Likely going to use a home grown for my next phase (just because of what is being done to make the not-a-bios a client of Microsoft and the DR Police). Have not yet settled on what. Some multicore with front end chip. Not sure which one. GreenArrays or Parallela or.. Since Linux is C biased, a gaggle of ARM for the front end makes sense, but having a gaggle of nice little FORTH machines as a general purpose stack hoard would also be fun ;-)

  15. E.M.Smith says:

    @Steve C:

    I’m sort of agnostic on C. Used it. Never loved it. Found it easier to get things done in a dozen other languages. (FWIW, I actually like FORTRAN for straight forward math problems. Easy to get data in / out – fixed format is your friend in FORTRAN and a pain in C, IMHO)

    Oh Well, as they say.

    FORTH always appealed to me as a very compact language with a clear method of operation. The object oriented stuff always looked just way overcomplicated for no real gain. (Yes, I’ve managed a project to completion and ship written in C++ so my feelings are not based on lack of familiarity).

    In many ways I’m from the “whatever tool works best” school. Heck, I’ve even done some PL/1 that was kind of fun. The old joke was that you could write any language you wanted in it. It was true. Old COBOL programmers write PL/1 that looks like COBOL. Old FORTRAN folks’ code looks like FORTRAN. You can tell the guy who started in Pascal too…

    In some ways, my favorite was ALGOL. I have the free compiler downloaded, but not installed at the moment. Still in use in some places (largely Europe). Unisys still makes a stack oriented chip for it, IIRC. Maybe I’ll port ALGOL to some stack multicore chip and make my own OS ;-)

    Ah well… for now I’m resolved to learn Perl. The “Swiss Army chainsaw” of languages… I think it will be useful for some of the mass data manipulations I need to do, and for gluing together some old FORTRAN i/o with more sleek processing…

    I’ll get on to fixing TD and the bold now ;-)

  16. M Simon says:

    E.M.Smith says:
    20 July 2014 at 9:12 pm

    I have the GreenArrays demo board in my possession. Programmed it too. “Mary Had A Little Lamb” (in honor of Edison) in one core with one slot left over – just for fun.

    BTW the GreenArrays chip has some registers besides the usual Stack registers.

    The two stack architecture is way under appreciated. All “C” compilers use it. Separating the Data Stack from the Return Stack eliminates Stack thrashes. One of the many faults of “C”.

  17. M Simon says:

    If you ever get into Forth look at its object method. Very simple. It separates the object into two parts – the code and the data. Yeah. It is sort of a theme.

  18. M Simon says:

    I mostly do bit banging. There is no better tool for that than Forth. IMO.

  19. Ursa Felidae says:

    Question that is perhaps off topic: At what age should a child learn programming, and what would that language be??? I was thinking something fun and colorful like Visual Basic, but it has been a few years since I played around with it.


  20. E.M.Smith says:

    @Ursa Felidae:

    As soon as they are interested in it. I would start with BASIC. That is what it was invented for. Beginner’s All-purpose Symbolic Instruction Code. It is what I used in my classes when teaching “Introduction To Computers” for non-computer science majors. It is also used for some production work ( I supported a medical accounting system using HP Basic – that is rather fancy – for a year or so).

    I’ve not used Visual Basic, but it ought to be fine too. There are others that would work too. Really it depends on the level of interest of the person. Show some kids Unix / Linux shell scripting and stand back… they will be hackers by age 10…

    @M. Simon:

    I’ve been occasionally thinking about FORTH. The idea of getting a Parallela board or similar is calling to me… but just having one is not very interesting. Getting the cores to do something is what would be interesting. So the e-gcc would do it. But then I’m just a consumer of mips… So I got to thinking of all those cores with a stack machine on each one… and could I use a Forth machine / loader to get ALGOL-60 hosted? (It really wants a stack machine – like the B5500 / B6700 was; to make everything re-entrant and recursive… )

    On the other hand, the idea of spending a lot of time to make Yet Another Virtual Stack Machine just to put ALGOL on top of it via a Forth layer seems a bit, er, ‘less than efficient’… with my time or with the hardware. Much as I’d enjoy playing with ALGOL on a stack machine again…

    I’ve packratted away a copy of the ALGOL source somewhere…

    I know. I could just run it on a register machine. It is already ported and ready to go. But it just isn’t the same… I miss my big old multi-CPU stack hardware virtual memory machine… Wonder if anyone has an HP 3000 (original stack machine design) style chipset on a board these days…

    Oh Well. I’m just nostalgic for the days before everything was a register machine in one architecture. Intel is just driving the market into a boring corner. Sigh.

  21. M Simon says:

    E.M.Smith says:
    22 July 2014 at 2:33 am

    If there is something you want to do let me know. We can probably work something out. My email is on the sidebar at:

    As you can see from the top of the page there I’m starting with a series of environmental sensors. I have others in progress. And we are working with other processors besides NXP.
