Cruel Raspberries…

Occasionally an offhand remark, perhaps even one said with a snide, “challenge” tone to it, will cause me to wonder. And dig, maybe just a tiny bit…

One of those happened a week or three ago.

I was telling a computer engineer friend about the Raspberry Pi and some of the things you could do with it. Probably a bit effusive, but as I’m generally criticized for being too “bland” or “reserved” (thanks Mum…) it’s rare that I’m “effusive” about anything. But in the case of the R.Pi I was happy with it, so may have been…

Don’t remember exactly what I was “effusive” about. I’d gotten Samba and file shares going, a Torrent Server (that’s humming still), and a caching DNS server. I think we’d been talking about his use of Arduino for teaching robotics and I was suggesting the R.Pi might be fun for kids to program too… He was talking about learning Java, as a lot of Arduino projects are heading that way (and kids seem to learn Java easily) or some such. I made some comment about languages on the R.Pi, and that I’d gotten FORTRAN running easily. He then challenged with something like “What, no COBOL?” (in that mocking kind of tone…)

Now both of us learned FORTRAN as our first computer language at the same school and from the same instructor. (Old college roomie… ) He’s an Engineer by degree, training, and career; and has used FORTRAN. So it was a bit of a ‘dig’ to have him slamming me with COBOL… ( I had one COBOL class that I hated… and did some minor maintenance on COBOL programs when out of school… but it’s really a horrid, wordy, cumbersome language with arcane rules about reading files and writing records, or maybe the other way around… in any case, the structures you read are different from those you write, for the “same thing”. )

Well, time passes. And the “slight” fades. But sometimes not the “I wonder…”

So I found this:

http://opencobol.org/

OpenCOBOL is an open-source COBOL compiler. OpenCOBOL implements a substantial part of the COBOL 85 and COBOL 2002 standards, as well as many extensions of the existent COBOL compilers.

OpenCOBOL translates COBOL into C and compiles the translated code using the native C compiler. You can build your COBOL programs on various platforms, including Unix/Linux, Mac OS X, and Microsoft Windows.

The compiler is licensed under GNU General Public License.
The run-time library is licensed under GNU Lesser General Public License.

Since it’s a translator to C, and C runs on the R.Pi, it’s pretty much guaranteed this will work on the Raspberry Pi.

Though I don’t know if I can be that cruel to the Raspberry as to force it to do COBOL…

It is already in general Debian:

http://www.opencobol.org/modules/newbb/viewtopic.php?topic_id=1205&forum=1

For Debian, it’s

apt-get install open-cobol

So it ought to be there… Dare I?

The VM with Emulator with…

Unrelated… it seems that early on, with hardware scarce and memory more limited, some folks wanted to do a load of development (and likely compile a lot of Debian… perhaps even OpenCOBOL) and made a Virtual Machine to do so. This is complicated by the fact that the R.Pi is an ARM chipset and most folks have an Intel computer. So they used an ARM emulator inside the virtual machine…

Now on a high end quad core or more box with a couple of dozen GB of memory, you can still get a heck of a lot of performance boost by using a Virtual Machine (VirtualBox) with an emulator inside of it… though for the rest of us it does look just a tiny bit cruel.

Take a chip set designed to make things fast and efficient by having the minimal instructions needed (the definition of RISC Reduced Instruction Set Computer…), and run it via emulation on a CISC (Complex Instruction…) that is not a real CISC CPU but is in fact a Virtualized CPU on virtual hardware inside a software wrapper running the Linux operating system… that is then hosted on a multi-threaded multi-CPU CISC monster with pipelining and threading and… all running under MicroSoft Windows.

Now that’s cruel Raspberries if anything is…

But then, to install Debian Linux on that emulated chip in that virtual CISC under a Linux OS, inside a core of an Intel CPU, wrapped in MicroSoft Windows and, then, in a fit of fancy, to compile, port, install and run COBOL via a translator to C. Well… I’m sure someone will do it, but I don’t have to watch.
;-)

How to do the VM / qemu stuff:

http://russelldavis.org/2011/09/10/virtualbox-vm-for-raspberrypi-development/

http://russelldavis.org/2012/01/27/setting-up-a-vm-for-raspberry-pi-development-using-virtualbox-scratchbox2-qemu-part-1/

http://russelldavis.org/2012/01/27/setting-up-a-vm-for-raspberry-pi-development-using-virtualbox-scratchbox2-qemu-part-2/

http://russelldavis.org/2012/01/28/setting-up-a-vm-for-raspberry-pi-development-using-virtualbox-scratchbox2-qemu-part-3/

You Know I Pull Wings Off Flies?

From the “burning ants with lenses and pulling wings off flies department”…
you know I’ve got to “go there”, don’t you?:

 pi@dnsTorrent ~ $ sudo apt-get install open-cobol

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  autotools-dev libcob1 libcob1-dev libdb-dev libdb5.1-dev libgmp-dev
  libgmp3-dev libgmpxx4ldbl libltdl-dev libncurses5-dev libtinfo-dev libtool
Suggested packages:
  db5.1-doc libgmp10-doc libmpfr-dev libtool-doc ncurses-doc autoconf
  automaken gcj
The following NEW packages will be installed:
  autotools-dev libcob1 libcob1-dev libdb-dev libdb5.1-dev libgmp-dev
  libgmp3-dev libgmpxx4ldbl libltdl-dev libncurses5-dev libtinfo-dev libtool
  open-cobol
0 upgraded, 13 newly installed, 0 to remove and 250 not upgraded.
Need to get 2,977 kB of archives.
After this operation, 8,298 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libgmpxx4ldbl armhf 2:5.0.5+dfsg-2 [20.6 kB]
Get:2 http://mirrordirector.raspbian.org/raspbian/ wheezy/main autotools-dev all 20120608.1 [73.0 kB]
Get:3 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libgmp-dev armhf 2:5.0.5+dfsg-2 [552 kB]
Get:4 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libgmp3-dev armhf 2:5.0.5+dfsg-2 [13.7 kB]
Get:5 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libltdl-dev armhf 2.4.2-1.1 [203 kB]
Get:6 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libtinfo-dev armhf 5.9-10 [89.6 kB]
Get:7 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libncurses5-dev armhf 5.9-10 [202 kB]
Get:8 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libtool armhf 2.4.2-1.1 [618 kB]
Get:9 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libcob1 armhf 1.1-1 [87.6 kB]
Get:10 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libcob1-dev armhf 1.1-1 [111 kB]
Get:11 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libdb5.1-dev armhf 5.1.29-5 [775 kB]
Get:12 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libdb-dev armhf 5.1.6 [2,256 B]
Get:13 http://mirrordirector.raspbian.org/raspbian/ wheezy/main open-cobol armhf 1.1-1 [228 kB]
Fetched 2,977 kB in 33s (88.3 kB/s)
Selecting previously unselected package libgmpxx4ldbl:armhf.
(Reading database ... 61520 files and directories currently installed.)
Unpacking libgmpxx4ldbl:armhf (from .../libgmpxx4ldbl_2%3a5.0.5+dfsg-2_armhf.deb) ...
Selecting previously unselected package autotools-dev.
Unpacking autotools-dev (from .../autotools-dev_20120608.1_all.deb) ...
Selecting previously unselected package libgmp-dev:armhf.
Unpacking libgmp-dev:armhf (from .../libgmp-dev_2%3a5.0.5+dfsg-2_armhf.deb) ...
Selecting previously unselected package libgmp3-dev.
Unpacking libgmp3-dev (from .../libgmp3-dev_2%3a5.0.5+dfsg-2_armhf.deb) ...
Selecting previously unselected package libltdl-dev:armhf.
Unpacking libltdl-dev:armhf (from .../libltdl-dev_2.4.2-1.1_armhf.deb) ...
Selecting previously unselected package libtinfo-dev:armhf.
Unpacking libtinfo-dev:armhf (from .../libtinfo-dev_5.9-10_armhf.deb) ...
Selecting previously unselected package libncurses5-dev.
Unpacking libncurses5-dev (from .../libncurses5-dev_5.9-10_armhf.deb) ...
Selecting previously unselected package libtool.
Unpacking libtool (from .../libtool_2.4.2-1.1_armhf.deb) ...
Selecting previously unselected package libcob1.
Unpacking libcob1 (from .../libcob1_1.1-1_armhf.deb) ...
Selecting previously unselected package libcob1-dev.
Unpacking libcob1-dev (from .../libcob1-dev_1.1-1_armhf.deb) ...
Selecting previously unselected package libdb5.1-dev.
Unpacking libdb5.1-dev (from .../libdb5.1-dev_5.1.29-5_armhf.deb) ...
Selecting previously unselected package libdb-dev:armhf.
Unpacking libdb-dev:armhf (from .../libdb-dev_5.1.6_armhf.deb) ...
Selecting previously unselected package open-cobol.
Unpacking open-cobol (from .../open-cobol_1.1-1_armhf.deb) ...
Processing triggers for man-db ...
Processing triggers for install-info ...
Setting up libgmpxx4ldbl:armhf (2:5.0.5+dfsg-2) ...
Setting up autotools-dev (20120608.1) ...
Setting up libgmp-dev:armhf (2:5.0.5+dfsg-2) ...
Setting up libgmp3-dev (2:5.0.5+dfsg-2) ...
Setting up libltdl-dev:armhf (2.4.2-1.1) ...
Setting up libtinfo-dev:armhf (5.9-10) ...
Setting up libncurses5-dev (5.9-10) ...
Setting up libtool (2.4.2-1.1) ...
Setting up libcob1 (1.1-1) ...
Setting up libcob1-dev (1.1-1) ...
Setting up libdb5.1-dev (5.1.29-5) ...
Setting up libdb-dev:armhf (5.1.6) ...
Setting up open-cobol (1.1-1) ...
pi@dnsTorrent ~ $

So now I’ve gone and done it…

I’ve installed COBOL on my Raspberry Pi. It’s no longer just a DNS server / Torrent Server / Samba – M.S. File Server / FORTRAN Climate Codes port station… it’s now also a COBOL workstation. Sigh.
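Next, of course, it ought to actually compile something. A minimal smoke test that I’d expect to work with the Debian package’s cobc front end (untried as of this writing; the little program and the flags are just my assumption of the usual OpenCOBOL usage):

```shell
# Write a tiny COBOL program; -free lets cobc accept free-format source,
# so no counting columns like in the bad old days.
cat > hello.cob <<'EOF'
IDENTIFICATION DIVISION.
PROGRAM-ID. HELLO.
PROCEDURE DIVISION.
    DISPLAY "Hello from COBOL on the R.Pi".
    STOP RUN.
EOF

cobc -x -free hello.cob   # -x: translate to C, compile, and link an executable
./hello
```

Under the hood that one cobc line is the whole translate-to-C-then-gcc pipeline the OpenCOBOL folks describe.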

Guess what I’ll be telling the “Old College Roomie” tomorrow?
;-)

I guess next thing I need to do is learn how to remove packages.
;-)


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Tech Bits. Bookmark the permalink.

46 Responses to Cruel Raspberries…

  1. Petrossa says:

    cobol still gives me the creeps. That 60 char limit, the exact placement of text on a line or else. Almost made me drop programming altogether.

  2. BobN says:

    I knew a Cobol programmer, he seemed weird, was it a coincidence? LOL

  3. Steve Crook says:

    I cut my programming teeth on COBOL on ICL2900 mainframes and have good memories. I found that, mixed with a bit of the system programming language to do the bits that would have been cruel and unusual in COBOL, it proved to be surprisingly versatile. At one time I wrote a finite state machine to parse a large query/display ‘natural’ language and the interpreter that executed the display language on returned query items. The interpreter ran to about 20-30k LOC, and the parser/lexer (didn’t write the lexer) another 10k or so. It proved to be remarkably easy, apart from the maintenance of the state table.

    As for position dependent layout, plainly the world has learned nothing, because we have Python.

  4. E.M.Smith says:

    A long discussion on various ways and degrees of package removal:
    https://www.linuxquestions.org/questions/debian-26/how-do-i-get-apt-get-to-completely-uninstall-a-package-237772/

    So looks like a variety of choices. Maybe later I’ll have time to decide which one to use…
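    From that thread, the usual gradations seem to be something like this (untried here as yet; the package name is just the obvious example):

```shell
sudo apt-get remove open-cobol     # remove the package, but keep its config files
sudo apt-get purge open-cobol      # remove the package AND its config files
sudo apt-get autoremove            # sweep out dependencies nothing else needs anymore
```

    That autoremove step is the one that would claw back most of those 13 “extra packages” the install dragged in.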

    @Petrossa:

    I’d forgotten the 60 char fixed format “issue”… (it’s been a while… and IIRC that limit was removed in the commercial version where I was doing minor maintenance).

    @BobN:

    IMHO, both COBOL and RPG (RPG II ?) programmers are a ‘different sort’. At one extreme are the C / FORTRAN folks. Very math oriented and lots of short terse symbols used (and compact code too). At the other extreme are the COBOL / database guys with a load of words to do anything of interest. More like reading a novel written by someone with a weird dialect problem… Then RPG is like ticking boxes on a check sheet with some modifiers…almost not a language at all, really. Non-procedural and limited.

    (Don’t get me started on PL/I – where you can write FORTRAN, or COBOL, or any of a couple of other languages all inside the approved syntax…; or APL which has been called “write only programming”… and takes a special keyboard with obscure symbols on it…; and LISP for folks in love with parenthesis, and whatever happened to the straitjacket of Pascal and Ada?…)

    My favorite was Algol 68. Sort of a half way house between Pascal and C. Not as terse and cryptic as C, but not a straitjacket like Pascal. I wonder if there’s any Algol for Debian… Hmmm…

  5. E.M.Smith says:

    @Steve Crook:

    You have my admiration. I was never able to really master COBOL. “Sort of functional” was about all I could do.

    I don’t really “speak Python” (though I’ve ported some things written in it). Is it strongly positional? (Last time I did anything with it was about 2009 and that was just ‘compile and install’ some C libraries for it in GIStemp and run the code as provided…) I kind of remember something about position dependent chars at the front of lines… Guess I need a ‘refresher’….

  6. E.M.Smith says:

    Oh Dear…

    pi@dnsTorrent ~ $ sudo apt-get install algol68g
    Reading package lists… Done
    Building dependency tree
    Reading state information… Done
    The following NEW packages will be installed:
    algol68g
    0 upgraded, 1 newly installed, 0 to remove and 250 not upgraded.
    Need to get 422 kB of archives.
    After this operation, 1,030 kB of additional disk space will be used.
    Get:1 http://mirrordirector.raspbian.org/raspbian/ wheezy/main algol68g armhf 2. 4.1-1 [422 kB]
    Fetched 422 kB in 2s (150 kB/s)
    Selecting previously unselected package algol68g.
    (Reading database … 61758 files and directories currently installed.)
    Unpacking algol68g (from …/algol68g_2.4.1-1_armhf.deb) …
    Processing triggers for man-db …
    Setting up algol68g (2.4.1-1) …

    Well… now I’ve got Algol68 on the R.Pi too….

    In a way, it’s kind of amazing what all is already built and ready to go. Stuff that was just sitting in Debian already and “just works” after a compile or with a dedicated interest group that did some ‘touch up’ for ARM at some point.

    (And likely some bits that ‘need patching’ and will get attended after some damn fool, like me, actually tries using Algol or COBOL on a R.Pi and turns up some odd corner with ‘issues’… and that same ‘interest group’ finds out what we’ve done with their favored software on some platform they don’t have ;-)
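    And should I get brave enough to actually feed it a program, Algol 68 Genie takes source directly; an untried sketch (the a68g invocation and print syntax are my assumption from its documentation):

```shell
# A one-line Algol 68 program, run straight from source by the Genie interpreter
cat > hello.a68 <<'EOF'
print(("Hello from Algol 68 on the R.Pi", new line))
EOF

a68g hello.a68
```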

    While I’m on the subject, some resources for Algol:

    Open source Algol:
    http://sourceforge.net/projects/algol68/files/

    Algol home page:
    http://www.algol68.org/

    Has a pretty good list of links on it too.

    Well. Now I’m in a pickle…. While I really liked Algol, do I really want to be writing code in a language from 1968? 45 years back? Well, maybe ;-)

  7. Gary says:

    Made my living writing/maintaining COBOL programs for 18 years. Pedestrian, adequate, stable, easy to learn. Remember its origin – military – Grace Hopper http://en.wikipedia.org/wiki/Grace_Hopper. It was an advancement for its time and place. Regard it like classical Latin or Greek.

  8. E.M.Smith says:

    @Gary:

    Never could get motivated to learn Greek either ;-)

    (Latin I can puzzle out some bits and I’m ‘working on it’…)

    So I guess this means you can get a comfortable COBOL workstation for $35 that likely has more processing power than those mainframes from 18 years back…

    In fairness, I’d had a fairly intense FORTRAN immersion, and then an Algol class (in the Engineering department) so was pretty much indoctrinated that THEY were the right way to do things. Then signed up for a single (small units / little time or instruction…) class in COBOL that was mostly “here’s the book, there’s the lab… come ask me if you have questions”… I suspect that with a better ‘presentation’ I’d have done better. It was in some ways frustrating as I had to “unlearn” a lot of the FORTRAN way I’d just learned…

    Now, a dozen languages later, I’d likely be less biased toward it; especially in a decent class setting. It did seem “easier” when I had to do some maintenance than when I’d been taking the class… even though it was a dozen years later with no use in between.

  9. Steve Crook says:

    @ E.M. Smith 25 May 2013 at 5:47 pm

    IIRC Python requires indent of ‘if’ clauses rather than using braces. Seemed like a gimmick that is always going to blow up in your face sooner or later. Like deciding to only use C braces when forced to :-)

    When I started programming in C, I had a hard time coming to grips with just how easy it was to shoot oneself in the foot. I’d been spoiled by COBOL. I was used to only having to deal with debugging logic errors and C introduced me to a whole new class of bug I could create just by a typographical error. Ahhh, those were the days.

  10. J Martin says:

    I went on a Cobol training course and hated it. I concluded that Cobol was commissioned by a committee too dumb ever to be able to do any programming themselves.

    I liked Algol, but PL1 was my favourite.

  11. Steve Crook says: 25 May 2013 at 4:41 pm
    “I cut my programming teeth on COBOL on ICL2900 mainframes…”

    I worked on the 2903, 2910, 2920 and 2930 machines at West Gorton and Kidsgrove. I worked with Ed Mack, David Dace and Charlie Portman. In those days if you used anything other than COBOL you could get fired. At ICL it was hard to know who you worked for thanks to “Matrix Management” that caused confusion at all levels!

    My final job at ICL was as project manager for the ME29 which sold quite well. Later while I was working at STC, Kenneth Corfield decided to acquire ICL. Later STC got gobbled up by Nortel and then Fujitsu. I wonder if anything is left other than the pension fund that coughs up a few dollars each year.

  12. Gary,
    You have to love the inimitable Grace Hopper. I use this video to help people remember that electrical signals travel roughly one foot in a nano-second:
    http://www.wimp.com/gracehopper/

  13. j ferguson says:

    Or, you could run it on cp/m and get sharp on ddt and pipping.

  14. DirkH says:

    “Take a chip set designed to make things fast and efficient by having the minimal instructions needed (the definition of RISC Reduced Instruction Set Computer…), and run it via emulation on a CISC (Complex Instruction…) that is not a real CISC CPU but is in fact a Virtualized CPU on virtual hardware inside a software wrapper running the Linux operating system… that is then hosted on a multi-threaded multi-CPU CISC monster with pipelining and threading and… all running under MicroSoft Windows.

    Now that’s cruel Raspberries if anything is…”

    The art is of course writing the compilers, emulators, transpilers and VM’s in such a way that you lose as little power as possible when crossing each platform/language border.

    “He was talking about learning Java, as a lot of Arduino projects are heading that way (and kids seem to learn Java easily) or some such.”

    Java is on its way out IMHO. Javascript (a completely different language) is getting more interesting every day. Javascript is very easy to learn as well, has semantics as powerful as Scheme or Python, and modern JIT compilers give it an edge over Python.
    I found this helpful:
    http://www.w3schools.com/js/default.asp

    As the type system is based on Duck typing like Python, it is way easier to learn than the boxing/unboxing/container/generics mess of Java. I disliked Java from the start due to the lack of templates; which are necessary in statically typed OO languages. Only much much later did they bolt on their generics. As Javascript is not statically typed, it never had that problem but suffered instead from slowness. Which has been rectified since about 2008 by Google and Apple’s JIT compilers.

  15. E.M.Smith says:

    Perhaps it’s Java Script he was talking about for the kids. Frankly, I didn’t realize there was that much difference between them so have not kept them “tidy” in different buckets. Classing it all as “Java*” … So I guess I need to start keeping two buckets ;-)

  16. Chuck Bradley says:

    Not long after the IBM System/360 appeared I worked at a small company that had an IBM 1401 installation and was upgrading to a 360. Management had the delusion, common at the time, that COBOL was the cure for all data processing problems. Most of the programmers thought COBOL was bad, evil, weak, inefficient, etc.

    I took a couple programs that had been written in 360 assembly language, and rewrote them in COBOL. It took me less time to debug them than the BAL programmers spent. That did not change any opinions. But my object programs, including all the linked in library routines, were smaller than the BAL programs, much smaller. That does not say anything about the quality of the COBOL compiler. The BAL programmers just did not understand the architecture of the 360. We could not shrink their code by tweaking it; the only way was to replace almost all of it. That made COBOL less unacceptable.

    The 1401 treated a blank like a zero in many circumstances. New programs on the 360 using files produced on the 1401 sometimes failed because the 360 knew the difference when in native mode. I wrote an exception handler in COBOL to fix the data. That took away more of the resistance. COBOL is not a good choice for an exception handler, but it is a fine choice for removing doubt about its use.

    Related to code size, years later there was a debate about implementation languages at DEC. One survey of developers found that of those that had not used PASCAL or BLISS, twice as many developers thought PASCAL was superior. I worked in the group that developed BLISS compilers. We wondered how good the code was, compared to assembly language. We finally found a module that one developer had written in BLISS for a component of VMS. It was replaced by the assembly language equivalent, written by an ace, one of the original VMS team. The BLISS code was 2% larger. Several years later we were asked to put together a presentation about the same topic. By then, the difference was 1%.

    I have also observed many times that RPG is a superb language for the many commercial DP applications where it is widely used. This note is not to support a best language or complain about a bad one, but just to observe that many of our strongly held opinions should not be so strongly held.

  17. E.M.Smith says:

    @Chuck Bradley:

    Well, about the only “strongly held” opinion I have on computer languages is that it’s better to use a language you know well than the newest trendy language you don’t know well…

    I’ve sometimes “taken rocks” for programming things in some old / odd language or another rather than the “trendy language du jour”… but I’ve also managed enough projects to know that a really good and experienced programmer can make just about any language do just about anything… and knowing where the potholes and deadfalls are in a given language is more important than some arcane feature you have never used in the new language…

    The only exception being that I’ve seen many “Object Oriented” projects suffer massive code bloat / slowness problems. Can’t say if it is a structural thing to the languages, inexperience of the programmers, coincidence due to O.O. taking off just as everyone got lazy about memory (“hardware is cheap” ethic taking over) or what. I’ve also seen a lot of damn fast programs written in things like FORTRAN, ALGOL, C, etc….

    Though frankly, the one thing that annoys me most is the tendency to a new language of preference every 5 years or so. I’ve managed to watch a half dozen come and go… Perl and Python, once trendy, now being shoved aside by the Java Javascript trend (that’s already a bit old itself…) I’ve mostly just “given up” on trying to stay “trendy” as it’s just not worth the workload. (I have done maintenance on Perl and Python without too much study, and since most of what I do is “port and fix”, getting proficient at “write from scratch” isn’t really needed.)

    In the end, I find myself mostly using FORTRAN these days (as what I’ve been doing for the last 4 years was written in it) along with various scripting languages. I’ve written C, and could do so again, but it’s been a while. (Still read it fairly well though…) And while folks tout things like R and Python and Java – I’ve just not felt motivated enough to bother. (Though don’t mind at all if someone on one of my teams wanted to use the new trendy things… I’ve generally figured “let folks use the tool they want”… and it’s worked well.)

    At the end of the day, I mostly like relatively straightforward languages.

    I just re-read the “book” on Algol at that algol home site. I was reminded of some of the “quirks” that had bothered me in the past… and why I’d not kept up using it. Nothing wrong with it, and it was my favorite language once… But now, looking back on it after C and others, it is just a bit more “lumpy” than I’d remembered…

    So maybe some of the “negatives” remembered about COBOL are from that same mold as the “positives” I was remembering about ALGOL. Both a bit “colored by time” ;-)

    Ah, well. Someday the Perfect Programming Language will be developed ;-)

  18. Petrossa says:

    EM
    C is pretty straightforward, isn’t it? I like it a lot for the freedom it gives me, and the ease of IL Assembly. C++ has its advantages too, but admittedly a hello world application can easily be a megabyte if you’re not careful.
    Python, java,c#,.net etc. i can’t stand exactly for the reason why i like C. They are black box lego systems. You can do what they allow you to do and that’s it.
    But market forces…. nowadays you can hardly find a professional programmer that can use anything else but MS VS, which should be outlawed for creating the most unreadable code known to man.

  19. DirkH says:

    E.M.Smith says:
    26 May 2013 at 10:09 pm
    “Perhaps it’s Java Script he was talking about for the kids. Frankly, I didn’t realize there was that much difference between them so have not kept them “tidy” in different buckets. Classing it all as “Java*” … So I guess I need to start keeping two buckets ;-)”

    Netscape invented Javascript and chose the name to take a ride on the Java hype. Entirely different language.
    http://en.wikipedia.org/wiki/Javascript#Uses_outside_web_pages

  20. DirkH says:

    Petrossa says:
    27 May 2013 at 9:08 am
    “Python, java,c#,.net etc. i can’t stand exactly for the reason why i like C. They are black box lego systems. You can do what they allow you to do and that’s it.”

    Common industry practice these days is to write the engine in C/C++, and expose some functions and objects to an embedded Python or Javascript/ECMAscript/Ruby/Lua interpreter. This gives you the best of both worlds – ability to program on hardware level with C/C++ , and rapid prototyping / changing application logic with the scripting language.

    As large scale C++ systems involving lots of libraries can still take on the order of 10 minutes upwards for a complete recompile, even using a monster PC, scriptability has its advantages.

    To get a C++ / Python hybrid to work, check out SWIG; and google for “embedding and extending Python”.

  21. Petrossa says:

    Tnx DirkH. Shows how much i am out of the loop already. Luckily i only program for fun, i retired at 45. I amuse myself with racking up my home automation system with the bells, whistles and the kitchen sink. Just installed a finicky warning system to count the number of ice creams my wife eats, which sets off a blue police-type revolving lamp at a certain number.

  22. E.M.Smith says:

    @Petrossa:

    My only real complaint about C is that it presumes a style of I/O file layout (i.e. not fixed format) so using things that are fixed format (like FORTRAN files…. or just fixed column hand typed files) can be annoying. Yes, it does let you do it; but a built in “Fixed FORMAT” type statement would cut down the typing needed… (OTOH, doing variable format layout in FORTRAN is as much a PITA going from C to FORTRAN… the two just don’t have the same idea about how files ought to be laid out… Again, yes, you can do it; just how much “workaround” vs “built in easy” is the issue…)

    As someone who tries to get as much done with as little typing as possible, C is very nice… even if it does take a few lines to do fixed format file reads… So it avoids things like typing out the word “MULTIPLY” as in COBOL, and you don’t need to do BEGIN and END when {} would do. Yet, by the time I’ve declared all the structures and variables, opened the file, fread data into a struct and then gotten set up to use it… Well, in FORTRAN you have enough implied variables and easy file open and it’s just FORMAT and READ variable list, then some math and WRITE (often with the same FORMAT). Fast. Simple. Obvious. (Almost as easy in Algol, but then again, our Algol at school had a FORTRAN influenced I/O package…)

    For reasons beyond my ken, many language designers have had a chip on their shoulder about Fixed Format files and seem to have designed their languages to make them painful to use. OK, not much of an issue for ‘built new’ use; but if interacting with anything already made, it just becomes a PITA. (One of the most valuable languages / systems I ever used was a DBMS with built in dump / load functions that were easy to configure. You could use it to glue together files of just about any layout. It used a “dump to fixed / load from fixed” as a kind of lingua franca for many interface discontinuities. Made life so easy…)
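    For quick one-offs outside any compiler, the shell’s old standbys slice fixed columns painlessly too. A sketch, using an invented record layout (not any real file of mine):

```shell
# Invented fixed-format record: cols 1-4 year, cols 6-10 value, cols 12-13 count
echo '1987 12.34 07' > record.dat

cut -c1-4  record.dat    # year field   -> 1987
cut -c6-10 record.dat    # value field  -> 12.34
cut -c12-13 record.dat   # count field  -> 07
```

    No structs, no declarations, no FORMAT statement… which is perhaps why the C folks never felt the lack.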

    Other than that, it doesn’t seem to get in my way, nor force me to type books, nor enforce some bizarre ideas about what is “proper” and ‘evil’… nor leave out key facilities due to some bias of the creators… nor…

    Oh Well… Why we have so many computer languages, I guess. Just a large tool box so the whole world need not look like a nail… ;-)

    I do find the “starting arrays at zero” a bit of a bother too. Algol lets you start and end arrays from any point, as you like it. Yes, the compiler has more work to do. So? The number of “off by one” errors I’ve seen from folks coping with “array item one is zero” or “array item 8 is number 7” just really wants a bit more flexibility there… but you get used to it… find ways to work around it. Eventually it gets built into your brain and you think it is “normal” and everything else is wrong and don’t notice that the ‘work around’ isn’t just how things ought to be done…

    There aren’t that many times you want an array to run from ” -20 to +20 “, but when you need it, it can be fairly convenient and make for obvious clear code.

    Oh Well…

  23. E.M.Smith says:

    Golly… the things people do… and the things you can find…

    From that Algol “home page” via a link or two, I ended up at a site that preserves old software (even Algol 60!) and the run environment, such as the ICL 1900
    http://www.icl1900.co.uk/icl1900/index.html
    A mainframe of years gone by…

    I worked on the ICL 1900 series machines for several years as both a systems programmer (most of my time) on George 3 support on user sites (maintenance and writing enhancements) and George 2+ development with ICL; as well as commercial programming in COBOL and PLAN, with some FORTRAN thrown in for good measure on GINO-F support.

    The ICL 1900 was a word machine with 24-bit words, containing 4 * 6-bit characters. Instructions were whole words, and a word could be addressed in either ‘word’ or ‘character’ mode depending on the type of data it held. IMHO a nice instruction set for commercial DP.

    Typical program sizes were fairly compact; at one site where I worked there was a ‘standard’ maximum of 20K, which was worked out for efficient multi-programming/swapping. You could do a lot in 20K!!

    Most 1900 machines were multi-programming under the control of the EXECutive or Operating System; smaller machines still had an EXECutive but were single or dual programming. There were no fixed “partitions” within which programs had to abide, but there was full hardware protection to prevent one program from errantly trespassing in another’s space. Likewise all peripherals were available in a general pool, allocated to a program when requested and then released back to the pool when finished with. Every program started at address zero wherever it was located in physical memory; hardware ‘datum’ and ‘limit’ registers provided the translation/protection for user programs, which could be moved in memory as other programs altered their size or were deleted.

    The memory size (core store or later ECL electronic) was typically 32K, 48K for smaller machines, 64K, 128K, 192K or 256K for larger machines. The biggest that I heard of was 512K and I think that was a University machine.

    An optional Hardware Floating Point unit was available, but tended not to be fitted on commercial machines.

    Yes, 6 bit characters and 24 bit word size… “Things were different then” ;-)

    So why mention this? Well…. Say you wanted your own ICL running “George”…

    http://sw.ccs.bcs.org/CCs/index.html

    Instructions for running George3 on your own Raspberry Pi are here.

    Yes, in no time at all you can be running Algol 68 on an emulated ICL mainframe on your $35 Raspberry Pi….

    Isn’t technology wonderful?….

    Update

    Here’s some more…

    http://www.designspark.com/blog/my-raspberry-pi-thinks-it-s-a-mainframe

    Making an IBM “Mainframe” on your R.Pi… running VM…

    http://www.designspark.com/blog/a-raspberry-pi-vax-cluster

    In a blog post last month I looked at how a Raspberry Pi can be used to emulate a formidable IBM mainframe, and in this post I describe how a pair can be used to emulate VAX computers which can then be configured to form a VMScluster.

    The MicroVAX 3900 hardware being emulated this time is a little more modern and somewhat smaller than the IBM 4381 processor, but the VAX architecture and OpenVMS operating system are no less impressive. On introduction in 1989 an entry level MicroVAX 3900 would have set you back over $120,000 and, as with IBM’s VM operating system, you’d be mistaken if you thought that OpenVMS was dead and buried as it runs many mission critical workloads today.

    Emulation of the VAX hardware has been made possible by a pretty amazing piece of software called SimH. In order to be able to run OpenVMS on this a licence is required, but fortunately these are available free of charge via the OpenVMS Hobbyist programme.

    So, just how much do I want to be reminded of my VM / IBM days or have a Vax Cluster in my pocket?…

  24. Gail Combs says:

    I remember Grace Hopper for her useful quote for dealing with BureauRats “It’s easier to ask forgiveness than it is to get permission.” (So very true)
    And of course her ‘BUGS’ so even those of us who are computer illiterate know who Grace Hopper is.

  25. E.M.Smith says:

    @Gail:

    I always liked Grace Hopper. And Ada Lovelace… When teaching “intro to computers” I’d bring them both into the discussion and the women in the class would suddenly wake up ;-) I’d also use knitting books as an example of a programming language (complete with subroutines!) and point out that punched cards came from their earlier use in looms and weaving. About then the guys would be giving me that Oh No look but the women in class were realizing that computing was not a “Guy Thing”… The early history of computing was strongly influenced by some key women, and Grace was one of them.

    @All:

    More ways to do odd things on your R.Pi… Want a Commodore Amiga?

    http://www.raspberrypi.org/phpBB3/viewtopic.php?f=78&t=17928

    by Chips » Thu Sep 20, 2012 10:05 pm
    This emulator is based on UAE4All rc3 which itself is highly optimized (means non-accurate) + cyclone 68K arm core.

    Current status:
    Full Speed A500 achieved (except few rare cases).
    Works in command line.
    Sound is emulated.
    Up-scaling is done using dispmanx.
    This version is not so accurate, so don’t blame if some demos aren’t working.

    I’ve seen another emulator under Linux that is near perfect, so only a matter of time for the R.Pi to be that good.

    How about some other smaller nostalgia?

    http://www.raspberrypi.org/archives/883

    Fuze – a ZX Spectrum emulator running on Raspberry Pi

    29th of March 2012 by liz

    At the celebrations for the BBC Micro’s 30th birthday on Sunday (more on that to come, when the organisers’ videos are available), we met the rather excellent Andy Taylor. Andy volunteers for the UK Computer Museum, and had been working on getting their Raspberry Pi software ready to exhibit at the event. Rather than sitting back and twiddling his thumbs when he was done, he decided to fill his spare time by porting Fuze, a ZX Spectrum emulator, to the Raspberry Pi. I’m not sure why (nostalgia’s a powerful thing), but seeing Manic Miner running on a Raspberry Pi was, for me, even cooler than seeing Quake 3 back when we demoed it last year. (I note we never did set up that Deathmatch. I shall add it to the Big List of Things to Do.)

    How about all those old DOS programs in your attic or garage?

    http://www.engadget.com/2013/03/29/raspberry-pi-dos-emulator/

    DOS emulator brings Raspberry Pi back to the ’90s for Doom LAN parties

    By Steve Dent posted Mar 29th, 2013 at 12:08 PM

    Who can forget the first time they obliterated their buddy with a BFG9000 during a spirited Doom game? Raspberry Pi coder Pate wants to resurrect those good times with an rpix86 DOS emulator that opens up the world of retro PC games like the aforementioned FPS pioneer along with Duke Nukem 3D, Jill of the Jungle and others. It works by creating a virtual machine your Dad would be proud of, based on a 40Mhz 80486 processor, 640KB base RAM, 16MB extended memory, 640 x 480 256-color graphics and SoundBlaster 2.0 audio. Of course, the Pi is worlds beyond that with a 700Mhz ARM CPU, 512MB of RAM and HDMI out — so, most enthusiasts with one of the wee $35 boards will likely be all over hacking it to play those classics.

    Just amazing….

    Not so much that it is being done, just that it is being done so fast. At this point, the history of computing from DOS and Amiga through BSD / Unix / Linux and even Vax Clusters and VM Mainframes, and more, all on a dinky little board…

    God I’d love to be assigned a “History Of Computing” class to teach… I’d have a half dozen of these set up with terminals showing what they could do, and a picture next to each one showing the Real Deal (and how big they were ;-)

    I’m starting to wonder if there’s anything folks won’t do with this card ;-)

    I’m running out of systems I’ve used to emulate. A Cray X-MP-48 and some Macintosh machines are sort of it… Oh, and a Lisa once. There were a couple of misc mini-boxes, but they were mostly running some flavor of Unix (like SunOS, a BSD port, and a couple of others). Then there was that HP3000 with the HP operating system… But it’s just not that big a deal for them. (Though a “Toy” X-MP-48 would be fun… “MP” is multiprocessor. 4 CPUs at 100 M-Flops each, and 8 Meg-words, or 64 MBytes of memory… ought to be able to do that with 4 x R.Pi boards with power left over… Maybe I could even make a little C shaped frame to hold it ;-) No idea where I’d get a copy of the OS… or applications…

  26. E.M.Smith says:

    I hit “post” too soon:

    http://hackaday.com/2012/11/08/raspberry-pi-gets-risc-os-can-now-play-elite/

    The first ARM-based computer was the Acorn Archimedes, a mid-80s computer with 512kB of RAM and no hard drive. The Archimedes ran RISC OS, a very nice graphical operating system written explicitly for the ARM architecture. RISC OS is now available for the Raspberry Pi, finally bridging the gap between educational computers from 1987 and 2012.

    Of course, a very much updated version of 25-year-old operating system running on a Raspberry Pi doesn’t mean much without a ‘killer app,’ does it? For the original Acorn Archimedes the killer app – and one of the best video games of the 80s – was Elite, a space trading and combat game that featured vector-style ships. [Pete Taylor] downloaded the Raspi RISC OS image and got Elite running using an Archimedes emulator and, of course, the Archimedes port of Elite.

    It’s a pretty neat development if you’re in to alternative OSes and one of the best space-based games ever made. Well worth a download, at the very least.

    So add an Acorn Archimedes to the list..

    I think I’m gonna need a bunch more SD cards ;-)

    UPDATE:

    And a virtual Mac:

    http://minivmac.sourceforge.net/

    May 8, 2013

    Today’s Development source snapshot may now work on Raspbian for the Raspberry Pi. (A binary for Linux on ARM is included, stretching a bit the meaning of source snapshot.)

    it then goes on to describe a ‘fix’ that ought to make it go…

    and the Apple IIgs:

    http://gsport.sourceforge.net/developing.html

    GSport: an Apple IIgs Emulator

    Raspberry Pi

    At first, you may want to update/upgrade your base OS:

    sudo apt-get update
    sudo apt-get upgrade

    Then, add a line with the value snd-pcm-oss to the /etc/modules file and reboot to enable sound. Change the permissions to the resulting device /dev/pcm (after rebooting): sudo chmod 666 /dev/pcm

    Depending on the version of your OS, the following packages may need to be installed:

    xfonts-base: sudo apt-get install xfonts-base
    libX11-dev: sudo apt-get install libX11-dev
    libxext-dev: sudo apt-get install libxext-dev

    Use the vars_pi file for compilation:

    cd into the src/ directory
    rm vars; ln -s vars_pi vars
    make

    The resulting executable is called gsportx.

    Then there is this Java based PC emulator; so one can likely run Windows on it…

    http://twit88.com/blog/2008/12/31/java-x86-pc-emulator/

    JPC is a pure Java emulation of an x86 PC with fully virtual peripherals. It runs anywhere you have a JVM, whether x86, RISC, mobile phone, set-top box, possibly even your refrigerator! All this, with the bulletproof security and stability of Java technology.
    […]
    Cross-Platform – JPC is completely implemented in Java, so it works seamlessly across all major computing platforms, including Windows, Linux and MacOS. JPC even works on non-x86 based hardware like ARM and SPARC.

    So that’s about a year’s worth of projects just getting one of each image up, configured, and running some cool old applications…

    Another UPDATE

    Looks like there’s fpc, the Free Pascal Compiler, too, along with something called Lazarus that is billed as “Object Oriented Pascal”…
    http://wiki.freepascal.org/Lazarus_on_Raspberry_Pi

    Installing and compiling Lazarus
    Simple installation under Raspbian

    In the Raspbian OS it is easy to install Lazarus and Free Pascal. In order to do this simply open a terminal window and type:

    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get install fpc
    sudo apt-get install lazarus

    This installs a precompiled version of Lazarus on the Raspberry Pi. Of course, a network connection is required. Installation may take about 30 minutes, but major portions of this process take place automatically. After installation you may instantly start Lazarus from the “Programming” section of the LXDE start menu.

    Not sure what an “Object Oriented Pascal” would be, but I presume someone who liked Pascal was feeling left out of the O.O. Fad and decided to glue it on…

    This page talks about building and using it including a build from sources, and has a neat Mandelbrot program including sources:
    http://www.michellcomputing.co.uk/blog/2012/11/lazarus-on-the-raspberry-pi/
    though it looks like it pushes things close to the performance edge:

    Whilst researching this I have found that others have already attempted this with varying degrees of success. Common problems encountered seem to be caused by running out of memory, or card storage space during the build process. Indeed, I hit both issues even with the Model B’s 512Mb RAM. In the case of RAM usage, I had to reduce the amount of RAM dedicated to the GPU from 128Mb to 64MB. I have seen reports for Model A Pi’s where this needed to be reduced to 16Mb to avoid problems. I also ran out of space on a 4Gb SD card and had to swap to an 8Gb one to provide enough head-room.

    After a couple of hours solid processing I had a working build of Free Pascal 2.7.1 with ARM hard float support and Lazarus 1.1. For those interested in repeating this process you can download the scripts here: FPC Build Script and Lazarus Build Script.

    Mandelbrot code: http://www.michellcomputing.co.uk/data/Mandelbrot01.zip (All of 128 kB…)

    To test the system’s graphics capability, I have written a small Mandelbrot set renderer, as shown below, (Source Code). This uses the simplest form of TCanvas.Pixels method for writing the image. This project takes around 2 seconds to generate the image shown below on my Intel i7 based system, but nearly 2 minutes on the Pi! I haven’t performed any detailed bench marking yet, but I will investigate this further.

    So far I’ve resisted putting Pascal, or anything O.O., on the Raspberry Pi… but I like Mandelbrots… though at 2 minutes to render one I think I can resist a bit longer…

    Though further down…

    It looks like there’s a lot of room for improvement and it’s not the R.Pi hardware that’s the issue:

    There is a very interesting Mandelbrot set program called ‘triangle2.c’ supplied on the Raspbian SD image in the /opt/vc/src/hello_pi/ folder. This is written in C and uses OpenGL and fragment shaders to achieve very fast access to the Broadcom GPU hardware present on the Raspberry Pi. It produces stunning results, calculating a full screen Mandelbrot set with overlaid Julia set in real-time. So there should be plenty of scope to reduce my program’s 2 minute render times! It will be especially interesting to see how fast a pure ARM CPU program can operate before having to resort to direct GPU calls for further improvements.

    One last comment about Free Pascal is to mention that it can be compiled as a cross compiler, meaning it can be an Intel based program that I can run on my desktop system, but generate ARM binary output for execution on the Pi. Whilst running a native Lazarus desktop on the Pi is perfectly feasible (in fact, it would be a very good platform for introducing people to visual programming in Pascal), large projects would certainly feel more comfortable when worked on with a much faster CPU, especially whilst compiling larger projects and I will definitely be looking into this over the coming weeks.

    Well… as a cross compiler it might be more interesting too… Gee, cross compilers… a whole other area I’ve not investigated yet… ;-)

  27. DirkH says:

    “Not sure what an “Object Oriented Pascal” would be, but I presume someone who liked Pascal was feeling left out of the O.O. Fad and decided to glue it on…”

    Don’t remember Turbo Pascal? Anders Hejlsberg and two other Danes developed it and it got marketed by Philippe Kahn in Cali under the company name Borland. Turbo Pascal developed OO capabilities, basically emulating the C++ work of Stroustrup but with Pascal as the base language. I bought a Turbo Pascal 6.0, I think in 1991, and it was a very good OO dev environment on a DOS computer. It later developed into Delphi, adding Visual Basic-style form-based GUI development; Hejlsberg later went to Microsoft and developed the .NET family of languages, and lately has developed a superset of JavaScript called TypeScript (adds optional static typing to JavaScript).

    Object Oriented Pascal is probably just Turbo Pascal without the company trademark…

  28. compuGator says:

    E.M.Smith says (3rd Update) 28 May 2013 at 2:51 am :

    Not sure what an “Object Oriented Pascal” would be, but I presume someone who liked Pascal was feeling left out of the O.O. Fad and decided to glue it on…

    Um, no. Seems to me that the “o.o. fad” was a mutation of modular coding, which has been practiced for decades by conscientious programmers. Even in FORTRAN, albeit supported more by personal discipline than by its characteristic separate compilation.

    As best I’ve been able to figure it, object orientation boils down to a combination of modularity (including visibility and inheritance rules) plus an abstraction for dynamic-storage allocation (to support recursion, reentrancy, or threading). Or have I boiled away anything significant?

    It was already available in more concrete form in PL/I (no later than F-level ver. 5, ca. 1970), by using pointer variables as surrogates for abstract “objects” that were dynamically allocated using the controlled storage class. But I digress, and you did insist that we “[d]on’t get [you] started on PL/I”.

    Any Pascal described as “o.o.” is likely a rethinking of various modular-Pascal extensions or related language designs, notably Niklaus Wirth’s own Modula (1977). Alas, Wirth just couldn’t let go of some of his math-theoretically ‘sufficient’ – some might grumble “puritanically restrictive” – biases.

    So a more practically useful inspiration would’ve been UCSD Pascal, which was one of the original 2 O.S./development environments released for the IBM PC. It was ported by U.C. San Diego from their original research project native to LSI-11 systems (late 1970s), and commercialized as the “p-System”, as marketed by SofTech Microsystems. Alas, the p-System Pascal produced some variation on Wirth’s p-Code, and ran only on an interpreter with a nonDOS file system, and cost $100s extra; PC-DOS was $free and its compilers produced native 8086 code. Guess which one a professional Pascal enthusiast chose? And which one The Free Market chose?

    According to About Lazarus, the name properly applies only to “the class libraries for Free Pascal that emulate Delphi.”

  29. compuGator says:

    Ooops.

    Upon further reminiscing, the original PC-DOS (ver. 1) was not provided at no extra charge (see confirming source below), back when the PC’s only direct-access storage was a 5.25-in. floppy drive.

    So among programming-language products for it, only the BASIC interpreter came free. Compared to Pascal, it seemed nearly useless to me, but after all, the language had been named the “Beginners’ All-purpose Symbolic Instruction Code” (1965). I have no idea how much resemblance the language from Dartmouth still bears to recent versions of MS Visual BASIC.

    I recall the original Microsoft Pascal compiler being widely criticized. Perhaps its $300 cost created overly high expectations. Wikipedia confirms my vague recollection that the original Turbo Pascal cost $49.99. Early on, it was sometimes sold personally by Philippe Kahn himself, from a big box he’d carry around to computer clubs.

    And to wrap up my corrections, Wikipedia pointed out that CP/M-86 was a 3rd environment offered for the original IBM PC, altho’ at an unappealing $240, compared to $40 for PC-DOS.

    Sigh. I’ve had more than 30 years for these details to fade from memory. And it’s been a long time since I’ve seen my original IBM-logo sales receipt anywhere.

  30. E.M.Smith says:

    @compuGator:

    Well, the O.O. folks I’ve talked with have vociferously asserted that it isn’t just modularity or fancy subroutines. (It looks like it to me…) So since it is their turf, I have to take them at their word on it. They seem to hang a lot on ‘inheritance’ that just looks like a wrapper subroutine call to me. But Lord knows I’m not qualified to judge.

    I’ve managed a product development to production rollout that was written in C++ and participated in the code reviews; but never did catch the fever for it.

    I’ve written some Pascal. Yeah, nice language. Pure. Straitjacket…

    FWIW, I’ve also been employed writing a cost accounting system in H.P. Business Basic. Which I described by calling it “BASIC written by a frustrated Pascal compiler guy” ;-) (It has “BEGIN / END” pairs, variables with long names, functions and subroutines, and your choice of interpreting or compilation, among other “minor enhancements” ;-) So you can turn BASIC into a real language if you graft enough Pascal into it…

    So original Pascal did not have inheritance (which seems the critical thing to O.O. folks in defining what makes it different – based only on the fact that THAT is the thing they all bring up and present first and with the most vigor). O.O. Pascal adds the “necessary traits”, but like I said, I can’t say what all they are. (To me it always just looks like regular old reusable modular subroutines wrapped up in fancy language and a painful syntax for how you apply them – oh, and nobody ever knows what all is in the library so ends up rewriting lots of the same Objects anyway…)

    It’s on my list of “Someday Things” to actually learn an O.O. language and use it. But every time I’ve tried it’s just seemed way more trouble than it was worth… I’m sure I’m missing something, just not sure it’s worth trying to catch it ;-)

  31. E.M.Smith says:

    From the wiki:
    https://en.wikipedia.org/wiki/Object-oriented_programming

    In programming languages an object is the composition of nouns (like data such as numbers, strings, or variables) and verbs (like actions, such as functions).

    An object-oriented program may be viewed as a collection of interacting objects, as opposed to the conventional model, in which a program is seen as a list of tasks (subroutines) to perform. In OOP, each object is capable of receiving messages, processing data, and sending messages to other objects. Each object can be viewed as an independent “machine” with a distinct role or responsibility. Actions (or “methods”) on these objects are closely associated with the object. For example, OOP data structures tend to “carry their own operators around with them” (or at least “inherit” them from a similar object or class)—except when they must be serialized.

    Simple, non-OOP programs may be one “long” list of commands. More complex programs often group smaller sections of these statements into functions or subroutines—each of which might perform a particular task. With designs of this sort, it is common for some of the program’s data to be ‘global’, i.e., accessible from any part of the program. As programs grow in size, allowing any function to modify any piece of data means that bugs can have wide-reaching effects.

    In contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead, the data is accessed by calling specially written functions, commonly called methods, which are bundled in with the data. These act as the intermediaries for retrieving or modifying the data they control. The programming construct that combines data with a set of methods for accessing and managing those data is called an object. The practice of using subroutines to examine or modify certain kinds of data was also used in non-OOP modular programming, well before the widespread use of object-oriented programming.

    Which, after a long list of largely self referential words that try to define it as different, ends with a sentence that basically says “you can do the same thing with subroutines and functions” as part of modular programming… but then says, in effect, “but that’s not object oriented programming as it is something new”…

    I donno… it just looks like fancy function calls with an odd syntax wrapped up in self-aggrandizing new jargon to me. Yeah, it works. But every project I’ve managed with it, or seen done in it, ends up with significant code bloat and efficiency issues compared to doing it with a procedural orientation (though nobody cares as “hardware is cheap”…)

    The wiki goes on to try ever more to distinguish O.O. from Modular:

    Object-oriented programming focuses on data rather than processes, with programs composed of self-sufficient modules (“classes”), each instance of which (“object”) contains all the information needed to manipulate its own data structure (“members”). This was in contrast to the modular programming that had been dominant for many years and that focused on the function of a module, rather than specifically the data, but equally provided for code reuse and self-sufficient reusable units of programming logic, enabling collaboration through the use of linked modules (subroutines).

    Maybe my “block” is just that I’ve always seen data and process as strongly co-dependent. Just don’t see how you can isolate the data structure questions from the function / process questions. They are inherently bound to each other.

    Interesting sidebar:

    Looks like I worked for the folks who created Object Pascal when they were creating it!

    https://en.wikipedia.org/wiki/Object_Pascal

    Object Pascal is an extension of the Pascal language that was developed at Apple Computer by a team led by Larry Tesler in consultation with Niklaus Wirth, the inventor of Pascal. It is descended from an earlier object-oriented version of Pascal called Clascal, which was available on the Lisa computer.

    Object Pascal was needed in order to support MacApp, an expandable Macintosh application framework that would now be called a class library. Object Pascal extensions and MacApp itself were developed by Barry Haynes, Ken Doyle, and Larry Rosenstein, and were tested by Dan Allen. Larry Tesler oversaw the project, which began very early in 1985 and became a product in 1986.

    I was at Apple then, knew those folks, and my group reported to Larry Tesler. Which means they may well have been using my equipment in their development work (as we supported the Engineering computer needs / shop…) I remember it as MacApp, and I remember the transition from Pascal to C++; but had not made the connection to that history as “Object Pascal”… I suspect they called it Clascal (that I vaguely recognize) or something else.

    An Object Pascal extension was also implemented in the Think Pascal IDE. The IDE includes the compiler and an editor with Syntax highlighting and checking, a powerful debugger and a class library. Many developers preferred Think Pascal over Apple’s implementation of Object Pascal because Think Pascal offered a tight integration of its tools. The development stopped after the 4.01 version because the company was bought by Symantec. The developers then left the project.

    Apple dropped support for Object Pascal when they moved from Motorola 68K chips to IBM’s PowerPC architecture in 1994. MacApp 3.0, for this platform, was re-written in C++.

    I also remember the “Think Pascal” connection.

    Oddly, my group was instrumental in the move to the PowerPC chip. (It’s a long story… but some of our work ended up in the PowerPC chip as the IBM chip was morphed into it. We introduced the RS6000? workstation to the guys in Engineering who then blended the designs).

    So this means I was, in some small way, part of both making Object Pascal, and then the move to C++ at Apple. Golly.. Yeah, it was the equivalent of “electronic janitor” mixed with “match maker”, but still… My group, my organization, we were all in staff meetings together… ATG, Advanced Technology Group. (Most of what my group did was run the Cray, but we also did all the infrastructure like networks and email and VAX Unix boxes and more. Lots of plastic simulation and a load of other stuff too. Pretty much all the data archives and backups too.) We were not part of corporate I.T. but a dedicated high response high tech group inside Engineering for the exclusive use of Engineers.

    Well… guess I ought to learn to use Object Pascal ;-)

  32. E.M.Smith says:

    Why I find O.O. languages a bit of a pain. From that wiki page on Object Pascal. Here is “Hello World” in plain old Pascal, vs Object Pascal.

    Regular Pascal

    program HelloWorld;
    begin
      writeln('Hello World');
    end.
       

    Pretty darned simple and clear, eh? Declare the program name, then do the deed. Not a lot of overhead or complexity.

    Object Pascal

    First off, there are FIVE different dialects of Object Pascal, with mutually exclusive syntax. So you must choose one. I’m going to take one from the middle that’s about average length, then also show the longest one so you can compare the two.

    Delphi and Free Pascal

    program ObjectPascalExample;
     
    type
      THelloWorld = class
        procedure Put;
      end;
     
    procedure THelloWorld.Put;
    begin
      Writeln('Hello, World!');
    end;
     
    var
      HelloWorld: THelloWorld;               { this is an implicit pointer }
     
    begin
      HelloWorld := THelloWorld.Create;      { constructor returns a pointer to an object of type THelloWorld }
      HelloWorld.Put;
      HelloWorld.Free;                       { this line deallocates the THelloWorld object pointed to by HelloWorld }
    end.
    

    Oxygene Object Pascal

    namespace ObjectPascalExample;
    
       interface
     
       type
          ConsoleApp = class
             class method Main;
          end;
     
          THelloWorld = class
             method Put;
          end;
     
       implementation
     
       method THelloWorld.Put;
       begin
          Console.WriteLine('Hello, World!');
       end;
     
       class method ConsoleApp.Main;
       begin
          var HelloWorld := new THelloWorld;
          HelloWorld.Put;
       end;
     
    end.
    

    Maybe it gets better in really really large programs, where the overhead of all this can be amortized over something that “gives back” enough to make up for it. Maybe not…

    But at first blush it looks like a lot of indirection getting in the way of a simple goal state.

  33. Petrossa says:

    EM
    Embarcadero’s RADStudio XE has a Delphi/Pascal and a C++ compiler. I’m totally in love with it, despite it creating the hugest exe’s you’ll ever see. Very nice to mess about with. Highly recommended.

  34. Gail Combs says:

    …..”I’d also use knitting books as an example of a programming language (complete with subroutines!) and point out that punched cards came from their earlier use in looms and weaving….”

    FWIW, the second company I worked for was a family owned business that started out in 1902 making ‘lace’ for carriages – think Fisher Body. (Lace = narrow fabric, sometimes like fake fur.) The first hosiery and lace made on looms was in the mid 1700s – History of Machine-wrought Hosiery and Lace Manufactures

    The best loom in the company was an old antique made of walnut. It was used to make the fake fur wound onto a tube for Xerox copy machines. The unique quality of this machine was that it wove both sides of the lace, so the lace was straight and true without any camber. Most machines, including those made today, knit one side and weave the other, so the lace wants to bend in a circle or spiral. We had engineers over from Switzerland trying to figure out how to reproduce the workings of that old loom. They failed! This loom made a variety of laces thanks to the punch cards used to program it.

    If you own a car or house you have parts made by the company. All those fuzzy channels your car window rolls up into or the fuzzy weatherstripping on doors and windows.

    The company now seems to be out of the lace making business. I sure hope they kept that loom or donated it to a museum. (The Carriage Museum of Stony Brook was aware of the loom since I talked to the curator in the late 1980s)

  35. E.M.Smith says:

    @Petrossa:

    Perhaps you could enlighten me as to why O.O. is more attractive to folks who do it… To me it just looks like more trouble, more arcane jargon for the same results (does renaming a function a “method” really matter?…) and, as you pointed out, giant executable modules…

    I’m not antithetical to it (one of my guys wanted to use C++ to make a product. It worked well and he was productive in it) I just “don’t get it”… Maybe my “needs” are all too small to reach the benefit point so I don’t see it and they all look like a giant “Hello World” to me ;-)

    @Gail Combs:

    I’d wager the modern engineers could have duplicated it, but were not willing to work in walnut…

    Each wood has a particular character. One of them is surface friction. Some are self lubricating; others have just the amount of ‘drag’ wanted on a bit of thread. Then there’s the degree of ‘flex’ (and that it is slightly different in different directions…), and more.

    I’d guess they were trying to use modern materials and it “just wouldn’t work” since some fine point of timing or thread position was not right with a non-wood friction surface and flex. Lots of folks have lost touch with the “feel” of the surfaces of structural materials. It really does matter, especially in things like handling fabrics and threads.

    That “feel” is what we sense when picking up fine points of the surface. From roughness to even Van der Waals forces. So take a gecko. They can climb glass due to Van der Waals forces from their pads. I’d wager that making fine lace and ‘fuzz’ gets into Van der Waals land too. How different is walnut from, say, aluminum or plastic on electrostatic and Van der Waals forces? Sure, with enough physics and math you can model it and ‘work it out’… Or just take a weaver with a lifetime of ‘feel’ for the materials and pair them with a craftsman with a lifetime of ‘feel’ for the woods and all… Perhaps in one person. A thousand and one little lessons learned over half a century. They make something that works, but has no analytical map for others to follow…

    So we could duplicate it, but not replace it… if we were willing to just duplicate it exactly with attention to every little detail of the materials…

    It’s “craftsmanship”, and it shows up in all sorts of places, and it matters. Much of our “modern” method of engineered things has lost touch with the craft aspects. We gain a lot of low cost and direct manufacture, like “stack” designs that are easy for robots to assemble. But we lose the subtle bits. One example: a Western smith was in the tropics, marveling at their machete blades. He could make a fine blade, but his had a sharper transition from the hardened edge to the annealed spine. He found a local smith to work with…

    The “secret” was a particular gourd that grew in that area. Cut in half, it matched the outline of the machete blade. Red hot from the forge, blades were sunk into the gourd half with a chop, putting JUST the edge into the melon. The interaction of carbon and nitrogen from the melon, with cooling from the damp (but not bucket-of-boiling-water fast cooling…), and with residual spine heat keeping the center of the edge slightly annealed… It all produced a very hard surface layer (carbides and nitrides and rapid quench) with a strong but non-brittle core to the edge area, with a supple spine.

    Now, if you never saw the process of the melon ‘thunk’, how would you back engineer that? Does your typical western smith (or worse, Mechanical Engineer from a college) think in terms of “what melons and gourds do to metals”? Are melons even in the mind of a “modern” designer? Yeah, once explained it “makes sense”. And yeah, we can do the same things with other methods. But none as simple, cheap, and effective… (Urine can be used to nitride a surface too… think you find “urine treatment” in a college text book?… Maybe urea salt treatment, but it’s a lot easier to “find” urine than urea salts…)

    So there’s a long list of such bits of ‘craft’ that only exist in the minds of the craftsmen. Trying to backfigure them without ‘living the life’ is very hard.

    One other example: I was working in a hospital. The X-ray guy was ‘an old grey beard’ who had been doing it for a very long time, from before it was taught. I’d guess he started about 1930? (as this was the ’70s and he was about to retire). One of his proudest bits of “art”? A Doc had a patient with an ‘issue’ at the top of the spine. Wanted an X-ray, but knew the jaw was in the way (for some reason they needed a ‘front to rear’ view). Had no idea how to do it, but asked the X-ray guy to do what he could and give them “something”, even with the shadows of teeth and all to cope with. He smiles. He has an idea… He knows his craft. Produces an X-ray image from the front, head in normal position, clearly showing the spine in normal alignment with the base of the skull… and no jaw in the image at all.

    Needless to say, the Doc was thrilled (though puzzled). How? The X-ray guy put the patient in a brace so they could not move their head, then did a very long exposure as they moved their jaw slowly from wide open to closed to open to closed… The jaw and teeth just became ‘background’ shading bias, and with the right exposure that ‘white fuzz’ washes out. The jaw was never in any one place long enough to show as an image.

    Yes, now we would just chuck them in an MRI machine and that kind of ‘craft’ is lost. Does it really matter? Probably not so much, but some. Like how to properly put on a southern hoop skirt (all those gussets, stays, petticoats,…) and the best way to polish spats with plant products; they had a time, but most folks will never miss them. (Though real farm hams beat the commercial stuff… and are worth the effort to find someone who still has a smoke house for their own hams…)

    So I’d wager that a good local carpenter and weaver could reproduce that loom. Just not in “modern” materials…

  36. Petrossa says:

    EM
    O.O. is very handy. As an example, take string handling. In C you have to use pointer arithmetic, risky memcpy calls, etc. In C++ you have a String class that contains a multitude of well-behaved functions to handle your strings. And adding your own is as easy as pie.
    Multithreading…
    (working code from one of my little apps)
    Forward declaration:

        class TDownloader : public TThread
        {
        protected:
            void __fastcall Execute();
        public:
            __fastcall TDownloader(bool suspended);
        };

        __fastcall TDownloader::TDownloader(bool suspended) : TThread(suspended)
        {
        }

    Declaration:

        TDownloader *Getfile[MAX_CHANNELS];

        Getfile[CurrentChannel] = new TDownloader(true);
        if (Getfile[CurrentChannel])
        {
            Abort->Visible = true;
            Getfile[CurrentChannel]->FreeOnTerminate = true;
            Getfile[CurrentChannel]->Priority = tpNormal;
            Getfile[CurrentChannel]->Start();
            Stats->HideMessage();
        }

    Now I can multithread my heart out.
    Try writing that in pure C.

  37. DirkH says:

    E.M.Smith says:
    31 May 2013 at 3:00 pm
    “@Petrossa:
    Perhaps you could enlighten me as to why O.O. is more attractive to folks who do it… To me it just looks like more trouble, more arcane jargon for the same results (does renaming a function a “method” really matter?…) and, as you pointed out, giant executable modules… ”

    In my opinion the key advantage is the information encapsulation. I once downloaded a huge, capable 3D editing package; the authors had decided to declare it Open Source. It had been a commercial success 10 years before that and was written in C.

    Had they written it in an OO language, they would have naturally created classes for points, vectors, matrices and so on, and declared the implementation details “private”, so that code outside the classes’ methods would not be able to directly access the coordinates but would have to use the methods that the classes offer.

    like so:

        class Vector3D{
        private:
            double x; double y; double z;
        public:
            void AddVector(const Vector3D& other){ …

    Why is this an advantage? Well, in the package I downloaded it was difficult to impossible to find the place where a certain typical 3D operation was encoded; the operations were spread all over the place, specialized, duplicated, etc.

    Structuring an application into domain specific classes leads naturally to the implementation of methods in the class that they manipulate. You can still violate this but you would have to have a good reason for that.

    So you would expect to find matrix multiplication implemented in the matrix class and, ideally, nowhere else.

    I gave up on attempts at refactoring the 3D package into a maintainable form, as I had no commercial interest… the reason they gave it into the Open Source domain was probably the same – they had given up on maintaining it; applications with a more modern design had long overtaken them.

  38. DirkH says:

    At many times in history and for various, mostly technical reasons, C programmers have emulated C++’s object orientation with the following technique:

        // pseudo class (typedef lets us write the bare name in C)
        typedef struct Vector2D{
            double x;
            double y;
        } Vector2D;
        // pseudo method
        void Vector2D_AddVector2D(Vector2D* this, Vector2D* other){
            this->x += other->x;
            this->y += other->y;
        }

    The trick is to explicitly give the “this” pointer argument as first argument in the “methods” of our “class”. In C++ and other OO languages this parameter is passed implicitly.

    C programs where this technique is used are easy to transform into C++ syntax, and are often well structured, as the C programmer already had the ordering principles of OO in mind.

  39. DirkH says:

    Petrossa says:
    31 May 2013 at 10:26 am
    ” Embarcadero’s RADStudio XE has a delphi/pascal and a c++ compiler. ”

    These are descendants of Anders Hejlsberg’s Turbo Pascal and Turbo C++ from Borland. Borland sold them to Embarcadero.

  40. DirkH says:

    E.M.Smith says:
    31 May 2013 at 7:00 am
    “Why I find O.O. languages a bit of a pain. From that wiki page on Object Pascal. Here is “Hello World” in plain old Pascal, vs Object Pascal. ”

    ChiefIO, those are artificial Hello World examples that are intentionally constructed to show off the OO syntax. In all OO languages I use it is still possible to write the simple imperative Hello World version known from C. That includes the object oriented Turbo Pascal. When all you need is stdout, just use a simple printf or writeln or print or whatever it’s called.

  41. Petrossa says:

    DirkH, I know. I’ve been a Borland user since Turbo Basic, watching MS running after Borland all the time to catch up and never succeeding. Then it became CodeGear and now RadStudio.
    EM
    Encapsulation is sometimes actually a pain in the butt. I had to write a program around an old C library which resulted in this mess: https://dl.dropboxusercontent.com/u/1828618/botclass.zip

  42. DirkH says:

    Petrossa, I looked at your code. What you did there looks like a simple wrapper: wrapping an object oriented calling interface around a non object oriented one. Such wrapping tasks are always repetitive and tedious (whether OO is in play or not), often get automated with custom source code generators, and when you’re in a project where it is allowed, you can often compress the effort effectively using the X Macro technique.
    https://en.wikipedia.org/wiki/X_Macro

    I like X macros because once you know the trick you end up with less code to read and to understand. Often I work in projects where they prohibit me from using the C preprocessor because they fear chaos. In those cases I often end up generating the wrapper code with custom Python scripts and give them what I generated.

  43. Petrossa says:

    DirkH, the point was that the call to the existing C library needed a static pointer. Since in O.O. you can’t get a static pointer to a function outside of the class, this whole mess was necessary. i.e. Encapsulation can be a pain in the butt.

  44. DirkH says:

    Petrossa says:
    3 June 2013 at 9:26 am
    “DirkH The point was because the call to the existing C library needed a static pointer. Since in O.O you can’t get a static pointer to a function outside of the class this whole mess was necessary.”

    Why can’t you get a pointer to a function outside the class?

        int Function(int a);
        typedef int (*funcptr)(int);

        class MyClass{
            funcptr m_funcptr;
        public:
            MyClass(funcptr fp){ m_funcptr = fp; }
            int Execute(int a){ return (*m_funcptr)(a); }
        };

        MyClass* obj = new MyClass(Function);
        int b = obj->Execute(arg);

    Maybe I misunderstand you.

  45. DirkH says:

    Or maybe you wanted to publish the static function pointer to somebody outside the class. But in that case you can just declare the function as static and make it accessible using “public:”.
    Or make it accessible via returning a function pointer.
    A class can expose its innards when it wants to.

  46. Petrossa says:

    Tnx for thinking with me, DirkH. But having tried all that in 2002 and failing, I came up with this. It was some particularity with the library which by now I have long since forgotten.

Comments are closed.