OpenMP, OpenMPI, MPICH decisions decisions

When things are new, a thousand sprouts form. Eventually one comes to dominate and becomes the long-lived tree. (Or, for companies, the monopoly or near-monopoly “industry leader”.)

Parallel computing is in just such a stage. It’s been a good 20 years now since it really started taking off, and things are getting mainstream. Eventually the “shakeout” stage will come. We’re already seeing some of it in the “consolidation” of some versions of parallel “extensions” to various languages.

Yet we still have variations of MPI (the Message Passing Interface): OpenMPI, MPICH, … Then there is OpenMP, which is confusingly close to OpenMPI as a printed name. OpenMP handles threads inside a multi-core processor chip with shared memory. OpenMPI / MPICH are oriented toward network-connected systems, so they pass information around in “messages” and don’t expect shared memory. (You can use MPI inside a multi-core machine, but it isn’t as efficient as OpenMP.)
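
To make the “message” idea concrete, here is a minimal sketch of my own (not from any production code; it assumes an MPI implementation such as OpenMPI or MPICH is installed, built with mpicc and run with mpirun -np 2). Rank 0 hands a number to rank 1, and nothing moves unless you explicitly send it:

/* Minimal MPI sketch: two processes, no shared memory.
   Rank 0 sends one integer to rank 1 as a "message".     */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 got %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}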

I’ve mostly looked at MPI over the last few years. I figured it was time to look at OpenMP. It runs on compiler pragma statements so it’s relatively easy to look at some code and have a decent idea what it is doing, even if you don’t know OpenMP in any detail. BUT, to write code and have it work correctly takes a much better understanding. So I went looking for an example.
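
Just to give the flavor before the real example below, here is a trivial sketch of my own (compile with gcc -fopenmp). The pragma line is the entire parallel “syntax”; take it away and it is an ordinary serial program:

/* Trivial OpenMP sketch: one pragma turns the block into a team of threads. */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}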

This was the first one I ran into, and I like it.

https://www.softwarecoven.com/parallel-computing-in-embedded-mobile-devices/

Why?

Well, it is written for the R. Pi specifically, so any “Pi-isms” would have surfaced.

It is comprehensive. It covers an actual port of code, shows how to find optimization candidates, and has everything you need to do the same.

It is simple. Despite being comprehensive, the example is very easy to follow and “why” is clear at each step.

The author takes the time to be patient and complete. Not a lot of “magic hand-waving” you must work out.

Negatives?

Not much. There’s a little bit of “accent” to his English; I think it may be Indian Standard English. But if you can’t accept a few odd pronoun usages and one or two number disagreements, you ought not to be speaking English anyway, as there are at least a half dozen major variants.

It is written in C, which is good for folks writing C, but I’m interested in porting some Fortran codes, so doing it in that language is “an exercise for the student”. But I can read C fine, so the example was easy to follow. (Most “curly brace” languages are more alike than different, so be it Perl or C or “whatever”, it is easy to read if you have read any one; it’s the writing where the niggling differences show up…)

I did have one Ah-Ha moment in the reading. Looking at some C and thinking WT…? There was an issue of multiple threads executing and a potential for a variable to change after you evaluated it but before you actually did work on it.

Realize this example comes AFTER a couple of iterations of increasing complexity, so really he works up to it slowly. Don’t be put off by it as too complicated for a beginner, as he doesn’t begin here. Note that we compare corr to bestCorr, then do it again. The comments explain the reason.

#pragma omp parallel for
for (i = 1; i < seekLength; i ++) 
{
    double corr;
    // Calculates correlation value for the mixing position corresponding to 'i'
    corr = calcCrossCorr(refPos + channels * i, pMidBuffer, norm);

    // heuristic rule to slightly favour values close to mid of the range
    double tmp = (double)(2 * i - seekLength) / (double)seekLength;
    corr = ((corr + 0.1) * (1.0 - 0.25 * tmp * tmp));

    // Check if we've found new best correlation value
    if (corr > bestCorr) 
    {
        // For optimal performance, enter critical section only in case 
        // that best value found. In such case repeat 'if' condition as it's 
        // possible that other parallel execution may have already updated the 
        // bestCorr value in the mean time
        #pragma omp critical
        if (corr > bestCorr)
        {
            bestCorr = corr;
            bestOffs = i;
        }
    }
}

This is toward the end, when we have already asked for a distributed “for” loop with the “omp parallel for” pragma (a compiler directive; compilers that don’t know OpenMP simply ignore it), and it demonstrates how to use another pragma, “omp critical”, to restrict execution of shared code to when it is needed, and to ONLY one thread at a time.

So the first “if” just checks that we need to “go there” and execute that critical code at all: corr is bigger than bestCorr so far. THEN we execute the critical chunk. BUT, we might have been waiting while another thread updated the values, so before we actually DO something, we need to check again to see whether another thread changed bestCorr after our first check but before we got our turn.
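
To see why that second check matters, here is a self-contained toy of my own (NOT from the article) that finds the largest value in an array the same way, but deliberately leaves the second test out. The window for the error is tiny, so it can pass testing many times and still bite one day; that is exactly what makes this class of bug so nasty.

/* Racy maximum search (my sketch): same structure as the code above,
   but WITHOUT repeating the test inside the critical section.
   Compile with: gcc -fopenmp racy_max.c -o racy_max                   */
#include <stdio.h>

#define N 1000000

static double data[N];

int main(void)
{
    double best = -1.0;
    int bestIdx = -1;
    int i;

    /* Scramble the values 0 ... (N-1)/N so each value occurs exactly once */
    for (i = 0; i < N; i++)
        data[i] = (double)(((long long)i * 7919) % N) / (double)N;

    #pragma omp parallel for
    for (i = 0; i < N; i++)
    {
        if (data[i] > best)            /* first test, outside the lock */
        {
            #pragma omp critical
            {
                /* BUG: best may have been raised by another thread since the
                   test above; without re-testing here we can overwrite a
                   better value and end up with the wrong answer.             */
                best = data[i];
                bestIdx = i;
            }
        }
    }

    printf("best = %f at %d (true max is %f)\n",
           best, bestIdx, (double)(N - 1) / (double)N);
    return 0;
}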

Relevance to Non-Programmers

Why is this relevant to folks who are NOT writing parallel code?

Well, it illustrates the new kind of bug you can get in parallel conversions. Lots of the climate codes are going to be (or have been…) converted to parallel execution. It is a little bit of an unnatural act for a programmer to put the same test twice in a row. A “newbie” to OpenMP might very well not realize it is needed. I didn’t. So this kind of issue will lead to bugs that may or may not be caught in a complicated climate model, where folks mostly just look at the output to gauge if it is working, and the output changes with each model run to some extent anyway.

One must learn to think in terms of many flying balls at once, and that the order in which they land is important to track and control. Forget to check “the same thing twice” and you can get different, and likely wrong, results.

In Conclusion

So I now know to look at the Makefiles to see what settings are made to compiler flags to enable OpenMP (in codes like the MPAS model), and I know to look for the #pragma omp statements in the code to see where it is used.
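
For gcc and gfortran the flag to look for is -fopenmp (other compilers spell it their own way, e.g. Intel’s -qopenmp), so a quick first pass over an unfamiliar code tree is something like:

# Does the build turn OpenMP on?
grep -rn "fopenmp" Makefile* src/
# Where does the code actually use it?
grep -rn "pragma omp" src/       # C / C++ sources
grep -rniF '!$omp' src/          # Fortran sources

The Fortran form is a reminder that in Fortran the directives really are dressed up as comments (!$omp …), while in C they are #pragma lines.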

I also know that I need to learn to think in an added dimension: parallel time. At one moment in time I check a value, but at the different moment in time when I write a new value, the old value may have changed due to another thread. For that kind of code, the use of “critical” matters; but that brings with it the potential that any given thread only gets to run that bit of critical code AFTER some other thread finishes and AFTER the value checked may have changed (again). So an image of many threads running asynchronously, and SOMETIMES needing to write a variable, must also include an image of locking and queuing and then checking on state / variable changes prior to acting.

It’s a different way of picturing the code, the execution, and how the syntax works.

It’s also a different way of introducing pernicious bugs. Bugs hard to catch in code with a wandering non-deterministic output like climate models. Almost by definition, the output of those models is a bit chaotic. How would you spot a little more variation and chaos in codes that already run “never the same way twice”?

Epilog

So I used MPICH 2 in my earlier tests. It looks like OpenMPI is also available on the Pi. I need to try it out too.

https://en.wikipedia.org/wiki/Open_MPI

Open MPI represents the merger between three well-known MPI implementations:

FT-MPI from the University of Tennessee
LA-MPI from Los Alamos National Laboratory
LAM/MPI from Indiana University

with contributions from the PACX-MPI team at the University of Stuttgart. These four institutions comprise the founding members of the Open MPI development team.

That “consolidation” stage is underway…

https://en.wikipedia.org/wiki/MPICH

MPICH, formerly known as MPICH2, is a freely available, portable implementation of MPI, a standard for message-passing for distributed-memory applications used in parallel computing. MPICH is Free and open source software with some public domain components that were developed by a US governmental organisation, and is available for most flavours of Unix-like OS (including Linux and Mac OS X).
[…]
MPICH derivatives

IBM (MPI for the Blue Gene series and, as one option, for x- and p-series clusters)
Cray (MPI for all Cray platforms)
SiCortex (MPI SiCortex)
Microsoft (MS-MPI)
Intel (Intel MPI)
Qlogic (MPICH2-PSM)
Myricom (MPICH2-MX)
Ohio State University (MVAPICH and MVAPICH2)
University of British Columbia (MPICH2/SCTP, and Fine-Grain MPI (FG-MPI) which adds support for coroutines)

But it isn’t completed just yet…

On my “todo” list now is to test MPICH vs OpenMPI and also to trial OpenMP for multithreaded FORTRAN on the Pi. In theory, a hybrid build with OpenMP inside each board but OpenMPI distributing blocks between boards would be best for my little cluster. It is also going to be very tricky to get that right and to know how to prove it.
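
As a first sanity check of that hybrid idea, a little “who am I” program of my own (build with something like mpicc -fopenmp hybrid.c -o hybrid, then mpirun -np 4 ./hybrid with OMP_NUM_THREADS set on each node) shows both layers at once: one MPI rank per board, several OpenMP threads inside each rank.

/* Hybrid sketch: MPI ranks across the cluster, OpenMP threads inside each rank. */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Ask MPI for (at least funneled) thread support, since OpenMP threads
       will be running inside each rank.                                     */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    #pragma omp parallel
    {
        printf("rank %d of %d, thread %d of %d\n",
               rank, nranks, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}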

Then there is that coarray Fortran option. I think it ought to be in the latest Debian / Devuan Fortran, but I’ve not tried it nor searched for confirmation.

https://en.wikipedia.org/wiki/Coarray_Fortran

The first open-source compiler which implemented coarrays as specified in the Fortran 2008 standard for Linux architectures is G95. Currently, GNU Fortran provides wide coverage of Fortran’s coarray features in single- and multi-image configuration (the latter based on the OpenCoarrays library).

Since GNU and G95 are both available on Debian, it ought to be there… but sometimes hard-to-do bits get left out of early ports; especially if someone decides it isn’t important to the target user community. (i.e. “Nobody will need massively parallel FORTRAN on a home education toy system”…) It is easy for the folks doing the work (often for free) to decide to leave some hard and largely unused bit of a port effort to “later” or “the next guy”…

So yet another Dig Here! to see if it really is there and working. YADH? Or is that “yada yada YADH”? ;-)


50 Responses to OpenMP, OpenMPI, MPICH decisions decisions

  1. E.M.Smith says:

    An interesting tutorial / class outline for OpenMP that illustrates some of the power of it, some of the interesting other commands in it, and some very interesting ways to do it wrong and get pernicious errors. A bit of heavy thinking is needed to keep track of the moving balls in the air sometimes, but that’s the purpose of it…
    http://www.openmp.org/wp-content/uploads/omp-hands-on-SC08.pdf

    A list of several tutorials here:
    http://www.openmp.org/resources/tutorials-articles/

    A short, though interesting, discussion of another tutorial (the link is listed in it) that touches on Intel hyperthreading and how it works, and compares MPI to OpenMP with a hint at using SIMD (vector or GPU) in the innermost loops.
    https://www.msi.umn.edu/sites/default/files/OpenMP.tutorial_1.pdf

    Interesting stuff, but it looks increasingly like you really must be a very careful and very detail oriented programmer to make all this orchestrate properly. There are literally a lot of moving targets, but also moving guns, and each gun must hit the right moving targets… (Multiple threads moving on multiple sometimes shared sometimes not data…)

  2. Soronel Haetir says:

    I’ve never cared for stuff like OpenMP that subsumes control. It’s a neat idea but I really just don’t care for that kind of execution. Task-level asynchronicity is somewhat better but I would still rather be able to fiddle the guts of whatever if it’s not doing what I want.

  3. Steve Crook says:

    As you say, the repeat of the test is confusing at first sight. One of the things I like about IDEs like Jetbrains Idea or its devil spawn Android Studio is the on-the-fly code analysis to help highlight potential problems writing threaded code. That and decent libraries for handling threads, semaphores and all that good stuff.

    Some of the most intractable problems are those caused by threading/multiprocessor issues. I was writing Java and C++ during the 90’s just as multiprocessor PCs were becoming readily available. You wouldn’t believe the number of programs we had that ran well (multi threaded) on single CPU systems, but died in deeply mysterious ways when they were let loose on dual or quad+ systems.
    Often of course only when the system was heavily loaded and therefore supremely difficult to debug, particularly when we found we couldn’t rely on the ordering of messages in the log files :-(

  4. E.M.Smith says:

    @Soronel:

    I don’t know if it is “subsumes control” so much as it is “no choice”. Somehow you must coordinate those multiple threads running async. For existing languages that means some kind of ‘retrofit’ wrappers, functions, whatever. The coordination and timing control problem doesn’t go away just because I don’t like it. So somehow there must be a way to institute parallel loops, locks, syncs, data transfers, etc. etc.

    I can see complaints about a particular syntax or implementation detail, but that the function gets done in some way is essential. I don’t see it as “subsumes control” so much as it is letting me control things I never controlled before so I can do a very tricky thing I’ve never done before – async parallel code.

    While I don’t care for “that kind of execution” either, I don’t see any way around it. It’s the nature of the hardware coordination issue expressed in software.

    For new languages (like Julia) or new language extensions (like coarray Fortran) you can make the syntax a bit cleaner and maybe even hide some of the lock / coordination problem, but until those are commonly available you are kind of stuck with “glue on” packages, libraries, and wrappers.

    So I’ve found that gfortran supposedly comes with coarray support already. Great. It ought to now do parallel processing of some array commands. That reduces the need for OpenMP some. BUT, what if I want to do parallel processing on something more than arrays? Oh, well, now I’ve got to spawn and coordinate threads… and OpenMP lets me do that. Until languages and compilers find a better way, that’s sort of it…

    Now, should I want to spread that processing over a dozen computers… neither coarrays nor OpenMP (from what I’ve learned so far) looks to address that. You are in the land of “message passing” as a larger block of code and data must be shipped over a network fabric to another computer. OpenMPI lets me do that. Now I could choose to just not use a cluster of machines… but then I don’t get my problem solved in anything other than glacial time… or megabucks…

    There are things on the horizon that try to fix this. Plan9 operating system is built for shared parallel processing. But how many folks are running Plan9? Does it have support for all the OTHER things I need? Is FireFox there with F77 and can I run LibreOffice? I don’t know. But last time I looked at it, it was about 3/4 of a real OS environment…

    IMHO this is just life on the bleeding edge of progress. New things are always a bit kludgy and irritating, simply because we’ve not figured out the good way yet. Even once we figure out the good way, it takes a decade or two to diffuse into the most used and most developed environments.

    I guess what I’m trying to say is: I don’t care for it either, but it’s the only game in town.

  5. E.M.Smith says:

    Well this is a bit of a bummer…

    me@DevuanPiM2:/# ls -l /usr/bin/gfortran
    [...]12 Feb 25  2015 /usr/bin/gfortran -> gfortran-4.9
    

    So gfortran 4.9 is my release level. Checking what’s in it:

    https://gcc.gnu.org/wiki/GFortran/News#GCC4.9

    gfortran 4.9.1

    Full support for OpenMP 4.0: In GCC 4.9.1 (and later) there is now also full OpenMP 4.0 support for Fortran. (Except for actual offloading of target sections onto accelerators.) NOTE that modules using new OpenMPv4 features will generate a .mod file which is incompatible with GCC 4.9.0; otherwise, 4.9.0 and 4.9.1 generate identical (and hence compatible) .mod files.

    So I don’t get (limited) “Full support” for OpenMP until just one more minor release level… For 4.9 unadorned by a minor number:

    Partial support for OpenMP 4.0: The features provided by the libgomp library are available, but no new directives are supported in Fortran. (Currently, they are only available in C and C++. For the latter, omp target and omp declare simd is supported but runs without offloading/vectorized version on the host.)

    I get “partial support” but which parts TBDiscovered… and if I add the libgomp library.

    root@DevuanPiM2:/#  apt-get install libgomp
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    E: Unable to locate package libgomp
    

    Which on first look seems to not be there… digging a bit further (“package” is a little scriptlette of mine that wraps a command string):

    root@DevuanPiM2:/# bcat package
    apt-cache search $*
    
    root@DevuanPiM2:/# package libgomp
    libgomp1 - GCC OpenMP (GOMP) support library
    libgomp1-dbg - GCC OpenMP (GOMP) support library (debug symbols)
    
    root@DevuanPiM2:/# apt-get install libgomp1 
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    libgomp1 is already the newest version.
    libgomp1 set to manually installed.
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    

    So some good news is that there is a libgomp via libgomp1 already installed. I just don’t know what it does…

    4.7 says some things work (so also in 4.9) via an MPI library… except the library doesn’t work:

    Fortran 2008

    Coarrays: Full single-image support except for polymorphic coarrays. Initial and very preliminary support for multi-images via an MPI-based coarray communication library. Note: The library version is not yet usable as remote coarray access is not yet possible.

    Support for the DO CONCURRENT construct has been added, which allows the user to specify that individual loop iterations have no interdependencies.

    So, OK, looks like testing and a bit more R&D required to find out what is really in the 4.9 coarray box. I’m hopeful that the basics are there and working (as I’m unlikely to use more than the basics for a good while). OTOH, the MPAS code is not so basic… so running it on Debian / Devuan may not be able to use such features.

    As usual: “We’ll see”…

    My guess is that it will be enough to do basic coarrays on the same board, but not enough to do anything fancy (like polymorphic – whatever that is) nor do it on distributed boards, as the MPI library doesn’t like that yet…

    OK, enough “Digging”, time to actually try some stuff and see what breaks…

    Yes, I’m starting with some basic coarray Fortran to get that skill learned. Then I’ll move up to OpenMP in Fortran if possible, C if that’s my only choice. Then I’m going to try OpenMPI for connecting the Pi Stack. MPICH 2 was a bit of a disappointment (i.e. slow) and while it “might be me” I don’t think so. The Open Source world seems to have moved on to OpenMPI, so I’m hoping it will be faster / better. That has a “check it is on the systems or build / install until it is” workload, so it’s at the end of the list.

    By the end of this learning / exploration I ought to have 3 levels of parallel skill. Fortran 2008 coarrays as a normal language function. OpenMP for threaded code (even if only using C). OpenMPI as an alternative to MPICH for distribution of work between systems (though maybe not from inside gfortran until something is fixed).

    Julia is looking better and better, what with parallelism being built in ;-)

  6. Lionell Griffith says:

    EM: it’s the only game in town.

    Not quite. There is always roll your own. When I started to work out parallel image processing ca 1992, there was no game in town for Windows NT Beta. So I had to roll my own. Which I did. I had a two CPU PC with shared memory. No multicore CPU or GPU available anywhere. Cluster computing was going to be a last resort.

    The tools I had to work with were the ability to launch a thread, set its priority of execution, the possibility of making the thread re-entrant, the possibility of passing parameters, and the painfully ubiquitous global data space. There was no vector processing nor GPU that could execute vector processing. So, when all you have is a hammer, you hit the problem with it.

    My solution:

    1. The thread code was written to compute a single pixel value for the resultant image. Note: it was a scalar computation not easily reducible to single-operation-multiple-data vectors without a lot of non-obvious reorganization and the near impossibility of debugging it. A slower known-right result was much preferred to a very fast wrong result.

    2. The thread code was made re-entrant so that it would execute in its own data space with access to the external multiple image data arrays.

    3. I passed the parameters indicating the starting pixel and ending pixel to compute.

    4. I launched the thread code multiple times each assigned to a CPU.

    5. I set each thread priority of execution to real time.

    6. I set a completion semaphore such that it would go true only with the completion of all launched threads.

    7. The launch routine would wait for the semaphore to go true.

    It worked! I had achieved a major boost in image processing speed.

    For a multicore CPU, I had to make sure the start and stop pixel locations for each launched thread were sufficiently spaced from the others so that cache thrashing did not happen. (Interestingly, the parallel processing methods you have found don’t seem to take cache effects into account. I suspect that for multicore CPUs a large amount of performance is eliminated by excessive cache thrashing.)

    Keep in mind, on Intel CPUs at least, cache is loaded from RAM in blocks. As long as the program uses memory in that space, the data comes from the cache block. If the memory access is outside of that block, another block is loaded. Then if your cache is full, the requirement for a new block is met by overwriting an existing block. I suspect the overwritten block is selected by the longest time since it was accessed. When, as in my case, you are dealing with tens of megs of two-byte pixels for eight to thirty-two images to be computed to make one image, cache thrashing is easy to make happen.

    If the parallel problem to be solved is appropriate, it can be solved using only the tools you have, without major difficulty. You don’t have to deal with the brain-cracking complexities of the current HPC tools. For some problems, they are massive overkill. Other problems, perhaps not so much.

  7. E.M.Smith says:

    @Lionell:

    Neat tricks!

    Yet you are still forced to deal with all the issues of concurrency, locking, etc. etc.

    DIY is a choice, but I’m not seeing it as all that much easier… You still must manage threads and memory somehow.
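
    If I read your steps right, the skeleton is roughly this shape in Win32 terms (my own sketch of your description, with placeholder names; definitely not your actual code):

    /* Rough Win32 skeleton of the approach described above (a sketch with
       placeholder names; the per-pixel math is elided).                     */
    #include <windows.h>
    #include <stdio.h>

    #define NUM_CPUS 2                 /* the two-CPU box in the story */

    typedef struct {
        int firstPixel;                /* start of this thread's slice  */
        int lastPixel;                 /* one past the end of the slice */
    } WorkRange;

    static DWORD WINAPI PixelWorker(LPVOID arg)
    {
        WorkRange *r = (WorkRange *)arg;
        int p;
        for (p = r->firstPixel; p < r->lastPixel; p++) {
            /* compute one result pixel from the source images here */
        }
        return 0;
    }

    int main(void)
    {
        HANDLE    threads[NUM_CPUS];
        WorkRange ranges[NUM_CPUS];
        int totalPixels = 1024 * 1024; /* placeholder image size */
        int chunk = totalPixels / NUM_CPUS;
        int t;

        for (t = 0; t < NUM_CPUS; t++) {
            ranges[t].firstPixel = t * chunk;
            ranges[t].lastPixel  = (t == NUM_CPUS - 1) ? totalPixels : (t + 1) * chunk;

            threads[t] = CreateThread(NULL, 0, PixelWorker, &ranges[t], 0, NULL);
            SetThreadAffinityMask(threads[t], (DWORD_PTR)1 << t);          /* one per CPU */
            SetThreadPriority(threads[t], THREAD_PRIORITY_TIME_CRITICAL);  /* "real time" */
        }

        /* Plays the role of the completion semaphore: block until all workers finish. */
        WaitForMultipleObjects(NUM_CPUS, threads, TRUE, INFINITE);

        for (t = 0; t < NUM_CPUS; t++)
            CloseHandle(threads[t]);

        printf("all %d workers done\n", NUM_CPUS);
        return 0;
    }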

    Oh, and Yet Another Not Quite …

    http://www.lahey.com/docs/lgf12help/gfortran/OpenMP.html#OpenMP

    Please note:

    -fopenmp implies -frecursive, i.e., all local arrays will be allocated on the stack. When porting existing code to OpenMP, this may lead to surprising results, especially to segmentation faults if the stacksize is limited.
    On glibc-based systems, OpenMP enabled applications cannot be statically linked due to limitations of the underlying pthreads-implementation. It might be possible to get a working solution if -Wl,--whole-archive -lpthread -Wl,--no-whole-archive is added to the command line. However, this is not supported by gcc and thus not recommended.

    That’s the kind of stuff that drives me ’round the bend. “We support FOO! *”

    “* except a lot of it is left out and we’ll get to it later…”

    It’s that kind of stuff that DOES argue for a roll-your-own solution; since then you don’t get bit by things that ought to be there but are missing, or by a sudden change, like putting everything on the stack, that breaks your code.

    I know it’s hard to write compilers; especially for complex things like parallel processing; but geez… Either implement it properly and completely, or don’t claim to have it.

  8. Lionell Griffith says:

    EM: Yet you are still forced to deal with all the issues of concurrency, locking, etc. etc.

    The use of a re-entrant thread, assignment to a CPU, the setting of high thread execution priority, image memory access assignment, and the completion semaphore forced the OS to provide for concurrency, locking, etc., etc. for my problem. I simply developed and used a deeper understanding of how the hardware and OS worked to force it to work the way I needed it to work. How else do you do the almost impossible with the nearly inadequate?

    I also have had long experience with UNIX dialects. These tricks are not easy to do unless you have a real time UNIX at your service. I think it could have been done in IRIX (Silicon Graphics) but not in BSD, IBM, SUN, HP, et.al. UNIX. They are almost exclusively time slice driven without sufficiently granular priority scheduling. As always, the tool you have determines what you can do and how you can do it. As far as I know Linux, which I view as simply another dialect of UNIX, has a similar restriction.

    I understand that Windows NT and follow on Windows have had a long reputation of not being a real time OS and not good for much of anything because it is not UNIX. Yet I have found it to be very good at real time applications when properly understood and used. It is simply a matter of developing a deep understanding of the tools and using them for what they are. I haven’t touched UNIX since ca 1994 and haven’t needed to.

    In the land of UNIX, one executes children with forks and communicates with them using pipes. Why do UNIX programmers hate their children so much?

  9. Lionell Griffith says:

    EM: You still must manage threads and memory somehow.

    That is trivial compared to the overhead you must deal with using MP et.al.

    To deal with memory issues, I have written my own dynamic memory assignment, audit, and garbage collection system based upon using the memory heap. I grab a large block of ram, assign use by location within that block, and release assignments explicitly. Thereby avoiding all the OS memory assignment and garbage collection thrashing when you try to use C++, C#, etc. objects. This allows me to set up my memory the way it needs to be before the time critical process is executed and release it after the process is completed. That way, time is not spent in heap thrashing during time critical operations. The complexity is embedded in a callable module and is thus simple to use. As a benefit, I can identify, locate, and debug memory leaks in my part of the code and get a hint that the OS has or doesn’t have a memory leak.

    Agreed, you do have to understand memory pointer programming. That too is trivial compared to the complexity MP requires you to learn and use. If time and memory efficiency is not a requirement then such tools might be OK. Yet the tool consumes most of your time and memory. The performance of your task is reduced to a fraction of what it could be. If you must get the most out of the least, do it yourself is likely your only option.

    Keep in mind the tools you are looking at were developed for really big and very expensive computers and done on government contracts where time and cost are not upfront or relevant factors. If it isn’t fast enough, get another grant and buy a bigger computer is their solution. Doing the same thing on an under $1000 cluster paid out of pocket is from a parallel universe not reachable from there.

  10. E.M.Smith says:

    @Lionell;

    It isn’t our own children we dislike so much… 9-}

    Some Real Time Linux history:

    https://en.wikipedia.org/wiki/RTLinux

    RTLinux is a hard realtime RTOS microkernel that runs the entire Linux operating system as a fully preemptive process. The hard real-time property makes it possible to control robots, data acquisition systems, manufacturing plants, and other time-sensitive instruments and machines from RTLinux applications. Even with a similar name it is not related to the “Real-Time Linux” project of the Linux Foundation.

    RTLinux was developed by Victor Yodaiken, Michael Barabanov, Cort Dougan and others at the New Mexico Institute of Mining and Technology and then as a commercial product at FSMLabs. Wind River Systems acquired FSMLabs embedded technology in February 2007 and made a version available as Wind River Real-Time Core for Wind River Linux. As of August 2011, Wind River has discontinued the Wind River Real-Time Core product line, effectively ending commercial support for the RTLinux product.

    So no longer a supported commercial product, but it exists. Then there’s the regular Linux Foundation project:

    https://wiki.linuxfoundation.org/realtime/start

    The Real Time Linux collaborative project was established to help coordinate the efforts around mainlining Preempt RT and ensuring that the maintainers have the ability to continue development work, long-term support and future research of RT. In coordination with the broader community, the workgroup aims to encourage broader adoption of RT, improve testing automation and documentation and better prioritize the development roadmap.

    FWIW, there are Real Time Linux types. Both dedicated and via kernel swaps / patches:

    https://www.osadl.org/Realtime-Linux.projects-realtime-linux.0.html

    The embedded guys and robotics folks are big into it:

    https://www.linuxfoundation.org/blog/intro-to-real-time-linux-for-embedded-developers/

    There’s even a real time Ubuntu (that seems pointless to me as it is desktop and eyecandy oriented… but ‘whatever’… someone wanted it):

    https://wiki.ubuntu.com/RealTime

    News:
    The -preempt and -rt kernels are no longer being developed due to lack of support. Focus has instead turned to the -lowlatency and -realtime kernels, particularly for the release of Ubuntu 11.04 Natty Narwhal. The long-term goal is to have -lowlatency in the official Ubuntu repositories, while maintaining -realtime in a dedicated PPA.

    but not enough interest to keep the -preempt and -rt kernels alive, only enough for the -lowlatency and -realtime kernels. Though, frankly, I don’t know why one would not just choose the real -realtime and be done… but maybe that’s what folks did…

    So yeah, the normal “open the box” Linux is not real time. But for folks who want it, you can get many different real time Linux releases.

    Considering that everything from robots to self driving cars to laser cutters to routers to process control systems and more are built on some *Nix or other, you can be sure there’s lots of real time processing going on. You just don’t see it if you go looking for “the usual” as that’s a desktop oriented version.

    Just about everything other than Big Box Proprietary OS and Microsoft is some flavor of the *Nix family. Someone even got into an SD card to see what made it “go” and found a 4 bit OS very stripped down, that was a Linux. It’s kind of bizarre to think of all those billions of SD cards running a dinky stripper Linux to store and fetch bits and relocate them as bits ‘wear’ in the flash. Even CISCO IOS started life as a *Nix (though now heavily modified. They, and Netapp, used to have a shell level that was pretty standard; then AT&T System V started suing anyone with any UI similarity and both made gratuitous changes just to be ‘different’ and tell AT&T to go pound sand.)

    So a whole lot of time sensitive processing happens on real time Linux. Just you don’t see most of it happening.

  11. E.M.Smith says:

    Lionell:
    “Keep in mind the tools you are looking at were developed for really big and very expensive computers and done on government contracts where time and cost are not upfront or relevant factors. If it isn’t fast enough, get another grant and buy a bigger computer is their solution. Doing the same thing on an under $1000 cluster paid out of pocket is from a parallel universe not reachable from there.”

    Um, it certainly IS doable on a $1000 cluster. Just depends on what “it” is. Since, for example, Model II was written for an RS6000, it will run on small hardware and porting parts of it to run on cluster nodes will make it even faster.

    Similarly, using a small cluster as learning or development platform works nicely.

    My Raspberry Pi M3 is clearly running multithreaded for FireFox now (it wasn’t when first ported) as I can see the 4 cores “light up” with workload. Most tablets are even going multi-core. My Samsung is a 4 core, but it’s old; new ones are 8 core. Someone is programming them in multiple threads… and not all of it is ‘long hand’. Generally it’s all written in C using things like OpenMP (as the Open Source folks loathe proprietary…)

    Now if the “it” is run a model like MPAS with fine granularity for a 1000 year climate simulation, well, yeah, it’s not going to run on a $1000 cluster and complete in my lifetime…

    So you do something else. Reduce resolution. Run a 10 year simulation. Find ways to tune the code for faster computing, etc. etc.

    But to assert OpenMP only belongs in the land of Big Iron is just not true.

    https://en.wikipedia.org/wiki/OpenMP

    OpenMP is managed by the nonprofit technology consortium OpenMP Architecture Review Board (or OpenMP ARB), jointly defined by a group of major computer hardware and software vendors, including AMD, IBM, Intel, Cray, HP, Fujitsu, Nvidia, NEC, Red Hat, Texas Instruments, Oracle Corporation, and more.

    While Cray, IBM, Fujitsu and NEC make “big iron”, AMD and Intel make PC oriented chips (though in the last couple of decades “supercomputers” made of clusters of them have come to the front), while Texas Instruments makes mostly small gear and Red Hat is all Linux on lots of small iron. Oracle is in there most likely through their Sun purchase – a mini-computer brand. HP is spread from laptops to Superdome scale, so who knows their POV. Then nVidia makes GPU modules. These are used for graphics (gaming folks especially push performance) and increasingly as co-processors for things like robot vision (self driving car vision). Hardly “big iron”; the Jetson boards are about $200 to $600 depending on CUDA cores and speed.

    Now MY interest is:

    1) Learn this stuff.
    2) Characterize the performance.
    3) IFF Possible, run a model on a small iron cluster. Re-evaluate and assess scale.
    4) ELSE look at what size iron is needed…

    OpenMP or similar facilities on a $200 Pi Stack / cluster is just dandy for steps 1 to 3.

    If I ever get to step 4, then I’m doing it knowing I need to find a funder for Big Iron and just what size is needed.

    BUT, remember I ran a big iron shop. What really happens is that you do a LOT of time sharing and short runs. You can’t afford to have the iron sit idle. So you put 2000 people on it. (We had somewhere in the thousands. IIRC it was about 2000.) Suddenly a 100% dedicated to ME facility that completes my run in 4 days is looking OK compared to one that completes in 10 minutes but is 95% loaded all the time so when is my schedule date again?…

    Don’t get me wrong: I loved the sheer power and speed of the thing. But low priority jobs could sit a long time before getting cycles. At times, we had “PC Days” (Personal Cray) when one project put a mouse and framebuffer on the Cray and made it a giant personal computer for a specific project. That could mean a day or two before you could even log onto it to run your 10 minute job.

    Now add in that the SBC of today has more memory, more storage, and about the same speed (or sometimes better) CPU performance; and that means codes written in the 1980s and 1990s for a Cray are just “right sized” for my SBC and my $200 cluster is “over kill”… (And a set of 2 Intel CPUs with quad cores and a couple of nVidia GPGPU cards stuck in it is massive overkill; but you can put it on your desktop for modest $$$ as it’s just a PC with some $500 cards added).

    So yeah, you are right a $1000 box or cluster today can’t compete with a $250 Million Supercomputer today, but it can blow the doors off a $40 Million supercomputer of 1990s… when most of the climate codes were written or already old…

  12. E.M.Smith says:

    @Steve Crook: Upthread here:
    https://chiefio.wordpress.com/2017/12/08/openmp-openmpi-mpich-decisions-decisions/#comment-89224

    I always hated “sporadic failure” and the idea of a heavily loaded multi-CPU box barfing and then needing to debug “why?” does not give me warm fuzzies.

    Frankly, that’s why I stayed away from all the parallel processing stuff until now. Just so many moving guns and targets I’m pretty sure at some point I’ll get some little thing wrong that I’ll never be able to figure out.

    Per dev environments: Never got into them. I can see the utility, but I’ve just never written anything that was so big and complex that I needed one, so just dodged the workload of learning one. I’m embarrassed to say it, but my major debugging tool has just been liberal use of “diagnostic writes” to dump flow and status information. Crude, but always works no matter what shop or environment I’m in that day. But since they change the timing, I suspect they would screw up the problem case they were trying to diagnose… since timing is part of the problem.

    So I just sit outside looking in the window at all the folks happily using them… wondering why my toes are so cold ;-)

  13. E.M.Smith says:

    There’s a nice list of the gfortran coarray functions and what they do here:
    http://www.lahey.com/docs/lgf12help/gfortran/Function-ABI-Documentation.html#Function-ABI-Documentation

    for completeness, they also have a list of all the non-coarray regular things here:

    http://www.lahey.com/docs/lgf12help/gfortran/Intrinsic-Procedures.html#Intrinsic-Procedures

    which is much expanded from my Old FORTRAN days… and where I doubt I’ll ever use 90% of them. OTOH, if someone else used it, I can look it up here, quick.

  14. Soronel Haetir says:

    Keep in mind that “real time” does not generally mean “high performance”; it means “guaranteed to have time to run”. Things like avionics control software need to guarantee that an interrupt will only take so long and that whatever was running when the interrupt occurred will still complete before some specified timeout. Such systems generally have a great deal of unused cycles just in case some unusual condition pops up; even if that condition occurs the system still has to have time to run whatever else was going on.

  15. Lionell Griffith says:

    EM,

    I wasn’t aware of real time Linux mostly because I didn’t look for it. I still have a problem with having to give away your intellectual property rights if you sell software to run on them. I am OK with a BSD or MIT license but not any of the Open Source Foundation or GNU licenses.

    I would be willing to look more closely at a Real Time Linux if it didn’t have that stipulation. Until then, I will stay with Windows in spite of the massive amounts of stupidity coming from Microsoft. At least I don’t have to give them my software simply because I sold it to someone.

    I have programmed various real time OS’s. Even one from Wind River: VxWorks. At the time, I had to create an interactive graphical user interface for it so that it could be used for a life critical medical application ca 1996.

    As for a modern $1000 cluster blowing away a 1990’s super computer, I agree. My obsolete iPhone can almost do that. What I have difficulty accepting is the heavy weight tools developed to do heavy weight government parallel processing are reasonable things to run on a $1000 cluster. I have yet to see a tool developed for that environment to have the clarity, organization, and efficiency to be all that useful.

    The tools you are looking at may be OK but I have a hard time seeing them as anything but heavy weight over kill. I admit that I spent 30 minutes speed reading about them. I got a good sense of what they were and what it took to use them. They don’t appear to offer me anything worth the effort to become proficient in using them. Especially since I have my one parallel computing problem already solved decades ago long before these tools became available.

    To put this in context, I find C++ too heavyweight and inefficient for general high performance computing. It’s OK for general commercial software but not anything that requires some real number crunching, data manipulation, and tight timing. Most of the newer languages are even worse in that respect. Especially the ones that are marketed on the premise that they make incompetent programmers competent. Can’t be done!

  16. E.M.Smith says:

    @Lionell:

    I think you have a misconception about big iron users. They are not wasteful of cycles. It is the last bastion of folks who work hard at efficiency. That is why Fortran still dominates. They have tried rewriting the math routines in other languages. They have ended up slower, and folks go back to Fortran.

    Codes are incessantly profiled and tuned. Inefficient things get rewritten, improved, fixed. There are classes of problems that take weeks of run time at thousands of dollars per hour. Shave 5% off that, and it is a big deal (and pays for the effort).

    I’ve never seen more attention on low overhead and high efficiency than when dealing with Cray and the software on it. The notion they would layer on a low efficiency high overhead layer if there were any alternative is just orthogonal to my experience.

    Yes, direct assembly will beat a high level language most of the time, and generally the closer to the metal you get, the better. But Fortran is darned close to it, and especially the Cray compiler could out-optimize most folks.

    Realize I’m not talking down hand-crafted code. I like it, and it is generally better. Just pointing out that the HPC Supercomputer folks are generally at the limit of the doable and care a lot for efficient code, as being 10% too slow can mean your project doesn’t work at all and your life’s work halts.

    This isn’t a hypothetical for me. After about $100 million of computes, a 6 year project I was on was killed as we needed about $1 million of scalar computes to reduce to silicon and the executive suite said no instead.

    My Brother-in-law was employed by NASA to improve aeronautic codes. (Ph.D. Stanford Aeronautics). His group got greater than Moore’s Law compute improvement from code and algorithm improvement. You don’t do that if you are OK just tossing money at wasted computes.

    Those are the kinds of folks pushing for things like MPI and OpenMP and coarray Fortran and GPGPU codes.

    On GNU licences:

    I could be wrong, but I don’t think so. You can build and run your software on Linux without losing ownership. It is when you incorporate open source programs into yours that the copyleft bites. I’ve worked at companies making proprietary software using Linux machines (many times) and nobody lost IP out of it. The only part I’m unsure about is using libraries. There are non-free libraries for folks really worried, but I’ve never worked where folks needed to do that, as far as I know.

  17. E.M.Smith says:

    Looks like libraries are mired in a dynamic vs static vs doesn’t-matter argument
    https://en.wikipedia.org/wiki/GNU_General_Public_License#Libraries

    But the FSF made the LGPL licence to let use of libraries be OK:
    “The Free Software Foundation also created the LGPL, which is nearly identical to the GPL, but with additional permissions to allow linking for the purposes of “using the library”.”

    So it is manageable, but needs care. I know the embedded folks often use some other libraries to avoid licence issues. Don’t know the details, but I think it is the uClibc batch, or something similar. If really worried, there are hybrid builds with BSD licence build environments on a Linux base. IIRC, we did devo on Linux and final build on BSD at one software company (where I was Build Master).

    So GPL can be a minor pita, but not too hard to deal with in a couple of ways if the need arises. I don’t know of anyone (yet) who has lost their software IP from it in court. There have been a few suits, a couple of settle out of courts, and some “recompile without it” done. Basically, if someone really did a ripoff, they settle or move to something else. If there was no rip, they comply with the LGPL or recompile on BSD and move on. A very few folks have made software copyleft, when challenged, but usually were headed that way or going out of business anyway, near as I can tell.

  18. jim2 says:

    Frankly, I would like to see some of LG’s work. Bet it would be worthwhile to study.

  19. E.M.Smith says:

    Looks like Clang LLVM is not GPL but permissive, so you can just develop in languages using LLVM and its libraries.
    https://en.wikipedia.org/wiki/LLVM

    The LLVM project started in 2000 at the University of Illinois at Urbana–Champaign, under the direction of Vikram Adve and Chris Lattner. LLVM was originally developed as a research infrastructure to investigate dynamic compilation techniques for static and dynamic programming languages. LLVM was released under the University of Illinois/NCSA Open Source License, a permissive free software licence. In 2005, Apple Inc. hired Lattner and formed a team to work on the LLVM system for various uses within Apple’s development systems. LLVM is an integral part of Apple’s latest development tools for macOS and iOS. Since 2013, Sony has been using LLVM’s primary front end Clang compiler in the software development kit (SDK) of its PS4 console.

    https://en.wikipedia.org/wiki/Permissive_software_licence

    A permissive software license, sometimes also called BSD-like or BSD-style license, is a free software licence with minimal requirements about how the software can be redistributed. Examples include the MIT Licence, BSD licences, Apple Public Source License and the Apache licence. As of 2016, the most popular free software license is the permissive MIT license.

    I’d wondered why Clang LLVM was taking off so much what with gnu compilers already there and fast / good. Now I know. Thanks for sending me down that rabbit hole! It scratched a minor wonder itch…

  20. E.M.Smith says:

    @Jim2:

    I know Lionell’s stuff will be better than mine, so instructional. I write good but pedantic code. Few trick bits in it. Good maintainable but bland code… Usually in a minimal but portable and obvious subset of a given language. Comes out of my history as a bug shooter fixing other code (efficiency and bugs).

    On a biz trip once, I learned enough APL to write about a 6 to 8 character program IIRC. (APL is uber-terse like that). The next day I couldn’t remember how it did what it did… (APL is like that too. Sometimes called “write only programs”) Last time I tried to write a really trick bit of code…

    If Lionell wants to post some sample code, I’d enjoy reading it.

  21. Lionell Griffith says:

    EM: They are not wasteful of cycles.

    Apparently you have encountered a different programmer crowd than I.

    Perhaps they are experts at sub-optimization rather than actual optimization. Super programmers can focus on the minutia and save many cycles but lose sight of the huge savings from overall architecture, control structure, and algorithm design. They write splatter code and try to make it better rather than writing an inherently correct program and work to make it faster. The programmers you know might do the latter. I haven’t met many who do.

    All of the big iron programmers (high tech, aerospace, and the like) I have met are outrageously wasteful. They write million line programs that could be done in a few hundred thousand lines if done correctly. They were looking for perpetual employment and not intent on earning their salary nor producing results. Many of them worked for NASA. I did in two weeks as a rewrite from scratch what two NASA employees had spent over two years trying to do and couldn’t get to work. As a consequence of that effort and some related stuff I was “allowed” to resign my contract. It is there I learned a basic law of systems, you can do no better work than the system permits no matter how good you are. Bitter? Slightly.

    I look at the tools the big iron users promote and I see nothing but overkill in an attempt to solve all problems forever in a way that creates almost as many problems as they were trying to solve. ADA for example. THAT was THE government prized standard for a while. It was a huge pile of unmitigated crap and produced little of value. MP and MPI look much the same to me. I would be very surprised if they aren’t.

    I agree that some of this is also true of Microsoft systems. MFC was a kluge joke, full of bugs, and what was of value could be replaced by roughly three pages of good C code heavily commented. COM, DCOM, .Net, etc. are nightmares to use and nearly as poorly documented. I use those things only when I have no choice and then only sparingly.

    My approach is that tools don’t solve problems, people do. They do it by clearly understanding the problem and its context. Then they allow the problem to determine its solution and proceed accordingly. The problem must select the tools and the process that leads to its solution. Otherwise it gets solved strictly by accident.

    The notion that you don’t really have to understand the problem and can simply hit it with a sufficient number of preexisting solutions and tools is the worst possible way to proceed. It hardly ever really works. Efficiency, correctness, and completeness are at the bottom of the list because these things are never defined nor accounted for. It is this last approach that I have mostly seen used.

    On GNU licenses:

    It is my interpretation that simply using the Linux OS by calling one of its services is sufficient cause to require the inclusion of ALL your code as part of the open source share everywhere for free license. It has not been adjudicated that I know of so the viral claim is not for sure legal or illegal. I simply don’t trust the legal system to see things my way. Hence I don’t get near it.

    If I could merely write a shell to get around the open source problem and only have to give away my shell, then I might be able to live with it. That is except for what I think of UNIX itself. It is the best possible idea that two fellows at Bell Labs could think of to run computer games on a PDP 8 ca 1970. It is still that but with a lot of incoherent extensions.

    Obviously, I have a very strong antipathy for the UNIX environment even though I have used it effectively for over a decade. I find that you have to do too much UNIX to be able to get much else done. I find it appalling that there are programmers who believe vi is a good text editor or that most any other of the hundreds of UNIX commands are well formed and usable.

    I even wrote a motif based graphical text editor for it so I could accomplish the necessary programming in a reasonable time. Once done, a lot of other programmers started using it as well. It made what I thought a really poorly designed system a bit more tolerable. I found even 16 bit Windows a vastly superior programming experience.

    Clearly we are different people with different wants, needs and expectations. Neither of us is totally wrong or right. We share some things and not others.

  22. Lionell Griffith says:

    EM: If Lionell wants to post some sample code, I’d enjoy reading it.

    OK. I can pick a sample to give a flavor of how I do things. However, I would prefer to make it available on an FTP website of mine so that its formatting will stay intact. There is a simple interactive web interface so it is easy to grab.

    It is getting late for me so I will do it in the morning.

  23. Lionell Griffith says:

    Take a look at: http://lkgnet.com/Sample/SampleCode.htm

    It is old sample code written in 2004 and translated to HTML to assure the format is maintained. The code presented and the Digilib.lib referred to do not exist anymore. The Digiflat sample demonstrates more or less the way I format things and have done so for a long time. The Object sample demonstrates the way I do object oriented coding in ANSI C but not necessarily how I format it.

    There is no patented code nor significant trade secrets exposed by this sample.

    If you wish more up to date samples, I will see what I can do but it will be by FTP rather than HTML.

  24. Soronel Haetir says:

    Before going blind I always loved _using_ *nix systems; I just hated programming for them. Too many libraries with different naming conventions. That is something I very much like about working with the Windows API: there are a few outliers, but most API sets use the same leading-cap format and nearly all use the standardized type names.

    EM, re. GNU licensing, agreed if you aren’t interested in distributing whatever you produce, but once you do I can well understand LG’s reluctance.

  25. Soronel Haetir says:

    As for scripting languages, aren’t you aware that python is executable pseudo-code while perl is executable line noise?

  26. jim2 says:

    I know a man who worked at IBM in the Daze of Yore. He says IBM paid programmers per KLOC (thousand lines of code.) I guess that might be the genesis of code bloat :)

    I don’t have much of a voice about what language and platform I use at work. I use primarily C# on Windoze. My primary objective when writing a program is maintainability. Speed can be a problem, but money isn’t spent on that unless users complain. Personally, I try to make my programs fast also.

    I keep methods short and typically do only one simple thing. I use long method names that describe what is returned from where or what is happening, whenever practical. I use method summaries and comments for further clarification when something might not be clear to the reader.

    I, too, have spent a lot of time fixing/enhancing some pretty horrible code. The overall design of some of these was also horrible.

    Unlike LG, I find there are scads of documentation for DotNet, both in help files and on the web. And there are many well written video tutorials on top of that. I use all resources available. This is almost mandatory with DotNet as it changes and is enhanced every version.

    Design of code/applications in Visual Studio occupies a significant quantity of documentation space. Books have been written on it, such as design patterns. Apps exist to help dissect and refactor DotNet code.

    I’ve studied C and C++ on my own, but have never used them professionally. To me, a pointer is merely a method name. It ends up as an address in a memory location either way. Had IBM assembly language in college, but haven’t used it professionally.

    If Fortran produces the best assembly, there is no reason a compiler couldn’t be written for BASIC that produces the same efficient code.

    In my view, the high level language should provide resources to help the programmer make code understandable and maintainable. Long variable and method names go a long way toward that. But as LG says, the overall design matters a lot from the get-go. A crappy design will lead to a crappy body of code and an app that is near impossible to maintain and enhance.

  27. Lionell Griffith says:

    jim2: … almost mandatory with DotNet as it changes and is enhanced every version.

    That is the problem, it is constant churn. The worst thing about it, it is mostly a wrapper around more fundamental things that don’t change so much. They break a very important rule: if it isn’t broken DON’T FIX IT! Then when they fix what is not broken they have to issue endless updates to fix what they broke. Worse, the interface contracts are not maintained in too many cases. The documentation often does not match behavior. So you are constantly chasing the changes to the changes and wondering why your once working code no longer works.

    So, I avoid the hassle and write my own wrappers around the lowest level code possible. I write the procedure contracts, test to the contract, and then I leave it alone. This gives me a body of code I can rely on to do what I say it does. That leaves me free to develop software rather than spending my time fixing what Microsoft broke because they must change for the sake of change. The bulk of my code base is unchanged since I last adapted it to 64 bit Windows. Some of it dates back to the early 1990’s.

    There was once a wise person in the software world that called the Microsoft process Fire and Motion. It was taken from the military concept that to advance against the enemy, you fire so the enemy will take cover and while you are firing, you advance. Microsoft sees its interest best served by treating the developer community as their enemy. They change to keep the developer community occupied with software maintenance rather than develop actual functionality. It reduces competition. It is a game that can only be won by not playing. I don’t play their game except when I have the advantage. Even then, it is only when it is clear that what I am using isn’t going to change because it underpins the important and almost ubiquitous capabilities in the OS and MS applications.

    My goal is not using all of the latest and greatest. I deliver reliable and usable functionality to my customers with a minimum of software maintenance overhead for myself. If it isn’t broken, I don’t fix it.

  28. jim2 says:

    LG – very nice. I like the descriptive names and well organized code. I recognize the Win API calls as I’ve used those in previous lives.

  29. jim2 says:

    In the last couple of versions, our code has ported to the new version without modification. Frankly, I don’t miss low-level management. High-level language helps me focus on just what the business needs are, not whether I deallocated this or that memory.

    I suspect if you are building something as complex as a climate model, not having to worry about the low-level functions would be a plus. This is probably why CIO is looking at the “pre-packaged” parallel apps.

  30. E.M.Smith says:

    Another good tutorial here:

    https://computing.llnl.gov/tutorials/openMP/

    @Jim2:

    No need to guess or have a “probably”, I can just tell you why. Yes, it is nice to write code fast at a higher level, but that’s a secondary issue. (First you get it to work, then you profile it, THEN you find the 5% that’s 75% of the run time and that bit gets optimized and sometimes made very low level and/or trick code. Rinse and repeat…)

    1) The industry of HPC is largely going to OpenMP, OpenMPI, MPICH, and CUDA. It is good to know the dominant tools in the industry you have most interest in and most desire to work in.

    2) It is interesting tech and I’d like to add it to my tool kit in any case.

    3) The existing climate models are being written (or re-written) in these modes. In particular, MPAS uses OpenMP and MPI; so to really understand it, I must understand them.

    4) I have a “toy cluster” and I’d like to:
    a) Test it and characterize the performance on “standard tools and problems”.
    b) Rank that relative to historical supercomputers and peg it at an era in time.
    c) Play with writing applications that run on my toy cluster.
    d) Maybe make it do something useful.
    e) IFF Possible: Get a small climate model running on it, or at least dev work done.

    5) I have a modest interest in robotics, and robots will be using this kind of coding. In particular, the present robot cars are using nVidia CUDA boards for their vision and such. Most toy robots use Arduino and low level code, but once in the land of real robots, it’s parallel code in higher level languages for the hard bits. (Low level for the device drivers still, though.) At some point I’m going to get a Jetson board and learn CUDA for exactly this reason. (Retired Engineer friend teaches robotics to high school kids and I think extending that to robot vision with Jetson might be of interest)

    That’s all the “big lumps”.

    As I’m NOT interested in making the Pi Cluster do THE most efficient processing possible for some particular job (i.e. I’m not on contract with a deliverable) I have no desire to sink my limited time into a very low level Pi-specific set of programming tools and abilities. I abandoned Windows years ago, so Windows specific coding has zero use or interest for me. I’m presently moving away from Intel so Intel specific coding has zero use or interest for me.

    Doing ‘hardware level’ assembly or C would not meet any of those listed goals. The potential speed up from working at that level is modest anyway. FORTRAN is the fastest language for things math oriented (thus its continued dominance in scientific and engineering programming) and C is very close to it while being better for hardware control (thus its dominance in operating systems). Hard to beat that level of speed… BUT, I’m interested in characterizing the hardware performance, so who knows, when benchmarking I might end up writing some low level C as a test case… I’ll certainly be doing “with MP and without” comparisons, something like the sketch below.
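
    A minimal sketch of what I mean, assuming gcc with -fopenmp and a made-up toy workload (summing square roots) standing in for real work; this is just an illustration, not any actual benchmark code:

    /* Build: gcc -O2 -fopenmp mp_compare.c -o mp_compare -lm */
    #include <stdio.h>
    #include <math.h>
    #include <omp.h>

    #define N 100000000L

    int main(void)
    {
        double sum = 0.0, psum = 0.0, t0, t_serial, t_parallel;
        long i;

        /* Serial baseline */
        t0 = omp_get_wtime();
        for (i = 0; i < N; i++)
            sum += sqrt((double)i);
        t_serial = omp_get_wtime() - t0;

        /* Same loop with an OpenMP reduction spread across the cores */
        t0 = omp_get_wtime();
        #pragma omp parallel for reduction(+:psum)
        for (i = 0; i < N; i++)
            psum += sqrt((double)i);
        t_parallel = omp_get_wtime() - t0;

        printf("serial   %.3f s  (sum %.1f)\n", t_serial, sum);
        printf("parallel %.3f s  (sum %.1f) on %d threads\n",
               t_parallel, psum, omp_get_max_threads());
        return 0;
    }

    Run it once as-is and once with OMP_NUM_THREADS=1 and you have the “with and without” numbers for that toy case.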

  31. Lionell Griffith says:

    EM,
    The problem should be the determinant of its solution and not the other way around.

    As an exercise, I am downloading the latest version of Intel FORTRAN compiler for parallel applications in Windows. I will do my best to perform a C vs FORTRAN numerical calculation speed test. I have a month’s free trial to get it done. Otherwise the cost is in the thousands of dollars.

    All the “free” compilers were the usual open source “it’s free, so you figure out how to use it” style. I don’t have the time nor inclination to use something so poorly designed and largely undocumented as that. So I went to the source that should know what it is doing and tries to do a good job of it. It is an over 3 gig download, so it will take a while at 10 Mbps.

  32. catweazle666 says:

    “If Fortran produces the best assembly, there is no reason a compiler couldn’t be written for BASIC that produces the same efficient code.”

    Have you come across PowerBasic – especially the Console Compiler version?

    Excellent for writing bits and pieces to mess with file headers, dump stuff as CSVs, rescue data, and similar everyday tasks.

    Allows in-line assembler and doesn’t have any graphics interface overhead but happily copes with Windows APIs if you feel the need.

    I’ve written a number of Windows executables – fairly trivial, admittedly – that took up less than 10K, very refreshing after some of the bloated rubbish I’ve had to put up with over the years.

  33. Lionell Griffith says:

    After a two hour download, the Intel software had a CRC error. I am going to forget it. It is not worth it to me.

  34. E.M.Smith says:

    Well,
    looks like no reason to bother learning or testing coarray Fortran on the Pi at this time. With gfortran, it doesn’t really work until 5.0 and the Pi isn’t there yet.

    https://gcc.gnu.org/wiki/Coarray

    Implementation Status in GCC Fortran

    GCC 4.6: Only single-image support (i.e. num_images() == 1) but many features did not work.

    GCC 4.7: Also multi-image support via a communication library. Comprehensive support for a single image, but most features do not yet work with num_images() > 1

    GCC 5: Full support of Fortran 2008 coarrays and of the broadcast/reduction collectives and atomics of TS18508 – except of bugs and incomplete support of special cases. Note that the included communication library currently only handles a single image, but the OpenCoarrays project provides an MPI-based and a GASNet-based communication library. See also CoarrayLib. For caveats see also the next section.

    Of what use is having the syntax “work” only on a single image, when the whole point of the syntax is parallel multi-image operation?

    Sheesh. Just vaporware that passes the syntax checker…

    Then, even for 5.0, you have to go get somebody else’s communications library ’cause the gcc one doesn’t work…

    OK, so I can now move on to just doing OpenMP tests / trials… then OpenMPI…
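
    The first OpenMP trial will likely be nothing fancier than a sanity check that gcc on the Pi actually spawns threads. A minimal sketch of that sort of test (assuming gcc with -fopenmp; just an illustrative toy, nothing final):

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        /* Each thread in the team reports its id; the team size defaults
           to the number of cores, or whatever OMP_NUM_THREADS says.      */
        #pragma omp parallel
        {
            printf("hello from thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }

    Built with “gcc -fopenmp hello_omp.c -o hello_omp”, four lines of output on a quad core Pi says the basics are working.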

  35. E.M.Smith says:

    @Lionell:

    Catching up a couple of points now that I have some free time:

    Per NASA and “governments” – It isn’t monolithic. My Brother-in-law was at the real NASA, NASA Ames, doing R&D specifically aimed at best aerodynamics, where they were compute limited and getting more efficient was critical. Reading the NASA GISS codes (GIStemp), it’s a horrid, poorly designed and written thing. Clearly kludged together over many years by many hands as they incrementally added crap and never cleaned up the past. (The eras of code are obvious, with the F77 bits the oldest, the f90 newer, and python a relatively recent “glue on”.)

    So yeah, we’re both right… depending on the shop. Some are just money grifters looking to get bigger budgets and toys; some are actually good folks with skill. I’ve generally tried to hang out more with the ones with clue.

    Per Unix / Linux / vi :

    I’ve often said “I love what Unix lets me do and I hate how it makes me do it.” I have a clear love / hate relationship with it. Generally speaking it is efficient, well designed, and effective while not getting in my way and preventing me from doing what I want to do. Unfortunately, that comes at the cost of cryptic commands with more options than I have time to learn and certain style bits that are annoying. Then the X-Windows system is an MIT “glue on” that’s just horribly done, way too complex for what’s needed, and frequently a PITA. But it works.

    Name ANY other operating system that spans from micro-controllers to supercomputers and all points in between, running in real-time or time slice, single users or up to 10s of thousands on a system, across ALL hardware vendors, and does it reliably and well. There isn’t one.

    I also regularly use skills and commands that I first learned 35+ years ago. Lots of it has changed over time (more so in Linux land than Unix), but much, much more of it is stable and reliable. It was originally written for phone switches, after all, and they must run 24x7x365xyears.

    So it took me 6 months to get to where I didn’t curse when using Unix or vi. Then I had my light-bulb moment. I only ever learned about a dozen of the vi commands (plus regular expressions), but I can do anything I need to do in it. At this point (in fact, about my 2nd year using it) it is in my brain stem and I don’t even think about the commands. About a decade ago I had a job interview where someone wanted to quiz me on unix things. I flunked because I could not say how I did things. They just flow out the fingers. I think “I want that line there” and it happens. I no longer know what sequence to type to make it happen. So yeah, I’m one of those folks who thinks vi is a ‘real editor’ and gets the job done, because it does.

    FWIW, I hate emacs. I want an editor to be an editor, not an entire lifestyle environment, build system, etc. etc.

    I’m a minimalist on many things, so I don’t need auto-indenting, hand holding, colorizing, etc. etc.

    I’m not the only one. The MacOS is, at the core, a *Nix. It uses the Mach micro kernel IIRC, but I regularly drop into a command window and do *Nix commands. Including vi ;-0 Similarly, Android is a *Nix. As is Chrome. As are most telephones and routers. As are most embedded systems. As is, well, pretty much everything that isn’t Microsoft and a few mainframes.

    Per Microsoft:

    You hit on exactly why I don’t like it. “Never the same way twice”. Every 2 or 3 years they pretty much toss out the UI and you get to start over. Fat pig of an OS that sucks hardware to its knees. Needing GB of memory just to run when Linux was running in 256 MB (and I had one running in 16 MB at that time…) And don’t get me started on the abomination of The Registry, the horrid security lapses, and being wide-open to government TLAs. So no, I’m not going to join you in praise for anything from Redmond.

    Per GPL:

    I’m pretty sure you are wrong in your belief about when code becomes public. It is based on the “derivative work” IP law. IF your code does not contain GNU bits (“Linux” is just the kernel and it is not the issue, the GNU GPL’d bits are the issue) then it is not a derivative work. So as long as you don’t put any GPL licensed source code into what you write, and compile against libraries with a BSD, LLVM, or LGPL license, you are fine to stay proprietary. (The specific differences between BSD, the MIT, the similar “softer” licenses, and the LGPL may matter in some contexts to some folks). Those licenses came about specifically because of the Free Software Foundation GPL causing folks to balk at using Linux for product development. So write your own code and compile / link to “permissive” licensed libraries and you are “good to go”. I’ve worked at several companies that did that, so not speculative.

    I’ve also bought $100,000 to $Millions of software over the years. (Ran a large data center… ) I can state unequivocally that I’ve had FAR more PITA from commercial licenses than from anything Open Source. That Cray we ran for software development work? It was running Unix… Our Sun and Vax machines ran a mix of SunOS, Solaris, Ultrix, and BSD. I’ve had development groups running on a few dozen Red Hat Linux machines. I come to this with a significant body of experience.

    FWIW, I also have bought huge numbers of Windows licenses including site licenses, and I’ve personally made a 5 port router with QOS (priority / quality of service), back when it was a new thing, out of a Windows machine; so it isn’t like I’m a stranger to managing / making them go. I’ve done it professionally for about 20 years. I just didn’t like it… (From email servers to database servers to SMB file servers to … I’ve set up everything you need, in many sized shops, on Windows. From about Windows 95 through NT up to Windows 7. At Windows 8 I bailed… I was almost ready to say it was OK with 7, then they broke it with 8, and 10 is dog vomit, as they tried to make desktops work like a tablet.)

    So, my biases:

    #1 – I love the Mac. It is “just right”. Except it keeps wanting to put a straitjacket on me.

    #2 – I love/hate Unix. No straitjacket here. Just wish it wasn’t so… so… “pissy” sometimes. There is no operating system I love more than BSD, but setting up a windowing GUI / desktop is a PITA on it. I’d run it on everything if that wasn’t the case, but mostly use BSD for servers. It is bullet proof.

    #3 – I really like Linux – almost a Unix, but too many crazy people doing God Only Knows what to it. The latest insanity being SystemD. A chaotic fractured community. Yet I can choose what suits me “just so” from the chaos. Where BSD is slow to change and has a steering committee keeping it pure and clean, GNU/Linux is the wild wild west…

    #4 – Windows… “Friends don’t let friends do Windows”. I’ll tolerate it… as long as the billing rate is over $100 / hour. Otherwise “Life is too short to drink bad wine”. Realize I do “support” work and that is not the same as the software development side you do where much bigger bucks abound due to their market share and (forced) frequent re-buying of software.

    #5 – Chrome – Feh! It sort of works in a limited straitjacket kind of way with Google snooping. Makes nice media server… Google did leave the “hooks” open to load your own Linux instead of their hobbled one, though. Bugs pretty much stamped out and support looks good.

    #6 – Android – About like Chrome. No surprise since both are Google products. Tolerated as it is on just about every phone and tablet NOT from Apple. ( I’m looking at Linux on my next one of each just for security reasons if nothing else – well, that, and I can do way much more with them…) Basically a Linux for very small platforms with touch screen interface and software only from the Google Monopoly Store (unless you take special steps to break security…) and with the bugs patched.

    Realize I have at least 1, and mostly 2+ of ALL those different systems running in my home… Yeah, I’m a glutton for punishment ;-) but at least my opinions are from experience with all of them.

    Sorry to hear about your CRC error. Always an issue with giant downloads. I’m going to be trialing the open source approaches and compilers, so Intel isn’t needed by me to check it out.

    Oh, and per ADA:

    Yeah, such a huge and complicated language just to please everyone and no one in the defense industry. I was sent to Germany to evaluate an Ada compiler once (about 1983)… In the straitjacket mould of Pascal, with the elegance and style of COBOL blended with FORTRAN, and all the efficiency and agility of PL/1 /sarc; 8-0 The compiler guys at Amdahl (where I worked at the time) just hated it as they tried to make a functional compiler to cover the whole language… So on that, at least, we can agree ;-)

    Per OpenMP / OpenMPI:

    I see them quite differently. Workman like ways to glue on parallel processing to existing Fortran and C compilers. Eventually they will die off as languages with well designed built in parallel code generation get made and adopted. It just is not possible to expect folks to recode the staff-centuries of existing code in assembly or convert the structure to a parallel pointer based approach in C. Similarly, for the 90% of a new program that is 10% of the processing time, you can’t spend the staff hours to write it at a very low level of abstraction from the hardware.

    But you can easily do a thread based MP retrofit with OpenMP to existing code, and in an incremental way too. That alone makes it a ‘win’ from my POV. (MPI I’m still not convinced about, as I’ve not gotten any speed gains yet using it. But I’ve only used MPICH 2 and only on the Pi – not ideal for either). Would using Julia on Plan9 be more efficient and better? Almost certainly… except it is vaporware…
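
    For contrast, here is a rough sketch of the same kind of sum done the MPI message passing way (a made-up toy, not real code; it assumes mpicc from MPICH or OpenMPI and a run via “mpirun -np 4 ./mpi_sum”). Each rank works on its own slice and the partial results get combined with an explicit message passing call rather than shared memory:

    #include <stdio.h>
    #include <math.h>
    #include <mpi.h>

    #define N 100000000L

    int main(int argc, char **argv)
    {
        int rank, size;
        long i;
        double local = 0.0, total = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank sums its own stride of the index range */
        for (i = rank; i < N; i += size)
            local += sqrt((double)i);

        /* Combine the partial sums on rank 0 with one message passing call */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("total %.1f from %d ranks\n", total, size);

        MPI_Finalize();
        return 0;
    }

    Compare that with just dropping a pragma onto an existing loop: the MPI version needs the code restructured around ranks and explicit communication, which is exactly why the OpenMP retrofit is the easy incremental win.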

    Per Job choosing tool:

    Generally, I agree. BUT, for efficient and scientific or engineering programming, both Fortran and C are very hard to beat. So in effect the problems I’m looking at HAVE already chosen the right tool. It is the (relatively recent and new tech) move to parallel processing that’s the next tool. And to do that in Fortran your choices of tool are limited. Coarray Fortran, OpenMP, OpenMPI / MPICH. C is similar, except it is easier to code your own threads on the local shared memory multicore CPUs.

    BUT, for MY needs, which center more on understanding, porting, and maintenance of codes written by others: The bulk of all of that is in, and will be in, OpenMP / OpenMPI and CUDA. So those are what I need to learn and use. I’m not writing de-novo code, so I don’t get to choose to do it in assembly or C# or “whatever”.

  36. Lionell Griffith says:

    EM,
    OK. Our different perspectives could largely be due to the difference between development and support.

    I do development and can support myself. The only other support I do is for immediate family. As for enterprise support, I won’t touch it and have had nothing but bad experiences with it. They seem to have to make themselves visible by constantly changing stuff. I want them to be invisible by making it work and then go away. I once spent a month to be able to use a printer that was a hundred feet from my cubicle rather than having to walk to one ten times further. I also had to get a management edict to be allowed to install the tools I needed to develop software. The sacred IT department insisted that I had to have them install it in a month or so and sit on my hands until then. My opinion is that enterprise IT departments are for brain dead secretaries and managers but not for serious software development types.

    You do support and can do development. You have my sincere sympathy. Most of the time you have to take what you are given and make it work. When you are allowed to set up the system, what you do is dictated to you by people who don’t know what they are doing. So you are back to square one, working within the tight bounds that management sets for you. Sometimes you can get away with working outside the box. For that, you have to be good, work fast, and pretend you are merely fixing something.

    I could technically do what you do but I would soon go postal. Nearly everything I do is outside the box, a lot of it never done before, and without the permission of management even in a corporate environment. I make things that work because I understand what that means. This works for my customers. If the customer doesn’t like it, he is no longer my customer.

    OS’s: Yes, some are usable, some are marginal, and most have passed their prime. Most are a bloody pain and get in your way more than they help. However, writing an OS from scratch is the path to madness. I have done quite enough of it. I would rather fight with Windows and Microsoft than do it again. It is much less pain and anguish.

    Yes. I too stopped at Windows 7. Windows 7 is Vista done right. It is workable and almost makes sense. Windows 8 was not even good crap, and it was unusable. Windows 8.1 can be made to work like Windows 7 with a lot of work on the internals. Windows 10 is for people who don’t have to do real work. It turns a desktop workstation into the logical equivalent of an iPhone without the built in utility and functionality.

    I was a Microsoft “insider” working with the beta Windows 8, 8.1, and 10. Windows 10 kernel is very good. The user interface and facility support interface is a monumental fail. All they had to do was provide a Windows 7 with the Windows 10 kernel and it would have been vastly superior.

    I became quite insistent that it would be a simple thing to provide a Windows 7 user interface for software developers. The code already existed and it was an easy thing to do. In fact, it was in Beta 1. NO, they insisted that they knew better what I need than myself. I was excluded from participating by a total idiot who insisted that mother knows best and was offended by my explanations. Part of the irony is they moved part of the way toward what I suggested a year into distribution by returning the “start button”. Even then, they ignored that the request was not for a button but for an actual real live and useful program menu that doesn’t obliterate the desktop.

    Windows 10 is even worse in that it is a huge security leak. There are things you can do to the (gack) registry to make it better. However, there is no way to batten down the hatches and make it genuinely private. It is in constant contact with the Microsoft Mothership and sending who knows what back to it. I will support it for my daughter but no one else. She has a successful dog walking business and it really doesn’t matter if Microsoft knows all the details.

    Fortunately, Embedded Windows 7 is good until 2025. Since I am over 80 years old, that should be long enough for me.

    My problem with a Mac is that the OS really doesn’t understand real time, nor even rational software development for complex technical applications. It is a wonderful machine for a user. It totally sucks for a developer. Its saving grace is that you can run Windows using Boot Camp or Parallels. Unfortunately, you are limited to Apple approved peripherals. That doesn’t work for me.

  37. E.M.Smith says:

    @Lionell:

    Yup.

    ” As for enterprise support, I won’t touch it and have had nothing but bad experiences with it. They seem to have to make themselves visible by constantly changing stuff. ”

    So true… I had a department shut down and laid off under me just because they DID make it seamless and invisible. “What do we need to pay for THAT for?”… Then they later figure out they needed it… Happened a couple of times. One was “mine”; in the other I was a contractor. So the “experienced” I.T. head guy figures out that the occasional “crisis” is good for the longevity… (An attitude I detest…).

    As to my “mixed” abilities: that’s because most of my support work was support in development environments. At Apple, it was Engineering Computer Operations, because the corporate Information Systems and Technology (IS&T, pronounced “isn’t” ;-) was too unresponsive for developers. Then at a small startup, where I ran everything from Facilities and Telco / IT to the recreation room and cafeteria, it was ALL development as we had no shipping product until near the end. I’ve only been able to be comfortable in “traditional” corporate I.T. shops as a contractor (since given my skill set and belief structure I don’t get the top manager gig – not enough “applications” and Microsoft indoctrination… which is really odd, as I first transitioned to more tech stuff when at Amdahl after being criticized for being “only applications”, as a “fix” for that. Just can’t win…)

    Microsoft learned that lesson, thus the constant mutation of things that causes constant need to buy and install (AND pleases their corporate I.T. departments by constantly “demonstrating their worth”…)

    My shops tended to “just run”. So were constantly being questioned about need for budget and headcount…

    Per your statement of “issues” with “doing what I do”: There’s a reason I’ve been doing contracting for the last, what is it now, 20 years?… I don’t have to give a damn about the shop. They want some brain-dead architecture or stupid software choice, Oh Well, not my problem. What’s the hourly rate and how long will it run? Time and Materials billing. That’s what matters. Then there’s the fact that often times folks just have their winky stuck in a roller and don’t care how you get it out… so I get to do it as I feel fit. Those are fun.

    FWIW, when I’m running a shop, I generally tell my tech staff WHAT to fix, but not HOW to fix it. I’ve hired them for a technical skill, not so I can back seat drive them. Ended up managing a C++ development project without ever knowing C++ ’cause that’s the language the guys wanted to use. Ended up generating 4 software patents for Apple… IMHO, that’s where most I.T. management screws up. Hire a specialist in, say, networking; then don’t listen to their preferences. I’m a broad and deep generalist, but I’m never going to know as much about an area as the guy who does ONLY that. That’s why I hired him…

    FWIW, I’m still hopeful that whenever the idiot who had authority and pushed Windows 8 to 10 on the world eventually “moves on”, they can get back to a 7 interface on a 10 kernel. Until then, I’m officially “not going there”. At least in Linux you can choose your interface (from way too many…) There’s even one called, IIRC, “Redmond”, that tries to look like / act like Windows 7. It isn’t 1/2 bad.

    Unfortunately, the Linux community that at one time was modestly centered has fractured into so many variations that even as a hard core user with skilz, I can’t keep up with them. When it becomes too much work just to figure out which ones to try, you’ve got a problem. Like the Tower of Babel. Definitely not getting the KISS principle. Heck, it took me about 3 months just to get away from SystemD (barf…) and back to what I had before. Too many hands pot-stirring for too many inconsistent goals. At some point I’ll likely have to just “bite the bullet” and get good at putting a GUI on top of BSD.

    For server use, BSD is as good as it comes, IMHO. Secure, solid, reliable, not capriciously changing like Linux. Only when it comes to “desktop support” and the GUI stuff does it become a PITA. Oh Well.

    Well, I’ve got some “internal maintenance” to do that involves moving all my systems and monitors around, including my DNS server and router, so I’m going “off the air” for a while as I power everything down and clean / rearrange the office…

  38. Lionell Griffith says:

    EM: What’s the hourly rate and how long will it run? Time and Materials billing. That’s what matters.

    I understand, I was there and did it for over a decade. I worked best when the managers were lined up against the wall and nails were being used to attach their hands to the wall. I could then “save the day”, “do it my way”, and then collect my fee and move on. Otherwise, my ability to identify and solve problems so they disappeared was a sure path to the front door and out on the street. It was a living and not a really happy one most of the time. I did it using a lot of grim determination.

  39. jim2 says:

    LG – My hat’s off to you. I hope I can last that long without drooling all over myself!! :)

  40. E.M.Smith says:

    @Lionell:

    Well, FWIW, after the last contract I’ve not been willing to go out and hustle the next gig. I was at Disney (and doing well). I reached the 18 month limit they set for contractors (thanks to Microsoft screwing it for everyone with perpetual contractors and losing their implied employee case with the IRS…) and was heading out the door anyway. BUT… I’d been there during the “New V.P.” arrival and settling in. A couple of weeks after I was gone, he laid off most of the staff that ran all the computer stuff that makes Disney “go” and proceeded to outsourcing – AND requiring the existing staff to train their H1B replacements…

    It’s gone off to a lawsuit, and Disney backed off to some extent after it hit national news. (Which is the only reason I’m willing to talk about it, as there’s a “no talking” clause in the contractor contract – but I’m not talking about my time at Disney… I’m talking about the public news reports ;-)

    Folks who had kept the parks running for decades and who knew the shop and applications beyond what anyone else could possibly hope to do, shown the door since it all “just worked”… Basically the magicians who made the Magic happen, and invisibly. IMHO, a dumb move.

    (Think DME – Disney Magical Express. Where you “check in” at the departure airport and your bags end up in your room while you go straight to the park from the landing airport via Disney bus… the same on the departure, you “check in and get your boarding pass” at the Disney hotel and never touch a bag until you deboard at home nor deal with any transport to / from the airport. It was one of the “apps” where I did the DR testing and I worked closely with the folks who wrote and ran it.)

    You have a building full of some of the most competent and dedicated technical magicians in the world who can make that kind of thing run with zero issues visible to the guests, and you toss them?

    That was the heart-breaker for me. It was the second time I’d been at Disney on contract and gotten positioned to be hired as a ‘cast member’ then had a department nuked for $$$. (The first one was a decade earlier when they outsourced the datacenter operations. I was working in a different area, but had gotten a good reputation with the ops folks. I converted a 6 Sun cluster, while live and in production, to an incompatible OS upgrade. It was the booking engine with something like $4 Million / hour going through it. There was a 4 hour window at night when bookings were low enough to run on 3 systems where we could convert 1/2, test, swap, test, convert the rest, clean up. Pressure? No pressure…. /sarc; but good for the rep ;-) Just don’t have ANY downtime ’cause at that rate / hour it’s going to be noticed…

    Oh Well…

    I keep thinking I really ought to go out and “get a job”, but I’m just not ready to face the stupidity again…

    Basically, I’ve run out of “grim determination” for now.

  41. Lionell Griffith says:

    EM,

    Grim determination lasts only so long until burn out happens. After that, the fire to do the work is gone. All you have left are war stories about the agony.

    So much for the notion that managers can manage anything without having to know anything about it. They cut costs, make this quarter’s numbers, and then are promoted to their next level of ignorance. It seems like an inverted pyramid with that lone overworked engineer at the bottom struggling to balance the mass of muddle management on his back.

    I have known a few good technical managers but I can count them on one hand with my thumb tied to my little finger. How we got to the 21st century alive is a mystery to me. The muddle managers sure didn’t help.

  42. jim2 says:

    Oh well, when the Chinese get pissed they’ll just ramp up the ride speeds and kill people that way.

  43. Graeme No.3 says:

    Is Plan9 named after the old movie Plan9 from outer space? Winner of the Golden Turkey award, although in its case probably the Lead Chamberpot award. Anything that could beat out Attack of the Killer Tomatoes for the title surely deserves it.
    It was slightly handicapped by the death of the star halfway through filming and certain budget restraints e.g. metal pie dishes as substitutes for flying saucers and changing to B&W film for the back end of the film.
    Just before I left employment I complained to the IT Dept. about a glitch in the labelling: we had to inform the customer that one of the basic ingredients in the product he was using was highly flammable, but as it had been reacted at 280℃ for 8-10 hours that was now highly unlikely to be the case. Yet the system insisted that any product using that ingredient had to be labelled as if it was still in its natural state. The basic reason was that the crew who had set up the program (based on Oracle) had “wired this in” just before departure to the USA, where they were all sacked (just before Xmas), and no-one in the intervening 5 months had been able to locate the code. Shortly after that I was made redundant, as I couldn’t see the company lasting 5 years. I was wrong, it lasted nearly 7.

  44. Power Grab says:

    Is anyone here using Windows 10 because someone else decreed it?

    I have some machines that do use it. A couple days ago, one of my Windows 10 machines notified me that it had a large feature update to install. I told it to back off the maximum time it offered me (3 days), but then ended up letting it do the install when I did shut down that same night.

    It took quite a while, and I didn’t sit there and watch it all the time. However, I did notice one message in a large typeface saying that the update would make things safer for me when I was online. Yeah. Right. /eyeroll/

    During the first reboot, I was presented with a full screen message that tried to sell me on “connecting” my phone to my computer. Uh…no…

    What does that even mean? That they want me to set it up so all my devices share the same cloud-based storage space? [I’m always asking if the “cloud” server is actually at NSA headquarters!]

    Well, first of all, I don’t want everything I create on the desktop to be ported to the phone/tablet/e-reader, etc. And vice versa. I don’t want everything I have on my phone to be ported to my other devices. If I want something passed over to another device, I will do it myself.

    Secondly, right before this “feature update” /gag!/ I had read EM’s explanation of how he protects his stuff in the event that what happened to Tallbloke might happen to him.

    If a person did “connect” all their devices so they all share the same storage space, then the PTB would only have to abscond with one device to get all of it, right?

    Tell me I’m wrong about that. Please.

  45. Power Grab says:

    @ EM:

    “get good at putting a GUI on top of BSD.”

    You will give us a play-by-play on that one, right? :-)

  46. E.M.Smith says:

    @Graeme No.3:

    Yeah, it’s become a Cult Classic due to the ‘campy’ nature. The default movie that you put up when building a ROKU station on one of the “instant TV” sites is Plan9… Folks in the Unix world especially tend to like to make references to it. The logo is a bunny named “Glenda” ;-) An “inside joke” that ended up shipping…

    I have to admit to having watched Plan 9 (from outer space) several times… and liking it… but not in the way the authors intended ;-)

    Isn’t it amazing how simply advocating for reasonable things that are intelligent, but might slow down schedule just a bit or increase costs a small amount, results in the “not a team player” epithet and The Look (and sometimes the little pink notice…) People seem to hate most anyone who suggests they might not want to run off the cliff and ought to take just a moment to think about it… Pissing on a crappy parade is just not acceptable.

    IMHO, that’s part of why Plan 9 (the movie) has such a following. It is just soooo bad and yet was shipped… like watching a train wreck, you just can’t look away… then you start to chuckle when you see folks boarding two trains on the same track headed in opposite directions toward each other… You stop shouting, even stop waving and stuttering “But but but” … eventually you just ask “Where’s the popcorn machine and can I sit on that bench away from the tracks for a little while?”…

    @Power Grab:

    Should the day come, I’ll be posting my “build script” and any “Aw Shits!” I discover along the way. I did get FreeBSD to run on the Pi M3 a while back (even building a bunch from source) but stopped at the point of making the GUI go as I’ve done it before and it wasn’t pleasant. (All long hand X Windows last time I did it a decade+ back; so it might be better now… cringe… hopeful grim wince… )

    Per Phones and Tablets and Desktops (Oh My!) linking:

    No, the cloud servers are not at NSA headquarters. Those are just offices and desks. The NSA servers are in large remote hidden data centers and they have large telecom pipes into the cloud server locations (along with the occasional on-site folks…) See, it’s much cheaper from their point of view to get the customer to pay for the “service” and the companies to provide and manage the data centers. They just need periodic access to the data pool and telecom fiber works well for that… You just need a POP (Point Of Presence – where the telco connects to the building) big enough to support your data mining connections…

    There are goals here for several parties: one for you, one for system crackers, and a couple more for the “others” (the companies and the TLAs). For the typical person, it means less systems maintenance type work. Like keeping your “bookmarks” in sync everywhere. I’ve likely got about a dozen divergent sets of bookmarks all over the place. Sometimes I can’t find where I made a particular link… So by keeping your personal information and preferences in sync and your phone data backed up on your computer that is backed up to “the cloud”, when your phone dies you just buy a new one, resync and go… That’s the benefit to you.

    For the system crackers, they can break into one device and get access / entre to many devices and / or their data. They get “the sum of all weaknesses” to exploit.

    For the companies, the product they sell is YOU and YOUR DATA. At present, Android is one island, while Microsoft is a continent and Apple is an archipelago in a different jurisdiction (administratively speaking) then Chrome OS is an emerging volcanic island… you CAN live on it if you work hard enough and don’t mind the rocks… Then we have folks like Amazon wanting to offer “Cloud Services” getting in on the action and Telcos (think Death Star AT&T) looking for ways to monetize your traffic history and video preferences while Netflix and Amazon are both selling advertising by the bucket and your TV preference as a guide to it. They ALL want to know EVERYTHING you do, so they can push more advertising and more “tailored” uplift fees at advertisers. Now if the folks in one island can just get a look at the data in that other island… so they push “connectivity” services to better snoop, collect, and track you.

    Then there are the TLAs. Three Letter Agencies. They just want everything the Companies have (and have arranged to get it via PRISM – now supposedly defunct – and similar programs). They have a “presence” in the telcos and anything going over the wires or the air is up for them to grab (and they spend $BILLIONS to do so. IIRC the NSA built a giant data storage farm in Utah. I was thinking of applying for part of the build contract at the time…)

    Essentially: In ALL things data processing and data communications AND ENTERTAINMENT you are in a war against the System Crackers & Hackers, the Corporations providing you “services”, and the TLAs of ALL the world governments. Most people are blissfully unaware of this and just don’t give a damn, so are defenseless against the onslaught. Many even Ooh and Aww over the new “features” riddled with data leakage, security holes (hell, security expressway bypasses by design), and complete lack of control.

    Now, the paranoid bit:

    (What, you thought that above stuff was the paranoid bit? Oh no… ;-)

    IMHO, there’s fairly strong TLA forces pushing companies to insert code and processes to allow easy access and those companies with strong security are deliberately suppressed while those who are compliant and insert lots of holes are “made a standard”. I first saw this at Apple. At one time they were dominant in several industrial / government segments. Microsoft shipped an OS with poor security and Apple refused to ship without very good security. Fairly suddenly, a few defense contractors and government departments that had been nearly 100% Macintosh changed… (Boeing was one of them…) Later, when Jobs was gone and Apple decided to be “cooperative” with government demands, it ended up back on the ‘approved’ buy list… Then there was the odd case where Blackberry was officially negatively spun yet the President of the USA and top executives continued to use one… why? Because as a Canadian company it had told the US Govt to go stuff it on easy hacking, and so was one of the actually secure platforms you could buy…

    So when I see things being pushed “to the cloud”, I immediately think: So what protections are put on the Telco access to data? Is ALL communication Strongly Encrypted AT EVERY STEP END TO END? What protections are put on Cloud Server access? Is data kept encrypted on disk, and only decrypted at the user desktop? From what I’ve been able to find out (from the outside) it is NOT encrypted on the server farms / disk farms and is only modestly encrypted (link by link, not end to end) on some of the cloud services. In short, it’s full of holes and “one stop shopping” for TLAs anyway.

    Now, is it cheaper and easier for a corporation to put their processing on a Virtual Server in a remote data center and not have to deal with things like I.T. Staff, Computer Rooms, buying computers, etc. etc.? Oh, certainly. As long as you don’t mind ALL your data being exposed to God only knows who at the Cloud Services and Telco companies (and their staff being located on God Only Knows what continent). I know of a couple of major “U.S. Companies” where their data processing support staff are located in India, as I had to work with them to coordinate activities. IBM Professional Services, for example, has a nice American-In-A-Suit at the meetings and doing the sales pitch, but when you want the database managed or have a question, you look up the time in Chennai or Delhi and expect to deal with thick Indian accents (in hard to understand technical jargon too…)

    So yeah, it’s cheaper to have your purchase history, your viewing and reading preferences, your home address, phone number, credit card numbers, credit history, contact list from your phone, tweets, posts, comments, emails, etc. etc. all sitting on a large server farm (somewhere in the world) and with full access by folks (somewhere else in the world) and with lots and lots of telecom “managed” by the telco providers under the watchful “protection” of the TLAs / Governments of the world. But you get what you pay for in terms of security.

    So no, I don’t link ANYTHING. No, I don’t use ANY cloud services. (Though I may at some point try AWS compute services to run a climate model on an Amazon server farm in the dregs rates if I need massive computes as I have no proprietary interest in it.) Hell, I don’t even have all my computers connected to wires and I don’t let my media server talk to my main network… (Though I’ve not bothered to go so far (yet) as to use Tor for anything other than testing nor to regularly use a VPN – I just don’t do anything that interesting on the internet. Looking up things for articles is about it.)

    Similarly the “forced update” behaviour. Microsoft thinks they own my machine. No, they don’t. But it is a matter of “do as they say and it will be ‘convenient’ but fight them and you will have constant misery”. A big part of why I don’t use it. When managing a production environment, it is very important to have control over when and what gets updates. Microsoft even lets “professional” shops have more control. BUT, they don’t want to give that much choice to the home gamer. Not my style. There are big bucks in getting your data into their hands, and they do not want to lose those $$.

    Google, via Android and Chrome OS, has similar behaviours.

    Apple has a locked box and encourages a reasonable update schedule, but you can step aside if you want (and generally the folks there care a lot about security and build a pretty secure product. Probably better security than my Linux configs most of the time as they have a full time staff assigned to rapid bug patches for zero-day and similar issues).

    All of which feeds into why I prefer to use BSD for servers. Updates on my schedule with my control. NOT interested in capricious data sharing. Highly secure OS with decades of polishing for locked down operation. A large and highly skilled user base / admin staff / developers who respond very fast to any zero-day (of which there are few for BSD…) security issue. It’s just a much more hard core professional secure product. (But the cost for that is slow change and slow adoption of new trendy crap as it isn’t security vetted enough until it’s been kicked around for a few years… and you are expected to be a skilled Unix Systems Admin to install and use it.) Essentially, it is the polar opposite of anything from Microsoft, Google, Amazon, etc. etc. and with Apple about 1/2 way between BSD and the others. (Did I mention we ran a lot of BSD and similar at Apple?… just sayin’… folks there liked it for a reason and MacOS is built on that mind set.)

    Maybe I’ve just spent too many years “fighting the good fight” in corporate data security roles… After a while even the good cops get cynical and suspicious…

  47. Lionell Griffith says:

    EM: After a while even the good cops get cynical and suspicious…

    I long ago decided that the only computer system that was secure was one that was converted into atomic dust and dumped into a working blast furnace. Anything short of that can be hacked given enough time, effort, and money. Anything that goes onto the internet and especially to the “cloud” should be treated as logically the same as being in the public domain.

    Being paranoid from the get go, I carefully encrypt what I want people to have to work very hard to be able to see. The plan is to make it cheaper to pay me for it than to steal it. This keeps all but the most determined in their place. So far the plan has worked.

  48. Soronel Haetir says:

    “You can have a perfectly secure computer, you just can’t turn it on.”

  49. E.M.Smith says:

    @Lionell & Soronel:

    When I was running a Cray Shop, I’d talk to the Cray maintenance folks. They had stories… In one case a $Many $Millions Cray was left on a flat bed truck next to the road in the middle of nowhere “somewhere in the hills” implying near Virginia. Next day, they pick up the flat bed with a crate of money on it… VERY secret government project…

    In another, the maintenance guy (on replacing a disk) asked me where to put it. I asked him what he meant. Seems that in many Agency computer rooms, hardware checks in and it never checks out. They just move it to the far corner of the room and it stays there forever. He said he had to strip (to underwear or to naked was unspecified) walk a corridor (supposedly with some kind of scanning gear) then put on their overalls and boots on the other side and use their tools to do the repairs. Reverse on the way out. NOTHING goes in with him and NOTHING comes out.

    Now maybe my bias is based in part on some of those discussions of how the Agencies run secure shops, or the Blue Cube near Moffett where no windows are allowed and the whole thing is EMF shielded. (Friends who have gone in reported a double door man-trap entrance EMF proof, with a guard with an AR-15 at the other end… but that was during the Cold War so might have been a limited time deal.) But when you are immersed in the industry in that kind of context, you think about things like that.

    So you think in terms of air-gap, but also how to secure the air… and reduce to parts, dust, or slag… Most of my computers from the last 20 years, and all of their disk drives, are still in my storage. Some have had the disks taken to parts and a file applied. Mostly I don’t have anything that matters enough to need that level of care, but I like to take things apart ;-)

    I’m aware of the window in my office being a portal for picking up data. I’ve not bothered to line the room with copper screening though. (Someone wants to watch me type blog posts and comments, well, it’s easier to just follow the blog…) But I do recognize the risk and choose to ignore it (or for some to mitigate them).

    I guess my bottom line is that you CAN have a secure computer, but it’s a lot of work. It is much easier to have a computer “secure enough for the contents in question”…

  50. Lionell Griffith says:

    I reworked my computer after an unfortunate crash and updated my WiFi which had become intermittent. I finally was able to download and install the Intel Parallel Studio.

    The Parallel Studio connects to Visual Studio, but Visual Studio cannot find an Intel file nor the folder that is supposed to contain it. There was nothing in the installation instructions about it either. It still isn’t ready for prime time. A potentially 3 grand product should load and go, but it doesn’t.

    Two strikes. I am going to drop it.
