Liking the MPAS code Much More than Model II or ModelE

I’ve only been into it about 10 minutes, but already I’m liking the MPAS code a LOT more than the stuff from Hansen and that “Hansen Labs” (aka NASA / NOAA / NCDC / etc.).

I was first pointed at this by rogercaiazza in a comment here:

So h/t to Roger!

Some specifics:

Building Environment

The “Makefile” lists several compiler and system choices, largely aimed at large systems… so “some assembly required” to make it go on tiny iron. It does have a gfortran option called out, and an mpif90 option; so it’s mostly a matter of finding out what Fortran compilers are available and which lines up with which.

Here are a couple of sample blocks from the topmost Makefile (which sets the environment for all the rest):

        ( $(MAKE) all \
        "FC_PARALLEL = ftn" \
        "CC_PARALLEL = cc" \
        "FC_SERIAL = ftn" \
        "CC_SERIAL = gcc" \
        "FFLAGS_PROMOTION = -default64" \
        "FFLAGS_OPT = -s integer32 -O3 -f free -N 255 -em -ef" \
        "CFLAGS_OPT = -O3" \
        "LDFLAGS_OPT = -O3" \
        "FFLAGS_OMP = " \
        "CFLAGS_OMP = " \
        "CORE = $(CORE)" \
        "DEBUG = $(DEBUG)" \
        "USE_PAPI = $(USE_PAPI)" \
        "OPENMP = $(OPENMP)" \

[... skipping some -EMS ]

        ( $(MAKE) all \
        "FC_PARALLEL = mpif90" \
        "CC_PARALLEL = mpicc" \ 
        "CXX_PARALLEL = mpicxx" \
        "FC_SERIAL = gfortran" \
        "CC_SERIAL = gcc" \
        "CXX_SERIAL = g++" \
        "FFLAGS_PROMOTION = -fdefault-real-8 -fdefault-double-8" \
        "FFLAGS_OPT = -O3 -m64 -ffree-line-length-none -fconvert=big-endian -ffree-form" \
        "CFLAGS_OPT = -O3 -m64" \ 
        "CXXFLAGS_OPT = -O3 -m64" \
        "LDFLAGS_OPT = -O3 -m64" \
        "FFLAGS_DEBUG = -g -m64 -ffree-line-length-none -fconvert=big-endian -ffree-form -fbounds-check -fbacktrace -ffpe-trap=invalid,zero,overflow" \
        "CFLAGS_DEBUG = -g -m64" \
        "CXXFLAGS_DEBUG = -O3 -m64" \
        "LDFLAGS_DEBUG = -g -m64" \
        "FFLAGS_OMP = -fopenmp" \
        "CFLAGS_OMP = -fopenmp" \
        "CORE = $(CORE)" \ 
        "DEBUG = $(DEBUG)" \
        "USE_PAPI = $(USE_PAPI)" \
        "OPENMP = $(OPENMP)" \

        ( $(MAKE) all \
        "FC_PARALLEL = mpif90" \
        "CC_PARALLEL = mpicc -cc=clang" \
        "CXX_PARALLEL = mpicxx -cxx=clang++" \
        "FC_SERIAL = gfortran" \
        "CC_SERIAL = clang" \
        "CXX_SERIAL = clang++" \
        "FFLAGS_PROMOTION = -fdefault-real-8 -fdefault-double-8" \
        "FFLAGS_OPT = -O3 -m64 -ffree-line-length-none -fconvert=big-endian -ffree-form" \
        "CFLAGS_OPT = -O3 -m64" \
        "CXXFLAGS_OPT = -O3 -m64" \
        "LDFLAGS_OPT = -O3 -m64" \
        "FFLAGS_DEBUG = -g -m64 -ffree-line-length-none -fconvert=big-endian -ffree-form -fbounds-check -fbacktrace -ffpe-trap=invalid,zero,overflow" \
        "CFLAGS_DEBUG = -g -m64" \
        "CXXFLAGS_DEBUG = -O3 -m64" \
        "LDFLAGS_DEBUG = -g -m64" \
        "FFLAGS_OMP = -fopenmp" \
        "CFLAGS_OMP = -fopenmp" \
        "CORE = $(CORE)" \
        "DEBUG = $(DEBUG)" \
        "USE_PAPI = $(USE_PAPI)" \
        "OPENMP = $(OPENMP)" \

The g95 spec also lists mpif90 as the parallel Fortran compiler, and I didn’t find one with a simple “apt-get”, so more poking around is needed (it could just be under some odd name with a release level appended, or ‘whatever’…)

The source files end in .F, and the Linux “file” command just calls them plain ASCII, so one assumes modern Fortran has moved far enough from the original for “file” to not recognize it (or it wants a lowercase .f ending). By convention, the capital .F also signals that the file gets run through the C preprocessor first, which fits all the #ifdef blocks in this code.

Also, all the endian settings say “big”, so what to do on a little-endian machine? One hopes the compiler is flexible enough to deal with that.
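Looking at the flags above, gfortran’s -fconvert=big-endian appears to be exactly that flexibility: it byte-swaps unformatted I/O at run time, so the data files stay big-endian on disk while the CPU works in its native order. A little Python sketch of what that swap amounts to:

```python
import struct

# The same 4-byte float, packed big-endian (as the MPAS data files
# apparently are) and little-endian (as x86/ARM hold it in memory).
value = 1.5
big = struct.pack('>f', value)     # big-endian: file / network order
little = struct.pack('<f', value)  # little-endian: host order on x86/ARM

# The byte sequences are mirror images of each other...
assert big == little[::-1]

# ...but reading each with the matching byte order recovers the same
# number, which is all -fconvert=big-endian does for Fortran reads.
assert struct.unpack('>f', big)[0] == struct.unpack('<f', little)[0] == 1.5
```

So the “big” settings shouldn’t be a blocker on little-endian boxes; the conversion cost is a byte swap per value on I/O.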

The Code

The code, from a very small sample, looks very well written and well commented. As a “community project” I’d expect that, since “many hands” means many eyes needing to find clue…


chiefio@odroidxu4:~/MPAS/MPAS-Release-5.2/src/core_atmosphere$ vi physics/mpas_atmphys_functions.F

An example:

! Copyright (c) 2013,  Los Alamos National Security, LLC (LANS)
! and the University Corporation for Atmospheric Research (UCAR).
! Unless noted otherwise source code is licensed under the BSD license.
! Additional copyright and license information can be found in the LICENSE file
! distributed with this code, or at
 module mpas_atmphys_functions

 implicit none
 public:: gammln,gammp,wgamma,rslf,rsif


!NOTE: functions rslf and rsif are taken from module_mp_thompson temporarily for computing
!      the diagnostic relative humidity. These two functions will be removed from this module
!      when the Thompson cloud microphysics scheme will be restored to MPAS-Dev.
!      Laura D. Fowler ( / 2013-07-11.

[... skipping down -EMS ]

!     --- USES GAMMLN
      REAL, PARAMETER:: gEPS=3.E-7
      REAL, INTENT(IN):: A, X
!  (C) Copr. 1986-92 Numerical Recipes Software 2.02

[... skipping down -EMS ]

!     --- AS GLN.
!     --- USES GAMMLN
      REAL, PARAMETER:: gEPS=3.E-7
      REAL, INTENT(IN):: A, X
      INTEGER:: N
        IF(X.LT.0.) PRINT *, 'X < 0 IN GSER'
      DO 11 N=1,ITMAX
!  (C) Copr. 1986-92 Numerical Recipes Software 2.02

I jumped down to where they included a proprietary subroutine (Numerical Recipes, per that copyright comment), yet the file is under the BSD license. Go figure.
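Those fragments are the Numerical Recipes incomplete-gamma routines (GAMMLN, GSER, GAMMP), used here by the microphysics. For the curious, the series expansion GSER implements computes the lower incomplete gamma ratio P(a,x); here is my rough Python transliteration of the structure (my sketch, not the MPAS code, and like NR’s GSER it is only reliable for x < a + 1, where NR switches to a continued fraction):

```python
import math

def gammp_series(a, x, itmax=100, eps=3e-7):
    """Lower incomplete gamma ratio P(a, x) by series expansion.

    Mirrors the shape of Numerical Recipes' GSER: accumulate series
    terms until they drop below a relative tolerance, then scale by
    exp(-x + a*ln(x) - ln(Gamma(a))). Valid for x < a + 1.
    """
    if x < 0.0:
        raise ValueError('X < 0 IN GSER')
    if x == 0.0:
        return 0.0
    ap = a
    term = 1.0 / a
    total = term
    for _ in range(itmax):
        ap += 1.0
        term *= x / ap
        total += term
        if abs(term) < abs(total) * eps:
            break
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

# Sanity checks against closed forms:
#   P(1, x) = 1 - exp(-x)
#   P(2, x) = 1 - exp(-x) * (1 + x)
assert abs(gammp_series(1.0, 0.5) - (1.0 - math.exp(-0.5))) < 1e-6
assert abs(gammp_series(2.0, 1.0) - (1.0 - 2.0 * math.exp(-1.0))) < 1e-6
```

Those gamma functions feed the particle-size distributions in the cloud microphysics, which is why they turn up in a physics constants/functions module.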

Yet the style is easy to follow and the general purpose set out in comments. Nice.

So I’m going to wander through more of this code before I say much more about it. This is just the first blush and who knows what’s under the covers. What I can say with certainty is that open projects with many hands and eyes benefit greatly from the corrective efforts of the group. Nobody wants to explain their stupid human trick more than once to a Noob who spotted it ;-)

How to get it

They have a place where you can download a zip file for those of you not on *nix systems. Then there’s a place to fill in a sign-up for what you are doing with it. Nicely, you can skip the sign-up (since I’m just a looky-loo it’s a waste of everyone’s time at this point) and do a direct tarball download (for us *nix folks).

The direct download (both zip and tarball)

Directions and helpful info they want you to read, fully, before the download. (I’ll get to it IF I ever start to build / compile the thing ;-) Mostly it looks like help to push folks into using git, which is fine for folks who know git, or do systems work regularly; but not so fine if you just want to unpack a tarball and look it over…

The request for a “registration”:

The top level of the download tree, which includes links that have names like “sample imput files”, oh the joy!…

In Conclusion

I don’t know that this is the place I’ll be happy, but it sure is less musty and opaque than the alternatives to date. There’s also just a feeling like folks who think of scaling hexagonal grids have clue and like a tidy mind…

Time will show if first impressions stand up, but so far I’m impressed.



About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present “hot buttons” are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Earth Sciences, GCM, Global Warming General. Bookmark the permalink.

11 Responses to Liking the MPAS code Much More than Model II or ModelE

  1. E.M.Smith says:

    hmmm…. This says you get mpif90 when you install openmpi, and do a configure and make of it…

    (From a LinuxQuestions thread, “mpif90 missed”, 10-18-2010:)

    Q: i’v installed openmpi-1.4.3, now i want to compile a simple fortran program written with mpi by mpif90, but the error is “command not found”

    A: When you do:

    cd openmpi-1.4.3/ && ./configure && make && su -c make install

    .. you will get: /usr/local/bin/mpif90

    So how did you install openmpi on Debian?

    So one mystery semi-solved…

    Configure system. Add / build OpenMPI. Check mpif90 exists. Use it in Makefile.

    OK, got it…. (until the first bug bites… ;-)

  2. cdquarles says:

    Nice find. I’m not sure that I will do much with it other than, as ChiefIO says, just look at it.

  3. E.M.Smith says:

    It has MPI processing built in, but not coarray Fortran, near as I can tell:

    EMs-MacBook-Air:src chiefio$ grep coar */*.F
    EMs-MacBook-Air:src chiefio$ grep COAR */*.F
    EMs-MacBook-Air:src chiefio$ grep MPI */*.F | wc -l
    EMs-MacBook-Air:src chiefio$ grep MPI */*.F | head
    core_init_atmosphere/mpas_atmphys_utilities.F:!#ifdef _MPI
    core_init_atmosphere/mpas_init_atm_cases.F:               !     will have convex partitions, it's safer to just stop if multiple MPI tasks are
    core_init_atmosphere/mpas_init_atm_cases.F:                 call mpas_dmpar_global_abort('Please run the static_interp step using only a single MPI task.',  deferredAbort=.true.)
    core_landice/mpas_li_diagnostic_vars.F:! Should make this conditional to avoid unnecessary MPI comms.
    core_landice/mpas_li_time_integration_fe.F:      ! is requires 2 unnecessary MPI communications.
    framework/mpas_abort.F:   !> \brief Forces the exit of all processes in MPI_COMM_WORLD
    framework/mpas_abort.F:   !>  standard error and to the log.????.abort files, but MPI tasks will not
    framework/mpas_abort.F:#ifdef _MPI
    framework/mpas_abort.F:#ifndef NOMPIMOD
    framework/mpas_abort.F:#ifdef _MPI

    Which makes a lot of sense. It was mostly written a few years back, and the Fortran 2008 spec (with coarrays) was only released in 2010, so not many compilers supported them in 2012 or so…

    It might (or might not!) offer an ability to add some coarray processing to some routines as a further speed enhancement on some kinds of clusters.

    Near as I can tell, there is a coarray Fortran available on the R. Pi, though I have no idea if it implements actual distributed coarray processing (often, for new features like this, the syntax is allowed but it gets mapped onto older functional ways… )

    MPI on the Pi was slower than direct execution in my tests. Perhaps it needs improving on the Pi, or perhaps my tests were not “suited” to MPI methods. Some more testing required… BUT, it may well be that even IF the Pi is ill suited to MPI, there are many other boards that might be better suited. The Odroids, for example. Things with GigE and more attention to I/O. (Network / communications overhead hurts message passing based distributed processing) or perhaps a more distributed friendly implementation.

    In any case, having it already set up for parallel processing is very very nice. Complicates understanding it a little, but means that running it across a cluster (whatever the efficiency) is already baked in.

  4. E.M.Smith says:

    I think I found where it starts…

    One of the odd bits of many big programs is just trying to find where, in all that source code, it all starts. Well, in the source directory, in the driver directory, is this little program called mpas.F that looks like the top of it all:

    EMs-MacBook-Air:src chiefio$ cat driver/mpas.F 
    ! Copyright (c) 2013,  Los Alamos National Security, LLC (LANS)
    ! and the University Corporation for Atmospheric Research (UCAR).
    ! Unless noted otherwise source code is licensed under the BSD license.
    ! Additional copyright and license information can be found in the LICENSE file
    ! distributed with this code, or at
    program mpas
       use mpas_subdriver
       implicit none
       call mpas_init()
       call mpas_run() 
       call mpas_finalize()
    end program mpas

    Can’t get more “meat and potatoes” than “initialize, run, finalize/stop” ;-)

    It does call a mpas_subdriver basket of actions, so digging into that will expose the actual activity. It’s in the same directory and isn’t very long.

    EMs-MacBook-Air:driver chiefio$ ls
    Makefile		mpas.F			mpas_subdriver.F
    EMs-MacBook-Air:driver chiefio$ wc -l mpas_subdriver.F 
         448 mpas_subdriver.F

    So “only” 448 lines (much of it, thankfully, comments! Yay MPAS Programmers!!)

    Mostly just breaking out the steps of air vs ocean vs…

    Here’s a snip of it:

    module mpas_subdriver
       use mpas_framework
       use mpas_kind_types
       use mpas_abort, only : mpas_dmpar_global_abort
       use mpas_derived_types, only: dm_info, domain_type
       use atm_core_interface
    #ifdef CORE_CICE
       use cice_core_interface
       use init_atm_core_interface
    #ifdef CORE_LANDICE
       use li_core_interface
    #ifdef CORE_OCEAN
       use ocn_core_interface
    #ifdef CORE_SW
       use sw_core_interface
    #ifdef CORE_TEST
       use test_core_interface
          call mpas_allocate_domain(domain_ptr)
          ! Initialize infrastructure
          call mpas_framework_init_phase1(domain_ptr % dminfo)
          call atm_setup_core(corelist)
          call atm_setup_domain(domain_ptr)
    #ifdef CORE_CICE
          call cice_setup_core(corelist)
          call cice_setup_domain(domain_ptr)
          call init_atm_setup_core(corelist)
          call init_atm_setup_domain(domain_ptr)
    #ifdef CORE_LANDICE
          call li_setup_core(corelist)
          call li_setup_domain(domain_ptr)
    #ifdef CORE_OCEAN
          call ocn_setup_core(corelist)
          call ocn_setup_domain(domain_ptr)
    #ifdef CORE_SW
          call sw_setup_core(corelist)
          call sw_setup_domain(domain_ptr)
    #ifdef CORE_TEST
          call test_setup_core(corelist)
          call test_setup_domain(domain_ptr)

    So some compile-time flags get checked to decide what parts to build and run (so where do things like CORE_OCEAN get set? Presumably from that CORE= variable in the top-level Makefile…), then it includes that code if needed and calls / runs the bits as needed. Also sets up what looks like a compute cluster (at least, that’s my guess at the moment as to what a “domain” is in this context… I hate the way computing reuses some words for dozens of different things, and “domain” is one of them… so “some digging required” to figure out exactly what the MPAS folks mean by “domain”…)
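    The pattern in that snip is a classic compile-time dispatch: each core provides a setup_core / setup_domain pair, and the #ifdef’s decide which pair gets linked in. In a dynamic language you’d write the same thing as a registry; a hypothetical Python analogy (every name here is mine, for illustration, not from the MPAS source):

```python
# Hypothetical sketch of the core-selection pattern in mpas_subdriver:
# each "core" supplies a pair of setup routines, and the driver runs
# whichever one was selected (at build time via CORE=..., in MPAS's
# case). Names and data shapes here are illustrative only.

def atm_setup_core(corelist):
    corelist.append('atmosphere')

def atm_setup_domain(domain):
    domain['core'] = 'atmosphere'

def ocn_setup_core(corelist):
    corelist.append('ocean')

def ocn_setup_domain(domain):
    domain['core'] = 'ocean'

SETUP = {
    'atmosphere': (atm_setup_core, atm_setup_domain),
    'ocean':      (ocn_setup_core, ocn_setup_domain),
}

def init(selected_core):
    corelist, domain = [], {}
    setup_core, setup_domain = SETUP[selected_core]
    setup_core(corelist)
    setup_domain(domain)
    return corelist, domain

corelist, domain = init('ocean')
assert corelist == ['ocean'] and domain['core'] == 'ocean'
```

    The difference is that MPAS resolves the choice with the preprocessor, so the unselected cores aren’t even in the binary, where a registry like this resolves it at run time.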

    In general, it looks like starting from here you can decompose the “tree” of processing actions and function names / meanings. From the top level “just do it” to the “do ocean and air and…” down into what the particular parts of each actually do.

    I again note in passing that even this model seems very interested in land surface (dirt), the oceans (big water), and the atmosphere… but not very interested in water vapor as a driver. It is computed along the way in what (so far at least, at a VERY early stage of reading the code) looks like an effect more than a driver. However, as these models are many long cycles, what’s an effect in one loop becomes an initial state in the next, so can become a driving element. One must reach the level of understanding the carry forward / loop processing before dismissing an ‘effect’ as not being a ‘driver’…

    Yet even the high level view from the README file has that focus:

    EMs-MacBook-Air:MPAS-Release-5.2 chiefio$ cat 
    The Model for Prediction Across Scales (MPAS) is a collaborative project for
    developing atmosphere, ocean, and other earth-system simulation components for
    use in climate, regional climate, and weather studies. The primary development
    partners are the climate modeling group at Los Alamos National Laboratory
    (COSIM) and the National Center for Atmospheric Research. Both primary
    partners are responsible for the MPAS framework, operators, and tools common to
    the applications; LANL has primary responsibility for the ocean model, and NCAR
    has primary responsibility for the atmospheric model.
    The MPAS framework facilitates the rapid development and prototyping of models
    by providing infrastructure typically required by model developers, including
    high-level data types, communication routines, and I/O routines. By using MPAS,
    developers can leverage pre-existing code and focus more on development of
    their model.
    This README is provided as a brief introduction to the MPAS framework. It does
    not provide details about each specific model, nor does it provide building
    For information about building and running each core, please refer to each
    core's user's guide, which can be found at the following web sites:
    [MPAS-Land Ice](

    Atmosphere, Land Ice, Ocean. Each a distinct “core”…

    Where is the integrated water cycle as driver?

    One hopes it is “hidden in the details”… and not just MIA.

    Now, from my POV, there ought to be a Solar Module that accounts for solar variation over various time scales, and a global season / tilt / precession module to vary the Earth’s mode of interaction with that solar power source, then a “Water Cycle” major wrapper that generates the water state in the oceans, atmosphere, and at the surface of the dirt. At that point you can run it forward for a cycle and get the various mass flows, cloud formation, precipitation events, ice melts or builds, and finally implicit temperatures; then do the net atmospheric radiative effects, and repeat the “water cycle” again. After a full day, increment the calendar. After a week or a month, update the solar state and seasonal tilt effects.
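    The loop ordering I have in mind looks roughly like this as a skeleton (purely my sketch of the proposal above; every routine is a placeholder stub, not anything in MPAS):

```python
# Skeleton of the proposed driver loop: slow forcings (solar, orbital
# geometry) updated on a weekly cadence, a water-cycle pass every fast
# step, radiation computed as a net effect afterward. All stubs.
def step_water_cycle(state):
    state['steps'] += 1          # mass flows, clouds, precipitation...

def step_radiation(state):
    pass                         # net atmospheric radiative effects

def update_solar(state):
    state['solar_updates'] += 1  # solar variation over long scales

def update_tilt_season(state):
    pass                         # season / tilt / precession geometry

def run(days, steps_per_day=24):
    state = {'steps': 0, 'solar_updates': 0}
    for day in range(days):
        if day % 7 == 0:                 # weekly slow-forcing update
            update_solar(state)
            update_tilt_season(state)
        for _ in range(steps_per_day):   # daily fast loop
            step_water_cycle(state)
            step_radiation(state)
    return state

state = run(days=14)
assert state['steps'] == 14 * 24
assert state['solar_updates'] == 2   # updated on day 0 and day 7
```

    The point of the ordering is that water state leads and radiation follows as a net effect each pass, rather than radiation driving with water as an afterthought.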

    It would also be nice to see a lunar tidal module driving ocean effects, but I guess it’s a bit much to ask…

    Oh Well. It’s MUCH better than the Model II and Model E stuff, IMHO. It just has that well structured well thought out organized feel to it. Sure, it’s still in the mode of “kit of parts” where you can choose to have an ocean, or not. Yeah, it’s still air focused water as afterthought. But it’s just nicely built. It will be interesting to try to compile it and make it go in a mini-cluster ;-)

    Hey, if you are not willing to abuse hardware by loading it to 100% with code designed for a 100 x larger machine, you ought not to be playing with HPC and clusters anyway! After all, most of the problems left for “supercomputers” to solve are by definition too big for the available hardware…

  5. E.M.Smith says:

    Hmmm…. From the INSTALL top level file that tells how to compile it… it looks like both MPI and OpenMP (shared memory) are used. OpenMP inside one system of cores, MPI between systems…

    Installing MPAS
    For general information on how to install MPAS, see
    Additional notes on building MPAS on specific architectures are summarized here.
    gfortran-clang: Compiling MPAS on MacOSX (10.11 El Capitan - 10.12 Sierra)
    MPAS should compile out of the box on MacOSX with the standard (OS) clang compiler
    and the gfortran compiler. The gfortran compiler can be installed using homebrew
    (, or using pre-compiled binaries from the MacOSX HPC website
    (, or it can be compiled by the user from the GNU sources.
    (1) MPAS cannot be compiled with gfortran-clang if GEN_F90=true.
    (2) The standard clang compiler does not support OpenMP. Users wanting to compile MPAS
    with OpenMP support on MacOSX will have to install the LLVM clang compiler,
    bluegene: Compiling MPAS on IBM Bluegene using the xl compilers
    All MPAS cores except the ocean compile on IBM Bluegene using the xl compilers. The ocean
    core currently does not work on IBM Bluegene. Known limitations: OPENMP must be disabled
    (OPENMP=false) for compiling, since the xl compilers do not support nested OpenMP directives.

    From the wiki:

    OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer.

    An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and Message Passing Interface (MPI), such that OpenMP is used for parallelism within a (multi-core) node while MPI is used for parallelism between nodes. There have also been efforts to run OpenMP on software distributed shared memory systems, to translate OpenMP into MPI and to extend OpenMP for non-shared memory systems.

    So not only parallel, but well thought out multiple layer parallel… though that makes compiler choice and install process more tricky…
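    The shape of that hybrid layout is easy to see in miniature: an outer decomposition of the domain across “ranks” (MPI processes, one or more per node) and an inner split of each rank’s share across threads (OpenMP within a node). A Python analogy, just to show the two-level decomposition (real MPI/OpenMP do this across machines and cores; this only mimics the structure):

```python
# Two-level work decomposition, in the spirit of hybrid MPI+OpenMP:
# outer "ranks" stand in for MPI processes, the inner pool stands in
# for OpenMP threads within one node. Purely illustrative.
from concurrent.futures import ThreadPoolExecutor

def node_work(rank, cells, threads_per_node=4):
    # inner level: split this rank's cells across a thread pool
    chunks = [cells[i::threads_per_node] for i in range(threads_per_node)]
    with ThreadPoolExecutor(max_workers=threads_per_node) as pool:
        partials = list(pool.map(lambda c: sum(x * x for x in c), chunks))
    return sum(partials)

def hybrid_sum(n_cells, n_ranks=2):
    cells = list(range(n_cells))
    # outer level: block-partition the domain across "ranks"
    blocks = [cells[r * n_cells // n_ranks:(r + 1) * n_cells // n_ranks]
              for r in range(n_ranks)]
    return sum(node_work(r, b) for r, b in enumerate(blocks))

# Same answer as doing it all in one place:
assert hybrid_sum(100) == sum(x * x for x in range(100))
```

    The win is that the within-node level shares memory (cheap), so the expensive message passing only happens at the between-node boundaries.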

    I think I need to take a side road wander into CLANG, and Devuan / Debian support for OpenMP and MPI…

  6. E.M.Smith says:

    Hey Larry! It has some sublimation in it!

    EMs-MacBook-Air:MPAS-Release-5.2 chiefio$ grep ublim src/*/*.F src/*/*/*.F
    src/core_init_atmosphere/mpas_atmphys_constants.F: real(kind=RKIND),parameter:: xls = xlv + xlf !latent heat of sublimation [J/kg]
    src/core_atmosphere/physics/mpas_atmphys_constants.F: real(kind=RKIND),parameter:: xls = xlv + xlf !latent heat of sublimation
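    That xls = xlv + xlf line is the physical identity: latent heat of sublimation equals vaporization plus fusion, since ice-to-vapor is energetically the same as ice-to-liquid-to-vapor. With typical textbook values (the exact MPAS constants may differ slightly):

```python
# Latent heats, typical values near 0 C, in J/kg. The figures are
# textbook approximations, not necessarily the exact MPAS constants.
xlv = 2.50e6     # vaporization (liquid -> vapor)
xlf = 3.34e5     # fusion (ice -> liquid)
xls = xlv + xlf  # sublimation (ice -> vapor), as in mpas_atmphys_constants.F

# ~2.834e6 J/kg: sublimating ice takes ~13% more energy than
# evaporating the same mass of liquid water.
assert abs(xls - 2.834e6) < 1.0
```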

  7. Larry Ledwick says:

    Cool good to see they did not miss that.

  8. jim2 says:

    Pretty nice looking code for a change :)

  9. E.M.Smith says:

    I’ve looked through the convection code. It’s pretty detailed. Looks generally well thought out. My only concern about it would be the degree of “choice” in it. There are lots of knobs to set and a couple of major modes of convective activity. Who knows how to choose the one that’s like reality.

    It’s down in:


    in a program named:


    Has a Tiedtke and a New Tiedtke choice, each with lots of parameters. One “Laura D. Fowler” does good work and writes nice comments. Wish there were more like her writing code.

    ! * updated the call to the Tiedtke parameterization of convection to WRF version 3.8.1. This option is
    !   triggered using the option cu_tiedtke.
    !   Laura D. Fowler ( / 2016-08-18.
    ! * added the call to the "new" Tiedtke parameterization of convection from WRF version 3.8.1. This option is
    !   triggered using the option cu_ntiedtke.
    !   Laura D. Fowler ( / 2016-09-20.
    ! * for the kain_fritsch parameterization of convection, change the definition of dx_p to match that used in the
    !   Grell-Freitas and "new Tiedtke" parameterization.
    !   Laura D. Fowler ( / 2016-10-18.

    So when it calls one of these, how many parameters get passed?

    Um “lots”… It will take a while to figure out what all these are, what they do, what the control knobs are set to, and how that compares to reality and…

    call cu_tiedtke( &
                 pcps        = pres_hyd_p    , p8w       = pres2_hyd_p   ,               &
                 znu         = znu_hyd_p     , t3d       = t_p           ,               &
                 dt          = dt_dyn        , itimestep = initflag      ,               &
                 stepcu      = n_cu          , raincv    = raincv_p      ,               &
                 pratec      = pratec_p      , qfx       = qfx_p         ,               &
                 u3d         = u_p           , v3d       = v_p           ,               &
                 w           = w_p           , qv3d      = qv_p          ,               &
                 qc3d        = qc_p          , qi3d      = qi_p          ,               &
                 pi3d        = pi_p          , rho3d     = rho_p         ,               &
                 qvften      = rqvdynten_p   , qvpblten  = rqvdynblten_p ,               &
                 dz8w        = dz_p          , xland     = xland_p       ,               &
                 cu_act_flag = cu_act_flag   , f_qv      = f_qv          ,               &
                 f_qc        = f_qc          , f_qr      = f_qr          ,               &
                 f_qi        = f_qi          , f_qs      = f_qs          ,               &
                 rthcuten    = rthcuten_p    , rqvcuten  = rqvcuten_p    ,               &
                 rqccuten    = rqccuten_p    , rqicuten  = rqicuten_p    ,               &
                 rucuten     = rucuten_p     , rvcuten   = rvcuten_p     ,               &
                 ids = ids , ide = ide , jds = jds , jde = jde , kds = kds , kde = kde , &
                 ims = ims , ime = ime , jms = jms , jme = jme , kms = kds , kme = kme , &
                 its = its , ite = ite , jts = jts , jte = jte , kts = kts , kte = kte   &

    I don’t know why, but there’s just something about the way this code is all structured and written that leads me to think they have been careful to “get it right”. My only uneasy spot is over the question of parameter setting. I could easily see setting convection too low, water vapor too low, and then juicing the CO2 sensitivity to make up for it on the hindcasting.

    I may just proceed to seeing if this thing can be compiled and / or run on “small iron” and then go looking for parameter fudge. I’m certainly not going to find much bogus in the way the code is written and structured. It’s just very well done.

    I do wonder a bit about the number of nested calls and what that might do to efficiency, OTOH, that’s an “old school” worry and it looks like toward the bottom layers they bust out in multiple threads, so likely the call overhead isn’t adding all that much, really. It does help clarity of understanding and I’d hate to try to unwind and inline any significant percentage of the calls and still have it at all something you could understand…
